* [gentoo-commits] proj/linux-patches:5.3 commit in: /
@ 2019-09-16 11:55 Mike Pagano
From: Mike Pagano @ 2019-09-16 11:55 UTC
To: gentoo-commits
commit: 325122d101a2a83f214527d3a0fc62a61e0966de
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Sep 16 11:54:30 2019 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Sep 16 11:54:30 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=325122d1
Kernel patch enables gcc >= v9.1 optimizations for additional CPUs.
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
5012_enable-cpu-optimizations-for-gcc91.patch | 632 ++++++++++++++++++++++++++
2 files changed, 636 insertions(+)
diff --git a/0000_README b/0000_README
index f86fe5e..4403e5a 100644
--- a/0000_README
+++ b/0000_README
@@ -74,3 +74,7 @@ Desc: Kernel patch enables gcc >= v4.13 optimizations for additional CPUs.
Patch: 5011_enable-cpu-optimizations-for-gcc8.patch
From: https://github.com/graysky2/kernel_gcc_patch/
Desc: Kernel patch for >= gccv8 enables kernel >= v4.13 optimizations for additional CPUs.
+
+Patch: 5012_enable-cpu-optimizations-for-gcc91.patch
+From: https://github.com/graysky2/kernel_gcc_patch/
+Desc: Kernel patch enables gcc >= v9.1 optimizations for additional CPUs.
diff --git a/5012_enable-cpu-optimizations-for-gcc91.patch b/5012_enable-cpu-optimizations-for-gcc91.patch
new file mode 100644
index 0000000..dffd36d
--- /dev/null
+++ b/5012_enable-cpu-optimizations-for-gcc91.patch
@@ -0,0 +1,632 @@
+WARNING
+This patch works with gcc versions 9.1+ and with kernel versions 4.13+. It
+should NOT be applied when compiling with older versions of gcc, because gcc
+4.9 renamed several of the -march flag values on which this patch relies.[1]
+
+For older versions of gcc, use the older version of this patch hosted in the
+same GitHub repository.
+
+FEATURES
+This patch adds additional CPU options to the Linux kernel accessible under:
+ Processor type and features --->
+ Processor family --->
+
+The expanded microarchitectures include:
+* AMD Improved K8-family
+* AMD K10-family
+* AMD Family 10h (Barcelona)
+* AMD Family 14h (Bobcat)
+* AMD Family 16h (Jaguar)
+* AMD Family 15h (Bulldozer)
+* AMD Family 15h (Piledriver)
+* AMD Family 15h (Steamroller)
+* AMD Family 15h (Excavator)
+* AMD Family 17h (Zen)
+* AMD Family 17h (Zen 2)
+* Intel Silvermont low-power processors
+* Intel Goldmont low-power processors (Apollo Lake and Denverton)
+* Intel Goldmont Plus low-power processors (Gemini Lake)
+* Intel 1st Gen Core i3/i5/i7 (Nehalem)
+* Intel 1.5 Gen Core i3/i5/i7 (Westmere)
+* Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
+* Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
+* Intel 4th Gen Core i3/i5/i7 (Haswell)
+* Intel 5th Gen Core i3/i5/i7 (Broadwell)
+* Intel 6th Gen Core i3/i5/i7 (Skylake)
+* Intel 6th Gen Core i7/i9 (Skylake X)
+* Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
+* Intel 10th Gen Core i7/i9 (Ice Lake)
+* Intel Xeon (Cascade Lake)
+
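+For illustration, each family maps to a CONFIG_ symbol in the kernel
+configuration; selecting "AMD Zen 2", for instance, should leave the following
+in the generated .config (a quick check, assuming the tree has been configured):
+
+    grep '^CONFIG_MZEN2=' .config
+    CONFIG_MZEN2=y
+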
+It also offers the 'native' option, which "selects the CPU
+to generate code for at compilation time by determining the processor type of
+the compiling machine. Using -march=native enables all instruction subsets
+supported by the local machine and will produce code optimized for the local
+machine under the constraints of the selected instruction set."[3]
+
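+For example, one way to see which CPU -march=native resolves to on the build
+host is to query gcc directly (this check is independent of the patch itself):
+
+    gcc -march=native -Q --help=target | grep -- '-march='
+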
+MINOR NOTES
+This patch also changes 'atom' to 'bonnell' in accordance with the gcc v4.9
+changes. Note that upstream is using the deprecated 'march=atom' flag when I
+believe it should use the newer 'march=bonnell' flag for atom processors.[2]
+
+It is not recommended to compile on Atom CPUs with the 'native' option.[4] The
+recommendation is to use the 'atom' option instead.
+
+BENEFITS
+Small but real speed increases are measurable using a make-based benchmark
+comparing a generic kernel to one built with one of the respective microarchs.
+
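+One simple way to run such a comparison (a sketch; the exact methodology behind
+the linked results may differ) is to boot each kernel in turn and time an
+identical workload, such as a defconfig kernel build:
+
+    make mrproper && make defconfig
+    time make -j"$(nproc)" bzImage
+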
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=4.13
+gcc version >=9.1
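+
+Both can be verified on the build host before applying, for example:
+
+    gcc --version | head -n1
+    uname -r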
+
+ACKNOWLEDGMENTS
+This patch builds on the seminal work by Jeroen.[5]
+
+REFERENCES
+1. https://gcc.gnu.org/gcc-4.9/changes.html
+2. https://bugzilla.kernel.org/show_bug.cgi?id=77461
+3. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html
+4. https://github.com/graysky2/kernel_gcc_patch/issues/15
+5. http://www.linuxforge.net/docs/linux/linux-gcc.php
+
+--- a/arch/x86/include/asm/module.h 2019-08-16 04:11:12.000000000 -0400
++++ b/arch/x86/include/asm/module.h 2019-08-22 15:56:23.988050322 -0400
+@@ -25,6 +25,36 @@ struct mod_arch_specific {
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MGOLDMONT
++#define MODULE_PROC_FAMILY "GOLDMONT "
++#elif defined CONFIG_MGOLDMONTPLUS
++#define MODULE_PROC_FAMILY "GOLDMONTPLUS "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
++#elif defined CONFIG_MSKYLAKE
++#define MODULE_PROC_FAMILY "SKYLAKE "
++#elif defined CONFIG_MSKYLAKEX
++#define MODULE_PROC_FAMILY "SKYLAKEX "
++#elif defined CONFIG_MCANNONLAKE
++#define MODULE_PROC_FAMILY "CANNONLAKE "
++#elif defined CONFIG_MICELAKE
++#define MODULE_PROC_FAMILY "ICELAKE "
++#elif defined CONFIG_MCASCADELAKE
++#define MODULE_PROC_FAMILY "CASCADELAKE "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -43,6 +73,28 @@ struct mod_arch_specific {
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MSTEAMROLLER
++#define MODULE_PROC_FAMILY "STEAMROLLER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
++#elif defined CONFIG_MEXCAVATOR
++#define MODULE_PROC_FAMILY "EXCAVATOR "
++#elif defined CONFIG_MZEN
++#define MODULE_PROC_FAMILY "ZEN "
++#elif defined CONFIG_MZEN2
++#define MODULE_PROC_FAMILY "ZEN2 "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+--- a/arch/x86/Kconfig.cpu 2019-08-16 04:11:12.000000000 -0400
++++ b/arch/x86/Kconfig.cpu 2019-08-22 15:59:31.596946943 -0400
+@@ -116,6 +116,7 @@ config MPENTIUMM
+ config MPENTIUM4
+ bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
+ depends on X86_32
++ select X86_P6_NOP
+ ---help---
+ Select this for Intel Pentium 4 chips. This includes the
+ Pentium 4, Pentium D, P4-based Celeron and Xeon, and
+@@ -148,9 +149,8 @@ config MPENTIUM4
+ -Paxville
+ -Dempsey
+
+-
+ config MK6
+- bool "K6/K6-II/K6-III"
++ bool "AMD K6/K6-II/K6-III"
+ depends on X86_32
+ ---help---
+ Select this for an AMD K6-family processor. Enables use of
+@@ -158,7 +158,7 @@ config MK6
+ flags to GCC.
+
+ config MK7
+- bool "Athlon/Duron/K7"
++ bool "AMD Athlon/Duron/K7"
+ depends on X86_32
+ ---help---
+ Select this for an AMD Athlon K7-family processor. Enables use of
+@@ -166,12 +166,90 @@ config MK7
+ flags to GCC.
+
+ config MK8
+- bool "Opteron/Athlon64/Hammer/K8"
++ bool "AMD Opteron/Athlon64/Hammer/K8"
+ ---help---
+ Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ Enables use of some extended instructions, and passes appropriate
+ optimization flags to GCC.
+
++config MK8SSE3
++ bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++ ---help---
++ Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MK10
++ bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++ ---help---
++ Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++ Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MBARCELONA
++ bool "AMD Barcelona"
++ ---help---
++ Select this for AMD Family 10h Barcelona processors.
++
++ Enables -march=barcelona
++
++config MBOBCAT
++ bool "AMD Bobcat"
++ ---help---
++ Select this for AMD Family 14h Bobcat processors.
++
++ Enables -march=btver1
++
++config MJAGUAR
++ bool "AMD Jaguar"
++ ---help---
++ Select this for AMD Family 16h Jaguar processors.
++
++ Enables -march=btver2
++
++config MBULLDOZER
++ bool "AMD Bulldozer"
++ ---help---
++ Select this for AMD Family 15h Bulldozer processors.
++
++ Enables -march=bdver1
++
++config MPILEDRIVER
++ bool "AMD Piledriver"
++ ---help---
++ Select this for AMD Family 15h Piledriver processors.
++
++ Enables -march=bdver2
++
++config MSTEAMROLLER
++ bool "AMD Steamroller"
++ ---help---
++ Select this for AMD Family 15h Steamroller processors.
++
++ Enables -march=bdver3
++
++config MEXCAVATOR
++ bool "AMD Excavator"
++ ---help---
++ Select this for AMD Family 15h Excavator processors.
++
++ Enables -march=bdver4
++
++config MZEN
++ bool "AMD Zen"
++ ---help---
++ Select this for AMD Family 17h Zen processors.
++
++ Enables -march=znver1
++
++config MZEN2
++ bool "AMD Zen 2"
++ ---help---
++ Select this for AMD Family 17h Zen 2 processors.
++
++ Enables -march=znver2
++
+ config MCRUSOE
+ bool "Crusoe"
+ depends on X86_32
+@@ -253,6 +331,7 @@ config MVIAC7
+
+ config MPSC
+ bool "Intel P4 / older Netburst based Xeon"
++ select X86_P6_NOP
+ depends on X86_64
+ ---help---
+ Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
+@@ -262,8 +341,19 @@ config MPSC
+ using the cpu family field
+ in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+
++config MATOM
++ bool "Intel Atom"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Atom platform. Intel Atom CPUs have an
++ in-order pipelining architecture and thus can benefit from
++ accordingly optimized code. Use a recent GCC with specific Atom
++ support in order to fully benefit from selecting this option.
++
+ config MCORE2
+- bool "Core 2/newer Xeon"
++ bool "Intel Core 2"
++ select X86_P6_NOP
+ ---help---
+
+ Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -271,14 +361,133 @@ config MCORE2
+ family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ (not a typo)
+
+-config MATOM
+- bool "Intel Atom"
++ Enables -march=core2
++
++config MNEHALEM
++ bool "Intel Nehalem"
++ select X86_P6_NOP
+ ---help---
+
+- Select this for the Intel Atom platform. Intel Atom CPUs have an
+- in-order pipelining architecture and thus can benefit from
+- accordingly optimized code. Use a recent GCC with specific Atom
+- support in order to fully benefit from selecting this option.
++ Select this for 1st Gen Core processors in the Nehalem family.
++
++ Enables -march=nehalem
++
++config MWESTMERE
++ bool "Intel Westmere"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Westmere formerly Nehalem-C family.
++
++ Enables -march=westmere
++
++config MSILVERMONT
++ bool "Intel Silvermont"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Silvermont platform.
++
++ Enables -march=silvermont
++
++config MGOLDMONT
++ bool "Intel Goldmont"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Goldmont platform including Apollo Lake and Denverton.
++
++ Enables -march=goldmont
++
++config MGOLDMONTPLUS
++ bool "Intel Goldmont Plus"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Goldmont Plus platform including Gemini Lake.
++
++ Enables -march=goldmont-plus
++
++config MSANDYBRIDGE
++ bool "Intel Sandy Bridge"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++ Enables -march=sandybridge
++
++config MIVYBRIDGE
++ bool "Intel Ivy Bridge"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++ Enables -march=ivybridge
++
++config MHASWELL
++ bool "Intel Haswell"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 4th Gen Core processors in the Haswell family.
++
++ Enables -march=haswell
++
++config MBROADWELL
++ bool "Intel Broadwell"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 5th Gen Core processors in the Broadwell family.
++
++ Enables -march=broadwell
++
++config MSKYLAKE
++ bool "Intel Skylake"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 6th Gen Core processors in the Skylake family.
++
++ Enables -march=skylake
++
++config MSKYLAKEX
++ bool "Intel Skylake X"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 6th Gen Core processors in the Skylake X family.
++
++ Enables -march=skylake-avx512
++
++config MCANNONLAKE
++ bool "Intel Cannon Lake"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 8th Gen Core processors
++
++ Enables -march=cannonlake
++
++config MICELAKE
++ bool "Intel Ice Lake"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 10th Gen Core processors in the Ice Lake family.
++
++ Enables -march=icelake-client
++
++config MCASCADELAKE
++ bool "Intel Cascade Lake"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for Xeon processors in the Cascade Lake family.
++
++ Enables -march=cascadelake
+
+ config GENERIC_CPU
+ bool "Generic-x86-64"
+@@ -287,6 +496,19 @@ config GENERIC_CPU
+ Generic x86-64 CPU.
+ Run equally well on all x86-64 CPUs.
+
++config MNATIVE
++ bool "Native optimizations autodetected by GCC"
++ ---help---
++
++ GCC 4.2 and above support -march=native, which automatically detects
++ the optimum settings to use based on your processor. -march=native
++ also detects and applies additional settings beyond -march specific
++ to your CPU, (eg. -msse4). Unless you have a specific reason not to
++ (e.g. distcc cross-compiling), you should probably be using
++ -march=native rather than anything listed below.
++
++ Enables -march=native
++
+ endchoice
+
+ config X86_GENERIC
+@@ -311,7 +533,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ int
+ default "7" if MPENTIUM4 || MPSC
+- default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++ default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
+ default "4" if MELAN || M486 || MGEODEGX1
+ default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+
+@@ -329,35 +551,36 @@ config X86_ALIGNMENT_16
+
+ config X86_INTEL_USERCOPY
+ def_bool y
+- depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE
+
+ config X86_USE_PPRO_CHECKSUM
+ def_bool y
+- depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MATOM || MNATIVE
+
+ config X86_USE_3DNOW
+ def_bool y
+ depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
+
+-#
+-# P6_NOPs are a relatively minor optimization that require a family >=
+-# 6 processor, except that it is broken on certain VIA chips.
+-# Furthermore, AMD chips prefer a totally different sequence of NOPs
+-# (which work on all CPUs). In addition, it looks like Virtual PC
+-# does not understand them.
+-#
+-# As a result, disallow these if we're not compiling for X86_64 (these
+-# NOPs do work on all x86-64 capable chips); the list of processors in
+-# the right-hand clause are the cores that benefit from this optimization.
+-#
+ config X86_P6_NOP
+- def_bool y
+- depends on X86_64
+- depends on (MCORE2 || MPENTIUM4 || MPSC)
++ default n
++ bool "Support for P6_NOPs on Intel chips"
++ depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE)
++ ---help---
++ P6_NOPs are a relatively minor optimization that require a family >=
++ 6 processor, except that it is broken on certain VIA chips.
++ Furthermore, AMD chips prefer a totally different sequence of NOPs
++ (which work on all CPUs). In addition, it looks like Virtual PC
++ does not understand them.
++
++ As a result, disallow these if we're not compiling for X86_64 (these
++ NOPs do work on all x86-64 capable chips); the list of processors in
++ the right-hand clause are the cores that benefit from this optimization.
++
++ Say Y if you have Intel CPU newer than Pentium Pro, N otherwise.
+
+ config X86_TSC
+ def_bool y
+- depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++ depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE || MATOM) || X86_64
+
+ config X86_CMPXCHG64
+ def_bool y
+@@ -367,7 +590,7 @@ config X86_CMPXCHG64
+ # generates cmov.
+ config X86_CMOV
+ def_bool y
+- depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++ depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
+
+ config X86_MINIMUM_CPU_FAMILY
+ int
+--- a/arch/x86/Makefile 2019-08-16 04:11:12.000000000 -0400
++++ b/arch/x86/Makefile 2019-08-22 16:01:22.559789904 -0400
+@@ -118,13 +118,53 @@ else
+ KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
+
+ # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++ cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++ cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
++ cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++ cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++ cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++ cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
++ cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++ cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++ cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
++ cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
++ cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
++ cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2)
+ cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+
+ cflags-$(CONFIG_MCORE2) += \
+- $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+- cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+- $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++ $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
++ cflags-$(CONFIG_MNEHALEM) += \
++ $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
++ cflags-$(CONFIG_MWESTMERE) += \
++ $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
++ cflags-$(CONFIG_MSILVERMONT) += \
++ $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
++ cflags-$(CONFIG_MGOLDMONT) += \
++ $(call cc-option,-march=goldmont,$(call cc-option,-mtune=goldmont))
++ cflags-$(CONFIG_MGOLDMONTPLUS) += \
++ $(call cc-option,-march=goldmont-plus,$(call cc-option,-mtune=goldmont-plus))
++ cflags-$(CONFIG_MSANDYBRIDGE) += \
++ $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
++ cflags-$(CONFIG_MIVYBRIDGE) += \
++ $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
++ cflags-$(CONFIG_MHASWELL) += \
++ $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
++ cflags-$(CONFIG_MBROADWELL) += \
++ $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
++ cflags-$(CONFIG_MSKYLAKE) += \
++ $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake))
++ cflags-$(CONFIG_MSKYLAKEX) += \
++ $(call cc-option,-march=skylake-avx512,$(call cc-option,-mtune=skylake-avx512))
++ cflags-$(CONFIG_MCANNONLAKE) += \
++ $(call cc-option,-march=cannonlake,$(call cc-option,-mtune=cannonlake))
++ cflags-$(CONFIG_MICELAKE) += \
++ $(call cc-option,-march=icelake-client,$(call cc-option,-mtune=icelake-client))
++ cflags-$(CONFIG_MCASCADELAKE) += \
++ $(call cc-option,-march=cascadelake,$(call cc-option,-mtune=cascadelake))
++ cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
++ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+ cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+ KBUILD_CFLAGS += $(cflags-y)
+
+--- a/arch/x86/Makefile_32.cpu 2019-08-16 04:11:12.000000000 -0400
++++ b/arch/x86/Makefile_32.cpu 2019-08-22 16:02:14.687701216 -0400
+@@ -23,7 +23,19 @@ cflags-$(CONFIG_MK6) += -march=k6
+ # Please note, that patches that add -march=athlon-xp and friends are pointless.
+ # They make zero difference whatsosever to performance at this time.
+ cflags-$(CONFIG_MK7) += -march=athlon
++cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8,-march=athlon)
++cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-march=athlon)
++cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10,-march=athlon)
++cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona,-march=athlon)
++cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2,-march=athlon)
++cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1,-march=athlon)
++cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2,-march=athlon)
++cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3,-march=athlon)
++cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4,-march=athlon)
++cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1,-march=athlon)
++cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2,-march=athlon)
+ cflags-$(CONFIG_MCRUSOE) += -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MEFFICEON) += -march=i686 $(call tune,pentium3) -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MWINCHIPC6) += $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -32,8 +44,22 @@ cflags-$(CONFIG_MCYRIXIII) += $(call cc-
+ cflags-$(CONFIG_MVIAC3_2) += $(call cc-option,-march=c3-2,-march=i686)
+ cflags-$(CONFIG_MVIAC7) += -march=i686
+ cflags-$(CONFIG_MCORE2) += -march=i686 $(call tune,core2)
+-cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
+- $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++cflags-$(CONFIG_MNEHALEM) += -march=i686 $(call tune,nehalem)
++cflags-$(CONFIG_MWESTMERE) += -march=i686 $(call tune,westmere)
++cflags-$(CONFIG_MSILVERMONT) += -march=i686 $(call tune,silvermont)
++cflags-$(CONFIG_MGOLDMONT) += -march=i686 $(call tune,goldmont)
++cflags-$(CONFIG_MGOLDMONTPLUS) += -march=i686 $(call tune,goldmont-plus)
++cflags-$(CONFIG_MSANDYBRIDGE) += -march=i686 $(call tune,sandybridge)
++cflags-$(CONFIG_MIVYBRIDGE) += -march=i686 $(call tune,ivybridge)
++cflags-$(CONFIG_MHASWELL) += -march=i686 $(call tune,haswell)
++cflags-$(CONFIG_MBROADWELL) += -march=i686 $(call tune,broadwell)
++cflags-$(CONFIG_MSKYLAKE) += -march=i686 $(call tune,skylake)
++cflags-$(CONFIG_MSKYLAKEX) += -march=i686 $(call tune,skylake-avx512)
++cflags-$(CONFIG_MCANNONLAKE) += -march=i686 $(call tune,cannonlake)
++cflags-$(CONFIG_MICELAKE) += -march=i686 $(call tune,icelake-client)
++cflags-$(CONFIG_MCASCADELAKE) += -march=i686 $(call tune,cascadelake)
++cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
++ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+
+ # AMD Elan support
+ cflags-$(CONFIG_MELAN) += -march=i486
* [gentoo-commits] proj/linux-patches:5.3 commit in: /
@ 2019-09-21 15:53 Mike Pagano
From: Mike Pagano @ 2019-09-21 15:53 UTC
To: gentoo-commits
commit: bdd00b553c1319822d9f07df86de47f812b2fbea
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Sep 21 15:53:16 2019 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Sep 21 15:53:16 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bdd00b55
Linux patch 5.3.1
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1000_linux-5.3.1.patch | 1081 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1085 insertions(+)
diff --git a/0000_README b/0000_README
index 4403e5a..f9d1f15 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
Individual Patch Descriptions:
--------------------------------------------------------------------------
+Patch: 1000_linux-5.3.1.patch
+From: http://www.kernel.org
+Desc: Linux 5.3.1
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1000_linux-5.3.1.patch b/1000_linux-5.3.1.patch
new file mode 100644
index 0000000..7d865ae
--- /dev/null
+++ b/1000_linux-5.3.1.patch
@@ -0,0 +1,1081 @@
+diff --git a/Documentation/filesystems/overlayfs.txt b/Documentation/filesystems/overlayfs.txt
+index 1da2f1668f08..845d689e0fd7 100644
+--- a/Documentation/filesystems/overlayfs.txt
++++ b/Documentation/filesystems/overlayfs.txt
+@@ -302,7 +302,7 @@ beneath or above the path of another overlay lower layer path.
+
+ Using an upper layer path and/or a workdir path that are already used by
+ another overlay mount is not allowed and may fail with EBUSY. Using
+-partially overlapping paths is not allowed but will not fail with EBUSY.
++partially overlapping paths is not allowed and may fail with EBUSY.
+ If files are accessed from two overlayfs mounts which share or overlap the
+ upper layer and/or workdir path the behavior of the overlay is undefined,
+ though it will not result in a crash or deadlock.
+diff --git a/Documentation/sphinx/automarkup.py b/Documentation/sphinx/automarkup.py
+index 77e89c1956d7..a8798369e8f7 100644
+--- a/Documentation/sphinx/automarkup.py
++++ b/Documentation/sphinx/automarkup.py
+@@ -25,7 +25,7 @@ RE_function = re.compile(r'([\w_][\w\d_]+\(\))')
+ # to the creation of incorrect and confusing cross references. So
+ # just don't even try with these names.
+ #
+-Skipfuncs = [ 'open', 'close', 'read', 'write', 'fcntl', 'mmap'
++Skipfuncs = [ 'open', 'close', 'read', 'write', 'fcntl', 'mmap',
+ 'select', 'poll', 'fork', 'execve', 'clone', 'ioctl']
+
+ #
+diff --git a/Makefile b/Makefile
+index 6886f22902c9..f32e8d2e09c3 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 3
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Bobtail Squid
+
+diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
+index e09760ece844..8eb5c0fbdee6 100644
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -220,8 +220,10 @@ static inline void set_pte(pte_t *ptep, pte_t pte)
+ * Only if the new pte is valid and kernel, otherwise TLB maintenance
+ * or update_mmu_cache() have the necessary barriers.
+ */
+- if (pte_valid_not_user(pte))
++ if (pte_valid_not_user(pte)) {
+ dsb(ishst);
++ isb();
++ }
+ }
+
+ extern void __sync_icache_dcache(pte_t pteval);
+@@ -484,8 +486,10 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
+
+ WRITE_ONCE(*pmdp, pmd);
+
+- if (pmd_valid(pmd))
++ if (pmd_valid(pmd)) {
+ dsb(ishst);
++ isb();
++ }
+ }
+
+ static inline void pmd_clear(pmd_t *pmdp)
+@@ -543,8 +547,10 @@ static inline void set_pud(pud_t *pudp, pud_t pud)
+
+ WRITE_ONCE(*pudp, pud);
+
+- if (pud_valid(pud))
++ if (pud_valid(pud)) {
+ dsb(ishst);
++ isb();
++ }
+ }
+
+ static inline void pud_clear(pud_t *pudp)
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index 0469aceaa230..485865fd0412 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -3780,7 +3780,7 @@ static int compat_getdrvprm(int drive,
+ v.native_format = UDP->native_format;
+ mutex_unlock(&floppy_mutex);
+
+- if (copy_from_user(arg, &v, sizeof(struct compat_floppy_drive_params)))
++ if (copy_to_user(arg, &v, sizeof(struct compat_floppy_drive_params)))
+ return -EFAULT;
+ return 0;
+ }
+@@ -3816,7 +3816,7 @@ static int compat_getdrvstat(int drive, bool poll,
+ v.bufblocks = UDRS->bufblocks;
+ mutex_unlock(&floppy_mutex);
+
+- if (copy_from_user(arg, &v, sizeof(struct compat_floppy_drive_struct)))
++ if (copy_to_user(arg, &v, sizeof(struct compat_floppy_drive_struct)))
+ return -EFAULT;
+ return 0;
+ Eintr:
+diff --git a/drivers/firmware/google/vpd.c b/drivers/firmware/google/vpd.c
+index 0739f3b70347..db0812263d46 100644
+--- a/drivers/firmware/google/vpd.c
++++ b/drivers/firmware/google/vpd.c
+@@ -92,8 +92,8 @@ static int vpd_section_check_key_name(const u8 *key, s32 key_len)
+ return VPD_OK;
+ }
+
+-static int vpd_section_attrib_add(const u8 *key, s32 key_len,
+- const u8 *value, s32 value_len,
++static int vpd_section_attrib_add(const u8 *key, u32 key_len,
++ const u8 *value, u32 value_len,
+ void *arg)
+ {
+ int ret;
+diff --git a/drivers/firmware/google/vpd_decode.c b/drivers/firmware/google/vpd_decode.c
+index 92e3258552fc..dda525c0f968 100644
+--- a/drivers/firmware/google/vpd_decode.c
++++ b/drivers/firmware/google/vpd_decode.c
+@@ -9,8 +9,8 @@
+
+ #include "vpd_decode.h"
+
+-static int vpd_decode_len(const s32 max_len, const u8 *in,
+- s32 *length, s32 *decoded_len)
++static int vpd_decode_len(const u32 max_len, const u8 *in,
++ u32 *length, u32 *decoded_len)
+ {
+ u8 more;
+ int i = 0;
+@@ -30,18 +30,39 @@ static int vpd_decode_len(const s32 max_len, const u8 *in,
+ } while (more);
+
+ *decoded_len = i;
++ return VPD_OK;
++}
++
++static int vpd_decode_entry(const u32 max_len, const u8 *input_buf,
++ u32 *_consumed, const u8 **entry, u32 *entry_len)
++{
++ u32 decoded_len;
++ u32 consumed = *_consumed;
++
++ if (vpd_decode_len(max_len - consumed, &input_buf[consumed],
++ entry_len, &decoded_len) != VPD_OK)
++ return VPD_FAIL;
++ if (max_len - consumed < decoded_len)
++ return VPD_FAIL;
++
++ consumed += decoded_len;
++ *entry = input_buf + consumed;
++
++ /* entry_len is untrusted data and must be checked again. */
++ if (max_len - consumed < *entry_len)
++ return VPD_FAIL;
+
++ consumed += decoded_len;
++ *_consumed = consumed;
+ return VPD_OK;
+ }
+
+-int vpd_decode_string(const s32 max_len, const u8 *input_buf, s32 *consumed,
++int vpd_decode_string(const u32 max_len, const u8 *input_buf, u32 *consumed,
+ vpd_decode_callback callback, void *callback_arg)
+ {
+ int type;
+- int res;
+- s32 key_len;
+- s32 value_len;
+- s32 decoded_len;
++ u32 key_len;
++ u32 value_len;
+ const u8 *key;
+ const u8 *value;
+
+@@ -56,26 +77,14 @@ int vpd_decode_string(const s32 max_len, const u8 *input_buf, s32 *consumed,
+ case VPD_TYPE_STRING:
+ (*consumed)++;
+
+- /* key */
+- res = vpd_decode_len(max_len - *consumed, &input_buf[*consumed],
+- &key_len, &decoded_len);
+- if (res != VPD_OK || *consumed + decoded_len >= max_len)
++ if (vpd_decode_entry(max_len, input_buf, consumed, &key,
++ &key_len) != VPD_OK)
+ return VPD_FAIL;
+
+- *consumed += decoded_len;
+- key = &input_buf[*consumed];
+- *consumed += key_len;
+-
+- /* value */
+- res = vpd_decode_len(max_len - *consumed, &input_buf[*consumed],
+- &value_len, &decoded_len);
+- if (res != VPD_OK || *consumed + decoded_len > max_len)
++ if (vpd_decode_entry(max_len, input_buf, consumed, &value,
++ &value_len) != VPD_OK)
+ return VPD_FAIL;
+
+- *consumed += decoded_len;
+- value = &input_buf[*consumed];
+- *consumed += value_len;
+-
+ if (type == VPD_TYPE_STRING)
+ return callback(key, key_len, value, value_len,
+ callback_arg);
+diff --git a/drivers/firmware/google/vpd_decode.h b/drivers/firmware/google/vpd_decode.h
+index cf8c2ace155a..8dbe41cac599 100644
+--- a/drivers/firmware/google/vpd_decode.h
++++ b/drivers/firmware/google/vpd_decode.h
+@@ -25,8 +25,8 @@ enum {
+ };
+
+ /* Callback for vpd_decode_string to invoke. */
+-typedef int vpd_decode_callback(const u8 *key, s32 key_len,
+- const u8 *value, s32 value_len,
++typedef int vpd_decode_callback(const u8 *key, u32 key_len,
++ const u8 *value, u32 value_len,
+ void *arg);
+
+ /*
+@@ -44,7 +44,7 @@ typedef int vpd_decode_callback(const u8 *key, s32 key_len,
+ * If one entry is successfully decoded, sends it to callback and returns the
+ * result.
+ */
+-int vpd_decode_string(const s32 max_len, const u8 *input_buf, s32 *consumed,
++int vpd_decode_string(const u32 max_len, const u8 *input_buf, u32 *consumed,
+ vpd_decode_callback callback, void *callback_arg);
+
+ #endif /* __VPD_DECODE_H */
+diff --git a/drivers/media/usb/dvb-usb/technisat-usb2.c b/drivers/media/usb/dvb-usb/technisat-usb2.c
+index c659e18b358b..676d233d46d5 100644
+--- a/drivers/media/usb/dvb-usb/technisat-usb2.c
++++ b/drivers/media/usb/dvb-usb/technisat-usb2.c
+@@ -608,10 +608,9 @@ static int technisat_usb2_frontend_attach(struct dvb_usb_adapter *a)
+ static int technisat_usb2_get_ir(struct dvb_usb_device *d)
+ {
+ struct technisat_usb2_state *state = d->priv;
+- u8 *buf = state->buf;
+- u8 *b;
+- int ret;
+ struct ir_raw_event ev;
++ u8 *buf = state->buf;
++ int i, ret;
+
+ buf[0] = GET_IR_DATA_VENDOR_REQUEST;
+ buf[1] = 0x08;
+@@ -647,26 +646,25 @@ unlock:
+ return 0; /* no key pressed */
+
+ /* decoding */
+- b = buf+1;
+
+ #if 0
+ deb_rc("RC: %d ", ret);
+- debug_dump(b, ret, deb_rc);
++ debug_dump(buf + 1, ret, deb_rc);
+ #endif
+
+ ev.pulse = 0;
+- while (1) {
+- ev.pulse = !ev.pulse;
+- ev.duration = (*b * FIRMWARE_CLOCK_DIVISOR * FIRMWARE_CLOCK_TICK) / 1000;
+- ir_raw_event_store(d->rc_dev, &ev);
+-
+- b++;
+- if (*b == 0xff) {
++ for (i = 1; i < ARRAY_SIZE(state->buf); i++) {
++ if (buf[i] == 0xff) {
+ ev.pulse = 0;
+ ev.duration = 888888*2;
+ ir_raw_event_store(d->rc_dev, &ev);
+ break;
+ }
++
++ ev.pulse = !ev.pulse;
++ ev.duration = (buf[i] * FIRMWARE_CLOCK_DIVISOR *
++ FIRMWARE_CLOCK_TICK) / 1000;
++ ir_raw_event_store(d->rc_dev, &ev);
+ }
+
+ ir_raw_event_handle(d->rc_dev);
+diff --git a/drivers/media/usb/tm6000/tm6000-dvb.c b/drivers/media/usb/tm6000/tm6000-dvb.c
+index e4d2dcd5cc0f..19c90fa9e443 100644
+--- a/drivers/media/usb/tm6000/tm6000-dvb.c
++++ b/drivers/media/usb/tm6000/tm6000-dvb.c
+@@ -97,6 +97,7 @@ static void tm6000_urb_received(struct urb *urb)
+ printk(KERN_ERR "tm6000: error %s\n", __func__);
+ kfree(urb->transfer_buffer);
+ usb_free_urb(urb);
++ dev->dvb->bulk_urb = NULL;
+ }
+ }
+ }
+@@ -127,6 +128,7 @@ static int tm6000_start_stream(struct tm6000_core *dev)
+ dvb->bulk_urb->transfer_buffer = kzalloc(size, GFP_KERNEL);
+ if (!dvb->bulk_urb->transfer_buffer) {
+ usb_free_urb(dvb->bulk_urb);
++ dvb->bulk_urb = NULL;
+ return -ENOMEM;
+ }
+
+@@ -153,6 +155,7 @@ static int tm6000_start_stream(struct tm6000_core *dev)
+
+ kfree(dvb->bulk_urb->transfer_buffer);
+ usb_free_urb(dvb->bulk_urb);
++ dvb->bulk_urb = NULL;
+ return ret;
+ }
+
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index fd54c7c87485..b19ab09cb18f 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -4451,10 +4451,12 @@ int stmmac_suspend(struct device *dev)
+ if (!ndev || !netif_running(ndev))
+ return 0;
+
+- phylink_stop(priv->phylink);
+-
+ mutex_lock(&priv->lock);
+
++ rtnl_lock();
++ phylink_stop(priv->phylink);
++ rtnl_unlock();
++
+ netif_device_detach(ndev);
+ stmmac_stop_all_queues(priv);
+
+@@ -4558,9 +4560,11 @@ int stmmac_resume(struct device *dev)
+
+ stmmac_start_all_queues(priv);
+
+- mutex_unlock(&priv->lock);
+-
++ rtnl_lock();
+ phylink_start(priv->phylink);
++ rtnl_unlock();
++
++ mutex_unlock(&priv->lock);
+
+ return 0;
+ }
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 8d33970a2950..5f5722bf6762 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -906,7 +906,7 @@ static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
+ __pskb_pull_tail(skb, pull_to - skb_headlen(skb));
+ }
+ if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) {
+- queue->rx.rsp_cons = ++cons;
++ queue->rx.rsp_cons = ++cons + skb_queue_len(list);
+ kfree_skb(nskb);
+ return ~0U;
+ }
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.c b/drivers/phy/qualcomm/phy-qcom-qmp.c
+index 34ff6434da8f..6bb49cc25c63 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp.c
+@@ -35,7 +35,7 @@
+ #define PLL_READY_GATE_EN BIT(3)
+ /* QPHY_PCS_STATUS bit */
+ #define PHYSTATUS BIT(6)
+-/* QPHY_COM_PCS_READY_STATUS bit */
++/* QPHY_PCS_READY_STATUS & QPHY_COM_PCS_READY_STATUS bit */
+ #define PCS_READY BIT(0)
+
+ /* QPHY_V3_DP_COM_RESET_OVRD_CTRL register bits */
+@@ -115,6 +115,7 @@ enum qphy_reg_layout {
+ QPHY_SW_RESET,
+ QPHY_START_CTRL,
+ QPHY_PCS_READY_STATUS,
++ QPHY_PCS_STATUS,
+ QPHY_PCS_AUTONOMOUS_MODE_CTRL,
+ QPHY_PCS_LFPS_RXTERM_IRQ_CLEAR,
+ QPHY_PCS_LFPS_RXTERM_IRQ_STATUS,
+@@ -133,7 +134,7 @@ static const unsigned int pciephy_regs_layout[] = {
+ [QPHY_FLL_MAN_CODE] = 0xd4,
+ [QPHY_SW_RESET] = 0x00,
+ [QPHY_START_CTRL] = 0x08,
+- [QPHY_PCS_READY_STATUS] = 0x174,
++ [QPHY_PCS_STATUS] = 0x174,
+ };
+
+ static const unsigned int usb3phy_regs_layout[] = {
+@@ -144,7 +145,7 @@ static const unsigned int usb3phy_regs_layout[] = {
+ [QPHY_FLL_MAN_CODE] = 0xd0,
+ [QPHY_SW_RESET] = 0x00,
+ [QPHY_START_CTRL] = 0x08,
+- [QPHY_PCS_READY_STATUS] = 0x17c,
++ [QPHY_PCS_STATUS] = 0x17c,
+ [QPHY_PCS_AUTONOMOUS_MODE_CTRL] = 0x0d4,
+ [QPHY_PCS_LFPS_RXTERM_IRQ_CLEAR] = 0x0d8,
+ [QPHY_PCS_LFPS_RXTERM_IRQ_STATUS] = 0x178,
+@@ -153,7 +154,7 @@ static const unsigned int usb3phy_regs_layout[] = {
+ static const unsigned int qmp_v3_usb3phy_regs_layout[] = {
+ [QPHY_SW_RESET] = 0x00,
+ [QPHY_START_CTRL] = 0x08,
+- [QPHY_PCS_READY_STATUS] = 0x174,
++ [QPHY_PCS_STATUS] = 0x174,
+ [QPHY_PCS_AUTONOMOUS_MODE_CTRL] = 0x0d8,
+ [QPHY_PCS_LFPS_RXTERM_IRQ_CLEAR] = 0x0dc,
+ [QPHY_PCS_LFPS_RXTERM_IRQ_STATUS] = 0x170,
+@@ -911,7 +912,6 @@ struct qmp_phy_cfg {
+
+ unsigned int start_ctrl;
+ unsigned int pwrdn_ctrl;
+- unsigned int mask_pcs_ready;
+ unsigned int mask_com_pcs_ready;
+
+ /* true, if PHY has a separate PHY_COM control block */
+@@ -1074,7 +1074,6 @@ static const struct qmp_phy_cfg msm8996_pciephy_cfg = {
+
+ .start_ctrl = PCS_START | PLL_READY_GATE_EN,
+ .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL,
+- .mask_pcs_ready = PHYSTATUS,
+ .mask_com_pcs_ready = PCS_READY,
+
+ .has_phy_com_ctrl = true,
+@@ -1106,7 +1105,6 @@ static const struct qmp_phy_cfg msm8996_usb3phy_cfg = {
+
+ .start_ctrl = SERDES_START | PCS_START,
+ .pwrdn_ctrl = SW_PWRDN,
+- .mask_pcs_ready = PHYSTATUS,
+ };
+
+ /* list of resets */
+@@ -1136,7 +1134,6 @@ static const struct qmp_phy_cfg ipq8074_pciephy_cfg = {
+
+ .start_ctrl = SERDES_START | PCS_START,
+ .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL,
+- .mask_pcs_ready = PHYSTATUS,
+
+ .has_phy_com_ctrl = false,
+ .has_lane_rst = false,
+@@ -1167,7 +1164,6 @@ static const struct qmp_phy_cfg qmp_v3_usb3phy_cfg = {
+
+ .start_ctrl = SERDES_START | PCS_START,
+ .pwrdn_ctrl = SW_PWRDN,
+- .mask_pcs_ready = PHYSTATUS,
+
+ .has_pwrdn_delay = true,
+ .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN,
+@@ -1199,7 +1195,6 @@ static const struct qmp_phy_cfg qmp_v3_usb3_uniphy_cfg = {
+
+ .start_ctrl = SERDES_START | PCS_START,
+ .pwrdn_ctrl = SW_PWRDN,
+- .mask_pcs_ready = PHYSTATUS,
+
+ .has_pwrdn_delay = true,
+ .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN,
+@@ -1226,7 +1221,6 @@ static const struct qmp_phy_cfg sdm845_ufsphy_cfg = {
+
+ .start_ctrl = SERDES_START,
+ .pwrdn_ctrl = SW_PWRDN,
+- .mask_pcs_ready = PCS_READY,
+
+ .is_dual_lane_phy = true,
+ .no_pcs_sw_reset = true,
+@@ -1254,7 +1248,6 @@ static const struct qmp_phy_cfg msm8998_pciephy_cfg = {
+
+ .start_ctrl = SERDES_START | PCS_START,
+ .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL,
+- .mask_pcs_ready = PHYSTATUS,
+ };
+
+ static const struct qmp_phy_cfg msm8998_usb3phy_cfg = {
+@@ -1279,7 +1272,6 @@ static const struct qmp_phy_cfg msm8998_usb3phy_cfg = {
+
+ .start_ctrl = SERDES_START | PCS_START,
+ .pwrdn_ctrl = SW_PWRDN,
+- .mask_pcs_ready = PHYSTATUS,
+
+ .is_dual_lane_phy = true,
+ };
+@@ -1457,7 +1449,7 @@ static int qcom_qmp_phy_enable(struct phy *phy)
+ void __iomem *pcs = qphy->pcs;
+ void __iomem *dp_com = qmp->dp_com;
+ void __iomem *status;
+- unsigned int mask, val;
++ unsigned int mask, val, ready;
+ int ret;
+
+ dev_vdbg(qmp->dev, "Initializing QMP phy\n");
+@@ -1545,10 +1537,17 @@ static int qcom_qmp_phy_enable(struct phy *phy)
+ /* start SerDes and Phy-Coding-Sublayer */
+ qphy_setbits(pcs, cfg->regs[QPHY_START_CTRL], cfg->start_ctrl);
+
+- status = pcs + cfg->regs[QPHY_PCS_READY_STATUS];
+- mask = cfg->mask_pcs_ready;
++ if (cfg->type == PHY_TYPE_UFS) {
++ status = pcs + cfg->regs[QPHY_PCS_READY_STATUS];
++ mask = PCS_READY;
++ ready = PCS_READY;
++ } else {
++ status = pcs + cfg->regs[QPHY_PCS_STATUS];
++ mask = PHYSTATUS;
++ ready = 0;
++ }
+
+- ret = readl_poll_timeout(status, val, val & mask, 10,
++ ret = readl_poll_timeout(status, val, (val & mask) == ready, 10,
+ PHY_INIT_COMPLETE_TIMEOUT);
+ if (ret) {
+ dev_err(qmp->dev, "phy initialization timed-out\n");
+diff --git a/drivers/phy/renesas/phy-rcar-gen3-usb2.c b/drivers/phy/renesas/phy-rcar-gen3-usb2.c
+index 8ffba67568ec..b7f6b1324395 100644
+--- a/drivers/phy/renesas/phy-rcar-gen3-usb2.c
++++ b/drivers/phy/renesas/phy-rcar-gen3-usb2.c
+@@ -61,6 +61,7 @@
+ USB2_OBINT_IDDIGCHG)
+
+ /* VBCTRL */
++#define USB2_VBCTRL_OCCLREN BIT(16)
+ #define USB2_VBCTRL_DRVVBUSSEL BIT(8)
+
+ /* LINECTRL1 */
+@@ -374,6 +375,7 @@ static void rcar_gen3_init_otg(struct rcar_gen3_chan *ch)
+ writel(val, usb2_base + USB2_LINECTRL1);
+
+ val = readl(usb2_base + USB2_VBCTRL);
++ val &= ~USB2_VBCTRL_OCCLREN;
+ writel(val | USB2_VBCTRL_DRVVBUSSEL, usb2_base + USB2_VBCTRL);
+ val = readl(usb2_base + USB2_ADPCTRL);
+ writel(val | USB2_ADPCTRL_IDPULLUP, usb2_base + USB2_ADPCTRL);
+diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
+index 0b4f36905321..8e667967928a 100644
+--- a/drivers/tty/serial/atmel_serial.c
++++ b/drivers/tty/serial/atmel_serial.c
+@@ -1400,7 +1400,6 @@ atmel_handle_transmit(struct uart_port *port, unsigned int pending)
+
+ atmel_port->hd_start_rx = false;
+ atmel_start_rx(port);
+- return;
+ }
+
+ atmel_tasklet_schedule(atmel_port, &atmel_port->tasklet_tx);
+diff --git a/drivers/tty/serial/sprd_serial.c b/drivers/tty/serial/sprd_serial.c
+index 73d71a4e6c0c..f49b7d6fbc88 100644
+--- a/drivers/tty/serial/sprd_serial.c
++++ b/drivers/tty/serial/sprd_serial.c
+@@ -609,7 +609,7 @@ static inline void sprd_rx(struct uart_port *port)
+
+ if (lsr & (SPRD_LSR_BI | SPRD_LSR_PE |
+ SPRD_LSR_FE | SPRD_LSR_OE))
+- if (handle_lsr_errors(port, &lsr, &flag))
++ if (handle_lsr_errors(port, &flag, &lsr))
+ continue;
+ if (uart_handle_sysrq_char(port, ch))
+ continue;
+diff --git a/drivers/usb/core/config.c b/drivers/usb/core/config.c
+index 9d6cb709ca7b..151a74a54386 100644
+--- a/drivers/usb/core/config.c
++++ b/drivers/usb/core/config.c
+@@ -921,7 +921,7 @@ int usb_get_bos_descriptor(struct usb_device *dev)
+ struct usb_bos_descriptor *bos;
+ struct usb_dev_cap_header *cap;
+ struct usb_ssp_cap_descriptor *ssp_cap;
+- unsigned char *buffer;
++ unsigned char *buffer, *buffer0;
+ int length, total_len, num, i, ssac;
+ __u8 cap_type;
+ int ret;
+@@ -966,10 +966,12 @@ int usb_get_bos_descriptor(struct usb_device *dev)
+ ret = -ENOMSG;
+ goto err;
+ }
++
++ buffer0 = buffer;
+ total_len -= length;
++ buffer += length;
+
+ for (i = 0; i < num; i++) {
+- buffer += length;
+ cap = (struct usb_dev_cap_header *)buffer;
+
+ if (total_len < sizeof(*cap) || total_len < cap->bLength) {
+@@ -983,8 +985,6 @@ int usb_get_bos_descriptor(struct usb_device *dev)
+ break;
+ }
+
+- total_len -= length;
+-
+ if (cap->bDescriptorType != USB_DT_DEVICE_CAPABILITY) {
+ dev_warn(ddev, "descriptor type invalid, skip\n");
+ continue;
+@@ -1019,7 +1019,11 @@ int usb_get_bos_descriptor(struct usb_device *dev)
+ default:
+ break;
+ }
++
++ total_len -= length;
++ buffer += length;
+ }
++ dev->bos->desc->wTotalLength = cpu_to_le16(buffer - buffer0);
+
+ return 0;
+
+diff --git a/fs/overlayfs/ovl_entry.h b/fs/overlayfs/ovl_entry.h
+index 28a2d12a1029..a8279280e88d 100644
+--- a/fs/overlayfs/ovl_entry.h
++++ b/fs/overlayfs/ovl_entry.h
+@@ -66,6 +66,7 @@ struct ovl_fs {
+ bool workdir_locked;
+ /* Traps in ovl inode cache */
+ struct inode *upperdir_trap;
++ struct inode *workbasedir_trap;
+ struct inode *workdir_trap;
+ struct inode *indexdir_trap;
+ /* Inode numbers in all layers do not use the high xino_bits */
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index b368e2e102fa..afbcb116a7f1 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -212,6 +212,7 @@ static void ovl_free_fs(struct ovl_fs *ofs)
+ {
+ unsigned i;
+
++ iput(ofs->workbasedir_trap);
+ iput(ofs->indexdir_trap);
+ iput(ofs->workdir_trap);
+ iput(ofs->upperdir_trap);
+@@ -1003,6 +1004,25 @@ static int ovl_setup_trap(struct super_block *sb, struct dentry *dir,
+ return 0;
+ }
+
++/*
++ * Determine how we treat concurrent use of upperdir/workdir based on the
++ * index feature. This is papering over mount leaks of container runtimes,
++ * for example, an old overlay mount is leaked and now its upperdir is
++ * attempted to be used as a lower layer in a new overlay mount.
++ */
++static int ovl_report_in_use(struct ovl_fs *ofs, const char *name)
++{
++ if (ofs->config.index) {
++ pr_err("overlayfs: %s is in-use as upperdir/workdir of another mount, mount with '-o index=off' to override exclusive upperdir protection.\n",
++ name);
++ return -EBUSY;
++ } else {
++ pr_warn("overlayfs: %s is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.\n",
++ name);
++ return 0;
++ }
++}
++
+ static int ovl_get_upper(struct super_block *sb, struct ovl_fs *ofs,
+ struct path *upperpath)
+ {
+@@ -1040,14 +1060,12 @@ static int ovl_get_upper(struct super_block *sb, struct ovl_fs *ofs,
+ upper_mnt->mnt_flags &= ~(MNT_NOATIME | MNT_NODIRATIME | MNT_RELATIME);
+ ofs->upper_mnt = upper_mnt;
+
+- err = -EBUSY;
+ if (ovl_inuse_trylock(ofs->upper_mnt->mnt_root)) {
+ ofs->upperdir_locked = true;
+- } else if (ofs->config.index) {
+- pr_err("overlayfs: upperdir is in-use by another mount, mount with '-o index=off' to override exclusive upperdir protection.\n");
+- goto out;
+ } else {
+- pr_warn("overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.\n");
++ err = ovl_report_in_use(ofs, "upperdir");
++ if (err)
++ goto out;
+ }
+
+ err = 0;
+@@ -1157,16 +1175,19 @@ static int ovl_get_workdir(struct super_block *sb, struct ovl_fs *ofs,
+
+ ofs->workbasedir = dget(workpath.dentry);
+
+- err = -EBUSY;
+ if (ovl_inuse_trylock(ofs->workbasedir)) {
+ ofs->workdir_locked = true;
+- } else if (ofs->config.index) {
+- pr_err("overlayfs: workdir is in-use by another mount, mount with '-o index=off' to override exclusive workdir protection.\n");
+- goto out;
+ } else {
+- pr_warn("overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.\n");
++ err = ovl_report_in_use(ofs, "workdir");
++ if (err)
++ goto out;
+ }
+
++ err = ovl_setup_trap(sb, ofs->workbasedir, &ofs->workbasedir_trap,
++ "workdir");
++ if (err)
++ goto out;
++
+ err = ovl_make_workdir(sb, ofs, &workpath);
+
+ out:
+@@ -1313,16 +1334,16 @@ static int ovl_get_lower_layers(struct super_block *sb, struct ovl_fs *ofs,
+ if (err < 0)
+ goto out;
+
+- err = -EBUSY;
+- if (ovl_is_inuse(stack[i].dentry)) {
+- pr_err("overlayfs: lowerdir is in-use as upperdir/workdir\n");
+- goto out;
+- }
+-
+ err = ovl_setup_trap(sb, stack[i].dentry, &trap, "lowerdir");
+ if (err)
+ goto out;
+
++ if (ovl_is_inuse(stack[i].dentry)) {
++ err = ovl_report_in_use(ofs, "lowerdir");
++ if (err)
++ goto out;
++ }
++
+ mnt = clone_private_mount(&stack[i]);
+ err = PTR_ERR(mnt);
+ if (IS_ERR(mnt)) {
+@@ -1469,8 +1490,8 @@ out_err:
+ * - another layer of this overlayfs instance
+ * - upper/work dir of any overlayfs instance
+ */
+-static int ovl_check_layer(struct super_block *sb, struct dentry *dentry,
+- const char *name)
++static int ovl_check_layer(struct super_block *sb, struct ovl_fs *ofs,
++ struct dentry *dentry, const char *name)
+ {
+ struct dentry *next = dentry, *parent;
+ int err = 0;
+@@ -1482,13 +1503,11 @@ static int ovl_check_layer(struct super_block *sb, struct dentry *dentry,
+
+ /* Walk back ancestors to root (inclusive) looking for traps */
+ while (!err && parent != next) {
+- if (ovl_is_inuse(parent)) {
+- err = -EBUSY;
+- pr_err("overlayfs: %s path overlapping in-use upperdir/workdir\n",
+- name);
+- } else if (ovl_lookup_trap_inode(sb, parent)) {
++ if (ovl_lookup_trap_inode(sb, parent)) {
+ err = -ELOOP;
+ pr_err("overlayfs: overlapping %s path\n", name);
++ } else if (ovl_is_inuse(parent)) {
++ err = ovl_report_in_use(ofs, name);
+ }
+ next = parent;
+ parent = dget_parent(next);
+@@ -1509,7 +1528,8 @@ static int ovl_check_overlapping_layers(struct super_block *sb,
+ int i, err;
+
+ if (ofs->upper_mnt) {
+- err = ovl_check_layer(sb, ofs->upper_mnt->mnt_root, "upperdir");
++ err = ovl_check_layer(sb, ofs, ofs->upper_mnt->mnt_root,
++ "upperdir");
+ if (err)
+ return err;
+
+@@ -1520,13 +1540,14 @@ static int ovl_check_overlapping_layers(struct super_block *sb,
+ * workbasedir. In that case, we already have their traps in
+ * inode cache and we will catch that case on lookup.
+ */
+- err = ovl_check_layer(sb, ofs->workbasedir, "workdir");
++ err = ovl_check_layer(sb, ofs, ofs->workbasedir, "workdir");
+ if (err)
+ return err;
+ }
+
+ for (i = 0; i < ofs->numlower; i++) {
+- err = ovl_check_layer(sb, ofs->lower_layers[i].mnt->mnt_root,
++ err = ovl_check_layer(sb, ofs,
++ ofs->lower_layers[i].mnt->mnt_root,
+ "lowerdir");
+ if (err)
+ return err;
+diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
+index a16fbe9a2a67..aa99c73c3fbd 100644
+--- a/include/net/pkt_sched.h
++++ b/include/net/pkt_sched.h
+@@ -118,7 +118,12 @@ void __qdisc_run(struct Qdisc *q);
+ static inline void qdisc_run(struct Qdisc *q)
+ {
+ if (qdisc_run_begin(q)) {
+- __qdisc_run(q);
++ /* NOLOCK qdisc must check 'state' under the qdisc seqlock
++ * to avoid racing with dev_qdisc_reset()
++ */
++ if (!(q->flags & TCQ_F_NOLOCK) ||
++ likely(!test_bit(__QDISC_STATE_DEACTIVATED, &q->state)))
++ __qdisc_run(q);
+ qdisc_run_end(q);
+ }
+ }
+diff --git a/include/net/sock_reuseport.h b/include/net/sock_reuseport.h
+index d9112de85261..43f4a818d88f 100644
+--- a/include/net/sock_reuseport.h
++++ b/include/net/sock_reuseport.h
+@@ -21,7 +21,8 @@ struct sock_reuseport {
+ unsigned int synq_overflow_ts;
+ /* ID stays the same even after the size of socks[] grows. */
+ unsigned int reuseport_id;
+- bool bind_inany;
++ unsigned int bind_inany:1;
++ unsigned int has_conns:1;
+ struct bpf_prog __rcu *prog; /* optional BPF sock selector */
+ struct sock *socks[0]; /* array of sock pointers */
+ };
+@@ -37,6 +38,23 @@ extern struct sock *reuseport_select_sock(struct sock *sk,
+ extern int reuseport_attach_prog(struct sock *sk, struct bpf_prog *prog);
+ extern int reuseport_detach_prog(struct sock *sk);
+
++static inline bool reuseport_has_conns(struct sock *sk, bool set)
++{
++ struct sock_reuseport *reuse;
++ bool ret = false;
++
++ rcu_read_lock();
++ reuse = rcu_dereference(sk->sk_reuseport_cb);
++ if (reuse) {
++ if (set)
++ reuse->has_conns = 1;
++ ret = reuse->has_conns;
++ }
++ rcu_read_unlock();
++
++ return ret;
++}
++
+ int reuseport_get_id(struct sock_reuseport *reuse);
+
+ #endif /* _SOCK_REUSEPORT_H */
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 5156c0edebe8..4ed9df74eb8a 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3467,18 +3467,22 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
+ qdisc_calculate_pkt_len(skb, q);
+
+ if (q->flags & TCQ_F_NOLOCK) {
+- if (unlikely(test_bit(__QDISC_STATE_DEACTIVATED, &q->state))) {
+- __qdisc_drop(skb, &to_free);
+- rc = NET_XMIT_DROP;
+- } else if ((q->flags & TCQ_F_CAN_BYPASS) && q->empty &&
+- qdisc_run_begin(q)) {
++ if ((q->flags & TCQ_F_CAN_BYPASS) && q->empty &&
++ qdisc_run_begin(q)) {
++ if (unlikely(test_bit(__QDISC_STATE_DEACTIVATED,
++ &q->state))) {
++ __qdisc_drop(skb, &to_free);
++ rc = NET_XMIT_DROP;
++ goto end_run;
++ }
+ qdisc_bstats_cpu_update(q, skb);
+
++ rc = NET_XMIT_SUCCESS;
+ if (sch_direct_xmit(skb, q, dev, txq, NULL, true))
+ __qdisc_run(q);
+
++end_run:
+ qdisc_run_end(q);
+- rc = NET_XMIT_SUCCESS;
+ } else {
+ rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK;
+ qdisc_run(q);
+diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
+index 9408f9264d05..f3ceec93f392 100644
+--- a/net/core/sock_reuseport.c
++++ b/net/core/sock_reuseport.c
+@@ -295,8 +295,19 @@ struct sock *reuseport_select_sock(struct sock *sk,
+
+ select_by_hash:
+ /* no bpf or invalid bpf result: fall back to hash usage */
+- if (!sk2)
+- sk2 = reuse->socks[reciprocal_scale(hash, socks)];
++ if (!sk2) {
++ int i, j;
++
++ i = j = reciprocal_scale(hash, socks);
++ while (reuse->socks[i]->sk_state == TCP_ESTABLISHED) {
++ i++;
++ if (i >= reuse->num_socks)
++ i = 0;
++ if (i == j)
++ goto out;
++ }
++ sk2 = reuse->socks[i];
++ }
+ }
+
+ out:
+diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
+index 3abd173ebacb..96f787cf9b6e 100644
+--- a/net/dsa/dsa2.c
++++ b/net/dsa/dsa2.c
+@@ -623,6 +623,8 @@ static int dsa_port_parse_cpu(struct dsa_port *dp, struct net_device *master)
+ tag_protocol = ds->ops->get_tag_protocol(ds, dp->index);
+ tag_ops = dsa_tag_driver_get(tag_protocol);
+ if (IS_ERR(tag_ops)) {
++ if (PTR_ERR(tag_ops) == -ENOPROTOOPT)
++ return -EPROBE_DEFER;
+ dev_warn(ds->dev, "No tagger for this switch\n");
+ return PTR_ERR(tag_ops);
+ }
+diff --git a/net/ipv4/datagram.c b/net/ipv4/datagram.c
+index 7bd29e694603..9a0fe0c2fa02 100644
+--- a/net/ipv4/datagram.c
++++ b/net/ipv4/datagram.c
+@@ -15,6 +15,7 @@
+ #include <net/sock.h>
+ #include <net/route.h>
+ #include <net/tcp_states.h>
++#include <net/sock_reuseport.h>
+
+ int __ip4_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ {
+@@ -69,6 +70,7 @@ int __ip4_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len
+ }
+ inet->inet_daddr = fl4->daddr;
+ inet->inet_dport = usin->sin_port;
++ reuseport_has_conns(sk, true);
+ sk->sk_state = TCP_ESTABLISHED;
+ sk_set_txhash(sk);
+ inet->inet_id = jiffies;
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index d88821c794fb..16486c8b708b 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -423,12 +423,13 @@ static struct sock *udp4_lib_lookup2(struct net *net,
+ score = compute_score(sk, net, saddr, sport,
+ daddr, hnum, dif, sdif);
+ if (score > badness) {
+- if (sk->sk_reuseport) {
++ if (sk->sk_reuseport &&
++ sk->sk_state != TCP_ESTABLISHED) {
+ hash = udp_ehashfn(net, daddr, hnum,
+ saddr, sport);
+ result = reuseport_select_sock(sk, hash, skb,
+ sizeof(struct udphdr));
+- if (result)
++ if (result && !reuseport_has_conns(sk, false))
+ return result;
+ }
+ badness = score;
+diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c
+index 9ab897ded4df..96f939248d2f 100644
+--- a/net/ipv6/datagram.c
++++ b/net/ipv6/datagram.c
+@@ -27,6 +27,7 @@
+ #include <net/ip6_route.h>
+ #include <net/tcp_states.h>
+ #include <net/dsfield.h>
++#include <net/sock_reuseport.h>
+
+ #include <linux/errqueue.h>
+ #include <linux/uaccess.h>
+@@ -254,6 +255,7 @@ ipv4_connected:
+ goto out;
+ }
+
++ reuseport_has_conns(sk, true);
+ sk->sk_state = TCP_ESTABLISHED;
+ sk_set_txhash(sk);
+ out:
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index dd2d0b963260..d5779d6a6065 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -968,7 +968,7 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
+ if (unlikely(!tun_info ||
+ !(tun_info->mode & IP_TUNNEL_INFO_TX) ||
+ ip_tunnel_info_af(tun_info) != AF_INET6))
+- return -EINVAL;
++ goto tx_err;
+
+ key = &tun_info->key;
+ memset(&fl6, 0, sizeof(fl6));
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 827fe7385078..5995fdc99d3f 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -158,13 +158,14 @@ static struct sock *udp6_lib_lookup2(struct net *net,
+ score = compute_score(sk, net, saddr, sport,
+ daddr, hnum, dif, sdif);
+ if (score > badness) {
+- if (sk->sk_reuseport) {
++ if (sk->sk_reuseport &&
++ sk->sk_state != TCP_ESTABLISHED) {
+ hash = udp6_ehashfn(net, daddr, hnum,
+ saddr, sport);
+
+ result = reuseport_select_sock(sk, hash, skb,
+ sizeof(struct udphdr));
+- if (result)
++ if (result && !reuseport_has_conns(sk, false))
+ return result;
+ }
+ result = sk;
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index ac28f6a5d70e..17bd8f539bc7 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -985,6 +985,9 @@ static void qdisc_destroy(struct Qdisc *qdisc)
+
+ void qdisc_put(struct Qdisc *qdisc)
+ {
++ if (!qdisc)
++ return;
++
+ if (qdisc->flags & TCQ_F_BUILTIN ||
+ !refcount_dec_and_test(&qdisc->refcnt))
+ return;
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index fd05ae1437a9..c9cfc796eccf 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -10659,9 +10659,11 @@ static int cfg80211_cqm_rssi_update(struct cfg80211_registered_device *rdev,
+ hyst = wdev->cqm_config->rssi_hyst;
+ n = wdev->cqm_config->n_rssi_thresholds;
+
+- for (i = 0; i < n; i++)
++ for (i = 0; i < n; i++) {
++ i = array_index_nospec(i, n);
+ if (last < wdev->cqm_config->rssi_thresholds[i])
+ break;
++ }
+
+ low_index = i - 1;
+ if (low_index >= 0) {
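The nl80211 loop above inserts array_index_nospec() so a speculatively out-of-bounds i can never be used to index rssi_thresholds (a Spectre-v1 hardening). The same clamp can be modeled in userspace with a branchless mask; the helper below mirrors the arithmetic of the kernel's generic array_index_mask_nospec() but is an illustrative reimplementation, assuming both values fit in 63 bits:

/* Branchless index clamp in the spirit of array_index_nospec();
 * an illustrative reimplementation of the kernel's generic mask,
 * assuming idx and size both fit in 63 bits. */
#include <stdint.h>
#include <stdio.h>

static uint64_t index_mask_nospec(uint64_t idx, uint64_t size)
{
        /* All-ones when idx < size, zero otherwise, computed without
         * a conditional branch the CPU could speculate past. */
        return ~(uint64_t)((int64_t)(idx | (size - 1 - idx)) >> 63);
}

static uint64_t index_nospec(uint64_t idx, uint64_t size)
{
        return idx & index_mask_nospec(idx, size);
}

int main(void)
{
        printf("in bounds: %llu, clamped: %llu\n",
               (unsigned long long)index_nospec(3, 8),
               (unsigned long long)index_nospec(9, 8));
        return 0;
}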
+diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
+index 5294abb3f178..8ffd07e2a160 100644
+--- a/virt/kvm/coalesced_mmio.c
++++ b/virt/kvm/coalesced_mmio.c
+@@ -40,7 +40,7 @@ static int coalesced_mmio_in_range(struct kvm_coalesced_mmio_dev *dev,
+ return 1;
+ }
+
+-static int coalesced_mmio_has_room(struct kvm_coalesced_mmio_dev *dev)
++static int coalesced_mmio_has_room(struct kvm_coalesced_mmio_dev *dev, u32 last)
+ {
+ struct kvm_coalesced_mmio_ring *ring;
+ unsigned avail;
+@@ -52,7 +52,7 @@ static int coalesced_mmio_has_room(struct kvm_coalesced_mmio_dev *dev)
+ * there is always one unused entry in the buffer
+ */
+ ring = dev->kvm->coalesced_mmio_ring;
+- avail = (ring->first - ring->last - 1) % KVM_COALESCED_MMIO_MAX;
++ avail = (ring->first - last - 1) % KVM_COALESCED_MMIO_MAX;
+ if (avail == 0) {
+ /* full */
+ return 0;
+@@ -67,25 +67,28 @@ static int coalesced_mmio_write(struct kvm_vcpu *vcpu,
+ {
+ struct kvm_coalesced_mmio_dev *dev = to_mmio(this);
+ struct kvm_coalesced_mmio_ring *ring = dev->kvm->coalesced_mmio_ring;
++ __u32 insert;
+
+ if (!coalesced_mmio_in_range(dev, addr, len))
+ return -EOPNOTSUPP;
+
+ spin_lock(&dev->kvm->ring_lock);
+
+- if (!coalesced_mmio_has_room(dev)) {
++ insert = READ_ONCE(ring->last);
++ if (!coalesced_mmio_has_room(dev, insert) ||
++ insert >= KVM_COALESCED_MMIO_MAX) {
+ spin_unlock(&dev->kvm->ring_lock);
+ return -EOPNOTSUPP;
+ }
+
+ /* copy data in first free entry of the ring */
+
+- ring->coalesced_mmio[ring->last].phys_addr = addr;
+- ring->coalesced_mmio[ring->last].len = len;
+- memcpy(ring->coalesced_mmio[ring->last].data, val, len);
+- ring->coalesced_mmio[ring->last].pio = dev->zone.pio;
++ ring->coalesced_mmio[insert].phys_addr = addr;
++ ring->coalesced_mmio[insert].len = len;
++ memcpy(ring->coalesced_mmio[insert].data, val, len);
++ ring->coalesced_mmio[insert].pio = dev->zone.pio;
+ smp_wmb();
+- ring->last = (ring->last + 1) % KVM_COALESCED_MMIO_MAX;
++ ring->last = (insert + 1) % KVM_COALESCED_MMIO_MAX;
+ spin_unlock(&dev->kvm->ring_lock);
+ return 0;
+ }
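The coalesced-MMIO rewrite above snapshots ring->last once with READ_ONCE() and bounds-checks it, so a vCPU racing through the userspace-visible ring cannot push the insert index out of range. The free-slot arithmetic deliberately keeps one entry unused so a full ring is distinguishable from an empty one; here is a standalone model of that arithmetic (RING_MAX is an arbitrary stand-in for KVM_COALESCED_MMIO_MAX):

/* Standalone model of the one-slot-reserved ring arithmetic in
 * coalesced_mmio_has_room(); RING_MAX is an arbitrary stand-in
 * for KVM_COALESCED_MMIO_MAX. */
#include <stdint.h>
#include <stdio.h>

#define RING_MAX 8u

static uint32_t ring_room(uint32_t first, uint32_t last)
{
        /* One entry always stays unused, so a full ring (0 free) is
         * distinguishable from an empty one (RING_MAX - 1 free). */
        return (first - last - 1) % RING_MAX;
}

int main(void)
{
        printf("empty: %u free\n", ring_room(0, 0));   /* 7 */
        printf("full:  %u free\n", ring_room(0, 7));   /* 0 */
        printf("wrap:  %u free\n", ring_room(6, 2));   /* 3 */
        return 0;
}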
* [gentoo-commits] proj/linux-patches:5.3 commit in: /
@ 2019-10-01 10:12 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2019-10-01 10:12 UTC (permalink / raw
To: gentoo-commits
commit: bd5b33fc88aee91eba184e2e7f408db443b6418f
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Oct 1 10:12:34 2019 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Oct 1 10:12:34 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bd5b33fc
Linux patch 5.3.2
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1001_linux-5.3.2.patch | 748 +++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 752 insertions(+)
diff --git a/0000_README b/0000_README
index f9d1f15..38a3537 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch: 1000_linux-5.3.1.patch
From: http://www.kernel.org
Desc: Linux 5.3.1
+Patch: 1001_linux-5.3.2.patch
+From: http://www.kernel.org
+Desc: Linux 5.3.2
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1001_linux-5.3.2.patch b/1001_linux-5.3.2.patch
new file mode 100644
index 0000000..bd1d724
--- /dev/null
+++ b/1001_linux-5.3.2.patch
@@ -0,0 +1,748 @@
+diff --git a/Makefile b/Makefile
+index f32e8d2e09c3..13fa3a409ddd 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 3
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Bobtail Squid
+
+diff --git a/arch/powerpc/include/asm/opal.h b/arch/powerpc/include/asm/opal.h
+index 57bd029c715e..d5a0807d21db 100644
+--- a/arch/powerpc/include/asm/opal.h
++++ b/arch/powerpc/include/asm/opal.h
+@@ -272,7 +272,7 @@ int64_t opal_xive_get_vp_info(uint64_t vp,
+ int64_t opal_xive_set_vp_info(uint64_t vp,
+ uint64_t flags,
+ uint64_t report_cl_pair);
+-int64_t opal_xive_allocate_irq(uint32_t chip_id);
++int64_t opal_xive_allocate_irq_raw(uint32_t chip_id);
+ int64_t opal_xive_free_irq(uint32_t girq);
+ int64_t opal_xive_sync(uint32_t type, uint32_t id);
+ int64_t opal_xive_dump(uint32_t type, uint32_t id);
+diff --git a/arch/powerpc/platforms/powernv/opal-call.c b/arch/powerpc/platforms/powernv/opal-call.c
+index 29ca523c1c79..dccdc9df5213 100644
+--- a/arch/powerpc/platforms/powernv/opal-call.c
++++ b/arch/powerpc/platforms/powernv/opal-call.c
+@@ -257,7 +257,7 @@ OPAL_CALL(opal_xive_set_queue_info, OPAL_XIVE_SET_QUEUE_INFO);
+ OPAL_CALL(opal_xive_donate_page, OPAL_XIVE_DONATE_PAGE);
+ OPAL_CALL(opal_xive_alloc_vp_block, OPAL_XIVE_ALLOCATE_VP_BLOCK);
+ OPAL_CALL(opal_xive_free_vp_block, OPAL_XIVE_FREE_VP_BLOCK);
+-OPAL_CALL(opal_xive_allocate_irq, OPAL_XIVE_ALLOCATE_IRQ);
++OPAL_CALL(opal_xive_allocate_irq_raw, OPAL_XIVE_ALLOCATE_IRQ);
+ OPAL_CALL(opal_xive_free_irq, OPAL_XIVE_FREE_IRQ);
+ OPAL_CALL(opal_xive_get_vp_info, OPAL_XIVE_GET_VP_INFO);
+ OPAL_CALL(opal_xive_set_vp_info, OPAL_XIVE_SET_VP_INFO);
+diff --git a/arch/powerpc/sysdev/xive/native.c b/arch/powerpc/sysdev/xive/native.c
+index 2f26b74f6cfa..cf156aadefe9 100644
+--- a/arch/powerpc/sysdev/xive/native.c
++++ b/arch/powerpc/sysdev/xive/native.c
+@@ -231,6 +231,17 @@ static bool xive_native_match(struct device_node *node)
+ return of_device_is_compatible(node, "ibm,opal-xive-vc");
+ }
+
++static s64 opal_xive_allocate_irq(u32 chip_id)
++{
++ s64 irq = opal_xive_allocate_irq_raw(chip_id);
++
++ /*
++ * Old versions of skiboot can incorrectly return 0xffffffff to
++ * indicate no space, fix it up here.
++ */
++ return irq == 0xffffffff ? OPAL_RESOURCE : irq;
++}
++
+ #ifdef CONFIG_SMP
+ static int xive_native_get_ipi(unsigned int cpu, struct xive_cpu *xc)
+ {
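The wrapper added above sanitizes a firmware return value: old skiboot reports allocation failure with the 32-bit sentinel 0xffffffff rather than a negative OPAL error, so the value must be translated before callers treat it as an IRQ number. A toy rendering of the pattern, with ERR_RESOURCE standing in for OPAL_RESOURCE:

/* Toy rendering of the fixup above: old firmware signals "no space"
 * with the 32-bit sentinel 0xffffffff instead of a negative error.
 * ERR_RESOURCE stands in for OPAL_RESOURCE; nothing here is OPAL API. */
#include <stdint.h>
#include <stdio.h>

#define ERR_RESOURCE (-10)

static int64_t firmware_alloc_irq_raw(int fail)
{
        return fail ? 0xffffffff : 42;  /* buggy sentinel vs. real IRQ */
}

static int64_t alloc_irq(int fail)
{
        int64_t irq = firmware_alloc_irq_raw(fail);

        /* Translate the sentinel before anyone uses it as an IRQ. */
        return irq == 0xffffffff ? ERR_RESOURCE : irq;
}

int main(void)
{
        printf("ok: %lld, fixed-up failure: %lld\n",
               (long long)alloc_irq(0), (long long)alloc_irq(1));
        return 0;
}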
+diff --git a/drivers/clk/imx/clk-imx8mm.c b/drivers/clk/imx/clk-imx8mm.c
+index 6b8e75df994d..6f46bcb1d643 100644
+--- a/drivers/clk/imx/clk-imx8mm.c
++++ b/drivers/clk/imx/clk-imx8mm.c
+@@ -55,8 +55,8 @@ static const struct imx_pll14xx_rate_table imx8mm_pll1416x_tbl[] = {
+ };
+
+ static const struct imx_pll14xx_rate_table imx8mm_audiopll_tbl[] = {
+- PLL_1443X_RATE(786432000U, 655, 5, 2, 23593),
+- PLL_1443X_RATE(722534400U, 301, 5, 1, 3670),
++ PLL_1443X_RATE(393216000U, 262, 2, 3, 9437),
++ PLL_1443X_RATE(361267200U, 361, 3, 3, 17511),
+ };
+
+ static const struct imx_pll14xx_rate_table imx8mm_videopll_tbl[] = {
+diff --git a/drivers/clocksource/timer-of.c b/drivers/clocksource/timer-of.c
+index 80542289fae7..d8c2bd4391d0 100644
+--- a/drivers/clocksource/timer-of.c
++++ b/drivers/clocksource/timer-of.c
+@@ -113,8 +113,10 @@ static __init int timer_of_clk_init(struct device_node *np,
+ of_clk->clk = of_clk->name ? of_clk_get_by_name(np, of_clk->name) :
+ of_clk_get(np, of_clk->index);
+ if (IS_ERR(of_clk->clk)) {
+- pr_err("Failed to get clock for %pOF\n", np);
+- return PTR_ERR(of_clk->clk);
++ ret = PTR_ERR(of_clk->clk);
++ if (ret != -EPROBE_DEFER)
++ pr_err("Failed to get clock for %pOF\n", np);
++ goto out;
+ }
+
+ ret = clk_prepare_enable(of_clk->clk);
+diff --git a/drivers/clocksource/timer-probe.c b/drivers/clocksource/timer-probe.c
+index dda1946e84dd..ee9574da53c0 100644
+--- a/drivers/clocksource/timer-probe.c
++++ b/drivers/clocksource/timer-probe.c
+@@ -29,7 +29,9 @@ void __init timer_probe(void)
+
+ ret = init_func_ret(np);
+ if (ret) {
+- pr_err("Failed to initialize '%pOF': %d\n", np, ret);
++ if (ret != -EPROBE_DEFER)
++ pr_err("Failed to initialize '%pOF': %d\n", np,
++ ret);
+ continue;
+ }
+
+diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
+index c9d686a0e805..4818ae427098 100644
+--- a/drivers/crypto/talitos.c
++++ b/drivers/crypto/talitos.c
+@@ -3140,6 +3140,7 @@ static int talitos_remove(struct platform_device *ofdev)
+ break;
+ case CRYPTO_ALG_TYPE_AEAD:
+ crypto_unregister_aead(&t_alg->algt.alg.aead);
++ break;
+ case CRYPTO_ALG_TYPE_AHASH:
+ crypto_unregister_ahash(&t_alg->algt.alg.hash);
+ break;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 45be7a2132bb..beb2d268d1ef 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -4548,20 +4548,10 @@ static int dm_plane_atomic_check(struct drm_plane *plane,
+ static int dm_plane_atomic_async_check(struct drm_plane *plane,
+ struct drm_plane_state *new_plane_state)
+ {
+- struct drm_plane_state *old_plane_state =
+- drm_atomic_get_old_plane_state(new_plane_state->state, plane);
+-
+ /* Only support async updates on cursor planes. */
+ if (plane->type != DRM_PLANE_TYPE_CURSOR)
+ return -EINVAL;
+
+- /*
+- * DRM calls prepare_fb and cleanup_fb on new_plane_state for
+- * async commits so don't allow fb changes.
+- */
+- if (old_plane_state->fb != new_plane_state->fb)
+- return -EINVAL;
+-
+ return 0;
+ }
+
+@@ -7284,6 +7274,26 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ if (ret)
+ goto fail;
+
++ if (state->legacy_cursor_update) {
++ /*
++ * This is a fast cursor update coming from the plane update
++ * helper, check if it can be done asynchronously for better
++ * performance.
++ */
++ state->async_update =
++ !drm_atomic_helper_async_check(dev, state);
++
++ /*
++ * Skip the remaining global validation if this is an async
++ * update. Cursor updates can be done without affecting
++ * state or bandwidth calcs and this avoids the performance
++ * penalty of locking the private state object and
++ * allocating a new dc_state.
++ */
++ if (state->async_update)
++ return 0;
++ }
++
+ /* Check scaling and underscan changes*/
+ /* TODO Removed scaling changes validation due to inability to commit
+ * new stream into context w\o causing full reset. Need to
+@@ -7336,13 +7346,29 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ ret = -EINVAL;
+ goto fail;
+ }
+- } else if (state->legacy_cursor_update) {
++ } else {
+ /*
+- * This is a fast cursor update coming from the plane update
+- * helper, check if it can be done asynchronously for better
+- * performance.
++ * The commit is a fast update. Fast updates shouldn't change
++ * the DC context, affect global validation, and can have their
++ * commit work done in parallel with other commits not touching
++ * the same resource. If we have a new DC context as part of
++ * the DM atomic state from validation we need to free it and
++ * retain the existing one instead.
+ */
+- state->async_update = !drm_atomic_helper_async_check(dev, state);
++ struct dm_atomic_state *new_dm_state, *old_dm_state;
++
++ new_dm_state = dm_atomic_get_new_state(state);
++ old_dm_state = dm_atomic_get_old_state(state);
++
++ if (new_dm_state && old_dm_state) {
++ if (new_dm_state->context)
++ dc_release_state(new_dm_state->context);
++
++ new_dm_state->context = old_dm_state->context;
++
++ if (old_dm_state->context)
++ dc_retain_state(old_dm_state->context);
++ }
+ }
+
+ /* Must be success */
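The amdgpu_dm else-branch above swaps the freshly validated DC context back out for the old one, pairing exactly one dc_release_state() against one dc_retain_state() so refcounts stay balanced on the fast-update path. A toy refcounted swap showing that pairing (struct ctx and its counters are invented for illustration):

/* Toy refcounted swap mirroring the release-new / retain-old pairing
 * above; struct ctx and its counters are invented for illustration. */
#include <stdio.h>

struct ctx {
        int refs;
        const char *name;
};

static void ctx_retain(struct ctx *c)  { c->refs++; }
static void ctx_release(struct ctx *c) { c->refs--; }

/* Drop the freshly built context and keep using the old one,
 * leaving every context with a balanced reference count. */
static void keep_old_context(struct ctx **slot, struct ctx *old)
{
        if (*slot)
                ctx_release(*slot);
        *slot = old;
        if (old)
                ctx_retain(old);
}

int main(void)
{
        struct ctx old_ctx = { 1, "old" }, new_ctx = { 1, "new" };
        struct ctx *slot = &new_ctx;

        keep_old_context(&slot, &old_ctx);
        printf("kept %s (refs=%d), new refs=%d\n",
               slot->name, slot->refs, new_ctx.refs);
        return 0;
}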
+diff --git a/drivers/gpu/drm/amd/display/dc/calcs/Makefile b/drivers/gpu/drm/amd/display/dc/calcs/Makefile
+index 95f332ee3e7e..16614d73a5fc 100644
+--- a/drivers/gpu/drm/amd/display/dc/calcs/Makefile
++++ b/drivers/gpu/drm/amd/display/dc/calcs/Makefile
+@@ -32,6 +32,10 @@ endif
+
+ calcs_ccflags := -mhard-float -msse $(cc_stack_align)
+
++ifdef CONFIG_CC_IS_CLANG
++calcs_ccflags += -msse2
++endif
++
+ CFLAGS_dcn_calcs.o := $(calcs_ccflags)
+ CFLAGS_dcn_calc_auto.o := $(calcs_ccflags)
+ CFLAGS_dcn_calc_math.o := $(calcs_ccflags) -Wno-tautological-compare
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/Makefile b/drivers/gpu/drm/amd/display/dc/dcn20/Makefile
+index e9721a906592..f57a3b281408 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/Makefile
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/Makefile
+@@ -18,6 +18,10 @@ endif
+
+ CFLAGS_dcn20_resource.o := -mhard-float -msse $(cc_stack_align)
+
++ifdef CONFIG_CC_IS_CLANG
++CFLAGS_dcn20_resource.o += -msse2
++endif
++
+ AMD_DAL_DCN20 = $(addprefix $(AMDDALPATH)/dc/dcn20/,$(DCN20))
+
+ AMD_DISPLAY_FILES += $(AMD_DAL_DCN20)
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/Makefile b/drivers/gpu/drm/amd/display/dc/dml/Makefile
+index 0bb7a20675c4..132ade1a234e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/Makefile
++++ b/drivers/gpu/drm/amd/display/dc/dml/Makefile
+@@ -32,6 +32,10 @@ endif
+
+ dml_ccflags := -mhard-float -msse $(cc_stack_align)
+
++ifdef CONFIG_CC_IS_CLANG
++dml_ccflags += -msse2
++endif
++
+ CFLAGS_display_mode_lib.o := $(dml_ccflags)
+
+ ifdef CONFIG_DRM_AMD_DC_DCN2_0
+diff --git a/drivers/gpu/drm/amd/display/dc/dsc/Makefile b/drivers/gpu/drm/amd/display/dc/dsc/Makefile
+index e019cd9447e8..17db603f2d1f 100644
+--- a/drivers/gpu/drm/amd/display/dc/dsc/Makefile
++++ b/drivers/gpu/drm/amd/display/dc/dsc/Makefile
+@@ -9,6 +9,10 @@ endif
+
+ dsc_ccflags := -mhard-float -msse $(cc_stack_align)
+
++ifdef CONFIG_CC_IS_CLANG
++dsc_ccflags += -msse2
++endif
++
+ CFLAGS_rc_calc.o := $(dsc_ccflags)
+ CFLAGS_rc_calc_dpi.o := $(dsc_ccflags)
+ CFLAGS_codec_main_amd.o := $(dsc_ccflags)
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 0a00be19f7a0..e4d51ce20a6a 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -568,6 +568,7 @@
+ #define USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE_0B4A 0x0b4a
+ #define USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE 0x134a
+ #define USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE_094A 0x094a
++#define USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE_0941 0x0941
+ #define USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE_0641 0x0641
+
+ #define USB_VENDOR_ID_HUION 0x256c
+diff --git a/drivers/hid/hid-lg.c b/drivers/hid/hid-lg.c
+index 5008a3dc28f4..0dc7cdfc56f7 100644
+--- a/drivers/hid/hid-lg.c
++++ b/drivers/hid/hid-lg.c
+@@ -818,7 +818,7 @@ static int lg_probe(struct hid_device *hdev, const struct hid_device_id *id)
+
+ if (!buf) {
+ ret = -ENOMEM;
+- goto err_free;
++ goto err_stop;
+ }
+
+ ret = hid_hw_raw_request(hdev, buf[0], buf, sizeof(cbuf),
+@@ -850,9 +850,12 @@ static int lg_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ ret = lg4ff_init(hdev);
+
+ if (ret)
+- goto err_free;
++ goto err_stop;
+
+ return 0;
++
++err_stop:
++ hid_hw_stop(hdev);
+ err_free:
+ kfree(drv_data);
+ return ret;
+@@ -863,8 +866,7 @@ static void lg_remove(struct hid_device *hdev)
+ struct lg_drv_data *drv_data = hid_get_drvdata(hdev);
+ if (drv_data->quirks & LG_FF4)
+ lg4ff_deinit(hdev);
+- else
+- hid_hw_stop(hdev);
++ hid_hw_stop(hdev);
+ kfree(drv_data);
+ }
+
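The lg_probe() fix above re-points failure paths at a label that also stops the HID hardware, keeping teardown in strict reverse order of setup, while lg_remove() now stops the hardware unconditionally. Reduced to a standalone shape, the goto-unwind idiom looks like this (the hardware and buffer names are invented):

/* Minimal goto-unwind error path mirroring the err_stop/err_free
 * structure in lg_probe(); the hardware and buffer are invented. */
#include <stdio.h>
#include <stdlib.h>

static int start_hw(void) { puts("hw started"); return 0; }
static void stop_hw(void) { puts("hw stopped"); }

static int probe(int fail_alloc)
{
        void *buf;
        int ret;

        ret = start_hw();
        if (ret)
                goto err_free;          /* hw never came up */

        buf = fail_alloc ? NULL : malloc(32);
        if (!buf) {
                ret = -1;
                goto err_stop;          /* hw is up: must stop it */
        }

        free(buf);
        return 0;

err_stop:
        stop_hw();                      /* reverse order of setup */
err_free:
        /* the kfree(drv_data) analogue would go here */
        return ret;
}

int main(void)
{
        return probe(1) ? 1 : 0;
}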
+diff --git a/drivers/hid/hid-lg4ff.c b/drivers/hid/hid-lg4ff.c
+index cefba038520c..03f0220062ca 100644
+--- a/drivers/hid/hid-lg4ff.c
++++ b/drivers/hid/hid-lg4ff.c
+@@ -1477,7 +1477,6 @@ int lg4ff_deinit(struct hid_device *hid)
+ }
+ }
+ #endif
+- hid_hw_stop(hid);
+ drv_data->device_props = NULL;
+
+ kfree(entry);
+diff --git a/drivers/hid/hid-logitech-dj.c b/drivers/hid/hid-logitech-dj.c
+index cc47f948c1d0..7badbaa18878 100644
+--- a/drivers/hid/hid-logitech-dj.c
++++ b/drivers/hid/hid-logitech-dj.c
+@@ -1734,14 +1734,14 @@ static int logi_dj_probe(struct hid_device *hdev,
+ if (retval < 0) {
+ hid_err(hdev, "%s: logi_dj_recv_query_paired_devices error:%d\n",
+ __func__, retval);
+- goto logi_dj_recv_query_paired_devices_failed;
++ /*
++ * This can happen with a KVM, let the probe succeed,
++ * logi_dj_recv_queue_unknown_work will retry later.
++ */
+ }
+ }
+
+- return retval;
+-
+-logi_dj_recv_query_paired_devices_failed:
+- hid_hw_close(hdev);
++ return 0;
+
+ llopen_failed:
+ switch_to_dj_mode_fail:
+diff --git a/drivers/hid/hid-prodikeys.c b/drivers/hid/hid-prodikeys.c
+index 21544ebff855..5a3b3d974d84 100644
+--- a/drivers/hid/hid-prodikeys.c
++++ b/drivers/hid/hid-prodikeys.c
+@@ -551,10 +551,14 @@ static void pcmidi_setup_extra_keys(
+
+ static int pcmidi_set_operational(struct pcmidi_snd *pm)
+ {
++ int rc;
++
+ if (pm->ifnum != 1)
+ return 0; /* only set up ONCE for interface 1 */
+
+- pcmidi_get_output_report(pm);
++ rc = pcmidi_get_output_report(pm);
++ if (rc < 0)
++ return rc;
+ pcmidi_submit_output_report(pm, 0xc1);
+ return 0;
+ }
+@@ -683,7 +687,11 @@ static int pcmidi_snd_initialise(struct pcmidi_snd *pm)
+ spin_lock_init(&pm->rawmidi_in_lock);
+
+ init_sustain_timers(pm);
+- pcmidi_set_operational(pm);
++ err = pcmidi_set_operational(pm);
++ if (err < 0) {
++ pk_error("failed to find output report\n");
++ goto fail_register;
++ }
+
+ /* register it */
+ err = snd_card_register(card);
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index 166f41f3173b..c50bcd967d99 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -92,6 +92,7 @@ static const struct hid_device_id hid_quirks[] = {
+ { HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE_0B4A), HID_QUIRK_ALWAYS_POLL },
+ { HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE), HID_QUIRK_ALWAYS_POLL },
+ { HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE_094A), HID_QUIRK_ALWAYS_POLL },
++ { HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE_0941), HID_QUIRK_ALWAYS_POLL },
+ { HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE_0641), HID_QUIRK_ALWAYS_POLL },
+ { HID_USB_DEVICE(USB_VENDOR_ID_IDEACOM, USB_DEVICE_ID_IDEACOM_IDC6680), HID_QUIRK_MULTI_INPUT },
+ { HID_USB_DEVICE(USB_VENDOR_ID_INNOMEDIA, USB_DEVICE_ID_INNEX_GENESIS_ATARI), HID_QUIRK_MULTI_INPUT },
+diff --git a/drivers/hid/hid-sony.c b/drivers/hid/hid-sony.c
+index 49dd2d905c7f..73c0f7a95e2d 100644
+--- a/drivers/hid/hid-sony.c
++++ b/drivers/hid/hid-sony.c
+@@ -2811,7 +2811,6 @@ err_stop:
+ sony_cancel_work_sync(sc);
+ sony_remove_dev_list(sc);
+ sony_release_device_id(sc);
+- hid_hw_stop(hdev);
+ return ret;
+ }
+
+@@ -2876,6 +2875,7 @@ static int sony_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ */
+ if (!(hdev->claimed & HID_CLAIMED_INPUT)) {
+ hid_err(hdev, "failed to claim input\n");
++ hid_hw_stop(hdev);
+ return -ENODEV;
+ }
+
+diff --git a/drivers/hid/hidraw.c b/drivers/hid/hidraw.c
+index 006bd6f4f653..62ef47a730b0 100644
+--- a/drivers/hid/hidraw.c
++++ b/drivers/hid/hidraw.c
+@@ -370,7 +370,7 @@ static long hidraw_ioctl(struct file *file, unsigned int cmd,
+
+ mutex_lock(&minors_lock);
+ dev = hidraw_table[minor];
+- if (!dev) {
++ if (!dev || !dev->exist) {
+ ret = -ENODEV;
+ goto out;
+ }
+diff --git a/drivers/mtd/chips/cfi_cmdset_0002.c b/drivers/mtd/chips/cfi_cmdset_0002.c
+index f4da7bd552e9..7d29f596bc9e 100644
+--- a/drivers/mtd/chips/cfi_cmdset_0002.c
++++ b/drivers/mtd/chips/cfi_cmdset_0002.c
+@@ -1717,31 +1717,37 @@ static int __xipram do_write_oneword(struct map_info *map, struct flchip *chip,
+ continue;
+ }
+
++ /*
++ * We check "time_after" and "!chip_good" before checking
++ * "chip_good" to avoid the failure due to scheduling.
++ */
+ if (time_after(jiffies, timeo) &&
+- !chip_ready(map, chip, adr)) {
++ !chip_good(map, chip, adr, datum)) {
+ xip_enable(map, chip, adr);
+ printk(KERN_WARNING "MTD %s(): software timeout\n", __func__);
+ xip_disable(map, chip, adr);
++ ret = -EIO;
+ break;
+ }
+
+- if (chip_ready(map, chip, adr))
++ if (chip_good(map, chip, adr, datum))
+ break;
+
+ /* Latency issues. Drop the lock, wait a while and retry */
+ UDELAY(map, chip, adr, 1);
+ }
++
+ /* Did we succeed? */
+- if (!chip_good(map, chip, adr, datum)) {
++ if (ret) {
+ /* reset on all failures. */
+ cfi_check_err_status(map, chip, adr);
+ map_write(map, CMD(0xF0), chip->start);
+ /* FIXME - should have reset delay before continuing */
+
+- if (++retry_cnt <= MAX_RETRIES)
++ if (++retry_cnt <= MAX_RETRIES) {
++ ret = 0;
+ goto retry;
+-
+- ret = -EIO;
++ }
+ }
+ xip_enable(map, chip, adr);
+ op_done:
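The reordered checks above guard against a scheduler artifact: if the polling task is descheduled long enough for the jiffies deadline to pass while the flash actually finished, testing the deadline alone would report a bogus timeout. The idiom is to re-test the completion condition inside the timeout branch; a compact userspace rendering, with poll_done() standing in for chip_good():

/* Userspace sketch of the "re-check status on timeout" idiom from
 * do_write_oneword(); poll_done() is a stand-in for chip_good(). */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static bool poll_done(int tries)
{
        return tries >= 3;      /* pretend the flash finishes on try 3 */
}

static int wait_for_done(double timeout_s)
{
        struct timespec start, now;
        int tries = 0;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (;;) {
                double elapsed;

                clock_gettime(CLOCK_MONOTONIC, &now);
                elapsed = (now.tv_sec - start.tv_sec) +
                          (now.tv_nsec - start.tv_nsec) / 1e9;

                /* Deadline passed? Re-check the real condition first:
                 * we may have been descheduled after it became true. */
                if (elapsed > timeout_s && !poll_done(tries))
                        return -1;      /* genuine timeout */
                if (poll_done(tries))
                        return 0;       /* success, even if we ran late */
                tries++;
        }
}

int main(void)
{
        printf("result: %d\n", wait_for_done(0.5));
        return 0;
}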
+diff --git a/drivers/platform/x86/i2c-multi-instantiate.c b/drivers/platform/x86/i2c-multi-instantiate.c
+index 197d8a192721..70efa3d29825 100644
+--- a/drivers/platform/x86/i2c-multi-instantiate.c
++++ b/drivers/platform/x86/i2c-multi-instantiate.c
+@@ -92,7 +92,7 @@ static int i2c_multi_inst_probe(struct platform_device *pdev)
+ for (i = 0; i < multi->num_clients && inst_data[i].type; i++) {
+ memset(&board_info, 0, sizeof(board_info));
+ strlcpy(board_info.type, inst_data[i].type, I2C_NAME_SIZE);
+- snprintf(name, sizeof(name), "%s-%s.%d", match->id,
++ snprintf(name, sizeof(name), "%s-%s.%d", dev_name(dev),
+ inst_data[i].type, i);
+ board_info.dev_name = name;
+ switch (inst_data[i].flags & IRQ_RESOURCE_TYPE) {
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 475d6f28ca67..7f7a4d9137e5 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -1206,6 +1206,8 @@ void nft_trace_notify(struct nft_traceinfo *info);
+ #define MODULE_ALIAS_NFT_OBJ(type) \
+ MODULE_ALIAS("nft-obj-" __stringify(type))
+
++#if IS_ENABLED(CONFIG_NF_TABLES)
++
+ /*
+ * The gencursor defines two generations, the currently active and the
+ * next one. Objects contain a bitmask of 2 bits specifying the generations
+@@ -1279,6 +1281,8 @@ static inline void nft_set_elem_change_active(const struct net *net,
+ ext->genmask ^= nft_genmask_next(net);
+ }
+
++#endif /* IS_ENABLED(CONFIG_NF_TABLES) */
++
+ /*
+ * We use a free bit in the genmask field to indicate the element
+ * is busy, meaning it is currently being processed either by
+diff --git a/mm/z3fold.c b/mm/z3fold.c
+index 75b7962439ff..ed19d98c9dcd 100644
+--- a/mm/z3fold.c
++++ b/mm/z3fold.c
+@@ -41,7 +41,6 @@
+ #include <linux/workqueue.h>
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
+-#include <linux/wait.h>
+ #include <linux/zpool.h>
+ #include <linux/magic.h>
+
+@@ -146,8 +145,6 @@ struct z3fold_header {
+ * @release_wq: workqueue for safe page release
+ * @work: work_struct for safe page release
+ * @inode: inode for z3fold pseudo filesystem
+- * @destroying: bool to stop migration once we start destruction
+- * @isolated: int to count the number of pages currently in isolation
+ *
+ * This structure is allocated at pool creation time and maintains metadata
+ * pertaining to a particular z3fold pool.
+@@ -166,11 +163,8 @@ struct z3fold_pool {
+ const struct zpool_ops *zpool_ops;
+ struct workqueue_struct *compact_wq;
+ struct workqueue_struct *release_wq;
+- struct wait_queue_head isolate_wait;
+ struct work_struct work;
+ struct inode *inode;
+- bool destroying;
+- int isolated;
+ };
+
+ /*
+@@ -775,7 +769,6 @@ static struct z3fold_pool *z3fold_create_pool(const char *name, gfp_t gfp,
+ goto out_c;
+ spin_lock_init(&pool->lock);
+ spin_lock_init(&pool->stale_lock);
+- init_waitqueue_head(&pool->isolate_wait);
+ pool->unbuddied = __alloc_percpu(sizeof(struct list_head)*NCHUNKS, 2);
+ if (!pool->unbuddied)
+ goto out_pool;
+@@ -815,15 +808,6 @@ out:
+ return NULL;
+ }
+
+-static bool pool_isolated_are_drained(struct z3fold_pool *pool)
+-{
+- bool ret;
+-
+- spin_lock(&pool->lock);
+- ret = pool->isolated == 0;
+- spin_unlock(&pool->lock);
+- return ret;
+-}
+ /**
+ * z3fold_destroy_pool() - destroys an existing z3fold pool
+ * @pool: the z3fold pool to be destroyed
+@@ -833,22 +817,6 @@ static bool pool_isolated_are_drained(struct z3fold_pool *pool)
+ static void z3fold_destroy_pool(struct z3fold_pool *pool)
+ {
+ kmem_cache_destroy(pool->c_handle);
+- /*
+- * We set pool-> destroying under lock to ensure that
+- * z3fold_page_isolate() sees any changes to destroying. This way we
+- * avoid the need for any memory barriers.
+- */
+-
+- spin_lock(&pool->lock);
+- pool->destroying = true;
+- spin_unlock(&pool->lock);
+-
+- /*
+- * We need to ensure that no pages are being migrated while we destroy
+- * these workqueues, as migration can queue work on either of the
+- * workqueues.
+- */
+- wait_event(pool->isolate_wait, !pool_isolated_are_drained(pool));
+
+ /*
+ * We need to destroy pool->compact_wq before pool->release_wq,
+@@ -1339,28 +1307,6 @@ static u64 z3fold_get_pool_size(struct z3fold_pool *pool)
+ return atomic64_read(&pool->pages_nr);
+ }
+
+-/*
+- * z3fold_dec_isolated() expects to be called while pool->lock is held.
+- */
+-static void z3fold_dec_isolated(struct z3fold_pool *pool)
+-{
+- assert_spin_locked(&pool->lock);
+- VM_BUG_ON(pool->isolated <= 0);
+- pool->isolated--;
+-
+- /*
+- * If we have no more isolated pages, we have to see if
+- * z3fold_destroy_pool() is waiting for a signal.
+- */
+- if (pool->isolated == 0 && waitqueue_active(&pool->isolate_wait))
+- wake_up_all(&pool->isolate_wait);
+-}
+-
+-static void z3fold_inc_isolated(struct z3fold_pool *pool)
+-{
+- pool->isolated++;
+-}
+-
+ static bool z3fold_page_isolate(struct page *page, isolate_mode_t mode)
+ {
+ struct z3fold_header *zhdr;
+@@ -1387,34 +1333,6 @@ static bool z3fold_page_isolate(struct page *page, isolate_mode_t mode)
+ spin_lock(&pool->lock);
+ if (!list_empty(&page->lru))
+ list_del(&page->lru);
+- /*
+- * We need to check for destruction while holding pool->lock, as
+- * otherwise destruction could see 0 isolated pages, and
+- * proceed.
+- */
+- if (unlikely(pool->destroying)) {
+- spin_unlock(&pool->lock);
+- /*
+- * If this page isn't stale, somebody else holds a
+- * reference to it. Let't drop our refcount so that they
+- * can call the release logic.
+- */
+- if (unlikely(kref_put(&zhdr->refcount,
+- release_z3fold_page_locked))) {
+- /*
+- * If we get here we have kref problems, so we
+- * should freak out.
+- */
+- WARN(1, "Z3fold is experiencing kref problems\n");
+- z3fold_page_unlock(zhdr);
+- return false;
+- }
+- z3fold_page_unlock(zhdr);
+- return false;
+- }
+-
+-
+- z3fold_inc_isolated(pool);
+ spin_unlock(&pool->lock);
+ z3fold_page_unlock(zhdr);
+ return true;
+@@ -1483,10 +1401,6 @@ static int z3fold_page_migrate(struct address_space *mapping, struct page *newpa
+
+ queue_work_on(new_zhdr->cpu, pool->compact_wq, &new_zhdr->work);
+
+- spin_lock(&pool->lock);
+- z3fold_dec_isolated(pool);
+- spin_unlock(&pool->lock);
+-
+ page_mapcount_reset(page);
+ put_page(page);
+ return 0;
+@@ -1506,14 +1420,10 @@ static void z3fold_page_putback(struct page *page)
+ INIT_LIST_HEAD(&page->lru);
+ if (kref_put(&zhdr->refcount, release_z3fold_page_locked)) {
+ atomic64_dec(&pool->pages_nr);
+- spin_lock(&pool->lock);
+- z3fold_dec_isolated(pool);
+- spin_unlock(&pool->lock);
+ return;
+ }
+ spin_lock(&pool->lock);
+ list_add(&page->lru, &pool->lru);
+- z3fold_dec_isolated(pool);
+ spin_unlock(&pool->lock);
+ z3fold_page_unlock(zhdr);
+ }
+diff --git a/sound/firewire/dice/dice-alesis.c b/sound/firewire/dice/dice-alesis.c
+index 218292bdace6..f5b325263b67 100644
+--- a/sound/firewire/dice/dice-alesis.c
++++ b/sound/firewire/dice/dice-alesis.c
+@@ -15,7 +15,7 @@ alesis_io14_tx_pcm_chs[MAX_STREAMS][SND_DICE_RATE_MODE_COUNT] = {
+
+ static const unsigned int
+ alesis_io26_tx_pcm_chs[MAX_STREAMS][SND_DICE_RATE_MODE_COUNT] = {
+- {10, 10, 8}, /* Tx0 = Analog + S/PDIF. */
++ {10, 10, 4}, /* Tx0 = Analog + S/PDIF. */
+ {16, 8, 0}, /* Tx1 = ADAT1 + ADAT2. */
+ };
+
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 99fc0917339b..b0de3e3b33e5 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2517,8 +2517,7 @@ static const struct pci_device_id azx_ids[] = {
+ AZX_DCAPS_PM_RUNTIME },
+ /* AMD Raven */
+ { PCI_DEVICE(0x1022, 0x15e3),
+- .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB |
+- AZX_DCAPS_PM_RUNTIME },
++ .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_AMD_SB },
+ /* ATI HDMI */
+ { PCI_DEVICE(0x1002, 0x0002),
+ .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
+diff --git a/sound/pci/hda/patch_analog.c b/sound/pci/hda/patch_analog.c
+index e283966bdbb1..bc9dd8e6fd86 100644
+--- a/sound/pci/hda/patch_analog.c
++++ b/sound/pci/hda/patch_analog.c
+@@ -357,6 +357,7 @@ static const struct hda_fixup ad1986a_fixups[] = {
+
+ static const struct snd_pci_quirk ad1986a_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x30af, "HP B2800", AD1986A_FIXUP_LAPTOP_IMIC),
++ SND_PCI_QUIRK(0x1043, 0x1153, "ASUS M9V", AD1986A_FIXUP_LAPTOP_IMIC),
+ SND_PCI_QUIRK(0x1043, 0x1443, "ASUS Z99He", AD1986A_FIXUP_EAPD),
+ SND_PCI_QUIRK(0x1043, 0x1447, "ASUS A8JN", AD1986A_FIXUP_EAPD),
+ SND_PCI_QUIRK_MASK(0x1043, 0xff00, 0x8100, "ASUS P5", AD1986A_FIXUP_3STACK),
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 78858918cbc1..b6f7b13768a1 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1655,6 +1655,8 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
+ case 0x152a: /* Thesycon devices */
+ case 0x25ce: /* Mytek devices */
+ case 0x2ab6: /* T+A devices */
++ case 0x3842: /* EVGA */
++ case 0xc502: /* HiBy devices */
+ if (fp->dsd_raw)
+ return SNDRV_PCM_FMTBIT_DSD_U32_BE;
+ break;
+diff --git a/tools/objtool/Makefile b/tools/objtool/Makefile
+index 88158239622b..20f67fcf378d 100644
+--- a/tools/objtool/Makefile
++++ b/tools/objtool/Makefile
+@@ -35,7 +35,7 @@ INCLUDES := -I$(srctree)/tools/include \
+ -I$(srctree)/tools/arch/$(HOSTARCH)/include/uapi \
+ -I$(srctree)/tools/objtool/arch/$(ARCH)/include
+ WARNINGS := $(EXTRA_WARNINGS) -Wno-switch-default -Wno-switch-enum -Wno-packed
+-CFLAGS += -Werror $(WARNINGS) $(KBUILD_HOSTCFLAGS) -g $(INCLUDES) $(LIBELF_FLAGS)
++CFLAGS := -Werror $(WARNINGS) $(KBUILD_HOSTCFLAGS) -g $(INCLUDES) $(LIBELF_FLAGS)
+ LDFLAGS += $(LIBELF_LIBS) $(LIBSUBCMD) $(KBUILD_HOSTLDFLAGS)
+
+ # Allow old libelf to be used:
* [gentoo-commits] proj/linux-patches:5.3 commit in: /
@ 2019-10-05 20:41 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2019-10-05 20:41 UTC (permalink / raw
To: gentoo-commits
commit: 4a89d9fd835a72f2386fb6d0c22cfe8cc88a3c81
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Oct 5 20:41:09 2019 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Oct 5 20:41:09 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4a89d9fd
Linux patches 5.3.3 and 5.3.4
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 8 +
1002_linux-5.3.3.patch | 13 +
1003_linux-5.3.4.patch | 13640 +++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 13661 insertions(+)
diff --git a/0000_README b/0000_README
index 38a3537..74f4ffa 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,14 @@ Patch: 1001_linux-5.3.2.patch
From: http://www.kernel.org
Desc: Linux 5.3.2
+Patch: 1002_linux-5.3.3.patch
+From: http://www.kernel.org
+Desc: Linux 5.3.3
+
+Patch: 1003_linux-5.3.4.patch
+From: http://www.kernel.org
+Desc: Linux 5.3.4
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1002_linux-5.3.3.patch b/1002_linux-5.3.3.patch
new file mode 100644
index 0000000..d4e0fda
--- /dev/null
+++ b/1002_linux-5.3.3.patch
@@ -0,0 +1,13 @@
+diff --git a/Makefile b/Makefile
+index 13fa3a409ddd..a5f4e184b552 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 3
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Bobtail Squid
+
diff --git a/1003_linux-5.3.4.patch b/1003_linux-5.3.4.patch
new file mode 100644
index 0000000..7ec98e6
--- /dev/null
+++ b/1003_linux-5.3.4.patch
@@ -0,0 +1,13640 @@
+diff --git a/Documentation/devicetree/bindings/sound/allwinner,sun4i-a10-spdif.yaml b/Documentation/devicetree/bindings/sound/allwinner,sun4i-a10-spdif.yaml
+index e0284d8c3b63..38d4cede0860 100644
+--- a/Documentation/devicetree/bindings/sound/allwinner,sun4i-a10-spdif.yaml
++++ b/Documentation/devicetree/bindings/sound/allwinner,sun4i-a10-spdif.yaml
+@@ -70,7 +70,9 @@ allOf:
+ properties:
+ compatible:
+ contains:
+- const: allwinner,sun8i-h3-spdif
++ enum:
++ - allwinner,sun8i-h3-spdif
++ - allwinner,sun50i-h6-spdif
+
+ then:
+ properties:
+diff --git a/Documentation/sound/hd-audio/models.rst b/Documentation/sound/hd-audio/models.rst
+index 7d7c191102a7..11298f0ce44d 100644
+--- a/Documentation/sound/hd-audio/models.rst
++++ b/Documentation/sound/hd-audio/models.rst
+@@ -260,6 +260,9 @@ alc295-hp-x360
+ HP Spectre X360 fixups
+ alc-sense-combo
+ Headset button support for Chrome platform
++huawei-mbx-stereo
++ Enable initialization verbs for Huawei MBX stereo speakers;
++ might be risky, try this at your own risk
+
+ ALC66x/67x/892
+ ==============
+diff --git a/Makefile b/Makefile
+index a5f4e184b552..fa11c1d89acf 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 3
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Bobtail Squid
+
+diff --git a/arch/arm/boot/dts/am3517-evm.dts b/arch/arm/boot/dts/am3517-evm.dts
+index ebfe28c2f544..a1fd3e63e86e 100644
+--- a/arch/arm/boot/dts/am3517-evm.dts
++++ b/arch/arm/boot/dts/am3517-evm.dts
+@@ -124,10 +124,11 @@
+ };
+
+ lcd0: display@0 {
+- compatible = "panel-dpi";
++ /* This isn't the exact LCD, but the timings meet spec */
++ /* To make it work, set CONFIG_OMAP2_DSS_MIN_FCK_PER_PCK=4 */
++ compatible = "newhaven,nhd-4.3-480272ef-atxl";
+ label = "15";
+- status = "okay";
+- pinctrl-names = "default";
++ backlight = <&bl>;
+ enable-gpios = <&gpio6 16 GPIO_ACTIVE_HIGH>; /* gpio176, lcd INI */
+ vcc-supply = <&vdd_io_reg>;
+
+@@ -136,22 +137,6 @@
+ remote-endpoint = <&dpi_out>;
+ };
+ };
+-
+- panel-timing {
+- clock-frequency = <9000000>;
+- hactive = <480>;
+- vactive = <272>;
+- hfront-porch = <3>;
+- hback-porch = <2>;
+- hsync-len = <42>;
+- vback-porch = <3>;
+- vfront-porch = <4>;
+- vsync-len = <11>;
+- hsync-active = <0>;
+- vsync-active = <0>;
+- de-active = <1>;
+- pixelclk-active = <1>;
+- };
+ };
+
+ bl: backlight {
+diff --git a/arch/arm/boot/dts/exynos5420-peach-pit.dts b/arch/arm/boot/dts/exynos5420-peach-pit.dts
+index f78db6809cca..9eb48cabcca4 100644
+--- a/arch/arm/boot/dts/exynos5420-peach-pit.dts
++++ b/arch/arm/boot/dts/exynos5420-peach-pit.dts
+@@ -440,6 +440,7 @@
+ regulator-name = "vdd_ldo10";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
++ regulator-always-on;
+ regulator-state-mem {
+ regulator-off-in-suspend;
+ };
+diff --git a/arch/arm/boot/dts/exynos5800-peach-pi.dts b/arch/arm/boot/dts/exynos5800-peach-pi.dts
+index e0f470fe54c8..4398f2d1fe88 100644
+--- a/arch/arm/boot/dts/exynos5800-peach-pi.dts
++++ b/arch/arm/boot/dts/exynos5800-peach-pi.dts
+@@ -440,6 +440,7 @@
+ regulator-name = "vdd_ldo10";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
++ regulator-always-on;
+ regulator-state-mem {
+ regulator-off-in-suspend;
+ };
+diff --git a/arch/arm/boot/dts/imx7-colibri.dtsi b/arch/arm/boot/dts/imx7-colibri.dtsi
+index 895fbde4d433..c1ed83131b49 100644
+--- a/arch/arm/boot/dts/imx7-colibri.dtsi
++++ b/arch/arm/boot/dts/imx7-colibri.dtsi
+@@ -323,6 +323,7 @@
+ vmmc-supply = <&reg_module_3v3>;
+ vqmmc-supply = <&reg_DCDC3>;
+ non-removable;
++ sdhci-caps-mask = <0x80000000 0x0>;
+ };
+
+ &iomuxc {
+diff --git a/arch/arm/boot/dts/imx7d-cl-som-imx7.dts b/arch/arm/boot/dts/imx7d-cl-som-imx7.dts
+index e61567437d73..62d5e9a4a781 100644
+--- a/arch/arm/boot/dts/imx7d-cl-som-imx7.dts
++++ b/arch/arm/boot/dts/imx7d-cl-som-imx7.dts
+@@ -44,7 +44,7 @@
+ <&clks IMX7D_ENET1_TIME_ROOT_CLK>;
+ assigned-clock-parents = <&clks IMX7D_PLL_ENET_MAIN_100M_CLK>;
+ assigned-clock-rates = <0>, <100000000>;
+- phy-mode = "rgmii";
++ phy-mode = "rgmii-id";
+ phy-handle = <ðphy0>;
+ fsl,magic-packet;
+ status = "okay";
+@@ -70,7 +70,7 @@
+ <&clks IMX7D_ENET2_TIME_ROOT_CLK>;
+ assigned-clock-parents = <&clks IMX7D_PLL_ENET_MAIN_100M_CLK>;
+ assigned-clock-rates = <0>, <100000000>;
+- phy-mode = "rgmii";
++ phy-mode = "rgmii-id";
+ phy-handle = <ðphy1>;
+ fsl,magic-packet;
+ status = "okay";
+diff --git a/arch/arm/boot/dts/logicpd-torpedo-baseboard.dtsi b/arch/arm/boot/dts/logicpd-torpedo-baseboard.dtsi
+index 642e809e757a..449cc7616da6 100644
+--- a/arch/arm/boot/dts/logicpd-torpedo-baseboard.dtsi
++++ b/arch/arm/boot/dts/logicpd-torpedo-baseboard.dtsi
+@@ -108,7 +108,6 @@
+ &dss {
+ status = "ok";
+ vdds_dsi-supply = <&vpll2>;
+- vdda_video-supply = <&video_reg>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&dss_dpi_pins1>;
+ port {
+@@ -124,44 +123,20 @@
+ display0 = &lcd0;
+ };
+
+- video_reg: video_reg {
+- pinctrl-names = "default";
+- pinctrl-0 = <&panel_pwr_pins>;
+- compatible = "regulator-fixed";
+- regulator-name = "fixed-supply";
+- regulator-min-microvolt = <3300000>;
+- regulator-max-microvolt = <3300000>;
+- gpio = <&gpio5 27 GPIO_ACTIVE_HIGH>; /* gpio155, lcd INI */
+- };
+-
+ lcd0: display {
+- compatible = "panel-dpi";
++ /* This isn't the exact LCD, but the timings meet spec */
++ /* To make it work, set CONFIG_OMAP2_DSS_MIN_FCK_PER_PCK=4 */
++ compatible = "newhaven,nhd-4.3-480272ef-atxl";
+ label = "15";
+- status = "okay";
+- /* default-on; */
+ pinctrl-names = "default";
+-
++ pinctrl-0 = <&panel_pwr_pins>;
++ backlight = <&bl>;
++ enable-gpios = <&gpio5 27 GPIO_ACTIVE_HIGH>;
+ port {
+ lcd_in: endpoint {
+ remote-endpoint = <&dpi_out>;
+ };
+ };
+-
+- panel-timing {
+- clock-frequency = <9000000>;
+- hactive = <480>;
+- vactive = <272>;
+- hfront-porch = <3>;
+- hback-porch = <2>;
+- hsync-len = <42>;
+- vback-porch = <3>;
+- vfront-porch = <4>;
+- vsync-len = <11>;
+- hsync-active = <0>;
+- vsync-active = <0>;
+- de-active = <1>;
+- pixelclk-active = <1>;
+- };
+ };
+
+ bl: backlight {
+diff --git a/arch/arm/configs/omap2plus_defconfig b/arch/arm/configs/omap2plus_defconfig
+index c7bf9c493646..64eb896907bf 100644
+--- a/arch/arm/configs/omap2plus_defconfig
++++ b/arch/arm/configs/omap2plus_defconfig
+@@ -363,6 +363,7 @@ CONFIG_DRM_OMAP_PANEL_TPO_TD028TTEC1=m
+ CONFIG_DRM_OMAP_PANEL_TPO_TD043MTEA1=m
+ CONFIG_DRM_OMAP_PANEL_NEC_NL8048HL11=m
+ CONFIG_DRM_TILCDC=m
++CONFIG_DRM_PANEL_SIMPLE=m
+ CONFIG_FB=y
+ CONFIG_FIRMWARE_EDID=y
+ CONFIG_FB_MODE_HELPERS=y
+diff --git a/arch/arm/mach-at91/.gitignore b/arch/arm/mach-at91/.gitignore
+new file mode 100644
+index 000000000000..2ecd6f51c8a9
+--- /dev/null
++++ b/arch/arm/mach-at91/.gitignore
+@@ -0,0 +1 @@
++pm_data-offsets.h
+diff --git a/arch/arm/mach-at91/Makefile b/arch/arm/mach-at91/Makefile
+index 31b61f0e1c07..de64301dcff2 100644
+--- a/arch/arm/mach-at91/Makefile
++++ b/arch/arm/mach-at91/Makefile
+@@ -19,9 +19,10 @@ ifeq ($(CONFIG_PM_DEBUG),y)
+ CFLAGS_pm.o += -DDEBUG
+ endif
+
+-include/generated/at91_pm_data-offsets.h: arch/arm/mach-at91/pm_data-offsets.s FORCE
++$(obj)/pm_data-offsets.h: $(obj)/pm_data-offsets.s FORCE
+ $(call filechk,offsets,__PM_DATA_OFFSETS_H__)
+
+-arch/arm/mach-at91/pm_suspend.o: include/generated/at91_pm_data-offsets.h
++$(obj)/pm_suspend.o: $(obj)/pm_data-offsets.h
+
+ targets += pm_data-offsets.s
++clean-files += pm_data-offsets.h
+diff --git a/arch/arm/mach-at91/pm_suspend.S b/arch/arm/mach-at91/pm_suspend.S
+index c751f047b116..ed57c879d4e1 100644
+--- a/arch/arm/mach-at91/pm_suspend.S
++++ b/arch/arm/mach-at91/pm_suspend.S
+@@ -10,7 +10,7 @@
+ #include <linux/linkage.h>
+ #include <linux/clk/at91_pmc.h>
+ #include "pm.h"
+-#include "generated/at91_pm_data-offsets.h"
++#include "pm_data-offsets.h"
+
+ #define SRAMC_SELF_FRESH_ACTIVE 0x01
+ #define SRAMC_SELF_FRESH_EXIT 0x00
+diff --git a/arch/arm/mach-ep93xx/edb93xx.c b/arch/arm/mach-ep93xx/edb93xx.c
+index 1f0da76a39de..7b7280c21ee0 100644
+--- a/arch/arm/mach-ep93xx/edb93xx.c
++++ b/arch/arm/mach-ep93xx/edb93xx.c
+@@ -103,7 +103,7 @@ static struct spi_board_info edb93xx_spi_board_info[] __initdata = {
+ };
+
+ static struct gpiod_lookup_table edb93xx_spi_cs_gpio_table = {
+- .dev_id = "ep93xx-spi.0",
++ .dev_id = "spi0",
+ .table = {
+ GPIO_LOOKUP("A", 6, "cs", GPIO_ACTIVE_LOW),
+ { },
+diff --git a/arch/arm/mach-ep93xx/simone.c b/arch/arm/mach-ep93xx/simone.c
+index e2658e22bba1..8a53b74dc4b2 100644
+--- a/arch/arm/mach-ep93xx/simone.c
++++ b/arch/arm/mach-ep93xx/simone.c
+@@ -73,7 +73,7 @@ static struct spi_board_info simone_spi_devices[] __initdata = {
+ * v1.3 parts will still work, since the signal on SFRMOUT is automatic.
+ */
+ static struct gpiod_lookup_table simone_spi_cs_gpio_table = {
+- .dev_id = "ep93xx-spi.0",
++ .dev_id = "spi0",
+ .table = {
+ GPIO_LOOKUP("A", 1, "cs", GPIO_ACTIVE_LOW),
+ { },
+diff --git a/arch/arm/mach-ep93xx/ts72xx.c b/arch/arm/mach-ep93xx/ts72xx.c
+index 582e06e104fd..e0e1b11032f1 100644
+--- a/arch/arm/mach-ep93xx/ts72xx.c
++++ b/arch/arm/mach-ep93xx/ts72xx.c
+@@ -267,7 +267,7 @@ static struct spi_board_info bk3_spi_board_info[] __initdata = {
+ * goes through CPLD
+ */
+ static struct gpiod_lookup_table bk3_spi_cs_gpio_table = {
+- .dev_id = "ep93xx-spi.0",
++ .dev_id = "spi0",
+ .table = {
+ GPIO_LOOKUP("F", 3, "cs", GPIO_ACTIVE_LOW),
+ { },
+@@ -316,7 +316,7 @@ static struct spi_board_info ts72xx_spi_devices[] __initdata = {
+ };
+
+ static struct gpiod_lookup_table ts72xx_spi_cs_gpio_table = {
+- .dev_id = "ep93xx-spi.0",
++ .dev_id = "spi0",
+ .table = {
+ /* DIO_17 */
+ GPIO_LOOKUP("F", 2, "cs", GPIO_ACTIVE_LOW),
+diff --git a/arch/arm/mach-ep93xx/vision_ep9307.c b/arch/arm/mach-ep93xx/vision_ep9307.c
+index a88a1d807b32..cbcba3136d74 100644
+--- a/arch/arm/mach-ep93xx/vision_ep9307.c
++++ b/arch/arm/mach-ep93xx/vision_ep9307.c
+@@ -242,7 +242,7 @@ static struct spi_board_info vision_spi_board_info[] __initdata = {
+ };
+
+ static struct gpiod_lookup_table vision_spi_cs_gpio_table = {
+- .dev_id = "ep93xx-spi.0",
++ .dev_id = "spi0",
+ .table = {
+ GPIO_LOOKUP_IDX("A", 6, "cs", 0, GPIO_ACTIVE_LOW),
+ GPIO_LOOKUP_IDX("A", 7, "cs", 1, GPIO_ACTIVE_LOW),
+diff --git a/arch/arm/mach-omap2/.gitignore b/arch/arm/mach-omap2/.gitignore
+new file mode 100644
+index 000000000000..79a8d6ea7152
+--- /dev/null
++++ b/arch/arm/mach-omap2/.gitignore
+@@ -0,0 +1 @@
++pm-asm-offsets.h
+diff --git a/arch/arm/mach-omap2/Makefile b/arch/arm/mach-omap2/Makefile
+index 600650551621..21c6d4bca3c0 100644
+--- a/arch/arm/mach-omap2/Makefile
++++ b/arch/arm/mach-omap2/Makefile
+@@ -223,9 +223,10 @@ obj-y += omap_phy_internal.o
+
+ obj-$(CONFIG_MACH_OMAP2_TUSB6010) += usb-tusb6010.o
+
+-include/generated/ti-pm-asm-offsets.h: arch/arm/mach-omap2/pm-asm-offsets.s FORCE
++$(obj)/pm-asm-offsets.h: $(obj)/pm-asm-offsets.s FORCE
+ $(call filechk,offsets,__TI_PM_ASM_OFFSETS_H__)
+
+-$(obj)/sleep33xx.o $(obj)/sleep43xx.o: include/generated/ti-pm-asm-offsets.h
++$(obj)/sleep33xx.o $(obj)/sleep43xx.o: $(obj)/pm-asm-offsets.h
+
+ targets += pm-asm-offsets.s
++clean-files += pm-asm-offsets.h
+diff --git a/arch/arm/mach-omap2/sleep33xx.S b/arch/arm/mach-omap2/sleep33xx.S
+index 68fee339d3f1..dc221249bc22 100644
+--- a/arch/arm/mach-omap2/sleep33xx.S
++++ b/arch/arm/mach-omap2/sleep33xx.S
+@@ -6,7 +6,6 @@
+ * Dave Gerlach, Vaibhav Bedia
+ */
+
+-#include <generated/ti-pm-asm-offsets.h>
+ #include <linux/linkage.h>
+ #include <linux/platform_data/pm33xx.h>
+ #include <linux/ti-emif-sram.h>
+@@ -15,6 +14,7 @@
+
+ #include "iomap.h"
+ #include "cm33xx.h"
++#include "pm-asm-offsets.h"
+
+ #define AM33XX_CM_CLKCTRL_MODULESTATE_DISABLED 0x00030000
+ #define AM33XX_CM_CLKCTRL_MODULEMODE_DISABLE 0x0003
+diff --git a/arch/arm/mach-omap2/sleep43xx.S b/arch/arm/mach-omap2/sleep43xx.S
+index c1f4e4852644..90d2907a2eb2 100644
+--- a/arch/arm/mach-omap2/sleep43xx.S
++++ b/arch/arm/mach-omap2/sleep43xx.S
+@@ -6,7 +6,6 @@
+ * Dave Gerlach, Vaibhav Bedia
+ */
+
+-#include <generated/ti-pm-asm-offsets.h>
+ #include <linux/linkage.h>
+ #include <linux/ti-emif-sram.h>
+ #include <linux/platform_data/pm33xx.h>
+@@ -19,6 +18,7 @@
+ #include "iomap.h"
+ #include "omap-secure.h"
+ #include "omap44xx.h"
++#include "pm-asm-offsets.h"
+ #include "prm33xx.h"
+ #include "prcm43xx.h"
+
+diff --git a/arch/arm/mach-zynq/platsmp.c b/arch/arm/mach-zynq/platsmp.c
+index a7cfe07156f4..e65ee8180c35 100644
+--- a/arch/arm/mach-zynq/platsmp.c
++++ b/arch/arm/mach-zynq/platsmp.c
+@@ -57,7 +57,7 @@ int zynq_cpun_start(u32 address, int cpu)
+ * 0x4: Jump by mov instruction
+ * 0x8: Jumping address
+ */
+- memcpy((__force void *)zero, &zynq_secondary_trampoline,
++ memcpy_toio(zero, &zynq_secondary_trampoline,
+ trampoline_size);
+ writel(address, zero + trampoline_size);
+
+diff --git a/arch/arm/mm/copypage-xscale.c b/arch/arm/mm/copypage-xscale.c
+index 61d834157bc0..382e1c2855e8 100644
+--- a/arch/arm/mm/copypage-xscale.c
++++ b/arch/arm/mm/copypage-xscale.c
+@@ -42,6 +42,7 @@ static void mc_copy_user_page(void *from, void *to)
+ * when prefetching destination as well. (NP)
+ */
+ asm volatile ("\
++.arch xscale \n\
+ pld [%0, #0] \n\
+ pld [%0, #32] \n\
+ pld [%1, #0] \n\
+@@ -106,8 +107,9 @@ void
+ xscale_mc_clear_user_highpage(struct page *page, unsigned long vaddr)
+ {
+ void *ptr, *kaddr = kmap_atomic(page);
+- asm volatile(
+- "mov r1, %2 \n\
++ asm volatile("\
++.arch xscale \n\
++ mov r1, %2 \n\
+ mov r2, #0 \n\
+ mov r3, #0 \n\
+ 1: mov ip, %0 \n\
+diff --git a/arch/arm/plat-samsung/watchdog-reset.c b/arch/arm/plat-samsung/watchdog-reset.c
+index ce42cc640a61..71d85ff323f7 100644
+--- a/arch/arm/plat-samsung/watchdog-reset.c
++++ b/arch/arm/plat-samsung/watchdog-reset.c
+@@ -62,6 +62,7 @@ void samsung_wdt_reset(void)
+ #ifdef CONFIG_OF
+ static const struct of_device_id s3c2410_wdt_match[] = {
+ { .compatible = "samsung,s3c2410-wdt" },
++ { .compatible = "samsung,s3c6410-wdt" },
+ {},
+ };
+
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dts b/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dts
+index 4e916e1f71f7..1c2a9ca491c0 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dts
+@@ -66,8 +66,8 @@
+ gpios = <&gpio_ao GPIOAO_9 GPIO_ACTIVE_HIGH>;
+ gpios-states = <0>;
+
+- states = <3300000 0
+- 1800000 1>;
++ states = <3300000 0>,
++ <1800000 1>;
+ };
+
+ flash_1v8: regulator-flash_1v8 {
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-nexbox-a95x.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-nexbox-a95x.dts
+index b636912a2715..afcf8a9f667b 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-nexbox-a95x.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-nexbox-a95x.dts
+@@ -75,8 +75,8 @@
+ gpios-states = <1>;
+
+ /* Based on P200 schematics, signal CARD_1.8V/3.3V_CTR */
+- states = <1800000 0
+- 3300000 1>;
++ states = <1800000 0>,
++ <3300000 1>;
+ };
+
+ vddio_boot: regulator-vddio_boot {
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts
+index 9972b1515da6..6039adda12ee 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts
+@@ -77,8 +77,8 @@
+ gpios = <&gpio_ao GPIOAO_3 GPIO_ACTIVE_HIGH>;
+ gpios-states = <0>;
+
+- states = <3300000 0
+- 1800000 1>;
++ states = <3300000 0>,
++ <1800000 1>;
+ };
+
+ vcc1v8: regulator-vcc1v8 {
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-p20x.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxbb-p20x.dtsi
+index e8f925871edf..89f7b41b0e9e 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-p20x.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-p20x.dtsi
+@@ -46,8 +46,8 @@
+ gpios-states = <1>;
+
+ /* Based on P200 schematics, signal CARD_1.8V/3.3V_CTR */
+- states = <1800000 0
+- 3300000 1>;
++ states = <1800000 0>,
++ <3300000 1>;
+
+ regulator-settling-time-up-us = <10000>;
+ regulator-settling-time-down-us = <150000>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-hwacom-amazetv.dts b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-hwacom-amazetv.dts
+index 796baea7a0bf..c8d74e61dec1 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-hwacom-amazetv.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-hwacom-amazetv.dts
+@@ -38,8 +38,8 @@
+ gpios-states = <1>;
+
+ /* Based on P200 schematics, signal CARD_1.8V/3.3V_CTR */
+- states = <1800000 0
+- 3300000 1>;
++ states = <1800000 0>,
++ <3300000 1>;
+ };
+
+ vddio_boot: regulator-vddio_boot {
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-nexbox-a95x.dts b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-nexbox-a95x.dts
+index 26907ac82930..c433a031841f 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-nexbox-a95x.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-nexbox-a95x.dts
+@@ -38,8 +38,8 @@
+ gpios-states = <1>;
+
+ /* Based on P200 schematics, signal CARD_1.8V/3.3V_CTR */
+- states = <1800000 0
+- 3300000 1>;
++ states = <1800000 0>,
++ <3300000 1>;
+ };
+
+ vddio_boot: regulator-vddio_boot {
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq.dtsi b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+index 52aae341d0da..d1f4eb197af2 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+@@ -169,15 +169,14 @@
+ opp-1300000000 {
+ opp-hz = /bits/ 64 <1300000000>;
+ opp-microvolt = <1000000>;
+- opp-supported-hw = <0xc>, <0x7>;
++ opp-supported-hw = <0xc>, <0x4>;
+ clock-latency-ns = <150000>;
+ };
+
+ opp-1500000000 {
+ opp-hz = /bits/ 64 <1500000000>;
+ opp-microvolt = <1000000>;
+- /* Consumer only but rely on speed grading */
+- opp-supported-hw = <0x8>, <0x7>;
++ opp-supported-hw = <0x8>, <0x3>;
+ clock-latency-ns = <150000>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/qcs404-evb.dtsi b/arch/arm64/boot/dts/qcom/qcs404-evb.dtsi
+index 11c0a7137823..db6df76e97a1 100644
+--- a/arch/arm64/boot/dts/qcom/qcs404-evb.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcs404-evb.dtsi
+@@ -61,7 +61,9 @@
+ protected-clocks = <GCC_BIMC_CDSP_CLK>,
+ <GCC_CDSP_CFG_AHB_CLK>,
+ <GCC_CDSP_BIMC_CLK_SRC>,
+- <GCC_CDSP_TBU_CLK>;
++ <GCC_CDSP_TBU_CLK>,
++ <141>, /* GCC_WCSS_Q6_AHB_CLK */
++ <142>; /* GCC_WCSS_Q6_AXIM_CLK */
+ };
+
+ &pms405_spmi_regulators {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328.dtsi b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+index e9fefd8a7e02..f0f2c555033b 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+@@ -801,6 +801,7 @@
+ <&cru SCLK_SDMMC_DRV>, <&cru SCLK_SDMMC_SAMPLE>;
+ clock-names = "biu", "ciu", "ciu-drive", "ciu-sample";
+ fifo-depth = <0x100>;
++ max-frequency = <150000000>;
+ status = "disabled";
+ };
+
+@@ -812,6 +813,7 @@
+ <&cru SCLK_SDIO_DRV>, <&cru SCLK_SDIO_SAMPLE>;
+ clock-names = "biu", "ciu", "ciu-drive", "ciu-sample";
+ fifo-depth = <0x100>;
++ max-frequency = <150000000>;
+ status = "disabled";
+ };
+
+@@ -823,6 +825,7 @@
+ <&cru SCLK_EMMC_DRV>, <&cru SCLK_EMMC_SAMPLE>;
+ clock-names = "biu", "ciu", "ciu-drive", "ciu-sample";
+ fifo-depth = <0x100>;
++ max-frequency = <150000000>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/include/asm/atomic_ll_sc.h b/arch/arm64/include/asm/atomic_ll_sc.h
+index c8c850bc3dfb..6dd011e0b434 100644
+--- a/arch/arm64/include/asm/atomic_ll_sc.h
++++ b/arch/arm64/include/asm/atomic_ll_sc.h
+@@ -26,7 +26,7 @@
+ * (the optimize attribute silently ignores these options).
+ */
+
+-#define ATOMIC_OP(op, asm_op) \
++#define ATOMIC_OP(op, asm_op, constraint) \
+ __LL_SC_INLINE void \
+ __LL_SC_PREFIX(arch_atomic_##op(int i, atomic_t *v)) \
+ { \
+@@ -40,11 +40,11 @@ __LL_SC_PREFIX(arch_atomic_##op(int i, atomic_t *v)) \
+ " stxr %w1, %w0, %2\n" \
+ " cbnz %w1, 1b" \
+ : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
+- : "Ir" (i)); \
++ : #constraint "r" (i)); \
+ } \
+ __LL_SC_EXPORT(arch_atomic_##op);
+
+-#define ATOMIC_OP_RETURN(name, mb, acq, rel, cl, op, asm_op) \
++#define ATOMIC_OP_RETURN(name, mb, acq, rel, cl, op, asm_op, constraint)\
+ __LL_SC_INLINE int \
+ __LL_SC_PREFIX(arch_atomic_##op##_return##name(int i, atomic_t *v)) \
+ { \
+@@ -59,14 +59,14 @@ __LL_SC_PREFIX(arch_atomic_##op##_return##name(int i, atomic_t *v)) \
+ " cbnz %w1, 1b\n" \
+ " " #mb \
+ : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
+- : "Ir" (i) \
++ : #constraint "r" (i) \
+ : cl); \
+ \
+ return result; \
+ } \
+ __LL_SC_EXPORT(arch_atomic_##op##_return##name);
+
+-#define ATOMIC_FETCH_OP(name, mb, acq, rel, cl, op, asm_op) \
++#define ATOMIC_FETCH_OP(name, mb, acq, rel, cl, op, asm_op, constraint) \
+ __LL_SC_INLINE int \
+ __LL_SC_PREFIX(arch_atomic_fetch_##op##name(int i, atomic_t *v)) \
+ { \
+@@ -81,7 +81,7 @@ __LL_SC_PREFIX(arch_atomic_fetch_##op##name(int i, atomic_t *v)) \
+ " cbnz %w2, 1b\n" \
+ " " #mb \
+ : "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter) \
+- : "Ir" (i) \
++ : #constraint "r" (i) \
+ : cl); \
+ \
+ return result; \
+@@ -99,8 +99,8 @@ __LL_SC_EXPORT(arch_atomic_fetch_##op##name);
+ ATOMIC_FETCH_OP (_acquire, , a, , "memory", __VA_ARGS__)\
+ ATOMIC_FETCH_OP (_release, , , l, "memory", __VA_ARGS__)
+
+-ATOMIC_OPS(add, add)
+-ATOMIC_OPS(sub, sub)
++ATOMIC_OPS(add, add, I)
++ATOMIC_OPS(sub, sub, J)
+
+ #undef ATOMIC_OPS
+ #define ATOMIC_OPS(...) \
+@@ -110,17 +110,17 @@ ATOMIC_OPS(sub, sub)
+ ATOMIC_FETCH_OP (_acquire, , a, , "memory", __VA_ARGS__)\
+ ATOMIC_FETCH_OP (_release, , , l, "memory", __VA_ARGS__)
+
+-ATOMIC_OPS(and, and)
+-ATOMIC_OPS(andnot, bic)
+-ATOMIC_OPS(or, orr)
+-ATOMIC_OPS(xor, eor)
++ATOMIC_OPS(and, and, )
++ATOMIC_OPS(andnot, bic, )
++ATOMIC_OPS(or, orr, )
++ATOMIC_OPS(xor, eor, )
+
+ #undef ATOMIC_OPS
+ #undef ATOMIC_FETCH_OP
+ #undef ATOMIC_OP_RETURN
+ #undef ATOMIC_OP
+
+-#define ATOMIC64_OP(op, asm_op) \
++#define ATOMIC64_OP(op, asm_op, constraint) \
+ __LL_SC_INLINE void \
+ __LL_SC_PREFIX(arch_atomic64_##op(s64 i, atomic64_t *v)) \
+ { \
+@@ -134,11 +134,11 @@ __LL_SC_PREFIX(arch_atomic64_##op(s64 i, atomic64_t *v)) \
+ " stxr %w1, %0, %2\n" \
+ " cbnz %w1, 1b" \
+ : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
+- : "Ir" (i)); \
++ : #constraint "r" (i)); \
+ } \
+ __LL_SC_EXPORT(arch_atomic64_##op);
+
+-#define ATOMIC64_OP_RETURN(name, mb, acq, rel, cl, op, asm_op) \
++#define ATOMIC64_OP_RETURN(name, mb, acq, rel, cl, op, asm_op, constraint)\
+ __LL_SC_INLINE s64 \
+ __LL_SC_PREFIX(arch_atomic64_##op##_return##name(s64 i, atomic64_t *v))\
+ { \
+@@ -153,14 +153,14 @@ __LL_SC_PREFIX(arch_atomic64_##op##_return##name(s64 i, atomic64_t *v))\
+ " cbnz %w1, 1b\n" \
+ " " #mb \
+ : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
+- : "Ir" (i) \
++ : #constraint "r" (i) \
+ : cl); \
+ \
+ return result; \
+ } \
+ __LL_SC_EXPORT(arch_atomic64_##op##_return##name);
+
+-#define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op) \
++#define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op, constraint)\
+ __LL_SC_INLINE s64 \
+ __LL_SC_PREFIX(arch_atomic64_fetch_##op##name(s64 i, atomic64_t *v)) \
+ { \
+@@ -175,7 +175,7 @@ __LL_SC_PREFIX(arch_atomic64_fetch_##op##name(s64 i, atomic64_t *v)) \
+ " cbnz %w2, 1b\n" \
+ " " #mb \
+ : "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter) \
+- : "Ir" (i) \
++ : #constraint "r" (i) \
+ : cl); \
+ \
+ return result; \
+@@ -193,8 +193,8 @@ __LL_SC_EXPORT(arch_atomic64_fetch_##op##name);
+ ATOMIC64_FETCH_OP (_acquire,, a, , "memory", __VA_ARGS__) \
+ ATOMIC64_FETCH_OP (_release,, , l, "memory", __VA_ARGS__)
+
+-ATOMIC64_OPS(add, add)
+-ATOMIC64_OPS(sub, sub)
++ATOMIC64_OPS(add, add, I)
++ATOMIC64_OPS(sub, sub, J)
+
+ #undef ATOMIC64_OPS
+ #define ATOMIC64_OPS(...) \
+@@ -204,10 +204,10 @@ ATOMIC64_OPS(sub, sub)
+ ATOMIC64_FETCH_OP (_acquire,, a, , "memory", __VA_ARGS__) \
+ ATOMIC64_FETCH_OP (_release,, , l, "memory", __VA_ARGS__)
+
+-ATOMIC64_OPS(and, and)
+-ATOMIC64_OPS(andnot, bic)
+-ATOMIC64_OPS(or, orr)
+-ATOMIC64_OPS(xor, eor)
++ATOMIC64_OPS(and, and, L)
++ATOMIC64_OPS(andnot, bic, )
++ATOMIC64_OPS(or, orr, L)
++ATOMIC64_OPS(xor, eor, L)
+
+ #undef ATOMIC64_OPS
+ #undef ATOMIC64_FETCH_OP
+@@ -237,7 +237,7 @@ __LL_SC_PREFIX(arch_atomic64_dec_if_positive(atomic64_t *v))
+ }
+ __LL_SC_EXPORT(arch_atomic64_dec_if_positive);
+
+-#define __CMPXCHG_CASE(w, sfx, name, sz, mb, acq, rel, cl) \
++#define __CMPXCHG_CASE(w, sfx, name, sz, mb, acq, rel, cl, constraint) \
+ __LL_SC_INLINE u##sz \
+ __LL_SC_PREFIX(__cmpxchg_case_##name##sz(volatile void *ptr, \
+ unsigned long old, \
+@@ -265,29 +265,34 @@ __LL_SC_PREFIX(__cmpxchg_case_##name##sz(volatile void *ptr, \
+ "2:" \
+ : [tmp] "=&r" (tmp), [oldval] "=&r" (oldval), \
+ [v] "+Q" (*(u##sz *)ptr) \
+- : [old] "Kr" (old), [new] "r" (new) \
++ : [old] #constraint "r" (old), [new] "r" (new) \
+ : cl); \
+ \
+ return oldval; \
+ } \
+ __LL_SC_EXPORT(__cmpxchg_case_##name##sz);
+
+-__CMPXCHG_CASE(w, b, , 8, , , , )
+-__CMPXCHG_CASE(w, h, , 16, , , , )
+-__CMPXCHG_CASE(w, , , 32, , , , )
+-__CMPXCHG_CASE( , , , 64, , , , )
+-__CMPXCHG_CASE(w, b, acq_, 8, , a, , "memory")
+-__CMPXCHG_CASE(w, h, acq_, 16, , a, , "memory")
+-__CMPXCHG_CASE(w, , acq_, 32, , a, , "memory")
+-__CMPXCHG_CASE( , , acq_, 64, , a, , "memory")
+-__CMPXCHG_CASE(w, b, rel_, 8, , , l, "memory")
+-__CMPXCHG_CASE(w, h, rel_, 16, , , l, "memory")
+-__CMPXCHG_CASE(w, , rel_, 32, , , l, "memory")
+-__CMPXCHG_CASE( , , rel_, 64, , , l, "memory")
+-__CMPXCHG_CASE(w, b, mb_, 8, dmb ish, , l, "memory")
+-__CMPXCHG_CASE(w, h, mb_, 16, dmb ish, , l, "memory")
+-__CMPXCHG_CASE(w, , mb_, 32, dmb ish, , l, "memory")
+-__CMPXCHG_CASE( , , mb_, 64, dmb ish, , l, "memory")
++/*
++ * Earlier versions of GCC (no later than 8.1.0) appear to incorrectly
++ * handle the 'K' constraint for the value 4294967295 - thus we use no
++ * constraint for 32 bit operations.
++ */
++__CMPXCHG_CASE(w, b, , 8, , , , , )
++__CMPXCHG_CASE(w, h, , 16, , , , , )
++__CMPXCHG_CASE(w, , , 32, , , , , )
++__CMPXCHG_CASE( , , , 64, , , , , L)
++__CMPXCHG_CASE(w, b, acq_, 8, , a, , "memory", )
++__CMPXCHG_CASE(w, h, acq_, 16, , a, , "memory", )
++__CMPXCHG_CASE(w, , acq_, 32, , a, , "memory", )
++__CMPXCHG_CASE( , , acq_, 64, , a, , "memory", L)
++__CMPXCHG_CASE(w, b, rel_, 8, , , l, "memory", )
++__CMPXCHG_CASE(w, h, rel_, 16, , , l, "memory", )
++__CMPXCHG_CASE(w, , rel_, 32, , , l, "memory", )
++__CMPXCHG_CASE( , , rel_, 64, , , l, "memory", L)
++__CMPXCHG_CASE(w, b, mb_, 8, dmb ish, , l, "memory", )
++__CMPXCHG_CASE(w, h, mb_, 16, dmb ish, , l, "memory", )
++__CMPXCHG_CASE(w, , mb_, 32, dmb ish, , l, "memory", )
++__CMPXCHG_CASE( , , mb_, 64, dmb ish, , l, "memory", L)
+
+ #undef __CMPXCHG_CASE
+
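
The hunk above threads an extra `constraint` parameter through the atomic macros and stringizes it into the inline-asm constraint string, so ops whose instructions accept an immediate ('I' for add, 'J' for the negated sub range) keep that form while the rest fall back to a plain register. A minimal host-side sketch of just the preprocessor mechanic (purely illustrative, not the kernel's atomics):

    #include <stdio.h>

    /* #constraint stringizes the macro argument; adjacent string
     * literals then concatenate, so a one-letter argument yields
     * "I" "r" == "Ir" while an empty argument yields "" "r" == "r" */
    #define CONSTRAINT_STRING(constraint) #constraint "r"

    int main(void)
    {
        printf("%s\n", CONSTRAINT_STRING(I)); /* "Ir": add immediates ok */
        printf("%s\n", CONSTRAINT_STRING(J)); /* "Jr": negated add range */
        printf("%s\n", CONSTRAINT_STRING());  /* "r": register only */
        return 0;
    }
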
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index e7d46631cc42..b1454d117cd2 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -51,14 +51,6 @@
+ #define MIDR_CPU_MODEL_MASK (MIDR_IMPLEMENTOR_MASK | MIDR_PARTNUM_MASK | \
+ MIDR_ARCHITECTURE_MASK)
+
+-#define MIDR_IS_CPU_MODEL_RANGE(midr, model, rv_min, rv_max) \
+-({ \
+- u32 _model = (midr) & MIDR_CPU_MODEL_MASK; \
+- u32 rv = (midr) & (MIDR_REVISION_MASK | MIDR_VARIANT_MASK); \
+- \
+- _model == (model) && rv >= (rv_min) && rv <= (rv_max); \
+- })
+-
+ #define ARM_CPU_IMP_ARM 0x41
+ #define ARM_CPU_IMP_APM 0x50
+ #define ARM_CPU_IMP_CAVIUM 0x43
+@@ -159,10 +151,19 @@ struct midr_range {
+ #define MIDR_REV(m, v, r) MIDR_RANGE(m, v, r, v, r)
+ #define MIDR_ALL_VERSIONS(m) MIDR_RANGE(m, 0, 0, 0xf, 0xf)
+
++static inline bool midr_is_cpu_model_range(u32 midr, u32 model, u32 rv_min,
++ u32 rv_max)
++{
++ u32 _model = midr & MIDR_CPU_MODEL_MASK;
++ u32 rv = midr & (MIDR_REVISION_MASK | MIDR_VARIANT_MASK);
++
++ return _model == model && rv >= rv_min && rv <= rv_max;
++}
++
+ static inline bool is_midr_in_range(u32 midr, struct midr_range const *range)
+ {
+- return MIDR_IS_CPU_MODEL_RANGE(midr, range->model,
+- range->rv_min, range->rv_max);
++ return midr_is_cpu_model_range(midr, range->model,
++ range->rv_min, range->rv_max);
+ }
+
+ static inline bool
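
Converting MIDR_IS_CPU_MODEL_RANGE() from a statement-expression macro into a static inline gives real parameter types and a single evaluation of `midr`. A compile-only sketch of the resulting shape, with placeholder mask values standing in for the real cputype.h definitions:

    #include <stdbool.h>
    #include <stdint.h>

    /* placeholder mask values for illustration only */
    #define MODEL_MASK   0xff00fff0u
    #define REV_VAR_MASK 0x00f0000fu

    static inline bool midr_in_range(uint32_t midr, uint32_t model,
                                     uint32_t rv_min, uint32_t rv_max)
    {
        uint32_t m  = midr & MODEL_MASK;    /* implementer/part bits */
        uint32_t rv = midr & REV_VAR_MASK;  /* variant/revision bits */

        /* midr is evaluated once and every argument is type-checked */
        return m == model && rv >= rv_min && rv <= rv_max;
    }
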
+diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
+index ed57b760f38c..a17393ff6677 100644
+--- a/arch/arm64/include/asm/exception.h
++++ b/arch/arm64/include/asm/exception.h
+@@ -30,4 +30,6 @@ static inline u32 disr_to_esr(u64 disr)
+ return esr;
+ }
+
++asmlinkage void enter_from_user_mode(void);
++
+ #endif /* __ASM_EXCEPTION_H */
+diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
+index 8af7a85f76bd..bc3949064725 100644
+--- a/arch/arm64/include/asm/tlbflush.h
++++ b/arch/arm64/include/asm/tlbflush.h
+@@ -251,6 +251,7 @@ static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
+ dsb(ishst);
+ __tlbi(vaae1is, addr);
+ dsb(ish);
++ isb();
+ }
+ #endif
+
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index b1fdc486aed8..9323bcc40a58 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -894,7 +894,7 @@ static bool has_no_hw_prefetch(const struct arm64_cpu_capabilities *entry, int _
+ u32 midr = read_cpuid_id();
+
+ /* Cavium ThunderX pass 1.x and 2.x */
+- return MIDR_IS_CPU_MODEL_RANGE(midr, MIDR_THUNDERX,
++ return midr_is_cpu_model_range(midr, MIDR_THUNDERX,
+ MIDR_CPU_VAR_REV(0, 0),
+ MIDR_CPU_VAR_REV(1, MIDR_REVISION_MASK));
+ }
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index 320a30dbe35e..84a822748c84 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -30,9 +30,9 @@
+ * Context tracking subsystem. Used to instrument transitions
+ * between user and kernel mode.
+ */
+- .macro ct_user_exit
++ .macro ct_user_exit_irqoff
+ #ifdef CONFIG_CONTEXT_TRACKING
+- bl context_tracking_user_exit
++ bl enter_from_user_mode
+ #endif
+ .endm
+
+@@ -792,8 +792,8 @@ el0_cp15:
+ /*
+ * Trapped CP15 (MRC, MCR, MRRC, MCRR) instructions
+ */
++ ct_user_exit_irqoff
+ enable_daif
+- ct_user_exit
+ mov x0, x25
+ mov x1, sp
+ bl do_cp15instr
+@@ -805,8 +805,8 @@ el0_da:
+ * Data abort handling
+ */
+ mrs x26, far_el1
++ ct_user_exit_irqoff
+ enable_daif
+- ct_user_exit
+ clear_address_tag x0, x26
+ mov x1, x25
+ mov x2, sp
+@@ -818,11 +818,11 @@ el0_ia:
+ */
+ mrs x26, far_el1
+ gic_prio_kentry_setup tmp=x0
++ ct_user_exit_irqoff
+ enable_da_f
+ #ifdef CONFIG_TRACE_IRQFLAGS
+ bl trace_hardirqs_off
+ #endif
+- ct_user_exit
+ mov x0, x26
+ mov x1, x25
+ mov x2, sp
+@@ -832,8 +832,8 @@ el0_fpsimd_acc:
+ /*
+ * Floating Point or Advanced SIMD access
+ */
++ ct_user_exit_irqoff
+ enable_daif
+- ct_user_exit
+ mov x0, x25
+ mov x1, sp
+ bl do_fpsimd_acc
+@@ -842,8 +842,8 @@ el0_sve_acc:
+ /*
+ * Scalable Vector Extension access
+ */
++ ct_user_exit_irqoff
+ enable_daif
+- ct_user_exit
+ mov x0, x25
+ mov x1, sp
+ bl do_sve_acc
+@@ -852,8 +852,8 @@ el0_fpsimd_exc:
+ /*
+ * Floating Point, Advanced SIMD or SVE exception
+ */
++ ct_user_exit_irqoff
+ enable_daif
+- ct_user_exit
+ mov x0, x25
+ mov x1, sp
+ bl do_fpsimd_exc
+@@ -868,11 +868,11 @@ el0_sp_pc:
+ * Stack or PC alignment exception handling
+ */
+ gic_prio_kentry_setup tmp=x0
++ ct_user_exit_irqoff
+ enable_da_f
+ #ifdef CONFIG_TRACE_IRQFLAGS
+ bl trace_hardirqs_off
+ #endif
+- ct_user_exit
+ mov x0, x26
+ mov x1, x25
+ mov x2, sp
+@@ -882,8 +882,8 @@ el0_undef:
+ /*
+ * Undefined instruction
+ */
++ ct_user_exit_irqoff
+ enable_daif
+- ct_user_exit
+ mov x0, sp
+ bl do_undefinstr
+ b ret_to_user
+@@ -891,8 +891,8 @@ el0_sys:
+ /*
+ * System instructions, for trapped cache maintenance instructions
+ */
++ ct_user_exit_irqoff
+ enable_daif
+- ct_user_exit
+ mov x0, x25
+ mov x1, sp
+ bl do_sysinstr
+@@ -902,17 +902,18 @@ el0_dbg:
+ * Debug exception handling
+ */
+ tbnz x24, #0, el0_inv // EL0 only
++ mrs x24, far_el1
+ gic_prio_kentry_setup tmp=x3
+- mrs x0, far_el1
++ ct_user_exit_irqoff
++ mov x0, x24
+ mov x1, x25
+ mov x2, sp
+ bl do_debug_exception
+ enable_da_f
+- ct_user_exit
+ b ret_to_user
+ el0_inv:
++ ct_user_exit_irqoff
+ enable_daif
+- ct_user_exit
+ mov x0, sp
+ mov x1, #BAD_SYNC
+ mov x2, x25
+@@ -925,13 +926,13 @@ el0_irq:
+ kernel_entry 0
+ el0_irq_naked:
+ gic_prio_irq_setup pmr=x20, tmp=x0
++ ct_user_exit_irqoff
+ enable_da_f
+
+ #ifdef CONFIG_TRACE_IRQFLAGS
+ bl trace_hardirqs_off
+ #endif
+
+- ct_user_exit
+ #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+ tbz x22, #55, 1f
+ bl do_el0_irq_bp_hardening
+@@ -958,13 +959,14 @@ ENDPROC(el1_error)
+ el0_error:
+ kernel_entry 0
+ el0_error_naked:
+- mrs x1, esr_el1
++ mrs x25, esr_el1
+ gic_prio_kentry_setup tmp=x2
++ ct_user_exit_irqoff
+ enable_dbg
+ mov x0, sp
++ mov x1, x25
+ bl do_serror
+ enable_da_f
+- ct_user_exit
+ b ret_to_user
+ ENDPROC(el0_error)
+
+diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
+new file mode 100644
+index 000000000000..25a2a9b479c2
+--- /dev/null
++++ b/arch/arm64/kernel/image-vars.h
+@@ -0,0 +1,51 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Linker script variables to be set after section resolution, as
++ * ld.lld does not like variables assigned before SECTIONS is processed.
++ */
++#ifndef __ARM64_KERNEL_IMAGE_VARS_H
++#define __ARM64_KERNEL_IMAGE_VARS_H
++
++#ifndef LINKER_SCRIPT
++#error This file should only be included in vmlinux.lds.S
++#endif
++
++#ifdef CONFIG_EFI
++
++__efistub_stext_offset = stext - _text;
++
++/*
++ * The EFI stub has its own symbol namespace prefixed by __efistub_, to
++ * isolate it from the kernel proper. The following symbols are legally
++ * accessed by the stub, so provide some aliases to make them accessible.
++ * Only include data symbols here, or text symbols of functions that are
++ * guaranteed to be safe when executed at another offset than they were
++ * linked at. The routines below are all implemented in assembler in a
++ * position independent manner
++ */
++__efistub_memcmp = __pi_memcmp;
++__efistub_memchr = __pi_memchr;
++__efistub_memcpy = __pi_memcpy;
++__efistub_memmove = __pi_memmove;
++__efistub_memset = __pi_memset;
++__efistub_strlen = __pi_strlen;
++__efistub_strnlen = __pi_strnlen;
++__efistub_strcmp = __pi_strcmp;
++__efistub_strncmp = __pi_strncmp;
++__efistub_strrchr = __pi_strrchr;
++__efistub___flush_dcache_area = __pi___flush_dcache_area;
++
++#ifdef CONFIG_KASAN
++__efistub___memcpy = __pi_memcpy;
++__efistub___memmove = __pi_memmove;
++__efistub___memset = __pi_memset;
++#endif
++
++__efistub__text = _text;
++__efistub__end = _end;
++__efistub__edata = _edata;
++__efistub_screen_info = screen_info;
++
++#endif
++
++#endif /* __ARM64_KERNEL_IMAGE_VARS_H */
+diff --git a/arch/arm64/kernel/image.h b/arch/arm64/kernel/image.h
+index 2b85c0d6fa3d..c7d38c660372 100644
+--- a/arch/arm64/kernel/image.h
++++ b/arch/arm64/kernel/image.h
+@@ -65,46 +65,4 @@
+ DEFINE_IMAGE_LE64(_kernel_offset_le, TEXT_OFFSET); \
+ DEFINE_IMAGE_LE64(_kernel_flags_le, __HEAD_FLAGS);
+
+-#ifdef CONFIG_EFI
+-
+-/*
+- * Use ABSOLUTE() to avoid ld.lld treating this as a relative symbol:
+- * https://github.com/ClangBuiltLinux/linux/issues/561
+- */
+-__efistub_stext_offset = ABSOLUTE(stext - _text);
+-
+-/*
+- * The EFI stub has its own symbol namespace prefixed by __efistub_, to
+- * isolate it from the kernel proper. The following symbols are legally
+- * accessed by the stub, so provide some aliases to make them accessible.
+- * Only include data symbols here, or text symbols of functions that are
+- * guaranteed to be safe when executed at another offset than they were
+- * linked at. The routines below are all implemented in assembler in a
+- * position independent manner
+- */
+-__efistub_memcmp = __pi_memcmp;
+-__efistub_memchr = __pi_memchr;
+-__efistub_memcpy = __pi_memcpy;
+-__efistub_memmove = __pi_memmove;
+-__efistub_memset = __pi_memset;
+-__efistub_strlen = __pi_strlen;
+-__efistub_strnlen = __pi_strnlen;
+-__efistub_strcmp = __pi_strcmp;
+-__efistub_strncmp = __pi_strncmp;
+-__efistub_strrchr = __pi_strrchr;
+-__efistub___flush_dcache_area = __pi___flush_dcache_area;
+-
+-#ifdef CONFIG_KASAN
+-__efistub___memcpy = __pi_memcpy;
+-__efistub___memmove = __pi_memmove;
+-__efistub___memset = __pi_memset;
+-#endif
+-
+-__efistub__text = _text;
+-__efistub__end = _end;
+-__efistub__edata = _edata;
+-__efistub_screen_info = screen_info;
+-
+-#endif
+-
+ #endif /* __ARM64_KERNEL_IMAGE_H */
+diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
+index 32893b3d9164..742a636861e7 100644
+--- a/arch/arm64/kernel/traps.c
++++ b/arch/arm64/kernel/traps.c
+@@ -7,9 +7,11 @@
+ */
+
+ #include <linux/bug.h>
++#include <linux/context_tracking.h>
+ #include <linux/signal.h>
+ #include <linux/personality.h>
+ #include <linux/kallsyms.h>
++#include <linux/kprobes.h>
+ #include <linux/spinlock.h>
+ #include <linux/uaccess.h>
+ #include <linux/hardirq.h>
+@@ -900,6 +902,13 @@ asmlinkage void do_serror(struct pt_regs *regs, unsigned int esr)
+ nmi_exit();
+ }
+
++asmlinkage void enter_from_user_mode(void)
++{
++ CT_WARN_ON(ct_state() != CONTEXT_USER);
++ user_exit_irqoff();
++}
++NOKPROBE_SYMBOL(enter_from_user_mode);
++
+ void __pte_error(const char *file, int line, unsigned long val)
+ {
+ pr_err("%s:%d: bad pte %016lx.\n", file, line, val);
+diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
+index 7fa008374907..803b24d2464a 100644
+--- a/arch/arm64/kernel/vmlinux.lds.S
++++ b/arch/arm64/kernel/vmlinux.lds.S
+@@ -245,6 +245,8 @@ SECTIONS
+ HEAD_SYMBOLS
+ }
+
++#include "image-vars.h"
++
+ /*
+ * The HYP init code and ID map text can't be longer than a page each,
+ * and should not cross a page boundary.
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index f3c795278def..b1ee6cb4b17f 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -570,8 +570,12 @@ void free_initmem(void)
+ #ifdef CONFIG_BLK_DEV_INITRD
+ void __init free_initrd_mem(unsigned long start, unsigned long end)
+ {
++ unsigned long aligned_start, aligned_end;
++
++ aligned_start = __virt_to_phys(start) & PAGE_MASK;
++ aligned_end = PAGE_ALIGN(__virt_to_phys(end));
++ memblock_free(aligned_start, aligned_end - aligned_start);
+ free_reserved_area((void *)start, (void *)end, 0, "initrd");
+- memblock_free(__virt_to_phys(start), end - start);
+ }
+ #endif
+
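
The free_initrd_mem() fix rounds the physical range outward to whole pages before calling memblock_free(), so the freed span matches the page-granular reservation made at boot. A standalone sketch of that rounding arithmetic, assuming 4 KiB pages:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE  4096UL
    #define PAGE_MASK  (~(PAGE_SIZE - 1))
    #define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & PAGE_MASK)

    int main(void)
    {
        uintptr_t start = 0x80201234, end = 0x80433000;
        uintptr_t aligned_start = start & PAGE_MASK;  /* round down */
        uintptr_t aligned_end   = PAGE_ALIGN(end);    /* round up   */

        /* the freed length now covers whole pages only */
        printf("%#lx + %#lx\n", (unsigned long)aligned_start,
               (unsigned long)(aligned_end - aligned_start));
        return 0;
    }
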
+diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
+index 7dbf2be470f6..28a8f7b87ff0 100644
+--- a/arch/arm64/mm/proc.S
++++ b/arch/arm64/mm/proc.S
+@@ -286,6 +286,15 @@ skip_pgd:
+ msr sctlr_el1, x18
+ isb
+
++ /*
++ * Invalidate the local I-cache so that any instructions fetched
++ * speculatively from the PoC are discarded, since they may have
++ * been dynamically patched at the PoU.
++ */
++ ic iallu
++ dsb nsh
++ isb
++
+ /* Set the flag to zero to indicate that we're all done */
+ str wzr, [flag_ptr]
+ ret
+diff --git a/arch/ia64/kernel/module.c b/arch/ia64/kernel/module.c
+index 326448f9df16..1a42ba885188 100644
+--- a/arch/ia64/kernel/module.c
++++ b/arch/ia64/kernel/module.c
+@@ -914,10 +914,14 @@ module_finalize (const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs, struct module *mo
+ void
+ module_arch_cleanup (struct module *mod)
+ {
+- if (mod->arch.init_unw_table)
++ if (mod->arch.init_unw_table) {
+ unw_remove_unwind_table(mod->arch.init_unw_table);
+- if (mod->arch.core_unw_table)
++ mod->arch.init_unw_table = NULL;
++ }
++ if (mod->arch.core_unw_table) {
+ unw_remove_unwind_table(mod->arch.core_unw_table);
++ mod->arch.core_unw_table = NULL;
++ }
+ }
+
+ void *dereference_module_function_descriptor(struct module *mod, void *ptr)
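
Clearing each unwind-table pointer after removal makes module_arch_cleanup() idempotent, so reaching it twice (for example once on a failed init and again at unload) cannot unregister the same table twice. The general pattern, sketched with a hypothetical resource type:

    struct table;                        /* opaque resource type */
    void table_remove(struct table *t);  /* hypothetical teardown */

    struct ctx {
        struct table *init_tbl;
    };

    static void ctx_cleanup(struct ctx *c)
    {
        if (c->init_tbl) {
            table_remove(c->init_tbl);
            c->init_tbl = NULL;  /* a repeat call is now a no-op */
        }
    }
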
+diff --git a/arch/m68k/include/asm/atarihw.h b/arch/m68k/include/asm/atarihw.h
+index 533008262b69..5e5601c382b8 100644
+--- a/arch/m68k/include/asm/atarihw.h
++++ b/arch/m68k/include/asm/atarihw.h
+@@ -22,7 +22,6 @@
+
+ #include <linux/types.h>
+ #include <asm/bootinfo-atari.h>
+-#include <asm/raw_io.h>
+ #include <asm/kmap.h>
+
+ extern u_long atari_mch_cookie;
+@@ -132,14 +131,6 @@ extern struct atari_hw_present atari_hw_present;
+ */
+
+
+-#define atari_readb raw_inb
+-#define atari_writeb raw_outb
+-
+-#define atari_inb_p raw_inb
+-#define atari_outb_p raw_outb
+-
+-
+-
+ #include <linux/mm.h>
+ #include <asm/cacheflush.h>
+
+diff --git a/arch/m68k/include/asm/io_mm.h b/arch/m68k/include/asm/io_mm.h
+index 6c03ca5bc436..819f611dccf2 100644
+--- a/arch/m68k/include/asm/io_mm.h
++++ b/arch/m68k/include/asm/io_mm.h
+@@ -29,7 +29,11 @@
+ #include <asm-generic/iomap.h>
+
+ #ifdef CONFIG_ATARI
+-#include <asm/atarihw.h>
++#define atari_readb raw_inb
++#define atari_writeb raw_outb
++
++#define atari_inb_p raw_inb
++#define atari_outb_p raw_outb
+ #endif
+
+
+diff --git a/arch/m68k/include/asm/macintosh.h b/arch/m68k/include/asm/macintosh.h
+index d9a08bed4b12..f653b60f2afc 100644
+--- a/arch/m68k/include/asm/macintosh.h
++++ b/arch/m68k/include/asm/macintosh.h
+@@ -4,6 +4,7 @@
+
+ #include <linux/seq_file.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+
+ #include <asm/bootinfo-mac.h>
+
+diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
+index c345b79414a9..403f7e193833 100644
+--- a/arch/powerpc/Makefile
++++ b/arch/powerpc/Makefile
+@@ -39,13 +39,11 @@ endif
+ uname := $(shell uname -m)
+ KBUILD_DEFCONFIG := $(if $(filter ppc%,$(uname)),$(uname),ppc64)_defconfig
+
+-ifdef CONFIG_PPC64
+ new_nm := $(shell if $(NM) --help 2>&1 | grep -- '--synthetic' > /dev/null; then echo y; else echo n; fi)
+
+ ifeq ($(new_nm),y)
+ NM := $(NM) --synthetic
+ endif
+-endif
+
+ # BITS is used as extension for files which are available in a 32 bit
+ # and a 64 bit version to simplify shared Makefiles.
+diff --git a/arch/powerpc/platforms/powernv/opal-imc.c b/arch/powerpc/platforms/powernv/opal-imc.c
+index 186109bdd41b..e04b20625cb9 100644
+--- a/arch/powerpc/platforms/powernv/opal-imc.c
++++ b/arch/powerpc/platforms/powernv/opal-imc.c
+@@ -53,9 +53,9 @@ static void export_imc_mode_and_cmd(struct device_node *node,
+ struct imc_pmu *pmu_ptr)
+ {
+ static u64 loc, *imc_mode_addr, *imc_cmd_addr;
+- int chip = 0, nid;
+ char mode[16], cmd[16];
+ u32 cb_offset;
++ struct imc_mem_info *ptr = pmu_ptr->mem_info;
+
+ imc_debugfs_parent = debugfs_create_dir("imc", powerpc_debugfs_root);
+
+@@ -69,20 +69,20 @@ static void export_imc_mode_and_cmd(struct device_node *node,
+ if (of_property_read_u32(node, "cb_offset", &cb_offset))
+ cb_offset = IMC_CNTL_BLK_OFFSET;
+
+- for_each_node(nid) {
+- loc = (u64)(pmu_ptr->mem_info[chip].vbase) + cb_offset;
++ while (ptr->vbase != NULL) {
++ loc = (u64)(ptr->vbase) + cb_offset;
+ imc_mode_addr = (u64 *)(loc + IMC_CNTL_BLK_MODE_OFFSET);
+- sprintf(mode, "imc_mode_%d", nid);
++ sprintf(mode, "imc_mode_%d", (u32)(ptr->id));
+ if (!imc_debugfs_create_x64(mode, 0600, imc_debugfs_parent,
+ imc_mode_addr))
+ goto err;
+
+ imc_cmd_addr = (u64 *)(loc + IMC_CNTL_BLK_CMD_OFFSET);
+- sprintf(cmd, "imc_cmd_%d", nid);
++ sprintf(cmd, "imc_cmd_%d", (u32)(ptr->id));
+ if (!imc_debugfs_create_x64(cmd, 0600, imc_debugfs_parent,
+ imc_cmd_addr))
+ goto err;
+- chip++;
++ ptr++;
+ }
+ return;
+
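
The loop now walks the pmu's mem_info array directly and stops at the NULL-vbase sentinel, instead of assuming one populated entry per online node. A sketch of sentinel-terminated iteration under that assumption:

    #include <stdio.h>

    struct mem_info {
        void *vbase;
        unsigned int id;
    };

    /* assumes the array ends with an entry whose vbase is NULL */
    static void walk(const struct mem_info *ptr)
    {
        for (; ptr->vbase != NULL; ptr++)
            printf("imc_mode_%u\n", ptr->id);
    }
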
+diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c
+index d00f84add5f4..6d2dbb5089d5 100644
+--- a/arch/s390/crypto/aes_s390.c
++++ b/arch/s390/crypto/aes_s390.c
+@@ -586,6 +586,9 @@ static int xts_aes_encrypt(struct blkcipher_desc *desc,
+ struct s390_xts_ctx *xts_ctx = crypto_blkcipher_ctx(desc->tfm);
+ struct blkcipher_walk walk;
+
++ if (!nbytes)
++ return -EINVAL;
++
+ if (unlikely(!xts_ctx->fc))
+ return xts_fallback_encrypt(desc, dst, src, nbytes);
+
+@@ -600,6 +603,9 @@ static int xts_aes_decrypt(struct blkcipher_desc *desc,
+ struct s390_xts_ctx *xts_ctx = crypto_blkcipher_ctx(desc->tfm);
+ struct blkcipher_walk walk;
+
++ if (!nbytes)
++ return -EINVAL;
++
+ if (unlikely(!xts_ctx->fc))
+ return xts_fallback_decrypt(desc, dst, src, nbytes);
+
+diff --git a/arch/s390/include/asm/string.h b/arch/s390/include/asm/string.h
+index 70d87db54e62..4c0690fc5167 100644
+--- a/arch/s390/include/asm/string.h
++++ b/arch/s390/include/asm/string.h
+@@ -71,11 +71,16 @@ extern void *__memmove(void *dest, const void *src, size_t n);
+ #define memcpy(dst, src, len) __memcpy(dst, src, len)
+ #define memmove(dst, src, len) __memmove(dst, src, len)
+ #define memset(s, c, n) __memset(s, c, n)
++#define strlen(s) __strlen(s)
++
++#define __no_sanitize_prefix_strfunc(x) __##x
+
+ #ifndef __NO_FORTIFY
+ #define __NO_FORTIFY /* FORTIFY_SOURCE uses __builtin_memcpy, etc. */
+ #endif
+
++#else
++#define __no_sanitize_prefix_strfunc(x) x
+ #endif /* defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__) */
+
+ void *__memset16(uint16_t *s, uint16_t v, size_t count);
+@@ -163,8 +168,8 @@ static inline char *strcpy(char *dst, const char *src)
+ }
+ #endif
+
+-#ifdef __HAVE_ARCH_STRLEN
+-static inline size_t strlen(const char *s)
++#if defined(__HAVE_ARCH_STRLEN) || (defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__))
++static inline size_t __no_sanitize_prefix_strfunc(strlen)(const char *s)
+ {
+ register unsigned long r0 asm("0") = 0;
+ const char *tmp = s;
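
Under KASAN without __SANITIZE_ADDRESS__, string.h now #defines strlen(s) to __strlen(s); the __no_sanitize_prefix_strfunc() wrapper then makes the single inline definition provide whichever name is in effect. A minimal sketch of the token-pasting trick, where SANITIZER_BUILD stands in for the real config test:

    /* SANITIZER_BUILD is a stand-in for the KASAN config check */
    #ifdef SANITIZER_BUILD
    #define strlen(s) __strlen(s)
    #define PREFIX(x) __##x      /* definition below becomes __strlen */
    #else
    #define PREFIX(x) x          /* definition below stays strlen */
    #endif

    static inline unsigned long PREFIX(strlen)(const char *s)
    {
        const char *p = s;

        while (*p)
            p++;
        return (unsigned long)(p - s);
    }
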
+diff --git a/arch/x86/include/asm/intel-family.h b/arch/x86/include/asm/intel-family.h
+index fe7c205233f1..9ae1c0f05fd2 100644
+--- a/arch/x86/include/asm/intel-family.h
++++ b/arch/x86/include/asm/intel-family.h
+@@ -73,6 +73,9 @@
+ #define INTEL_FAM6_ICELAKE_MOBILE 0x7E
+ #define INTEL_FAM6_ICELAKE_NNPI 0x9D
+
++#define INTEL_FAM6_TIGERLAKE_L 0x8C
++#define INTEL_FAM6_TIGERLAKE 0x8D
++
+ /* "Small Core" Processors (Atom) */
+
+ #define INTEL_FAM6_ATOM_BONNELL 0x1C /* Diamondville, Pineview */
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index bdc16b0aa7c6..dd0ca154a958 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1583,6 +1583,13 @@ bool kvm_intr_is_single_vcpu(struct kvm *kvm, struct kvm_lapic_irq *irq,
+ void kvm_set_msi_irq(struct kvm *kvm, struct kvm_kernel_irq_routing_entry *e,
+ struct kvm_lapic_irq *irq);
+
++static inline bool kvm_irq_is_postable(struct kvm_lapic_irq *irq)
++{
++ /* We can only post Fixed and LowPrio IRQs */
++ return (irq->delivery_mode == dest_Fixed ||
++ irq->delivery_mode == dest_LowestPrio);
++}
++
+ static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
+ {
+ if (kvm_x86_ops->vcpu_blocking)
+diff --git a/arch/x86/kernel/amd_nb.c b/arch/x86/kernel/amd_nb.c
+index d63e63b7d1d9..251c795b4eb3 100644
+--- a/arch/x86/kernel/amd_nb.c
++++ b/arch/x86/kernel/amd_nb.c
+@@ -21,6 +21,7 @@
+ #define PCI_DEVICE_ID_AMD_17H_DF_F4 0x1464
+ #define PCI_DEVICE_ID_AMD_17H_M10H_DF_F4 0x15ec
+ #define PCI_DEVICE_ID_AMD_17H_M30H_DF_F4 0x1494
++#define PCI_DEVICE_ID_AMD_17H_M70H_DF_F4 0x1444
+
+ /* Protect the PCI config register pairs used for SMN and DF indirect access. */
+ static DEFINE_MUTEX(smn_mutex);
+@@ -50,6 +51,7 @@ const struct pci_device_id amd_nb_misc_ids[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M10H_DF_F3) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M30H_DF_F3) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CNB17H_F3) },
++ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M70H_DF_F3) },
+ {}
+ };
+ EXPORT_SYMBOL_GPL(amd_nb_misc_ids);
+@@ -63,6 +65,7 @@ static const struct pci_device_id amd_nb_link_ids[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_DF_F4) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M10H_DF_F4) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M30H_DF_F4) },
++ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M70H_DF_F4) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CNB17H_F4) },
+ {}
+ };
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index 08fb79f37793..ad0d5ced82b3 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -1495,54 +1495,72 @@ static void lapic_setup_esr(void)
+ oldvalue, value);
+ }
+
+-static void apic_pending_intr_clear(void)
++#define APIC_IR_REGS APIC_ISR_NR
++#define APIC_IR_BITS (APIC_IR_REGS * 32)
++#define APIC_IR_MAPSIZE (APIC_IR_BITS / BITS_PER_LONG)
++
++union apic_ir {
++ unsigned long map[APIC_IR_MAPSIZE];
++ u32 regs[APIC_IR_REGS];
++};
++
++static bool apic_check_and_ack(union apic_ir *irr, union apic_ir *isr)
+ {
+- long long max_loops = cpu_khz ? cpu_khz : 1000000;
+- unsigned long long tsc = 0, ntsc;
+- unsigned int queued;
+- unsigned long value;
+- int i, j, acked = 0;
++ int i, bit;
++
++ /* Read the IRRs */
++ for (i = 0; i < APIC_IR_REGS; i++)
++ irr->regs[i] = apic_read(APIC_IRR + i * 0x10);
++
++ /* Read the ISRs */
++ for (i = 0; i < APIC_IR_REGS; i++)
++ isr->regs[i] = apic_read(APIC_ISR + i * 0x10);
+
+- if (boot_cpu_has(X86_FEATURE_TSC))
+- tsc = rdtsc();
+ /*
+- * After a crash, we no longer service the interrupts and a pending
+- * interrupt from previous kernel might still have ISR bit set.
+- *
+- * Most probably by now CPU has serviced that pending interrupt and
+- * it might not have done the ack_APIC_irq() because it thought,
+- * interrupt came from i8259 as ExtInt. LAPIC did not get EOI so it
+- * does not clear the ISR bit and cpu thinks it has already serivced
+- * the interrupt. Hence a vector might get locked. It was noticed
+- * for timer irq (vector 0x31). Issue an extra EOI to clear ISR.
++	 * If the ISR map is not empty, ACK the APIC and run another round
++	 * to verify whether a pending IRR has been unblocked and turned
++	 * into an ISR.
+ */
+- do {
+- queued = 0;
+- for (i = APIC_ISR_NR - 1; i >= 0; i--)
+- queued |= apic_read(APIC_IRR + i*0x10);
+-
+- for (i = APIC_ISR_NR - 1; i >= 0; i--) {
+- value = apic_read(APIC_ISR + i*0x10);
+- for_each_set_bit(j, &value, 32) {
+- ack_APIC_irq();
+- acked++;
+- }
+- }
+- if (acked > 256) {
+- pr_err("LAPIC pending interrupts after %d EOI\n", acked);
+- break;
+- }
+- if (queued) {
+- if (boot_cpu_has(X86_FEATURE_TSC) && cpu_khz) {
+- ntsc = rdtsc();
+- max_loops = (long long)cpu_khz << 10;
+- max_loops -= ntsc - tsc;
+- } else {
+- max_loops--;
+- }
+- }
+- } while (queued && max_loops > 0);
+- WARN_ON(max_loops <= 0);
++ if (!bitmap_empty(isr->map, APIC_IR_BITS)) {
++ /*
++ * There can be multiple ISR bits set when a high priority
++ * interrupt preempted a lower priority one. Issue an ACK
++ * per set bit.
++ */
++ for_each_set_bit(bit, isr->map, APIC_IR_BITS)
++ ack_APIC_irq();
++ return true;
++ }
++
++ return !bitmap_empty(irr->map, APIC_IR_BITS);
++}
++
++/*
++ * After a crash, we no longer service the interrupts and a pending
++ * interrupt from previous kernel might still have ISR bit set.
++ *
++ * Most probably by now the CPU has serviced that pending interrupt and it
++ * might not have done the ack_APIC_irq() because it thought the interrupt
++ * came from i8259 as ExtInt. LAPIC did not get EOI so it does not clear
++ * the ISR bit and the CPU thinks it has already serviced the interrupt. Hence
++ * a vector might get locked. It was noticed for timer irq (vector
++ * 0x31). Issue an extra EOI to clear ISR.
++ *
++ * If there are pending IRR bits they turn into ISR bits after a higher
++ * priority ISR bit has been acked.
++ */
++static void apic_pending_intr_clear(void)
++{
++ union apic_ir irr, isr;
++ unsigned int i;
++
++ /* 512 loops are way oversized and give the APIC a chance to obey. */
++ for (i = 0; i < 512; i++) {
++ if (!apic_check_and_ack(&irr, &isr))
++ return;
++ }
++ /* Dump the IRR/ISR content if that failed */
++ pr_warn("APIC: Stale IRR: %256pb ISR: %256pb\n", irr.map, isr.map);
+ }
+
+ /**
+@@ -1565,6 +1583,14 @@ static void setup_local_APIC(void)
+ return;
+ }
+
++ /*
++ * If this comes from kexec/kcrash the APIC might be enabled in
++ * SPIV. Soft disable it before doing further initialization.
++ */
++ value = apic_read(APIC_SPIV);
++ value &= ~APIC_SPIV_APIC_ENABLED;
++ apic_write(APIC_SPIV, value);
++
+ #ifdef CONFIG_X86_32
+ /* Pound the ESR really hard over the head with a big hammer - mbligh */
+ if (lapic_is_integrated() && apic->disable_esr) {
+@@ -1610,6 +1636,7 @@ static void setup_local_APIC(void)
+ value &= ~APIC_TPRI_MASK;
+ apic_write(APIC_TASKPRI, value);
+
++ /* Clear eventually stale ISR/IRR bits */
+ apic_pending_intr_clear();
+
+ /*
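
The rework reads the eight 32-bit IRR and ISR registers into a union so all 256 vector bits can be tested with ordinary bitmap operations, replacing the old open-coded TSC-timed loop. A host-side sketch of the union overlay, assuming unsigned long evenly divides the register block (as the kernel's BITS_PER_LONG does):

    #include <stdbool.h>
    #include <stddef.h>

    #define IR_REGS 8
    #define IR_BITS (IR_REGS * 32)

    union apic_ir {
        unsigned long map[IR_BITS / (8 * sizeof(unsigned long))];
        unsigned int regs[IR_REGS];
    };

    /* stand-in for !bitmap_empty(ir->map, IR_BITS) */
    static bool any_bit_set(const union apic_ir *ir)
    {
        for (size_t i = 0; i < sizeof(ir->map) / sizeof(ir->map[0]); i++)
            if (ir->map[i])
                return true;
        return false;
    }
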
+diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
+index fdacb864c3dd..2c5676b0a6e7 100644
+--- a/arch/x86/kernel/apic/vector.c
++++ b/arch/x86/kernel/apic/vector.c
+@@ -398,6 +398,17 @@ static int activate_reserved(struct irq_data *irqd)
+ if (!irqd_can_reserve(irqd))
+ apicd->can_reserve = false;
+ }
++
++ /*
++ * Check to ensure that the effective affinity mask is a subset
++ * the user supplied affinity mask, and warn the user if it is not
++ */
++ if (!cpumask_subset(irq_data_get_effective_affinity_mask(irqd),
++ irq_data_get_affinity_mask(irqd))) {
++ pr_warn("irq %u: Affinity broken due to vector space exhaustion.\n",
++ irqd->irq);
++ }
++
+ return ret;
+ }
+
+diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
+index 96421f97e75c..231fa230ebc7 100644
+--- a/arch/x86/kernel/smp.c
++++ b/arch/x86/kernel/smp.c
+@@ -179,6 +179,12 @@ asmlinkage __visible void smp_reboot_interrupt(void)
+ irq_exit();
+ }
+
++static int register_stop_handler(void)
++{
++ return register_nmi_handler(NMI_LOCAL, smp_stop_nmi_callback,
++ NMI_FLAG_FIRST, "smp_stop");
++}
++
+ static void native_stop_other_cpus(int wait)
+ {
+ unsigned long flags;
+@@ -212,39 +218,41 @@ static void native_stop_other_cpus(int wait)
+ apic->send_IPI_allbutself(REBOOT_VECTOR);
+
+ /*
+- * Don't wait longer than a second if the caller
+- * didn't ask us to wait.
++ * Don't wait longer than a second for IPI completion. The
++ * wait request is not checked here because that would
++	 * prevent an NMI shutdown attempt in case not all
++ * CPUs reach shutdown state.
+ */
+ timeout = USEC_PER_SEC;
+- while (num_online_cpus() > 1 && (wait || timeout--))
++ while (num_online_cpus() > 1 && timeout--)
+ udelay(1);
+ }
+-
+- /* if the REBOOT_VECTOR didn't work, try with the NMI */
+- if ((num_online_cpus() > 1) && (!smp_no_nmi_ipi)) {
+- if (register_nmi_handler(NMI_LOCAL, smp_stop_nmi_callback,
+- NMI_FLAG_FIRST, "smp_stop"))
+- /* Note: we ignore failures here */
+- /* Hope the REBOOT_IRQ is good enough */
+- goto finish;
+-
+- /* sync above data before sending IRQ */
+- wmb();
+
+- pr_emerg("Shutting down cpus with NMI\n");
++ /* if the REBOOT_VECTOR didn't work, try with the NMI */
++ if (num_online_cpus() > 1) {
++ /*
++ * If NMI IPI is enabled, try to register the stop handler
++ * and send the IPI. In any case try to wait for the other
++ * CPUs to stop.
++ */
++ if (!smp_no_nmi_ipi && !register_stop_handler()) {
++ /* Sync above data before sending IRQ */
++ wmb();
+
+- apic->send_IPI_allbutself(NMI_VECTOR);
++ pr_emerg("Shutting down cpus with NMI\n");
+
++ apic->send_IPI_allbutself(NMI_VECTOR);
++ }
+ /*
+- * Don't wait longer than a 10 ms if the caller
+- * didn't ask us to wait.
++ * Don't wait longer than 10 ms if the caller didn't
++		 * request it. If wait is true, the machine hangs here if
++ * one or more CPUs do not reach shutdown state.
+ */
+ timeout = USEC_PER_MSEC * 10;
+ while (num_online_cpus() > 1 && (wait || timeout--))
+ udelay(1);
+ }
+
+-finish:
+ local_irq_save(flags);
+ disable_local_APIC();
+ mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 22c2720cd948..e7d25f436466 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -304,7 +304,13 @@ static void do_host_cpuid(struct kvm_cpuid_entry2 *entry, u32 function,
+ case 7:
+ case 0xb:
+ case 0xd:
++ case 0xf:
++ case 0x10:
++ case 0x12:
+ case 0x14:
++ case 0x17:
++ case 0x18:
++ case 0x1f:
+ case 0x8000001d:
+ entry->flags |= KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
+ break;
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index 718f7d9afedc..3b971026a653 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -5395,6 +5395,8 @@ done_prefixes:
+ ctxt->memopp->addr.mem.ea + ctxt->_eip);
+
+ done:
++ if (rc == X86EMUL_PROPAGATE_FAULT)
++ ctxt->have_exception = true;
+ return (rc != X86EMUL_CONTINUE) ? EMULATION_FAILED : EMULATION_OK;
+ }
+
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index a63964e7cec7..94aa6102010d 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -395,8 +395,6 @@ static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn,
+ mask |= (gpa & shadow_nonpresent_or_rsvd_mask)
+ << shadow_nonpresent_or_rsvd_mask_len;
+
+- page_header(__pa(sptep))->mmio_cached = true;
+-
+ trace_mark_mmio_spte(sptep, gfn, access, gen);
+ mmu_spte_set(sptep, mask);
+ }
+@@ -5611,13 +5609,13 @@ slot_handle_leaf(struct kvm *kvm, struct kvm_memory_slot *memslot,
+ PT_PAGE_TABLE_LEVEL, lock_flush_tlb);
+ }
+
+-static void free_mmu_pages(struct kvm_vcpu *vcpu)
++static void free_mmu_pages(struct kvm_mmu *mmu)
+ {
+- free_page((unsigned long)vcpu->arch.mmu->pae_root);
+- free_page((unsigned long)vcpu->arch.mmu->lm_root);
++ free_page((unsigned long)mmu->pae_root);
++ free_page((unsigned long)mmu->lm_root);
+ }
+
+-static int alloc_mmu_pages(struct kvm_vcpu *vcpu)
++static int alloc_mmu_pages(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
+ {
+ struct page *page;
+ int i;
+@@ -5638,9 +5636,9 @@ static int alloc_mmu_pages(struct kvm_vcpu *vcpu)
+ if (!page)
+ return -ENOMEM;
+
+- vcpu->arch.mmu->pae_root = page_address(page);
++ mmu->pae_root = page_address(page);
+ for (i = 0; i < 4; ++i)
+- vcpu->arch.mmu->pae_root[i] = INVALID_PAGE;
++ mmu->pae_root[i] = INVALID_PAGE;
+
+ return 0;
+ }
+@@ -5648,6 +5646,7 @@ static int alloc_mmu_pages(struct kvm_vcpu *vcpu)
+ int kvm_mmu_create(struct kvm_vcpu *vcpu)
+ {
+ uint i;
++ int ret;
+
+ vcpu->arch.mmu = &vcpu->arch.root_mmu;
+ vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
+@@ -5665,7 +5664,19 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
+ vcpu->arch.guest_mmu.prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;
+
+ vcpu->arch.nested_mmu.translate_gpa = translate_nested_gpa;
+- return alloc_mmu_pages(vcpu);
++
++ ret = alloc_mmu_pages(vcpu, &vcpu->arch.guest_mmu);
++ if (ret)
++ return ret;
++
++ ret = alloc_mmu_pages(vcpu, &vcpu->arch.root_mmu);
++ if (ret)
++ goto fail_allocate_root;
++
++ return ret;
++ fail_allocate_root:
++ free_mmu_pages(&vcpu->arch.guest_mmu);
++ return ret;
+ }
+
+
+@@ -5943,7 +5954,7 @@ void kvm_mmu_slot_set_dirty(struct kvm *kvm,
+ }
+ EXPORT_SYMBOL_GPL(kvm_mmu_slot_set_dirty);
+
+-static void __kvm_mmu_zap_all(struct kvm *kvm, bool mmio_only)
++void kvm_mmu_zap_all(struct kvm *kvm)
+ {
+ struct kvm_mmu_page *sp, *node;
+ LIST_HEAD(invalid_list);
+@@ -5952,14 +5963,10 @@ static void __kvm_mmu_zap_all(struct kvm *kvm, bool mmio_only)
+ spin_lock(&kvm->mmu_lock);
+ restart:
+ list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
+- if (mmio_only && !sp->mmio_cached)
+- continue;
+ if (sp->role.invalid && sp->root_count)
+ continue;
+- if (__kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list, &ign)) {
+- WARN_ON_ONCE(mmio_only);
++ if (__kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list, &ign))
+ goto restart;
+- }
+ if (cond_resched_lock(&kvm->mmu_lock))
+ goto restart;
+ }
+@@ -5968,11 +5975,6 @@ restart:
+ spin_unlock(&kvm->mmu_lock);
+ }
+
+-void kvm_mmu_zap_all(struct kvm *kvm)
+-{
+- return __kvm_mmu_zap_all(kvm, false);
+-}
+-
+ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
+ {
+ WARN_ON(gen & KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS);
+@@ -5994,7 +5996,7 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
+ */
+ if (unlikely(gen == 0)) {
+ kvm_debug_ratelimited("kvm: zapping shadow pages for mmio generation wraparound\n");
+- __kvm_mmu_zap_all(kvm, true);
++ kvm_mmu_zap_all_fast(kvm);
+ }
+ }
+
+@@ -6168,7 +6170,8 @@ unsigned long kvm_mmu_calculate_default_mmu_pages(struct kvm *kvm)
+ void kvm_mmu_destroy(struct kvm_vcpu *vcpu)
+ {
+ kvm_mmu_unload(vcpu);
+- free_mmu_pages(vcpu);
++ free_mmu_pages(&vcpu->arch.root_mmu);
++ free_mmu_pages(&vcpu->arch.guest_mmu);
+ mmu_free_memory_caches(vcpu);
+ }
+
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index e0368076a1ef..45e425c5e6f5 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -5274,7 +5274,8 @@ get_pi_vcpu_info(struct kvm *kvm, struct kvm_kernel_irq_routing_entry *e,
+
+ kvm_set_msi_irq(kvm, e, &irq);
+
+- if (!kvm_intr_is_single_vcpu(kvm, &irq, &vcpu)) {
++ if (!kvm_intr_is_single_vcpu(kvm, &irq, &vcpu) ||
++ !kvm_irq_is_postable(&irq)) {
+ pr_debug("SVM: %s: use legacy intr remap mode for irq %u\n",
+ __func__, irq.vector);
+ return -1;
+@@ -5328,6 +5329,7 @@ static int svm_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
+ * 1. When cannot target interrupt to a specific vcpu.
+ * 2. Unsetting posted interrupt.
+ * 3. APIC virtualization is disabled for the vcpu.
++ * 4. IRQ has incompatible delivery mode (SMI, INIT, etc)
+ */
+ if (!get_pi_vcpu_info(kvm, e, &vcpu_info, &svm) && set &&
+ kvm_vcpu_apicv_active(&svm->vcpu)) {
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index c030c96fc81a..1d11bf4bab8b 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -7369,10 +7369,14 @@ static int vmx_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
+ * irqbalance to make the interrupts single-CPU.
+ *
+ * We will support full lowest-priority interrupt later.
++ *
++ * In addition, we can only inject generic interrupts using
++	 * the PI mechanism, so refuse to route others through it.
+ */
+
+ kvm_set_msi_irq(kvm, e, &irq);
+- if (!kvm_intr_is_single_vcpu(kvm, &irq, &vcpu)) {
++ if (!kvm_intr_is_single_vcpu(kvm, &irq, &vcpu) ||
++ !kvm_irq_is_postable(&irq)) {
+ /*
+ * Make sure the IRTE is in remapped mode if
+ * we don't handle it in posted mode.
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 91602d310a3f..350adc83eb50 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -674,8 +674,14 @@ static int kvm_read_nested_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn,
+ data, offset, len, access);
+ }
+
++static inline u64 pdptr_rsvd_bits(struct kvm_vcpu *vcpu)
++{
++ return rsvd_bits(cpuid_maxphyaddr(vcpu), 63) | rsvd_bits(5, 8) |
++ rsvd_bits(1, 2);
++}
++
+ /*
+- * Load the pae pdptrs. Return true is they are all valid.
++ * Load the pae pdptrs. Return 1 if they are all valid, 0 otherwise.
+ */
+ int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3)
+ {
+@@ -694,8 +700,7 @@ int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3)
+ }
+ for (i = 0; i < ARRAY_SIZE(pdpte); ++i) {
+ if ((pdpte[i] & PT_PRESENT_MASK) &&
+- (pdpte[i] &
+- vcpu->arch.mmu->guest_rsvd_check.rsvd_bits_mask[0][2])) {
++ (pdpte[i] & pdptr_rsvd_bits(vcpu))) {
+ ret = 0;
+ goto out;
+ }
+@@ -6528,8 +6533,16 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu,
+ if (reexecute_instruction(vcpu, cr2, write_fault_to_spt,
+ emulation_type))
+ return EMULATE_DONE;
+- if (ctxt->have_exception && inject_emulated_exception(vcpu))
++ if (ctxt->have_exception) {
++ /*
++		/*
++		 * #UD should result in just EMULATION_FAILED, and a trap-like
++ * exception should not be encountered during decode.
++ */
++ WARN_ON_ONCE(ctxt->exception.vector == UD_VECTOR ||
++ exception_type(ctxt->exception.vector) == EXCPT_TRAP);
++ inject_emulated_exception(vcpu);
+ return EMULATE_DONE;
++ }
+ if (emulation_type & EMULTYPE_SKIP)
+ return EMULATE_FAIL;
+ return handle_emulation_failure(vcpu, emulation_type);
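
In the load_pdptrs() hunk above, pdptr_rsvd_bits() builds its mask from the rsvd_bits(lo, hi) helper instead of reaching into the per-vCPU mmu role, so PAE PDPTE validation no longer depends on which mmu context happens to be loaded. A quick sketch of that mask arithmetic, equivalent to the kernel helper:

    #include <stdint.h>

    /* set bits [lo, hi] inclusive */
    static inline uint64_t rsvd_bits(unsigned int lo, unsigned int hi)
    {
        return ((2ULL << (hi - lo)) - 1) << lo;
    }

    /* e.g. rsvd_bits(1, 2) == 0x6 and rsvd_bits(5, 8) == 0x1e0,
     * matching the PDPTE reserved ranges used in the hunk above */
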
+diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
+index e6dad600614c..4123100e0eaf 100644
+--- a/arch/x86/mm/numa.c
++++ b/arch/x86/mm/numa.c
+@@ -861,9 +861,9 @@ void numa_remove_cpu(int cpu)
+ */
+ const struct cpumask *cpumask_of_node(int node)
+ {
+- if (node >= nr_node_ids) {
++ if ((unsigned)node >= nr_node_ids) {
+ printk(KERN_WARNING
+- "cpumask_of_node(%d): node > nr_node_ids(%u)\n",
++ "cpumask_of_node(%d): (unsigned)node >= nr_node_ids(%u)\n",
+ node, nr_node_ids);
+ dump_stack();
+ return cpu_none_mask;
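
Casting `node` to unsigned lets one comparison reject both out-of-range and negative ids, since NUMA_NO_NODE (-1) wraps to a huge unsigned value. A tiny demonstration:

    #include <stdio.h>

    static int node_ok(int node, unsigned int nr_node_ids)
    {
        /* -1 becomes UINT_MAX here, so it fails the same bound check */
        return (unsigned int)node < nr_node_ids;
    }

    int main(void)
    {
        printf("%d %d %d\n", node_ok(0, 4), node_ok(-1, 4), node_ok(4, 4));
        /* prints: 1 0 0 */
        return 0;
    }
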
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index b196524759ec..7f2140414440 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -330,13 +330,15 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
+
+ pud = pud_offset(p4d, addr);
+ if (pud_none(*pud)) {
+- addr += PUD_SIZE;
++ WARN_ON_ONCE(addr & ~PUD_MASK);
++ addr = round_up(addr + 1, PUD_SIZE);
+ continue;
+ }
+
+ pmd = pmd_offset(pud, addr);
+ if (pmd_none(*pmd)) {
+- addr += PMD_SIZE;
++ WARN_ON_ONCE(addr & ~PMD_MASK);
++ addr = round_up(addr + 1, PMD_SIZE);
+ continue;
+ }
+
+@@ -666,6 +668,8 @@ void __init pti_init(void)
+ */
+ void pti_finalize(void)
+ {
++ if (!boot_cpu_has(X86_FEATURE_PTI))
++ return;
+ /*
+ * We need to clone everything (again) that maps parts of the
+ * kernel image.
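
The pti_clone_pgtable() change matters when `addr` is not already PUD/PMD aligned: `addr += PUD_SIZE` would land mid-way into the next entry and silently skip part of it, while round_up(addr + 1, size) advances exactly to the next boundary. A worked example with PMD-sized (2 MiB) steps:

    #include <stdio.h>

    #define PMD_SIZE (1UL << 21)  /* 2 MiB */
    /* power-of-two round_up, as in the kernel */
    #define round_up(x, y) ((((x) - 1) | ((y) - 1)) + 1)

    int main(void)
    {
        unsigned long addr = PMD_SIZE + 0x1000;  /* unaligned cursor */

        printf("%#lx\n", addr + PMD_SIZE);               /* 0x401000: overshoots */
        printf("%#lx\n", round_up(addr + 1, PMD_SIZE));  /* 0x400000: next boundary */
        return 0;
    }
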
+diff --git a/arch/x86/platform/intel/iosf_mbi.c b/arch/x86/platform/intel/iosf_mbi.c
+index 2e796b54cbde..9e2444500428 100644
+--- a/arch/x86/platform/intel/iosf_mbi.c
++++ b/arch/x86/platform/intel/iosf_mbi.c
+@@ -17,6 +17,7 @@
+ #include <linux/debugfs.h>
+ #include <linux/capability.h>
+ #include <linux/pm_qos.h>
++#include <linux/wait.h>
+
+ #include <asm/iosf_mbi.h>
+
+@@ -201,23 +202,45 @@ EXPORT_SYMBOL(iosf_mbi_available);
+ #define PUNIT_SEMAPHORE_BIT BIT(0)
+ #define PUNIT_SEMAPHORE_ACQUIRE BIT(1)
+
+-static DEFINE_MUTEX(iosf_mbi_punit_mutex);
+-static DEFINE_MUTEX(iosf_mbi_block_punit_i2c_access_count_mutex);
++static DEFINE_MUTEX(iosf_mbi_pmic_access_mutex);
+ static BLOCKING_NOTIFIER_HEAD(iosf_mbi_pmic_bus_access_notifier);
+-static u32 iosf_mbi_block_punit_i2c_access_count;
++static DECLARE_WAIT_QUEUE_HEAD(iosf_mbi_pmic_access_waitq);
++static u32 iosf_mbi_pmic_punit_access_count;
++static u32 iosf_mbi_pmic_i2c_access_count;
+ static u32 iosf_mbi_sem_address;
+ static unsigned long iosf_mbi_sem_acquired;
+ static struct pm_qos_request iosf_mbi_pm_qos;
+
+ void iosf_mbi_punit_acquire(void)
+ {
+- mutex_lock(&iosf_mbi_punit_mutex);
++	/* Wait for any I2C PMIC accesses from in-kernel drivers to finish. */
++ mutex_lock(&iosf_mbi_pmic_access_mutex);
++ while (iosf_mbi_pmic_i2c_access_count != 0) {
++ mutex_unlock(&iosf_mbi_pmic_access_mutex);
++ wait_event(iosf_mbi_pmic_access_waitq,
++ iosf_mbi_pmic_i2c_access_count == 0);
++ mutex_lock(&iosf_mbi_pmic_access_mutex);
++ }
++ /*
++ * We do not need to do anything to allow the PUNIT to safely access
++	 * the PMIC, other than blocking in-kernel accesses to the PMIC.
++ */
++ iosf_mbi_pmic_punit_access_count++;
++ mutex_unlock(&iosf_mbi_pmic_access_mutex);
+ }
+ EXPORT_SYMBOL(iosf_mbi_punit_acquire);
+
+ void iosf_mbi_punit_release(void)
+ {
+- mutex_unlock(&iosf_mbi_punit_mutex);
++ bool do_wakeup;
++
++ mutex_lock(&iosf_mbi_pmic_access_mutex);
++ iosf_mbi_pmic_punit_access_count--;
++ do_wakeup = iosf_mbi_pmic_punit_access_count == 0;
++ mutex_unlock(&iosf_mbi_pmic_access_mutex);
++
++ if (do_wakeup)
++ wake_up(&iosf_mbi_pmic_access_waitq);
+ }
+ EXPORT_SYMBOL(iosf_mbi_punit_release);
+
+@@ -256,34 +279,32 @@ static void iosf_mbi_reset_semaphore(void)
+ * already blocked P-Unit accesses because it wants them blocked over multiple
+ * i2c-transfers, for e.g. read-modify-write of an I2C client register.
+ *
+- * The P-Unit accesses already being blocked is tracked through the
+- * iosf_mbi_block_punit_i2c_access_count variable which is protected by the
+- * iosf_mbi_block_punit_i2c_access_count_mutex this mutex is hold for the
+- * entire duration of the function.
+- *
+- * If access is not blocked yet, this function takes the following steps:
++ * To allow safe PMIC i2c bus accesses, this function takes the following steps:
+ *
+ * 1) Some code sends request to the P-Unit which make it access the PMIC
+ * I2C bus. Testing has shown that the P-Unit does not check its internal
+ * PMIC bus semaphore for these requests. Callers of these requests call
+ * iosf_mbi_punit_acquire()/_release() around their P-Unit accesses, these
+- * functions lock/unlock the iosf_mbi_punit_mutex.
+- * As the first step we lock the iosf_mbi_punit_mutex, to wait for any in
+- * flight requests to finish and to block any new requests.
++ * functions increase/decrease iosf_mbi_pmic_punit_access_count, so first
++ * we wait for iosf_mbi_pmic_punit_access_count to become 0.
++ *
++ * 2) Check iosf_mbi_pmic_i2c_access_count; if access has already
++ * been blocked by another caller, we only need to increment
++ * iosf_mbi_pmic_i2c_access_count and we can skip the other steps.
+ *
+- * 2) Some code makes such P-Unit requests from atomic contexts where it
++ * 3) Some code makes such P-Unit requests from atomic contexts where it
+ * cannot call iosf_mbi_punit_acquire() as that may sleep.
+ * As the second step we call a notifier chain which allows any code
+ * needing P-Unit resources from atomic context to acquire them before
+ * we take control over the PMIC I2C bus.
+ *
+- * 3) When CPU cores enter C6 or C7 the P-Unit needs to talk to the PMIC
++ * 4) When CPU cores enter C6 or C7 the P-Unit needs to talk to the PMIC
+ * if this happens while the kernel itself is accessing the PMIC I2C bus
+ * the SoC hangs.
+ * As the third step we call pm_qos_update_request() to disallow the CPU
+ * to enter C6 or C7.
+ *
+- * 4) The P-Unit has a PMIC bus semaphore which we can request to stop
++ * 5) The P-Unit has a PMIC bus semaphore which we can request to stop
+ * autonomous P-Unit tasks from accessing the PMIC I2C bus while we hold it.
+ * As the fourth and final step we request this semaphore and wait for our
+ * request to be acknowledged.
+@@ -297,12 +318,18 @@ int iosf_mbi_block_punit_i2c_access(void)
+ if (WARN_ON(!mbi_pdev || !iosf_mbi_sem_address))
+ return -ENXIO;
+
+- mutex_lock(&iosf_mbi_block_punit_i2c_access_count_mutex);
++ mutex_lock(&iosf_mbi_pmic_access_mutex);
+
+- if (iosf_mbi_block_punit_i2c_access_count > 0)
++ while (iosf_mbi_pmic_punit_access_count != 0) {
++ mutex_unlock(&iosf_mbi_pmic_access_mutex);
++ wait_event(iosf_mbi_pmic_access_waitq,
++ iosf_mbi_pmic_punit_access_count == 0);
++ mutex_lock(&iosf_mbi_pmic_access_mutex);
++ }
++
++ if (iosf_mbi_pmic_i2c_access_count > 0)
+ goto success;
+
+- mutex_lock(&iosf_mbi_punit_mutex);
+ blocking_notifier_call_chain(&iosf_mbi_pmic_bus_access_notifier,
+ MBI_PMIC_BUS_ACCESS_BEGIN, NULL);
+
+@@ -330,10 +357,6 @@ int iosf_mbi_block_punit_i2c_access(void)
+ iosf_mbi_sem_acquired = jiffies;
+ dev_dbg(&mbi_pdev->dev, "P-Unit semaphore acquired after %ums\n",
+ jiffies_to_msecs(jiffies - start));
+- /*
+- * Success, keep iosf_mbi_punit_mutex locked till
+- * iosf_mbi_unblock_punit_i2c_access() gets called.
+- */
+ goto success;
+ }
+
+@@ -344,15 +367,13 @@ int iosf_mbi_block_punit_i2c_access(void)
+ dev_err(&mbi_pdev->dev, "Error P-Unit semaphore timed out, resetting\n");
+ error:
+ iosf_mbi_reset_semaphore();
+- mutex_unlock(&iosf_mbi_punit_mutex);
+-
+ if (!iosf_mbi_get_sem(&sem))
+ dev_err(&mbi_pdev->dev, "P-Unit semaphore: %d\n", sem);
+ success:
+ if (!WARN_ON(ret))
+- iosf_mbi_block_punit_i2c_access_count++;
++ iosf_mbi_pmic_i2c_access_count++;
+
+- mutex_unlock(&iosf_mbi_block_punit_i2c_access_count_mutex);
++ mutex_unlock(&iosf_mbi_pmic_access_mutex);
+
+ return ret;
+ }
+@@ -360,17 +381,20 @@ EXPORT_SYMBOL(iosf_mbi_block_punit_i2c_access);
+
+ void iosf_mbi_unblock_punit_i2c_access(void)
+ {
+- mutex_lock(&iosf_mbi_block_punit_i2c_access_count_mutex);
++ bool do_wakeup = false;
+
+- iosf_mbi_block_punit_i2c_access_count--;
+- if (iosf_mbi_block_punit_i2c_access_count == 0) {
++ mutex_lock(&iosf_mbi_pmic_access_mutex);
++ iosf_mbi_pmic_i2c_access_count--;
++ if (iosf_mbi_pmic_i2c_access_count == 0) {
+ iosf_mbi_reset_semaphore();
+- mutex_unlock(&iosf_mbi_punit_mutex);
+ dev_dbg(&mbi_pdev->dev, "punit semaphore held for %ums\n",
+ jiffies_to_msecs(jiffies - iosf_mbi_sem_acquired));
++ do_wakeup = true;
+ }
++ mutex_unlock(&iosf_mbi_pmic_access_mutex);
+
+- mutex_unlock(&iosf_mbi_block_punit_i2c_access_count_mutex);
++ if (do_wakeup)
++ wake_up(&iosf_mbi_pmic_access_waitq);
+ }
+ EXPORT_SYMBOL(iosf_mbi_unblock_punit_i2c_access);
+
+@@ -379,10 +403,10 @@ int iosf_mbi_register_pmic_bus_access_notifier(struct notifier_block *nb)
+ int ret;
+
+ /* Wait for the bus to go inactive before registering */
+- mutex_lock(&iosf_mbi_punit_mutex);
++ iosf_mbi_punit_acquire();
+ ret = blocking_notifier_chain_register(
+ &iosf_mbi_pmic_bus_access_notifier, nb);
+- mutex_unlock(&iosf_mbi_punit_mutex);
++ iosf_mbi_punit_release();
+
+ return ret;
+ }
+@@ -403,9 +427,9 @@ int iosf_mbi_unregister_pmic_bus_access_notifier(struct notifier_block *nb)
+ int ret;
+
+ /* Wait for the bus to go inactive before unregistering */
+- mutex_lock(&iosf_mbi_punit_mutex);
++ iosf_mbi_punit_acquire();
+ ret = iosf_mbi_unregister_pmic_bus_access_notifier_unlocked(nb);
+- mutex_unlock(&iosf_mbi_punit_mutex);
++ iosf_mbi_punit_release();
+
+ return ret;
+ }
+@@ -413,7 +437,7 @@ EXPORT_SYMBOL(iosf_mbi_unregister_pmic_bus_access_notifier);
+
+ void iosf_mbi_assert_punit_acquired(void)
+ {
+- WARN_ON(!mutex_is_locked(&iosf_mbi_punit_mutex));
++ WARN_ON(iosf_mbi_pmic_punit_access_count == 0);
+ }
+ EXPORT_SYMBOL(iosf_mbi_assert_punit_acquired);
+
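
The iosf_mbi rework replaces the long-held mutexes with two counters guarded by one mutex plus a waitqueue: P-Unit users may proceed only while the I2C count is zero, and vice versa, with the last releaser waking the other side. A userspace analogue of the scheme using a pthread condition variable in place of the kernel waitqueue (a sketch only; error handling omitted):

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  waitq = PTHREAD_COND_INITIALIZER;
    static unsigned int punit_count, i2c_count;

    void punit_acquire(void)
    {
        pthread_mutex_lock(&lock);
        while (i2c_count != 0)              /* wait for I2C users to drain */
            pthread_cond_wait(&waitq, &lock);
        punit_count++;                      /* nested P-Unit users may stack */
        pthread_mutex_unlock(&lock);
    }

    void punit_release(void)
    {
        pthread_mutex_lock(&lock);
        if (--punit_count == 0)
            pthread_cond_broadcast(&waitq); /* unblock waiting I2C users */
        pthread_mutex_unlock(&lock);
    }
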
+diff --git a/block/blk-flush.c b/block/blk-flush.c
+index aedd9320e605..1eec9cbe5a0a 100644
+--- a/block/blk-flush.c
++++ b/block/blk-flush.c
+@@ -214,6 +214,16 @@ static void flush_end_io(struct request *flush_rq, blk_status_t error)
+
+ /* release the tag's ownership to the req cloned from */
+ spin_lock_irqsave(&fq->mq_flush_lock, flags);
++
++ if (!refcount_dec_and_test(&flush_rq->ref)) {
++ fq->rq_status = error;
++ spin_unlock_irqrestore(&fq->mq_flush_lock, flags);
++ return;
++ }
++
++ if (fq->rq_status != BLK_STS_OK)
++ error = fq->rq_status;
++
+ hctx = flush_rq->mq_hctx;
+ if (!q->elevator) {
+ blk_mq_tag_set_rq(hctx, flush_rq->tag, fq->orig_rq);
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 0835f4d8d42e..a79b9ad1aba1 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -44,12 +44,12 @@ static void blk_mq_poll_stats_fn(struct blk_stat_callback *cb);
+
+ static int blk_mq_poll_stats_bkt(const struct request *rq)
+ {
+- int ddir, bytes, bucket;
++ int ddir, sectors, bucket;
+
+ ddir = rq_data_dir(rq);
+- bytes = blk_rq_bytes(rq);
++ sectors = blk_rq_stats_sectors(rq);
+
+- bucket = ddir + 2*(ilog2(bytes) - 9);
++ bucket = ddir + 2 * ilog2(sectors);
+
+ if (bucket < 0)
+ return -1;
+@@ -330,6 +330,7 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
+ else
+ rq->start_time_ns = 0;
+ rq->io_start_time_ns = 0;
++ rq->stats_sectors = 0;
+ rq->nr_phys_segments = 0;
+ #if defined(CONFIG_BLK_DEV_INTEGRITY)
+ rq->nr_integrity_segments = 0;
+@@ -673,9 +674,7 @@ void blk_mq_start_request(struct request *rq)
+
+ if (test_bit(QUEUE_FLAG_STATS, &q->queue_flags)) {
+ rq->io_start_time_ns = ktime_get_ns();
+-#ifdef CONFIG_BLK_DEV_THROTTLING_LOW
+- rq->throtl_size = blk_rq_sectors(rq);
+-#endif
++ rq->stats_sectors = blk_rq_sectors(rq);
+ rq->rq_flags |= RQF_STATS;
+ rq_qos_issue(q, rq);
+ }
+@@ -905,7 +904,10 @@ static bool blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
+ */
+ if (blk_mq_req_expired(rq, next))
+ blk_mq_rq_timed_out(rq, reserved);
+- if (refcount_dec_and_test(&rq->ref))
++
++ if (is_flush_rq(rq, hctx))
++ rq->end_io(rq, 0);
++ else if (refcount_dec_and_test(&rq->ref))
+ __blk_mq_free_request(rq);
+
+ return true;
+@@ -2841,6 +2843,8 @@ static unsigned int nr_hw_queues(struct blk_mq_tag_set *set)
+ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
+ struct request_queue *q)
+ {
++ int ret = -ENOMEM;
++
+ /* mark the queue as mq asap */
+ q->mq_ops = set->ops;
+
+@@ -2902,17 +2906,18 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
+ blk_mq_map_swqueue(q);
+
+ if (!(set->flags & BLK_MQ_F_NO_SCHED)) {
+- int ret;
+-
+ ret = elevator_init_mq(q);
+ if (ret)
+- return ERR_PTR(ret);
++ goto err_tag_set;
+ }
+
+ return q;
+
++err_tag_set:
++ blk_mq_del_queue_tag_set(q);
+ err_hctxs:
+ kfree(q->queue_hw_ctx);
++ q->nr_hw_queues = 0;
+ err_sys_init:
+ blk_mq_sysfs_deinit(q);
+ err_poll:
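
In the blk_mq_poll_stats_bkt() hunk above, the bucket index switches from request bytes to the new stats_sectors field; since a sector is 512 bytes, ilog2(bytes) == ilog2(sectors) + 9, which is why the old `- 9` correction disappears. A sketch of the bucketing with a small ilog2 (the gcc/clang __builtin_clz is an assumption here):

    #include <limits.h>

    static int ilog2_u(unsigned int v)
    {
        return v ? (int)(sizeof(v) * CHAR_BIT - 1) - __builtin_clz(v) : -1;
    }

    /* two buckets (read/write) per power-of-two size class */
    static int poll_stats_bkt(int ddir, unsigned int sectors)
    {
        int bucket = ddir + 2 * ilog2_u(sectors);

        return bucket < 0 ? -1 : bucket;  /* sectors == 0 maps to -1 */
    }
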
+diff --git a/block/blk-throttle.c b/block/blk-throttle.c
+index 8ab6c8153223..ee74bffe3504 100644
+--- a/block/blk-throttle.c
++++ b/block/blk-throttle.c
+@@ -2246,7 +2246,8 @@ void blk_throtl_stat_add(struct request *rq, u64 time_ns)
+ struct request_queue *q = rq->q;
+ struct throtl_data *td = q->td;
+
+- throtl_track_latency(td, rq->throtl_size, req_op(rq), time_ns >> 10);
++ throtl_track_latency(td, blk_rq_stats_sectors(rq), req_op(rq),
++ time_ns >> 10);
+ }
+
+ void blk_throtl_bio_endio(struct bio *bio)
+diff --git a/block/blk.h b/block/blk.h
+index de6b2e146d6e..d5edfd73d45e 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -19,6 +19,7 @@ struct blk_flush_queue {
+ unsigned int flush_queue_delayed:1;
+ unsigned int flush_pending_idx:1;
+ unsigned int flush_running_idx:1;
++ blk_status_t rq_status;
+ unsigned long flush_pending_since;
+ struct list_head flush_queue[2];
+ struct list_head flush_data_in_flight;
+@@ -47,6 +48,12 @@ static inline void __blk_get_queue(struct request_queue *q)
+ kobject_get(&q->kobj);
+ }
+
++static inline bool
++is_flush_rq(struct request *req, struct blk_mq_hw_ctx *hctx)
++{
++ return hctx->fq->flush_rq == req;
++}
++
+ struct blk_flush_queue *blk_alloc_flush_queue(struct request_queue *q,
+ int node, int cmd_size, gfp_t flags);
+ void blk_free_flush_queue(struct blk_flush_queue *q);
+diff --git a/block/mq-deadline.c b/block/mq-deadline.c
+index 2a2a2e82832e..35e84bc0ec8c 100644
+--- a/block/mq-deadline.c
++++ b/block/mq-deadline.c
+@@ -377,13 +377,6 @@ done:
+ * hardware queue, but we may return a request that is for a
+ * different hardware queue. This is because mq-deadline has shared
+ * state for all hardware queues, in terms of sorting, FIFOs, etc.
+- *
+- * For a zoned block device, __dd_dispatch_request() may return NULL
+- * if all the queued write requests are directed at zones that are already
+- * locked due to on-going write requests. In this case, make sure to mark
+- * the queue as needing a restart to ensure that the queue is run again
+- * and the pending writes dispatched once the target zones for the ongoing
+- * write requests are unlocked in dd_finish_request().
+ */
+ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
+ {
+@@ -392,9 +385,6 @@ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
+
+ spin_lock(&dd->lock);
+ rq = __dd_dispatch_request(dd);
+- if (!rq && blk_queue_is_zoned(hctx->queue) &&
+- !list_empty(&dd->fifo_list[WRITE]))
+- blk_mq_sched_mark_restart_hctx(hctx);
+ spin_unlock(&dd->lock);
+
+ return rq;
+@@ -561,6 +551,13 @@ static void dd_prepare_request(struct request *rq, struct bio *bio)
+ * spinlock so that the zone is never unlocked while deadline_fifo_request()
+ * or deadline_next_request() are executing. This function is called for
+ * all requests, whether or not these requests complete successfully.
++ *
++ * For a zoned block device, __dd_dispatch_request() may have stopped
++ * dispatching requests if all the queued requests are write requests directed
++ * at zones that are already locked due to on-going write requests. To ensure
++ * write request dispatch progress in this case, mark the queue as needing a
++ * restart to ensure that the queue is run again after completion of the
++ * request and zones being unlocked.
+ */
+ static void dd_finish_request(struct request *rq)
+ {
+@@ -572,6 +569,8 @@ static void dd_finish_request(struct request *rq)
+
+ spin_lock_irqsave(&dd->zone_lock, flags);
+ blk_req_zone_write_unlock(rq);
++ if (!list_empty(&dd->fifo_list[WRITE]))
++ blk_mq_sched_mark_restart_hctx(rq->mq_hctx);
+ spin_unlock_irqrestore(&dd->zone_lock, flags);
+ }
+ }
+diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
+index d696f165a50e..60bbc5090abe 100644
+--- a/drivers/acpi/acpi_lpss.c
++++ b/drivers/acpi/acpi_lpss.c
+@@ -219,12 +219,13 @@ static void bsw_pwm_setup(struct lpss_private_data *pdata)
+ }
+
+ static const struct lpss_device_desc lpt_dev_desc = {
+- .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_LTR,
++ .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_LTR
++ | LPSS_SAVE_CTX,
+ .prv_offset = 0x800,
+ };
+
+ static const struct lpss_device_desc lpt_i2c_dev_desc = {
+- .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_LTR,
++ .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_LTR | LPSS_SAVE_CTX,
+ .prv_offset = 0x800,
+ };
+
+@@ -236,7 +237,8 @@ static struct property_entry uart_properties[] = {
+ };
+
+ static const struct lpss_device_desc lpt_uart_dev_desc = {
+- .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_LTR,
++ .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_LTR
++ | LPSS_SAVE_CTX,
+ .clk_con_id = "baudclk",
+ .prv_offset = 0x800,
+ .setup = lpss_uart_setup,
+diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
+index 24f065114d42..2c4dda0787e8 100644
+--- a/drivers/acpi/acpi_processor.c
++++ b/drivers/acpi/acpi_processor.c
+@@ -279,9 +279,13 @@ static int acpi_processor_get_info(struct acpi_device *device)
+ }
+
+ if (acpi_duplicate_processor_id(pr->acpi_id)) {
+- dev_err(&device->dev,
+- "Failed to get unique processor _UID (0x%x)\n",
+- pr->acpi_id);
++ if (pr->acpi_id == 0xff)
++ dev_info_once(&device->dev,
++ "Entry not well-defined, consider updating BIOS\n");
++ else
++ dev_err(&device->dev,
++ "Failed to get unique processor _UID (0x%x)\n",
++ pr->acpi_id);
+ return -ENODEV;
+ }
+
+diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
+index a66e00fe31fe..66205ec54555 100644
+--- a/drivers/acpi/apei/ghes.c
++++ b/drivers/acpi/apei/ghes.c
+@@ -153,6 +153,7 @@ static void ghes_unmap(void __iomem *vaddr, enum fixed_addresses fixmap_idx)
+ int ghes_estatus_pool_init(int num_ghes)
+ {
+ unsigned long addr, len;
++ int rc;
+
+ ghes_estatus_pool = gen_pool_create(GHES_ESTATUS_POOL_MIN_ALLOC_ORDER, -1);
+ if (!ghes_estatus_pool)
+@@ -164,7 +165,7 @@ int ghes_estatus_pool_init(int num_ghes)
+ ghes_estatus_pool_size_request = PAGE_ALIGN(len);
+ addr = (unsigned long)vmalloc(PAGE_ALIGN(len));
+ if (!addr)
+- return -ENOMEM;
++ goto err_pool_alloc;
+
+ /*
+ * New allocation must be visible in all pgd before it can be found by
+@@ -172,7 +173,19 @@ int ghes_estatus_pool_init(int num_ghes)
+ */
+ vmalloc_sync_all();
+
+- return gen_pool_add(ghes_estatus_pool, addr, PAGE_ALIGN(len), -1);
++ rc = gen_pool_add(ghes_estatus_pool, addr, PAGE_ALIGN(len), -1);
++ if (rc)
++ goto err_pool_add;
++
++ return 0;
++
++err_pool_add:
++ vfree((void *)addr);
++
++err_pool_alloc:
++ gen_pool_destroy(ghes_estatus_pool);
++
++ return -ENOMEM;
+ }
+
+ static int map_gen_v2(struct ghes *ghes)
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index 15f103d7532b..3b2525908dd8 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -365,8 +365,10 @@ static int acpi_get_psd(struct cpc_desc *cpc_ptr, acpi_handle handle)
+ union acpi_object *psd = NULL;
+ struct acpi_psd_package *pdomain;
+
+- status = acpi_evaluate_object_typed(handle, "_PSD", NULL, &buffer,
+- ACPI_TYPE_PACKAGE);
++ status = acpi_evaluate_object_typed(handle, "_PSD", NULL,
++ &buffer, ACPI_TYPE_PACKAGE);
++ if (status == AE_NOT_FOUND) /* _PSD is optional */
++ return 0;
+ if (ACPI_FAILURE(status))
+ return -ENODEV;
+
+diff --git a/drivers/acpi/custom_method.c b/drivers/acpi/custom_method.c
+index b2ef4c2ec955..fd66a736621c 100644
+--- a/drivers/acpi/custom_method.c
++++ b/drivers/acpi/custom_method.c
+@@ -49,8 +49,10 @@ static ssize_t cm_write(struct file *file, const char __user * user_buf,
+ if ((*ppos > max_size) ||
+ (*ppos + count > max_size) ||
+ (*ppos + count < count) ||
+- (count > uncopied_bytes))
++ (count > uncopied_bytes)) {
++ kfree(buf);
+ return -EINVAL;
++ }
+
+ if (copy_from_user(buf + (*ppos), user_buf, count)) {
+ kfree(buf);
+@@ -70,6 +72,7 @@ static ssize_t cm_write(struct file *file, const char __user * user_buf,
+ add_taint(TAINT_OVERRIDDEN_ACPI_TABLE, LOCKDEP_NOW_UNRELIABLE);
+ }
+
++ kfree(buf);
+ return count;
+ }
+
+diff --git a/drivers/acpi/pci_irq.c b/drivers/acpi/pci_irq.c
+index d2549ae65e1b..dea8a60e18a4 100644
+--- a/drivers/acpi/pci_irq.c
++++ b/drivers/acpi/pci_irq.c
+@@ -449,8 +449,10 @@ int acpi_pci_irq_enable(struct pci_dev *dev)
+ * No IRQ known to the ACPI subsystem - maybe the BIOS /
+ * driver reported one, then use it. Exit in any case.
+ */
+- if (!acpi_pci_irq_valid(dev, pin))
++ if (!acpi_pci_irq_valid(dev, pin)) {
++ kfree(entry);
+ return 0;
++ }
+
+ if (acpi_isa_register_gsi(dev))
+ dev_warn(&dev->dev, "PCI INT %c: no GSI\n",
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index f7652baa6337..3e63294304c7 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -65,6 +65,12 @@ enum board_ids {
+ board_ahci_sb700, /* for SB700 and SB800 */
+ board_ahci_vt8251,
+
++ /*
++ * board IDs for Intel chipsets that support more than 6 ports
++ * *and* end up needing the PCS quirk.
++ */
++ board_ahci_pcs7,
++
+ /* aliases */
+ board_ahci_mcp_linux = board_ahci_mcp65,
+ board_ahci_mcp67 = board_ahci_mcp65,
+@@ -220,6 +226,12 @@ static const struct ata_port_info ahci_port_info[] = {
+ .udma_mask = ATA_UDMA6,
+ .port_ops = &ahci_vt8251_ops,
+ },
++ [board_ahci_pcs7] = {
++ .flags = AHCI_FLAG_COMMON,
++ .pio_mask = ATA_PIO4,
++ .udma_mask = ATA_UDMA6,
++ .port_ops = &ahci_ops,
++ },
+ };
+
+ static const struct pci_device_id ahci_pci_tbl[] = {
+@@ -264,26 +276,26 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ { PCI_VDEVICE(INTEL, 0x3b2b), board_ahci }, /* PCH RAID */
+ { PCI_VDEVICE(INTEL, 0x3b2c), board_ahci_mobile }, /* PCH M RAID */
+ { PCI_VDEVICE(INTEL, 0x3b2f), board_ahci }, /* PCH AHCI */
+- { PCI_VDEVICE(INTEL, 0x19b0), board_ahci }, /* DNV AHCI */
+- { PCI_VDEVICE(INTEL, 0x19b1), board_ahci }, /* DNV AHCI */
+- { PCI_VDEVICE(INTEL, 0x19b2), board_ahci }, /* DNV AHCI */
+- { PCI_VDEVICE(INTEL, 0x19b3), board_ahci }, /* DNV AHCI */
+- { PCI_VDEVICE(INTEL, 0x19b4), board_ahci }, /* DNV AHCI */
+- { PCI_VDEVICE(INTEL, 0x19b5), board_ahci }, /* DNV AHCI */
+- { PCI_VDEVICE(INTEL, 0x19b6), board_ahci }, /* DNV AHCI */
+- { PCI_VDEVICE(INTEL, 0x19b7), board_ahci }, /* DNV AHCI */
+- { PCI_VDEVICE(INTEL, 0x19bE), board_ahci }, /* DNV AHCI */
+- { PCI_VDEVICE(INTEL, 0x19bF), board_ahci }, /* DNV AHCI */
+- { PCI_VDEVICE(INTEL, 0x19c0), board_ahci }, /* DNV AHCI */
+- { PCI_VDEVICE(INTEL, 0x19c1), board_ahci }, /* DNV AHCI */
+- { PCI_VDEVICE(INTEL, 0x19c2), board_ahci }, /* DNV AHCI */
+- { PCI_VDEVICE(INTEL, 0x19c3), board_ahci }, /* DNV AHCI */
+- { PCI_VDEVICE(INTEL, 0x19c4), board_ahci }, /* DNV AHCI */
+- { PCI_VDEVICE(INTEL, 0x19c5), board_ahci }, /* DNV AHCI */
+- { PCI_VDEVICE(INTEL, 0x19c6), board_ahci }, /* DNV AHCI */
+- { PCI_VDEVICE(INTEL, 0x19c7), board_ahci }, /* DNV AHCI */
+- { PCI_VDEVICE(INTEL, 0x19cE), board_ahci }, /* DNV AHCI */
+- { PCI_VDEVICE(INTEL, 0x19cF), board_ahci }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19b0), board_ahci_pcs7 }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19b1), board_ahci_pcs7 }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19b2), board_ahci_pcs7 }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19b3), board_ahci_pcs7 }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19b4), board_ahci_pcs7 }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19b5), board_ahci_pcs7 }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19b6), board_ahci_pcs7 }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19b7), board_ahci_pcs7 }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19bE), board_ahci_pcs7 }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19bF), board_ahci_pcs7 }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19c0), board_ahci_pcs7 }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19c1), board_ahci_pcs7 }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19c2), board_ahci_pcs7 }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19c3), board_ahci_pcs7 }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19c4), board_ahci_pcs7 }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19c5), board_ahci_pcs7 }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19c6), board_ahci_pcs7 }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19c7), board_ahci_pcs7 }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19cE), board_ahci_pcs7 }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19cF), board_ahci_pcs7 }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x1c02), board_ahci }, /* CPT AHCI */
+ { PCI_VDEVICE(INTEL, 0x1c03), board_ahci_mobile }, /* CPT M AHCI */
+ { PCI_VDEVICE(INTEL, 0x1c04), board_ahci }, /* CPT RAID */
+@@ -623,30 +635,6 @@ static void ahci_pci_save_initial_config(struct pci_dev *pdev,
+ ahci_save_initial_config(&pdev->dev, hpriv);
+ }
+
+-static int ahci_pci_reset_controller(struct ata_host *host)
+-{
+- struct pci_dev *pdev = to_pci_dev(host->dev);
+- int rc;
+-
+- rc = ahci_reset_controller(host);
+- if (rc)
+- return rc;
+-
+- if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
+- struct ahci_host_priv *hpriv = host->private_data;
+- u16 tmp16;
+-
+- /* configure PCS */
+- pci_read_config_word(pdev, 0x92, &tmp16);
+- if ((tmp16 & hpriv->port_map) != hpriv->port_map) {
+- tmp16 |= hpriv->port_map;
+- pci_write_config_word(pdev, 0x92, tmp16);
+- }
+- }
+-
+- return 0;
+-}
+-
+ static void ahci_pci_init_controller(struct ata_host *host)
+ {
+ struct ahci_host_priv *hpriv = host->private_data;
+@@ -849,7 +837,7 @@ static int ahci_pci_device_runtime_resume(struct device *dev)
+ struct ata_host *host = pci_get_drvdata(pdev);
+ int rc;
+
+- rc = ahci_pci_reset_controller(host);
++ rc = ahci_reset_controller(host);
+ if (rc)
+ return rc;
+ ahci_pci_init_controller(host);
+@@ -884,7 +872,7 @@ static int ahci_pci_device_resume(struct device *dev)
+ ahci_mcp89_apple_enable(pdev);
+
+ if (pdev->dev.power.power_state.event == PM_EVENT_SUSPEND) {
+- rc = ahci_pci_reset_controller(host);
++ rc = ahci_reset_controller(host);
+ if (rc)
+ return rc;
+
+@@ -1619,6 +1607,34 @@ update_policy:
+ ap->target_lpm_policy = policy;
+ }
+
++static void ahci_intel_pcs_quirk(struct pci_dev *pdev, struct ahci_host_priv *hpriv)
++{
++ const struct pci_device_id *id = pci_match_id(ahci_pci_tbl, pdev);
++ u16 tmp16;
++
++ /*
++ * Only apply the 6-port PCS quirk for known legacy platforms.
++ */
++ if (!id || id->vendor != PCI_VENDOR_ID_INTEL)
++ return;
++ if (((enum board_ids) id->driver_data) < board_ahci_pcs7)
++ return;
++
++ /*
++ * port_map is determined from PORTS_IMPL PCI register which is
++ * implemented as write or write-once register. If the register
++ * isn't programmed, ahci automatically generates it from number
++ * of ports, which is good enough for PCS programming. It is
++ * otherwise expected that platform firmware enables the ports
++ * before the OS boots.
++ */
++ pci_read_config_word(pdev, PCS_6, &tmp16);
++ if ((tmp16 & hpriv->port_map) != hpriv->port_map) {
++ tmp16 |= hpriv->port_map;
++ pci_write_config_word(pdev, PCS_6, tmp16);
++ }
++}
++
+ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ {
+ unsigned int board_id = ent->driver_data;
+@@ -1731,6 +1747,12 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ /* save initial config */
+ ahci_pci_save_initial_config(pdev, hpriv);
+
++ /*
++ * If platform firmware failed to enable ports, try to enable
++ * them here.
++ */
++ ahci_intel_pcs_quirk(pdev, hpriv);
++
+ /* prepare host */
+ if (hpriv->cap & HOST_CAP_NCQ) {
+ pi.flags |= ATA_FLAG_NCQ;
+@@ -1840,7 +1862,7 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ if (rc)
+ return rc;
+
+- rc = ahci_pci_reset_controller(host);
++ rc = ahci_reset_controller(host);
+ if (rc)
+ return rc;
+
+diff --git a/drivers/ata/ahci.h b/drivers/ata/ahci.h
+index 0570629d719d..3dbf398c92ea 100644
+--- a/drivers/ata/ahci.h
++++ b/drivers/ata/ahci.h
+@@ -247,6 +247,8 @@ enum {
+ ATA_FLAG_ACPI_SATA | ATA_FLAG_AN,
+
+ ICH_MAP = 0x90, /* ICH MAP register */
++ PCS_6 = 0x92, /* 6 port PCS */
++ PCS_7 = 0x94, /* 7+ port PCS (Denverton) */
+
+ /* em constants */
+ EM_MAX_SLOTS = 8,
+diff --git a/drivers/base/soc.c b/drivers/base/soc.c
+index 10b280f30217..7e91894a380b 100644
+--- a/drivers/base/soc.c
++++ b/drivers/base/soc.c
+@@ -157,6 +157,7 @@ out2:
+ out1:
+ return ERR_PTR(ret);
+ }
++EXPORT_SYMBOL_GPL(soc_device_register);
+
+ /* Ensure soc_dev->attr is freed prior to calling soc_device_unregister. */
+ void soc_device_unregister(struct soc_device *soc_dev)
+@@ -166,6 +167,7 @@ void soc_device_unregister(struct soc_device *soc_dev)
+ device_unregister(&soc_dev->dev);
+ early_soc_dev_attr = NULL;
+ }
++EXPORT_SYMBOL_GPL(soc_device_unregister);
+
+ static int __init soc_bus_register(void)
+ {
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index ab7ca5989097..1410fa893653 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -1755,6 +1755,7 @@ static int lo_compat_ioctl(struct block_device *bdev, fmode_t mode,
+ case LOOP_SET_FD:
+ case LOOP_CHANGE_FD:
+ case LOOP_SET_BLOCK_SIZE:
++ case LOOP_SET_DIRECT_IO:
+ err = lo_ioctl(bdev, mode, cmd, arg);
+ break;
+ default:
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index e21d2ded732b..a69a90ad9208 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -357,8 +357,10 @@ static enum blk_eh_timer_return nbd_xmit_timeout(struct request *req,
+ }
+ config = nbd->config;
+
+- if (!mutex_trylock(&cmd->lock))
++ if (!mutex_trylock(&cmd->lock)) {
++ nbd_config_put(nbd);
+ return BLK_EH_RESET_TIMER;
++ }
+
+ if (config->num_connections > 1) {
+ dev_err_ratelimited(nbd_to_dev(nbd),
+diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
+index 9044d31ab1a1..8d53b8ef545c 100644
+--- a/drivers/char/hw_random/core.c
++++ b/drivers/char/hw_random/core.c
+@@ -67,7 +67,7 @@ static void add_early_randomness(struct hwrng *rng)
+ size_t size = min_t(size_t, 16, rng_buffer_size());
+
+ mutex_lock(&reading_mutex);
+- bytes_read = rng_get_data(rng, rng_buffer, size, 1);
++ bytes_read = rng_get_data(rng, rng_buffer, size, 0);
+ mutex_unlock(&reading_mutex);
+ if (bytes_read > 0)
+ add_device_randomness(rng_buffer, bytes_read);
+diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
+index 6707659cffd6..44bd3dda01c2 100644
+--- a/drivers/char/ipmi/ipmi_msghandler.c
++++ b/drivers/char/ipmi/ipmi_msghandler.c
+@@ -4215,7 +4215,53 @@ static int handle_one_recv_msg(struct ipmi_smi *intf,
+ int chan;
+
+ ipmi_debug_msg("Recv:", msg->rsp, msg->rsp_size);
+- if (msg->rsp_size < 2) {
++
++ if ((msg->data_size >= 2)
++ && (msg->data[0] == (IPMI_NETFN_APP_REQUEST << 2))
++ && (msg->data[1] == IPMI_SEND_MSG_CMD)
++ && (msg->user_data == NULL)) {
++
++ if (intf->in_shutdown)
++ goto free_msg;
++
++ /*
++ * This is the local response to a command send, start
++ * the timer for these. The user_data will not be
++ * NULL if this is a response send, and we will let
++ * response sends just go through.
++ */
++
++ /*
++ * Check for errors, if we get certain errors (ones
++ * that mean basically we can try again later), we
++ * ignore them and start the timer. Otherwise we
++ * report the error immediately.
++ */
++ if ((msg->rsp_size >= 3) && (msg->rsp[2] != 0)
++ && (msg->rsp[2] != IPMI_NODE_BUSY_ERR)
++ && (msg->rsp[2] != IPMI_LOST_ARBITRATION_ERR)
++ && (msg->rsp[2] != IPMI_BUS_ERR)
++ && (msg->rsp[2] != IPMI_NAK_ON_WRITE_ERR)) {
++ int ch = msg->rsp[3] & 0xf;
++ struct ipmi_channel *chans;
++
++ /* Got an error sending the message, handle it. */
++
++ chans = READ_ONCE(intf->channel_list)->c;
++ if ((chans[ch].medium == IPMI_CHANNEL_MEDIUM_8023LAN)
++ || (chans[ch].medium == IPMI_CHANNEL_MEDIUM_ASYNC))
++ ipmi_inc_stat(intf, sent_lan_command_errs);
++ else
++ ipmi_inc_stat(intf, sent_ipmb_command_errs);
++ intf_err_seq(intf, msg->msgid, msg->rsp[2]);
++ } else
++ /* The message was sent, start the timer. */
++ intf_start_seq_timer(intf, msg->msgid);
++free_msg:
++ requeue = 0;
++ goto out;
++
++ } else if (msg->rsp_size < 2) {
+ /* Message is too small to be correct. */
+ dev_warn(intf->si_dev,
+ "BMC returned too small a message for netfn %x cmd %x, got %d bytes\n",
+@@ -4472,62 +4518,16 @@ void ipmi_smi_msg_received(struct ipmi_smi *intf,
+ unsigned long flags = 0; /* keep us warning-free. */
+ int run_to_completion = intf->run_to_completion;
+
+- if ((msg->data_size >= 2)
+- && (msg->data[0] == (IPMI_NETFN_APP_REQUEST << 2))
+- && (msg->data[1] == IPMI_SEND_MSG_CMD)
+- && (msg->user_data == NULL)) {
+-
+- if (intf->in_shutdown)
+- goto free_msg;
+-
+- /*
+- * This is the local response to a command send, start
+- * the timer for these. The user_data will not be
+- * NULL if this is a response send, and we will let
+- * response sends just go through.
+- */
+-
+- /*
+- * Check for errors, if we get certain errors (ones
+- * that mean basically we can try again later), we
+- * ignore them and start the timer. Otherwise we
+- * report the error immediately.
+- */
+- if ((msg->rsp_size >= 3) && (msg->rsp[2] != 0)
+- && (msg->rsp[2] != IPMI_NODE_BUSY_ERR)
+- && (msg->rsp[2] != IPMI_LOST_ARBITRATION_ERR)
+- && (msg->rsp[2] != IPMI_BUS_ERR)
+- && (msg->rsp[2] != IPMI_NAK_ON_WRITE_ERR)) {
+- int ch = msg->rsp[3] & 0xf;
+- struct ipmi_channel *chans;
+-
+- /* Got an error sending the message, handle it. */
+-
+- chans = READ_ONCE(intf->channel_list)->c;
+- if ((chans[ch].medium == IPMI_CHANNEL_MEDIUM_8023LAN)
+- || (chans[ch].medium == IPMI_CHANNEL_MEDIUM_ASYNC))
+- ipmi_inc_stat(intf, sent_lan_command_errs);
+- else
+- ipmi_inc_stat(intf, sent_ipmb_command_errs);
+- intf_err_seq(intf, msg->msgid, msg->rsp[2]);
+- } else
+- /* The message was sent, start the timer. */
+- intf_start_seq_timer(intf, msg->msgid);
+-
+-free_msg:
+- ipmi_free_smi_msg(msg);
+- } else {
+- /*
+- * To preserve message order, we keep a queue and deliver from
+- * a tasklet.
+- */
+- if (!run_to_completion)
+- spin_lock_irqsave(&intf->waiting_rcv_msgs_lock, flags);
+- list_add_tail(&msg->link, &intf->waiting_rcv_msgs);
+- if (!run_to_completion)
+- spin_unlock_irqrestore(&intf->waiting_rcv_msgs_lock,
+- flags);
+- }
++ /*
++ * To preserve message order, we keep a queue and deliver from
++ * a tasklet.
++ */
++ if (!run_to_completion)
++ spin_lock_irqsave(&intf->waiting_rcv_msgs_lock, flags);
++ list_add_tail(&msg->link, &intf->waiting_rcv_msgs);
++ if (!run_to_completion)
++ spin_unlock_irqrestore(&intf->waiting_rcv_msgs_lock,
++ flags);
+
+ if (!run_to_completion)
+ spin_lock_irqsave(&intf->xmit_msgs_lock, flags);
+diff --git a/drivers/char/mem.c b/drivers/char/mem.c
+index b08dc50f9f26..9eb564c002f6 100644
+--- a/drivers/char/mem.c
++++ b/drivers/char/mem.c
+@@ -97,6 +97,13 @@ void __weak unxlate_dev_mem_ptr(phys_addr_t phys, void *addr)
+ }
+ #endif
+
++static inline bool should_stop_iteration(void)
++{
++ if (need_resched())
++ cond_resched();
++ return fatal_signal_pending(current);
++}
++
+ /*
+ * This funcion reads the *physical* memory. The f_pos points directly to the
+ * memory location.
+@@ -175,6 +182,8 @@ static ssize_t read_mem(struct file *file, char __user *buf,
+ p += sz;
+ count -= sz;
+ read += sz;
++ if (should_stop_iteration())
++ break;
+ }
+ kfree(bounce);
+
+@@ -251,6 +260,8 @@ static ssize_t write_mem(struct file *file, const char __user *buf,
+ p += sz;
+ count -= sz;
+ written += sz;
++ if (should_stop_iteration())
++ break;
+ }
+
+ *ppos += written;
+@@ -468,6 +479,10 @@ static ssize_t read_kmem(struct file *file, char __user *buf,
+ read += sz;
+ low_count -= sz;
+ count -= sz;
++ if (should_stop_iteration()) {
++ count = 0;
++ break;
++ }
+ }
+ }
+
+@@ -492,6 +507,8 @@ static ssize_t read_kmem(struct file *file, char __user *buf,
+ buf += sz;
+ read += sz;
+ p += sz;
++ if (should_stop_iteration())
++ break;
+ }
+ free_page((unsigned long)kbuf);
+ }
+@@ -544,6 +561,8 @@ static ssize_t do_write_kmem(unsigned long p, const char __user *buf,
+ p += sz;
+ count -= sz;
+ written += sz;
++ if (should_stop_iteration())
++ break;
+ }
+
+ *ppos += written;
+@@ -595,6 +614,8 @@ static ssize_t write_kmem(struct file *file, const char __user *buf,
+ buf += sz;
+ virtr += sz;
+ p += sz;
++ if (should_stop_iteration())
++ break;
+ }
+ free_page((unsigned long)kbuf);
+ }
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index 1b4f95c13e00..d7a3888ad80f 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -320,18 +320,22 @@ int tpm_pcr_extend(struct tpm_chip *chip, u32 pcr_idx,
+ if (!chip)
+ return -ENODEV;
+
+- for (i = 0; i < chip->nr_allocated_banks; i++)
+- if (digests[i].alg_id != chip->allocated_banks[i].alg_id)
+- return -EINVAL;
++ for (i = 0; i < chip->nr_allocated_banks; i++) {
++ if (digests[i].alg_id != chip->allocated_banks[i].alg_id) {
++ rc = EINVAL;
++ goto out;
++ }
++ }
+
+ if (chip->flags & TPM_CHIP_FLAG_TPM2) {
+ rc = tpm2_pcr_extend(chip, pcr_idx, digests);
+- tpm_put_ops(chip);
+- return rc;
++ goto out;
+ }
+
+ rc = tpm1_pcr_extend(chip, pcr_idx, digests[0].digest,
+ "attempting extend a PCR value");
++
++out:
+ tpm_put_ops(chip);
+ return rc;
+ }
+@@ -354,14 +358,9 @@ int tpm_send(struct tpm_chip *chip, void *cmd, size_t buflen)
+ if (!chip)
+ return -ENODEV;
+
+- rc = tpm_buf_init(&buf, 0, 0);
+- if (rc)
+- goto out;
+-
+- memcpy(buf.data, cmd, buflen);
++ buf.data = cmd;
+ rc = tpm_transmit_cmd(chip, &buf, 0, "attempting to a send a command");
+- tpm_buf_destroy(&buf);
+-out:
++
+ tpm_put_ops(chip);
+ return rc;
+ }
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index c3181ea9f271..270f43acbb77 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -980,6 +980,8 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ goto out_err;
+ }
+
++ tpm_chip_start(chip);
++ chip->flags |= TPM_CHIP_FLAG_IRQ;
+ if (irq) {
+ tpm_tis_probe_irq_single(chip, intmask, IRQF_SHARED,
+ irq);
+@@ -989,6 +991,7 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ } else {
+ tpm_tis_probe_irq(chip, intmask);
+ }
++ tpm_chip_stop(chip);
+ }
+
+ rc = tpm_chip_register(chip);
+diff --git a/drivers/cpufreq/armada-8k-cpufreq.c b/drivers/cpufreq/armada-8k-cpufreq.c
+index 988ebc326bdb..39e34f5066d3 100644
+--- a/drivers/cpufreq/armada-8k-cpufreq.c
++++ b/drivers/cpufreq/armada-8k-cpufreq.c
+@@ -136,6 +136,8 @@ static int __init armada_8k_cpufreq_init(void)
+
+ nb_cpus = num_possible_cpus();
+ freq_tables = kcalloc(nb_cpus, sizeof(*freq_tables), GFP_KERNEL);
++ if (!freq_tables)
++ return -ENOMEM;
+ cpumask_copy(&cpus, cpu_possible_mask);
+
+ /*
+diff --git a/drivers/cpufreq/imx-cpufreq-dt.c b/drivers/cpufreq/imx-cpufreq-dt.c
+index 4f85f3112784..35db14cf3102 100644
+--- a/drivers/cpufreq/imx-cpufreq-dt.c
++++ b/drivers/cpufreq/imx-cpufreq-dt.c
+@@ -16,6 +16,7 @@
+
+ #define OCOTP_CFG3_SPEED_GRADE_SHIFT 8
+ #define OCOTP_CFG3_SPEED_GRADE_MASK (0x3 << 8)
++#define IMX8MN_OCOTP_CFG3_SPEED_GRADE_MASK (0xf << 8)
+ #define OCOTP_CFG3_MKT_SEGMENT_SHIFT 6
+ #define OCOTP_CFG3_MKT_SEGMENT_MASK (0x3 << 6)
+
+@@ -34,7 +35,12 @@ static int imx_cpufreq_dt_probe(struct platform_device *pdev)
+ if (ret)
+ return ret;
+
+- speed_grade = (cell_value & OCOTP_CFG3_SPEED_GRADE_MASK) >> OCOTP_CFG3_SPEED_GRADE_SHIFT;
++ if (of_machine_is_compatible("fsl,imx8mn"))
++ speed_grade = (cell_value & IMX8MN_OCOTP_CFG3_SPEED_GRADE_MASK)
++ >> OCOTP_CFG3_SPEED_GRADE_SHIFT;
++ else
++ speed_grade = (cell_value & OCOTP_CFG3_SPEED_GRADE_MASK)
++ >> OCOTP_CFG3_SPEED_GRADE_SHIFT;
+ mkt_segment = (cell_value & OCOTP_CFG3_MKT_SEGMENT_MASK) >> OCOTP_CFG3_MKT_SEGMENT_SHIFT;
+
+ /*
+diff --git a/drivers/cpuidle/governors/teo.c b/drivers/cpuidle/governors/teo.c
+index 7d05efdbd3c6..12d9e6cecf1d 100644
+--- a/drivers/cpuidle/governors/teo.c
++++ b/drivers/cpuidle/governors/teo.c
+@@ -242,7 +242,7 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
+ int latency_req = cpuidle_governor_latency_req(dev->cpu);
+ unsigned int duration_us, count;
+- int max_early_idx, idx, i;
++ int max_early_idx, constraint_idx, idx, i;
+ ktime_t delta_tick;
+
+ if (cpu_data->last_state >= 0) {
+@@ -257,6 +257,7 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+
+ count = 0;
+ max_early_idx = -1;
++ constraint_idx = drv->state_count;
+ idx = -1;
+
+ for (i = 0; i < drv->state_count; i++) {
+@@ -286,16 +287,8 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ if (s->target_residency > duration_us)
+ break;
+
+- if (s->exit_latency > latency_req) {
+- /*
+- * If we break out of the loop for latency reasons, use
+- * the target residency of the selected state as the
+- * expected idle duration to avoid stopping the tick
+- * as long as that target residency is low enough.
+- */
+- duration_us = drv->states[idx].target_residency;
+- goto refine;
+- }
++ if (s->exit_latency > latency_req && constraint_idx > i)
++ constraint_idx = i;
+
+ idx = i;
+
+@@ -321,7 +314,13 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ duration_us = drv->states[idx].target_residency;
+ }
+
+-refine:
++ /*
++ * If there is a latency constraint, it may be necessary to use a
++ * shallower idle state than the one selected so far.
++ */
++ if (constraint_idx < idx)
++ idx = constraint_idx;
++
+ if (idx < 0) {
+ idx = 0; /* No states enabled. Must use 0. */
+ } else if (idx > 0) {
+@@ -331,13 +330,12 @@ refine:
+
+ /*
+ * Count and sum the most recent idle duration values less than
+- * the target residency of the state selected so far, find the
+- * max.
++ * the current expected idle duration value.
+ */
+ for (i = 0; i < INTERVALS; i++) {
+ unsigned int val = cpu_data->intervals[i];
+
+- if (val >= drv->states[idx].target_residency)
++ if (val >= duration_us)
+ continue;
+
+ count++;
+@@ -356,8 +354,10 @@ refine:
+ * would be too shallow.
+ */
+ if (!(tick_nohz_tick_stopped() && avg_us < TICK_USEC)) {
+- idx = teo_find_shallower_state(drv, dev, idx, avg_us);
+ duration_us = avg_us;
++ if (drv->states[idx].target_residency > avg_us)
++ idx = teo_find_shallower_state(drv, dev,
++ idx, avg_us);
+ }
+ }
+ }
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index ab22bf8a12d6..a0e19802149f 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -254,7 +254,7 @@ static struct devfreq_governor *try_then_request_governor(const char *name)
+ /* Restore previous state before return */
+ mutex_lock(&devfreq_list_lock);
+ if (err)
+- return ERR_PTR(err);
++ return (err < 0) ? ERR_PTR(err) : ERR_PTR(-EINVAL);
+
+ governor = find_devfreq_governor(name);
+ }
+diff --git a/drivers/devfreq/exynos-bus.c b/drivers/devfreq/exynos-bus.c
+index d9f377912c10..7c06df8bd74f 100644
+--- a/drivers/devfreq/exynos-bus.c
++++ b/drivers/devfreq/exynos-bus.c
+@@ -191,11 +191,10 @@ static void exynos_bus_exit(struct device *dev)
+ if (ret < 0)
+ dev_warn(dev, "failed to disable the devfreq-event devices\n");
+
+- if (bus->regulator)
+- regulator_disable(bus->regulator);
+-
+ dev_pm_opp_of_remove_table(dev);
+ clk_disable_unprepare(bus->clk);
++ if (bus->regulator)
++ regulator_disable(bus->regulator);
+ }
+
+ /*
+@@ -383,6 +382,7 @@ static int exynos_bus_probe(struct platform_device *pdev)
+ struct exynos_bus *bus;
+ int ret, max_state;
+ unsigned long min_freq, max_freq;
++ bool passive = false;
+
+ if (!np) {
+ dev_err(dev, "failed to find devicetree node\n");
+@@ -396,27 +396,27 @@ static int exynos_bus_probe(struct platform_device *pdev)
+ bus->dev = &pdev->dev;
+ platform_set_drvdata(pdev, bus);
+
+- /* Parse the device-tree to get the resource information */
+- ret = exynos_bus_parse_of(np, bus);
+- if (ret < 0)
+- return ret;
+-
+ profile = devm_kzalloc(dev, sizeof(*profile), GFP_KERNEL);
+- if (!profile) {
+- ret = -ENOMEM;
+- goto err;
+- }
++ if (!profile)
++ return -ENOMEM;
+
+ node = of_parse_phandle(dev->of_node, "devfreq", 0);
+ if (node) {
+ of_node_put(node);
+- goto passive;
++ passive = true;
+ } else {
+ ret = exynos_bus_parent_parse_of(np, bus);
++ if (ret < 0)
++ return ret;
+ }
+
++ /* Parse the device-tree to get the resource information */
++ ret = exynos_bus_parse_of(np, bus);
+ if (ret < 0)
+- goto err;
++ goto err_reg;
++
++ if (passive)
++ goto passive;
+
+ /* Initialize the struct profile and governor data for parent device */
+ profile->polling_ms = 50;
+@@ -507,6 +507,9 @@ out:
+ err:
+ dev_pm_opp_of_remove_table(dev);
+ clk_disable_unprepare(bus->clk);
++err_reg:
++ if (!passive)
++ regulator_disable(bus->regulator);
+
+ return ret;
+ }
+diff --git a/drivers/devfreq/governor_passive.c b/drivers/devfreq/governor_passive.c
+index 58308948b863..be6eeab9c814 100644
+--- a/drivers/devfreq/governor_passive.c
++++ b/drivers/devfreq/governor_passive.c
+@@ -149,7 +149,6 @@ static int devfreq_passive_notifier_call(struct notifier_block *nb,
+ static int devfreq_passive_event_handler(struct devfreq *devfreq,
+ unsigned int event, void *data)
+ {
+- struct device *dev = devfreq->dev.parent;
+ struct devfreq_passive_data *p_data
+ = (struct devfreq_passive_data *)devfreq->data;
+ struct devfreq *parent = (struct devfreq *)p_data->parent;
+@@ -165,12 +164,12 @@ static int devfreq_passive_event_handler(struct devfreq *devfreq,
+ p_data->this = devfreq;
+
+ nb->notifier_call = devfreq_passive_notifier_call;
+- ret = devm_devfreq_register_notifier(dev, parent, nb,
++ ret = devfreq_register_notifier(parent, nb,
+ DEVFREQ_TRANSITION_NOTIFIER);
+ break;
+ case DEVFREQ_GOV_STOP:
+- devm_devfreq_unregister_notifier(dev, parent, nb,
+- DEVFREQ_TRANSITION_NOTIFIER);
++ WARN_ON(devfreq_unregister_notifier(parent, nb,
++ DEVFREQ_TRANSITION_NOTIFIER));
+ break;
+ default:
+ break;
+diff --git a/drivers/dma/bcm2835-dma.c b/drivers/dma/bcm2835-dma.c
+index 8101ff2f05c1..970f654611bd 100644
+--- a/drivers/dma/bcm2835-dma.c
++++ b/drivers/dma/bcm2835-dma.c
+@@ -871,8 +871,10 @@ static int bcm2835_dma_probe(struct platform_device *pdev)
+ pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
+
+ rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+- if (rc)
++ if (rc) {
++ dev_err(&pdev->dev, "Unable to set DMA mask\n");
+ return rc;
++ }
+
+ od = devm_kzalloc(&pdev->dev, sizeof(*od), GFP_KERNEL);
+ if (!od)
+diff --git a/drivers/dma/iop-adma.c b/drivers/dma/iop-adma.c
+index c6c0143670d9..a776857d89c8 100644
+--- a/drivers/dma/iop-adma.c
++++ b/drivers/dma/iop-adma.c
+@@ -116,9 +116,9 @@ static void __iop_adma_slot_cleanup(struct iop_adma_chan *iop_chan)
+ list_for_each_entry_safe(iter, _iter, &iop_chan->chain,
+ chain_node) {
+ pr_debug("\tcookie: %d slot: %d busy: %d "
+- "this_desc: %#x next_desc: %#x ack: %d\n",
++ "this_desc: %#x next_desc: %#llx ack: %d\n",
+ iter->async_tx.cookie, iter->idx, busy,
+- iter->async_tx.phys, iop_desc_get_next_desc(iter),
++ iter->async_tx.phys, (u64)iop_desc_get_next_desc(iter),
+ async_tx_test_ack(&iter->async_tx));
+ prefetch(_iter);
+ prefetch(&_iter->async_tx);
+@@ -306,9 +306,9 @@ retry:
+ int i;
+ dev_dbg(iop_chan->device->common.dev,
+ "allocated slot: %d "
+- "(desc %p phys: %#x) slots_per_op %d\n",
++ "(desc %p phys: %#llx) slots_per_op %d\n",
+ iter->idx, iter->hw_desc,
+- iter->async_tx.phys, slots_per_op);
++ (u64)iter->async_tx.phys, slots_per_op);
+
+ /* pre-ack all but the last descriptor */
+ if (num_slots != slots_per_op)
+@@ -516,7 +516,7 @@ iop_adma_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dma_dest,
+ return NULL;
+ BUG_ON(len > IOP_ADMA_MAX_BYTE_COUNT);
+
+- dev_dbg(iop_chan->device->common.dev, "%s len: %u\n",
++ dev_dbg(iop_chan->device->common.dev, "%s len: %zu\n",
+ __func__, len);
+
+ spin_lock_bh(&iop_chan->lock);
+@@ -549,7 +549,7 @@ iop_adma_prep_dma_xor(struct dma_chan *chan, dma_addr_t dma_dest,
+ BUG_ON(len > IOP_ADMA_XOR_MAX_BYTE_COUNT);
+
+ dev_dbg(iop_chan->device->common.dev,
+- "%s src_cnt: %d len: %u flags: %lx\n",
++ "%s src_cnt: %d len: %zu flags: %lx\n",
+ __func__, src_cnt, len, flags);
+
+ spin_lock_bh(&iop_chan->lock);
+@@ -582,7 +582,7 @@ iop_adma_prep_dma_xor_val(struct dma_chan *chan, dma_addr_t *dma_src,
+ if (unlikely(!len))
+ return NULL;
+
+- dev_dbg(iop_chan->device->common.dev, "%s src_cnt: %d len: %u\n",
++ dev_dbg(iop_chan->device->common.dev, "%s src_cnt: %d len: %zu\n",
+ __func__, src_cnt, len);
+
+ spin_lock_bh(&iop_chan->lock);
+@@ -620,7 +620,7 @@ iop_adma_prep_dma_pq(struct dma_chan *chan, dma_addr_t *dst, dma_addr_t *src,
+ BUG_ON(len > IOP_ADMA_XOR_MAX_BYTE_COUNT);
+
+ dev_dbg(iop_chan->device->common.dev,
+- "%s src_cnt: %d len: %u flags: %lx\n",
++ "%s src_cnt: %d len: %zu flags: %lx\n",
+ __func__, src_cnt, len, flags);
+
+ if (dmaf_p_disabled_continue(flags))
+@@ -683,7 +683,7 @@ iop_adma_prep_dma_pq_val(struct dma_chan *chan, dma_addr_t *pq, dma_addr_t *src,
+ return NULL;
+ BUG_ON(len > IOP_ADMA_XOR_MAX_BYTE_COUNT);
+
+- dev_dbg(iop_chan->device->common.dev, "%s src_cnt: %d len: %u\n",
++ dev_dbg(iop_chan->device->common.dev, "%s src_cnt: %d len: %zu\n",
+ __func__, src_cnt, len);
+
+ spin_lock_bh(&iop_chan->lock);
+diff --git a/drivers/dma/ti/edma.c b/drivers/dma/ti/edma.c
+index ceabdea40ae0..982631d4e1f8 100644
+--- a/drivers/dma/ti/edma.c
++++ b/drivers/dma/ti/edma.c
+@@ -2273,9 +2273,6 @@ static int edma_probe(struct platform_device *pdev)
+
+ ecc->default_queue = info->default_queue;
+
+- for (i = 0; i < ecc->num_slots; i++)
+- edma_write_slot(ecc, i, &dummy_paramset);
+-
+ if (info->rsv) {
+ /* Set the reserved slots in inuse list */
+ rsv_slots = info->rsv->rsv_slots;
+@@ -2288,6 +2285,12 @@ static int edma_probe(struct platform_device *pdev)
+ }
+ }
+
++ for (i = 0; i < ecc->num_slots; i++) {
++ /* Reset only unused - not reserved - paRAM slots */
++ if (!test_bit(i, ecc->slot_inuse))
++ edma_write_slot(ecc, i, &dummy_paramset);
++ }
++
+ /* Clear the xbar mapped channels in unused list */
+ xbar_chans = info->xbar_chans;
+ if (xbar_chans) {
+diff --git a/drivers/edac/altera_edac.c b/drivers/edac/altera_edac.c
+index c2e693e34d43..bf024ec0116c 100644
+--- a/drivers/edac/altera_edac.c
++++ b/drivers/edac/altera_edac.c
+@@ -1866,6 +1866,7 @@ static void altr_edac_a10_irq_handler(struct irq_desc *desc)
+ struct altr_arria10_edac *edac = irq_desc_get_handler_data(desc);
+ struct irq_chip *chip = irq_desc_get_chip(desc);
+ int irq = irq_desc_get_irq(desc);
++ unsigned long bits;
+
+ dberr = (irq == edac->db_irq) ? 1 : 0;
+ sm_offset = dberr ? A10_SYSMGR_ECC_INTSTAT_DERR_OFST :
+@@ -1875,7 +1876,8 @@ static void altr_edac_a10_irq_handler(struct irq_desc *desc)
+
+ regmap_read(edac->ecc_mgr_map, sm_offset, &irq_status);
+
+- for_each_set_bit(bit, (unsigned long *)&irq_status, 32) {
++ bits = irq_status;
++ for_each_set_bit(bit, &bits, 32) {
+ irq = irq_linear_revmap(edac->domain, dberr * 32 + bit);
+ if (irq)
+ generic_handle_irq(irq);
+diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
+index 873437be86d9..608fdab566b3 100644
+--- a/drivers/edac/amd64_edac.c
++++ b/drivers/edac/amd64_edac.c
+@@ -810,7 +810,7 @@ static void debug_display_dimm_sizes_df(struct amd64_pvt *pvt, u8 ctrl)
+
+ edac_printk(KERN_DEBUG, EDAC_MC, "UMC%d chip selects:\n", ctrl);
+
+- for (dimm = 0; dimm < 4; dimm++) {
++ for (dimm = 0; dimm < 2; dimm++) {
+ size0 = 0;
+ cs0 = dimm * 2;
+
+@@ -942,89 +942,102 @@ static void prep_chip_selects(struct amd64_pvt *pvt)
+ } else if (pvt->fam == 0x15 && pvt->model == 0x30) {
+ pvt->csels[0].b_cnt = pvt->csels[1].b_cnt = 4;
+ pvt->csels[0].m_cnt = pvt->csels[1].m_cnt = 2;
++ } else if (pvt->fam >= 0x17) {
++ int umc;
++
++ for_each_umc(umc) {
++ pvt->csels[umc].b_cnt = 4;
++ pvt->csels[umc].m_cnt = 2;
++ }
++
+ } else {
+ pvt->csels[0].b_cnt = pvt->csels[1].b_cnt = 8;
+ pvt->csels[0].m_cnt = pvt->csels[1].m_cnt = 4;
+ }
+ }
+
++static void read_umc_base_mask(struct amd64_pvt *pvt)
++{
++ u32 umc_base_reg, umc_mask_reg;
++ u32 base_reg, mask_reg;
++ u32 *base, *mask;
++ int cs, umc;
++
++ for_each_umc(umc) {
++ umc_base_reg = get_umc_base(umc) + UMCCH_BASE_ADDR;
++
++ for_each_chip_select(cs, umc, pvt) {
++ base = &pvt->csels[umc].csbases[cs];
++
++ base_reg = umc_base_reg + (cs * 4);
++
++ if (!amd_smn_read(pvt->mc_node_id, base_reg, base))
++ edac_dbg(0, " DCSB%d[%d]=0x%08x reg: 0x%x\n",
++ umc, cs, *base, base_reg);
++ }
++
++ umc_mask_reg = get_umc_base(umc) + UMCCH_ADDR_MASK;
++
++ for_each_chip_select_mask(cs, umc, pvt) {
++ mask = &pvt->csels[umc].csmasks[cs];
++
++ mask_reg = umc_mask_reg + (cs * 4);
++
++ if (!amd_smn_read(pvt->mc_node_id, mask_reg, mask))
++ edac_dbg(0, " DCSM%d[%d]=0x%08x reg: 0x%x\n",
++ umc, cs, *mask, mask_reg);
++ }
++ }
++}
++
+ /*
+ * Function 2 Offset F10_DCSB0; read in the DCS Base and DCS Mask registers
+ */
+ static void read_dct_base_mask(struct amd64_pvt *pvt)
+ {
+- int base_reg0, base_reg1, mask_reg0, mask_reg1, cs;
++ int cs;
+
+ prep_chip_selects(pvt);
+
+- if (pvt->umc) {
+- base_reg0 = get_umc_base(0) + UMCCH_BASE_ADDR;
+- base_reg1 = get_umc_base(1) + UMCCH_BASE_ADDR;
+- mask_reg0 = get_umc_base(0) + UMCCH_ADDR_MASK;
+- mask_reg1 = get_umc_base(1) + UMCCH_ADDR_MASK;
+- } else {
+- base_reg0 = DCSB0;
+- base_reg1 = DCSB1;
+- mask_reg0 = DCSM0;
+- mask_reg1 = DCSM1;
+- }
++ if (pvt->umc)
++ return read_umc_base_mask(pvt);
+
+ for_each_chip_select(cs, 0, pvt) {
+- int reg0 = base_reg0 + (cs * 4);
+- int reg1 = base_reg1 + (cs * 4);
++ int reg0 = DCSB0 + (cs * 4);
++ int reg1 = DCSB1 + (cs * 4);
+ u32 *base0 = &pvt->csels[0].csbases[cs];
+ u32 *base1 = &pvt->csels[1].csbases[cs];
+
+- if (pvt->umc) {
+- if (!amd_smn_read(pvt->mc_node_id, reg0, base0))
+- edac_dbg(0, " DCSB0[%d]=0x%08x reg: 0x%x\n",
+- cs, *base0, reg0);
+-
+- if (!amd_smn_read(pvt->mc_node_id, reg1, base1))
+- edac_dbg(0, " DCSB1[%d]=0x%08x reg: 0x%x\n",
+- cs, *base1, reg1);
+- } else {
+- if (!amd64_read_dct_pci_cfg(pvt, 0, reg0, base0))
+- edac_dbg(0, " DCSB0[%d]=0x%08x reg: F2x%x\n",
+- cs, *base0, reg0);
++ if (!amd64_read_dct_pci_cfg(pvt, 0, reg0, base0))
++ edac_dbg(0, " DCSB0[%d]=0x%08x reg: F2x%x\n",
++ cs, *base0, reg0);
+
+- if (pvt->fam == 0xf)
+- continue;
++ if (pvt->fam == 0xf)
++ continue;
+
+- if (!amd64_read_dct_pci_cfg(pvt, 1, reg0, base1))
+- edac_dbg(0, " DCSB1[%d]=0x%08x reg: F2x%x\n",
+- cs, *base1, (pvt->fam == 0x10) ? reg1
+- : reg0);
+- }
++ if (!amd64_read_dct_pci_cfg(pvt, 1, reg0, base1))
++ edac_dbg(0, " DCSB1[%d]=0x%08x reg: F2x%x\n",
++ cs, *base1, (pvt->fam == 0x10) ? reg1
++ : reg0);
+ }
+
+ for_each_chip_select_mask(cs, 0, pvt) {
+- int reg0 = mask_reg0 + (cs * 4);
+- int reg1 = mask_reg1 + (cs * 4);
++ int reg0 = DCSM0 + (cs * 4);
++ int reg1 = DCSM1 + (cs * 4);
+ u32 *mask0 = &pvt->csels[0].csmasks[cs];
+ u32 *mask1 = &pvt->csels[1].csmasks[cs];
+
+- if (pvt->umc) {
+- if (!amd_smn_read(pvt->mc_node_id, reg0, mask0))
+- edac_dbg(0, " DCSM0[%d]=0x%08x reg: 0x%x\n",
+- cs, *mask0, reg0);
++ if (!amd64_read_dct_pci_cfg(pvt, 0, reg0, mask0))
++ edac_dbg(0, " DCSM0[%d]=0x%08x reg: F2x%x\n",
++ cs, *mask0, reg0);
+
+- if (!amd_smn_read(pvt->mc_node_id, reg1, mask1))
+- edac_dbg(0, " DCSM1[%d]=0x%08x reg: 0x%x\n",
+- cs, *mask1, reg1);
+- } else {
+- if (!amd64_read_dct_pci_cfg(pvt, 0, reg0, mask0))
+- edac_dbg(0, " DCSM0[%d]=0x%08x reg: F2x%x\n",
+- cs, *mask0, reg0);
+-
+- if (pvt->fam == 0xf)
+- continue;
++ if (pvt->fam == 0xf)
++ continue;
+
+- if (!amd64_read_dct_pci_cfg(pvt, 1, reg0, mask1))
+- edac_dbg(0, " DCSM1[%d]=0x%08x reg: F2x%x\n",
+- cs, *mask1, (pvt->fam == 0x10) ? reg1
+- : reg0);
+- }
++ if (!amd64_read_dct_pci_cfg(pvt, 1, reg0, mask1))
++ edac_dbg(0, " DCSM1[%d]=0x%08x reg: F2x%x\n",
++ cs, *mask1, (pvt->fam == 0x10) ? reg1
++ : reg0);
+ }
+ }
+
+@@ -2537,13 +2550,6 @@ static void decode_umc_error(int node_id, struct mce *m)
+
+ err.channel = find_umc_channel(m);
+
+- if (umc_normaddr_to_sysaddr(m->addr, pvt->mc_node_id, err.channel, &sys_addr)) {
+- err.err_code = ERR_NORM_ADDR;
+- goto log_error;
+- }
+-
+- error_address_to_page_and_offset(sys_addr, &err);
+-
+ if (!(m->status & MCI_STATUS_SYNDV)) {
+ err.err_code = ERR_SYND;
+ goto log_error;
+@@ -2560,6 +2566,13 @@ static void decode_umc_error(int node_id, struct mce *m)
+
+ err.csrow = m->synd & 0x7;
+
++ if (umc_normaddr_to_sysaddr(m->addr, pvt->mc_node_id, err.channel, &sys_addr)) {
++ err.err_code = ERR_NORM_ADDR;
++ goto log_error;
++ }
++
++ error_address_to_page_and_offset(sys_addr, &err);
++
+ log_error:
+ __log_ecc_error(mci, &err, ecc_type);
+ }
+@@ -3137,12 +3150,15 @@ static bool ecc_enabled(struct pci_dev *F3, u16 nid)
+ static inline void
+ f17h_determine_edac_ctl_cap(struct mem_ctl_info *mci, struct amd64_pvt *pvt)
+ {
+- u8 i, ecc_en = 1, cpk_en = 1;
++ u8 i, ecc_en = 1, cpk_en = 1, dev_x4 = 1, dev_x16 = 1;
+
+ for_each_umc(i) {
+ if (pvt->umc[i].sdp_ctrl & UMC_SDP_INIT) {
+ ecc_en &= !!(pvt->umc[i].umc_cap_hi & UMC_ECC_ENABLED);
+ cpk_en &= !!(pvt->umc[i].umc_cap_hi & UMC_ECC_CHIPKILL_CAP);
++
++ dev_x4 &= !!(pvt->umc[i].dimm_cfg & BIT(6));
++ dev_x16 &= !!(pvt->umc[i].dimm_cfg & BIT(7));
+ }
+ }
+
+@@ -3150,8 +3166,15 @@ f17h_determine_edac_ctl_cap(struct mem_ctl_info *mci, struct amd64_pvt *pvt)
+ if (ecc_en) {
+ mci->edac_ctl_cap |= EDAC_FLAG_SECDED;
+
+- if (cpk_en)
++ if (!cpk_en)
++ return;
++
++ if (dev_x4)
+ mci->edac_ctl_cap |= EDAC_FLAG_S4ECD4ED;
++ else if (dev_x16)
++ mci->edac_ctl_cap |= EDAC_FLAG_S16ECD16ED;
++ else
++ mci->edac_ctl_cap |= EDAC_FLAG_S8ECD8ED;
+ }
+ }
+
+diff --git a/drivers/edac/amd64_edac.h b/drivers/edac/amd64_edac.h
+index 8f66472f7adc..4dce6a2ac75f 100644
+--- a/drivers/edac/amd64_edac.h
++++ b/drivers/edac/amd64_edac.h
+@@ -96,6 +96,7 @@
+ /* Hardware limit on ChipSelect rows per MC and processors per system */
+ #define NUM_CHIPSELECTS 8
+ #define DRAM_RANGES 8
++#define NUM_CONTROLLERS 8
+
+ #define ON true
+ #define OFF false
+@@ -351,8 +352,8 @@ struct amd64_pvt {
+ u32 dbam0; /* DRAM Base Address Mapping reg for DCT0 */
+ u32 dbam1; /* DRAM Base Address Mapping reg for DCT1 */
+
+- /* one for each DCT */
+- struct chip_select csels[2];
++ /* one for each DCT/UMC */
++ struct chip_select csels[NUM_CONTROLLERS];
+
+ /* DRAM base and limit pairs F1x[78,70,68,60,58,50,48,40] */
+ struct dram_range ranges[DRAM_RANGES];
+diff --git a/drivers/edac/edac_mc.c b/drivers/edac/edac_mc.c
+index 64922c8fa7e3..d899d86897d0 100644
+--- a/drivers/edac/edac_mc.c
++++ b/drivers/edac/edac_mc.c
+@@ -1235,9 +1235,13 @@ void edac_mc_handle_error(const enum hw_event_mc_err_type type,
+ if (p > e->location)
+ *(p - 1) = '\0';
+
+- /* Report the error via the trace interface */
+- grain_bits = fls_long(e->grain) + 1;
++ /* Sanity-check driver-supplied grain value. */
++ if (WARN_ON_ONCE(!e->grain))
++ e->grain = 1;
++
++ grain_bits = fls_long(e->grain - 1);
+
++ /* Report the error via the trace interface */
+ if (IS_ENABLED(CONFIG_RAS))
+ trace_mc_event(type, e->msg, e->label, e->error_count,
+ mci->mc_idx, e->top_layer, e->mid_layer,
+diff --git a/drivers/edac/pnd2_edac.c b/drivers/edac/pnd2_edac.c
+index ca25f8fe57ef..1ad538baaa4a 100644
+--- a/drivers/edac/pnd2_edac.c
++++ b/drivers/edac/pnd2_edac.c
+@@ -260,11 +260,14 @@ static u64 get_sideband_reg_base_addr(void)
+ }
+ }
+
++#define DNV_MCHBAR_SIZE 0x8000
++#define DNV_SB_PORT_SIZE 0x10000
+ static int dnv_rd_reg(int port, int off, int op, void *data, size_t sz, char *name)
+ {
+ struct pci_dev *pdev;
+ char *base;
+ u64 addr;
++ unsigned long size;
+
+ if (op == 4) {
+ pdev = pci_get_device(PCI_VENDOR_ID_INTEL, 0x1980, NULL);
+@@ -279,15 +282,17 @@ static int dnv_rd_reg(int port, int off, int op, void *data, size_t sz, char *na
+ addr = get_mem_ctrl_hub_base_addr();
+ if (!addr)
+ return -ENODEV;
++ size = DNV_MCHBAR_SIZE;
+ } else {
+ /* MMIO via sideband register base address */
+ addr = get_sideband_reg_base_addr();
+ if (!addr)
+ return -ENODEV;
+ addr += (port << 16);
++ size = DNV_SB_PORT_SIZE;
+ }
+
+- base = ioremap((resource_size_t)addr, 0x10000);
++ base = ioremap((resource_size_t)addr, size);
+ if (!base)
+ return -ENODEV;
+
+diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
+index b5bc4c7a8fab..b49c9e6f4bf1 100644
+--- a/drivers/firmware/arm_scmi/driver.c
++++ b/drivers/firmware/arm_scmi/driver.c
+@@ -271,6 +271,14 @@ static void scmi_tx_prepare(struct mbox_client *cl, void *m)
+ struct scmi_chan_info *cinfo = client_to_scmi_chan_info(cl);
+ struct scmi_shared_mem __iomem *mem = cinfo->payload;
+
++ /*
++ * Ideally channel must be free by now unless OS timeout last
++ * request and platform continued to process the same, wait
++ * until it releases the shared memory, otherwise we may endup
++ * overwriting its response with new message payload or vice-versa
++ */
++ spin_until_cond(ioread32(&mem->channel_status) &
++ SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE);
+ /* Mark channel busy + clear error */
+ iowrite32(0x0, &mem->channel_status);
+ iowrite32(t->hdr.poll_completion ? 0 : SCMI_SHMEM_FLAG_INTR_ENABLED,
+diff --git a/drivers/firmware/efi/cper.c b/drivers/firmware/efi/cper.c
+index 8fa977c7861f..addf0749dd8b 100644
+--- a/drivers/firmware/efi/cper.c
++++ b/drivers/firmware/efi/cper.c
+@@ -390,6 +390,21 @@ static void cper_print_pcie(const char *pfx, const struct cper_sec_pcie *pcie,
+ printk(
+ "%s""bridge: secondary_status: 0x%04x, control: 0x%04x\n",
+ pfx, pcie->bridge.secondary_status, pcie->bridge.control);
++
++ /* Fatal errors call __ghes_panic() before AER handler prints this */
++ if ((pcie->validation_bits & CPER_PCIE_VALID_AER_INFO) &&
++ (gdata->error_severity & CPER_SEV_FATAL)) {
++ struct aer_capability_regs *aer;
++
++ aer = (struct aer_capability_regs *)pcie->aer_info;
++ printk("%saer_uncor_status: 0x%08x, aer_uncor_mask: 0x%08x\n",
++ pfx, aer->uncor_status, aer->uncor_mask);
++ printk("%saer_uncor_severity: 0x%08x\n",
++ pfx, aer->uncor_severity);
++ printk("%sTLP Header: %08x %08x %08x %08x\n", pfx,
++ aer->header_log.dw0, aer->header_log.dw1,
++ aer->header_log.dw2, aer->header_log.dw3);
++ }
+ }
+
+ static void cper_print_tstamp(const char *pfx,
+diff --git a/drivers/firmware/qcom_scm.c b/drivers/firmware/qcom_scm.c
+index 2ddc118dba1b..74b84244a0db 100644
+--- a/drivers/firmware/qcom_scm.c
++++ b/drivers/firmware/qcom_scm.c
+@@ -9,6 +9,7 @@
+ #include <linux/init.h>
+ #include <linux/cpumask.h>
+ #include <linux/export.h>
++#include <linux/dma-direct.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/module.h>
+ #include <linux/types.h>
+@@ -440,6 +441,7 @@ int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz,
+ phys_addr_t mem_to_map_phys;
+ phys_addr_t dest_phys;
+ phys_addr_t ptr_phys;
++ dma_addr_t ptr_dma;
+ size_t mem_to_map_sz;
+ size_t dest_sz;
+ size_t src_sz;
+@@ -457,9 +459,10 @@ int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz,
+ ptr_sz = ALIGN(src_sz, SZ_64) + ALIGN(mem_to_map_sz, SZ_64) +
+ ALIGN(dest_sz, SZ_64);
+
+- ptr = dma_alloc_coherent(__scm->dev, ptr_sz, &ptr_phys, GFP_KERNEL);
++ ptr = dma_alloc_coherent(__scm->dev, ptr_sz, &ptr_dma, GFP_KERNEL);
+ if (!ptr)
+ return -ENOMEM;
++ ptr_phys = dma_to_phys(__scm->dev, ptr_dma);
+
+ /* Fill source vmid detail */
+ src = ptr;
+@@ -489,7 +492,7 @@ int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz,
+
+ ret = __qcom_scm_assign_mem(__scm->dev, mem_to_map_phys, mem_to_map_sz,
+ ptr_phys, src_sz, dest_phys, dest_sz);
+- dma_free_coherent(__scm->dev, ALIGN(ptr_sz, SZ_64), ptr, ptr_phys);
++ dma_free_coherent(__scm->dev, ptr_sz, ptr, ptr_dma);
+ if (ret) {
+ dev_err(__scm->dev,
+ "Assign memory protection call failed %d.\n", ret);
+diff --git a/drivers/gpio/gpio-madera.c b/drivers/gpio/gpio-madera.c
+index 4dbc837d1215..be963113f672 100644
+--- a/drivers/gpio/gpio-madera.c
++++ b/drivers/gpio/gpio-madera.c
+@@ -136,6 +136,9 @@ static int madera_gpio_probe(struct platform_device *pdev)
+ madera_gpio->gpio_chip.parent = pdev->dev.parent;
+
+ switch (madera->type) {
++ case CS47L15:
++ madera_gpio->gpio_chip.ngpio = CS47L15_NUM_GPIOS;
++ break;
+ case CS47L35:
+ madera_gpio->gpio_chip.ngpio = CS47L35_NUM_GPIOS;
+ break;
+@@ -147,6 +150,11 @@ static int madera_gpio_probe(struct platform_device *pdev)
+ case CS47L91:
+ madera_gpio->gpio_chip.ngpio = CS47L90_NUM_GPIOS;
+ break;
++ case CS42L92:
++ case CS47L92:
++ case CS47L93:
++ madera_gpio->gpio_chip.ngpio = CS47L92_NUM_GPIOS;
++ break;
+ default:
+ dev_err(&pdev->dev, "Unknown chip variant %d\n", madera->type);
+ return -EINVAL;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index beb2d268d1ef..421ca93a8ab8 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2107,6 +2107,7 @@ static int amdgpu_dm_backlight_get_brightness(struct backlight_device *bd)
+ }
+
+ static const struct backlight_ops amdgpu_dm_backlight_ops = {
++ .options = BL_CORE_SUSPENDRESUME,
+ .get_brightness = amdgpu_dm_backlight_get_brightness,
+ .update_status = amdgpu_dm_backlight_update_status,
+ };
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c
+index 5cc3acccda2a..b1e657e137a9 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c
+@@ -98,11 +98,14 @@ uint32_t dce110_get_min_vblank_time_us(const struct dc_state *context)
+ struct dc_stream_state *stream = context->streams[j];
+ uint32_t vertical_blank_in_pixels = 0;
+ uint32_t vertical_blank_time = 0;
++ uint32_t vertical_total_min = stream->timing.v_total;
++ struct dc_crtc_timing_adjust adjust = stream->adjust;
++ if (adjust.v_total_max != adjust.v_total_min)
++ vertical_total_min = adjust.v_total_min;
+
+ vertical_blank_in_pixels = stream->timing.h_total *
+- (stream->timing.v_total
++ (vertical_total_min
+ - stream->timing.v_addressable);
+-
+ vertical_blank_time = vertical_blank_in_pixels
+ * 10000 / stream->timing.pix_clk_100hz;
+
+@@ -171,6 +174,10 @@ void dce11_pplib_apply_display_requirements(
+ struct dc_state *context)
+ {
+ struct dm_pp_display_configuration *pp_display_cfg = &context->pp_display_cfg;
++ int memory_type_multiplier = MEMORY_TYPE_MULTIPLIER_CZ;
++
++ if (dc->bw_vbios && dc->bw_vbios->memory_type == bw_def_hbm)
++ memory_type_multiplier = MEMORY_TYPE_HBM;
+
+ pp_display_cfg->all_displays_in_sync =
+ context->bw_ctx.bw.dce.all_displays_in_sync;
+@@ -183,8 +190,20 @@ void dce11_pplib_apply_display_requirements(
+ pp_display_cfg->cpu_pstate_separation_time =
+ context->bw_ctx.bw.dce.blackout_recovery_time_us;
+
+- pp_display_cfg->min_memory_clock_khz = context->bw_ctx.bw.dce.yclk_khz
+- / MEMORY_TYPE_MULTIPLIER_CZ;
++ /*
++ * TODO: determine whether the bandwidth has reached memory's limitation
++ * , then change minimum memory clock based on real-time bandwidth
++ * limitation.
++ */
++ if (ASICREV_IS_VEGA20_P(dc->ctx->asic_id.hw_internal_rev) && (context->stream_count >= 2)) {
++ pp_display_cfg->min_memory_clock_khz = max(pp_display_cfg->min_memory_clock_khz,
++ (uint32_t) div64_s64(
++ div64_s64(dc->bw_vbios->high_yclk.value,
++ memory_type_multiplier), 10000));
++ } else {
++ pp_display_cfg->min_memory_clock_khz = context->bw_ctx.bw.dce.yclk_khz
++ / memory_type_multiplier;
++ }
+
+ pp_display_cfg->min_engine_clock_khz = determine_sclk_from_bounding_box(
+ dc,
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c b/drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c
+index a24a2bda8656..1596ddcb26e6 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c
+@@ -148,7 +148,7 @@ static void dce_mi_program_pte_vm(
+ pte->min_pte_before_flip_horiz_scan;
+
+ REG_UPDATE(GRPH_PIPE_OUTSTANDING_REQUEST_LIMIT,
+- GRPH_PIPE_OUTSTANDING_REQUEST_LIMIT, 0xff);
++ GRPH_PIPE_OUTSTANDING_REQUEST_LIMIT, 0x7f);
+
+ REG_UPDATE_3(DVMM_PTE_CONTROL,
+ DVMM_PAGE_WIDTH, page_width,
+@@ -157,7 +157,7 @@ static void dce_mi_program_pte_vm(
+
+ REG_UPDATE_2(DVMM_PTE_ARB_CONTROL,
+ DVMM_PTE_REQ_PER_CHUNK, pte->pte_req_per_chunk,
+- DVMM_MAX_PTE_REQ_OUTSTANDING, 0xff);
++ DVMM_MAX_PTE_REQ_OUTSTANDING, 0x7f);
+ }
+
+ static void program_urgency_watermark(
+diff --git a/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c b/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
+index c6136e0ed1a4..7a04be74c9cf 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
+@@ -987,6 +987,10 @@ static void bw_calcs_data_update_from_pplib(struct dc *dc)
+ struct dm_pp_clock_levels_with_latency mem_clks = {0};
+ struct dm_pp_wm_sets_with_clock_ranges clk_ranges = {0};
+ struct dm_pp_clock_levels clks = {0};
++ int memory_type_multiplier = MEMORY_TYPE_MULTIPLIER_CZ;
++
++ if (dc->bw_vbios && dc->bw_vbios->memory_type == bw_def_hbm)
++ memory_type_multiplier = MEMORY_TYPE_HBM;
+
+ /*do system clock TODO PPLIB: after PPLIB implement,
+ * then remove old way
+@@ -1026,12 +1030,12 @@ static void bw_calcs_data_update_from_pplib(struct dc *dc)
+ &clks);
+
+ dc->bw_vbios->low_yclk = bw_frc_to_fixed(
+- clks.clocks_in_khz[0] * MEMORY_TYPE_MULTIPLIER_CZ, 1000);
++ clks.clocks_in_khz[0] * memory_type_multiplier, 1000);
+ dc->bw_vbios->mid_yclk = bw_frc_to_fixed(
+- clks.clocks_in_khz[clks.num_levels>>1] * MEMORY_TYPE_MULTIPLIER_CZ,
++ clks.clocks_in_khz[clks.num_levels>>1] * memory_type_multiplier,
+ 1000);
+ dc->bw_vbios->high_yclk = bw_frc_to_fixed(
+- clks.clocks_in_khz[clks.num_levels-1] * MEMORY_TYPE_MULTIPLIER_CZ,
++ clks.clocks_in_khz[clks.num_levels-1] * memory_type_multiplier,
+ 1000);
+
+ return;
+@@ -1067,12 +1071,12 @@ static void bw_calcs_data_update_from_pplib(struct dc *dc)
+ * YCLK = UMACLK*m_memoryTypeMultiplier
+ */
+ dc->bw_vbios->low_yclk = bw_frc_to_fixed(
+- mem_clks.data[0].clocks_in_khz * MEMORY_TYPE_MULTIPLIER_CZ, 1000);
++ mem_clks.data[0].clocks_in_khz * memory_type_multiplier, 1000);
+ dc->bw_vbios->mid_yclk = bw_frc_to_fixed(
+- mem_clks.data[mem_clks.num_levels>>1].clocks_in_khz * MEMORY_TYPE_MULTIPLIER_CZ,
++ mem_clks.data[mem_clks.num_levels>>1].clocks_in_khz * memory_type_multiplier,
+ 1000);
+ dc->bw_vbios->high_yclk = bw_frc_to_fixed(
+- mem_clks.data[mem_clks.num_levels-1].clocks_in_khz * MEMORY_TYPE_MULTIPLIER_CZ,
++ mem_clks.data[mem_clks.num_levels-1].clocks_in_khz * memory_type_multiplier,
+ 1000);
+
+ /* Now notify PPLib/SMU about which Watermarks sets they should select
+diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c b/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
+index 4a6ba3173a5a..ae38c9c7277c 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
+@@ -847,6 +847,8 @@ static void bw_calcs_data_update_from_pplib(struct dc *dc)
+ int i;
+ unsigned int clk;
+ unsigned int latency;
++ /*original logic in dal3*/
++ int memory_type_multiplier = MEMORY_TYPE_MULTIPLIER_CZ;
+
+ /*do system clock*/
+ if (!dm_pp_get_clock_levels_by_type_with_latency(
+@@ -905,13 +907,16 @@ static void bw_calcs_data_update_from_pplib(struct dc *dc)
+ * ALSO always convert UMA clock (from PPLIB) to YCLK (HW formula):
+ * YCLK = UMACLK*m_memoryTypeMultiplier
+ */
++ if (dc->bw_vbios->memory_type == bw_def_hbm)
++ memory_type_multiplier = MEMORY_TYPE_HBM;
++
+ dc->bw_vbios->low_yclk = bw_frc_to_fixed(
+- mem_clks.data[0].clocks_in_khz * MEMORY_TYPE_MULTIPLIER_CZ, 1000);
++ mem_clks.data[0].clocks_in_khz * memory_type_multiplier, 1000);
+ dc->bw_vbios->mid_yclk = bw_frc_to_fixed(
+- mem_clks.data[mem_clks.num_levels>>1].clocks_in_khz * MEMORY_TYPE_MULTIPLIER_CZ,
++ mem_clks.data[mem_clks.num_levels>>1].clocks_in_khz * memory_type_multiplier,
+ 1000);
+ dc->bw_vbios->high_yclk = bw_frc_to_fixed(
+- mem_clks.data[mem_clks.num_levels-1].clocks_in_khz * MEMORY_TYPE_MULTIPLIER_CZ,
++ mem_clks.data[mem_clks.num_levels-1].clocks_in_khz * memory_type_multiplier,
+ 1000);
+
+ /* Now notify PPLib/SMU about which Watermarks sets they should select
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/resource.h b/drivers/gpu/drm/amd/display/dc/inc/resource.h
+index 47f81072d7e9..c0424b4035a5 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/resource.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/resource.h
+@@ -31,6 +31,8 @@
+ #include "dm_pp_smu.h"
+
+ #define MEMORY_TYPE_MULTIPLIER_CZ 4
++#define MEMORY_TYPE_HBM 2
++
+
+ enum dce_version resource_parse_asic_id(
+ struct hw_asic_id asic_id);
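The define above makes the YCLK conversion in the dce112/dce120 hunks memory-type aware. As a worked example with an illustrative clock value: a 500 MHz UMACLK reported by PPLIB becomes YCLK = 500 * 4 = 2000 MHz on Carrizo-style DRAM (MEMORY_TYPE_MULTIPLIER_CZ) but YCLK = 500 * 2 = 1000 MHz on an HBM board (MEMORY_TYPE_HBM), per the HW formula YCLK = UMACLK*m_memoryTypeMultiplier quoted in those hunks.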
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+index 487aeee1cf8a..3c1084de5d59 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+@@ -4068,6 +4068,11 @@ static int smu7_program_display_gap(struct pp_hwmgr *hwmgr)
+
+ data->frame_time_x2 = frame_time_in_us * 2 / 100;
+
++ if (data->frame_time_x2 < 280) {
++ pr_debug("%s: enforce minimal VBITimeout: %d -> 280\n", __func__, data->frame_time_x2);
++ data->frame_time_x2 = 280;
++ }
++
+ display_gap2 = pre_vbi_time_in_us * (ref_clock / 100);
+
+ cgs_write_ind_register(hwmgr->device, CGS_IND_REG__SMC, ixCG_DISPLAY_GAP_CNTL2, display_gap2);
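To see what the 280 floor means in the formula above (refresh rates here are illustrative): at 60 Hz a frame lasts about 16667 us, so frame_time_x2 = 16667 * 2 / 100 = 333 and passes through unchanged, while at 120 Hz it is 8333 * 2 / 100 = 166 and gets raised to the enforced minimum of 280.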
+diff --git a/drivers/gpu/drm/drm_kms_helper_common.c b/drivers/gpu/drm/drm_kms_helper_common.c
+index d9a5ac81949e..221a8528c993 100644
+--- a/drivers/gpu/drm/drm_kms_helper_common.c
++++ b/drivers/gpu/drm/drm_kms_helper_common.c
+@@ -40,7 +40,7 @@ MODULE_LICENSE("GPL and additional rights");
+ /* Backward compatibility for drm_kms_helper.edid_firmware */
+ static int edid_firmware_set(const char *val, const struct kernel_param *kp)
+ {
+- DRM_NOTE("drm_kms_firmware.edid_firmware is deprecated, please use drm.edid_firmware instead.\n");
++ DRM_NOTE("drm_kms_helper.edid_firmware is deprecated, please use drm.edid_firmware instead.\n");
+
+ return __drm_set_edid_firmware_path(val);
+ }
+diff --git a/drivers/hwmon/acpi_power_meter.c b/drivers/hwmon/acpi_power_meter.c
+index 6ba1a08253f0..4cf25458f0b9 100644
+--- a/drivers/hwmon/acpi_power_meter.c
++++ b/drivers/hwmon/acpi_power_meter.c
+@@ -681,8 +681,8 @@ static int setup_attrs(struct acpi_power_meter_resource *resource)
+
+ if (resource->caps.flags & POWER_METER_CAN_CAP) {
+ if (!can_cap_in_hardware()) {
+- dev_err(&resource->acpi_dev->dev,
+- "Ignoring unsafe software power cap!\n");
++ dev_warn(&resource->acpi_dev->dev,
++ "Ignoring unsafe software power cap!\n");
+ goto skip_unsafe_cap;
+ }
+
+diff --git a/drivers/hwmon/k10temp.c b/drivers/hwmon/k10temp.c
+index c77e89239dcd..5c1dddde193c 100644
+--- a/drivers/hwmon/k10temp.c
++++ b/drivers/hwmon/k10temp.c
+@@ -349,6 +349,7 @@ static const struct pci_device_id k10temp_id_table[] = {
+ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_17H_DF_F3) },
+ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_17H_M10H_DF_F3) },
+ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_17H_M30H_DF_F3) },
++ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_17H_M70H_DF_F3) },
+ { PCI_VDEVICE(HYGON, PCI_DEVICE_ID_AMD_17H_DF_F3) },
+ {}
+ };
+diff --git a/drivers/i2c/busses/i2c-riic.c b/drivers/i2c/busses/i2c-riic.c
+index f31413fd9521..800414886f6b 100644
+--- a/drivers/i2c/busses/i2c-riic.c
++++ b/drivers/i2c/busses/i2c-riic.c
+@@ -202,6 +202,7 @@ static irqreturn_t riic_tend_isr(int irq, void *data)
+ if (readb(riic->base + RIIC_ICSR2) & ICSR2_NACKF) {
+ /* We got a NACKIE */
+ readb(riic->base + RIIC_ICDRR); /* dummy read */
++ riic_clear_set_bit(riic, ICSR2_NACKF, 0, RIIC_ICSR2);
+ riic->err = -ENXIO;
+ } else if (riic->bytes_left) {
+ return IRQ_NONE;
+diff --git a/drivers/infiniband/core/addr.c b/drivers/infiniband/core/addr.c
+index 9b76a8fcdd24..bf539c34ccd3 100644
+--- a/drivers/infiniband/core/addr.c
++++ b/drivers/infiniband/core/addr.c
+@@ -352,7 +352,7 @@ static bool has_gateway(const struct dst_entry *dst, sa_family_t family)
+
+ if (family == AF_INET) {
+ rt = container_of(dst, struct rtable, dst);
+- return rt->rt_gw_family == AF_INET;
++ return rt->rt_uses_gateway;
+ }
+
+ rt6 = container_of(dst, struct rt6_info, dst);
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index 7ddd0e5bc6b3..bb8b71cc3821 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -3484,7 +3484,8 @@ static int __uverbs_create_xsrq(struct uverbs_attr_bundle *attrs,
+
+ err_copy:
+ ib_destroy_srq_user(srq, uverbs_get_cleared_udata(attrs));
+-
++ /* It was released in ib_destroy_srq_user */
++ srq = NULL;
+ err_free:
+ kfree(srq);
+ err_put:
+diff --git a/drivers/infiniband/hw/hfi1/mad.c b/drivers/infiniband/hw/hfi1/mad.c
+index 184dba3c2828..d8ff063a5419 100644
+--- a/drivers/infiniband/hw/hfi1/mad.c
++++ b/drivers/infiniband/hw/hfi1/mad.c
+@@ -2326,7 +2326,7 @@ struct opa_port_status_req {
+ __be32 vl_select_mask;
+ };
+
+-#define VL_MASK_ALL 0x000080ff
++#define VL_MASK_ALL 0x00000000000080ffUL
+
+ struct opa_port_status_rsp {
+ __u8 port_num;
+@@ -2625,15 +2625,14 @@ static int pma_get_opa_classportinfo(struct opa_pma_mad *pmp,
+ }
+
+ static void a0_portstatus(struct hfi1_pportdata *ppd,
+- struct opa_port_status_rsp *rsp, u32 vl_select_mask)
++ struct opa_port_status_rsp *rsp)
+ {
+ if (!is_bx(ppd->dd)) {
+ unsigned long vl;
+ u64 sum_vl_xmit_wait = 0;
+- u32 vl_all_mask = VL_MASK_ALL;
++ unsigned long vl_all_mask = VL_MASK_ALL;
+
+- for_each_set_bit(vl, (unsigned long *)&(vl_all_mask),
+- 8 * sizeof(vl_all_mask)) {
++ for_each_set_bit(vl, &vl_all_mask, BITS_PER_LONG) {
+ u64 tmp = sum_vl_xmit_wait +
+ read_port_cntr(ppd, C_TX_WAIT_VL,
+ idx_from_vl(vl));
+@@ -2730,12 +2729,12 @@ static int pma_get_opa_portstatus(struct opa_pma_mad *pmp,
+ (struct opa_port_status_req *)pmp->data;
+ struct hfi1_devdata *dd = dd_from_ibdev(ibdev);
+ struct opa_port_status_rsp *rsp;
+- u32 vl_select_mask = be32_to_cpu(req->vl_select_mask);
++ unsigned long vl_select_mask = be32_to_cpu(req->vl_select_mask);
+ unsigned long vl;
+ size_t response_data_size;
+ u32 nports = be32_to_cpu(pmp->mad_hdr.attr_mod) >> 24;
+ u8 port_num = req->port_num;
+- u8 num_vls = hweight32(vl_select_mask);
++ u8 num_vls = hweight64(vl_select_mask);
+ struct _vls_pctrs *vlinfo;
+ struct hfi1_ibport *ibp = to_iport(ibdev, port);
+ struct hfi1_pportdata *ppd = ppd_from_ibp(ibp);
+@@ -2770,7 +2769,7 @@ static int pma_get_opa_portstatus(struct opa_pma_mad *pmp,
+
+ hfi1_read_link_quality(dd, &rsp->link_quality_indicator);
+
+- rsp->vl_select_mask = cpu_to_be32(vl_select_mask);
++ rsp->vl_select_mask = cpu_to_be32((u32)vl_select_mask);
+ rsp->port_xmit_data = cpu_to_be64(read_dev_cntr(dd, C_DC_XMIT_FLITS,
+ CNTR_INVALID_VL));
+ rsp->port_rcv_data = cpu_to_be64(read_dev_cntr(dd, C_DC_RCV_FLITS,
+@@ -2841,8 +2840,7 @@ static int pma_get_opa_portstatus(struct opa_pma_mad *pmp,
+ * So in the for_each_set_bit() loop below, we don't need
+ * any additional checks for vl.
+ */
+- for_each_set_bit(vl, (unsigned long *)&(vl_select_mask),
+- 8 * sizeof(vl_select_mask)) {
++ for_each_set_bit(vl, &vl_select_mask, BITS_PER_LONG) {
+ memset(vlinfo, 0, sizeof(*vlinfo));
+
+ tmp = read_dev_cntr(dd, C_DC_RX_FLIT_VL, idx_from_vl(vl));
+@@ -2883,7 +2881,7 @@ static int pma_get_opa_portstatus(struct opa_pma_mad *pmp,
+ vfi++;
+ }
+
+- a0_portstatus(ppd, rsp, vl_select_mask);
++ a0_portstatus(ppd, rsp);
+
+ if (resp_len)
+ *resp_len += response_data_size;
+@@ -2930,16 +2928,14 @@ static u64 get_error_counter_summary(struct ib_device *ibdev, u8 port,
+ return error_counter_summary;
+ }
+
+-static void a0_datacounters(struct hfi1_pportdata *ppd, struct _port_dctrs *rsp,
+- u32 vl_select_mask)
++static void a0_datacounters(struct hfi1_pportdata *ppd, struct _port_dctrs *rsp)
+ {
+ if (!is_bx(ppd->dd)) {
+ unsigned long vl;
+ u64 sum_vl_xmit_wait = 0;
+- u32 vl_all_mask = VL_MASK_ALL;
++ unsigned long vl_all_mask = VL_MASK_ALL;
+
+- for_each_set_bit(vl, (unsigned long *)&(vl_all_mask),
+- 8 * sizeof(vl_all_mask)) {
++ for_each_set_bit(vl, &vl_all_mask, BITS_PER_LONG) {
+ u64 tmp = sum_vl_xmit_wait +
+ read_port_cntr(ppd, C_TX_WAIT_VL,
+ idx_from_vl(vl));
+@@ -2994,7 +2990,7 @@ static int pma_get_opa_datacounters(struct opa_pma_mad *pmp,
+ u64 port_mask;
+ u8 port_num;
+ unsigned long vl;
+- u32 vl_select_mask;
++ unsigned long vl_select_mask;
+ int vfi;
+ u16 link_width;
+ u16 link_speed;
+@@ -3071,8 +3067,7 @@ static int pma_get_opa_datacounters(struct opa_pma_mad *pmp,
+ * So in the for_each_set_bit() loop below, we don't need
+ * any additional checks for vl.
+ */
+- for_each_set_bit(vl, (unsigned long *)&(vl_select_mask),
+- 8 * sizeof(req->vl_select_mask)) {
++ for_each_set_bit(vl, &vl_select_mask, BITS_PER_LONG) {
+ memset(vlinfo, 0, sizeof(*vlinfo));
+
+ rsp->vls[vfi].port_vl_xmit_data =
+@@ -3120,7 +3115,7 @@ static int pma_get_opa_datacounters(struct opa_pma_mad *pmp,
+ vfi++;
+ }
+
+- a0_datacounters(ppd, rsp, vl_select_mask);
++ a0_datacounters(ppd, rsp);
+
+ if (resp_len)
+ *resp_len += response_data_size;
+@@ -3215,7 +3210,7 @@ static int pma_get_opa_porterrors(struct opa_pma_mad *pmp,
+ struct _vls_ectrs *vlinfo;
+ unsigned long vl;
+ u64 port_mask, tmp;
+- u32 vl_select_mask;
++ unsigned long vl_select_mask;
+ int vfi;
+
+ req = (struct opa_port_error_counters64_msg *)pmp->data;
+@@ -3273,8 +3268,7 @@ static int pma_get_opa_porterrors(struct opa_pma_mad *pmp,
+ vlinfo = &rsp->vls[0];
+ vfi = 0;
+ vl_select_mask = be32_to_cpu(req->vl_select_mask);
+- for_each_set_bit(vl, (unsigned long *)&(vl_select_mask),
+- 8 * sizeof(req->vl_select_mask)) {
++ for_each_set_bit(vl, &vl_select_mask, BITS_PER_LONG) {
+ memset(vlinfo, 0, sizeof(*vlinfo));
+ rsp->vls[vfi].port_vl_xmit_discards =
+ cpu_to_be64(read_port_cntr(ppd, C_SW_XMIT_DSCD_VL,
+@@ -3485,7 +3479,7 @@ static int pma_set_opa_portstatus(struct opa_pma_mad *pmp,
+ u32 nports = be32_to_cpu(pmp->mad_hdr.attr_mod) >> 24;
+ u64 portn = be64_to_cpu(req->port_select_mask[3]);
+ u32 counter_select = be32_to_cpu(req->counter_select_mask);
+- u32 vl_select_mask = VL_MASK_ALL; /* clear all per-vl cnts */
++ unsigned long vl_select_mask = VL_MASK_ALL; /* clear all per-vl cnts */
+ unsigned long vl;
+
+ if ((nports != 1) || (portn != 1 << port)) {
+@@ -3579,8 +3573,7 @@ static int pma_set_opa_portstatus(struct opa_pma_mad *pmp,
+ if (counter_select & CS_UNCORRECTABLE_ERRORS)
+ write_dev_cntr(dd, C_DC_UNC_ERR, CNTR_INVALID_VL, 0);
+
+- for_each_set_bit(vl, (unsigned long *)&(vl_select_mask),
+- 8 * sizeof(vl_select_mask)) {
++ for_each_set_bit(vl, &vl_select_mask, BITS_PER_LONG) {
+ if (counter_select & CS_PORT_XMIT_DATA)
+ write_port_cntr(ppd, C_TX_FLIT_VL, idx_from_vl(vl), 0);
+
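The vl_select_mask hunks above all fix the same pattern: casting a u32's address to (unsigned long *) and scanning 8 * sizeof(mask) bits makes for_each_set_bit() read a full long, which overruns the variable and picks up the wrong bytes on 64-bit big-endian machines. A minimal userspace sketch of the safe form, with the mask widened to a full word as the patch does (the names and printf are illustrative only):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
            /* VL_MASK_ALL held in a whole unsigned long */
            unsigned long mask = 0x80ffUL;
            unsigned int bit;

            /* scanning sizeof(mask) * CHAR_BIT bits is now exact */
            for (bit = 0; bit < sizeof(mask) * CHAR_BIT; bit++)
                    if (mask & (1UL << bit))
                            printf("vl %u selected\n", bit);
            return 0;
    }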
+diff --git a/drivers/infiniband/hw/hfi1/verbs.c b/drivers/infiniband/hw/hfi1/verbs.c
+index 646f61545ed6..9f53f63b1453 100644
+--- a/drivers/infiniband/hw/hfi1/verbs.c
++++ b/drivers/infiniband/hw/hfi1/verbs.c
+@@ -874,16 +874,17 @@ int hfi1_verbs_send_dma(struct rvt_qp *qp, struct hfi1_pkt_state *ps,
+ else
+ pbc |= (ib_is_sc5(sc5) << PBC_DC_INFO_SHIFT);
+
+- if (unlikely(hfi1_dbg_should_fault_tx(qp, ps->opcode)))
+- pbc = hfi1_fault_tx(qp, ps->opcode, pbc);
+ pbc = create_pbc(ppd,
+ pbc,
+ qp->srate_mbps,
+ vl,
+ plen);
+
+- /* Update HCRC based on packet opcode */
+- pbc = update_hcrc(ps->opcode, pbc);
++ if (unlikely(hfi1_dbg_should_fault_tx(qp, ps->opcode)))
++ pbc = hfi1_fault_tx(qp, ps->opcode, pbc);
++ else
++ /* Update HCRC based on packet opcode */
++ pbc = update_hcrc(ps->opcode, pbc);
+ }
+ tx->wqe = qp->s_wqe;
+ ret = build_verbs_tx_desc(tx->sde, len, tx, ahg_info, pbc);
+@@ -1030,12 +1031,12 @@ int hfi1_verbs_send_pio(struct rvt_qp *qp, struct hfi1_pkt_state *ps,
+ else
+ pbc |= (ib_is_sc5(sc5) << PBC_DC_INFO_SHIFT);
+
++ pbc = create_pbc(ppd, pbc, qp->srate_mbps, vl, plen);
+ if (unlikely(hfi1_dbg_should_fault_tx(qp, ps->opcode)))
+ pbc = hfi1_fault_tx(qp, ps->opcode, pbc);
+- pbc = create_pbc(ppd, pbc, qp->srate_mbps, vl, plen);
+-
+- /* Update HCRC based on packet opcode */
+- pbc = update_hcrc(ps->opcode, pbc);
++ else
++ /* Update HCRC based on packet opcode */
++ pbc = update_hcrc(ps->opcode, pbc);
+ }
+ if (cb)
+ iowait_pio_inc(&priv->s_iowait);
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index 0569bcab02d4..14807ea8dc3f 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -6959,6 +6959,7 @@ static void mlx5_ib_remove(struct mlx5_core_dev *mdev, void *context)
+ mlx5_ib_unbind_slave_port(mpi->ibdev, mpi);
+ list_del(&mpi->list);
+ mutex_unlock(&mlx5_ib_multiport_mutex);
++ kfree(mpi);
+ return;
+ }
+
+diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
+index f13f36ae1af6..c6a277e69848 100644
+--- a/drivers/iommu/Makefile
++++ b/drivers/iommu/Makefile
+@@ -10,7 +10,7 @@ obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o
+ obj-$(CONFIG_IOMMU_IOVA) += iova.o
+ obj-$(CONFIG_OF_IOMMU) += of_iommu.o
+ obj-$(CONFIG_MSM_IOMMU) += msm_iommu.o
+-obj-$(CONFIG_AMD_IOMMU) += amd_iommu.o amd_iommu_init.o
++obj-$(CONFIG_AMD_IOMMU) += amd_iommu.o amd_iommu_init.o amd_iommu_quirks.o
+ obj-$(CONFIG_AMD_IOMMU_DEBUGFS) += amd_iommu_debugfs.o
+ obj-$(CONFIG_AMD_IOMMU_V2) += amd_iommu_v2.o
+ obj-$(CONFIG_ARM_SMMU) += arm-smmu.o
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index 61de81965c44..e1259429ded2 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -2577,7 +2577,9 @@ static int map_sg(struct device *dev, struct scatterlist *sglist,
+
+ bus_addr = address + s->dma_address + (j << PAGE_SHIFT);
+ phys_addr = (sg_phys(s) & PAGE_MASK) + (j << PAGE_SHIFT);
+- ret = iommu_map_page(domain, bus_addr, phys_addr, PAGE_SIZE, prot, GFP_ATOMIC);
++ ret = iommu_map_page(domain, bus_addr, phys_addr,
++ PAGE_SIZE, prot,
++ GFP_ATOMIC | __GFP_NOWARN);
+ if (ret)
+ goto out_unmap;
+
+diff --git a/drivers/iommu/amd_iommu.h b/drivers/iommu/amd_iommu.h
+new file mode 100644
+index 000000000000..12d540d9b59b
+--- /dev/null
++++ b/drivers/iommu/amd_iommu.h
+@@ -0,0 +1,14 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++
++#ifndef AMD_IOMMU_H
++#define AMD_IOMMU_H
++
++int __init add_special_device(u8 type, u8 id, u16 *devid, bool cmd_line);
++
++#ifdef CONFIG_DMI
++void amd_iommu_apply_ivrs_quirks(void);
++#else
++static inline void amd_iommu_apply_ivrs_quirks(void) { }
++#endif
++
++#endif
+diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c
+index 4413aa67000e..568c52317757 100644
+--- a/drivers/iommu/amd_iommu_init.c
++++ b/drivers/iommu/amd_iommu_init.c
+@@ -32,6 +32,7 @@
+ #include <asm/irq_remapping.h>
+
+ #include <linux/crash_dump.h>
++#include "amd_iommu.h"
+ #include "amd_iommu_proto.h"
+ #include "amd_iommu_types.h"
+ #include "irq_remapping.h"
+@@ -1002,7 +1003,7 @@ static void __init set_dev_entry_from_acpi(struct amd_iommu *iommu,
+ set_iommu_for_device(iommu, devid);
+ }
+
+-static int __init add_special_device(u8 type, u8 id, u16 *devid, bool cmd_line)
++int __init add_special_device(u8 type, u8 id, u16 *devid, bool cmd_line)
+ {
+ struct devid_map *entry;
+ struct list_head *list;
+@@ -1153,6 +1154,8 @@ static int __init init_iommu_from_acpi(struct amd_iommu *iommu,
+ if (ret)
+ return ret;
+
++ amd_iommu_apply_ivrs_quirks();
++
+ /*
+ * First save the recommended feature enable bits from ACPI
+ */
+diff --git a/drivers/iommu/amd_iommu_quirks.c b/drivers/iommu/amd_iommu_quirks.c
+new file mode 100644
+index 000000000000..c235f79b7a20
+--- /dev/null
++++ b/drivers/iommu/amd_iommu_quirks.c
+@@ -0,0 +1,92 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++
++/*
++ * Quirks for AMD IOMMU
++ *
++ * Copyright (C) 2019 Kai-Heng Feng <kai.heng.feng@canonical.com>
++ */
++
++#ifdef CONFIG_DMI
++#include <linux/dmi.h>
++
++#include "amd_iommu.h"
++
++#define IVHD_SPECIAL_IOAPIC 1
++
++struct ivrs_quirk_entry {
++ u8 id;
++ u16 devid;
++};
++
++enum {
++ DELL_INSPIRON_7375 = 0,
++ DELL_LATITUDE_5495,
++ LENOVO_IDEAPAD_330S_15ARR,
++};
++
++static const struct ivrs_quirk_entry ivrs_ioapic_quirks[][3] __initconst = {
++ /* ivrs_ioapic[4]=00:14.0 ivrs_ioapic[5]=00:00.2 */
++ [DELL_INSPIRON_7375] = {
++ { .id = 4, .devid = 0xa0 },
++ { .id = 5, .devid = 0x2 },
++ {}
++ },
++ /* ivrs_ioapic[4]=00:14.0 */
++ [DELL_LATITUDE_5495] = {
++ { .id = 4, .devid = 0xa0 },
++ {}
++ },
++ /* ivrs_ioapic[32]=00:14.0 */
++ [LENOVO_IDEAPAD_330S_15ARR] = {
++ { .id = 32, .devid = 0xa0 },
++ {}
++ },
++ {}
++};
++
++static int __init ivrs_ioapic_quirk_cb(const struct dmi_system_id *d)
++{
++ const struct ivrs_quirk_entry *i;
++
++ for (i = d->driver_data; i->id != 0 && i->devid != 0; i++)
++ add_special_device(IVHD_SPECIAL_IOAPIC, i->id, (u16 *)&i->devid, 0);
++
++ return 0;
++}
++
++static const struct dmi_system_id ivrs_quirks[] __initconst = {
++ {
++ .callback = ivrs_ioapic_quirk_cb,
++ .ident = "Dell Inspiron 7375",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 7375"),
++ },
++ .driver_data = (void *)&ivrs_ioapic_quirks[DELL_INSPIRON_7375],
++ },
++ {
++ .callback = ivrs_ioapic_quirk_cb,
++ .ident = "Dell Latitude 5495",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Latitude 5495"),
++ },
++ .driver_data = (void *)&ivrs_ioapic_quirks[DELL_LATITUDE_5495],
++ },
++ {
++ .callback = ivrs_ioapic_quirk_cb,
++ .ident = "Lenovo ideapad 330S-15ARR",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "81FB"),
++ },
++ .driver_data = (void *)&ivrs_ioapic_quirks[LENOVO_IDEAPAD_330S_15ARR],
++ },
++ {}
++};
++
++void __init amd_iommu_apply_ivrs_quirks(void)
++{
++ dmi_check_system(ivrs_quirks);
++}
++#endif
+diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
+index c5c93e48b4db..d1ebe5ce3e47 100644
+--- a/drivers/iommu/arm-smmu-v3.c
++++ b/drivers/iommu/arm-smmu-v3.c
+@@ -2843,11 +2843,13 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
+ }
+
+ /* Boolean feature flags */
++#if 0 /* ATS invalidation is slow and broken */
+ if (IS_ENABLED(CONFIG_PCI_PRI) && reg & IDR0_PRI)
+ smmu->features |= ARM_SMMU_FEAT_PRI;
+
+ if (IS_ENABLED(CONFIG_PCI_ATS) && reg & IDR0_ATS)
+ smmu->features |= ARM_SMMU_FEAT_ATS;
++#endif
+
+ if (reg & IDR0_SEV)
+ smmu->features |= ARM_SMMU_FEAT_SEV;
+diff --git a/drivers/iommu/intel_irq_remapping.c b/drivers/iommu/intel_irq_remapping.c
+index 4786ca061e31..81e43c1df7ec 100644
+--- a/drivers/iommu/intel_irq_remapping.c
++++ b/drivers/iommu/intel_irq_remapping.c
+@@ -376,13 +376,13 @@ static int set_msi_sid_cb(struct pci_dev *pdev, u16 alias, void *opaque)
+ {
+ struct set_msi_sid_data *data = opaque;
+
++ if (data->count == 0 || PCI_BUS_NUM(alias) == PCI_BUS_NUM(data->alias))
++ data->busmatch_count++;
++
+ data->pdev = pdev;
+ data->alias = alias;
+ data->count++;
+
+- if (PCI_BUS_NUM(alias) == pdev->bus->number)
+- data->busmatch_count++;
+-
+ return 0;
+ }
+
+diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
+index 3e1a8a675572..41c605b0058f 100644
+--- a/drivers/iommu/iova.c
++++ b/drivers/iommu/iova.c
+@@ -577,7 +577,9 @@ void queue_iova(struct iova_domain *iovad,
+
+ spin_unlock_irqrestore(&fq->lock, flags);
+
+- if (atomic_cmpxchg(&iovad->fq_timer_on, 0, 1) == 0)
++ /* Avoid false sharing as much as possible. */
++ if (!atomic_read(&iovad->fq_timer_on) &&
++ !atomic_cmpxchg(&iovad->fq_timer_on, 0, 1))
+ mod_timer(&iovad->fq_timer,
+ jiffies + msecs_to_jiffies(IOVA_FQ_TIMEOUT));
+ }
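The iova hunk is the classic test-then-test-and-set idiom: a plain read keeps the flag's cache line in shared state while the timer is already armed, and the exclusive-ownership atomic only runs when it is likely to succeed. A C11 userspace sketch of the same idiom (function and variable names are invented for illustration):

    #include <stdatomic.h>
    #include <stdbool.h>

    static bool try_arm_timer(atomic_int *fq_timer_on)
    {
            int expected = 0;

            /* cheap shared-state read; skips the RMW when armed */
            if (atomic_load(fq_timer_on))
                    return false;
            /* only now contend for exclusive cache-line ownership */
            return atomic_compare_exchange_strong(fq_timer_on,
                                                  &expected, 1);
    }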
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 1b5c3672aea2..c3a8d732805f 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -2641,14 +2641,13 @@ static void its_irq_domain_free(struct irq_domain *domain, unsigned int virq,
+ struct its_node *its = its_dev->its;
+ int i;
+
++ bitmap_release_region(its_dev->event_map.lpi_map,
++ its_get_event_id(irq_domain_get_irq_data(domain, virq)),
++ get_count_order(nr_irqs));
++
+ for (i = 0; i < nr_irqs; i++) {
+ struct irq_data *data = irq_domain_get_irq_data(domain,
+ virq + i);
+- u32 event = its_get_event_id(data);
+-
+- /* Mark interrupt index as unused */
+- clear_bit(event, its_dev->event_map.lpi_map);
+-
+ /* Nuke the entry in the domain */
+ irq_domain_reset_irq_data(data);
+ }
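A worked example of the replacement above: for an allocation of nr_irqs = 4, get_count_order(4) = 2, so bitmap_release_region() clears the 2^2 = 4 event-id bits in one aligned call before the loop, where the old code cleared the same bits one clear_bit() per interrupt.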
+diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
+index cf755964f2f8..c72c036aea76 100644
+--- a/drivers/irqchip/irq-sifive-plic.c
++++ b/drivers/irqchip/irq-sifive-plic.c
+@@ -244,6 +244,7 @@ static int __init plic_init(struct device_node *node,
+ struct plic_handler *handler;
+ irq_hw_number_t hwirq;
+ int cpu, hartid;
++ u32 threshold = 0;
+
+ if (of_irq_parse_one(node, i, &parent)) {
+ pr_err("failed to parse parent for context %d.\n", i);
+@@ -266,10 +267,16 @@ static int __init plic_init(struct device_node *node,
+ continue;
+ }
+
++ /*
++ * When running in M-mode we need to ignore the S-mode handler.
++ * Here we assume it always comes later, but that might be a
++ * little fragile.
++ */
+ handler = per_cpu_ptr(&plic_handlers, cpu);
+ if (handler->present) {
+ pr_warn("handler already present for context %d.\n", i);
+- continue;
++ threshold = 0xffffffff;
++ goto done;
+ }
+
+ handler->present = true;
+@@ -279,8 +286,9 @@ static int __init plic_init(struct device_node *node,
+ handler->enable_base =
+ plic_regs + ENABLE_BASE + i * ENABLE_PER_HART;
+
++done:
+ /* priority must be > threshold to trigger an interrupt */
+- writel(0, handler->hart_base + CONTEXT_THRESHOLD);
++ writel(threshold, handler->hart_base + CONTEXT_THRESHOLD);
+ for (hwirq = 1; hwirq <= nr_irqs; hwirq++)
+ plic_toggle(handler, hwirq, 0);
+ nr_handlers++;
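As the comment in the hunk says, a source fires only when its priority exceeds the context threshold, so a threshold of 0xffffffff can never be exceeded and quietly disables the duplicate S-mode context. A tiny sketch of that predicate (names are illustrative, not the driver's):

    #include <stdbool.h>
    #include <stdint.h>

    /* a 0xffffffff threshold: no 32-bit priority can exceed it */
    static bool plic_context_delivers(uint32_t priority,
                                      uint32_t threshold)
    {
            return priority > threshold;
    }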
+diff --git a/drivers/isdn/mISDN/socket.c b/drivers/isdn/mISDN/socket.c
+index c6ba37df4b9d..dff4132b3702 100644
+--- a/drivers/isdn/mISDN/socket.c
++++ b/drivers/isdn/mISDN/socket.c
+@@ -754,6 +754,8 @@ base_sock_create(struct net *net, struct socket *sock, int protocol, int kern)
+
+ if (sock->type != SOCK_RAW)
+ return -ESOCKTNOSUPPORT;
++ if (!capable(CAP_NET_RAW))
++ return -EPERM;
+
+ sk = sk_alloc(net, PF_ISDN, GFP_KERNEL, &mISDN_proto, kern);
+ if (!sk)
+diff --git a/drivers/leds/led-triggers.c b/drivers/leds/led-triggers.c
+index 8d11a5e23227..eff1bda8b520 100644
+--- a/drivers/leds/led-triggers.c
++++ b/drivers/leds/led-triggers.c
+@@ -173,6 +173,7 @@ err_activate:
+ list_del(&led_cdev->trig_list);
+ write_unlock_irqrestore(&led_cdev->trigger->leddev_list_lock, flags);
+ led_set_brightness(led_cdev, LED_OFF);
++ kfree(event);
+
+ return ret;
+ }
+diff --git a/drivers/leds/leds-lm3532.c b/drivers/leds/leds-lm3532.c
+index 180895b83b88..e55a64847fe2 100644
+--- a/drivers/leds/leds-lm3532.c
++++ b/drivers/leds/leds-lm3532.c
+@@ -40,7 +40,7 @@
+ #define LM3532_REG_ZN_3_LO 0x67
+ #define LM3532_REG_MAX 0x7e
+
+-/* Contorl Enable */
++/* Control Enable */
+ #define LM3532_CTRL_A_ENABLE BIT(0)
+ #define LM3532_CTRL_B_ENABLE BIT(1)
+ #define LM3532_CTRL_C_ENABLE BIT(2)
+@@ -302,7 +302,7 @@ static int lm3532_led_disable(struct lm3532_led *led_data)
+ int ret;
+
+ ret = regmap_update_bits(led_data->priv->regmap, LM3532_REG_ENABLE,
+- ctrl_en_val, ~ctrl_en_val);
++ ctrl_en_val, 0);
+ if (ret) {
+ dev_err(led_data->priv->dev, "Failed to set ctrl:%d\n", ret);
+ return ret;
+@@ -321,7 +321,7 @@ static int lm3532_brightness_set(struct led_classdev *led_cdev,
+
+ mutex_lock(&led->priv->lock);
+
+- if (led->mode == LM3532_BL_MODE_ALS) {
++ if (led->mode == LM3532_ALS_CTRL) {
+ if (brt_val > LED_OFF)
+ ret = lm3532_led_enable(led);
+ else
+@@ -542,11 +542,14 @@ static int lm3532_parse_node(struct lm3532_data *priv)
+ }
+
+ if (led->mode == LM3532_BL_MODE_ALS) {
++ led->mode = LM3532_ALS_CTRL;
+ ret = lm3532_parse_als(priv);
+ if (ret)
+ dev_err(&priv->client->dev, "Failed to parse als\n");
+ else
+ lm3532_als_configure(priv, led);
++ } else {
++ led->mode = LM3532_I2C_CTRL;
+ }
+
+ led->num_leds = fwnode_property_read_u32_array(child,
+@@ -590,7 +593,13 @@ static int lm3532_parse_node(struct lm3532_data *priv)
+ goto child_out;
+ }
+
+- lm3532_init_registers(led);
++ ret = lm3532_init_registers(led);
++ if (ret) {
++ dev_err(&priv->client->dev, "register init err: %d\n",
++ ret);
++ fwnode_handle_put(child);
++ goto child_out;
++ }
+
+ i++;
+ }
+diff --git a/drivers/leds/leds-lp5562.c b/drivers/leds/leds-lp5562.c
+index 37632fc63741..edb57c42e8b1 100644
+--- a/drivers/leds/leds-lp5562.c
++++ b/drivers/leds/leds-lp5562.c
+@@ -260,7 +260,11 @@ static void lp5562_firmware_loaded(struct lp55xx_chip *chip)
+ {
+ const struct firmware *fw = chip->fw;
+
+- if (fw->size > LP5562_PROGRAM_LENGTH) {
++ /*
++ * the firmware is encoded in ASCII hex characters, with 2 chars
++ * per byte
++ */
++ if (fw->size > (LP5562_PROGRAM_LENGTH * 2)) {
+ dev_err(&chip->cl->dev, "firmware data size overflow: %zu\n",
+ fw->size);
+ return;
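Arithmetic behind the new bound: each program byte arrives as two ASCII hex characters, so a complete program of LP5562_PROGRAM_LENGTH bytes occupies 2 * LP5562_PROGRAM_LENGTH characters of firmware, and the old size > LP5562_PROGRAM_LENGTH check rejected every full-length image.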
+diff --git a/drivers/md/bcache/closure.c b/drivers/md/bcache/closure.c
+index 73f5319295bc..c12cd809ab19 100644
+--- a/drivers/md/bcache/closure.c
++++ b/drivers/md/bcache/closure.c
+@@ -105,8 +105,14 @@ struct closure_syncer {
+
+ static void closure_sync_fn(struct closure *cl)
+ {
+- cl->s->done = 1;
+- wake_up_process(cl->s->task);
++ struct closure_syncer *s = cl->s;
++ struct task_struct *p;
++
++ rcu_read_lock();
++ p = READ_ONCE(s->task);
++ s->done = 1;
++ wake_up_process(p);
++ rcu_read_unlock();
+ }
+
+ void __sched __closure_sync(struct closure *cl)
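The ordering matters because the waiter in __closure_sync() may return, and its on-stack closure_syncer die, the instant done reads as 1. A sketch of the race the old order allowed (timeline is illustrative):

    waiter (__closure_sync)          syncer (old closure_sync_fn)
    -----------------------          ----------------------------
    while (!s->done) schedule();     cl->s->done = 1;
    sees done == 1, returns;         /* 's' may already be dead */
    on-stack 's' is gone             wake_up_process(cl->s->task);

Snapshotting the task pointer first and waking it inside rcu_read_lock() keeps the task_struct valid for the wakeup.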
+diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
+index c9e44ac1f9a6..21d5c1784d0c 100644
+--- a/drivers/md/dm-rq.c
++++ b/drivers/md/dm-rq.c
+@@ -408,6 +408,7 @@ static int map_request(struct dm_rq_target_io *tio)
+ ret = dm_dispatch_clone_request(clone, rq);
+ if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE) {
+ blk_rq_unprep_clone(clone);
++ blk_mq_cleanup_rq(clone);
+ tio->ti->type->release_clone_rq(clone, &tio->info);
+ tio->clone = NULL;
+ return DM_MAPIO_REQUEUE;
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 24638ccedce4..3100dd53c64c 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -1826,8 +1826,15 @@ static int super_1_validate(struct mddev *mddev, struct md_rdev *rdev)
+ if (!(le32_to_cpu(sb->feature_map) &
+ MD_FEATURE_RECOVERY_BITMAP))
+ rdev->saved_raid_disk = -1;
+- } else
+- set_bit(In_sync, &rdev->flags);
++ } else {
++ /*
++ * If the array is FROZEN, then the device can't
++ * be in_sync with the rest of the array.
++ */
++ if (!test_bit(MD_RECOVERY_FROZEN,
++ &mddev->recovery))
++ set_bit(In_sync, &rdev->flags);
++ }
+ rdev->raid_disk = role;
+ break;
+ }
+@@ -4176,7 +4183,7 @@ array_state_show(struct mddev *mddev, char *page)
+ {
+ enum array_state st = inactive;
+
+- if (mddev->pers)
++ if (mddev->pers && !test_bit(MD_NOT_READY, &mddev->flags))
+ switch(mddev->ro) {
+ case 1:
+ st = readonly;
+@@ -5744,9 +5751,6 @@ int md_run(struct mddev *mddev)
+ md_update_sb(mddev, 0);
+
+ md_new_event(mddev);
+- sysfs_notify_dirent_safe(mddev->sysfs_state);
+- sysfs_notify_dirent_safe(mddev->sysfs_action);
+- sysfs_notify(&mddev->kobj, NULL, "degraded");
+ return 0;
+
+ bitmap_abort:
+@@ -5767,6 +5771,7 @@ static int do_md_run(struct mddev *mddev)
+ {
+ int err;
+
++ set_bit(MD_NOT_READY, &mddev->flags);
+ err = md_run(mddev);
+ if (err)
+ goto out;
+@@ -5787,9 +5792,14 @@ static int do_md_run(struct mddev *mddev)
+
+ set_capacity(mddev->gendisk, mddev->array_sectors);
+ revalidate_disk(mddev->gendisk);
++ clear_bit(MD_NOT_READY, &mddev->flags);
+ mddev->changed = 1;
+ kobject_uevent(&disk_to_dev(mddev->gendisk)->kobj, KOBJ_CHANGE);
++ sysfs_notify_dirent_safe(mddev->sysfs_state);
++ sysfs_notify_dirent_safe(mddev->sysfs_action);
++ sysfs_notify(&mddev->kobj, NULL, "degraded");
+ out:
++ clear_bit(MD_NOT_READY, &mddev->flags);
+ return err;
+ }
+
+@@ -8900,6 +8910,7 @@ void md_check_recovery(struct mddev *mddev)
+
+ if (mddev_trylock(mddev)) {
+ int spares = 0;
++ bool try_set_sync = mddev->safemode != 0;
+
+ if (!mddev->external && mddev->safemode == 1)
+ mddev->safemode = 0;
+@@ -8945,7 +8956,7 @@ void md_check_recovery(struct mddev *mddev)
+ }
+ }
+
+- if (!mddev->external && !mddev->in_sync) {
++ if (try_set_sync && !mddev->external && !mddev->in_sync) {
+ spin_lock(&mddev->lock);
+ set_in_sync(mddev);
+ spin_unlock(&mddev->lock);
+@@ -9043,7 +9054,8 @@ void md_reap_sync_thread(struct mddev *mddev)
+ /* resync has finished, collect result */
+ md_unregister_thread(&mddev->sync_thread);
+ if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery) &&
+- !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) {
++ !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery) &&
++ mddev->degraded != mddev->raid_disks) {
+ /* success...*/
+ /* activate any spares */
+ if (mddev->pers->spare_active(mddev)) {
+diff --git a/drivers/md/md.h b/drivers/md/md.h
+index 10f98200e2f8..08f2aee383e8 100644
+--- a/drivers/md/md.h
++++ b/drivers/md/md.h
+@@ -248,6 +248,9 @@ enum mddev_flags {
+ MD_UPDATING_SB, /* md_check_recovery is updating the metadata
+ * without explicitly holding reconfig_mutex.
+ */
++ MD_NOT_READY, /* do_md_run() is active, so 'array_state'
++ * must not report that array is ready yet
++ */
+ };
+
+ enum mddev_sb_flags {
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index bf5cf184a260..297bbc0f41f0 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -19,6 +19,9 @@
+ #include "raid0.h"
+ #include "raid5.h"
+
++static int default_layout = 0;
++module_param(default_layout, int, 0644);
++
+ #define UNSUPPORTED_MDDEV_FLAGS \
+ ((1L << MD_HAS_JOURNAL) | \
+ (1L << MD_JOURNAL_CLEAN) | \
+@@ -139,6 +142,19 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
+ }
+ pr_debug("md/raid0:%s: FINAL %d zones\n",
+ mdname(mddev), conf->nr_strip_zones);
++
++ if (conf->nr_strip_zones == 1) {
++ conf->layout = RAID0_ORIG_LAYOUT;
++ } else if (default_layout == RAID0_ORIG_LAYOUT ||
++ default_layout == RAID0_ALT_MULTIZONE_LAYOUT) {
++ conf->layout = default_layout;
++ } else {
++ pr_err("md/raid0:%s: cannot assemble multi-zone RAID0 with default_layout setting\n",
++ mdname(mddev));
++ pr_err("md/raid0: please set raid.default_layout to 1 or 2\n");
++ err = -ENOTSUPP;
++ goto abort;
++ }
+ /*
+ * now since we have the hard sector sizes, we can make sure
+ * chunk size is a multiple of that sector size
+@@ -547,10 +563,12 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
+
+ static bool raid0_make_request(struct mddev *mddev, struct bio *bio)
+ {
++ struct r0conf *conf = mddev->private;
+ struct strip_zone *zone;
+ struct md_rdev *tmp_dev;
+ sector_t bio_sector;
+ sector_t sector;
++ sector_t orig_sector;
+ unsigned chunk_sects;
+ unsigned sectors;
+
+@@ -584,8 +602,21 @@ static bool raid0_make_request(struct mddev *mddev, struct bio *bio)
+ bio = split;
+ }
+
++ orig_sector = sector;
+ zone = find_zone(mddev->private, &sector);
+- tmp_dev = map_sector(mddev, zone, sector, &sector);
++ switch (conf->layout) {
++ case RAID0_ORIG_LAYOUT:
++ tmp_dev = map_sector(mddev, zone, orig_sector, &sector);
++ break;
++ case RAID0_ALT_MULTIZONE_LAYOUT:
++ tmp_dev = map_sector(mddev, zone, sector, &sector);
++ break;
++ default:
++ WARN("md/raid0:%s: Invalid layout\n", mdname(mddev));
++ bio_io_error(bio);
++ return true;
++ }
++
+ bio_set_dev(bio, tmp_dev->bdev);
+ bio->bi_iter.bi_sector = sector + zone->dev_start +
+ tmp_dev->data_offset;
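With the module parameter added above, a multi-zone array refuses to assemble until the administrator states which layout its data uses; the choice can be made at load time or, with raid0 built in, on the kernel command line (values taken from the enum in the header below):

    modprobe raid0 default_layout=1    # pre-3.14 original layout
    raid0.default_layout=2             # boot parameter, altered layout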
+diff --git a/drivers/md/raid0.h b/drivers/md/raid0.h
+index 540e65d92642..3816e5477db1 100644
+--- a/drivers/md/raid0.h
++++ b/drivers/md/raid0.h
+@@ -8,11 +8,25 @@ struct strip_zone {
+ int nb_dev; /* # of devices attached to the zone */
+ };
+
++/* Linux 3.14 (20d0189b101) made an unintended change to
++ * the RAID0 layout for multi-zone arrays (where devices aren't all
++ * the same size).
++ * RAID0_ORIG_LAYOUT restores the original layout
++ * RAID0_ALT_MULTIZONE_LAYOUT uses the altered layout
++ * The layouts are identical when there is only one zone (all
++ * devices the same size).
++ */
++
++enum r0layout {
++ RAID0_ORIG_LAYOUT = 1,
++ RAID0_ALT_MULTIZONE_LAYOUT = 2,
++};
+ struct r0conf {
+ struct strip_zone *strip_zone;
+ struct md_rdev **devlist; /* lists of rdevs, pointed to
+ * by strip_zone->dev */
+ int nr_strip_zones;
++ enum r0layout layout;
+ };
+
+ #endif
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 34e26834ad28..5afbb7df06e7 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -447,19 +447,21 @@ static void raid1_end_write_request(struct bio *bio)
+ /* We never try FailFast to WriteMostly devices */
+ !test_bit(WriteMostly, &rdev->flags)) {
+ md_error(r1_bio->mddev, rdev);
+- if (!test_bit(Faulty, &rdev->flags))
+- /* This is the only remaining device,
+- * We need to retry the write without
+- * FailFast
+- */
+- set_bit(R1BIO_WriteError, &r1_bio->state);
+- else {
+- /* Finished with this branch */
+- r1_bio->bios[mirror] = NULL;
+- to_put = bio;
+- }
+- } else
++ }
++
++ /*
++ * When the device is faulty, it is not necessary to
++ * handle write error.
++ * For failfast, this is the only remaining device,
++ * We need to retry the write without FailFast.
++ */
++ if (!test_bit(Faulty, &rdev->flags))
+ set_bit(R1BIO_WriteError, &r1_bio->state);
++ else {
++ /* Finished with this branch */
++ r1_bio->bios[mirror] = NULL;
++ to_put = bio;
++ }
+ } else {
+ /*
+ * Set R1BIO_Uptodate in our master bio, so that we
+@@ -3127,6 +3129,13 @@ static int raid1_run(struct mddev *mddev)
+ !test_bit(In_sync, &conf->mirrors[i].rdev->flags) ||
+ test_bit(Faulty, &conf->mirrors[i].rdev->flags))
+ mddev->degraded++;
++ /*
++ * RAID1 needs at least one active disk
++ */
++ if (conf->raid_disks - mddev->degraded < 1) {
++ ret = -EINVAL;
++ goto abort;
++ }
+
+ if (conf->raid_disks - mddev->degraded == 1)
+ mddev->recovery_cp = MaxSector;
+@@ -3160,8 +3169,12 @@ static int raid1_run(struct mddev *mddev)
+ ret = md_integrity_register(mddev);
+ if (ret) {
+ md_unregister_thread(&mddev->thread);
+- raid1_free(mddev, conf);
++ goto abort;
+ }
++ return 0;
++
++abort:
++ raid1_free(mddev, conf);
+ return ret;
+ }
+
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 3de4e13bde98..39f8ef6ee59c 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -2526,7 +2526,8 @@ static void raid5_end_read_request(struct bio * bi)
+ int set_bad = 0;
+
+ clear_bit(R5_UPTODATE, &sh->dev[i].flags);
+- atomic_inc(&rdev->read_errors);
++ if (bi->bi_status != BLK_STS_PROTECTION)
++ atomic_inc(&rdev->read_errors);
+ if (test_bit(R5_ReadRepl, &sh->dev[i].flags))
+ pr_warn_ratelimited(
+ "md/raid:%s: read error on replacement device (sector %llu on %s).\n",
+@@ -2558,7 +2559,9 @@ static void raid5_end_read_request(struct bio * bi)
+ && !test_bit(R5_ReadNoMerge, &sh->dev[i].flags))
+ retry = 1;
+ if (retry)
+- if (test_bit(R5_ReadNoMerge, &sh->dev[i].flags)) {
++ if (sh->qd_idx >= 0 && sh->pd_idx == i)
++ set_bit(R5_ReadError, &sh->dev[i].flags);
++ else if (test_bit(R5_ReadNoMerge, &sh->dev[i].flags)) {
+ set_bit(R5_ReadError, &sh->dev[i].flags);
+ clear_bit(R5_ReadNoMerge, &sh->dev[i].flags);
+ } else
+@@ -5718,7 +5721,8 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
+ do_flush = false;
+ }
+
+- set_bit(STRIPE_HANDLE, &sh->state);
++ if (!sh->batch_head)
++ set_bit(STRIPE_HANDLE, &sh->state);
+ clear_bit(STRIPE_DELAYED, &sh->state);
+ if ((!sh->batch_head || sh == sh->batch_head) &&
+ (bi->bi_opf & REQ_SYNC) &&
+diff --git a/drivers/media/cec/cec-notifier.c b/drivers/media/cec/cec-notifier.c
+index 52a867bde15f..4d82a5522072 100644
+--- a/drivers/media/cec/cec-notifier.c
++++ b/drivers/media/cec/cec-notifier.c
+@@ -218,6 +218,8 @@ void cec_notifier_unregister(struct cec_notifier *n)
+
+ mutex_lock(&n->lock);
+ n->callback = NULL;
++ n->cec_adap->notifier = NULL;
++ n->cec_adap = NULL;
+ mutex_unlock(&n->lock);
+ cec_notifier_put(n);
+ }
+diff --git a/drivers/media/common/videobuf2/videobuf2-v4l2.c b/drivers/media/common/videobuf2/videobuf2-v4l2.c
+index 40d76eb4c2fe..5a9ba3846f0a 100644
+--- a/drivers/media/common/videobuf2/videobuf2-v4l2.c
++++ b/drivers/media/common/videobuf2/videobuf2-v4l2.c
+@@ -872,17 +872,19 @@ EXPORT_SYMBOL_GPL(vb2_queue_release);
+ __poll_t vb2_poll(struct vb2_queue *q, struct file *file, poll_table *wait)
+ {
+ struct video_device *vfd = video_devdata(file);
+- __poll_t res = 0;
++ __poll_t res;
++
++ res = vb2_core_poll(q, file, wait);
+
+ if (test_bit(V4L2_FL_USES_V4L2_FH, &vfd->flags)) {
+ struct v4l2_fh *fh = file->private_data;
+
+ poll_wait(file, &fh->wait, wait);
+ if (v4l2_event_pending(fh))
+- res = EPOLLPRI;
++ res |= EPOLLPRI;
+ }
+
+- return res | vb2_core_poll(q, file, wait);
++ return res;
+ }
+ EXPORT_SYMBOL_GPL(vb2_poll);
+
+diff --git a/drivers/media/dvb-core/dvb_frontend.c b/drivers/media/dvb-core/dvb_frontend.c
+index 209186c5cd9b..06ea30a689d7 100644
+--- a/drivers/media/dvb-core/dvb_frontend.c
++++ b/drivers/media/dvb-core/dvb_frontend.c
+@@ -152,6 +152,9 @@ static void dvb_frontend_free(struct kref *ref)
+
+ static void dvb_frontend_put(struct dvb_frontend *fe)
+ {
++ /* call detach before dropping the reference count */
++ if (fe->ops.detach)
++ fe->ops.detach(fe);
+ /*
+ * Check if the frontend was registered, as otherwise
+ * kref was not initialized yet.
+@@ -3040,7 +3043,6 @@ void dvb_frontend_detach(struct dvb_frontend *fe)
+ dvb_frontend_invoke_release(fe, fe->ops.release_sec);
+ dvb_frontend_invoke_release(fe, fe->ops.tuner_ops.release);
+ dvb_frontend_invoke_release(fe, fe->ops.analog_ops.release);
+- dvb_frontend_invoke_release(fe, fe->ops.detach);
+ dvb_frontend_put(fe);
+ }
+ EXPORT_SYMBOL(dvb_frontend_detach);
+diff --git a/drivers/media/dvb-core/dvbdev.c b/drivers/media/dvb-core/dvbdev.c
+index a3393cd4e584..7557fbf9d306 100644
+--- a/drivers/media/dvb-core/dvbdev.c
++++ b/drivers/media/dvb-core/dvbdev.c
+@@ -339,8 +339,10 @@ static int dvb_create_media_entity(struct dvb_device *dvbdev,
+ if (npads) {
+ dvbdev->pads = kcalloc(npads, sizeof(*dvbdev->pads),
+ GFP_KERNEL);
+- if (!dvbdev->pads)
++ if (!dvbdev->pads) {
++ kfree(dvbdev->entity);
+ return -ENOMEM;
++ }
+ }
+
+ switch (type) {
+diff --git a/drivers/media/dvb-frontends/dvb-pll.c b/drivers/media/dvb-frontends/dvb-pll.c
+index ba0c49107bd2..d45b4ddc8f91 100644
+--- a/drivers/media/dvb-frontends/dvb-pll.c
++++ b/drivers/media/dvb-frontends/dvb-pll.c
+@@ -9,6 +9,7 @@
+
+ #include <linux/slab.h>
+ #include <linux/module.h>
++#include <linux/idr.h>
+ #include <linux/dvb/frontend.h>
+ #include <asm/types.h>
+
+@@ -34,8 +35,7 @@ struct dvb_pll_priv {
+ };
+
+ #define DVB_PLL_MAX 64
+-
+-static unsigned int dvb_pll_devcount;
++static DEFINE_IDA(pll_ida);
+
+ static int debug;
+ module_param(debug, int, 0644);
+@@ -787,6 +787,7 @@ struct dvb_frontend *dvb_pll_attach(struct dvb_frontend *fe, int pll_addr,
+ struct dvb_pll_priv *priv = NULL;
+ int ret;
+ const struct dvb_pll_desc *desc;
++ int nr;
+
+ b1 = kmalloc(1, GFP_KERNEL);
+ if (!b1)
+@@ -795,9 +796,14 @@ struct dvb_frontend *dvb_pll_attach(struct dvb_frontend *fe, int pll_addr,
+ b1[0] = 0;
+ msg.buf = b1;
+
+- if ((id[dvb_pll_devcount] > DVB_PLL_UNDEFINED) &&
+- (id[dvb_pll_devcount] < ARRAY_SIZE(pll_list)))
+- pll_desc_id = id[dvb_pll_devcount];
++ nr = ida_simple_get(&pll_ida, 0, DVB_PLL_MAX, GFP_KERNEL);
++ if (nr < 0) {
++ kfree(b1);
++ return NULL;
++ }
++
++ if (id[nr] > DVB_PLL_UNDEFINED && id[nr] < ARRAY_SIZE(pll_list))
++ pll_desc_id = id[nr];
+
+ BUG_ON(pll_desc_id < 1 || pll_desc_id >= ARRAY_SIZE(pll_list));
+
+@@ -808,24 +814,20 @@ struct dvb_frontend *dvb_pll_attach(struct dvb_frontend *fe, int pll_addr,
+ fe->ops.i2c_gate_ctrl(fe, 1);
+
+ ret = i2c_transfer (i2c, &msg, 1);
+- if (ret != 1) {
+- kfree(b1);
+- return NULL;
+- }
++ if (ret != 1)
++ goto out;
+ if (fe->ops.i2c_gate_ctrl)
+ fe->ops.i2c_gate_ctrl(fe, 0);
+ }
+
+ priv = kzalloc(sizeof(struct dvb_pll_priv), GFP_KERNEL);
+- if (!priv) {
+- kfree(b1);
+- return NULL;
+- }
++ if (!priv)
++ goto out;
+
+ priv->pll_i2c_address = pll_addr;
+ priv->i2c = i2c;
+ priv->pll_desc = desc;
+- priv->nr = dvb_pll_devcount++;
++ priv->nr = nr;
+
+ memcpy(&fe->ops.tuner_ops, &dvb_pll_tuner_ops,
+ sizeof(struct dvb_tuner_ops));
+@@ -858,6 +860,11 @@ struct dvb_frontend *dvb_pll_attach(struct dvb_frontend *fe, int pll_addr,
+ kfree(b1);
+
+ return fe;
++out:
++ kfree(b1);
++ ida_simple_remove(&pll_ida, nr);
++
++ return NULL;
+ }
+ EXPORT_SYMBOL(dvb_pll_attach);
+
+@@ -894,9 +901,10 @@ dvb_pll_probe(struct i2c_client *client, const struct i2c_device_id *id)
+
+ static int dvb_pll_remove(struct i2c_client *client)
+ {
+- struct dvb_frontend *fe;
++ struct dvb_frontend *fe = i2c_get_clientdata(client);
++ struct dvb_pll_priv *priv = fe->tuner_priv;
+
+- fe = i2c_get_clientdata(client);
++ ida_simple_remove(&pll_ida, priv->nr);
+ dvb_pll_release(fe);
+ return 0;
+ }
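The conversion above swaps a bare, never-decremented counter for an IDA, which hands out the lowest free slot and lets it be reused after dvb_pll_remove(). A sketch of the same allocate/release pairing (the example_* names are invented; ida_simple_get()/ida_simple_remove() are the real 5.3-era API):

    #include <linux/idr.h>

    static DEFINE_IDA(example_ida);

    static int example_get_nr(void)
    {
            /* lowest free id in [0, 64), or a negative errno */
            return ida_simple_get(&example_ida, 0, 64, GFP_KERNEL);
    }

    static void example_put_nr(int nr)
    {
            /* frees the slot for the next attach */
            ida_simple_remove(&example_ida, nr);
    }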
+diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
+index 759d60c6d630..afe7920557a8 100644
+--- a/drivers/media/i2c/ov5640.c
++++ b/drivers/media/i2c/ov5640.c
+@@ -3022,9 +3022,14 @@ static int ov5640_probe(struct i2c_client *client,
+ /* request optional power down pin */
+ sensor->pwdn_gpio = devm_gpiod_get_optional(dev, "powerdown",
+ GPIOD_OUT_HIGH);
++ if (IS_ERR(sensor->pwdn_gpio))
++ return PTR_ERR(sensor->pwdn_gpio);
++
+ /* request optional reset pin */
+ sensor->reset_gpio = devm_gpiod_get_optional(dev, "reset",
+ GPIOD_OUT_HIGH);
++ if (IS_ERR(sensor->reset_gpio))
++ return PTR_ERR(sensor->reset_gpio);
+
+ v4l2_i2c_subdev_init(&sensor->sd, client, &ov5640_subdev_ops);
+
+diff --git a/drivers/media/i2c/ov5645.c b/drivers/media/i2c/ov5645.c
+index 124c8df04633..58972c884705 100644
+--- a/drivers/media/i2c/ov5645.c
++++ b/drivers/media/i2c/ov5645.c
+@@ -45,6 +45,8 @@
+ #define OV5645_CHIP_ID_HIGH_BYTE 0x56
+ #define OV5645_CHIP_ID_LOW 0x300b
+ #define OV5645_CHIP_ID_LOW_BYTE 0x45
++#define OV5645_IO_MIPI_CTRL00 0x300e
++#define OV5645_PAD_OUTPUT00 0x3019
+ #define OV5645_AWB_MANUAL_CONTROL 0x3406
+ #define OV5645_AWB_MANUAL_ENABLE BIT(0)
+ #define OV5645_AEC_PK_MANUAL 0x3503
+@@ -55,6 +57,7 @@
+ #define OV5645_ISP_VFLIP BIT(2)
+ #define OV5645_TIMING_TC_REG21 0x3821
+ #define OV5645_SENSOR_MIRROR BIT(1)
++#define OV5645_MIPI_CTRL00 0x4800
+ #define OV5645_PRE_ISP_TEST_SETTING_1 0x503d
+ #define OV5645_TEST_PATTERN_MASK 0x3
+ #define OV5645_SET_TEST_PATTERN(x) ((x) & OV5645_TEST_PATTERN_MASK)
+@@ -121,7 +124,6 @@ static const struct reg_value ov5645_global_init_setting[] = {
+ { 0x3503, 0x07 },
+ { 0x3002, 0x1c },
+ { 0x3006, 0xc3 },
+- { 0x300e, 0x45 },
+ { 0x3017, 0x00 },
+ { 0x3018, 0x00 },
+ { 0x302e, 0x0b },
+@@ -350,7 +352,10 @@ static const struct reg_value ov5645_global_init_setting[] = {
+ { 0x3a1f, 0x14 },
+ { 0x0601, 0x02 },
+ { 0x3008, 0x42 },
+- { 0x3008, 0x02 }
++ { 0x3008, 0x02 },
++ { OV5645_IO_MIPI_CTRL00, 0x40 },
++ { OV5645_MIPI_CTRL00, 0x24 },
++ { OV5645_PAD_OUTPUT00, 0x70 }
+ };
+
+ static const struct reg_value ov5645_setting_sxga[] = {
+@@ -737,13 +742,9 @@ static int ov5645_s_power(struct v4l2_subdev *sd, int on)
+ goto exit;
+ }
+
+- ret = ov5645_write_reg(ov5645, OV5645_SYSTEM_CTRL0,
+- OV5645_SYSTEM_CTRL0_STOP);
+- if (ret < 0) {
+- ov5645_set_power_off(ov5645);
+- goto exit;
+- }
++ usleep_range(500, 1000);
+ } else {
++ ov5645_write_reg(ov5645, OV5645_IO_MIPI_CTRL00, 0x58);
+ ov5645_set_power_off(ov5645);
+ }
+ }
+@@ -1049,11 +1050,20 @@ static int ov5645_s_stream(struct v4l2_subdev *subdev, int enable)
+ dev_err(ov5645->dev, "could not sync v4l2 controls\n");
+ return ret;
+ }
++
++ ret = ov5645_write_reg(ov5645, OV5645_IO_MIPI_CTRL00, 0x45);
++ if (ret < 0)
++ return ret;
++
+ ret = ov5645_write_reg(ov5645, OV5645_SYSTEM_CTRL0,
+ OV5645_SYSTEM_CTRL0_START);
+ if (ret < 0)
+ return ret;
+ } else {
++ ret = ov5645_write_reg(ov5645, OV5645_IO_MIPI_CTRL00, 0x40);
++ if (ret < 0)
++ return ret;
++
+ ret = ov5645_write_reg(ov5645, OV5645_SYSTEM_CTRL0,
+ OV5645_SYSTEM_CTRL0_STOP);
+ if (ret < 0)
+diff --git a/drivers/media/i2c/ov9650.c b/drivers/media/i2c/ov9650.c
+index 30ab2225fbd0..b350f5c1a989 100644
+--- a/drivers/media/i2c/ov9650.c
++++ b/drivers/media/i2c/ov9650.c
+@@ -703,6 +703,11 @@ static int ov965x_set_gain(struct ov965x *ov965x, int auto_gain)
+ for (m = 6; m >= 0; m--)
+ if (gain >= (1 << m) * 16)
+ break;
++
++ /* Sanity check: don't adjust the gain with a negative value */
++ if (m < 0)
++ return -EINVAL;
++
+ rgain = (gain - ((1 << m) * 16)) / (1 << m);
+ rgain |= (((1 << m) - 1) << 4);
+
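To see why the check is needed, take gain = 15: no m from 6 down to 0 satisfies gain >= (1 << m) * 16 (the smallest candidate is 16), so the loop falls through with m = -1 and the shifts just below, (1 << m) and ((1 << m) - 1), would be undefined behaviour.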
+diff --git a/drivers/media/i2c/tda1997x.c b/drivers/media/i2c/tda1997x.c
+index a62ede096636..5e68182001ec 100644
+--- a/drivers/media/i2c/tda1997x.c
++++ b/drivers/media/i2c/tda1997x.c
+@@ -2691,7 +2691,13 @@ static int tda1997x_probe(struct i2c_client *client,
+ }
+
+ ret = 0x34 + ((io_read(sd, REG_SLAVE_ADDR)>>4) & 0x03);
+- state->client_cec = i2c_new_dummy(client->adapter, ret);
++ state->client_cec = devm_i2c_new_dummy_device(&client->dev,
++ client->adapter, ret);
++ if (IS_ERR(state->client_cec)) {
++ ret = PTR_ERR(state->client_cec);
++ goto err_free_mutex;
++ }
++
+ v4l_info(client, "CEC slave address 0x%02x\n", ret);
+
+ ret = tda1997x_core_init(sd);
+@@ -2798,7 +2804,6 @@ static int tda1997x_remove(struct i2c_client *client)
+ media_entity_cleanup(&sd->entity);
+ v4l2_ctrl_handler_free(&state->hdl);
+ regulator_bulk_disable(TDA1997X_NUM_SUPPLIES, state->supplies);
+- i2c_unregister_device(state->client_cec);
+ cancel_delayed_work(&state->delayed_work_enable_hpd);
+ mutex_destroy(&state->page_lock);
+ mutex_destroy(&state->lock);
+diff --git a/drivers/media/pci/saa7134/saa7134-i2c.c b/drivers/media/pci/saa7134/saa7134-i2c.c
+index 493b1858815f..04e85765373e 100644
+--- a/drivers/media/pci/saa7134/saa7134-i2c.c
++++ b/drivers/media/pci/saa7134/saa7134-i2c.c
+@@ -342,7 +342,11 @@ static const struct i2c_client saa7134_client_template = {
+
+ /* ----------------------------------------------------------- */
+
+-/* On Medion 7134 reading EEPROM needs DVB-T demod i2c gate open */
++/*
++ * On Medion 7134 reading the SAA7134 chip config EEPROM needs DVB-T
++ * demod i2c gate closed due to an address clash between this EEPROM
++ * and the demod one.
++ */
+ static void saa7134_i2c_eeprom_md7134_gate(struct saa7134_dev *dev)
+ {
+ u8 subaddr = 0x7, dmdregval;
+@@ -359,14 +363,14 @@ static void saa7134_i2c_eeprom_md7134_gate(struct saa7134_dev *dev)
+
+ ret = i2c_transfer(&dev->i2c_adap, i2cgatemsg_r, 2);
+ if ((ret == 2) && (dmdregval & 0x2)) {
+- pr_debug("%s: DVB-T demod i2c gate was left closed\n",
++ pr_debug("%s: DVB-T demod i2c gate was left open\n",
+ dev->name);
+
+ data[0] = subaddr;
+ data[1] = (dmdregval & ~0x2);
+ if (i2c_transfer(&dev->i2c_adap, i2cgatemsg_w, 1) != 1)
+- pr_err("%s: EEPROM i2c gate open failure\n",
+- dev->name);
++ pr_err("%s: EEPROM i2c gate close failure\n",
++ dev->name);
+ }
+ }
+
+diff --git a/drivers/media/pci/saa7146/hexium_gemini.c b/drivers/media/pci/saa7146/hexium_gemini.c
+index dca20a3d98e2..f96226930670 100644
+--- a/drivers/media/pci/saa7146/hexium_gemini.c
++++ b/drivers/media/pci/saa7146/hexium_gemini.c
+@@ -292,6 +292,9 @@ static int hexium_attach(struct saa7146_dev *dev, struct saa7146_pci_extension_d
+ ret = saa7146_register_device(&hexium->video_dev, dev, "hexium gemini", VFL_TYPE_GRABBER);
+ if (ret < 0) {
+ pr_err("cannot register capture v4l2 device. skipping.\n");
++ saa7146_vv_release(dev);
++ i2c_del_adapter(&hexium->i2c_adapter);
++ kfree(hexium);
+ return ret;
+ }
+
+diff --git a/drivers/media/platform/aspeed-video.c b/drivers/media/platform/aspeed-video.c
+index f899ac3b4a61..4ef37cfc8446 100644
+--- a/drivers/media/platform/aspeed-video.c
++++ b/drivers/media/platform/aspeed-video.c
+@@ -630,7 +630,7 @@ static void aspeed_video_check_and_set_polarity(struct aspeed_video *video)
+ }
+
+ if (hsync_counter < 0 || vsync_counter < 0) {
+- u32 ctrl;
++ u32 ctrl = 0;
+
+ if (hsync_counter < 0) {
+ ctrl = VE_CTRL_HSYNC_POL;
+@@ -650,7 +650,8 @@ static void aspeed_video_check_and_set_polarity(struct aspeed_video *video)
+ V4L2_DV_VSYNC_POS_POL;
+ }
+
+- aspeed_video_update(video, VE_CTRL, 0, ctrl);
++ if (ctrl)
++ aspeed_video_update(video, VE_CTRL, 0, ctrl);
+ }
+ }
+
+diff --git a/drivers/media/platform/exynos4-is/fimc-is.c b/drivers/media/platform/exynos4-is/fimc-is.c
+index e043d55133a3..b7cc8e651e32 100644
+--- a/drivers/media/platform/exynos4-is/fimc-is.c
++++ b/drivers/media/platform/exynos4-is/fimc-is.c
+@@ -806,6 +806,7 @@ static int fimc_is_probe(struct platform_device *pdev)
+ return -ENODEV;
+
+ is->pmu_regs = of_iomap(node, 0);
++ of_node_put(node);
+ if (!is->pmu_regs)
+ return -ENOMEM;
+
+diff --git a/drivers/media/platform/exynos4-is/media-dev.c b/drivers/media/platform/exynos4-is/media-dev.c
+index d53427a8db11..a838189d4490 100644
+--- a/drivers/media/platform/exynos4-is/media-dev.c
++++ b/drivers/media/platform/exynos4-is/media-dev.c
+@@ -501,6 +501,7 @@ static int fimc_md_register_sensor_entities(struct fimc_md *fmd)
+ continue;
+
+ ret = fimc_md_parse_port_node(fmd, port, index);
++ of_node_put(port);
+ if (ret < 0) {
+ of_node_put(node);
+ goto cleanup;
+@@ -542,6 +543,7 @@ static int __of_get_csis_id(struct device_node *np)
+ if (!np)
+ return -EINVAL;
+ of_property_read_u32(np, "reg", &reg);
++ of_node_put(np);
+ return reg - FIMC_INPUT_MIPI_CSI2_0;
+ }
+
+diff --git a/drivers/media/platform/fsl-viu.c b/drivers/media/platform/fsl-viu.c
+index 691be788e38b..b74e4f50d7d9 100644
+--- a/drivers/media/platform/fsl-viu.c
++++ b/drivers/media/platform/fsl-viu.c
+@@ -32,7 +32,7 @@
+ #define VIU_VERSION "0.5.1"
+
+ /* Allow building this driver with COMPILE_TEST */
+-#ifndef CONFIG_PPC
++#if !defined(CONFIG_PPC) && !defined(CONFIG_MICROBLAZE)
+ #define out_be32(v, a) iowrite32be(a, (void __iomem *)v)
+ #define in_be32(a) ioread32be((void __iomem *)a)
+ #endif
+diff --git a/drivers/media/platform/mtk-mdp/mtk_mdp_core.c b/drivers/media/platform/mtk-mdp/mtk_mdp_core.c
+index fc9faec85edb..5d44f2e92dd5 100644
+--- a/drivers/media/platform/mtk-mdp/mtk_mdp_core.c
++++ b/drivers/media/platform/mtk-mdp/mtk_mdp_core.c
+@@ -110,7 +110,9 @@ static int mtk_mdp_probe(struct platform_device *pdev)
+ mutex_init(&mdp->vpulock);
+
+ /* Old dts had the components as child nodes */
+- if (of_get_next_child(dev->of_node, NULL)) {
++ node = of_get_next_child(dev->of_node, NULL);
++ if (node) {
++ of_node_put(node);
+ parent = dev->of_node;
+ dev_warn(dev, "device tree is out of date\n");
+ } else {
+diff --git a/drivers/media/platform/omap3isp/isp.c b/drivers/media/platform/omap3isp/isp.c
+index 83216fc7156b..9cdb43859ae0 100644
+--- a/drivers/media/platform/omap3isp/isp.c
++++ b/drivers/media/platform/omap3isp/isp.c
+@@ -719,6 +719,10 @@ static int isp_pipeline_enable(struct isp_pipeline *pipe,
+ s_stream, mode);
+ pipe->do_propagation = true;
+ }
++
++ /* Stop at the first external sub-device. */
++ if (subdev->dev != isp->dev)
++ break;
+ }
+
+ return 0;
+@@ -833,6 +837,10 @@ static int isp_pipeline_disable(struct isp_pipeline *pipe)
+ &subdev->entity);
+ failure = -ETIMEDOUT;
+ }
++
++ /* Stop at the first external sub-device. */
++ if (subdev->dev != isp->dev)
++ break;
+ }
+
+ return failure;
+diff --git a/drivers/media/platform/omap3isp/ispccdc.c b/drivers/media/platform/omap3isp/ispccdc.c
+index 1ba8a5ba343f..e2f336c715a4 100644
+--- a/drivers/media/platform/omap3isp/ispccdc.c
++++ b/drivers/media/platform/omap3isp/ispccdc.c
+@@ -2602,6 +2602,7 @@ int omap3isp_ccdc_register_entities(struct isp_ccdc_device *ccdc,
+ int ret;
+
+ /* Register the subdev and video node. */
++ ccdc->subdev.dev = vdev->mdev->dev;
+ ret = v4l2_device_register_subdev(vdev, &ccdc->subdev);
+ if (ret < 0)
+ goto error;
+diff --git a/drivers/media/platform/omap3isp/ispccp2.c b/drivers/media/platform/omap3isp/ispccp2.c
+index efca45bb02c8..d0a49cdfd22d 100644
+--- a/drivers/media/platform/omap3isp/ispccp2.c
++++ b/drivers/media/platform/omap3isp/ispccp2.c
+@@ -1031,6 +1031,7 @@ int omap3isp_ccp2_register_entities(struct isp_ccp2_device *ccp2,
+ int ret;
+
+ /* Register the subdev and video nodes. */
++ ccp2->subdev.dev = vdev->mdev->dev;
+ ret = v4l2_device_register_subdev(vdev, &ccp2->subdev);
+ if (ret < 0)
+ goto error;
+diff --git a/drivers/media/platform/omap3isp/ispcsi2.c b/drivers/media/platform/omap3isp/ispcsi2.c
+index e85917f4a50c..fd493c5e4e24 100644
+--- a/drivers/media/platform/omap3isp/ispcsi2.c
++++ b/drivers/media/platform/omap3isp/ispcsi2.c
+@@ -1198,6 +1198,7 @@ int omap3isp_csi2_register_entities(struct isp_csi2_device *csi2,
+ int ret;
+
+ /* Register the subdev and video nodes. */
++ csi2->subdev.dev = vdev->mdev->dev;
+ ret = v4l2_device_register_subdev(vdev, &csi2->subdev);
+ if (ret < 0)
+ goto error;
+diff --git a/drivers/media/platform/omap3isp/isppreview.c b/drivers/media/platform/omap3isp/isppreview.c
+index 40e22400cf5e..97d660606d98 100644
+--- a/drivers/media/platform/omap3isp/isppreview.c
++++ b/drivers/media/platform/omap3isp/isppreview.c
+@@ -2225,6 +2225,7 @@ int omap3isp_preview_register_entities(struct isp_prev_device *prev,
+ int ret;
+
+ /* Register the subdev and video nodes. */
++ prev->subdev.dev = vdev->mdev->dev;
+ ret = v4l2_device_register_subdev(vdev, &prev->subdev);
+ if (ret < 0)
+ goto error;
+diff --git a/drivers/media/platform/omap3isp/ispresizer.c b/drivers/media/platform/omap3isp/ispresizer.c
+index 21ca6954df72..78d9dd7ea2da 100644
+--- a/drivers/media/platform/omap3isp/ispresizer.c
++++ b/drivers/media/platform/omap3isp/ispresizer.c
+@@ -1681,6 +1681,7 @@ int omap3isp_resizer_register_entities(struct isp_res_device *res,
+ int ret;
+
+ /* Register the subdev and video nodes. */
++ res->subdev.dev = vdev->mdev->dev;
+ ret = v4l2_device_register_subdev(vdev, &res->subdev);
+ if (ret < 0)
+ goto error;
+diff --git a/drivers/media/platform/omap3isp/ispstat.c b/drivers/media/platform/omap3isp/ispstat.c
+index 62b2eacb96fd..5b9b57f4d9bf 100644
+--- a/drivers/media/platform/omap3isp/ispstat.c
++++ b/drivers/media/platform/omap3isp/ispstat.c
+@@ -1026,6 +1026,8 @@ void omap3isp_stat_unregister_entities(struct ispstat *stat)
+ int omap3isp_stat_register_entities(struct ispstat *stat,
+ struct v4l2_device *vdev)
+ {
++ stat->subdev.dev = vdev->mdev->dev;
++
+ return v4l2_device_register_subdev(vdev, &stat->subdev);
+ }
+
+diff --git a/drivers/media/platform/rcar_fdp1.c b/drivers/media/platform/rcar_fdp1.c
+index 43aae9b6bb20..c23ec127c277 100644
+--- a/drivers/media/platform/rcar_fdp1.c
++++ b/drivers/media/platform/rcar_fdp1.c
+@@ -2306,7 +2306,7 @@ static int fdp1_probe(struct platform_device *pdev)
+ fdp1->fcp = rcar_fcp_get(fcp_node);
+ of_node_put(fcp_node);
+ if (IS_ERR(fdp1->fcp)) {
+- dev_err(&pdev->dev, "FCP not found (%ld)\n",
++ dev_dbg(&pdev->dev, "FCP not found (%ld)\n",
+ PTR_ERR(fdp1->fcp));
+ return PTR_ERR(fdp1->fcp);
+ }
+diff --git a/drivers/media/platform/vivid/vivid-ctrls.c b/drivers/media/platform/vivid/vivid-ctrls.c
+index 3e916c8befb7..7a52f585cab7 100644
+--- a/drivers/media/platform/vivid/vivid-ctrls.c
++++ b/drivers/media/platform/vivid/vivid-ctrls.c
+@@ -1473,7 +1473,7 @@ int vivid_create_controls(struct vivid_dev *dev, bool show_ccs_cap,
+ v4l2_ctrl_handler_init(hdl_vid_cap, 55);
+ v4l2_ctrl_new_custom(hdl_vid_cap, &vivid_ctrl_class, NULL);
+ v4l2_ctrl_handler_init(hdl_vid_out, 26);
+- if (!no_error_inj || dev->has_fb)
++ if (!no_error_inj || dev->has_fb || dev->num_hdmi_outputs)
+ v4l2_ctrl_new_custom(hdl_vid_out, &vivid_ctrl_class, NULL);
+ v4l2_ctrl_handler_init(hdl_vbi_cap, 21);
+ v4l2_ctrl_new_custom(hdl_vbi_cap, &vivid_ctrl_class, NULL);
+diff --git a/drivers/media/platform/vivid/vivid-kthread-cap.c b/drivers/media/platform/vivid/vivid-kthread-cap.c
+index 6cf495a7d5cc..003319d7816d 100644
+--- a/drivers/media/platform/vivid/vivid-kthread-cap.c
++++ b/drivers/media/platform/vivid/vivid-kthread-cap.c
+@@ -232,8 +232,8 @@ static void *plane_vaddr(struct tpg_data *tpg, struct vivid_buffer *buf,
+ return vbuf;
+ }
+
+-static int vivid_copy_buffer(struct vivid_dev *dev, unsigned p, u8 *vcapbuf,
+- struct vivid_buffer *vid_cap_buf)
++static noinline_for_stack int vivid_copy_buffer(struct vivid_dev *dev, unsigned p,
++ u8 *vcapbuf, struct vivid_buffer *vid_cap_buf)
+ {
+ bool blank = dev->must_blank[vid_cap_buf->vb.vb2_buf.index];
+ struct tpg_data *tpg = &dev->tpg;
+@@ -658,6 +658,8 @@ static void vivid_cap_update_frame_period(struct vivid_dev *dev)
+ u64 f_period;
+
+ f_period = (u64)dev->timeperframe_vid_cap.numerator * 1000000000;
++ if (WARN_ON(dev->timeperframe_vid_cap.denominator == 0))
++ dev->timeperframe_vid_cap.denominator = 1;
+ do_div(f_period, dev->timeperframe_vid_cap.denominator);
+ if (dev->field_cap == V4L2_FIELD_ALTERNATE)
+ f_period >>= 1;
+@@ -670,7 +672,8 @@ static void vivid_cap_update_frame_period(struct vivid_dev *dev)
+ dev->cap_frame_period = f_period;
+ }
+
+-static void vivid_thread_vid_cap_tick(struct vivid_dev *dev, int dropped_bufs)
++static noinline_for_stack void vivid_thread_vid_cap_tick(struct vivid_dev *dev,
++ int dropped_bufs)
+ {
+ struct vivid_buffer *vid_cap_buf = NULL;
+ struct vivid_buffer *vbi_cap_buf = NULL;
+diff --git a/drivers/media/platform/vsp1/vsp1_dl.c b/drivers/media/platform/vsp1/vsp1_dl.c
+index 104b6f514536..d7b43037e500 100644
+--- a/drivers/media/platform/vsp1/vsp1_dl.c
++++ b/drivers/media/platform/vsp1/vsp1_dl.c
+@@ -557,8 +557,10 @@ static struct vsp1_dl_list *vsp1_dl_list_alloc(struct vsp1_dl_manager *dlm)
+
+ /* Get a default body for our list. */
+ dl->body0 = vsp1_dl_body_get(dlm->pool);
+- if (!dl->body0)
++ if (!dl->body0) {
++ kfree(dl);
+ return NULL;
++ }
+
+ header_offset = dl->body0->max_entries * sizeof(*dl->body0->entries);
+
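The vsp1_dl_list_alloc() hunk above plugs a leak on the error path: the
list object itself is allocated first, so when fetching body0 fails it
has to be freed before bailing out. A minimal userspace sketch of the
pattern, with illustrative names and sizes rather than the driver's
real types:

#include <stdlib.h>

struct dl_list {
	void *body0;
};

static struct dl_list *dl_list_alloc(void)
{
	struct dl_list *dl = calloc(1, sizeof(*dl));

	if (!dl)
		return NULL;

	dl->body0 = malloc(128);
	if (!dl->body0) {
		free(dl);	/* the kfree(dl) the patch adds */
		return NULL;
	}
	return dl;
}

int main(void)
{
	struct dl_list *dl = dl_list_alloc();

	if (dl) {
		free(dl->body0);
		free(dl);
	}
	return 0;
}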
+diff --git a/drivers/media/radio/si470x/radio-si470x-usb.c b/drivers/media/radio/si470x/radio-si470x-usb.c
+index 49073747b1e7..fedff68d8c49 100644
+--- a/drivers/media/radio/si470x/radio-si470x-usb.c
++++ b/drivers/media/radio/si470x/radio-si470x-usb.c
+@@ -734,7 +734,7 @@ static int si470x_usb_driver_probe(struct usb_interface *intf,
+ /* start radio */
+ retval = si470x_start_usb(radio);
+ if (retval < 0)
+- goto err_all;
++ goto err_buf;
+
+ /* set initial frequency */
+ si470x_set_freq(radio, 87.5 * FREQ_MUL); /* available in all regions */
+@@ -749,6 +749,8 @@ static int si470x_usb_driver_probe(struct usb_interface *intf,
+
+ return 0;
+ err_all:
++ usb_kill_urb(radio->int_in_urb);
++err_buf:
+ kfree(radio->buffer);
+ err_ctrl:
+ v4l2_ctrl_handler_free(&radio->hdl);
+@@ -822,6 +824,7 @@ static void si470x_usb_driver_disconnect(struct usb_interface *intf)
+ mutex_lock(&radio->lock);
+ v4l2_device_disconnect(&radio->v4l2_dev);
+ video_unregister_device(&radio->videodev);
++ usb_kill_urb(radio->int_in_urb);
+ usb_set_intfdata(intf, NULL);
+ mutex_unlock(&radio->lock);
+ v4l2_device_put(&radio->v4l2_dev);
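The si470x changes above rework the probe error unwinding: a failure
before the interrupt URB is live must not try to kill it, while every
later failure (and disconnect) now does. A compilable sketch of that
layered goto unwinding, with stub functions standing in for the real
setup steps:

#include <stdio.h>

/* Stubs standing in for the driver's real setup steps. */
static int alloc_buffer(void)    { return 0; }
static int start_urb(void)       { return -1; }	/* simulate failure */
static int register_device(void) { return 0; }
static void kill_urb(void)       { puts("kill urb"); }
static void free_buffer(void)    { puts("free buffer"); }

static int probe(void)
{
	int err;

	err = alloc_buffer();
	if (err)
		return err;
	err = start_urb();
	if (err)
		goto err_buf;	/* URB not live yet: nothing to kill */
	err = register_device();
	if (err)
		goto err_urb;
	return 0;

err_urb:
	kill_urb();
err_buf:
	free_buffer();
	return err;
}

int main(void)
{
	return probe() ? 1 : 0;
}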
+diff --git a/drivers/media/rc/iguanair.c b/drivers/media/rc/iguanair.c
+index ea05e125016a..872d6441e512 100644
+--- a/drivers/media/rc/iguanair.c
++++ b/drivers/media/rc/iguanair.c
+@@ -413,6 +413,10 @@ static int iguanair_probe(struct usb_interface *intf,
+ int ret, pipein, pipeout;
+ struct usb_host_interface *idesc;
+
++ idesc = intf->altsetting;
++ if (idesc->desc.bNumEndpoints < 2)
++ return -ENODEV;
++
+ ir = kzalloc(sizeof(*ir), GFP_KERNEL);
+ rc = rc_allocate_device(RC_DRIVER_IR_RAW);
+ if (!ir || !rc) {
+@@ -427,18 +431,13 @@ static int iguanair_probe(struct usb_interface *intf,
+ ir->urb_in = usb_alloc_urb(0, GFP_KERNEL);
+ ir->urb_out = usb_alloc_urb(0, GFP_KERNEL);
+
+- if (!ir->buf_in || !ir->packet || !ir->urb_in || !ir->urb_out) {
++ if (!ir->buf_in || !ir->packet || !ir->urb_in || !ir->urb_out ||
++ !usb_endpoint_is_int_in(&idesc->endpoint[0].desc) ||
++ !usb_endpoint_is_int_out(&idesc->endpoint[1].desc)) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+- idesc = intf->altsetting;
+-
+- if (idesc->desc.bNumEndpoints < 2) {
+- ret = -ENODEV;
+- goto out;
+- }
+-
+ ir->rc = rc;
+ ir->dev = &intf->dev;
+ ir->udev = udev;
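The iguanair hunk moves device validation to the very top of probe:
the altsetting is checked for at least two endpoints, and the endpoint
types are verified, before anything is allocated. A small sketch of
the same up-front check, using illustrative types rather than the USB
core's:

#include <stdbool.h>
#include <stddef.h>

struct endpoint   { bool is_int_in, is_int_out; };
struct altsetting { size_t num_endpoints; struct endpoint ep[4]; };

/* Reject a broken or malicious device before any allocation, and
 * verify the endpoint types, not just how many endpoints exist. */
static int check_interface(const struct altsetting *alt)
{
	if (alt->num_endpoints < 2)
		return -1;	/* -ENODEV in the driver */
	if (!alt->ep[0].is_int_in || !alt->ep[1].is_int_out)
		return -1;
	return 0;
}

int main(void)
{
	struct altsetting alt = {
		.num_endpoints = 2,
		.ep = { { .is_int_in = true }, { .is_int_out = true } },
	};

	return check_interface(&alt) ? 1 : 0;
}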
+diff --git a/drivers/media/rc/imon.c b/drivers/media/rc/imon.c
+index 7bee72108b0e..37a850421fbb 100644
+--- a/drivers/media/rc/imon.c
++++ b/drivers/media/rc/imon.c
+@@ -1826,12 +1826,17 @@ static void imon_get_ffdc_type(struct imon_context *ictx)
+ break;
+ /* iMON VFD, MCE IR */
+ case 0x46:
+- case 0x7e:
+ case 0x9e:
+ dev_info(ictx->dev, "0xffdc iMON VFD, MCE IR");
+ detected_display_type = IMON_DISPLAY_TYPE_VFD;
+ allowed_protos = RC_PROTO_BIT_RC6_MCE;
+ break;
++ /* iMON VFD, iMON or MCE IR */
++ case 0x7e:
++ dev_info(ictx->dev, "0xffdc iMON VFD, iMON or MCE IR");
++ detected_display_type = IMON_DISPLAY_TYPE_VFD;
++ allowed_protos |= RC_PROTO_BIT_RC6_MCE;
++ break;
+ /* iMON LCD, MCE IR */
+ case 0x9f:
+ dev_info(ictx->dev, "0xffdc iMON LCD, MCE IR");
+diff --git a/drivers/media/rc/mceusb.c b/drivers/media/rc/mceusb.c
+index 4d5351ebb940..9929fcdec74d 100644
+--- a/drivers/media/rc/mceusb.c
++++ b/drivers/media/rc/mceusb.c
+@@ -31,21 +31,22 @@
+ #include <linux/pm_wakeup.h>
+ #include <media/rc-core.h>
+
+-#define DRIVER_VERSION "1.94"
++#define DRIVER_VERSION "1.95"
+ #define DRIVER_AUTHOR "Jarod Wilson <jarod@redhat.com>"
+ #define DRIVER_DESC "Windows Media Center Ed. eHome Infrared Transceiver " \
+ "device driver"
+ #define DRIVER_NAME "mceusb"
+
++#define USB_TX_TIMEOUT 1000 /* in milliseconds */
+ #define USB_CTRL_MSG_SZ 2 /* Size of usb ctrl msg on gen1 hw */
+ #define MCE_G1_INIT_MSGS 40 /* Init messages on gen1 hw to throw out */
+
+ /* MCE constants */
+-#define MCE_CMDBUF_SIZE 384 /* MCE Command buffer length */
++#define MCE_IRBUF_SIZE 128 /* TX IR buffer length */
+ #define MCE_TIME_UNIT 50 /* Approx 50us resolution */
+-#define MCE_CODE_LENGTH 5 /* Normal length of packet (with header) */
+-#define MCE_PACKET_SIZE 4 /* Normal length of packet (without header) */
+-#define MCE_IRDATA_HEADER 0x84 /* Actual header format is 0x80 + num_bytes */
++#define MCE_PACKET_SIZE 31 /* Max length of packet (with header) */
++#define MCE_IRDATA_HEADER (0x80 + MCE_PACKET_SIZE - 1)
++ /* Actual format is 0x80 + num_bytes */
+ #define MCE_IRDATA_TRAILER 0x80 /* End of IR data */
+ #define MCE_MAX_CHANNELS 2 /* Two transmitters, hardware dependent? */
+ #define MCE_DEFAULT_TX_MASK 0x03 /* Vals: TX1=0x01, TX2=0x02, ALL=0x03 */
+@@ -607,9 +608,9 @@ static void mceusb_dev_printdata(struct mceusb_dev *ir, u8 *buf, int buf_len,
+ if (len <= skip)
+ return;
+
+- dev_dbg(dev, "%cx data: %*ph (length=%d)",
+- (out ? 't' : 'r'),
+- min(len, buf_len - offset), buf + offset, len);
++ dev_dbg(dev, "%cx data[%d]: %*ph (len=%d sz=%d)",
++ (out ? 't' : 'r'), offset,
++ min(len, buf_len - offset), buf + offset, len, buf_len);
+
+ inout = out ? "Request" : "Got";
+
+@@ -731,6 +732,9 @@ static void mceusb_dev_printdata(struct mceusb_dev *ir, u8 *buf, int buf_len,
+ case MCE_RSP_CMD_ILLEGAL:
+ dev_dbg(dev, "Illegal PORT_IR command");
+ break;
++ case MCE_RSP_TX_TIMEOUT:
++ dev_dbg(dev, "IR TX timeout (TX buffer underrun)");
++ break;
+ default:
+ dev_dbg(dev, "Unknown command 0x%02x 0x%02x",
+ cmd, subcmd);
+@@ -745,13 +749,14 @@ static void mceusb_dev_printdata(struct mceusb_dev *ir, u8 *buf, int buf_len,
+ dev_dbg(dev, "End of raw IR data");
+ else if ((cmd != MCE_CMD_PORT_IR) &&
+ ((cmd & MCE_PORT_MASK) == MCE_COMMAND_IRDATA))
+- dev_dbg(dev, "Raw IR data, %d pulse/space samples", ir->rem);
++ dev_dbg(dev, "Raw IR data, %d pulse/space samples",
++ cmd & MCE_PACKET_LENGTH_MASK);
+ #endif
+ }
+
+ /*
+ * Schedule work that can't be done in interrupt handlers
+- * (mceusb_dev_recv() and mce_async_callback()) nor tasklets.
++ * (mceusb_dev_recv() and mce_write_callback()) nor tasklets.
+ * Invokes mceusb_deferred_kevent() for recovering from
+ * error events specified by the kevent bit field.
+ */
+@@ -764,23 +769,80 @@ static void mceusb_defer_kevent(struct mceusb_dev *ir, int kevent)
+ dev_dbg(ir->dev, "kevent %d scheduled", kevent);
+ }
+
+-static void mce_async_callback(struct urb *urb)
++static void mce_write_callback(struct urb *urb)
+ {
+- struct mceusb_dev *ir;
+- int len;
+-
+ if (!urb)
+ return;
+
+- ir = urb->context;
++ complete(urb->context);
++}
++
++/*
++ * Write (TX/send) data to MCE device USB endpoint out.
++ * Used for IR blaster TX and MCE device commands.
++ *
++ * Return: The number of bytes written (> 0) or errno (< 0).
++ */
++static int mce_write(struct mceusb_dev *ir, u8 *data, int size)
++{
++ int ret;
++ struct urb *urb;
++ struct device *dev = ir->dev;
++ unsigned char *buf_out;
++ struct completion tx_done;
++ unsigned long expire;
++ unsigned long ret_wait;
++
++ mceusb_dev_printdata(ir, data, size, 0, size, true);
++
++ urb = usb_alloc_urb(0, GFP_KERNEL);
++ if (unlikely(!urb)) {
++ dev_err(dev, "Error: mce write couldn't allocate urb");
++ return -ENOMEM;
++ }
++
++ buf_out = kmalloc(size, GFP_KERNEL);
++ if (!buf_out) {
++ usb_free_urb(urb);
++ return -ENOMEM;
++ }
++
++ init_completion(&tx_done);
++
++ /* outbound data */
++ if (usb_endpoint_xfer_int(ir->usb_ep_out))
++ usb_fill_int_urb(urb, ir->usbdev, ir->pipe_out,
++ buf_out, size, mce_write_callback, &tx_done,
++ ir->usb_ep_out->bInterval);
++ else
++ usb_fill_bulk_urb(urb, ir->usbdev, ir->pipe_out,
++ buf_out, size, mce_write_callback, &tx_done);
++ memcpy(buf_out, data, size);
++
++ ret = usb_submit_urb(urb, GFP_KERNEL);
++ if (ret) {
++ dev_err(dev, "Error: mce write submit urb error = %d", ret);
++ kfree(buf_out);
++ usb_free_urb(urb);
++ return ret;
++ }
++
++ expire = msecs_to_jiffies(USB_TX_TIMEOUT);
++ ret_wait = wait_for_completion_timeout(&tx_done, expire);
++ if (!ret_wait) {
++ dev_err(dev, "Error: mce write timed out (expire = %lu (%dms))",
++ expire, USB_TX_TIMEOUT);
++ usb_kill_urb(urb);
++ ret = (urb->status == -ENOENT ? -ETIMEDOUT : urb->status);
++ } else {
++ ret = urb->status;
++ }
++ if (ret >= 0)
++ ret = urb->actual_length; /* bytes written */
+
+ switch (urb->status) {
+ /* success */
+ case 0:
+- len = urb->actual_length;
+-
+- mceusb_dev_printdata(ir, urb->transfer_buffer, len,
+- 0, len, true);
+ break;
+
+ case -ECONNRESET:
+@@ -790,140 +852,135 @@ static void mce_async_callback(struct urb *urb)
+ break;
+
+ case -EPIPE:
+- dev_err(ir->dev, "Error: request urb status = %d (TX HALT)",
++ dev_err(ir->dev, "Error: mce write urb status = %d (TX HALT)",
+ urb->status);
+ mceusb_defer_kevent(ir, EVENT_TX_HALT);
+ break;
+
+ default:
+- dev_err(ir->dev, "Error: request urb status = %d", urb->status);
++ dev_err(ir->dev, "Error: mce write urb status = %d",
++ urb->status);
+ break;
+ }
+
+- /* the transfer buffer and urb were allocated in mce_request_packet */
+- kfree(urb->transfer_buffer);
+- usb_free_urb(urb);
+-}
+-
+-/* request outgoing (send) usb packet - used to initialize remote */
+-static void mce_request_packet(struct mceusb_dev *ir, unsigned char *data,
+- int size)
+-{
+- int res;
+- struct urb *async_urb;
+- struct device *dev = ir->dev;
+- unsigned char *async_buf;
++ dev_dbg(dev, "tx done status = %d (wait = %lu, expire = %lu (%dms), urb->actual_length = %d, urb->status = %d)",
++ ret, ret_wait, expire, USB_TX_TIMEOUT,
++ urb->actual_length, urb->status);
+
+- async_urb = usb_alloc_urb(0, GFP_KERNEL);
+- if (unlikely(!async_urb)) {
+- dev_err(dev, "Error, couldn't allocate urb!");
+- return;
+- }
+-
+- async_buf = kmalloc(size, GFP_KERNEL);
+- if (!async_buf) {
+- usb_free_urb(async_urb);
+- return;
+- }
+-
+- /* outbound data */
+- if (usb_endpoint_xfer_int(ir->usb_ep_out))
+- usb_fill_int_urb(async_urb, ir->usbdev, ir->pipe_out,
+- async_buf, size, mce_async_callback, ir,
+- ir->usb_ep_out->bInterval);
+- else
+- usb_fill_bulk_urb(async_urb, ir->usbdev, ir->pipe_out,
+- async_buf, size, mce_async_callback, ir);
+-
+- memcpy(async_buf, data, size);
+-
+- dev_dbg(dev, "send request called (size=%#x)", size);
++ kfree(buf_out);
++ usb_free_urb(urb);
+
+- res = usb_submit_urb(async_urb, GFP_ATOMIC);
+- if (res) {
+- dev_err(dev, "send request FAILED! (res=%d)", res);
+- kfree(async_buf);
+- usb_free_urb(async_urb);
+- return;
+- }
+- dev_dbg(dev, "send request complete (res=%d)", res);
++ return ret;
+ }
+
+-static void mce_async_out(struct mceusb_dev *ir, unsigned char *data, int size)
++static void mce_command_out(struct mceusb_dev *ir, u8 *data, int size)
+ {
+ int rsize = sizeof(DEVICE_RESUME);
+
+ if (ir->need_reset) {
+ ir->need_reset = false;
+- mce_request_packet(ir, DEVICE_RESUME, rsize);
++ mce_write(ir, DEVICE_RESUME, rsize);
+ msleep(10);
+ }
+
+- mce_request_packet(ir, data, size);
++ mce_write(ir, data, size);
+ msleep(10);
+ }
+
+-/* Send data out the IR blaster port(s) */
++/*
++ * Transmit IR out the MCE device IR blaster port(s).
++ *
++ * Convert IR pulse/space sequence from LIRC to MCE format.
++ * Break up a long IR sequence into multiple parts (MCE IR data packets).
++ *
++ * u32 txbuf[] consists of IR pulse, space, ..., and pulse times in usec.
++ * Pulses and spaces are implicit by their position.
++ * The first IR sample, txbuf[0], is always a pulse.
++ *
++ * u8 irbuf[] consists of multiple IR data packets for the MCE device.
++ * A packet is 1 u8 MCE_IRDATA_HEADER and up to 30 u8 IR samples.
++ * An IR sample is 1-bit pulse/space flag with 7-bit time
++ * in MCE time units (50usec).
++ *
++ * Return: The number of IR samples sent (> 0) or errno (< 0).
++ */
+ static int mceusb_tx_ir(struct rc_dev *dev, unsigned *txbuf, unsigned count)
+ {
+ struct mceusb_dev *ir = dev->priv;
+- int i, length, ret = 0;
+- int cmdcount = 0;
+- unsigned char cmdbuf[MCE_CMDBUF_SIZE];
+-
+- /* MCE tx init header */
+- cmdbuf[cmdcount++] = MCE_CMD_PORT_IR;
+- cmdbuf[cmdcount++] = MCE_CMD_SETIRTXPORTS;
+- cmdbuf[cmdcount++] = ir->tx_mask;
++ u8 cmdbuf[3] = { MCE_CMD_PORT_IR, MCE_CMD_SETIRTXPORTS, 0x00 };
++ u8 irbuf[MCE_IRBUF_SIZE];
++ int ircount = 0;
++ unsigned int irsample;
++ int i, length, ret;
+
+ /* Send the set TX ports command */
+- mce_async_out(ir, cmdbuf, cmdcount);
+- cmdcount = 0;
+-
+- /* Generate mce packet data */
+- for (i = 0; (i < count) && (cmdcount < MCE_CMDBUF_SIZE); i++) {
+- txbuf[i] = txbuf[i] / MCE_TIME_UNIT;
+-
+- do { /* loop to support long pulses/spaces > 127*50us=6.35ms */
+-
+- /* Insert mce packet header every 4th entry */
+- if ((cmdcount < MCE_CMDBUF_SIZE) &&
+- (cmdcount % MCE_CODE_LENGTH) == 0)
+- cmdbuf[cmdcount++] = MCE_IRDATA_HEADER;
+-
+- /* Insert mce packet data */
+- if (cmdcount < MCE_CMDBUF_SIZE)
+- cmdbuf[cmdcount++] =
+- (txbuf[i] < MCE_PULSE_BIT ?
+- txbuf[i] : MCE_MAX_PULSE_LENGTH) |
+- (i & 1 ? 0x00 : MCE_PULSE_BIT);
+- else {
+- ret = -EINVAL;
+- goto out;
++ cmdbuf[2] = ir->tx_mask;
++ mce_command_out(ir, cmdbuf, sizeof(cmdbuf));
++
++ /* Generate mce IR data packet */
++ for (i = 0; i < count; i++) {
++ irsample = txbuf[i] / MCE_TIME_UNIT;
++
++ /* loop to support long pulses/spaces > 6350us (127*50us) */
++ while (irsample > 0) {
++ /* Insert IR header every 30th entry */
++ if (ircount % MCE_PACKET_SIZE == 0) {
++ /* Room for IR header and one IR sample? */
++ if (ircount >= MCE_IRBUF_SIZE - 1) {
++ /* Send near full buffer */
++ ret = mce_write(ir, irbuf, ircount);
++ if (ret < 0)
++ return ret;
++ ircount = 0;
++ }
++ irbuf[ircount++] = MCE_IRDATA_HEADER;
+ }
+
+- } while ((txbuf[i] > MCE_MAX_PULSE_LENGTH) &&
+- (txbuf[i] -= MCE_MAX_PULSE_LENGTH));
+- }
+-
+- /* Check if we have room for the empty packet at the end */
+- if (cmdcount >= MCE_CMDBUF_SIZE) {
+- ret = -EINVAL;
+- goto out;
+- }
++ /* Insert IR sample */
++ if (irsample <= MCE_MAX_PULSE_LENGTH) {
++ irbuf[ircount] = irsample;
++ irsample = 0;
++ } else {
++ irbuf[ircount] = MCE_MAX_PULSE_LENGTH;
++ irsample -= MCE_MAX_PULSE_LENGTH;
++ }
++ /*
++ * Even i = IR pulse
++ * Odd i = IR space
++ */
++ irbuf[ircount] |= (i & 1 ? 0 : MCE_PULSE_BIT);
++ ircount++;
++
++ /* IR buffer full? */
++ if (ircount >= MCE_IRBUF_SIZE) {
++ /* Fix packet length in last header */
++ length = ircount % MCE_PACKET_SIZE;
++ if (length > 0)
++ irbuf[ircount - length] -=
++ MCE_PACKET_SIZE - length;
++ /* Send full buffer */
++ ret = mce_write(ir, irbuf, ircount);
++ if (ret < 0)
++ return ret;
++ ircount = 0;
++ }
++ }
++ } /* after for loop, 0 <= ircount < MCE_IRBUF_SIZE */
+
+ /* Fix packet length in last header */
+- length = cmdcount % MCE_CODE_LENGTH;
+- cmdbuf[cmdcount - length] -= MCE_CODE_LENGTH - length;
++ length = ircount % MCE_PACKET_SIZE;
++ if (length > 0)
++ irbuf[ircount - length] -= MCE_PACKET_SIZE - length;
+
+- /* All mce commands end with an empty packet (0x80) */
+- cmdbuf[cmdcount++] = MCE_IRDATA_TRAILER;
++ /* Append IR trailer (0x80) to final partial (or empty) IR buffer */
++ irbuf[ircount++] = MCE_IRDATA_TRAILER;
+
+- /* Transmit the command to the mce device */
+- mce_async_out(ir, cmdbuf, cmdcount);
++ /* Send final buffer */
++ ret = mce_write(ir, irbuf, ircount);
++ if (ret < 0)
++ return ret;
+
+-out:
+- return ret ? ret : count;
++ return count;
+ }
+
+ /* Sets active IR outputs -- mce devices typically have two */
+@@ -963,7 +1020,7 @@ static int mceusb_set_tx_carrier(struct rc_dev *dev, u32 carrier)
+ cmdbuf[2] = MCE_CMD_SIG_END;
+ cmdbuf[3] = MCE_IRDATA_TRAILER;
+ dev_dbg(ir->dev, "disabling carrier modulation");
+- mce_async_out(ir, cmdbuf, sizeof(cmdbuf));
++ mce_command_out(ir, cmdbuf, sizeof(cmdbuf));
+ return 0;
+ }
+
+@@ -977,7 +1034,7 @@ static int mceusb_set_tx_carrier(struct rc_dev *dev, u32 carrier)
+ carrier);
+
+ /* Transmit new carrier to mce device */
+- mce_async_out(ir, cmdbuf, sizeof(cmdbuf));
++ mce_command_out(ir, cmdbuf, sizeof(cmdbuf));
+ return 0;
+ }
+ }
+@@ -1000,10 +1057,10 @@ static int mceusb_set_timeout(struct rc_dev *dev, unsigned int timeout)
+ cmdbuf[2] = units >> 8;
+ cmdbuf[3] = units;
+
+- mce_async_out(ir, cmdbuf, sizeof(cmdbuf));
++ mce_command_out(ir, cmdbuf, sizeof(cmdbuf));
+
+ /* get receiver timeout value */
+- mce_async_out(ir, GET_RX_TIMEOUT, sizeof(GET_RX_TIMEOUT));
++ mce_command_out(ir, GET_RX_TIMEOUT, sizeof(GET_RX_TIMEOUT));
+
+ return 0;
+ }
+@@ -1028,7 +1085,7 @@ static int mceusb_set_rx_wideband(struct rc_dev *dev, int enable)
+ ir->wideband_rx_enabled = false;
+ cmdbuf[2] = 1; /* port 1 is long range receiver */
+ }
+- mce_async_out(ir, cmdbuf, sizeof(cmdbuf));
++ mce_command_out(ir, cmdbuf, sizeof(cmdbuf));
+ /* response from device sets ir->learning_active */
+
+ return 0;
+@@ -1051,7 +1108,7 @@ static int mceusb_set_rx_carrier_report(struct rc_dev *dev, int enable)
+ ir->carrier_report_enabled = true;
+ if (!ir->learning_active) {
+ cmdbuf[2] = 2; /* port 2 is short range receiver */
+- mce_async_out(ir, cmdbuf, sizeof(cmdbuf));
++ mce_command_out(ir, cmdbuf, sizeof(cmdbuf));
+ }
+ } else {
+ ir->carrier_report_enabled = false;
+@@ -1062,7 +1119,7 @@ static int mceusb_set_rx_carrier_report(struct rc_dev *dev, int enable)
+ */
+ if (ir->learning_active && !ir->wideband_rx_enabled) {
+ cmdbuf[2] = 1; /* port 1 is long range receiver */
+- mce_async_out(ir, cmdbuf, sizeof(cmdbuf));
++ mce_command_out(ir, cmdbuf, sizeof(cmdbuf));
+ }
+ }
+
+@@ -1141,6 +1198,7 @@ static void mceusb_handle_command(struct mceusb_dev *ir, int index)
+ }
+ break;
+ case MCE_RSP_CMD_ILLEGAL:
++ case MCE_RSP_TX_TIMEOUT:
+ ir->need_reset = true;
+ break;
+ default:
+@@ -1279,7 +1337,7 @@ static void mceusb_get_emulator_version(struct mceusb_dev *ir)
+ {
+ /* If we get no reply or an illegal command reply, its ver 1, says MS */
+ ir->emver = 1;
+- mce_async_out(ir, GET_EMVER, sizeof(GET_EMVER));
++ mce_command_out(ir, GET_EMVER, sizeof(GET_EMVER));
+ }
+
+ static void mceusb_gen1_init(struct mceusb_dev *ir)
+@@ -1325,10 +1383,10 @@ static void mceusb_gen1_init(struct mceusb_dev *ir)
+ dev_dbg(dev, "set handshake - retC = %d", ret);
+
+ /* device resume */
+- mce_async_out(ir, DEVICE_RESUME, sizeof(DEVICE_RESUME));
++ mce_command_out(ir, DEVICE_RESUME, sizeof(DEVICE_RESUME));
+
+ /* get hw/sw revision? */
+- mce_async_out(ir, GET_REVISION, sizeof(GET_REVISION));
++ mce_command_out(ir, GET_REVISION, sizeof(GET_REVISION));
+
+ kfree(data);
+ }
+@@ -1336,13 +1394,13 @@ static void mceusb_gen1_init(struct mceusb_dev *ir)
+ static void mceusb_gen2_init(struct mceusb_dev *ir)
+ {
+ /* device resume */
+- mce_async_out(ir, DEVICE_RESUME, sizeof(DEVICE_RESUME));
++ mce_command_out(ir, DEVICE_RESUME, sizeof(DEVICE_RESUME));
+
+ /* get wake version (protocol, key, address) */
+- mce_async_out(ir, GET_WAKEVERSION, sizeof(GET_WAKEVERSION));
++ mce_command_out(ir, GET_WAKEVERSION, sizeof(GET_WAKEVERSION));
+
+ /* unknown what this one actually returns... */
+- mce_async_out(ir, GET_UNKNOWN2, sizeof(GET_UNKNOWN2));
++ mce_command_out(ir, GET_UNKNOWN2, sizeof(GET_UNKNOWN2));
+ }
+
+ static void mceusb_get_parameters(struct mceusb_dev *ir)
+@@ -1356,24 +1414,24 @@ static void mceusb_get_parameters(struct mceusb_dev *ir)
+ ir->num_rxports = 2;
+
+ /* get number of tx and rx ports */
+- mce_async_out(ir, GET_NUM_PORTS, sizeof(GET_NUM_PORTS));
++ mce_command_out(ir, GET_NUM_PORTS, sizeof(GET_NUM_PORTS));
+
+ /* get the carrier and frequency */
+- mce_async_out(ir, GET_CARRIER_FREQ, sizeof(GET_CARRIER_FREQ));
++ mce_command_out(ir, GET_CARRIER_FREQ, sizeof(GET_CARRIER_FREQ));
+
+ if (ir->num_txports && !ir->flags.no_tx)
+ /* get the transmitter bitmask */
+- mce_async_out(ir, GET_TX_BITMASK, sizeof(GET_TX_BITMASK));
++ mce_command_out(ir, GET_TX_BITMASK, sizeof(GET_TX_BITMASK));
+
+ /* get receiver timeout value */
+- mce_async_out(ir, GET_RX_TIMEOUT, sizeof(GET_RX_TIMEOUT));
++ mce_command_out(ir, GET_RX_TIMEOUT, sizeof(GET_RX_TIMEOUT));
+
+ /* get receiver sensor setting */
+- mce_async_out(ir, GET_RX_SENSOR, sizeof(GET_RX_SENSOR));
++ mce_command_out(ir, GET_RX_SENSOR, sizeof(GET_RX_SENSOR));
+
+ for (i = 0; i < ir->num_txports; i++) {
+ cmdbuf[2] = i;
+- mce_async_out(ir, cmdbuf, sizeof(cmdbuf));
++ mce_command_out(ir, cmdbuf, sizeof(cmdbuf));
+ }
+ }
+
+@@ -1382,7 +1440,7 @@ static void mceusb_flash_led(struct mceusb_dev *ir)
+ if (ir->emver < 2)
+ return;
+
+- mce_async_out(ir, FLASH_LED, sizeof(FLASH_LED));
++ mce_command_out(ir, FLASH_LED, sizeof(FLASH_LED));
+ }
+
+ /*
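The new mceusb_tx_ir() comment above spells out the wire format: each
IR sample byte carries a 7-bit duration in 50 us units plus a
pulse/space flag in bit 7, and durations longer than 127 units are
split across several bytes. A standalone worked example of just that
per-sample encoding (packet headers and buffer handling omitted; the
constant values follow from the 7-bit layout the comment describes):

#include <stdint.h>
#include <stdio.h>

#define MCE_TIME_UNIT		50	/* 50 us per count */
#define MCE_MAX_PULSE_LENGTH	0x7f	/* 7-bit duration field */
#define MCE_PULSE_BIT		0x80	/* set = pulse, clear = space */
#define MCE_IRDATA_TRAILER	0x80

/* Encode alternating pulse/space durations (usec, starting with a
 * pulse) into MCE sample bytes; long samples split across bytes. */
static void encode(const unsigned *usec, int count)
{
	for (int i = 0; i < count; i++) {
		unsigned units = usec[i] / MCE_TIME_UNIT;

		while (units > 0) {
			uint8_t chunk = units > MCE_MAX_PULSE_LENGTH ?
					MCE_MAX_PULSE_LENGTH : units;

			units -= chunk;
			/* even index = pulse, odd index = space */
			printf("0x%02x ", chunk | (i & 1 ? 0 : MCE_PULSE_BIT));
		}
	}
	printf("0x%02x\n", MCE_IRDATA_TRAILER);
}

int main(void)
{
	unsigned seq[] = { 900, 450, 9000 };	/* pulse, space, long pulse */

	encode(seq, 3);	/* prints: 0x92 0x09 0xff 0xb5 0x80 */
	return 0;
}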
+diff --git a/drivers/media/rc/mtk-cir.c b/drivers/media/rc/mtk-cir.c
+index 50fb0aebb8d4..f2259082e3d8 100644
+--- a/drivers/media/rc/mtk-cir.c
++++ b/drivers/media/rc/mtk-cir.c
+@@ -35,6 +35,11 @@
+ /* Fields containing pulse width data */
+ #define MTK_WIDTH_MASK (GENMASK(7, 0))
+
++/* IR threshold */
++#define MTK_IRTHD 0x14
++#define MTK_DG_CNT_MASK (GENMASK(12, 8))
++#define MTK_DG_CNT(x) ((x) << 8)
++
+ /* Bit to enable interrupt */
+ #define MTK_IRINT_EN BIT(0)
+
+@@ -398,6 +403,9 @@ static int mtk_ir_probe(struct platform_device *pdev)
+ mtk_w32_mask(ir, val, ir->data->fields[MTK_HW_PERIOD].mask,
+ ir->data->fields[MTK_HW_PERIOD].reg);
+
++ /* Set de-glitch counter */
++ mtk_w32_mask(ir, MTK_DG_CNT(1), MTK_DG_CNT_MASK, MTK_IRTHD);
++
+ /* Enable IR and PWM */
+ val = mtk_r32(ir, MTK_CONFIG_HIGH_REG);
+ val |= MTK_OK_COUNT(ir->data->ok_count) | MTK_PWM_EN | MTK_IR_EN;
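The mtk-cir hunk programs a 5-bit de-glitch counter into bits 12..8 of
the IR threshold register via a masked write. Assuming mtk_w32_mask()
performs the usual read-modify-write, the field update works like this
self-contained sketch:

#include <stdint.h>
#include <stdio.h>

#define GENMASK(h, l)	((~0u >> (31 - (h))) & ~((1u << (l)) - 1u))
#define DG_CNT_MASK	GENMASK(12, 8)		/* 5-bit field, bits 12..8 */
#define DG_CNT(x)	((uint32_t)(x) << 8)

/* Read-modify-write of one register field: clear only the masked
 * bits, then or in the new value, leaving the rest untouched. */
static uint32_t w32_mask(uint32_t reg, uint32_t val, uint32_t mask)
{
	return (reg & ~mask) | (val & mask);
}

int main(void)
{
	uint32_t reg = 0xffffffff;

	printf("0x%08x\n", w32_mask(reg, DG_CNT(1), DG_CNT_MASK));
	/* prints 0xffffe1ff: count set to 1, every other bit kept */
	return 0;
}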
+diff --git a/drivers/media/usb/cpia2/cpia2_usb.c b/drivers/media/usb/cpia2/cpia2_usb.c
+index 17468f7d78ed..3ab80a7b4498 100644
+--- a/drivers/media/usb/cpia2/cpia2_usb.c
++++ b/drivers/media/usb/cpia2/cpia2_usb.c
+@@ -676,6 +676,10 @@ static int submit_urbs(struct camera_data *cam)
+ if (!urb) {
+ for (j = 0; j < i; j++)
+ usb_free_urb(cam->sbuf[j].urb);
++ for (j = 0; j < NUM_SBUF; j++) {
++ kfree(cam->sbuf[j].data);
++ cam->sbuf[j].data = NULL;
++ }
+ return -ENOMEM;
+ }
+
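The cpia2 hunk extends the mid-loop failure cleanup: besides freeing
the URBs already allocated, it now frees all the stream buffers and
clears their pointers so a later teardown path cannot free them again.
A compact sketch of that pattern with illustrative types:

#include <stdlib.h>

#define NUM_SBUF 4

struct sbuf { void *urb; void *data; };

/* If the i-th allocation fails: free the i URBs already allocated,
 * free all data buffers, and clear the pointers so a later teardown
 * path cannot double-free them. */
static int submit_all(struct sbuf *s)
{
	for (int i = 0; i < NUM_SBUF; i++) {
		s[i].urb = malloc(64);
		if (!s[i].urb) {
			for (int j = 0; j < i; j++)
				free(s[j].urb);
			for (int j = 0; j < NUM_SBUF; j++) {
				free(s[j].data);
				s[j].data = NULL;
			}
			return -1;	/* -ENOMEM in the driver */
		}
	}
	return 0;
}

int main(void)
{
	struct sbuf s[NUM_SBUF] = { { 0 } };

	if (submit_all(s))
		return 1;
	for (int i = 0; i < NUM_SBUF; i++)
		free(s[i].urb);
	return 0;
}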
+diff --git a/drivers/media/usb/dvb-usb/dib0700_devices.c b/drivers/media/usb/dvb-usb/dib0700_devices.c
+index 66d685065e06..ab7a100ec84f 100644
+--- a/drivers/media/usb/dvb-usb/dib0700_devices.c
++++ b/drivers/media/usb/dvb-usb/dib0700_devices.c
+@@ -2439,9 +2439,13 @@ static int dib9090_tuner_attach(struct dvb_usb_adapter *adap)
+ 8, 0x0486,
+ };
+
++ if (!IS_ENABLED(CONFIG_DVB_DIB9000))
++ return -ENODEV;
+ if (dvb_attach(dib0090_fw_register, adap->fe_adap[0].fe, i2c, &dib9090_dib0090_config) == NULL)
+ return -ENODEV;
+ i2c = dib9000_get_i2c_master(adap->fe_adap[0].fe, DIBX000_I2C_INTERFACE_GPIO_1_2, 0);
++ if (!i2c)
++ return -ENODEV;
+ if (dib01x0_pmu_update(i2c, data_dib190, 10) != 0)
+ return -ENODEV;
+ dib0700_set_i2c_speed(adap->dev, 1500);
+@@ -2517,10 +2521,14 @@ static int nim9090md_tuner_attach(struct dvb_usb_adapter *adap)
+ 0, 0x00ef,
+ 8, 0x0406,
+ };
++ if (!IS_ENABLED(CONFIG_DVB_DIB9000))
++ return -ENODEV;
+ i2c = dib9000_get_tuner_interface(adap->fe_adap[0].fe);
+ if (dvb_attach(dib0090_fw_register, adap->fe_adap[0].fe, i2c, &nim9090md_dib0090_config[0]) == NULL)
+ return -ENODEV;
+ i2c = dib9000_get_i2c_master(adap->fe_adap[0].fe, DIBX000_I2C_INTERFACE_GPIO_1_2, 0);
++ if (!i2c)
++ return -ENODEV;
+ if (dib01x0_pmu_update(i2c, data_dib190, 10) < 0)
+ return -ENODEV;
+
+diff --git a/drivers/media/usb/dvb-usb/pctv452e.c b/drivers/media/usb/dvb-usb/pctv452e.c
+index d6b36e4f33d2..441d878fc22c 100644
+--- a/drivers/media/usb/dvb-usb/pctv452e.c
++++ b/drivers/media/usb/dvb-usb/pctv452e.c
+@@ -909,14 +909,6 @@ static int pctv452e_frontend_attach(struct dvb_usb_adapter *a)
+ &a->dev->i2c_adap);
+ if (!a->fe_adap[0].fe)
+ return -ENODEV;
+-
+- /*
+- * dvb_frontend will call dvb_detach for both stb0899_detach
+- * and stb0899_release but we only do dvb_attach(stb0899_attach).
+- * Increment the module refcount instead.
+- */
+- symbol_get(stb0899_attach);
+-
+ if ((dvb_attach(lnbp22_attach, a->fe_adap[0].fe,
+ &a->dev->i2c_adap)) == NULL)
+ err("Cannot attach lnbp22\n");
+diff --git a/drivers/media/usb/em28xx/em28xx-cards.c b/drivers/media/usb/em28xx/em28xx-cards.c
+index 1283c7ca9ad5..1de835a591a0 100644
+--- a/drivers/media/usb/em28xx/em28xx-cards.c
++++ b/drivers/media/usb/em28xx/em28xx-cards.c
+@@ -4020,7 +4020,6 @@ static void em28xx_usb_disconnect(struct usb_interface *intf)
+ dev->dev_next->disconnected = 1;
+ dev_info(&dev->intf->dev, "Disconnecting %s\n",
+ dev->dev_next->name);
+- flush_request_modules(dev->dev_next);
+ }
+
+ dev->disconnected = 1;
+diff --git a/drivers/media/usb/gspca/konica.c b/drivers/media/usb/gspca/konica.c
+index d8e40137a204..53db9a2895ea 100644
+--- a/drivers/media/usb/gspca/konica.c
++++ b/drivers/media/usb/gspca/konica.c
+@@ -114,6 +114,11 @@ static void reg_r(struct gspca_dev *gspca_dev, u16 value, u16 index)
+ if (ret < 0) {
+ pr_err("reg_r err %d\n", ret);
+ gspca_dev->usb_err = ret;
++ /*
++ * Make sure the buffer is zeroed to avoid uninitialized
++ * values.
++ */
++ memset(gspca_dev->usb_buf, 0, 2);
+ }
+ }
+
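This konica hunk and the near-identical gspca hunks that follow
(nw80x, ov519, ov534, se401, sn9c20x, sonixb, sonixj, spca1528,
sq930x, sunplus, vc032x, w996Xcf) all apply one pattern: when a USB
control read fails, zero the destination buffer so callers that ignore
the error never consume stale or uninitialized bytes. A minimal
sketch, with a stub standing in for the USB transfer:

#include <stdio.h>
#include <string.h>

/* Stand-in for the failing USB control transfer. */
static int usb_read(unsigned char *buf, size_t len)
{
	(void)buf;
	(void)len;
	return -5;	/* simulate -EIO */
}

static int reg_read(unsigned char *buf, size_t len)
{
	int ret = usb_read(buf, len);

	if (ret < 0) {
		fprintf(stderr, "reg read err %d\n", ret);
		memset(buf, 0, len);	/* defined contents on failure */
	}
	return ret;
}

int main(void)
{
	unsigned char buf[8] = { 0xde, 0xad, 0xbe, 0xef };

	reg_read(buf, sizeof(buf));
	return buf[0];	/* 0: stale bytes were wiped */
}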
+diff --git a/drivers/media/usb/gspca/nw80x.c b/drivers/media/usb/gspca/nw80x.c
+index 59649704beba..880f569bda30 100644
+--- a/drivers/media/usb/gspca/nw80x.c
++++ b/drivers/media/usb/gspca/nw80x.c
+@@ -1572,6 +1572,11 @@ static void reg_r(struct gspca_dev *gspca_dev,
+ if (ret < 0) {
+ pr_err("reg_r err %d\n", ret);
+ gspca_dev->usb_err = ret;
++ /*
++ * Make sure the buffer is zeroed to avoid uninitialized
++ * values.
++ */
++ memset(gspca_dev->usb_buf, 0, USB_BUF_SZ);
+ return;
+ }
+ if (len == 1)
+diff --git a/drivers/media/usb/gspca/ov519.c b/drivers/media/usb/gspca/ov519.c
+index cfb1f53bc17e..f417dfc0b872 100644
+--- a/drivers/media/usb/gspca/ov519.c
++++ b/drivers/media/usb/gspca/ov519.c
+@@ -2073,6 +2073,11 @@ static int reg_r(struct sd *sd, u16 index)
+ } else {
+ gspca_err(gspca_dev, "reg_r %02x failed %d\n", index, ret);
+ sd->gspca_dev.usb_err = ret;
++ /*
++ * Make sure the result is zeroed to avoid uninitialized
++ * values.
++ */
++ gspca_dev->usb_buf[0] = 0;
+ }
+
+ return ret;
+@@ -2101,6 +2106,11 @@ static int reg_r8(struct sd *sd,
+ } else {
+ gspca_err(gspca_dev, "reg_r8 %02x failed %d\n", index, ret);
+ sd->gspca_dev.usb_err = ret;
++ /*
++ * Make sure the buffer is zeroed to avoid uninitialized
++ * values.
++ */
++ memset(gspca_dev->usb_buf, 0, 8);
+ }
+
+ return ret;
+diff --git a/drivers/media/usb/gspca/ov534.c b/drivers/media/usb/gspca/ov534.c
+index 56521c991db4..185c1f10fb30 100644
+--- a/drivers/media/usb/gspca/ov534.c
++++ b/drivers/media/usb/gspca/ov534.c
+@@ -693,6 +693,11 @@ static u8 ov534_reg_read(struct gspca_dev *gspca_dev, u16 reg)
+ if (ret < 0) {
+ pr_err("read failed %d\n", ret);
+ gspca_dev->usb_err = ret;
++ /*
++ * Make sure the result is zeroed to avoid uninitialized
++ * values.
++ */
++ gspca_dev->usb_buf[0] = 0;
+ }
+ return gspca_dev->usb_buf[0];
+ }
+diff --git a/drivers/media/usb/gspca/ov534_9.c b/drivers/media/usb/gspca/ov534_9.c
+index 867f860a9650..91efc650cf76 100644
+--- a/drivers/media/usb/gspca/ov534_9.c
++++ b/drivers/media/usb/gspca/ov534_9.c
+@@ -1145,6 +1145,7 @@ static u8 reg_r(struct gspca_dev *gspca_dev, u16 reg)
+ if (ret < 0) {
+ pr_err("reg_r err %d\n", ret);
+ gspca_dev->usb_err = ret;
++ return 0;
+ }
+ return gspca_dev->usb_buf[0];
+ }
+diff --git a/drivers/media/usb/gspca/se401.c b/drivers/media/usb/gspca/se401.c
+index 061deee138c3..e087cfb5980b 100644
+--- a/drivers/media/usb/gspca/se401.c
++++ b/drivers/media/usb/gspca/se401.c
+@@ -101,6 +101,11 @@ static void se401_read_req(struct gspca_dev *gspca_dev, u16 req, int silent)
+ pr_err("read req failed req %#04x error %d\n",
+ req, err);
+ gspca_dev->usb_err = err;
++ /*
++ * Make sure the buffer is zeroed to avoid uninitialized
++ * values.
++ */
++ memset(gspca_dev->usb_buf, 0, READ_REQ_SIZE);
+ }
+ }
+
+diff --git a/drivers/media/usb/gspca/sn9c20x.c b/drivers/media/usb/gspca/sn9c20x.c
+index b43f89fee6c1..2a6d0a1265a7 100644
+--- a/drivers/media/usb/gspca/sn9c20x.c
++++ b/drivers/media/usb/gspca/sn9c20x.c
+@@ -123,6 +123,13 @@ static const struct dmi_system_id flip_dmi_table[] = {
+ DMI_MATCH(DMI_PRODUCT_VERSION, "0341")
+ }
+ },
++ {
++ .ident = "MSI MS-1039",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "MICRO-STAR INT'L CO.,LTD."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "MS-1039"),
++ }
++ },
+ {
+ .ident = "MSI MS-1632",
+ .matches = {
+@@ -909,6 +916,11 @@ static void reg_r(struct gspca_dev *gspca_dev, u16 reg, u16 length)
+ if (unlikely(result < 0 || result != length)) {
+ pr_err("Read register %02x failed %d\n", reg, result);
+ gspca_dev->usb_err = result;
++ /*
++ * Make sure the buffer is zeroed to avoid uninitialized
++ * values.
++ */
++ memset(gspca_dev->usb_buf, 0, USB_BUF_SZ);
+ }
+ }
+
+diff --git a/drivers/media/usb/gspca/sonixb.c b/drivers/media/usb/gspca/sonixb.c
+index 046fc2c2a135..4d655e2da9cb 100644
+--- a/drivers/media/usb/gspca/sonixb.c
++++ b/drivers/media/usb/gspca/sonixb.c
+@@ -453,6 +453,11 @@ static void reg_r(struct gspca_dev *gspca_dev,
+ dev_err(gspca_dev->v4l2_dev.dev,
+ "Error reading register %02x: %d\n", value, res);
+ gspca_dev->usb_err = res;
++ /*
++ * Make sure the result is zeroed to avoid uninitialized
++ * values.
++ */
++ gspca_dev->usb_buf[0] = 0;
+ }
+ }
+
+diff --git a/drivers/media/usb/gspca/sonixj.c b/drivers/media/usb/gspca/sonixj.c
+index 50a6c8425827..2e1bd2df8304 100644
+--- a/drivers/media/usb/gspca/sonixj.c
++++ b/drivers/media/usb/gspca/sonixj.c
+@@ -1162,6 +1162,11 @@ static void reg_r(struct gspca_dev *gspca_dev,
+ if (ret < 0) {
+ pr_err("reg_r err %d\n", ret);
+ gspca_dev->usb_err = ret;
++ /*
++ * Make sure the buffer is zeroed to avoid uninitialized
++ * values.
++ */
++ memset(gspca_dev->usb_buf, 0, USB_BUF_SZ);
+ }
+ }
+
+diff --git a/drivers/media/usb/gspca/spca1528.c b/drivers/media/usb/gspca/spca1528.c
+index 2ae03b60163f..ccc477944ef8 100644
+--- a/drivers/media/usb/gspca/spca1528.c
++++ b/drivers/media/usb/gspca/spca1528.c
+@@ -71,6 +71,11 @@ static void reg_r(struct gspca_dev *gspca_dev,
+ if (ret < 0) {
+ pr_err("reg_r err %d\n", ret);
+ gspca_dev->usb_err = ret;
++ /*
++ * Make sure the buffer is zeroed to avoid uninitialized
++ * values.
++ */
++ memset(gspca_dev->usb_buf, 0, USB_BUF_SZ);
+ }
+ }
+
+diff --git a/drivers/media/usb/gspca/sq930x.c b/drivers/media/usb/gspca/sq930x.c
+index d1ba0888d798..c3610247a90e 100644
+--- a/drivers/media/usb/gspca/sq930x.c
++++ b/drivers/media/usb/gspca/sq930x.c
+@@ -425,6 +425,11 @@ static void reg_r(struct gspca_dev *gspca_dev,
+ if (ret < 0) {
+ pr_err("reg_r %04x failed %d\n", value, ret);
+ gspca_dev->usb_err = ret;
++ /*
++ * Make sure the buffer is zeroed to avoid uninitialized
++ * values.
++ */
++ memset(gspca_dev->usb_buf, 0, USB_BUF_SZ);
+ }
+ }
+
+diff --git a/drivers/media/usb/gspca/sunplus.c b/drivers/media/usb/gspca/sunplus.c
+index d0ddfa957ca9..f4a4222f0d2e 100644
+--- a/drivers/media/usb/gspca/sunplus.c
++++ b/drivers/media/usb/gspca/sunplus.c
+@@ -255,6 +255,11 @@ static void reg_r(struct gspca_dev *gspca_dev,
+ if (ret < 0) {
+ pr_err("reg_r err %d\n", ret);
+ gspca_dev->usb_err = ret;
++ /*
++ * Make sure the buffer is zeroed to avoid uninitialized
++ * values.
++ */
++ memset(gspca_dev->usb_buf, 0, USB_BUF_SZ);
+ }
+ }
+
+diff --git a/drivers/media/usb/gspca/vc032x.c b/drivers/media/usb/gspca/vc032x.c
+index 588a847ea483..4cb7c92ea132 100644
+--- a/drivers/media/usb/gspca/vc032x.c
++++ b/drivers/media/usb/gspca/vc032x.c
+@@ -2906,6 +2906,11 @@ static void reg_r_i(struct gspca_dev *gspca_dev,
+ if (ret < 0) {
+ pr_err("reg_r err %d\n", ret);
+ gspca_dev->usb_err = ret;
++ /*
++ * Make sure the buffer is zeroed to avoid uninitialized
++ * values.
++ */
++ memset(gspca_dev->usb_buf, 0, USB_BUF_SZ);
+ }
+ }
+ static void reg_r(struct gspca_dev *gspca_dev,
+diff --git a/drivers/media/usb/gspca/w996Xcf.c b/drivers/media/usb/gspca/w996Xcf.c
+index 16b679c2de21..a8350ee9712f 100644
+--- a/drivers/media/usb/gspca/w996Xcf.c
++++ b/drivers/media/usb/gspca/w996Xcf.c
+@@ -133,6 +133,11 @@ static int w9968cf_read_sb(struct sd *sd)
+ } else {
+ pr_err("Read SB reg [01] failed\n");
+ sd->gspca_dev.usb_err = ret;
++ /*
++ * Make sure the buffer is zeroed to avoid uninitialized
++ * values.
++ */
++ memset(sd->gspca_dev.usb_buf, 0, 2);
+ }
+
+ udelay(W9968CF_I2C_BUS_DELAY);
+diff --git a/drivers/media/usb/hdpvr/hdpvr-core.c b/drivers/media/usb/hdpvr/hdpvr-core.c
+index 9b9d894d29bc..b75c18a012a7 100644
+--- a/drivers/media/usb/hdpvr/hdpvr-core.c
++++ b/drivers/media/usb/hdpvr/hdpvr-core.c
+@@ -137,6 +137,7 @@ static int device_authorization(struct hdpvr_device *dev)
+
+ dev->fw_ver = dev->usbc_buf[1];
+
++ dev->usbc_buf[46] = '\0';
+ v4l2_info(&dev->v4l2_dev, "firmware version 0x%x dated %s\n",
+ dev->fw_ver, &dev->usbc_buf[2]);
+
+@@ -271,6 +272,7 @@ static int hdpvr_probe(struct usb_interface *interface,
+ #endif
+ size_t buffer_size;
+ int i;
++ int dev_num;
+ int retval = -ENOMEM;
+
+ /* allocate memory for our device state and initialize it */
+@@ -368,8 +370,17 @@ static int hdpvr_probe(struct usb_interface *interface,
+ }
+ #endif
+
++ dev_num = atomic_inc_return(&dev_nr);
++ if (dev_num >= HDPVR_MAX) {
++ v4l2_err(&dev->v4l2_dev,
++ "max device number reached, device register failed\n");
++ atomic_dec(&dev_nr);
++ retval = -ENODEV;
++ goto reg_fail;
++ }
++
+ retval = hdpvr_register_videodev(dev, &interface->dev,
+- video_nr[atomic_inc_return(&dev_nr)]);
++ video_nr[dev_num]);
+ if (retval < 0) {
+ v4l2_err(&dev->v4l2_dev, "registering videodev failed\n");
+ goto reg_fail;
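The hdpvr hunk bounds the device counter: a slot is reserved with an
atomic increment, range-checked before it is used to index video_nr[],
and released again if the table is full. A sketch of the same idea
with the kernel's atomic_t modeled by C11 atomics:

#include <stdatomic.h>
#include <stdio.h>

#define HDPVR_MAX 8

static atomic_int dev_nr = -1;	/* first increment yields slot 0 */

/* Reserve a slot atomically, range-check it *before* indexing the
 * device table, and give it back if the table is already full. */
static int reserve_slot(void)
{
	int n = atomic_fetch_add(&dev_nr, 1) + 1;

	if (n >= HDPVR_MAX) {
		atomic_fetch_sub(&dev_nr, 1);
		fprintf(stderr, "max device number reached\n");
		return -1;	/* -ENODEV in the driver */
	}
	return n;
}

int main(void)
{
	for (int i = 0; i <= HDPVR_MAX; i++)
		printf("slot %d\n", reserve_slot());
	return 0;
}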
+diff --git a/drivers/media/usb/ttusb-dec/ttusb_dec.c b/drivers/media/usb/ttusb-dec/ttusb_dec.c
+index 1d0afa340f47..3198f9624b7c 100644
+--- a/drivers/media/usb/ttusb-dec/ttusb_dec.c
++++ b/drivers/media/usb/ttusb-dec/ttusb_dec.c
+@@ -319,7 +319,7 @@ static int ttusb_dec_send_command(struct ttusb_dec *dec, const u8 command,
+
+ dprintk("%s\n", __func__);
+
+- b = kmalloc(COMMAND_PACKET_SIZE + 4, GFP_KERNEL);
++ b = kzalloc(COMMAND_PACKET_SIZE + 4, GFP_KERNEL);
+ if (!b)
+ return -ENOMEM;
+
+diff --git a/drivers/media/v4l2-core/videobuf-core.c b/drivers/media/v4l2-core/videobuf-core.c
+index 7ef3e4d22bf6..939fc11cf080 100644
+--- a/drivers/media/v4l2-core/videobuf-core.c
++++ b/drivers/media/v4l2-core/videobuf-core.c
+@@ -1123,7 +1123,6 @@ __poll_t videobuf_poll_stream(struct file *file,
+ struct videobuf_buffer *buf = NULL;
+ __poll_t rc = 0;
+
+- poll_wait(file, &buf->done, wait);
+ videobuf_queue_lock(q);
+ if (q->streaming) {
+ if (!list_empty(&q->stream))
+@@ -1143,7 +1142,9 @@ __poll_t videobuf_poll_stream(struct file *file,
+ }
+ buf = q->read_buf;
+ }
+- if (!buf)
++ if (buf)
++ poll_wait(file, &buf->done, wait);
++ else
+ rc = EPOLLERR;
+
+ if (0 == rc) {
+diff --git a/drivers/mmc/core/sdio_irq.c b/drivers/mmc/core/sdio_irq.c
+index 0bcc5e83bd1a..40109a615922 100644
+--- a/drivers/mmc/core/sdio_irq.c
++++ b/drivers/mmc/core/sdio_irq.c
+@@ -31,6 +31,7 @@ static int process_sdio_pending_irqs(struct mmc_host *host)
+ {
+ struct mmc_card *card = host->card;
+ int i, ret, count;
++ bool sdio_irq_pending = host->sdio_irq_pending;
+ unsigned char pending;
+ struct sdio_func *func;
+
+@@ -38,13 +39,16 @@ static int process_sdio_pending_irqs(struct mmc_host *host)
+ if (mmc_card_suspended(card))
+ return 0;
+
++ /* Clear the flag to indicate that we have processed the IRQ. */
++ host->sdio_irq_pending = false;
++
+ /*
+ * Optimization, if there is only 1 function interrupt registered
+ * and we know an IRQ was signaled then call irq handler directly.
+ * Otherwise do the full probe.
+ */
+ func = card->sdio_single_irq;
+- if (func && host->sdio_irq_pending) {
++ if (func && sdio_irq_pending) {
+ func->irq_handler(func);
+ return 1;
+ }
+@@ -96,7 +100,6 @@ static void sdio_run_irqs(struct mmc_host *host)
+ {
+ mmc_claim_host(host);
+ if (host->sdio_irqs) {
+- host->sdio_irq_pending = true;
+ process_sdio_pending_irqs(host);
+ if (host->ops->ack_sdio_irq)
+ host->ops->ack_sdio_irq(host);
+@@ -114,6 +117,7 @@ void sdio_irq_work(struct work_struct *work)
+
+ void sdio_signal_irq(struct mmc_host *host)
+ {
++ host->sdio_irq_pending = true;
+ queue_delayed_work(system_wq, &host->sdio_irq_work, 0);
+ }
+ EXPORT_SYMBOL_GPL(sdio_signal_irq);
+@@ -159,7 +163,6 @@ static int sdio_irq_thread(void *_host)
+ if (ret)
+ break;
+ ret = process_sdio_pending_irqs(host);
+- host->sdio_irq_pending = false;
+ mmc_release_host(host);
+
+ /*
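The sdio_irq changes reorder the pending-IRQ flag handling: the
producer now sets the flag before queueing work, and the consumer
snapshots and clears it before processing, so an interrupt signaled
mid-processing stays pending for the next pass instead of being wiped
afterwards and lost. A userspace analogue of that snapshot-and-clear,
using an atomic exchange in place of the kernel's flag handling:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool irq_pending;

/* Producer: mark the IRQ pending *before* kicking the worker. */
static void signal_irq(void)
{
	atomic_store(&irq_pending, true);
	/* ...queue_delayed_work() equivalent here... */
}

/* Consumer: snapshot and clear the flag *before* processing, so an
 * IRQ signaled while we work is seen on the next pass. */
static void process_irqs(void)
{
	bool pending = atomic_exchange(&irq_pending, false);

	if (pending)
		puts("fast path: dispatch to the single claimed handler");
	else
		puts("slow path: probe every function's pending bit");
}

int main(void)
{
	signal_irq();
	process_irqs();
	return 0;
}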
+diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
+index eea52e2c5a0c..79c55c7b4afd 100644
+--- a/drivers/mmc/host/dw_mmc.c
++++ b/drivers/mmc/host/dw_mmc.c
+@@ -3460,6 +3460,10 @@ int dw_mci_runtime_resume(struct device *dev)
+ /* Force setup bus to guarantee available clock output */
+ dw_mci_setup_bus(host->slot, true);
+
++ /* Re-enable SDIO interrupts. */
++ if (sdio_irq_claimed(host->slot->mmc))
++ __dw_mci_enable_sdio_irq(host->slot, 1);
++
+ /* Now that slots are all setup, we can enable card detect */
+ dw_mci_enable_cd(host);
+
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index 33f4b6387ef7..978c8ccce7e3 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -2421,6 +2421,9 @@ static void msdc_restore_reg(struct msdc_host *host)
+ } else {
+ writel(host->save_para.pad_tune, host->base + tune_reg);
+ }
++
++ if (sdio_irq_claimed(host->mmc))
++ __msdc_enable_sdio_irq(host, 1);
+ }
+
+ static int msdc_runtime_suspend(struct device *dev)
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index a5dc5aae973e..c66e66fbaeb4 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -1849,7 +1849,9 @@ void sdhci_set_uhs_signaling(struct sdhci_host *host, unsigned timing)
+ ctrl_2 |= SDHCI_CTRL_UHS_SDR104;
+ else if (timing == MMC_TIMING_UHS_SDR12)
+ ctrl_2 |= SDHCI_CTRL_UHS_SDR12;
+- else if (timing == MMC_TIMING_UHS_SDR25)
++ else if (timing == MMC_TIMING_SD_HS ||
++ timing == MMC_TIMING_MMC_HS ||
++ timing == MMC_TIMING_UHS_SDR25)
+ ctrl_2 |= SDHCI_CTRL_UHS_SDR25;
+ else if (timing == MMC_TIMING_UHS_SDR50)
+ ctrl_2 |= SDHCI_CTRL_UHS_SDR50;
+diff --git a/drivers/mtd/nand/raw/stm32_fmc2_nand.c b/drivers/mtd/nand/raw/stm32_fmc2_nand.c
+index e63acc077c18..8cc852dc7d54 100644
+--- a/drivers/mtd/nand/raw/stm32_fmc2_nand.c
++++ b/drivers/mtd/nand/raw/stm32_fmc2_nand.c
+@@ -1427,21 +1427,16 @@ static void stm32_fmc2_calc_timings(struct nand_chip *chip,
+ struct stm32_fmc2_timings *tims = &nand->timings;
+ unsigned long hclk = clk_get_rate(fmc2->clk);
+ unsigned long hclkp = NSEC_PER_SEC / (hclk / 1000);
+- int tar, tclr, thiz, twait, tset_mem, tset_att, thold_mem, thold_att;
+-
+- tar = hclkp;
+- if (tar < sdrt->tAR_min)
+- tar = sdrt->tAR_min;
+- tims->tar = DIV_ROUND_UP(tar, hclkp) - 1;
+- if (tims->tar > FMC2_PCR_TIMING_MASK)
+- tims->tar = FMC2_PCR_TIMING_MASK;
+-
+- tclr = hclkp;
+- if (tclr < sdrt->tCLR_min)
+- tclr = sdrt->tCLR_min;
+- tims->tclr = DIV_ROUND_UP(tclr, hclkp) - 1;
+- if (tims->tclr > FMC2_PCR_TIMING_MASK)
+- tims->tclr = FMC2_PCR_TIMING_MASK;
++ unsigned long timing, tar, tclr, thiz, twait;
++ unsigned long tset_mem, tset_att, thold_mem, thold_att;
++
++ tar = max_t(unsigned long, hclkp, sdrt->tAR_min);
++ timing = DIV_ROUND_UP(tar, hclkp) - 1;
++ tims->tar = min_t(unsigned long, timing, FMC2_PCR_TIMING_MASK);
++
++ tclr = max_t(unsigned long, hclkp, sdrt->tCLR_min);
++ timing = DIV_ROUND_UP(tclr, hclkp) - 1;
++ tims->tclr = min_t(unsigned long, timing, FMC2_PCR_TIMING_MASK);
+
+ tims->thiz = FMC2_THIZ;
+ thiz = (tims->thiz + 1) * hclkp;
+@@ -1451,18 +1446,11 @@ static void stm32_fmc2_calc_timings(struct nand_chip *chip,
+ * tWAIT > tWP
+ * tWAIT > tREA + tIO
+ */
+- twait = hclkp;
+- if (twait < sdrt->tRP_min)
+- twait = sdrt->tRP_min;
+- if (twait < sdrt->tWP_min)
+- twait = sdrt->tWP_min;
+- if (twait < sdrt->tREA_max + FMC2_TIO)
+- twait = sdrt->tREA_max + FMC2_TIO;
+- tims->twait = DIV_ROUND_UP(twait, hclkp);
+- if (tims->twait == 0)
+- tims->twait = 1;
+- else if (tims->twait > FMC2_PMEM_PATT_TIMING_MASK)
+- tims->twait = FMC2_PMEM_PATT_TIMING_MASK;
++ twait = max_t(unsigned long, hclkp, sdrt->tRP_min);
++ twait = max_t(unsigned long, twait, sdrt->tWP_min);
++ twait = max_t(unsigned long, twait, sdrt->tREA_max + FMC2_TIO);
++ timing = DIV_ROUND_UP(twait, hclkp);
++ tims->twait = clamp_val(timing, 1, FMC2_PMEM_PATT_TIMING_MASK);
+
+ /*
+ * tSETUP_MEM > tCS - tWAIT
+@@ -1477,20 +1465,15 @@ static void stm32_fmc2_calc_timings(struct nand_chip *chip,
+ if (twait > thiz && (sdrt->tDS_min > twait - thiz) &&
+ (tset_mem < sdrt->tDS_min - (twait - thiz)))
+ tset_mem = sdrt->tDS_min - (twait - thiz);
+- tims->tset_mem = DIV_ROUND_UP(tset_mem, hclkp);
+- if (tims->tset_mem == 0)
+- tims->tset_mem = 1;
+- else if (tims->tset_mem > FMC2_PMEM_PATT_TIMING_MASK)
+- tims->tset_mem = FMC2_PMEM_PATT_TIMING_MASK;
++ timing = DIV_ROUND_UP(tset_mem, hclkp);
++ tims->tset_mem = clamp_val(timing, 1, FMC2_PMEM_PATT_TIMING_MASK);
+
+ /*
+ * tHOLD_MEM > tCH
+ * tHOLD_MEM > tREH - tSETUP_MEM
+ * tHOLD_MEM > max(tRC, tWC) - (tSETUP_MEM + tWAIT)
+ */
+- thold_mem = hclkp;
+- if (thold_mem < sdrt->tCH_min)
+- thold_mem = sdrt->tCH_min;
++ thold_mem = max_t(unsigned long, hclkp, sdrt->tCH_min);
+ if (sdrt->tREH_min > tset_mem &&
+ (thold_mem < sdrt->tREH_min - tset_mem))
+ thold_mem = sdrt->tREH_min - tset_mem;
+@@ -1500,11 +1483,8 @@ static void stm32_fmc2_calc_timings(struct nand_chip *chip,
+ if ((sdrt->tWC_min > tset_mem + twait) &&
+ (thold_mem < sdrt->tWC_min - (tset_mem + twait)))
+ thold_mem = sdrt->tWC_min - (tset_mem + twait);
+- tims->thold_mem = DIV_ROUND_UP(thold_mem, hclkp);
+- if (tims->thold_mem == 0)
+- tims->thold_mem = 1;
+- else if (tims->thold_mem > FMC2_PMEM_PATT_TIMING_MASK)
+- tims->thold_mem = FMC2_PMEM_PATT_TIMING_MASK;
++ timing = DIV_ROUND_UP(thold_mem, hclkp);
++ tims->thold_mem = clamp_val(timing, 1, FMC2_PMEM_PATT_TIMING_MASK);
+
+ /*
+ * tSETUP_ATT > tCS - tWAIT
+@@ -1526,11 +1506,8 @@ static void stm32_fmc2_calc_timings(struct nand_chip *chip,
+ if (twait > thiz && (sdrt->tDS_min > twait - thiz) &&
+ (tset_att < sdrt->tDS_min - (twait - thiz)))
+ tset_att = sdrt->tDS_min - (twait - thiz);
+- tims->tset_att = DIV_ROUND_UP(tset_att, hclkp);
+- if (tims->tset_att == 0)
+- tims->tset_att = 1;
+- else if (tims->tset_att > FMC2_PMEM_PATT_TIMING_MASK)
+- tims->tset_att = FMC2_PMEM_PATT_TIMING_MASK;
++ timing = DIV_ROUND_UP(tset_att, hclkp);
++ tims->tset_att = clamp_val(timing, 1, FMC2_PMEM_PATT_TIMING_MASK);
+
+ /*
+ * tHOLD_ATT > tALH
+@@ -1545,17 +1522,11 @@ static void stm32_fmc2_calc_timings(struct nand_chip *chip,
+ * tHOLD_ATT > tRC - (tSETUP_ATT + tWAIT)
+ * tHOLD_ATT > tWC - (tSETUP_ATT + tWAIT)
+ */
+- thold_att = hclkp;
+- if (thold_att < sdrt->tALH_min)
+- thold_att = sdrt->tALH_min;
+- if (thold_att < sdrt->tCH_min)
+- thold_att = sdrt->tCH_min;
+- if (thold_att < sdrt->tCLH_min)
+- thold_att = sdrt->tCLH_min;
+- if (thold_att < sdrt->tCOH_min)
+- thold_att = sdrt->tCOH_min;
+- if (thold_att < sdrt->tDH_min)
+- thold_att = sdrt->tDH_min;
++ thold_att = max_t(unsigned long, hclkp, sdrt->tALH_min);
++ thold_att = max_t(unsigned long, thold_att, sdrt->tCH_min);
++ thold_att = max_t(unsigned long, thold_att, sdrt->tCLH_min);
++ thold_att = max_t(unsigned long, thold_att, sdrt->tCOH_min);
++ thold_att = max_t(unsigned long, thold_att, sdrt->tDH_min);
+ if ((sdrt->tWB_max + FMC2_TIO + FMC2_TSYNC > tset_mem) &&
+ (thold_att < sdrt->tWB_max + FMC2_TIO + FMC2_TSYNC - tset_mem))
+ thold_att = sdrt->tWB_max + FMC2_TIO + FMC2_TSYNC - tset_mem;
+@@ -1574,11 +1545,8 @@ static void stm32_fmc2_calc_timings(struct nand_chip *chip,
+ if ((sdrt->tWC_min > tset_att + twait) &&
+ (thold_att < sdrt->tWC_min - (tset_att + twait)))
+ thold_att = sdrt->tWC_min - (tset_att + twait);
+- tims->thold_att = DIV_ROUND_UP(thold_att, hclkp);
+- if (tims->thold_att == 0)
+- tims->thold_att = 1;
+- else if (tims->thold_att > FMC2_PMEM_PATT_TIMING_MASK)
+- tims->thold_att = FMC2_PMEM_PATT_TIMING_MASK;
++ timing = DIV_ROUND_UP(thold_att, hclkp);
++ tims->thold_att = clamp_val(timing, 1, FMC2_PMEM_PATT_TIMING_MASK);
+ }
+
+ static int stm32_fmc2_setup_interface(struct nand_chip *chip, int chipnr,
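The stm32_fmc2 rewrite above replaces each open-coded chain of
comparisons with the same three-step recipe: raise the timing to every
datasheet minimum with max_t(), convert to clock cycles with
DIV_ROUND_UP(), then clamp the result into the register field. A
worked example with made-up numbers:

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
#define TIMING_MASK		0xf	/* 4-bit register field */

static unsigned long max_ul(unsigned long a, unsigned long b)
{
	return a > b ? a : b;
}

static unsigned long clamp_ul(unsigned long v, unsigned long lo,
			      unsigned long hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

int main(void)
{
	unsigned long hclkp = 8;	/* ns per AHB clock cycle */
	unsigned long tar_min = 25;	/* datasheet minimum, ns */

	/* raise to the minimum, convert to cycles, clamp to the field */
	unsigned long tar = max_ul(hclkp, tar_min);
	unsigned long field = clamp_ul(DIV_ROUND_UP(tar, hclkp) - 1,
				       0, TIMING_MASK);

	printf("tar field = %lu\n", field);	/* prints 3 */
	return 0;
}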
+diff --git a/drivers/net/arcnet/arcnet.c b/drivers/net/arcnet/arcnet.c
+index 8459115d9d4e..553776cc1d29 100644
+--- a/drivers/net/arcnet/arcnet.c
++++ b/drivers/net/arcnet/arcnet.c
+@@ -1063,31 +1063,34 @@ EXPORT_SYMBOL(arcnet_interrupt);
+ static void arcnet_rx(struct net_device *dev, int bufnum)
+ {
+ struct arcnet_local *lp = netdev_priv(dev);
+- struct archdr pkt;
++ union {
++ struct archdr pkt;
++ char buf[512];
++ } rxdata;
+ struct arc_rfc1201 *soft;
+ int length, ofs;
+
+- soft = &pkt.soft.rfc1201;
++ soft = &rxdata.pkt.soft.rfc1201;
+
+- lp->hw.copy_from_card(dev, bufnum, 0, &pkt, ARC_HDR_SIZE);
+- if (pkt.hard.offset[0]) {
+- ofs = pkt.hard.offset[0];
++ lp->hw.copy_from_card(dev, bufnum, 0, &rxdata.pkt, ARC_HDR_SIZE);
++ if (rxdata.pkt.hard.offset[0]) {
++ ofs = rxdata.pkt.hard.offset[0];
+ length = 256 - ofs;
+ } else {
+- ofs = pkt.hard.offset[1];
++ ofs = rxdata.pkt.hard.offset[1];
+ length = 512 - ofs;
+ }
+
+ /* get the full header, if possible */
+- if (sizeof(pkt.soft) <= length) {
+- lp->hw.copy_from_card(dev, bufnum, ofs, soft, sizeof(pkt.soft));
++ if (sizeof(rxdata.pkt.soft) <= length) {
++ lp->hw.copy_from_card(dev, bufnum, ofs, soft, sizeof(rxdata.pkt.soft));
+ } else {
+- memset(&pkt.soft, 0, sizeof(pkt.soft));
++ memset(&rxdata.pkt.soft, 0, sizeof(rxdata.pkt.soft));
+ lp->hw.copy_from_card(dev, bufnum, ofs, soft, length);
+ }
+
+ arc_printk(D_DURING, dev, "Buffer #%d: received packet from %02Xh to %02Xh (%d+4 bytes)\n",
+- bufnum, pkt.hard.source, pkt.hard.dest, length);
++ bufnum, rxdata.pkt.hard.source, rxdata.pkt.hard.dest, length);
+
+ dev->stats.rx_packets++;
+ dev->stats.rx_bytes += length + ARC_HDR_SIZE;
+@@ -1096,13 +1099,13 @@ static void arcnet_rx(struct net_device *dev, int bufnum)
+ if (arc_proto_map[soft->proto]->is_ip) {
+ if (BUGLVL(D_PROTO)) {
+ struct ArcProto
+- *oldp = arc_proto_map[lp->default_proto[pkt.hard.source]],
++ *oldp = arc_proto_map[lp->default_proto[rxdata.pkt.hard.source]],
+ *newp = arc_proto_map[soft->proto];
+
+ if (oldp != newp) {
+ arc_printk(D_PROTO, dev,
+ "got protocol %02Xh; encap for host %02Xh is now '%c' (was '%c')\n",
+- soft->proto, pkt.hard.source,
++ soft->proto, rxdata.pkt.hard.source,
+ newp->suffix, oldp->suffix);
+ }
+ }
+@@ -1111,10 +1114,10 @@ static void arcnet_rx(struct net_device *dev, int bufnum)
+ lp->default_proto[0] = soft->proto;
+
+ /* in striking contrast, the following isn't a hack. */
+- lp->default_proto[pkt.hard.source] = soft->proto;
++ lp->default_proto[rxdata.pkt.hard.source] = soft->proto;
+ }
+ /* call the protocol-specific receiver. */
+- arc_proto_map[soft->proto]->rx(dev, bufnum, &pkt, length);
++ arc_proto_map[soft->proto]->rx(dev, bufnum, &rxdata.pkt, length);
+ }
+
+ static void null_rx(struct net_device *dev, int bufnum,
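The arcnet fix addresses a stack overread/overwrite: the receive path
can copy more bytes from the card than struct archdr has room for, so
the on-stack packet is wrapped in a union with a 512-byte array,
keeping the field names while guaranteeing worst-case space. The shape
of the fix in isolation, with simplified stand-in structs:

#include <stdio.h>
#include <string.h>

struct header { unsigned char offset[2]; };
struct packet { struct header hard; unsigned char soft[256]; };

/* The union keeps the convenient field names of struct packet while
 * guaranteeing the stack slot is large enough for a worst-case
 * 512-byte copy from the card. */
union rxdata {
	struct packet pkt;
	unsigned char buf[512];
};

int main(void)
{
	union rxdata rx;

	memset(rx.buf, 0, sizeof(rx.buf));	/* safe: full 512 bytes */
	printf("%zu bytes reserved on the stack\n", sizeof(rx));
	return 0;
}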
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+index 395b05701480..a1fab77b2096 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+@@ -1429,6 +1429,16 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
+ else
+ phy_reg |= 0xFA;
+ e1e_wphy_locked(hw, I217_PLL_CLOCK_GATE_REG, phy_reg);
++
++ if (speed == SPEED_1000) {
++ hw->phy.ops.read_reg_locked(hw, HV_PM_CTRL,
++ &phy_reg);
++
++ phy_reg |= HV_PM_CTRL_K1_CLK_REQ;
++
++ hw->phy.ops.write_reg_locked(hw, HV_PM_CTRL,
++ phy_reg);
++ }
+ }
+ hw->phy.ops.release(hw);
+
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.h b/drivers/net/ethernet/intel/e1000e/ich8lan.h
+index eb09c755fa17..1502895eb45d 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.h
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.h
+@@ -210,7 +210,7 @@
+
+ /* PHY Power Management Control */
+ #define HV_PM_CTRL PHY_REG(770, 17)
+-#define HV_PM_CTRL_PLL_STOP_IN_K1_GIGA 0x100
++#define HV_PM_CTRL_K1_CLK_REQ 0x200
+ #define HV_PM_CTRL_K1_ENABLE 0x4000
+
+ #define I217_PLL_CLOCK_GATE_REG PHY_REG(772, 28)
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 9ebbe3da61bb..d22491ce73e6 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -2583,6 +2583,10 @@ static void i40e_sync_filters_subtask(struct i40e_pf *pf)
+ return;
+ if (!test_and_clear_bit(__I40E_MACVLAN_SYNC_PENDING, pf->state))
+ return;
++ if (test_and_set_bit(__I40E_VF_DISABLE, pf->state)) {
++ set_bit(__I40E_MACVLAN_SYNC_PENDING, pf->state);
++ return;
++ }
+
+ for (v = 0; v < pf->num_alloc_vsi; v++) {
+ if (pf->vsi[v] &&
+@@ -2597,6 +2601,7 @@ static void i40e_sync_filters_subtask(struct i40e_pf *pf)
+ }
+ }
+ }
++ clear_bit(__I40E_VF_DISABLE, pf->state);
+ }
+
+ /**
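The i40e hunk guards the filter-sync subtask with the
__I40E_VF_DISABLE bit: if another path already holds it, the subtask
re-arms its own pending flag and retries on a later pass instead of
racing with a VF reset. A rough userspace analogue using C11 atomics
(the kernel uses test_and_set_bit on pf->state instead):

#include <stdatomic.h>
#include <stdbool.h>

static atomic_flag vf_disable = ATOMIC_FLAG_INIT;
static atomic_bool sync_pending;

/* If the VF-disable "lock" bit is already held elsewhere, re-arm the
 * pending flag and bail out; the subtask runs again later. */
static void sync_filters(void)
{
	if (!atomic_exchange(&sync_pending, false))
		return;				/* nothing queued */
	if (atomic_flag_test_and_set(&vf_disable)) {
		atomic_store(&sync_pending, true);	/* retry later */
		return;
	}
	/* ...walk the VSIs and sync MAC/VLAN filters here... */
	atomic_flag_clear(&vf_disable);
}

int main(void)
{
	atomic_store(&sync_pending, true);
	sync_filters();
	return 0;
}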
+diff --git a/drivers/net/ethernet/marvell/skge.c b/drivers/net/ethernet/marvell/skge.c
+index 9ac854c2b371..697321898e84 100644
+--- a/drivers/net/ethernet/marvell/skge.c
++++ b/drivers/net/ethernet/marvell/skge.c
+@@ -3108,7 +3108,7 @@ static struct sk_buff *skge_rx_get(struct net_device *dev,
+ skb_put(skb, len);
+
+ if (dev->features & NETIF_F_RXCSUM) {
+- skb->csum = csum;
++ skb->csum = le16_to_cpu(csum);
+ skb->ip_summed = CHECKSUM_COMPLETE;
+ }
+
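The skge one-liner is an endianness fix: the NIC reports the packet
checksum little-endian, so it must be converted to host order before
being stored in skb->csum, otherwise big-endian hosts compute garbage.
A portable le16-to-host conversion built from the raw bytes:

#include <stdint.h>
#include <stdio.h>

/* Build the value from the raw bytes so the result is the same on
 * any host byte order, which is what le16_to_cpu() guarantees. */
static uint16_t le16_to_host(const uint8_t b[2])
{
	return (uint16_t)(b[0] | (b[1] << 8));
}

int main(void)
{
	uint8_t wire[2] = { 0x34, 0x12 };	/* as the NIC wrote it */

	printf("0x%04x\n", le16_to_host(wire));	/* 0x1234 */
	return 0;
}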
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c
+index 94304abc49e9..39e90b873319 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c
+@@ -399,10 +399,10 @@ add_ethtool_flow_rule(struct mlx5e_priv *priv,
+ struct mlx5_flow_table *ft,
+ struct ethtool_rx_flow_spec *fs)
+ {
++ struct mlx5_flow_act flow_act = { .flags = FLOW_ACT_NO_APPEND };
+ struct mlx5_flow_destination *dst = NULL;
+- struct mlx5_flow_act flow_act = {0};
+- struct mlx5_flow_spec *spec;
+ struct mlx5_flow_handle *rule;
++ struct mlx5_flow_spec *spec;
+ int err = 0;
+
+ spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 00b2d4a86159..98be5fe33674 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -1369,46 +1369,63 @@ static int parse_tunnel_attr(struct mlx5e_priv *priv,
+ return err;
+ }
+
+- if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS)) {
+- struct flow_match_ipv4_addrs match;
++ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_CONTROL)) {
++ struct flow_match_control match;
++ u16 addr_type;
+
+- flow_rule_match_enc_ipv4_addrs(rule, &match);
+- MLX5_SET(fte_match_set_lyr_2_4, headers_c,
+- src_ipv4_src_ipv6.ipv4_layout.ipv4,
+- ntohl(match.mask->src));
+- MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+- src_ipv4_src_ipv6.ipv4_layout.ipv4,
+- ntohl(match.key->src));
+-
+- MLX5_SET(fte_match_set_lyr_2_4, headers_c,
+- dst_ipv4_dst_ipv6.ipv4_layout.ipv4,
+- ntohl(match.mask->dst));
+- MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+- dst_ipv4_dst_ipv6.ipv4_layout.ipv4,
+- ntohl(match.key->dst));
+-
+- MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, headers_c, ethertype);
+- MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype, ETH_P_IP);
+- } else if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS)) {
+- struct flow_match_ipv6_addrs match;
++ flow_rule_match_enc_control(rule, &match);
++ addr_type = match.key->addr_type;
+
+- flow_rule_match_enc_ipv6_addrs(rule, &match);
+- memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c,
+- src_ipv4_src_ipv6.ipv6_layout.ipv6),
+- &match.mask->src, MLX5_FLD_SZ_BYTES(ipv6_layout, ipv6));
+- memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
+- src_ipv4_src_ipv6.ipv6_layout.ipv6),
+- &match.key->src, MLX5_FLD_SZ_BYTES(ipv6_layout, ipv6));
++ /* Tunnel addr_type uses the same key IDs as the non-tunnel path */
++ if (addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
++ struct flow_match_ipv4_addrs match;
+
+- memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c,
+- dst_ipv4_dst_ipv6.ipv6_layout.ipv6),
+- &match.mask->dst, MLX5_FLD_SZ_BYTES(ipv6_layout, ipv6));
+- memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
+- dst_ipv4_dst_ipv6.ipv6_layout.ipv6),
+- &match.key->dst, MLX5_FLD_SZ_BYTES(ipv6_layout, ipv6));
++ flow_rule_match_enc_ipv4_addrs(rule, &match);
++ MLX5_SET(fte_match_set_lyr_2_4, headers_c,
++ src_ipv4_src_ipv6.ipv4_layout.ipv4,
++ ntohl(match.mask->src));
++ MLX5_SET(fte_match_set_lyr_2_4, headers_v,
++ src_ipv4_src_ipv6.ipv4_layout.ipv4,
++ ntohl(match.key->src));
+
+- MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, headers_c, ethertype);
+- MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype, ETH_P_IPV6);
++ MLX5_SET(fte_match_set_lyr_2_4, headers_c,
++ dst_ipv4_dst_ipv6.ipv4_layout.ipv4,
++ ntohl(match.mask->dst));
++ MLX5_SET(fte_match_set_lyr_2_4, headers_v,
++ dst_ipv4_dst_ipv6.ipv4_layout.ipv4,
++ ntohl(match.key->dst));
++
++ MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, headers_c,
++ ethertype);
++ MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype,
++ ETH_P_IP);
++ } else if (addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) {
++ struct flow_match_ipv6_addrs match;
++
++ flow_rule_match_enc_ipv6_addrs(rule, &match);
++ memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c,
++ src_ipv4_src_ipv6.ipv6_layout.ipv6),
++ &match.mask->src, MLX5_FLD_SZ_BYTES(ipv6_layout,
++ ipv6));
++ memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
++ src_ipv4_src_ipv6.ipv6_layout.ipv6),
++ &match.key->src, MLX5_FLD_SZ_BYTES(ipv6_layout,
++ ipv6));
++
++ memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c,
++ dst_ipv4_dst_ipv6.ipv6_layout.ipv6),
++ &match.mask->dst, MLX5_FLD_SZ_BYTES(ipv6_layout,
++ ipv6));
++ memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
++ dst_ipv4_dst_ipv6.ipv6_layout.ipv6),
++ &match.key->dst, MLX5_FLD_SZ_BYTES(ipv6_layout,
++ ipv6));
++
++ MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, headers_c,
++ ethertype);
++ MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype,
++ ETH_P_IPV6);
++ }
+ }
+
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_IP)) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index b15b27a497fc..fda4964c5cf4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -1554,6 +1554,7 @@ static const struct pci_device_id mlx5_core_pci_table[] = {
+ { PCI_VDEVICE(MELLANOX, 0x101e), MLX5_PCI_DEV_IS_VF}, /* ConnectX Family mlx5Gen Virtual Function */
+ { PCI_VDEVICE(MELLANOX, 0xa2d2) }, /* BlueField integrated ConnectX-5 network controller */
+ { PCI_VDEVICE(MELLANOX, 0xa2d3), MLX5_PCI_DEV_IS_VF}, /* BlueField integrated ConnectX-5 network controller VF */
++ { PCI_VDEVICE(MELLANOX, 0xa2d6) }, /* BlueField-2 integrated ConnectX-6 Dx network controller */
+ { 0, }
+ };
+
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.c b/drivers/net/ethernet/netronome/nfp/flower/main.c
+index eb846133943b..acb02e1513f2 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/main.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/main.c
+@@ -400,6 +400,7 @@ nfp_flower_spawn_vnic_reprs(struct nfp_app *app,
+ repr_priv = kzalloc(sizeof(*repr_priv), GFP_KERNEL);
+ if (!repr_priv) {
+ err = -ENOMEM;
++ nfp_repr_free(repr);
+ goto err_reprs_clean;
+ }
+
+@@ -413,6 +414,7 @@ nfp_flower_spawn_vnic_reprs(struct nfp_app *app,
+ port = nfp_port_alloc(app, port_type, repr);
+ if (IS_ERR(port)) {
+ err = PTR_ERR(port);
++ kfree(repr_priv);
+ nfp_repr_free(repr);
+ goto err_reprs_clean;
+ }
+@@ -433,6 +435,7 @@ nfp_flower_spawn_vnic_reprs(struct nfp_app *app,
+ err = nfp_repr_init(app, repr,
+ port_id, port, priv->nn->dp.netdev);
+ if (err) {
++ kfree(repr_priv);
+ nfp_port_free(port);
+ nfp_repr_free(repr);
+ goto err_reprs_clean;
+@@ -515,6 +518,7 @@ nfp_flower_spawn_phy_reprs(struct nfp_app *app, struct nfp_flower_priv *priv)
+ repr_priv = kzalloc(sizeof(*repr_priv), GFP_KERNEL);
+ if (!repr_priv) {
+ err = -ENOMEM;
++ nfp_repr_free(repr);
+ goto err_reprs_clean;
+ }
+
+@@ -525,11 +529,13 @@ nfp_flower_spawn_phy_reprs(struct nfp_app *app, struct nfp_flower_priv *priv)
+ port = nfp_port_alloc(app, NFP_PORT_PHYS_PORT, repr);
+ if (IS_ERR(port)) {
+ err = PTR_ERR(port);
++ kfree(repr_priv);
+ nfp_repr_free(repr);
+ goto err_reprs_clean;
+ }
+ err = nfp_port_init_phy_port(app->pf, app, port, i);
+ if (err) {
++ kfree(repr_priv);
+ nfp_port_free(port);
+ nfp_repr_free(repr);
+ goto err_reprs_clean;
+@@ -542,6 +548,7 @@ nfp_flower_spawn_phy_reprs(struct nfp_app *app, struct nfp_flower_priv *priv)
+ err = nfp_repr_init(app, repr,
+ cmsg_port_id, port, priv->nn->dp.netdev);
+ if (err) {
++ kfree(repr_priv);
+ nfp_port_free(port);
+ nfp_repr_free(repr);
+ goto err_reprs_clean;
+diff --git a/drivers/net/ethernet/nxp/lpc_eth.c b/drivers/net/ethernet/nxp/lpc_eth.c
+index f7e11f1b0426..b0c8be127bee 100644
+--- a/drivers/net/ethernet/nxp/lpc_eth.c
++++ b/drivers/net/ethernet/nxp/lpc_eth.c
+@@ -1344,13 +1344,14 @@ static int lpc_eth_drv_probe(struct platform_device *pdev)
+ pldat->dma_buff_base_p = dma_handle;
+
+ netdev_dbg(ndev, "IO address space :%pR\n", res);
+- netdev_dbg(ndev, "IO address size :%d\n", resource_size(res));
++ netdev_dbg(ndev, "IO address size :%zd\n",
++ (size_t)resource_size(res));
+ netdev_dbg(ndev, "IO address (mapped) :0x%p\n",
+ pldat->net_base);
+ netdev_dbg(ndev, "IRQ number :%d\n", ndev->irq);
+- netdev_dbg(ndev, "DMA buffer size :%d\n", pldat->dma_buff_size);
+- netdev_dbg(ndev, "DMA buffer P address :0x%08x\n",
+- pldat->dma_buff_base_p);
++ netdev_dbg(ndev, "DMA buffer size :%zd\n", pldat->dma_buff_size);
++ netdev_dbg(ndev, "DMA buffer P address :%pad\n",
++ &pldat->dma_buff_base_p);
+ netdev_dbg(ndev, "DMA buffer V address :0x%p\n",
+ pldat->dma_buff_base_v);
+
+@@ -1397,8 +1398,8 @@ static int lpc_eth_drv_probe(struct platform_device *pdev)
+ if (ret)
+ goto err_out_unregister_netdev;
+
+- netdev_info(ndev, "LPC mac at 0x%08x irq %d\n",
+- res->start, ndev->irq);
++ netdev_info(ndev, "LPC mac at 0x%08lx irq %d\n",
++ (unsigned long)res->start, ndev->irq);
+
+ device_init_wakeup(dev, 1);
+ device_set_wakeup_enable(dev, 0);
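+
+Both lpc_eth hunks correct format strings rather than behaviour: dma_addr_t
+can be 32 or 64 bits depending on the platform, so it is printed via the
+dedicated %pad specifier (which takes a pointer to the value), and
+resource_size_t is widened by an explicit cast. A short sketch of the
+convention, assuming a mapped buffer:
+
+	dma_addr_t dma_handle;
+	size_t len;
+
+	/* %pad dereferences its argument: pass &dma_handle, not dma_handle */
+	dev_info(dev, "DMA buffer at %pad, %zu bytes\n", &dma_handle, len);
+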
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index b19ab09cb18f..5c4408bdc843 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -1532,13 +1532,15 @@ static int alloc_dma_rx_desc_resources(struct stmmac_priv *priv)
+ for (queue = 0; queue < rx_count; queue++) {
+ struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
+ struct page_pool_params pp_params = { 0 };
++ unsigned int num_pages;
+
+ rx_q->queue_index = queue;
+ rx_q->priv_data = priv;
+
+ pp_params.flags = PP_FLAG_DMA_MAP;
+ pp_params.pool_size = DMA_RX_SIZE;
+- pp_params.order = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);
++ num_pages = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);
++ pp_params.order = ilog2(num_pages);
+ pp_params.nid = dev_to_node(priv->device);
+ pp_params.dev = priv->device;
+ pp_params.dma_dir = DMA_FROM_DEVICE;
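+
+The stmmac fix matters because page_pool's order field is the log2 of the
+pages per buffer, not a page count. With 4 KiB pages and a 16 KiB DMA
+buffer, DIV_ROUND_UP(16384, 4096) = 4; fed straight into .order that asks
+for 2^4 = 16 pages (64 KiB) per buffer, while ilog2(4) = 2 yields the
+intended four pages. Roughly:
+
+	num_pages = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);	/* 4 */
+	pp_params.order = ilog2(num_pages);			/* 2, i.e. 2^2 pages */
+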
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 8f46aa1ddec0..cb7637364b40 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -1235,6 +1235,7 @@ deliver:
+ macsec_rxsa_put(rx_sa);
+ macsec_rxsc_put(rx_sc);
+
++ skb_orphan(skb);
+ ret = gro_cells_receive(&macsec->gro_cells, skb);
+ if (ret == NET_RX_SUCCESS)
+ count_rx(dev, skb->len);
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 3c8186f269f9..2fea5541c35a 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -763,6 +763,8 @@ static int ksz9031_get_features(struct phy_device *phydev)
+ * Whenever the device's Asymmetric Pause capability is set to 1,
+ * link-up may fail after a link-up to link-down transition.
+ *
++ * The errata sheet is for the ksz9031, but the ksz9021 has the same issue.
++ *
+ * Workaround:
+ * Do not enable the Asymmetric Pause capability bit.
+ */
+@@ -1076,6 +1078,7 @@ static struct phy_driver ksphy_driver[] = {
+ /* PHY_GBIT_FEATURES */
+ .driver_data = &ksz9021_type,
+ .probe = kszphy_probe,
++ .get_features = ksz9031_get_features,
+ .config_init = ksz9021_config_init,
+ .ack_interrupt = kszphy_ack_interrupt,
+ .config_intr = kszphy_config_intr,
+diff --git a/drivers/net/phy/national.c b/drivers/net/phy/national.c
+index a221dd552c3c..a5bf0874c7d8 100644
+--- a/drivers/net/phy/national.c
++++ b/drivers/net/phy/national.c
+@@ -105,14 +105,17 @@ static void ns_giga_speed_fallback(struct phy_device *phydev, int mode)
+
+ static void ns_10_base_t_hdx_loopack(struct phy_device *phydev, int disable)
+ {
++ u16 lb_dis = BIT(1);
++
+ if (disable)
+- ns_exp_write(phydev, 0x1c0, ns_exp_read(phydev, 0x1c0) | 1);
++ ns_exp_write(phydev, 0x1c0,
++ ns_exp_read(phydev, 0x1c0) | lb_dis);
+ else
+ ns_exp_write(phydev, 0x1c0,
+- ns_exp_read(phydev, 0x1c0) & 0xfffe);
++ ns_exp_read(phydev, 0x1c0) & ~lb_dis);
+
+ pr_debug("10BASE-T HDX loopback %s\n",
+- (ns_exp_read(phydev, 0x1c0) & 0x0001) ? "off" : "on");
++ (ns_exp_read(phydev, 0x1c0) & lb_dis) ? "off" : "on");
+ }
+
+ static int ns_config_init(struct phy_device *phydev)
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
+index a30e41a56085..9a1b006904a7 100644
+--- a/drivers/net/ppp/ppp_generic.c
++++ b/drivers/net/ppp/ppp_generic.c
+@@ -1415,6 +1415,8 @@ static void __ppp_xmit_process(struct ppp *ppp, struct sk_buff *skb)
+ netif_wake_queue(ppp->dev);
+ else
+ netif_stop_queue(ppp->dev);
++ } else {
++ kfree_skb(skb);
+ }
+ ppp_xmit_unlock(ppp);
+ }
+diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
+index 50c05d0f44cb..00cab3f43a4c 100644
+--- a/drivers/net/usb/cdc_ncm.c
++++ b/drivers/net/usb/cdc_ncm.c
+@@ -681,8 +681,12 @@ cdc_ncm_find_endpoints(struct usbnet *dev, struct usb_interface *intf)
+ u8 ep;
+
+ for (ep = 0; ep < intf->cur_altsetting->desc.bNumEndpoints; ep++) {
+-
+ e = intf->cur_altsetting->endpoint + ep;
++
++ /* ignore endpoints which cannot transfer data */
++ if (!usb_endpoint_maxp(&e->desc))
++ continue;
++
+ switch (e->desc.bmAttributes & USB_ENDPOINT_XFERTYPE_MASK) {
+ case USB_ENDPOINT_XFER_INT:
+ if (usb_endpoint_dir_in(&e->desc)) {
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index 72514c46b478..ef1d667b0108 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -100,6 +100,11 @@ int usbnet_get_endpoints(struct usbnet *dev, struct usb_interface *intf)
+ int intr = 0;
+
+ e = alt->endpoint + ep;
++
++ /* ignore endpoints which cannot transfer data */
++ if (!usb_endpoint_maxp(&e->desc))
++ continue;
++
+ switch (e->desc.bmAttributes) {
+ case USB_ENDPOINT_XFER_INT:
+ if (!usb_endpoint_dir_in(&e->desc))
+@@ -339,6 +344,8 @@ void usbnet_update_max_qlen(struct usbnet *dev)
+ {
+ enum usb_device_speed speed = dev->udev->speed;
+
++ if (!dev->rx_urb_size || !dev->hard_mtu)
++ goto insanity;
+ switch (speed) {
+ case USB_SPEED_HIGH:
+ dev->rx_qlen = MAX_QUEUE_MEMORY / dev->rx_urb_size;
+@@ -355,6 +362,7 @@ void usbnet_update_max_qlen(struct usbnet *dev)
+ dev->tx_qlen = 5 * MAX_QUEUE_MEMORY / dev->hard_mtu;
+ break;
+ default:
++insanity:
+ dev->rx_qlen = dev->tx_qlen = 4;
+ }
+ }
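+
+The cdc_ncm and usbnet hunks harden against malformed descriptors: an
+endpoint whose wMaxPacketSize is zero can never move data, and a zero
+rx_urb_size or hard_mtu would divide by zero when sizing the queues.
+Condensed, the added validation looks like this:
+
+	/* ignore endpoints which cannot transfer data */
+	if (!usb_endpoint_maxp(&e->desc))
+		continue;
+
+	/* never divide by an unvalidated, device-derived size */
+	if (!dev->rx_urb_size || !dev->hard_mtu)
+		goto insanity;	/* fall back to a fixed qlen of 4 */
+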
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 6e84328bdd40..a4b38a980c3c 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -1154,7 +1154,8 @@ static int vrf_fib_rule(const struct net_device *dev, __u8 family, bool add_it)
+ struct sk_buff *skb;
+ int err;
+
+- if (family == AF_INET6 && !ipv6_mod_enabled())
++ if ((family == AF_INET6 || family == RTNL_FAMILY_IP6MR) &&
++ !ipv6_mod_enabled())
+ return 0;
+
+ skb = nlmsg_new(vrf_fib_rule_nl_size(), GFP_KERNEL);
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+index 2985bb17decd..4d5d10c01064 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+@@ -841,7 +841,7 @@ static int ath10k_wmi_tlv_op_pull_ch_info_ev(struct ath10k *ar,
+ struct wmi_ch_info_ev_arg *arg)
+ {
+ const void **tb;
+- const struct wmi_chan_info_event *ev;
++ const struct wmi_tlv_chan_info_event *ev;
+ int ret;
+
+ tb = ath10k_wmi_tlv_parse_alloc(ar, skb->data, skb->len, GFP_ATOMIC);
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.h b/drivers/net/wireless/ath/ath10k/wmi-tlv.h
+index d691f06e58f2..649b229a41e9 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.h
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.h
+@@ -1615,6 +1615,22 @@ struct chan_info_params {
+
+ #define WMI_TLV_FLAG_MGMT_BUNDLE_TX_COMPL BIT(9)
+
++struct wmi_tlv_chan_info_event {
++ __le32 err_code;
++ __le32 freq;
++ __le32 cmd_flags;
++ __le32 noise_floor;
++ __le32 rx_clear_count;
++ __le32 cycle_count;
++ __le32 chan_tx_pwr_range;
++ __le32 chan_tx_pwr_tp;
++ __le32 rx_frame_count;
++ __le32 my_bss_rx_cycle_count;
++ __le32 rx_11b_mode_data_duration;
++ __le32 tx_frame_cnt;
++ __le32 mac_clk_mhz;
++} __packed;
++
+ struct wmi_tlv_mgmt_tx_compl_ev {
+ __le32 desc_id;
+ __le32 status;
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.h b/drivers/net/wireless/ath/ath10k/wmi.h
+index 838768c98adc..e80dbe7e8f4c 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.h
++++ b/drivers/net/wireless/ath/ath10k/wmi.h
+@@ -6533,14 +6533,6 @@ struct wmi_chan_info_event {
+ __le32 noise_floor;
+ __le32 rx_clear_count;
+ __le32 cycle_count;
+- __le32 chan_tx_pwr_range;
+- __le32 chan_tx_pwr_tp;
+- __le32 rx_frame_count;
+- __le32 my_bss_rx_cycle_count;
+- __le32 rx_11b_mode_data_duration;
+- __le32 tx_frame_cnt;
+- __le32 mac_clk_mhz;
+-
+ } __packed;
+
+ struct wmi_10_4_chan_info_event {
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index 5de54d1559dd..8b0b464a1f21 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -887,11 +887,13 @@ static bool iwl_mvm_sar_geo_support(struct iwl_mvm *mvm)
+ * firmware versions. Unfortunately, we don't have a TLV API
+ * flag to rely on, so rely on the major version which is in
+ * the first byte of ucode_ver. This was implemented
+- * initially on version 38 and then backported to 36, 29 and
+- * 17.
++ * initially on version 38 and then backported to 29 and 17. The
++ * intention was to have it in 36 as well, but not all of the
++ * 8000 family got this feature enabled. The 8000 family is
++ * the only one using version 36, so skip this version
++ * entirely.
+ */
+ return IWL_UCODE_SERIAL(mvm->fw->ucode_ver) >= 38 ||
+- IWL_UCODE_SERIAL(mvm->fw->ucode_ver) == 36 ||
+ IWL_UCODE_SERIAL(mvm->fw->ucode_ver) == 29 ||
+ IWL_UCODE_SERIAL(mvm->fw->ucode_ver) == 17;
+ }
+diff --git a/drivers/net/wireless/marvell/libertas/if_usb.c b/drivers/net/wireless/marvell/libertas/if_usb.c
+index afac2481909b..20436a289d5c 100644
+--- a/drivers/net/wireless/marvell/libertas/if_usb.c
++++ b/drivers/net/wireless/marvell/libertas/if_usb.c
+@@ -50,7 +50,8 @@ static const struct lbs_fw_table fw_table[] = {
+ { MODEL_8388, "libertas/usb8388_v5.bin", NULL },
+ { MODEL_8388, "libertas/usb8388.bin", NULL },
+ { MODEL_8388, "usb8388.bin", NULL },
+- { MODEL_8682, "libertas/usb8682.bin", NULL }
++ { MODEL_8682, "libertas/usb8682.bin", NULL },
++ { 0, NULL, NULL }
+ };
+
+ static const struct usb_device_id if_usb_table[] = {
+diff --git a/drivers/net/wireless/mediatek/mt76/mmio.c b/drivers/net/wireless/mediatek/mt76/mmio.c
+index 38368d19aa6f..83c96a47914f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mmio.c
++++ b/drivers/net/wireless/mediatek/mt76/mmio.c
+@@ -43,7 +43,7 @@ static u32 mt76_mmio_rmw(struct mt76_dev *dev, u32 offset, u32 mask, u32 val)
+ static void mt76_mmio_copy(struct mt76_dev *dev, u32 offset, const void *data,
+ int len)
+ {
+- __iowrite32_copy(dev->mmio.regs + offset, data, len >> 2);
++ __iowrite32_copy(dev->mmio.regs + offset, data, DIV_ROUND_UP(len, 4));
+ }
+
+ static int mt76_mmio_wr_rp(struct mt76_dev *dev, u32 base,
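+
+The mmio copy fix (and the matching mt76u_copy fix further down) replaces a
+truncating shift with a rounded-up word count: for len = 10 bytes, len >> 2
+copies only 2 words and silently drops the last 2 bytes, while
+DIV_ROUND_UP(10, 4) = 3 covers the whole buffer. In short:
+
+	words = len >> 2;		/* 10 -> 2: tail bytes lost */
+	words = DIV_ROUND_UP(len, 4);	/* 10 -> 3: tail included */
+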
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+index cdad2c8dc297..b941fa4a1bcd 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+@@ -257,9 +257,8 @@ static int mt7615_driver_own(struct mt7615_dev *dev)
+
+ static int mt7615_load_patch(struct mt7615_dev *dev)
+ {
+- const struct firmware *fw;
+ const struct mt7615_patch_hdr *hdr;
+- const char *firmware = MT7615_ROM_PATCH;
++ const struct firmware *fw = NULL;
+ int len, ret, sem;
+
+ sem = mt7615_mcu_patch_sem_ctrl(dev, 1);
+@@ -273,9 +272,9 @@ static int mt7615_load_patch(struct mt7615_dev *dev)
+ return -EAGAIN;
+ }
+
+- ret = request_firmware(&fw, firmware, dev->mt76.dev);
++ ret = request_firmware(&fw, MT7615_ROM_PATCH, dev->mt76.dev);
+ if (ret)
+- return ret;
++ goto out;
+
+ if (!fw || !fw->data || fw->size < sizeof(*hdr)) {
+ dev_err(dev->mt76.dev, "Invalid firmware\n");
+@@ -339,14 +338,12 @@ static u32 gen_dl_mode(u8 feature_set, bool is_cr4)
+
+ static int mt7615_load_ram(struct mt7615_dev *dev)
+ {
+- const struct firmware *fw;
+ const struct mt7615_fw_trailer *hdr;
+- const char *n9_firmware = MT7615_FIRMWARE_N9;
+- const char *cr4_firmware = MT7615_FIRMWARE_CR4;
+ u32 n9_ilm_addr, offset;
+ int i, ret;
++ const struct firmware *fw;
+
+- ret = request_firmware(&fw, n9_firmware, dev->mt76.dev);
++ ret = request_firmware(&fw, MT7615_FIRMWARE_N9, dev->mt76.dev);
+ if (ret)
+ return ret;
+
+@@ -394,7 +391,7 @@ static int mt7615_load_ram(struct mt7615_dev *dev)
+
+ release_firmware(fw);
+
+- ret = request_firmware(&fw, cr4_firmware, dev->mt76.dev);
++ ret = request_firmware(&fw, MT7615_FIRMWARE_CR4, dev->mt76.dev);
+ if (ret)
+ return ret;
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h b/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
+index f02ffcffe637..f83615dbe1c5 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
+@@ -25,9 +25,9 @@
+ #define MT7615_RX_RING_SIZE 1024
+ #define MT7615_RX_MCU_RING_SIZE 512
+
+-#define MT7615_FIRMWARE_CR4 "mt7615_cr4.bin"
+-#define MT7615_FIRMWARE_N9 "mt7615_n9.bin"
+-#define MT7615_ROM_PATCH "mt7615_rom_patch.bin"
++#define MT7615_FIRMWARE_CR4 "mediatek/mt7615_cr4.bin"
++#define MT7615_FIRMWARE_N9 "mediatek/mt7615_n9.bin"
++#define MT7615_ROM_PATCH "mediatek/mt7615_rom_patch.bin"
+
+ #define MT7615_EEPROM_SIZE 1024
+ #define MT7615_TOKEN_SIZE 4096
+diff --git a/drivers/net/wireless/mediatek/mt76/usb.c b/drivers/net/wireless/mediatek/mt76/usb.c
+index fb87ce7fbdf6..185eea83aada 100644
+--- a/drivers/net/wireless/mediatek/mt76/usb.c
++++ b/drivers/net/wireless/mediatek/mt76/usb.c
+@@ -164,7 +164,7 @@ static void mt76u_copy(struct mt76_dev *dev, u32 offset,
+ int i, ret;
+
+ mutex_lock(&usb->usb_ctrl_mtx);
+- for (i = 0; i < (len / 4); i++) {
++ for (i = 0; i < DIV_ROUND_UP(len, 4); i++) {
+ put_unaligned_le32(val[i], usb->data);
+ ret = __mt76u_vendor_request(dev, MT_VEND_MULTI_WRITE,
+ USB_DIR_OUT | USB_TYPE_VENDOR,
+diff --git a/drivers/net/wireless/realtek/rtw88/pci.c b/drivers/net/wireless/realtek/rtw88/pci.c
+index 353871c27779..23dd06afef3d 100644
+--- a/drivers/net/wireless/realtek/rtw88/pci.c
++++ b/drivers/net/wireless/realtek/rtw88/pci.c
+@@ -206,6 +206,23 @@ static int rtw_pci_reset_rx_desc(struct rtw_dev *rtwdev, struct sk_buff *skb,
+ return 0;
+ }
+
++static void rtw_pci_sync_rx_desc_device(struct rtw_dev *rtwdev, dma_addr_t dma,
++ struct rtw_pci_rx_ring *rx_ring,
++ u32 idx, u32 desc_sz)
++{
++ struct device *dev = rtwdev->dev;
++ struct rtw_pci_rx_buffer_desc *buf_desc;
++ int buf_sz = RTK_PCI_RX_BUF_SIZE;
++
++ dma_sync_single_for_device(dev, dma, buf_sz, DMA_FROM_DEVICE);
++
++ buf_desc = (struct rtw_pci_rx_buffer_desc *)(rx_ring->r.head +
++ idx * desc_sz);
++ memset(buf_desc, 0, sizeof(*buf_desc));
++ buf_desc->buf_size = cpu_to_le16(RTK_PCI_RX_BUF_SIZE);
++ buf_desc->dma = cpu_to_le32(dma);
++}
++
+ static int rtw_pci_init_rx_ring(struct rtw_dev *rtwdev,
+ struct rtw_pci_rx_ring *rx_ring,
+ u8 desc_size, u32 len)
+@@ -765,6 +782,7 @@ static void rtw_pci_rx_isr(struct rtw_dev *rtwdev, struct rtw_pci *rtwpci,
+ u32 pkt_offset;
+ u32 pkt_desc_sz = chip->rx_pkt_desc_sz;
+ u32 buf_desc_sz = chip->rx_buf_desc_sz;
++ u32 new_len;
+ u8 *rx_desc;
+ dma_addr_t dma;
+
+@@ -783,8 +801,8 @@ static void rtw_pci_rx_isr(struct rtw_dev *rtwdev, struct rtw_pci *rtwpci,
+ rtw_pci_dma_check(rtwdev, ring, cur_rp);
+ skb = ring->buf[cur_rp];
+ dma = *((dma_addr_t *)skb->cb);
+- pci_unmap_single(rtwpci->pdev, dma, RTK_PCI_RX_BUF_SIZE,
+- PCI_DMA_FROMDEVICE);
++ dma_sync_single_for_cpu(rtwdev->dev, dma, RTK_PCI_RX_BUF_SIZE,
++ DMA_FROM_DEVICE);
+ rx_desc = skb->data;
+ chip->ops->query_rx_desc(rtwdev, rx_desc, &pkt_stat, &rx_status);
+
+@@ -792,40 +810,35 @@ static void rtw_pci_rx_isr(struct rtw_dev *rtwdev, struct rtw_pci *rtwpci,
+ pkt_offset = pkt_desc_sz + pkt_stat.drv_info_sz +
+ pkt_stat.shift;
+
+- if (pkt_stat.is_c2h) {
+- /* keep rx_desc, halmac needs it */
+- skb_put(skb, pkt_stat.pkt_len + pkt_offset);
++ /* allocate a new skb for this frame,
++ * discard the frame if none available
++ */
++ new_len = pkt_stat.pkt_len + pkt_offset;
++ new = dev_alloc_skb(new_len);
++ if (WARN_ONCE(!new, "rx routine starvation\n"))
++ goto next_rp;
+
+- /* pass offset for further operation */
+- *((u32 *)skb->cb) = pkt_offset;
+- skb_queue_tail(&rtwdev->c2h_queue, skb);
++ /* put the DMA data including rx_desc from phy to new skb */
++ skb_put_data(new, skb->data, new_len);
++
++ if (pkt_stat.is_c2h) {
++ /* pass rx_desc & offset for further operation */
++ *((u32 *)new->cb) = pkt_offset;
++ skb_queue_tail(&rtwdev->c2h_queue, new);
+ ieee80211_queue_work(rtwdev->hw, &rtwdev->c2h_work);
+ } else {
+- /* remove rx_desc, maybe use skb_pull? */
+- skb_put(skb, pkt_stat.pkt_len);
+- skb_reserve(skb, pkt_offset);
+-
+- /* alloc a smaller skb to mac80211 */
+- new = dev_alloc_skb(pkt_stat.pkt_len);
+- if (!new) {
+- new = skb;
+- } else {
+- skb_put_data(new, skb->data, skb->len);
+- dev_kfree_skb_any(skb);
+- }
+- /* TODO: merge into rx.c */
+- rtw_rx_stats(rtwdev, pkt_stat.vif, skb);
++ /* remove rx_desc */
++ skb_pull(new, pkt_offset);
++
++ rtw_rx_stats(rtwdev, pkt_stat.vif, new);
+ memcpy(new->cb, &rx_status, sizeof(rx_status));
+ ieee80211_rx_irqsafe(rtwdev->hw, new);
+ }
+
+- /* skb delivered to mac80211, alloc a new one in rx ring */
+- new = dev_alloc_skb(RTK_PCI_RX_BUF_SIZE);
+- if (WARN(!new, "rx routine starvation\n"))
+- return;
+-
+- ring->buf[cur_rp] = new;
+- rtw_pci_reset_rx_desc(rtwdev, new, ring, cur_rp, buf_desc_sz);
++next_rp:
++ /* new skb delivered to mac80211, re-enable original skb DMA */
++ rtw_pci_sync_rx_desc_device(rtwdev, dma, ring, cur_rp,
++ buf_desc_sz);
+
+ /* host read next element in ring */
+ if (++cur_rp >= ring->r.len)
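+
+The rtw88 rework swaps the unmap/reallocate-per-packet scheme for a
+persistent, pre-mapped RX buffer: the CPU claims it with
+dma_sync_single_for_cpu(), copies the frame into a fresh skb, then returns
+ownership with dma_sync_single_for_device() so the descriptor can be
+re-armed. A condensed sketch of that ownership handoff:
+
+	dma_sync_single_for_cpu(dev, dma, buf_sz, DMA_FROM_DEVICE);
+	new = dev_alloc_skb(new_len);
+	if (new)
+		skb_put_data(new, ring_skb->data, new_len);	/* copy out */
+	dma_sync_single_for_device(dev, dma, buf_sz, DMA_FROM_DEVICE);
+	/* ring_skb stays mapped; only the copy leaves the ring */
+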
+diff --git a/drivers/net/wireless/zydas/zd1211rw/zd_mac.c b/drivers/net/wireless/zydas/zd1211rw/zd_mac.c
+index da7e63fca9f5..a9999d10ae81 100644
+--- a/drivers/net/wireless/zydas/zd1211rw/zd_mac.c
++++ b/drivers/net/wireless/zydas/zd1211rw/zd_mac.c
+@@ -223,7 +223,6 @@ void zd_mac_clear(struct zd_mac *mac)
+ {
+ flush_workqueue(zd_workqueue);
+ zd_chip_clear(&mac->chip);
+- lockdep_assert_held(&mac->lock);
+ ZD_MEMCLEAR(mac, sizeof(struct zd_mac));
+ }
+
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index af831d3d15d0..30de7efef003 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -509,14 +509,16 @@ static int nvme_update_ana_state(struct nvme_ctrl *ctrl,
+
+ down_write(&ctrl->namespaces_rwsem);
+ list_for_each_entry(ns, &ctrl->namespaces, list) {
+- if (ns->head->ns_id != le32_to_cpu(desc->nsids[n]))
++ unsigned nsid = le32_to_cpu(desc->nsids[n]);
++
++ if (ns->head->ns_id < nsid)
+ continue;
+- nvme_update_ns_ana_state(desc, ns);
++ if (ns->head->ns_id == nsid)
++ nvme_update_ns_ana_state(desc, ns);
+ if (++n == nr_nsids)
+ break;
+ }
+ up_write(&ctrl->namespaces_rwsem);
+- WARN_ON_ONCE(n < nr_nsids);
+ return 0;
+ }
+
+diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
+index 4dc12ea52f23..51800a9ce9a9 100644
+--- a/drivers/nvme/target/admin-cmd.c
++++ b/drivers/nvme/target/admin-cmd.c
+@@ -81,9 +81,11 @@ static u16 nvmet_get_smart_log_nsid(struct nvmet_req *req,
+ goto out;
+
+ host_reads = part_stat_read(ns->bdev->bd_part, ios[READ]);
+- data_units_read = part_stat_read(ns->bdev->bd_part, sectors[READ]);
++ data_units_read = DIV_ROUND_UP(part_stat_read(ns->bdev->bd_part,
++ sectors[READ]), 1000);
+ host_writes = part_stat_read(ns->bdev->bd_part, ios[WRITE]);
+- data_units_written = part_stat_read(ns->bdev->bd_part, sectors[WRITE]);
++ data_units_written = DIV_ROUND_UP(part_stat_read(ns->bdev->bd_part,
++ sectors[WRITE]), 1000);
+
+ put_unaligned_le64(host_reads, &slog->host_reads[0]);
+ put_unaligned_le64(data_units_read, &slog->data_units_read[0]);
+@@ -111,11 +113,11 @@ static u16 nvmet_get_smart_log_all(struct nvmet_req *req,
+ if (!ns->bdev)
+ continue;
+ host_reads += part_stat_read(ns->bdev->bd_part, ios[READ]);
+- data_units_read +=
+- part_stat_read(ns->bdev->bd_part, sectors[READ]);
++ data_units_read += DIV_ROUND_UP(
++ part_stat_read(ns->bdev->bd_part, sectors[READ]), 1000);
+ host_writes += part_stat_read(ns->bdev->bd_part, ios[WRITE]);
+- data_units_written +=
+- part_stat_read(ns->bdev->bd_part, sectors[WRITE]);
++ data_units_written += DIV_ROUND_UP(
++ part_stat_read(ns->bdev->bd_part, sectors[WRITE]), 1000);
+
+ }
+ rcu_read_unlock();
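+
+Per the NVMe specification, the SMART log counts data units in thousands of
+512-byte units, rounded up, so the raw sector counters must be scaled:
+4300 sectors read becomes DIV_ROUND_UP(4300, 1000) = 5 data units, where
+the old code reported the raw 4300. Sketched:
+
+	u64 sectors = part_stat_read(ns->bdev->bd_part, sectors[READ]);
+	u64 data_units = DIV_ROUND_UP(sectors, 1000);	/* spec scaling */
+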
+diff --git a/drivers/parisc/dino.c b/drivers/parisc/dino.c
+index 3c730103e637..14be463e25b0 100644
+--- a/drivers/parisc/dino.c
++++ b/drivers/parisc/dino.c
+@@ -156,6 +156,15 @@ static inline struct dino_device *DINO_DEV(struct pci_hba_data *hba)
+ return container_of(hba, struct dino_device, hba);
+ }
+
++/* Check if PCI device is behind a Card-mode Dino. */
++static int pci_dev_is_behind_card_dino(struct pci_dev *dev)
++{
++ struct dino_device *dino_dev;
++
++ dino_dev = DINO_DEV(parisc_walk_tree(dev->bus->bridge));
++ return is_card_dino(&dino_dev->hba.dev->id);
++}
++
+ /*
+ * Dino Configuration Space Accessor Functions
+ */
+@@ -437,6 +446,21 @@ static void quirk_cirrus_cardbus(struct pci_dev *dev)
+ }
+ DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_CIRRUS, PCI_DEVICE_ID_CIRRUS_6832, quirk_cirrus_cardbus );
+
++#ifdef CONFIG_TULIP
++static void pci_fixup_tulip(struct pci_dev *dev)
++{
++ if (!pci_dev_is_behind_card_dino(dev))
++ return;
++ if (!(pci_resource_flags(dev, 1) & IORESOURCE_MEM))
++ return;
++ pr_warn("%s: HP HSC-PCI Cards with card-mode Dino not yet supported.\n",
++ pci_name(dev));
++ /* Disable this card by zeroing the PCI resources */
++ memset(&dev->resource[0], 0, sizeof(dev->resource[0]));
++ memset(&dev->resource[1], 0, sizeof(dev->resource[1]));
++}
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_DEC, PCI_ANY_ID, pci_fixup_tulip);
++#endif /* CONFIG_TULIP */
+
+ static void __init
+ dino_bios_init(void)
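+
+The tulip workaround uses the standard PCI quirk machinery: a fixup
+registered with DECLARE_PCI_FIXUP_FINAL runs for every matching
+vendor/device late in enumeration, and zeroed resources keep drivers from
+binding to BARs the platform cannot support. A generic sketch (predicate
+and names illustrative):
+
+	static void my_board_quirk(struct pci_dev *dev)
+	{
+		if (!device_needs_quirk(dev))	/* hypothetical check */
+			return;
+		/* hide BAR 0 so no driver claims it */
+		memset(&dev->resource[0], 0, sizeof(dev->resource[0]));
+	}
+	DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_DEC, PCI_ANY_ID, my_board_quirk);
+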
+diff --git a/drivers/platform/chrome/cros_ec_rpmsg.c b/drivers/platform/chrome/cros_ec_rpmsg.c
+index 5d3fb2abad1d..bec19d4814ab 100644
+--- a/drivers/platform/chrome/cros_ec_rpmsg.c
++++ b/drivers/platform/chrome/cros_ec_rpmsg.c
+@@ -41,6 +41,7 @@ struct cros_ec_rpmsg {
+ struct rpmsg_device *rpdev;
+ struct completion xfer_ack;
+ struct work_struct host_event_work;
++ struct rpmsg_endpoint *ept;
+ };
+
+ /**
+@@ -72,7 +73,6 @@ static int cros_ec_pkt_xfer_rpmsg(struct cros_ec_device *ec_dev,
+ struct cros_ec_command *ec_msg)
+ {
+ struct cros_ec_rpmsg *ec_rpmsg = ec_dev->priv;
+- struct rpmsg_device *rpdev = ec_rpmsg->rpdev;
+ struct ec_host_response *response;
+ unsigned long timeout;
+ int len;
+@@ -85,7 +85,7 @@ static int cros_ec_pkt_xfer_rpmsg(struct cros_ec_device *ec_dev,
+ dev_dbg(ec_dev->dev, "prepared, len=%d\n", len);
+
+ reinit_completion(&ec_rpmsg->xfer_ack);
+- ret = rpmsg_send(rpdev->ept, ec_dev->dout, len);
++ ret = rpmsg_send(ec_rpmsg->ept, ec_dev->dout, len);
+ if (ret) {
+ dev_err(ec_dev->dev, "rpmsg send failed\n");
+ return ret;
+@@ -196,11 +196,24 @@ static int cros_ec_rpmsg_callback(struct rpmsg_device *rpdev, void *data,
+ return 0;
+ }
+
++static struct rpmsg_endpoint *
++cros_ec_rpmsg_create_ept(struct rpmsg_device *rpdev)
++{
++ struct rpmsg_channel_info chinfo = {};
++
++ strscpy(chinfo.name, rpdev->id.name, RPMSG_NAME_SIZE);
++ chinfo.src = rpdev->src;
++ chinfo.dst = RPMSG_ADDR_ANY;
++
++ return rpmsg_create_ept(rpdev, cros_ec_rpmsg_callback, NULL, chinfo);
++}
++
+ static int cros_ec_rpmsg_probe(struct rpmsg_device *rpdev)
+ {
+ struct device *dev = &rpdev->dev;
+ struct cros_ec_rpmsg *ec_rpmsg;
+ struct cros_ec_device *ec_dev;
++ int ret;
+
+ ec_dev = devm_kzalloc(dev, sizeof(*ec_dev), GFP_KERNEL);
+ if (!ec_dev)
+@@ -225,7 +238,18 @@ static int cros_ec_rpmsg_probe(struct rpmsg_device *rpdev)
+ INIT_WORK(&ec_rpmsg->host_event_work,
+ cros_ec_rpmsg_host_event_function);
+
+- return cros_ec_register(ec_dev);
++ ec_rpmsg->ept = cros_ec_rpmsg_create_ept(rpdev);
++ if (!ec_rpmsg->ept)
++ return -ENOMEM;
++
++ ret = cros_ec_register(ec_dev);
++ if (ret < 0) {
++ rpmsg_destroy_ept(ec_rpmsg->ept);
++ cancel_work_sync(&ec_rpmsg->host_event_work);
++ return ret;
++ }
++
++ return 0;
+ }
+
+ static void cros_ec_rpmsg_remove(struct rpmsg_device *rpdev)
+@@ -233,6 +257,7 @@ static void cros_ec_rpmsg_remove(struct rpmsg_device *rpdev)
+ struct cros_ec_device *ec_dev = dev_get_drvdata(&rpdev->dev);
+ struct cros_ec_rpmsg *ec_rpmsg = ec_dev->priv;
+
++ rpmsg_destroy_ept(ec_rpmsg->ept);
+ cancel_work_sync(&ec_rpmsg->host_event_work);
+ }
+
+@@ -249,7 +274,6 @@ static struct rpmsg_driver cros_ec_driver_rpmsg = {
+ },
+ .probe = cros_ec_rpmsg_probe,
+ .remove = cros_ec_rpmsg_remove,
+- .callback = cros_ec_rpmsg_callback,
+ };
+
+ module_rpmsg_driver(cros_ec_driver_rpmsg);
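+
+The cros_ec change trades the implicit endpoint rpmsg creates for a
+driver-level .callback for one the driver owns outright, so both a failed
+probe and remove can tear it down deterministically. The lifecycle, in
+outline:
+
+	struct rpmsg_channel_info chinfo = {};
+
+	strscpy(chinfo.name, rpdev->id.name, RPMSG_NAME_SIZE);
+	chinfo.src = rpdev->src;
+	chinfo.dst = RPMSG_ADDR_ANY;
+	ept = rpmsg_create_ept(rpdev, my_callback, NULL, chinfo);
+	if (!ept)
+		return -ENOMEM;
+	/* ... on any later failure, and in remove: */
+	rpmsg_destroy_ept(ept);
+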
+diff --git a/drivers/platform/x86/intel_int0002_vgpio.c b/drivers/platform/x86/intel_int0002_vgpio.c
+index d9542c661ddc..9ea1a2a19f86 100644
+--- a/drivers/platform/x86/intel_int0002_vgpio.c
++++ b/drivers/platform/x86/intel_int0002_vgpio.c
+@@ -144,6 +144,7 @@ static struct irq_chip int0002_cht_irqchip = {
+ * No set_wake, on CHT the IRQ is typically shared with the ACPI SCI
+ * and we don't want to mess with the ACPI SCI irq settings.
+ */
++ .flags = IRQCHIP_SKIP_SET_WAKE,
+ };
+
+ static const struct x86_cpu_id int0002_cpu_ids[] = {
+diff --git a/drivers/platform/x86/intel_pmc_core.c b/drivers/platform/x86/intel_pmc_core.c
+index c510d0d72475..3b6b8dcc4767 100644
+--- a/drivers/platform/x86/intel_pmc_core.c
++++ b/drivers/platform/x86/intel_pmc_core.c
+@@ -878,10 +878,14 @@ static int pmc_core_probe(struct platform_device *pdev)
+ if (pmcdev->map == &spt_reg_map && !pci_dev_present(pmc_pci_ids))
+ pmcdev->map = &cnp_reg_map;
+
+- if (lpit_read_residency_count_address(&slp_s0_addr))
++ if (lpit_read_residency_count_address(&slp_s0_addr)) {
+ pmcdev->base_addr = PMC_BASE_ADDR_DEFAULT;
+- else
++
++ if (page_is_ram(PHYS_PFN(pmcdev->base_addr)))
++ return -ENODEV;
++ } else {
+ pmcdev->base_addr = slp_s0_addr - pmcdev->map->slp_s0_offset;
++ }
+
+ pmcdev->regbase = ioremap(pmcdev->base_addr,
+ pmcdev->map->regmap_length);
+diff --git a/drivers/platform/x86/intel_pmc_core_pltdrv.c b/drivers/platform/x86/intel_pmc_core_pltdrv.c
+index a8754a6db1b8..186540014c48 100644
+--- a/drivers/platform/x86/intel_pmc_core_pltdrv.c
++++ b/drivers/platform/x86/intel_pmc_core_pltdrv.c
+@@ -18,8 +18,16 @@
+ #include <asm/cpu_device_id.h>
+ #include <asm/intel-family.h>
+
++static void intel_pmc_core_release(struct device *dev)
++{
++ /* Nothing to do. */
++}
++
+ static struct platform_device pmc_core_device = {
+ .name = "intel_pmc_core",
++ .dev = {
++ .release = intel_pmc_core_release,
++ },
+ };
+
+ /*
+diff --git a/drivers/ras/Makefile b/drivers/ras/Makefile
+index ef6777e14d3d..6f0404f50107 100644
+--- a/drivers/ras/Makefile
++++ b/drivers/ras/Makefile
+@@ -1,3 +1,4 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+-obj-$(CONFIG_RAS) += ras.o debugfs.o
++obj-$(CONFIG_RAS) += ras.o
++obj-$(CONFIG_DEBUG_FS) += debugfs.o
+ obj-$(CONFIG_RAS_CEC) += cec.o
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index e0c0cf462004..1b35b8311650 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -5640,7 +5640,7 @@ static int __init regulator_init(void)
+ /* init early to allow our consumers to complete system booting */
+ core_initcall(regulator_init);
+
+-static int __init regulator_late_cleanup(struct device *dev, void *data)
++static int regulator_late_cleanup(struct device *dev, void *data)
+ {
+ struct regulator_dev *rdev = dev_to_rdev(dev);
+ const struct regulator_ops *ops = rdev->desc->ops;
+@@ -5689,17 +5689,8 @@ unlock:
+ return 0;
+ }
+
+-static int __init regulator_init_complete(void)
++static void regulator_init_complete_work_function(struct work_struct *work)
+ {
+- /*
+- * Since DT doesn't provide an idiomatic mechanism for
+- * enabling full constraints and since it's much more natural
+- * with DT to provide them just assume that a DT enabled
+- * system has full constraints.
+- */
+- if (of_have_populated_dt())
+- has_full_constraints = true;
+-
+ /*
+ * Regulators may have failed to resolve their input supplies
+ * when they were registered, either because the input supply was
+@@ -5717,6 +5708,35 @@ static int __init regulator_init_complete(void)
+ */
+ class_for_each_device(&regulator_class, NULL, NULL,
+ regulator_late_cleanup);
++}
++
++static DECLARE_DELAYED_WORK(regulator_init_complete_work,
++ regulator_init_complete_work_function);
++
++static int __init regulator_init_complete(void)
++{
++ /*
++ * Since DT doesn't provide an idiomatic mechanism for
++ * enabling full constraints and since it's much more natural
++ * with DT to provide them just assume that a DT enabled
++ * system has full constraints.
++ */
++ if (of_have_populated_dt())
++ has_full_constraints = true;
++
++ /*
++ * We punt completion for an arbitrary amount of time since
++ * systems like distros will load many drivers from userspace
++ * so consumers might not always be ready yet; this is
++ * particularly an issue with laptops where this might bounce
++ * the display off then on. Ideally we'd get a notification
++ * from userspace when this happens but we don't so just wait
++ * a bit and hope we waited long enough. It'd be better if
++ * we'd only do this on systems that need it, and a kernel
++ * command line option might be useful.
++ */
++ schedule_delayed_work(&regulator_init_complete_work,
++ msecs_to_jiffies(30000));
+
+ return 0;
+ }
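+
+The regulator change defers the late cleanup with a statically declared
+delayed work item, since distro userspace may load consumer drivers long
+after late_initcall time. The pattern in isolation:
+
+	static void late_cleanup_fn(struct work_struct *work)
+	{
+		/* disable regulators no consumer ever claimed */
+	}
+	static DECLARE_DELAYED_WORK(late_cleanup_work, late_cleanup_fn);
+
+	/* from an initcall: run ~30 s after boot */
+	schedule_delayed_work(&late_cleanup_work, msecs_to_jiffies(30000));
+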
+diff --git a/drivers/regulator/lm363x-regulator.c b/drivers/regulator/lm363x-regulator.c
+index 5647e2f97ff8..4b9f618b07e9 100644
+--- a/drivers/regulator/lm363x-regulator.c
++++ b/drivers/regulator/lm363x-regulator.c
+@@ -30,13 +30,13 @@
+
+ /* LM3632 */
+ #define LM3632_BOOST_VSEL_MAX 0x26
+-#define LM3632_LDO_VSEL_MAX 0x29
++#define LM3632_LDO_VSEL_MAX 0x28
+ #define LM3632_VBOOST_MIN 4500000
+ #define LM3632_VLDO_MIN 4000000
+
+ /* LM36274 */
+ #define LM36274_BOOST_VSEL_MAX 0x3f
+-#define LM36274_LDO_VSEL_MAX 0x34
++#define LM36274_LDO_VSEL_MAX 0x32
+ #define LM36274_VOLTAGE_MIN 4000000
+
+ /* Common */
+@@ -226,7 +226,7 @@ static const struct regulator_desc lm363x_regulator_desc[] = {
+ .of_match = "vboost",
+ .id = LM36274_BOOST,
+ .ops = &lm363x_boost_voltage_table_ops,
+- .n_voltages = LM36274_BOOST_VSEL_MAX,
++ .n_voltages = LM36274_BOOST_VSEL_MAX + 1,
+ .min_uV = LM36274_VOLTAGE_MIN,
+ .uV_step = LM363X_STEP_50mV,
+ .type = REGULATOR_VOLTAGE,
+@@ -239,7 +239,7 @@ static const struct regulator_desc lm363x_regulator_desc[] = {
+ .of_match = "vpos",
+ .id = LM36274_LDO_POS,
+ .ops = &lm363x_regulator_voltage_table_ops,
+- .n_voltages = LM36274_LDO_VSEL_MAX,
++ .n_voltages = LM36274_LDO_VSEL_MAX + 1,
+ .min_uV = LM36274_VOLTAGE_MIN,
+ .uV_step = LM363X_STEP_50mV,
+ .type = REGULATOR_VOLTAGE,
+@@ -254,7 +254,7 @@ static const struct regulator_desc lm363x_regulator_desc[] = {
+ .of_match = "vneg",
+ .id = LM36274_LDO_NEG,
+ .ops = &lm363x_regulator_voltage_table_ops,
+- .n_voltages = LM36274_LDO_VSEL_MAX,
++ .n_voltages = LM36274_LDO_VSEL_MAX + 1,
+ .min_uV = LM36274_VOLTAGE_MIN,
+ .uV_step = LM363X_STEP_50mV,
+ .type = REGULATOR_VOLTAGE,
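+
+All three lm363x hunks fix the same off-by-one: voltage selectors are
+zero-based, so a part whose highest valid selector is VSEL_MAX exposes
+VSEL_MAX + 1 voltages. With a 4.0 V floor and 50 mV steps, selector 0x28
+(40) is 4000000 + 40 * 50000 = 6000000 uV, and n_voltages must be 41 for
+that top step to be selectable:
+
+	.n_voltages = LM3632_LDO_VSEL_MAX + 1,	/* selectors 0..0x28 */
+	.min_uV     = LM3632_VLDO_MIN,		/* 4000000 uV */
+	.uV_step    = LM363X_STEP_50mV,		/* 50000 uV per step */
+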
+diff --git a/drivers/scsi/device_handler/scsi_dh_rdac.c b/drivers/scsi/device_handler/scsi_dh_rdac.c
+index 65f1fe343c64..5efc959493ec 100644
+--- a/drivers/scsi/device_handler/scsi_dh_rdac.c
++++ b/drivers/scsi/device_handler/scsi_dh_rdac.c
+@@ -546,6 +546,8 @@ static void send_mode_select(struct work_struct *work)
+ spin_unlock(&ctlr->ms_lock);
+
+ retry:
++ memset(cdb, 0, sizeof(cdb));
++
+ data_size = rdac_failover_get(ctlr, &list, cdb);
+
+ RDAC_LOG(RDAC_LOG_FAILOVER, sdev, "array %s, ctlr %d, "
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index da83034d4759..afcd9a885884 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -289,8 +289,13 @@ qla2x00_async_login(struct scsi_qla_host *vha, fc_port_t *fcport,
+ struct srb_iocb *lio;
+ int rval = QLA_FUNCTION_FAILED;
+
+- if (!vha->flags.online)
+- goto done;
++ if (!vha->flags.online || (fcport->flags & FCF_ASYNC_SENT) ||
++ fcport->loop_id == FC_NO_LOOP_ID) {
++ ql_log(ql_log_warn, vha, 0xffff,
++ "%s: %8phC - not sending command.\n",
++ __func__, fcport->port_name);
++ return rval;
++ }
+
+ sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ if (!sp)
+@@ -1262,8 +1267,13 @@ int qla24xx_async_gpdb(struct scsi_qla_host *vha, fc_port_t *fcport, u8 opt)
+ struct port_database_24xx *pd;
+ struct qla_hw_data *ha = vha->hw;
+
+- if (!vha->flags.online || (fcport->flags & FCF_ASYNC_SENT))
++ if (!vha->flags.online || (fcport->flags & FCF_ASYNC_SENT) ||
++ fcport->loop_id == FC_NO_LOOP_ID) {
++ ql_log(ql_log_warn, vha, 0xffff,
++ "%s: %8phC - not sending command.\n",
++ __func__, fcport->port_name);
+ return rval;
++ }
+
+ fcport->disc_state = DSC_GPDB;
+
+@@ -1953,8 +1963,11 @@ qla24xx_handle_plogi_done_event(struct scsi_qla_host *vha, struct event_arg *ea)
+ return;
+ }
+
+- if (fcport->disc_state == DSC_DELETE_PEND)
++ if ((fcport->disc_state == DSC_DELETE_PEND) ||
++ (fcport->disc_state == DSC_DELETED)) {
++ set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
+ return;
++ }
+
+ if (ea->sp->gen2 != fcport->login_gen) {
+ /* target side must have changed it. */
+@@ -6698,8 +6711,10 @@ qla2x00_abort_isp_cleanup(scsi_qla_host_t *vha)
+ }
+
+ /* Clear all async request states across all VPs. */
+- list_for_each_entry(fcport, &vha->vp_fcports, list)
++ list_for_each_entry(fcport, &vha->vp_fcports, list) {
+ fcport->flags &= ~(FCF_LOGIN_NEEDED | FCF_ASYNC_SENT);
++ fcport->scan_state = 0;
++ }
+ spin_lock_irqsave(&ha->vport_slock, flags);
+ list_for_each_entry(vp, &ha->vp_list, list) {
+ atomic_inc(&vp->vref_count);
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 98e60a34afd9..4fda308c3ef5 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -5086,6 +5086,7 @@ void qla24xx_create_new_sess(struct scsi_qla_host *vha, struct qla_work_evt *e)
+ if (fcport) {
+ fcport->id_changed = 1;
+ fcport->scan_state = QLA_FCPORT_FOUND;
++ fcport->chip_reset = vha->hw->base_qpair->chip_reset;
+ memcpy(fcport->node_name, e->u.new_sess.node_name, WWN_SIZE);
+
+ if (pla) {
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index 1c1f63be6eed..459c28aa3b94 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -1209,7 +1209,6 @@ static void qla24xx_chk_fcp_state(struct fc_port *sess)
+ sess->logout_on_delete = 0;
+ sess->logo_ack_needed = 0;
+ sess->fw_login_state = DSC_LS_PORT_UNAVAIL;
+- sess->scan_state = 0;
+ }
+ }
+
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 11e64b50497f..4e88d7e9cf9a 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -1089,6 +1089,18 @@ static void scsi_initialize_rq(struct request *rq)
+ cmd->retries = 0;
+ }
+
++/*
+ * Only called when the request isn't completed by SCSI and isn't freed
+ * by SCSI.
++ */
++static void scsi_cleanup_rq(struct request *rq)
++{
++ if (rq->rq_flags & RQF_DONTPREP) {
++ scsi_mq_uninit_cmd(blk_mq_rq_to_pdu(rq));
++ rq->rq_flags &= ~RQF_DONTPREP;
++ }
++}
++
+ /* Add a command to the list used by the aacraid and dpt_i2o drivers */
+ void scsi_add_cmd_to_list(struct scsi_cmnd *cmd)
+ {
+@@ -1821,6 +1833,7 @@ static const struct blk_mq_ops scsi_mq_ops = {
+ .init_request = scsi_mq_init_request,
+ .exit_request = scsi_mq_exit_request,
+ .initialize_rq_fn = scsi_initialize_rq,
++ .cleanup_rq = scsi_cleanup_rq,
+ .busy = scsi_mq_lld_busy,
+ .map_queues = scsi_map_queues,
+ };
+diff --git a/drivers/soc/amlogic/meson-clk-measure.c b/drivers/soc/amlogic/meson-clk-measure.c
+index 19d4cbc93a17..c470e24f1dfa 100644
+--- a/drivers/soc/amlogic/meson-clk-measure.c
++++ b/drivers/soc/amlogic/meson-clk-measure.c
+@@ -11,6 +11,8 @@
+ #include <linux/debugfs.h>
+ #include <linux/regmap.h>
+
++static DEFINE_MUTEX(measure_lock);
++
+ #define MSR_CLK_DUTY 0x0
+ #define MSR_CLK_REG0 0x4
+ #define MSR_CLK_REG1 0x8
+@@ -360,6 +362,10 @@ static int meson_measure_id(struct meson_msr_id *clk_msr_id,
+ unsigned int val;
+ int ret;
+
++ ret = mutex_lock_interruptible(&measure_lock);
++ if (ret)
++ return ret;
++
+ regmap_write(priv->regmap, MSR_CLK_REG0, 0);
+
+ /* Set measurement duration */
+@@ -377,8 +383,10 @@ static int meson_measure_id(struct meson_msr_id *clk_msr_id,
+
+ ret = regmap_read_poll_timeout(priv->regmap, MSR_CLK_REG0,
+ val, !(val & MSR_BUSY), 10, 10000);
+- if (ret)
++ if (ret) {
++ mutex_unlock(&measure_lock);
+ return ret;
++ }
+
+ /* Disable */
+ regmap_update_bits(priv->regmap, MSR_CLK_REG0, MSR_ENABLE, 0);
+@@ -386,6 +394,8 @@ static int meson_measure_id(struct meson_msr_id *clk_msr_id,
+ /* Get the value in multiple of gate time counts */
+ regmap_read(priv->regmap, MSR_CLK_REG2, &val);
+
++ mutex_unlock(&measure_lock);
++
+ if (val >= MSR_VAL_MASK)
+ return -EINVAL;
+
+diff --git a/drivers/soc/renesas/Kconfig b/drivers/soc/renesas/Kconfig
+index 2bbf49e5d441..9583c542c47f 100644
+--- a/drivers/soc/renesas/Kconfig
++++ b/drivers/soc/renesas/Kconfig
+@@ -55,6 +55,7 @@ config ARCH_EMEV2
+
+ config ARCH_R7S72100
+ bool "RZ/A1H (R7S72100)"
++ select ARM_ERRATA_754322
+ select PM
+ select PM_GENERIC_DOMAINS
+ select RENESAS_OSTM
+@@ -78,6 +79,7 @@ config ARCH_R8A73A4
+ config ARCH_R8A7740
+ bool "R-Mobile A1 (R8A77400)"
+ select ARCH_RMOBILE
++ select ARM_ERRATA_754322
+ select RENESAS_INTC_IRQPIN
+
+ config ARCH_R8A7743
+@@ -105,10 +107,12 @@ config ARCH_R8A77470
+ config ARCH_R8A7778
+ bool "R-Car M1A (R8A77781)"
+ select ARCH_RCAR_GEN1
++ select ARM_ERRATA_754322
+
+ config ARCH_R8A7779
+ bool "R-Car H1 (R8A77790)"
+ select ARCH_RCAR_GEN1
++ select ARM_ERRATA_754322
+ select HAVE_ARM_SCU if SMP
+ select HAVE_ARM_TWD if SMP
+ select SYSC_R8A7779
+@@ -152,6 +156,7 @@ config ARCH_R9A06G032
+ config ARCH_SH73A0
+ bool "SH-Mobile AG5 (R8A73A00)"
+ select ARCH_RMOBILE
++ select ARM_ERRATA_754322
+ select HAVE_ARM_SCU if SMP
+ select HAVE_ARM_TWD if SMP
+ select RENESAS_INTC_IRQPIN
+diff --git a/drivers/soc/renesas/rmobile-sysc.c b/drivers/soc/renesas/rmobile-sysc.c
+index 421ae1c887d8..54b616ad4a62 100644
+--- a/drivers/soc/renesas/rmobile-sysc.c
++++ b/drivers/soc/renesas/rmobile-sysc.c
+@@ -48,12 +48,8 @@ struct rmobile_pm_domain *to_rmobile_pd(struct generic_pm_domain *d)
+ static int rmobile_pd_power_down(struct generic_pm_domain *genpd)
+ {
+ struct rmobile_pm_domain *rmobile_pd = to_rmobile_pd(genpd);
+- unsigned int mask;
++ unsigned int mask = BIT(rmobile_pd->bit_shift);
+
+- if (rmobile_pd->bit_shift == ~0)
+- return -EBUSY;
+-
+- mask = BIT(rmobile_pd->bit_shift);
+ if (rmobile_pd->suspend) {
+ int ret = rmobile_pd->suspend();
+
+@@ -80,14 +76,10 @@ static int rmobile_pd_power_down(struct generic_pm_domain *genpd)
+
+ static int __rmobile_pd_power_up(struct rmobile_pm_domain *rmobile_pd)
+ {
+- unsigned int mask;
++ unsigned int mask = BIT(rmobile_pd->bit_shift);
+ unsigned int retry_count;
+ int ret = 0;
+
+- if (rmobile_pd->bit_shift == ~0)
+- return 0;
+-
+- mask = BIT(rmobile_pd->bit_shift);
+ if (__raw_readl(rmobile_pd->base + PSTR) & mask)
+ return ret;
+
+@@ -122,11 +114,15 @@ static void rmobile_init_pm_domain(struct rmobile_pm_domain *rmobile_pd)
+ struct dev_power_governor *gov = rmobile_pd->gov;
+
+ genpd->flags |= GENPD_FLAG_PM_CLK | GENPD_FLAG_ACTIVE_WAKEUP;
+- genpd->power_off = rmobile_pd_power_down;
+- genpd->power_on = rmobile_pd_power_up;
+- genpd->attach_dev = cpg_mstp_attach_dev;
+- genpd->detach_dev = cpg_mstp_detach_dev;
+- __rmobile_pd_power_up(rmobile_pd);
++ genpd->attach_dev = cpg_mstp_attach_dev;
++ genpd->detach_dev = cpg_mstp_detach_dev;
++
++ if (!(genpd->flags & GENPD_FLAG_ALWAYS_ON)) {
++ genpd->power_off = rmobile_pd_power_down;
++ genpd->power_on = rmobile_pd_power_up;
++ __rmobile_pd_power_up(rmobile_pd);
++ }
++
+ pm_genpd_init(genpd, gov ? : &simple_qos_governor, false);
+ }
+
+@@ -270,6 +266,11 @@ static void __init rmobile_setup_pm_domain(struct device_node *np,
+ break;
+
+ case PD_NORMAL:
++ if (pd->bit_shift == ~0) {
++ /* Top-level always-on domain */
++ pr_debug("PM domain %s is an always-on domain\n", name);
++ pd->genpd.flags |= GENPD_FLAG_ALWAYS_ON;
++ }
+ break;
+ }
+
+diff --git a/drivers/spi/spi-bcm2835.c b/drivers/spi/spi-bcm2835.c
+index 840b1b8ff3dc..dfdcebb38830 100644
+--- a/drivers/spi/spi-bcm2835.c
++++ b/drivers/spi/spi-bcm2835.c
+@@ -319,6 +319,13 @@ static void bcm2835_spi_reset_hw(struct spi_controller *ctlr)
+ BCM2835_SPI_CS_INTD |
+ BCM2835_SPI_CS_DMAEN |
+ BCM2835_SPI_CS_TA);
++ /*
++ * Transmission sometimes breaks unless the DONE bit is written at the
++ * end of every transfer. The spec says it's a RO bit. Either the
++ * spec is wrong and the bit is actually of type RW1C, or it's a
++ * hardware erratum.
++ */
++ cs |= BCM2835_SPI_CS_DONE;
+ /* and reset RX/TX FIFOS */
+ cs |= BCM2835_SPI_CS_CLEAR_RX | BCM2835_SPI_CS_CLEAR_TX;
+
+@@ -477,7 +484,9 @@ static void bcm2835_spi_transfer_prologue(struct spi_controller *ctlr,
+ bcm2835_wr_fifo_count(bs, bs->rx_prologue);
+ bcm2835_wait_tx_fifo_empty(bs);
+ bcm2835_rd_fifo_count(bs, bs->rx_prologue);
+- bcm2835_spi_reset_hw(ctlr);
++ bcm2835_wr(bs, BCM2835_SPI_CS, cs | BCM2835_SPI_CS_CLEAR_RX
++ | BCM2835_SPI_CS_CLEAR_TX
++ | BCM2835_SPI_CS_DONE);
+
+ dma_sync_single_for_device(ctlr->dma_rx->device->dev,
+ sg_dma_address(&tfr->rx_sg.sgl[0]),
+@@ -498,7 +507,8 @@ static void bcm2835_spi_transfer_prologue(struct spi_controller *ctlr,
+ | BCM2835_SPI_CS_DMAEN);
+ bcm2835_wr_fifo_count(bs, tx_remaining);
+ bcm2835_wait_tx_fifo_empty(bs);
+- bcm2835_wr(bs, BCM2835_SPI_CS, cs | BCM2835_SPI_CS_CLEAR_TX);
++ bcm2835_wr(bs, BCM2835_SPI_CS, cs | BCM2835_SPI_CS_CLEAR_TX
++ | BCM2835_SPI_CS_DONE);
+ }
+
+ if (likely(!bs->tx_spillover)) {
+diff --git a/drivers/spi/spi-dw-mmio.c b/drivers/spi/spi-dw-mmio.c
+index 18c06568805e..86789dbaf577 100644
+--- a/drivers/spi/spi-dw-mmio.c
++++ b/drivers/spi/spi-dw-mmio.c
+@@ -172,8 +172,10 @@ static int dw_spi_mmio_probe(struct platform_device *pdev)
+
+ /* Optional clock needed to access the registers */
+ dwsmmio->pclk = devm_clk_get_optional(&pdev->dev, "pclk");
+- if (IS_ERR(dwsmmio->pclk))
+- return PTR_ERR(dwsmmio->pclk);
++ if (IS_ERR(dwsmmio->pclk)) {
++ ret = PTR_ERR(dwsmmio->pclk);
++ goto out_clk;
++ }
+ ret = clk_prepare_enable(dwsmmio->pclk);
+ if (ret)
+ goto out_clk;
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index 53335ccc98f6..545fc8189fb0 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -886,9 +886,11 @@ static irqreturn_t dspi_interrupt(int irq, void *dev_id)
+ trans_mode);
+ }
+ }
++
++ return IRQ_HANDLED;
+ }
+
+- return IRQ_HANDLED;
++ return IRQ_NONE;
+ }
+
+ static const struct of_device_id fsl_dspi_dt_ids[] = {
+diff --git a/drivers/staging/erofs/zmap.c b/drivers/staging/erofs/zmap.c
+index 9c0bd65c46bf..c2359321ca13 100644
+--- a/drivers/staging/erofs/zmap.c
++++ b/drivers/staging/erofs/zmap.c
+@@ -86,12 +86,11 @@ static int fill_inode_lazy(struct inode *inode)
+
+ vi->z_physical_clusterbits[1] = vi->z_logical_clusterbits +
+ ((h->h_clusterbits >> 5) & 7);
++ set_bit(EROFS_V_Z_INITED_BIT, &vi->flags);
+ unmap_done:
+ kunmap_atomic(kaddr);
+ unlock_page(page);
+ put_page(page);
+-
+- set_bit(EROFS_V_Z_INITED_BIT, &vi->flags);
+ out_unlock:
+ clear_and_wake_up_bit(EROFS_V_BL_Z_BIT, &vi->flags);
+ return err;
+diff --git a/drivers/staging/media/hantro/hantro_drv.c b/drivers/staging/media/hantro/hantro_drv.c
+index c3665f0e87a2..46dcb46bb927 100644
+--- a/drivers/staging/media/hantro/hantro_drv.c
++++ b/drivers/staging/media/hantro/hantro_drv.c
+@@ -724,6 +724,7 @@ static int hantro_probe(struct platform_device *pdev)
+ dev_err(vpu->dev, "Could not set DMA coherent mask.\n");
+ return ret;
+ }
++ vb2_dma_contig_set_max_seg_size(&pdev->dev, DMA_BIT_MASK(32));
+
+ for (i = 0; i < vpu->variant->num_irqs; i++) {
+ const char *irq_name = vpu->variant->irqs[i].name;
+diff --git a/drivers/staging/media/imx/imx6-mipi-csi2.c b/drivers/staging/media/imx/imx6-mipi-csi2.c
+index f29e28df36ed..bfa4b254c4e4 100644
+--- a/drivers/staging/media/imx/imx6-mipi-csi2.c
++++ b/drivers/staging/media/imx/imx6-mipi-csi2.c
+@@ -243,7 +243,7 @@ static int __maybe_unused csi2_dphy_wait_ulp(struct csi2_dev *csi2)
+ }
+
+ /* Waits for low-power LP-11 state on data and clock lanes. */
+-static int csi2_dphy_wait_stopstate(struct csi2_dev *csi2)
++static void csi2_dphy_wait_stopstate(struct csi2_dev *csi2)
+ {
+ u32 mask, reg;
+ int ret;
+@@ -254,11 +254,9 @@ static int csi2_dphy_wait_stopstate(struct csi2_dev *csi2)
+ ret = readl_poll_timeout(csi2->base + CSI2_PHY_STATE, reg,
+ (reg & mask) == mask, 0, 500000);
+ if (ret) {
+- v4l2_err(&csi2->sd, "LP-11 timeout, phy_state = 0x%08x\n", reg);
+- return ret;
++ v4l2_warn(&csi2->sd, "LP-11 wait timeout, likely a sensor driver bug, expect capture failures.\n");
++ v4l2_warn(&csi2->sd, "phy_state = 0x%08x\n", reg);
+ }
+-
+- return 0;
+ }
+
+ /* Wait for active clock on the clock lane. */
+@@ -316,9 +314,7 @@ static int csi2_start(struct csi2_dev *csi2)
+ csi2_enable(csi2, true);
+
+ /* Step 5 */
+- ret = csi2_dphy_wait_stopstate(csi2);
+- if (ret)
+- goto err_assert_reset;
++ csi2_dphy_wait_stopstate(csi2);
+
+ /* Step 6 */
+ ret = v4l2_subdev_call(csi2->src_sd, video, s_stream, 1);
+diff --git a/drivers/staging/media/tegra-vde/Kconfig b/drivers/staging/media/tegra-vde/Kconfig
+index 2e7f644ae591..ba49ea50b8c0 100644
+--- a/drivers/staging/media/tegra-vde/Kconfig
++++ b/drivers/staging/media/tegra-vde/Kconfig
+@@ -3,7 +3,7 @@ config TEGRA_VDE
+ tristate "NVIDIA Tegra Video Decoder Engine driver"
+ depends on ARCH_TEGRA || COMPILE_TEST
+ select DMA_SHARED_BUFFER
+- select IOMMU_IOVA if IOMMU_SUPPORT
++ select IOMMU_IOVA if (IOMMU_SUPPORT || COMPILE_TEST)
+ select SRAM
+ help
+ Say Y here to enable support for the NVIDIA Tegra video decoder
+diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
+index 04a22663b4fb..51d97ec4f58f 100644
+--- a/drivers/video/fbdev/efifb.c
++++ b/drivers/video/fbdev/efifb.c
+@@ -122,28 +122,13 @@ static void efifb_copy_bmp(u8 *src, u32 *dst, int width, struct screen_info *si)
+ */
+ static bool efifb_bgrt_sanity_check(struct screen_info *si, u32 bmp_width)
+ {
+- static const int default_resolutions[][2] = {
+- { 800, 600 },
+- { 1024, 768 },
+- { 1280, 1024 },
+- };
+- u32 i, right_margin;
+-
+- for (i = 0; i < ARRAY_SIZE(default_resolutions); i++) {
+- if (default_resolutions[i][0] == si->lfb_width &&
+- default_resolutions[i][1] == si->lfb_height)
+- break;
+- }
+- /* If not a default resolution used for textmode, this should be fine */
+- if (i >= ARRAY_SIZE(default_resolutions))
+- return true;
+-
+- /* If the right margin is 5 times smaller then the left one, reject */
+- right_margin = si->lfb_width - (bgrt_tab.image_offset_x + bmp_width);
+- if (right_margin < (bgrt_tab.image_offset_x / 5))
+- return false;
++ /*
++ * All x86 firmwares horizontally center the image (the yoffset
++ * calculations differ between boards, but xoffset is predictable).
++ */
++ u32 expected_xoffset = (si->lfb_width - bmp_width) / 2;
+
+- return true;
++ return bgrt_tab.image_offset_x == expected_xoffset;
+ }
+ #else
+ static bool efifb_bgrt_sanity_check(struct screen_info *si, u32 bmp_width)
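+
+The rewritten BGRT check leans on one observed invariant: x86 firmware
+centres the boot logo horizontally, so for a 1024-pixel-wide framebuffer
+and a 200-pixel-wide image the only sane offset is (1024 - 200) / 2 = 412.
+Anything else means the stored image does not match the current mode:
+
+	u32 expected_xoffset = (si->lfb_width - bmp_width) / 2;
+
+	return bgrt_tab.image_offset_x == expected_xoffset;
+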
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index d4e11b2e04f6..f131651502b8 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -1141,7 +1141,8 @@ out_free_interp:
+ * (since it grows up, and may collide early with the stack
+ * growing down), and into the unused ELF_ET_DYN_BASE region.
+ */
+- if (IS_ENABLED(CONFIG_ARCH_HAS_ELF_RANDOMIZE) && !interpreter)
++ if (IS_ENABLED(CONFIG_ARCH_HAS_ELF_RANDOMIZE) &&
++ loc->elf_ex.e_type == ET_DYN && !interpreter)
+ current->mm->brk = current->mm->start_brk =
+ ELF_ET_DYN_BASE;
+
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 5df76c17775a..322ec4b839ed 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -1343,6 +1343,7 @@ get_old_root(struct btrfs_root *root, u64 time_seq)
+ struct tree_mod_elem *tm;
+ struct extent_buffer *eb = NULL;
+ struct extent_buffer *eb_root;
++ u64 eb_root_owner = 0;
+ struct extent_buffer *old;
+ struct tree_mod_root *old_root = NULL;
+ u64 old_generation = 0;
+@@ -1380,6 +1381,7 @@ get_old_root(struct btrfs_root *root, u64 time_seq)
+ free_extent_buffer(old);
+ }
+ } else if (old_root) {
++ eb_root_owner = btrfs_header_owner(eb_root);
+ btrfs_tree_read_unlock(eb_root);
+ free_extent_buffer(eb_root);
+ eb = alloc_dummy_extent_buffer(fs_info, logical);
+@@ -1396,7 +1398,7 @@ get_old_root(struct btrfs_root *root, u64 time_seq)
+ if (old_root) {
+ btrfs_set_header_bytenr(eb, eb->start);
+ btrfs_set_header_backref_rev(eb, BTRFS_MIXED_BACKREF_REV);
+- btrfs_set_header_owner(eb, btrfs_header_owner(eb_root));
++ btrfs_set_header_owner(eb, eb_root_owner);
+ btrfs_set_header_level(eb, old_root->level);
+ btrfs_set_header_generation(eb, old_generation);
+ }
+@@ -5475,6 +5477,7 @@ int btrfs_compare_trees(struct btrfs_root *left_root,
+ advance_left = advance_right = 0;
+
+ while (1) {
++ cond_resched();
+ if (advance_left && !left_end_reached) {
+ ret = tree_advance(left_path, &left_level,
+ left_root_level,
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 94660063a162..d9541d58ce3d 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -43,6 +43,7 @@ extern struct kmem_cache *btrfs_trans_handle_cachep;
+ extern struct kmem_cache *btrfs_bit_radix_cachep;
+ extern struct kmem_cache *btrfs_path_cachep;
+ extern struct kmem_cache *btrfs_free_space_cachep;
++extern struct kmem_cache *btrfs_free_space_bitmap_cachep;
+ struct btrfs_ordered_sum;
+ struct btrfs_ref;
+
+diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
+index 43fdb2992956..6858a05606dd 100644
+--- a/fs/btrfs/delayed-inode.c
++++ b/fs/btrfs/delayed-inode.c
+@@ -474,6 +474,9 @@ static void __btrfs_remove_delayed_item(struct btrfs_delayed_item *delayed_item)
+ struct rb_root_cached *root;
+ struct btrfs_delayed_root *delayed_root;
+
++ /* Not associated with any delayed_node */
++ if (!delayed_item->delayed_node)
++ return;
+ delayed_root = delayed_item->delayed_node->root->fs_info->delayed_root;
+
+ BUG_ON(!delayed_root);
+@@ -1525,7 +1528,12 @@ int btrfs_delete_delayed_dir_index(struct btrfs_trans_handle *trans,
+ * we have reserved enough space when we start a new transaction,
+ * so reserving metadata failure is impossible.
+ */
+- BUG_ON(ret);
++ if (ret < 0) {
++ btrfs_err(trans->fs_info,
++"metadata reservation failed for delayed dir item deltiona, should have been reserved");
++ btrfs_release_delayed_item(item);
++ goto end;
++ }
+
+ mutex_lock(&node->mutex);
+ ret = __btrfs_add_delayed_deletion_item(node, item);
+@@ -1534,7 +1542,8 @@ int btrfs_delete_delayed_dir_index(struct btrfs_trans_handle *trans,
+ "err add delayed dir index item(index: %llu) into the deletion tree of the delayed node(root id: %llu, inode id: %llu, errno: %d)",
+ index, node->root->root_key.objectid,
+ node->inode_id, ret);
+- BUG();
++ btrfs_delayed_item_release_metadata(dir->root, item);
++ btrfs_release_delayed_item(item);
+ }
+ mutex_unlock(&node->mutex);
+ end:
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 97beb351a10c..65af7eb3f7bd 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -416,6 +416,16 @@ int btrfs_verify_level_key(struct extent_buffer *eb, int level,
+ */
+ if (btrfs_header_generation(eb) > fs_info->last_trans_committed)
+ return 0;
++
++ /* We have @first_key, so this @eb must have at least one item */
++ if (btrfs_header_nritems(eb) == 0) {
++ btrfs_err(fs_info,
++ "invalid tree nritems, bytenr=%llu nritems=0 expect >0",
++ eb->start);
++ WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
++ return -EUCLEAN;
++ }
++
+ if (found_level)
+ btrfs_node_key_to_cpu(eb, &found_key, 0);
+ else
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 8b7eb22d508a..ef2f80825c82 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -5751,6 +5751,14 @@ search:
+ */
+ if ((flags & extra) && !(block_group->flags & extra))
+ goto loop;
++
++ /*
++ * This block group has different flags than we want.
++ * It's possible that we have MIXED_GROUP flag but no
++ * block group is mixed. Just skip such block group.
++ */
++ btrfs_release_block_group(block_group, delalloc);
++ continue;
+ }
+
+ have_block_group:
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index eeb75281894e..3e0c8fcb658f 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -3745,11 +3745,20 @@ err_unlock:
+ static void set_btree_ioerr(struct page *page)
+ {
+ struct extent_buffer *eb = (struct extent_buffer *)page->private;
++ struct btrfs_fs_info *fs_info;
+
+ SetPageError(page);
+ if (test_and_set_bit(EXTENT_BUFFER_WRITE_ERR, &eb->bflags))
+ return;
+
++ /*
++ * If we error out, we should add back the dirty_metadata_bytes
++ * to make it consistent.
++ */
++ fs_info = eb->fs_info;
++ percpu_counter_add_batch(&fs_info->dirty_metadata_bytes,
++ eb->len, fs_info->dirty_metadata_batch);
++
+ /*
+ * If writeback for a btree extent that doesn't belong to a log tree
+ * failed, increment the counter transaction->eb_write_errors.
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index 062be9dde4c6..52ad985cc7f9 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -764,7 +764,8 @@ static int __load_free_space_cache(struct btrfs_root *root, struct inode *inode,
+ } else {
+ ASSERT(num_bitmaps);
+ num_bitmaps--;
+- e->bitmap = kzalloc(PAGE_SIZE, GFP_NOFS);
++ e->bitmap = kmem_cache_zalloc(
++ btrfs_free_space_bitmap_cachep, GFP_NOFS);
+ if (!e->bitmap) {
+ kmem_cache_free(
+ btrfs_free_space_cachep, e);
+@@ -1881,7 +1882,7 @@ static void free_bitmap(struct btrfs_free_space_ctl *ctl,
+ struct btrfs_free_space *bitmap_info)
+ {
+ unlink_free_space(ctl, bitmap_info);
+- kfree(bitmap_info->bitmap);
++ kmem_cache_free(btrfs_free_space_bitmap_cachep, bitmap_info->bitmap);
+ kmem_cache_free(btrfs_free_space_cachep, bitmap_info);
+ ctl->total_bitmaps--;
+ ctl->op->recalc_thresholds(ctl);
+@@ -2135,7 +2136,8 @@ new_bitmap:
+ }
+
+ /* allocate the bitmap */
+- info->bitmap = kzalloc(PAGE_SIZE, GFP_NOFS);
++ info->bitmap = kmem_cache_zalloc(btrfs_free_space_bitmap_cachep,
++ GFP_NOFS);
+ spin_lock(&ctl->tree_lock);
+ if (!info->bitmap) {
+ ret = -ENOMEM;
+@@ -2146,7 +2148,9 @@ new_bitmap:
+
+ out:
+ if (info) {
+- kfree(info->bitmap);
++ if (info->bitmap)
++ kmem_cache_free(btrfs_free_space_bitmap_cachep,
++ info->bitmap);
+ kmem_cache_free(btrfs_free_space_cachep, info);
+ }
+
+@@ -2802,7 +2806,8 @@ out:
+ if (entry->bytes == 0) {
+ ctl->free_extents--;
+ if (entry->bitmap) {
+- kfree(entry->bitmap);
++ kmem_cache_free(btrfs_free_space_bitmap_cachep,
++ entry->bitmap);
+ ctl->total_bitmaps--;
+ ctl->op->recalc_thresholds(ctl);
+ }
+@@ -3606,7 +3611,7 @@ again:
+ }
+
+ if (!map) {
+- map = kzalloc(PAGE_SIZE, GFP_NOFS);
++ map = kmem_cache_zalloc(btrfs_free_space_bitmap_cachep, GFP_NOFS);
+ if (!map) {
+ kmem_cache_free(btrfs_free_space_cachep, info);
+ return -ENOMEM;
+@@ -3635,7 +3640,8 @@ again:
+
+ if (info)
+ kmem_cache_free(btrfs_free_space_cachep, info);
+- kfree(map);
++ if (map)
++ kmem_cache_free(btrfs_free_space_bitmap_cachep, map);
+ return 0;
+ }
+
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index ee582a36653d..d51d9466feb0 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -74,6 +74,7 @@ static struct kmem_cache *btrfs_inode_cachep;
+ struct kmem_cache *btrfs_trans_handle_cachep;
+ struct kmem_cache *btrfs_path_cachep;
+ struct kmem_cache *btrfs_free_space_cachep;
++struct kmem_cache *btrfs_free_space_bitmap_cachep;
+
+ static int btrfs_setsize(struct inode *inode, struct iattr *attr);
+ static int btrfs_truncate(struct inode *inode, bool skip_writeback);
+@@ -9380,6 +9381,7 @@ void __cold btrfs_destroy_cachep(void)
+ kmem_cache_destroy(btrfs_trans_handle_cachep);
+ kmem_cache_destroy(btrfs_path_cachep);
+ kmem_cache_destroy(btrfs_free_space_cachep);
++ kmem_cache_destroy(btrfs_free_space_bitmap_cachep);
+ }
+
+ int __init btrfs_init_cachep(void)
+@@ -9409,6 +9411,12 @@ int __init btrfs_init_cachep(void)
+ if (!btrfs_free_space_cachep)
+ goto fail;
+
++ btrfs_free_space_bitmap_cachep = kmem_cache_create("btrfs_free_space_bitmap",
++ PAGE_SIZE, PAGE_SIZE,
++ SLAB_RED_ZONE, NULL);
++ if (!btrfs_free_space_bitmap_cachep)
++ goto fail;
++
+ return 0;
+ fail:
+ btrfs_destroy_cachep();
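+
+Moving the free-space bitmaps out of kzalloc(PAGE_SIZE) and into a
+dedicated slab cache gives btrfs page-sized, page-aligned objects with
+SLAB_RED_ZONE debugging and a separate slabinfo entry. The cache lifecycle
+is the usual triplet:
+
+	cache = kmem_cache_create("btrfs_free_space_bitmap",
+				  PAGE_SIZE, PAGE_SIZE,	/* size, align */
+				  SLAB_RED_ZONE, NULL);
+	bitmap = kmem_cache_zalloc(cache, GFP_NOFS);	/* zeroed object */
+	kmem_cache_free(cache, bitmap);
+	kmem_cache_destroy(cache);	/* teardown in btrfs_destroy_cachep() */
+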
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index f8a3c1b0a15a..001efc9ba1e7 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -3154,9 +3154,6 @@ out:
+ btrfs_free_path(path);
+
+ mutex_lock(&fs_info->qgroup_rescan_lock);
+- if (!btrfs_fs_closing(fs_info))
+- fs_info->qgroup_flags &= ~BTRFS_QGROUP_STATUS_FLAG_RESCAN;
+-
+ if (err > 0 &&
+ fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT) {
+ fs_info->qgroup_flags &= ~BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT;
+@@ -3172,16 +3169,30 @@ out:
+ trans = btrfs_start_transaction(fs_info->quota_root, 1);
+ if (IS_ERR(trans)) {
+ err = PTR_ERR(trans);
++ trans = NULL;
+ btrfs_err(fs_info,
+ "fail to start transaction for status update: %d",
+ err);
+- goto done;
+ }
+- ret = update_qgroup_status_item(trans);
+- if (ret < 0) {
+- err = ret;
+- btrfs_err(fs_info, "fail to update qgroup status: %d", err);
++
++ mutex_lock(&fs_info->qgroup_rescan_lock);
++ if (!btrfs_fs_closing(fs_info))
++ fs_info->qgroup_flags &= ~BTRFS_QGROUP_STATUS_FLAG_RESCAN;
++ if (trans) {
++ ret = update_qgroup_status_item(trans);
++ if (ret < 0) {
++ err = ret;
++ btrfs_err(fs_info, "fail to update qgroup status: %d",
++ err);
++ }
+ }
++ fs_info->qgroup_rescan_running = false;
++ complete_all(&fs_info->qgroup_rescan_completion);
++ mutex_unlock(&fs_info->qgroup_rescan_lock);
++
++ if (!trans)
++ return;
++
+ btrfs_end_transaction(trans);
+
+ if (btrfs_fs_closing(fs_info)) {
+@@ -3192,12 +3203,6 @@ out:
+ } else {
+ btrfs_err(fs_info, "qgroup scan failed with %d", err);
+ }
+-
+-done:
+- mutex_lock(&fs_info->qgroup_rescan_lock);
+- fs_info->qgroup_rescan_running = false;
+- mutex_unlock(&fs_info->qgroup_rescan_lock);
+- complete_all(&fs_info->qgroup_rescan_completion);
+ }
+
+ /*
+@@ -3425,6 +3430,9 @@ cleanup:
+ while ((unode = ulist_next(&reserved->range_changed, &uiter)))
+ clear_extent_bit(&BTRFS_I(inode)->io_tree, unode->val,
+ unode->aux, EXTENT_QGROUP_RESERVED, 0, 0, NULL);
++ /* Also free data bytes of already reserved one */
++ btrfs_qgroup_free_refroot(root->fs_info, root->root_key.objectid,
++ orig_reserved, BTRFS_QGROUP_RSV_DATA);
+ extent_changeset_release(reserved);
+ return ret;
+ }
+@@ -3469,7 +3477,7 @@ static int qgroup_free_reserved_data(struct inode *inode,
+ * EXTENT_QGROUP_RESERVED, we won't double free.
+ * So not need to rush.
+ */
+- ret = clear_record_extent_bits(&BTRFS_I(inode)->io_failure_tree,
++ ret = clear_record_extent_bits(&BTRFS_I(inode)->io_tree,
+ free_start, free_start + free_len - 1,
+ EXTENT_QGROUP_RESERVED, &changeset);
+ if (ret < 0)
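
The qgroup hunk above moves the RESCAN-flag clearing, the rescan_running update, and complete_all() into a single qgroup_rescan_lock section, so waiters always observe a consistent final state before the transaction is torn down. A minimal pthread sketch of the same publish-under-one-lock pattern, with invented names:

/*
 * Userspace sketch of the reordering in the qgroup rescan worker:
 * publish the final state and wake waiters inside one critical
 * section, then do the remaining teardown without the lock.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t rescan_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  rescan_done = PTHREAD_COND_INITIALIZER;
static int rescan_running = 1;

static void *rescan_worker(void *arg)
{
    (void)arg;
    /* ... scan work happens here, outside the lock ... */

    pthread_mutex_lock(&rescan_lock);
    rescan_running = 0;                   /* clear state and ... */
    pthread_cond_broadcast(&rescan_done); /* ... signal under the same lock */
    pthread_mutex_unlock(&rescan_lock);

    /* transaction teardown would go here, after waiters were released */
    return NULL;
}

int main(void)
{
    pthread_t t;

    pthread_create(&t, NULL, rescan_worker, NULL);

    pthread_mutex_lock(&rescan_lock);
    while (rescan_running)                /* never sees a half-updated state */
        pthread_cond_wait(&rescan_done, &rescan_lock);
    pthread_mutex_unlock(&rescan_lock);

    pthread_join(t, NULL);
    puts("rescan finished");
    return 0;
}
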
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index ccd5706199d7..9634cae1e1b1 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -821,6 +821,95 @@ static int check_inode_item(struct extent_buffer *leaf,
+ return 0;
+ }
+
++static int check_root_item(struct extent_buffer *leaf, struct btrfs_key *key,
++ int slot)
++{
++ struct btrfs_fs_info *fs_info = leaf->fs_info;
++ struct btrfs_root_item ri;
++ const u64 valid_root_flags = BTRFS_ROOT_SUBVOL_RDONLY |
++ BTRFS_ROOT_SUBVOL_DEAD;
++
++ /* No such tree id */
++ if (key->objectid == 0) {
++ generic_err(leaf, slot, "invalid root id 0");
++ return -EUCLEAN;
++ }
++
++ /*
++ * Some older kernels may create a ROOT_ITEM with a non-zero offset, so
++ * only check the offset for reloc trees, whose key->offset must be a
++ * valid tree id.
++ */
++ if (key->objectid == BTRFS_TREE_RELOC_OBJECTID && key->offset == 0) {
++ generic_err(leaf, slot, "invalid root id 0 for reloc tree");
++ return -EUCLEAN;
++ }
++
++ if (btrfs_item_size_nr(leaf, slot) != sizeof(ri)) {
++ generic_err(leaf, slot,
++ "invalid root item size, have %u expect %zu",
++ btrfs_item_size_nr(leaf, slot), sizeof(ri));
++ }
++
++ read_extent_buffer(leaf, &ri, btrfs_item_ptr_offset(leaf, slot),
++ sizeof(ri));
++
++ /* Generation related */
++ if (btrfs_root_generation(&ri) >
++ btrfs_super_generation(fs_info->super_copy) + 1) {
++ generic_err(leaf, slot,
++ "invalid root generation, have %llu expect (0, %llu]",
++ btrfs_root_generation(&ri),
++ btrfs_super_generation(fs_info->super_copy) + 1);
++ return -EUCLEAN;
++ }
++ if (btrfs_root_generation_v2(&ri) >
++ btrfs_super_generation(fs_info->super_copy) + 1) {
++ generic_err(leaf, slot,
++ "invalid root v2 generation, have %llu expect (0, %llu]",
++ btrfs_root_generation_v2(&ri),
++ btrfs_super_generation(fs_info->super_copy) + 1);
++ return -EUCLEAN;
++ }
++ if (btrfs_root_last_snapshot(&ri) >
++ btrfs_super_generation(fs_info->super_copy) + 1) {
++ generic_err(leaf, slot,
++ "invalid root last_snapshot, have %llu expect (0, %llu]",
++ btrfs_root_last_snapshot(&ri),
++ btrfs_super_generation(fs_info->super_copy) + 1);
++ return -EUCLEAN;
++ }
++
++ /* Alignment and level check */
++ if (!IS_ALIGNED(btrfs_root_bytenr(&ri), fs_info->sectorsize)) {
++ generic_err(leaf, slot,
++ "invalid root bytenr, have %llu expect to be aligned to %u",
++ btrfs_root_bytenr(&ri), fs_info->sectorsize);
++ return -EUCLEAN;
++ }
++ if (btrfs_root_level(&ri) >= BTRFS_MAX_LEVEL) {
++ generic_err(leaf, slot,
++ "invalid root level, have %u expect [0, %u]",
++ btrfs_root_level(&ri), BTRFS_MAX_LEVEL - 1);
++ return -EUCLEAN;
++ }
++ if (ri.drop_level >= BTRFS_MAX_LEVEL) {
++ generic_err(leaf, slot,
++ "invalid root level, have %u expect [0, %u]",
++ ri.drop_level, BTRFS_MAX_LEVEL - 1);
++ return -EUCLEAN;
++ }
++
++ /* Flags check */
++ if (btrfs_root_flags(&ri) & ~valid_root_flags) {
++ generic_err(leaf, slot,
++ "invalid root flags, have 0x%llx expect mask 0x%llx",
++ btrfs_root_flags(&ri), valid_root_flags);
++ return -EUCLEAN;
++ }
++ return 0;
++}
++
+ /*
+ * Common point to switch the item-specific validation.
+ */
+@@ -856,6 +945,9 @@ static int check_leaf_item(struct extent_buffer *leaf,
+ case BTRFS_INODE_ITEM_KEY:
+ ret = check_inode_item(leaf, key, slot);
+ break;
++ case BTRFS_ROOT_ITEM_KEY:
++ ret = check_root_item(leaf, key, slot);
++ break;
+ }
+ return ret;
+ }
+@@ -899,6 +991,12 @@ static int check_leaf(struct extent_buffer *leaf, bool check_item_data)
+ owner);
+ return -EUCLEAN;
+ }
++ /* Unknown tree */
++ if (owner == 0) {
++ generic_err(leaf, 0,
++ "invalid owner, root 0 is not defined");
++ return -EUCLEAN;
++ }
+ return 0;
+ }
+
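
check_root_item() above is a bounds-and-sanity filter applied to a ROOT_ITEM as it is read from disk: item size, generation ceilings, bytenr alignment, level range, and a flag whitelist, each failure returning -EUCLEAN. A stripped-down sketch of the same validate-before-use style, with an invented record layout and limits:

/*
 * Minimal sketch of defensive validation of an untrusted on-disk
 * record, in the style of check_root_item(). The struct and limits
 * are invented; the real checks live in fs/btrfs/tree-checker.c.
 */
#include <stdint.h>
#include <stdio.h>

#define EUCLEAN     117
#define MAX_LEVEL   8
#define SECTOR_SIZE 4096u
#define VALID_FLAGS (0x1ULL | 0x2ULL)   /* whitelist of known flag bits */

struct root_item {
    uint64_t generation;
    uint64_t bytenr;
    uint64_t flags;
    uint8_t  level;
};

static int check_root_item(const struct root_item *ri, uint64_t super_gen)
{
    if (ri->generation > super_gen + 1)
        return -EUCLEAN;                /* stale or corrupt generation */
    if (ri->bytenr % SECTOR_SIZE)
        return -EUCLEAN;                /* must be sector aligned */
    if (ri->level >= MAX_LEVEL)
        return -EUCLEAN;                /* out-of-range tree level */
    if (ri->flags & ~VALID_FLAGS)
        return -EUCLEAN;                /* unknown flag bits set */
    return 0;
}

int main(void)
{
    struct root_item bad = { .generation = 100, .bytenr = 12345, .level = 3 };

    /* generation check fires first: prints -117 */
    printf("check: %d\n", check_root_item(&bad, 50));
    return 0;
}
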
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index a447d3ec48d5..e821a0e97cd8 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -4072,7 +4072,13 @@ int btrfs_balance(struct btrfs_fs_info *fs_info,
+ }
+
+ num_devices = btrfs_num_devices(fs_info);
+- allowed = 0;
++
++ /*
++ * SINGLE profile on-disk has no profile bit, but in-memory we have a
++ * special bit for it, to make it easier to distinguish. Thus we need
++ * to set it manually, or balance would refuse the profile.
++ */
++ allowed = BTRFS_AVAIL_ALLOC_BIT_SINGLE;
+ for (i = 0; i < ARRAY_SIZE(btrfs_raid_array); i++)
+ if (num_devices >= btrfs_raid_array[i].devs_min)
+ allowed |= btrfs_raid_array[i].bg_flag;
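
The balance fix seeds allowed with the in-memory SINGLE bit before OR-ing in every profile the device count permits, since SINGLE has no on-disk profile bit. A toy version of the build-a-mask-then-test pattern (constants and table are illustrative only):

/*
 * Sketch of the allowed-profile mask construction in btrfs_balance():
 * seed the mask with the implicit SINGLE bit, then OR in each profile
 * whose minimum device count is satisfied. Values are illustrative.
 */
#include <stdio.h>

#define BIT_SINGLE (1u << 0)
#define BIT_RAID1  (1u << 1)
#define BIT_RAID10 (1u << 2)

struct raid_attr { unsigned bit; int devs_min; };

static const struct raid_attr raid_array[] = {
    { BIT_RAID1, 2 }, { BIT_RAID10, 4 },
};

int main(void)
{
    int num_devices = 2;
    unsigned allowed = BIT_SINGLE;  /* SINGLE has no on-disk bit; set manually */

    for (unsigned i = 0; i < sizeof(raid_array) / sizeof(raid_array[0]); i++)
        if (num_devices >= raid_array[i].devs_min)
            allowed |= raid_array[i].bit;

    printf("allowed mask: 0x%x\n", allowed);  /* 0x3 for two devices */
    return 0;
}
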
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index 3289b566463f..64e33e7bff1e 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -433,6 +433,8 @@ cifs_show_options(struct seq_file *s, struct dentry *root)
+ cifs_show_security(s, tcon->ses);
+ cifs_show_cache_flavor(s, cifs_sb);
+
++ if (tcon->no_lease)
++ seq_puts(s, ",nolease");
+ if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MULTIUSER)
+ seq_puts(s, ",multiuser");
+ else if (tcon->ses->user_name)
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index fe610e7e3670..5ef5a16c01d2 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -576,6 +576,7 @@ struct smb_vol {
+ bool noblocksnd:1;
+ bool noautotune:1;
+ bool nostrictsync:1; /* do not force expensive SMBflush on every sync */
++ bool no_lease:1; /* disable requesting leases */
+ bool fsc:1; /* enable fscache */
+ bool mfsymlinks:1; /* use Minshall+French Symlinks */
+ bool multiuser:1;
+@@ -1082,6 +1083,7 @@ struct cifs_tcon {
+ bool need_reopen_files:1; /* need to reopen tcon file handles */
+ bool use_resilient:1; /* use resilient instead of durable handles */
+ bool use_persistent:1; /* use persistent instead of durable handles */
++ bool no_lease:1; /* Do not request leases on files or directories */
+ __le32 capabilities;
+ __u32 share_flags;
+ __u32 maximal_access;
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 5299effa6f7d..8ee57d1f507f 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -74,7 +74,7 @@ enum {
+ Opt_user_xattr, Opt_nouser_xattr,
+ Opt_forceuid, Opt_noforceuid,
+ Opt_forcegid, Opt_noforcegid,
+- Opt_noblocksend, Opt_noautotune,
++ Opt_noblocksend, Opt_noautotune, Opt_nolease,
+ Opt_hard, Opt_soft, Opt_perm, Opt_noperm,
+ Opt_mapposix, Opt_nomapposix,
+ Opt_mapchars, Opt_nomapchars, Opt_sfu,
+@@ -134,6 +134,7 @@ static const match_table_t cifs_mount_option_tokens = {
+ { Opt_noforcegid, "noforcegid" },
+ { Opt_noblocksend, "noblocksend" },
+ { Opt_noautotune, "noautotune" },
++ { Opt_nolease, "nolease" },
+ { Opt_hard, "hard" },
+ { Opt_soft, "soft" },
+ { Opt_perm, "perm" },
+@@ -1713,6 +1714,9 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
+ case Opt_noautotune:
+ vol->noautotune = 1;
+ break;
++ case Opt_nolease:
++ vol->no_lease = 1;
++ break;
+ case Opt_hard:
+ vol->retry = 1;
+ break;
+@@ -3250,6 +3254,8 @@ static int match_tcon(struct cifs_tcon *tcon, struct smb_vol *volume_info)
+ return 0;
+ if (tcon->handle_timeout != volume_info->handle_timeout)
+ return 0;
++ if (tcon->no_lease != volume_info->no_lease)
++ return 0;
+ return 1;
+ }
+
+@@ -3464,6 +3470,7 @@ cifs_get_tcon(struct cifs_ses *ses, struct smb_vol *volume_info)
+ tcon->nocase = volume_info->nocase;
+ tcon->nohandlecache = volume_info->nohandlecache;
+ tcon->local_lease = volume_info->local_lease;
++ tcon->no_lease = volume_info->no_lease;
+ INIT_LIST_HEAD(&tcon->pending_opens);
+
+ spin_lock(&cifs_tcp_ses_lock);
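
The nolease plumbing above follows the usual mount-option pattern: a new token in the enum and match table, a case in the parse switch, a bit copied onto the tcon, and a comparison in match_tcon() so differing options never share a tree connection. A compact userspace sketch of that table-driven parse (simplified: plain strcmp instead of pattern matching):

/*
 * Userspace sketch of the match_table_t idiom used by
 * cifs_parse_mount_options(): enum tokens, a string table, a switch.
 * Option names mirror the patch; the parser itself is simplified.
 */
#include <stdio.h>
#include <string.h>

enum { Opt_noautotune, Opt_nolease, Opt_err };

static const struct { int token; const char *pattern; } mount_tokens[] = {
    { Opt_noautotune, "noautotune" },
    { Opt_nolease,    "nolease"    },
    { Opt_err,        NULL         },
};

struct smb_vol { unsigned noautotune:1; unsigned no_lease:1; };

static int match_token(const char *s)
{
    for (int i = 0; mount_tokens[i].pattern; i++)
        if (strcmp(s, mount_tokens[i].pattern) == 0)
            return mount_tokens[i].token;
    return Opt_err;
}

int main(void)
{
    struct smb_vol vol = { 0 };
    const char *opts[] = { "nolease", "noautotune" };

    for (unsigned i = 0; i < 2; i++) {
        switch (match_token(opts[i])) {
        case Opt_noautotune: vol.noautotune = 1; break;
        case Opt_nolease:    vol.no_lease   = 1; break;
        default: fprintf(stderr, "unknown option %s\n", opts[i]);
        }
    }
    printf("no_lease=%u noautotune=%u\n", vol.no_lease, vol.noautotune);
    return 0;
}
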
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 64a5864127be..7e8e8826c26f 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -656,6 +656,15 @@ int open_shroot(unsigned int xid, struct cifs_tcon *tcon, struct cifs_fid *pfid)
+ return 0;
+ }
+
++ /*
++ * We do not hold the lock for the open because in case
++ * SMB2_open needs to reconnect, it will end up calling
++ * cifs_mark_open_files_invalid() which takes the lock again
++ * thus causing a deadlock
++ */
++
++ mutex_unlock(&tcon->crfid.fid_mutex);
++
+ if (smb3_encryption_required(tcon))
+ flags |= CIFS_TRANSFORM_REQ;
+
+@@ -677,7 +686,7 @@ int open_shroot(unsigned int xid, struct cifs_tcon *tcon, struct cifs_fid *pfid)
+
+ rc = SMB2_open_init(tcon, &rqst[0], &oplock, &oparms, &utf16_path);
+ if (rc)
+- goto oshr_exit;
++ goto oshr_free;
+ smb2_set_next_command(tcon, &rqst[0]);
+
+ memset(&qi_iov, 0, sizeof(qi_iov));
+@@ -690,18 +699,10 @@ int open_shroot(unsigned int xid, struct cifs_tcon *tcon, struct cifs_fid *pfid)
+ sizeof(struct smb2_file_all_info) +
+ PATH_MAX * 2, 0, NULL);
+ if (rc)
+- goto oshr_exit;
++ goto oshr_free;
+
+ smb2_set_related(&rqst[1]);
+
+- /*
+- * We do not hold the lock for the open because in case
+- * SMB2_open needs to reconnect, it will end up calling
+- * cifs_mark_open_files_invalid() which takes the lock again
+- * thus causing a deadlock
+- */
+-
+- mutex_unlock(&tcon->crfid.fid_mutex);
+ rc = compound_send_recv(xid, ses, flags, 2, rqst,
+ resp_buftype, rsp_iov);
+ mutex_lock(&tcon->crfid.fid_mutex);
+@@ -742,6 +743,8 @@ int open_shroot(unsigned int xid, struct cifs_tcon *tcon, struct cifs_fid *pfid)
+ if (rc)
+ goto oshr_exit;
+
++ atomic_inc(&tcon->num_remote_opens);
++
+ o_rsp = (struct smb2_create_rsp *)rsp_iov[0].iov_base;
+ oparms.fid->persistent_fid = o_rsp->PersistentFileId;
+ oparms.fid->volatile_fid = o_rsp->VolatileFileId;
+@@ -1167,6 +1170,7 @@ smb2_set_ea(const unsigned int xid, struct cifs_tcon *tcon,
+
+ rc = compound_send_recv(xid, ses, flags, 3, rqst,
+ resp_buftype, rsp_iov);
++ /* no need to bump num_remote_opens because handle immediately closed */
+
+ sea_exit:
+ kfree(ea);
+@@ -1488,6 +1492,8 @@ smb2_ioctl_query_info(const unsigned int xid,
+ resp_buftype, rsp_iov);
+ if (rc)
+ goto iqinf_exit;
++
++ /* No need to bump num_remote_opens since handle immediately closed */
+ if (qi.flags & PASSTHRU_FSCTL) {
+ pqi = (struct smb_query_info __user *)arg;
+ io_rsp = (struct smb2_ioctl_rsp *)rsp_iov[1].iov_base;
+@@ -3295,6 +3301,11 @@ smb21_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
+ if (oplock == SMB2_OPLOCK_LEVEL_NOCHANGE)
+ return;
+
++ /* Check if the server granted an oplock rather than a lease */
++ if (oplock & SMB2_OPLOCK_LEVEL_EXCLUSIVE)
++ return smb2_set_oplock_level(cinode, oplock, epoch,
++ purge_cache);
++
+ if (oplock & SMB2_LEASE_READ_CACHING_HE) {
+ new_oplock |= CIFS_CACHE_READ_FLG;
+ strcat(message, "R");
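
The open_shroot() change releases crfid.fid_mutex before issuing the open, because a reconnect inside SMB2_open can call back into cifs_mark_open_files_invalid(), which takes the same mutex. A pthread sketch of the drop-the-lock-before-a-reentrant-call fix, with invented names:

/*
 * Sketch of the open_shroot() fix: release a non-recursive lock before
 * calling a function that can re-enter and take the same lock, then
 * reacquire it to publish the result.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t fid_mutex = PTHREAD_MUTEX_INITIALIZER;
static int cached_fid_valid;

static void mark_open_files_invalid(void)
{
    pthread_mutex_lock(&fid_mutex);     /* the re-entrant path takes the lock */
    cached_fid_valid = 0;
    pthread_mutex_unlock(&fid_mutex);
}

static int do_open(void)
{
    /*
     * On reconnect this calls back into the invalidation path; if the
     * caller still held fid_mutex here, it would self-deadlock.
     */
    mark_open_files_invalid();
    return 42;                          /* pretend file id */
}

int main(void)
{
    int fid;

    pthread_mutex_lock(&fid_mutex);
    if (cached_fid_valid) {             /* fast path under the lock */
        pthread_mutex_unlock(&fid_mutex);
        return 0;
    }
    pthread_mutex_unlock(&fid_mutex);   /* drop BEFORE the reentrant call */

    fid = do_open();

    pthread_mutex_lock(&fid_mutex);     /* retake to publish the result */
    cached_fid_valid = 1;
    pthread_mutex_unlock(&fid_mutex);
    printf("cached fid %d\n", fid);
    return 0;
}
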
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 31e4a1b0b170..0aa40129dfb5 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -2351,6 +2351,7 @@ int smb311_posix_mkdir(const unsigned int xid, struct inode *inode,
+ rqst.rq_iov = iov;
+ rqst.rq_nvec = n_iov;
+
++ /* no need to inc num_remote_opens because we close it just below */
+ trace_smb3_posix_mkdir_enter(xid, tcon->tid, ses->Suid, CREATE_NOT_FILE,
+ FILE_WRITE_ATTRIBUTES);
+ /* resource #4: response buffer */
+@@ -2458,7 +2459,7 @@ SMB2_open_init(struct cifs_tcon *tcon, struct smb_rqst *rqst, __u8 *oplock,
+ iov[1].iov_len = uni_path_len;
+ iov[1].iov_base = path;
+
+- if (!server->oplocks)
++ if ((!server->oplocks) || (tcon->no_lease))
+ *oplock = SMB2_OPLOCK_LEVEL_NONE;
+
+ if (!(server->capabilities & SMB2_GLOBAL_CAP_LEASING) ||
+diff --git a/fs/cifs/xattr.c b/fs/cifs/xattr.c
+index 9076150758d8..db4ba8f6077e 100644
+--- a/fs/cifs/xattr.c
++++ b/fs/cifs/xattr.c
+@@ -31,7 +31,7 @@
+ #include "cifs_fs_sb.h"
+ #include "cifs_unicode.h"
+
+-#define MAX_EA_VALUE_SIZE 65535
++#define MAX_EA_VALUE_SIZE CIFSMaxBufSize
+ #define CIFS_XATTR_CIFS_ACL "system.cifs_acl"
+ #define CIFS_XATTR_ATTRIB "cifs.dosattrib" /* full name: user.cifs.dosattrib */
+ #define CIFS_XATTR_CREATETIME "cifs.creationtime" /* user.cifs.creationtime */
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 92266a2da7d6..f203bf989a4c 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -3813,8 +3813,8 @@ static int ext4_convert_unwritten_extents_endio(handle_t *handle,
+ * illegal.
+ */
+ if (ee_block != map->m_lblk || ee_len > map->m_len) {
+-#ifdef EXT4_DEBUG
+- ext4_warning("Inode (%ld) finished: extent logical block %llu,"
++#ifdef CONFIG_EXT4_DEBUG
++ ext4_warning(inode->i_sb, "Inode (%ld) finished: extent logical block %llu,"
+ " len %u; IO logical block %llu, len %u",
+ inode->i_ino, (unsigned long long)ee_block, ee_len,
+ (unsigned long long)map->m_lblk, map->m_len);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 006b7a2070bf..723b0d1a3881 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -4297,6 +4297,15 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
+
+ trace_ext4_punch_hole(inode, offset, length, 0);
+
++ ext4_clear_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
++ if (ext4_has_inline_data(inode)) {
++ down_write(&EXT4_I(inode)->i_mmap_sem);
++ ret = ext4_convert_inline_data(inode);
++ up_write(&EXT4_I(inode)->i_mmap_sem);
++ if (ret)
++ return ret;
++ }
++
+ /*
+ * Write out all dirty pages to avoid race conditions
+ * Then release them.
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index ea8237513dfa..186468fba82e 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -377,7 +377,7 @@ static void queue_request(struct fuse_iqueue *fiq, struct fuse_req *req)
+ req->in.h.len = sizeof(struct fuse_in_header) +
+ len_args(req->in.numargs, (struct fuse_arg *) req->in.args);
+ list_add_tail(&req->list, &fiq->pending);
+- wake_up_locked(&fiq->waitq);
++ wake_up(&fiq->waitq);
+ kill_fasync(&fiq->fasync, SIGIO, POLL_IN);
+ }
+
+@@ -389,16 +389,16 @@ void fuse_queue_forget(struct fuse_conn *fc, struct fuse_forget_link *forget,
+ forget->forget_one.nodeid = nodeid;
+ forget->forget_one.nlookup = nlookup;
+
+- spin_lock(&fiq->waitq.lock);
++ spin_lock(&fiq->lock);
+ if (fiq->connected) {
+ fiq->forget_list_tail->next = forget;
+ fiq->forget_list_tail = forget;
+- wake_up_locked(&fiq->waitq);
++ wake_up(&fiq->waitq);
+ kill_fasync(&fiq->fasync, SIGIO, POLL_IN);
+ } else {
+ kfree(forget);
+ }
+- spin_unlock(&fiq->waitq.lock);
++ spin_unlock(&fiq->lock);
+ }
+
+ static void flush_bg_queue(struct fuse_conn *fc)
+@@ -412,10 +412,10 @@ static void flush_bg_queue(struct fuse_conn *fc)
+ req = list_first_entry(&fc->bg_queue, struct fuse_req, list);
+ list_del(&req->list);
+ fc->active_background++;
+- spin_lock(&fiq->waitq.lock);
++ spin_lock(&fiq->lock);
+ req->in.h.unique = fuse_get_unique(fiq);
+ queue_request(fiq, req);
+- spin_unlock(&fiq->waitq.lock);
++ spin_unlock(&fiq->lock);
+ }
+ }
+
+@@ -439,9 +439,9 @@ static void request_end(struct fuse_conn *fc, struct fuse_req *req)
+ * smp_mb() from queue_interrupt().
+ */
+ if (!list_empty(&req->intr_entry)) {
+- spin_lock(&fiq->waitq.lock);
++ spin_lock(&fiq->lock);
+ list_del_init(&req->intr_entry);
+- spin_unlock(&fiq->waitq.lock);
++ spin_unlock(&fiq->lock);
+ }
+ WARN_ON(test_bit(FR_PENDING, &req->flags));
+ WARN_ON(test_bit(FR_SENT, &req->flags));
+@@ -483,10 +483,10 @@ put_request:
+
+ static int queue_interrupt(struct fuse_iqueue *fiq, struct fuse_req *req)
+ {
+- spin_lock(&fiq->waitq.lock);
++ spin_lock(&fiq->lock);
+ /* Check for we've sent request to interrupt this req */
+ if (unlikely(!test_bit(FR_INTERRUPTED, &req->flags))) {
+- spin_unlock(&fiq->waitq.lock);
++ spin_unlock(&fiq->lock);
+ return -EINVAL;
+ }
+
+@@ -499,13 +499,13 @@ static int queue_interrupt(struct fuse_iqueue *fiq, struct fuse_req *req)
+ smp_mb();
+ if (test_bit(FR_FINISHED, &req->flags)) {
+ list_del_init(&req->intr_entry);
+- spin_unlock(&fiq->waitq.lock);
++ spin_unlock(&fiq->lock);
+ return 0;
+ }
+- wake_up_locked(&fiq->waitq);
++ wake_up(&fiq->waitq);
+ kill_fasync(&fiq->fasync, SIGIO, POLL_IN);
+ }
+- spin_unlock(&fiq->waitq.lock);
++ spin_unlock(&fiq->lock);
+ return 0;
+ }
+
+@@ -535,16 +535,16 @@ static void request_wait_answer(struct fuse_conn *fc, struct fuse_req *req)
+ if (!err)
+ return;
+
+- spin_lock(&fiq->waitq.lock);
++ spin_lock(&fiq->lock);
+ /* Request is not yet in userspace, bail out */
+ if (test_bit(FR_PENDING, &req->flags)) {
+ list_del(&req->list);
+- spin_unlock(&fiq->waitq.lock);
++ spin_unlock(&fiq->lock);
+ __fuse_put_request(req);
+ req->out.h.error = -EINTR;
+ return;
+ }
+- spin_unlock(&fiq->waitq.lock);
++ spin_unlock(&fiq->lock);
+ }
+
+ /*
+@@ -559,9 +559,9 @@ static void __fuse_request_send(struct fuse_conn *fc, struct fuse_req *req)
+ struct fuse_iqueue *fiq = &fc->iq;
+
+ BUG_ON(test_bit(FR_BACKGROUND, &req->flags));
+- spin_lock(&fiq->waitq.lock);
++ spin_lock(&fiq->lock);
+ if (!fiq->connected) {
+- spin_unlock(&fiq->waitq.lock);
++ spin_unlock(&fiq->lock);
+ req->out.h.error = -ENOTCONN;
+ } else {
+ req->in.h.unique = fuse_get_unique(fiq);
+@@ -569,7 +569,7 @@ static void __fuse_request_send(struct fuse_conn *fc, struct fuse_req *req)
+ /* acquire extra reference, since request is still needed
+ after request_end() */
+ __fuse_get_request(req);
+- spin_unlock(&fiq->waitq.lock);
++ spin_unlock(&fiq->lock);
+
+ request_wait_answer(fc, req);
+ /* Pairs with smp_wmb() in request_end() */
+@@ -700,12 +700,12 @@ static int fuse_request_send_notify_reply(struct fuse_conn *fc,
+
+ __clear_bit(FR_ISREPLY, &req->flags);
+ req->in.h.unique = unique;
+- spin_lock(&fiq->waitq.lock);
++ spin_lock(&fiq->lock);
+ if (fiq->connected) {
+ queue_request(fiq, req);
+ err = 0;
+ }
+- spin_unlock(&fiq->waitq.lock);
++ spin_unlock(&fiq->lock);
+
+ return err;
+ }
+@@ -1149,12 +1149,12 @@ static int request_pending(struct fuse_iqueue *fiq)
+ * Unlike other requests this is assembled on demand, without a need
+ * to allocate a separate fuse_req structure.
+ *
+- * Called with fiq->waitq.lock held, releases it
++ * Called with fiq->lock held, releases it
+ */
+ static int fuse_read_interrupt(struct fuse_iqueue *fiq,
+ struct fuse_copy_state *cs,
+ size_t nbytes, struct fuse_req *req)
+-__releases(fiq->waitq.lock)
++__releases(fiq->lock)
+ {
+ struct fuse_in_header ih;
+ struct fuse_interrupt_in arg;
+@@ -1169,7 +1169,7 @@ __releases(fiq->waitq.lock)
+ ih.unique = (req->in.h.unique | FUSE_INT_REQ_BIT);
+ arg.unique = req->in.h.unique;
+
+- spin_unlock(&fiq->waitq.lock);
++ spin_unlock(&fiq->lock);
+ if (nbytes < reqsize)
+ return -EINVAL;
+
+@@ -1206,7 +1206,7 @@ static struct fuse_forget_link *dequeue_forget(struct fuse_iqueue *fiq,
+ static int fuse_read_single_forget(struct fuse_iqueue *fiq,
+ struct fuse_copy_state *cs,
+ size_t nbytes)
+-__releases(fiq->waitq.lock)
++__releases(fiq->lock)
+ {
+ int err;
+ struct fuse_forget_link *forget = dequeue_forget(fiq, 1, NULL);
+@@ -1220,7 +1220,7 @@ __releases(fiq->waitq.lock)
+ .len = sizeof(ih) + sizeof(arg),
+ };
+
+- spin_unlock(&fiq->waitq.lock);
++ spin_unlock(&fiq->lock);
+ kfree(forget);
+ if (nbytes < ih.len)
+ return -EINVAL;
+@@ -1238,7 +1238,7 @@ __releases(fiq->waitq.lock)
+
+ static int fuse_read_batch_forget(struct fuse_iqueue *fiq,
+ struct fuse_copy_state *cs, size_t nbytes)
+-__releases(fiq->waitq.lock)
++__releases(fiq->lock)
+ {
+ int err;
+ unsigned max_forgets;
+@@ -1252,13 +1252,13 @@ __releases(fiq->waitq.lock)
+ };
+
+ if (nbytes < ih.len) {
+- spin_unlock(&fiq->waitq.lock);
++ spin_unlock(&fiq->lock);
+ return -EINVAL;
+ }
+
+ max_forgets = (nbytes - ih.len) / sizeof(struct fuse_forget_one);
+ head = dequeue_forget(fiq, max_forgets, &count);
+- spin_unlock(&fiq->waitq.lock);
++ spin_unlock(&fiq->lock);
+
+ arg.count = count;
+ ih.len += count * sizeof(struct fuse_forget_one);
+@@ -1288,7 +1288,7 @@ __releases(fiq->waitq.lock)
+ static int fuse_read_forget(struct fuse_conn *fc, struct fuse_iqueue *fiq,
+ struct fuse_copy_state *cs,
+ size_t nbytes)
+-__releases(fiq->waitq.lock)
++__releases(fiq->lock)
+ {
+ if (fc->minor < 16 || fiq->forget_list_head.next->next == NULL)
+ return fuse_read_single_forget(fiq, cs, nbytes);
+@@ -1318,16 +1318,19 @@ static ssize_t fuse_dev_do_read(struct fuse_dev *fud, struct file *file,
+ unsigned int hash;
+
+ restart:
+- spin_lock(&fiq->waitq.lock);
+- err = -EAGAIN;
+- if ((file->f_flags & O_NONBLOCK) && fiq->connected &&
+- !request_pending(fiq))
+- goto err_unlock;
++ for (;;) {
++ spin_lock(&fiq->lock);
++ if (!fiq->connected || request_pending(fiq))
++ break;
++ spin_unlock(&fiq->lock);
+
+- err = wait_event_interruptible_exclusive_locked(fiq->waitq,
++ if (file->f_flags & O_NONBLOCK)
++ return -EAGAIN;
++ err = wait_event_interruptible_exclusive(fiq->waitq,
+ !fiq->connected || request_pending(fiq));
+- if (err)
+- goto err_unlock;
++ if (err)
++ return err;
++ }
+
+ if (!fiq->connected) {
+ err = fc->aborted ? -ECONNABORTED : -ENODEV;
+@@ -1351,7 +1354,7 @@ static ssize_t fuse_dev_do_read(struct fuse_dev *fud, struct file *file,
+ req = list_entry(fiq->pending.next, struct fuse_req, list);
+ clear_bit(FR_PENDING, &req->flags);
+ list_del_init(&req->list);
+- spin_unlock(&fiq->waitq.lock);
++ spin_unlock(&fiq->lock);
+
+ in = &req->in;
+ reqsize = in->h.len;
+@@ -1409,7 +1412,7 @@ out_end:
+ return err;
+
+ err_unlock:
+- spin_unlock(&fiq->waitq.lock);
++ spin_unlock(&fiq->lock);
+ return err;
+ }
+
+@@ -2121,12 +2124,12 @@ static __poll_t fuse_dev_poll(struct file *file, poll_table *wait)
+ fiq = &fud->fc->iq;
+ poll_wait(file, &fiq->waitq, wait);
+
+- spin_lock(&fiq->waitq.lock);
++ spin_lock(&fiq->lock);
+ if (!fiq->connected)
+ mask = EPOLLERR;
+ else if (request_pending(fiq))
+ mask |= EPOLLIN | EPOLLRDNORM;
+- spin_unlock(&fiq->waitq.lock);
++ spin_unlock(&fiq->lock);
+
+ return mask;
+ }
+@@ -2221,15 +2224,15 @@ void fuse_abort_conn(struct fuse_conn *fc)
+ flush_bg_queue(fc);
+ spin_unlock(&fc->bg_lock);
+
+- spin_lock(&fiq->waitq.lock);
++ spin_lock(&fiq->lock);
+ fiq->connected = 0;
+ list_for_each_entry(req, &fiq->pending, list)
+ clear_bit(FR_PENDING, &req->flags);
+ list_splice_tail_init(&fiq->pending, &to_end);
+ while (forget_pending(fiq))
+ kfree(dequeue_forget(fiq, 1, NULL));
+- wake_up_all_locked(&fiq->waitq);
+- spin_unlock(&fiq->waitq.lock);
++ wake_up_all(&fiq->waitq);
++ spin_unlock(&fiq->lock);
+ kill_fasync(&fiq->fasync, SIGIO, POLL_IN);
+ end_polls(fc);
+ wake_up_all(&fc->blocked_waitq);
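
The fuse series above stops borrowing waitq.lock as the input-queue lock: fiq grows its own spinlock, and fuse_dev_do_read() open-codes the wait as lock, test predicate, unlock, sleep. A userspace analogue using a mutex and condition variable (pthread condvars fuse the unlock-and-sleep step):

/*
 * Userspace analogue of the reworked fuse_dev_do_read() wait loop:
 * take the queue lock, test the predicate, and only sleep with the
 * lock dropped.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  q_wait = PTHREAD_COND_INITIALIZER;
static int connected = 1, pending;

static int dequeue_one(int nonblock)
{
    pthread_mutex_lock(&q_lock);
    while (connected && !pending) {
        if (nonblock) {
            pthread_mutex_unlock(&q_lock);
            return -1;                         /* -EAGAIN analogue */
        }
        pthread_cond_wait(&q_wait, &q_lock);   /* sleeps with q_lock released */
    }
    if (pending)
        pending--;
    pthread_mutex_unlock(&q_lock);
    return 0;
}

static void *producer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&q_lock);
    pending++;
    pthread_cond_signal(&q_wait);              /* wake readers under the lock */
    pthread_mutex_unlock(&q_lock);
    return NULL;
}

int main(void)
{
    pthread_t t;

    pthread_create(&t, NULL, producer, NULL);
    printf("dequeue: %d\n", dequeue_one(0));
    pthread_join(t, NULL);
    return 0;
}
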
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 5ae2828beb00..91c99724dee0 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -1767,6 +1767,7 @@ static int fuse_writepage(struct page *page, struct writeback_control *wbc)
+ WARN_ON(wbc->sync_mode == WB_SYNC_ALL);
+
+ redirty_page_for_writepage(wbc, page);
++ unlock_page(page);
+ return 0;
+ }
+
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index 24dbca777775..89bdc41e0d86 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -450,6 +450,9 @@ struct fuse_iqueue {
+ /** Connection established */
+ unsigned connected;
+
++ /** Lock protecting accesses to members of this structure */
++ spinlock_t lock;
++
+ /** Readers of the connection are waiting on this */
+ wait_queue_head_t waitq;
+
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index 4bb885b0f032..987877860c01 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -582,6 +582,7 @@ static int fuse_show_options(struct seq_file *m, struct dentry *root)
+ static void fuse_iqueue_init(struct fuse_iqueue *fiq)
+ {
+ memset(fiq, 0, sizeof(struct fuse_iqueue));
++ spin_lock_init(&fiq->lock);
+ init_waitqueue_head(&fiq->waitq);
+ INIT_LIST_HEAD(&fiq->pending);
+ INIT_LIST_HEAD(&fiq->interrupts);
+diff --git a/fs/fuse/readdir.c b/fs/fuse/readdir.c
+index 574d03f8a573..b2da3de6a78e 100644
+--- a/fs/fuse/readdir.c
++++ b/fs/fuse/readdir.c
+@@ -372,11 +372,13 @@ static enum fuse_parse_result fuse_parse_cache(struct fuse_file *ff,
+ for (;;) {
+ struct fuse_dirent *dirent = addr + offset;
+ unsigned int nbytes = size - offset;
+- size_t reclen = FUSE_DIRENT_SIZE(dirent);
++ size_t reclen;
+
+ if (nbytes < FUSE_NAME_OFFSET || !dirent->namelen)
+ break;
+
++ reclen = FUSE_DIRENT_SIZE(dirent); /* derefs ->namelen */
++
+ if (WARN_ON(dirent->namelen > FUSE_NAME_MAX))
+ return FOUND_ERR;
+ if (WARN_ON(reclen > nbytes))
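
The readdir fix defers FUSE_DIRENT_SIZE(dirent), which dereferences dirent->namelen, until after the buffer is known to hold the fixed header. A sketch of the same validate-before-dereference rule for variable-length records (the record layout and 8-byte padding here are invented, loosely mirroring FUSE_DIRENT_SIZE):

/*
 * Sketch of the fuse_parse_cache() fix: when walking variable-length
 * records, check that the fixed header fits before reading any of its
 * fields.
 */
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct dirent_rec {
    unsigned namelen;
    char name[];                /* namelen bytes follow */
};

static void walk(const char *buf, size_t size)
{
    size_t offset = 0;

    for (;;) {
        const struct dirent_rec *r = (const void *)(buf + offset);
        size_t nbytes = size - offset;

        /* short-circuit: r->namelen is read only once the header fits */
        if (nbytes < sizeof(*r) || !r->namelen)
            break;

        /* safe only now; pad to 8 bytes like FUSE_DIRENT_SIZE */
        size_t reclen = (sizeof(*r) + r->namelen + 7) & ~(size_t)7;
        if (reclen > nbytes)
            break;                              /* truncated record */

        printf("entry: %.*s\n", (int)r->namelen, r->name);
        offset += reclen;
    }
}

int main(void)
{
    char *buf = calloc(1, 64);
    struct dirent_rec *r = (void *)buf;

    r->namelen = 5;
    memcpy(r->name, "hello", 5);
    walk(buf, 64);
    free(buf);
    return 0;
}
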
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index 4f8b5fd6c81f..b7ba5e194965 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -1680,6 +1680,7 @@ out_unlock:
+ brelse(dibh);
+ up_write(&ip->i_rw_mutex);
+ gfs2_trans_end(sdp);
++ buf_in_tr = false;
+ }
+ gfs2_glock_dq_uninit(rd_gh);
+ cond_resched();
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index cfb48bd088e1..06d048341fa4 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -288,6 +288,7 @@ struct io_ring_ctx {
+ struct sqe_submit {
+ const struct io_uring_sqe *sqe;
+ unsigned short index;
++ u32 sequence;
+ bool has_user;
+ bool needs_lock;
+ bool needs_fixed_file;
+@@ -2040,7 +2041,7 @@ static int io_req_set_file(struct io_ring_ctx *ctx, const struct sqe_submit *s,
+
+ if (flags & IOSQE_IO_DRAIN) {
+ req->flags |= REQ_F_IO_DRAIN;
+- req->sequence = ctx->cached_sq_head - 1;
++ req->sequence = s->sequence;
+ }
+
+ if (!io_op_needs_file(s->sqe))
+@@ -2247,6 +2248,7 @@ static bool io_get_sqring(struct io_ring_ctx *ctx, struct sqe_submit *s)
+ if (head < ctx->sq_entries) {
+ s->index = head;
+ s->sqe = &ctx->sq_sqes[head];
++ s->sequence = ctx->cached_sq_head;
+ ctx->cached_sq_head++;
+ return true;
+ }
+diff --git a/fs/overlayfs/export.c b/fs/overlayfs/export.c
+index cb8ec1f65c03..73c9775215b3 100644
+--- a/fs/overlayfs/export.c
++++ b/fs/overlayfs/export.c
+@@ -227,9 +227,8 @@ static int ovl_d_to_fh(struct dentry *dentry, char *buf, int buflen)
+ /* Encode an upper or lower file handle */
+ fh = ovl_encode_real_fh(enc_lower ? ovl_dentry_lower(dentry) :
+ ovl_dentry_upper(dentry), !enc_lower);
+- err = PTR_ERR(fh);
+ if (IS_ERR(fh))
+- goto fail;
++ return PTR_ERR(fh);
+
+ err = -EOVERFLOW;
+ if (fh->len > buflen)
+diff --git a/fs/overlayfs/inode.c b/fs/overlayfs/inode.c
+index 7663aeb85fa3..bc14781886bf 100644
+--- a/fs/overlayfs/inode.c
++++ b/fs/overlayfs/inode.c
+@@ -383,7 +383,8 @@ static bool ovl_can_list(const char *s)
+ return true;
+
+ /* Never list trusted.overlay, list other trusted for superuser only */
+- return !ovl_is_private_xattr(s) && capable(CAP_SYS_ADMIN);
++ return !ovl_is_private_xattr(s) &&
++ ns_capable_noaudit(&init_user_ns, CAP_SYS_ADMIN);
+ }
+
+ ssize_t ovl_listxattr(struct dentry *dentry, char *list, size_t size)
+diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
+index 28101bbc0b78..d952d5962e93 100644
+--- a/fs/xfs/xfs_file.c
++++ b/fs/xfs/xfs_file.c
+@@ -28,6 +28,7 @@
+ #include <linux/falloc.h>
+ #include <linux/backing-dev.h>
+ #include <linux/mman.h>
++#include <linux/fadvise.h>
+
+ static const struct vm_operations_struct xfs_file_vm_ops;
+
+@@ -933,6 +934,30 @@ out_unlock:
+ return error;
+ }
+
++STATIC int
++xfs_file_fadvise(
++ struct file *file,
++ loff_t start,
++ loff_t end,
++ int advice)
++{
++ struct xfs_inode *ip = XFS_I(file_inode(file));
++ int ret;
++ int lockflags = 0;
++
++ /*
++ * Operations creating pages in page cache need protection from hole
++ * punching and similar ops
++ */
++ if (advice == POSIX_FADV_WILLNEED) {
++ lockflags = XFS_IOLOCK_SHARED;
++ xfs_ilock(ip, lockflags);
++ }
++ ret = generic_fadvise(file, start, end, advice);
++ if (lockflags)
++ xfs_iunlock(ip, lockflags);
++ return ret;
++}
+
+ STATIC loff_t
+ xfs_file_remap_range(
+@@ -1232,6 +1257,7 @@ const struct file_operations xfs_file_operations = {
+ .fsync = xfs_file_fsync,
+ .get_unmapped_area = thp_get_unmapped_area,
+ .fallocate = xfs_file_fallocate,
++ .fadvise = xfs_file_fadvise,
+ .remap_file_range = xfs_file_remap_range,
+ };
+
+diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
+index 3fa1fa59f9b2..ab25e69a15d1 100644
+--- a/include/linux/blk-mq.h
++++ b/include/linux/blk-mq.h
+@@ -140,6 +140,7 @@ typedef int (poll_fn)(struct blk_mq_hw_ctx *);
+ typedef int (map_queues_fn)(struct blk_mq_tag_set *set);
+ typedef bool (busy_fn)(struct request_queue *);
+ typedef void (complete_fn)(struct request *);
++typedef void (cleanup_rq_fn)(struct request *);
+
+
+ struct blk_mq_ops {
+@@ -200,6 +201,12 @@ struct blk_mq_ops {
+ /* Called from inside blk_get_request() */
+ void (*initialize_rq_fn)(struct request *rq);
+
++ /*
++ * Called before freeing a request that has not completed yet,
++ * usually to free the driver's private data
++ */
++ cleanup_rq_fn *cleanup_rq;
++
+ /*
+ * If set, returns whether or not this queue currently is busy
+ */
+@@ -366,4 +373,10 @@ static inline blk_qc_t request_to_qc_t(struct blk_mq_hw_ctx *hctx,
+ BLK_QC_T_INTERNAL;
+ }
+
++static inline void blk_mq_cleanup_rq(struct request *rq)
++{
++ if (rq->q->mq_ops->cleanup_rq)
++ rq->q->mq_ops->cleanup_rq(rq);
++}
++
+ #endif
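
cleanup_rq above is added as an optional blk_mq_ops member with a static inline wrapper that checks the pointer before calling, the standard way to extend an ops table without touching every driver. A self-contained sketch of the idiom with simplified stand-in types:

/*
 * Sketch of the optional-callback idiom blk_mq_cleanup_rq() uses:
 * extend an ops table with a nullable hook and guard every call site
 * through one inline wrapper. Types and names are simplified stand-ins.
 */
#include <stdio.h>

struct request;

struct mq_ops {
    void (*complete)(struct request *);
    void (*cleanup_rq)(struct request *);   /* optional: may stay NULL */
};

struct request { const struct mq_ops *ops; int tag; };

static inline void mq_cleanup_rq(struct request *rq)
{
    if (rq->ops->cleanup_rq)                /* only drivers that opt in */
        rq->ops->cleanup_rq(rq);
}

static void my_cleanup(struct request *rq)
{
    printf("freeing private data for tag %d\n", rq->tag);
}

int main(void)
{
    struct mq_ops with    = { .cleanup_rq = my_cleanup };
    struct mq_ops without = { 0 };
    struct request a = { &with, 1 }, b = { &without, 2 };

    mq_cleanup_rq(&a);   /* calls the hook */
    mq_cleanup_rq(&b);   /* silently skipped */
    return 0;
}
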
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 1ef375dafb1c..ae51050c5094 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -202,9 +202,12 @@ struct request {
+ #ifdef CONFIG_BLK_WBT
+ unsigned short wbt_flags;
+ #endif
+-#ifdef CONFIG_BLK_DEV_THROTTLING_LOW
+- unsigned short throtl_size;
+-#endif
++ /*
++ * rq sectors used for blk stats. It has the same value
++ * as blk_rq_sectors(rq), except that it is never zeroed
++ * by completion.
++ */
++ unsigned short stats_sectors;
+
+ /*
+ * Number of scatter-gather DMA addr+len pairs after
+@@ -903,6 +906,7 @@ static inline struct request_queue *bdev_get_queue(struct block_device *bdev)
+ * blk_rq_err_bytes() : bytes left till the next error boundary
+ * blk_rq_sectors() : sectors left in the entire request
+ * blk_rq_cur_sectors() : sectors left in the current segment
++ * blk_rq_stats_sectors() : sectors of the entire request used for stats
+ */
+ static inline sector_t blk_rq_pos(const struct request *rq)
+ {
+@@ -931,6 +935,11 @@ static inline unsigned int blk_rq_cur_sectors(const struct request *rq)
+ return blk_rq_cur_bytes(rq) >> SECTOR_SHIFT;
+ }
+
++static inline unsigned int blk_rq_stats_sectors(const struct request *rq)
++{
++ return rq->stats_sectors;
++}
++
+ #ifdef CONFIG_BLK_DEV_ZONED
+ static inline unsigned int blk_rq_zone_no(struct request *rq)
+ {
+diff --git a/include/linux/bug.h b/include/linux/bug.h
+index fe5916550da8..f639bd0122f3 100644
+--- a/include/linux/bug.h
++++ b/include/linux/bug.h
+@@ -47,6 +47,11 @@ void generic_bug_clear_once(void);
+
+ #else /* !CONFIG_GENERIC_BUG */
+
++static inline void *find_bug(unsigned long bugaddr)
++{
++ return NULL;
++}
++
+ static inline enum bug_trap_type report_bug(unsigned long bug_addr,
+ struct pt_regs *regs)
+ {
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 997a530ff4e9..bc1b40fb0db7 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -3531,6 +3531,8 @@ extern void inode_nohighmem(struct inode *inode);
+ /* mm/fadvise.c */
+ extern int vfs_fadvise(struct file *file, loff_t offset, loff_t len,
+ int advice);
++extern int generic_fadvise(struct file *file, loff_t offset, loff_t len,
++ int advice);
+
+ #if defined(CONFIG_IO_URING)
+ extern struct sock *io_uring_get_socket(struct file *file);
+diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
+index 4a351cb7f20f..cf87c673cbb8 100644
+--- a/include/linux/mmc/host.h
++++ b/include/linux/mmc/host.h
+@@ -493,6 +493,15 @@ void mmc_command_done(struct mmc_host *host, struct mmc_request *mrq);
+
+ void mmc_cqe_request_done(struct mmc_host *host, struct mmc_request *mrq);
+
++/*
++ * May be called from host driver's system/runtime suspend/resume callbacks,
++ * to know if SDIO IRQs have been claimed.
++ */
++static inline bool sdio_irq_claimed(struct mmc_host *host)
++{
++ return host->sdio_irqs > 0;
++}
++
+ static inline void mmc_signal_sdio_irq(struct mmc_host *host)
+ {
+ host->ops->enable_sdio_irq(host, 0);
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index c842735a4f45..4b97f427cc92 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -548,6 +548,7 @@
+ #define PCI_DEVICE_ID_AMD_17H_DF_F3 0x1463
+ #define PCI_DEVICE_ID_AMD_17H_M10H_DF_F3 0x15eb
+ #define PCI_DEVICE_ID_AMD_17H_M30H_DF_F3 0x1493
++#define PCI_DEVICE_ID_AMD_17H_M70H_DF_F3 0x1443
+ #define PCI_DEVICE_ID_AMD_CNB17H_F3 0x1703
+ #define PCI_DEVICE_ID_AMD_LANCE 0x2000
+ #define PCI_DEVICE_ID_AMD_LANCE_HOME 0x2001
+diff --git a/include/linux/quotaops.h b/include/linux/quotaops.h
+index dc905a4ff8d7..185d94829701 100644
+--- a/include/linux/quotaops.h
++++ b/include/linux/quotaops.h
+@@ -22,7 +22,7 @@ static inline struct quota_info *sb_dqopt(struct super_block *sb)
+ /* i_mutex must being held */
+ static inline bool is_quota_modification(struct inode *inode, struct iattr *ia)
+ {
+- return (ia->ia_valid & ATTR_SIZE && ia->ia_size != inode->i_size) ||
++ return (ia->ia_valid & ATTR_SIZE) ||
+ (ia->ia_valid & ATTR_UID && !uid_eq(ia->ia_uid, inode->i_uid)) ||
+ (ia->ia_valid & ATTR_GID && !gid_eq(ia->ia_gid, inode->i_gid));
+ }
+diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
+index 13e108bcc9eb..d783e15ba898 100644
+--- a/include/linux/sunrpc/xprt.h
++++ b/include/linux/sunrpc/xprt.h
+@@ -352,6 +352,7 @@ bool xprt_prepare_transmit(struct rpc_task *task);
+ void xprt_request_enqueue_transmit(struct rpc_task *task);
+ void xprt_request_enqueue_receive(struct rpc_task *task);
+ void xprt_request_wait_receive(struct rpc_task *task);
++void xprt_request_dequeue_xprt(struct rpc_task *task);
+ bool xprt_request_need_retransmit(struct rpc_task *task);
+ void xprt_transmit(struct rpc_task *task);
+ void xprt_end_transmit(struct rpc_task *task);
+diff --git a/include/net/route.h b/include/net/route.h
+index dfce19c9fa96..6c516840380d 100644
+--- a/include/net/route.h
++++ b/include/net/route.h
+@@ -53,10 +53,11 @@ struct rtable {
+ unsigned int rt_flags;
+ __u16 rt_type;
+ __u8 rt_is_input;
+- u8 rt_gw_family;
++ __u8 rt_uses_gateway;
+
+ int rt_iif;
+
++ u8 rt_gw_family;
+ /* Info on neighbour */
+ union {
+ __be32 rt_gw4;
+diff --git a/kernel/jump_label.c b/kernel/jump_label.c
+index df3008419a1d..cdb3ffab128b 100644
+--- a/kernel/jump_label.c
++++ b/kernel/jump_label.c
+@@ -407,7 +407,9 @@ static bool jump_label_can_update(struct jump_entry *entry, bool init)
+ return false;
+
+ if (!kernel_text_address(jump_entry_code(entry))) {
+- WARN_ONCE(1, "can't patch jump_label at %pS", (void *)jump_entry_code(entry));
++ WARN_ONCE(!jump_entry_is_init(entry),
++ "can't patch jump_label at %pS",
++ (void *)jump_entry_code(entry));
+ return false;
+ }
+
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index d9770a5393c8..ebe8315a756a 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -1514,7 +1514,8 @@ static int check_kprobe_address_safe(struct kprobe *p,
+ /* Ensure it is not in reserved area nor out of text */
+ if (!kernel_text_address((unsigned long) p->addr) ||
+ within_kprobe_blacklist((unsigned long) p->addr) ||
+- jump_label_text_reserved(p->addr, p->addr)) {
++ jump_label_text_reserved(p->addr, p->addr) ||
++ find_bug((unsigned long)p->addr)) {
+ ret = -EINVAL;
+ goto out;
+ }
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 1888f6a3b694..424abf802f02 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -3274,7 +3274,7 @@ bool kmsg_dump_get_buffer(struct kmsg_dumper *dumper, bool syslog,
+ /* move first record forward until length fits into the buffer */
+ seq = dumper->cur_seq;
+ idx = dumper->cur_idx;
+- while (l > size && seq < dumper->next_seq) {
++ while (l >= size && seq < dumper->next_seq) {
+ struct printk_log *msg = log_from_idx(idx);
+
+ l -= msg_print_text(msg, true, time, NULL, 0);
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index a14e5fbbea46..5efdce756fdf 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -3234,13 +3234,13 @@ static int __init rcu_spawn_gp_kthread(void)
+ t = kthread_create(rcu_gp_kthread, NULL, "%s", rcu_state.name);
+ if (WARN_ONCE(IS_ERR(t), "%s: Could not start grace-period kthread, OOM is now expected behavior\n", __func__))
+ return 0;
+- rnp = rcu_get_root();
+- raw_spin_lock_irqsave_rcu_node(rnp, flags);
+- rcu_state.gp_kthread = t;
+ if (kthread_prio) {
+ sp.sched_priority = kthread_prio;
+ sched_setscheduler_nocheck(t, SCHED_FIFO, &sp);
+ }
++ rnp = rcu_get_root();
++ raw_spin_lock_irqsave_rcu_node(rnp, flags);
++ rcu_state.gp_kthread = t;
+ raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+ wake_up_process(t);
+ rcu_spawn_nocb_kthreads();
+diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
+index af7e7b9c86af..513b403b683b 100644
+--- a/kernel/rcu/tree_exp.h
++++ b/kernel/rcu/tree_exp.h
+@@ -792,6 +792,7 @@ static int rcu_print_task_exp_stall(struct rcu_node *rnp)
+ */
+ void synchronize_rcu_expedited(void)
+ {
++ bool boottime = (rcu_scheduler_active == RCU_SCHEDULER_INIT);
+ struct rcu_exp_work rew;
+ struct rcu_node *rnp;
+ unsigned long s;
+@@ -817,7 +818,7 @@ void synchronize_rcu_expedited(void)
+ return; /* Someone else did our work for us. */
+
+ /* Ensure that load happens before action based on it. */
+- if (unlikely(rcu_scheduler_active == RCU_SCHEDULER_INIT)) {
++ if (unlikely(boottime)) {
+ /* Direct call during scheduler init and early_initcalls(). */
+ rcu_exp_sel_wait_wake(s);
+ } else {
+@@ -835,5 +836,8 @@ void synchronize_rcu_expedited(void)
+
+ /* Let the next expedited grace period start. */
+ mutex_unlock(&rcu_state.exp_mutex);
++
++ if (likely(!boottime))
++ destroy_work_on_stack(&rew.rew_work);
+ }
+ EXPORT_SYMBOL_GPL(synchronize_rcu_expedited);
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index df9f1fe5689b..d38f007afea7 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -3486,8 +3486,36 @@ void scheduler_tick(void)
+
+ struct tick_work {
+ int cpu;
++ atomic_t state;
+ struct delayed_work work;
+ };
++/* Values for ->state, see diagram below. */
++#define TICK_SCHED_REMOTE_OFFLINE 0
++#define TICK_SCHED_REMOTE_OFFLINING 1
++#define TICK_SCHED_REMOTE_RUNNING 2
++
++/*
++ * State diagram for ->state:
++ *
++ *
++ * TICK_SCHED_REMOTE_OFFLINE
++ * | ^
++ * | |
++ * | | sched_tick_remote()
++ * | |
++ * | |
++ * +--TICK_SCHED_REMOTE_OFFLINING
++ * | ^
++ * | |
++ * sched_tick_start() | | sched_tick_stop()
++ * | |
++ * V |
++ * TICK_SCHED_REMOTE_RUNNING
++ *
++ *
++ * Other transitions get WARN_ON_ONCE(), except that sched_tick_remote()
++ * and sched_tick_start() are happy to leave the state in RUNNING.
++ */
+
+ static struct tick_work __percpu *tick_work_cpu;
+
+@@ -3500,6 +3528,7 @@ static void sched_tick_remote(struct work_struct *work)
+ struct task_struct *curr;
+ struct rq_flags rf;
+ u64 delta;
++ int os;
+
+ /*
+ * Handle the tick only if it appears the remote CPU is running in full
+@@ -3513,7 +3542,7 @@ static void sched_tick_remote(struct work_struct *work)
+
+ rq_lock_irq(rq, &rf);
+ curr = rq->curr;
+- if (is_idle_task(curr))
++ if (is_idle_task(curr) || cpu_is_offline(cpu))
+ goto out_unlock;
+
+ update_rq_clock(rq);
+@@ -3533,13 +3562,18 @@ out_requeue:
+ /*
+ * Run the remote tick once per second (1Hz). This arbitrary
+ * frequency is large enough to avoid overload but short enough
+- * to keep scheduler internal stats reasonably up to date.
++ * to keep scheduler internal stats reasonably up to date. But
++ * first update state to reflect hotplug activity if required.
+ */
+- queue_delayed_work(system_unbound_wq, dwork, HZ);
++ os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
++ WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
++ if (os == TICK_SCHED_REMOTE_RUNNING)
++ queue_delayed_work(system_unbound_wq, dwork, HZ);
+ }
+
+ static void sched_tick_start(int cpu)
+ {
++ int os;
+ struct tick_work *twork;
+
+ if (housekeeping_cpu(cpu, HK_FLAG_TICK))
+@@ -3548,15 +3582,20 @@ static void sched_tick_start(int cpu)
+ WARN_ON_ONCE(!tick_work_cpu);
+
+ twork = per_cpu_ptr(tick_work_cpu, cpu);
+- twork->cpu = cpu;
+- INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
+- queue_delayed_work(system_unbound_wq, &twork->work, HZ);
++ os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_RUNNING);
++ WARN_ON_ONCE(os == TICK_SCHED_REMOTE_RUNNING);
++ if (os == TICK_SCHED_REMOTE_OFFLINE) {
++ twork->cpu = cpu;
++ INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
++ queue_delayed_work(system_unbound_wq, &twork->work, HZ);
++ }
+ }
+
+ #ifdef CONFIG_HOTPLUG_CPU
+ static void sched_tick_stop(int cpu)
+ {
+ struct tick_work *twork;
++ int os;
+
+ if (housekeeping_cpu(cpu, HK_FLAG_TICK))
+ return;
+@@ -3564,7 +3603,10 @@ static void sched_tick_stop(int cpu)
+ WARN_ON_ONCE(!tick_work_cpu);
+
+ twork = per_cpu_ptr(tick_work_cpu, cpu);
+- cancel_delayed_work_sync(&twork->work);
++ /* There cannot be competing actions, but don't rely on stop-machine. */
++ os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_OFFLINING);
++ WARN_ON_ONCE(os != TICK_SCHED_REMOTE_RUNNING);
++ /* Don't cancel, as this would mess up the state machine. */
+ }
+ #endif /* CONFIG_HOTPLUG_CPU */
+
+@@ -3572,7 +3614,6 @@ int __init sched_tick_offload_init(void)
+ {
+ tick_work_cpu = alloc_percpu(struct tick_work);
+ BUG_ON(!tick_work_cpu);
+-
+ return 0;
+ }
+
+@@ -6939,10 +6980,6 @@ static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
+ #ifdef CONFIG_RT_GROUP_SCHED
+ if (!sched_rt_can_attach(css_tg(css), task))
+ return -EINVAL;
+-#else
+- /* We don't support RT-tasks being in separate groups */
+- if (task->sched_class != &fair_sched_class)
+- return -EINVAL;
+ #endif
+ /*
+ * Serialize against wake_up_new_task() such that if its
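
The remote-tick rework replaces cancel_delayed_work_sync() with the three-state machine drawn above: sched_tick_stop() flips RUNNING to OFFLINING with an xchg, and the work only requeues itself when atomic_fetch_add_unless() leaves the state at RUNNING. A C11 sketch of the state machine; stdatomic has no fetch_add_unless, so it is open-coded as a compare-exchange loop:

/*
 * C11 sketch of the remote-tick state machine. REMOTE_* names are
 * illustrative stand-ins for the TICK_SCHED_REMOTE_* constants.
 */
#include <stdatomic.h>
#include <stdio.h>

enum { REMOTE_OFFLINE, REMOTE_OFFLINING, REMOTE_RUNNING };

static _Atomic int state = REMOTE_OFFLINE;

/* Add @add to @v unless it currently equals @unless; return the old value. */
static int fetch_add_unless(_Atomic int *v, int add, int unless)
{
    int old = atomic_load(v);

    while (old != unless &&
           !atomic_compare_exchange_weak(v, &old, old + add))
        ;
    return old;
}

static void tick_start(void)
{
    if (atomic_exchange(&state, REMOTE_RUNNING) == REMOTE_OFFLINE)
        puts("queue tick work");            /* first start: arm the work */
}

static void tick_stop(void)
{
    atomic_exchange(&state, REMOTE_OFFLINING);  /* never cancel the work */
}

static void tick_work(void)
{
    /* OFFLINING decays to OFFLINE; RUNNING is left untouched. */
    if (fetch_add_unless(&state, -1, REMOTE_RUNNING) == REMOTE_RUNNING)
        puts("requeue tick work");          /* still online: run again */
    else
        puts("now OFFLINE, work stops requeueing itself");
}

int main(void)
{
    tick_start();
    tick_work();    /* requeues */
    tick_stop();
    tick_work();    /* observes OFFLINING, drops to OFFLINE */
    return 0;
}
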
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index 867b4bb6d4be..b03ca2f73713 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -117,6 +117,7 @@ static void sugov_fast_switch(struct sugov_policy *sg_policy, u64 time,
+ unsigned int next_freq)
+ {
+ struct cpufreq_policy *policy = sg_policy->policy;
++ int cpu;
+
+ if (!sugov_update_next_freq(sg_policy, time, next_freq))
+ return;
+@@ -126,7 +127,11 @@ static void sugov_fast_switch(struct sugov_policy *sg_policy, u64 time,
+ return;
+
+ policy->cur = next_freq;
+- trace_cpu_frequency(next_freq, smp_processor_id());
++
++ if (trace_cpu_frequency_enabled()) {
++ for_each_cpu(cpu, policy->cpus)
++ trace_cpu_frequency(next_freq, cpu);
++ }
+ }
+
+ static void sugov_deferred_update(struct sugov_policy *sg_policy, u64 time,
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 46122edd8552..20951112b6cd 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -529,6 +529,7 @@ static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq);
+ static struct rq *dl_task_offline_migration(struct rq *rq, struct task_struct *p)
+ {
+ struct rq *later_rq = NULL;
++ struct dl_bw *dl_b;
+
+ later_rq = find_lock_later_rq(p, rq);
+ if (!later_rq) {
+@@ -557,6 +558,38 @@ static struct rq *dl_task_offline_migration(struct rq *rq, struct task_struct *p
+ double_lock_balance(rq, later_rq);
+ }
+
++ if (p->dl.dl_non_contending || p->dl.dl_throttled) {
++ /*
++ * Inactive timer is armed (or callback is running, but
++ * waiting for us to release rq locks). In any case, when it
++ * fires (or continues), it will see the running_bw of this
++ * task migrated to later_rq (and correctly handle it).
++ */
++ sub_running_bw(&p->dl, &rq->dl);
++ sub_rq_bw(&p->dl, &rq->dl);
++
++ add_rq_bw(&p->dl, &later_rq->dl);
++ add_running_bw(&p->dl, &later_rq->dl);
++ } else {
++ sub_rq_bw(&p->dl, &rq->dl);
++ add_rq_bw(&p->dl, &later_rq->dl);
++ }
++
++ /*
++ * And we finally need to fix up root_domain(s) bandwidth accounting,
++ * since p is still hanging out in the old (now moved to default) root
++ * domain.
++ */
++ dl_b = &rq->rd->dl_bw;
++ raw_spin_lock(&dl_b->lock);
++ __dl_sub(dl_b, p->dl.dl_bw, cpumask_weight(rq->rd->span));
++ raw_spin_unlock(&dl_b->lock);
++
++ dl_b = &later_rq->rd->dl_bw;
++ raw_spin_lock(&dl_b->lock);
++ __dl_add(dl_b, p->dl.dl_bw, cpumask_weight(later_rq->rd->span));
++ raw_spin_unlock(&dl_b->lock);
++
+ set_task_cpu(p, later_rq->cpu);
+ double_unlock_balance(later_rq, rq);
+
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 500f5db0de0b..86cfc5d5129c 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -9052,9 +9052,10 @@ more_balance:
+ out_balanced:
+ /*
+ * We reach balance although we may have faced some affinity
+- * constraints. Clear the imbalance flag if it was set.
++ * constraints. Clear the imbalance flag only if other tasks got
++ * a chance to move and fix the imbalance.
+ */
+- if (sd_parent) {
++ if (sd_parent && !(env.flags & LBF_ALL_PINNED)) {
+ int *group_imbalance = &sd_parent->groups->sgc->imbalance;
+
+ if (*group_imbalance)
+@@ -10300,18 +10301,18 @@ err:
+ void online_fair_sched_group(struct task_group *tg)
+ {
+ struct sched_entity *se;
++ struct rq_flags rf;
+ struct rq *rq;
+ int i;
+
+ for_each_possible_cpu(i) {
+ rq = cpu_rq(i);
+ se = tg->se[i];
+-
+- raw_spin_lock_irq(&rq->lock);
++ rq_lock_irq(rq, &rf);
+ update_rq_clock(rq);
+ attach_entity_cfs_rq(se);
+ sync_throttle(tg, i);
+- raw_spin_unlock_irq(&rq->lock);
++ rq_unlock_irq(rq, &rf);
+ }
+ }
+
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index 80940939b733..e4bc4aa739b8 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -241,13 +241,14 @@ static void do_idle(void)
+ check_pgt_cache();
+ rmb();
+
++ local_irq_disable();
++
+ if (cpu_is_offline(cpu)) {
+- tick_nohz_idle_stop_tick_protected();
++ tick_nohz_idle_stop_tick();
+ cpuhp_report_idle_dead();
+ arch_cpu_idle_dead();
+ }
+
+- local_irq_disable();
+ arch_cpu_idle_enter();
+
+ /*
+diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
+index 6e52b67b420e..517e3719027e 100644
+--- a/kernel/sched/psi.c
++++ b/kernel/sched/psi.c
+@@ -1198,7 +1198,7 @@ static ssize_t psi_write(struct file *file, const char __user *user_buf,
+ if (static_branch_likely(&psi_disabled))
+ return -EOPNOTSUPP;
+
+- buf_size = min(nbytes, (sizeof(buf) - 1));
++ buf_size = min(nbytes, sizeof(buf));
+ if (copy_from_user(buf, user_buf, buf_size))
+ return -EFAULT;
+
+diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c
+index 57518efc3810..b7d75a9e8ccf 100644
+--- a/kernel/time/alarmtimer.c
++++ b/kernel/time/alarmtimer.c
+@@ -672,7 +672,7 @@ static int alarm_timer_create(struct k_itimer *new_timer)
+ enum alarmtimer_type type;
+
+ if (!alarmtimer_get_rtcdev())
+- return -ENOTSUPP;
++ return -EOPNOTSUPP;
+
+ if (!capable(CAP_WAKE_ALARM))
+ return -EPERM;
+@@ -790,7 +790,7 @@ static int alarm_timer_nsleep(const clockid_t which_clock, int flags,
+ int ret = 0;
+
+ if (!alarmtimer_get_rtcdev())
+- return -ENOTSUPP;
++ return -EOPNOTSUPP;
+
+ if (flags & ~TIMER_ABSTIME)
+ return -EINVAL;
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 0a426f4e3125..5bbad147a90c 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -375,7 +375,8 @@ static int posix_cpu_timer_del(struct k_itimer *timer)
+ struct sighand_struct *sighand;
+ struct task_struct *p = timer->it.cpu.task;
+
+- WARN_ON_ONCE(p == NULL);
++ if (WARN_ON_ONCE(!p))
++ return -EINVAL;
+
+ /*
+ * Protect against sighand release/switch in exit/exec and process/
+@@ -580,7 +581,8 @@ static int posix_cpu_timer_set(struct k_itimer *timer, int timer_flags,
+ u64 old_expires, new_expires, old_incr, val;
+ int ret;
+
+- WARN_ON_ONCE(p == NULL);
++ if (WARN_ON_ONCE(!p))
++ return -EINVAL;
+
+ /*
+ * Use the to_ktime conversion because that clamps the maximum
+@@ -715,10 +717,11 @@ static int posix_cpu_timer_set(struct k_itimer *timer, int timer_flags,
+
+ static void posix_cpu_timer_get(struct k_itimer *timer, struct itimerspec64 *itp)
+ {
+- u64 now;
+ struct task_struct *p = timer->it.cpu.task;
++ u64 now;
+
+- WARN_ON_ONCE(p == NULL);
++ if (WARN_ON_ONCE(!p))
++ return;
+
+ /*
+ * Easy part: convert the reload time.
+@@ -1000,12 +1003,13 @@ static void check_process_timers(struct task_struct *tsk,
+ */
+ static void posix_cpu_timer_rearm(struct k_itimer *timer)
+ {
++ struct task_struct *p = timer->it.cpu.task;
+ struct sighand_struct *sighand;
+ unsigned long flags;
+- struct task_struct *p = timer->it.cpu.task;
+ u64 now;
+
+- WARN_ON_ONCE(p == NULL);
++ if (WARN_ON_ONCE(!p))
++ return;
+
+ /*
+ * Fetch the current sample and update the timer's expiry time.
+@@ -1202,7 +1206,9 @@ void set_process_cpu_timer(struct task_struct *tsk, unsigned int clock_idx,
+ u64 now;
+ int ret;
+
+- WARN_ON_ONCE(clock_idx == CPUCLOCK_SCHED);
++ if (WARN_ON_ONCE(clock_idx >= CPUCLOCK_SCHED))
++ return;
++
+ ret = cpu_timer_sample_group(clock_idx, tsk, &now);
+
+ if (oldval && ret != -EINVAL) {
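
The posix-cpu-timer hunks convert bare WARN_ON_ONCE(p == NULL) calls into if (WARN_ON_ONCE(!p)) return ...;, so a corrupted timer warns once and bails instead of dereferencing NULL. A userspace imitation of a warn-once macro that also yields the condition (uses the GNU statement-expression extension, as the kernel macro does):

/*
 * Sketch of the WARN_ON_ONCE-and-bail pattern: the macro evaluates to
 * the condition and warns only the first time it fires, so callers can
 * write `if (WARN_ON_ONCE(!p)) return -EINVAL;`. Userspace stand-in;
 * requires gcc/clang for the statement expression.
 */
#include <stdio.h>

#define WARN_ON_ONCE(cond) ({                                   \
    static int __warned;    /* one flag per call site */        \
    int __c = !!(cond);                                         \
    if (__c && !__warned) {                                     \
        __warned = 1;                                           \
        fprintf(stderr, "WARNING: %s:%d %s\n",                  \
                __FILE__, __LINE__, #cond);                     \
    }                                                           \
    __c;                                                        \
})

struct task  { int pid; };
struct timer { struct task *task; };

static int timer_del(struct timer *t)
{
    if (WARN_ON_ONCE(!t->task))     /* warn once, then fail gracefully */
        return -22;                 /* -EINVAL */
    return 0;
}

int main(void)
{
    struct timer broken = { 0 };

    printf("first:  %d\n", timer_del(&broken));  /* warns */
    printf("second: %d\n", timer_del(&broken));  /* silent, still -22 */
    return 0;
}
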
+diff --git a/lib/lzo/lzo1x_compress.c b/lib/lzo/lzo1x_compress.c
+index ba16c08e8cb9..717c940112f9 100644
+--- a/lib/lzo/lzo1x_compress.c
++++ b/lib/lzo/lzo1x_compress.c
+@@ -83,17 +83,19 @@ next:
+ ALIGN((uintptr_t)ir, 4)) &&
+ (ir < limit) && (*ir == 0))
+ ir++;
+- for (; (ir + 4) <= limit; ir += 4) {
+- dv = *((u32 *)ir);
+- if (dv) {
++ if (IS_ALIGNED((uintptr_t)ir, 4)) {
++ for (; (ir + 4) <= limit; ir += 4) {
++ dv = *((u32 *)ir);
++ if (dv) {
+ # if defined(__LITTLE_ENDIAN)
+- ir += __builtin_ctz(dv) >> 3;
++ ir += __builtin_ctz(dv) >> 3;
+ # elif defined(__BIG_ENDIAN)
+- ir += __builtin_clz(dv) >> 3;
++ ir += __builtin_clz(dv) >> 3;
+ # else
+ # error "missing endian definition"
+ # endif
+- break;
++ break;
++ }
+ }
+ }
+ #endif
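
The LZO fix only enters the u32-at-a-time zero scan once the cursor is 4-byte aligned; before that, *(u32 *)ir would be an unaligned load. A standalone sketch of align-first-then-scan-by-word, assuming little-endian as the __builtin_ctz branch does:

/*
 * Sketch of the lzo1x fix: advance byte-by-byte until the cursor is
 * 4-byte aligned, and only then scan a word at a time. __builtin_ctz
 * finds the first nonzero byte inside a little-endian word.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define IS_ALIGNED(p, a) (((uintptr_t)(p) & ((a) - 1)) == 0)

static const unsigned char *skip_zeros(const unsigned char *ir,
                                       const unsigned char *limit)
{
    /* byte loop: also serves to reach 4-byte alignment */
    while (ir < limit && !IS_ALIGNED(ir, 4) && *ir == 0)
        ir++;

    if (IS_ALIGNED(ir, 4)) {                  /* word loads are safe now */
        while (ir + 4 <= limit) {
            uint32_t dv;

            memcpy(&dv, ir, 4);               /* aligned 32-bit load */
            if (dv) {
                ir += __builtin_ctz(dv) >> 3; /* little-endian: low byte first */
                break;
            }
            ir += 4;
        }
    }
    while (ir < limit && *ir == 0)            /* finish any tail bytes */
        ir++;
    return ir;
}

int main(void)
{
    unsigned char buf[32] = { 0 };

    buf[13] = 0xff;
    printf("first nonzero at offset %td\n", skip_zeros(buf, buf + 32) - buf);
    return 0;
}
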
+diff --git a/mm/compaction.c b/mm/compaction.c
+index 952dc2fb24e5..1e994920e6ff 100644
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -2078,6 +2078,17 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
+ const bool sync = cc->mode != MIGRATE_ASYNC;
+ bool update_cached;
+
++ /*
++ * These counters track activities during zone compaction. Initialize
++ * them before compacting a new zone.
++ */
++ cc->total_migrate_scanned = 0;
++ cc->total_free_scanned = 0;
++ cc->nr_migratepages = 0;
++ cc->nr_freepages = 0;
++ INIT_LIST_HEAD(&cc->freepages);
++ INIT_LIST_HEAD(&cc->migratepages);
++
+ cc->migratetype = gfpflags_to_migratetype(cc->gfp_mask);
+ ret = compaction_suitable(cc->zone, cc->order, cc->alloc_flags,
+ cc->classzone_idx);
+@@ -2281,10 +2292,6 @@ static enum compact_result compact_zone_order(struct zone *zone, int order,
+ {
+ enum compact_result ret;
+ struct compact_control cc = {
+- .nr_freepages = 0,
+- .nr_migratepages = 0,
+- .total_migrate_scanned = 0,
+- .total_free_scanned = 0,
+ .order = order,
+ .search_order = order,
+ .gfp_mask = gfp_mask,
+@@ -2305,8 +2312,6 @@ static enum compact_result compact_zone_order(struct zone *zone, int order,
+
+ if (capture)
+ current->capture_control = &capc;
+- INIT_LIST_HEAD(&cc.freepages);
+- INIT_LIST_HEAD(&cc.migratepages);
+
+ ret = compact_zone(&cc, &capc);
+
+@@ -2408,8 +2413,6 @@ static void compact_node(int nid)
+ struct zone *zone;
+ struct compact_control cc = {
+ .order = -1,
+- .total_migrate_scanned = 0,
+- .total_free_scanned = 0,
+ .mode = MIGRATE_SYNC,
+ .ignore_skip_hint = true,
+ .whole_zone = true,
+@@ -2423,11 +2426,7 @@ static void compact_node(int nid)
+ if (!populated_zone(zone))
+ continue;
+
+- cc.nr_freepages = 0;
+- cc.nr_migratepages = 0;
+ cc.zone = zone;
+- INIT_LIST_HEAD(&cc.freepages);
+- INIT_LIST_HEAD(&cc.migratepages);
+
+ compact_zone(&cc, NULL);
+
+@@ -2529,8 +2528,6 @@ static void kcompactd_do_work(pg_data_t *pgdat)
+ struct compact_control cc = {
+ .order = pgdat->kcompactd_max_order,
+ .search_order = pgdat->kcompactd_max_order,
+- .total_migrate_scanned = 0,
+- .total_free_scanned = 0,
+ .classzone_idx = pgdat->kcompactd_classzone_idx,
+ .mode = MIGRATE_SYNC_LIGHT,
+ .ignore_skip_hint = false,
+@@ -2554,16 +2551,10 @@ static void kcompactd_do_work(pg_data_t *pgdat)
+ COMPACT_CONTINUE)
+ continue;
+
+- cc.nr_freepages = 0;
+- cc.nr_migratepages = 0;
+- cc.total_migrate_scanned = 0;
+- cc.total_free_scanned = 0;
+- cc.zone = zone;
+- INIT_LIST_HEAD(&cc.freepages);
+- INIT_LIST_HEAD(&cc.migratepages);
+-
+ if (kthread_should_stop())
+ return;
++
++ cc.zone = zone;
+ status = compact_zone(&cc, NULL);
+
+ if (status == COMPACT_SUCCESS) {
+diff --git a/mm/fadvise.c b/mm/fadvise.c
+index 467bcd032037..4f17c83db575 100644
+--- a/mm/fadvise.c
++++ b/mm/fadvise.c
+@@ -27,8 +27,7 @@
+ * deactivate the pages and clear PG_Referenced.
+ */
+
+-static int generic_fadvise(struct file *file, loff_t offset, loff_t len,
+- int advice)
++int generic_fadvise(struct file *file, loff_t offset, loff_t len, int advice)
+ {
+ struct inode *inode;
+ struct address_space *mapping;
+@@ -178,6 +177,7 @@ static int generic_fadvise(struct file *file, loff_t offset, loff_t len,
+ }
+ return 0;
+ }
++EXPORT_SYMBOL(generic_fadvise);
+
+ int vfs_fadvise(struct file *file, loff_t offset, loff_t len, int advice)
+ {
+diff --git a/mm/madvise.c b/mm/madvise.c
+index 968df3aa069f..bac973b9f2cc 100644
+--- a/mm/madvise.c
++++ b/mm/madvise.c
+@@ -14,6 +14,7 @@
+ #include <linux/userfaultfd_k.h>
+ #include <linux/hugetlb.h>
+ #include <linux/falloc.h>
++#include <linux/fadvise.h>
+ #include <linux/sched.h>
+ #include <linux/ksm.h>
+ #include <linux/fs.h>
+@@ -275,6 +276,7 @@ static long madvise_willneed(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end)
+ {
+ struct file *file = vma->vm_file;
++ loff_t offset;
+
+ *prev = vma;
+ #ifdef CONFIG_SWAP
+@@ -298,12 +300,20 @@ static long madvise_willneed(struct vm_area_struct *vma,
+ return 0;
+ }
+
+- start = ((start - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
+- if (end > vma->vm_end)
+- end = vma->vm_end;
+- end = ((end - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
+-
+- force_page_cache_readahead(file->f_mapping, file, start, end - start);
++ /*
++ * Filesystem's fadvise may need to take various locks. We need to
++ * explicitly grab a reference because the vma (and hence the
++ * vma's reference to the file) can go away as soon as we drop
++ * mmap_sem.
++ */
++ *prev = NULL; /* tell sys_madvise we drop mmap_sem */
++ get_file(file);
++ up_read(&current->mm->mmap_sem);
++ offset = (loff_t)(start - vma->vm_start)
++ + ((loff_t)vma->vm_pgoff << PAGE_SHIFT);
++ vfs_fadvise(file, offset, end - start, POSIX_FADV_WILLNEED);
++ fput(file);
++ down_read(&current->mm->mmap_sem);
+ return 0;
+ }
+
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 9ec5e12486a7..e18108b2b786 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -2821,6 +2821,16 @@ int __memcg_kmem_charge_memcg(struct page *page, gfp_t gfp, int order,
+
+ if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) &&
+ !page_counter_try_charge(&memcg->kmem, nr_pages, &counter)) {
++
++ /*
++ * Enforce __GFP_NOFAIL allocation because callers are not
++ * prepared to see failures and likely do not have any failure
++ * handling code.
++ */
++ if (gfp & __GFP_NOFAIL) {
++ page_counter_charge(&memcg->kmem, nr_pages);
++ return 0;
++ }
+ cancel_charge(memcg, nr_pages);
+ return -ENOMEM;
+ }
+diff --git a/mm/oom_kill.c b/mm/oom_kill.c
+index eda2e2a0bdc6..26804abe99d6 100644
+--- a/mm/oom_kill.c
++++ b/mm/oom_kill.c
+@@ -1068,9 +1068,10 @@ bool out_of_memory(struct oom_control *oc)
+ * The OOM killer does not compensate for IO-less reclaim.
+ * pagefault_out_of_memory lost its gfp context so we have to
+ * make sure exclude 0 mask - all other users should have at least
+- * ___GFP_DIRECT_RECLAIM to get here.
++ * ___GFP_DIRECT_RECLAIM to get here. But mem_cgroup_oom() has to
++ * invoke the OOM killer even if it is a GFP_NOFS allocation.
+ */
+- if (oc->gfp_mask && !(oc->gfp_mask & __GFP_FS))
++ if (oc->gfp_mask && !(oc->gfp_mask & __GFP_FS) && !is_memcg_oom(oc))
+ return true;
+
+ /*
+diff --git a/mm/z3fold.c b/mm/z3fold.c
+index ed19d98c9dcd..05bdf90646e7 100644
+--- a/mm/z3fold.c
++++ b/mm/z3fold.c
+@@ -295,14 +295,11 @@ static void z3fold_unregister_migration(struct z3fold_pool *pool)
+ }
+
+ /* Initializes the z3fold header of a newly allocated z3fold page */
+-static struct z3fold_header *init_z3fold_page(struct page *page,
++static struct z3fold_header *init_z3fold_page(struct page *page, bool headless,
+ struct z3fold_pool *pool, gfp_t gfp)
+ {
+ struct z3fold_header *zhdr = page_address(page);
+- struct z3fold_buddy_slots *slots = alloc_slots(pool, gfp);
+-
+- if (!slots)
+- return NULL;
++ struct z3fold_buddy_slots *slots;
+
+ INIT_LIST_HEAD(&page->lru);
+ clear_bit(PAGE_HEADLESS, &page->private);
+@@ -310,6 +307,12 @@ static struct z3fold_header *init_z3fold_page(struct page *page,
+ clear_bit(NEEDS_COMPACTING, &page->private);
+ clear_bit(PAGE_STALE, &page->private);
+ clear_bit(PAGE_CLAIMED, &page->private);
++ if (headless)
++ return zhdr;
++
++ slots = alloc_slots(pool, gfp);
++ if (!slots)
++ return NULL;
+
+ spin_lock_init(&zhdr->page_lock);
+ kref_init(&zhdr->refcount);
+@@ -366,9 +369,10 @@ static inline int __idx(struct z3fold_header *zhdr, enum buddy bud)
+ * Encodes the handle of a particular buddy within a z3fold page
+ * Pool lock should be held as this function accesses first_num
+ */
+-static unsigned long encode_handle(struct z3fold_header *zhdr, enum buddy bud)
++static unsigned long __encode_handle(struct z3fold_header *zhdr,
++ struct z3fold_buddy_slots *slots,
++ enum buddy bud)
+ {
+- struct z3fold_buddy_slots *slots;
+ unsigned long h = (unsigned long)zhdr;
+ int idx = 0;
+
+@@ -385,11 +389,15 @@ static unsigned long encode_handle(struct z3fold_header *zhdr, enum buddy bud)
+ if (bud == LAST)
+ h |= (zhdr->last_chunks << BUDDY_SHIFT);
+
+- slots = zhdr->slots;
+ slots->slot[idx] = h;
+ return (unsigned long)&slots->slot[idx];
+ }
+
++static unsigned long encode_handle(struct z3fold_header *zhdr, enum buddy bud)
++{
++ return __encode_handle(zhdr, zhdr->slots, bud);
++}
++
+ /* Returns the z3fold page where a given handle is stored */
+ static inline struct z3fold_header *handle_to_z3fold_header(unsigned long h)
+ {
+@@ -624,6 +632,7 @@ static void do_compact_page(struct z3fold_header *zhdr, bool locked)
+ }
+
+ if (unlikely(PageIsolated(page) ||
++ test_bit(PAGE_CLAIMED, &page->private) ||
+ test_bit(PAGE_STALE, &page->private))) {
+ z3fold_page_unlock(zhdr);
+ return;
+@@ -924,7 +933,7 @@ retry:
+ if (!page)
+ return -ENOMEM;
+
+- zhdr = init_z3fold_page(page, pool, gfp);
++ zhdr = init_z3fold_page(page, bud == HEADLESS, pool, gfp);
+ if (!zhdr) {
+ __free_page(page);
+ return -ENOMEM;
+@@ -1100,6 +1109,7 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
+ struct z3fold_header *zhdr = NULL;
+ struct page *page = NULL;
+ struct list_head *pos;
++ struct z3fold_buddy_slots slots;
+ unsigned long first_handle = 0, middle_handle = 0, last_handle = 0;
+
+ spin_lock(&pool->lock);
+@@ -1118,16 +1128,22 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
+ /* this bit could have been set by free, in which case
+ * we pass over to the next page in the pool.
+ */
+- if (test_and_set_bit(PAGE_CLAIMED, &page->private))
++ if (test_and_set_bit(PAGE_CLAIMED, &page->private)) {
++ page = NULL;
+ continue;
++ }
+
+- if (unlikely(PageIsolated(page)))
++ if (unlikely(PageIsolated(page))) {
++ clear_bit(PAGE_CLAIMED, &page->private);
++ page = NULL;
+ continue;
++ }
++ zhdr = page_address(page);
+ if (test_bit(PAGE_HEADLESS, &page->private))
+ break;
+
+- zhdr = page_address(page);
+ if (!z3fold_page_trylock(zhdr)) {
++ clear_bit(PAGE_CLAIMED, &page->private);
+ zhdr = NULL;
+ continue; /* can't evict at this point */
+ }
+@@ -1145,26 +1161,30 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
+
+ if (!test_bit(PAGE_HEADLESS, &page->private)) {
+ /*
+- * We need encode the handles before unlocking, since
+- * we can race with free that will set
+- * (first|last)_chunks to 0
++ * We need encode the handles before unlocking, and
++ * use our local slots structure because z3fold_free
++ * can zero out zhdr->slots and we can't do much
++ * about that
+ */
+ first_handle = 0;
+ last_handle = 0;
+ middle_handle = 0;
+ if (zhdr->first_chunks)
+- first_handle = encode_handle(zhdr, FIRST);
++ first_handle = __encode_handle(zhdr, &slots,
++ FIRST);
+ if (zhdr->middle_chunks)
+- middle_handle = encode_handle(zhdr, MIDDLE);
++ middle_handle = __encode_handle(zhdr, &slots,
++ MIDDLE);
+ if (zhdr->last_chunks)
+- last_handle = encode_handle(zhdr, LAST);
++ last_handle = __encode_handle(zhdr, &slots,
++ LAST);
+ /*
+ * it's safe to unlock here because we hold a
+ * reference to this page
+ */
+ z3fold_page_unlock(zhdr);
+ } else {
+- first_handle = encode_handle(zhdr, HEADLESS);
++ first_handle = __encode_handle(zhdr, &slots, HEADLESS);
+ last_handle = middle_handle = 0;
+ }
+
+@@ -1194,9 +1214,9 @@ next:
+ spin_lock(&pool->lock);
+ list_add(&page->lru, &pool->lru);
+ spin_unlock(&pool->lock);
++ clear_bit(PAGE_CLAIMED, &page->private);
+ } else {
+ z3fold_page_lock(zhdr);
+- clear_bit(PAGE_CLAIMED, &page->private);
+ if (kref_put(&zhdr->refcount,
+ release_z3fold_page_locked)) {
+ atomic64_dec(&pool->pages_nr);
+@@ -1211,6 +1231,7 @@ next:
+ list_add(&page->lru, &pool->lru);
+ spin_unlock(&pool->lock);
+ z3fold_page_unlock(zhdr);
++ clear_bit(PAGE_CLAIMED, &page->private);
+ }
+
+ /* We started off locked to we need to lock the pool back */
+@@ -1315,7 +1336,8 @@ static bool z3fold_page_isolate(struct page *page, isolate_mode_t mode)
+ VM_BUG_ON_PAGE(!PageMovable(page), page);
+ VM_BUG_ON_PAGE(PageIsolated(page), page);
+
+- if (test_bit(PAGE_HEADLESS, &page->private))
++ if (test_bit(PAGE_HEADLESS, &page->private) ||
++ test_bit(PAGE_CLAIMED, &page->private))
+ return false;
+
+ zhdr = page_address(page);
+diff --git a/net/appletalk/ddp.c b/net/appletalk/ddp.c
+index a8cb6b2e20c1..5a203acdcae5 100644
+--- a/net/appletalk/ddp.c
++++ b/net/appletalk/ddp.c
+@@ -1023,6 +1023,11 @@ static int atalk_create(struct net *net, struct socket *sock, int protocol,
+ */
+ if (sock->type != SOCK_RAW && sock->type != SOCK_DGRAM)
+ goto out;
++
++ rc = -EPERM;
++ if (sock->type == SOCK_RAW && !kern && !capable(CAP_NET_RAW))
++ goto out;
++
+ rc = -ENOMEM;
+ sk = sk_alloc(net, PF_APPLETALK, GFP_KERNEL, &ddp_proto, kern);
+ if (!sk)
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index ca5207767dc2..bb222b882b67 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -855,6 +855,8 @@ static int ax25_create(struct net *net, struct socket *sock, int protocol,
+ break;
+
+ case SOCK_RAW:
++ if (!capable(CAP_NET_RAW))
++ return -EPERM;
+ break;
+ default:
+ return -ESOCKTNOSUPPORT;
+diff --git a/net/ieee802154/socket.c b/net/ieee802154/socket.c
+index badc5cfe4dc6..d93d4531aa9b 100644
+--- a/net/ieee802154/socket.c
++++ b/net/ieee802154/socket.c
+@@ -1008,6 +1008,9 @@ static int ieee802154_create(struct net *net, struct socket *sock,
+
+ switch (sock->type) {
+ case SOCK_RAW:
++ rc = -EPERM;
++ if (!capable(CAP_NET_RAW))
++ goto out;
+ proto = &ieee802154_raw_prot;
+ ops = &ieee802154_raw_ops;
+ break;
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index f5c163d4771b..a9183543ca30 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -560,7 +560,7 @@ struct dst_entry *inet_csk_route_req(const struct sock *sk,
+ rt = ip_route_output_flow(net, fl4, sk);
+ if (IS_ERR(rt))
+ goto no_route;
+- if (opt && opt->opt.is_strictroute && rt->rt_gw_family)
++ if (opt && opt->opt.is_strictroute && rt->rt_uses_gateway)
+ goto route_err;
+ rcu_read_unlock();
+ return &rt->dst;
+@@ -598,7 +598,7 @@ struct dst_entry *inet_csk_route_child_sock(const struct sock *sk,
+ rt = ip_route_output_flow(net, fl4, sk);
+ if (IS_ERR(rt))
+ goto no_route;
+- if (opt && opt->opt.is_strictroute && rt->rt_gw_family)
++ if (opt && opt->opt.is_strictroute && rt->rt_uses_gateway)
+ goto route_err;
+ return &rt->dst;
+
+diff --git a/net/ipv4/ip_forward.c b/net/ipv4/ip_forward.c
+index 06f6f280b9ff..00ec819f949b 100644
+--- a/net/ipv4/ip_forward.c
++++ b/net/ipv4/ip_forward.c
+@@ -123,7 +123,7 @@ int ip_forward(struct sk_buff *skb)
+
+ rt = skb_rtable(skb);
+
+- if (opt->is_strictroute && rt->rt_gw_family)
++ if (opt->is_strictroute && rt->rt_uses_gateway)
+ goto sr_failed;
+
+ IPCB(skb)->flags |= IPSKB_FORWARDED;
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index cc7ef0d05bbd..da521790cd63 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -499,7 +499,7 @@ int __ip_queue_xmit(struct sock *sk, struct sk_buff *skb, struct flowi *fl,
+ skb_dst_set_noref(skb, &rt->dst);
+
+ packet_routed:
+- if (inet_opt && inet_opt->opt.is_strictroute && rt->rt_gw_family)
++ if (inet_opt && inet_opt->opt.is_strictroute && rt->rt_uses_gateway)
+ goto no_route;
+
+ /* OK, we know where to send it, allocate and build IP header. */
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index b6a6f18c3dd1..7dcce724c78b 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -635,6 +635,7 @@ static void fill_route_from_fnhe(struct rtable *rt, struct fib_nh_exception *fnh
+
+ if (fnhe->fnhe_gw) {
+ rt->rt_flags |= RTCF_REDIRECTED;
++ rt->rt_uses_gateway = 1;
+ rt->rt_gw_family = AF_INET;
+ rt->rt_gw4 = fnhe->fnhe_gw;
+ }
+@@ -1313,7 +1314,7 @@ static unsigned int ipv4_mtu(const struct dst_entry *dst)
+ mtu = READ_ONCE(dst->dev->mtu);
+
+ if (unlikely(ip_mtu_locked(dst))) {
+- if (rt->rt_gw_family && mtu > 576)
++ if (rt->rt_uses_gateway && mtu > 576)
+ mtu = 576;
+ }
+
+@@ -1569,6 +1570,7 @@ static void rt_set_nexthop(struct rtable *rt, __be32 daddr,
+ struct fib_nh_common *nhc = FIB_RES_NHC(*res);
+
+ if (nhc->nhc_gw_family && nhc->nhc_scope == RT_SCOPE_LINK) {
++ rt->rt_uses_gateway = 1;
+ rt->rt_gw_family = nhc->nhc_gw_family;
+ /* only INET and INET6 are supported */
+ if (likely(nhc->nhc_gw_family == AF_INET))
+@@ -1634,6 +1636,7 @@ struct rtable *rt_dst_alloc(struct net_device *dev,
+ rt->rt_iif = 0;
+ rt->rt_pmtu = 0;
+ rt->rt_mtu_locked = 0;
++ rt->rt_uses_gateway = 0;
+ rt->rt_gw_family = 0;
+ rt->rt_gw4 = 0;
+ INIT_LIST_HEAD(&rt->rt_uncached);
+@@ -2694,6 +2697,7 @@ struct dst_entry *ipv4_blackhole_route(struct net *net, struct dst_entry *dst_or
+ rt->rt_genid = rt_genid_ipv4(net);
+ rt->rt_flags = ort->rt_flags;
+ rt->rt_type = ort->rt_type;
++ rt->rt_uses_gateway = ort->rt_uses_gateway;
+ rt->rt_gw_family = ort->rt_gw_family;
+ if (rt->rt_gw_family == AF_INET)
+ rt->rt_gw4 = ort->rt_gw4;
+@@ -2778,21 +2782,23 @@ static int rt_fill_info(struct net *net, __be32 dst, __be32 src,
+ if (nla_put_in_addr(skb, RTA_PREFSRC, fl4->saddr))
+ goto nla_put_failure;
+ }
+- if (rt->rt_gw_family == AF_INET &&
+- nla_put_in_addr(skb, RTA_GATEWAY, rt->rt_gw4)) {
+- goto nla_put_failure;
+- } else if (rt->rt_gw_family == AF_INET6) {
+- int alen = sizeof(struct in6_addr);
+- struct nlattr *nla;
+- struct rtvia *via;
+-
+- nla = nla_reserve(skb, RTA_VIA, alen + 2);
+- if (!nla)
++ if (rt->rt_uses_gateway) {
++ if (rt->rt_gw_family == AF_INET &&
++ nla_put_in_addr(skb, RTA_GATEWAY, rt->rt_gw4)) {
+ goto nla_put_failure;
+-
+- via = nla_data(nla);
+- via->rtvia_family = AF_INET6;
+- memcpy(via->rtvia_addr, &rt->rt_gw6, alen);
++ } else if (rt->rt_gw_family == AF_INET6) {
++ int alen = sizeof(struct in6_addr);
++ struct nlattr *nla;
++ struct rtvia *via;
++
++ nla = nla_reserve(skb, RTA_VIA, alen + 2);
++ if (!nla)
++ goto nla_put_failure;
++
++ via = nla_data(nla);
++ via->rtvia_family = AF_INET6;
++ memcpy(via->rtvia_addr, &rt->rt_gw6, alen);
++ }
+ }
+
+ expires = rt->dst.expires;
+diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
+index 56be7d27f208..00ade9c185ea 100644
+--- a/net/ipv4/tcp_bbr.c
++++ b/net/ipv4/tcp_bbr.c
+@@ -386,7 +386,7 @@ static u32 bbr_bdp(struct sock *sk, u32 bw, int gain)
+ * which allows 2 outstanding 2-packet sequences, to try to keep pipe
+ * full even with ACK-every-other-packet delayed ACKs.
+ */
+-static u32 bbr_quantization_budget(struct sock *sk, u32 cwnd, int gain)
++static u32 bbr_quantization_budget(struct sock *sk, u32 cwnd)
+ {
+ struct bbr *bbr = inet_csk_ca(sk);
+
+@@ -397,7 +397,7 @@ static u32 bbr_quantization_budget(struct sock *sk, u32 cwnd, int gain)
+ cwnd = (cwnd + 1) & ~1U;
+
+ /* Ensure gain cycling gets inflight above BDP even for small BDPs. */
+- if (bbr->mode == BBR_PROBE_BW && gain > BBR_UNIT)
++ if (bbr->mode == BBR_PROBE_BW && bbr->cycle_idx == 0)
+ cwnd += 2;
+
+ return cwnd;
+@@ -409,7 +409,7 @@ static u32 bbr_inflight(struct sock *sk, u32 bw, int gain)
+ u32 inflight;
+
+ inflight = bbr_bdp(sk, bw, gain);
+- inflight = bbr_quantization_budget(sk, inflight, gain);
++ inflight = bbr_quantization_budget(sk, inflight);
+
+ return inflight;
+ }
+@@ -529,7 +529,7 @@ static void bbr_set_cwnd(struct sock *sk, const struct rate_sample *rs,
+ * due to aggregation (of data and/or ACKs) visible in the ACK stream.
+ */
+ target_cwnd += bbr_ack_aggregation_cwnd(sk);
+- target_cwnd = bbr_quantization_budget(sk, target_cwnd, gain);
++ target_cwnd = bbr_quantization_budget(sk, target_cwnd);
+
+ /* If we're below target cwnd, slow start cwnd toward target cwnd. */
+ if (bbr_full_bw_reached(sk)) /* only cut cwnd if we filled the pipe */
+diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
+index c801cd37cc2a..3e8b38c73d8c 100644
+--- a/net/ipv4/tcp_timer.c
++++ b/net/ipv4/tcp_timer.c
+@@ -210,7 +210,7 @@ static int tcp_write_timeout(struct sock *sk)
+ struct inet_connection_sock *icsk = inet_csk(sk);
+ struct tcp_sock *tp = tcp_sk(sk);
+ struct net *net = sock_net(sk);
+- bool expired, do_reset;
++ bool expired = false, do_reset;
+ int retry_until;
+
+ if ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV)) {
+@@ -242,9 +242,10 @@ static int tcp_write_timeout(struct sock *sk)
+ if (tcp_out_of_resources(sk, do_reset))
+ return 1;
+ }
++ }
++ if (!expired)
+ expired = retransmits_timed_out(sk, retry_until,
+ icsk->icsk_user_timeout);
+- }
+ tcp_fastopen_active_detect_blackhole(sk, expired);
+
+ if (BPF_SOCK_OPS_TEST_FLAG(tp, BPF_SOCK_OPS_RTO_CB_FLAG))
+diff --git a/net/ipv4/xfrm4_policy.c b/net/ipv4/xfrm4_policy.c
+index cdef8f9a3b01..35b84b52b702 100644
+--- a/net/ipv4/xfrm4_policy.c
++++ b/net/ipv4/xfrm4_policy.c
+@@ -85,6 +85,7 @@ static int xfrm4_fill_dst(struct xfrm_dst *xdst, struct net_device *dev,
+ xdst->u.rt.rt_flags = rt->rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST |
+ RTCF_LOCAL);
+ xdst->u.rt.rt_type = rt->rt_type;
++ xdst->u.rt.rt_uses_gateway = rt->rt_uses_gateway;
+ xdst->u.rt.rt_gw_family = rt->rt_gw_family;
+ if (rt->rt_gw_family == AF_INET)
+ xdst->u.rt.rt_gw4 = rt->rt_gw4;
+diff --git a/net/ipv6/fib6_rules.c b/net/ipv6/fib6_rules.c
+index d22b6c140f23..f9e8fe3ff0c5 100644
+--- a/net/ipv6/fib6_rules.c
++++ b/net/ipv6/fib6_rules.c
+@@ -287,7 +287,8 @@ static bool fib6_rule_suppress(struct fib_rule *rule, struct fib_lookup_arg *arg
+ return false;
+
+ suppress_route:
+- ip6_rt_put(rt);
++ if (!(arg->flags & FIB_LOOKUP_NOREF))
++ ip6_rt_put(rt);
+ return true;
+ }
+
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index 87f47bc55c5e..6e2af411cd9c 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -318,7 +318,7 @@ struct dst_entry *fib6_rule_lookup(struct net *net, struct flowi6 *fl6,
+ if (rt->dst.error == -EAGAIN) {
+ ip6_rt_put_flags(rt, flags);
+ rt = net->ipv6.ip6_null_entry;
+- if (!(flags | RT6_LOOKUP_F_DST_NOREF))
++ if (!(flags & RT6_LOOKUP_F_DST_NOREF))
+ dst_hold(&rt->dst);
+ }
+
+diff --git a/net/nfc/llcp_sock.c b/net/nfc/llcp_sock.c
+index 9b8742947aff..8dfea26536c9 100644
+--- a/net/nfc/llcp_sock.c
++++ b/net/nfc/llcp_sock.c
+@@ -1004,10 +1004,13 @@ static int llcp_sock_create(struct net *net, struct socket *sock,
+ sock->type != SOCK_RAW)
+ return -ESOCKTNOSUPPORT;
+
+- if (sock->type == SOCK_RAW)
++ if (sock->type == SOCK_RAW) {
++ if (!capable(CAP_NET_RAW))
++ return -EPERM;
+ sock->ops = &llcp_rawsock_ops;
+- else
++ } else {
+ sock->ops = &llcp_sock_ops;
++ }
+
+ sk = nfc_llcp_sock_alloc(sock, sock->type, GFP_ATOMIC, kern);
+ if (sk == NULL)
+diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
+index d01410e52097..f1e7041a5a60 100644
+--- a/net/openvswitch/datapath.c
++++ b/net/openvswitch/datapath.c
+@@ -2263,7 +2263,7 @@ static const struct nla_policy vport_policy[OVS_VPORT_ATTR_MAX + 1] = {
+ [OVS_VPORT_ATTR_STATS] = { .len = sizeof(struct ovs_vport_stats) },
+ [OVS_VPORT_ATTR_PORT_NO] = { .type = NLA_U32 },
+ [OVS_VPORT_ATTR_TYPE] = { .type = NLA_U32 },
+- [OVS_VPORT_ATTR_UPCALL_PID] = { .type = NLA_U32 },
++ [OVS_VPORT_ATTR_UPCALL_PID] = { .type = NLA_UNSPEC },
+ [OVS_VPORT_ATTR_OPTIONS] = { .type = NLA_NESTED },
+ [OVS_VPORT_ATTR_IFINDEX] = { .type = NLA_U32 },
+ [OVS_VPORT_ATTR_NETNSID] = { .type = NLA_S32 },
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index 6c8b0f6d28f9..88f98f27ad88 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -150,6 +150,7 @@ static void __qrtr_node_release(struct kref *kref)
+ list_del(&node->item);
+ mutex_unlock(&qrtr_node_lock);
+
++ cancel_work_sync(&node->work);
+ skb_queue_purge(&node->rx_queue);
+ kfree(node);
+ }
+diff --git a/net/rds/bind.c b/net/rds/bind.c
+index 05464fd7c17a..93e336535d3b 100644
+--- a/net/rds/bind.c
++++ b/net/rds/bind.c
+@@ -244,7 +244,8 @@ int rds_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ */
+ if (rs->rs_transport) {
+ trans = rs->rs_transport;
+- if (trans->laddr_check(sock_net(sock->sk),
++ if (!trans->laddr_check ||
++ trans->laddr_check(sock_net(sock->sk),
+ binding_addr, scope_id) != 0) {
+ ret = -ENOPROTOOPT;
+ goto out;
+@@ -263,6 +264,8 @@ int rds_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+
+ sock_set_flag(sk, SOCK_RCU_FREE);
+ ret = rds_add_bound(rs, binding_addr, &port, scope_id);
++ if (ret)
++ rs->rs_transport = NULL;
+
+ out:
+ release_sock(sk);
+diff --git a/net/sched/act_api.c b/net/sched/act_api.c
+index 339712296164..2558f00f6b3e 100644
+--- a/net/sched/act_api.c
++++ b/net/sched/act_api.c
+@@ -831,6 +831,15 @@ static struct tc_cookie *nla_memdup_cookie(struct nlattr **tb)
+ return c;
+ }
+
++static const struct nla_policy tcf_action_policy[TCA_ACT_MAX + 1] = {
++ [TCA_ACT_KIND] = { .type = NLA_NUL_STRING,
++ .len = IFNAMSIZ - 1 },
++ [TCA_ACT_INDEX] = { .type = NLA_U32 },
++ [TCA_ACT_COOKIE] = { .type = NLA_BINARY,
++ .len = TC_COOKIE_MAX_SIZE },
++ [TCA_ACT_OPTIONS] = { .type = NLA_NESTED },
++};
++
+ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
+ struct nlattr *nla, struct nlattr *est,
+ char *name, int ovr, int bind,
+@@ -846,8 +855,8 @@ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
+ int err;
+
+ if (name == NULL) {
+- err = nla_parse_nested_deprecated(tb, TCA_ACT_MAX, nla, NULL,
+- extack);
++ err = nla_parse_nested_deprecated(tb, TCA_ACT_MAX, nla,
++ tcf_action_policy, extack);
+ if (err < 0)
+ goto err_out;
+ err = -EINVAL;
+@@ -856,18 +865,9 @@ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
+ NL_SET_ERR_MSG(extack, "TC action kind must be specified");
+ goto err_out;
+ }
+- if (nla_strlcpy(act_name, kind, IFNAMSIZ) >= IFNAMSIZ) {
+- NL_SET_ERR_MSG(extack, "TC action name too long");
+- goto err_out;
+- }
+- if (tb[TCA_ACT_COOKIE]) {
+- int cklen = nla_len(tb[TCA_ACT_COOKIE]);
+-
+- if (cklen > TC_COOKIE_MAX_SIZE) {
+- NL_SET_ERR_MSG(extack, "TC cookie size above the maximum");
+- goto err_out;
+- }
++ nla_strlcpy(act_name, kind, IFNAMSIZ);
+
++ if (tb[TCA_ACT_COOKIE]) {
+ cookie = nla_memdup_cookie(tb);
+ if (!cookie) {
+ NL_SET_ERR_MSG(extack, "No memory to generate TC cookie");
+@@ -1098,7 +1098,8 @@ static struct tc_action *tcf_action_get_1(struct net *net, struct nlattr *nla,
+ int index;
+ int err;
+
+- err = nla_parse_nested_deprecated(tb, TCA_ACT_MAX, nla, NULL, extack);
++ err = nla_parse_nested_deprecated(tb, TCA_ACT_MAX, nla,
++ tcf_action_policy, extack);
+ if (err < 0)
+ goto err_out;
+
+@@ -1152,7 +1153,8 @@ static int tca_action_flush(struct net *net, struct nlattr *nla,
+
+ b = skb_tail_pointer(skb);
+
+- err = nla_parse_nested_deprecated(tb, TCA_ACT_MAX, nla, NULL, extack);
++ err = nla_parse_nested_deprecated(tb, TCA_ACT_MAX, nla,
++ tcf_action_policy, extack);
+ if (err < 0)
+ goto err_out;
+
+@@ -1440,7 +1442,7 @@ static struct nlattr *find_dump_kind(struct nlattr **nla)
+
+ if (tb[1] == NULL)
+ return NULL;
+- if (nla_parse_nested_deprecated(tb2, TCA_ACT_MAX, tb[1], NULL, NULL) < 0)
++ if (nla_parse_nested_deprecated(tb2, TCA_ACT_MAX, tb[1], tcf_action_policy, NULL) < 0)
+ return NULL;
+ kind = tb2[TCA_ACT_KIND];
+
+diff --git a/net/sched/act_sample.c b/net/sched/act_sample.c
+index 10229124a992..86344fd2ff1f 100644
+--- a/net/sched/act_sample.c
++++ b/net/sched/act_sample.c
+@@ -146,6 +146,7 @@ static bool tcf_sample_dev_ok_push(struct net_device *dev)
+ case ARPHRD_TUNNEL6:
+ case ARPHRD_SIT:
+ case ARPHRD_IPGRE:
++ case ARPHRD_IP6GRE:
+ case ARPHRD_VOID:
+ case ARPHRD_NONE:
+ return false;
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index efd3cfb80a2a..9aef93300f1c 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -3027,8 +3027,10 @@ out:
+ void tcf_exts_destroy(struct tcf_exts *exts)
+ {
+ #ifdef CONFIG_NET_CLS_ACT
+- tcf_action_destroy(exts->actions, TCA_ACT_UNBIND);
+- kfree(exts->actions);
++ if (exts->actions) {
++ tcf_action_destroy(exts->actions, TCA_ACT_UNBIND);
++ kfree(exts->actions);
++ }
+ exts->nr_actions = 0;
+ #endif
+ }
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 1047825d9f48..81d58b280612 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1390,7 +1390,8 @@ check_loop_fn(struct Qdisc *q, unsigned long cl, struct qdisc_walker *w)
+ }
+
+ const struct nla_policy rtm_tca_policy[TCA_MAX + 1] = {
+- [TCA_KIND] = { .type = NLA_STRING },
++ [TCA_KIND] = { .type = NLA_NUL_STRING,
++ .len = IFNAMSIZ - 1 },
+ [TCA_RATE] = { .type = NLA_BINARY,
+ .len = sizeof(struct tc_estimator) },
+ [TCA_STAB] = { .type = NLA_NESTED },
+diff --git a/net/sched/sch_cbs.c b/net/sched/sch_cbs.c
+index 810645b5c086..4a403d35438f 100644
+--- a/net/sched/sch_cbs.c
++++ b/net/sched/sch_cbs.c
+@@ -392,7 +392,6 @@ static int cbs_init(struct Qdisc *sch, struct nlattr *opt,
+ {
+ struct cbs_sched_data *q = qdisc_priv(sch);
+ struct net_device *dev = qdisc_dev(sch);
+- int err;
+
+ if (!opt) {
+ NL_SET_ERR_MSG(extack, "Missing CBS qdisc options which are mandatory");
+@@ -404,6 +403,10 @@ static int cbs_init(struct Qdisc *sch, struct nlattr *opt,
+ if (!q->qdisc)
+ return -ENOMEM;
+
++ spin_lock(&cbs_list_lock);
++ list_add(&q->cbs_list, &cbs_list);
++ spin_unlock(&cbs_list_lock);
++
+ qdisc_hash_add(q->qdisc, false);
+
+ q->queue = sch->dev_queue - netdev_get_tx_queue(dev, 0);
+@@ -413,17 +416,7 @@ static int cbs_init(struct Qdisc *sch, struct nlattr *opt,
+
+ qdisc_watchdog_init(&q->watchdog, sch);
+
+- err = cbs_change(sch, opt, extack);
+- if (err)
+- return err;
+-
+- if (!q->offload) {
+- spin_lock(&cbs_list_lock);
+- list_add(&q->cbs_list, &cbs_list);
+- spin_unlock(&cbs_list_lock);
+- }
+-
+- return 0;
++ return cbs_change(sch, opt, extack);
+ }
+
+ static void cbs_destroy(struct Qdisc *sch)
+@@ -431,15 +424,18 @@ static void cbs_destroy(struct Qdisc *sch)
+ struct cbs_sched_data *q = qdisc_priv(sch);
+ struct net_device *dev = qdisc_dev(sch);
+
+- spin_lock(&cbs_list_lock);
+- list_del(&q->cbs_list);
+- spin_unlock(&cbs_list_lock);
++ /* Nothing to do if we couldn't create the underlying qdisc */
++ if (!q->qdisc)
++ return;
+
+ qdisc_watchdog_cancel(&q->watchdog);
+ cbs_disable_offload(dev, q);
+
+- if (q->qdisc)
+- qdisc_put(q->qdisc);
++ spin_lock(&cbs_list_lock);
++ list_del(&q->cbs_list);
++ spin_unlock(&cbs_list_lock);
++
++ qdisc_put(q->qdisc);
+ }
+
+ static int cbs_dump(struct Qdisc *sch, struct sk_buff *skb)
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index b17f2ed970e2..f5cb35e550f8 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -777,7 +777,7 @@ static int get_dist_table(struct Qdisc *sch, struct disttable **tbl,
+ struct disttable *d;
+ int i;
+
+- if (n > NETEM_DIST_MAX)
++ if (!n || n > NETEM_DIST_MAX)
+ return -EINVAL;
+
+ d = kvmalloc(sizeof(struct disttable) + n * sizeof(s16), GFP_KERNEL);
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index a07b516e503a..7a75f34ad393 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -1862,6 +1862,7 @@ rpc_xdr_encode(struct rpc_task *task)
+ req->rq_rbuffer,
+ req->rq_rcvsize);
+
++ req->rq_reply_bytes_recvd = 0;
+ req->rq_snd_buf.head[0].iov_len = 0;
+ xdr_init_encode(&xdr, &req->rq_snd_buf,
+ req->rq_snd_buf.head[0].iov_base, req);
+@@ -1881,6 +1882,8 @@ call_encode(struct rpc_task *task)
+ if (!rpc_task_need_encode(task))
+ goto out;
+ dprint_status(task);
++ /* Dequeue task from the receive queue while we're encoding */
++ xprt_request_dequeue_xprt(task);
+ /* Encode here so that rpcsec_gss can use correct sequence number. */
+ rpc_xdr_encode(task);
+ /* Did the encode result in an error condition? */
+@@ -2518,9 +2521,6 @@ call_decode(struct rpc_task *task)
+ return;
+ case -EAGAIN:
+ task->tk_status = 0;
+- xdr_free_bvec(&req->rq_rcv_buf);
+- req->rq_reply_bytes_recvd = 0;
+- req->rq_rcv_buf.len = 0;
+ if (task->tk_client->cl_discrtry)
+ xprt_conditional_disconnect(req->rq_xprt,
+ req->rq_connect_cookie);
+diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
+index 48c93b9e525e..b256806d69cd 100644
+--- a/net/sunrpc/xdr.c
++++ b/net/sunrpc/xdr.c
+@@ -1237,16 +1237,29 @@ xdr_encode_word(struct xdr_buf *buf, unsigned int base, u32 obj)
+ EXPORT_SYMBOL_GPL(xdr_encode_word);
+
+ /* If the netobj starting offset bytes from the start of xdr_buf is contained
+- * entirely in the head or the tail, set object to point to it; otherwise
+- * try to find space for it at the end of the tail, copy it there, and
+- * set obj to point to it. */
++ * entirely in the head, pages, or tail, set object to point to it; otherwise
++ * shift the buffer until it is contained entirely within the pages or tail.
++ */
+ int xdr_buf_read_netobj(struct xdr_buf *buf, struct xdr_netobj *obj, unsigned int offset)
+ {
+ struct xdr_buf subbuf;
++ unsigned int boundary;
+
+ if (xdr_decode_word(buf, offset, &obj->len))
+ return -EFAULT;
+- if (xdr_buf_subsegment(buf, &subbuf, offset + 4, obj->len))
++ offset += 4;
++
++ /* Is the obj partially in the head? */
++ boundary = buf->head[0].iov_len;
++ if (offset < boundary && (offset + obj->len) > boundary)
++ xdr_shift_buf(buf, boundary - offset);
++
++ /* Is the obj partially in the pages? */
++ boundary += buf->page_len;
++ if (offset < boundary && (offset + obj->len) > boundary)
++ xdr_shrink_pagelen(buf, boundary - offset);
++
++ if (xdr_buf_subsegment(buf, &subbuf, offset, obj->len))
+ return -EFAULT;
+
+ /* Is the obj contained entirely in the head? */
+@@ -1258,11 +1271,7 @@ int xdr_buf_read_netobj(struct xdr_buf *buf, struct xdr_netobj *obj, unsigned in
+ if (subbuf.tail[0].iov_len == obj->len)
+ return 0;
+
+- /* use end of tail as storage for obj:
+- * (We don't copy to the beginning because then we'd have
+- * to worry about doing a potentially overlapping copy.
+- * This assumes the object is at most half the length of the
+- * tail.) */
++ /* Find a contiguous area in @buf to hold all of @obj */
+ if (obj->len > buf->buflen - buf->len)
+ return -ENOMEM;
+ if (buf->tail[0].iov_len != 0)
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index 2e71f5455c6c..20631d64312c 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -1323,6 +1323,36 @@ xprt_request_dequeue_transmit(struct rpc_task *task)
+ spin_unlock(&xprt->queue_lock);
+ }
+
++/**
++ * xprt_request_dequeue_xprt - remove a task from the transmit+receive queue
++ * @task: pointer to rpc_task
++ *
++ * Remove a task from the transmit and receive queues, and ensure that
++ * it is not pinned by the receive work item.
++ */
++void
++xprt_request_dequeue_xprt(struct rpc_task *task)
++{
++ struct rpc_rqst *req = task->tk_rqstp;
++ struct rpc_xprt *xprt = req->rq_xprt;
++
++ if (test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate) ||
++ test_bit(RPC_TASK_NEED_RECV, &task->tk_runstate) ||
++ xprt_is_pinned_rqst(req)) {
++ spin_lock(&xprt->queue_lock);
++ xprt_request_dequeue_transmit_locked(task);
++ xprt_request_dequeue_receive_locked(task);
++ while (xprt_is_pinned_rqst(req)) {
++ set_bit(RPC_TASK_MSG_PIN_WAIT, &task->tk_runstate);
++ spin_unlock(&xprt->queue_lock);
++ xprt_wait_on_pinned_rqst(req);
++ spin_lock(&xprt->queue_lock);
++ clear_bit(RPC_TASK_MSG_PIN_WAIT, &task->tk_runstate);
++ }
++ spin_unlock(&xprt->queue_lock);
++ }
++}
++
+ /**
+ * xprt_request_prepare - prepare an encoded request for transport
+ * @req: pointer to rpc_rqst
+@@ -1747,28 +1777,6 @@ void xprt_retry_reserve(struct rpc_task *task)
+ xprt_do_reserve(xprt, task);
+ }
+
+-static void
+-xprt_request_dequeue_all(struct rpc_task *task, struct rpc_rqst *req)
+-{
+- struct rpc_xprt *xprt = req->rq_xprt;
+-
+- if (test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate) ||
+- test_bit(RPC_TASK_NEED_RECV, &task->tk_runstate) ||
+- xprt_is_pinned_rqst(req)) {
+- spin_lock(&xprt->queue_lock);
+- xprt_request_dequeue_transmit_locked(task);
+- xprt_request_dequeue_receive_locked(task);
+- while (xprt_is_pinned_rqst(req)) {
+- set_bit(RPC_TASK_MSG_PIN_WAIT, &task->tk_runstate);
+- spin_unlock(&xprt->queue_lock);
+- xprt_wait_on_pinned_rqst(req);
+- spin_lock(&xprt->queue_lock);
+- clear_bit(RPC_TASK_MSG_PIN_WAIT, &task->tk_runstate);
+- }
+- spin_unlock(&xprt->queue_lock);
+- }
+-}
+-
+ /**
+ * xprt_release - release an RPC request slot
+ * @task: task which is finished with the slot
+@@ -1788,7 +1796,7 @@ void xprt_release(struct rpc_task *task)
+ }
+
+ xprt = req->rq_xprt;
+- xprt_request_dequeue_all(task, req);
++ xprt_request_dequeue_xprt(task);
+ spin_lock(&xprt->transport_lock);
+ xprt->ops->release_xprt(xprt, task);
+ if (xprt->ops->release_request)
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index e74837824cea..f68818dbac1a 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -960,6 +960,7 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev,
+ }
+
+ cfg80211_process_rdev_events(rdev);
++ cfg80211_mlme_purge_registrations(dev->ieee80211_ptr);
+ }
+
+ err = rdev_change_virtual_intf(rdev, dev, ntype, params);
+diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
+index 6410bd22fe38..03757cc60e06 100644
+--- a/scripts/Makefile.kasan
++++ b/scripts/Makefile.kasan
+@@ -1,4 +1,9 @@
+ # SPDX-License-Identifier: GPL-2.0
++ifdef CONFIG_KASAN
++CFLAGS_KASAN_NOSANITIZE := -fno-builtin
++KASAN_SHADOW_OFFSET ?= $(CONFIG_KASAN_SHADOW_OFFSET)
++endif
++
+ ifdef CONFIG_KASAN_GENERIC
+
+ ifdef CONFIG_KASAN_INLINE
+@@ -7,8 +12,6 @@ else
+ call_threshold := 0
+ endif
+
+-KASAN_SHADOW_OFFSET ?= $(CONFIG_KASAN_SHADOW_OFFSET)
+-
+ CFLAGS_KASAN_MINIMAL := -fsanitize=kernel-address
+
+ cc-param = $(call cc-option, -mllvm -$(1), $(call cc-option, --param $(1)))
+@@ -45,7 +48,3 @@ CFLAGS_KASAN := -fsanitize=kernel-hwaddress \
+ $(instrumentation_flags)
+
+ endif # CONFIG_KASAN_SW_TAGS
+-
+-ifdef CONFIG_KASAN
+-CFLAGS_KASAN_NOSANITIZE := -fno-builtin
+-endif
+diff --git a/scripts/gcc-plugins/randomize_layout_plugin.c b/scripts/gcc-plugins/randomize_layout_plugin.c
+index 6d5bbd31db7f..bd29e4e7a524 100644
+--- a/scripts/gcc-plugins/randomize_layout_plugin.c
++++ b/scripts/gcc-plugins/randomize_layout_plugin.c
+@@ -443,13 +443,13 @@ static int is_pure_ops_struct(const_tree node)
+ if (node == fieldtype)
+ continue;
+
+- if (!is_fptr(fieldtype))
+- return 0;
+-
+- if (code != RECORD_TYPE && code != UNION_TYPE)
++ if (code == RECORD_TYPE || code == UNION_TYPE) {
++ if (!is_pure_ops_struct(fieldtype))
++ return 0;
+ continue;
++ }
+
+- if (!is_pure_ops_struct(fieldtype))
++ if (!is_fptr(fieldtype))
+ return 0;
+ }
+
+diff --git a/security/keys/trusted.c b/security/keys/trusted.c
+index ade699131065..1fbd77816610 100644
+--- a/security/keys/trusted.c
++++ b/security/keys/trusted.c
+@@ -1228,11 +1228,16 @@ hashalg_fail:
+
+ static int __init init_digests(void)
+ {
++ int i;
++
+ digests = kcalloc(chip->nr_allocated_banks, sizeof(*digests),
+ GFP_KERNEL);
+ if (!digests)
+ return -ENOMEM;
+
++ for (i = 0; i < chip->nr_allocated_banks; i++)
++ digests[i].alg_id = chip->allocated_banks[i].alg_id;
++
+ return 0;
+ }
+
+diff --git a/sound/firewire/motu/motu.c b/sound/firewire/motu/motu.c
+index 03cda2166ea3..72908b4de77c 100644
+--- a/sound/firewire/motu/motu.c
++++ b/sound/firewire/motu/motu.c
+@@ -247,6 +247,17 @@ static const struct snd_motu_spec motu_audio_express = {
+ .analog_out_ports = 4,
+ };
+
++static const struct snd_motu_spec motu_4pre = {
++ .name = "4pre",
++ .protocol = &snd_motu_protocol_v3,
++ .flags = SND_MOTU_SPEC_SUPPORT_CLOCK_X2 |
++ SND_MOTU_SPEC_TX_MICINST_CHUNK |
++ SND_MOTU_SPEC_TX_RETURN_CHUNK |
++ SND_MOTU_SPEC_RX_SEPARETED_MAIN,
++ .analog_in_ports = 2,
++ .analog_out_ports = 2,
++};
++
+ #define SND_MOTU_DEV_ENTRY(model, data) \
+ { \
+ .match_flags = IEEE1394_MATCH_VENDOR_ID | \
+@@ -265,6 +276,7 @@ static const struct ieee1394_device_id motu_id_table[] = {
+ SND_MOTU_DEV_ENTRY(0x000015, &motu_828mk3), /* FireWire only. */
+ SND_MOTU_DEV_ENTRY(0x000035, &motu_828mk3), /* Hybrid. */
+ SND_MOTU_DEV_ENTRY(0x000033, &motu_audio_express),
++ SND_MOTU_DEV_ENTRY(0x000045, &motu_4pre),
+ { }
+ };
+ MODULE_DEVICE_TABLE(ieee1394, motu_id_table);
+diff --git a/sound/firewire/tascam/tascam-pcm.c b/sound/firewire/tascam/tascam-pcm.c
+index b5ced5415e40..2377732caa52 100644
+--- a/sound/firewire/tascam/tascam-pcm.c
++++ b/sound/firewire/tascam/tascam-pcm.c
+@@ -56,6 +56,9 @@ static int pcm_open(struct snd_pcm_substream *substream)
+ goto err_locked;
+
+ err = snd_tscm_stream_get_clock(tscm, &clock);
++ if (err < 0)
++ goto err_locked;
++
+ if (clock != SND_TSCM_CLOCK_INTERNAL ||
+ amdtp_stream_pcm_running(&tscm->rx_stream) ||
+ amdtp_stream_pcm_running(&tscm->tx_stream)) {
+diff --git a/sound/firewire/tascam/tascam-stream.c b/sound/firewire/tascam/tascam-stream.c
+index e852e46ebe6f..ccfa92fbc145 100644
+--- a/sound/firewire/tascam/tascam-stream.c
++++ b/sound/firewire/tascam/tascam-stream.c
+@@ -8,20 +8,37 @@
+ #include <linux/delay.h>
+ #include "tascam.h"
+
++#define CLOCK_STATUS_MASK 0xffff0000
++#define CLOCK_CONFIG_MASK 0x0000ffff
++
+ #define CALLBACK_TIMEOUT 500
+
+ static int get_clock(struct snd_tscm *tscm, u32 *data)
+ {
++ int trial = 0;
+ __be32 reg;
+ int err;
+
+- err = snd_fw_transaction(tscm->unit, TCODE_READ_QUADLET_REQUEST,
+- TSCM_ADDR_BASE + TSCM_OFFSET_CLOCK_STATUS,
+- &reg, sizeof(reg), 0);
+- if (err >= 0)
++ while (trial++ < 5) {
++ err = snd_fw_transaction(tscm->unit, TCODE_READ_QUADLET_REQUEST,
++ TSCM_ADDR_BASE + TSCM_OFFSET_CLOCK_STATUS,
++ &reg, sizeof(reg), 0);
++ if (err < 0)
++ return err;
++
+ *data = be32_to_cpu(reg);
++ if (*data & CLOCK_STATUS_MASK)
++ break;
+
+- return err;
++ // In intermediate state after changing clock status.
++ msleep(50);
++ }
++
++ // Still in the intermediate state.
++ if (trial >= 5)
++ return -EAGAIN;
++
++ return 0;
+ }
+
+ static int set_clock(struct snd_tscm *tscm, unsigned int rate,
+@@ -34,7 +51,7 @@ static int set_clock(struct snd_tscm *tscm, unsigned int rate,
+ err = get_clock(tscm, &data);
+ if (err < 0)
+ return err;
+- data &= 0x0000ffff;
++ data &= CLOCK_CONFIG_MASK;
+
+ if (rate > 0) {
+ data &= 0x000000ff;
+@@ -79,17 +96,14 @@ static int set_clock(struct snd_tscm *tscm, unsigned int rate,
+
+ int snd_tscm_stream_get_rate(struct snd_tscm *tscm, unsigned int *rate)
+ {
+- u32 data = 0x0;
+- unsigned int trials = 0;
++ u32 data;
+ int err;
+
+- while (data == 0x0 || trials++ < 5) {
+- err = get_clock(tscm, &data);
+- if (err < 0)
+- return err;
++ err = get_clock(tscm, &data);
++ if (err < 0)
++ return err;
+
+- data = (data & 0xff000000) >> 24;
+- }
++ data = (data & 0xff000000) >> 24;
+
+ /* Check base rate. */
+ if ((data & 0x0f) == 0x01)
+diff --git a/sound/hda/hdac_controller.c b/sound/hda/hdac_controller.c
+index 3b0110545070..196bbc85699e 100644
+--- a/sound/hda/hdac_controller.c
++++ b/sound/hda/hdac_controller.c
+@@ -447,6 +447,8 @@ static void azx_int_disable(struct hdac_bus *bus)
+ list_for_each_entry(azx_dev, &bus->stream_list, list)
+ snd_hdac_stream_updateb(azx_dev, SD_CTL, SD_INT_MASK, 0);
+
++ synchronize_irq(bus->irq);
++
+ /* disable SIE for all streams */
+ snd_hdac_chip_writeb(bus, INTCTL, 0);
+
+diff --git a/sound/i2c/other/ak4xxx-adda.c b/sound/i2c/other/ak4xxx-adda.c
+index 5f59316f982a..7d15093844b9 100644
+--- a/sound/i2c/other/ak4xxx-adda.c
++++ b/sound/i2c/other/ak4xxx-adda.c
+@@ -775,11 +775,12 @@ static int build_adc_controls(struct snd_akm4xxx *ak)
+ return err;
+
+ memset(&knew, 0, sizeof(knew));
+- knew.name = ak->adc_info[mixer_ch].selector_name;
+- if (!knew.name) {
++ if (!ak->adc_info ||
++ !ak->adc_info[mixer_ch].selector_name) {
+ knew.name = "Capture Channel";
+ knew.index = mixer_ch + ak->idx_offset * 2;
+- }
++ } else
++ knew.name = ak->adc_info[mixer_ch].selector_name;
+
+ knew.iface = SNDRV_CTL_ELEM_IFACE_MIXER;
+ knew.info = ak4xxx_capture_source_info;
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 51f10ed9bc43..a2fb19129219 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -846,7 +846,13 @@ static void snd_hda_codec_dev_release(struct device *dev)
+ snd_hda_sysfs_clear(codec);
+ kfree(codec->modelname);
+ kfree(codec->wcaps);
+- kfree(codec);
++
++ /*
++ * In the case of ASoC HD-audio, hda_codec is device managed.
++ * It will be freed when the ASoC device is removed.
++ */
++ if (codec->core.type == HDA_DEV_LEGACY)
++ kfree(codec);
+ }
+
+ #define DEV_NAME_LEN 31
+diff --git a/sound/pci/hda/hda_controller.c b/sound/pci/hda/hda_controller.c
+index 48d863736b3c..a5a2e9fe7785 100644
+--- a/sound/pci/hda/hda_controller.c
++++ b/sound/pci/hda/hda_controller.c
+@@ -869,10 +869,13 @@ static int azx_rirb_get_response(struct hdac_bus *bus, unsigned int addr,
+ */
+ if (hbus->allow_bus_reset && !hbus->response_reset && !hbus->in_reset) {
+ hbus->response_reset = 1;
++ dev_err(chip->card->dev,
++ "No response from codec, resetting bus: last cmd=0x%08x\n",
++ bus->last_cmd[addr]);
+ return -EAGAIN; /* give a chance to retry */
+ }
+
+- dev_err(chip->card->dev,
++ dev_WARN(chip->card->dev,
+ "azx_get_response timeout, switching to single_cmd mode: last cmd=0x%08x\n",
+ bus->last_cmd[addr]);
+ chip->single_cmd = 1;
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index b0de3e3b33e5..783f9a9c40ec 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -1349,9 +1349,9 @@ static int azx_free(struct azx *chip)
+ }
+
+ if (bus->chip_init) {
++ azx_stop_chip(chip);
+ azx_clear_irq_pending(chip);
+ azx_stop_all_streams(chip);
+- azx_stop_chip(chip);
+ }
+
+ if (bus->irq >= 0)
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index bea7b0961080..36240def9bf5 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1421,7 +1421,7 @@ static void hdmi_pcm_reset_pin(struct hdmi_spec *spec,
+ /* update per_pin ELD from the given new ELD;
+ * setup info frame and notification accordingly
+ */
+-static void update_eld(struct hda_codec *codec,
++static bool update_eld(struct hda_codec *codec,
+ struct hdmi_spec_per_pin *per_pin,
+ struct hdmi_eld *eld)
+ {
+@@ -1452,18 +1452,22 @@ static void update_eld(struct hda_codec *codec,
+ snd_hdmi_show_eld(codec, &eld->info);
+
+ eld_changed = (pin_eld->eld_valid != eld->eld_valid);
+- if (eld->eld_valid && pin_eld->eld_valid)
++ eld_changed |= (pin_eld->monitor_present != eld->monitor_present);
++ if (!eld_changed && eld->eld_valid && pin_eld->eld_valid)
+ if (pin_eld->eld_size != eld->eld_size ||
+ memcmp(pin_eld->eld_buffer, eld->eld_buffer,
+ eld->eld_size) != 0)
+ eld_changed = true;
+
+- pin_eld->monitor_present = eld->monitor_present;
+- pin_eld->eld_valid = eld->eld_valid;
+- pin_eld->eld_size = eld->eld_size;
+- if (eld->eld_valid)
+- memcpy(pin_eld->eld_buffer, eld->eld_buffer, eld->eld_size);
+- pin_eld->info = eld->info;
++ if (eld_changed) {
++ pin_eld->monitor_present = eld->monitor_present;
++ pin_eld->eld_valid = eld->eld_valid;
++ pin_eld->eld_size = eld->eld_size;
++ if (eld->eld_valid)
++ memcpy(pin_eld->eld_buffer, eld->eld_buffer,
++ eld->eld_size);
++ pin_eld->info = eld->info;
++ }
+
+ /*
+ * Re-setup pin and infoframe. This is needed e.g. when
+@@ -1481,6 +1485,7 @@ static void update_eld(struct hda_codec *codec,
+ SNDRV_CTL_EVENT_MASK_VALUE |
+ SNDRV_CTL_EVENT_MASK_INFO,
+ &get_hdmi_pcm(spec, pcm_idx)->eld_ctl->id);
++ return eld_changed;
+ }
+
+ /* update ELD and jack state via HD-audio verbs */
+@@ -1582,6 +1587,7 @@ static void sync_eld_via_acomp(struct hda_codec *codec,
+ struct hdmi_spec *spec = codec->spec;
+ struct hdmi_eld *eld = &spec->temp_eld;
+ struct snd_jack *jack = NULL;
++ bool changed;
+ int size;
+
+ mutex_lock(&per_pin->lock);
+@@ -1608,15 +1614,13 @@ static void sync_eld_via_acomp(struct hda_codec *codec,
+ * disconnected event. Jack must be fetched before update_eld()
+ */
+ jack = pin_idx_to_jack(codec, per_pin);
+- update_eld(codec, per_pin, eld);
++ changed = update_eld(codec, per_pin, eld);
+ if (jack == NULL)
+ jack = pin_idx_to_jack(codec, per_pin);
+- if (jack == NULL)
+- goto unlock;
+- snd_jack_report(jack,
+- (eld->monitor_present && eld->eld_valid) ?
++ if (changed && jack)
++ snd_jack_report(jack,
++ (eld->monitor_present && eld->eld_valid) ?
+ SND_JACK_AVOUT : 0);
+- unlock:
+ mutex_unlock(&per_pin->lock);
+ }
+
+@@ -2612,6 +2616,8 @@ static void i915_pin_cvt_fixup(struct hda_codec *codec,
+ /* precondition and allocation for Intel codecs */
+ static int alloc_intel_hdmi(struct hda_codec *codec)
+ {
++ int err;
++
+ /* requires i915 binding */
+ if (!codec->bus->core.audio_component) {
+ codec_info(codec, "No i915 binding for Intel HDMI/DP codec\n");
+@@ -2620,7 +2626,12 @@ static int alloc_intel_hdmi(struct hda_codec *codec)
+ return -ENODEV;
+ }
+
+- return alloc_generic_hdmi(codec);
++ err = alloc_generic_hdmi(codec);
++ if (err < 0)
++ return err;
++ /* no need to handle unsol events */
++ codec->patch_ops.unsol_event = NULL;
++ return 0;
+ }
+
+ /* parse and post-process for Intel codecs */
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index c1ddfd2fac52..36aee8ad2054 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -1058,6 +1058,9 @@ static const struct snd_pci_quirk beep_white_list[] = {
+ SND_PCI_QUIRK(0x1043, 0x834a, "EeePC", 1),
+ SND_PCI_QUIRK(0x1458, 0xa002, "GA-MA790X", 1),
+ SND_PCI_QUIRK(0x8086, 0xd613, "Intel", 1),
++ /* blacklist -- no beep available */
++ SND_PCI_QUIRK(0x17aa, 0x309e, "Lenovo ThinkCentre M73", 0),
++ SND_PCI_QUIRK(0x17aa, 0x30a3, "Lenovo ThinkCentre M93", 0),
+ {}
+ };
+
+@@ -3755,6 +3758,72 @@ static void alc269_x101_hp_automute_hook(struct hda_codec *codec,
+ vref);
+ }
+
++/*
++ * Magic sequence to make Huawei Matebook X right speaker working (bko#197801)
++ */
++struct hda_alc298_mbxinit {
++ unsigned char value_0x23;
++ unsigned char value_0x25;
++};
++
++static void alc298_huawei_mbx_stereo_seq(struct hda_codec *codec,
++ const struct hda_alc298_mbxinit *initval,
++ bool first)
++{
++ snd_hda_codec_write(codec, 0x06, 0, AC_VERB_SET_DIGI_CONVERT_3, 0x0);
++ alc_write_coef_idx(codec, 0x26, 0xb000);
++
++ if (first)
++ snd_hda_codec_write(codec, 0x21, 0, AC_VERB_GET_PIN_SENSE, 0x0);
++
++ snd_hda_codec_write(codec, 0x6, 0, AC_VERB_SET_DIGI_CONVERT_3, 0x80);
++ alc_write_coef_idx(codec, 0x26, 0xf000);
++ alc_write_coef_idx(codec, 0x23, initval->value_0x23);
++
++ if (initval->value_0x23 != 0x1e)
++ alc_write_coef_idx(codec, 0x25, initval->value_0x25);
++
++ snd_hda_codec_write(codec, 0x20, 0, AC_VERB_SET_COEF_INDEX, 0x26);
++ snd_hda_codec_write(codec, 0x20, 0, AC_VERB_SET_PROC_COEF, 0xb010);
++}
++
++static void alc298_fixup_huawei_mbx_stereo(struct hda_codec *codec,
++ const struct hda_fixup *fix,
++ int action)
++{
++ /* Initialization magic */
++ static const struct hda_alc298_mbxinit dac_init[] = {
++ {0x0c, 0x00}, {0x0d, 0x00}, {0x0e, 0x00}, {0x0f, 0x00},
++ {0x10, 0x00}, {0x1a, 0x40}, {0x1b, 0x82}, {0x1c, 0x00},
++ {0x1d, 0x00}, {0x1e, 0x00}, {0x1f, 0x00},
++ {0x20, 0xc2}, {0x21, 0xc8}, {0x22, 0x26}, {0x23, 0x24},
++ {0x27, 0xff}, {0x28, 0xff}, {0x29, 0xff}, {0x2a, 0x8f},
++ {0x2b, 0x02}, {0x2c, 0x48}, {0x2d, 0x34}, {0x2e, 0x00},
++ {0x2f, 0x00},
++ {0x30, 0x00}, {0x31, 0x00}, {0x32, 0x00}, {0x33, 0x00},
++ {0x34, 0x00}, {0x35, 0x01}, {0x36, 0x93}, {0x37, 0x0c},
++ {0x38, 0x00}, {0x39, 0x00}, {0x3a, 0xf8}, {0x38, 0x80},
++ {}
++ };
++ const struct hda_alc298_mbxinit *seq;
++
++ if (action != HDA_FIXUP_ACT_INIT)
++ return;
++
++ /* Start */
++ snd_hda_codec_write(codec, 0x06, 0, AC_VERB_SET_DIGI_CONVERT_3, 0x00);
++ snd_hda_codec_write(codec, 0x06, 0, AC_VERB_SET_DIGI_CONVERT_3, 0x80);
++ alc_write_coef_idx(codec, 0x26, 0xf000);
++ alc_write_coef_idx(codec, 0x22, 0x31);
++ alc_write_coef_idx(codec, 0x23, 0x0b);
++ alc_write_coef_idx(codec, 0x25, 0x00);
++ snd_hda_codec_write(codec, 0x20, 0, AC_VERB_SET_COEF_INDEX, 0x26);
++ snd_hda_codec_write(codec, 0x20, 0, AC_VERB_SET_PROC_COEF, 0xb010);
++
++ for (seq = dac_init; seq->value_0x23; seq++)
++ alc298_huawei_mbx_stereo_seq(codec, seq, seq == dac_init);
++}
++
+ static void alc269_fixup_x101_headset_mic(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+ {
+@@ -5780,6 +5849,7 @@ enum {
+ ALC255_FIXUP_DUMMY_LINEOUT_VERB,
+ ALC255_FIXUP_DELL_HEADSET_MIC,
+ ALC256_FIXUP_HUAWEI_MACH_WX9_PINS,
++ ALC298_FIXUP_HUAWEI_MBX_STEREO,
+ ALC295_FIXUP_HP_X360,
+ ALC221_FIXUP_HP_HEADSET_MIC,
+ ALC285_FIXUP_LENOVO_HEADPHONE_NOISE,
+@@ -5800,6 +5870,7 @@ enum {
+ ALC256_FIXUP_ASUS_MIC_NO_PRESENCE,
+ ALC299_FIXUP_PREDATOR_SPK,
+ ALC294_FIXUP_ASUS_INTSPK_HEADSET_MIC,
++ ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE,
+ };
+
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -6089,6 +6160,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC255_FIXUP_MIC_MUTE_LED
+ },
++ [ALC298_FIXUP_HUAWEI_MBX_STEREO] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc298_fixup_huawei_mbx_stereo,
++ .chained = true,
++ .chain_id = ALC255_FIXUP_MIC_MUTE_LED
++ },
+ [ALC269_FIXUP_ASUS_X101_FUNC] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc269_fixup_x101_headset_mic,
+@@ -6850,6 +6927,16 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
+ },
++ [ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x19, 0x04a11040 },
++ { 0x21, 0x04211020 },
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC256_FIXUP_ASUS_HEADSET_MODE
++ },
+ };
+
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -7113,6 +7200,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
+ SND_PCI_QUIRK(0x19e5, 0x3204, "Huawei MACH-WX9", ALC256_FIXUP_HUAWEI_MACH_WX9_PINS),
+ SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */
++ SND_PCI_QUIRK(0x10ec, 0x118c, "Medion EE4254 MD62100", ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE),
+
+ #if 0
+ /* Below is a quirk table taken from the old code.
+@@ -7280,6 +7368,8 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ {.id = ALC225_FIXUP_HEADSET_JACK, .name = "alc-headset-jack"},
+ {.id = ALC295_FIXUP_CHROME_BOOK, .name = "alc-chrome-book"},
+ {.id = ALC299_FIXUP_PREDATOR_SPK, .name = "predator-spk"},
++ {.id = ALC298_FIXUP_HUAWEI_MBX_STEREO, .name = "huawei-mbx-stereo"},
++ {.id = ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE, .name = "alc256-medion-headset"},
+ {}
+ };
+ #define ALC225_STANDARD_PINS \
+diff --git a/sound/soc/atmel/mchp-i2s-mcc.c b/sound/soc/atmel/mchp-i2s-mcc.c
+index 86495883ca3f..ab7d5f98e759 100644
+--- a/sound/soc/atmel/mchp-i2s-mcc.c
++++ b/sound/soc/atmel/mchp-i2s-mcc.c
+@@ -670,8 +670,13 @@ static int mchp_i2s_mcc_hw_params(struct snd_pcm_substream *substream,
+ }
+
+ ret = regmap_write(dev->regmap, MCHP_I2SMCC_MRA, mra);
+- if (ret < 0)
++ if (ret < 0) {
++ if (dev->gclk_use) {
++ clk_unprepare(dev->gclk);
++ dev->gclk_use = 0;
++ }
+ return ret;
++ }
+ return regmap_write(dev->regmap, MCHP_I2SMCC_MRB, mrb);
+ }
+
+@@ -686,31 +691,37 @@ static int mchp_i2s_mcc_hw_free(struct snd_pcm_substream *substream,
+ err = wait_event_interruptible_timeout(dev->wq_txrdy,
+ dev->tx_rdy,
+ msecs_to_jiffies(500));
++ if (err == 0) {
++ dev_warn_once(dev->dev,
++ "Timeout waiting for Tx ready\n");
++ regmap_write(dev->regmap, MCHP_I2SMCC_IDRA,
++ MCHP_I2SMCC_INT_TXRDY_MASK(dev->channels));
++ dev->tx_rdy = 1;
++ }
+ } else {
+ err = wait_event_interruptible_timeout(dev->wq_rxrdy,
+ dev->rx_rdy,
+ msecs_to_jiffies(500));
+- }
+-
+- if (err == 0) {
+- u32 idra;
+-
+- dev_warn_once(dev->dev, "Timeout waiting for %s\n",
+- is_playback ? "Tx ready" : "Rx ready");
+- if (is_playback)
+- idra = MCHP_I2SMCC_INT_TXRDY_MASK(dev->channels);
+- else
+- idra = MCHP_I2SMCC_INT_RXRDY_MASK(dev->channels);
+- regmap_write(dev->regmap, MCHP_I2SMCC_IDRA, idra);
++ if (err == 0) {
++ dev_warn_once(dev->dev,
++ "Timeout waiting for Rx ready\n");
++ regmap_write(dev->regmap, MCHP_I2SMCC_IDRA,
++ MCHP_I2SMCC_INT_RXRDY_MASK(dev->channels));
++ dev->rx_rdy = 1;
++ }
+ }
+
+ if (!mchp_i2s_mcc_is_running(dev)) {
+ regmap_write(dev->regmap, MCHP_I2SMCC_CR, MCHP_I2SMCC_CR_CKDIS);
+
+ if (dev->gclk_running) {
+- clk_disable_unprepare(dev->gclk);
++ clk_disable(dev->gclk);
+ dev->gclk_running = 0;
+ }
++ if (dev->gclk_use) {
++ clk_unprepare(dev->gclk);
++ dev->gclk_use = 0;
++ }
+ }
+
+ return 0;
+@@ -809,6 +820,8 @@ static int mchp_i2s_mcc_dai_probe(struct snd_soc_dai *dai)
+
+ init_waitqueue_head(&dev->wq_txrdy);
+ init_waitqueue_head(&dev->wq_rxrdy);
++ dev->tx_rdy = 1;
++ dev->rx_rdy = 1;
+
+ snd_soc_dai_init_dma_data(dai, &dev->playback, &dev->capture);
+
+diff --git a/sound/soc/codecs/es8316.c b/sound/soc/codecs/es8316.c
+index 6db002cc2058..96d04896193f 100644
+--- a/sound/soc/codecs/es8316.c
++++ b/sound/soc/codecs/es8316.c
+@@ -51,7 +51,10 @@ static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(adc_vol_tlv, -9600, 50, 1);
+ static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(alc_max_gain_tlv, -650, 150, 0);
+ static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(alc_min_gain_tlv, -1200, 150, 0);
+ static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(alc_target_tlv, -1650, 150, 0);
+-static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(hpmixer_gain_tlv, -1200, 150, 0);
++static const SNDRV_CTL_TLVD_DECLARE_DB_RANGE(hpmixer_gain_tlv,
++ 0, 4, TLV_DB_SCALE_ITEM(-1200, 150, 0),
++ 8, 11, TLV_DB_SCALE_ITEM(-450, 150, 0),
++);
+
+ static const SNDRV_CTL_TLVD_DECLARE_DB_RANGE(adc_pga_gain_tlv,
+ 0, 0, TLV_DB_SCALE_ITEM(-350, 0, 0),
+@@ -89,7 +92,7 @@ static const struct snd_kcontrol_new es8316_snd_controls[] = {
+ SOC_DOUBLE_TLV("Headphone Playback Volume", ES8316_CPHP_ICAL_VOL,
+ 4, 0, 3, 1, hpout_vol_tlv),
+ SOC_DOUBLE_TLV("Headphone Mixer Volume", ES8316_HPMIX_VOL,
+- 0, 4, 7, 0, hpmixer_gain_tlv),
++ 0, 4, 11, 0, hpmixer_gain_tlv),
+
+ SOC_ENUM("Playback Polarity", dacpol),
+ SOC_DOUBLE_R_TLV("DAC Playback Volume", ES8316_DAC_VOLL,
+diff --git a/sound/soc/codecs/hdac_hda.c b/sound/soc/codecs/hdac_hda.c
+index 7d4940256914..91242b6f8ea7 100644
+--- a/sound/soc/codecs/hdac_hda.c
++++ b/sound/soc/codecs/hdac_hda.c
+@@ -495,6 +495,10 @@ static int hdac_hda_dev_probe(struct hdac_device *hdev)
+
+ static int hdac_hda_dev_remove(struct hdac_device *hdev)
+ {
++ struct hdac_hda_priv *hda_pvt;
++
++ hda_pvt = dev_get_drvdata(&hdev->dev);
++ cancel_delayed_work_sync(&hda_pvt->codec.jackpoll_work);
+ return 0;
+ }
+
+diff --git a/sound/soc/codecs/sgtl5000.c b/sound/soc/codecs/sgtl5000.c
+index a6a4748c97f9..7cbaedffa1ef 100644
+--- a/sound/soc/codecs/sgtl5000.c
++++ b/sound/soc/codecs/sgtl5000.c
+@@ -1173,12 +1173,17 @@ static int sgtl5000_set_power_regs(struct snd_soc_component *component)
+ SGTL5000_INT_OSC_EN);
+ /* Enable VDDC charge pump */
+ ana_pwr |= SGTL5000_VDDC_CHRGPMP_POWERUP;
+- } else if (vddio >= 3100 && vdda >= 3100) {
++ } else {
+ ana_pwr &= ~SGTL5000_VDDC_CHRGPMP_POWERUP;
+- /* VDDC use VDDIO rail */
+- lreg_ctrl |= SGTL5000_VDDC_ASSN_OVRD;
+- lreg_ctrl |= SGTL5000_VDDC_MAN_ASSN_VDDIO <<
+- SGTL5000_VDDC_MAN_ASSN_SHIFT;
++ /*
++ * if vddio == vdda the source of charge pump should be
++ * assigned manually to VDDIO
++ */
++ if (vddio == vdda) {
++ lreg_ctrl |= SGTL5000_VDDC_ASSN_OVRD;
++ lreg_ctrl |= SGTL5000_VDDC_MAN_ASSN_VDDIO <<
++ SGTL5000_VDDC_MAN_ASSN_SHIFT;
++ }
+ }
+
+ snd_soc_component_write(component, SGTL5000_CHIP_LINREG_CTRL, lreg_ctrl);
+@@ -1288,6 +1293,7 @@ static int sgtl5000_probe(struct snd_soc_component *component)
+ int ret;
+ u16 reg;
+ struct sgtl5000_priv *sgtl5000 = snd_soc_component_get_drvdata(component);
++ unsigned int zcd_mask = SGTL5000_HP_ZCD_EN | SGTL5000_ADC_ZCD_EN;
+
+ /* power up sgtl5000 */
+ ret = sgtl5000_set_power_regs(component);
+@@ -1315,9 +1321,8 @@ static int sgtl5000_probe(struct snd_soc_component *component)
+ 0x1f);
+ snd_soc_component_write(component, SGTL5000_CHIP_PAD_STRENGTH, reg);
+
+- snd_soc_component_write(component, SGTL5000_CHIP_ANA_CTRL,
+- SGTL5000_HP_ZCD_EN |
+- SGTL5000_ADC_ZCD_EN);
++ snd_soc_component_update_bits(component, SGTL5000_CHIP_ANA_CTRL,
++ zcd_mask, zcd_mask);
+
+ snd_soc_component_update_bits(component, SGTL5000_CHIP_MIC_CTRL,
+ SGTL5000_BIAS_R_MASK,
+diff --git a/sound/soc/codecs/tlv320aic31xx.c b/sound/soc/codecs/tlv320aic31xx.c
+index 9b37e98da0db..26a4f6cd3288 100644
+--- a/sound/soc/codecs/tlv320aic31xx.c
++++ b/sound/soc/codecs/tlv320aic31xx.c
+@@ -1553,7 +1553,8 @@ static int aic31xx_i2c_probe(struct i2c_client *i2c,
+ aic31xx->gpio_reset = devm_gpiod_get_optional(aic31xx->dev, "reset",
+ GPIOD_OUT_LOW);
+ if (IS_ERR(aic31xx->gpio_reset)) {
+- dev_err(aic31xx->dev, "not able to acquire gpio\n");
++ if (PTR_ERR(aic31xx->gpio_reset) != -EPROBE_DEFER)
++ dev_err(aic31xx->dev, "not able to acquire gpio\n");
+ return PTR_ERR(aic31xx->gpio_reset);
+ }
+
+@@ -1564,7 +1565,9 @@ static int aic31xx_i2c_probe(struct i2c_client *i2c,
+ ARRAY_SIZE(aic31xx->supplies),
+ aic31xx->supplies);
+ if (ret) {
+- dev_err(aic31xx->dev, "Failed to request supplies: %d\n", ret);
++ if (ret != -EPROBE_DEFER)
++ dev_err(aic31xx->dev,
++ "Failed to request supplies: %d\n", ret);
+ return ret;
+ }
+
+diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c
+index fa862af25c1a..085855f9b08d 100644
+--- a/sound/soc/fsl/fsl_ssi.c
++++ b/sound/soc/fsl/fsl_ssi.c
+@@ -799,15 +799,6 @@ static int fsl_ssi_hw_params(struct snd_pcm_substream *substream,
+ u32 wl = SSI_SxCCR_WL(sample_size);
+ int ret;
+
+- /*
+- * SSI is properly configured if it is enabled and running in
+- * the synchronous mode; Note that AC97 mode is an exception
+- * that should set separate configurations for STCCR and SRCCR
+- * despite running in the synchronous mode.
+- */
+- if (ssi->streams && ssi->synchronous)
+- return 0;
+-
+ if (fsl_ssi_is_i2s_master(ssi)) {
+ ret = fsl_ssi_set_bclk(substream, dai, hw_params);
+ if (ret)
+@@ -823,6 +814,15 @@ static int fsl_ssi_hw_params(struct snd_pcm_substream *substream,
+ }
+ }
+
++ /*
++ * SSI is properly configured if it is enabled and running in
++ * the synchronous mode; Note that AC97 mode is an exception
++ * that should set separate configurations for STCCR and SRCCR
++ * despite running in the synchronous mode.
++ */
++ if (ssi->streams && ssi->synchronous)
++ return 0;
++
+ if (!fsl_ssi_is_ac97(ssi)) {
+ /*
+ * Keep the ssi->i2s_net intact while having a local variable
+diff --git a/sound/soc/intel/common/sst-acpi.c b/sound/soc/intel/common/sst-acpi.c
+index 0e8e0a7a11df..5854868650b9 100644
+--- a/sound/soc/intel/common/sst-acpi.c
++++ b/sound/soc/intel/common/sst-acpi.c
+@@ -141,11 +141,12 @@ static int sst_acpi_probe(struct platform_device *pdev)
+ }
+
+ platform_set_drvdata(pdev, sst_acpi);
++ mach->pdata = sst_pdata;
+
+ /* register machine driver */
+ sst_acpi->pdev_mach =
+ platform_device_register_data(dev, mach->drv_name, -1,
+- sst_pdata, sizeof(*sst_pdata));
++ mach, sizeof(*mach));
+ if (IS_ERR(sst_acpi->pdev_mach))
+ return PTR_ERR(sst_acpi->pdev_mach);
+
+diff --git a/sound/soc/intel/common/sst-ipc.c b/sound/soc/intel/common/sst-ipc.c
+index ef5b66af1cd2..3a66121ee9bb 100644
+--- a/sound/soc/intel/common/sst-ipc.c
++++ b/sound/soc/intel/common/sst-ipc.c
+@@ -222,6 +222,8 @@ struct ipc_message *sst_ipc_reply_find_msg(struct sst_generic_ipc *ipc,
+
+ if (ipc->ops.reply_msg_match != NULL)
+ header = ipc->ops.reply_msg_match(header, &mask);
++ else
++ mask = (u64)-1;
+
+ if (list_empty(&ipc->rx_list)) {
+ dev_err(ipc->dev, "error: rx list empty but received 0x%llx\n",
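Without the new else branch, mask was left uninitialized whenever a platform supplied no reply_msg_match op, so the subsequent reply matching compared against stack garbage. Defaulting to an all-ones mask makes every header bit significant. Reduced to its core:

    u64 mask = (u64)-1;     /* default: the whole 64-bit header must match */

    if (ipc->ops.reply_msg_match != NULL)
            header = ipc->ops.reply_msg_match(header, &mask);
    /* queued replies are then compared under this mask */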
+diff --git a/sound/soc/intel/skylake/skl-debug.c b/sound/soc/intel/skylake/skl-debug.c
+index b9b4a72a4334..b28a9c2b0380 100644
+--- a/sound/soc/intel/skylake/skl-debug.c
++++ b/sound/soc/intel/skylake/skl-debug.c
+@@ -188,7 +188,7 @@ static ssize_t fw_softreg_read(struct file *file, char __user *user_buf,
+ memset(d->fw_read_buff, 0, FW_REG_BUF);
+
+ if (w0_stat_sz > 0)
+- __iowrite32_copy(d->fw_read_buff, fw_reg_addr, w0_stat_sz >> 2);
++ __ioread32_copy(d->fw_read_buff, fw_reg_addr, w0_stat_sz >> 2);
+
+ for (offset = 0; offset < FW_REG_SIZE; offset += 16) {
+ ret += snprintf(tmp + ret, FW_REG_BUF - ret, "%#.4x: ", offset);
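The one-character fix above matters because fw_softreg_read() copies from the firmware register window into an ordinary buffer, and the MMIO copy helpers are direction-specific: __ioread32_copy(to, from, count) reads 32-bit words out of I/O memory, while __iowrite32_copy() writes into it. In isolation (buf, mmio_addr and size_bytes are illustrative names):

    /* count is in 32-bit words, hence the >> 2 on a byte size. */
    __ioread32_copy(buf, mmio_addr, size_bytes >> 2);  /* MMIO -> RAM */
    __iowrite32_copy(mmio_addr, buf, size_bytes >> 2); /* RAM -> MMIO */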
+diff --git a/sound/soc/intel/skylake/skl-nhlt.c b/sound/soc/intel/skylake/skl-nhlt.c
+index 1132109cb992..e01815cec6fd 100644
+--- a/sound/soc/intel/skylake/skl-nhlt.c
++++ b/sound/soc/intel/skylake/skl-nhlt.c
+@@ -225,7 +225,7 @@ int skl_nhlt_update_topology_bin(struct skl *skl)
+ struct hdac_bus *bus = skl_to_bus(skl);
+ struct device *dev = bus->dev;
+
+- dev_dbg(dev, "oem_id %.6s, oem_table_id %8s oem_revision %d\n",
++ dev_dbg(dev, "oem_id %.6s, oem_table_id %.8s oem_revision %d\n",
+ nhlt->header.oem_id, nhlt->header.oem_table_id,
+ nhlt->header.oem_revision);
+
+diff --git a/sound/soc/sh/rcar/adg.c b/sound/soc/sh/rcar/adg.c
+index fce4e050a9b7..b9aacf3d3b29 100644
+--- a/sound/soc/sh/rcar/adg.c
++++ b/sound/soc/sh/rcar/adg.c
+@@ -30,6 +30,7 @@ struct rsnd_adg {
+ struct clk *clkout[CLKOUTMAX];
+ struct clk_onecell_data onecell;
+ struct rsnd_mod mod;
++ int clk_rate[CLKMAX];
+ u32 flags;
+ u32 ckr;
+ u32 rbga;
+@@ -114,9 +115,9 @@ static void __rsnd_adg_get_timesel_ratio(struct rsnd_priv *priv,
+ unsigned int val, en;
+ unsigned int min, diff;
+ unsigned int sel_rate[] = {
+- clk_get_rate(adg->clk[CLKA]), /* 0000: CLKA */
+- clk_get_rate(adg->clk[CLKB]), /* 0001: CLKB */
+- clk_get_rate(adg->clk[CLKC]), /* 0010: CLKC */
++ adg->clk_rate[CLKA], /* 0000: CLKA */
++ adg->clk_rate[CLKB], /* 0001: CLKB */
++ adg->clk_rate[CLKC], /* 0010: CLKC */
+ adg->rbga_rate_for_441khz, /* 0011: RBGA */
+ adg->rbgb_rate_for_48khz, /* 0100: RBGB */
+ };
+@@ -302,7 +303,7 @@ int rsnd_adg_clk_query(struct rsnd_priv *priv, unsigned int rate)
+ * AUDIO_CLKA/AUDIO_CLKB/AUDIO_CLKC/AUDIO_CLKI.
+ */
+ for_each_rsnd_clk(clk, adg, i) {
+- if (rate == clk_get_rate(clk))
++ if (rate == adg->clk_rate[i])
+ return sel_table[i];
+ }
+
+@@ -369,10 +370,18 @@ void rsnd_adg_clk_control(struct rsnd_priv *priv, int enable)
+
+ for_each_rsnd_clk(clk, adg, i) {
+ ret = 0;
+- if (enable)
++ if (enable) {
+ ret = clk_prepare_enable(clk);
+- else
++
++ /*
++			 * We shouldn't call clk_get_rate() in atomic
++			 * context, so cache the rate here, while
++			 * rsnd_adg_clk_enable() is being called
++ */
++ adg->clk_rate[i] = clk_get_rate(adg->clk[i]);
++ } else {
+ clk_disable_unprepare(clk);
++ }
+
+ if (ret < 0)
+ dev_warn(dev, "can't use clk %d\n", i);
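The adg changes exist because clk_get_rate() may sleep, while the rate is consulted from paths that can run in atomic context. The fix snapshots each rate in process context, at enable time, and later reads only the cached values. Condensed to the caching idea (helper name illustrative, error handling elided):

    /* Snapshot rates while sleeping is allowed; atomic-context code
     * then reads adg->clk_rate[] instead of calling clk_get_rate(). */
    static void rsnd_adg_cache_rates(struct rsnd_adg *adg)
    {
            int i;

            for (i = 0; i < CLKMAX; i++) {
                    clk_prepare_enable(adg->clk[i]);
                    adg->clk_rate[i] = clk_get_rate(adg->clk[i]);
            }
    }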
+diff --git a/sound/soc/soc-generic-dmaengine-pcm.c b/sound/soc/soc-generic-dmaengine-pcm.c
+index 748f5f641002..d93db2c2b527 100644
+--- a/sound/soc/soc-generic-dmaengine-pcm.c
++++ b/sound/soc/soc-generic-dmaengine-pcm.c
+@@ -306,6 +306,12 @@ static int dmaengine_pcm_new(struct snd_soc_pcm_runtime *rtd)
+
+ if (!dmaengine_pcm_can_report_residue(dev, pcm->chan[i]))
+ pcm->flags |= SND_DMAENGINE_PCM_FLAG_NO_RESIDUE;
++
++ if (rtd->pcm->streams[i].pcm->name[0] == '\0') {
++ strncpy(rtd->pcm->streams[i].pcm->name,
++ rtd->pcm->streams[i].pcm->id,
++ sizeof(rtd->pcm->streams[i].pcm->name));
++ }
+ }
+
+ return 0;
+diff --git a/sound/soc/sof/intel/hda-codec.c b/sound/soc/sof/intel/hda-codec.c
+index b8b37f082309..0d8437b080bf 100644
+--- a/sound/soc/sof/intel/hda-codec.c
++++ b/sound/soc/sof/intel/hda-codec.c
+@@ -62,8 +62,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address)
+ address, resp);
+
+ #if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA_AUDIO_CODEC)
+- /* snd_hdac_ext_bus_device_exit will use kfree to free hdev */
+- hda_priv = kzalloc(sizeof(*hda_priv), GFP_KERNEL);
++ hda_priv = devm_kzalloc(sdev->dev, sizeof(*hda_priv), GFP_KERNEL);
+ if (!hda_priv)
+ return -ENOMEM;
+
+@@ -82,8 +81,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address)
+
+ return 0;
+ #else
+- /* snd_hdac_ext_bus_device_exit will use kfree to free hdev */
+- hdev = kzalloc(sizeof(*hdev), GFP_KERNEL);
++ hdev = devm_kzalloc(sdev->dev, sizeof(*hdev), GFP_KERNEL);
+ if (!hdev)
+ return -ENOMEM;
+
+diff --git a/sound/soc/sof/pcm.c b/sound/soc/sof/pcm.c
+index 334e9d59b1ba..3b8955e755b2 100644
+--- a/sound/soc/sof/pcm.c
++++ b/sound/soc/sof/pcm.c
+@@ -208,12 +208,11 @@ static int sof_pcm_hw_params(struct snd_pcm_substream *substream,
+ if (ret < 0)
+ return ret;
+
++ spcm->prepared[substream->stream] = true;
++
+ /* save pcm hw_params */
+ memcpy(&spcm->params[substream->stream], params, sizeof(*params));
+
+- /* clear hw_params_upon_resume flag */
+- spcm->hw_params_upon_resume[substream->stream] = 0;
+-
+ return ret;
+ }
+
+@@ -236,6 +235,9 @@ static int sof_pcm_hw_free(struct snd_pcm_substream *substream)
+ if (!spcm)
+ return -EINVAL;
+
++ if (!spcm->prepared[substream->stream])
++ return 0;
++
+ dev_dbg(sdev->dev, "pcm: free stream %d dir %d\n", spcm->pcm.pcm_id,
+ substream->stream);
+
+@@ -258,6 +260,8 @@ static int sof_pcm_hw_free(struct snd_pcm_substream *substream)
+ if (ret < 0)
+ dev_err(sdev->dev, "error: platform hw free failed\n");
+
++ spcm->prepared[substream->stream] = false;
++
+ return ret;
+ }
+
+@@ -278,11 +282,7 @@ static int sof_pcm_prepare(struct snd_pcm_substream *substream)
+ if (!spcm)
+ return -EINVAL;
+
+- /*
+- * check if hw_params needs to be set-up again.
+- * This is only needed when resuming from system sleep.
+- */
+- if (!spcm->hw_params_upon_resume[substream->stream])
++ if (spcm->prepared[substream->stream])
+ return 0;
+
+ dev_dbg(sdev->dev, "pcm: prepare stream %d dir %d\n", spcm->pcm.pcm_id,
+@@ -311,6 +311,7 @@ static int sof_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
+ struct snd_sof_pcm *spcm;
+ struct sof_ipc_stream stream;
+ struct sof_ipc_reply reply;
++ bool reset_hw_params = false;
+ int ret;
+
+ /* nothing to do for BE */
+@@ -351,6 +352,7 @@ static int sof_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
+ case SNDRV_PCM_TRIGGER_SUSPEND:
+ case SNDRV_PCM_TRIGGER_STOP:
+ stream.hdr.cmd |= SOF_IPC_STREAM_TRIG_STOP;
++ reset_hw_params = true;
+ break;
+ default:
+ dev_err(sdev->dev, "error: unhandled trigger cmd %d\n", cmd);
+@@ -363,17 +365,17 @@ static int sof_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
+ ret = sof_ipc_tx_message(sdev->ipc, stream.hdr.cmd, &stream,
+ sizeof(stream), &reply, sizeof(reply));
+
+- if (ret < 0 || cmd != SNDRV_PCM_TRIGGER_SUSPEND)
++ if (ret < 0 || !reset_hw_params)
+ return ret;
+
+ /*
+- * The hw_free op is usually called when the pcm stream is closed.
+- * Since the stream is not closed during suspend, the DSP needs to be
+- * notified explicitly to free pcm to prevent errors upon resume.
++	 * When the stream is stopped, the DSP must be reprogrammed upon
++	 * restart, so free the PCM here.
+ */
+ stream.hdr.size = sizeof(stream);
+ stream.hdr.cmd = SOF_IPC_GLB_STREAM_MSG | SOF_IPC_STREAM_PCM_FREE;
+ stream.comp_id = spcm->stream[substream->stream].comp_id;
++ spcm->prepared[substream->stream] = false;
+
+ /* send IPC to the DSP */
+ return sof_ipc_tx_message(sdev->ipc, stream.hdr.cmd, &stream,
+@@ -481,6 +483,7 @@ static int sof_pcm_open(struct snd_pcm_substream *substream)
+ spcm->stream[substream->stream].posn.host_posn = 0;
+ spcm->stream[substream->stream].posn.dai_posn = 0;
+ spcm->stream[substream->stream].substream = substream;
++ spcm->prepared[substream->stream] = false;
+
+ ret = snd_sof_pcm_platform_open(sdev, substream);
+ if (ret < 0)
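Taken together, the sof/pcm.c hunks replace the negative hw_params_upon_resume flag with a positive prepared[] state: a successful hw_params sets it; open, hw_free, a STOP/SUSPEND trigger and system suspend clear it; and prepare re-sends hw_params only while it is unset. A condensed sketch of that lifecycle (the helper below is illustrative, not the driver code):

    /* Condensed lifecycle of spcm->prepared[dir] after this change. */
    static int sof_pcm_prepare_sketch(struct snd_sof_pcm *spcm, int dir)
    {
            if (spcm->prepared[dir])
                    return 0;               /* DSP is already programmed */

            /* ... re-send the PCM_PARAMS IPC here ... */
            spcm->prepared[dir] = true;
            return 0;
    }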
+diff --git a/sound/soc/sof/pm.c b/sound/soc/sof/pm.c
+index 278abfd10490..48c6d78d72e2 100644
+--- a/sound/soc/sof/pm.c
++++ b/sound/soc/sof/pm.c
+@@ -233,7 +233,7 @@ static int sof_set_hw_params_upon_resume(struct snd_sof_dev *sdev)
+
+ state = substream->runtime->status->state;
+ if (state == SNDRV_PCM_STATE_SUSPENDED)
+- spcm->hw_params_upon_resume[dir] = 1;
++ spcm->prepared[dir] = false;
+ }
+ }
+
+diff --git a/sound/soc/sof/sof-pci-dev.c b/sound/soc/sof/sof-pci-dev.c
+index 65d1bac4c6b8..6fd3df7c57a3 100644
+--- a/sound/soc/sof/sof-pci-dev.c
++++ b/sound/soc/sof/sof-pci-dev.c
+@@ -223,6 +223,9 @@ static void sof_pci_probe_complete(struct device *dev)
+ */
+ pm_runtime_allow(dev);
+
++	/* mark last_busy for pm_runtime so the device is not suspended immediately */
++ pm_runtime_mark_last_busy(dev);
++
+ /* follow recommendation in pci-driver.c to decrement usage counter */
+ pm_runtime_put_noidle(dev);
+ }
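pm_runtime_mark_last_busy() refreshes the device's last-activity timestamp, so when pm_runtime_put_noidle() then drops the usage count the autosuspend timer measures its delay from probe completion rather than expiring at once. The same pairing shows up in the common runtime-PM idle sequence, shown here for comparison:

    /* Typical end-of-activity sequence for autosuspend-capable devices. */
    pm_runtime_mark_last_busy(dev);
    pm_runtime_put_autosuspend(dev);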
+diff --git a/sound/soc/sof/sof-priv.h b/sound/soc/sof/sof-priv.h
+index b8c0b2a22684..fa5cb7d2a660 100644
+--- a/sound/soc/sof/sof-priv.h
++++ b/sound/soc/sof/sof-priv.h
+@@ -297,7 +297,7 @@ struct snd_sof_pcm {
+ struct snd_sof_pcm_stream stream[2];
+ struct list_head list; /* list in sdev pcm list */
+ struct snd_pcm_hw_params params[2];
+- int hw_params_upon_resume[2]; /* set up hw_params upon resume */
++ bool prepared[2]; /* PCM_PARAMS set successfully */
+ };
+
+ /* ALSA SOF Kcontrol device */
+diff --git a/sound/soc/sunxi/sun4i-i2s.c b/sound/soc/sunxi/sun4i-i2s.c
+index 7fa5c61169db..ab8cb83c8b1a 100644
+--- a/sound/soc/sunxi/sun4i-i2s.c
++++ b/sound/soc/sunxi/sun4i-i2s.c
+@@ -222,10 +222,11 @@ static const struct sun4i_i2s_clk_div sun4i_i2s_mclk_div[] = {
+ };
+
+ static int sun4i_i2s_get_bclk_div(struct sun4i_i2s *i2s,
+- unsigned int oversample_rate,
++ unsigned long parent_rate,
++ unsigned int sampling_rate,
+ unsigned int word_size)
+ {
+- int div = oversample_rate / word_size / 2;
++ int div = parent_rate / sampling_rate / word_size / 2;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(sun4i_i2s_bclk_div); i++) {
+@@ -315,8 +316,8 @@ static int sun4i_i2s_set_clk_rate(struct snd_soc_dai *dai,
+ return -EINVAL;
+ }
+
+- bclk_div = sun4i_i2s_get_bclk_div(i2s, oversample_rate,
+- word_size);
++ bclk_div = sun4i_i2s_get_bclk_div(i2s, i2s->mclk_freq,
++ rate, word_size);
+ if (bclk_div < 0) {
+ dev_err(dai->dev, "Unsupported BCLK divider: %d\n", bclk_div);
+ return -EINVAL;
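The divider is now derived from actual clock rates: BCLK has to run at rate * word_size * 2 (one word per channel across the two LRCK half-frames), so div = parent_rate / rate / word_size / 2. For a 24.576 MHz parent at 48 kHz with 16-bit words that yields 24576000 / 48000 / 16 / 2 = 16. A self-contained check of the arithmetic:

    #include <assert.h>

    /* Mirrors the new divider arithmetic in sun4i_i2s_get_bclk_div(). */
    static int bclk_div(unsigned long parent, unsigned int rate,
                        unsigned int word_size)
    {
            return parent / rate / word_size / 2;
    }

    int main(void)
    {
            assert(bclk_div(24576000, 48000, 16) == 16);
            return 0;
    }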
+diff --git a/sound/soc/uniphier/aio-cpu.c b/sound/soc/uniphier/aio-cpu.c
+index ee90e6c3937c..2ae582a99b63 100644
+--- a/sound/soc/uniphier/aio-cpu.c
++++ b/sound/soc/uniphier/aio-cpu.c
+@@ -424,8 +424,11 @@ int uniphier_aio_dai_suspend(struct snd_soc_dai *dai)
+ {
+ struct uniphier_aio *aio = uniphier_priv(dai);
+
+- reset_control_assert(aio->chip->rst);
+- clk_disable_unprepare(aio->chip->clk);
++ aio->chip->num_wup_aios--;
++ if (!aio->chip->num_wup_aios) {
++ reset_control_assert(aio->chip->rst);
++ clk_disable_unprepare(aio->chip->clk);
++ }
+
+ return 0;
+ }
+@@ -439,13 +442,15 @@ int uniphier_aio_dai_resume(struct snd_soc_dai *dai)
+ if (!aio->chip->active)
+ return 0;
+
+- ret = clk_prepare_enable(aio->chip->clk);
+- if (ret)
+- return ret;
++ if (!aio->chip->num_wup_aios) {
++ ret = clk_prepare_enable(aio->chip->clk);
++ if (ret)
++ return ret;
+
+- ret = reset_control_deassert(aio->chip->rst);
+- if (ret)
+- goto err_out_clock;
++ ret = reset_control_deassert(aio->chip->rst);
++ if (ret)
++ goto err_out_clock;
++ }
+
+ aio_iecout_set_enable(aio->chip, true);
+ aio_chip_init(aio->chip);
+@@ -458,7 +463,7 @@ int uniphier_aio_dai_resume(struct snd_soc_dai *dai)
+
+ ret = aio_init(sub);
+ if (ret)
+- goto err_out_clock;
++ goto err_out_reset;
+
+ if (!sub->setting)
+ continue;
+@@ -466,11 +471,16 @@ int uniphier_aio_dai_resume(struct snd_soc_dai *dai)
+ aio_port_reset(sub);
+ aio_src_reset(sub);
+ }
++ aio->chip->num_wup_aios++;
+
+ return 0;
+
++err_out_reset:
++ if (!aio->chip->num_wup_aios)
++ reset_control_assert(aio->chip->rst);
+ err_out_clock:
+- clk_disable_unprepare(aio->chip->clk);
++ if (!aio->chip->num_wup_aios)
++ clk_disable_unprepare(aio->chip->clk);
+
+ return ret;
+ }
+@@ -619,6 +629,7 @@ int uniphier_aio_probe(struct platform_device *pdev)
+ return PTR_ERR(chip->rst);
+
+ chip->num_aios = chip->chip_spec->num_dais;
++ chip->num_wup_aios = chip->num_aios;
+ chip->aios = devm_kcalloc(dev,
+ chip->num_aios, sizeof(struct uniphier_aio),
+ GFP_KERNEL);
+diff --git a/sound/soc/uniphier/aio.h b/sound/soc/uniphier/aio.h
+index ca6ccbae0ee8..a7ff7e556429 100644
+--- a/sound/soc/uniphier/aio.h
++++ b/sound/soc/uniphier/aio.h
+@@ -285,6 +285,7 @@ struct uniphier_aio_chip {
+
+ struct uniphier_aio *aios;
+ int num_aios;
++ int num_wup_aios;
+ struct uniphier_aio_pll *plls;
+ int num_plls;
+
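num_wup_aios turns the chip's shared clock and reset line into refcounted resources: the last DAI to suspend releases them and the first DAI to resume reacquires them, instead of each DAI toggling them unconditionally. The underlying pattern, with the hardware steps elided and hypothetical helper names:

    /* First user powers the shared resources up, last user powers down. */
    static void aio_power_get(struct uniphier_aio_chip *chip)
    {
            if (!chip->num_wup_aios) {
                    /* clk_prepare_enable(chip->clk), deassert reset ... */
            }
            chip->num_wup_aios++;
    }

    static void aio_power_put(struct uniphier_aio_chip *chip)
    {
            if (!--chip->num_wup_aios) {
                    /* assert reset, clk_disable_unprepare(chip->clk) ... */
            }
    }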
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index e4bbf79de956..33cd26763c0e 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -457,6 +457,7 @@ static int set_sync_endpoint(struct snd_usb_substream *subs,
+ }
+ ep = get_endpoint(alts, 1)->bEndpointAddress;
+ if (get_endpoint(alts, 0)->bLength >= USB_DT_ENDPOINT_AUDIO_SIZE &&
++ get_endpoint(alts, 0)->bSynchAddress != 0 &&
+ ((is_playback && ep != (unsigned int)(get_endpoint(alts, 0)->bSynchAddress | USB_DIR_IN)) ||
+ (!is_playback && ep != (unsigned int)(get_endpoint(alts, 0)->bSynchAddress & ~USB_DIR_IN)))) {
+ dev_err(&dev->dev,
+diff --git a/tools/include/uapi/asm/bitsperlong.h b/tools/include/uapi/asm/bitsperlong.h
+index 57aaeaf8e192..edba4d93e9e6 100644
+--- a/tools/include/uapi/asm/bitsperlong.h
++++ b/tools/include/uapi/asm/bitsperlong.h
+@@ -1,22 +1,22 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ #if defined(__i386__) || defined(__x86_64__)
+-#include "../../arch/x86/include/uapi/asm/bitsperlong.h"
++#include "../../../arch/x86/include/uapi/asm/bitsperlong.h"
+ #elif defined(__aarch64__)
+-#include "../../arch/arm64/include/uapi/asm/bitsperlong.h"
++#include "../../../arch/arm64/include/uapi/asm/bitsperlong.h"
+ #elif defined(__powerpc__)
+-#include "../../arch/powerpc/include/uapi/asm/bitsperlong.h"
++#include "../../../arch/powerpc/include/uapi/asm/bitsperlong.h"
+ #elif defined(__s390__)
+-#include "../../arch/s390/include/uapi/asm/bitsperlong.h"
++#include "../../../arch/s390/include/uapi/asm/bitsperlong.h"
+ #elif defined(__sparc__)
+-#include "../../arch/sparc/include/uapi/asm/bitsperlong.h"
++#include "../../../arch/sparc/include/uapi/asm/bitsperlong.h"
+ #elif defined(__mips__)
+-#include "../../arch/mips/include/uapi/asm/bitsperlong.h"
++#include "../../../arch/mips/include/uapi/asm/bitsperlong.h"
+ #elif defined(__ia64__)
+-#include "../../arch/ia64/include/uapi/asm/bitsperlong.h"
++#include "../../../arch/ia64/include/uapi/asm/bitsperlong.h"
+ #elif defined(__riscv)
+-#include "../../arch/riscv/include/uapi/asm/bitsperlong.h"
++#include "../../../arch/riscv/include/uapi/asm/bitsperlong.h"
+ #elif defined(__alpha__)
+-#include "../../arch/alpha/include/uapi/asm/bitsperlong.h"
++#include "../../../arch/alpha/include/uapi/asm/bitsperlong.h"
+ #else
+ #include <asm-generic/bitsperlong.h>
+ #endif
+diff --git a/tools/lib/traceevent/Makefile b/tools/lib/traceevent/Makefile
+index 3292c290654f..86ce17a1f7fb 100644
+--- a/tools/lib/traceevent/Makefile
++++ b/tools/lib/traceevent/Makefile
+@@ -62,15 +62,15 @@ set_plugin_dir := 1
+
+ # Set plugin_dir to the preferred global plugin location
+ # If we install under $HOME directory we go under
+-# $(HOME)/.traceevent/plugins
++# $(HOME)/.local/lib/traceevent/plugins
+ #
+ # We don't set PLUGIN_DIR in case we install under $HOME
+ # directory, because by default the code looks under:
+-# $(HOME)/.traceevent/plugins by default.
++# $(HOME)/.local/lib/traceevent/plugins by default.
+ #
+ ifeq ($(plugin_dir),)
+ ifeq ($(prefix),$(HOME))
+-override plugin_dir = $(HOME)/.traceevent/plugins
++override plugin_dir = $(HOME)/.local/lib/traceevent/plugins
+ set_plugin_dir := 0
+ else
+ override plugin_dir = $(libdir)/traceevent/plugins
+diff --git a/tools/lib/traceevent/event-plugin.c b/tools/lib/traceevent/event-plugin.c
+index 8ca28de9337a..e1f7ddd5a6cf 100644
+--- a/tools/lib/traceevent/event-plugin.c
++++ b/tools/lib/traceevent/event-plugin.c
+@@ -18,7 +18,7 @@
+ #include "event-utils.h"
+ #include "trace-seq.h"
+
+-#define LOCAL_PLUGIN_DIR ".traceevent/plugins"
++#define LOCAL_PLUGIN_DIR ".local/lib/traceevent/plugins/"
+
+ static struct registered_plugin_options {
+ struct registered_plugin_options *next;
+diff --git a/tools/perf/arch/x86/util/kvm-stat.c b/tools/perf/arch/x86/util/kvm-stat.c
+index 865a9762f22e..3f84403c0983 100644
+--- a/tools/perf/arch/x86/util/kvm-stat.c
++++ b/tools/perf/arch/x86/util/kvm-stat.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <errno.h>
+-#include "../../util/kvm-stat.h"
+-#include "../../util/evsel.h"
++#include "../../../util/kvm-stat.h"
++#include "../../../util/evsel.h"
+ #include <asm/svm.h>
+ #include <asm/vmx.h>
+ #include <asm/kvm.h>
+diff --git a/tools/perf/arch/x86/util/tsc.c b/tools/perf/arch/x86/util/tsc.c
+index 950539f9a4f7..b1eb963b4a6e 100644
+--- a/tools/perf/arch/x86/util/tsc.c
++++ b/tools/perf/arch/x86/util/tsc.c
+@@ -5,10 +5,10 @@
+ #include <linux/stddef.h>
+ #include <linux/perf_event.h>
+
+-#include "../../perf.h"
++#include "../../../perf.h"
+ #include <linux/types.h>
+-#include "../../util/debug.h"
+-#include "../../util/tsc.h"
++#include "../../../util/debug.h"
++#include "../../../util/tsc.h"
+
+ int perf_read_tsc_conversion(const struct perf_event_mmap_page *pc,
+ struct perf_tsc_conversion *tc)
+diff --git a/tools/perf/perf.c b/tools/perf/perf.c
+index 97e2628ea5dd..d4e4d53e8b44 100644
+--- a/tools/perf/perf.c
++++ b/tools/perf/perf.c
+@@ -441,6 +441,9 @@ int main(int argc, const char **argv)
+
+ srandom(time(NULL));
+
++ /* Setting $PERF_CONFIG makes perf read _only_ the given config file. */
++ config_exclusive_filename = getenv("PERF_CONFIG");
++
+ err = perf_config(perf_default_config, NULL);
+ if (err)
+ return err;
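config_exclusive_filename makes perf consult only the named file, so exporting PERF_CONFIG=/dev/null (as the selftest below does) guarantees a developer's ~/.perfconfig cannot perturb the output under test. The override mechanism reduced to a standalone program:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
            /* NULL means "use the normal config search path". */
            const char *cfg = getenv("PERF_CONFIG");

            printf("config source: %s\n", cfg ? cfg : "(default search path)");
            return 0;
    }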
+diff --git a/tools/perf/tests/shell/trace+probe_vfs_getname.sh b/tools/perf/tests/shell/trace+probe_vfs_getname.sh
+index 45d269b0157e..11cc2af13f2b 100755
+--- a/tools/perf/tests/shell/trace+probe_vfs_getname.sh
++++ b/tools/perf/tests/shell/trace+probe_vfs_getname.sh
+@@ -32,6 +32,10 @@ if [ $err -ne 0 ] ; then
+ exit $err
+ fi
+
++# Do not use any ~/.perfconfig file; it may change the output
++# via trace.{show_timestamp,show_prefix,etc}
++export PERF_CONFIG=/dev/null
++
+ trace_open_vfs_getname
+ err=$?
+ rm -f ${file}
+diff --git a/tools/perf/trace/beauty/ioctl.c b/tools/perf/trace/beauty/ioctl.c
+index 52242fa4072b..e19eb6ea361d 100644
+--- a/tools/perf/trace/beauty/ioctl.c
++++ b/tools/perf/trace/beauty/ioctl.c
+@@ -21,7 +21,7 @@
+ static size_t ioctl__scnprintf_tty_cmd(int nr, int dir, char *bf, size_t size)
+ {
+ static const char *ioctl_tty_cmd[] = {
+- "TCGETS", "TCSETS", "TCSETSW", "TCSETSF", "TCGETA", "TCSETA", "TCSETAW",
++ [_IOC_NR(TCGETS)] = "TCGETS", "TCSETS", "TCSETSW", "TCSETSF", "TCGETA", "TCSETA", "TCSETAW",
+ "TCSETAF", "TCSBRK", "TCXONC", "TCFLSH", "TIOCEXCL", "TIOCNXCL", "TIOCSCTTY",
+ "TIOCGPGRP", "TIOCSPGRP", "TIOCOUTQ", "TIOCSTI", "TIOCGWINSZ", "TIOCSWINSZ",
+ "TIOCMGET", "TIOCMBIS", "TIOCMBIC", "TIOCMSET", "TIOCGSOFTCAR", "TIOCSSOFTCAR",
+diff --git a/tools/perf/ui/browsers/scripts.c b/tools/perf/ui/browsers/scripts.c
+index 4d565cc14076..0355d4aaf2ee 100644
+--- a/tools/perf/ui/browsers/scripts.c
++++ b/tools/perf/ui/browsers/scripts.c
+@@ -131,8 +131,10 @@ static int list_scripts(char *script_name, bool *custom,
+ int key = ui_browser__input_window("perf script command",
+ "Enter perf script command line (without perf script prefix)",
+ script_args, "", 0);
+- if (key != K_ENTER)
+- return -1;
++ if (key != K_ENTER) {
++ ret = -1;
++ goto out;
++ }
+ sprintf(script_name, "%s script %s", perf, script_args);
+ } else if (choice < num + max_std) {
+ strcpy(script_name, paths[choice]);
+diff --git a/tools/perf/ui/helpline.c b/tools/perf/ui/helpline.c
+index b3c421429ed4..54bcd08df87e 100644
+--- a/tools/perf/ui/helpline.c
++++ b/tools/perf/ui/helpline.c
+@@ -3,10 +3,10 @@
+ #include <stdlib.h>
+ #include <string.h>
+
+-#include "../debug.h"
++#include "../util/debug.h"
+ #include "helpline.h"
+ #include "ui.h"
+-#include "../util.h"
++#include "../util/util.h"
+
+ char ui_helpline__current[512];
+
+diff --git a/tools/perf/ui/util.c b/tools/perf/ui/util.c
+index 63bf06e80ab9..9ed76e88a3e4 100644
+--- a/tools/perf/ui/util.c
++++ b/tools/perf/ui/util.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include "util.h"
+-#include "../debug.h"
++#include "../util/debug.h"
+
+
+ /*
+diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
+index b0364d923f76..070c3bd57882 100644
+--- a/tools/perf/util/evlist.c
++++ b/tools/perf/util/evlist.c
+@@ -20,6 +20,7 @@
+ #include "bpf-event.h"
+ #include <signal.h>
+ #include <unistd.h>
++#include <sched.h>
+
+ #include "parse-events.h"
+ #include <subcmd/parse-options.h>
+@@ -1870,6 +1871,14 @@ static void *perf_evlist__poll_thread(void *arg)
+ struct perf_evlist *evlist = arg;
+ bool draining = false;
+ int i, done = 0;
++ /*
++ * In order to read symbols from other namespaces, perf needs to call
++ * setns(2). This isn't permitted if the fs_struct has multiple users.
++ * unshare(2) the fs so that we may continue to setns into namespaces
++ * that we're observing when, for instance, reading the build-ids at
++ * the end of a 'perf record' session.
++ */
++ unshare(CLONE_FS);
+
+ while (!done) {
+ bool got_data = false;
+diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
+index 1903d7ec9797..bf7cf1249553 100644
+--- a/tools/perf/util/header.c
++++ b/tools/perf/util/header.c
+@@ -2251,8 +2251,10 @@ static int process_cpu_topology(struct feat_fd *ff, void *data __maybe_unused)
+ /* On s390 the socket_id number is not related to the numbers of cpus.
+ * The socket_id number might be higher than the numbers of cpus.
+ * This depends on the configuration.
++ * AArch64 is the same.
+ */
+- if (ph->env.arch && !strncmp(ph->env.arch, "s390", 4))
++ if (ph->env.arch && (!strncmp(ph->env.arch, "s390", 4)
++ || !strncmp(ph->env.arch, "aarch64", 7)))
+ do_core_id_test = false;
+
+ for (i = 0; i < (u32)cpu_nr; i++) {
+diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
+index f24fd1954f6c..6bd270a1e93e 100644
+--- a/tools/perf/util/hist.c
++++ b/tools/perf/util/hist.c
+@@ -193,7 +193,10 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
+ hists__new_col_len(hists, HISTC_MEM_LVL, 21 + 3);
+ hists__new_col_len(hists, HISTC_LOCAL_WEIGHT, 12);
+ hists__new_col_len(hists, HISTC_GLOBAL_WEIGHT, 12);
+- hists__new_col_len(hists, HISTC_TIME, 12);
++ if (symbol_conf.nanosecs)
++ hists__new_col_len(hists, HISTC_TIME, 16);
++ else
++ hists__new_col_len(hists, HISTC_TIME, 12);
+
+ if (h->srcline) {
+ len = MAX(strlen(h->srcline), strlen(sort_srcline.se_header));
+diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
+index 668410b1d426..7666206d06fa 100644
+--- a/tools/perf/util/map.c
++++ b/tools/perf/util/map.c
+@@ -647,6 +647,7 @@ struct map_groups *map_groups__new(struct machine *machine)
+ void map_groups__delete(struct map_groups *mg)
+ {
+ map_groups__exit(mg);
++ unwind__finish_access(mg);
+ free(mg);
+ }
+
+@@ -887,7 +888,7 @@ int map_groups__clone(struct thread *thread, struct map_groups *parent)
+ if (new == NULL)
+ goto out_unlock;
+
+- err = unwind__prepare_access(thread, new, NULL);
++ err = unwind__prepare_access(mg, new, NULL);
+ if (err)
+ goto out_unlock;
+
+diff --git a/tools/perf/util/map_groups.h b/tools/perf/util/map_groups.h
+index 5f25efa6d6bc..77252e14008f 100644
+--- a/tools/perf/util/map_groups.h
++++ b/tools/perf/util/map_groups.h
+@@ -31,6 +31,10 @@ struct map_groups {
+ struct maps maps;
+ struct machine *machine;
+ refcount_t refcnt;
++#ifdef HAVE_LIBUNWIND_SUPPORT
++ void *addr_space;
++ struct unwind_libunwind_ops *unwind_libunwind_ops;
++#endif
+ };
+
+ #define KMAP_NAME_LEN 256
+diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
+index 590793cc5142..bbf7816cba31 100644
+--- a/tools/perf/util/thread.c
++++ b/tools/perf/util/thread.c
+@@ -105,7 +105,6 @@ void thread__delete(struct thread *thread)
+ }
+ up_write(&thread->comm_lock);
+
+- unwind__finish_access(thread);
+ nsinfo__zput(thread->nsinfo);
+ srccode_state_free(&thread->srccode_state);
+
+@@ -252,7 +251,7 @@ static int ____thread__set_comm(struct thread *thread, const char *str,
+ list_add(&new->list, &thread->comm_list);
+
+ if (exec)
+- unwind__flush_access(thread);
++ unwind__flush_access(thread->mg);
+ }
+
+ thread->comm_set = true;
+@@ -332,7 +331,7 @@ int thread__insert_map(struct thread *thread, struct map *map)
+ {
+ int ret;
+
+- ret = unwind__prepare_access(thread, map, NULL);
++ ret = unwind__prepare_access(thread->mg, map, NULL);
+ if (ret)
+ return ret;
+
+@@ -352,7 +351,7 @@ static int __thread__prepare_access(struct thread *thread)
+ down_read(&maps->lock);
+
+ for (map = maps__first(maps); map; map = map__next(map)) {
+- err = unwind__prepare_access(thread, map, &initialized);
++ err = unwind__prepare_access(thread->mg, map, &initialized);
+ if (err || initialized)
+ break;
+ }
+diff --git a/tools/perf/util/thread.h b/tools/perf/util/thread.h
+index e97ef6977eb9..bf06113be4f3 100644
+--- a/tools/perf/util/thread.h
++++ b/tools/perf/util/thread.h
+@@ -44,10 +44,6 @@ struct thread {
+ struct thread_stack *ts;
+ struct nsinfo *nsinfo;
+ struct srccode_state srccode_state;
+-#ifdef HAVE_LIBUNWIND_SUPPORT
+- void *addr_space;
+- struct unwind_libunwind_ops *unwind_libunwind_ops;
+-#endif
+ bool filter;
+ int filter_entry_depth;
+ };
+diff --git a/tools/perf/util/unwind-libunwind-local.c b/tools/perf/util/unwind-libunwind-local.c
+index 71a788921b62..ebdbb056510c 100644
+--- a/tools/perf/util/unwind-libunwind-local.c
++++ b/tools/perf/util/unwind-libunwind-local.c
+@@ -616,26 +616,26 @@ static unw_accessors_t accessors = {
+ .get_proc_name = get_proc_name,
+ };
+
+-static int _unwind__prepare_access(struct thread *thread)
++static int _unwind__prepare_access(struct map_groups *mg)
+ {
+- thread->addr_space = unw_create_addr_space(&accessors, 0);
+- if (!thread->addr_space) {
++ mg->addr_space = unw_create_addr_space(&accessors, 0);
++ if (!mg->addr_space) {
+ pr_err("unwind: Can't create unwind address space.\n");
+ return -ENOMEM;
+ }
+
+- unw_set_caching_policy(thread->addr_space, UNW_CACHE_GLOBAL);
++ unw_set_caching_policy(mg->addr_space, UNW_CACHE_GLOBAL);
+ return 0;
+ }
+
+-static void _unwind__flush_access(struct thread *thread)
++static void _unwind__flush_access(struct map_groups *mg)
+ {
+- unw_flush_cache(thread->addr_space, 0, 0);
++ unw_flush_cache(mg->addr_space, 0, 0);
+ }
+
+-static void _unwind__finish_access(struct thread *thread)
++static void _unwind__finish_access(struct map_groups *mg)
+ {
+- unw_destroy_addr_space(thread->addr_space);
++ unw_destroy_addr_space(mg->addr_space);
+ }
+
+ static int get_entries(struct unwind_info *ui, unwind_entry_cb_t cb,
+@@ -660,7 +660,7 @@ static int get_entries(struct unwind_info *ui, unwind_entry_cb_t cb,
+ */
+ if (max_stack - 1 > 0) {
+ WARN_ONCE(!ui->thread, "WARNING: ui->thread is NULL");
+- addr_space = ui->thread->addr_space;
++ addr_space = ui->thread->mg->addr_space;
+
+ if (addr_space == NULL)
+ return -1;
+diff --git a/tools/perf/util/unwind-libunwind.c b/tools/perf/util/unwind-libunwind.c
+index c0811977d7d5..b843f9d0a9ea 100644
+--- a/tools/perf/util/unwind-libunwind.c
++++ b/tools/perf/util/unwind-libunwind.c
+@@ -11,13 +11,13 @@ struct unwind_libunwind_ops __weak *local_unwind_libunwind_ops;
+ struct unwind_libunwind_ops __weak *x86_32_unwind_libunwind_ops;
+ struct unwind_libunwind_ops __weak *arm64_unwind_libunwind_ops;
+
+-static void unwind__register_ops(struct thread *thread,
++static void unwind__register_ops(struct map_groups *mg,
+ struct unwind_libunwind_ops *ops)
+ {
+- thread->unwind_libunwind_ops = ops;
++ mg->unwind_libunwind_ops = ops;
+ }
+
+-int unwind__prepare_access(struct thread *thread, struct map *map,
++int unwind__prepare_access(struct map_groups *mg, struct map *map,
+ bool *initialized)
+ {
+ const char *arch;
+@@ -28,7 +28,7 @@ int unwind__prepare_access(struct thread *thread, struct map *map,
+ if (!dwarf_callchain_users)
+ return 0;
+
+- if (thread->addr_space) {
++ if (mg->addr_space) {
+ pr_debug("unwind: thread map already set, dso=%s\n",
+ map->dso->name);
+ if (initialized)
+@@ -37,14 +37,14 @@ int unwind__prepare_access(struct thread *thread, struct map *map,
+ }
+
+ /* env->arch is NULL for live-mode (i.e. perf top) */
+- if (!thread->mg->machine->env || !thread->mg->machine->env->arch)
++ if (!mg->machine->env || !mg->machine->env->arch)
+ goto out_register;
+
+- dso_type = dso__type(map->dso, thread->mg->machine);
++ dso_type = dso__type(map->dso, mg->machine);
+ if (dso_type == DSO__TYPE_UNKNOWN)
+ return 0;
+
+- arch = perf_env__arch(thread->mg->machine->env);
++ arch = perf_env__arch(mg->machine->env);
+
+ if (!strcmp(arch, "x86")) {
+ if (dso_type != DSO__TYPE_64BIT)
+@@ -59,37 +59,37 @@ int unwind__prepare_access(struct thread *thread, struct map *map,
+ return 0;
+ }
+ out_register:
+- unwind__register_ops(thread, ops);
++ unwind__register_ops(mg, ops);
+
+- err = thread->unwind_libunwind_ops->prepare_access(thread);
++ err = mg->unwind_libunwind_ops->prepare_access(mg);
+ if (initialized)
+ *initialized = err ? false : true;
+ return err;
+ }
+
+-void unwind__flush_access(struct thread *thread)
++void unwind__flush_access(struct map_groups *mg)
+ {
+ if (!dwarf_callchain_users)
+ return;
+
+- if (thread->unwind_libunwind_ops)
+- thread->unwind_libunwind_ops->flush_access(thread);
++ if (mg->unwind_libunwind_ops)
++ mg->unwind_libunwind_ops->flush_access(mg);
+ }
+
+-void unwind__finish_access(struct thread *thread)
++void unwind__finish_access(struct map_groups *mg)
+ {
+ if (!dwarf_callchain_users)
+ return;
+
+- if (thread->unwind_libunwind_ops)
+- thread->unwind_libunwind_ops->finish_access(thread);
++ if (mg->unwind_libunwind_ops)
++ mg->unwind_libunwind_ops->finish_access(mg);
+ }
+
+ int unwind__get_entries(unwind_entry_cb_t cb, void *arg,
+ struct thread *thread,
+ struct perf_sample *data, int max_stack)
+ {
+- if (thread->unwind_libunwind_ops)
+- return thread->unwind_libunwind_ops->get_entries(cb, arg, thread, data, max_stack);
++ if (thread->mg->unwind_libunwind_ops)
++ return thread->mg->unwind_libunwind_ops->get_entries(cb, arg, thread, data, max_stack);
+ return 0;
+ }
+diff --git a/tools/perf/util/unwind.h b/tools/perf/util/unwind.h
+index 8a44a1569a21..3a7d00c20d86 100644
+--- a/tools/perf/util/unwind.h
++++ b/tools/perf/util/unwind.h
+@@ -6,6 +6,7 @@
+ #include <linux/types.h>
+
+ struct map;
++struct map_groups;
+ struct perf_sample;
+ struct symbol;
+ struct thread;
+@@ -19,9 +20,9 @@ struct unwind_entry {
+ typedef int (*unwind_entry_cb_t)(struct unwind_entry *entry, void *arg);
+
+ struct unwind_libunwind_ops {
+- int (*prepare_access)(struct thread *thread);
+- void (*flush_access)(struct thread *thread);
+- void (*finish_access)(struct thread *thread);
++ int (*prepare_access)(struct map_groups *mg);
++ void (*flush_access)(struct map_groups *mg);
++ void (*finish_access)(struct map_groups *mg);
+ int (*get_entries)(unwind_entry_cb_t cb, void *arg,
+ struct thread *thread,
+ struct perf_sample *data, int max_stack);
+@@ -46,20 +47,20 @@ int unwind__get_entries(unwind_entry_cb_t cb, void *arg,
+ #endif
+
+ int LIBUNWIND__ARCH_REG_ID(int regnum);
+-int unwind__prepare_access(struct thread *thread, struct map *map,
++int unwind__prepare_access(struct map_groups *mg, struct map *map,
+ bool *initialized);
+-void unwind__flush_access(struct thread *thread);
+-void unwind__finish_access(struct thread *thread);
++void unwind__flush_access(struct map_groups *mg);
++void unwind__finish_access(struct map_groups *mg);
+ #else
+-static inline int unwind__prepare_access(struct thread *thread __maybe_unused,
++static inline int unwind__prepare_access(struct map_groups *mg __maybe_unused,
+ struct map *map __maybe_unused,
+ bool *initialized __maybe_unused)
+ {
+ return 0;
+ }
+
+-static inline void unwind__flush_access(struct thread *thread __maybe_unused) {}
+-static inline void unwind__finish_access(struct thread *thread __maybe_unused) {}
++static inline void unwind__flush_access(struct map_groups *mg __maybe_unused) {}
++static inline void unwind__finish_access(struct map_groups *mg __maybe_unused) {}
+ #endif
+ #else
+ static inline int
+@@ -72,14 +73,14 @@ unwind__get_entries(unwind_entry_cb_t cb __maybe_unused,
+ return 0;
+ }
+
+-static inline int unwind__prepare_access(struct thread *thread __maybe_unused,
++static inline int unwind__prepare_access(struct map_groups *mg __maybe_unused,
+ struct map *map __maybe_unused,
+ bool *initialized __maybe_unused)
+ {
+ return 0;
+ }
+
+-static inline void unwind__flush_access(struct thread *thread __maybe_unused) {}
+-static inline void unwind__finish_access(struct thread *thread __maybe_unused) {}
++static inline void unwind__flush_access(struct map_groups *mg __maybe_unused) {}
++static inline void unwind__finish_access(struct map_groups *mg __maybe_unused) {}
+ #endif /* HAVE_DWARF_UNWIND_SUPPORT */
+ #endif /* __UNWIND_H */
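The whole unwind series above moves the libunwind address space and ops from struct thread into struct map_groups: all threads of a process share one map_groups, so the unwind cache is created once per process and torn down in map_groups__delete() rather than per thread. Stripped to the ownership relation (fields trimmed):

    /* Rough shape of the new ownership after this series. */
    struct map_groups {
            struct machine *machine;
    #ifdef HAVE_LIBUNWIND_SUPPORT
            void *addr_space;                 /* shared per process */
            struct unwind_libunwind_ops *unwind_libunwind_ops;
    #endif
    };

    struct thread {
            struct map_groups *mg;            /* threads of one process share this */
    };

    /* Callers now reach unwind state via thread->mg->addr_space. */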
+diff --git a/tools/perf/util/xyarray.h b/tools/perf/util/xyarray.h
+index 7ffe562e7ae7..2627b038b6f2 100644
+--- a/tools/perf/util/xyarray.h
++++ b/tools/perf/util/xyarray.h
+@@ -2,6 +2,7 @@
+ #ifndef _PERF_XYARRAY_H_
+ #define _PERF_XYARRAY_H_ 1
+
++#include <linux/compiler.h>
+ #include <sys/types.h>
+
+ struct xyarray {
+@@ -10,7 +11,7 @@ struct xyarray {
+ size_t entries;
+ size_t max_x;
+ size_t max_y;
+- char contents[];
++ char contents[] __aligned(8);
+ };
+
+ struct xyarray *xyarray__new(int xlen, int ylen, size_t entry_size);
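contents[] stores entries of arbitrary entry_size, including records built from u64s, but the size_t fields ahead of it only guarantee 4-byte alignment on 32-bit ABIs, which can trap on strict-alignment hardware. __aligned(8) pads the flexible array's offset up to 8 bytes. A small demonstration (struct names are illustrative):

    #include <stddef.h>
    #include <stdio.h>

    struct xy_plain   { unsigned int sz; char contents[]; };
    struct xy_aligned { unsigned int sz; char contents[] __attribute__((aligned(8))); };

    int main(void)
    {
            /* On a 32-bit ABI the first offset may be 4; the second is 8,
             * so u64-based records stored in contents[] stay aligned. */
            printf("%zu %zu\n", offsetof(struct xy_plain, contents),
                   offsetof(struct xy_aligned, contents));
            return 0;
    }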
+diff --git a/tools/power/x86/intel-speed-select/isst-config.c b/tools/power/x86/intel-speed-select/isst-config.c
+index 91c5ad1685a1..6a10dea01eef 100644
+--- a/tools/power/x86/intel-speed-select/isst-config.c
++++ b/tools/power/x86/intel-speed-select/isst-config.c
+@@ -603,6 +603,10 @@ static int isst_fill_platform_info(void)
+
+ close(fd);
+
++ if (isst_platform_info.api_version > supported_api_ver) {
++ printf("Incompatible API versions; Upgrade of tool is required\n");
++ return -1;
++ }
+ return 0;
+ }
+
+@@ -1529,6 +1533,7 @@ static void cmdline(int argc, char **argv)
+ {
+ int opt;
+ int option_index = 0;
++ int ret;
+
+ static struct option long_options[] = {
+ { "cpu", required_argument, 0, 'c' },
+@@ -1590,13 +1595,14 @@ static void cmdline(int argc, char **argv)
+ set_max_cpu_num();
+ set_cpu_present_cpu_mask();
+ set_cpu_target_cpu_mask();
+- isst_fill_platform_info();
+- if (isst_platform_info.api_version > supported_api_ver) {
+- printf("Incompatible API versions; Upgrade of tool is required\n");
+- exit(0);
+- }
++ ret = isst_fill_platform_info();
++ if (ret)
++ goto out;
+
+ process_command(argc, argv);
++out:
++ free_cpu_set(present_cpumask);
++ free_cpu_set(target_cpumask);
+ }
+
+ int main(int argc, char **argv)
+diff --git a/tools/testing/selftests/net/fib_nexthop_multiprefix.sh b/tools/testing/selftests/net/fib_nexthop_multiprefix.sh
+index e6828732843e..9dc35a16e415 100755
+--- a/tools/testing/selftests/net/fib_nexthop_multiprefix.sh
++++ b/tools/testing/selftests/net/fib_nexthop_multiprefix.sh
+@@ -15,6 +15,8 @@
+ PAUSE_ON_FAIL=no
+ VERBOSE=0
+
++which ping6 > /dev/null 2>&1 && ping6=$(which ping6) || ping6=$(which ping)
++
+ ################################################################################
+ # helpers
+
+@@ -200,7 +202,7 @@ validate_v6_exception()
+ local rc
+
+ if [ ${ping_sz} != "0" ]; then
+- run_cmd ip netns exec h0 ping6 -s ${ping_sz} -c5 -w5 ${dst}
++ run_cmd ip netns exec h0 ${ping6} -s ${ping_sz} -c5 -w5 ${dst}
+ fi
+
+ if [ "$VERBOSE" = "1" ]; then
+@@ -243,7 +245,7 @@ do
+ run_cmd taskset -c ${c} ip netns exec h0 ping -c1 -w1 172.16.10${i}.1
+ [ $? -ne 0 ] && printf "\nERROR: ping to h${i} failed\n" && ret=1
+
+- run_cmd taskset -c ${c} ip netns exec h0 ping6 -c1 -w1 2001:db8:10${i}::1
++ run_cmd taskset -c ${c} ip netns exec h0 ${ping6} -c1 -w1 2001:db8:10${i}::1
+ [ $? -ne 0 ] && printf "\nERROR: ping6 to h${i} failed\n" && ret=1
+
+ [ $ret -ne 0 ] && break
+diff --git a/tools/testing/selftests/net/fib_tests.sh b/tools/testing/selftests/net/fib_tests.sh
+index 4465fc2dae14..c4ba0ff4a53f 100755
+--- a/tools/testing/selftests/net/fib_tests.sh
++++ b/tools/testing/selftests/net/fib_tests.sh
+@@ -9,7 +9,7 @@ ret=0
+ ksft_skip=4
+
+ # all tests in this script. Can be overridden with -t option
+-TESTS="unregister down carrier nexthop ipv6_rt ipv4_rt ipv6_addr_metric ipv4_addr_metric ipv6_route_metrics ipv4_route_metrics ipv4_route_v6_gw rp_filter"
++TESTS="unregister down carrier nexthop suppress ipv6_rt ipv4_rt ipv6_addr_metric ipv4_addr_metric ipv6_route_metrics ipv4_route_metrics ipv4_route_v6_gw rp_filter"
+
+ VERBOSE=0
+ PAUSE_ON_FAIL=no
+@@ -17,6 +17,8 @@ PAUSE=no
+ IP="ip -netns ns1"
+ NS_EXEC="ip netns exec ns1"
+
++which ping6 > /dev/null 2>&1 && ping6=$(which ping6) || ping6=$(which ping)
++
+ log_test()
+ {
+ local rc=$1
+@@ -614,6 +616,20 @@ fib_nexthop_test()
+ cleanup
+ }
+
++fib_suppress_test()
++{
++ $IP link add dummy1 type dummy
++ $IP link set dummy1 up
++ $IP -6 route add default dev dummy1
++ $IP -6 rule add table main suppress_prefixlength 0
++ ping -f -c 1000 -W 1 1234::1 || true
++ $IP -6 rule del table main suppress_prefixlength 0
++ $IP link del dummy1
++
++ # If we got here without crashing, we're good.
++ return 0
++}
++
+ ################################################################################
+ # Tests on route add and replace
+
+@@ -1086,7 +1102,7 @@ ipv6_route_metrics_test()
+ log_test $rc 0 "Multipath route with mtu metric"
+
+ $IP -6 ro add 2001:db8:104::/64 via 2001:db8:101::2 mtu 1300
+- run_cmd "ip netns exec ns1 ping6 -w1 -c1 -s 1500 2001:db8:104::1"
++ run_cmd "ip netns exec ns1 ${ping6} -w1 -c1 -s 1500 2001:db8:104::1"
+ log_test $? 0 "Using route with mtu metric"
+
+ run_cmd "$IP -6 ro add 2001:db8:114::/64 via 2001:db8:101::2 congctl lock foo"
+@@ -1591,6 +1607,7 @@ do
+ fib_carrier_test|carrier) fib_carrier_test;;
+ fib_rp_filter_test|rp_filter) fib_rp_filter_test;;
+ fib_nexthop_test|nexthop) fib_nexthop_test;;
++ fib_suppress_test|suppress) fib_suppress_test;;
+ ipv6_route_test|ipv6_rt) ipv6_route_test;;
+ ipv4_route_test|ipv4_rt) ipv4_route_test;;
+ ipv6_addr_metric) ipv6_addr_metric_test;;
* [gentoo-commits] proj/linux-patches:5.3 commit in: /
@ 2019-10-07 17:48 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2019-10-07 17:48 UTC (permalink / raw
To: gentoo-commits
commit: ea98da23f37c6a66683cf26882a3ad99b958ebb6
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Oct 7 17:48:04 2019 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Oct 7 17:48:04 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ea98da23
Linux patch 5.3.5
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1004_linux-5.3.5.patch | 6035 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 6039 insertions(+)
diff --git a/0000_README b/0000_README
index 74f4ffa..1b2145a 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch: 1003_linux-5.3.4.patch
From: http://www.kernel.org
Desc: Linux 5.3.4
+Patch: 1004_linux-5.3.5.patch
+From: http://www.kernel.org
+Desc: Linux 5.3.5
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1004_linux-5.3.5.patch b/1004_linux-5.3.5.patch
new file mode 100644
index 0000000..ab833c1
--- /dev/null
+++ b/1004_linux-5.3.5.patch
@@ -0,0 +1,6035 @@
+diff --git a/Makefile b/Makefile
+index fa11c1d89acf..bf03c110ed9b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 3
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Bobtail Squid
+
+@@ -751,6 +751,11 @@ else
+ # These warnings generated too much noise in a regular build.
+ # Use make W=1 to enable them (see scripts/Makefile.extrawarn)
+ KBUILD_CFLAGS += -Wno-unused-but-set-variable
++
++# Warn about unmarked fall-throughs in switch statement.
++# Disabled for clang while comment to attribute conversion happens and
++# https://github.com/ClangBuiltLinux/linux/issues/636 is discussed.
++KBUILD_CFLAGS += $(call cc-option,-Wimplicit-fallthrough,)
+ endif
+
+ KBUILD_CFLAGS += $(call cc-disable-warning, unused-const-variable)
+@@ -845,9 +850,6 @@ NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
+ # warn about C99 declaration after statement
+ KBUILD_CFLAGS += -Wdeclaration-after-statement
+
+-# Warn about unmarked fall-throughs in switch statement.
+-KBUILD_CFLAGS += $(call cc-option,-Wimplicit-fallthrough,)
+-
+ # Variable Length Arrays (VLAs) should not be used anywhere in the kernel
+ KBUILD_CFLAGS += -Wvla
+
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index 24360211534a..b587a3b3939a 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -82,7 +82,7 @@ config ARM
+ select HAVE_FAST_GUP if ARM_LPAE
+ select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
+ select HAVE_FUNCTION_GRAPH_TRACER if !THUMB2_KERNEL && !CC_IS_CLANG
+- select HAVE_FUNCTION_TRACER if !XIP_KERNEL
++ select HAVE_FUNCTION_TRACER if !XIP_KERNEL && (CC_IS_GCC || CLANG_VERSION >= 100000)
+ select HAVE_GCC_PLUGINS
+ select HAVE_HW_BREAKPOINT if PERF_EVENTS && (CPU_V6 || CPU_V6K || CPU_V7)
+ select HAVE_IDE if PCI || ISA || PCMCIA
+@@ -1572,8 +1572,9 @@ config ARM_PATCH_IDIV
+ code to do integer division.
+
+ config AEABI
+- bool "Use the ARM EABI to compile the kernel" if !CPU_V7 && !CPU_V7M && !CPU_V6 && !CPU_V6K
+- default CPU_V7 || CPU_V7M || CPU_V6 || CPU_V6K
++ bool "Use the ARM EABI to compile the kernel" if !CPU_V7 && \
++ !CPU_V7M && !CPU_V6 && !CPU_V6K && !CC_IS_CLANG
++ default CPU_V7 || CPU_V7M || CPU_V6 || CPU_V6K || CC_IS_CLANG
+ help
+ This option allows for the kernel to be compiled using the latest
+ ARM ABI (aka EABI). This is only useful if you are using a user
+diff --git a/arch/arm/Makefile b/arch/arm/Makefile
+index c3624ca6c0bc..9b3d4deca9e4 100644
+--- a/arch/arm/Makefile
++++ b/arch/arm/Makefile
+@@ -112,6 +112,10 @@ ifeq ($(CONFIG_ARM_UNWIND),y)
+ CFLAGS_ABI +=-funwind-tables
+ endif
+
++ifeq ($(CONFIG_CC_IS_CLANG),y)
++CFLAGS_ABI += -meabi gnu
++endif
++
+ # Accept old syntax despite ".syntax unified"
+ AFLAGS_NOWARN :=$(call as-option,-Wa$(comma)-mno-warn-deprecated,-Wa$(comma)-W)
+
+diff --git a/arch/arm/boot/dts/gemini-dlink-dir-685.dts b/arch/arm/boot/dts/gemini-dlink-dir-685.dts
+index bfaa2de63a10..e2030ba16512 100644
+--- a/arch/arm/boot/dts/gemini-dlink-dir-685.dts
++++ b/arch/arm/boot/dts/gemini-dlink-dir-685.dts
+@@ -72,7 +72,6 @@
+ reg = <0>;
+ /* 50 ns min period = 20 MHz */
+ spi-max-frequency = <20000000>;
+- spi-cpol; /* Clock active low */
+ vcc-supply = <&vdisp>;
+ iovcc-supply = <&vdisp>;
+ vci-supply = <&vdisp>;
+diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
+index 890eeaac3cbb..bd0f4821f7e1 100644
+--- a/arch/arm/mm/fault.c
++++ b/arch/arm/mm/fault.c
+@@ -191,7 +191,7 @@ static inline bool access_error(unsigned int fsr, struct vm_area_struct *vma)
+ {
+ unsigned int mask = VM_READ | VM_WRITE | VM_EXEC;
+
+- if (fsr & FSR_WRITE)
++ if ((fsr & FSR_WRITE) && !(fsr & FSR_CM))
+ mask = VM_WRITE;
+ if (fsr & FSR_LNX_PF)
+ mask = VM_EXEC;
+@@ -262,7 +262,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
+
+ if (user_mode(regs))
+ flags |= FAULT_FLAG_USER;
+- if (fsr & FSR_WRITE)
++ if ((fsr & FSR_WRITE) && !(fsr & FSR_CM))
+ flags |= FAULT_FLAG_WRITE;
+
+ /*
+diff --git a/arch/arm/mm/fault.h b/arch/arm/mm/fault.h
+index c063708fa503..9ecc2097a87a 100644
+--- a/arch/arm/mm/fault.h
++++ b/arch/arm/mm/fault.h
+@@ -6,6 +6,7 @@
+ * Fault status register encodings. We steal bit 31 for our own purposes.
+ */
+ #define FSR_LNX_PF (1 << 31)
++#define FSR_CM (1 << 13)
+ #define FSR_WRITE (1 << 11)
+ #define FSR_FS4 (1 << 10)
+ #define FSR_FS3_0 (15)
+diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
+index f866870db749..0b94b674aa91 100644
+--- a/arch/arm/mm/mmap.c
++++ b/arch/arm/mm/mmap.c
+@@ -18,8 +18,9 @@
+ (((pgoff)<<PAGE_SHIFT) & (SHMLBA-1)))
+
+ /* gap between mmap and stack */
+-#define MIN_GAP (128*1024*1024UL)
+-#define MAX_GAP ((TASK_SIZE)/6*5)
++#define MIN_GAP (128*1024*1024UL)
++#define MAX_GAP ((STACK_TOP)/6*5)
++#define STACK_RND_MASK (0x7ff >> (PAGE_SHIFT - 12))
+
+ static int mmap_is_legacy(struct rlimit *rlim_stack)
+ {
+@@ -35,13 +36,22 @@ static int mmap_is_legacy(struct rlimit *rlim_stack)
+ static unsigned long mmap_base(unsigned long rnd, struct rlimit *rlim_stack)
+ {
+ unsigned long gap = rlim_stack->rlim_cur;
++ unsigned long pad = stack_guard_gap;
++
++ /* Account for stack randomization if necessary */
++ if (current->flags & PF_RANDOMIZE)
++ pad += (STACK_RND_MASK << PAGE_SHIFT);
++
++ /* Values close to RLIM_INFINITY can overflow. */
++ if (gap + pad > gap)
++ gap += pad;
+
+ if (gap < MIN_GAP)
+ gap = MIN_GAP;
+ else if (gap > MAX_GAP)
+ gap = MAX_GAP;
+
+- return PAGE_ALIGN(TASK_SIZE - gap - rnd);
++ return PAGE_ALIGN(STACK_TOP - gap - rnd);
+ }
+
+ /*
+diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
+index d9a0038774a6..d5e0b908f0ba 100644
+--- a/arch/arm/mm/mmu.c
++++ b/arch/arm/mm/mmu.c
+@@ -1177,6 +1177,22 @@ void __init adjust_lowmem_bounds(void)
+ */
+ vmalloc_limit = (u64)(uintptr_t)vmalloc_min - PAGE_OFFSET + PHYS_OFFSET;
+
++ /*
++	 * as MEMBLOCK_NOMAP if it isn't.
++ * as MEMBLOCK_NOMAP if it isn't
++ */
++ for_each_memblock(memory, reg) {
++ if (!memblock_is_nomap(reg)) {
++ if (!IS_ALIGNED(reg->base, PMD_SIZE)) {
++ phys_addr_t len;
++
++ len = round_up(reg->base, PMD_SIZE) - reg->base;
++ memblock_mark_nomap(reg->base, len);
++ }
++ break;
++ }
++ }
++
+ for_each_memblock(memory, reg) {
+ phys_addr_t block_start = reg->base;
+ phys_addr_t block_end = reg->base + reg->size;
+diff --git a/arch/arm64/include/asm/cmpxchg.h b/arch/arm64/include/asm/cmpxchg.h
+index 7a299a20f6dc..7a8b8bc69e8d 100644
+--- a/arch/arm64/include/asm/cmpxchg.h
++++ b/arch/arm64/include/asm/cmpxchg.h
+@@ -63,7 +63,7 @@ __XCHG_CASE( , , mb_, 64, dmb ish, nop, , a, l, "memory")
+ #undef __XCHG_CASE
+
+ #define __XCHG_GEN(sfx) \
+-static inline unsigned long __xchg##sfx(unsigned long x, \
++static __always_inline unsigned long __xchg##sfx(unsigned long x, \
+ volatile void *ptr, \
+ int size) \
+ { \
+@@ -105,7 +105,7 @@ __XCHG_GEN(_mb)
+ #define arch_xchg(...) __xchg_wrapper( _mb, __VA_ARGS__)
+
+ #define __CMPXCHG_GEN(sfx) \
+-static inline unsigned long __cmpxchg##sfx(volatile void *ptr, \
++static __always_inline unsigned long __cmpxchg##sfx(volatile void *ptr, \
+ unsigned long old, \
+ unsigned long new, \
+ int size) \
+@@ -212,7 +212,7 @@ __CMPWAIT_CASE( , , 64);
+ #undef __CMPWAIT_CASE
+
+ #define __CMPWAIT_GEN(sfx) \
+-static inline void __cmpwait##sfx(volatile void *ptr, \
++static __always_inline void __cmpwait##sfx(volatile void *ptr, \
+ unsigned long val, \
+ int size) \
+ { \
+diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
+index b050641b5139..8dac7110f0cb 100644
+--- a/arch/arm64/mm/mmap.c
++++ b/arch/arm64/mm/mmap.c
+@@ -54,7 +54,11 @@ unsigned long arch_mmap_rnd(void)
+ static unsigned long mmap_base(unsigned long rnd, struct rlimit *rlim_stack)
+ {
+ unsigned long gap = rlim_stack->rlim_cur;
+- unsigned long pad = (STACK_RND_MASK << PAGE_SHIFT) + stack_guard_gap;
++ unsigned long pad = stack_guard_gap;
++
++ /* Account for stack randomization if necessary */
++ if (current->flags & PF_RANDOMIZE)
++ pad += (STACK_RND_MASK << PAGE_SHIFT);
+
+ /* Values close to RLIM_INFINITY can overflow. */
+ if (gap + pad > gap)
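Both mmap_base() hunks above guard the same arithmetic: rlim_cur may be RLIM_INFINITY, so gap + pad can wrap around, and the pad (guard gap plus optional randomization slack) is applied only when the unsigned sum stays larger than gap. The overflow test in isolation:

    /* Add pad only when the unsigned sum does not wrap. */
    static unsigned long pad_gap(unsigned long gap, unsigned long pad)
    {
            if (gap + pad > gap)    /* false iff the addition wrapped (or pad == 0) */
                    gap += pad;
            return gap;
    }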
+diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h
+index 9a82dd11c0e9..bb8658cc7f12 100644
+--- a/arch/mips/include/asm/atomic.h
++++ b/arch/mips/include/asm/atomic.h
+@@ -68,7 +68,7 @@ static __inline__ void atomic_##op(int i, atomic_t * v) \
+ "\t" __scbeqz " %0, 1b \n" \
+ " .set pop \n" \
+ : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (v->counter) \
+- : "Ir" (i)); \
++ : "Ir" (i) : __LLSC_CLOBBER); \
+ } else { \
+ unsigned long flags; \
+ \
+@@ -98,7 +98,7 @@ static __inline__ int atomic_##op##_return_relaxed(int i, atomic_t * v) \
+ " .set pop \n" \
+ : "=&r" (result), "=&r" (temp), \
+ "+" GCC_OFF_SMALL_ASM() (v->counter) \
+- : "Ir" (i)); \
++ : "Ir" (i) : __LLSC_CLOBBER); \
+ } else { \
+ unsigned long flags; \
+ \
+@@ -132,7 +132,7 @@ static __inline__ int atomic_fetch_##op##_relaxed(int i, atomic_t * v) \
+ " move %0, %1 \n" \
+ : "=&r" (result), "=&r" (temp), \
+ "+" GCC_OFF_SMALL_ASM() (v->counter) \
+- : "Ir" (i)); \
++ : "Ir" (i) : __LLSC_CLOBBER); \
+ } else { \
+ unsigned long flags; \
+ \
+@@ -193,6 +193,7 @@ static __inline__ int atomic_sub_if_positive(int i, atomic_t * v)
+ if (kernel_uses_llsc) {
+ int temp;
+
++ loongson_llsc_mb();
+ __asm__ __volatile__(
+ " .set push \n"
+ " .set "MIPS_ISA_LEVEL" \n"
+@@ -200,16 +201,16 @@ static __inline__ int atomic_sub_if_positive(int i, atomic_t * v)
+ " .set pop \n"
+ " subu %0, %1, %3 \n"
+ " move %1, %0 \n"
+- " bltz %0, 1f \n"
++ " bltz %0, 2f \n"
+ " .set push \n"
+ " .set "MIPS_ISA_LEVEL" \n"
+ " sc %1, %2 \n"
+ "\t" __scbeqz " %1, 1b \n"
+- "1: \n"
++ "2: \n"
+ " .set pop \n"
+ : "=&r" (result), "=&r" (temp),
+ "+" GCC_OFF_SMALL_ASM() (v->counter)
+- : "Ir" (i));
++ : "Ir" (i) : __LLSC_CLOBBER);
+ } else {
+ unsigned long flags;
+
+@@ -269,7 +270,7 @@ static __inline__ void atomic64_##op(s64 i, atomic64_t * v) \
+ "\t" __scbeqz " %0, 1b \n" \
+ " .set pop \n" \
+ : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (v->counter) \
+- : "Ir" (i)); \
++ : "Ir" (i) : __LLSC_CLOBBER); \
+ } else { \
+ unsigned long flags; \
+ \
+@@ -299,7 +300,7 @@ static __inline__ s64 atomic64_##op##_return_relaxed(s64 i, atomic64_t * v) \
+ " .set pop \n" \
+ : "=&r" (result), "=&r" (temp), \
+ "+" GCC_OFF_SMALL_ASM() (v->counter) \
+- : "Ir" (i)); \
++ : "Ir" (i) : __LLSC_CLOBBER); \
+ } else { \
+ unsigned long flags; \
+ \
+@@ -333,7 +334,7 @@ static __inline__ s64 atomic64_fetch_##op##_relaxed(s64 i, atomic64_t * v) \
+ " .set pop \n" \
+ : "=&r" (result), "=&r" (temp), \
+ "+" GCC_OFF_SMALL_ASM() (v->counter) \
+- : "Ir" (i)); \
++ : "Ir" (i) : __LLSC_CLOBBER); \
+ } else { \
+ unsigned long flags; \
+ \
+diff --git a/arch/mips/include/asm/barrier.h b/arch/mips/include/asm/barrier.h
+index b865e317a14f..9228f7386220 100644
+--- a/arch/mips/include/asm/barrier.h
++++ b/arch/mips/include/asm/barrier.h
+@@ -211,14 +211,22 @@
+ #define __smp_wmb() barrier()
+ #endif
+
++/*
++ * When LL/SC does imply order, it must also be a compiler barrier to keep the
++ * compiler from reordering where the CPU will not. When it does not imply
++ * order, the compiler is also free to reorder across the LL/SC loop and
++ * ordering will be done by smp_llsc_mb() and friends.
++ */
+ #if defined(CONFIG_WEAK_REORDERING_BEYOND_LLSC) && defined(CONFIG_SMP)
+ #define __WEAK_LLSC_MB " sync \n"
++#define smp_llsc_mb() __asm__ __volatile__(__WEAK_LLSC_MB : : :"memory")
++#define __LLSC_CLOBBER
+ #else
+ #define __WEAK_LLSC_MB " \n"
++#define smp_llsc_mb() do { } while (0)
++#define __LLSC_CLOBBER "memory"
+ #endif
+
+-#define smp_llsc_mb() __asm__ __volatile__(__WEAK_LLSC_MB : : :"memory")
+-
+ #ifdef CONFIG_CPU_CAVIUM_OCTEON
+ #define smp_mb__before_llsc() smp_wmb()
+ #define __smp_mb__before_llsc() __smp_wmb()
+@@ -238,36 +246,40 @@
+
+ /*
+ * Some Loongson 3 CPUs have a bug wherein execution of a memory access (load,
+- * store or pref) in between an ll & sc can cause the sc instruction to
++ * store or prefetch) in between an LL & SC can cause the SC instruction to
+ * erroneously succeed, breaking atomicity. Whilst it's unusual to write code
+ * containing such sequences, this bug bites harder than we might otherwise
+ * expect due to reordering & speculation:
+ *
+- * 1) A memory access appearing prior to the ll in program order may actually
+- * be executed after the ll - this is the reordering case.
++ * 1) A memory access appearing prior to the LL in program order may actually
++ * be executed after the LL - this is the reordering case.
+ *
+- * In order to avoid this we need to place a memory barrier (ie. a sync
+- * instruction) prior to every ll instruction, in between it & any earlier
+- * memory access instructions. Many of these cases are already covered by
+- * smp_mb__before_llsc() but for the remaining cases, typically ones in
+- * which multiple CPUs may operate on a memory location but ordering is not
+- * usually guaranteed, we use loongson_llsc_mb() below.
++ * In order to avoid this we need to place a memory barrier (ie. a SYNC
++ * instruction) prior to every LL instruction, in between it and any earlier
++ * memory access instructions.
+ *
+ * This reordering case is fixed by 3A R2 CPUs, ie. 3A2000 models and later.
+ *
+- * 2) If a conditional branch exists between an ll & sc with a target outside
+- * of the ll-sc loop, for example an exit upon value mismatch in cmpxchg()
++ * 2) If a conditional branch exists between an LL & SC with a target outside
++ * of the LL-SC loop, for example an exit upon value mismatch in cmpxchg()
+ * or similar, then misprediction of the branch may allow speculative
+- * execution of memory accesses from outside of the ll-sc loop.
++ * execution of memory accesses from outside of the LL-SC loop.
+ *
+- * In order to avoid this we need a memory barrier (ie. a sync instruction)
++ * In order to avoid this we need a memory barrier (ie. a SYNC instruction)
+ * at each affected branch target, for which we also use loongson_llsc_mb()
+ * defined below.
+ *
+ * This case affects all current Loongson 3 CPUs.
++ *
++ * The above-described cases cause an error in the cache coherence protocol,
++ * such that the Invalidate of a competing LL-SC goes 'missing' and the SC
++ * erroneously observes its core still has Exclusive state and lets the SC
++ * proceed.
++ *
++ * Therefore the error only occurs on SMP systems.
+ */
+ #ifdef CONFIG_CPU_LOONGSON3_WORKAROUNDS /* Loongson-3's LLSC workaround */
+-#define loongson_llsc_mb() __asm__ __volatile__(__WEAK_LLSC_MB : : :"memory")
++#define loongson_llsc_mb() __asm__ __volatile__("sync" : : :"memory")
+ #else
+ #define loongson_llsc_mb() do { } while (0)
+ #endif
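The comment block above pins down the placement rules the rest of the series applies: a full SYNC (loongson_llsc_mb()) before each LL for case 1 and at branch targets outside the loop for case 2, with __LLSC_CLOBBER making each asm a compiler barrier only when LL/SC do not already imply one. A schematic LL/SC loop with those placements, simplified from the atomic hunks that follow (kernel-context sketch, not compilable standalone):

    static inline void atomic_add_sketch(int i, atomic_t *v)
    {
            int temp;

            loongson_llsc_mb();             /* case 1: order accesses before the LL */
            __asm__ __volatile__(
            "1:     ll      %0, %1          \n"
            "       addu    %0, %2          \n"
            "       sc      %0, %1          \n"
            "       beqz    %0, 1b          \n"
            : "=&r" (temp), "+m" (v->counter)
            : "Ir" (i)
            : __LLSC_CLOBBER);              /* "memory" unless weak LL/SC ordering */
    }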
+diff --git a/arch/mips/include/asm/bitops.h b/arch/mips/include/asm/bitops.h
+index 9a466dde9b96..985d6a02f9ea 100644
+--- a/arch/mips/include/asm/bitops.h
++++ b/arch/mips/include/asm/bitops.h
+@@ -66,7 +66,8 @@ static inline void set_bit(unsigned long nr, volatile unsigned long *addr)
+ " beqzl %0, 1b \n"
+ " .set pop \n"
+ : "=&r" (temp), "=" GCC_OFF_SMALL_ASM() (*m)
+- : "ir" (1UL << bit), GCC_OFF_SMALL_ASM() (*m));
++ : "ir" (1UL << bit), GCC_OFF_SMALL_ASM() (*m)
++ : __LLSC_CLOBBER);
+ #if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
+ } else if (kernel_uses_llsc && __builtin_constant_p(bit)) {
+ loongson_llsc_mb();
+@@ -76,7 +77,8 @@ static inline void set_bit(unsigned long nr, volatile unsigned long *addr)
+ " " __INS "%0, %3, %2, 1 \n"
+ " " __SC "%0, %1 \n"
+ : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (*m)
+- : "ir" (bit), "r" (~0));
++ : "ir" (bit), "r" (~0)
++ : __LLSC_CLOBBER);
+ } while (unlikely(!temp));
+ #endif /* CONFIG_CPU_MIPSR2 || CONFIG_CPU_MIPSR6 */
+ } else if (kernel_uses_llsc) {
+@@ -90,7 +92,8 @@ static inline void set_bit(unsigned long nr, volatile unsigned long *addr)
+ " " __SC "%0, %1 \n"
+ " .set pop \n"
+ : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (*m)
+- : "ir" (1UL << bit));
++ : "ir" (1UL << bit)
++ : __LLSC_CLOBBER);
+ } while (unlikely(!temp));
+ } else
+ __mips_set_bit(nr, addr);
+@@ -122,7 +125,8 @@ static inline void clear_bit(unsigned long nr, volatile unsigned long *addr)
+ " beqzl %0, 1b \n"
+ " .set pop \n"
+ : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (*m)
+- : "ir" (~(1UL << bit)));
++ : "ir" (~(1UL << bit))
++ : __LLSC_CLOBBER);
+ #if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
+ } else if (kernel_uses_llsc && __builtin_constant_p(bit)) {
+ loongson_llsc_mb();
+@@ -132,7 +136,8 @@ static inline void clear_bit(unsigned long nr, volatile unsigned long *addr)
+ " " __INS "%0, $0, %2, 1 \n"
+ " " __SC "%0, %1 \n"
+ : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (*m)
+- : "ir" (bit));
++ : "ir" (bit)
++ : __LLSC_CLOBBER);
+ } while (unlikely(!temp));
+ #endif /* CONFIG_CPU_MIPSR2 || CONFIG_CPU_MIPSR6 */
+ } else if (kernel_uses_llsc) {
+@@ -146,7 +151,8 @@ static inline void clear_bit(unsigned long nr, volatile unsigned long *addr)
+ " " __SC "%0, %1 \n"
+ " .set pop \n"
+ : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (*m)
+- : "ir" (~(1UL << bit)));
++ : "ir" (~(1UL << bit))
++ : __LLSC_CLOBBER);
+ } while (unlikely(!temp));
+ } else
+ __mips_clear_bit(nr, addr);
+@@ -192,7 +198,8 @@ static inline void change_bit(unsigned long nr, volatile unsigned long *addr)
+ " beqzl %0, 1b \n"
+ " .set pop \n"
+ : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (*m)
+- : "ir" (1UL << bit));
++ : "ir" (1UL << bit)
++ : __LLSC_CLOBBER);
+ } else if (kernel_uses_llsc) {
+ unsigned long *m = ((unsigned long *) addr) + (nr >> SZLONG_LOG);
+ unsigned long temp;
+@@ -207,7 +214,8 @@ static inline void change_bit(unsigned long nr, volatile unsigned long *addr)
+ " " __SC "%0, %1 \n"
+ " .set pop \n"
+ : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (*m)
+- : "ir" (1UL << bit));
++ : "ir" (1UL << bit)
++ : __LLSC_CLOBBER);
+ } while (unlikely(!temp));
+ } else
+ __mips_change_bit(nr, addr);
+@@ -244,11 +252,12 @@ static inline int test_and_set_bit(unsigned long nr,
+ " .set pop \n"
+ : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (*m), "=&r" (res)
+ : "r" (1UL << bit)
+- : "memory");
++ : __LLSC_CLOBBER);
+ } else if (kernel_uses_llsc) {
+ unsigned long *m = ((unsigned long *) addr) + (nr >> SZLONG_LOG);
+ unsigned long temp;
+
++ loongson_llsc_mb();
+ do {
+ __asm__ __volatile__(
+ " .set push \n"
+@@ -259,7 +268,7 @@ static inline int test_and_set_bit(unsigned long nr,
+ " .set pop \n"
+ : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (*m), "=&r" (res)
+ : "r" (1UL << bit)
+- : "memory");
++ : __LLSC_CLOBBER);
+ } while (unlikely(!res));
+
+ res = temp & (1UL << bit);
+@@ -300,11 +309,12 @@ static inline int test_and_set_bit_lock(unsigned long nr,
+ " .set pop \n"
+ : "=&r" (temp), "+m" (*m), "=&r" (res)
+ : "r" (1UL << bit)
+- : "memory");
++ : __LLSC_CLOBBER);
+ } else if (kernel_uses_llsc) {
+ unsigned long *m = ((unsigned long *) addr) + (nr >> SZLONG_LOG);
+ unsigned long temp;
+
++ loongson_llsc_mb();
+ do {
+ __asm__ __volatile__(
+ " .set push \n"
+@@ -315,7 +325,7 @@ static inline int test_and_set_bit_lock(unsigned long nr,
+ " .set pop \n"
+ : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (*m), "=&r" (res)
+ : "r" (1UL << bit)
+- : "memory");
++ : __LLSC_CLOBBER);
+ } while (unlikely(!res));
+
+ res = temp & (1UL << bit);
+@@ -358,12 +368,13 @@ static inline int test_and_clear_bit(unsigned long nr,
+ " .set pop \n"
+ : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (*m), "=&r" (res)
+ : "r" (1UL << bit)
+- : "memory");
++ : __LLSC_CLOBBER);
+ #if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
+ } else if (kernel_uses_llsc && __builtin_constant_p(nr)) {
+ unsigned long *m = ((unsigned long *) addr) + (nr >> SZLONG_LOG);
+ unsigned long temp;
+
++ loongson_llsc_mb();
+ do {
+ __asm__ __volatile__(
+ " " __LL "%0, %1 # test_and_clear_bit \n"
+@@ -372,13 +383,14 @@ static inline int test_and_clear_bit(unsigned long nr,
+ " " __SC "%0, %1 \n"
+ : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (*m), "=&r" (res)
+ : "ir" (bit)
+- : "memory");
++ : __LLSC_CLOBBER);
+ } while (unlikely(!temp));
+ #endif
+ } else if (kernel_uses_llsc) {
+ unsigned long *m = ((unsigned long *) addr) + (nr >> SZLONG_LOG);
+ unsigned long temp;
+
++ loongson_llsc_mb();
+ do {
+ __asm__ __volatile__(
+ " .set push \n"
+@@ -390,7 +402,7 @@ static inline int test_and_clear_bit(unsigned long nr,
+ " .set pop \n"
+ : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (*m), "=&r" (res)
+ : "r" (1UL << bit)
+- : "memory");
++ : __LLSC_CLOBBER);
+ } while (unlikely(!res));
+
+ res = temp & (1UL << bit);
+@@ -433,11 +445,12 @@ static inline int test_and_change_bit(unsigned long nr,
+ " .set pop \n"
+ : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (*m), "=&r" (res)
+ : "r" (1UL << bit)
+- : "memory");
++ : __LLSC_CLOBBER);
+ } else if (kernel_uses_llsc) {
+ unsigned long *m = ((unsigned long *) addr) + (nr >> SZLONG_LOG);
+ unsigned long temp;
+
++ loongson_llsc_mb();
+ do {
+ __asm__ __volatile__(
+ " .set push \n"
+@@ -448,7 +461,7 @@ static inline int test_and_change_bit(unsigned long nr,
+ " .set pop \n"
+ : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (*m), "=&r" (res)
+ : "r" (1UL << bit)
+- : "memory");
++ : __LLSC_CLOBBER);
+ } while (unlikely(!res));
+
+ res = temp & (1UL << bit);
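+
+The __LLSC_CLOBBER macro substituted for the bare "memory" clobbers in the
+hunks above (and in cmpxchg.h below) is not defined anywhere in this excerpt.
+A plausible definition - stated as an assumption, not quoted from the patch -
+is:
+
+    /*
+     * Assumed shape: with the Loongson-3 workaround enabled, the
+     * surrounding loongson_llsc_mb() barriers already act as the compiler
+     * barrier, so the LL/SC asm itself may drop the clobber; everywhere
+     * else the usual "memory" clobber is kept.
+     */
+    #ifdef CONFIG_CPU_LOONGSON3_WORKAROUNDS
+    # define __LLSC_CLOBBER
+    #else
+    # define __LLSC_CLOBBER "memory"
+    #endif
+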
+diff --git a/arch/mips/include/asm/cmpxchg.h b/arch/mips/include/asm/cmpxchg.h
+index f345a873742d..c8a47d18f628 100644
+--- a/arch/mips/include/asm/cmpxchg.h
++++ b/arch/mips/include/asm/cmpxchg.h
+@@ -46,6 +46,7 @@ extern unsigned long __xchg_called_with_bad_pointer(void)
+ __typeof(*(m)) __ret; \
+ \
+ if (kernel_uses_llsc) { \
++ loongson_llsc_mb(); \
+ __asm__ __volatile__( \
+ " .set push \n" \
+ " .set noat \n" \
+@@ -60,7 +61,7 @@ extern unsigned long __xchg_called_with_bad_pointer(void)
+ " .set pop \n" \
+ : "=&r" (__ret), "=" GCC_OFF_SMALL_ASM() (*m) \
+ : GCC_OFF_SMALL_ASM() (*m), "Jr" (val) \
+- : "memory"); \
++ : __LLSC_CLOBBER); \
+ } else { \
+ unsigned long __flags; \
+ \
+@@ -117,6 +118,7 @@ static inline unsigned long __xchg(volatile void *ptr, unsigned long x,
+ __typeof(*(m)) __ret; \
+ \
+ if (kernel_uses_llsc) { \
++ loongson_llsc_mb(); \
+ __asm__ __volatile__( \
+ " .set push \n" \
+ " .set noat \n" \
+@@ -132,8 +134,9 @@ static inline unsigned long __xchg(volatile void *ptr, unsigned long x,
+ " .set pop \n" \
+ "2: \n" \
+ : "=&r" (__ret), "=" GCC_OFF_SMALL_ASM() (*m) \
+- : GCC_OFF_SMALL_ASM() (*m), "Jr" (old), "Jr" (new) \
+- : "memory"); \
++ : GCC_OFF_SMALL_ASM() (*m), "Jr" (old), "Jr" (new) \
++ : __LLSC_CLOBBER); \
++ loongson_llsc_mb(); \
+ } else { \
+ unsigned long __flags; \
+ \
+@@ -229,6 +232,7 @@ static inline unsigned long __cmpxchg64(volatile void *ptr,
+ */
+ local_irq_save(flags);
+
++ loongson_llsc_mb();
+ asm volatile(
+ " .set push \n"
+ " .set " MIPS_ISA_ARCH_LEVEL " \n"
+@@ -274,6 +278,7 @@ static inline unsigned long __cmpxchg64(volatile void *ptr,
+ "r" (old),
+ "r" (new)
+ : "memory");
++ loongson_llsc_mb();
+
+ local_irq_restore(flags);
+ return ret;
+diff --git a/arch/mips/include/asm/mipsregs.h b/arch/mips/include/asm/mipsregs.h
+index 1e6966e8527e..bdbdc19a2b8f 100644
+--- a/arch/mips/include/asm/mipsregs.h
++++ b/arch/mips/include/asm/mipsregs.h
+@@ -689,6 +689,9 @@
+ #define MIPS_CONF7_IAR (_ULCAST_(1) << 10)
+ #define MIPS_CONF7_AR (_ULCAST_(1) << 16)
+
++/* Ingenic Config7 bits */
++#define MIPS_CONF7_BTB_LOOP_EN (_ULCAST_(1) << 4)
++
+ /* Config7 Bits specific to MIPS Technologies. */
+
+ /* Performance counters implemented Per TC */
+@@ -2813,6 +2816,7 @@ __BUILD_SET_C0(status)
+ __BUILD_SET_C0(cause)
+ __BUILD_SET_C0(config)
+ __BUILD_SET_C0(config5)
++__BUILD_SET_C0(config7)
+ __BUILD_SET_C0(intcontrol)
+ __BUILD_SET_C0(intctl)
+ __BUILD_SET_C0(srsmap)
+diff --git a/arch/mips/kernel/branch.c b/arch/mips/kernel/branch.c
+index 1db29957a931..2c38f75d87ff 100644
+--- a/arch/mips/kernel/branch.c
++++ b/arch/mips/kernel/branch.c
+@@ -58,6 +58,7 @@ int __mm_isBranchInstr(struct pt_regs *regs, struct mm_decoded_insn dec_insn,
+ unsigned long *contpc)
+ {
+ union mips_instruction insn = (union mips_instruction)dec_insn.insn;
++ int __maybe_unused bc_false = 0;
+
+ if (!cpu_has_mmips)
+ return 0;
+@@ -139,7 +140,6 @@ int __mm_isBranchInstr(struct pt_regs *regs, struct mm_decoded_insn dec_insn,
+ #ifdef CONFIG_MIPS_FP_SUPPORT
+ case mm_bc2f_op:
+ case mm_bc1f_op: {
+- int bc_false = 0;
+ unsigned int fcr31;
+ unsigned int bit;
+
+diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
+index 9635c1db3ae6..e654ffc1c8a0 100644
+--- a/arch/mips/kernel/cpu-probe.c
++++ b/arch/mips/kernel/cpu-probe.c
+@@ -1964,6 +1964,13 @@ static inline void cpu_probe_ingenic(struct cpuinfo_mips *c, unsigned int cpu)
+ c->cputype = CPU_JZRISC;
+ c->writecombine = _CACHE_UNCACHED_ACCELERATED;
+ __cpu_name[cpu] = "Ingenic JZRISC";
++ /*
++ * The XBurst core by default attempts to avoid branch target
++ * buffer lookups by detecting & special-casing loops. This
++ * feature causes BogoMIPS and lpj to be calculated incorrectly.
++ * Set cp0 config7 bit 4 to disable this feature.
++ */
++ set_c0_config7(MIPS_CONF7_BTB_LOOP_EN);
+ break;
+ default:
+ panic("Unknown Ingenic Processor ID!");
+diff --git a/arch/mips/kernel/syscall.c b/arch/mips/kernel/syscall.c
+index b6dc78ad5d8c..b0e25e913bdb 100644
+--- a/arch/mips/kernel/syscall.c
++++ b/arch/mips/kernel/syscall.c
+@@ -132,6 +132,7 @@ static inline int mips_atomic_set(unsigned long addr, unsigned long new)
+ [efault] "i" (-EFAULT)
+ : "memory");
+ } else if (cpu_has_llsc) {
++ loongson_llsc_mb();
+ __asm__ __volatile__ (
+ " .set push \n"
+ " .set "MIPS_ISA_ARCH_LEVEL" \n"
+diff --git a/arch/mips/mm/mmap.c b/arch/mips/mm/mmap.c
+index d79f2b432318..f5c778113384 100644
+--- a/arch/mips/mm/mmap.c
++++ b/arch/mips/mm/mmap.c
+@@ -21,8 +21,9 @@ unsigned long shm_align_mask = PAGE_SIZE - 1; /* Sane caches */
+ EXPORT_SYMBOL(shm_align_mask);
+
+ /* gap between mmap and stack */
+-#define MIN_GAP (128*1024*1024UL)
+-#define MAX_GAP ((TASK_SIZE)/6*5)
++#define MIN_GAP (128*1024*1024UL)
++#define MAX_GAP ((TASK_SIZE)/6*5)
++#define STACK_RND_MASK (0x7ff >> (PAGE_SHIFT - 12))
+
+ static int mmap_is_legacy(struct rlimit *rlim_stack)
+ {
+@@ -38,6 +39,15 @@ static int mmap_is_legacy(struct rlimit *rlim_stack)
+ static unsigned long mmap_base(unsigned long rnd, struct rlimit *rlim_stack)
+ {
+ unsigned long gap = rlim_stack->rlim_cur;
++ unsigned long pad = stack_guard_gap;
++
++ /* Account for stack randomization if necessary */
++ if (current->flags & PF_RANDOMIZE)
++ pad += (STACK_RND_MASK << PAGE_SHIFT);
++
++ /* Values close to RLIM_INFINITY can overflow. */
++ if (gap + pad > gap)
++ gap += pad;
+
+ if (gap < MIN_GAP)
+ gap = MIN_GAP;
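+
+The added guard reads oddly until the wrap-around case is considered: for an
+unsigned gap near ULONG_MAX (an RLIM_INFINITY stack limit), gap + pad wraps to
+a small value and the pad must be skipped rather than applied. A standalone
+userspace demonstration (illustration only, not kernel code):
+
+    #include <limits.h>
+    #include <stdio.h>
+
+    int main(void)
+    {
+            unsigned long gap = ULONG_MAX;  /* RLIM_INFINITY-like limit */
+            unsigned long pad = 0x1000;     /* guard gap + randomization */
+
+            /* The sum wraps, the guard is false, the pad is skipped. */
+            if (gap + pad > gap)
+                    gap += pad;
+
+            printf("gap = %#lx\n", gap);    /* still ULONG_MAX */
+            return 0;
+    }
+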
+diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
+index 144ceb0fba88..bece1264d1c5 100644
+--- a/arch/mips/mm/tlbex.c
++++ b/arch/mips/mm/tlbex.c
+@@ -631,7 +631,7 @@ static __maybe_unused void build_convert_pte_to_entrylo(u32 **p,
+ return;
+ }
+
+- if (cpu_has_rixi && _PAGE_NO_EXEC) {
++ if (cpu_has_rixi && !!_PAGE_NO_EXEC) {
+ if (fill_includes_sw_bits) {
+ UASM_i_ROTR(p, reg, reg, ilog2(_PAGE_GLOBAL));
+ } else {
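+
+The added !! is the standard C idiom for collapsing any non-zero value to
+exactly 1; a plausible motivation (an assumption - the patch itself gives no
+rationale) is that some compilers warn when a constant bit mask such as
+_PAGE_NO_EXEC appears directly as an operand of &&. The idiom in isolation:
+
+    #include <stdio.h>
+
+    int main(void)
+    {
+            unsigned long mask = 1UL << 38; /* any non-zero mask */
+
+            printf("%d\n", !!mask);         /* prints 1 */
+            printf("%d\n", !!0UL);          /* prints 0 */
+            return 0;
+    }
+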
+diff --git a/arch/powerpc/include/asm/futex.h b/arch/powerpc/include/asm/futex.h
+index 3a6aa57b9d90..eea28ca679db 100644
+--- a/arch/powerpc/include/asm/futex.h
++++ b/arch/powerpc/include/asm/futex.h
+@@ -60,8 +60,7 @@ static inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval,
+
+ pagefault_enable();
+
+- if (!ret)
+- *oval = oldval;
++ *oval = oldval;
+
+ prevent_write_to_user(uaddr, sizeof(*uaddr));
+ return ret;
+diff --git a/arch/powerpc/kernel/eeh_driver.c b/arch/powerpc/kernel/eeh_driver.c
+index 89623962c727..fe0c32fb9f96 100644
+--- a/arch/powerpc/kernel/eeh_driver.c
++++ b/arch/powerpc/kernel/eeh_driver.c
+@@ -744,6 +744,33 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
+ */
+ #define MAX_WAIT_FOR_RECOVERY 300
+
++
++/* Walks the PE tree after processing an event to remove any stale PEs.
++ *
++ * NB: This needs to be recursive to ensure the leaf PEs get removed
++ * before their parents do. Although this is possible to do iteratively
++ * we don't, since this is easier to read and we need to guarantee
++ * the leaf nodes will be handled first.
++ */
++static void eeh_pe_cleanup(struct eeh_pe *pe)
++{
++ struct eeh_pe *child_pe, *tmp;
++
++ list_for_each_entry_safe(child_pe, tmp, &pe->child_list, child)
++ eeh_pe_cleanup(child_pe);
++
++ if (pe->state & EEH_PE_KEEP)
++ return;
++
++ if (!(pe->state & EEH_PE_INVALID))
++ return;
++
++ if (list_empty(&pe->edevs) && list_empty(&pe->child_list)) {
++ list_del(&pe->child);
++ kfree(pe);
++ }
++}
++
+ /**
+ * eeh_handle_normal_event - Handle EEH events on a specific PE
+ * @pe: EEH PE - which should not be used after we return, as it may
+@@ -782,8 +809,6 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
+ return;
+ }
+
+- eeh_pe_state_mark(pe, EEH_PE_RECOVERING);
+-
+ eeh_pe_update_time_stamp(pe);
+ pe->freeze_count++;
+ if (pe->freeze_count > eeh_max_freezes) {
+@@ -793,6 +818,10 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
+ result = PCI_ERS_RESULT_DISCONNECT;
+ }
+
++ eeh_for_each_pe(pe, tmp_pe)
++ eeh_pe_for_each_dev(tmp_pe, edev, tmp)
++ edev->mode &= ~EEH_DEV_NO_HANDLER;
++
+ /* Walk the various device drivers attached to this slot through
+ * a reset sequence, giving each an opportunity to do what it needs
+ * to accomplish the reset. Each child gets a report of the
+@@ -969,6 +998,12 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
+ return;
+ }
+ }
++
++ /*
++ * Clean up any PEs without devices. While marked as EEH_PE_RECOVERING
++ * we don't want to modify the PE tree structure, so we do it here.
++ */
++ eeh_pe_cleanup(pe);
+ eeh_pe_state_clear(pe, EEH_PE_RECOVERING, true);
+ }
+
+@@ -981,7 +1016,8 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
+ */
+ void eeh_handle_special_event(void)
+ {
+- struct eeh_pe *pe, *phb_pe;
++ struct eeh_pe *pe, *phb_pe, *tmp_pe;
++ struct eeh_dev *edev, *tmp_edev;
+ struct pci_bus *bus;
+ struct pci_controller *hose;
+ unsigned long flags;
+@@ -1040,6 +1076,7 @@ void eeh_handle_special_event(void)
+ */
+ if (rc == EEH_NEXT_ERR_FROZEN_PE ||
+ rc == EEH_NEXT_ERR_FENCED_PHB) {
++ eeh_pe_state_mark(pe, EEH_PE_RECOVERING);
+ eeh_handle_normal_event(pe);
+ } else {
+ pci_lock_rescan_remove();
+@@ -1050,6 +1087,10 @@ void eeh_handle_special_event(void)
+ (phb_pe->state & EEH_PE_RECOVERING))
+ continue;
+
++ eeh_for_each_pe(pe, tmp_pe)
++ eeh_pe_for_each_dev(tmp_pe, edev, tmp_edev)
++ edev->mode &= ~EEH_DEV_NO_HANDLER;
++
+ /* Notify all devices to be down */
+ eeh_pe_state_clear(pe, EEH_PE_PRI_BUS, true);
+ eeh_set_channel_state(pe, pci_channel_io_perm_failure);
+diff --git a/arch/powerpc/kernel/eeh_event.c b/arch/powerpc/kernel/eeh_event.c
+index 64cfbe41174b..e36653e5f76b 100644
+--- a/arch/powerpc/kernel/eeh_event.c
++++ b/arch/powerpc/kernel/eeh_event.c
+@@ -121,6 +121,14 @@ int __eeh_send_failure_event(struct eeh_pe *pe)
+ }
+ event->pe = pe;
+
++ /*
++ * Mark the PE as recovering before inserting it in the queue.
++ * This prevents the PE from being free()ed by a hotplug driver
++ * while the PE is sitting in the event queue.
++ */
++ if (pe)
++ eeh_pe_state_mark(pe, EEH_PE_RECOVERING);
++
+ /* We may or may not be called in an interrupt context */
+ spin_lock_irqsave(&eeh_eventlist_lock, flags);
+ list_add(&event->list, &eeh_eventlist);
+diff --git a/arch/powerpc/kernel/eeh_pe.c b/arch/powerpc/kernel/eeh_pe.c
+index 854cef7b18f4..f0813d50e0b1 100644
+--- a/arch/powerpc/kernel/eeh_pe.c
++++ b/arch/powerpc/kernel/eeh_pe.c
+@@ -491,6 +491,7 @@ int eeh_add_to_parent_pe(struct eeh_dev *edev)
+ int eeh_rmv_from_parent_pe(struct eeh_dev *edev)
+ {
+ struct eeh_pe *pe, *parent, *child;
++ bool keep, recover;
+ int cnt;
+ struct pci_dn *pdn = eeh_dev_to_pdn(edev);
+
+@@ -516,10 +517,21 @@ int eeh_rmv_from_parent_pe(struct eeh_dev *edev)
+ */
+ while (1) {
+ parent = pe->parent;
++
++ /* PHB PEs should never be removed */
+ if (pe->type & EEH_PE_PHB)
+ break;
+
+- if (!(pe->state & EEH_PE_KEEP)) {
++ /*
++ * XXX: KEEP is set while resetting a PE. I don't think it's
++ * ever set without RECOVERING also being set. I could
++ * be wrong though so catch that with a WARN.
++ */
++ keep = !!(pe->state & EEH_PE_KEEP);
++ recover = !!(pe->state & EEH_PE_RECOVERING);
++ WARN_ON(keep && !recover);
++
++ if (!keep && !recover) {
+ if (list_empty(&pe->edevs) &&
+ list_empty(&pe->child_list)) {
+ list_del(&pe->child);
+@@ -528,6 +540,15 @@ int eeh_rmv_from_parent_pe(struct eeh_dev *edev)
+ break;
+ }
+ } else {
++ /*
++ * Mark the PE as invalid. At the end of the recovery
++ * process any invalid PEs will be garbage collected.
++ *
++ * We need to delay the free()ing of them since we can
++ * remove edevs while traversing the PE tree, which
++ * might trigger the removal of a PE and we can't
++ * deal with that (yet).
++ */
+ if (list_empty(&pe->edevs)) {
+ cnt = 0;
+ list_for_each_entry(child, &pe->child_list, child) {
+diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
+index 6ba3cc2ef8ab..36c8a3652cf3 100644
+--- a/arch/powerpc/kernel/exceptions-64s.S
++++ b/arch/powerpc/kernel/exceptions-64s.S
+@@ -1211,6 +1211,10 @@ FTR_SECTION_ELSE
+ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
+ 9:
+ /* Deliver the machine check to host kernel in V mode. */
++BEGIN_FTR_SECTION
++ ld r10,ORIG_GPR3(r1)
++ mtspr SPRN_CFAR,r10
++END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
+ MACHINE_CHECK_HANDLER_WINDUP
+ EXCEPTION_PROLOG_0 PACA_EXMC
+ b machine_check_pSeries_0
+diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
+index 5faf0a64c92b..05824eb4323b 100644
+--- a/arch/powerpc/kernel/rtas.c
++++ b/arch/powerpc/kernel/rtas.c
+@@ -871,15 +871,17 @@ static int rtas_cpu_state_change_mask(enum rtas_cpu_state state,
+ return 0;
+
+ for_each_cpu(cpu, cpus) {
++ struct device *dev = get_cpu_device(cpu);
++
+ switch (state) {
+ case DOWN:
+- cpuret = cpu_down(cpu);
++ cpuret = device_offline(dev);
+ break;
+ case UP:
+- cpuret = cpu_up(cpu);
++ cpuret = device_online(dev);
+ break;
+ }
+- if (cpuret) {
++ if (cpuret < 0) {
+ pr_debug("%s: cpu_%s for cpu#%d returned %d.\n",
+ __func__,
+ ((state == UP) ? "up" : "down"),
+@@ -968,6 +970,8 @@ int rtas_ibm_suspend_me(u64 handle)
+ data.token = rtas_token("ibm,suspend-me");
+ data.complete = &done;
+
++ lock_device_hotplug();
++
+ /* All present CPUs must be online */
+ cpumask_andnot(offline_mask, cpu_present_mask, cpu_online_mask);
+ cpuret = rtas_online_cpus_mask(offline_mask);
+@@ -1006,6 +1010,7 @@ out_hotplug_enable:
+ __func__);
+
+ out:
++ unlock_device_hotplug();
+ free_cpumask_var(offline_mask);
+ return atomic_read(&data.error);
+ }
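+
+The switch from cpu_up()/cpu_down() to device_online()/device_offline() also
+explains the relaxed error check above: device_online() can return a positive
+value (for instance when the CPU is already in the requested state), so only a
+negative return indicates failure. The resulting shape, condensed from the
+hunk:
+
+    struct device *dev = get_cpu_device(cpu);
+    int cpuret = device_online(dev);
+
+    if (cpuret < 0)     /* >= 0, including "already online", is success */
+            pr_debug("%s: online for cpu#%d returned %d\n",
+                     __func__, cpu, cpuret);
+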
+diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
+index 11caa0291254..82f43535e686 100644
+--- a/arch/powerpc/kernel/traps.c
++++ b/arch/powerpc/kernel/traps.c
+@@ -472,6 +472,7 @@ void system_reset_exception(struct pt_regs *regs)
+ if (debugger(regs))
+ goto out;
+
++ kmsg_dump(KMSG_DUMP_OOPS);
+ /*
+ * A system reset is a request to dump, so we always send
+ * it through the crashdump code (if fadump or kdump are
+diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
+index b4ca9e95e678..c5cc16ab1954 100644
+--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
++++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
+@@ -902,7 +902,7 @@ int __meminit radix__create_section_mapping(unsigned long start, unsigned long e
+ return -1;
+ }
+
+- return create_physical_mapping(start, end, nid);
++ return create_physical_mapping(__pa(start), __pa(end), nid);
+ }
+
+ int __meminit radix__remove_section_mapping(unsigned long start, unsigned long end)
+diff --git a/arch/powerpc/mm/ptdump/ptdump.c b/arch/powerpc/mm/ptdump/ptdump.c
+index 6a88a9f585d4..5d6111a9ee0e 100644
+--- a/arch/powerpc/mm/ptdump/ptdump.c
++++ b/arch/powerpc/mm/ptdump/ptdump.c
+@@ -299,17 +299,15 @@ static void walk_pud(struct pg_state *st, pgd_t *pgd, unsigned long start)
+
+ static void walk_pagetables(struct pg_state *st)
+ {
+- pgd_t *pgd = pgd_offset_k(0UL);
+ unsigned int i;
+- unsigned long addr;
+-
+- addr = st->start_address;
++ unsigned long addr = st->start_address & PGDIR_MASK;
++ pgd_t *pgd = pgd_offset_k(addr);
+
+ /*
+ * Traverse the linux pagetable structure and dump pages that are in
+ * the hash pagetable.
+ */
+- for (i = 0; i < PTRS_PER_PGD; i++, pgd++, addr += PGDIR_SIZE) {
++ for (i = pgd_index(addr); i < PTRS_PER_PGD; i++, pgd++, addr += PGDIR_SIZE) {
+ if (!pgd_none(*pgd) && !pgd_is_leaf(*pgd))
+ /* pgd exists */
+ walk_pud(st, pgd, addr);
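+
+The rewritten walk no longer assumes the dump starts at virtual address zero:
+it rounds start_address down to a PGD boundary and begins at the matching
+top-level slot. A standalone illustration of that address-to-slot arithmetic
+(userspace C; the shift and table size are assumptions for one common 4-level
+layout, not values taken from this kernel):
+
+    #include <stdio.h>
+
+    #define PGDIR_SHIFT     39              /* assumed: 512 GiB per slot */
+    #define PTRS_PER_PGD    512
+    #define PGDIR_SIZE      (1UL << PGDIR_SHIFT)
+    #define PGDIR_MASK      (~(PGDIR_SIZE - 1))
+    #define pgd_index(a)    (((a) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
+
+    int main(void)
+    {
+            unsigned long start = 0xc000001234567000UL;
+            unsigned long addr  = start & PGDIR_MASK;
+
+            /* The walk begins at this slot instead of slot 0. */
+            printf("pgd_index = %lu, aligned addr = %#lx\n",
+                   pgd_index(addr), addr);
+            return 0;
+    }
+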
+diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
+index dea243185ea4..cb50a9e1fd2d 100644
+--- a/arch/powerpc/perf/imc-pmu.c
++++ b/arch/powerpc/perf/imc-pmu.c
+@@ -577,6 +577,7 @@ static int core_imc_mem_init(int cpu, int size)
+ {
+ int nid, rc = 0, core_id = (cpu / threads_per_core);
+ struct imc_mem_info *mem_info;
++ struct page *page;
+
+ /*
+ * alloc_pages_node() will allocate memory for core in the
+@@ -587,11 +588,12 @@ static int core_imc_mem_init(int cpu, int size)
+ mem_info->id = core_id;
+
+ /* We need only vbase for core counters */
+- mem_info->vbase = page_address(alloc_pages_node(nid,
+- GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE |
+- __GFP_NOWARN, get_order(size)));
+- if (!mem_info->vbase)
++ page = alloc_pages_node(nid,
++ GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE |
++ __GFP_NOWARN, get_order(size));
++ if (!page)
+ return -ENOMEM;
++ mem_info->vbase = page_address(page);
+
+ /* Init the mutex */
+ core_imc_refc[core_id].id = core_id;
+@@ -849,15 +851,17 @@ static int thread_imc_mem_alloc(int cpu_id, int size)
+ int nid = cpu_to_node(cpu_id);
+
+ if (!local_mem) {
++ struct page *page;
+ /*
+ * This case could happen only once at start, since we dont
+ * free the memory in cpu offline path.
+ */
+- local_mem = page_address(alloc_pages_node(nid,
++ page = alloc_pages_node(nid,
+ GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE |
+- __GFP_NOWARN, get_order(size)));
+- if (!local_mem)
++ __GFP_NOWARN, get_order(size));
++ if (!page)
+ return -ENOMEM;
++ local_mem = page_address(page);
+
+ per_cpu(thread_imc_mem, cpu_id) = local_mem;
+ }
+@@ -1095,11 +1099,14 @@ static int trace_imc_mem_alloc(int cpu_id, int size)
+ int core_id = (cpu_id / threads_per_core);
+
+ if (!local_mem) {
+- local_mem = page_address(alloc_pages_node(phys_id,
+- GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE |
+- __GFP_NOWARN, get_order(size)));
+- if (!local_mem)
++ struct page *page;
++
++ page = alloc_pages_node(phys_id,
++ GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE |
++ __GFP_NOWARN, get_order(size));
++ if (!page)
+ return -ENOMEM;
++ local_mem = page_address(page);
+ per_cpu(trace_imc_mem, cpu_id) = local_mem;
+
+ /* Initialise the counters for trace mode */
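+
+All three hunks above fix the same bug shape: the old code passed the result
+of alloc_pages_node() straight to page_address() and only then tested for
+NULL, but page_address() on a NULL page does not reliably return NULL, so a
+failed allocation went undetected. The corrected pattern shared by the
+core_imc, thread_imc and trace_imc paths:
+
+    struct page *page = alloc_pages_node(nid,
+                            GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE |
+                            __GFP_NOWARN, get_order(size));
+
+    if (!page)                      /* test the page, not a derived pointer */
+            return -ENOMEM;
+    vbase = page_address(page);     /* only meaningful once page != NULL */
+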
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda-tce.c b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
+index e28f03e1eb5e..c75ec37bf0cd 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda-tce.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
+@@ -36,7 +36,8 @@ static __be64 *pnv_alloc_tce_level(int nid, unsigned int shift)
+ struct page *tce_mem = NULL;
+ __be64 *addr;
+
+- tce_mem = alloc_pages_node(nid, GFP_KERNEL, shift - PAGE_SHIFT);
++ tce_mem = alloc_pages_node(nid, GFP_ATOMIC | __GFP_NOWARN,
++ shift - PAGE_SHIFT);
+ if (!tce_mem) {
+ pr_err("Failed to allocate a TCE memory, level shift=%d\n",
+ shift);
+@@ -161,6 +162,9 @@ void pnv_tce_free(struct iommu_table *tbl, long index, long npages)
+
+ if (ptce)
+ *ptce = cpu_to_be64(0);
++ else
++ /* Skip the rest of the level */
++ i |= tbl->it_level_size - 1;
+ }
+ }
+
+@@ -260,7 +264,6 @@ long pnv_pci_ioda2_table_alloc_pages(int nid, __u64 bus_offset,
+ unsigned int table_shift = max_t(unsigned int, entries_shift + 3,
+ PAGE_SHIFT);
+ const unsigned long tce_table_size = 1UL << table_shift;
+- unsigned int tmplevels = levels;
+
+ if (!levels || (levels > POWERNV_IOMMU_MAX_LEVELS))
+ return -EINVAL;
+@@ -268,9 +271,6 @@ long pnv_pci_ioda2_table_alloc_pages(int nid, __u64 bus_offset,
+ if (!is_power_of_2(window_size))
+ return -EINVAL;
+
+- if (alloc_userspace_copy && (window_size > (1ULL << 32)))
+- tmplevels = 1;
+-
+ /* Adjust direct table size from window_size and levels */
+ entries_shift = (entries_shift + levels - 1) / levels;
+ level_shift = entries_shift + 3;
+@@ -281,7 +281,7 @@ long pnv_pci_ioda2_table_alloc_pages(int nid, __u64 bus_offset,
+
+ /* Allocate TCE table */
+ addr = pnv_pci_ioda2_table_do_alloc_pages(nid, level_shift,
+- tmplevels, tce_table_size, &offset, &total_allocated);
++ 1, tce_table_size, &offset, &total_allocated);
+
+ /* addr==NULL means that the first level allocation failed */
+ if (!addr)
+@@ -292,18 +292,18 @@ long pnv_pci_ioda2_table_alloc_pages(int nid, __u64 bus_offset,
+ * we did not allocate as much as we wanted,
+ * release partially allocated table.
+ */
+- if (tmplevels == levels && offset < tce_table_size)
++ if (levels == 1 && offset < tce_table_size)
+ goto free_tces_exit;
+
+ /* Allocate userspace view of the TCE table */
+ if (alloc_userspace_copy) {
+ offset = 0;
+ uas = pnv_pci_ioda2_table_do_alloc_pages(nid, level_shift,
+- tmplevels, tce_table_size, &offset,
++ 1, tce_table_size, &offset,
+ &total_allocated_uas);
+ if (!uas)
+ goto free_tces_exit;
+- if (tmplevels == levels && (offset < tce_table_size ||
++ if (levels == 1 && (offset < tce_table_size ||
+ total_allocated_uas != total_allocated))
+ goto free_uas_exit;
+ }
+@@ -318,7 +318,7 @@ long pnv_pci_ioda2_table_alloc_pages(int nid, __u64 bus_offset,
+
+ pr_debug("Created TCE table: ws=%08llx ts=%lx @%08llx base=%lx uas=%p levels=%d/%d\n",
+ window_size, tce_table_size, bus_offset, tbl->it_base,
+- tbl->it_userspace, tmplevels, levels);
++ tbl->it_userspace, 1, levels);
+
+ return 0;
+
+diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
+index 469c24463247..f914f0b14e4e 100644
+--- a/arch/powerpc/platforms/powernv/pci.h
++++ b/arch/powerpc/platforms/powernv/pci.h
+@@ -219,7 +219,7 @@ extern struct iommu_table_group *pnv_npu_compound_attach(
+ struct pnv_ioda_pe *pe);
+
+ /* pci-ioda-tce.c */
+-#define POWERNV_IOMMU_DEFAULT_LEVELS 1
++#define POWERNV_IOMMU_DEFAULT_LEVELS 2
+ #define POWERNV_IOMMU_MAX_LEVELS 5
+
+ extern int pnv_tce_build(struct iommu_table *tbl, long index, long npages,
+diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c
+index fe812bebdf5e..b571285f6c14 100644
+--- a/arch/powerpc/platforms/pseries/mobility.c
++++ b/arch/powerpc/platforms/pseries/mobility.c
+@@ -9,6 +9,7 @@
+ #include <linux/cpu.h>
+ #include <linux/kernel.h>
+ #include <linux/kobject.h>
++#include <linux/sched.h>
+ #include <linux/smp.h>
+ #include <linux/stat.h>
+ #include <linux/completion.h>
+@@ -207,7 +208,11 @@ static int update_dt_node(__be32 phandle, s32 scope)
+
+ prop_data += vd;
+ }
++
++ cond_resched();
+ }
++
++ cond_resched();
+ } while (rtas_rc == 1);
+
+ of_node_put(dn);
+@@ -310,8 +315,12 @@ int pseries_devicetree_update(s32 scope)
+ add_dt_node(phandle, drc_index);
+ break;
+ }
++
++ cond_resched();
+ }
+ }
++
++ cond_resched();
+ } while (rc == 1);
+
+ kfree(rtas_buf);
+diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
+index f5940cc71c37..63462e96cf0e 100644
+--- a/arch/powerpc/platforms/pseries/setup.c
++++ b/arch/powerpc/platforms/pseries/setup.c
+@@ -316,6 +316,9 @@ static void pseries_lpar_idle(void)
+ * low power mode by ceding processor to hypervisor
+ */
+
++ if (!prep_irq_for_idle())
++ return;
++
+ /* Indicate to hypervisor that we are idle. */
+ get_lppaca()->idle = 1;
+
+diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
+index 14e56c25879f..25d4adccf750 100644
+--- a/arch/powerpc/xmon/xmon.c
++++ b/arch/powerpc/xmon/xmon.c
+@@ -2534,13 +2534,16 @@ static void dump_pacas(void)
+ static void dump_one_xive(int cpu)
+ {
+ unsigned int hwid = get_hard_smp_processor_id(cpu);
+-
+- opal_xive_dump(XIVE_DUMP_TM_HYP, hwid);
+- opal_xive_dump(XIVE_DUMP_TM_POOL, hwid);
+- opal_xive_dump(XIVE_DUMP_TM_OS, hwid);
+- opal_xive_dump(XIVE_DUMP_TM_USER, hwid);
+- opal_xive_dump(XIVE_DUMP_VP, hwid);
+- opal_xive_dump(XIVE_DUMP_EMU_STATE, hwid);
++ bool hv = cpu_has_feature(CPU_FTR_HVMODE);
++
++ if (hv) {
++ opal_xive_dump(XIVE_DUMP_TM_HYP, hwid);
++ opal_xive_dump(XIVE_DUMP_TM_POOL, hwid);
++ opal_xive_dump(XIVE_DUMP_TM_OS, hwid);
++ opal_xive_dump(XIVE_DUMP_TM_USER, hwid);
++ opal_xive_dump(XIVE_DUMP_VP, hwid);
++ opal_xive_dump(XIVE_DUMP_EMU_STATE, hwid);
++ }
+
+ if (setjmp(bus_error_jmp) != 0) {
+ catch_memory_errors = 0;
+diff --git a/arch/s390/hypfs/inode.c b/arch/s390/hypfs/inode.c
+index ccad1398abd4..b5cfcad953c2 100644
+--- a/arch/s390/hypfs/inode.c
++++ b/arch/s390/hypfs/inode.c
+@@ -269,7 +269,7 @@ static int hypfs_show_options(struct seq_file *s, struct dentry *root)
+ static int hypfs_fill_super(struct super_block *sb, void *data, int silent)
+ {
+ struct inode *root_inode;
+- struct dentry *root_dentry;
++ struct dentry *root_dentry, *update_file;
+ int rc = 0;
+ struct hypfs_sb_info *sbi;
+
+@@ -300,9 +300,10 @@ static int hypfs_fill_super(struct super_block *sb, void *data, int silent)
+ rc = hypfs_diag_create_files(root_dentry);
+ if (rc)
+ return rc;
+- sbi->update_file = hypfs_create_update_file(root_dentry);
+- if (IS_ERR(sbi->update_file))
+- return PTR_ERR(sbi->update_file);
++ update_file = hypfs_create_update_file(root_dentry);
++ if (IS_ERR(update_file))
++ return PTR_ERR(update_file);
++ sbi->update_file = update_file;
+ hypfs_update_update(sb);
+ pr_info("Hypervisor filesystem mounted\n");
+ return 0;
+diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
+index fff790a3f4ee..c0867b0aae3e 100644
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -645,7 +645,9 @@ static int stimer_notify_direct(struct kvm_vcpu_hv_stimer *stimer)
+ .vector = stimer->config.apic_vector
+ };
+
+- return !kvm_apic_set_irq(vcpu, &irq, NULL);
++ if (lapic_in_kernel(vcpu))
++ return !kvm_apic_set_irq(vcpu, &irq, NULL);
++ return 0;
+ }
+
+ static void stimer_expiration(struct kvm_vcpu_hv_stimer *stimer)
+@@ -1852,7 +1854,13 @@ int kvm_vcpu_ioctl_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
+
+ ent->edx |= HV_FEATURE_FREQUENCY_MSRS_AVAILABLE;
+ ent->edx |= HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE;
+- ent->edx |= HV_STIMER_DIRECT_MODE_AVAILABLE;
++
++ /*
++ * Direct Synthetic timers only make sense with an in-kernel
++ * LAPIC.
++ */
++ if (lapic_in_kernel(vcpu))
++ ent->edx |= HV_STIMER_DIRECT_MODE_AVAILABLE;
+
+ break;
+
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index b33be928d164..70bcbd02edcb 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -5809,12 +5809,14 @@ static void bfq_update_inject_limit(struct bfq_data *bfqd,
+ */
+ if ((bfqq->last_serv_time_ns == 0 && bfqd->rq_in_driver == 1) ||
+ tot_time_ns < bfqq->last_serv_time_ns) {
++ if (bfqq->last_serv_time_ns == 0) {
++ /*
++ * Now we certainly have a base value: make sure we
++ * start trying injection.
++ */
++ bfqq->inject_limit = max_t(unsigned int, 1, old_limit);
++ }
+ bfqq->last_serv_time_ns = tot_time_ns;
+- /*
+- * Now we certainly have a base value: make sure we
+- * start trying injection.
+- */
+- bfqq->inject_limit = max_t(unsigned int, 1, old_limit);
+ } else if (!bfqd->rqs_injected && bfqd->rq_in_driver == 1)
+ /*
+ * No I/O injected and no request still in service in
+diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
+index 024060165afa..76457003f140 100644
+--- a/drivers/block/pktcdvd.c
++++ b/drivers/block/pktcdvd.c
+@@ -2594,7 +2594,6 @@ static int pkt_new_dev(struct pktcdvd_device *pd, dev_t dev)
+ if (ret)
+ return ret;
+ if (!blk_queue_scsi_passthrough(bdev_get_queue(bdev))) {
+- WARN_ONCE(true, "Attempt to register a non-SCSI queue\n");
+ blkdev_put(bdev, FMODE_READ | FMODE_NDELAY);
+ return -EINVAL;
+ }
+diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
+index da5b6723329a..28693dbcb0c3 100644
+--- a/drivers/char/ipmi/ipmi_si_intf.c
++++ b/drivers/char/ipmi/ipmi_si_intf.c
+@@ -221,6 +221,9 @@ struct smi_info {
+ */
+ bool irq_enable_broken;
+
++ /* Is the driver in maintenance mode? */
++ bool in_maintenance_mode;
++
+ /*
+ * Did we get an attention that we did not handle?
+ */
+@@ -1007,11 +1010,20 @@ static int ipmi_thread(void *data)
+ spin_unlock_irqrestore(&(smi_info->si_lock), flags);
+ busy_wait = ipmi_thread_busy_wait(smi_result, smi_info,
+ &busy_until);
+- if (smi_result == SI_SM_CALL_WITHOUT_DELAY)
++ if (smi_result == SI_SM_CALL_WITHOUT_DELAY) {
+ ; /* do nothing */
+- else if (smi_result == SI_SM_CALL_WITH_DELAY && busy_wait)
+- schedule();
+- else if (smi_result == SI_SM_IDLE) {
++ } else if (smi_result == SI_SM_CALL_WITH_DELAY && busy_wait) {
++ /*
++ * In maintenance mode we run as fast as
++ * possible so that firmware updates can
++ * complete as quickly as possible; in normal
++ * operation we avoid banging on the scheduler.
++ */
++ if (smi_info->in_maintenance_mode)
++ schedule();
++ else
++ usleep_range(100, 200);
++ } else if (smi_result == SI_SM_IDLE) {
+ if (atomic_read(&smi_info->need_watch)) {
+ schedule_timeout_interruptible(100);
+ } else {
+@@ -1019,8 +1031,9 @@ static int ipmi_thread(void *data)
+ __set_current_state(TASK_INTERRUPTIBLE);
+ schedule();
+ }
+- } else
++ } else {
+ schedule_timeout_interruptible(1);
++ }
+ }
+ return 0;
+ }
+@@ -1198,6 +1211,7 @@ static void set_maintenance_mode(void *send_info, bool enable)
+
+ if (!enable)
+ atomic_set(&smi_info->req_events, 0);
++ smi_info->in_maintenance_mode = enable;
+ }
+
+ static void shutdown_smi(void *send_info);
+diff --git a/drivers/clk/actions/owl-common.c b/drivers/clk/actions/owl-common.c
+index 32dd29e0a37e..4de97cc7cb54 100644
+--- a/drivers/clk/actions/owl-common.c
++++ b/drivers/clk/actions/owl-common.c
+@@ -68,16 +68,17 @@ int owl_clk_probe(struct device *dev, struct clk_hw_onecell_data *hw_clks)
+ struct clk_hw *hw;
+
+ for (i = 0; i < hw_clks->num; i++) {
++ const char *name;
+
+ hw = hw_clks->hws[i];
+-
+ if (IS_ERR_OR_NULL(hw))
+ continue;
+
++ name = hw->init->name;
+ ret = devm_clk_hw_register(dev, hw);
+ if (ret) {
+ dev_err(dev, "Couldn't register clock %d - %s\n",
+- i, hw->init->name);
++ i, name);
+ return ret;
+ }
+ }
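+
+This small dance - caching hw->init->name in a local before registering - is
+repeated below in the axg-audio, sirf, sprd, sunxi-ng and zx296718 hunks. The
+likely reason (an assumption consistent with all of these fixes): clock
+registration takes ownership of, and may free or clear, the init data hanging
+off the clk_hw, so dereferencing hw->init->name in the error path afterwards
+is a use-after-free. The pattern:
+
+    const char *name = hw->init->name;      /* save before registering */
+
+    ret = devm_clk_hw_register(dev, hw);
+    if (ret) {
+            /* hw->init may no longer be valid here; use the saved copy */
+            dev_err(dev, "Couldn't register clock %d - %s\n", i, name);
+            return ret;
+    }
+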
+diff --git a/drivers/clk/at91/clk-main.c b/drivers/clk/at91/clk-main.c
+index f607ee702c83..311cea0c3ae2 100644
+--- a/drivers/clk/at91/clk-main.c
++++ b/drivers/clk/at91/clk-main.c
+@@ -21,6 +21,10 @@
+
+ #define MOR_KEY_MASK (0xff << 16)
+
++#define clk_main_parent_select(s) (((s) & \
++ (AT91_PMC_MOSCEN | \
++ AT91_PMC_OSCBYPASS)) ? 1 : 0)
++
+ struct clk_main_osc {
+ struct clk_hw hw;
+ struct regmap *regmap;
+@@ -113,7 +117,7 @@ static int clk_main_osc_is_prepared(struct clk_hw *hw)
+
+ regmap_read(regmap, AT91_PMC_SR, &status);
+
+- return (status & AT91_PMC_MOSCS) && (tmp & AT91_PMC_MOSCEN);
++ return (status & AT91_PMC_MOSCS) && clk_main_parent_select(tmp);
+ }
+
+ static const struct clk_ops main_osc_ops = {
+@@ -450,7 +454,7 @@ static u8 clk_sam9x5_main_get_parent(struct clk_hw *hw)
+
+ regmap_read(clkmain->regmap, AT91_CKGR_MOR, &status);
+
+- return status & AT91_PMC_MOSCEN ? 1 : 0;
++ return clk_main_parent_select(status);
+ }
+
+ static const struct clk_ops sam9x5_main_ops = {
+@@ -492,7 +496,7 @@ at91_clk_register_sam9x5_main(struct regmap *regmap,
+ clkmain->hw.init = &init;
+ clkmain->regmap = regmap;
+ regmap_read(clkmain->regmap, AT91_CKGR_MOR, &status);
+- clkmain->parent = status & AT91_PMC_MOSCEN ? 1 : 0;
++ clkmain->parent = clk_main_parent_select(status);
+
+ hw = &clkmain->hw;
+ ret = clk_hw_register(NULL, &clkmain->hw);
+diff --git a/drivers/clk/clk-bulk.c b/drivers/clk/clk-bulk.c
+index 524bf9a53098..e9e16425c739 100644
+--- a/drivers/clk/clk-bulk.c
++++ b/drivers/clk/clk-bulk.c
+@@ -18,10 +18,13 @@ static int __must_check of_clk_bulk_get(struct device_node *np, int num_clks,
+ int ret;
+ int i;
+
+- for (i = 0; i < num_clks; i++)
++ for (i = 0; i < num_clks; i++) {
++ clks[i].id = NULL;
+ clks[i].clk = NULL;
++ }
+
+ for (i = 0; i < num_clks; i++) {
++ of_property_read_string_index(np, "clock-names", i, &clks[i].id);
+ clks[i].clk = of_clk_get(np, i);
+ if (IS_ERR(clks[i].clk)) {
+ ret = PTR_ERR(clks[i].clk);
+diff --git a/drivers/clk/clk-qoriq.c b/drivers/clk/clk-qoriq.c
+index 07f3b252f3e0..bed140f7375f 100644
+--- a/drivers/clk/clk-qoriq.c
++++ b/drivers/clk/clk-qoriq.c
+@@ -686,7 +686,7 @@ static const struct clockgen_chipinfo chipinfo[] = {
+ .guts_compat = "fsl,qoriq-device-config-1.0",
+ .init_periph = p5020_init_periph,
+ .cmux_groups = {
+- &p2041_cmux_grp1, &p2041_cmux_grp2
++ &p5020_cmux_grp1, &p5020_cmux_grp2
+ },
+ .cmux_to_group = {
+ 0, 1, -1
+diff --git a/drivers/clk/imx/clk-imx8mq.c b/drivers/clk/imx/clk-imx8mq.c
+index d407a07e7e6d..e07c69afc359 100644
+--- a/drivers/clk/imx/clk-imx8mq.c
++++ b/drivers/clk/imx/clk-imx8mq.c
+@@ -406,7 +406,8 @@ static int imx8mq_clocks_probe(struct platform_device *pdev)
+ clks[IMX8MQ_CLK_NOC_APB] = imx8m_clk_composite_critical("noc_apb", imx8mq_noc_apb_sels, base + 0x8d80);
+
+ /* AHB */
+- clks[IMX8MQ_CLK_AHB] = imx8m_clk_composite("ahb", imx8mq_ahb_sels, base + 0x9000);
+ /* The AHB clock is used by the AHB bus and is therefore marked as critical */
++ clks[IMX8MQ_CLK_AHB] = imx8m_clk_composite_critical("ahb", imx8mq_ahb_sels, base + 0x9000);
+ clks[IMX8MQ_CLK_AUDIO_AHB] = imx8m_clk_composite("audio_ahb", imx8mq_audio_ahb_sels, base + 0x9100);
+
+ /* IPG */
+diff --git a/drivers/clk/imx/clk-pll14xx.c b/drivers/clk/imx/clk-pll14xx.c
+index b7213023b238..7a815ec76aa5 100644
+--- a/drivers/clk/imx/clk-pll14xx.c
++++ b/drivers/clk/imx/clk-pll14xx.c
+@@ -191,6 +191,10 @@ static int clk_pll1416x_set_rate(struct clk_hw *hw, unsigned long drate,
+ tmp &= ~RST_MASK;
+ writel_relaxed(tmp, pll->base);
+
++ /* Enable BYPASS */
++ tmp |= BYPASS_MASK;
++ writel(tmp, pll->base);
++
+ div_val = (rate->mdiv << MDIV_SHIFT) | (rate->pdiv << PDIV_SHIFT) |
+ (rate->sdiv << SDIV_SHIFT);
+ writel_relaxed(div_val, pll->base + 0x4);
+@@ -250,6 +254,10 @@ static int clk_pll1443x_set_rate(struct clk_hw *hw, unsigned long drate,
+ tmp &= ~RST_MASK;
+ writel_relaxed(tmp, pll->base);
+
++ /* Enable BYPASS */
++ tmp |= BYPASS_MASK;
++ writel_relaxed(tmp, pll->base);
++
+ div_val = (rate->mdiv << MDIV_SHIFT) | (rate->pdiv << PDIV_SHIFT) |
+ (rate->sdiv << SDIV_SHIFT);
+ writel_relaxed(div_val, pll->base + 0x4);
+@@ -283,16 +291,28 @@ static int clk_pll14xx_prepare(struct clk_hw *hw)
+ {
+ struct clk_pll14xx *pll = to_clk_pll14xx(hw);
+ u32 val;
++ int ret;
+
+ /*
+ * RESETB = 1 from 0, PLL starts its normal
+ * operation after lock time
+ */
+ val = readl_relaxed(pll->base + GNRL_CTL);
++ if (val & RST_MASK)
++ return 0;
++ val |= BYPASS_MASK;
++ writel_relaxed(val, pll->base + GNRL_CTL);
+ val |= RST_MASK;
+ writel_relaxed(val, pll->base + GNRL_CTL);
+
+- return clk_pll14xx_wait_lock(pll);
++ ret = clk_pll14xx_wait_lock(pll);
++ if (ret)
++ return ret;
++
++ val &= ~BYPASS_MASK;
++ writel_relaxed(val, pll->base + GNRL_CTL);
++
++ return 0;
+ }
+
+ static int clk_pll14xx_is_prepared(struct clk_hw *hw)
+@@ -348,6 +368,7 @@ struct clk *imx_clk_pll14xx(const char *name, const char *parent_name,
+ struct clk_pll14xx *pll;
+ struct clk *clk;
+ struct clk_init_data init;
++ u32 val;
+
+ pll = kzalloc(sizeof(*pll), GFP_KERNEL);
+ if (!pll)
+@@ -379,6 +400,10 @@ struct clk *imx_clk_pll14xx(const char *name, const char *parent_name,
+ pll->rate_table = pll_clk->rate_table;
+ pll->rate_count = pll_clk->rate_count;
+
++ val = readl_relaxed(pll->base + GNRL_CTL);
++ val &= ~BYPASS_MASK;
++ writel_relaxed(val, pll->base + GNRL_CTL);
++
+ clk = clk_register(NULL, &pll->hw);
+ if (IS_ERR(clk)) {
+ pr_err("%s: failed to register pll %s %lu\n",
+diff --git a/drivers/clk/ingenic/jz4740-cgu.c b/drivers/clk/ingenic/jz4740-cgu.c
+index 4c0a20949c2c..9b27d75d9485 100644
+--- a/drivers/clk/ingenic/jz4740-cgu.c
++++ b/drivers/clk/ingenic/jz4740-cgu.c
+@@ -53,6 +53,10 @@ static const u8 jz4740_cgu_cpccr_div_table[] = {
+ 1, 2, 3, 4, 6, 8, 12, 16, 24, 32,
+ };
+
++static const u8 jz4740_cgu_pll_half_div_table[] = {
++ 2, 1,
++};
++
+ static const struct ingenic_cgu_clk_info jz4740_cgu_clocks[] = {
+
+ /* External clocks */
+@@ -86,7 +90,10 @@ static const struct ingenic_cgu_clk_info jz4740_cgu_clocks[] = {
+ [JZ4740_CLK_PLL_HALF] = {
+ "pll half", CGU_CLK_DIV,
+ .parents = { JZ4740_CLK_PLL, -1, -1, -1 },
+- .div = { CGU_REG_CPCCR, 21, 1, 1, -1, -1, -1 },
++ .div = {
++ CGU_REG_CPCCR, 21, 1, 1, -1, -1, -1,
++ jz4740_cgu_pll_half_div_table,
++ },
+ },
+
+ [JZ4740_CLK_CCLK] = {
+diff --git a/drivers/clk/meson/axg-audio.c b/drivers/clk/meson/axg-audio.c
+index 8028ff6f6610..db0b73d53551 100644
+--- a/drivers/clk/meson/axg-audio.c
++++ b/drivers/clk/meson/axg-audio.c
+@@ -992,15 +992,18 @@ static int axg_audio_clkc_probe(struct platform_device *pdev)
+
+ /* Take care to skip the registered input clocks */
+ for (i = AUD_CLKID_DDR_ARB; i < data->hw_onecell_data->num; i++) {
++ const char *name;
++
+ hw = data->hw_onecell_data->hws[i];
+ /* array might be sparse */
+ if (!hw)
+ continue;
+
++ name = hw->init->name;
++
+ ret = devm_clk_hw_register(dev, hw);
+ if (ret) {
+- dev_err(dev, "failed to register clock %s\n",
+- hw->init->name);
++ dev_err(dev, "failed to register clock %s\n", name);
+ return ret;
+ }
+ }
+diff --git a/drivers/clk/qcom/gcc-sdm845.c b/drivers/clk/qcom/gcc-sdm845.c
+index 7131dcf9b060..95be125c3bdd 100644
+--- a/drivers/clk/qcom/gcc-sdm845.c
++++ b/drivers/clk/qcom/gcc-sdm845.c
+@@ -685,7 +685,7 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = {
+ .name = "gcc_sdcc2_apps_clk_src",
+ .parent_names = gcc_parent_names_10,
+ .num_parents = 5,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_floor_ops,
+ },
+ };
+
+@@ -709,7 +709,7 @@ static struct clk_rcg2 gcc_sdcc4_apps_clk_src = {
+ .name = "gcc_sdcc4_apps_clk_src",
+ .parent_names = gcc_parent_names_0,
+ .num_parents = 4,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_floor_ops,
+ },
+ };
+
+diff --git a/drivers/clk/renesas/clk-mstp.c b/drivers/clk/renesas/clk-mstp.c
+index 2db9093546c6..e326e6dc09fc 100644
+--- a/drivers/clk/renesas/clk-mstp.c
++++ b/drivers/clk/renesas/clk-mstp.c
+@@ -334,7 +334,8 @@ void __init cpg_mstp_add_clk_domain(struct device_node *np)
+ return;
+
+ pd->name = np->name;
+- pd->flags = GENPD_FLAG_PM_CLK | GENPD_FLAG_ACTIVE_WAKEUP;
++ pd->flags = GENPD_FLAG_PM_CLK | GENPD_FLAG_ALWAYS_ON |
++ GENPD_FLAG_ACTIVE_WAKEUP;
+ pd->attach_dev = cpg_mstp_attach_dev;
+ pd->detach_dev = cpg_mstp_detach_dev;
+ pm_genpd_init(pd, &pm_domain_always_on_gov, false);
+diff --git a/drivers/clk/renesas/renesas-cpg-mssr.c b/drivers/clk/renesas/renesas-cpg-mssr.c
+index d4075b130674..132cc96895e3 100644
+--- a/drivers/clk/renesas/renesas-cpg-mssr.c
++++ b/drivers/clk/renesas/renesas-cpg-mssr.c
+@@ -551,7 +551,8 @@ static int __init cpg_mssr_add_clk_domain(struct device *dev,
+
+ genpd = &pd->genpd;
+ genpd->name = np->name;
+- genpd->flags = GENPD_FLAG_PM_CLK | GENPD_FLAG_ACTIVE_WAKEUP;
++ genpd->flags = GENPD_FLAG_PM_CLK | GENPD_FLAG_ALWAYS_ON |
++ GENPD_FLAG_ACTIVE_WAKEUP;
+ genpd->attach_dev = cpg_mssr_attach_dev;
+ genpd->detach_dev = cpg_mssr_detach_dev;
+ pm_genpd_init(genpd, &pm_domain_always_on_gov, false);
+diff --git a/drivers/clk/sirf/clk-common.c b/drivers/clk/sirf/clk-common.c
+index ad7951b6b285..dcf4e25a0216 100644
+--- a/drivers/clk/sirf/clk-common.c
++++ b/drivers/clk/sirf/clk-common.c
+@@ -297,9 +297,10 @@ static u8 dmn_clk_get_parent(struct clk_hw *hw)
+ {
+ struct clk_dmn *clk = to_dmnclk(hw);
+ u32 cfg = clkc_readl(clk->regofs);
++ const char *name = clk_hw_get_name(hw);
+
+ /* parent of io domain can only be pll3 */
+- if (strcmp(hw->init->name, "io") == 0)
++ if (strcmp(name, "io") == 0)
+ return 4;
+
+ WARN_ON((cfg & (BIT(3) - 1)) > 4);
+@@ -311,9 +312,10 @@ static int dmn_clk_set_parent(struct clk_hw *hw, u8 parent)
+ {
+ struct clk_dmn *clk = to_dmnclk(hw);
+ u32 cfg = clkc_readl(clk->regofs);
++ const char *name = clk_hw_get_name(hw);
+
+ /* parent of io domain can only be pll3 */
+- if (strcmp(hw->init->name, "io") == 0)
++ if (strcmp(name, "io") == 0)
+ return -EINVAL;
+
+ cfg &= ~(BIT(3) - 1);
+@@ -353,7 +355,8 @@ static long dmn_clk_round_rate(struct clk_hw *hw, unsigned long rate,
+ {
+ unsigned long fin;
+ unsigned ratio, wait, hold;
+- unsigned bits = (strcmp(hw->init->name, "mem") == 0) ? 3 : 4;
++ const char *name = clk_hw_get_name(hw);
++ unsigned bits = (strcmp(name, "mem") == 0) ? 3 : 4;
+
+ fin = *parent_rate;
+ ratio = fin / rate;
+@@ -375,7 +378,8 @@ static int dmn_clk_set_rate(struct clk_hw *hw, unsigned long rate,
+ struct clk_dmn *clk = to_dmnclk(hw);
+ unsigned long fin;
+ unsigned ratio, wait, hold, reg;
+- unsigned bits = (strcmp(hw->init->name, "mem") == 0) ? 3 : 4;
++ const char *name = clk_hw_get_name(hw);
++ unsigned bits = (strcmp(name, "mem") == 0) ? 3 : 4;
+
+ fin = parent_rate;
+ ratio = fin / rate;
+diff --git a/drivers/clk/sprd/common.c b/drivers/clk/sprd/common.c
+index a5bdca1de5d0..9d56eac43832 100644
+--- a/drivers/clk/sprd/common.c
++++ b/drivers/clk/sprd/common.c
+@@ -76,16 +76,17 @@ int sprd_clk_probe(struct device *dev, struct clk_hw_onecell_data *clkhw)
+ struct clk_hw *hw;
+
+ for (i = 0; i < clkhw->num; i++) {
++ const char *name;
+
+ hw = clkhw->hws[i];
+-
+ if (!hw)
+ continue;
+
++ name = hw->init->name;
+ ret = devm_clk_hw_register(dev, hw);
+ if (ret) {
+ dev_err(dev, "Couldn't register clock %d - %s\n",
+- i, hw->init->name);
++ i, name);
+ return ret;
+ }
+ }
+diff --git a/drivers/clk/sprd/pll.c b/drivers/clk/sprd/pll.c
+index 36b4402bf09e..640270f51aa5 100644
+--- a/drivers/clk/sprd/pll.c
++++ b/drivers/clk/sprd/pll.c
+@@ -136,6 +136,7 @@ static unsigned long _sprd_pll_recalc_rate(const struct sprd_pll *pll,
+ k2 + refin * nint * CLK_PLL_1M;
+ }
+
++ kfree(cfg);
+ return rate;
+ }
+
+@@ -222,6 +223,7 @@ static int _sprd_pll_set_rate(const struct sprd_pll *pll,
+ if (!ret)
+ udelay(pll->udelay);
+
++ kfree(cfg);
+ return ret;
+ }
+
+diff --git a/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c b/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c
+index 9b3939fc7faa..5ca4d34b4094 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c
++++ b/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c
+@@ -502,6 +502,9 @@ static struct clk_hw_onecell_data sun8i_v3s_hw_clks = {
+ [CLK_MMC1] = &mmc1_clk.common.hw,
+ [CLK_MMC1_SAMPLE] = &mmc1_sample_clk.common.hw,
+ [CLK_MMC1_OUTPUT] = &mmc1_output_clk.common.hw,
++ [CLK_MMC2] = &mmc2_clk.common.hw,
++ [CLK_MMC2_SAMPLE] = &mmc2_sample_clk.common.hw,
++ [CLK_MMC2_OUTPUT] = &mmc2_output_clk.common.hw,
+ [CLK_CE] = &ce_clk.common.hw,
+ [CLK_SPI0] = &spi0_clk.common.hw,
+ [CLK_USB_PHY0] = &usb_phy0_clk.common.hw,
+diff --git a/drivers/clk/sunxi-ng/ccu_common.c b/drivers/clk/sunxi-ng/ccu_common.c
+index 7fe3ac980e5f..2e20e650b6c0 100644
+--- a/drivers/clk/sunxi-ng/ccu_common.c
++++ b/drivers/clk/sunxi-ng/ccu_common.c
+@@ -97,14 +97,15 @@ int sunxi_ccu_probe(struct device_node *node, void __iomem *reg,
+
+ for (i = 0; i < desc->hw_clks->num ; i++) {
+ struct clk_hw *hw = desc->hw_clks->hws[i];
++ const char *name;
+
+ if (!hw)
+ continue;
+
++ name = hw->init->name;
+ ret = of_clk_hw_register(node, hw);
+ if (ret) {
+- pr_err("Couldn't register clock %d - %s\n",
+- i, clk_hw_get_name(hw));
++ pr_err("Couldn't register clock %d - %s\n", i, name);
+ goto err_clk_unreg;
+ }
+ }
+diff --git a/drivers/clk/zte/clk-zx296718.c b/drivers/clk/zte/clk-zx296718.c
+index fd6c347bec6a..dd7045bc48c1 100644
+--- a/drivers/clk/zte/clk-zx296718.c
++++ b/drivers/clk/zte/clk-zx296718.c
+@@ -564,6 +564,7 @@ static int __init top_clocks_init(struct device_node *np)
+ {
+ void __iomem *reg_base;
+ int i, ret;
++ const char *name;
+
+ reg_base = of_iomap(np, 0);
+ if (!reg_base) {
+@@ -573,11 +574,10 @@ static int __init top_clocks_init(struct device_node *np)
+
+ for (i = 0; i < ARRAY_SIZE(zx296718_pll_clk); i++) {
+ zx296718_pll_clk[i].reg_base += (uintptr_t)reg_base;
++ name = zx296718_pll_clk[i].hw.init->name;
+ ret = clk_hw_register(NULL, &zx296718_pll_clk[i].hw);
+- if (ret) {
+- pr_warn("top clk %s init error!\n",
+- zx296718_pll_clk[i].hw.init->name);
+- }
++ if (ret)
++ pr_warn("top clk %s init error!\n", name);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(top_ffactor_clk); i++) {
+@@ -585,11 +585,10 @@ static int __init top_clocks_init(struct device_node *np)
+ top_hw_onecell_data.hws[top_ffactor_clk[i].id] =
+ &top_ffactor_clk[i].factor.hw;
+
++ name = top_ffactor_clk[i].factor.hw.init->name;
+ ret = clk_hw_register(NULL, &top_ffactor_clk[i].factor.hw);
+- if (ret) {
+- pr_warn("top clk %s init error!\n",
+- top_ffactor_clk[i].factor.hw.init->name);
+- }
++ if (ret)
++ pr_warn("top clk %s init error!\n", name);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(top_mux_clk); i++) {
+@@ -598,11 +597,10 @@ static int __init top_clocks_init(struct device_node *np)
+ &top_mux_clk[i].mux.hw;
+
+ top_mux_clk[i].mux.reg += (uintptr_t)reg_base;
++ name = top_mux_clk[i].mux.hw.init->name;
+ ret = clk_hw_register(NULL, &top_mux_clk[i].mux.hw);
+- if (ret) {
+- pr_warn("top clk %s init error!\n",
+- top_mux_clk[i].mux.hw.init->name);
+- }
++ if (ret)
++ pr_warn("top clk %s init error!\n", name);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(top_gate_clk); i++) {
+@@ -611,11 +609,10 @@ static int __init top_clocks_init(struct device_node *np)
+ &top_gate_clk[i].gate.hw;
+
+ top_gate_clk[i].gate.reg += (uintptr_t)reg_base;
++ name = top_gate_clk[i].gate.hw.init->name;
+ ret = clk_hw_register(NULL, &top_gate_clk[i].gate.hw);
+- if (ret) {
+- pr_warn("top clk %s init error!\n",
+- top_gate_clk[i].gate.hw.init->name);
+- }
++ if (ret)
++ pr_warn("top clk %s init error!\n", name);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(top_div_clk); i++) {
+@@ -624,11 +621,10 @@ static int __init top_clocks_init(struct device_node *np)
+ &top_div_clk[i].div.hw;
+
+ top_div_clk[i].div.reg += (uintptr_t)reg_base;
++ name = top_div_clk[i].div.hw.init->name;
+ ret = clk_hw_register(NULL, &top_div_clk[i].div.hw);
+- if (ret) {
+- pr_warn("top clk %s init error!\n",
+- top_div_clk[i].div.hw.init->name);
+- }
++ if (ret)
++ pr_warn("top clk %s init error!\n", name);
+ }
+
+ ret = of_clk_add_hw_provider(np, of_clk_hw_onecell_get,
+@@ -754,6 +750,7 @@ static int __init lsp0_clocks_init(struct device_node *np)
+ {
+ void __iomem *reg_base;
+ int i, ret;
++ const char *name;
+
+ reg_base = of_iomap(np, 0);
+ if (!reg_base) {
+@@ -767,11 +764,10 @@ static int __init lsp0_clocks_init(struct device_node *np)
+ &lsp0_mux_clk[i].mux.hw;
+
+ lsp0_mux_clk[i].mux.reg += (uintptr_t)reg_base;
++ name = lsp0_mux_clk[i].mux.hw.init->name;
+ ret = clk_hw_register(NULL, &lsp0_mux_clk[i].mux.hw);
+- if (ret) {
+- pr_warn("lsp0 clk %s init error!\n",
+- lsp0_mux_clk[i].mux.hw.init->name);
+- }
++ if (ret)
++ pr_warn("lsp0 clk %s init error!\n", name);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(lsp0_gate_clk); i++) {
+@@ -780,11 +776,10 @@ static int __init lsp0_clocks_init(struct device_node *np)
+ &lsp0_gate_clk[i].gate.hw;
+
+ lsp0_gate_clk[i].gate.reg += (uintptr_t)reg_base;
++ name = lsp0_gate_clk[i].gate.hw.init->name;
+ ret = clk_hw_register(NULL, &lsp0_gate_clk[i].gate.hw);
+- if (ret) {
+- pr_warn("lsp0 clk %s init error!\n",
+- lsp0_gate_clk[i].gate.hw.init->name);
+- }
++ if (ret)
++ pr_warn("lsp0 clk %s init error!\n", name);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(lsp0_div_clk); i++) {
+@@ -793,11 +788,10 @@ static int __init lsp0_clocks_init(struct device_node *np)
+ &lsp0_div_clk[i].div.hw;
+
+ lsp0_div_clk[i].div.reg += (uintptr_t)reg_base;
++ name = lsp0_div_clk[i].div.hw.init->name;
+ ret = clk_hw_register(NULL, &lsp0_div_clk[i].div.hw);
+- if (ret) {
+- pr_warn("lsp0 clk %s init error!\n",
+- lsp0_div_clk[i].div.hw.init->name);
+- }
++ if (ret)
++ pr_warn("lsp0 clk %s init error!\n", name);
+ }
+
+ ret = of_clk_add_hw_provider(np, of_clk_hw_onecell_get,
+@@ -862,6 +856,7 @@ static int __init lsp1_clocks_init(struct device_node *np)
+ {
+ void __iomem *reg_base;
+ int i, ret;
++ const char *name;
+
+ reg_base = of_iomap(np, 0);
+ if (!reg_base) {
+@@ -875,11 +870,10 @@ static int __init lsp1_clocks_init(struct device_node *np)
+ &lsp0_mux_clk[i].mux.hw;
+
+ lsp1_mux_clk[i].mux.reg += (uintptr_t)reg_base;
++ name = lsp1_mux_clk[i].mux.hw.init->name;
+ ret = clk_hw_register(NULL, &lsp1_mux_clk[i].mux.hw);
+- if (ret) {
+- pr_warn("lsp1 clk %s init error!\n",
+- lsp1_mux_clk[i].mux.hw.init->name);
+- }
++ if (ret)
++ pr_warn("lsp1 clk %s init error!\n", name);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(lsp1_gate_clk); i++) {
+@@ -888,11 +882,10 @@ static int __init lsp1_clocks_init(struct device_node *np)
+ &lsp1_gate_clk[i].gate.hw;
+
+ lsp1_gate_clk[i].gate.reg += (uintptr_t)reg_base;
++ name = lsp1_gate_clk[i].gate.hw.init->name;
+ ret = clk_hw_register(NULL, &lsp1_gate_clk[i].gate.hw);
+- if (ret) {
+- pr_warn("lsp1 clk %s init error!\n",
+- lsp1_gate_clk[i].gate.hw.init->name);
+- }
++ if (ret)
++ pr_warn("lsp1 clk %s init error!\n", name);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(lsp1_div_clk); i++) {
+@@ -901,11 +894,10 @@ static int __init lsp1_clocks_init(struct device_node *np)
+ &lsp1_div_clk[i].div.hw;
+
+ lsp1_div_clk[i].div.reg += (uintptr_t)reg_base;
++ name = lsp1_div_clk[i].div.hw.init->name;
+ ret = clk_hw_register(NULL, &lsp1_div_clk[i].div.hw);
+- if (ret) {
+- pr_warn("lsp1 clk %s init error!\n",
+- lsp1_div_clk[i].div.hw.init->name);
+- }
++ if (ret)
++ pr_warn("lsp1 clk %s init error!\n", name);
+ }
+
+ ret = of_clk_add_hw_provider(np, of_clk_hw_onecell_get,
+@@ -979,6 +971,7 @@ static int __init audio_clocks_init(struct device_node *np)
+ {
+ void __iomem *reg_base;
+ int i, ret;
++ const char *name;
+
+ reg_base = of_iomap(np, 0);
+ if (!reg_base) {
+@@ -992,11 +985,10 @@ static int __init audio_clocks_init(struct device_node *np)
+ &audio_mux_clk[i].mux.hw;
+
+ audio_mux_clk[i].mux.reg += (uintptr_t)reg_base;
++ name = audio_mux_clk[i].mux.hw.init->name;
+ ret = clk_hw_register(NULL, &audio_mux_clk[i].mux.hw);
+- if (ret) {
+- pr_warn("audio clk %s init error!\n",
+- audio_mux_clk[i].mux.hw.init->name);
+- }
++ if (ret)
++ pr_warn("audio clk %s init error!\n", name);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(audio_adiv_clk); i++) {
+@@ -1005,11 +997,10 @@ static int __init audio_clocks_init(struct device_node *np)
+ &audio_adiv_clk[i].hw;
+
+ audio_adiv_clk[i].reg_base += (uintptr_t)reg_base;
++ name = audio_adiv_clk[i].hw.init->name;
+ ret = clk_hw_register(NULL, &audio_adiv_clk[i].hw);
+- if (ret) {
+- pr_warn("audio clk %s init error!\n",
+- audio_adiv_clk[i].hw.init->name);
+- }
++ if (ret)
++ pr_warn("audio clk %s init error!\n", name);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(audio_div_clk); i++) {
+@@ -1018,11 +1009,10 @@ static int __init audio_clocks_init(struct device_node *np)
+ &audio_div_clk[i].div.hw;
+
+ audio_div_clk[i].div.reg += (uintptr_t)reg_base;
++ name = audio_div_clk[i].div.hw.init->name;
+ ret = clk_hw_register(NULL, &audio_div_clk[i].div.hw);
+- if (ret) {
+- pr_warn("audio clk %s init error!\n",
+- audio_div_clk[i].div.hw.init->name);
+- }
++ if (ret)
++ pr_warn("audio clk %s init error!\n", name);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(audio_gate_clk); i++) {
+@@ -1031,11 +1021,10 @@ static int __init audio_clocks_init(struct device_node *np)
+ &audio_gate_clk[i].gate.hw;
+
+ audio_gate_clk[i].gate.reg += (uintptr_t)reg_base;
++ name = audio_gate_clk[i].gate.hw.init->name;
+ ret = clk_hw_register(NULL, &audio_gate_clk[i].gate.hw);
+- if (ret) {
+- pr_warn("audio clk %s init error!\n",
+- audio_gate_clk[i].gate.hw.init->name);
+- }
++ if (ret)
++ pr_warn("audio clk %s init error!\n", name);
+ }
+
+ ret = of_clk_add_hw_provider(np, of_clk_hw_onecell_get,
+diff --git a/drivers/crypto/hisilicon/sec/sec_algs.c b/drivers/crypto/hisilicon/sec/sec_algs.c
+index 02768af0dccd..8c789b8671fc 100644
+--- a/drivers/crypto/hisilicon/sec/sec_algs.c
++++ b/drivers/crypto/hisilicon/sec/sec_algs.c
+@@ -215,17 +215,18 @@ static void sec_free_hw_sgl(struct sec_hw_sgl *hw_sgl,
+ dma_addr_t psec_sgl, struct sec_dev_info *info)
+ {
+ struct sec_hw_sgl *sgl_current, *sgl_next;
++ dma_addr_t sgl_next_dma;
+
+- if (!hw_sgl)
+- return;
+ sgl_current = hw_sgl;
+- while (sgl_current->next) {
++ while (sgl_current) {
+ sgl_next = sgl_current->next;
+- dma_pool_free(info->hw_sgl_pool, sgl_current,
+- sgl_current->next_sgl);
++ sgl_next_dma = sgl_current->next_sgl;
++
++ dma_pool_free(info->hw_sgl_pool, sgl_current, psec_sgl);
++
+ sgl_current = sgl_next;
++ psec_sgl = sgl_next_dma;
+ }
+- dma_pool_free(info->hw_sgl_pool, hw_sgl, psec_sgl);
+ }
+
+ static int sec_alg_skcipher_setkey(struct crypto_skcipher *tfm,
+diff --git a/drivers/dma-buf/sw_sync.c b/drivers/dma-buf/sw_sync.c
+index 051f6c2873c7..6713cfb1995c 100644
+--- a/drivers/dma-buf/sw_sync.c
++++ b/drivers/dma-buf/sw_sync.c
+@@ -132,17 +132,14 @@ static void timeline_fence_release(struct dma_fence *fence)
+ {
+ struct sync_pt *pt = dma_fence_to_sync_pt(fence);
+ struct sync_timeline *parent = dma_fence_parent(fence);
++ unsigned long flags;
+
++ spin_lock_irqsave(fence->lock, flags);
+ if (!list_empty(&pt->link)) {
+- unsigned long flags;
+-
+- spin_lock_irqsave(fence->lock, flags);
+- if (!list_empty(&pt->link)) {
+- list_del(&pt->link);
+- rb_erase(&pt->node, &parent->pt_tree);
+- }
+- spin_unlock_irqrestore(fence->lock, flags);
++ list_del(&pt->link);
++ rb_erase(&pt->node, &parent->pt_tree);
+ }
++ spin_unlock_irqrestore(fence->lock, flags);
+
+ sync_timeline_put(parent);
+ dma_fence_free(fence);
+@@ -265,7 +262,8 @@ static struct sync_pt *sync_pt_create(struct sync_timeline *obj,
+ p = &parent->rb_left;
+ } else {
+ if (dma_fence_get_rcu(&other->base)) {
+- dma_fence_put(&pt->base);
++ sync_timeline_put(obj);
++ kfree(pt);
+ pt = other;
+ goto unlock;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
+index eb3569b46c1e..430c56f9544a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
+@@ -139,14 +139,14 @@ static int amdgpufb_create_pinned_object(struct amdgpu_fbdev *rfbdev,
+ mode_cmd->pitches[0] = amdgpu_align_pitch(adev, mode_cmd->width, cpp,
+ fb_tiled);
+ domain = amdgpu_display_supported_domains(adev);
+-
+ height = ALIGN(mode_cmd->height, 8);
+ size = mode_cmd->pitches[0] * height;
+ aligned_size = ALIGN(size, PAGE_SIZE);
+ ret = amdgpu_gem_object_create(adev, aligned_size, 0, domain,
+ AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
+- AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS |
+- AMDGPU_GEM_CREATE_VRAM_CLEARED,
++ AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS |
++ AMDGPU_GEM_CREATE_VRAM_CLEARED |
++ AMDGPU_GEM_CREATE_CPU_GTT_USWC,
+ ttm_bo_type_kernel, NULL, &gobj);
+ if (ret) {
+ pr_err("failed to allocate framebuffer (%d)\n", aligned_size);
+@@ -168,7 +168,6 @@ static int amdgpufb_create_pinned_object(struct amdgpu_fbdev *rfbdev,
+ dev_err(adev->dev, "FB failed to set tiling flags\n");
+ }
+
+-
+ ret = amdgpu_bo_pin(abo, domain);
+ if (ret) {
+ amdgpu_bo_unreserve(abo);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+index 939f8305511b..fb291366d5ad 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+@@ -747,7 +747,8 @@ int amdgpu_mode_dumb_create(struct drm_file *file_priv,
+ struct amdgpu_device *adev = dev->dev_private;
+ struct drm_gem_object *gobj;
+ uint32_t handle;
+- u64 flags = AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
++ u64 flags = AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
++ AMDGPU_GEM_CREATE_CPU_GTT_USWC;
+ u32 domain;
+ int r;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+index 3747c3f1f0cc..15c371fac469 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+@@ -1583,7 +1583,8 @@ static const struct amdgpu_irq_src_funcs sdma_v5_0_illegal_inst_irq_funcs = {
+
+ static void sdma_v5_0_set_irq_funcs(struct amdgpu_device *adev)
+ {
+- adev->sdma.trap_irq.num_types = AMDGPU_SDMA_IRQ_LAST;
++ adev->sdma.trap_irq.num_types = AMDGPU_SDMA_IRQ_INSTANCE0 +
++ adev->sdma.num_instances;
+ adev->sdma.trap_irq.funcs = &sdma_v5_0_trap_irq_funcs;
+ adev->sdma.illegal_inst_irq.funcs = &sdma_v5_0_illegal_inst_irq_funcs;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/si.c b/drivers/gpu/drm/amd/amdgpu/si.c
+index 4d74453f3cfb..602397016b64 100644
+--- a/drivers/gpu/drm/amd/amdgpu/si.c
++++ b/drivers/gpu/drm/amd/amdgpu/si.c
+@@ -1881,7 +1881,7 @@ static void si_program_aspm(struct amdgpu_device *adev)
+ if (orig != data)
+ si_pif_phy1_wreg(adev,PB1_PIF_PWRDOWN_1, data);
+
+- if ((adev->family != CHIP_OLAND) && (adev->family != CHIP_HAINAN)) {
++ if ((adev->asic_type != CHIP_OLAND) && (adev->asic_type != CHIP_HAINAN)) {
+ orig = data = si_pif_phy0_rreg(adev,PB0_PIF_PWRDOWN_0);
+ data &= ~PLL_RAMP_UP_TIME_0_MASK;
+ if (orig != data)
+@@ -1930,14 +1930,14 @@ static void si_program_aspm(struct amdgpu_device *adev)
+
+ orig = data = si_pif_phy0_rreg(adev,PB0_PIF_CNTL);
+ data &= ~LS2_EXIT_TIME_MASK;
+- if ((adev->family == CHIP_OLAND) || (adev->family == CHIP_HAINAN))
++ if ((adev->asic_type == CHIP_OLAND) || (adev->asic_type == CHIP_HAINAN))
+ data |= LS2_EXIT_TIME(5);
+ if (orig != data)
+ si_pif_phy0_wreg(adev,PB0_PIF_CNTL, data);
+
+ orig = data = si_pif_phy1_rreg(adev,PB1_PIF_CNTL);
+ data &= ~LS2_EXIT_TIME_MASK;
+- if ((adev->family == CHIP_OLAND) || (adev->family == CHIP_HAINAN))
++ if ((adev->asic_type == CHIP_OLAND) || (adev->asic_type == CHIP_HAINAN))
+ data |= LS2_EXIT_TIME(5);
+ if (orig != data)
+ si_pif_phy1_wreg(adev,PB1_PIF_CNTL, data);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
+index 592fa499c9f8..9594c154664f 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
+@@ -334,7 +334,7 @@ bool dm_pp_get_clock_levels_by_type(
+ }
+ } else if (adev->smu.funcs && adev->smu.funcs->get_clock_by_type) {
+ if (smu_get_clock_by_type(&adev->smu,
+- dc_to_smu_clock_type(clk_type),
++ dc_to_pp_clock_type(clk_type),
+ &pp_clks)) {
+ get_default_clock_levels(clk_type, dc_clks);
+ return true;
+@@ -419,7 +419,7 @@ bool dm_pp_get_clock_levels_by_type_with_latency(
+ return false;
+ } else if (adev->smu.ppt_funcs && adev->smu.ppt_funcs->get_clock_by_type_with_latency) {
+ if (smu_get_clock_by_type_with_latency(&adev->smu,
+- dc_to_pp_clock_type(clk_type),
++ dc_to_smu_clock_type(clk_type),
+ &pp_clks))
+ return false;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
+index 50bfb5921de0..2ab0f97719b5 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
+@@ -348,6 +348,8 @@ void dcn20_clk_mgr_construct(
+
+ clk_mgr->base.dprefclk_khz = 700000; // 700 MHz planned if VCO is 3.85 GHz, will be retrieved
+
++ clk_mgr->pp_smu = pp_smu;
++
+ if (IS_FPGA_MAXIMUS_DC(ctx->dce_environment)) {
+ dcn2_funcs.update_clocks = dcn2_update_clocks_fpga;
+ clk_mgr->dentist_vco_freq_khz = 3850000;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index cbc480a33376..730f97ba8dbb 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -2187,6 +2187,14 @@ void dc_set_power_state(
+ dc_resource_state_construct(dc, dc->current_state);
+
+ dc->hwss.init_hw(dc);
++
++#ifdef CONFIG_DRM_AMD_DC_DCN2_0
++ if (dc->hwss.init_sys_ctx != NULL &&
++ dc->vm_pa_config.valid) {
++ dc->hwss.init_sys_ctx(dc->hwseq, dc, &dc->vm_pa_config);
++ }
++#endif
++
+ break;
+ default:
+ ASSERT(dc->current_state->stream_count == 0);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index 2c7aaed907b9..0bf85a7a2cd3 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -3033,6 +3033,8 @@ void dp_set_fec_ready(struct dc_link *link, bool ready)
+ link_enc->funcs->fec_set_ready(link_enc, true);
+ link->fec_state = dc_link_fec_ready;
+ } else {
++ link->link_enc->funcs->fec_set_ready(link->link_enc, false);
++ link->fec_state = dc_link_fec_not_ready;
+ dm_error("dpcd write failed to set fec_ready");
+ }
+ } else if (link->fec_state == dc_link_fec_ready && !ready) {
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_hwss.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_hwss.c
+index 2d019e1f6135..a9135764e580 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_hwss.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_hwss.c
+@@ -160,6 +160,10 @@ bool edp_receiver_ready_T7(struct dc_link *link)
+ break;
+ udelay(25); //MAx T7 is 50ms
+ } while (++tries < 300);
++
++ if (link->local_sink->edid_caps.panel_patch.extra_t7_ms > 0)
++ udelay(link->local_sink->edid_caps.panel_patch.extra_t7_ms * 1000);
++
+ return result;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index 2ceaab4fb5de..68db60e4caf3 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -265,12 +265,10 @@ bool resource_construct(
+ DC_ERR("DC: failed to create audio!\n");
+ return false;
+ }
+-
+ if (!aud->funcs->endpoint_valid(aud)) {
+ aud->funcs->destroy(&aud);
+ break;
+ }
+-
+ pool->audios[i] = aud;
+ pool->audio_count++;
+ }
+@@ -1659,24 +1657,25 @@ static struct audio *find_first_free_audio(
+ const struct resource_pool *pool,
+ enum engine_id id)
+ {
+- int i;
+- for (i = 0; i < pool->audio_count; i++) {
++ int i, available_audio_count;
++
++ available_audio_count = pool->audio_count;
++
++ for (i = 0; i < available_audio_count; i++) {
+ if ((res_ctx->is_audio_acquired[i] == false) && (res_ctx->is_stream_enc_acquired[i] == true)) {
+ /*we have enough audio endpoint, find the matching inst*/
+ if (id != i)
+ continue;
+-
+ return pool->audios[i];
+ }
+ }
+
+- /* use engine id to find free audio */
+- if ((id < pool->audio_count) && (res_ctx->is_audio_acquired[id] == false)) {
++ /* use engine id to find free audio */
++ if ((id < available_audio_count) && (res_ctx->is_audio_acquired[id] == false)) {
+ return pool->audios[id];
+ }
+-
+ /*not found the matching one, first come first serve*/
+- for (i = 0; i < pool->audio_count; i++) {
++ for (i = 0; i < available_audio_count; i++) {
+ if (res_ctx->is_audio_acquired[i] == false) {
+ return pool->audios[i];
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_types.h b/drivers/gpu/drm/amd/display/dc/dc_types.h
+index 6eabb6491a3d..ce6d73d21cca 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_types.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_types.h
+@@ -202,6 +202,7 @@ struct dc_panel_patch {
+ unsigned int dppowerup_delay;
+ unsigned int extra_t12_ms;
+ unsigned int extra_delay_backlight_off;
++ unsigned int extra_t7_ms;
+ };
+
+ struct dc_edid_caps {
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_audio.c b/drivers/gpu/drm/amd/display/dc/dce/dce_audio.c
+index 4a10a5d22c90..5de9623bdf66 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_audio.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_audio.c
+@@ -613,6 +613,8 @@ void dce_aud_az_configure(
+
+ AZ_REG_WRITE(AZALIA_F0_CODEC_PIN_CONTROL_SINK_INFO1,
+ value);
++ DC_LOG_HW_AUDIO("\n\tAUDIO:az_configure: index: %u data, 0x%x, displayName %s: \n",
++ audio->inst, value, audio_info->display_name);
+
+ /*
+ *write the port ID:
+@@ -922,7 +924,6 @@ static const struct audio_funcs funcs = {
+ .az_configure = dce_aud_az_configure,
+ .destroy = dce_aud_destroy,
+ };
+-
+ void dce_aud_destroy(struct audio **audio)
+ {
+ struct dce_audio *aud = DCE_AUD(*audio);
+@@ -953,7 +954,6 @@ struct audio *dce_audio_create(
+ audio->regs = reg;
+ audio->shifts = shifts;
+ audio->masks = masks;
+-
+ return &audio->base;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c
+index 7469333a2c8a..8166fdbacd73 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c
+@@ -357,9 +357,10 @@ bool cm_helper_translate_curve_to_hw_format(
+ seg_distr[7] = 4;
+ seg_distr[8] = 4;
+ seg_distr[9] = 4;
++ seg_distr[10] = 1;
+
+ region_start = -10;
+- region_end = 0;
++ region_end = 1;
+ }
+
+ for (i = region_end - region_start; i < MAX_REGIONS_NUMBER ; i++)
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.c
+index a546c2bc9129..e365f2dd7f9a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.c
+@@ -824,6 +824,9 @@ void optc1_program_manual_trigger(struct timing_generator *optc)
+
+ REG_SET(OTG_MANUAL_FLOW_CONTROL, 0,
+ MANUAL_FLOW_CONTROL, 1);
++
++ REG_SET(OTG_MANUAL_FLOW_CONTROL, 0,
++ MANUAL_FLOW_CONTROL, 0);
+ }
+
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+index d810c8940129..8fdb53a44bfb 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+@@ -585,6 +585,10 @@ static void dcn20_init_hw(struct dc *dc)
+ }
+ }
+
++ /* Power gate DSCs */
++ for (i = 0; i < res_pool->res_cap->num_dsc; i++)
++ dcn20_dsc_pg_control(hws, res_pool->dscs[i]->inst, false);
++
+ /* Blank pixel data with OPP DPG */
+ for (i = 0; i < dc->res_pool->timing_generator_count; i++) {
+ struct timing_generator *tg = dc->res_pool->timing_generators[i];
+@@ -1106,6 +1110,9 @@ void dcn20_enable_plane(
+ /* enable DCFCLK current DCHUB */
+ pipe_ctx->plane_res.hubp->funcs->hubp_clk_cntl(pipe_ctx->plane_res.hubp, true);
+
++ /* initialize HUBP on power up */
++ pipe_ctx->plane_res.hubp->funcs->hubp_init(pipe_ctx->plane_res.hubp);
++
+ /* make sure OPP_PIPE_CLOCK_EN = 1 */
+ pipe_ctx->stream_res.opp->funcs->opp_pipe_clock_control(
+ pipe_ctx->stream_res.opp,
+@@ -1315,6 +1322,18 @@ static void dcn20_apply_ctx_for_surface(
+ if (!top_pipe_to_program)
+ return;
+
++ /* Carry over GSL groups in case the context is changing. */
++ for (i = 0; i < dc->res_pool->pipe_count; i++) {
++ struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i];
++ struct pipe_ctx *old_pipe_ctx =
++ &dc->current_state->res_ctx.pipe_ctx[i];
++
++ if (pipe_ctx->stream == stream &&
++ pipe_ctx->stream == old_pipe_ctx->stream)
++ pipe_ctx->stream_res.gsl_group =
++ old_pipe_ctx->stream_res.gsl_group;
++ }
++
+ tg = top_pipe_to_program->stream_res.tg;
+
+ interdependent_update = top_pipe_to_program->plane_state &&
+diff --git a/drivers/gpu/drm/amd/display/dc/irq/dcn20/irq_service_dcn20.c b/drivers/gpu/drm/amd/display/dc/irq/dcn20/irq_service_dcn20.c
+index 3cc0f2a1f77c..5db29bf582d3 100644
+--- a/drivers/gpu/drm/amd/display/dc/irq/dcn20/irq_service_dcn20.c
++++ b/drivers/gpu/drm/amd/display/dc/irq/dcn20/irq_service_dcn20.c
+@@ -167,6 +167,11 @@ static const struct irq_source_info_funcs vblank_irq_info_funcs = {
+ .ack = NULL
+ };
+
++static const struct irq_source_info_funcs vupdate_no_lock_irq_info_funcs = {
++ .set = NULL,
++ .ack = NULL
++};
++
+ #undef BASE_INNER
+ #define BASE_INNER(seg) DCN_BASE__INST0_SEG ## seg
+
+@@ -221,12 +226,15 @@ static const struct irq_source_info_funcs vblank_irq_info_funcs = {
+ .funcs = &pflip_irq_info_funcs\
+ }
+
+-#define vupdate_int_entry(reg_num)\
++/* vupdate_no_lock_int_entry maps to DC_IRQ_SOURCE_VUPDATEx, to match semantic
++ * of DCE's DC_IRQ_SOURCE_VUPDATEx.
++ */
++#define vupdate_no_lock_int_entry(reg_num)\
+ [DC_IRQ_SOURCE_VUPDATE1 + reg_num] = {\
+ IRQ_REG_ENTRY(OTG, reg_num,\
+- OTG_GLOBAL_SYNC_STATUS, VUPDATE_INT_EN,\
+- OTG_GLOBAL_SYNC_STATUS, VUPDATE_EVENT_CLEAR),\
+- .funcs = &vblank_irq_info_funcs\
++ OTG_GLOBAL_SYNC_STATUS, VUPDATE_NO_LOCK_INT_EN,\
++ OTG_GLOBAL_SYNC_STATUS, VUPDATE_NO_LOCK_EVENT_CLEAR),\
++ .funcs = &vupdate_no_lock_irq_info_funcs\
+ }
+
+ #define vblank_int_entry(reg_num)\
+@@ -333,12 +341,12 @@ irq_source_info_dcn20[DAL_IRQ_SOURCES_NUMBER] = {
+ dc_underflow_int_entry(6),
+ [DC_IRQ_SOURCE_DMCU_SCP] = dummy_irq_entry(),
+ [DC_IRQ_SOURCE_VBIOS_SW] = dummy_irq_entry(),
+- vupdate_int_entry(0),
+- vupdate_int_entry(1),
+- vupdate_int_entry(2),
+- vupdate_int_entry(3),
+- vupdate_int_entry(4),
+- vupdate_int_entry(5),
++ vupdate_no_lock_int_entry(0),
++ vupdate_no_lock_int_entry(1),
++ vupdate_no_lock_int_entry(2),
++ vupdate_no_lock_int_entry(3),
++ vupdate_no_lock_int_entry(4),
++ vupdate_no_lock_int_entry(5),
+ vblank_int_entry(0),
+ vblank_int_entry(1),
+ vblank_int_entry(2),
+diff --git a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
+index 7c20171a3b6d..a53666ff6cf8 100644
+--- a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
++++ b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
+@@ -435,6 +435,12 @@ static void apply_below_the_range(struct core_freesync *core_freesync,
+ /* Either we've calculated the number of frames to insert,
+ * or we need to insert min duration frames
+ */
++ if (last_render_time_in_us / frames_to_insert <
++ in_out_vrr->min_duration_in_us){
++ frames_to_insert -= (frames_to_insert > 1) ?
++ 1 : 0;
++ }
++
+ if (frames_to_insert > 0)
+ inserted_frame_duration_in_us = last_render_time_in_us /
+ frames_to_insert;
+@@ -887,8 +893,8 @@ void mod_freesync_build_vrr_params(struct mod_freesync *mod_freesync,
+ struct core_freesync *core_freesync = NULL;
+ unsigned long long nominal_field_rate_in_uhz = 0;
+ unsigned int refresh_range = 0;
+- unsigned int min_refresh_in_uhz = 0;
+- unsigned int max_refresh_in_uhz = 0;
++ unsigned long long min_refresh_in_uhz = 0;
++ unsigned long long max_refresh_in_uhz = 0;
+
+ if (mod_freesync == NULL)
+ return;
+@@ -915,7 +921,7 @@ void mod_freesync_build_vrr_params(struct mod_freesync *mod_freesync,
+ min_refresh_in_uhz = nominal_field_rate_in_uhz;
+
+ if (!vrr_settings_require_update(core_freesync,
+- in_config, min_refresh_in_uhz, max_refresh_in_uhz,
++ in_config, (unsigned int)min_refresh_in_uhz, (unsigned int)max_refresh_in_uhz,
+ in_out_vrr))
+ return;
+
+@@ -931,15 +937,15 @@ void mod_freesync_build_vrr_params(struct mod_freesync *mod_freesync,
+ return;
+
+ } else {
+- in_out_vrr->min_refresh_in_uhz = min_refresh_in_uhz;
++ in_out_vrr->min_refresh_in_uhz = (unsigned int)min_refresh_in_uhz;
+ in_out_vrr->max_duration_in_us =
+ calc_duration_in_us_from_refresh_in_uhz(
+- min_refresh_in_uhz);
++ (unsigned int)min_refresh_in_uhz);
+
+- in_out_vrr->max_refresh_in_uhz = max_refresh_in_uhz;
++ in_out_vrr->max_refresh_in_uhz = (unsigned int)max_refresh_in_uhz;
+ in_out_vrr->min_duration_in_us =
+ calc_duration_in_us_from_refresh_in_uhz(
+- max_refresh_in_uhz);
++ (unsigned int)max_refresh_in_uhz);
+
+ refresh_range = in_out_vrr->max_refresh_in_uhz -
+ in_out_vrr->min_refresh_in_uhz;
+@@ -950,17 +956,18 @@ void mod_freesync_build_vrr_params(struct mod_freesync *mod_freesync,
+ in_out_vrr->fixed.ramping_active = in_config->ramping;
+
+ in_out_vrr->btr.btr_enabled = in_config->btr;
++
+ if (in_out_vrr->max_refresh_in_uhz <
+ 2 * in_out_vrr->min_refresh_in_uhz)
+ in_out_vrr->btr.btr_enabled = false;
++
+ in_out_vrr->btr.btr_active = false;
+ in_out_vrr->btr.inserted_duration_in_us = 0;
+ in_out_vrr->btr.frames_to_insert = 0;
+ in_out_vrr->btr.frame_counter = 0;
+ in_out_vrr->btr.mid_point_in_us =
+- in_out_vrr->min_duration_in_us +
+- (in_out_vrr->max_duration_in_us -
+- in_out_vrr->min_duration_in_us) / 2;
++ (in_out_vrr->min_duration_in_us +
++ in_out_vrr->max_duration_in_us) / 2;
+
+ if (in_out_vrr->state == VRR_STATE_UNSUPPORTED) {
+ in_out_vrr->adjust.v_total_min = stream->timing.v_total;
+diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+index b81c7e715dc9..9aaf2deff6e9 100644
+--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
++++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+@@ -1627,6 +1627,10 @@ static int navi10_set_peak_clock_by_device(struct smu_context *smu)
+ static int navi10_set_performance_level(struct smu_context *smu, enum amd_dpm_forced_level level)
+ {
+ int ret = 0;
++ struct amdgpu_device *adev = smu->adev;
++
++ if (adev->asic_type != CHIP_NAVI10)
++ return -EINVAL;
+
+ switch (level) {
+ case AMD_DPM_FORCED_LEVEL_PROFILE_PEAK:
+diff --git a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+index 3f7f4880be09..37bd541166a5 100644
+--- a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
++++ b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+@@ -1035,16 +1035,17 @@ static int analogix_dp_commit(struct analogix_dp_device *dp)
+ if (ret)
+ return ret;
+
++ /* Check whether panel supports fast training */
++ ret = analogix_dp_fast_link_train_detection(dp);
++ if (ret)
++ dp->psr_enable = false;
++
+ if (dp->psr_enable) {
+ ret = analogix_dp_enable_sink_psr(dp);
+ if (ret)
+ return ret;
+ }
+
+- /* Check whether panel supports fast training */
+- ret = analogix_dp_fast_link_train_detection(dp);
+- if (ret)
+- dp->psr_enable = false;
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/bridge/sii902x.c b/drivers/gpu/drm/bridge/sii902x.c
+index dd7aa466b280..36acc256e67e 100644
+--- a/drivers/gpu/drm/bridge/sii902x.c
++++ b/drivers/gpu/drm/bridge/sii902x.c
+@@ -750,6 +750,7 @@ static int sii902x_audio_codec_init(struct sii902x *sii902x,
+ sii902x->audio.i2s_fifo_sequence[i] |= audio_fifo_id[i] |
+ i2s_lane_id[lanes[i]] | SII902X_TPI_I2S_FIFO_ENABLE;
+
++ sii902x->audio.mclk = devm_clk_get(dev, "mclk");
+ if (IS_ERR(sii902x->audio.mclk)) {
+ dev_err(dev, "%s: No clock (audio mclk) found: %ld\n",
+ __func__, PTR_ERR(sii902x->audio.mclk));
+diff --git a/drivers/gpu/drm/bridge/tc358767.c b/drivers/gpu/drm/bridge/tc358767.c
+index 13ade28a36a8..b3a7d5f1250c 100644
+--- a/drivers/gpu/drm/bridge/tc358767.c
++++ b/drivers/gpu/drm/bridge/tc358767.c
+@@ -313,7 +313,7 @@ static ssize_t tc_aux_transfer(struct drm_dp_aux *aux,
+ struct drm_dp_aux_msg *msg)
+ {
+ struct tc_data *tc = aux_to_tc(aux);
+- size_t size = min_t(size_t, 8, msg->size);
++ size_t size = min_t(size_t, DP_AUX_MAX_PAYLOAD_BYTES - 1, msg->size);
+ u8 request = msg->request & ~DP_AUX_I2C_MOT;
+ u8 *buf = msg->buffer;
+ u32 tmp = 0;
+diff --git a/drivers/gpu/drm/mcde/mcde_drv.c b/drivers/gpu/drm/mcde/mcde_drv.c
+index baf63fb6850a..a810568c76df 100644
+--- a/drivers/gpu/drm/mcde/mcde_drv.c
++++ b/drivers/gpu/drm/mcde/mcde_drv.c
+@@ -319,7 +319,7 @@ static int mcde_probe(struct platform_device *pdev)
+ struct device *dev = &pdev->dev;
+ struct drm_device *drm;
+ struct mcde *mcde;
+- struct component_match *match;
++ struct component_match *match = NULL;
+ struct resource *res;
+ u32 pid;
+ u32 val;
+@@ -485,6 +485,10 @@ static int mcde_probe(struct platform_device *pdev)
+ }
+ put_device(p);
+ }
++ if (!match) {
++ dev_err(dev, "no matching components\n");
++ return -ENODEV;
++ }
+ if (IS_ERR(match)) {
+ dev_err(dev, "could not create component match\n");
+ ret = PTR_ERR(match);
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
+index 283ff690350e..50303ec194bb 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
+@@ -320,7 +320,9 @@ nv50_wndw_atomic_check_lut(struct nv50_wndw *wndw,
+ asyh->wndw.olut &= ~BIT(wndw->id);
+ }
+
+- if (!ilut && wndw->func->ilut_identity) {
++ if (!ilut && wndw->func->ilut_identity &&
++ asyw->state.fb->format->format != DRM_FORMAT_XBGR16161616F &&
++ asyw->state.fb->format->format != DRM_FORMAT_ABGR16161616F) {
+ static struct drm_property_blob dummy = {};
+ ilut = &dummy;
+ }
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/volt.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/volt.c
+index 7143ea4611aa..33a9fb5ac558 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/volt.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/volt.c
+@@ -96,6 +96,8 @@ nvbios_volt_parse(struct nvkm_bios *bios, u8 *ver, u8 *hdr, u8 *cnt, u8 *len,
+ info->min = min(info->base,
+ info->base + info->step * info->vidmask);
+ info->max = nvbios_rd32(bios, volt + 0x0e);
++ if (!info->max)
++ info->max = max(info->base, info->base + info->step * info->vidmask);
+ break;
+ case 0x50:
+ info->min = nvbios_rd32(bios, volt + 0x0a);
+diff --git a/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c b/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
+index 28c0620dfe0f..b5b14aa059ea 100644
+--- a/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
++++ b/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
+@@ -399,7 +399,13 @@ static int rpi_touchscreen_probe(struct i2c_client *i2c,
+
+ /* Look up the DSI host. It needs to probe before we do. */
+ endpoint = of_graph_get_next_endpoint(dev->of_node, NULL);
++ if (!endpoint)
++ return -ENODEV;
++
+ dsi_host_node = of_graph_get_remote_port_parent(endpoint);
++ if (!dsi_host_node)
++ goto error;
++
+ host = of_find_mipi_dsi_host_by_node(dsi_host_node);
+ of_node_put(dsi_host_node);
+ if (!host) {
+@@ -408,6 +414,9 @@ static int rpi_touchscreen_probe(struct i2c_client *i2c,
+ }
+
+ info.node = of_graph_get_remote_port(endpoint);
++ if (!info.node)
++ goto error;
++
+ of_node_put(endpoint);
+
+ ts->dsi = mipi_dsi_device_register_full(host, &info);
+@@ -428,6 +437,10 @@ static int rpi_touchscreen_probe(struct i2c_client *i2c,
+ return ret;
+
+ return 0;
++
++error:
++ of_node_put(endpoint);
++ return -ENODEV;
+ }
+
+ static int rpi_touchscreen_remove(struct i2c_client *i2c)
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 5a93c4edf1e4..ee6900eb3906 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -724,9 +724,9 @@ static const struct panel_desc auo_g133han01 = {
+ static const struct display_timing auo_g185han01_timings = {
+ .pixelclock = { 120000000, 144000000, 175000000 },
+ .hactive = { 1920, 1920, 1920 },
+- .hfront_porch = { 18, 60, 74 },
+- .hback_porch = { 12, 44, 54 },
+- .hsync_len = { 10, 24, 32 },
++ .hfront_porch = { 36, 120, 148 },
++ .hback_porch = { 24, 88, 108 },
++ .hsync_len = { 20, 48, 64 },
+ .vactive = { 1080, 1080, 1080 },
+ .vfront_porch = { 6, 10, 40 },
+ .vback_porch = { 2, 5, 20 },
+diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
+index c60d1a44d22a..b684cd719612 100644
+--- a/drivers/gpu/drm/radeon/radeon_connectors.c
++++ b/drivers/gpu/drm/radeon/radeon_connectors.c
+@@ -752,7 +752,7 @@ static int radeon_connector_set_property(struct drm_connector *connector, struct
+
+ radeon_encoder->output_csc = val;
+
+- if (connector->encoder->crtc) {
++ if (connector->encoder && connector->encoder->crtc) {
+ struct drm_crtc *crtc = connector->encoder->crtc;
+ struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
+
+diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c
+index a6cbe11f79c6..15d7bebe1729 100644
+--- a/drivers/gpu/drm/radeon/radeon_drv.c
++++ b/drivers/gpu/drm/radeon/radeon_drv.c
+@@ -349,11 +349,19 @@ radeon_pci_remove(struct pci_dev *pdev)
+ static void
+ radeon_pci_shutdown(struct pci_dev *pdev)
+ {
++ struct drm_device *ddev = pci_get_drvdata(pdev);
++
+ /* if we are running in a VM, make sure the device
+ * torn down properly on reboot/shutdown
+ */
+ if (radeon_device_is_virtual())
+ radeon_pci_remove(pdev);
++
++ /* Some adapters need to be suspended before a
++ * shutdown occurs in order to prevent an error
++ * during kexec.
++ */
++ radeon_suspend_kms(ddev, true, true, false);
+ }
+
+ static int radeon_pmops_suspend(struct device *dev)
+diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c
+index 2fe6c4a8d915..3ab4fbf8eb0d 100644
+--- a/drivers/gpu/drm/stm/ltdc.c
++++ b/drivers/gpu/drm/stm/ltdc.c
+@@ -26,6 +26,7 @@
+ #include <drm/drm_fb_cma_helper.h>
+ #include <drm/drm_fourcc.h>
+ #include <drm/drm_gem_cma_helper.h>
++#include <drm/drm_gem_framebuffer_helper.h>
+ #include <drm/drm_of.h>
+ #include <drm/drm_plane_helper.h>
+ #include <drm/drm_probe_helper.h>
+@@ -922,6 +923,7 @@ static const struct drm_plane_funcs ltdc_plane_funcs = {
+ };
+
+ static const struct drm_plane_helper_funcs ltdc_plane_helper_funcs = {
++ .prepare_fb = drm_gem_fb_prepare_fb,
+ .atomic_check = ltdc_plane_atomic_check,
+ .atomic_update = ltdc_plane_atomic_update,
+ .atomic_disable = ltdc_plane_atomic_disable,
+diff --git a/drivers/gpu/drm/tinydrm/Kconfig b/drivers/gpu/drm/tinydrm/Kconfig
+index 87819c82bcce..f2f0739d1035 100644
+--- a/drivers/gpu/drm/tinydrm/Kconfig
++++ b/drivers/gpu/drm/tinydrm/Kconfig
+@@ -14,8 +14,8 @@ config TINYDRM_MIPI_DBI
+ config TINYDRM_HX8357D
+ tristate "DRM support for HX8357D display panels"
+ depends on DRM_TINYDRM && SPI
+- depends on BACKLIGHT_CLASS_DEVICE
+ select TINYDRM_MIPI_DBI
++ select BACKLIGHT_CLASS_DEVICE
+ help
+ DRM driver for the following HX8357D panels:
+ * YX350HV15-T 3.5" 340x350 TFT (Adafruit 3.5")
+@@ -35,8 +35,8 @@ config TINYDRM_ILI9225
+ config TINYDRM_ILI9341
+ tristate "DRM support for ILI9341 display panels"
+ depends on DRM_TINYDRM && SPI
+- depends on BACKLIGHT_CLASS_DEVICE
+ select TINYDRM_MIPI_DBI
++ select BACKLIGHT_CLASS_DEVICE
+ help
+ DRM driver for the following Ilitek ILI9341 panels:
+ * YX240QV29-T 2.4" 240x320 TFT (Adafruit 2.4")
+@@ -46,8 +46,8 @@ config TINYDRM_ILI9341
+ config TINYDRM_MI0283QT
+ tristate "DRM support for MI0283QT"
+ depends on DRM_TINYDRM && SPI
+- depends on BACKLIGHT_CLASS_DEVICE
+ select TINYDRM_MIPI_DBI
++ select BACKLIGHT_CLASS_DEVICE
+ help
+ DRM driver for the Multi-Inno MI0283QT display panel
+ If M is selected the module will be called mi0283qt.
+@@ -78,8 +78,8 @@ config TINYDRM_ST7586
+ config TINYDRM_ST7735R
+ tristate "DRM support for Sitronix ST7735R display panels"
+ depends on DRM_TINYDRM && SPI
+- depends on BACKLIGHT_CLASS_DEVICE
+ select TINYDRM_MIPI_DBI
++ select BACKLIGHT_CLASS_DEVICE
+ help
+ DRM driver Sitronix ST7735R with one of the following LCDs:
+ * JD-T18003-T01 1.8" 128x160 TFT
+diff --git a/drivers/gpu/drm/vkms/vkms_crc.c b/drivers/gpu/drm/vkms/vkms_crc.c
+index e66ff25c008e..e9fb4ebb789f 100644
+--- a/drivers/gpu/drm/vkms/vkms_crc.c
++++ b/drivers/gpu/drm/vkms/vkms_crc.c
+@@ -166,16 +166,24 @@ void vkms_crc_work_handle(struct work_struct *work)
+ struct drm_plane *plane;
+ u32 crc32 = 0;
+ u64 frame_start, frame_end;
++ bool crc_pending;
+ unsigned long flags;
+
+ spin_lock_irqsave(&out->state_lock, flags);
+ frame_start = crtc_state->frame_start;
+ frame_end = crtc_state->frame_end;
++ crc_pending = crtc_state->crc_pending;
++ crtc_state->frame_start = 0;
++ crtc_state->frame_end = 0;
++ crtc_state->crc_pending = false;
+ spin_unlock_irqrestore(&out->state_lock, flags);
+
+- /* _vblank_handle() hasn't updated frame_start yet */
+- if (!frame_start || frame_start == frame_end)
+- goto out;
++ /*
++ * We raced with the vblank hrtimer and previous work already computed
++ * the crc, nothing to do.
++ */
++ if (!crc_pending)
++ return;
+
+ drm_for_each_plane(plane, &vdev->drm) {
+ struct vkms_plane_state *vplane_state;
+@@ -196,20 +204,11 @@ void vkms_crc_work_handle(struct work_struct *work)
+ if (primary_crc)
+ crc32 = _vkms_get_crc(primary_crc, cursor_crc);
+
+- frame_end = drm_crtc_accurate_vblank_count(crtc);
+-
+- /* queue_work can fail to schedule crc_work; add crc for
+- * missing frames
++ /*
++ * The worker can fall behind the vblank hrtimer, make sure we catch up.
+ */
+ while (frame_start <= frame_end)
+ drm_crtc_add_crc_entry(crtc, true, frame_start++, &crc32);
+-
+-out:
+- /* to avoid using the same value for frame number again */
+- spin_lock_irqsave(&out->state_lock, flags);
+- crtc_state->frame_end = frame_end;
+- crtc_state->frame_start = 0;
+- spin_unlock_irqrestore(&out->state_lock, flags);
+ }
+
+ static const char * const pipe_crc_sources[] = {"auto"};
+diff --git a/drivers/gpu/drm/vkms/vkms_crtc.c b/drivers/gpu/drm/vkms/vkms_crtc.c
+index 4d11292bc6f3..f392fa13015b 100644
+--- a/drivers/gpu/drm/vkms/vkms_crtc.c
++++ b/drivers/gpu/drm/vkms/vkms_crtc.c
+@@ -30,13 +30,18 @@ static enum hrtimer_restart vkms_vblank_simulate(struct hrtimer *timer)
+ * has read the data
+ */
+ spin_lock(&output->state_lock);
+- if (!state->frame_start)
++ if (!state->crc_pending)
+ state->frame_start = frame;
++ else
++ DRM_DEBUG_DRIVER("crc worker falling behind, frame_start: %llu, frame_end: %llu\n",
++ state->frame_start, frame);
++ state->frame_end = frame;
++ state->crc_pending = true;
+ spin_unlock(&output->state_lock);
+
+ ret = queue_work(output->crc_workq, &state->crc_work);
+ if (!ret)
+- DRM_WARN("failed to queue vkms_crc_work_handle");
++ DRM_DEBUG_DRIVER("vkms_crc_work_handle already queued\n");
+ }
+
+ spin_unlock(&output->lock);
+diff --git a/drivers/gpu/drm/vkms/vkms_drv.c b/drivers/gpu/drm/vkms/vkms_drv.c
+index 738dd6206d85..92296bd8f623 100644
+--- a/drivers/gpu/drm/vkms/vkms_drv.c
++++ b/drivers/gpu/drm/vkms/vkms_drv.c
+@@ -92,7 +92,7 @@ static int vkms_modeset_init(struct vkms_device *vkmsdev)
+ dev->mode_config.max_height = YRES_MAX;
+ dev->mode_config.preferred_depth = 24;
+
+- return vkms_output_init(vkmsdev);
++ return vkms_output_init(vkmsdev, 0);
+ }
+
+ static int __init vkms_init(void)
+diff --git a/drivers/gpu/drm/vkms/vkms_drv.h b/drivers/gpu/drm/vkms/vkms_drv.h
+index b92c30c66a6f..2fee10a00051 100644
+--- a/drivers/gpu/drm/vkms/vkms_drv.h
++++ b/drivers/gpu/drm/vkms/vkms_drv.h
+@@ -48,6 +48,8 @@ struct vkms_plane_state {
+ struct vkms_crtc_state {
+ struct drm_crtc_state base;
+ struct work_struct crc_work;
++
++ bool crc_pending;
+ u64 frame_start;
+ u64 frame_end;
+ };
+@@ -105,10 +107,10 @@ bool vkms_get_vblank_timestamp(struct drm_device *dev, unsigned int pipe,
+ int *max_error, ktime_t *vblank_time,
+ bool in_vblank_irq);
+
+-int vkms_output_init(struct vkms_device *vkmsdev);
++int vkms_output_init(struct vkms_device *vkmsdev, int index);
+
+ struct drm_plane *vkms_plane_init(struct vkms_device *vkmsdev,
+- enum drm_plane_type type);
++ enum drm_plane_type type, int index);
+
+ /* Gem stuff */
+ struct drm_gem_object *vkms_gem_create(struct drm_device *dev,
+diff --git a/drivers/gpu/drm/vkms/vkms_output.c b/drivers/gpu/drm/vkms/vkms_output.c
+index 56fb5c2a2315..fb1941a6522c 100644
+--- a/drivers/gpu/drm/vkms/vkms_output.c
++++ b/drivers/gpu/drm/vkms/vkms_output.c
+@@ -35,7 +35,7 @@ static const struct drm_connector_helper_funcs vkms_conn_helper_funcs = {
+ .get_modes = vkms_conn_get_modes,
+ };
+
+-int vkms_output_init(struct vkms_device *vkmsdev)
++int vkms_output_init(struct vkms_device *vkmsdev, int index)
+ {
+ struct vkms_output *output = &vkmsdev->output;
+ struct drm_device *dev = &vkmsdev->drm;
+@@ -45,12 +45,12 @@ int vkms_output_init(struct vkms_device *vkmsdev)
+ struct drm_plane *primary, *cursor = NULL;
+ int ret;
+
+- primary = vkms_plane_init(vkmsdev, DRM_PLANE_TYPE_PRIMARY);
++ primary = vkms_plane_init(vkmsdev, DRM_PLANE_TYPE_PRIMARY, index);
+ if (IS_ERR(primary))
+ return PTR_ERR(primary);
+
+ if (enable_cursor) {
+- cursor = vkms_plane_init(vkmsdev, DRM_PLANE_TYPE_CURSOR);
++ cursor = vkms_plane_init(vkmsdev, DRM_PLANE_TYPE_CURSOR, index);
+ if (IS_ERR(cursor)) {
+ ret = PTR_ERR(cursor);
+ goto err_cursor;
+diff --git a/drivers/gpu/drm/vkms/vkms_plane.c b/drivers/gpu/drm/vkms/vkms_plane.c
+index 0fceb6258422..18c630cfc485 100644
+--- a/drivers/gpu/drm/vkms/vkms_plane.c
++++ b/drivers/gpu/drm/vkms/vkms_plane.c
+@@ -176,7 +176,7 @@ static const struct drm_plane_helper_funcs vkms_primary_helper_funcs = {
+ };
+
+ struct drm_plane *vkms_plane_init(struct vkms_device *vkmsdev,
+- enum drm_plane_type type)
++ enum drm_plane_type type, int index)
+ {
+ struct drm_device *dev = &vkmsdev->drm;
+ const struct drm_plane_helper_funcs *funcs;
+@@ -198,7 +198,7 @@ struct drm_plane *vkms_plane_init(struct vkms_device *vkmsdev,
+ funcs = &vkms_primary_helper_funcs;
+ }
+
+- ret = drm_universal_plane_init(dev, plane, 0,
++ ret = drm_universal_plane_init(dev, plane, 1 << index,
+ &vkms_plane_funcs,
+ formats, nformats,
+ NULL, type, NULL);
+diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
+index 81df62f48c4c..6ac8becc2372 100644
+--- a/drivers/hid/hid-apple.c
++++ b/drivers/hid/hid-apple.c
+@@ -54,7 +54,6 @@ MODULE_PARM_DESC(swap_opt_cmd, "Swap the Option (\"Alt\") and Command (\"Flag\")
+ struct apple_sc {
+ unsigned long quirks;
+ unsigned int fn_on;
+- DECLARE_BITMAP(pressed_fn, KEY_CNT);
+ DECLARE_BITMAP(pressed_numlock, KEY_CNT);
+ };
+
+@@ -181,6 +180,8 @@ static int hidinput_apple_event(struct hid_device *hid, struct input_dev *input,
+ {
+ struct apple_sc *asc = hid_get_drvdata(hid);
+ const struct apple_key_translation *trans, *table;
++ bool do_translate;
++ u16 code = 0;
+
+ if (usage->code == KEY_FN) {
+ asc->fn_on = !!value;
+@@ -189,8 +190,6 @@ static int hidinput_apple_event(struct hid_device *hid, struct input_dev *input,
+ }
+
+ if (fnmode) {
+- int do_translate;
+-
+ if (hid->product >= USB_DEVICE_ID_APPLE_WELLSPRING4_ANSI &&
+ hid->product <= USB_DEVICE_ID_APPLE_WELLSPRING4A_JIS)
+ table = macbookair_fn_keys;
+@@ -202,25 +201,33 @@ static int hidinput_apple_event(struct hid_device *hid, struct input_dev *input,
+ trans = apple_find_translation (table, usage->code);
+
+ if (trans) {
+- if (test_bit(usage->code, asc->pressed_fn))
+- do_translate = 1;
+- else if (trans->flags & APPLE_FLAG_FKEY)
+- do_translate = (fnmode == 2 && asc->fn_on) ||
+- (fnmode == 1 && !asc->fn_on);
+- else
+- do_translate = asc->fn_on;
+-
+- if (do_translate) {
+- if (value)
+- set_bit(usage->code, asc->pressed_fn);
+- else
+- clear_bit(usage->code, asc->pressed_fn);
+-
+- input_event(input, usage->type, trans->to,
+- value);
+-
+- return 1;
++ if (test_bit(trans->from, input->key))
++ code = trans->from;
++ else if (test_bit(trans->to, input->key))
++ code = trans->to;
++
++ if (!code) {
++ if (trans->flags & APPLE_FLAG_FKEY) {
++ switch (fnmode) {
++ case 1:
++ do_translate = !asc->fn_on;
++ break;
++ case 2:
++ do_translate = asc->fn_on;
++ break;
++ default:
++ /* should never happen */
++ do_translate = false;
++ }
++ } else {
++ do_translate = asc->fn_on;
++ }
++
++ code = do_translate ? trans->to : trans->from;
+ }
++
++ input_event(input, usage->type, code, value);
++ return 1;
+ }
+
+ if (asc->quirks & APPLE_NUMLOCK_EMULATION &&
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index 53bddb50aeba..602219a8710d 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -88,7 +88,7 @@ static void wacom_wac_queue_flush(struct hid_device *hdev,
+ }
+
+ static int wacom_wac_pen_serial_enforce(struct hid_device *hdev,
+- struct hid_report *report, u8 *raw_data, int size)
++ struct hid_report *report, u8 *raw_data, int report_size)
+ {
+ struct wacom *wacom = hid_get_drvdata(hdev);
+ struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+@@ -149,7 +149,8 @@ static int wacom_wac_pen_serial_enforce(struct hid_device *hdev,
+ if (flush)
+ wacom_wac_queue_flush(hdev, &wacom_wac->pen_fifo);
+ else if (insert)
+- wacom_wac_queue_insert(hdev, &wacom_wac->pen_fifo, raw_data, size);
++ wacom_wac_queue_insert(hdev, &wacom_wac->pen_fifo,
++ raw_data, report_size);
+
+ return insert && !flush;
+ }
+@@ -2176,7 +2177,7 @@ static void wacom_update_name(struct wacom *wacom, const char *suffix)
+ {
+ struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+ struct wacom_features *features = &wacom_wac->features;
+- char name[WACOM_NAME_MAX];
++ char name[WACOM_NAME_MAX - 20]; /* Leave some room for suffixes */
+
+ /* Generic devices name unspecified */
+ if ((features->type == HID_GENERIC) && !strcmp("Wacom HID", features->name)) {
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 1713235d28cb..2b4640397375 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -251,7 +251,7 @@ static int wacom_dtu_irq(struct wacom_wac *wacom)
+
+ static int wacom_dtus_irq(struct wacom_wac *wacom)
+ {
+- char *data = wacom->data;
++ unsigned char *data = wacom->data;
+ struct input_dev *input = wacom->pen_input;
+ unsigned short prox, pressure = 0;
+
+@@ -572,7 +572,7 @@ static int wacom_intuos_pad(struct wacom_wac *wacom)
+ strip2 = ((data[3] & 0x1f) << 8) | data[4];
+ }
+
+- prox = (buttons & ~(~0 << nbuttons)) | (keys & ~(~0 << nkeys)) |
++ prox = (buttons & ~(~0U << nbuttons)) | (keys & ~(~0U << nkeys)) |
+ (ring1 & 0x80) | (ring2 & 0x80) | strip1 | strip2;
+
+ wacom_report_numbered_buttons(input, nbuttons, buttons);
+diff --git a/drivers/i2c/busses/i2c-cht-wc.c b/drivers/i2c/busses/i2c-cht-wc.c
+index 66af44bfa67d..f6546de66fbc 100644
+--- a/drivers/i2c/busses/i2c-cht-wc.c
++++ b/drivers/i2c/busses/i2c-cht-wc.c
+@@ -178,6 +178,51 @@ static const struct i2c_algorithm cht_wc_i2c_adap_algo = {
+ .smbus_xfer = cht_wc_i2c_adap_smbus_xfer,
+ };
+
++/*
++ * We are an i2c-adapter which itself is part of an i2c-client. This means that
++ * transfers done through us take adapter->bus_lock twice, once for our parent
++ * i2c-adapter and once to take our own bus_lock. Lockdep does not like this
++ * nested locking, to make lockdep happy in the case of busses with muxes, the
++ * i2c-core's i2c_adapter_lock_bus function calls:
++ * rt_mutex_lock_nested(&adapter->bus_lock, i2c_adapter_depth(adapter));
++ *
++ * But i2c_adapter_depth only works when the direct parent of the adapter is
++ * another adapter, as it is only meant for muxes. In our case there is an
++ * i2c-client and MFD instantiated platform_device in the parent->child chain
++ * between the 2 devices.
++ *
++ * So we override the default i2c_lock_operations and pass a hardcoded
++ * depth of 1 to rt_mutex_lock_nested, to make lockdep happy.
++ *
++ * Note that if there were to be a mux attached to our adapter, this would
++ * break things again since the i2c-mux code expects the root-adapter to have
++ * a locking depth of 0. But we always have only 1 client directly attached
++ * in the form of the Charger IC paired with the CHT Whiskey Cove PMIC.
++ */
++static void cht_wc_i2c_adap_lock_bus(struct i2c_adapter *adapter,
++ unsigned int flags)
++{
++ rt_mutex_lock_nested(&adapter->bus_lock, 1);
++}
++
++static int cht_wc_i2c_adap_trylock_bus(struct i2c_adapter *adapter,
++ unsigned int flags)
++{
++ return rt_mutex_trylock(&adapter->bus_lock);
++}
++
++static void cht_wc_i2c_adap_unlock_bus(struct i2c_adapter *adapter,
++ unsigned int flags)
++{
++ rt_mutex_unlock(&adapter->bus_lock);
++}
++
++static const struct i2c_lock_operations cht_wc_i2c_adap_lock_ops = {
++ .lock_bus = cht_wc_i2c_adap_lock_bus,
++ .trylock_bus = cht_wc_i2c_adap_trylock_bus,
++ .unlock_bus = cht_wc_i2c_adap_unlock_bus,
++};
++
+ /**** irqchip for the client connected to the extchgr i2c adapter ****/
+ static void cht_wc_i2c_irq_lock(struct irq_data *data)
+ {
+@@ -286,6 +331,7 @@ static int cht_wc_i2c_adap_i2c_probe(struct platform_device *pdev)
+ adap->adapter.owner = THIS_MODULE;
+ adap->adapter.class = I2C_CLASS_HWMON;
+ adap->adapter.algo = &cht_wc_i2c_adap_algo;
++ adap->adapter.lock_ops = &cht_wc_i2c_adap_lock_ops;
+ strlcpy(adap->adapter.name, "PMIC I2C Adapter",
+ sizeof(adap->adapter.name));
+ adap->adapter.dev.parent = &pdev->dev;
+diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c
+index 9fcb13beeb8f..7a3291d91a5e 100644
+--- a/drivers/i2c/busses/i2c-tegra.c
++++ b/drivers/i2c/busses/i2c-tegra.c
+@@ -713,12 +713,6 @@ static int tegra_i2c_init(struct tegra_i2c_dev *i2c_dev, bool clk_reinit)
+ u32 tsu_thd;
+ u8 tlow, thigh;
+
+- err = pm_runtime_get_sync(i2c_dev->dev);
+- if (err < 0) {
+- dev_err(i2c_dev->dev, "runtime resume failed %d\n", err);
+- return err;
+- }
+-
+ reset_control_assert(i2c_dev->rst);
+ udelay(2);
+ reset_control_deassert(i2c_dev->rst);
+@@ -772,7 +766,7 @@ static int tegra_i2c_init(struct tegra_i2c_dev *i2c_dev, bool clk_reinit)
+ if (err) {
+ dev_err(i2c_dev->dev,
+ "failed changing clock rate: %d\n", err);
+- goto err;
++ return err;
+ }
+ }
+
+@@ -787,23 +781,21 @@ static int tegra_i2c_init(struct tegra_i2c_dev *i2c_dev, bool clk_reinit)
+
+ err = tegra_i2c_flush_fifos(i2c_dev);
+ if (err)
+- goto err;
++ return err;
+
+ if (i2c_dev->is_multimaster_mode && i2c_dev->hw->has_slcg_override_reg)
+ i2c_writel(i2c_dev, I2C_MST_CORE_CLKEN_OVR, I2C_CLKEN_OVERRIDE);
+
+ err = tegra_i2c_wait_for_config_load(i2c_dev);
+ if (err)
+- goto err;
++ return err;
+
+ if (i2c_dev->irq_disabled) {
+ i2c_dev->irq_disabled = false;
+ enable_irq(i2c_dev->irq);
+ }
+
+-err:
+- pm_runtime_put(i2c_dev->dev);
+- return err;
++ return 0;
+ }
+
+ static int tegra_i2c_disable_packet_mode(struct tegra_i2c_dev *i2c_dev)
+@@ -1616,12 +1608,14 @@ static int tegra_i2c_probe(struct platform_device *pdev)
+ }
+
+ pm_runtime_enable(&pdev->dev);
+- if (!pm_runtime_enabled(&pdev->dev)) {
++ if (!pm_runtime_enabled(&pdev->dev))
+ ret = tegra_i2c_runtime_resume(&pdev->dev);
+- if (ret < 0) {
+- dev_err(&pdev->dev, "runtime resume failed\n");
+- goto unprepare_div_clk;
+- }
++ else
++ ret = pm_runtime_get_sync(i2c_dev->dev);
++
++ if (ret < 0) {
++ dev_err(&pdev->dev, "runtime resume failed\n");
++ goto unprepare_div_clk;
+ }
+
+ if (i2c_dev->is_multimaster_mode) {
+@@ -1666,6 +1660,8 @@ static int tegra_i2c_probe(struct platform_device *pdev)
+ if (ret)
+ goto release_dma;
+
++ pm_runtime_put(&pdev->dev);
++
+ return 0;
+
+ release_dma:
+@@ -1726,17 +1722,25 @@ static int tegra_i2c_resume(struct device *dev)
+ struct tegra_i2c_dev *i2c_dev = dev_get_drvdata(dev);
+ int err;
+
++ err = tegra_i2c_runtime_resume(dev);
++ if (err)
++ return err;
++
+ err = tegra_i2c_init(i2c_dev, false);
+ if (err)
+ return err;
+
++ err = tegra_i2c_runtime_suspend(dev);
++ if (err)
++ return err;
++
+ i2c_mark_adapter_resumed(&i2c_dev->adapter);
+
+ return 0;
+ }
+
+ static const struct dev_pm_ops tegra_i2c_pm = {
+- SET_SYSTEM_SLEEP_PM_OPS(tegra_i2c_suspend, tegra_i2c_resume)
++ SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(tegra_i2c_suspend, tegra_i2c_resume)
+ SET_RUNTIME_PM_OPS(tegra_i2c_runtime_suspend, tegra_i2c_runtime_resume,
+ NULL)
+ };
+diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c
+index 00d5219094e5..48bba4913952 100644
+--- a/drivers/mailbox/mtk-cmdq-mailbox.c
++++ b/drivers/mailbox/mtk-cmdq-mailbox.c
+@@ -22,6 +22,7 @@
+ #define CMDQ_NUM_CMD(t) (t->cmd_buf_size / CMDQ_INST_SIZE)
+
+ #define CMDQ_CURR_IRQ_STATUS 0x10
++#define CMDQ_SYNC_TOKEN_UPDATE 0x68
+ #define CMDQ_THR_SLOT_CYCLES 0x30
+ #define CMDQ_THR_BASE 0x100
+ #define CMDQ_THR_SIZE 0x80
+@@ -104,8 +105,12 @@ static void cmdq_thread_resume(struct cmdq_thread *thread)
+
+ static void cmdq_init(struct cmdq *cmdq)
+ {
++ int i;
++
+ WARN_ON(clk_enable(cmdq->clock) < 0);
+ writel(CMDQ_THR_ACTIVE_SLOT_CYCLES, cmdq->base + CMDQ_THR_SLOT_CYCLES);
++ for (i = 0; i <= CMDQ_MAX_EVENT; i++)
++ writel(i, cmdq->base + CMDQ_SYNC_TOKEN_UPDATE);
+ clk_disable(cmdq->clock);
+ }
+
+diff --git a/drivers/mailbox/qcom-apcs-ipc-mailbox.c b/drivers/mailbox/qcom-apcs-ipc-mailbox.c
+index 705e17a5479c..d3676fd3cf94 100644
+--- a/drivers/mailbox/qcom-apcs-ipc-mailbox.c
++++ b/drivers/mailbox/qcom-apcs-ipc-mailbox.c
+@@ -47,7 +47,6 @@ static const struct mbox_chan_ops qcom_apcs_ipc_ops = {
+
+ static int qcom_apcs_ipc_probe(struct platform_device *pdev)
+ {
+- struct device_node *np = pdev->dev.of_node;
+ struct qcom_apcs_ipc *apcs;
+ struct regmap *regmap;
+ struct resource *res;
+@@ -55,6 +54,11 @@ static int qcom_apcs_ipc_probe(struct platform_device *pdev)
+ void __iomem *base;
+ unsigned long i;
+ int ret;
++ const struct of_device_id apcs_clk_match_table[] = {
++ { .compatible = "qcom,msm8916-apcs-kpss-global", },
++ { .compatible = "qcom,qcs404-apcs-apps-global", },
++ {}
++ };
+
+ apcs = devm_kzalloc(&pdev->dev, sizeof(*apcs), GFP_KERNEL);
+ if (!apcs)
+@@ -89,7 +93,7 @@ static int qcom_apcs_ipc_probe(struct platform_device *pdev)
+ return ret;
+ }
+
+- if (of_device_is_compatible(np, "qcom,msm8916-apcs-kpss-global")) {
++ if (of_match_device(apcs_clk_match_table, &pdev->dev)) {
+ apcs->clk = platform_device_register_data(&pdev->dev,
+ "qcom-apcs-msm8916-clk",
+ -1, NULL, 0);
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index 1f933dd197cd..b0aa595e4375 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -3738,18 +3738,18 @@ static int raid_iterate_devices(struct dm_target *ti,
+ static void raid_io_hints(struct dm_target *ti, struct queue_limits *limits)
+ {
+ struct raid_set *rs = ti->private;
+- unsigned int chunk_size = to_bytes(rs->md.chunk_sectors);
++ unsigned int chunk_size_bytes = to_bytes(rs->md.chunk_sectors);
+
+- blk_limits_io_min(limits, chunk_size);
+- blk_limits_io_opt(limits, chunk_size * mddev_data_stripes(rs));
++ blk_limits_io_min(limits, chunk_size_bytes);
++ blk_limits_io_opt(limits, chunk_size_bytes * mddev_data_stripes(rs));
+
+ /*
+ * RAID1 and RAID10 personalities require bio splitting,
+ * RAID0/4/5/6 don't and process large discard bios properly.
+ */
+ if (rs_is_raid1(rs) || rs_is_raid10(rs)) {
+- limits->discard_granularity = chunk_size;
+- limits->max_discard_sectors = chunk_size;
++ limits->discard_granularity = chunk_size_bytes;
++ limits->max_discard_sectors = rs->md.chunk_sectors;
+ }
+ }
+
+diff --git a/drivers/md/dm-zoned-target.c b/drivers/md/dm-zoned-target.c
+index 31478fef6032..d3bcc4197f5d 100644
+--- a/drivers/md/dm-zoned-target.c
++++ b/drivers/md/dm-zoned-target.c
+@@ -134,8 +134,6 @@ static int dmz_submit_bio(struct dmz_target *dmz, struct dm_zone *zone,
+
+ refcount_inc(&bioctx->ref);
+ generic_make_request(clone);
+- if (clone->bi_status == BLK_STS_IOERR)
+- return -EIO;
+
+ if (bio_op(bio) == REQ_OP_WRITE && dmz_is_seq(zone))
+ zone->wp_block += nr_blocks;
+diff --git a/drivers/mfd/intel-lpss-pci.c b/drivers/mfd/intel-lpss-pci.c
+index ade6e1ce5a98..e3a04929aaa3 100644
+--- a/drivers/mfd/intel-lpss-pci.c
++++ b/drivers/mfd/intel-lpss-pci.c
+@@ -35,6 +35,8 @@ static int intel_lpss_pci_probe(struct pci_dev *pdev,
+ info->mem = &pdev->resource[0];
+ info->irq = pdev->irq;
+
++ pdev->d3cold_delay = 0;
++
+ /* Probably it is enough to set this for iDMA capable devices only */
+ pci_set_master(pdev);
+ pci_try_set_mwi(pdev);
+diff --git a/drivers/net/dsa/rtl8366.c b/drivers/net/dsa/rtl8366.c
+index ca3d17e43ed8..ac88caca5ad4 100644
+--- a/drivers/net/dsa/rtl8366.c
++++ b/drivers/net/dsa/rtl8366.c
+@@ -339,10 +339,12 @@ int rtl8366_vlan_prepare(struct dsa_switch *ds, int port,
+ const struct switchdev_obj_port_vlan *vlan)
+ {
+ struct realtek_smi *smi = ds->priv;
++ u16 vid;
+ int ret;
+
+- if (!smi->ops->is_vlan_valid(smi, port))
+- return -EINVAL;
++ for (vid = vlan->vid_begin; vid < vlan->vid_end; vid++)
++ if (!smi->ops->is_vlan_valid(smi, vid))
++ return -EINVAL;
+
+ dev_info(smi->dev, "prepare VLANs %04x..%04x\n",
+ vlan->vid_begin, vlan->vid_end);
+@@ -370,8 +372,9 @@ void rtl8366_vlan_add(struct dsa_switch *ds, int port,
+ u16 vid;
+ int ret;
+
+- if (!smi->ops->is_vlan_valid(smi, port))
+- return;
++ for (vid = vlan->vid_begin; vid < vlan->vid_end; vid++)
++ if (!smi->ops->is_vlan_valid(smi, vid))
++ return;
+
+ dev_info(smi->dev, "add VLAN on port %d, %s, %s\n",
+ port,
+diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
+index df976b259e43..296286f4fb39 100644
+--- a/drivers/net/dsa/sja1105/sja1105_main.c
++++ b/drivers/net/dsa/sja1105/sja1105_main.c
+@@ -1875,7 +1875,9 @@ static int sja1105_set_ageing_time(struct dsa_switch *ds,
+ return sja1105_static_config_reload(priv);
+ }
+
+-/* Caller must hold priv->tagger_data.meta_lock */
++/* Must be called only with priv->tagger_data.state bit
++ * SJA1105_HWTS_RX_EN cleared
++ */
+ static int sja1105_change_rxtstamping(struct sja1105_private *priv,
+ bool on)
+ {
+@@ -1932,16 +1934,17 @@ static int sja1105_hwtstamp_set(struct dsa_switch *ds, int port,
+ break;
+ }
+
+- if (rx_on != priv->tagger_data.hwts_rx_en) {
+- spin_lock(&priv->tagger_data.meta_lock);
++ if (rx_on != test_bit(SJA1105_HWTS_RX_EN, &priv->tagger_data.state)) {
++ clear_bit(SJA1105_HWTS_RX_EN, &priv->tagger_data.state);
++
+ rc = sja1105_change_rxtstamping(priv, rx_on);
+- spin_unlock(&priv->tagger_data.meta_lock);
+ if (rc < 0) {
+ dev_err(ds->dev,
+ "Failed to change RX timestamping: %d\n", rc);
+- return -EFAULT;
++ return rc;
+ }
+- priv->tagger_data.hwts_rx_en = rx_on;
++ if (rx_on)
++ set_bit(SJA1105_HWTS_RX_EN, &priv->tagger_data.state);
+ }
+
+ if (copy_to_user(ifr->ifr_data, &config, sizeof(config)))
+@@ -1960,7 +1963,7 @@ static int sja1105_hwtstamp_get(struct dsa_switch *ds, int port,
+ config.tx_type = HWTSTAMP_TX_ON;
+ else
+ config.tx_type = HWTSTAMP_TX_OFF;
+- if (priv->tagger_data.hwts_rx_en)
++ if (test_bit(SJA1105_HWTS_RX_EN, &priv->tagger_data.state))
+ config.rx_filter = HWTSTAMP_FILTER_PTP_V2_L2_EVENT;
+ else
+ config.rx_filter = HWTSTAMP_FILTER_NONE;
+@@ -1983,12 +1986,12 @@ static void sja1105_rxtstamp_work(struct work_struct *work)
+
+ mutex_lock(&priv->ptp_lock);
+
+- now = priv->tstamp_cc.read(&priv->tstamp_cc);
+-
+ while ((skb = skb_dequeue(&data->skb_rxtstamp_queue)) != NULL) {
+ struct skb_shared_hwtstamps *shwt = skb_hwtstamps(skb);
+ u64 ts;
+
++ now = priv->tstamp_cc.read(&priv->tstamp_cc);
++
+ *shwt = (struct skb_shared_hwtstamps) {0};
+
+ ts = SJA1105_SKB_CB(skb)->meta_tstamp;
+@@ -2009,7 +2012,7 @@ static bool sja1105_port_rxtstamp(struct dsa_switch *ds, int port,
+ struct sja1105_private *priv = ds->priv;
+ struct sja1105_tagger_data *data = &priv->tagger_data;
+
+- if (!data->hwts_rx_en)
++ if (!test_bit(SJA1105_HWTS_RX_EN, &data->state))
+ return false;
+
+ /* We need to read the full PTP clock to reconstruct the Rx
+@@ -2165,6 +2168,7 @@ static int sja1105_probe(struct spi_device *spi)
+ tagger_data = &priv->tagger_data;
+ skb_queue_head_init(&tagger_data->skb_rxtstamp_queue);
+ INIT_WORK(&tagger_data->rxtstamp_work, sja1105_rxtstamp_work);
++ spin_lock_init(&tagger_data->meta_lock);
+
+ /* Connections between dsa_port and sja1105_port */
+ for (i = 0; i < SJA1105_NUM_PORTS; i++) {
+diff --git a/drivers/net/dsa/sja1105/sja1105_spi.c b/drivers/net/dsa/sja1105/sja1105_spi.c
+index 84dc603138cf..58dd37ecde17 100644
+--- a/drivers/net/dsa/sja1105/sja1105_spi.c
++++ b/drivers/net/dsa/sja1105/sja1105_spi.c
+@@ -409,7 +409,8 @@ int sja1105_static_config_upload(struct sja1105_private *priv)
+ rc = static_config_buf_prepare_for_upload(priv, config_buf, buf_len);
+ if (rc < 0) {
+ dev_err(dev, "Invalid config, cannot upload\n");
+- return -EINVAL;
++ rc = -EINVAL;
++ goto out;
+ }
+ /* Prevent PHY jabbering during switch reset by inhibiting
+ * Tx on all ports and waiting for current packet to drain.
+@@ -418,7 +419,8 @@ int sja1105_static_config_upload(struct sja1105_private *priv)
+ rc = sja1105_inhibit_tx(priv, port_bitmap, true);
+ if (rc < 0) {
+ dev_err(dev, "Failed to inhibit Tx on ports\n");
+- return -ENXIO;
++ rc = -ENXIO;
++ goto out;
+ }
+ /* Wait for an eventual egress packet to finish transmission
+ * (reach IFG). It is guaranteed that a second one will not
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
+index 5b602243d573..a4dead4ab0ed 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
+@@ -137,13 +137,12 @@ static int uldrx_handler(struct sge_rspq *q, const __be64 *rsp,
+ static int alloc_uld_rxqs(struct adapter *adap,
+ struct sge_uld_rxq_info *rxq_info, bool lro)
+ {
+- struct sge *s = &adap->sge;
+ unsigned int nq = rxq_info->nrxq + rxq_info->nciq;
++ int i, err, msi_idx, que_idx = 0, bmap_idx = 0;
+ struct sge_ofld_rxq *q = rxq_info->uldrxq;
+ unsigned short *ids = rxq_info->rspq_id;
+- unsigned int bmap_idx = 0;
++ struct sge *s = &adap->sge;
+ unsigned int per_chan;
+- int i, err, msi_idx, que_idx = 0;
+
+ per_chan = rxq_info->nrxq / adap->params.nports;
+
+@@ -161,6 +160,10 @@ static int alloc_uld_rxqs(struct adapter *adap,
+
+ if (msi_idx >= 0) {
+ bmap_idx = get_msix_idx_from_bmap(adap);
++ if (bmap_idx < 0) {
++ err = -ENOSPC;
++ goto freeout;
++ }
+ msi_idx = adap->msix_info_ulds[bmap_idx].idx;
+ }
+ err = t4_sge_alloc_rxq(adap, &q->rspq, false,
+diff --git a/drivers/net/ethernet/qlogic/qla3xxx.c b/drivers/net/ethernet/qlogic/qla3xxx.c
+index 457444894d80..b4b8ba00ee01 100644
+--- a/drivers/net/ethernet/qlogic/qla3xxx.c
++++ b/drivers/net/ethernet/qlogic/qla3xxx.c
+@@ -2787,6 +2787,7 @@ static int ql_alloc_large_buffers(struct ql3_adapter *qdev)
+ netdev_err(qdev->ndev,
+ "PCI mapping failed with error: %d\n",
+ err);
++ dev_kfree_skb_irq(skb);
+ ql_free_large_buffers(qdev);
+ return -ENOMEM;
+ }
+diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
+index 1502fe8b0456..b9ac45d9dee8 100644
+--- a/drivers/net/ethernet/socionext/netsec.c
++++ b/drivers/net/ethernet/socionext/netsec.c
+@@ -282,7 +282,6 @@ struct netsec_desc_ring {
+ void *vaddr;
+ u16 head, tail;
+ u16 xdp_xmit; /* netsec_xdp_xmit packets */
+- bool is_xdp;
+ struct page_pool *page_pool;
+ struct xdp_rxq_info xdp_rxq;
+ spinlock_t lock; /* XDP tx queue locking */
+@@ -634,8 +633,7 @@ static bool netsec_clean_tx_dring(struct netsec_priv *priv)
+ unsigned int bytes;
+ int cnt = 0;
+
+- if (dring->is_xdp)
+- spin_lock(&dring->lock);
++ spin_lock(&dring->lock);
+
+ bytes = 0;
+ entry = dring->vaddr + DESC_SZ * tail;
+@@ -682,8 +680,8 @@ next:
+ entry = dring->vaddr + DESC_SZ * tail;
+ cnt++;
+ }
+- if (dring->is_xdp)
+- spin_unlock(&dring->lock);
++
++ spin_unlock(&dring->lock);
+
+ if (!cnt)
+ return false;
+@@ -799,9 +797,6 @@ static void netsec_set_tx_de(struct netsec_priv *priv,
+ de->data_buf_addr_lw = lower_32_bits(desc->dma_addr);
+ de->buf_len_info = (tx_ctrl->tcp_seg_len << 16) | desc->len;
+ de->attr = attr;
+- /* under spin_lock if using XDP */
+- if (!dring->is_xdp)
+- dma_wmb();
+
+ dring->desc[idx] = *desc;
+ if (desc->buf_type == TYPE_NETSEC_SKB)
+@@ -1123,12 +1118,10 @@ static netdev_tx_t netsec_netdev_start_xmit(struct sk_buff *skb,
+ u16 tso_seg_len = 0;
+ int filled;
+
+- if (dring->is_xdp)
+- spin_lock_bh(&dring->lock);
++ spin_lock_bh(&dring->lock);
+ filled = netsec_desc_used(dring);
+ if (netsec_check_stop_tx(priv, filled)) {
+- if (dring->is_xdp)
+- spin_unlock_bh(&dring->lock);
++ spin_unlock_bh(&dring->lock);
+ net_warn_ratelimited("%s %s Tx queue full\n",
+ dev_name(priv->dev), ndev->name);
+ return NETDEV_TX_BUSY;
+@@ -1161,8 +1154,7 @@ static netdev_tx_t netsec_netdev_start_xmit(struct sk_buff *skb,
+ tx_desc.dma_addr = dma_map_single(priv->dev, skb->data,
+ skb_headlen(skb), DMA_TO_DEVICE);
+ if (dma_mapping_error(priv->dev, tx_desc.dma_addr)) {
+- if (dring->is_xdp)
+- spin_unlock_bh(&dring->lock);
++ spin_unlock_bh(&dring->lock);
+ netif_err(priv, drv, priv->ndev,
+ "%s: DMA mapping failed\n", __func__);
+ ndev->stats.tx_dropped++;
+@@ -1177,8 +1169,7 @@ static netdev_tx_t netsec_netdev_start_xmit(struct sk_buff *skb,
+ netdev_sent_queue(priv->ndev, skb->len);
+
+ netsec_set_tx_de(priv, dring, &tx_ctrl, &tx_desc, skb);
+- if (dring->is_xdp)
+- spin_unlock_bh(&dring->lock);
++ spin_unlock_bh(&dring->lock);
+ netsec_write(priv, NETSEC_REG_NRM_TX_PKTCNT, 1); /* submit another tx */
+
+ return NETDEV_TX_OK;
+@@ -1262,7 +1253,6 @@ err:
+ static void netsec_setup_tx_dring(struct netsec_priv *priv)
+ {
+ struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_TX];
+- struct bpf_prog *xdp_prog = READ_ONCE(priv->xdp_prog);
+ int i;
+
+ for (i = 0; i < DESC_NUM; i++) {
+@@ -1275,12 +1265,6 @@ static void netsec_setup_tx_dring(struct netsec_priv *priv)
+ */
+ de->attr = 1U << NETSEC_TX_SHIFT_OWN_FIELD;
+ }
+-
+- if (xdp_prog)
+- dring->is_xdp = true;
+- else
+- dring->is_xdp = false;
+-
+ }
+
+ static int netsec_setup_rx_dring(struct netsec_priv *priv)
+diff --git a/drivers/net/usb/hso.c b/drivers/net/usb/hso.c
+index ce78714f536f..a505b2ab88b8 100644
+--- a/drivers/net/usb/hso.c
++++ b/drivers/net/usb/hso.c
+@@ -2620,14 +2620,18 @@ static struct hso_device *hso_create_bulk_serial_device(
+ */
+ if (serial->tiocmget) {
+ tiocmget = serial->tiocmget;
++ tiocmget->endp = hso_get_ep(interface,
++ USB_ENDPOINT_XFER_INT,
++ USB_DIR_IN);
++ if (!tiocmget->endp) {
++ dev_err(&interface->dev, "Failed to find INT IN ep\n");
++ goto exit;
++ }
++
+ tiocmget->urb = usb_alloc_urb(0, GFP_KERNEL);
+ if (tiocmget->urb) {
+ mutex_init(&tiocmget->mutex);
+ init_waitqueue_head(&tiocmget->waitq);
+- tiocmget->endp = hso_get_ep(
+- interface,
+- USB_ENDPOINT_XFER_INT,
+- USB_DIR_IN);
+ } else
+ hso_free_tiomget(serial);
+ }
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index b6dc5d714b5e..3d77cd402ba9 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1350,6 +1350,7 @@ static const struct usb_device_id products[] = {
+ {QMI_FIXED_INTF(0x1e2d, 0x0082, 4)}, /* Cinterion PHxx,PXxx (2 RmNet) */
+ {QMI_FIXED_INTF(0x1e2d, 0x0082, 5)}, /* Cinterion PHxx,PXxx (2 RmNet) */
+ {QMI_FIXED_INTF(0x1e2d, 0x0083, 4)}, /* Cinterion PHxx,PXxx (1 RmNet + USB Audio)*/
++ {QMI_QUIRK_SET_DTR(0x1e2d, 0x00b0, 4)}, /* Cinterion CLS8 */
+ {QMI_FIXED_INTF(0x413c, 0x81a2, 8)}, /* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card */
+ {QMI_FIXED_INTF(0x413c, 0x81a3, 8)}, /* Dell Wireless 5570 HSPA+ (42Mbps) Mobile Broadband Card */
+ {QMI_FIXED_INTF(0x413c, 0x81a4, 8)}, /* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card */
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 5f5722bf6762..7370e06a0e4b 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -887,9 +887,9 @@ static int xennet_set_skb_gso(struct sk_buff *skb,
+ return 0;
+ }
+
+-static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
+- struct sk_buff *skb,
+- struct sk_buff_head *list)
++static int xennet_fill_frags(struct netfront_queue *queue,
++ struct sk_buff *skb,
++ struct sk_buff_head *list)
+ {
+ RING_IDX cons = queue->rx.rsp_cons;
+ struct sk_buff *nskb;
+@@ -908,7 +908,7 @@ static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
+ if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) {
+ queue->rx.rsp_cons = ++cons + skb_queue_len(list);
+ kfree_skb(nskb);
+- return ~0U;
++ return -ENOENT;
+ }
+
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+@@ -919,7 +919,9 @@ static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
+ kfree_skb(nskb);
+ }
+
+- return cons;
++ queue->rx.rsp_cons = cons;
++
++ return 0;
+ }
+
+ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
+@@ -1045,8 +1047,7 @@ err:
+ skb->data_len = rx->status;
+ skb->len += rx->status;
+
+- i = xennet_fill_frags(queue, skb, &tmpq);
+- if (unlikely(i == ~0U))
++ if (unlikely(xennet_fill_frags(queue, skb, &tmpq)))
+ goto err;
+
+ if (rx->flags & XEN_NETRXF_csum_blank)
+@@ -1056,7 +1057,7 @@ err:
+
+ __skb_queue_tail(&rxq, skb);
+
+- queue->rx.rsp_cons = ++i;
++ i = ++queue->rx.rsp_cons;
+ work_done++;
+ }
+
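+
+NOTE: the xen-netfront fix above replaces a magic sentinel return (~0U)
+with a negative errno and moves the rx.rsp_cons update into the callee.
+A minimal sketch of the idiom, using hypothetical names:
+
+	static int fill_frags(struct queue *q)
+	{
+		if (too_many_frags(q))
+			return -ENOENT;	/* failure is explicit, not ~0U */
+		q->rsp_cons = q->cons;	/* advance state only on success */
+		return 0;
+	}
+
+	if (unlikely(fill_frags(q)))
+		goto err;
+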
+diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
+index 2ab92409210a..297bf928d652 100644
+--- a/drivers/pci/Kconfig
++++ b/drivers/pci/Kconfig
+@@ -181,7 +181,7 @@ config PCI_LABEL
+
+ config PCI_HYPERV
+ tristate "Hyper-V PCI Frontend"
+- depends on X86 && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && X86_64
++ depends on X86_64 && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && SYSFS
+ help
+ The PCI device frontend driver allows the kernel to import arbitrary
+ PCI devices from a PCI backend to support PCI driver domains.
+diff --git a/drivers/pci/controller/dwc/pci-exynos.c b/drivers/pci/controller/dwc/pci-exynos.c
+index cee5f2f590e2..14a6ba4067fb 100644
+--- a/drivers/pci/controller/dwc/pci-exynos.c
++++ b/drivers/pci/controller/dwc/pci-exynos.c
+@@ -465,7 +465,7 @@ static int __init exynos_pcie_probe(struct platform_device *pdev)
+
+ ep->phy = devm_of_phy_get(dev, np, NULL);
+ if (IS_ERR(ep->phy)) {
+- if (PTR_ERR(ep->phy) == -EPROBE_DEFER)
++ if (PTR_ERR(ep->phy) != -ENODEV)
+ return PTR_ERR(ep->phy);
+
+ ep->phy = NULL;
+diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
+index 9b5cb5b70389..aabf22eaa6b9 100644
+--- a/drivers/pci/controller/dwc/pci-imx6.c
++++ b/drivers/pci/controller/dwc/pci-imx6.c
+@@ -1173,8 +1173,8 @@ static int imx6_pcie_probe(struct platform_device *pdev)
+
+ imx6_pcie->vpcie = devm_regulator_get_optional(&pdev->dev, "vpcie");
+ if (IS_ERR(imx6_pcie->vpcie)) {
+- if (PTR_ERR(imx6_pcie->vpcie) == -EPROBE_DEFER)
+- return -EPROBE_DEFER;
++ if (PTR_ERR(imx6_pcie->vpcie) != -ENODEV)
++ return PTR_ERR(imx6_pcie->vpcie);
+ imx6_pcie->vpcie = NULL;
+ }
+
+diff --git a/drivers/pci/controller/dwc/pci-layerscape-ep.c b/drivers/pci/controller/dwc/pci-layerscape-ep.c
+index be61d96cc95e..ca9aa4501e7e 100644
+--- a/drivers/pci/controller/dwc/pci-layerscape-ep.c
++++ b/drivers/pci/controller/dwc/pci-layerscape-ep.c
+@@ -44,6 +44,7 @@ static const struct pci_epc_features ls_pcie_epc_features = {
+ .linkup_notifier = false,
+ .msi_capable = true,
+ .msix_capable = false,
++ .bar_fixed_64bit = (1 << BAR_2) | (1 << BAR_4),
+ };
+
+ static const struct pci_epc_features*
+diff --git a/drivers/pci/controller/dwc/pcie-histb.c b/drivers/pci/controller/dwc/pcie-histb.c
+index 954bc2b74bbc..811b5c6d62ea 100644
+--- a/drivers/pci/controller/dwc/pcie-histb.c
++++ b/drivers/pci/controller/dwc/pcie-histb.c
+@@ -340,8 +340,8 @@ static int histb_pcie_probe(struct platform_device *pdev)
+
+ hipcie->vpcie = devm_regulator_get_optional(dev, "vpcie");
+ if (IS_ERR(hipcie->vpcie)) {
+- if (PTR_ERR(hipcie->vpcie) == -EPROBE_DEFER)
+- return -EPROBE_DEFER;
++ if (PTR_ERR(hipcie->vpcie) != -ENODEV)
++ return PTR_ERR(hipcie->vpcie);
+ hipcie->vpcie = NULL;
+ }
+
+diff --git a/drivers/pci/controller/pci-tegra.c b/drivers/pci/controller/pci-tegra.c
+index 9a917b2456f6..673a1725ef38 100644
+--- a/drivers/pci/controller/pci-tegra.c
++++ b/drivers/pci/controller/pci-tegra.c
+@@ -2237,14 +2237,15 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
+ err = of_pci_get_devfn(port);
+ if (err < 0) {
+ dev_err(dev, "failed to parse address: %d\n", err);
+- return err;
++ goto err_node_put;
+ }
+
+ index = PCI_SLOT(err);
+
+ if (index < 1 || index > soc->num_ports) {
+ dev_err(dev, "invalid port number: %d\n", index);
+- return -EINVAL;
++ err = -EINVAL;
++ goto err_node_put;
+ }
+
+ index--;
+@@ -2253,12 +2254,13 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
+ if (err < 0) {
+ dev_err(dev, "failed to parse # of lanes: %d\n",
+ err);
+- return err;
++ goto err_node_put;
+ }
+
+ if (value > 16) {
+ dev_err(dev, "invalid # of lanes: %u\n", value);
+- return -EINVAL;
++ err = -EINVAL;
++ goto err_node_put;
+ }
+
+ lanes |= value << (index << 3);
+@@ -2272,13 +2274,15 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
+ lane += value;
+
+ rp = devm_kzalloc(dev, sizeof(*rp), GFP_KERNEL);
+- if (!rp)
+- return -ENOMEM;
++ if (!rp) {
++ err = -ENOMEM;
++ goto err_node_put;
++ }
+
+ err = of_address_to_resource(port, 0, &rp->regs);
+ if (err < 0) {
+ dev_err(dev, "failed to parse address: %d\n", err);
+- return err;
++ goto err_node_put;
+ }
+
+ INIT_LIST_HEAD(&rp->list);
+@@ -2330,6 +2334,10 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
+ return err;
+
+ return 0;
++
++err_node_put:
++ of_node_put(port);
++ return err;
+ }
+
+ /*
+diff --git a/drivers/pci/controller/pcie-mobiveil.c b/drivers/pci/controller/pcie-mobiveil.c
+index 672e633601c7..a45a6447b01d 100644
+--- a/drivers/pci/controller/pcie-mobiveil.c
++++ b/drivers/pci/controller/pcie-mobiveil.c
+@@ -88,6 +88,7 @@
+ #define AMAP_CTRL_TYPE_MASK 3
+
+ #define PAB_EXT_PEX_AMAP_SIZEN(win) PAB_EXT_REG_ADDR(0xbef0, win)
++#define PAB_EXT_PEX_AMAP_AXI_WIN(win) PAB_EXT_REG_ADDR(0xb4a0, win)
+ #define PAB_PEX_AMAP_AXI_WIN(win) PAB_REG_ADDR(0x4ba4, win)
+ #define PAB_PEX_AMAP_PEX_WIN_L(win) PAB_REG_ADDR(0x4ba8, win)
+ #define PAB_PEX_AMAP_PEX_WIN_H(win) PAB_REG_ADDR(0x4bac, win)
+@@ -462,7 +463,7 @@ static int mobiveil_pcie_parse_dt(struct mobiveil_pcie *pcie)
+ }
+
+ static void program_ib_windows(struct mobiveil_pcie *pcie, int win_num,
+- u64 pci_addr, u32 type, u64 size)
++ u64 cpu_addr, u64 pci_addr, u32 type, u64 size)
+ {
+ u32 value;
+ u64 size64 = ~(size - 1);
+@@ -482,7 +483,10 @@ static void program_ib_windows(struct mobiveil_pcie *pcie, int win_num,
+ csr_writel(pcie, upper_32_bits(size64),
+ PAB_EXT_PEX_AMAP_SIZEN(win_num));
+
+- csr_writel(pcie, pci_addr, PAB_PEX_AMAP_AXI_WIN(win_num));
++ csr_writel(pcie, lower_32_bits(cpu_addr),
++ PAB_PEX_AMAP_AXI_WIN(win_num));
++ csr_writel(pcie, upper_32_bits(cpu_addr),
++ PAB_EXT_PEX_AMAP_AXI_WIN(win_num));
+
+ csr_writel(pcie, lower_32_bits(pci_addr),
+ PAB_PEX_AMAP_PEX_WIN_L(win_num));
+@@ -624,7 +628,7 @@ static int mobiveil_host_init(struct mobiveil_pcie *pcie)
+ CFG_WINDOW_TYPE, resource_size(pcie->ob_io_res));
+
+ /* memory inbound translation window */
+- program_ib_windows(pcie, WIN_NUM_0, 0, MEM_WINDOW_TYPE, IB_WIN_SIZE);
++ program_ib_windows(pcie, WIN_NUM_0, 0, 0, MEM_WINDOW_TYPE, IB_WIN_SIZE);
+
+ /* Get the I/O and memory ranges from DT */
+ resource_list_for_each_entry(win, &pcie->resources) {
+diff --git a/drivers/pci/controller/pcie-rockchip-host.c b/drivers/pci/controller/pcie-rockchip-host.c
+index 8d20f1793a61..ef8e677ce9d1 100644
+--- a/drivers/pci/controller/pcie-rockchip-host.c
++++ b/drivers/pci/controller/pcie-rockchip-host.c
+@@ -608,29 +608,29 @@ static int rockchip_pcie_parse_host_dt(struct rockchip_pcie *rockchip)
+
+ rockchip->vpcie12v = devm_regulator_get_optional(dev, "vpcie12v");
+ if (IS_ERR(rockchip->vpcie12v)) {
+- if (PTR_ERR(rockchip->vpcie12v) == -EPROBE_DEFER)
+- return -EPROBE_DEFER;
++ if (PTR_ERR(rockchip->vpcie12v) != -ENODEV)
++ return PTR_ERR(rockchip->vpcie12v);
+ dev_info(dev, "no vpcie12v regulator found\n");
+ }
+
+ rockchip->vpcie3v3 = devm_regulator_get_optional(dev, "vpcie3v3");
+ if (IS_ERR(rockchip->vpcie3v3)) {
+- if (PTR_ERR(rockchip->vpcie3v3) == -EPROBE_DEFER)
+- return -EPROBE_DEFER;
++ if (PTR_ERR(rockchip->vpcie3v3) != -ENODEV)
++ return PTR_ERR(rockchip->vpcie3v3);
+ dev_info(dev, "no vpcie3v3 regulator found\n");
+ }
+
+ rockchip->vpcie1v8 = devm_regulator_get_optional(dev, "vpcie1v8");
+ if (IS_ERR(rockchip->vpcie1v8)) {
+- if (PTR_ERR(rockchip->vpcie1v8) == -EPROBE_DEFER)
+- return -EPROBE_DEFER;
++ if (PTR_ERR(rockchip->vpcie1v8) != -ENODEV)
++ return PTR_ERR(rockchip->vpcie1v8);
+ dev_info(dev, "no vpcie1v8 regulator found\n");
+ }
+
+ rockchip->vpcie0v9 = devm_regulator_get_optional(dev, "vpcie0v9");
+ if (IS_ERR(rockchip->vpcie0v9)) {
+- if (PTR_ERR(rockchip->vpcie0v9) == -EPROBE_DEFER)
+- return -EPROBE_DEFER;
++ if (PTR_ERR(rockchip->vpcie0v9) != -ENODEV)
++ return PTR_ERR(rockchip->vpcie0v9);
+ dev_info(dev, "no vpcie0v9 regulator found\n");
+ }
+
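+
+NOTE: the pci-exynos, imx6, histb and rockchip hunks above all apply the
+same optional-regulator idiom: only -ENODEV means "supply not wired up";
+every other error, including -EPROBE_DEFER, must be propagated as-is. A
+minimal sketch, assuming a hypothetical "vfoo" supply:
+
+	reg = devm_regulator_get_optional(dev, "vfoo");
+	if (IS_ERR(reg)) {
+		if (PTR_ERR(reg) != -ENODEV)
+			return PTR_ERR(reg);	/* defer and real errors propagate */
+		reg = NULL;			/* absent supply: run without it */
+	}
+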
+diff --git a/drivers/pci/hotplug/rpaphp_core.c b/drivers/pci/hotplug/rpaphp_core.c
+index bcd5d357ca23..c3899ee1db99 100644
+--- a/drivers/pci/hotplug/rpaphp_core.c
++++ b/drivers/pci/hotplug/rpaphp_core.c
+@@ -230,7 +230,7 @@ static int rpaphp_check_drc_props_v2(struct device_node *dn, char *drc_name,
+ struct of_drc_info drc;
+ const __be32 *value;
+ char cell_drc_name[MAX_DRC_NAME_LEN];
+- int j, fndit;
++ int j;
+
+ info = of_find_property(dn->parent, "ibm,drc-info", NULL);
+ if (info == NULL)
+@@ -245,17 +245,13 @@ static int rpaphp_check_drc_props_v2(struct device_node *dn, char *drc_name,
+
+ /* Should now know end of current entry */
+
+- if (my_index > drc.last_drc_index)
+- continue;
+-
+- fndit = 1;
+- break;
++ /* Found it */
++ if (my_index <= drc.last_drc_index) {
++ sprintf(cell_drc_name, "%s%d", drc.drc_name_prefix,
++ my_index);
++ break;
++ }
+ }
+- /* Found it */
+-
+- if (fndit)
+- sprintf(cell_drc_name, "%s%d", drc.drc_name_prefix,
+- my_index);
+
+ if (((drc_name == NULL) ||
+ (drc_name && !strcmp(drc_name, cell_drc_name))) &&
+diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c
+index 06083b86d4f4..5fd90105510d 100644
+--- a/drivers/pci/pci-bridge-emul.c
++++ b/drivers/pci/pci-bridge-emul.c
+@@ -38,7 +38,7 @@ struct pci_bridge_reg_behavior {
+ u32 rsvd;
+ };
+
+-const static struct pci_bridge_reg_behavior pci_regs_behavior[] = {
++static const struct pci_bridge_reg_behavior pci_regs_behavior[] = {
+ [PCI_VENDOR_ID / 4] = { .ro = ~0 },
+ [PCI_COMMAND / 4] = {
+ .rw = (PCI_COMMAND_IO | PCI_COMMAND_MEMORY |
+@@ -173,7 +173,7 @@ const static struct pci_bridge_reg_behavior pci_regs_behavior[] = {
+ },
+ };
+
+-const static struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
++static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
+ [PCI_CAP_LIST_ID / 4] = {
+ /*
+ * Capability ID, Next Capability Pointer and
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 1b27b5af3d55..1f17da3dfeac 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -890,8 +890,8 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
+
+ pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
+ dev->current_state = (pmcsr & PCI_PM_CTRL_STATE_MASK);
+- if (dev->current_state != state && printk_ratelimit())
+- pci_info(dev, "Refused to change power state, currently in D%d\n",
++ if (dev->current_state != state)
++ pci_info_ratelimited(dev, "Refused to change power state, currently in D%d\n",
+ dev->current_state);
+
+ /*
+diff --git a/drivers/pinctrl/meson/pinctrl-meson-gxbb.c b/drivers/pinctrl/meson/pinctrl-meson-gxbb.c
+index 6c640837073e..5bfa56f3847e 100644
+--- a/drivers/pinctrl/meson/pinctrl-meson-gxbb.c
++++ b/drivers/pinctrl/meson/pinctrl-meson-gxbb.c
+@@ -192,8 +192,8 @@ static const unsigned int uart_rts_b_pins[] = { GPIODV_27 };
+
+ static const unsigned int uart_tx_c_pins[] = { GPIOY_13 };
+ static const unsigned int uart_rx_c_pins[] = { GPIOY_14 };
+-static const unsigned int uart_cts_c_pins[] = { GPIOX_11 };
+-static const unsigned int uart_rts_c_pins[] = { GPIOX_12 };
++static const unsigned int uart_cts_c_pins[] = { GPIOY_11 };
++static const unsigned int uart_rts_c_pins[] = { GPIOY_12 };
+
+ static const unsigned int i2c_sck_a_pins[] = { GPIODV_25 };
+ static const unsigned int i2c_sda_a_pins[] = { GPIODV_24 };
+@@ -439,10 +439,10 @@ static struct meson_pmx_group meson_gxbb_periphs_groups[] = {
+ GROUP(pwm_f_x, 3, 18),
+
+ /* Bank Y */
+- GROUP(uart_cts_c, 1, 19),
+- GROUP(uart_rts_c, 1, 18),
+- GROUP(uart_tx_c, 1, 17),
+- GROUP(uart_rx_c, 1, 16),
++ GROUP(uart_cts_c, 1, 17),
++ GROUP(uart_rts_c, 1, 16),
++ GROUP(uart_tx_c, 1, 19),
++ GROUP(uart_rx_c, 1, 18),
+ GROUP(pwm_a_y, 1, 21),
+ GROUP(pwm_f_y, 1, 20),
+ GROUP(i2s_out_ch23_y, 1, 5),
+diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
+index 9b9c61e3f065..977792654e01 100644
+--- a/drivers/pinctrl/pinctrl-amd.c
++++ b/drivers/pinctrl/pinctrl-amd.c
+@@ -565,15 +565,25 @@ static irqreturn_t amd_gpio_irq_handler(int irq, void *dev_id)
+ !(regval & BIT(INTERRUPT_MASK_OFF)))
+ continue;
+ irq = irq_find_mapping(gc->irq.domain, irqnr + i);
+- generic_handle_irq(irq);
++ if (irq != 0)
++ generic_handle_irq(irq);
+
+ /* Clear interrupt.
+ * We must read the pin register again, in case the
+ * value was changed while executing
+ * generic_handle_irq() above.
++ * If we didn't find a mapping for the interrupt,
++ * disable it in order to avoid a system hang caused
++ * by an interrupt storm.
+ */
+ raw_spin_lock_irqsave(&gpio_dev->lock, flags);
+ regval = readl(regs + i);
++ if (irq == 0) {
++ regval &= ~BIT(INTERRUPT_ENABLE_OFF);
++ dev_dbg(&gpio_dev->pdev->dev,
++ "Disabling spurious GPIO IRQ %d\n",
++ irqnr + i);
++ }
+ writel(regval, regs + i);
+ raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+ ret = IRQ_HANDLED;
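+
+NOTE: the pinctrl-amd hunk above handles a hardware interrupt source with
+no Linux mapping (irq_find_mapping() returned 0). Sketch of the pattern,
+with placeholder register names:
+
+	irq = irq_find_mapping(domain, hwirq);
+	if (irq)
+		generic_handle_irq(irq);
+
+	raw_spin_lock_irqsave(&lock, flags);
+	regval = readl(reg);
+	if (!irq)
+		regval &= ~BIT(ENABLE_BIT);	/* no handler: mask the source
+						 * before it can storm */
+	writel(regval, reg);
+	raw_spin_unlock_irqrestore(&lock, flags);
+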
+diff --git a/drivers/pinctrl/pinctrl-stmfx.c b/drivers/pinctrl/pinctrl-stmfx.c
+index d3332da35637..31b6e511670f 100644
+--- a/drivers/pinctrl/pinctrl-stmfx.c
++++ b/drivers/pinctrl/pinctrl-stmfx.c
+@@ -296,29 +296,29 @@ static int stmfx_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin,
+ switch (param) {
+ case PIN_CONFIG_BIAS_PULL_PIN_DEFAULT:
+ case PIN_CONFIG_BIAS_DISABLE:
++ case PIN_CONFIG_DRIVE_PUSH_PULL:
++ ret = stmfx_pinconf_set_type(pctl, pin, 0);
++ if (ret)
++ return ret;
++ break;
+ case PIN_CONFIG_BIAS_PULL_DOWN:
++ ret = stmfx_pinconf_set_type(pctl, pin, 1);
++ if (ret)
++ return ret;
+ ret = stmfx_pinconf_set_pupd(pctl, pin, 0);
+ if (ret)
+ return ret;
+ break;
+ case PIN_CONFIG_BIAS_PULL_UP:
+- ret = stmfx_pinconf_set_pupd(pctl, pin, 1);
++ ret = stmfx_pinconf_set_type(pctl, pin, 1);
+ if (ret)
+ return ret;
+- break;
+- case PIN_CONFIG_DRIVE_OPEN_DRAIN:
+- if (!dir)
+- ret = stmfx_pinconf_set_type(pctl, pin, 1);
+- else
+- ret = stmfx_pinconf_set_type(pctl, pin, 0);
++ ret = stmfx_pinconf_set_pupd(pctl, pin, 1);
+ if (ret)
+ return ret;
+ break;
+- case PIN_CONFIG_DRIVE_PUSH_PULL:
+- if (!dir)
+- ret = stmfx_pinconf_set_type(pctl, pin, 0);
+- else
+- ret = stmfx_pinconf_set_type(pctl, pin, 1);
++ case PIN_CONFIG_DRIVE_OPEN_DRAIN:
++ ret = stmfx_pinconf_set_type(pctl, pin, 1);
+ if (ret)
+ return ret;
+ break;
+diff --git a/drivers/pinctrl/tegra/pinctrl-tegra.c b/drivers/pinctrl/tegra/pinctrl-tegra.c
+index 186ef98e7b2b..f1b523beec5b 100644
+--- a/drivers/pinctrl/tegra/pinctrl-tegra.c
++++ b/drivers/pinctrl/tegra/pinctrl-tegra.c
+@@ -32,7 +32,9 @@ static inline u32 pmx_readl(struct tegra_pmx *pmx, u32 bank, u32 reg)
+
+ static inline void pmx_writel(struct tegra_pmx *pmx, u32 val, u32 bank, u32 reg)
+ {
+- writel(val, pmx->regs[bank] + reg);
++ writel_relaxed(val, pmx->regs[bank] + reg);
++ /* make sure pinmux register write completed */
++ pmx_readl(pmx, bank, reg);
+ }
+
+ static int tegra_pinctrl_get_groups_count(struct pinctrl_dev *pctldev)
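+
+NOTE: pmx_writel() above pairs a relaxed write with a read-back of the
+same register. Pinmux writes on Tegra are posted, and the dummy read is
+what forces completion before the caller continues. Generic sketch:
+
+	static inline void reg_write_flushed(void __iomem *addr, u32 val)
+	{
+		writel_relaxed(val, addr);	/* posted write */
+		(void)readl_relaxed(addr);	/* read-back flushes it */
+	}
+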
+diff --git a/drivers/power/supply/power_supply_hwmon.c b/drivers/power/supply/power_supply_hwmon.c
+index 51fe60440d12..75cf861ba492 100644
+--- a/drivers/power/supply/power_supply_hwmon.c
++++ b/drivers/power/supply/power_supply_hwmon.c
+@@ -284,6 +284,7 @@ int power_supply_add_hwmon_sysfs(struct power_supply *psy)
+ struct device *dev = &psy->dev;
+ struct device *hwmon;
+ int ret, i;
++ const char *name;
+
+ if (!devres_open_group(dev, power_supply_add_hwmon_sysfs,
+ GFP_KERNEL))
+@@ -334,7 +335,19 @@ int power_supply_add_hwmon_sysfs(struct power_supply *psy)
+ }
+ }
+
+- hwmon = devm_hwmon_device_register_with_info(dev, psy->desc->name,
++ name = psy->desc->name;
++ if (strchr(name, '-')) {
++ char *new_name;
++
++ new_name = devm_kstrdup(dev, name, GFP_KERNEL);
++ if (!new_name) {
++ ret = -ENOMEM;
++ goto error;
++ }
++ strreplace(new_name, '-', '_');
++ name = new_name;
++ }
++ hwmon = devm_hwmon_device_register_with_info(dev, name,
+ psyhw,
+ &power_supply_hwmon_chip_info,
+ NULL);
+diff --git a/drivers/ptp/ptp_qoriq.c b/drivers/ptp/ptp_qoriq.c
+index c61f00b72e15..a577218d1ab7 100644
+--- a/drivers/ptp/ptp_qoriq.c
++++ b/drivers/ptp/ptp_qoriq.c
+@@ -507,6 +507,8 @@ int ptp_qoriq_init(struct ptp_qoriq *ptp_qoriq, void __iomem *base,
+ ptp_qoriq->regs.etts_regs = base + ETTS_REGS_OFFSET;
+ }
+
++ spin_lock_init(&ptp_qoriq->lock);
++
+ ktime_get_real_ts64(&now);
+ ptp_qoriq_settime(&ptp_qoriq->caps, &now);
+
+@@ -514,7 +516,6 @@ int ptp_qoriq_init(struct ptp_qoriq *ptp_qoriq, void __iomem *base,
+ (ptp_qoriq->tclk_period & TCLK_PERIOD_MASK) << TCLK_PERIOD_SHIFT |
+ (ptp_qoriq->cksel & CKSEL_MASK) << CKSEL_SHIFT;
+
+- spin_lock_init(&ptp_qoriq->lock);
+ spin_lock_irqsave(&ptp_qoriq->lock, flags);
+
+ regs = &ptp_qoriq->regs;
+diff --git a/drivers/rtc/Kconfig b/drivers/rtc/Kconfig
+index e72f65b61176..add43c337489 100644
+--- a/drivers/rtc/Kconfig
++++ b/drivers/rtc/Kconfig
+@@ -500,6 +500,7 @@ config RTC_DRV_M41T80_WDT
+ watchdog timer in the ST M41T60 and M41T80 RTC chips series.
+ config RTC_DRV_BD70528
+ tristate "ROHM BD70528 PMIC RTC"
++ depends on MFD_ROHM_BD70528 && (BD70528_WATCHDOG || !BD70528_WATCHDOG)
+ help
+ If you say Y here you will get support for the RTC
+ on ROHM BD70528 Power Management IC.
+diff --git a/drivers/rtc/rtc-pcf85363.c b/drivers/rtc/rtc-pcf85363.c
+index a075e77617dc..3450d615974d 100644
+--- a/drivers/rtc/rtc-pcf85363.c
++++ b/drivers/rtc/rtc-pcf85363.c
+@@ -166,7 +166,12 @@ static int pcf85363_rtc_set_time(struct device *dev, struct rtc_time *tm)
+ buf[DT_YEARS] = bin2bcd(tm->tm_year % 100);
+
+ ret = regmap_bulk_write(pcf85363->regmap, CTRL_STOP_EN,
+- tmp, sizeof(tmp));
++ tmp, 2);
++ if (ret)
++ return ret;
++
++ ret = regmap_bulk_write(pcf85363->regmap, DT_100THS,
++ buf, sizeof(tmp) - 2);
+ if (ret)
+ return ret;
+
+diff --git a/drivers/rtc/rtc-snvs.c b/drivers/rtc/rtc-snvs.c
+index 7ee673a25fd0..4f9a107a0427 100644
+--- a/drivers/rtc/rtc-snvs.c
++++ b/drivers/rtc/rtc-snvs.c
+@@ -279,6 +279,10 @@ static int snvs_rtc_probe(struct platform_device *pdev)
+ if (!data)
+ return -ENOMEM;
+
++ data->rtc = devm_rtc_allocate_device(&pdev->dev);
++ if (IS_ERR(data->rtc))
++ return PTR_ERR(data->rtc);
++
+ data->regmap = syscon_regmap_lookup_by_phandle(pdev->dev.of_node, "regmap");
+
+ if (IS_ERR(data->regmap)) {
+@@ -343,10 +347,9 @@ static int snvs_rtc_probe(struct platform_device *pdev)
+ goto error_rtc_device_register;
+ }
+
+- data->rtc = devm_rtc_device_register(&pdev->dev, pdev->name,
+- &snvs_rtc_ops, THIS_MODULE);
+- if (IS_ERR(data->rtc)) {
+- ret = PTR_ERR(data->rtc);
++ data->rtc->ops = &snvs_rtc_ops;
++ ret = rtc_register_device(data->rtc);
++ if (ret) {
+ dev_err(&pdev->dev, "failed to register rtc: %d\n", ret);
+ goto error_rtc_device_register;
+ }
+diff --git a/drivers/scsi/scsi_logging.c b/drivers/scsi/scsi_logging.c
+index 39b8cc4574b4..c6ed0b12e807 100644
+--- a/drivers/scsi/scsi_logging.c
++++ b/drivers/scsi/scsi_logging.c
+@@ -15,57 +15,15 @@
+ #include <scsi/scsi_eh.h>
+ #include <scsi/scsi_dbg.h>
+
+-#define SCSI_LOG_SPOOLSIZE 4096
+-
+-#if (SCSI_LOG_SPOOLSIZE / SCSI_LOG_BUFSIZE) > BITS_PER_LONG
+-#warning SCSI logging bitmask too large
+-#endif
+-
+-struct scsi_log_buf {
+- char buffer[SCSI_LOG_SPOOLSIZE];
+- unsigned long map;
+-};
+-
+-static DEFINE_PER_CPU(struct scsi_log_buf, scsi_format_log);
+-
+ static char *scsi_log_reserve_buffer(size_t *len)
+ {
+- struct scsi_log_buf *buf;
+- unsigned long map_bits = sizeof(buf->buffer) / SCSI_LOG_BUFSIZE;
+- unsigned long idx = 0;
+-
+- preempt_disable();
+- buf = this_cpu_ptr(&scsi_format_log);
+- idx = find_first_zero_bit(&buf->map, map_bits);
+- if (likely(idx < map_bits)) {
+- while (test_and_set_bit(idx, &buf->map)) {
+- idx = find_next_zero_bit(&buf->map, map_bits, idx);
+- if (idx >= map_bits)
+- break;
+- }
+- }
+- if (WARN_ON(idx >= map_bits)) {
+- preempt_enable();
+- return NULL;
+- }
+- *len = SCSI_LOG_BUFSIZE;
+- return buf->buffer + idx * SCSI_LOG_BUFSIZE;
++ *len = 128;
++ return kmalloc(*len, GFP_ATOMIC);
+ }
+
+ static void scsi_log_release_buffer(char *bufptr)
+ {
+- struct scsi_log_buf *buf;
+- unsigned long idx;
+- int ret;
+-
+- buf = this_cpu_ptr(&scsi_format_log);
+- if (bufptr >= buf->buffer &&
+- bufptr < buf->buffer + SCSI_LOG_SPOOLSIZE) {
+- idx = (bufptr - buf->buffer) / SCSI_LOG_BUFSIZE;
+- ret = test_and_clear_bit(idx, &buf->map);
+- WARN_ON(!ret);
+- }
+- preempt_enable();
++ kfree(bufptr);
+ }
+
+ static inline const char *scmd_name(const struct scsi_cmnd *scmd)
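+
+NOTE: the scsi_logging rewrite above drops the per-CPU buffer pool and its
+bitmap bookkeeping in favour of plain atomic allocations, which may fail
+under memory pressure but cannot leak map bits and need no preemption
+control. Sketch:
+
+	static char *log_buf_get(size_t *len)
+	{
+		*len = 128;
+		return kmalloc(*len, GFP_ATOMIC);	/* safe in any context */
+	}
+
+	static void log_buf_put(char *buf)
+	{
+		kfree(buf);				/* kfree(NULL) is a no-op */
+	}
+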
+diff --git a/drivers/soundwire/intel.c b/drivers/soundwire/intel.c
+index 317873bc0555..ec25a71d0887 100644
+--- a/drivers/soundwire/intel.c
++++ b/drivers/soundwire/intel.c
+@@ -289,6 +289,16 @@ intel_pdi_get_ch_cap(struct sdw_intel *sdw, unsigned int pdi_num, bool pcm)
+
+ if (pcm) {
+ count = intel_readw(shim, SDW_SHIM_PCMSYCHC(link_id, pdi_num));
++
++ /*
++ * WORKAROUND: on all existing Intel controllers, pdi
++ * number 2 reports a channel count of 1 even though it
++ * supports 8 channels. Hardcode the channel count for
++ * pdi number 2.
++ */
++ if (pdi_num == 2)
++ count = 7;
++
+ } else {
+ count = intel_readw(shim, SDW_SHIM_PDMSCAP(link_id));
+ count = ((count & SDW_SHIM_PDMSCAP_CPSS) >>
+diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
+index 703948c9fbe1..02206162eaa9 100644
+--- a/drivers/vfio/pci/vfio_pci.c
++++ b/drivers/vfio/pci/vfio_pci.c
+@@ -438,11 +438,20 @@ static void vfio_pci_disable(struct vfio_pci_device *vdev)
+ pci_write_config_word(pdev, PCI_COMMAND, PCI_COMMAND_INTX_DISABLE);
+
+ /*
+- * Try to reset the device. The success of this is dependent on
+- * being able to lock the device, which is not always possible.
++ * Try to get the locks ourselves to prevent a deadlock. The
++ * success of this is dependent on being able to lock the device,
++ * which is not always possible.
++ * We cannot use the "try" reset interface here, as it would
++ * overwrite the previously restored configuration information.
+ */
+- if (vdev->reset_works && !pci_try_reset_function(pdev))
+- vdev->needs_reset = false;
++ if (vdev->reset_works && pci_cfg_access_trylock(pdev)) {
++ if (device_trylock(&pdev->dev)) {
++ if (!__pci_reset_function_locked(pdev))
++ vdev->needs_reset = false;
++ device_unlock(&pdev->dev);
++ }
++ pci_cfg_access_unlock(pdev);
++ }
+
+ pci_restore_state(pdev);
+ out:
+diff --git a/drivers/video/fbdev/ssd1307fb.c b/drivers/video/fbdev/ssd1307fb.c
+index b674948e3bb8..3f28e1b5d422 100644
+--- a/drivers/video/fbdev/ssd1307fb.c
++++ b/drivers/video/fbdev/ssd1307fb.c
+@@ -432,7 +432,7 @@ static int ssd1307fb_init(struct ssd1307fb_par *par)
+ if (ret < 0)
+ return ret;
+
+- ret = ssd1307fb_write_cmd(par->client, 0x0);
++ ret = ssd1307fb_write_cmd(par->client, par->page_offset);
+ if (ret < 0)
+ return ret;
+
+diff --git a/fs/9p/cache.c b/fs/9p/cache.c
+index 995e332eee5c..eb2151fb6049 100644
+--- a/fs/9p/cache.c
++++ b/fs/9p/cache.c
+@@ -51,6 +51,8 @@ void v9fs_cache_session_get_cookie(struct v9fs_session_info *v9ses)
+ if (!v9ses->cachetag) {
+ if (v9fs_random_cachetag(v9ses) < 0) {
+ v9ses->fscache = NULL;
++ kfree(v9ses->cachetag);
++ v9ses->cachetag = NULL;
+ return;
+ }
+ }
+diff --git a/fs/ext4/block_validity.c b/fs/ext4/block_validity.c
+index 8e83741b02e0..d4d4fdfac1a6 100644
+--- a/fs/ext4/block_validity.c
++++ b/fs/ext4/block_validity.c
+@@ -38,6 +38,7 @@ int __init ext4_init_system_zone(void)
+
+ void ext4_exit_system_zone(void)
+ {
++ rcu_barrier();
+ kmem_cache_destroy(ext4_system_zone_cachep);
+ }
+
+@@ -49,17 +50,26 @@ static inline int can_merge(struct ext4_system_zone *entry1,
+ return 0;
+ }
+
++static void release_system_zone(struct ext4_system_blocks *system_blks)
++{
++ struct ext4_system_zone *entry, *n;
++
++ rbtree_postorder_for_each_entry_safe(entry, n,
++ &system_blks->root, node)
++ kmem_cache_free(ext4_system_zone_cachep, entry);
++}
++
+ /*
+ * Mark a range of blocks as belonging to the "system zone" --- that
+ * is, filesystem metadata blocks which should never be used by
+ * inodes.
+ */
+-static int add_system_zone(struct ext4_sb_info *sbi,
++static int add_system_zone(struct ext4_system_blocks *system_blks,
+ ext4_fsblk_t start_blk,
+ unsigned int count)
+ {
+ struct ext4_system_zone *new_entry = NULL, *entry;
+- struct rb_node **n = &sbi->system_blks.rb_node, *node;
++ struct rb_node **n = &system_blks->root.rb_node, *node;
+ struct rb_node *parent = NULL, *new_node = NULL;
+
+ while (*n) {
+@@ -91,7 +101,7 @@ static int add_system_zone(struct ext4_sb_info *sbi,
+ new_node = &new_entry->node;
+
+ rb_link_node(new_node, parent, n);
+- rb_insert_color(new_node, &sbi->system_blks);
++ rb_insert_color(new_node, &system_blks->root);
+ }
+
+ /* Can we merge to the left? */
+@@ -101,7 +111,7 @@ static int add_system_zone(struct ext4_sb_info *sbi,
+ if (can_merge(entry, new_entry)) {
+ new_entry->start_blk = entry->start_blk;
+ new_entry->count += entry->count;
+- rb_erase(node, &sbi->system_blks);
++ rb_erase(node, &system_blks->root);
+ kmem_cache_free(ext4_system_zone_cachep, entry);
+ }
+ }
+@@ -112,7 +122,7 @@ static int add_system_zone(struct ext4_sb_info *sbi,
+ entry = rb_entry(node, struct ext4_system_zone, node);
+ if (can_merge(new_entry, entry)) {
+ new_entry->count += entry->count;
+- rb_erase(node, &sbi->system_blks);
++ rb_erase(node, &system_blks->root);
+ kmem_cache_free(ext4_system_zone_cachep, entry);
+ }
+ }
+@@ -126,7 +136,7 @@ static void debug_print_tree(struct ext4_sb_info *sbi)
+ int first = 1;
+
+ printk(KERN_INFO "System zones: ");
+- node = rb_first(&sbi->system_blks);
++ node = rb_first(&sbi->system_blks->root);
+ while (node) {
+ entry = rb_entry(node, struct ext4_system_zone, node);
+ printk(KERN_CONT "%s%llu-%llu", first ? "" : ", ",
+@@ -137,7 +147,47 @@ static void debug_print_tree(struct ext4_sb_info *sbi)
+ printk(KERN_CONT "\n");
+ }
+
+-static int ext4_protect_reserved_inode(struct super_block *sb, u32 ino)
++/*
++ * Returns 1 if the passed-in block region (start_blk,
++ * start_blk+count) is valid; 0 if some part of the block region
++ * overlaps with filesystem metadata blocks.
++ */
++static int ext4_data_block_valid_rcu(struct ext4_sb_info *sbi,
++ struct ext4_system_blocks *system_blks,
++ ext4_fsblk_t start_blk,
++ unsigned int count)
++{
++ struct ext4_system_zone *entry;
++ struct rb_node *n;
++
++ if ((start_blk <= le32_to_cpu(sbi->s_es->s_first_data_block)) ||
++ (start_blk + count < start_blk) ||
++ (start_blk + count > ext4_blocks_count(sbi->s_es))) {
++ sbi->s_es->s_last_error_block = cpu_to_le64(start_blk);
++ return 0;
++ }
++
++ if (system_blks == NULL)
++ return 1;
++
++ n = system_blks->root.rb_node;
++ while (n) {
++ entry = rb_entry(n, struct ext4_system_zone, node);
++ if (start_blk + count - 1 < entry->start_blk)
++ n = n->rb_left;
++ else if (start_blk >= (entry->start_blk + entry->count))
++ n = n->rb_right;
++ else {
++ sbi->s_es->s_last_error_block = cpu_to_le64(start_blk);
++ return 0;
++ }
++ }
++ return 1;
++}
++
++static int ext4_protect_reserved_inode(struct super_block *sb,
++ struct ext4_system_blocks *system_blks,
++ u32 ino)
+ {
+ struct inode *inode;
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
+@@ -163,14 +213,15 @@ static int ext4_protect_reserved_inode(struct super_block *sb, u32 ino)
+ if (n == 0) {
+ i++;
+ } else {
+- if (!ext4_data_block_valid(sbi, map.m_pblk, n)) {
++ if (!ext4_data_block_valid_rcu(sbi, system_blks,
++ map.m_pblk, n)) {
+ ext4_error(sb, "blocks %llu-%llu from inode %u "
+ "overlap system zone", map.m_pblk,
+ map.m_pblk + map.m_len - 1, ino);
+ err = -EFSCORRUPTED;
+ break;
+ }
+- err = add_system_zone(sbi, map.m_pblk, n);
++ err = add_system_zone(system_blks, map.m_pblk, n);
+ if (err < 0)
+ break;
+ i += n;
+@@ -180,94 +231,130 @@ static int ext4_protect_reserved_inode(struct super_block *sb, u32 ino)
+ return err;
+ }
+
++static void ext4_destroy_system_zone(struct rcu_head *rcu)
++{
++ struct ext4_system_blocks *system_blks;
++
++ system_blks = container_of(rcu, struct ext4_system_blocks, rcu);
++ release_system_zone(system_blks);
++ kfree(system_blks);
++}
++
++/*
++ * Build system zone rbtree which is used for block validity checking.
++ *
++ * The update of system_blks pointer in this function is protected by
++ * sb->s_umount semaphore. However we have to be careful as we can be
++ * racing with ext4_data_block_valid() calls reading system_blks rbtree
++ * protected only by RCU. That's why we first build the rbtree and then
++ * swap it in place.
++ */
+ int ext4_setup_system_zone(struct super_block *sb)
+ {
+ ext4_group_t ngroups = ext4_get_groups_count(sb);
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ struct ext4_system_blocks *system_blks;
+ struct ext4_group_desc *gdp;
+ ext4_group_t i;
+ int flex_size = ext4_flex_bg_size(sbi);
+ int ret;
+
+ if (!test_opt(sb, BLOCK_VALIDITY)) {
+- if (sbi->system_blks.rb_node)
++ if (sbi->system_blks)
+ ext4_release_system_zone(sb);
+ return 0;
+ }
+- if (sbi->system_blks.rb_node)
++ if (sbi->system_blks)
+ return 0;
+
++ system_blks = kzalloc(sizeof(*system_blks), GFP_KERNEL);
++ if (!system_blks)
++ return -ENOMEM;
++
+ for (i=0; i < ngroups; i++) {
+ cond_resched();
+ if (ext4_bg_has_super(sb, i) &&
+ ((i < 5) || ((i % flex_size) == 0)))
+- add_system_zone(sbi, ext4_group_first_block_no(sb, i),
++ add_system_zone(system_blks,
++ ext4_group_first_block_no(sb, i),
+ ext4_bg_num_gdb(sb, i) + 1);
+ gdp = ext4_get_group_desc(sb, i, NULL);
+- ret = add_system_zone(sbi, ext4_block_bitmap(sb, gdp), 1);
++ ret = add_system_zone(system_blks,
++ ext4_block_bitmap(sb, gdp), 1);
+ if (ret)
+- return ret;
+- ret = add_system_zone(sbi, ext4_inode_bitmap(sb, gdp), 1);
++ goto err;
++ ret = add_system_zone(system_blks,
++ ext4_inode_bitmap(sb, gdp), 1);
+ if (ret)
+- return ret;
+- ret = add_system_zone(sbi, ext4_inode_table(sb, gdp),
++ goto err;
++ ret = add_system_zone(system_blks,
++ ext4_inode_table(sb, gdp),
+ sbi->s_itb_per_group);
+ if (ret)
+- return ret;
++ goto err;
+ }
+ if (ext4_has_feature_journal(sb) && sbi->s_es->s_journal_inum) {
+- ret = ext4_protect_reserved_inode(sb,
++ ret = ext4_protect_reserved_inode(sb, system_blks,
+ le32_to_cpu(sbi->s_es->s_journal_inum));
+ if (ret)
+- return ret;
++ goto err;
+ }
+
++ /*
++ * The system blks rbtree is now complete, so publish it in a
++ * single step; ext4_data_block_valid() readers must never see
++ * a partially built tree.
++ */
++ rcu_assign_pointer(sbi->system_blks, system_blks);
++
+ if (test_opt(sb, DEBUG))
+ debug_print_tree(sbi);
+ return 0;
++err:
++ release_system_zone(system_blks);
++ kfree(system_blks);
++ return ret;
+ }
+
+-/* Called when the filesystem is unmounted */
++/*
++ * Called when the filesystem is unmounted or when remounting it with
++ * noblock_validity specified.
++ *
++ * The update of system_blks pointer in this function is protected by
++ * sb->s_umount semaphore. However we have to be careful as we can be
++ * racing with ext4_data_block_valid() calls reading system_blks rbtree
++ * protected only by RCU. So we first clear the system_blks pointer and
++ * then free the rbtree only after RCU grace period expires.
++ */
+ void ext4_release_system_zone(struct super_block *sb)
+ {
+- struct ext4_system_zone *entry, *n;
++ struct ext4_system_blocks *system_blks;
+
+- rbtree_postorder_for_each_entry_safe(entry, n,
+- &EXT4_SB(sb)->system_blks, node)
+- kmem_cache_free(ext4_system_zone_cachep, entry);
++ system_blks = rcu_dereference_protected(EXT4_SB(sb)->system_blks,
++ lockdep_is_held(&sb->s_umount));
++ rcu_assign_pointer(EXT4_SB(sb)->system_blks, NULL);
+
+- EXT4_SB(sb)->system_blks = RB_ROOT;
++ if (system_blks)
++ call_rcu(&system_blks->rcu, ext4_destroy_system_zone);
+ }
+
+-/*
+- * Returns 1 if the passed-in block region (start_blk,
+- * start_blk+count) is valid; 0 if some part of the block region
+- * overlaps with filesystem metadata blocks.
+- */
+ int ext4_data_block_valid(struct ext4_sb_info *sbi, ext4_fsblk_t start_blk,
+ unsigned int count)
+ {
+- struct ext4_system_zone *entry;
+- struct rb_node *n = sbi->system_blks.rb_node;
++ struct ext4_system_blocks *system_blks;
++ int ret;
+
+- if ((start_blk <= le32_to_cpu(sbi->s_es->s_first_data_block)) ||
+- (start_blk + count < start_blk) ||
+- (start_blk + count > ext4_blocks_count(sbi->s_es))) {
+- sbi->s_es->s_last_error_block = cpu_to_le64(start_blk);
+- return 0;
+- }
+- while (n) {
+- entry = rb_entry(n, struct ext4_system_zone, node);
+- if (start_blk + count - 1 < entry->start_blk)
+- n = n->rb_left;
+- else if (start_blk >= (entry->start_blk + entry->count))
+- n = n->rb_right;
+- else {
+- sbi->s_es->s_last_error_block = cpu_to_le64(start_blk);
+- return 0;
+- }
+- }
+- return 1;
++ /*
++ * Lock the system zone to prevent it being released concurrently
++ * when a remount inverts the current "[no]block_validity"
++ * mount option.
++ */
++ rcu_read_lock();
++ system_blks = rcu_dereference(sbi->system_blks);
++ ret = ext4_data_block_valid_rcu(sbi, system_blks, start_blk,
++ count);
++ rcu_read_unlock();
++ return ret;
+ }
+
+ int ext4_check_blockref(const char *function, unsigned int line,
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index bf660aa7a9e0..c025efcbcf27 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -184,6 +184,14 @@ struct ext4_map_blocks {
+ unsigned int m_flags;
+ };
+
++/*
++ * Block validity checking, system zone rbtree.
++ */
++struct ext4_system_blocks {
++ struct rb_root root;
++ struct rcu_head rcu;
++};
++
+ /*
+ * Flags for ext4_io_end->flags
+ */
+@@ -1421,7 +1429,7 @@ struct ext4_sb_info {
+ int s_jquota_fmt; /* Format of quota to use */
+ #endif
+ unsigned int s_want_extra_isize; /* New inodes should reserve # bytes */
+- struct rb_root system_blks;
++ struct ext4_system_blocks __rcu *system_blks;
+
+ #ifdef EXTENTS_STATS
+ /* ext4 extents stats */
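+
+NOTE: the ext4 changes above wrap the system-zone rbtree in a structure
+carrying an rcu_head so the whole tree can be swapped atomically. A
+distilled sketch of the publish/retire protocol, with hypothetical
+build_tree()/destroy_cb() helpers:
+
+	/* writer, serialized by sb->s_umount */
+	new = kzalloc(sizeof(*new), GFP_KERNEL);
+	build_tree(&new->root);				/* build fully first... */
+	rcu_assign_pointer(sbi->system_blks, new);	/* ...then publish once */
+
+	old = rcu_dereference_protected(sbi->system_blks,
+					lockdep_is_held(&sb->s_umount));
+	rcu_assign_pointer(sbi->system_blks, NULL);
+	if (old)
+		call_rcu(&old->rcu, destroy_cb);	/* free after grace period */
+
+	/* reader */
+	rcu_read_lock();
+	blks = rcu_dereference(sbi->system_blks);
+	ret = check_range(blks, start, count);		/* must handle blks == NULL */
+	rcu_read_unlock();
+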
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 78a1b873e48a..aa3178f1b145 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -873,7 +873,21 @@ static struct inode *f2fs_alloc_inode(struct super_block *sb)
+
+ static int f2fs_drop_inode(struct inode *inode)
+ {
++ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ int ret;
++
++ /*
++ * during filesystem shutdown, if checkpoint is disabled,
++ * drop useless meta/node dirty pages.
++ */
++ if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) {
++ if (inode->i_ino == F2FS_NODE_INO(sbi) ||
++ inode->i_ino == F2FS_META_INO(sbi)) {
++ trace_f2fs_drop_inode(inode, 1);
++ return 1;
++ }
++ }
++
+ /*
+ * This is to avoid a deadlock condition like below.
+ * writeback_single_inode(inode)
+diff --git a/fs/fat/dir.c b/fs/fat/dir.c
+index 1bda2ab6745b..814ad2c2ba80 100644
+--- a/fs/fat/dir.c
++++ b/fs/fat/dir.c
+@@ -1100,8 +1100,11 @@ static int fat_zeroed_cluster(struct inode *dir, sector_t blknr, int nr_used,
+ err = -ENOMEM;
+ goto error;
+ }
++ /* Avoid race with userspace read via bdev */
++ lock_buffer(bhs[n]);
+ memset(bhs[n]->b_data, 0, sb->s_blocksize);
+ set_buffer_uptodate(bhs[n]);
++ unlock_buffer(bhs[n]);
+ mark_buffer_dirty_inode(bhs[n], dir);
+
+ n++;
+@@ -1158,6 +1161,8 @@ int fat_alloc_new_dir(struct inode *dir, struct timespec64 *ts)
+ fat_time_unix2fat(sbi, ts, &time, &date, &time_cs);
+
+ de = (struct msdos_dir_entry *)bhs[0]->b_data;
++ /* Avoid race with userspace read via bdev */
++ lock_buffer(bhs[0]);
+ /* filling the new directory slots ("." and ".." entries) */
+ memcpy(de[0].name, MSDOS_DOT, MSDOS_NAME);
+ memcpy(de[1].name, MSDOS_DOTDOT, MSDOS_NAME);
+@@ -1180,6 +1185,7 @@ int fat_alloc_new_dir(struct inode *dir, struct timespec64 *ts)
+ de[0].size = de[1].size = 0;
+ memset(de + 2, 0, sb->s_blocksize - 2 * sizeof(*de));
+ set_buffer_uptodate(bhs[0]);
++ unlock_buffer(bhs[0]);
+ mark_buffer_dirty_inode(bhs[0], dir);
+
+ err = fat_zeroed_cluster(dir, blknr, 1, bhs, MAX_BUF_PER_PAGE);
+@@ -1237,11 +1243,14 @@ static int fat_add_new_entries(struct inode *dir, void *slots, int nr_slots,
+
+ /* fill the directory entry */
+ copy = min(size, sb->s_blocksize);
++ /* Avoid race with userspace read via bdev */
++ lock_buffer(bhs[n]);
+ memcpy(bhs[n]->b_data, slots, copy);
+- slots += copy;
+- size -= copy;
+ set_buffer_uptodate(bhs[n]);
++ unlock_buffer(bhs[n]);
+ mark_buffer_dirty_inode(bhs[n], dir);
++ slots += copy;
++ size -= copy;
+ if (!size)
+ break;
+ n++;
+diff --git a/fs/fat/fatent.c b/fs/fat/fatent.c
+index 265983635f2b..3647c65a0f48 100644
+--- a/fs/fat/fatent.c
++++ b/fs/fat/fatent.c
+@@ -388,8 +388,11 @@ static int fat_mirror_bhs(struct super_block *sb, struct buffer_head **bhs,
+ err = -ENOMEM;
+ goto error;
+ }
++ /* Avoid race with userspace read via bdev */
++ lock_buffer(c_bh);
+ memcpy(c_bh->b_data, bhs[n]->b_data, sb->s_blocksize);
+ set_buffer_uptodate(c_bh);
++ unlock_buffer(c_bh);
+ mark_buffer_dirty_inode(c_bh, sbi->fat_inode);
+ if (sb->s_flags & SB_SYNCHRONOUS)
+ err = sync_dirty_buffer(c_bh);
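+
+NOTE: all three fat hunks above bracket b_data updates with
+lock_buffer()/unlock_buffer(). Userspace can read the same blocks through
+the block device node, and the buffer lock is what prevents those reads
+from observing a half-written directory entry or FAT copy. Sketch:
+
+	lock_buffer(bh);			/* exclude concurrent bdev readers */
+	memcpy(bh->b_data, src, sb->s_blocksize);
+	set_buffer_uptodate(bh);
+	unlock_buffer(bh);
+	mark_buffer_dirty_inode(bh, inode);	/* dirty only once consistent */
+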
+diff --git a/fs/fs_context.c b/fs/fs_context.c
+index 103643c68e3f..87c2c9687d90 100644
+--- a/fs/fs_context.c
++++ b/fs/fs_context.c
+@@ -279,10 +279,8 @@ static struct fs_context *alloc_fs_context(struct file_system_type *fs_type,
+ fc->user_ns = get_user_ns(reference->d_sb->s_user_ns);
+ break;
+ case FS_CONTEXT_FOR_RECONFIGURE:
+- /* We don't pin any namespaces as the superblock's
+- * subscriptions cannot be changed at this point.
+- */
+ atomic_inc(&reference->d_sb->s_active);
++ fc->user_ns = get_user_ns(reference->d_sb->s_user_ns);
+ fc->root = dget(reference);
+ break;
+ }
+diff --git a/fs/ocfs2/dlm/dlmunlock.c b/fs/ocfs2/dlm/dlmunlock.c
+index e78657742bd8..3883633e82eb 100644
+--- a/fs/ocfs2/dlm/dlmunlock.c
++++ b/fs/ocfs2/dlm/dlmunlock.c
+@@ -90,7 +90,8 @@ static enum dlm_status dlmunlock_common(struct dlm_ctxt *dlm,
+ enum dlm_status status;
+ int actions = 0;
+ int in_use;
+- u8 owner;
++ u8 owner;
++ int recovery_wait = 0;
+
+ mlog(0, "master_node = %d, valblk = %d\n", master_node,
+ flags & LKM_VALBLK);
+@@ -193,9 +194,12 @@ static enum dlm_status dlmunlock_common(struct dlm_ctxt *dlm,
+ }
+ if (flags & LKM_CANCEL)
+ lock->cancel_pending = 0;
+- else
+- lock->unlock_pending = 0;
+-
++ else {
++ if (!lock->unlock_pending)
++ recovery_wait = 1;
++ else
++ lock->unlock_pending = 0;
++ }
+ }
+
+ /* get an extra ref on lock. if we are just switching
+@@ -229,6 +233,17 @@ leave:
+ spin_unlock(&res->spinlock);
+ wake_up(&res->wq);
+
++ if (recovery_wait) {
++ spin_lock(&res->spinlock);
++ /* An unlock request succeeds directly after the owner dies, and
++ * the lock is already removed from the grant list. We must wait
++ * for RECOVERING to finish, or we miss the chance to purge the
++ * lock, since removal is much faster than the RECOVERING process.
++ */
++ __dlm_wait_on_lockres_flags(res, DLM_LOCK_RES_RECOVERING);
++ spin_unlock(&res->spinlock);
++ }
++
+ /* let the caller's final dlm_lock_put handle the actual kfree */
+ if (actions & DLM_UNLOCK_FREE_LOCK) {
+ /* this should always be coupled with list removal */
+diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c
+index 2bb3468fc93a..8caff834f002 100644
+--- a/fs/pstore/ram.c
++++ b/fs/pstore/ram.c
+@@ -144,6 +144,7 @@ static int ramoops_read_kmsg_hdr(char *buffer, struct timespec64 *time,
+ if (sscanf(buffer, RAMOOPS_KERNMSG_HDR "%lld.%lu-%c\n%n",
+ (time64_t *)&time->tv_sec, &time->tv_nsec, &data_type,
+ &header_length) == 3) {
++ time->tv_nsec *= 1000;
+ if (data_type == 'C')
+ *compressed = true;
+ else
+@@ -151,6 +152,7 @@ static int ramoops_read_kmsg_hdr(char *buffer, struct timespec64 *time,
+ } else if (sscanf(buffer, RAMOOPS_KERNMSG_HDR "%lld.%lu\n%n",
+ (time64_t *)&time->tv_sec, &time->tv_nsec,
+ &header_length) == 2) {
++ time->tv_nsec *= 1000;
+ *compressed = false;
+ } else {
+ time->tv_sec = 0;
+diff --git a/include/linux/dsa/sja1105.h b/include/linux/dsa/sja1105.h
+index 79435cfc20eb..897e799dbcb9 100644
+--- a/include/linux/dsa/sja1105.h
++++ b/include/linux/dsa/sja1105.h
+@@ -31,6 +31,8 @@
+ #define SJA1105_META_SMAC 0x222222222222ull
+ #define SJA1105_META_DMAC 0x0180C200000Eull
+
++#define SJA1105_HWTS_RX_EN 0
++
+ /* Global tagger data: each struct sja1105_port has a reference to
+ * the structure defined in struct sja1105_private.
+ */
+@@ -42,7 +44,7 @@ struct sja1105_tagger_data {
+ * from taggers running on multiple ports on SMP systems
+ */
+ spinlock_t meta_lock;
+- bool hwts_rx_en;
++ unsigned long state;
+ };
+
+ struct sja1105_skb_cb {
+diff --git a/include/linux/mailbox/mtk-cmdq-mailbox.h b/include/linux/mailbox/mtk-cmdq-mailbox.h
+index ccb73422c2fa..e6f54ef6698b 100644
+--- a/include/linux/mailbox/mtk-cmdq-mailbox.h
++++ b/include/linux/mailbox/mtk-cmdq-mailbox.h
+@@ -20,6 +20,9 @@
+ #define CMDQ_WFE_WAIT BIT(15)
+ #define CMDQ_WFE_WAIT_VALUE 0x1
+
++/** cmdq event maximum */
++#define CMDQ_MAX_EVENT 0x3ff
++
+ /*
+ * CMDQ_CODE_MASK:
+ * set write mask
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 0334ca97c584..fe4552e1c40b 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -1405,7 +1405,11 @@ extern void pagefault_out_of_memory(void);
+
+ extern void show_free_areas(unsigned int flags, nodemask_t *nodemask);
+
++#ifdef CONFIG_MMU
+ extern bool can_do_mlock(void);
++#else
++static inline bool can_do_mlock(void) { return false; }
++#endif
+ extern int user_shm_lock(size_t, struct user_struct *);
+ extern void user_shm_unlock(size_t, struct user_struct *);
+
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 82e4cd1b7ac3..ac8a6c4e1792 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -2435,4 +2435,7 @@ void pci_uevent_ers(struct pci_dev *pdev, enum pci_ers_result err_type);
+ #define pci_notice_ratelimited(pdev, fmt, arg...) \
+ dev_notice_ratelimited(&(pdev)->dev, fmt, ##arg)
+
++#define pci_info_ratelimited(pdev, fmt, arg...) \
++ dev_info_ratelimited(&(pdev)->dev, fmt, ##arg)
++
+ #endif /* LINUX_PCI_H */
+diff --git a/include/linux/soc/mediatek/mtk-cmdq.h b/include/linux/soc/mediatek/mtk-cmdq.h
+index 54ade13a9b15..4e8899972db4 100644
+--- a/include/linux/soc/mediatek/mtk-cmdq.h
++++ b/include/linux/soc/mediatek/mtk-cmdq.h
+@@ -13,9 +13,6 @@
+
+ #define CMDQ_NO_TIMEOUT 0xffffffffu
+
+-/** cmdq event maximum */
+-#define CMDQ_MAX_EVENT 0x3ff
+-
+ struct cmdq_pkt;
+
+ struct cmdq_client {
+diff --git a/include/scsi/scsi_dbg.h b/include/scsi/scsi_dbg.h
+index e03bd9d41fa8..7b196d234626 100644
+--- a/include/scsi/scsi_dbg.h
++++ b/include/scsi/scsi_dbg.h
+@@ -6,8 +6,6 @@ struct scsi_cmnd;
+ struct scsi_device;
+ struct scsi_sense_hdr;
+
+-#define SCSI_LOG_BUFSIZE 128
+-
+ extern void scsi_print_command(struct scsi_cmnd *);
+ extern size_t __scsi_format_command(char *, size_t,
+ const unsigned char *, size_t);
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index a13a62db3565..edc5c887a44c 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -1068,7 +1068,7 @@ TRACE_EVENT(rxrpc_recvmsg,
+ ),
+
+ TP_fast_assign(
+- __entry->call = call->debug_id;
++ __entry->call = call ? call->debug_id : 0;
+ __entry->why = why;
+ __entry->seq = seq;
+ __entry->offset = offset;
+diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
+index d5870723b8ad..15d70a90b50d 100644
+--- a/kernel/kexec_core.c
++++ b/kernel/kexec_core.c
+@@ -300,6 +300,8 @@ static struct page *kimage_alloc_pages(gfp_t gfp_mask, unsigned int order)
+ {
+ struct page *pages;
+
++ if (fatal_signal_pending(current))
++ return NULL;
+ pages = alloc_pages(gfp_mask & ~__GFP_ZERO, order);
+ if (pages) {
+ unsigned int count, i;
+diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
+index c4ce08f43bd6..ab4a4606d19b 100644
+--- a/kernel/livepatch/core.c
++++ b/kernel/livepatch/core.c
+@@ -1175,6 +1175,7 @@ err:
+ pr_warn("patch '%s' failed for module '%s', refusing to load module '%s'\n",
+ patch->mod->name, obj->mod->name, obj->mod->name);
+ mod->klp_alive = false;
++ obj->mod = NULL;
+ klp_cleanup_module_patches_limited(mod, patch);
+ mutex_unlock(&klp_mutex);
+
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 5960e2980a8a..4d39540011e2 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -596,7 +596,7 @@ config DEBUG_KMEMLEAK_EARLY_LOG_SIZE
+ int "Maximum kmemleak early log entries"
+ depends on DEBUG_KMEMLEAK
+ range 200 40000
+- default 400
++ default 16000
+ help
+ Kmemleak must track all the memory allocations to avoid
+ reporting false positives. Since memory may be allocated or
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 545fac19a711..3aa93af51d48 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -1700,8 +1700,6 @@ static void __sk_destruct(struct rcu_head *head)
+ sk_filter_uncharge(sk, filter);
+ RCU_INIT_POINTER(sk->sk_filter, NULL);
+ }
+- if (rcu_access_pointer(sk->sk_reuseport_cb))
+- reuseport_detach_sock(sk);
+
+ sock_disable_timestamp(sk, SK_FLAGS_TIMESTAMP);
+
+@@ -1728,7 +1726,14 @@ static void __sk_destruct(struct rcu_head *head)
+
+ void sk_destruct(struct sock *sk)
+ {
+- if (sock_flag(sk, SOCK_RCU_FREE))
++ bool use_call_rcu = sock_flag(sk, SOCK_RCU_FREE);
++
++ if (rcu_access_pointer(sk->sk_reuseport_cb)) {
++ reuseport_detach_sock(sk);
++ use_call_rcu = true;
++ }
++
++ if (use_call_rcu)
+ call_rcu(&sk->sk_rcu, __sk_destruct);
+ else
+ __sk_destruct(&sk->sk_rcu);
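+
+NOTE: sk_destruct() above forces the RCU-deferred free whenever a
+reuseport group was attached, since reuseport lookups may still
+dereference the socket until a grace period elapses. The decision,
+distilled:
+
+	bool use_call_rcu = sock_flag(sk, SOCK_RCU_FREE);
+
+	if (rcu_access_pointer(sk->sk_reuseport_cb)) {
+		reuseport_detach_sock(sk);
+		use_call_rcu = true;	/* readers may still hold the pointer */
+	}
+
+	if (use_call_rcu)
+		call_rcu(&sk->sk_rcu, __sk_destruct);
+	else
+		__sk_destruct(&sk->sk_rcu);
+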
+diff --git a/net/dsa/tag_sja1105.c b/net/dsa/tag_sja1105.c
+index 47ee88163a9d..27fe80d07460 100644
+--- a/net/dsa/tag_sja1105.c
++++ b/net/dsa/tag_sja1105.c
+@@ -155,7 +155,11 @@ static struct sk_buff
+ /* Step 1: A timestampable frame was received.
+ * Buffer it until we get its meta frame.
+ */
+- if (is_link_local && sp->data->hwts_rx_en) {
++ if (is_link_local) {
++ if (!test_bit(SJA1105_HWTS_RX_EN, &sp->data->state))
++ /* Do normal processing. */
++ return skb;
++
+ spin_lock(&sp->data->meta_lock);
+ /* Was this a link-local frame instead of the meta
+ * that we were expecting?
+@@ -186,6 +190,12 @@ static struct sk_buff
+ } else if (is_meta) {
+ struct sk_buff *stampable_skb;
+
++ /* Drop the meta frame if we're not in the right state
++ * to process it.
++ */
++ if (!test_bit(SJA1105_HWTS_RX_EN, &sp->data->state))
++ return NULL;
++
+ spin_lock(&sp->data->meta_lock);
+
+ stampable_skb = sp->data->stampable_skb;
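+
+NOTE: the tagger above replaces a plain bool with a bit in an unsigned
+long, so the flag can be read and written atomically by taggers running
+on several ports at once. Sketch:
+
+	set_bit(SJA1105_HWTS_RX_EN, &data->state);	/* atomic, lock-free */
+
+	if (!test_bit(SJA1105_HWTS_RX_EN, &data->state))
+		return skb;				/* timestamping off */
+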
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index a53a543fe055..52690bb3e40f 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -1446,6 +1446,7 @@ static void erspan_setup(struct net_device *dev)
+ struct ip_tunnel *t = netdev_priv(dev);
+
+ ether_setup(dev);
++ dev->max_mtu = 0;
+ dev->netdev_ops = &erspan_netdev_ops;
+ dev->priv_flags &= ~IFF_TX_SKB_SHARING;
+ dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 7dcce724c78b..14654876127e 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -916,16 +916,15 @@ void ip_rt_send_redirect(struct sk_buff *skb)
+ if (peer->rate_tokens == 0 ||
+ time_after(jiffies,
+ (peer->rate_last +
+- (ip_rt_redirect_load << peer->rate_tokens)))) {
++ (ip_rt_redirect_load << peer->n_redirects)))) {
+ __be32 gw = rt_nexthop(rt, ip_hdr(skb)->daddr);
+
+ icmp_send(skb, ICMP_REDIRECT, ICMP_REDIR_HOST, gw);
+ peer->rate_last = jiffies;
+- ++peer->rate_tokens;
+ ++peer->n_redirects;
+ #ifdef CONFIG_IP_ROUTE_VERBOSE
+ if (log_martians &&
+- peer->rate_tokens == ip_rt_redirect_number)
++ peer->n_redirects == ip_rt_redirect_number)
+ net_warn_ratelimited("host %pI4/if%d ignores redirects for %pI4 to %pI4\n",
+ &ip_hdr(skb)->saddr, inet_iif(skb),
+ &ip_hdr(skb)->daddr, &gw);
+diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
+index 3e8b38c73d8c..483323332d74 100644
+--- a/net/ipv4/tcp_timer.c
++++ b/net/ipv4/tcp_timer.c
+@@ -198,8 +198,13 @@ static bool retransmits_timed_out(struct sock *sk,
+ return false;
+
+ start_ts = tcp_sk(sk)->retrans_stamp;
+- if (likely(timeout == 0))
+- timeout = tcp_model_timeout(sk, boundary, TCP_RTO_MIN);
++ if (likely(timeout == 0)) {
++ unsigned int rto_base = TCP_RTO_MIN;
++
++ if ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV))
++ rto_base = tcp_timeout_init(sk);
++ timeout = tcp_model_timeout(sk, boundary, rto_base);
++ }
+
+ return (s32)(tcp_time_stamp(tcp_sk(sk)) - start_ts - timeout) >= 0;
+ }
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 16486c8b708b..5e5d0575a43c 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -821,6 +821,7 @@ static int udp_send_skb(struct sk_buff *skb, struct flowi4 *fl4,
+ int is_udplite = IS_UDPLITE(sk);
+ int offset = skb_transport_offset(skb);
+ int len = skb->len - offset;
++ int datalen = len - sizeof(*uh);
+ __wsum csum = 0;
+
+ /*
+@@ -854,10 +855,12 @@ static int udp_send_skb(struct sk_buff *skb, struct flowi4 *fl4,
+ return -EIO;
+ }
+
+- skb_shinfo(skb)->gso_size = cork->gso_size;
+- skb_shinfo(skb)->gso_type = SKB_GSO_UDP_L4;
+- skb_shinfo(skb)->gso_segs = DIV_ROUND_UP(len - sizeof(uh),
+- cork->gso_size);
++ if (datalen > cork->gso_size) {
++ skb_shinfo(skb)->gso_size = cork->gso_size;
++ skb_shinfo(skb)->gso_type = SKB_GSO_UDP_L4;
++ skb_shinfo(skb)->gso_segs = DIV_ROUND_UP(datalen,
++ cork->gso_size);
++ }
+ goto csum_partial;
+ }
+
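+
+NOTE: the UDP GSO hunk above (mirrored for IPv6 later in this patch) sets
+GSO metadata only when the payload actually spans more than one segment;
+single-segment sends now go out as ordinary packets. Distilled:
+
+	int datalen = len - sizeof(*uh);	/* payload, UDP header excluded */
+
+	if (datalen > cork->gso_size) {
+		skb_shinfo(skb)->gso_size = cork->gso_size;
+		skb_shinfo(skb)->gso_type = SKB_GSO_UDP_L4;
+		skb_shinfo(skb)->gso_segs =
+			DIV_ROUND_UP(datalen, cork->gso_size);
+	}
+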
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 6a576ff92c39..34ccef18b40e 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -5964,13 +5964,20 @@ static void __ipv6_ifa_notify(int event, struct inet6_ifaddr *ifp)
+ switch (event) {
+ case RTM_NEWADDR:
+ /*
+- * If the address was optimistic
+- * we inserted the route at the start of
+- * our DAD process, so we don't need
+- * to do it again
++ * If the address was optimistic we inserted the route at the
++ * start of our DAD process, so we don't need to do it again.
++ * If the device was taken down in the middle of the DAD
++ * cycle there is a race where we could get here without a
++ * host route, so nothing to insert. That will be fixed when
++ * the device is brought up.
+ */
+- if (!rcu_access_pointer(ifp->rt->fib6_node))
++ if (ifp->rt && !rcu_access_pointer(ifp->rt->fib6_node)) {
+ ip6_ins_rt(net, ifp->rt);
++ } else if (!ifp->rt && (ifp->idev->dev->flags & IFF_UP)) {
++ pr_warn("BUG: Address %pI6c on device %s is missing its host route.\n",
++ &ifp->addr, ifp->idev->dev->name);
++ }
++
+ if (ifp->idev->cnf.forwarding)
+ addrconf_join_anycast(ifp);
+ if (!ipv6_addr_any(&ifp->peer_addr))
+diff --git a/net/ipv6/ip6_input.c b/net/ipv6/ip6_input.c
+index fa014d5f1732..a593aaf25748 100644
+--- a/net/ipv6/ip6_input.c
++++ b/net/ipv6/ip6_input.c
+@@ -221,6 +221,16 @@ static struct sk_buff *ip6_rcv_core(struct sk_buff *skb, struct net_device *dev,
+ if (ipv6_addr_is_multicast(&hdr->saddr))
+ goto err;
+
++ /* While RFC4291 is not explicit about v4mapped addresses
++ * in IPv6 headers, it seems clear the Linux dual-stack
++ * model cannot deal properly with these.
++ * Security models could be fooled by ::ffff:127.0.0.1 for example.
++ *
++ * https://tools.ietf.org/html/draft-itojun-v6ops-v4mapped-harmful-02
++ */
++ if (ipv6_addr_v4mapped(&hdr->saddr))
++ goto err;
++
+ skb->transport_header = skb->network_header + sizeof(*hdr);
+ IP6CB(skb)->nhoff = offsetof(struct ipv6hdr, nexthdr);
+
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 5995fdc99d3f..0454a8a3b39c 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1109,6 +1109,7 @@ static int udp_v6_send_skb(struct sk_buff *skb, struct flowi6 *fl6,
+ __wsum csum = 0;
+ int offset = skb_transport_offset(skb);
+ int len = skb->len - offset;
++ int datalen = len - sizeof(*uh);
+
+ /*
+ * Create a UDP header
+@@ -1141,8 +1142,12 @@ static int udp_v6_send_skb(struct sk_buff *skb, struct flowi6 *fl6,
+ return -EIO;
+ }
+
+- skb_shinfo(skb)->gso_size = cork->gso_size;
+- skb_shinfo(skb)->gso_type = SKB_GSO_UDP_L4;
++ if (datalen > cork->gso_size) {
++ skb_shinfo(skb)->gso_size = cork->gso_size;
++ skb_shinfo(skb)->gso_type = SKB_GSO_UDP_L4;
++ skb_shinfo(skb)->gso_segs = DIV_ROUND_UP(datalen,
++ cork->gso_size);
++ }
+ goto csum_partial;
+ }
+
+diff --git a/net/nfc/llcp_sock.c b/net/nfc/llcp_sock.c
+index 8dfea26536c9..ccdd790e163a 100644
+--- a/net/nfc/llcp_sock.c
++++ b/net/nfc/llcp_sock.c
+@@ -107,9 +107,14 @@ static int llcp_sock_bind(struct socket *sock, struct sockaddr *addr, int alen)
+ llcp_sock->service_name = kmemdup(llcp_addr.service_name,
+ llcp_sock->service_name_len,
+ GFP_KERNEL);
+-
++ if (!llcp_sock->service_name) {
++ ret = -ENOMEM;
++ goto put_dev;
++ }
+ llcp_sock->ssap = nfc_llcp_get_sdp_ssap(local, llcp_sock);
+ if (llcp_sock->ssap == LLCP_SAP_MAX) {
++ kfree(llcp_sock->service_name);
++ llcp_sock->service_name = NULL;
+ ret = -EADDRINUSE;
+ goto put_dev;
+ }
+diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c
+index ea64c90b14e8..17e6ca62f1be 100644
+--- a/net/nfc/netlink.c
++++ b/net/nfc/netlink.c
+@@ -970,7 +970,8 @@ static int nfc_genl_dep_link_down(struct sk_buff *skb, struct genl_info *info)
+ int rc;
+ u32 idx;
+
+- if (!info->attrs[NFC_ATTR_DEVICE_INDEX])
++ if (!info->attrs[NFC_ATTR_DEVICE_INDEX] ||
++ !info->attrs[NFC_ATTR_TARGET_INDEX])
+ return -EINVAL;
+
+ idx = nla_get_u32(info->attrs[NFC_ATTR_DEVICE_INDEX]);
+@@ -1018,7 +1019,8 @@ static int nfc_genl_llc_get_params(struct sk_buff *skb, struct genl_info *info)
+ struct sk_buff *msg = NULL;
+ u32 idx;
+
+- if (!info->attrs[NFC_ATTR_DEVICE_INDEX])
++ if (!info->attrs[NFC_ATTR_DEVICE_INDEX] ||
++ !info->attrs[NFC_ATTR_FIRMWARE_NAME])
+ return -EINVAL;
+
+ idx = nla_get_u32(info->attrs[NFC_ATTR_DEVICE_INDEX]);
+diff --git a/net/rds/ib.c b/net/rds/ib.c
+index 45acab2de0cf..9de2ae22d583 100644
+--- a/net/rds/ib.c
++++ b/net/rds/ib.c
+@@ -143,6 +143,9 @@ static void rds_ib_add_one(struct ib_device *device)
+ refcount_set(&rds_ibdev->refcount, 1);
+ INIT_WORK(&rds_ibdev->free_work, rds_ib_dev_free);
+
++ INIT_LIST_HEAD(&rds_ibdev->ipaddr_list);
++ INIT_LIST_HEAD(&rds_ibdev->conn_list);
++
+ rds_ibdev->max_wrs = device->attrs.max_qp_wr;
+ rds_ibdev->max_sge = min(device->attrs.max_send_sge, RDS_IB_MAX_SGE);
+
+@@ -203,9 +206,6 @@ static void rds_ib_add_one(struct ib_device *device)
+ device->name,
+ rds_ibdev->use_fastreg ? "FRMR" : "FMR");
+
+- INIT_LIST_HEAD(&rds_ibdev->ipaddr_list);
+- INIT_LIST_HEAD(&rds_ibdev->conn_list);
+-
+ down_write(&rds_ib_devices_lock);
+ list_add_tail_rcu(&rds_ibdev->list, &rds_ib_devices);
+ up_write(&rds_ib_devices_lock);
+diff --git a/net/sched/sch_cbq.c b/net/sched/sch_cbq.c
+index 06c7a2da21bc..39b427dc7512 100644
+--- a/net/sched/sch_cbq.c
++++ b/net/sched/sch_cbq.c
+@@ -1127,6 +1127,33 @@ static const struct nla_policy cbq_policy[TCA_CBQ_MAX + 1] = {
+ [TCA_CBQ_POLICE] = { .len = sizeof(struct tc_cbq_police) },
+ };
+
++static int cbq_opt_parse(struct nlattr *tb[TCA_CBQ_MAX + 1],
++ struct nlattr *opt,
++ struct netlink_ext_ack *extack)
++{
++ int err;
++
++ if (!opt) {
++ NL_SET_ERR_MSG(extack, "CBQ options are required for this operation");
++ return -EINVAL;
++ }
++
++ err = nla_parse_nested_deprecated(tb, TCA_CBQ_MAX, opt,
++ cbq_policy, extack);
++ if (err < 0)
++ return err;
++
++ if (tb[TCA_CBQ_WRROPT]) {
++ const struct tc_cbq_wrropt *wrr = nla_data(tb[TCA_CBQ_WRROPT]);
++
++ if (wrr->priority > TC_CBQ_MAXPRIO) {
++ NL_SET_ERR_MSG(extack, "priority is bigger than TC_CBQ_MAXPRIO");
++ err = -EINVAL;
++ }
++ }
++ return err;
++}
++
+ static int cbq_init(struct Qdisc *sch, struct nlattr *opt,
+ struct netlink_ext_ack *extack)
+ {
+@@ -1139,13 +1166,7 @@ static int cbq_init(struct Qdisc *sch, struct nlattr *opt,
+ hrtimer_init(&q->delay_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
+ q->delay_timer.function = cbq_undelay;
+
+- if (!opt) {
+- NL_SET_ERR_MSG(extack, "CBQ options are required for this operation");
+- return -EINVAL;
+- }
+-
+- err = nla_parse_nested_deprecated(tb, TCA_CBQ_MAX, opt, cbq_policy,
+- extack);
++ err = cbq_opt_parse(tb, opt, extack);
+ if (err < 0)
+ return err;
+
+@@ -1464,13 +1485,7 @@ cbq_change_class(struct Qdisc *sch, u32 classid, u32 parentid, struct nlattr **t
+ struct cbq_class *parent;
+ struct qdisc_rate_table *rtab = NULL;
+
+- if (!opt) {
+- NL_SET_ERR_MSG(extack, "Mandatory qdisc options missing");
+- return -EINVAL;
+- }
+-
+- err = nla_parse_nested_deprecated(tb, TCA_CBQ_MAX, opt, cbq_policy,
+- extack);
++ err = cbq_opt_parse(tb, opt, extack);
+ if (err < 0)
+ return err;
+
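With both cbq_init() and cbq_change_class() funnelled through cbq_opt_parse(), the attacker-controllable TCA_CBQ_WRROPT priority is now bounds-checked in exactly one place before either path can use it as an array index. A minimal standalone sketch of the centralized check (MAXPRIO stands in for the kernel's TC_CBQ_MAXPRIO):

#include <errno.h>
#include <stdio.h>

#define MAXPRIO 8	/* stands in for TC_CBQ_MAXPRIO */

/* Validate the untrusted priority once, up front, so neither the
 * init nor the change path can ever index past the class arrays. */
static int check_wrr_priority(unsigned int priority)
{
	if (priority > MAXPRIO) {
		fprintf(stderr, "priority is bigger than TC_CBQ_MAXPRIO\n");
		return -EINVAL;
	}
	return 0;
}
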
+diff --git a/net/sched/sch_cbs.c b/net/sched/sch_cbs.c
+index 4a403d35438f..284ab2dcf47f 100644
+--- a/net/sched/sch_cbs.c
++++ b/net/sched/sch_cbs.c
+@@ -306,7 +306,7 @@ static void cbs_set_port_rate(struct net_device *dev, struct cbs_sched_data *q)
+ if (err < 0)
+ goto skip;
+
+- if (ecmd.base.speed != SPEED_UNKNOWN)
++ if (ecmd.base.speed && ecmd.base.speed != SPEED_UNKNOWN)
+ speed = ecmd.base.speed;
+
+ skip:
+diff --git a/net/sched/sch_dsmark.c b/net/sched/sch_dsmark.c
+index bad1cbe59a56..05605b30bef3 100644
+--- a/net/sched/sch_dsmark.c
++++ b/net/sched/sch_dsmark.c
+@@ -361,6 +361,8 @@ static int dsmark_init(struct Qdisc *sch, struct nlattr *opt,
+ goto errout;
+
+ err = -EINVAL;
++ if (!tb[TCA_DSMARK_INDICES])
++ goto errout;
+ indices = nla_get_u16(tb[TCA_DSMARK_INDICES]);
+
+ if (hweight32(indices) != 1)
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index 8d8bc2ec5cd6..76bebe516194 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -961,12 +961,11 @@ static void taprio_set_picos_per_byte(struct net_device *dev,
+ if (err < 0)
+ goto skip;
+
+- if (ecmd.base.speed != SPEED_UNKNOWN)
++ if (ecmd.base.speed && ecmd.base.speed != SPEED_UNKNOWN)
+ speed = ecmd.base.speed;
+
+ skip:
+- picos_per_byte = div64_s64(NSEC_PER_SEC * 1000LL * 8,
+- speed * 1000 * 1000);
++ picos_per_byte = (USEC_PER_SEC * 8) / speed;
+
+ atomic64_set(&q->picos_per_byte, picos_per_byte);
+ netdev_dbg(dev, "taprio: set %s's picos_per_byte to: %lld, linkspeed: %d\n",
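The replacement formula is plain integer arithmetic once speed is known to be non-zero, which the added check guarantees: with speed in Mbit/s, one byte takes (10^6 * 8) / speed picoseconds, e.g. 8000 ps at 1000 Mbit/s. A standalone check of the numbers:

#include <stdio.h>
#include <stdint.h>

#define USEC_PER_SEC 1000000LL

int main(void)
{
	int64_t speed_mbps = 1000;	/* link speed reported in Mbit/s */

	/* speed Mbit/s is speed * 1e6 / 8 bytes per second, so one byte
	 * takes 8 / (speed * 1e6) seconds = (1e6 * 8) / speed picoseconds. */
	int64_t picos_per_byte = (USEC_PER_SEC * 8) / speed_mbps;

	printf("%lld ps per byte at %lld Mbit/s\n",
	       (long long)picos_per_byte, (long long)speed_mbps);
	return 0;
}
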
+diff --git a/net/tipc/link.c b/net/tipc/link.c
+index c2c5c53cad22..b0063d05599e 100644
+--- a/net/tipc/link.c
++++ b/net/tipc/link.c
+@@ -160,6 +160,7 @@ struct tipc_link {
+ struct {
+ u16 len;
+ u16 limit;
++ struct sk_buff *target_bskb;
+ } backlog[5];
+ u16 snd_nxt;
+ u16 window;
+@@ -866,6 +867,7 @@ static void link_prepare_wakeup(struct tipc_link *l)
+ void tipc_link_reset(struct tipc_link *l)
+ {
+ struct sk_buff_head list;
++ u32 imp;
+
+ __skb_queue_head_init(&list);
+
+@@ -887,11 +889,10 @@ void tipc_link_reset(struct tipc_link *l)
+ __skb_queue_purge(&l->deferdq);
+ __skb_queue_purge(&l->backlogq);
+ __skb_queue_purge(&l->failover_deferdq);
+- l->backlog[TIPC_LOW_IMPORTANCE].len = 0;
+- l->backlog[TIPC_MEDIUM_IMPORTANCE].len = 0;
+- l->backlog[TIPC_HIGH_IMPORTANCE].len = 0;
+- l->backlog[TIPC_CRITICAL_IMPORTANCE].len = 0;
+- l->backlog[TIPC_SYSTEM_IMPORTANCE].len = 0;
++ for (imp = 0; imp <= TIPC_SYSTEM_IMPORTANCE; imp++) {
++ l->backlog[imp].len = 0;
++ l->backlog[imp].target_bskb = NULL;
++ }
+ kfree_skb(l->reasm_buf);
+ kfree_skb(l->failover_reasm_skb);
+ l->reasm_buf = NULL;
+@@ -931,7 +932,7 @@ int tipc_link_xmit(struct tipc_link *l, struct sk_buff_head *list,
+ u16 bc_ack = l->bc_rcvlink->rcv_nxt - 1;
+ struct sk_buff_head *transmq = &l->transmq;
+ struct sk_buff_head *backlogq = &l->backlogq;
+- struct sk_buff *skb, *_skb, *bskb;
++ struct sk_buff *skb, *_skb, **tskb;
+ int pkt_cnt = skb_queue_len(list);
+ int rc = 0;
+
+@@ -980,19 +981,21 @@ int tipc_link_xmit(struct tipc_link *l, struct sk_buff_head *list,
+ seqno++;
+ continue;
+ }
+- if (tipc_msg_bundle(skb_peek_tail(backlogq), hdr, mtu)) {
++ tskb = &l->backlog[imp].target_bskb;
++ if (tipc_msg_bundle(*tskb, hdr, mtu)) {
+ kfree_skb(__skb_dequeue(list));
+ l->stats.sent_bundled++;
+ continue;
+ }
+- if (tipc_msg_make_bundle(&bskb, hdr, mtu, l->addr)) {
++ if (tipc_msg_make_bundle(tskb, hdr, mtu, l->addr)) {
+ kfree_skb(__skb_dequeue(list));
+- __skb_queue_tail(backlogq, bskb);
+- l->backlog[msg_importance(buf_msg(bskb))].len++;
++ __skb_queue_tail(backlogq, *tskb);
++ l->backlog[imp].len++;
+ l->stats.sent_bundled++;
+ l->stats.sent_bundles++;
+ continue;
+ }
++ l->backlog[imp].target_bskb = NULL;
+ l->backlog[imp].len += skb_queue_len(list);
+ skb_queue_splice_tail_init(list, backlogq);
+ }
+@@ -1008,6 +1011,7 @@ static void tipc_link_advance_backlog(struct tipc_link *l,
+ u16 seqno = l->snd_nxt;
+ u16 ack = l->rcv_nxt - 1;
+ u16 bc_ack = l->bc_rcvlink->rcv_nxt - 1;
++ u32 imp;
+
+ while (skb_queue_len(&l->transmq) < l->window) {
+ skb = skb_peek(&l->backlogq);
+@@ -1018,7 +1022,10 @@ static void tipc_link_advance_backlog(struct tipc_link *l,
+ break;
+ __skb_dequeue(&l->backlogq);
+ hdr = buf_msg(skb);
+- l->backlog[msg_importance(hdr)].len--;
++ imp = msg_importance(hdr);
++ l->backlog[imp].len--;
++ if (unlikely(skb == l->backlog[imp].target_bskb))
++ l->backlog[imp].target_bskb = NULL;
+ __skb_queue_tail(&l->transmq, skb);
+ /* next retransmit attempt */
+ if (link_is_bc_sndlink(l))
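The tipc_link changes cache, per importance level, the one bundle skb still open for appending (target_bskb), and clear that pointer the moment the skb leaves the backlog queue, so later messages can never be bundled into a buffer that is already in flight. A toy mock of the bookkeeping (the mock_ names are mine, not TIPC's):

#include <stddef.h>

struct mock_backlog {
	unsigned short len;
	unsigned short limit;
	void *target_bskb;	/* bundle still open for appending, or NULL */
};

/* Called when a buffer moves from the backlog to the transmit queue:
 * shrink the count and, if it was the open bundle, stop targeting it. */
static void mock_advance(struct mock_backlog *b, void *skb)
{
	b->len--;
	if (skb == b->target_bskb)
		b->target_bskb = NULL;
}
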
+diff --git a/net/tipc/msg.c b/net/tipc/msg.c
+index f48e5857210f..b956ce4a40ef 100644
+--- a/net/tipc/msg.c
++++ b/net/tipc/msg.c
+@@ -484,10 +484,7 @@ bool tipc_msg_make_bundle(struct sk_buff **skb, struct tipc_msg *msg,
+ bmsg = buf_msg(_skb);
+ tipc_msg_init(msg_prevnode(msg), bmsg, MSG_BUNDLER, 0,
+ INT_H_SIZE, dnode);
+- if (msg_isdata(msg))
+- msg_set_importance(bmsg, TIPC_CRITICAL_IMPORTANCE);
+- else
+- msg_set_importance(bmsg, TIPC_SYSTEM_IMPORTANCE);
++ msg_set_importance(bmsg, msg_importance(msg));
+ msg_set_seqno(bmsg, msg_seqno(msg));
+ msg_set_ack(bmsg, msg_ack(msg));
+ msg_set_bcast_ack(bmsg, msg_bcast_ack(msg));
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index ab47bf3ab66e..2ab43b2bba31 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -638,7 +638,7 @@ struct sock *__vsock_create(struct net *net,
+ }
+ EXPORT_SYMBOL_GPL(__vsock_create);
+
+-static void __vsock_release(struct sock *sk)
++static void __vsock_release(struct sock *sk, int level)
+ {
+ if (sk) {
+ struct sk_buff *skb;
+@@ -648,9 +648,17 @@ static void __vsock_release(struct sock *sk)
+ vsk = vsock_sk(sk);
+ pending = NULL; /* Compiler warning. */
+
++ /* The release call is supposed to use lock_sock_nested()
++ * rather than lock_sock(), if a sock lock should be acquired.
++ */
+ transport->release(vsk);
+
+- lock_sock(sk);
++ /* When "level" is SINGLE_DEPTH_NESTING, use the nested
++ * version to avoid the warning "possible recursive locking
++ * detected". When "level" is 0, lock_sock_nested(sk, level)
++ * is the same as lock_sock(sk).
++ */
++ lock_sock_nested(sk, level);
+ sock_orphan(sk);
+ sk->sk_shutdown = SHUTDOWN_MASK;
+
+@@ -659,7 +667,7 @@ static void __vsock_release(struct sock *sk)
+
+ /* Clean up any sockets that never were accepted. */
+ while ((pending = vsock_dequeue_accept(sk)) != NULL) {
+- __vsock_release(pending);
++ __vsock_release(pending, SINGLE_DEPTH_NESTING);
+ sock_put(pending);
+ }
+
+@@ -708,7 +716,7 @@ EXPORT_SYMBOL_GPL(vsock_stream_has_space);
+
+ static int vsock_release(struct socket *sock)
+ {
+- __vsock_release(sock->sk);
++ __vsock_release(sock->sk, 0);
+ sock->sk = NULL;
+ sock->state = SS_FREE;
+
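The level parameter threaded through __vsock_release() exists because a listener's lock is still held while each never-accepted child socket is locked and torn down; parent and child share a lockdep class, so the inner acquisition has to be annotated as intentional. A kernel-style fragment of the shape (dequeue_accept() is a stand-in for vsock_dequeue_accept(); this is an illustration, not the patch itself):

static void release_with_children(struct sock *parent)
{
	struct sock *child;

	lock_sock(parent);	/* same as lock_sock_nested(parent, 0) */
	while ((child = dequeue_accept(parent)) != NULL) {
		/* depth 1: tells lockdep this recursion is deliberate */
		lock_sock_nested(child, SINGLE_DEPTH_NESTING);
		/* ... orphan and shut down the child ... */
		release_sock(child);
		sock_put(child);
	}
	release_sock(parent);
}
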
+diff --git a/net/vmw_vsock/hyperv_transport.c b/net/vmw_vsock/hyperv_transport.c
+index 9d864ebeb7b3..4b126b21b453 100644
+--- a/net/vmw_vsock/hyperv_transport.c
++++ b/net/vmw_vsock/hyperv_transport.c
+@@ -559,7 +559,7 @@ static void hvs_release(struct vsock_sock *vsk)
+ struct sock *sk = sk_vsock(vsk);
+ bool remove_sock;
+
+- lock_sock(sk);
++ lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
+ remove_sock = hvs_close_lock_held(vsk);
+ release_sock(sk);
+ if (remove_sock)
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index 6f1a8aff65c5..a7adffd062c7 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -790,7 +790,7 @@ void virtio_transport_release(struct vsock_sock *vsk)
+ struct sock *sk = &vsk->sk;
+ bool remove_sock = true;
+
+- lock_sock(sk);
++ lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
+ if (sk->sk_type == SOCK_STREAM)
+ remove_sock = virtio_transport_close(vsk);
+
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 74dd46de01b6..e75517464786 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -3403,7 +3403,7 @@ static int selinux_inode_copy_up_xattr(const char *name)
+ static int selinux_kernfs_init_security(struct kernfs_node *kn_dir,
+ struct kernfs_node *kn)
+ {
+- const struct task_security_struct *tsec = current_security();
++ const struct task_security_struct *tsec = selinux_cred(current_cred());
+ u32 parent_sid, newsid, clen;
+ int rc;
+ char *context;
+diff --git a/security/selinux/include/objsec.h b/security/selinux/include/objsec.h
+index 91c5395dd20c..586b7abd0aa7 100644
+--- a/security/selinux/include/objsec.h
++++ b/security/selinux/include/objsec.h
+@@ -37,16 +37,6 @@ struct task_security_struct {
+ u32 sockcreate_sid; /* fscreate SID */
+ };
+
+-/*
+- * get the subjective security ID of the current task
+- */
+-static inline u32 current_sid(void)
+-{
+- const struct task_security_struct *tsec = current_security();
+-
+- return tsec->sid;
+-}
+-
+ enum label_initialized {
+ LABEL_INVALID, /* invalid or not initialized */
+ LABEL_INITIALIZED, /* initialized */
+@@ -185,4 +175,14 @@ static inline struct ipc_security_struct *selinux_ipc(
+ return ipc->security + selinux_blob_sizes.lbs_ipc;
+ }
+
++/*
++ * get the subjective security ID of the current task
++ */
++static inline u32 current_sid(void)
++{
++ const struct task_security_struct *tsec = selinux_cred(current_cred());
++
++ return tsec->sid;
++}
++
+ #endif /* _SELINUX_OBJSEC_H_ */
+diff --git a/security/smack/smack_access.c b/security/smack/smack_access.c
+index f1c93a7be9ec..38ac3da4e791 100644
+--- a/security/smack/smack_access.c
++++ b/security/smack/smack_access.c
+@@ -465,7 +465,7 @@ char *smk_parse_smack(const char *string, int len)
+ if (i == 0 || i >= SMK_LONGLABEL)
+ return ERR_PTR(-EINVAL);
+
+- smack = kzalloc(i + 1, GFP_KERNEL);
++ smack = kzalloc(i + 1, GFP_NOFS);
+ if (smack == NULL)
+ return ERR_PTR(-ENOMEM);
+
+@@ -500,7 +500,7 @@ int smk_netlbl_mls(int level, char *catset, struct netlbl_lsm_secattr *sap,
+ if ((m & *cp) == 0)
+ continue;
+ rc = netlbl_catmap_setbit(&sap->attr.mls.cat,
+- cat, GFP_KERNEL);
++ cat, GFP_NOFS);
+ if (rc < 0) {
+ netlbl_catmap_free(sap->attr.mls.cat);
+ return rc;
+@@ -536,7 +536,7 @@ struct smack_known *smk_import_entry(const char *string, int len)
+ if (skp != NULL)
+ goto freeout;
+
+- skp = kzalloc(sizeof(*skp), GFP_KERNEL);
++ skp = kzalloc(sizeof(*skp), GFP_NOFS);
+ if (skp == NULL) {
+ skp = ERR_PTR(-ENOMEM);
+ goto freeout;
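The GFP_KERNEL to GFP_NOFS switches in smack_access.c (and in smack_lsm.c below) are about allocation context: these label allocations can run while filesystem locks are held during inode security setup, and a GFP_KERNEL allocation may recurse into filesystem reclaim and deadlock. GFP_NOFS still allows sleeping but forbids that recursion. A two-line kernel-style illustration of the contrast (not from the patch):

	buf = kzalloc(len, GFP_KERNEL);	/* full reclaim; may re-enter the fs */
	buf = kzalloc(len, GFP_NOFS);	/* may sleep, but never fs reclaim */
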
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 4c5e5a438f8b..36b6b9d4cbaf 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -288,7 +288,7 @@ static struct smack_known *smk_fetch(const char *name, struct inode *ip,
+ if (!(ip->i_opflags & IOP_XATTR))
+ return ERR_PTR(-EOPNOTSUPP);
+
+- buffer = kzalloc(SMK_LONGLABEL, GFP_KERNEL);
++ buffer = kzalloc(SMK_LONGLABEL, GFP_NOFS);
+ if (buffer == NULL)
+ return ERR_PTR(-ENOMEM);
+
+@@ -937,7 +937,8 @@ static int smack_bprm_set_creds(struct linux_binprm *bprm)
+
+ if (rc != 0)
+ return rc;
+- } else if (bprm->unsafe)
++ }
++ if (bprm->unsafe & ~LSM_UNSAFE_PTRACE)
+ return -EPERM;
+
+ bsp->smk_task = isp->smk_task;
+@@ -3925,6 +3926,8 @@ access_check:
+ skp = smack_ipv6host_label(&sadd);
+ if (skp == NULL)
+ skp = smack_net_ambient;
++ if (skb == NULL)
++ break;
+ #ifdef CONFIG_AUDIT
+ smk_ad_init_net(&ad, __func__, LSM_AUDIT_DATA_NET, &net);
+ ad.a.u.net->family = family;
+diff --git a/tools/power/x86/intel-speed-select/isst-config.c b/tools/power/x86/intel-speed-select/isst-config.c
+index 6a10dea01eef..696586407e83 100644
+--- a/tools/power/x86/intel-speed-select/isst-config.c
++++ b/tools/power/x86/intel-speed-select/isst-config.c
+@@ -402,6 +402,9 @@ void set_cpu_mask_from_punit_coremask(int cpu, unsigned long long core_mask,
+ int j;
+
+ for (j = 0; j < topo_max_cpus; ++j) {
++ if (!CPU_ISSET_S(j, present_cpumask_size, present_cpumask))
++ continue;
++
+ if (cpu_map[j].pkg_id == pkg_id &&
+ cpu_map[j].die_id == die_id &&
+ cpu_map[j].punit_cpu_core == i) {
+diff --git a/tools/testing/selftests/net/udpgso.c b/tools/testing/selftests/net/udpgso.c
+index b8265ee9923f..614b31aad168 100644
+--- a/tools/testing/selftests/net/udpgso.c
++++ b/tools/testing/selftests/net/udpgso.c
+@@ -89,12 +89,9 @@ struct testcase testcases_v4[] = {
+ .tfail = true,
+ },
+ {
+- /* send a single MSS: will fail with GSO, because the segment
+- * logic in udp4_ufo_fragment demands a gso skb to be > MTU
+- */
++ /* send a single MSS: will fall back to no GSO */
+ .tlen = CONST_MSS_V4,
+ .gso_len = CONST_MSS_V4,
+- .tfail = true,
+ .r_num_mss = 1,
+ },
+ {
+@@ -139,10 +136,9 @@ struct testcase testcases_v4[] = {
+ .tfail = true,
+ },
+ {
+- /* send a single 1B MSS: will fail, see single MSS above */
++ /* send a single 1B MSS: will fall back to no GSO */
+ .tlen = 1,
+ .gso_len = 1,
+- .tfail = true,
+ .r_num_mss = 1,
+ },
+ {
+@@ -196,12 +192,9 @@ struct testcase testcases_v6[] = {
+ .tfail = true,
+ },
+ {
+- /* send a single MSS: will fail with GSO, because the segment
+- * logic in udp4_ufo_fragment demands a gso skb to be > MTU
+- */
++ /* send a single MSS: will fall back to no GSO */
+ .tlen = CONST_MSS_V6,
+ .gso_len = CONST_MSS_V6,
+- .tfail = true,
+ .r_num_mss = 1,
+ },
+ {
+@@ -246,10 +239,9 @@ struct testcase testcases_v6[] = {
+ .tfail = true,
+ },
+ {
+- /* send a single 1B MSS: will fail, see single MSS above */
++ /* send a single 1B MSS: will fall back to no GSO */
+ .tlen = 1,
+ .gso_len = 1,
+- .tfail = true,
+ .r_num_mss = 1,
+ },
+ {
+diff --git a/tools/testing/selftests/powerpc/tm/tm.h b/tools/testing/selftests/powerpc/tm/tm.h
+index 97f9f491c541..c402464b038f 100644
+--- a/tools/testing/selftests/powerpc/tm/tm.h
++++ b/tools/testing/selftests/powerpc/tm/tm.h
+@@ -55,7 +55,8 @@ static inline bool failure_is_unavailable(void)
+ static inline bool failure_is_reschedule(void)
+ {
+ if ((failure_code() & TM_CAUSE_RESCHED) == TM_CAUSE_RESCHED ||
+- (failure_code() & TM_CAUSE_KVM_RESCHED) == TM_CAUSE_KVM_RESCHED)
++ (failure_code() & TM_CAUSE_KVM_RESCHED) == TM_CAUSE_KVM_RESCHED ||
++ (failure_code() & TM_CAUSE_KVM_FAC_UNAV) == TM_CAUSE_KVM_FAC_UNAV)
+ return true;
+
+ return false;
+diff --git a/usr/Makefile b/usr/Makefile
+index 6a89eb019275..e6f7cb2f81db 100644
+--- a/usr/Makefile
++++ b/usr/Makefile
+@@ -11,6 +11,9 @@ datafile_y = initramfs_data.cpio$(suffix_y)
+ datafile_d_y = .$(datafile_y).d
+ AFLAGS_initramfs_data.o += -DINITRAMFS_IMAGE="usr/$(datafile_y)"
+
++# clean rules do not have CONFIG_INITRAMFS_COMPRESSION. So clean up after all
++# possible compression formats.
++clean-files += initramfs_data.cpio*
+
+ # Generate builtin.o based on initramfs_data.o
+ obj-$(CONFIG_BLK_DEV_INITRD) := initramfs_data.o
* [gentoo-commits] proj/linux-patches:5.3 commit in: /
@ 2019-10-11 17:08 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2019-10-11 17:08 UTC (permalink / raw
To: gentoo-commits
commit: 0fd5bbcf53ff257d46e75b88b9180e402c053f91
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Oct 11 17:07:55 2019 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Oct 11 17:07:55 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0fd5bbcf
Linux patch 5.3.6
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1005_linux-5.3.6.patch | 5854 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 5858 insertions(+)
diff --git a/0000_README b/0000_README
index 1b2145a..2347308 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch: 1004_linux-5.3.5.patch
From: http://www.kernel.org
Desc: Linux 5.3.5
+Patch: 1005_linux-5.3.6.patch
+From: http://www.kernel.org
+Desc: Linux 5.3.6
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1005_linux-5.3.6.patch b/1005_linux-5.3.6.patch
new file mode 100644
index 0000000..db0720d
--- /dev/null
+++ b/1005_linux-5.3.6.patch
@@ -0,0 +1,5854 @@
+diff --git a/Makefile b/Makefile
+index bf03c110ed9b..d7469f0926a6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 3
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Bobtail Squid
+
+diff --git a/arch/arm/boot/dts/omap3-gta04.dtsi b/arch/arm/boot/dts/omap3-gta04.dtsi
+index b295f6fad2a5..954c216140ad 100644
+--- a/arch/arm/boot/dts/omap3-gta04.dtsi
++++ b/arch/arm/boot/dts/omap3-gta04.dtsi
+@@ -120,6 +120,7 @@
+ spi-max-frequency = <100000>;
+ spi-cpol;
+ spi-cpha;
++ spi-cs-high;
+
+ backlight= <&backlight>;
+ label = "lcd";
+diff --git a/arch/mips/include/asm/cpu-features.h b/arch/mips/include/asm/cpu-features.h
+index 6998a9796499..4e2bea8875f5 100644
+--- a/arch/mips/include/asm/cpu-features.h
++++ b/arch/mips/include/asm/cpu-features.h
+@@ -397,6 +397,22 @@
+ #define cpu_has_dsp3 __ase(MIPS_ASE_DSP3)
+ #endif
+
++#ifndef cpu_has_loongson_mmi
++#define cpu_has_loongson_mmi __ase(MIPS_ASE_LOONGSON_MMI)
++#endif
++
++#ifndef cpu_has_loongson_cam
++#define cpu_has_loongson_cam __ase(MIPS_ASE_LOONGSON_CAM)
++#endif
++
++#ifndef cpu_has_loongson_ext
++#define cpu_has_loongson_ext __ase(MIPS_ASE_LOONGSON_EXT)
++#endif
++
++#ifndef cpu_has_loongson_ext2
++#define cpu_has_loongson_ext2 __ase(MIPS_ASE_LOONGSON_EXT2)
++#endif
++
+ #ifndef cpu_has_mipsmt
+ #define cpu_has_mipsmt __isa_lt_and_ase(6, MIPS_ASE_MIPSMT)
+ #endif
+diff --git a/arch/mips/include/asm/cpu.h b/arch/mips/include/asm/cpu.h
+index 290369fa44a4..1e3526efca1b 100644
+--- a/arch/mips/include/asm/cpu.h
++++ b/arch/mips/include/asm/cpu.h
+@@ -433,5 +433,9 @@ enum cpu_type_enum {
+ #define MIPS_ASE_MSA 0x00000100 /* MIPS SIMD Architecture */
+ #define MIPS_ASE_DSP3 0x00000200 /* Signal Processing ASE Rev 3*/
+ #define MIPS_ASE_MIPS16E2 0x00000400 /* MIPS16e2 */
++#define MIPS_ASE_LOONGSON_MMI 0x00000800 /* Loongson MultiMedia extensions Instructions */
++#define MIPS_ASE_LOONGSON_CAM 0x00001000 /* Loongson CAM */
++#define MIPS_ASE_LOONGSON_EXT 0x00002000 /* Loongson EXTensions */
++#define MIPS_ASE_LOONGSON_EXT2 0x00004000 /* Loongson EXTensions R2 */
+
+ #endif /* _ASM_CPU_H */
+diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
+index e654ffc1c8a0..e698a20017c1 100644
+--- a/arch/mips/kernel/cpu-probe.c
++++ b/arch/mips/kernel/cpu-probe.c
+@@ -1573,6 +1573,8 @@ static inline void cpu_probe_legacy(struct cpuinfo_mips *c, unsigned int cpu)
+ __cpu_name[cpu] = "ICT Loongson-3";
+ set_elf_platform(cpu, "loongson3a");
+ set_isa(c, MIPS_CPU_ISA_M64R1);
++ c->ases |= (MIPS_ASE_LOONGSON_MMI | MIPS_ASE_LOONGSON_CAM |
++ MIPS_ASE_LOONGSON_EXT);
+ break;
+ case PRID_REV_LOONGSON3B_R1:
+ case PRID_REV_LOONGSON3B_R2:
+@@ -1580,6 +1582,8 @@ static inline void cpu_probe_legacy(struct cpuinfo_mips *c, unsigned int cpu)
+ __cpu_name[cpu] = "ICT Loongson-3";
+ set_elf_platform(cpu, "loongson3b");
+ set_isa(c, MIPS_CPU_ISA_M64R1);
++ c->ases |= (MIPS_ASE_LOONGSON_MMI | MIPS_ASE_LOONGSON_CAM |
++ MIPS_ASE_LOONGSON_EXT);
+ break;
+ }
+
+@@ -1946,6 +1950,8 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
+ decode_configs(c);
+ c->options |= MIPS_CPU_FTLB | MIPS_CPU_TLBINV | MIPS_CPU_LDPTE;
+ c->writecombine = _CACHE_UNCACHED_ACCELERATED;
++ c->ases |= (MIPS_ASE_LOONGSON_MMI | MIPS_ASE_LOONGSON_CAM |
++ MIPS_ASE_LOONGSON_EXT | MIPS_ASE_LOONGSON_EXT2);
+ break;
+ default:
+ panic("Unknown Loongson Processor ID!");
+diff --git a/arch/mips/kernel/proc.c b/arch/mips/kernel/proc.c
+index b2de408a259e..f8d36710cd58 100644
+--- a/arch/mips/kernel/proc.c
++++ b/arch/mips/kernel/proc.c
+@@ -124,6 +124,10 @@ static int show_cpuinfo(struct seq_file *m, void *v)
+ if (cpu_has_eva) seq_printf(m, "%s", " eva");
+ if (cpu_has_htw) seq_printf(m, "%s", " htw");
+ if (cpu_has_xpa) seq_printf(m, "%s", " xpa");
++ if (cpu_has_loongson_mmi) seq_printf(m, "%s", " loongson-mmi");
++ if (cpu_has_loongson_cam) seq_printf(m, "%s", " loongson-cam");
++ if (cpu_has_loongson_ext) seq_printf(m, "%s", " loongson-ext");
++ if (cpu_has_loongson_ext2) seq_printf(m, "%s", " loongson-ext2");
+ seq_printf(m, "\n");
+
+ if (cpu_has_mmips) {
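The /proc/cpuinfo hunk just mirrors the four new ASE bits, one token per set flag. A standalone sketch of the bit-test pattern, reusing the mask values defined in cpu.h above:

#include <stdio.h>

#define ASE_LOONGSON_MMI	0x00000800
#define ASE_LOONGSON_CAM	0x00001000
#define ASE_LOONGSON_EXT	0x00002000
#define ASE_LOONGSON_EXT2	0x00004000

int main(void)
{
	unsigned int ases = ASE_LOONGSON_MMI | ASE_LOONGSON_EXT;

	/* mirrors how show_cpuinfo() appends one token per set bit */
	if (ases & ASE_LOONGSON_MMI)	printf(" loongson-mmi");
	if (ases & ASE_LOONGSON_CAM)	printf(" loongson-cam");
	if (ases & ASE_LOONGSON_EXT)	printf(" loongson-ext");
	if (ases & ASE_LOONGSON_EXT2)	printf(" loongson-ext2");
	printf("\n");
	return 0;
}
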
+diff --git a/arch/powerpc/include/asm/cputable.h b/arch/powerpc/include/asm/cputable.h
+index d05f0c28e515..f43ff5a00d38 100644
+--- a/arch/powerpc/include/asm/cputable.h
++++ b/arch/powerpc/include/asm/cputable.h
+@@ -213,8 +213,9 @@ static inline void cpu_feature_keys_init(void) { }
+ #define CPU_FTR_POWER9_DD2_1 LONG_ASM_CONST(0x0000080000000000)
+ #define CPU_FTR_P9_TM_HV_ASSIST LONG_ASM_CONST(0x0000100000000000)
+ #define CPU_FTR_P9_TM_XER_SO_BUG LONG_ASM_CONST(0x0000200000000000)
+-#define CPU_FTR_P9_TLBIE_BUG LONG_ASM_CONST(0x0000400000000000)
++#define CPU_FTR_P9_TLBIE_STQ_BUG LONG_ASM_CONST(0x0000400000000000)
+ #define CPU_FTR_P9_TIDR LONG_ASM_CONST(0x0000800000000000)
++#define CPU_FTR_P9_TLBIE_ERAT_BUG LONG_ASM_CONST(0x0001000000000000)
+
+ #ifndef __ASSEMBLY__
+
+@@ -461,7 +462,7 @@ static inline void cpu_feature_keys_init(void) { }
+ CPU_FTR_CFAR | CPU_FTR_HVMODE | CPU_FTR_VMX_COPY | \
+ CPU_FTR_DBELL | CPU_FTR_HAS_PPR | CPU_FTR_ARCH_207S | \
+ CPU_FTR_TM_COMP | CPU_FTR_ARCH_300 | CPU_FTR_PKEY | \
+- CPU_FTR_P9_TLBIE_BUG | CPU_FTR_P9_TIDR)
++ CPU_FTR_P9_TLBIE_STQ_BUG | CPU_FTR_P9_TLBIE_ERAT_BUG | CPU_FTR_P9_TIDR)
+ #define CPU_FTRS_POWER9_DD2_0 CPU_FTRS_POWER9
+ #define CPU_FTRS_POWER9_DD2_1 (CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD2_1)
+ #define CPU_FTRS_POWER9_DD2_2 (CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD2_1 | \
+diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
+index 2484e6a8f5ca..8e8514efb124 100644
+--- a/arch/powerpc/include/asm/kvm_ppc.h
++++ b/arch/powerpc/include/asm/kvm_ppc.h
+@@ -598,6 +598,7 @@ extern int kvmppc_xive_native_get_vp(struct kvm_vcpu *vcpu,
+ union kvmppc_one_reg *val);
+ extern int kvmppc_xive_native_set_vp(struct kvm_vcpu *vcpu,
+ union kvmppc_one_reg *val);
++extern bool kvmppc_xive_native_supported(void);
+
+ #else
+ static inline int kvmppc_xive_set_xive(struct kvm *kvm, u32 irq, u32 server,
+diff --git a/arch/powerpc/include/asm/xive.h b/arch/powerpc/include/asm/xive.h
+index e4016985764e..818989e11678 100644
+--- a/arch/powerpc/include/asm/xive.h
++++ b/arch/powerpc/include/asm/xive.h
+@@ -46,7 +46,15 @@ struct xive_irq_data {
+
+ /* Setup/used by frontend */
+ int target;
++ /*
++ * saved_p means that there is a queue entry for this interrupt
++ * in some CPU's queue (not including guest vcpu queues), even
++ * if P is not set in the source ESB.
++ * stale_p means that there is no queue entry for this interrupt
++ * in some CPU's queue, even if P is set in the source ESB.
++ */
+ bool saved_p;
++ bool stale_p;
+ };
+ #define XIVE_IRQ_FLAG_STORE_EOI 0x01
+ #define XIVE_IRQ_FLAG_LSI 0x02
+@@ -127,6 +135,7 @@ extern int xive_native_get_queue_state(u32 vp_id, uint32_t prio, u32 *qtoggle,
+ extern int xive_native_set_queue_state(u32 vp_id, uint32_t prio, u32 qtoggle,
+ u32 qindex);
+ extern int xive_native_get_vp_state(u32 vp_id, u64 *out_state);
++extern bool xive_native_has_queue_state_support(void);
+
+ #else
+
+diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
+index bd95318d2202..864cc55fa03c 100644
+--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
++++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
+@@ -691,9 +691,37 @@ static bool __init cpufeatures_process_feature(struct dt_cpu_feature *f)
+ return true;
+ }
+
++/*
++ * Handle POWER9 broadcast tlbie invalidation issue using
++ * cpu feature flag.
++ */
++static __init void update_tlbie_feature_flag(unsigned long pvr)
++{
++ if (PVR_VER(pvr) == PVR_POWER9) {
++ /*
++ * Set the tlbie feature flag for anything below
++ * Nimbus DD 2.3 and Cumulus DD 1.3
++ */
++ if ((pvr & 0xe000) == 0) {
++ /* Nimbus */
++ if ((pvr & 0xfff) < 0x203)
++ cur_cpu_spec->cpu_features |= CPU_FTR_P9_TLBIE_STQ_BUG;
++ } else if ((pvr & 0xc000) == 0) {
++ /* Cumulus */
++ if ((pvr & 0xfff) < 0x103)
++ cur_cpu_spec->cpu_features |= CPU_FTR_P9_TLBIE_STQ_BUG;
++ } else {
++ WARN_ONCE(1, "Unknown PVR");
++ cur_cpu_spec->cpu_features |= CPU_FTR_P9_TLBIE_STQ_BUG;
++ }
++
++ cur_cpu_spec->cpu_features |= CPU_FTR_P9_TLBIE_ERAT_BUG;
++ }
++}
++
+ static __init void cpufeatures_cpu_quirks(void)
+ {
+- int version = mfspr(SPRN_PVR);
++ unsigned long version = mfspr(SPRN_PVR);
+
+ /*
+ * Not all quirks can be derived from the cpufeatures device tree.
+@@ -712,10 +740,10 @@ static __init void cpufeatures_cpu_quirks(void)
+
+ if ((version & 0xffff0000) == 0x004e0000) {
+ cur_cpu_spec->cpu_features &= ~(CPU_FTR_DAWR);
+- cur_cpu_spec->cpu_features |= CPU_FTR_P9_TLBIE_BUG;
+ cur_cpu_spec->cpu_features |= CPU_FTR_P9_TIDR;
+ }
+
++ update_tlbie_feature_flag(version);
+ /*
+ * PKEY was not in the initial base or feature node
+ * specification, but it should become optional in the next
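update_tlbie_feature_flag() keys the two errata off the POWER9 PVR: the 0xe000/0xc000 bits distinguish Nimbus from Cumulus, the low 12 bits carry the DD revision, the STQ bug applies below Nimbus DD2.3 and Cumulus DD1.3, and the ERAT flush flag is set on every POWER9. A standalone decoder sketch (the PVR values in main() are illustrative):

#include <stdio.h>
#include <stdbool.h>

static bool has_tlbie_stq_bug(unsigned long pvr)
{
	if ((pvr & 0xe000) == 0)		/* Nimbus */
		return (pvr & 0xfff) < 0x203;
	if ((pvr & 0xc000) == 0)		/* Cumulus */
		return (pvr & 0xfff) < 0x103;
	return true;				/* unknown chip: assume affected */
}

int main(void)
{
	printf("0x0202 -> %d\n", has_tlbie_stq_bug(0x0202)); /* Nimbus DD2.2: 1 */
	printf("0x0203 -> %d\n", has_tlbie_stq_bug(0x0203)); /* Nimbus DD2.3: 0 */
	return 0;
}
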
+diff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S
+index f255e22184b4..9e6f01abb31e 100644
+--- a/arch/powerpc/kernel/head_32.S
++++ b/arch/powerpc/kernel/head_32.S
+@@ -557,9 +557,9 @@ DataStoreTLBMiss:
+ cmplw 0,r1,r3
+ mfspr r2, SPRN_SPRG_PGDIR
+ #ifdef CONFIG_SWAP
+- li r1, _PAGE_RW | _PAGE_PRESENT | _PAGE_ACCESSED
++ li r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT | _PAGE_ACCESSED
+ #else
+- li r1, _PAGE_RW | _PAGE_PRESENT
++ li r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT
+ #endif
+ bge- 112f
+ lis r2, (swapper_pg_dir - PAGE_OFFSET)@ha /* if kernel address, use */
+@@ -897,9 +897,11 @@ start_here:
+ bl machine_init
+ bl __save_cpu_setup
+ bl MMU_init
++#ifdef CONFIG_KASAN
+ BEGIN_MMU_FTR_SECTION
+ bl MMU_init_hw_patch
+ END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
++#endif
+
+ /*
+ * Go back to running unmapped so we can load up new values
+diff --git a/arch/powerpc/kernel/mce.c b/arch/powerpc/kernel/mce.c
+index b18df633eae9..cff31d4a501f 100644
+--- a/arch/powerpc/kernel/mce.c
++++ b/arch/powerpc/kernel/mce.c
+@@ -33,6 +33,7 @@ static DEFINE_PER_CPU(struct machine_check_event[MAX_MC_EVT],
+ mce_ue_event_queue);
+
+ static void machine_check_process_queued_event(struct irq_work *work);
++static void machine_check_ue_irq_work(struct irq_work *work);
+ void machine_check_ue_event(struct machine_check_event *evt);
+ static void machine_process_ue_event(struct work_struct *work);
+
+@@ -40,6 +41,10 @@ static struct irq_work mce_event_process_work = {
+ .func = machine_check_process_queued_event,
+ };
+
++static struct irq_work mce_ue_event_irq_work = {
++ .func = machine_check_ue_irq_work,
++};
++
+ DECLARE_WORK(mce_ue_event_work, machine_process_ue_event);
+
+ static void mce_set_error_info(struct machine_check_event *mce,
+@@ -199,6 +204,10 @@ void release_mce_event(void)
+ get_mce_event(NULL, true);
+ }
+
++static void machine_check_ue_irq_work(struct irq_work *work)
++{
++ schedule_work(&mce_ue_event_work);
++}
+
+ /*
+ * Queue up the MCE event which then can be handled later.
+@@ -216,7 +225,7 @@ void machine_check_ue_event(struct machine_check_event *evt)
+ memcpy(this_cpu_ptr(&mce_ue_event_queue[index]), evt, sizeof(*evt));
+
+ /* Queue work to process this event later. */
+- schedule_work(&mce_ue_event_work);
++ irq_work_queue(&mce_ue_event_irq_work);
+ }
+
+ /*
+diff --git a/arch/powerpc/kernel/mce_power.c b/arch/powerpc/kernel/mce_power.c
+index a814d2dfb5b0..714a98e0927f 100644
+--- a/arch/powerpc/kernel/mce_power.c
++++ b/arch/powerpc/kernel/mce_power.c
+@@ -26,6 +26,7 @@
+ unsigned long addr_to_pfn(struct pt_regs *regs, unsigned long addr)
+ {
+ pte_t *ptep;
++ unsigned int shift;
+ unsigned long flags;
+ struct mm_struct *mm;
+
+@@ -35,13 +36,18 @@ unsigned long addr_to_pfn(struct pt_regs *regs, unsigned long addr)
+ mm = &init_mm;
+
+ local_irq_save(flags);
+- if (mm == current->mm)
+- ptep = find_current_mm_pte(mm->pgd, addr, NULL, NULL);
+- else
+- ptep = find_init_mm_pte(addr, NULL);
++ ptep = __find_linux_pte(mm->pgd, addr, NULL, &shift);
+ local_irq_restore(flags);
++
+ if (!ptep || pte_special(*ptep))
+ return ULONG_MAX;
++
++ if (shift > PAGE_SHIFT) {
++ unsigned long rpnmask = (1ul << shift) - PAGE_SIZE;
++
++ return pte_pfn(__pte(pte_val(*ptep) | (addr & rpnmask)));
++ }
++
+ return pte_pfn(*ptep);
+ }
+
+@@ -344,7 +350,7 @@ static const struct mce_derror_table mce_p9_derror_table[] = {
+ MCE_INITIATOR_CPU, MCE_SEV_SEVERE, true },
+ { 0, false, 0, 0, 0, 0, 0 } };
+
+-static int mce_find_instr_ea_and_pfn(struct pt_regs *regs, uint64_t *addr,
++static int mce_find_instr_ea_and_phys(struct pt_regs *regs, uint64_t *addr,
+ uint64_t *phys_addr)
+ {
+ /*
+@@ -541,7 +547,8 @@ static int mce_handle_derror(struct pt_regs *regs,
+ * kernel/exception-64s.h
+ */
+ if (get_paca()->in_mce < MAX_MCE_DEPTH)
+- mce_find_instr_ea_and_pfn(regs, addr, phys_addr);
++ mce_find_instr_ea_and_phys(regs, addr,
++ phys_addr);
+ }
+ found = 1;
+ }
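The addr_to_pfn() rework matters for hugepage mappings: the PTE only encodes the base pfn of the huge mapping, so the pfn of the actual faulting 4K page has to be reconstructed from the low bits of the address. A standalone sketch of the mask arithmetic with example values:

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

int main(void)
{
	unsigned long shift = 24;		/* e.g. a 16MB hugepage */
	unsigned long addr = 0x123456789abcUL;	/* example faulting address */

	/* within a huge mapping, the low address bits select which 4K
	 * page inside it actually took the error */
	unsigned long rpnmask = (1UL << shift) - PAGE_SIZE;
	unsigned long offset_pages = (addr & rpnmask) >> PAGE_SHIFT;

	printf("offset inside hugepage: %lu small pages\n", offset_pages);
	return 0;
}
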
+diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
+index 9524d92bc45d..d7fcdfa7fee4 100644
+--- a/arch/powerpc/kvm/book3s.c
++++ b/arch/powerpc/kvm/book3s.c
+@@ -1083,9 +1083,11 @@ static int kvmppc_book3s_init(void)
+ if (xics_on_xive()) {
+ kvmppc_xive_init_module();
+ kvm_register_device_ops(&kvm_xive_ops, KVM_DEV_TYPE_XICS);
+- kvmppc_xive_native_init_module();
+- kvm_register_device_ops(&kvm_xive_native_ops,
+- KVM_DEV_TYPE_XIVE);
++ if (kvmppc_xive_native_supported()) {
++ kvmppc_xive_native_init_module();
++ kvm_register_device_ops(&kvm_xive_native_ops,
++ KVM_DEV_TYPE_XIVE);
++ }
+ } else
+ #endif
+ kvm_register_device_ops(&kvm_xics_ops, KVM_DEV_TYPE_XICS);
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index cde3f5a4b3e4..f8975c620f41 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -1678,7 +1678,14 @@ static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
+ *val = get_reg_val(id, vcpu->arch.pspb);
+ break;
+ case KVM_REG_PPC_DPDES:
+- *val = get_reg_val(id, vcpu->arch.vcore->dpdes);
++ /*
++ * On POWER9, where we are emulating msgsndp etc.,
++ * we return 1 bit for each vcpu, which can come from
++ * either vcore->dpdes or doorbell_request.
++ * On POWER8, doorbell_request is 0.
++ */
++ *val = get_reg_val(id, vcpu->arch.vcore->dpdes |
++ vcpu->arch.doorbell_request);
+ break;
+ case KVM_REG_PPC_VTB:
+ *val = get_reg_val(id, vcpu->arch.vcore->vtb);
+@@ -2860,7 +2867,7 @@ static void collect_piggybacks(struct core_info *cip, int target_threads)
+ if (!spin_trylock(&pvc->lock))
+ continue;
+ prepare_threads(pvc);
+- if (!pvc->n_runnable) {
++ if (!pvc->n_runnable || !pvc->kvm->arch.mmu_ready) {
+ list_del_init(&pvc->preempt_list);
+ if (pvc->runner == NULL) {
+ pvc->vcore_state = VCORE_INACTIVE;
+@@ -2881,15 +2888,20 @@ static void collect_piggybacks(struct core_info *cip, int target_threads)
+ spin_unlock(&lp->lock);
+ }
+
+-static bool recheck_signals(struct core_info *cip)
++static bool recheck_signals_and_mmu(struct core_info *cip)
+ {
+ int sub, i;
+ struct kvm_vcpu *vcpu;
++ struct kvmppc_vcore *vc;
+
+- for (sub = 0; sub < cip->n_subcores; ++sub)
+- for_each_runnable_thread(i, vcpu, cip->vc[sub])
++ for (sub = 0; sub < cip->n_subcores; ++sub) {
++ vc = cip->vc[sub];
++ if (!vc->kvm->arch.mmu_ready)
++ return true;
++ for_each_runnable_thread(i, vcpu, vc)
+ if (signal_pending(vcpu->arch.run_task))
+ return true;
++ }
+ return false;
+ }
+
+@@ -3119,7 +3131,7 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
+ local_irq_disable();
+ hard_irq_disable();
+ if (lazy_irq_pending() || need_resched() ||
+- recheck_signals(&core_info) || !vc->kvm->arch.mmu_ready) {
++ recheck_signals_and_mmu(&core_info)) {
+ local_irq_enable();
+ vc->vcore_state = VCORE_INACTIVE;
+ /* Unlock all except the primary vcore */
+diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+index 63e0ce91e29d..47f86252e8a1 100644
+--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
++++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+@@ -433,6 +433,37 @@ static inline int is_mmio_hpte(unsigned long v, unsigned long r)
+ (HPTE_R_KEY_HI | HPTE_R_KEY_LO));
+ }
+
++static inline void fixup_tlbie_lpid(unsigned long rb_value, unsigned long lpid)
++{
++
++ if (cpu_has_feature(CPU_FTR_P9_TLBIE_ERAT_BUG)) {
++ /* Radix flush for a hash guest */
++
++ unsigned long rb,rs,prs,r,ric;
++
++ rb = PPC_BIT(52); /* IS = 2 */
++ rs = 0; /* lpid = 0 */
++ prs = 0; /* partition scoped */
++ r = 1; /* radix format */
++ ric = 0; /* RIC_FLUSH_TLB */
++
++ /*
++ * Need the extra ptesync to make sure we don't
++ * re-order the tlbie
++ */
++ asm volatile("ptesync": : :"memory");
++ asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
++ : : "r"(rb), "i"(r), "i"(prs),
++ "i"(ric), "r"(rs) : "memory");
++ }
++
++ if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) {
++ asm volatile("ptesync": : :"memory");
++ asm volatile(PPC_TLBIE_5(%0,%1,0,0,0) : :
++ "r" (rb_value), "r" (lpid));
++ }
++}
++
+ static void do_tlbies(struct kvm *kvm, unsigned long *rbvalues,
+ long npages, int global, bool need_sync)
+ {
+@@ -451,16 +482,7 @@ static void do_tlbies(struct kvm *kvm, unsigned long *rbvalues,
+ "r" (rbvalues[i]), "r" (kvm->arch.lpid));
+ }
+
+- if (cpu_has_feature(CPU_FTR_P9_TLBIE_BUG)) {
+- /*
+- * Need the extra ptesync to make sure we don't
+- * re-order the tlbie
+- */
+- asm volatile("ptesync": : :"memory");
+- asm volatile(PPC_TLBIE_5(%0,%1,0,0,0) : :
+- "r" (rbvalues[0]), "r" (kvm->arch.lpid));
+- }
+-
++ fixup_tlbie_lpid(rbvalues[i - 1], kvm->arch.lpid);
+ asm volatile("eieio; tlbsync; ptesync" : : : "memory");
+ } else {
+ if (need_sync)
+diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+index 337e64468d78..07181d0dfcb7 100644
+--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
++++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+@@ -942,6 +942,8 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_300)
+ ld r11, VCPU_XIVE_SAVED_STATE(r4)
+ li r9, TM_QW1_OS
+ lwz r8, VCPU_XIVE_CAM_WORD(r4)
++ cmpwi r8, 0
++ beq no_xive
+ li r7, TM_QW1_OS + TM_WORD2
+ mfmsr r0
+ andi. r0, r0, MSR_DR /* in real mode? */
+@@ -2831,29 +2833,39 @@ kvm_cede_prodded:
+ kvm_cede_exit:
+ ld r9, HSTATE_KVM_VCPU(r13)
+ #ifdef CONFIG_KVM_XICS
+- /* Abort if we still have a pending escalation */
++ /* are we using XIVE with single escalation? */
++ ld r10, VCPU_XIVE_ESC_VADDR(r9)
++ cmpdi r10, 0
++ beq 3f
++ li r6, XIVE_ESB_SET_PQ_00
++ /*
++ * If we still have a pending escalation, abort the cede,
++ * and we must set PQ to 10 rather than 00 so that we don't
++ * potentially end up with two entries for the escalation
++ * interrupt in the XIVE interrupt queue. In that case
++ * we also don't want to set xive_esc_on to 1 here in
++ * case we race with xive_esc_irq().
++ */
+ lbz r5, VCPU_XIVE_ESC_ON(r9)
+ cmpwi r5, 0
+- beq 1f
++ beq 4f
+ li r0, 0
+ stb r0, VCPU_CEDED(r9)
+-1: /* Enable XIVE escalation */
+- li r5, XIVE_ESB_SET_PQ_00
++ li r6, XIVE_ESB_SET_PQ_10
++ b 5f
++4: li r0, 1
++ stb r0, VCPU_XIVE_ESC_ON(r9)
++ /* make sure store to xive_esc_on is seen before xive_esc_irq runs */
++ sync
++5: /* Enable XIVE escalation */
+ mfmsr r0
+ andi. r0, r0, MSR_DR /* in real mode? */
+ beq 1f
+- ld r10, VCPU_XIVE_ESC_VADDR(r9)
+- cmpdi r10, 0
+- beq 3f
+- ldx r0, r10, r5
++ ldx r0, r10, r6
+ b 2f
+ 1: ld r10, VCPU_XIVE_ESC_RADDR(r9)
+- cmpdi r10, 0
+- beq 3f
+- ldcix r0, r10, r5
++ ldcix r0, r10, r6
+ 2: sync
+- li r0, 1
+- stb r0, VCPU_XIVE_ESC_ON(r9)
+ #endif /* CONFIG_KVM_XICS */
+ 3: b guest_exit_cont
+
+diff --git a/arch/powerpc/kvm/book3s_xive.c b/arch/powerpc/kvm/book3s_xive.c
+index e3ba67095895..591bfb4bfd0f 100644
+--- a/arch/powerpc/kvm/book3s_xive.c
++++ b/arch/powerpc/kvm/book3s_xive.c
+@@ -67,8 +67,14 @@ void kvmppc_xive_push_vcpu(struct kvm_vcpu *vcpu)
+ void __iomem *tima = local_paca->kvm_hstate.xive_tima_virt;
+ u64 pq;
+
+- if (!tima)
++ /*
++ * Nothing to do if the platform doesn't have a XIVE
++ * or this vCPU doesn't have its own XIVE context
++ * (e.g. because it's not using an in-kernel interrupt controller).
++ */
++ if (!tima || !vcpu->arch.xive_cam_word)
+ return;
++
+ eieio();
+ __raw_writeq(vcpu->arch.xive_saved_state.w01, tima + TM_QW1_OS);
+ __raw_writel(vcpu->arch.xive_cam_word, tima + TM_QW1_OS + TM_WORD2);
+@@ -160,6 +166,9 @@ static irqreturn_t xive_esc_irq(int irq, void *data)
+ */
+ vcpu->arch.xive_esc_on = false;
+
++ /* This orders xive_esc_on = false vs. subsequent stale_p = true */
++ smp_wmb(); /* goes with smp_mb() in cleanup_single_escalation */
++
+ return IRQ_HANDLED;
+ }
+
+@@ -1113,6 +1122,31 @@ void kvmppc_xive_disable_vcpu_interrupts(struct kvm_vcpu *vcpu)
+ vcpu->arch.xive_esc_raddr = 0;
+ }
+
++/*
++ * In single escalation mode, the escalation interrupt is marked so
++ * that EOI doesn't re-enable it, but just sets the stale_p flag to
++ * indicate that the P bit has already been dealt with. However, the
++ * assembly code that enters the guest sets PQ to 00 without clearing
++ * stale_p (because it has no easy way to address it). Hence we have
++ * to adjust stale_p before shutting down the interrupt.
++ */
++void xive_cleanup_single_escalation(struct kvm_vcpu *vcpu,
++ struct kvmppc_xive_vcpu *xc, int irq)
++{
++ struct irq_data *d = irq_get_irq_data(irq);
++ struct xive_irq_data *xd = irq_data_get_irq_handler_data(d);
++
++ /*
++ * This slightly odd sequence gives the right result
++ * (i.e. stale_p set if xive_esc_on is false) even if
++ * we race with xive_esc_irq() and xive_irq_eoi().
++ */
++ xd->stale_p = false;
++ smp_mb(); /* paired with smp_wmb in xive_esc_irq */
++ if (!vcpu->arch.xive_esc_on)
++ xd->stale_p = true;
++}
++
+ void kvmppc_xive_cleanup_vcpu(struct kvm_vcpu *vcpu)
+ {
+ struct kvmppc_xive_vcpu *xc = vcpu->arch.xive_vcpu;
+@@ -1134,20 +1168,28 @@ void kvmppc_xive_cleanup_vcpu(struct kvm_vcpu *vcpu)
+ /* Mask the VP IPI */
+ xive_vm_esb_load(&xc->vp_ipi_data, XIVE_ESB_SET_PQ_01);
+
+- /* Disable the VP */
+- xive_native_disable_vp(xc->vp_id);
+-
+- /* Free the queues & associated interrupts */
++ /* Free escalations */
+ for (i = 0; i < KVMPPC_XIVE_Q_COUNT; i++) {
+- struct xive_q *q = &xc->queues[i];
+-
+- /* Free the escalation irq */
+ if (xc->esc_virq[i]) {
++ if (xc->xive->single_escalation)
++ xive_cleanup_single_escalation(vcpu, xc,
++ xc->esc_virq[i]);
+ free_irq(xc->esc_virq[i], vcpu);
+ irq_dispose_mapping(xc->esc_virq[i]);
+ kfree(xc->esc_virq_names[i]);
+ }
+- /* Free the queue */
++ }
++
++ /* Disable the VP */
++ xive_native_disable_vp(xc->vp_id);
++
++ /* Clear the cam word so guest entry won't try to push context */
++ vcpu->arch.xive_cam_word = 0;
++
++ /* Free the queues */
++ for (i = 0; i < KVMPPC_XIVE_Q_COUNT; i++) {
++ struct xive_q *q = &xc->queues[i];
++
+ xive_native_disable_queue(xc->vp_id, q, i);
+ if (q->qpage) {
+ free_pages((unsigned long)q->qpage,
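The smp_wmb()/smp_mb() pair above is a small handshake: the escalation IRQ handler clears xive_esc_on and publishes the store, while the cleanup path clears stale_p, fences, and then re-derives stale_p from xive_esc_on, so the flag ends up correct even when the two race. A userspace sketch of the ordering using C11 atomics (mock names, not the kernel's):

#include <stdatomic.h>
#include <stdbool.h>

static _Atomic bool esc_on;
static _Atomic bool stale_p;

/* IRQ-handler side: clear the flag and publish it. */
static void esc_irq_handler(void)
{
	atomic_store_explicit(&esc_on, false, memory_order_release);
}

/* Cleanup side: clear, fence, then re-derive, so a racing handler
 * can never leave stale_p with the wrong value. */
static void cleanup_single_escalation(void)
{
	atomic_store(&stale_p, false);
	atomic_thread_fence(memory_order_seq_cst); /* pairs with the release above */
	if (!atomic_load(&esc_on))
		atomic_store(&stale_p, true);
}
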
+diff --git a/arch/powerpc/kvm/book3s_xive.h b/arch/powerpc/kvm/book3s_xive.h
+index 50494d0ee375..955b820ffd6d 100644
+--- a/arch/powerpc/kvm/book3s_xive.h
++++ b/arch/powerpc/kvm/book3s_xive.h
+@@ -282,6 +282,8 @@ int kvmppc_xive_select_target(struct kvm *kvm, u32 *server, u8 prio);
+ int kvmppc_xive_attach_escalation(struct kvm_vcpu *vcpu, u8 prio,
+ bool single_escalation);
+ struct kvmppc_xive *kvmppc_xive_get_device(struct kvm *kvm, u32 type);
++void xive_cleanup_single_escalation(struct kvm_vcpu *vcpu,
++ struct kvmppc_xive_vcpu *xc, int irq);
+
+ #endif /* CONFIG_KVM_XICS */
+ #endif /* _KVM_PPC_BOOK3S_XICS_H */
+diff --git a/arch/powerpc/kvm/book3s_xive_native.c b/arch/powerpc/kvm/book3s_xive_native.c
+index a998823f68a3..248c1ea9e788 100644
+--- a/arch/powerpc/kvm/book3s_xive_native.c
++++ b/arch/powerpc/kvm/book3s_xive_native.c
+@@ -67,20 +67,28 @@ void kvmppc_xive_native_cleanup_vcpu(struct kvm_vcpu *vcpu)
+ xc->valid = false;
+ kvmppc_xive_disable_vcpu_interrupts(vcpu);
+
+- /* Disable the VP */
+- xive_native_disable_vp(xc->vp_id);
+-
+- /* Free the queues & associated interrupts */
++ /* Free escalations */
+ for (i = 0; i < KVMPPC_XIVE_Q_COUNT; i++) {
+ /* Free the escalation irq */
+ if (xc->esc_virq[i]) {
++ if (xc->xive->single_escalation)
++ xive_cleanup_single_escalation(vcpu, xc,
++ xc->esc_virq[i]);
+ free_irq(xc->esc_virq[i], vcpu);
+ irq_dispose_mapping(xc->esc_virq[i]);
+ kfree(xc->esc_virq_names[i]);
+ xc->esc_virq[i] = 0;
+ }
++ }
++
++ /* Disable the VP */
++ xive_native_disable_vp(xc->vp_id);
++
++ /* Clear the cam word so guest entry won't try to push context */
++ vcpu->arch.xive_cam_word = 0;
+
+- /* Free the queue */
++ /* Free the queues */
++ for (i = 0; i < KVMPPC_XIVE_Q_COUNT; i++) {
+ kvmppc_xive_native_cleanup_queue(vcpu, i);
+ }
+
+@@ -1171,6 +1179,11 @@ int kvmppc_xive_native_set_vp(struct kvm_vcpu *vcpu, union kvmppc_one_reg *val)
+ return 0;
+ }
+
++bool kvmppc_xive_native_supported(void)
++{
++ return xive_native_has_queue_state_support();
++}
++
+ static int xive_native_debug_show(struct seq_file *m, void *private)
+ {
+ struct kvmppc_xive *xive = m->private;
+diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
+index 3e566c2e6066..3a77bb643452 100644
+--- a/arch/powerpc/kvm/powerpc.c
++++ b/arch/powerpc/kvm/powerpc.c
+@@ -561,7 +561,8 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+ * a POWER9 processor) and the PowerNV platform, as
+ * nested is not yet supported.
+ */
+- r = xive_enabled() && !!cpu_has_feature(CPU_FTR_HVMODE);
++ r = xive_enabled() && !!cpu_has_feature(CPU_FTR_HVMODE) &&
++ kvmppc_xive_native_supported();
+ break;
+ #endif
+
+diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
+index e249fbf6b9c3..8d68f03bf5a4 100644
+--- a/arch/powerpc/mm/book3s32/mmu.c
++++ b/arch/powerpc/mm/book3s32/mmu.c
+@@ -358,6 +358,15 @@ void __init MMU_init_hw(void)
+ hash_mb2 = hash_mb = 32 - LG_HPTEG_SIZE - lg_n_hpteg;
+ if (lg_n_hpteg > 16)
+ hash_mb2 = 16 - LG_HPTEG_SIZE;
++
++ /*
++ * When KASAN is selected, there is already an early temporary hash
++ * table and the switch to the final hash table is done later.
++ */
++ if (IS_ENABLED(CONFIG_KASAN))
++ return;
++
++ MMU_init_hw_patch();
+ }
+
+ void __init MMU_init_hw_patch(void)
+diff --git a/arch/powerpc/mm/book3s64/hash_native.c b/arch/powerpc/mm/book3s64/hash_native.c
+index 90ab4f31e2b3..523e42eb11da 100644
+--- a/arch/powerpc/mm/book3s64/hash_native.c
++++ b/arch/powerpc/mm/book3s64/hash_native.c
+@@ -197,9 +197,32 @@ static inline unsigned long ___tlbie(unsigned long vpn, int psize,
+ return va;
+ }
+
+-static inline void fixup_tlbie(unsigned long vpn, int psize, int apsize, int ssize)
++static inline void fixup_tlbie_vpn(unsigned long vpn, int psize,
++ int apsize, int ssize)
+ {
+- if (cpu_has_feature(CPU_FTR_P9_TLBIE_BUG)) {
++ if (cpu_has_feature(CPU_FTR_P9_TLBIE_ERAT_BUG)) {
++ /* Radix flush for a hash guest */
++
++ unsigned long rb,rs,prs,r,ric;
++
++ rb = PPC_BIT(52); /* IS = 2 */
++ rs = 0; /* lpid = 0 */
++ prs = 0; /* partition scoped */
++ r = 1; /* radix format */
++ ric = 0; /* RIC_FLUSH_TLB */
++
++ /*
++ * Need the extra ptesync to make sure we don't
++ * re-order the tlbie
++ */
++ asm volatile("ptesync": : :"memory");
++ asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
++ : : "r"(rb), "i"(r), "i"(prs),
++ "i"(ric), "r"(rs) : "memory");
++ }
++
++
++ if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) {
+ /* Need the extra ptesync to ensure we don't reorder tlbie*/
+ asm volatile("ptesync": : :"memory");
+ ___tlbie(vpn, psize, apsize, ssize);
+@@ -283,7 +306,7 @@ static inline void tlbie(unsigned long vpn, int psize, int apsize,
+ asm volatile("ptesync": : :"memory");
+ } else {
+ __tlbie(vpn, psize, apsize, ssize);
+- fixup_tlbie(vpn, psize, apsize, ssize);
++ fixup_tlbie_vpn(vpn, psize, apsize, ssize);
+ asm volatile("eieio; tlbsync; ptesync": : :"memory");
+ }
+ if (lock_tlbie && !use_local)
+@@ -856,7 +879,7 @@ static void native_flush_hash_range(unsigned long number, int local)
+ /*
+ * Just do one more with the last used values.
+ */
+- fixup_tlbie(vpn, psize, psize, ssize);
++ fixup_tlbie_vpn(vpn, psize, psize, ssize);
+ asm volatile("eieio; tlbsync; ptesync":::"memory");
+
+ if (lock_tlbie)
+diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
+index b8ad14bb1170..17b0885581e4 100644
+--- a/arch/powerpc/mm/book3s64/hash_utils.c
++++ b/arch/powerpc/mm/book3s64/hash_utils.c
+@@ -34,6 +34,7 @@
+ #include <linux/libfdt.h>
+ #include <linux/pkeys.h>
+ #include <linux/hugetlb.h>
++#include <linux/cpu.h>
+
+ #include <asm/debugfs.h>
+ #include <asm/processor.h>
+@@ -1931,10 +1932,16 @@ static int hpt_order_get(void *data, u64 *val)
+
+ static int hpt_order_set(void *data, u64 val)
+ {
++ int ret;
++
+ if (!mmu_hash_ops.resize_hpt)
+ return -ENODEV;
+
+- return mmu_hash_ops.resize_hpt(val);
++ cpus_read_lock();
++ ret = mmu_hash_ops.resize_hpt(val);
++ cpus_read_unlock();
++
++ return ret;
+ }
+
+ DEFINE_DEBUGFS_ATTRIBUTE(fops_hpt_order, hpt_order_get, hpt_order_set, "%llu\n");
+diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
+index 71f7fede2fa4..e66a77bdc657 100644
+--- a/arch/powerpc/mm/book3s64/radix_tlb.c
++++ b/arch/powerpc/mm/book3s64/radix_tlb.c
+@@ -211,22 +211,83 @@ static __always_inline void __tlbie_lpid_va(unsigned long va, unsigned long lpid
+ trace_tlbie(lpid, 0, rb, rs, ric, prs, r);
+ }
+
+-static inline void fixup_tlbie(void)
++
++static inline void fixup_tlbie_va(unsigned long va, unsigned long pid,
++ unsigned long ap)
++{
++ if (cpu_has_feature(CPU_FTR_P9_TLBIE_ERAT_BUG)) {
++ asm volatile("ptesync": : :"memory");
++ __tlbie_va(va, 0, ap, RIC_FLUSH_TLB);
++ }
++
++ if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) {
++ asm volatile("ptesync": : :"memory");
++ __tlbie_va(va, pid, ap, RIC_FLUSH_TLB);
++ }
++}
++
++static inline void fixup_tlbie_va_range(unsigned long va, unsigned long pid,
++ unsigned long ap)
+ {
+- unsigned long pid = 0;
++ if (cpu_has_feature(CPU_FTR_P9_TLBIE_ERAT_BUG)) {
++ asm volatile("ptesync": : :"memory");
++ __tlbie_pid(0, RIC_FLUSH_TLB);
++ }
++
++ if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) {
++ asm volatile("ptesync": : :"memory");
++ __tlbie_va(va, pid, ap, RIC_FLUSH_TLB);
++ }
++}
++
++static inline void fixup_tlbie_pid(unsigned long pid)
++{
++ /*
++ * We can use any address for the invalidation, pick one which is
++ * probably unused as an optimisation.
++ */
+ unsigned long va = ((1UL << 52) - 1);
+
+- if (cpu_has_feature(CPU_FTR_P9_TLBIE_BUG)) {
++ if (cpu_has_feature(CPU_FTR_P9_TLBIE_ERAT_BUG)) {
++ asm volatile("ptesync": : :"memory");
++ __tlbie_pid(0, RIC_FLUSH_TLB);
++ }
++
++ if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) {
+ asm volatile("ptesync": : :"memory");
+ __tlbie_va(va, pid, mmu_get_ap(MMU_PAGE_64K), RIC_FLUSH_TLB);
+ }
+ }
+
++
++static inline void fixup_tlbie_lpid_va(unsigned long va, unsigned long lpid,
++ unsigned long ap)
++{
++ if (cpu_has_feature(CPU_FTR_P9_TLBIE_ERAT_BUG)) {
++ asm volatile("ptesync": : :"memory");
++ __tlbie_lpid_va(va, 0, ap, RIC_FLUSH_TLB);
++ }
++
++ if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) {
++ asm volatile("ptesync": : :"memory");
++ __tlbie_lpid_va(va, lpid, ap, RIC_FLUSH_TLB);
++ }
++}
++
+ static inline void fixup_tlbie_lpid(unsigned long lpid)
+ {
++ /*
++ * We can use any address for the invalidation, pick one which is
++ * probably unused as an optimisation.
++ */
+ unsigned long va = ((1UL << 52) - 1);
+
+- if (cpu_has_feature(CPU_FTR_P9_TLBIE_BUG)) {
++ if (cpu_has_feature(CPU_FTR_P9_TLBIE_ERAT_BUG)) {
++ asm volatile("ptesync": : :"memory");
++ __tlbie_lpid(0, RIC_FLUSH_TLB);
++ }
++
++ if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) {
+ asm volatile("ptesync": : :"memory");
+ __tlbie_lpid_va(va, lpid, mmu_get_ap(MMU_PAGE_64K), RIC_FLUSH_TLB);
+ }
+@@ -273,6 +334,7 @@ static inline void _tlbie_pid(unsigned long pid, unsigned long ric)
+ switch (ric) {
+ case RIC_FLUSH_TLB:
+ __tlbie_pid(pid, RIC_FLUSH_TLB);
++ fixup_tlbie_pid(pid);
+ break;
+ case RIC_FLUSH_PWC:
+ __tlbie_pid(pid, RIC_FLUSH_PWC);
+@@ -280,8 +342,8 @@ static inline void _tlbie_pid(unsigned long pid, unsigned long ric)
+ case RIC_FLUSH_ALL:
+ default:
+ __tlbie_pid(pid, RIC_FLUSH_ALL);
++ fixup_tlbie_pid(pid);
+ }
+- fixup_tlbie();
+ asm volatile("eieio; tlbsync; ptesync": : :"memory");
+ }
+
+@@ -325,6 +387,7 @@ static inline void _tlbie_lpid(unsigned long lpid, unsigned long ric)
+ switch (ric) {
+ case RIC_FLUSH_TLB:
+ __tlbie_lpid(lpid, RIC_FLUSH_TLB);
++ fixup_tlbie_lpid(lpid);
+ break;
+ case RIC_FLUSH_PWC:
+ __tlbie_lpid(lpid, RIC_FLUSH_PWC);
+@@ -332,8 +395,8 @@ static inline void _tlbie_lpid(unsigned long lpid, unsigned long ric)
+ case RIC_FLUSH_ALL:
+ default:
+ __tlbie_lpid(lpid, RIC_FLUSH_ALL);
++ fixup_tlbie_lpid(lpid);
+ }
+- fixup_tlbie_lpid(lpid);
+ asm volatile("eieio; tlbsync; ptesync": : :"memory");
+ }
+
+@@ -407,6 +470,8 @@ static inline void __tlbie_va_range(unsigned long start, unsigned long end,
+
+ for (addr = start; addr < end; addr += page_size)
+ __tlbie_va(addr, pid, ap, RIC_FLUSH_TLB);
++
++ fixup_tlbie_va_range(addr - page_size, pid, ap);
+ }
+
+ static __always_inline void _tlbie_va(unsigned long va, unsigned long pid,
+@@ -416,7 +481,7 @@ static __always_inline void _tlbie_va(unsigned long va, unsigned long pid,
+
+ asm volatile("ptesync": : :"memory");
+ __tlbie_va(va, pid, ap, ric);
+- fixup_tlbie();
++ fixup_tlbie_va(va, pid, ap);
+ asm volatile("eieio; tlbsync; ptesync": : :"memory");
+ }
+
+@@ -427,7 +492,7 @@ static __always_inline void _tlbie_lpid_va(unsigned long va, unsigned long lpid,
+
+ asm volatile("ptesync": : :"memory");
+ __tlbie_lpid_va(va, lpid, ap, ric);
+- fixup_tlbie_lpid(lpid);
++ fixup_tlbie_lpid_va(va, lpid, ap);
+ asm volatile("eieio; tlbsync; ptesync": : :"memory");
+ }
+
+@@ -439,7 +504,6 @@ static inline void _tlbie_va_range(unsigned long start, unsigned long end,
+ if (also_pwc)
+ __tlbie_pid(pid, RIC_FLUSH_PWC);
+ __tlbie_va_range(start, end, pid, page_size, psize);
+- fixup_tlbie();
+ asm volatile("eieio; tlbsync; ptesync": : :"memory");
+ }
+
+@@ -775,7 +839,7 @@ is_local:
+ if (gflush)
+ __tlbie_va_range(gstart, gend, pid,
+ PUD_SIZE, MMU_PAGE_1G);
+- fixup_tlbie();
++
+ asm volatile("eieio; tlbsync; ptesync": : :"memory");
+ }
+ }
+diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
+index a44f6281ca3a..4e08246acd79 100644
+--- a/arch/powerpc/mm/init_64.c
++++ b/arch/powerpc/mm/init_64.c
+@@ -172,6 +172,21 @@ static __meminit void vmemmap_list_populate(unsigned long phys,
+ vmemmap_list = vmem_back;
+ }
+
++static bool altmap_cross_boundary(struct vmem_altmap *altmap, unsigned long start,
++ unsigned long page_size)
++{
++ unsigned long nr_pfn = page_size / sizeof(struct page);
++ unsigned long start_pfn = page_to_pfn((struct page *)start);
++
++ if ((start_pfn + nr_pfn) > altmap->end_pfn)
++ return true;
++
++ if (start_pfn < altmap->base_pfn)
++ return true;
++
++ return false;
++}
++
+ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
+ struct vmem_altmap *altmap)
+ {
+@@ -194,7 +209,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
+ * fail due to alignment issues when using 16MB hugepages, so
+ * fall back to system memory if the altmap allocation fail.
+ */
+- if (altmap) {
++ if (altmap && !altmap_cross_boundary(altmap, start, page_size)) {
+ p = altmap_alloc_block_buf(page_size, altmap);
+ if (!p)
+ pr_debug("altmap block allocation failed, falling back to system memory");
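altmap_cross_boundary() is a pure range check: the vmemmap block being populated covers nr_pfn pages' worth of struct pages, and if that pfn span would start before or spill past the device-provided altmap window, the caller falls back to system memory. A standalone sketch:

#include <stdbool.h>
#include <stdio.h>

struct mock_altmap {
	unsigned long base_pfn;
	unsigned long end_pfn;
};

static bool crosses(struct mock_altmap *a, unsigned long start_pfn,
		    unsigned long nr_pfn)
{
	return start_pfn + nr_pfn > a->end_pfn || start_pfn < a->base_pfn;
}

int main(void)
{
	struct mock_altmap a = { .base_pfn = 0x1000, .end_pfn = 0x2000 };

	printf("%d\n", crosses(&a, 0x1f00, 0x200));	/* spills past end: 1 */
	printf("%d\n", crosses(&a, 0x1800, 0x100));	/* fits: 0 */
	return 0;
}
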
+diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
+index 74f4555a62ba..0e6ed4413eea 100644
+--- a/arch/powerpc/mm/kasan/kasan_init_32.c
++++ b/arch/powerpc/mm/kasan/kasan_init_32.c
+@@ -5,12 +5,21 @@
+ #include <linux/kasan.h>
+ #include <linux/printk.h>
+ #include <linux/memblock.h>
++#include <linux/moduleloader.h>
+ #include <linux/sched/task.h>
+ #include <linux/vmalloc.h>
+ #include <asm/pgalloc.h>
+ #include <asm/code-patching.h>
+ #include <mm/mmu_decl.h>
+
++static pgprot_t kasan_prot_ro(void)
++{
++ if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
++ return PAGE_READONLY;
++
++ return PAGE_KERNEL_RO;
++}
++
+ static void kasan_populate_pte(pte_t *ptep, pgprot_t prot)
+ {
+ unsigned long va = (unsigned long)kasan_early_shadow_page;
+@@ -25,6 +34,7 @@ static int __ref kasan_init_shadow_page_tables(unsigned long k_start, unsigned l
+ {
+ pmd_t *pmd;
+ unsigned long k_cur, k_next;
++ pgprot_t prot = slab_is_available() ? kasan_prot_ro() : PAGE_KERNEL;
+
+ pmd = pmd_offset(pud_offset(pgd_offset_k(k_start), k_start), k_start);
+
+@@ -42,11 +52,20 @@ static int __ref kasan_init_shadow_page_tables(unsigned long k_start, unsigned l
+
+ if (!new)
+ return -ENOMEM;
+- if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
+- kasan_populate_pte(new, PAGE_READONLY);
+- else
+- kasan_populate_pte(new, PAGE_KERNEL_RO);
+- pmd_populate_kernel(&init_mm, pmd, new);
++ kasan_populate_pte(new, prot);
++
++ smp_wmb(); /* See comment in __pte_alloc */
++
++ spin_lock(&init_mm.page_table_lock);
++ /* Has another populated it ? */
++ if (likely((void *)pmd_page_vaddr(*pmd) == kasan_early_shadow_pte)) {
++ pmd_populate_kernel(&init_mm, pmd, new);
++ new = NULL;
++ }
++ spin_unlock(&init_mm.page_table_lock);
++
++ if (new && slab_is_available())
++ pte_free_kernel(&init_mm, new);
+ }
+ return 0;
+ }
+@@ -74,7 +93,7 @@ static int __ref kasan_init_region(void *start, size_t size)
+ if (!slab_is_available())
+ block = memblock_alloc(k_end - k_start, PAGE_SIZE);
+
+- for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
++ for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
+ pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
+ void *va = block ? block + k_cur - k_start : kasan_get_one_page();
+ pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
+@@ -90,11 +109,23 @@ static int __ref kasan_init_region(void *start, size_t size)
+
+ static void __init kasan_remap_early_shadow_ro(void)
+ {
+- if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
+- kasan_populate_pte(kasan_early_shadow_pte, PAGE_READONLY);
+- else
+- kasan_populate_pte(kasan_early_shadow_pte, PAGE_KERNEL_RO);
++ pgprot_t prot = kasan_prot_ro();
++ unsigned long k_start = KASAN_SHADOW_START;
++ unsigned long k_end = KASAN_SHADOW_END;
++ unsigned long k_cur;
++ phys_addr_t pa = __pa(kasan_early_shadow_page);
++
++ kasan_populate_pte(kasan_early_shadow_pte, prot);
++
++ for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
++ pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
++ pte_t *ptep = pte_offset_kernel(pmd, k_cur);
+
++ if ((pte_val(*ptep) & PTE_RPN_MASK) != pa)
++ continue;
++
++ __set_pte_at(&init_mm, k_cur, ptep, pfn_pte(PHYS_PFN(pa), prot), 0);
++ }
+ flush_tlb_kernel_range(KASAN_SHADOW_START, KASAN_SHADOW_END);
+ }
+
+@@ -137,7 +168,11 @@ void __init kasan_init(void)
+ #ifdef CONFIG_MODULES
+ void *module_alloc(unsigned long size)
+ {
+- void *base = vmalloc_exec(size);
++ void *base;
++
++ base = __vmalloc_node_range(size, MODULE_ALIGN, VMALLOC_START, VMALLOC_END,
++ GFP_KERNEL, PAGE_KERNEL_EXEC, VM_FLUSH_RESET_PERMS,
++ NUMA_NO_NODE, __builtin_return_address(0));
+
+ if (!base)
+ return NULL;
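The kasan hunk above replaces an unconditional pmd_populate_kernel() with the usual race-free population idiom: initialise the new table outside the lock, publish it under init_mm.page_table_lock only if the slot still holds the early shadow placeholder, and free the allocation when another CPU won the race. A condensed userspace sketch of that pattern, with a pthread mutex standing in for the page-table lock (in the kernel, smp_wmb() additionally orders the table initialisation before publication):

    #include <pthread.h>
    #include <stdlib.h>

    static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

    static void populate_slot(void **slot, void *placeholder)
    {
            void *new_pte = calloc(1, 4096);    /* allocate outside the lock */

            if (!new_pte)
                    return;

            pthread_mutex_lock(&table_lock);
            if (*slot == placeholder) {         /* has another populated it? */
                    *slot = new_pte;
                    new_pte = NULL;             /* ownership transferred */
            }
            pthread_mutex_unlock(&table_lock);

            free(new_pte);                      /* no-op if we won the race */
    }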
+diff --git a/arch/powerpc/mm/ptdump/ptdump.c b/arch/powerpc/mm/ptdump/ptdump.c
+index 5d6111a9ee0e..74ff2bff4ea0 100644
+--- a/arch/powerpc/mm/ptdump/ptdump.c
++++ b/arch/powerpc/mm/ptdump/ptdump.c
+@@ -27,7 +27,7 @@
+ #include "ptdump.h"
+
+ #ifdef CONFIG_PPC32
+-#define KERN_VIRT_START PAGE_OFFSET
++#define KERN_VIRT_START 0
+ #endif
+
+ /*
+diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c
+index aba443be7daa..d271accf224b 100644
+--- a/arch/powerpc/platforms/powernv/opal.c
++++ b/arch/powerpc/platforms/powernv/opal.c
+@@ -705,7 +705,10 @@ static ssize_t symbol_map_read(struct file *fp, struct kobject *kobj,
+ bin_attr->size);
+ }
+
+-static BIN_ATTR_RO(symbol_map, 0);
++static struct bin_attribute symbol_map_attr = {
++ .attr = {.name = "symbol_map", .mode = 0400},
++ .read = symbol_map_read
++};
+
+ static void opal_export_symmap(void)
+ {
+@@ -722,10 +725,10 @@ static void opal_export_symmap(void)
+ return;
+
+ /* Setup attributes */
+- bin_attr_symbol_map.private = __va(be64_to_cpu(syms[0]));
+- bin_attr_symbol_map.size = be64_to_cpu(syms[1]);
++ symbol_map_attr.private = __va(be64_to_cpu(syms[0]));
++ symbol_map_attr.size = be64_to_cpu(syms[1]);
+
+- rc = sysfs_create_bin_file(opal_kobj, &bin_attr_symbol_map);
++ rc = sysfs_create_bin_file(opal_kobj, &symbol_map_attr);
+ if (rc)
+ pr_warn("Error %d creating OPAL symbols file\n", rc);
+ }
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda-tce.c b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
+index c75ec37bf0cd..a0b9c0c23ed2 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda-tce.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
+@@ -49,6 +49,9 @@ static __be64 *pnv_alloc_tce_level(int nid, unsigned int shift)
+ return addr;
+ }
+
++static void pnv_pci_ioda2_table_do_free_pages(__be64 *addr,
++ unsigned long size, unsigned int levels);
++
+ static __be64 *pnv_tce(struct iommu_table *tbl, bool user, long idx, bool alloc)
+ {
+ __be64 *tmp = user ? tbl->it_userspace : (__be64 *) tbl->it_base;
+@@ -58,9 +61,9 @@ static __be64 *pnv_tce(struct iommu_table *tbl, bool user, long idx, bool alloc)
+
+ while (level) {
+ int n = (idx & mask) >> (level * shift);
+- unsigned long tce;
++ unsigned long oldtce, tce = be64_to_cpu(READ_ONCE(tmp[n]));
+
+- if (tmp[n] == 0) {
++ if (!tce) {
+ __be64 *tmp2;
+
+ if (!alloc)
+@@ -71,10 +74,15 @@ static __be64 *pnv_tce(struct iommu_table *tbl, bool user, long idx, bool alloc)
+ if (!tmp2)
+ return NULL;
+
+- tmp[n] = cpu_to_be64(__pa(tmp2) |
+- TCE_PCI_READ | TCE_PCI_WRITE);
++ tce = __pa(tmp2) | TCE_PCI_READ | TCE_PCI_WRITE;
++ oldtce = be64_to_cpu(cmpxchg(&tmp[n], 0,
++ cpu_to_be64(tce)));
++ if (oldtce) {
++ pnv_pci_ioda2_table_do_free_pages(tmp2,
++ ilog2(tbl->it_level_size) + 3, 1);
++ tce = oldtce;
++ }
+ }
+- tce = be64_to_cpu(tmp[n]);
+
+ tmp = __va(tce & ~(TCE_PCI_READ | TCE_PCI_WRITE));
+ idx &= ~mask;
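The pnv_tce() change above is the lock-free variant of the same idea: a cmpxchg() publishes the freshly allocated mid-level TCE table, and the loser frees its copy and continues with the winner's. A self-contained C11 sketch of the pattern:

    #include <stdatomic.h>
    #include <stdlib.h>

    static void *get_or_alloc_level(_Atomic(void *) *entry)
    {
            void *cur = atomic_load(entry);

            if (!cur) {
                    void *fresh = calloc(1, 4096);
                    void *expected = NULL;

                    if (!fresh)
                            return NULL;
                    if (atomic_compare_exchange_strong(entry, &expected, fresh))
                            cur = fresh;        /* we won the race */
                    else {
                            free(fresh);        /* a concurrent walker won */
                            cur = expected;     /* use its table instead */
                    }
            }
            return cur;
    }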
+diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
+index 09bb878c21e0..4f76e5f30c97 100644
+--- a/arch/powerpc/platforms/pseries/lpar.c
++++ b/arch/powerpc/platforms/pseries/lpar.c
+@@ -1413,7 +1413,10 @@ static int pseries_lpar_resize_hpt_commit(void *data)
+ return 0;
+ }
+
+-/* Must be called in user context */
++/*
++ * Must be called in process context. The caller must hold the
++ * cpus_lock.
++ */
+ static int pseries_lpar_resize_hpt(unsigned long shift)
+ {
+ struct hpt_resize_state state = {
+@@ -1467,7 +1470,8 @@ static int pseries_lpar_resize_hpt(unsigned long shift)
+
+ t1 = ktime_get();
+
+- rc = stop_machine(pseries_lpar_resize_hpt_commit, &state, NULL);
++ rc = stop_machine_cpuslocked(pseries_lpar_resize_hpt_commit,
++ &state, NULL);
+
+ t2 = ktime_get();
+
+diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
+index 1cdb39575eae..be86fce1a84e 100644
+--- a/arch/powerpc/sysdev/xive/common.c
++++ b/arch/powerpc/sysdev/xive/common.c
+@@ -135,7 +135,7 @@ static u32 xive_read_eq(struct xive_q *q, bool just_peek)
+ static u32 xive_scan_interrupts(struct xive_cpu *xc, bool just_peek)
+ {
+ u32 irq = 0;
+- u8 prio;
++ u8 prio = 0;
+
+ /* Find highest pending priority */
+ while (xc->pending_prio != 0) {
+@@ -148,8 +148,19 @@ static u32 xive_scan_interrupts(struct xive_cpu *xc, bool just_peek)
+ irq = xive_read_eq(&xc->queue[prio], just_peek);
+
+ /* Found something? That's it */
+- if (irq)
+- break;
++ if (irq) {
++ if (just_peek || irq_to_desc(irq))
++ break;
++ /*
++ * We should never get here; if we do then we must
++ * have failed to synchronize the interrupt properly
++ * when shutting it down.
++ */
++ pr_crit("xive: got interrupt %d without descriptor, dropping\n",
++ irq);
++ WARN_ON(1);
++ continue;
++ }
+
+ /* Clear pending bits */
+ xc->pending_prio &= ~(1 << prio);
+@@ -307,6 +318,7 @@ static void xive_do_queue_eoi(struct xive_cpu *xc)
+ */
+ static void xive_do_source_eoi(u32 hw_irq, struct xive_irq_data *xd)
+ {
++ xd->stale_p = false;
+ /* If the XIVE supports the new "store EOI" facility, use it */
+ if (xd->flags & XIVE_IRQ_FLAG_STORE_EOI)
+ xive_esb_write(xd, XIVE_ESB_STORE_EOI, 0);
+@@ -350,7 +362,7 @@ static void xive_do_source_eoi(u32 hw_irq, struct xive_irq_data *xd)
+ }
+ }
+
+-/* irq_chip eoi callback */
++/* irq_chip eoi callback, called with irq descriptor lock held */
+ static void xive_irq_eoi(struct irq_data *d)
+ {
+ struct xive_irq_data *xd = irq_data_get_irq_handler_data(d);
+@@ -366,6 +378,8 @@ static void xive_irq_eoi(struct irq_data *d)
+ if (!irqd_irq_disabled(d) && !irqd_is_forwarded_to_vcpu(d) &&
+ !(xd->flags & XIVE_IRQ_NO_EOI))
+ xive_do_source_eoi(irqd_to_hwirq(d), xd);
++ else
++ xd->stale_p = true;
+
+ /*
+ * Clear saved_p to indicate that it's no longer occupying
+@@ -397,11 +411,16 @@ static void xive_do_source_set_mask(struct xive_irq_data *xd,
+ */
+ if (mask) {
+ val = xive_esb_read(xd, XIVE_ESB_SET_PQ_01);
+- xd->saved_p = !!(val & XIVE_ESB_VAL_P);
+- } else if (xd->saved_p)
++ if (!xd->stale_p && !!(val & XIVE_ESB_VAL_P))
++ xd->saved_p = true;
++ xd->stale_p = false;
++ } else if (xd->saved_p) {
+ xive_esb_read(xd, XIVE_ESB_SET_PQ_10);
+- else
++ xd->saved_p = false;
++ } else {
+ xive_esb_read(xd, XIVE_ESB_SET_PQ_00);
++ xd->stale_p = false;
++ }
+ }
+
+ /*
+@@ -541,6 +560,8 @@ static unsigned int xive_irq_startup(struct irq_data *d)
+ unsigned int hw_irq = (unsigned int)irqd_to_hwirq(d);
+ int target, rc;
+
++ xd->saved_p = false;
++ xd->stale_p = false;
+ pr_devel("xive_irq_startup: irq %d [0x%x] data @%p\n",
+ d->irq, hw_irq, d);
+
+@@ -587,6 +608,7 @@ static unsigned int xive_irq_startup(struct irq_data *d)
+ return 0;
+ }
+
++/* called with irq descriptor lock held */
+ static void xive_irq_shutdown(struct irq_data *d)
+ {
+ struct xive_irq_data *xd = irq_data_get_irq_handler_data(d);
+@@ -601,16 +623,6 @@ static void xive_irq_shutdown(struct irq_data *d)
+ /* Mask the interrupt at the source */
+ xive_do_source_set_mask(xd, true);
+
+- /*
+- * The above may have set saved_p. We clear it otherwise it
+- * will prevent re-enabling later on. It is ok to forget the
+- * fact that the interrupt might be in a queue because we are
+- * accounting that already in xive_dec_target_count() and will
+- * be re-routing it to a new queue with proper accounting when
+- * it's started up again
+- */
+- xd->saved_p = false;
+-
+ /*
+ * Mask the interrupt in HW in the IVT/EAS and set the number
+ * to be the "bad" IRQ number
+@@ -797,6 +809,10 @@ static int xive_irq_retrigger(struct irq_data *d)
+ return 1;
+ }
+
++/*
++ * Caller holds the irq descriptor lock, so this won't be called
++ * concurrently with xive_get_irqchip_state on the same interrupt.
++ */
+ static int xive_irq_set_vcpu_affinity(struct irq_data *d, void *state)
+ {
+ struct xive_irq_data *xd = irq_data_get_irq_handler_data(d);
+@@ -820,6 +836,10 @@ static int xive_irq_set_vcpu_affinity(struct irq_data *d, void *state)
+
+ /* Set it to PQ=10 state to prevent further sends */
+ pq = xive_esb_read(xd, XIVE_ESB_SET_PQ_10);
++ if (!xd->stale_p) {
++ xd->saved_p = !!(pq & XIVE_ESB_VAL_P);
++ xd->stale_p = !xd->saved_p;
++ }
+
+ /* No target? Nothing to do */
+ if (xd->target == XIVE_INVALID_TARGET) {
+@@ -827,7 +847,7 @@ static int xive_irq_set_vcpu_affinity(struct irq_data *d, void *state)
+ * An untargeted interrupt should also have been
+ * masked at the source
+ */
+- WARN_ON(pq & 2);
++ WARN_ON(xd->saved_p);
+
+ return 0;
+ }
+@@ -847,9 +867,8 @@ static int xive_irq_set_vcpu_affinity(struct irq_data *d, void *state)
+ * This saved_p is cleared by the host EOI, when we know
+ * for sure the queue slot is no longer in use.
+ */
+- if (pq & 2) {
+- pq = xive_esb_read(xd, XIVE_ESB_SET_PQ_11);
+- xd->saved_p = true;
++ if (xd->saved_p) {
++ xive_esb_read(xd, XIVE_ESB_SET_PQ_11);
+
+ /*
+ * Sync the XIVE source HW to ensure the interrupt
+@@ -862,8 +881,7 @@ static int xive_irq_set_vcpu_affinity(struct irq_data *d, void *state)
+ */
+ if (xive_ops->sync_source)
+ xive_ops->sync_source(hw_irq);
+- } else
+- xd->saved_p = false;
++ }
+ } else {
+ irqd_clr_forwarded_to_vcpu(d);
+
+@@ -914,6 +932,23 @@ static int xive_irq_set_vcpu_affinity(struct irq_data *d, void *state)
+ return 0;
+ }
+
++/* Called with irq descriptor lock held. */
++static int xive_get_irqchip_state(struct irq_data *data,
++ enum irqchip_irq_state which, bool *state)
++{
++ struct xive_irq_data *xd = irq_data_get_irq_handler_data(data);
++
++ switch (which) {
++ case IRQCHIP_STATE_ACTIVE:
++ *state = !xd->stale_p &&
++ (xd->saved_p ||
++ !!(xive_esb_read(xd, XIVE_ESB_GET) & XIVE_ESB_VAL_P));
++ return 0;
++ default:
++ return -EINVAL;
++ }
++}
++
+ static struct irq_chip xive_irq_chip = {
+ .name = "XIVE-IRQ",
+ .irq_startup = xive_irq_startup,
+@@ -925,6 +960,7 @@ static struct irq_chip xive_irq_chip = {
+ .irq_set_type = xive_irq_set_type,
+ .irq_retrigger = xive_irq_retrigger,
+ .irq_set_vcpu_affinity = xive_irq_set_vcpu_affinity,
++ .irq_get_irqchip_state = xive_get_irqchip_state,
+ };
+
+ bool is_xive_irq(struct irq_chip *chip)
+@@ -1337,6 +1373,11 @@ static void xive_flush_cpu_queue(unsigned int cpu, struct xive_cpu *xc)
+ raw_spin_lock(&desc->lock);
+ xd = irq_desc_get_handler_data(desc);
+
++ /*
++ * Clear saved_p to indicate that it's no longer pending
++ */
++ xd->saved_p = false;
++
+ /*
+ * For LSIs, we EOI, this will cause a resend if it's
+ * still asserted. Otherwise do an MSI retrigger.
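To summarise the bookkeeping this file now carries: saved_p records that an occurrence of the interrupt is known to sit in a queue, while the new stale_p flag marks the ESB P bit as no longer trustworthy (set on the masked-EOI path, cleared by a real EOI or at startup). The IRQCHIP_STATE_ACTIVE computation added above then reduces to the following model, with hw_p_bit standing in for the XIVE_ESB_GET read:

    #include <stdbool.h>

    struct pq_state {
            bool saved_p;   /* an occurrence is known to be queued */
            bool stale_p;   /* P bit no longer reflects a queued entry */
    };

    static bool irq_is_active(const struct pq_state *xd, bool hw_p_bit)
    {
            return !xd->stale_p && (xd->saved_p || hw_p_bit);
    }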
+diff --git a/arch/powerpc/sysdev/xive/native.c b/arch/powerpc/sysdev/xive/native.c
+index cf156aadefe9..ad8ee7dd7f57 100644
+--- a/arch/powerpc/sysdev/xive/native.c
++++ b/arch/powerpc/sysdev/xive/native.c
+@@ -811,6 +811,13 @@ int xive_native_set_queue_state(u32 vp_id, u32 prio, u32 qtoggle, u32 qindex)
+ }
+ EXPORT_SYMBOL_GPL(xive_native_set_queue_state);
+
++bool xive_native_has_queue_state_support(void)
++{
++ return opal_check_token(OPAL_XIVE_GET_QUEUE_STATE) &&
++ opal_check_token(OPAL_XIVE_SET_QUEUE_STATE);
++}
++EXPORT_SYMBOL_GPL(xive_native_has_queue_state_support);
++
+ int xive_native_get_vp_state(u32 vp_id, u64 *out_state)
+ {
+ __be64 state;
+diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
+index bc7a56e1ca6f..9b60878a4469 100644
+--- a/arch/riscv/kernel/entry.S
++++ b/arch/riscv/kernel/entry.S
+@@ -166,9 +166,13 @@ ENTRY(handle_exception)
+ move a0, sp /* pt_regs */
+ tail do_IRQ
+ 1:
+- /* Exceptions run with interrupts enabled */
++ /* Exceptions run with interrupts enabled or disabled
++ depending on the state of sstatus.SR_SPIE */
++ andi t0, s1, SR_SPIE
++ beqz t0, 1f
+ csrs sstatus, SR_SIE
+
++1:
+ /* Handle syscalls */
+ li t0, EXC_SYSCALL
+ beq s4, t0, handle_syscall
+diff --git a/arch/s390/kernel/process.c b/arch/s390/kernel/process.c
+index 63873aa6693f..9f2727bf3cbe 100644
+--- a/arch/s390/kernel/process.c
++++ b/arch/s390/kernel/process.c
+@@ -184,20 +184,30 @@ unsigned long get_wchan(struct task_struct *p)
+
+ if (!p || p == current || p->state == TASK_RUNNING || !task_stack_page(p))
+ return 0;
++
++ if (!try_get_task_stack(p))
++ return 0;
++
+ low = task_stack_page(p);
+ high = (struct stack_frame *) task_pt_regs(p);
+ sf = (struct stack_frame *) p->thread.ksp;
+- if (sf <= low || sf > high)
+- return 0;
++ if (sf <= low || sf > high) {
++ return_address = 0;
++ goto out;
++ }
+ for (count = 0; count < 16; count++) {
+ sf = (struct stack_frame *) sf->back_chain;
+- if (sf <= low || sf > high)
+- return 0;
++ if (sf <= low || sf > high) {
++ return_address = 0;
++ goto out;
++ }
+ return_address = sf->gprs[8];
+ if (!in_sched_functions(return_address))
+- return return_address;
++ goto out;
+ }
+- return 0;
++out:
++ put_task_stack(p);
++ return return_address;
+ }
+
+ unsigned long arch_align_stack(unsigned long sp)
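The get_wchan() fix above pins the target task's stack for the duration of the walk, so a concurrent task exit cannot free the pages underneath it, and every early return is funnelled through put_task_stack(). The primitive behind try_get_task_stack() is an increment-if-not-zero on the stack refcount, roughly:

    #include <stdatomic.h>

    struct stack_ref { atomic_int usage; };

    /* sketch of try_get_task_stack(): pin only while still alive */
    static int try_get_stack(struct stack_ref *s)
    {
            int old = atomic_load(&s->usage);

            while (old > 0)
                    if (atomic_compare_exchange_weak(&s->usage, &old, old + 1))
                            return 1;   /* pinned */
            return 0;                   /* already freed, do not touch it */
    }

    static void put_stack(struct stack_ref *s)
    {
            atomic_fetch_sub(&s->usage, 1);
    }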
+diff --git a/arch/s390/kernel/topology.c b/arch/s390/kernel/topology.c
+index 2db6fb405a9a..3627953007ed 100644
+--- a/arch/s390/kernel/topology.c
++++ b/arch/s390/kernel/topology.c
+@@ -311,7 +311,8 @@ int arch_update_cpu_topology(void)
+ on_each_cpu(__arch_update_dedicated_flag, NULL, 0);
+ for_each_online_cpu(cpu) {
+ dev = get_cpu_device(cpu);
+- kobject_uevent(&dev->kobj, KOBJ_CHANGE);
++ if (dev)
++ kobject_uevent(&dev->kobj, KOBJ_CHANGE);
+ }
+ return rc;
+ }
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 39cff07bf2eb..7d955dbf9e6d 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -332,7 +332,7 @@ static inline int plo_test_bit(unsigned char nr)
+ return cc == 0;
+ }
+
+-static inline void __insn32_query(unsigned int opcode, u8 query[32])
++static inline void __insn32_query(unsigned int opcode, u8 *query)
+ {
+ register unsigned long r0 asm("0") = 0; /* query function */
+ register unsigned long r1 asm("1") = (unsigned long) query;
+@@ -340,9 +340,9 @@ static inline void __insn32_query(unsigned int opcode, u8 query[32])
+ asm volatile(
+ /* Parameter regs are ignored */
+ " .insn rrf,%[opc] << 16,2,4,6,0\n"
+- : "=m" (*query)
++ :
+ : "d" (r0), "a" (r1), [opc] "i" (opcode)
+- : "cc");
++ : "cc", "memory");
+ }
+
+ #define INSN_SORTL 0xb938
+@@ -4257,7 +4257,7 @@ static long kvm_s390_guest_mem_op(struct kvm_vcpu *vcpu,
+ const u64 supported_flags = KVM_S390_MEMOP_F_INJECT_EXCEPTION
+ | KVM_S390_MEMOP_F_CHECK_ONLY;
+
+- if (mop->flags & ~supported_flags)
++ if (mop->flags & ~supported_flags || mop->ar >= NUM_ACRS || !mop->size)
+ return -EINVAL;
+
+ if (mop->size > MEM_OP_MAX_SIZE)
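The __insn32_query() change above is an inline-asm contract fix: "=m" (*query) only told the compiler that the first element behind the pointer may be written, while the instruction actually fills a 32-byte buffer addressed through a register. The reliable way to say "this asm writes through a pointer it received in a register" is a "memory" clobber, as in this generic sketch (the real opcode is elided):

    static inline void insn_query(unsigned char *buf)
    {
            asm volatile(""                 /* real opcode would go here */
                         :                  /* no output operands */
                         : "r" (buf)        /* address passed in a register */
                         : "cc", "memory"); /* assume *buf (and more) changed */
    }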
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index a3cba321b5c5..61aa9421e27a 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -2584,7 +2584,7 @@ static int nested_check_vm_entry_controls(struct kvm_vcpu *vcpu,
+
+ /* VM-entry exception error code */
+ if (has_error_code &&
+- vmcs12->vm_entry_exception_error_code & GENMASK(31, 15))
++ vmcs12->vm_entry_exception_error_code & GENMASK(31, 16))
+ return -EINVAL;
+
+ /* VM-entry interruption-info field: reserved bits */
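The reserved-bit fix above corrects an off-by-one: the VM-entry exception error code is architecturally 16 bits wide, so bits 31:16 are reserved, not 31:15. The two masks differ only in bit 15, which a quick userspace rendition of GENMASK() makes concrete:

    #include <assert.h>
    #include <stdint.h>

    /* userspace rendition of the kernel's GENMASK() for u32 values */
    #define GENMASK32(h, l) \
            ((~UINT32_C(0) << (l)) & (~UINT32_C(0) >> (31 - (h))))

    int main(void)
    {
            assert(GENMASK32(31, 15) == 0xffff8000u); /* old: bit 15 rejected */
            assert(GENMASK32(31, 16) == 0xffff0000u); /* new: bits 15:0 legal */
            return 0;
    }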
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 350adc83eb50..e5ccfb33dbea 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -884,34 +884,42 @@ int kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
+ }
+ EXPORT_SYMBOL_GPL(kvm_set_xcr);
+
+-int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
++static int kvm_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ {
+- unsigned long old_cr4 = kvm_read_cr4(vcpu);
+- unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE |
+- X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_PKE;
+-
+ if (cr4 & CR4_RESERVED_BITS)
+- return 1;
++ return -EINVAL;
+
+ if (!guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) && (cr4 & X86_CR4_OSXSAVE))
+- return 1;
++ return -EINVAL;
+
+ if (!guest_cpuid_has(vcpu, X86_FEATURE_SMEP) && (cr4 & X86_CR4_SMEP))
+- return 1;
++ return -EINVAL;
+
+ if (!guest_cpuid_has(vcpu, X86_FEATURE_SMAP) && (cr4 & X86_CR4_SMAP))
+- return 1;
++ return -EINVAL;
+
+ if (!guest_cpuid_has(vcpu, X86_FEATURE_FSGSBASE) && (cr4 & X86_CR4_FSGSBASE))
+- return 1;
++ return -EINVAL;
+
+ if (!guest_cpuid_has(vcpu, X86_FEATURE_PKU) && (cr4 & X86_CR4_PKE))
+- return 1;
++ return -EINVAL;
+
+ if (!guest_cpuid_has(vcpu, X86_FEATURE_LA57) && (cr4 & X86_CR4_LA57))
+- return 1;
++ return -EINVAL;
+
+ if (!guest_cpuid_has(vcpu, X86_FEATURE_UMIP) && (cr4 & X86_CR4_UMIP))
++ return -EINVAL;
++
++ return 0;
++}
++
++int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
++{
++ unsigned long old_cr4 = kvm_read_cr4(vcpu);
++ unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE |
++ X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_PKE;
++
++ if (kvm_valid_cr4(vcpu, cr4))
+ return 1;
+
+ if (is_long_mode(vcpu)) {
+@@ -8598,10 +8606,6 @@ EXPORT_SYMBOL_GPL(kvm_task_switch);
+
+ static int kvm_valid_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
+ {
+- if (!guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
+- (sregs->cr4 & X86_CR4_OSXSAVE))
+- return -EINVAL;
+-
+ if ((sregs->efer & EFER_LME) && (sregs->cr0 & X86_CR0_PG)) {
+ /*
+ * When EFER.LME and CR0.PG are set, the processor is in
+@@ -8620,7 +8624,7 @@ static int kvm_valid_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
+ return -EINVAL;
+ }
+
+- return 0;
++ return kvm_valid_cr4(vcpu, sregs->cr4);
+ }
+
+ static int __set_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
+diff --git a/arch/x86/purgatory/Makefile b/arch/x86/purgatory/Makefile
+index 10fb42da0007..b81b5172cf99 100644
+--- a/arch/x86/purgatory/Makefile
++++ b/arch/x86/purgatory/Makefile
+@@ -23,6 +23,7 @@ KCOV_INSTRUMENT := n
+
+ PURGATORY_CFLAGS_REMOVE := -mcmodel=kernel
+ PURGATORY_CFLAGS := -mcmodel=large -ffreestanding -fno-zero-initialized-in-bss
++PURGATORY_CFLAGS += $(DISABLE_STACKLEAK_PLUGIN)
+
+ # Default KBUILD_CFLAGS can have -pg option set when FTRACE is enabled. That
+ # in turn leaves some undefined symbols like __fentry__ in purgatory and not
+diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
+index c9d183d6c499..ca22afd47b3d 100644
+--- a/block/blk-mq-sched.c
++++ b/block/blk-mq-sched.c
+@@ -555,8 +555,6 @@ void blk_mq_sched_free_requests(struct request_queue *q)
+ struct blk_mq_hw_ctx *hctx;
+ int i;
+
+- lockdep_assert_held(&q->sysfs_lock);
+-
+ queue_for_each_hw_ctx(q, hctx, i) {
+ if (hctx->sched_tags)
+ blk_mq_free_rqs(q->tag_set, hctx->sched_tags, i);
+diff --git a/block/blk.h b/block/blk.h
+index d5edfd73d45e..0685c45e3d96 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -201,6 +201,8 @@ void elv_unregister_queue(struct request_queue *q);
+ static inline void elevator_exit(struct request_queue *q,
+ struct elevator_queue *e)
+ {
++ lockdep_assert_held(&q->sysfs_lock);
++
+ blk_mq_sched_free_requests(q);
+ __elevator_exit(q, e);
+ }
+diff --git a/crypto/skcipher.c b/crypto/skcipher.c
+index 5d836fc3df3e..22753c1c7202 100644
+--- a/crypto/skcipher.c
++++ b/crypto/skcipher.c
+@@ -90,7 +90,7 @@ static inline u8 *skcipher_get_spot(u8 *start, unsigned int len)
+ return max(start, end_page);
+ }
+
+-static void skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
++static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
+ {
+ u8 *addr;
+
+@@ -98,19 +98,21 @@ static void skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
+ addr = skcipher_get_spot(addr, bsize);
+ scatterwalk_copychunks(addr, &walk->out, bsize,
+ (walk->flags & SKCIPHER_WALK_PHYS) ? 2 : 1);
++ return 0;
+ }
+
+ int skcipher_walk_done(struct skcipher_walk *walk, int err)
+ {
+- unsigned int n; /* bytes processed */
+- bool more;
++ unsigned int n = walk->nbytes;
++ unsigned int nbytes = 0;
+
+- if (unlikely(err < 0))
++ if (!n)
+ goto finish;
+
+- n = walk->nbytes - err;
+- walk->total -= n;
+- more = (walk->total != 0);
++ if (likely(err >= 0)) {
++ n -= err;
++ nbytes = walk->total - n;
++ }
+
+ if (likely(!(walk->flags & (SKCIPHER_WALK_PHYS |
+ SKCIPHER_WALK_SLOW |
+@@ -126,7 +128,7 @@ unmap_src:
+ memcpy(walk->dst.virt.addr, walk->page, n);
+ skcipher_unmap_dst(walk);
+ } else if (unlikely(walk->flags & SKCIPHER_WALK_SLOW)) {
+- if (err) {
++ if (err > 0) {
+ /*
+ * Didn't process all bytes. Either the algorithm is
+ * broken, or this was the last step and it turned out
+ * the message wasn't evenly divisible into blocks but
+@@ -134,27 +136,29 @@ unmap_src:
+ * the algorithm requires it.
+ */
+ err = -EINVAL;
+- goto finish;
+- }
+- skcipher_done_slow(walk, n);
+- goto already_advanced;
++ nbytes = 0;
++ } else
++ n = skcipher_done_slow(walk, n);
+ }
+
++ if (err > 0)
++ err = 0;
++
++ walk->total = nbytes;
++ walk->nbytes = 0;
++
+ scatterwalk_advance(&walk->in, n);
+ scatterwalk_advance(&walk->out, n);
+-already_advanced:
+- scatterwalk_done(&walk->in, 0, more);
+- scatterwalk_done(&walk->out, 1, more);
++ scatterwalk_done(&walk->in, 0, nbytes);
++ scatterwalk_done(&walk->out, 1, nbytes);
+
+- if (more) {
++ if (nbytes) {
+ crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
+ CRYPTO_TFM_REQ_MAY_SLEEP : 0);
+ return skcipher_walk_next(walk);
+ }
+- err = 0;
+-finish:
+- walk->nbytes = 0;
+
++finish:
+ /* Short-circuit for the common/fast path. */
+ if (!((unsigned long)walk->buffer | (unsigned long)walk->page))
+ goto out;
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index a69a90ad9208..0b727f7432f9 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -108,6 +108,7 @@ struct nbd_device {
+ struct nbd_config *config;
+ struct mutex config_lock;
+ struct gendisk *disk;
++ struct workqueue_struct *recv_workq;
+
+ struct list_head list;
+ struct task_struct *task_recv;
+@@ -138,7 +139,6 @@ static struct dentry *nbd_dbg_dir;
+
+ static unsigned int nbds_max = 16;
+ static int max_part = 16;
+-static struct workqueue_struct *recv_workqueue;
+ static int part_shift;
+
+ static int nbd_dev_dbg_init(struct nbd_device *nbd);
+@@ -1038,7 +1038,7 @@ static int nbd_reconnect_socket(struct nbd_device *nbd, unsigned long arg)
+ /* We take the tx_mutex in an error path in the recv_work, so we
+ * need to queue_work outside of the tx_mutex.
+ */
+- queue_work(recv_workqueue, &args->work);
++ queue_work(nbd->recv_workq, &args->work);
+
+ atomic_inc(&config->live_connections);
+ wake_up(&config->conn_wait);
+@@ -1139,6 +1139,10 @@ static void nbd_config_put(struct nbd_device *nbd)
+ kfree(nbd->config);
+ nbd->config = NULL;
+
++ if (nbd->recv_workq)
++ destroy_workqueue(nbd->recv_workq);
++ nbd->recv_workq = NULL;
++
+ nbd->tag_set.timeout = 0;
+ nbd->disk->queue->limits.discard_granularity = 0;
+ nbd->disk->queue->limits.discard_alignment = 0;
+@@ -1167,6 +1171,14 @@ static int nbd_start_device(struct nbd_device *nbd)
+ return -EINVAL;
+ }
+
++ nbd->recv_workq = alloc_workqueue("knbd%d-recv",
++ WQ_MEM_RECLAIM | WQ_HIGHPRI |
++ WQ_UNBOUND, 0, nbd->index);
++ if (!nbd->recv_workq) {
++ dev_err(disk_to_dev(nbd->disk), "Could not allocate knbd recv work queue.\n");
++ return -ENOMEM;
++ }
++
+ blk_mq_update_nr_hw_queues(&nbd->tag_set, config->num_connections);
+ nbd->task_recv = current;
+
+@@ -1197,7 +1209,7 @@ static int nbd_start_device(struct nbd_device *nbd)
+ INIT_WORK(&args->work, recv_work);
+ args->nbd = nbd;
+ args->index = i;
+- queue_work(recv_workqueue, &args->work);
++ queue_work(nbd->recv_workq, &args->work);
+ }
+ nbd_size_update(nbd);
+ return error;
+@@ -1217,8 +1229,10 @@ static int nbd_start_device_ioctl(struct nbd_device *nbd, struct block_device *b
+ mutex_unlock(&nbd->config_lock);
+ ret = wait_event_interruptible(config->recv_wq,
+ atomic_read(&config->recv_threads) == 0);
+- if (ret)
++ if (ret) {
+ sock_shutdown(nbd);
++ flush_workqueue(nbd->recv_workq);
++ }
+ mutex_lock(&nbd->config_lock);
+ nbd_bdev_reset(bdev);
+ /* user requested, ignore socket errors */
+@@ -1877,6 +1891,12 @@ static void nbd_disconnect_and_put(struct nbd_device *nbd)
+ nbd_disconnect(nbd);
+ nbd_clear_sock(nbd);
+ mutex_unlock(&nbd->config_lock);
++ /*
++ * Make sure recv thread has finished, so it does not drop the last
++ * config ref and try to destroy the workqueue from inside the work
++ * queue.
++ */
++ flush_workqueue(nbd->recv_workq);
+ if (test_and_clear_bit(NBD_HAS_CONFIG_REF,
+ &nbd->config->runtime_flags))
+ nbd_config_put(nbd);
+@@ -2263,20 +2283,12 @@ static int __init nbd_init(void)
+
+ if (nbds_max > 1UL << (MINORBITS - part_shift))
+ return -EINVAL;
+- recv_workqueue = alloc_workqueue("knbd-recv",
+- WQ_MEM_RECLAIM | WQ_HIGHPRI |
+- WQ_UNBOUND, 0);
+- if (!recv_workqueue)
+- return -ENOMEM;
+
+- if (register_blkdev(NBD_MAJOR, "nbd")) {
+- destroy_workqueue(recv_workqueue);
++ if (register_blkdev(NBD_MAJOR, "nbd"))
+ return -EIO;
+- }
+
+ if (genl_register_family(&nbd_genl_family)) {
+ unregister_blkdev(NBD_MAJOR, "nbd");
+- destroy_workqueue(recv_workqueue);
+ return -EINVAL;
+ }
+ nbd_dbg_init();
+@@ -2318,7 +2330,6 @@ static void __exit nbd_cleanup(void)
+
+ idr_destroy(&nbd_index_idr);
+ genl_unregister_family(&nbd_genl_family);
+- destroy_workqueue(recv_workqueue);
+ unregister_blkdev(NBD_MAJOR, "nbd");
+ }
+
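The point of the per-device knbd%d-recv queue introduced above: destroy_workqueue() waits for queued work, and a work item must never tear down the queue it is running on, so the old global queue could not be flushed safely from a recv-work error path. With one queue per device, teardown reduces to the ordering below, condensed from the hunks above into a kernel-style sketch:

    /* sketch: safe teardown order for the per-device recv queue */
    static void nbd_recv_teardown(struct nbd_device *nbd)
    {
            sock_shutdown(nbd);                 /* 1. stop new submissions */
            flush_workqueue(nbd->recv_workq);   /* 2. wait for in-flight work;
                                                      we never run on that queue
                                                      ourselves */
            destroy_workqueue(nbd->recv_workq); /* 3. queue now unreachable */
            nbd->recv_workq = NULL;
    }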
+diff --git a/drivers/crypto/caam/caamalg_desc.c b/drivers/crypto/caam/caamalg_desc.c
+index 72531837571e..28ecef7a481c 100644
+--- a/drivers/crypto/caam/caamalg_desc.c
++++ b/drivers/crypto/caam/caamalg_desc.c
+@@ -503,6 +503,7 @@ void cnstr_shdsc_aead_givencap(u32 * const desc, struct alginfo *cdata,
+ const bool is_qi, int era)
+ {
+ u32 geniv, moveiv;
++ u32 *wait_cmd;
+
+ /* Note: Context registers are saved. */
+ init_sh_desc_key_aead(desc, cdata, adata, is_rfc3686, nonce, era);
+@@ -598,6 +599,14 @@ copy_iv:
+
+ /* Will read cryptlen */
+ append_math_add(desc, VARSEQINLEN, SEQINLEN, REG0, CAAM_CMD_SZ);
++
++ /*
++ * Wait for IV transfer (ofifo -> class2) to finish before starting
++ * ciphertext transfer (ofifo -> external memory).
++ */
++ wait_cmd = append_jump(desc, JUMP_JSL | JUMP_TEST_ALL | JUMP_COND_NIFP);
++ set_jump_tgt_here(desc, wait_cmd);
++
+ append_seq_fifo_load(desc, 0, FIFOLD_CLASS_BOTH | KEY_VLF |
+ FIFOLD_TYPE_MSG1OUT2 | FIFOLD_TYPE_LASTBOTH);
+ append_seq_fifo_store(desc, 0, FIFOST_TYPE_MESSAGE_DATA | KEY_VLF);
+diff --git a/drivers/crypto/caam/caamalg_desc.h b/drivers/crypto/caam/caamalg_desc.h
+index da4a4ee60c80..706007624d82 100644
+--- a/drivers/crypto/caam/caamalg_desc.h
++++ b/drivers/crypto/caam/caamalg_desc.h
+@@ -12,7 +12,7 @@
+ #define DESC_AEAD_BASE (4 * CAAM_CMD_SZ)
+ #define DESC_AEAD_ENC_LEN (DESC_AEAD_BASE + 11 * CAAM_CMD_SZ)
+ #define DESC_AEAD_DEC_LEN (DESC_AEAD_BASE + 15 * CAAM_CMD_SZ)
+-#define DESC_AEAD_GIVENC_LEN (DESC_AEAD_ENC_LEN + 7 * CAAM_CMD_SZ)
++#define DESC_AEAD_GIVENC_LEN (DESC_AEAD_ENC_LEN + 8 * CAAM_CMD_SZ)
+ #define DESC_QI_AEAD_ENC_LEN (DESC_AEAD_ENC_LEN + 3 * CAAM_CMD_SZ)
+ #define DESC_QI_AEAD_DEC_LEN (DESC_AEAD_DEC_LEN + 3 * CAAM_CMD_SZ)
+ #define DESC_QI_AEAD_GIVENC_LEN (DESC_AEAD_GIVENC_LEN + 3 * CAAM_CMD_SZ)
+diff --git a/drivers/crypto/caam/error.c b/drivers/crypto/caam/error.c
+index 4f0d45865aa2..95da6ae43482 100644
+--- a/drivers/crypto/caam/error.c
++++ b/drivers/crypto/caam/error.c
+@@ -118,6 +118,7 @@ static const struct {
+ u8 value;
+ const char *error_text;
+ } qi_error_list[] = {
++ { 0x00, "No error" },
+ { 0x1F, "Job terminated by FQ or ICID flush" },
+ { 0x20, "FD format error"},
+ { 0x21, "FD command format error"},
+diff --git a/drivers/crypto/caam/qi.c b/drivers/crypto/caam/qi.c
+index 0fe618e3804a..19a378bdf331 100644
+--- a/drivers/crypto/caam/qi.c
++++ b/drivers/crypto/caam/qi.c
+@@ -163,7 +163,10 @@ static void caam_fq_ern_cb(struct qman_portal *qm, struct qman_fq *fq,
+ dma_unmap_single(drv_req->drv_ctx->qidev, qm_fd_addr(fd),
+ sizeof(drv_req->fd_sgt), DMA_BIDIRECTIONAL);
+
+- drv_req->cbk(drv_req, -EIO);
++ if (fd->status)
++ drv_req->cbk(drv_req, be32_to_cpu(fd->status));
++ else
++ drv_req->cbk(drv_req, JRSTA_SSRC_QI);
+ }
+
+ static struct qman_fq *create_caam_req_fq(struct device *qidev,
+diff --git a/drivers/crypto/caam/regs.h b/drivers/crypto/caam/regs.h
+index 8591914d5c51..7c7ea8af6a48 100644
+--- a/drivers/crypto/caam/regs.h
++++ b/drivers/crypto/caam/regs.h
+@@ -641,6 +641,7 @@ struct caam_job_ring {
+ #define JRSTA_SSRC_CCB_ERROR 0x20000000
+ #define JRSTA_SSRC_JUMP_HALT_USER 0x30000000
+ #define JRSTA_SSRC_DECO 0x40000000
++#define JRSTA_SSRC_QI 0x50000000
+ #define JRSTA_SSRC_JRERROR 0x60000000
+ #define JRSTA_SSRC_JUMP_HALT_CC 0x70000000
+
+diff --git a/drivers/crypto/cavium/zip/zip_main.c b/drivers/crypto/cavium/zip/zip_main.c
+index a8447a3cf366..194624b4855b 100644
+--- a/drivers/crypto/cavium/zip/zip_main.c
++++ b/drivers/crypto/cavium/zip/zip_main.c
+@@ -593,6 +593,7 @@ static const struct file_operations zip_stats_fops = {
+ .owner = THIS_MODULE,
+ .open = zip_stats_open,
+ .read = seq_read,
++ .release = single_release,
+ };
+
+ static int zip_clear_open(struct inode *inode, struct file *file)
+@@ -604,6 +605,7 @@ static const struct file_operations zip_clear_fops = {
+ .owner = THIS_MODULE,
+ .open = zip_clear_open,
+ .read = seq_read,
++ .release = single_release,
+ };
+
+ static int zip_regs_open(struct inode *inode, struct file *file)
+@@ -615,6 +617,7 @@ static const struct file_operations zip_regs_fops = {
+ .owner = THIS_MODULE,
+ .open = zip_regs_open,
+ .read = seq_read,
++ .release = single_release,
+ };
+
+ /* Root directory for thunderx_zip debugfs entry */
+diff --git a/drivers/crypto/ccree/cc_aead.c b/drivers/crypto/ccree/cc_aead.c
+index 7aa4cbe19a86..29bf397cf0c1 100644
+--- a/drivers/crypto/ccree/cc_aead.c
++++ b/drivers/crypto/ccree/cc_aead.c
+@@ -236,7 +236,7 @@ static void cc_aead_complete(struct device *dev, void *cc_req, int err)
+ /* In case of payload authentication failure, the decrypted
+ * message MUST NOT be revealed --> zero its memory.
+ */
+- cc_zero_sgl(areq->dst, areq_ctx->cryptlen);
++ cc_zero_sgl(areq->dst, areq->cryptlen);
+ err = -EBADMSG;
+ }
+ } else { /*ENCRYPT*/
+diff --git a/drivers/crypto/ccree/cc_fips.c b/drivers/crypto/ccree/cc_fips.c
+index 5ad3ffb7acaa..040e09c0e1af 100644
+--- a/drivers/crypto/ccree/cc_fips.c
++++ b/drivers/crypto/ccree/cc_fips.c
+@@ -21,7 +21,13 @@ static bool cc_get_tee_fips_status(struct cc_drvdata *drvdata)
+ u32 reg;
+
+ reg = cc_ioread(drvdata, CC_REG(GPR_HOST));
+- return (reg == (CC_FIPS_SYNC_TEE_STATUS | CC_FIPS_SYNC_MODULE_OK));
++ /* Did the TEE report status? */
++ if (reg & CC_FIPS_SYNC_TEE_STATUS)
++ /* Yes. Is it OK? */
++ return (reg & CC_FIPS_SYNC_MODULE_OK);
++
++ /* No. It's either not in use or will be reported later */
++ return true;
+ }
+
+ /*
+diff --git a/drivers/crypto/qat/qat_common/adf_common_drv.h b/drivers/crypto/qat/qat_common/adf_common_drv.h
+index 5c4c0a253129..d78f8d5c89c3 100644
+--- a/drivers/crypto/qat/qat_common/adf_common_drv.h
++++ b/drivers/crypto/qat/qat_common/adf_common_drv.h
+@@ -95,7 +95,7 @@ struct service_hndl {
+
+ static inline int get_current_node(void)
+ {
+- return topology_physical_package_id(smp_processor_id());
++ return topology_physical_package_id(raw_smp_processor_id());
+ }
+
+ int adf_service_register(struct service_hndl *service);
+diff --git a/drivers/devfreq/tegra-devfreq.c b/drivers/devfreq/tegra-devfreq.c
+index 35c38aad8b4f..cd15c96dd27f 100644
+--- a/drivers/devfreq/tegra-devfreq.c
++++ b/drivers/devfreq/tegra-devfreq.c
+@@ -474,11 +474,11 @@ static int tegra_devfreq_target(struct device *dev, unsigned long *freq,
+ {
+ struct tegra_devfreq *tegra = dev_get_drvdata(dev);
+ struct dev_pm_opp *opp;
+- unsigned long rate = *freq * KHZ;
++ unsigned long rate;
+
+- opp = devfreq_recommended_opp(dev, &rate, flags);
++ opp = devfreq_recommended_opp(dev, freq, flags);
+ if (IS_ERR(opp)) {
+- dev_err(dev, "Failed to find opp for %lu KHz\n", *freq);
++ dev_err(dev, "Failed to find opp for %lu Hz\n", *freq);
+ return PTR_ERR(opp);
+ }
+ rate = dev_pm_opp_get_freq(opp);
+@@ -487,8 +487,6 @@ static int tegra_devfreq_target(struct device *dev, unsigned long *freq,
+ clk_set_min_rate(tegra->emc_clock, rate);
+ clk_set_rate(tegra->emc_clock, 0);
+
+- *freq = rate;
+-
+ return 0;
+ }
+
+@@ -498,7 +496,7 @@ static int tegra_devfreq_get_dev_status(struct device *dev,
+ struct tegra_devfreq *tegra = dev_get_drvdata(dev);
+ struct tegra_devfreq_device *actmon_dev;
+
+- stat->current_frequency = tegra->cur_freq;
++ stat->current_frequency = tegra->cur_freq * KHZ;
+
+ /* To be used by the tegra governor */
+ stat->private_data = tegra;
+@@ -553,7 +551,7 @@ static int tegra_governor_get_target(struct devfreq *devfreq,
+ target_freq = max(target_freq, dev->target_freq);
+ }
+
+- *freq = target_freq;
++ *freq = target_freq * KHZ;
+
+ return 0;
+ }
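The tegra-devfreq hunks above are all unit plumbing: the devfreq core and the OPP layer speak Hz, while the ACTMON driver keeps kHz internally, so every value crossing the boundary must be scaled (the getter now reports cur_freq * KHZ, and the governor scales its target the same way). A two-line reminder of the conversions involved, assuming the driver's existing KHZ multiplier:

    #define KHZ 1000ul

    /* driver-internal kHz <-> devfreq-facing Hz */
    static unsigned long to_devfreq_hz(unsigned long khz) { return khz * KHZ; }
    static unsigned long to_driver_khz(unsigned long hz)  { return hz / KHZ; }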
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+index 7850084a05e3..60655834d649 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+@@ -143,7 +143,8 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
+ /* ring tests don't use a job */
+ if (job) {
+ vm = job->vm;
+- fence_ctx = job->base.s_fence->scheduled.context;
++ fence_ctx = job->base.s_fence ?
++ job->base.s_fence->scheduled.context : 0;
+ } else {
+ vm = NULL;
+ fence_ctx = 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+index 0cf7e8606fd3..00beba533582 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+@@ -662,6 +662,9 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
+ if (sh_num == AMDGPU_INFO_MMR_SH_INDEX_MASK)
+ sh_num = 0xffffffff;
+
++ if (info->read_mmr_reg.count > 128)
++ return -EINVAL;
++
+ regs = kmalloc_array(info->read_mmr_reg.count, sizeof(*regs), GFP_KERNEL);
+ if (!regs)
+ return -ENOMEM;
+diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+index 9aaf2deff6e9..8bf9f541e7fe 100644
+--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
++++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+@@ -532,7 +532,7 @@ static int navi10_get_metrics_table(struct smu_context *smu,
+ struct smu_table_context *smu_table= &smu->smu_table;
+ int ret = 0;
+
+- if (!smu_table->metrics_time || time_after(jiffies, smu_table->metrics_time + HZ / 1000)) {
++ if (!smu_table->metrics_time || time_after(jiffies, smu_table->metrics_time + msecs_to_jiffies(100))) {
+ ret = smu_update_table(smu, SMU_TABLE_SMU_METRICS, 0,
+ (void *)smu_table->metrics_table, false);
+ if (ret) {
+diff --git a/drivers/gpu/drm/arm/malidp_hw.c b/drivers/gpu/drm/arm/malidp_hw.c
+index 50af399d7f6f..380be66d4c6e 100644
+--- a/drivers/gpu/drm/arm/malidp_hw.c
++++ b/drivers/gpu/drm/arm/malidp_hw.c
+@@ -385,6 +385,7 @@ int malidp_format_get_bpp(u32 fmt)
+ switch (fmt) {
+ case DRM_FORMAT_VUY101010:
+ bpp = 30;
++ break;
+ case DRM_FORMAT_YUV420_10BIT:
+ bpp = 15;
+ break;
+@@ -1309,7 +1310,7 @@ static irqreturn_t malidp_se_irq(int irq, void *arg)
+ break;
+ case MW_RESTART:
+ drm_writeback_signal_completion(&malidp->mw_connector, 0);
+- /* fall through to a new start */
++ /* fall through - to a new start */
+ case MW_START:
+ /* writeback started, need to emulate one-shot mode */
+ hw->disable_memwrite(hwdev);
+diff --git a/drivers/gpu/drm/drm_atomic_uapi.c b/drivers/gpu/drm/drm_atomic_uapi.c
+index abe38bdf85ae..15be4667f26f 100644
+--- a/drivers/gpu/drm/drm_atomic_uapi.c
++++ b/drivers/gpu/drm/drm_atomic_uapi.c
+@@ -1301,8 +1301,7 @@ int drm_mode_atomic_ioctl(struct drm_device *dev,
+ if (arg->reserved)
+ return -EINVAL;
+
+- if ((arg->flags & DRM_MODE_PAGE_FLIP_ASYNC) &&
+- !dev->mode_config.async_page_flip)
++ if (arg->flags & DRM_MODE_PAGE_FLIP_ASYNC)
+ return -EINVAL;
+
+ /* can't test and expect an event at the same time. */
+diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
+index bd810454d239..d9de5cf8c09f 100644
+--- a/drivers/gpu/drm/drm_ioctl.c
++++ b/drivers/gpu/drm/drm_ioctl.c
+@@ -336,7 +336,12 @@ drm_setclientcap(struct drm_device *dev, void *data, struct drm_file *file_priv)
+ case DRM_CLIENT_CAP_ATOMIC:
+ if (!drm_core_check_feature(dev, DRIVER_ATOMIC))
+ return -EOPNOTSUPP;
+- if (req->value > 1)
++ /* The modesetting DDX has a totally broken idea of atomic. */
++ if (current->comm[0] == 'X' && req->value == 1) {
++ pr_info("broken atomic modeset userspace detected, disabling atomic\n");
++ return -EOPNOTSUPP;
++ }
++ if (req->value > 2)
+ return -EINVAL;
+ file_priv->atomic = req->value;
+ file_priv->universal_planes = req->value;
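For completeness, the client-side view of this cap: userspace opts in through drmSetClientCap(), and after this change a request for value 1 from a process whose comm starts with 'X' is answered with -EOPNOTSUPP instead of silently granting atomic. A hypothetical libdrm probe that degrades gracefully:

    #include <stdio.h>
    #include <xf86drm.h>

    static int enable_atomic(int fd)
    {
            /* may fail with EOPNOTSUPP, e.g. the Xorg heuristic above */
            if (drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1) == 0)
                    return 0;
            fprintf(stderr, "atomic modesetting unavailable, falling back\n");
            return -1;
    }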
+diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
+index 592b92782fab..1e59a78e74bf 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.c
++++ b/drivers/gpu/drm/i915/display/intel_display.c
+@@ -7132,7 +7132,7 @@ retry:
+ pipe_config->fdi_lanes = lane;
+
+ intel_link_compute_m_n(pipe_config->pipe_bpp, lane, fdi_dotclock,
+- link_bw, &pipe_config->fdi_m_n, false);
++ link_bw, &pipe_config->fdi_m_n, false, false);
+
+ ret = ironlake_check_fdi_lanes(dev, intel_crtc->pipe, pipe_config);
+ if (ret == -EDEADLK)
+@@ -7379,11 +7379,15 @@ void
+ intel_link_compute_m_n(u16 bits_per_pixel, int nlanes,
+ int pixel_clock, int link_clock,
+ struct intel_link_m_n *m_n,
+- bool constant_n)
++ bool constant_n, bool fec_enable)
+ {
+- m_n->tu = 64;
++ u32 data_clock = bits_per_pixel * pixel_clock;
++
++ if (fec_enable)
++ data_clock = intel_dp_mode_to_fec_clock(data_clock);
+
+- compute_m_n(bits_per_pixel * pixel_clock,
++ m_n->tu = 64;
++ compute_m_n(data_clock,
+ link_clock * nlanes * 8,
+ &m_n->gmch_m, &m_n->gmch_n,
+ constant_n);
+diff --git a/drivers/gpu/drm/i915/display/intel_display.h b/drivers/gpu/drm/i915/display/intel_display.h
+index ee6b8194a459..868914c6d9b5 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.h
++++ b/drivers/gpu/drm/i915/display/intel_display.h
+@@ -351,7 +351,7 @@ struct intel_link_m_n {
+ void intel_link_compute_m_n(u16 bpp, int nlanes,
+ int pixel_clock, int link_clock,
+ struct intel_link_m_n *m_n,
+- bool constant_n);
++ bool constant_n, bool fec_enable);
+ bool is_ccs_modifier(u64 modifier);
+ void lpt_disable_clkout_dp(struct drm_i915_private *dev_priv);
+ u32 intel_plane_fb_max_stride(struct drm_i915_private *dev_priv,
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index d0fc34826771..87f4a381dec2 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -76,8 +76,8 @@
+ #define DP_DSC_MAX_ENC_THROUGHPUT_0 340000
+ #define DP_DSC_MAX_ENC_THROUGHPUT_1 400000
+
+-/* DP DSC FEC Overhead factor = (100 - 2.4)/100 */
+-#define DP_DSC_FEC_OVERHEAD_FACTOR 976
++/* DP DSC FEC Overhead factor = 1/(0.972261) */
++#define DP_DSC_FEC_OVERHEAD_FACTOR 972261
+
+ /* Compliance test status bits */
+ #define INTEL_DP_RESOLUTION_SHIFT_MASK 0
+@@ -526,6 +526,97 @@ int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
+ return 0;
+ }
+
++u32 intel_dp_mode_to_fec_clock(u32 mode_clock)
++{
++ return div_u64(mul_u32_u32(mode_clock, 1000000U),
++ DP_DSC_FEC_OVERHEAD_FACTOR);
++}
++
++static u16 intel_dp_dsc_get_output_bpp(u32 link_clock, u32 lane_count,
++ u32 mode_clock, u32 mode_hdisplay)
++{
++ u32 bits_per_pixel, max_bpp_small_joiner_ram;
++ int i;
++
++ /*
++ * Available Link Bandwidth (Kbits/sec) = NumberOfLanes *
++ * LinkSymbolClock * 8 * TimeSlotsPerMTP
++ * for SST -> TimeSlotsPerMTP is 1,
++ * for MST -> TimeSlotsPerMTP has to be calculated
++ */
++ bits_per_pixel = (link_clock * lane_count * 8) /
++ intel_dp_mode_to_fec_clock(mode_clock);
++ DRM_DEBUG_KMS("Max link bpp: %u\n", bits_per_pixel);
++
++ /* Small Joiner Check: output bpp <= joiner RAM (bits) / Horiz. width */
++ max_bpp_small_joiner_ram = DP_DSC_MAX_SMALL_JOINER_RAM_BUFFER / mode_hdisplay;
++ DRM_DEBUG_KMS("Max small joiner bpp: %u\n", max_bpp_small_joiner_ram);
++
++ /*
++ * Greatest allowed DSC BPP = MIN (output BPP from available Link BW
++ * check, output bpp from small joiner RAM check)
++ */
++ bits_per_pixel = min(bits_per_pixel, max_bpp_small_joiner_ram);
++
++ /* Error out if the max bpp is less than smallest allowed valid bpp */
++ if (bits_per_pixel < valid_dsc_bpp[0]) {
++ DRM_DEBUG_KMS("Unsupported BPP %u, min %u\n",
++ bits_per_pixel, valid_dsc_bpp[0]);
++ return 0;
++ }
++
++ /* Find the nearest match in the array of known BPPs from VESA */
++ for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp) - 1; i++) {
++ if (bits_per_pixel < valid_dsc_bpp[i + 1])
++ break;
++ }
++ bits_per_pixel = valid_dsc_bpp[i];
++
++ /*
++ * Compressed BPP in U6.4 format so multiply by 16, for Gen 11,
++ * fractional part is 0
++ */
++ return bits_per_pixel << 4;
++}
++
++static u8 intel_dp_dsc_get_slice_count(struct intel_dp *intel_dp,
++ int mode_clock, int mode_hdisplay)
++{
++ u8 min_slice_count, i;
++ int max_slice_width;
++
++ if (mode_clock <= DP_DSC_PEAK_PIXEL_RATE)
++ min_slice_count = DIV_ROUND_UP(mode_clock,
++ DP_DSC_MAX_ENC_THROUGHPUT_0);
++ else
++ min_slice_count = DIV_ROUND_UP(mode_clock,
++ DP_DSC_MAX_ENC_THROUGHPUT_1);
++
++ max_slice_width = drm_dp_dsc_sink_max_slice_width(intel_dp->dsc_dpcd);
++ if (max_slice_width < DP_DSC_MIN_SLICE_WIDTH_VALUE) {
++ DRM_DEBUG_KMS("Unsupported slice width %d by DP DSC Sink device\n",
++ max_slice_width);
++ return 0;
++ }
++ /* Also take into account max slice width */
++ min_slice_count = min_t(u8, min_slice_count,
++ DIV_ROUND_UP(mode_hdisplay,
++ max_slice_width));
++
++ /* Find the closest match to the valid slice count values */
++ for (i = 0; i < ARRAY_SIZE(valid_dsc_slicecount); i++) {
++ if (valid_dsc_slicecount[i] >
++ drm_dp_dsc_sink_max_slice_count(intel_dp->dsc_dpcd,
++ false))
++ break;
++ if (min_slice_count <= valid_dsc_slicecount[i])
++ return valid_dsc_slicecount[i];
++ }
++
++ DRM_DEBUG_KMS("Unsupported Slice Count %d\n", min_slice_count);
++ return 0;
++}
++
+ static enum drm_mode_status
+ intel_dp_mode_valid(struct drm_connector *connector,
+ struct drm_display_mode *mode)
+@@ -2248,7 +2339,7 @@ intel_dp_compute_config(struct intel_encoder *encoder,
+ adjusted_mode->crtc_clock,
+ pipe_config->port_clock,
+ &pipe_config->dp_m_n,
+- constant_n);
++ constant_n, pipe_config->fec_enable);
+
+ if (intel_connector->panel.downclock_mode != NULL &&
+ dev_priv->drrs.type == SEAMLESS_DRRS_SUPPORT) {
+@@ -2258,7 +2349,7 @@ intel_dp_compute_config(struct intel_encoder *encoder,
+ intel_connector->panel.downclock_mode->clock,
+ pipe_config->port_clock,
+ &pipe_config->dp_m2_n2,
+- constant_n);
++ constant_n, pipe_config->fec_enable);
+ }
+
+ if (!HAS_DDI(dev_priv))
+@@ -4345,91 +4436,6 @@ intel_dp_get_sink_irq_esi(struct intel_dp *intel_dp, u8 *sink_irq_vector)
+ DP_DPRX_ESI_LEN;
+ }
+
+-u16 intel_dp_dsc_get_output_bpp(int link_clock, u8 lane_count,
+- int mode_clock, int mode_hdisplay)
+-{
+- u16 bits_per_pixel, max_bpp_small_joiner_ram;
+- int i;
+-
+- /*
+- * Available Link Bandwidth(Kbits/sec) = (NumberOfLanes)*
+- * (LinkSymbolClock)* 8 * ((100-FECOverhead)/100)*(TimeSlotsPerMTP)
+- * FECOverhead = 2.4%, for SST -> TimeSlotsPerMTP is 1,
+- * for MST -> TimeSlotsPerMTP has to be calculated
+- */
+- bits_per_pixel = (link_clock * lane_count * 8 *
+- DP_DSC_FEC_OVERHEAD_FACTOR) /
+- mode_clock;
+-
+- /* Small Joiner Check: output bpp <= joiner RAM (bits) / Horiz. width */
+- max_bpp_small_joiner_ram = DP_DSC_MAX_SMALL_JOINER_RAM_BUFFER /
+- mode_hdisplay;
+-
+- /*
+- * Greatest allowed DSC BPP = MIN (output BPP from avaialble Link BW
+- * check, output bpp from small joiner RAM check)
+- */
+- bits_per_pixel = min(bits_per_pixel, max_bpp_small_joiner_ram);
+-
+- /* Error out if the max bpp is less than smallest allowed valid bpp */
+- if (bits_per_pixel < valid_dsc_bpp[0]) {
+- DRM_DEBUG_KMS("Unsupported BPP %d\n", bits_per_pixel);
+- return 0;
+- }
+-
+- /* Find the nearest match in the array of known BPPs from VESA */
+- for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp) - 1; i++) {
+- if (bits_per_pixel < valid_dsc_bpp[i + 1])
+- break;
+- }
+- bits_per_pixel = valid_dsc_bpp[i];
+-
+- /*
+- * Compressed BPP in U6.4 format so multiply by 16, for Gen 11,
+- * fractional part is 0
+- */
+- return bits_per_pixel << 4;
+-}
+-
+-u8 intel_dp_dsc_get_slice_count(struct intel_dp *intel_dp,
+- int mode_clock,
+- int mode_hdisplay)
+-{
+- u8 min_slice_count, i;
+- int max_slice_width;
+-
+- if (mode_clock <= DP_DSC_PEAK_PIXEL_RATE)
+- min_slice_count = DIV_ROUND_UP(mode_clock,
+- DP_DSC_MAX_ENC_THROUGHPUT_0);
+- else
+- min_slice_count = DIV_ROUND_UP(mode_clock,
+- DP_DSC_MAX_ENC_THROUGHPUT_1);
+-
+- max_slice_width = drm_dp_dsc_sink_max_slice_width(intel_dp->dsc_dpcd);
+- if (max_slice_width < DP_DSC_MIN_SLICE_WIDTH_VALUE) {
+- DRM_DEBUG_KMS("Unsupported slice width %d by DP DSC Sink device\n",
+- max_slice_width);
+- return 0;
+- }
+- /* Also take into account max slice width */
+- min_slice_count = min_t(u8, min_slice_count,
+- DIV_ROUND_UP(mode_hdisplay,
+- max_slice_width));
+-
+- /* Find the closest match to the valid slice count values */
+- for (i = 0; i < ARRAY_SIZE(valid_dsc_slicecount); i++) {
+- if (valid_dsc_slicecount[i] >
+- drm_dp_dsc_sink_max_slice_count(intel_dp->dsc_dpcd,
+- false))
+- break;
+- if (min_slice_count <= valid_dsc_slicecount[i])
+- return valid_dsc_slicecount[i];
+- }
+-
+- DRM_DEBUG_KMS("Unsupported Slice Count %d\n", min_slice_count);
+- return 0;
+-}
+-
+ static void
+ intel_pixel_encoding_setup_vsc(struct intel_dp *intel_dp,
+ const struct intel_crtc_state *crtc_state)
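On the FEC arithmetic reworked in this file: with FEC enabled only a 0.972261 fraction of the link carries payload (the old code approximated the usable fraction as 976/1000), so converting a mode clock into required link capacity divides by that fraction, i.e. multiplies by roughly 1.0285. A worked example of the new helper as plain C:

    #include <assert.h>
    #include <stdint.h>

    /* userspace rendition of intel_dp_mode_to_fec_clock() */
    static uint64_t mode_to_fec_clock(uint64_t mode_clock)
    {
            return mode_clock * 1000000ull / 972261ull;
    }

    int main(void)
    {
            /* a 594000 kHz mode needs ~2.85% extra effective clock */
            assert(mode_to_fec_clock(594000) == 610947);
            return 0;
    }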
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
+index da70b1a41c83..c00e05894e35 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.h
++++ b/drivers/gpu/drm/i915/display/intel_dp.h
+@@ -102,10 +102,6 @@ bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp);
+ bool intel_dp_source_supports_hbr3(struct intel_dp *intel_dp);
+ bool
+ intel_dp_get_link_status(struct intel_dp *intel_dp, u8 *link_status);
+-u16 intel_dp_dsc_get_output_bpp(int link_clock, u8 lane_count,
+- int mode_clock, int mode_hdisplay);
+-u8 intel_dp_dsc_get_slice_count(struct intel_dp *intel_dp, int mode_clock,
+- int mode_hdisplay);
+
+ bool intel_dp_read_dpcd(struct intel_dp *intel_dp);
+ bool intel_dp_get_colorimetry_status(struct intel_dp *intel_dp);
+@@ -120,4 +116,6 @@ static inline unsigned int intel_dp_unused_lane_mask(int lane_count)
+ return ~((1 << lane_count) - 1) & 0xf;
+ }
+
++u32 intel_dp_mode_to_fec_clock(u32 mode_clock);
++
+ #endif /* __INTEL_DP_H__ */
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+index 8aa6a31e8ad0..c42d487f4dff 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+@@ -81,7 +81,7 @@ static int intel_dp_mst_compute_link_config(struct intel_encoder *encoder,
+ adjusted_mode->crtc_clock,
+ crtc_state->port_clock,
+ &crtc_state->dp_m_n,
+- constant_n);
++ constant_n, crtc_state->fec_enable);
+ crtc_state->dp_m_n.tu = slots;
+
+ return 0;
+diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
+index 75baff657e43..10875b8a39a3 100644
+--- a/drivers/gpu/drm/i915/gvt/scheduler.c
++++ b/drivers/gpu/drm/i915/gvt/scheduler.c
+@@ -1424,9 +1424,6 @@ static int prepare_mm(struct intel_vgpu_workload *workload)
+ #define same_context(a, b) (((a)->context_id == (b)->context_id) && \
+ ((a)->lrca == (b)->lrca))
+
+-#define get_last_workload(q) \
+- (list_empty(q) ? NULL : container_of(q->prev, \
+- struct intel_vgpu_workload, list))
+ /**
+ * intel_vgpu_create_workload - create a vGPU workload
+ * @vgpu: a vGPU
+@@ -1446,7 +1443,7 @@ intel_vgpu_create_workload(struct intel_vgpu *vgpu, int ring_id,
+ {
+ struct intel_vgpu_submission *s = &vgpu->submission;
+ struct list_head *q = workload_q_head(vgpu, ring_id);
+- struct intel_vgpu_workload *last_workload = get_last_workload(q);
++ struct intel_vgpu_workload *last_workload = NULL;
+ struct intel_vgpu_workload *workload = NULL;
+ struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
+ u64 ring_context_gpa;
+@@ -1472,15 +1469,20 @@ intel_vgpu_create_workload(struct intel_vgpu *vgpu, int ring_id,
+ head &= RB_HEAD_OFF_MASK;
+ tail &= RB_TAIL_OFF_MASK;
+
+- if (last_workload && same_context(&last_workload->ctx_desc, desc)) {
+- gvt_dbg_el("ring id %d cur workload == last\n", ring_id);
+- gvt_dbg_el("ctx head %x real head %lx\n", head,
+- last_workload->rb_tail);
+- /*
+- * cannot use guest context head pointer here,
+- * as it might not be updated at this time
+- */
+- head = last_workload->rb_tail;
++ list_for_each_entry_reverse(last_workload, q, list) {
++
++ if (same_context(&last_workload->ctx_desc, desc)) {
++ gvt_dbg_el("ring id %d cur workload == last\n",
++ ring_id);
++ gvt_dbg_el("ctx head %x real head %lx\n", head,
++ last_workload->rb_tail);
++ /*
++ * cannot use guest context head pointer here,
++ * as it might not be updated at this time
++ */
++ head = last_workload->rb_tail;
++ break;
++ }
+ }
+
+ gvt_dbg_el("ring id %d begin a new workload\n", ring_id);
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index fe7a6ec2c199..94b91a952699 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -1073,6 +1073,7 @@ struct i915_frontbuffer_tracking {
+ };
+
+ struct i915_virtual_gpu {
++ struct mutex lock; /* serialises sending of g2v_notify command pkts */
+ bool active;
+ u32 caps;
+ };
+diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
+index 7015a97b1097..6b702da7bba7 100644
+--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
++++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
+@@ -1248,14 +1248,15 @@ free_scratch_page:
+ return ret;
+ }
+
+-static int gen8_ppgtt_notify_vgt(struct i915_ppgtt *ppgtt, bool create)
++static void gen8_ppgtt_notify_vgt(struct i915_ppgtt *ppgtt, bool create)
+ {
+- struct i915_address_space *vm = &ppgtt->vm;
+- struct drm_i915_private *dev_priv = vm->i915;
++ struct drm_i915_private *dev_priv = ppgtt->vm.i915;
+ enum vgt_g2v_type msg;
+ int i;
+
+- if (i915_vm_is_4lvl(vm)) {
++ mutex_lock(&dev_priv->vgpu.lock);
++
++ if (i915_vm_is_4lvl(&ppgtt->vm)) {
+ const u64 daddr = px_dma(ppgtt->pd);
+
+ I915_WRITE(vgtif_reg(pdp[0].lo), lower_32_bits(daddr));
+@@ -1275,9 +1276,10 @@ static int gen8_ppgtt_notify_vgt(struct i915_ppgtt *ppgtt, bool create)
+ VGT_G2V_PPGTT_L3_PAGE_TABLE_DESTROY);
+ }
+
++ /* g2v_notify atomically (via hv trap) consumes the message packet. */
+ I915_WRITE(vgtif_reg(g2v_notify), msg);
+
+- return 0;
++ mutex_unlock(&dev_priv->vgpu.lock);
+ }
+
+ static void gen8_free_scratch(struct i915_address_space *vm)
+diff --git a/drivers/gpu/drm/i915/i915_vgpu.c b/drivers/gpu/drm/i915/i915_vgpu.c
+index 724627afdedc..8b03c67f8e5b 100644
+--- a/drivers/gpu/drm/i915/i915_vgpu.c
++++ b/drivers/gpu/drm/i915/i915_vgpu.c
+@@ -79,6 +79,7 @@ void i915_check_vgpu(struct drm_i915_private *dev_priv)
+ dev_priv->vgpu.caps = __raw_uncore_read32(uncore, vgtif_reg(vgt_caps));
+
+ dev_priv->vgpu.active = true;
++ mutex_init(&dev_priv->vgpu.lock);
+ DRM_INFO("Virtual GPU for Intel GVT-g detected.\n");
+ }
+
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
+index aa35d18ab43c..02acb4338721 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
+@@ -421,15 +421,15 @@ static int dsi_clk_init(struct msm_dsi_host *msm_host)
+ }
+
+ msm_host->byte_clk_src = clk_get_parent(msm_host->byte_clk);
+- if (!msm_host->byte_clk_src) {
+- ret = -ENODEV;
++ if (IS_ERR(msm_host->byte_clk_src)) {
++ ret = PTR_ERR(msm_host->byte_clk_src);
+ pr_err("%s: can't find byte_clk clock. ret=%d\n", __func__, ret);
+ goto exit;
+ }
+
+ msm_host->pixel_clk_src = clk_get_parent(msm_host->pixel_clk);
+- if (!msm_host->pixel_clk_src) {
+- ret = -ENODEV;
++ if (IS_ERR(msm_host->pixel_clk_src)) {
++ ret = PTR_ERR(msm_host->pixel_clk_src);
+ pr_err("%s: can't find pixel_clk clock. ret=%d\n", __func__, ret);
+ goto exit;
+ }
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+index 5c36c75232e6..895a34a1a1ea 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -1603,7 +1603,8 @@ nv50_sor_create(struct drm_connector *connector, struct dcb_output *dcbe)
+ nv_encoder->aux = aux;
+ }
+
+- if ((data = nvbios_dp_table(bios, &ver, &hdr, &cnt, &len)) &&
++ if (nv_connector->type != DCB_CONNECTOR_eDP &&
++ (data = nvbios_dp_table(bios, &ver, &hdr, &cnt, &len)) &&
+ ver >= 0x40 && (nvbios_rd08(bios, data + 0x08) & 0x04)) {
+ ret = nv50_mstm_new(nv_encoder, &nv_connector->aux, 16,
+ nv_connector->base.base.id,
+diff --git a/drivers/gpu/drm/omapdrm/dss/dss.c b/drivers/gpu/drm/omapdrm/dss/dss.c
+index 5711b7a720e6..25b6a79dc385 100644
+--- a/drivers/gpu/drm/omapdrm/dss/dss.c
++++ b/drivers/gpu/drm/omapdrm/dss/dss.c
+@@ -1090,7 +1090,7 @@ static const struct dss_features omap34xx_dss_feats = {
+
+ static const struct dss_features omap3630_dss_feats = {
+ .model = DSS_MODEL_OMAP3,
+- .fck_div_max = 32,
++ .fck_div_max = 31,
+ .fck_freq_max = 173000000,
+ .dss_fck_multiplier = 1,
+ .parent_clk_name = "dpll4_ck",
+diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c
+index 15d7bebe1729..5cc0fbb04ab1 100644
+--- a/drivers/gpu/drm/radeon/radeon_drv.c
++++ b/drivers/gpu/drm/radeon/radeon_drv.c
+@@ -325,8 +325,39 @@ bool radeon_device_is_virtual(void);
+ static int radeon_pci_probe(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
+ {
++ unsigned long flags = 0;
+ int ret;
+
++ if (!ent)
++ return -ENODEV; /* Avoid NULL-ptr deref in drm_get_pci_dev */
++
++ flags = ent->driver_data;
++
++ if (!radeon_si_support) {
++ switch (flags & RADEON_FAMILY_MASK) {
++ case CHIP_TAHITI:
++ case CHIP_PITCAIRN:
++ case CHIP_VERDE:
++ case CHIP_OLAND:
++ case CHIP_HAINAN:
++ dev_info(&pdev->dev,
++ "SI support disabled by module param\n");
++ return -ENODEV;
++ }
++ }
++ if (!radeon_cik_support) {
++ switch (flags & RADEON_FAMILY_MASK) {
++ case CHIP_KAVERI:
++ case CHIP_BONAIRE:
++ case CHIP_HAWAII:
++ case CHIP_KABINI:
++ case CHIP_MULLINS:
++ dev_info(&pdev->dev,
++ "CIK support disabled by module param\n");
++ return -ENODEV;
++ }
++ }
++
+ if (vga_switcheroo_client_probe_defer(pdev))
+ return -EPROBE_DEFER;
+
+diff --git a/drivers/gpu/drm/radeon/radeon_kms.c b/drivers/gpu/drm/radeon/radeon_kms.c
+index 07f7ace42c4b..e85c554eeaa9 100644
+--- a/drivers/gpu/drm/radeon/radeon_kms.c
++++ b/drivers/gpu/drm/radeon/radeon_kms.c
+@@ -100,31 +100,6 @@ int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags)
+ struct radeon_device *rdev;
+ int r, acpi_status;
+
+- if (!radeon_si_support) {
+- switch (flags & RADEON_FAMILY_MASK) {
+- case CHIP_TAHITI:
+- case CHIP_PITCAIRN:
+- case CHIP_VERDE:
+- case CHIP_OLAND:
+- case CHIP_HAINAN:
+- dev_info(dev->dev,
+- "SI support disabled by module param\n");
+- return -ENODEV;
+- }
+- }
+- if (!radeon_cik_support) {
+- switch (flags & RADEON_FAMILY_MASK) {
+- case CHIP_KAVERI:
+- case CHIP_BONAIRE:
+- case CHIP_HAWAII:
+- case CHIP_KABINI:
+- case CHIP_MULLINS:
+- dev_info(dev->dev,
+- "CIK support disabled by module param\n");
+- return -ENODEV;
+- }
+- }
+-
+ rdev = kzalloc(sizeof(struct radeon_device), GFP_KERNEL);
+ if (rdev == NULL) {
+ return -ENOMEM;
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x.c b/drivers/hwtracing/coresight/coresight-etm4x.c
+index 7bcac8896fc1..67928ff19c71 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x.c
+@@ -188,6 +188,13 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
+ dev_err(etm_dev,
+ "timeout while waiting for Idle Trace Status\n");
+
++ /*
++ * As recommended by section 4.3.7 ("Synchronization when using the
++ * memory-mapped interface") of ARM IHI 0064D
++ */
++ dsb(sy);
++ isb();
++
+ done:
+ CS_LOCK(drvdata->base);
+
+@@ -453,8 +460,12 @@ static void etm4_disable_hw(void *info)
+ /* EN, bit[0] Trace unit enable bit */
+ control &= ~0x1;
+
+- /* make sure everything completes before disabling */
+- mb();
++ /*
++ * Make sure everything completes before disabling, as recommended
++ * by section 7.3.77 ("TRCVICTLR, ViewInst Main Control Register,
++ * SSTATUS") of ARM IHI 0064D
++ */
++ dsb(sy);
+ isb();
+ writel_relaxed(control, drvdata->base + TRCPRGCTLR);
+
+diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
+index a89bfce5388e..17abf60c94ae 100644
+--- a/drivers/i2c/busses/i2c-qcom-geni.c
++++ b/drivers/i2c/busses/i2c-qcom-geni.c
+@@ -355,11 +355,13 @@ static int geni_i2c_rx_one_msg(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
+ {
+ dma_addr_t rx_dma;
+ unsigned long time_left;
+- void *dma_buf;
++ void *dma_buf = NULL;
+ struct geni_se *se = &gi2c->se;
+ size_t len = msg->len;
+
+- dma_buf = i2c_get_dma_safe_msg_buf(msg, 32);
++ if (!of_machine_is_compatible("lenovo,yoga-c630"))
++ dma_buf = i2c_get_dma_safe_msg_buf(msg, 32);
++
+ if (dma_buf)
+ geni_se_select_mode(se, GENI_SE_DMA);
+ else
+@@ -394,11 +396,13 @@ static int geni_i2c_tx_one_msg(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
+ {
+ dma_addr_t tx_dma;
+ unsigned long time_left;
+- void *dma_buf;
++ void *dma_buf = NULL;
+ struct geni_se *se = &gi2c->se;
+ size_t len = msg->len;
+
+- dma_buf = i2c_get_dma_safe_msg_buf(msg, 32);
++ if (!of_machine_is_compatible("lenovo,yoga-c630"))
++ dma_buf = i2c_get_dma_safe_msg_buf(msg, 32);
++
+ if (dma_buf)
+ geni_se_select_mode(se, GENI_SE_DMA);
+ else
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index e1259429ded2..3b1d7ae6f75e 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -1490,6 +1490,7 @@ static u64 *alloc_pte(struct protection_domain *domain,
+ pte_level = PM_PTE_LEVEL(__pte);
+
+ if (!IOMMU_PTE_PRESENT(__pte) ||
++ pte_level == PAGE_MODE_NONE ||
+ pte_level == PAGE_MODE_7_LEVEL) {
+ page = (u64 *)get_zeroed_page(gfp);
+ if (!page)
+@@ -1500,7 +1501,7 @@ static u64 *alloc_pte(struct protection_domain *domain,
+ /* pte could have been changed somewhere. */
+ if (cmpxchg64(pte, __pte, __npte) != __pte)
+ free_page((unsigned long)page);
+- else if (pte_level == PAGE_MODE_7_LEVEL)
++ else if (IOMMU_PTE_PRESENT(__pte))
+ domain->updated = true;
+
+ continue;
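The alloc_pte() hunk extends the lockless install path: a fresh page table is also allocated when the entry is PAGE_MODE_NONE, and the domain is flagged for flushing whenever a present entry was replaced rather than only for 7-level entries. Below is a sketch of the underlying allocate-then-cmpxchg pattern, using C11 atomics in place of the kernel's cmpxchg64(); the "present" bit and table size are illustrative.

    /* Lock-free "allocate, try to install, free on race" sketch. */
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    static _Atomic uint64_t pte; /* one page-table entry, 0 = not present */

    static int install_next_level(void)
    {
            uint64_t expected = 0;
            uint64_t *page = calloc(512, sizeof(*page)); /* get_zeroed_page() stand-in */
            uint64_t npte;

            if (!page)
                    return -1; /* -ENOMEM */
            npte = (uint64_t)(uintptr_t)page | 1; /* "present" bit, illustrative */

            /* Another CPU may have populated the PTE in the meantime;
             * only the winner of the compare-and-swap keeps its page. */
            if (!atomic_compare_exchange_strong(&pte, &expected, npte)) {
                    free(page);
                    return 1; /* lost the race */
            }
            return 0;
    }

    int main(void)
    {
            printf("first: %d\n", install_next_level());  /* 0: installed */
            printf("second: %d\n", install_next_level()); /* 1: already present */
            return 0;
    }
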
+diff --git a/drivers/mmc/host/sdhci-of-esdhc.c b/drivers/mmc/host/sdhci-of-esdhc.c
+index 4dd43b1adf2c..74de5e8c45c8 100644
+--- a/drivers/mmc/host/sdhci-of-esdhc.c
++++ b/drivers/mmc/host/sdhci-of-esdhc.c
+@@ -495,7 +495,12 @@ static int esdhc_of_enable_dma(struct sdhci_host *host)
+ dma_set_mask_and_coherent(dev, DMA_BIT_MASK(40));
+
+ value = sdhci_readl(host, ESDHC_DMA_SYSCTL);
+- value |= ESDHC_DMA_SNOOP;
++
++ if (of_dma_is_coherent(dev->of_node))
++ value |= ESDHC_DMA_SNOOP;
++ else
++ value &= ~ESDHC_DMA_SNOOP;
++
+ sdhci_writel(host, value, ESDHC_DMA_SYSCTL);
+ return 0;
+ }
+diff --git a/drivers/mmc/host/sdhci-tegra.c b/drivers/mmc/host/sdhci-tegra.c
+index 02d8f524bb9e..7bc950520fd9 100644
+--- a/drivers/mmc/host/sdhci-tegra.c
++++ b/drivers/mmc/host/sdhci-tegra.c
+@@ -4,6 +4,7 @@
+ */
+
+ #include <linux/delay.h>
++#include <linux/dma-mapping.h>
+ #include <linux/err.h>
+ #include <linux/module.h>
+ #include <linux/init.h>
+@@ -104,6 +105,7 @@
+
+ struct sdhci_tegra_soc_data {
+ const struct sdhci_pltfm_data *pdata;
++ u64 dma_mask;
+ u32 nvquirks;
+ u8 min_tap_delay;
+ u8 max_tap_delay;
+@@ -1233,11 +1235,25 @@ static const struct cqhci_host_ops sdhci_tegra_cqhci_ops = {
+ .update_dcmd_desc = sdhci_tegra_update_dcmd_desc,
+ };
+
++static int tegra_sdhci_set_dma_mask(struct sdhci_host *host)
++{
++ struct sdhci_pltfm_host *platform = sdhci_priv(host);
++ struct sdhci_tegra *tegra = sdhci_pltfm_priv(platform);
++ const struct sdhci_tegra_soc_data *soc = tegra->soc_data;
++ struct device *dev = mmc_dev(host->mmc);
++
++ if (soc->dma_mask)
++ return dma_set_mask_and_coherent(dev, soc->dma_mask);
++
++ return 0;
++}
++
+ static const struct sdhci_ops tegra_sdhci_ops = {
+ .get_ro = tegra_sdhci_get_ro,
+ .read_w = tegra_sdhci_readw,
+ .write_l = tegra_sdhci_writel,
+ .set_clock = tegra_sdhci_set_clock,
++ .set_dma_mask = tegra_sdhci_set_dma_mask,
+ .set_bus_width = sdhci_set_bus_width,
+ .reset = tegra_sdhci_reset,
+ .platform_execute_tuning = tegra_sdhci_execute_tuning,
+@@ -1257,6 +1273,7 @@ static const struct sdhci_pltfm_data sdhci_tegra20_pdata = {
+
+ static const struct sdhci_tegra_soc_data soc_data_tegra20 = {
+ .pdata = &sdhci_tegra20_pdata,
++ .dma_mask = DMA_BIT_MASK(32),
+ .nvquirks = NVQUIRK_FORCE_SDHCI_SPEC_200 |
+ NVQUIRK_ENABLE_BLOCK_GAP_DET,
+ };
+@@ -1283,6 +1300,7 @@ static const struct sdhci_pltfm_data sdhci_tegra30_pdata = {
+
+ static const struct sdhci_tegra_soc_data soc_data_tegra30 = {
+ .pdata = &sdhci_tegra30_pdata,
++ .dma_mask = DMA_BIT_MASK(32),
+ .nvquirks = NVQUIRK_ENABLE_SDHCI_SPEC_300 |
+ NVQUIRK_ENABLE_SDR50 |
+ NVQUIRK_ENABLE_SDR104 |
+@@ -1295,6 +1313,7 @@ static const struct sdhci_ops tegra114_sdhci_ops = {
+ .write_w = tegra_sdhci_writew,
+ .write_l = tegra_sdhci_writel,
+ .set_clock = tegra_sdhci_set_clock,
++ .set_dma_mask = tegra_sdhci_set_dma_mask,
+ .set_bus_width = sdhci_set_bus_width,
+ .reset = tegra_sdhci_reset,
+ .platform_execute_tuning = tegra_sdhci_execute_tuning,
+@@ -1316,6 +1335,7 @@ static const struct sdhci_pltfm_data sdhci_tegra114_pdata = {
+
+ static const struct sdhci_tegra_soc_data soc_data_tegra114 = {
+ .pdata = &sdhci_tegra114_pdata,
++ .dma_mask = DMA_BIT_MASK(32),
+ };
+
+ static const struct sdhci_pltfm_data sdhci_tegra124_pdata = {
+@@ -1325,22 +1345,13 @@ static const struct sdhci_pltfm_data sdhci_tegra124_pdata = {
+ SDHCI_QUIRK_NO_HISPD_BIT |
+ SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC |
+ SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN,
+- .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
+- /*
+- * The TRM states that the SD/MMC controller found on
+- * Tegra124 can address 34 bits (the maximum supported by
+- * the Tegra memory controller), but tests show that DMA
+- * to or from above 4 GiB doesn't work. This is possibly
+- * caused by missing programming, though it's not obvious
+- * what sequence is required. Mark 64-bit DMA broken for
+- * now to fix this for existing users (e.g. Nyan boards).
+- */
+- SDHCI_QUIRK2_BROKEN_64_BIT_DMA,
++ .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
+ .ops = &tegra114_sdhci_ops,
+ };
+
+ static const struct sdhci_tegra_soc_data soc_data_tegra124 = {
+ .pdata = &sdhci_tegra124_pdata,
++ .dma_mask = DMA_BIT_MASK(34),
+ };
+
+ static const struct sdhci_ops tegra210_sdhci_ops = {
+@@ -1349,6 +1360,7 @@ static const struct sdhci_ops tegra210_sdhci_ops = {
+ .write_w = tegra210_sdhci_writew,
+ .write_l = tegra_sdhci_writel,
+ .set_clock = tegra_sdhci_set_clock,
++ .set_dma_mask = tegra_sdhci_set_dma_mask,
+ .set_bus_width = sdhci_set_bus_width,
+ .reset = tegra_sdhci_reset,
+ .set_uhs_signaling = tegra_sdhci_set_uhs_signaling,
+@@ -1369,6 +1381,7 @@ static const struct sdhci_pltfm_data sdhci_tegra210_pdata = {
+
+ static const struct sdhci_tegra_soc_data soc_data_tegra210 = {
+ .pdata = &sdhci_tegra210_pdata,
++ .dma_mask = DMA_BIT_MASK(34),
+ .nvquirks = NVQUIRK_NEEDS_PAD_CONTROL |
+ NVQUIRK_HAS_PADCALIB |
+ NVQUIRK_DIS_CARD_CLK_CONFIG_TAP |
+@@ -1383,6 +1396,7 @@ static const struct sdhci_ops tegra186_sdhci_ops = {
+ .read_w = tegra_sdhci_readw,
+ .write_l = tegra_sdhci_writel,
+ .set_clock = tegra_sdhci_set_clock,
++ .set_dma_mask = tegra_sdhci_set_dma_mask,
+ .set_bus_width = sdhci_set_bus_width,
+ .reset = tegra_sdhci_reset,
+ .set_uhs_signaling = tegra_sdhci_set_uhs_signaling,
+@@ -1398,20 +1412,13 @@ static const struct sdhci_pltfm_data sdhci_tegra186_pdata = {
+ SDHCI_QUIRK_NO_HISPD_BIT |
+ SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC |
+ SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN,
+- .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
+- /* SDHCI controllers on Tegra186 support 40-bit addressing.
+- * IOVA addresses are 48-bit wide on Tegra186.
+- * With 64-bit dma mask used for SDHCI, accesses can
+- * be broken. Disable 64-bit dma, which would fall back
+- * to 32-bit dma mask. Ideally 40-bit dma mask would work,
+- * But it is not supported as of now.
+- */
+- SDHCI_QUIRK2_BROKEN_64_BIT_DMA,
++ .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
+ .ops = &tegra186_sdhci_ops,
+ };
+
+ static const struct sdhci_tegra_soc_data soc_data_tegra186 = {
+ .pdata = &sdhci_tegra186_pdata,
++ .dma_mask = DMA_BIT_MASK(40),
+ .nvquirks = NVQUIRK_NEEDS_PAD_CONTROL |
+ NVQUIRK_HAS_PADCALIB |
+ NVQUIRK_DIS_CARD_CLK_CONFIG_TAP |
+@@ -1424,6 +1431,7 @@ static const struct sdhci_tegra_soc_data soc_data_tegra186 = {
+
+ static const struct sdhci_tegra_soc_data soc_data_tegra194 = {
+ .pdata = &sdhci_tegra186_pdata,
++ .dma_mask = DMA_BIT_MASK(39),
+ .nvquirks = NVQUIRK_NEEDS_PAD_CONTROL |
+ NVQUIRK_HAS_PADCALIB |
+ NVQUIRK_DIS_CARD_CLK_CONFIG_TAP |
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index c66e66fbaeb4..e41ccb836538 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -2857,6 +2857,7 @@ static void sdhci_cmd_irq(struct sdhci_host *host, u32 intmask, u32 *intmask_p)
+ static void sdhci_adma_show_error(struct sdhci_host *host)
+ {
+ void *desc = host->adma_table;
++ dma_addr_t dma = host->adma_addr;
+
+ sdhci_dumpregs(host);
+
+@@ -2864,18 +2865,21 @@ static void sdhci_adma_show_error(struct sdhci_host *host)
+ struct sdhci_adma2_64_desc *dma_desc = desc;
+
+ if (host->flags & SDHCI_USE_64_BIT_DMA)
+- DBG("%p: DMA 0x%08x%08x, LEN 0x%04x, Attr=0x%02x\n",
+- desc, le32_to_cpu(dma_desc->addr_hi),
++ SDHCI_DUMP("%08llx: DMA 0x%08x%08x, LEN 0x%04x, Attr=0x%02x\n",
++ (unsigned long long)dma,
++ le32_to_cpu(dma_desc->addr_hi),
+ le32_to_cpu(dma_desc->addr_lo),
+ le16_to_cpu(dma_desc->len),
+ le16_to_cpu(dma_desc->cmd));
+ else
+- DBG("%p: DMA 0x%08x, LEN 0x%04x, Attr=0x%02x\n",
+- desc, le32_to_cpu(dma_desc->addr_lo),
++ SDHCI_DUMP("%08llx: DMA 0x%08x, LEN 0x%04x, Attr=0x%02x\n",
++ (unsigned long long)dma,
++ le32_to_cpu(dma_desc->addr_lo),
+ le16_to_cpu(dma_desc->len),
+ le16_to_cpu(dma_desc->cmd));
+
+ desc += host->desc_sz;
++ dma += host->desc_sz;
+
+ if (dma_desc->cmd & cpu_to_le16(ADMA2_END))
+ break;
+@@ -2951,7 +2955,8 @@ static void sdhci_data_irq(struct sdhci_host *host, u32 intmask)
+ != MMC_BUS_TEST_R)
+ host->data->error = -EILSEQ;
+ else if (intmask & SDHCI_INT_ADMA_ERROR) {
+- pr_err("%s: ADMA error\n", mmc_hostname(host->mmc));
++ pr_err("%s: ADMA error: 0x%08x\n", mmc_hostname(host->mmc),
++ intmask);
+ sdhci_adma_show_error(host);
+ host->data->error = -EIO;
+ if (host->ops->adma_workaround)
+@@ -3758,18 +3763,14 @@ int sdhci_setup_host(struct sdhci_host *host)
+ host->flags &= ~SDHCI_USE_ADMA;
+ }
+
+- /*
+- * It is assumed that a 64-bit capable device has set a 64-bit DMA mask
+- * and *must* do 64-bit DMA. A driver has the opportunity to change
+- * that during the first call to ->enable_dma(). Similarly
+- * SDHCI_QUIRK2_BROKEN_64_BIT_DMA must be left to the drivers to
+- * implement.
+- */
+ if (sdhci_can_64bit_dma(host))
+ host->flags |= SDHCI_USE_64_BIT_DMA;
+
+ if (host->flags & (SDHCI_USE_SDMA | SDHCI_USE_ADMA)) {
+- ret = sdhci_set_dma_mask(host);
++ if (host->ops->set_dma_mask)
++ ret = host->ops->set_dma_mask(host);
++ else
++ ret = sdhci_set_dma_mask(host);
+
+ if (!ret && host->ops->enable_dma)
+ ret = host->ops->enable_dma(host);
+diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
+index 902f855efe8f..8285498c0d8a 100644
+--- a/drivers/mmc/host/sdhci.h
++++ b/drivers/mmc/host/sdhci.h
+@@ -622,6 +622,7 @@ struct sdhci_ops {
+
+ u32 (*irq)(struct sdhci_host *host, u32 intmask);
+
++ int (*set_dma_mask)(struct sdhci_host *host);
+ int (*enable_dma)(struct sdhci_host *host);
+ unsigned int (*get_max_clock)(struct sdhci_host *host);
+ unsigned int (*get_min_clock)(struct sdhci_host *host);
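Taken together, the sdhci hunks replace the SDHCI_QUIRK2_BROKEN_64_BIT_DMA workaround on Tegra with a per-SoC dma_mask and a new optional ->set_dma_mask() host operation, which the core prefers over its generic sdhci_set_dma_mask() fallback. A small sketch of that optional-callback dispatch; the structures and function names are illustrative, not the real sdhci types.

    /* Optional driver hook with a core default, as in sdhci_setup_host(). */
    #include <stdio.h>

    struct host;
    struct ops { int (*set_dma_mask)(struct host *); };
    struct host { const struct ops *ops; };

    static int default_set_dma_mask(struct host *h) { puts("core default"); return 0; }
    static int tegra_set_dma_mask(struct host *h)   { puts("SoC-specific mask"); return 0; }

    static int setup(struct host *h)
    {
            /* Prefer the driver hook; fall back to the generic helper. */
            if (h->ops->set_dma_mask)
                    return h->ops->set_dma_mask(h);
            return default_set_dma_mask(h);
    }

    int main(void)
    {
            const struct ops tegra = { .set_dma_mask = tegra_set_dma_mask };
            const struct ops plain = { 0 };
            struct host a = { &tegra }, b = { &plain };
            setup(&a); /* prints "SoC-specific mask" */
            setup(&b); /* prints "core default" */
            return 0;
    }
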
+diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c
+index 12358f06d194..5d6f8977df3f 100644
+--- a/drivers/net/can/spi/mcp251x.c
++++ b/drivers/net/can/spi/mcp251x.c
+@@ -612,7 +612,7 @@ static int mcp251x_setup(struct net_device *net, struct spi_device *spi)
+ static int mcp251x_hw_reset(struct spi_device *spi)
+ {
+ struct mcp251x_priv *priv = spi_get_drvdata(spi);
+- u8 reg;
++ unsigned long timeout;
+ int ret;
+
+ /* Wait for oscillator startup timer after power up */
+@@ -626,10 +626,19 @@ static int mcp251x_hw_reset(struct spi_device *spi)
+ /* Wait for oscillator startup timer after reset */
+ mdelay(MCP251X_OST_DELAY_MS);
+
+- reg = mcp251x_read_reg(spi, CANSTAT);
+- if ((reg & CANCTRL_REQOP_MASK) != CANCTRL_REQOP_CONF)
+- return -ENODEV;
+-
++ /* Wait for reset to finish */
++ timeout = jiffies + HZ;
++ while ((mcp251x_read_reg(spi, CANSTAT) & CANCTRL_REQOP_MASK) !=
++ CANCTRL_REQOP_CONF) {
++ usleep_range(MCP251X_OST_DELAY_MS * 1000,
++ MCP251X_OST_DELAY_MS * 1000 * 2);
++
++ if (time_after(jiffies, timeout)) {
++ dev_err(&spi->dev,
++ "MCP251x didn't enter in conf mode after reset\n");
++ return -EBUSY;
++ }
++ }
+ return 0;
+ }
+
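The mcp251x hunk replaces a single post-reset register read with a bounded poll: keep re-reading CANSTAT until the controller reports configuration mode or a one-second deadline expires. A userspace sketch of the same pattern, with time() and usleep() standing in for jiffies and usleep_range(), and a fake register read that succeeds on the third attempt; the register constants are illustrative.

    /* Bounded register poll with a deadline, as added to mcp251x_hw_reset(). */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    #define REQOP_MASK 0xe0
    #define REQOP_CONF 0x80

    static int read_canstat(void) /* hypothetical register read */
    {
            static int calls;
            return ++calls < 3 ? 0x00 : 0x80; /* reports CONF on 3rd read */
    }

    int main(void)
    {
            time_t deadline = time(NULL) + 1; /* models jiffies + HZ */

            while ((read_canstat() & REQOP_MASK) != REQOP_CONF) {
                    usleep(5000); /* models usleep_range() */
                    if (time(NULL) > deadline) {
                            fprintf(stderr, "didn't enter conf mode\n");
                            return 1; /* stands in for -EBUSY */
                    }
            }
            puts("conf mode reached");
            return 0;
    }
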
+diff --git a/drivers/net/dsa/microchip/ksz_common.h b/drivers/net/dsa/microchip/ksz_common.h
+index 72ec250b9540..823f544add0a 100644
+--- a/drivers/net/dsa/microchip/ksz_common.h
++++ b/drivers/net/dsa/microchip/ksz_common.h
+@@ -130,7 +130,7 @@ static inline void ksz_pwrite32(struct ksz_device *dev, int port, int offset,
+ { \
+ .name = #width, \
+ .val_bits = (width), \
+- .reg_stride = (width) / 8, \
++ .reg_stride = 1, \
+ .reg_bits = (regbits) + (regalign), \
+ .pad_bits = (regpad), \
+ .max_register = BIT(regbits) - 1, \
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
+index 202e9a246019..7c13656a8338 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
+@@ -21,6 +21,7 @@ static int mlxsw_sp_flower_parse_actions(struct mlxsw_sp *mlxsw_sp,
+ struct netlink_ext_ack *extack)
+ {
+ const struct flow_action_entry *act;
++ int mirror_act_count = 0;
+ int err, i;
+
+ if (!flow_action_has_entries(flow_action))
+@@ -95,6 +96,11 @@ static int mlxsw_sp_flower_parse_actions(struct mlxsw_sp *mlxsw_sp,
+ case FLOW_ACTION_MIRRED: {
+ struct net_device *out_dev = act->dev;
+
++ if (mirror_act_count++) {
++ NL_SET_ERR_MSG_MOD(extack, "Multiple mirror actions per rule are not supported");
++ return -EOPNOTSUPP;
++ }
++
+ err = mlxsw_sp_acl_rulei_act_mirror(mlxsw_sp, rulei,
+ block, out_dev,
+ extack);
+diff --git a/drivers/net/ethernet/netronome/nfp/abm/cls.c b/drivers/net/ethernet/netronome/nfp/abm/cls.c
+index 23ebddfb9532..9f8a1f69c0c4 100644
+--- a/drivers/net/ethernet/netronome/nfp/abm/cls.c
++++ b/drivers/net/ethernet/netronome/nfp/abm/cls.c
+@@ -176,8 +176,10 @@ nfp_abm_u32_knode_replace(struct nfp_abm_link *alink,
+ u8 mask, val;
+ int err;
+
+- if (!nfp_abm_u32_check_knode(alink->abm, knode, proto, extack))
++ if (!nfp_abm_u32_check_knode(alink->abm, knode, proto, extack)) {
++ err = -EOPNOTSUPP;
+ goto err_delete;
++ }
+
+ tos_off = proto == htons(ETH_P_IP) ? 16 : 20;
+
+@@ -198,14 +200,18 @@ nfp_abm_u32_knode_replace(struct nfp_abm_link *alink,
+ if ((iter->val & cmask) == (val & cmask) &&
+ iter->band != knode->res->classid) {
+ NL_SET_ERR_MSG_MOD(extack, "conflict with already offloaded filter");
++ err = -EOPNOTSUPP;
+ goto err_delete;
+ }
+ }
+
+ if (!match) {
+ match = kzalloc(sizeof(*match), GFP_KERNEL);
+- if (!match)
+- return -ENOMEM;
++ if (!match) {
++ err = -ENOMEM;
++ goto err_delete;
++ }
++
+ list_add(&match->list, &alink->dscp_map);
+ }
+ match->handle = knode->handle;
+@@ -221,7 +227,7 @@ nfp_abm_u32_knode_replace(struct nfp_abm_link *alink,
+
+ err_delete:
+ nfp_abm_u32_knode_delete(alink, knode);
+- return -EOPNOTSUPP;
++ return err;
+ }
+
+ static int nfp_abm_setup_tc_block_cb(enum tc_setup_type type,
+diff --git a/drivers/net/ieee802154/atusb.c b/drivers/net/ieee802154/atusb.c
+index ceddb424f887..0dd0ba915ab9 100644
+--- a/drivers/net/ieee802154/atusb.c
++++ b/drivers/net/ieee802154/atusb.c
+@@ -1137,10 +1137,11 @@ static void atusb_disconnect(struct usb_interface *interface)
+
+ ieee802154_unregister_hw(atusb->hw);
+
++ usb_put_dev(atusb->usb_dev);
++
+ ieee802154_free_hw(atusb->hw);
+
+ usb_set_intfdata(interface, NULL);
+- usb_put_dev(atusb->usb_dev);
+
+ pr_debug("%s done\n", __func__);
+ }
+diff --git a/drivers/ntb/test/ntb_perf.c b/drivers/ntb/test/ntb_perf.c
+index d028331558ea..e9b7c2dfc730 100644
+--- a/drivers/ntb/test/ntb_perf.c
++++ b/drivers/ntb/test/ntb_perf.c
+@@ -1378,7 +1378,7 @@ static int perf_setup_peer_mw(struct perf_peer *peer)
+ int ret;
+
+ /* Get outbound MW parameters and map it */
+- ret = ntb_peer_mw_get_addr(perf->ntb, peer->gidx, &phys_addr,
++ ret = ntb_peer_mw_get_addr(perf->ntb, perf->gidx, &phys_addr,
+ &peer->outbuf_size);
+ if (ret)
+ return ret;
+diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
+index a8d56887ec88..3e9f45aec8d1 100644
+--- a/drivers/nvdimm/btt.c
++++ b/drivers/nvdimm/btt.c
+@@ -392,9 +392,9 @@ static int btt_flog_write(struct arena_info *arena, u32 lane, u32 sub,
+ arena->freelist[lane].sub = 1 - arena->freelist[lane].sub;
+ if (++(arena->freelist[lane].seq) == 4)
+ arena->freelist[lane].seq = 1;
+- if (ent_e_flag(ent->old_map))
++ if (ent_e_flag(le32_to_cpu(ent->old_map)))
+ arena->freelist[lane].has_err = 1;
+- arena->freelist[lane].block = le32_to_cpu(ent_lba(ent->old_map));
++ arena->freelist[lane].block = ent_lba(le32_to_cpu(ent->old_map));
+
+ return ret;
+ }
+@@ -560,8 +560,8 @@ static int btt_freelist_init(struct arena_info *arena)
+ * FIXME: if error clearing fails during init, we want to make
+ * the BTT read-only
+ */
+- if (ent_e_flag(log_new.old_map) &&
+- !ent_normal(log_new.old_map)) {
++ if (ent_e_flag(le32_to_cpu(log_new.old_map)) &&
++ !ent_normal(le32_to_cpu(log_new.old_map))) {
+ arena->freelist[i].has_err = 1;
+ ret = arena_clear_freelist_error(arena, i);
+ if (ret)
+diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
+index 798c5c4aea9c..bb3f20ebc276 100644
+--- a/drivers/nvdimm/bus.c
++++ b/drivers/nvdimm/bus.c
+@@ -182,7 +182,7 @@ static int nvdimm_clear_badblocks_region(struct device *dev, void *data)
+ sector_t sector;
+
+ /* make sure device is a region */
+- if (!is_nd_pmem(dev))
++ if (!is_memory(dev))
+ return 0;
+
+ nd_region = to_nd_region(dev);
+diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
+index a16e52251a30..102c9d5141ee 100644
+--- a/drivers/nvdimm/namespace_devs.c
++++ b/drivers/nvdimm/namespace_devs.c
+@@ -1987,7 +1987,7 @@ static struct device *create_namespace_pmem(struct nd_region *nd_region,
+ nd_mapping = &nd_region->mapping[i];
+ label_ent = list_first_entry_or_null(&nd_mapping->labels,
+ typeof(*label_ent), list);
+- label0 = label_ent ? label_ent->label : 0;
++ label0 = label_ent ? label_ent->label : NULL;
+
+ if (!label0) {
+ WARN_ON(1);
+@@ -2322,8 +2322,9 @@ static struct device **scan_labels(struct nd_region *nd_region)
+ continue;
+
+ /* skip labels that describe extents outside of the region */
+- if (nd_label->dpa < nd_mapping->start || nd_label->dpa > map_end)
+- continue;
++ if (__le64_to_cpu(nd_label->dpa) < nd_mapping->start ||
++ __le64_to_cpu(nd_label->dpa) > map_end)
++ continue;
+
+ i = add_namespace_resource(nd_region, nd_label, devs, count);
+ if (i < 0)
+diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
+index cb98b8fe786e..b0f7832bae72 100644
+--- a/drivers/nvdimm/pfn_devs.c
++++ b/drivers/nvdimm/pfn_devs.c
+@@ -618,9 +618,11 @@ static int __nvdimm_setup_pfn(struct nd_pfn *nd_pfn, struct dev_pagemap *pgmap)
+ struct nd_namespace_common *ndns = nd_pfn->ndns;
+ struct nd_namespace_io *nsio = to_nd_namespace_io(&ndns->dev);
+ resource_size_t base = nsio->res.start + start_pad;
++ resource_size_t end = nsio->res.end - end_trunc;
+ struct vmem_altmap __altmap = {
+ .base_pfn = init_altmap_base(base),
+ .reserve = init_altmap_reserve(base),
++ .end_pfn = PHYS_PFN(end),
+ };
+
+ memcpy(res, &nsio->res, sizeof(*res));
+diff --git a/drivers/nvdimm/region.c b/drivers/nvdimm/region.c
+index 37bf8719a2a4..0f6978e72e7c 100644
+--- a/drivers/nvdimm/region.c
++++ b/drivers/nvdimm/region.c
+@@ -34,7 +34,7 @@ static int nd_region_probe(struct device *dev)
+ if (rc)
+ return rc;
+
+- if (is_nd_pmem(&nd_region->dev)) {
++ if (is_memory(&nd_region->dev)) {
+ struct resource ndr_res;
+
+ if (devm_init_badblocks(dev, &nd_region->bb))
+@@ -123,7 +123,7 @@ static void nd_region_notify(struct device *dev, enum nvdimm_event event)
+ struct nd_region *nd_region = to_nd_region(dev);
+ struct resource res;
+
+- if (is_nd_pmem(&nd_region->dev)) {
++ if (is_memory(&nd_region->dev)) {
+ res.start = nd_region->ndr_start;
+ res.end = nd_region->ndr_start +
+ nd_region->ndr_size - 1;
+diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
+index af30cbe7a8ea..47b48800fb75 100644
+--- a/drivers/nvdimm/region_devs.c
++++ b/drivers/nvdimm/region_devs.c
+@@ -632,11 +632,11 @@ static umode_t region_visible(struct kobject *kobj, struct attribute *a, int n)
+ if (!is_memory(dev) && a == &dev_attr_dax_seed.attr)
+ return 0;
+
+- if (!is_nd_pmem(dev) && a == &dev_attr_badblocks.attr)
++ if (!is_memory(dev) && a == &dev_attr_badblocks.attr)
+ return 0;
+
+ if (a == &dev_attr_resource.attr) {
+- if (is_nd_pmem(dev))
++ if (is_memory(dev))
+ return 0400;
+ else
+ return 0;
+diff --git a/drivers/nvdimm/security.c b/drivers/nvdimm/security.c
+index a570f2263a42..5b7ea93edb93 100644
+--- a/drivers/nvdimm/security.c
++++ b/drivers/nvdimm/security.c
+@@ -177,6 +177,10 @@ static int __nvdimm_security_unlock(struct nvdimm *nvdimm)
+ || nvdimm->sec.state < 0)
+ return -EIO;
+
++ /* No need to go further if security is disabled */
++ if (nvdimm->sec.state == NVDIMM_SECURITY_DISABLED)
++ return 0;
++
+ if (test_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags)) {
+ dev_dbg(dev, "Security operation in progress.\n");
+ return -EBUSY;
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index 40b625458afa..2b53976cd9f9 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -2701,8 +2701,8 @@ static int hv_pci_remove(struct hv_device *hdev)
+ /* Remove the bus from PCI's point of view. */
+ pci_lock_rescan_remove();
+ pci_stop_root_bus(hbus->pci_bus);
+- pci_remove_root_bus(hbus->pci_bus);
+ hv_pci_remove_slots(hbus);
++ pci_remove_root_bus(hbus->pci_bus);
+ pci_unlock_rescan_remove();
+ hbus->state = hv_pcibus_removed;
+ }
+diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
+index 4575e0c6dc4b..a35d3f3996d7 100644
+--- a/drivers/pci/controller/vmd.c
++++ b/drivers/pci/controller/vmd.c
+@@ -31,6 +31,9 @@
+ #define PCI_REG_VMLOCK 0x70
+ #define MB2_SHADOW_EN(vmlock) (vmlock & 0x2)
+
++#define MB2_SHADOW_OFFSET 0x2000
++#define MB2_SHADOW_SIZE 16
++
+ enum vmd_features {
+ /*
+ * Device may contain registers which hint the physical location of the
+@@ -94,6 +97,7 @@ struct vmd_dev {
+ struct resource resources[3];
+ struct irq_domain *irq_domain;
+ struct pci_bus *bus;
++ u8 busn_start;
+
+ struct dma_map_ops dma_ops;
+ struct dma_domain dma_domain;
+@@ -440,7 +444,8 @@ static char __iomem *vmd_cfg_addr(struct vmd_dev *vmd, struct pci_bus *bus,
+ unsigned int devfn, int reg, int len)
+ {
+ char __iomem *addr = vmd->cfgbar +
+- (bus->number << 20) + (devfn << 12) + reg;
++ ((bus->number - vmd->busn_start) << 20) +
++ (devfn << 12) + reg;
+
+ if ((addr - vmd->cfgbar) + len >=
+ resource_size(&vmd->dev->resource[VMD_CFGBAR]))
+@@ -563,7 +568,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
+ unsigned long flags;
+ LIST_HEAD(resources);
+ resource_size_t offset[2] = {0};
+- resource_size_t membar2_offset = 0x2000, busn_start = 0;
++ resource_size_t membar2_offset = 0x2000;
+ struct pci_bus *child;
+
+ /*
+@@ -576,7 +581,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
+ u32 vmlock;
+ int ret;
+
+- membar2_offset = 0x2018;
++ membar2_offset = MB2_SHADOW_OFFSET + MB2_SHADOW_SIZE;
+ ret = pci_read_config_dword(vmd->dev, PCI_REG_VMLOCK, &vmlock);
+ if (ret || vmlock == ~0)
+ return -ENODEV;
+@@ -588,9 +593,9 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
+ if (!membar2)
+ return -ENOMEM;
+ offset[0] = vmd->dev->resource[VMD_MEMBAR1].start -
+- readq(membar2 + 0x2008);
++ readq(membar2 + MB2_SHADOW_OFFSET);
+ offset[1] = vmd->dev->resource[VMD_MEMBAR2].start -
+- readq(membar2 + 0x2010);
++ readq(membar2 + MB2_SHADOW_OFFSET + 8);
+ pci_iounmap(vmd->dev, membar2);
+ }
+ }
+@@ -606,14 +611,14 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
+ pci_read_config_dword(vmd->dev, PCI_REG_VMCONFIG, &vmconfig);
+ if (BUS_RESTRICT_CAP(vmcap) &&
+ (BUS_RESTRICT_CFG(vmconfig) == 0x1))
+- busn_start = 128;
++ vmd->busn_start = 128;
+ }
+
+ res = &vmd->dev->resource[VMD_CFGBAR];
+ vmd->resources[0] = (struct resource) {
+ .name = "VMD CFGBAR",
+- .start = busn_start,
+- .end = busn_start + (resource_size(res) >> 20) - 1,
++ .start = vmd->busn_start,
++ .end = vmd->busn_start + (resource_size(res) >> 20) - 1,
+ .flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED,
+ };
+
+@@ -681,8 +686,8 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
+ pci_add_resource_offset(&resources, &vmd->resources[1], offset[0]);
+ pci_add_resource_offset(&resources, &vmd->resources[2], offset[1]);
+
+- vmd->bus = pci_create_root_bus(&vmd->dev->dev, busn_start, &vmd_ops,
+- sd, &resources);
++ vmd->bus = pci_create_root_bus(&vmd->dev->dev, vmd->busn_start,
++ &vmd_ops, sd, &resources);
+ if (!vmd->bus) {
+ pci_free_resource_list(&resources);
+ irq_domain_remove(vmd->irq_domain);
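The vmd hunks store the domain's starting bus number in the device structure and subtract it when indexing the CFGBAR, since on bus-restricted domains the child buses begin at 128 while the config window still starts at offset 0. Each bus occupies a 1 MiB (20-bit) slice and each devfn a 4 KiB (12-bit) slice; a short sketch of the offset math:

    /* VMD config-space index math after the hunk. */
    #include <stdio.h>

    static unsigned long cfg_offset(unsigned int bus, unsigned int busn_start,
                                    unsigned int devfn, unsigned int reg)
    {
            return ((unsigned long)(bus - busn_start) << 20) |
                   ((unsigned long)devfn << 12) | reg;
    }

    int main(void)
    {
            /* Bus-restricted domain starting at bus 128: bus 128 maps to
             * offset 0 of CFGBAR rather than 128 MiB into it. */
            printf("0x%lx\n", cfg_offset(128, 128, 0, 0));    /* 0x0 */
            printf("0x%lx\n", cfg_offset(129, 128, 8, 0x40)); /* 0x108040 */
            return 0;
    }
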
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 1f17da3dfeac..b97d9e10c9cc 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -1443,7 +1443,7 @@ static void pci_restore_rebar_state(struct pci_dev *pdev)
+ pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl);
+ bar_idx = ctrl & PCI_REBAR_CTRL_BAR_IDX;
+ res = pdev->resource + bar_idx;
+- size = order_base_2((resource_size(res) >> 20) | 1) - 1;
++ size = ilog2(resource_size(res)) - 20;
+ ctrl &= ~PCI_REBAR_CTRL_BAR_SIZE;
+ ctrl |= size << PCI_REBAR_CTRL_BAR_SHIFT;
+ pci_write_config_dword(pdev, pos + PCI_REBAR_CTRL, ctrl);
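The resizable-BAR control field encodes a BAR size as log2(bytes) - 20, so a 1 MiB BAR encodes as 0; the hunk computes that directly with ilog2() instead of the previous order_base_2() expression, which underflowed for 1 MiB BARs on restore. A minimal check of the encoding, with a loop-based stand-in for the kernel's ilog2(); ReBAR sizes are always powers of two, which this assumes.

    /* ReBAR size-field encoding used in pci_restore_rebar_state(). */
    #include <stdio.h>
    #include <stdint.h>

    static unsigned int ilog2_u64(uint64_t v) /* models the kernel's ilog2() */
    {
            unsigned int r = 0;
            while (v >>= 1)
                    r++;
            return r;
    }

    int main(void)
    {
            uint64_t sizes[] = { 1ULL << 20, 256ULL << 20, 8ULL << 30 };

            for (int i = 0; i < 3; i++)
                    printf("%llu bytes -> size field %u\n",
                           (unsigned long long)sizes[i],
                           ilog2_u64(sizes[i]) - 20); /* prints 0, 8, 13 */
            return 0;
    }
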
+diff --git a/drivers/power/supply/sbs-battery.c b/drivers/power/supply/sbs-battery.c
+index 048d205d7074..f8d74e9f7931 100644
+--- a/drivers/power/supply/sbs-battery.c
++++ b/drivers/power/supply/sbs-battery.c
+@@ -314,17 +314,22 @@ static int sbs_get_battery_presence_and_health(
+ {
+ int ret;
+
+- if (psp == POWER_SUPPLY_PROP_PRESENT) {
+- /* Dummy command; if it succeeds, battery is present. */
+- ret = sbs_read_word_data(client, sbs_data[REG_STATUS].addr);
+- if (ret < 0)
+- val->intval = 0; /* battery disconnected */
+- else
+- val->intval = 1; /* battery present */
+- } else { /* POWER_SUPPLY_PROP_HEALTH */
++ /* Dummy command; if it succeeds, battery is present. */
++ ret = sbs_read_word_data(client, sbs_data[REG_STATUS].addr);
++
++ if (ret < 0) { /* battery not present */
++ if (psp == POWER_SUPPLY_PROP_PRESENT) {
++ val->intval = 0;
++ return 0;
++ }
++ return ret;
++ }
++
++ if (psp == POWER_SUPPLY_PROP_PRESENT)
++ val->intval = 1; /* battery present */
++ else /* POWER_SUPPLY_PROP_HEALTH */
+ /* SBS spec doesn't have a general health command. */
+ val->intval = POWER_SUPPLY_HEALTH_UNKNOWN;
+- }
+
+ return 0;
+ }
+@@ -620,12 +625,14 @@ static int sbs_get_property(struct power_supply *psy,
+ switch (psp) {
+ case POWER_SUPPLY_PROP_PRESENT:
+ case POWER_SUPPLY_PROP_HEALTH:
+- if (client->flags & SBS_FLAGS_TI_BQ20Z75)
++ if (chip->flags & SBS_FLAGS_TI_BQ20Z75)
+ ret = sbs_get_ti_battery_presence_and_health(client,
+ psp, val);
+ else
+ ret = sbs_get_battery_presence_and_health(client, psp,
+ val);
++
++ /* this can only be true if no gpio is used */
+ if (psp == POWER_SUPPLY_PROP_PRESENT)
+ return 0;
+ break;
+diff --git a/drivers/pwm/pwm-stm32-lp.c b/drivers/pwm/pwm-stm32-lp.c
+index 2211a642066d..97a9afa191ee 100644
+--- a/drivers/pwm/pwm-stm32-lp.c
++++ b/drivers/pwm/pwm-stm32-lp.c
+@@ -59,6 +59,12 @@ static int stm32_pwm_lp_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ /* Calculate the period and prescaler value */
+ div = (unsigned long long)clk_get_rate(priv->clk) * state->period;
+ do_div(div, NSEC_PER_SEC);
++ if (!div) {
++ /* Clock is too slow to achieve requested period. */
++ dev_dbg(priv->chip.dev, "Can't reach %u ns\n", state->period);
++ return -EINVAL;
++ }
++
+ prd = div;
+ while (div > STM32_LPTIM_MAX_ARR) {
+ presc++;
+diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c
+index fc53e1e221f0..c94184d080f8 100644
+--- a/drivers/s390/block/dasd_eckd.c
++++ b/drivers/s390/block/dasd_eckd.c
+@@ -1553,8 +1553,8 @@ static int dasd_eckd_read_vol_info(struct dasd_device *device)
+ if (rc == 0) {
+ memcpy(&private->vsq, vsq, sizeof(*vsq));
+ } else {
+- dev_warn(&device->cdev->dev,
+- "Reading the volume storage information failed with rc=%d\n", rc);
++ DBF_EVENT_DEVID(DBF_WARNING, device->cdev,
++ "Reading the volume storage information failed with rc=%d", rc);
+ }
+
+ if (useglobal)
+@@ -1737,8 +1737,8 @@ static int dasd_eckd_read_ext_pool_info(struct dasd_device *device)
+ if (rc == 0) {
+ dasd_eckd_cpy_ext_pool_data(device, lcq);
+ } else {
+- dev_warn(&device->cdev->dev,
+- "Reading the logical configuration failed with rc=%d\n", rc);
++ DBF_EVENT_DEVID(DBF_WARNING, device->cdev,
++ "Reading the logical configuration failed with rc=%d", rc);
+ }
+
+ dasd_sfree_request(cqr, cqr->memdev);
+@@ -2020,14 +2020,10 @@ dasd_eckd_check_characteristics(struct dasd_device *device)
+ dasd_eckd_read_features(device);
+
+ /* Read Volume Information */
+- rc = dasd_eckd_read_vol_info(device);
+- if (rc)
+- goto out_err3;
++ dasd_eckd_read_vol_info(device);
+
+ /* Read Extent Pool Information */
+- rc = dasd_eckd_read_ext_pool_info(device);
+- if (rc)
+- goto out_err3;
++ dasd_eckd_read_ext_pool_info(device);
+
+ /* Read Device Characteristics */
+ rc = dasd_generic_read_dev_chars(device, DASD_ECKD_MAGIC,
+@@ -2059,9 +2055,6 @@ dasd_eckd_check_characteristics(struct dasd_device *device)
+ if (readonly)
+ set_bit(DASD_FLAG_DEVICE_RO, &device->flags);
+
+- if (dasd_eckd_is_ese(device))
+- dasd_set_feature(device->cdev, DASD_FEATURE_DISCARD, 1);
+-
+ dev_info(&device->cdev->dev, "New DASD %04X/%02X (CU %04X/%02X) "
+ "with %d cylinders, %d heads, %d sectors%s\n",
+ private->rdc_data.dev_type,
+@@ -3695,14 +3688,6 @@ static int dasd_eckd_release_space(struct dasd_device *device,
+ return -EINVAL;
+ }
+
+-static struct dasd_ccw_req *
+-dasd_eckd_build_cp_discard(struct dasd_device *device, struct dasd_block *block,
+- struct request *req, sector_t first_trk,
+- sector_t last_trk)
+-{
+- return dasd_eckd_dso_ras(device, block, req, first_trk, last_trk, 1);
+-}
+-
+ static struct dasd_ccw_req *dasd_eckd_build_cp_cmd_single(
+ struct dasd_device *startdev,
+ struct dasd_block *block,
+@@ -4447,10 +4432,6 @@ static struct dasd_ccw_req *dasd_eckd_build_cp(struct dasd_device *startdev,
+ cmdwtd = private->features.feature[12] & 0x40;
+ use_prefix = private->features.feature[8] & 0x01;
+
+- if (req_op(req) == REQ_OP_DISCARD)
+- return dasd_eckd_build_cp_discard(startdev, block, req,
+- first_trk, last_trk);
+-
+ cqr = NULL;
+ if (cdlspecial || dasd_page_cache) {
+ /* do nothing, just fall through to the cmd mode single case */
+@@ -4729,14 +4710,12 @@ static struct dasd_ccw_req *dasd_eckd_build_alias_cp(struct dasd_device *base,
+ struct dasd_block *block,
+ struct request *req)
+ {
+- struct dasd_device *startdev = NULL;
+ struct dasd_eckd_private *private;
+- struct dasd_ccw_req *cqr;
++ struct dasd_device *startdev;
+ unsigned long flags;
++ struct dasd_ccw_req *cqr;
+
+- /* Discard requests can only be processed on base devices */
+- if (req_op(req) != REQ_OP_DISCARD)
+- startdev = dasd_alias_get_start_dev(base);
++ startdev = dasd_alias_get_start_dev(base);
+ if (!startdev)
+ startdev = base;
+ private = startdev->private;
+@@ -5663,14 +5642,10 @@ static int dasd_eckd_restore_device(struct dasd_device *device)
+ dasd_eckd_read_features(device);
+
+ /* Read Volume Information */
+- rc = dasd_eckd_read_vol_info(device);
+- if (rc)
+- goto out_err2;
++ dasd_eckd_read_vol_info(device);
+
+ /* Read Extent Pool Information */
+- rc = dasd_eckd_read_ext_pool_info(device);
+- if (rc)
+- goto out_err2;
++ dasd_eckd_read_ext_pool_info(device);
+
+ /* Read Device Characteristics */
+ rc = dasd_generic_read_dev_chars(device, DASD_ECKD_MAGIC,
+@@ -6521,20 +6496,8 @@ static void dasd_eckd_setup_blk_queue(struct dasd_block *block)
+ unsigned int logical_block_size = block->bp_block;
+ struct request_queue *q = block->request_queue;
+ struct dasd_device *device = block->base;
+- struct dasd_eckd_private *private;
+- unsigned int max_discard_sectors;
+- unsigned int max_bytes;
+- unsigned int ext_bytes; /* Extent Size in Bytes */
+- int recs_per_trk;
+- int trks_per_cyl;
+- int ext_limit;
+- int ext_size; /* Extent Size in Cylinders */
+ int max;
+
+- private = device->private;
+- trks_per_cyl = private->rdc_data.trk_per_cyl;
+- recs_per_trk = recs_per_track(&private->rdc_data, 0, logical_block_size);
+-
+ if (device->features & DASD_FEATURE_USERAW) {
+ /*
+ * the max_blocks value for raw_track access is 256
+@@ -6555,28 +6518,6 @@ static void dasd_eckd_setup_blk_queue(struct dasd_block *block)
+ /* With page sized segments each segment can be translated into one idaw/tidaw */
+ blk_queue_max_segment_size(q, PAGE_SIZE);
+ blk_queue_segment_boundary(q, PAGE_SIZE - 1);
+-
+- if (dasd_eckd_is_ese(device)) {
+- /*
+- * Depending on the extent size, up to UINT_MAX bytes can be
+- * accepted. However, neither DASD_ECKD_RAS_EXTS_MAX nor the
+- * device limits should be exceeded.
+- */
+- ext_size = dasd_eckd_ext_size(device);
+- ext_limit = min(private->real_cyl / ext_size, DASD_ECKD_RAS_EXTS_MAX);
+- ext_bytes = ext_size * trks_per_cyl * recs_per_trk *
+- logical_block_size;
+- max_bytes = UINT_MAX - (UINT_MAX % ext_bytes);
+- if (max_bytes / ext_bytes > ext_limit)
+- max_bytes = ext_bytes * ext_limit;
+-
+- max_discard_sectors = max_bytes / 512;
+-
+- blk_queue_max_discard_sectors(q, max_discard_sectors);
+- blk_queue_flag_set(QUEUE_FLAG_DISCARD, q);
+- q->limits.discard_granularity = ext_bytes;
+- q->limits.discard_alignment = ext_bytes;
+- }
+ }
+
+ static struct ccw_driver dasd_eckd_driver = {
+diff --git a/drivers/s390/char/sclp_early.c b/drivers/s390/char/sclp_early.c
+index e71992a3c55f..cc5e84b80c69 100644
+--- a/drivers/s390/char/sclp_early.c
++++ b/drivers/s390/char/sclp_early.c
+@@ -40,7 +40,7 @@ static void __init sclp_early_facilities_detect(struct read_info_sccb *sccb)
+ sclp.has_gisaf = !!(sccb->fac118 & 0x08);
+ sclp.has_hvs = !!(sccb->fac119 & 0x80);
+ sclp.has_kss = !!(sccb->fac98 & 0x01);
+- sclp.has_sipl = !!(sccb->cbl & 0x02);
++ sclp.has_sipl = !!(sccb->cbl & 0x4000);
+ if (sccb->fac85 & 0x02)
+ S390_lowcore.machine_flags |= MACHINE_FLAG_ESOP;
+ if (sccb->fac91 & 0x40)
+diff --git a/drivers/s390/cio/ccwgroup.c b/drivers/s390/cio/ccwgroup.c
+index c522e9313c50..ae66875a934d 100644
+--- a/drivers/s390/cio/ccwgroup.c
++++ b/drivers/s390/cio/ccwgroup.c
+@@ -372,7 +372,7 @@ int ccwgroup_create_dev(struct device *parent, struct ccwgroup_driver *gdrv,
+ goto error;
+ }
+ /* Check for trailing stuff. */
+- if (i == num_devices && strlen(buf) > 0) {
++ if (i == num_devices && buf && strlen(buf) > 0) {
+ rc = -EINVAL;
+ goto error;
+ }
+diff --git a/drivers/s390/cio/css.c b/drivers/s390/cio/css.c
+index 22c55816100b..1fbfb0a93f5f 100644
+--- a/drivers/s390/cio/css.c
++++ b/drivers/s390/cio/css.c
+@@ -1388,6 +1388,8 @@ device_initcall(cio_settle_init);
+
+ int sch_is_pseudo_sch(struct subchannel *sch)
+ {
++ if (!sch->dev.parent)
++ return 0;
+ return sch == to_css(sch->dev.parent)->pseudo_subchannel;
+ }
+
+diff --git a/drivers/staging/erofs/dir.c b/drivers/staging/erofs/dir.c
+index dbf6a151886c..b11cecd0a21d 100644
+--- a/drivers/staging/erofs/dir.c
++++ b/drivers/staging/erofs/dir.c
+@@ -99,8 +99,15 @@ static int erofs_readdir(struct file *f, struct dir_context *ctx)
+ unsigned int nameoff, maxsize;
+
+ dentry_page = read_mapping_page(mapping, i, NULL);
+- if (IS_ERR(dentry_page))
+- continue;
++ if (dentry_page == ERR_PTR(-ENOMEM)) {
++ err = -ENOMEM;
++ break;
++ } else if (IS_ERR(dentry_page)) {
++ errln("fail to readdir of logical block %u of nid %llu",
++ i, EROFS_V(dir)->nid);
++ err = PTR_ERR(dentry_page);
++ break;
++ }
+
+ de = (struct erofs_dirent *)kmap(dentry_page);
+
+diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c
+index f0dab81ff816..155cee68fed5 100644
+--- a/drivers/staging/erofs/unzip_vle.c
++++ b/drivers/staging/erofs/unzip_vle.c
+@@ -393,7 +393,11 @@ z_erofs_vle_work_lookup(const struct z_erofs_vle_work_finder *f)
+ /* if multiref is disabled, `primary' is always true */
+ primary = true;
+
+- DBG_BUGON(work->pageofs != f->pageofs);
++ if (work->pageofs != f->pageofs) {
++ DBG_BUGON(1);
++ erofs_workgroup_put(egrp);
++ return ERR_PTR(-EIO);
++ }
+
+ /*
+ * lock must be taken first to avoid grp->next == NIL between
+@@ -939,6 +943,7 @@ repeat:
+ for (i = 0; i < nr_pages; ++i)
+ pages[i] = NULL;
+
++ err = 0;
+ z_erofs_pagevec_ctor_init(&ctor, Z_EROFS_NR_INLINE_PAGEVECS,
+ work->pagevec, 0);
+
+@@ -960,8 +965,17 @@ repeat:
+ pagenr = z_erofs_onlinepage_index(page);
+
+ DBG_BUGON(pagenr >= nr_pages);
+- DBG_BUGON(pages[pagenr]);
+
++ /*
++ * currently EROFS doesn't support multiref (dedup),
++ * so error out here when a page is mapped more than once.
++ */
++ if (pages[pagenr]) {
++ DBG_BUGON(1);
++ SetPageError(pages[pagenr]);
++ z_erofs_onlinepage_endio(pages[pagenr]);
++ err = -EIO;
++ }
+ pages[pagenr] = page;
+ }
+ sparsemem_pages = i;
+@@ -971,7 +985,6 @@ repeat:
+ overlapped = false;
+ compressed_pages = grp->compressed_pages;
+
+- err = 0;
+ for (i = 0; i < clusterpages; ++i) {
+ unsigned int pagenr;
+
+@@ -995,7 +1008,12 @@ repeat:
+ pagenr = z_erofs_onlinepage_index(page);
+
+ DBG_BUGON(pagenr >= nr_pages);
+- DBG_BUGON(pages[pagenr]);
++ if (pages[pagenr]) {
++ DBG_BUGON(1);
++ SetPageError(pages[pagenr]);
++ z_erofs_onlinepage_endio(pages[pagenr]);
++ err = -EIO;
++ }
+ ++sparsemem_pages;
+ pages[pagenr] = page;
+
+@@ -1498,19 +1516,18 @@ static int z_erofs_vle_normalaccess_readpage(struct file *file,
+ err = z_erofs_do_read_page(&f, page, &pagepool);
+ (void)z_erofs_vle_work_iter_end(&f.builder);
+
+- if (err) {
++ /* if any compressed clusters are ready, submit them anyway */
++ z_erofs_submit_and_unzip(&f, &pagepool, true);
++
++ if (err)
+ errln("%s, failed to read, err [%d]", __func__, err);
+- goto out;
+- }
+
+- z_erofs_submit_and_unzip(&f, &pagepool, true);
+-out:
+ if (f.map.mpage)
+ put_page(f.map.mpage);
+
+ /* clean up the remaining free pages */
+ put_pages_list(&pagepool);
+- return 0;
++ return err;
+ }
+
+ static int z_erofs_vle_normalaccess_readpages(struct file *filp,
+diff --git a/drivers/staging/erofs/zmap.c b/drivers/staging/erofs/zmap.c
+index c2359321ca13..30e6d02d30de 100644
+--- a/drivers/staging/erofs/zmap.c
++++ b/drivers/staging/erofs/zmap.c
+@@ -350,6 +350,12 @@ static int vle_extent_lookback(struct z_erofs_maprecorder *m,
+
+ switch (m->type) {
+ case Z_EROFS_VLE_CLUSTER_TYPE_NONHEAD:
++ if (!m->delta[0]) {
++ errln("invalid lookback distance 0 at nid %llu",
++ vi->nid);
++ DBG_BUGON(1);
++ return -EIO;
++ }
+ return vle_extent_lookback(m, m->delta[0]);
+ case Z_EROFS_VLE_CLUSTER_TYPE_PLAIN:
+ map->m_flags &= ~EROFS_MAP_ZIPPED;
+diff --git a/drivers/thermal/qcom/tsens-8960.c b/drivers/thermal/qcom/tsens-8960.c
+index 8d9b721dadb6..e46a4e3f25c4 100644
+--- a/drivers/thermal/qcom/tsens-8960.c
++++ b/drivers/thermal/qcom/tsens-8960.c
+@@ -229,6 +229,8 @@ static int calibrate_8960(struct tsens_priv *priv)
+ for (i = 0; i < num_read; i++, s++)
+ s->offset = data[i];
+
++ kfree(data);
++
+ return 0;
+ }
+
+diff --git a/drivers/thermal/qcom/tsens-v0_1.c b/drivers/thermal/qcom/tsens-v0_1.c
+index 6f26fadf4c27..055647bcee67 100644
+--- a/drivers/thermal/qcom/tsens-v0_1.c
++++ b/drivers/thermal/qcom/tsens-v0_1.c
+@@ -145,8 +145,10 @@ static int calibrate_8916(struct tsens_priv *priv)
+ return PTR_ERR(qfprom_cdata);
+
+ qfprom_csel = (u32 *)qfprom_read(priv->dev, "calib_sel");
+- if (IS_ERR(qfprom_csel))
++ if (IS_ERR(qfprom_csel)) {
++ kfree(qfprom_cdata);
+ return PTR_ERR(qfprom_csel);
++ }
+
+ mode = (qfprom_csel[0] & MSM8916_CAL_SEL_MASK) >> MSM8916_CAL_SEL_SHIFT;
+ dev_dbg(priv->dev, "calibration mode is %d\n", mode);
+@@ -181,6 +183,8 @@ static int calibrate_8916(struct tsens_priv *priv)
+ }
+
+ compute_intercept_slope(priv, p1, p2, mode);
++ kfree(qfprom_cdata);
++ kfree(qfprom_csel);
+
+ return 0;
+ }
+@@ -198,8 +202,10 @@ static int calibrate_8974(struct tsens_priv *priv)
+ return PTR_ERR(calib);
+
+ bkp = (u32 *)qfprom_read(priv->dev, "calib_backup");
+- if (IS_ERR(bkp))
++ if (IS_ERR(bkp)) {
++ kfree(calib);
+ return PTR_ERR(bkp);
++ }
+
+ calib_redun_sel = bkp[1] & BKP_REDUN_SEL;
+ calib_redun_sel >>= BKP_REDUN_SHIFT;
+@@ -313,6 +319,8 @@ static int calibrate_8974(struct tsens_priv *priv)
+ }
+
+ compute_intercept_slope(priv, p1, p2, mode);
++ kfree(calib);
++ kfree(bkp);
+
+ return 0;
+ }
+diff --git a/drivers/thermal/qcom/tsens-v1.c b/drivers/thermal/qcom/tsens-v1.c
+index 10b595d4f619..870f502f2cb6 100644
+--- a/drivers/thermal/qcom/tsens-v1.c
++++ b/drivers/thermal/qcom/tsens-v1.c
+@@ -138,6 +138,7 @@ static int calibrate_v1(struct tsens_priv *priv)
+ }
+
+ compute_intercept_slope(priv, p1, p2, mode);
++ kfree(qfprom_cdata);
+
+ return 0;
+ }
+diff --git a/drivers/thermal/qcom/tsens.h b/drivers/thermal/qcom/tsens.h
+index 2fd94997245b..b89083b61c38 100644
+--- a/drivers/thermal/qcom/tsens.h
++++ b/drivers/thermal/qcom/tsens.h
+@@ -17,6 +17,7 @@
+
+ #include <linux/thermal.h>
+ #include <linux/regmap.h>
++#include <linux/slab.h>
+
+ struct tsens_priv;
+
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index 6bab66e84eb5..ebe15f2cf7fc 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -304,7 +304,7 @@ static void thermal_zone_device_set_polling(struct thermal_zone_device *tz,
+ &tz->poll_queue,
+ msecs_to_jiffies(delay));
+ else
+- cancel_delayed_work(&tz->poll_queue);
++ cancel_delayed_work_sync(&tz->poll_queue);
+ }
+
+ static void monitor_thermal_zone(struct thermal_zone_device *tz)
+diff --git a/drivers/thermal/thermal_hwmon.c b/drivers/thermal/thermal_hwmon.c
+index 40c69a533b24..dd5d8ee37928 100644
+--- a/drivers/thermal/thermal_hwmon.c
++++ b/drivers/thermal/thermal_hwmon.c
+@@ -87,13 +87,17 @@ static struct thermal_hwmon_device *
+ thermal_hwmon_lookup_by_type(const struct thermal_zone_device *tz)
+ {
+ struct thermal_hwmon_device *hwmon;
++ char type[THERMAL_NAME_LENGTH];
+
+ mutex_lock(&thermal_hwmon_list_lock);
+- list_for_each_entry(hwmon, &thermal_hwmon_list, node)
+- if (!strcmp(hwmon->type, tz->type)) {
++ list_for_each_entry(hwmon, &thermal_hwmon_list, node) {
++ strcpy(type, tz->type);
++ strreplace(type, '-', '_');
++ if (!strcmp(hwmon->type, type)) {
+ mutex_unlock(&thermal_hwmon_list_lock);
+ return hwmon;
+ }
++ }
+ mutex_unlock(&thermal_hwmon_list_lock);
+
+ return NULL;
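hwmon rejects device names containing '-', so the registration path stores the zone type with '-' replaced by '_'; the hunk applies the same normalization during lookup so that a zone with a dash in its type (the name below is illustrative) finds its existing hwmon device instead of registering a duplicate. A sketch of the fixed comparison, with a simplified stand-in for the kernel's strreplace():

    /* Normalize before comparing, as thermal_hwmon_lookup_by_type() now does. */
    #include <stdio.h>
    #include <string.h>

    static char *strreplace(char *s, char old, char new) /* simplified model */
    {
            for (char *p = s; *p; p++)
                    if (*p == old)
                            *p = new;
            return s;
    }

    int main(void)
    {
            const char *registered = "bq27000_battery"; /* what hwmon stored */
            char type[32];

            strcpy(type, "bq27000-battery"); /* tz->type, illustrative */
            strreplace(type, '-', '_');
            printf("match: %d\n", strcmp(registered, type) == 0); /* 1 */
            return 0;
    }
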
+diff --git a/drivers/watchdog/aspeed_wdt.c b/drivers/watchdog/aspeed_wdt.c
+index cc71861e033a..5b64bc2e8788 100644
+--- a/drivers/watchdog/aspeed_wdt.c
++++ b/drivers/watchdog/aspeed_wdt.c
+@@ -34,6 +34,7 @@ static const struct aspeed_wdt_config ast2500_config = {
+ static const struct of_device_id aspeed_wdt_of_table[] = {
+ { .compatible = "aspeed,ast2400-wdt", .data = &ast2400_config },
+ { .compatible = "aspeed,ast2500-wdt", .data = &ast2500_config },
++ { .compatible = "aspeed,ast2600-wdt", .data = &ast2500_config },
+ { },
+ };
+ MODULE_DEVICE_TABLE(of, aspeed_wdt_of_table);
+@@ -259,7 +260,8 @@ static int aspeed_wdt_probe(struct platform_device *pdev)
+ set_bit(WDOG_HW_RUNNING, &wdt->wdd.status);
+ }
+
+- if (of_device_is_compatible(np, "aspeed,ast2500-wdt")) {
++ if ((of_device_is_compatible(np, "aspeed,ast2500-wdt")) ||
++ (of_device_is_compatible(np, "aspeed,ast2600-wdt"))) {
+ u32 reg = readl(wdt->base + WDT_RESET_WIDTH);
+
+ reg &= config->ext_pulse_width_mask;
+diff --git a/drivers/watchdog/imx2_wdt.c b/drivers/watchdog/imx2_wdt.c
+index 32af3974e6bb..8d019a961ccc 100644
+--- a/drivers/watchdog/imx2_wdt.c
++++ b/drivers/watchdog/imx2_wdt.c
+@@ -55,7 +55,7 @@
+
+ #define IMX2_WDT_WMCR 0x08 /* Misc Register */
+
+-#define IMX2_WDT_MAX_TIME 128
++#define IMX2_WDT_MAX_TIME 128U
+ #define IMX2_WDT_DEFAULT_TIME 60 /* in seconds */
+
+ #define WDOG_SEC_TO_COUNT(s) ((s * 2 - 1) << 8)
+@@ -180,7 +180,7 @@ static int imx2_wdt_set_timeout(struct watchdog_device *wdog,
+ {
+ unsigned int actual;
+
+- actual = min(new_timeout, wdog->max_hw_heartbeat_ms * 1000);
++ actual = min(new_timeout, IMX2_WDT_MAX_TIME);
+ __imx2_wdt_set_timeout(wdog, actual);
+ wdog->timeout = new_timeout;
+ return 0;
+diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
+index 4e11de6cde81..91cba70b69df 100644
+--- a/drivers/xen/balloon.c
++++ b/drivers/xen/balloon.c
+@@ -688,6 +688,7 @@ static void __init balloon_add_region(unsigned long start_pfn,
+ /* totalram_pages and totalhigh_pages do not
+ include the boot-time balloon extension, so
+ don't subtract from it. */
++ __SetPageOffline(page);
+ __balloon_append(page);
+ }
+
+diff --git a/drivers/xen/pci.c b/drivers/xen/pci.c
+index 3eeb9bea7630..224df03ce42e 100644
+--- a/drivers/xen/pci.c
++++ b/drivers/xen/pci.c
+@@ -17,6 +17,8 @@
+ #include "../pci/pci.h"
+ #ifdef CONFIG_PCI_MMCONFIG
+ #include <asm/pci_x86.h>
++
++static int xen_mcfg_late(void);
+ #endif
+
+ static bool __read_mostly pci_seg_supported = true;
+@@ -28,7 +30,18 @@ static int xen_add_device(struct device *dev)
+ #ifdef CONFIG_PCI_IOV
+ struct pci_dev *physfn = pci_dev->physfn;
+ #endif
+-
++#ifdef CONFIG_PCI_MMCONFIG
++ static bool pci_mcfg_reserved = false;
++ /*
++ * Reserve MCFG areas in Xen on first invocation due to this being
++ * potentially called from inside acpi_init immediately after the
++ * MCFG table has finally been parsed.
++ */
++ if (!pci_mcfg_reserved) {
++ xen_mcfg_late();
++ pci_mcfg_reserved = true;
++ }
++#endif
+ if (pci_seg_supported) {
+ struct {
+ struct physdev_pci_device_add add;
+@@ -201,7 +214,7 @@ static int __init register_xen_pci_notifier(void)
+ arch_initcall(register_xen_pci_notifier);
+
+ #ifdef CONFIG_PCI_MMCONFIG
+-static int __init xen_mcfg_late(void)
++static int xen_mcfg_late(void)
+ {
+ struct pci_mmcfg_region *cfg;
+ int rc;
+@@ -240,8 +253,4 @@ static int __init xen_mcfg_late(void)
+ }
+ return 0;
+ }
+-/*
+- * Needs to be done after acpi_init which are subsys_initcall.
+- */
+-subsys_initcall_sync(xen_mcfg_late);
+ #endif
+diff --git a/drivers/xen/xenbus/xenbus_dev_frontend.c b/drivers/xen/xenbus/xenbus_dev_frontend.c
+index 08adc590f631..597af455a522 100644
+--- a/drivers/xen/xenbus/xenbus_dev_frontend.c
++++ b/drivers/xen/xenbus/xenbus_dev_frontend.c
+@@ -55,6 +55,7 @@
+ #include <linux/string.h>
+ #include <linux/slab.h>
+ #include <linux/miscdevice.h>
++#include <linux/workqueue.h>
+
+ #include <xen/xenbus.h>
+ #include <xen/xen.h>
+@@ -116,6 +117,8 @@ struct xenbus_file_priv {
+ wait_queue_head_t read_waitq;
+
+ struct kref kref;
++
++ struct work_struct wq;
+ };
+
+ /* Read out any raw xenbus messages queued up. */
+@@ -300,14 +303,14 @@ static void watch_fired(struct xenbus_watch *watch,
+ mutex_unlock(&adap->dev_data->reply_mutex);
+ }
+
+-static void xenbus_file_free(struct kref *kref)
++static void xenbus_worker(struct work_struct *wq)
+ {
+ struct xenbus_file_priv *u;
+ struct xenbus_transaction_holder *trans, *tmp;
+ struct watch_adapter *watch, *tmp_watch;
+ struct read_buffer *rb, *tmp_rb;
+
+- u = container_of(kref, struct xenbus_file_priv, kref);
++ u = container_of(wq, struct xenbus_file_priv, wq);
+
+ /*
+ * No need for locking here because there are no other users,
+@@ -333,6 +336,18 @@ static void xenbus_file_free(struct kref *kref)
+ kfree(u);
+ }
+
++static void xenbus_file_free(struct kref *kref)
++{
++ struct xenbus_file_priv *u;
++
++ /*
++ * We might be called in xenbus_thread().
++ * Use workqueue to avoid deadlock.
++ */
++ u = container_of(kref, struct xenbus_file_priv, kref);
++ schedule_work(&u->wq);
++}
++
+ static struct xenbus_transaction_holder *xenbus_get_transaction(
+ struct xenbus_file_priv *u, uint32_t tx_id)
+ {
+@@ -650,6 +665,7 @@ static int xenbus_file_open(struct inode *inode, struct file *filp)
+ INIT_LIST_HEAD(&u->watches);
+ INIT_LIST_HEAD(&u->read_buffers);
+ init_waitqueue_head(&u->read_waitq);
++ INIT_WORK(&u->wq, xenbus_worker);
+
+ mutex_init(&u->reply_mutex);
+ mutex_init(&u->msgbuffer_mutex);
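The xenbus hunks split teardown out of the kref release callback: dropping the last reference only schedules a work item, and the actual freeing of watches, transactions, and read buffers happens later in workqueue context, which avoids a deadlock when the final kref_put() comes from xenbus_thread() itself. A structural sketch of that deferred-release shape; there is no real concurrency here, the "workqueue" is just a stored callback run afterwards.

    /* Deferred-release pattern: release() queues, worker() frees. */
    #include <stdio.h>
    #include <stdlib.h>

    struct priv {
            int refcount;                   /* models struct kref */
            void (*work_fn)(struct priv *); /* models struct work_struct */
    };

    static void worker(struct priv *u)
    {
            puts("tearing down watches, transactions, buffers");
            free(u);
    }

    static void release(struct priv *u)
    {
            /* Do not free here: the caller may be the very thread the
             * teardown would have to wait on. Defer instead. */
            u->work_fn = worker; /* schedule_work() stand-in */
    }

    int main(void)
    {
            struct priv *u = calloc(1, sizeof(*u));

            u->refcount = 1;
            if (--u->refcount == 0) /* models kref_put() */
                    release(u);
            if (u->work_fn)
                    u->work_fn(u); /* the workqueue eventually runs this */
            return 0;
    }
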
+diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
+index 4cc966a31cb3..fe7f0bd2048e 100644
+--- a/fs/9p/vfs_file.c
++++ b/fs/9p/vfs_file.c
+@@ -513,6 +513,7 @@ v9fs_mmap_file_mmap(struct file *filp, struct vm_area_struct *vma)
+ v9inode = V9FS_I(inode);
+ mutex_lock(&v9inode->v_mutex);
+ if (!v9inode->writeback_fid &&
++ (vma->vm_flags & VM_SHARED) &&
+ (vma->vm_flags & VM_WRITE)) {
+ /*
+ * clone a fid and add it to writeback_fid
+@@ -614,6 +615,8 @@ static void v9fs_mmap_vm_close(struct vm_area_struct *vma)
+ (vma->vm_end - vma->vm_start - 1),
+ };
+
++ if (!(vma->vm_flags & VM_SHARED))
++ return;
+
+ p9_debug(P9_DEBUG_VFS, "9p VMA close, %p, flushing", vma);
+
+diff --git a/fs/btrfs/tests/btrfs-tests.c b/fs/btrfs/tests/btrfs-tests.c
+index 1e3ba4949399..814a918998ec 100644
+--- a/fs/btrfs/tests/btrfs-tests.c
++++ b/fs/btrfs/tests/btrfs-tests.c
+@@ -51,7 +51,13 @@ static struct file_system_type test_type = {
+
+ struct inode *btrfs_new_test_inode(void)
+ {
+- return new_inode(test_mnt->mnt_sb);
++ struct inode *inode;
++
++ inode = new_inode(test_mnt->mnt_sb);
++ if (inode)
++ inode_init_owner(inode, NULL, S_IFREG);
++
++ return inode;
+ }
+
+ static int btrfs_init_test_fs(void)
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index ce0f5658720a..8fd530112810 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -645,6 +645,7 @@ void ceph_add_cap(struct inode *inode,
+ struct ceph_cap *cap;
+ int mds = session->s_mds;
+ int actual_wanted;
++ u32 gen;
+
+ dout("add_cap %p mds%d cap %llx %s seq %d\n", inode,
+ session->s_mds, cap_id, ceph_cap_string(issued), seq);
+@@ -656,6 +657,10 @@ void ceph_add_cap(struct inode *inode,
+ if (fmode >= 0)
+ wanted |= ceph_caps_for_mode(fmode);
+
++ spin_lock(&session->s_gen_ttl_lock);
++ gen = session->s_cap_gen;
++ spin_unlock(&session->s_gen_ttl_lock);
++
+ cap = __get_cap_for_mds(ci, mds);
+ if (!cap) {
+ cap = *new_cap;
+@@ -681,7 +686,7 @@ void ceph_add_cap(struct inode *inode,
+ list_move_tail(&cap->session_caps, &session->s_caps);
+ spin_unlock(&session->s_cap_lock);
+
+- if (cap->cap_gen < session->s_cap_gen)
++ if (cap->cap_gen < gen)
+ cap->issued = cap->implemented = CEPH_CAP_PIN;
+
+ /*
+@@ -775,7 +780,7 @@ void ceph_add_cap(struct inode *inode,
+ cap->seq = seq;
+ cap->issue_seq = seq;
+ cap->mseq = mseq;
+- cap->cap_gen = session->s_cap_gen;
++ cap->cap_gen = gen;
+
+ if (fmode >= 0)
+ __ceph_get_fmode(ci, fmode);
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index 18500edefc56..3b537e7038c7 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -801,7 +801,12 @@ static int fill_inode(struct inode *inode, struct page *locked_page,
+
+ /* update inode */
+ inode->i_rdev = le32_to_cpu(info->rdev);
+- inode->i_blkbits = fls(le32_to_cpu(info->layout.fl_stripe_unit)) - 1;
++ /* directories have fl_stripe_unit set to zero */
++ if (le32_to_cpu(info->layout.fl_stripe_unit))
++ inode->i_blkbits =
++ fls(le32_to_cpu(info->layout.fl_stripe_unit)) - 1;
++ else
++ inode->i_blkbits = CEPH_BLOCK_SHIFT;
+
+ __ceph_update_quota(ci, iinfo->max_bytes, iinfo->max_files);
+
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 920e9f048bd8..b11af7d8e8e9 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -4044,7 +4044,9 @@ static void delayed_work(struct work_struct *work)
+ pr_info("mds%d hung\n", s->s_mds);
+ }
+ }
+- if (s->s_state < CEPH_MDS_SESSION_OPEN) {
++ if (s->s_state == CEPH_MDS_SESSION_NEW ||
++ s->s_state == CEPH_MDS_SESSION_RESTARTING ||
++ s->s_state == CEPH_MDS_SESSION_REJECTED) {
+ /* this mds is failed or recovering, just wait */
+ ceph_put_mds_session(s);
+ continue;
+diff --git a/fs/fuse/cuse.c b/fs/fuse/cuse.c
+index bab7a0db81dd..f3b720884650 100644
+--- a/fs/fuse/cuse.c
++++ b/fs/fuse/cuse.c
+@@ -519,6 +519,7 @@ static int cuse_channel_open(struct inode *inode, struct file *file)
+ rc = cuse_send_init(cc);
+ if (rc) {
+ fuse_dev_free(fud);
++ fuse_conn_put(&cc->fc);
+ return rc;
+ }
+ file->private_data = fud;
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index 987877860c01..f3104db3de83 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -823,9 +823,12 @@ static const struct super_operations fuse_super_operations = {
+
+ static void sanitize_global_limit(unsigned *limit)
+ {
++ /*
++ * The default maximum number of async requests is calculated to consume
++ * 1/2^13 of the total memory, assuming 392 bytes per request.
++ */
+ if (*limit == 0)
+- *limit = ((totalram_pages() << PAGE_SHIFT) >> 13) /
+- sizeof(struct fuse_req);
++ *limit = ((totalram_pages() << PAGE_SHIFT) >> 13) / 392;
+
+ if (*limit >= 1 << 16)
+ *limit = (1 << 16) - 1;
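The sanitize_global_limit() hunk pins the heuristic to 392 bytes per request, which appears to be the historical sizeof(struct fuse_req), so the default no longer grows as the structure is slimmed down by refactoring. A worked example of the resulting default; the helper mirrors the patched formula only.

    /* Default background-request limit: (RAM / 2^13) / 392, capped below 2^16. */
    #include <stdio.h>
    #include <stdint.h>

    static unsigned sanitize_global_limit(uint64_t totalram_bytes)
    {
            unsigned limit = (totalram_bytes >> 13) / 392;

            if (limit >= 1u << 16)
                    limit = (1u << 16) - 1;
            return limit;
    }

    int main(void)
    {
            printf("%u\n", sanitize_global_limit(8ULL << 30));   /* 8 GiB -> 2674 */
            printf("%u\n", sanitize_global_limit(512ULL << 30)); /* capped: 65535 */
            return 0;
    }
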
+diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
+index 46a8d636d151..ab07db0f07cd 100644
+--- a/fs/nfs/nfs4xdr.c
++++ b/fs/nfs/nfs4xdr.c
+@@ -1174,7 +1174,7 @@ static void encode_attrs(struct xdr_stream *xdr, const struct iattr *iap,
+ } else
+ *p++ = cpu_to_be32(NFS4_SET_TO_SERVER_TIME);
+ }
+- if (bmval[2] & FATTR4_WORD2_SECURITY_LABEL) {
++ if (label && (bmval[2] & FATTR4_WORD2_SECURITY_LABEL)) {
+ *p++ = cpu_to_be32(label->lfs);
+ *p++ = cpu_to_be32(label->pi);
+ *p++ = cpu_to_be32(label->len);
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 4525d5acae38..0418b198edd3 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -1449,10 +1449,15 @@ void pnfs_roc_release(struct nfs4_layoutreturn_args *args,
+ const nfs4_stateid *res_stateid = NULL;
+ struct nfs4_xdr_opaque_data *ld_private = args->ld_private;
+
+- if (ret == 0) {
+- arg_stateid = &args->stateid;
++ switch (ret) {
++ case -NFS4ERR_NOMATCHING_LAYOUT:
++ break;
++ case 0:
+ if (res->lrs_present)
+ res_stateid = &res->stateid;
++ /* Fallthrough */
++ default:
++ arg_stateid = &args->stateid;
+ }
+ pnfs_layoutreturn_free_lsegs(lo, arg_stateid, &args->range,
+ res_stateid);
+diff --git a/fs/statfs.c b/fs/statfs.c
+index eea7af6f2f22..2616424012ea 100644
+--- a/fs/statfs.c
++++ b/fs/statfs.c
+@@ -318,19 +318,10 @@ COMPAT_SYSCALL_DEFINE2(fstatfs, unsigned int, fd, struct compat_statfs __user *,
+ static int put_compat_statfs64(struct compat_statfs64 __user *ubuf, struct kstatfs *kbuf)
+ {
+ struct compat_statfs64 buf;
+- if (sizeof(ubuf->f_bsize) == 4) {
+- if ((kbuf->f_type | kbuf->f_bsize | kbuf->f_namelen |
+- kbuf->f_frsize | kbuf->f_flags) & 0xffffffff00000000ULL)
+- return -EOVERFLOW;
+- /* f_files and f_ffree may be -1; it's okay
+- * to stuff that into 32 bits */
+- if (kbuf->f_files != 0xffffffffffffffffULL
+- && (kbuf->f_files & 0xffffffff00000000ULL))
+- return -EOVERFLOW;
+- if (kbuf->f_ffree != 0xffffffffffffffffULL
+- && (kbuf->f_ffree & 0xffffffff00000000ULL))
+- return -EOVERFLOW;
+- }
++
++ if ((kbuf->f_bsize | kbuf->f_frsize) & 0xffffffff00000000ULL)
++ return -EOVERFLOW;
++
+ memset(&buf, 0, sizeof(struct compat_statfs64));
+ buf.f_type = kbuf->f_type;
+ buf.f_bsize = kbuf->f_bsize;
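The put_compat_statfs64() hunk simplifies the overflow test: f_files and f_ffree are 64-bit in struct compat_statfs64, and the remaining 32-bit fields such as f_type, f_namelen and f_flags hold inherently small values, so only f_bsize and f_frsize need a range check before the copy. The old checks could spuriously return -EOVERFLOW on large filesystems. A sketch of the remaining check:

    /* Only block sizes must still fit in 32 bits. */
    #include <stdio.h>
    #include <stdint.h>

    static int fits_in_32(uint64_t bsize, uint64_t frsize)
    {
            if ((bsize | frsize) & 0xffffffff00000000ULL)
                    return -1; /* stands in for -EOVERFLOW */
            return 0;
    }

    int main(void)
    {
            printf("%d\n", fits_in_32(4096, 4096));       /* 0 */
            printf("%d\n", fits_in_32(1ULL << 32, 4096)); /* -1 */
            return 0;
    }
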
+diff --git a/include/linux/memremap.h b/include/linux/memremap.h
+index f8a5b2a19945..c70996fe48c8 100644
+--- a/include/linux/memremap.h
++++ b/include/linux/memremap.h
+@@ -17,6 +17,7 @@ struct device;
+ */
+ struct vmem_altmap {
+ const unsigned long base_pfn;
++ const unsigned long end_pfn;
+ const unsigned long reserve;
+ unsigned long free;
+ unsigned long align;
+diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
+index 4a7944078cc3..8557ec664213 100644
+--- a/include/linux/sched/mm.h
++++ b/include/linux/sched/mm.h
+@@ -362,6 +362,8 @@ enum {
+
+ static inline void membarrier_mm_sync_core_before_usermode(struct mm_struct *mm)
+ {
++ if (current->mm != mm)
++ return;
+ if (likely(!(atomic_read(&mm->membarrier_state) &
+ MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE)))
+ return;
+diff --git a/include/sound/soc-dapm.h b/include/sound/soc-dapm.h
+index c00a0b8ade08..6c6694160130 100644
+--- a/include/sound/soc-dapm.h
++++ b/include/sound/soc-dapm.h
+@@ -353,6 +353,8 @@ struct device;
+ #define SND_SOC_DAPM_WILL_PMD 0x80 /* called at start of sequence */
+ #define SND_SOC_DAPM_PRE_POST_PMD \
+ (SND_SOC_DAPM_PRE_PMD | SND_SOC_DAPM_POST_PMD)
++#define SND_SOC_DAPM_PRE_POST_PMU \
++ (SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMU)
+
+ /* convenience event type detection */
+ #define SND_SOC_DAPM_EVENT_ON(e) \
+diff --git a/include/trace/events/writeback.h b/include/trace/events/writeback.h
+index aa7f3aeac740..79095434c1be 100644
+--- a/include/trace/events/writeback.h
++++ b/include/trace/events/writeback.h
+@@ -66,8 +66,9 @@ DECLARE_EVENT_CLASS(writeback_page_template,
+ ),
+
+ TP_fast_assign(
+- strncpy(__entry->name,
+- mapping ? dev_name(inode_to_bdi(mapping->host)->dev) : "(unknown)", 32);
++ strscpy_pad(__entry->name,
++ mapping ? dev_name(inode_to_bdi(mapping->host)->dev) : "(unknown)",
++ 32);
+ __entry->ino = mapping ? mapping->host->i_ino : 0;
+ __entry->index = page->index;
+ ),
+@@ -110,8 +111,8 @@ DECLARE_EVENT_CLASS(writeback_dirty_inode_template,
+ struct backing_dev_info *bdi = inode_to_bdi(inode);
+
+ /* may be called for files on pseudo FSes w/ unregistered bdi */
+- strncpy(__entry->name,
+- bdi->dev ? dev_name(bdi->dev) : "(unknown)", 32);
++ strscpy_pad(__entry->name,
++ bdi->dev ? dev_name(bdi->dev) : "(unknown)", 32);
+ __entry->ino = inode->i_ino;
+ __entry->state = inode->i_state;
+ __entry->flags = flags;
+@@ -190,8 +191,8 @@ DECLARE_EVENT_CLASS(writeback_write_inode_template,
+ ),
+
+ TP_fast_assign(
+- strncpy(__entry->name,
+- dev_name(inode_to_bdi(inode)->dev), 32);
++ strscpy_pad(__entry->name,
++ dev_name(inode_to_bdi(inode)->dev), 32);
+ __entry->ino = inode->i_ino;
+ __entry->sync_mode = wbc->sync_mode;
+ __entry->cgroup_ino = __trace_wbc_assign_cgroup(wbc);
+@@ -234,8 +235,9 @@ DECLARE_EVENT_CLASS(writeback_work_class,
+ __field(unsigned int, cgroup_ino)
+ ),
+ TP_fast_assign(
+- strncpy(__entry->name,
+- wb->bdi->dev ? dev_name(wb->bdi->dev) : "(unknown)", 32);
++ strscpy_pad(__entry->name,
++ wb->bdi->dev ? dev_name(wb->bdi->dev) :
++ "(unknown)", 32);
+ __entry->nr_pages = work->nr_pages;
+ __entry->sb_dev = work->sb ? work->sb->s_dev : 0;
+ __entry->sync_mode = work->sync_mode;
+@@ -288,7 +290,7 @@ DECLARE_EVENT_CLASS(writeback_class,
+ __field(unsigned int, cgroup_ino)
+ ),
+ TP_fast_assign(
+- strncpy(__entry->name, dev_name(wb->bdi->dev), 32);
++ strscpy_pad(__entry->name, dev_name(wb->bdi->dev), 32);
+ __entry->cgroup_ino = __trace_wb_assign_cgroup(wb);
+ ),
+ TP_printk("bdi %s: cgroup_ino=%u",
+@@ -310,7 +312,7 @@ TRACE_EVENT(writeback_bdi_register,
+ __array(char, name, 32)
+ ),
+ TP_fast_assign(
+- strncpy(__entry->name, dev_name(bdi->dev), 32);
++ strscpy_pad(__entry->name, dev_name(bdi->dev), 32);
+ ),
+ TP_printk("bdi %s",
+ __entry->name
+@@ -335,7 +337,7 @@ DECLARE_EVENT_CLASS(wbc_class,
+ ),
+
+ TP_fast_assign(
+- strncpy(__entry->name, dev_name(bdi->dev), 32);
++ strscpy_pad(__entry->name, dev_name(bdi->dev), 32);
+ __entry->nr_to_write = wbc->nr_to_write;
+ __entry->pages_skipped = wbc->pages_skipped;
+ __entry->sync_mode = wbc->sync_mode;
+@@ -386,7 +388,7 @@ TRACE_EVENT(writeback_queue_io,
+ ),
+ TP_fast_assign(
+ unsigned long *older_than_this = work->older_than_this;
+- strncpy(__entry->name, dev_name(wb->bdi->dev), 32);
++ strscpy_pad(__entry->name, dev_name(wb->bdi->dev), 32);
+ __entry->older = older_than_this ? *older_than_this : 0;
+ __entry->age = older_than_this ?
+ (jiffies - *older_than_this) * 1000 / HZ : -1;
+@@ -472,7 +474,7 @@ TRACE_EVENT(bdi_dirty_ratelimit,
+ ),
+
+ TP_fast_assign(
+- strlcpy(__entry->bdi, dev_name(wb->bdi->dev), 32);
++ strscpy_pad(__entry->bdi, dev_name(wb->bdi->dev), 32);
+ __entry->write_bw = KBps(wb->write_bandwidth);
+ __entry->avg_write_bw = KBps(wb->avg_write_bandwidth);
+ __entry->dirty_rate = KBps(dirty_rate);
+@@ -537,7 +539,7 @@ TRACE_EVENT(balance_dirty_pages,
+
+ TP_fast_assign(
+ unsigned long freerun = (thresh + bg_thresh) / 2;
+- strlcpy(__entry->bdi, dev_name(wb->bdi->dev), 32);
++ strscpy_pad(__entry->bdi, dev_name(wb->bdi->dev), 32);
+
+ __entry->limit = global_wb_domain.dirty_limit;
+ __entry->setpoint = (global_wb_domain.dirty_limit +
+@@ -597,8 +599,8 @@ TRACE_EVENT(writeback_sb_inodes_requeue,
+ ),
+
+ TP_fast_assign(
+- strncpy(__entry->name,
+- dev_name(inode_to_bdi(inode)->dev), 32);
++ strscpy_pad(__entry->name,
++ dev_name(inode_to_bdi(inode)->dev), 32);
+ __entry->ino = inode->i_ino;
+ __entry->state = inode->i_state;
+ __entry->dirtied_when = inode->dirtied_when;
+@@ -671,8 +673,8 @@ DECLARE_EVENT_CLASS(writeback_single_inode_template,
+ ),
+
+ TP_fast_assign(
+- strncpy(__entry->name,
+- dev_name(inode_to_bdi(inode)->dev), 32);
++ strscpy_pad(__entry->name,
++ dev_name(inode_to_bdi(inode)->dev), 32);
+ __entry->ino = inode->i_ino;
+ __entry->state = inode->i_state;
+ __entry->dirtied_when = inode->dirtied_when;
+diff --git a/include/uapi/linux/sched.h b/include/uapi/linux/sched.h
+index b3105ac1381a..851ff1feadd5 100644
+--- a/include/uapi/linux/sched.h
++++ b/include/uapi/linux/sched.h
+@@ -33,6 +33,7 @@
+ #define CLONE_NEWNET 0x40000000 /* New network namespace */
+ #define CLONE_IO 0x80000000 /* Clone io context */
+
++#ifndef __ASSEMBLY__
+ /*
+ * Arguments for the clone3 syscall
+ */
+@@ -46,6 +47,7 @@ struct clone_args {
+ __aligned_u64 stack_size;
+ __aligned_u64 tls;
+ };
++#endif
+
+ /*
+ * Scheduling policies
+diff --git a/kernel/elfcore.c b/kernel/elfcore.c
+index fc482c8e0bd8..57fb4dcff434 100644
+--- a/kernel/elfcore.c
++++ b/kernel/elfcore.c
+@@ -3,6 +3,7 @@
+ #include <linux/fs.h>
+ #include <linux/mm.h>
+ #include <linux/binfmts.h>
++#include <linux/elfcore.h>
+
+ Elf_Half __weak elf_core_extra_phdrs(void)
+ {
+diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
+index 89bab079e7a4..e84d21aa0722 100644
+--- a/kernel/locking/qspinlock_paravirt.h
++++ b/kernel/locking/qspinlock_paravirt.h
+@@ -269,7 +269,7 @@ pv_wait_early(struct pv_node *prev, int loop)
+ if ((loop & PV_PREV_CHECK_MASK) != 0)
+ return false;
+
+- return READ_ONCE(prev->state) != vcpu_running || vcpu_is_preempted(prev->cpu);
++ return READ_ONCE(prev->state) != vcpu_running;
+ }
+
+ /*
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index d38f007afea7..fffe790d98bb 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1537,7 +1537,8 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
+ if (cpumask_equal(p->cpus_ptr, new_mask))
+ goto out;
+
+- if (!cpumask_intersects(new_mask, cpu_valid_mask)) {
++ dest_cpu = cpumask_any_and(cpu_valid_mask, new_mask);
++ if (dest_cpu >= nr_cpu_ids) {
+ ret = -EINVAL;
+ goto out;
+ }
+@@ -1558,7 +1559,6 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
+ if (cpumask_test_cpu(task_cpu(p), new_mask))
+ goto out;
+
+- dest_cpu = cpumask_any_and(cpu_valid_mask, new_mask);
+ if (task_running(rq, p) || p->state == TASK_WAKING) {
+ struct migration_arg arg = { p, dest_cpu };
+ /* Need help from migration thread: drop lock and wait. */
+diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
+index aa8d75804108..5110d91b1b0e 100644
+--- a/kernel/sched/membarrier.c
++++ b/kernel/sched/membarrier.c
+@@ -226,7 +226,7 @@ static int membarrier_register_private_expedited(int flags)
+ * groups, which use the same mm. (CLONE_VM but not
+ * CLONE_THREAD).
+ */
+- if (atomic_read(&mm->membarrier_state) & state)
++ if ((atomic_read(&mm->membarrier_state) & state) == state)
+ return 0;
+ atomic_or(MEMBARRIER_STATE_PRIVATE_EXPEDITED, &mm->membarrier_state);
+ if (flags & MEMBARRIER_FLAG_SYNC_CORE)
+diff --git a/kernel/time/tick-broadcast-hrtimer.c b/kernel/time/tick-broadcast-hrtimer.c
+index 5be6154e2fd2..99fbfb8d9117 100644
+--- a/kernel/time/tick-broadcast-hrtimer.c
++++ b/kernel/time/tick-broadcast-hrtimer.c
+@@ -42,34 +42,39 @@ static int bc_shutdown(struct clock_event_device *evt)
+ */
+ static int bc_set_next(ktime_t expires, struct clock_event_device *bc)
+ {
+- int bc_moved;
+ /*
+- * We try to cancel the timer first. If the callback is on
+- * flight on some other cpu then we let it handle it. If we
+- * were able to cancel the timer nothing can rearm it as we
+- * own broadcast_lock.
++ * This is called either from enter/exit idle code or from the
++ * broadcast handler. In all cases tick_broadcast_lock is held.
+ *
+- * However we can also be called from the event handler of
+- * ce_broadcast_hrtimer itself when it expires. We cannot
+- * restart the timer because we are in the callback, but we
+- * can set the expiry time and let the callback return
+- * HRTIMER_RESTART.
++ * hrtimer_cancel() can be called here neither from the
++ * broadcast handler nor from the enter/exit idle code. The idle
++ * code can run into the problem described in bc_shutdown() and the
++ * broadcast handler cannot wait for itself to complete for obvious
++ * reasons.
+ *
+- * Since we are in the idle loop at this point and because
+- * hrtimer_{start/cancel} functions call into tracing,
+- * calls to these functions must be bound within RCU_NONIDLE.
++ * Each caller tries to arm the hrtimer on its own CPU, but if the
++ * hrtimer callback function is currently running, then
++ * hrtimer_start() cannot move it and the timer stays on the CPU on
++ * which it is assigned at the moment.
++ *
++ * As this can be called from idle code, the hrtimer_start()
++ * invocation has to be wrapped with RCU_NONIDLE() as
++ * hrtimer_start() can call into tracing.
+ */
+- RCU_NONIDLE({
+- bc_moved = hrtimer_try_to_cancel(&bctimer) >= 0;
+- if (bc_moved)
+- hrtimer_start(&bctimer, expires,
+- HRTIMER_MODE_ABS_PINNED);});
+- if (bc_moved) {
+- /* Bind the "device" to the cpu */
+- bc->bound_on = smp_processor_id();
+- } else if (bc->bound_on == smp_processor_id()) {
+- hrtimer_set_expires(&bctimer, expires);
+- }
++ RCU_NONIDLE( {
++ hrtimer_start(&bctimer, expires, HRTIMER_MODE_ABS_PINNED);
++ /*
++ * The core tick broadcast mode expects bc->bound_on to be set
++ * correctly to prevent a CPU which has the broadcast hrtimer
++ * armed from going deep idle.
++ *
++ * As tick_broadcast_lock is held, nothing can change the cpu
++ * base which was just established in hrtimer_start() above. So
++ * the below access is safe even without holding the hrtimer
++ * base lock.
++ */
++ bc->bound_on = bctimer.base->cpu_base->cpu;
++ } );
+ return 0;
+ }
+
+@@ -95,10 +100,6 @@ static enum hrtimer_restart bc_handler(struct hrtimer *t)
+ {
+ ce_broadcast_hrtimer.event_handler(&ce_broadcast_hrtimer);
+
+- if (clockevent_state_oneshot(&ce_broadcast_hrtimer))
+- if (ce_broadcast_hrtimer.next_event != KTIME_MAX)
+- return HRTIMER_RESTART;
+-
+ return HRTIMER_NORESTART;
+ }
+
+diff --git a/kernel/time/timer.c b/kernel/time/timer.c
+index 343c7ba33b1c..7d63b7347066 100644
+--- a/kernel/time/timer.c
++++ b/kernel/time/timer.c
+@@ -1593,24 +1593,26 @@ void timer_clear_idle(void)
+ static int collect_expired_timers(struct timer_base *base,
+ struct hlist_head *heads)
+ {
++ unsigned long now = READ_ONCE(jiffies);
++
+ /*
+ * NOHZ optimization. After a long idle sleep we need to forward the
+ * base to current jiffies. Avoid a loop by searching the bitfield for
+ * the next expiring timer.
+ */
+- if ((long)(jiffies - base->clk) > 2) {
++ if ((long)(now - base->clk) > 2) {
+ unsigned long next = __next_timer_interrupt(base);
+
+ /*
+ * If the next timer is ahead of time forward to current
+ * jiffies, otherwise forward to the next expiry time:
+ */
+- if (time_after(next, jiffies)) {
++ if (time_after(next, now)) {
+ /*
+ * The call site will increment base->clk and then
+ * terminate the expiry loop immediately.
+ */
+- base->clk = jiffies;
++ base->clk = now;
+ return 0;
+ }
+ base->clk = next;
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index ca1255d14576..3e38a010003c 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -500,14 +500,17 @@ static const struct bpf_func_proto bpf_perf_event_output_proto = {
+ .arg5_type = ARG_CONST_SIZE_OR_ZERO,
+ };
+
+-static DEFINE_PER_CPU(struct pt_regs, bpf_pt_regs);
+-static DEFINE_PER_CPU(struct perf_sample_data, bpf_misc_sd);
++static DEFINE_PER_CPU(int, bpf_event_output_nest_level);
++struct bpf_nested_pt_regs {
++ struct pt_regs regs[3];
++};
++static DEFINE_PER_CPU(struct bpf_nested_pt_regs, bpf_pt_regs);
++static DEFINE_PER_CPU(struct bpf_trace_sample_data, bpf_misc_sds);
+
+ u64 bpf_event_output(struct bpf_map *map, u64 flags, void *meta, u64 meta_size,
+ void *ctx, u64 ctx_size, bpf_ctx_copy_t ctx_copy)
+ {
+- struct perf_sample_data *sd = this_cpu_ptr(&bpf_misc_sd);
+- struct pt_regs *regs = this_cpu_ptr(&bpf_pt_regs);
++ int nest_level = this_cpu_inc_return(bpf_event_output_nest_level);
+ struct perf_raw_frag frag = {
+ .copy = ctx_copy,
+ .size = ctx_size,
+@@ -522,12 +525,25 @@ u64 bpf_event_output(struct bpf_map *map, u64 flags, void *meta, u64 meta_size,
+ .data = meta,
+ },
+ };
++ struct perf_sample_data *sd;
++ struct pt_regs *regs;
++ u64 ret;
++
++ if (WARN_ON_ONCE(nest_level > ARRAY_SIZE(bpf_misc_sds.sds))) {
++ ret = -EBUSY;
++ goto out;
++ }
++ sd = this_cpu_ptr(&bpf_misc_sds.sds[nest_level - 1]);
++ regs = this_cpu_ptr(&bpf_pt_regs.regs[nest_level - 1]);
+
+ perf_fetch_caller_regs(regs);
+ perf_sample_data_init(sd, 0, 0);
+ sd->raw = &raw;
+
+- return __bpf_perf_event_output(regs, map, flags, sd);
++ ret = __bpf_perf_event_output(regs, map, flags, sd);
++out:
++ this_cpu_dec(bpf_event_output_nest_level);
++ return ret;
+ }
+
+ BPF_CALL_0(bpf_get_current_task)
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index ca6b0dff60c5..dd310d3b5843 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -2785,6 +2785,8 @@ static struct hist_field *create_alias(struct hist_trigger_data *hist_data,
+ return NULL;
+ }
+
++ alias->var_ref_idx = var_ref->var_ref_idx;
++
+ return alias;
+ }
+
+diff --git a/mm/usercopy.c b/mm/usercopy.c
+index 98e924864554..660717a1ea5c 100644
+--- a/mm/usercopy.c
++++ b/mm/usercopy.c
+@@ -11,6 +11,7 @@
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+ #include <linux/mm.h>
++#include <linux/highmem.h>
+ #include <linux/slab.h>
+ #include <linux/sched.h>
+ #include <linux/sched/task.h>
+@@ -227,7 +228,12 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
+ if (!virt_addr_valid(ptr))
+ return;
+
+- page = virt_to_head_page(ptr);
++ /*
++ * When CONFIG_HIGHMEM=y, kmap_to_page() will give either the
++ * highmem page or fallback to virt_to_page(). The following
++ * is effectively a highmem-aware virt_to_head_page().
++ */
++ page = compound_head(kmap_to_page((void *)ptr));
+
+ if (PageSlab(page)) {
+ /* Check slab allocator for flags and size. */
+diff --git a/net/9p/client.c b/net/9p/client.c
+index 9622f3e469f6..1d48afc7033c 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -281,6 +281,7 @@ p9_tag_alloc(struct p9_client *c, int8_t type, unsigned int max_size)
+
+ p9pdu_reset(&req->tc);
+ p9pdu_reset(&req->rc);
++ req->t_err = 0;
+ req->status = REQ_STATUS_ALLOC;
+ init_waitqueue_head(&req->wq);
+ INIT_LIST_HEAD(&req->req_list);
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index ad1e58184c4e..21212faec6d0 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -247,7 +247,8 @@ static void __ieee80211_wake_txqs(struct ieee80211_sub_if_data *sdata, int ac)
+ struct sta_info *sta;
+ int i;
+
+- spin_lock_bh(&fq->lock);
++ local_bh_disable();
++ spin_lock(&fq->lock);
+
+ if (sdata->vif.type == NL80211_IFTYPE_AP)
+ ps = &sdata->bss->ps;
+@@ -273,9 +274,9 @@ static void __ieee80211_wake_txqs(struct ieee80211_sub_if_data *sdata, int ac)
+ &txqi->flags))
+ continue;
+
+- spin_unlock_bh(&fq->lock);
++ spin_unlock(&fq->lock);
+ drv_wake_tx_queue(local, txqi);
+- spin_lock_bh(&fq->lock);
++ spin_lock(&fq->lock);
+ }
+ }
+
+@@ -288,12 +289,14 @@ static void __ieee80211_wake_txqs(struct ieee80211_sub_if_data *sdata, int ac)
+ (ps && atomic_read(&ps->num_sta_ps)) || ac != vif->txq->ac)
+ goto out;
+
+- spin_unlock_bh(&fq->lock);
++ spin_unlock(&fq->lock);
+
+ drv_wake_tx_queue(local, txqi);
++ local_bh_enable();
+ return;
+ out:
+- spin_unlock_bh(&fq->lock);
++ spin_unlock(&fq->lock);
++ local_bh_enable();
+ }
+
+ static void
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index d47469f824a1..3b81323fa017 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -3562,8 +3562,11 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk,
+ NFT_SET_OBJECT))
+ return -EINVAL;
+ /* Only one of these operations is supported */
+- if ((flags & (NFT_SET_MAP | NFT_SET_EVAL | NFT_SET_OBJECT)) ==
+- (NFT_SET_MAP | NFT_SET_EVAL | NFT_SET_OBJECT))
++ if ((flags & (NFT_SET_MAP | NFT_SET_OBJECT)) ==
++ (NFT_SET_MAP | NFT_SET_OBJECT))
++ return -EOPNOTSUPP;
++ if ((flags & (NFT_SET_EVAL | NFT_SET_OBJECT)) ==
++ (NFT_SET_EVAL | NFT_SET_OBJECT))
+ return -EOPNOTSUPP;
+ }
+
+diff --git a/net/netfilter/nft_lookup.c b/net/netfilter/nft_lookup.c
+index c0560bf3c31b..660bad688e2b 100644
+--- a/net/netfilter/nft_lookup.c
++++ b/net/netfilter/nft_lookup.c
+@@ -73,9 +73,6 @@ static int nft_lookup_init(const struct nft_ctx *ctx,
+ if (IS_ERR(set))
+ return PTR_ERR(set);
+
+- if (set->flags & NFT_SET_EVAL)
+- return -EOPNOTSUPP;
+-
+ priv->sreg = nft_parse_register(tb[NFTA_LOOKUP_SREG]);
+ err = nft_validate_register_load(priv->sreg, set->klen);
+ if (err < 0)
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index 7a75f34ad393..f7f78566be46 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -1837,7 +1837,7 @@ call_allocate(struct rpc_task *task)
+ return;
+ }
+
+- rpc_exit(task, -ERESTARTSYS);
++ rpc_call_rpcerror(task, -ERESTARTSYS);
+ }
+
+ static int
+@@ -2482,6 +2482,7 @@ call_decode(struct rpc_task *task)
+ struct rpc_clnt *clnt = task->tk_client;
+ struct rpc_rqst *req = task->tk_rqstp;
+ struct xdr_stream xdr;
++ int err;
+
+ dprint_status(task);
+
+@@ -2504,6 +2505,15 @@ call_decode(struct rpc_task *task)
+ * before it changed req->rq_reply_bytes_recvd.
+ */
+ smp_rmb();
++
++ /*
++ * Did we ever call xprt_complete_rqst()? If not, we should assume
++ * the message is incomplete.
++ */
++ err = -EAGAIN;
++ if (!req->rq_reply_bytes_recvd)
++ goto out;
++
+ req->rq_rcv_buf.len = req->rq_private_buf.len;
+
+ /* Check that the softirq receive buffer is valid */
+@@ -2512,7 +2522,9 @@ call_decode(struct rpc_task *task)
+
+ xdr_init_decode(&xdr, &req->rq_rcv_buf,
+ req->rq_rcv_buf.head[0].iov_base, req);
+- switch (rpc_decode_header(task, &xdr)) {
++ err = rpc_decode_header(task, &xdr);
++out:
++ switch (err) {
+ case 0:
+ task->tk_action = rpc_exit_task;
+ task->tk_status = rpcauth_unwrap_resp(task, &xdr);
+@@ -2561,7 +2573,7 @@ rpc_encode_header(struct rpc_task *task, struct xdr_stream *xdr)
+ return 0;
+ out_fail:
+ trace_rpc_bad_callhdr(task);
+- rpc_exit(task, error);
++ rpc_call_rpcerror(task, error);
+ return error;
+ }
+
+@@ -2628,7 +2640,7 @@ out_garbage:
+ return -EAGAIN;
+ }
+ out_err:
+- rpc_exit(task, error);
++ rpc_call_rpcerror(task, error);
+ return error;
+
+ out_unparsable:
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index 1f275aba786f..53934fe73a9d 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -930,8 +930,10 @@ static void __rpc_execute(struct rpc_task *task)
+ /*
+ * Signalled tasks should exit rather than sleep.
+ */
+- if (RPC_SIGNALLED(task))
++ if (RPC_SIGNALLED(task)) {
++ task->tk_rpc_status = -ERESTARTSYS;
+ rpc_exit(task, -ERESTARTSYS);
++ }
+
+ /*
+ * The queue->lock protects against races with
+@@ -967,6 +969,7 @@ static void __rpc_execute(struct rpc_task *task)
+ */
+ dprintk("RPC: %5u got signal\n", task->tk_pid);
+ set_bit(RPC_TASK_SIGNALLED, &task->tk_runstate);
++ task->tk_rpc_status = -ERESTARTSYS;
+ rpc_exit(task, -ERESTARTSYS);
+ }
+ dprintk("RPC: %5u sync task resuming\n", task->tk_pid);
+diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
+index 2ec349ed4770..f4763e8a6761 100644
+--- a/net/sunrpc/xprtrdma/transport.c
++++ b/net/sunrpc/xprtrdma/transport.c
+@@ -571,6 +571,7 @@ xprt_rdma_alloc_slot(struct rpc_xprt *xprt, struct rpc_task *task)
+ return;
+
+ out_sleep:
++ set_bit(XPRT_CONGESTED, &xprt->state);
+ rpc_sleep_on(&xprt->backlog, task, NULL);
+ task->tk_status = -EAGAIN;
+ }
+@@ -589,7 +590,8 @@ xprt_rdma_free_slot(struct rpc_xprt *xprt, struct rpc_rqst *rqst)
+
+ memset(rqst, 0, sizeof(*rqst));
+ rpcrdma_buffer_put(&r_xprt->rx_buf, rpcr_to_rdmar(rqst));
+- rpc_wake_up_next(&xprt->backlog);
++ if (unlikely(!rpc_wake_up_next(&xprt->backlog)))
++ clear_bit(XPRT_CONGESTED, &xprt->state);
+ }
+
+ static bool rpcrdma_check_regbuf(struct rpcrdma_xprt *r_xprt,
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index 805b1f35e1ca..2bd9b4de0e32 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -605,10 +605,10 @@ void rpcrdma_ep_destroy(struct rpcrdma_xprt *r_xprt)
+ * Unlike a normal reconnection, a fresh PD and a new set
+ * of MRs and buffers is needed.
+ */
+-static int
+-rpcrdma_ep_recreate_xprt(struct rpcrdma_xprt *r_xprt,
+- struct rpcrdma_ep *ep, struct rpcrdma_ia *ia)
++static int rpcrdma_ep_recreate_xprt(struct rpcrdma_xprt *r_xprt,
++ struct ib_qp_init_attr *qp_init_attr)
+ {
++ struct rpcrdma_ia *ia = &r_xprt->rx_ia;
+ int rc, err;
+
+ trace_xprtrdma_reinsert(r_xprt);
+@@ -625,7 +625,7 @@ rpcrdma_ep_recreate_xprt(struct rpcrdma_xprt *r_xprt,
+ }
+
+ rc = -ENETUNREACH;
+- err = rdma_create_qp(ia->ri_id, ia->ri_pd, &ep->rep_attr);
++ err = rdma_create_qp(ia->ri_id, ia->ri_pd, qp_init_attr);
+ if (err) {
+ pr_err("rpcrdma: rdma_create_qp returned %d\n", err);
+ goto out3;
+@@ -642,16 +642,16 @@ out1:
+ return rc;
+ }
+
+-static int
+-rpcrdma_ep_reconnect(struct rpcrdma_xprt *r_xprt, struct rpcrdma_ep *ep,
+- struct rpcrdma_ia *ia)
++static int rpcrdma_ep_reconnect(struct rpcrdma_xprt *r_xprt,
++ struct ib_qp_init_attr *qp_init_attr)
+ {
++ struct rpcrdma_ia *ia = &r_xprt->rx_ia;
+ struct rdma_cm_id *id, *old;
+ int err, rc;
+
+ trace_xprtrdma_reconnect(r_xprt);
+
+- rpcrdma_ep_disconnect(ep, ia);
++ rpcrdma_ep_disconnect(&r_xprt->rx_ep, ia);
+
+ rc = -EHOSTUNREACH;
+ id = rpcrdma_create_id(r_xprt, ia);
+@@ -673,7 +673,7 @@ rpcrdma_ep_reconnect(struct rpcrdma_xprt *r_xprt, struct rpcrdma_ep *ep,
+ goto out_destroy;
+ }
+
+- err = rdma_create_qp(id, ia->ri_pd, &ep->rep_attr);
++ err = rdma_create_qp(id, ia->ri_pd, qp_init_attr);
+ if (err)
+ goto out_destroy;
+
+@@ -698,25 +698,27 @@ rpcrdma_ep_connect(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia)
+ struct rpcrdma_xprt *r_xprt = container_of(ia, struct rpcrdma_xprt,
+ rx_ia);
+ struct rpc_xprt *xprt = &r_xprt->rx_xprt;
++ struct ib_qp_init_attr qp_init_attr;
+ int rc;
+
+ retry:
++ memcpy(&qp_init_attr, &ep->rep_attr, sizeof(qp_init_attr));
+ switch (ep->rep_connected) {
+ case 0:
+ dprintk("RPC: %s: connecting...\n", __func__);
+- rc = rdma_create_qp(ia->ri_id, ia->ri_pd, &ep->rep_attr);
++ rc = rdma_create_qp(ia->ri_id, ia->ri_pd, &qp_init_attr);
+ if (rc) {
+ rc = -ENETUNREACH;
+ goto out_noupdate;
+ }
+ break;
+ case -ENODEV:
+- rc = rpcrdma_ep_recreate_xprt(r_xprt, ep, ia);
++ rc = rpcrdma_ep_recreate_xprt(r_xprt, &qp_init_attr);
+ if (rc)
+ goto out_noupdate;
+ break;
+ default:
+- rc = rpcrdma_ep_reconnect(r_xprt, ep, ia);
++ rc = rpcrdma_ep_reconnect(r_xprt, &qp_init_attr);
+ if (rc)
+ goto out;
+ }
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index c9cfc796eccf..f03459ddc840 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -201,6 +201,38 @@ cfg80211_get_dev_from_info(struct net *netns, struct genl_info *info)
+ return __cfg80211_rdev_from_attrs(netns, info->attrs);
+ }
+
++static int validate_beacon_head(const struct nlattr *attr,
++ struct netlink_ext_ack *extack)
++{
++ const u8 *data = nla_data(attr);
++ unsigned int len = nla_len(attr);
++ const struct element *elem;
++ const struct ieee80211_mgmt *mgmt = (void *)data;
++ unsigned int fixedlen = offsetof(struct ieee80211_mgmt,
++ u.beacon.variable);
++
++ if (len < fixedlen)
++ goto err;
++
++ if (ieee80211_hdrlen(mgmt->frame_control) !=
++ offsetof(struct ieee80211_mgmt, u.beacon))
++ goto err;
++
++ data += fixedlen;
++ len -= fixedlen;
++
++ for_each_element(elem, data, len) {
++ /* nothing */
++ }
++
++ if (for_each_element_completed(elem, data, len))
++ return 0;
++
++err:
++ NL_SET_ERR_MSG_ATTR(extack, attr, "malformed beacon head");
++ return -EINVAL;
++}
++
+ static int validate_ie_attr(const struct nlattr *attr,
+ struct netlink_ext_ack *extack)
+ {
+@@ -322,8 +354,9 @@ const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = {
+
+ [NL80211_ATTR_BEACON_INTERVAL] = { .type = NLA_U32 },
+ [NL80211_ATTR_DTIM_PERIOD] = { .type = NLA_U32 },
+- [NL80211_ATTR_BEACON_HEAD] = { .type = NLA_BINARY,
+- .len = IEEE80211_MAX_DATA_LEN },
++ [NL80211_ATTR_BEACON_HEAD] =
++ NLA_POLICY_VALIDATE_FN(NLA_BINARY, validate_beacon_head,
++ IEEE80211_MAX_DATA_LEN),
+ [NL80211_ATTR_BEACON_TAIL] =
+ NLA_POLICY_VALIDATE_FN(NLA_BINARY, validate_ie_attr,
+ IEEE80211_MAX_DATA_LEN),
+@@ -2564,6 +2597,8 @@ int nl80211_parse_chandef(struct cfg80211_registered_device *rdev,
+
+ control_freq = nla_get_u32(attrs[NL80211_ATTR_WIPHY_FREQ]);
+
++ memset(chandef, 0, sizeof(*chandef));
++
+ chandef->chan = ieee80211_get_channel(&rdev->wiphy, control_freq);
+ chandef->width = NL80211_CHAN_WIDTH_20_NOHT;
+ chandef->center_freq1 = control_freq;
+@@ -3092,7 +3127,7 @@ static int nl80211_send_iface(struct sk_buff *msg, u32 portid, u32 seq, int flag
+
+ if (rdev->ops->get_channel) {
+ int ret;
+- struct cfg80211_chan_def chandef;
++ struct cfg80211_chan_def chandef = {};
+
+ ret = rdev_get_channel(rdev, wdev, &chandef);
+ if (ret == 0) {
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index 327479ce69f5..36eba5804efe 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -2108,7 +2108,7 @@ static void reg_call_notifier(struct wiphy *wiphy,
+
+ static bool reg_wdev_chan_valid(struct wiphy *wiphy, struct wireless_dev *wdev)
+ {
+- struct cfg80211_chan_def chandef;
++ struct cfg80211_chan_def chandef = {};
+ struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
+ enum nl80211_iftype iftype;
+
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index d66e6d4b7555..27d76c4c5cea 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -1711,7 +1711,12 @@ cfg80211_update_notlisted_nontrans(struct wiphy *wiphy,
+ return;
+ new_ie_len -= trans_ssid[1];
+ mbssid = cfg80211_find_ie(WLAN_EID_MULTIPLE_BSSID, ie, ielen);
+- if (!mbssid)
++ /*
++ * It's not valid to have the MBSSID element before the SSID;
++ * ignore it if that happens - the code below assumes it comes
++ * after (while copying things in between).
++ */
++ if (!mbssid || mbssid < trans_ssid)
+ return;
+ new_ie_len -= mbssid[1];
+ rcu_read_lock();
+diff --git a/net/wireless/wext-compat.c b/net/wireless/wext-compat.c
+index 46e4d69db845..b1f94730bde2 100644
+--- a/net/wireless/wext-compat.c
++++ b/net/wireless/wext-compat.c
+@@ -797,7 +797,7 @@ static int cfg80211_wext_giwfreq(struct net_device *dev,
+ {
+ struct wireless_dev *wdev = dev->ieee80211_ptr;
+ struct cfg80211_registered_device *rdev = wiphy_to_rdev(wdev->wiphy);
+- struct cfg80211_chan_def chandef;
++ struct cfg80211_chan_def chandef = {};
+ int ret;
+
+ switch (wdev->iftype) {
+diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c
+index d4c7b8e1b083..73044fc6a952 100644
+--- a/security/integrity/ima/ima_crypto.c
++++ b/security/integrity/ima/ima_crypto.c
+@@ -268,8 +268,16 @@ static int ima_calc_file_hash_atfm(struct file *file,
+ rbuf_len = min_t(loff_t, i_size - offset, rbuf_size[active]);
+ rc = integrity_kernel_read(file, offset, rbuf[active],
+ rbuf_len);
+- if (rc != rbuf_len)
++ if (rc != rbuf_len) {
++ if (rc >= 0)
++ rc = -EINVAL;
++ /*
++ * Forward current rc, do not overwrite with return value
++ * from ahash_wait()
++ */
++ ahash_wait(ahash_rc, &wait);
+ goto out3;
++ }
+
+ if (rbuf[1] && offset) {
+ /* Using two buffers, and it is not the first
+diff --git a/sound/soc/codecs/sgtl5000.c b/sound/soc/codecs/sgtl5000.c
+index 7cbaedffa1ef..8e5e48f6a24b 100644
+--- a/sound/soc/codecs/sgtl5000.c
++++ b/sound/soc/codecs/sgtl5000.c
+@@ -31,6 +31,13 @@
+ #define SGTL5000_DAP_REG_OFFSET 0x0100
+ #define SGTL5000_MAX_REG_OFFSET 0x013A
+
++/* Delay for the VAG ramp up */
++#define SGTL5000_VAG_POWERUP_DELAY 500 /* ms */
++/* Delay for the VAG ramp down */
++#define SGTL5000_VAG_POWERDOWN_DELAY 500 /* ms */
++
++#define SGTL5000_OUTPUTS_MUTE (SGTL5000_HP_MUTE | SGTL5000_LINE_OUT_MUTE)
++
+ /* default value of sgtl5000 registers */
+ static const struct reg_default sgtl5000_reg_defaults[] = {
+ { SGTL5000_CHIP_DIG_POWER, 0x0000 },
+@@ -123,6 +130,13 @@ enum {
+ I2S_SCLK_STRENGTH_HIGH,
+ };
+
++enum {
++ HP_POWER_EVENT,
++ DAC_POWER_EVENT,
++ ADC_POWER_EVENT,
++ LAST_POWER_EVENT = ADC_POWER_EVENT
++};
++
+ /* sgtl5000 private structure in codec */
+ struct sgtl5000_priv {
+ int sysclk; /* sysclk rate */
+@@ -137,8 +151,109 @@ struct sgtl5000_priv {
+ u8 micbias_voltage;
+ u8 lrclk_strength;
+ u8 sclk_strength;
++ u16 mute_state[LAST_POWER_EVENT + 1];
+ };
+
++static inline int hp_sel_input(struct snd_soc_component *component)
++{
++ return (snd_soc_component_read32(component, SGTL5000_CHIP_ANA_CTRL) &
++ SGTL5000_HP_SEL_MASK) >> SGTL5000_HP_SEL_SHIFT;
++}
++
++static inline u16 mute_output(struct snd_soc_component *component,
++ u16 mute_mask)
++{
++ u16 mute_reg = snd_soc_component_read32(component,
++ SGTL5000_CHIP_ANA_CTRL);
++
++ snd_soc_component_update_bits(component, SGTL5000_CHIP_ANA_CTRL,
++ mute_mask, mute_mask);
++ return mute_reg;
++}
++
++static inline void restore_output(struct snd_soc_component *component,
++ u16 mute_mask, u16 mute_reg)
++{
++ snd_soc_component_update_bits(component, SGTL5000_CHIP_ANA_CTRL,
++ mute_mask, mute_reg);
++}
++
++static void vag_power_on(struct snd_soc_component *component, u32 source)
++{
++ if (snd_soc_component_read32(component, SGTL5000_CHIP_ANA_POWER) &
++ SGTL5000_VAG_POWERUP)
++ return;
++
++ snd_soc_component_update_bits(component, SGTL5000_CHIP_ANA_POWER,
++ SGTL5000_VAG_POWERUP, SGTL5000_VAG_POWERUP);
++
++ /* When the VAG is powering on to get a local loop from Line-In,
++ * a sleep is required to avoid a loud pop.
++ */
++ if (hp_sel_input(component) == SGTL5000_HP_SEL_LINE_IN &&
++ source == HP_POWER_EVENT)
++ msleep(SGTL5000_VAG_POWERUP_DELAY);
++}
++
++static int vag_power_consumers(struct snd_soc_component *component,
++ u16 ana_pwr_reg, u32 source)
++{
++ int consumers = 0;
++
++ /* count dac/adc consumers unconditionally */
++ if (ana_pwr_reg & SGTL5000_DAC_POWERUP)
++ consumers++;
++ if (ana_pwr_reg & SGTL5000_ADC_POWERUP)
++ consumers++;
++
++ /*
++ * If the event comes from HP and Line-In is selected,
++ * the current action is 'DAC to be powered down'.
++ * As HP_POWERUP is not set when HP is muxed to Line-In,
++ * we need to keep VAG power ON.
++ */
++ if (source == HP_POWER_EVENT) {
++ if (hp_sel_input(component) == SGTL5000_HP_SEL_LINE_IN)
++ consumers++;
++ } else {
++ if (ana_pwr_reg & SGTL5000_HP_POWERUP)
++ consumers++;
++ }
++
++ return consumers;
++}
++
++static void vag_power_off(struct snd_soc_component *component, u32 source)
++{
++ u16 ana_pwr = snd_soc_component_read32(component,
++ SGTL5000_CHIP_ANA_POWER);
++
++ if (!(ana_pwr & SGTL5000_VAG_POWERUP))
++ return;
++
++ /*
++ * This function is called when any of the VAG power consumers is
++ * disappearing. Thus, if there is more than one consumer at the
++ * moment, at least one consumer will definitely remain after the
++ * end of the current event.
++ * Don't clear VAG_POWERUP if 2 or more VAG consumers are present:
++ * - LINE_IN (for HP events) / HP (for DAC/ADC events)
++ * - DAC
++ * - ADC
++ * (the current consumer is disappearing right now)
++ */
++ if (vag_power_consumers(component, ana_pwr, source) >= 2)
++ return;
++
++ snd_soc_component_update_bits(component, SGTL5000_CHIP_ANA_POWER,
++ SGTL5000_VAG_POWERUP, 0);
++ /* In the power-down case, we need to wait 400-1000 ms
++ * for the VAG to ramp down fully.
++ * The longer we wait, the smaller the pop.
++ */
++ msleep(SGTL5000_VAG_POWERDOWN_DELAY);
++}
++
+ /*
+ * mic_bias power on/off share the same register bits with
+ * output impedance of mic bias, when power on mic bias, we
+@@ -170,36 +285,46 @@ static int mic_bias_event(struct snd_soc_dapm_widget *w,
+ return 0;
+ }
+
+-/*
+- * As manual described, ADC/DAC only works when VAG powerup,
+- * So enabled VAG before ADC/DAC up.
+- * In power down case, we need wait 400ms when vag fully ramped down.
+- */
+-static int power_vag_event(struct snd_soc_dapm_widget *w,
+- struct snd_kcontrol *kcontrol, int event)
++static int vag_and_mute_control(struct snd_soc_component *component,
++ int event, int event_source)
+ {
+- struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
+- const u32 mask = SGTL5000_DAC_POWERUP | SGTL5000_ADC_POWERUP;
++ static const u16 mute_mask[] = {
++ /*
++ * Mask for HP_POWER_EVENT.
++ * Muxing the headphones has to be wrapped with mute/unmute of
++ * the headphones only.
++ */
++ SGTL5000_HP_MUTE,
++ /*
++ * Masks for DAC_POWER_EVENT/ADC_POWER_EVENT.
++ * Muxing the DAC or ADC block has to be wrapped with mute/unmute
++ * of both headphones and line-out.
++ */
++ SGTL5000_OUTPUTS_MUTE,
++ SGTL5000_OUTPUTS_MUTE
++ };
++
++ struct sgtl5000_priv *sgtl5000 =
++ snd_soc_component_get_drvdata(component);
+
+ switch (event) {
++ case SND_SOC_DAPM_PRE_PMU:
++ sgtl5000->mute_state[event_source] =
++ mute_output(component, mute_mask[event_source]);
++ break;
+ case SND_SOC_DAPM_POST_PMU:
+- snd_soc_component_update_bits(component, SGTL5000_CHIP_ANA_POWER,
+- SGTL5000_VAG_POWERUP, SGTL5000_VAG_POWERUP);
+- msleep(400);
++ vag_power_on(component, event_source);
++ restore_output(component, mute_mask[event_source],
++ sgtl5000->mute_state[event_source]);
+ break;
+-
+ case SND_SOC_DAPM_PRE_PMD:
+- /*
+- * Don't clear VAG_POWERUP, when both DAC and ADC are
+- * operational to prevent inadvertently starving the
+- * other one of them.
+- */
+- if ((snd_soc_component_read32(component, SGTL5000_CHIP_ANA_POWER) &
+- mask) != mask) {
+- snd_soc_component_update_bits(component, SGTL5000_CHIP_ANA_POWER,
+- SGTL5000_VAG_POWERUP, 0);
+- msleep(400);
+- }
++ sgtl5000->mute_state[event_source] =
++ mute_output(component, mute_mask[event_source]);
++ vag_power_off(component, event_source);
++ break;
++ case SND_SOC_DAPM_POST_PMD:
++ restore_output(component, mute_mask[event_source],
++ sgtl5000->mute_state[event_source]);
+ break;
+ default:
+ break;
+@@ -208,6 +333,41 @@ static int power_vag_event(struct snd_soc_dapm_widget *w,
+ return 0;
+ }
+
++/*
++ * Mute the headphone when powering it up/down.
++ * Control VAG power on HP power path.
++ */
++static int headphone_pga_event(struct snd_soc_dapm_widget *w,
++ struct snd_kcontrol *kcontrol, int event)
++{
++ struct snd_soc_component *component =
++ snd_soc_dapm_to_component(w->dapm);
++
++ return vag_and_mute_control(component, event, HP_POWER_EVENT);
++}
++
++/* As the manual describes, powering the ADC/DAC up/down requires
++ * the outputs to be muted to avoid pops.
++ * Control VAG power on ADC/DAC power path.
++ */
++static int adc_updown_depop(struct snd_soc_dapm_widget *w,
++ struct snd_kcontrol *kcontrol, int event)
++{
++ struct snd_soc_component *component =
++ snd_soc_dapm_to_component(w->dapm);
++
++ return vag_and_mute_control(component, event, ADC_POWER_EVENT);
++}
++
++static int dac_updown_depop(struct snd_soc_dapm_widget *w,
++ struct snd_kcontrol *kcontrol, int event)
++{
++ struct snd_soc_component *component =
++ snd_soc_dapm_to_component(w->dapm);
++
++ return vag_and_mute_control(component, event, DAC_POWER_EVENT);
++}
++
+ /* input sources for ADC */
+ static const char *adc_mux_text[] = {
+ "MIC_IN", "LINE_IN"
+@@ -280,7 +440,10 @@ static const struct snd_soc_dapm_widget sgtl5000_dapm_widgets[] = {
+ mic_bias_event,
+ SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_PMD),
+
+- SND_SOC_DAPM_PGA("HP", SGTL5000_CHIP_ANA_POWER, 4, 0, NULL, 0),
++ SND_SOC_DAPM_PGA_E("HP", SGTL5000_CHIP_ANA_POWER, 4, 0, NULL, 0,
++ headphone_pga_event,
++ SND_SOC_DAPM_PRE_POST_PMU |
++ SND_SOC_DAPM_PRE_POST_PMD),
+ SND_SOC_DAPM_PGA("LO", SGTL5000_CHIP_ANA_POWER, 0, 0, NULL, 0),
+
+ SND_SOC_DAPM_MUX("Capture Mux", SND_SOC_NOPM, 0, 0, &adc_mux),
+@@ -301,11 +464,12 @@ static const struct snd_soc_dapm_widget sgtl5000_dapm_widgets[] = {
+ 0, SGTL5000_CHIP_DIG_POWER,
+ 1, 0),
+
+- SND_SOC_DAPM_ADC("ADC", "Capture", SGTL5000_CHIP_ANA_POWER, 1, 0),
+- SND_SOC_DAPM_DAC("DAC", "Playback", SGTL5000_CHIP_ANA_POWER, 3, 0),
+-
+- SND_SOC_DAPM_PRE("VAG_POWER_PRE", power_vag_event),
+- SND_SOC_DAPM_POST("VAG_POWER_POST", power_vag_event),
++ SND_SOC_DAPM_ADC_E("ADC", "Capture", SGTL5000_CHIP_ANA_POWER, 1, 0,
++ adc_updown_depop, SND_SOC_DAPM_PRE_POST_PMU |
++ SND_SOC_DAPM_PRE_POST_PMD),
++ SND_SOC_DAPM_DAC_E("DAC", "Playback", SGTL5000_CHIP_ANA_POWER, 3, 0,
++ dac_updown_depop, SND_SOC_DAPM_PRE_POST_PMU |
++ SND_SOC_DAPM_PRE_POST_PMD),
+ };
+
+ /* routes for sgtl5000 */
+diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
+index 7065bb5b2752..e1357dbb16c2 100644
+--- a/tools/lib/bpf/btf_dump.c
++++ b/tools/lib/bpf/btf_dump.c
+@@ -1213,6 +1213,7 @@ static void btf_dump_emit_type_chain(struct btf_dump *d,
+ return;
+ }
+
++ next_id = decls->ids[decls->cnt - 1];
+ next_t = btf__type_by_id(d->btf, next_id);
+ multidim = btf_kind_of(next_t) == BTF_KIND_ARRAY;
+ /* we need space if we have named non-pointer */
+diff --git a/tools/lib/traceevent/Makefile b/tools/lib/traceevent/Makefile
+index 86ce17a1f7fb..a39cdd0d890d 100644
+--- a/tools/lib/traceevent/Makefile
++++ b/tools/lib/traceevent/Makefile
+@@ -266,8 +266,8 @@ endef
+
+ define do_generate_dynamic_list_file
+ symbol_type=`$(NM) -u -D $1 | awk 'NF>1 {print $$1}' | \
+- xargs echo "U W w" | tr ' ' '\n' | sort -u | xargs echo`;\
+- if [ "$$symbol_type" = "U W w" ];then \
++ xargs echo "U w W" | tr 'w ' 'W\n' | sort -u | xargs echo`;\
++ if [ "$$symbol_type" = "U W" ];then \
+ (echo '{'; \
+ $(NM) -u -D $1 | awk 'NF>1 {print "\t"$$2";"}' | sort -u;\
+ echo '};'; \
+diff --git a/tools/lib/traceevent/event-parse.c b/tools/lib/traceevent/event-parse.c
+index b36b536a9fcb..13fd9fdf91e0 100644
+--- a/tools/lib/traceevent/event-parse.c
++++ b/tools/lib/traceevent/event-parse.c
+@@ -269,10 +269,10 @@ static int add_new_comm(struct tep_handle *tep,
+ errno = ENOMEM;
+ return -1;
+ }
++ tep->cmdlines = cmdlines;
+
+ cmdlines[tep->cmdline_count].comm = strdup(comm);
+ if (!cmdlines[tep->cmdline_count].comm) {
+- free(cmdlines);
+ errno = ENOMEM;
+ return -1;
+ }
+@@ -283,7 +283,6 @@ static int add_new_comm(struct tep_handle *tep,
+ tep->cmdline_count++;
+
+ qsort(cmdlines, tep->cmdline_count, sizeof(*cmdlines), cmdline_cmp);
+- tep->cmdlines = cmdlines;
+
+ return 0;
+ }
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index 89ac5a1f1550..3da374911852 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -908,7 +908,7 @@ ifndef NO_JVMTI
+ JDIR=$(shell /usr/sbin/update-java-alternatives -l | head -1 | awk '{print $$3}')
+ else
+ ifneq (,$(wildcard /usr/sbin/alternatives))
+- JDIR=$(shell /usr/sbin/alternatives --display java | tail -1 | cut -d' ' -f 5 | sed 's%/jre/bin/java.%%g')
++ JDIR=$(shell /usr/sbin/alternatives --display java | tail -1 | cut -d' ' -f 5 | sed -e 's%/jre/bin/java.%%g' -e 's%/bin/java.%%g')
+ endif
+ endif
+ ifndef JDIR
+diff --git a/tools/perf/arch/x86/util/unwind-libunwind.c b/tools/perf/arch/x86/util/unwind-libunwind.c
+index 05920e3edf7a..47357973b55b 100644
+--- a/tools/perf/arch/x86/util/unwind-libunwind.c
++++ b/tools/perf/arch/x86/util/unwind-libunwind.c
+@@ -1,11 +1,11 @@
+ // SPDX-License-Identifier: GPL-2.0
+
+ #include <errno.h>
++#include "../../util/debug.h"
+ #ifndef REMOTE_UNWIND_LIBUNWIND
+ #include <libunwind.h>
+ #include "perf_regs.h"
+ #include "../../util/unwind.h"
+-#include "../../util/debug.h"
+ #endif
+
+ #ifdef HAVE_ARCH_X86_64_SUPPORT
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index 352cf39d7c2f..8ec06bf3372c 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -1961,8 +1961,11 @@ int cmd_stat(int argc, const char **argv)
+ fprintf(output, "[ perf stat: executing run #%d ... ]\n",
+ run_idx + 1);
+
++ if (run_idx != 0)
++ perf_evlist__reset_prev_raw_counts(evsel_list);
++
+ status = run_perf_stat(argc, argv, run_idx);
+- if (forever && status != -1) {
++ if (forever && status != -1 && !interval) {
+ print_counters(NULL, argc, argv);
+ perf_stat__reset_stats();
+ }
+diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
+index bf7cf1249553..e95a2a26c40a 100644
+--- a/tools/perf/util/header.c
++++ b/tools/perf/util/header.c
+@@ -1061,7 +1061,7 @@ static int cpu_cache_level__read(struct cpu_cache_level *cache, u32 cpu, u16 lev
+
+ scnprintf(file, PATH_MAX, "%s/shared_cpu_list", path);
+ if (sysfs__read_str(file, &cache->map, &len)) {
+- zfree(&cache->map);
++ zfree(&cache->size);
+ zfree(&cache->type);
+ return -1;
+ }
+diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
+index 8394d48f8b32..3355c445abed 100644
+--- a/tools/perf/util/probe-event.c
++++ b/tools/perf/util/probe-event.c
+@@ -2329,6 +2329,7 @@ void clear_probe_trace_event(struct probe_trace_event *tev)
+ }
+ }
+ zfree(&tev->args);
++ tev->nargs = 0;
+ }
+
+ struct kprobe_blacklist_node {
+diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
+index db8a6cf336be..6ce66c272747 100644
+--- a/tools/perf/util/stat.c
++++ b/tools/perf/util/stat.c
+@@ -155,6 +155,15 @@ static void perf_evsel__free_prev_raw_counts(struct perf_evsel *evsel)
+ evsel->prev_raw_counts = NULL;
+ }
+
++static void perf_evsel__reset_prev_raw_counts(struct perf_evsel *evsel)
++{
++ if (evsel->prev_raw_counts) {
++ evsel->prev_raw_counts->aggr.val = 0;
++ evsel->prev_raw_counts->aggr.ena = 0;
++ evsel->prev_raw_counts->aggr.run = 0;
++ }
++}
++
+ static int perf_evsel__alloc_stats(struct perf_evsel *evsel, bool alloc_raw)
+ {
+ int ncpus = perf_evsel__nr_cpus(evsel);
+@@ -205,6 +214,14 @@ void perf_evlist__reset_stats(struct perf_evlist *evlist)
+ }
+ }
+
++void perf_evlist__reset_prev_raw_counts(struct perf_evlist *evlist)
++{
++ struct perf_evsel *evsel;
++
++ evlist__for_each_entry(evlist, evsel)
++ perf_evsel__reset_prev_raw_counts(evsel);
++}
++
+ static void zero_per_pkg(struct perf_evsel *counter)
+ {
+ if (counter->per_pkg_mask)
+diff --git a/tools/perf/util/stat.h b/tools/perf/util/stat.h
+index 7032dd1eeac2..9cd0d9cff374 100644
+--- a/tools/perf/util/stat.h
++++ b/tools/perf/util/stat.h
+@@ -194,6 +194,7 @@ void perf_stat__collect_metric_expr(struct perf_evlist *);
+ int perf_evlist__alloc_stats(struct perf_evlist *evlist, bool alloc_raw);
+ void perf_evlist__free_stats(struct perf_evlist *evlist);
+ void perf_evlist__reset_stats(struct perf_evlist *evlist);
++void perf_evlist__reset_prev_raw_counts(struct perf_evlist *evlist);
+
+ int perf_stat_process_counter(struct perf_stat_config *config,
+ struct perf_evsel *counter);
+diff --git a/tools/testing/nvdimm/test/nfit_test.h b/tools/testing/nvdimm/test/nfit_test.h
+index 448d686da8b1..0bf5640f1f07 100644
+--- a/tools/testing/nvdimm/test/nfit_test.h
++++ b/tools/testing/nvdimm/test/nfit_test.h
+@@ -4,6 +4,7 @@
+ */
+ #ifndef __NFIT_TEST_H__
+ #define __NFIT_TEST_H__
++#include <linux/acpi.h>
+ #include <linux/list.h>
+ #include <linux/uuid.h>
+ #include <linux/ioport.h>
+@@ -202,9 +203,6 @@ struct nd_intel_lss {
+ __u32 status;
+ } __packed;
+
+-union acpi_object;
+-typedef void *acpi_handle;
+-
+ typedef struct nfit_test_resource *(*nfit_test_lookup_fn)(resource_size_t);
+ typedef union acpi_object *(*nfit_test_evaluate_dsm_fn)(acpi_handle handle,
+ const guid_t *guid, u64 rev, u64 func,
+diff --git a/tools/testing/selftests/bpf/progs/strobemeta.h b/tools/testing/selftests/bpf/progs/strobemeta.h
+index 8a399bdfd920..067eb625d01c 100644
+--- a/tools/testing/selftests/bpf/progs/strobemeta.h
++++ b/tools/testing/selftests/bpf/progs/strobemeta.h
+@@ -413,7 +413,10 @@ static __always_inline void *read_map_var(struct strobemeta_cfg *cfg,
+ #else
+ #pragma unroll
+ #endif
+- for (int i = 0; i < STROBE_MAX_MAP_ENTRIES && i < map.cnt; ++i) {
++ for (int i = 0; i < STROBE_MAX_MAP_ENTRIES; ++i) {
++ if (i >= map.cnt)
++ break;
++
+ descr->key_lens[i] = 0;
+ len = bpf_probe_read_str(payload, STROBE_MAX_STR_LEN,
+ map.entries[i].key);
+diff --git a/tools/testing/selftests/pidfd/Makefile b/tools/testing/selftests/pidfd/Makefile
+index 720b2d884b3c..e86141796444 100644
+--- a/tools/testing/selftests/pidfd/Makefile
++++ b/tools/testing/selftests/pidfd/Makefile
+@@ -1,5 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+-CFLAGS += -g -I../../../../usr/include/ -lpthread
++CFLAGS += -g -I../../../../usr/include/ -pthread
+
+ TEST_GEN_PROGS := pidfd_test pidfd_open_test
+
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index 6ef7f16c4cf5..7f8b5c8982e3 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -199,6 +199,11 @@ struct seccomp_notif_sizes {
+ };
+ #endif
+
++#ifndef PTRACE_EVENTMSG_SYSCALL_ENTRY
++#define PTRACE_EVENTMSG_SYSCALL_ENTRY 1
++#define PTRACE_EVENTMSG_SYSCALL_EXIT 2
++#endif
++
+ #ifndef seccomp
+ int seccomp(unsigned int op, unsigned int flags, void *args)
+ {
+diff --git a/tools/testing/selftests/tpm2/Makefile b/tools/testing/selftests/tpm2/Makefile
+index 9dd848427a7b..bf401f725eef 100644
+--- a/tools/testing/selftests/tpm2/Makefile
++++ b/tools/testing/selftests/tpm2/Makefile
+@@ -2,3 +2,4 @@
+ include ../lib.mk
+
+ TEST_PROGS := test_smoke.sh test_space.sh
++TEST_FILES := tpm2.py tpm2_tests.py
* [gentoo-commits] proj/linux-patches:5.3 commit in: /
@ 2019-10-16 18:21 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2019-10-16 18:21 UTC (permalink / raw
To: gentoo-commits
commit: 8207d7631e9b199343cacb8aa1a86a599e7242d9
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 16 18:20:44 2019 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Oct 16 18:20:44 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8207d763
Add FILE_LOCKING to GENTOO_LINUX config. See bug #694688.
Thanks to Marius Stoica for reporting
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
4567_distro-Gentoo-Kconfig.patch | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 6ac8208..ecff093 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -6,9 +6,9 @@
source "Documentation/Kconfig"
+
+source "distro/Kconfig"
---- /dev/null 2018-12-28 10:40:34.089999934 -0500
-+++ b/distro/Kconfig 2018-12-28 18:54:40.467970759 -0500
-@@ -0,0 +1,147 @@
+--- /dev/null 2019-09-18 03:31:42.730171526 -0400
++++ b/distro/Kconfig 2019-09-18 13:28:03.170769896 -0400
+@@ -0,0 +1,149 @@
+menu "Gentoo Linux"
+
+config GENTOO_LINUX
@@ -91,6 +91,7 @@
+ depends on GENTOO_LINUX
+
+ select BINFMT_SCRIPT
++ select FILE_LOCKING
+
+ help
+ The init system is the first thing that loads after the kernel booted.
@@ -123,6 +124,7 @@
+ select EPOLL
+ select FANOTIFY
+ select FHANDLE
++ select FILE_LOCKING
+ select INOTIFY_USER
+ select IPV6
+ select NET
* [gentoo-commits] proj/linux-patches:5.3 commit in: /
@ 2019-10-17 22:28 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2019-10-17 22:28 UTC (permalink / raw
To: gentoo-commits
commit: 60bbadab7b2c80a38d304667976d13a10add2526
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Oct 17 22:27:51 2019 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Oct 17 22:27:51 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=60bbadab
Linux patch 5.3.7
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1006_linux-5.3.7.patch | 5181 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 5185 insertions(+)
diff --git a/0000_README b/0000_README
index 2347308..e15ba25 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch: 1005_linux-5.3.6.patch
From: http://www.kernel.org
Desc: Linux 5.3.6
+Patch: 1006_linux-5.3.7.patch
+From: http://www.kernel.org
+Desc: Linux 5.3.7
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1006_linux-5.3.7.patch b/1006_linux-5.3.7.patch
new file mode 100644
index 0000000..de9efc4
--- /dev/null
+++ b/1006_linux-5.3.7.patch
@@ -0,0 +1,5181 @@
+diff --git a/Documentation/usb/rio.rst b/Documentation/usb/rio.rst
+deleted file mode 100644
+index ea73475471db..000000000000
+--- a/Documentation/usb/rio.rst
++++ /dev/null
+@@ -1,109 +0,0 @@
+-============
+-Diamonds Rio
+-============
+-
+-Copyright (C) 1999, 2000 Bruce Tenison
+-
+-Portions Copyright (C) 1999, 2000 David Nelson
+-
+-Thanks to David Nelson for guidance and the usage of the scanner.txt
+-and scanner.c files to model our driver and this informative file.
+-
+-Mar. 2, 2000
+-
+-Changes
+-=======
+-
+-- Initial Revision
+-
+-
+-Overview
+-========
+-
+-This README will address issues regarding how to configure the kernel
+-to access a RIO 500 mp3 player.
+-Before I explain how to use this to access the Rio500 please be warned:
+-
+-.. warning::
+-
+- Please note that this software is still under development. The authors
+- are in no way responsible for any damage that may occur, no matter how
+- inconsequential.
+-
+-It seems that the Rio has a problem when sending .mp3 with low batteries.
+-I suggest when the batteries are low and you want to transfer stuff that you
+-replace it with a fresh one. In my case, what happened is I lost two 16kb
+-blocks (they are no longer usable to store information to it). But I don't
+-know if that's normal or not; it could simply be a problem with the flash
+-memory.
+-
+-In an extreme case, I left my Rio playing overnight and the batteries wore
+-down to nothing and appear to have corrupted the flash memory. My RIO
+-needed to be replaced as a result. Diamond tech support is aware of the
+-problem. Do NOT allow your batteries to wear down to nothing before
+-changing them. It appears RIO 500 firmware does not handle low battery
+-power well at all.
+-
+-On systems with OHCI controllers, the kernel OHCI code appears to have
+-power on problems with some chipsets. If you are having problems
+-connecting to your RIO 500, try turning it on first and then plugging it
+-into the USB cable.
+-
+-Contact Information
+--------------------
+-
+- The main page for the project is hosted at sourceforge.net in the following
+- URL: <http://rio500.sourceforge.net>. You can also go to the project's
+- sourceforge home page at: <http://sourceforge.net/projects/rio500/>.
+- There is also a mailing list: rio500-users@lists.sourceforge.net
+-
+-Authors
+--------
+-
+-Most of the code was written by Cesar Miquel <miquel@df.uba.ar>. Keith
+-Clayton <kclayton@jps.net> is incharge of the PPC port and making sure
+-things work there. Bruce Tenison <btenison@dibbs.net> is adding support
+-for .fon files and also does testing. The program will mostly sure be
+-re-written and Pete Ikusz along with the rest will re-design it. I would
+-also like to thank Tri Nguyen <tmn_3022000@hotmail.com> who provided use
+-with some important information regarding the communication with the Rio.
+-
+-Additional Information and userspace tools
+-
+- http://rio500.sourceforge.net/
+-
+-
+-Requirements
+-============
+-
+-A host with a USB port running a Linux kernel with RIO 500 support enabled.
+-
+-The driver is a module called rio500, which should be automatically loaded
+-as you plug in your device. If that fails you can manually load it with
+-
+- modprobe rio500
+-
+-Udev should automatically create a device node as soon as plug in your device.
+-If that fails, you can manually add a device for the USB rio500::
+-
+- mknod /dev/usb/rio500 c 180 64
+-
+-In that case, set appropriate permissions for /dev/usb/rio500 (don't forget
+-about group and world permissions). Both read and write permissions are
+-required for proper operation.
+-
+-That's it. The Rio500 Utils at: http://rio500.sourceforge.net should
+-be able to access the rio500.
+-
+-Limits
+-======
+-
+-You can use only a single rio500 device at a time with your computer.
+-
+-Bugs
+-====
+-
+-If you encounter any problems feel free to drop me an email.
+-
+-Bruce Tenison
+-btenison@dibbs.net
+diff --git a/MAINTAINERS b/MAINTAINERS
+index a50e97a63bc8..1d235c674be8 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -16606,13 +16606,6 @@ W: http://www.linux-usb.org/usbnet
+ S: Maintained
+ F: drivers/net/usb/dm9601.c
+
+-USB DIAMOND RIO500 DRIVER
+-M: Cesar Miquel <miquel@df.uba.ar>
+-L: rio500-users@lists.sourceforge.net
+-W: http://rio500.sourceforge.net
+-S: Maintained
+-F: drivers/usb/misc/rio500*
+-
+ USB EHCI DRIVER
+ M: Alan Stern <stern@rowland.harvard.edu>
+ L: linux-usb@vger.kernel.org
+diff --git a/Makefile b/Makefile
+index d7469f0926a6..7a3e659c79ae 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 3
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Bobtail Squid
+
+diff --git a/arch/arm/configs/badge4_defconfig b/arch/arm/configs/badge4_defconfig
+index 5ae5b5228467..ef484c4cfd1a 100644
+--- a/arch/arm/configs/badge4_defconfig
++++ b/arch/arm/configs/badge4_defconfig
+@@ -91,7 +91,6 @@ CONFIG_USB_SERIAL_PL2303=m
+ CONFIG_USB_SERIAL_CYBERJACK=m
+ CONFIG_USB_SERIAL_XIRCOM=m
+ CONFIG_USB_SERIAL_OMNINET=m
+-CONFIG_USB_RIO500=m
+ CONFIG_EXT2_FS=m
+ CONFIG_EXT3_FS=m
+ CONFIG_MSDOS_FS=y
+diff --git a/arch/arm/configs/corgi_defconfig b/arch/arm/configs/corgi_defconfig
+index e4f6442588e7..4fec2ec379ad 100644
+--- a/arch/arm/configs/corgi_defconfig
++++ b/arch/arm/configs/corgi_defconfig
+@@ -195,7 +195,6 @@ CONFIG_USB_SERIAL_XIRCOM=m
+ CONFIG_USB_SERIAL_OMNINET=m
+ CONFIG_USB_EMI62=m
+ CONFIG_USB_EMI26=m
+-CONFIG_USB_RIO500=m
+ CONFIG_USB_LEGOTOWER=m
+ CONFIG_USB_LCD=m
+ CONFIG_USB_CYTHERM=m
+diff --git a/arch/arm/configs/pxa_defconfig b/arch/arm/configs/pxa_defconfig
+index 787c3f9be414..b817c57f05f1 100644
+--- a/arch/arm/configs/pxa_defconfig
++++ b/arch/arm/configs/pxa_defconfig
+@@ -581,7 +581,6 @@ CONFIG_USB_SERIAL_XIRCOM=m
+ CONFIG_USB_SERIAL_OMNINET=m
+ CONFIG_USB_EMI62=m
+ CONFIG_USB_EMI26=m
+-CONFIG_USB_RIO500=m
+ CONFIG_USB_LEGOTOWER=m
+ CONFIG_USB_LCD=m
+ CONFIG_USB_CYTHERM=m
+diff --git a/arch/arm/configs/s3c2410_defconfig b/arch/arm/configs/s3c2410_defconfig
+index 95b5a4ffddea..73ed73a8785a 100644
+--- a/arch/arm/configs/s3c2410_defconfig
++++ b/arch/arm/configs/s3c2410_defconfig
+@@ -327,7 +327,6 @@ CONFIG_USB_EMI62=m
+ CONFIG_USB_EMI26=m
+ CONFIG_USB_ADUTUX=m
+ CONFIG_USB_SEVSEG=m
+-CONFIG_USB_RIO500=m
+ CONFIG_USB_LEGOTOWER=m
+ CONFIG_USB_LCD=m
+ CONFIG_USB_CYPRESS_CY7C63=m
+diff --git a/arch/arm/configs/spitz_defconfig b/arch/arm/configs/spitz_defconfig
+index 4fb51d665abb..a1cdbfa064c5 100644
+--- a/arch/arm/configs/spitz_defconfig
++++ b/arch/arm/configs/spitz_defconfig
+@@ -189,7 +189,6 @@ CONFIG_USB_SERIAL_XIRCOM=m
+ CONFIG_USB_SERIAL_OMNINET=m
+ CONFIG_USB_EMI62=m
+ CONFIG_USB_EMI26=m
+-CONFIG_USB_RIO500=m
+ CONFIG_USB_LEGOTOWER=m
+ CONFIG_USB_LCD=m
+ CONFIG_USB_CYTHERM=m
+diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
+index f674f28df663..803499360da8 100644
+--- a/arch/arm64/kernel/process.c
++++ b/arch/arm64/kernel/process.c
+@@ -323,22 +323,27 @@ void arch_release_task_struct(struct task_struct *tsk)
+ fpsimd_release_task(tsk);
+ }
+
+-/*
+- * src and dst may temporarily have aliased sve_state after task_struct
+- * is copied. We cannot fix this properly here, because src may have
+- * live SVE state and dst's thread_info may not exist yet, so tweaking
+- * either src's or dst's TIF_SVE is not safe.
+- *
+- * The unaliasing is done in copy_thread() instead. This works because
+- * dst is not schedulable or traceable until both of these functions
+- * have been called.
+- */
+ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
+ {
+ if (current->mm)
+ fpsimd_preserve_current_state();
+ *dst = *src;
+
++ /* We rely on the above assignment to initialize dst's thread_flags: */
++ BUILD_BUG_ON(!IS_ENABLED(CONFIG_THREAD_INFO_IN_TASK));
++
++ /*
++ * Detach src's sve_state (if any) from dst so that it does not
++ * get erroneously used or freed prematurely. dst's sve_state
++ * will be allocated on demand later on if dst uses SVE.
++ * For consistency, also clear TIF_SVE here: this could be done
++ * later in copy_process(), but to avoid tripping up future
++ * maintainers it is best not to leave TIF_SVE and sve_state in
++ * an inconsistent state, even temporarily.
++ */
++ dst->thread.sve_state = NULL;
++ clear_tsk_thread_flag(dst, TIF_SVE);
++
+ return 0;
+ }
+
+@@ -351,13 +356,6 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
+
+ memset(&p->thread.cpu_context, 0, sizeof(struct cpu_context));
+
+- /*
+- * Unalias p->thread.sve_state (if any) from the parent task
+- * and disable discard SVE state for p:
+- */
+- clear_tsk_thread_flag(p, TIF_SVE);
+- p->thread.sve_state = NULL;
+-
+ /*
+ * In case p was allocated the same task_struct pointer as some
+ * other recently-exited task, make sure p is disassociated from
+diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
+index 0825c4a856e3..6106c49f84bc 100644
+--- a/arch/arm64/kernel/topology.c
++++ b/arch/arm64/kernel/topology.c
+@@ -340,17 +340,28 @@ void remove_cpu_topology(unsigned int cpu)
+ }
+
+ #ifdef CONFIG_ACPI
++static bool __init acpi_cpu_is_threaded(int cpu)
++{
++ int is_threaded = acpi_pptt_cpu_is_thread(cpu);
++
++ /*
++ * if the PPTT doesn't have thread information, assume a homogeneous
++ * machine and return the current CPU's thread state.
++ */
++ if (is_threaded < 0)
++ is_threaded = read_cpuid_mpidr() & MPIDR_MT_BITMASK;
++
++ return !!is_threaded;
++}
++
+ /*
+ * Propagate the topology information of the processor_topology_node tree to the
+ * cpu_topology array.
+ */
+ static int __init parse_acpi_topology(void)
+ {
+- bool is_threaded;
+ int cpu, topology_id;
+
+- is_threaded = read_cpuid_mpidr() & MPIDR_MT_BITMASK;
+-
+ for_each_possible_cpu(cpu) {
+ int i, cache_id;
+
+@@ -358,7 +369,7 @@ static int __init parse_acpi_topology(void)
+ if (topology_id < 0)
+ return topology_id;
+
+- if (is_threaded) {
++ if (acpi_cpu_is_threaded(cpu)) {
+ cpu_topology[cpu].thread_id = topology_id;
+ topology_id = find_acpi_cpu_topology(cpu, 1);
+ cpu_topology[cpu].core_id = topology_id;
+diff --git a/arch/mips/configs/mtx1_defconfig b/arch/mips/configs/mtx1_defconfig
+index 16bef819fe98..914af125a7fa 100644
+--- a/arch/mips/configs/mtx1_defconfig
++++ b/arch/mips/configs/mtx1_defconfig
+@@ -571,7 +571,6 @@ CONFIG_USB_SERIAL_OMNINET=m
+ CONFIG_USB_EMI62=m
+ CONFIG_USB_EMI26=m
+ CONFIG_USB_ADUTUX=m
+-CONFIG_USB_RIO500=m
+ CONFIG_USB_LEGOTOWER=m
+ CONFIG_USB_LCD=m
+ CONFIG_USB_CYPRESS_CY7C63=m
+diff --git a/arch/mips/configs/rm200_defconfig b/arch/mips/configs/rm200_defconfig
+index 0f4b09f8a0ee..d588bc5280f4 100644
+--- a/arch/mips/configs/rm200_defconfig
++++ b/arch/mips/configs/rm200_defconfig
+@@ -315,7 +315,6 @@ CONFIG_USB_SERIAL_SAFE_PADDED=y
+ CONFIG_USB_SERIAL_CYBERJACK=m
+ CONFIG_USB_SERIAL_XIRCOM=m
+ CONFIG_USB_SERIAL_OMNINET=m
+-CONFIG_USB_RIO500=m
+ CONFIG_USB_LEGOTOWER=m
+ CONFIG_USB_LCD=m
+ CONFIG_USB_CYTHERM=m
+diff --git a/arch/mips/include/uapi/asm/hwcap.h b/arch/mips/include/uapi/asm/hwcap.h
+index a2aba4b059e6..1ade1daa4921 100644
+--- a/arch/mips/include/uapi/asm/hwcap.h
++++ b/arch/mips/include/uapi/asm/hwcap.h
+@@ -6,5 +6,16 @@
+ #define HWCAP_MIPS_R6 (1 << 0)
+ #define HWCAP_MIPS_MSA (1 << 1)
+ #define HWCAP_MIPS_CRC32 (1 << 2)
++#define HWCAP_MIPS_MIPS16 (1 << 3)
++#define HWCAP_MIPS_MDMX (1 << 4)
++#define HWCAP_MIPS_MIPS3D (1 << 5)
++#define HWCAP_MIPS_SMARTMIPS (1 << 6)
++#define HWCAP_MIPS_DSP (1 << 7)
++#define HWCAP_MIPS_DSP2 (1 << 8)
++#define HWCAP_MIPS_DSP3 (1 << 9)
++#define HWCAP_MIPS_MIPS16E2 (1 << 10)
++#define HWCAP_LOONGSON_MMI (1 << 11)
++#define HWCAP_LOONGSON_EXT (1 << 12)
++#define HWCAP_LOONGSON_EXT2 (1 << 13)
+
+ #endif /* _UAPI_ASM_HWCAP_H */
+diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
+index e698a20017c1..147dafa4bfc3 100644
+--- a/arch/mips/kernel/cpu-probe.c
++++ b/arch/mips/kernel/cpu-probe.c
+@@ -2198,6 +2198,39 @@ void cpu_probe(void)
+ elf_hwcap |= HWCAP_MIPS_MSA;
+ }
+
++ if (cpu_has_mips16)
++ elf_hwcap |= HWCAP_MIPS_MIPS16;
++
++ if (cpu_has_mdmx)
++ elf_hwcap |= HWCAP_MIPS_MDMX;
++
++ if (cpu_has_mips3d)
++ elf_hwcap |= HWCAP_MIPS_MIPS3D;
++
++ if (cpu_has_smartmips)
++ elf_hwcap |= HWCAP_MIPS_SMARTMIPS;
++
++ if (cpu_has_dsp)
++ elf_hwcap |= HWCAP_MIPS_DSP;
++
++ if (cpu_has_dsp2)
++ elf_hwcap |= HWCAP_MIPS_DSP2;
++
++ if (cpu_has_dsp3)
++ elf_hwcap |= HWCAP_MIPS_DSP3;
++
++ if (cpu_has_mips16e2)
++ elf_hwcap |= HWCAP_MIPS_MIPS16E2;
++
++ if (cpu_has_loongson_mmi)
++ elf_hwcap |= HWCAP_LOONGSON_MMI;
++
++ if (cpu_has_loongson_ext)
++ elf_hwcap |= HWCAP_LOONGSON_EXT;
++
++ if (cpu_has_loongson_ext2)
++ elf_hwcap |= HWCAP_LOONGSON_EXT2;
++
+ if (cpu_has_vz)
+ cpu_probe_vz(c);
+
+diff --git a/arch/mips/loongson64/Platform b/arch/mips/loongson64/Platform
+index c1a4d4dc4665..9f79908f5063 100644
+--- a/arch/mips/loongson64/Platform
++++ b/arch/mips/loongson64/Platform
+@@ -66,6 +66,10 @@ else
+ $(call cc-option,-march=mips64r2,-mips64r2 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS64)
+ endif
+
++# Some -march= flags enable MMI instructions, and GCC complains about that
++# support being enabled alongside -msoft-float. Thus explicitly disable MMI.
++cflags-y += $(call cc-option,-mno-loongson-mmi)
++
+ #
+ # Loongson Machines' Support
+ #
+diff --git a/arch/mips/vdso/Makefile b/arch/mips/vdso/Makefile
+index 7221df24cb23..38d005be3cc4 100644
+--- a/arch/mips/vdso/Makefile
++++ b/arch/mips/vdso/Makefile
+@@ -9,6 +9,7 @@ ccflags-vdso := \
+ $(filter -mmicromips,$(KBUILD_CFLAGS)) \
+ $(filter -march=%,$(KBUILD_CFLAGS)) \
+ $(filter -m%-float,$(KBUILD_CFLAGS)) \
++ $(filter -mno-loongson-%,$(KBUILD_CFLAGS)) \
+ -D__VDSO__
+
+ ifdef CONFIG_CC_IS_CLANG
+diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
+index e28f8b723b5c..9d5252c9685c 100644
+--- a/arch/x86/include/asm/mwait.h
++++ b/arch/x86/include/asm/mwait.h
+@@ -21,7 +21,7 @@
+ #define MWAIT_ECX_INTERRUPT_BREAK 0x1
+ #define MWAITX_ECX_TIMER_ENABLE BIT(1)
+ #define MWAITX_MAX_LOOPS ((u32)-1)
+-#define MWAITX_DISABLE_CSTATES 0xf
++#define MWAITX_DISABLE_CSTATES 0xf0
+
+ static inline void __monitor(const void *eax, unsigned long ecx,
+ unsigned long edx)
+diff --git a/arch/x86/lib/delay.c b/arch/x86/lib/delay.c
+index b7375dc6898f..c126571e5e2e 100644
+--- a/arch/x86/lib/delay.c
++++ b/arch/x86/lib/delay.c
+@@ -113,8 +113,8 @@ static void delay_mwaitx(unsigned long __loops)
+ __monitorx(raw_cpu_ptr(&cpu_tss_rw), 0, 0);
+
+ /*
+- * AMD, like Intel, supports the EAX hint and EAX=0xf
+- * means, do not enter any deep C-state and we use it
++ * AMD, like Intel's MWAIT version, supports the EAX hint and
++ * EAX=0xf0 means, do not enter any deep C-state and we use it
+ * here in delay() to minimize wakeup latency.
+ */
+ __mwaitx(MWAITX_DISABLE_CSTATES, delay, MWAITX_ECX_TIMER_ENABLE);
+diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
+index 3954c0dc1443..de04b89e9157 100644
+--- a/block/blk-rq-qos.c
++++ b/block/blk-rq-qos.c
+@@ -142,24 +142,27 @@ bool rq_depth_calc_max_depth(struct rq_depth *rqd)
+ return ret;
+ }
+
+-void rq_depth_scale_up(struct rq_depth *rqd)
++/* Returns true on success and false if scaling up wasn't possible */
++bool rq_depth_scale_up(struct rq_depth *rqd)
+ {
+ /*
+ * Hit max in previous round, stop here
+ */
+ if (rqd->scaled_max)
+- return;
++ return false;
+
+ rqd->scale_step--;
+
+ rqd->scaled_max = rq_depth_calc_max_depth(rqd);
++ return true;
+ }
+
+ /*
+ * Scale rwb down. If 'hard_throttle' is set, do it quicker, since we
+- * had a latency violation.
++ * had a latency violation. Returns true on success and returns false if
++ * scaling down wasn't possible.
+ */
+-void rq_depth_scale_down(struct rq_depth *rqd, bool hard_throttle)
++bool rq_depth_scale_down(struct rq_depth *rqd, bool hard_throttle)
+ {
+ /*
+ * Stop scaling down when we've hit the limit. This also prevents
+@@ -167,7 +170,7 @@ void rq_depth_scale_down(struct rq_depth *rqd, bool hard_throttle)
+ * keep up.
+ */
+ if (rqd->max_depth == 1)
+- return;
++ return false;
+
+ if (rqd->scale_step < 0 && hard_throttle)
+ rqd->scale_step = 0;
+@@ -176,6 +179,7 @@ void rq_depth_scale_down(struct rq_depth *rqd, bool hard_throttle)
+
+ rqd->scaled_max = false;
+ rq_depth_calc_max_depth(rqd);
++ return true;
+ }
+
+ struct rq_qos_wait_data {
+diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
+index 2300e038b9fa..c0f0778d5396 100644
+--- a/block/blk-rq-qos.h
++++ b/block/blk-rq-qos.h
+@@ -125,8 +125,8 @@ void rq_qos_wait(struct rq_wait *rqw, void *private_data,
+ acquire_inflight_cb_t *acquire_inflight_cb,
+ cleanup_cb_t *cleanup_cb);
+ bool rq_wait_inc_below(struct rq_wait *rq_wait, unsigned int limit);
+-void rq_depth_scale_up(struct rq_depth *rqd);
+-void rq_depth_scale_down(struct rq_depth *rqd, bool hard_throttle);
++bool rq_depth_scale_up(struct rq_depth *rqd);
++bool rq_depth_scale_down(struct rq_depth *rqd, bool hard_throttle);
+ bool rq_depth_calc_max_depth(struct rq_depth *rqd);
+
+ void __rq_qos_cleanup(struct rq_qos *rqos, struct bio *bio);
+diff --git a/block/blk-wbt.c b/block/blk-wbt.c
+index 313f45a37e9d..5a96881e7a52 100644
+--- a/block/blk-wbt.c
++++ b/block/blk-wbt.c
+@@ -308,7 +308,8 @@ static void calc_wb_limits(struct rq_wb *rwb)
+
+ static void scale_up(struct rq_wb *rwb)
+ {
+- rq_depth_scale_up(&rwb->rq_depth);
++ if (!rq_depth_scale_up(&rwb->rq_depth))
++ return;
+ calc_wb_limits(rwb);
+ rwb->unknown_cnt = 0;
+ rwb_wake_all(rwb);
+@@ -317,7 +318,8 @@ static void scale_up(struct rq_wb *rwb)
+
+ static void scale_down(struct rq_wb *rwb, bool hard_throttle)
+ {
+- rq_depth_scale_down(&rwb->rq_depth, hard_throttle);
++ if (!rq_depth_scale_down(&rwb->rq_depth, hard_throttle))
++ return;
+ calc_wb_limits(rwb);
+ rwb->unknown_cnt = 0;
+ rwb_trace_step(rwb, "scale down");
+diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
+index 1e7ac0bd0d3a..9497298018a9 100644
+--- a/drivers/acpi/pptt.c
++++ b/drivers/acpi/pptt.c
+@@ -540,6 +540,44 @@ static int find_acpi_cpu_topology_tag(unsigned int cpu, int level, int flag)
+ return retval;
+ }
+
++/**
++ * check_acpi_cpu_flag() - Determine if CPU node has a flag set
++ * @cpu: Kernel logical CPU number
++ * @rev: The minimum PPTT revision defining the flag
++ * @flag: The flag itself
++ *
++ * Check the node representing a CPU for a given flag.
++ *
++ * Return: -ENOENT if the PPTT doesn't exist, the CPU cannot be found or
++ * the table revision isn't new enough.
++ * 1, any passed flag set
++ * 0, flag unset
++ */
++static int check_acpi_cpu_flag(unsigned int cpu, int rev, u32 flag)
++{
++ struct acpi_table_header *table;
++ acpi_status status;
++ u32 acpi_cpu_id = get_acpi_id_for_cpu(cpu);
++ struct acpi_pptt_processor *cpu_node = NULL;
++ int ret = -ENOENT;
++
++ status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
++ if (ACPI_FAILURE(status)) {
++ acpi_pptt_warn_missing();
++ return ret;
++ }
++
++ if (table->revision >= rev)
++ cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
++
++ if (cpu_node)
++ ret = (cpu_node->flags & flag) != 0;
++
++ acpi_put_table(table);
++
++ return ret;
++}
++
+ /**
+ * acpi_find_last_cache_level() - Determines the number of cache levels for a PE
+ * @cpu: Kernel logical CPU number
+@@ -604,6 +642,20 @@ int cache_setup_acpi(unsigned int cpu)
+ return status;
+ }
+
++/**
++ * acpi_pptt_cpu_is_thread() - Determine if CPU is a thread
++ * @cpu: Kernel logical CPU number
++ *
++ * Return: 1, a thread
++ * 0, not a thread
++ * -ENOENT ,if the PPTT doesn't exist, the CPU cannot be found or
++ * the table revision isn't new enough.
++ */
++int acpi_pptt_cpu_is_thread(unsigned int cpu)
++{
++ return check_acpi_cpu_flag(cpu, 2, ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD);
++}
++
+ /**
+ * find_acpi_cpu_topology() - Determine a unique topology value for a given CPU
+ * @cpu: Kernel logical CPU number
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index ad3b1f4866b3..8f020827cdd3 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -282,6 +282,9 @@ static __init int efivar_ssdt_load(void)
+ void *data;
+ int ret;
+
++ if (!efivar_ssdt[0])
++ return 0;
++
+ ret = efivar_init(efivar_ssdt_iter, &entries, true, &entries);
+
+ list_for_each_entry_safe(entry, aux, &entries, list) {
+diff --git a/drivers/firmware/efi/tpm.c b/drivers/firmware/efi/tpm.c
+index 1d3f5ca3eaaf..ebd7977653a8 100644
+--- a/drivers/firmware/efi/tpm.c
++++ b/drivers/firmware/efi/tpm.c
+@@ -40,7 +40,7 @@ int __init efi_tpm_eventlog_init(void)
+ {
+ struct linux_efi_tpm_eventlog *log_tbl;
+ struct efi_tcg2_final_events_table *final_tbl;
+- unsigned int tbl_size;
++ int tbl_size;
+ int ret = 0;
+
+ if (efi.tpm_log == EFI_INVALID_TABLE_ADDR) {
+@@ -75,16 +75,28 @@ int __init efi_tpm_eventlog_init(void)
+ goto out;
+ }
+
+- tbl_size = tpm2_calc_event_log_size((void *)efi.tpm_final_log
+- + sizeof(final_tbl->version)
+- + sizeof(final_tbl->nr_events),
+- final_tbl->nr_events,
+- log_tbl->log);
++ tbl_size = 0;
++ if (final_tbl->nr_events != 0) {
++ void *events = (void *)efi.tpm_final_log
++ + sizeof(final_tbl->version)
++ + sizeof(final_tbl->nr_events);
++
++ tbl_size = tpm2_calc_event_log_size(events,
++ final_tbl->nr_events,
++ log_tbl->log);
++ }
++
++ if (tbl_size < 0) {
++ pr_err(FW_BUG "Failed to parse event in TPM Final Events Log\n");
++ goto out_calc;
++ }
++
+ memblock_reserve((unsigned long)final_tbl,
+ tbl_size + sizeof(*final_tbl));
+- early_memunmap(final_tbl, sizeof(*final_tbl));
+ efi_tpm_final_log_size = tbl_size;
+
++out_calc:
++ early_memunmap(final_tbl, sizeof(*final_tbl));
+ out:
+ early_memunmap(log_tbl, sizeof(*log_tbl));
+ return ret;
+diff --git a/drivers/firmware/google/vpd_decode.c b/drivers/firmware/google/vpd_decode.c
+index dda525c0f968..5c6f2a74f104 100644
+--- a/drivers/firmware/google/vpd_decode.c
++++ b/drivers/firmware/google/vpd_decode.c
+@@ -52,7 +52,7 @@ static int vpd_decode_entry(const u32 max_len, const u8 *input_buf,
+ if (max_len - consumed < *entry_len)
+ return VPD_FAIL;
+
+- consumed += decoded_len;
++ consumed += *entry_len;
+ *_consumed = consumed;
+ return VPD_OK;
+ }
+diff --git a/drivers/gpio/gpio-eic-sprd.c b/drivers/gpio/gpio-eic-sprd.c
+index 7b9ac4a12c20..090539f0c5a2 100644
+--- a/drivers/gpio/gpio-eic-sprd.c
++++ b/drivers/gpio/gpio-eic-sprd.c
+@@ -530,11 +530,12 @@ static void sprd_eic_handle_one_type(struct gpio_chip *chip)
+ }
+
+ for_each_set_bit(n, &reg, SPRD_EIC_PER_BANK_NR) {
+- girq = irq_find_mapping(chip->irq.domain,
+- bank * SPRD_EIC_PER_BANK_NR + n);
++ u32 offset = bank * SPRD_EIC_PER_BANK_NR + n;
++
++ girq = irq_find_mapping(chip->irq.domain, offset);
+
+ generic_handle_irq(girq);
+- sprd_eic_toggle_trigger(chip, girq, n);
++ sprd_eic_toggle_trigger(chip, girq, offset);
+ }
+ }
+ }
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index d9074191edef..74a77001b1bd 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -2775,8 +2775,10 @@ int gpiod_direction_output(struct gpio_desc *desc, int value)
+ if (!ret)
+ goto set_output_value;
+ /* Emulate open drain by not actively driving the line high */
+- if (value)
+- return gpiod_direction_input(desc);
++ if (value) {
++ ret = gpiod_direction_input(desc);
++ goto set_output_flag;
++ }
+ }
+ else if (test_bit(FLAG_OPEN_SOURCE, &desc->flags)) {
+ ret = gpio_set_config(gc, gpio_chip_hwgpio(desc),
+@@ -2784,8 +2786,10 @@ int gpiod_direction_output(struct gpio_desc *desc, int value)
+ if (!ret)
+ goto set_output_value;
+ /* Emulate open source by not actively driving the line low */
+- if (!value)
+- return gpiod_direction_input(desc);
++ if (!value) {
++ ret = gpiod_direction_input(desc);
++ goto set_output_flag;
++ }
+ } else {
+ gpio_set_config(gc, gpio_chip_hwgpio(desc),
+ PIN_CONFIG_DRIVE_PUSH_PULL);
+@@ -2793,6 +2797,17 @@ int gpiod_direction_output(struct gpio_desc *desc, int value)
+
+ set_output_value:
+ return gpiod_direction_output_raw_commit(desc, value);
++
++set_output_flag:
++ /*
++ * When emulating open-source or open-drain functionalities by not
++ * actively driving the line (setting mode to input) we still need to
++ * set the IS_OUT flag or otherwise we won't be able to set the line
++ * value anymore.
++ */
++ if (ret == 0)
++ set_bit(FLAG_IS_OUT, &desc->flags);
++ return ret;
+ }
+ EXPORT_SYMBOL_GPL(gpiod_direction_output);
+
+@@ -3153,8 +3168,6 @@ static void gpio_set_open_drain_value_commit(struct gpio_desc *desc, bool value)
+
+ if (value) {
+ err = chip->direction_input(chip, offset);
+- if (!err)
+- clear_bit(FLAG_IS_OUT, &desc->flags);
+ } else {
+ err = chip->direction_output(chip, offset, 0);
+ if (!err)
+@@ -3184,8 +3197,6 @@ static void gpio_set_open_source_value_commit(struct gpio_desc *desc, bool value
+ set_bit(FLAG_IS_OUT, &desc->flags);
+ } else {
+ err = chip->direction_input(chip, offset);
+- if (!err)
+- clear_bit(FLAG_IS_OUT, &desc->flags);
+ }
+ trace_gpio_direction(desc_to_gpio(desc), !value, err);
+ if (err < 0)
+@@ -4303,7 +4314,7 @@ struct gpio_desc *gpiod_get_from_of_node(struct device_node *node,
+ transitory = flags & OF_GPIO_TRANSITORY;
+
+ ret = gpiod_request(desc, label);
+- if (ret == -EBUSY && (flags & GPIOD_FLAGS_BIT_NONEXCLUSIVE))
++ if (ret == -EBUSY && (dflags & GPIOD_FLAGS_BIT_NONEXCLUSIVE))
+ return desc;
+ if (ret)
+ return ERR_PTR(ret);
+diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
+index 1e59a78e74bf..9b61fae5aef7 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.c
++++ b/drivers/gpu/drm/i915/display/intel_display.c
+@@ -3291,7 +3291,20 @@ static int skl_max_plane_width(const struct drm_framebuffer *fb,
+ switch (fb->modifier) {
+ case DRM_FORMAT_MOD_LINEAR:
+ case I915_FORMAT_MOD_X_TILED:
+- return 4096;
++ /*
++ * Validated limit is 4k, but has 5k should
++ * work apart from the following features:
++ * - Ytile (already limited to 4k)
++ * - FP16 (already limited to 4k)
++ * - render compression (already limited to 4k)
++ * - KVMR sprite and cursor (don't care)
++ * - horizontal panning (TODO verify this)
++ * - pipe and plane scaling (TODO verify this)
++ */
++ if (cpp == 8)
++ return 4096;
++ else
++ return 5120;
+ case I915_FORMAT_MOD_Y_TILED_CCS:
+ case I915_FORMAT_MOD_Yf_TILED_CCS:
+ /* FIXME AUX plane? */
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+index 39a661927d8e..c201289039fe 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+@@ -317,7 +317,11 @@ vm_fault_t i915_gem_fault(struct vm_fault *vmf)
+ msecs_to_jiffies_timeout(CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND));
+ GEM_BUG_ON(!obj->userfault_count);
+
+- i915_vma_set_ggtt_write(vma);
++ if (write) {
++ GEM_BUG_ON(!i915_gem_object_has_pinned_pages(obj));
++ i915_vma_set_ggtt_write(vma);
++ obj->mm.dirty = true;
++ }
+
+ err_fence:
+ i915_vma_unpin_fence(vma);
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pm.c b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
+index 914b5d4112bb..d6e9a10f3589 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_pm.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
+@@ -250,9 +250,6 @@ void i915_gem_resume(struct drm_i915_private *i915)
+ mutex_lock(&i915->drm.struct_mutex);
+ intel_uncore_forcewake_get(&i915->uncore, FORCEWAKE_ALL);
+
+- i915_gem_restore_gtt_mappings(i915);
+- i915_gem_restore_fences(i915);
+-
+ if (i915_gem_init_hw(i915))
+ goto err_wedged;
+
+diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
+index 99e8242194c0..8f75882ded3f 100644
+--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
++++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
+@@ -1042,6 +1042,9 @@ static void gen9_whitelist_build(struct i915_wa_list *w)
+
+ /* WaAllowUMDToModifyHDCChicken1:skl,bxt,kbl,glk,cfl */
+ whitelist_reg(w, GEN8_HDC_CHICKEN1);
++
++ /* WaSendPushConstantsFromMMIO:skl,bxt */
++ whitelist_reg(w, COMMON_SLICE_CHICKEN2);
+ }
+
+ static void skl_whitelist_build(struct intel_engine_cs *engine)
+diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
+index bac1ee94f63f..5b895df09ebf 100644
+--- a/drivers/gpu/drm/i915/i915_drv.c
++++ b/drivers/gpu/drm/i915/i915_drv.c
+@@ -2238,6 +2238,11 @@ static int i915_drm_resume(struct drm_device *dev)
+ if (ret)
+ DRM_ERROR("failed to re-enable GGTT\n");
+
++ mutex_lock(&dev_priv->drm.struct_mutex);
++ i915_gem_restore_gtt_mappings(dev_priv);
++ i915_gem_restore_fences(dev_priv);
++ mutex_unlock(&dev_priv->drm.struct_mutex);
++
+ intel_csr_ucode_resume(dev_priv);
+
+ i915_restore_state(dev_priv);
+diff --git a/drivers/gpu/drm/i915/selftests/i915_gem.c b/drivers/gpu/drm/i915/selftests/i915_gem.c
+index c6a01a6e87f1..76b320914113 100644
+--- a/drivers/gpu/drm/i915/selftests/i915_gem.c
++++ b/drivers/gpu/drm/i915/selftests/i915_gem.c
+@@ -117,6 +117,12 @@ static void pm_resume(struct drm_i915_private *i915)
+ with_intel_runtime_pm(&i915->runtime_pm, wakeref) {
+ intel_gt_sanitize(i915, false);
+ i915_gem_sanitize(i915);
++
++ mutex_lock(&i915->drm.struct_mutex);
++ i915_gem_restore_gtt_mappings(i915);
++ i915_gem_restore_fences(i915);
++ mutex_unlock(&i915->drm.struct_mutex);
++
+ i915_gem_resume(i915);
+ }
+ }
+diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
+index 8cf6362e64bf..07b5fe0a7e5d 100644
+--- a/drivers/gpu/drm/msm/msm_gem.c
++++ b/drivers/gpu/drm/msm/msm_gem.c
+@@ -50,7 +50,7 @@ static void sync_for_device(struct msm_gem_object *msm_obj)
+ {
+ struct device *dev = msm_obj->base.dev->dev;
+
+- if (get_dma_ops(dev)) {
++ if (get_dma_ops(dev) && IS_ENABLED(CONFIG_ARM64)) {
+ dma_sync_sg_for_device(dev, msm_obj->sgt->sgl,
+ msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
+ } else {
+@@ -63,7 +63,7 @@ static void sync_for_cpu(struct msm_gem_object *msm_obj)
+ {
+ struct device *dev = msm_obj->base.dev->dev;
+
+- if (get_dma_ops(dev)) {
++ if (get_dma_ops(dev) && IS_ENABLED(CONFIG_ARM64)) {
+ dma_sync_sg_for_cpu(dev, msm_obj->sgt->sgl,
+ msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
+ } else {
+diff --git a/drivers/iio/accel/adxl372.c b/drivers/iio/accel/adxl372.c
+index 055227cb3d43..67b8817995c0 100644
+--- a/drivers/iio/accel/adxl372.c
++++ b/drivers/iio/accel/adxl372.c
+@@ -474,12 +474,17 @@ static int adxl372_configure_fifo(struct adxl372_state *st)
+ if (ret < 0)
+ return ret;
+
+- fifo_samples = st->watermark & 0xFF;
++ /*
++ * watermark stores the number of sets; we need to write the FIFO
++ * registers with the number of samples
++ */
++ fifo_samples = (st->watermark * st->fifo_set_size);
+ fifo_ctl = ADXL372_FIFO_CTL_FORMAT_MODE(st->fifo_format) |
+ ADXL372_FIFO_CTL_MODE_MODE(st->fifo_mode) |
+- ADXL372_FIFO_CTL_SAMPLES_MODE(st->watermark);
++ ADXL372_FIFO_CTL_SAMPLES_MODE(fifo_samples);
+
+- ret = regmap_write(st->regmap, ADXL372_FIFO_SAMPLES, fifo_samples);
++ ret = regmap_write(st->regmap,
++ ADXL372_FIFO_SAMPLES, fifo_samples & 0xFF);
+ if (ret < 0)
+ return ret;
+
+@@ -548,8 +553,7 @@ static irqreturn_t adxl372_trigger_handler(int irq, void *p)
+ goto err;
+
+ /* Each sample is 2 bytes */
+- for (i = 0; i < fifo_entries * sizeof(u16);
+- i += st->fifo_set_size * sizeof(u16))
++ for (i = 0; i < fifo_entries; i += st->fifo_set_size)
+ iio_push_to_buffers(indio_dev, &st->fifo_buf[i]);
+ }
+ err:
+@@ -571,6 +575,14 @@ static int adxl372_setup(struct adxl372_state *st)
+ return -ENODEV;
+ }
+
++ /*
++ * Perform a software reset to make sure the device is in a consistent
++ * state after start up.
++ */
++ ret = regmap_write(st->regmap, ADXL372_RESET, ADXL372_RESET_CODE);
++ if (ret < 0)
++ return ret;
++
+ ret = adxl372_set_op_mode(st, ADXL372_STANDBY);
+ if (ret < 0)
+ return ret;
+diff --git a/drivers/iio/adc/ad799x.c b/drivers/iio/adc/ad799x.c
+index 5a3ca5904ded..f658012baad8 100644
+--- a/drivers/iio/adc/ad799x.c
++++ b/drivers/iio/adc/ad799x.c
+@@ -810,10 +810,10 @@ static int ad799x_probe(struct i2c_client *client,
+
+ ret = ad799x_write_config(st, st->chip_config->default_config);
+ if (ret < 0)
+- goto error_disable_reg;
++ goto error_disable_vref;
+ ret = ad799x_read_config(st);
+ if (ret < 0)
+- goto error_disable_reg;
++ goto error_disable_vref;
+ st->config = ret;
+
+ ret = iio_triggered_buffer_setup(indio_dev, NULL,
+diff --git a/drivers/iio/adc/axp288_adc.c b/drivers/iio/adc/axp288_adc.c
+index 31d51bcc5f2c..85d08e68b34f 100644
+--- a/drivers/iio/adc/axp288_adc.c
++++ b/drivers/iio/adc/axp288_adc.c
+@@ -7,6 +7,7 @@
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ */
+
++#include <linux/dmi.h>
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/device.h>
+@@ -25,6 +26,11 @@
+ #define AXP288_ADC_EN_MASK 0xF0
+ #define AXP288_ADC_TS_ENABLE 0x01
+
++#define AXP288_ADC_TS_BIAS_MASK GENMASK(5, 4)
++#define AXP288_ADC_TS_BIAS_20UA (0 << 4)
++#define AXP288_ADC_TS_BIAS_40UA (1 << 4)
++#define AXP288_ADC_TS_BIAS_60UA (2 << 4)
++#define AXP288_ADC_TS_BIAS_80UA (3 << 4)
+ #define AXP288_ADC_TS_CURRENT_ON_OFF_MASK GENMASK(1, 0)
+ #define AXP288_ADC_TS_CURRENT_OFF (0 << 0)
+ #define AXP288_ADC_TS_CURRENT_ON_WHEN_CHARGING (1 << 0)
+@@ -177,10 +183,36 @@ static int axp288_adc_read_raw(struct iio_dev *indio_dev,
+ return ret;
+ }
+
++/*
++ * We rely on the machine's firmware to correctly setup the TS pin bias current
++ * at boot. This lists systems with broken fw where we need to set it ourselves.
++ */
++static const struct dmi_system_id axp288_adc_ts_bias_override[] = {
++ {
++ /* Lenovo Ideapad 100S (11 inch) */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad 100S-11IBY"),
++ },
++ .driver_data = (void *)(uintptr_t)AXP288_ADC_TS_BIAS_80UA,
++ },
++ {}
++};
++
+ static int axp288_adc_initialize(struct axp288_adc_info *info)
+ {
++ const struct dmi_system_id *bias_override;
+ int ret, adc_enable_val;
+
++ bias_override = dmi_first_match(axp288_adc_ts_bias_override);
++ if (bias_override) {
++ ret = regmap_update_bits(info->regmap, AXP288_ADC_TS_PIN_CTRL,
++ AXP288_ADC_TS_BIAS_MASK,
++ (uintptr_t)bias_override->driver_data);
++ if (ret)
++ return ret;
++ }
++
+ /*
+ * Determine if the TS pin is enabled and set the TS current-source
+ * accordingly.
+diff --git a/drivers/iio/adc/hx711.c b/drivers/iio/adc/hx711.c
+index 88c7fe15003b..62e6c8badd22 100644
+--- a/drivers/iio/adc/hx711.c
++++ b/drivers/iio/adc/hx711.c
+@@ -100,14 +100,14 @@ struct hx711_data {
+
+ static int hx711_cycle(struct hx711_data *hx711_data)
+ {
+- int val;
++ unsigned long flags;
+
+ /*
+ * if preempted for more then 60us while PD_SCK is high:
+ * hx711 is going in reset
+ * ==> measuring is false
+ */
+- preempt_disable();
++ local_irq_save(flags);
+ gpiod_set_value(hx711_data->gpiod_pd_sck, 1);
+
+ /*
+@@ -117,7 +117,6 @@ static int hx711_cycle(struct hx711_data *hx711_data)
+ */
+ ndelay(hx711_data->data_ready_delay_ns);
+
+- val = gpiod_get_value(hx711_data->gpiod_dout);
+ /*
+ * here we are not waiting for 0.2 us as suggested by the datasheet,
+ * because the oscilloscope showed in a test scenario
+@@ -125,7 +124,7 @@ static int hx711_cycle(struct hx711_data *hx711_data)
+ * and 0.56 us for PD_SCK low on TI Sitara with 800 MHz
+ */
+ gpiod_set_value(hx711_data->gpiod_pd_sck, 0);
+- preempt_enable();
++ local_irq_restore(flags);
+
+ /*
+ * make it a square wave for addressing cases with capacitance on
+@@ -133,7 +132,8 @@ static int hx711_cycle(struct hx711_data *hx711_data)
+ */
+ ndelay(hx711_data->data_ready_delay_ns);
+
+- return val;
++ /* sample as late as possible */
++ return gpiod_get_value(hx711_data->gpiod_dout);
+ }
+
+ static int hx711_read(struct hx711_data *hx711_data)
+diff --git a/drivers/iio/adc/stm32-adc-core.c b/drivers/iio/adc/stm32-adc-core.c
+index 1f7ce5186dfc..6016a864d6d6 100644
+--- a/drivers/iio/adc/stm32-adc-core.c
++++ b/drivers/iio/adc/stm32-adc-core.c
+@@ -22,33 +22,6 @@
+
+ #include "stm32-adc-core.h"
+
+-/* STM32F4 - common registers for all ADC instances: 1, 2 & 3 */
+-#define STM32F4_ADC_CSR (STM32_ADCX_COMN_OFFSET + 0x00)
+-#define STM32F4_ADC_CCR (STM32_ADCX_COMN_OFFSET + 0x04)
+-
+-/* STM32F4_ADC_CSR - bit fields */
+-#define STM32F4_EOC3 BIT(17)
+-#define STM32F4_EOC2 BIT(9)
+-#define STM32F4_EOC1 BIT(1)
+-
+-/* STM32F4_ADC_CCR - bit fields */
+-#define STM32F4_ADC_ADCPRE_SHIFT 16
+-#define STM32F4_ADC_ADCPRE_MASK GENMASK(17, 16)
+-
+-/* STM32H7 - common registers for all ADC instances */
+-#define STM32H7_ADC_CSR (STM32_ADCX_COMN_OFFSET + 0x00)
+-#define STM32H7_ADC_CCR (STM32_ADCX_COMN_OFFSET + 0x08)
+-
+-/* STM32H7_ADC_CSR - bit fields */
+-#define STM32H7_EOC_SLV BIT(18)
+-#define STM32H7_EOC_MST BIT(2)
+-
+-/* STM32H7_ADC_CCR - bit fields */
+-#define STM32H7_PRESC_SHIFT 18
+-#define STM32H7_PRESC_MASK GENMASK(21, 18)
+-#define STM32H7_CKMODE_SHIFT 16
+-#define STM32H7_CKMODE_MASK GENMASK(17, 16)
+-
+ #define STM32_ADC_CORE_SLEEP_DELAY_MS 2000
+
+ /**
+@@ -58,6 +31,8 @@
+ * @eoc1: adc1 end of conversion flag in @csr
+ * @eoc2: adc2 end of conversion flag in @csr
+ * @eoc3: adc3 end of conversion flag in @csr
++ * @ier: interrupt enable register offset for each adc
++ * @eocie_msk: end of conversion interrupt enable mask in @ier
+ */
+ struct stm32_adc_common_regs {
+ u32 csr;
+@@ -65,6 +40,8 @@ struct stm32_adc_common_regs {
+ u32 eoc1_msk;
+ u32 eoc2_msk;
+ u32 eoc3_msk;
++ u32 ier;
++ u32 eocie_msk;
+ };
+
+ struct stm32_adc_priv;
+@@ -278,6 +255,8 @@ static const struct stm32_adc_common_regs stm32f4_adc_common_regs = {
+ .eoc1_msk = STM32F4_EOC1,
+ .eoc2_msk = STM32F4_EOC2,
+ .eoc3_msk = STM32F4_EOC3,
++ .ier = STM32F4_ADC_CR1,
++ .eocie_msk = STM32F4_EOCIE,
+ };
+
+ /* STM32H7 common registers definitions */
+@@ -286,8 +265,24 @@ static const struct stm32_adc_common_regs stm32h7_adc_common_regs = {
+ .ccr = STM32H7_ADC_CCR,
+ .eoc1_msk = STM32H7_EOC_MST,
+ .eoc2_msk = STM32H7_EOC_SLV,
++ .ier = STM32H7_ADC_IER,
++ .eocie_msk = STM32H7_EOCIE,
++};
++
++static const unsigned int stm32_adc_offset[STM32_ADC_MAX_ADCS] = {
++ 0, STM32_ADC_OFFSET, STM32_ADC_OFFSET * 2,
+ };
+
++static unsigned int stm32_adc_eoc_enabled(struct stm32_adc_priv *priv,
++ unsigned int adc)
++{
++ u32 ier, offset = stm32_adc_offset[adc];
++
++ ier = readl_relaxed(priv->common.base + offset + priv->cfg->regs->ier);
++
++ return ier & priv->cfg->regs->eocie_msk;
++}
++
+ /* ADC common interrupt for all instances */
+ static void stm32_adc_irq_handler(struct irq_desc *desc)
+ {
+@@ -298,13 +293,28 @@ static void stm32_adc_irq_handler(struct irq_desc *desc)
+ chained_irq_enter(chip, desc);
+ status = readl_relaxed(priv->common.base + priv->cfg->regs->csr);
+
+- if (status & priv->cfg->regs->eoc1_msk)
++ /*
++ * End of conversion may be handled by using IRQ or DMA. There may be a
++ * race here when two conversions complete at the same time on several
++ * ADCs. EOC may be read 'set' for several ADCs, with:
++ * - an ADC configured to use DMA (EOC triggers the DMA request, and
++ * is then automatically cleared by DR read in hardware)
++ * - an ADC configured to use IRQs (EOCIE bit is set. The handler must
++ * be called in this case)
++ * So both EOC status bit in CSR and EOCIE control bit must be checked
++ * before invoking the interrupt handler (e.g. call ISR only for
++ * IRQ-enabled ADCs).
++ */
++ if (status & priv->cfg->regs->eoc1_msk &&
++ stm32_adc_eoc_enabled(priv, 0))
+ generic_handle_irq(irq_find_mapping(priv->domain, 0));
+
+- if (status & priv->cfg->regs->eoc2_msk)
++ if (status & priv->cfg->regs->eoc2_msk &&
++ stm32_adc_eoc_enabled(priv, 1))
+ generic_handle_irq(irq_find_mapping(priv->domain, 1));
+
+- if (status & priv->cfg->regs->eoc3_msk)
++ if (status & priv->cfg->regs->eoc3_msk &&
++ stm32_adc_eoc_enabled(priv, 2))
+ generic_handle_irq(irq_find_mapping(priv->domain, 2));
+
+ chained_irq_exit(chip, desc);
+diff --git a/drivers/iio/adc/stm32-adc-core.h b/drivers/iio/adc/stm32-adc-core.h
+index 8af507b3f32d..2579d514c2a3 100644
+--- a/drivers/iio/adc/stm32-adc-core.h
++++ b/drivers/iio/adc/stm32-adc-core.h
+@@ -25,8 +25,145 @@
+ * --------------------------------------------------------
+ */
+ #define STM32_ADC_MAX_ADCS 3
++#define STM32_ADC_OFFSET 0x100
+ #define STM32_ADCX_COMN_OFFSET 0x300
+
++/* STM32F4 - Registers for each ADC instance */
++#define STM32F4_ADC_SR 0x00
++#define STM32F4_ADC_CR1 0x04
++#define STM32F4_ADC_CR2 0x08
++#define STM32F4_ADC_SMPR1 0x0C
++#define STM32F4_ADC_SMPR2 0x10
++#define STM32F4_ADC_HTR 0x24
++#define STM32F4_ADC_LTR 0x28
++#define STM32F4_ADC_SQR1 0x2C
++#define STM32F4_ADC_SQR2 0x30
++#define STM32F4_ADC_SQR3 0x34
++#define STM32F4_ADC_JSQR 0x38
++#define STM32F4_ADC_JDR1 0x3C
++#define STM32F4_ADC_JDR2 0x40
++#define STM32F4_ADC_JDR3 0x44
++#define STM32F4_ADC_JDR4 0x48
++#define STM32F4_ADC_DR 0x4C
++
++/* STM32F4 - common registers for all ADC instances: 1, 2 & 3 */
++#define STM32F4_ADC_CSR (STM32_ADCX_COMN_OFFSET + 0x00)
++#define STM32F4_ADC_CCR (STM32_ADCX_COMN_OFFSET + 0x04)
++
++/* STM32F4_ADC_SR - bit fields */
++#define STM32F4_STRT BIT(4)
++#define STM32F4_EOC BIT(1)
++
++/* STM32F4_ADC_CR1 - bit fields */
++#define STM32F4_RES_SHIFT 24
++#define STM32F4_RES_MASK GENMASK(25, 24)
++#define STM32F4_SCAN BIT(8)
++#define STM32F4_EOCIE BIT(5)
++
++/* STM32F4_ADC_CR2 - bit fields */
++#define STM32F4_SWSTART BIT(30)
++#define STM32F4_EXTEN_SHIFT 28
++#define STM32F4_EXTEN_MASK GENMASK(29, 28)
++#define STM32F4_EXTSEL_SHIFT 24
++#define STM32F4_EXTSEL_MASK GENMASK(27, 24)
++#define STM32F4_EOCS BIT(10)
++#define STM32F4_DDS BIT(9)
++#define STM32F4_DMA BIT(8)
++#define STM32F4_ADON BIT(0)
++
++/* STM32F4_ADC_CSR - bit fields */
++#define STM32F4_EOC3 BIT(17)
++#define STM32F4_EOC2 BIT(9)
++#define STM32F4_EOC1 BIT(1)
++
++/* STM32F4_ADC_CCR - bit fields */
++#define STM32F4_ADC_ADCPRE_SHIFT 16
++#define STM32F4_ADC_ADCPRE_MASK GENMASK(17, 16)
++
++/* STM32H7 - Registers for each ADC instance */
++#define STM32H7_ADC_ISR 0x00
++#define STM32H7_ADC_IER 0x04
++#define STM32H7_ADC_CR 0x08
++#define STM32H7_ADC_CFGR 0x0C
++#define STM32H7_ADC_SMPR1 0x14
++#define STM32H7_ADC_SMPR2 0x18
++#define STM32H7_ADC_PCSEL 0x1C
++#define STM32H7_ADC_SQR1 0x30
++#define STM32H7_ADC_SQR2 0x34
++#define STM32H7_ADC_SQR3 0x38
++#define STM32H7_ADC_SQR4 0x3C
++#define STM32H7_ADC_DR 0x40
++#define STM32H7_ADC_DIFSEL 0xC0
++#define STM32H7_ADC_CALFACT 0xC4
++#define STM32H7_ADC_CALFACT2 0xC8
++
++/* STM32H7 - common registers for all ADC instances */
++#define STM32H7_ADC_CSR (STM32_ADCX_COMN_OFFSET + 0x00)
++#define STM32H7_ADC_CCR (STM32_ADCX_COMN_OFFSET + 0x08)
++
++/* STM32H7_ADC_ISR - bit fields */
++#define STM32MP1_VREGREADY BIT(12)
++#define STM32H7_EOC BIT(2)
++#define STM32H7_ADRDY BIT(0)
++
++/* STM32H7_ADC_IER - bit fields */
++#define STM32H7_EOCIE STM32H7_EOC
++
++/* STM32H7_ADC_CR - bit fields */
++#define STM32H7_ADCAL BIT(31)
++#define STM32H7_ADCALDIF BIT(30)
++#define STM32H7_DEEPPWD BIT(29)
++#define STM32H7_ADVREGEN BIT(28)
++#define STM32H7_LINCALRDYW6 BIT(27)
++#define STM32H7_LINCALRDYW5 BIT(26)
++#define STM32H7_LINCALRDYW4 BIT(25)
++#define STM32H7_LINCALRDYW3 BIT(24)
++#define STM32H7_LINCALRDYW2 BIT(23)
++#define STM32H7_LINCALRDYW1 BIT(22)
++#define STM32H7_ADCALLIN BIT(16)
++#define STM32H7_BOOST BIT(8)
++#define STM32H7_ADSTP BIT(4)
++#define STM32H7_ADSTART BIT(2)
++#define STM32H7_ADDIS BIT(1)
++#define STM32H7_ADEN BIT(0)
++
++/* STM32H7_ADC_CFGR bit fields */
++#define STM32H7_EXTEN_SHIFT 10
++#define STM32H7_EXTEN_MASK GENMASK(11, 10)
++#define STM32H7_EXTSEL_SHIFT 5
++#define STM32H7_EXTSEL_MASK GENMASK(9, 5)
++#define STM32H7_RES_SHIFT 2
++#define STM32H7_RES_MASK GENMASK(4, 2)
++#define STM32H7_DMNGT_SHIFT 0
++#define STM32H7_DMNGT_MASK GENMASK(1, 0)
++
++enum stm32h7_adc_dmngt {
++ STM32H7_DMNGT_DR_ONLY, /* Regular data in DR only */
++ STM32H7_DMNGT_DMA_ONESHOT, /* DMA one shot mode */
++ STM32H7_DMNGT_DFSDM, /* DFSDM mode */
++ STM32H7_DMNGT_DMA_CIRC, /* DMA circular mode */
++};
++
++/* STM32H7_ADC_CALFACT - bit fields */
++#define STM32H7_CALFACT_D_SHIFT 16
++#define STM32H7_CALFACT_D_MASK GENMASK(26, 16)
++#define STM32H7_CALFACT_S_SHIFT 0
++#define STM32H7_CALFACT_S_MASK GENMASK(10, 0)
++
++/* STM32H7_ADC_CALFACT2 - bit fields */
++#define STM32H7_LINCALFACT_SHIFT 0
++#define STM32H7_LINCALFACT_MASK GENMASK(29, 0)
++
++/* STM32H7_ADC_CSR - bit fields */
++#define STM32H7_EOC_SLV BIT(18)
++#define STM32H7_EOC_MST BIT(2)
++
++/* STM32H7_ADC_CCR - bit fields */
++#define STM32H7_PRESC_SHIFT 18
++#define STM32H7_PRESC_MASK GENMASK(21, 18)
++#define STM32H7_CKMODE_SHIFT 16
++#define STM32H7_CKMODE_MASK GENMASK(17, 16)
++
+ /**
+ * struct stm32_adc_common - stm32 ADC driver common data (for all instances)
+ * @base: control registers base cpu addr
+diff --git a/drivers/iio/adc/stm32-adc.c b/drivers/iio/adc/stm32-adc.c
+index 205e1699f954..b22be473cb03 100644
+--- a/drivers/iio/adc/stm32-adc.c
++++ b/drivers/iio/adc/stm32-adc.c
+@@ -28,115 +28,6 @@
+
+ #include "stm32-adc-core.h"
+
+-/* STM32F4 - Registers for each ADC instance */
+-#define STM32F4_ADC_SR 0x00
+-#define STM32F4_ADC_CR1 0x04
+-#define STM32F4_ADC_CR2 0x08
+-#define STM32F4_ADC_SMPR1 0x0C
+-#define STM32F4_ADC_SMPR2 0x10
+-#define STM32F4_ADC_HTR 0x24
+-#define STM32F4_ADC_LTR 0x28
+-#define STM32F4_ADC_SQR1 0x2C
+-#define STM32F4_ADC_SQR2 0x30
+-#define STM32F4_ADC_SQR3 0x34
+-#define STM32F4_ADC_JSQR 0x38
+-#define STM32F4_ADC_JDR1 0x3C
+-#define STM32F4_ADC_JDR2 0x40
+-#define STM32F4_ADC_JDR3 0x44
+-#define STM32F4_ADC_JDR4 0x48
+-#define STM32F4_ADC_DR 0x4C
+-
+-/* STM32F4_ADC_SR - bit fields */
+-#define STM32F4_STRT BIT(4)
+-#define STM32F4_EOC BIT(1)
+-
+-/* STM32F4_ADC_CR1 - bit fields */
+-#define STM32F4_RES_SHIFT 24
+-#define STM32F4_RES_MASK GENMASK(25, 24)
+-#define STM32F4_SCAN BIT(8)
+-#define STM32F4_EOCIE BIT(5)
+-
+-/* STM32F4_ADC_CR2 - bit fields */
+-#define STM32F4_SWSTART BIT(30)
+-#define STM32F4_EXTEN_SHIFT 28
+-#define STM32F4_EXTEN_MASK GENMASK(29, 28)
+-#define STM32F4_EXTSEL_SHIFT 24
+-#define STM32F4_EXTSEL_MASK GENMASK(27, 24)
+-#define STM32F4_EOCS BIT(10)
+-#define STM32F4_DDS BIT(9)
+-#define STM32F4_DMA BIT(8)
+-#define STM32F4_ADON BIT(0)
+-
+-/* STM32H7 - Registers for each ADC instance */
+-#define STM32H7_ADC_ISR 0x00
+-#define STM32H7_ADC_IER 0x04
+-#define STM32H7_ADC_CR 0x08
+-#define STM32H7_ADC_CFGR 0x0C
+-#define STM32H7_ADC_SMPR1 0x14
+-#define STM32H7_ADC_SMPR2 0x18
+-#define STM32H7_ADC_PCSEL 0x1C
+-#define STM32H7_ADC_SQR1 0x30
+-#define STM32H7_ADC_SQR2 0x34
+-#define STM32H7_ADC_SQR3 0x38
+-#define STM32H7_ADC_SQR4 0x3C
+-#define STM32H7_ADC_DR 0x40
+-#define STM32H7_ADC_DIFSEL 0xC0
+-#define STM32H7_ADC_CALFACT 0xC4
+-#define STM32H7_ADC_CALFACT2 0xC8
+-
+-/* STM32H7_ADC_ISR - bit fields */
+-#define STM32MP1_VREGREADY BIT(12)
+-#define STM32H7_EOC BIT(2)
+-#define STM32H7_ADRDY BIT(0)
+-
+-/* STM32H7_ADC_IER - bit fields */
+-#define STM32H7_EOCIE STM32H7_EOC
+-
+-/* STM32H7_ADC_CR - bit fields */
+-#define STM32H7_ADCAL BIT(31)
+-#define STM32H7_ADCALDIF BIT(30)
+-#define STM32H7_DEEPPWD BIT(29)
+-#define STM32H7_ADVREGEN BIT(28)
+-#define STM32H7_LINCALRDYW6 BIT(27)
+-#define STM32H7_LINCALRDYW5 BIT(26)
+-#define STM32H7_LINCALRDYW4 BIT(25)
+-#define STM32H7_LINCALRDYW3 BIT(24)
+-#define STM32H7_LINCALRDYW2 BIT(23)
+-#define STM32H7_LINCALRDYW1 BIT(22)
+-#define STM32H7_ADCALLIN BIT(16)
+-#define STM32H7_BOOST BIT(8)
+-#define STM32H7_ADSTP BIT(4)
+-#define STM32H7_ADSTART BIT(2)
+-#define STM32H7_ADDIS BIT(1)
+-#define STM32H7_ADEN BIT(0)
+-
+-/* STM32H7_ADC_CFGR bit fields */
+-#define STM32H7_EXTEN_SHIFT 10
+-#define STM32H7_EXTEN_MASK GENMASK(11, 10)
+-#define STM32H7_EXTSEL_SHIFT 5
+-#define STM32H7_EXTSEL_MASK GENMASK(9, 5)
+-#define STM32H7_RES_SHIFT 2
+-#define STM32H7_RES_MASK GENMASK(4, 2)
+-#define STM32H7_DMNGT_SHIFT 0
+-#define STM32H7_DMNGT_MASK GENMASK(1, 0)
+-
+-enum stm32h7_adc_dmngt {
+- STM32H7_DMNGT_DR_ONLY, /* Regular data in DR only */
+- STM32H7_DMNGT_DMA_ONESHOT, /* DMA one shot mode */
+- STM32H7_DMNGT_DFSDM, /* DFSDM mode */
+- STM32H7_DMNGT_DMA_CIRC, /* DMA circular mode */
+-};
+-
+-/* STM32H7_ADC_CALFACT - bit fields */
+-#define STM32H7_CALFACT_D_SHIFT 16
+-#define STM32H7_CALFACT_D_MASK GENMASK(26, 16)
+-#define STM32H7_CALFACT_S_SHIFT 0
+-#define STM32H7_CALFACT_S_MASK GENMASK(10, 0)
+-
+-/* STM32H7_ADC_CALFACT2 - bit fields */
+-#define STM32H7_LINCALFACT_SHIFT 0
+-#define STM32H7_LINCALFACT_MASK GENMASK(29, 0)
+-
+ /* Number of linear calibration shadow registers / LINCALRDYW control bits */
+ #define STM32H7_LINCALFACT_NUM 6
+
+diff --git a/drivers/iio/light/opt3001.c b/drivers/iio/light/opt3001.c
+index e666879007d2..92004a2563ea 100644
+--- a/drivers/iio/light/opt3001.c
++++ b/drivers/iio/light/opt3001.c
+@@ -686,6 +686,7 @@ static irqreturn_t opt3001_irq(int irq, void *_iio)
+ struct iio_dev *iio = _iio;
+ struct opt3001 *opt = iio_priv(iio);
+ int ret;
++ bool wake_result_ready_queue = false;
+
+ if (!opt->ok_to_ignore_lock)
+ mutex_lock(&opt->lock);
+@@ -720,13 +721,16 @@ static irqreturn_t opt3001_irq(int irq, void *_iio)
+ }
+ opt->result = ret;
+ opt->result_ready = true;
+- wake_up(&opt->result_ready_queue);
++ wake_result_ready_queue = true;
+ }
+
+ out:
+ if (!opt->ok_to_ignore_lock)
+ mutex_unlock(&opt->lock);
+
++ if (wake_result_ready_queue)
++ wake_up(&opt->result_ready_queue);
++
+ return IRQ_HANDLED;
+ }
+
+diff --git a/drivers/iio/light/vcnl4000.c b/drivers/iio/light/vcnl4000.c
+index 51421ac32517..16dacea9eadf 100644
+--- a/drivers/iio/light/vcnl4000.c
++++ b/drivers/iio/light/vcnl4000.c
+@@ -398,19 +398,23 @@ static int vcnl4000_probe(struct i2c_client *client,
+ static const struct of_device_id vcnl_4000_of_match[] = {
+ {
+ .compatible = "vishay,vcnl4000",
+- .data = "VCNL4000",
++ .data = (void *)VCNL4000,
+ },
+ {
+ .compatible = "vishay,vcnl4010",
+- .data = "VCNL4010",
++ .data = (void *)VCNL4010,
+ },
+ {
+- .compatible = "vishay,vcnl4010",
+- .data = "VCNL4020",
++ .compatible = "vishay,vcnl4020",
++ .data = (void *)VCNL4010,
++ },
++ {
++ .compatible = "vishay,vcnl4040",
++ .data = (void *)VCNL4040,
+ },
+ {
+ .compatible = "vishay,vcnl4200",
+- .data = "VCNL4200",
++ .data = (void *)VCNL4200,
+ },
+ {},
+ };
+diff --git a/drivers/infiniband/core/security.c b/drivers/infiniband/core/security.c
+index 1ab423b19f77..6eb6d2717ca5 100644
+--- a/drivers/infiniband/core/security.c
++++ b/drivers/infiniband/core/security.c
+@@ -426,7 +426,7 @@ int ib_create_qp_security(struct ib_qp *qp, struct ib_device *dev)
+ int ret;
+
+ rdma_for_each_port (dev, i) {
+- is_ib = rdma_protocol_ib(dev, i++);
++ is_ib = rdma_protocol_ib(dev, i);
+ if (is_ib)
+ break;
+ }
+diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c
+index 6cac0c88cf39..36cdfbdbd325 100644
+--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c
++++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c
+@@ -230,8 +230,6 @@ static void pvrdma_free_srq(struct pvrdma_dev *dev, struct pvrdma_srq *srq)
+
+ pvrdma_page_dir_cleanup(dev, &srq->pdir);
+
+- kfree(srq);
+-
+ atomic_dec(&dev->num_srqs);
+ }
+
+diff --git a/drivers/media/usb/stkwebcam/stk-webcam.c b/drivers/media/usb/stkwebcam/stk-webcam.c
+index be8041e3e6b8..b0cfa4c1f8cc 100644
+--- a/drivers/media/usb/stkwebcam/stk-webcam.c
++++ b/drivers/media/usb/stkwebcam/stk-webcam.c
+@@ -643,8 +643,7 @@ static int v4l_stk_release(struct file *fp)
+ dev->owner = NULL;
+ }
+
+- if (is_present(dev))
+- usb_autopm_put_interface(dev->interface);
++ usb_autopm_put_interface(dev->interface);
+ mutex_unlock(&dev->lock);
+ return v4l2_fh_release(fp);
+ }
+diff --git a/drivers/misc/mei/bus-fixup.c b/drivers/misc/mei/bus-fixup.c
+index 32e9b1aed2ca..0a2b99e1af45 100644
+--- a/drivers/misc/mei/bus-fixup.c
++++ b/drivers/misc/mei/bus-fixup.c
+@@ -218,13 +218,21 @@ static void mei_mkhi_fix(struct mei_cl_device *cldev)
+ {
+ int ret;
+
++ /* No need to enable the client if nothing is needed from it */
++ if (!cldev->bus->fw_f_fw_ver_supported &&
++ !cldev->bus->hbm_f_os_supported)
++ return;
++
+ ret = mei_cldev_enable(cldev);
+ if (ret)
+ return;
+
+- ret = mei_fwver(cldev);
+- if (ret < 0)
+- dev_err(&cldev->dev, "FW version command failed %d\n", ret);
++ if (cldev->bus->fw_f_fw_ver_supported) {
++ ret = mei_fwver(cldev);
++ if (ret < 0)
++ dev_err(&cldev->dev, "FW version command failed %d\n",
++ ret);
++ }
+
+ if (cldev->bus->hbm_f_os_supported) {
+ ret = mei_osver(cldev);
+diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
+index 77f7dff7098d..c09f8bb49495 100644
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -79,6 +79,9 @@
+ #define MEI_DEV_ID_CNP_H 0xA360 /* Cannon Point H */
+ #define MEI_DEV_ID_CNP_H_4 0xA364 /* Cannon Point H 4 (iTouch) */
+
++#define MEI_DEV_ID_CMP_LP 0x02e0 /* Comet Point LP */
++#define MEI_DEV_ID_CMP_LP_3 0x02e4 /* Comet Point LP 3 (iTouch) */
++
+ #define MEI_DEV_ID_ICP_LP 0x34E0 /* Ice Lake Point LP */
+
+ #define MEI_DEV_ID_TGP_LP 0xA0E0 /* Tiger Lake Point LP */
+diff --git a/drivers/misc/mei/hw-me.c b/drivers/misc/mei/hw-me.c
+index abe1b1f4362f..c4f6991d3028 100644
+--- a/drivers/misc/mei/hw-me.c
++++ b/drivers/misc/mei/hw-me.c
+@@ -1355,6 +1355,8 @@ static bool mei_me_fw_type_sps(struct pci_dev *pdev)
+ #define MEI_CFG_FW_SPS \
+ .quirk_probe = mei_me_fw_type_sps
+
++#define MEI_CFG_FW_VER_SUPP \
++ .fw_ver_supported = 1
+
+ #define MEI_CFG_ICH_HFS \
+ .fw_status.count = 0
+@@ -1392,31 +1394,41 @@ static const struct mei_cfg mei_me_ich10_cfg = {
+ MEI_CFG_ICH10_HFS,
+ };
+
+-/* PCH devices */
+-static const struct mei_cfg mei_me_pch_cfg = {
++/* PCH6 devices */
++static const struct mei_cfg mei_me_pch6_cfg = {
+ MEI_CFG_PCH_HFS,
+ };
+
++/* PCH7 devices */
++static const struct mei_cfg mei_me_pch7_cfg = {
++ MEI_CFG_PCH_HFS,
++ MEI_CFG_FW_VER_SUPP,
++};
++
+ /* PCH Cougar Point and Patsburg with quirk for Node Manager exclusion */
+ static const struct mei_cfg mei_me_pch_cpt_pbg_cfg = {
+ MEI_CFG_PCH_HFS,
++ MEI_CFG_FW_VER_SUPP,
+ MEI_CFG_FW_NM,
+ };
+
+ /* PCH8 Lynx Point and newer devices */
+ static const struct mei_cfg mei_me_pch8_cfg = {
+ MEI_CFG_PCH8_HFS,
++ MEI_CFG_FW_VER_SUPP,
+ };
+
+ /* PCH8 Lynx Point with quirk for SPS Firmware exclusion */
+ static const struct mei_cfg mei_me_pch8_sps_cfg = {
+ MEI_CFG_PCH8_HFS,
++ MEI_CFG_FW_VER_SUPP,
+ MEI_CFG_FW_SPS,
+ };
+
+ /* Cannon Lake and newer devices */
+ static const struct mei_cfg mei_me_pch12_cfg = {
+ MEI_CFG_PCH8_HFS,
++ MEI_CFG_FW_VER_SUPP,
+ MEI_CFG_DMA_128,
+ };
+
+@@ -1428,7 +1440,8 @@ static const struct mei_cfg *const mei_cfg_list[] = {
+ [MEI_ME_UNDEF_CFG] = NULL,
+ [MEI_ME_ICH_CFG] = &mei_me_ich_cfg,
+ [MEI_ME_ICH10_CFG] = &mei_me_ich10_cfg,
+- [MEI_ME_PCH_CFG] = &mei_me_pch_cfg,
++ [MEI_ME_PCH6_CFG] = &mei_me_pch6_cfg,
++ [MEI_ME_PCH7_CFG] = &mei_me_pch7_cfg,
+ [MEI_ME_PCH_CPT_PBG_CFG] = &mei_me_pch_cpt_pbg_cfg,
+ [MEI_ME_PCH8_CFG] = &mei_me_pch8_cfg,
+ [MEI_ME_PCH8_SPS_CFG] = &mei_me_pch8_sps_cfg,
+@@ -1473,6 +1486,8 @@ struct mei_device *mei_me_dev_init(struct pci_dev *pdev,
+ mei_device_init(dev, &pdev->dev, &mei_me_hw_ops);
+ hw->cfg = cfg;
+
++ dev->fw_f_fw_ver_supported = cfg->fw_ver_supported;
++
+ return dev;
+ }
+
+diff --git a/drivers/misc/mei/hw-me.h b/drivers/misc/mei/hw-me.h
+index 08c84a0de4a8..1d8794828cbc 100644
+--- a/drivers/misc/mei/hw-me.h
++++ b/drivers/misc/mei/hw-me.h
+@@ -20,11 +20,13 @@
+ * @fw_status: FW status
+ * @quirk_probe: device exclusion quirk
+ * @dma_size: device DMA buffers size
++ * @fw_ver_supported: is fw version retrievable from FW
+ */
+ struct mei_cfg {
+ const struct mei_fw_status fw_status;
+ bool (*quirk_probe)(struct pci_dev *pdev);
+ size_t dma_size[DMA_DSCR_NUM];
++ u32 fw_ver_supported:1;
+ };
+
+
+@@ -62,7 +64,8 @@ struct mei_me_hw {
+ * @MEI_ME_UNDEF_CFG: Lower sentinel.
+ * @MEI_ME_ICH_CFG: I/O Controller Hub legacy devices.
+ * @MEI_ME_ICH10_CFG: I/O Controller Hub platforms Gen10
+- * @MEI_ME_PCH_CFG: Platform Controller Hub platforms (Up to Gen8).
++ * @MEI_ME_PCH6_CFG: Platform Controller Hub platforms (Gen6).
++ * @MEI_ME_PCH7_CFG: Platform Controller Hub platforms (Gen7).
+ * @MEI_ME_PCH_CPT_PBG_CFG:Platform Controller Hub workstations
+ * with quirk for Node Manager exclusion.
+ * @MEI_ME_PCH8_CFG: Platform Controller Hub Gen8 and newer
+@@ -77,7 +80,8 @@ enum mei_cfg_idx {
+ MEI_ME_UNDEF_CFG,
+ MEI_ME_ICH_CFG,
+ MEI_ME_ICH10_CFG,
+- MEI_ME_PCH_CFG,
++ MEI_ME_PCH6_CFG,
++ MEI_ME_PCH7_CFG,
+ MEI_ME_PCH_CPT_PBG_CFG,
+ MEI_ME_PCH8_CFG,
+ MEI_ME_PCH8_SPS_CFG,
+diff --git a/drivers/misc/mei/mei_dev.h b/drivers/misc/mei/mei_dev.h
+index f71a023aed3c..0f2141178299 100644
+--- a/drivers/misc/mei/mei_dev.h
++++ b/drivers/misc/mei/mei_dev.h
+@@ -426,6 +426,8 @@ struct mei_fw_version {
+ *
+ * @fw_ver : FW versions
+ *
++ * @fw_f_fw_ver_supported : fw feature: fw version supported
++ *
+ * @me_clients_rwsem: rw lock over me_clients list
+ * @me_clients : list of FW clients
+ * @me_clients_map : FW clients bit map
+@@ -506,6 +508,8 @@ struct mei_device {
+
+ struct mei_fw_version fw_ver[MEI_MAX_FW_VER_BLOCKS];
+
++ unsigned int fw_f_fw_ver_supported:1;
++
+ struct rw_semaphore me_clients_rwsem;
+ struct list_head me_clients;
+ DECLARE_BITMAP(me_clients_map, MEI_CLIENTS_MAX);
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index 541538eff8b1..3a2eadcd0378 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -61,13 +61,13 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ {MEI_PCI_DEVICE(MEI_DEV_ID_ICH10_3, MEI_ME_ICH10_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_ICH10_4, MEI_ME_ICH10_CFG)},
+
+- {MEI_PCI_DEVICE(MEI_DEV_ID_IBXPK_1, MEI_ME_PCH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_IBXPK_2, MEI_ME_PCH_CFG)},
++ {MEI_PCI_DEVICE(MEI_DEV_ID_IBXPK_1, MEI_ME_PCH6_CFG)},
++ {MEI_PCI_DEVICE(MEI_DEV_ID_IBXPK_2, MEI_ME_PCH6_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_CPT_1, MEI_ME_PCH_CPT_PBG_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_PBG_1, MEI_ME_PCH_CPT_PBG_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_PPT_1, MEI_ME_PCH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_PPT_2, MEI_ME_PCH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_PPT_3, MEI_ME_PCH_CFG)},
++ {MEI_PCI_DEVICE(MEI_DEV_ID_PPT_1, MEI_ME_PCH7_CFG)},
++ {MEI_PCI_DEVICE(MEI_DEV_ID_PPT_2, MEI_ME_PCH7_CFG)},
++ {MEI_PCI_DEVICE(MEI_DEV_ID_PPT_3, MEI_ME_PCH7_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_LPT_H, MEI_ME_PCH8_SPS_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_LPT_W, MEI_ME_PCH8_SPS_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_LPT_LP, MEI_ME_PCH8_CFG)},
+@@ -96,6 +96,9 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ {MEI_PCI_DEVICE(MEI_DEV_ID_CNP_H, MEI_ME_PCH12_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_CNP_H_4, MEI_ME_PCH8_CFG)},
+
++ {MEI_PCI_DEVICE(MEI_DEV_ID_CMP_LP, MEI_ME_PCH12_CFG)},
++ {MEI_PCI_DEVICE(MEI_DEV_ID_CMP_LP_3, MEI_ME_PCH8_CFG)},
++
+ {MEI_PCI_DEVICE(MEI_DEV_ID_ICP_LP, MEI_ME_PCH12_CFG)},
+
+ {MEI_PCI_DEVICE(MEI_DEV_ID_TGP_LP, MEI_ME_PCH12_CFG)},
+diff --git a/drivers/mtd/nand/raw/au1550nd.c b/drivers/mtd/nand/raw/au1550nd.c
+index 97a97a9ccc36..e10b76089048 100644
+--- a/drivers/mtd/nand/raw/au1550nd.c
++++ b/drivers/mtd/nand/raw/au1550nd.c
+@@ -134,16 +134,15 @@ static void au_write_buf16(struct nand_chip *this, const u_char *buf, int len)
+
+ /**
+ * au_read_buf16 - read chip data into buffer
+- * @mtd: MTD device structure
++ * @this: NAND chip object
+ * @buf: buffer to store date
+ * @len: number of bytes to read
+ *
+ * read function for 16bit buswidth
+ */
+-static void au_read_buf16(struct mtd_info *mtd, u_char *buf, int len)
++static void au_read_buf16(struct nand_chip *this, u_char *buf, int len)
+ {
+ int i;
+- struct nand_chip *this = mtd_to_nand(mtd);
+ u16 *p = (u16 *) buf;
+ len >>= 1;
+
+diff --git a/drivers/staging/fbtft/Kconfig b/drivers/staging/fbtft/Kconfig
+index 8ec524a95ec8..4e5d860fd788 100644
+--- a/drivers/staging/fbtft/Kconfig
++++ b/drivers/staging/fbtft/Kconfig
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ menuconfig FB_TFT
+ tristate "Support for small TFT LCD display modules"
+- depends on FB && SPI
++ depends on FB && SPI && OF
+ depends on GPIOLIB || COMPILE_TEST
+ select FB_SYS_FILLRECT
+ select FB_SYS_COPYAREA
+diff --git a/drivers/staging/fbtft/fbtft-core.c b/drivers/staging/fbtft/fbtft-core.c
+index cf5700a2ea66..a0a67aa517f0 100644
+--- a/drivers/staging/fbtft/fbtft-core.c
++++ b/drivers/staging/fbtft/fbtft-core.c
+@@ -714,7 +714,7 @@ struct fb_info *fbtft_framebuffer_alloc(struct fbtft_display *display,
+ if (par->gamma.curves && gamma) {
+ if (fbtft_gamma_parse_str(par, par->gamma.curves, gamma,
+ strlen(gamma)))
+- goto alloc_fail;
++ goto release_framebuf;
+ }
+
+ /* Transmit buffer */
+@@ -731,7 +731,7 @@ struct fb_info *fbtft_framebuffer_alloc(struct fbtft_display *display,
+ if (txbuflen > 0) {
+ txbuf = devm_kzalloc(par->info->device, txbuflen, GFP_KERNEL);
+ if (!txbuf)
+- goto alloc_fail;
++ goto release_framebuf;
+ par->txbuf.buf = txbuf;
+ par->txbuf.len = txbuflen;
+ }
+@@ -753,6 +753,9 @@ struct fb_info *fbtft_framebuffer_alloc(struct fbtft_display *display,
+
+ return info;
+
++release_framebuf:
++ framebuffer_release(info);
++
+ alloc_fail:
+ vfree(vmem);
+
+diff --git a/drivers/staging/rtl8188eu/hal/hal8188e_rate_adaptive.c b/drivers/staging/rtl8188eu/hal/hal8188e_rate_adaptive.c
+index 9ddd51685063..5792f491b59a 100644
+--- a/drivers/staging/rtl8188eu/hal/hal8188e_rate_adaptive.c
++++ b/drivers/staging/rtl8188eu/hal/hal8188e_rate_adaptive.c
+@@ -409,7 +409,7 @@ static int odm_ARFBRefresh_8188E(struct odm_dm_struct *dm_odm, struct odm_ra_inf
+ pRaInfo->PTModeSS = 3;
+ else if (pRaInfo->HighestRate > 0x0b)
+ pRaInfo->PTModeSS = 2;
+- else if (pRaInfo->HighestRate > 0x0b)
++ else if (pRaInfo->HighestRate > 0x03)
+ pRaInfo->PTModeSS = 1;
+ else
+ pRaInfo->PTModeSS = 0;
+diff --git a/drivers/staging/vc04_services/bcm2835-audio/bcm2835-pcm.c b/drivers/staging/vc04_services/bcm2835-audio/bcm2835-pcm.c
+index bc1eaa3a0773..826016c3431a 100644
+--- a/drivers/staging/vc04_services/bcm2835-audio/bcm2835-pcm.c
++++ b/drivers/staging/vc04_services/bcm2835-audio/bcm2835-pcm.c
+@@ -12,7 +12,7 @@
+ static const struct snd_pcm_hardware snd_bcm2835_playback_hw = {
+ .info = (SNDRV_PCM_INFO_INTERLEAVED | SNDRV_PCM_INFO_BLOCK_TRANSFER |
+ SNDRV_PCM_INFO_MMAP | SNDRV_PCM_INFO_MMAP_VALID |
+- SNDRV_PCM_INFO_DRAIN_TRIGGER | SNDRV_PCM_INFO_SYNC_APPLPTR),
++ SNDRV_PCM_INFO_SYNC_APPLPTR),
+ .formats = SNDRV_PCM_FMTBIT_U8 | SNDRV_PCM_FMTBIT_S16_LE,
+ .rates = SNDRV_PCM_RATE_CONTINUOUS | SNDRV_PCM_RATE_8000_48000,
+ .rate_min = 8000,
+@@ -29,7 +29,7 @@ static const struct snd_pcm_hardware snd_bcm2835_playback_hw = {
+ static const struct snd_pcm_hardware snd_bcm2835_playback_spdif_hw = {
+ .info = (SNDRV_PCM_INFO_INTERLEAVED | SNDRV_PCM_INFO_BLOCK_TRANSFER |
+ SNDRV_PCM_INFO_MMAP | SNDRV_PCM_INFO_MMAP_VALID |
+- SNDRV_PCM_INFO_DRAIN_TRIGGER | SNDRV_PCM_INFO_SYNC_APPLPTR),
++ SNDRV_PCM_INFO_SYNC_APPLPTR),
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ .rates = SNDRV_PCM_RATE_CONTINUOUS | SNDRV_PCM_RATE_44100 |
+ SNDRV_PCM_RATE_48000,
+diff --git a/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c b/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c
+index 23fba01107b9..c6f9cf1913d2 100644
+--- a/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c
++++ b/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c
+@@ -289,6 +289,7 @@ int bcm2835_audio_stop(struct bcm2835_alsa_stream *alsa_stream)
+ VC_AUDIO_MSG_TYPE_STOP, false);
+ }
+
++/* FIXME: this doesn't seem working as expected for "draining" */
+ int bcm2835_audio_drain(struct bcm2835_alsa_stream *alsa_stream)
+ {
+ struct vc_audio_msg m = {
+diff --git a/drivers/staging/vt6655/device_main.c b/drivers/staging/vt6655/device_main.c
+index c6bb4aaf9bd0..082302944c37 100644
+--- a/drivers/staging/vt6655/device_main.c
++++ b/drivers/staging/vt6655/device_main.c
+@@ -1748,8 +1748,10 @@ vt6655_probe(struct pci_dev *pcid, const struct pci_device_id *ent)
+
+ priv->hw->max_signal = 100;
+
+- if (vnt_init(priv))
++ if (vnt_init(priv)) {
++ device_free_info(priv);
+ return -ENODEV;
++ }
+
+ device_print_info(priv);
+ pci_set_drvdata(pcid, priv);
+diff --git a/drivers/tty/serial/uartlite.c b/drivers/tty/serial/uartlite.c
+index b8b912b5a8b9..06e79c11141d 100644
+--- a/drivers/tty/serial/uartlite.c
++++ b/drivers/tty/serial/uartlite.c
+@@ -897,7 +897,8 @@ static int __init ulite_init(void)
+ static void __exit ulite_exit(void)
+ {
+ platform_driver_unregister(&ulite_platform_driver);
+- uart_unregister_driver(&ulite_uart_driver);
++ if (ulite_uart_driver.state)
++ uart_unregister_driver(&ulite_uart_driver);
+ }
+
+ module_init(ulite_init);
+diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c
+index f145946f659b..92df0c4f1c7a 100644
+--- a/drivers/tty/serial/xilinx_uartps.c
++++ b/drivers/tty/serial/xilinx_uartps.c
+@@ -1550,7 +1550,6 @@ static int cdns_uart_probe(struct platform_device *pdev)
+ goto err_out_id;
+ }
+
+- uartps_major = cdns_uart_uart_driver->tty_driver->major;
+ cdns_uart_data->cdns_uart_driver = cdns_uart_uart_driver;
+
+ /*
+@@ -1680,6 +1679,7 @@ static int cdns_uart_probe(struct platform_device *pdev)
+ console_port = NULL;
+ #endif
+
++ uartps_major = cdns_uart_uart_driver->tty_driver->major;
+ cdns_uart_data->cts_override = of_property_read_bool(pdev->dev.of_node,
+ "cts-override");
+ return 0;
+@@ -1741,6 +1741,12 @@ static int cdns_uart_remove(struct platform_device *pdev)
+ console_port = NULL;
+ #endif
+
++ /* If this is last instance major number should be initialized */
++ mutex_lock(&bitmap_lock);
++ if (bitmap_empty(bitmap, MAX_UART_INSTANCES))
++ uartps_major = 0;
++ mutex_unlock(&bitmap_lock);
++
+ uart_unregister_driver(cdns_uart_data->cdns_uart_driver);
+ return rc;
+ }
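
The xilinx_uartps hunks defer caching the dynamically allocated major until
probe has fully succeeded, and reset it under bitmap_lock once the last
instance is gone so the next probe requests a fresh one. A minimal sketch of
that guarded-reset pattern, with illustrative names (instance_map, map_lock
and MAX_INSTANCES stand in for the driver's bitmap, bitmap_lock and
MAX_UART_INSTANCES):

#include <linux/bitmap.h>
#include <linux/mutex.h>

#define MAX_INSTANCES	16

static DECLARE_BITMAP(instance_map, MAX_INSTANCES);
static DEFINE_MUTEX(map_lock);
static int shared_major;	/* 0 means "ask for a dynamic major" */

static void put_instance(int id)
{
	mutex_lock(&map_lock);
	clear_bit(id, instance_map);
	if (bitmap_empty(instance_map, MAX_INSTANCES))
		shared_major = 0;	/* last instance: forget the major */
	mutex_unlock(&map_lock);
}
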
+diff --git a/drivers/usb/class/usblp.c b/drivers/usb/class/usblp.c
+index 407a7a6198a2..502e9bf1746f 100644
+--- a/drivers/usb/class/usblp.c
++++ b/drivers/usb/class/usblp.c
+@@ -461,10 +461,12 @@ static int usblp_release(struct inode *inode, struct file *file)
+
+ mutex_lock(&usblp_mutex);
+ usblp->used = 0;
+- if (usblp->present) {
++ if (usblp->present)
+ usblp_unlink_urbs(usblp);
+- usb_autopm_put_interface(usblp->intf);
+- } else /* finish cleanup from disconnect */
++
++ usb_autopm_put_interface(usblp->intf);
++
++ if (!usblp->present) /* finish cleanup from disconnect */
+ usblp_cleanup(usblp);
+ mutex_unlock(&usblp_mutex);
+ return 0;
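
The usblp change restores runtime-PM reference balance: the matching
usb_autopm_get_interface() at open succeeds whether or not the device later
disappears, so release must drop that reference unconditionally, not only
while the device is still present. A hedged sketch of the rule (function
names are illustrative):

#include <linux/usb.h>

static int my_open(struct usb_interface *intf)
{
	/* takes one runtime-PM reference on success */
	return usb_autopm_get_interface(intf);
}

static void my_release(struct usb_interface *intf)
{
	/* always drop exactly the one reference taken at open */
	usb_autopm_put_interface(intf);
}
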
+diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c
+index 8414fac74493..3d499d93c083 100644
+--- a/drivers/usb/gadget/udc/dummy_hcd.c
++++ b/drivers/usb/gadget/udc/dummy_hcd.c
+@@ -48,6 +48,7 @@
+ #define DRIVER_VERSION "02 May 2005"
+
+ #define POWER_BUDGET 500 /* in mA; use 8 for low-power port testing */
++#define POWER_BUDGET_3 900 /* in mA */
+
+ static const char driver_name[] = "dummy_hcd";
+ static const char driver_desc[] = "USB Host+Gadget Emulator";
+@@ -2432,7 +2433,7 @@ static int dummy_start_ss(struct dummy_hcd *dum_hcd)
+ dum_hcd->rh_state = DUMMY_RH_RUNNING;
+ dum_hcd->stream_en_ep = 0;
+ INIT_LIST_HEAD(&dum_hcd->urbp_list);
+- dummy_hcd_to_hcd(dum_hcd)->power_budget = POWER_BUDGET;
++ dummy_hcd_to_hcd(dum_hcd)->power_budget = POWER_BUDGET_3;
+ dummy_hcd_to_hcd(dum_hcd)->state = HC_STATE_RUNNING;
+ dummy_hcd_to_hcd(dum_hcd)->uses_new_polling = 1;
+ #ifdef CONFIG_USB_OTG
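
Background for the POWER_BUDGET_3 value: a SuperSpeed port may source up to
900 mA, against 500 mA for USB 2.0, so the emulated SuperSpeed root hub needs
its own budget. An illustrative helper (not part of the driver) choosing the
budget by link speed:

#include <linux/usb/ch9.h>

static unsigned int power_budget_mA(enum usb_device_speed speed)
{
	/* USB 3.x allows 900 mA per SuperSpeed port, USB 2.0 allows 500 mA */
	return speed >= USB_SPEED_SUPER ? 900 : 500;
}
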
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 9741cdeea9d7..85ceb43e3405 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -3202,10 +3202,10 @@ static int xhci_align_td(struct xhci_hcd *xhci, struct urb *urb, u32 enqd_len,
+ if (usb_urb_dir_out(urb)) {
+ len = sg_pcopy_to_buffer(urb->sg, urb->num_sgs,
+ seg->bounce_buf, new_buff_len, enqd_len);
+- if (len != seg->bounce_len)
++ if (len != new_buff_len)
+ xhci_warn(xhci,
+ "WARN Wrong bounce buffer write length: %zu != %d\n",
+- len, seg->bounce_len);
++ len, new_buff_len);
+ seg->bounce_dma = dma_map_single(dev, seg->bounce_buf,
+ max_pkt, DMA_TO_DEVICE);
+ } else {
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 03d1e552769b..ee9d2e0fc53a 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -1032,7 +1032,7 @@ int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup)
+ writel(command, &xhci->op_regs->command);
+ xhci->broken_suspend = 0;
+ if (xhci_handshake(&xhci->op_regs->status,
+- STS_SAVE, 0, 10 * 1000)) {
++ STS_SAVE, 0, 20 * 1000)) {
+ /*
+ * AMD SNPS xHC 3.0 occasionally does not clear the
+ * SSS bit of USBSTS and when driver tries to poll
+@@ -1108,6 +1108,18 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+ hibernated = true;
+
+ if (!hibernated) {
++ /*
++ * Some controllers might lose power during suspend, so wait
++ * for the controller-not-ready (CNR) bit to clear, just as in xHC init.
++ */
++ retval = xhci_handshake(&xhci->op_regs->status,
++ STS_CNR, 0, 10 * 1000 * 1000);
++ if (retval) {
++ xhci_warn(xhci, "Controller not ready at resume %d\n",
++ retval);
++ spin_unlock_irq(&xhci->lock);
++ return retval;
++ }
+ /* step 1: restore register */
+ xhci_restore_registers(xhci);
+ /* step 2: initialize command ring buffer */
+@@ -3083,6 +3095,7 @@ static void xhci_endpoint_reset(struct usb_hcd *hcd,
+ unsigned int ep_index;
+ unsigned long flags;
+ u32 ep_flag;
++ int err;
+
+ xhci = hcd_to_xhci(hcd);
+ if (!host_ep->hcpriv)
+@@ -3142,7 +3155,17 @@ static void xhci_endpoint_reset(struct usb_hcd *hcd,
+ xhci_free_command(xhci, cfg_cmd);
+ goto cleanup;
+ }
+- xhci_queue_stop_endpoint(xhci, stop_cmd, udev->slot_id, ep_index, 0);
++
++ err = xhci_queue_stop_endpoint(xhci, stop_cmd, udev->slot_id,
++ ep_index, 0);
++ if (err < 0) {
++ spin_unlock_irqrestore(&xhci->lock, flags);
++ xhci_free_command(xhci, cfg_cmd);
++ xhci_dbg(xhci, "%s: Failed to queue stop ep command, %d ",
++ __func__, err);
++ goto cleanup;
++ }
++
+ xhci_ring_cmd_db(xhci);
+ spin_unlock_irqrestore(&xhci->lock, flags);
+
+@@ -3156,8 +3179,16 @@ static void xhci_endpoint_reset(struct usb_hcd *hcd,
+ ctrl_ctx, ep_flag, ep_flag);
+ xhci_endpoint_copy(xhci, cfg_cmd->in_ctx, vdev->out_ctx, ep_index);
+
+- xhci_queue_configure_endpoint(xhci, cfg_cmd, cfg_cmd->in_ctx->dma,
++ err = xhci_queue_configure_endpoint(xhci, cfg_cmd, cfg_cmd->in_ctx->dma,
+ udev->slot_id, false);
++ if (err < 0) {
++ spin_unlock_irqrestore(&xhci->lock, flags);
++ xhci_free_command(xhci, cfg_cmd);
++ xhci_dbg(xhci, "%s: Failed to queue config ep command, %d ",
++ __func__, err);
++ goto cleanup;
++ }
++
+ xhci_ring_cmd_db(xhci);
+ spin_unlock_irqrestore(&xhci->lock, flags);
+
+@@ -4673,12 +4704,12 @@ static int xhci_update_timeout_for_endpoint(struct xhci_hcd *xhci,
+ alt_timeout = xhci_call_host_update_timeout_for_endpoint(xhci, udev,
+ desc, state, timeout);
+
+- /* If we found we can't enable hub-initiated LPM, or
++ /* If we found we can't enable hub-initiated LPM, and
+ * the U1 or U2 exit latency was too high to allow
+- * device-initiated LPM as well, just stop searching.
++ * device-initiated LPM as well, then we will disable LPM
++ * for this device, so stop searching any further.
+ */
+- if (alt_timeout == USB3_LPM_DISABLED ||
+- alt_timeout == USB3_LPM_DEVICE_INITIATED) {
++ if (alt_timeout == USB3_LPM_DISABLED) {
+ *timeout = alt_timeout;
+ return -E2BIG;
+ }
+@@ -4789,10 +4820,12 @@ static u16 xhci_calculate_lpm_timeout(struct usb_hcd *hcd,
+ if (intf->dev.driver) {
+ driver = to_usb_driver(intf->dev.driver);
+ if (driver && driver->disable_hub_initiated_lpm) {
+- dev_dbg(&udev->dev, "Hub-initiated %s disabled "
+- "at request of driver %s\n",
+- state_name, driver->name);
+- return xhci_get_timeout_no_hub_lpm(udev, state);
++ dev_dbg(&udev->dev, "Hub-initiated %s disabled at request of driver %s\n",
++ state_name, driver->name);
++ timeout = xhci_get_timeout_no_hub_lpm(udev,
++ state);
++ if (timeout == USB3_LPM_DISABLED)
++ return timeout;
+ }
+ }
+
+@@ -5076,11 +5109,18 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks)
+ hcd->has_tt = 1;
+ } else {
+ /*
+- * Some 3.1 hosts return sbrn 0x30, use xhci supported protocol
+- * minor revision instead of sbrn. Minor revision is a two digit
+- * BCD containing minor and sub-minor numbers, only show minor.
++ * The early xHCI 1.1 spec did not mention that USB 3.1 capable
++ * hosts should return 0x31 for sbrn, or that the minor revision
++ * is a two digit BCD containing minor and sub-minor numbers.
++ * This was later clarified in xHCI 1.2.
++ *
++ * Some USB 3.1 capable hosts therefore have sbrn 0x30, and
++ * minor revision set to 0x1 instead of 0x10.
+ */
+- minor_rev = xhci->usb3_rhub.min_rev / 0x10;
++ if (xhci->usb3_rhub.min_rev == 0x1)
++ minor_rev = 1;
++ else
++ minor_rev = xhci->usb3_rhub.min_rev / 0x10;
+
+ switch (minor_rev) {
+ case 2:
+@@ -5197,8 +5237,16 @@ static void xhci_clear_tt_buffer_complete(struct usb_hcd *hcd,
+ unsigned int ep_index;
+ unsigned long flags;
+
++ /*
++ * udev might be NULL if tt buffer is cleared during a failed device
++ * enumeration due to a halted control endpoint. The USB core might
++ * have allocated a new udev for the next enumeration attempt.
++ */
++
+ xhci = hcd_to_xhci(hcd);
+ udev = (struct usb_device *)ep->hcpriv;
++ if (!udev)
++ return;
+ slot_id = udev->slot_id;
+ ep_index = xhci_get_endpoint_index(&ep->desc);
+
+diff --git a/drivers/usb/image/microtek.c b/drivers/usb/image/microtek.c
+index 0a57c2cc8e5a..7a6b122c833f 100644
+--- a/drivers/usb/image/microtek.c
++++ b/drivers/usb/image/microtek.c
+@@ -716,6 +716,10 @@ static int mts_usb_probe(struct usb_interface *intf,
+
+ }
+
++ if (ep_in_current != &ep_in_set[2]) {
++ MTS_WARNING("couldn't find two input bulk endpoints. Bailing out.\n");
++ return -ENODEV;
++ }
+
+ if ( ep_out == -1 ) {
+ MTS_WARNING( "couldn't find an output bulk endpoint. Bailing out.\n" );
+diff --git a/drivers/usb/misc/Kconfig b/drivers/usb/misc/Kconfig
+index bdae62b2ffe0..9bce583aada3 100644
+--- a/drivers/usb/misc/Kconfig
++++ b/drivers/usb/misc/Kconfig
+@@ -47,16 +47,6 @@ config USB_SEVSEG
+ To compile this driver as a module, choose M here: the
+ module will be called usbsevseg.
+
+-config USB_RIO500
+- tristate "USB Diamond Rio500 support"
+- help
+- Say Y here if you want to connect a USB Rio500 mp3 player to your
+- computer's USB port. Please read <file:Documentation/usb/rio.rst>
+- for more information.
+-
+- To compile this driver as a module, choose M here: the
+- module will be called rio500.
+-
+ config USB_LEGOTOWER
+ tristate "USB Lego Infrared Tower support"
+ help
+diff --git a/drivers/usb/misc/Makefile b/drivers/usb/misc/Makefile
+index 109f54f5b9aa..0d416eb624bb 100644
+--- a/drivers/usb/misc/Makefile
++++ b/drivers/usb/misc/Makefile
+@@ -17,7 +17,6 @@ obj-$(CONFIG_USB_ISIGHTFW) += isight_firmware.o
+ obj-$(CONFIG_USB_LCD) += usblcd.o
+ obj-$(CONFIG_USB_LD) += ldusb.o
+ obj-$(CONFIG_USB_LEGOTOWER) += legousbtower.o
+-obj-$(CONFIG_USB_RIO500) += rio500.o
+ obj-$(CONFIG_USB_TEST) += usbtest.o
+ obj-$(CONFIG_USB_EHSET_TEST_FIXTURE) += ehset.o
+ obj-$(CONFIG_USB_TRANCEVIBRATOR) += trancevibrator.o
+diff --git a/drivers/usb/misc/adutux.c b/drivers/usb/misc/adutux.c
+index 344d523b0502..6f5edb9fc61e 100644
+--- a/drivers/usb/misc/adutux.c
++++ b/drivers/usb/misc/adutux.c
+@@ -75,6 +75,7 @@ struct adu_device {
+ char serial_number[8];
+
+ int open_count; /* number of times this port has been opened */
++ unsigned long disconnected:1;
+
+ char *read_buffer_primary;
+ int read_buffer_length;
+@@ -116,7 +117,7 @@ static void adu_abort_transfers(struct adu_device *dev)
+ {
+ unsigned long flags;
+
+- if (dev->udev == NULL)
++ if (dev->disconnected)
+ return;
+
+ /* shutdown transfer */
+@@ -148,6 +149,7 @@ static void adu_delete(struct adu_device *dev)
+ kfree(dev->read_buffer_secondary);
+ kfree(dev->interrupt_in_buffer);
+ kfree(dev->interrupt_out_buffer);
++ usb_put_dev(dev->udev);
+ kfree(dev);
+ }
+
+@@ -243,7 +245,7 @@ static int adu_open(struct inode *inode, struct file *file)
+ }
+
+ dev = usb_get_intfdata(interface);
+- if (!dev || !dev->udev) {
++ if (!dev) {
+ retval = -ENODEV;
+ goto exit_no_device;
+ }
+@@ -326,7 +328,7 @@ static int adu_release(struct inode *inode, struct file *file)
+ }
+
+ adu_release_internal(dev);
+- if (dev->udev == NULL) {
++ if (dev->disconnected) {
+ /* the device was unplugged before the file was released */
+ if (!dev->open_count) /* ... and we're the last user */
+ adu_delete(dev);
+@@ -354,7 +356,7 @@ static ssize_t adu_read(struct file *file, __user char *buffer, size_t count,
+ return -ERESTARTSYS;
+
+ /* verify that the device wasn't unplugged */
+- if (dev->udev == NULL) {
++ if (dev->disconnected) {
+ retval = -ENODEV;
+ pr_err("No device or device unplugged %d\n", retval);
+ goto exit;
+@@ -518,7 +520,7 @@ static ssize_t adu_write(struct file *file, const __user char *buffer,
+ goto exit_nolock;
+
+ /* verify that the device wasn't unplugged */
+- if (dev->udev == NULL) {
++ if (dev->disconnected) {
+ retval = -ENODEV;
+ pr_err("No device or device unplugged %d\n", retval);
+ goto exit;
+@@ -663,7 +665,7 @@ static int adu_probe(struct usb_interface *interface,
+
+ mutex_init(&dev->mtx);
+ spin_lock_init(&dev->buflock);
+- dev->udev = udev;
++ dev->udev = usb_get_dev(udev);
+ init_waitqueue_head(&dev->read_wait);
+ init_waitqueue_head(&dev->write_wait);
+
+@@ -762,14 +764,18 @@ static void adu_disconnect(struct usb_interface *interface)
+
+ dev = usb_get_intfdata(interface);
+
+- mutex_lock(&dev->mtx); /* not interruptible */
+- dev->udev = NULL; /* poison */
+ usb_deregister_dev(interface, &adu_class);
+- mutex_unlock(&dev->mtx);
++
++ usb_poison_urb(dev->interrupt_in_urb);
++ usb_poison_urb(dev->interrupt_out_urb);
+
+ mutex_lock(&adutux_mutex);
+ usb_set_intfdata(interface, NULL);
+
++ mutex_lock(&dev->mtx); /* not interruptible */
++ dev->disconnected = 1;
++ mutex_unlock(&dev->mtx);
++
+ /* if the device is not opened, then we clean up right now */
+ if (!dev->open_count)
+ adu_delete(dev);
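
The adutux changes replace the old "NULL the udev pointer at disconnect"
convention, which raced with open file descriptors, with the pattern used
across this series: hold a reference from probe, poison the URBs at
disconnect, and flag the device as gone under the I/O mutex. A minimal
sketch with illustrative names:

#include <linux/usb.h>
#include <linux/mutex.h>
#include <linux/slab.h>

struct my_dev {
	struct usb_device *udev;	/* ref held from probe to delete */
	struct urb *in_urb;
	struct mutex mtx;
	unsigned long disconnected:1;	/* checked instead of udev == NULL */
};

static void my_disconnect(struct my_dev *dev)
{
	usb_poison_urb(dev->in_urb);	/* kills it and blocks resubmission */

	mutex_lock(&dev->mtx);
	dev->disconnected = 1;		/* I/O paths now fail with -ENODEV */
	mutex_unlock(&dev->mtx);
}

static void my_delete(struct my_dev *dev)	/* after the last close */
{
	usb_free_urb(dev->in_urb);
	usb_put_dev(dev->udev);		/* drop the usb_get_dev() from probe */
	kfree(dev);
}
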
+diff --git a/drivers/usb/misc/chaoskey.c b/drivers/usb/misc/chaoskey.c
+index cf5828ce927a..34e6cd6f40d3 100644
+--- a/drivers/usb/misc/chaoskey.c
++++ b/drivers/usb/misc/chaoskey.c
+@@ -98,6 +98,7 @@ static void chaoskey_free(struct chaoskey *dev)
+ usb_free_urb(dev->urb);
+ kfree(dev->name);
+ kfree(dev->buf);
++ usb_put_intf(dev->interface);
+ kfree(dev);
+ }
+ }
+@@ -145,6 +146,8 @@ static int chaoskey_probe(struct usb_interface *interface,
+ if (dev == NULL)
+ goto out;
+
++ dev->interface = usb_get_intf(interface);
++
+ dev->buf = kmalloc(size, GFP_KERNEL);
+
+ if (dev->buf == NULL)
+@@ -174,8 +177,6 @@ static int chaoskey_probe(struct usb_interface *interface,
+ goto out;
+ }
+
+- dev->interface = interface;
+-
+ dev->in_ep = in_ep;
+
+ if (le16_to_cpu(udev->descriptor.idVendor) != ALEA_VENDOR_ID)
+diff --git a/drivers/usb/misc/iowarrior.c b/drivers/usb/misc/iowarrior.c
+index f5bed9f29e56..f405fa734bcc 100644
+--- a/drivers/usb/misc/iowarrior.c
++++ b/drivers/usb/misc/iowarrior.c
+@@ -87,6 +87,7 @@ struct iowarrior {
+ char chip_serial[9]; /* the serial number string of the chip connected */
+ int report_size; /* number of bytes in a report */
+ u16 product_id;
++ struct usb_anchor submitted;
+ };
+
+ /*--------------*/
+@@ -243,6 +244,7 @@ static inline void iowarrior_delete(struct iowarrior *dev)
+ kfree(dev->int_in_buffer);
+ usb_free_urb(dev->int_in_urb);
+ kfree(dev->read_queue);
++ usb_put_intf(dev->interface);
+ kfree(dev);
+ }
+
+@@ -424,11 +426,13 @@ static ssize_t iowarrior_write(struct file *file,
+ retval = -EFAULT;
+ goto error;
+ }
++ usb_anchor_urb(int_out_urb, &dev->submitted);
+ retval = usb_submit_urb(int_out_urb, GFP_KERNEL);
+ if (retval) {
+ dev_dbg(&dev->interface->dev,
+ "submit error %d for urb nr.%d\n",
+ retval, atomic_read(&dev->write_busy));
++ usb_unanchor_urb(int_out_urb);
+ goto error;
+ }
+ /* submit was ok */
+@@ -764,11 +768,13 @@ static int iowarrior_probe(struct usb_interface *interface,
+ init_waitqueue_head(&dev->write_wait);
+
+ dev->udev = udev;
+- dev->interface = interface;
++ dev->interface = usb_get_intf(interface);
+
+ iface_desc = interface->cur_altsetting;
+ dev->product_id = le16_to_cpu(udev->descriptor.idProduct);
+
++ init_usb_anchor(&dev->submitted);
++
+ res = usb_find_last_int_in_endpoint(iface_desc, &dev->int_in_endpoint);
+ if (res) {
+ dev_err(&interface->dev, "no interrupt-in endpoint found\n");
+@@ -866,8 +872,6 @@ static void iowarrior_disconnect(struct usb_interface *interface)
+ dev = usb_get_intfdata(interface);
+ mutex_lock(&iowarrior_open_disc_lock);
+ usb_set_intfdata(interface, NULL);
+- /* prevent device read, write and ioctl */
+- dev->present = 0;
+
+ minor = dev->minor;
+ mutex_unlock(&iowarrior_open_disc_lock);
+@@ -878,8 +882,7 @@ static void iowarrior_disconnect(struct usb_interface *interface)
+ mutex_lock(&dev->mutex);
+
+ /* prevent device read, write and ioctl */
+-
+- mutex_unlock(&dev->mutex);
++ dev->present = 0;
+
+ if (dev->opened) {
+ /* There is a process that holds a file descriptor to the device,
+@@ -887,10 +890,13 @@ static void iowarrior_disconnect(struct usb_interface *interface)
+ Deleting the device is postponed until close() was called.
+ */
+ usb_kill_urb(dev->int_in_urb);
++ usb_kill_anchored_urbs(&dev->submitted);
+ wake_up_interruptible(&dev->read_wait);
+ wake_up_interruptible(&dev->write_wait);
++ mutex_unlock(&dev->mutex);
+ } else {
+ /* no process is using the device, cleanup now */
++ mutex_unlock(&dev->mutex);
+ iowarrior_delete(dev);
+ }
+
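
iowarrior gains a usb_anchor so that asynchronous write URBs, which the
driver otherwise loses track of after submission, can all be cancelled at
disconnect. A short sketch of the anchor discipline (names illustrative):

#include <linux/usb.h>

static int submit_tracked(struct usb_anchor *anchor, struct urb *urb)
{
	int ret;

	usb_anchor_urb(urb, anchor);	/* track before submission */
	ret = usb_submit_urb(urb, GFP_KERNEL);
	if (ret)
		usb_unanchor_urb(urb);	/* failed submit: untrack again */
	return ret;
}

static void cancel_all(struct usb_anchor *anchor)
{
	usb_kill_anchored_urbs(anchor);	/* e.g. from disconnect() */
}
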
+diff --git a/drivers/usb/misc/ldusb.c b/drivers/usb/misc/ldusb.c
+index 6581774bdfa4..f3108d85e768 100644
+--- a/drivers/usb/misc/ldusb.c
++++ b/drivers/usb/misc/ldusb.c
+@@ -153,6 +153,7 @@ MODULE_PARM_DESC(min_interrupt_out_interval, "Minimum interrupt out interval in
+ struct ld_usb {
+ struct mutex mutex; /* locks this structure */
+ struct usb_interface *intf; /* save off the usb interface pointer */
++ unsigned long disconnected:1;
+
+ int open_count; /* number of times this port has been opened */
+
+@@ -192,12 +193,10 @@ static void ld_usb_abort_transfers(struct ld_usb *dev)
+ /* shutdown transfer */
+ if (dev->interrupt_in_running) {
+ dev->interrupt_in_running = 0;
+- if (dev->intf)
+- usb_kill_urb(dev->interrupt_in_urb);
++ usb_kill_urb(dev->interrupt_in_urb);
+ }
+ if (dev->interrupt_out_busy)
+- if (dev->intf)
+- usb_kill_urb(dev->interrupt_out_urb);
++ usb_kill_urb(dev->interrupt_out_urb);
+ }
+
+ /**
+@@ -205,8 +204,6 @@ static void ld_usb_abort_transfers(struct ld_usb *dev)
+ */
+ static void ld_usb_delete(struct ld_usb *dev)
+ {
+- ld_usb_abort_transfers(dev);
+-
+ /* free data structures */
+ usb_free_urb(dev->interrupt_in_urb);
+ usb_free_urb(dev->interrupt_out_urb);
+@@ -263,7 +260,7 @@ static void ld_usb_interrupt_in_callback(struct urb *urb)
+
+ resubmit:
+ /* resubmit if we're still running */
+- if (dev->interrupt_in_running && !dev->buffer_overflow && dev->intf) {
++ if (dev->interrupt_in_running && !dev->buffer_overflow) {
+ retval = usb_submit_urb(dev->interrupt_in_urb, GFP_ATOMIC);
+ if (retval) {
+ dev_err(&dev->intf->dev,
+@@ -392,7 +389,7 @@ static int ld_usb_release(struct inode *inode, struct file *file)
+ retval = -ENODEV;
+ goto unlock_exit;
+ }
+- if (dev->intf == NULL) {
++ if (dev->disconnected) {
+ /* the device was unplugged before the file was released */
+ mutex_unlock(&dev->mutex);
+ /* unlock here as ld_usb_delete frees dev */
+@@ -423,7 +420,7 @@ static __poll_t ld_usb_poll(struct file *file, poll_table *wait)
+
+ dev = file->private_data;
+
+- if (!dev->intf)
++ if (dev->disconnected)
+ return EPOLLERR | EPOLLHUP;
+
+ poll_wait(file, &dev->read_wait, wait);
+@@ -462,7 +459,7 @@ static ssize_t ld_usb_read(struct file *file, char __user *buffer, size_t count,
+ }
+
+ /* verify that the device wasn't unplugged */
+- if (dev->intf == NULL) {
++ if (dev->disconnected) {
+ retval = -ENODEV;
+ printk(KERN_ERR "ldusb: No device or device unplugged %d\n", retval);
+ goto unlock_exit;
+@@ -542,7 +539,7 @@ static ssize_t ld_usb_write(struct file *file, const char __user *buffer,
+ }
+
+ /* verify that the device wasn't unplugged */
+- if (dev->intf == NULL) {
++ if (dev->disconnected) {
+ retval = -ENODEV;
+ printk(KERN_ERR "ldusb: No device or device unplugged %d\n", retval);
+ goto unlock_exit;
+@@ -764,6 +761,9 @@ static void ld_usb_disconnect(struct usb_interface *intf)
+ /* give back our minor */
+ usb_deregister_dev(intf, &ld_usb_class);
+
++ usb_poison_urb(dev->interrupt_in_urb);
++ usb_poison_urb(dev->interrupt_out_urb);
++
+ mutex_lock(&dev->mutex);
+
+ /* if the device is not opened, then we clean up right now */
+@@ -771,7 +771,7 @@ static void ld_usb_disconnect(struct usb_interface *intf)
+ mutex_unlock(&dev->mutex);
+ ld_usb_delete(dev);
+ } else {
+- dev->intf = NULL;
++ dev->disconnected = 1;
+ /* wake up pollers */
+ wake_up_interruptible_all(&dev->read_wait);
+ wake_up_interruptible_all(&dev->write_wait);
+diff --git a/drivers/usb/misc/legousbtower.c b/drivers/usb/misc/legousbtower.c
+index 006cf13b2199..9d4c52a7ebe0 100644
+--- a/drivers/usb/misc/legousbtower.c
++++ b/drivers/usb/misc/legousbtower.c
+@@ -179,7 +179,6 @@ static const struct usb_device_id tower_table[] = {
+ };
+
+ MODULE_DEVICE_TABLE (usb, tower_table);
+-static DEFINE_MUTEX(open_disc_mutex);
+
+ #define LEGO_USB_TOWER_MINOR_BASE 160
+
+@@ -191,6 +190,7 @@ struct lego_usb_tower {
+ unsigned char minor; /* the starting minor number for this device */
+
+ int open_count; /* number of times this port has been opened */
++ unsigned long disconnected:1;
+
+ char* read_buffer;
+ size_t read_buffer_length; /* this much came in */
+@@ -290,14 +290,13 @@ static inline void lego_usb_tower_debug_data(struct device *dev,
+ */
+ static inline void tower_delete (struct lego_usb_tower *dev)
+ {
+- tower_abort_transfers (dev);
+-
+ /* free data structures */
+ usb_free_urb(dev->interrupt_in_urb);
+ usb_free_urb(dev->interrupt_out_urb);
+ kfree (dev->read_buffer);
+ kfree (dev->interrupt_in_buffer);
+ kfree (dev->interrupt_out_buffer);
++ usb_put_dev(dev->udev);
+ kfree (dev);
+ }
+
+@@ -332,18 +331,14 @@ static int tower_open (struct inode *inode, struct file *file)
+ goto exit;
+ }
+
+- mutex_lock(&open_disc_mutex);
+ dev = usb_get_intfdata(interface);
+-
+ if (!dev) {
+- mutex_unlock(&open_disc_mutex);
+ retval = -ENODEV;
+ goto exit;
+ }
+
+ /* lock this device */
+ if (mutex_lock_interruptible(&dev->lock)) {
+- mutex_unlock(&open_disc_mutex);
+ retval = -ERESTARTSYS;
+ goto exit;
+ }
+@@ -351,12 +346,9 @@ static int tower_open (struct inode *inode, struct file *file)
+
+ /* allow opening only once */
+ if (dev->open_count) {
+- mutex_unlock(&open_disc_mutex);
+ retval = -EBUSY;
+ goto unlock_exit;
+ }
+- dev->open_count = 1;
+- mutex_unlock(&open_disc_mutex);
+
+ /* reset the tower */
+ result = usb_control_msg (dev->udev,
+@@ -396,13 +388,14 @@ static int tower_open (struct inode *inode, struct file *file)
+ dev_err(&dev->udev->dev,
+ "Couldn't submit interrupt_in_urb %d\n", retval);
+ dev->interrupt_in_running = 0;
+- dev->open_count = 0;
+ goto unlock_exit;
+ }
+
+ /* save device in the file's private structure */
+ file->private_data = dev;
+
++ dev->open_count = 1;
++
+ unlock_exit:
+ mutex_unlock(&dev->lock);
+
+@@ -423,10 +416,9 @@ static int tower_release (struct inode *inode, struct file *file)
+
+ if (dev == NULL) {
+ retval = -ENODEV;
+- goto exit_nolock;
++ goto exit;
+ }
+
+- mutex_lock(&open_disc_mutex);
+ if (mutex_lock_interruptible(&dev->lock)) {
+ retval = -ERESTARTSYS;
+ goto exit;
+@@ -438,7 +430,8 @@ static int tower_release (struct inode *inode, struct file *file)
+ retval = -ENODEV;
+ goto unlock_exit;
+ }
+- if (dev->udev == NULL) {
++
++ if (dev->disconnected) {
+ /* the device was unplugged before the file was released */
+
+ /* unlock here as tower_delete frees dev */
+@@ -456,10 +449,7 @@ static int tower_release (struct inode *inode, struct file *file)
+
+ unlock_exit:
+ mutex_unlock(&dev->lock);
+-
+ exit:
+- mutex_unlock(&open_disc_mutex);
+-exit_nolock:
+ return retval;
+ }
+
+@@ -477,10 +467,9 @@ static void tower_abort_transfers (struct lego_usb_tower *dev)
+ if (dev->interrupt_in_running) {
+ dev->interrupt_in_running = 0;
+ mb();
+- if (dev->udev)
+- usb_kill_urb (dev->interrupt_in_urb);
++ usb_kill_urb(dev->interrupt_in_urb);
+ }
+- if (dev->interrupt_out_busy && dev->udev)
++ if (dev->interrupt_out_busy)
+ usb_kill_urb(dev->interrupt_out_urb);
+ }
+
+@@ -516,7 +505,7 @@ static __poll_t tower_poll (struct file *file, poll_table *wait)
+
+ dev = file->private_data;
+
+- if (!dev->udev)
++ if (dev->disconnected)
+ return EPOLLERR | EPOLLHUP;
+
+ poll_wait(file, &dev->read_wait, wait);
+@@ -563,7 +552,7 @@ static ssize_t tower_read (struct file *file, char __user *buffer, size_t count,
+ }
+
+ /* verify that the device wasn't unplugged */
+- if (dev->udev == NULL) {
++ if (dev->disconnected) {
+ retval = -ENODEV;
+ pr_err("No device or device unplugged %d\n", retval);
+ goto unlock_exit;
+@@ -649,7 +638,7 @@ static ssize_t tower_write (struct file *file, const char __user *buffer, size_t
+ }
+
+ /* verify that the device wasn't unplugged */
+- if (dev->udev == NULL) {
++ if (dev->disconnected) {
+ retval = -ENODEV;
+ pr_err("No device or device unplugged %d\n", retval);
+ goto unlock_exit;
+@@ -759,7 +748,7 @@ static void tower_interrupt_in_callback (struct urb *urb)
+
+ resubmit:
+ /* resubmit if we're still running */
+- if (dev->interrupt_in_running && dev->udev) {
++ if (dev->interrupt_in_running) {
+ retval = usb_submit_urb (dev->interrupt_in_urb, GFP_ATOMIC);
+ if (retval)
+ dev_err(&dev->udev->dev,
+@@ -822,8 +811,9 @@ static int tower_probe (struct usb_interface *interface, const struct usb_device
+
+ mutex_init(&dev->lock);
+
+- dev->udev = udev;
++ dev->udev = usb_get_dev(udev);
+ dev->open_count = 0;
++ dev->disconnected = 0;
+
+ dev->read_buffer = NULL;
+ dev->read_buffer_length = 0;
+@@ -891,8 +881,10 @@ static int tower_probe (struct usb_interface *interface, const struct usb_device
+ get_version_reply,
+ sizeof(*get_version_reply),
+ 1000);
+- if (result < 0) {
+- dev_err(idev, "LEGO USB Tower get version control request failed\n");
++ if (result < sizeof(*get_version_reply)) {
++ if (result >= 0)
++ result = -EIO;
++ dev_err(idev, "get version request failed: %d\n", result);
+ retval = result;
+ goto error;
+ }
+@@ -910,7 +902,6 @@ static int tower_probe (struct usb_interface *interface, const struct usb_device
+ if (retval) {
+ /* something prevented us from registering this driver */
+ dev_err(idev, "Not able to get a minor for this device.\n");
+- usb_set_intfdata (interface, NULL);
+ goto error;
+ }
+ dev->minor = interface->minor;
+@@ -942,23 +933,24 @@ static void tower_disconnect (struct usb_interface *interface)
+ int minor;
+
+ dev = usb_get_intfdata (interface);
+- mutex_lock(&open_disc_mutex);
+- usb_set_intfdata (interface, NULL);
+
+ minor = dev->minor;
+
+- /* give back our minor */
++ /* give back our minor and prevent further open() */
+ usb_deregister_dev (interface, &tower_class);
+
++ /* stop I/O */
++ usb_poison_urb(dev->interrupt_in_urb);
++ usb_poison_urb(dev->interrupt_out_urb);
++
+ mutex_lock(&dev->lock);
+- mutex_unlock(&open_disc_mutex);
+
+ /* if the device is not opened, then we clean up right now */
+ if (!dev->open_count) {
+ mutex_unlock(&dev->lock);
+ tower_delete (dev);
+ } else {
+- dev->udev = NULL;
++ dev->disconnected = 1;
+ /* wake up pollers */
+ wake_up_interruptible_all(&dev->read_wait);
+ wake_up_interruptible_all(&dev->write_wait);
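
The legousbtower probe now treats a short GET_VERSION transfer as a failure:
usb_control_msg() returns the byte count on success, so "result >= 0" alone
does not prove the reply buffer was filled. A hedged wrapper sketch of that
check (the function name and the 1000 ms timeout are illustrative):

#include <linux/usb.h>
#include <linux/errno.h>

static int read_exact(struct usb_device *udev, u8 request,
		      u16 value, u16 index, void *buf, u16 len)
{
	int ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), request,
				  USB_TYPE_VENDOR | USB_DIR_IN |
				  USB_RECIP_DEVICE,
				  value, index, buf, len, 1000);

	if (ret < 0)
		return ret;	/* transport-level error */
	if (ret != len)
		return -EIO;	/* short transfer: reply incomplete */
	return 0;
}
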
+diff --git a/drivers/usb/misc/rio500.c b/drivers/usb/misc/rio500.c
+deleted file mode 100644
+index a32d61a79ab8..000000000000
+--- a/drivers/usb/misc/rio500.c
++++ /dev/null
+@@ -1,561 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0+
+-/* -*- linux-c -*- */
+-
+-/*
+- * Driver for USB Rio 500
+- *
+- * Cesar Miquel (miquel@df.uba.ar)
+- *
+- * based on hp_scanner.c by David E. Nelson (dnelson@jump.net)
+- *
+- * Based upon mouse.c (Brad Keryan) and printer.c (Michael Gee).
+- *
+- * Changelog:
+- * 30/05/2003 replaced lock/unlock kernel with up/down
+- * Daniele Bellucci bellucda@tiscali.it
+- * */
+-
+-#include <linux/module.h>
+-#include <linux/kernel.h>
+-#include <linux/signal.h>
+-#include <linux/sched/signal.h>
+-#include <linux/mutex.h>
+-#include <linux/errno.h>
+-#include <linux/random.h>
+-#include <linux/poll.h>
+-#include <linux/slab.h>
+-#include <linux/spinlock.h>
+-#include <linux/usb.h>
+-#include <linux/wait.h>
+-
+-#include "rio500_usb.h"
+-
+-#define DRIVER_AUTHOR "Cesar Miquel <miquel@df.uba.ar>"
+-#define DRIVER_DESC "USB Rio 500 driver"
+-
+-#define RIO_MINOR 64
+-
+-/* stall/wait timeout for rio */
+-#define NAK_TIMEOUT (HZ)
+-
+-#define IBUF_SIZE 0x1000
+-
+-/* Size of the rio buffer */
+-#define OBUF_SIZE 0x10000
+-
+-struct rio_usb_data {
+- struct usb_device *rio_dev; /* init: probe_rio */
+- unsigned int ifnum; /* Interface number of the USB device */
+- int isopen; /* nz if open */
+- int present; /* Device is present on the bus */
+- char *obuf, *ibuf; /* transfer buffers */
+- char bulk_in_ep, bulk_out_ep; /* Endpoint assignments */
+- wait_queue_head_t wait_q; /* for timeouts */
+- struct mutex lock; /* general race avoidance */
+-};
+-
+-static DEFINE_MUTEX(rio500_mutex);
+-static struct rio_usb_data rio_instance;
+-
+-static int open_rio(struct inode *inode, struct file *file)
+-{
+- struct rio_usb_data *rio = &rio_instance;
+-
+- /* against disconnect() */
+- mutex_lock(&rio500_mutex);
+- mutex_lock(&(rio->lock));
+-
+- if (rio->isopen || !rio->present) {
+- mutex_unlock(&(rio->lock));
+- mutex_unlock(&rio500_mutex);
+- return -EBUSY;
+- }
+- rio->isopen = 1;
+-
+- init_waitqueue_head(&rio->wait_q);
+-
+- mutex_unlock(&(rio->lock));
+-
+- dev_info(&rio->rio_dev->dev, "Rio opened.\n");
+- mutex_unlock(&rio500_mutex);
+-
+- return 0;
+-}
+-
+-static int close_rio(struct inode *inode, struct file *file)
+-{
+- struct rio_usb_data *rio = &rio_instance;
+-
+- /* against disconnect() */
+- mutex_lock(&rio500_mutex);
+- mutex_lock(&(rio->lock));
+-
+- rio->isopen = 0;
+- if (!rio->present) {
+- /* cleanup has been delayed */
+- kfree(rio->ibuf);
+- kfree(rio->obuf);
+- rio->ibuf = NULL;
+- rio->obuf = NULL;
+- } else {
+- dev_info(&rio->rio_dev->dev, "Rio closed.\n");
+- }
+- mutex_unlock(&(rio->lock));
+- mutex_unlock(&rio500_mutex);
+- return 0;
+-}
+-
+-static long ioctl_rio(struct file *file, unsigned int cmd, unsigned long arg)
+-{
+- struct RioCommand rio_cmd;
+- struct rio_usb_data *rio = &rio_instance;
+- void __user *data;
+- unsigned char *buffer;
+- int result, requesttype;
+- int retries;
+- int retval=0;
+-
+- mutex_lock(&(rio->lock));
+- /* Sanity check to make sure rio is connected, powered, etc */
+- if (rio->present == 0 || rio->rio_dev == NULL) {
+- retval = -ENODEV;
+- goto err_out;
+- }
+-
+- switch (cmd) {
+- case RIO_RECV_COMMAND:
+- data = (void __user *) arg;
+- if (data == NULL)
+- break;
+- if (copy_from_user(&rio_cmd, data, sizeof(struct RioCommand))) {
+- retval = -EFAULT;
+- goto err_out;
+- }
+- if (rio_cmd.length < 0 || rio_cmd.length > PAGE_SIZE) {
+- retval = -EINVAL;
+- goto err_out;
+- }
+- buffer = (unsigned char *) __get_free_page(GFP_KERNEL);
+- if (buffer == NULL) {
+- retval = -ENOMEM;
+- goto err_out;
+- }
+- if (copy_from_user(buffer, rio_cmd.buffer, rio_cmd.length)) {
+- retval = -EFAULT;
+- free_page((unsigned long) buffer);
+- goto err_out;
+- }
+-
+- requesttype = rio_cmd.requesttype | USB_DIR_IN |
+- USB_TYPE_VENDOR | USB_RECIP_DEVICE;
+- dev_dbg(&rio->rio_dev->dev,
+- "sending command:reqtype=%0x req=%0x value=%0x index=%0x len=%0x\n",
+- requesttype, rio_cmd.request, rio_cmd.value,
+- rio_cmd.index, rio_cmd.length);
+- /* Send rio control message */
+- retries = 3;
+- while (retries) {
+- result = usb_control_msg(rio->rio_dev,
+- usb_rcvctrlpipe(rio-> rio_dev, 0),
+- rio_cmd.request,
+- requesttype,
+- rio_cmd.value,
+- rio_cmd.index, buffer,
+- rio_cmd.length,
+- jiffies_to_msecs(rio_cmd.timeout));
+- if (result == -ETIMEDOUT)
+- retries--;
+- else if (result < 0) {
+- dev_err(&rio->rio_dev->dev,
+- "Error executing ioctrl. code = %d\n",
+- result);
+- retries = 0;
+- } else {
+- dev_dbg(&rio->rio_dev->dev,
+- "Executed ioctl. Result = %d (data=%02x)\n",
+- result, buffer[0]);
+- if (copy_to_user(rio_cmd.buffer, buffer,
+- rio_cmd.length)) {
+- free_page((unsigned long) buffer);
+- retval = -EFAULT;
+- goto err_out;
+- }
+- retries = 0;
+- }
+-
+- /* rio_cmd.buffer contains a raw stream of single byte
+- data which has been returned from rio. Data is
+- interpreted at application level. For data that
+- will be cast to data types longer than 1 byte, data
+- will be little_endian and will potentially need to
+- be swapped at the app level */
+-
+- }
+- free_page((unsigned long) buffer);
+- break;
+-
+- case RIO_SEND_COMMAND:
+- data = (void __user *) arg;
+- if (data == NULL)
+- break;
+- if (copy_from_user(&rio_cmd, data, sizeof(struct RioCommand))) {
+- retval = -EFAULT;
+- goto err_out;
+- }
+- if (rio_cmd.length < 0 || rio_cmd.length > PAGE_SIZE) {
+- retval = -EINVAL;
+- goto err_out;
+- }
+- buffer = (unsigned char *) __get_free_page(GFP_KERNEL);
+- if (buffer == NULL) {
+- retval = -ENOMEM;
+- goto err_out;
+- }
+- if (copy_from_user(buffer, rio_cmd.buffer, rio_cmd.length)) {
+- free_page((unsigned long)buffer);
+- retval = -EFAULT;
+- goto err_out;
+- }
+-
+- requesttype = rio_cmd.requesttype | USB_DIR_OUT |
+- USB_TYPE_VENDOR | USB_RECIP_DEVICE;
+- dev_dbg(&rio->rio_dev->dev,
+- "sending command: reqtype=%0x req=%0x value=%0x index=%0x len=%0x\n",
+- requesttype, rio_cmd.request, rio_cmd.value,
+- rio_cmd.index, rio_cmd.length);
+- /* Send rio control message */
+- retries = 3;
+- while (retries) {
+- result = usb_control_msg(rio->rio_dev,
+- usb_sndctrlpipe(rio-> rio_dev, 0),
+- rio_cmd.request,
+- requesttype,
+- rio_cmd.value,
+- rio_cmd.index, buffer,
+- rio_cmd.length,
+- jiffies_to_msecs(rio_cmd.timeout));
+- if (result == -ETIMEDOUT)
+- retries--;
+- else if (result < 0) {
+- dev_err(&rio->rio_dev->dev,
+- "Error executing ioctrl. code = %d\n",
+- result);
+- retries = 0;
+- } else {
+- dev_dbg(&rio->rio_dev->dev,
+- "Executed ioctl. Result = %d\n", result);
+- retries = 0;
+-
+- }
+-
+- }
+- free_page((unsigned long) buffer);
+- break;
+-
+- default:
+- retval = -ENOTTY;
+- break;
+- }
+-
+-
+-err_out:
+- mutex_unlock(&(rio->lock));
+- return retval;
+-}
+-
+-static ssize_t
+-write_rio(struct file *file, const char __user *buffer,
+- size_t count, loff_t * ppos)
+-{
+- DEFINE_WAIT(wait);
+- struct rio_usb_data *rio = &rio_instance;
+-
+- unsigned long copy_size;
+- unsigned long bytes_written = 0;
+- unsigned int partial;
+-
+- int result = 0;
+- int maxretry;
+- int errn = 0;
+- int intr;
+-
+- intr = mutex_lock_interruptible(&(rio->lock));
+- if (intr)
+- return -EINTR;
+- /* Sanity check to make sure rio is connected, powered, etc */
+- if (rio->present == 0 || rio->rio_dev == NULL) {
+- mutex_unlock(&(rio->lock));
+- return -ENODEV;
+- }
+-
+-
+-
+- do {
+- unsigned long thistime;
+- char *obuf = rio->obuf;
+-
+- thistime = copy_size =
+- (count >= OBUF_SIZE) ? OBUF_SIZE : count;
+- if (copy_from_user(rio->obuf, buffer, copy_size)) {
+- errn = -EFAULT;
+- goto error;
+- }
+- maxretry = 5;
+- while (thistime) {
+- if (!rio->rio_dev) {
+- errn = -ENODEV;
+- goto error;
+- }
+- if (signal_pending(current)) {
+- mutex_unlock(&(rio->lock));
+- return bytes_written ? bytes_written : -EINTR;
+- }
+-
+- result = usb_bulk_msg(rio->rio_dev,
+- usb_sndbulkpipe(rio->rio_dev, 2),
+- obuf, thistime, &partial, 5000);
+-
+- dev_dbg(&rio->rio_dev->dev,
+- "write stats: result:%d thistime:%lu partial:%u\n",
+- result, thistime, partial);
+-
+- if (result == -ETIMEDOUT) { /* NAK - so hold for a while */
+- if (!maxretry--) {
+- errn = -ETIME;
+- goto error;
+- }
+- prepare_to_wait(&rio->wait_q, &wait, TASK_INTERRUPTIBLE);
+- schedule_timeout(NAK_TIMEOUT);
+- finish_wait(&rio->wait_q, &wait);
+- continue;
+- } else if (!result && partial) {
+- obuf += partial;
+- thistime -= partial;
+- } else
+- break;
+- }
+- if (result) {
+- dev_err(&rio->rio_dev->dev, "Write Whoops - %x\n",
+- result);
+- errn = -EIO;
+- goto error;
+- }
+- bytes_written += copy_size;
+- count -= copy_size;
+- buffer += copy_size;
+- } while (count > 0);
+-
+- mutex_unlock(&(rio->lock));
+-
+- return bytes_written ? bytes_written : -EIO;
+-
+-error:
+- mutex_unlock(&(rio->lock));
+- return errn;
+-}
+-
+-static ssize_t
+-read_rio(struct file *file, char __user *buffer, size_t count, loff_t * ppos)
+-{
+- DEFINE_WAIT(wait);
+- struct rio_usb_data *rio = &rio_instance;
+- ssize_t read_count;
+- unsigned int partial;
+- int this_read;
+- int result;
+- int maxretry = 10;
+- char *ibuf;
+- int intr;
+-
+- intr = mutex_lock_interruptible(&(rio->lock));
+- if (intr)
+- return -EINTR;
+- /* Sanity check to make sure rio is connected, powered, etc */
+- if (rio->present == 0 || rio->rio_dev == NULL) {
+- mutex_unlock(&(rio->lock));
+- return -ENODEV;
+- }
+-
+- ibuf = rio->ibuf;
+-
+- read_count = 0;
+-
+-
+- while (count > 0) {
+- if (signal_pending(current)) {
+- mutex_unlock(&(rio->lock));
+- return read_count ? read_count : -EINTR;
+- }
+- if (!rio->rio_dev) {
+- mutex_unlock(&(rio->lock));
+- return -ENODEV;
+- }
+- this_read = (count >= IBUF_SIZE) ? IBUF_SIZE : count;
+-
+- result = usb_bulk_msg(rio->rio_dev,
+- usb_rcvbulkpipe(rio->rio_dev, 1),
+- ibuf, this_read, &partial,
+- 8000);
+-
+- dev_dbg(&rio->rio_dev->dev,
+- "read stats: result:%d this_read:%u partial:%u\n",
+- result, this_read, partial);
+-
+- if (partial) {
+- count = this_read = partial;
+- } else if (result == -ETIMEDOUT || result == 15) { /* FIXME: 15 ??? */
+- if (!maxretry--) {
+- mutex_unlock(&(rio->lock));
+- dev_err(&rio->rio_dev->dev,
+- "read_rio: maxretry timeout\n");
+- return -ETIME;
+- }
+- prepare_to_wait(&rio->wait_q, &wait, TASK_INTERRUPTIBLE);
+- schedule_timeout(NAK_TIMEOUT);
+- finish_wait(&rio->wait_q, &wait);
+- continue;
+- } else if (result != -EREMOTEIO) {
+- mutex_unlock(&(rio->lock));
+- dev_err(&rio->rio_dev->dev,
+- "Read Whoops - result:%d partial:%u this_read:%u\n",
+- result, partial, this_read);
+- return -EIO;
+- } else {
+- mutex_unlock(&(rio->lock));
+- return (0);
+- }
+-
+- if (this_read) {
+- if (copy_to_user(buffer, ibuf, this_read)) {
+- mutex_unlock(&(rio->lock));
+- return -EFAULT;
+- }
+- count -= this_read;
+- read_count += this_read;
+- buffer += this_read;
+- }
+- }
+- mutex_unlock(&(rio->lock));
+- return read_count;
+-}
+-
+-static const struct file_operations usb_rio_fops = {
+- .owner = THIS_MODULE,
+- .read = read_rio,
+- .write = write_rio,
+- .unlocked_ioctl = ioctl_rio,
+- .open = open_rio,
+- .release = close_rio,
+- .llseek = noop_llseek,
+-};
+-
+-static struct usb_class_driver usb_rio_class = {
+- .name = "rio500%d",
+- .fops = &usb_rio_fops,
+- .minor_base = RIO_MINOR,
+-};
+-
+-static int probe_rio(struct usb_interface *intf,
+- const struct usb_device_id *id)
+-{
+- struct usb_device *dev = interface_to_usbdev(intf);
+- struct rio_usb_data *rio = &rio_instance;
+- int retval = 0;
+-
+- mutex_lock(&rio500_mutex);
+- if (rio->present) {
+- dev_info(&intf->dev, "Second USB Rio at address %d refused\n", dev->devnum);
+- retval = -EBUSY;
+- goto bail_out;
+- } else {
+- dev_info(&intf->dev, "USB Rio found at address %d\n", dev->devnum);
+- }
+-
+- retval = usb_register_dev(intf, &usb_rio_class);
+- if (retval) {
+- dev_err(&dev->dev,
+- "Not able to get a minor for this device.\n");
+- retval = -ENOMEM;
+- goto bail_out;
+- }
+-
+- rio->rio_dev = dev;
+-
+- if (!(rio->obuf = kmalloc(OBUF_SIZE, GFP_KERNEL))) {
+- dev_err(&dev->dev,
+- "probe_rio: Not enough memory for the output buffer\n");
+- usb_deregister_dev(intf, &usb_rio_class);
+- retval = -ENOMEM;
+- goto bail_out;
+- }
+- dev_dbg(&intf->dev, "obuf address:%p\n", rio->obuf);
+-
+- if (!(rio->ibuf = kmalloc(IBUF_SIZE, GFP_KERNEL))) {
+- dev_err(&dev->dev,
+- "probe_rio: Not enough memory for the input buffer\n");
+- usb_deregister_dev(intf, &usb_rio_class);
+- kfree(rio->obuf);
+- retval = -ENOMEM;
+- goto bail_out;
+- }
+- dev_dbg(&intf->dev, "ibuf address:%p\n", rio->ibuf);
+-
+- mutex_init(&(rio->lock));
+-
+- usb_set_intfdata (intf, rio);
+- rio->present = 1;
+-bail_out:
+- mutex_unlock(&rio500_mutex);
+-
+- return retval;
+-}
+-
+-static void disconnect_rio(struct usb_interface *intf)
+-{
+- struct rio_usb_data *rio = usb_get_intfdata (intf);
+-
+- usb_set_intfdata (intf, NULL);
+- mutex_lock(&rio500_mutex);
+- if (rio) {
+- usb_deregister_dev(intf, &usb_rio_class);
+-
+- mutex_lock(&(rio->lock));
+- if (rio->isopen) {
+- rio->isopen = 0;
+- /* better let it finish - the release will do whats needed */
+- rio->rio_dev = NULL;
+- mutex_unlock(&(rio->lock));
+- mutex_unlock(&rio500_mutex);
+- return;
+- }
+- kfree(rio->ibuf);
+- kfree(rio->obuf);
+-
+- dev_info(&intf->dev, "USB Rio disconnected.\n");
+-
+- rio->present = 0;
+- mutex_unlock(&(rio->lock));
+- }
+- mutex_unlock(&rio500_mutex);
+-}
+-
+-static const struct usb_device_id rio_table[] = {
+- { USB_DEVICE(0x0841, 1) }, /* Rio 500 */
+- { } /* Terminating entry */
+-};
+-
+-MODULE_DEVICE_TABLE (usb, rio_table);
+-
+-static struct usb_driver rio_driver = {
+- .name = "rio500",
+- .probe = probe_rio,
+- .disconnect = disconnect_rio,
+- .id_table = rio_table,
+-};
+-
+-module_usb_driver(rio_driver);
+-
+-MODULE_AUTHOR( DRIVER_AUTHOR );
+-MODULE_DESCRIPTION( DRIVER_DESC );
+-MODULE_LICENSE("GPL");
+-
+diff --git a/drivers/usb/misc/rio500_usb.h b/drivers/usb/misc/rio500_usb.h
+deleted file mode 100644
+index 6db7a5863496..000000000000
+--- a/drivers/usb/misc/rio500_usb.h
++++ /dev/null
+@@ -1,20 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0+
+-/* ----------------------------------------------------------------------
+- Copyright (C) 2000 Cesar Miquel (miquel@df.uba.ar)
+- ---------------------------------------------------------------------- */
+-
+-#define RIO_SEND_COMMAND 0x1
+-#define RIO_RECV_COMMAND 0x2
+-
+-#define RIO_DIR_OUT 0x0
+-#define RIO_DIR_IN 0x1
+-
+-struct RioCommand {
+- short length;
+- int request;
+- int requesttype;
+- int value;
+- int index;
+- void __user *buffer;
+- int timeout;
+-};
+diff --git a/drivers/usb/misc/usblcd.c b/drivers/usb/misc/usblcd.c
+index 9ba4a4e68d91..aa982d3ca36b 100644
+--- a/drivers/usb/misc/usblcd.c
++++ b/drivers/usb/misc/usblcd.c
+@@ -18,6 +18,7 @@
+ #include <linux/slab.h>
+ #include <linux/errno.h>
+ #include <linux/mutex.h>
++#include <linux/rwsem.h>
+ #include <linux/uaccess.h>
+ #include <linux/usb.h>
+
+@@ -57,6 +58,8 @@ struct usb_lcd {
+ using up all RAM */
+ struct usb_anchor submitted; /* URBs to wait for
+ before suspend */
++ struct rw_semaphore io_rwsem;
++ unsigned long disconnected:1;
+ };
+ #define to_lcd_dev(d) container_of(d, struct usb_lcd, kref)
+
+@@ -142,6 +145,13 @@ static ssize_t lcd_read(struct file *file, char __user * buffer,
+
+ dev = file->private_data;
+
++ down_read(&dev->io_rwsem);
++
++ if (dev->disconnected) {
++ retval = -ENODEV;
++ goto out_up_io;
++ }
++
+ /* do a blocking bulk read to get data from the device */
+ retval = usb_bulk_msg(dev->udev,
+ usb_rcvbulkpipe(dev->udev,
+@@ -158,6 +168,9 @@ static ssize_t lcd_read(struct file *file, char __user * buffer,
+ retval = bytes_read;
+ }
+
++out_up_io:
++ up_read(&dev->io_rwsem);
++
+ return retval;
+ }
+
+@@ -237,11 +250,18 @@ static ssize_t lcd_write(struct file *file, const char __user * user_buffer,
+ if (r < 0)
+ return -EINTR;
+
++ down_read(&dev->io_rwsem);
++
++ if (dev->disconnected) {
++ retval = -ENODEV;
++ goto err_up_io;
++ }
++
+ /* create a urb, and a buffer for it, and copy the data to the urb */
+ urb = usb_alloc_urb(0, GFP_KERNEL);
+ if (!urb) {
+ retval = -ENOMEM;
+- goto err_no_buf;
++ goto err_up_io;
+ }
+
+ buf = usb_alloc_coherent(dev->udev, count, GFP_KERNEL,
+@@ -278,6 +298,7 @@ static ssize_t lcd_write(struct file *file, const char __user * user_buffer,
+ the USB core will eventually free it entirely */
+ usb_free_urb(urb);
+
++ up_read(&dev->io_rwsem);
+ exit:
+ return count;
+ error_unanchor:
+@@ -285,7 +306,8 @@ error_unanchor:
+ error:
+ usb_free_coherent(dev->udev, count, buf, urb->transfer_dma);
+ usb_free_urb(urb);
+-err_no_buf:
++err_up_io:
++ up_read(&dev->io_rwsem);
+ up(&dev->limit_sem);
+ return retval;
+ }
+@@ -325,6 +347,7 @@ static int lcd_probe(struct usb_interface *interface,
+
+ kref_init(&dev->kref);
+ sema_init(&dev->limit_sem, USB_LCD_CONCURRENT_WRITES);
++ init_rwsem(&dev->io_rwsem);
+ init_usb_anchor(&dev->submitted);
+
+ dev->udev = usb_get_dev(interface_to_usbdev(interface));
+@@ -422,6 +445,12 @@ static void lcd_disconnect(struct usb_interface *interface)
+ /* give back our minor */
+ usb_deregister_dev(interface, &lcd_class);
+
++ down_write(&dev->io_rwsem);
++ dev->disconnected = 1;
++ up_write(&dev->io_rwsem);
++
++ usb_kill_anchored_urbs(&dev->submitted);
++
+ /* decrement our usage count */
+ kref_put(&dev->kref, lcd_delete);
+
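
usblcd cannot rely on a single mutex because lcd_read() blocks in
usb_bulk_msg(); the new rw_semaphore lets any number of I/O calls run
concurrently while disconnect waits for them and then fences off new ones.
A minimal sketch of the scheme (struct and names illustrative; init_rwsem()
runs at probe):

#include <linux/rwsem.h>
#include <linux/errno.h>

struct lcd_like {
	struct rw_semaphore io_rwsem;
	unsigned long disconnected:1;
};

static int io_begin(struct lcd_like *dev)
{
	down_read(&dev->io_rwsem);	/* I/O paths share the lock */
	if (dev->disconnected) {
		up_read(&dev->io_rwsem);
		return -ENODEV;
	}
	return 0;	/* caller calls up_read() when its I/O is done */
}

static void mark_disconnected(struct lcd_like *dev)
{
	down_write(&dev->io_rwsem);	/* waits for all in-flight I/O */
	dev->disconnected = 1;
	up_write(&dev->io_rwsem);
}
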
+diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c
+index 6715a128e6c8..be0505b8b5d4 100644
+--- a/drivers/usb/misc/yurex.c
++++ b/drivers/usb/misc/yurex.c
+@@ -60,6 +60,7 @@ struct usb_yurex {
+
+ struct kref kref;
+ struct mutex io_mutex;
++ unsigned long disconnected:1;
+ struct fasync_struct *async_queue;
+ wait_queue_head_t waitq;
+
+@@ -107,6 +108,7 @@ static void yurex_delete(struct kref *kref)
+ dev->int_buffer, dev->urb->transfer_dma);
+ usb_free_urb(dev->urb);
+ }
++ usb_put_intf(dev->interface);
+ usb_put_dev(dev->udev);
+ kfree(dev);
+ }
+@@ -132,6 +134,7 @@ static void yurex_interrupt(struct urb *urb)
+ switch (status) {
+ case 0: /*success*/
+ break;
++ /* The device is terminated or messed up; give up */
+ case -EOVERFLOW:
+ dev_err(&dev->interface->dev,
+ "%s - overflow with length %d, actual length is %d\n",
+@@ -140,12 +143,13 @@ static void yurex_interrupt(struct urb *urb)
+ case -ENOENT:
+ case -ESHUTDOWN:
+ case -EILSEQ:
+- /* The device is terminated, clean up */
++ case -EPROTO:
++ case -ETIME:
+ return;
+ default:
+ dev_err(&dev->interface->dev,
+ "%s - unknown status received: %d\n", __func__, status);
+- goto exit;
++ return;
+ }
+
+ /* handle received message */
+@@ -177,7 +181,6 @@ static void yurex_interrupt(struct urb *urb)
+ break;
+ }
+
+-exit:
+ retval = usb_submit_urb(dev->urb, GFP_ATOMIC);
+ if (retval) {
+ dev_err(&dev->interface->dev, "%s - usb_submit_urb failed: %d\n",
+@@ -204,7 +207,7 @@ static int yurex_probe(struct usb_interface *interface, const struct usb_device_
+ init_waitqueue_head(&dev->waitq);
+
+ dev->udev = usb_get_dev(interface_to_usbdev(interface));
+- dev->interface = interface;
++ dev->interface = usb_get_intf(interface);
+
+ /* set up the endpoint information */
+ iface_desc = interface->cur_altsetting;
+@@ -315,8 +318,9 @@ static void yurex_disconnect(struct usb_interface *interface)
+
+ /* prevent more I/O from starting */
+ usb_poison_urb(dev->urb);
++ usb_poison_urb(dev->cntl_urb);
+ mutex_lock(&dev->io_mutex);
+- dev->interface = NULL;
++ dev->disconnected = 1;
+ mutex_unlock(&dev->io_mutex);
+
+ /* wakeup waiters */
+@@ -404,7 +408,7 @@ static ssize_t yurex_read(struct file *file, char __user *buffer, size_t count,
+ dev = file->private_data;
+
+ mutex_lock(&dev->io_mutex);
+- if (!dev->interface) { /* already disconnected */
++ if (dev->disconnected) { /* already disconnected */
+ mutex_unlock(&dev->io_mutex);
+ return -ENODEV;
+ }
+@@ -439,7 +443,7 @@ static ssize_t yurex_write(struct file *file, const char __user *user_buffer,
+ goto error;
+
+ mutex_lock(&dev->io_mutex);
+- if (!dev->interface) { /* already disconnected */
++ if (dev->disconnected) { /* already disconnected */
+ mutex_unlock(&dev->io_mutex);
+ retval = -ENODEV;
+ goto error;
+diff --git a/drivers/usb/renesas_usbhs/common.h b/drivers/usb/renesas_usbhs/common.h
+index d1a0a35ecfff..0824099b905e 100644
+--- a/drivers/usb/renesas_usbhs/common.h
++++ b/drivers/usb/renesas_usbhs/common.h
+@@ -211,6 +211,7 @@ struct usbhs_priv;
+ /* DCPCTR */
+ #define BSTS (1 << 15) /* Buffer Status */
+ #define SUREQ (1 << 14) /* Sending SETUP Token */
++#define INBUFM (1 << 14) /* (PIPEnCTR) Transfer Buffer Monitor */
+ #define CSSTS (1 << 12) /* CSSTS Status */
+ #define ACLRM (1 << 9) /* Buffer Auto-Clear Mode */
+ #define SQCLR (1 << 8) /* Toggle Bit Clear */
+diff --git a/drivers/usb/renesas_usbhs/fifo.c b/drivers/usb/renesas_usbhs/fifo.c
+index 2a01ceb71641..86637cd066cf 100644
+--- a/drivers/usb/renesas_usbhs/fifo.c
++++ b/drivers/usb/renesas_usbhs/fifo.c
+@@ -89,7 +89,7 @@ static void __usbhsf_pkt_del(struct usbhs_pkt *pkt)
+ list_del_init(&pkt->node);
+ }
+
+-static struct usbhs_pkt *__usbhsf_pkt_get(struct usbhs_pipe *pipe)
++struct usbhs_pkt *__usbhsf_pkt_get(struct usbhs_pipe *pipe)
+ {
+ return list_first_entry_or_null(&pipe->list, struct usbhs_pkt, node);
+ }
+diff --git a/drivers/usb/renesas_usbhs/fifo.h b/drivers/usb/renesas_usbhs/fifo.h
+index 88d1816bcda2..c3d3cc35cee0 100644
+--- a/drivers/usb/renesas_usbhs/fifo.h
++++ b/drivers/usb/renesas_usbhs/fifo.h
+@@ -97,5 +97,6 @@ void usbhs_pkt_push(struct usbhs_pipe *pipe, struct usbhs_pkt *pkt,
+ void *buf, int len, int zero, int sequence);
+ struct usbhs_pkt *usbhs_pkt_pop(struct usbhs_pipe *pipe, struct usbhs_pkt *pkt);
+ void usbhs_pkt_start(struct usbhs_pipe *pipe);
++struct usbhs_pkt *__usbhsf_pkt_get(struct usbhs_pipe *pipe);
+
+ #endif /* RENESAS_USB_FIFO_H */
+diff --git a/drivers/usb/renesas_usbhs/mod_gadget.c b/drivers/usb/renesas_usbhs/mod_gadget.c
+index 4d571a5205e2..e5ef56991dba 100644
+--- a/drivers/usb/renesas_usbhs/mod_gadget.c
++++ b/drivers/usb/renesas_usbhs/mod_gadget.c
+@@ -722,8 +722,7 @@ static int __usbhsg_ep_set_halt_wedge(struct usb_ep *ep, int halt, int wedge)
+ struct usbhs_priv *priv = usbhsg_gpriv_to_priv(gpriv);
+ struct device *dev = usbhsg_gpriv_to_dev(gpriv);
+ unsigned long flags;
+-
+- usbhsg_pipe_disable(uep);
++ int ret = 0;
+
+ dev_dbg(dev, "set halt %d (pipe %d)\n",
+ halt, usbhs_pipe_number(pipe));
+@@ -731,6 +730,18 @@ static int __usbhsg_ep_set_halt_wedge(struct usb_ep *ep, int halt, int wedge)
+ /******************** spin lock ********************/
+ usbhs_lock(priv, flags);
+
++ /*
++ * According to usb_ep_set_halt()'s description, this function should
++ * return -EAGAIN if the IN endpoint has any queue or data. Note
++ * that usbhs_pipe_is_dir_in() returns false if the pipe is an
++ * IN endpoint in gadget mode.
++ */
++ if (!usbhs_pipe_is_dir_in(pipe) && (__usbhsf_pkt_get(pipe) ||
++ usbhs_pipe_contains_transmittable_data(pipe))) {
++ ret = -EAGAIN;
++ goto out;
++ }
++
+ if (halt)
+ usbhs_pipe_stall(pipe);
+ else
+@@ -741,10 +752,11 @@ static int __usbhsg_ep_set_halt_wedge(struct usb_ep *ep, int halt, int wedge)
+ else
+ usbhsg_status_clr(gpriv, USBHSG_STATUS_WEDGE);
+
++out:
+ usbhs_unlock(priv, flags);
+ /******************** spin unlock ******************/
+
+- return 0;
++ return ret;
+ }
+
+ static int usbhsg_ep_set_halt(struct usb_ep *ep, int value)
+diff --git a/drivers/usb/renesas_usbhs/pipe.c b/drivers/usb/renesas_usbhs/pipe.c
+index c4922b96c93b..9e5afdde1adb 100644
+--- a/drivers/usb/renesas_usbhs/pipe.c
++++ b/drivers/usb/renesas_usbhs/pipe.c
+@@ -277,6 +277,21 @@ int usbhs_pipe_is_accessible(struct usbhs_pipe *pipe)
+ return -EBUSY;
+ }
+
++bool usbhs_pipe_contains_transmittable_data(struct usbhs_pipe *pipe)
++{
++ u16 val;
++
++ /* DCP pipes are not supported */
++ if (usbhs_pipe_is_dcp(pipe))
++ return false;
++
++ val = usbhsp_pipectrl_get(pipe);
++ if (val & INBUFM)
++ return true;
++
++ return false;
++}
++
+ /*
+ * PID ctrl
+ */
+diff --git a/drivers/usb/renesas_usbhs/pipe.h b/drivers/usb/renesas_usbhs/pipe.h
+index 3080423e600c..3b130529408b 100644
+--- a/drivers/usb/renesas_usbhs/pipe.h
++++ b/drivers/usb/renesas_usbhs/pipe.h
+@@ -83,6 +83,7 @@ void usbhs_pipe_clear(struct usbhs_pipe *pipe);
+ void usbhs_pipe_clear_without_sequence(struct usbhs_pipe *pipe,
+ int needs_bfre, int bfre_enable);
+ int usbhs_pipe_is_accessible(struct usbhs_pipe *pipe);
++bool usbhs_pipe_contains_transmittable_data(struct usbhs_pipe *pipe);
+ void usbhs_pipe_enable(struct usbhs_pipe *pipe);
+ void usbhs_pipe_disable(struct usbhs_pipe *pipe);
+ void usbhs_pipe_stall(struct usbhs_pipe *pipe);
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 4b3a049561f3..e25352932ba7 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -1030,6 +1030,9 @@ static const struct usb_device_id id_table_combined[] = {
+ /* EZPrototypes devices */
+ { USB_DEVICE(EZPROTOTYPES_VID, HJELMSLUND_USB485_ISO_PID) },
+ { USB_DEVICE_INTERFACE_NUMBER(UNJO_VID, UNJO_ISODEBUG_V1_PID, 1) },
++ /* Sienna devices */
++ { USB_DEVICE(FTDI_VID, FTDI_SIENNA_PID) },
++ { USB_DEVICE(ECHELON_VID, ECHELON_U20_PID) },
+ { } /* Terminating entry */
+ };
+
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index f12d806220b4..22d66217cb41 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -39,6 +39,9 @@
+
+ #define FTDI_LUMEL_PD12_PID 0x6002
+
++/* Sienna Serial Interface by Secyourit GmbH */
++#define FTDI_SIENNA_PID 0x8348
++
+ /* Cyber Cortex AV by Fabulous Silicon (http://fabuloussilicon.com) */
+ #define CYBER_CORTEX_AV_PID 0x8698
+
+@@ -688,6 +691,12 @@
+ #define BANDB_TTL3USB9M_PID 0xAC50
+ #define BANDB_ZZ_PROG1_USB_PID 0xBA02
+
++/*
++ * Echelon USB Serial Interface
++ */
++#define ECHELON_VID 0x0920
++#define ECHELON_U20_PID 0x7500
++
+ /*
+ * Intrepid Control Systems (http://www.intrepidcs.com/) ValueCAN and NeoVI
+ */
+diff --git a/drivers/usb/serial/keyspan.c b/drivers/usb/serial/keyspan.c
+index d34779fe4a8d..e66a59ef43a1 100644
+--- a/drivers/usb/serial/keyspan.c
++++ b/drivers/usb/serial/keyspan.c
+@@ -1741,8 +1741,8 @@ static struct urb *keyspan_setup_urb(struct usb_serial *serial, int endpoint,
+
+ ep_desc = find_ep(serial, endpoint);
+ if (!ep_desc) {
+- /* leak the urb, something's wrong and the callers don't care */
+- return urb;
++ usb_free_urb(urb);
++ return NULL;
+ }
+ if (usb_endpoint_xfer_int(ep_desc)) {
+ ep_type_name = "INT";
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 38e920ac7f82..06ab016be0b6 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -419,6 +419,7 @@ static void option_instat_callback(struct urb *urb);
+ #define CINTERION_PRODUCT_PH8_AUDIO 0x0083
+ #define CINTERION_PRODUCT_AHXX_2RMNET 0x0084
+ #define CINTERION_PRODUCT_AHXX_AUDIO 0x0085
++#define CINTERION_PRODUCT_CLS8 0x00b0
+
+ /* Olivetti products */
+ #define OLIVETTI_VENDOR_ID 0x0b3c
+@@ -1154,6 +1155,14 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) | RSVD(3) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG5, 0xff),
+ .driver_info = RSVD(0) | RSVD(1) | NCTRL(2) | RSVD(3) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1050, 0xff), /* Telit FN980 (rmnet) */
++ .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1051, 0xff), /* Telit FN980 (MBIM) */
++ .driver_info = NCTRL(0) | RSVD(1) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1052, 0xff), /* Telit FN980 (RNDIS) */
++ .driver_info = NCTRL(2) | RSVD(3) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1053, 0xff), /* Telit FN980 (ECM) */
++ .driver_info = NCTRL(0) | RSVD(1) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
+ .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+@@ -1847,6 +1856,8 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = RSVD(4) },
+ { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_AHXX_2RMNET, 0xff) },
+ { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_AHXX_AUDIO, 0xff) },
++ { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_CLS8, 0xff),
++ .driver_info = RSVD(0) | RSVD(4) },
+ { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_HC28_MDM) },
+ { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_HC28_MDMNET) },
+ { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC25_MDM) },
+diff --git a/drivers/usb/serial/usb-serial.c b/drivers/usb/serial/usb-serial.c
+index a3179fea38c8..8f066bb55d7d 100644
+--- a/drivers/usb/serial/usb-serial.c
++++ b/drivers/usb/serial/usb-serial.c
+@@ -314,10 +314,7 @@ static void serial_cleanup(struct tty_struct *tty)
+ serial = port->serial;
+ owner = serial->type->driver.owner;
+
+- mutex_lock(&serial->disc_mutex);
+- if (!serial->disconnected)
+- usb_autopm_put_interface(serial->interface);
+- mutex_unlock(&serial->disc_mutex);
++ usb_autopm_put_interface(serial->interface);
+
+ usb_serial_put(serial);
+ module_put(owner);
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index bcfdb55fd198..a3cf27120164 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -4416,18 +4416,20 @@ static int tcpm_fw_get_caps(struct tcpm_port *port,
+ /* USB data support is optional */
+ ret = fwnode_property_read_string(fwnode, "data-role", &cap_str);
+ if (ret == 0) {
+- port->typec_caps.data = typec_find_port_data_role(cap_str);
+- if (port->typec_caps.data < 0)
+- return -EINVAL;
++ ret = typec_find_port_data_role(cap_str);
++ if (ret < 0)
++ return ret;
++ port->typec_caps.data = ret;
+ }
+
+ ret = fwnode_property_read_string(fwnode, "power-role", &cap_str);
+ if (ret < 0)
+ return ret;
+
+- port->typec_caps.type = typec_find_port_power_role(cap_str);
+- if (port->typec_caps.type < 0)
+- return -EINVAL;
++ ret = typec_find_port_power_role(cap_str);
++ if (ret < 0)
++ return ret;
++ port->typec_caps.type = ret;
+ port->port_type = port->typec_caps.type;
+
+ if (port->port_type == TYPEC_PORT_SNK)
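
The tcpm hunks fix a sign-truncation bug: typec_caps.data and .type are
enum-typed fields, so storing a negative errno into them and then testing
"< 0" is unreliable. The idiom is to land the helper's result in a signed
int first. A reduced sketch of the bug class (the enum and helper are made
up for illustration):

#include <linux/errno.h>
#include <linux/string.h>

enum role { ROLE_HOST, ROLE_DEVICE };

static int find_role(const char *s)	/* enum value, or -EINVAL */
{
	if (!strcmp(s, "host"))
		return ROLE_HOST;
	if (!strcmp(s, "device"))
		return ROLE_DEVICE;
	return -EINVAL;
}

static int get_role(const char *s, enum role *out)
{
	int ret = find_role(s);	/* check in signed storage first ... */

	if (ret < 0)
		return ret;
	*out = ret;		/* ... only then assign to the enum field */
	return 0;
}
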
+diff --git a/drivers/usb/typec/ucsi/displayport.c b/drivers/usb/typec/ucsi/displayport.c
+index 6c103697c582..d99700cb4dca 100644
+--- a/drivers/usb/typec/ucsi/displayport.c
++++ b/drivers/usb/typec/ucsi/displayport.c
+@@ -75,6 +75,8 @@ static int ucsi_displayport_enter(struct typec_altmode *alt)
+
+ if (cur != 0xff) {
+ mutex_unlock(&dp->con->lock);
++ if (dp->con->port_altmode[cur] == alt)
++ return 0;
+ return -EBUSY;
+ }
+
+diff --git a/drivers/usb/typec/ucsi/ucsi_ccg.c b/drivers/usb/typec/ucsi/ucsi_ccg.c
+index 8e9f8fba55af..95378d8f7e4e 100644
+--- a/drivers/usb/typec/ucsi/ucsi_ccg.c
++++ b/drivers/usb/typec/ucsi/ucsi_ccg.c
+@@ -195,7 +195,6 @@ struct ucsi_ccg {
+
+ /* fw build with vendor information */
+ u16 fw_build;
+- bool run_isr; /* flag to call ISR routine during resume */
+ struct work_struct pm_work;
+ };
+
+@@ -224,18 +223,6 @@ static int ccg_read(struct ucsi_ccg *uc, u16 rab, u8 *data, u32 len)
+ if (quirks && quirks->max_read_len)
+ max_read_len = quirks->max_read_len;
+
+- if (uc->fw_build == CCG_FW_BUILD_NVIDIA &&
+- uc->fw_version <= CCG_OLD_FW_VERSION) {
+- mutex_lock(&uc->lock);
+- /*
+- * Do not schedule pm_work to run ISR in
+- * ucsi_ccg_runtime_resume() after pm_runtime_get_sync()
+- * since we are already in ISR path.
+- */
+- uc->run_isr = false;
+- mutex_unlock(&uc->lock);
+- }
+-
+ pm_runtime_get_sync(uc->dev);
+ while (rem_len > 0) {
+ msgs[1].buf = &data[len - rem_len];
+@@ -278,18 +265,6 @@ static int ccg_write(struct ucsi_ccg *uc, u16 rab, u8 *data, u32 len)
+ msgs[0].len = len + sizeof(rab);
+ msgs[0].buf = buf;
+
+- if (uc->fw_build == CCG_FW_BUILD_NVIDIA &&
+- uc->fw_version <= CCG_OLD_FW_VERSION) {
+- mutex_lock(&uc->lock);
+- /*
+- * Do not schedule pm_work to run ISR in
+- * ucsi_ccg_runtime_resume() after pm_runtime_get_sync()
+- * since we are already in ISR path.
+- */
+- uc->run_isr = false;
+- mutex_unlock(&uc->lock);
+- }
+-
+ pm_runtime_get_sync(uc->dev);
+ status = i2c_transfer(client->adapter, msgs, ARRAY_SIZE(msgs));
+ if (status < 0) {
+@@ -1133,7 +1108,6 @@ static int ucsi_ccg_probe(struct i2c_client *client,
+ uc->ppm.sync = ucsi_ccg_sync;
+ uc->dev = dev;
+ uc->client = client;
+- uc->run_isr = true;
+ mutex_init(&uc->lock);
+ INIT_WORK(&uc->work, ccg_update_firmware);
+ INIT_WORK(&uc->pm_work, ccg_pm_workaround_work);
+@@ -1195,6 +1169,8 @@ static int ucsi_ccg_probe(struct i2c_client *client,
+
+ pm_runtime_set_active(uc->dev);
+ pm_runtime_enable(uc->dev);
++ pm_runtime_use_autosuspend(uc->dev);
++ pm_runtime_set_autosuspend_delay(uc->dev, 5000);
+ pm_runtime_idle(uc->dev);
+
+ return 0;
+@@ -1237,7 +1213,6 @@ static int ucsi_ccg_runtime_resume(struct device *dev)
+ {
+ struct i2c_client *client = to_i2c_client(dev);
+ struct ucsi_ccg *uc = i2c_get_clientdata(client);
+- bool schedule = true;
+
+ /*
+ * Firmware version 3.1.10 or earlier, built for NVIDIA has known issue
+@@ -1245,17 +1220,8 @@ static int ucsi_ccg_runtime_resume(struct device *dev)
+ * Schedule a work to call ISR as a workaround.
+ */
+ if (uc->fw_build == CCG_FW_BUILD_NVIDIA &&
+- uc->fw_version <= CCG_OLD_FW_VERSION) {
+- mutex_lock(&uc->lock);
+- if (!uc->run_isr) {
+- uc->run_isr = true;
+- schedule = false;
+- }
+- mutex_unlock(&uc->lock);
+-
+- if (schedule)
+- schedule_work(&uc->pm_work);
+- }
++ uc->fw_version <= CCG_OLD_FW_VERSION)
++ schedule_work(&uc->pm_work);
+
+ return 0;
+ }
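[Editor's note: for reference, the runtime-PM idiom the ucsi_ccg probe gains above, as a hedged sketch. Only the 5000 ms delay comes from the patch; the surrounding function is illustrative:]

    #include <linux/pm_runtime.h>

    static int example_probe(struct device *dev)
    {
            pm_runtime_set_active(dev);
            pm_runtime_enable(dev);
            pm_runtime_use_autosuspend(dev);
            pm_runtime_set_autosuspend_delay(dev, 5000); /* ms idle */
            pm_runtime_idle(dev);           /* may suspend right away */
            return 0;
    }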
+diff --git a/drivers/usb/usb-skeleton.c b/drivers/usb/usb-skeleton.c
+index f101347e3ea3..e0cf11f798c5 100644
+--- a/drivers/usb/usb-skeleton.c
++++ b/drivers/usb/usb-skeleton.c
+@@ -59,6 +59,7 @@ struct usb_skel {
+ spinlock_t err_lock; /* lock for errors */
+ struct kref kref;
+ struct mutex io_mutex; /* synchronize I/O with disconnect */
++ unsigned long disconnected:1;
+ wait_queue_head_t bulk_in_wait; /* to wait for an ongoing read */
+ };
+ #define to_skel_dev(d) container_of(d, struct usb_skel, kref)
+@@ -71,6 +72,7 @@ static void skel_delete(struct kref *kref)
+ struct usb_skel *dev = to_skel_dev(kref);
+
+ usb_free_urb(dev->bulk_in_urb);
++ usb_put_intf(dev->interface);
+ usb_put_dev(dev->udev);
+ kfree(dev->bulk_in_buffer);
+ kfree(dev);
+@@ -122,10 +124,7 @@ static int skel_release(struct inode *inode, struct file *file)
+ return -ENODEV;
+
+ /* allow the device to be autosuspended */
+- mutex_lock(&dev->io_mutex);
+- if (dev->interface)
+- usb_autopm_put_interface(dev->interface);
+- mutex_unlock(&dev->io_mutex);
++ usb_autopm_put_interface(dev->interface);
+
+ /* decrement the count on our device */
+ kref_put(&dev->kref, skel_delete);
+@@ -238,7 +237,7 @@ static ssize_t skel_read(struct file *file, char *buffer, size_t count,
+ if (rv < 0)
+ return rv;
+
+- if (!dev->interface) { /* disconnect() was called */
++ if (dev->disconnected) { /* disconnect() was called */
+ rv = -ENODEV;
+ goto exit;
+ }
+@@ -420,7 +419,7 @@ static ssize_t skel_write(struct file *file, const char *user_buffer,
+
+ /* this lock makes sure we don't submit URBs to gone devices */
+ mutex_lock(&dev->io_mutex);
+- if (!dev->interface) { /* disconnect() was called */
++ if (dev->disconnected) { /* disconnect() was called */
+ mutex_unlock(&dev->io_mutex);
+ retval = -ENODEV;
+ goto error;
+@@ -505,7 +504,7 @@ static int skel_probe(struct usb_interface *interface,
+ init_waitqueue_head(&dev->bulk_in_wait);
+
+ dev->udev = usb_get_dev(interface_to_usbdev(interface));
+- dev->interface = interface;
++ dev->interface = usb_get_intf(interface);
+
+ /* set up the endpoint information */
+ /* use only the first bulk-in and bulk-out endpoints */
+@@ -571,7 +570,7 @@ static void skel_disconnect(struct usb_interface *interface)
+
+ /* prevent more I/O from starting */
+ mutex_lock(&dev->io_mutex);
+- dev->interface = NULL;
++ dev->disconnected = 1;
+ mutex_unlock(&dev->io_mutex);
+
+ usb_kill_anchored_urbs(&dev->submitted);
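[Editor's note: the usb-skeleton hunks above replace the "NULL the interface pointer on disconnect" convention with a long-lived reference plus a disconnected flag, so release paths can always call usb_autopm_put_interface(). A condensed sketch of that ownership pattern, with the struct trimmed to the essentials:]

    #include <linux/usb.h>

    struct skel {
            struct usb_interface *interface;
            unsigned long disconnected:1;
    };

    static void skel_bind(struct skel *dev, struct usb_interface *intf)
    {
            dev->interface = usb_get_intf(intf); /* hold a reference */
    }

    static void skel_mark_gone(struct skel *dev)
    {
            dev->disconnected = 1;  /* I/O paths test this flag */
    }

    static void skel_delete(struct skel *dev)
    {
            usb_put_intf(dev->interface); /* drop ref on last kref */
    }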
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 58a18ed11546..abcda051eee2 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -1591,7 +1591,6 @@ static noinline ssize_t btrfs_buffered_write(struct kiocb *iocb,
+ struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
+ struct btrfs_root *root = BTRFS_I(inode)->root;
+ struct page **pages = NULL;
+- struct extent_state *cached_state = NULL;
+ struct extent_changeset *data_reserved = NULL;
+ u64 release_bytes = 0;
+ u64 lockstart;
+@@ -1611,6 +1610,7 @@ static noinline ssize_t btrfs_buffered_write(struct kiocb *iocb,
+ return -ENOMEM;
+
+ while (iov_iter_count(i) > 0) {
++ struct extent_state *cached_state = NULL;
+ size_t offset = offset_in_page(pos);
+ size_t sector_offset;
+ size_t write_bytes = min(iov_iter_count(i),
+@@ -1758,9 +1758,20 @@ again:
+ if (copied > 0)
+ ret = btrfs_dirty_pages(inode, pages, dirty_pages,
+ pos, copied, &cached_state);
++
++ /*
++ * If we have not locked the extent range, because the range's
++ * start offset is >= i_size, we might still have a non-NULL
++ * cached extent state, acquired while marking the extent range
++ * as delalloc through btrfs_dirty_pages(). Therefore free any
++ * possible cached extent state to avoid a memory leak.
++ */
+ if (extents_locked)
+ unlock_extent_cached(&BTRFS_I(inode)->io_tree,
+ lockstart, lockend, &cached_state);
++ else
++ free_extent_state(cached_state);
++
+ btrfs_delalloc_release_extents(BTRFS_I(inode), reserve_bytes,
+ true);
+ if (ret) {
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index d51d9466feb0..b3453adb214d 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -6276,13 +6276,16 @@ static struct inode *btrfs_new_inode(struct btrfs_trans_handle *trans,
+ u32 sizes[2];
+ int nitems = name ? 2 : 1;
+ unsigned long ptr;
++ unsigned int nofs_flag;
+ int ret;
+
+ path = btrfs_alloc_path();
+ if (!path)
+ return ERR_PTR(-ENOMEM);
+
++ nofs_flag = memalloc_nofs_save();
+ inode = new_inode(fs_info->sb);
++ memalloc_nofs_restore(nofs_flag);
+ if (!inode) {
+ btrfs_free_path(path);
+ return ERR_PTR(-ENOMEM);
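[Editor's note: new_inode() can recurse into memory reclaim, so the memalloc_nofs_save()/restore() pair added above scopes the allocation to GFP_NOFS while the btrfs transaction is held. The idiom in isolation, as a sketch rather than the btrfs code:]

    #include <linux/fs.h>
    #include <linux/sched/mm.h>

    static struct inode *alloc_inode_nofs(struct super_block *sb)
    {
            unsigned int nofs_flag = memalloc_nofs_save();
            struct inode *inode = new_inode(sb); /* implicit GFP_NOFS */

            memalloc_nofs_restore(nofs_flag);
            return inode;
    }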
+diff --git a/fs/btrfs/ref-verify.c b/fs/btrfs/ref-verify.c
+index e87cbdad02a3..b57f3618e58e 100644
+--- a/fs/btrfs/ref-verify.c
++++ b/fs/btrfs/ref-verify.c
+@@ -500,7 +500,7 @@ static int process_leaf(struct btrfs_root *root,
+ struct btrfs_extent_data_ref *dref;
+ struct btrfs_shared_data_ref *sref;
+ u32 count;
+- int i = 0, tree_block_level = 0, ret;
++ int i = 0, tree_block_level = 0, ret = 0;
+ struct btrfs_key key;
+ int nritems = btrfs_header_nritems(leaf);
+
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 7f219851fa23..fbd66c33dd63 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -1434,6 +1434,13 @@ int btrfs_init_reloc_root(struct btrfs_trans_handle *trans,
+ int clear_rsv = 0;
+ int ret;
+
++ /*
++ * The subvolume has reloc tree but the swap is finished, no need to
++ * create/update the dead reloc tree
++ */
++ if (test_bit(BTRFS_ROOT_DEAD_RELOC_TREE, &root->state))
++ return 0;
++
+ if (root->reloc_root) {
+ reloc_root = root->reloc_root;
+ reloc_root->last_trans = trans->transid;
+@@ -2186,7 +2193,6 @@ static int clean_dirty_subvols(struct reloc_control *rc)
+ /* Merged subvolume, cleanup its reloc root */
+ struct btrfs_root *reloc_root = root->reloc_root;
+
+- clear_bit(BTRFS_ROOT_DEAD_RELOC_TREE, &root->state);
+ list_del_init(&root->reloc_dirty_list);
+ root->reloc_root = NULL;
+ if (reloc_root) {
+@@ -2195,6 +2201,7 @@ static int clean_dirty_subvols(struct reloc_control *rc)
+ if (ret2 < 0 && !ret)
+ ret = ret2;
+ }
++ clear_bit(BTRFS_ROOT_DEAD_RELOC_TREE, &root->state);
+ btrfs_put_fs_root(root);
+ } else {
+ /* Orphan reloc tree, just clean it up */
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 1bfd7e34f31e..c6bafb8b5f42 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -2932,7 +2932,8 @@ out:
+ * in the tree of log roots
+ */
+ static int update_log_root(struct btrfs_trans_handle *trans,
+- struct btrfs_root *log)
++ struct btrfs_root *log,
++ struct btrfs_root_item *root_item)
+ {
+ struct btrfs_fs_info *fs_info = log->fs_info;
+ int ret;
+@@ -2940,10 +2941,10 @@ static int update_log_root(struct btrfs_trans_handle *trans,
+ if (log->log_transid == 1) {
+ /* insert root item on the first sync */
+ ret = btrfs_insert_root(trans, fs_info->log_root_tree,
+- &log->root_key, &log->root_item);
++ &log->root_key, root_item);
+ } else {
+ ret = btrfs_update_root(trans, fs_info->log_root_tree,
+- &log->root_key, &log->root_item);
++ &log->root_key, root_item);
+ }
+ return ret;
+ }
+@@ -3041,6 +3042,7 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
+ struct btrfs_fs_info *fs_info = root->fs_info;
+ struct btrfs_root *log = root->log_root;
+ struct btrfs_root *log_root_tree = fs_info->log_root_tree;
++ struct btrfs_root_item new_root_item;
+ int log_transid = 0;
+ struct btrfs_log_ctx root_log_ctx;
+ struct blk_plug plug;
+@@ -3104,17 +3106,25 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
+ goto out;
+ }
+
++ /*
++ * We _must_ update under the root->log_mutex in order to make sure we
++ * have a consistent view of the log root we are trying to commit at
++ * this moment.
++ *
++ * We _must_ copy this into a local copy, because we are not holding the
++ * log_root_tree->log_mutex yet. This is important because when we
++ * commit the log_root_tree we must have a consistent view of the
++ * log_root_tree when we update the super block to point at the
++ * log_root_tree bytenr. If we update the log_root_tree here we'll race
++ * with the commit and possibly point at the new block which we may not
++ * have written out.
++ */
+ btrfs_set_root_node(&log->root_item, log->node);
++ memcpy(&new_root_item, &log->root_item, sizeof(new_root_item));
+
+ root->log_transid++;
+ log->log_transid = root->log_transid;
+ root->log_start_pid = 0;
+- /*
+- * Update or create log root item under the root's log_mutex to prevent
+- * races with concurrent log syncs that can lead to failure to update
+- * log root item because it was not created yet.
+- */
+- ret = update_log_root(trans, log);
+ /*
+ * IO has been started, blocks of the log tree have WRITTEN flag set
+ * in their headers. new modifications of the log will be written to
+@@ -3135,6 +3145,14 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
+ mutex_unlock(&log_root_tree->log_mutex);
+
+ mutex_lock(&log_root_tree->log_mutex);
++
++ /*
++ * Now we are safe to update the log_root_tree because we're under the
++ * log_mutex, and we're a current writer so we're holding the commit
++ * open until we drop the log_mutex.
++ */
++ ret = update_log_root(trans, log, &new_root_item);
++
+ if (atomic_dec_and_test(&log_root_tree->log_writers)) {
+ /* atomic_dec_and_test implies a barrier */
+ cond_wake_up_nomb(&log_root_tree->log_writer_wait);
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index e821a0e97cd8..9c057609eaec 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -3854,7 +3854,11 @@ static int alloc_profile_is_valid(u64 flags, int extended)
+ return !extended; /* "0" is valid for usual profiles */
+
+ /* true if exactly one bit set */
+- return is_power_of_2(flags);
++ /*
++ * Don't use is_power_of_2(unsigned long) because it won't work
++ * for the single profile (1ULL << 48) on 32-bit CPUs.
++ */
++ return flags != 0 && (flags & (flags - 1)) == 0;
+ }
+
+ static inline int balance_need_close(struct btrfs_fs_info *fs_info)
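[Editor's note: the comment added above is worth demonstrating. is_power_of_2() takes an unsigned long, which is 32 bits on 32-bit CPUs, so a profile bit such as 1ULL << 48 would be truncated before the test. A standalone sketch of the open-coded check:]

    #include <stdint.h>
    #include <stdio.h>

    /* Works at full 64-bit width on every target. */
    static int is_pow2_u64(uint64_t flags)
    {
            return flags != 0 && (flags & (flags - 1)) == 0;
    }

    int main(void)
    {
            printf("%d\n", is_pow2_u64(1ULL << 48)); /* 1 everywhere */
            printf("%d\n", is_pow2_u64(0));          /* 0: zero isn't */
            printf("%d\n", is_pow2_u64(0x30));       /* 0: two bits set */
            return 0;
    }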
+diff --git a/fs/cifs/dir.c b/fs/cifs/dir.c
+index be424e81e3ad..1cc1f298e01a 100644
+--- a/fs/cifs/dir.c
++++ b/fs/cifs/dir.c
+@@ -738,10 +738,16 @@ cifs_lookup(struct inode *parent_dir_inode, struct dentry *direntry,
+ static int
+ cifs_d_revalidate(struct dentry *direntry, unsigned int flags)
+ {
++ struct inode *inode;
++
+ if (flags & LOOKUP_RCU)
+ return -ECHILD;
+
+ if (d_really_is_positive(direntry)) {
++ inode = d_inode(direntry);
++ if ((flags & LOOKUP_REVAL) && !CIFS_CACHE_READ(CIFS_I(inode)))
++ CIFS_I(inode)->time = 0; /* force reval */
++
+ if (cifs_revalidate_dentry(direntry))
+ return 0;
+ else {
+@@ -752,7 +758,7 @@ cifs_d_revalidate(struct dentry *direntry, unsigned int flags)
+ * attributes will have been updated by
+ * cifs_revalidate_dentry().
+ */
+- if (IS_AUTOMOUNT(d_inode(direntry)) &&
++ if (IS_AUTOMOUNT(inode) &&
+ !(direntry->d_flags & DCACHE_NEED_AUTOMOUNT)) {
+ spin_lock(&direntry->d_lock);
+ direntry->d_flags |= DCACHE_NEED_AUTOMOUNT;
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 97090693d182..4c1aeb2cf7f5 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -253,6 +253,12 @@ cifs_nt_open(char *full_path, struct inode *inode, struct cifs_sb_info *cifs_sb,
+ rc = cifs_get_inode_info(&inode, full_path, buf, inode->i_sb,
+ xid, fid);
+
++ if (rc) {
++ server->ops->close(xid, tcon, fid);
++ if (rc == -ESTALE)
++ rc = -EOPENSTALE;
++ }
++
+ out:
+ kfree(buf);
+ return rc;
+@@ -1847,13 +1853,12 @@ struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *cifs_inode,
+ {
+ struct cifsFileInfo *open_file = NULL;
+ struct cifs_sb_info *cifs_sb = CIFS_SB(cifs_inode->vfs_inode.i_sb);
+- struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);
+
+ /* only filter by fsuid on multiuser mounts */
+ if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MULTIUSER))
+ fsuid_only = false;
+
+- spin_lock(&tcon->open_file_lock);
++ spin_lock(&cifs_inode->open_file_lock);
+ /* we could simply get the first_list_entry since write-only entries
+ are always at the end of the list but since the first entry might
+ have a close pending, we go through the whole list */
+@@ -1865,7 +1870,7 @@ struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *cifs_inode,
+ /* found a good file */
+ /* lock it so it will not be closed on us */
+ cifsFileInfo_get(open_file);
+- spin_unlock(&tcon->open_file_lock);
++ spin_unlock(&cifs_inode->open_file_lock);
+ return open_file;
+ } /* else might as well continue, and look for
+ another, or simply have the caller reopen it
+@@ -1873,7 +1878,7 @@ struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *cifs_inode,
+ } else /* write only file */
+ break; /* write only files are last so must be done */
+ }
+- spin_unlock(&tcon->open_file_lock);
++ spin_unlock(&cifs_inode->open_file_lock);
+ return NULL;
+ }
+
+@@ -1884,7 +1889,6 @@ cifs_get_writable_file(struct cifsInodeInfo *cifs_inode, bool fsuid_only,
+ {
+ struct cifsFileInfo *open_file, *inv_file = NULL;
+ struct cifs_sb_info *cifs_sb;
+- struct cifs_tcon *tcon;
+ bool any_available = false;
+ int rc = -EBADF;
+ unsigned int refind = 0;
+@@ -1904,16 +1908,15 @@ cifs_get_writable_file(struct cifsInodeInfo *cifs_inode, bool fsuid_only,
+ }
+
+ cifs_sb = CIFS_SB(cifs_inode->vfs_inode.i_sb);
+- tcon = cifs_sb_master_tcon(cifs_sb);
+
+ /* only filter by fsuid on multiuser mounts */
+ if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MULTIUSER))
+ fsuid_only = false;
+
+- spin_lock(&tcon->open_file_lock);
++ spin_lock(&cifs_inode->open_file_lock);
+ refind_writable:
+ if (refind > MAX_REOPEN_ATT) {
+- spin_unlock(&tcon->open_file_lock);
++ spin_unlock(&cifs_inode->open_file_lock);
+ return rc;
+ }
+ list_for_each_entry(open_file, &cifs_inode->openFileList, flist) {
+@@ -1925,7 +1928,7 @@ refind_writable:
+ if (!open_file->invalidHandle) {
+ /* found a good writable file */
+ cifsFileInfo_get(open_file);
+- spin_unlock(&tcon->open_file_lock);
++ spin_unlock(&cifs_inode->open_file_lock);
+ *ret_file = open_file;
+ return 0;
+ } else {
+@@ -1945,7 +1948,7 @@ refind_writable:
+ cifsFileInfo_get(inv_file);
+ }
+
+- spin_unlock(&tcon->open_file_lock);
++ spin_unlock(&cifs_inode->open_file_lock);
+
+ if (inv_file) {
+ rc = cifs_reopen_file(inv_file, false);
+@@ -1960,7 +1963,7 @@ refind_writable:
+ cifsFileInfo_put(inv_file);
+ ++refind;
+ inv_file = NULL;
+- spin_lock(&tcon->open_file_lock);
++ spin_lock(&cifs_inode->open_file_lock);
+ goto refind_writable;
+ }
+
+@@ -4399,17 +4402,15 @@ static int cifs_readpage(struct file *file, struct page *page)
+ static int is_inode_writable(struct cifsInodeInfo *cifs_inode)
+ {
+ struct cifsFileInfo *open_file;
+- struct cifs_tcon *tcon =
+- cifs_sb_master_tcon(CIFS_SB(cifs_inode->vfs_inode.i_sb));
+
+- spin_lock(&tcon->open_file_lock);
++ spin_lock(&cifs_inode->open_file_lock);
+ list_for_each_entry(open_file, &cifs_inode->openFileList, flist) {
+ if (OPEN_FMODE(open_file->f_flags) & FMODE_WRITE) {
+- spin_unlock(&tcon->open_file_lock);
++ spin_unlock(&cifs_inode->open_file_lock);
+ return 1;
+ }
+ }
+- spin_unlock(&tcon->open_file_lock);
++ spin_unlock(&cifs_inode->open_file_lock);
+ return 0;
+ }
+
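[Editor's note: all of the cifs/file.c hunks above make the same substitution. The open-file list hangs off the inode, so it is now guarded by a per-inode open_file_lock rather than the tcon-wide (per-connection) lock, shrinking the contention domain. A schematic of the pattern with hypothetical types; real code also pins a reference before dropping the lock:]

    #include <linux/list.h>
    #include <linux/spinlock.h>

    struct open_file { struct list_head list; bool writable; };

    struct example_inode {
            spinlock_t open_file_lock;      /* guards open_files */
            struct list_head open_files;
    };

    static struct open_file *first_writable(struct example_inode *ino)
    {
            struct open_file *f, *found = NULL;

            spin_lock(&ino->open_file_lock);
            list_for_each_entry(f, &ino->open_files, list) {
                    if (f->writable) {
                            found = f; /* real code takes a ref here */
                            break;
                    }
            }
            spin_unlock(&ino->open_file_lock);
            return found;
    }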
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index 56ca4b8ccaba..79d9a60f21ba 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -414,6 +414,7 @@ int cifs_get_inode_info_unix(struct inode **pinode,
+ /* if uniqueid is different, return error */
+ if (unlikely(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM &&
+ CIFS_I(*pinode)->uniqueid != fattr.cf_uniqueid)) {
++ CIFS_I(*pinode)->time = 0; /* force reval */
+ rc = -ESTALE;
+ goto cgiiu_exit;
+ }
+@@ -421,6 +422,7 @@ int cifs_get_inode_info_unix(struct inode **pinode,
+ /* if filetype is different, return error */
+ if (unlikely(((*pinode)->i_mode & S_IFMT) !=
+ (fattr.cf_mode & S_IFMT))) {
++ CIFS_I(*pinode)->time = 0; /* force reval */
+ rc = -ESTALE;
+ goto cgiiu_exit;
+ }
+@@ -924,6 +926,7 @@ cifs_get_inode_info(struct inode **inode, const char *full_path,
+ /* if uniqueid is different, return error */
+ if (unlikely(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM &&
+ CIFS_I(*inode)->uniqueid != fattr.cf_uniqueid)) {
++ CIFS_I(*inode)->time = 0; /* force reval */
+ rc = -ESTALE;
+ goto cgii_exit;
+ }
+@@ -931,6 +934,7 @@ cifs_get_inode_info(struct inode **inode, const char *full_path,
+ /* if filetype is different, return error */
+ if (unlikely(((*inode)->i_mode & S_IFMT) !=
+ (fattr.cf_mode & S_IFMT))) {
++ CIFS_I(*inode)->time = 0; /* force reval */
+ rc = -ESTALE;
+ goto cgii_exit;
+ }
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 06d048341fa4..30149652c379 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -2566,7 +2566,8 @@ static void io_destruct_skb(struct sk_buff *skb)
+ {
+ struct io_ring_ctx *ctx = skb->sk->sk_user_data;
+
+- io_finish_async(ctx);
++ if (ctx->sqo_wq)
++ flush_workqueue(ctx->sqo_wq);
+ unix_destruct_scm(skb);
+ }
+
+diff --git a/fs/libfs.c b/fs/libfs.c
+index c9b2850c0f7c..8e023b08a240 100644
+--- a/fs/libfs.c
++++ b/fs/libfs.c
+@@ -89,58 +89,47 @@ int dcache_dir_close(struct inode *inode, struct file *file)
+ EXPORT_SYMBOL(dcache_dir_close);
+
+ /* parent is locked at least shared */
+-static struct dentry *next_positive(struct dentry *parent,
+- struct list_head *from,
+- int count)
++/*
++ * Returns an element of siblings' list.
++ * We are looking for <count>th positive after <p>; if
++ * found, dentry is grabbed and passed to caller via *<res>.
++ * If no such element exists, the anchor of list is returned
++ * and *<res> is set to NULL.
++ */
++static struct list_head *scan_positives(struct dentry *cursor,
++ struct list_head *p,
++ loff_t count,
++ struct dentry **res)
+ {
+- unsigned *seq = &parent->d_inode->i_dir_seq, n;
+- struct dentry *res;
+- struct list_head *p;
+- bool skipped;
+- int i;
++ struct dentry *dentry = cursor->d_parent, *found = NULL;
+
+-retry:
+- i = count;
+- skipped = false;
+- n = smp_load_acquire(seq) & ~1;
+- res = NULL;
+- rcu_read_lock();
+- for (p = from->next; p != &parent->d_subdirs; p = p->next) {
++ spin_lock(&dentry->d_lock);
++ while ((p = p->next) != &dentry->d_subdirs) {
+ struct dentry *d = list_entry(p, struct dentry, d_child);
+- if (!simple_positive(d)) {
+- skipped = true;
+- } else if (!--i) {
+- res = d;
+- break;
++ // we must at least skip cursors, to avoid livelocks
++ if (d->d_flags & DCACHE_DENTRY_CURSOR)
++ continue;
++ if (simple_positive(d) && !--count) {
++ spin_lock_nested(&d->d_lock, DENTRY_D_LOCK_NESTED);
++ if (simple_positive(d))
++ found = dget_dlock(d);
++ spin_unlock(&d->d_lock);
++ if (likely(found))
++ break;
++ count = 1;
++ }
++ if (need_resched()) {
++ list_move(&cursor->d_child, p);
++ p = &cursor->d_child;
++ spin_unlock(&dentry->d_lock);
++ cond_resched();
++ spin_lock(&dentry->d_lock);
+ }
+ }
+- rcu_read_unlock();
+- if (skipped) {
+- smp_rmb();
+- if (unlikely(*seq != n))
+- goto retry;
+- }
+- return res;
+-}
+-
+-static void move_cursor(struct dentry *cursor, struct list_head *after)
+-{
+- struct dentry *parent = cursor->d_parent;
+- unsigned n, *seq = &parent->d_inode->i_dir_seq;
+- spin_lock(&parent->d_lock);
+- for (;;) {
+- n = *seq;
+- if (!(n & 1) && cmpxchg(seq, n, n + 1) == n)
+- break;
+- cpu_relax();
+- }
+- __list_del(cursor->d_child.prev, cursor->d_child.next);
+- if (after)
+- list_add(&cursor->d_child, after);
+- else
+- list_add_tail(&cursor->d_child, &parent->d_subdirs);
+- smp_store_release(seq, n + 2);
+- spin_unlock(&parent->d_lock);
++ spin_unlock(&dentry->d_lock);
++ dput(*res);
++ *res = found;
++ return p;
+ }
+
+ loff_t dcache_dir_lseek(struct file *file, loff_t offset, int whence)
+@@ -158,17 +147,28 @@ loff_t dcache_dir_lseek(struct file *file, loff_t offset, int whence)
+ return -EINVAL;
+ }
+ if (offset != file->f_pos) {
++ struct dentry *cursor = file->private_data;
++ struct dentry *to = NULL;
++ struct list_head *p;
++
+ file->f_pos = offset;
+- if (file->f_pos >= 2) {
+- struct dentry *cursor = file->private_data;
+- struct dentry *to;
+- loff_t n = file->f_pos - 2;
+-
+- inode_lock_shared(dentry->d_inode);
+- to = next_positive(dentry, &dentry->d_subdirs, n);
+- move_cursor(cursor, to ? &to->d_child : NULL);
+- inode_unlock_shared(dentry->d_inode);
++ inode_lock_shared(dentry->d_inode);
++
++ if (file->f_pos > 2) {
++ p = scan_positives(cursor, &dentry->d_subdirs,
++ file->f_pos - 2, &to);
++ spin_lock(&dentry->d_lock);
++ list_move(&cursor->d_child, p);
++ spin_unlock(&dentry->d_lock);
++ } else {
++ spin_lock(&dentry->d_lock);
++ list_del_init(&cursor->d_child);
++ spin_unlock(&dentry->d_lock);
+ }
++
++ dput(to);
++
++ inode_unlock_shared(dentry->d_inode);
+ }
+ return offset;
+ }
+@@ -190,25 +190,29 @@ int dcache_readdir(struct file *file, struct dir_context *ctx)
+ {
+ struct dentry *dentry = file->f_path.dentry;
+ struct dentry *cursor = file->private_data;
+- struct list_head *p = &cursor->d_child;
+- struct dentry *next;
+- bool moved = false;
++ struct list_head *anchor = &dentry->d_subdirs;
++ struct dentry *next = NULL;
++ struct list_head *p;
+
+ if (!dir_emit_dots(file, ctx))
+ return 0;
+
+ if (ctx->pos == 2)
+- p = &dentry->d_subdirs;
+- while ((next = next_positive(dentry, p, 1)) != NULL) {
++ p = anchor;
++ else
++ p = &cursor->d_child;
++
++ while ((p = scan_positives(cursor, p, 1, &next)) != anchor) {
+ if (!dir_emit(ctx, next->d_name.name, next->d_name.len,
+ d_inode(next)->i_ino, dt_type(d_inode(next))))
+ break;
+- moved = true;
+- p = &next->d_child;
+ ctx->pos++;
+ }
+- if (moved)
+- move_cursor(cursor, p);
++ spin_lock(&dentry->d_lock);
++ list_move_tail(&cursor->d_child, p);
++ spin_unlock(&dentry->d_lock);
++ dput(next);
++
+ return 0;
+ }
+ EXPORT_SYMBOL(dcache_readdir);
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index 222d7115db71..98a9a0bcdf38 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -123,32 +123,49 @@ static inline int put_dreq(struct nfs_direct_req *dreq)
+ }
+
+ static void
+-nfs_direct_good_bytes(struct nfs_direct_req *dreq, struct nfs_pgio_header *hdr)
++nfs_direct_handle_truncated(struct nfs_direct_req *dreq,
++ const struct nfs_pgio_header *hdr,
++ ssize_t dreq_len)
+ {
+- int i;
+- ssize_t count;
++ struct nfs_direct_mirror *mirror = &dreq->mirrors[hdr->pgio_mirror_idx];
++
++ if (!(test_bit(NFS_IOHDR_ERROR, &hdr->flags) ||
++ test_bit(NFS_IOHDR_EOF, &hdr->flags)))
++ return;
++ if (dreq->max_count >= dreq_len) {
++ dreq->max_count = dreq_len;
++ if (dreq->count > dreq_len)
++ dreq->count = dreq_len;
++
++ if (test_bit(NFS_IOHDR_ERROR, &hdr->flags))
++ dreq->error = hdr->error;
++ else /* Clear outstanding error if this is EOF */
++ dreq->error = 0;
++ }
++ if (mirror->count > dreq_len)
++ mirror->count = dreq_len;
++}
+
+- WARN_ON_ONCE(dreq->count >= dreq->max_count);
++static void
++nfs_direct_count_bytes(struct nfs_direct_req *dreq,
++ const struct nfs_pgio_header *hdr)
++{
++ struct nfs_direct_mirror *mirror = &dreq->mirrors[hdr->pgio_mirror_idx];
++ loff_t hdr_end = hdr->io_start + hdr->good_bytes;
++ ssize_t dreq_len = 0;
+
+- if (dreq->mirror_count == 1) {
+- dreq->mirrors[hdr->pgio_mirror_idx].count += hdr->good_bytes;
+- dreq->count += hdr->good_bytes;
+- } else {
+- /* mirrored writes */
+- count = dreq->mirrors[hdr->pgio_mirror_idx].count;
+- if (count + dreq->io_start < hdr->io_start + hdr->good_bytes) {
+- count = hdr->io_start + hdr->good_bytes - dreq->io_start;
+- dreq->mirrors[hdr->pgio_mirror_idx].count = count;
+- }
+- /* update the dreq->count by finding the minimum agreed count from all
+- * mirrors */
+- count = dreq->mirrors[0].count;
++ if (hdr_end > dreq->io_start)
++ dreq_len = hdr_end - dreq->io_start;
+
+- for (i = 1; i < dreq->mirror_count; i++)
+- count = min(count, dreq->mirrors[i].count);
++ nfs_direct_handle_truncated(dreq, hdr, dreq_len);
+
+- dreq->count = count;
+- }
++ if (dreq_len > dreq->max_count)
++ dreq_len = dreq->max_count;
++
++ if (mirror->count < dreq_len)
++ mirror->count = dreq_len;
++ if (dreq->count < dreq_len)
++ dreq->count = dreq_len;
+ }
+
+ /*
+@@ -402,20 +419,12 @@ static void nfs_direct_read_completion(struct nfs_pgio_header *hdr)
+ struct nfs_direct_req *dreq = hdr->dreq;
+
+ spin_lock(&dreq->lock);
+- if (test_bit(NFS_IOHDR_ERROR, &hdr->flags))
+- dreq->error = hdr->error;
+-
+ if (test_bit(NFS_IOHDR_REDO, &hdr->flags)) {
+ spin_unlock(&dreq->lock);
+ goto out_put;
+ }
+
+- if (hdr->good_bytes != 0)
+- nfs_direct_good_bytes(dreq, hdr);
+-
+- if (test_bit(NFS_IOHDR_EOF, &hdr->flags))
+- dreq->error = 0;
+-
++ nfs_direct_count_bytes(dreq, hdr);
+ spin_unlock(&dreq->lock);
+
+ while (!list_empty(&hdr->pages)) {
+@@ -652,6 +661,9 @@ static void nfs_direct_write_reschedule(struct nfs_direct_req *dreq)
+ nfs_direct_write_scan_commit_list(dreq->inode, &reqs, &cinfo);
+
+ dreq->count = 0;
++ dreq->max_count = 0;
++ list_for_each_entry(req, &reqs, wb_list)
++ dreq->max_count += req->wb_bytes;
+ dreq->verf.committed = NFS_INVALID_STABLE_HOW;
+ nfs_clear_pnfs_ds_commit_verifiers(&dreq->ds_cinfo);
+ for (i = 0; i < dreq->mirror_count; i++)
+@@ -791,17 +803,13 @@ static void nfs_direct_write_completion(struct nfs_pgio_header *hdr)
+ nfs_init_cinfo_from_dreq(&cinfo, dreq);
+
+ spin_lock(&dreq->lock);
+-
+- if (test_bit(NFS_IOHDR_ERROR, &hdr->flags))
+- dreq->error = hdr->error;
+-
+ if (test_bit(NFS_IOHDR_REDO, &hdr->flags)) {
+ spin_unlock(&dreq->lock);
+ goto out_put;
+ }
+
++ nfs_direct_count_bytes(dreq, hdr);
+ if (hdr->good_bytes != 0) {
+- nfs_direct_good_bytes(dreq, hdr);
+ if (nfs_write_need_commit(hdr)) {
+ if (dreq->flags == NFS_ODIRECT_RESCHED_WRITES)
+ request_commit = true;
+diff --git a/include/linux/acpi.h b/include/linux/acpi.h
+index 9426b9aaed86..9d0e20a2ac83 100644
+--- a/include/linux/acpi.h
++++ b/include/linux/acpi.h
+@@ -1302,11 +1302,16 @@ static inline int lpit_read_residency_count_address(u64 *address)
+ #endif
+
+ #ifdef CONFIG_ACPI_PPTT
++int acpi_pptt_cpu_is_thread(unsigned int cpu);
+ int find_acpi_cpu_topology(unsigned int cpu, int level);
+ int find_acpi_cpu_topology_package(unsigned int cpu);
+ int find_acpi_cpu_topology_hetero_id(unsigned int cpu);
+ int find_acpi_cpu_cache_topology(unsigned int cpu, int level);
+ #else
++static inline int acpi_pptt_cpu_is_thread(unsigned int cpu)
++{
++ return -EINVAL;
++}
+ static inline int find_acpi_cpu_topology(unsigned int cpu, int level)
+ {
+ return -EINVAL;
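[Editor's note: the acpi.h hunk follows the usual kernel header convention, shown in miniature below. When the option is compiled out, a static inline stub keeps callers building and reports the query as unsupported; the names here are made up:]

    #ifdef CONFIG_MY_TOPOLOGY
    int my_cpu_is_thread(unsigned int cpu);
    #else
    static inline int my_cpu_is_thread(unsigned int cpu)
    {
            return -EINVAL;         /* feature compiled out */
    }
    #endif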
+diff --git a/include/linux/hwmon.h b/include/linux/hwmon.h
+index 04c36b7a61dd..72579168189d 100644
+--- a/include/linux/hwmon.h
++++ b/include/linux/hwmon.h
+@@ -235,7 +235,7 @@ enum hwmon_power_attributes {
+ #define HWMON_P_LABEL BIT(hwmon_power_label)
+ #define HWMON_P_ALARM BIT(hwmon_power_alarm)
+ #define HWMON_P_CAP_ALARM BIT(hwmon_power_cap_alarm)
+-#define HWMON_P_MIN_ALARM BIT(hwmon_power_max_alarm)
++#define HWMON_P_MIN_ALARM BIT(hwmon_power_min_alarm)
+ #define HWMON_P_MAX_ALARM BIT(hwmon_power_max_alarm)
+ #define HWMON_P_LCRIT_ALARM BIT(hwmon_power_lcrit_alarm)
+ #define HWMON_P_CRIT_ALARM BIT(hwmon_power_crit_alarm)
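[Editor's note: the hwmon fix above repairs a copy-paste typo where two macros were built from the same enumerator and so aliased one bit. A userspace sketch of the bug class, with made-up enumerator values:]

    #include <stdio.h>

    #define BIT(n) (1UL << (n))

    enum { power_min_alarm = 4, power_max_alarm = 5 };

    #define P_MIN_ALARM BIT(power_min_alarm) /* was BIT(power_max_alarm) */
    #define P_MAX_ALARM BIT(power_max_alarm)

    int main(void)
    {
            /* distinct bits now: 0x10 vs 0x20 */
            printf("min=0x%lx max=0x%lx\n", P_MIN_ALARM, P_MAX_ALARM);
            return 0;
    }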
+diff --git a/include/linux/tpm_eventlog.h b/include/linux/tpm_eventlog.h
+index 63238c84dc0b..131ea1bad458 100644
+--- a/include/linux/tpm_eventlog.h
++++ b/include/linux/tpm_eventlog.h
+@@ -152,7 +152,7 @@ struct tcg_algorithm_info {
+ * total. Once we've done this we know the offset of the data length field,
+ * and can calculate the total size of the event.
+ *
+- * Return: size of the event on success, <0 on failure
++ * Return: size of the event on success, 0 on failure
+ */
+
+ static inline int __calc_tpm2_event_size(struct tcg_pcr_event2_head *event,
+@@ -170,6 +170,7 @@ static inline int __calc_tpm2_event_size(struct tcg_pcr_event2_head *event,
+ u16 halg;
+ int i;
+ int j;
++ u32 count, event_type;
+
+ marker = event;
+ marker_start = marker;
+@@ -190,16 +191,22 @@ static inline int __calc_tpm2_event_size(struct tcg_pcr_event2_head *event,
+ }
+
+ event = (struct tcg_pcr_event2_head *)mapping;
++ /*
++ * The loop below will unmap these fields if the log is larger than
++ * one page, so save them here for reference:
++ */
++ count = READ_ONCE(event->count);
++ event_type = READ_ONCE(event->event_type);
+
+ efispecid = (struct tcg_efi_specid_event_head *)event_header->event;
+
+ /* Check if event is malformed. */
+- if (event->count > efispecid->num_algs) {
++ if (count > efispecid->num_algs) {
+ size = 0;
+ goto out;
+ }
+
+- for (i = 0; i < event->count; i++) {
++ for (i = 0; i < count; i++) {
+ halg_size = sizeof(event->digests[i].alg_id);
+
+ /* Map the digest's algorithm identifier */
+@@ -256,8 +263,9 @@ static inline int __calc_tpm2_event_size(struct tcg_pcr_event2_head *event,
+ + event_field->event_size;
+ size = marker - marker_start;
+
+- if ((event->event_type == 0) && (event_field->event_size == 0))
++ if (event_type == 0 && event_field->event_size == 0)
+ size = 0;
++
+ out:
+ if (do_mapping)
+ TPM_MEMUNMAP(mapping, mapping_size);
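[Editor's note: the core of the tpm_eventlog fix above is that the digest loop can unmap the page holding the event header, so count and event_type are snapshotted first, with READ_ONCE() keeping the compiler from deferring the loads. The idiom in miniature, as a sketch only:]

    #include <linux/compiler.h>
    #include <linux/types.h>

    struct evt_head { u32 count; u32 event_type; };

    static u32 walk_digests(const struct evt_head *mapped)
    {
            u32 count = READ_ONCE(mapped->count); /* load once, now */
            u32 i, seen = 0;

            /* a real loop body may unmap/remap 'mapped' here, so it
             * must never be dereferenced again after the snapshot */
            for (i = 0; i < count; i++)
                    seen++;
            return seen;
    }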
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 541fd805fb88..3647097e6783 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -2939,7 +2939,7 @@ int sysctl_max_threads(struct ctl_table *table, int write,
+ struct ctl_table t;
+ int ret;
+ int threads = max_threads;
+- int min = MIN_THREADS;
++ int min = 1;
+ int max = MAX_THREADS;
+
+ t = *table;
+@@ -2951,7 +2951,7 @@ int sysctl_max_threads(struct ctl_table *table, int write,
+ if (ret || !write)
+ return ret;
+
+- set_max_threads(threads);
++ max_threads = threads;
+
+ return 0;
+ }
+diff --git a/kernel/panic.c b/kernel/panic.c
+index 057540b6eee9..02d0de31c42d 100644
+--- a/kernel/panic.c
++++ b/kernel/panic.c
+@@ -179,6 +179,7 @@ void panic(const char *fmt, ...)
+ * after setting panic_cpu) from invoking panic() again.
+ */
+ local_irq_disable();
++ preempt_disable_notrace();
+
+ /*
+ * It's possible to come here directly from a panic-assertion and
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index f9821a3374e9..a33c524eba0d 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -3540,21 +3540,22 @@ ftrace_regex_open(struct ftrace_ops *ops, int flag,
+ struct ftrace_hash *hash;
+ struct list_head *mod_head;
+ struct trace_array *tr = ops->private;
+- int ret = 0;
++ int ret = -ENOMEM;
+
+ ftrace_ops_init(ops);
+
+ if (unlikely(ftrace_disabled))
+ return -ENODEV;
+
++ if (tr && trace_array_get(tr) < 0)
++ return -ENODEV;
++
+ iter = kzalloc(sizeof(*iter), GFP_KERNEL);
+ if (!iter)
+- return -ENOMEM;
++ goto out;
+
+- if (trace_parser_get_init(&iter->parser, FTRACE_BUFF_MAX)) {
+- kfree(iter);
+- return -ENOMEM;
+- }
++ if (trace_parser_get_init(&iter->parser, FTRACE_BUFF_MAX))
++ goto out;
+
+ iter->ops = ops;
+ iter->flags = flag;
+@@ -3584,13 +3585,13 @@ ftrace_regex_open(struct ftrace_ops *ops, int flag,
+
+ if (!iter->hash) {
+ trace_parser_put(&iter->parser);
+- kfree(iter);
+- ret = -ENOMEM;
+ goto out_unlock;
+ }
+ } else
+ iter->hash = hash;
+
++ ret = 0;
++
+ if (file->f_mode & FMODE_READ) {
+ iter->pg = ftrace_pages_start;
+
+@@ -3602,7 +3603,6 @@ ftrace_regex_open(struct ftrace_ops *ops, int flag,
+ /* Failed */
+ free_ftrace_hash(iter->hash);
+ trace_parser_put(&iter->parser);
+- kfree(iter);
+ }
+ } else
+ file->private_data = iter;
+@@ -3610,6 +3610,13 @@ ftrace_regex_open(struct ftrace_ops *ops, int flag,
+ out_unlock:
+ mutex_unlock(&ops->func_hash->regex_lock);
+
++ out:
++ if (ret) {
++ kfree(iter);
++ if (tr)
++ trace_array_put(tr);
++ }
++
+ return ret;
+ }
+
+@@ -5037,6 +5044,8 @@ int ftrace_regex_release(struct inode *inode, struct file *file)
+
+ mutex_unlock(&iter->ops->func_hash->regex_lock);
+ free_ftrace_hash(iter->hash);
++ if (iter->tr)
++ trace_array_put(iter->tr);
+ kfree(iter);
+
+ return 0;
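[Editor's note: the ftrace_regex_open() rework above is the standard single-exit error style: ret starts at the failure value, every bail-out jumps to one label, and that label undoes everything, so no path can leak iter or the trace_array reference. Schematically, with placeholder helpers defined trivially:]

    #include <linux/slab.h>

    struct iter { int dummy; };
    static int get_ref(void) { return 0; }
    static void put_ref(void) { }
    static int parser_init(struct iter *i) { (void)i; return 0; }

    static int example_open(struct iter **out)
    {
            struct iter *iter = NULL;
            int ret = -ENOMEM;

            if (get_ref() < 0)      /* taken before any allocation */
                    return -ENODEV;

            iter = kzalloc(sizeof(*iter), GFP_KERNEL);
            if (!iter)
                    goto out;
            if (parser_init(iter))
                    goto out;

            *out = iter;
            ret = 0;                /* success: keep iter and the ref */
    out:
            if (ret) {
                    kfree(iter);    /* kfree(NULL) is a no-op */
                    put_ref();
            }
            return ret;
    }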
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 563e80f9006a..04458ed44a55 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -4355,9 +4355,14 @@ static int show_traces_open(struct inode *inode, struct file *file)
+ if (tracing_disabled)
+ return -ENODEV;
+
++ if (trace_array_get(tr) < 0)
++ return -ENODEV;
++
+ ret = seq_open(file, &show_traces_seq_ops);
+- if (ret)
++ if (ret) {
++ trace_array_put(tr);
+ return ret;
++ }
+
+ m = file->private_data;
+ m->private = tr;
+@@ -4365,6 +4370,14 @@ static int show_traces_open(struct inode *inode, struct file *file)
+ return 0;
+ }
+
++static int show_traces_release(struct inode *inode, struct file *file)
++{
++ struct trace_array *tr = inode->i_private;
++
++ trace_array_put(tr);
++ return seq_release(inode, file);
++}
++
+ static ssize_t
+ tracing_write_stub(struct file *filp, const char __user *ubuf,
+ size_t count, loff_t *ppos)
+@@ -4395,8 +4408,8 @@ static const struct file_operations tracing_fops = {
+ static const struct file_operations show_traces_fops = {
+ .open = show_traces_open,
+ .read = seq_read,
+- .release = seq_release,
+ .llseek = seq_lseek,
++ .release = show_traces_release,
+ };
+
+ static ssize_t
+diff --git a/kernel/trace/trace_hwlat.c b/kernel/trace/trace_hwlat.c
+index fa95139445b2..862f4b0139fc 100644
+--- a/kernel/trace/trace_hwlat.c
++++ b/kernel/trace/trace_hwlat.c
+@@ -150,7 +150,7 @@ void trace_hwlat_callback(bool enter)
+ if (enter)
+ nmi_ts_start = time_get();
+ else
+- nmi_total_ts = time_get() - nmi_ts_start;
++ nmi_total_ts += time_get() - nmi_ts_start;
+ }
+
+ if (enter)
+@@ -256,6 +256,8 @@ static int get_sample(void)
+ /* Keep a running maximum ever recorded hardware latency */
+ if (sample > tr->max_latency)
+ tr->max_latency = sample;
++ if (outer_sample > tr->max_latency)
++ tr->max_latency = outer_sample;
+ }
+
+ out:
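[Editor's note: the one-character hwlat change above turns an overwrite into an accumulation. With "=", a second NMI during the sample window discards the time spent in the first; "+=" totals them. A trivial userspace illustration with made-up costs:]

    #include <stdio.h>

    int main(void)
    {
            long total = 0;
            long windows[] = { 120, 80 };   /* two NMIs in one sample */

            for (int i = 0; i < 2; i++)
                    total += windows[i];    /* was: total = windows[i]; */

            printf("nmi_total_ts = %ld\n", total); /* 200, not 80 */
            return 0;
    }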
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 9c9194959271..bfa5815e59f8 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -1174,11 +1174,17 @@ static __always_inline bool free_pages_prepare(struct page *page,
+ debug_check_no_obj_freed(page_address(page),
+ PAGE_SIZE << order);
+ }
+- arch_free_page(page, order);
+ if (want_init_on_free())
+ kernel_init_free_pages(page, 1 << order);
+
+ kernel_poison_pages(page, 1 << order, 0);
++ /*
++ * arch_free_page() can make the page's contents inaccessible. s390
++ * does this. So nothing which can access the page's contents should
++ * happen after this.
++ */
++ arch_free_page(page, order);
++
+ if (debug_pagealloc_enabled())
+ kernel_map_pages(page, 1 << order, 0);
+
+diff --git a/mm/vmpressure.c b/mm/vmpressure.c
+index f3b50811497a..4bac22fe1aa2 100644
+--- a/mm/vmpressure.c
++++ b/mm/vmpressure.c
+@@ -355,6 +355,9 @@ void vmpressure_prio(gfp_t gfp, struct mem_cgroup *memcg, int prio)
+ * "hierarchy" or "local").
+ *
+ * To be used as memcg event method.
++ *
++ * Return: 0 on success, -ENOMEM on memory failure or -EINVAL if @args could
++ * not be parsed.
+ */
+ int vmpressure_register_event(struct mem_cgroup *memcg,
+ struct eventfd_ctx *eventfd, const char *args)
+@@ -362,7 +365,7 @@ int vmpressure_register_event(struct mem_cgroup *memcg,
+ struct vmpressure *vmpr = memcg_to_vmpressure(memcg);
+ struct vmpressure_event *ev;
+ enum vmpressure_modes mode = VMPRESSURE_NO_PASSTHROUGH;
+- enum vmpressure_levels level = -1;
++ enum vmpressure_levels level;
+ char *spec, *spec_orig;
+ char *token;
+ int ret = 0;
+@@ -375,20 +378,18 @@ int vmpressure_register_event(struct mem_cgroup *memcg,
+
+ /* Find required level */
+ token = strsep(&spec, ",");
+- level = match_string(vmpressure_str_levels, VMPRESSURE_NUM_LEVELS, token);
+- if (level < 0) {
+- ret = level;
++ ret = match_string(vmpressure_str_levels, VMPRESSURE_NUM_LEVELS, token);
++ if (ret < 0)
+ goto out;
+- }
++ level = ret;
+
+ /* Find optional mode */
+ token = strsep(&spec, ",");
+ if (token) {
+- mode = match_string(vmpressure_str_modes, VMPRESSURE_NUM_MODES, token);
+- if (mode < 0) {
+- ret = mode;
++ ret = match_string(vmpressure_str_modes, VMPRESSURE_NUM_MODES, token);
++ if (ret < 0)
+ goto out;
+- }
++ mode = ret;
+ }
+
+ ev = kzalloc(sizeof(*ev), GFP_KERNEL);
+@@ -404,6 +405,7 @@ int vmpressure_register_event(struct mem_cgroup *memcg,
+ mutex_lock(&vmpr->events_lock);
+ list_add(&ev->node, &vmpr->events);
+ mutex_unlock(&vmpr->events_lock);
++ ret = 0;
+ out:
+ kfree(spec_orig);
+ return ret;
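[Editor's note: the vmpressure change is the same enum-signedness repair seen earlier in tcpm. match_string() reports -EINVAL through its int return, which must be checked before the value lands in a possibly-unsigned enum. The resulting idiom, sketched with hypothetical names:]

    #include <linux/string.h>

    enum vm_mode { MODE_A, MODE_B, NUM_MODES };
    static const char * const mode_names[] = { "a", "b" };

    static int parse_mode(const char *token, enum vm_mode *mode)
    {
            int ret = match_string(mode_names, NUM_MODES, token);

            if (ret < 0)
                    return ret;     /* token not found in the table */
            *mode = ret;            /* assign only a validated index */
            return 0;
    }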
+diff --git a/mm/z3fold.c b/mm/z3fold.c
+index 05bdf90646e7..6d3d3f698ebb 100644
+--- a/mm/z3fold.c
++++ b/mm/z3fold.c
+@@ -998,9 +998,11 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
+ struct z3fold_header *zhdr;
+ struct page *page;
+ enum buddy bud;
++ bool page_claimed;
+
+ zhdr = handle_to_z3fold_header(handle);
+ page = virt_to_page(zhdr);
++ page_claimed = test_and_set_bit(PAGE_CLAIMED, &page->private);
+
+ if (test_bit(PAGE_HEADLESS, &page->private)) {
+ /* if a headless page is under reclaim, just leave.
+@@ -1008,7 +1010,7 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
+ * has not been set before, we release this page
+ * immediately so we don't care about its value any more.
+ */
+- if (!test_and_set_bit(PAGE_CLAIMED, &page->private)) {
++ if (!page_claimed) {
+ spin_lock(&pool->lock);
+ list_del(&page->lru);
+ spin_unlock(&pool->lock);
+@@ -1044,13 +1046,15 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
+ atomic64_dec(&pool->pages_nr);
+ return;
+ }
+- if (test_bit(PAGE_CLAIMED, &page->private)) {
++ if (page_claimed) {
++ /* the page has not been claimed by us */
+ z3fold_page_unlock(zhdr);
+ return;
+ }
+ if (unlikely(PageIsolated(page)) ||
+ test_and_set_bit(NEEDS_COMPACTING, &page->private)) {
+ z3fold_page_unlock(zhdr);
++ clear_bit(PAGE_CLAIMED, &page->private);
+ return;
+ }
+ if (zhdr->cpu < 0 || !cpu_online(zhdr->cpu)) {
+@@ -1060,10 +1064,12 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
+ zhdr->cpu = -1;
+ kref_get(&zhdr->refcount);
+ do_compact_page(zhdr, true);
++ clear_bit(PAGE_CLAIMED, &page->private);
+ return;
+ }
+ kref_get(&zhdr->refcount);
+ queue_work_on(zhdr->cpu, pool->compact_wq, &zhdr->work);
++ clear_bit(PAGE_CLAIMED, &page->private);
+ z3fold_page_unlock(zhdr);
+ }
+
+diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
+index d61563a3695e..8218e837a58c 100644
+--- a/security/selinux/ss/services.c
++++ b/security/selinux/ss/services.c
+@@ -1946,7 +1946,14 @@ static int convert_context(struct context *oldc, struct context *newc, void *p)
+ rc = string_to_context_struct(args->newp, NULL, s,
+ newc, SECSID_NULL);
+ if (rc == -EINVAL) {
+- /* Retain string representation for later mapping. */
++ /*
++ * Retain string representation for later mapping.
++ *
++ * IMPORTANT: We need to copy the contents of oldc->str
++ * back into s again because string_to_context_struct()
++ * may have garbled it.
++ */
++ memcpy(s, oldc->str, oldc->len);
+ context_init(newc);
+ newc->str = s;
+ newc->len = oldc->len;
+diff --git a/tools/perf/util/jitdump.c b/tools/perf/util/jitdump.c
+index 18c34f0c1966..d36d86490d1d 100644
+--- a/tools/perf/util/jitdump.c
++++ b/tools/perf/util/jitdump.c
+@@ -396,7 +396,7 @@ static int jit_repipe_code_load(struct jit_buf_desc *jd, union jr_entry *jr)
+ size_t size;
+ u16 idr_size;
+ const char *sym;
+- uint32_t count;
++ uint64_t count;
+ int ret, csize, usize;
+ pid_t pid, tid;
+ struct {
+@@ -419,7 +419,7 @@ static int jit_repipe_code_load(struct jit_buf_desc *jd, union jr_entry *jr)
+ return -1;
+
+ filename = event->mmap2.filename;
+- size = snprintf(filename, PATH_MAX, "%s/jitted-%d-%u.so",
++ size = snprintf(filename, PATH_MAX, "%s/jitted-%d-%" PRIu64 ".so",
+ jd->dir,
+ pid,
+ count);
+@@ -530,7 +530,7 @@ static int jit_repipe_code_move(struct jit_buf_desc *jd, union jr_entry *jr)
+ return -1;
+
+ filename = event->mmap2.filename;
+- size = snprintf(filename, PATH_MAX, "%s/jitted-%d-%"PRIu64,
++ size = snprintf(filename, PATH_MAX, "%s/jitted-%d-%" PRIu64 ".so",
+ jd->dir,
+ pid,
+ jr->move.code_index);
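[Editor's note: the jitdump hunks widen count to u64 and switch the format string accordingly. A 64-bit value must be printed with PRIu64 from <inttypes.h>; "%u" truncates and is undefined behaviour for a 64-bit argument. A standalone sketch:]

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t count = 0x100000001ULL;   /* > 32 bits on purpose */

            printf("jitted-%d-%" PRIu64 ".so\n", 1234, count);
            return 0;
    }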
+diff --git a/tools/perf/util/llvm-utils.c b/tools/perf/util/llvm-utils.c
+index 9f0470ecbca9..35bb082f5782 100644
+--- a/tools/perf/util/llvm-utils.c
++++ b/tools/perf/util/llvm-utils.c
+@@ -231,14 +231,14 @@ static int detect_kbuild_dir(char **kbuild_dir)
+ const char *prefix_dir = "";
+ const char *suffix_dir = "";
+
++ /* _UTSNAME_LENGTH is 65 */
++ char release[128];
++
+ char *autoconf_path;
+
+ int err;
+
+ if (!test_dir) {
+- /* _UTSNAME_LENGTH is 65 */
+- char release[128];
+-
+ err = fetch_kernel_version(NULL, release,
+ sizeof(release));
+ if (err)
^ permalink raw reply related [flat|nested] 21+ messages in thread

* [gentoo-commits] proj/linux-patches:5.3 commit in: /
@ 2019-10-29 12:06 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2019-10-29 12:06 UTC (permalink / raw
To: gentoo-commits
commit: b5c397963982ec8b83950f7e9a2ed6c989fa8678
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Oct 29 12:05:50 2019 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Oct 29 12:05:50 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b5c39796
Linux patch 5.3.8
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1007_linux-5.3.8.patch | 7745 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 7749 insertions(+)
diff --git a/0000_README b/0000_README
index e15ba25..bc9694a 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch: 1006_linux-5.3.7.patch
From: http://www.kernel.org
Desc: Linux 5.3.7
+Patch: 1007_linux-5.3.8.patch
+From: http://www.kernel.org
+Desc: Linux 5.3.8
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1007_linux-5.3.8.patch b/1007_linux-5.3.8.patch
new file mode 100644
index 0000000..8323ef7
--- /dev/null
+++ b/1007_linux-5.3.8.patch
@@ -0,0 +1,7745 @@
+diff --git a/Documentation/arm64/silicon-errata.rst b/Documentation/arm64/silicon-errata.rst
+index 3e57d09246e6..6e52d334bc55 100644
+--- a/Documentation/arm64/silicon-errata.rst
++++ b/Documentation/arm64/silicon-errata.rst
+@@ -107,6 +107,8 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Cavium | ThunderX2 SMMUv3| #126 | N/A |
+ +----------------+-----------------+-----------------+-----------------------------+
++| Cavium | ThunderX2 Core | #219 | CAVIUM_TX2_ERRATUM_219 |
+++----------------+-----------------+-----------------+-----------------------------+
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Freescale/NXP | LS2080A/LS1043A | A-008585 | FSL_ERRATUM_A008585 |
+ +----------------+-----------------+-----------------+-----------------------------+
+diff --git a/Makefile b/Makefile
+index 7a3e659c79ae..445f9488d8ba 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 3
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Bobtail Squid
+
+diff --git a/arch/arm/boot/dts/am335x-icev2.dts b/arch/arm/boot/dts/am335x-icev2.dts
+index 18f70b35da4c..204bccfcc110 100644
+--- a/arch/arm/boot/dts/am335x-icev2.dts
++++ b/arch/arm/boot/dts/am335x-icev2.dts
+@@ -432,7 +432,7 @@
+ pinctrl-0 = <&mmc0_pins_default>;
+ };
+
+-&gpio0 {
++&gpio0_target {
+ /* Do not idle the GPIO used for holding the VTT regulator */
+ ti,no-reset-on-init;
+ ti,no-idle-on-init;
+diff --git a/arch/arm/boot/dts/am33xx-l4.dtsi b/arch/arm/boot/dts/am33xx-l4.dtsi
+index 46849d6ecb3e..3287cf695b5a 100644
+--- a/arch/arm/boot/dts/am33xx-l4.dtsi
++++ b/arch/arm/boot/dts/am33xx-l4.dtsi
+@@ -127,7 +127,7 @@
+ ranges = <0x0 0x5000 0x1000>;
+ };
+
+- target-module@7000 { /* 0x44e07000, ap 14 20.0 */
++ gpio0_target: target-module@7000 { /* 0x44e07000, ap 14 20.0 */
+ compatible = "ti,sysc-omap2", "ti,sysc";
+ ti,hwmods = "gpio1";
+ reg = <0x7000 0x4>,
+@@ -2038,7 +2038,9 @@
+ reg = <0xe000 0x4>,
+ <0xe054 0x4>;
+ reg-names = "rev", "sysc";
+- ti,sysc-midle ;
++ ti,sysc-midle = <SYSC_IDLE_FORCE>,
++ <SYSC_IDLE_NO>,
++ <SYSC_IDLE_SMART>;
+ ti,sysc-sidle = <SYSC_IDLE_FORCE>,
+ <SYSC_IDLE_NO>,
+ <SYSC_IDLE_SMART>;
+diff --git a/arch/arm/boot/dts/am4372.dtsi b/arch/arm/boot/dts/am4372.dtsi
+index 848e2a8884e2..14bbc438055f 100644
+--- a/arch/arm/boot/dts/am4372.dtsi
++++ b/arch/arm/boot/dts/am4372.dtsi
+@@ -337,6 +337,8 @@
+ ti,hwmods = "dss_dispc";
+ clocks = <&disp_clk>;
+ clock-names = "fck";
++
++ max-memory-bandwidth = <230000000>;
+ };
+
+ rfbi: rfbi@4832a800 {
+diff --git a/arch/arm/boot/dts/dra7-l4.dtsi b/arch/arm/boot/dts/dra7-l4.dtsi
+index 21e5914fdd62..099d6fe2a57a 100644
+--- a/arch/arm/boot/dts/dra7-l4.dtsi
++++ b/arch/arm/boot/dts/dra7-l4.dtsi
+@@ -2762,7 +2762,7 @@
+ interrupt-names = "tx", "rx";
+ dmas = <&edma_xbar 129 1>, <&edma_xbar 128 1>;
+ dma-names = "tx", "rx";
+- clocks = <&ipu_clkctrl DRA7_IPU_MCASP1_CLKCTRL 22>,
++ clocks = <&ipu_clkctrl DRA7_IPU_MCASP1_CLKCTRL 0>,
+ <&ipu_clkctrl DRA7_IPU_MCASP1_CLKCTRL 24>,
+ <&ipu_clkctrl DRA7_IPU_MCASP1_CLKCTRL 28>;
+ clock-names = "fck", "ahclkx", "ahclkr";
+@@ -2799,8 +2799,8 @@
+ interrupt-names = "tx", "rx";
+ dmas = <&edma_xbar 131 1>, <&edma_xbar 130 1>;
+ dma-names = "tx", "rx";
+- clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP2_CLKCTRL 22>,
+- <&l4per2_clkctrl DRA7_L4PER2_MCASP2_CLKCTRL 24>,
++ clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP2_CLKCTRL 0>,
++ <&ipu_clkctrl DRA7_IPU_MCASP1_CLKCTRL 24>,
+ <&l4per2_clkctrl DRA7_L4PER2_MCASP2_CLKCTRL 28>;
+ clock-names = "fck", "ahclkx", "ahclkr";
+ status = "disabled";
+@@ -2818,9 +2818,8 @@
+ <SYSC_IDLE_SMART>;
+ /* Domains (P, C): l4per_pwrdm, l4per2_clkdm */
+ clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP3_CLKCTRL 0>,
+- <&l4per2_clkctrl DRA7_L4PER2_MCASP3_CLKCTRL 24>,
+- <&l4per2_clkctrl DRA7_L4PER2_MCASP3_CLKCTRL 28>;
+- clock-names = "fck", "ahclkx", "ahclkr";
++ <&l4per2_clkctrl DRA7_L4PER2_MCASP3_CLKCTRL 24>;
++ clock-names = "fck", "ahclkx";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0x0 0x68000 0x2000>,
+@@ -2836,7 +2835,7 @@
+ interrupt-names = "tx", "rx";
+ dmas = <&edma_xbar 133 1>, <&edma_xbar 132 1>;
+ dma-names = "tx", "rx";
+- clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP3_CLKCTRL 22>,
++ clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP3_CLKCTRL 0>,
+ <&l4per2_clkctrl DRA7_L4PER2_MCASP3_CLKCTRL 24>;
+ clock-names = "fck", "ahclkx";
+ status = "disabled";
+@@ -2854,9 +2853,8 @@
+ <SYSC_IDLE_SMART>;
+ /* Domains (P, C): l4per_pwrdm, l4per2_clkdm */
+ clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP4_CLKCTRL 0>,
+- <&l4per2_clkctrl DRA7_L4PER2_MCASP4_CLKCTRL 24>,
+- <&l4per2_clkctrl DRA7_L4PER2_MCASP4_CLKCTRL 28>;
+- clock-names = "fck", "ahclkx", "ahclkr";
++ <&l4per2_clkctrl DRA7_L4PER2_MCASP4_CLKCTRL 24>;
++ clock-names = "fck", "ahclkx";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0x0 0x6c000 0x2000>,
+@@ -2872,7 +2870,7 @@
+ interrupt-names = "tx", "rx";
+ dmas = <&edma_xbar 135 1>, <&edma_xbar 134 1>;
+ dma-names = "tx", "rx";
+- clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP4_CLKCTRL 22>,
++ clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP4_CLKCTRL 0>,
+ <&l4per2_clkctrl DRA7_L4PER2_MCASP4_CLKCTRL 24>;
+ clock-names = "fck", "ahclkx";
+ status = "disabled";
+@@ -2890,9 +2888,8 @@
+ <SYSC_IDLE_SMART>;
+ /* Domains (P, C): l4per_pwrdm, l4per2_clkdm */
+ clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP5_CLKCTRL 0>,
+- <&l4per2_clkctrl DRA7_L4PER2_MCASP5_CLKCTRL 24>,
+- <&l4per2_clkctrl DRA7_L4PER2_MCASP5_CLKCTRL 28>;
+- clock-names = "fck", "ahclkx", "ahclkr";
++ <&l4per2_clkctrl DRA7_L4PER2_MCASP5_CLKCTRL 24>;
++ clock-names = "fck", "ahclkx";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0x0 0x70000 0x2000>,
+@@ -2908,7 +2905,7 @@
+ interrupt-names = "tx", "rx";
+ dmas = <&edma_xbar 137 1>, <&edma_xbar 136 1>;
+ dma-names = "tx", "rx";
+- clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP5_CLKCTRL 22>,
++ clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP5_CLKCTRL 0>,
+ <&l4per2_clkctrl DRA7_L4PER2_MCASP5_CLKCTRL 24>;
+ clock-names = "fck", "ahclkx";
+ status = "disabled";
+@@ -2926,9 +2923,8 @@
+ <SYSC_IDLE_SMART>;
+ /* Domains (P, C): l4per_pwrdm, l4per2_clkdm */
+ clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP6_CLKCTRL 0>,
+- <&l4per2_clkctrl DRA7_L4PER2_MCASP6_CLKCTRL 24>,
+- <&l4per2_clkctrl DRA7_L4PER2_MCASP6_CLKCTRL 28>;
+- clock-names = "fck", "ahclkx", "ahclkr";
++ <&l4per2_clkctrl DRA7_L4PER2_MCASP6_CLKCTRL 24>;
++ clock-names = "fck", "ahclkx";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0x0 0x74000 0x2000>,
+@@ -2944,7 +2940,7 @@
+ interrupt-names = "tx", "rx";
+ dmas = <&edma_xbar 139 1>, <&edma_xbar 138 1>;
+ dma-names = "tx", "rx";
+- clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP6_CLKCTRL 22>,
++ clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP6_CLKCTRL 0>,
+ <&l4per2_clkctrl DRA7_L4PER2_MCASP6_CLKCTRL 24>;
+ clock-names = "fck", "ahclkx";
+ status = "disabled";
+@@ -2962,9 +2958,8 @@
+ <SYSC_IDLE_SMART>;
+ /* Domains (P, C): l4per_pwrdm, l4per2_clkdm */
+ clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP7_CLKCTRL 0>,
+- <&l4per2_clkctrl DRA7_L4PER2_MCASP7_CLKCTRL 24>,
+- <&l4per2_clkctrl DRA7_L4PER2_MCASP7_CLKCTRL 28>;
+- clock-names = "fck", "ahclkx", "ahclkr";
++ <&l4per2_clkctrl DRA7_L4PER2_MCASP7_CLKCTRL 24>;
++ clock-names = "fck", "ahclkx";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0x0 0x78000 0x2000>,
+@@ -2980,7 +2975,7 @@
+ interrupt-names = "tx", "rx";
+ dmas = <&edma_xbar 141 1>, <&edma_xbar 140 1>;
+ dma-names = "tx", "rx";
+- clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP7_CLKCTRL 22>,
++ clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP7_CLKCTRL 0>,
+ <&l4per2_clkctrl DRA7_L4PER2_MCASP7_CLKCTRL 24>;
+ clock-names = "fck", "ahclkx";
+ status = "disabled";
+@@ -2998,9 +2993,8 @@
+ <SYSC_IDLE_SMART>;
+ /* Domains (P, C): l4per_pwrdm, l4per2_clkdm */
+ clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP8_CLKCTRL 0>,
+- <&l4per2_clkctrl DRA7_L4PER2_MCASP8_CLKCTRL 24>,
+- <&l4per2_clkctrl DRA7_L4PER2_MCASP8_CLKCTRL 28>;
+- clock-names = "fck", "ahclkx", "ahclkr";
++ <&l4per2_clkctrl DRA7_L4PER2_MCASP8_CLKCTRL 24>;
++ clock-names = "fck", "ahclkx";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0x0 0x7c000 0x2000>,
+@@ -3016,7 +3010,7 @@
+ interrupt-names = "tx", "rx";
+ dmas = <&edma_xbar 143 1>, <&edma_xbar 142 1>;
+ dma-names = "tx", "rx";
+- clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP8_CLKCTRL 22>,
++ clocks = <&l4per2_clkctrl DRA7_L4PER2_MCASP8_CLKCTRL 0>,
+ <&l4per2_clkctrl DRA7_L4PER2_MCASP8_CLKCTRL 24>;
+ clock-names = "fck", "ahclkx";
+ status = "disabled";
+diff --git a/arch/arm/mach-omap2/omap_hwmod_33xx_43xx_ipblock_data.c b/arch/arm/mach-omap2/omap_hwmod_33xx_43xx_ipblock_data.c
+index adb6271f819b..7773876d165f 100644
+--- a/arch/arm/mach-omap2/omap_hwmod_33xx_43xx_ipblock_data.c
++++ b/arch/arm/mach-omap2/omap_hwmod_33xx_43xx_ipblock_data.c
+@@ -811,7 +811,8 @@ static struct omap_hwmod_class_sysconfig am33xx_timer_sysc = {
+ .rev_offs = 0x0000,
+ .sysc_offs = 0x0010,
+ .syss_offs = 0x0014,
+- .sysc_flags = (SYSC_HAS_SIDLEMODE | SYSC_HAS_SOFTRESET),
++ .sysc_flags = SYSC_HAS_SIDLEMODE | SYSC_HAS_SOFTRESET |
++ SYSC_HAS_RESET_STATUS,
+ .idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
+ SIDLE_SMART_WKUP),
+ .sysc_fields = &omap_hwmod_sysc_type2,
+diff --git a/arch/arm/mach-omap2/omap_hwmod_33xx_data.c b/arch/arm/mach-omap2/omap_hwmod_33xx_data.c
+index c965af275e34..81d9912f17c8 100644
+--- a/arch/arm/mach-omap2/omap_hwmod_33xx_data.c
++++ b/arch/arm/mach-omap2/omap_hwmod_33xx_data.c
+@@ -231,8 +231,9 @@ static struct omap_hwmod am33xx_control_hwmod = {
+ static struct omap_hwmod_class_sysconfig lcdc_sysc = {
+ .rev_offs = 0x0,
+ .sysc_offs = 0x54,
+- .sysc_flags = (SYSC_HAS_SIDLEMODE | SYSC_HAS_MIDLEMODE),
+- .idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART),
++ .sysc_flags = SYSC_HAS_SIDLEMODE | SYSC_HAS_MIDLEMODE,
++ .idlemodes = SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
++ MSTANDBY_FORCE | MSTANDBY_NO | MSTANDBY_SMART,
+ .sysc_fields = &omap_hwmod_sysc_type2,
+ };
+
+diff --git a/arch/arm/mach-omap2/pm.c b/arch/arm/mach-omap2/pm.c
+index 1fde1bf53fb6..7ac9af56762d 100644
+--- a/arch/arm/mach-omap2/pm.c
++++ b/arch/arm/mach-omap2/pm.c
+@@ -74,83 +74,6 @@ int omap_pm_clkdms_setup(struct clockdomain *clkdm, void *unused)
+ return 0;
+ }
+
+-/*
+- * This API is to be called during init to set the various voltage
+- * domains to the voltage as per the opp table. Typically we boot up
+- * at the nominal voltage. So this function finds out the rate of
+- * the clock associated with the voltage domain, finds out the correct
+- * opp entry and sets the voltage domain to the voltage specified
+- * in the opp entry
+- */
+-static int __init omap2_set_init_voltage(char *vdd_name, char *clk_name,
+- const char *oh_name)
+-{
+- struct voltagedomain *voltdm;
+- struct clk *clk;
+- struct dev_pm_opp *opp;
+- unsigned long freq, bootup_volt;
+- struct device *dev;
+-
+- if (!vdd_name || !clk_name || !oh_name) {
+- pr_err("%s: invalid parameters\n", __func__);
+- goto exit;
+- }
+-
+- if (!strncmp(oh_name, "mpu", 3))
+- /*
+- * All current OMAPs share voltage rail and clock
+- * source, so CPU0 is used to represent the MPU-SS.
+- */
+- dev = get_cpu_device(0);
+- else
+- dev = omap_device_get_by_hwmod_name(oh_name);
+-
+- if (IS_ERR(dev)) {
+- pr_err("%s: Unable to get dev pointer for hwmod %s\n",
+- __func__, oh_name);
+- goto exit;
+- }
+-
+- voltdm = voltdm_lookup(vdd_name);
+- if (!voltdm) {
+- pr_err("%s: unable to get vdd pointer for vdd_%s\n",
+- __func__, vdd_name);
+- goto exit;
+- }
+-
+- clk = clk_get(NULL, clk_name);
+- if (IS_ERR(clk)) {
+- pr_err("%s: unable to get clk %s\n", __func__, clk_name);
+- goto exit;
+- }
+-
+- freq = clk_get_rate(clk);
+- clk_put(clk);
+-
+- opp = dev_pm_opp_find_freq_ceil(dev, &freq);
+- if (IS_ERR(opp)) {
+- pr_err("%s: unable to find boot up OPP for vdd_%s\n",
+- __func__, vdd_name);
+- goto exit;
+- }
+-
+- bootup_volt = dev_pm_opp_get_voltage(opp);
+- dev_pm_opp_put(opp);
+-
+- if (!bootup_volt) {
+- pr_err("%s: unable to find voltage corresponding to the bootup OPP for vdd_%s\n",
+- __func__, vdd_name);
+- goto exit;
+- }
+-
+- voltdm_scale(voltdm, bootup_volt);
+- return 0;
+-
+-exit:
+- pr_err("%s: unable to set vdd_%s\n", __func__, vdd_name);
+- return -EINVAL;
+-}
+-
+ #ifdef CONFIG_SUSPEND
+ static int omap_pm_enter(suspend_state_t suspend_state)
+ {
+@@ -208,25 +131,6 @@ void omap_common_suspend_init(void *pm_suspend)
+ }
+ #endif /* CONFIG_SUSPEND */
+
+-static void __init omap3_init_voltages(void)
+-{
+- if (!soc_is_omap34xx())
+- return;
+-
+- omap2_set_init_voltage("mpu_iva", "dpll1_ck", "mpu");
+- omap2_set_init_voltage("core", "l3_ick", "l3_main");
+-}
+-
+-static void __init omap4_init_voltages(void)
+-{
+- if (!soc_is_omap44xx())
+- return;
+-
+- omap2_set_init_voltage("mpu", "dpll_mpu_ck", "mpu");
+- omap2_set_init_voltage("core", "l3_div_ck", "l3_main_1");
+- omap2_set_init_voltage("iva", "dpll_iva_m5x2_ck", "iva");
+-}
+-
+ int __maybe_unused omap_pm_nop_init(void)
+ {
+ return 0;
+@@ -246,10 +150,6 @@ int __init omap2_common_pm_late_init(void)
+ omap4_twl_init();
+ omap_voltage_late_init();
+
+- /* Initialize the voltages */
+- omap3_init_voltages();
+- omap4_init_voltages();
+-
+ /* Smartreflex device init */
+ omap_devinit_smartreflex();
+
+diff --git a/arch/arm/xen/efi.c b/arch/arm/xen/efi.c
+index d687a73044bf..cb2aaf98e243 100644
+--- a/arch/arm/xen/efi.c
++++ b/arch/arm/xen/efi.c
+@@ -19,7 +19,9 @@ void __init xen_efi_runtime_setup(void)
+ efi.get_variable = xen_efi_get_variable;
+ efi.get_next_variable = xen_efi_get_next_variable;
+ efi.set_variable = xen_efi_set_variable;
++ efi.set_variable_nonblocking = xen_efi_set_variable;
+ efi.query_variable_info = xen_efi_query_variable_info;
++ efi.query_variable_info_nonblocking = xen_efi_query_variable_info;
+ efi.update_capsule = xen_efi_update_capsule;
+ efi.query_capsule_caps = xen_efi_query_capsule_caps;
+ efi.get_next_high_mono_count = xen_efi_get_next_high_mono_count;
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 3adcec05b1f6..e8cf56283871 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -601,6 +601,23 @@ config CAVIUM_ERRATUM_30115
+
+ If unsure, say Y.
+
++config CAVIUM_TX2_ERRATUM_219
++ bool "Cavium ThunderX2 erratum 219: PRFM between TTBR change and ISB fails"
++ default y
++ help
++ On Cavium ThunderX2, a load, store or prefetch instruction between a
++ TTBR update and the corresponding context synchronizing operation can
++ cause a spurious Data Abort to be delivered to any hardware thread in
++ the CPU core.
++
++ Work around the issue by avoiding the problematic code sequence and
++ trapping KVM guest TTBRx_EL1 writes to EL2 when SMT is enabled. The
++ trap handler performs the corresponding register access, skips the
++ instruction and ensures context synchronization by virtue of the
++ exception return.
++
++ If unsure, say Y.
++
+ config QCOM_FALKOR_ERRATUM_1003
+ bool "Falkor E1003: Incorrect translation due to ASID change"
+ default y
+diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
+index f19fe4b9acc4..ac1dbca3d0cd 100644
+--- a/arch/arm64/include/asm/cpucaps.h
++++ b/arch/arm64/include/asm/cpucaps.h
+@@ -52,7 +52,9 @@
+ #define ARM64_HAS_IRQ_PRIO_MASKING 42
+ #define ARM64_HAS_DCPODP 43
+ #define ARM64_WORKAROUND_1463225 44
++#define ARM64_WORKAROUND_CAVIUM_TX2_219_TVM 45
++#define ARM64_WORKAROUND_CAVIUM_TX2_219_PRFM 46
+
+-#define ARM64_NCAPS 45
++#define ARM64_NCAPS 47
+
+ #endif /* __ASM_CPUCAPS_H */
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 1e43ba5c79b7..27b4a973f16d 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -12,6 +12,7 @@
+ #include <asm/cpu.h>
+ #include <asm/cputype.h>
+ #include <asm/cpufeature.h>
++#include <asm/smp_plat.h>
+
+ static bool __maybe_unused
+ is_affected_midr_range(const struct arm64_cpu_capabilities *entry, int scope)
+@@ -623,6 +624,30 @@ check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
+ return (need_wa > 0);
+ }
+
++static const __maybe_unused struct midr_range tx2_family_cpus[] = {
++ MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
++ MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
++ {},
++};
++
++static bool __maybe_unused
++needs_tx2_tvm_workaround(const struct arm64_cpu_capabilities *entry,
++ int scope)
++{
++ int i;
++
++ if (!is_affected_midr_range_list(entry, scope) ||
++ !is_hyp_mode_available())
++ return false;
++
++ for_each_possible_cpu(i) {
++ if (MPIDR_AFFINITY_LEVEL(cpu_logical_map(i), 0) != 0)
++ return true;
++ }
++
++ return false;
++}
++
+ #ifdef CONFIG_HARDEN_EL2_VECTORS
+
+ static const struct midr_range arm64_harden_el2_vectors[] = {
+@@ -851,6 +876,19 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+ .matches = has_cortex_a76_erratum_1463225,
+ },
++ {
++ .desc = "Cavium ThunderX2 erratum 219 (PRFM removal)",
++ .capability = ARM64_WORKAROUND_CAVIUM_TX2_219_PRFM,
++ ERRATA_MIDR_RANGE_LIST(tx2_family_cpus),
++ },
++#endif
++#ifdef CONFIG_CAVIUM_TX2_ERRATUM_219
++ {
++ .desc = "Cavium ThunderX2 erratum 219 (KVM guest sysreg trapping)",
++ .capability = ARM64_WORKAROUND_CAVIUM_TX2_219_TVM,
++ ERRATA_MIDR_RANGE_LIST(tx2_family_cpus),
++ .matches = needs_tx2_tvm_workaround,
++ },
+ #endif
+ {
+ }
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index 84a822748c84..109894bd3194 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -1070,7 +1070,9 @@ alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
+ #else
+ ldr x30, =vectors
+ #endif
++alternative_if_not ARM64_WORKAROUND_CAVIUM_TX2_219_PRFM
+ prfm plil1strm, [x30, #(1b - tramp_vectors)]
++alternative_else_nop_endif
+ msr vbar_el1, x30
+ add x30, x30, #(1b - tramp_vectors)
+ isb
+diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
+index adaf266d8de8..7fdc821ebb78 100644
+--- a/arch/arm64/kvm/hyp/switch.c
++++ b/arch/arm64/kvm/hyp/switch.c
+@@ -124,6 +124,9 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
+ {
+ u64 hcr = vcpu->arch.hcr_el2;
+
++ if (cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM))
++ hcr |= HCR_TVM;
++
+ write_sysreg(hcr, hcr_el2);
+
+ if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE))
+@@ -174,8 +177,10 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
+ * the crucial bit is "On taking a vSError interrupt,
+ * HCR_EL2.VSE is cleared to 0."
+ */
+- if (vcpu->arch.hcr_el2 & HCR_VSE)
+- vcpu->arch.hcr_el2 = read_sysreg(hcr_el2);
++ if (vcpu->arch.hcr_el2 & HCR_VSE) {
++ vcpu->arch.hcr_el2 &= ~HCR_VSE;
++ vcpu->arch.hcr_el2 |= read_sysreg(hcr_el2) & HCR_VSE;
++ }
+
+ if (has_vhe())
+ deactivate_traps_vhe();
+@@ -393,6 +398,61 @@ static bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
+ return true;
+ }
+
++static bool __hyp_text handle_tx2_tvm(struct kvm_vcpu *vcpu)
++{
++ u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_hsr(vcpu));
++ int rt = kvm_vcpu_sys_get_rt(vcpu);
++ u64 val = vcpu_get_reg(vcpu, rt);
++
++ /*
++ * The normal sysreg handling code expects to see the traps,
++ * so let's not do anything here.
++ */
++ if (vcpu->arch.hcr_el2 & HCR_TVM)
++ return false;
++
++ switch (sysreg) {
++ case SYS_SCTLR_EL1:
++ write_sysreg_el1(val, SYS_SCTLR);
++ break;
++ case SYS_TTBR0_EL1:
++ write_sysreg_el1(val, SYS_TTBR0);
++ break;
++ case SYS_TTBR1_EL1:
++ write_sysreg_el1(val, SYS_TTBR1);
++ break;
++ case SYS_TCR_EL1:
++ write_sysreg_el1(val, SYS_TCR);
++ break;
++ case SYS_ESR_EL1:
++ write_sysreg_el1(val, SYS_ESR);
++ break;
++ case SYS_FAR_EL1:
++ write_sysreg_el1(val, SYS_FAR);
++ break;
++ case SYS_AFSR0_EL1:
++ write_sysreg_el1(val, SYS_AFSR0);
++ break;
++ case SYS_AFSR1_EL1:
++ write_sysreg_el1(val, SYS_AFSR1);
++ break;
++ case SYS_MAIR_EL1:
++ write_sysreg_el1(val, SYS_MAIR);
++ break;
++ case SYS_AMAIR_EL1:
++ write_sysreg_el1(val, SYS_AMAIR);
++ break;
++ case SYS_CONTEXTIDR_EL1:
++ write_sysreg_el1(val, SYS_CONTEXTIDR);
++ break;
++ default:
++ return false;
++ }
++
++ __kvm_skip_instr(vcpu);
++ return true;
++}
++
+ /*
+ * Return true when we were able to fixup the guest exit and should return to
+ * the guest, false when we should restore the host state and return to the
+@@ -412,6 +472,11 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
+ if (*exit_code != ARM_EXCEPTION_TRAP)
+ goto exit;
+
++ if (cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) &&
++ kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 &&
++ handle_tx2_tvm(vcpu))
++ return true;
++
+ /*
+ * We trap the first access to the FP/SIMD to save the host context
+ * and restore the guest context lazily.
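+
The TVM trick above works because setting HCR_EL2.TVM makes every guest MSR
write to the stage-1 translation registers arrive at EL2 as a sysreg trap
(EC 0x18, ESR_ELx_EC_SYS64), whose ISS names both the register and the
transfer GPR. A rough sketch of the decode that kvm_vcpu_get_hsr() and
esr_sys64_to_sysreg() wrap, with illustrative macro names (field positions
per the ARMv8 ARM):

/* Illustrative only: ISS layout for an EC 0x18 (SYS64) trap. */
#define SYS64_ISS_RT(esr)        (((esr) >> 5) & 0x1f)  /* Rt in bits [9:5] */
#define SYS64_ISS_IS_WRITE(esr)  (!((esr) & 0x1))       /* bit 0: 0 is MSR  */

handle_tx2_tvm() only ever sees writes, since TVM (unlike TRVM) traps MSR but
not MRS; the Rt field tells it which guest GPR holds the value to forward to
the real _EL1 register before skipping the instruction.
+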
+diff --git a/arch/mips/boot/dts/qca/ar9331.dtsi b/arch/mips/boot/dts/qca/ar9331.dtsi
+index 63a9f33aa43e..5cfc9d347826 100644
+--- a/arch/mips/boot/dts/qca/ar9331.dtsi
++++ b/arch/mips/boot/dts/qca/ar9331.dtsi
+@@ -99,7 +99,7 @@
+
+ miscintc: interrupt-controller@18060010 {
+ compatible = "qca,ar7240-misc-intc";
+- reg = <0x18060010 0x4>;
++ reg = <0x18060010 0x8>;
+
+ interrupt-parent = <&cpuintc>;
+ interrupts = <6>;
+diff --git a/arch/mips/loongson64/common/serial.c b/arch/mips/loongson64/common/serial.c
+index ffefc1cb2612..98c3a7feb10f 100644
+--- a/arch/mips/loongson64/common/serial.c
++++ b/arch/mips/loongson64/common/serial.c
+@@ -110,7 +110,7 @@ static int __init serial_init(void)
+ }
+ module_init(serial_init);
+
+-static void __init serial_exit(void)
++static void __exit serial_exit(void)
+ {
+ platform_device_unregister(&uart8250_device);
+ }
+diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
+index bece1264d1c5..b0f70006bd85 100644
+--- a/arch/mips/mm/tlbex.c
++++ b/arch/mips/mm/tlbex.c
+@@ -655,6 +655,13 @@ static void build_restore_pagemask(u32 **p, struct uasm_reloc **r,
+ int restore_scratch)
+ {
+ if (restore_scratch) {
++ /*
++ * Ensure the MFC0 below observes the value written to the
++ * KScratch register by the prior MTC0.
++ */
++ if (scratch_reg >= 0)
++ uasm_i_ehb(p);
++
+ /* Reset default page size */
+ if (PM_DEFAULT_MASK >> 16) {
+ uasm_i_lui(p, tmp, PM_DEFAULT_MASK >> 16);
+@@ -669,12 +676,10 @@ static void build_restore_pagemask(u32 **p, struct uasm_reloc **r,
+ uasm_i_mtc0(p, 0, C0_PAGEMASK);
+ uasm_il_b(p, r, lid);
+ }
+- if (scratch_reg >= 0) {
+- uasm_i_ehb(p);
++ if (scratch_reg >= 0)
+ UASM_i_MFC0(p, 1, c0_kscratch(), scratch_reg);
+- } else {
++ else
+ UASM_i_LW(p, 1, scratchpad_offset(0), 0);
+- }
+ } else {
+ /* Reset default page size */
+ if (PM_DEFAULT_MASK >> 16) {
+@@ -923,6 +928,10 @@ build_get_pgd_vmalloc64(u32 **p, struct uasm_label **l, struct uasm_reloc **r,
+ }
+ if (mode != not_refill && check_for_high_segbits) {
+ uasm_l_large_segbits_fault(l, *p);
++
++ if (mode == refill_scratch && scratch_reg >= 0)
++ uasm_i_ehb(p);
++
+ /*
+ * We get here if we are an xsseg address, or if we are
+ * an xuseg address above (PGDIR_SHIFT+PGDIR_BITS) boundary.
+@@ -941,12 +950,10 @@ build_get_pgd_vmalloc64(u32 **p, struct uasm_label **l, struct uasm_reloc **r,
+ uasm_i_jr(p, ptr);
+
+ if (mode == refill_scratch) {
+- if (scratch_reg >= 0) {
+- uasm_i_ehb(p);
++ if (scratch_reg >= 0)
+ UASM_i_MFC0(p, 1, c0_kscratch(), scratch_reg);
+- } else {
++ else
+ UASM_i_LW(p, 1, scratchpad_offset(0), 0);
+- }
+ } else {
+ uasm_i_nop(p);
+ }
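+
The tlbex change repositions the uasm-emitted ehb so it always sits between
the MTC0 that parks a value in a KScratch register and the MFC0 that reads it
back on the fault path. A minimal inline-asm sketch of the same hazard,
assuming a MIPS32r2+ CPU with KScratch1 at CP0 register 31, select 2 (the
exact select is implementation-chosen):

/*
 * Sketch only: without the ehb, the mfc0 can overlap the mtc0 in the
 * pipeline and return the KScratch register's stale contents.
 */
static inline unsigned long kscratch_roundtrip(unsigned long val)
{
        unsigned long out;

        __asm__ __volatile__(
        "       mtc0    %1, $31, 2      \n"   /* write KScratch1          */
        "       ehb                     \n"   /* execution hazard barrier */
        "       mfc0    %0, $31, 2      \n"   /* read-back now ordered    */
        : "=r" (out)
        : "r" (val));
        return out;
}
+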
+diff --git a/arch/parisc/mm/ioremap.c b/arch/parisc/mm/ioremap.c
+index 92a9b5f12f98..f29f682352f0 100644
+--- a/arch/parisc/mm/ioremap.c
++++ b/arch/parisc/mm/ioremap.c
+@@ -3,7 +3,7 @@
+ * arch/parisc/mm/ioremap.c
+ *
+ * (C) Copyright 1995 1996 Linus Torvalds
+- * (C) Copyright 2001-2006 Helge Deller <deller@gmx.de>
++ * (C) Copyright 2001-2019 Helge Deller <deller@gmx.de>
+ * (C) Copyright 2005 Kyle McMartin <kyle@parisc-linux.org>
+ */
+
+@@ -84,7 +84,7 @@ void __iomem * __ioremap(unsigned long phys_addr, unsigned long size, unsigned l
+ addr = (void __iomem *) area->addr;
+ if (ioremap_page_range((unsigned long)addr, (unsigned long)addr + size,
+ phys_addr, pgprot)) {
+- vfree(addr);
++ vunmap(addr);
+ return NULL;
+ }
+
+@@ -92,9 +92,11 @@ void __iomem * __ioremap(unsigned long phys_addr, unsigned long size, unsigned l
+ }
+ EXPORT_SYMBOL(__ioremap);
+
+-void iounmap(const volatile void __iomem *addr)
++void iounmap(const volatile void __iomem *io_addr)
+ {
+- if (addr > high_memory)
+- return vfree((void *) (PAGE_MASK & (unsigned long __force) addr));
++ unsigned long addr = (unsigned long)io_addr & PAGE_MASK;
++
++ if (is_vmalloc_addr((void *)addr))
++ vunmap((void *)addr);
+ }
+ EXPORT_SYMBOL(iounmap);
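+
The parisc switch from vfree() to vunmap() follows from what __ioremap()
actually builds: it grabs a vmalloc-area window and writes PTEs pointing at
I/O space, but never allocates backing pages, so there is nothing for
vfree() to free. A condensed sketch of that lifecycle with the corrected
error path (pgprot simplified; the real code derives it from the caller's
flags):

void __iomem *ioremap_sketch(phys_addr_t phys, unsigned long size)
{
        struct vm_struct *area = get_vm_area(size, VM_IOREMAP);
        void *addr;

        if (!area)
                return NULL;
        addr = area->addr;
        if (ioremap_page_range((unsigned long)addr,
                               (unsigned long)addr + size,
                               phys, PAGE_KERNEL)) {
                /*
                 * Only PTEs were written; vunmap() tears down the
                 * mapping and the vm_struct, whereas vfree() would
                 * also try to free pages ioremap never allocated.
                 */
                vunmap(addr);
                return NULL;
        }
        return (void __iomem *)addr;
}
+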
+diff --git a/arch/powerpc/kvm/book3s_xive.c b/arch/powerpc/kvm/book3s_xive.c
+index 591bfb4bfd0f..a3f9c665bb5b 100644
+--- a/arch/powerpc/kvm/book3s_xive.c
++++ b/arch/powerpc/kvm/book3s_xive.c
+@@ -1217,6 +1217,7 @@ int kvmppc_xive_connect_vcpu(struct kvm_device *dev,
+ struct kvmppc_xive *xive = dev->private;
+ struct kvmppc_xive_vcpu *xc;
+ int i, r = -EBUSY;
++ u32 vp_id;
+
+ pr_devel("connect_vcpu(cpu=%d)\n", cpu);
+
+@@ -1228,25 +1229,32 @@ int kvmppc_xive_connect_vcpu(struct kvm_device *dev,
+ return -EPERM;
+ if (vcpu->arch.irq_type != KVMPPC_IRQ_DEFAULT)
+ return -EBUSY;
+- if (kvmppc_xive_find_server(vcpu->kvm, cpu)) {
+- pr_devel("Duplicate !\n");
+- return -EEXIST;
+- }
+ if (cpu >= (KVM_MAX_VCPUS * vcpu->kvm->arch.emul_smt_mode)) {
+ pr_devel("Out of bounds !\n");
+ return -EINVAL;
+ }
+- xc = kzalloc(sizeof(*xc), GFP_KERNEL);
+- if (!xc)
+- return -ENOMEM;
+
+ /* We need to synchronize with queue provisioning */
+ mutex_lock(&xive->lock);
++
++ vp_id = kvmppc_xive_vp(xive, cpu);
++ if (kvmppc_xive_vp_in_use(xive->kvm, vp_id)) {
++ pr_devel("Duplicate !\n");
++ r = -EEXIST;
++ goto bail;
++ }
++
++ xc = kzalloc(sizeof(*xc), GFP_KERNEL);
++ if (!xc) {
++ r = -ENOMEM;
++ goto bail;
++ }
++
+ vcpu->arch.xive_vcpu = xc;
+ xc->xive = xive;
+ xc->vcpu = vcpu;
+ xc->server_num = cpu;
+- xc->vp_id = kvmppc_xive_vp(xive, cpu);
++ xc->vp_id = vp_id;
+ xc->mfrr = 0xff;
+ xc->valid = true;
+
+diff --git a/arch/powerpc/kvm/book3s_xive.h b/arch/powerpc/kvm/book3s_xive.h
+index 955b820ffd6d..fe3ed50e0818 100644
+--- a/arch/powerpc/kvm/book3s_xive.h
++++ b/arch/powerpc/kvm/book3s_xive.h
+@@ -220,6 +220,18 @@ static inline u32 kvmppc_xive_vp(struct kvmppc_xive *xive, u32 server)
+ return xive->vp_base + kvmppc_pack_vcpu_id(xive->kvm, server);
+ }
+
++static inline bool kvmppc_xive_vp_in_use(struct kvm *kvm, u32 vp_id)
++{
++ struct kvm_vcpu *vcpu = NULL;
++ int i;
++
++ kvm_for_each_vcpu(i, vcpu, kvm) {
++ if (vcpu->arch.xive_vcpu && vp_id == vcpu->arch.xive_vcpu->vp_id)
++ return true;
++ }
++ return false;
++}
++
+ /*
+ * Mapping between guest priorities and host priorities
+ * is as follows.
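+
Note that the connect-vcpu fix is as much about lock scope as about the new
helper: the duplicate-vp_id test and the allocation that publishes the vp_id
now both run under xive->lock, where the old code checked before taking the
lock and let two racing connects pass the check together. Reduced to its
shape (xive_vp_in_use() and publish_vp() stand in for the real steps):

static int connect_once(struct kvmppc_xive *xive, struct kvm *kvm,
                        u32 vp_id)
{
        int r = 0;

        mutex_lock(&xive->lock);
        if (xive_vp_in_use(kvm, vp_id)) {       /* check ...            */
                r = -EEXIST;
                goto out;
        }
        r = publish_vp(xive, vp_id);            /* ... and publish in
                                                 * the same critical
                                                 * section              */
out:
        mutex_unlock(&xive->lock);
        return r;
}
+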
+diff --git a/arch/powerpc/kvm/book3s_xive_native.c b/arch/powerpc/kvm/book3s_xive_native.c
+index 248c1ea9e788..78b906ffa0d2 100644
+--- a/arch/powerpc/kvm/book3s_xive_native.c
++++ b/arch/powerpc/kvm/book3s_xive_native.c
+@@ -106,6 +106,7 @@ int kvmppc_xive_native_connect_vcpu(struct kvm_device *dev,
+ struct kvmppc_xive *xive = dev->private;
+ struct kvmppc_xive_vcpu *xc = NULL;
+ int rc;
++ u32 vp_id;
+
+ pr_devel("native_connect_vcpu(server=%d)\n", server_num);
+
+@@ -124,7 +125,8 @@ int kvmppc_xive_native_connect_vcpu(struct kvm_device *dev,
+
+ mutex_lock(&xive->lock);
+
+- if (kvmppc_xive_find_server(vcpu->kvm, server_num)) {
++ vp_id = kvmppc_xive_vp(xive, server_num);
++ if (kvmppc_xive_vp_in_use(xive->kvm, vp_id)) {
+ pr_devel("Duplicate !\n");
+ rc = -EEXIST;
+ goto bail;
+@@ -141,7 +143,7 @@ int kvmppc_xive_native_connect_vcpu(struct kvm_device *dev,
+ xc->vcpu = vcpu;
+ xc->server_num = server_num;
+
+- xc->vp_id = kvmppc_xive_vp(xive, server_num);
++ xc->vp_id = vp_id;
+ xc->valid = true;
+ vcpu->arch.irq_type = KVMPPC_IRQ_XIVE;
+
+diff --git a/arch/riscv/include/asm/asm.h b/arch/riscv/include/asm/asm.h
+index 5a02b7d50940..9c992a88d858 100644
+--- a/arch/riscv/include/asm/asm.h
++++ b/arch/riscv/include/asm/asm.h
+@@ -22,6 +22,7 @@
+
+ #define REG_L __REG_SEL(ld, lw)
+ #define REG_S __REG_SEL(sd, sw)
++#define REG_SC __REG_SEL(sc.d, sc.w)
+ #define SZREG __REG_SEL(8, 4)
+ #define LGREG __REG_SEL(3, 2)
+
+diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
+index 9b60878a4469..2a82e0a5af46 100644
+--- a/arch/riscv/kernel/entry.S
++++ b/arch/riscv/kernel/entry.S
+@@ -98,7 +98,26 @@ _save_context:
+ */
+ .macro RESTORE_ALL
+ REG_L a0, PT_SSTATUS(sp)
+- REG_L a2, PT_SEPC(sp)
++ /*
++ * The current load reservation is effectively part of the processor's
++ * state, in the sense that load reservations cannot be shared between
++ * different hart contexts. We can't actually save and restore a load
++ * reservation, so instead here we clear any existing reservation --
++ * it's always legal for implementations to clear load reservations at
++ * any point (as long as the forward progress guarantee is kept, but
++ * we'll ignore that here).
++ *
++ * Dangling load reservations can be the result of taking a trap in the
++ * middle of an LR/SC sequence, but can also be the result of a taken
++ * forward branch around an SC -- which is how we implement CAS. As a
++ * result we need to clear reservations between the last CAS and the
++ * jump back to the new context. While it is unlikely the store
++ * completes, implementations are allowed to expand reservations to be
++ * arbitrarily large.
++ */
++ REG_L a2, PT_SEPC(sp)
++ REG_SC x0, a2, PT_SEPC(sp)
++
+ csrw CSR_SSTATUS, a0
+ csrw CSR_SEPC, a2
+
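+
The "taken forward branch around an SC" the comment refers to is exactly how
compare-and-swap is composed from LR/SC; a sketch in GCC inline asm (RV64A,
ordering suffixes omitted for brevity):

static inline long cas64_sketch(volatile long *p, long expected,
                                long desired)
{
        long observed, fail;

        __asm__ __volatile__(
        "0:     lr.d    %0, (%2)\n"     /* take the reservation        */
        "       bne     %0, %3, 1f\n"   /* mismatch: branch PAST the
                                         * sc.d, leaving the
                                         * reservation live            */
        "       sc.d    %1, %4, (%2)\n"
        "       bnez    %1, 0b\n"
        "1:\n"
        : "=&r" (observed), "=&r" (fail)
        : "r" (p), "r" (expected), "r" (desired)
        : "memory");
        return observed;
}

The REG_SC to PT_SEPC(sp) in the hunk exploits the flip side: a
store-conditional always clears the current reservation, and since it stores
back the value just loaded into a2 (discarding the success flag into x0),
memory is unchanged whether it succeeds or fails.
+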
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index 42bf939693d3..ed9cd9944d4f 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -11,6 +11,7 @@
+ #include <linux/swap.h>
+ #include <linux/sizes.h>
+ #include <linux/of_fdt.h>
++#include <linux/libfdt.h>
+
+ #include <asm/fixmap.h>
+ #include <asm/tlbflush.h>
+@@ -82,6 +83,8 @@ disable:
+ }
+ #endif /* CONFIG_BLK_DEV_INITRD */
+
++static phys_addr_t dtb_early_pa __initdata;
++
+ void __init setup_bootmem(void)
+ {
+ struct memblock_region *reg;
+@@ -117,7 +120,12 @@ void __init setup_bootmem(void)
+ setup_initrd();
+ #endif /* CONFIG_BLK_DEV_INITRD */
+
+- early_init_fdt_reserve_self();
++ /*
++ * Avoid using early_init_fdt_reserve_self() since __pa() does
++ * not work for DTB pointers that are fixmap addresses
++ */
++ memblock_reserve(dtb_early_pa, fdt_totalsize(dtb_early_va));
++
+ early_init_fdt_scan_reserved_mem();
+ memblock_allow_resize();
+ memblock_dump_all();
+@@ -393,6 +401,8 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
+
+ /* Save pointer to DTB for early FDT parsing */
+ dtb_early_va = (void *)fix_to_virt(FIX_FDT) + (dtb_pa & ~PAGE_MASK);
++ /* Save physical address for memblock reservation */
++ dtb_early_pa = dtb_pa;
+ }
+
+ static void __init setup_vm_final(void)
+diff --git a/arch/s390/boot/startup.c b/arch/s390/boot/startup.c
+index 7b0d05414618..ceeacbeff600 100644
+--- a/arch/s390/boot/startup.c
++++ b/arch/s390/boot/startup.c
+@@ -101,10 +101,18 @@ static void handle_relocs(unsigned long offset)
+ dynsym = (Elf64_Sym *) vmlinux.dynsym_start;
+ for (rela = rela_start; rela < rela_end; rela++) {
+ loc = rela->r_offset + offset;
+- val = rela->r_addend + offset;
++ val = rela->r_addend;
+ r_sym = ELF64_R_SYM(rela->r_info);
+- if (r_sym)
+- val += dynsym[r_sym].st_value;
++ if (r_sym) {
++ if (dynsym[r_sym].st_shndx != SHN_UNDEF)
++ val += dynsym[r_sym].st_value + offset;
++ } else {
++ /*
++ * 0 == undefined symbol table index (STN_UNDEF),
++ * used for R_390_RELATIVE, only add KASLR offset
++ */
++ val += offset;
++ }
+ r_type = ELF64_R_TYPE(rela->r_info);
+ rc = arch_kexec_do_relocs(r_type, (void *) loc, val, 0);
+ if (rc)
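+
Folded out of the loop, the value computation the s390 fix converges on has
three cases rather than the old unconditional addend-plus-offset, which
mis-relocated references to undefined (for example unsatisfied weak)
symbols. A restatement as a helper, not the kernel's actual function:

static unsigned long reloc_value(const Elf64_Rela *rela,
                                 const Elf64_Sym *dynsym,
                                 unsigned long kaslr_offset)
{
        unsigned long val = rela->r_addend;
        unsigned int r_sym = ELF64_R_SYM(rela->r_info);

        if (r_sym) {
                /* defined symbol: its address moves with KASLR */
                if (dynsym[r_sym].st_shndx != SHN_UNDEF)
                        val += dynsym[r_sym].st_value + kaslr_offset;
                /* undefined symbol: keep the plain addend */
        } else {
                /* STN_UNDEF means R_390_RELATIVE: offset only */
                val += kaslr_offset;
        }
        return val;
}
+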
+diff --git a/arch/s390/include/asm/hugetlb.h b/arch/s390/include/asm/hugetlb.h
+index bb59dd964590..de8f0bf5f238 100644
+--- a/arch/s390/include/asm/hugetlb.h
++++ b/arch/s390/include/asm/hugetlb.h
+@@ -12,8 +12,6 @@
+ #include <asm/page.h>
+ #include <asm/pgtable.h>
+
+-
+-#define is_hugepage_only_range(mm, addr, len) 0
+ #define hugetlb_free_pgd_range free_pgd_range
+ #define hugepages_supported() (MACHINE_HAS_EDAT1)
+
+@@ -23,6 +21,13 @@ pte_t huge_ptep_get(pte_t *ptep);
+ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep);
+
++static inline bool is_hugepage_only_range(struct mm_struct *mm,
++ unsigned long addr,
++ unsigned long len)
++{
++ return false;
++}
++
+ /*
+ * If the arch doesn't supply something else, assume that hugepage
+ * size aligned regions are ok without further preparation.
+diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
+index 9b274fcaacb6..70ac23e50cae 100644
+--- a/arch/s390/include/asm/pgtable.h
++++ b/arch/s390/include/asm/pgtable.h
+@@ -1268,7 +1268,8 @@ static inline pte_t *pte_offset(pmd_t *pmd, unsigned long address)
+
+ #define pte_offset_kernel(pmd, address) pte_offset(pmd, address)
+ #define pte_offset_map(pmd, address) pte_offset_kernel(pmd, address)
+-#define pte_unmap(pte) do { } while (0)
++
++static inline void pte_unmap(pte_t *pte) { }
+
+ static inline bool gup_fast_permitted(unsigned long start, unsigned long end)
+ {
+diff --git a/arch/s390/kernel/machine_kexec_reloc.c b/arch/s390/kernel/machine_kexec_reloc.c
+index 3b664cb3ec4d..d5035de9020e 100644
+--- a/arch/s390/kernel/machine_kexec_reloc.c
++++ b/arch/s390/kernel/machine_kexec_reloc.c
+@@ -27,6 +27,7 @@ int arch_kexec_do_relocs(int r_type, void *loc, unsigned long val,
+ *(u32 *)loc = val;
+ break;
+ case R_390_64: /* Direct 64 bit. */
++ case R_390_GLOB_DAT:
+ *(u64 *)loc = val;
+ break;
+ case R_390_PC16: /* PC relative 16 bit. */
+diff --git a/arch/x86/hyperv/hv_apic.c b/arch/x86/hyperv/hv_apic.c
+index 5c056b8aebef..e01078e93dd3 100644
+--- a/arch/x86/hyperv/hv_apic.c
++++ b/arch/x86/hyperv/hv_apic.c
+@@ -260,11 +260,21 @@ void __init hv_apic_init(void)
+ }
+
+ if (ms_hyperv.hints & HV_X64_APIC_ACCESS_RECOMMENDED) {
+- pr_info("Hyper-V: Using MSR based APIC access\n");
++ pr_info("Hyper-V: Using enlightened APIC (%s mode)",
++ x2apic_enabled() ? "x2apic" : "xapic");
++ /*
++ * With x2apic, architectural x2apic MSRs are equivalent to the
++ * respective synthetic MSRs, so there's no need to override
++ * the apic accessors. The only exception is
++ * hv_apic_eoi_write, because it benefits from lazy EOI when
++ * available, but it works for both xapic and x2apic modes.
++ */
+ apic_set_eoi_write(hv_apic_eoi_write);
+- apic->read = hv_apic_read;
+- apic->write = hv_apic_write;
+- apic->icr_write = hv_apic_icr_write;
+- apic->icr_read = hv_apic_icr_read;
++ if (!x2apic_enabled()) {
++ apic->read = hv_apic_read;
++ apic->write = hv_apic_write;
++ apic->icr_write = hv_apic_icr_write;
++ apic->icr_read = hv_apic_icr_read;
++ }
+ }
+ }
+diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
+index 35c225ede0e4..61d93f062a36 100644
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -734,5 +734,28 @@ do { \
+ if (unlikely(__gu_err)) goto err_label; \
+ } while (0)
+
++/*
++ * We want the unsafe accessors to always be inlined and use
++ * the error labels - thus the macro games.
++ */
++#define unsafe_copy_loop(dst, src, len, type, label) \
++ while (len >= sizeof(type)) { \
++ unsafe_put_user(*(type *)src,(type __user *)dst,label); \
++ dst += sizeof(type); \
++ src += sizeof(type); \
++ len -= sizeof(type); \
++ }
++
++#define unsafe_copy_to_user(_dst,_src,_len,label) \
++do { \
++ char __user *__ucu_dst = (_dst); \
++ const char *__ucu_src = (_src); \
++ size_t __ucu_len = (_len); \
++ unsafe_copy_loop(__ucu_dst, __ucu_src, __ucu_len, u64, label); \
++ unsafe_copy_loop(__ucu_dst, __ucu_src, __ucu_len, u32, label); \
++ unsafe_copy_loop(__ucu_dst, __ucu_src, __ucu_len, u16, label); \
++ unsafe_copy_loop(__ucu_dst, __ucu_src, __ucu_len, u8, label); \
++} while (0)
++
+ #endif /* _ASM_X86_UACCESS_H */
+
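+
unsafe_copy_to_user() deliberately performs no access_ok() check and no
STAC/CLAC of its own; the caller brackets a batch of accesses in a single
user_access_begin()/user_access_end() window and routes every fault to one
label. A hypothetical caller showing the intended shape (the function and
its buffers are illustrative, not from the patch):

static int copy_hdr_to_user(void __user *udst, const void *ksrc,
                            size_t len)
{
        if (!user_access_begin(udst, len))      /* access_ok + STAC */
                return -EFAULT;
        unsafe_copy_to_user(udst, ksrc, len, Efault);
        user_access_end();                      /* CLAC */
        return 0;

Efault:
        user_access_end();
        return -EFAULT;
}

The cascaded u64/u32/u16/u8 loops in the macro mean that, for small constant
lengths, the copy compiles down to a few unrolled unsafe_put_user() stores,
all inside one user-access window.
+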
+diff --git a/arch/x86/kernel/apic/x2apic_cluster.c b/arch/x86/kernel/apic/x2apic_cluster.c
+index 609e499387a1..0cad36d1457a 100644
+--- a/arch/x86/kernel/apic/x2apic_cluster.c
++++ b/arch/x86/kernel/apic/x2apic_cluster.c
+@@ -158,7 +158,8 @@ static int x2apic_dead_cpu(unsigned int dead_cpu)
+ {
+ struct cluster_mask *cmsk = per_cpu(cluster_masks, dead_cpu);
+
+- cpumask_clear_cpu(dead_cpu, &cmsk->mask);
++ if (cmsk)
++ cpumask_clear_cpu(dead_cpu, &cmsk->mask);
+ free_cpumask_var(per_cpu(ipi_mask, dead_cpu));
+ return 0;
+ }
+diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
+index 29ffa495bd1c..206a4b6144c2 100644
+--- a/arch/x86/kernel/head64.c
++++ b/arch/x86/kernel/head64.c
+@@ -222,13 +222,31 @@ unsigned long __head __startup_64(unsigned long physaddr,
+ * we might write invalid pmds, when the kernel is relocated
+ * cleanup_highmap() fixes this up along with the mappings
+ * beyond _end.
++ *
++ * Only the region occupied by the kernel image has so far
++ * been checked against the table of usable memory regions
++ * provided by the firmware, so invalidate pages outside that
++ * region. A page table entry that maps to a reserved area of
++ * memory would allow processor speculation into that area,
++ * and on some hardware (particularly the UV platform) even
++ * speculative access to some reserved areas is caught as an
++ * error, causing the BIOS to halt the system.
+ */
+
+ pmd = fixup_pointer(level2_kernel_pgt, physaddr);
+- for (i = 0; i < PTRS_PER_PMD; i++) {
++
++ /* invalidate pages before the kernel image */
++ for (i = 0; i < pmd_index((unsigned long)_text); i++)
++ pmd[i] &= ~_PAGE_PRESENT;
++
++ /* fixup pages that are part of the kernel image */
++ for (; i <= pmd_index((unsigned long)_end); i++)
+ if (pmd[i] & _PAGE_PRESENT)
+ pmd[i] += load_delta;
+- }
++
++ /* invalidate pages after the kernel image */
++ for (; i < PTRS_PER_PMD; i++)
++ pmd[i] &= ~_PAGE_PRESENT;
+
+ /*
+ * Fixup phys_base - remove the memory encryption mask to obtain
+diff --git a/arch/x86/xen/efi.c b/arch/x86/xen/efi.c
+index 0d3365cb64de..7e3eb70f411a 100644
+--- a/arch/x86/xen/efi.c
++++ b/arch/x86/xen/efi.c
+@@ -65,7 +65,9 @@ static efi_system_table_t __init *xen_efi_probe(void)
+ efi.get_variable = xen_efi_get_variable;
+ efi.get_next_variable = xen_efi_get_next_variable;
+ efi.set_variable = xen_efi_set_variable;
++ efi.set_variable_nonblocking = xen_efi_set_variable;
+ efi.query_variable_info = xen_efi_query_variable_info;
++ efi.query_variable_info_nonblocking = xen_efi_query_variable_info;
+ efi.update_capsule = xen_efi_update_capsule;
+ efi.query_capsule_caps = xen_efi_query_capsule_caps;
+ efi.get_next_high_mono_count = xen_efi_get_next_high_mono_count;
+diff --git a/arch/xtensa/include/asm/bitops.h b/arch/xtensa/include/asm/bitops.h
+index aeb15f4c755b..be8b2be5a98b 100644
+--- a/arch/xtensa/include/asm/bitops.h
++++ b/arch/xtensa/include/asm/bitops.h
+@@ -148,7 +148,7 @@ static inline void change_bit(unsigned int bit, volatile unsigned long *p)
+ " getex %0\n"
+ " beqz %0, 1b\n"
+ : "=&a" (tmp)
+- : "a" (~mask), "a" (p)
++ : "a" (mask), "a" (p)
+ : "memory");
+ }
+
+diff --git a/arch/xtensa/kernel/xtensa_ksyms.c b/arch/xtensa/kernel/xtensa_ksyms.c
+index 04f19de46700..4092555828b1 100644
+--- a/arch/xtensa/kernel/xtensa_ksyms.c
++++ b/arch/xtensa/kernel/xtensa_ksyms.c
+@@ -119,13 +119,6 @@ EXPORT_SYMBOL(__invalidate_icache_range);
+ // FIXME EXPORT_SYMBOL(screen_info);
+ #endif
+
+-EXPORT_SYMBOL(outsb);
+-EXPORT_SYMBOL(outsw);
+-EXPORT_SYMBOL(outsl);
+-EXPORT_SYMBOL(insb);
+-EXPORT_SYMBOL(insw);
+-EXPORT_SYMBOL(insl);
+-
+ extern long common_exception_return;
+ EXPORT_SYMBOL(common_exception_return);
+
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index a79b9ad1aba1..ed41cde93641 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1998,6 +1998,8 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
+ }
+
+ blk_add_rq_to_plug(plug, rq);
++ } else if (q->elevator) {
++ blk_mq_sched_insert_request(rq, false, true, true);
+ } else if (plug && !blk_queue_nomerges(q)) {
+ /*
+ * We do limited plugging. If the bio can be merged, do that.
+@@ -2021,8 +2023,8 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
+ blk_mq_try_issue_directly(data.hctx, same_queue_rq,
+ &cookie);
+ }
+- } else if ((q->nr_hw_queues > 1 && is_sync) || (!q->elevator &&
+- !data.hctx->dispatch_busy)) {
++ } else if ((q->nr_hw_queues > 1 && is_sync) ||
++ !data.hctx->dispatch_busy) {
+ blk_mq_try_issue_directly(data.hctx, rq, &cookie);
+ } else {
+ blk_mq_sched_insert_request(rq, false, true, true);
+diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
+index c0f0778d5396..8378f68a21ac 100644
+--- a/block/blk-rq-qos.h
++++ b/block/blk-rq-qos.h
+@@ -103,16 +103,13 @@ static inline void rq_qos_add(struct request_queue *q, struct rq_qos *rqos)
+
+ static inline void rq_qos_del(struct request_queue *q, struct rq_qos *rqos)
+ {
+- struct rq_qos *cur, *prev = NULL;
+- for (cur = q->rq_qos; cur; cur = cur->next) {
+- if (cur == rqos) {
+- if (prev)
+- prev->next = rqos->next;
+- else
+- q->rq_qos = cur;
++ struct rq_qos **cur;
++
++ for (cur = &q->rq_qos; *cur; cur = &(*cur)->next) {
++ if (*cur == rqos) {
++ *cur = rqos->next;
+ break;
+ }
+- prev = cur;
+ }
+
+ blk_mq_debugfs_unregister_rqos(rqos);
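+
The rq_qos_del() rewrite is the classic indirect-pointer deletion idiom for
singly linked lists: iterating with a pointer to the link itself removes the
head-versus-interior special case, which the old prev-based version even got
wrong at the head (it assigned cur rather than cur->next to q->rq_qos). The
idiom in isolation:

struct node {
        struct node *next;
};

static void unlink(struct node **head, struct node *victim)
{
        struct node **cur;

        /*
         * cur points at whichever link might hold victim: first the
         * head pointer, then each node's next field in turn.
         */
        for (cur = head; *cur; cur = &(*cur)->next) {
                if (*cur == victim) {
                        *cur = victim->next;    /* one unlink path */
                        break;
                }
        }
}
+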
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index 3b2525908dd8..a1a858ad4d18 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -905,8 +905,8 @@ void acpi_cppc_processor_exit(struct acpi_processor *pr)
+ pcc_data[pcc_ss_id]->refcount--;
+ if (!pcc_data[pcc_ss_id]->refcount) {
+ pcc_mbox_free_channel(pcc_data[pcc_ss_id]->pcc_channel);
+- pcc_data[pcc_ss_id]->pcc_channel_acquired = 0;
+ kfree(pcc_data[pcc_ss_id]);
++ pcc_data[pcc_ss_id] = NULL;
+ }
+ }
+ }
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index 1413324982f0..14e68f202f81 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -1322,7 +1322,7 @@ static ssize_t scrub_show(struct device *dev,
+ nfit_device_lock(dev);
+ nd_desc = dev_get_drvdata(dev);
+ if (!nd_desc) {
+- device_unlock(dev);
++ nfit_device_unlock(dev);
+ return rc;
+ }
+ acpi_desc = to_acpi_desc(nd_desc);
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index dc1c83eafc22..1c5278207153 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -95,10 +95,6 @@ DEFINE_SHOW_ATTRIBUTE(proc);
+ #define SZ_1K 0x400
+ #endif
+
+-#ifndef SZ_4M
+-#define SZ_4M 0x400000
+-#endif
+-
+ #define FORBIDDEN_MMAP_FLAGS (VM_WRITE)
+
+ enum {
+@@ -5195,9 +5191,6 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
+ if (proc->tsk != current->group_leader)
+ return -EINVAL;
+
+- if ((vma->vm_end - vma->vm_start) > SZ_4M)
+- vma->vm_end = vma->vm_start + SZ_4M;
+-
+ binder_debug(BINDER_DEBUG_OPEN_CLOSE,
+ "%s: %d %lx-%lx (%ld K) vma %lx pagep %lx\n",
+ __func__, proc->pid, vma->vm_start, vma->vm_end,
+diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
+index 6d79a1b0d446..8fe99b20ca02 100644
+--- a/drivers/android/binder_alloc.c
++++ b/drivers/android/binder_alloc.c
+@@ -22,6 +22,7 @@
+ #include <asm/cacheflush.h>
+ #include <linux/uaccess.h>
+ #include <linux/highmem.h>
++#include <linux/sizes.h>
+ #include "binder_alloc.h"
+ #include "binder_trace.h"
+
+@@ -689,7 +690,9 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
+ alloc->buffer = (void __user *)vma->vm_start;
+ mutex_unlock(&binder_alloc_mmap_lock);
+
+- alloc->pages = kcalloc((vma->vm_end - vma->vm_start) / PAGE_SIZE,
++ alloc->buffer_size = min_t(unsigned long, vma->vm_end - vma->vm_start,
++ SZ_4M);
++ alloc->pages = kcalloc(alloc->buffer_size / PAGE_SIZE,
+ sizeof(alloc->pages[0]),
+ GFP_KERNEL);
+ if (alloc->pages == NULL) {
+@@ -697,7 +700,6 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
+ failure_string = "alloc page array";
+ goto err_alloc_pages_failed;
+ }
+- alloc->buffer_size = vma->vm_end - vma->vm_start;
+
+ buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
+ if (!buffer) {
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 3e63294304c7..691852b8bb41 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -1617,7 +1617,9 @@ static void ahci_intel_pcs_quirk(struct pci_dev *pdev, struct ahci_host_priv *hp
+ */
+ if (!id || id->vendor != PCI_VENDOR_ID_INTEL)
+ return;
+- if (((enum board_ids) id->driver_data) < board_ahci_pcs7)
++
++ /* Skip applying the quirk on Denverton and beyond */
++ if (((enum board_ids) id->driver_data) >= board_ahci_pcs7)
+ return;
+
+ /*
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 1669d41fcddc..810329523c28 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -9,6 +9,7 @@
+ */
+
+ #include <linux/acpi.h>
++#include <linux/cpufreq.h>
+ #include <linux/device.h>
+ #include <linux/err.h>
+ #include <linux/fwnode.h>
+@@ -3150,6 +3151,8 @@ void device_shutdown(void)
+ wait_for_device_probe();
+ device_block_probing();
+
++ cpufreq_suspend();
++
+ spin_lock(&devices_kset->list_lock);
+ /*
+ * Walk the devices list backward, shutting down each in turn.
+diff --git a/drivers/base/memory.c b/drivers/base/memory.c
+index 20c39d1bcef8..9b9abc4fcfb7 100644
+--- a/drivers/base/memory.c
++++ b/drivers/base/memory.c
+@@ -554,6 +554,9 @@ static ssize_t soft_offline_page_store(struct device *dev,
+ pfn >>= PAGE_SHIFT;
+ if (!pfn_valid(pfn))
+ return -ENXIO;
++ /* Only online pages can be soft-offlined (esp., not ZONE_DEVICE). */
++ if (!pfn_to_online_page(pfn))
++ return -EIO;
+ ret = soft_offline_page(pfn_to_page(pfn), 0);
+ return ret == 0 ? count : ret;
+ }
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 1410fa893653..f6f77eaa7217 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -994,6 +994,16 @@ static int loop_set_fd(struct loop_device *lo, fmode_t mode,
+ if (!(lo_flags & LO_FLAGS_READ_ONLY) && file->f_op->fsync)
+ blk_queue_write_cache(lo->lo_queue, true, false);
+
++ if (io_is_direct(lo->lo_backing_file) && inode->i_sb->s_bdev) {
++ /* In case of direct I/O, match underlying block size */
++ unsigned short bsize = bdev_logical_block_size(
++ inode->i_sb->s_bdev);
++
++ blk_queue_logical_block_size(lo->lo_queue, bsize);
++ blk_queue_physical_block_size(lo->lo_queue, bsize);
++ blk_queue_io_min(lo->lo_queue, bsize);
++ }
++
+ loop_update_rotational(lo);
+ loop_update_dio(lo);
+ set_capacity(lo->lo_disk, size);
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index d58a359a6622..4285e75e52c3 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -413,13 +413,14 @@ static void reset_bdev(struct zram *zram)
+ static ssize_t backing_dev_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+ {
++ struct file *file;
+ struct zram *zram = dev_to_zram(dev);
+- struct file *file = zram->backing_dev;
+ char *p;
+ ssize_t ret;
+
+ down_read(&zram->init_lock);
+- if (!zram->backing_dev) {
++ file = zram->backing_dev;
++ if (!file) {
+ memcpy(buf, "none\n", 5);
+ up_read(&zram->init_lock);
+ return 5;
+diff --git a/drivers/clk/ti/clk-7xx.c b/drivers/clk/ti/clk-7xx.c
+index b57fe09b428b..9dd6185a4b4e 100644
+--- a/drivers/clk/ti/clk-7xx.c
++++ b/drivers/clk/ti/clk-7xx.c
+@@ -683,7 +683,7 @@ static const struct omap_clkctrl_reg_data dra7_l4per2_clkctrl_regs[] __initconst
+ { DRA7_L4PER2_MCASP2_CLKCTRL, dra7_mcasp2_bit_data, CLKF_SW_SUP, "l4per2-clkctrl:0154:22" },
+ { DRA7_L4PER2_MCASP3_CLKCTRL, dra7_mcasp3_bit_data, CLKF_SW_SUP, "l4per2-clkctrl:015c:22" },
+ { DRA7_L4PER2_MCASP5_CLKCTRL, dra7_mcasp5_bit_data, CLKF_SW_SUP, "l4per2-clkctrl:016c:22" },
+- { DRA7_L4PER2_MCASP8_CLKCTRL, dra7_mcasp8_bit_data, CLKF_SW_SUP, "l4per2-clkctrl:0184:24" },
++ { DRA7_L4PER2_MCASP8_CLKCTRL, dra7_mcasp8_bit_data, CLKF_SW_SUP, "l4per2-clkctrl:0184:22" },
+ { DRA7_L4PER2_MCASP4_CLKCTRL, dra7_mcasp4_bit_data, CLKF_SW_SUP, "l4per2-clkctrl:018c:22" },
+ { DRA7_L4PER2_UART7_CLKCTRL, dra7_uart7_bit_data, CLKF_SW_SUP, "l4per2-clkctrl:01c4:24" },
+ { DRA7_L4PER2_UART8_CLKCTRL, dra7_uart8_bit_data, CLKF_SW_SUP, "l4per2-clkctrl:01d4:24" },
+@@ -828,8 +828,8 @@ static struct ti_dt_clk dra7xx_clks[] = {
+ DT_CLK(NULL, "mcasp6_aux_gfclk_mux", "l4per2-clkctrl:01f8:22"),
+ DT_CLK(NULL, "mcasp7_ahclkx_mux", "l4per2-clkctrl:01fc:24"),
+ DT_CLK(NULL, "mcasp7_aux_gfclk_mux", "l4per2-clkctrl:01fc:22"),
+- DT_CLK(NULL, "mcasp8_ahclkx_mux", "l4per2-clkctrl:0184:22"),
+- DT_CLK(NULL, "mcasp8_aux_gfclk_mux", "l4per2-clkctrl:0184:24"),
++ DT_CLK(NULL, "mcasp8_ahclkx_mux", "l4per2-clkctrl:0184:24"),
++ DT_CLK(NULL, "mcasp8_aux_gfclk_mux", "l4per2-clkctrl:0184:22"),
+ DT_CLK(NULL, "mmc1_clk32k", "l3init-clkctrl:0008:8"),
+ DT_CLK(NULL, "mmc1_fclk_div", "l3init-clkctrl:0008:25"),
+ DT_CLK(NULL, "mmc1_fclk_mux", "l3init-clkctrl:0008:24"),
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index c28ebf2810f1..f970f87ce86e 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -2746,14 +2746,6 @@ int cpufreq_unregister_driver(struct cpufreq_driver *driver)
+ }
+ EXPORT_SYMBOL_GPL(cpufreq_unregister_driver);
+
+-/*
+- * Stop cpufreq at shutdown to make sure it isn't holding any locks
+- * or mutexes when secondary CPUs are halted.
+- */
+-static struct syscore_ops cpufreq_syscore_ops = {
+- .shutdown = cpufreq_suspend,
+-};
+-
+ struct kobject *cpufreq_global_kobject;
+ EXPORT_SYMBOL(cpufreq_global_kobject);
+
+@@ -2765,8 +2757,6 @@ static int __init cpufreq_core_init(void)
+ cpufreq_global_kobject = kobject_create_and_add("cpufreq", &cpu_subsys.dev_root->kobj);
+ BUG_ON(!cpufreq_global_kobject);
+
+- register_syscore_ops(&cpufreq_syscore_ops);
+-
+ return 0;
+ }
+ module_param(off, int, 0444);
+diff --git a/drivers/edac/ghes_edac.c b/drivers/edac/ghes_edac.c
+index 7f19f1c672c3..2059e43ccc01 100644
+--- a/drivers/edac/ghes_edac.c
++++ b/drivers/edac/ghes_edac.c
+@@ -553,7 +553,11 @@ void ghes_edac_unregister(struct ghes *ghes)
+ if (!ghes_pvt)
+ return;
+
++ if (atomic_dec_return(&ghes_init))
++ return;
++
+ mci = ghes_pvt->mci;
++ ghes_pvt = NULL;
+ edac_mc_del_mc(mci->pdev);
+ edac_mc_free(mci);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
+index eba42c752bca..82155ac3288a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
+@@ -189,7 +189,7 @@ static int acp_hw_init(void *handle)
+ u32 val = 0;
+ u32 count = 0;
+ struct device *dev;
+- struct i2s_platform_data *i2s_pdata;
++ struct i2s_platform_data *i2s_pdata = NULL;
+
+ struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+@@ -231,20 +231,21 @@ static int acp_hw_init(void *handle)
+ adev->acp.acp_cell = kcalloc(ACP_DEVS, sizeof(struct mfd_cell),
+ GFP_KERNEL);
+
+- if (adev->acp.acp_cell == NULL)
+- return -ENOMEM;
++ if (adev->acp.acp_cell == NULL) {
++ r = -ENOMEM;
++ goto failure;
++ }
+
+ adev->acp.acp_res = kcalloc(5, sizeof(struct resource), GFP_KERNEL);
+ if (adev->acp.acp_res == NULL) {
+- kfree(adev->acp.acp_cell);
+- return -ENOMEM;
++ r = -ENOMEM;
++ goto failure;
+ }
+
+ i2s_pdata = kcalloc(3, sizeof(struct i2s_platform_data), GFP_KERNEL);
+ if (i2s_pdata == NULL) {
+- kfree(adev->acp.acp_res);
+- kfree(adev->acp.acp_cell);
+- return -ENOMEM;
++ r = -ENOMEM;
++ goto failure;
+ }
+
+ switch (adev->asic_type) {
+@@ -341,14 +342,14 @@ static int acp_hw_init(void *handle)
+ r = mfd_add_hotplug_devices(adev->acp.parent, adev->acp.acp_cell,
+ ACP_DEVS);
+ if (r)
+- return r;
++ goto failure;
+
+ for (i = 0; i < ACP_DEVS ; i++) {
+ dev = get_mfd_cell_dev(adev->acp.acp_cell[i].name, i);
+ r = pm_genpd_add_device(&adev->acp.acp_genpd->gpd, dev);
+ if (r) {
+ dev_err(dev, "Failed to add dev to genpd\n");
+- return r;
++ goto failure;
+ }
+ }
+
+@@ -367,7 +368,8 @@ static int acp_hw_init(void *handle)
+ break;
+ if (--count == 0) {
+ dev_err(&adev->pdev->dev, "Failed to reset ACP\n");
+- return -ETIMEDOUT;
++ r = -ETIMEDOUT;
++ goto failure;
+ }
+ udelay(100);
+ }
+@@ -384,7 +386,8 @@ static int acp_hw_init(void *handle)
+ break;
+ if (--count == 0) {
+ dev_err(&adev->pdev->dev, "Failed to reset ACP\n");
+- return -ETIMEDOUT;
++ r = -ETIMEDOUT;
++ goto failure;
+ }
+ udelay(100);
+ }
+@@ -393,6 +396,13 @@ static int acp_hw_init(void *handle)
+ val &= ~ACP_SOFT_RESET__SoftResetAud_MASK;
+ cgs_write_register(adev->acp.cgs_device, mmACP_SOFT_RESET, val);
+ return 0;
++
++failure:
++ kfree(i2s_pdata);
++ kfree(adev->acp.acp_res);
++ kfree(adev->acp.acp_cell);
++ kfree(adev->acp.acp_genpd);
++ return r;
+ }
+
+ /**
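+
The acp_hw_init() conversion is the standard centralized-unwind shape: every
resource pointer starts out NULL, each failure jumps to a single label, and
that label frees everything unconditionally because kfree(NULL) is a no-op.
A two-resource sketch of the pattern (sizes and names are placeholders):

static int setup_two(void)
{
        void *a = NULL, *b = NULL;
        int r;

        a = kzalloc(64, GFP_KERNEL);
        if (!a) {
                r = -ENOMEM;
                goto failure;
        }

        b = kzalloc(128, GFP_KERNEL);
        if (!b) {
                r = -ENOMEM;
                goto failure;
        }

        return 0;               /* success: ownership handed off */

failure:
        kfree(b);               /* kfree(NULL) is a no-op, so no
                                 * how-far-did-we-get bookkeeping */
        kfree(a);
        return r;
}
+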
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 8b26c970a3cb..90df22081a25 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -536,7 +536,6 @@ static int amdgpu_cs_list_validate(struct amdgpu_cs_parser *p,
+
+ list_for_each_entry(lobj, validated, tv.head) {
+ struct amdgpu_bo *bo = ttm_to_amdgpu_bo(lobj->tv.bo);
+- bool binding_userptr = false;
+ struct mm_struct *usermm;
+
+ usermm = amdgpu_ttm_tt_get_usermm(bo->tbo.ttm);
+@@ -553,7 +552,6 @@ static int amdgpu_cs_list_validate(struct amdgpu_cs_parser *p,
+
+ amdgpu_ttm_tt_set_user_pages(bo->tbo.ttm,
+ lobj->user_pages);
+- binding_userptr = true;
+ }
+
+ if (p->evictable == lobj)
+@@ -563,10 +561,8 @@ static int amdgpu_cs_list_validate(struct amdgpu_cs_parser *p,
+ if (r)
+ return r;
+
+- if (binding_userptr) {
+- kvfree(lobj->user_pages);
+- lobj->user_pages = NULL;
+- }
++ kvfree(lobj->user_pages);
++ lobj->user_pages = NULL;
+ }
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 5376328d3fd0..a7cd4a03bf38 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -1030,6 +1030,41 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
+ return -ENODEV;
+ }
+
++#ifdef CONFIG_DRM_AMDGPU_SI
++ if (!amdgpu_si_support) {
++ switch (flags & AMD_ASIC_MASK) {
++ case CHIP_TAHITI:
++ case CHIP_PITCAIRN:
++ case CHIP_VERDE:
++ case CHIP_OLAND:
++ case CHIP_HAINAN:
++ dev_info(&pdev->dev,
++ "SI support provided by radeon.\n");
++ dev_info(&pdev->dev,
++ "Use radeon.si_support=0 amdgpu.si_support=1 to override.\n"
++ );
++ return -ENODEV;
++ }
++ }
++#endif
++#ifdef CONFIG_DRM_AMDGPU_CIK
++ if (!amdgpu_cik_support) {
++ switch (flags & AMD_ASIC_MASK) {
++ case CHIP_KAVERI:
++ case CHIP_BONAIRE:
++ case CHIP_HAWAII:
++ case CHIP_KABINI:
++ case CHIP_MULLINS:
++ dev_info(&pdev->dev,
++ "CIK support provided by radeon.\n");
++ dev_info(&pdev->dev,
++ "Use radeon.cik_support=0 amdgpu.cik_support=1 to override.\n"
++ );
++ return -ENODEV;
++ }
++ }
++#endif
++
+ /* Get rid of things like offb */
+ ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "amdgpudrmfb");
+ if (ret)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+index 00beba533582..56b4c241a14b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+@@ -144,41 +144,6 @@ int amdgpu_driver_load_kms(struct drm_device *dev, unsigned long flags)
+ struct amdgpu_device *adev;
+ int r, acpi_status;
+
+-#ifdef CONFIG_DRM_AMDGPU_SI
+- if (!amdgpu_si_support) {
+- switch (flags & AMD_ASIC_MASK) {
+- case CHIP_TAHITI:
+- case CHIP_PITCAIRN:
+- case CHIP_VERDE:
+- case CHIP_OLAND:
+- case CHIP_HAINAN:
+- dev_info(dev->dev,
+- "SI support provided by radeon.\n");
+- dev_info(dev->dev,
+- "Use radeon.si_support=0 amdgpu.si_support=1 to override.\n"
+- );
+- return -ENODEV;
+- }
+- }
+-#endif
+-#ifdef CONFIG_DRM_AMDGPU_CIK
+- if (!amdgpu_cik_support) {
+- switch (flags & AMD_ASIC_MASK) {
+- case CHIP_KAVERI:
+- case CHIP_BONAIRE:
+- case CHIP_HAWAII:
+- case CHIP_KABINI:
+- case CHIP_MULLINS:
+- dev_info(dev->dev,
+- "CIK support provided by radeon.\n");
+- dev_info(dev->dev,
+- "Use radeon.cik_support=0 amdgpu.cik_support=1 to override.\n"
+- );
+- return -ENODEV;
+- }
+- }
+-#endif
+-
+ adev = kzalloc(sizeof(struct amdgpu_device), GFP_KERNEL);
+ if (adev == NULL) {
+ return -ENOMEM;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+index b70b3c45bb29..65044b1b3d4c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+@@ -429,13 +429,14 @@ void amdgpu_vce_free_handles(struct amdgpu_device *adev, struct drm_file *filp)
+ * Open up a stream for HW test
+ */
+ int amdgpu_vce_get_create_msg(struct amdgpu_ring *ring, uint32_t handle,
++ struct amdgpu_bo *bo,
+ struct dma_fence **fence)
+ {
+ const unsigned ib_size_dw = 1024;
+ struct amdgpu_job *job;
+ struct amdgpu_ib *ib;
+ struct dma_fence *f = NULL;
+- uint64_t dummy;
++ uint64_t addr;
+ int i, r;
+
+ r = amdgpu_job_alloc_with_ib(ring->adev, ib_size_dw * 4, &job);
+@@ -444,7 +445,7 @@ int amdgpu_vce_get_create_msg(struct amdgpu_ring *ring, uint32_t handle,
+
+ ib = &job->ibs[0];
+
+- dummy = ib->gpu_addr + 1024;
++ addr = amdgpu_bo_gpu_offset(bo);
+
+ /* stitch together a VCE create msg */
+ ib->length_dw = 0;
+@@ -476,8 +477,8 @@ int amdgpu_vce_get_create_msg(struct amdgpu_ring *ring, uint32_t handle,
+
+ ib->ptr[ib->length_dw++] = 0x00000014; /* len */
+ ib->ptr[ib->length_dw++] = 0x05000005; /* feedback buffer */
+- ib->ptr[ib->length_dw++] = upper_32_bits(dummy);
+- ib->ptr[ib->length_dw++] = dummy;
++ ib->ptr[ib->length_dw++] = upper_32_bits(addr);
++ ib->ptr[ib->length_dw++] = addr;
+ ib->ptr[ib->length_dw++] = 0x00000001;
+
+ for (i = ib->length_dw; i < ib_size_dw; ++i)
+@@ -1110,13 +1111,20 @@ int amdgpu_vce_ring_test_ring(struct amdgpu_ring *ring)
+ int amdgpu_vce_ring_test_ib(struct amdgpu_ring *ring, long timeout)
+ {
+ struct dma_fence *fence = NULL;
++ struct amdgpu_bo *bo = NULL;
+ long r;
+
+ /* skip vce ring1/2 ib test for now, since it's not reliable */
+ if (ring != &ring->adev->vce.ring[0])
+ return 0;
+
+- r = amdgpu_vce_get_create_msg(ring, 1, NULL);
++ r = amdgpu_bo_create_reserved(ring->adev, 512, PAGE_SIZE,
++ AMDGPU_GEM_DOMAIN_VRAM,
++ &bo, NULL, NULL);
++ if (r)
++ return r;
++
++ r = amdgpu_vce_get_create_msg(ring, 1, bo, NULL);
+ if (r)
+ goto error;
+
+@@ -1132,5 +1140,7 @@ int amdgpu_vce_ring_test_ib(struct amdgpu_ring *ring, long timeout)
+
+ error:
+ dma_fence_put(fence);
++ amdgpu_bo_unreserve(bo);
++ amdgpu_bo_unref(&bo);
+ return r;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
+index 30ea54dd9117..e802f7d9db0a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
+@@ -59,6 +59,7 @@ int amdgpu_vce_entity_init(struct amdgpu_device *adev);
+ int amdgpu_vce_suspend(struct amdgpu_device *adev);
+ int amdgpu_vce_resume(struct amdgpu_device *adev);
+ int amdgpu_vce_get_create_msg(struct amdgpu_ring *ring, uint32_t handle,
++ struct amdgpu_bo *bo,
+ struct dma_fence **fence);
+ int amdgpu_vce_get_destroy_msg(struct amdgpu_ring *ring, uint32_t handle,
+ bool direct, struct dma_fence **fence);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+index 2e12eeb314a7..a3fe8b01d234 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+@@ -517,13 +517,14 @@ int amdgpu_vcn_enc_ring_test_ring(struct amdgpu_ring *ring)
+ }
+
+ static int amdgpu_vcn_enc_get_create_msg(struct amdgpu_ring *ring, uint32_t handle,
+- struct dma_fence **fence)
++ struct amdgpu_bo *bo,
++ struct dma_fence **fence)
+ {
+ const unsigned ib_size_dw = 16;
+ struct amdgpu_job *job;
+ struct amdgpu_ib *ib;
+ struct dma_fence *f = NULL;
+- uint64_t dummy;
++ uint64_t addr;
+ int i, r;
+
+ r = amdgpu_job_alloc_with_ib(ring->adev, ib_size_dw * 4, &job);
+@@ -531,14 +532,14 @@ static int amdgpu_vcn_enc_get_create_msg(struct amdgpu_ring *ring, uint32_t hand
+ return r;
+
+ ib = &job->ibs[0];
+- dummy = ib->gpu_addr + 1024;
++ addr = amdgpu_bo_gpu_offset(bo);
+
+ ib->length_dw = 0;
+ ib->ptr[ib->length_dw++] = 0x00000018;
+ ib->ptr[ib->length_dw++] = 0x00000001; /* session info */
+ ib->ptr[ib->length_dw++] = handle;
+- ib->ptr[ib->length_dw++] = upper_32_bits(dummy);
+- ib->ptr[ib->length_dw++] = dummy;
++ ib->ptr[ib->length_dw++] = upper_32_bits(addr);
++ ib->ptr[ib->length_dw++] = addr;
+ ib->ptr[ib->length_dw++] = 0x0000000b;
+
+ ib->ptr[ib->length_dw++] = 0x00000014;
+@@ -569,13 +570,14 @@ err:
+ }
+
+ static int amdgpu_vcn_enc_get_destroy_msg(struct amdgpu_ring *ring, uint32_t handle,
+- struct dma_fence **fence)
++ struct amdgpu_bo *bo,
++ struct dma_fence **fence)
+ {
+ const unsigned ib_size_dw = 16;
+ struct amdgpu_job *job;
+ struct amdgpu_ib *ib;
+ struct dma_fence *f = NULL;
+- uint64_t dummy;
++ uint64_t addr;
+ int i, r;
+
+ r = amdgpu_job_alloc_with_ib(ring->adev, ib_size_dw * 4, &job);
+@@ -583,14 +585,14 @@ static int amdgpu_vcn_enc_get_destroy_msg(struct amdgpu_ring *ring, uint32_t han
+ return r;
+
+ ib = &job->ibs[0];
+- dummy = ib->gpu_addr + 1024;
++ addr = amdgpu_bo_gpu_offset(bo);
+
+ ib->length_dw = 0;
+ ib->ptr[ib->length_dw++] = 0x00000018;
+ ib->ptr[ib->length_dw++] = 0x00000001;
+ ib->ptr[ib->length_dw++] = handle;
+- ib->ptr[ib->length_dw++] = upper_32_bits(dummy);
+- ib->ptr[ib->length_dw++] = dummy;
++ ib->ptr[ib->length_dw++] = upper_32_bits(addr);
++ ib->ptr[ib->length_dw++] = addr;
+ ib->ptr[ib->length_dw++] = 0x0000000b;
+
+ ib->ptr[ib->length_dw++] = 0x00000014;
+@@ -623,13 +625,20 @@ err:
+ int amdgpu_vcn_enc_ring_test_ib(struct amdgpu_ring *ring, long timeout)
+ {
+ struct dma_fence *fence = NULL;
++ struct amdgpu_bo *bo = NULL;
+ long r;
+
+- r = amdgpu_vcn_enc_get_create_msg(ring, 1, NULL);
++ r = amdgpu_bo_create_reserved(ring->adev, 128 * 1024, PAGE_SIZE,
++ AMDGPU_GEM_DOMAIN_VRAM,
++ &bo, NULL, NULL);
++ if (r)
++ return r;
++
++ r = amdgpu_vcn_enc_get_create_msg(ring, 1, bo, NULL);
+ if (r)
+ goto error;
+
+- r = amdgpu_vcn_enc_get_destroy_msg(ring, 1, &fence);
++ r = amdgpu_vcn_enc_get_destroy_msg(ring, 1, bo, &fence);
+ if (r)
+ goto error;
+
+@@ -641,6 +650,8 @@ int amdgpu_vcn_enc_ring_test_ib(struct amdgpu_ring *ring, long timeout)
+
+ error:
+ dma_fence_put(fence);
++ amdgpu_bo_unreserve(bo);
++ amdgpu_bo_unref(&bo);
+ return r;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+index 15c371fac469..0d131e1d6efc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+@@ -1086,7 +1086,7 @@ static void sdma_v5_0_ring_emit_pipeline_sync(struct amdgpu_ring *ring)
+ amdgpu_ring_write(ring, addr & 0xfffffffc);
+ amdgpu_ring_write(ring, upper_32_bits(addr) & 0xffffffff);
+ amdgpu_ring_write(ring, seq); /* reference */
+- amdgpu_ring_write(ring, 0xfffffff); /* mask */
++ amdgpu_ring_write(ring, 0xffffffff); /* mask */
+ amdgpu_ring_write(ring, SDMA_PKT_POLL_REGMEM_DW5_RETRY_COUNT(0xfff) |
+ SDMA_PKT_POLL_REGMEM_DW5_INTERVAL(4)); /* retry count, poll interval */
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
+index 670784a78512..217084d56ab8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
+@@ -206,13 +206,14 @@ static int uvd_v6_0_enc_ring_test_ring(struct amdgpu_ring *ring)
+ * Open up a stream for HW test
+ */
+ static int uvd_v6_0_enc_get_create_msg(struct amdgpu_ring *ring, uint32_t handle,
++ struct amdgpu_bo *bo,
+ struct dma_fence **fence)
+ {
+ const unsigned ib_size_dw = 16;
+ struct amdgpu_job *job;
+ struct amdgpu_ib *ib;
+ struct dma_fence *f = NULL;
+- uint64_t dummy;
++ uint64_t addr;
+ int i, r;
+
+ r = amdgpu_job_alloc_with_ib(ring->adev, ib_size_dw * 4, &job);
+@@ -220,15 +221,15 @@ static int uvd_v6_0_enc_get_create_msg(struct amdgpu_ring *ring, uint32_t handle
+ return r;
+
+ ib = &job->ibs[0];
+- dummy = ib->gpu_addr + 1024;
++ addr = amdgpu_bo_gpu_offset(bo);
+
+ ib->length_dw = 0;
+ ib->ptr[ib->length_dw++] = 0x00000018;
+ ib->ptr[ib->length_dw++] = 0x00000001; /* session info */
+ ib->ptr[ib->length_dw++] = handle;
+ ib->ptr[ib->length_dw++] = 0x00010000;
+- ib->ptr[ib->length_dw++] = upper_32_bits(dummy);
+- ib->ptr[ib->length_dw++] = dummy;
++ ib->ptr[ib->length_dw++] = upper_32_bits(addr);
++ ib->ptr[ib->length_dw++] = addr;
+
+ ib->ptr[ib->length_dw++] = 0x00000014;
+ ib->ptr[ib->length_dw++] = 0x00000002; /* task info */
+@@ -268,13 +269,14 @@ err:
+ */
+ static int uvd_v6_0_enc_get_destroy_msg(struct amdgpu_ring *ring,
+ uint32_t handle,
++ struct amdgpu_bo *bo,
+ struct dma_fence **fence)
+ {
+ const unsigned ib_size_dw = 16;
+ struct amdgpu_job *job;
+ struct amdgpu_ib *ib;
+ struct dma_fence *f = NULL;
+- uint64_t dummy;
++ uint64_t addr;
+ int i, r;
+
+ r = amdgpu_job_alloc_with_ib(ring->adev, ib_size_dw * 4, &job);
+@@ -282,15 +284,15 @@ static int uvd_v6_0_enc_get_destroy_msg(struct amdgpu_ring *ring,
+ return r;
+
+ ib = &job->ibs[0];
+- dummy = ib->gpu_addr + 1024;
++ addr = amdgpu_bo_gpu_offset(bo);
+
+ ib->length_dw = 0;
+ ib->ptr[ib->length_dw++] = 0x00000018;
+ ib->ptr[ib->length_dw++] = 0x00000001; /* session info */
+ ib->ptr[ib->length_dw++] = handle;
+ ib->ptr[ib->length_dw++] = 0x00010000;
+- ib->ptr[ib->length_dw++] = upper_32_bits(dummy);
+- ib->ptr[ib->length_dw++] = dummy;
++ ib->ptr[ib->length_dw++] = upper_32_bits(addr);
++ ib->ptr[ib->length_dw++] = addr;
+
+ ib->ptr[ib->length_dw++] = 0x00000014;
+ ib->ptr[ib->length_dw++] = 0x00000002; /* task info */
+@@ -327,13 +329,20 @@ err:
+ static int uvd_v6_0_enc_ring_test_ib(struct amdgpu_ring *ring, long timeout)
+ {
+ struct dma_fence *fence = NULL;
++ struct amdgpu_bo *bo = NULL;
+ long r;
+
+- r = uvd_v6_0_enc_get_create_msg(ring, 1, NULL);
++ r = amdgpu_bo_create_reserved(ring->adev, 128 * 1024, PAGE_SIZE,
++ AMDGPU_GEM_DOMAIN_VRAM,
++ &bo, NULL, NULL);
++ if (r)
++ return r;
++
++ r = uvd_v6_0_enc_get_create_msg(ring, 1, bo, NULL);
+ if (r)
+ goto error;
+
+- r = uvd_v6_0_enc_get_destroy_msg(ring, 1, &fence);
++ r = uvd_v6_0_enc_get_destroy_msg(ring, 1, bo, &fence);
+ if (r)
+ goto error;
+
+@@ -345,6 +354,8 @@ static int uvd_v6_0_enc_ring_test_ib(struct amdgpu_ring *ring, long timeout)
+
+ error:
+ dma_fence_put(fence);
++ amdgpu_bo_unreserve(bo);
++ amdgpu_bo_unref(&bo);
+ return r;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
+index a6bfe7651d07..c5e2f8c1741b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
+@@ -214,13 +214,14 @@ static int uvd_v7_0_enc_ring_test_ring(struct amdgpu_ring *ring)
+ * Open up a stream for HW test
+ */
+ static int uvd_v7_0_enc_get_create_msg(struct amdgpu_ring *ring, uint32_t handle,
++ struct amdgpu_bo *bo,
+ struct dma_fence **fence)
+ {
+ const unsigned ib_size_dw = 16;
+ struct amdgpu_job *job;
+ struct amdgpu_ib *ib;
+ struct dma_fence *f = NULL;
+- uint64_t dummy;
++ uint64_t addr;
+ int i, r;
+
+ r = amdgpu_job_alloc_with_ib(ring->adev, ib_size_dw * 4, &job);
+@@ -228,15 +229,15 @@ static int uvd_v7_0_enc_get_create_msg(struct amdgpu_ring *ring, uint32_t handle
+ return r;
+
+ ib = &job->ibs[0];
+- dummy = ib->gpu_addr + 1024;
++ addr = amdgpu_bo_gpu_offset(bo);
+
+ ib->length_dw = 0;
+ ib->ptr[ib->length_dw++] = 0x00000018;
+ ib->ptr[ib->length_dw++] = 0x00000001; /* session info */
+ ib->ptr[ib->length_dw++] = handle;
+ ib->ptr[ib->length_dw++] = 0x00000000;
+- ib->ptr[ib->length_dw++] = upper_32_bits(dummy);
+- ib->ptr[ib->length_dw++] = dummy;
++ ib->ptr[ib->length_dw++] = upper_32_bits(addr);
++ ib->ptr[ib->length_dw++] = addr;
+
+ ib->ptr[ib->length_dw++] = 0x00000014;
+ ib->ptr[ib->length_dw++] = 0x00000002; /* task info */
+@@ -275,13 +276,14 @@ err:
+ * Close up a stream for HW test or if userspace failed to do so
+ */
+ static int uvd_v7_0_enc_get_destroy_msg(struct amdgpu_ring *ring, uint32_t handle,
+- struct dma_fence **fence)
++ struct amdgpu_bo *bo,
++ struct dma_fence **fence)
+ {
+ const unsigned ib_size_dw = 16;
+ struct amdgpu_job *job;
+ struct amdgpu_ib *ib;
+ struct dma_fence *f = NULL;
+- uint64_t dummy;
++ uint64_t addr;
+ int i, r;
+
+ r = amdgpu_job_alloc_with_ib(ring->adev, ib_size_dw * 4, &job);
+@@ -289,15 +291,15 @@ static int uvd_v7_0_enc_get_destroy_msg(struct amdgpu_ring *ring, uint32_t handl
+ return r;
+
+ ib = &job->ibs[0];
+- dummy = ib->gpu_addr + 1024;
++ addr = amdgpu_bo_gpu_offset(bo);
+
+ ib->length_dw = 0;
+ ib->ptr[ib->length_dw++] = 0x00000018;
+ ib->ptr[ib->length_dw++] = 0x00000001;
+ ib->ptr[ib->length_dw++] = handle;
+ ib->ptr[ib->length_dw++] = 0x00000000;
+- ib->ptr[ib->length_dw++] = upper_32_bits(dummy);
+- ib->ptr[ib->length_dw++] = dummy;
++ ib->ptr[ib->length_dw++] = upper_32_bits(addr);
++ ib->ptr[ib->length_dw++] = addr;
+
+ ib->ptr[ib->length_dw++] = 0x00000014;
+ ib->ptr[ib->length_dw++] = 0x00000002;
+@@ -334,13 +336,20 @@ err:
+ static int uvd_v7_0_enc_ring_test_ib(struct amdgpu_ring *ring, long timeout)
+ {
+ struct dma_fence *fence = NULL;
++ struct amdgpu_bo *bo = NULL;
+ long r;
+
+- r = uvd_v7_0_enc_get_create_msg(ring, 1, NULL);
++ r = amdgpu_bo_create_reserved(ring->adev, 128 * 1024, PAGE_SIZE,
++ AMDGPU_GEM_DOMAIN_VRAM,
++ &bo, NULL, NULL);
++ if (r)
++ return r;
++
++ r = uvd_v7_0_enc_get_create_msg(ring, 1, bo, NULL);
+ if (r)
+ goto error;
+
+- r = uvd_v7_0_enc_get_destroy_msg(ring, 1, &fence);
++ r = uvd_v7_0_enc_get_destroy_msg(ring, 1, bo, &fence);
+ if (r)
+ goto error;
+
+@@ -352,6 +361,8 @@ static int uvd_v7_0_enc_ring_test_ib(struct amdgpu_ring *ring, long timeout)
+
+ error:
+ dma_fence_put(fence);
++ amdgpu_bo_unreserve(bo);
++ amdgpu_bo_unref(&bo);
+ return r;
+ }
+
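
These two UVD hunks (v6 and v7) share one shape: the encoder ring test now allocates a single reserved buffer object up front, hands it to both the create and the destroy message so the IB points at real backing memory instead of a dummy address, and releases it on every exit path. A minimal user-space sketch of that ownership pattern, with stub types standing in for amdgpu_bo and the real amdgpu calls:

    #include <stdlib.h>

    struct bo { void *mem; };                     /* stand-in for amdgpu_bo */

    static int bo_create_reserved(struct bo **out)
    {
        struct bo *b = calloc(1, sizeof(*b));

        if (!b)
            return -1;
        b->mem = malloc(128 * 1024);              /* 128 KiB scratch, as in the hunk */
        if (!b->mem) {
            free(b);
            return -1;
        }
        *out = b;
        return 0;
    }

    static void bo_release(struct bo **bop)       /* unreserve + unref analogue */
    {
        if (*bop) {
            free((*bop)->mem);
            free(*bop);
            *bop = NULL;
        }
    }

    static int get_create_msg(struct bo *b)  { return b ? 0 : -1; }
    static int get_destroy_msg(struct bo *b) { return b ? 0 : -1; }

    int enc_ring_test(void)
    {
        struct bo *b = NULL;
        int r = bo_create_reserved(&b);

        if (r)
            return r;                             /* nothing to clean up yet */
        r = get_create_msg(b);
        if (r)
            goto error;
        r = get_destroy_msg(b);
    error:
        bo_release(&b);                           /* one teardown path for all exits */
        return r;
    }

The single error label keeps the added amdgpu_bo_unreserve()/amdgpu_bo_unref() pair on every exit, matching the hunks above.
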
+diff --git a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
+index 6248c8455314..45f74219e79e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
+@@ -668,6 +668,7 @@ struct clock_source *dce100_clock_source_create(
+ return &clk_src->base;
+ }
+
++ kfree(clk_src);
+ BREAK_TO_DEBUGGER();
+ return NULL;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
+index 764329264c3b..0cb83b0e0e1e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
+@@ -714,6 +714,7 @@ struct clock_source *dce110_clock_source_create(
+ return &clk_src->base;
+ }
+
++ kfree(clk_src);
+ BREAK_TO_DEBUGGER();
+ return NULL;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c b/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
+index 7a04be74c9cf..918455caa9a6 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
+@@ -687,6 +687,7 @@ struct clock_source *dce112_clock_source_create(
+ return &clk_src->base;
+ }
+
++ kfree(clk_src);
+ BREAK_TO_DEBUGGER();
+ return NULL;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c b/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
+index ae38c9c7277c..49f3f0fad763 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
+@@ -500,6 +500,7 @@ static struct clock_source *dce120_clock_source_create(
+ return &clk_src->base;
+ }
+
++ kfree(clk_src);
+ BREAK_TO_DEBUGGER();
+ return NULL;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c b/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
+index 860a524ebcfa..952440893fbb 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
+@@ -701,6 +701,7 @@ struct clock_source *dce80_clock_source_create(
+ return &clk_src->base;
+ }
+
++ kfree(clk_src);
+ BREAK_TO_DEBUGGER();
+ return NULL;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
+index a12530a3ab9c..3f25e8da5396 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
+@@ -786,6 +786,7 @@ struct clock_source *dcn10_clock_source_create(
+ return &clk_src->base;
+ }
+
++ kfree(clk_src);
+ BREAK_TO_DEBUGGER();
+ return NULL;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index b949e202d6cb..5b7ff6c549f1 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -955,6 +955,7 @@ struct clock_source *dcn20_clock_source_create(
+ return &clk_src->base;
+ }
+
++ kfree(clk_src);
+ BREAK_TO_DEBUGGER();
+ return NULL;
+ }
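
The seven dce*/dcn* hunks above are the same one-line leak fix: when the construct step fails, the freshly allocated clk_src used to leak on the NULL return. A generic sketch of the allocate/construct/free-on-failure shape, using hypothetical names rather than the display-core types:

    #include <stdlib.h>

    struct clock_source { int id; };

    static int construct(struct clock_source *cs)   /* 1 = success */
    {
        cs->id = 1;
        return 1;
    }

    struct clock_source *clock_source_create(void)
    {
        struct clock_source *cs = calloc(1, sizeof(*cs));

        if (!cs)
            return NULL;
        if (construct(cs))
            return cs;

        free(cs);            /* the kfree(clk_src) each hunk adds */
        return NULL;
    }
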
+diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_wb_connector.c b/drivers/gpu/drm/arm/display/komeda/komeda_wb_connector.c
+index 2851cac94d86..b72840c06ab7 100644
+--- a/drivers/gpu/drm/arm/display/komeda/komeda_wb_connector.c
++++ b/drivers/gpu/drm/arm/display/komeda/komeda_wb_connector.c
+@@ -43,9 +43,8 @@ komeda_wb_encoder_atomic_check(struct drm_encoder *encoder,
+ struct komeda_data_flow_cfg dflow;
+ int err;
+
+- if (!writeback_job || !writeback_job->fb) {
++ if (!writeback_job)
+ return 0;
+- }
+
+ if (!crtc_st->active) {
+ DRM_DEBUG_ATOMIC("Cannot write the composition result out on a inactive CRTC.\n");
+@@ -166,8 +165,10 @@ static int komeda_wb_connector_add(struct komeda_kms_dev *kms,
+ &komeda_wb_encoder_helper_funcs,
+ formats, n_formats);
+ komeda_put_fourcc_list(formats);
+- if (err)
++ if (err) {
++ kfree(kwb_conn);
+ return err;
++ }
+
+ drm_connector_helper_add(&wb_conn->base, &komeda_wb_conn_helper_funcs);
+
+diff --git a/drivers/gpu/drm/arm/malidp_mw.c b/drivers/gpu/drm/arm/malidp_mw.c
+index 2e812525025d..a59227b2cdb5 100644
+--- a/drivers/gpu/drm/arm/malidp_mw.c
++++ b/drivers/gpu/drm/arm/malidp_mw.c
+@@ -130,7 +130,7 @@ malidp_mw_encoder_atomic_check(struct drm_encoder *encoder,
+ struct drm_framebuffer *fb;
+ int i, n_planes;
+
+- if (!conn_state->writeback_job || !conn_state->writeback_job->fb)
++ if (!conn_state->writeback_job)
+ return 0;
+
+ fb = conn_state->writeback_job->fb;
+@@ -247,7 +247,7 @@ void malidp_mw_atomic_commit(struct drm_device *drm,
+
+ mw_state = to_mw_state(conn_state);
+
+- if (conn_state->writeback_job && conn_state->writeback_job->fb) {
++ if (conn_state->writeback_job) {
+ struct drm_framebuffer *fb = conn_state->writeback_job->fb;
+
+ DRM_DEV_DEBUG_DRIVER(drm->dev,
+diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
+index 419381abbdd1..14aeaf736321 100644
+--- a/drivers/gpu/drm/drm_atomic.c
++++ b/drivers/gpu/drm/drm_atomic.c
+@@ -430,10 +430,15 @@ static int drm_atomic_connector_check(struct drm_connector *connector,
+ return -EINVAL;
+ }
+
+- if (writeback_job->out_fence && !writeback_job->fb) {
+- DRM_DEBUG_ATOMIC("[CONNECTOR:%d:%s] requesting out-fence without framebuffer\n",
+- connector->base.id, connector->name);
+- return -EINVAL;
++ if (!writeback_job->fb) {
++ if (writeback_job->out_fence) {
++ DRM_DEBUG_ATOMIC("[CONNECTOR:%d:%s] requesting out-fence without framebuffer\n",
++ connector->base.id, connector->name);
++ return -EINVAL;
++ }
++
++ drm_writeback_cleanup_job(writeback_job);
++ state->writeback_job = NULL;
+ }
+
+ return 0;
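
The drm_atomic.c change above turns a framebuffer-less writeback job from a hard error into a silent no-op: the job is cleaned up and detached from the state, unless it also asked for an out-fence, which remains -EINVAL. A stand-alone sketch of the new decision, with stub structs in place of the DRM types:

    struct wb_job   { int has_fb; int wants_out_fence; };
    struct wb_state { struct wb_job *job; };

    static void cleanup_job(struct wb_job *job) { (void)job; /* drop refs, free */ }

    int check_writeback(struct wb_state *state)
    {
        struct wb_job *job = state->job;

        if (job && !job->has_fb) {
            if (job->wants_out_fence)
                return -22;        /* -EINVAL: out-fence without framebuffer */
            cleanup_job(job);      /* otherwise drop the no-op job silently */
            state->job = NULL;
        }
        return 0;
    }
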
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index 82a4ceed3fcf..6b0177112e18 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -159,6 +159,9 @@ static const struct edid_quirk {
+ /* Medion MD 30217 PG */
+ { "MED", 0x7b8, EDID_QUIRK_PREFER_LARGE_75 },
+
++ /* Lenovo G50 */
++ { "SDC", 18514, EDID_QUIRK_FORCE_6BPC },
++
+ /* Panel in Samsung NP700G7A-S01PL notebook reports 6bpc */
+ { "SEC", 0xd033, EDID_QUIRK_FORCE_8BPC },
+
+diff --git a/drivers/gpu/drm/drm_writeback.c b/drivers/gpu/drm/drm_writeback.c
+index ff138b6ec48b..43d9e3bb3a94 100644
+--- a/drivers/gpu/drm/drm_writeback.c
++++ b/drivers/gpu/drm/drm_writeback.c
+@@ -324,6 +324,9 @@ void drm_writeback_cleanup_job(struct drm_writeback_job *job)
+ if (job->fb)
+ drm_framebuffer_put(job->fb);
+
++ if (job->out_fence)
++ dma_fence_put(job->out_fence);
++
+ kfree(job);
+ }
+ EXPORT_SYMBOL(drm_writeback_cleanup_job);
+@@ -366,25 +369,29 @@ drm_writeback_signal_completion(struct drm_writeback_connector *wb_connector,
+ {
+ unsigned long flags;
+ struct drm_writeback_job *job;
++ struct dma_fence *out_fence;
+
+ spin_lock_irqsave(&wb_connector->job_lock, flags);
+ job = list_first_entry_or_null(&wb_connector->job_queue,
+ struct drm_writeback_job,
+ list_entry);
+- if (job) {
++ if (job)
+ list_del(&job->list_entry);
+- if (job->out_fence) {
+- if (status)
+- dma_fence_set_error(job->out_fence, status);
+- dma_fence_signal(job->out_fence);
+- dma_fence_put(job->out_fence);
+- }
+- }
++
+ spin_unlock_irqrestore(&wb_connector->job_lock, flags);
+
+ if (WARN_ON(!job))
+ return;
+
++ out_fence = job->out_fence;
++ if (out_fence) {
++ if (status)
++ dma_fence_set_error(out_fence, status);
++ dma_fence_signal(out_fence);
++ dma_fence_put(out_fence);
++ job->out_fence = NULL;
++ }
++
+ INIT_WORK(&job->cleanup_work, cleanup_work);
+ queue_work(system_long_wq, &job->cleanup_work);
+ }
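
The drm_writeback.c rework keeps only the list manipulation under job_lock and signals the out-fence after the lock is dropped, the usual rule when the signaled work may take locks of its own. A pthread sketch of the shape, assuming a singly linked job queue:

    #include <pthread.h>
    #include <stddef.h>

    struct wb_job { struct wb_job *next; int fence_signaled; };

    static pthread_mutex_t job_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct wb_job *job_queue;

    void signal_completion(void)
    {
        struct wb_job *job;

        pthread_mutex_lock(&job_lock);
        job = job_queue;               /* list_first_entry_or_null() */
        if (job)
            job_queue = job->next;     /* only the list_del() stays locked */
        pthread_mutex_unlock(&job_lock);

        if (!job)
            return;

        job->fence_signaled = 1;       /* dma_fence work outside the lock */
    }
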
+diff --git a/drivers/gpu/drm/i915/display/intel_bios.c b/drivers/gpu/drm/i915/display/intel_bios.c
+index 3ef4e9f573cf..b1025c248bb9 100644
+--- a/drivers/gpu/drm/i915/display/intel_bios.c
++++ b/drivers/gpu/drm/i915/display/intel_bios.c
+@@ -1269,7 +1269,7 @@ static void sanitize_ddc_pin(struct drm_i915_private *dev_priv,
+ DRM_DEBUG_KMS("port %c trying to use the same DDC pin (0x%x) as port %c, "
+ "disabling port %c DVI/HDMI support\n",
+ port_name(port), info->alternate_ddc_pin,
+- port_name(p), port_name(port));
++ port_name(p), port_name(p));
+
+ /*
+ * If we have multiple ports supposedly sharing the
+@@ -1277,9 +1277,14 @@ static void sanitize_ddc_pin(struct drm_i915_private *dev_priv,
+ * port. Otherwise they share the same ddc pin and
+ * system couldn't communicate with them separately.
+ *
+- * Give child device order the priority, first come first
+- * served.
++ * Give inverse child device order the priority,
++ * last one wins. Yes, there are real machines
++ * (eg. Asrock B250M-HDV) where VBT has both
++ * port A and port E with the same AUX ch and
++ * we must pick port E :(
+ */
++ info = &dev_priv->vbt.ddi_port_info[p];
++
+ info->supports_dvi = false;
+ info->supports_hdmi = false;
+ info->alternate_ddc_pin = 0;
+@@ -1315,7 +1320,7 @@ static void sanitize_aux_ch(struct drm_i915_private *dev_priv,
+ DRM_DEBUG_KMS("port %c trying to use the same AUX CH (0x%x) as port %c, "
+ "disabling port %c DP support\n",
+ port_name(port), info->alternate_aux_channel,
+- port_name(p), port_name(port));
++ port_name(p), port_name(p));
+
+ /*
+ * If we have multiple ports supposedly sharing the
+@@ -1323,9 +1328,14 @@ static void sanitize_aux_ch(struct drm_i915_private *dev_priv,
+ * port. Otherwise they share the same aux channel
+ * and system couldn't communicate with them separately.
+ *
+- * Give child device order the priority, first come first
+- * served.
++ * Give inverse child device order the priority,
++ * last one wins. Yes, there are real machines
++ * (eg. Asrock B250M-HDV) where VBT has both
++ * port A and port E with the same AUX ch and
++ * we must pick port E :(
+ */
++ info = &dev_priv->vbt.ddi_port_info[p];
++
+ info->supports_dp = false;
+ info->alternate_aux_channel = 0;
+ }
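
Both sanitize hunks (sanitize_ddc_pin and sanitize_aux_ch) flip the conflict policy from first-come-first-served to last-one-wins: when a later child device claims the same DDC pin or AUX channel, it is the earlier port that gets disabled. A compact sketch of that resolution over a hypothetical port table:

    struct port_info { int pin; int enabled; };

    /* When a later entry claims a pin an earlier entry already holds,
     * disable the earlier one ("inverse child device order"). */
    void sanitize_pins(struct port_info *info, int n)
    {
        for (int cur = 1; cur < n; cur++) {
            if (!info[cur].enabled)
                continue;
            for (int p = 0; p < cur; p++)
                if (info[p].enabled && info[p].pin == info[cur].pin)
                    info[p].enabled = 0;   /* last one wins */
        }
    }
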
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+index c201289039fe..5bd27941811f 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+@@ -365,6 +365,7 @@ err:
+ return VM_FAULT_OOM;
+ case -ENOSPC:
+ case -EFAULT:
++ case -ENODEV: /* bad object, how did you get here! */
+ return VM_FAULT_SIGBUS;
+ default:
+ WARN_ONCE(ret, "unhandled error in %s: %i\n", __func__, ret);
+@@ -475,10 +476,16 @@ i915_gem_mmap_gtt(struct drm_file *file,
+ if (!obj)
+ return -ENOENT;
+
++ if (i915_gem_object_never_bind_ggtt(obj)) {
++ ret = -ENODEV;
++ goto out;
++ }
++
+ ret = create_mmap_offset(obj);
+ if (ret == 0)
+ *offset = drm_vma_node_offset_addr(&obj->base.vma_node);
+
++out:
+ i915_gem_object_put(obj);
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
+index dfebd5706f16..e44d3f49c1d6 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
++++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
+@@ -152,6 +152,12 @@ i915_gem_object_is_proxy(const struct drm_i915_gem_object *obj)
+ return obj->ops->flags & I915_GEM_OBJECT_IS_PROXY;
+ }
+
++static inline bool
++i915_gem_object_never_bind_ggtt(const struct drm_i915_gem_object *obj)
++{
++ return obj->ops->flags & I915_GEM_OBJECT_NO_GGTT;
++}
++
+ static inline bool
+ i915_gem_object_needs_async_cancel(const struct drm_i915_gem_object *obj)
+ {
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+index 18bf4f8d6d80..d5453e85df5e 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
++++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+@@ -31,7 +31,8 @@ struct drm_i915_gem_object_ops {
+ #define I915_GEM_OBJECT_HAS_STRUCT_PAGE BIT(0)
+ #define I915_GEM_OBJECT_IS_SHRINKABLE BIT(1)
+ #define I915_GEM_OBJECT_IS_PROXY BIT(2)
+-#define I915_GEM_OBJECT_ASYNC_CANCEL BIT(3)
++#define I915_GEM_OBJECT_NO_GGTT BIT(3)
++#define I915_GEM_OBJECT_ASYNC_CANCEL BIT(4)
+
+ /* Interface between the GEM object and its backing storage.
+ * get_pages() is called once prior to the use of the associated set
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+index 528b61678334..cd30e83c3205 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+@@ -694,6 +694,7 @@ i915_gem_userptr_dmabuf_export(struct drm_i915_gem_object *obj)
+ static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = {
+ .flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
+ I915_GEM_OBJECT_IS_SHRINKABLE |
++ I915_GEM_OBJECT_NO_GGTT |
+ I915_GEM_OBJECT_ASYNC_CANCEL,
+ .get_pages = i915_gem_userptr_get_pages,
+ .put_pages = i915_gem_userptr_put_pages,
+diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
+index 8a659d3d7435..7f6af4ca0968 100644
+--- a/drivers/gpu/drm/i915/i915_gem.c
++++ b/drivers/gpu/drm/i915/i915_gem.c
+@@ -1030,6 +1030,9 @@ i915_gem_object_ggtt_pin(struct drm_i915_gem_object *obj,
+
+ lockdep_assert_held(&obj->base.dev->struct_mutex);
+
++ if (i915_gem_object_never_bind_ggtt(obj))
++ return ERR_PTR(-ENODEV);
++
+ if (flags & PIN_MAPPABLE &&
+ (!view || view->type == I915_GGTT_VIEW_NORMAL)) {
+ /* If the required space is larger than the available
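
The i915 hunks above introduce a new object-ops flag and consult it in two places: both mmap-offset creation and GGTT pinning now refuse such objects with -ENODEV, and userptr objects set the flag. A bitflag sketch with illustrative values mirroring the renumbered BIT(3)/BIT(4) masks:

    #include <stdbool.h>

    #define OBJ_HAS_STRUCT_PAGE (1u << 0)
    #define OBJ_IS_SHRINKABLE   (1u << 1)
    #define OBJ_IS_PROXY        (1u << 2)
    #define OBJ_NO_GGTT         (1u << 3)   /* new bit; ASYNC_CANCEL moves to bit 4 */
    #define OBJ_ASYNC_CANCEL    (1u << 4)

    struct obj { unsigned int flags; };

    static bool never_bind_ggtt(const struct obj *o)
    {
        return o->flags & OBJ_NO_GGTT;
    }

    int mmap_gtt(struct obj *o)     /* ggtt_pin() performs the same check */
    {
        if (never_bind_ggtt(o))
            return -19;             /* -ENODEV */
        return 0;
    }
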
+diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
+index 9bb9260d9181..b05c7c513436 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_job.c
++++ b/drivers/gpu/drm/panfrost/panfrost_job.c
+@@ -384,13 +384,19 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
+ job_read(pfdev, JS_TAIL_LO(js)),
+ sched_job);
+
+- mutex_lock(&pfdev->reset_lock);
++ if (!mutex_trylock(&pfdev->reset_lock))
++ return;
+
+- for (i = 0; i < NUM_JOB_SLOTS; i++)
+- drm_sched_stop(&pfdev->js->queue[i].sched, sched_job);
++ for (i = 0; i < NUM_JOB_SLOTS; i++) {
++ struct drm_gpu_scheduler *sched = &pfdev->js->queue[i].sched;
++
++ drm_sched_stop(sched, sched_job);
++ if (js != i)
++ /* Ensure any timeouts on other slots have finished */
++ cancel_delayed_work_sync(&sched->work_tdr);
++ }
+
+- if (sched_job)
+- drm_sched_increase_karma(sched_job);
++ drm_sched_increase_karma(sched_job);
+
+ /* panfrost_core_dump(pfdev); */
+
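
The panfrost fix replaces mutex_lock() with mutex_trylock() so that timeout handlers firing on several job slots at once cannot pile up behind the reset: whichever handler takes the lock resets everything, and the others simply return. A pthread sketch, with stub helpers standing in for drm_sched_stop() and cancel_delayed_work_sync():

    #include <pthread.h>

    static pthread_mutex_t reset_lock = PTHREAD_MUTEX_INITIALIZER;

    static void stop_slot(int slot)          { (void)slot; }
    static void wait_other_timeout(int slot) { (void)slot; }

    void job_timedout(int my_slot, int nslots)
    {
        /* Another slot's handler already owns the reset; it will reset
         * the whole GPU anyway, so this handler can back off. */
        if (pthread_mutex_trylock(&reset_lock) != 0)
            return;

        for (int i = 0; i < nslots; i++) {
            stop_slot(i);
            if (i != my_slot)
                wait_other_timeout(i);  /* cancel_delayed_work_sync() analogue */
        }

        pthread_mutex_unlock(&reset_lock);
    }
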
+diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c
+index 5cc0fbb04ab1..7033f3a38c87 100644
+--- a/drivers/gpu/drm/radeon/radeon_drv.c
++++ b/drivers/gpu/drm/radeon/radeon_drv.c
+@@ -380,19 +380,11 @@ radeon_pci_remove(struct pci_dev *pdev)
+ static void
+ radeon_pci_shutdown(struct pci_dev *pdev)
+ {
+- struct drm_device *ddev = pci_get_drvdata(pdev);
+-
+ /* if we are running in a VM, make sure the device
+ * torn down properly on reboot/shutdown
+ */
+ if (radeon_device_is_virtual())
+ radeon_pci_remove(pdev);
+-
+- /* Some adapters need to be suspended before a
+- * shutdown occurs in order to prevent an error
+- * during kexec.
+- */
+- radeon_suspend_kms(ddev, true, true, false);
+ }
+
+ static int radeon_pmops_suspend(struct device *dev)
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_writeback.c b/drivers/gpu/drm/rcar-du/rcar_du_writeback.c
+index ae07290bba6a..04efa78d70b6 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_writeback.c
++++ b/drivers/gpu/drm/rcar-du/rcar_du_writeback.c
+@@ -147,7 +147,7 @@ static int rcar_du_wb_enc_atomic_check(struct drm_encoder *encoder,
+ struct drm_device *dev = encoder->dev;
+ struct drm_framebuffer *fb;
+
+- if (!conn_state->writeback_job || !conn_state->writeback_job->fb)
++ if (!conn_state->writeback_job)
+ return 0;
+
+ fb = conn_state->writeback_job->fb;
+@@ -221,7 +221,7 @@ void rcar_du_writeback_setup(struct rcar_du_crtc *rcrtc,
+ unsigned int i;
+
+ state = rcrtc->writeback.base.state;
+- if (!state || !state->writeback_job || !state->writeback_job->fb)
++ if (!state || !state->writeback_job)
+ return;
+
+ fb = state->writeback_job->fb;
+diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
+index 6dacff49c1cc..a77cd0344d22 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
++++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
+@@ -278,15 +278,13 @@ static vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
+ else
+ ret = vmf_insert_pfn(&cvma, address, pfn);
+
+- /*
+- * Somebody beat us to this PTE or prefaulting to
+- * an already populated PTE, or prefaulting error.
+- */
+-
+- if (unlikely((ret == VM_FAULT_NOPAGE && i > 0)))
+- break;
+- else if (unlikely(ret & VM_FAULT_ERROR))
+- goto out_io_unlock;
++ /* Never error on prefaulted PTEs */
++ if (unlikely((ret & VM_FAULT_ERROR))) {
++ if (i == 0)
++ goto out_io_unlock;
++ else
++ break;
++ }
+
+ address += PAGE_SIZE;
+ if (unlikely(++page_offset >= page_last))
+diff --git a/drivers/gpu/drm/vc4/vc4_txp.c b/drivers/gpu/drm/vc4/vc4_txp.c
+index 96f91c1b4b6e..e92fa1275034 100644
+--- a/drivers/gpu/drm/vc4/vc4_txp.c
++++ b/drivers/gpu/drm/vc4/vc4_txp.c
+@@ -229,7 +229,7 @@ static int vc4_txp_connector_atomic_check(struct drm_connector *conn,
+ int i;
+
+ conn_state = drm_atomic_get_new_connector_state(state, conn);
+- if (!conn_state->writeback_job || !conn_state->writeback_job->fb)
++ if (!conn_state->writeback_job)
+ return 0;
+
+ crtc_state = drm_atomic_get_new_crtc_state(state, conn_state->crtc);
+@@ -269,8 +269,7 @@ static void vc4_txp_connector_atomic_commit(struct drm_connector *conn,
+ u32 ctrl;
+ int i;
+
+- if (WARN_ON(!conn_state->writeback_job ||
+- !conn_state->writeback_job->fb))
++ if (WARN_ON(!conn_state->writeback_job))
+ return;
+
+ mode = &conn_state->crtc->state->adjusted_mode;
+diff --git a/drivers/infiniband/hw/cxgb4/mem.c b/drivers/infiniband/hw/cxgb4/mem.c
+index aa772ee0706f..35c284af574d 100644
+--- a/drivers/infiniband/hw/cxgb4/mem.c
++++ b/drivers/infiniband/hw/cxgb4/mem.c
+@@ -275,13 +275,17 @@ static int write_tpt_entry(struct c4iw_rdev *rdev, u32 reset_tpt_entry,
+ struct sk_buff *skb, struct c4iw_wr_wait *wr_waitp)
+ {
+ int err;
+- struct fw_ri_tpte tpt;
++ struct fw_ri_tpte *tpt;
+ u32 stag_idx;
+ static atomic_t key;
+
+ if (c4iw_fatal_error(rdev))
+ return -EIO;
+
++ tpt = kmalloc(sizeof(*tpt), GFP_KERNEL);
++ if (!tpt)
++ return -ENOMEM;
++
+ stag_state = stag_state > 0;
+ stag_idx = (*stag) >> 8;
+
+@@ -291,6 +295,7 @@ static int write_tpt_entry(struct c4iw_rdev *rdev, u32 reset_tpt_entry,
+ mutex_lock(&rdev->stats.lock);
+ rdev->stats.stag.fail++;
+ mutex_unlock(&rdev->stats.lock);
++ kfree(tpt);
+ return -ENOMEM;
+ }
+ mutex_lock(&rdev->stats.lock);
+@@ -305,28 +310,28 @@ static int write_tpt_entry(struct c4iw_rdev *rdev, u32 reset_tpt_entry,
+
+ /* write TPT entry */
+ if (reset_tpt_entry)
+- memset(&tpt, 0, sizeof(tpt));
++ memset(tpt, 0, sizeof(*tpt));
+ else {
+- tpt.valid_to_pdid = cpu_to_be32(FW_RI_TPTE_VALID_F |
++ tpt->valid_to_pdid = cpu_to_be32(FW_RI_TPTE_VALID_F |
+ FW_RI_TPTE_STAGKEY_V((*stag & FW_RI_TPTE_STAGKEY_M)) |
+ FW_RI_TPTE_STAGSTATE_V(stag_state) |
+ FW_RI_TPTE_STAGTYPE_V(type) | FW_RI_TPTE_PDID_V(pdid));
+- tpt.locread_to_qpid = cpu_to_be32(FW_RI_TPTE_PERM_V(perm) |
++ tpt->locread_to_qpid = cpu_to_be32(FW_RI_TPTE_PERM_V(perm) |
+ (bind_enabled ? FW_RI_TPTE_MWBINDEN_F : 0) |
+ FW_RI_TPTE_ADDRTYPE_V((zbva ? FW_RI_ZERO_BASED_TO :
+ FW_RI_VA_BASED_TO))|
+ FW_RI_TPTE_PS_V(page_size));
+- tpt.nosnoop_pbladdr = !pbl_size ? 0 : cpu_to_be32(
++ tpt->nosnoop_pbladdr = !pbl_size ? 0 : cpu_to_be32(
+ FW_RI_TPTE_PBLADDR_V(PBL_OFF(rdev, pbl_addr)>>3));
+- tpt.len_lo = cpu_to_be32((u32)(len & 0xffffffffUL));
+- tpt.va_hi = cpu_to_be32((u32)(to >> 32));
+- tpt.va_lo_fbo = cpu_to_be32((u32)(to & 0xffffffffUL));
+- tpt.dca_mwbcnt_pstag = cpu_to_be32(0);
+- tpt.len_hi = cpu_to_be32((u32)(len >> 32));
++ tpt->len_lo = cpu_to_be32((u32)(len & 0xffffffffUL));
++ tpt->va_hi = cpu_to_be32((u32)(to >> 32));
++ tpt->va_lo_fbo = cpu_to_be32((u32)(to & 0xffffffffUL));
++ tpt->dca_mwbcnt_pstag = cpu_to_be32(0);
++ tpt->len_hi = cpu_to_be32((u32)(len >> 32));
+ }
+ err = write_adapter_mem(rdev, stag_idx +
+ (rdev->lldi.vr->stag.start >> 5),
+- sizeof(tpt), &tpt, skb, wr_waitp);
++ sizeof(*tpt), tpt, skb, wr_waitp);
+
+ if (reset_tpt_entry) {
+ c4iw_put_resource(&rdev->resource.tpt_table, stag_idx);
+@@ -334,6 +339,7 @@ static int write_tpt_entry(struct c4iw_rdev *rdev, u32 reset_tpt_entry,
+ rdev->stats.stag.cur -= 32;
+ mutex_unlock(&rdev->stats.lock);
+ }
++ kfree(tpt);
+ return err;
+ }
+
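
write_tpt_entry() moves the fw_ri_tpte off the stack and onto the heap, at the cost of a kfree() on every return path. The shape, with a stand-in struct and a stubbed write helper:

    #include <stdlib.h>
    #include <string.h>

    struct tpte { unsigned int words[8]; };       /* stand-in for fw_ri_tpte */

    static int write_adapter_mem(const struct tpte *t) { (void)t; return 0; }

    int write_tpt_entry(int reset_entry)
    {
        struct tpte *tpt = malloc(sizeof(*tpt));  /* was a stack variable */
        int err;

        if (!tpt)
            return -12;                           /* -ENOMEM */

        if (reset_entry)
            memset(tpt, 0, sizeof(*tpt));
        else
            tpt->words[0] = 0x1u;                 /* fill the real fields here */

        err = write_adapter_mem(tpt);
        free(tpt);                                /* kfree() on every return path */
        return err;
    }
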
+diff --git a/drivers/input/misc/da9063_onkey.c b/drivers/input/misc/da9063_onkey.c
+index fd355cf59397..3daf11a7df25 100644
+--- a/drivers/input/misc/da9063_onkey.c
++++ b/drivers/input/misc/da9063_onkey.c
+@@ -232,10 +232,7 @@ static int da9063_onkey_probe(struct platform_device *pdev)
+ onkey->input->phys = onkey->phys;
+ onkey->input->dev.parent = &pdev->dev;
+
+- if (onkey->key_power)
+- input_set_capability(onkey->input, EV_KEY, KEY_POWER);
+-
+- input_set_capability(onkey->input, EV_KEY, KEY_SLEEP);
++ input_set_capability(onkey->input, EV_KEY, KEY_POWER);
+
+ INIT_DELAYED_WORK(&onkey->work, da9063_poll_on);
+
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index 04fe43440a3c..2d8434b7b623 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -1827,31 +1827,6 @@ static int elantech_create_smbus(struct psmouse *psmouse,
+ leave_breadcrumbs);
+ }
+
+-static bool elantech_use_host_notify(struct psmouse *psmouse,
+- struct elantech_device_info *info)
+-{
+- if (ETP_NEW_IC_SMBUS_HOST_NOTIFY(info->fw_version))
+- return true;
+-
+- switch (info->bus) {
+- case ETP_BUS_PS2_ONLY:
+- /* expected case */
+- break;
+- case ETP_BUS_SMB_HST_NTFY_ONLY:
+- case ETP_BUS_PS2_SMB_HST_NTFY:
+- /* SMbus implementation is stable since 2018 */
+- if (dmi_get_bios_year() >= 2018)
+- return true;
+- /* fall through */
+- default:
+- psmouse_dbg(psmouse,
+- "Ignoring SMBus bus provider %d\n", info->bus);
+- break;
+- }
+-
+- return false;
+-}
+-
+ /**
+ * elantech_setup_smbus - called once the PS/2 devices are enumerated
+ * and decides to instantiate a SMBus InterTouch device.
+@@ -1871,7 +1846,7 @@ static int elantech_setup_smbus(struct psmouse *psmouse,
+ * i2c_blacklist_pnp_ids.
+ * Old ICs are up to the user to decide.
+ */
+- if (!elantech_use_host_notify(psmouse, info) ||
++ if (!ETP_NEW_IC_SMBUS_HOST_NOTIFY(info->fw_version) ||
+ psmouse_matches_pnp_id(psmouse, i2c_blacklist_pnp_ids))
+ return -ENXIO;
+ }
+@@ -1891,6 +1866,34 @@ static int elantech_setup_smbus(struct psmouse *psmouse,
+ return 0;
+ }
+
++static bool elantech_use_host_notify(struct psmouse *psmouse,
++ struct elantech_device_info *info)
++{
++ if (ETP_NEW_IC_SMBUS_HOST_NOTIFY(info->fw_version))
++ return true;
++
++ switch (info->bus) {
++ case ETP_BUS_PS2_ONLY:
++ /* expected case */
++ break;
++ case ETP_BUS_SMB_ALERT_ONLY:
++ /* fall-through */
++ case ETP_BUS_PS2_SMB_ALERT:
++ psmouse_dbg(psmouse, "Ignoring SMBus provider through alert protocol.\n");
++ break;
++ case ETP_BUS_SMB_HST_NTFY_ONLY:
++ /* fall-through */
++ case ETP_BUS_PS2_SMB_HST_NTFY:
++ return true;
++ default:
++ psmouse_dbg(psmouse,
++ "Ignoring SMBus bus provider %d.\n",
++ info->bus);
++ }
++
++ return false;
++}
++
+ int elantech_init_smbus(struct psmouse *psmouse)
+ {
+ struct elantech_device_info info;
+diff --git a/drivers/input/rmi4/rmi_driver.c b/drivers/input/rmi4/rmi_driver.c
+index 772493b1f665..190b9974526b 100644
+--- a/drivers/input/rmi4/rmi_driver.c
++++ b/drivers/input/rmi4/rmi_driver.c
+@@ -146,7 +146,7 @@ static int rmi_process_interrupt_requests(struct rmi_device *rmi_dev)
+ }
+
+ mutex_lock(&data->irq_mutex);
+- bitmap_and(data->irq_status, data->irq_status, data->current_irq_mask,
++ bitmap_and(data->irq_status, data->irq_status, data->fn_irq_bits,
+ data->irq_count);
+ /*
+ * At this point, irq_status has all bits that are set in the
+@@ -385,6 +385,8 @@ static int rmi_driver_set_irq_bits(struct rmi_device *rmi_dev,
+ bitmap_copy(data->current_irq_mask, data->new_irq_mask,
+ data->num_of_irq_regs);
+
++ bitmap_or(data->fn_irq_bits, data->fn_irq_bits, mask, data->irq_count);
++
+ error_unlock:
+ mutex_unlock(&data->irq_mutex);
+ return error;
+@@ -398,6 +400,8 @@ static int rmi_driver_clear_irq_bits(struct rmi_device *rmi_dev,
+ struct device *dev = &rmi_dev->dev;
+
+ mutex_lock(&data->irq_mutex);
++ bitmap_andnot(data->fn_irq_bits,
++ data->fn_irq_bits, mask, data->irq_count);
+ bitmap_andnot(data->new_irq_mask,
+ data->current_irq_mask, mask, data->irq_count);
+
+diff --git a/drivers/input/touchscreen/st1232.c b/drivers/input/touchscreen/st1232.c
+index 34923399ece4..1139714e72e2 100644
+--- a/drivers/input/touchscreen/st1232.c
++++ b/drivers/input/touchscreen/st1232.c
+@@ -81,8 +81,10 @@ static int st1232_ts_read_data(struct st1232_ts_data *ts)
+ for (i = 0, y = 0; i < ts->chip_info->max_fingers; i++, y += 3) {
+ finger[i].is_valid = buf[i + y] >> 7;
+ if (finger[i].is_valid) {
+- finger[i].x = ((buf[i + y] & 0x0070) << 4) | buf[i + 1];
+- finger[i].y = ((buf[i + y] & 0x0007) << 8) | buf[i + 2];
++ finger[i].x = ((buf[i + y] & 0x0070) << 4) |
++ buf[i + y + 1];
++ finger[i].y = ((buf[i + y] & 0x0007) << 8) |
++ buf[i + y + 2];
+
+ /* st1232 includes a z-axis / touch strength */
+ if (ts->chip_info->have_z)
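
The st1232 fix is an indexing bug: the coordinate bytes of finger i live at buf[i + y + 1] and buf[i + y + 2], not at offsets from the start of the buffer. A self-contained parser sketch of the layout; because y advances by 3 while i advances by 1, consecutive records sit 4 bytes apart:

    #include <stdint.h>

    struct finger { int valid; int x; int y; };

    void parse_fingers(const uint8_t *buf, struct finger *f, int max_fingers)
    {
        for (int i = 0, y = 0; i < max_fingers; i++, y += 3) {
            int base = i + y;

            f[i].valid = buf[base] >> 7;
            if (!f[i].valid)
                continue;
            /* The fix: offsets are relative to base, not the buffer start. */
            f[i].x = ((buf[base] & 0x70) << 4) | buf[base + 1];
            f[i].y = ((buf[base] & 0x07) << 8) | buf[base + 2];
        }
    }
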
+diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
+index c72c036aea76..daefc52b0ec5 100644
+--- a/drivers/irqchip/irq-sifive-plic.c
++++ b/drivers/irqchip/irq-sifive-plic.c
+@@ -97,7 +97,7 @@ static inline void plic_irq_toggle(const struct cpumask *mask,
+ }
+ }
+
+-static void plic_irq_enable(struct irq_data *d)
++static void plic_irq_unmask(struct irq_data *d)
+ {
+ unsigned int cpu = cpumask_any_and(irq_data_get_affinity_mask(d),
+ cpu_online_mask);
+@@ -106,7 +106,7 @@ static void plic_irq_enable(struct irq_data *d)
+ plic_irq_toggle(cpumask_of(cpu), d->hwirq, 1);
+ }
+
+-static void plic_irq_disable(struct irq_data *d)
++static void plic_irq_mask(struct irq_data *d)
+ {
+ plic_irq_toggle(cpu_possible_mask, d->hwirq, 0);
+ }
+@@ -125,10 +125,8 @@ static int plic_set_affinity(struct irq_data *d,
+ if (cpu >= nr_cpu_ids)
+ return -EINVAL;
+
+- if (!irqd_irq_disabled(d)) {
+- plic_irq_toggle(cpu_possible_mask, d->hwirq, 0);
+- plic_irq_toggle(cpumask_of(cpu), d->hwirq, 1);
+- }
++ plic_irq_toggle(cpu_possible_mask, d->hwirq, 0);
++ plic_irq_toggle(cpumask_of(cpu), d->hwirq, 1);
+
+ irq_data_update_effective_affinity(d, cpumask_of(cpu));
+
+@@ -136,14 +134,18 @@ static int plic_set_affinity(struct irq_data *d,
+ }
+ #endif
+
++static void plic_irq_eoi(struct irq_data *d)
++{
++ struct plic_handler *handler = this_cpu_ptr(&plic_handlers);
++
++ writel(d->hwirq, handler->hart_base + CONTEXT_CLAIM);
++}
++
+ static struct irq_chip plic_chip = {
+ .name = "SiFive PLIC",
+- /*
+- * There is no need to mask/unmask PLIC interrupts. They are "masked"
+- * by reading claim and "unmasked" when writing it back.
+- */
+- .irq_enable = plic_irq_enable,
+- .irq_disable = plic_irq_disable,
++ .irq_mask = plic_irq_mask,
++ .irq_unmask = plic_irq_unmask,
++ .irq_eoi = plic_irq_eoi,
+ #ifdef CONFIG_SMP
+ .irq_set_affinity = plic_set_affinity,
+ #endif
+@@ -152,7 +154,7 @@ static struct irq_chip plic_chip = {
+ static int plic_irqdomain_map(struct irq_domain *d, unsigned int irq,
+ irq_hw_number_t hwirq)
+ {
+- irq_set_chip_and_handler(irq, &plic_chip, handle_simple_irq);
++ irq_set_chip_and_handler(irq, &plic_chip, handle_fasteoi_irq);
+ irq_set_chip_data(irq, NULL);
+ irq_set_noprobe(irq);
+ return 0;
+@@ -188,7 +190,6 @@ static void plic_handle_irq(struct pt_regs *regs)
+ hwirq);
+ else
+ generic_handle_irq(irq);
+- writel(hwirq, claim);
+ }
+ csr_set(sie, SIE_SEIE);
+ }
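
The PLIC driver moves from irq_enable/irq_disable plus a manual claim write to mask/unmask with an irq_eoi callback and handle_fasteoi_irq, so completion is written exactly once per interrupt by the core flow instead of inside plic_handle_irq(). A sketch of the split, with fake registers in place of the memory-mapped ones:

    #include <stdint.h>

    static volatile uint32_t claim_reg;     /* hart_base + CONTEXT_CLAIM stand-in */
    static volatile uint32_t enable_bits;

    static void plic_mask(uint32_t hwirq)   { enable_bits &= ~(1u << hwirq); }
    static void plic_unmask(uint32_t hwirq) { enable_bits |=  (1u << hwirq); }
    static void plic_eoi(uint32_t hwirq)    { claim_reg = hwirq; }

    void irq_startup(uint32_t hwirq)
    {
        plic_unmask(hwirq);         /* core unmasks when the line is enabled */
    }

    /* fasteoi flow: run the handler first, then complete exactly once. */
    void handle_fasteoi(uint32_t hwirq, int line_disabled,
                        void (*handler)(uint32_t))
    {
        if (line_disabled) {
            plic_mask(hwirq);       /* leave it masked until re-enabled */
            return;
        }
        handler(hwirq);
        plic_eoi(hwirq);
    }
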
+diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
+index d249cf8ac277..8346e6d1816c 100644
+--- a/drivers/md/dm-cache-target.c
++++ b/drivers/md/dm-cache-target.c
+@@ -542,7 +542,7 @@ static void wake_migration_worker(struct cache *cache)
+
+ static struct dm_bio_prison_cell_v2 *alloc_prison_cell(struct cache *cache)
+ {
+- return dm_bio_prison_alloc_cell_v2(cache->prison, GFP_NOWAIT);
++ return dm_bio_prison_alloc_cell_v2(cache->prison, GFP_NOIO);
+ }
+
+ static void free_prison_cell(struct cache *cache, struct dm_bio_prison_cell_v2 *cell)
+@@ -554,9 +554,7 @@ static struct dm_cache_migration *alloc_migration(struct cache *cache)
+ {
+ struct dm_cache_migration *mg;
+
+- mg = mempool_alloc(&cache->migration_pool, GFP_NOWAIT);
+- if (!mg)
+- return NULL;
++ mg = mempool_alloc(&cache->migration_pool, GFP_NOIO);
+
+ memset(mg, 0, sizeof(*mg));
+
+@@ -664,10 +662,6 @@ static bool bio_detain_shared(struct cache *cache, dm_oblock_t oblock, struct bi
+ struct dm_bio_prison_cell_v2 *cell_prealloc, *cell;
+
+ cell_prealloc = alloc_prison_cell(cache); /* FIXME: allow wait if calling from worker */
+- if (!cell_prealloc) {
+- defer_bio(cache, bio);
+- return false;
+- }
+
+ build_key(oblock, end, &key);
+ r = dm_cell_get_v2(cache->prison, &key, lock_level(bio), bio, cell_prealloc, &cell);
+@@ -1493,11 +1487,6 @@ static int mg_lock_writes(struct dm_cache_migration *mg)
+ struct dm_bio_prison_cell_v2 *prealloc;
+
+ prealloc = alloc_prison_cell(cache);
+- if (!prealloc) {
+- DMERR_LIMIT("%s: alloc_prison_cell failed", cache_device_name(cache));
+- mg_complete(mg, false);
+- return -ENOMEM;
+- }
+
+ /*
+ * Prevent writes to the block, but allow reads to continue.
+@@ -1535,11 +1524,6 @@ static int mg_start(struct cache *cache, struct policy_work *op, struct bio *bio
+ }
+
+ mg = alloc_migration(cache);
+- if (!mg) {
+- policy_complete_background_work(cache->policy, op, false);
+- background_work_end(cache);
+- return -ENOMEM;
+- }
+
+ mg->op = op;
+ mg->overwrite_bio = bio;
+@@ -1628,10 +1612,6 @@ static int invalidate_lock(struct dm_cache_migration *mg)
+ struct dm_bio_prison_cell_v2 *prealloc;
+
+ prealloc = alloc_prison_cell(cache);
+- if (!prealloc) {
+- invalidate_complete(mg, false);
+- return -ENOMEM;
+- }
+
+ build_key(mg->invalidate_oblock, oblock_succ(mg->invalidate_oblock), &key);
+ r = dm_cell_lock_v2(cache->prison, &key,
+@@ -1669,10 +1649,6 @@ static int invalidate_start(struct cache *cache, dm_cblock_t cblock,
+ return -EPERM;
+
+ mg = alloc_migration(cache);
+- if (!mg) {
+- background_work_end(cache);
+- return -ENOMEM;
+- }
+
+ mg->overwrite_bio = bio;
+ mg->invalidate_cblock = cblock;
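
Switching the cell and migration allocations from GFP_NOWAIT to GFP_NOIO means they may sleep (without recursing into I/O) but can no longer fail from a mempool, which is why every NULL check at the call sites is deleted. A user-space analogue of an allocation that blocks until it succeeds:

    #include <stdlib.h>
    #include <string.h>

    struct migration { char payload[64]; };

    /* GFP_NOIO/mempool analogue: may retry, never returns NULL,
     * so callers need no error path. */
    struct migration *alloc_migration(void)
    {
        struct migration *mg;

        do {
            mg = malloc(sizeof(*mg));
        } while (!mg);

        memset(mg, 0, sizeof(*mg));
        return mg;
    }
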
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index 297bbc0f41f0..c3445d2cedb9 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -151,7 +151,7 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
+ } else {
+ pr_err("md/raid0:%s: cannot assemble multi-zone RAID0 with default_layout setting\n",
+ mdname(mddev));
+- pr_err("md/raid0: please set raid.default_layout to 1 or 2\n");
++ pr_err("md/raid0: please set raid0.default_layout to 1 or 2\n");
+ err = -ENOTSUPP;
+ goto abort;
+ }
+diff --git a/drivers/memstick/host/jmb38x_ms.c b/drivers/memstick/host/jmb38x_ms.c
+index 32747425297d..64fff6abe60e 100644
+--- a/drivers/memstick/host/jmb38x_ms.c
++++ b/drivers/memstick/host/jmb38x_ms.c
+@@ -941,7 +941,7 @@ static int jmb38x_ms_probe(struct pci_dev *pdev,
+ if (!cnt) {
+ rc = -ENODEV;
+ pci_dev_busy = 1;
+- goto err_out;
++ goto err_out_int;
+ }
+
+ jm = kzalloc(sizeof(struct jmb38x_ms)
+diff --git a/drivers/mmc/host/cqhci.c b/drivers/mmc/host/cqhci.c
+index f7bdae5354c3..5047f7343ffc 100644
+--- a/drivers/mmc/host/cqhci.c
++++ b/drivers/mmc/host/cqhci.c
+@@ -611,7 +611,8 @@ static int cqhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
+ cq_host->slot[tag].flags = 0;
+
+ cq_host->qcnt += 1;
+-
++ /* Make sure descriptors are ready before ringing the doorbell */
++ wmb();
+ cqhci_writel(cq_host, 1 << tag, CQHCI_TDBR);
+ if (!(cqhci_readl(cq_host, CQHCI_TDBR) & (1 << tag)))
+ pr_debug("%s: cqhci: doorbell not set for tag %d\n",
+diff --git a/drivers/mmc/host/mxs-mmc.c b/drivers/mmc/host/mxs-mmc.c
+index b334e81c5cab..9a0bc0c5fa4b 100644
+--- a/drivers/mmc/host/mxs-mmc.c
++++ b/drivers/mmc/host/mxs-mmc.c
+@@ -17,6 +17,7 @@
+ #include <linux/interrupt.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/dmaengine.h>
++#include <linux/dma/mxs-dma.h>
+ #include <linux/highmem.h>
+ #include <linux/clk.h>
+ #include <linux/err.h>
+@@ -266,7 +267,7 @@ static void mxs_mmc_bc(struct mxs_mmc_host *host)
+ ssp->ssp_pio_words[2] = cmd1;
+ ssp->dma_dir = DMA_NONE;
+ ssp->slave_dirn = DMA_TRANS_NONE;
+- desc = mxs_mmc_prep_dma(host, DMA_CTRL_ACK);
++ desc = mxs_mmc_prep_dma(host, MXS_DMA_CTRL_WAIT4END);
+ if (!desc)
+ goto out;
+
+@@ -311,7 +312,7 @@ static void mxs_mmc_ac(struct mxs_mmc_host *host)
+ ssp->ssp_pio_words[2] = cmd1;
+ ssp->dma_dir = DMA_NONE;
+ ssp->slave_dirn = DMA_TRANS_NONE;
+- desc = mxs_mmc_prep_dma(host, DMA_CTRL_ACK);
++ desc = mxs_mmc_prep_dma(host, MXS_DMA_CTRL_WAIT4END);
+ if (!desc)
+ goto out;
+
+@@ -441,7 +442,7 @@ static void mxs_mmc_adtc(struct mxs_mmc_host *host)
+ host->data = data;
+ ssp->dma_dir = dma_data_dir;
+ ssp->slave_dirn = slave_dirn;
+- desc = mxs_mmc_prep_dma(host, DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
++ desc = mxs_mmc_prep_dma(host, DMA_PREP_INTERRUPT | MXS_DMA_CTRL_WAIT4END);
+ if (!desc)
+ goto out;
+
+diff --git a/drivers/mmc/host/sdhci-omap.c b/drivers/mmc/host/sdhci-omap.c
+index 41c2677c587f..083e7e053c95 100644
+--- a/drivers/mmc/host/sdhci-omap.c
++++ b/drivers/mmc/host/sdhci-omap.c
+@@ -372,7 +372,7 @@ static int sdhci_omap_execute_tuning(struct mmc_host *mmc, u32 opcode)
+ * on temperature
+ */
+ if (temperature < -20000)
+- phase_delay = min(max_window + 4 * max_len - 24,
++ phase_delay = min(max_window + 4 * (max_len - 1) - 24,
+ max_window +
+ DIV_ROUND_UP(13 * max_len, 16) * 4);
+ else if (temperature < 20000)
+diff --git a/drivers/net/dsa/qca8k.c b/drivers/net/dsa/qca8k.c
+index 16f15c93a102..bbeeb8618c80 100644
+--- a/drivers/net/dsa/qca8k.c
++++ b/drivers/net/dsa/qca8k.c
+@@ -705,7 +705,7 @@ qca8k_setup(struct dsa_switch *ds)
+ BIT(0) << QCA8K_GLOBAL_FW_CTRL1_UC_DP_S);
+
+ /* Setup connection between CPU port & user ports */
+- for (i = 0; i < DSA_MAX_PORTS; i++) {
++ for (i = 0; i < QCA8K_NUM_PORTS; i++) {
+ /* CPU port gets connected to all user ports of the switch */
+ if (dsa_is_cpu_port(ds, i)) {
+ qca8k_rmw(priv, QCA8K_PORT_LOOKUP_CTRL(QCA8K_CPU_PORT),
+@@ -1074,7 +1074,7 @@ qca8k_sw_probe(struct mdio_device *mdiodev)
+ if (id != QCA8K_ID_QCA8337)
+ return -ENODEV;
+
+- priv->ds = dsa_switch_alloc(&mdiodev->dev, DSA_MAX_PORTS);
++ priv->ds = dsa_switch_alloc(&mdiodev->dev, QCA8K_NUM_PORTS);
+ if (!priv->ds)
+ return -ENOMEM;
+
+diff --git a/drivers/net/dsa/rtl8366rb.c b/drivers/net/dsa/rtl8366rb.c
+index a268085ffad2..f5cc8b0a7c74 100644
+--- a/drivers/net/dsa/rtl8366rb.c
++++ b/drivers/net/dsa/rtl8366rb.c
+@@ -507,7 +507,8 @@ static int rtl8366rb_setup_cascaded_irq(struct realtek_smi *smi)
+ irq = of_irq_get(intc, 0);
+ if (irq <= 0) {
+ dev_err(smi->dev, "failed to get parent IRQ\n");
+- return irq ? irq : -EINVAL;
++ ret = irq ? irq : -EINVAL;
++ goto out_put_node;
+ }
+
+ /* This clears the IRQ status register */
+@@ -515,7 +516,7 @@ static int rtl8366rb_setup_cascaded_irq(struct realtek_smi *smi)
+ &val);
+ if (ret) {
+ dev_err(smi->dev, "can't read interrupt status\n");
+- return ret;
++ goto out_put_node;
+ }
+
+ /* Fetch IRQ edge information from the descriptor */
+@@ -537,7 +538,7 @@ static int rtl8366rb_setup_cascaded_irq(struct realtek_smi *smi)
+ val);
+ if (ret) {
+ dev_err(smi->dev, "could not configure IRQ polarity\n");
+- return ret;
++ goto out_put_node;
+ }
+
+ ret = devm_request_threaded_irq(smi->dev, irq, NULL,
+@@ -545,7 +546,7 @@ static int rtl8366rb_setup_cascaded_irq(struct realtek_smi *smi)
+ "RTL8366RB", smi);
+ if (ret) {
+ dev_err(smi->dev, "unable to request irq: %d\n", ret);
+- return ret;
++ goto out_put_node;
+ }
+ smi->irqdomain = irq_domain_add_linear(intc,
+ RTL8366RB_NUM_INTERRUPT,
+@@ -553,12 +554,15 @@ static int rtl8366rb_setup_cascaded_irq(struct realtek_smi *smi)
+ smi);
+ if (!smi->irqdomain) {
+ dev_err(smi->dev, "failed to create IRQ domain\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto out_put_node;
+ }
+ for (i = 0; i < smi->num_ports; i++)
+ irq_set_parent(irq_create_mapping(smi->irqdomain, i), irq);
+
+- return 0;
++out_put_node:
++ of_node_put(intc);
++ return ret;
+ }
+
+ static int rtl8366rb_set_addr(struct realtek_smi *smi)
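
Every early return in rtl8366rb_setup_cascaded_irq() used to leak the intc node reference taken by of_get_child_by_name(); the fix funnels all exits through a single out_put_node label. A sketch of the goto-cleanup shape with a toy refcounted node:

    #include <stdlib.h>

    struct node { int refs; };

    static struct node *node_get(void)        /* of_get_child_by_name() analogue */
    {
        struct node *n = calloc(1, sizeof(*n));

        if (n)
            n->refs = 1;
        return n;
    }

    static void node_put(struct node *n)
    {
        if (n && --n->refs == 0)
            free(n);
    }

    static int do_step(void) { return 0; }

    int setup_cascaded_irq(void)
    {
        struct node *intc = node_get();
        int ret;

        if (!intc)
            return -22;                   /* -EINVAL */

        ret = do_step();
        if (ret)
            goto out_put_node;            /* no bare returns that leak intc */

        ret = do_step();

    out_put_node:
        node_put(intc);                   /* the of_node_put() the patch adds */
        return ret;
    }
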
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_main.c b/drivers/net/ethernet/aquantia/atlantic/aq_main.c
+index b4a0fb281e69..bb65dd39f847 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_main.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_main.c
+@@ -194,9 +194,7 @@ static void aq_ndev_set_multicast_settings(struct net_device *ndev)
+ {
+ struct aq_nic_s *aq_nic = netdev_priv(ndev);
+
+- aq_nic_set_packet_filter(aq_nic, ndev->flags);
+-
+- aq_nic_set_multicast_list(aq_nic, ndev);
++ (void)aq_nic_set_multicast_list(aq_nic, ndev);
+ }
+
+ static int aq_ndo_vlan_rx_add_vid(struct net_device *ndev, __be16 proto,
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+index 8f66e7817811..2a18439b36fb 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+@@ -631,9 +631,12 @@ err_exit:
+
+ int aq_nic_set_multicast_list(struct aq_nic_s *self, struct net_device *ndev)
+ {
+- unsigned int packet_filter = self->packet_filter;
++ const struct aq_hw_ops *hw_ops = self->aq_hw_ops;
++ struct aq_nic_cfg_s *cfg = &self->aq_nic_cfg;
++ unsigned int packet_filter = ndev->flags;
+ struct netdev_hw_addr *ha = NULL;
+ unsigned int i = 0U;
++ int err = 0;
+
+ self->mc_list.count = 0;
+ if (netdev_uc_count(ndev) > AQ_HW_MULTICAST_ADDRESS_MAX) {
+@@ -641,29 +644,26 @@ int aq_nic_set_multicast_list(struct aq_nic_s *self, struct net_device *ndev)
+ } else {
+ netdev_for_each_uc_addr(ha, ndev) {
+ ether_addr_copy(self->mc_list.ar[i++], ha->addr);
+-
+- if (i >= AQ_HW_MULTICAST_ADDRESS_MAX)
+- break;
+ }
+ }
+
+- if (i + netdev_mc_count(ndev) > AQ_HW_MULTICAST_ADDRESS_MAX) {
+- packet_filter |= IFF_ALLMULTI;
+- } else {
+- netdev_for_each_mc_addr(ha, ndev) {
+- ether_addr_copy(self->mc_list.ar[i++], ha->addr);
+-
+- if (i >= AQ_HW_MULTICAST_ADDRESS_MAX)
+- break;
++ cfg->is_mc_list_enabled = !!(packet_filter & IFF_MULTICAST);
++ if (cfg->is_mc_list_enabled) {
++ if (i + netdev_mc_count(ndev) > AQ_HW_MULTICAST_ADDRESS_MAX) {
++ packet_filter |= IFF_ALLMULTI;
++ } else {
++ netdev_for_each_mc_addr(ha, ndev) {
++ ether_addr_copy(self->mc_list.ar[i++],
++ ha->addr);
++ }
+ }
+ }
+
+ if (i > 0 && i <= AQ_HW_MULTICAST_ADDRESS_MAX) {
+- packet_filter |= IFF_MULTICAST;
+ self->mc_list.count = i;
+- self->aq_hw_ops->hw_multicast_list_set(self->aq_hw,
+- self->mc_list.ar,
+- self->mc_list.count);
++ err = hw_ops->hw_multicast_list_set(self->aq_hw,
++ self->mc_list.ar,
++ self->mc_list.count);
+ }
+ return aq_nic_set_packet_filter(self, packet_filter);
+ }
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+index 3901d7994ca1..76bdbe1596d6 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+@@ -313,6 +313,7 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
+ break;
+
+ buff->is_error |= buff_->is_error;
++ buff->is_cso_err |= buff_->is_cso_err;
+
+ } while (!buff_->is_eop);
+
+@@ -320,7 +321,7 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
+ err = 0;
+ goto err_exit;
+ }
+- if (buff->is_error) {
++ if (buff->is_error || buff->is_cso_err) {
+ buff_ = buff;
+ do {
+ next_ = buff_->next,
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
+index 30f7fc4c97ff..2ad3fa6316ce 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
+@@ -818,14 +818,15 @@ static int hw_atl_b0_hw_packet_filter_set(struct aq_hw_s *self,
+ cfg->is_vlan_force_promisc);
+
+ hw_atl_rpfl2multicast_flr_en_set(self,
+- IS_FILTER_ENABLED(IFF_ALLMULTI), 0);
++ IS_FILTER_ENABLED(IFF_ALLMULTI) &&
++ IS_FILTER_ENABLED(IFF_MULTICAST), 0);
+
+ hw_atl_rpfl2_accept_all_mc_packets_set(self,
+- IS_FILTER_ENABLED(IFF_ALLMULTI));
++ IS_FILTER_ENABLED(IFF_ALLMULTI) &&
++ IS_FILTER_ENABLED(IFF_MULTICAST));
+
+ hw_atl_rpfl2broadcast_en_set(self, IS_FILTER_ENABLED(IFF_BROADCAST));
+
+- cfg->is_mc_list_enabled = IS_FILTER_ENABLED(IFF_MULTICAST);
+
+ for (i = HW_ATL_B0_MAC_MIN; i < HW_ATL_B0_MAC_MAX; ++i)
+ hw_atl_rpfl2_uc_flr_en_set(self,
+@@ -968,14 +969,26 @@ static int hw_atl_b0_hw_interrupt_moderation_set(struct aq_hw_s *self)
+
+ static int hw_atl_b0_hw_stop(struct aq_hw_s *self)
+ {
++ int err;
++ u32 val;
++
+ hw_atl_b0_hw_irq_disable(self, HW_ATL_B0_INT_MASK);
+
+ /* Invalidate Descriptor Cache to prevent writing to the cached
+ * descriptors and to the data pointer of those descriptors
+ */
+- hw_atl_rdm_rx_dma_desc_cache_init_set(self, 1);
++ hw_atl_rdm_rx_dma_desc_cache_init_tgl(self);
+
+- return aq_hw_err_from_flags(self);
++ err = aq_hw_err_from_flags(self);
++
++ if (err)
++ goto err_exit;
++
++ readx_poll_timeout_atomic(hw_atl_rdm_rx_dma_desc_cache_init_done_get,
++ self, val, val == 1, 1000U, 10000U);
++
++err_exit:
++ return err;
+ }
+
+ static int hw_atl_b0_hw_ring_tx_stop(struct aq_hw_s *self,
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.c
+index 1149812ae463..6f340695e6bd 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.c
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.c
+@@ -606,12 +606,25 @@ void hw_atl_rpb_rx_flow_ctl_mode_set(struct aq_hw_s *aq_hw, u32 rx_flow_ctl_mode
+ HW_ATL_RPB_RX_FC_MODE_SHIFT, rx_flow_ctl_mode);
+ }
+
+-void hw_atl_rdm_rx_dma_desc_cache_init_set(struct aq_hw_s *aq_hw, u32 init)
++void hw_atl_rdm_rx_dma_desc_cache_init_tgl(struct aq_hw_s *aq_hw)
+ {
++ u32 val;
++
++ val = aq_hw_read_reg_bit(aq_hw, HW_ATL_RDM_RX_DMA_DESC_CACHE_INIT_ADR,
++ HW_ATL_RDM_RX_DMA_DESC_CACHE_INIT_MSK,
++ HW_ATL_RDM_RX_DMA_DESC_CACHE_INIT_SHIFT);
++
+ aq_hw_write_reg_bit(aq_hw, HW_ATL_RDM_RX_DMA_DESC_CACHE_INIT_ADR,
+ HW_ATL_RDM_RX_DMA_DESC_CACHE_INIT_MSK,
+ HW_ATL_RDM_RX_DMA_DESC_CACHE_INIT_SHIFT,
+- init);
++ val ^ 1);
++}
++
++u32 hw_atl_rdm_rx_dma_desc_cache_init_done_get(struct aq_hw_s *aq_hw)
++{
++ return aq_hw_read_reg_bit(aq_hw, RDM_RX_DMA_DESC_CACHE_INIT_DONE_ADR,
++ RDM_RX_DMA_DESC_CACHE_INIT_DONE_MSK,
++ RDM_RX_DMA_DESC_CACHE_INIT_DONE_SHIFT);
+ }
+
+ void hw_atl_rpb_rx_pkt_buff_size_per_tc_set(struct aq_hw_s *aq_hw,
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.h b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.h
+index 0c37abbabca5..c3ee278c3747 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.h
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.h
+@@ -313,8 +313,11 @@ void hw_atl_rpb_rx_pkt_buff_size_per_tc_set(struct aq_hw_s *aq_hw,
+ u32 rx_pkt_buff_size_per_tc,
+ u32 buffer);
+
+-/* set rdm rx dma descriptor cache init */
+-void hw_atl_rdm_rx_dma_desc_cache_init_set(struct aq_hw_s *aq_hw, u32 init);
++/* toggle rdm rx dma descriptor cache init */
++void hw_atl_rdm_rx_dma_desc_cache_init_tgl(struct aq_hw_s *aq_hw);
++
++/* get rdm rx dma descriptor cache init done */
++u32 hw_atl_rdm_rx_dma_desc_cache_init_done_get(struct aq_hw_s *aq_hw);
+
+ /* set rx xoff enable (per tc) */
+ void hw_atl_rpb_rx_xoff_en_per_tc_set(struct aq_hw_s *aq_hw, u32 rx_xoff_en_per_tc,
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h
+index c3febcdfa92e..35887ad89025 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h
+@@ -318,6 +318,25 @@
+ /* default value of bitfield rdm_desc_init_i */
+ #define HW_ATL_RDM_RX_DMA_DESC_CACHE_INIT_DEFAULT 0x0
+
++/* rdm_desc_init_done_i bitfield definitions
++ * preprocessor definitions for the bitfield rdm_desc_init_done_i.
++ * port="pif_rdm_desc_init_done_i"
++ */
++
++/* register address for bitfield rdm_desc_init_done_i */
++#define RDM_RX_DMA_DESC_CACHE_INIT_DONE_ADR 0x00005a10
++/* bitmask for bitfield rdm_desc_init_done_i */
++#define RDM_RX_DMA_DESC_CACHE_INIT_DONE_MSK 0x00000001U
++/* inverted bitmask for bitfield rdm_desc_init_done_i */
++#define RDM_RX_DMA_DESC_CACHE_INIT_DONE_MSKN 0xfffffffe
++/* lower bit position of bitfield rdm_desc_init_done_i */
++#define RDM_RX_DMA_DESC_CACHE_INIT_DONE_SHIFT 0U
++/* width of bitfield rdm_desc_init_done_i */
++#define RDM_RX_DMA_DESC_CACHE_INIT_DONE_WIDTH 1
++/* default value of bitfield rdm_desc_init_done_i */
++#define RDM_RX_DMA_DESC_CACHE_INIT_DONE_DEFAULT 0x0
++
++
+ /* rx int_desc_wrb_en bitfield definitions
+ * preprocessor definitions for the bitfield "int_desc_wrb_en".
+ * port="pif_rdm_int_desc_wrb_en_i"
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils_fw2x.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils_fw2x.c
+index da726489e3c8..7bc51f8d6f2f 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils_fw2x.c
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils_fw2x.c
+@@ -337,7 +337,7 @@ static int aq_fw2x_get_phy_temp(struct aq_hw_s *self, int *temp)
+ /* Convert PHY temperature from 1/256 degree Celsius
+ * to 1/1000 degree Celsius.
+ */
+- *temp = temp_res * 1000 / 256;
++ *temp = (temp_res & 0xFFFF) * 1000 / 256;
+
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/atheros/ag71xx.c b/drivers/net/ethernet/atheros/ag71xx.c
+index 6703960c7cf5..d1101eea15c2 100644
+--- a/drivers/net/ethernet/atheros/ag71xx.c
++++ b/drivers/net/ethernet/atheros/ag71xx.c
+@@ -526,7 +526,7 @@ static int ag71xx_mdio_probe(struct ag71xx *ag)
+ struct device *dev = &ag->pdev->dev;
+ struct net_device *ndev = ag->ndev;
+ static struct mii_bus *mii_bus;
+- struct device_node *np;
++ struct device_node *np, *mnp;
+ int err;
+
+ np = dev->of_node;
+@@ -571,7 +571,9 @@ static int ag71xx_mdio_probe(struct ag71xx *ag)
+ msleep(200);
+ }
+
+- err = of_mdiobus_register(mii_bus, np);
++ mnp = of_get_child_by_name(np, "mdio");
++ err = of_mdiobus_register(mii_bus, mnp);
++ of_node_put(mnp);
+ if (err)
+ goto mdio_err_put_clk;
+
+diff --git a/drivers/net/ethernet/broadcom/Kconfig b/drivers/net/ethernet/broadcom/Kconfig
+index e24f5d2b6afe..53055ce5dfd6 100644
+--- a/drivers/net/ethernet/broadcom/Kconfig
++++ b/drivers/net/ethernet/broadcom/Kconfig
+@@ -8,7 +8,6 @@ config NET_VENDOR_BROADCOM
+ default y
+ depends on (SSB_POSSIBLE && HAS_DMA) || PCI || BCM63XX || \
+ SIBYTE_SB1xxx_SOC
+- select DIMLIB
+ ---help---
+ If you have a network (Ethernet) chipset belonging to this class,
+ say Y.
+@@ -69,6 +68,7 @@ config BCMGENET
+ select FIXED_PHY
+ select BCM7XXX_PHY
+ select MDIO_BCM_UNIMAC
++ select DIMLIB
+ help
+ This driver supports the built-in Ethernet MACs found in the
+ Broadcom BCM7xxx Set Top Box family chipset.
+@@ -188,6 +188,7 @@ config SYSTEMPORT
+ select MII
+ select PHYLIB
+ select FIXED_PHY
++ select DIMLIB
+ help
+ This driver supports the built-in Ethernet MACs found in the
+ Broadcom BCM7xxx Set Top Box family chipset using an internal
+@@ -200,6 +201,7 @@ config BNXT
+ select LIBCRC32C
+ select NET_DEVLINK
+ select PAGE_POOL
++ select DIMLIB
+ ---help---
+ This driver supports Broadcom NetXtreme-C/E 10/25/40/50 gigabit
+ Ethernet cards. To compile this driver as a module, choose M here:
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+index 4a8fc03d82fd..dbc69d8fa05f 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+@@ -366,6 +366,7 @@ struct bcmgenet_mib_counters {
+ #define EXT_PWR_DOWN_PHY_EN (1 << 20)
+
+ #define EXT_RGMII_OOB_CTRL 0x0C
++#define RGMII_MODE_EN_V123 (1 << 0)
+ #define RGMII_LINK (1 << 4)
+ #define OOB_DISABLE (1 << 5)
+ #define RGMII_MODE_EN (1 << 6)
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index 970e478a9017..e7c291bf4ed1 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -258,7 +258,11 @@ int bcmgenet_mii_config(struct net_device *dev, bool init)
+ */
+ if (priv->ext_phy) {
+ reg = bcmgenet_ext_readl(priv, EXT_RGMII_OOB_CTRL);
+- reg |= RGMII_MODE_EN | id_mode_dis;
++ reg |= id_mode_dis;
++ if (GENET_IS_V1(priv) || GENET_IS_V2(priv) || GENET_IS_V3(priv))
++ reg |= RGMII_MODE_EN_V123;
++ else
++ reg |= RGMII_MODE_EN;
+ bcmgenet_ext_writel(priv, reg, EXT_RGMII_OOB_CTRL);
+ }
+
+@@ -273,11 +277,12 @@ int bcmgenet_mii_probe(struct net_device *dev)
+ struct bcmgenet_priv *priv = netdev_priv(dev);
+ struct device_node *dn = priv->pdev->dev.of_node;
+ struct phy_device *phydev;
+- u32 phy_flags;
++ u32 phy_flags = 0;
+ int ret;
+
+ /* Communicate the integrated PHY revision */
+- phy_flags = priv->gphy_rev;
++ if (priv->internal_phy)
++ phy_flags = priv->gphy_rev;
+
+ /* Initialize link state variables that bcmgenet_mii_setup() uses */
+ priv->old_link = -1;
+diff --git a/drivers/net/ethernet/hisilicon/hns_mdio.c b/drivers/net/ethernet/hisilicon/hns_mdio.c
+index 3e863a71c513..7df5d7d211d4 100644
+--- a/drivers/net/ethernet/hisilicon/hns_mdio.c
++++ b/drivers/net/ethernet/hisilicon/hns_mdio.c
+@@ -148,11 +148,15 @@ static int mdio_sc_cfg_reg_write(struct hns_mdio_device *mdio_dev,
+ {
+ u32 time_cnt;
+ u32 reg_value;
++ int ret;
+
+ regmap_write(mdio_dev->subctrl_vbase, cfg_reg, set_val);
+
+ for (time_cnt = MDIO_TIMEOUT; time_cnt; time_cnt--) {
+-	regmap_read(mdio_dev->subctrl_vbase, st_reg, &reg_value);
++	ret = regmap_read(mdio_dev->subctrl_vbase, st_reg, &reg_value);
++ if (ret)
++ return ret;
++
+ reg_value &= st_msk;
+ if ((!!check_st) == (!!reg_value))
+ break;
+diff --git a/drivers/net/ethernet/i825xx/lasi_82596.c b/drivers/net/ethernet/i825xx/lasi_82596.c
+index 211c5f74b4c8..aec7e98bcc85 100644
+--- a/drivers/net/ethernet/i825xx/lasi_82596.c
++++ b/drivers/net/ethernet/i825xx/lasi_82596.c
+@@ -96,6 +96,8 @@
+
+ #define OPT_SWAP_PORT 0x0001 /* Need to wordswp on the MPU port */
+
++#define LIB82596_DMA_ATTR DMA_ATTR_NON_CONSISTENT
++
+ #define DMA_WBACK(ndev, addr, len) \
+ do { dma_cache_sync((ndev)->dev.parent, (void *)addr, len, DMA_TO_DEVICE); } while (0)
+
+@@ -200,7 +202,7 @@ static int __exit lan_remove_chip(struct parisc_device *pdev)
+
+ unregister_netdev (dev);
+ dma_free_attrs(&pdev->dev, sizeof(struct i596_private), lp->dma,
+- lp->dma_addr, DMA_ATTR_NON_CONSISTENT);
++ lp->dma_addr, LIB82596_DMA_ATTR);
+ free_netdev (dev);
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/i825xx/lib82596.c b/drivers/net/ethernet/i825xx/lib82596.c
+index 1274ad24d6af..f9742af7f142 100644
+--- a/drivers/net/ethernet/i825xx/lib82596.c
++++ b/drivers/net/ethernet/i825xx/lib82596.c
+@@ -1065,7 +1065,7 @@ static int i82596_probe(struct net_device *dev)
+
+ dma = dma_alloc_attrs(dev->dev.parent, sizeof(struct i596_dma),
+ &lp->dma_addr, GFP_KERNEL,
+- DMA_ATTR_NON_CONSISTENT);
++ LIB82596_DMA_ATTR);
+ if (!dma) {
+ printk(KERN_ERR "%s: Couldn't get shared memory\n", __FILE__);
+ return -ENOMEM;
+@@ -1087,7 +1087,7 @@ static int i82596_probe(struct net_device *dev)
+ i = register_netdev(dev);
+ if (i) {
+ dma_free_attrs(dev->dev.parent, sizeof(struct i596_dma),
+- dma, lp->dma_addr, DMA_ATTR_NON_CONSISTENT);
++ dma, lp->dma_addr, LIB82596_DMA_ATTR);
+ return i;
+ }
+
+diff --git a/drivers/net/ethernet/i825xx/sni_82596.c b/drivers/net/ethernet/i825xx/sni_82596.c
+index 6eb6c2ff7f09..6436a98c5953 100644
+--- a/drivers/net/ethernet/i825xx/sni_82596.c
++++ b/drivers/net/ethernet/i825xx/sni_82596.c
+@@ -24,6 +24,8 @@
+
+ static const char sni_82596_string[] = "snirm_82596";
+
++#define LIB82596_DMA_ATTR 0
++
+ #define DMA_WBACK(priv, addr, len) do { } while (0)
+ #define DMA_INV(priv, addr, len) do { } while (0)
+ #define DMA_WBACK_INV(priv, addr, len) do { } while (0)
+@@ -152,7 +154,7 @@ static int sni_82596_driver_remove(struct platform_device *pdev)
+
+ unregister_netdev(dev);
+ dma_free_attrs(dev->dev.parent, sizeof(struct i596_private), lp->dma,
+- lp->dma_addr, DMA_ATTR_NON_CONSISTENT);
++ lp->dma_addr, LIB82596_DMA_ATTR);
+ iounmap(lp->ca);
+ iounmap(lp->mpu_port);
+ free_netdev (dev);
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 5cb55ea671e3..964e7d62f4b1 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -2772,12 +2772,10 @@ static int enable_scrq_irq(struct ibmvnic_adapter *adapter,
+
+ if (adapter->resetting &&
+ adapter->reset_reason == VNIC_RESET_MOBILITY) {
+- u64 val = (0xff000000) | scrq->hw_irq;
++ struct irq_desc *desc = irq_to_desc(scrq->irq);
++ struct irq_chip *chip = irq_desc_get_chip(desc);
+
+- rc = plpar_hcall_norets(H_EOI, val);
+- if (rc)
+- dev_err(dev, "H_EOI FAILED irq 0x%llx. rc=%ld\n",
+- val, rc);
++ chip->irq_eoi(&desc->irq_data);
+ }
+
+ rc = plpar_hcall_norets(H_VIOCTL, adapter->vdev->unit_address,
+diff --git a/drivers/net/ethernet/mscc/ocelot_board.c b/drivers/net/ethernet/mscc/ocelot_board.c
+index 2451d4a96490..041fb9f38eca 100644
+--- a/drivers/net/ethernet/mscc/ocelot_board.c
++++ b/drivers/net/ethernet/mscc/ocelot_board.c
+@@ -287,13 +287,14 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
+ continue;
+
+ phy = of_phy_find_device(phy_node);
++ of_node_put(phy_node);
+ if (!phy)
+ continue;
+
+ err = ocelot_probe_port(ocelot, port, regs, phy);
+ if (err) {
+ of_node_put(portnp);
+- return err;
++ goto out_put_ports;
+ }
+
+ phy_mode = of_get_phy_mode(portnp);
+@@ -321,7 +322,8 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
+ "invalid phy mode for port%d, (Q)SGMII only\n",
+ port);
+ of_node_put(portnp);
+- return -EINVAL;
++ err = -EINVAL;
++ goto out_put_ports;
+ }
+
+ serdes = devm_of_phy_get(ocelot->dev, portnp, NULL);
+@@ -334,7 +336,8 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
+ "missing SerDes phys for port%d\n",
+ port);
+
+- goto err_probe_ports;
++ of_node_put(portnp);
++ goto out_put_ports;
+ }
+
+ ocelot->ports[port]->serdes = serdes;
+@@ -346,9 +349,8 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
+
+ dev_info(&pdev->dev, "Ocelot switch probed\n");
+
+- return 0;
+-
+-err_probe_ports:
++out_put_ports:
++ of_node_put(ports);
+ return err;
+ }
+
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+index fc9954e4a772..9c73fb759b57 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+@@ -407,8 +407,11 @@ static void dwmac4_set_filter(struct mac_device_info *hw,
+ int numhashregs = (hw->multicast_filter_bins >> 5);
+ int mcbitslog2 = hw->mcast_bits_log2;
+ unsigned int value;
++ u32 mc_filter[8];
+ int i;
+
++ memset(mc_filter, 0, sizeof(mc_filter));
++
+ value = readl(ioaddr + GMAC_PACKET_FILTER);
+ value &= ~GMAC_PACKET_FILTER_HMC;
+ value &= ~GMAC_PACKET_FILTER_HPF;
+@@ -422,16 +425,13 @@ static void dwmac4_set_filter(struct mac_device_info *hw,
+ /* Pass all multi */
+ value |= GMAC_PACKET_FILTER_PM;
+ /* Set all the bits of the HASH tab */
+- for (i = 0; i < numhashregs; i++)
+- writel(0xffffffff, ioaddr + GMAC_HASH_TAB(i));
++ memset(mc_filter, 0xff, sizeof(mc_filter));
+ } else if (!netdev_mc_empty(dev)) {
+ struct netdev_hw_addr *ha;
+- u32 mc_filter[8];
+
+ /* Hash filter for multicast */
+ value |= GMAC_PACKET_FILTER_HMC;
+
+- memset(mc_filter, 0, sizeof(mc_filter));
+ netdev_for_each_mc_addr(ha, dev) {
+ /* The upper n bits of the calculated CRC are used to
+ * index the contents of the hash table. The number of
+@@ -446,10 +446,11 @@ static void dwmac4_set_filter(struct mac_device_info *hw,
+ */
+ mc_filter[bit_nr >> 5] |= (1 << (bit_nr & 0x1f));
+ }
+- for (i = 0; i < numhashregs; i++)
+- writel(mc_filter[i], ioaddr + GMAC_HASH_TAB(i));
+ }
+
++ for (i = 0; i < numhashregs; i++)
++ writel(mc_filter[i], ioaddr + GMAC_HASH_TAB(i));
++
+ value |= GMAC_PACKET_FILTER_HPF;
+
+ /* Handle multiple unicast addresses */
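
dwmac4_set_filter() now zeroes mc_filter once at the top and writes all hash registers back unconditionally, so stale hash bits are cleared whenever the multicast configuration shrinks. A sketch of the hoisted table writeback:

    #include <stdint.h>
    #include <string.h>

    #define NUM_HASH_REGS 8
    static uint32_t hash_regs[NUM_HASH_REGS];       /* GMAC_HASH_TAB stand-in */

    void set_filter(int pass_all_multi, const unsigned int *crc_bits, int n)
    {
        uint32_t mc_filter[NUM_HASH_REGS];

        memset(mc_filter, 0, sizeof(mc_filter));    /* always start clean */

        if (pass_all_multi) {
            memset(mc_filter, 0xff, sizeof(mc_filter));
        } else {
            for (int i = 0; i < n; i++)
                mc_filter[crc_bits[i] >> 5] |= 1u << (crc_bits[i] & 0x1f);
        }

        /* Unconditional writeback clears bits left over from a previous,
         * larger configuration (the bug the hunk fixes). */
        for (int i = 0; i < NUM_HASH_REGS; i++)
            hash_regs[i] = mc_filter[i];
    }
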
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
+index 85c68b7ee8c6..46d74f407aab 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
+@@ -370,7 +370,7 @@ static void dwxgmac2_set_filter(struct mac_device_info *hw,
+ dwxgmac2_set_mchash(ioaddr, mc_filter, mcbitslog2);
+
+ /* Handle multiple unicast addresses */
+- if (netdev_uc_count(dev) > XGMAC_ADDR_MAX) {
++ if (netdev_uc_count(dev) > hw->unicast_filter_entries) {
+ value |= XGMAC_FILTER_PR;
+ } else {
+ struct netdev_hw_addr *ha;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 5c4408bdc843..fe2d3029de5e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -626,6 +626,7 @@ static int stmmac_hwtstamp_set(struct net_device *dev, struct ifreq *ifr)
+ config.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
+ ptp_v2 = PTP_TCR_TSVER2ENA;
+ snap_type_sel = PTP_TCR_SNAPTYPSEL_1;
++ ts_event_en = PTP_TCR_TSEVNTENA;
+ ptp_over_ipv4_udp = PTP_TCR_TSIPV4ENA;
+ ptp_over_ipv6_udp = PTP_TCR_TSIPV6ENA;
+ ptp_over_ethernet = PTP_TCR_TSIPENA;
+@@ -4453,11 +4454,9 @@ int stmmac_suspend(struct device *dev)
+ if (!ndev || !netif_running(ndev))
+ return 0;
+
+- mutex_lock(&priv->lock);
++ phylink_mac_change(priv->phylink, false);
+
+- rtnl_lock();
+- phylink_stop(priv->phylink);
+- rtnl_unlock();
++ mutex_lock(&priv->lock);
+
+ netif_device_detach(ndev);
+ stmmac_stop_all_queues(priv);
+@@ -4472,11 +4471,19 @@ int stmmac_suspend(struct device *dev)
+ stmmac_pmt(priv, priv->hw, priv->wolopts);
+ priv->irq_wake = 1;
+ } else {
++ mutex_unlock(&priv->lock);
++ rtnl_lock();
++ phylink_stop(priv->phylink);
++ rtnl_unlock();
++ mutex_lock(&priv->lock);
++
+ stmmac_mac_set(priv, priv->ioaddr, false);
+ pinctrl_pm_select_sleep_state(priv->device);
+ /* Disable clock in case of PWM is off */
+- clk_disable(priv->plat->pclk);
+- clk_disable(priv->plat->stmmac_clk);
++ if (priv->plat->clk_ptp_ref)
++ clk_disable_unprepare(priv->plat->clk_ptp_ref);
++ clk_disable_unprepare(priv->plat->pclk);
++ clk_disable_unprepare(priv->plat->stmmac_clk);
+ }
+ mutex_unlock(&priv->lock);
+
+@@ -4539,8 +4546,10 @@ int stmmac_resume(struct device *dev)
+ } else {
+ pinctrl_pm_select_default_state(priv->device);
+ /* enable the clk previously disabled */
+- clk_enable(priv->plat->stmmac_clk);
+- clk_enable(priv->plat->pclk);
++ clk_prepare_enable(priv->plat->stmmac_clk);
++ clk_prepare_enable(priv->plat->pclk);
++ if (priv->plat->clk_ptp_ref)
++ clk_prepare_enable(priv->plat->clk_ptp_ref);
+ /* reset the phy so that it's ready */
+ if (priv->mii)
+ stmmac_mdio_reset(priv->mii);
+@@ -4562,12 +4571,16 @@ int stmmac_resume(struct device *dev)
+
+ stmmac_start_all_queues(priv);
+
+- rtnl_lock();
+- phylink_start(priv->phylink);
+- rtnl_unlock();
+-
+ mutex_unlock(&priv->lock);
+
++ if (!device_may_wakeup(priv->device)) {
++ rtnl_lock();
++ phylink_start(priv->phylink);
++ rtnl_unlock();
++ }
++
++ phylink_mac_change(priv->phylink, true);
++
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(stmmac_resume);
+diff --git a/drivers/net/ieee802154/ca8210.c b/drivers/net/ieee802154/ca8210.c
+index b188fce3f641..658b399ac9ea 100644
+--- a/drivers/net/ieee802154/ca8210.c
++++ b/drivers/net/ieee802154/ca8210.c
+@@ -3152,12 +3152,12 @@ static int ca8210_probe(struct spi_device *spi_device)
+ goto error;
+ }
+
++ priv->spi->dev.platform_data = pdata;
+ ret = ca8210_get_platform_data(priv->spi, pdata);
+ if (ret) {
+ dev_crit(&spi_device->dev, "ca8210_get_platform_data failed\n");
+ goto error;
+ }
+- priv->spi->dev.platform_data = pdata;
+
+ ret = ca8210_dev_com_init(priv);
+ if (ret) {
+diff --git a/drivers/net/netdevsim/fib.c b/drivers/net/netdevsim/fib.c
+index f61d094746c0..1a251f76d09b 100644
+--- a/drivers/net/netdevsim/fib.c
++++ b/drivers/net/netdevsim/fib.c
+@@ -241,8 +241,8 @@ static struct pernet_operations nsim_fib_net_ops = {
+
+ void nsim_fib_exit(void)
+ {
+- unregister_pernet_subsys(&nsim_fib_net_ops);
+ unregister_fib_notifier(&nsim_fib_nb);
++ unregister_pernet_subsys(&nsim_fib_net_ops);
+ }
+
+ int nsim_fib_init(void)
+@@ -258,6 +258,7 @@ int nsim_fib_init(void)
+ err = register_fib_notifier(&nsim_fib_nb, nsim_fib_dump_inconsistent);
+ if (err < 0) {
+ pr_err("Failed to register fib notifier\n");
++ unregister_pernet_subsys(&nsim_fib_net_ops);
+ goto err_out;
+ }
+
+diff --git a/drivers/net/phy/mdio_device.c b/drivers/net/phy/mdio_device.c
+index e282600bd83e..c1d345c3cab3 100644
+--- a/drivers/net/phy/mdio_device.c
++++ b/drivers/net/phy/mdio_device.c
+@@ -121,7 +121,7 @@ void mdio_device_reset(struct mdio_device *mdiodev, int value)
+ return;
+
+ if (mdiodev->reset_gpio)
+- gpiod_set_value(mdiodev->reset_gpio, value);
++ gpiod_set_value_cansleep(mdiodev->reset_gpio, value);
+
+ if (mdiodev->reset_ctrl) {
+ if (value)
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 2fea5541c35a..63dedec0433d 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -341,6 +341,35 @@ static int ksz8041_config_aneg(struct phy_device *phydev)
+ return genphy_config_aneg(phydev);
+ }
+
++static int ksz8051_ksz8795_match_phy_device(struct phy_device *phydev,
++ const u32 ksz_phy_id)
++{
++ int ret;
++
++ if ((phydev->phy_id & MICREL_PHY_ID_MASK) != ksz_phy_id)
++ return 0;
++
++ ret = phy_read(phydev, MII_BMSR);
++ if (ret < 0)
++ return ret;
++
++ /* KSZ8051 PHY and KSZ8794/KSZ8795/KSZ8765 switch share the same
++	 * exact PHY ID. However, they can be told apart by the presence
++	 * of the extended capability registers. The KSZ8051 PHY has them
++	 * while the switch does not.
++ */
++ ret &= BMSR_ERCAP;
++ if (ksz_phy_id == PHY_ID_KSZ8051)
++ return ret;
++ else
++ return !ret;
++}
++
++static int ksz8051_match_phy_device(struct phy_device *phydev)
++{
++ return ksz8051_ksz8795_match_phy_device(phydev, PHY_ID_KSZ8051);
++}
++
+ static int ksz8081_config_init(struct phy_device *phydev)
+ {
+ /* KSZPHY_OMSO_FACTORY_TEST is set at de-assertion of the reset line
+@@ -364,6 +393,11 @@ static int ksz8061_config_init(struct phy_device *phydev)
+ return kszphy_config_init(phydev);
+ }
+
++static int ksz8795_match_phy_device(struct phy_device *phydev)
++{
++ return ksz8051_ksz8795_match_phy_device(phydev, PHY_ID_KSZ87XX);
++}
++
+ static int ksz9021_load_values_from_of(struct phy_device *phydev,
+ const struct device_node *of_node,
+ u16 reg,
+@@ -1017,8 +1051,6 @@ static struct phy_driver ksphy_driver[] = {
+ .suspend = genphy_suspend,
+ .resume = genphy_resume,
+ }, {
+- .phy_id = PHY_ID_KSZ8051,
+- .phy_id_mask = MICREL_PHY_ID_MASK,
+ .name = "Micrel KSZ8051",
+ /* PHY_BASIC_FEATURES */
+ .driver_data = &ksz8051_type,
+@@ -1029,6 +1061,7 @@ static struct phy_driver ksphy_driver[] = {
+ .get_sset_count = kszphy_get_sset_count,
+ .get_strings = kszphy_get_strings,
+ .get_stats = kszphy_get_stats,
++ .match_phy_device = ksz8051_match_phy_device,
+ .suspend = genphy_suspend,
+ .resume = genphy_resume,
+ }, {
+@@ -1141,13 +1174,12 @@ static struct phy_driver ksphy_driver[] = {
+ .suspend = genphy_suspend,
+ .resume = genphy_resume,
+ }, {
+- .phy_id = PHY_ID_KSZ8795,
+- .phy_id_mask = MICREL_PHY_ID_MASK,
+- .name = "Micrel KSZ8795",
++ .name = "Micrel KSZ87XX Switch",
+ /* PHY_BASIC_FEATURES */
+ .config_init = kszphy_config_init,
+ .config_aneg = ksz8873mll_config_aneg,
+ .read_status = ksz8873mll_read_status,
++ .match_phy_device = ksz8795_match_phy_device,
+ .suspend = genphy_suspend,
+ .resume = genphy_resume,
+ }, {
+diff --git a/drivers/net/phy/phy-c45.c b/drivers/net/phy/phy-c45.c
+index 7935593debb1..a1caeee12236 100644
+--- a/drivers/net/phy/phy-c45.c
++++ b/drivers/net/phy/phy-c45.c
+@@ -323,6 +323,8 @@ int genphy_c45_read_pma(struct phy_device *phydev)
+ {
+ int val;
+
++ linkmode_zero(phydev->lp_advertising);
++
+ val = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_CTRL1);
+ if (val < 0)
+ return val;
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index 6b0f89369b46..0ff8df35c779 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -457,6 +457,11 @@ int phy_mii_ioctl(struct phy_device *phydev, struct ifreq *ifr, int cmd)
+ val);
+ change_autoneg = true;
+ break;
++ case MII_CTRL1000:
++ mii_ctrl1000_mod_linkmode_adv_t(phydev->advertising,
++ val);
++ change_autoneg = true;
++ break;
+ default:
+ /* do nothing */
+ break;
+@@ -561,9 +566,6 @@ int phy_start_aneg(struct phy_device *phydev)
+ if (AUTONEG_DISABLE == phydev->autoneg)
+ phy_sanitize_settings(phydev);
+
+- /* Invalidate LP advertising flags */
+- linkmode_zero(phydev->lp_advertising);
+-
+ err = phy_config_aneg(phydev);
+ if (err < 0)
+ goto out_unlock;
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 27ebc2c6c2d0..d6c9350b65bf 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -1823,7 +1823,14 @@ int genphy_read_status(struct phy_device *phydev)
+
+ linkmode_zero(phydev->lp_advertising);
+
+- if (phydev->autoneg == AUTONEG_ENABLE && phydev->autoneg_complete) {
++ if (phydev->autoneg == AUTONEG_ENABLE) {
++ if (!phydev->autoneg_complete) {
++ mii_stat1000_mod_linkmode_lpa_t(phydev->lp_advertising,
++ 0);
++ mii_lpa_mod_linkmode_lpa_t(phydev->lp_advertising, 0);
++ return 0;
++ }
++
+ if (phydev->is_gigabit_capable) {
+ lpagb = phy_read(phydev, MII_STAT1000);
+ if (lpagb < 0)
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 04137ac373b0..9eedc0714422 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -4533,10 +4533,9 @@ static int rtl8152_reset_resume(struct usb_interface *intf)
+ struct r8152 *tp = usb_get_intfdata(intf);
+
+ clear_bit(SELECTIVE_SUSPEND, &tp->flags);
+- mutex_lock(&tp->control);
+ tp->rtl_ops.init(tp);
+ queue_delayed_work(system_long_wq, &tp->hw_phy_work, 0);
+- mutex_unlock(&tp->control);
++ set_ethernet_addr(tp);
+ return rtl8152_resume(intf);
+ }
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 3b12e7ad35e1..acbadfdbdd3f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -513,31 +513,33 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0x24FD, 0x9074, iwl8265_2ac_cfg)},
+
+ /* 9000 Series */
+- {IWL_PCI_DEVICE(0x02F0, 0x0030, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x0034, iwl9560_2ac_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x0038, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x003C, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x0060, iwl9461_2ac_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x0064, iwl9461_2ac_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x00A0, iwl9462_2ac_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x00A4, iwl9462_2ac_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x0230, iwl9560_2ac_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x0234, iwl9560_2ac_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x0238, iwl9560_2ac_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x023C, iwl9560_2ac_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x0260, iwl9461_2ac_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x0264, iwl9461_2ac_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x02A0, iwl9462_2ac_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x02A4, iwl9462_2ac_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x1551, iwl9560_killer_s_2ac_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x1552, iwl9560_killer_i_2ac_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x2030, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x2034, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x4030, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x4034, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x40A4, iwl9462_2ac_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x4234, iwl9560_2ac_cfg_quz_a0_jf_b0_soc)},
+- {IWL_PCI_DEVICE(0x02F0, 0x42A4, iwl9462_2ac_cfg_quz_a0_jf_b0_soc)},
++ {IWL_PCI_DEVICE(0x02F0, 0x0030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x0034, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x0038, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x003C, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x0060, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x0064, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x00A0, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x00A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x0230, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x0234, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x0238, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x023C, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x0260, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x0264, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x02A0, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x02A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x1030, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x1551, killer1550s_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x1552, killer1550i_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x2030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x2034, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x4030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x4034, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x40A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x4234, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x02F0, 0x42A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++
+ {IWL_PCI_DEVICE(0x06F0, 0x0030, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x0034, iwl9560_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x0038, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+@@ -643,34 +645,34 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0x2720, 0x40A4, iwl9462_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x2720, 0x4234, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x2720, 0x42A4, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x0030, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x0034, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x0038, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x003C, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x0060, iwl9460_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x0064, iwl9461_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x00A0, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x00A4, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x0230, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x0234, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x0238, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x023C, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x0260, iwl9461_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x0264, iwl9461_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x02A0, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x02A4, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x1010, iwl9260_2ac_cfg)},
+- {IWL_PCI_DEVICE(0x30DC, 0x1030, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x1210, iwl9260_2ac_cfg)},
+- {IWL_PCI_DEVICE(0x30DC, 0x1551, iwl9560_killer_s_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x1552, iwl9560_killer_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x2030, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x2034, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x4030, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x4034, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x40A4, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x4234, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x30DC, 0x42A4, iwl9462_2ac_cfg_soc)},
++
++ {IWL_PCI_DEVICE(0x30DC, 0x0030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x0034, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x0038, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x003C, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x0060, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x0064, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x00A0, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x00A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x0230, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x0234, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x0238, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x023C, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x0260, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x0264, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x02A0, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x02A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x1030, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x1551, killer1550s_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x1552, killer1550i_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x2030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x2034, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x4030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x4034, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x40A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x4234, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x30DC, 0x42A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++
+ {IWL_PCI_DEVICE(0x31DC, 0x0030, iwl9560_2ac_160_cfg_shared_clk)},
+ {IWL_PCI_DEVICE(0x31DC, 0x0034, iwl9560_2ac_cfg_shared_clk)},
+ {IWL_PCI_DEVICE(0x31DC, 0x0038, iwl9560_2ac_160_cfg_shared_clk)},
+@@ -726,62 +728,60 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0x34F0, 0x4234, iwl9560_2ac_cfg_qu_b0_jf_b0)},
+ {IWL_PCI_DEVICE(0x34F0, 0x42A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
+
+- {IWL_PCI_DEVICE(0x3DF0, 0x0030, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x0034, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x0038, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x003C, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x0060, iwl9461_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x0064, iwl9461_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x00A0, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x00A4, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x0230, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x0234, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x0238, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x023C, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x0260, iwl9461_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x0264, iwl9461_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x02A0, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x02A4, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x1010, iwl9260_2ac_cfg)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x1030, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x1210, iwl9260_2ac_cfg)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x1551, iwl9560_killer_s_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x1552, iwl9560_killer_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x2030, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x2034, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x4030, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x4034, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x40A4, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x4234, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x3DF0, 0x42A4, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x0030, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x0034, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x0038, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x003C, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x0060, iwl9461_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x0064, iwl9461_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x00A0, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x00A4, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x0230, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x0234, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x0238, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x023C, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x0260, iwl9461_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x0264, iwl9461_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x02A0, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x02A4, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x1010, iwl9260_2ac_cfg)},
+- {IWL_PCI_DEVICE(0x43F0, 0x1030, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x1210, iwl9260_2ac_cfg)},
+- {IWL_PCI_DEVICE(0x43F0, 0x1551, iwl9560_killer_s_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x1552, iwl9560_killer_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x2030, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x2034, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x4030, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x4034, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x40A4, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x4234, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x43F0, 0x42A4, iwl9462_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x0030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x0034, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x0038, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x003C, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x0060, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x0064, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x00A0, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x00A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x0230, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x0234, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x0238, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x023C, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x0260, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x0264, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x02A0, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x02A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x1030, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x1551, killer1550s_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x1552, killer1550i_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x2030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x2034, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x4030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x4034, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x40A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x4234, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x3DF0, 0x42A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++
++ {IWL_PCI_DEVICE(0x43F0, 0x0030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x0034, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x0038, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x003C, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x0060, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x0064, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x00A0, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x00A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x0230, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x0234, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x0238, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x023C, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x0260, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x0264, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x02A0, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x02A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x1030, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x1551, killer1550s_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x1552, killer1550i_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x2030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x2034, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x4030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x4034, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x40A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x4234, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x43F0, 0x42A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++
+ {IWL_PCI_DEVICE(0x9DF0, 0x0000, iwl9460_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x9DF0, 0x0010, iwl9460_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x9DF0, 0x0030, iwl9560_2ac_160_cfg_soc)},
+@@ -821,34 +821,34 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0x9DF0, 0x40A4, iwl9462_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x9DF0, 0x4234, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x9DF0, 0x42A4, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x0030, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x0034, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x0038, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x003C, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x0060, iwl9461_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x0064, iwl9461_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x00A0, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x00A4, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x0230, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x0234, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x0238, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x023C, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x0260, iwl9461_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x0264, iwl9461_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x02A0, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x02A4, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x1010, iwl9260_2ac_cfg)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x1030, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x1210, iwl9260_2ac_cfg)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x1551, iwl9560_killer_s_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x1552, iwl9560_killer_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x2030, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x2034, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x4030, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x4034, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x40A4, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x4234, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0xA0F0, 0x42A4, iwl9462_2ac_cfg_soc)},
++
++ {IWL_PCI_DEVICE(0xA0F0, 0x0030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x0034, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x0038, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x003C, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x0060, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x0064, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x00A0, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x00A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x0230, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x0234, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x0238, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x023C, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x0260, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x0264, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x02A0, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x02A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x1030, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x1551, killer1550s_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x1552, killer1550i_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x2030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x2034, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x4030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x4034, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x40A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x4234, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0xA0F0, 0x42A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++
+ {IWL_PCI_DEVICE(0xA370, 0x0030, iwl9560_2ac_160_cfg_soc)},
+ {IWL_PCI_DEVICE(0xA370, 0x0034, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0xA370, 0x0038, iwl9560_2ac_160_cfg_soc)},
+diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
+index 240f762b3749..103ed00775eb 100644
+--- a/drivers/net/xen-netback/interface.c
++++ b/drivers/net/xen-netback/interface.c
+@@ -719,7 +719,6 @@ err_unmap:
+ xenvif_unmap_frontend_data_rings(queue);
+ netif_napi_del(&queue->napi);
+ err:
+- module_put(THIS_MODULE);
+ return err;
+ }
+
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index d3d6b7bd6903..36a5ed1eacbe 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -103,10 +103,13 @@ static void nvme_set_queue_dying(struct nvme_ns *ns)
+ */
+ if (!ns->disk || test_and_set_bit(NVME_NS_DEAD, &ns->flags))
+ return;
+- revalidate_disk(ns->disk);
+ blk_set_queue_dying(ns->queue);
+ /* Forcibly unquiesce queues to avoid blocking dispatch */
+ blk_mq_unquiesce_queue(ns->queue);
++ /*
++	 * Revalidate after unblocking dispatchers that may be holding bd_mutex
++ */
++ revalidate_disk(ns->disk);
+ }
+
+ static void nvme_queue_scan(struct nvme_ctrl *ctrl)
+@@ -849,7 +852,7 @@ out:
+ static int nvme_submit_user_cmd(struct request_queue *q,
+ struct nvme_command *cmd, void __user *ubuffer,
+ unsigned bufflen, void __user *meta_buffer, unsigned meta_len,
+- u32 meta_seed, u32 *result, unsigned timeout)
++ u32 meta_seed, u64 *result, unsigned timeout)
+ {
+ bool write = nvme_is_write(cmd);
+ struct nvme_ns *ns = q->queuedata;
+@@ -890,7 +893,7 @@ static int nvme_submit_user_cmd(struct request_queue *q,
+ else
+ ret = nvme_req(req)->status;
+ if (result)
+- *result = le32_to_cpu(nvme_req(req)->result.u32);
++ *result = le64_to_cpu(nvme_req(req)->result.u64);
+ if (meta && !ret && !write) {
+ if (copy_to_user(meta_buffer, meta, meta_len))
+ ret = -EFAULT;
+@@ -1336,6 +1339,54 @@ static int nvme_user_cmd(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
+ struct nvme_command c;
+ unsigned timeout = 0;
+ u32 effects;
++ u64 result;
++ int status;
++
++ if (!capable(CAP_SYS_ADMIN))
++ return -EACCES;
++ if (copy_from_user(&cmd, ucmd, sizeof(cmd)))
++ return -EFAULT;
++ if (cmd.flags)
++ return -EINVAL;
++
++ memset(&c, 0, sizeof(c));
++ c.common.opcode = cmd.opcode;
++ c.common.flags = cmd.flags;
++ c.common.nsid = cpu_to_le32(cmd.nsid);
++ c.common.cdw2[0] = cpu_to_le32(cmd.cdw2);
++ c.common.cdw2[1] = cpu_to_le32(cmd.cdw3);
++ c.common.cdw10 = cpu_to_le32(cmd.cdw10);
++ c.common.cdw11 = cpu_to_le32(cmd.cdw11);
++ c.common.cdw12 = cpu_to_le32(cmd.cdw12);
++ c.common.cdw13 = cpu_to_le32(cmd.cdw13);
++ c.common.cdw14 = cpu_to_le32(cmd.cdw14);
++ c.common.cdw15 = cpu_to_le32(cmd.cdw15);
++
++ if (cmd.timeout_ms)
++ timeout = msecs_to_jiffies(cmd.timeout_ms);
++
++ effects = nvme_passthru_start(ctrl, ns, cmd.opcode);
++ status = nvme_submit_user_cmd(ns ? ns->queue : ctrl->admin_q, &c,
++ (void __user *)(uintptr_t)cmd.addr, cmd.data_len,
++ (void __user *)(uintptr_t)cmd.metadata,
++ cmd.metadata_len, 0, &result, timeout);
++ nvme_passthru_end(ctrl, effects);
++
++ if (status >= 0) {
++ if (put_user(result, &ucmd->result))
++ return -EFAULT;
++ }
++
++ return status;
++}
++
++static int nvme_user_cmd64(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
++ struct nvme_passthru_cmd64 __user *ucmd)
++{
++ struct nvme_passthru_cmd64 cmd;
++ struct nvme_command c;
++ unsigned timeout = 0;
++ u32 effects;
+ int status;
+
+ if (!capable(CAP_SYS_ADMIN))
+@@ -1406,6 +1457,41 @@ static void nvme_put_ns_from_disk(struct nvme_ns_head *head, int idx)
+ srcu_read_unlock(&head->srcu, idx);
+ }
+
++static bool is_ctrl_ioctl(unsigned int cmd)
++{
++ if (cmd == NVME_IOCTL_ADMIN_CMD || cmd == NVME_IOCTL_ADMIN64_CMD)
++ return true;
++ if (is_sed_ioctl(cmd))
++ return true;
++ return false;
++}
++
++static int nvme_handle_ctrl_ioctl(struct nvme_ns *ns, unsigned int cmd,
++ void __user *argp,
++ struct nvme_ns_head *head,
++ int srcu_idx)
++{
++ struct nvme_ctrl *ctrl = ns->ctrl;
++ int ret;
++
++ nvme_get_ctrl(ns->ctrl);
++ nvme_put_ns_from_disk(head, srcu_idx);
++
++ switch (cmd) {
++ case NVME_IOCTL_ADMIN_CMD:
++ ret = nvme_user_cmd(ctrl, NULL, argp);
++ break;
++ case NVME_IOCTL_ADMIN64_CMD:
++ ret = nvme_user_cmd64(ctrl, NULL, argp);
++ break;
++ default:
++ ret = sed_ioctl(ctrl->opal_dev, cmd, argp);
++ break;
++ }
++ nvme_put_ctrl(ctrl);
++ return ret;
++}
++
+ static int nvme_ioctl(struct block_device *bdev, fmode_t mode,
+ unsigned int cmd, unsigned long arg)
+ {
+@@ -1423,20 +1509,8 @@ static int nvme_ioctl(struct block_device *bdev, fmode_t mode,
+ 	 * separately and drop the ns SRCU reference early. This avoids a
+ * deadlock when deleting namespaces using the passthrough interface.
+ */
+- if (cmd == NVME_IOCTL_ADMIN_CMD || is_sed_ioctl(cmd)) {
+- struct nvme_ctrl *ctrl = ns->ctrl;
+-
+- nvme_get_ctrl(ns->ctrl);
+- nvme_put_ns_from_disk(head, srcu_idx);
+-
+- if (cmd == NVME_IOCTL_ADMIN_CMD)
+- ret = nvme_user_cmd(ctrl, NULL, argp);
+- else
+- ret = sed_ioctl(ctrl->opal_dev, cmd, argp);
+-
+- nvme_put_ctrl(ctrl);
+- return ret;
+- }
++ if (is_ctrl_ioctl(cmd))
++ return nvme_handle_ctrl_ioctl(ns, cmd, argp, head, srcu_idx);
+
+ switch (cmd) {
+ case NVME_IOCTL_ID:
+@@ -1449,6 +1523,9 @@ static int nvme_ioctl(struct block_device *bdev, fmode_t mode,
+ case NVME_IOCTL_SUBMIT_IO:
+ ret = nvme_submit_io(ns, argp);
+ break;
++ case NVME_IOCTL_IO64_CMD:
++ ret = nvme_user_cmd64(ns->ctrl, ns, argp);
++ break;
+ default:
+ if (ns->ndev)
+ ret = nvme_nvm_ioctl(ns, cmd, arg);
+@@ -2267,6 +2344,16 @@ static const struct nvme_core_quirk_entry core_quirks[] = {
+ .vid = 0x14a4,
+ .fr = "22301111",
+ .quirks = NVME_QUIRK_SIMPLE_SUSPEND,
++ },
++ {
++ /*
++ * This Kingston E8FK11.T firmware version has no interrupt
++ * after resume with actions related to suspend to idle
++ * https://bugzilla.kernel.org/show_bug.cgi?id=204887
++ */
++ .vid = 0x2646,
++ .fr = "E8FK11.T",
++ .quirks = NVME_QUIRK_SIMPLE_SUSPEND,
+ }
+ };
+
+@@ -2510,8 +2597,9 @@ static int nvme_init_subsystem(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
+ list_add_tail(&subsys->entry, &nvme_subsystems);
+ }
+
+- if (sysfs_create_link(&subsys->dev.kobj, &ctrl->device->kobj,
+- dev_name(ctrl->device))) {
++ ret = sysfs_create_link(&subsys->dev.kobj, &ctrl->device->kobj,
++ dev_name(ctrl->device));
++ if (ret) {
+ dev_err(ctrl->device,
+ "failed to create sysfs link from subsystem.\n");
+ goto out_put_subsystem;
+@@ -2812,6 +2900,8 @@ static long nvme_dev_ioctl(struct file *file, unsigned int cmd,
+ switch (cmd) {
+ case NVME_IOCTL_ADMIN_CMD:
+ return nvme_user_cmd(ctrl, NULL, argp);
++ case NVME_IOCTL_ADMIN64_CMD:
++ return nvme_user_cmd64(ctrl, NULL, argp);
+ case NVME_IOCTL_IO_CMD:
+ return nvme_dev_user_cmd(ctrl, argp);
+ case NVME_IOCTL_RESET:
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 732d5b63ec05..2303d44fc3cb 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -769,7 +769,8 @@ static blk_status_t nvme_setup_prp_simple(struct nvme_dev *dev,
+ struct bio_vec *bv)
+ {
+ struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+- unsigned int first_prp_len = dev->ctrl.page_size - bv->bv_offset;
++ unsigned int offset = bv->bv_offset & (dev->ctrl.page_size - 1);
++ unsigned int first_prp_len = dev->ctrl.page_size - offset;
+
+ iod->first_dma = dma_map_bvec(dev->dev, bv, rq_dma_dir(req), 0);
+ if (dma_mapping_error(dev->dev, iod->first_dma))
+@@ -2894,11 +2895,21 @@ static int nvme_suspend(struct device *dev)
+ if (ret < 0)
+ goto unfreeze;
+
++ /*
++ * A saved state prevents pci pm from generically controlling the
++ * device's power. If we're using protocol specific settings, we don't
++ * want pci interfering.
++ */
++ pci_save_state(pdev);
++
+ ret = nvme_set_power_state(ctrl, ctrl->npss);
+ if (ret < 0)
+ goto unfreeze;
+
+ if (ret) {
++ /* discard the saved state */
++ pci_load_saved_state(pdev, NULL);
++
+ /*
+ * Clearing npss forces a controller reset on resume. The
+ 		 * correct value will be rediscovered then.
+@@ -2906,14 +2917,7 @@ static int nvme_suspend(struct device *dev)
+ nvme_dev_disable(ndev, true);
+ ctrl->npss = 0;
+ ret = 0;
+- goto unfreeze;
+ }
+- /*
+- * A saved state prevents pci pm from generically controlling the
+- * device's power. If we're using protocol specific settings, we don't
+- * want pci interfering.
+- */
+- pci_save_state(pdev);
+ unfreeze:
+ nvme_unfreeze(ctrl);
+ return ret;
+@@ -3038,6 +3042,9 @@ static const struct pci_device_id nvme_id_table[] = {
+ .driver_data = NVME_QUIRK_LIGHTNVM, },
+ { PCI_DEVICE(0x10ec, 0x5762), /* ADATA SX6000LNP */
+ .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, },
++ { PCI_DEVICE(0x1cc1, 0x8201), /* ADATA SX8200PNP 512GB */
++ .driver_data = NVME_QUIRK_NO_DEEPEST_PS |
++ NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+ { PCI_DEVICE_CLASS(PCI_CLASS_STORAGE_EXPRESS, 0xffffff) },
+ { PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2001) },
+ { PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2003) },
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 1a6449bc547b..842ef876724f 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -427,7 +427,7 @@ static void nvme_rdma_destroy_queue_ib(struct nvme_rdma_queue *queue)
+ static int nvme_rdma_get_max_fr_pages(struct ib_device *ibdev)
+ {
+ return min_t(u32, NVME_RDMA_MAX_SEGMENTS,
+- ibdev->attrs.max_fast_reg_page_list_len);
++ ibdev->attrs.max_fast_reg_page_list_len - 1);
+ }
+
+ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
+@@ -437,7 +437,7 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
+ const int cq_factor = send_wr_factor + 1; /* + RECV */
+ int comp_vector, idx = nvme_rdma_queue_idx(queue);
+ enum ib_poll_context poll_ctx;
+- int ret;
++ int ret, pages_per_mr;
+
+ queue->device = nvme_rdma_find_get_device(queue->cm_id);
+ if (!queue->device) {
+@@ -479,10 +479,16 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
+ goto out_destroy_qp;
+ }
+
++ /*
++ * Currently we don't use SG_GAPS MR's so if the first entry is
++ * misaligned we'll end up using two entries for a single data page,
++ * so one additional entry is required.
++ */
++ pages_per_mr = nvme_rdma_get_max_fr_pages(ibdev) + 1;
+ ret = ib_mr_pool_init(queue->qp, &queue->qp->rdma_mrs,
+ queue->queue_size,
+ IB_MR_TYPE_MEM_REG,
+- nvme_rdma_get_max_fr_pages(ibdev), 0);
++ pages_per_mr, 0);
+ if (ret) {
+ dev_err(queue->ctrl->ctrl.device,
+ "failed to initialize MR pool sized %d for QID %d\n",
+@@ -614,7 +620,8 @@ static int nvme_rdma_start_queue(struct nvme_rdma_ctrl *ctrl, int idx)
+ if (!ret) {
+ set_bit(NVME_RDMA_Q_LIVE, &queue->flags);
+ } else {
+- __nvme_rdma_stop_queue(queue);
++ if (test_bit(NVME_RDMA_Q_ALLOCATED, &queue->flags))
++ __nvme_rdma_stop_queue(queue);
+ dev_info(ctrl->ctrl.device,
+ "failed to connect queue: %d ret=%d\n", idx, ret);
+ }
+@@ -824,8 +831,8 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
+ if (error)
+ goto out_stop_queue;
+
+- ctrl->ctrl.max_hw_sectors =
+- (ctrl->max_fr_pages - 1) << (ilog2(SZ_4K) - 9);
++ ctrl->ctrl.max_segments = ctrl->max_fr_pages;
++ ctrl->ctrl.max_hw_sectors = ctrl->max_fr_pages << (ilog2(SZ_4K) - 9);
+
+ error = nvme_init_identify(&ctrl->ctrl);
+ if (error)
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 606b13d35d16..bdadb27b28bb 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1039,7 +1039,7 @@ static void nvme_tcp_io_work(struct work_struct *w)
+ {
+ struct nvme_tcp_queue *queue =
+ container_of(w, struct nvme_tcp_queue, io_work);
+- unsigned long start = jiffies + msecs_to_jiffies(1);
++ unsigned long deadline = jiffies + msecs_to_jiffies(1);
+
+ do {
+ bool pending = false;
+@@ -1064,7 +1064,7 @@ static void nvme_tcp_io_work(struct work_struct *w)
+ if (!pending)
+ return;
+
+- } while (time_after(jiffies, start)); /* quota is exhausted */
++ } while (!time_after(jiffies, deadline)); /* quota is exhausted */
+
+ queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
+ }
+diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
+index 7989703b883c..6bd610ee2cd7 100644
+--- a/drivers/of/of_reserved_mem.c
++++ b/drivers/of/of_reserved_mem.c
+@@ -324,8 +324,10 @@ int of_reserved_mem_device_init_by_idx(struct device *dev,
+ if (!target)
+ return -ENODEV;
+
+- if (!of_device_is_available(target))
++ if (!of_device_is_available(target)) {
++ of_node_put(target);
+ return 0;
++ }
+
+ rmem = __find_rmem(target);
+ of_node_put(target);
+diff --git a/drivers/opp/of.c b/drivers/opp/of.c
+index b313aca9894f..4c7feb3ac4cd 100644
+--- a/drivers/opp/of.c
++++ b/drivers/opp/of.c
+@@ -77,8 +77,6 @@ static struct dev_pm_opp *_find_opp_of_np(struct opp_table *opp_table,
+ {
+ struct dev_pm_opp *opp;
+
+- lockdep_assert_held(&opp_table_lock);
+-
+ mutex_lock(&opp_table->lock);
+
+ list_for_each_entry(opp, &opp_table->opp_list, node) {
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index b97d9e10c9cc..57f15a7e6f0b 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -958,19 +958,6 @@ void pci_refresh_power_state(struct pci_dev *dev)
+ pci_update_current_state(dev, dev->current_state);
+ }
+
+-/**
+- * pci_power_up - Put the given device into D0 forcibly
+- * @dev: PCI device to power up
+- */
+-void pci_power_up(struct pci_dev *dev)
+-{
+- if (platform_pci_power_manageable(dev))
+- platform_pci_set_power_state(dev, PCI_D0);
+-
+- pci_raw_set_power_state(dev, PCI_D0);
+- pci_update_current_state(dev, PCI_D0);
+-}
+-
+ /**
+ * pci_platform_power_transition - Use platform to change device power state
+ * @dev: PCI device to handle.
+@@ -1153,6 +1140,17 @@ int pci_set_power_state(struct pci_dev *dev, pci_power_t state)
+ }
+ EXPORT_SYMBOL(pci_set_power_state);
+
++/**
++ * pci_power_up - Put the given device into D0 forcibly
++ * @dev: PCI device to power up
++ */
++void pci_power_up(struct pci_dev *dev)
++{
++ __pci_start_power_transition(dev, PCI_D0);
++ pci_raw_set_power_state(dev, PCI_D0);
++ pci_update_current_state(dev, PCI_D0);
++}
++
+ /**
+ * pci_choose_state - Choose the power state of a PCI device
+ * @dev: PCI device to be suspended
+diff --git a/drivers/pinctrl/intel/pinctrl-cherryview.c b/drivers/pinctrl/intel/pinctrl-cherryview.c
+index 03ec7a5d9d0b..bf049d1bbb87 100644
+--- a/drivers/pinctrl/intel/pinctrl-cherryview.c
++++ b/drivers/pinctrl/intel/pinctrl-cherryview.c
+@@ -1513,7 +1513,6 @@ static const struct dmi_system_id chv_no_valid_mask[] = {
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "GOOGLE"),
+ DMI_MATCH(DMI_PRODUCT_FAMILY, "Intel_Strago"),
+- DMI_MATCH(DMI_PRODUCT_VERSION, "1.0"),
+ },
+ },
+ {
+@@ -1521,7 +1520,6 @@ static const struct dmi_system_id chv_no_valid_mask[] = {
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "HP"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Setzer"),
+- DMI_MATCH(DMI_PRODUCT_VERSION, "1.0"),
+ },
+ },
+ {
+@@ -1529,7 +1527,6 @@ static const struct dmi_system_id chv_no_valid_mask[] = {
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "GOOGLE"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Cyan"),
+- DMI_MATCH(DMI_PRODUCT_VERSION, "1.0"),
+ },
+ },
+ {
+@@ -1537,7 +1534,6 @@ static const struct dmi_system_id chv_no_valid_mask[] = {
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "GOOGLE"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Celes"),
+- DMI_MATCH(DMI_PRODUCT_VERSION, "1.0"),
+ },
+ },
+ {}
+diff --git a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+index 6462d3ca7ceb..f2f5fcd9a237 100644
+--- a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
++++ b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+@@ -183,10 +183,10 @@ static struct armada_37xx_pin_group armada_37xx_nb_groups[] = {
+ PIN_GRP_EXTRA("uart2", 9, 2, BIT(1) | BIT(13) | BIT(14) | BIT(19),
+ BIT(1) | BIT(13) | BIT(14), BIT(1) | BIT(19),
+ 18, 2, "gpio", "uart"),
+- PIN_GRP_GPIO("led0_od", 11, 1, BIT(20), "led"),
+- PIN_GRP_GPIO("led1_od", 12, 1, BIT(21), "led"),
+- PIN_GRP_GPIO("led2_od", 13, 1, BIT(22), "led"),
+- PIN_GRP_GPIO("led3_od", 14, 1, BIT(23), "led"),
++ PIN_GRP_GPIO_2("led0_od", 11, 1, BIT(20), BIT(20), 0, "led"),
++ PIN_GRP_GPIO_2("led1_od", 12, 1, BIT(21), BIT(21), 0, "led"),
++ PIN_GRP_GPIO_2("led2_od", 13, 1, BIT(22), BIT(22), 0, "led"),
++ PIN_GRP_GPIO_2("led3_od", 14, 1, BIT(23), BIT(23), 0, "led"),
+
+ };
+
+@@ -221,11 +221,11 @@ static const struct armada_37xx_pin_data armada_37xx_pin_sb = {
+ };
+
+ static inline void armada_37xx_update_reg(unsigned int *reg,
+- unsigned int offset)
++ unsigned int *offset)
+ {
+ /* We never have more than 2 registers */
+- if (offset >= GPIO_PER_REG) {
+- offset -= GPIO_PER_REG;
++ if (*offset >= GPIO_PER_REG) {
++ *offset -= GPIO_PER_REG;
+ *reg += sizeof(u32);
+ }
+ }
+@@ -376,7 +376,7 @@ static inline void armada_37xx_irq_update_reg(unsigned int *reg,
+ {
+ int offset = irqd_to_hwirq(d);
+
+- armada_37xx_update_reg(reg, offset);
++ armada_37xx_update_reg(reg, &offset);
+ }
+
+ static int armada_37xx_gpio_direction_input(struct gpio_chip *chip,
+@@ -386,7 +386,7 @@ static int armada_37xx_gpio_direction_input(struct gpio_chip *chip,
+ unsigned int reg = OUTPUT_EN;
+ unsigned int mask;
+
+-	armada_37xx_update_reg(&reg, offset);
++	armada_37xx_update_reg(&reg, &offset);
+ mask = BIT(offset);
+
+ return regmap_update_bits(info->regmap, reg, mask, 0);
+@@ -399,7 +399,7 @@ static int armada_37xx_gpio_get_direction(struct gpio_chip *chip,
+ unsigned int reg = OUTPUT_EN;
+ unsigned int val, mask;
+
+-	armada_37xx_update_reg(&reg, offset);
++	armada_37xx_update_reg(&reg, &offset);
+ mask = BIT(offset);
+ regmap_read(info->regmap, reg, &val);
+
+@@ -413,7 +413,7 @@ static int armada_37xx_gpio_direction_output(struct gpio_chip *chip,
+ unsigned int reg = OUTPUT_EN;
+ unsigned int mask, val, ret;
+
+-	armada_37xx_update_reg(&reg, offset);
++	armada_37xx_update_reg(&reg, &offset);
+ mask = BIT(offset);
+
+ ret = regmap_update_bits(info->regmap, reg, mask, mask);
+@@ -434,7 +434,7 @@ static int armada_37xx_gpio_get(struct gpio_chip *chip, unsigned int offset)
+ unsigned int reg = INPUT_VAL;
+ unsigned int val, mask;
+
+-	armada_37xx_update_reg(&reg, offset);
++	armada_37xx_update_reg(&reg, &offset);
+ mask = BIT(offset);
+
+ regmap_read(info->regmap, reg, &val);
+@@ -449,7 +449,7 @@ static void armada_37xx_gpio_set(struct gpio_chip *chip, unsigned int offset,
+ unsigned int reg = OUTPUT_VAL;
+ unsigned int mask, val;
+
+-	armada_37xx_update_reg(&reg, offset);
++	armada_37xx_update_reg(&reg, &offset);
+ mask = BIT(offset);
+ val = value ? mask : 0;
+
+diff --git a/drivers/s390/crypto/zcrypt_api.c b/drivers/s390/crypto/zcrypt_api.c
+index 1058b4b5cc1e..35a0e9569239 100644
+--- a/drivers/s390/crypto/zcrypt_api.c
++++ b/drivers/s390/crypto/zcrypt_api.c
+@@ -539,8 +539,7 @@ static int zcrypt_release(struct inode *inode, struct file *filp)
+ if (filp->f_inode->i_cdev == &zcrypt_cdev) {
+ struct zcdn_device *zcdndev;
+
+- if (mutex_lock_interruptible(&ap_perms_mutex))
+- return -ERESTARTSYS;
++ mutex_lock(&ap_perms_mutex);
+ zcdndev = find_zcdndev_by_devt(filp->f_inode->i_rdev);
+ mutex_unlock(&ap_perms_mutex);
+ if (zcdndev) {
+diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c
+index 296bbc3c4606..cf63916814cc 100644
+--- a/drivers/s390/scsi/zfcp_fsf.c
++++ b/drivers/s390/scsi/zfcp_fsf.c
+@@ -27,6 +27,11 @@
+
+ struct kmem_cache *zfcp_fsf_qtcb_cache;
+
++static bool ber_stop = true;
++module_param(ber_stop, bool, 0600);
++MODULE_PARM_DESC(ber_stop,
++ "Shuts down FCP devices for FCP channels that report a bit-error count in excess of its threshold (default on)");
++
+ static void zfcp_fsf_request_timeout_handler(struct timer_list *t)
+ {
+ struct zfcp_fsf_req *fsf_req = from_timer(fsf_req, t, timer);
+@@ -236,10 +241,15 @@ static void zfcp_fsf_status_read_handler(struct zfcp_fsf_req *req)
+ case FSF_STATUS_READ_SENSE_DATA_AVAIL:
+ break;
+ case FSF_STATUS_READ_BIT_ERROR_THRESHOLD:
+- dev_warn(&adapter->ccw_device->dev,
+- "The error threshold for checksum statistics "
+- "has been exceeded\n");
+ zfcp_dbf_hba_bit_err("fssrh_3", req);
++ if (ber_stop) {
++ dev_warn(&adapter->ccw_device->dev,
++ "All paths over this FCP device are disused because of excessive bit errors\n");
++ zfcp_erp_adapter_shutdown(adapter, 0, "fssrh_b");
++ } else {
++ dev_warn(&adapter->ccw_device->dev,
++ "The error threshold for checksum statistics has been exceeded\n");
++ }
+ break;
+ case FSF_STATUS_READ_LINK_DOWN:
+ zfcp_fsf_status_read_link_down(req);
+diff --git a/drivers/scsi/ch.c b/drivers/scsi/ch.c
+index 5f8153c37f77..76751d6c7f0d 100644
+--- a/drivers/scsi/ch.c
++++ b/drivers/scsi/ch.c
+@@ -579,7 +579,6 @@ ch_release(struct inode *inode, struct file *file)
+ scsi_changer *ch = file->private_data;
+
+ scsi_device_put(ch->device);
+- ch->device = NULL;
+ file->private_data = NULL;
+ kref_put(&ch->ref, ch_destroy);
+ return 0;
+diff --git a/drivers/scsi/megaraid.c b/drivers/scsi/megaraid.c
+index 45a66048801b..ff6d4aa92421 100644
+--- a/drivers/scsi/megaraid.c
++++ b/drivers/scsi/megaraid.c
+@@ -4183,11 +4183,11 @@ megaraid_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ */
+ if (pdev->subsystem_vendor == PCI_VENDOR_ID_COMPAQ &&
+ pdev->subsystem_device == 0xC000)
+- return -ENODEV;
++ goto out_disable_device;
+ /* Now check the magic signature byte */
+ pci_read_config_word(pdev, PCI_CONF_AMISIG, &magic);
+ if (magic != HBA_SIGNATURE_471 && magic != HBA_SIGNATURE)
+- return -ENODEV;
++ goto out_disable_device;
+ /* Ok it is probably a megaraid */
+ }
+
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index bad2b12604f1..a2922b17b55b 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -2338,6 +2338,7 @@ typedef struct fc_port {
+ unsigned int query:1;
+ unsigned int id_changed:1;
+ unsigned int scan_needed:1;
++ unsigned int n2n_flag:1;
+
+ struct completion nvme_del_done;
+ uint32_t nvme_prli_service_param;
+@@ -2388,7 +2389,6 @@ typedef struct fc_port {
+ uint8_t fc4_type;
+ uint8_t fc4f_nvme;
+ uint8_t scan_state;
+- uint8_t n2n_flag;
+
+ unsigned long last_queue_full;
+ unsigned long last_ramp_up;
+@@ -2979,6 +2979,7 @@ enum scan_flags_t {
+ enum fc4type_t {
+ FS_FC4TYPE_FCP = BIT_0,
+ FS_FC4TYPE_NVME = BIT_1,
++ FS_FCP_IS_N2N = BIT_7,
+ };
+
+ struct fab_scan_rp {
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index afcd9a885884..cd74cc9651de 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -746,12 +746,15 @@ static void qla24xx_handle_gnl_done_event(scsi_qla_host_t *vha,
+ break;
+ default:
+ if ((id.b24 != fcport->d_id.b24 &&
+- fcport->d_id.b24) ||
++ fcport->d_id.b24 &&
++ fcport->loop_id != FC_NO_LOOP_ID) ||
+ (fcport->loop_id != FC_NO_LOOP_ID &&
+ fcport->loop_id != loop_id)) {
+ ql_dbg(ql_dbg_disc, vha, 0x20e3,
+ "%s %d %8phC post del sess\n",
+ __func__, __LINE__, fcport->port_name);
++ if (fcport->n2n_flag)
++ fcport->d_id.b24 = 0;
+ qlt_schedule_sess_for_deletion(fcport);
+ return;
+ }
+@@ -759,6 +762,8 @@ static void qla24xx_handle_gnl_done_event(scsi_qla_host_t *vha,
+ }
+
+ fcport->loop_id = loop_id;
++ if (fcport->n2n_flag)
++ fcport->d_id.b24 = id.b24;
+
+ wwn = wwn_to_u64(fcport->port_name);
+ qlt_find_sess_invalidate_other(vha, wwn,
+@@ -966,7 +971,7 @@ qla24xx_async_gnl_sp_done(void *s, int res)
+ wwn = wwn_to_u64(e->port_name);
+
+ ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0x20e8,
+- "%s %8phC %02x:%02x:%02x state %d/%d lid %x \n",
++ "%s %8phC %02x:%02x:%02x CLS %x/%x lid %x \n",
+ __func__, (void *)&wwn, e->port_id[2], e->port_id[1],
+ e->port_id[0], e->current_login_state, e->last_login_state,
+ (loop_id & 0x7fff));
+@@ -1498,7 +1503,8 @@ int qla24xx_fcport_handle_login(struct scsi_qla_host *vha, fc_port_t *fcport)
+ (fcport->fw_login_state == DSC_LS_PRLI_PEND)))
+ return 0;
+
+- if (fcport->fw_login_state == DSC_LS_PLOGI_COMP) {
++ if (fcport->fw_login_state == DSC_LS_PLOGI_COMP &&
++ !N2N_TOPO(vha->hw)) {
+ if (time_before_eq(jiffies, fcport->plogi_nack_done_deadline)) {
+ set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
+ return 0;
+@@ -1569,8 +1575,9 @@ int qla24xx_fcport_handle_login(struct scsi_qla_host *vha, fc_port_t *fcport)
+ qla24xx_post_gpdb_work(vha, fcport, 0);
+ } else {
+ ql_dbg(ql_dbg_disc, vha, 0x2118,
+- "%s %d %8phC post NVMe PRLI\n",
+- __func__, __LINE__, fcport->port_name);
++ "%s %d %8phC post %s PRLI\n",
++ __func__, __LINE__, fcport->port_name,
++ fcport->fc4f_nvme ? "NVME" : "FC");
+ qla24xx_post_prli_work(vha, fcport);
+ }
+ break;
+@@ -1924,17 +1931,38 @@ qla24xx_handle_prli_done_event(struct scsi_qla_host *vha, struct event_arg *ea)
+ break;
+ }
+
+- if (ea->fcport->n2n_flag) {
++ if (ea->fcport->fc4f_nvme) {
+ ql_dbg(ql_dbg_disc, vha, 0x2118,
+ "%s %d %8phC post fc4 prli\n",
+ __func__, __LINE__, ea->fcport->port_name);
+ ea->fcport->fc4f_nvme = 0;
+- ea->fcport->n2n_flag = 0;
+ qla24xx_post_prli_work(vha, ea->fcport);
++ return;
++ }
++
++ /* at this point both PRLI NVME & PRLI FCP failed */
++ if (N2N_TOPO(vha->hw)) {
++ if (ea->fcport->n2n_link_reset_cnt < 3) {
++ ea->fcport->n2n_link_reset_cnt++;
++ /*
++ * remote port is not sending Plogi. Reset
++				 * link to kick start its state machine
++ */
++ set_bit(N2N_LINK_RESET, &vha->dpc_flags);
++ } else {
++ ql_log(ql_log_warn, vha, 0x2119,
++ "%s %d %8phC Unable to reconnect\n",
++ __func__, __LINE__, ea->fcport->port_name);
++ }
++ } else {
++ /*
++ * switch connect. login failed. Take connection
++ * down and allow relogin to retrigger
++ */
++ ea->fcport->flags &= ~FCF_ASYNC_SENT;
++ ea->fcport->keep_nport_handle = 0;
++ qlt_schedule_sess_for_deletion(ea->fcport);
+ }
+- ql_dbg(ql_dbg_disc, vha, 0x2119,
+- "%s %d %8phC unhandle event of %x\n",
+- __func__, __LINE__, ea->fcport->port_name, ea->data[0]);
+ break;
+ }
+ }
+@@ -3268,7 +3296,7 @@ try_eft:
+
+ for (j = 0; j < 2; j++, fwdt++) {
+ if (!fwdt->template) {
+- ql_log(ql_log_warn, vha, 0x00ba,
++ ql_dbg(ql_dbg_init, vha, 0x00ba,
+ "-> fwdt%u no template\n", j);
+ continue;
+ }
+@@ -5078,28 +5106,47 @@ qla2x00_configure_local_loop(scsi_qla_host_t *vha)
+ unsigned long flags;
+
+ 	/* Initiate N2N login. */
+- if (test_and_clear_bit(N2N_LOGIN_NEEDED, &vha->dpc_flags)) {
+- /* borrowing */
+- u32 *bp, i, sz;
+-
+- memset(ha->init_cb, 0, ha->init_cb_size);
+- sz = min_t(int, sizeof(struct els_plogi_payload),
+- ha->init_cb_size);
+- rval = qla24xx_get_port_login_templ(vha, ha->init_cb_dma,
+- (void *)ha->init_cb, sz);
+- if (rval == QLA_SUCCESS) {
+- bp = (uint32_t *)ha->init_cb;
+- for (i = 0; i < sz/4 ; i++, bp++)
+- *bp = cpu_to_be32(*bp);
++ if (N2N_TOPO(ha)) {
++ if (test_and_clear_bit(N2N_LOGIN_NEEDED, &vha->dpc_flags)) {
++ /* borrowing */
++ u32 *bp, i, sz;
++
++ memset(ha->init_cb, 0, ha->init_cb_size);
++ sz = min_t(int, sizeof(struct els_plogi_payload),
++ ha->init_cb_size);
++ rval = qla24xx_get_port_login_templ(vha,
++ ha->init_cb_dma, (void *)ha->init_cb, sz);
++ if (rval == QLA_SUCCESS) {
++ bp = (uint32_t *)ha->init_cb;
++ for (i = 0; i < sz/4 ; i++, bp++)
++ *bp = cpu_to_be32(*bp);
+
+- memcpy(&ha->plogi_els_payld.data, (void *)ha->init_cb,
+- sizeof(ha->plogi_els_payld.data));
+- set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
+- } else {
+- ql_dbg(ql_dbg_init, vha, 0x00d1,
+- "PLOGI ELS param read fail.\n");
++ memcpy(&ha->plogi_els_payld.data,
++ (void *)ha->init_cb,
++ sizeof(ha->plogi_els_payld.data));
++ set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
++ } else {
++ ql_dbg(ql_dbg_init, vha, 0x00d1,
++ "PLOGI ELS param read fail.\n");
++ goto skip_login;
++ }
++ }
++
++ list_for_each_entry(fcport, &vha->vp_fcports, list) {
++ if (fcport->n2n_flag) {
++ qla24xx_fcport_handle_login(vha, fcport);
++ return QLA_SUCCESS;
++ }
++ }
++skip_login:
++ spin_lock_irqsave(&vha->work_lock, flags);
++ vha->scan.scan_retry++;
++ spin_unlock_irqrestore(&vha->work_lock, flags);
++
++ if (vha->scan.scan_retry < MAX_SCAN_RETRIES) {
++ set_bit(LOCAL_LOOP_UPDATE, &vha->dpc_flags);
++ set_bit(LOOP_RESYNC_NEEDED, &vha->dpc_flags);
+ }
+- return QLA_SUCCESS;
+ }
+
+ found_devs = 0;
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index 133f5f6270ff..abfb9c800ce2 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -2257,7 +2257,7 @@ qla2x00_lip_reset(scsi_qla_host_t *vha)
+ mbx_cmd_t mc;
+ mbx_cmd_t *mcp = &mc;
+
+- ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x105a,
++ ql_dbg(ql_dbg_disc, vha, 0x105a,
+ "Entered %s.\n", __func__);
+
+ if (IS_CNA_CAPABLE(vha->hw)) {
+@@ -3891,14 +3891,24 @@ qla24xx_report_id_acquisition(scsi_qla_host_t *vha,
+ case TOPO_N2N:
+ ha->current_topology = ISP_CFG_N;
+ spin_lock_irqsave(&vha->hw->tgt.sess_lock, flags);
++ list_for_each_entry(fcport, &vha->vp_fcports, list) {
++ fcport->scan_state = QLA_FCPORT_SCAN;
++ fcport->n2n_flag = 0;
++ }
++
+ fcport = qla2x00_find_fcport_by_wwpn(vha,
+ rptid_entry->u.f1.port_name, 1);
+ spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
+
+ if (fcport) {
+ fcport->plogi_nack_done_deadline = jiffies + HZ;
+- fcport->dm_login_expire = jiffies + 3*HZ;
++ fcport->dm_login_expire = jiffies + 2*HZ;
+ fcport->scan_state = QLA_FCPORT_FOUND;
++ fcport->n2n_flag = 1;
++ fcport->keep_nport_handle = 1;
++ if (vha->flags.nvme_enabled)
++ fcport->fc4f_nvme = 1;
++
+ switch (fcport->disc_state) {
+ case DSC_DELETED:
+ set_bit(RELOGIN_NEEDED,
+@@ -3932,7 +3942,7 @@ qla24xx_report_id_acquisition(scsi_qla_host_t *vha,
+ rptid_entry->u.f1.port_name,
+ rptid_entry->u.f1.node_name,
+ NULL,
+- FC4_TYPE_UNKNOWN);
++ FS_FCP_IS_N2N);
+ }
+
+ /* if our portname is higher then initiate N2N login */
+@@ -4031,6 +4041,7 @@ qla24xx_report_id_acquisition(scsi_qla_host_t *vha,
+
+ list_for_each_entry(fcport, &vha->vp_fcports, list) {
+ fcport->scan_state = QLA_FCPORT_SCAN;
++ fcport->n2n_flag = 0;
+ }
+
+ fcport = qla2x00_find_fcport_by_wwpn(vha,
+@@ -4040,6 +4051,14 @@ qla24xx_report_id_acquisition(scsi_qla_host_t *vha,
+ fcport->login_retry = vha->hw->login_retry_count;
+ fcport->plogi_nack_done_deadline = jiffies + HZ;
+ fcport->scan_state = QLA_FCPORT_FOUND;
++ fcport->keep_nport_handle = 1;
++ fcport->n2n_flag = 1;
++ fcport->d_id.b.domain =
++ rptid_entry->u.f2.remote_nport_id[2];
++ fcport->d_id.b.area =
++ rptid_entry->u.f2.remote_nport_id[1];
++ fcport->d_id.b.al_pa =
++ rptid_entry->u.f2.remote_nport_id[0];
+ }
+ }
+ }
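
As a standalone sketch of the d_id assembly this hunk adds, assuming the
3-byte remote_nport_id layout visible above (byte 2 = domain, byte 1 = area,
byte 0 = AL_PA); fc_port_id() is a hypothetical helper, not a driver
function:

#include <stdint.h>
#include <stdio.h>

/* Pack the firmware's 3-byte N_Port ID into one 24-bit FC address. */
static uint32_t fc_port_id(const uint8_t remote_nport_id[3])
{
	return ((uint32_t)remote_nport_id[2] << 16) |	/* domain */
	       ((uint32_t)remote_nport_id[1] << 8)  |	/* area   */
		(uint32_t)remote_nport_id[0];		/* AL_PA  */
}

int main(void)
{
	const uint8_t id[3] = { 0xef, 0x01, 0x02 };	/* AL_PA, area, domain */

	printf("%06x\n", fc_port_id(id));		/* prints 0201ef */
	return 0;
}
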
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 4fda308c3ef5..2835afbd2edc 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -1153,6 +1153,7 @@ qla2x00_wait_for_sess_deletion(scsi_qla_host_t *vha)
+ qla2x00_mark_all_devices_lost(vha, 0);
+
+ wait_event_timeout(vha->fcport_waitQ, test_fcport_count(vha), 10*HZ);
++ flush_workqueue(vha->hw->wq);
+ }
+
+ /*
+@@ -5049,6 +5050,10 @@ void qla24xx_create_new_sess(struct scsi_qla_host *vha, struct qla_work_evt *e)
+
+ memcpy(fcport->port_name, e->u.new_sess.port_name,
+ WWN_SIZE);
++
++ if (e->u.new_sess.fc4_type & FS_FCP_IS_N2N)
++ fcport->n2n_flag = 1;
++
+ } else {
+ ql_dbg(ql_dbg_disc, vha, 0xffff,
+ "%s %8phC mem alloc fail.\n",
+@@ -5145,11 +5150,9 @@ void qla24xx_create_new_sess(struct scsi_qla_host *vha, struct qla_work_evt *e)
+ if (dfcp)
+ qlt_schedule_sess_for_deletion(tfcp);
+
+-
+- if (N2N_TOPO(vha->hw))
+- fcport->flags &= ~FCF_FABRIC_DEVICE;
+-
+ if (N2N_TOPO(vha->hw)) {
++ fcport->flags &= ~FCF_FABRIC_DEVICE;
++ fcport->keep_nport_handle = 1;
+ if (vha->flags.nvme_enabled) {
+ fcport->fc4f_nvme = 1;
+ fcport->n2n_flag = 1;
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index 459c28aa3b94..1bb0fc9324ea 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -954,7 +954,7 @@ void qlt_free_session_done(struct work_struct *work)
+ struct qla_hw_data *ha = vha->hw;
+ unsigned long flags;
+ bool logout_started = false;
+- scsi_qla_host_t *base_vha;
++ scsi_qla_host_t *base_vha = pci_get_drvdata(ha->pdev);
+ struct qlt_plogi_ack_t *own =
+ sess->plogi_link[QLT_PLOGI_LINK_SAME_WWN];
+
+@@ -1021,6 +1021,7 @@ void qlt_free_session_done(struct work_struct *work)
+
+ if (logout_started) {
+ bool traced = false;
++ u16 cnt = 0;
+
+ while (!READ_ONCE(sess->logout_completed)) {
+ if (!traced) {
+@@ -1030,6 +1031,9 @@ void qlt_free_session_done(struct work_struct *work)
+ traced = true;
+ }
+ msleep(100);
++ cnt++;
++ if (cnt > 200)
++ break;
+ }
+
+ ql_dbg(ql_dbg_disc, vha, 0xf087,
+@@ -1102,6 +1106,7 @@ void qlt_free_session_done(struct work_struct *work)
+ }
+
+ spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
++ sess->free_pending = 0;
+
+ ql_dbg(ql_dbg_tgt_mgt, vha, 0xf001,
+ "Unregistration of sess %p %8phC finished fcp_cnt %d\n",
+@@ -1110,17 +1115,8 @@ void qlt_free_session_done(struct work_struct *work)
+ if (tgt && (tgt->sess_count == 0))
+ wake_up_all(&tgt->waitQ);
+
+- if (vha->fcport_count == 0)
+- wake_up_all(&vha->fcport_waitQ);
+-
+- base_vha = pci_get_drvdata(ha->pdev);
+-
+- sess->free_pending = 0;
+-
+- if (test_bit(PFLG_DRIVER_REMOVING, &base_vha->pci_flags))
+- return;
+-
+- if ((!tgt || !tgt->tgt_stop) && !LOOP_TRANSITION(vha)) {
++ if (!test_bit(PFLG_DRIVER_REMOVING, &base_vha->pci_flags) &&
++ (!tgt || !tgt->tgt_stop) && !LOOP_TRANSITION(vha)) {
+ switch (vha->host->active_mode) {
+ case MODE_INITIATOR:
+ case MODE_DUAL:
+@@ -1133,6 +1129,9 @@ void qlt_free_session_done(struct work_struct *work)
+ break;
+ }
+ }
++
++ if (vha->fcport_count == 0)
++ wake_up_all(&vha->fcport_waitQ);
+ }
+
+ /* ha->tgt.sess_lock supposed to be held on entry */
+@@ -1162,7 +1161,7 @@ void qlt_unreg_sess(struct fc_port *sess)
+ sess->last_login_gen = sess->login_gen;
+
+ INIT_WORK(&sess->free_work, qlt_free_session_done);
+- schedule_work(&sess->free_work);
++ queue_work(sess->vha->hw->wq, &sess->free_work);
+ }
+ EXPORT_SYMBOL(qlt_unreg_sess);
+
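
The qlt_free_session_done() hunk above turns an unbounded wait for logout
completion into a bounded poll. A minimal userspace sketch of the pattern
(logout_completed stands in for the session flag; tracing omitted):

#include <stdbool.h>
#include <unistd.h>

static volatile bool logout_completed;

static bool wait_for_logout(void)
{
	unsigned int cnt = 0;

	while (!logout_completed) {
		usleep(100 * 1000);	/* the driver uses msleep(100) */
		if (++cnt > 200)	/* cap at ~20 s, as in the patch */
			return false;	/* timed out; tear down anyway */
	}
	return true;
}
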
+diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
+index 1c470e31ae81..ae2fa170f6ad 100644
+--- a/drivers/scsi/scsi_error.c
++++ b/drivers/scsi/scsi_error.c
+@@ -967,6 +967,7 @@ void scsi_eh_prep_cmnd(struct scsi_cmnd *scmd, struct scsi_eh_save *ses,
+ ses->data_direction = scmd->sc_data_direction;
+ ses->sdb = scmd->sdb;
+ ses->result = scmd->result;
++ ses->resid_len = scmd->req.resid_len;
+ ses->underflow = scmd->underflow;
+ ses->prot_op = scmd->prot_op;
+ ses->eh_eflags = scmd->eh_eflags;
+@@ -977,6 +978,7 @@ void scsi_eh_prep_cmnd(struct scsi_cmnd *scmd, struct scsi_eh_save *ses,
+ memset(scmd->cmnd, 0, BLK_MAX_CDB);
+ memset(&scmd->sdb, 0, sizeof(scmd->sdb));
+ scmd->result = 0;
++ scmd->req.resid_len = 0;
+
+ if (sense_bytes) {
+ scmd->sdb.length = min_t(unsigned, SCSI_SENSE_BUFFERSIZE,
+@@ -1029,6 +1031,7 @@ void scsi_eh_restore_cmnd(struct scsi_cmnd* scmd, struct scsi_eh_save *ses)
+ scmd->sc_data_direction = ses->data_direction;
+ scmd->sdb = ses->sdb;
+ scmd->result = ses->result;
++ scmd->req.resid_len = ses->resid_len;
+ scmd->underflow = ses->underflow;
+ scmd->prot_op = ses->prot_op;
+ scmd->eh_eflags = ses->eh_eflags;
+diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
+index 64c96c7828ee..6d7362e7367e 100644
+--- a/drivers/scsi/scsi_sysfs.c
++++ b/drivers/scsi/scsi_sysfs.c
+@@ -730,6 +730,14 @@ sdev_store_delete(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+ {
+ struct kernfs_node *kn;
++ struct scsi_device *sdev = to_scsi_device(dev);
++
++ /*
++	 * We need to take a reference on the module, to avoid the module
++	 * being removed while the delete is in progress.
++ */
++ if (scsi_device_get(sdev))
++ return -ENODEV;
+
+ kn = sysfs_break_active_protection(&dev->kobj, &attr->attr);
+ WARN_ON_ONCE(!kn);
+@@ -744,9 +752,10 @@ sdev_store_delete(struct device *dev, struct device_attribute *attr,
+ * state into SDEV_DEL.
+ */
+ device_remove_file(dev, attr);
+- scsi_remove_device(to_scsi_device(dev));
++ scsi_remove_device(sdev);
+ if (kn)
+ sysfs_unbreak_active_protection(kn);
++ scsi_device_put(sdev);
+ return count;
+ };
+ static DEVICE_ATTR(delete, S_IWUSR, NULL, sdev_store_delete);
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 149d406aacc9..2d77f32e13d5 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -1655,7 +1655,8 @@ static int sd_sync_cache(struct scsi_disk *sdkp, struct scsi_sense_hdr *sshdr)
+ /* we need to evaluate the error return */
+ if (scsi_sense_valid(sshdr) &&
+ (sshdr->asc == 0x3a || /* medium not present */
+- sshdr->asc == 0x20)) /* invalid command */
++ sshdr->asc == 0x20 || /* invalid command */
++ (sshdr->asc == 0x74 && sshdr->ascq == 0x71))) /* drive is password locked */
+ /* this is no error here */
+ return 0;
+
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 029da74bb2f5..e674f6148f69 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -8095,6 +8095,9 @@ int ufshcd_shutdown(struct ufs_hba *hba)
+ {
+ int ret = 0;
+
++ if (!hba->is_powered)
++ goto out;
++
+ if (ufshcd_is_ufs_dev_poweroff(hba) && ufshcd_is_link_off(hba))
+ goto out;
+
+diff --git a/drivers/staging/wlan-ng/cfg80211.c b/drivers/staging/wlan-ng/cfg80211.c
+index eee1998c4b18..fac38c842ac5 100644
+--- a/drivers/staging/wlan-ng/cfg80211.c
++++ b/drivers/staging/wlan-ng/cfg80211.c
+@@ -469,10 +469,8 @@ static int prism2_connect(struct wiphy *wiphy, struct net_device *dev,
+ /* Set the encryption - we only support wep */
+ if (is_wep) {
+ if (sme->key) {
+- if (sme->key_idx >= NUM_WEPKEYS) {
+- err = -EINVAL;
+- goto exit;
+- }
++ if (sme->key_idx >= NUM_WEPKEYS)
++ return -EINVAL;
+
+ result = prism2_domibset_uint32(wlandev,
+ DIDMIB_DOT11SMT_PRIVACYTABLE_WEPDEFAULTKEYID,
+diff --git a/drivers/usb/class/usblp.c b/drivers/usb/class/usblp.c
+index 502e9bf1746f..4a80103675d5 100644
+--- a/drivers/usb/class/usblp.c
++++ b/drivers/usb/class/usblp.c
+@@ -445,6 +445,7 @@ static void usblp_cleanup(struct usblp *usblp)
+ kfree(usblp->readbuf);
+ kfree(usblp->device_id_string);
+ kfree(usblp->statusbuf);
++ usb_put_intf(usblp->intf);
+ kfree(usblp);
+ }
+
+@@ -1107,7 +1108,7 @@ static int usblp_probe(struct usb_interface *intf,
+ init_waitqueue_head(&usblp->wwait);
+ init_usb_anchor(&usblp->urbs);
+ usblp->ifnum = intf->cur_altsetting->desc.bInterfaceNumber;
+- usblp->intf = intf;
++ usblp->intf = usb_get_intf(intf);
+
+ /* Malloc device ID string buffer to the largest expected length,
+ * since we can re-query it on an ioctl and a dynamic string
+@@ -1196,6 +1197,7 @@ abort:
+ kfree(usblp->readbuf);
+ kfree(usblp->statusbuf);
+ kfree(usblp->device_id_string);
++ usb_put_intf(usblp->intf);
+ kfree(usblp);
+ abort_ret:
+ return retval;
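
The usblp hunks pair every stored interface pointer with a reference:
usb_get_intf() when the pointer is saved in probe, usb_put_intf() on both
the normal cleanup path and the probe error path. A toy sketch of that rule,
with a hypothetical refcounted object standing in for the USB interface:

#include <stdlib.h>

struct intf {
	int refcount;
};

static struct intf *intf_get(struct intf *i) { i->refcount++; return i; }

static void intf_put(struct intf *i)
{
	if (--i->refcount == 0)
		free(i);
}

struct printer {
	struct intf *intf;
};

static struct printer *printer_probe(struct intf *i)
{
	struct printer *p = malloc(sizeof(*p));

	if (!p)
		return NULL;
	p->intf = intf_get(i);	/* usb_get_intf() in the driver */
	return p;
}

static void printer_cleanup(struct printer *p)
{
	intf_put(p->intf);	/* pairs with the get taken at probe time */
	free(p);
}
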
+diff --git a/drivers/usb/gadget/udc/lpc32xx_udc.c b/drivers/usb/gadget/udc/lpc32xx_udc.c
+index bb6af6b5ac97..4f1ac9f59f1c 100644
+--- a/drivers/usb/gadget/udc/lpc32xx_udc.c
++++ b/drivers/usb/gadget/udc/lpc32xx_udc.c
+@@ -1180,11 +1180,11 @@ static void udc_pop_fifo(struct lpc32xx_udc *udc, u8 *data, u32 bytes)
+ tmp = readl(USBD_RXDATA(udc->udp_baseaddr));
+
+ bl = bytes - n;
+- if (bl > 3)
+- bl = 3;
++ if (bl > 4)
++ bl = 4;
+
+ for (i = 0; i < bl; i++)
+- data[n + i] = (u8) ((tmp >> (n * 8)) & 0xFF);
++ data[n + i] = (u8) ((tmp >> (i * 8)) & 0xFF);
+ }
+ break;
+
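
A standalone sketch of the corrected unpack loop, assuming the hardware
packs up to four payload bytes into each 32-bit FIFO word with byte i in
bits 8*i..8*i+7 (the bug was capping the copy at three bytes and shifting by
the buffer offset n rather than the byte index i):

#include <stdint.h>

static void fifo_unpack(uint8_t *data, uint32_t bytes,
			uint32_t (*read_fifo)(void))
{
	for (uint32_t n = 0; n < bytes; n += 4) {
		uint32_t tmp = read_fifo();
		uint32_t bl = bytes - n;

		if (bl > 4)	/* was erroneously capped at 3 */
			bl = 4;
		for (uint32_t i = 0; i < bl; i++)
			data[n + i] = (uint8_t)((tmp >> (i * 8)) & 0xFF);
	}
}
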
+diff --git a/drivers/usb/misc/ldusb.c b/drivers/usb/misc/ldusb.c
+index f3108d85e768..15b5f06fb0b3 100644
+--- a/drivers/usb/misc/ldusb.c
++++ b/drivers/usb/misc/ldusb.c
+@@ -380,10 +380,7 @@ static int ld_usb_release(struct inode *inode, struct file *file)
+ goto exit;
+ }
+
+- if (mutex_lock_interruptible(&dev->mutex)) {
+- retval = -ERESTARTSYS;
+- goto exit;
+- }
++ mutex_lock(&dev->mutex);
+
+ if (dev->open_count != 1) {
+ retval = -ENODEV;
+@@ -467,7 +464,7 @@ static ssize_t ld_usb_read(struct file *file, char __user *buffer, size_t count,
+
+ /* wait for data */
+ spin_lock_irq(&dev->rbsl);
+- if (dev->ring_head == dev->ring_tail) {
++ while (dev->ring_head == dev->ring_tail) {
+ dev->interrupt_in_done = 0;
+ spin_unlock_irq(&dev->rbsl);
+ if (file->f_flags & O_NONBLOCK) {
+@@ -477,12 +474,17 @@ static ssize_t ld_usb_read(struct file *file, char __user *buffer, size_t count,
+ retval = wait_event_interruptible(dev->read_wait, dev->interrupt_in_done);
+ if (retval < 0)
+ goto unlock_exit;
+- } else {
+- spin_unlock_irq(&dev->rbsl);
++
++ spin_lock_irq(&dev->rbsl);
+ }
++ spin_unlock_irq(&dev->rbsl);
+
+ /* actual_buffer contains actual_length + interrupt_in_buffer */
+ actual_buffer = (size_t *)(dev->ring_buffer + dev->ring_tail * (sizeof(size_t)+dev->interrupt_in_endpoint_size));
++ if (*actual_buffer > dev->interrupt_in_endpoint_size) {
++ retval = -EIO;
++ goto unlock_exit;
++ }
+ bytes_to_read = min(count, *actual_buffer);
+ if (bytes_to_read < *actual_buffer)
+ dev_warn(&dev->intf->dev, "Read buffer overflow, %zd bytes dropped\n",
+@@ -693,10 +695,9 @@ static int ld_usb_probe(struct usb_interface *intf, const struct usb_device_id *
+ dev_warn(&intf->dev, "Interrupt out endpoint not found (using control endpoint instead)\n");
+
+ dev->interrupt_in_endpoint_size = usb_endpoint_maxp(dev->interrupt_in_endpoint);
+- dev->ring_buffer =
+- kmalloc_array(ring_buffer_size,
+- sizeof(size_t) + dev->interrupt_in_endpoint_size,
+- GFP_KERNEL);
++ dev->ring_buffer = kcalloc(ring_buffer_size,
++ sizeof(size_t) + dev->interrupt_in_endpoint_size,
++ GFP_KERNEL);
+ if (!dev->ring_buffer)
+ goto error;
+ dev->interrupt_in_buffer = kmalloc(dev->interrupt_in_endpoint_size, GFP_KERNEL);
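
The ld_usb_read() hunk validates the per-slot length word before trusting
it. A sketch with a simplified ring-slot layout standing in for the driver's
buffer (each slot is a size_t length followed by an endpoint-sized payload):

#include <stddef.h>

struct ring_slot {
	size_t actual_length;
	/* unsigned char payload[endpoint_size] follows */
};

static int check_slot(const struct ring_slot *slot, size_t endpoint_size)
{
	/* a corrupt or malicious device could report more bytes than
	 * the buffer that follows the length word can hold */
	if (slot->actual_length > endpoint_size)
		return -5;	/* -EIO */
	return 0;
}
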
+diff --git a/drivers/usb/misc/legousbtower.c b/drivers/usb/misc/legousbtower.c
+index 9d4c52a7ebe0..62dab2441ec4 100644
+--- a/drivers/usb/misc/legousbtower.c
++++ b/drivers/usb/misc/legousbtower.c
+@@ -419,10 +419,7 @@ static int tower_release (struct inode *inode, struct file *file)
+ goto exit;
+ }
+
+- if (mutex_lock_interruptible(&dev->lock)) {
+- retval = -ERESTARTSYS;
+- goto exit;
+- }
++ mutex_lock(&dev->lock);
+
+ if (dev->open_count != 1) {
+ dev_dbg(&dev->udev->dev, "%s: device not opened exactly once\n",
+diff --git a/drivers/usb/serial/ti_usb_3410_5052.c b/drivers/usb/serial/ti_usb_3410_5052.c
+index dd0ad67aa71e..9174ba2e06da 100644
+--- a/drivers/usb/serial/ti_usb_3410_5052.c
++++ b/drivers/usb/serial/ti_usb_3410_5052.c
+@@ -776,7 +776,6 @@ static void ti_close(struct usb_serial_port *port)
+ struct ti_port *tport;
+ int port_number;
+ int status;
+- int do_unlock;
+ unsigned long flags;
+
+ tdev = usb_get_serial_data(port->serial);
+@@ -800,16 +799,13 @@ static void ti_close(struct usb_serial_port *port)
+ "%s - cannot send close port command, %d\n"
+ , __func__, status);
+
+- /* if mutex_lock is interrupted, continue anyway */
+- do_unlock = !mutex_lock_interruptible(&tdev->td_open_close_lock);
++ mutex_lock(&tdev->td_open_close_lock);
+ --tport->tp_tdev->td_open_port_count;
+- if (tport->tp_tdev->td_open_port_count <= 0) {
++ if (tport->tp_tdev->td_open_port_count == 0) {
+ /* last port is closed, shut down interrupt urb */
+ usb_kill_urb(port->serial->port[0]->interrupt_in_urb);
+- tport->tp_tdev->td_open_port_count = 0;
+ }
+- if (do_unlock)
+- mutex_unlock(&tdev->td_open_close_lock);
++ mutex_unlock(&tdev->td_open_close_lock);
+ }
+
+
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index f131651502b8..c62903290f3a 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -899,7 +899,7 @@ out_free_interp:
+ the correct location in memory. */
+ for(i = 0, elf_ppnt = elf_phdata;
+ i < loc->elf_ex.e_phnum; i++, elf_ppnt++) {
+- int elf_prot, elf_flags, elf_fixed = MAP_FIXED_NOREPLACE;
++ int elf_prot, elf_flags;
+ unsigned long k, vaddr;
+ unsigned long total_size = 0;
+
+@@ -931,13 +931,6 @@ out_free_interp:
+ */
+ }
+ }
+-
+- /*
+- * Some binaries have overlapping elf segments and then
+- * we have to forcefully map over an existing mapping
+- * e.g. over this newly established brk mapping.
+- */
+- elf_fixed = MAP_FIXED;
+ }
+
+ elf_prot = make_prot(elf_ppnt->p_flags);
+@@ -950,7 +943,7 @@ out_free_interp:
+ * the ET_DYN load_addr calculations, proceed normally.
+ */
+ if (loc->elf_ex.e_type == ET_EXEC || load_addr_set) {
+- elf_flags |= elf_fixed;
++ elf_flags |= MAP_FIXED;
+ } else if (loc->elf_ex.e_type == ET_DYN) {
+ /*
+ * This logic is run once for the first LOAD Program
+@@ -986,7 +979,7 @@ out_free_interp:
+ load_bias = ELF_ET_DYN_BASE;
+ if (current->flags & PF_RANDOMIZE)
+ load_bias += arch_mmap_rnd();
+- elf_flags |= elf_fixed;
++ elf_flags |= MAP_FIXED;
+ } else
+ load_bias = 0;
+
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index d9541d58ce3d..e7a1ec075c65 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -908,8 +908,6 @@ struct btrfs_fs_info {
+ struct btrfs_workqueue *fixup_workers;
+ struct btrfs_workqueue *delayed_workers;
+
+- /* the extent workers do delayed refs on the extent allocation tree */
+- struct btrfs_workqueue *extent_workers;
+ struct task_struct *transaction_kthread;
+ struct task_struct *cleaner_kthread;
+ u32 thread_pool_size;
+diff --git a/fs/btrfs/delalloc-space.c b/fs/btrfs/delalloc-space.c
+index 17f7c0d38768..934521fe7e71 100644
+--- a/fs/btrfs/delalloc-space.c
++++ b/fs/btrfs/delalloc-space.c
+@@ -371,7 +371,6 @@ int btrfs_delalloc_reserve_metadata(struct btrfs_inode *inode, u64 num_bytes)
+ out_qgroup:
+ btrfs_qgroup_free_meta_prealloc(root, qgroup_reserve);
+ out_fail:
+- btrfs_inode_rsv_release(inode, true);
+ if (delalloc_lock)
+ mutex_unlock(&inode->delalloc_mutex);
+ return ret;
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 65af7eb3f7bd..46eac7ddf0f7 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -2036,7 +2036,6 @@ static void btrfs_stop_all_workers(struct btrfs_fs_info *fs_info)
+ btrfs_destroy_workqueue(fs_info->readahead_workers);
+ btrfs_destroy_workqueue(fs_info->flush_workers);
+ btrfs_destroy_workqueue(fs_info->qgroup_rescan_workers);
+- btrfs_destroy_workqueue(fs_info->extent_workers);
+ /*
+ * Now that all other work queues are destroyed, we can safely destroy
+ * the queues used for metadata I/O, since tasks from those other work
+@@ -2242,10 +2241,6 @@ static int btrfs_init_workqueues(struct btrfs_fs_info *fs_info,
+ max_active, 2);
+ fs_info->qgroup_rescan_workers =
+ btrfs_alloc_workqueue(fs_info, "qgroup-rescan", flags, 1, 0);
+- fs_info->extent_workers =
+- btrfs_alloc_workqueue(fs_info, "extent-refs", flags,
+- min_t(u64, fs_devices->num_devices,
+- max_active), 8);
+
+ if (!(fs_info->workers && fs_info->delalloc_workers &&
+ fs_info->submit_workers && fs_info->flush_workers &&
+@@ -2256,7 +2251,6 @@ static int btrfs_init_workqueues(struct btrfs_fs_info *fs_info,
+ fs_info->endio_freespace_worker && fs_info->rmw_workers &&
+ fs_info->caching_workers && fs_info->readahead_workers &&
+ fs_info->fixup_workers && fs_info->delayed_workers &&
+- fs_info->extent_workers &&
+ fs_info->qgroup_rescan_workers)) {
+ return -ENOMEM;
+ }
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index ef2f80825c82..d5a3a66c8f1d 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -8117,6 +8117,7 @@ int btrfs_read_block_groups(struct btrfs_fs_info *info)
+ btrfs_err(info,
+ "bg %llu is a mixed block group but filesystem hasn't enabled mixed block groups",
+ cache->key.objectid);
++ btrfs_put_block_group(cache);
+ ret = -EINVAL;
+ goto error;
+ }
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index abcda051eee2..d68add0bf346 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -2067,25 +2067,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ struct btrfs_trans_handle *trans;
+ struct btrfs_log_ctx ctx;
+ int ret = 0, err;
+- u64 len;
+
+- /*
+- * If the inode needs a full sync, make sure we use a full range to
+- * avoid log tree corruption, due to hole detection racing with ordered
+- * extent completion for adjacent ranges, and assertion failures during
+- * hole detection.
+- */
+- if (test_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
+- &BTRFS_I(inode)->runtime_flags)) {
+- start = 0;
+- end = LLONG_MAX;
+- }
+-
+- /*
+- * The range length can be represented by u64, we have to do the typecasts
+- * to avoid signed overflow if it's [0, LLONG_MAX] eg. from fsync()
+- */
+- len = (u64)end - (u64)start + 1;
+ trace_btrfs_sync_file(file, datasync);
+
+ btrfs_init_log_ctx(&ctx, inode);
+@@ -2111,6 +2093,19 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+
+ atomic_inc(&root->log_batch);
+
++ /*
++ * If the inode needs a full sync, make sure we use a full range to
++ * avoid log tree corruption, due to hole detection racing with ordered
++ * extent completion for adjacent ranges, and assertion failures during
++ * hole detection. Do this while holding the inode lock, to avoid races
++ * with other tasks.
++ */
++ if (test_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
++ &BTRFS_I(inode)->runtime_flags)) {
++ start = 0;
++ end = LLONG_MAX;
++ }
++
+ /*
+ * Before we acquired the inode's lock, someone may have dirtied more
+ * pages in the target range. We need to make sure that writeback for
+@@ -2138,8 +2133,11 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ /*
+ * We have to do this here to avoid the priority inversion of waiting on
+ * IO of a lower priority task while holding a transaction open.
++ *
++ * Also, the range length can be represented by u64, we have to do the
++ * typecasts to avoid signed overflow if it's [0, LLONG_MAX].
+ */
+- ret = btrfs_wait_ordered_range(inode, start, len);
++ ret = btrfs_wait_ordered_range(inode, start, (u64)end - (u64)start + 1);
+ if (ret) {
+ up_write(&BTRFS_I(inode)->dio_sem);
+ inode_unlock(inode);
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 001efc9ba1e7..60a00f6ca18f 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -3617,7 +3617,7 @@ int __btrfs_qgroup_reserve_meta(struct btrfs_root *root, int num_bytes,
+ return 0;
+
+ BUG_ON(num_bytes != round_down(num_bytes, fs_info->nodesize));
+- trace_qgroup_meta_reserve(root, type, (s64)num_bytes);
++ trace_qgroup_meta_reserve(root, (s64)num_bytes, type);
+ ret = qgroup_reserve(root, num_bytes, enforce, type);
+ if (ret < 0)
+ return ret;
+@@ -3664,7 +3664,7 @@ void __btrfs_qgroup_free_meta(struct btrfs_root *root, int num_bytes,
+ */
+ num_bytes = sub_root_meta_rsv(root, num_bytes, type);
+ BUG_ON(num_bytes != round_down(num_bytes, fs_info->nodesize));
+- trace_qgroup_meta_reserve(root, type, -(s64)num_bytes);
++ trace_qgroup_meta_reserve(root, -(s64)num_bytes, type);
+ btrfs_qgroup_free_refroot(fs_info, root->root_key.objectid,
+ num_bytes, type);
+ }
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index fbd66c33dd63..074947bebd16 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -3276,6 +3276,8 @@ static int relocate_file_extent_cluster(struct inode *inode,
+ if (!page) {
+ btrfs_delalloc_release_metadata(BTRFS_I(inode),
+ PAGE_SIZE, true);
++ btrfs_delalloc_release_extents(BTRFS_I(inode),
++ PAGE_SIZE, true);
+ ret = -ENOMEM;
+ goto out;
+ }
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index b11af7d8e8e9..61282b77950f 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -384,8 +384,8 @@ static int parse_reply_info_readdir(void **p, void *end,
+ }
+
+ done:
+- if (*p != end)
+- goto bad;
++ /* Skip over any unrecognized fields */
++ *p = end;
+ return 0;
+
+ bad:
+@@ -406,12 +406,10 @@ static int parse_reply_info_filelock(void **p, void *end,
+ goto bad;
+
+ info->filelock_reply = *p;
+- *p += sizeof(*info->filelock_reply);
+
+- if (unlikely(*p != end))
+- goto bad;
++ /* Skip over any unrecognized fields */
++ *p = end;
+ return 0;
+-
+ bad:
+ return -EIO;
+ }
+@@ -425,18 +423,21 @@ static int parse_reply_info_create(void **p, void *end,
+ {
+ if (features == (u64)-1 ||
+ (features & CEPH_FEATURE_REPLY_CREATE_INODE)) {
++ /* Malformed reply? */
+ if (*p == end) {
+ info->has_create_ino = false;
+ } else {
+ info->has_create_ino = true;
+- info->ino = ceph_decode_64(p);
++ ceph_decode_64_safe(p, end, info->ino, bad);
+ }
++ } else {
++ if (*p != end)
++ goto bad;
+ }
+
+- if (unlikely(*p != end))
+- goto bad;
++ /* Skip over any unrecognized fields */
++ *p = end;
+ return 0;
+-
+ bad:
+ return -EIO;
+ }
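
A minimal sketch of the decode idiom these ceph hunks adopt, using
hypothetical cursor helpers rather than the kernel's ceph_decode_* macros:
bounds-checked reads still fail hard, but trailing fields a newer server
appends are skipped instead of rejected:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct cursor {
	const uint8_t *p, *end;
};

static int decode_u64(struct cursor *c, uint64_t *v)
{
	if ((size_t)(c->end - c->p) < sizeof(*v))
		return -5;		/* -EIO: truncated reply */
	memcpy(v, c->p, sizeof(*v));
	c->p += sizeof(*v);
	return 0;
}

static int parse_create_reply(struct cursor *c, uint64_t *ino)
{
	if (decode_u64(c, ino))
		return -5;
	c->p = c->end;			/* skip unrecognized trailing fields */
	return 0;
}
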
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 4c1aeb2cf7f5..53dbb6e0d390 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -405,10 +405,11 @@ void _cifsFileInfo_put(struct cifsFileInfo *cifs_file, bool wait_oplock_handler)
+ bool oplock_break_cancelled;
+
+ spin_lock(&tcon->open_file_lock);
+-
++ spin_lock(&cifsi->open_file_lock);
+ spin_lock(&cifs_file->file_info_lock);
+ if (--cifs_file->count > 0) {
+ spin_unlock(&cifs_file->file_info_lock);
++ spin_unlock(&cifsi->open_file_lock);
+ spin_unlock(&tcon->open_file_lock);
+ return;
+ }
+@@ -421,9 +422,7 @@ void _cifsFileInfo_put(struct cifsFileInfo *cifs_file, bool wait_oplock_handler)
+ cifs_add_pending_open_locked(&fid, cifs_file->tlink, &open);
+
+ /* remove it from the lists */
+- spin_lock(&cifsi->open_file_lock);
+ list_del(&cifs_file->flist);
+- spin_unlock(&cifsi->open_file_lock);
+ list_del(&cifs_file->tlist);
+ atomic_dec(&tcon->num_local_opens);
+
+@@ -440,6 +439,7 @@ void _cifsFileInfo_put(struct cifsFileInfo *cifs_file, bool wait_oplock_handler)
+ cifs_set_oplock_level(cifsi, 0);
+ }
+
++ spin_unlock(&cifsi->open_file_lock);
+ spin_unlock(&tcon->open_file_lock);
+
+ oplock_break_cancelled = wait_oplock_handler ?
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index 79d9a60f21ba..3c952024e10f 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -2465,9 +2465,9 @@ cifs_setattr_nounix(struct dentry *direntry, struct iattr *attrs)
+ rc = tcon->ses->server->ops->flush(xid, tcon, &wfile->fid);
+ cifsFileInfo_put(wfile);
+ if (rc)
+- return rc;
++ goto cifs_setattr_exit;
+ } else if (rc != -EBADF)
+- return rc;
++ goto cifs_setattr_exit;
+ else
+ rc = 0;
+ }
+diff --git a/fs/cifs/smb1ops.c b/fs/cifs/smb1ops.c
+index b7421a096319..514810694c0f 100644
+--- a/fs/cifs/smb1ops.c
++++ b/fs/cifs/smb1ops.c
+@@ -171,6 +171,9 @@ cifs_get_next_mid(struct TCP_Server_Info *server)
+ /* we do not want to loop forever */
+ last_mid = cur_mid;
+ cur_mid++;
++ /* avoid 0xFFFF MID */
++ if (cur_mid == 0xffff)
++ cur_mid++;
+
+ /*
+ * This nested loop looks more expensive than it is.
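
A standalone sketch of the wrap-around handling above; the note that servers
treat MID 0xFFFF specially (it is used, for instance, in SMB1 oplock break
notifications) is background context, not something stated in the hunk:

#include <stdint.h>

static uint16_t next_mid(uint16_t cur_mid)
{
	cur_mid++;
	if (cur_mid == 0xffff)	/* reserved value: never hand it out */
		cur_mid++;
	return cur_mid;
}
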
+diff --git a/fs/dax.c b/fs/dax.c
+index 6bf81f931de3..2cc43cd914eb 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -220,10 +220,11 @@ static void *get_unlocked_entry(struct xa_state *xas, unsigned int order)
+
+ for (;;) {
+ entry = xas_find_conflict(xas);
++ if (!entry || WARN_ON_ONCE(!xa_is_value(entry)))
++ return entry;
+ if (dax_entry_order(entry) < order)
+ return XA_RETRY_ENTRY;
+- if (!entry || WARN_ON_ONCE(!xa_is_value(entry)) ||
+- !dax_is_locked(entry))
++ if (!dax_is_locked(entry))
+ return entry;
+
+ wq = dax_entry_waitqueue(xas, entry, &ewait.key);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 30149652c379..ed223c33dd89 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -221,6 +221,7 @@ struct io_ring_ctx {
+ unsigned sq_entries;
+ unsigned sq_mask;
+ unsigned sq_thread_idle;
++ unsigned cached_sq_dropped;
+ struct io_uring_sqe *sq_sqes;
+
+ struct list_head defer_list;
+@@ -237,6 +238,7 @@ struct io_ring_ctx {
+ /* CQ ring */
+ struct io_cq_ring *cq_ring;
+ unsigned cached_cq_tail;
++ atomic_t cached_cq_overflow;
+ unsigned cq_entries;
+ unsigned cq_mask;
+ struct wait_queue_head cq_wait;
+@@ -431,7 +433,8 @@ static inline bool io_sequence_defer(struct io_ring_ctx *ctx,
+ if ((req->flags & (REQ_F_IO_DRAIN|REQ_F_IO_DRAINED)) != REQ_F_IO_DRAIN)
+ return false;
+
+- return req->sequence != ctx->cached_cq_tail + ctx->sq_ring->dropped;
++ return req->sequence != ctx->cached_cq_tail + ctx->sq_ring->dropped
++ + atomic_read(&ctx->cached_cq_overflow);
+ }
+
+ static struct io_kiocb *io_get_deferred_req(struct io_ring_ctx *ctx)
+@@ -511,9 +514,8 @@ static void io_cqring_fill_event(struct io_ring_ctx *ctx, u64 ki_user_data,
+ WRITE_ONCE(cqe->res, res);
+ WRITE_ONCE(cqe->flags, 0);
+ } else {
+- unsigned overflow = READ_ONCE(ctx->cq_ring->overflow);
+-
+- WRITE_ONCE(ctx->cq_ring->overflow, overflow + 1);
++ WRITE_ONCE(ctx->cq_ring->overflow,
++ atomic_inc_return(&ctx->cached_cq_overflow));
+ }
+ }
+
+@@ -687,6 +689,14 @@ static unsigned io_cqring_events(struct io_cq_ring *ring)
+ return READ_ONCE(ring->r.tail) - READ_ONCE(ring->r.head);
+ }
+
++static inline unsigned int io_sqring_entries(struct io_ring_ctx *ctx)
++{
++ struct io_sq_ring *ring = ctx->sq_ring;
++
++ /* make sure SQ entry isn't read before tail */
++ return smp_load_acquire(&ring->r.tail) - ctx->cached_sq_head;
++}
++
+ /*
+ * Find and free completed poll iocbs
+ */
+@@ -816,19 +826,11 @@ static void io_iopoll_reap_events(struct io_ring_ctx *ctx)
+ mutex_unlock(&ctx->uring_lock);
+ }
+
+-static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned *nr_events,
+- long min)
++static int __io_iopoll_check(struct io_ring_ctx *ctx, unsigned *nr_events,
++ long min)
+ {
+- int iters, ret = 0;
++ int iters = 0, ret = 0;
+
+- /*
+- * We disallow the app entering submit/complete with polling, but we
+- * still need to lock the ring to prevent racing with polled issue
+- * that got punted to a workqueue.
+- */
+- mutex_lock(&ctx->uring_lock);
+-
+- iters = 0;
+ do {
+ int tmin = 0;
+
+@@ -864,6 +866,21 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned *nr_events,
+ ret = 0;
+ } while (min && !*nr_events && !need_resched());
+
++ return ret;
++}
++
++static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned *nr_events,
++ long min)
++{
++ int ret;
++
++ /*
++ * We disallow the app entering submit/complete with polling, but we
++ * still need to lock the ring to prevent racing with polled issue
++ * that got punted to a workqueue.
++ */
++ mutex_lock(&ctx->uring_lock);
++ ret = __io_iopoll_check(ctx, nr_events, min);
+ mutex_unlock(&ctx->uring_lock);
+ return ret;
+ }
+@@ -2150,6 +2167,8 @@ err:
+ return;
+ }
+
++ req->user_data = s->sqe->user_data;
++
+ /*
+ * If we already have a head request, queue this one for async
+ * submittal once the head completes. If we don't have a head but
+@@ -2255,12 +2274,13 @@ static bool io_get_sqring(struct io_ring_ctx *ctx, struct sqe_submit *s)
+
+ /* drop invalid entries */
+ ctx->cached_sq_head++;
+- ring->dropped++;
++ ctx->cached_sq_dropped++;
++ WRITE_ONCE(ring->dropped, ctx->cached_sq_dropped);
+ return false;
+ }
+
+-static int io_submit_sqes(struct io_ring_ctx *ctx, struct sqe_submit *sqes,
+- unsigned int nr, bool has_user, bool mm_fault)
++static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
++ bool has_user, bool mm_fault)
+ {
+ struct io_submit_state state, *statep = NULL;
+ struct io_kiocb *link = NULL;
+@@ -2273,6 +2293,11 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, struct sqe_submit *sqes,
+ }
+
+ for (i = 0; i < nr; i++) {
++ struct sqe_submit s;
++
++ if (!io_get_sqring(ctx, &s))
++ break;
++
+ /*
+ * If previous wasn't linked and we have a linked command,
+ * that's the end of the chain. Submit the previous link.
+@@ -2281,16 +2306,16 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, struct sqe_submit *sqes,
+ io_queue_sqe(ctx, link, &link->submit);
+ link = NULL;
+ }
+- prev_was_link = (sqes[i].sqe->flags & IOSQE_IO_LINK) != 0;
++ prev_was_link = (s.sqe->flags & IOSQE_IO_LINK) != 0;
+
+ if (unlikely(mm_fault)) {
+- io_cqring_add_event(ctx, sqes[i].sqe->user_data,
++ io_cqring_add_event(ctx, s.sqe->user_data,
+ -EFAULT);
+ } else {
+- sqes[i].has_user = has_user;
+- sqes[i].needs_lock = true;
+- sqes[i].needs_fixed_file = true;
+- io_submit_sqe(ctx, &sqes[i], statep, &link);
++ s.has_user = has_user;
++ s.needs_lock = true;
++ s.needs_fixed_file = true;
++ io_submit_sqe(ctx, &s, statep, &link);
+ submitted++;
+ }
+ }
+@@ -2305,7 +2330,6 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, struct sqe_submit *sqes,
+
+ static int io_sq_thread(void *data)
+ {
+- struct sqe_submit sqes[IO_IOPOLL_BATCH];
+ struct io_ring_ctx *ctx = data;
+ struct mm_struct *cur_mm = NULL;
+ mm_segment_t old_fs;
+@@ -2320,14 +2344,27 @@ static int io_sq_thread(void *data)
+
+ timeout = inflight = 0;
+ while (!kthread_should_park()) {
+- bool all_fixed, mm_fault = false;
+- int i;
++ bool mm_fault = false;
++ unsigned int to_submit;
+
+ if (inflight) {
+ unsigned nr_events = 0;
+
+ if (ctx->flags & IORING_SETUP_IOPOLL) {
+- io_iopoll_check(ctx, &nr_events, 0);
++ /*
++ * inflight is the count of the maximum possible
++ * entries we submitted, but it can be smaller
++ * if we dropped some of them. If we don't have
++ * poll entries available, then we know that we
++ * have nothing left to poll for. Reset the
++ * inflight count to zero in that case.
++ */
++ mutex_lock(&ctx->uring_lock);
++ if (!list_empty(&ctx->poll_list))
++ __io_iopoll_check(ctx, &nr_events, 0);
++ else
++ inflight = 0;
++ mutex_unlock(&ctx->uring_lock);
+ } else {
+ /*
+ * Normal IO, just pretend everything completed.
+@@ -2341,7 +2378,8 @@ static int io_sq_thread(void *data)
+ timeout = jiffies + ctx->sq_thread_idle;
+ }
+
+- if (!io_get_sqring(ctx, &sqes[0])) {
++ to_submit = io_sqring_entries(ctx);
++ if (!to_submit) {
+ /*
+ * We're polling. If we're within the defined idle
+ * period, then let us spin without work before going
+@@ -2372,7 +2410,8 @@ static int io_sq_thread(void *data)
+ /* make sure to read SQ tail after writing flags */
+ smp_mb();
+
+- if (!io_get_sqring(ctx, &sqes[0])) {
++ to_submit = io_sqring_entries(ctx);
++ if (!to_submit) {
+ if (kthread_should_park()) {
+ finish_wait(&ctx->sqo_wait, &wait);
+ break;
+@@ -2390,19 +2429,8 @@ static int io_sq_thread(void *data)
+ ctx->sq_ring->flags &= ~IORING_SQ_NEED_WAKEUP;
+ }
+
+- i = 0;
+- all_fixed = true;
+- do {
+- if (all_fixed && io_sqe_needs_user(sqes[i].sqe))
+- all_fixed = false;
+-
+- i++;
+- if (i == ARRAY_SIZE(sqes))
+- break;
+- } while (io_get_sqring(ctx, &sqes[i]));
+-
+ /* Unless all new commands are FIXED regions, grab mm */
+- if (!all_fixed && !cur_mm) {
++ if (!cur_mm) {
+ mm_fault = !mmget_not_zero(ctx->sqo_mm);
+ if (!mm_fault) {
+ use_mm(ctx->sqo_mm);
+@@ -2410,8 +2438,9 @@ static int io_sq_thread(void *data)
+ }
+ }
+
+- inflight += io_submit_sqes(ctx, sqes, i, cur_mm != NULL,
+- mm_fault);
++ to_submit = min(to_submit, ctx->sq_entries);
++ inflight += io_submit_sqes(ctx, to_submit, cur_mm != NULL,
++ mm_fault);
+
+ /* Commit SQ ring head once we've consumed all SQEs */
+ io_commit_sqring(ctx);
+@@ -2462,13 +2491,14 @@ static int io_ring_submit(struct io_ring_ctx *ctx, unsigned int to_submit)
+ submit++;
+ io_submit_sqe(ctx, &s, statep, &link);
+ }
+- io_commit_sqring(ctx);
+
+ if (link)
+ io_queue_sqe(ctx, link, &link->submit);
+ if (statep)
+ io_submit_state_end(statep);
+
++ io_commit_sqring(ctx);
++
+ return submit;
+ }
+
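
A C11-atomics sketch of the io_sqring_entries() helper introduced above: the
kernel must not read an SQE before it observes the tail update userspace
published after filling the entry, so the tail is loaded with acquire
semantics. Field names mirror the patch; this is not the uapi ring layout:

#include <stdatomic.h>

struct sq_ring {
	_Atomic unsigned int tail;
};

static unsigned int sqring_entries(struct sq_ring *ring,
				   unsigned int cached_sq_head)
{
	/* pairs with userspace's release store of the new tail */
	return atomic_load_explicit(&ring->tail, memory_order_acquire)
	       - cached_sq_head;
}
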
+diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
+index 930e3d388579..699a560efbb0 100644
+--- a/fs/ocfs2/journal.c
++++ b/fs/ocfs2/journal.c
+@@ -217,7 +217,8 @@ void ocfs2_recovery_exit(struct ocfs2_super *osb)
+ /* At this point, we know that no more recovery threads can be
+ * launched, so wait for any recovery completion work to
+ * complete. */
+- flush_workqueue(osb->ocfs2_wq);
++ if (osb->ocfs2_wq)
++ flush_workqueue(osb->ocfs2_wq);
+
+ /*
+ * Now that recovery is shut down, and the osb is about to be
+diff --git a/fs/ocfs2/localalloc.c b/fs/ocfs2/localalloc.c
+index 158e5af767fd..720e9f94957e 100644
+--- a/fs/ocfs2/localalloc.c
++++ b/fs/ocfs2/localalloc.c
+@@ -377,7 +377,8 @@ void ocfs2_shutdown_local_alloc(struct ocfs2_super *osb)
+ struct ocfs2_dinode *alloc = NULL;
+
+ cancel_delayed_work(&osb->la_enable_wq);
+- flush_workqueue(osb->ocfs2_wq);
++ if (osb->ocfs2_wq)
++ flush_workqueue(osb->ocfs2_wq);
+
+ if (osb->local_alloc_state == OCFS2_LA_UNUSED)
+ goto out;
+diff --git a/fs/proc/page.c b/fs/proc/page.c
+index 544d1ee15aee..7c952ee732e6 100644
+--- a/fs/proc/page.c
++++ b/fs/proc/page.c
+@@ -42,10 +42,12 @@ static ssize_t kpagecount_read(struct file *file, char __user *buf,
+ return -EINVAL;
+
+ while (count > 0) {
+- if (pfn_valid(pfn))
+- ppage = pfn_to_page(pfn);
+- else
+- ppage = NULL;
++ /*
++	 * TODO: ZONE_DEVICE support requires identifying
++ * memmaps that were actually initialized.
++ */
++ ppage = pfn_to_online_page(pfn);
++
+ if (!ppage || PageSlab(ppage) || page_has_type(ppage))
+ pcount = 0;
+ else
+@@ -216,10 +218,11 @@ static ssize_t kpageflags_read(struct file *file, char __user *buf,
+ return -EINVAL;
+
+ while (count > 0) {
+- if (pfn_valid(pfn))
+- ppage = pfn_to_page(pfn);
+- else
+- ppage = NULL;
++ /*
++	 * TODO: ZONE_DEVICE support requires identifying
++ * memmaps that were actually initialized.
++ */
++ ppage = pfn_to_online_page(pfn);
+
+ if (put_user(stable_page_flags(ppage), out)) {
+ ret = -EFAULT;
+@@ -261,10 +264,11 @@ static ssize_t kpagecgroup_read(struct file *file, char __user *buf,
+ return -EINVAL;
+
+ while (count > 0) {
+- if (pfn_valid(pfn))
+- ppage = pfn_to_page(pfn);
+- else
+- ppage = NULL;
++ /*
++	 * TODO: ZONE_DEVICE support requires identifying
++ * memmaps that were actually initialized.
++ */
++ ppage = pfn_to_online_page(pfn);
+
+ if (ppage)
+ ino = page_cgroup_ino(ppage);
+diff --git a/fs/readdir.c b/fs/readdir.c
+index 2f6a4534e0df..d26d5ea4de7b 100644
+--- a/fs/readdir.c
++++ b/fs/readdir.c
+@@ -20,9 +20,23 @@
+ #include <linux/syscalls.h>
+ #include <linux/unistd.h>
+ #include <linux/compat.h>
+-
+ #include <linux/uaccess.h>
+
++#include <asm/unaligned.h>
++
++/*
++ * Note the "unsafe_put_user()" semantics: we goto a
++ * label for errors.
++ */
++#define unsafe_copy_dirent_name(_dst, _src, _len, label) do { \
++ char __user *dst = (_dst); \
++ const char *src = (_src); \
++ size_t len = (_len); \
++ unsafe_put_user(0, dst+len, label); \
++ unsafe_copy_to_user(dst, src, len, label); \
++} while (0)
++
++
+ int iterate_dir(struct file *file, struct dir_context *ctx)
+ {
+ struct inode *inode = file_inode(file);
+@@ -64,6 +78,40 @@ out:
+ }
+ EXPORT_SYMBOL(iterate_dir);
+
++/*
++ * POSIX says that a dirent name cannot contain NULL or a '/'.
++ *
++ * It's not 100% clear what we should really do in this case.
++ * The filesystem is clearly corrupted, but returning a hard
++ * error means that you now don't see any of the other names
++ * either, so that isn't a perfect alternative.
++ *
++ * And if you return an error, what error do you use? Several
++ * filesystems seem to have decided on EUCLEAN being the error
++ * code for EFSCORRUPTED, and that may be the error to use. Or
++ * just EIO, which is perhaps more obvious to users.
++ *
++ * In order to see the other file names in the directory, the
++ * caller might want to make this a "soft" error: skip the
++ * entry, and return the error at the end instead.
++ *
++ * Note that this should likely do a "memchr(name, 0, len)"
++ * check too, since that would be filesystem corruption as
++ * well. However, that case can't actually confuse user space,
++ * which has to do a strlen() on the name anyway to find the
++ * filename length, and the above "soft error" worry means
++ * that it's probably better left alone until we have that
++ * issue clarified.
++ */
++static int verify_dirent_name(const char *name, int len)
++{
++ if (!len)
++ return -EIO;
++ if (memchr(name, '/', len))
++ return -EIO;
++ return 0;
++}
++
+ /*
+ * Traditional linux readdir() handling..
+ *
+@@ -173,6 +221,9 @@ static int filldir(struct dir_context *ctx, const char *name, int namlen,
+ int reclen = ALIGN(offsetof(struct linux_dirent, d_name) + namlen + 2,
+ sizeof(long));
+
++ buf->error = verify_dirent_name(name, namlen);
++ if (unlikely(buf->error))
++ return buf->error;
+ buf->error = -EINVAL; /* only used if we fail.. */
+ if (reclen > buf->count)
+ return -EINVAL;
+@@ -182,28 +233,31 @@ static int filldir(struct dir_context *ctx, const char *name, int namlen,
+ return -EOVERFLOW;
+ }
+ dirent = buf->previous;
+- if (dirent) {
+- if (signal_pending(current))
+- return -EINTR;
+- if (__put_user(offset, &dirent->d_off))
+- goto efault;
+- }
+- dirent = buf->current_dir;
+- if (__put_user(d_ino, &dirent->d_ino))
+- goto efault;
+- if (__put_user(reclen, &dirent->d_reclen))
+- goto efault;
+- if (copy_to_user(dirent->d_name, name, namlen))
+- goto efault;
+- if (__put_user(0, dirent->d_name + namlen))
+- goto efault;
+- if (__put_user(d_type, (char __user *) dirent + reclen - 1))
++ if (dirent && signal_pending(current))
++ return -EINTR;
++
++ /*
++ * Note! This range-checks 'previous' (which may be NULL).
++ * The real range was checked in getdents
++ */
++ if (!user_access_begin(dirent, sizeof(*dirent)))
+ goto efault;
++ if (dirent)
++ unsafe_put_user(offset, &dirent->d_off, efault_end);
++ dirent = buf->current_dir;
++ unsafe_put_user(d_ino, &dirent->d_ino, efault_end);
++ unsafe_put_user(reclen, &dirent->d_reclen, efault_end);
++ unsafe_put_user(d_type, (char __user *) dirent + reclen - 1, efault_end);
++ unsafe_copy_dirent_name(dirent->d_name, name, namlen, efault_end);
++ user_access_end();
++
+ buf->previous = dirent;
+ dirent = (void __user *)dirent + reclen;
+ buf->current_dir = dirent;
+ buf->count -= reclen;
+ return 0;
++efault_end:
++ user_access_end();
+ efault:
+ buf->error = -EFAULT;
+ return -EFAULT;
+@@ -259,34 +313,38 @@ static int filldir64(struct dir_context *ctx, const char *name, int namlen,
+ int reclen = ALIGN(offsetof(struct linux_dirent64, d_name) + namlen + 1,
+ sizeof(u64));
+
++ buf->error = verify_dirent_name(name, namlen);
++ if (unlikely(buf->error))
++ return buf->error;
+ buf->error = -EINVAL; /* only used if we fail.. */
+ if (reclen > buf->count)
+ return -EINVAL;
+ dirent = buf->previous;
+- if (dirent) {
+- if (signal_pending(current))
+- return -EINTR;
+- if (__put_user(offset, &dirent->d_off))
+- goto efault;
+- }
+- dirent = buf->current_dir;
+- if (__put_user(ino, &dirent->d_ino))
+- goto efault;
+- if (__put_user(0, &dirent->d_off))
+- goto efault;
+- if (__put_user(reclen, &dirent->d_reclen))
+- goto efault;
+- if (__put_user(d_type, &dirent->d_type))
+- goto efault;
+- if (copy_to_user(dirent->d_name, name, namlen))
+- goto efault;
+- if (__put_user(0, dirent->d_name + namlen))
++ if (dirent && signal_pending(current))
++ return -EINTR;
++
++ /*
++ * Note! This range-checks 'previous' (which may be NULL).
++ * The real range was checked in getdents
++ */
++ if (!user_access_begin(dirent, sizeof(*dirent)))
+ goto efault;
++ if (dirent)
++ unsafe_put_user(offset, &dirent->d_off, efault_end);
++ dirent = buf->current_dir;
++ unsafe_put_user(ino, &dirent->d_ino, efault_end);
++ unsafe_put_user(reclen, &dirent->d_reclen, efault_end);
++ unsafe_put_user(d_type, &dirent->d_type, efault_end);
++ unsafe_copy_dirent_name(dirent->d_name, name, namlen, efault_end);
++ user_access_end();
++
+ buf->previous = dirent;
+ dirent = (void __user *)dirent + reclen;
+ buf->current_dir = dirent;
+ buf->count -= reclen;
+ return 0;
++efault_end:
++ user_access_end();
+ efault:
+ buf->error = -EFAULT;
+ return -EFAULT;
+diff --git a/include/linux/micrel_phy.h b/include/linux/micrel_phy.h
+index ad24554f11f9..75f880c25bb8 100644
+--- a/include/linux/micrel_phy.h
++++ b/include/linux/micrel_phy.h
+@@ -31,7 +31,7 @@
+ #define PHY_ID_KSZ886X 0x00221430
+ #define PHY_ID_KSZ8863 0x00221435
+
+-#define PHY_ID_KSZ8795 0x00221550
++#define PHY_ID_KSZ87XX 0x00221550
+
+ #define PHY_ID_KSZ9477 0x00221631
+
+diff --git a/include/linux/mii.h b/include/linux/mii.h
+index 5cd824c1c0ca..4ce8901a1af6 100644
+--- a/include/linux/mii.h
++++ b/include/linux/mii.h
+@@ -455,6 +455,15 @@ static inline void mii_lpa_mod_linkmode_lpa_t(unsigned long *lp_advertising,
+ lp_advertising, lpa & LPA_LPACK);
+ }
+
++static inline void mii_ctrl1000_mod_linkmode_adv_t(unsigned long *advertising,
++ u32 ctrl1000)
++{
++ linkmode_mod_bit(ETHTOOL_LINK_MODE_1000baseT_Half_BIT, advertising,
++ ctrl1000 & ADVERTISE_1000HALF);
++ linkmode_mod_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, advertising,
++ ctrl1000 & ADVERTISE_1000FULL);
++}
++
+ /**
+ * linkmode_adv_to_lcl_adv_t
+ * @advertising:pointer to linkmode advertising
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index ba5583522d24..9b18d33681c2 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -3465,8 +3465,9 @@ int skb_ensure_writable(struct sk_buff *skb, int write_len);
+ int __skb_vlan_pop(struct sk_buff *skb, u16 *vlan_tci);
+ int skb_vlan_pop(struct sk_buff *skb);
+ int skb_vlan_push(struct sk_buff *skb, __be16 vlan_proto, u16 vlan_tci);
+-int skb_mpls_push(struct sk_buff *skb, __be32 mpls_lse, __be16 mpls_proto);
+-int skb_mpls_pop(struct sk_buff *skb, __be16 next_proto);
++int skb_mpls_push(struct sk_buff *skb, __be32 mpls_lse, __be16 mpls_proto,
++ int mac_len);
++int skb_mpls_pop(struct sk_buff *skb, __be16 next_proto, int mac_len);
+ int skb_mpls_update_lse(struct sk_buff *skb, __be32 mpls_lse);
+ int skb_mpls_dec_ttl(struct sk_buff *skb);
+ struct sk_buff *pskb_extract(struct sk_buff *skb, int off, int to_copy,
+diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
+index 34a038563d97..d38051dd414f 100644
+--- a/include/linux/uaccess.h
++++ b/include/linux/uaccess.h
+@@ -284,8 +284,10 @@ extern long strnlen_unsafe_user(const void __user *unsafe_addr, long count);
+ #ifndef user_access_begin
+ #define user_access_begin(ptr,len) access_ok(ptr, len)
+ #define user_access_end() do { } while (0)
+-#define unsafe_get_user(x, ptr, err) do { if (unlikely(__get_user(x, ptr))) goto err; } while (0)
+-#define unsafe_put_user(x, ptr, err) do { if (unlikely(__put_user(x, ptr))) goto err; } while (0)
++#define unsafe_op_wrap(op, err) do { if (unlikely(op)) goto err; } while (0)
++#define unsafe_get_user(x,p,e) unsafe_op_wrap(__get_user(x,p),e)
++#define unsafe_put_user(x,p,e) unsafe_op_wrap(__put_user(x,p),e)
++#define unsafe_copy_to_user(d,s,l,e) unsafe_op_wrap(__copy_to_user(d,s,l),e)
+ static inline unsigned long user_access_save(void) { return 0UL; }
+ static inline void user_access_restore(unsigned long flags) { }
+ #endif
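
A userspace sketch of the unsafe_op_wrap() idiom this hunk adds: wrap any
fallible operation so that failure jumps to a caller-supplied label, letting
a run of stores share one cleanup path (user_access_end() in the readdir
callers above). put_val() is a stand-in for __put_user():

#define op_wrap(op, err) do { if (op) goto err; } while (0)

static int put_val(int *dst, int val)
{
	if (!dst)
		return 1;	/* models a faulting user pointer */
	*dst = val;
	return 0;
}

static int fill_pair(int *a, int *b)
{
	op_wrap(put_val(a, 1), fault);
	op_wrap(put_val(b, 2), fault);
	return 0;
fault:
	return -14;		/* -EFAULT */
}
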
+diff --git a/include/scsi/scsi_eh.h b/include/scsi/scsi_eh.h
+index 3810b340551c..6bd5ed695a5e 100644
+--- a/include/scsi/scsi_eh.h
++++ b/include/scsi/scsi_eh.h
+@@ -32,6 +32,7 @@ extern int scsi_ioctl_reset(struct scsi_device *, int __user *);
+ struct scsi_eh_save {
+ /* saved state */
+ int result;
++ unsigned int resid_len;
+ int eh_eflags;
+ enum dma_data_direction data_direction;
+ unsigned underflow;
+diff --git a/include/trace/events/btrfs.h b/include/trace/events/btrfs.h
+index 2f6a669408bb..e83dee3212bd 100644
+--- a/include/trace/events/btrfs.h
++++ b/include/trace/events/btrfs.h
+@@ -1687,6 +1687,7 @@ TRACE_EVENT(qgroup_update_reserve,
+ __entry->qgid = qgroup->qgroupid;
+ __entry->cur_reserved = qgroup->rsv.values[type];
+ __entry->diff = diff;
++ __entry->type = type;
+ ),
+
+ TP_printk_btrfs("qgid=%llu type=%s cur_reserved=%llu diff=%lld",
+@@ -1709,6 +1710,7 @@ TRACE_EVENT(qgroup_meta_reserve,
+ TP_fast_assign_btrfs(root->fs_info,
+ __entry->refroot = root->root_key.objectid;
+ __entry->diff = diff;
++ __entry->type = type;
+ ),
+
+ TP_printk_btrfs("refroot=%llu(%s) type=%s diff=%lld",
+@@ -1725,7 +1727,6 @@ TRACE_EVENT(qgroup_meta_convert,
+ TP_STRUCT__entry_btrfs(
+ __field( u64, refroot )
+ __field( s64, diff )
+- __field( int, type )
+ ),
+
+ TP_fast_assign_btrfs(root->fs_info,
+diff --git a/include/uapi/linux/nvme_ioctl.h b/include/uapi/linux/nvme_ioctl.h
+index 1c215ea1798e..e168dc59e9a0 100644
+--- a/include/uapi/linux/nvme_ioctl.h
++++ b/include/uapi/linux/nvme_ioctl.h
+@@ -45,6 +45,27 @@ struct nvme_passthru_cmd {
+ __u32 result;
+ };
+
++struct nvme_passthru_cmd64 {
++ __u8 opcode;
++ __u8 flags;
++ __u16 rsvd1;
++ __u32 nsid;
++ __u32 cdw2;
++ __u32 cdw3;
++ __u64 metadata;
++ __u64 addr;
++ __u32 metadata_len;
++ __u32 data_len;
++ __u32 cdw10;
++ __u32 cdw11;
++ __u32 cdw12;
++ __u32 cdw13;
++ __u32 cdw14;
++ __u32 cdw15;
++ __u32 timeout_ms;
++ __u64 result;
++};
++
+ #define nvme_admin_cmd nvme_passthru_cmd
+
+ #define NVME_IOCTL_ID _IO('N', 0x40)
+@@ -54,5 +75,7 @@ struct nvme_passthru_cmd {
+ #define NVME_IOCTL_RESET _IO('N', 0x44)
+ #define NVME_IOCTL_SUBSYS_RESET _IO('N', 0x45)
+ #define NVME_IOCTL_RESCAN _IO('N', 0x46)
++#define NVME_IOCTL_ADMIN64_CMD _IOWR('N', 0x47, struct nvme_passthru_cmd64)
++#define NVME_IOCTL_IO64_CMD _IOWR('N', 0x48, struct nvme_passthru_cmd64)
+
+ #endif /* _UAPI_LINUX_NVME_IOCTL_H */
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 0463c1151bae..a2a50b668ef3 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -6839,7 +6839,7 @@ static void __perf_event_output_stop(struct perf_event *event, void *data)
+ static int __perf_pmu_output_stop(void *info)
+ {
+ struct perf_event *event = info;
+- struct pmu *pmu = event->pmu;
++ struct pmu *pmu = event->ctx->pmu;
+ struct perf_cpu_context *cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);
+ struct remote_output ro = {
+ .rb = event->rb,
+diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
+index 0892e38ed6fb..a9dfa04ffa44 100644
+--- a/kernel/trace/trace_event_perf.c
++++ b/kernel/trace/trace_event_perf.c
+@@ -272,9 +272,11 @@ int perf_kprobe_init(struct perf_event *p_event, bool is_retprobe)
+ goto out;
+ }
+
++ mutex_lock(&event_mutex);
+ ret = perf_trace_event_init(tp_event, p_event);
+ if (ret)
+ destroy_local_trace_kprobe(tp_event);
++ mutex_unlock(&event_mutex);
+ out:
+ kfree(func);
+ return ret;
+@@ -282,8 +284,10 @@ out:
+
+ void perf_kprobe_destroy(struct perf_event *p_event)
+ {
++ mutex_lock(&event_mutex);
+ perf_trace_event_close(p_event);
+ perf_trace_event_unreg(p_event);
++ mutex_unlock(&event_mutex);
+
+ destroy_local_trace_kprobe(p_event->tp_event);
+ }
+diff --git a/lib/textsearch.c b/lib/textsearch.c
+index 4f16eec5d554..f68dea8806be 100644
+--- a/lib/textsearch.c
++++ b/lib/textsearch.c
+@@ -89,9 +89,9 @@
+ * goto errout;
+ * }
+ *
+- * pos = textsearch_find_continuous(conf, \&state, example, strlen(example));
++ * pos = textsearch_find_continuous(conf, &state, example, strlen(example));
+ * if (pos != UINT_MAX)
+- * panic("Oh my god, dancing chickens at \%d\n", pos);
++ * panic("Oh my god, dancing chickens at %d\n", pos);
+ *
+ * textsearch_destroy(conf);
+ */
+diff --git a/lib/vdso/gettimeofday.c b/lib/vdso/gettimeofday.c
+index e630e7ff57f1..45f57fd2db64 100644
+--- a/lib/vdso/gettimeofday.c
++++ b/lib/vdso/gettimeofday.c
+@@ -214,9 +214,10 @@ int __cvdso_clock_getres_common(clockid_t clock, struct __kernel_timespec *res)
+ return -1;
+ }
+
+- res->tv_sec = 0;
+- res->tv_nsec = ns;
+-
++ if (likely(res)) {
++ res->tv_sec = 0;
++ res->tv_nsec = ns;
++ }
+ return 0;
+ }
+
+@@ -245,7 +246,7 @@ __cvdso_clock_getres_time32(clockid_t clock, struct old_timespec32 *res)
+ ret = clock_getres_fallback(clock, &ts);
+ #endif
+
+- if (likely(!ret)) {
++ if (likely(!ret && res)) {
+ res->tv_sec = ts.tv_sec;
+ res->tv_nsec = ts.tv_nsec;
+ }
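
A minimal sketch of the guarded store, assuming (per POSIX) that passing a
NULL res pointer to clock_getres() is legal; the struct stands in for
__kernel_timespec:

struct timespec64 {
	long long tv_sec;
	long tv_nsec;
};

static int getres_common(struct timespec64 *res, long ns)
{
	if (res) {	/* previously dereferenced unconditionally */
		res->tv_sec = 0;
		res->tv_nsec = ns;
	}
	return 0;
}
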
+diff --git a/mm/compaction.c b/mm/compaction.c
+index 1e994920e6ff..5ab9c2b22693 100644
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -270,14 +270,15 @@ __reset_isolation_pfn(struct zone *zone, unsigned long pfn, bool check_source,
+
+ /* Ensure the start of the pageblock or zone is online and valid */
+ block_pfn = pageblock_start_pfn(pfn);
+- block_page = pfn_to_online_page(max(block_pfn, zone->zone_start_pfn));
++ block_pfn = max(block_pfn, zone->zone_start_pfn);
++ block_page = pfn_to_online_page(block_pfn);
+ if (block_page) {
+ page = block_page;
+ pfn = block_pfn;
+ }
+
+ /* Ensure the end of the pageblock or zone is online and valid */
+- block_pfn += pageblock_nr_pages;
++ block_pfn = pageblock_end_pfn(pfn) - 1;
+ block_pfn = min(block_pfn, zone_end_pfn(zone) - 1);
+ end_page = pfn_to_online_page(block_pfn);
+ if (!end_page)
+@@ -303,7 +304,7 @@ __reset_isolation_pfn(struct zone *zone, unsigned long pfn, bool check_source,
+
+ page += (1 << PAGE_ALLOC_COSTLY_ORDER);
+ pfn += (1 << PAGE_ALLOC_COSTLY_ORDER);
+- } while (page < end_page);
++ } while (page <= end_page);
+
+ return false;
+ }
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 6d7296dd11b8..843ee2f8d356 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -1084,11 +1084,10 @@ static bool pfn_range_valid_gigantic(struct zone *z,
+ struct page *page;
+
+ for (i = start_pfn; i < end_pfn; i++) {
+- if (!pfn_valid(i))
++ page = pfn_to_online_page(i);
++ if (!page)
+ return false;
+
+- page = pfn_to_page(i);
+-
+ if (page_zone(page) != z)
+ return false;
+
+diff --git a/mm/memblock.c b/mm/memblock.c
+index 7d4f61ae666a..c4b16cae2bc9 100644
+--- a/mm/memblock.c
++++ b/mm/memblock.c
+@@ -1356,9 +1356,6 @@ static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
+ align = SMP_CACHE_BYTES;
+ }
+
+- if (end > memblock.current_limit)
+- end = memblock.current_limit;
+-
+ again:
+ found = memblock_find_in_range_node(size, align, start, end, nid,
+ flags);
+@@ -1469,6 +1466,9 @@ static void * __init memblock_alloc_internal(
+ if (WARN_ON_ONCE(slab_is_available()))
+ return kzalloc_node(size, GFP_NOWAIT, nid);
+
++ if (max_addr > memblock.current_limit)
++ max_addr = memblock.current_limit;
++
+ alloc = memblock_alloc_range_nid(size, align, min_addr, max_addr, nid);
+
+ /* retry allocation without lower limit */
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 7ef849da8278..3151c87dff73 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -199,7 +199,6 @@ struct to_kill {
+ struct task_struct *tsk;
+ unsigned long addr;
+ short size_shift;
+- char addr_valid;
+ };
+
+ /*
+@@ -324,22 +323,27 @@ static void add_to_kill(struct task_struct *tsk, struct page *p,
+ }
+ }
+ tk->addr = page_address_in_vma(p, vma);
+- tk->addr_valid = 1;
+ if (is_zone_device_page(p))
+ tk->size_shift = dev_pagemap_mapping_shift(p, vma);
+ else
+ tk->size_shift = compound_order(compound_head(p)) + PAGE_SHIFT;
+
+ /*
+- * In theory we don't have to kill when the page was
+- * munmaped. But it could be also a mremap. Since that's
+- * likely very rare kill anyways just out of paranoia, but use
+- * a SIGKILL because the error is not contained anymore.
++ * Send SIGKILL if "tk->addr == -EFAULT". Also, as
++ * "tk->size_shift" is always non-zero for !is_zone_device_page(),
++ * so "tk->size_shift == 0" effectively checks no mapping on
++ * ZONE_DEVICE. Indeed, when a devdax page is mmapped N times
++ * to a process' address space, it's possible not all N VMAs
++ * contain mappings for the page, but at least one VMA does.
++ * Only deliver SIGBUS with payload derived from the VMA that
++ * has a mapping for the page.
+ */
+- if (tk->addr == -EFAULT || tk->size_shift == 0) {
++ if (tk->addr == -EFAULT) {
+ pr_info("Memory failure: Unable to find user space address %lx in %s\n",
+ page_to_pfn(p), tsk->comm);
+- tk->addr_valid = 0;
++ } else if (tk->size_shift == 0) {
++ kfree(tk);
++ return;
+ }
+ get_task_struct(tsk);
+ tk->tsk = tsk;
+@@ -366,7 +370,7 @@ static void kill_procs(struct list_head *to_kill, int forcekill, bool fail,
+ * make sure the process doesn't catch the
+ * signal and then access the memory. Just kill it.
+ */
+- if (fail || tk->addr_valid == 0) {
++ if (fail || tk->addr == -EFAULT) {
+ pr_err("Memory failure: %#lx: forcibly killing %s:%d because of failure to unmap corrupted page\n",
+ pfn, tk->tsk->comm, tk->tsk->pid);
+ do_send_sig_info(SIGKILL, SEND_SIG_PRIV,
+@@ -1253,17 +1257,19 @@ int memory_failure(unsigned long pfn, int flags)
+ if (!sysctl_memory_failure_recovery)
+ panic("Memory failure on page %lx", pfn);
+
+- if (!pfn_valid(pfn)) {
++ p = pfn_to_online_page(pfn);
++ if (!p) {
++ if (pfn_valid(pfn)) {
++ pgmap = get_dev_pagemap(pfn, NULL);
++ if (pgmap)
++ return memory_failure_dev_pagemap(pfn, flags,
++ pgmap);
++ }
+ pr_err("Memory failure: %#lx: memory outside kernel control\n",
+ pfn);
+ return -ENXIO;
+ }
+
+- pgmap = get_dev_pagemap(pfn, NULL);
+- if (pgmap)
+- return memory_failure_dev_pagemap(pfn, flags, pgmap);
+-
+- p = pfn_to_page(pfn);
+ if (PageHuge(p))
+ return memory_failure_hugetlb(pfn, flags);
+ if (TestSetPageHWPoison(p)) {
+diff --git a/mm/memremap.c b/mm/memremap.c
+index ed70c4e8e52a..31f1b2953c64 100644
+--- a/mm/memremap.c
++++ b/mm/memremap.c
+@@ -104,6 +104,7 @@ static void devm_memremap_pages_release(void *data)
+ struct dev_pagemap *pgmap = data;
+ struct device *dev = pgmap->dev;
+ struct resource *res = &pgmap->res;
++ struct page *first_page;
+ unsigned long pfn;
+ int nid;
+
+@@ -112,14 +113,16 @@ static void devm_memremap_pages_release(void *data)
+ put_page(pfn_to_page(pfn));
+ dev_pagemap_cleanup(pgmap);
+
++ /* make sure to access a memmap that was actually initialized */
++ first_page = pfn_to_page(pfn_first(pgmap));
++
+ /* pages are dead and unused, undo the arch mapping */
+- nid = page_to_nid(pfn_to_page(PHYS_PFN(res->start)));
++ nid = page_to_nid(first_page);
+
+ mem_hotplug_begin();
+ if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
+- pfn = PHYS_PFN(res->start);
+- __remove_pages(page_zone(pfn_to_page(pfn)), pfn,
+- PHYS_PFN(resource_size(res)), NULL);
++ __remove_pages(page_zone(first_page), PHYS_PFN(res->start),
++ PHYS_PFN(resource_size(res)), NULL);
+ } else {
+ arch_remove_memory(nid, res->start, resource_size(res),
+ pgmap_altmap(pgmap));
+diff --git a/mm/page_owner.c b/mm/page_owner.c
+index addcbb2ae4e4..8088ab29bc2d 100644
+--- a/mm/page_owner.c
++++ b/mm/page_owner.c
+@@ -258,7 +258,8 @@ void pagetypeinfo_showmixedcount_print(struct seq_file *m,
+ * not matter as the mixed block count will still be correct
+ */
+ for (; pfn < end_pfn; ) {
+- if (!pfn_valid(pfn)) {
++ page = pfn_to_online_page(pfn);
++ if (!page) {
+ pfn = ALIGN(pfn + 1, MAX_ORDER_NR_PAGES);
+ continue;
+ }
+@@ -266,13 +267,13 @@ void pagetypeinfo_showmixedcount_print(struct seq_file *m,
+ block_end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
+ block_end_pfn = min(block_end_pfn, end_pfn);
+
+- page = pfn_to_page(pfn);
+ pageblock_mt = get_pageblock_migratetype(page);
+
+ for (; pfn < block_end_pfn; pfn++) {
+ if (!pfn_valid_within(pfn))
+ continue;
+
++ /* The pageblock is online, no need to recheck. */
+ page = pfn_to_page(pfn);
+
+ if (page_zone(page) != zone)
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index 807490fe217a..7f492e53a7db 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -178,10 +178,13 @@ static int init_memcg_params(struct kmem_cache *s,
+
+ static void destroy_memcg_params(struct kmem_cache *s)
+ {
+- if (is_root_cache(s))
++ if (is_root_cache(s)) {
+ kvfree(rcu_access_pointer(s->memcg_params.memcg_caches));
+- else
++ } else {
++ mem_cgroup_put(s->memcg_params.memcg);
++ WRITE_ONCE(s->memcg_params.memcg, NULL);
+ percpu_ref_exit(&s->memcg_params.refcnt);
++ }
+ }
+
+ static void free_memcg_params(struct rcu_head *rcu)
+@@ -253,8 +256,6 @@ static void memcg_unlink_cache(struct kmem_cache *s)
+ } else {
+ list_del(&s->memcg_params.children_node);
+ list_del(&s->memcg_params.kmem_caches_node);
+- mem_cgroup_put(s->memcg_params.memcg);
+- WRITE_ONCE(s->memcg_params.memcg, NULL);
+ }
+ }
+ #else
+diff --git a/mm/slub.c b/mm/slub.c
+index 8834563cdb4b..dac41cf0b94a 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -4836,7 +4836,17 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
+ }
+ }
+
+- get_online_mems();
++ /*
++ * It is impossible to take "mem_hotplug_lock" here with "kernfs_mutex"
++ * already held which will conflict with an existing lock order:
++ *
++ * mem_hotplug_lock->slab_mutex->kernfs_mutex
++ *
++ * We don't really need mem_hotplug_lock (to hold off
++ * slab_mem_going_offline_callback) here because slab's memory hot
++ * unplug code doesn't destroy the kmem_cache->node[] data.
++ */
++
+ #ifdef CONFIG_SLUB_DEBUG
+ if (flags & SO_ALL) {
+ struct kmem_cache_node *n;
+@@ -4877,7 +4887,6 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
+ x += sprintf(buf + x, " N%d=%lu",
+ node, nodes[node]);
+ #endif
+- put_online_mems();
+ kfree(nodes);
+ return x + sprintf(buf + x, "\n");
+ }
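The comment added above describes a classic lock-order inversion. A minimal
userspace illustration of the same shape, with pthread mutexes standing in for
the kernel locks (names are placeholders, not kernel APIs):

    #include <pthread.h>

    static pthread_mutex_t mem_hotplug = PTHREAD_MUTEX_INITIALIZER; /* mem_hotplug_lock */
    static pthread_mutex_t kernfs = PTHREAD_MUTEX_INITIALIZER;      /* kernfs_mutex */

    /* hotplug side: mem_hotplug_lock -> ... -> kernfs_mutex */
    static void *hotplug_path(void *unused)
    {
            pthread_mutex_lock(&mem_hotplug);
            pthread_mutex_lock(&kernfs);
            pthread_mutex_unlock(&kernfs);
            pthread_mutex_unlock(&mem_hotplug);
            return NULL;
    }

    /* sysfs show side before the fix: kernfs_mutex -> mem_hotplug_lock */
    static void *show_path(void *unused)
    {
            pthread_mutex_lock(&kernfs);
            pthread_mutex_lock(&mem_hotplug);       /* ABBA: can block forever */
            pthread_mutex_unlock(&mem_hotplug);
            pthread_mutex_unlock(&kernfs);
            return NULL;
    }

    int main(void)
    {
            pthread_t a, b;
            pthread_create(&a, NULL, hotplug_path, NULL);
            pthread_create(&b, NULL, show_path, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);  /* may never return once both hold their first lock */
            return 0;
    }

Dropping get_online_mems() removes the second ordering entirely, which is safe
here because, as the comment notes, slab's hot-unplug path does not destroy
the kmem_cache->node[] data this reader walks.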
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index a6c5d0b28321..8d03013b6c59 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -354,12 +354,13 @@ unsigned long zone_reclaimable_pages(struct zone *zone)
+ */
+ unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone_idx)
+ {
+- unsigned long lru_size;
++ unsigned long lru_size = 0;
+ int zid;
+
+- if (!mem_cgroup_disabled())
+- lru_size = lruvec_page_state_local(lruvec, NR_LRU_BASE + lru);
+- else
++ if (!mem_cgroup_disabled()) {
++ for (zid = 0; zid < MAX_NR_ZONES; zid++)
++ lru_size += mem_cgroup_get_zone_lru_size(lruvec, lru, zid);
++ } else
+ lru_size = node_page_state(lruvec_pgdat(lruvec), NR_LRU_BASE + lru);
+
+ for (zid = zone_idx + 1; zid < MAX_NR_ZONES; zid++) {
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 982d8d12830e..d4a47c44daf0 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -5465,12 +5465,14 @@ static void skb_mod_eth_type(struct sk_buff *skb, struct ethhdr *hdr,
+ * @skb: buffer
+ * @mpls_lse: MPLS label stack entry to push
+ * @mpls_proto: ethertype of the new MPLS header (expects 0x8847 or 0x8848)
++ * @mac_len: length of the MAC header
+ *
+ * Expects skb->data at mac header.
+ *
+ * Returns 0 on success, -errno otherwise.
+ */
+-int skb_mpls_push(struct sk_buff *skb, __be32 mpls_lse, __be16 mpls_proto)
++int skb_mpls_push(struct sk_buff *skb, __be32 mpls_lse, __be16 mpls_proto,
++ int mac_len)
+ {
+ struct mpls_shim_hdr *lse;
+ int err;
+@@ -5487,15 +5489,15 @@ int skb_mpls_push(struct sk_buff *skb, __be32 mpls_lse, __be16 mpls_proto)
+ return err;
+
+ if (!skb->inner_protocol) {
+- skb_set_inner_network_header(skb, skb->mac_len);
++ skb_set_inner_network_header(skb, mac_len);
+ skb_set_inner_protocol(skb, skb->protocol);
+ }
+
+ skb_push(skb, MPLS_HLEN);
+ memmove(skb_mac_header(skb) - MPLS_HLEN, skb_mac_header(skb),
+- skb->mac_len);
++ mac_len);
+ skb_reset_mac_header(skb);
+- skb_set_network_header(skb, skb->mac_len);
++ skb_set_network_header(skb, mac_len);
+
+ lse = mpls_hdr(skb);
+ lse->label_stack_entry = mpls_lse;
+@@ -5514,29 +5516,30 @@ EXPORT_SYMBOL_GPL(skb_mpls_push);
+ *
+ * @skb: buffer
+ * @next_proto: ethertype of header after popped MPLS header
++ * @mac_len: length of the MAC header
+ *
+ * Expects skb->data at mac header.
+ *
+ * Returns 0 on success, -errno otherwise.
+ */
+-int skb_mpls_pop(struct sk_buff *skb, __be16 next_proto)
++int skb_mpls_pop(struct sk_buff *skb, __be16 next_proto, int mac_len)
+ {
+ int err;
+
+ if (unlikely(!eth_p_mpls(skb->protocol)))
+- return -EINVAL;
++ return 0;
+
+- err = skb_ensure_writable(skb, skb->mac_len + MPLS_HLEN);
++ err = skb_ensure_writable(skb, mac_len + MPLS_HLEN);
+ if (unlikely(err))
+ return err;
+
+ skb_postpull_rcsum(skb, mpls_hdr(skb), MPLS_HLEN);
+ memmove(skb_mac_header(skb) + MPLS_HLEN, skb_mac_header(skb),
+- skb->mac_len);
++ mac_len);
+
+ __skb_pull(skb, MPLS_HLEN);
+ skb_reset_mac_header(skb);
+- skb_set_network_header(skb, skb->mac_len);
++ skb_set_network_header(skb, mac_len);
+
+ if (skb->dev && skb->dev->type == ARPHRD_ETHER) {
+ struct ethhdr *hdr;
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 14654876127e..621f83434b24 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1482,7 +1482,7 @@ static bool rt_cache_route(struct fib_nh_common *nhc, struct rtable *rt)
+ prev = cmpxchg(p, orig, rt);
+ if (prev == orig) {
+ if (orig) {
+- dst_dev_put(&orig->dst);
++ rt_add_uncached_list(orig);
+ dst_release(&orig->dst);
+ }
+ } else {
+@@ -2470,14 +2470,17 @@ struct rtable *ip_route_output_key_hash_rcu(struct net *net, struct flowi4 *fl4,
+ int orig_oif = fl4->flowi4_oif;
+ unsigned int flags = 0;
+ struct rtable *rth;
+- int err = -ENETUNREACH;
++ int err;
+
+ if (fl4->saddr) {
+- rth = ERR_PTR(-EINVAL);
+ if (ipv4_is_multicast(fl4->saddr) ||
+ ipv4_is_lbcast(fl4->saddr) ||
+- ipv4_is_zeronet(fl4->saddr))
++ ipv4_is_zeronet(fl4->saddr)) {
++ rth = ERR_PTR(-EINVAL);
+ goto out;
++ }
++
++ rth = ERR_PTR(-ENETUNREACH);
+
+ /* I removed check for oif == dev_out->oif here.
+ It was wrong for two reasons:
+diff --git a/net/ipv6/ip6_input.c b/net/ipv6/ip6_input.c
+index a593aaf25748..2bb0b66181a7 100644
+--- a/net/ipv6/ip6_input.c
++++ b/net/ipv6/ip6_input.c
+@@ -80,8 +80,10 @@ static void ip6_sublist_rcv_finish(struct list_head *head)
+ {
+ struct sk_buff *skb, *next;
+
+- list_for_each_entry_safe(skb, next, head, list)
++ list_for_each_entry_safe(skb, next, head, list) {
++ skb_list_del_init(skb);
+ dst_input(skb);
++ }
+ }
+
+ static void ip6_list_rcv_finish(struct net *net, struct sock *sk,
+diff --git a/net/mac80211/debugfs_netdev.c b/net/mac80211/debugfs_netdev.c
+index b1438fd4d876..64b544ae9966 100644
+--- a/net/mac80211/debugfs_netdev.c
++++ b/net/mac80211/debugfs_netdev.c
+@@ -487,9 +487,14 @@ static ssize_t ieee80211_if_fmt_aqm(
+ const struct ieee80211_sub_if_data *sdata, char *buf, int buflen)
+ {
+ struct ieee80211_local *local = sdata->local;
+- struct txq_info *txqi = to_txq_info(sdata->vif.txq);
++ struct txq_info *txqi;
+ int len;
+
++ if (!sdata->vif.txq)
++ return 0;
++
++ txqi = to_txq_info(sdata->vif.txq);
++
+ spin_lock_bh(&local->fq.lock);
+ rcu_read_lock();
+
+@@ -658,7 +663,9 @@ static void add_common_files(struct ieee80211_sub_if_data *sdata)
+ DEBUGFS_ADD(rc_rateidx_vht_mcs_mask_5ghz);
+ DEBUGFS_ADD(hw_queues);
+
+- if (sdata->local->ops->wake_tx_queue)
++ if (sdata->local->ops->wake_tx_queue &&
++ sdata->vif.type != NL80211_IFTYPE_P2P_DEVICE &&
++ sdata->vif.type != NL80211_IFTYPE_NAN)
+ DEBUGFS_ADD(aqm);
+ }
+
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 4c888dc9bd81..a826f9ccc03f 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -2629,7 +2629,8 @@ struct sk_buff *ieee80211_ap_probereq_get(struct ieee80211_hw *hw,
+
+ rcu_read_lock();
+ ssid = ieee80211_bss_get_ie(cbss, WLAN_EID_SSID);
+- if (WARN_ON_ONCE(ssid == NULL))
++ if (WARN_ONCE(!ssid || ssid[1] > IEEE80211_MAX_SSID_LEN,
++ "invalid SSID element (len=%d)", ssid ? ssid[1] : -1))
+ ssid_len = 0;
+ else
+ ssid_len = ssid[1];
+@@ -5227,7 +5228,7 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata,
+
+ rcu_read_lock();
+ ssidie = ieee80211_bss_get_ie(req->bss, WLAN_EID_SSID);
+- if (!ssidie) {
++ if (!ssidie || ssidie[1] > sizeof(assoc_data->ssid)) {
+ rcu_read_unlock();
+ kfree(assoc_data);
+ return -EINVAL;
+diff --git a/net/netfilter/nft_connlimit.c b/net/netfilter/nft_connlimit.c
+index af1497ab9464..69d6173f91e2 100644
+--- a/net/netfilter/nft_connlimit.c
++++ b/net/netfilter/nft_connlimit.c
+@@ -218,8 +218,13 @@ static void nft_connlimit_destroy_clone(const struct nft_ctx *ctx,
+ static bool nft_connlimit_gc(struct net *net, const struct nft_expr *expr)
+ {
+ struct nft_connlimit *priv = nft_expr_priv(expr);
++ bool ret;
+
+- return nf_conncount_gc_list(net, &priv->list);
++ local_bh_disable();
++ ret = nf_conncount_gc_list(net, &priv->list);
++ local_bh_enable();
++
++ return ret;
+ }
+
+ static struct nft_expr_type nft_connlimit_type;
+diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
+index 3572e11b6f21..1c77f520f474 100644
+--- a/net/openvswitch/actions.c
++++ b/net/openvswitch/actions.c
+@@ -165,7 +165,8 @@ static int push_mpls(struct sk_buff *skb, struct sw_flow_key *key,
+ {
+ int err;
+
+- err = skb_mpls_push(skb, mpls->mpls_lse, mpls->mpls_ethertype);
++ err = skb_mpls_push(skb, mpls->mpls_lse, mpls->mpls_ethertype,
++ skb->mac_len);
+ if (err)
+ return err;
+
+@@ -178,7 +179,7 @@ static int pop_mpls(struct sk_buff *skb, struct sw_flow_key *key,
+ {
+ int err;
+
+- err = skb_mpls_pop(skb, ethertype);
++ err = skb_mpls_pop(skb, ethertype, skb->mac_len);
+ if (err)
+ return err;
+
+diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c
+index c97ebdc043e4..48f67a9b1037 100644
+--- a/net/rxrpc/peer_event.c
++++ b/net/rxrpc/peer_event.c
+@@ -147,10 +147,16 @@ void rxrpc_error_report(struct sock *sk)
+ {
+ struct sock_exterr_skb *serr;
+ struct sockaddr_rxrpc srx;
+- struct rxrpc_local *local = sk->sk_user_data;
++ struct rxrpc_local *local;
+ struct rxrpc_peer *peer;
+ struct sk_buff *skb;
+
++ rcu_read_lock();
++ local = rcu_dereference_sk_user_data(sk);
++ if (unlikely(!local)) {
++ rcu_read_unlock();
++ return;
++ }
+ _enter("%p{%d}", sk, local->debug_id);
+
+ /* Clear the outstanding error value on the socket so that it doesn't
+@@ -160,6 +166,7 @@ void rxrpc_error_report(struct sock *sk)
+
+ skb = sock_dequeue_err_skb(sk);
+ if (!skb) {
++ rcu_read_unlock();
+ _leave("UDP socket errqueue empty");
+ return;
+ }
+@@ -167,11 +174,11 @@ void rxrpc_error_report(struct sock *sk)
+ serr = SKB_EXT_ERR(skb);
+ if (!skb->len && serr->ee.ee_origin == SO_EE_ORIGIN_TIMESTAMPING) {
+ _leave("UDP empty message");
++ rcu_read_unlock();
+ rxrpc_free_skb(skb, rxrpc_skb_freed);
+ return;
+ }
+
+- rcu_read_lock();
+ peer = rxrpc_lookup_peer_icmp_rcu(local, skb, &srx);
+ if (peer && !rxrpc_get_peer_maybe(peer))
+ peer = NULL;
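The rxrpc change above is an instance of a general pattern: sk->sk_user_data
may be cleared (and the object freed after a grace period) while an error
report is still in flight, so it must only be read via
rcu_dereference_sk_user_data() under rcu_read_lock(), and every early exit has
to drop the read lock. A stripped-down sketch of that pattern (details of the
real handler omitted):

    rcu_read_lock();
    local = rcu_dereference_sk_user_data(sk);
    if (!local) {           /* socket already being torn down */
            rcu_read_unlock();
            return;
    }
    /* ... use 'local' only while the read lock is held ... */
    rcu_read_unlock();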
+diff --git a/net/sched/act_api.c b/net/sched/act_api.c
+index 2558f00f6b3e..69d4676a402f 100644
+--- a/net/sched/act_api.c
++++ b/net/sched/act_api.c
+@@ -832,8 +832,7 @@ static struct tc_cookie *nla_memdup_cookie(struct nlattr **tb)
+ }
+
+ static const struct nla_policy tcf_action_policy[TCA_ACT_MAX + 1] = {
+- [TCA_ACT_KIND] = { .type = NLA_NUL_STRING,
+- .len = IFNAMSIZ - 1 },
++ [TCA_ACT_KIND] = { .type = NLA_STRING },
+ [TCA_ACT_INDEX] = { .type = NLA_U32 },
+ [TCA_ACT_COOKIE] = { .type = NLA_BINARY,
+ .len = TC_COOKIE_MAX_SIZE },
+@@ -865,8 +864,10 @@ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
+ NL_SET_ERR_MSG(extack, "TC action kind must be specified");
+ goto err_out;
+ }
+- nla_strlcpy(act_name, kind, IFNAMSIZ);
+-
++ if (nla_strlcpy(act_name, kind, IFNAMSIZ) >= IFNAMSIZ) {
++ NL_SET_ERR_MSG(extack, "TC action name too long");
++ goto err_out;
++ }
+ if (tb[TCA_ACT_COOKIE]) {
+ cookie = nla_memdup_cookie(tb);
+ if (!cookie) {
+@@ -1352,11 +1353,16 @@ static int tcf_action_add(struct net *net, struct nlattr *nla,
+ struct netlink_ext_ack *extack)
+ {
+ size_t attr_size = 0;
+- int ret = 0;
++ int loop, ret;
+ struct tc_action *actions[TCA_ACT_MAX_PRIO] = {};
+
+- ret = tcf_action_init(net, NULL, nla, NULL, NULL, ovr, 0, actions,
+- &attr_size, true, extack);
++ for (loop = 0; loop < 10; loop++) {
++ ret = tcf_action_init(net, NULL, nla, NULL, NULL, ovr, 0,
++ actions, &attr_size, true, extack);
++ if (ret != -EAGAIN)
++ break;
++ }
++
+ if (ret < 0)
+ return ret;
+ ret = tcf_add_notify(net, n, actions, portid, attr_size, extack);
+@@ -1406,11 +1412,8 @@ static int tc_ctl_action(struct sk_buff *skb, struct nlmsghdr *n,
+ */
+ if (n->nlmsg_flags & NLM_F_REPLACE)
+ ovr = 1;
+-replay:
+ ret = tcf_action_add(net, tca[TCA_ACT_TAB], n, portid, ovr,
+ extack);
+- if (ret == -EAGAIN)
+- goto replay;
+ break;
+ case RTM_DELACTION:
+ ret = tca_action_gd(net, tca[TCA_ACT_TAB], n,
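Replacing the handler's unbounded "goto replay" with a loop capped at ten
attempts turns a potential livelock on persistent -EAGAIN into a bounded
retry. The same shape in self-contained C (do_init() is a stub standing in
for tcf_action_init()):

    #include <errno.h>
    #include <stdio.h>

    static int calls;
    static int do_init(void)        /* stub: succeed on the third attempt */
    {
            return ++calls < 3 ? -EAGAIN : 0;
    }

    int main(void)
    {
            int loop, ret = -EAGAIN;

            for (loop = 0; loop < 10; loop++) {
                    ret = do_init();
                    if (ret != -EAGAIN)
                            break;  /* success or a hard error */
            }
            printf("ret=%d after %d call(s)\n", ret, calls);
            return ret < 0 ? 1 : 0;
    }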
+diff --git a/net/sched/act_mpls.c b/net/sched/act_mpls.c
+index e168df0e008a..4cf6c553bb0b 100644
+--- a/net/sched/act_mpls.c
++++ b/net/sched/act_mpls.c
+@@ -55,7 +55,7 @@ static int tcf_mpls_act(struct sk_buff *skb, const struct tc_action *a,
+ struct tcf_mpls *m = to_mpls(a);
+ struct tcf_mpls_params *p;
+ __be32 new_lse;
+- int ret;
++ int ret, mac_len;
+
+ tcf_lastuse_update(&m->tcf_tm);
+ bstats_cpu_update(this_cpu_ptr(m->common.cpu_bstats), skb);
+@@ -63,8 +63,12 @@ static int tcf_mpls_act(struct sk_buff *skb, const struct tc_action *a,
+ /* Ensure 'data' points at mac_header prior calling mpls manipulating
+ * functions.
+ */
+- if (skb_at_tc_ingress(skb))
++ if (skb_at_tc_ingress(skb)) {
+ skb_push_rcsum(skb, skb->mac_len);
++ mac_len = skb->mac_len;
++ } else {
++ mac_len = skb_network_header(skb) - skb_mac_header(skb);
++ }
+
+ ret = READ_ONCE(m->tcf_action);
+
+@@ -72,12 +76,12 @@ static int tcf_mpls_act(struct sk_buff *skb, const struct tc_action *a,
+
+ switch (p->tcfm_action) {
+ case TCA_MPLS_ACT_POP:
+- if (skb_mpls_pop(skb, p->tcfm_proto))
++ if (skb_mpls_pop(skb, p->tcfm_proto, mac_len))
+ goto drop;
+ break;
+ case TCA_MPLS_ACT_PUSH:
+ new_lse = tcf_mpls_get_lse(NULL, p, !eth_p_mpls(skb->protocol));
+- if (skb_mpls_push(skb, new_lse, p->tcfm_proto))
++ if (skb_mpls_push(skb, new_lse, p->tcfm_proto, mac_len))
+ goto drop;
+ break;
+ case TCA_MPLS_ACT_MODIFY:
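The ingress/egress split above exists because skb->mac_len is only trustworthy
at tc ingress; at egress the actual MAC header span may include headers (e.g.
VLAN tags) that mac_len does not reflect, so the length is recomputed from the
header offsets instead. The key lines, annotated (same kernel helpers as the
hunk):

    if (skb_at_tc_ingress(skb)) {
            skb_push_rcsum(skb, skb->mac_len);  /* point data back at the mac header */
            mac_len = skb->mac_len;             /* trusted at ingress */
    } else {
            /* egress: measure the real header span rather than trusting mac_len */
            mac_len = skb_network_header(skb) - skb_mac_header(skb);
    }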
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 9aef93300f1c..6b12883e04b8 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -160,11 +160,22 @@ static inline u32 tcf_auto_prio(struct tcf_proto *tp)
+ return TC_H_MAJ(first);
+ }
+
++static bool tcf_proto_check_kind(struct nlattr *kind, char *name)
++{
++ if (kind)
++ return nla_strlcpy(name, kind, IFNAMSIZ) >= IFNAMSIZ;
++ memset(name, 0, IFNAMSIZ);
++ return false;
++}
++
+ static bool tcf_proto_is_unlocked(const char *kind)
+ {
+ const struct tcf_proto_ops *ops;
+ bool ret;
+
++ if (strlen(kind) == 0)
++ return false;
++
+ ops = tcf_proto_lookup_ops(kind, false, NULL);
+ /* On error return false to take rtnl lock. Proto lookup/create
+ * functions will perform lookup again and properly handle errors.
+@@ -1976,6 +1987,7 @@ static int tc_new_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
+ {
+ struct net *net = sock_net(skb->sk);
+ struct nlattr *tca[TCA_MAX + 1];
++ char name[IFNAMSIZ];
+ struct tcmsg *t;
+ u32 protocol;
+ u32 prio;
+@@ -2032,13 +2044,19 @@ replay:
+ if (err)
+ return err;
+
++ if (tcf_proto_check_kind(tca[TCA_KIND], name)) {
++ NL_SET_ERR_MSG(extack, "Specified TC filter name too long");
++ err = -EINVAL;
++ goto errout;
++ }
++
+ /* Take rtnl mutex if rtnl_held was set to true on previous iteration,
+ * block is shared (no qdisc found), qdisc is not unlocked, classifier
+ * type is not specified, classifier is not unlocked.
+ */
+ if (rtnl_held ||
+ (q && !(q->ops->cl_ops->flags & QDISC_CLASS_OPS_DOIT_UNLOCKED)) ||
+- !tca[TCA_KIND] || !tcf_proto_is_unlocked(nla_data(tca[TCA_KIND]))) {
++ !tcf_proto_is_unlocked(name)) {
+ rtnl_held = true;
+ rtnl_lock();
+ }
+@@ -2196,6 +2214,7 @@ static int tc_del_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
+ {
+ struct net *net = sock_net(skb->sk);
+ struct nlattr *tca[TCA_MAX + 1];
++ char name[IFNAMSIZ];
+ struct tcmsg *t;
+ u32 protocol;
+ u32 prio;
+@@ -2235,13 +2254,18 @@ static int tc_del_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
+ if (err)
+ return err;
+
++ if (tcf_proto_check_kind(tca[TCA_KIND], name)) {
++ NL_SET_ERR_MSG(extack, "Specified TC filter name too long");
++ err = -EINVAL;
++ goto errout;
++ }
+ /* Take rtnl mutex if flushing whole chain, block is shared (no qdisc
+ * found), qdisc is not unlocked, classifier type is not specified,
+ * classifier is not unlocked.
+ */
+ if (!prio ||
+ (q && !(q->ops->cl_ops->flags & QDISC_CLASS_OPS_DOIT_UNLOCKED)) ||
+- !tca[TCA_KIND] || !tcf_proto_is_unlocked(nla_data(tca[TCA_KIND]))) {
++ !tcf_proto_is_unlocked(name)) {
+ rtnl_held = true;
+ rtnl_lock();
+ }
+@@ -2349,6 +2373,7 @@ static int tc_get_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
+ {
+ struct net *net = sock_net(skb->sk);
+ struct nlattr *tca[TCA_MAX + 1];
++ char name[IFNAMSIZ];
+ struct tcmsg *t;
+ u32 protocol;
+ u32 prio;
+@@ -2385,12 +2410,17 @@ static int tc_get_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
+ if (err)
+ return err;
+
++ if (tcf_proto_check_kind(tca[TCA_KIND], name)) {
++ NL_SET_ERR_MSG(extack, "Specified TC filter name too long");
++ err = -EINVAL;
++ goto errout;
++ }
+ /* Take rtnl mutex if block is shared (no qdisc found), qdisc is not
+ * unlocked, classifier type is not specified, classifier is not
+ * unlocked.
+ */
+ if ((q && !(q->ops->cl_ops->flags & QDISC_CLASS_OPS_DOIT_UNLOCKED)) ||
+- !tca[TCA_KIND] || !tcf_proto_is_unlocked(nla_data(tca[TCA_KIND]))) {
++ !tcf_proto_is_unlocked(name)) {
+ rtnl_held = true;
+ rtnl_lock();
+ }
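tcf_proto_check_kind() relies on the strlcpy return-value convention:
nla_strlcpy() returns the length of the source string, so a result >= IFNAMSIZ
means the kind did not fit and was truncated. A self-contained userspace
analogue of the check (my_strlcpy() is a local stand-in for nla_strlcpy()):

    #include <stdbool.h>
    #include <string.h>

    #define IFNAMSIZ 16

    static size_t my_strlcpy(char *dst, const char *src, size_t size)
    {
            size_t len = strlen(src);       /* length it "wanted" to copy */

            if (size) {
                    size_t n = len >= size ? size - 1 : len;
                    memcpy(dst, src, n);
                    dst[n] = '\0';
            }
            return len;
    }

    static bool kind_too_long(const char *kind, char name[IFNAMSIZ])
    {
            if (kind)
                    return my_strlcpy(name, kind, IFNAMSIZ) >= IFNAMSIZ;
            memset(name, 0, IFNAMSIZ);      /* absent attribute: empty name, no error */
            return false;
    }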
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 81d58b280612..1047825d9f48 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1390,8 +1390,7 @@ check_loop_fn(struct Qdisc *q, unsigned long cl, struct qdisc_walker *w)
+ }
+
+ const struct nla_policy rtm_tca_policy[TCA_MAX + 1] = {
+- [TCA_KIND] = { .type = NLA_NUL_STRING,
+- .len = IFNAMSIZ - 1 },
++ [TCA_KIND] = { .type = NLA_STRING },
+ [TCA_RATE] = { .type = NLA_BINARY,
+ .len = sizeof(struct tc_estimator) },
+ [TCA_STAB] = { .type = NLA_NESTED },
+diff --git a/net/sched/sch_etf.c b/net/sched/sch_etf.c
+index cebfb65d8556..b1da5589a0c6 100644
+--- a/net/sched/sch_etf.c
++++ b/net/sched/sch_etf.c
+@@ -177,7 +177,7 @@ static int etf_enqueue_timesortedlist(struct sk_buff *nskb, struct Qdisc *sch,
+
+ parent = *p;
+ skb = rb_to_skb(parent);
+- if (ktime_after(txtime, skb->tstamp)) {
++ if (ktime_compare(txtime, skb->tstamp) >= 0) {
+ p = &parent->rb_right;
+ leftmost = false;
+ } else {
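The sch_etf change swaps a strict comparison for ">= 0" so that a packet whose
txtime equals an existing node's timestamp is inserted to the right of it,
preserving FIFO order among equal deadlines. With the old ktime_after()
(strict ">"), an equal key compared false and went left, jumping ahead of the
earlier-enqueued packet. The predicate in isolation (plain integers standing
in for ktime_t):

    #include <stdbool.h>
    #include <stdint.h>

    /* true: descend right; equal keys now sort behind existing entries */
    static bool go_right(int64_t new_txtime, int64_t node_tstamp)
    {
            return new_txtime >= node_tstamp;   /* was: new_txtime > node_tstamp */
    }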
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index b083d4e66230..8fd7b0e6ce9f 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -9353,7 +9353,7 @@ struct proto sctp_prot = {
+ .backlog_rcv = sctp_backlog_rcv,
+ .hash = sctp_hash,
+ .unhash = sctp_unhash,
+- .get_port = sctp_get_port,
++ .no_autobind = true,
+ .obj_size = sizeof(struct sctp_sock),
+ .useroffset = offsetof(struct sctp_sock, subscribe),
+ .usersize = offsetof(struct sctp_sock, initmsg) -
+@@ -9395,7 +9395,7 @@ struct proto sctpv6_prot = {
+ .backlog_rcv = sctp_backlog_rcv,
+ .hash = sctp_hash,
+ .unhash = sctp_unhash,
+- .get_port = sctp_get_port,
++ .no_autobind = true,
+ .obj_size = sizeof(struct sctp6_sock),
+ .useroffset = offsetof(struct sctp6_sock, sctp.subscribe),
+ .usersize = offsetof(struct sctp6_sock, sctp.initmsg) -
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index f03459ddc840..c2ce582ea143 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -6184,6 +6184,9 @@ static int nl80211_del_mpath(struct sk_buff *skb, struct genl_info *info)
+ if (!rdev->ops->del_mpath)
+ return -EOPNOTSUPP;
+
++ if (dev->ieee80211_ptr->iftype != NL80211_IFTYPE_MESH_POINT)
++ return -EOPNOTSUPP;
++
+ return rdev_del_mpath(rdev, dev, dst);
+ }
+
+diff --git a/net/wireless/wext-sme.c b/net/wireless/wext-sme.c
+index c67d7a82ab13..73fd0eae08ca 100644
+--- a/net/wireless/wext-sme.c
++++ b/net/wireless/wext-sme.c
+@@ -202,6 +202,7 @@ int cfg80211_mgd_wext_giwessid(struct net_device *dev,
+ struct iw_point *data, char *ssid)
+ {
+ struct wireless_dev *wdev = dev->ieee80211_ptr;
++ int ret = 0;
+
+ /* call only for station! */
+ if (WARN_ON(wdev->iftype != NL80211_IFTYPE_STATION))
+@@ -219,7 +220,10 @@ int cfg80211_mgd_wext_giwessid(struct net_device *dev,
+ if (ie) {
+ data->flags = 1;
+ data->length = ie[1];
+- memcpy(ssid, ie + 2, data->length);
++ if (data->length > IW_ESSID_MAX_SIZE)
++ ret = -EINVAL;
++ else
++ memcpy(ssid, ie + 2, data->length);
+ }
+ rcu_read_unlock();
+ } else if (wdev->wext.connect.ssid && wdev->wext.connect.ssid_len) {
+@@ -229,7 +233,7 @@ int cfg80211_mgd_wext_giwessid(struct net_device *dev,
+ }
+ wdev_unlock(wdev);
+
+- return 0;
++ return ret;
+ }
+
+ int cfg80211_mgd_wext_siwap(struct net_device *dev,
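The wext-sme fix above is the standard "validate an untrusted length before
memcpy into a fixed buffer" pattern: the IE length byte arrives from the air
and must be checked against the destination size before copying. A
self-contained analogue (buffer size per the real IW_ESSID_MAX_SIZE of 32;
copy_essid() is illustrative, not a kernel function):

    #include <errno.h>
    #include <stddef.h>
    #include <string.h>

    #define IW_ESSID_MAX_SIZE 32

    /* ie[0] = element id, ie[1] = length, ie + 2 = payload */
    static int copy_essid(char ssid[IW_ESSID_MAX_SIZE], const unsigned char *ie)
    {
            size_t len = ie[1];             /* untrusted, peer-controlled */

            if (len > IW_ESSID_MAX_SIZE)
                    return -EINVAL;         /* reject instead of overflowing */
            memcpy(ssid, ie + 2, len);
            return 0;
    }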
+diff --git a/scripts/namespace.pl b/scripts/namespace.pl
+index 6135574a6f39..1da7bca201a4 100755
+--- a/scripts/namespace.pl
++++ b/scripts/namespace.pl
+@@ -65,13 +65,14 @@
+ use warnings;
+ use strict;
+ use File::Find;
++use File::Spec;
+
+ my $nm = ($ENV{'NM'} || "nm") . " -p";
+ my $objdump = ($ENV{'OBJDUMP'} || "objdump") . " -s -j .comment";
+-my $srctree = "";
+-my $objtree = "";
+-$srctree = "$ENV{'srctree'}/" if (exists($ENV{'srctree'}));
+-$objtree = "$ENV{'objtree'}/" if (exists($ENV{'objtree'}));
++my $srctree = File::Spec->curdir();
++my $objtree = File::Spec->curdir();
++$srctree = File::Spec->rel2abs($ENV{'srctree'}) if (exists($ENV{'srctree'}));
++$objtree = File::Spec->rel2abs($ENV{'objtree'}) if (exists($ENV{'objtree'}));
+
+ if ($#ARGV != -1) {
+ print STDERR "usage: $0 takes no parameters\n";
+@@ -231,9 +232,9 @@ sub do_nm
+ }
+ ($source = $basename) =~ s/\.o$//;
+ if (-e "$source.c" || -e "$source.S") {
+- $source = "$objtree$File::Find::dir/$source";
++ $source = File::Spec->catfile($objtree, $File::Find::dir, $source)
+ } else {
+- $source = "$srctree$File::Find::dir/$source";
++ $source = File::Spec->catfile($srctree, $File::Find::dir, $source)
+ }
+ if (! -e "$source.c" && ! -e "$source.S") {
+ # No obvious source, exclude the object if it is conglomerate
+diff --git a/security/safesetid/securityfs.c b/security/safesetid/securityfs.c
+index d568e17dd773..74a13d432ed8 100644
+--- a/security/safesetid/securityfs.c
++++ b/security/safesetid/securityfs.c
+@@ -187,7 +187,8 @@ out_free_rule:
+ out_free_buf:
+ kfree(buf);
+ out_free_pol:
+- release_ruleset(pol);
++ if (pol)
++ release_ruleset(pol);
+ return err;
+ }
+
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 36240def9bf5..00796c7727ea 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -3307,6 +3307,8 @@ static int patch_nvhdmi(struct hda_codec *codec)
+ nvhdmi_chmap_cea_alloc_validate_get_type;
+ spec->chmap.ops.chmap_validate = nvhdmi_chmap_validate;
+
++ codec->link_down_at_suspend = 1;
++
+ return 0;
+ }
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 36aee8ad2054..26249c607f2c 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -393,6 +393,7 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ case 0x10ec0700:
+ case 0x10ec0701:
+ case 0x10ec0703:
++ case 0x10ec0711:
+ alc_update_coef_idx(codec, 0x10, 1<<15, 0);
+ break;
+ case 0x10ec0662:
+@@ -5867,6 +5868,7 @@ enum {
+ ALC225_FIXUP_WYSE_AUTO_MUTE,
+ ALC225_FIXUP_WYSE_DISABLE_MIC_VREF,
+ ALC286_FIXUP_ACER_AIO_HEADSET_MIC,
++ ALC256_FIXUP_ASUS_HEADSET_MIC,
+ ALC256_FIXUP_ASUS_MIC_NO_PRESENCE,
+ ALC299_FIXUP_PREDATOR_SPK,
+ ALC294_FIXUP_ASUS_INTSPK_HEADSET_MIC,
+@@ -6901,6 +6903,15 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE
+ },
++ [ALC256_FIXUP_ASUS_HEADSET_MIC] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x19, 0x03a11020 }, /* headset mic with jack detect */
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC256_FIXUP_ASUS_HEADSET_MODE
++ },
+ [ALC256_FIXUP_ASUS_MIC_NO_PRESENCE] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+@@ -7097,6 +7108,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
+ SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_INTSPK_HEADSET_MIC),
++ SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
+ SND_PCI_QUIRK(0x1043, 0x1a30, "ASUS X705UD", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1b13, "Asus U41SV", ALC269_FIXUP_INV_DMIC),
+@@ -7965,6 +7977,7 @@ static int patch_alc269(struct hda_codec *codec)
+ case 0x10ec0700:
+ case 0x10ec0701:
+ case 0x10ec0703:
++ case 0x10ec0711:
+ spec->codec_variant = ALC269_TYPE_ALC700;
+ spec->gen.mixer_nid = 0; /* ALC700 does not have any loopback mixer path */
+ alc_update_coef_idx(codec, 0x4a, 1 << 15, 0); /* Combo jack auto trigger control */
+@@ -9105,6 +9118,7 @@ static const struct hda_device_id snd_hda_id_realtek[] = {
+ HDA_CODEC_ENTRY(0x10ec0700, "ALC700", patch_alc269),
+ HDA_CODEC_ENTRY(0x10ec0701, "ALC701", patch_alc269),
+ HDA_CODEC_ENTRY(0x10ec0703, "ALC703", patch_alc269),
++ HDA_CODEC_ENTRY(0x10ec0711, "ALC711", patch_alc269),
+ HDA_CODEC_ENTRY(0x10ec0867, "ALC891", patch_alc662),
+ HDA_CODEC_ENTRY(0x10ec0880, "ALC880", patch_alc880),
+ HDA_CODEC_ENTRY(0x10ec0882, "ALC882", patch_alc882),
+diff --git a/sound/soc/sh/rcar/core.c b/sound/soc/sh/rcar/core.c
+index 56e8dae9a15c..217f2aa06139 100644
+--- a/sound/soc/sh/rcar/core.c
++++ b/sound/soc/sh/rcar/core.c
+@@ -761,6 +761,7 @@ static int rsnd_soc_dai_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ }
+
+ /* set format */
++ rdai->bit_clk_inv = 0;
+ switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
+ case SND_SOC_DAIFMT_I2S:
+ rdai->sys_delay = 0;
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index 33cd26763c0e..ff5ab24f3bd1 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -348,6 +348,9 @@ static int set_sync_ep_implicit_fb_quirk(struct snd_usb_substream *subs,
+ ep = 0x84;
+ ifnum = 0;
+ goto add_sync_ep_from_ifnum;
++ case USB_ID(0x0582, 0x01d8): /* BOSS Katana */
++ /* BOSS Katana amplifiers do not need quirks */
++ return 0;
+ }
+
+ if (attr == USB_ENDPOINT_SYNC_ASYNC &&
+diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
+index ba7849751989..fc8aeb224c03 100644
+--- a/tools/testing/selftests/kvm/Makefile
++++ b/tools/testing/selftests/kvm/Makefile
+@@ -46,7 +46,7 @@ CFLAGS += -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99 \
+ -I$(LINUX_HDR_PATH) -Iinclude -I$(<D) -Iinclude/$(UNAME_M) -I..
+
+ no-pie-option := $(call try-run, echo 'int main() { return 0; }' | \
+- $(CC) -Werror $(KBUILD_CPPFLAGS) $(CC_OPTION_CFLAGS) -no-pie -x c - -o "$$TMP", -no-pie)
++ $(CC) -Werror -no-pie -x c - -o "$$TMP", -no-pie)
+
+ # On s390, build the testcases KVM-enabled
+ pgste-option = $(call try-run, echo 'int main() { return 0; }' | \
* [gentoo-commits] proj/linux-patches:5.3 commit in: /
@ 2019-11-06 14:27 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2019-11-06 14:27 UTC (permalink / raw
To: gentoo-commits
commit: 5c57d019c84b891679f956769332a0a993c75e2d
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Nov 6 14:27:17 2019 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 6 14:27:17 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5c57d019
Linux patch 5.3.9
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1008_linux-5.3.9.patch | 6974 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 6978 insertions(+)
diff --git a/0000_README b/0000_README
index bc9694a..c1a5896 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch: 1007_linux-5.3.8.patch
From: http://www.kernel.org
Desc: Linux 5.3.8
+Patch: 1008_linux-5.3.9.patch
+From: http://www.kernel.org
+Desc: Linux 5.3.9
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1008_linux-5.3.9.patch b/1008_linux-5.3.9.patch
new file mode 100644
index 0000000..6e9eabf
--- /dev/null
+++ b/1008_linux-5.3.9.patch
@@ -0,0 +1,6974 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 4c1971960afa..5ea005c9e2d6 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -5267,6 +5267,10 @@
+ the unplug protocol
+ never -- do not unplug even if version check succeeds
+
++ xen_legacy_crash [X86,XEN]
++ Crash from Xen panic notifier, without executing late
++ panic() code such as dumping handler.
++
+ xen_nopvspin [X86,XEN]
+ Disables the ticketlock slowpath using Xen PV
+ optimizations.
+diff --git a/Documentation/scheduler/sched-bwc.rst b/Documentation/scheduler/sched-bwc.rst
+index 3a9064219656..9801d6b284b1 100644
+--- a/Documentation/scheduler/sched-bwc.rst
++++ b/Documentation/scheduler/sched-bwc.rst
+@@ -9,15 +9,16 @@ CFS bandwidth control is a CONFIG_FAIR_GROUP_SCHED extension which allows the
+ specification of the maximum CPU bandwidth available to a group or hierarchy.
+
+ The bandwidth allowed for a group is specified using a quota and period. Within
+-each given "period" (microseconds), a group is allowed to consume only up to
+-"quota" microseconds of CPU time. When the CPU bandwidth consumption of a
+-group exceeds this limit (for that period), the tasks belonging to its
+-hierarchy will be throttled and are not allowed to run again until the next
+-period.
+-
+-A group's unused runtime is globally tracked, being refreshed with quota units
+-above at each period boundary. As threads consume this bandwidth it is
+-transferred to cpu-local "silos" on a demand basis. The amount transferred
++each given "period" (microseconds), a task group is allocated up to "quota"
++microseconds of CPU time. That quota is assigned to per-cpu run queues in
++slices as threads in the cgroup become runnable. Once all quota has been
++assigned any additional requests for quota will result in those threads being
++throttled. Throttled threads will not be able to run again until the next
++period when the quota is replenished.
++
++A group's unassigned quota is globally tracked, being refreshed back to
++cfs_quota units at each period boundary. As threads consume this bandwidth it
++is transferred to cpu-local "silos" on a demand basis. The amount transferred
+ within each of these updates is tunable and described as the "slice".
+
+ Management
+@@ -35,12 +36,12 @@ The default values are::
+
+ A value of -1 for cpu.cfs_quota_us indicates that the group does not have any
+ bandwidth restriction in place, such a group is described as an unconstrained
+-bandwidth group. This represents the traditional work-conserving behavior for
++bandwidth group. This represents the traditional work-conserving behavior for
+ CFS.
+
+ Writing any (valid) positive value(s) will enact the specified bandwidth limit.
+-The minimum quota allowed for the quota or period is 1ms. There is also an
+-upper bound on the period length of 1s. Additional restrictions exist when
++The minimum quota allowed for the quota or period is 1ms. There is also an
++upper bound on the period length of 1s. Additional restrictions exist when
+ bandwidth limits are used in a hierarchical fashion, these are explained in
+ more detail below.
+
+@@ -53,8 +54,8 @@ unthrottled if it is in a constrained state.
+ System wide settings
+ --------------------
+ For efficiency run-time is transferred between the global pool and CPU local
+-"silos" in a batch fashion. This greatly reduces global accounting pressure
+-on large systems. The amount transferred each time such an update is required
++"silos" in a batch fashion. This greatly reduces global accounting pressure
++on large systems. The amount transferred each time such an update is required
+ is described as the "slice".
+
+ This is tunable via procfs::
+@@ -97,6 +98,51 @@ There are two ways in which a group may become throttled:
+ In case b) above, even though the child may have runtime remaining it will not
+ be allowed to until the parent's runtime is refreshed.
+
++CFS Bandwidth Quota Caveats
++---------------------------
++Once a slice is assigned to a cpu it does not expire. However all but 1ms of
++the slice may be returned to the global pool if all threads on that cpu become
++unrunnable. This is configured at compile time by the min_cfs_rq_runtime
++variable. This is a performance tweak that helps prevent added contention on
++the global lock.
++
++The fact that cpu-local slices do not expire results in some interesting corner
++cases that should be understood.
++
++For cgroup cpu constrained applications that are cpu limited this is a
++relatively moot point because they will naturally consume the entirety of their
++quota as well as the entirety of each cpu-local slice in each period. As a
++result it is expected that nr_periods roughly equal nr_throttled, and that
++cpuacct.usage will increase roughly equal to cfs_quota_us in each period.
++
++For highly-threaded, non-cpu bound applications this non-expiration nuance
++allows applications to briefly burst past their quota limits by the amount of
++unused slice on each cpu that the task group is running on (typically at most
++1ms per cpu or as defined by min_cfs_rq_runtime). This slight burst only
++applies if quota had been assigned to a cpu and then not fully used or returned
++in previous periods. This burst amount will not be transferred between cores.
++As a result, this mechanism still strictly limits the task group to quota
++average usage, albeit over a longer time window than a single period. This
++also limits the burst ability to no more than 1ms per cpu. This provides
++better, more predictable user experience for highly threaded applications with
++small quota limits on high core count machines. It also eliminates the
++propensity to throttle these applications while simultaneously using less than
++quota amounts of cpu. Another way to say this is that by allowing the unused
++portion of a slice to remain valid across periods we have decreased the
++possibility of wastefully expiring quota on cpu-local silos that don't need a
++full slice's amount of cpu time.
++
++The interaction between cpu-bound and non-cpu-bound-interactive applications
++should also be considered, especially when single core usage hits 100%. If you
++gave each of these applications half of a cpu-core and they both got scheduled
++on the same CPU it is theoretically possible that the non-cpu bound application
++will use up to 1ms additional quota in some periods, thereby preventing the
++cpu-bound application from fully using its quota by that same amount. In these
++instances it will be up to the CFS algorithm (see sched-design-CFS.rst) to
++decide which application is chosen to run, as they will both be runnable and
++have remaining quota. This runtime discrepancy will be made up in the following
++periods when the interactive application idles.
++
+ Examples
+ --------
+ 1. Limit a group to 1 CPU worth of runtime::
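A quick worked example of the non-expiration caveat documented above (numbers
chosen for illustration): with cpu.cfs_quota_us = 50000 and cpu.cfs_period_us =
100000, a group averages half a CPU per period. If its threads run on 8 cpus,
each cpu-local silo may retain up to min_cfs_rq_runtime (1ms) of unexpired
slice from an earlier period, so a single period can observe up to
50ms + 8 * 1ms = 58ms of usage, while the long-run average still converges to
50ms per 100ms period.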
+diff --git a/Makefile b/Makefile
+index 445f9488d8ba..ad5f5230bbbe 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 3
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Bobtail Squid
+
+diff --git a/arch/arc/kernel/perf_event.c b/arch/arc/kernel/perf_event.c
+index 861a8aea51f9..661fd842ea97 100644
+--- a/arch/arc/kernel/perf_event.c
++++ b/arch/arc/kernel/perf_event.c
+@@ -614,8 +614,8 @@ static int arc_pmu_device_probe(struct platform_device *pdev)
+ /* loop thru all available h/w condition indexes */
+ for (i = 0; i < cc_bcr.c; i++) {
+ write_aux_reg(ARC_REG_CC_INDEX, i);
+- cc_name.indiv.word0 = read_aux_reg(ARC_REG_CC_NAME0);
+- cc_name.indiv.word1 = read_aux_reg(ARC_REG_CC_NAME1);
++ cc_name.indiv.word0 = le32_to_cpu(read_aux_reg(ARC_REG_CC_NAME0));
++ cc_name.indiv.word1 = le32_to_cpu(read_aux_reg(ARC_REG_CC_NAME1));
+
+ arc_pmu_map_hw_event(i, cc_name.str);
+ arc_pmu_add_raw_event_attr(i, cc_name.str);
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index e8cf56283871..f63b824cdc2d 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -111,7 +111,7 @@ config ARM64
+ select GENERIC_STRNLEN_USER
+ select GENERIC_TIME_VSYSCALL
+ select GENERIC_GETTIMEOFDAY
+- select GENERIC_COMPAT_VDSO if (!CPU_BIG_ENDIAN && COMPAT)
++ select GENERIC_COMPAT_VDSO if (!CPU_BIG_ENDIAN && COMPAT && "$(CROSS_COMPILE_COMPAT)" != "")
+ select HANDLE_DOMAIN_IRQ
+ select HARDIRQS_SW_RESEND
+ select HAVE_PCI
+diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
+index 61de992bbea3..5858d6e44926 100644
+--- a/arch/arm64/Makefile
++++ b/arch/arm64/Makefile
+@@ -47,20 +47,16 @@ $(warning Detected assembler with broken .inst; disassembly will be unreliable)
+ endif
+ endif
+
++ifeq ($(CONFIG_CC_IS_CLANG), y)
++COMPATCC ?= $(CC) --target=$(notdir $(CROSS_COMPILE_COMPAT:%-=%))
++else
++COMPATCC ?= $(CROSS_COMPILE_COMPAT)gcc
++endif
++export COMPATCC
++
+ ifeq ($(CONFIG_GENERIC_COMPAT_VDSO), y)
+- CROSS_COMPILE_COMPAT ?= $(CONFIG_CROSS_COMPILE_COMPAT_VDSO:"%"=%)
+-
+- ifeq ($(CONFIG_CC_IS_CLANG), y)
+- $(warning CROSS_COMPILE_COMPAT is clang, the compat vDSO will not be built)
+- else ifeq ($(strip $(CROSS_COMPILE_COMPAT)),)
+- $(warning CROSS_COMPILE_COMPAT not defined or empty, the compat vDSO will not be built)
+- else ifeq ($(shell which $(CROSS_COMPILE_COMPAT)gcc 2> /dev/null),)
+- $(error $(CROSS_COMPILE_COMPAT)gcc not found, check CROSS_COMPILE_COMPAT)
+- else
+- export CROSS_COMPILE_COMPAT
+- export CONFIG_COMPAT_VDSO := y
+- compat_vdso := -DCONFIG_COMPAT_VDSO=1
+- endif
++ export CONFIG_COMPAT_VDSO := y
++ compat_vdso := -DCONFIG_COMPAT_VDSO=1
+ endif
+
+ KBUILD_CFLAGS += -mgeneral-regs-only $(lseinstr) $(brokengasinst) $(compat_vdso)
+diff --git a/arch/arm64/boot/dts/qcom/Makefile b/arch/arm64/boot/dts/qcom/Makefile
+index 0a7e5dfce6f7..954d75de617b 100644
+--- a/arch/arm64/boot/dts/qcom/Makefile
++++ b/arch/arm64/boot/dts/qcom/Makefile
+@@ -6,6 +6,9 @@ dtb-$(CONFIG_ARCH_QCOM) += msm8916-mtp.dtb
+ dtb-$(CONFIG_ARCH_QCOM) += msm8992-bullhead-rev-101.dtb
+ dtb-$(CONFIG_ARCH_QCOM) += msm8994-angler-rev-101.dtb
+ dtb-$(CONFIG_ARCH_QCOM) += msm8996-mtp.dtb
++dtb-$(CONFIG_ARCH_QCOM) += msm8998-asus-novago-tp370ql.dtb
++dtb-$(CONFIG_ARCH_QCOM) += msm8998-hp-envy-x2.dtb
++dtb-$(CONFIG_ARCH_QCOM) += msm8998-lenovo-miix-630.dtb
+ dtb-$(CONFIG_ARCH_QCOM) += msm8998-mtp.dtb
+ dtb-$(CONFIG_ARCH_QCOM) += sdm845-cheza-r1.dtb
+ dtb-$(CONFIG_ARCH_QCOM) += sdm845-cheza-r2.dtb
+diff --git a/arch/arm64/boot/dts/qcom/msm8998-asus-novago-tp370ql.dts b/arch/arm64/boot/dts/qcom/msm8998-asus-novago-tp370ql.dts
+new file mode 100644
+index 000000000000..db5821be1e2f
+--- /dev/null
++++ b/arch/arm64/boot/dts/qcom/msm8998-asus-novago-tp370ql.dts
+@@ -0,0 +1,47 @@
++// SPDX-License-Identifier: GPL-2.0
++/* Copyright (c) 2019, Jeffrey Hugo. All rights reserved. */
++
++/dts-v1/;
++
++#include "msm8998-clamshell.dtsi"
++
++/ {
++ model = "Asus NovaGo TP370QL";
++ compatible = "asus,novago-tp370ql", "qcom,msm8998";
++};
++
++&blsp1_i2c6 {
++ status = "okay";
++
++ touchpad@15 {
++ compatible = "hid-over-i2c";
++ interrupt-parent = <&tlmm>;
++ interrupts = <0x7b IRQ_TYPE_LEVEL_LOW>;
++ reg = <0x15>;
++ hid-descr-addr = <0x0001>;
++
++ pinctrl-names = "default";
++ pinctrl-0 = <&touchpad>;
++ };
++
++ keyboard@3a {
++ compatible = "hid-over-i2c";
++ interrupt-parent = <&tlmm>;
++ interrupts = <0x25 IRQ_TYPE_LEVEL_LOW>;
++ reg = <0x3a>;
++ hid-descr-addr = <0x0001>;
++ };
++};
++
++&sdhc2 {
++ cd-gpios = <&tlmm 95 GPIO_ACTIVE_HIGH>;
++};
++
++&tlmm {
++ touchpad: touchpad {
++ config {
++ pins = "gpio123";
++ bias-pull-up;
++ };
++ };
++};
+diff --git a/arch/arm64/boot/dts/qcom/msm8998-clamshell.dtsi b/arch/arm64/boot/dts/qcom/msm8998-clamshell.dtsi
+new file mode 100644
+index 000000000000..9682d4dd7496
+--- /dev/null
++++ b/arch/arm64/boot/dts/qcom/msm8998-clamshell.dtsi
+@@ -0,0 +1,240 @@
++// SPDX-License-Identifier: GPL-2.0
++/* Copyright (c) 2019, Jeffrey Hugo. All rights reserved. */
++
++/*
++ * Common include for MSM8998 clamshell devices, ie the Lenovo Miix 630,
++ * Asus NovaGo TP370QL, and HP Envy x2. All three devices are basically the
++ * same, with differences in peripherals.
++ */
++
++#include "msm8998.dtsi"
++#include "pm8998.dtsi"
++#include "pm8005.dtsi"
++
++/ {
++ chosen {
++ };
++
++ vph_pwr: vph-pwr-regulator {
++ compatible = "regulator-fixed";
++ regulator-name = "vph_pwr";
++ regulator-always-on;
++ regulator-boot-on;
++ };
++};
++
++&qusb2phy {
++ status = "okay";
++
++ vdda-pll-supply = <&vreg_l12a_1p8>;
++ vdda-phy-dpdm-supply = <&vreg_l24a_3p075>;
++};
++
++&rpm_requests {
++ pm8998-regulators {
++ compatible = "qcom,rpm-pm8998-regulators";
++
++ vdd_s1-supply = <&vph_pwr>;
++ vdd_s2-supply = <&vph_pwr>;
++ vdd_s3-supply = <&vph_pwr>;
++ vdd_s4-supply = <&vph_pwr>;
++ vdd_s5-supply = <&vph_pwr>;
++ vdd_s6-supply = <&vph_pwr>;
++ vdd_s7-supply = <&vph_pwr>;
++ vdd_s8-supply = <&vph_pwr>;
++ vdd_s9-supply = <&vph_pwr>;
++ vdd_s10-supply = <&vph_pwr>;
++ vdd_s11-supply = <&vph_pwr>;
++ vdd_s12-supply = <&vph_pwr>;
++ vdd_s13-supply = <&vph_pwr>;
++ vdd_l1_l27-supply = <&vreg_s7a_1p025>;
++ vdd_l2_l8_l17-supply = <&vreg_s3a_1p35>;
++ vdd_l3_l11-supply = <&vreg_s7a_1p025>;
++ vdd_l4_l5-supply = <&vreg_s7a_1p025>;
++ vdd_l6-supply = <&vreg_s5a_2p04>;
++ vdd_l7_l12_l14_l15-supply = <&vreg_s5a_2p04>;
++ vdd_l9-supply = <&vph_pwr>;
++ vdd_l10_l23_l25-supply = <&vph_pwr>;
++ vdd_l13_l19_l21-supply = <&vph_pwr>;
++ vdd_l16_l28-supply = <&vph_pwr>;
++ vdd_l18_l22-supply = <&vph_pwr>;
++ vdd_l20_l24-supply = <&vph_pwr>;
++ vdd_l26-supply = <&vreg_s3a_1p35>;
++ vdd_lvs1_lvs2-supply = <&vreg_s4a_1p8>;
++
++ vreg_s3a_1p35: s3 {
++ regulator-min-microvolt = <1352000>;
++ regulator-max-microvolt = <1352000>;
++ };
++ vreg_s4a_1p8: s4 {
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <1800000>;
++ regulator-allow-set-load;
++ };
++ vreg_s5a_2p04: s5 {
++ regulator-min-microvolt = <1904000>;
++ regulator-max-microvolt = <2040000>;
++ };
++ vreg_s7a_1p025: s7 {
++ regulator-min-microvolt = <900000>;
++ regulator-max-microvolt = <1028000>;
++ };
++ vreg_l1a_0p875: l1 {
++ regulator-min-microvolt = <880000>;
++ regulator-max-microvolt = <880000>;
++ regulator-allow-set-load;
++ };
++ vreg_l2a_1p2: l2 {
++ regulator-min-microvolt = <1200000>;
++ regulator-max-microvolt = <1200000>;
++ regulator-allow-set-load;
++ };
++ vreg_l3a_1p0: l3 {
++ regulator-min-microvolt = <1000000>;
++ regulator-max-microvolt = <1000000>;
++ };
++ vreg_l5a_0p8: l5 {
++ regulator-min-microvolt = <800000>;
++ regulator-max-microvolt = <800000>;
++ };
++ vreg_l6a_1p8: l6 {
++ regulator-min-microvolt = <1808000>;
++ regulator-max-microvolt = <1808000>;
++ };
++ vreg_l7a_1p8: l7 {
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <1800000>;
++ };
++ vreg_l8a_1p2: l8 {
++ regulator-min-microvolt = <1200000>;
++ regulator-max-microvolt = <1200000>;
++ };
++ vreg_l9a_1p8: l9 {
++ regulator-min-microvolt = <1808000>;
++ regulator-max-microvolt = <2960000>;
++ };
++ vreg_l10a_1p8: l10 {
++ regulator-min-microvolt = <1808000>;
++ regulator-max-microvolt = <2960000>;
++ };
++ vreg_l11a_1p0: l11 {
++ regulator-min-microvolt = <1000000>;
++ regulator-max-microvolt = <1000000>;
++ };
++ vreg_l12a_1p8: l12 {
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <1800000>;
++ };
++ vreg_l13a_2p95: l13 {
++ regulator-min-microvolt = <1808000>;
++ regulator-max-microvolt = <2960000>;
++ };
++ vreg_l14a_1p88: l14 {
++ regulator-min-microvolt = <1880000>;
++ regulator-max-microvolt = <1880000>;
++ };
++ vreg_15a_1p8: l15 {
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <1800000>;
++ };
++ vreg_l16a_2p7: l16 {
++ regulator-min-microvolt = <2704000>;
++ regulator-max-microvolt = <2704000>;
++ };
++ vreg_l17a_1p3: l17 {
++ regulator-min-microvolt = <1304000>;
++ regulator-max-microvolt = <1304000>;
++ };
++ vreg_l18a_2p7: l18 {
++ regulator-min-microvolt = <2704000>;
++ regulator-max-microvolt = <2704000>;
++ };
++ vreg_l19a_3p0: l19 {
++ regulator-min-microvolt = <3008000>;
++ regulator-max-microvolt = <3008000>;
++ };
++ vreg_l20a_2p95: l20 {
++ regulator-min-microvolt = <2960000>;
++ regulator-max-microvolt = <2960000>;
++ regulator-allow-set-load;
++ };
++ vreg_l21a_2p95: l21 {
++ regulator-min-microvolt = <2960000>;
++ regulator-max-microvolt = <2960000>;
++ regulator-allow-set-load;
++ regulator-system-load = <800000>;
++ };
++ vreg_l22a_2p85: l22 {
++ regulator-min-microvolt = <2864000>;
++ regulator-max-microvolt = <2864000>;
++ };
++ vreg_l23a_3p3: l23 {
++ regulator-min-microvolt = <3312000>;
++ regulator-max-microvolt = <3312000>;
++ };
++ vreg_l24a_3p075: l24 {
++ regulator-min-microvolt = <3088000>;
++ regulator-max-microvolt = <3088000>;
++ };
++ vreg_l25a_3p3: l25 {
++ regulator-min-microvolt = <3104000>;
++ regulator-max-microvolt = <3312000>;
++ };
++ vreg_l26a_1p2: l26 {
++ regulator-min-microvolt = <1200000>;
++ regulator-max-microvolt = <1200000>;
++ };
++ vreg_l28_3p0: l28 {
++ regulator-min-microvolt = <3008000>;
++ regulator-max-microvolt = <3008000>;
++ };
++
++ vreg_lvs1a_1p8: lvs1 {
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <1800000>;
++ };
++
++ vreg_lvs2a_1p8: lvs2 {
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <1800000>;
++ };
++
++ };
++};
++
++&tlmm {
++ gpio-reserved-ranges = <0 4>, <81 4>;
++
++ touchpad: touchpad {
++ config {
++ pins = "gpio123";
++ bias-pull-up; /* pull up */
++ };
++ };
++};
++
++&sdhc2 {
++ status = "okay";
++
++ vmmc-supply = <&vreg_l21a_2p95>;
++ vqmmc-supply = <&vreg_l13a_2p95>;
++
++ pinctrl-names = "default", "sleep";
++ pinctrl-0 = <&sdc2_clk_on &sdc2_cmd_on &sdc2_data_on &sdc2_cd_on>;
++ pinctrl-1 = <&sdc2_clk_off &sdc2_cmd_off &sdc2_data_off &sdc2_cd_off>;
++};
++
++&usb3 {
++ status = "okay";
++};
++
++&usb3_dwc3 {
++ dr_mode = "host"; /* Force to host until we have Type-C hooked up */
++};
++
++&usb3phy {
++ status = "okay";
++
++ vdda-phy-supply = <&vreg_l1a_0p875>;
++ vdda-pll-supply = <&vreg_l2a_1p2>;
++};
+diff --git a/arch/arm64/boot/dts/qcom/msm8998-hp-envy-x2.dts b/arch/arm64/boot/dts/qcom/msm8998-hp-envy-x2.dts
+new file mode 100644
+index 000000000000..24073127091f
+--- /dev/null
++++ b/arch/arm64/boot/dts/qcom/msm8998-hp-envy-x2.dts
+@@ -0,0 +1,30 @@
++// SPDX-License-Identifier: GPL-2.0
++/* Copyright (c) 2019, Jeffrey Hugo. All rights reserved. */
++
++/dts-v1/;
++
++#include "msm8998-clamshell.dtsi"
++
++/ {
++ model = "HP Envy x2";
++ compatible = "hp,envy-x2", "qcom,msm8998";
++};
++
++&blsp1_i2c6 {
++ status = "okay";
++
++ keyboard@3a {
++ compatible = "hid-over-i2c";
++ interrupt-parent = <&tlmm>;
++ interrupts = <0x79 IRQ_TYPE_LEVEL_LOW>;
++ reg = <0x3a>;
++ hid-descr-addr = <0x0001>;
++
++ pinctrl-names = "default";
++ pinctrl-0 = <&touchpad>;
++ };
++};
++
++&sdhc2 {
++ cd-gpios = <&tlmm 95 GPIO_ACTIVE_LOW>;
++};
+diff --git a/arch/arm64/boot/dts/qcom/msm8998-lenovo-miix-630.dts b/arch/arm64/boot/dts/qcom/msm8998-lenovo-miix-630.dts
+new file mode 100644
+index 000000000000..407c6a32911c
+--- /dev/null
++++ b/arch/arm64/boot/dts/qcom/msm8998-lenovo-miix-630.dts
+@@ -0,0 +1,30 @@
++// SPDX-License-Identifier: GPL-2.0
++/* Copyright (c) 2019, Jeffrey Hugo. All rights reserved. */
++
++/dts-v1/;
++
++#include "msm8998-clamshell.dtsi"
++
++/ {
++ model = "Lenovo Miix 630";
++ compatible = "lenovo,miix-630", "qcom,msm8998";
++};
++
++&blsp1_i2c6 {
++ status = "okay";
++
++ keyboard@3a {
++ compatible = "hid-over-i2c";
++ interrupt-parent = <&tlmm>;
++ interrupts = <0x79 IRQ_TYPE_LEVEL_LOW>;
++ reg = <0x3a>;
++ hid-descr-addr = <0x0001>;
++
++ pinctrl-names = "default";
++ pinctrl-0 = <&touchpad>;
++ };
++};
++
++&sdhc2 {
++ cd-gpios = <&tlmm 95 GPIO_ACTIVE_HIGH>;
++};
+diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
+index 92d2e9f28f28..a7edc079bcfd 100644
+--- a/arch/arm64/include/asm/pgtable-prot.h
++++ b/arch/arm64/include/asm/pgtable-prot.h
+@@ -32,11 +32,11 @@
+ #define PROT_DEFAULT (_PROT_DEFAULT | PTE_MAYBE_NG)
+ #define PROT_SECT_DEFAULT (_PROT_SECT_DEFAULT | PMD_MAYBE_NG)
+
+-#define PROT_DEVICE_nGnRnE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRnE))
+-#define PROT_DEVICE_nGnRE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRE))
+-#define PROT_NORMAL_NC (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL_NC))
+-#define PROT_NORMAL_WT (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL_WT))
+-#define PROT_NORMAL (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL))
++#define PROT_DEVICE_nGnRnE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRnE))
++#define PROT_DEVICE_nGnRE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRE))
++#define PROT_NORMAL_NC (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL_NC))
++#define PROT_NORMAL_WT (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL_WT))
++#define PROT_NORMAL (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL))
+
+ #define PROT_SECT_DEVICE_nGnRE (PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_DEVICE_nGnRE))
+ #define PROT_SECT_NORMAL (PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
+@@ -80,8 +80,9 @@
+ #define PAGE_S2_DEVICE __pgprot(_PROT_DEFAULT | PAGE_S2_MEMATTR(DEVICE_nGnRE) | PTE_S2_RDONLY | PAGE_S2_XN)
+
+ #define PAGE_NONE __pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
+-#define PAGE_SHARED __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE)
+-#define PAGE_SHARED_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_WRITE)
++/* shared+writable pages are clean by default, hence PTE_RDONLY|PTE_WRITE */
++#define PAGE_SHARED __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE)
++#define PAGE_SHARED_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_WRITE)
+ #define PAGE_READONLY __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
+ #define PAGE_READONLY_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN)
+ #define PAGE_EXECONLY __pgprot(_PAGE_DEFAULT | PTE_RDONLY | PTE_NG | PTE_PXN)
+diff --git a/arch/arm64/include/asm/vdso/compat_barrier.h b/arch/arm64/include/asm/vdso/compat_barrier.h
+index fb60a88b5ed4..3fd8fd6d8fc2 100644
+--- a/arch/arm64/include/asm/vdso/compat_barrier.h
++++ b/arch/arm64/include/asm/vdso/compat_barrier.h
+@@ -20,7 +20,7 @@
+
+ #define dmb(option) __asm__ __volatile__ ("dmb " #option : : : "memory")
+
+-#if __LINUX_ARM_ARCH__ >= 8
++#if __LINUX_ARM_ARCH__ >= 8 && defined(CONFIG_AS_DMB_ISHLD)
+ #define aarch32_smp_mb() dmb(ish)
+ #define aarch32_smp_rmb() dmb(ishld)
+ #define aarch32_smp_wmb() dmb(ishst)
+diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
+index 2ec09debc2bb..ca158be21f83 100644
+--- a/arch/arm64/kernel/armv8_deprecated.c
++++ b/arch/arm64/kernel/armv8_deprecated.c
+@@ -174,6 +174,9 @@ static void __init register_insn_emulation(struct insn_emulation_ops *ops)
+ struct insn_emulation *insn;
+
+ insn = kzalloc(sizeof(*insn), GFP_KERNEL);
++ if (!insn)
++ return;
++
+ insn->ops = ops;
+ insn->min = INSN_UNDEF;
+
+@@ -233,6 +236,8 @@ static void __init register_insn_emulation_sysctl(void)
+
+ insns_sysctl = kcalloc(nr_insn_emulated + 1, sizeof(*sysctl),
+ GFP_KERNEL);
++ if (!insns_sysctl)
++ return;
+
+ raw_spin_lock_irqsave(&insn_emulation_lock, flags);
+ list_for_each_entry(insn, &insn_emulation, node) {
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 27b4a973f16d..1e0b9ae9bf7e 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -816,6 +816,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ {
+ .desc = "Qualcomm Technologies Falkor/Kryo erratum 1003",
+ .capability = ARM64_WORKAROUND_QCOM_FALKOR_E1003,
++ .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+ .matches = cpucap_multi_entry_cap_matches,
+ .match_list = qcom_erratum_1003_list,
+ },
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 9323bcc40a58..cabebf1a7976 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -136,6 +136,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
+
+ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_SB_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FRINTTS_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_GPI_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index 109894bd3194..239f6841a741 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -775,6 +775,7 @@ el0_sync_compat:
+ b.ge el0_dbg
+ b el0_inv
+ el0_svc_compat:
++ gic_prio_kentry_setup tmp=x1
+ mov x0, sp
+ bl el0_svc_compat_handler
+ b ret_to_user
+diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
+index 171773257974..06e56b470315 100644
+--- a/arch/arm64/kernel/ftrace.c
++++ b/arch/arm64/kernel/ftrace.c
+@@ -121,10 +121,16 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
+
+ /*
+ * Ensure updated trampoline is visible to instruction
+- * fetch before we patch in the branch.
++ * fetch before we patch in the branch. Although the
++ * architecture doesn't require an IPI in this case,
++ * Neoverse-N1 erratum #1542419 does require one
++ * if the TLB maintenance in module_enable_ro() is
++ * skipped due to rodata_enabled. It doesn't seem worth
++ * it to make it conditional given that this is
++ * certainly not a fast-path.
+ */
+- __flush_icache_range((unsigned long)&dst[0],
+- (unsigned long)&dst[1]);
++ flush_icache_range((unsigned long)&dst[0],
++ (unsigned long)&dst[1]);
+ }
+ addr = (unsigned long)dst;
+ #else /* CONFIG_ARM64_MODULE_PLTS */
+diff --git a/arch/arm64/kernel/vdso32/Makefile b/arch/arm64/kernel/vdso32/Makefile
+index 1fba0776ed40..aa171b043287 100644
+--- a/arch/arm64/kernel/vdso32/Makefile
++++ b/arch/arm64/kernel/vdso32/Makefile
+@@ -8,8 +8,6 @@
+ ARCH_REL_TYPE_ABS := R_ARM_JUMP_SLOT|R_ARM_GLOB_DAT|R_ARM_ABS32
+ include $(srctree)/lib/vdso/Makefile
+
+-COMPATCC := $(CROSS_COMPILE_COMPAT)gcc
+-
+ # Same as cc-*option, but using COMPATCC instead of CC
+ cc32-option = $(call try-run,\
+ $(COMPATCC) $(1) -c -x c /dev/null -o "$$TMP",$(1),$(2))
+@@ -17,6 +15,8 @@ cc32-disable-warning = $(call try-run,\
+ $(COMPATCC) -W$(strip $(1)) -c -x c /dev/null -o "$$TMP",-Wno-$(strip $(1)))
+ cc32-ldoption = $(call try-run,\
+ $(COMPATCC) $(1) -nostdlib -x c /dev/null -o "$$TMP",$(1),$(2))
++cc32-as-instr = $(call try-run,\
++ printf "%b\n" "$(1)" | $(COMPATCC) $(VDSO_AFLAGS) -c -x assembler -o "$$TMP" -,$(2),$(3))
+
+ # We cannot use the global flags to compile the vDSO files, the main reason
+ # being that the 32-bit compiler may be older than the main (64-bit) compiler
+@@ -25,11 +25,9 @@ cc32-ldoption = $(call try-run,\
+ # arm64 one.
+ # As a result we set our own flags here.
+
+-# From top-level Makefile
+-# NOSTDINC_FLAGS
+-VDSO_CPPFLAGS := -nostdinc -isystem $(shell $(COMPATCC) -print-file-name=include)
++# KBUILD_CPPFLAGS and NOSTDINC_FLAGS from top-level Makefile
++VDSO_CPPFLAGS := -D__KERNEL__ -nostdinc -isystem $(shell $(COMPATCC) -print-file-name=include)
+ VDSO_CPPFLAGS += $(LINUXINCLUDE)
+-VDSO_CPPFLAGS += $(KBUILD_CPPFLAGS)
+
+ # Common C and assembly flags
+ # From top-level Makefile
+@@ -55,6 +53,7 @@ endif
+ VDSO_CAFLAGS += -fPIC -fno-builtin -fno-stack-protector
+ VDSO_CAFLAGS += -DDISABLE_BRANCH_PROFILING
+
++
+ # Try to compile for ARMv8. If the compiler is too old and doesn't support it,
+ # fall back to v7. There is no easy way to check for what architecture the code
+ # is being compiled, so define a macro specifying that (see arch/arm/Makefile).
+@@ -91,6 +90,12 @@ VDSO_CFLAGS += -Wno-int-to-pointer-cast
+ VDSO_AFLAGS := $(VDSO_CAFLAGS)
+ VDSO_AFLAGS += -D__ASSEMBLY__
+
++# Check for binutils support for dmb ishld
++dmbinstr := $(call cc32-as-instr,dmb ishld,-DCONFIG_AS_DMB_ISHLD=1)
++
++VDSO_CFLAGS += $(dmbinstr)
++VDSO_AFLAGS += $(dmbinstr)
++
+ VDSO_LDFLAGS := $(VDSO_CPPFLAGS)
+ # From arm vDSO Makefile
+ VDSO_LDFLAGS += -Wl,-Bsymbolic -Wl,--no-undefined -Wl,-soname=linux-vdso.so.1
+diff --git a/arch/mips/fw/sni/sniprom.c b/arch/mips/fw/sni/sniprom.c
+index 8772617b64ce..80112f2298b6 100644
+--- a/arch/mips/fw/sni/sniprom.c
++++ b/arch/mips/fw/sni/sniprom.c
+@@ -43,7 +43,7 @@
+
+ /* O32 stack has to be 8-byte aligned. */
+ static u64 o32_stk[4096];
+-#define O32_STK &o32_stk[sizeof(o32_stk)]
++#define O32_STK (&o32_stk[ARRAY_SIZE(o32_stk)])
+
+ #define __PROM_O32(fun, arg) fun arg __asm__(#fun); \
+ __asm__(#fun " = call_o32")
+diff --git a/arch/mips/include/asm/cmpxchg.h b/arch/mips/include/asm/cmpxchg.h
+index c8a47d18f628..2b61052e10c9 100644
+--- a/arch/mips/include/asm/cmpxchg.h
++++ b/arch/mips/include/asm/cmpxchg.h
+@@ -77,8 +77,8 @@ extern unsigned long __xchg_called_with_bad_pointer(void)
+ extern unsigned long __xchg_small(volatile void *ptr, unsigned long val,
+ unsigned int size);
+
+-static inline unsigned long __xchg(volatile void *ptr, unsigned long x,
+- int size)
++static __always_inline
++unsigned long __xchg(volatile void *ptr, unsigned long x, int size)
+ {
+ switch (size) {
+ case 1:
+@@ -153,8 +153,9 @@ static inline unsigned long __xchg(volatile void *ptr, unsigned long x,
+ extern unsigned long __cmpxchg_small(volatile void *ptr, unsigned long old,
+ unsigned long new, unsigned int size);
+
+-static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
+- unsigned long new, unsigned int size)
++static __always_inline
++unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
++ unsigned long new, unsigned int size)
+ {
+ switch (size) {
+ case 1:
+diff --git a/arch/powerpc/platforms/powernv/smp.c b/arch/powerpc/platforms/powernv/smp.c
+index 94cd96b9b7bb..001dfac8354a 100644
+--- a/arch/powerpc/platforms/powernv/smp.c
++++ b/arch/powerpc/platforms/powernv/smp.c
+@@ -146,20 +146,25 @@ static int pnv_smp_cpu_disable(void)
+ return 0;
+ }
+
++static void pnv_flush_interrupts(void)
++{
++ if (cpu_has_feature(CPU_FTR_ARCH_300)) {
++ if (xive_enabled())
++ xive_flush_interrupt();
++ else
++ icp_opal_flush_interrupt();
++ } else {
++ icp_native_flush_interrupt();
++ }
++}
++
+ static void pnv_smp_cpu_kill_self(void)
+ {
++ unsigned long srr1, unexpected_mask, wmask;
+ unsigned int cpu;
+- unsigned long srr1, wmask;
+ u64 lpcr_val;
+
+ /* Standard hot unplug procedure */
+- /*
+- * This hard disables local interurpts, ensuring we have no lazy
+- * irqs pending.
+- */
+- WARN_ON(irqs_disabled());
+- hard_irq_disable();
+- WARN_ON(lazy_irq_pending());
+
+ idle_task_exit();
+ current->active_mm = NULL; /* for sanity */
+@@ -172,6 +177,27 @@ static void pnv_smp_cpu_kill_self(void)
+ if (cpu_has_feature(CPU_FTR_ARCH_207S))
+ wmask = SRR1_WAKEMASK_P8;
+
++ /*
++ * This turns the irq soft-disabled state we're called with, into a
++ * hard-disabled state with pending irq_happened interrupts cleared.
++ *
++ * PACA_IRQ_DEC - Decrementer should be ignored.
++ * PACA_IRQ_HMI - Can be ignored, processing is done in real mode.
++ * PACA_IRQ_DBELL, EE, PMI - Unexpected.
++ */
++ hard_irq_disable();
++ if (generic_check_cpu_restart(cpu))
++ goto out;
++
++ unexpected_mask = ~(PACA_IRQ_DEC | PACA_IRQ_HMI | PACA_IRQ_HARD_DIS);
++ if (local_paca->irq_happened & unexpected_mask) {
++ if (local_paca->irq_happened & PACA_IRQ_EE)
++ pnv_flush_interrupts();
++ DBG("CPU%d Unexpected exit while offline irq_happened=%lx!\n",
++ cpu, local_paca->irq_happened);
++ }
++ local_paca->irq_happened = PACA_IRQ_HARD_DIS;
++
+ /*
+ * We don't want to take decrementer interrupts while we are
+ * offline, so clear LPCR:PECE1. We keep PECE2 (and
+@@ -197,6 +223,7 @@ static void pnv_smp_cpu_kill_self(void)
+
+ srr1 = pnv_cpu_offline(cpu);
+
++ WARN_ON_ONCE(!irqs_disabled());
+ WARN_ON(lazy_irq_pending());
+
+ /*
+@@ -212,13 +239,7 @@ static void pnv_smp_cpu_kill_self(void)
+ */
+ if (((srr1 & wmask) == SRR1_WAKEEE) ||
+ ((srr1 & wmask) == SRR1_WAKEHVI)) {
+- if (cpu_has_feature(CPU_FTR_ARCH_300)) {
+- if (xive_enabled())
+- xive_flush_interrupt();
+- else
+- icp_opal_flush_interrupt();
+- } else
+- icp_native_flush_interrupt();
++ pnv_flush_interrupts();
+ } else if ((srr1 & wmask) == SRR1_WAKEHDBELL) {
+ unsigned long msg = PPC_DBELL_TYPE(PPC_DBELL_SERVER);
+ asm volatile(PPC_MSGCLR(%0) : : "r" (msg));
+@@ -266,7 +287,7 @@ static void pnv_smp_cpu_kill_self(void)
+ */
+ lpcr_val = mfspr(SPRN_LPCR) | (u64)LPCR_PECE1;
+ pnv_program_cpu_hotplug_lpcr(cpu, lpcr_val);
+-
++out:
+ DBG("CPU%d coming online...\n", cpu);
+ }
+
+diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
+index 424eb72d56b1..93742df9067f 100644
+--- a/arch/riscv/kernel/traps.c
++++ b/arch/riscv/kernel/traps.c
+@@ -124,24 +124,24 @@ static inline unsigned long get_break_insn_length(unsigned long pc)
+
+ asmlinkage void do_trap_break(struct pt_regs *regs)
+ {
+-#ifdef CONFIG_GENERIC_BUG
+ if (!user_mode(regs)) {
+ enum bug_trap_type type;
+
+ type = report_bug(regs->sepc, regs);
+ switch (type) {
+- case BUG_TRAP_TYPE_NONE:
+- break;
++#ifdef CONFIG_GENERIC_BUG
+ case BUG_TRAP_TYPE_WARN:
+ regs->sepc += get_break_insn_length(regs->sepc);
+- break;
++ return;
+ case BUG_TRAP_TYPE_BUG:
++#endif /* CONFIG_GENERIC_BUG */
++ default:
+ die(regs, "Kernel BUG");
+ }
++ } else {
++ force_sig_fault(SIGTRAP, TRAP_BRKPT,
++ (void __user *)(regs->sepc));
+ }
+-#endif /* CONFIG_GENERIC_BUG */
+-
+- force_sig_fault(SIGTRAP, TRAP_BRKPT, (void __user *)(regs->sepc));
+ }
+
+ #ifdef CONFIG_GENERIC_BUG
+diff --git a/arch/s390/include/asm/uaccess.h b/arch/s390/include/asm/uaccess.h
+index bd2fd9a7821d..a470f1fa9f2a 100644
+--- a/arch/s390/include/asm/uaccess.h
++++ b/arch/s390/include/asm/uaccess.h
+@@ -83,7 +83,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n);
+ __rc; \
+ })
+
+-static inline int __put_user_fn(void *x, void __user *ptr, unsigned long size)
++static __always_inline int __put_user_fn(void *x, void __user *ptr, unsigned long size)
+ {
+ unsigned long spec = 0x010000UL;
+ int rc;
+@@ -113,7 +113,7 @@ static inline int __put_user_fn(void *x, void __user *ptr, unsigned long size)
+ return rc;
+ }
+
+-static inline int __get_user_fn(void *x, const void __user *ptr, unsigned long size)
++static __always_inline int __get_user_fn(void *x, const void __user *ptr, unsigned long size)
+ {
+ unsigned long spec = 0x01UL;
+ int rc;
+diff --git a/arch/s390/include/asm/unwind.h b/arch/s390/include/asm/unwind.h
+index d827b5b9a32c..eaaefeceef6f 100644
+--- a/arch/s390/include/asm/unwind.h
++++ b/arch/s390/include/asm/unwind.h
+@@ -35,6 +35,7 @@ struct unwind_state {
+ struct task_struct *task;
+ struct pt_regs *regs;
+ unsigned long sp, ip;
++ bool reuse_sp;
+ int graph_idx;
+ bool reliable;
+ bool error;
+diff --git a/arch/s390/kernel/idle.c b/arch/s390/kernel/idle.c
+index b9d8fe45737a..8f8456816d83 100644
+--- a/arch/s390/kernel/idle.c
++++ b/arch/s390/kernel/idle.c
+@@ -69,18 +69,26 @@ DEVICE_ATTR(idle_count, 0444, show_idle_count, NULL);
+ static ssize_t show_idle_time(struct device *dev,
+ struct device_attribute *attr, char *buf)
+ {
++ unsigned long long now, idle_time, idle_enter, idle_exit, in_idle;
+ struct s390_idle_data *idle = &per_cpu(s390_idle, dev->id);
+- unsigned long long now, idle_time, idle_enter, idle_exit;
+ unsigned int seq;
+
+ do {
+- now = get_tod_clock();
+ seq = read_seqcount_begin(&idle->seqcount);
+ idle_time = READ_ONCE(idle->idle_time);
+ idle_enter = READ_ONCE(idle->clock_idle_enter);
+ idle_exit = READ_ONCE(idle->clock_idle_exit);
+ } while (read_seqcount_retry(&idle->seqcount, seq));
+- idle_time += idle_enter ? ((idle_exit ? : now) - idle_enter) : 0;
++ in_idle = 0;
++ now = get_tod_clock();
++ if (idle_enter) {
++ if (idle_exit) {
++ in_idle = idle_exit - idle_enter;
++ } else if (now > idle_enter) {
++ in_idle = now - idle_enter;
++ }
++ }
++ idle_time += in_idle;
+ return sprintf(buf, "%llu\n", idle_time >> 12);
+ }
+ DEVICE_ATTR(idle_time_us, 0444, show_idle_time, NULL);
+@@ -88,17 +96,24 @@ DEVICE_ATTR(idle_time_us, 0444, show_idle_time, NULL);
+ u64 arch_cpu_idle_time(int cpu)
+ {
+ struct s390_idle_data *idle = &per_cpu(s390_idle, cpu);
+- unsigned long long now, idle_enter, idle_exit;
++ unsigned long long now, idle_enter, idle_exit, in_idle;
+ unsigned int seq;
+
+ do {
+- now = get_tod_clock();
+ seq = read_seqcount_begin(&idle->seqcount);
+ idle_enter = READ_ONCE(idle->clock_idle_enter);
+ idle_exit = READ_ONCE(idle->clock_idle_exit);
+ } while (read_seqcount_retry(&idle->seqcount, seq));
+-
+- return cputime_to_nsecs(idle_enter ? ((idle_exit ?: now) - idle_enter) : 0);
++ in_idle = 0;
++ now = get_tod_clock();
++ if (idle_enter) {
++ if (idle_exit) {
++ in_idle = idle_exit - idle_enter;
++ } else if (now > idle_enter) {
++ in_idle = now - idle_enter;
++ }
++ }
++ return cputime_to_nsecs(in_idle);
+ }
+
+ void arch_cpu_idle_enter(void)
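Both idle.c hunks above converge on the same read pattern: take a consistent
seqcount snapshot of the idle timestamps first, sample the clock only
afterwards, and never subtract an enter timestamp that may be newer than the
clock sample. A condensed sketch of that pattern (kernel context assumed; the
helper name is illustrative):

static unsigned long long snapshot_idle_time(struct s390_idle_data *idle)
{
	unsigned long long now, enter, exit, in_idle = 0;
	unsigned int seq;

	do {
		seq   = read_seqcount_begin(&idle->seqcount);
		enter = READ_ONCE(idle->clock_idle_enter);
		exit  = READ_ONCE(idle->clock_idle_exit);
	} while (read_seqcount_retry(&idle->seqcount, seq));

	now = get_tod_clock();		/* sampled after the snapshot */
	if (enter) {
		if (exit)
			in_idle = exit - enter;	/* completed idle period */
		else if (now > enter)
			in_idle = now - enter;	/* still idle; never negative */
	}
	return in_idle;
}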
+diff --git a/arch/s390/kernel/unwind_bc.c b/arch/s390/kernel/unwind_bc.c
+index 8fc9daae47a2..a8204f952315 100644
+--- a/arch/s390/kernel/unwind_bc.c
++++ b/arch/s390/kernel/unwind_bc.c
+@@ -46,10 +46,15 @@ bool unwind_next_frame(struct unwind_state *state)
+
+ regs = state->regs;
+ if (unlikely(regs)) {
+- sp = READ_ONCE_NOCHECK(regs->gprs[15]);
+- if (unlikely(outside_of_stack(state, sp))) {
+- if (!update_stack_info(state, sp))
+- goto out_err;
++ if (state->reuse_sp) {
++ sp = state->sp;
++ state->reuse_sp = false;
++ } else {
++ sp = READ_ONCE_NOCHECK(regs->gprs[15]);
++ if (unlikely(outside_of_stack(state, sp))) {
++ if (!update_stack_info(state, sp))
++ goto out_err;
++ }
+ }
+ sf = (struct stack_frame *) sp;
+ ip = READ_ONCE_NOCHECK(sf->gprs[8]);
+@@ -107,9 +112,9 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
+ {
+ struct stack_info *info = &state->stack_info;
+ unsigned long *mask = &state->stack_mask;
++ bool reliable, reuse_sp;
+ struct stack_frame *sf;
+ unsigned long ip;
+- bool reliable;
+
+ memset(state, 0, sizeof(*state));
+ state->task = task;
+@@ -134,10 +139,12 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
+ if (regs) {
+ ip = READ_ONCE_NOCHECK(regs->psw.addr);
+ reliable = true;
++ reuse_sp = true;
+ } else {
+ sf = (struct stack_frame *) sp;
+ ip = READ_ONCE_NOCHECK(sf->gprs[8]);
+ reliable = false;
++ reuse_sp = false;
+ }
+
+ #ifdef CONFIG_FUNCTION_GRAPH_TRACER
+@@ -151,5 +158,6 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
+ state->sp = sp;
+ state->ip = ip;
+ state->reliable = reliable;
++ state->reuse_sp = reuse_sp;
+ }
+ EXPORT_SYMBOL_GPL(__unwind_start);
+diff --git a/arch/s390/mm/cmm.c b/arch/s390/mm/cmm.c
+index 510a18299196..a51c892f14f3 100644
+--- a/arch/s390/mm/cmm.c
++++ b/arch/s390/mm/cmm.c
+@@ -298,16 +298,16 @@ static int cmm_timeout_handler(struct ctl_table *ctl, int write,
+ }
+
+ if (write) {
+- len = *lenp;
+- if (copy_from_user(buf, buffer,
+- len > sizeof(buf) ? sizeof(buf) : len))
++ len = min(*lenp, sizeof(buf));
++ if (copy_from_user(buf, buffer, len))
+ return -EFAULT;
+- buf[sizeof(buf) - 1] = '\0';
++ buf[len - 1] = '\0';
+ cmm_skip_blanks(buf, &p);
+ nr = simple_strtoul(p, &p, 0);
+ cmm_skip_blanks(p, &p);
+ seconds = simple_strtoul(p, &p, 0);
+ cmm_set_timeout(nr, seconds);
++ *ppos += *lenp;
+ } else {
+ len = sprintf(buf, "%ld %ld\n",
+ cmm_timeout_pages, cmm_timeout_seconds);
+@@ -315,9 +315,9 @@ static int cmm_timeout_handler(struct ctl_table *ctl, int write,
+ len = *lenp;
+ if (copy_to_user(buffer, buf, len))
+ return -EFAULT;
++ *lenp = len;
++ *ppos += len;
+ }
+- *lenp = len;
+- *ppos += len;
+ return 0;
+ }
+
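The cmm_timeout_handler() change above is the standard bounded-copy shape for a
write handler: clamp the user-supplied length to the local buffer, terminate at
the copied length rather than the buffer end, and advance *ppos so the write is
consumed exactly once. A condensed sketch (kernel context assumed; the
signature is simplified and illustrative):

static int demo_write_handler(const void __user *buffer, size_t *lenp,
			      loff_t *ppos)
{
	char buf[64];
	size_t len;

	len = min(*lenp, sizeof(buf));	/* never copy more than buf holds */
	if (!len)
		return 0;
	if (copy_from_user(buf, buffer, len))
		return -EFAULT;
	buf[len - 1] = '\0';		/* terminate what was actually copied */
	/* ... parse buf ... */
	*ppos += *lenp;			/* consume the whole write */
	return 0;
}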
+diff --git a/arch/s390/pci/pci_irq.c b/arch/s390/pci/pci_irq.c
+index d80616ae8dd8..fbe97ab2e228 100644
+--- a/arch/s390/pci/pci_irq.c
++++ b/arch/s390/pci/pci_irq.c
+@@ -284,7 +284,7 @@ int arch_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
+ return rc;
+ irq_set_chip_and_handler(irq, &zpci_irq_chip,
+ handle_percpu_irq);
+- msg.data = hwirq;
++ msg.data = hwirq - bit;
+ if (irq_delivery == DIRECTED) {
+ msg.address_lo = zdev->msi_addr & 0xff0000ff;
+ msg.address_lo |= msi->affinity ?
+diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
+index 33c1cd6a12ac..40ab9ad7aa96 100644
+--- a/arch/um/drivers/ubd_kern.c
++++ b/arch/um/drivers/ubd_kern.c
+@@ -1403,8 +1403,12 @@ static blk_status_t ubd_queue_rq(struct blk_mq_hw_ctx *hctx,
+
+ spin_unlock_irq(&ubd_dev->lock);
+
+- if (ret < 0)
+- blk_mq_requeue_request(req, true);
++ if (ret < 0) {
++ if (ret == -ENOMEM)
++ res = BLK_STS_RESOURCE;
++ else
++ res = BLK_STS_DEV_RESOURCE;
++ }
+
+ return res;
+ }
+diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
+index e7d35f60d53f..64c3e70b0556 100644
+--- a/arch/x86/events/amd/core.c
++++ b/arch/x86/events/amd/core.c
+@@ -5,12 +5,14 @@
+ #include <linux/init.h>
+ #include <linux/slab.h>
+ #include <linux/delay.h>
++#include <linux/jiffies.h>
+ #include <asm/apicdef.h>
+ #include <asm/nmi.h>
+
+ #include "../perf_event.h"
+
+-static DEFINE_PER_CPU(unsigned int, perf_nmi_counter);
++static DEFINE_PER_CPU(unsigned long, perf_nmi_tstamp);
++static unsigned long perf_nmi_window;
+
+ static __initconst const u64 amd_hw_cache_event_ids
+ [PERF_COUNT_HW_CACHE_MAX]
+@@ -641,11 +643,12 @@ static void amd_pmu_disable_event(struct perf_event *event)
+ * handler when multiple PMCs are active or PMC overflow while handling some
+ * other source of an NMI.
+ *
+- * Attempt to mitigate this by using the number of active PMCs to determine
+- * whether to return NMI_HANDLED if the perf NMI handler did not handle/reset
+- * any PMCs. The per-CPU perf_nmi_counter variable is set to a minimum of the
+- * number of active PMCs or 2. The value of 2 is used in case an NMI does not
+- * arrive at the LAPIC in time to be collapsed into an already pending NMI.
++ * Attempt to mitigate this by creating an NMI window during which un-handled
++ * NMIs will be claimed. The window is not extended past the point where
++ * latent NMIs could still plausibly arrive. The
++ * per-CPU perf_nmi_tstamp will be set to the window end time whenever perf has
++ * handled a counter. When an un-handled NMI is received, it will be claimed
++ * only if arriving within that window.
+ */
+ static int amd_pmu_handle_irq(struct pt_regs *regs)
+ {
+@@ -663,21 +666,19 @@ static int amd_pmu_handle_irq(struct pt_regs *regs)
+ handled = x86_pmu_handle_irq(regs);
+
+ /*
+- * If a counter was handled, record the number of possible remaining
+- * NMIs that can occur.
++ * If a counter was handled, record a timestamp such that un-handled
++ * NMIs will be claimed if arriving within that window.
+ */
+ if (handled) {
+- this_cpu_write(perf_nmi_counter,
+- min_t(unsigned int, 2, active));
++ this_cpu_write(perf_nmi_tstamp,
++ jiffies + perf_nmi_window);
+
+ return handled;
+ }
+
+- if (!this_cpu_read(perf_nmi_counter))
++ if (time_after(jiffies, this_cpu_read(perf_nmi_tstamp)))
+ return NMI_DONE;
+
+- this_cpu_dec(perf_nmi_counter);
+-
+ return NMI_HANDLED;
+ }
+
+@@ -909,6 +910,9 @@ static int __init amd_core_pmu_init(void)
+ if (!boot_cpu_has(X86_FEATURE_PERFCTR_CORE))
+ return 0;
+
++	/* Avoid calculating the value each time in the NMI handler */
++ perf_nmi_window = msecs_to_jiffies(100);
++
+ switch (boot_cpu_data.x86) {
+ case 0x15:
+ pr_cont("Fam15h ");
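The perf/amd change above swaps a fragile countdown for a time window: handling
a counter arms a per-CPU deadline, and otherwise-unclaimed NMIs are swallowed
only while that deadline is in the future. Roughly (kernel context assumed;
names are illustrative):

static DEFINE_PER_CPU(unsigned long, nmi_deadline);
static unsigned long nmi_window;	/* msecs_to_jiffies(100), set at init */

static int demo_handle_nmi(struct pt_regs *regs)
{
	int handled = x86_pmu_handle_irq(regs);

	if (handled) {
		/* arm the window: latent NMIs may arrive until then */
		this_cpu_write(nmi_deadline, jiffies + nmi_window);
		return handled;
	}
	if (time_after(jiffies, this_cpu_read(nmi_deadline)))
		return NMI_DONE;	/* window closed: not our NMI */
	return NMI_HANDLED;		/* claim the latent NMI */
}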
+diff --git a/arch/x86/include/asm/intel-family.h b/arch/x86/include/asm/intel-family.h
+index 9ae1c0f05fd2..3525014c71da 100644
+--- a/arch/x86/include/asm/intel-family.h
++++ b/arch/x86/include/asm/intel-family.h
+@@ -76,6 +76,9 @@
+ #define INTEL_FAM6_TIGERLAKE_L 0x8C
+ #define INTEL_FAM6_TIGERLAKE 0x8D
+
++#define INTEL_FAM6_COMETLAKE 0xA5
++#define INTEL_FAM6_COMETLAKE_L 0xA6
++
+ /* "Small Core" Processors (Atom) */
+
+ #define INTEL_FAM6_ATOM_BONNELL 0x1C /* Diamondville, Pineview */
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index 45e425c5e6f5..fe887f723708 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -736,8 +736,14 @@ static int get_npt_level(struct kvm_vcpu *vcpu)
+ static void svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
+ {
+ vcpu->arch.efer = efer;
+- if (!npt_enabled && !(efer & EFER_LMA))
+- efer &= ~EFER_LME;
++
++ if (!npt_enabled) {
++ /* Shadow paging assumes NX to be available. */
++ efer |= EFER_NX;
++
++ if (!(efer & EFER_LMA))
++ efer &= ~EFER_LME;
++ }
+
+ to_svm(vcpu)->vmcb->save.efer = efer | EFER_SVME;
+ mark_dirty(to_svm(vcpu)->vmcb, VMCB_CR);
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 1d11bf4bab8b..2a0e281542cc 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -897,17 +897,9 @@ static bool update_transition_efer(struct vcpu_vmx *vmx, int efer_offset)
+ u64 guest_efer = vmx->vcpu.arch.efer;
+ u64 ignore_bits = 0;
+
+- if (!enable_ept) {
+- /*
+- * NX is needed to handle CR0.WP=1, CR4.SMEP=1. Testing
+- * host CPUID is more efficient than testing guest CPUID
+- * or CR4. Host SMEP is anyway a requirement for guest SMEP.
+- */
+- if (boot_cpu_has(X86_FEATURE_SMEP))
+- guest_efer |= EFER_NX;
+- else if (!(guest_efer & EFER_NX))
+- ignore_bits |= EFER_NX;
+- }
++ /* Shadow paging assumes NX to be available. */
++ if (!enable_ept)
++ guest_efer |= EFER_NX;
+
+ /*
+ * LMA and LME handled by hardware; SCE meaningless outside long mode.
+diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c
+index a7189a3b4d70..3304f61538a2 100644
+--- a/arch/x86/platform/efi/efi.c
++++ b/arch/x86/platform/efi/efi.c
+@@ -894,9 +894,6 @@ static void __init kexec_enter_virtual_mode(void)
+
+ if (efi_enabled(EFI_OLD_MEMMAP) && (__supported_pte_mask & _PAGE_NX))
+ runtime_code_page_mkexec();
+-
+- /* clean DUMMY object */
+- efi_delete_dummy_variable();
+ #endif
+ }
+
+diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
+index 750f46ad018a..205b1176084f 100644
+--- a/arch/x86/xen/enlighten.c
++++ b/arch/x86/xen/enlighten.c
+@@ -269,19 +269,41 @@ void xen_reboot(int reason)
+ BUG();
+ }
+
++static int reboot_reason = SHUTDOWN_reboot;
++static bool xen_legacy_crash;
+ void xen_emergency_restart(void)
+ {
+- xen_reboot(SHUTDOWN_reboot);
++ xen_reboot(reboot_reason);
+ }
+
+ static int
+ xen_panic_event(struct notifier_block *this, unsigned long event, void *ptr)
+ {
+- if (!kexec_crash_loaded())
+- xen_reboot(SHUTDOWN_crash);
++ if (!kexec_crash_loaded()) {
++ if (xen_legacy_crash)
++ xen_reboot(SHUTDOWN_crash);
++
++ reboot_reason = SHUTDOWN_crash;
++
++ /*
++ * If panic_timeout==0 then we are supposed to wait forever.
++ * However, to preserve original dom0 behavior we have to drop
++	 * into the hypervisor. (domU behavior is controlled by its
++ * config file)
++ */
++ if (panic_timeout == 0)
++ panic_timeout = -1;
++ }
+ return NOTIFY_DONE;
+ }
+
++static int __init parse_xen_legacy_crash(char *arg)
++{
++ xen_legacy_crash = true;
++ return 0;
++}
++early_param("xen_legacy_crash", parse_xen_legacy_crash);
++
+ static struct notifier_block xen_panic_block = {
+ .notifier_call = xen_panic_event,
+ .priority = INT_MIN
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 0b727f7432f9..9650777d0aaf 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -230,8 +230,8 @@ static void nbd_put(struct nbd_device *nbd)
+ if (refcount_dec_and_mutex_lock(&nbd->refs,
+ &nbd_index_mutex)) {
+ idr_remove(&nbd_index_idr, nbd->index);
+- mutex_unlock(&nbd_index_mutex);
+ nbd_dev_remove(nbd);
++ mutex_unlock(&nbd_index_mutex);
+ }
+ }
+
+@@ -935,6 +935,25 @@ static blk_status_t nbd_queue_rq(struct blk_mq_hw_ctx *hctx,
+ return ret;
+ }
+
++static struct socket *nbd_get_socket(struct nbd_device *nbd, unsigned long fd,
++ int *err)
++{
++ struct socket *sock;
++
++ *err = 0;
++ sock = sockfd_lookup(fd, err);
++ if (!sock)
++ return NULL;
++
++ if (sock->ops->shutdown == sock_no_shutdown) {
++ dev_err(disk_to_dev(nbd->disk), "Unsupported socket: shutdown callout must be supported.\n");
++ *err = -EINVAL;
++ return NULL;
++ }
++
++ return sock;
++}
++
+ static int nbd_add_socket(struct nbd_device *nbd, unsigned long arg,
+ bool netlink)
+ {
+@@ -944,7 +963,7 @@ static int nbd_add_socket(struct nbd_device *nbd, unsigned long arg,
+ struct nbd_sock *nsock;
+ int err;
+
+- sock = sockfd_lookup(arg, &err);
++ sock = nbd_get_socket(nbd, arg, &err);
+ if (!sock)
+ return err;
+
+@@ -996,7 +1015,7 @@ static int nbd_reconnect_socket(struct nbd_device *nbd, unsigned long arg)
+ int i;
+ int err;
+
+- sock = sockfd_lookup(arg, &err);
++ sock = nbd_get_socket(nbd, arg, &err);
+ if (!sock)
+ return err;
+
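nbd_get_socket() above centralizes the fd-to-socket lookup so that both ioctl
paths reject sockets whose shutdown op is the sock_no_shutdown stub, instead of
tripping over it during a later disconnect. The shape of the check, as a sketch
(kernel context assumed; note the sketch also drops the reference on the error
path, which is this sketch's addition, not the patch's):

static struct socket *demo_get_socket(unsigned long fd, int *err)
{
	struct socket *sock = sockfd_lookup(fd, err);

	if (!sock)
		return NULL;
	if (sock->ops->shutdown == sock_no_shutdown) {
		sockfd_put(sock);	/* release the lookup reference */
		*err = -EINVAL;
		return NULL;
	}
	return sock;
}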
+diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
+index a01f4b5d793c..be9ef4dd756f 100644
+--- a/drivers/dma/imx-sdma.c
++++ b/drivers/dma/imx-sdma.c
+@@ -1707,6 +1707,14 @@ static void sdma_add_scripts(struct sdma_engine *sdma,
+ if (!sdma->script_number)
+ sdma->script_number = SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V1;
+
++ if (sdma->script_number > sizeof(struct sdma_script_start_addrs)
++ / sizeof(s32)) {
++ dev_err(sdma->dev,
++			"SDMA script number %d does not match firmware.\n",
++ sdma->script_number);
++ return;
++ }
++
+ for (i = 0; i < sdma->script_number; i++)
+ if (addr_arr[i] > 0)
+ saddr_arr[i] = addr_arr[i];
+diff --git a/drivers/dma/qcom/bam_dma.c b/drivers/dma/qcom/bam_dma.c
+index 8e90a405939d..ef73f65224b1 100644
+--- a/drivers/dma/qcom/bam_dma.c
++++ b/drivers/dma/qcom/bam_dma.c
+@@ -694,6 +694,25 @@ static int bam_dma_terminate_all(struct dma_chan *chan)
+
+ /* remove all transactions, including active transaction */
+ spin_lock_irqsave(&bchan->vc.lock, flag);
++ /*
++ * If we have transactions queued, then some might be committed to the
++ * hardware in the desc fifo. The only way to reset the desc fifo is
++ * to do a hardware reset (either by pipe or the entire block).
++ * bam_chan_init_hw() will trigger a pipe reset, and also reinit the
++ * pipe. If the pipe is left disabled (default state after pipe reset)
++ * and is accessed by a connected hardware engine, a fatal error in
++ * the BAM will occur. There is a small window where this could happen
++ * with bam_chan_init_hw(), but it is assumed that the caller has
++ * stopped activity on any attached hardware engine. Make sure to do
++ * this first so that the BAM hardware doesn't cause memory corruption
++ * by accessing freed resources.
++ */
++ if (!list_empty(&bchan->desc_list)) {
++ async_desc = list_first_entry(&bchan->desc_list,
++ struct bam_async_desc, desc_node);
++ bam_chan_init_hw(bchan, async_desc->dir);
++ }
++
+ list_for_each_entry_safe(async_desc, tmp,
+ &bchan->desc_list, desc_node) {
+ list_add(&async_desc->vd.node, &bchan->vc.desc_issued);
+diff --git a/drivers/dma/tegra210-adma.c b/drivers/dma/tegra210-adma.c
+index b33cf6e8ab8e..d13fe1030a3e 100644
+--- a/drivers/dma/tegra210-adma.c
++++ b/drivers/dma/tegra210-adma.c
+@@ -40,6 +40,7 @@
+ #define ADMA_CH_CONFIG_MAX_BURST_SIZE 16
+ #define ADMA_CH_CONFIG_WEIGHT_FOR_WRR(val) ((val) & 0xf)
+ #define ADMA_CH_CONFIG_MAX_BUFS 8
++#define TEGRA186_ADMA_CH_CONFIG_OUTSTANDING_REQS(reqs) (reqs << 4)
+
+ #define ADMA_CH_FIFO_CTRL 0x2c
+ #define TEGRA210_ADMA_CH_FIFO_CTRL_OFLWTHRES(val) (((val) & 0xf) << 24)
+@@ -85,6 +86,7 @@ struct tegra_adma;
+ * @ch_req_tx_shift: Register offset for AHUB transmit channel select.
+ * @ch_req_rx_shift: Register offset for AHUB receive channel select.
+ * @ch_base_offset: Register offset of DMA channel registers.
++ * @has_outstanding_reqs: true if the DMA channel can have outstanding requests.
+ * @ch_fifo_ctrl: Default value for channel FIFO CTRL register.
+ * @ch_req_mask: Mask for Tx or Rx channel select.
+ * @ch_req_max: Maximum number of Tx or Rx channels available.
+@@ -103,6 +105,7 @@ struct tegra_adma_chip_data {
+ unsigned int ch_req_max;
+ unsigned int ch_reg_size;
+ unsigned int nr_channels;
++ bool has_outstanding_reqs;
+ };
+
+ /*
+@@ -602,6 +605,8 @@ static int tegra_adma_set_xfer_params(struct tegra_adma_chan *tdc,
+ ADMA_CH_CTRL_FLOWCTRL_EN;
+ ch_regs->config |= cdata->adma_get_burst_config(burst_size);
+ ch_regs->config |= ADMA_CH_CONFIG_WEIGHT_FOR_WRR(1);
++ if (cdata->has_outstanding_reqs)
++ ch_regs->config |= TEGRA186_ADMA_CH_CONFIG_OUTSTANDING_REQS(8);
+ ch_regs->fifo_ctrl = cdata->ch_fifo_ctrl;
+ ch_regs->tc = desc->period_len & ADMA_CH_TC_COUNT_MASK;
+
+@@ -786,6 +791,7 @@ static const struct tegra_adma_chip_data tegra210_chip_data = {
+ .ch_req_tx_shift = 28,
+ .ch_req_rx_shift = 24,
+ .ch_base_offset = 0,
++ .has_outstanding_reqs = false,
+ .ch_fifo_ctrl = TEGRA210_FIFO_CTRL_DEFAULT,
+ .ch_req_mask = 0xf,
+ .ch_req_max = 10,
+@@ -800,6 +806,7 @@ static const struct tegra_adma_chip_data tegra186_chip_data = {
+ .ch_req_tx_shift = 27,
+ .ch_req_rx_shift = 22,
+ .ch_base_offset = 0x10000,
++ .has_outstanding_reqs = true,
+ .ch_fifo_ctrl = TEGRA186_FIFO_CTRL_DEFAULT,
+ .ch_req_mask = 0x1f,
+ .ch_req_max = 20,
+diff --git a/drivers/dma/ti/cppi41.c b/drivers/dma/ti/cppi41.c
+index 2f946f55076c..8c2f7ebe998c 100644
+--- a/drivers/dma/ti/cppi41.c
++++ b/drivers/dma/ti/cppi41.c
+@@ -586,9 +586,22 @@ static struct dma_async_tx_descriptor *cppi41_dma_prep_slave_sg(
+ enum dma_transfer_direction dir, unsigned long tx_flags, void *context)
+ {
+ struct cppi41_channel *c = to_cpp41_chan(chan);
++ struct dma_async_tx_descriptor *txd = NULL;
++ struct cppi41_dd *cdd = c->cdd;
+ struct cppi41_desc *d;
+ struct scatterlist *sg;
+ unsigned int i;
++ int error;
++
++ error = pm_runtime_get(cdd->ddev.dev);
++ if (error < 0) {
++ pm_runtime_put_noidle(cdd->ddev.dev);
++
++ return NULL;
++ }
++
++ if (cdd->is_suspended)
++ goto err_out_not_ready;
+
+ d = c->desc;
+ for_each_sg(sgl, sg, sg_len, i) {
+@@ -611,7 +624,13 @@ static struct dma_async_tx_descriptor *cppi41_dma_prep_slave_sg(
+ d++;
+ }
+
+- return &c->txd;
++ txd = &c->txd;
++
++err_out_not_ready:
++ pm_runtime_mark_last_busy(cdd->ddev.dev);
++ pm_runtime_put_autosuspend(cdd->ddev.dev);
++
++ return txd;
+ }
+
+ static void cppi41_compute_td_desc(struct cppi41_desc *d)
+diff --git a/drivers/firmware/efi/cper.c b/drivers/firmware/efi/cper.c
+index addf0749dd8b..b1af0de2e100 100644
+--- a/drivers/firmware/efi/cper.c
++++ b/drivers/firmware/efi/cper.c
+@@ -381,7 +381,7 @@ static void cper_print_pcie(const char *pfx, const struct cper_sec_pcie *pcie,
+ printk("%s""vendor_id: 0x%04x, device_id: 0x%04x\n", pfx,
+ pcie->device_id.vendor_id, pcie->device_id.device_id);
+ p = pcie->device_id.class_code;
+- printk("%s""class_code: %02x%02x%02x\n", pfx, p[0], p[1], p[2]);
++ printk("%s""class_code: %02x%02x%02x\n", pfx, p[2], p[1], p[0]);
+ }
+ if (pcie->validation_bits & CPER_PCIE_VALID_SERIAL_NUMBER)
+ printk("%s""serial number: 0x%04x, 0x%04x\n", pfx,
+diff --git a/drivers/gpio/gpio-max77620.c b/drivers/gpio/gpio-max77620.c
+index b7d89e30131e..06e8caaafa81 100644
+--- a/drivers/gpio/gpio-max77620.c
++++ b/drivers/gpio/gpio-max77620.c
+@@ -192,13 +192,13 @@ static int max77620_gpio_set_debounce(struct max77620_gpio *mgpio,
+ case 0:
+ val = MAX77620_CNFG_GPIO_DBNC_None;
+ break;
+- case 1 ... 8:
++ case 1000 ... 8000:
+ val = MAX77620_CNFG_GPIO_DBNC_8ms;
+ break;
+- case 9 ... 16:
++ case 9000 ... 16000:
+ val = MAX77620_CNFG_GPIO_DBNC_16ms;
+ break;
+- case 17 ... 32:
++ case 17000 ... 32000:
+ val = MAX77620_CNFG_GPIO_DBNC_32ms;
+ break;
+ default:
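The new case ranges reflect the unit the gpio framework actually hands the
driver: debounce times arrive in microseconds, while the MAX77620 register
encodes coarse 8/16/32 ms buckets. Condensed, the mapping is (kernel context
assumed; the helper name is illustrative):

static int demo_debounce_us_to_reg(unsigned int debounce_us, u8 *val)
{
	switch (debounce_us) {
	case 0:
		*val = MAX77620_CNFG_GPIO_DBNC_None;
		return 0;
	case 1000 ... 8000:		/* 1-8 ms */
		*val = MAX77620_CNFG_GPIO_DBNC_8ms;
		return 0;
	case 9000 ... 16000:		/* 9-16 ms */
		*val = MAX77620_CNFG_GPIO_DBNC_16ms;
		return 0;
	case 17000 ... 32000:		/* 17-32 ms */
		*val = MAX77620_CNFG_GPIO_DBNC_32ms;
		return 0;
	default:
		return -EINVAL;		/* outside the hardware's range */
	}
}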
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
+index 7bcf86c61999..61e38e43ad1d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
+@@ -270,7 +270,7 @@ int amdgpu_bo_list_ioctl(struct drm_device *dev, void *data,
+
+ r = amdgpu_bo_create_list_entry_array(&args->in, &info);
+ if (r)
+- goto error_free;
++ return r;
+
+ switch (args->in.operation) {
+ case AMDGPU_BO_LIST_OP_CREATE:
+@@ -283,8 +283,7 @@ int amdgpu_bo_list_ioctl(struct drm_device *dev, void *data,
+ r = idr_alloc(&fpriv->bo_list_handles, list, 1, 0, GFP_KERNEL);
+ mutex_unlock(&fpriv->bo_list_lock);
+ if (r < 0) {
+- amdgpu_bo_list_put(list);
+- return r;
++ goto error_put_list;
+ }
+
+ handle = r;
+@@ -306,9 +305,8 @@ int amdgpu_bo_list_ioctl(struct drm_device *dev, void *data,
+ mutex_unlock(&fpriv->bo_list_lock);
+
+ if (IS_ERR(old)) {
+- amdgpu_bo_list_put(list);
+ r = PTR_ERR(old);
+- goto error_free;
++ goto error_put_list;
+ }
+
+ amdgpu_bo_list_put(old);
+@@ -325,8 +323,10 @@ int amdgpu_bo_list_ioctl(struct drm_device *dev, void *data,
+
+ return 0;
+
++error_put_list:
++ amdgpu_bo_list_put(list);
++
+ error_free:
+- if (info)
+- kvfree(info);
++ kvfree(info);
+ return r;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index f41287f9000d..8cd6a6f94542 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -67,7 +67,7 @@ static const struct soc15_reg_golden golden_settings_gc_10_1[] =
+ {
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmCB_HW_CONTROL_4, 0xffffffff, 0x00400014),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_CPF_CLK_CTRL, 0xfcff8fff, 0xf8000100),
+- SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_SPI_CLK_CTRL, 0xc0000000, 0xc0000100),
++ SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_SPI_CLK_CTRL, 0xcd000000, 0x0d000100),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_SQ_CLK_CTRL, 0x60000ff0, 0x60000100),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_SQG_CLK_CTRL, 0x40000000, 0x40000100),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_VGT_CLK_CTRL, 0xffff8fff, 0xffff8100),
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v2_0.c b/drivers/gpu/drm/amd/amdgpu/gfxhub_v2_0.c
+index d605b4963f8a..141727ce7e76 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfxhub_v2_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v2_0.c
+@@ -151,6 +151,15 @@ static void gfxhub_v2_0_init_cache_regs(struct amdgpu_device *adev)
+ WREG32_SOC15(GC, 0, mmGCVM_L2_CNTL2, tmp);
+
+ tmp = mmGCVM_L2_CNTL3_DEFAULT;
++ if (adev->gmc.translate_further) {
++ tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL3, BANK_SELECT, 12);
++ tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL3,
++ L2_CACHE_BIGK_FRAGMENT_SIZE, 9);
++ } else {
++ tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL3, BANK_SELECT, 9);
++ tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL3,
++ L2_CACHE_BIGK_FRAGMENT_SIZE, 6);
++ }
+ WREG32_SOC15(GC, 0, mmGCVM_L2_CNTL3, tmp);
+
+ tmp = mmGCVM_L2_CNTL4_DEFAULT;
+diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v2_0.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v2_0.c
+index 0f9549f19ade..9e5c3a1909c7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v2_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v2_0.c
+@@ -137,6 +137,15 @@ static void mmhub_v2_0_init_cache_regs(struct amdgpu_device *adev)
+ WREG32_SOC15(MMHUB, 0, mmMMVM_L2_CNTL2, tmp);
+
+ tmp = mmMMVM_L2_CNTL3_DEFAULT;
++ if (adev->gmc.translate_further) {
++ tmp = REG_SET_FIELD(tmp, MMVM_L2_CNTL3, BANK_SELECT, 12);
++ tmp = REG_SET_FIELD(tmp, MMVM_L2_CNTL3,
++ L2_CACHE_BIGK_FRAGMENT_SIZE, 9);
++ } else {
++ tmp = REG_SET_FIELD(tmp, MMVM_L2_CNTL3, BANK_SELECT, 9);
++ tmp = REG_SET_FIELD(tmp, MMVM_L2_CNTL3,
++ L2_CACHE_BIGK_FRAGMENT_SIZE, 6);
++ }
+ WREG32_SOC15(MMHUB, 0, mmMMVM_L2_CNTL3, tmp);
+
+ tmp = mmMMVM_L2_CNTL4_DEFAULT;
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+index 4428018672d3..4f14ef813dda 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+@@ -159,6 +159,7 @@ static const struct soc15_reg_golden golden_settings_sdma0_4_2[] =
+ SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC7_RB_RPTR_ADDR_LO, 0xfffffffd, 0x00000001),
+ SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC7_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+ SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003c0),
++ SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_WATERMK, 0xfc000000, 0x00000000)
+ };
+
+ static const struct soc15_reg_golden golden_settings_sdma1_4_2[] = {
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
+index 3be8eb21fd6e..64be81eea9b4 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
+@@ -5097,9 +5097,7 @@ static void vega10_odn_update_soc_table(struct pp_hwmgr *hwmgr,
+
+ if (type == PP_OD_EDIT_SCLK_VDDC_TABLE) {
+ podn_vdd_dep = &data->odn_dpm_table.vdd_dep_on_sclk;
+- for (i = 0; i < podn_vdd_dep->count - 1; i++)
+- od_vddc_lookup_table->entries[i].us_vdd = podn_vdd_dep->entries[i].vddc;
+- if (od_vddc_lookup_table->entries[i].us_vdd < podn_vdd_dep->entries[i].vddc)
++ for (i = 0; i < podn_vdd_dep->count; i++)
+ od_vddc_lookup_table->entries[i].us_vdd = podn_vdd_dep->entries[i].vddc;
+ } else if (type == PP_OD_EDIT_MCLK_VDDC_TABLE) {
+ podn_vdd_dep = &data->odn_dpm_table.vdd_dep_on_mclk;
+diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
+index 9b61fae5aef7..dae45b6a35b7 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.c
++++ b/drivers/gpu/drm/i915/display/intel_display.c
+@@ -9186,7 +9186,6 @@ static bool wrpll_uses_pch_ssc(struct drm_i915_private *dev_priv,
+ static void lpt_init_pch_refclk(struct drm_i915_private *dev_priv)
+ {
+ struct intel_encoder *encoder;
+- bool pch_ssc_in_use = false;
+ bool has_fdi = false;
+
+ for_each_intel_encoder(&dev_priv->drm, encoder) {
+@@ -9214,22 +9213,24 @@ static void lpt_init_pch_refclk(struct drm_i915_private *dev_priv)
+ * clock hierarchy. That would also allow us to do
+ * clock bending finally.
+ */
++ dev_priv->pch_ssc_use = 0;
++
+ if (spll_uses_pch_ssc(dev_priv)) {
+ DRM_DEBUG_KMS("SPLL using PCH SSC\n");
+- pch_ssc_in_use = true;
++ dev_priv->pch_ssc_use |= BIT(DPLL_ID_SPLL);
+ }
+
+ if (wrpll_uses_pch_ssc(dev_priv, DPLL_ID_WRPLL1)) {
+ DRM_DEBUG_KMS("WRPLL1 using PCH SSC\n");
+- pch_ssc_in_use = true;
++ dev_priv->pch_ssc_use |= BIT(DPLL_ID_WRPLL1);
+ }
+
+ if (wrpll_uses_pch_ssc(dev_priv, DPLL_ID_WRPLL2)) {
+ DRM_DEBUG_KMS("WRPLL2 using PCH SSC\n");
+- pch_ssc_in_use = true;
++ dev_priv->pch_ssc_use |= BIT(DPLL_ID_WRPLL2);
+ }
+
+- if (pch_ssc_in_use)
++ if (dev_priv->pch_ssc_use)
+ return;
+
+ if (has_fdi) {
+diff --git a/drivers/gpu/drm/i915/display/intel_dpll_mgr.c b/drivers/gpu/drm/i915/display/intel_dpll_mgr.c
+index 2d4e7b9a7b9d..f199a6769962 100644
+--- a/drivers/gpu/drm/i915/display/intel_dpll_mgr.c
++++ b/drivers/gpu/drm/i915/display/intel_dpll_mgr.c
+@@ -498,16 +498,31 @@ static void hsw_ddi_wrpll_disable(struct drm_i915_private *dev_priv,
+ val = I915_READ(WRPLL_CTL(id));
+ I915_WRITE(WRPLL_CTL(id), val & ~WRPLL_PLL_ENABLE);
+ POSTING_READ(WRPLL_CTL(id));
++
++ /*
++ * Try to set up the PCH reference clock once all DPLLs
++ * that depend on it have been shut down.
++ */
++ if (dev_priv->pch_ssc_use & BIT(id))
++ intel_init_pch_refclk(dev_priv);
+ }
+
+ static void hsw_ddi_spll_disable(struct drm_i915_private *dev_priv,
+ struct intel_shared_dpll *pll)
+ {
++ enum intel_dpll_id id = pll->info->id;
+ u32 val;
+
+ val = I915_READ(SPLL_CTL);
+ I915_WRITE(SPLL_CTL, val & ~SPLL_PLL_ENABLE);
+ POSTING_READ(SPLL_CTL);
++
++ /*
++ * Try to set up the PCH reference clock once all DPLLs
++ * that depend on it have been shut down.
++ */
++ if (dev_priv->pch_ssc_use & BIT(id))
++ intel_init_pch_refclk(dev_priv);
+ }
+
+ static bool hsw_ddi_wrpll_get_hw_state(struct drm_i915_private *dev_priv,
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index 94b91a952699..edb88406cb75 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -1881,6 +1881,8 @@ struct drm_i915_private {
+ struct work_struct idle_work;
+ } gem;
+
++ u8 pch_ssc_use;
++
+ /* For i945gm vblank irq vs. C3 workaround */
+ struct {
+ struct work_struct work;
+diff --git a/drivers/hid/hid-axff.c b/drivers/hid/hid-axff.c
+index 6654c1550e2e..fbe4e16ab029 100644
+--- a/drivers/hid/hid-axff.c
++++ b/drivers/hid/hid-axff.c
+@@ -63,13 +63,20 @@ static int axff_init(struct hid_device *hid)
+ {
+ struct axff_device *axff;
+ struct hid_report *report;
+- struct hid_input *hidinput = list_first_entry(&hid->inputs, struct hid_input, list);
++ struct hid_input *hidinput;
+ struct list_head *report_list =&hid->report_enum[HID_OUTPUT_REPORT].report_list;
+- struct input_dev *dev = hidinput->input;
++ struct input_dev *dev;
+ int field_count = 0;
+ int i, j;
+ int error;
+
++ if (list_empty(&hid->inputs)) {
++ hid_err(hid, "no inputs found\n");
++ return -ENODEV;
++ }
++ hidinput = list_first_entry(&hid->inputs, struct hid_input, list);
++ dev = hidinput->input;
++
+ if (list_empty(report_list)) {
+ hid_err(hid, "no output reports found\n");
+ return -ENODEV;
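The guard added here recurs in nearly every force-feedback fix below:
list_first_entry()/list_entry() on hid->inputs is only safe once list_empty()
has been ruled out, because a device can bind the driver without ever creating
an input. The recurring shape, condensed (kernel context assumed):

static int demo_ff_init(struct hid_device *hid)
{
	struct hid_input *hidinput;
	struct input_dev *dev;

	if (list_empty(&hid->inputs)) {
		hid_err(hid, "no inputs found\n");
		return -ENODEV;		/* nothing to attach FF to */
	}
	hidinput = list_first_entry(&hid->inputs, struct hid_input, list);
	dev = hidinput->input;
	/* ... validate output reports and register FF against dev ... */
	return 0;
}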
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 210b81a56e1a..3af76624e4aa 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -1139,6 +1139,7 @@ int hid_open_report(struct hid_device *device)
+ __u8 *start;
+ __u8 *buf;
+ __u8 *end;
++ __u8 *next;
+ int ret;
+ static int (*dispatch_type[])(struct hid_parser *parser,
+ struct hid_item *item) = {
+@@ -1192,7 +1193,8 @@ int hid_open_report(struct hid_device *device)
+ device->collection_size = HID_DEFAULT_NUM_COLLECTIONS;
+
+ ret = -EINVAL;
+- while ((start = fetch_item(start, end, &item)) != NULL) {
++ while ((next = fetch_item(start, end, &item)) != NULL) {
++ start = next;
+
+ if (item.format != HID_ITEM_FORMAT_SHORT) {
+ hid_err(device, "unexpected long global item\n");
+@@ -1230,7 +1232,8 @@ int hid_open_report(struct hid_device *device)
+ }
+ }
+
+- hid_err(device, "item fetching failed at offset %d\n", (int)(end - start));
++ hid_err(device, "item fetching failed at offset %u/%u\n",
++ size - (unsigned int)(end - start), size);
+ err:
+ kfree(parser->collection_stack);
+ alloc_err:
+diff --git a/drivers/hid/hid-dr.c b/drivers/hid/hid-dr.c
+index 17e17f9a597b..947f19f8685f 100644
+--- a/drivers/hid/hid-dr.c
++++ b/drivers/hid/hid-dr.c
+@@ -75,13 +75,19 @@ static int drff_init(struct hid_device *hid)
+ {
+ struct drff_device *drff;
+ struct hid_report *report;
+- struct hid_input *hidinput = list_first_entry(&hid->inputs,
+- struct hid_input, list);
++ struct hid_input *hidinput;
+ struct list_head *report_list =
+ &hid->report_enum[HID_OUTPUT_REPORT].report_list;
+- struct input_dev *dev = hidinput->input;
++ struct input_dev *dev;
+ int error;
+
++ if (list_empty(&hid->inputs)) {
++ hid_err(hid, "no inputs found\n");
++ return -ENODEV;
++ }
++ hidinput = list_first_entry(&hid->inputs, struct hid_input, list);
++ dev = hidinput->input;
++
+ if (list_empty(report_list)) {
+ hid_err(hid, "no output reports found\n");
+ return -ENODEV;
+diff --git a/drivers/hid/hid-emsff.c b/drivers/hid/hid-emsff.c
+index 7cd5651872d3..c34f2e5a049f 100644
+--- a/drivers/hid/hid-emsff.c
++++ b/drivers/hid/hid-emsff.c
+@@ -47,13 +47,19 @@ static int emsff_init(struct hid_device *hid)
+ {
+ struct emsff_device *emsff;
+ struct hid_report *report;
+- struct hid_input *hidinput = list_first_entry(&hid->inputs,
+- struct hid_input, list);
++ struct hid_input *hidinput;
+ struct list_head *report_list =
+ &hid->report_enum[HID_OUTPUT_REPORT].report_list;
+- struct input_dev *dev = hidinput->input;
++ struct input_dev *dev;
+ int error;
+
++ if (list_empty(&hid->inputs)) {
++ hid_err(hid, "no inputs found\n");
++ return -ENODEV;
++ }
++ hidinput = list_first_entry(&hid->inputs, struct hid_input, list);
++ dev = hidinput->input;
++
+ if (list_empty(report_list)) {
+ hid_err(hid, "no output reports found\n");
+ return -ENODEV;
+diff --git a/drivers/hid/hid-gaff.c b/drivers/hid/hid-gaff.c
+index 0f95c96b70f8..ecbd3995a4eb 100644
+--- a/drivers/hid/hid-gaff.c
++++ b/drivers/hid/hid-gaff.c
+@@ -64,14 +64,20 @@ static int gaff_init(struct hid_device *hid)
+ {
+ struct gaff_device *gaff;
+ struct hid_report *report;
+- struct hid_input *hidinput = list_entry(hid->inputs.next,
+- struct hid_input, list);
++ struct hid_input *hidinput;
+ struct list_head *report_list =
+ &hid->report_enum[HID_OUTPUT_REPORT].report_list;
+ struct list_head *report_ptr = report_list;
+- struct input_dev *dev = hidinput->input;
++ struct input_dev *dev;
+ int error;
+
++ if (list_empty(&hid->inputs)) {
++ hid_err(hid, "no inputs found\n");
++ return -ENODEV;
++ }
++ hidinput = list_entry(hid->inputs.next, struct hid_input, list);
++ dev = hidinput->input;
++
+ if (list_empty(report_list)) {
+ hid_err(hid, "no output reports found\n");
+ return -ENODEV;
+diff --git a/drivers/hid/hid-holtekff.c b/drivers/hid/hid-holtekff.c
+index 10a720558830..8619b80c834c 100644
+--- a/drivers/hid/hid-holtekff.c
++++ b/drivers/hid/hid-holtekff.c
+@@ -124,13 +124,19 @@ static int holtekff_init(struct hid_device *hid)
+ {
+ struct holtekff_device *holtekff;
+ struct hid_report *report;
+- struct hid_input *hidinput = list_entry(hid->inputs.next,
+- struct hid_input, list);
++ struct hid_input *hidinput;
+ struct list_head *report_list =
+ &hid->report_enum[HID_OUTPUT_REPORT].report_list;
+- struct input_dev *dev = hidinput->input;
++ struct input_dev *dev;
+ int error;
+
++ if (list_empty(&hid->inputs)) {
++ hid_err(hid, "no inputs found\n");
++ return -ENODEV;
++ }
++ hidinput = list_entry(hid->inputs.next, struct hid_input, list);
++ dev = hidinput->input;
++
+ if (list_empty(report_list)) {
+ hid_err(hid, "no output report found\n");
+ return -ENODEV;
+diff --git a/drivers/hid/hid-hyperv.c b/drivers/hid/hid-hyperv.c
+index 7795831d37c2..f36316320075 100644
+--- a/drivers/hid/hid-hyperv.c
++++ b/drivers/hid/hid-hyperv.c
+@@ -314,60 +314,24 @@ static void mousevsc_on_receive(struct hv_device *device,
+
+ static void mousevsc_on_channel_callback(void *context)
+ {
+- const int packet_size = 0x100;
+- int ret;
+ struct hv_device *device = context;
+- u32 bytes_recvd;
+- u64 req_id;
+ struct vmpacket_descriptor *desc;
+- unsigned char *buffer;
+- int bufferlen = packet_size;
+-
+- buffer = kmalloc(bufferlen, GFP_ATOMIC);
+- if (!buffer)
+- return;
+-
+- do {
+- ret = vmbus_recvpacket_raw(device->channel, buffer,
+- bufferlen, &bytes_recvd, &req_id);
+-
+- switch (ret) {
+- case 0:
+- if (bytes_recvd <= 0) {
+- kfree(buffer);
+- return;
+- }
+- desc = (struct vmpacket_descriptor *)buffer;
+-
+- switch (desc->type) {
+- case VM_PKT_COMP:
+- break;
+-
+- case VM_PKT_DATA_INBAND:
+- mousevsc_on_receive(device, desc);
+- break;
+-
+- default:
+- pr_err("unhandled packet type %d, tid %llx len %d\n",
+- desc->type, req_id, bytes_recvd);
+- break;
+- }
+
++ foreach_vmbus_pkt(desc, device->channel) {
++ switch (desc->type) {
++ case VM_PKT_COMP:
+ break;
+
+- case -ENOBUFS:
+- kfree(buffer);
+- /* Handle large packet */
+- bufferlen = bytes_recvd;
+- buffer = kmalloc(bytes_recvd, GFP_ATOMIC);
+-
+- if (!buffer)
+- return;
++ case VM_PKT_DATA_INBAND:
++ mousevsc_on_receive(device, desc);
++ break;
+
++ default:
++ pr_err("Unhandled packet type %d, tid %llx len %d\n",
++ desc->type, desc->trans_id, desc->len8 * 8);
+ break;
+ }
+- } while (1);
+-
++ }
+ }
+
+ static int mousevsc_connect_to_vsp(struct hv_device *device)
+diff --git a/drivers/hid/hid-lg2ff.c b/drivers/hid/hid-lg2ff.c
+index dd1a6c3a7de6..73d07e35f12a 100644
+--- a/drivers/hid/hid-lg2ff.c
++++ b/drivers/hid/hid-lg2ff.c
+@@ -50,11 +50,17 @@ int lg2ff_init(struct hid_device *hid)
+ {
+ struct lg2ff_device *lg2ff;
+ struct hid_report *report;
+- struct hid_input *hidinput = list_entry(hid->inputs.next,
+- struct hid_input, list);
+- struct input_dev *dev = hidinput->input;
++ struct hid_input *hidinput;
++ struct input_dev *dev;
+ int error;
+
++ if (list_empty(&hid->inputs)) {
++ hid_err(hid, "no inputs found\n");
++ return -ENODEV;
++ }
++ hidinput = list_entry(hid->inputs.next, struct hid_input, list);
++ dev = hidinput->input;
++
+ /* Check that the report looks ok */
+ report = hid_validate_values(hid, HID_OUTPUT_REPORT, 0, 0, 7);
+ if (!report)
+diff --git a/drivers/hid/hid-lg3ff.c b/drivers/hid/hid-lg3ff.c
+index 9ecb6fd06203..b7e1949f3cf7 100644
+--- a/drivers/hid/hid-lg3ff.c
++++ b/drivers/hid/hid-lg3ff.c
+@@ -117,12 +117,19 @@ static const signed short ff3_joystick_ac[] = {
+
+ int lg3ff_init(struct hid_device *hid)
+ {
+- struct hid_input *hidinput = list_entry(hid->inputs.next, struct hid_input, list);
+- struct input_dev *dev = hidinput->input;
++ struct hid_input *hidinput;
++ struct input_dev *dev;
+ const signed short *ff_bits = ff3_joystick_ac;
+ int error;
+ int i;
+
++ if (list_empty(&hid->inputs)) {
++ hid_err(hid, "no inputs found\n");
++ return -ENODEV;
++ }
++ hidinput = list_entry(hid->inputs.next, struct hid_input, list);
++ dev = hidinput->input;
++
+ /* Check that the report looks ok */
+ if (!hid_validate_values(hid, HID_OUTPUT_REPORT, 0, 0, 35))
+ return -ENODEV;
+diff --git a/drivers/hid/hid-lg4ff.c b/drivers/hid/hid-lg4ff.c
+index 03f0220062ca..5e6a0cef2a06 100644
+--- a/drivers/hid/hid-lg4ff.c
++++ b/drivers/hid/hid-lg4ff.c
+@@ -1253,8 +1253,8 @@ static int lg4ff_handle_multimode_wheel(struct hid_device *hid, u16 *real_produc
+
+ int lg4ff_init(struct hid_device *hid)
+ {
+- struct hid_input *hidinput = list_entry(hid->inputs.next, struct hid_input, list);
+- struct input_dev *dev = hidinput->input;
++ struct hid_input *hidinput;
++ struct input_dev *dev;
+ struct list_head *report_list = &hid->report_enum[HID_OUTPUT_REPORT].report_list;
+ struct hid_report *report = list_entry(report_list->next, struct hid_report, list);
+ const struct usb_device_descriptor *udesc = &(hid_to_usb_dev(hid)->descriptor);
+@@ -1266,6 +1266,13 @@ int lg4ff_init(struct hid_device *hid)
+ int mmode_ret, mmode_idx = -1;
+ u16 real_product_id;
+
++ if (list_empty(&hid->inputs)) {
++ hid_err(hid, "no inputs found\n");
++ return -ENODEV;
++ }
++ hidinput = list_entry(hid->inputs.next, struct hid_input, list);
++ dev = hidinput->input;
++
+ /* Check that the report looks ok */
+ if (!hid_validate_values(hid, HID_OUTPUT_REPORT, 0, 0, 7))
+ return -1;
+diff --git a/drivers/hid/hid-lgff.c b/drivers/hid/hid-lgff.c
+index c79a6ec43745..aed4ddc397a9 100644
+--- a/drivers/hid/hid-lgff.c
++++ b/drivers/hid/hid-lgff.c
+@@ -115,12 +115,19 @@ static void hid_lgff_set_autocenter(struct input_dev *dev, u16 magnitude)
+
+ int lgff_init(struct hid_device* hid)
+ {
+- struct hid_input *hidinput = list_entry(hid->inputs.next, struct hid_input, list);
+- struct input_dev *dev = hidinput->input;
++ struct hid_input *hidinput;
++ struct input_dev *dev;
+ const signed short *ff_bits = ff_joystick;
+ int error;
+ int i;
+
++ if (list_empty(&hid->inputs)) {
++ hid_err(hid, "no inputs found\n");
++ return -ENODEV;
++ }
++ hidinput = list_entry(hid->inputs.next, struct hid_input, list);
++ dev = hidinput->input;
++
+ /* Check that the report looks ok */
+ if (!hid_validate_values(hid, HID_OUTPUT_REPORT, 0, 0, 7))
+ return -ENODEV;
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index 0179f7ed77e5..8e91e2f06cb4 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -1669,6 +1669,7 @@ static void hidpp_touchpad_raw_xy_event(struct hidpp_device *hidpp_dev,
+
+ #define HIDPP_FF_EFFECTID_NONE -1
+ #define HIDPP_FF_EFFECTID_AUTOCENTER -2
++#define HIDPP_AUTOCENTER_PARAMS_LENGTH 18
+
+ #define HIDPP_FF_MAX_PARAMS 20
+ #define HIDPP_FF_RESERVED_SLOTS 1
+@@ -2009,7 +2010,7 @@ static int hidpp_ff_erase_effect(struct input_dev *dev, int effect_id)
+ static void hidpp_ff_set_autocenter(struct input_dev *dev, u16 magnitude)
+ {
+ struct hidpp_ff_private_data *data = dev->ff->private;
+- u8 params[18];
++ u8 params[HIDPP_AUTOCENTER_PARAMS_LENGTH];
+
+ dbg_hid("Setting autocenter to %d.\n", magnitude);
+
+@@ -2077,23 +2078,34 @@ static DEVICE_ATTR(range, S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP | S_IROTH, hidpp
+ static void hidpp_ff_destroy(struct ff_device *ff)
+ {
+ struct hidpp_ff_private_data *data = ff->private;
++ struct hid_device *hid = data->hidpp->hid_dev;
+
++ hid_info(hid, "Unloading HID++ force feedback.\n");
++
++ device_remove_file(&hid->dev, &dev_attr_range);
++ destroy_workqueue(data->wq);
+ kfree(data->effect_ids);
+ }
+
+-static int hidpp_ff_init(struct hidpp_device *hidpp, u8 feature_index)
++static int hidpp_ff_init(struct hidpp_device *hidpp,
++ struct hidpp_ff_private_data *data)
+ {
+ struct hid_device *hid = hidpp->hid_dev;
+- struct hid_input *hidinput = list_entry(hid->inputs.next, struct hid_input, list);
+- struct input_dev *dev = hidinput->input;
++ struct hid_input *hidinput;
++ struct input_dev *dev;
+ const struct usb_device_descriptor *udesc = &(hid_to_usb_dev(hid)->descriptor);
+ const u16 bcdDevice = le16_to_cpu(udesc->bcdDevice);
+ struct ff_device *ff;
+- struct hidpp_report response;
+- struct hidpp_ff_private_data *data;
+- int error, j, num_slots;
++ int error, j, num_slots = data->num_effects;
+ u8 version;
+
++ if (list_empty(&hid->inputs)) {
++ hid_err(hid, "no inputs found\n");
++ return -ENODEV;
++ }
++ hidinput = list_entry(hid->inputs.next, struct hid_input, list);
++ dev = hidinput->input;
++
+ if (!dev) {
+ hid_err(hid, "Struct input_dev not set!\n");
+ return -EINVAL;
+@@ -2109,27 +2121,17 @@ static int hidpp_ff_init(struct hidpp_device *hidpp, u8 feature_index)
+ for (j = 0; hidpp_ff_effects_v2[j] >= 0; j++)
+ set_bit(hidpp_ff_effects_v2[j], dev->ffbit);
+
+- /* Read number of slots available in device */
+- error = hidpp_send_fap_command_sync(hidpp, feature_index,
+- HIDPP_FF_GET_INFO, NULL, 0, &response);
+- if (error) {
+- if (error < 0)
+- return error;
+- hid_err(hidpp->hid_dev, "%s: received protocol error 0x%02x\n",
+- __func__, error);
+- return -EPROTO;
+- }
+-
+- num_slots = response.fap.params[0] - HIDPP_FF_RESERVED_SLOTS;
+-
+ error = input_ff_create(dev, num_slots);
+
+ if (error) {
+ hid_err(dev, "Failed to create FF device!\n");
+ return error;
+ }
+-
+- data = kzalloc(sizeof(*data), GFP_KERNEL);
++ /*
++	 * Create a copy of the passed data, so we can transfer memory
++	 * ownership to the FF core
++ */
++ data = kmemdup(data, sizeof(*data), GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+ data->effect_ids = kcalloc(num_slots, sizeof(int), GFP_KERNEL);
+@@ -2145,10 +2147,7 @@ static int hidpp_ff_init(struct hidpp_device *hidpp, u8 feature_index)
+ }
+
+ data->hidpp = hidpp;
+- data->feature_index = feature_index;
+ data->version = version;
+- data->slot_autocenter = 0;
+- data->num_effects = num_slots;
+ for (j = 0; j < num_slots; j++)
+ data->effect_ids[j] = -1;
+
+@@ -2162,68 +2161,20 @@ static int hidpp_ff_init(struct hidpp_device *hidpp, u8 feature_index)
+ ff->set_autocenter = hidpp_ff_set_autocenter;
+ ff->destroy = hidpp_ff_destroy;
+
+-
+- /* reset all forces */
+- error = hidpp_send_fap_command_sync(hidpp, feature_index,
+- HIDPP_FF_RESET_ALL, NULL, 0, &response);
+-
+- /* Read current Range */
+- error = hidpp_send_fap_command_sync(hidpp, feature_index,
+- HIDPP_FF_GET_APERTURE, NULL, 0, &response);
+- if (error)
+- hid_warn(hidpp->hid_dev, "Failed to read range from device!\n");
+- data->range = error ? 900 : get_unaligned_be16(&response.fap.params[0]);
+-
+ /* Create sysfs interface */
+ error = device_create_file(&(hidpp->hid_dev->dev), &dev_attr_range);
+ if (error)
+ hid_warn(hidpp->hid_dev, "Unable to create sysfs interface for \"range\", errno %d!\n", error);
+
+- /* Read the current gain values */
+- error = hidpp_send_fap_command_sync(hidpp, feature_index,
+- HIDPP_FF_GET_GLOBAL_GAINS, NULL, 0, &response);
+- if (error)
+- hid_warn(hidpp->hid_dev, "Failed to read gain values from device!\n");
+- data->gain = error ? 0xffff : get_unaligned_be16(&response.fap.params[0]);
+- /* ignore boost value at response.fap.params[2] */
+-
+ /* init the hardware command queue */
+ atomic_set(&data->workqueue_size, 0);
+
+- /* initialize with zero autocenter to get wheel in usable state */
+- hidpp_ff_set_autocenter(dev, 0);
+-
+ hid_info(hid, "Force feedback support loaded (firmware release %d).\n",
+ version);
+
+ return 0;
+ }
+
+-static int hidpp_ff_deinit(struct hid_device *hid)
+-{
+- struct hid_input *hidinput = list_entry(hid->inputs.next, struct hid_input, list);
+- struct input_dev *dev = hidinput->input;
+- struct hidpp_ff_private_data *data;
+-
+- if (!dev) {
+- hid_err(hid, "Struct input_dev not found!\n");
+- return -EINVAL;
+- }
+-
+- hid_info(hid, "Unloading HID++ force feedback.\n");
+- data = dev->ff->private;
+- if (!data) {
+- hid_err(hid, "Private data not found!\n");
+- return -EINVAL;
+- }
+-
+- destroy_workqueue(data->wq);
+- device_remove_file(&hid->dev, &dev_attr_range);
+-
+- return 0;
+-}
+-
+-
+ /* ************************************************************************** */
+ /* */
+ /* Device Support */
+@@ -2725,24 +2676,93 @@ static int k400_connect(struct hid_device *hdev, bool connected)
+
+ #define HIDPP_PAGE_G920_FORCE_FEEDBACK 0x8123
+
+-static int g920_get_config(struct hidpp_device *hidpp)
++static int g920_ff_set_autocenter(struct hidpp_device *hidpp,
++ struct hidpp_ff_private_data *data)
+ {
++ struct hidpp_report response;
++ u8 params[HIDPP_AUTOCENTER_PARAMS_LENGTH] = {
++ [1] = HIDPP_FF_EFFECT_SPRING | HIDPP_FF_EFFECT_AUTOSTART,
++ };
++ int ret;
++
++ /* initialize with zero autocenter to get wheel in usable state */
++
++ dbg_hid("Setting autocenter to 0.\n");
++ ret = hidpp_send_fap_command_sync(hidpp, data->feature_index,
++ HIDPP_FF_DOWNLOAD_EFFECT,
++ params, ARRAY_SIZE(params),
++ &response);
++ if (ret)
++ hid_warn(hidpp->hid_dev, "Failed to autocenter device!\n");
++ else
++ data->slot_autocenter = response.fap.params[0];
++
++ return ret;
++}
++
++static int g920_get_config(struct hidpp_device *hidpp,
++ struct hidpp_ff_private_data *data)
++{
++ struct hidpp_report response;
+ u8 feature_type;
+- u8 feature_index;
+ int ret;
+
++ memset(data, 0, sizeof(*data));
++
+ /* Find feature and store for later use */
+ ret = hidpp_root_get_feature(hidpp, HIDPP_PAGE_G920_FORCE_FEEDBACK,
+- &feature_index, &feature_type);
++ &data->feature_index, &feature_type);
+ if (ret)
+ return ret;
+
+- ret = hidpp_ff_init(hidpp, feature_index);
++ /* Read number of slots available in device */
++ ret = hidpp_send_fap_command_sync(hidpp, data->feature_index,
++ HIDPP_FF_GET_INFO,
++ NULL, 0,
++ &response);
++ if (ret) {
++ if (ret < 0)
++ return ret;
++ hid_err(hidpp->hid_dev,
++ "%s: received protocol error 0x%02x\n", __func__, ret);
++ return -EPROTO;
++ }
++
++ data->num_effects = response.fap.params[0] - HIDPP_FF_RESERVED_SLOTS;
++
++ /* reset all forces */
++ ret = hidpp_send_fap_command_sync(hidpp, data->feature_index,
++ HIDPP_FF_RESET_ALL,
++ NULL, 0,
++ &response);
+ if (ret)
+- hid_warn(hidpp->hid_dev, "Unable to initialize force feedback support, errno %d\n",
+- ret);
++ hid_warn(hidpp->hid_dev, "Failed to reset all forces!\n");
+
+- return 0;
++ ret = hidpp_send_fap_command_sync(hidpp, data->feature_index,
++ HIDPP_FF_GET_APERTURE,
++ NULL, 0,
++ &response);
++ if (ret) {
++ hid_warn(hidpp->hid_dev,
++ "Failed to read range from device!\n");
++ }
++ data->range = ret ?
++ 900 : get_unaligned_be16(&response.fap.params[0]);
++
++ /* Read the current gain values */
++ ret = hidpp_send_fap_command_sync(hidpp, data->feature_index,
++ HIDPP_FF_GET_GLOBAL_GAINS,
++ NULL, 0,
++ &response);
++ if (ret)
++ hid_warn(hidpp->hid_dev,
++ "Failed to read gain values from device!\n");
++ data->gain = ret ?
++ 0xffff : get_unaligned_be16(&response.fap.params[0]);
++
++ /* ignore boost value at response.fap.params[2] */
++
++ return g920_ff_set_autocenter(hidpp, data);
+ }
+
+ /* -------------------------------------------------------------------------- */
+@@ -3458,34 +3478,45 @@ static int hidpp_get_report_length(struct hid_device *hdev, int id)
+ return report->field[0]->report_count + 1;
+ }
+
+-static bool hidpp_validate_report(struct hid_device *hdev, int id,
+- int expected_length, bool optional)
++static bool hidpp_validate_device(struct hid_device *hdev)
+ {
+- int report_length;
++ struct hidpp_device *hidpp = hid_get_drvdata(hdev);
++ int id, report_length, supported_reports = 0;
+
+- if (id >= HID_MAX_IDS || id < 0) {
+- hid_err(hdev, "invalid HID report id %u\n", id);
+- return false;
++ id = REPORT_ID_HIDPP_SHORT;
++ report_length = hidpp_get_report_length(hdev, id);
++ if (report_length) {
++ if (report_length < HIDPP_REPORT_SHORT_LENGTH)
++ goto bad_device;
++
++ supported_reports++;
+ }
+
++ id = REPORT_ID_HIDPP_LONG;
+ report_length = hidpp_get_report_length(hdev, id);
+- if (!report_length)
+- return optional;
++ if (report_length) {
++ if (report_length < HIDPP_REPORT_LONG_LENGTH)
++ goto bad_device;
+
+- if (report_length < expected_length) {
+- hid_warn(hdev, "not enough values in hidpp report %d\n", id);
+- return false;
++ supported_reports++;
+ }
+
+- return true;
+-}
++ id = REPORT_ID_HIDPP_VERY_LONG;
++ report_length = hidpp_get_report_length(hdev, id);
++ if (report_length) {
++ if (report_length < HIDPP_REPORT_LONG_LENGTH ||
++ report_length > HIDPP_REPORT_VERY_LONG_MAX_LENGTH)
++ goto bad_device;
+
+-static bool hidpp_validate_device(struct hid_device *hdev)
+-{
+- return hidpp_validate_report(hdev, REPORT_ID_HIDPP_SHORT,
+- HIDPP_REPORT_SHORT_LENGTH, false) &&
+- hidpp_validate_report(hdev, REPORT_ID_HIDPP_LONG,
+- HIDPP_REPORT_LONG_LENGTH, true);
++ supported_reports++;
++ hidpp->very_long_report_length = report_length;
++ }
++
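++ /* nonzero (true) when at least one HID++ report type is supported */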
++ return supported_reports;
++
++bad_device:
++ hid_warn(hdev, "not enough values in hidpp report %d\n", id);
++ return false;
+ }
+
+ static bool hidpp_application_equals(struct hid_device *hdev,
+@@ -3505,6 +3536,7 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ int ret;
+ bool connected;
+ unsigned int connect_mask = HID_CONNECT_DEFAULT;
++ struct hidpp_ff_private_data data;
+
+ /* report_fixup needs drvdata to be set before we call hid_parse */
+ hidpp = devm_kzalloc(&hdev->dev, sizeof(*hidpp), GFP_KERNEL);
+@@ -3531,11 +3563,6 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ return hid_hw_start(hdev, HID_CONNECT_DEFAULT);
+ }
+
+- hidpp->very_long_report_length =
+- hidpp_get_report_length(hdev, REPORT_ID_HIDPP_VERY_LONG);
+- if (hidpp->very_long_report_length > HIDPP_REPORT_VERY_LONG_MAX_LENGTH)
+- hidpp->very_long_report_length = HIDPP_REPORT_VERY_LONG_MAX_LENGTH;
+-
+ if (id->group == HID_GROUP_LOGITECH_DJ_DEVICE)
+ hidpp->quirks |= HIDPP_QUIRK_UNIFYING;
+
+@@ -3614,7 +3641,7 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ if (ret)
+ goto hid_hw_init_fail;
+ } else if (connected && (hidpp->quirks & HIDPP_QUIRK_CLASS_G920)) {
+- ret = g920_get_config(hidpp);
++ ret = g920_get_config(hidpp, &data);
+ if (ret)
+ goto hid_hw_init_fail;
+ }
+@@ -3636,6 +3663,14 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ goto hid_hw_start_fail;
+ }
+
++ if (hidpp->quirks & HIDPP_QUIRK_CLASS_G920) {
++ ret = hidpp_ff_init(hidpp, &data);
++ if (ret)
++ hid_warn(hidpp->hid_dev,
++ "Unable to initialize force feedback support, errno %d\n",
++ ret);
++ }
++
+ return ret;
+
+ hid_hw_init_fail:
+@@ -3658,9 +3693,6 @@ static void hidpp_remove(struct hid_device *hdev)
+
+ sysfs_remove_group(&hdev->dev.kobj, &ps_attribute_group);
+
+- if (hidpp->quirks & HIDPP_QUIRK_CLASS_G920)
+- hidpp_ff_deinit(hdev);
+-
+ hid_hw_stop(hdev);
+ cancel_work_sync(&hidpp->work);
+ mutex_destroy(&hidpp->send_mutex);
+diff --git a/drivers/hid/hid-microsoft.c b/drivers/hid/hid-microsoft.c
+index 8b3a922bdad3..572b5789d20f 100644
+--- a/drivers/hid/hid-microsoft.c
++++ b/drivers/hid/hid-microsoft.c
+@@ -328,11 +328,17 @@ static int ms_play_effect(struct input_dev *dev, void *data,
+
+ static int ms_init_ff(struct hid_device *hdev)
+ {
+- struct hid_input *hidinput = list_entry(hdev->inputs.next,
+- struct hid_input, list);
+- struct input_dev *input_dev = hidinput->input;
++ struct hid_input *hidinput;
++ struct input_dev *input_dev;
+ struct ms_data *ms = hid_get_drvdata(hdev);
+
++ if (list_empty(&hdev->inputs)) {
++ hid_err(hdev, "no inputs found\n");
++ return -ENODEV;
++ }
++ hidinput = list_entry(hdev->inputs.next, struct hid_input, list);
++ input_dev = hidinput->input;
++
+ if (!(ms->quirks & MS_QUIRK_FF))
+ return 0;
+
+diff --git a/drivers/hid/hid-sony.c b/drivers/hid/hid-sony.c
+index 73c0f7a95e2d..4c6ed6ef31f1 100644
+--- a/drivers/hid/hid-sony.c
++++ b/drivers/hid/hid-sony.c
+@@ -2254,9 +2254,15 @@ static int sony_play_effect(struct input_dev *dev, void *data,
+
+ static int sony_init_ff(struct sony_sc *sc)
+ {
+- struct hid_input *hidinput = list_entry(sc->hdev->inputs.next,
+- struct hid_input, list);
+- struct input_dev *input_dev = hidinput->input;
++ struct hid_input *hidinput;
++ struct input_dev *input_dev;
++
++ if (list_empty(&sc->hdev->inputs)) {
++ hid_err(sc->hdev, "no inputs found\n");
++ return -ENODEV;
++ }
++ hidinput = list_entry(sc->hdev->inputs.next, struct hid_input, list);
++ input_dev = hidinput->input;
+
+ input_set_capability(input_dev, EV_FF, FF_RUMBLE);
+ return input_ff_create_memless(input_dev, NULL, sony_play_effect);
+diff --git a/drivers/hid/hid-tmff.c b/drivers/hid/hid-tmff.c
+index bdfc5ff3b2c5..90acef304536 100644
+--- a/drivers/hid/hid-tmff.c
++++ b/drivers/hid/hid-tmff.c
+@@ -124,12 +124,18 @@ static int tmff_init(struct hid_device *hid, const signed short *ff_bits)
+ struct tmff_device *tmff;
+ struct hid_report *report;
+ struct list_head *report_list;
+- struct hid_input *hidinput = list_entry(hid->inputs.next,
+- struct hid_input, list);
+- struct input_dev *input_dev = hidinput->input;
++ struct hid_input *hidinput;
++ struct input_dev *input_dev;
+ int error;
+ int i;
+
++ if (list_empty(&hid->inputs)) {
++ hid_err(hid, "no inputs found\n");
++ return -ENODEV;
++ }
++ hidinput = list_entry(hid->inputs.next, struct hid_input, list);
++ input_dev = hidinput->input;
++
+ tmff = kzalloc(sizeof(struct tmff_device), GFP_KERNEL);
+ if (!tmff)
+ return -ENOMEM;
+diff --git a/drivers/hid/hid-zpff.c b/drivers/hid/hid-zpff.c
+index f90959e94028..3abaca045869 100644
+--- a/drivers/hid/hid-zpff.c
++++ b/drivers/hid/hid-zpff.c
+@@ -54,11 +54,17 @@ static int zpff_init(struct hid_device *hid)
+ {
+ struct zpff_device *zpff;
+ struct hid_report *report;
+- struct hid_input *hidinput = list_entry(hid->inputs.next,
+- struct hid_input, list);
+- struct input_dev *dev = hidinput->input;
++ struct hid_input *hidinput;
++ struct input_dev *dev;
+ int i, error;
+
++ if (list_empty(&hid->inputs)) {
++ hid_err(hid, "no inputs found\n");
++ return -ENODEV;
++ }
++ hidinput = list_entry(hid->inputs.next, struct hid_input, list);
++ dev = hidinput->input;
++
+ for (i = 0; i < 4; i++) {
+ report = hid_validate_values(hid, HID_OUTPUT_REPORT, 0, i, 1);
+ if (!report)
+diff --git a/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c b/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
+index 75078c83be1a..d31ea82b84c1 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
++++ b/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
+@@ -322,6 +322,25 @@ static const struct dmi_system_id i2c_hid_dmi_desc_override_table[] = {
+ },
+ .driver_data = (void *)&sipodev_desc
+ },
++ {
++ /*
++ * There are at least 2 Primebook C11B versions, the older
++ * version has a product-name of "Primebook C11B", and a
++ * bios version / release / firmware revision of:
++ * V2.1.2 / 05/03/2018 / 18.2
++ * The new version has "PRIMEBOOK C11B" as product-name and a
++ * bios version / release / firmware revision of:
++ * CFALKSW05_BIOS_V1.1.2 / 11/19/2018 / 19.2
++ * Only the older version needs this quirk; note that the newer
++ * version will not match as it has a different product-name.
++ */
++ .ident = "Trekstor Primebook C11B",
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "TREKSTOR"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Primebook C11B"),
++ },
++ .driver_data = (void *)&sipodev_desc
++ },
+ {
+ .ident = "Direkt-Tek DTLAPY116-2",
+ .matches = {
+diff --git a/drivers/iio/accel/bmc150-accel-core.c b/drivers/iio/accel/bmc150-accel-core.c
+index cf6c0e3a83d3..121b4e89f038 100644
+--- a/drivers/iio/accel/bmc150-accel-core.c
++++ b/drivers/iio/accel/bmc150-accel-core.c
+@@ -117,7 +117,7 @@
+ #define BMC150_ACCEL_SLEEP_1_SEC 0x0F
+
+ #define BMC150_ACCEL_REG_TEMP 0x08
+-#define BMC150_ACCEL_TEMP_CENTER_VAL 24
++#define BMC150_ACCEL_TEMP_CENTER_VAL 23
+
+ #define BMC150_ACCEL_AXIS_TO_REG(axis) (BMC150_ACCEL_REG_XOUT_L + (axis * 2))
+ #define BMC150_AUTO_SUSPEND_DELAY_MS 2000
+diff --git a/drivers/iio/adc/meson_saradc.c b/drivers/iio/adc/meson_saradc.c
+index 7b28d045d271..7b27306330a3 100644
+--- a/drivers/iio/adc/meson_saradc.c
++++ b/drivers/iio/adc/meson_saradc.c
+@@ -1219,6 +1219,11 @@ static int meson_sar_adc_probe(struct platform_device *pdev)
+ if (IS_ERR(base))
+ return PTR_ERR(base);
+
++ priv->regmap = devm_regmap_init_mmio(&pdev->dev, base,
++ priv->param->regmap_config);
++ if (IS_ERR(priv->regmap))
++ return PTR_ERR(priv->regmap);
++
+ irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
+ if (!irq)
+ return -EINVAL;
+@@ -1228,11 +1233,6 @@ static int meson_sar_adc_probe(struct platform_device *pdev)
+ if (ret)
+ return ret;
+
+- priv->regmap = devm_regmap_init_mmio(&pdev->dev, base,
+- priv->param->regmap_config);
+- if (IS_ERR(priv->regmap))
+- return PTR_ERR(priv->regmap);
+-
+ priv->clkin = devm_clk_get(&pdev->dev, "clkin");
+ if (IS_ERR(priv->clkin)) {
+ dev_err(&pdev->dev, "failed to get clkin\n");
+diff --git a/drivers/iio/imu/adis_buffer.c b/drivers/iio/imu/adis_buffer.c
+index 9ac8356d9a95..4998a89d083d 100644
+--- a/drivers/iio/imu/adis_buffer.c
++++ b/drivers/iio/imu/adis_buffer.c
+@@ -35,8 +35,11 @@ static int adis_update_scan_mode_burst(struct iio_dev *indio_dev,
+ return -ENOMEM;
+
+ adis->buffer = kzalloc(burst_length + sizeof(u16), GFP_KERNEL);
+- if (!adis->buffer)
++ if (!adis->buffer) {
++ kfree(adis->xfer);
++ adis->xfer = NULL;
+ return -ENOMEM;
++ }
+
+ tx = adis->buffer + burst_length;
+ tx[0] = ADIS_READ_REG(adis->burst->reg_cmd);
+@@ -78,8 +81,11 @@ int adis_update_scan_mode(struct iio_dev *indio_dev,
+ return -ENOMEM;
+
+ adis->buffer = kcalloc(indio_dev->scan_bytes, 2, GFP_KERNEL);
+- if (!adis->buffer)
++ if (!adis->buffer) {
++ kfree(adis->xfer);
++ adis->xfer = NULL;
+ return -ENOMEM;
++ }
+
+ rx = adis->buffer;
+ tx = rx + scan_count;
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c
+index 66fbcd94642d..4c754a02717b 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c
+@@ -92,9 +92,11 @@ static const struct st_lsm6dsx_ext_dev_settings st_lsm6dsx_ext_dev_table[] = {
+ static void st_lsm6dsx_shub_wait_complete(struct st_lsm6dsx_hw *hw)
+ {
+ struct st_lsm6dsx_sensor *sensor;
++ u16 odr;
+
+ sensor = iio_priv(hw->iio_devs[ST_LSM6DSX_ID_ACC]);
+- msleep((2000U / sensor->odr) + 1);
++ odr = (hw->enable_mask & BIT(ST_LSM6DSX_ID_ACC)) ? sensor->odr : 13;
++ msleep((2000U / odr) + 1);
+ }
+
+ /**
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index da10e6ccb43c..5920c0085d35 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -4399,6 +4399,7 @@ error2:
+ error1:
+ port_modify.set_port_cap_mask = 0;
+ port_modify.clr_port_cap_mask = IB_PORT_CM_SUP;
++ kfree(port);
+ while (--i) {
+ if (!rdma_cap_ib_cm(ib_device, i))
+ continue;
+@@ -4407,6 +4408,7 @@ error1:
+ ib_modify_port(ib_device, port->port_num, 0, &port_modify);
+ ib_unregister_mad_agent(port->mad_agent);
+ cm_remove_port_fs(port);
++ kfree(port);
+ }
+ free:
+ kfree(cm_dev);
+@@ -4460,6 +4462,7 @@ static void cm_remove_one(struct ib_device *ib_device, void *client_data)
+ spin_unlock_irq(&cm.state_lock);
+ ib_unregister_mad_agent(cur_mad_agent);
+ cm_remove_port_fs(port);
++ kfree(port);
+ }
+
+ kfree(cm_dev);
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index a68d0ccf67a4..2e48b59926c1 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -2396,9 +2396,10 @@ static int iw_conn_req_handler(struct iw_cm_id *cm_id,
+ conn_id->cm_id.iw = NULL;
+ cma_exch(conn_id, RDMA_CM_DESTROYING);
+ mutex_unlock(&conn_id->handler_mutex);
++ mutex_unlock(&listen_id->handler_mutex);
+ cma_deref_id(conn_id);
+ rdma_destroy_id(&conn_id->id);
+- goto out;
++ return ret;
+ }
+
+ mutex_unlock(&conn_id->handler_mutex);
+diff --git a/drivers/infiniband/core/nldev.c b/drivers/infiniband/core/nldev.c
+index 020c26976558..f42e856f3072 100644
+--- a/drivers/infiniband/core/nldev.c
++++ b/drivers/infiniband/core/nldev.c
+@@ -1230,7 +1230,7 @@ static int res_get_common_doit(struct sk_buff *skb, struct nlmsghdr *nlh,
+ msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
+ if (!msg) {
+ ret = -ENOMEM;
+- goto err;
++ goto err_get;
+ }
+
+ nlh = nlmsg_put(msg, NETLINK_CB(skb).portid, nlh->nlmsg_seq,
+@@ -1787,10 +1787,6 @@ static int nldev_stat_del_doit(struct sk_buff *skb, struct nlmsghdr *nlh,
+
+ cntn = nla_get_u32(tb[RDMA_NLDEV_ATTR_STAT_COUNTER_ID]);
+ qpn = nla_get_u32(tb[RDMA_NLDEV_ATTR_RES_LQPN]);
+- ret = rdma_counter_unbind_qpn(device, port, qpn, cntn);
+- if (ret)
+- goto err_unbind;
+-
+ if (fill_nldev_handle(msg, device) ||
+ nla_put_u32(msg, RDMA_NLDEV_ATTR_PORT_INDEX, port) ||
+ nla_put_u32(msg, RDMA_NLDEV_ATTR_STAT_COUNTER_ID, cntn) ||
+@@ -1799,13 +1795,15 @@ static int nldev_stat_del_doit(struct sk_buff *skb, struct nlmsghdr *nlh,
+ goto err_fill;
+ }
+
++ ret = rdma_counter_unbind_qpn(device, port, qpn, cntn);
++ if (ret)
++ goto err_fill;
++
+ nlmsg_end(msg, nlh);
+ ib_device_put(device);
+ return rdma_nl_unicast(msg, NETLINK_CB(skb).portid);
+
+ err_fill:
+- rdma_counter_bind_qpn(device, port, qpn, cntn);
+-err_unbind:
+ nlmsg_free(msg);
+ err:
+ ib_device_put(device);
+diff --git a/drivers/infiniband/hw/cxgb4/device.c b/drivers/infiniband/hw/cxgb4/device.c
+index a8b9548bd1a2..599340c1f0b8 100644
+--- a/drivers/infiniband/hw/cxgb4/device.c
++++ b/drivers/infiniband/hw/cxgb4/device.c
+@@ -242,10 +242,13 @@ static void set_ep_sin6_addrs(struct c4iw_ep *ep,
+ }
+ }
+
+-static int dump_qp(struct c4iw_qp *qp, struct c4iw_debugfs_data *qpd)
++static int dump_qp(unsigned long id, struct c4iw_qp *qp,
++ struct c4iw_debugfs_data *qpd)
+ {
+ int space;
+ int cc;
++ if (id != qp->wq.sq.qid)
++ return 0;
+
+ space = qpd->bufsize - qpd->pos - 1;
+ if (space == 0)
+@@ -350,7 +353,7 @@ static int qp_open(struct inode *inode, struct file *file)
+
+ xa_lock_irq(&qpd->devp->qps);
+ xa_for_each(&qpd->devp->qps, index, qp)
+- dump_qp(qp, qpd);
++ dump_qp(index, qp, qpd);
+ xa_unlock_irq(&qpd->devp->qps);
+
+ qpd->buf[qpd->pos++] = 0;
+diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c
+index eb9368be28c1..bbcac539777a 100644
+--- a/drivers/infiniband/hw/cxgb4/qp.c
++++ b/drivers/infiniband/hw/cxgb4/qp.c
+@@ -2737,15 +2737,11 @@ int c4iw_create_srq(struct ib_srq *ib_srq, struct ib_srq_init_attr *attrs,
+ if (CHELSIO_CHIP_VERSION(rhp->rdev.lldi.adapter_type) > CHELSIO_T6)
+ srq->flags = T4_SRQ_LIMIT_SUPPORT;
+
+- ret = xa_insert_irq(&rhp->qps, srq->wq.qid, srq, GFP_KERNEL);
+- if (ret)
+- goto err_free_queue;
+-
+ if (udata) {
+ srq_key_mm = kmalloc(sizeof(*srq_key_mm), GFP_KERNEL);
+ if (!srq_key_mm) {
+ ret = -ENOMEM;
+- goto err_remove_handle;
++ goto err_free_queue;
+ }
+ srq_db_key_mm = kmalloc(sizeof(*srq_db_key_mm), GFP_KERNEL);
+ if (!srq_db_key_mm) {
+@@ -2789,8 +2785,6 @@ err_free_srq_db_key_mm:
+ kfree(srq_db_key_mm);
+ err_free_srq_key_mm:
+ kfree(srq_key_mm);
+-err_remove_handle:
+- xa_erase_irq(&rhp->qps, srq->wq.qid);
+ err_free_queue:
+ free_srq_queue(srq, ucontext ? &ucontext->uctx : &rhp->rdev.uctx,
+ srq->wr_waitp);
+@@ -2813,8 +2807,6 @@ void c4iw_destroy_srq(struct ib_srq *ibsrq, struct ib_udata *udata)
+ rhp = srq->rhp;
+
+ pr_debug("%s id %d\n", __func__, srq->wq.qid);
+-
+- xa_erase_irq(&rhp->qps, srq->wq.qid);
+ ucontext = rdma_udata_to_drv_context(udata, struct c4iw_ucontext,
+ ibucontext);
+ free_srq_queue(srq, ucontext ? &ucontext->uctx : &rhp->rdev.uctx,
+diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
+index 2395fd4233a7..2ed7bfd5feea 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.c
++++ b/drivers/infiniband/hw/hfi1/sdma.c
+@@ -1526,8 +1526,11 @@ int sdma_init(struct hfi1_devdata *dd, u8 port)
+ }
+
+ ret = rhashtable_init(tmp_sdma_rht, &sdma_rht_params);
+- if (ret < 0)
++ if (ret < 0) {
++ kfree(tmp_sdma_rht);
+ goto bail;
++ }
++
+ dd->sdma_rht = tmp_sdma_rht;
+
+ dd_dev_info(dd, "SDMA num_sdma: %u\n", dd->num_sdma);
+diff --git a/drivers/infiniband/hw/hfi1/tid_rdma.c b/drivers/infiniband/hw/hfi1/tid_rdma.c
+index 6141f4edc6bf..536d974c78cf 100644
+--- a/drivers/infiniband/hw/hfi1/tid_rdma.c
++++ b/drivers/infiniband/hw/hfi1/tid_rdma.c
+@@ -2728,11 +2728,6 @@ static bool handle_read_kdeth_eflags(struct hfi1_ctxtdata *rcd,
+ diff = cmp_psn(psn,
+ flow->flow_state.r_next_psn);
+ if (diff > 0) {
+- if (!(qp->r_flags & RVT_R_RDMAR_SEQ))
+- restart_tid_rdma_read_req(rcd,
+- qp,
+- wqe);
+-
+ /* Drop the packet.*/
+ goto s_unlock;
+ } else if (diff < 0) {
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index af5bbb35c058..ef7ba0133d28 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -1275,29 +1275,6 @@ static int devx_handle_mkey_create(struct mlx5_ib_dev *dev,
+ return 0;
+ }
+
+-static void devx_free_indirect_mkey(struct rcu_head *rcu)
+-{
+- kfree(container_of(rcu, struct devx_obj, devx_mr.rcu));
+-}
+-
+-/* This function to delete from the radix tree needs to be called before
+- * destroying the underlying mkey. Otherwise a race might occur in case that
+- * other thread will get the same mkey before this one will be deleted,
+- * in that case it will fail via inserting to the tree its own data.
+- *
+- * Note:
+- * An error in the destroy is not expected unless there is some other indirect
+- * mkey which points to this one. In a kernel cleanup flow it will be just
+- * destroyed in the iterative destruction call. In a user flow, in case
+- * the application didn't close in the expected order it's its own problem,
+- * the mkey won't be part of the tree, in both cases the kernel is safe.
+- */
+-static void devx_cleanup_mkey(struct devx_obj *obj)
+-{
+- xa_erase(&obj->ib_dev->mdev->priv.mkey_table,
+- mlx5_base_mkey(obj->devx_mr.mmkey.key));
+-}
+-
+ static void devx_cleanup_subscription(struct mlx5_ib_dev *dev,
+ struct devx_event_subscription *sub)
+ {
+@@ -1339,8 +1316,16 @@ static int devx_obj_cleanup(struct ib_uobject *uobject,
+ int ret;
+
+ dev = mlx5_udata_to_mdev(&attrs->driver_udata);
+- if (obj->flags & DEVX_OBJ_FLAGS_INDIRECT_MKEY)
+- devx_cleanup_mkey(obj);
++ if (obj->flags & DEVX_OBJ_FLAGS_INDIRECT_MKEY) {
++ /*
++ * The pagefault_single_data_segment() does commands against
++ * the mmkey, we must wait for that to stop before freeing the
++ * mkey, as another allocation could get the same mkey #.
++ */
++ xa_erase(&obj->ib_dev->mdev->priv.mkey_table,
++ mlx5_base_mkey(obj->devx_mr.mmkey.key));
++ synchronize_srcu(&dev->mr_srcu);
++ }
+
+ if (obj->flags & DEVX_OBJ_FLAGS_DCT)
+ ret = mlx5_core_destroy_dct(obj->ib_dev->mdev, &obj->core_dct);
+@@ -1359,12 +1344,6 @@ static int devx_obj_cleanup(struct ib_uobject *uobject,
+ devx_cleanup_subscription(dev, sub_entry);
+ mutex_unlock(&devx_event_table->event_xa_lock);
+
+- if (obj->flags & DEVX_OBJ_FLAGS_INDIRECT_MKEY) {
+- call_srcu(&dev->mr_srcu, &obj->devx_mr.rcu,
+- devx_free_indirect_mkey);
+- return ret;
+- }
+-
+ kfree(obj);
+ return ret;
+ }
+@@ -1468,26 +1447,21 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OBJ_CREATE)(
+ &obj_id);
+ WARN_ON(obj->dinlen > MLX5_MAX_DESTROY_INBOX_SIZE_DW * sizeof(u32));
+
+- if (obj->flags & DEVX_OBJ_FLAGS_INDIRECT_MKEY) {
+- err = devx_handle_mkey_indirect(obj, dev, cmd_in, cmd_out);
+- if (err)
+- goto obj_destroy;
+- }
+-
+ err = uverbs_copy_to(attrs, MLX5_IB_ATTR_DEVX_OBJ_CREATE_CMD_OUT, cmd_out, cmd_out_len);
+ if (err)
+- goto err_copy;
++ goto obj_destroy;
+
+ if (opcode == MLX5_CMD_OP_CREATE_GENERAL_OBJECT)
+ obj_type = MLX5_GET(general_obj_in_cmd_hdr, cmd_in, obj_type);
+-
+ obj->obj_id = get_enc_obj_id(opcode | obj_type << 16, obj_id);
+
++ if (obj->flags & DEVX_OBJ_FLAGS_INDIRECT_MKEY) {
++ err = devx_handle_mkey_indirect(obj, dev, cmd_in, cmd_out);
++ if (err)
++ goto obj_destroy;
++ }
+ return 0;
+
+-err_copy:
+- if (obj->flags & DEVX_OBJ_FLAGS_INDIRECT_MKEY)
+- devx_cleanup_mkey(obj);
+ obj_destroy:
+ if (obj->flags & DEVX_OBJ_FLAGS_DCT)
+ mlx5_core_destroy_dct(obj->ib_dev->mdev, &obj->core_dct);
+diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
+index 9ae587b74b12..43c7353b9812 100644
+--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
++++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
+@@ -638,7 +638,6 @@ struct mlx5_ib_mw {
+ struct mlx5_ib_devx_mr {
+ struct mlx5_core_mkey mmkey;
+ int ndescs;
+- struct rcu_head rcu;
+ };
+
+ struct mlx5_ib_umr_context {
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 3401f5f6792e..a6198fe7f376 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -1423,6 +1423,9 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
+ if (!mr->umem)
+ return -EINVAL;
+
++ if (is_odp_mr(mr))
++ return -EOPNOTSUPP;
++
+ if (flags & IB_MR_REREG_TRANS) {
+ addr = virt_addr;
+ len = length;
+@@ -1468,8 +1471,6 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
+ }
+
+ mr->allocated_from_cache = 0;
+- if (IS_ENABLED(CONFIG_INFINIBAND_ON_DEMAND_PAGING))
+- mr->live = 1;
+ } else {
+ /*
+ * Send a UMR WQE
+@@ -1498,7 +1499,6 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
+
+ set_mr_fields(dev, mr, npages, len, access_flags);
+
+- update_odp_mr(mr);
+ return 0;
+
+ err:
+@@ -1591,13 +1591,14 @@ static void dereg_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
+ */
+ mr->live = 0;
+
++ /* Wait for all running page-fault handlers to finish. */
++ synchronize_srcu(&dev->mr_srcu);
++
+ /* dequeue pending prefetch requests for the mr */
+ if (atomic_read(&mr->num_pending_prefetch))
+ flush_workqueue(system_unbound_wq);
+ WARN_ON(atomic_read(&mr->num_pending_prefetch));
+
+- /* Wait for all running page-fault handlers to finish. */
+- synchronize_srcu(&dev->mr_srcu);
+ /* Destroy all page mappings */
+ if (umem_odp->page_list)
+ mlx5_ib_invalidate_range(umem_odp,
+@@ -1969,14 +1970,25 @@ free:
+
+ int mlx5_ib_dealloc_mw(struct ib_mw *mw)
+ {
++ struct mlx5_ib_dev *dev = to_mdev(mw->device);
+ struct mlx5_ib_mw *mmw = to_mmw(mw);
+ int err;
+
+- err = mlx5_core_destroy_mkey((to_mdev(mw->device))->mdev,
+- &mmw->mmkey);
+- if (!err)
+- kfree(mmw);
+- return err;
++ if (IS_ENABLED(CONFIG_INFINIBAND_ON_DEMAND_PAGING)) {
++ xa_erase_irq(&dev->mdev->priv.mkey_table,
++ mlx5_base_mkey(mmw->mmkey.key));
++ /*
++ * pagefault_single_data_segment() may be accessing mmw under
++ * SRCU if the user bound an ODP MR to this MW.
++ */
++ synchronize_srcu(&dev->mr_srcu);
++ }
++
++ err = mlx5_core_destroy_mkey(dev->mdev, &mmw->mmkey);
++ if (err)
++ return err;
++ kfree(mmw);
++ return 0;
+ }
+
+ int mlx5_ib_check_mr_status(struct ib_mr *ibmr, u32 check_mask,
+diff --git a/drivers/infiniband/sw/siw/siw_qp.c b/drivers/infiniband/sw/siw/siw_qp.c
+index 430314c8abd9..52d402f39df9 100644
+--- a/drivers/infiniband/sw/siw/siw_qp.c
++++ b/drivers/infiniband/sw/siw/siw_qp.c
+@@ -182,12 +182,19 @@ void siw_qp_llp_close(struct siw_qp *qp)
+ */
+ void siw_qp_llp_write_space(struct sock *sk)
+ {
+- struct siw_cep *cep = sk_to_cep(sk);
++ struct siw_cep *cep;
+
+- cep->sk_write_space(sk);
++ read_lock(&sk->sk_callback_lock);
++
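++ /* sk_callback_lock keeps the cep from being torn down underneath us */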
++ cep = sk_to_cep(sk);
++ if (cep) {
++ cep->sk_write_space(sk);
+
+- if (!test_bit(SOCK_NOSPACE, &sk->sk_socket->flags))
+- (void)siw_sq_start(cep->qp);
++ if (!test_bit(SOCK_NOSPACE, &sk->sk_socket->flags))
++ (void)siw_sq_start(cep->qp);
++ }
++
++ read_unlock(&sk->sk_callback_lock);
+ }
+
+ static int siw_qp_readq_init(struct siw_qp *qp, int irq_size, int orq_size)
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index c4e0e4a9ee9e..f83a9a302f8e 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -2783,7 +2783,7 @@ static int identity_mapping(struct device *dev)
+ struct device_domain_info *info;
+
+ info = dev->archdata.iommu;
+- if (info && info != DUMMY_DEVICE_DOMAIN_INFO)
++ if (info && info != DUMMY_DEVICE_DOMAIN_INFO && info != DEFER_DEVICE_DOMAIN_INFO)
+ return (info->domain == si_domain);
+
+ return 0;
+diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c
+index f150f5c5492b..4fb1a40e68a0 100644
+--- a/drivers/md/dm-snap.c
++++ b/drivers/md/dm-snap.c
+@@ -18,7 +18,6 @@
+ #include <linux/vmalloc.h>
+ #include <linux/log2.h>
+ #include <linux/dm-kcopyd.h>
+-#include <linux/semaphore.h>
+
+ #include "dm.h"
+
+@@ -107,8 +106,8 @@ struct dm_snapshot {
+ /* The on disk metadata handler */
+ struct dm_exception_store *store;
+
+- /* Maximum number of in-flight COW jobs. */
+- struct semaphore cow_count;
++ unsigned in_progress;
++ struct wait_queue_head in_progress_wait;
+
+ struct dm_kcopyd_client *kcopyd_client;
+
+@@ -162,8 +161,8 @@ struct dm_snapshot {
+ */
+ #define DEFAULT_COW_THRESHOLD 2048
+
+-static int cow_threshold = DEFAULT_COW_THRESHOLD;
+-module_param_named(snapshot_cow_threshold, cow_threshold, int, 0644);
++static unsigned cow_threshold = DEFAULT_COW_THRESHOLD;
++module_param_named(snapshot_cow_threshold, cow_threshold, uint, 0644);
+ MODULE_PARM_DESC(snapshot_cow_threshold, "Maximum number of chunks being copied on write");
+
+ DECLARE_DM_KCOPYD_THROTTLE_WITH_MODULE_PARM(snapshot_copy_throttle,
+@@ -1327,7 +1326,7 @@ static int snapshot_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ goto bad_hash_tables;
+ }
+
+- sema_init(&s->cow_count, (cow_threshold > 0) ? cow_threshold : INT_MAX);
++ init_waitqueue_head(&s->in_progress_wait);
+
+ s->kcopyd_client = dm_kcopyd_client_create(&dm_kcopyd_throttle);
+ if (IS_ERR(s->kcopyd_client)) {
+@@ -1509,9 +1508,56 @@ static void snapshot_dtr(struct dm_target *ti)
+
+ dm_put_device(ti, s->origin);
+
++ WARN_ON(s->in_progress);
++
+ kfree(s);
+ }
+
++static void account_start_copy(struct dm_snapshot *s)
++{
++ spin_lock(&s->in_progress_wait.lock);
++ s->in_progress++;
++ spin_unlock(&s->in_progress_wait.lock);
++}
++
++static void account_end_copy(struct dm_snapshot *s)
++{
++ spin_lock(&s->in_progress_wait.lock);
++ BUG_ON(!s->in_progress);
++ s->in_progress--;
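++ /* wake up throttled writers once we drop back under the threshold */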
++ if (likely(s->in_progress <= cow_threshold) &&
++ unlikely(waitqueue_active(&s->in_progress_wait)))
++ wake_up_locked(&s->in_progress_wait);
++ spin_unlock(&s->in_progress_wait.lock);
++}
++
++static bool wait_for_in_progress(struct dm_snapshot *s, bool unlock_origins)
++{
++ if (unlikely(s->in_progress > cow_threshold)) {
++ spin_lock(&s->in_progress_wait.lock);
++ if (likely(s->in_progress > cow_threshold)) {
++ /*
++ * NOTE: this throttle doesn't account for whether
++ * the caller is servicing an IO that will trigger a COW
++ * the caller is servicing an IO that will trigger a COW,
++ * to be COW'd. But if cow_threshold was reached, extra
++ * throttling is unlikely to negatively impact performance.
++ */
++ DECLARE_WAITQUEUE(wait, current);
++ __add_wait_queue(&s->in_progress_wait, &wait);
++ __set_current_state(TASK_UNINTERRUPTIBLE);
++ spin_unlock(&s->in_progress_wait.lock);
++ if (unlock_origins)
++ up_read(&_origins_lock);
++ io_schedule();
++ remove_wait_queue(&s->in_progress_wait, &wait);
++ return false;
++ }
++ spin_unlock(&s->in_progress_wait.lock);
++ }
++ return true;
++}
++
+ /*
+ * Flush a list of buffers.
+ */
+@@ -1527,7 +1573,7 @@ static void flush_bios(struct bio *bio)
+ }
+ }
+
+-static int do_origin(struct dm_dev *origin, struct bio *bio);
++static int do_origin(struct dm_dev *origin, struct bio *bio, bool limit);
+
+ /*
+ * Flush a list of buffers.
+@@ -1540,7 +1586,7 @@ static void retry_origin_bios(struct dm_snapshot *s, struct bio *bio)
+ while (bio) {
+ n = bio->bi_next;
+ bio->bi_next = NULL;
+- r = do_origin(s->origin, bio);
++ r = do_origin(s->origin, bio, false);
+ if (r == DM_MAPIO_REMAPPED)
+ generic_make_request(bio);
+ bio = n;
+@@ -1732,7 +1778,7 @@ static void copy_callback(int read_err, unsigned long write_err, void *context)
+ rb_link_node(&pe->out_of_order_node, parent, p);
+ rb_insert_color(&pe->out_of_order_node, &s->out_of_order_tree);
+ }
+- up(&s->cow_count);
++ account_end_copy(s);
+ }
+
+ /*
+@@ -1756,7 +1802,7 @@ static void start_copy(struct dm_snap_pending_exception *pe)
+ dest.count = src.count;
+
+ /* Hand over to kcopyd */
+- down(&s->cow_count);
++ account_start_copy(s);
+ dm_kcopyd_copy(s->kcopyd_client, &src, 1, &dest, 0, copy_callback, pe);
+ }
+
+@@ -1776,7 +1822,7 @@ static void start_full_bio(struct dm_snap_pending_exception *pe,
+ pe->full_bio = bio;
+ pe->full_bio_end_io = bio->bi_end_io;
+
+- down(&s->cow_count);
++ account_start_copy(s);
+ callback_data = dm_kcopyd_prepare_callback(s->kcopyd_client,
+ copy_callback, pe);
+
+@@ -1866,7 +1912,7 @@ static void zero_callback(int read_err, unsigned long write_err, void *context)
+ struct bio *bio = context;
+ struct dm_snapshot *s = bio->bi_private;
+
+- up(&s->cow_count);
++ account_end_copy(s);
+ bio->bi_status = write_err ? BLK_STS_IOERR : 0;
+ bio_endio(bio);
+ }
+@@ -1880,7 +1926,7 @@ static void zero_exception(struct dm_snapshot *s, struct dm_exception *e,
+ dest.sector = bio->bi_iter.bi_sector;
+ dest.count = s->store->chunk_size;
+
+- down(&s->cow_count);
++ account_start_copy(s);
+ WARN_ON_ONCE(bio->bi_private);
+ bio->bi_private = s;
+ dm_kcopyd_zero(s->kcopyd_client, 1, &dest, 0, zero_callback, bio);
+@@ -1916,6 +1962,11 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio)
+ if (!s->valid)
+ return DM_MAPIO_KILL;
+
++ if (bio_data_dir(bio) == WRITE) {
++ while (unlikely(!wait_for_in_progress(s, false)))
++ ; /* wait_for_in_progress() has slept */
++ }
++
+ down_read(&s->lock);
+ dm_exception_table_lock(&lock);
+
+@@ -2112,7 +2163,7 @@ redirect_to_origin:
+
+ if (bio_data_dir(bio) == WRITE) {
+ up_write(&s->lock);
+- return do_origin(s->origin, bio);
++ return do_origin(s->origin, bio, false);
+ }
+
+ out_unlock:
+@@ -2487,15 +2538,24 @@ next_snapshot:
+ /*
+ * Called on a write from the origin driver.
+ */
+-static int do_origin(struct dm_dev *origin, struct bio *bio)
++static int do_origin(struct dm_dev *origin, struct bio *bio, bool limit)
+ {
+ struct origin *o;
+ int r = DM_MAPIO_REMAPPED;
+
++again:
+ down_read(&_origins_lock);
+ o = __lookup_origin(origin->bdev);
+- if (o)
++ if (o) {
++ if (limit) {
++ struct dm_snapshot *s;
++ list_for_each_entry(s, &o->snapshots, list)
++ if (unlikely(!wait_for_in_progress(s, true)))
++ goto again;
++ }
++
+ r = __origin_write(&o->snapshots, bio->bi_iter.bi_sector, bio);
++ }
+ up_read(&_origins_lock);
+
+ return r;
+@@ -2608,7 +2668,7 @@ static int origin_map(struct dm_target *ti, struct bio *bio)
+ dm_accept_partial_bio(bio, available_sectors);
+
+ /* Only tell snapshots if this is a write */
+- return do_origin(o->dev, bio);
++ return do_origin(o->dev, bio, true);
+ }
+
+ /*
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 98603e235cf0..a76b6c6fd660 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -499,6 +499,7 @@ static int fastrpc_dma_buf_attach(struct dma_buf *dmabuf,
+ FASTRPC_PHYS(buffer->phys), buffer->size);
+ if (ret < 0) {
+ dev_err(buffer->dev, "failed to get scatterlist from DMA API\n");
++ kfree(a);
+ return -EINVAL;
+ }
+
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 931d9d935686..21d8fcc83c9c 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -4039,7 +4039,7 @@ out:
+ * this to-be-skipped slave to send a packet out.
+ */
+ old_arr = rtnl_dereference(bond->slave_arr);
+- for (idx = 0; idx < old_arr->count; idx++) {
++ for (idx = 0; old_arr != NULL && idx < old_arr->count; idx++) {
+ if (skipslave == old_arr->arr[idx]) {
+ old_arr->arr[idx] =
+ old_arr->arr[old_arr->count-1];
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mr.c b/drivers/net/ethernet/mellanox/mlx5/core/mr.c
+index 9231b39d18b2..c501bf2a0252 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/mr.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/mr.c
+@@ -112,17 +112,11 @@ int mlx5_core_destroy_mkey(struct mlx5_core_dev *dev,
+ u32 out[MLX5_ST_SZ_DW(destroy_mkey_out)] = {0};
+ u32 in[MLX5_ST_SZ_DW(destroy_mkey_in)] = {0};
+ struct xarray *mkeys = &dev->priv.mkey_table;
+- struct mlx5_core_mkey *deleted_mkey;
+ unsigned long flags;
+
+ xa_lock_irqsave(mkeys, flags);
+- deleted_mkey = __xa_erase(mkeys, mlx5_base_mkey(mkey->key));
++ __xa_erase(mkeys, mlx5_base_mkey(mkey->key));
+ xa_unlock_irqrestore(mkeys, flags);
+- if (!deleted_mkey) {
+- mlx5_core_dbg(dev, "failed xarray delete of mkey 0x%x\n",
+- mlx5_base_mkey(mkey->key));
+- return -ENOENT;
+- }
+
+ MLX5_SET(destroy_mkey_in, in, opcode, MLX5_CMD_OP_DESTROY_MKEY);
+ MLX5_SET(destroy_mkey_in, in, mkey_index, mlx5_mkey_to_idx(mkey->key));
+diff --git a/drivers/net/usb/sr9800.c b/drivers/net/usb/sr9800.c
+index 35f39f23d881..8f8c9ede88c2 100644
+--- a/drivers/net/usb/sr9800.c
++++ b/drivers/net/usb/sr9800.c
+@@ -336,7 +336,7 @@ static void sr_set_multicast(struct net_device *net)
+ static int sr_mdio_read(struct net_device *net, int phy_id, int loc)
+ {
+ struct usbnet *dev = netdev_priv(net);
+- __le16 res;
++ __le16 res = 0;
+
+ mutex_lock(&dev->phy_mutex);
+ sr_set_sw_mii(dev);
+diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c
+index dc45d16e8d21..383d4fa555a8 100644
+--- a/drivers/net/wireless/ath/ath10k/core.c
++++ b/drivers/net/wireless/ath/ath10k/core.c
+@@ -2118,12 +2118,15 @@ static int ath10k_init_uart(struct ath10k *ar)
+ return ret;
+ }
+
+- if (!uart_print && ar->hw_params.uart_pin_workaround) {
+- ret = ath10k_bmi_write32(ar, hi_dbg_uart_txpin,
+- ar->hw_params.uart_pin);
+- if (ret) {
+- ath10k_warn(ar, "failed to set UART TX pin: %d", ret);
+- return ret;
++ if (!uart_print) {
++ if (ar->hw_params.uart_pin_workaround) {
++ ret = ath10k_bmi_write32(ar, hi_dbg_uart_txpin,
++ ar->hw_params.uart_pin);
++ if (ret) {
++ ath10k_warn(ar, "failed to set UART TX pin: %d",
++ ret);
++ return ret;
++ }
+ }
+
+ return 0;
+diff --git a/drivers/net/wireless/ath/ath6kl/usb.c b/drivers/net/wireless/ath/ath6kl/usb.c
+index 4defb7a0330f..53b66e9434c9 100644
+--- a/drivers/net/wireless/ath/ath6kl/usb.c
++++ b/drivers/net/wireless/ath/ath6kl/usb.c
+@@ -132,6 +132,10 @@ ath6kl_usb_alloc_urb_from_pipe(struct ath6kl_usb_pipe *pipe)
+ struct ath6kl_urb_context *urb_context = NULL;
+ unsigned long flags;
+
++ /* bail if this pipe is not initialized */
++ if (!pipe->ar_usb)
++ return NULL;
++
+ spin_lock_irqsave(&pipe->ar_usb->cs_lock, flags);
+ if (!list_empty(&pipe->urb_list_head)) {
+ urb_context =
+@@ -150,6 +154,10 @@ static void ath6kl_usb_free_urb_to_pipe(struct ath6kl_usb_pipe *pipe,
+ {
+ unsigned long flags;
+
++ /* bail if this pipe is not initialized */
++ if (!pipe->ar_usb)
++ return;
++
+ spin_lock_irqsave(&pipe->ar_usb->cs_lock, flags);
+ pipe->urb_cnt++;
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index 8b0b464a1f21..c520f42d165c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -887,15 +887,17 @@ static bool iwl_mvm_sar_geo_support(struct iwl_mvm *mvm)
+ * firmware versions. Unfortunately, we don't have a TLV API
+ * flag to rely on, so rely on the major version which is in
+ * the first byte of ucode_ver. This was implemented
+- * initially on version 38 and then backported to29 and 17.
+- * The intention was to have it in 36 as well, but not all
+- * 8000 family got this feature enabled. The 8000 family is
+- * the only one using version 36, so skip this version
+- * entirely.
++ * initially on version 38 and then backported to 17. It was
++ * also backported to 29, but only for 7265D devices. The
++ * intention was to have it in 36 as well, but not all 8000
++ * family got this feature enabled. The 8000 family is the
++ * only one using version 36, so skip this version entirely.
+ */
+ return IWL_UCODE_SERIAL(mvm->fw->ucode_ver) >= 38 ||
+- IWL_UCODE_SERIAL(mvm->fw->ucode_ver) == 29 ||
+- IWL_UCODE_SERIAL(mvm->fw->ucode_ver) == 17;
++ IWL_UCODE_SERIAL(mvm->fw->ucode_ver) == 17 ||
++ (IWL_UCODE_SERIAL(mvm->fw->ucode_ver) == 29 &&
++ ((mvm->trans->hw_rev & CSR_HW_REV_TYPE_MSK) ==
++ CSR_HW_REV_TYPE_7265D));
+ }
+
+ int iwl_mvm_get_sar_geo_profile(struct iwl_mvm *mvm)
+diff --git a/drivers/net/wireless/realtek/rtlwifi/pci.c b/drivers/net/wireless/realtek/rtlwifi/pci.c
+index 4055e0ab75ba..05050f6c36db 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/pci.c
++++ b/drivers/net/wireless/realtek/rtlwifi/pci.c
+@@ -822,7 +822,7 @@ static void _rtl_pci_rx_interrupt(struct ieee80211_hw *hw)
+ hdr = rtl_get_hdr(skb);
+ fc = rtl_get_fc(skb);
+
+- if (!stats.crc && !stats.hwerror) {
++ if (!stats.crc && !stats.hwerror && (skb->len > FCS_LEN)) {
+ memcpy(IEEE80211_SKB_RXCB(skb), &rx_status,
+ sizeof(rx_status));
+
+@@ -859,6 +859,7 @@ static void _rtl_pci_rx_interrupt(struct ieee80211_hw *hw)
+ _rtl_pci_rx_to_mac80211(hw, skb, rx_status);
+ }
+ } else {
++ /* drop packets with errors or those too short */
+ dev_kfree_skb_any(skb);
+ }
+ new_trx_end:
+diff --git a/drivers/net/wireless/realtek/rtlwifi/ps.c b/drivers/net/wireless/realtek/rtlwifi/ps.c
+index 70f04c2f5b17..fff8dda14023 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/ps.c
++++ b/drivers/net/wireless/realtek/rtlwifi/ps.c
+@@ -754,6 +754,9 @@ static void rtl_p2p_noa_ie(struct ieee80211_hw *hw, void *data,
+ return;
+ } else {
+ noa_num = (noa_len - 2) / 13;
++ if (noa_num > P2P_MAX_NOA_NUM)
++ noa_num = P2P_MAX_NOA_NUM;
++
+ }
+ noa_index = ie[3];
+ if (rtlpriv->psc.p2p_ps_info.p2p_ps_mode ==
+@@ -848,6 +851,9 @@ static void rtl_p2p_action_ie(struct ieee80211_hw *hw, void *data,
+ return;
+ } else {
+ noa_num = (noa_len - 2) / 13;
++ if (noa_num > P2P_MAX_NOA_NUM)
++ noa_num = P2P_MAX_NOA_NUM;
++
+ }
+ noa_index = ie[3];
+ if (rtlpriv->psc.p2p_ps_info.p2p_ps_mode ==
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822b.c b/drivers/net/wireless/realtek/rtw88/rtw8822b.c
+index 1172f6c0605b..d61d534396c7 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822b.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822b.c
+@@ -997,7 +997,7 @@ static void rtw8822b_do_iqk(struct rtw_dev *rtwdev)
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_DTXLOK, RFREG_MASK, 0x0);
+
+ reload = !!rtw_read32_mask(rtwdev, REG_IQKFAILMSK, BIT(16));
+- iqk_fail_mask = rtw_read32_mask(rtwdev, REG_IQKFAILMSK, GENMASK(0, 7));
++ iqk_fail_mask = rtw_read32_mask(rtwdev, REG_IQKFAILMSK, GENMASK(7, 0));
+ rtw_dbg(rtwdev, RTW_DBG_PHY,
+ "iqk counter=%d reload=%d do_iqk_cnt=%d n_iqk_fail(mask)=0x%02x\n",
+ counter, reload, ++do_iqk_cnt, iqk_fail_mask);
+diff --git a/drivers/nfc/pn533/usb.c b/drivers/nfc/pn533/usb.c
+index c5289eaf17ee..e897e4d768ef 100644
+--- a/drivers/nfc/pn533/usb.c
++++ b/drivers/nfc/pn533/usb.c
+@@ -547,18 +547,25 @@ static int pn533_usb_probe(struct usb_interface *interface,
+
+ rc = pn533_finalize_setup(priv);
+ if (rc)
+- goto error;
++ goto err_deregister;
+
+ usb_set_intfdata(interface, phy);
+
+ return 0;
+
++err_deregister:
++ pn533_unregister_device(phy->priv);
+ error:
++ usb_kill_urb(phy->in_urb);
++ usb_kill_urb(phy->out_urb);
++ usb_kill_urb(phy->ack_urb);
++
+ usb_free_urb(phy->in_urb);
+ usb_free_urb(phy->out_urb);
+ usb_free_urb(phy->ack_urb);
+ usb_put_dev(phy->udev);
+ kfree(in_buf);
++ kfree(phy->ack_buffer);
+
+ return rc;
+ }
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 36a5ed1eacbe..3304e2c8a448 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -852,7 +852,7 @@ out:
+ static int nvme_submit_user_cmd(struct request_queue *q,
+ struct nvme_command *cmd, void __user *ubuffer,
+ unsigned bufflen, void __user *meta_buffer, unsigned meta_len,
+- u32 meta_seed, u64 *result, unsigned timeout)
++ u32 meta_seed, u32 *result, unsigned timeout)
+ {
+ bool write = nvme_is_write(cmd);
+ struct nvme_ns *ns = q->queuedata;
+@@ -893,7 +893,7 @@ static int nvme_submit_user_cmd(struct request_queue *q,
+ else
+ ret = nvme_req(req)->status;
+ if (result)
+- *result = le64_to_cpu(nvme_req(req)->result.u64);
++ *result = le32_to_cpu(nvme_req(req)->result.u32);
+ if (meta && !ret && !write) {
+ if (copy_to_user(meta_buffer, meta, meta_len))
+ ret = -EFAULT;
+@@ -1339,54 +1339,6 @@ static int nvme_user_cmd(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
+ struct nvme_command c;
+ unsigned timeout = 0;
+ u32 effects;
+- u64 result;
+- int status;
+-
+- if (!capable(CAP_SYS_ADMIN))
+- return -EACCES;
+- if (copy_from_user(&cmd, ucmd, sizeof(cmd)))
+- return -EFAULT;
+- if (cmd.flags)
+- return -EINVAL;
+-
+- memset(&c, 0, sizeof(c));
+- c.common.opcode = cmd.opcode;
+- c.common.flags = cmd.flags;
+- c.common.nsid = cpu_to_le32(cmd.nsid);
+- c.common.cdw2[0] = cpu_to_le32(cmd.cdw2);
+- c.common.cdw2[1] = cpu_to_le32(cmd.cdw3);
+- c.common.cdw10 = cpu_to_le32(cmd.cdw10);
+- c.common.cdw11 = cpu_to_le32(cmd.cdw11);
+- c.common.cdw12 = cpu_to_le32(cmd.cdw12);
+- c.common.cdw13 = cpu_to_le32(cmd.cdw13);
+- c.common.cdw14 = cpu_to_le32(cmd.cdw14);
+- c.common.cdw15 = cpu_to_le32(cmd.cdw15);
+-
+- if (cmd.timeout_ms)
+- timeout = msecs_to_jiffies(cmd.timeout_ms);
+-
+- effects = nvme_passthru_start(ctrl, ns, cmd.opcode);
+- status = nvme_submit_user_cmd(ns ? ns->queue : ctrl->admin_q, &c,
+- (void __user *)(uintptr_t)cmd.addr, cmd.data_len,
+- (void __user *)(uintptr_t)cmd.metadata,
+- cmd.metadata_len, 0, &result, timeout);
+- nvme_passthru_end(ctrl, effects);
+-
+- if (status >= 0) {
+- if (put_user(result, &ucmd->result))
+- return -EFAULT;
+- }
+-
+- return status;
+-}
+-
+-static int nvme_user_cmd64(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
+- struct nvme_passthru_cmd64 __user *ucmd)
+-{
+- struct nvme_passthru_cmd64 cmd;
+- struct nvme_command c;
+- unsigned timeout = 0;
+- u32 effects;
+ int status;
+
+ if (!capable(CAP_SYS_ADMIN))
+@@ -1457,41 +1409,6 @@ static void nvme_put_ns_from_disk(struct nvme_ns_head *head, int idx)
+ srcu_read_unlock(&head->srcu, idx);
+ }
+
+-static bool is_ctrl_ioctl(unsigned int cmd)
+-{
+- if (cmd == NVME_IOCTL_ADMIN_CMD || cmd == NVME_IOCTL_ADMIN64_CMD)
+- return true;
+- if (is_sed_ioctl(cmd))
+- return true;
+- return false;
+-}
+-
+-static int nvme_handle_ctrl_ioctl(struct nvme_ns *ns, unsigned int cmd,
+- void __user *argp,
+- struct nvme_ns_head *head,
+- int srcu_idx)
+-{
+- struct nvme_ctrl *ctrl = ns->ctrl;
+- int ret;
+-
+- nvme_get_ctrl(ns->ctrl);
+- nvme_put_ns_from_disk(head, srcu_idx);
+-
+- switch (cmd) {
+- case NVME_IOCTL_ADMIN_CMD:
+- ret = nvme_user_cmd(ctrl, NULL, argp);
+- break;
+- case NVME_IOCTL_ADMIN64_CMD:
+- ret = nvme_user_cmd64(ctrl, NULL, argp);
+- break;
+- default:
+- ret = sed_ioctl(ctrl->opal_dev, cmd, argp);
+- break;
+- }
+- nvme_put_ctrl(ctrl);
+- return ret;
+-}
+-
+ static int nvme_ioctl(struct block_device *bdev, fmode_t mode,
+ unsigned int cmd, unsigned long arg)
+ {
+@@ -1509,8 +1426,20 @@ static int nvme_ioctl(struct block_device *bdev, fmode_t mode,
+ * separately and drop the ns SRCU reference early. This avoids a
+ * deadlock when deleting namespaces using the passthrough interface.
+ */
+- if (is_ctrl_ioctl(cmd))
+- return nvme_handle_ctrl_ioctl(ns, cmd, argp, head, srcu_idx);
++ if (cmd == NVME_IOCTL_ADMIN_CMD || is_sed_ioctl(cmd)) {
++ struct nvme_ctrl *ctrl = ns->ctrl;
++
++ nvme_get_ctrl(ns->ctrl);
++ nvme_put_ns_from_disk(head, srcu_idx);
++
++ if (cmd == NVME_IOCTL_ADMIN_CMD)
++ ret = nvme_user_cmd(ctrl, NULL, argp);
++ else
++ ret = sed_ioctl(ctrl->opal_dev, cmd, argp);
++
++ nvme_put_ctrl(ctrl);
++ return ret;
++ }
+
+ switch (cmd) {
+ case NVME_IOCTL_ID:
+@@ -1523,9 +1452,6 @@ static int nvme_ioctl(struct block_device *bdev, fmode_t mode,
+ case NVME_IOCTL_SUBMIT_IO:
+ ret = nvme_submit_io(ns, argp);
+ break;
+- case NVME_IOCTL_IO64_CMD:
+- ret = nvme_user_cmd64(ns->ctrl, ns, argp);
+- break;
+ default:
+ if (ns->ndev)
+ ret = nvme_nvm_ioctl(ns, cmd, arg);
+@@ -2900,8 +2826,6 @@ static long nvme_dev_ioctl(struct file *file, unsigned int cmd,
+ switch (cmd) {
+ case NVME_IOCTL_ADMIN_CMD:
+ return nvme_user_cmd(ctrl, NULL, argp);
+- case NVME_IOCTL_ADMIN64_CMD:
+- return nvme_user_cmd64(ctrl, NULL, argp);
+ case NVME_IOCTL_IO_CMD:
+ return nvme_dev_user_cmd(ctrl, argp);
+ case NVME_IOCTL_RESET:
+diff --git a/drivers/s390/cio/cio.h b/drivers/s390/cio/cio.h
+index ba7d2480613b..dcdaba689b20 100644
+--- a/drivers/s390/cio/cio.h
++++ b/drivers/s390/cio/cio.h
+@@ -113,6 +113,7 @@ struct subchannel {
+ enum sch_todo todo;
+ struct work_struct todo_work;
+ struct schib_config config;
++ u64 dma_mask;
+ char *driver_override; /* Driver name to force a match */
+ } __attribute__ ((aligned(8)));
+
+diff --git a/drivers/s390/cio/css.c b/drivers/s390/cio/css.c
+index 1fbfb0a93f5f..831850435c23 100644
+--- a/drivers/s390/cio/css.c
++++ b/drivers/s390/cio/css.c
+@@ -232,7 +232,12 @@ struct subchannel *css_alloc_subchannel(struct subchannel_id schid,
+ * belong to a subchannel need to fit 31 bit width (e.g. ccw).
+ */
+ sch->dev.coherent_dma_mask = DMA_BIT_MASK(31);
+- sch->dev.dma_mask = &sch->dev.coherent_dma_mask;
++ /*
++ * But we don't have such restrictions imposed on the stuff that
++ * is handled by the streaming API.
++ */
++ sch->dma_mask = DMA_BIT_MASK(64);
++ sch->dev.dma_mask = &sch->dma_mask;
+ return sch;
+
+ err:
+diff --git a/drivers/s390/cio/device.c b/drivers/s390/cio/device.c
+index c421899be20f..027ef1dde5a7 100644
+--- a/drivers/s390/cio/device.c
++++ b/drivers/s390/cio/device.c
+@@ -710,7 +710,7 @@ static struct ccw_device * io_subchannel_allocate_dev(struct subchannel *sch)
+ if (!cdev->private)
+ goto err_priv;
+ cdev->dev.coherent_dma_mask = sch->dev.coherent_dma_mask;
+- cdev->dev.dma_mask = &cdev->dev.coherent_dma_mask;
++ cdev->dev.dma_mask = sch->dev.dma_mask;
+ dma_pool = cio_gp_dma_create(&cdev->dev, 1);
+ if (!dma_pool)
+ goto err_dma_pool;
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index 6b7b390b2e52..9584c5a48397 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -441,9 +441,6 @@ qla2x00_sysfs_write_optrom_ctl(struct file *filp, struct kobject *kobj,
+ valid = 0;
+ if (ha->optrom_size == OPTROM_SIZE_2300 && start == 0)
+ valid = 1;
+- else if (start == (ha->flt_region_boot * 4) ||
+- start == (ha->flt_region_fw * 4))
+- valid = 1;
+ else if (IS_QLA24XX_TYPE(ha) || IS_QLA25XX(ha))
+ valid = 1;
+ if (!valid) {
+@@ -491,8 +488,10 @@ qla2x00_sysfs_write_optrom_ctl(struct file *filp, struct kobject *kobj,
+ "Writing flash region -- 0x%x/0x%x.\n",
+ ha->optrom_region_start, ha->optrom_region_size);
+
+- ha->isp_ops->write_optrom(vha, ha->optrom_buffer,
++ rval = ha->isp_ops->write_optrom(vha, ha->optrom_buffer,
+ ha->optrom_region_start, ha->optrom_region_size);
++ if (rval)
++ rval = -EIO;
+ break;
+ default:
+ rval = -EINVAL;
+diff --git a/drivers/staging/rtl8188eu/os_dep/usb_intf.c b/drivers/staging/rtl8188eu/os_dep/usb_intf.c
+index 664d93a7f90d..4fac9dca798e 100644
+--- a/drivers/staging/rtl8188eu/os_dep/usb_intf.c
++++ b/drivers/staging/rtl8188eu/os_dep/usb_intf.c
+@@ -348,8 +348,10 @@ static struct adapter *rtw_usb_if1_init(struct dvobj_priv *dvobj,
+ }
+
+ padapter->HalData = kzalloc(sizeof(struct hal_data_8188e), GFP_KERNEL);
+- if (!padapter->HalData)
+- DBG_88E("cant not alloc memory for HAL DATA\n");
++ if (!padapter->HalData) {
++ DBG_88E("Failed to allocate memory for HAL data\n");
++ goto free_adapter;
++ }
+
+ /* step read_chip_version */
+ rtw_hal_read_chip_version(padapter);
+diff --git a/drivers/target/iscsi/cxgbit/cxgbit_cm.c b/drivers/target/iscsi/cxgbit/cxgbit_cm.c
+index c70caf4ea490..a2b5c796bbc4 100644
+--- a/drivers/target/iscsi/cxgbit/cxgbit_cm.c
++++ b/drivers/target/iscsi/cxgbit/cxgbit_cm.c
+@@ -1831,7 +1831,7 @@ static void cxgbit_fw4_ack(struct cxgbit_sock *csk, struct sk_buff *skb)
+
+ while (credits) {
+ struct sk_buff *p = cxgbit_sock_peek_wr(csk);
+- const u32 csum = (__force u32)p->csum;
++ u32 csum;
+
+ if (unlikely(!p)) {
+ pr_err("csk 0x%p,%u, cr %u,%u+%u, empty.\n",
+@@ -1840,6 +1840,7 @@ static void cxgbit_fw4_ack(struct cxgbit_sock *csk, struct sk_buff *skb)
+ break;
+ }
+
++ csum = (__force u32)p->csum;
+ if (unlikely(credits < csum)) {
+ pr_warn("csk 0x%p,%u, cr %u,%u+%u, < %u.\n",
+ csk, csk->tid,
+diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
+index 27fbe62c7ddd..9c782706e652 100644
+--- a/drivers/thunderbolt/nhi.c
++++ b/drivers/thunderbolt/nhi.c
+@@ -143,9 +143,20 @@ static void __iomem *ring_options_base(struct tb_ring *ring)
+ return io;
+ }
+
+-static void ring_iowrite16desc(struct tb_ring *ring, u32 value, u32 offset)
++static void ring_iowrite_cons(struct tb_ring *ring, u16 cons)
+ {
+- iowrite16(value, ring_desc_base(ring) + offset);
++ * The other 16 bits in the register are read-only, and writes to
++ * them are ignored by the hardware, so we can save one ioread32() by
++ * are ignored by the hardware so we can save one ioread32() by
++ * filling the read-only bits with zeroes.
++ */
++ iowrite32(cons, ring_desc_base(ring) + 8);
++}
++
++static void ring_iowrite_prod(struct tb_ring *ring, u16 prod)
++{
++ /* See ring_iowrite_cons() above for explanation */
++ iowrite32(prod << 16, ring_desc_base(ring) + 8);
+ }
+
+ static void ring_iowrite32desc(struct tb_ring *ring, u32 value, u32 offset)
+@@ -197,7 +208,10 @@ static void ring_write_descriptors(struct tb_ring *ring)
+ descriptor->sof = frame->sof;
+ }
+ ring->head = (ring->head + 1) % ring->size;
+- ring_iowrite16desc(ring, ring->head, ring->is_tx ? 10 : 8);
++ if (ring->is_tx)
++ ring_iowrite_prod(ring, ring->head);
++ else
++ ring_iowrite_cons(ring, ring->head);
+ }
+ }
+
+@@ -662,7 +676,7 @@ void tb_ring_stop(struct tb_ring *ring)
+
+ ring_iowrite32options(ring, 0, 0);
+ ring_iowrite64desc(ring, 0, 0);
+- ring_iowrite16desc(ring, 0, ring->is_tx ? 10 : 8);
++ ring_iowrite32desc(ring, 0, 8);
+ ring_iowrite32desc(ring, 0, 12);
+ ring->head = 0;
+ ring->tail = 0;
+diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
+index 31d0234837e4..5a99234826e7 100644
+--- a/drivers/thunderbolt/tunnel.c
++++ b/drivers/thunderbolt/tunnel.c
+@@ -211,7 +211,7 @@ struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
+ return NULL;
+ }
+ tb_pci_init_path(path);
+- tunnel->paths[TB_PCI_PATH_UP] = path;
++ tunnel->paths[TB_PCI_PATH_DOWN] = path;
+
+ path = tb_path_alloc(tb, up, TB_PCI_HOPID, down, TB_PCI_HOPID, 0,
+ "PCIe Up");
+@@ -220,7 +220,7 @@ struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
+ return NULL;
+ }
+ tb_pci_init_path(path);
+- tunnel->paths[TB_PCI_PATH_DOWN] = path;
++ tunnel->paths[TB_PCI_PATH_UP] = path;
+
+ return tunnel;
+ }
+diff --git a/drivers/tty/n_hdlc.c b/drivers/tty/n_hdlc.c
+index e55c79eb6430..98361acd3053 100644
+--- a/drivers/tty/n_hdlc.c
++++ b/drivers/tty/n_hdlc.c
+@@ -968,6 +968,11 @@ static int __init n_hdlc_init(void)
+
+ } /* end of init_module() */
+
++#ifdef CONFIG_SPARC
++#undef __exitdata
++#define __exitdata
++#endif
++
+ static const char hdlc_unregister_ok[] __exitdata =
+ KERN_INFO "N_HDLC: line discipline unregistered\n";
+ static const char hdlc_unregister_fail[] __exitdata =
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index 3ef65cbd2478..e4b08077f875 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -141,7 +141,7 @@ static void omap8250_set_mctrl(struct uart_port *port, unsigned int mctrl)
+
+ serial8250_do_set_mctrl(port, mctrl);
+
+- if (!up->gpios) {
++ if (!mctrl_gpio_to_gpiod(up->gpios, UART_GPIO_RTS)) {
+ /*
+ * Turn off autoRTS if RTS is lowered and restore autoRTS
+ * setting if RTS is raised
+@@ -456,7 +456,8 @@ static void omap_8250_set_termios(struct uart_port *port,
+ up->port.status &= ~(UPSTAT_AUTOCTS | UPSTAT_AUTORTS | UPSTAT_AUTOXOFF);
+
+ if (termios->c_cflag & CRTSCTS && up->port.flags & UPF_HARD_FLOW &&
+- !up->gpios) {
++ !mctrl_gpio_to_gpiod(up->gpios, UART_GPIO_RTS) &&
++ !mctrl_gpio_to_gpiod(up->gpios, UART_GPIO_CTS)) {
+ /* Enable AUTOCTS (autoRTS is enabled when RTS is raised) */
+ up->port.status |= UPSTAT_AUTOCTS | UPSTAT_AUTORTS;
+ priv->efr |= UART_EFR_CTS;
+diff --git a/drivers/tty/serial/Kconfig b/drivers/tty/serial/Kconfig
+index 3083dbae35f7..3b436ccd29da 100644
+--- a/drivers/tty/serial/Kconfig
++++ b/drivers/tty/serial/Kconfig
+@@ -1075,6 +1075,7 @@ config SERIAL_SIFIVE_CONSOLE
+ bool "Console on SiFive UART"
+ depends on SERIAL_SIFIVE=y
+ select SERIAL_CORE_CONSOLE
++ select SERIAL_EARLYCON
+ help
+ Select this option if you would like to use a SiFive UART as the
+ system console.
+diff --git a/drivers/tty/serial/owl-uart.c b/drivers/tty/serial/owl-uart.c
+index 29a6dc6a8d23..73fcc6bdb031 100644
+--- a/drivers/tty/serial/owl-uart.c
++++ b/drivers/tty/serial/owl-uart.c
+@@ -742,7 +742,7 @@ static int __init owl_uart_init(void)
+ return ret;
+ }
+
+-static void __init owl_uart_exit(void)
++static void __exit owl_uart_exit(void)
+ {
+ platform_driver_unregister(&owl_uart_platform_driver);
+ uart_unregister_driver(&owl_uart_driver);
+diff --git a/drivers/tty/serial/rda-uart.c b/drivers/tty/serial/rda-uart.c
+index 284623eefaeb..ba5e488a0374 100644
+--- a/drivers/tty/serial/rda-uart.c
++++ b/drivers/tty/serial/rda-uart.c
+@@ -817,7 +817,7 @@ static int __init rda_uart_init(void)
+ return ret;
+ }
+
+-static void __init rda_uart_exit(void)
++static void __exit rda_uart_exit(void)
+ {
+ platform_driver_unregister(&rda_uart_platform_driver);
+ uart_unregister_driver(&rda_uart_driver);
+diff --git a/drivers/tty/serial/serial_mctrl_gpio.c b/drivers/tty/serial/serial_mctrl_gpio.c
+index 2b400189be91..54c43e02e375 100644
+--- a/drivers/tty/serial/serial_mctrl_gpio.c
++++ b/drivers/tty/serial/serial_mctrl_gpio.c
+@@ -61,6 +61,9 @@ EXPORT_SYMBOL_GPL(mctrl_gpio_set);
+ struct gpio_desc *mctrl_gpio_to_gpiod(struct mctrl_gpios *gpios,
+ enum mctrl_gpio_idx gidx)
+ {
++ if (gpios == NULL)
++ return NULL;
++
+ return gpios->gpio[gidx];
+ }
+ EXPORT_SYMBOL_GPL(mctrl_gpio_to_gpiod);
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index 7cf34beb50df..4d8f8f4ecf98 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -98,6 +98,17 @@ int usb_ep_enable(struct usb_ep *ep)
+ if (ep->enabled)
+ goto out;
+
++ /* UDC drivers can't handle endpoints with maxpacket size 0 */
++ if (usb_endpoint_maxp(ep->desc) == 0) {
++ /*
++ * We should log an error message here, but we can't call
++ * dev_err() because there's no way to find the gadget
++ * given only ep.
++ */
++ ret = -EINVAL;
++ goto out;
++ }
++
+ ret = ep->ops->enable(ep, ep->desc);
+ if (ret)
+ goto out;
+diff --git a/drivers/usb/host/xhci-debugfs.c b/drivers/usb/host/xhci-debugfs.c
+index 7ba6afc7ef23..76c3f29562d2 100644
+--- a/drivers/usb/host/xhci-debugfs.c
++++ b/drivers/usb/host/xhci-debugfs.c
+@@ -202,10 +202,10 @@ static void xhci_ring_dump_segment(struct seq_file *s,
+ trb = &seg->trbs[i];
+ dma = seg->dma + i * sizeof(*trb);
+ seq_printf(s, "%pad: %s\n", &dma,
+- xhci_decode_trb(trb->generic.field[0],
+- trb->generic.field[1],
+- trb->generic.field[2],
+- trb->generic.field[3]));
++ xhci_decode_trb(le32_to_cpu(trb->generic.field[0]),
++ le32_to_cpu(trb->generic.field[1]),
++ le32_to_cpu(trb->generic.field[2]),
++ le32_to_cpu(trb->generic.field[3])));
+ }
+ }
+
+@@ -263,10 +263,10 @@ static int xhci_slot_context_show(struct seq_file *s, void *unused)
+ xhci = hcd_to_xhci(bus_to_hcd(dev->udev->bus));
+ slot_ctx = xhci_get_slot_ctx(xhci, dev->out_ctx);
+ seq_printf(s, "%pad: %s\n", &dev->out_ctx->dma,
+- xhci_decode_slot_context(slot_ctx->dev_info,
+- slot_ctx->dev_info2,
+- slot_ctx->tt_info,
+- slot_ctx->dev_state));
++ xhci_decode_slot_context(le32_to_cpu(slot_ctx->dev_info),
++ le32_to_cpu(slot_ctx->dev_info2),
++ le32_to_cpu(slot_ctx->tt_info),
++ le32_to_cpu(slot_ctx->dev_state)));
+
+ return 0;
+ }
+@@ -286,10 +286,10 @@ static int xhci_endpoint_context_show(struct seq_file *s, void *unused)
+ ep_ctx = xhci_get_ep_ctx(xhci, dev->out_ctx, dci);
+ dma = dev->out_ctx->dma + dci * CTX_SIZE(xhci->hcc_params);
+ seq_printf(s, "%pad: %s\n", &dma,
+- xhci_decode_ep_context(ep_ctx->ep_info,
+- ep_ctx->ep_info2,
+- ep_ctx->deq,
+- ep_ctx->tx_info));
++ xhci_decode_ep_context(le32_to_cpu(ep_ctx->ep_info),
++ le32_to_cpu(ep_ctx->ep_info2),
++ le64_to_cpu(ep_ctx->deq),
++ le32_to_cpu(ep_ctx->tx_info)));
+ }
+
+ return 0;
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 85ceb43e3405..e7aab31fd9a5 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -3330,6 +3330,7 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ if (xhci_urb_suitable_for_idt(urb)) {
+ memcpy(&send_addr, urb->transfer_buffer,
+ trb_buff_len);
++ le64_to_cpus(&send_addr);
+ field |= TRB_IDT;
+ }
+ }
+@@ -3475,6 +3476,7 @@ int xhci_queue_ctrl_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ if (xhci_urb_suitable_for_idt(urb)) {
+ memcpy(&addr, urb->transfer_buffer,
+ urb->transfer_buffer_length);
++ le64_to_cpus(&addr);
+ field |= TRB_IDT;
+ } else {
+ addr = (u64) urb->transfer_dma;
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index ee9d2e0fc53a..270e45058272 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -3071,6 +3071,48 @@ void xhci_cleanup_stalled_ring(struct xhci_hcd *xhci, unsigned int ep_index,
+ }
+ }
+
++static void xhci_endpoint_disable(struct usb_hcd *hcd,
++ struct usb_host_endpoint *host_ep)
++{
++ struct xhci_hcd *xhci;
++ struct xhci_virt_device *vdev;
++ struct xhci_virt_ep *ep;
++ struct usb_device *udev;
++ unsigned long flags;
++ unsigned int ep_index;
++
++ xhci = hcd_to_xhci(hcd);
++rescan:
++ spin_lock_irqsave(&xhci->lock, flags);
++
++ udev = (struct usb_device *)host_ep->hcpriv;
++ if (!udev || !udev->slot_id)
++ goto done;
++
++ vdev = xhci->devs[udev->slot_id];
++ if (!vdev)
++ goto done;
++
++ ep_index = xhci_get_endpoint_index(&host_ep->desc);
++ ep = &vdev->eps[ep_index];
++ if (!ep)
++ goto done;
++
++ /* wait for hub_tt_work to finish clearing hub TT */
++ if (ep->ep_state & EP_CLEARING_TT) {
++ spin_unlock_irqrestore(&xhci->lock, flags);
++ schedule_timeout_uninterruptible(1);
++ goto rescan;
++ }
++
++ if (ep->ep_state)
++ xhci_dbg(xhci, "endpoint disable with ep_state 0x%x\n",
++ ep->ep_state);
++done:
++ host_ep->hcpriv = NULL;
++ spin_unlock_irqrestore(&xhci->lock, flags);
++}
++
+ /*
+ * Called after usb core issues a clear halt control message.
+ * The host side of the halt should already be cleared by a reset endpoint
+@@ -5237,20 +5279,13 @@ static void xhci_clear_tt_buffer_complete(struct usb_hcd *hcd,
+ unsigned int ep_index;
+ unsigned long flags;
+
+- /*
+- * udev might be NULL if tt buffer is cleared during a failed device
+- * enumeration due to a halted control endpoint. Usb core might
+- * have allocated a new udev for the next enumeration attempt.
+- */
+-
+ xhci = hcd_to_xhci(hcd);
++
++ spin_lock_irqsave(&xhci->lock, flags);
+ udev = (struct usb_device *)ep->hcpriv;
+- if (!udev)
+- return;
+ slot_id = udev->slot_id;
+ ep_index = xhci_get_endpoint_index(&ep->desc);
+
+- spin_lock_irqsave(&xhci->lock, flags);
+ xhci->devs[slot_id]->eps[ep_index].ep_state &= ~EP_CLEARING_TT;
+ xhci_ring_doorbell_for_active_rings(xhci, slot_id, ep_index);
+ spin_unlock_irqrestore(&xhci->lock, flags);
+@@ -5287,6 +5322,7 @@ static const struct hc_driver xhci_hc_driver = {
+ .free_streams = xhci_free_streams,
+ .add_endpoint = xhci_add_endpoint,
+ .drop_endpoint = xhci_drop_endpoint,
++ .endpoint_disable = xhci_endpoint_disable,
+ .endpoint_reset = xhci_endpoint_reset,
+ .check_bandwidth = xhci_check_bandwidth,
+ .reset_bandwidth = xhci_reset_bandwidth,
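+
+xhci_endpoint_disable() above must not sleep while holding xhci->lock, so
+when it finds EP_CLEARING_TT set it drops the lock, sleeps a tick, and jumps
+back to 'rescan', revalidating every pointer because the device state may
+have changed while the lock was released. A sketch of that
+drop-sleep-revalidate loop with a pthread mutex standing in for the spinlock
+(illustrative only):
+
+    #include <pthread.h>
+    #include <stdbool.h>
+    #include <stddef.h>
+    #include <unistd.h>
+
+    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
+    static bool busy;        /* stands in for EP_CLEARING_TT */
+    static int *resource;    /* must be revalidated after every retake */
+
+    static void disable_resource(void)
+    {
+    rescan:
+            pthread_mutex_lock(&lock);
+            if (resource == NULL) {         /* revalidate: may be gone */
+                    pthread_mutex_unlock(&lock);
+                    return;
+            }
+            if (busy) {
+                    /* cannot sleep with the lock held: drop, nap, rescan */
+                    pthread_mutex_unlock(&lock);
+                    usleep(1000);
+                    goto rescan;
+            }
+            resource = NULL;                /* tear down under the lock */
+            pthread_mutex_unlock(&lock);
+    }
+
+    int main(void)
+    {
+            int r = 42;
+
+            resource = &r;
+            disable_resource();
+            return 0;
+    }
+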
+diff --git a/drivers/usb/misc/ldusb.c b/drivers/usb/misc/ldusb.c
+index 15b5f06fb0b3..f5e34c503454 100644
+--- a/drivers/usb/misc/ldusb.c
++++ b/drivers/usb/misc/ldusb.c
+@@ -495,11 +495,11 @@ static ssize_t ld_usb_read(struct file *file, char __user *buffer, size_t count,
+ retval = -EFAULT;
+ goto unlock_exit;
+ }
+- dev->ring_tail = (dev->ring_tail+1) % ring_buffer_size;
+-
+ retval = bytes_to_read;
+
+ spin_lock_irq(&dev->rbsl);
++ dev->ring_tail = (dev->ring_tail + 1) % ring_buffer_size;
++
+ if (dev->buffer_overflow) {
+ dev->buffer_overflow = 0;
+ spin_unlock_irq(&dev->rbsl);
+@@ -580,7 +580,7 @@ static ssize_t ld_usb_write(struct file *file, const char __user *buffer,
+ 1 << 8, 0,
+ dev->interrupt_out_buffer,
+ bytes_to_write,
+- USB_CTRL_SET_TIMEOUT * HZ);
++ USB_CTRL_SET_TIMEOUT);
+ if (retval < 0)
+ dev_err(&dev->intf->dev,
+ "Couldn't submit HID_REQ_SET_REPORT %d\n",
+diff --git a/drivers/usb/misc/legousbtower.c b/drivers/usb/misc/legousbtower.c
+index 62dab2441ec4..23061f1526b4 100644
+--- a/drivers/usb/misc/legousbtower.c
++++ b/drivers/usb/misc/legousbtower.c
+@@ -878,7 +878,7 @@ static int tower_probe (struct usb_interface *interface, const struct usb_device
+ get_version_reply,
+ sizeof(*get_version_reply),
+ 1000);
+- if (result < sizeof(*get_version_reply)) {
++ if (result != sizeof(*get_version_reply)) {
+ if (result >= 0)
+ result = -EIO;
+ dev_err(idev, "get version request failed: %d\n", result);
+diff --git a/drivers/usb/serial/whiteheat.c b/drivers/usb/serial/whiteheat.c
+index 79314d8c94a4..ca3bd58f2025 100644
+--- a/drivers/usb/serial/whiteheat.c
++++ b/drivers/usb/serial/whiteheat.c
+@@ -559,6 +559,10 @@ static int firm_send_command(struct usb_serial_port *port, __u8 command,
+
+ command_port = port->serial->port[COMMAND_PORT];
+ command_info = usb_get_serial_port_data(command_port);
++
++ if (command_port->bulk_out_size < datasize + 1)
++ return -EIO;
++
+ mutex_lock(&command_info->mutex);
+ command_info->command_finished = false;
+
+@@ -632,6 +636,7 @@ static void firm_setup_port(struct tty_struct *tty)
+ struct device *dev = &port->dev;
+ struct whiteheat_port_settings port_settings;
+ unsigned int cflag = tty->termios.c_cflag;
++ speed_t baud;
+
+ port_settings.port = port->port_number + 1;
+
+@@ -692,11 +697,13 @@ static void firm_setup_port(struct tty_struct *tty)
+ dev_dbg(dev, "%s - XON = %2x, XOFF = %2x\n", __func__, port_settings.xon, port_settings.xoff);
+
+ /* get the baud rate wanted */
+- port_settings.baud = tty_get_baud_rate(tty);
+- dev_dbg(dev, "%s - baud rate = %d\n", __func__, port_settings.baud);
++ baud = tty_get_baud_rate(tty);
++ port_settings.baud = cpu_to_le32(baud);
++ dev_dbg(dev, "%s - baud rate = %u\n", __func__, baud);
+
+ /* fixme: should set validated settings */
+- tty_encode_baud_rate(tty, port_settings.baud, port_settings.baud);
++ tty_encode_baud_rate(tty, baud, baud);
++
+ /* handle any settings that aren't specified in the tty structure */
+ port_settings.lloop = 0;
+
+diff --git a/drivers/usb/serial/whiteheat.h b/drivers/usb/serial/whiteheat.h
+index 00398149cd8d..269e727a92f9 100644
+--- a/drivers/usb/serial/whiteheat.h
++++ b/drivers/usb/serial/whiteheat.h
+@@ -87,7 +87,7 @@ struct whiteheat_simple {
+
+ struct whiteheat_port_settings {
+ __u8 port; /* port number (1 to N) */
+- __u32 baud; /* any value 7 - 460800, firmware calculates
++ __le32 baud; /* any value 7 - 460800, firmware calculates
+ best fit; arrives little endian */
+ __u8 bits; /* 5, 6, 7, or 8 */
+ __u8 stop; /* 1 or 2, default 1 (2 = 1.5 if bits = 5) */
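+
+The whiteheat pair fixes byte order on the wire: the firmware expects the
+baud rate little-endian, so the struct field is annotated __le32 (letting
+sparse catch mismatches) and the driver converts once with cpu_to_le32(),
+keeping a native-endian speed_t for the tty bookkeeping. A userspace sketch
+of serializing one field at the wire boundary (the helper below is a
+portable stand-in for cpu_to_le32()):
+
+    #include <stdint.h>
+    #include <stdio.h>
+
+    struct port_settings_wire {
+            uint8_t port;
+            uint8_t baud_le[4];  /* little-endian on the wire, like __le32 */
+    };
+
+    static void store_le32(uint8_t out[4], uint32_t v)
+    {
+            out[0] = v & 0xff;
+            out[1] = (v >> 8) & 0xff;
+            out[2] = (v >> 16) & 0xff;
+            out[3] = (v >> 24) & 0xff;
+    }
+
+    int main(void)
+    {
+            uint32_t baud = 115200;        /* native-endian for local math */
+            struct port_settings_wire w = { .port = 1 };
+
+            store_le32(w.baud_le, baud);   /* convert once, at the boundary */
+            printf("%02x %02x %02x %02x\n", w.baud_le[0], w.baud_le[1],
+                   w.baud_le[2], w.baud_le[3]);  /* 00 c2 01 00 */
+            return 0;
+    }
+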
+diff --git a/drivers/usb/storage/scsiglue.c b/drivers/usb/storage/scsiglue.c
+index 05b80211290d..f3c4caf64051 100644
+--- a/drivers/usb/storage/scsiglue.c
++++ b/drivers/usb/storage/scsiglue.c
+@@ -67,7 +67,6 @@ static const char* host_info(struct Scsi_Host *host)
+ static int slave_alloc (struct scsi_device *sdev)
+ {
+ struct us_data *us = host_to_us(sdev->host);
+- int maxp;
+
+ /*
+ * Set the INQUIRY transfer length to 36. We don't use any of
+@@ -76,15 +75,6 @@ static int slave_alloc (struct scsi_device *sdev)
+ */
+ sdev->inquiry_len = 36;
+
+- /*
+- * USB has unusual scatter-gather requirements: the length of each
+- * scatterlist element except the last must be divisible by the
+- * Bulk maxpacket value. Fortunately this value is always a
+- * power of 2. Inform the block layer about this requirement.
+- */
+- maxp = usb_maxpacket(us->pusb_dev, us->recv_bulk_pipe, 0);
+- blk_queue_virt_boundary(sdev->request_queue, maxp - 1);
+-
+ /*
+ * Some host controllers may have alignment requirements.
+ * We'll play it safe by requiring 512-byte alignment always.
+diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
+index 047c5922618f..0d044d59317e 100644
+--- a/drivers/usb/storage/uas.c
++++ b/drivers/usb/storage/uas.c
+@@ -789,29 +789,9 @@ static int uas_slave_alloc(struct scsi_device *sdev)
+ {
+ struct uas_dev_info *devinfo =
+ (struct uas_dev_info *)sdev->host->hostdata;
+- int maxp;
+
+ sdev->hostdata = devinfo;
+
+- /*
+- * We have two requirements here. We must satisfy the requirements
+- * of the physical HC and the demands of the protocol, as we
+- * definitely want no additional memory allocation in this path
+- * ruling out using bounce buffers.
+- *
+- * For a transmission on USB to continue we must never send
+- * a package that is smaller than maxpacket. Hence the length of each
+- * scatterlist element except the last must be divisible by the
+- * Bulk maxpacket value.
+- * If the HC does not ensure that through SG,
+- * the upper layer must do that. We must assume nothing
+- * about the capabilities off the HC, so we use the most
+- * pessimistic requirement.
+- */
+-
+- maxp = usb_maxpacket(devinfo->udev, devinfo->data_in_pipe, 0);
+- blk_queue_virt_boundary(sdev->request_queue, maxp - 1);
+-
+ /*
+ * The protocol has no requirements on alignment in the strict sense.
+ * Controllers may or may not have alignment restrictions.
+diff --git a/drivers/virt/vboxguest/vboxguest_utils.c b/drivers/virt/vboxguest/vboxguest_utils.c
+index 75fd140b02ff..43c391626a00 100644
+--- a/drivers/virt/vboxguest/vboxguest_utils.c
++++ b/drivers/virt/vboxguest/vboxguest_utils.c
+@@ -220,6 +220,8 @@ static int hgcm_call_preprocess_linaddr(
+ if (!bounce_buf)
+ return -ENOMEM;
+
++ *bounce_buf_ret = bounce_buf;
++
+ if (copy_in) {
+ ret = copy_from_user(bounce_buf, (void __user *)buf, len);
+ if (ret)
+@@ -228,7 +230,6 @@ static int hgcm_call_preprocess_linaddr(
+ memset(bounce_buf, 0, len);
+ }
+
+- *bounce_buf_ret = bounce_buf;
+ hgcm_call_add_pagelist_size(bounce_buf, len, extra);
+ return 0;
+ }
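+
+The vboxguest fix moves the *bounce_buf_ret assignment ahead of the fallible
+copy_from_user(), so the buffer's ownership passes to the caller before the
+first failure point and the caller's cleanup frees it instead of leaking it.
+A minimal sketch of publish-before-failure (names illustrative):
+
+    #include <stdio.h>
+    #include <stdlib.h>
+
+    static int fill(void *buf, size_t len)
+    {
+            (void)buf; (void)len;
+            return -1;              /* pretend the copy-in failed */
+    }
+
+    /* On error the caller frees *out, which is either NULL or valid. */
+    static int prepare(void **out, size_t len)
+    {
+            void *buf = malloc(len);
+
+            if (!buf)
+                    return -1;
+            *out = buf;             /* publish before anything can fail */
+            if (fill(buf, len))
+                    return -1;      /* no leak: caller frees via *out */
+            return 0;
+    }
+
+    int main(void)
+    {
+            void *buf = NULL;
+
+            if (prepare(&buf, 64) != 0)
+                    free(buf);      /* safe for NULL or the published buffer */
+            return 0;
+    }
+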
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index bdc08244a648..a8041e451e9e 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -1499,9 +1499,6 @@ static bool virtqueue_enable_cb_delayed_packed(struct virtqueue *_vq)
+ * counter first before updating event flags.
+ */
+ virtio_wmb(vq->weak_barriers);
+- } else {
+- used_idx = vq->last_used_idx;
+- wrap_counter = vq->packed.used_wrap_counter;
+ }
+
+ if (vq->packed.event_flags_shadow == VRING_PACKED_EVENT_FLAG_DISABLE) {
+@@ -1518,7 +1515,9 @@ static bool virtqueue_enable_cb_delayed_packed(struct virtqueue *_vq)
+ */
+ virtio_mb(vq->weak_barriers);
+
+- if (is_used_desc_packed(vq, used_idx, wrap_counter)) {
++ if (is_used_desc_packed(vq,
++ vq->last_used_idx,
++ vq->packed.used_wrap_counter)) {
+ END_USE(vq);
+ return false;
+ }
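+
+The virtio_ring change stops caching last_used_idx and used_wrap_counter
+before the barrier: is_used_desc_packed() must see values read *after*
+virtio_mb(), or the recheck can compare against a stale snapshot and miss a
+buffer the device completed in the window. The shape of the fix, sketched
+with C11 atomics (a loose analogy, not the virtio code):
+
+    #include <stdatomic.h>
+    #include <stdbool.h>
+
+    static atomic_int used_idx;        /* advanced by the device side */
+    static atomic_int events_enabled;
+
+    /* Returns false when work raced in and the caller must re-poll. */
+    static bool enable_cb_delayed(int last_seen)
+    {
+            atomic_store(&events_enabled, 1);
+            atomic_thread_fence(memory_order_seq_cst);  /* like virtio_mb() */
+
+            /* Load *after* the fence; comparing a pre-fence snapshot is
+             * exactly the bug the patch removes. */
+            return atomic_load(&used_idx) == last_seen;
+    }
+
+    int main(void)
+    {
+            return enable_cb_delayed(0) ? 0 : 1;
+    }
+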
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index e7a1ec075c65..180749080fd8 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -2758,8 +2758,7 @@ int btrfs_subvolume_reserve_metadata(struct btrfs_root *root,
+ int nitems, bool use_global_rsv);
+ void btrfs_subvolume_release_metadata(struct btrfs_fs_info *fs_info,
+ struct btrfs_block_rsv *rsv);
+-void btrfs_delalloc_release_extents(struct btrfs_inode *inode, u64 num_bytes,
+- bool qgroup_free);
++void btrfs_delalloc_release_extents(struct btrfs_inode *inode, u64 num_bytes);
+
+ int btrfs_delalloc_reserve_metadata(struct btrfs_inode *inode, u64 num_bytes);
+ int btrfs_inc_block_group_ro(struct btrfs_block_group_cache *cache);
+diff --git a/fs/btrfs/delalloc-space.c b/fs/btrfs/delalloc-space.c
+index 934521fe7e71..8d2bb28ff5e0 100644
+--- a/fs/btrfs/delalloc-space.c
++++ b/fs/btrfs/delalloc-space.c
+@@ -407,7 +407,6 @@ void btrfs_delalloc_release_metadata(struct btrfs_inode *inode, u64 num_bytes,
+ * btrfs_delalloc_release_extents - release our outstanding_extents
+ * @inode: the inode to balance the reservation for.
+ * @num_bytes: the number of bytes we originally reserved with
+- * @qgroup_free: do we need to free qgroup meta reservation or convert them.
+ *
+ * When we reserve space we increase outstanding_extents for the extents we may
+ * add. Once we've set the range as delalloc or created our ordered extents we
+@@ -415,8 +414,7 @@ void btrfs_delalloc_release_metadata(struct btrfs_inode *inode, u64 num_bytes,
+ * temporarily tracked outstanding_extents. This _must_ be used in conjunction
+ * with btrfs_delalloc_reserve_metadata.
+ */
+-void btrfs_delalloc_release_extents(struct btrfs_inode *inode, u64 num_bytes,
+- bool qgroup_free)
++void btrfs_delalloc_release_extents(struct btrfs_inode *inode, u64 num_bytes)
+ {
+ struct btrfs_fs_info *fs_info = inode->root->fs_info;
+ unsigned num_extents;
+@@ -430,7 +428,7 @@ void btrfs_delalloc_release_extents(struct btrfs_inode *inode, u64 num_bytes,
+ if (btrfs_is_testing(fs_info))
+ return;
+
+- btrfs_inode_rsv_release(inode, qgroup_free);
++ btrfs_inode_rsv_release(inode, true);
+ }
+
+ /**
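+
+The btrfs series drops the qgroup_free flag from
+btrfs_delalloc_release_extents(): the callee now always passes true to
+btrfs_inode_rsv_release(), and the one inode-map error path that wanted
+different behaviour compensates with an explicit
+btrfs_delalloc_release_metadata() call, so every other call site simply
+loses the trailing argument. Folding a near-constant flag into the callee is
+a common cleanup; a hedged before/after sketch:
+
+    #include <stdbool.h>
+    #include <stdio.h>
+
+    /* Before: every caller threaded the same value through. */
+    static void release_old(int n, bool qgroup_free)
+    {
+            printf("release %d (qgroup_free=%d)\n", n, qgroup_free);
+    }
+
+    /* After: the flag lives where the behaviour is decided. */
+    static void release_new(int n)
+    {
+            release_old(n, true);
+    }
+
+    int main(void)
+    {
+            release_new(4096);
+            return 0;
+    }
+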
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index d68add0bf346..a8a2adaf222f 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -1692,7 +1692,7 @@ again:
+ force_page_uptodate);
+ if (ret) {
+ btrfs_delalloc_release_extents(BTRFS_I(inode),
+- reserve_bytes, true);
++ reserve_bytes);
+ break;
+ }
+
+@@ -1704,7 +1704,7 @@ again:
+ if (extents_locked == -EAGAIN)
+ goto again;
+ btrfs_delalloc_release_extents(BTRFS_I(inode),
+- reserve_bytes, true);
++ reserve_bytes);
+ ret = extents_locked;
+ break;
+ }
+@@ -1772,8 +1772,7 @@ again:
+ else
+ free_extent_state(cached_state);
+
+- btrfs_delalloc_release_extents(BTRFS_I(inode), reserve_bytes,
+- true);
++ btrfs_delalloc_release_extents(BTRFS_I(inode), reserve_bytes);
+ if (ret) {
+ btrfs_drop_pages(pages, num_pages);
+ break;
+diff --git a/fs/btrfs/inode-map.c b/fs/btrfs/inode-map.c
+index 2e8bb402050b..e2f49615c429 100644
+--- a/fs/btrfs/inode-map.c
++++ b/fs/btrfs/inode-map.c
+@@ -484,12 +484,13 @@ again:
+ ret = btrfs_prealloc_file_range_trans(inode, trans, 0, 0, prealloc,
+ prealloc, prealloc, &alloc_hint);
+ if (ret) {
+- btrfs_delalloc_release_extents(BTRFS_I(inode), prealloc, true);
++ btrfs_delalloc_release_extents(BTRFS_I(inode), prealloc);
++ btrfs_delalloc_release_metadata(BTRFS_I(inode), prealloc, true);
+ goto out_put;
+ }
+
+ ret = btrfs_write_out_ino_cache(root, trans, path, inode);
+- btrfs_delalloc_release_extents(BTRFS_I(inode), prealloc, false);
++ btrfs_delalloc_release_extents(BTRFS_I(inode), prealloc);
+ out_put:
+ iput(inode);
+ out_release:
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index b3453adb214d..1b85278471f6 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -2167,7 +2167,7 @@ again:
+
+ ClearPageChecked(page);
+ set_page_dirty(page);
+- btrfs_delalloc_release_extents(BTRFS_I(inode), PAGE_SIZE, false);
++ btrfs_delalloc_release_extents(BTRFS_I(inode), PAGE_SIZE);
+ out:
+ unlock_extent_cached(&BTRFS_I(inode)->io_tree, page_start, page_end,
+ &cached_state);
+@@ -4912,7 +4912,7 @@ again:
+ if (!page) {
+ btrfs_delalloc_release_space(inode, data_reserved,
+ block_start, blocksize, true);
+- btrfs_delalloc_release_extents(BTRFS_I(inode), blocksize, true);
++ btrfs_delalloc_release_extents(BTRFS_I(inode), blocksize);
+ ret = -ENOMEM;
+ goto out;
+ }
+@@ -4980,7 +4980,7 @@ out_unlock:
+ if (ret)
+ btrfs_delalloc_release_space(inode, data_reserved, block_start,
+ blocksize, true);
+- btrfs_delalloc_release_extents(BTRFS_I(inode), blocksize, (ret != 0));
++ btrfs_delalloc_release_extents(BTRFS_I(inode), blocksize);
+ unlock_page(page);
+ put_page(page);
+ out:
+@@ -8685,7 +8685,7 @@ static ssize_t btrfs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ } else if (ret >= 0 && (size_t)ret < count)
+ btrfs_delalloc_release_space(inode, data_reserved,
+ offset, count - (size_t)ret, true);
+- btrfs_delalloc_release_extents(BTRFS_I(inode), count, false);
++ btrfs_delalloc_release_extents(BTRFS_I(inode), count);
+ }
+ out:
+ if (wakeup)
+@@ -9038,7 +9038,7 @@ again:
+ unlock_extent_cached(io_tree, page_start, page_end, &cached_state);
+
+ if (!ret2) {
+- btrfs_delalloc_release_extents(BTRFS_I(inode), PAGE_SIZE, true);
++ btrfs_delalloc_release_extents(BTRFS_I(inode), PAGE_SIZE);
+ sb_end_pagefault(inode->i_sb);
+ extent_changeset_free(data_reserved);
+ return VM_FAULT_LOCKED;
+@@ -9047,7 +9047,7 @@ again:
+ out_unlock:
+ unlock_page(page);
+ out:
+- btrfs_delalloc_release_extents(BTRFS_I(inode), PAGE_SIZE, (ret != 0));
++ btrfs_delalloc_release_extents(BTRFS_I(inode), PAGE_SIZE);
+ btrfs_delalloc_release_space(inode, data_reserved, page_start,
+ reserved_space, (ret != 0));
+ out_noreserve:
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 818f7ec8bb0e..8dad66df74ed 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -1360,8 +1360,7 @@ again:
+ unlock_page(pages[i]);
+ put_page(pages[i]);
+ }
+- btrfs_delalloc_release_extents(BTRFS_I(inode), page_cnt << PAGE_SHIFT,
+- false);
++ btrfs_delalloc_release_extents(BTRFS_I(inode), page_cnt << PAGE_SHIFT);
+ extent_changeset_free(data_reserved);
+ return i_done;
+ out:
+@@ -1372,8 +1371,7 @@ out:
+ btrfs_delalloc_release_space(inode, data_reserved,
+ start_index << PAGE_SHIFT,
+ page_cnt << PAGE_SHIFT, true);
+- btrfs_delalloc_release_extents(BTRFS_I(inode), page_cnt << PAGE_SHIFT,
+- true);
++ btrfs_delalloc_release_extents(BTRFS_I(inode), page_cnt << PAGE_SHIFT);
+ extent_changeset_free(data_reserved);
+ return ret;
+
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 074947bebd16..572314aebdf1 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -3277,7 +3277,7 @@ static int relocate_file_extent_cluster(struct inode *inode,
+ btrfs_delalloc_release_metadata(BTRFS_I(inode),
+ PAGE_SIZE, true);
+ btrfs_delalloc_release_extents(BTRFS_I(inode),
+- PAGE_SIZE, true);
++ PAGE_SIZE);
+ ret = -ENOMEM;
+ goto out;
+ }
+@@ -3298,7 +3298,7 @@ static int relocate_file_extent_cluster(struct inode *inode,
+ btrfs_delalloc_release_metadata(BTRFS_I(inode),
+ PAGE_SIZE, true);
+ btrfs_delalloc_release_extents(BTRFS_I(inode),
+- PAGE_SIZE, true);
++ PAGE_SIZE);
+ ret = -EIO;
+ goto out;
+ }
+@@ -3327,7 +3327,7 @@ static int relocate_file_extent_cluster(struct inode *inode,
+ btrfs_delalloc_release_metadata(BTRFS_I(inode),
+ PAGE_SIZE, true);
+ btrfs_delalloc_release_extents(BTRFS_I(inode),
+- PAGE_SIZE, true);
++ PAGE_SIZE);
+
+ clear_extent_bits(&BTRFS_I(inode)->io_tree,
+ page_start, page_end,
+@@ -3343,8 +3343,7 @@ static int relocate_file_extent_cluster(struct inode *inode,
+ put_page(page);
+
+ index++;
+- btrfs_delalloc_release_extents(BTRFS_I(inode), PAGE_SIZE,
+- false);
++ btrfs_delalloc_release_extents(BTRFS_I(inode), PAGE_SIZE);
+ balance_dirty_pages_ratelimited(inode->i_mapping);
+ btrfs_throttle(fs_info);
+ }
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index c3c0c064c25d..91c702b4cae9 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -5070,7 +5070,7 @@ static int clone_range(struct send_ctx *sctx,
+ struct btrfs_path *path;
+ struct btrfs_key key;
+ int ret;
+- u64 clone_src_i_size;
++ u64 clone_src_i_size = 0;
+
+ /*
+ * Prevent cloning from a zero offset with a length matching the sector
+diff --git a/fs/cifs/netmisc.c b/fs/cifs/netmisc.c
+index ed92958e842d..657f409d4de0 100644
+--- a/fs/cifs/netmisc.c
++++ b/fs/cifs/netmisc.c
+@@ -117,10 +117,6 @@ static const struct smb_to_posix_error mapping_table_ERRSRV[] = {
+ {0, 0}
+ };
+
+-static const struct smb_to_posix_error mapping_table_ERRHRD[] = {
+- {0, 0}
+-};
+-
+ /*
+ * Convert a string containing text IPv4 or IPv6 address to binary form.
+ *
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index dd0f64f7bc06..0c4b6a41e385 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -1476,6 +1476,19 @@ int fuse_do_setattr(struct dentry *dentry, struct iattr *attr,
+ is_truncate = true;
+ }
+
++ /* Flush dirty data/metadata before non-truncate SETATTR */
++ if (is_wb && S_ISREG(inode->i_mode) &&
++ attr->ia_valid &
++ (ATTR_MODE | ATTR_UID | ATTR_GID | ATTR_MTIME_SET |
++ ATTR_TIMES_SET)) {
++ err = write_inode_now(inode, true);
++ if (err)
++ return err;
++
++ fuse_set_nowrite(inode);
++ fuse_release_nowrite(inode);
++ }
++
+ if (is_truncate) {
+ fuse_set_nowrite(inode);
+ set_bit(FUSE_I_SIZE_UNSTABLE, &fi->state);
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 91c99724dee0..d199dc0fbac1 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -201,7 +201,7 @@ int fuse_open_common(struct inode *inode, struct file *file, bool isdir)
+ {
+ struct fuse_conn *fc = get_fuse_conn(inode);
+ int err;
+- bool lock_inode = (file->f_flags & O_TRUNC) &&
++ bool is_wb_truncate = (file->f_flags & O_TRUNC) &&
+ fc->atomic_o_trunc &&
+ fc->writeback_cache;
+
+@@ -209,16 +209,20 @@ int fuse_open_common(struct inode *inode, struct file *file, bool isdir)
+ if (err)
+ return err;
+
+- if (lock_inode)
++ if (is_wb_truncate) {
+ inode_lock(inode);
++ fuse_set_nowrite(inode);
++ }
+
+ err = fuse_do_open(fc, get_node_id(inode), file, isdir);
+
+ if (!err)
+ fuse_finish_open(inode, file);
+
+- if (lock_inode)
++ if (is_wb_truncate) {
++ fuse_release_nowrite(inode);
+ inode_unlock(inode);
++ }
+
+ return err;
+ }
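+
+Both fuse hunks harden writeback-cache mode: fuse_do_setattr() flushes dirty
+data and briefly takes the nowrite state before a non-truncating
+chmod/chown/utimes reaches the server (so later writeback cannot clobber the
+newly set attributes), and a truncating open holds the nowrite state across
+the O_TRUNC open so no write slips in between flush and truncate. The
+quiesce-operate-resume shape, with a mutex standing in for
+fuse_set_nowrite()/fuse_release_nowrite():
+
+    #include <pthread.h>
+    #include <stdio.h>
+
+    static pthread_mutex_t nowrite = PTHREAD_MUTEX_INITIALIZER;
+
+    static void flush_dirty(void)  { puts("flushed"); }
+    static void send_setattr(void) { puts("setattr sent"); }
+
+    static void do_setattr(void)
+    {
+            flush_dirty();                  /* write_inode_now(inode, true) */
+            pthread_mutex_lock(&nowrite);   /* fuse_set_nowrite(): drain writers */
+            pthread_mutex_unlock(&nowrite); /* fuse_release_nowrite() */
+            send_setattr();                 /* server sees settled state */
+    }
+
+    int main(void)
+    {
+            do_setattr();
+            return 0;
+    }
+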
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index ed223c33dd89..37da4ea68f50 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -338,6 +338,8 @@ struct io_kiocb {
+ #define REQ_F_LINK 64 /* linked sqes */
+ #define REQ_F_LINK_DONE 128 /* linked sqes done */
+ #define REQ_F_FAIL_LINK 256 /* fail rest of links */
++#define REQ_F_ISREG 2048 /* regular file */
++#define REQ_F_MUST_PUNT 4096 /* must be punted even for NONBLOCK */
+ u64 user_data;
+ u32 result;
+ u32 sequence;
+@@ -885,26 +887,26 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned *nr_events,
+ return ret;
+ }
+
+-static void kiocb_end_write(struct kiocb *kiocb)
++static void kiocb_end_write(struct io_kiocb *req)
+ {
+- if (kiocb->ki_flags & IOCB_WRITE) {
+- struct inode *inode = file_inode(kiocb->ki_filp);
++ /*
++ * Tell lockdep we inherited freeze protection from submission
++ * thread.
++ */
++ if (req->flags & REQ_F_ISREG) {
++ struct inode *inode = file_inode(req->file);
+
+- /*
+- * Tell lockdep we inherited freeze protection from submission
+- * thread.
+- */
+- if (S_ISREG(inode->i_mode))
+- __sb_writers_acquired(inode->i_sb, SB_FREEZE_WRITE);
+- file_end_write(kiocb->ki_filp);
++ __sb_writers_acquired(inode->i_sb, SB_FREEZE_WRITE);
+ }
++ file_end_write(req->file);
+ }
+
+ static void io_complete_rw(struct kiocb *kiocb, long res, long res2)
+ {
+ struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw);
+
+- kiocb_end_write(kiocb);
++ if (kiocb->ki_flags & IOCB_WRITE)
++ kiocb_end_write(req);
+
+ if ((req->flags & REQ_F_LINK) && res != req->result)
+ req->flags |= REQ_F_FAIL_LINK;
+@@ -916,7 +918,8 @@ static void io_complete_rw_iopoll(struct kiocb *kiocb, long res, long res2)
+ {
+ struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw);
+
+- kiocb_end_write(kiocb);
++ if (kiocb->ki_flags & IOCB_WRITE)
++ kiocb_end_write(req);
+
+ if ((req->flags & REQ_F_LINK) && res != req->result)
+ req->flags |= REQ_F_FAIL_LINK;
+@@ -1030,8 +1033,17 @@ static int io_prep_rw(struct io_kiocb *req, const struct sqe_submit *s,
+ if (!req->file)
+ return -EBADF;
+
+- if (force_nonblock && !io_file_supports_async(req->file))
+- force_nonblock = false;
++ if (S_ISREG(file_inode(req->file)->i_mode))
++ req->flags |= REQ_F_ISREG;
++
++ /*
++ * If the file doesn't support async, mark it as REQ_F_MUST_PUNT so
++ * we know to async punt it even if it was opened O_NONBLOCK
++ */
++ if (force_nonblock && !io_file_supports_async(req->file)) {
++ req->flags |= REQ_F_MUST_PUNT;
++ return -EAGAIN;
++ }
+
+ kiocb->ki_pos = READ_ONCE(sqe->off);
+ kiocb->ki_flags = iocb_flags(kiocb->ki_filp);
+@@ -1052,7 +1064,8 @@ static int io_prep_rw(struct io_kiocb *req, const struct sqe_submit *s,
+ return ret;
+
+ /* don't allow async punt if RWF_NOWAIT was requested */
+- if (kiocb->ki_flags & IOCB_NOWAIT)
++ if ((kiocb->ki_flags & IOCB_NOWAIT) ||
++ (req->file->f_flags & O_NONBLOCK))
+ req->flags |= REQ_F_NOWAIT;
+
+ if (force_nonblock)
+@@ -1065,6 +1078,7 @@ static int io_prep_rw(struct io_kiocb *req, const struct sqe_submit *s,
+
+ kiocb->ki_flags |= IOCB_HIPRI;
+ kiocb->ki_complete = io_complete_rw_iopoll;
++ req->result = 0;
+ } else {
+ if (kiocb->ki_flags & IOCB_HIPRI)
+ return -EINVAL;
+@@ -1286,7 +1300,9 @@ static int io_read(struct io_kiocb *req, const struct sqe_submit *s,
+ * need async punt anyway, so it's more efficient to do it
+ * here.
+ */
+- if (force_nonblock && ret2 > 0 && ret2 < read_size)
++ if (force_nonblock && !(req->flags & REQ_F_NOWAIT) &&
++ (req->flags & REQ_F_ISREG) &&
++ ret2 > 0 && ret2 < read_size)
+ ret2 = -EAGAIN;
+ /* Catch -EAGAIN return for forced non-blocking submission */
+ if (!force_nonblock || ret2 != -EAGAIN) {
+@@ -1353,7 +1369,7 @@ static int io_write(struct io_kiocb *req, const struct sqe_submit *s,
+ * released so that it doesn't complain about the held lock when
+ * we return to userspace.
+ */
+- if (S_ISREG(file_inode(file)->i_mode)) {
++ if (req->flags & REQ_F_ISREG) {
+ __sb_start_write(file_inode(file)->i_sb,
+ SB_FREEZE_WRITE, true);
+ __sb_writers_release(file_inode(file)->i_sb,
+@@ -2096,7 +2112,13 @@ static int io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
+ }
+
+ ret = __io_submit_sqe(ctx, req, s, true);
+- if (ret == -EAGAIN && !(req->flags & REQ_F_NOWAIT)) {
++
++ /*
++ * We async punt it if the file wasn't marked NOWAIT, or if the file
++ * doesn't support non-blocking read/write attempts
++ */
++ if (ret == -EAGAIN && (!(req->flags & REQ_F_NOWAIT) ||
++ (req->flags & REQ_F_MUST_PUNT))) {
+ struct io_uring_sqe *sqe_copy;
+
+ sqe_copy = kmalloc(sizeof(*sqe_copy), GFP_KERNEL);
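+
+The io_uring changes add two request flags: REQ_F_ISREG caches "this fd is a
+regular file" so completion paths don't have to reach back through the
+kiocb, and REQ_F_MUST_PUNT marks files that cannot do nonblocking I/O so
+submission punts them to async context instead of surfacing -EAGAIN.
+req->flags is an ordinary power-of-two bit mask; a minimal sketch (flag
+values illustrative, not ABI):
+
+    #include <stdio.h>
+
+    #define REQ_F_NOWAIT    (1u << 9)
+    #define REQ_F_ISREG     (1u << 11)
+    #define REQ_F_MUST_PUNT (1u << 12)
+
+    int main(void)
+    {
+            unsigned int flags = 0;
+
+            flags |= REQ_F_ISREG;           /* decided once at prep time */
+            flags |= REQ_F_MUST_PUNT;
+
+            if (flags & REQ_F_ISREG)
+                    puts("regular file: handle freeze protection");
+            if (!(flags & REQ_F_NOWAIT) || (flags & REQ_F_MUST_PUNT))
+                    puts("would punt this request to async context");
+            return 0;
+    }
+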
+diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
+index 071b90a45933..ad7a77101471 100644
+--- a/fs/nfs/delegation.c
++++ b/fs/nfs/delegation.c
+@@ -1181,7 +1181,7 @@ bool nfs4_refresh_delegation_stateid(nfs4_stateid *dst, struct inode *inode)
+ if (delegation != NULL &&
+ nfs4_stateid_match_other(dst, &delegation->stateid)) {
+ dst->seqid = delegation->stateid.seqid;
+- return ret;
++ ret = true;
+ }
+ rcu_read_unlock();
+ out:
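+
+The delegation fix is a returned-the-wrong-thing bug: on a stateid match the
+function did "return ret" while ret still held its initial false, so callers
+never saw the refreshed seqid reported as a success. Recording the result
+and falling through to the shared unlock/exit path is the intended shape:
+
+    #include <stdbool.h>
+    #include <stdio.h>
+
+    static bool refresh(int have_match)
+    {
+            bool ret = false;
+
+            if (have_match) {
+                    /* buggy version returned ret (false) right here */
+                    ret = true;
+            }
+            /* shared cleanup runs for both outcomes */
+            return ret;
+    }
+
+    int main(void)
+    {
+            printf("%d %d\n", refresh(1), refresh(0));  /* 1 0 */
+            return 0;
+    }
+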
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 1406858bae6c..e1e7d2724b97 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -6058,6 +6058,7 @@ int nfs4_proc_setclientid(struct nfs_client *clp, u32 program,
+ }
+ status = task->tk_status;
+ if (setclientid.sc_cred) {
++ kfree(clp->cl_acceptor);
+ clp->cl_acceptor = rpcauth_stringify_acceptor(setclientid.sc_cred);
+ put_rpccred(setclientid.sc_cred);
+ }
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index 85ca49549b39..52cab65f91cf 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -786,7 +786,6 @@ static void nfs_inode_remove_request(struct nfs_page *req)
+ struct nfs_inode *nfsi = NFS_I(inode);
+ struct nfs_page *head;
+
+- atomic_long_dec(&nfsi->nrequests);
+ if (nfs_page_group_sync_on_bit(req, PG_REMOVE)) {
+ head = req->wb_head;
+
+@@ -799,8 +798,10 @@ static void nfs_inode_remove_request(struct nfs_page *req)
+ spin_unlock(&mapping->private_lock);
+ }
+
+- if (test_and_clear_bit(PG_INODE_REF, &req->wb_flags))
++ if (test_and_clear_bit(PG_INODE_REF, &req->wb_flags)) {
+ nfs_release_request(req);
++ atomic_long_dec(&nfsi->nrequests);
++ }
+ }
+
+ static void
+diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
+index a4c905d6b575..9b827143a350 100644
+--- a/fs/ocfs2/aops.c
++++ b/fs/ocfs2/aops.c
+@@ -2042,7 +2042,8 @@ out_write_size:
+ inode->i_mtime = inode->i_ctime = current_time(inode);
+ di->i_mtime = di->i_ctime = cpu_to_le64(inode->i_mtime.tv_sec);
+ di->i_mtime_nsec = di->i_ctime_nsec = cpu_to_le32(inode->i_mtime.tv_nsec);
+- ocfs2_update_inode_fsync_trans(handle, inode, 1);
++ if (handle)
++ ocfs2_update_inode_fsync_trans(handle, inode, 1);
+ }
+ if (handle)
+ ocfs2_journal_dirty(handle, wc->w_di_bh);
+@@ -2139,13 +2140,30 @@ static int ocfs2_dio_wr_get_block(struct inode *inode, sector_t iblock,
+ struct ocfs2_dio_write_ctxt *dwc = NULL;
+ struct buffer_head *di_bh = NULL;
+ u64 p_blkno;
+- loff_t pos = iblock << inode->i_sb->s_blocksize_bits;
++ unsigned int i_blkbits = inode->i_sb->s_blocksize_bits;
++ loff_t pos = iblock << i_blkbits;
++ sector_t endblk = (i_size_read(inode) - 1) >> i_blkbits;
+ unsigned len, total_len = bh_result->b_size;
+ int ret = 0, first_get_block = 0;
+
+ len = osb->s_clustersize - (pos & (osb->s_clustersize - 1));
+ len = min(total_len, len);
+
++ /*
++ * bh_result->b_size is counted in get_more_blocks according to the
++ * write "pos" and "end"; we may need to map the range twice to return
++ * different buffer states: 1. area within i_size, do not set NEW;
++ * 2. area beyond i_size, set NEW.
++ *
++ * iblock endblk
++ * |--------|---------|---------|---------
++ * |<-------area in file------->|
++ */
++
++ if ((iblock <= endblk) &&
++ ((iblock + ((len - 1) >> i_blkbits)) > endblk))
++ len = (endblk - iblock + 1) << i_blkbits;
++
+ mlog(0, "get block of %lu at %llu:%u req %u\n",
+ inode->i_ino, pos, len, total_len);
+
+@@ -2229,6 +2247,9 @@ static int ocfs2_dio_wr_get_block(struct inode *inode, sector_t iblock,
+ if (desc->c_needs_zero)
+ set_buffer_new(bh_result);
+
++ if (iblock > endblk)
++ set_buffer_new(bh_result);
++
+ /* May sleep in end_io. It should not happen in a irq context. So defer
+ * it to dio work queue. */
+ set_buffer_defer_completion(bh_result);
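+
+The ocfs2 hunk clamps a direct-I/O mapping request at the last block inside
+i_size (endblk), per the diagram in the comment, so one get_block call never
+spans EOF: in-file blocks keep their state while blocks past EOF are mapped
+separately and flagged NEW. The clamp arithmetic as a standalone sketch:
+
+    #include <stdint.h>
+    #include <stdio.h>
+
+    int main(void)
+    {
+            const unsigned blkbits = 12;                /* 4 KiB blocks */
+            uint64_t i_size = 3 * 4096 + 100;           /* ends inside block 3 */
+            uint64_t endblk = (i_size - 1) >> blkbits;  /* last in-file block */
+            uint64_t iblock = 2;                        /* mapping starts here */
+            uint64_t len = 4 * 4096;                    /* request spans EOF */
+
+            if (iblock <= endblk &&
+                iblock + ((len - 1) >> blkbits) > endblk)
+                    len = (endblk - iblock + 1) << blkbits;  /* stop at EOF */
+
+            printf("clamped len: %llu\n", (unsigned long long)len); /* 8192 */
+            return 0;
+    }
+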
+diff --git a/fs/ocfs2/ioctl.c b/fs/ocfs2/ioctl.c
+index d6f7b299eb23..efeea208fdeb 100644
+--- a/fs/ocfs2/ioctl.c
++++ b/fs/ocfs2/ioctl.c
+@@ -283,7 +283,7 @@ static int ocfs2_info_scan_inode_alloc(struct ocfs2_super *osb,
+ if (inode_alloc)
+ inode_lock(inode_alloc);
+
+- if (o2info_coherent(&fi->ifi_req)) {
++ if (inode_alloc && o2info_coherent(&fi->ifi_req)) {
+ status = ocfs2_inode_lock(inode_alloc, &bh, 0);
+ if (status < 0) {
+ mlog_errno(status);
+diff --git a/fs/ocfs2/xattr.c b/fs/ocfs2/xattr.c
+index 90c830e3758e..d8507972ee13 100644
+--- a/fs/ocfs2/xattr.c
++++ b/fs/ocfs2/xattr.c
+@@ -1490,18 +1490,6 @@ static int ocfs2_xa_check_space(struct ocfs2_xa_loc *loc,
+ return loc->xl_ops->xlo_check_space(loc, xi);
+ }
+
+-static void ocfs2_xa_add_entry(struct ocfs2_xa_loc *loc, u32 name_hash)
+-{
+- loc->xl_ops->xlo_add_entry(loc, name_hash);
+- loc->xl_entry->xe_name_hash = cpu_to_le32(name_hash);
+- /*
+- * We can't leave the new entry's xe_name_offset at zero or
+- * add_namevalue() will go nuts. We set it to the size of our
+- * storage so that it can never be less than any other entry.
+- */
+- loc->xl_entry->xe_name_offset = cpu_to_le16(loc->xl_size);
+-}
+-
+ static void ocfs2_xa_add_namevalue(struct ocfs2_xa_loc *loc,
+ struct ocfs2_xattr_info *xi)
+ {
+@@ -2133,29 +2121,31 @@ static int ocfs2_xa_prepare_entry(struct ocfs2_xa_loc *loc,
+ if (rc)
+ goto out;
+
+- if (loc->xl_entry) {
+- if (ocfs2_xa_can_reuse_entry(loc, xi)) {
+- orig_value_size = loc->xl_entry->xe_value_size;
+- rc = ocfs2_xa_reuse_entry(loc, xi, ctxt);
+- if (rc)
+- goto out;
+- goto alloc_value;
+- }
++ if (!loc->xl_entry) {
++ rc = -EINVAL;
++ goto out;
++ }
+
+- if (!ocfs2_xattr_is_local(loc->xl_entry)) {
+- orig_clusters = ocfs2_xa_value_clusters(loc);
+- rc = ocfs2_xa_value_truncate(loc, 0, ctxt);
+- if (rc) {
+- mlog_errno(rc);
+- ocfs2_xa_cleanup_value_truncate(loc,
+- "overwriting",
+- orig_clusters);
+- goto out;
+- }
++ if (ocfs2_xa_can_reuse_entry(loc, xi)) {
++ orig_value_size = loc->xl_entry->xe_value_size;
++ rc = ocfs2_xa_reuse_entry(loc, xi, ctxt);
++ if (rc)
++ goto out;
++ goto alloc_value;
++ }
++
++ if (!ocfs2_xattr_is_local(loc->xl_entry)) {
++ orig_clusters = ocfs2_xa_value_clusters(loc);
++ rc = ocfs2_xa_value_truncate(loc, 0, ctxt);
++ if (rc) {
++ mlog_errno(rc);
++ ocfs2_xa_cleanup_value_truncate(loc,
++ "overwriting",
++ orig_clusters);
++ goto out;
+ }
+- ocfs2_xa_wipe_namevalue(loc);
+- } else
+- ocfs2_xa_add_entry(loc, name_hash);
++ }
++ ocfs2_xa_wipe_namevalue(loc);
+
+ /*
+ * If we get here, we have a blank entry. Fill it. We grow our
+diff --git a/include/linux/platform_data/dma-imx-sdma.h b/include/linux/platform_data/dma-imx-sdma.h
+index 6eaa53cef0bd..30e676b36b24 100644
+--- a/include/linux/platform_data/dma-imx-sdma.h
++++ b/include/linux/platform_data/dma-imx-sdma.h
+@@ -51,7 +51,10 @@ struct sdma_script_start_addrs {
+ /* End of v2 array */
+ s32 zcanfd_2_mcu_addr;
+ s32 zqspi_2_mcu_addr;
++ s32 mcu_2_ecspi_addr;
+ /* End of v3 array */
++ s32 mcu_2_zqspi_addr;
++ /* End of v4 array */
+ };
+
+ /**
+diff --git a/include/linux/sunrpc/xprtsock.h b/include/linux/sunrpc/xprtsock.h
+index 7638dbe7bc50..a940de03808d 100644
+--- a/include/linux/sunrpc/xprtsock.h
++++ b/include/linux/sunrpc/xprtsock.h
+@@ -61,6 +61,7 @@ struct sock_xprt {
+ struct mutex recv_mutex;
+ struct sockaddr_storage srcaddr;
+ unsigned short srcport;
++ int xprt_err;
+
+ /*
+ * UDP socket buffer size parameters
+diff --git a/include/net/llc_conn.h b/include/net/llc_conn.h
+index df528a623548..ea985aa7a6c5 100644
+--- a/include/net/llc_conn.h
++++ b/include/net/llc_conn.h
+@@ -104,7 +104,7 @@ void llc_sk_reset(struct sock *sk);
+
+ /* Access to a connection */
+ int llc_conn_state_process(struct sock *sk, struct sk_buff *skb);
+-int llc_conn_send_pdu(struct sock *sk, struct sk_buff *skb);
++void llc_conn_send_pdu(struct sock *sk, struct sk_buff *skb);
+ void llc_conn_rtn_pdu(struct sock *sk, struct sk_buff *skb);
+ void llc_conn_resend_i_pdu_as_cmd(struct sock *sk, u8 nr, u8 first_p_bit);
+ void llc_conn_resend_i_pdu_as_rsp(struct sock *sk, u8 nr, u8 first_f_bit);
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 6b6b01234dd9..58b1fbc884a7 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -520,6 +520,11 @@ static inline struct Qdisc *qdisc_root(const struct Qdisc *qdisc)
+ return q;
+ }
+
++static inline struct Qdisc *qdisc_root_bh(const struct Qdisc *qdisc)
++{
++ return rcu_dereference_bh(qdisc->dev_queue->qdisc);
++}
++
+ static inline struct Qdisc *qdisc_root_sleeping(const struct Qdisc *qdisc)
+ {
+ return qdisc->dev_queue->qdisc_sleeping;
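+
+qdisc_root_bh() is the softirq-context twin of qdisc_root():
+rcu_dereference_bh() both orders the pointer load and lets lockdep verify
+that the caller holds rcu_read_lock_bh() rather than the plain flavor. As a
+loose, runnable analogy only (RCU itself has no portable userspace
+equivalent), the publish/consume pairing resembles an acquire load of a
+shared pointer:
+
+    #include <stdatomic.h>
+    #include <stdio.h>
+
+    struct qdisc { int handle; };
+
+    static struct qdisc root = { .handle = 0x8001 };
+    static _Atomic(struct qdisc *) active_root = &root;
+
+    int main(void)
+    {
+            /* publisher: release store (rcu_assign_pointer analogue) */
+            atomic_store_explicit(&active_root, &root, memory_order_release);
+
+            /* reader: acquire load (rcu_dereference_bh analogue) */
+            struct qdisc *q = atomic_load_explicit(&active_root,
+                                                   memory_order_acquire);
+            printf("handle %#x\n", q->handle);
+            return 0;
+    }
+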
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index edc5c887a44c..45556fe771c3 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -519,10 +519,10 @@ TRACE_EVENT(rxrpc_local,
+ );
+
+ TRACE_EVENT(rxrpc_peer,
+- TP_PROTO(struct rxrpc_peer *peer, enum rxrpc_peer_trace op,
++ TP_PROTO(unsigned int peer_debug_id, enum rxrpc_peer_trace op,
+ int usage, const void *where),
+
+- TP_ARGS(peer, op, usage, where),
++ TP_ARGS(peer_debug_id, op, usage, where),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, peer )
+@@ -532,7 +532,7 @@ TRACE_EVENT(rxrpc_peer,
+ ),
+
+ TP_fast_assign(
+- __entry->peer = peer->debug_id;
++ __entry->peer = peer_debug_id;
+ __entry->op = op;
+ __entry->usage = usage;
+ __entry->where = where;
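+
+The rxrpc tracepoint now takes the peer's debug_id instead of the peer
+pointer: trace callers can fire after the peer's refcount drops, so stashing
+the pointer and dereferencing it inside the tracepoint is a use-after-free
+waiting to happen. The general rule is to capture scalars at the call site;
+a minimal sketch:
+
+    #include <stdio.h>
+
+    struct peer { unsigned int debug_id; };
+
+    /* Record plain values; never keep the pointer for later decoding. */
+    static void trace_peer(unsigned int debug_id, int usage)
+    {
+            printf("peer=%u usage=%d\n", debug_id, usage);
+    }
+
+    int main(void)
+    {
+            struct peer p = { .debug_id = 7 };
+
+            trace_peer(p.debug_id, 1);  /* value copied, pointer not kept */
+            return 0;
+    }
+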
+diff --git a/include/uapi/linux/nvme_ioctl.h b/include/uapi/linux/nvme_ioctl.h
+index e168dc59e9a0..1c215ea1798e 100644
+--- a/include/uapi/linux/nvme_ioctl.h
++++ b/include/uapi/linux/nvme_ioctl.h
+@@ -45,27 +45,6 @@ struct nvme_passthru_cmd {
+ __u32 result;
+ };
+
+-struct nvme_passthru_cmd64 {
+- __u8 opcode;
+- __u8 flags;
+- __u16 rsvd1;
+- __u32 nsid;
+- __u32 cdw2;
+- __u32 cdw3;
+- __u64 metadata;
+- __u64 addr;
+- __u32 metadata_len;
+- __u32 data_len;
+- __u32 cdw10;
+- __u32 cdw11;
+- __u32 cdw12;
+- __u32 cdw13;
+- __u32 cdw14;
+- __u32 cdw15;
+- __u32 timeout_ms;
+- __u64 result;
+-};
+-
+ #define nvme_admin_cmd nvme_passthru_cmd
+
+ #define NVME_IOCTL_ID _IO('N', 0x40)
+@@ -75,7 +54,5 @@ struct nvme_passthru_cmd64 {
+ #define NVME_IOCTL_RESET _IO('N', 0x44)
+ #define NVME_IOCTL_SUBSYS_RESET _IO('N', 0x45)
+ #define NVME_IOCTL_RESCAN _IO('N', 0x46)
+-#define NVME_IOCTL_ADMIN64_CMD _IOWR('N', 0x47, struct nvme_passthru_cmd64)
+-#define NVME_IOCTL_IO64_CMD _IOWR('N', 0x48, struct nvme_passthru_cmd64)
+
+ #endif /* _UAPI_LINUX_NVME_IOCTL_H */
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index a2a50b668ef3..53173883513c 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -3694,11 +3694,23 @@ static void rotate_ctx(struct perf_event_context *ctx, struct perf_event *event)
+ perf_event_groups_insert(&ctx->flexible_groups, event);
+ }
+
++/* pick an event from the flexible_groups to rotate */
+ static inline struct perf_event *
+-ctx_first_active(struct perf_event_context *ctx)
++ctx_event_to_rotate(struct perf_event_context *ctx)
+ {
+- return list_first_entry_or_null(&ctx->flexible_active,
+- struct perf_event, active_list);
++ struct perf_event *event;
++
++ /* pick the first active flexible event */
++ event = list_first_entry_or_null(&ctx->flexible_active,
++ struct perf_event, active_list);
++
++ /* if no active flexible event, pick the first event */
++ if (!event) {
++ event = rb_entry_safe(rb_first(&ctx->flexible_groups.tree),
++ typeof(*event), group_node);
++ }
++
++ return event;
+ }
+
+ static bool perf_rotate_context(struct perf_cpu_context *cpuctx)
+@@ -3723,9 +3735,9 @@ static bool perf_rotate_context(struct perf_cpu_context *cpuctx)
+ perf_pmu_disable(cpuctx->ctx.pmu);
+
+ if (task_rotate)
+- task_event = ctx_first_active(task_ctx);
++ task_event = ctx_event_to_rotate(task_ctx);
+ if (cpu_rotate)
+- cpu_event = ctx_first_active(&cpuctx->ctx);
++ cpu_event = ctx_event_to_rotate(&cpuctx->ctx);
+
+ /*
+ * As per the order given at ctx_resched() first 'pop' task flexible
+@@ -5512,8 +5524,10 @@ static void perf_mmap_close(struct vm_area_struct *vma)
+ perf_pmu_output_stop(event);
+
+ /* now it's safe to free the pages */
+- atomic_long_sub(rb->aux_nr_pages, &mmap_user->locked_vm);
+- atomic64_sub(rb->aux_mmap_locked, &vma->vm_mm->pinned_vm);
++ if (!rb->aux_mmap_locked)
++ atomic_long_sub(rb->aux_nr_pages, &mmap_user->locked_vm);
++ else
++ atomic64_sub(rb->aux_mmap_locked, &vma->vm_mm->pinned_vm);
+
+ /* this has to be the last one */
+ rb_free_aux(rb);
+@@ -5585,7 +5599,8 @@ again:
+ * undo the VM accounting.
+ */
+
+- atomic_long_sub((size >> PAGE_SHIFT) + 1, &mmap_user->locked_vm);
++ atomic_long_sub((size >> PAGE_SHIFT) + 1 - mmap_locked,
++ &mmap_user->locked_vm);
+ atomic64_sub(mmap_locked, &vma->vm_mm->pinned_vm);
+ free_uid(mmap_user);
+
+@@ -5729,8 +5744,20 @@ accounting:
+
+ user_locked = atomic_long_read(&user->locked_vm) + user_extra;
+
+- if (user_locked > user_lock_limit)
++ if (user_locked <= user_lock_limit) {
++ /* charge all to locked_vm */
++ } else if (atomic_long_read(&user->locked_vm) >= user_lock_limit) {
++ /* charge all to pinned_vm */
++ extra = user_extra;
++ user_extra = 0;
++ } else {
++ /*
++ * charge locked_vm until it hits user_lock_limit;
++ * charge the rest from pinned_vm
++ */
+ extra = user_locked - user_lock_limit;
++ user_extra -= extra;
++ }
+
+ lock_limit = rlimit(RLIMIT_MEMLOCK);
+ lock_limit >>= PAGE_SHIFT;
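+
+The perf accounting hunks make charge and uncharge symmetric: new pages are
+charged to user->locked_vm only up to the per-user limit, the excess goes to
+mm->pinned_vm, and the teardown paths now subtract from whichever counter
+was originally charged. The three-way split at the limit, as plain
+arithmetic:
+
+    #include <stdio.h>
+
+    int main(void)
+    {
+            long limit = 1000, already_locked = 900;
+            long user_extra = 300, extra;       /* new pages to account */
+
+            long user_locked = already_locked + user_extra;
+
+            if (user_locked <= limit) {
+                    extra = 0;                  /* all to locked_vm */
+            } else if (already_locked >= limit) {
+                    extra = user_extra;         /* all to pinned_vm */
+                    user_extra = 0;
+            } else {
+                    extra = user_locked - limit;  /* split across both */
+                    user_extra -= extra;
+            }
+            printf("locked_vm += %ld, pinned_vm += %ld\n", user_extra, extra);
+            return 0;                           /* here: 100 and 200 */
+    }
+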
+diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
+index 2305ce89a26c..46ed4e1383e2 100644
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -740,7 +740,7 @@ void vtime_account_system(struct task_struct *tsk)
+
+ write_seqcount_begin(&vtime->seqcount);
+ /* We might have scheduled out from guest path */
+- if (current->flags & PF_VCPU)
++ if (tsk->flags & PF_VCPU)
+ vtime_account_guest(tsk, vtime);
+ else
+ __vtime_account_system(tsk, vtime);
+@@ -783,7 +783,7 @@ void vtime_guest_enter(struct task_struct *tsk)
+ */
+ write_seqcount_begin(&vtime->seqcount);
+ __vtime_account_system(tsk, vtime);
+- current->flags |= PF_VCPU;
++ tsk->flags |= PF_VCPU;
+ write_seqcount_end(&vtime->seqcount);
+ }
+ EXPORT_SYMBOL_GPL(vtime_guest_enter);
+@@ -794,7 +794,7 @@ void vtime_guest_exit(struct task_struct *tsk)
+
+ write_seqcount_begin(&vtime->seqcount);
+ vtime_account_guest(tsk, vtime);
+- current->flags &= ~PF_VCPU;
++ tsk->flags &= ~PF_VCPU;
+ write_seqcount_end(&vtime->seqcount);
+ }
+ EXPORT_SYMBOL_GPL(vtime_guest_exit);
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 86cfc5d5129c..649c6b60929e 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -4355,23 +4355,16 @@ static inline u64 sched_cfs_bandwidth_slice(void)
+ }
+
+ /*
+- * Replenish runtime according to assigned quota and update expiration time.
+- * We use sched_clock_cpu directly instead of rq->clock to avoid adding
+- * additional synchronization around rq->lock.
++ * Replenish runtime according to assigned quota. We use sched_clock_cpu
++ * directly instead of rq->clock to avoid adding additional synchronization
++ * around rq->lock.
+ *
+ * requires cfs_b->lock
+ */
+ void __refill_cfs_bandwidth_runtime(struct cfs_bandwidth *cfs_b)
+ {
+- u64 now;
+-
+- if (cfs_b->quota == RUNTIME_INF)
+- return;
+-
+- now = sched_clock_cpu(smp_processor_id());
+- cfs_b->runtime = cfs_b->quota;
+- cfs_b->runtime_expires = now + ktime_to_ns(cfs_b->period);
+- cfs_b->expires_seq++;
++ if (cfs_b->quota != RUNTIME_INF)
++ cfs_b->runtime = cfs_b->quota;
+ }
+
+ static inline struct cfs_bandwidth *tg_cfs_bandwidth(struct task_group *tg)
+@@ -4393,8 +4386,7 @@ static int assign_cfs_rq_runtime(struct cfs_rq *cfs_rq)
+ {
+ struct task_group *tg = cfs_rq->tg;
+ struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(tg);
+- u64 amount = 0, min_amount, expires;
+- int expires_seq;
++ u64 amount = 0, min_amount;
+
+ /* note: this is a positive sum as runtime_remaining <= 0 */
+ min_amount = sched_cfs_bandwidth_slice() - cfs_rq->runtime_remaining;
+@@ -4411,61 +4403,17 @@ static int assign_cfs_rq_runtime(struct cfs_rq *cfs_rq)
+ cfs_b->idle = 0;
+ }
+ }
+- expires_seq = cfs_b->expires_seq;
+- expires = cfs_b->runtime_expires;
+ raw_spin_unlock(&cfs_b->lock);
+
+ cfs_rq->runtime_remaining += amount;
+- /*
+- * we may have advanced our local expiration to account for allowed
+- * spread between our sched_clock and the one on which runtime was
+- * issued.
+- */
+- if (cfs_rq->expires_seq != expires_seq) {
+- cfs_rq->expires_seq = expires_seq;
+- cfs_rq->runtime_expires = expires;
+- }
+
+ return cfs_rq->runtime_remaining > 0;
+ }
+
+-/*
+- * Note: This depends on the synchronization provided by sched_clock and the
+- * fact that rq->clock snapshots this value.
+- */
+-static void expire_cfs_rq_runtime(struct cfs_rq *cfs_rq)
+-{
+- struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
+-
+- /* if the deadline is ahead of our clock, nothing to do */
+- if (likely((s64)(rq_clock(rq_of(cfs_rq)) - cfs_rq->runtime_expires) < 0))
+- return;
+-
+- if (cfs_rq->runtime_remaining < 0)
+- return;
+-
+- /*
+- * If the local deadline has passed we have to consider the
+- * possibility that our sched_clock is 'fast' and the global deadline
+- * has not truly expired.
+- *
+- * Fortunately we can check determine whether this the case by checking
+- * whether the global deadline(cfs_b->expires_seq) has advanced.
+- */
+- if (cfs_rq->expires_seq == cfs_b->expires_seq) {
+- /* extend local deadline, drift is bounded above by 2 ticks */
+- cfs_rq->runtime_expires += TICK_NSEC;
+- } else {
+- /* global deadline is ahead, expiration has passed */
+- cfs_rq->runtime_remaining = 0;
+- }
+-}
+-
+ static void __account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec)
+ {
+ /* dock delta_exec before expiring quota (as it could span periods) */
+ cfs_rq->runtime_remaining -= delta_exec;
+- expire_cfs_rq_runtime(cfs_rq);
+
+ if (likely(cfs_rq->runtime_remaining > 0))
+ return;
+@@ -4658,8 +4606,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
+ resched_curr(rq);
+ }
+
+-static u64 distribute_cfs_runtime(struct cfs_bandwidth *cfs_b,
+- u64 remaining, u64 expires)
++static u64 distribute_cfs_runtime(struct cfs_bandwidth *cfs_b, u64 remaining)
+ {
+ struct cfs_rq *cfs_rq;
+ u64 runtime;
+@@ -4684,7 +4631,6 @@ static u64 distribute_cfs_runtime(struct cfs_bandwidth *cfs_b,
+ remaining -= runtime;
+
+ cfs_rq->runtime_remaining += runtime;
+- cfs_rq->runtime_expires = expires;
+
+ /* we check whether we're throttled above */
+ if (cfs_rq->runtime_remaining > 0)
+@@ -4709,7 +4655,7 @@ next:
+ */
+ static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun, unsigned long flags)
+ {
+- u64 runtime, runtime_expires;
++ u64 runtime;
+ int throttled;
+
+ /* no need to continue the timer with no bandwidth constraint */
+@@ -4737,8 +4683,6 @@ static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun, u
+ /* account preceding periods in which throttling occurred */
+ cfs_b->nr_throttled += overrun;
+
+- runtime_expires = cfs_b->runtime_expires;
+-
+ /*
+ * This check is repeated as we are holding onto the new bandwidth while
+ * we unthrottle. This can potentially race with an unthrottled group
+@@ -4751,8 +4695,7 @@ static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun, u
+ cfs_b->distribute_running = 1;
+ raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
+ /* we can't nest cfs_b->lock while distributing bandwidth */
+- runtime = distribute_cfs_runtime(cfs_b, runtime,
+- runtime_expires);
++ runtime = distribute_cfs_runtime(cfs_b, runtime);
+ raw_spin_lock_irqsave(&cfs_b->lock, flags);
+
+ cfs_b->distribute_running = 0;
+@@ -4834,8 +4777,7 @@ static void __return_cfs_rq_runtime(struct cfs_rq *cfs_rq)
+ return;
+
+ raw_spin_lock(&cfs_b->lock);
+- if (cfs_b->quota != RUNTIME_INF &&
+- cfs_rq->runtime_expires == cfs_b->runtime_expires) {
++ if (cfs_b->quota != RUNTIME_INF) {
+ cfs_b->runtime += slack_runtime;
+
+ /* we are under rq->lock, defer unthrottling using a timer */
+@@ -4868,7 +4810,6 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
+ {
+ u64 runtime = 0, slice = sched_cfs_bandwidth_slice();
+ unsigned long flags;
+- u64 expires;
+
+ /* confirm we're still not at a refresh boundary */
+ raw_spin_lock_irqsave(&cfs_b->lock, flags);
+@@ -4886,7 +4827,6 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
+ if (cfs_b->quota != RUNTIME_INF && cfs_b->runtime > slice)
+ runtime = cfs_b->runtime;
+
+- expires = cfs_b->runtime_expires;
+ if (runtime)
+ cfs_b->distribute_running = 1;
+
+@@ -4895,11 +4835,10 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
+ if (!runtime)
+ return;
+
+- runtime = distribute_cfs_runtime(cfs_b, runtime, expires);
++ runtime = distribute_cfs_runtime(cfs_b, runtime);
+
+ raw_spin_lock_irqsave(&cfs_b->lock, flags);
+- if (expires == cfs_b->runtime_expires)
+- lsub_positive(&cfs_b->runtime, runtime);
++ lsub_positive(&cfs_b->runtime, runtime);
+ cfs_b->distribute_running = 0;
+ raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
+ }
+@@ -4995,20 +4934,28 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
+ if (++count > 3) {
+ u64 new, old = ktime_to_ns(cfs_b->period);
+
+- new = (old * 147) / 128; /* ~115% */
+- new = min(new, max_cfs_quota_period);
+-
+- cfs_b->period = ns_to_ktime(new);
+-
+- /* since max is 1s, this is limited to 1e9^2, which fits in u64 */
+- cfs_b->quota *= new;
+- cfs_b->quota = div64_u64(cfs_b->quota, old);
+-
+- pr_warn_ratelimited(
+- "cfs_period_timer[cpu%d]: period too short, scaling up (new cfs_period_us %lld, cfs_quota_us = %lld)\n",
+- smp_processor_id(),
+- div_u64(new, NSEC_PER_USEC),
+- div_u64(cfs_b->quota, NSEC_PER_USEC));
++ /*
++ * Grow period by a factor of 2 to avoid losing precision.
++ * Precision loss in the quota/period ratio can cause __cfs_schedulable
++ * to fail.
++ */
++ new = old * 2;
++ if (new < max_cfs_quota_period) {
++ cfs_b->period = ns_to_ktime(new);
++ cfs_b->quota *= 2;
++
++ pr_warn_ratelimited(
++ "cfs_period_timer[cpu%d]: period too short, scaling up (new cfs_period_us = %lld, cfs_quota_us = %lld)\n",
++ smp_processor_id(),
++ div_u64(new, NSEC_PER_USEC),
++ div_u64(cfs_b->quota, NSEC_PER_USEC));
++ } else {
++ pr_warn_ratelimited(
++ "cfs_period_timer[cpu%d]: period too short, but cannot scale up without losing precision (cfs_period_us = %lld, cfs_quota_us = %lld)\n",
++ smp_processor_id(),
++ div_u64(old, NSEC_PER_USEC),
++ div_u64(cfs_b->quota, NSEC_PER_USEC));
++ }
+
+ /* reset count so we don't come right back in here */
+ count = 0;
+@@ -5047,17 +4994,13 @@ static void init_cfs_rq_runtime(struct cfs_rq *cfs_rq)
+
+ void start_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
+ {
+- u64 overrun;
+-
+ lockdep_assert_held(&cfs_b->lock);
+
+ if (cfs_b->period_active)
+ return;
+
+ cfs_b->period_active = 1;
+- overrun = hrtimer_forward_now(&cfs_b->period_timer, cfs_b->period);
+- cfs_b->runtime_expires += (overrun + 1) * ktime_to_ns(cfs_b->period);
+- cfs_b->expires_seq++;
++ hrtimer_forward_now(&cfs_b->period_timer, cfs_b->period);
+ hrtimer_start_expires(&cfs_b->period_timer, HRTIMER_MODE_ABS_PINNED);
+ }
+
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 802b1f3405f2..28c16e94bc1d 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -335,8 +335,6 @@ struct cfs_bandwidth {
+ u64 quota;
+ u64 runtime;
+ s64 hierarchical_quota;
+- u64 runtime_expires;
+- int expires_seq;
+
+ u8 idle;
+ u8 period_active;
+@@ -556,8 +554,6 @@ struct cfs_rq {
+
+ #ifdef CONFIG_CFS_BANDWIDTH
+ int runtime_enabled;
+- int expires_seq;
+- u64 runtime_expires;
+ s64 runtime_remaining;
+
+ u64 throttled_clock;
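+
+The CFS bandwidth series deletes runtime expiration outright:
+runtime_expires, expires_seq and expire_cfs_rq_runtime() all go away, since
+the clock-drift condition the expiry guarded against was fixed separately
+upstream and expiring leftover slice mainly hurt bursty low-quota groups.
+After the change a period-timer refill is just a clamp back to quota,
+roughly:
+
+    #include <stdio.h>
+
+    #define RUNTIME_INF (~0ULL)
+
+    struct bw { unsigned long long quota, runtime; };
+
+    /* Refill after the patch: no deadline bookkeeping left. */
+    static void refill(struct bw *b)
+    {
+            if (b->quota != RUNTIME_INF)
+                    b->runtime = b->quota;
+    }
+
+    int main(void)
+    {
+            struct bw b = { .quota = 100000, .runtime = 3000 };
+
+            refill(&b);     /* leftover runtime is topped up, not expired */
+            printf("runtime=%llu\n", b.runtime);
+            return 0;
+    }
+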
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 04458ed44a55..a5e27f1c35a1 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -6012,6 +6012,7 @@ waitagain:
+ sizeof(struct trace_iterator) -
+ offsetof(struct trace_iterator, seq));
+ cpumask_clear(iter->started);
++ trace_seq_init(&iter->seq);
+ iter->pos = -1;
+
+ trace_event_read_lock();
+diff --git a/net/batman-adv/bat_iv_ogm.c b/net/batman-adv/bat_iv_ogm.c
+index d78938e3e008..5b0b20e6da95 100644
+--- a/net/batman-adv/bat_iv_ogm.c
++++ b/net/batman-adv/bat_iv_ogm.c
+@@ -22,6 +22,8 @@
+ #include <linux/kernel.h>
+ #include <linux/kref.h>
+ #include <linux/list.h>
++#include <linux/lockdep.h>
++#include <linux/mutex.h>
+ #include <linux/netdevice.h>
+ #include <linux/netlink.h>
+ #include <linux/pkt_sched.h>
+@@ -193,14 +195,18 @@ static int batadv_iv_ogm_iface_enable(struct batadv_hard_iface *hard_iface)
+ unsigned char *ogm_buff;
+ u32 random_seqno;
+
++ mutex_lock(&hard_iface->bat_iv.ogm_buff_mutex);
++
+ /* randomize initial seqno to avoid collision */
+ get_random_bytes(&random_seqno, sizeof(random_seqno));
+ atomic_set(&hard_iface->bat_iv.ogm_seqno, random_seqno);
+
+ hard_iface->bat_iv.ogm_buff_len = BATADV_OGM_HLEN;
+ ogm_buff = kmalloc(hard_iface->bat_iv.ogm_buff_len, GFP_ATOMIC);
+- if (!ogm_buff)
++ if (!ogm_buff) {
++ mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex);
+ return -ENOMEM;
++ }
+
+ hard_iface->bat_iv.ogm_buff = ogm_buff;
+
+@@ -212,35 +218,59 @@ static int batadv_iv_ogm_iface_enable(struct batadv_hard_iface *hard_iface)
+ batadv_ogm_packet->reserved = 0;
+ batadv_ogm_packet->tq = BATADV_TQ_MAX_VALUE;
+
++ mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex);
++
+ return 0;
+ }
+
+ static void batadv_iv_ogm_iface_disable(struct batadv_hard_iface *hard_iface)
+ {
++ mutex_lock(&hard_iface->bat_iv.ogm_buff_mutex);
++
+ kfree(hard_iface->bat_iv.ogm_buff);
+ hard_iface->bat_iv.ogm_buff = NULL;
++
++ mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex);
+ }
+
+ static void batadv_iv_ogm_iface_update_mac(struct batadv_hard_iface *hard_iface)
+ {
+ struct batadv_ogm_packet *batadv_ogm_packet;
+- unsigned char *ogm_buff = hard_iface->bat_iv.ogm_buff;
++ void *ogm_buff;
+
+- batadv_ogm_packet = (struct batadv_ogm_packet *)ogm_buff;
++ mutex_lock(&hard_iface->bat_iv.ogm_buff_mutex);
++
++ ogm_buff = hard_iface->bat_iv.ogm_buff;
++ if (!ogm_buff)
++ goto unlock;
++
++ batadv_ogm_packet = ogm_buff;
+ ether_addr_copy(batadv_ogm_packet->orig,
+ hard_iface->net_dev->dev_addr);
+ ether_addr_copy(batadv_ogm_packet->prev_sender,
+ hard_iface->net_dev->dev_addr);
++
++unlock:
++ mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex);
+ }
+
+ static void
+ batadv_iv_ogm_primary_iface_set(struct batadv_hard_iface *hard_iface)
+ {
+ struct batadv_ogm_packet *batadv_ogm_packet;
+- unsigned char *ogm_buff = hard_iface->bat_iv.ogm_buff;
++ void *ogm_buff;
+
+- batadv_ogm_packet = (struct batadv_ogm_packet *)ogm_buff;
++ mutex_lock(&hard_iface->bat_iv.ogm_buff_mutex);
++
++ ogm_buff = hard_iface->bat_iv.ogm_buff;
++ if (!ogm_buff)
++ goto unlock;
++
++ batadv_ogm_packet = ogm_buff;
+ batadv_ogm_packet->ttl = BATADV_TTL;
++
++unlock:
++ mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex);
+ }
+
+ /* when do we schedule our own ogm to be sent */
+@@ -742,7 +772,11 @@ batadv_iv_ogm_slide_own_bcast_window(struct batadv_hard_iface *hard_iface)
+ }
+ }
+
+-static void batadv_iv_ogm_schedule(struct batadv_hard_iface *hard_iface)
++/**
++ * batadv_iv_ogm_schedule_buff() - schedule submission of hardif ogm buffer
++ * @hard_iface: interface whose ogm buffer should be transmitted
++ */
++static void batadv_iv_ogm_schedule_buff(struct batadv_hard_iface *hard_iface)
+ {
+ struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface);
+ unsigned char **ogm_buff = &hard_iface->bat_iv.ogm_buff;
+@@ -753,9 +787,7 @@ static void batadv_iv_ogm_schedule(struct batadv_hard_iface *hard_iface)
+ u16 tvlv_len = 0;
+ unsigned long send_time;
+
+- if (hard_iface->if_status == BATADV_IF_NOT_IN_USE ||
+- hard_iface->if_status == BATADV_IF_TO_BE_REMOVED)
+- return;
++ lockdep_assert_held(&hard_iface->bat_iv.ogm_buff_mutex);
+
+ /* the interface gets activated here to avoid race conditions between
+ * the moment of activating the interface in
+@@ -823,6 +855,17 @@ out:
+ batadv_hardif_put(primary_if);
+ }
+
++static void batadv_iv_ogm_schedule(struct batadv_hard_iface *hard_iface)
++{
++ if (hard_iface->if_status == BATADV_IF_NOT_IN_USE ||
++ hard_iface->if_status == BATADV_IF_TO_BE_REMOVED)
++ return;
++
++ mutex_lock(&hard_iface->bat_iv.ogm_buff_mutex);
++ batadv_iv_ogm_schedule_buff(hard_iface);
++ mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex);
++}
++
+ /**
+ * batadv_iv_orig_ifinfo_sum() - Get bcast_own sum for originator over interface
+ * @orig_node: originator which rebroadcasted the OGMs directly
+diff --git a/net/batman-adv/hard-interface.c b/net/batman-adv/hard-interface.c
+index c90e47342bb0..afb52282d5bd 100644
+--- a/net/batman-adv/hard-interface.c
++++ b/net/batman-adv/hard-interface.c
+@@ -18,6 +18,7 @@
+ #include <linux/kref.h>
+ #include <linux/limits.h>
+ #include <linux/list.h>
++#include <linux/mutex.h>
+ #include <linux/netdevice.h>
+ #include <linux/printk.h>
+ #include <linux/rculist.h>
+@@ -929,6 +930,7 @@ batadv_hardif_add_interface(struct net_device *net_dev)
+ INIT_LIST_HEAD(&hard_iface->list);
+ INIT_HLIST_HEAD(&hard_iface->neigh_list);
+
++ mutex_init(&hard_iface->bat_iv.ogm_buff_mutex);
+ spin_lock_init(&hard_iface->neigh_list_lock);
+ kref_init(&hard_iface->refcount);
+
+diff --git a/net/batman-adv/types.h b/net/batman-adv/types.h
+index 6ae139d74e0f..10597a5f3303 100644
+--- a/net/batman-adv/types.h
++++ b/net/batman-adv/types.h
+@@ -81,6 +81,9 @@ struct batadv_hard_iface_bat_iv {
+
+ /** @ogm_seqno: OGM sequence number - used to identify each OGM */
+ atomic_t ogm_seqno;
++
++ /** @ogm_buff_mutex: lock protecting ogm_buff and ogm_buff_len */
++ struct mutex ogm_buff_mutex;
+ };
+
+ /**
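+
+The batman-adv series puts every reader and writer of ogm_buff/ogm_buff_len
+behind the new ogm_buff_mutex: interface enable/disable, MAC updates and the
+OGM scheduler could previously race on the buffer, including use after the
+disable path's kfree(). The invariant is the classic "one lock guards this
+pair of fields", sketched with pthreads:
+
+    #include <pthread.h>
+    #include <stdlib.h>
+    #include <string.h>
+
+    static pthread_mutex_t buf_mutex = PTHREAD_MUTEX_INITIALIZER;
+    static unsigned char *buf;      /* touched only under buf_mutex */
+    static size_t buf_len;          /* ditto */
+
+    static int buf_enable(size_t len)
+    {
+            pthread_mutex_lock(&buf_mutex);
+            buf = malloc(len);
+            buf_len = buf ? len : 0;
+            pthread_mutex_unlock(&buf_mutex);
+            return buf ? 0 : -1;
+    }
+
+    static void buf_update(unsigned char byte)
+    {
+            pthread_mutex_lock(&buf_mutex);
+            if (buf)                /* may have been freed meanwhile */
+                    memset(buf, byte, buf_len);
+            pthread_mutex_unlock(&buf_mutex);
+    }
+
+    static void buf_disable(void)
+    {
+            pthread_mutex_lock(&buf_mutex);
+            free(buf);
+            buf = NULL;
+            buf_len = 0;
+            pthread_mutex_unlock(&buf_mutex);
+    }
+
+    int main(void)
+    {
+            if (buf_enable(64) == 0) {
+                    buf_update(0xaa);
+                    buf_disable();
+            }
+            return 0;
+    }
+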
+diff --git a/net/llc/llc_c_ac.c b/net/llc/llc_c_ac.c
+index 4d78375f9872..647c0554d04c 100644
+--- a/net/llc/llc_c_ac.c
++++ b/net/llc/llc_c_ac.c
+@@ -372,6 +372,7 @@ int llc_conn_ac_send_i_cmd_p_set_1(struct sock *sk, struct sk_buff *skb)
+ llc_pdu_init_as_i_cmd(skb, 1, llc->vS, llc->vR);
+ rc = llc_mac_hdr_init(skb, llc->dev->dev_addr, llc->daddr.mac);
+ if (likely(!rc)) {
++ skb_get(skb);
+ llc_conn_send_pdu(sk, skb);
+ llc_conn_ac_inc_vs_by_1(sk, skb);
+ }
+@@ -389,7 +390,8 @@ static int llc_conn_ac_send_i_cmd_p_set_0(struct sock *sk, struct sk_buff *skb)
+ llc_pdu_init_as_i_cmd(skb, 0, llc->vS, llc->vR);
+ rc = llc_mac_hdr_init(skb, llc->dev->dev_addr, llc->daddr.mac);
+ if (likely(!rc)) {
+- rc = llc_conn_send_pdu(sk, skb);
++ skb_get(skb);
++ llc_conn_send_pdu(sk, skb);
+ llc_conn_ac_inc_vs_by_1(sk, skb);
+ }
+ return rc;
+@@ -406,6 +408,7 @@ int llc_conn_ac_send_i_xxx_x_set_0(struct sock *sk, struct sk_buff *skb)
+ llc_pdu_init_as_i_cmd(skb, 0, llc->vS, llc->vR);
+ rc = llc_mac_hdr_init(skb, llc->dev->dev_addr, llc->daddr.mac);
+ if (likely(!rc)) {
++ skb_get(skb);
+ llc_conn_send_pdu(sk, skb);
+ llc_conn_ac_inc_vs_by_1(sk, skb);
+ }
+@@ -916,7 +919,8 @@ static int llc_conn_ac_send_i_rsp_f_set_ackpf(struct sock *sk,
+ llc_pdu_init_as_i_cmd(skb, llc->ack_pf, llc->vS, llc->vR);
+ rc = llc_mac_hdr_init(skb, llc->dev->dev_addr, llc->daddr.mac);
+ if (likely(!rc)) {
+- rc = llc_conn_send_pdu(sk, skb);
++ skb_get(skb);
++ llc_conn_send_pdu(sk, skb);
+ llc_conn_ac_inc_vs_by_1(sk, skb);
+ }
+ return rc;
+diff --git a/net/llc/llc_conn.c b/net/llc/llc_conn.c
+index 4ff89cb7c86f..ed2aca12460c 100644
+--- a/net/llc/llc_conn.c
++++ b/net/llc/llc_conn.c
+@@ -30,7 +30,7 @@
+ #endif
+
+ static int llc_find_offset(int state, int ev_type);
+-static int llc_conn_send_pdus(struct sock *sk, struct sk_buff *skb);
++static void llc_conn_send_pdus(struct sock *sk);
+ static int llc_conn_service(struct sock *sk, struct sk_buff *skb);
+ static int llc_exec_conn_trans_actions(struct sock *sk,
+ struct llc_conn_state_trans *trans,
+@@ -193,11 +193,11 @@ out_skb_put:
+ return rc;
+ }
+
+-int llc_conn_send_pdu(struct sock *sk, struct sk_buff *skb)
++void llc_conn_send_pdu(struct sock *sk, struct sk_buff *skb)
+ {
+ /* queue PDU to send to MAC layer */
+ skb_queue_tail(&sk->sk_write_queue, skb);
+- return llc_conn_send_pdus(sk, skb);
++ llc_conn_send_pdus(sk);
+ }
+
+ /**
+@@ -255,7 +255,7 @@ void llc_conn_resend_i_pdu_as_cmd(struct sock *sk, u8 nr, u8 first_p_bit)
+ if (howmany_resend > 0)
+ llc->vS = (llc->vS + 1) % LLC_2_SEQ_NBR_MODULO;
+ /* any PDUs to re-send are queued up; start sending to MAC */
+- llc_conn_send_pdus(sk, NULL);
++ llc_conn_send_pdus(sk);
+ out:;
+ }
+
+@@ -296,7 +296,7 @@ void llc_conn_resend_i_pdu_as_rsp(struct sock *sk, u8 nr, u8 first_f_bit)
+ if (howmany_resend > 0)
+ llc->vS = (llc->vS + 1) % LLC_2_SEQ_NBR_MODULO;
+ /* any PDUs to re-send are queued up; start sending to MAC */
+- llc_conn_send_pdus(sk, NULL);
++ llc_conn_send_pdus(sk);
+ out:;
+ }
+
+@@ -340,16 +340,12 @@ out:
+ /**
+ * llc_conn_send_pdus - Sends queued PDUs
+ * @sk: active connection
+- * @hold_skb: the skb held by caller, or NULL if does not care
+ *
+- * Sends queued pdus to MAC layer for transmission. When @hold_skb is
+- * NULL, always return 0. Otherwise, return 0 if @hold_skb is sent
+- * successfully, or 1 for failure.
++ * Sends queued pdus to MAC layer for transmission.
+ */
+-static int llc_conn_send_pdus(struct sock *sk, struct sk_buff *hold_skb)
++static void llc_conn_send_pdus(struct sock *sk)
+ {
+ struct sk_buff *skb;
+- int ret = 0;
+
+ while ((skb = skb_dequeue(&sk->sk_write_queue)) != NULL) {
+ struct llc_pdu_sn *pdu = llc_pdu_sn_hdr(skb);
+@@ -361,20 +357,10 @@ static int llc_conn_send_pdus(struct sock *sk, struct sk_buff *hold_skb)
+ skb_queue_tail(&llc_sk(sk)->pdu_unack_q, skb);
+ if (!skb2)
+ break;
+- dev_queue_xmit(skb2);
+- } else {
+- bool is_target = skb == hold_skb;
+- int rc;
+-
+- if (is_target)
+- skb_get(skb);
+- rc = dev_queue_xmit(skb);
+- if (is_target)
+- ret = rc;
++ skb = skb2;
+ }
++ dev_queue_xmit(skb);
+ }
+-
+- return ret;
+ }
+
+ /**
+diff --git a/net/llc/llc_s_ac.c b/net/llc/llc_s_ac.c
+index a94bd56bcac6..7ae4cc684d3a 100644
+--- a/net/llc/llc_s_ac.c
++++ b/net/llc/llc_s_ac.c
+@@ -58,8 +58,10 @@ int llc_sap_action_send_ui(struct llc_sap *sap, struct sk_buff *skb)
+ ev->daddr.lsap, LLC_PDU_CMD);
+ llc_pdu_init_as_ui_cmd(skb);
+ rc = llc_mac_hdr_init(skb, ev->saddr.mac, ev->daddr.mac);
+- if (likely(!rc))
++ if (likely(!rc)) {
++ skb_get(skb);
+ rc = dev_queue_xmit(skb);
++ }
+ return rc;
+ }
+
+@@ -81,8 +83,10 @@ int llc_sap_action_send_xid_c(struct llc_sap *sap, struct sk_buff *skb)
+ ev->daddr.lsap, LLC_PDU_CMD);
+ llc_pdu_init_as_xid_cmd(skb, LLC_XID_NULL_CLASS_2, 0);
+ rc = llc_mac_hdr_init(skb, ev->saddr.mac, ev->daddr.mac);
+- if (likely(!rc))
++ if (likely(!rc)) {
++ skb_get(skb);
+ rc = dev_queue_xmit(skb);
++ }
+ return rc;
+ }
+
+@@ -135,8 +139,10 @@ int llc_sap_action_send_test_c(struct llc_sap *sap, struct sk_buff *skb)
+ ev->daddr.lsap, LLC_PDU_CMD);
+ llc_pdu_init_as_test_cmd(skb);
+ rc = llc_mac_hdr_init(skb, ev->saddr.mac, ev->daddr.mac);
+- if (likely(!rc))
++ if (likely(!rc)) {
++ skb_get(skb);
+ rc = dev_queue_xmit(skb);
++ }
+ return rc;
+ }
+
+diff --git a/net/llc/llc_sap.c b/net/llc/llc_sap.c
+index a7f7b8ff4729..be419062e19a 100644
+--- a/net/llc/llc_sap.c
++++ b/net/llc/llc_sap.c
+@@ -197,29 +197,22 @@ out:
+ * After executing actions of the event, upper layer will be indicated
+ * if needed(on receiving an UI frame). sk can be null for the
+ * datalink_proto case.
++ *
++ * This function always consumes a reference to the skb.
+ */
+ static void llc_sap_state_process(struct llc_sap *sap, struct sk_buff *skb)
+ {
+ struct llc_sap_state_ev *ev = llc_sap_ev(skb);
+
+- /*
+- * We have to hold the skb, because llc_sap_next_state
+- * will kfree it in the sending path and we need to
+- * look at the skb->cb, where we encode llc_sap_state_ev.
+- */
+- skb_get(skb);
+ ev->ind_cfm_flag = 0;
+ llc_sap_next_state(sap, skb);
+- if (ev->ind_cfm_flag == LLC_IND) {
+- if (skb->sk->sk_state == TCP_LISTEN)
+- kfree_skb(skb);
+- else {
+- llc_save_primitive(skb->sk, skb, ev->prim);
+
+- /* queue skb to the user. */
+- if (sock_queue_rcv_skb(skb->sk, skb))
+- kfree_skb(skb);
+- }
++ if (ev->ind_cfm_flag == LLC_IND && skb->sk->sk_state != TCP_LISTEN) {
++ llc_save_primitive(skb->sk, skb, ev->prim);
++
++ /* queue skb to the user. */
++ if (sock_queue_rcv_skb(skb->sk, skb) == 0)
++ return;
+ }
+ kfree_skb(skb);
+ }
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index 81a8ef42b88d..56b1cf82ed3a 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -1793,8 +1793,8 @@ void __nf_ct_refresh_acct(struct nf_conn *ct,
+ if (nf_ct_is_confirmed(ct))
+ extra_jiffies += nfct_time_stamp;
+
+- if (ct->timeout != extra_jiffies)
+- ct->timeout = extra_jiffies;
++ if (READ_ONCE(ct->timeout) != extra_jiffies)
++ WRITE_ONCE(ct->timeout, extra_jiffies);
+ acct:
+ if (do_acct)
+ nf_ct_acct_update(ct, ctinfo, skb->len);
+diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c
+index 9c3ac96f71cb..64830d8c1fdb 100644
+--- a/net/rxrpc/peer_object.c
++++ b/net/rxrpc/peer_object.c
+@@ -216,7 +216,7 @@ struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *local, gfp_t gfp)
+ peer = kzalloc(sizeof(struct rxrpc_peer), gfp);
+ if (peer) {
+ atomic_set(&peer->usage, 1);
+- peer->local = local;
++ peer->local = rxrpc_get_local(local);
+ INIT_HLIST_HEAD(&peer->error_targets);
+ peer->service_conns = RB_ROOT;
+ seqlock_init(&peer->service_conn_lock);
+@@ -307,7 +307,6 @@ void rxrpc_new_incoming_peer(struct rxrpc_sock *rx, struct rxrpc_local *local,
+ unsigned long hash_key;
+
+ hash_key = rxrpc_peer_hash_key(local, &peer->srx);
+- peer->local = local;
+ rxrpc_init_peer(rx, peer, hash_key);
+
+ spin_lock(&rxnet->peer_hash_lock);
+@@ -382,7 +381,7 @@ struct rxrpc_peer *rxrpc_get_peer(struct rxrpc_peer *peer)
+ int n;
+
+ n = atomic_inc_return(&peer->usage);
+- trace_rxrpc_peer(peer, rxrpc_peer_got, n, here);
++ trace_rxrpc_peer(peer->debug_id, rxrpc_peer_got, n, here);
+ return peer;
+ }
+
+@@ -396,7 +395,7 @@ struct rxrpc_peer *rxrpc_get_peer_maybe(struct rxrpc_peer *peer)
+ if (peer) {
+ int n = atomic_fetch_add_unless(&peer->usage, 1, 0);
+ if (n > 0)
+- trace_rxrpc_peer(peer, rxrpc_peer_got, n + 1, here);
++ trace_rxrpc_peer(peer->debug_id, rxrpc_peer_got, n + 1, here);
+ else
+ peer = NULL;
+ }
+@@ -417,6 +416,7 @@ static void __rxrpc_put_peer(struct rxrpc_peer *peer)
+ list_del_init(&peer->keepalive_link);
+ spin_unlock_bh(&rxnet->peer_hash_lock);
+
++ rxrpc_put_local(peer->local);
+ kfree_rcu(peer, rcu);
+ }
+
+@@ -426,11 +426,13 @@ static void __rxrpc_put_peer(struct rxrpc_peer *peer)
+ void rxrpc_put_peer(struct rxrpc_peer *peer)
+ {
+ const void *here = __builtin_return_address(0);
++ unsigned int debug_id;
+ int n;
+
+ if (peer) {
++ debug_id = peer->debug_id;
+ n = atomic_dec_return(&peer->usage);
+- trace_rxrpc_peer(peer, rxrpc_peer_put, n, here);
++ trace_rxrpc_peer(debug_id, rxrpc_peer_put, n, here);
+ if (n == 0)
+ __rxrpc_put_peer(peer);
+ }
+@@ -443,13 +445,15 @@ void rxrpc_put_peer(struct rxrpc_peer *peer)
+ void rxrpc_put_peer_locked(struct rxrpc_peer *peer)
+ {
+ const void *here = __builtin_return_address(0);
++ unsigned int debug_id = peer->debug_id;
+ int n;
+
+ n = atomic_dec_return(&peer->usage);
+- trace_rxrpc_peer(peer, rxrpc_peer_put, n, here);
++ trace_rxrpc_peer(debug_id, rxrpc_peer_put, n, here);
+ if (n == 0) {
+ hash_del_rcu(&peer->hash_link);
+ list_del_init(&peer->keepalive_link);
++ rxrpc_put_local(peer->local);
+ kfree_rcu(peer, rcu);
+ }
+ }
+diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
+index 6a1547b270fe..22f51a7e356e 100644
+--- a/net/rxrpc/sendmsg.c
++++ b/net/rxrpc/sendmsg.c
+@@ -661,6 +661,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
+ case RXRPC_CALL_SERVER_PREALLOC:
+ case RXRPC_CALL_SERVER_SECURING:
+ case RXRPC_CALL_SERVER_ACCEPTING:
++ rxrpc_put_call(call, rxrpc_call_put);
+ ret = -EBUSY;
+ goto error_release_sock;
+ default:
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index f5cb35e550f8..0e44039e729c 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -476,7 +476,7 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ * skb will be queued.
+ */
+ if (count > 1 && (skb2 = skb_clone(skb, GFP_ATOMIC)) != NULL) {
+- struct Qdisc *rootq = qdisc_root(sch);
++ struct Qdisc *rootq = qdisc_root_bh(sch);
+ u32 dupsave = q->duplicate; /* prevent duplicating a dup... */
+
+ q->duplicate = 0;
+diff --git a/net/sched/sch_sfb.c b/net/sched/sch_sfb.c
+index 1dff8506a715..d448fe3068e5 100644
+--- a/net/sched/sch_sfb.c
++++ b/net/sched/sch_sfb.c
+@@ -488,7 +488,7 @@ static int sfb_change(struct Qdisc *sch, struct nlattr *opt,
+ struct netlink_ext_ack *extack)
+ {
+ struct sfb_sched_data *q = qdisc_priv(sch);
+- struct Qdisc *child;
++ struct Qdisc *child, *old;
+ struct nlattr *tb[TCA_SFB_MAX + 1];
+ const struct tc_sfb_qopt *ctl = &sfb_default_ops;
+ u32 limit;
+@@ -518,8 +518,8 @@ static int sfb_change(struct Qdisc *sch, struct nlattr *opt,
+ qdisc_hash_add(child, true);
+ sch_tree_lock(sch);
+
+- qdisc_tree_flush_backlog(q->qdisc);
+- qdisc_put(q->qdisc);
++ qdisc_purge_queue(q->qdisc);
++ old = q->qdisc;
+ q->qdisc = child;
+
+ q->rehash_interval = msecs_to_jiffies(ctl->rehash_interval);
+@@ -542,6 +542,7 @@ static int sfb_change(struct Qdisc *sch, struct nlattr *opt,
+ sfb_init_perturbation(1, q);
+
+ sch_tree_unlock(sch);
++ qdisc_put(old);
+
+ return 0;
+ }
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index e2176c167a57..4e0b5bed6c73 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -1243,19 +1243,21 @@ static void xs_error_report(struct sock *sk)
+ {
+ struct sock_xprt *transport;
+ struct rpc_xprt *xprt;
+- int err;
+
+ read_lock_bh(&sk->sk_callback_lock);
+ if (!(xprt = xprt_from_sock(sk)))
+ goto out;
+
+ transport = container_of(xprt, struct sock_xprt, xprt);
+- err = -sk->sk_err;
+- if (err == 0)
++ transport->xprt_err = -sk->sk_err;
++ if (transport->xprt_err == 0)
+ goto out;
+ dprintk("RPC: xs_error_report client %p, error=%d...\n",
+- xprt, -err);
+- trace_rpc_socket_error(xprt, sk->sk_socket, err);
++ xprt, -transport->xprt_err);
++ trace_rpc_socket_error(xprt, sk->sk_socket, transport->xprt_err);
++
++ /* barrier ensures xprt_err is set before XPRT_SOCK_WAKE_ERROR */
++ smp_mb__before_atomic();
+ xs_run_error_worker(transport, XPRT_SOCK_WAKE_ERROR);
+ out:
+ read_unlock_bh(&sk->sk_callback_lock);
+@@ -2470,7 +2472,6 @@ static void xs_wake_write(struct sock_xprt *transport)
+ static void xs_wake_error(struct sock_xprt *transport)
+ {
+ int sockerr;
+- int sockerr_len = sizeof(sockerr);
+
+ if (!test_bit(XPRT_SOCK_WAKE_ERROR, &transport->sock_state))
+ return;
+@@ -2479,9 +2480,7 @@ static void xs_wake_error(struct sock_xprt *transport)
+ goto out;
+ if (!test_and_clear_bit(XPRT_SOCK_WAKE_ERROR, &transport->sock_state))
+ goto out;
+- if (kernel_getsockopt(transport->sock, SOL_SOCKET, SO_ERROR,
+- (char *)&sockerr, &sockerr_len) != 0)
+- goto out;
++ sockerr = xchg(&transport->xprt_err, 0);
+ if (sockerr < 0)
+ xprt_wake_pending_tasks(&transport->xprt, sockerr);
+ out:
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index c2ce582ea143..da752caa1cda 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -377,7 +377,7 @@ const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = {
+ [NL80211_ATTR_MNTR_FLAGS] = { /* NLA_NESTED can't be empty */ },
+ [NL80211_ATTR_MESH_ID] = { .type = NLA_BINARY,
+ .len = IEEE80211_MAX_MESH_ID_LEN },
+- [NL80211_ATTR_MPATH_NEXT_HOP] = { .type = NLA_U32 },
++ [NL80211_ATTR_MPATH_NEXT_HOP] = NLA_POLICY_ETH_ADDR_COMPAT,
+
+ [NL80211_ATTR_REG_ALPHA2] = { .type = NLA_STRING, .len = 2 },
+ [NL80211_ATTR_REG_RULES] = { .type = NLA_NESTED },
+diff --git a/sound/core/timer.c b/sound/core/timer.c
+index 5c9fbf3f4340..6b724d2ee2de 100644
+--- a/sound/core/timer.c
++++ b/sound/core/timer.c
+@@ -226,7 +226,8 @@ static int snd_timer_check_master(struct snd_timer_instance *master)
+ return 0;
+ }
+
+-static int snd_timer_close_locked(struct snd_timer_instance *timeri);
++static int snd_timer_close_locked(struct snd_timer_instance *timeri,
++ struct device **card_devp_to_put);
+
+ /*
+ * open a timer instance
+@@ -238,6 +239,7 @@ int snd_timer_open(struct snd_timer_instance **ti,
+ {
+ struct snd_timer *timer;
+ struct snd_timer_instance *timeri = NULL;
++ struct device *card_dev_to_put = NULL;
+ int err;
+
+ mutex_lock(&register_mutex);
+@@ -261,7 +263,7 @@ int snd_timer_open(struct snd_timer_instance **ti,
+ list_add_tail(&timeri->open_list, &snd_timer_slave_list);
+ err = snd_timer_check_slave(timeri);
+ if (err < 0) {
+- snd_timer_close_locked(timeri);
++ snd_timer_close_locked(timeri, &card_dev_to_put);
+ timeri = NULL;
+ }
+ goto unlock;
+@@ -313,7 +315,7 @@ int snd_timer_open(struct snd_timer_instance **ti,
+ timeri = NULL;
+
+ if (timer->card)
+- put_device(&timer->card->card_dev);
++ card_dev_to_put = &timer->card->card_dev;
+ module_put(timer->module);
+ goto unlock;
+ }
+@@ -323,12 +325,15 @@ int snd_timer_open(struct snd_timer_instance **ti,
+ timer->num_instances++;
+ err = snd_timer_check_master(timeri);
+ if (err < 0) {
+- snd_timer_close_locked(timeri);
++ snd_timer_close_locked(timeri, &card_dev_to_put);
+ timeri = NULL;
+ }
+
+ unlock:
+ mutex_unlock(&register_mutex);
++ /* put_device() is called after unlock for avoiding deadlock */
++ if (card_dev_to_put)
++ put_device(card_dev_to_put);
+ *ti = timeri;
+ return err;
+ }
+@@ -338,7 +343,8 @@ EXPORT_SYMBOL(snd_timer_open);
+ * close a timer instance
+ * call this with register_mutex down.
+ */
+-static int snd_timer_close_locked(struct snd_timer_instance *timeri)
++static int snd_timer_close_locked(struct snd_timer_instance *timeri,
++ struct device **card_devp_to_put)
+ {
+ struct snd_timer *timer = timeri->timer;
+ struct snd_timer_instance *slave, *tmp;
+@@ -395,7 +401,7 @@ static int snd_timer_close_locked(struct snd_timer_instance *timeri)
+ timer->hw.close(timer);
+ /* release a card refcount for safe disconnection */
+ if (timer->card)
+- put_device(&timer->card->card_dev);
++ *card_devp_to_put = &timer->card->card_dev;
+ module_put(timer->module);
+ }
+
+@@ -407,14 +413,18 @@ static int snd_timer_close_locked(struct snd_timer_instance *timeri)
+ */
+ int snd_timer_close(struct snd_timer_instance *timeri)
+ {
++ struct device *card_dev_to_put = NULL;
+ int err;
+
+ if (snd_BUG_ON(!timeri))
+ return -ENXIO;
+
+ mutex_lock(&register_mutex);
+- err = snd_timer_close_locked(timeri);
++ err = snd_timer_close_locked(timeri, &card_dev_to_put);
+ mutex_unlock(&register_mutex);
++ /* put_device() is called after unlock for avoiding deadlock */
++ if (card_dev_to_put)
++ put_device(card_dev_to_put);
+ return err;
+ }
+ EXPORT_SYMBOL(snd_timer_close);
+diff --git a/sound/firewire/bebob/bebob_stream.c b/sound/firewire/bebob/bebob_stream.c
+index 334dc7c96e1d..80ea162bf1a1 100644
+--- a/sound/firewire/bebob/bebob_stream.c
++++ b/sound/firewire/bebob/bebob_stream.c
+@@ -252,8 +252,7 @@ end:
+ return err;
+ }
+
+-static unsigned int
+-map_data_channels(struct snd_bebob *bebob, struct amdtp_stream *s)
++static int map_data_channels(struct snd_bebob *bebob, struct amdtp_stream *s)
+ {
+ unsigned int sec, sections, ch, channels;
+ unsigned int pcm, midi, location;
+diff --git a/sound/hda/hdac_controller.c b/sound/hda/hdac_controller.c
+index 196bbc85699e..3b0110545070 100644
+--- a/sound/hda/hdac_controller.c
++++ b/sound/hda/hdac_controller.c
+@@ -447,8 +447,6 @@ static void azx_int_disable(struct hdac_bus *bus)
+ list_for_each_entry(azx_dev, &bus->stream_list, list)
+ snd_hdac_stream_updateb(azx_dev, SD_CTL, SD_INT_MASK, 0);
+
+- synchronize_irq(bus->irq);
+-
+ /* disable SIE for all streams */
+ snd_hdac_chip_writeb(bus, INTCTL, 0);
+
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 783f9a9c40ec..b0de3e3b33e5 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -1349,9 +1349,9 @@ static int azx_free(struct azx *chip)
+ }
+
+ if (bus->chip_init) {
+- azx_stop_chip(chip);
+ azx_clear_irq_pending(chip);
+ azx_stop_all_streams(chip);
++ azx_stop_chip(chip);
+ }
+
+ if (bus->irq >= 0)
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 26249c607f2c..d4daa3c937ba 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -409,6 +409,9 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ case 0x10ec0672:
+ alc_update_coef_idx(codec, 0xd, 0, 1<<14); /* EAPD Ctrl */
+ break;
++ case 0x10ec0623:
++ alc_update_coef_idx(codec, 0x19, 1<<13, 0);
++ break;
+ case 0x10ec0668:
+ alc_update_coef_idx(codec, 0x7, 3<<13, 0);
+ break;
+@@ -2919,6 +2922,7 @@ enum {
+ ALC269_TYPE_ALC225,
+ ALC269_TYPE_ALC294,
+ ALC269_TYPE_ALC300,
++ ALC269_TYPE_ALC623,
+ ALC269_TYPE_ALC700,
+ };
+
+@@ -2954,6 +2958,7 @@ static int alc269_parse_auto_config(struct hda_codec *codec)
+ case ALC269_TYPE_ALC225:
+ case ALC269_TYPE_ALC294:
+ case ALC269_TYPE_ALC300:
++ case ALC269_TYPE_ALC623:
+ case ALC269_TYPE_ALC700:
+ ssids = alc269_ssids;
+ break;
+@@ -7187,6 +7192,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x312f, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+ SND_PCI_QUIRK(0x17aa, 0x313c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+ SND_PCI_QUIRK(0x17aa, 0x3151, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
++ SND_PCI_QUIRK(0x17aa, 0x3176, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
++ SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+@@ -7974,6 +7981,9 @@ static int patch_alc269(struct hda_codec *codec)
+ spec->codec_variant = ALC269_TYPE_ALC300;
+ spec->gen.mixer_nid = 0; /* no loopback on ALC300 */
+ break;
++ case 0x10ec0623:
++ spec->codec_variant = ALC269_TYPE_ALC623;
++ break;
+ case 0x10ec0700:
+ case 0x10ec0701:
+ case 0x10ec0703:
+@@ -9101,6 +9111,7 @@ static const struct hda_device_id snd_hda_id_realtek[] = {
+ HDA_CODEC_ENTRY(0x10ec0298, "ALC298", patch_alc269),
+ HDA_CODEC_ENTRY(0x10ec0299, "ALC299", patch_alc269),
+ HDA_CODEC_ENTRY(0x10ec0300, "ALC300", patch_alc269),
++ HDA_CODEC_ENTRY(0x10ec0623, "ALC623", patch_alc269),
+ HDA_CODEC_REV_ENTRY(0x10ec0861, 0x100340, "ALC660", patch_alc861),
+ HDA_CODEC_ENTRY(0x10ec0660, "ALC660-VD", patch_alc861vd),
+ HDA_CODEC_ENTRY(0x10ec0861, "ALC861", patch_alc861),
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index b6f7b13768a1..059b70313f35 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1563,7 +1563,8 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
+ struct usb_interface *iface;
+
+ /* Playback Designs */
+- if (USB_ID_VENDOR(chip->usb_id) == 0x23ba) {
++ if (USB_ID_VENDOR(chip->usb_id) == 0x23ba &&
++ USB_ID_PRODUCT(chip->usb_id) < 0x0110) {
+ switch (fp->altsetting) {
+ case 1:
+ fp->dsd_dop = true;
+@@ -1580,9 +1581,6 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
+ /* XMOS based USB DACs */
+ switch (chip->usb_id) {
+ case USB_ID(0x1511, 0x0037): /* AURALiC VEGA */
+- case USB_ID(0x22d9, 0x0416): /* OPPO HA-1 */
+- case USB_ID(0x22d9, 0x0436): /* OPPO Sonica */
+- case USB_ID(0x22d9, 0x0461): /* OPPO UDP-205 */
+ case USB_ID(0x2522, 0x0012): /* LH Labs VI DAC Infinity */
+ case USB_ID(0x2772, 0x0230): /* Pro-Ject Pre Box S2 Digital */
+ if (fp->altsetting == 2)
+@@ -1596,7 +1594,6 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
+ case USB_ID(0x16d0, 0x0733): /* Furutech ADL Stratos */
+ case USB_ID(0x16d0, 0x09db): /* NuPrime Audio DAC-9 */
+ case USB_ID(0x1db5, 0x0003): /* Bryston BDA3 */
+- case USB_ID(0x22d9, 0x0426): /* OPPO HA-2 */
+ case USB_ID(0x22e1, 0xca01): /* HDTA Serenade DSD */
+ case USB_ID(0x249c, 0x9326): /* M2Tech Young MkIII */
+ case USB_ID(0x2616, 0x0106): /* PS Audio NuWave DAC */
+@@ -1651,9 +1648,13 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
+ * from XMOS/Thesycon
+ */
+ switch (USB_ID_VENDOR(chip->usb_id)) {
+- case 0x20b1: /* XMOS based devices */
+ case 0x152a: /* Thesycon devices */
++ case 0x20b1: /* XMOS based devices */
++ case 0x22d9: /* Oppo */
++ case 0x23ba: /* Playback Designs */
+ case 0x25ce: /* Mytek devices */
++ case 0x278b: /* Rotel? */
++ case 0x292b: /* Gustard/Ess based devices */
+ case 0x2ab6: /* T+A devices */
+ case 0x3842: /* EVGA */
+ case 0xc502: /* HiBy devices */
+diff --git a/tools/lib/subcmd/Makefile b/tools/lib/subcmd/Makefile
+index ed61fb3a46c0..5b2cd5e58df0 100644
+--- a/tools/lib/subcmd/Makefile
++++ b/tools/lib/subcmd/Makefile
+@@ -20,7 +20,13 @@ MAKEFLAGS += --no-print-directory
+ LIBFILE = $(OUTPUT)libsubcmd.a
+
+ CFLAGS := $(EXTRA_WARNINGS) $(EXTRA_CFLAGS)
+-CFLAGS += -ggdb3 -Wall -Wextra -std=gnu99 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fPIC
++CFLAGS += -ggdb3 -Wall -Wextra -std=gnu99 -fPIC
++
++ifeq ($(DEBUG),0)
++ ifeq ($(feature-fortify-source), 1)
++ CFLAGS += -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2
++ endif
++endif
+
+ ifeq ($(CC_NO_CLANG), 0)
+ CFLAGS += -O3
+diff --git a/tools/perf/arch/arm/annotate/instructions.c b/tools/perf/arch/arm/annotate/instructions.c
+index c7d1a69b894f..19ac54758c71 100644
+--- a/tools/perf/arch/arm/annotate/instructions.c
++++ b/tools/perf/arch/arm/annotate/instructions.c
+@@ -36,7 +36,7 @@ static int arm__annotate_init(struct arch *arch, char *cpuid __maybe_unused)
+
+ arm = zalloc(sizeof(*arm));
+ if (!arm)
+- return -1;
++ return ENOMEM;
+
+ #define ARM_CONDS "(cc|cs|eq|ge|gt|hi|le|ls|lt|mi|ne|pl|vc|vs)"
+ err = regcomp(&arm->call_insn, "^blx?" ARM_CONDS "?$", REG_EXTENDED);
+@@ -58,5 +58,5 @@ out_free_call:
+ regfree(&arm->call_insn);
+ out_free_arm:
+ free(arm);
+- return -1;
++ return SYMBOL_ANNOTATE_ERRNO__ARCH_INIT_REGEXP;
+ }
+diff --git a/tools/perf/arch/arm64/annotate/instructions.c b/tools/perf/arch/arm64/annotate/instructions.c
+index 8f70a1b282df..223e2f161f41 100644
+--- a/tools/perf/arch/arm64/annotate/instructions.c
++++ b/tools/perf/arch/arm64/annotate/instructions.c
+@@ -94,7 +94,7 @@ static int arm64__annotate_init(struct arch *arch, char *cpuid __maybe_unused)
+
+ arm = zalloc(sizeof(*arm));
+ if (!arm)
+- return -1;
++ return ENOMEM;
+
+ /* bl, blr */
+ err = regcomp(&arm->call_insn, "^blr?$", REG_EXTENDED);
+@@ -117,5 +117,5 @@ out_free_call:
+ regfree(&arm->call_insn);
+ out_free_arm:
+ free(arm);
+- return -1;
++ return SYMBOL_ANNOTATE_ERRNO__ARCH_INIT_REGEXP;
+ }
+diff --git a/tools/perf/arch/powerpc/util/header.c b/tools/perf/arch/powerpc/util/header.c
+index 0b242664f5ea..e46be9ef5a68 100644
+--- a/tools/perf/arch/powerpc/util/header.c
++++ b/tools/perf/arch/powerpc/util/header.c
+@@ -1,5 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <sys/types.h>
++#include <errno.h>
+ #include <unistd.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+@@ -31,7 +32,7 @@ get_cpuid(char *buffer, size_t sz)
+ buffer[nb-1] = '\0';
+ return 0;
+ }
+- return -1;
++ return ENOBUFS;
+ }
+
+ char *
+diff --git a/tools/perf/arch/s390/annotate/instructions.c b/tools/perf/arch/s390/annotate/instructions.c
+index 89bb8f2c54ce..a50e70baf918 100644
+--- a/tools/perf/arch/s390/annotate/instructions.c
++++ b/tools/perf/arch/s390/annotate/instructions.c
+@@ -164,8 +164,10 @@ static int s390__annotate_init(struct arch *arch, char *cpuid __maybe_unused)
+ if (!arch->initialized) {
+ arch->initialized = true;
+ arch->associate_instruction_ops = s390__associate_ins_ops;
+- if (cpuid)
+- err = s390__cpuid_parse(arch, cpuid);
++ if (cpuid) {
++ if (s390__cpuid_parse(arch, cpuid))
++ err = SYMBOL_ANNOTATE_ERRNO__ARCH_INIT_CPUID_PARSING;
++ }
+ }
+
+ return err;
+diff --git a/tools/perf/arch/s390/util/header.c b/tools/perf/arch/s390/util/header.c
+index 8b0b018d896a..7933f6871c81 100644
+--- a/tools/perf/arch/s390/util/header.c
++++ b/tools/perf/arch/s390/util/header.c
+@@ -8,6 +8,7 @@
+ */
+
+ #include <sys/types.h>
++#include <errno.h>
+ #include <unistd.h>
+ #include <stdio.h>
+ #include <string.h>
+@@ -54,7 +55,7 @@ int get_cpuid(char *buffer, size_t sz)
+
+ sysinfo = fopen(SYSINFO, "r");
+ if (sysinfo == NULL)
+- return -1;
++ return errno;
+
+ while ((read = getline(&line, &line_sz, sysinfo)) != -1) {
+ if (!strncmp(line, SYSINFO_MANU, strlen(SYSINFO_MANU))) {
+@@ -89,7 +90,7 @@ int get_cpuid(char *buffer, size_t sz)
+
+ /* Missing manufacturer, type or model information should not happen */
+ if (!manufacturer[0] || !type[0] || !model[0])
+- return -1;
++ return EINVAL;
+
+ /*
+ * Scan /proc/service_levels and return the CPU-MF counter facility
+@@ -133,14 +134,14 @@ skip_sysinfo:
+ else
+ nbytes = snprintf(buffer, sz, "%s,%s,%s", manufacturer, type,
+ model);
+- return (nbytes >= sz) ? -1 : 0;
++ return (nbytes >= sz) ? ENOBUFS : 0;
+ }
+
+ char *get_cpuid_str(struct perf_pmu *pmu __maybe_unused)
+ {
+ char *buf = malloc(128);
+
+- if (buf && get_cpuid(buf, 128) < 0)
++ if (buf && get_cpuid(buf, 128))
+ zfree(&buf);
+ return buf;
+ }
+diff --git a/tools/perf/arch/x86/annotate/instructions.c b/tools/perf/arch/x86/annotate/instructions.c
+index 44f5aba78210..7eb5621c021d 100644
+--- a/tools/perf/arch/x86/annotate/instructions.c
++++ b/tools/perf/arch/x86/annotate/instructions.c
+@@ -196,8 +196,10 @@ static int x86__annotate_init(struct arch *arch, char *cpuid)
+ if (arch->initialized)
+ return 0;
+
+- if (cpuid)
+- err = x86__cpuid_parse(arch, cpuid);
++ if (cpuid) {
++ if (x86__cpuid_parse(arch, cpuid))
++ err = SYMBOL_ANNOTATE_ERRNO__ARCH_INIT_CPUID_PARSING;
++ }
+
+ arch->initialized = true;
+ return err;
+diff --git a/tools/perf/arch/x86/util/header.c b/tools/perf/arch/x86/util/header.c
+index af9a9f2600be..a089af60906a 100644
+--- a/tools/perf/arch/x86/util/header.c
++++ b/tools/perf/arch/x86/util/header.c
+@@ -1,5 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <sys/types.h>
++#include <errno.h>
+ #include <unistd.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+@@ -57,7 +58,7 @@ __get_cpuid(char *buffer, size_t sz, const char *fmt)
+ buffer[nb-1] = '\0';
+ return 0;
+ }
+- return -1;
++ return ENOBUFS;
+ }
+
+ int
+diff --git a/tools/perf/builtin-kvm.c b/tools/perf/builtin-kvm.c
+index b33c83489120..44ff3ea1da23 100644
+--- a/tools/perf/builtin-kvm.c
++++ b/tools/perf/builtin-kvm.c
+@@ -699,14 +699,15 @@ static int process_sample_event(struct perf_tool *tool,
+
+ static int cpu_isa_config(struct perf_kvm_stat *kvm)
+ {
+- char buf[64], *cpuid;
++ char buf[128], *cpuid;
+ int err;
+
+ if (kvm->live) {
+ err = get_cpuid(buf, sizeof(buf));
+ if (err != 0) {
+- pr_err("Failed to look up CPU type\n");
+- return err;
++ pr_err("Failed to look up CPU type: %s\n",
++ str_error_r(err, buf, sizeof(buf)));
++ return -err;
+ }
+ cpuid = buf;
+ } else
+diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
+index 0140ddb8dd0b..c14a1cdad80c 100644
+--- a/tools/perf/builtin-script.c
++++ b/tools/perf/builtin-script.c
+@@ -1054,7 +1054,7 @@ static int perf_sample__fprintf_brstackinsn(struct perf_sample *sample,
+ continue;
+
+ insn = 0;
+- for (off = 0;; off += ilen) {
++ for (off = 0; off < (unsigned)len; off += ilen) {
+ uint64_t ip = start + off;
+
+ printed += ip__fprintf_sym(ip, thread, x.cpumode, x.cpu, &lastsym, attr, fp);
+@@ -1065,6 +1065,7 @@ static int perf_sample__fprintf_brstackinsn(struct perf_sample *sample,
+ printed += print_srccode(thread, x.cpumode, ip);
+ break;
+ } else {
++ ilen = 0;
+ printed += fprintf(fp, "\t%016" PRIx64 "\t%s\n", ip,
+ dump_insn(&x, ip, buffer + off, len - off, &ilen));
+ if (ilen == 0)
+@@ -1074,6 +1075,8 @@ static int perf_sample__fprintf_brstackinsn(struct perf_sample *sample,
+ insn++;
+ }
+ }
++ if (off != (unsigned)len)
++ printed += fprintf(fp, "\tmismatch of LBR data and executable\n");
+ }
+
+ /*
+@@ -1114,6 +1117,7 @@ static int perf_sample__fprintf_brstackinsn(struct perf_sample *sample,
+ goto out;
+ }
+ for (off = 0; off <= end - start; off += ilen) {
++ ilen = 0;
+ printed += fprintf(fp, "\t%016" PRIx64 "\t%s\n", start + off,
+ dump_insn(&x, start + off, buffer + off, len - off, &ilen));
+ if (ilen == 0)
+diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c
+index d413761621b0..fa85e33762f7 100644
+--- a/tools/perf/pmu-events/jevents.c
++++ b/tools/perf/pmu-events/jevents.c
+@@ -449,12 +449,12 @@ static struct fixed {
+ const char *name;
+ const char *event;
+ } fixed[] = {
+- { "inst_retired.any", "event=0xc0" },
+- { "inst_retired.any_p", "event=0xc0" },
+- { "cpu_clk_unhalted.ref", "event=0x0,umask=0x03" },
+- { "cpu_clk_unhalted.thread", "event=0x3c" },
+- { "cpu_clk_unhalted.core", "event=0x3c" },
+- { "cpu_clk_unhalted.thread_any", "event=0x3c,any=1" },
++ { "inst_retired.any", "event=0xc0,period=2000003" },
++ { "inst_retired.any_p", "event=0xc0,period=2000003" },
++ { "cpu_clk_unhalted.ref", "event=0x0,umask=0x03,period=2000003" },
++ { "cpu_clk_unhalted.thread", "event=0x3c,period=2000003" },
++ { "cpu_clk_unhalted.core", "event=0x3c,period=2000003" },
++ { "cpu_clk_unhalted.thread_any", "event=0x3c,any=1,period=2000003" },
+ { NULL, NULL},
+ };
+
+diff --git a/tools/perf/tests/perf-hooks.c b/tools/perf/tests/perf-hooks.c
+index a693bcf017ea..44c16fd11bf6 100644
+--- a/tools/perf/tests/perf-hooks.c
++++ b/tools/perf/tests/perf-hooks.c
+@@ -20,12 +20,11 @@ static void sigsegv_handler(int sig __maybe_unused)
+ static void the_hook(void *_hook_flags)
+ {
+ int *hook_flags = _hook_flags;
+- int *p = NULL;
+
+ *hook_flags = 1234;
+
+ /* Generate a segfault, test perf_hooks__recover */
+- *p = 0;
++ raise(SIGSEGV);
+ }
+
+ int test__perf_hooks(struct test *test __maybe_unused, int subtest __maybe_unused)
+diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
+index 163536720149..2e02d2a0176a 100644
+--- a/tools/perf/util/annotate.c
++++ b/tools/perf/util/annotate.c
+@@ -1625,6 +1625,19 @@ int symbol__strerror_disassemble(struct symbol *sym __maybe_unused, struct map *
+ case SYMBOL_ANNOTATE_ERRNO__NO_LIBOPCODES_FOR_BPF:
+ scnprintf(buf, buflen, "Please link with binutils's libopcode to enable BPF annotation");
+ break;
++ case SYMBOL_ANNOTATE_ERRNO__ARCH_INIT_REGEXP:
++ scnprintf(buf, buflen, "Problems with arch specific instruction name regular expressions.");
++ break;
++ case SYMBOL_ANNOTATE_ERRNO__ARCH_INIT_CPUID_PARSING:
++ scnprintf(buf, buflen, "Problems while parsing the CPUID in the arch specific initialization.");
++ break;
++ case SYMBOL_ANNOTATE_ERRNO__BPF_INVALID_FILE:
++ scnprintf(buf, buflen, "Invalid BPF file: %s.", dso->long_name);
++ break;
++ case SYMBOL_ANNOTATE_ERRNO__BPF_MISSING_BTF:
++ scnprintf(buf, buflen, "The %s BPF file has no BTF section, compile with -g or use pahole -J.",
++ dso->long_name);
++ break;
+ default:
+ scnprintf(buf, buflen, "Internal error: Invalid %d error code\n", errnum);
+ break;
+@@ -1656,7 +1669,7 @@ static int dso__disassemble_filename(struct dso *dso, char *filename, size_t fil
+
+ build_id_path = strdup(filename);
+ if (!build_id_path)
+- return -1;
++ return ENOMEM;
+
+ /*
+ * old style build-id cache has name of XX/XXXXXXX.. while
+@@ -1707,13 +1720,13 @@ static int symbol__disassemble_bpf(struct symbol *sym,
+ char tpath[PATH_MAX];
+ size_t buf_size;
+ int nr_skip = 0;
+- int ret = -1;
+ char *buf;
+ bfd *bfdf;
++ int ret;
+ FILE *s;
+
+ if (dso->binary_type != DSO_BINARY_TYPE__BPF_PROG_INFO)
+- return -1;
++ return SYMBOL_ANNOTATE_ERRNO__BPF_INVALID_FILE;
+
+ pr_debug("%s: handling sym %s addr %" PRIx64 " len %" PRIx64 "\n", __func__,
+ sym->name, sym->start, sym->end - sym->start);
+@@ -1726,8 +1739,10 @@ static int symbol__disassemble_bpf(struct symbol *sym,
+ assert(bfd_check_format(bfdf, bfd_object));
+
+ s = open_memstream(&buf, &buf_size);
+- if (!s)
++ if (!s) {
++ ret = errno;
+ goto out;
++ }
+ init_disassemble_info(&info, s,
+ (fprintf_ftype) fprintf);
+
+@@ -1736,8 +1751,10 @@ static int symbol__disassemble_bpf(struct symbol *sym,
+
+ info_node = perf_env__find_bpf_prog_info(dso->bpf_prog.env,
+ dso->bpf_prog.id);
+- if (!info_node)
++ if (!info_node) {
++ ret = SYMBOL_ANNOTATE_ERRNO__BPF_MISSING_BTF;
+ goto out;
++ }
+ info_linear = info_node->info_linear;
+ sub_id = dso->bpf_prog.sub_id;
+
+@@ -2065,11 +2082,11 @@ int symbol__annotate(struct symbol *sym, struct map *map,
+ int err;
+
+ if (!arch_name)
+- return -1;
++ return errno;
+
+ args.arch = arch = arch__find(arch_name);
+ if (arch == NULL)
+- return -ENOTSUP;
++ return ENOTSUP;
+
+ if (parch)
+ *parch = arch;
+@@ -2965,7 +2982,7 @@ int symbol__annotate2(struct symbol *sym, struct map *map, struct perf_evsel *ev
+
+ notes->offsets = zalloc(size * sizeof(struct annotation_line *));
+ if (notes->offsets == NULL)
+- return -1;
++ return ENOMEM;
+
+ if (perf_evsel__is_group_event(evsel))
+ nr_pcnt = evsel->nr_members;
+@@ -2991,7 +3008,7 @@ int symbol__annotate2(struct symbol *sym, struct map *map, struct perf_evsel *ev
+
+ out_free_offsets:
+- zfree(&notes->offsets);
+- return -1;
++ return err;
+ }
+
+ #define ANNOTATION__CFG(n) \
+diff --git a/tools/perf/util/annotate.h b/tools/perf/util/annotate.h
+index 5bc0cf655d37..2004e2cf0211 100644
+--- a/tools/perf/util/annotate.h
++++ b/tools/perf/util/annotate.h
+@@ -370,6 +370,10 @@ enum symbol_disassemble_errno {
+
+ SYMBOL_ANNOTATE_ERRNO__NO_VMLINUX = __SYMBOL_ANNOTATE_ERRNO__START,
+ SYMBOL_ANNOTATE_ERRNO__NO_LIBOPCODES_FOR_BPF,
++ SYMBOL_ANNOTATE_ERRNO__ARCH_INIT_CPUID_PARSING,
++ SYMBOL_ANNOTATE_ERRNO__ARCH_INIT_REGEXP,
++ SYMBOL_ANNOTATE_ERRNO__BPF_INVALID_FILE,
++ SYMBOL_ANNOTATE_ERRNO__BPF_MISSING_BTF,
+
+ __SYMBOL_ANNOTATE_ERRNO__END,
+ };
+diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
+index 7666206d06fa..f18113581cf0 100644
+--- a/tools/perf/util/map.c
++++ b/tools/perf/util/map.c
+@@ -1,5 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include "symbol.h"
++#include <assert.h>
+ #include <errno.h>
+ #include <inttypes.h>
+ #include <limits.h>
+@@ -847,6 +848,8 @@ static int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp
+ }
+
+ after->start = map->end;
++ after->pgoff += map->end - pos->start;
++ assert(pos->map_ip(pos, map->end) == after->map_ip(after, map->end));
+ __map_groups__insert(pos->groups, after);
+ if (verbose >= 2 && !use_browser)
+ map__fprintf(after, fp);
+diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
+index 25b43a8c2b15..1779923d7a7b 100644
+--- a/tools/testing/selftests/Makefile
++++ b/tools/testing/selftests/Makefile
+@@ -198,8 +198,12 @@ ifdef INSTALL_PATH
+ echo " cat /dev/null > \$$logfile" >> $(ALL_SCRIPT)
+ echo "fi" >> $(ALL_SCRIPT)
+
++ @# While building run_kselftest.sh skip also non-existent TARGET dirs:
++ @# they could be the result of a build failure and should NOT be
++ @# included in the generated runlist.
+ for TARGET in $(TARGETS); do \
+ BUILD_TARGET=$$BUILD/$$TARGET; \
++ [ ! -d $$INSTALL_PATH/$$TARGET ] && echo "Skipping non-existent dir: $$TARGET" && continue; \
+ echo "[ -w /dev/kmsg ] && echo \"kselftest: Running tests in $$TARGET\" >> /dev/kmsg" >> $(ALL_SCRIPT); \
+ echo "cd $$TARGET" >> $(ALL_SCRIPT); \
+ echo -n "run_many" >> $(ALL_SCRIPT); \
+diff --git a/tools/testing/selftests/kselftest/runner.sh b/tools/testing/selftests/kselftest/runner.sh
+index 00c9020bdda8..84de7bc74f2c 100644
+--- a/tools/testing/selftests/kselftest/runner.sh
++++ b/tools/testing/selftests/kselftest/runner.sh
+@@ -3,9 +3,14 @@
+ #
+ # Runs a set of tests in a given subdirectory.
+ export skip_rc=4
++export timeout_rc=124
+ export logfile=/dev/stdout
+ export per_test_logging=
+
++# Defaults for "settings" file fields:
++# "timeout" how many seconds to let each test run before failing.
++export kselftest_default_timeout=45
++
+ # There isn't a shell-agnostic way to find the path of a sourced file,
+ # so we must rely on BASE_DIR being set to find other tools.
+ if [ -z "$BASE_DIR" ]; then
+@@ -24,6 +29,16 @@ tap_prefix()
+ fi
+ }
+
++tap_timeout()
++{
++ # Make sure tests will time out if utility is available.
++ if [ -x /usr/bin/timeout ] ; then
++ /usr/bin/timeout "$kselftest_timeout" "$1"
++ else
++ "$1"
++ fi
++}
++
+ run_one()
+ {
+ DIR="$1"
+@@ -32,6 +47,18 @@ run_one()
+
+ BASENAME_TEST=$(basename $TEST)
+
++ # Reset any "settings"-file variables.
++ export kselftest_timeout="$kselftest_default_timeout"
++ # Load per-test-directory kselftest "settings" file.
++ settings="$BASE_DIR/$DIR/settings"
++ if [ -r "$settings" ] ; then
++ while read line ; do
++ field=$(echo "$line" | cut -d= -f1)
++ value=$(echo "$line" | cut -d= -f2-)
++ eval "kselftest_$field"="$value"
++ done < "$settings"
++ fi
++
+ TEST_HDR_MSG="selftests: $DIR: $BASENAME_TEST"
+ echo "# $TEST_HDR_MSG"
+ if [ ! -x "$TEST" ]; then
+@@ -44,14 +71,17 @@ run_one()
+ echo "not ok $test_num $TEST_HDR_MSG"
+ else
+ cd `dirname $TEST` > /dev/null
+- (((((./$BASENAME_TEST 2>&1; echo $? >&3) |
++ ((((( tap_timeout ./$BASENAME_TEST 2>&1; echo $? >&3) |
+ tap_prefix >&4) 3>&1) |
+ (read xs; exit $xs)) 4>>"$logfile" &&
+ echo "ok $test_num $TEST_HDR_MSG") ||
+- (if [ $? -eq $skip_rc ]; then \
++ (rc=$?; \
++ if [ $rc -eq $skip_rc ]; then \
+ echo "not ok $test_num $TEST_HDR_MSG # SKIP"
++ elif [ $rc -eq $timeout_rc ]; then \
++ echo "not ok $test_num $TEST_HDR_MSG # TIMEOUT"
+ else
+- echo "not ok $test_num $TEST_HDR_MSG"
++ echo "not ok $test_num $TEST_HDR_MSG # exit=$rc"
+ fi)
+ cd - >/dev/null
+ fi
+diff --git a/tools/testing/selftests/rtc/settings b/tools/testing/selftests/rtc/settings
+new file mode 100644
+index 000000000000..ba4d85f74cd6
+--- /dev/null
++++ b/tools/testing/selftests/rtc/settings
+@@ -0,0 +1 @@
++timeout=90
* [gentoo-commits] proj/linux-patches:5.3 commit in: /
@ 2019-11-10 16:22 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2019-11-10 16:22 UTC (permalink / raw
To: gentoo-commits
commit: 0daea8e2bb22888ab05d48a2e6486bedbbb18a61
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Nov 10 16:21:52 2019 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Nov 10 16:21:52 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0daea8e2
Linux patch 5.3.10
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1009_linux-5.3.10.patch | 6701 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 6705 insertions(+)
diff --git a/0000_README b/0000_README
index c1a5896..6cbaced 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch: 1008_linux-5.3.9.patch
From: http://www.kernel.org
Desc: Linux 5.3.9
+Patch: 1009_linux-5.3.10.patch
+From: http://www.kernel.org
+Desc: Linux 5.3.10
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1009_linux-5.3.10.patch b/1009_linux-5.3.10.patch
new file mode 100644
index 0000000..5f59953
--- /dev/null
+++ b/1009_linux-5.3.10.patch
@@ -0,0 +1,6701 @@
+diff --git a/Makefile b/Makefile
+index ad5f5230bbbe..e2a8b4534da5 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 3
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Bobtail Squid
+
+diff --git a/arch/arm/boot/dts/am3874-iceboard.dts b/arch/arm/boot/dts/am3874-iceboard.dts
+index 883fb85135d4..1b4b2b0500e4 100644
+--- a/arch/arm/boot/dts/am3874-iceboard.dts
++++ b/arch/arm/boot/dts/am3874-iceboard.dts
+@@ -111,13 +111,13 @@
+ reg = <0x70>;
+ #address-cells = <1>;
+ #size-cells = <0>;
++ i2c-mux-idle-disconnect;
+
+ i2c@0 {
+ /* FMC A */
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0>;
+- i2c-mux-idle-disconnect;
+ };
+
+ i2c@1 {
+@@ -125,7 +125,6 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <1>;
+- i2c-mux-idle-disconnect;
+ };
+
+ i2c@2 {
+@@ -133,7 +132,6 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <2>;
+- i2c-mux-idle-disconnect;
+ };
+
+ i2c@3 {
+@@ -141,7 +139,6 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <3>;
+- i2c-mux-idle-disconnect;
+ };
+
+ i2c@4 {
+@@ -149,14 +146,12 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <4>;
+- i2c-mux-idle-disconnect;
+ };
+
+ i2c@5 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <5>;
+- i2c-mux-idle-disconnect;
+
+ ina230@40 { compatible = "ti,ina230"; reg = <0x40>; shunt-resistor = <5000>; };
+ ina230@41 { compatible = "ti,ina230"; reg = <0x41>; shunt-resistor = <5000>; };
+@@ -182,14 +177,12 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <6>;
+- i2c-mux-idle-disconnect;
+ };
+
+ i2c@7 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <7>;
+- i2c-mux-idle-disconnect;
+
+ u41: pca9575@20 {
+ compatible = "nxp,pca9575";
+diff --git a/arch/arm/boot/dts/bcm2837-rpi-cm3.dtsi b/arch/arm/boot/dts/bcm2837-rpi-cm3.dtsi
+index 81399b2c5af9..d4f0e455612d 100644
+--- a/arch/arm/boot/dts/bcm2837-rpi-cm3.dtsi
++++ b/arch/arm/boot/dts/bcm2837-rpi-cm3.dtsi
+@@ -8,6 +8,14 @@
+ reg = <0 0x40000000>;
+ };
+
++ leds {
++ /*
++ * Since there is no upstream GPIO driver yet,
++ * remove the incomplete node.
++ */
++ /delete-node/ act;
++ };
++
+ reg_3v3: fixed-regulator {
+ compatible = "regulator-fixed";
+ regulator-name = "3V3";
+diff --git a/arch/arm/boot/dts/imx6-logicpd-som.dtsi b/arch/arm/boot/dts/imx6-logicpd-som.dtsi
+index 7ceae3573248..547fb141ec0c 100644
+--- a/arch/arm/boot/dts/imx6-logicpd-som.dtsi
++++ b/arch/arm/boot/dts/imx6-logicpd-som.dtsi
+@@ -207,6 +207,10 @@
+ vin-supply = <&sw1c_reg>;
+ };
+
++&snvs_poweroff {
++ status = "okay";
++};
++
+ &iomuxc {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_hog>;
+diff --git a/arch/arm/boot/dts/imx7s.dtsi b/arch/arm/boot/dts/imx7s.dtsi
+index c1a4fff5ceda..6323a9462afa 100644
+--- a/arch/arm/boot/dts/imx7s.dtsi
++++ b/arch/arm/boot/dts/imx7s.dtsi
+@@ -448,7 +448,7 @@
+ compatible = "fsl,imx7d-gpt", "fsl,imx6sx-gpt";
+ reg = <0x302d0000 0x10000>;
+ interrupts = <GIC_SPI 55 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clks IMX7D_CLK_DUMMY>,
++ clocks = <&clks IMX7D_GPT1_ROOT_CLK>,
+ <&clks IMX7D_GPT1_ROOT_CLK>;
+ clock-names = "ipg", "per";
+ };
+@@ -457,7 +457,7 @@
+ compatible = "fsl,imx7d-gpt", "fsl,imx6sx-gpt";
+ reg = <0x302e0000 0x10000>;
+ interrupts = <GIC_SPI 54 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clks IMX7D_CLK_DUMMY>,
++ clocks = <&clks IMX7D_GPT2_ROOT_CLK>,
+ <&clks IMX7D_GPT2_ROOT_CLK>;
+ clock-names = "ipg", "per";
+ status = "disabled";
+@@ -467,7 +467,7 @@
+ compatible = "fsl,imx7d-gpt", "fsl,imx6sx-gpt";
+ reg = <0x302f0000 0x10000>;
+ interrupts = <GIC_SPI 53 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clks IMX7D_CLK_DUMMY>,
++ clocks = <&clks IMX7D_GPT3_ROOT_CLK>,
+ <&clks IMX7D_GPT3_ROOT_CLK>;
+ clock-names = "ipg", "per";
+ status = "disabled";
+@@ -477,7 +477,7 @@
+ compatible = "fsl,imx7d-gpt", "fsl,imx6sx-gpt";
+ reg = <0x30300000 0x10000>;
+ interrupts = <GIC_SPI 52 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clks IMX7D_CLK_DUMMY>,
++ clocks = <&clks IMX7D_GPT4_ROOT_CLK>,
+ <&clks IMX7D_GPT4_ROOT_CLK>;
+ clock-names = "ipg", "per";
+ status = "disabled";
+diff --git a/arch/arm/boot/dts/logicpd-torpedo-som.dtsi b/arch/arm/boot/dts/logicpd-torpedo-som.dtsi
+index 3fdd0a72f87f..506b118e511a 100644
+--- a/arch/arm/boot/dts/logicpd-torpedo-som.dtsi
++++ b/arch/arm/boot/dts/logicpd-torpedo-som.dtsi
+@@ -192,3 +192,7 @@
+ &twl_gpio {
+ ti,use-leds;
+ };
++
++&twl_keypad {
++ status = "disabled";
++};
+diff --git a/arch/arm/boot/dts/omap4-droid4-xt894.dts b/arch/arm/boot/dts/omap4-droid4-xt894.dts
+index 4454449de00c..a40fe8d49da6 100644
+--- a/arch/arm/boot/dts/omap4-droid4-xt894.dts
++++ b/arch/arm/boot/dts/omap4-droid4-xt894.dts
+@@ -369,7 +369,7 @@
+ compatible = "ti,wl1285", "ti,wl1283";
+ reg = <2>;
+ /* gpio_100 with gpmc_wait2 pad as wakeirq */
+- interrupts-extended = <&gpio4 4 IRQ_TYPE_EDGE_RISING>,
++ interrupts-extended = <&gpio4 4 IRQ_TYPE_LEVEL_HIGH>,
+ <&omap4_pmx_core 0x4e>;
+ interrupt-names = "irq", "wakeup";
+ ref-clock-frequency = <26000000>;
+diff --git a/arch/arm/boot/dts/omap4-panda-common.dtsi b/arch/arm/boot/dts/omap4-panda-common.dtsi
+index 14be2ecb62b1..55ea8b6189af 100644
+--- a/arch/arm/boot/dts/omap4-panda-common.dtsi
++++ b/arch/arm/boot/dts/omap4-panda-common.dtsi
+@@ -474,7 +474,7 @@
+ compatible = "ti,wl1271";
+ reg = <2>;
+ /* gpio_53 with gpmc_ncs3 pad as wakeup */
+- interrupts-extended = <&gpio2 21 IRQ_TYPE_EDGE_RISING>,
++ interrupts-extended = <&gpio2 21 IRQ_TYPE_LEVEL_HIGH>,
+ <&omap4_pmx_core 0x3a>;
+ interrupt-names = "irq", "wakeup";
+ ref-clock-frequency = <38400000>;
+diff --git a/arch/arm/boot/dts/omap4-sdp.dts b/arch/arm/boot/dts/omap4-sdp.dts
+index 3c274965ff40..91480ac1f328 100644
+--- a/arch/arm/boot/dts/omap4-sdp.dts
++++ b/arch/arm/boot/dts/omap4-sdp.dts
+@@ -512,7 +512,7 @@
+ compatible = "ti,wl1281";
+ reg = <2>;
+ interrupt-parent = <&gpio1>;
+- interrupts = <21 IRQ_TYPE_EDGE_RISING>; /* gpio 53 */
++ interrupts = <21 IRQ_TYPE_LEVEL_HIGH>; /* gpio 53 */
+ ref-clock-frequency = <26000000>;
+ tcxo-clock-frequency = <26000000>;
+ };
+diff --git a/arch/arm/boot/dts/omap4-var-som-om44-wlan.dtsi b/arch/arm/boot/dts/omap4-var-som-om44-wlan.dtsi
+index 6dbbc9b3229c..d0032213101e 100644
+--- a/arch/arm/boot/dts/omap4-var-som-om44-wlan.dtsi
++++ b/arch/arm/boot/dts/omap4-var-som-om44-wlan.dtsi
+@@ -69,7 +69,7 @@
+ compatible = "ti,wl1271";
+ reg = <2>;
+ interrupt-parent = <&gpio2>;
+- interrupts = <9 IRQ_TYPE_EDGE_RISING>; /* gpio 41 */
++ interrupts = <9 IRQ_TYPE_LEVEL_HIGH>; /* gpio 41 */
+ ref-clock-frequency = <38400000>;
+ };
+ };
+diff --git a/arch/arm/boot/dts/omap5-board-common.dtsi b/arch/arm/boot/dts/omap5-board-common.dtsi
+index 7fff555ee394..68ac04641bdb 100644
+--- a/arch/arm/boot/dts/omap5-board-common.dtsi
++++ b/arch/arm/boot/dts/omap5-board-common.dtsi
+@@ -362,7 +362,7 @@
+ pinctrl-names = "default";
+ pinctrl-0 = <&wlcore_irq_pin>;
+ interrupt-parent = <&gpio1>;
+- interrupts = <14 IRQ_TYPE_EDGE_RISING>; /* gpio 14 */
++ interrupts = <14 IRQ_TYPE_LEVEL_HIGH>; /* gpio 14 */
+ ref-clock-frequency = <26000000>;
+ };
+ };
+diff --git a/arch/arm/boot/dts/vf610-zii-scu4-aib.dts b/arch/arm/boot/dts/vf610-zii-scu4-aib.dts
+index d7019e89f588..8136e0ca10d5 100644
+--- a/arch/arm/boot/dts/vf610-zii-scu4-aib.dts
++++ b/arch/arm/boot/dts/vf610-zii-scu4-aib.dts
+@@ -600,6 +600,7 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0x70>;
++ i2c-mux-idle-disconnect;
+
+ sff0_i2c: i2c@1 {
+ #address-cells = <1>;
+@@ -638,6 +639,7 @@
+ reg = <0x71>;
+ #address-cells = <1>;
+ #size-cells = <0>;
++ i2c-mux-idle-disconnect;
+
+ sff5_i2c: i2c@1 {
+ #address-cells = <1>;
+diff --git a/arch/arm/include/asm/domain.h b/arch/arm/include/asm/domain.h
+index 567dbede4785..f1d0a7807cd0 100644
+--- a/arch/arm/include/asm/domain.h
++++ b/arch/arm/include/asm/domain.h
+@@ -82,7 +82,7 @@
+ #ifndef __ASSEMBLY__
+
+ #ifdef CONFIG_CPU_CP15_MMU
+-static inline unsigned int get_domain(void)
++static __always_inline unsigned int get_domain(void)
+ {
+ unsigned int domain;
+
+@@ -94,7 +94,7 @@ static inline unsigned int get_domain(void)
+ return domain;
+ }
+
+-static inline void set_domain(unsigned val)
++static __always_inline void set_domain(unsigned int val)
+ {
+ asm volatile(
+ "mcr p15, 0, %0, c3, c0 @ set domain"
+@@ -102,12 +102,12 @@ static inline void set_domain(unsigned val)
+ isb();
+ }
+ #else
+-static inline unsigned int get_domain(void)
++static __always_inline unsigned int get_domain(void)
+ {
+ return 0;
+ }
+
+-static inline void set_domain(unsigned val)
++static __always_inline void set_domain(unsigned int val)
+ {
+ }
+ #endif
+diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
+index 303248e5b990..98c6b91be4a8 100644
+--- a/arch/arm/include/asm/uaccess.h
++++ b/arch/arm/include/asm/uaccess.h
+@@ -22,7 +22,7 @@
+ * perform such accesses (eg, via list poison values) which could then
+ * be exploited for priviledge escalation.
+ */
+-static inline unsigned int uaccess_save_and_enable(void)
++static __always_inline unsigned int uaccess_save_and_enable(void)
+ {
+ #ifdef CONFIG_CPU_SW_DOMAIN_PAN
+ unsigned int old_domain = get_domain();
+@@ -37,7 +37,7 @@ static inline unsigned int uaccess_save_and_enable(void)
+ #endif
+ }
+
+-static inline void uaccess_restore(unsigned int flags)
++static __always_inline void uaccess_restore(unsigned int flags)
+ {
+ #ifdef CONFIG_CPU_SW_DOMAIN_PAN
+ /* Restore the user access mask */
+diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
+index a7810be07da1..4a3982812a40 100644
+--- a/arch/arm/kernel/head-common.S
++++ b/arch/arm/kernel/head-common.S
+@@ -68,7 +68,7 @@ ENDPROC(__vet_atags)
+ * The following fragment of code is executed with the MMU on in MMU mode,
+ * and uses absolute addresses; this is not position independent.
+ *
+- * r0 = cp#15 control register
++ * r0 = cp#15 control register (exc_ret for M-class)
+ * r1 = machine ID
+ * r2 = atags/dtb pointer
+ * r9 = processor ID
+@@ -137,7 +137,8 @@ __mmap_switched_data:
+ #ifdef CONFIG_CPU_CP15
+ .long cr_alignment @ r3
+ #else
+- .long 0 @ r3
++M_CLASS(.long exc_ret) @ r3
++AR_CLASS(.long 0) @ r3
+ #endif
+ .size __mmap_switched_data, . - __mmap_switched_data
+
+diff --git a/arch/arm/kernel/head-nommu.S b/arch/arm/kernel/head-nommu.S
+index afa350f44dea..0fc814bbc34b 100644
+--- a/arch/arm/kernel/head-nommu.S
++++ b/arch/arm/kernel/head-nommu.S
+@@ -201,6 +201,8 @@ M_CLASS(streq r3, [r12, #PMSAv8_MAIR1])
+ bic r0, r0, #V7M_SCB_CCR_IC
+ #endif
+ str r0, [r12, V7M_SCB_CCR]
++ /* Pass exc_ret to __mmap_switched */
++ mov r0, r10
+ #endif /* CONFIG_CPU_CP15 elif CONFIG_CPU_V7M */
+ ret lr
+ ENDPROC(__after_proc_init)
+diff --git a/arch/arm/mach-davinci/dm365.c b/arch/arm/mach-davinci/dm365.c
+index 2f9ae6431bf5..cebab6af31a2 100644
+--- a/arch/arm/mach-davinci/dm365.c
++++ b/arch/arm/mach-davinci/dm365.c
+@@ -462,8 +462,8 @@ static s8 dm365_queue_priority_mapping[][2] = {
+ };
+
+ static const struct dma_slave_map dm365_edma_map[] = {
+- { "davinci-mcbsp.0", "tx", EDMA_FILTER_PARAM(0, 2) },
+- { "davinci-mcbsp.0", "rx", EDMA_FILTER_PARAM(0, 3) },
++ { "davinci-mcbsp", "tx", EDMA_FILTER_PARAM(0, 2) },
++ { "davinci-mcbsp", "rx", EDMA_FILTER_PARAM(0, 3) },
+ { "davinci_voicecodec", "tx", EDMA_FILTER_PARAM(0, 2) },
+ { "davinci_voicecodec", "rx", EDMA_FILTER_PARAM(0, 3) },
+ { "spi_davinci.2", "tx", EDMA_FILTER_PARAM(0, 10) },
+diff --git a/arch/arm/mm/alignment.c b/arch/arm/mm/alignment.c
+index 04b36436cbc0..6587432faf05 100644
+--- a/arch/arm/mm/alignment.c
++++ b/arch/arm/mm/alignment.c
+@@ -767,6 +767,36 @@ do_alignment_t32_to_handler(unsigned long *pinstr, struct pt_regs *regs,
+ return NULL;
+ }
+
++static int alignment_get_arm(struct pt_regs *regs, u32 *ip, unsigned long *inst)
++{
++ u32 instr = 0;
++ int fault;
++
++ if (user_mode(regs))
++ fault = get_user(instr, ip);
++ else
++ fault = probe_kernel_address(ip, instr);
++
++ *inst = __mem_to_opcode_arm(instr);
++
++ return fault;
++}
++
++static int alignment_get_thumb(struct pt_regs *regs, u16 *ip, u16 *inst)
++{
++ u16 instr = 0;
++ int fault;
++
++ if (user_mode(regs))
++ fault = get_user(instr, ip);
++ else
++ fault = probe_kernel_address(ip, instr);
++
++ *inst = __mem_to_opcode_thumb16(instr);
++
++ return fault;
++}
++
+ static int
+ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
+ {
+@@ -774,10 +804,10 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
+ unsigned long instr = 0, instrptr;
+ int (*handler)(unsigned long addr, unsigned long instr, struct pt_regs *regs);
+ unsigned int type;
+- unsigned int fault;
+ u16 tinstr = 0;
+ int isize = 4;
+ int thumb2_32b = 0;
++ int fault;
+
+ if (interrupts_enabled(regs))
+ local_irq_enable();
+@@ -786,15 +816,14 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
+
+ if (thumb_mode(regs)) {
+ u16 *ptr = (u16 *)(instrptr & ~1);
+- fault = probe_kernel_address(ptr, tinstr);
+- tinstr = __mem_to_opcode_thumb16(tinstr);
++
++ fault = alignment_get_thumb(regs, ptr, &tinstr);
+ if (!fault) {
+ if (cpu_architecture() >= CPU_ARCH_ARMv7 &&
+ IS_T32(tinstr)) {
+ /* Thumb-2 32-bit */
+- u16 tinst2 = 0;
+- fault = probe_kernel_address(ptr + 1, tinst2);
+- tinst2 = __mem_to_opcode_thumb16(tinst2);
++ u16 tinst2;
++ fault = alignment_get_thumb(regs, ptr + 1, &tinst2);
+ instr = __opcode_thumb32_compose(tinstr, tinst2);
+ thumb2_32b = 1;
+ } else {
+@@ -803,8 +832,7 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
+ }
+ }
+ } else {
+- fault = probe_kernel_address((void *)instrptr, instr);
+- instr = __mem_to_opcode_arm(instr);
++ fault = alignment_get_arm(regs, (void *)instrptr, &instr);
+ }
+
+ if (fault) {
+diff --git a/arch/arm/mm/proc-v7m.S b/arch/arm/mm/proc-v7m.S
+index 1448f144e7fb..1a49d503eafc 100644
+--- a/arch/arm/mm/proc-v7m.S
++++ b/arch/arm/mm/proc-v7m.S
+@@ -132,13 +132,11 @@ __v7m_setup_cont:
+ dsb
+ mov r6, lr @ save LR
+ ldr sp, =init_thread_union + THREAD_START_SP
+- stmia sp, {r0-r3, r12}
+ cpsie i
+ svc #0
+ 1: cpsid i
+- ldr r0, =exc_ret
+- orr lr, lr, #EXC_RET_THREADMODE_PROCESSSTACK
+- str lr, [r0]
++ /* Calculate exc_ret */
++ orr r10, lr, #EXC_RET_THREADMODE_PROCESSSTACK
+ ldmia sp, {r0-r3, r12}
+ str r5, [r12, #11 * 4] @ restore the original SVC vector entry
+ mov lr, r6 @ restore LR
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-pine64-plus.dts b/arch/arm64/boot/dts/allwinner/sun50i-a64-pine64-plus.dts
+index 24f1aac366d6..d5b6e8159a33 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-pine64-plus.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-pine64-plus.dts
+@@ -63,3 +63,12 @@
+ reg = <1>;
+ };
+ };
++
++&reg_dc1sw {
++ /*
++ * Ethernet PHY needs 30ms to properly power up and some more
++ * to initialize. 100ms should be plenty of time to finish
++ * whole process.
++ */
++ regulator-enable-ramp-delay = <100000>;
++};
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine-baseboard.dts b/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine-baseboard.dts
+index e6fb9683f213..25099202c52c 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine-baseboard.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine-baseboard.dts
+@@ -159,6 +159,12 @@
+ };
+
+ &reg_dc1sw {
++ /*
++ * Ethernet PHY needs 30ms to properly power up and some more
++ * to initialize. 100ms should be plenty of time to finish
++ * whole process.
++ */
++ regulator-enable-ramp-delay = <100000>;
+ regulator-name = "vcc-phy";
+ };
+
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi
+index 9cc9bdde81ac..cd92f546c483 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi
+@@ -142,15 +142,6 @@
+ clock-output-names = "ext-osc32k";
+ };
+
+- pmu {
+- compatible = "arm,cortex-a53-pmu";
+- interrupts = <GIC_SPI 152 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 153 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 154 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 155 IRQ_TYPE_LEVEL_HIGH>;
+- interrupt-affinity = <&cpu0>, <&cpu1>, <&cpu2>, <&cpu3>;
+- };
+-
+ psci {
+ compatible = "arm,psci-0.2";
+ method = "smc";
+diff --git a/arch/arm64/boot/dts/broadcom/stingray/stingray-pinctrl.dtsi b/arch/arm64/boot/dts/broadcom/stingray/stingray-pinctrl.dtsi
+index 8a3a770e8f2c..56789ccf9454 100644
+--- a/arch/arm64/boot/dts/broadcom/stingray/stingray-pinctrl.dtsi
++++ b/arch/arm64/boot/dts/broadcom/stingray/stingray-pinctrl.dtsi
+@@ -42,13 +42,14 @@
+
+ pinmux: pinmux@14029c {
+ compatible = "pinctrl-single";
+- reg = <0x0014029c 0x250>;
++ reg = <0x0014029c 0x26c>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ pinctrl-single,register-width = <32>;
+ pinctrl-single,function-mask = <0xf>;
+ pinctrl-single,gpio-range = <
+- &range 0 154 MODE_GPIO
++ &range 0 91 MODE_GPIO
++ &range 95 60 MODE_GPIO
+ >;
+ range: gpio-range {
+ #pinctrl-single,gpio-range-cells = <3>;
+diff --git a/arch/arm64/boot/dts/broadcom/stingray/stingray.dtsi b/arch/arm64/boot/dts/broadcom/stingray/stingray.dtsi
+index 71e2e34400d4..0098dfdef96c 100644
+--- a/arch/arm64/boot/dts/broadcom/stingray/stingray.dtsi
++++ b/arch/arm64/boot/dts/broadcom/stingray/stingray.dtsi
+@@ -464,8 +464,7 @@
+ <&pinmux 108 16 27>,
+ <&pinmux 135 77 6>,
+ <&pinmux 141 67 4>,
+- <&pinmux 145 149 6>,
+- <&pinmux 151 91 4>;
++ <&pinmux 145 149 6>;
+ };
+
+ i2c1: i2c@e0000 {
+diff --git a/arch/arm64/boot/dts/freescale/fsl-lx2160a.dtsi b/arch/arm64/boot/dts/freescale/fsl-lx2160a.dtsi
+index e6fdba39453c..228ab83037d0 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-lx2160a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-lx2160a.dtsi
+@@ -33,7 +33,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster0_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@1 {
+@@ -49,7 +49,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster0_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@100 {
+@@ -65,7 +65,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster1_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@101 {
+@@ -81,7 +81,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster1_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@200 {
+@@ -97,7 +97,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster2_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@201 {
+@@ -113,7 +113,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster2_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@300 {
+@@ -129,7 +129,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster3_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@301 {
+@@ -145,7 +145,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster3_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@400 {
+@@ -161,7 +161,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster4_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@401 {
+@@ -177,7 +177,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster4_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@500 {
+@@ -193,7 +193,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster5_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@501 {
+@@ -209,7 +209,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster5_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@600 {
+@@ -225,7 +225,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster6_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@601 {
+@@ -241,7 +241,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster6_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@700 {
+@@ -257,7 +257,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster7_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@701 {
+@@ -273,7 +273,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster7_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cluster0_l2: l2-cache0 {
+@@ -340,9 +340,9 @@
+ cache-level = <2>;
+ };
+
+- cpu_pw20: cpu-pw20 {
++ cpu_pw15: cpu-pw15 {
+ compatible = "arm,idle-state";
+- idle-state-name = "PW20";
++ idle-state-name = "PW15";
+ arm,psci-suspend-param = <0x0>;
+ entry-latency-us = <2000>;
+ exit-latency-us = <2000>;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm.dtsi b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
+index 232a7412755a..0d0a6543e5db 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
+@@ -650,7 +650,7 @@
+ compatible = "fsl,imx8mm-usdhc", "fsl,imx7d-usdhc";
+ reg = <0x30b40000 0x10000>;
+ interrupts = <GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clk IMX8MM_CLK_DUMMY>,
++ clocks = <&clk IMX8MM_CLK_IPG_ROOT>,
+ <&clk IMX8MM_CLK_NAND_USDHC_BUS>,
+ <&clk IMX8MM_CLK_USDHC1_ROOT>;
+ clock-names = "ipg", "ahb", "per";
+@@ -666,7 +666,7 @@
+ compatible = "fsl,imx8mm-usdhc", "fsl,imx7d-usdhc";
+ reg = <0x30b50000 0x10000>;
+ interrupts = <GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clk IMX8MM_CLK_DUMMY>,
++ clocks = <&clk IMX8MM_CLK_IPG_ROOT>,
+ <&clk IMX8MM_CLK_NAND_USDHC_BUS>,
+ <&clk IMX8MM_CLK_USDHC2_ROOT>;
+ clock-names = "ipg", "ahb", "per";
+@@ -680,7 +680,7 @@
+ compatible = "fsl,imx8mm-usdhc", "fsl,imx7d-usdhc";
+ reg = <0x30b60000 0x10000>;
+ interrupts = <GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clk IMX8MM_CLK_DUMMY>,
++ clocks = <&clk IMX8MM_CLK_IPG_ROOT>,
+ <&clk IMX8MM_CLK_NAND_USDHC_BUS>,
+ <&clk IMX8MM_CLK_USDHC3_ROOT>;
+ clock-names = "ipg", "ahb", "per";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq-zii-ultra.dtsi b/arch/arm64/boot/dts/freescale/imx8mq-zii-ultra.dtsi
+index 7a1706f969f0..3faa652fdf20 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq-zii-ultra.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mq-zii-ultra.dtsi
+@@ -101,8 +101,8 @@
+ regulator-min-microvolt = <900000>;
+ regulator-max-microvolt = <1000000>;
+ gpios = <&gpio3 19 GPIO_ACTIVE_HIGH>;
+- states = <1000000 0x0
+- 900000 0x1>;
++ states = <1000000 0x1
++ 900000 0x0>;
+ regulator-always-on;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq.dtsi b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+index d1f4eb197af2..32c270c4c22b 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+@@ -782,7 +782,7 @@
+ "fsl,imx7d-usdhc";
+ reg = <0x30b40000 0x10000>;
+ interrupts = <GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clk IMX8MQ_CLK_DUMMY>,
++ clocks = <&clk IMX8MQ_CLK_IPG_ROOT>,
+ <&clk IMX8MQ_CLK_NAND_USDHC_BUS>,
+ <&clk IMX8MQ_CLK_USDHC1_ROOT>;
+ clock-names = "ipg", "ahb", "per";
+@@ -799,7 +799,7 @@
+ "fsl,imx7d-usdhc";
+ reg = <0x30b50000 0x10000>;
+ interrupts = <GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clk IMX8MQ_CLK_DUMMY>,
++ clocks = <&clk IMX8MQ_CLK_IPG_ROOT>,
+ <&clk IMX8MQ_CLK_NAND_USDHC_BUS>,
+ <&clk IMX8MQ_CLK_USDHC2_ROOT>;
+ clock-names = "ipg", "ahb", "per";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-hugsun-x99.dts b/arch/arm64/boot/dts/rockchip/rk3399-hugsun-x99.dts
+index 0d1f5f9a0de9..c133e8d64b2a 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-hugsun-x99.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-hugsun-x99.dts
+@@ -644,7 +644,7 @@
+ status = "okay";
+
+ u2phy0_host: host-port {
+- phy-supply = <&vcc5v0_host>;
++ phy-supply = <&vcc5v0_typec>;
+ status = "okay";
+ };
+
+@@ -712,7 +712,7 @@
+
+ &usbdrd_dwc3_0 {
+ status = "okay";
+- dr_mode = "otg";
++ dr_mode = "host";
+ };
+
+ &usbdrd3_1 {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-rockpro64.dts b/arch/arm64/boot/dts/rockchip/rk3399-rockpro64.dts
+index eb5594062006..99d65d2fca5e 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-rockpro64.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-rockpro64.dts
+@@ -166,7 +166,7 @@
+ regulator-always-on;
+ regulator-boot-on;
+ regulator-min-microvolt = <800000>;
+- regulator-max-microvolt = <1400000>;
++ regulator-max-microvolt = <1700000>;
+ vin-supply = <&vcc5v0_sys>;
+ };
+ };
+@@ -240,8 +240,8 @@
+ rk808: pmic@1b {
+ compatible = "rockchip,rk808";
+ reg = <0x1b>;
+- interrupt-parent = <&gpio1>;
+- interrupts = <21 IRQ_TYPE_LEVEL_LOW>;
++ interrupt-parent = <&gpio3>;
++ interrupts = <10 IRQ_TYPE_LEVEL_LOW>;
+ #clock-cells = <1>;
+ clock-output-names = "xin32k", "rk808-clkout2";
+ pinctrl-names = "default";
+@@ -567,7 +567,7 @@
+
+ pmic {
+ pmic_int_l: pmic-int-l {
+- rockchip,pins = <1 RK_PC5 RK_FUNC_GPIO &pcfg_pull_up>;
++ rockchip,pins = <3 RK_PB2 RK_FUNC_GPIO &pcfg_pull_up>;
+ };
+
+ vsel1_gpio: vsel1-gpio {
+@@ -613,7 +613,6 @@
+
+ &sdmmc {
+ bus-width = <4>;
+- cap-mmc-highspeed;
+ cap-sd-highspeed;
+ cd-gpios = <&gpio0 7 GPIO_ACTIVE_LOW>;
+ disable-wp;
+@@ -625,8 +624,7 @@
+
+ &sdhci {
+ bus-width = <8>;
+- mmc-hs400-1_8v;
+- mmc-hs400-enhanced-strobe;
++ mmc-hs200-1_8v;
+ non-removable;
+ status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+index ca70ff73f171..38c75fb3f232 100644
+--- a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+@@ -42,7 +42,7 @@
+ */
+ interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
+
+- gic_its: gic-its@18200000 {
++ gic_its: gic-its@1820000 {
+ compatible = "arm,gic-v3-its";
+ reg = <0x00 0x01820000 0x00 0x10000>;
+ socionext,synquacer-pre-its = <0x1000000 0x400000>;
+diff --git a/arch/mips/bcm63xx/prom.c b/arch/mips/bcm63xx/prom.c
+index 77a836e661c9..df69eaa453a1 100644
+--- a/arch/mips/bcm63xx/prom.c
++++ b/arch/mips/bcm63xx/prom.c
+@@ -84,7 +84,7 @@ void __init prom_init(void)
+ * Here we will start up CPU1 in the background and ask it to
+ * reconfigure itself then go back to sleep.
+ */
+- memcpy((void *)0xa0000200, &bmips_smp_movevec, 0x20);
++ memcpy((void *)0xa0000200, bmips_smp_movevec, 0x20);
+ __sync();
+ set_c0_cause(C_SW0);
+ cpumask_set_cpu(1, &bmips_booted_mask);
+diff --git a/arch/mips/include/asm/bmips.h b/arch/mips/include/asm/bmips.h
+index bf6a8afd7ad2..581a6a3c66e4 100644
+--- a/arch/mips/include/asm/bmips.h
++++ b/arch/mips/include/asm/bmips.h
+@@ -75,11 +75,11 @@ static inline int register_bmips_smp_ops(void)
+ #endif
+ }
+
+-extern char bmips_reset_nmi_vec;
+-extern char bmips_reset_nmi_vec_end;
+-extern char bmips_smp_movevec;
+-extern char bmips_smp_int_vec;
+-extern char bmips_smp_int_vec_end;
++extern char bmips_reset_nmi_vec[];
++extern char bmips_reset_nmi_vec_end[];
++extern char bmips_smp_movevec[];
++extern char bmips_smp_int_vec[];
++extern char bmips_smp_int_vec_end[];
+
+ extern int bmips_smp_enabled;
+ extern int bmips_cpu_offset;
+diff --git a/arch/mips/kernel/smp-bmips.c b/arch/mips/kernel/smp-bmips.c
+index 76fae9b79f13..712c15de6ab9 100644
+--- a/arch/mips/kernel/smp-bmips.c
++++ b/arch/mips/kernel/smp-bmips.c
+@@ -464,10 +464,10 @@ static void bmips_wr_vec(unsigned long dst, char *start, char *end)
+
+ static inline void bmips_nmi_handler_setup(void)
+ {
+- bmips_wr_vec(BMIPS_NMI_RESET_VEC, &bmips_reset_nmi_vec,
+- &bmips_reset_nmi_vec_end);
+- bmips_wr_vec(BMIPS_WARM_RESTART_VEC, &bmips_smp_int_vec,
+- &bmips_smp_int_vec_end);
++ bmips_wr_vec(BMIPS_NMI_RESET_VEC, bmips_reset_nmi_vec,
++ bmips_reset_nmi_vec_end);
++ bmips_wr_vec(BMIPS_WARM_RESTART_VEC, bmips_smp_int_vec,
++ bmips_smp_int_vec_end);
+ }
+
+ struct reset_vec_info {
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 9650777d0aaf..5f9d12ce91e5 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -351,17 +351,16 @@ static enum blk_eh_timer_return nbd_xmit_timeout(struct request *req,
+ struct nbd_device *nbd = cmd->nbd;
+ struct nbd_config *config;
+
++ if (!mutex_trylock(&cmd->lock))
++ return BLK_EH_RESET_TIMER;
++
+ if (!refcount_inc_not_zero(&nbd->config_refs)) {
+ cmd->status = BLK_STS_TIMEOUT;
++ mutex_unlock(&cmd->lock);
+ goto done;
+ }
+ config = nbd->config;
+
+- if (!mutex_trylock(&cmd->lock)) {
+- nbd_config_put(nbd);
+- return BLK_EH_RESET_TIMER;
+- }
+-
+ if (config->num_connections > 1) {
+ dev_err_ratelimited(nbd_to_dev(nbd),
+ "Connection timed out, retrying (%d/%d alive)\n",
+@@ -674,6 +673,12 @@ static struct nbd_cmd *nbd_read_stat(struct nbd_device *nbd, int index)
+ ret = -ENOENT;
+ goto out;
+ }
++ if (cmd->status != BLK_STS_OK) {
++ dev_err(disk_to_dev(nbd->disk), "Command already handled %p\n",
++ req);
++ ret = -ENOENT;
++ goto out;
++ }
+ if (test_bit(NBD_CMD_REQUEUED, &cmd->flags)) {
+ dev_err(disk_to_dev(nbd->disk), "Raced with timeout on req %p\n",
+ req);
+@@ -755,7 +760,10 @@ static bool nbd_clear_req(struct request *req, void *data, bool reserved)
+ {
+ struct nbd_cmd *cmd = blk_mq_rq_to_pdu(req);
+
++ mutex_lock(&cmd->lock);
+ cmd->status = BLK_STS_IOERR;
++ mutex_unlock(&cmd->lock);
++
+ blk_mq_complete_request(req);
+ return true;
+ }
+diff --git a/drivers/crypto/chelsio/chtls/chtls_cm.c b/drivers/crypto/chelsio/chtls/chtls_cm.c
+index 774d991d7cca..aca75237bbcf 100644
+--- a/drivers/crypto/chelsio/chtls/chtls_cm.c
++++ b/drivers/crypto/chelsio/chtls/chtls_cm.c
+@@ -1297,7 +1297,7 @@ static void make_established(struct sock *sk, u32 snd_isn, unsigned int opt)
+ tp->write_seq = snd_isn;
+ tp->snd_nxt = snd_isn;
+ tp->snd_una = snd_isn;
+- inet_sk(sk)->inet_id = tp->write_seq ^ jiffies;
++ inet_sk(sk)->inet_id = prandom_u32();
+ assign_rxopt(sk, opt);
+
+ if (tp->rcv_wnd > (RCV_BUFSIZ_M << 10))
+diff --git a/drivers/crypto/chelsio/chtls/chtls_io.c b/drivers/crypto/chelsio/chtls/chtls_io.c
+index 551bca6fef24..f382c2b23d75 100644
+--- a/drivers/crypto/chelsio/chtls/chtls_io.c
++++ b/drivers/crypto/chelsio/chtls/chtls_io.c
+@@ -1701,7 +1701,7 @@ int chtls_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ return peekmsg(sk, msg, len, nonblock, flags);
+
+ if (sk_can_busy_loop(sk) &&
+- skb_queue_empty(&sk->sk_receive_queue) &&
++ skb_queue_empty_lockless(&sk->sk_receive_queue) &&
+ sk->sk_state == TCP_ESTABLISHED)
+ sk_busy_loop(sk, nonblock);
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
+index 61e38e43ad1d..85b0515c0fdc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
+@@ -140,7 +140,12 @@ int amdgpu_bo_list_create(struct amdgpu_device *adev, struct drm_file *filp,
+ return 0;
+
+ error_free:
+- while (i--) {
++ for (i = 0; i < last_entry; ++i) {
++ struct amdgpu_bo *bo = ttm_to_amdgpu_bo(array[i].tv.bo);
++
++ amdgpu_bo_unref(&bo);
++ }
++ for (i = first_userptr; i < num_entries; ++i) {
+ struct amdgpu_bo *bo = ttm_to_amdgpu_bo(array[i].tv.bo);
+
+ amdgpu_bo_unref(&bo);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index bea6f298dfdc..0ff786dec8c4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -421,7 +421,8 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
+ .interruptible = (bp->type != ttm_bo_type_kernel),
+ .no_wait_gpu = false,
+ .resv = bp->resv,
+- .flags = TTM_OPT_FLAG_ALLOW_RES_EVICT
++ .flags = bp->type != ttm_bo_type_kernel ?
++ TTM_OPT_FLAG_ALLOW_RES_EVICT : 0
+ };
+ struct amdgpu_bo *bo;
+ unsigned long page_align, size = bp->size;
+diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_kms.c b/drivers/gpu/drm/arm/display/komeda/komeda_kms.c
+index 69d9e26c60c8..9e110d51dc1f 100644
+--- a/drivers/gpu/drm/arm/display/komeda/komeda_kms.c
++++ b/drivers/gpu/drm/arm/display/komeda/komeda_kms.c
+@@ -85,7 +85,8 @@ static void komeda_kms_commit_tail(struct drm_atomic_state *old_state)
+
+ drm_atomic_helper_commit_modeset_disables(dev, old_state);
+
+- drm_atomic_helper_commit_planes(dev, old_state, 0);
++ drm_atomic_helper_commit_planes(dev, old_state,
++ DRM_PLANE_COMMIT_ACTIVE_ONLY);
+
+ drm_atomic_helper_commit_modeset_enables(dev, old_state);
+
+diff --git a/drivers/i2c/busses/i2c-aspeed.c b/drivers/i2c/busses/i2c-aspeed.c
+index fa66951b05d0..7b098ff5f5dd 100644
+--- a/drivers/i2c/busses/i2c-aspeed.c
++++ b/drivers/i2c/busses/i2c-aspeed.c
+@@ -108,6 +108,12 @@
+ #define ASPEED_I2CD_S_TX_CMD BIT(2)
+ #define ASPEED_I2CD_M_TX_CMD BIT(1)
+ #define ASPEED_I2CD_M_START_CMD BIT(0)
++#define ASPEED_I2CD_MASTER_CMDS_MASK \
++ (ASPEED_I2CD_M_STOP_CMD | \
++ ASPEED_I2CD_M_S_RX_CMD_LAST | \
++ ASPEED_I2CD_M_RX_CMD | \
++ ASPEED_I2CD_M_TX_CMD | \
++ ASPEED_I2CD_M_START_CMD)
+
+ /* 0x18 : I2CD Slave Device Address Register */
+ #define ASPEED_I2CD_DEV_ADDR_MASK GENMASK(6, 0)
+@@ -336,18 +342,19 @@ static void aspeed_i2c_do_start(struct aspeed_i2c_bus *bus)
+ struct i2c_msg *msg = &bus->msgs[bus->msgs_index];
+ u8 slave_addr = i2c_8bit_addr_from_msg(msg);
+
+- bus->master_state = ASPEED_I2C_MASTER_START;
+-
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+ /*
+ * If it's requested in the middle of a slave session, set the master
+ * state to 'pending' then H/W will continue handling this master
+ * command when the bus comes back to the idle state.
+ */
+- if (bus->slave_state != ASPEED_I2C_SLAVE_INACTIVE)
++ if (bus->slave_state != ASPEED_I2C_SLAVE_INACTIVE) {
+ bus->master_state = ASPEED_I2C_MASTER_PENDING;
++ return;
++ }
+ #endif /* CONFIG_I2C_SLAVE */
+
++ bus->master_state = ASPEED_I2C_MASTER_START;
+ bus->buf_index = 0;
+
+ if (msg->flags & I2C_M_RD) {
+@@ -422,20 +429,6 @@ static u32 aspeed_i2c_master_irq(struct aspeed_i2c_bus *bus, u32 irq_status)
+ }
+ }
+
+-#if IS_ENABLED(CONFIG_I2C_SLAVE)
+- /*
+- * A pending master command will be started by H/W when the bus comes
+- * back to idle state after completing a slave operation so change the
+- * master state from 'pending' to 'start' at here if slave is inactive.
+- */
+- if (bus->master_state == ASPEED_I2C_MASTER_PENDING) {
+- if (bus->slave_state != ASPEED_I2C_SLAVE_INACTIVE)
+- goto out_no_complete;
+-
+- bus->master_state = ASPEED_I2C_MASTER_START;
+- }
+-#endif /* CONFIG_I2C_SLAVE */
+-
+ /* Master is not currently active, irq was for someone else. */
+ if (bus->master_state == ASPEED_I2C_MASTER_INACTIVE ||
+ bus->master_state == ASPEED_I2C_MASTER_PENDING)
+@@ -462,11 +455,15 @@ static u32 aspeed_i2c_master_irq(struct aspeed_i2c_bus *bus, u32 irq_status)
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+ /*
+ * If a peer master starts a xfer immediately after it queues a
+- * master command, change its state to 'pending' then H/W will
+- * continue the queued master xfer just after completing the
+- * slave mode session.
++ * master command, clear the queued master command and change
++ * its state to 'pending'. To simplify handling of pending
++ * cases, it uses S/W solution instead of H/W command queue
++ * handling.
+ */
+ if (unlikely(irq_status & ASPEED_I2CD_INTR_SLAVE_MATCH)) {
++ writel(readl(bus->base + ASPEED_I2C_CMD_REG) &
++ ~ASPEED_I2CD_MASTER_CMDS_MASK,
++ bus->base + ASPEED_I2C_CMD_REG);
+ bus->master_state = ASPEED_I2C_MASTER_PENDING;
+ dev_dbg(bus->dev,
+ "master goes pending due to a slave start\n");
+@@ -629,6 +626,14 @@ static irqreturn_t aspeed_i2c_bus_irq(int irq, void *dev_id)
+ irq_handled |= aspeed_i2c_master_irq(bus,
+ irq_remaining);
+ }
++
++ /*
++ * Start a pending master command at here if a slave operation is
++ * completed.
++ */
++ if (bus->master_state == ASPEED_I2C_MASTER_PENDING &&
++ bus->slave_state == ASPEED_I2C_SLAVE_INACTIVE)
++ aspeed_i2c_do_start(bus);
+ #else
+ irq_handled = aspeed_i2c_master_irq(bus, irq_remaining);
+ #endif /* CONFIG_I2C_SLAVE */
+@@ -691,6 +696,15 @@ static int aspeed_i2c_master_xfer(struct i2c_adapter *adap,
+ ASPEED_I2CD_BUS_BUSY_STS))
+ aspeed_i2c_recover_bus(bus);
+
++ /*
++ * If timed out and the state is still pending, drop the pending
++ * master command.
++ */
++ spin_lock_irqsave(&bus->lock, flags);
++ if (bus->master_state == ASPEED_I2C_MASTER_PENDING)
++ bus->master_state = ASPEED_I2C_MASTER_INACTIVE;
++ spin_unlock_irqrestore(&bus->lock, flags);
++
+ return -ETIMEDOUT;
+ }
+
+diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c
+index 29eae1bf4f86..2152ec5f535c 100644
+--- a/drivers/i2c/busses/i2c-mt65xx.c
++++ b/drivers/i2c/busses/i2c-mt65xx.c
+@@ -875,7 +875,7 @@ static irqreturn_t mtk_i2c_irq(int irqno, void *dev_id)
+
+ static u32 mtk_i2c_functionality(struct i2c_adapter *adap)
+ {
+- if (adap->quirks->flags & I2C_AQ_NO_ZERO_LEN)
++ if (i2c_check_quirks(adap, I2C_AQ_NO_ZERO_LEN))
+ return I2C_FUNC_I2C |
+ (I2C_FUNC_SMBUS_EMUL & ~I2C_FUNC_SMBUS_QUICK);
+ else
+diff --git a/drivers/i2c/busses/i2c-stm32f7.c b/drivers/i2c/busses/i2c-stm32f7.c
+index 266d1c269b83..1fac7344ae9c 100644
+--- a/drivers/i2c/busses/i2c-stm32f7.c
++++ b/drivers/i2c/busses/i2c-stm32f7.c
+@@ -305,7 +305,7 @@ struct stm32f7_i2c_dev {
+ struct regmap *regmap;
+ };
+
+-/**
++/*
+ * All these values are coming from I2C Specification, Version 6.0, 4th of
+ * April 2014.
+ *
+@@ -1192,6 +1192,8 @@ static void stm32f7_i2c_slave_start(struct stm32f7_i2c_dev *i2c_dev)
+ STM32F7_I2C_CR1_TXIE;
+ stm32f7_i2c_set_bits(base + STM32F7_I2C_CR1, mask);
+
++ /* Write 1st data byte */
++ writel_relaxed(value, base + STM32F7_I2C_TXDR);
+ } else {
+ /* Notify i2c slave that new write transfer is starting */
+ i2c_slave_event(slave, I2C_SLAVE_WRITE_REQUESTED, &value);
+@@ -1501,7 +1503,7 @@ static irqreturn_t stm32f7_i2c_isr_error(int irq, void *data)
+ void __iomem *base = i2c_dev->base;
+ struct device *dev = i2c_dev->dev;
+ struct stm32_i2c_dma *dma = i2c_dev->dma;
+- u32 mask, status;
++ u32 status;
+
+ status = readl_relaxed(i2c_dev->base + STM32F7_I2C_ISR);
+
+@@ -1526,12 +1528,15 @@ static irqreturn_t stm32f7_i2c_isr_error(int irq, void *data)
+ f7_msg->result = -EINVAL;
+ }
+
+- /* Disable interrupts */
+- if (stm32f7_i2c_is_slave_registered(i2c_dev))
+- mask = STM32F7_I2C_XFER_IRQ_MASK;
+- else
+- mask = STM32F7_I2C_ALL_IRQ_MASK;
+- stm32f7_i2c_disable_irq(i2c_dev, mask);
++ if (!i2c_dev->slave_running) {
++ u32 mask;
++ /* Disable interrupts */
++ if (stm32f7_i2c_is_slave_registered(i2c_dev))
++ mask = STM32F7_I2C_XFER_IRQ_MASK;
++ else
++ mask = STM32F7_I2C_ALL_IRQ_MASK;
++ stm32f7_i2c_disable_irq(i2c_dev, mask);
++ }
+
+ /* Disable dma */
+ if (i2c_dev->use_dma) {
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index c3a8d732805f..868c356fbf49 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -175,6 +175,22 @@ static DEFINE_IDA(its_vpeid_ida);
+ #define gic_data_rdist_rd_base() (gic_data_rdist()->rd_base)
+ #define gic_data_rdist_vlpi_base() (gic_data_rdist_rd_base() + SZ_128K)
+
++static u16 get_its_list(struct its_vm *vm)
++{
++ struct its_node *its;
++ unsigned long its_list = 0;
++
++ list_for_each_entry(its, &its_nodes, entry) {
++ if (!its->is_v4)
++ continue;
++
++ if (vm->vlpi_count[its->list_nr])
++ __set_bit(its->list_nr, &its_list);
++ }
++
++ return (u16)its_list;
++}
++
+ static struct its_collection *dev_event_to_col(struct its_device *its_dev,
+ u32 event)
+ {
+@@ -976,17 +992,15 @@ static void its_send_vmapp(struct its_node *its,
+
+ static void its_send_vmovp(struct its_vpe *vpe)
+ {
+- struct its_cmd_desc desc;
++ struct its_cmd_desc desc = {};
+ struct its_node *its;
+ unsigned long flags;
+ int col_id = vpe->col_idx;
+
+ desc.its_vmovp_cmd.vpe = vpe;
+- desc.its_vmovp_cmd.its_list = (u16)its_list_map;
+
+ if (!its_list_map) {
+ its = list_first_entry(&its_nodes, struct its_node, entry);
+- desc.its_vmovp_cmd.seq_num = 0;
+ desc.its_vmovp_cmd.col = &its->collections[col_id];
+ its_send_single_vcommand(its, its_build_vmovp_cmd, &desc);
+ return;
+@@ -1003,6 +1017,7 @@ static void its_send_vmovp(struct its_vpe *vpe)
+ raw_spin_lock_irqsave(&vmovp_lock, flags);
+
+ desc.its_vmovp_cmd.seq_num = vmovp_seq_num++;
++ desc.its_vmovp_cmd.its_list = get_its_list(vpe->its_vm);
+
+ /* Emit VMOVPs */
+ list_for_each_entry(its, &its_nodes, entry) {
+diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
+index daefc52b0ec5..7d0a12fe2714 100644
+--- a/drivers/irqchip/irq-sifive-plic.c
++++ b/drivers/irqchip/irq-sifive-plic.c
+@@ -252,8 +252,8 @@ static int __init plic_init(struct device_node *node,
+ continue;
+ }
+
+- /* skip context holes */
+- if (parent.args[0] == -1)
++ /* skip contexts other than supervisor external interrupt */
++ if (parent.args[0] != IRQ_S_EXT)
+ continue;
+
+ hartid = plic_find_hart_id(parent.np);
+diff --git a/drivers/isdn/capi/capi.c b/drivers/isdn/capi/capi.c
+index c92b405b7646..ba8619524231 100644
+--- a/drivers/isdn/capi/capi.c
++++ b/drivers/isdn/capi/capi.c
+@@ -744,7 +744,7 @@ capi_poll(struct file *file, poll_table *wait)
+
+ poll_wait(file, &(cdev->recvwait), wait);
+ mask = EPOLLOUT | EPOLLWRNORM;
+- if (!skb_queue_empty(&cdev->recvqueue))
++ if (!skb_queue_empty_lockless(&cdev->recvqueue))
+ mask |= EPOLLIN | EPOLLRDNORM;
+ return mask;
+ }
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index 907af62846ba..0721c22e2bc8 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -1808,7 +1808,6 @@ int b53_mirror_add(struct dsa_switch *ds, int port,
+ loc = B53_EG_MIR_CTL;
+
+ b53_read16(dev, B53_MGMT_PAGE, loc, &reg);
+- reg &= ~MIRROR_MASK;
+ reg |= BIT(port);
+ b53_write16(dev, B53_MGMT_PAGE, loc, reg);
+
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index 28c963a21dac..9f05bf714ba2 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -37,22 +37,11 @@ static void bcm_sf2_imp_setup(struct dsa_switch *ds, int port)
+ unsigned int i;
+ u32 reg, offset;
+
+- if (priv->type == BCM7445_DEVICE_ID)
+- offset = CORE_STS_OVERRIDE_IMP;
+- else
+- offset = CORE_STS_OVERRIDE_IMP2;
+-
+ /* Enable the port memories */
+ reg = core_readl(priv, CORE_MEM_PSM_VDD_CTRL);
+ reg &= ~P_TXQ_PSM_VDD(port);
+ core_writel(priv, reg, CORE_MEM_PSM_VDD_CTRL);
+
+- /* Enable Broadcast, Multicast, Unicast forwarding to IMP port */
+- reg = core_readl(priv, CORE_IMP_CTL);
+- reg |= (RX_BCST_EN | RX_MCST_EN | RX_UCST_EN);
+- reg &= ~(RX_DIS | TX_DIS);
+- core_writel(priv, reg, CORE_IMP_CTL);
+-
+ /* Enable forwarding */
+ core_writel(priv, SW_FWDG_EN, CORE_SWMODE);
+
+@@ -71,10 +60,27 @@ static void bcm_sf2_imp_setup(struct dsa_switch *ds, int port)
+
+ b53_brcm_hdr_setup(ds, port);
+
+- /* Force link status for IMP port */
+- reg = core_readl(priv, offset);
+- reg |= (MII_SW_OR | LINK_STS);
+- core_writel(priv, reg, offset);
++ if (port == 8) {
++ if (priv->type == BCM7445_DEVICE_ID)
++ offset = CORE_STS_OVERRIDE_IMP;
++ else
++ offset = CORE_STS_OVERRIDE_IMP2;
++
++ /* Force link status for IMP port */
++ reg = core_readl(priv, offset);
++ reg |= (MII_SW_OR | LINK_STS);
++ core_writel(priv, reg, offset);
++
++ /* Enable Broadcast, Multicast, Unicast forwarding to IMP port */
++ reg = core_readl(priv, CORE_IMP_CTL);
++ reg |= (RX_BCST_EN | RX_MCST_EN | RX_UCST_EN);
++ reg &= ~(RX_DIS | TX_DIS);
++ core_writel(priv, reg, CORE_IMP_CTL);
++ } else {
++ reg = core_readl(priv, CORE_G_PCTL_PORT(port));
++ reg &= ~(RX_DIS | TX_DIS);
++ core_writel(priv, reg, CORE_G_PCTL_PORT(port));
++ }
+ }
+
+ static void bcm_sf2_gphy_enable_set(struct dsa_switch *ds, bool enable)
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index b22196880d6d..06e2581b28ea 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -2018,6 +2018,8 @@ static void bcmgenet_link_intr_enable(struct bcmgenet_priv *priv)
+ */
+ if (priv->internal_phy) {
+ int0_enable |= UMAC_IRQ_LINK_EVENT;
++ if (GENET_IS_V1(priv) || GENET_IS_V2(priv) || GENET_IS_V3(priv))
++ int0_enable |= UMAC_IRQ_PHY_DET_R;
+ } else if (priv->ext_phy) {
+ int0_enable |= UMAC_IRQ_LINK_EVENT;
+ } else if (priv->phy_interface == PHY_INTERFACE_MODE_MOCA) {
+@@ -2616,11 +2618,14 @@ static void bcmgenet_irq_task(struct work_struct *work)
+ priv->irq0_stat = 0;
+ spin_unlock_irq(&priv->lock);
+
++ if (status & UMAC_IRQ_PHY_DET_R &&
++ priv->dev->phydev->autoneg != AUTONEG_ENABLE)
++ phy_init_hw(priv->dev->phydev);
++
+ /* Link UP/DOWN event */
+- if (status & UMAC_IRQ_LINK_EVENT) {
+- priv->dev->phydev->link = !!(status & UMAC_IRQ_LINK_UP);
++ if (status & UMAC_IRQ_LINK_EVENT)
+ phy_mac_interrupt(priv->dev->phydev);
+- }
++
+ }
+
+ /* bcmgenet_isr1: handle Rx and Tx priority queues */
+@@ -2715,7 +2720,7 @@ static irqreturn_t bcmgenet_isr0(int irq, void *dev_id)
+ }
+
+ /* all other interested interrupts handled in bottom half */
+- status &= UMAC_IRQ_LINK_EVENT;
++ status &= (UMAC_IRQ_LINK_EVENT | UMAC_IRQ_PHY_DET_R);
+ if (status) {
+ /* Save irq status for bottom-half processing. */
+ spin_lock_irqsave(&priv->lock, flags);
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
+index a4dead4ab0ed..86b528d8364c 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
+@@ -695,10 +695,10 @@ static void uld_init(struct adapter *adap, struct cxgb4_lld_info *lld)
+ lld->write_cmpl_support = adap->params.write_cmpl_support;
+ }
+
+-static void uld_attach(struct adapter *adap, unsigned int uld)
++static int uld_attach(struct adapter *adap, unsigned int uld)
+ {
+- void *handle;
+ struct cxgb4_lld_info lli;
++ void *handle;
+
+ uld_init(adap, &lli);
+ uld_queue_init(adap, uld, &lli);
+@@ -708,7 +708,7 @@ static void uld_attach(struct adapter *adap, unsigned int uld)
+ dev_warn(adap->pdev_dev,
+ "could not attach to the %s driver, error %ld\n",
+ adap->uld[uld].name, PTR_ERR(handle));
+- return;
++ return PTR_ERR(handle);
+ }
+
+ adap->uld[uld].handle = handle;
+@@ -716,22 +716,22 @@ static void uld_attach(struct adapter *adap, unsigned int uld)
+
+ if (adap->flags & CXGB4_FULL_INIT_DONE)
+ adap->uld[uld].state_change(handle, CXGB4_STATE_UP);
++
++ return 0;
+ }
+
+-/**
+- * cxgb4_register_uld - register an upper-layer driver
+- * @type: the ULD type
+- * @p: the ULD methods
++/* cxgb4_register_uld - register an upper-layer driver
++ * @type: the ULD type
++ * @p: the ULD methods
+ *
+- * Registers an upper-layer driver with this driver and notifies the ULD
+- * about any presently available devices that support its type. Returns
+- * %-EBUSY if a ULD of the same type is already registered.
++ * Registers an upper-layer driver with this driver and notifies the ULD
++ * about any presently available devices that support its type.
+ */
+ void cxgb4_register_uld(enum cxgb4_uld type,
+ const struct cxgb4_uld_info *p)
+ {
+- int ret = 0;
+ struct adapter *adap;
++ int ret = 0;
+
+ if (type >= CXGB4_ULD_MAX)
+ return;
+@@ -763,8 +763,12 @@ void cxgb4_register_uld(enum cxgb4_uld type,
+ if (ret)
+ goto free_irq;
+ adap->uld[type] = *p;
+- uld_attach(adap, type);
++ ret = uld_attach(adap, type);
++ if (ret)
++ goto free_txq;
+ continue;
++free_txq:
++ release_sge_txq_uld(adap, type);
+ free_irq:
+ if (adap->flags & CXGB4_FULL_INIT_DONE)
+ quiesce_rx_uld(adap, type);
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c
+index b3da81e90132..928bfea5457b 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c
+@@ -3791,15 +3791,11 @@ int t4_sge_alloc_eth_txq(struct adapter *adap, struct sge_eth_txq *txq,
+ * write the CIDX Updates into the Status Page at the end of the
+ * TX Queue.
+ */
+- c.autoequiqe_to_viid = htonl((dbqt
+- ? FW_EQ_ETH_CMD_AUTOEQUIQE_F
+- : FW_EQ_ETH_CMD_AUTOEQUEQE_F) |
++ c.autoequiqe_to_viid = htonl(FW_EQ_ETH_CMD_AUTOEQUEQE_F |
+ FW_EQ_ETH_CMD_VIID_V(pi->viid));
+
+ c.fetchszm_to_iqid =
+- htonl(FW_EQ_ETH_CMD_HOSTFCMODE_V(dbqt
+- ? HOSTFCMODE_INGRESS_QUEUE_X
+- : HOSTFCMODE_STATUS_PAGE_X) |
++ htonl(FW_EQ_ETH_CMD_HOSTFCMODE_V(HOSTFCMODE_STATUS_PAGE_X) |
+ FW_EQ_ETH_CMD_PCIECHN_V(pi->tx_chan) |
+ FW_EQ_ETH_CMD_FETCHRO_F | FW_EQ_ETH_CMD_IQID_V(iqid));
+
+diff --git a/drivers/net/ethernet/faraday/ftgmac100.c b/drivers/net/ethernet/faraday/ftgmac100.c
+index 030fed65393e..713dc30f9dbb 100644
+--- a/drivers/net/ethernet/faraday/ftgmac100.c
++++ b/drivers/net/ethernet/faraday/ftgmac100.c
+@@ -726,6 +726,18 @@ static netdev_tx_t ftgmac100_hard_start_xmit(struct sk_buff *skb,
+ */
+ nfrags = skb_shinfo(skb)->nr_frags;
+
++ /* Setup HW checksumming */
++ csum_vlan = 0;
++ if (skb->ip_summed == CHECKSUM_PARTIAL &&
++ !ftgmac100_prep_tx_csum(skb, &csum_vlan))
++ goto drop;
++
++ /* Add VLAN tag */
++ if (skb_vlan_tag_present(skb)) {
++ csum_vlan |= FTGMAC100_TXDES1_INS_VLANTAG;
++ csum_vlan |= skb_vlan_tag_get(skb) & 0xffff;
++ }
++
+ /* Get header len */
+ len = skb_headlen(skb);
+
+@@ -752,19 +764,6 @@ static netdev_tx_t ftgmac100_hard_start_xmit(struct sk_buff *skb,
+ if (nfrags == 0)
+ f_ctl_stat |= FTGMAC100_TXDES0_LTS;
+ txdes->txdes3 = cpu_to_le32(map);
+-
+- /* Setup HW checksumming */
+- csum_vlan = 0;
+- if (skb->ip_summed == CHECKSUM_PARTIAL &&
+- !ftgmac100_prep_tx_csum(skb, &csum_vlan))
+- goto drop;
+-
+- /* Add VLAN tag */
+- if (skb_vlan_tag_present(skb)) {
+- csum_vlan |= FTGMAC100_TXDES1_INS_VLANTAG;
+- csum_vlan |= skb_vlan_tag_get(skb) & 0xffff;
+- }
+-
+ txdes->txdes1 = cpu_to_le32(csum_vlan);
+
+ /* Next descriptor */
+diff --git a/drivers/net/ethernet/hisilicon/hip04_eth.c b/drivers/net/ethernet/hisilicon/hip04_eth.c
+index c84167447abe..f51bc0255556 100644
+--- a/drivers/net/ethernet/hisilicon/hip04_eth.c
++++ b/drivers/net/ethernet/hisilicon/hip04_eth.c
+@@ -237,6 +237,7 @@ struct hip04_priv {
+ dma_addr_t rx_phys[RX_DESC_NUM];
+ unsigned int rx_head;
+ unsigned int rx_buf_size;
++ unsigned int rx_cnt_remaining;
+
+ struct device_node *phy_node;
+ struct phy_device *phy;
+@@ -575,7 +576,6 @@ static int hip04_rx_poll(struct napi_struct *napi, int budget)
+ struct hip04_priv *priv = container_of(napi, struct hip04_priv, napi);
+ struct net_device *ndev = priv->ndev;
+ struct net_device_stats *stats = &ndev->stats;
+- unsigned int cnt = hip04_recv_cnt(priv);
+ struct rx_desc *desc;
+ struct sk_buff *skb;
+ unsigned char *buf;
+@@ -588,8 +588,8 @@ static int hip04_rx_poll(struct napi_struct *napi, int budget)
+
+ /* clean up tx descriptors */
+ tx_remaining = hip04_tx_reclaim(ndev, false);
+-
+- while (cnt && !last) {
++ priv->rx_cnt_remaining += hip04_recv_cnt(priv);
++ while (priv->rx_cnt_remaining && !last) {
+ buf = priv->rx_buf[priv->rx_head];
+ skb = build_skb(buf, priv->rx_buf_size);
+ if (unlikely(!skb)) {
+@@ -635,11 +635,13 @@ refill:
+ hip04_set_recv_desc(priv, phys);
+
+ priv->rx_head = RX_NEXT(priv->rx_head);
+- if (rx >= budget)
++ if (rx >= budget) {
++ --priv->rx_cnt_remaining;
+ goto done;
++ }
+
+- if (--cnt == 0)
+- cnt = hip04_recv_cnt(priv);
++ if (--priv->rx_cnt_remaining == 0)
++ priv->rx_cnt_remaining += hip04_recv_cnt(priv);
+ }
+
+ if (!(priv->reg_inten & RCV_INT)) {
+@@ -724,6 +726,7 @@ static int hip04_mac_open(struct net_device *ndev)
+ int i;
+
+ priv->rx_head = 0;
++ priv->rx_cnt_remaining = 0;
+ priv->tx_head = 0;
+ priv->tx_tail = 0;
+ hip04_reset_ppe(priv);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+index 48c7b70fc2c4..58a7d62b38de 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+@@ -32,6 +32,8 @@
+
+ #define HNAE3_MOD_VERSION "1.0"
+
++#define HNAE3_MIN_VECTOR_NUM 2 /* first one for misc, another for IO */
++
+ /* Device IDs */
+ #define HNAE3_DEV_ID_GE 0xA220
+ #define HNAE3_DEV_ID_25GE 0xA221
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 3fde5471e1c0..65b53ec1d9ca 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -800,6 +800,9 @@ static int hclge_query_pf_resource(struct hclge_dev *hdev)
+ hnae3_get_field(__le16_to_cpu(req->pf_intr_vector_number),
+ HCLGE_PF_VEC_NUM_M, HCLGE_PF_VEC_NUM_S);
+
++ /* nic's msix numbers is always equals to the roce's. */
++ hdev->num_nic_msi = hdev->num_roce_msi;
++
+ /* PF should have NIC vectors and Roce vectors,
+ * NIC vectors are queued before Roce vectors.
+ */
+@@ -809,6 +812,15 @@ static int hclge_query_pf_resource(struct hclge_dev *hdev)
+ hdev->num_msi =
+ hnae3_get_field(__le16_to_cpu(req->pf_intr_vector_number),
+ HCLGE_PF_VEC_NUM_M, HCLGE_PF_VEC_NUM_S);
++
++ hdev->num_nic_msi = hdev->num_msi;
++ }
++
++ if (hdev->num_nic_msi < HNAE3_MIN_VECTOR_NUM) {
++ dev_err(&hdev->pdev->dev,
++ "Just %u msi resources, not enough for pf(min:2).\n",
++ hdev->num_nic_msi);
++ return -EINVAL;
+ }
+
+ return 0;
+@@ -1394,6 +1406,10 @@ static int hclge_assign_tqp(struct hclge_vport *vport, u16 num_tqps)
+ kinfo->rss_size = min_t(u16, hdev->rss_size_max,
+ vport->alloc_tqps / hdev->tm_info.num_tc);
+
++ /* ensure one to one mapping between irq and queue at default */
++ kinfo->rss_size = min_t(u16, kinfo->rss_size,
++ (hdev->num_nic_msi - 1) / hdev->tm_info.num_tc);
++
+ return 0;
+ }
+
+@@ -2172,7 +2188,8 @@ static int hclge_init_msi(struct hclge_dev *hdev)
+ int vectors;
+ int i;
+
+- vectors = pci_alloc_irq_vectors(pdev, 1, hdev->num_msi,
++ vectors = pci_alloc_irq_vectors(pdev, HNAE3_MIN_VECTOR_NUM,
++ hdev->num_msi,
+ PCI_IRQ_MSI | PCI_IRQ_MSIX);
+ if (vectors < 0) {
+ dev_err(&pdev->dev,
+@@ -2187,6 +2204,7 @@ static int hclge_init_msi(struct hclge_dev *hdev)
+
+ hdev->num_msi = vectors;
+ hdev->num_msi_left = vectors;
++
+ hdev->base_msi_vector = pdev->irq;
+ hdev->roce_base_vector = hdev->base_msi_vector +
+ hdev->roce_base_msix_offset;
+@@ -3644,6 +3662,7 @@ static int hclge_get_vector(struct hnae3_handle *handle, u16 vector_num,
+ int alloc = 0;
+ int i, j;
+
++ vector_num = min_t(u16, hdev->num_nic_msi - 1, vector_num);
+ vector_num = min(hdev->num_msi_left, vector_num);
+
+ for (j = 0; j < vector_num; j++) {
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+index 6a12285f4c76..6dc66d3f8408 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+@@ -795,6 +795,7 @@ struct hclge_dev {
+ u32 base_msi_vector;
+ u16 *vector_status;
+ int *vector_irq;
++ u16 num_nic_msi; /* Num of nic vectors for this PF */
+ u16 num_roce_msi; /* Num of roce vectors for this PF */
+ int roce_base_vector;
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+index 3f41fa2bc414..856337705949 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+@@ -540,9 +540,16 @@ static void hclge_tm_vport_tc_info_update(struct hclge_vport *vport)
+ kinfo->rss_size = kinfo->req_rss_size;
+ } else if (kinfo->rss_size > max_rss_size ||
+ (!kinfo->req_rss_size && kinfo->rss_size < max_rss_size)) {
++ /* if user not set rss, the rss_size should compare with the
++ * valid msi numbers to ensure one to one map between tqp and
++ * irq as default.
++ */
++ if (!kinfo->req_rss_size)
++ max_rss_size = min_t(u16, max_rss_size,
++ (hdev->num_nic_msi - 1) /
++ kinfo->num_tc);
++
+ /* Set to the maximum specification value (max_rss_size). */
+- dev_info(&hdev->pdev->dev, "rss changes from %d to %d\n",
+- kinfo->rss_size, max_rss_size);
+ kinfo->rss_size = max_rss_size;
+ }
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index a13a0e101c3b..b094d4e9ba2d 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -411,6 +411,13 @@ static int hclgevf_knic_setup(struct hclgevf_dev *hdev)
+ kinfo->tqp[i] = &hdev->htqp[i].q;
+ }
+
++ /* after init the max rss_size and tqps, adjust the default tqp numbers
++ * and rss size with the actual vector numbers
++ */
++ kinfo->num_tqps = min_t(u16, hdev->num_nic_msix - 1, kinfo->num_tqps);
++ kinfo->rss_size = min_t(u16, kinfo->num_tqps / kinfo->num_tc,
++ kinfo->rss_size);
++
+ return 0;
+ }
+
+@@ -502,6 +509,7 @@ static int hclgevf_get_vector(struct hnae3_handle *handle, u16 vector_num,
+ int alloc = 0;
+ int i, j;
+
++ vector_num = min_t(u16, hdev->num_nic_msix - 1, vector_num);
+ vector_num = min(hdev->num_msi_left, vector_num);
+
+ for (j = 0; j < vector_num; j++) {
+@@ -2208,13 +2216,14 @@ static int hclgevf_init_msi(struct hclgevf_dev *hdev)
+ int vectors;
+ int i;
+
+- if (hnae3_get_bit(hdev->ae_dev->flag, HNAE3_DEV_SUPPORT_ROCE_B))
++ if (hnae3_dev_roce_supported(hdev))
+ vectors = pci_alloc_irq_vectors(pdev,
+ hdev->roce_base_msix_offset + 1,
+ hdev->num_msi,
+ PCI_IRQ_MSIX);
+ else
+- vectors = pci_alloc_irq_vectors(pdev, 1, hdev->num_msi,
++ vectors = pci_alloc_irq_vectors(pdev, HNAE3_MIN_VECTOR_NUM,
++ hdev->num_msi,
+ PCI_IRQ_MSI | PCI_IRQ_MSIX);
+
+ if (vectors < 0) {
+@@ -2230,6 +2239,7 @@ static int hclgevf_init_msi(struct hclgevf_dev *hdev)
+
+ hdev->num_msi = vectors;
+ hdev->num_msi_left = vectors;
++
+ hdev->base_msi_vector = pdev->irq;
+ hdev->roce_base_vector = pdev->irq + hdev->roce_base_msix_offset;
+
+@@ -2495,7 +2505,7 @@ static int hclgevf_query_vf_resource(struct hclgevf_dev *hdev)
+
+ req = (struct hclgevf_query_res_cmd *)desc.data;
+
+- if (hnae3_get_bit(hdev->ae_dev->flag, HNAE3_DEV_SUPPORT_ROCE_B)) {
++ if (hnae3_dev_roce_supported(hdev)) {
+ hdev->roce_base_msix_offset =
+ hnae3_get_field(__le16_to_cpu(req->msixcap_localid_ba_rocee),
+ HCLGEVF_MSIX_OFT_ROCEE_M,
+@@ -2504,6 +2514,9 @@ static int hclgevf_query_vf_resource(struct hclgevf_dev *hdev)
+ hnae3_get_field(__le16_to_cpu(req->vf_intr_vector_number),
+ HCLGEVF_VEC_NUM_M, HCLGEVF_VEC_NUM_S);
+
++ /* nic's msix numbers is always equals to the roce's. */
++ hdev->num_nic_msix = hdev->num_roce_msix;
++
+ /* VF should have NIC vectors and Roce vectors, NIC vectors
+ * are queued before Roce vectors. The offset is fixed to 64.
+ */
+@@ -2513,6 +2526,15 @@ static int hclgevf_query_vf_resource(struct hclgevf_dev *hdev)
+ hdev->num_msi =
+ hnae3_get_field(__le16_to_cpu(req->vf_intr_vector_number),
+ HCLGEVF_VEC_NUM_M, HCLGEVF_VEC_NUM_S);
++
++ hdev->num_nic_msix = hdev->num_msi;
++ }
++
++ if (hdev->num_nic_msix < HNAE3_MIN_VECTOR_NUM) {
++ dev_err(&hdev->pdev->dev,
++ "Just %u msi resources, not enough for vf(min:2).\n",
++ hdev->num_nic_msix);
++ return -EINVAL;
+ }
+
+ return 0;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+index 5a9e30998a8f..3c90cff0e43a 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+@@ -265,6 +265,7 @@ struct hclgevf_dev {
+ u16 num_msi;
+ u16 num_msi_left;
+ u16 num_msi_used;
++ u16 num_nic_msix; /* Num of nic vectors for this VF */
+ u16 num_roce_msix; /* Num of roce vectors for this VF */
+ u16 roce_base_msix_offset;
+ int roce_base_vector;
+diff --git a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
+index 4356f3a58002..1187ef1375e2 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
++++ b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
+@@ -471,12 +471,31 @@ void mlx4_init_quotas(struct mlx4_dev *dev)
+ priv->mfunc.master.res_tracker.res_alloc[RES_MPT].quota[pf];
+ }
+
+-static int get_max_gauranteed_vfs_counter(struct mlx4_dev *dev)
++static int
++mlx4_calc_res_counter_guaranteed(struct mlx4_dev *dev,
++ struct resource_allocator *res_alloc,
++ int vf)
+ {
+- /* reduce the sink counter */
+- return (dev->caps.max_counters - 1 -
+- (MLX4_PF_COUNTERS_PER_PORT * MLX4_MAX_PORTS))
+- / MLX4_MAX_PORTS;
++ struct mlx4_active_ports actv_ports;
++ int ports, counters_guaranteed;
++
++ /* For master, only allocate according to the number of phys ports */
++ if (vf == mlx4_master_func_num(dev))
++ return MLX4_PF_COUNTERS_PER_PORT * dev->caps.num_ports;
++
++ /* calculate real number of ports for the VF */
++ actv_ports = mlx4_get_active_ports(dev, vf);
++ ports = bitmap_weight(actv_ports.ports, dev->caps.num_ports);
++ counters_guaranteed = ports * MLX4_VF_COUNTERS_PER_PORT;
++
++ /* If we do not have enough counters for this VF, do not
++ * allocate any for it. '-1' to reduce the sink counter.
++ */
++ if ((res_alloc->res_reserved + counters_guaranteed) >
++ (dev->caps.max_counters - 1))
++ return 0;
++
++ return counters_guaranteed;
+ }
+
+ int mlx4_init_resource_tracker(struct mlx4_dev *dev)
+@@ -484,7 +503,6 @@ int mlx4_init_resource_tracker(struct mlx4_dev *dev)
+ struct mlx4_priv *priv = mlx4_priv(dev);
+ int i, j;
+ int t;
+- int max_vfs_guarantee_counter = get_max_gauranteed_vfs_counter(dev);
+
+ priv->mfunc.master.res_tracker.slave_list =
+ kcalloc(dev->num_slaves, sizeof(struct slave_list),
+@@ -603,16 +621,8 @@ int mlx4_init_resource_tracker(struct mlx4_dev *dev)
+ break;
+ case RES_COUNTER:
+ res_alloc->quota[t] = dev->caps.max_counters;
+- if (t == mlx4_master_func_num(dev))
+- res_alloc->guaranteed[t] =
+- MLX4_PF_COUNTERS_PER_PORT *
+- MLX4_MAX_PORTS;
+- else if (t <= max_vfs_guarantee_counter)
+- res_alloc->guaranteed[t] =
+- MLX4_VF_COUNTERS_PER_PORT *
+- MLX4_MAX_PORTS;
+- else
+- res_alloc->guaranteed[t] = 0;
++ res_alloc->guaranteed[t] =
++ mlx4_calc_res_counter_guaranteed(dev, res_alloc, t);
+ break;
+ default:
+ break;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+index a6a52806be45..310f65ef5446 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+@@ -90,15 +90,19 @@ static int mlx5e_route_lookup_ipv4(struct mlx5e_priv *priv,
+ if (ret)
+ return ret;
+
+- if (mlx5_lag_is_multipath(mdev) && rt->rt_gw_family != AF_INET)
++ if (mlx5_lag_is_multipath(mdev) && rt->rt_gw_family != AF_INET) {
++ ip_rt_put(rt);
+ return -ENETUNREACH;
++ }
+ #else
+ return -EOPNOTSUPP;
+ #endif
+
+ ret = get_route_and_out_devs(priv, rt->dst.dev, route_dev, out_dev);
+- if (ret < 0)
++ if (ret < 0) {
++ ip_rt_put(rt);
+ return ret;
++ }
+
+ if (!(*out_ttl))
+ *out_ttl = ip4_dst_hoplimit(&rt->dst);
+@@ -142,8 +146,10 @@ static int mlx5e_route_lookup_ipv6(struct mlx5e_priv *priv,
+ *out_ttl = ip6_dst_hoplimit(dst);
+
+ ret = get_route_and_out_devs(priv, dst->dev, route_dev, out_dev);
+- if (ret < 0)
++ if (ret < 0) {
++ dst_release(dst);
+ return ret;
++ }
+ #else
+ return -EOPNOTSUPP;
+ #endif
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index 20e628c907e5..a9bb8e2b34a7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -1021,7 +1021,7 @@ static bool ext_link_mode_requested(const unsigned long *adver)
+ {
+ #define MLX5E_MIN_PTYS_EXT_LINK_MODE_BIT ETHTOOL_LINK_MODE_50000baseKR_Full_BIT
+ int size = __ETHTOOL_LINK_MODE_MASK_NBITS - MLX5E_MIN_PTYS_EXT_LINK_MODE_BIT;
+- __ETHTOOL_DECLARE_LINK_MODE_MASK(modes);
++ __ETHTOOL_DECLARE_LINK_MODE_MASK(modes) = {0,};
+
+ bitmap_set(modes, MLX5E_MIN_PTYS_EXT_LINK_MODE_BIT, size);
+ return bitmap_intersects(modes, adver, __ETHTOOL_LINK_MODE_MASK_NBITS);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index ac6e586d403d..fb139f8b9acf 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -1367,8 +1367,11 @@ int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget)
+ if (unlikely(!test_bit(MLX5E_RQ_STATE_ENABLED, &rq->state)))
+ return 0;
+
+- if (rq->cqd.left)
++ if (rq->cqd.left) {
+ work_done += mlx5e_decompress_cqes_cont(rq, cqwq, 0, budget);
++ if (rq->cqd.left || work_done >= budget)
++ goto out;
++ }
+
+ cqe = mlx5_cqwq_get_cqe(cqwq);
+ if (!cqe) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c b/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
+index 840ec945ccba..bbff8d8ded76 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
+@@ -35,6 +35,7 @@
+ #include <linux/udp.h>
+ #include <net/udp.h>
+ #include "en.h"
++#include "en/port.h"
+
+ enum {
+ MLX5E_ST_LINK_STATE,
+@@ -80,22 +81,12 @@ static int mlx5e_test_link_state(struct mlx5e_priv *priv)
+
+ static int mlx5e_test_link_speed(struct mlx5e_priv *priv)
+ {
+- u32 out[MLX5_ST_SZ_DW(ptys_reg)];
+- u32 eth_proto_oper;
+- int i;
++ u32 speed;
+
+ if (!netif_carrier_ok(priv->netdev))
+ return 1;
+
+- if (mlx5_query_port_ptys(priv->mdev, out, sizeof(out), MLX5_PTYS_EN, 1))
+- return 1;
+-
+- eth_proto_oper = MLX5_GET(ptys_reg, out, eth_proto_oper);
+- for (i = 0; i < MLX5E_LINK_MODES_NUMBER; i++) {
+- if (eth_proto_oper & MLX5E_PROT_MASK(i))
+- return 0;
+- }
+- return 1;
++ return mlx5e_port_linkspeed(priv->mdev, &speed);
+ }
+
+ struct mlx5ehdr {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index 0323fd078271..35945cdd0a61 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -285,7 +285,6 @@ mlx5_eswitch_add_fwd_rule(struct mlx5_eswitch *esw,
+
+ mlx5_eswitch_set_rule_source_port(esw, spec, attr);
+
+- spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS;
+ if (attr->outer_match_level != MLX5_MATCH_NONE)
+ spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS;
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
+index 1d55a324a17e..7879e1746297 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
+@@ -177,22 +177,32 @@ mlx5_eswitch_termtbl_actions_move(struct mlx5_flow_act *src,
+ memset(&src->vlan[1], 0, sizeof(src->vlan[1]));
+ }
+
++static bool mlx5_eswitch_offload_is_uplink_port(const struct mlx5_eswitch *esw,
++ const struct mlx5_flow_spec *spec)
++{
++ u32 port_mask, port_value;
++
++ if (MLX5_CAP_ESW_FLOWTABLE(esw->dev, flow_source))
++ return spec->flow_context.flow_source == MLX5_VPORT_UPLINK;
++
++ port_mask = MLX5_GET(fte_match_param, spec->match_criteria,
++ misc_parameters.source_port);
++ port_value = MLX5_GET(fte_match_param, spec->match_value,
++ misc_parameters.source_port);
++ return (port_mask & port_value & 0xffff) == MLX5_VPORT_UPLINK;
++}
++
+ bool
+ mlx5_eswitch_termtbl_required(struct mlx5_eswitch *esw,
+ struct mlx5_flow_act *flow_act,
+ struct mlx5_flow_spec *spec)
+ {
+- u32 port_mask = MLX5_GET(fte_match_param, spec->match_criteria,
+- misc_parameters.source_port);
+- u32 port_value = MLX5_GET(fte_match_param, spec->match_value,
+- misc_parameters.source_port);
+-
+ if (!MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, termination_table))
+ return false;
+
+ /* push vlan on RX */
+ return (flow_act->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH) &&
+- ((port_mask & port_value) == MLX5_VPORT_UPLINK);
++ mlx5_eswitch_offload_is_uplink_port(esw, spec);
+ }
+
+ struct mlx5_flow_handle *
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c
+index 17ceac7505e5..b94cdbd7bb18 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
+@@ -1128,7 +1128,7 @@ __mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info,
+ if (err)
+ goto err_thermal_init;
+
+- if (mlxsw_driver->params_register && !reload)
++ if (mlxsw_driver->params_register)
+ devlink_params_publish(devlink);
+
+ return 0;
+@@ -1201,7 +1201,7 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core,
+ return;
+ }
+
+- if (mlxsw_core->driver->params_unregister && !reload)
++ if (mlxsw_core->driver->params_unregister)
+ devlink_params_unpublish(devlink);
+ mlxsw_thermal_fini(mlxsw_core->thermal);
+ mlxsw_hwmon_fini(mlxsw_core->hwmon);
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index bae0074ab9aa..00c86c7dd42d 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -976,6 +976,10 @@ static int r8168dp_2_mdio_read(struct rtl8169_private *tp, int reg)
+ {
+ int value;
+
++ /* Work around issue with chip reporting wrong PHY ID */
++ if (reg == MII_PHYSID2)
++ return 0xc912;
++
+ r8168dp_2_mdio_start(tp);
+
+ value = r8169_mdio_read(tp, reg);
+diff --git a/drivers/net/phy/bcm7xxx.c b/drivers/net/phy/bcm7xxx.c
+index 8fc33867e524..af8eabe7a6d4 100644
+--- a/drivers/net/phy/bcm7xxx.c
++++ b/drivers/net/phy/bcm7xxx.c
+@@ -572,6 +572,7 @@ static int bcm7xxx_28nm_probe(struct phy_device *phydev)
+ .name = _name, \
+ /* PHY_BASIC_FEATURES */ \
+ .flags = PHY_IS_INTERNAL, \
++ .soft_reset = genphy_soft_reset, \
+ .config_init = bcm7xxx_config_init, \
+ .suspend = bcm7xxx_suspend, \
+ .resume = bcm7xxx_config_init, \
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index a5a57ca94c1a..26a13fd3c463 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -87,8 +87,24 @@ struct phylink {
+ phylink_printk(KERN_WARNING, pl, fmt, ##__VA_ARGS__)
+ #define phylink_info(pl, fmt, ...) \
+ phylink_printk(KERN_INFO, pl, fmt, ##__VA_ARGS__)
++#if defined(CONFIG_DYNAMIC_DEBUG)
+ #define phylink_dbg(pl, fmt, ...) \
++do { \
++ if ((pl)->config->type == PHYLINK_NETDEV) \
++ netdev_dbg((pl)->netdev, fmt, ##__VA_ARGS__); \
++ else if ((pl)->config->type == PHYLINK_DEV) \
++ dev_dbg((pl)->dev, fmt, ##__VA_ARGS__); \
++} while (0)
++#elif defined(DEBUG)
++#define phylink_dbg(pl, fmt, ...) \
+ phylink_printk(KERN_DEBUG, pl, fmt, ##__VA_ARGS__)
++#else
++#define phylink_dbg(pl, fmt, ...) \
++({ \
++ if (0) \
++ phylink_printk(KERN_DEBUG, pl, fmt, ##__VA_ARGS__); \
++})
++#endif
+
+ /**
+ * phylink_set_port_modes() - set the port type modes in the ethtool mask
+diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c
+index 32f53de5b1fe..fe630438f67b 100644
+--- a/drivers/net/usb/cdc_ether.c
++++ b/drivers/net/usb/cdc_ether.c
+@@ -787,6 +787,13 @@ static const struct usb_device_id products[] = {
+ .driver_info = 0,
+ },
+
++/* ThinkPad USB-C Dock Gen 2 (based on Realtek RTL8153) */
++{
++ USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0xa387, USB_CLASS_COMM,
++ USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
++ .driver_info = 0,
++},
++
+ /* NVIDIA Tegra USB 3.0 Ethernet Adapters (based on Realtek RTL8153) */
+ {
+ USB_DEVICE_AND_INTERFACE_INFO(NVIDIA_VENDOR_ID, 0x09ff, USB_CLASS_COMM,
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index f033fee225a1..7dd6289b1ffc 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -1265,8 +1265,11 @@ static void lan78xx_status(struct lan78xx_net *dev, struct urb *urb)
+ netif_dbg(dev, link, dev->net, "PHY INTR: 0x%08x\n", intdata);
+ lan78xx_defer_kevent(dev, EVENT_LINK_RESET);
+
+- if (dev->domain_data.phyirq > 0)
++ if (dev->domain_data.phyirq > 0) {
++ local_irq_disable();
+ generic_handle_irq(dev->domain_data.phyirq);
++ local_irq_enable();
++ }
+ } else
+ netdev_warn(dev->net,
+ "unexpected interrupt: 0x%08x\n", intdata);
+@@ -3789,10 +3792,14 @@ static int lan78xx_probe(struct usb_interface *intf,
+ /* driver requires remote-wakeup capability during autosuspend. */
+ intf->needs_remote_wakeup = 1;
+
++ ret = lan78xx_phy_init(dev);
++ if (ret < 0)
++ goto out4;
++
+ ret = register_netdev(netdev);
+ if (ret != 0) {
+ netif_err(dev, probe, netdev, "couldn't register the device\n");
+- goto out4;
++ goto out5;
+ }
+
+ usb_set_intfdata(intf, dev);
+@@ -3805,14 +3812,10 @@ static int lan78xx_probe(struct usb_interface *intf,
+ pm_runtime_set_autosuspend_delay(&udev->dev,
+ DEFAULT_AUTOSUSPEND_DELAY);
+
+- ret = lan78xx_phy_init(dev);
+- if (ret < 0)
+- goto out5;
+-
+ return 0;
+
+ out5:
+- unregister_netdev(netdev);
++ phy_disconnect(netdev->phydev);
+ out4:
+ usb_free_urb(dev->urb_intr);
+ out3:
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 9eedc0714422..7661d7475c2a 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -5402,6 +5402,7 @@ static const struct usb_device_id rtl8152_table[] = {
+ {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7205)},
+ {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x720c)},
+ {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7214)},
++ {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0xa387)},
+ {REALTEK_USB_DEVICE(VENDOR_ID_LINKSYS, 0x0041)},
+ {REALTEK_USB_DEVICE(VENDOR_ID_NVIDIA, 0x09ff)},
+ {REALTEK_USB_DEVICE(VENDOR_ID_TPLINK, 0x0601)},
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index 3d9bcc957f7d..e07872869266 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -2487,9 +2487,11 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+ vni = tunnel_id_to_key32(info->key.tun_id);
+ ifindex = 0;
+ dst_cache = &info->dst_cache;
+- if (info->options_len &&
+- info->key.tun_flags & TUNNEL_VXLAN_OPT)
++ if (info->key.tun_flags & TUNNEL_VXLAN_OPT) {
++ if (info->options_len < sizeof(*md))
++ goto drop;
+ md = ip_tunnel_info_opts(info);
++ }
+ ttl = info->key.ttl;
+ tos = info->key.tos;
+ label = info->key.label;
+diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
+index e6b175370f2e..8b7bd4822465 100644
+--- a/drivers/of/unittest.c
++++ b/drivers/of/unittest.c
+@@ -1205,6 +1205,7 @@ static int __init unittest_data_add(void)
+ of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node);
+ if (!unittest_data_node) {
+ pr_warn("%s: No tree to attach; not running tests\n", __func__);
++ kfree(unittest_data);
+ return -ENODATA;
+ }
+
+diff --git a/drivers/pinctrl/bcm/pinctrl-ns2-mux.c b/drivers/pinctrl/bcm/pinctrl-ns2-mux.c
+index 2bf6af7df7d9..9fabc451550e 100644
+--- a/drivers/pinctrl/bcm/pinctrl-ns2-mux.c
++++ b/drivers/pinctrl/bcm/pinctrl-ns2-mux.c
+@@ -640,8 +640,8 @@ static int ns2_pinmux_enable(struct pinctrl_dev *pctrl_dev,
+ const struct ns2_pin_function *func;
+ const struct ns2_pin_group *grp;
+
+- if (grp_select > pinctrl->num_groups ||
+- func_select > pinctrl->num_functions)
++ if (grp_select >= pinctrl->num_groups ||
++ func_select >= pinctrl->num_functions)
+ return -EINVAL;
+
+ func = &pinctrl->functions[func_select];
+diff --git a/drivers/pinctrl/intel/pinctrl-intel.c b/drivers/pinctrl/intel/pinctrl-intel.c
+index a18d6eefe672..4323796cbe11 100644
+--- a/drivers/pinctrl/intel/pinctrl-intel.c
++++ b/drivers/pinctrl/intel/pinctrl-intel.c
+@@ -96,6 +96,7 @@ struct intel_pinctrl_context {
+ * @pctldesc: Pin controller description
+ * @pctldev: Pointer to the pin controller device
+ * @chip: GPIO chip in this pin controller
++ * @irqchip: IRQ chip in this pin controller
+ * @soc: SoC/PCH specific pin configuration data
+ * @communities: All communities in this pin controller
+ * @ncommunities: Number of communities in this pin controller
+@@ -108,6 +109,7 @@ struct intel_pinctrl {
+ struct pinctrl_desc pctldesc;
+ struct pinctrl_dev *pctldev;
+ struct gpio_chip chip;
++ struct irq_chip irqchip;
+ const struct intel_pinctrl_soc_data *soc;
+ struct intel_community *communities;
+ size_t ncommunities;
+@@ -1081,16 +1083,6 @@ static irqreturn_t intel_gpio_irq(int irq, void *data)
+ return ret;
+ }
+
+-static struct irq_chip intel_gpio_irqchip = {
+- .name = "intel-gpio",
+- .irq_ack = intel_gpio_irq_ack,
+- .irq_mask = intel_gpio_irq_mask,
+- .irq_unmask = intel_gpio_irq_unmask,
+- .irq_set_type = intel_gpio_irq_type,
+- .irq_set_wake = intel_gpio_irq_wake,
+- .flags = IRQCHIP_MASK_ON_SUSPEND,
+-};
+-
+ static int intel_gpio_add_pin_ranges(struct intel_pinctrl *pctrl,
+ const struct intel_community *community)
+ {
+@@ -1140,12 +1132,22 @@ static int intel_gpio_probe(struct intel_pinctrl *pctrl, int irq)
+
+ pctrl->chip = intel_gpio_chip;
+
++ /* Setup GPIO chip */
+ pctrl->chip.ngpio = intel_gpio_ngpio(pctrl);
+ pctrl->chip.label = dev_name(pctrl->dev);
+ pctrl->chip.parent = pctrl->dev;
+ pctrl->chip.base = -1;
+ pctrl->irq = irq;
+
++ /* Setup IRQ chip */
++ pctrl->irqchip.name = dev_name(pctrl->dev);
++ pctrl->irqchip.irq_ack = intel_gpio_irq_ack;
++ pctrl->irqchip.irq_mask = intel_gpio_irq_mask;
++ pctrl->irqchip.irq_unmask = intel_gpio_irq_unmask;
++ pctrl->irqchip.irq_set_type = intel_gpio_irq_type;
++ pctrl->irqchip.irq_set_wake = intel_gpio_irq_wake;
++ pctrl->irqchip.flags = IRQCHIP_MASK_ON_SUSPEND;
++
+ ret = devm_gpiochip_add_data(pctrl->dev, &pctrl->chip, pctrl);
+ if (ret) {
+ dev_err(pctrl->dev, "failed to register gpiochip\n");
+@@ -1175,15 +1177,14 @@ static int intel_gpio_probe(struct intel_pinctrl *pctrl, int irq)
+ return ret;
+ }
+
+- ret = gpiochip_irqchip_add(&pctrl->chip, &intel_gpio_irqchip, 0,
++ ret = gpiochip_irqchip_add(&pctrl->chip, &pctrl->irqchip, 0,
+ handle_bad_irq, IRQ_TYPE_NONE);
+ if (ret) {
+ dev_err(pctrl->dev, "failed to add irqchip\n");
+ return ret;
+ }
+
+- gpiochip_set_chained_irqchip(&pctrl->chip, &intel_gpio_irqchip, irq,
+- NULL);
++ gpiochip_set_chained_irqchip(&pctrl->chip, &pctrl->irqchip, irq, NULL);
+ return 0;
+ }
+
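Note: the intel-pinctrl change above replaces one static irq_chip shared by every probed controller with a copy embedded in each intel_pinctrl instance, since per-device fields written into a shared static object clobber each other. A reduced sketch of the difference (types hypothetical):

#include <stdio.h>

struct chip { const char *name; };
struct dev  { struct chip irqchip; };   /* one private chip per device */

static void probe(struct dev *d, const char *name)
{
    d->irqchip.name = name;             /* safe: writes its own copy */
}

int main(void)
{
    struct dev a, b;

    probe(&a, "gpio-a");
    probe(&b, "gpio-b");
    printf("%s %s\n", a.irqchip.name, b.irqchip.name);
    return 0;
}

With a single static chip, the second probe() would have renamed the first device's chip as well.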
+diff --git a/drivers/pinctrl/pinctrl-stmfx.c b/drivers/pinctrl/pinctrl-stmfx.c
+index 31b6e511670f..b7c7f24699c9 100644
+--- a/drivers/pinctrl/pinctrl-stmfx.c
++++ b/drivers/pinctrl/pinctrl-stmfx.c
+@@ -697,7 +697,7 @@ static int stmfx_pinctrl_probe(struct platform_device *pdev)
+
+ static int stmfx_pinctrl_remove(struct platform_device *pdev)
+ {
+- struct stmfx *stmfx = dev_get_platdata(&pdev->dev);
++ struct stmfx *stmfx = dev_get_drvdata(pdev->dev.parent);
+
+ return stmfx_function_disable(stmfx,
+ STMFX_FUNC_GPIO |
+diff --git a/drivers/platform/x86/pmc_atom.c b/drivers/platform/x86/pmc_atom.c
+index aa53648a2214..9aca5e7ce6d0 100644
+--- a/drivers/platform/x86/pmc_atom.c
++++ b/drivers/platform/x86/pmc_atom.c
+@@ -415,6 +415,13 @@ static const struct dmi_system_id critclk_systems[] = {
+ DMI_MATCH(DMI_BOARD_NAME, "CB6363"),
+ },
+ },
++ {
++ .ident = "SIMATIC IPC227E",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "SIEMENS AG"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "6ES7647-8B"),
++ },
++ },
+ { /*sentinel*/ }
+ };
+
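Note: the pmc_atom entry above is pure match data; no per-board code is needed. A kernel-style sketch of the consuming side (illustrative only, the driver's actual init differs in detail):

static int __init example_init(void)
{
    /*
     * dmi_check_system() counts the entries whose .matches strings
     * all hold against the firmware's DMI data, running each
     * matching entry's .callback if one is set.
     */
    if (dmi_check_system(critclk_systems))
        pr_info("keeping critical clocks enabled for this board\n");
    return 0;
}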
+diff --git a/drivers/regulator/da9062-regulator.c b/drivers/regulator/da9062-regulator.c
+index 2ffc64622451..9b2ca472f70c 100644
+--- a/drivers/regulator/da9062-regulator.c
++++ b/drivers/regulator/da9062-regulator.c
+@@ -136,7 +136,6 @@ static int da9062_buck_set_mode(struct regulator_dev *rdev, unsigned mode)
+ static unsigned da9062_buck_get_mode(struct regulator_dev *rdev)
+ {
+ struct da9062_regulator *regl = rdev_get_drvdata(rdev);
+- struct regmap_field *field;
+ unsigned int val, mode = 0;
+ int ret;
+
+@@ -158,18 +157,7 @@ static unsigned da9062_buck_get_mode(struct regulator_dev *rdev)
+ return REGULATOR_MODE_NORMAL;
+ }
+
+- /* Detect current regulator state */
+- ret = regmap_field_read(regl->suspend, &val);
+- if (ret < 0)
+- return 0;
+-
+- /* Read regulator mode from proper register, depending on state */
+- if (val)
+- field = regl->suspend_sleep;
+- else
+- field = regl->sleep;
+-
+- ret = regmap_field_read(field, &val);
++ ret = regmap_field_read(regl->sleep, &val);
+ if (ret < 0)
+ return 0;
+
+@@ -208,21 +196,9 @@ static int da9062_ldo_set_mode(struct regulator_dev *rdev, unsigned mode)
+ static unsigned da9062_ldo_get_mode(struct regulator_dev *rdev)
+ {
+ struct da9062_regulator *regl = rdev_get_drvdata(rdev);
+- struct regmap_field *field;
+ int ret, val;
+
+- /* Detect current regulator state */
+- ret = regmap_field_read(regl->suspend, &val);
+- if (ret < 0)
+- return 0;
+-
+- /* Read regulator mode from proper register, depending on state */
+- if (val)
+- field = regl->suspend_sleep;
+- else
+- field = regl->sleep;
+-
+- ret = regmap_field_read(field, &val);
++ ret = regmap_field_read(regl->sleep, &val);
+ if (ret < 0)
+ return 0;
+
+@@ -408,10 +384,10 @@ static const struct da9062_regulator_info local_da9061_regulator_info[] = {
+ __builtin_ffs((int)DA9062AA_BUCK1_MODE_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_BUCK1_MODE_MASK)) - 1),
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VBUCK1_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_BUCK1_CONT,
++ __builtin_ffs((int)DA9062AA_BUCK1_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VBUCK1_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_BUCK1_CONF_MASK) - 1),
+ },
+ {
+ .desc.id = DA9061_ID_BUCK2,
+@@ -444,10 +420,10 @@ static const struct da9062_regulator_info local_da9061_regulator_info[] = {
+ __builtin_ffs((int)DA9062AA_BUCK3_MODE_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_BUCK3_MODE_MASK)) - 1),
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VBUCK3_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_BUCK3_CONT,
++ __builtin_ffs((int)DA9062AA_BUCK3_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VBUCK3_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_BUCK3_CONF_MASK) - 1),
+ },
+ {
+ .desc.id = DA9061_ID_BUCK3,
+@@ -480,10 +456,10 @@ static const struct da9062_regulator_info local_da9061_regulator_info[] = {
+ __builtin_ffs((int)DA9062AA_BUCK4_MODE_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_BUCK4_MODE_MASK)) - 1),
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VBUCK4_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_BUCK4_CONT,
++ __builtin_ffs((int)DA9062AA_BUCK4_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VBUCK4_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_BUCK4_CONF_MASK) - 1),
+ },
+ {
+ .desc.id = DA9061_ID_LDO1,
+@@ -509,10 +485,10 @@ static const struct da9062_regulator_info local_da9061_regulator_info[] = {
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_LDO1_SL_B_MASK)) - 1),
+ .suspend_vsel_reg = DA9062AA_VLDO1_B,
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VLDO1_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_LDO1_CONT,
++ __builtin_ffs((int)DA9062AA_LDO1_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VLDO1_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_LDO1_CONF_MASK) - 1),
+ .oc_event = REG_FIELD(DA9062AA_STATUS_D,
+ __builtin_ffs((int)DA9062AA_LDO1_ILIM_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+@@ -542,10 +518,10 @@ static const struct da9062_regulator_info local_da9061_regulator_info[] = {
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_LDO2_SL_B_MASK)) - 1),
+ .suspend_vsel_reg = DA9062AA_VLDO2_B,
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VLDO2_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_LDO2_CONT,
++ __builtin_ffs((int)DA9062AA_LDO2_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VLDO2_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_LDO2_CONF_MASK) - 1),
+ .oc_event = REG_FIELD(DA9062AA_STATUS_D,
+ __builtin_ffs((int)DA9062AA_LDO2_ILIM_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+@@ -575,10 +551,10 @@ static const struct da9062_regulator_info local_da9061_regulator_info[] = {
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_LDO3_SL_B_MASK)) - 1),
+ .suspend_vsel_reg = DA9062AA_VLDO3_B,
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VLDO3_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_LDO3_CONT,
++ __builtin_ffs((int)DA9062AA_LDO3_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VLDO3_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_LDO3_CONF_MASK) - 1),
+ .oc_event = REG_FIELD(DA9062AA_STATUS_D,
+ __builtin_ffs((int)DA9062AA_LDO3_ILIM_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+@@ -608,10 +584,10 @@ static const struct da9062_regulator_info local_da9061_regulator_info[] = {
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_LDO4_SL_B_MASK)) - 1),
+ .suspend_vsel_reg = DA9062AA_VLDO4_B,
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VLDO4_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_LDO4_CONT,
++ __builtin_ffs((int)DA9062AA_LDO4_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VLDO4_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_LDO4_CONF_MASK) - 1),
+ .oc_event = REG_FIELD(DA9062AA_STATUS_D,
+ __builtin_ffs((int)DA9062AA_LDO4_ILIM_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+@@ -652,10 +628,10 @@ static const struct da9062_regulator_info local_da9062_regulator_info[] = {
+ __builtin_ffs((int)DA9062AA_BUCK1_MODE_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_BUCK1_MODE_MASK)) - 1),
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VBUCK1_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_BUCK1_CONT,
++ __builtin_ffs((int)DA9062AA_BUCK1_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VBUCK1_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_BUCK1_CONF_MASK) - 1),
+ },
+ {
+ .desc.id = DA9062_ID_BUCK2,
+@@ -688,10 +664,10 @@ static const struct da9062_regulator_info local_da9062_regulator_info[] = {
+ __builtin_ffs((int)DA9062AA_BUCK2_MODE_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_BUCK2_MODE_MASK)) - 1),
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VBUCK2_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_BUCK2_CONT,
++ __builtin_ffs((int)DA9062AA_BUCK2_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VBUCK2_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_BUCK2_CONF_MASK) - 1),
+ },
+ {
+ .desc.id = DA9062_ID_BUCK3,
+@@ -724,10 +700,10 @@ static const struct da9062_regulator_info local_da9062_regulator_info[] = {
+ __builtin_ffs((int)DA9062AA_BUCK3_MODE_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_BUCK3_MODE_MASK)) - 1),
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VBUCK3_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_BUCK3_CONT,
++ __builtin_ffs((int)DA9062AA_BUCK3_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VBUCK3_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_BUCK3_CONF_MASK) - 1),
+ },
+ {
+ .desc.id = DA9062_ID_BUCK4,
+@@ -760,10 +736,10 @@ static const struct da9062_regulator_info local_da9062_regulator_info[] = {
+ __builtin_ffs((int)DA9062AA_BUCK4_MODE_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_BUCK4_MODE_MASK)) - 1),
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VBUCK4_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_BUCK4_CONT,
++ __builtin_ffs((int)DA9062AA_BUCK4_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VBUCK4_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_BUCK4_CONF_MASK) - 1),
+ },
+ {
+ .desc.id = DA9062_ID_LDO1,
+@@ -789,10 +765,10 @@ static const struct da9062_regulator_info local_da9062_regulator_info[] = {
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_LDO1_SL_B_MASK)) - 1),
+ .suspend_vsel_reg = DA9062AA_VLDO1_B,
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VLDO1_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_LDO1_CONT,
++ __builtin_ffs((int)DA9062AA_LDO1_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VLDO1_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_LDO1_CONF_MASK) - 1),
+ .oc_event = REG_FIELD(DA9062AA_STATUS_D,
+ __builtin_ffs((int)DA9062AA_LDO1_ILIM_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+@@ -822,10 +798,10 @@ static const struct da9062_regulator_info local_da9062_regulator_info[] = {
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_LDO2_SL_B_MASK)) - 1),
+ .suspend_vsel_reg = DA9062AA_VLDO2_B,
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VLDO2_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_LDO2_CONT,
++ __builtin_ffs((int)DA9062AA_LDO2_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VLDO2_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_LDO2_CONF_MASK) - 1),
+ .oc_event = REG_FIELD(DA9062AA_STATUS_D,
+ __builtin_ffs((int)DA9062AA_LDO2_ILIM_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+@@ -855,10 +831,10 @@ static const struct da9062_regulator_info local_da9062_regulator_info[] = {
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_LDO3_SL_B_MASK)) - 1),
+ .suspend_vsel_reg = DA9062AA_VLDO3_B,
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VLDO3_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_LDO3_CONT,
++ __builtin_ffs((int)DA9062AA_LDO3_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VLDO3_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_LDO3_CONF_MASK) - 1),
+ .oc_event = REG_FIELD(DA9062AA_STATUS_D,
+ __builtin_ffs((int)DA9062AA_LDO3_ILIM_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+@@ -888,10 +864,10 @@ static const struct da9062_regulator_info local_da9062_regulator_info[] = {
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_LDO4_SL_B_MASK)) - 1),
+ .suspend_vsel_reg = DA9062AA_VLDO4_B,
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VLDO4_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_LDO4_CONT,
++ __builtin_ffs((int)DA9062AA_LDO4_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VLDO4_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_LDO4_CONF_MASK) - 1),
+ .oc_event = REG_FIELD(DA9062AA_STATUS_D,
+ __builtin_ffs((int)DA9062AA_LDO4_ILIM_MASK) - 1,
+ sizeof(unsigned int) * 8 -
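Note: every .suspend REG_FIELD() in this da9062 block derives its bit range from a register mask: __builtin_ffs(mask) - 1 gives the lowest set bit, and 8 * sizeof(unsigned int) - __builtin_clz(mask) - 1 the highest, so the fix only had to swap in the correct register/mask pair. Stand-alone check of that arithmetic (mask value illustrative):

#include <stdio.h>

int main(void)
{
    unsigned int mask = 0x30;   /* e.g. a two-bit CONF field */
    int lsb = __builtin_ffs((int)mask) - 1;
    int msb = (int)(sizeof(unsigned int) * 8) - __builtin_clz(mask) - 1;

    printf("field spans bits %d..%d\n", lsb, msb);   /* 4..5 */
    return 0;
}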
+diff --git a/drivers/regulator/of_regulator.c b/drivers/regulator/of_regulator.c
+index 9112faa6a9a0..38dd06fbab38 100644
+--- a/drivers/regulator/of_regulator.c
++++ b/drivers/regulator/of_regulator.c
+@@ -231,12 +231,12 @@ static int of_get_regulation_constraints(struct device *dev,
+ "regulator-off-in-suspend"))
+ suspend_state->enabled = DISABLE_IN_SUSPEND;
+
+- if (!of_property_read_u32(np, "regulator-suspend-min-microvolt",
+- &pval))
++ if (!of_property_read_u32(suspend_np,
++ "regulator-suspend-min-microvolt", &pval))
+ suspend_state->min_uV = pval;
+
+- if (!of_property_read_u32(np, "regulator-suspend-max-microvolt",
+- &pval))
++ if (!of_property_read_u32(suspend_np,
++ "regulator-suspend-max-microvolt", &pval))
+ suspend_state->max_uV = pval;
+
+ if (!of_property_read_u32(suspend_np,
+diff --git a/drivers/regulator/pfuze100-regulator.c b/drivers/regulator/pfuze100-regulator.c
+index df5df1c495ad..689537927f6f 100644
+--- a/drivers/regulator/pfuze100-regulator.c
++++ b/drivers/regulator/pfuze100-regulator.c
+@@ -788,7 +788,13 @@ static int pfuze100_regulator_probe(struct i2c_client *client,
+
+ /* SW2~SW4 high bit check and modify the voltage value table */
+ if (i >= sw_check_start && i <= sw_check_end) {
+- regmap_read(pfuze_chip->regmap, desc->vsel_reg, &val);
++ ret = regmap_read(pfuze_chip->regmap,
++ desc->vsel_reg, &val);
++ if (ret) {
++ dev_err(&client->dev, "Failed to read from the register.\n");

++ return ret;
++ }
++
+ if (val & sw_hi) {
+ if (pfuze_chip->chip_id == PFUZE3000 ||
+ pfuze_chip->chip_id == PFUZE3001) {
+diff --git a/drivers/regulator/ti-abb-regulator.c b/drivers/regulator/ti-abb-regulator.c
+index cced1ffb896c..89b9314d64c9 100644
+--- a/drivers/regulator/ti-abb-regulator.c
++++ b/drivers/regulator/ti-abb-regulator.c
+@@ -173,19 +173,14 @@ static int ti_abb_wait_txdone(struct device *dev, struct ti_abb *abb)
+ while (timeout++ <= abb->settling_time) {
+ status = ti_abb_check_txdone(abb);
+ if (status)
+- break;
++ return 0;
+
+ udelay(1);
+ }
+
+- if (timeout > abb->settling_time) {
+- dev_warn_ratelimited(dev,
+- "%s:TRANXDONE timeout(%duS) int=0x%08x\n",
+- __func__, timeout, readl(abb->int_base));
+- return -ETIMEDOUT;
+- }
+-
+- return 0;
++ dev_warn_ratelimited(dev, "%s:TRANXDONE timeout(%duS) int=0x%08x\n",
++ __func__, timeout, readl(abb->int_base));
++ return -ETIMEDOUT;
+ }
+
+ /**
+@@ -205,19 +200,14 @@ static int ti_abb_clear_all_txdone(struct device *dev, const struct ti_abb *abb)
+
+ status = ti_abb_check_txdone(abb);
+ if (!status)
+- break;
++ return 0;
+
+ udelay(1);
+ }
+
+- if (timeout > abb->settling_time) {
+- dev_warn_ratelimited(dev,
+- "%s:TRANXDONE timeout(%duS) int=0x%08x\n",
+- __func__, timeout, readl(abb->int_base));
+- return -ETIMEDOUT;
+- }
+-
+- return 0;
++ dev_warn_ratelimited(dev, "%s:TRANXDONE timeout(%duS) int=0x%08x\n",
++ __func__, timeout, readl(abb->int_base));
++ return -ETIMEDOUT;
+ }
+
+ /**
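Note: both ti-abb hunks flatten the wait loops so that success returns 0 directly from the loop body and everything after the loop is unconditionally the timeout path, removing the duplicated post-loop test. The shape, as a stand-alone sketch with a stubbed condition:

#include <stdio.h>

static int done;
static int check_txdone(void) { return done++ > 3; }   /* stub */

static int wait_txdone(unsigned int settling_time)
{
    unsigned int timeout = 0;

    while (timeout++ <= settling_time) {
        if (check_txdone())
            return 0;           /* was 'break' plus a post-loop test */
        /* udelay(1) in the driver */
    }
    printf("TRANXDONE timeout(%uuS)\n", timeout);
    return -62;                 /* -ETIMEDOUT */
}

int main(void) { return wait_txdone(10); }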
+diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
+index 1b92f3c19ff3..90cf4691b8c3 100644
+--- a/drivers/scsi/Kconfig
++++ b/drivers/scsi/Kconfig
+@@ -898,7 +898,7 @@ config SCSI_SNI_53C710
+
+ config 53C700_LE_ON_BE
+ bool
+- depends on SCSI_LASI700
++ depends on SCSI_LASI700 || SCSI_SNI_53C710
+ default y
+
+ config SCSI_STEX
+diff --git a/drivers/scsi/device_handler/scsi_dh_alua.c b/drivers/scsi/device_handler/scsi_dh_alua.c
+index 4971104b1817..f32da0ca529e 100644
+--- a/drivers/scsi/device_handler/scsi_dh_alua.c
++++ b/drivers/scsi/device_handler/scsi_dh_alua.c
+@@ -512,6 +512,7 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_port_group *pg)
+ unsigned int tpg_desc_tbl_off;
+ unsigned char orig_transition_tmo;
+ unsigned long flags;
++ bool transitioning_sense = false;
+
+ if (!pg->expiry) {
+ unsigned long transition_tmo = ALUA_FAILOVER_TIMEOUT * HZ;
+@@ -572,13 +573,19 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_port_group *pg)
+ goto retry;
+ }
+ /*
+- * Retry on ALUA state transition or if any
+- * UNIT ATTENTION occurred.
++ * If the array returns with 'ALUA state transition'
++ * sense code here it cannot return RTPG data during
++ * transition. So set the state to 'transitioning' directly.
+ */
+ if (sense_hdr.sense_key == NOT_READY &&
+- sense_hdr.asc == 0x04 && sense_hdr.ascq == 0x0a)
+- err = SCSI_DH_RETRY;
+- else if (sense_hdr.sense_key == UNIT_ATTENTION)
++ sense_hdr.asc == 0x04 && sense_hdr.ascq == 0x0a) {
++ transitioning_sense = true;
++ goto skip_rtpg;
++ }
++ /*
++ * Retry if any other UNIT ATTENTION occurred.
++ */
++ if (sense_hdr.sense_key == UNIT_ATTENTION)
+ err = SCSI_DH_RETRY;
+ if (err == SCSI_DH_RETRY &&
+ pg->expiry != 0 && time_before(jiffies, pg->expiry)) {
+@@ -666,7 +673,11 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_port_group *pg)
+ off = 8 + (desc[7] * 4);
+ }
+
++ skip_rtpg:
+ spin_lock_irqsave(&pg->lock, flags);
++ if (transitioning_sense)
++ pg->state = SCSI_ACCESS_STATE_TRANSITIONING;
++
+ sdev_printk(KERN_INFO, sdev,
+ "%s: port group %02x state %c %s supports %c%c%c%c%c%c%c\n",
+ ALUA_DH_NAME, pg->group_id, print_alua_state(pg->state),
+diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
+index 1bb6aada93fa..a4519710b3fc 100644
+--- a/drivers/scsi/hpsa.c
++++ b/drivers/scsi/hpsa.c
+@@ -5478,6 +5478,8 @@ static int hpsa_ciss_submit(struct ctlr_info *h,
+ return SCSI_MLQUEUE_HOST_BUSY;
+ }
+
++ c->device = dev;
++
+ enqueue_cmd_and_start_io(h, c);
+ /* the cmd'll come back via intr handler in complete_scsi_command() */
+ return 0;
+@@ -5549,6 +5551,7 @@ static int hpsa_ioaccel_submit(struct ctlr_info *h,
+ hpsa_cmd_init(h, c->cmdindex, c);
+ c->cmd_type = CMD_SCSI;
+ c->scsi_cmd = cmd;
++ c->device = dev;
+ rc = hpsa_scsi_ioaccel_raid_map(h, c);
+ if (rc < 0) /* scsi_dma_map failed. */
+ rc = SCSI_MLQUEUE_HOST_BUSY;
+@@ -5556,6 +5559,7 @@ static int hpsa_ioaccel_submit(struct ctlr_info *h,
+ hpsa_cmd_init(h, c->cmdindex, c);
+ c->cmd_type = CMD_SCSI;
+ c->scsi_cmd = cmd;
++ c->device = dev;
+ rc = hpsa_scsi_ioaccel_direct_map(h, c);
+ if (rc < 0) /* scsi_dma_map failed. */
+ rc = SCSI_MLQUEUE_HOST_BUSY;
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 2835afbd2edc..04cf6986eb8e 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -3233,6 +3233,10 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ req->req_q_in, req->req_q_out, rsp->rsp_q_in, rsp->rsp_q_out);
+
+ ha->wq = alloc_workqueue("qla2xxx_wq", 0, 0);
++ if (unlikely(!ha->wq)) {
++ ret = -ENOMEM;
++ goto probe_failed;
++ }
+
+ if (ha->isp_ops->initialize_adapter(base_vha)) {
+ ql_log(ql_log_fatal, base_vha, 0x00d6,
+diff --git a/drivers/scsi/sni_53c710.c b/drivers/scsi/sni_53c710.c
+index aef4881d8e21..a85d52b5dc32 100644
+--- a/drivers/scsi/sni_53c710.c
++++ b/drivers/scsi/sni_53c710.c
+@@ -66,10 +66,8 @@ static int snirm710_probe(struct platform_device *dev)
+
+ base = res->start;
+ hostdata = kzalloc(sizeof(*hostdata), GFP_KERNEL);
+- if (!hostdata) {
+- dev_printk(KERN_ERR, dev, "Failed to allocate host data\n");
++ if (!hostdata)
+ return -ENOMEM;
+- }
+
+ hostdata->dev = &dev->dev;
+ dma_set_mask(&dev->dev, DMA_BIT_MASK(32));
+diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
+index 04bf2acd3800..2d19f0e332b0 100644
+--- a/drivers/target/target_core_device.c
++++ b/drivers/target/target_core_device.c
+@@ -1074,27 +1074,6 @@ passthrough_parse_cdb(struct se_cmd *cmd,
+ struct se_device *dev = cmd->se_dev;
+ unsigned int size;
+
+- /*
+- * Clear a lun set in the cdb if the initiator talking to us spoke
+- * an old standards version, as we can't assume the underlying device
+- * won't choke up on it.
+- */
+- switch (cdb[0]) {
+- case READ_10: /* SBC - RDProtect */
+- case READ_12: /* SBC - RDProtect */
+- case READ_16: /* SBC - RDProtect */
+- case SEND_DIAGNOSTIC: /* SPC - SELF-TEST Code */
+- case VERIFY: /* SBC - VRProtect */
+- case VERIFY_16: /* SBC - VRProtect */
+- case WRITE_VERIFY: /* SBC - VRProtect */
+- case WRITE_VERIFY_12: /* SBC - VRProtect */
+- case MAINTENANCE_IN: /* SPC - Parameter Data Format for SA RTPG */
+- break;
+- default:
+- cdb[1] &= 0x1f; /* clear logical unit number */
+- break;
+- }
+-
+ /*
+ * For REPORT LUNS we always need to emulate the response, for everything
+ * else, pass it up.
+diff --git a/drivers/tty/serial/8250/8250_men_mcb.c b/drivers/tty/serial/8250/8250_men_mcb.c
+index 02c5aff58a74..8df89e9cd254 100644
+--- a/drivers/tty/serial/8250/8250_men_mcb.c
++++ b/drivers/tty/serial/8250/8250_men_mcb.c
+@@ -72,8 +72,8 @@ static int serial_8250_men_mcb_probe(struct mcb_device *mdev,
+ {
+ struct serial_8250_men_mcb_data *data;
+ struct resource *mem;
+- unsigned int num_ports;
+- unsigned int i;
++ int num_ports;
++ int i;
+ void __iomem *membase;
+
+ mem = mcb_get_resource(mdev, IORESOURCE_MEM);
+@@ -88,7 +88,7 @@ static int serial_8250_men_mcb_probe(struct mcb_device *mdev,
+ dev_dbg(&mdev->dev, "found a 16z%03u with %u ports\n",
+ mdev->id, num_ports);
+
+- if (num_ports == 0 || num_ports > 4) {
++ if (num_ports <= 0 || num_ports > 4) {
+ dev_err(&mdev->dev, "unexpected number of ports: %u\n",
+ num_ports);
+ return -ENODEV;
+@@ -133,7 +133,7 @@ static int serial_8250_men_mcb_probe(struct mcb_device *mdev,
+
+ static void serial_8250_men_mcb_remove(struct mcb_device *mdev)
+ {
+- unsigned int num_ports, i;
++ int num_ports, i;
+ struct serial_8250_men_mcb_data *data = mcb_get_drvdata(mdev);
+
+ if (!data)
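Note: the 8250_men_mcb change above works because the port-count helper can report errors as negative numbers; stored in an unsigned int, -EINVAL becomes a huge positive value that 'num_ports == 0' never catches, while a signed 'num_ports <= 0' does. Stand-alone illustration:

#include <stdio.h>

static int read_num_ports(void) { return -22; }   /* stubbed -EINVAL */

int main(void)
{
    unsigned int u = read_num_ports();   /* wraps to a huge value */
    int s = read_num_ports();

    printf("unsigned: %u (an '== 0' test misses it), signed: %d\n", u, s);
    return 0;
}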
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index 4d8f8f4ecf98..51fa614b4079 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -1154,7 +1154,7 @@ static int check_pending_gadget_drivers(struct usb_udc *udc)
+ dev_name(&udc->dev)) == 0) {
+ ret = udc_bind_to_driver(udc, driver);
+ if (ret != -EPROBE_DEFER)
+- list_del(&driver->pending);
++ list_del_init(&driver->pending);
+ break;
+ }
+
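Note: the one-liner above matters because list_del() poisons the entry's next/prev pointers, so any later list_empty() or list_del() on the same node is undefined, whereas list_del_init() leaves the node as a valid empty list; a pending gadget driver can then be examined again after removal. Reduced sketch of the primitive (simplified from <linux/list.h>):

struct list_head { struct list_head *next, *prev; };

static inline void __list_del(struct list_head *p, struct list_head *n)
{
    n->prev = p;
    p->next = n;
}

static inline void list_del_init(struct list_head *entry)
{
    __list_del(entry->prev, entry->next);
    entry->next = entry;    /* node is again a valid, empty list */
    entry->prev = entry;
}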
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 5ef5a16c01d2..7289d443bfb3 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -1379,6 +1379,11 @@ void cifsFileInfo_put(struct cifsFileInfo *cifs_file);
+ struct cifsInodeInfo {
+ bool can_cache_brlcks;
+ struct list_head llist; /* locks held by this inode */
++ /*
++ * NOTE: Some code paths call down_read(lock_sem) twice, so
++ * we must always use cifs_down_write() instead of down_write()
++ * for this semaphore to avoid deadlocks.
++ */
+ struct rw_semaphore lock_sem; /* protect the fields above */
+ /* BB add in lists for dirty pages i.e. write caching info for oplock */
+ struct list_head openFileList;
+diff --git a/fs/cifs/cifsproto.h b/fs/cifs/cifsproto.h
+index 592a6cea2b79..65b07f92bc71 100644
+--- a/fs/cifs/cifsproto.h
++++ b/fs/cifs/cifsproto.h
+@@ -166,6 +166,7 @@ extern int cifs_unlock_range(struct cifsFileInfo *cfile,
+ struct file_lock *flock, const unsigned int xid);
+ extern int cifs_push_mandatory_locks(struct cifsFileInfo *cfile);
+
++extern void cifs_down_write(struct rw_semaphore *sem);
+ extern struct cifsFileInfo *cifs_new_fileinfo(struct cifs_fid *fid,
+ struct file *file,
+ struct tcon_link *tlink,
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 8ee57d1f507f..8995c03056e3 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -556,9 +556,11 @@ cifs_reconnect(struct TCP_Server_Info *server)
+ spin_lock(&GlobalMid_Lock);
+ list_for_each_safe(tmp, tmp2, &server->pending_mid_q) {
+ mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
++ kref_get(&mid_entry->refcount);
+ if (mid_entry->mid_state == MID_REQUEST_SUBMITTED)
+ mid_entry->mid_state = MID_RETRY_NEEDED;
+ list_move(&mid_entry->qhead, &retry_list);
++ mid_entry->mid_flags |= MID_DELETED;
+ }
+ spin_unlock(&GlobalMid_Lock);
+ mutex_unlock(&server->srv_mutex);
+@@ -568,6 +570,7 @@ cifs_reconnect(struct TCP_Server_Info *server)
+ mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
+ list_del_init(&mid_entry->qhead);
+ mid_entry->callback(mid_entry);
++ cifs_mid_q_entry_release(mid_entry);
+ }
+
+ if (cifs_rdma_enabled(server)) {
+@@ -887,8 +890,10 @@ dequeue_mid(struct mid_q_entry *mid, bool malformed)
+ if (mid->mid_flags & MID_DELETED)
+ printk_once(KERN_WARNING
+ "trying to dequeue a deleted mid\n");
+- else
++ else {
+ list_del_init(&mid->qhead);
++ mid->mid_flags |= MID_DELETED;
++ }
+ spin_unlock(&GlobalMid_Lock);
+ }
+
+@@ -958,8 +963,10 @@ static void clean_demultiplex_info(struct TCP_Server_Info *server)
+ list_for_each_safe(tmp, tmp2, &server->pending_mid_q) {
+ mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
+ cifs_dbg(FYI, "Clearing mid 0x%llx\n", mid_entry->mid);
++ kref_get(&mid_entry->refcount);
+ mid_entry->mid_state = MID_SHUTDOWN;
+ list_move(&mid_entry->qhead, &dispose_list);
++ mid_entry->mid_flags |= MID_DELETED;
+ }
+ spin_unlock(&GlobalMid_Lock);
+
+@@ -969,6 +976,7 @@ static void clean_demultiplex_info(struct TCP_Server_Info *server)
+ cifs_dbg(FYI, "Callback mid 0x%llx\n", mid_entry->mid);
+ list_del_init(&mid_entry->qhead);
+ mid_entry->callback(mid_entry);
++ cifs_mid_q_entry_release(mid_entry);
+ }
+ /* 1/8th of sec is more than enough time for them to exit */
+ msleep(125);
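Note: both cifs_reconnect() and clean_demultiplex_info() now pin each mid with kref_get() before moving it to a private list and drop that pin with cifs_mid_q_entry_release() only after its callback ran, so the callback can no longer free the entry out from under the walker; MID_DELETED records that the entry already left the global list. The discipline, reduced to a sketch (types hypothetical):

struct mid { int refs; };                /* stand-in for the kref */

static void mid_get(struct mid *m) { m->refs++; }
static void mid_put(struct mid *m)
{
    if (--m->refs == 0) {
        /* mempool_free() in the real code */
    }
}

static void dispose(struct mid *m, void (*callback)(struct mid *))
{
    mid_get(m);      /* pin before the handoff to the private list */
    callback(m);     /* may drop the list's own reference          */
    mid_put(m);      /* release only after the callback finished   */
}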
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 53dbb6e0d390..facb52d37d19 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -281,6 +281,13 @@ cifs_has_mand_locks(struct cifsInodeInfo *cinode)
+ return has_locks;
+ }
+
++void
++cifs_down_write(struct rw_semaphore *sem)
++{
++ while (!down_write_trylock(sem))
++ msleep(10);
++}
++
+ struct cifsFileInfo *
+ cifs_new_fileinfo(struct cifs_fid *fid, struct file *file,
+ struct tcon_link *tlink, __u32 oplock)
+@@ -306,7 +313,7 @@ cifs_new_fileinfo(struct cifs_fid *fid, struct file *file,
+ INIT_LIST_HEAD(&fdlocks->locks);
+ fdlocks->cfile = cfile;
+ cfile->llist = fdlocks;
+- down_write(&cinode->lock_sem);
++ cifs_down_write(&cinode->lock_sem);
+ list_add(&fdlocks->llist, &cinode->llist);
+ up_write(&cinode->lock_sem);
+
+@@ -464,7 +471,7 @@ void _cifsFileInfo_put(struct cifsFileInfo *cifs_file, bool wait_oplock_handler)
+ * Delete any outstanding lock records. We'll lose them when the file
+ * is closed anyway.
+ */
+- down_write(&cifsi->lock_sem);
++ cifs_down_write(&cifsi->lock_sem);
+ list_for_each_entry_safe(li, tmp, &cifs_file->llist->locks, llist) {
+ list_del(&li->llist);
+ cifs_del_lock_waiters(li);
+@@ -1027,7 +1034,7 @@ static void
+ cifs_lock_add(struct cifsFileInfo *cfile, struct cifsLockInfo *lock)
+ {
+ struct cifsInodeInfo *cinode = CIFS_I(d_inode(cfile->dentry));
+- down_write(&cinode->lock_sem);
++ cifs_down_write(&cinode->lock_sem);
+ list_add_tail(&lock->llist, &cfile->llist->locks);
+ up_write(&cinode->lock_sem);
+ }
+@@ -1049,7 +1056,7 @@ cifs_lock_add_if(struct cifsFileInfo *cfile, struct cifsLockInfo *lock,
+
+ try_again:
+ exist = false;
+- down_write(&cinode->lock_sem);
++ cifs_down_write(&cinode->lock_sem);
+
+ exist = cifs_find_lock_conflict(cfile, lock->offset, lock->length,
+ lock->type, lock->flags, &conf_lock,
+@@ -1072,7 +1079,7 @@ try_again:
+ (lock->blist.next == &lock->blist));
+ if (!rc)
+ goto try_again;
+- down_write(&cinode->lock_sem);
++ cifs_down_write(&cinode->lock_sem);
+ list_del_init(&lock->blist);
+ }
+
+@@ -1125,7 +1132,7 @@ cifs_posix_lock_set(struct file *file, struct file_lock *flock)
+ return rc;
+
+ try_again:
+- down_write(&cinode->lock_sem);
++ cifs_down_write(&cinode->lock_sem);
+ if (!cinode->can_cache_brlcks) {
+ up_write(&cinode->lock_sem);
+ return rc;
+@@ -1331,7 +1338,7 @@ cifs_push_locks(struct cifsFileInfo *cfile)
+ int rc = 0;
+
+ /* we are going to update can_cache_brlcks here - need a write access */
+- down_write(&cinode->lock_sem);
++ cifs_down_write(&cinode->lock_sem);
+ if (!cinode->can_cache_brlcks) {
+ up_write(&cinode->lock_sem);
+ return rc;
+@@ -1522,7 +1529,7 @@ cifs_unlock_range(struct cifsFileInfo *cfile, struct file_lock *flock,
+ if (!buf)
+ return -ENOMEM;
+
+- down_write(&cinode->lock_sem);
++ cifs_down_write(&cinode->lock_sem);
+ for (i = 0; i < 2; i++) {
+ cur = buf;
+ num = 0;
+diff --git a/fs/cifs/smb2file.c b/fs/cifs/smb2file.c
+index e6a1fc72018f..8b0b512c5792 100644
+--- a/fs/cifs/smb2file.c
++++ b/fs/cifs/smb2file.c
+@@ -145,7 +145,7 @@ smb2_unlock_range(struct cifsFileInfo *cfile, struct file_lock *flock,
+
+ cur = buf;
+
+- down_write(&cinode->lock_sem);
++ cifs_down_write(&cinode->lock_sem);
+ list_for_each_entry_safe(li, tmp, &cfile->llist->locks, llist) {
+ if (flock->fl_start > li->offset ||
+ (flock->fl_start + length) <
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index 5d6d44bfe10a..bb52751ba783 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -86,22 +86,8 @@ AllocMidQEntry(const struct smb_hdr *smb_buffer, struct TCP_Server_Info *server)
+
+ static void _cifs_mid_q_entry_release(struct kref *refcount)
+ {
+- struct mid_q_entry *mid = container_of(refcount, struct mid_q_entry,
+- refcount);
+-
+- mempool_free(mid, cifs_mid_poolp);
+-}
+-
+-void cifs_mid_q_entry_release(struct mid_q_entry *midEntry)
+-{
+- spin_lock(&GlobalMid_Lock);
+- kref_put(&midEntry->refcount, _cifs_mid_q_entry_release);
+- spin_unlock(&GlobalMid_Lock);
+-}
+-
+-void
+-DeleteMidQEntry(struct mid_q_entry *midEntry)
+-{
++ struct mid_q_entry *midEntry =
++ container_of(refcount, struct mid_q_entry, refcount);
+ #ifdef CONFIG_CIFS_STATS2
+ __le16 command = midEntry->server->vals->lock_cmd;
+ __u16 smb_cmd = le16_to_cpu(midEntry->command);
+@@ -166,6 +152,19 @@ DeleteMidQEntry(struct mid_q_entry *midEntry)
+ }
+ }
+ #endif
++
++ mempool_free(midEntry, cifs_mid_poolp);
++}
++
++void cifs_mid_q_entry_release(struct mid_q_entry *midEntry)
++{
++ spin_lock(&GlobalMid_Lock);
++ kref_put(&midEntry->refcount, _cifs_mid_q_entry_release);
++ spin_unlock(&GlobalMid_Lock);
++}
++
++void DeleteMidQEntry(struct mid_q_entry *midEntry)
++{
+ cifs_mid_q_entry_release(midEntry);
+ }
+
+@@ -173,8 +172,10 @@ void
+ cifs_delete_mid(struct mid_q_entry *mid)
+ {
+ spin_lock(&GlobalMid_Lock);
+- list_del_init(&mid->qhead);
+- mid->mid_flags |= MID_DELETED;
++ if (!(mid->mid_flags & MID_DELETED)) {
++ list_del_init(&mid->qhead);
++ mid->mid_flags |= MID_DELETED;
++ }
+ spin_unlock(&GlobalMid_Lock);
+
+ DeleteMidQEntry(mid);
+@@ -868,7 +869,10 @@ cifs_sync_mid_result(struct mid_q_entry *mid, struct TCP_Server_Info *server)
+ rc = -EHOSTDOWN;
+ break;
+ default:
+- list_del_init(&mid->qhead);
++ if (!(mid->mid_flags & MID_DELETED)) {
++ list_del_init(&mid->qhead);
++ mid->mid_flags |= MID_DELETED;
++ }
+ cifs_dbg(VFS, "%s: invalid mid state mid=%llu state=%d\n",
+ __func__, mid->mid, mid->mid_state);
+ rc = -EIO;
+diff --git a/include/linux/gfp.h b/include/linux/gfp.h
+index f33881688f42..ff1c96b8ae92 100644
+--- a/include/linux/gfp.h
++++ b/include/linux/gfp.h
+@@ -325,6 +325,29 @@ static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
+ return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
+ }
+
++/**
++ * gfpflags_normal_context - is gfp_flags a normal sleepable context?
++ * @gfp_flags: gfp_flags to test
++ *
++ * Test whether @gfp_flags indicates that the allocation is from the
++ * %current context and allowed to sleep.
++ *
++ * An allocation being allowed to block doesn't mean it owns the %current
++ * context. When direct reclaim path tries to allocate memory, the
++ * allocation context is nested inside whatever %current was doing at the
++ * time of the original allocation. The nested allocation may be allowed
++ * to block but modifying anything %current owns can corrupt the outer
++ * context's expectations.
++ *
++ * %true result from this function indicates that the allocation context
++ * can sleep and use anything that's associated with %current.
++ */
++static inline bool gfpflags_normal_context(const gfp_t gfp_flags)
++{
++ return (gfp_flags & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC)) ==
++ __GFP_DIRECT_RECLAIM;
++}
++
+ #ifdef CONFIG_HIGHMEM
+ #define OPT_ZONE_HIGHMEM ZONE_HIGHMEM
+ #else
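Note: gfpflags_normal_context() above is a two-flag test: the allocation must be allowed to sleep (__GFP_DIRECT_RECLAIM set) and must not itself be a reclaim-nested one (__GFP_MEMALLOC clear). Stand-alone check of the expression, with flag values copied from this kernel's gfp.h and the leading underscores dropped to keep the sketch's identifiers unreserved:

#include <stdio.h>

#define GFP_DIRECT_RECLAIM 0x400u
#define GFP_MEMALLOC       0x20000u

static int normal_context(unsigned int gfp)
{
    return (gfp & (GFP_DIRECT_RECLAIM | GFP_MEMALLOC)) ==
           GFP_DIRECT_RECLAIM;
}

int main(void)
{
    printf("%d %d %d\n",
           normal_context(GFP_DIRECT_RECLAIM),                 /* 1 */
           normal_context(GFP_DIRECT_RECLAIM | GFP_MEMALLOC),  /* 0 */
           normal_context(0));                                 /* 0 */
    return 0;
}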
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index b8b570c30b5e..e4b323e4db8f 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -1437,9 +1437,8 @@ struct mlx5_ifc_extended_dest_format_bits {
+ };
+
+ union mlx5_ifc_dest_format_struct_flow_counter_list_auto_bits {
+- struct mlx5_ifc_dest_format_struct_bits dest_format_struct;
++ struct mlx5_ifc_extended_dest_format_bits extended_dest_format;
+ struct mlx5_ifc_flow_counter_list_bits flow_counter_list;
+- u8 reserved_at_0[0x40];
+ };
+
+ struct mlx5_ifc_fte_match_param_bits {
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 9b18d33681c2..7647beaac2d2 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -1360,7 +1360,8 @@ static inline __u32 skb_get_hash_flowi6(struct sk_buff *skb, const struct flowi6
+ return skb->hash;
+ }
+
+-__u32 skb_get_hash_perturb(const struct sk_buff *skb, u32 perturb);
++__u32 skb_get_hash_perturb(const struct sk_buff *skb,
++ const siphash_key_t *perturb);
+
+ static inline __u32 skb_get_hash_raw(const struct sk_buff *skb)
+ {
+@@ -1500,6 +1501,19 @@ static inline int skb_queue_empty(const struct sk_buff_head *list)
+ return list->next == (const struct sk_buff *) list;
+ }
+
++/**
++ * skb_queue_empty_lockless - check if a queue is empty
++ * @list: queue head
++ *
++ * Returns true if the queue is empty, false otherwise.
++ * This variant can be used in lockless contexts.
++ */
++static inline bool skb_queue_empty_lockless(const struct sk_buff_head *list)
++{
++ return READ_ONCE(list->next) == (const struct sk_buff *) list;
++}
++
++
+ /**
+ * skb_queue_is_last - check if skb is the last entry in the queue
+ * @list: queue head
+@@ -1853,9 +1867,11 @@ static inline void __skb_insert(struct sk_buff *newsk,
+ struct sk_buff *prev, struct sk_buff *next,
+ struct sk_buff_head *list)
+ {
+- newsk->next = next;
+- newsk->prev = prev;
+- next->prev = prev->next = newsk;
++ /* see skb_queue_empty_lockless() for the opposite READ_ONCE() */
++ WRITE_ONCE(newsk->next, next);
++ WRITE_ONCE(newsk->prev, prev);
++ WRITE_ONCE(next->prev, newsk);
++ WRITE_ONCE(prev->next, newsk);
+ list->qlen++;
+ }
+
+@@ -1866,11 +1882,11 @@ static inline void __skb_queue_splice(const struct sk_buff_head *list,
+ struct sk_buff *first = list->next;
+ struct sk_buff *last = list->prev;
+
+- first->prev = prev;
+- prev->next = first;
++ WRITE_ONCE(first->prev, prev);
++ WRITE_ONCE(prev->next, first);
+
+- last->next = next;
+- next->prev = last;
++ WRITE_ONCE(last->next, next);
++ WRITE_ONCE(next->prev, last);
+ }
+
+ /**
+@@ -2011,8 +2027,8 @@ static inline void __skb_unlink(struct sk_buff *skb, struct sk_buff_head *list)
+ next = skb->next;
+ prev = skb->prev;
+ skb->next = skb->prev = NULL;
+- next->prev = prev;
+- prev->next = next;
++ WRITE_ONCE(next->prev, prev);
++ WRITE_ONCE(prev->next, next);
+ }
+
+ /**
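Note: the skbuff.h hunks pair every lockless reader (skb_queue_empty_lockless(), via READ_ONCE()) with WRITE_ONCE() stores on the writer side, so each pointer is read and written whole rather than torn or cached by the compiler; the reader can still race, but it gets a coherent answer. Reduced sketch of the pairing (simplified macros, illustrative):

#define READ_ONCE(x)     (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

struct skb_head { struct skb_head *next, *prev; };

static int queue_empty_lockless(const struct skb_head *list)
{
    return READ_ONCE(list->next) == list;
}

static void queue_insert(struct skb_head *n, struct skb_head *prev,
                         struct skb_head *next)
{
    WRITE_ONCE(n->next, next);
    WRITE_ONCE(n->prev, prev);
    WRITE_ONCE(next->prev, n);
    WRITE_ONCE(prev->next, n);
}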
+diff --git a/include/net/busy_poll.h b/include/net/busy_poll.h
+index 127a5c4e3699..86e028388bad 100644
+--- a/include/net/busy_poll.h
++++ b/include/net/busy_poll.h
+@@ -122,7 +122,7 @@ static inline void skb_mark_napi_id(struct sk_buff *skb,
+ static inline void sk_mark_napi_id(struct sock *sk, const struct sk_buff *skb)
+ {
+ #ifdef CONFIG_NET_RX_BUSY_POLL
+- sk->sk_napi_id = skb->napi_id;
++ WRITE_ONCE(sk->sk_napi_id, skb->napi_id);
+ #endif
+ sk_rx_queue_set(sk, skb);
+ }
+@@ -132,8 +132,8 @@ static inline void sk_mark_napi_id_once(struct sock *sk,
+ const struct sk_buff *skb)
+ {
+ #ifdef CONFIG_NET_RX_BUSY_POLL
+- if (!sk->sk_napi_id)
+- sk->sk_napi_id = skb->napi_id;
++ if (!READ_ONCE(sk->sk_napi_id))
++ WRITE_ONCE(sk->sk_napi_id, skb->napi_id);
+ #endif
+ }
+
+diff --git a/include/net/flow_dissector.h b/include/net/flow_dissector.h
+index 90bd210be060..5cd12276ae21 100644
+--- a/include/net/flow_dissector.h
++++ b/include/net/flow_dissector.h
+@@ -4,6 +4,7 @@
+
+ #include <linux/types.h>
+ #include <linux/in6.h>
++#include <linux/siphash.h>
+ #include <uapi/linux/if_ether.h>
+
+ /**
+@@ -276,7 +277,7 @@ struct flow_keys_basic {
+ struct flow_keys {
+ struct flow_dissector_key_control control;
+ #define FLOW_KEYS_HASH_START_FIELD basic
+- struct flow_dissector_key_basic basic;
++ struct flow_dissector_key_basic basic __aligned(SIPHASH_ALIGNMENT);
+ struct flow_dissector_key_tags tags;
+ struct flow_dissector_key_vlan vlan;
+ struct flow_dissector_key_vlan cvlan;
+diff --git a/include/net/fq.h b/include/net/fq.h
+index d126b5d20261..2ad85e683041 100644
+--- a/include/net/fq.h
++++ b/include/net/fq.h
+@@ -69,7 +69,7 @@ struct fq {
+ struct list_head backlogs;
+ spinlock_t lock;
+ u32 flows_cnt;
+- u32 perturbation;
++ siphash_key_t perturbation;
+ u32 limit;
+ u32 memory_limit;
+ u32 memory_usage;
+diff --git a/include/net/fq_impl.h b/include/net/fq_impl.h
+index be40a4b327e3..107c0d700ed6 100644
+--- a/include/net/fq_impl.h
++++ b/include/net/fq_impl.h
+@@ -108,7 +108,7 @@ begin:
+
+ static u32 fq_flow_idx(struct fq *fq, struct sk_buff *skb)
+ {
+- u32 hash = skb_get_hash_perturb(skb, fq->perturbation);
++ u32 hash = skb_get_hash_perturb(skb, &fq->perturbation);
+
+ return reciprocal_scale(hash, fq->flows_cnt);
+ }
+@@ -308,7 +308,7 @@ static int fq_init(struct fq *fq, int flows_cnt)
+ INIT_LIST_HEAD(&fq->backlogs);
+ spin_lock_init(&fq->lock);
+ fq->flows_cnt = max_t(u32, flows_cnt, 1);
+- fq->perturbation = prandom_u32();
++ get_random_bytes(&fq->perturbation, sizeof(fq->perturbation));
+ fq->quantum = 300;
+ fq->limit = 8192;
+ fq->memory_limit = 16 << 20; /* 16 MBytes */
+diff --git a/include/net/ip.h b/include/net/ip.h
+index 29d89de39822..e6609ab69161 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -184,7 +184,7 @@ static inline struct sk_buff *ip_fraglist_next(struct ip_fraglist_iter *iter)
+ }
+
+ struct ip_frag_state {
+- struct iphdr *iph;
++ bool DF;
+ unsigned int hlen;
+ unsigned int ll_rs;
+ unsigned int mtu;
+@@ -195,7 +195,7 @@ struct ip_frag_state {
+ };
+
+ void ip_frag_init(struct sk_buff *skb, unsigned int hlen, unsigned int ll_rs,
+- unsigned int mtu, struct ip_frag_state *state);
++ unsigned int mtu, bool DF, struct ip_frag_state *state);
+ struct sk_buff *ip_frag_next(struct sk_buff *skb,
+ struct ip_frag_state *state);
+
+diff --git a/include/net/net_namespace.h b/include/net/net_namespace.h
+index ab40d7afdc54..8f8b37198f9b 100644
+--- a/include/net/net_namespace.h
++++ b/include/net/net_namespace.h
+@@ -52,6 +52,9 @@ struct bpf_prog;
+ #define NETDEV_HASHENTRIES (1 << NETDEV_HASHBITS)
+
+ struct net {
++ /* The first cache line is often dirtied.
++ * Do not place read-mostly fields here.
++ */
+ refcount_t passive; /* To decide when the network
+ * namespace should be freed.
+ */
+@@ -60,7 +63,13 @@ struct net {
+ */
+ spinlock_t rules_mod_lock;
+
+- u32 hash_mix;
++ unsigned int dev_unreg_count;
++
++ unsigned int dev_base_seq; /* protected by rtnl_mutex */
++ int ifindex;
++
++ spinlock_t nsid_lock;
++ atomic_t fnhe_genid;
+
+ struct list_head list; /* list of network namespaces */
+ struct list_head exit_list; /* To linked to call pernet exit
+@@ -76,11 +85,11 @@ struct net {
+ #endif
+ struct user_namespace *user_ns; /* Owning user namespace */
+ struct ucounts *ucounts;
+- spinlock_t nsid_lock;
+ struct idr netns_ids;
+
+ struct ns_common ns;
+
++ struct list_head dev_base_head;
+ struct proc_dir_entry *proc_net;
+ struct proc_dir_entry *proc_net_stat;
+
+@@ -93,12 +102,14 @@ struct net {
+
+ struct uevent_sock *uevent_sock; /* uevent socket */
+
+- struct list_head dev_base_head;
+ struct hlist_head *dev_name_head;
+ struct hlist_head *dev_index_head;
+- unsigned int dev_base_seq; /* protected by rtnl_mutex */
+- int ifindex;
+- unsigned int dev_unreg_count;
++ /* Note that @hash_mix can be read millions of times per second,
++ * it is critical that it is on a read_mostly cache line.
++ */
++ u32 hash_mix;
++
++ struct net_device *loopback_dev; /* The loopback */
+
+ /* core fib_rules */
+ struct list_head rules_ops;
+@@ -106,7 +117,6 @@ struct net {
+ struct list_head fib_notifier_ops; /* Populated by
+ * register_pernet_subsys()
+ */
+- struct net_device *loopback_dev; /* The loopback */
+ struct netns_core core;
+ struct netns_mib mib;
+ struct netns_packet packet;
+@@ -171,7 +181,6 @@ struct net {
+ struct netns_xdp xdp;
+ #endif
+ struct sock *diag_nlsk;
+- atomic_t fnhe_genid;
+ } __randomize_layout;
+
+ #include <linux/seq_file_net.h>
+@@ -333,7 +342,7 @@ static inline struct net *read_pnet(const possible_net_t *pnet)
+ #define __net_initconst __initconst
+ #endif
+
+-int peernet2id_alloc(struct net *net, struct net *peer);
++int peernet2id_alloc(struct net *net, struct net *peer, gfp_t gfp);
+ int peernet2id(struct net *net, struct net *peer);
+ bool peernet_has_id(struct net *net, struct net *peer);
+ struct net *get_net_ns_by_id(struct net *net, int id);
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 2c53f1a1d905..b03f96370f8e 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -949,8 +949,8 @@ static inline void sk_incoming_cpu_update(struct sock *sk)
+ {
+ int cpu = raw_smp_processor_id();
+
+- if (unlikely(sk->sk_incoming_cpu != cpu))
+- sk->sk_incoming_cpu = cpu;
++ if (unlikely(READ_ONCE(sk->sk_incoming_cpu) != cpu))
++ WRITE_ONCE(sk->sk_incoming_cpu, cpu);
+ }
+
+ static inline void sock_rps_record_flow_hash(__u32 hash)
+@@ -2233,12 +2233,17 @@ struct sk_buff *sk_stream_alloc_skb(struct sock *sk, int size, gfp_t gfp,
+ * sk_page_frag - return an appropriate page_frag
+ * @sk: socket
+ *
+- * If socket allocation mode allows current thread to sleep, it means its
+- * safe to use the per task page_frag instead of the per socket one.
++ * Use the per task page_frag instead of the per socket one for
+ * optimization when we know that we're in the normal context and own
++ * everything that's associated with %current.
++ *
++ * gfpflags_allow_blocking() isn't enough here as direct reclaim may nest
++ * inside other socket operations and end up recursing into sk_page_frag()
++ * while it's already in use.
+ */
+ static inline struct page_frag *sk_page_frag(struct sock *sk)
+ {
+- if (gfpflags_allow_blocking(sk->sk_allocation))
++ if (gfpflags_normal_context(sk->sk_allocation))
+ return &current->task_frag;
+
+ return &sk->sk_frag;
+diff --git a/include/sound/simple_card_utils.h b/include/sound/simple_card_utils.h
+index 985a5f583de4..31f76b6abf71 100644
+--- a/include/sound/simple_card_utils.h
++++ b/include/sound/simple_card_utils.h
+@@ -135,9 +135,9 @@ int asoc_simple_init_priv(struct asoc_simple_priv *priv,
+ struct link_info *li);
+
+ #ifdef DEBUG
+-inline void asoc_simple_debug_dai(struct asoc_simple_priv *priv,
+- char *name,
+- struct asoc_simple_dai *dai)
++static inline void asoc_simple_debug_dai(struct asoc_simple_priv *priv,
++ char *name,
++ struct asoc_simple_dai *dai)
+ {
+ struct device *dev = simple_priv_to_dev(priv);
+
+@@ -167,7 +167,7 @@ inline void asoc_simple_debug_dai(struct asoc_simple_priv *priv,
+ dev_dbg(dev, "%s clk %luHz\n", name, clk_get_rate(dai->clk));
+ }
+
+-inline void asoc_simple_debug_info(struct asoc_simple_priv *priv)
++static inline void asoc_simple_debug_info(struct asoc_simple_priv *priv)
+ {
+ struct snd_soc_card *card = simple_priv_to_card(priv);
+ struct device *dev = simple_priv_to_dev(priv);
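Note: the simple_card_utils.h fix above matters because a plain 'inline' function in a header has external linkage in C99: any translation unit where the compiler declines to inline it needs an external definition that nothing provides, producing link failures (or multiple-definition errors under GNU89 inline semantics). 'static inline' gives every includer its own internal copy, the idiom used throughout the kernel headers:

/* each includer gets a private definition; no link-time symbol needed */
static inline int example_helper(int x)
{
    return x * 2;
}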
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index dd310d3b5843..725b9b35f933 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -674,6 +674,8 @@ static bool synth_field_signed(char *type)
+ {
+ if (str_has_prefix(type, "u"))
+ return false;
++ if (strcmp(type, "gfp_t") == 0)
++ return false;
+
+ return true;
+ }
+diff --git a/net/atm/common.c b/net/atm/common.c
+index b7528e77997c..0ce530af534d 100644
+--- a/net/atm/common.c
++++ b/net/atm/common.c
+@@ -668,7 +668,7 @@ __poll_t vcc_poll(struct file *file, struct socket *sock, poll_table *wait)
+ mask |= EPOLLHUP;
+
+ /* readable? */
+- if (!skb_queue_empty(&sk->sk_receive_queue))
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+ mask |= EPOLLIN | EPOLLRDNORM;
+
+ /* writable? */
+diff --git a/net/bluetooth/af_bluetooth.c b/net/bluetooth/af_bluetooth.c
+index 94ddf19998c7..5f508c50649d 100644
+--- a/net/bluetooth/af_bluetooth.c
++++ b/net/bluetooth/af_bluetooth.c
+@@ -460,7 +460,7 @@ __poll_t bt_sock_poll(struct file *file, struct socket *sock,
+ if (sk->sk_state == BT_LISTEN)
+ return bt_accept_poll(sk);
+
+- if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue))
++ if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
+ mask |= EPOLLERR |
+ (sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0);
+
+@@ -470,7 +470,7 @@ __poll_t bt_sock_poll(struct file *file, struct socket *sock,
+ if (sk->sk_shutdown == SHUTDOWN_MASK)
+ mask |= EPOLLHUP;
+
+- if (!skb_queue_empty(&sk->sk_receive_queue))
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+ mask |= EPOLLIN | EPOLLRDNORM;
+
+ if (sk->sk_state == BT_CLOSED)
+diff --git a/net/bridge/netfilter/nf_conntrack_bridge.c b/net/bridge/netfilter/nf_conntrack_bridge.c
+index 4f5444d2a526..a48cb1baeac6 100644
+--- a/net/bridge/netfilter/nf_conntrack_bridge.c
++++ b/net/bridge/netfilter/nf_conntrack_bridge.c
+@@ -34,6 +34,7 @@ static int nf_br_ip_fragment(struct net *net, struct sock *sk,
+ {
+ int frag_max_size = BR_INPUT_SKB_CB(skb)->frag_max_size;
+ unsigned int hlen, ll_rs, mtu;
++ ktime_t tstamp = skb->tstamp;
+ struct ip_frag_state state;
+ struct iphdr *iph;
+ int err;
+@@ -81,6 +82,7 @@ static int nf_br_ip_fragment(struct net *net, struct sock *sk,
+ if (iter.frag)
+ ip_fraglist_prepare(skb, &iter);
+
++ skb->tstamp = tstamp;
+ err = output(net, sk, data, skb);
+ if (err || !iter.frag)
+ break;
+@@ -94,7 +96,7 @@ slow_path:
+ * This may also be a clone skbuff, we could preserve the geometry for
+ * the copies but probably not worth the effort.
+ */
+- ip_frag_init(skb, hlen, ll_rs, frag_max_size, &state);
++ ip_frag_init(skb, hlen, ll_rs, frag_max_size, false, &state);
+
+ while (state.left > 0) {
+ struct sk_buff *skb2;
+@@ -105,6 +107,7 @@ slow_path:
+ goto blackhole;
+ }
+
++ skb2->tstamp = tstamp;
+ err = output(net, sk, data, skb2);
+ if (err)
+ goto blackhole;
+diff --git a/net/caif/caif_socket.c b/net/caif/caif_socket.c
+index 13ea920600ae..ef14da50a981 100644
+--- a/net/caif/caif_socket.c
++++ b/net/caif/caif_socket.c
+@@ -953,7 +953,7 @@ static __poll_t caif_poll(struct file *file,
+ mask |= EPOLLRDHUP;
+
+ /* readable? */
+- if (!skb_queue_empty(&sk->sk_receive_queue) ||
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue) ||
+ (sk->sk_shutdown & RCV_SHUTDOWN))
+ mask |= EPOLLIN | EPOLLRDNORM;
+
+diff --git a/net/core/datagram.c b/net/core/datagram.c
+index 45a162ef5e02..5dc112ec7286 100644
+--- a/net/core/datagram.c
++++ b/net/core/datagram.c
+@@ -97,7 +97,7 @@ int __skb_wait_for_more_packets(struct sock *sk, int *err, long *timeo_p,
+ if (error)
+ goto out_err;
+
+- if (sk->sk_receive_queue.prev != skb)
++ if (READ_ONCE(sk->sk_receive_queue.prev) != skb)
+ goto out;
+
+ /* Socket shut down? */
+@@ -278,7 +278,7 @@ struct sk_buff *__skb_try_recv_datagram(struct sock *sk, unsigned int flags,
+ break;
+
+ sk_busy_loop(sk, flags & MSG_DONTWAIT);
+- } while (sk->sk_receive_queue.prev != *last);
++ } while (READ_ONCE(sk->sk_receive_queue.prev) != *last);
+
+ error = -EAGAIN;
+
+@@ -767,7 +767,7 @@ __poll_t datagram_poll(struct file *file, struct socket *sock,
+ mask = 0;
+
+ /* exceptional events? */
+- if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue))
++ if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
+ mask |= EPOLLERR |
+ (sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0);
+
+@@ -777,7 +777,7 @@ __poll_t datagram_poll(struct file *file, struct socket *sock,
+ mask |= EPOLLHUP;
+
+ /* readable? */
+- if (!skb_queue_empty(&sk->sk_receive_queue))
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+ mask |= EPOLLIN | EPOLLRDNORM;
+
+ /* Connection-based need to check for termination and startup */
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 4ed9df74eb8a..33b278b826b5 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -9411,7 +9411,7 @@ int dev_change_net_namespace(struct net_device *dev, struct net *net, const char
+ call_netdevice_notifiers(NETDEV_UNREGISTER, dev);
+ rcu_barrier();
+
+- new_nsid = peernet2id_alloc(dev_net(dev), net);
++ new_nsid = peernet2id_alloc(dev_net(dev), net, GFP_KERNEL);
+ /* If there is an ifindex conflict assign a new one */
+ if (__dev_get_by_index(net, dev->ifindex))
+ new_ifindex = dev_new_index(net);
+diff --git a/net/core/ethtool.c b/net/core/ethtool.c
+index 6288e69e94fc..563a48c3df36 100644
+--- a/net/core/ethtool.c
++++ b/net/core/ethtool.c
+@@ -1395,11 +1395,13 @@ static int ethtool_reset(struct net_device *dev, char __user *useraddr)
+
+ static int ethtool_get_wol(struct net_device *dev, char __user *useraddr)
+ {
+- struct ethtool_wolinfo wol = { .cmd = ETHTOOL_GWOL };
++ struct ethtool_wolinfo wol;
+
+ if (!dev->ethtool_ops->get_wol)
+ return -EOPNOTSUPP;
+
++ memset(&wol, 0, sizeof(struct ethtool_wolinfo));
++ wol.cmd = ETHTOOL_GWOL;
+ dev->ethtool_ops->get_wol(dev, &wol);
+
+ if (copy_to_user(useraddr, &wol, sizeof(wol)))
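Note: the ethtool change above is an infoleak fix: a stack struct copied wholesale to userspace leaks stale bytes in padding and in members the driver callback never wrote, so the buffer is zeroed first. Userspace stand-in for the pattern (fwrite playing the role of copy_to_user; struct layout hypothetical):

#include <stdio.h>
#include <string.h>

struct wolinfo { unsigned int cmd; unsigned int supported; char pad[6]; };

int main(void)
{
    struct wolinfo wol;

    memset(&wol, 0, sizeof(wol));   /* the added lines: no stale bytes */
    wol.cmd = 0x5;                  /* ETHTOOL_GWOL */
    /* dev->ethtool_ops->get_wol(dev, &wol) would fill in the rest */
    fwrite(&wol, sizeof(wol), 1, stdout);
    return 0;
}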
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index 2470b4b404e6..2f5326a82465 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -1333,30 +1333,21 @@ out_bad:
+ }
+ EXPORT_SYMBOL(__skb_flow_dissect);
+
+-static u32 hashrnd __read_mostly;
++static siphash_key_t hashrnd __read_mostly;
+ static __always_inline void __flow_hash_secret_init(void)
+ {
+ net_get_random_once(&hashrnd, sizeof(hashrnd));
+ }
+
+-static __always_inline u32 __flow_hash_words(const u32 *words, u32 length,
+- u32 keyval)
++static const void *flow_keys_hash_start(const struct flow_keys *flow)
+ {
+- return jhash2(words, length, keyval);
+-}
+-
+-static inline const u32 *flow_keys_hash_start(const struct flow_keys *flow)
+-{
+- const void *p = flow;
+-
+- BUILD_BUG_ON(FLOW_KEYS_HASH_OFFSET % sizeof(u32));
+- return (const u32 *)(p + FLOW_KEYS_HASH_OFFSET);
++ BUILD_BUG_ON(FLOW_KEYS_HASH_OFFSET % SIPHASH_ALIGNMENT);
++ return &flow->FLOW_KEYS_HASH_START_FIELD;
+ }
+
+ static inline size_t flow_keys_hash_length(const struct flow_keys *flow)
+ {
+ size_t diff = FLOW_KEYS_HASH_OFFSET + sizeof(flow->addrs);
+- BUILD_BUG_ON((sizeof(*flow) - FLOW_KEYS_HASH_OFFSET) % sizeof(u32));
+ BUILD_BUG_ON(offsetof(typeof(*flow), addrs) !=
+ sizeof(*flow) - sizeof(flow->addrs));
+
+@@ -1371,7 +1362,7 @@ static inline size_t flow_keys_hash_length(const struct flow_keys *flow)
+ diff -= sizeof(flow->addrs.tipckey);
+ break;
+ }
+- return (sizeof(*flow) - diff) / sizeof(u32);
++ return sizeof(*flow) - diff;
+ }
+
+ __be32 flow_get_u32_src(const struct flow_keys *flow)
+@@ -1437,14 +1428,15 @@ static inline void __flow_hash_consistentify(struct flow_keys *keys)
+ }
+ }
+
+-static inline u32 __flow_hash_from_keys(struct flow_keys *keys, u32 keyval)
++static inline u32 __flow_hash_from_keys(struct flow_keys *keys,
++ const siphash_key_t *keyval)
+ {
+ u32 hash;
+
+ __flow_hash_consistentify(keys);
+
+- hash = __flow_hash_words(flow_keys_hash_start(keys),
+- flow_keys_hash_length(keys), keyval);
++ hash = siphash(flow_keys_hash_start(keys),
++ flow_keys_hash_length(keys), keyval);
+ if (!hash)
+ hash = 1;
+
+@@ -1454,12 +1446,13 @@ static inline u32 __flow_hash_from_keys(struct flow_keys *keys, u32 keyval)
+ u32 flow_hash_from_keys(struct flow_keys *keys)
+ {
+ __flow_hash_secret_init();
+- return __flow_hash_from_keys(keys, hashrnd);
++ return __flow_hash_from_keys(keys, &hashrnd);
+ }
+ EXPORT_SYMBOL(flow_hash_from_keys);
+
+ static inline u32 ___skb_get_hash(const struct sk_buff *skb,
+- struct flow_keys *keys, u32 keyval)
++ struct flow_keys *keys,
++ const siphash_key_t *keyval)
+ {
+ skb_flow_dissect_flow_keys(skb, keys,
+ FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL);
+@@ -1507,7 +1500,7 @@ u32 __skb_get_hash_symmetric(const struct sk_buff *skb)
+ &keys, NULL, 0, 0, 0,
+ FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL);
+
+- return __flow_hash_from_keys(&keys, hashrnd);
++ return __flow_hash_from_keys(&keys, &hashrnd);
+ }
+ EXPORT_SYMBOL_GPL(__skb_get_hash_symmetric);
+
+@@ -1527,13 +1520,14 @@ void __skb_get_hash(struct sk_buff *skb)
+
+ __flow_hash_secret_init();
+
+- hash = ___skb_get_hash(skb, &keys, hashrnd);
++ hash = ___skb_get_hash(skb, &keys, &hashrnd);
+
+ __skb_set_sw_hash(skb, hash, flow_keys_have_l4(&keys));
+ }
+ EXPORT_SYMBOL(__skb_get_hash);
+
+-__u32 skb_get_hash_perturb(const struct sk_buff *skb, u32 perturb)
++__u32 skb_get_hash_perturb(const struct sk_buff *skb,
++ const siphash_key_t *perturb)
+ {
+ struct flow_keys keys;
+
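Note: this flow_dissector conversion swaps jhash2() over u32 words for siphash() over bytes, turning the per-boot random value into a real cryptographic key so remote senders cannot force hash collisions; hence the __aligned(SIPHASH_ALIGNMENT) on the hash start field and the byte (not word) length. Kernel-style sketch of the resulting call pattern (illustrative only):

#include <linux/net.h>
#include <linux/siphash.h>

static siphash_key_t hashrnd __read_mostly;

static u32 example_flow_hash(const void *start, size_t bytes)
{
    net_get_random_once(&hashrnd, sizeof(hashrnd));
    /* keyed hash; the u64 result is truncated to u32 as in the patch */
    return siphash(start, bytes, &hashrnd);
}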
+diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
+index a0e0d298c991..87c32ab63304 100644
+--- a/net/core/net_namespace.c
++++ b/net/core/net_namespace.c
+@@ -245,11 +245,11 @@ static int __peernet2id(struct net *net, struct net *peer)
+ return __peernet2id_alloc(net, peer, &no);
+ }
+
+-static void rtnl_net_notifyid(struct net *net, int cmd, int id);
++static void rtnl_net_notifyid(struct net *net, int cmd, int id, gfp_t gfp);
+ /* This function returns the id of a peer netns. If no id is assigned, one will
+ * be allocated and returned.
+ */
+-int peernet2id_alloc(struct net *net, struct net *peer)
++int peernet2id_alloc(struct net *net, struct net *peer, gfp_t gfp)
+ {
+ bool alloc = false, alive = false;
+ int id;
+@@ -268,7 +268,7 @@ int peernet2id_alloc(struct net *net, struct net *peer)
+ id = __peernet2id_alloc(net, peer, &alloc);
+ spin_unlock_bh(&net->nsid_lock);
+ if (alloc && id >= 0)
+- rtnl_net_notifyid(net, RTM_NEWNSID, id);
++ rtnl_net_notifyid(net, RTM_NEWNSID, id, gfp);
+ if (alive)
+ put_net(peer);
+ return id;
+@@ -478,6 +478,7 @@ struct net *copy_net_ns(unsigned long flags,
+
+ if (rv < 0) {
+ put_userns:
++ key_remove_domain(net->key_domain);
+ put_user_ns(user_ns);
+ net_drop_ns(net);
+ dec_ucounts:
+@@ -532,7 +533,8 @@ static void unhash_nsid(struct net *net, struct net *last)
+ idr_remove(&tmp->netns_ids, id);
+ spin_unlock_bh(&tmp->nsid_lock);
+ if (id >= 0)
+- rtnl_net_notifyid(tmp, RTM_DELNSID, id);
++ rtnl_net_notifyid(tmp, RTM_DELNSID, id,
++ GFP_KERNEL);
+ if (tmp == last)
+ break;
+ }
+@@ -764,7 +766,7 @@ static int rtnl_net_newid(struct sk_buff *skb, struct nlmsghdr *nlh,
+ err = alloc_netid(net, peer, nsid);
+ spin_unlock_bh(&net->nsid_lock);
+ if (err >= 0) {
+- rtnl_net_notifyid(net, RTM_NEWNSID, err);
++ rtnl_net_notifyid(net, RTM_NEWNSID, err, GFP_KERNEL);
+ err = 0;
+ } else if (err == -ENOSPC && nsid >= 0) {
+ err = -EEXIST;
+@@ -1051,7 +1053,7 @@ end:
+ return err < 0 ? err : skb->len;
+ }
+
+-static void rtnl_net_notifyid(struct net *net, int cmd, int id)
++static void rtnl_net_notifyid(struct net *net, int cmd, int id, gfp_t gfp)
+ {
+ struct net_fill_args fillargs = {
+ .cmd = cmd,
+@@ -1060,7 +1062,7 @@ static void rtnl_net_notifyid(struct net *net, int cmd, int id)
+ struct sk_buff *msg;
+ int err = -ENOMEM;
+
+- msg = nlmsg_new(rtnl_net_get_size(), GFP_KERNEL);
++ msg = nlmsg_new(rtnl_net_get_size(), gfp);
+ if (!msg)
+ goto out;
+
+@@ -1068,7 +1070,7 @@ static void rtnl_net_notifyid(struct net *net, int cmd, int id)
+ if (err < 0)
+ goto err_out;
+
+- rtnl_notify(msg, net, 0, RTNLGRP_NSID, NULL, 0);
++ rtnl_notify(msg, net, 0, RTNLGRP_NSID, NULL, gfp);
+ return;
+
+ err_out:
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 1ee6460f8275..868a768f7300 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -1523,7 +1523,7 @@ static noinline_for_stack int nla_put_ifalias(struct sk_buff *skb,
+
+ static int rtnl_fill_link_netnsid(struct sk_buff *skb,
+ const struct net_device *dev,
+- struct net *src_net)
++ struct net *src_net, gfp_t gfp)
+ {
+ bool put_iflink = false;
+
+@@ -1531,7 +1531,7 @@ static int rtnl_fill_link_netnsid(struct sk_buff *skb,
+ struct net *link_net = dev->rtnl_link_ops->get_link_net(dev);
+
+ if (!net_eq(dev_net(dev), link_net)) {
+- int id = peernet2id_alloc(src_net, link_net);
++ int id = peernet2id_alloc(src_net, link_net, gfp);
+
+ if (nla_put_s32(skb, IFLA_LINK_NETNSID, id))
+ return -EMSGSIZE;
+@@ -1589,7 +1589,7 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb,
+ int type, u32 pid, u32 seq, u32 change,
+ unsigned int flags, u32 ext_filter_mask,
+ u32 event, int *new_nsid, int new_ifindex,
+- int tgt_netnsid)
++ int tgt_netnsid, gfp_t gfp)
+ {
+ struct ifinfomsg *ifm;
+ struct nlmsghdr *nlh;
+@@ -1681,7 +1681,7 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb,
+ goto nla_put_failure;
+ }
+
+- if (rtnl_fill_link_netnsid(skb, dev, src_net))
++ if (rtnl_fill_link_netnsid(skb, dev, src_net, gfp))
+ goto nla_put_failure;
+
+ if (new_nsid &&
+@@ -2001,7 +2001,7 @@ walk_entries:
+ NETLINK_CB(cb->skb).portid,
+ nlh->nlmsg_seq, 0, flags,
+ ext_filter_mask, 0, NULL, 0,
+- netnsid);
++ netnsid, GFP_KERNEL);
+
+ if (err < 0) {
+ if (likely(skb->len))
+@@ -3359,7 +3359,7 @@ static int rtnl_getlink(struct sk_buff *skb, struct nlmsghdr *nlh,
+ err = rtnl_fill_ifinfo(nskb, dev, net,
+ RTM_NEWLINK, NETLINK_CB(skb).portid,
+ nlh->nlmsg_seq, 0, 0, ext_filter_mask,
+- 0, NULL, 0, netnsid);
++ 0, NULL, 0, netnsid, GFP_KERNEL);
+ if (err < 0) {
+ /* -EMSGSIZE implies BUG in if_nlmsg_size */
+ WARN_ON(err == -EMSGSIZE);
+@@ -3471,7 +3471,7 @@ struct sk_buff *rtmsg_ifinfo_build_skb(int type, struct net_device *dev,
+
+ err = rtnl_fill_ifinfo(skb, dev, dev_net(dev),
+ type, 0, 0, change, 0, 0, event,
+- new_nsid, new_ifindex, -1);
++ new_nsid, new_ifindex, -1, flags);
+ if (err < 0) {
+ /* -EMSGSIZE implies BUG in if_nlmsg_size() */
+ WARN_ON(err == -EMSGSIZE);
+@@ -3916,7 +3916,7 @@ static int valid_fdb_dump_strict(const struct nlmsghdr *nlh,
+ ndm = nlmsg_data(nlh);
+ if (ndm->ndm_pad1 || ndm->ndm_pad2 || ndm->ndm_state ||
+ ndm->ndm_flags || ndm->ndm_type) {
+- NL_SET_ERR_MSG(extack, "Invalid values in header for fbd dump request");
++ NL_SET_ERR_MSG(extack, "Invalid values in header for fdb dump request");
+ return -EINVAL;
+ }
+
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 3aa93af51d48..b4247635c4a2 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -1125,7 +1125,7 @@ set_rcvbuf:
+ break;
+ }
+ case SO_INCOMING_CPU:
+- sk->sk_incoming_cpu = val;
++ WRITE_ONCE(sk->sk_incoming_cpu, val);
+ break;
+
+ case SO_CNX_ADVICE:
+@@ -1474,7 +1474,7 @@ int sock_getsockopt(struct socket *sock, int level, int optname,
+ break;
+
+ case SO_INCOMING_CPU:
+- v.val = sk->sk_incoming_cpu;
++ v.val = READ_ONCE(sk->sk_incoming_cpu);
+ break;
+
+ case SO_MEMINFO:
+@@ -3593,7 +3593,7 @@ bool sk_busy_loop_end(void *p, unsigned long start_time)
+ {
+ struct sock *sk = p;
+
+- return !skb_queue_empty(&sk->sk_receive_queue) ||
++ return !skb_queue_empty_lockless(&sk->sk_receive_queue) ||
+ sk_busy_loop_timeout(sk, start_time);
+ }
+ EXPORT_SYMBOL(sk_busy_loop_end);
+diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
+index b685bc82f8d0..6b8a602849dd 100644
+--- a/net/dccp/ipv4.c
++++ b/net/dccp/ipv4.c
+@@ -117,7 +117,7 @@ int dccp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ inet->inet_daddr,
+ inet->inet_sport,
+ inet->inet_dport);
+- inet->inet_id = dp->dccps_iss ^ jiffies;
++ inet->inet_id = prandom_u32();
+
+ err = dccp_connect(sk);
+ rt = NULL;
+@@ -416,7 +416,7 @@ struct sock *dccp_v4_request_recv_sock(const struct sock *sk,
+ RCU_INIT_POINTER(newinet->inet_opt, rcu_dereference(ireq->ireq_opt));
+ newinet->mc_index = inet_iif(skb);
+ newinet->mc_ttl = ip_hdr(skb)->ttl;
+- newinet->inet_id = jiffies;
++ newinet->inet_id = prandom_u32();
+
+ if (dst == NULL && (dst = inet_csk_route_child_sock(sk, newsk, req)) == NULL)
+ goto put_and_exit;
+diff --git a/net/decnet/af_decnet.c b/net/decnet/af_decnet.c
+index 0ea75286abf4..3349ea81f901 100644
+--- a/net/decnet/af_decnet.c
++++ b/net/decnet/af_decnet.c
+@@ -1205,7 +1205,7 @@ static __poll_t dn_poll(struct file *file, struct socket *sock, poll_table *wai
+ struct dn_scp *scp = DN_SK(sk);
+ __poll_t mask = datagram_poll(file, sock, wait);
+
+- if (!skb_queue_empty(&scp->other_receive_queue))
++ if (!skb_queue_empty_lockless(&scp->other_receive_queue))
+ mask |= EPOLLRDBAND;
+
+ return mask;
+diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
+index 96f787cf9b6e..130f1a343abb 100644
+--- a/net/dsa/dsa2.c
++++ b/net/dsa/dsa2.c
+@@ -46,7 +46,7 @@ static struct dsa_switch_tree *dsa_tree_alloc(int index)
+ dst->index = index;
+
+ INIT_LIST_HEAD(&dst->list);
+- list_add_tail(&dsa_tree_list, &dst->list);
++ list_add_tail(&dst->list, &dsa_tree_list);
+
+ kref_init(&dst->refcount);
+
+diff --git a/net/ipv4/datagram.c b/net/ipv4/datagram.c
+index 9a0fe0c2fa02..4a8550c49202 100644
+--- a/net/ipv4/datagram.c
++++ b/net/ipv4/datagram.c
+@@ -73,7 +73,7 @@ int __ip4_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len
+ reuseport_has_conns(sk, true);
+ sk->sk_state = TCP_ESTABLISHED;
+ sk_set_txhash(sk);
+- inet->inet_id = jiffies;
++ inet->inet_id = prandom_u32();
+
+ sk_dst_set(sk, &rt->dst);
+ err = 0;
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index e8bc939b56dd..fb4162943fae 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -1147,7 +1147,7 @@ void fib_modify_prefix_metric(struct in_ifaddr *ifa, u32 new_metric)
+ if (!(dev->flags & IFF_UP) ||
+ ifa->ifa_flags & (IFA_F_SECONDARY | IFA_F_NOPREFIXROUTE) ||
+ ipv4_is_zeronet(prefix) ||
+- prefix == ifa->ifa_local || ifa->ifa_prefixlen == 32)
++ (prefix == ifa->ifa_local && ifa->ifa_prefixlen == 32))
+ return;
+
+ /* add the new */
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index 97824864e40d..83fb00153018 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -240,7 +240,7 @@ static inline int compute_score(struct sock *sk, struct net *net,
+ return -1;
+
+ score = sk->sk_family == PF_INET ? 2 : 1;
+- if (sk->sk_incoming_cpu == raw_smp_processor_id())
++ if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
+ score++;
+ }
+ return score;
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 52690bb3e40f..10636fb6093e 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -509,9 +509,9 @@ static void erspan_fb_xmit(struct sk_buff *skb, struct net_device *dev)
+ key = &tun_info->key;
+ if (!(tun_info->key.tun_flags & TUNNEL_ERSPAN_OPT))
+ goto err_free_skb;
+- md = ip_tunnel_info_opts(tun_info);
+- if (!md)
++ if (tun_info->options_len < sizeof(*md))
+ goto err_free_skb;
++ md = ip_tunnel_info_opts(tun_info);
+
+ /* ERSPAN has fixed 8 byte GRE header */
+ version = md->version;
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index da521790cd63..e780ceab16e1 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -645,11 +645,12 @@ void ip_fraglist_prepare(struct sk_buff *skb, struct ip_fraglist_iter *iter)
+ EXPORT_SYMBOL(ip_fraglist_prepare);
+
+ void ip_frag_init(struct sk_buff *skb, unsigned int hlen,
+- unsigned int ll_rs, unsigned int mtu,
++ unsigned int ll_rs, unsigned int mtu, bool DF,
+ struct ip_frag_state *state)
+ {
+ struct iphdr *iph = ip_hdr(skb);
+
++ state->DF = DF;
+ state->hlen = hlen;
+ state->ll_rs = ll_rs;
+ state->mtu = mtu;
+@@ -668,9 +669,6 @@ static void ip_frag_ipcb(struct sk_buff *from, struct sk_buff *to,
+ /* Copy the flags to each fragment. */
+ IPCB(to)->flags = IPCB(from)->flags;
+
+- if (IPCB(from)->flags & IPSKB_FRAG_PMTU)
+- state->iph->frag_off |= htons(IP_DF);
+-
+ /* ANK: dirty, but effective trick. Upgrade options only if
+ * the segment to be fragmented was THE FIRST (otherwise,
+ * options are already fixed) and make it ONCE
+@@ -738,6 +736,8 @@ struct sk_buff *ip_frag_next(struct sk_buff *skb, struct ip_frag_state *state)
+ */
+ iph = ip_hdr(skb2);
+ iph->frag_off = htons((state->offset >> 3));
++ if (state->DF)
++ iph->frag_off |= htons(IP_DF);
+
+ /*
+ * Added AC : If we are fragmenting a fragment that's not the
+@@ -771,6 +771,7 @@ int ip_do_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ struct rtable *rt = skb_rtable(skb);
+ unsigned int mtu, hlen, ll_rs;
+ struct ip_fraglist_iter iter;
++ ktime_t tstamp = skb->tstamp;
+ struct ip_frag_state state;
+ int err = 0;
+
+@@ -846,6 +847,7 @@ int ip_do_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ ip_fraglist_prepare(skb, &iter);
+ }
+
++ skb->tstamp = tstamp;
+ err = output(net, sk, skb);
+
+ if (!err)
+@@ -881,7 +883,8 @@ slow_path:
+ * Fragment the datagram.
+ */
+
+- ip_frag_init(skb, hlen, ll_rs, mtu, &state);
++ ip_frag_init(skb, hlen, ll_rs, mtu, IPCB(skb)->flags & IPSKB_FRAG_PMTU,
++ &state);
+
+ /*
+ * Keep copying data until we run out.
+@@ -900,6 +903,7 @@ slow_path:
+ /*
+ * Put this fragment into the sending queue.
+ */
++ skb2->tstamp = tstamp;
+ err = output(net, sk, skb2);
+ if (err)
+ goto fail;
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 61082065b26a..cf79ab96c2df 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -584,7 +584,7 @@ __poll_t tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
+ }
+ /* This barrier is coupled with smp_wmb() in tcp_reset() */
+ smp_rmb();
+- if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue))
++ if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
+ mask |= EPOLLERR;
+
+ return mask;
+@@ -1961,7 +1961,7 @@ int tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int nonblock,
+ if (unlikely(flags & MSG_ERRQUEUE))
+ return inet_recv_error(sk, msg, len, addr_len);
+
+- if (sk_can_busy_loop(sk) && skb_queue_empty(&sk->sk_receive_queue) &&
++ if (sk_can_busy_loop(sk) && skb_queue_empty_lockless(&sk->sk_receive_queue) &&
+ (sk->sk_state == TCP_ESTABLISHED))
+ sk_busy_loop(sk, nonblock);
+
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index d57641cb3477..54320ef35405 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -300,7 +300,7 @@ int tcp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ inet->inet_daddr);
+ }
+
+- inet->inet_id = tp->write_seq ^ jiffies;
++ inet->inet_id = prandom_u32();
+
+ if (tcp_fastopen_defer_connect(sk, &err))
+ return err;
+@@ -1443,7 +1443,7 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
+ inet_csk(newsk)->icsk_ext_hdr_len = 0;
+ if (inet_opt)
+ inet_csk(newsk)->icsk_ext_hdr_len = inet_opt->opt.optlen;
+- newinet->inet_id = newtp->write_seq ^ jiffies;
++ newinet->inet_id = prandom_u32();
+
+ if (!dst) {
+ dst = inet_csk_route_child_sock(sk, newsk, req);
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 5e5d0575a43c..5487b43b8a56 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -388,7 +388,7 @@ static int compute_score(struct sock *sk, struct net *net,
+ return -1;
+ score += 4;
+
+- if (sk->sk_incoming_cpu == raw_smp_processor_id())
++ if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
+ score++;
+ return score;
+ }
+@@ -1316,6 +1316,20 @@ static void udp_set_dev_scratch(struct sk_buff *skb)
+ scratch->_tsize_state |= UDP_SKB_IS_STATELESS;
+ }
+
++static void udp_skb_csum_unnecessary_set(struct sk_buff *skb)
++{
++ /* We come here after udp_lib_checksum_complete() returned 0.
++ * This means that __skb_checksum_complete() might have
++ * set skb->csum_valid to 1.
++ * On 64bit platforms, we can set csum_unnecessary
++ * to true, but only if the skb is not shared.
++ */
++#if BITS_PER_LONG == 64
++ if (!skb_shared(skb))
++ udp_skb_scratch(skb)->csum_unnecessary = true;
++#endif
++}
++
+ static int udp_skb_truesize(struct sk_buff *skb)
+ {
+ return udp_skb_scratch(skb)->_tsize_state & ~UDP_SKB_IS_STATELESS;
+@@ -1550,10 +1564,7 @@ static struct sk_buff *__first_packet_length(struct sock *sk,
+ *total += skb->truesize;
+ kfree_skb(skb);
+ } else {
+- /* the csum related bits could be changed, refresh
+- * the scratch area
+- */
+- udp_set_dev_scratch(skb);
++ udp_skb_csum_unnecessary_set(skb);
+ break;
+ }
+ }
+@@ -1577,7 +1588,7 @@ static int first_packet_length(struct sock *sk)
+
+ spin_lock_bh(&rcvq->lock);
+ skb = __first_packet_length(sk, rcvq, &total);
+- if (!skb && !skb_queue_empty(sk_queue)) {
++ if (!skb && !skb_queue_empty_lockless(sk_queue)) {
+ spin_lock(&sk_queue->lock);
+ skb_queue_splice_tail_init(sk_queue, rcvq);
+ spin_unlock(&sk_queue->lock);
+@@ -1650,7 +1661,7 @@ struct sk_buff *__skb_recv_udp(struct sock *sk, unsigned int flags,
+ return skb;
+ }
+
+- if (skb_queue_empty(sk_queue)) {
++ if (skb_queue_empty_lockless(sk_queue)) {
+ spin_unlock_bh(&queue->lock);
+ goto busy_check;
+ }
+@@ -1676,7 +1687,7 @@ busy_check:
+ break;
+
+ sk_busy_loop(sk, flags & MSG_DONTWAIT);
+- } while (!skb_queue_empty(sk_queue));
++ } while (!skb_queue_empty_lockless(sk_queue));
+
+ /* sk_queue is empty, reader_queue may contain peeked packets */
+ } while (timeo &&
+@@ -2712,7 +2723,7 @@ __poll_t udp_poll(struct file *file, struct socket *sock, poll_table *wait)
+ __poll_t mask = datagram_poll(file, sock, wait);
+ struct sock *sk = sock->sk;
+
+- if (!skb_queue_empty(&udp_sk(sk)->reader_queue))
++ if (!skb_queue_empty_lockless(&udp_sk(sk)->reader_queue))
+ mask |= EPOLLIN | EPOLLRDNORM;
+
+ /* Check for false positives due to checksum errors */
+diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c
+index cf60fae9533b..fbe9d4295eac 100644
+--- a/net/ipv6/inet6_hashtables.c
++++ b/net/ipv6/inet6_hashtables.c
+@@ -105,7 +105,7 @@ static inline int compute_score(struct sock *sk, struct net *net,
+ return -1;
+
+ score = 1;
+- if (sk->sk_incoming_cpu == raw_smp_processor_id())
++ if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
+ score++;
+ }
+ return score;
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index d5779d6a6065..4efc272c6027 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -980,9 +980,9 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
+ dsfield = key->tos;
+ if (!(tun_info->key.tun_flags & TUNNEL_ERSPAN_OPT))
+ goto tx_err;
+- md = ip_tunnel_info_opts(tun_info);
+- if (!md)
++ if (tun_info->options_len < sizeof(*md))
+ goto tx_err;
++ md = ip_tunnel_info_opts(tun_info);
+
+ tun_id = tunnel_id_to_key32(key->tun_id);
+ if (md->version == 1) {
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 8e49fd62eea9..e71568f730f9 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -768,6 +768,7 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ inet6_sk(skb->sk) : NULL;
+ struct ip6_frag_state state;
+ unsigned int mtu, hlen, nexthdr_offset;
++ ktime_t tstamp = skb->tstamp;
+ int hroom, err = 0;
+ __be32 frag_id;
+ u8 *prevhdr, nexthdr = 0;
+@@ -855,6 +856,7 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ if (iter.frag)
+ ip6_fraglist_prepare(skb, &iter);
+
++ skb->tstamp = tstamp;
+ err = output(net, sk, skb);
+ if (!err)
+ IP6_INC_STATS(net, ip6_dst_idev(&rt->dst),
+@@ -913,6 +915,7 @@ slow_path:
+ /*
+ * Put this fragment into the sending queue.
+ */
++ frag->tstamp = tstamp;
+ err = output(net, sk, frag);
+ if (err)
+ goto fail;
+diff --git a/net/ipv6/netfilter.c b/net/ipv6/netfilter.c
+index 61819ed858b1..7e75d01464fb 100644
+--- a/net/ipv6/netfilter.c
++++ b/net/ipv6/netfilter.c
+@@ -119,6 +119,7 @@ int br_ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ struct sk_buff *))
+ {
+ int frag_max_size = BR_INPUT_SKB_CB(skb)->frag_max_size;
++ ktime_t tstamp = skb->tstamp;
+ struct ip6_frag_state state;
+ u8 *prevhdr, nexthdr = 0;
+ unsigned int mtu, hlen;
+@@ -183,6 +184,7 @@ int br_ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ if (iter.frag)
+ ip6_fraglist_prepare(skb, &iter);
+
++ skb->tstamp = tstamp;
+ err = output(net, sk, data, skb);
+ if (err || !iter.frag)
+ break;
+@@ -215,6 +217,7 @@ slow_path:
+ goto blackhole;
+ }
+
++ skb2->tstamp = tstamp;
+ err = output(net, sk, data, skb2);
+ if (err)
+ goto blackhole;
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 0454a8a3b39c..bea3bdad0369 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -135,7 +135,7 @@ static int compute_score(struct sock *sk, struct net *net,
+ return -1;
+ score++;
+
+- if (sk->sk_incoming_cpu == raw_smp_processor_id())
++ if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
+ score++;
+
+ return score;
+diff --git a/net/nfc/llcp_sock.c b/net/nfc/llcp_sock.c
+index ccdd790e163a..28604414dec1 100644
+--- a/net/nfc/llcp_sock.c
++++ b/net/nfc/llcp_sock.c
+@@ -554,11 +554,11 @@ static __poll_t llcp_sock_poll(struct file *file, struct socket *sock,
+ if (sk->sk_state == LLCP_LISTEN)
+ return llcp_accept_poll(sk);
+
+- if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue))
++ if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
+ mask |= EPOLLERR |
+ (sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0);
+
+- if (!skb_queue_empty(&sk->sk_receive_queue))
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+ mask |= EPOLLIN | EPOLLRDNORM;
+
+ if (sk->sk_state == LLCP_CLOSED)
+diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
+index f1e7041a5a60..43aeca12208c 100644
+--- a/net/openvswitch/datapath.c
++++ b/net/openvswitch/datapath.c
+@@ -1850,7 +1850,7 @@ static struct genl_family dp_datapath_genl_family __ro_after_init = {
+ /* Called with ovs_mutex or RCU read lock. */
+ static int ovs_vport_cmd_fill_info(struct vport *vport, struct sk_buff *skb,
+ struct net *net, u32 portid, u32 seq,
+- u32 flags, u8 cmd)
++ u32 flags, u8 cmd, gfp_t gfp)
+ {
+ struct ovs_header *ovs_header;
+ struct ovs_vport_stats vport_stats;
+@@ -1871,7 +1871,7 @@ static int ovs_vport_cmd_fill_info(struct vport *vport, struct sk_buff *skb,
+ goto nla_put_failure;
+
+ if (!net_eq(net, dev_net(vport->dev))) {
+- int id = peernet2id_alloc(net, dev_net(vport->dev));
++ int id = peernet2id_alloc(net, dev_net(vport->dev), gfp);
+
+ if (nla_put_s32(skb, OVS_VPORT_ATTR_NETNSID, id))
+ goto nla_put_failure;
+@@ -1912,11 +1912,12 @@ struct sk_buff *ovs_vport_cmd_build_info(struct vport *vport, struct net *net,
+ struct sk_buff *skb;
+ int retval;
+
+- skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_ATOMIC);
++ skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
+ if (!skb)
+ return ERR_PTR(-ENOMEM);
+
+- retval = ovs_vport_cmd_fill_info(vport, skb, net, portid, seq, 0, cmd);
++ retval = ovs_vport_cmd_fill_info(vport, skb, net, portid, seq, 0, cmd,
++ GFP_KERNEL);
+ BUG_ON(retval < 0);
+
+ return skb;
+@@ -2058,7 +2059,7 @@ restart:
+
+ err = ovs_vport_cmd_fill_info(vport, reply, genl_info_net(info),
+ info->snd_portid, info->snd_seq, 0,
+- OVS_VPORT_CMD_NEW);
++ OVS_VPORT_CMD_NEW, GFP_KERNEL);
+
+ new_headroom = netdev_get_fwd_headroom(vport->dev);
+
+@@ -2119,7 +2120,7 @@ static int ovs_vport_cmd_set(struct sk_buff *skb, struct genl_info *info)
+
+ err = ovs_vport_cmd_fill_info(vport, reply, genl_info_net(info),
+ info->snd_portid, info->snd_seq, 0,
+- OVS_VPORT_CMD_SET);
++ OVS_VPORT_CMD_SET, GFP_KERNEL);
+ BUG_ON(err < 0);
+
+ ovs_unlock();
+@@ -2159,7 +2160,7 @@ static int ovs_vport_cmd_del(struct sk_buff *skb, struct genl_info *info)
+
+ err = ovs_vport_cmd_fill_info(vport, reply, genl_info_net(info),
+ info->snd_portid, info->snd_seq, 0,
+- OVS_VPORT_CMD_DEL);
++ OVS_VPORT_CMD_DEL, GFP_KERNEL);
+ BUG_ON(err < 0);
+
+ /* the vport deletion may trigger dp headroom update */
+@@ -2206,7 +2207,7 @@ static int ovs_vport_cmd_get(struct sk_buff *skb, struct genl_info *info)
+ goto exit_unlock_free;
+ err = ovs_vport_cmd_fill_info(vport, reply, genl_info_net(info),
+ info->snd_portid, info->snd_seq, 0,
+- OVS_VPORT_CMD_GET);
++ OVS_VPORT_CMD_GET, GFP_ATOMIC);
+ BUG_ON(err < 0);
+ rcu_read_unlock();
+
+@@ -2242,7 +2243,8 @@ static int ovs_vport_cmd_dump(struct sk_buff *skb, struct netlink_callback *cb)
+ NETLINK_CB(cb->skb).portid,
+ cb->nlh->nlmsg_seq,
+ NLM_F_MULTI,
+- OVS_VPORT_CMD_GET) < 0)
++ OVS_VPORT_CMD_GET,
++ GFP_ATOMIC) < 0)
+ goto out;
+
+ j++;
+diff --git a/net/phonet/socket.c b/net/phonet/socket.c
+index 96ea9f254ae9..76d499f6af9a 100644
+--- a/net/phonet/socket.c
++++ b/net/phonet/socket.c
+@@ -338,9 +338,9 @@ static __poll_t pn_socket_poll(struct file *file, struct socket *sock,
+
+ if (sk->sk_state == TCP_CLOSE)
+ return EPOLLERR;
+- if (!skb_queue_empty(&sk->sk_receive_queue))
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+ mask |= EPOLLIN | EPOLLRDNORM;
+- if (!skb_queue_empty(&pn->ctrlreq_queue))
++ if (!skb_queue_empty_lockless(&pn->ctrlreq_queue))
+ mask |= EPOLLPRI;
+ if (!mask && sk->sk_state == TCP_CLOSE_WAIT)
+ return EPOLLHUP;
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index 8051dfdcf26d..b23a13c69019 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -596,6 +596,7 @@ struct rxrpc_call {
+ int debug_id; /* debug ID for printks */
+ unsigned short rx_pkt_offset; /* Current recvmsg packet offset */
+ unsigned short rx_pkt_len; /* Current recvmsg packet len */
++ bool rx_pkt_last; /* Current recvmsg packet is last */
+
+ /* Rx/Tx circular buffer, depending on phase.
+ *
+diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c
+index 3b0becb12041..08d4b4b9283a 100644
+--- a/net/rxrpc/recvmsg.c
++++ b/net/rxrpc/recvmsg.c
+@@ -267,11 +267,13 @@ static int rxrpc_verify_packet(struct rxrpc_call *call, struct sk_buff *skb,
+ */
+ static int rxrpc_locate_data(struct rxrpc_call *call, struct sk_buff *skb,
+ u8 *_annotation,
+- unsigned int *_offset, unsigned int *_len)
++ unsigned int *_offset, unsigned int *_len,
++ bool *_last)
+ {
+ struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
+ unsigned int offset = sizeof(struct rxrpc_wire_header);
+ unsigned int len;
++ bool last = false;
+ int ret;
+ u8 annotation = *_annotation;
+ u8 subpacket = annotation & RXRPC_RX_ANNO_SUBPACKET;
+@@ -281,6 +283,8 @@ static int rxrpc_locate_data(struct rxrpc_call *call, struct sk_buff *skb,
+ len = skb->len - offset;
+ if (subpacket < sp->nr_subpackets - 1)
+ len = RXRPC_JUMBO_DATALEN;
++ else if (sp->rx_flags & RXRPC_SKB_INCL_LAST)
++ last = true;
+
+ if (!(annotation & RXRPC_RX_ANNO_VERIFIED)) {
+ ret = rxrpc_verify_packet(call, skb, annotation, offset, len);
+@@ -291,6 +295,7 @@ static int rxrpc_locate_data(struct rxrpc_call *call, struct sk_buff *skb,
+
+ *_offset = offset;
+ *_len = len;
++ *_last = last;
+ call->conn->security->locate_data(call, skb, _offset, _len);
+ return 0;
+ }
+@@ -309,7 +314,7 @@ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call,
+ rxrpc_serial_t serial;
+ rxrpc_seq_t hard_ack, top, seq;
+ size_t remain;
+- bool last;
++ bool rx_pkt_last;
+ unsigned int rx_pkt_offset, rx_pkt_len;
+ int ix, copy, ret = -EAGAIN, ret2;
+
+@@ -319,6 +324,7 @@ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call,
+
+ rx_pkt_offset = call->rx_pkt_offset;
+ rx_pkt_len = call->rx_pkt_len;
++ rx_pkt_last = call->rx_pkt_last;
+
+ if (call->state >= RXRPC_CALL_SERVER_ACK_REQUEST) {
+ seq = call->rx_hard_ack;
+@@ -329,6 +335,7 @@ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call,
+ /* Barriers against rxrpc_input_data(). */
+ hard_ack = call->rx_hard_ack;
+ seq = hard_ack + 1;
++
+ while (top = smp_load_acquire(&call->rx_top),
+ before_eq(seq, top)
+ ) {
+@@ -356,7 +363,8 @@ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call,
+ if (rx_pkt_offset == 0) {
+ ret2 = rxrpc_locate_data(call, skb,
+ &call->rxtx_annotations[ix],
+- &rx_pkt_offset, &rx_pkt_len);
++ &rx_pkt_offset, &rx_pkt_len,
++ &rx_pkt_last);
+ trace_rxrpc_recvmsg(call, rxrpc_recvmsg_next, seq,
+ rx_pkt_offset, rx_pkt_len, ret2);
+ if (ret2 < 0) {
+@@ -396,13 +404,12 @@ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call,
+ }
+
+ /* The whole packet has been transferred. */
+- last = sp->hdr.flags & RXRPC_LAST_PACKET;
+ if (!(flags & MSG_PEEK))
+ rxrpc_rotate_rx_window(call);
+ rx_pkt_offset = 0;
+ rx_pkt_len = 0;
+
+- if (last) {
++ if (rx_pkt_last) {
+ ASSERTCMP(seq, ==, READ_ONCE(call->rx_top));
+ ret = 1;
+ goto out;
+@@ -415,6 +422,7 @@ out:
+ if (!(flags & MSG_PEEK)) {
+ call->rx_pkt_offset = rx_pkt_offset;
+ call->rx_pkt_len = rx_pkt_len;
++ call->rx_pkt_last = rx_pkt_last;
+ }
+ done:
+ trace_rxrpc_recvmsg(call, rxrpc_recvmsg_data_return, seq,
+diff --git a/net/sched/sch_hhf.c b/net/sched/sch_hhf.c
+index 23cd1c873a2c..be35f03b657b 100644
+--- a/net/sched/sch_hhf.c
++++ b/net/sched/sch_hhf.c
+@@ -5,11 +5,11 @@
+ * Copyright (C) 2013 Nandita Dukkipati <nanditad@google.com>
+ */
+
+-#include <linux/jhash.h>
+ #include <linux/jiffies.h>
+ #include <linux/module.h>
+ #include <linux/skbuff.h>
+ #include <linux/vmalloc.h>
++#include <linux/siphash.h>
+ #include <net/pkt_sched.h>
+ #include <net/sock.h>
+
+@@ -126,7 +126,7 @@ struct wdrr_bucket {
+
+ struct hhf_sched_data {
+ struct wdrr_bucket buckets[WDRR_BUCKET_CNT];
+- u32 perturbation; /* hash perturbation */
++ siphash_key_t perturbation; /* hash perturbation */
+ u32 quantum; /* psched_mtu(qdisc_dev(sch)); */
+ u32 drop_overlimit; /* number of times max qdisc packet
+ * limit was hit
+@@ -264,7 +264,7 @@ static enum wdrr_bucket_idx hhf_classify(struct sk_buff *skb, struct Qdisc *sch)
+ }
+
+ /* Get hashed flow-id of the skb. */
+- hash = skb_get_hash_perturb(skb, q->perturbation);
++ hash = skb_get_hash_perturb(skb, &q->perturbation);
+
+ /* Check if this packet belongs to an already established HH flow. */
+ flow_pos = hash & HHF_BIT_MASK;
+@@ -582,7 +582,7 @@ static int hhf_init(struct Qdisc *sch, struct nlattr *opt,
+
+ sch->limit = 1000;
+ q->quantum = psched_mtu(qdisc_dev(sch));
+- q->perturbation = prandom_u32();
++ get_random_bytes(&q->perturbation, sizeof(q->perturbation));
+ INIT_LIST_HEAD(&q->new_buckets);
+ INIT_LIST_HEAD(&q->old_buckets);
+
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index 0e44039e729c..42e557d48e4e 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -509,6 +509,7 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ if (skb->ip_summed == CHECKSUM_PARTIAL &&
+ skb_checksum_help(skb)) {
+ qdisc_drop(skb, sch, to_free);
++ skb = NULL;
+ goto finish_segs;
+ }
+
+@@ -593,9 +594,10 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ finish_segs:
+ if (segs) {
+ unsigned int len, last_len;
+- int nb = 0;
++ int nb;
+
+- len = skb->len;
++ len = skb ? skb->len : 0;
++ nb = skb ? 1 : 0;
+
+ while (segs) {
+ skb2 = segs->next;
+@@ -612,7 +614,10 @@ finish_segs:
+ }
+ segs = skb2;
+ }
+- qdisc_tree_reduce_backlog(sch, -nb, prev_len - len);
++ /* Parent qdiscs accounted for 1 skb of size @prev_len */
++ qdisc_tree_reduce_backlog(sch, -(nb - 1), -(len - prev_len));
++ } else if (!skb) {
++ return NET_XMIT_DROP;
+ }
+ return NET_XMIT_SUCCESS;
+ }
+diff --git a/net/sched/sch_sfb.c b/net/sched/sch_sfb.c
+index d448fe3068e5..4074c50ac3d7 100644
+--- a/net/sched/sch_sfb.c
++++ b/net/sched/sch_sfb.c
+@@ -18,7 +18,7 @@
+ #include <linux/errno.h>
+ #include <linux/skbuff.h>
+ #include <linux/random.h>
+-#include <linux/jhash.h>
++#include <linux/siphash.h>
+ #include <net/ip.h>
+ #include <net/pkt_sched.h>
+ #include <net/pkt_cls.h>
+@@ -45,7 +45,7 @@ struct sfb_bucket {
+ * (Section 4.4 of SFB reference : moving hash functions)
+ */
+ struct sfb_bins {
+- u32 perturbation; /* jhash perturbation */
++ siphash_key_t perturbation; /* siphash key */
+ struct sfb_bucket bins[SFB_LEVELS][SFB_NUMBUCKETS];
+ };
+
+@@ -217,7 +217,8 @@ static u32 sfb_compute_qlen(u32 *prob_r, u32 *avgpm_r, const struct sfb_sched_da
+
+ static void sfb_init_perturbation(u32 slot, struct sfb_sched_data *q)
+ {
+- q->bins[slot].perturbation = prandom_u32();
++ get_random_bytes(&q->bins[slot].perturbation,
++ sizeof(q->bins[slot].perturbation));
+ }
+
+ static void sfb_swap_slot(struct sfb_sched_data *q)
+@@ -314,9 +315,9 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ /* If using external classifiers, get result and record it. */
+ if (!sfb_classify(skb, fl, &ret, &salt))
+ goto other_drop;
+- sfbhash = jhash_1word(salt, q->bins[slot].perturbation);
++ sfbhash = siphash_1u32(salt, &q->bins[slot].perturbation);
+ } else {
+- sfbhash = skb_get_hash_perturb(skb, q->bins[slot].perturbation);
++ sfbhash = skb_get_hash_perturb(skb, &q->bins[slot].perturbation);
+ }
+
+
+@@ -352,7 +353,7 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ /* Inelastic flow */
+ if (q->double_buffering) {
+ sfbhash = skb_get_hash_perturb(skb,
+- q->bins[slot].perturbation);
++ &q->bins[slot].perturbation);
+ if (!sfbhash)
+ sfbhash = 1;
+ sfb_skb_cb(skb)->hashes[slot] = sfbhash;
+diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
+index 68404a9d2ce4..c787d4d46017 100644
+--- a/net/sched/sch_sfq.c
++++ b/net/sched/sch_sfq.c
+@@ -14,7 +14,7 @@
+ #include <linux/errno.h>
+ #include <linux/init.h>
+ #include <linux/skbuff.h>
+-#include <linux/jhash.h>
++#include <linux/siphash.h>
+ #include <linux/slab.h>
+ #include <linux/vmalloc.h>
+ #include <net/netlink.h>
+@@ -117,7 +117,7 @@ struct sfq_sched_data {
+ u8 headdrop;
+ u8 maxdepth; /* limit of packets per flow */
+
+- u32 perturbation;
++ siphash_key_t perturbation;
+ u8 cur_depth; /* depth of longest slot */
+ u8 flags;
+ unsigned short scaled_quantum; /* SFQ_ALLOT_SIZE(quantum) */
+@@ -157,7 +157,7 @@ static inline struct sfq_head *sfq_dep_head(struct sfq_sched_data *q, sfq_index
+ static unsigned int sfq_hash(const struct sfq_sched_data *q,
+ const struct sk_buff *skb)
+ {
+- return skb_get_hash_perturb(skb, q->perturbation) & (q->divisor - 1);
++ return skb_get_hash_perturb(skb, &q->perturbation) & (q->divisor - 1);
+ }
+
+ static unsigned int sfq_classify(struct sk_buff *skb, struct Qdisc *sch,
+@@ -607,9 +607,11 @@ static void sfq_perturbation(struct timer_list *t)
+ struct sfq_sched_data *q = from_timer(q, t, perturb_timer);
+ struct Qdisc *sch = q->sch;
+ spinlock_t *root_lock = qdisc_lock(qdisc_root_sleeping(sch));
++ siphash_key_t nkey;
+
++ get_random_bytes(&nkey, sizeof(nkey));
+ spin_lock(root_lock);
+- q->perturbation = prandom_u32();
++ q->perturbation = nkey;
+ if (!q->filter_list && q->tail)
+ sfq_rehash(sch);
+ spin_unlock(root_lock);
+@@ -688,7 +690,7 @@ static int sfq_change(struct Qdisc *sch, struct nlattr *opt)
+ del_timer(&q->perturb_timer);
+ if (q->perturb_period) {
+ mod_timer(&q->perturb_timer, jiffies + q->perturb_period);
+- q->perturbation = prandom_u32();
++ get_random_bytes(&q->perturbation, sizeof(q->perturbation));
+ }
+ sch_tree_unlock(sch);
+ kfree(p);
+@@ -745,7 +747,7 @@ static int sfq_init(struct Qdisc *sch, struct nlattr *opt,
+ q->quantum = psched_mtu(qdisc_dev(sch));
+ q->scaled_quantum = SFQ_ALLOT_SIZE(q->quantum);
+ q->perturb_period = 0;
+- q->perturbation = prandom_u32();
++ get_random_bytes(&q->perturbation, sizeof(q->perturbation));
+
+ if (opt) {
+ int err = sfq_change(sch, opt);
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 8fd7b0e6ce9f..b81d7673634c 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -8329,7 +8329,7 @@ __poll_t sctp_poll(struct file *file, struct socket *sock, poll_table *wait)
+ mask = 0;
+
+ /* Is there any exceptional events? */
+- if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue))
++ if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
+ mask |= EPOLLERR |
+ (sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0);
+ if (sk->sk_shutdown & RCV_SHUTDOWN)
+@@ -8338,7 +8338,7 @@ __poll_t sctp_poll(struct file *file, struct socket *sock, poll_table *wait)
+ mask |= EPOLLHUP;
+
+ /* Is it readable? Reconsider this code with TCP-style support. */
+- if (!skb_queue_empty(&sk->sk_receive_queue))
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+ mask |= EPOLLIN | EPOLLRDNORM;
+
+ /* The association is either gone or not ready. */
+@@ -8724,7 +8724,7 @@ struct sk_buff *sctp_skb_recv_datagram(struct sock *sk, int flags,
+ if (sk_can_busy_loop(sk)) {
+ sk_busy_loop(sk, noblock);
+
+- if (!skb_queue_empty(&sk->sk_receive_queue))
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+ continue;
+ }
+
+@@ -9159,7 +9159,7 @@ void sctp_copy_sock(struct sock *newsk, struct sock *sk,
+ newinet->inet_rcv_saddr = inet->inet_rcv_saddr;
+ newinet->inet_dport = htons(asoc->peer.port);
+ newinet->pmtudisc = inet->pmtudisc;
+- newinet->inet_id = asoc->next_tsn ^ jiffies;
++ newinet->inet_id = prandom_u32();
+
+ newinet->uc_ttl = inet->uc_ttl;
+ newinet->mc_loop = 1;
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 5b932583e407..47946f489fd4 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -123,6 +123,12 @@ struct proto smc_proto6 = {
+ };
+ EXPORT_SYMBOL_GPL(smc_proto6);
+
++static void smc_restore_fallback_changes(struct smc_sock *smc)
++{
++ smc->clcsock->file->private_data = smc->sk.sk_socket;
++ smc->clcsock->file = NULL;
++}
++
+ static int __smc_release(struct smc_sock *smc)
+ {
+ struct sock *sk = &smc->sk;
+@@ -141,6 +147,7 @@ static int __smc_release(struct smc_sock *smc)
+ }
+ sk->sk_state = SMC_CLOSED;
+ sk->sk_state_change(sk);
++ smc_restore_fallback_changes(smc);
+ }
+
+ sk->sk_prot->unhash(sk);
+@@ -700,8 +707,6 @@ static int __smc_connect(struct smc_sock *smc)
+ int smc_type;
+ int rc = 0;
+
+- sock_hold(&smc->sk); /* sock put in passive closing */
+-
+ if (smc->use_fallback)
+ return smc_connect_fallback(smc, smc->fallback_rsn);
+
+@@ -846,6 +851,8 @@ static int smc_connect(struct socket *sock, struct sockaddr *addr,
+ rc = kernel_connect(smc->clcsock, addr, alen, flags);
+ if (rc && rc != -EINPROGRESS)
+ goto out;
++
++ sock_hold(&smc->sk); /* sock put in passive closing */
+ if (flags & O_NONBLOCK) {
+ if (schedule_work(&smc->connect_work))
+ smc->connect_nonblock = 1;
+@@ -1291,8 +1298,8 @@ static void smc_listen_work(struct work_struct *work)
+ /* check if RDMA is available */
+ if (!ism_supported) { /* SMC_TYPE_R or SMC_TYPE_B */
+ /* prepare RDMA check */
+- memset(&ini, 0, sizeof(ini));
+ ini.is_smcd = false;
++ ini.ism_dev = NULL;
+ ini.ib_lcl = &pclc->lcl;
+ rc = smc_find_rdma_device(new_smc, &ini);
+ if (rc) {
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 83ae41d7e554..90ecca988d12 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -740,7 +740,7 @@ static __poll_t tipc_poll(struct file *file, struct socket *sock,
+ /* fall through */
+ case TIPC_LISTEN:
+ case TIPC_CONNECTING:
+- if (!skb_queue_empty(&sk->sk_receive_queue))
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+ revents |= EPOLLIN | EPOLLRDNORM;
+ break;
+ case TIPC_OPEN:
+@@ -748,7 +748,7 @@ static __poll_t tipc_poll(struct file *file, struct socket *sock,
+ revents |= EPOLLOUT;
+ if (!tipc_sk_type_connectionless(sk))
+ break;
+- if (skb_queue_empty(&sk->sk_receive_queue))
++ if (skb_queue_empty_lockless(&sk->sk_receive_queue))
+ break;
+ revents |= EPOLLIN | EPOLLRDNORM;
+ break;
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 67e87db5877f..0d8da809bea2 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -2599,7 +2599,7 @@ static __poll_t unix_poll(struct file *file, struct socket *sock, poll_table *wa
+ mask |= EPOLLRDHUP | EPOLLIN | EPOLLRDNORM;
+
+ /* readable? */
+- if (!skb_queue_empty(&sk->sk_receive_queue))
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+ mask |= EPOLLIN | EPOLLRDNORM;
+
+ /* Connection-based need to check for termination and startup */
+@@ -2628,7 +2628,7 @@ static __poll_t unix_dgram_poll(struct file *file, struct socket *sock,
+ mask = 0;
+
+ /* exceptional events? */
+- if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue))
++ if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
+ mask |= EPOLLERR |
+ (sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0);
+
+@@ -2638,7 +2638,7 @@ static __poll_t unix_dgram_poll(struct file *file, struct socket *sock,
+ mask |= EPOLLHUP;
+
+ /* readable? */
+- if (!skb_queue_empty(&sk->sk_receive_queue))
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+ mask |= EPOLLIN | EPOLLRDNORM;
+
+ /* Connection-based need to check for termination and startup */
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 2ab43b2bba31..582a3e4dfce2 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -870,7 +870,7 @@ static __poll_t vsock_poll(struct file *file, struct socket *sock,
+ * the queue and write as long as the socket isn't shutdown for
+ * sending.
+ */
+- if (!skb_queue_empty(&sk->sk_receive_queue) ||
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue) ||
+ (sk->sk_shutdown & RCV_SHUTDOWN)) {
+ mask |= EPOLLIN | EPOLLRDNORM;
+ }
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index b0de3e3b33e5..e1791d01ccc0 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2431,6 +2431,12 @@ static const struct pci_device_id azx_ids[] = {
+ /* Icelake */
+ { PCI_DEVICE(0x8086, 0x34c8),
+ .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
++ /* Jasperlake */
++ { PCI_DEVICE(0x8086, 0x38c8),
++ .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
++ /* Tigerlake */
++ { PCI_DEVICE(0x8086, 0xa0c8),
++ .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+ /* Elkhart Lake */
+ { PCI_DEVICE(0x8086, 0x4b55),
+ .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+diff --git a/sound/soc/codecs/msm8916-wcd-digital.c b/sound/soc/codecs/msm8916-wcd-digital.c
+index 1db7e43ec203..5963d170df43 100644
+--- a/sound/soc/codecs/msm8916-wcd-digital.c
++++ b/sound/soc/codecs/msm8916-wcd-digital.c
+@@ -243,6 +243,10 @@ static const char *const rx_mix1_text[] = {
+ "ZERO", "IIR1", "IIR2", "RX1", "RX2", "RX3"
+ };
+
++static const char * const rx_mix2_text[] = {
++ "ZERO", "IIR1", "IIR2"
++};
++
+ static const char *const dec_mux_text[] = {
+ "ZERO", "ADC1", "ADC2", "ADC3", "DMIC1", "DMIC2"
+ };
+@@ -270,6 +274,16 @@ static const struct soc_enum rx3_mix1_inp_enum[] = {
+ SOC_ENUM_SINGLE(LPASS_CDC_CONN_RX3_B2_CTL, 0, 6, rx_mix1_text),
+ };
+
++/* RX1 MIX2 */
++static const struct soc_enum rx_mix2_inp1_chain_enum =
++ SOC_ENUM_SINGLE(LPASS_CDC_CONN_RX1_B3_CTL,
++ 0, 3, rx_mix2_text);
++
++/* RX2 MIX2 */
++static const struct soc_enum rx2_mix2_inp1_chain_enum =
++ SOC_ENUM_SINGLE(LPASS_CDC_CONN_RX2_B3_CTL,
++ 0, 3, rx_mix2_text);
++
+ /* DEC */
+ static const struct soc_enum dec1_mux_enum = SOC_ENUM_SINGLE(
+ LPASS_CDC_CONN_TX_B1_CTL, 0, 6, dec_mux_text);
+@@ -309,6 +323,10 @@ static const struct snd_kcontrol_new rx3_mix1_inp2_mux = SOC_DAPM_ENUM(
+ "RX3 MIX1 INP2 Mux", rx3_mix1_inp_enum[1]);
+ static const struct snd_kcontrol_new rx3_mix1_inp3_mux = SOC_DAPM_ENUM(
+ "RX3 MIX1 INP3 Mux", rx3_mix1_inp_enum[2]);
++static const struct snd_kcontrol_new rx1_mix2_inp1_mux = SOC_DAPM_ENUM(
++ "RX1 MIX2 INP1 Mux", rx_mix2_inp1_chain_enum);
++static const struct snd_kcontrol_new rx2_mix2_inp1_mux = SOC_DAPM_ENUM(
++ "RX2 MIX2 INP1 Mux", rx2_mix2_inp1_chain_enum);
+
+ /* Digital Gain control -38.4 dB to +38.4 dB in 0.3 dB steps */
+ static const DECLARE_TLV_DB_SCALE(digital_gain, -3840, 30, 0);
+@@ -740,6 +758,10 @@ static const struct snd_soc_dapm_widget msm8916_wcd_digital_dapm_widgets[] = {
+ &rx3_mix1_inp2_mux),
+ SND_SOC_DAPM_MUX("RX3 MIX1 INP3", SND_SOC_NOPM, 0, 0,
+ &rx3_mix1_inp3_mux),
++ SND_SOC_DAPM_MUX("RX1 MIX2 INP1", SND_SOC_NOPM, 0, 0,
++ &rx1_mix2_inp1_mux),
++ SND_SOC_DAPM_MUX("RX2 MIX2 INP1", SND_SOC_NOPM, 0, 0,
++ &rx2_mix2_inp1_mux),
+
+ SND_SOC_DAPM_MUX("CIC1 MUX", SND_SOC_NOPM, 0, 0, &cic1_mux),
+ SND_SOC_DAPM_MUX("CIC2 MUX", SND_SOC_NOPM, 0, 0, &cic2_mux),
+diff --git a/sound/soc/codecs/pcm3168a.c b/sound/soc/codecs/pcm3168a.c
+index f1104d7d6426..b31997075a50 100644
+--- a/sound/soc/codecs/pcm3168a.c
++++ b/sound/soc/codecs/pcm3168a.c
+@@ -21,8 +21,7 @@
+
+ #define PCM3168A_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | \
+ SNDRV_PCM_FMTBIT_S24_3LE | \
+- SNDRV_PCM_FMTBIT_S24_LE | \
+- SNDRV_PCM_FMTBIT_S32_LE)
++ SNDRV_PCM_FMTBIT_S24_LE)
+
+ #define PCM3168A_FMT_I2S 0x0
+ #define PCM3168A_FMT_LEFT_J 0x1
+diff --git a/sound/soc/codecs/rt5651.c b/sound/soc/codecs/rt5651.c
+index 762595de956c..c506c9305043 100644
+--- a/sound/soc/codecs/rt5651.c
++++ b/sound/soc/codecs/rt5651.c
+@@ -1770,6 +1770,9 @@ static int rt5651_detect_headset(struct snd_soc_component *component)
+
+ static bool rt5651_support_button_press(struct rt5651_priv *rt5651)
+ {
++ if (!rt5651->hp_jack)
++ return false;
++
+ /* Button press support only works with internal jack-detection */
+ return (rt5651->hp_jack->status & SND_JACK_MICROPHONE) &&
+ rt5651->gpiod_hp_det == NULL;
+diff --git a/sound/soc/codecs/rt5682.c b/sound/soc/codecs/rt5682.c
+index 1ef470700ed5..c50b75ce82e0 100644
+--- a/sound/soc/codecs/rt5682.c
++++ b/sound/soc/codecs/rt5682.c
+@@ -995,6 +995,16 @@ static int rt5682_set_jack_detect(struct snd_soc_component *component,
+ {
+ struct rt5682_priv *rt5682 = snd_soc_component_get_drvdata(component);
+
++ rt5682->hs_jack = hs_jack;
++
++ if (!hs_jack) {
++ regmap_update_bits(rt5682->regmap, RT5682_IRQ_CTRL_2,
++ RT5682_JD1_EN_MASK, RT5682_JD1_DIS);
++ regmap_update_bits(rt5682->regmap, RT5682_RC_CLK_CTRL,
++ RT5682_POW_JDH | RT5682_POW_JDL, 0);
++ return 0;
++ }
++
+ switch (rt5682->pdata.jd_src) {
+ case RT5682_JD1:
+ snd_soc_component_update_bits(component, RT5682_CBJ_CTRL_2,
+@@ -1032,8 +1042,6 @@ static int rt5682_set_jack_detect(struct snd_soc_component *component,
+ break;
+ }
+
+- rt5682->hs_jack = hs_jack;
+-
+ return 0;
+ }
+
+diff --git a/sound/soc/codecs/wm8994.c b/sound/soc/codecs/wm8994.c
+index c3d06e8bc54f..d5fb7f5dd551 100644
+--- a/sound/soc/codecs/wm8994.c
++++ b/sound/soc/codecs/wm8994.c
+@@ -533,13 +533,10 @@ static SOC_ENUM_SINGLE_DECL(dac_osr,
+ static SOC_ENUM_SINGLE_DECL(adc_osr,
+ WM8994_OVERSAMPLING, 1, osr_text);
+
+-static const struct snd_kcontrol_new wm8994_snd_controls[] = {
++static const struct snd_kcontrol_new wm8994_common_snd_controls[] = {
+ SOC_DOUBLE_R_TLV("AIF1ADC1 Volume", WM8994_AIF1_ADC1_LEFT_VOLUME,
+ WM8994_AIF1_ADC1_RIGHT_VOLUME,
+ 1, 119, 0, digital_tlv),
+-SOC_DOUBLE_R_TLV("AIF1ADC2 Volume", WM8994_AIF1_ADC2_LEFT_VOLUME,
+- WM8994_AIF1_ADC2_RIGHT_VOLUME,
+- 1, 119, 0, digital_tlv),
+ SOC_DOUBLE_R_TLV("AIF2ADC Volume", WM8994_AIF2_ADC_LEFT_VOLUME,
+ WM8994_AIF2_ADC_RIGHT_VOLUME,
+ 1, 119, 0, digital_tlv),
+@@ -556,8 +553,6 @@ SOC_ENUM("AIF2DACR Source", aif2dacr_src),
+
+ SOC_DOUBLE_R_TLV("AIF1DAC1 Volume", WM8994_AIF1_DAC1_LEFT_VOLUME,
+ WM8994_AIF1_DAC1_RIGHT_VOLUME, 1, 96, 0, digital_tlv),
+-SOC_DOUBLE_R_TLV("AIF1DAC2 Volume", WM8994_AIF1_DAC2_LEFT_VOLUME,
+- WM8994_AIF1_DAC2_RIGHT_VOLUME, 1, 96, 0, digital_tlv),
+ SOC_DOUBLE_R_TLV("AIF2DAC Volume", WM8994_AIF2_DAC_LEFT_VOLUME,
+ WM8994_AIF2_DAC_RIGHT_VOLUME, 1, 96, 0, digital_tlv),
+
+@@ -565,17 +560,12 @@ SOC_SINGLE_TLV("AIF1 Boost Volume", WM8994_AIF1_CONTROL_2, 10, 3, 0, aif_tlv),
+ SOC_SINGLE_TLV("AIF2 Boost Volume", WM8994_AIF2_CONTROL_2, 10, 3, 0, aif_tlv),
+
+ SOC_SINGLE("AIF1DAC1 EQ Switch", WM8994_AIF1_DAC1_EQ_GAINS_1, 0, 1, 0),
+-SOC_SINGLE("AIF1DAC2 EQ Switch", WM8994_AIF1_DAC2_EQ_GAINS_1, 0, 1, 0),
+ SOC_SINGLE("AIF2 EQ Switch", WM8994_AIF2_EQ_GAINS_1, 0, 1, 0),
+
+ WM8994_DRC_SWITCH("AIF1DAC1 DRC Switch", WM8994_AIF1_DRC1_1, 2),
+ WM8994_DRC_SWITCH("AIF1ADC1L DRC Switch", WM8994_AIF1_DRC1_1, 1),
+ WM8994_DRC_SWITCH("AIF1ADC1R DRC Switch", WM8994_AIF1_DRC1_1, 0),
+
+-WM8994_DRC_SWITCH("AIF1DAC2 DRC Switch", WM8994_AIF1_DRC2_1, 2),
+-WM8994_DRC_SWITCH("AIF1ADC2L DRC Switch", WM8994_AIF1_DRC2_1, 1),
+-WM8994_DRC_SWITCH("AIF1ADC2R DRC Switch", WM8994_AIF1_DRC2_1, 0),
+-
+ WM8994_DRC_SWITCH("AIF2DAC DRC Switch", WM8994_AIF2_DRC_1, 2),
+ WM8994_DRC_SWITCH("AIF2ADCL DRC Switch", WM8994_AIF2_DRC_1, 1),
+ WM8994_DRC_SWITCH("AIF2ADCR DRC Switch", WM8994_AIF2_DRC_1, 0),
+@@ -594,9 +584,6 @@ SOC_SINGLE("Sidetone HPF Switch", WM8994_SIDETONE, 6, 1, 0),
+ SOC_ENUM("AIF1ADC1 HPF Mode", aif1adc1_hpf),
+ SOC_DOUBLE("AIF1ADC1 HPF Switch", WM8994_AIF1_ADC1_FILTERS, 12, 11, 1, 0),
+
+-SOC_ENUM("AIF1ADC2 HPF Mode", aif1adc2_hpf),
+-SOC_DOUBLE("AIF1ADC2 HPF Switch", WM8994_AIF1_ADC2_FILTERS, 12, 11, 1, 0),
+-
+ SOC_ENUM("AIF2ADC HPF Mode", aif2adc_hpf),
+ SOC_DOUBLE("AIF2ADC HPF Switch", WM8994_AIF2_ADC_FILTERS, 12, 11, 1, 0),
+
+@@ -637,6 +624,24 @@ SOC_SINGLE("AIF2DAC 3D Stereo Switch", WM8994_AIF2_DAC_FILTERS_2,
+ 8, 1, 0),
+ };
+
++/* Controls not available on WM1811 */
++static const struct snd_kcontrol_new wm8994_snd_controls[] = {
++SOC_DOUBLE_R_TLV("AIF1ADC2 Volume", WM8994_AIF1_ADC2_LEFT_VOLUME,
++ WM8994_AIF1_ADC2_RIGHT_VOLUME,
++ 1, 119, 0, digital_tlv),
++SOC_DOUBLE_R_TLV("AIF1DAC2 Volume", WM8994_AIF1_DAC2_LEFT_VOLUME,
++ WM8994_AIF1_DAC2_RIGHT_VOLUME, 1, 96, 0, digital_tlv),
++
++SOC_SINGLE("AIF1DAC2 EQ Switch", WM8994_AIF1_DAC2_EQ_GAINS_1, 0, 1, 0),
++
++WM8994_DRC_SWITCH("AIF1DAC2 DRC Switch", WM8994_AIF1_DRC2_1, 2),
++WM8994_DRC_SWITCH("AIF1ADC2L DRC Switch", WM8994_AIF1_DRC2_1, 1),
++WM8994_DRC_SWITCH("AIF1ADC2R DRC Switch", WM8994_AIF1_DRC2_1, 0),
++
++SOC_ENUM("AIF1ADC2 HPF Mode", aif1adc2_hpf),
++SOC_DOUBLE("AIF1ADC2 HPF Switch", WM8994_AIF1_ADC2_FILTERS, 12, 11, 1, 0),
++};
++
+ static const struct snd_kcontrol_new wm8994_eq_controls[] = {
+ SOC_SINGLE_TLV("AIF1DAC1 EQ1 Volume", WM8994_AIF1_DAC1_EQ_GAINS_1, 11, 31, 0,
+ eq_tlv),
+@@ -4258,13 +4263,15 @@ static int wm8994_component_probe(struct snd_soc_component *component)
+ wm8994_handle_pdata(wm8994);
+
+ wm_hubs_add_analogue_controls(component);
+- snd_soc_add_component_controls(component, wm8994_snd_controls,
+- ARRAY_SIZE(wm8994_snd_controls));
++ snd_soc_add_component_controls(component, wm8994_common_snd_controls,
++ ARRAY_SIZE(wm8994_common_snd_controls));
+ snd_soc_dapm_new_controls(dapm, wm8994_dapm_widgets,
+ ARRAY_SIZE(wm8994_dapm_widgets));
+
+ switch (control->type) {
+ case WM8994:
++ snd_soc_add_component_controls(component, wm8994_snd_controls,
++ ARRAY_SIZE(wm8994_snd_controls));
+ snd_soc_dapm_new_controls(dapm, wm8994_specific_dapm_widgets,
+ ARRAY_SIZE(wm8994_specific_dapm_widgets));
+ if (control->revision < 4) {
+@@ -4284,8 +4291,10 @@ static int wm8994_component_probe(struct snd_soc_component *component)
+ }
+ break;
+ case WM8958:
++ snd_soc_add_component_controls(component, wm8994_snd_controls,
++ ARRAY_SIZE(wm8994_snd_controls));
+ snd_soc_add_component_controls(component, wm8958_snd_controls,
+- ARRAY_SIZE(wm8958_snd_controls));
++ ARRAY_SIZE(wm8958_snd_controls));
+ snd_soc_dapm_new_controls(dapm, wm8958_dapm_widgets,
+ ARRAY_SIZE(wm8958_dapm_widgets));
+ if (control->revision < 1) {
+diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
+index f5fbadc5e7e2..914fb3be5fea 100644
+--- a/sound/soc/codecs/wm_adsp.c
++++ b/sound/soc/codecs/wm_adsp.c
+@@ -1259,8 +1259,7 @@ static unsigned int wmfw_convert_flags(unsigned int in, unsigned int len)
+ }
+
+ if (in) {
+- if (in & WMFW_CTL_FLAG_READABLE)
+- out |= rd;
++ out |= rd;
+ if (in & WMFW_CTL_FLAG_WRITEABLE)
+ out |= wr;
+ if (in & WMFW_CTL_FLAG_VOLATILE)
+diff --git a/sound/soc/intel/boards/sof_rt5682.c b/sound/soc/intel/boards/sof_rt5682.c
+index daeaa396d928..9e59586e03ba 100644
+--- a/sound/soc/intel/boards/sof_rt5682.c
++++ b/sound/soc/intel/boards/sof_rt5682.c
+@@ -573,6 +573,15 @@ static int sof_audio_probe(struct platform_device *pdev)
+ /* need to get main clock from pmc */
+ if (sof_rt5682_quirk & SOF_RT5682_MCLK_BYTCHT_EN) {
+ ctx->mclk = devm_clk_get(&pdev->dev, "pmc_plt_clk_3");
++ if (IS_ERR(ctx->mclk)) {
++ ret = PTR_ERR(ctx->mclk);
++
++ dev_err(&pdev->dev,
++ "Failed to get MCLK from pmc_plt_clk_3: %d\n",
++ ret);
++ return ret;
++ }
++
+ ret = clk_prepare_enable(ctx->mclk);
+ if (ret < 0) {
+ dev_err(&pdev->dev,
+@@ -618,8 +627,24 @@ static int sof_audio_probe(struct platform_device *pdev)
+ &sof_audio_card_rt5682);
+ }
+
++static int sof_rt5682_remove(struct platform_device *pdev)
++{
++ struct snd_soc_card *card = platform_get_drvdata(pdev);
++ struct snd_soc_component *component = NULL;
++
++ for_each_card_components(card, component) {
++ if (!strcmp(component->name, rt5682_component[0].name)) {
++ snd_soc_component_set_jack(component, NULL, NULL);
++ break;
++ }
++ }
++
++ return 0;
++}
++
+ static struct platform_driver sof_audio = {
+ .probe = sof_audio_probe,
++ .remove = sof_rt5682_remove,
+ .driver = {
+ .name = "sof_rt5682",
+ .pm = &snd_soc_pm_ops,
+diff --git a/sound/soc/rockchip/rockchip_i2s.c b/sound/soc/rockchip/rockchip_i2s.c
+index 88ebaf6e1880..a0506e554c98 100644
+--- a/sound/soc/rockchip/rockchip_i2s.c
++++ b/sound/soc/rockchip/rockchip_i2s.c
+@@ -674,7 +674,7 @@ static int rockchip_i2s_probe(struct platform_device *pdev)
+ ret = rockchip_pcm_platform_register(&pdev->dev);
+ if (ret) {
+ dev_err(&pdev->dev, "Could not register PCM\n");
+- return ret;
++ goto err_suspend;
+ }
+
+ return 0;
+diff --git a/sound/soc/samsung/arndale_rt5631.c b/sound/soc/samsung/arndale_rt5631.c
+index c213913eb984..fd8c6642fb0d 100644
+--- a/sound/soc/samsung/arndale_rt5631.c
++++ b/sound/soc/samsung/arndale_rt5631.c
+@@ -5,6 +5,7 @@
+ // Author: Claude <claude@insginal.co.kr>
+
+ #include <linux/module.h>
++#include <linux/of_device.h>
+ #include <linux/platform_device.h>
+ #include <linux/clk.h>
+
+@@ -74,6 +75,17 @@ static struct snd_soc_card arndale_rt5631 = {
+ .num_links = ARRAY_SIZE(arndale_rt5631_dai),
+ };
+
++static void arndale_put_of_nodes(struct snd_soc_card *card)
++{
++ struct snd_soc_dai_link *dai_link;
++ int i;
++
++ for_each_card_prelinks(card, i, dai_link) {
++ of_node_put(dai_link->cpus->of_node);
++ of_node_put(dai_link->codecs->of_node);
++ }
++}
++
+ static int arndale_audio_probe(struct platform_device *pdev)
+ {
+ int n, ret;
+@@ -103,18 +115,31 @@ static int arndale_audio_probe(struct platform_device *pdev)
+ if (!arndale_rt5631_dai[0].codecs->of_node) {
+ dev_err(&pdev->dev,
+ "Property 'samsung,audio-codec' missing or invalid\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto err_put_of_nodes;
+ }
+ }
+
+ ret = devm_snd_soc_register_card(card->dev, card);
++ if (ret) {
++ dev_err(&pdev->dev, "snd_soc_register_card() failed: %d\n", ret);
++ goto err_put_of_nodes;
++ }
++ return 0;
+
+- if (ret)
+- dev_err(&pdev->dev, "snd_soc_register_card() failed:%d\n", ret);
+-
++err_put_of_nodes:
++ arndale_put_of_nodes(card);
+ return ret;
+ }
+
++static int arndale_audio_remove(struct platform_device *pdev)
++{
++ struct snd_soc_card *card = platform_get_drvdata(pdev);
++
++ arndale_put_of_nodes(card);
++ return 0;
++}
++
+ static const struct of_device_id samsung_arndale_rt5631_of_match[] __maybe_unused = {
+ { .compatible = "samsung,arndale-rt5631", },
+ { .compatible = "samsung,arndale-alc5631", },
+@@ -129,6 +154,7 @@ static struct platform_driver arndale_audio_driver = {
+ .of_match_table = of_match_ptr(samsung_arndale_rt5631_of_match),
+ },
+ .probe = arndale_audio_probe,
++ .remove = arndale_audio_remove,
+ };
+
+ module_platform_driver(arndale_audio_driver);
+diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c
+index dc463f1a9e24..1cc5a07a2f5c 100644
+--- a/sound/soc/soc-topology.c
++++ b/sound/soc/soc-topology.c
+@@ -1588,7 +1588,7 @@ static int soc_tplg_dapm_widget_create(struct soc_tplg *tplg,
+
+ /* map user to kernel widget ID */
+ template.id = get_widget_id(le32_to_cpu(w->id));
+- if (template.id < 0)
++ if ((int)template.id < 0)
+ return template.id;
+
+ /* strings are allocated here, but used and freed by the widget */
+diff --git a/sound/soc/sof/control.c b/sound/soc/sof/control.c
+index a4983f90ff5b..2b8711eda362 100644
+--- a/sound/soc/sof/control.c
++++ b/sound/soc/sof/control.c
+@@ -60,13 +60,16 @@ int snd_sof_volume_put(struct snd_kcontrol *kcontrol,
+ struct snd_sof_dev *sdev = scontrol->sdev;
+ struct sof_ipc_ctrl_data *cdata = scontrol->control_data;
+ unsigned int i, channels = scontrol->num_channels;
++ bool change = false;
++ u32 value;
+
+ /* update each channel */
+ for (i = 0; i < channels; i++) {
+- cdata->chanv[i].value =
+- mixer_to_ipc(ucontrol->value.integer.value[i],
++ value = mixer_to_ipc(ucontrol->value.integer.value[i],
+ scontrol->volume_table, sm->max + 1);
++ change = change || (value != cdata->chanv[i].value);
+ cdata->chanv[i].channel = i;
++ cdata->chanv[i].value = value;
+ }
+
+ /* notify DSP of mixer updates */
+@@ -76,8 +79,7 @@ int snd_sof_volume_put(struct snd_kcontrol *kcontrol,
+ SOF_CTRL_TYPE_VALUE_CHAN_GET,
+ SOF_CTRL_CMD_VOLUME,
+ true);
+-
+- return 0;
++ return change;
+ }
+
+ int snd_sof_switch_get(struct snd_kcontrol *kcontrol,
+@@ -105,11 +107,15 @@ int snd_sof_switch_put(struct snd_kcontrol *kcontrol,
+ struct snd_sof_dev *sdev = scontrol->sdev;
+ struct sof_ipc_ctrl_data *cdata = scontrol->control_data;
+ unsigned int i, channels = scontrol->num_channels;
++ bool change = false;
++ u32 value;
+
+ /* update each channel */
+ for (i = 0; i < channels; i++) {
+- cdata->chanv[i].value = ucontrol->value.integer.value[i];
++ value = ucontrol->value.integer.value[i];
++ change = change || (value != cdata->chanv[i].value);
+ cdata->chanv[i].channel = i;
++ cdata->chanv[i].value = value;
+ }
+
+ /* notify DSP of mixer updates */
+@@ -120,7 +126,7 @@ int snd_sof_switch_put(struct snd_kcontrol *kcontrol,
+ SOF_CTRL_CMD_SWITCH,
+ true);
+
+- return 0;
++ return change;
+ }
+
+ int snd_sof_enum_get(struct snd_kcontrol *kcontrol,
+@@ -148,11 +154,15 @@ int snd_sof_enum_put(struct snd_kcontrol *kcontrol,
+ struct snd_sof_dev *sdev = scontrol->sdev;
+ struct sof_ipc_ctrl_data *cdata = scontrol->control_data;
+ unsigned int i, channels = scontrol->num_channels;
++ bool change = false;
++ u32 value;
+
+ /* update each channel */
+ for (i = 0; i < channels; i++) {
+- cdata->chanv[i].value = ucontrol->value.enumerated.item[i];
++ value = ucontrol->value.enumerated.item[i];
++ change = change || (value != cdata->chanv[i].value);
+ cdata->chanv[i].channel = i;
++ cdata->chanv[i].value = value;
+ }
+
+ /* notify DSP of enum updates */
+@@ -163,7 +173,7 @@ int snd_sof_enum_put(struct snd_kcontrol *kcontrol,
+ SOF_CTRL_CMD_ENUM,
+ true);
+
+- return 0;
++ return change;
+ }
+
+ int snd_sof_bytes_get(struct snd_kcontrol *kcontrol,
+diff --git a/sound/soc/sof/intel/Kconfig b/sound/soc/sof/intel/Kconfig
+index dd14ce92fe10..a5fd356776ee 100644
+--- a/sound/soc/sof/intel/Kconfig
++++ b/sound/soc/sof/intel/Kconfig
+@@ -241,6 +241,16 @@ config SND_SOC_SOF_HDA_AUDIO_CODEC
+ Say Y if you want to enable HDAudio codecs with SOF.
+ If unsure select "N".
+
++config SND_SOC_SOF_HDA_ALWAYS_ENABLE_DMI_L1
++ bool "SOF enable DMI Link L1"
++ help
++ This option enables DMI L1 for both playback and capture
++ and disables known workarounds for specific HDaudio platforms.
++ Only use to look into power optimizations on platforms not
++ affected by DMI L1 issues. This option is not recommended.
++ Say Y if you want to enable DMI Link L1
++ If unsure, select "N".
++
+ endif ## SND_SOC_SOF_HDA_COMMON
+
+ config SND_SOC_SOF_HDA_LINK_BASELINE
+diff --git a/sound/soc/sof/intel/bdw.c b/sound/soc/sof/intel/bdw.c
+index 70d524ef9bc0..0ca3c1b55eeb 100644
+--- a/sound/soc/sof/intel/bdw.c
++++ b/sound/soc/sof/intel/bdw.c
+@@ -37,6 +37,7 @@
+ #define MBOX_SIZE 0x1000
+ #define MBOX_DUMP_SIZE 0x30
+ #define EXCEPT_OFFSET 0x800
++#define EXCEPT_MAX_HDR_SIZE 0x400
+
+ /* DSP peripherals */
+ #define DMAC0_OFFSET 0xFE000
+@@ -228,6 +229,11 @@ static void bdw_get_registers(struct snd_sof_dev *sdev,
+ /* note: variable AR register array is not read */
+
+ /* then get panic info */
++ if (xoops->arch_hdr.totalsize > EXCEPT_MAX_HDR_SIZE) {
++ dev_err(sdev->dev, "invalid header size 0x%x. FW oops is bogus\n",
++ xoops->arch_hdr.totalsize);
++ return;
++ }
+ offset += xoops->arch_hdr.totalsize;
+ sof_mailbox_read(sdev, offset, panic_info, sizeof(*panic_info));
+
+@@ -588,6 +594,7 @@ static int bdw_probe(struct snd_sof_dev *sdev)
+ /* TODO: add offsets */
+ sdev->mmio_bar = BDW_DSP_BAR;
+ sdev->mailbox_bar = BDW_DSP_BAR;
++ sdev->dsp_oops_offset = MBOX_OFFSET;
+
+ /* PCI base */
+ mmio = platform_get_resource(pdev, IORESOURCE_MEM,
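The EXCEPT_MAX_HDR_SIZE check added above (and repeated for the Baytrail and
HDA code below) treats the firmware-provided totalsize as untrusted input: it
is used as a read offset into the mailbox, so a corrupt value from a crashed
DSP must be rejected before it is dereferenced. A hedged sketch of the idea,
with a hypothetical header struct standing in for the SOF oops header:

	#include <stddef.h>
	#include <stdint.h>
	#include <stdio.h>

	#define EXCEPT_MAX_HDR_SIZE 0x400

	/* hypothetical stand-in for the firmware oops header */
	struct arch_hdr {
		uint32_t totalsize;	/* written by the (crashed) firmware */
	};

	/* validate the size field before using it as an offset */
	static int panic_offset(const struct arch_hdr *hdr, size_t base,
				size_t *off)
	{
		if (hdr->totalsize > EXCEPT_MAX_HDR_SIZE) {
			fprintf(stderr, "invalid header size 0x%x\n",
				(unsigned int)hdr->totalsize);
			return -1;
		}
		*off = base + hdr->totalsize;
		return 0;
	}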
+diff --git a/sound/soc/sof/intel/byt.c b/sound/soc/sof/intel/byt.c
+index 107d711efc3f..96faaa8fa5a3 100644
+--- a/sound/soc/sof/intel/byt.c
++++ b/sound/soc/sof/intel/byt.c
+@@ -28,6 +28,7 @@
+ #define MBOX_OFFSET 0x144000
+ #define MBOX_SIZE 0x1000
+ #define EXCEPT_OFFSET 0x800
++#define EXCEPT_MAX_HDR_SIZE 0x400
+
+ /* DSP peripherals */
+ #define DMAC0_OFFSET 0x098000
+@@ -273,6 +274,11 @@ static void byt_get_registers(struct snd_sof_dev *sdev,
+ /* note: variable AR register array is not read */
+
+ /* then get panic info */
++ if (xoops->arch_hdr.totalsize > EXCEPT_MAX_HDR_SIZE) {
++ dev_err(sdev->dev, "invalid header size 0x%x. FW oops is bogus\n",
++ xoops->arch_hdr.totalsize);
++ return;
++ }
+ offset += xoops->arch_hdr.totalsize;
+ sof_mailbox_read(sdev, offset, panic_info, sizeof(*panic_info));
+
+diff --git a/sound/soc/sof/intel/hda-ctrl.c b/sound/soc/sof/intel/hda-ctrl.c
+index ea63f83a509b..760094d49f18 100644
+--- a/sound/soc/sof/intel/hda-ctrl.c
++++ b/sound/soc/sof/intel/hda-ctrl.c
+@@ -139,20 +139,16 @@ void hda_dsp_ctrl_misc_clock_gating(struct snd_sof_dev *sdev, bool enable)
+ */
+ int hda_dsp_ctrl_clock_power_gating(struct snd_sof_dev *sdev, bool enable)
+ {
+-#if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA)
+- struct hdac_bus *bus = sof_to_bus(sdev);
+-#endif
+ u32 val;
+
+ /* enable/disable audio dsp clock gating */
+ val = enable ? PCI_CGCTL_ADSPDCGE : 0;
+ snd_sof_pci_update_bits(sdev, PCI_CGCTL, PCI_CGCTL_ADSPDCGE, val);
+
+-#if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA)
+- /* enable/disable L1 support */
+- val = enable ? SOF_HDA_VS_EM2_L1SEN : 0;
+- snd_hdac_chip_updatel(bus, VS_EM2, SOF_HDA_VS_EM2_L1SEN, val);
+-#endif
++ /* enable/disable DMI Link L1 support */
++ val = enable ? HDA_VS_INTEL_EM2_L1SEN : 0;
++ snd_sof_dsp_update_bits(sdev, HDA_DSP_HDA_BAR, HDA_VS_INTEL_EM2,
++ HDA_VS_INTEL_EM2_L1SEN, val);
+
+ /* enable/disable audio dsp power gating */
+ val = enable ? 0 : PCI_PGCTL_ADSPPGD;
+diff --git a/sound/soc/sof/intel/hda-loader.c b/sound/soc/sof/intel/hda-loader.c
+index 6427f0b3a2f1..65c2af3fcaab 100644
+--- a/sound/soc/sof/intel/hda-loader.c
++++ b/sound/soc/sof/intel/hda-loader.c
+@@ -44,6 +44,7 @@ static int cl_stream_prepare(struct snd_sof_dev *sdev, unsigned int format,
+ return -ENODEV;
+ }
+ hstream = &dsp_stream->hstream;
++ hstream->substream = NULL;
+
+ /* allocate DMA buffer */
+ ret = snd_dma_alloc_pages(SNDRV_DMA_TYPE_DEV_SG, &pci->dev, size, dmab);
+diff --git a/sound/soc/sof/intel/hda-stream.c b/sound/soc/sof/intel/hda-stream.c
+index ad8d41f22e92..2c7447188402 100644
+--- a/sound/soc/sof/intel/hda-stream.c
++++ b/sound/soc/sof/intel/hda-stream.c
+@@ -185,6 +185,17 @@ hda_dsp_stream_get(struct snd_sof_dev *sdev, int direction)
+ direction == SNDRV_PCM_STREAM_PLAYBACK ?
+ "playback" : "capture");
+
++ /*
++ * Disable DMI Link L1 entry when capture stream is opened.
++ * Workaround to address a known issue with host DMA that results
++ * in xruns during pause/release in capture scenarios.
++ */
++ if (!IS_ENABLED(CONFIG_SND_SOC_SOF_HDA_ALWAYS_ENABLE_DMI_L1))
++ if (stream && direction == SNDRV_PCM_STREAM_CAPTURE)
++ snd_sof_dsp_update_bits(sdev, HDA_DSP_HDA_BAR,
++ HDA_VS_INTEL_EM2,
++ HDA_VS_INTEL_EM2_L1SEN, 0);
++
+ return stream;
+ }
+
+@@ -193,23 +204,43 @@ int hda_dsp_stream_put(struct snd_sof_dev *sdev, int direction, int stream_tag)
+ {
+ struct hdac_bus *bus = sof_to_bus(sdev);
+ struct hdac_stream *s;
++ bool active_capture_stream = false;
++ bool found = false;
+
+ spin_lock_irq(&bus->reg_lock);
+
+- /* find used stream */
++ /*
++ * close stream matching the stream tag
++ * and check if there are any open capture streams.
++ */
+ list_for_each_entry(s, &bus->stream_list, list) {
+- if (s->direction == direction &&
+- s->opened && s->stream_tag == stream_tag) {
++ if (!s->opened)
++ continue;
++
++ if (s->direction == direction && s->stream_tag == stream_tag) {
+ s->opened = false;
+- spin_unlock_irq(&bus->reg_lock);
+- return 0;
++ found = true;
++ } else if (s->direction == SNDRV_PCM_STREAM_CAPTURE) {
++ active_capture_stream = true;
+ }
+ }
+
+ spin_unlock_irq(&bus->reg_lock);
+
+- dev_dbg(sdev->dev, "stream_tag %d not opened!\n", stream_tag);
+- return -ENODEV;
++ /* Enable DMI L1 entry if there are no capture streams open */
++ if (!IS_ENABLED(CONFIG_SND_SOC_SOF_HDA_ALWAYS_ENABLE_DMI_L1))
++ if (!active_capture_stream)
++ snd_sof_dsp_update_bits(sdev, HDA_DSP_HDA_BAR,
++ HDA_VS_INTEL_EM2,
++ HDA_VS_INTEL_EM2_L1SEN,
++ HDA_VS_INTEL_EM2_L1SEN);
++
++ if (!found) {
++ dev_dbg(sdev->dev, "stream_tag %d not opened!\n", stream_tag);
++ return -ENODEV;
++ }
++
++ return 0;
+ }
+
+ int hda_dsp_stream_trigger(struct snd_sof_dev *sdev,
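Both DMI L1 guards above use IS_ENABLED(), which only works when given the
full CONFIG_-prefixed symbol; handed a bare name, it silently evaluates to 0.
A simplified, self-contained sketch of how the macro family from
include/linux/kconfig.h resolves to a constant usable in plain C expressions
(the real IS_ENABLED() also accounts for =m options):

	#define __ARG_PLACEHOLDER_1 0,
	#define __take_second_arg(__ignored, val, ...) val
	#define __is_defined(x) ___is_defined(x)
	#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
	#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
	#define IS_ENABLED(option) __is_defined(option)

	#define CONFIG_FOO 1	/* pretend Kconfig set FOO=y */

	int foo_enabled(void)
	{
		/* 1 here; IS_ENABLED(CONFIG_BAR) would be 0 */
		return IS_ENABLED(CONFIG_FOO);
	}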
+diff --git a/sound/soc/sof/intel/hda.c b/sound/soc/sof/intel/hda.c
+index 7f665392618f..f2d45d62dfa5 100644
+--- a/sound/soc/sof/intel/hda.c
++++ b/sound/soc/sof/intel/hda.c
+@@ -37,6 +37,8 @@
+ #define IS_CFL(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0xa348)
+ #define IS_CNL(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0x9dc8)
+
++#define EXCEPT_MAX_HDR_SIZE 0x400
++
+ /*
+ * Debug
+ */
+@@ -121,6 +123,11 @@ static void hda_dsp_get_registers(struct snd_sof_dev *sdev,
+ /* note: variable AR register array is not read */
+
+ /* then get panic info */
++ if (xoops->arch_hdr.totalsize > EXCEPT_MAX_HDR_SIZE) {
++ dev_err(sdev->dev, "invalid header size 0x%x. FW oops is bogus\n",
++ xoops->arch_hdr.totalsize);
++ return;
++ }
+ offset += xoops->arch_hdr.totalsize;
+ sof_block_read(sdev, sdev->mmio_bar, offset,
+ panic_info, sizeof(*panic_info));
+diff --git a/sound/soc/sof/intel/hda.h b/sound/soc/sof/intel/hda.h
+index d9c17146200b..2cc789f0e83c 100644
+--- a/sound/soc/sof/intel/hda.h
++++ b/sound/soc/sof/intel/hda.h
+@@ -39,7 +39,6 @@
+ #define SOF_HDA_WAKESTS 0x0E
+ #define SOF_HDA_WAKESTS_INT_MASK ((1 << 8) - 1)
+ #define SOF_HDA_RIRBSTS 0x5d
+-#define SOF_HDA_VS_EM2_L1SEN BIT(13)
+
+ /* SOF_HDA_GCTL register bist */
+ #define SOF_HDA_GCTL_RESET BIT(0)
+@@ -228,6 +227,10 @@
+ #define HDA_DSP_REG_HIPCIE (HDA_DSP_IPC_BASE + 0x0C)
+ #define HDA_DSP_REG_HIPCCTL (HDA_DSP_IPC_BASE + 0x10)
+
++/* Intel Vendor Specific Registers */
++#define HDA_VS_INTEL_EM2 0x1030
++#define HDA_VS_INTEL_EM2_L1SEN BIT(13)
++
+ /* HIPCI */
+ #define HDA_DSP_REG_HIPCI_BUSY BIT(31)
+ #define HDA_DSP_REG_HIPCI_MSG_MASK 0x7FFFFFFF
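The new HDA_VS_INTEL_EM2 / HDA_VS_INTEL_EM2_L1SEN definitions are consumed
through snd_sof_dsp_update_bits(), a read-modify-write helper: only the bits
selected by the mask are touched and the rest of the register is preserved.
A minimal sketch of that idiom over a memory-mapped register, with the MMIO
accessors reduced to plain volatile loads and stores for illustration:

	#include <stdbool.h>
	#include <stdint.h>

	/* update only the masked bits; report whether the register changed */
	static bool update_bits(volatile uint32_t *reg, uint32_t mask,
				uint32_t value)
	{
		uint32_t old = *reg;
		uint32_t new = (old & ~mask) | (value & mask);

		if (new == old)
			return false;
		*reg = new;
		return true;
	}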
+diff --git a/sound/soc/sof/loader.c b/sound/soc/sof/loader.c
+index 952a19091c58..01775231f2b8 100644
+--- a/sound/soc/sof/loader.c
++++ b/sound/soc/sof/loader.c
+@@ -370,10 +370,10 @@ int snd_sof_run_firmware(struct snd_sof_dev *sdev)
+ msecs_to_jiffies(sdev->boot_timeout));
+ if (ret == 0) {
+ dev_err(sdev->dev, "error: firmware boot failure\n");
+- /* after this point FW_READY msg should be ignored */
+- sdev->boot_complete = true;
+ snd_sof_dsp_dbg_dump(sdev, SOF_DBG_REGS | SOF_DBG_MBOX |
+ SOF_DBG_TEXT | SOF_DBG_PCI);
++ /* after this point FW_READY msg should be ignored */
++ sdev->boot_complete = true;
+ return -EIO;
+ }
+
+diff --git a/sound/soc/sof/topology.c b/sound/soc/sof/topology.c
+index 432ae343f960..96230329e678 100644
+--- a/sound/soc/sof/topology.c
++++ b/sound/soc/sof/topology.c
+@@ -907,7 +907,9 @@ static void sof_parse_word_tokens(struct snd_soc_component *scomp,
+ for (j = 0; j < count; j++) {
+ /* match token type */
+ if (!(tokens[j].type == SND_SOC_TPLG_TUPLE_TYPE_WORD ||
+- tokens[j].type == SND_SOC_TPLG_TUPLE_TYPE_SHORT))
++ tokens[j].type == SND_SOC_TPLG_TUPLE_TYPE_SHORT ||
++ tokens[j].type == SND_SOC_TPLG_TUPLE_TYPE_BYTE ||
++ tokens[j].type == SND_SOC_TPLG_TUPLE_TYPE_BOOL))
+ continue;
+
+ /* match token id */
+diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c
+index e3776f5c2e01..637e18142658 100644
+--- a/tools/perf/builtin-c2c.c
++++ b/tools/perf/builtin-c2c.c
+@@ -2627,6 +2627,7 @@ static int build_cl_output(char *cl_sort, bool no_source)
+ bool add_sym = false;
+ bool add_dso = false;
+ bool add_src = false;
++ int ret = 0;
+
+ if (!buf)
+ return -ENOMEM;
+@@ -2645,7 +2646,8 @@ static int build_cl_output(char *cl_sort, bool no_source)
+ add_dso = true;
+ } else if (strcmp(tok, "offset")) {
+ pr_err("unrecognized sort token: %s\n", tok);
+- return -EINVAL;
++ ret = -EINVAL;
++ goto err;
+ }
+ }
+
+@@ -2668,13 +2670,15 @@ static int build_cl_output(char *cl_sort, bool no_source)
+ add_sym ? "symbol," : "",
+ add_dso ? "dso," : "",
+ add_src ? "cl_srcline," : "",
+- "node") < 0)
+- return -ENOMEM;
++ "node") < 0) {
++ ret = -ENOMEM;
++ goto err;
++ }
+
+ c2c.show_src = add_src;
+-
++err:
+ free(buf);
+- return 0;
++ return ret;
+ }
+
+ static int setup_coalesce(const char *coalesce, bool no_source)
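The builtin-c2c.c change above is the single-exit cleanup idiom: the early
returns after a successful allocation leaked buf, so error paths now jump to
one label that frees it on every exit. Sketched in isolation:

	#include <errno.h>
	#include <stdlib.h>
	#include <string.h>

	static int parse(const char *input)
	{
		char *buf = strdup(input);
		int ret = 0;

		if (!buf)
			return -ENOMEM;

		if (strchr(buf, '!')) {		/* some validation failure */
			ret = -EINVAL;
			goto err;		/* share the cleanup below */
		}

		/* ... work with buf ... */
	err:
		free(buf);			/* success and failure paths */
		return ret;
	}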
+diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c
+index 9e5e60898083..353c9417e864 100644
+--- a/tools/perf/builtin-kmem.c
++++ b/tools/perf/builtin-kmem.c
+@@ -688,6 +688,7 @@ static char *compact_gfp_flags(char *gfp_flags)
+ new = realloc(new_flags, len + strlen(cpt) + 2);
+ if (new == NULL) {
+ free(new_flags);
++ free(orig_flags);
+ return NULL;
+ }
+
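The one-line kmem fix addresses a classic realloc() pitfall: on failure
realloc() returns NULL but leaves the original block allocated, so bailing
out without freeing the old pointer leaks it. The safe shape looks like:

	#include <stdlib.h>

	/* grow a buffer without leaking it when realloc() fails */
	static char *grow(char *buf, size_t new_len)
	{
		char *tmp = realloc(buf, new_len);

		if (!tmp) {
			free(buf);	/* old block still exists on failure */
			return NULL;
		}
		return tmp;		/* old pointer may be stale; use tmp */
	}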
+diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
+index e95a2a26c40a..277cdf1fc5ac 100644
+--- a/tools/perf/util/header.c
++++ b/tools/perf/util/header.c
+@@ -1282,8 +1282,10 @@ static int build_mem_topology(struct memory_node *nodes, u64 size, u64 *cntp)
+ continue;
+
+ if (WARN_ONCE(cnt >= size,
+- "failed to write MEM_TOPOLOGY, way too many nodes\n"))
++ "failed to write MEM_TOPOLOGY, way too many nodes\n")) {
++ closedir(dir);
+ return -1;
++ }
+
+ ret = memory_node__read(&nodes[cnt++], idx);
+ }
+diff --git a/tools/perf/util/util.c b/tools/perf/util/util.c
+index a61535cf1bca..d0930c38e147 100644
+--- a/tools/perf/util/util.c
++++ b/tools/perf/util/util.c
+@@ -176,8 +176,10 @@ static int rm_rf_depth_pat(const char *path, int depth, const char **pat)
+ if (!strcmp(d->d_name, ".") || !strcmp(d->d_name, ".."))
+ continue;
+
+- if (!match_pat(d->d_name, pat))
+- return -2;
++ if (!match_pat(d->d_name, pat)) {
++ ret = -2;
++ break;
++ }
+
+ scnprintf(namebuf, sizeof(namebuf), "%s/%s",
+ path, d->d_name);
+diff --git a/tools/testing/selftests/kvm/x86_64/sync_regs_test.c b/tools/testing/selftests/kvm/x86_64/sync_regs_test.c
+index 11c2a70a7b87..5c8224256294 100644
+--- a/tools/testing/selftests/kvm/x86_64/sync_regs_test.c
++++ b/tools/testing/selftests/kvm/x86_64/sync_regs_test.c
+@@ -22,18 +22,19 @@
+
+ #define VCPU_ID 5
+
++#define UCALL_PIO_PORT ((uint16_t)0x1000)
++
++/*
++ * ucall is embedded here to protect against compiler reshuffling registers
++ * before calling a function. In this test we only need to get KVM_EXIT_IO
++ * vmexit and preserve RBX, no additional information is needed.
++ */
+ void guest_code(void)
+ {
+- /*
+- * use a callee-save register, otherwise the compiler
+- * saves it around the call to GUEST_SYNC.
+- */
+- register u32 stage asm("rbx");
+- for (;;) {
+- GUEST_SYNC(0);
+- stage++;
+- asm volatile ("" : : "r" (stage));
+- }
++ asm volatile("1: in %[port], %%al\n"
++ "add $0x1, %%rbx\n"
++ "jmp 1b"
++ : : [port] "d" (UCALL_PIO_PORT) : "rax", "rbx");
+ }
+
+ static void compare_regs(struct kvm_regs *left, struct kvm_regs *right)
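The rewritten guest_code() leans on GCC extended inline assembly: the "d"
constraint pins the port number into %rdx for the in instruction, and the
clobber list tells the compiler that %rax and %rbx change behind its back, so
it must not cache values in those registers across the loop. A small
self-contained example of the same constraint machinery (x86-64 only, shown
purely as an illustration; real port I/O needs ring 0 or ioperm()):

	#include <stdint.h>

	/* read one byte from an I/O port: "d" places port in %dx,
	 * "=a" returns the result from %al */
	static inline uint8_t inb(uint16_t port)
	{
		uint8_t val;

		asm volatile("in %1, %0" : "=a"(val) : "d"(port));
		return val;
	}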
+diff --git a/tools/testing/selftests/kvm/x86_64/vmx_set_nested_state_test.c b/tools/testing/selftests/kvm/x86_64/vmx_set_nested_state_test.c
+index 853e370e8a39..a6d85614ae4d 100644
+--- a/tools/testing/selftests/kvm/x86_64/vmx_set_nested_state_test.c
++++ b/tools/testing/selftests/kvm/x86_64/vmx_set_nested_state_test.c
+@@ -271,12 +271,7 @@ int main(int argc, char *argv[])
+ state.flags = KVM_STATE_NESTED_RUN_PENDING;
+ test_nested_state_expect_einval(vm, &state);
+
+- /*
+- * TODO: When SVM support is added for KVM_SET_NESTED_STATE
+- * add tests here to support it like VMX.
+- */
+- if (entry->ecx & CPUID_VMX)
+- test_vmx_nested_state(vm);
++ test_vmx_nested_state(vm);
+
+ kvm_vm_free(vm);
+ return 0;
+diff --git a/tools/testing/selftests/net/fib_tests.sh b/tools/testing/selftests/net/fib_tests.sh
+index c4ba0ff4a53f..76c1897e6352 100755
+--- a/tools/testing/selftests/net/fib_tests.sh
++++ b/tools/testing/selftests/net/fib_tests.sh
+@@ -1438,6 +1438,27 @@ ipv4_addr_metric_test()
+ fi
+ log_test $rc 0 "Prefix route with metric on link up"
+
++ # explicitly check for metric changes on edge scenarios
++ run_cmd "$IP addr flush dev dummy2"
++ run_cmd "$IP addr add dev dummy2 172.16.104.0/24 metric 259"
++ run_cmd "$IP addr change dev dummy2 172.16.104.0/24 metric 260"
++ rc=$?
++ if [ $rc -eq 0 ]; then
++ check_route "172.16.104.0/24 dev dummy2 proto kernel scope link src 172.16.104.0 metric 260"
++ rc=$?
++ fi
++ log_test $rc 0 "Modify metric of .0/24 address"
++
++ run_cmd "$IP addr flush dev dummy2"
++ run_cmd "$IP addr add dev dummy2 172.16.104.1/32 peer 172.16.104.2 metric 260"
++ run_cmd "$IP addr change dev dummy2 172.16.104.1/32 peer 172.16.104.2 metric 261"
++ rc=$?
++ if [ $rc -eq 0 ]; then
++ check_route "172.16.104.2 dev dummy2 proto kernel scope link src 172.16.104.1 metric 261"
++ rc=$?
++ fi
++ log_test $rc 0 "Modify metric of address with peer route"
++
+ $IP li del dummy1
+ $IP li del dummy2
+ cleanup
+diff --git a/tools/testing/selftests/net/reuseport_dualstack.c b/tools/testing/selftests/net/reuseport_dualstack.c
+index fe3230c55986..fb7a59ed759e 100644
+--- a/tools/testing/selftests/net/reuseport_dualstack.c
++++ b/tools/testing/selftests/net/reuseport_dualstack.c
+@@ -129,7 +129,7 @@ static void test(int *rcv_fds, int count, int proto)
+ {
+ struct epoll_event ev;
+ int epfd, i, test_fd;
+- uint16_t test_family;
++ int test_family;
+ socklen_t len;
+
+ epfd = epoll_create(1);
+@@ -146,6 +146,7 @@ static void test(int *rcv_fds, int count, int proto)
+ send_from_v4(proto);
+
+ test_fd = receive_once(epfd, proto);
++ len = sizeof(test_family);
+ if (getsockopt(test_fd, SOL_SOCKET, SO_DOMAIN, &test_family, &len))
+ error(1, errno, "failed to read socket domain");
+ if (test_family != AF_INET)
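The dualstack fix corrects two classic getsockopt() mistakes at once:
SO_DOMAIN yields an int (reading it into a uint16_t invites stack corruption
or garbage), and the length argument is value-result, so it must be
initialized to the buffer size before every call. The correct shape:

	#include <errno.h>
	#include <error.h>
	#include <sys/socket.h>

	static int socket_domain(int fd)
	{
		int domain;			/* SO_DOMAIN is an int */
		socklen_t len = sizeof(domain);	/* value-result argument */

		if (getsockopt(fd, SOL_SOCKET, SO_DOMAIN, &domain, &len))
			error(1, errno, "failed to read socket domain");
		return domain;			/* e.g. AF_INET or AF_INET6 */
	}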
+diff --git a/tools/testing/selftests/powerpc/mm/Makefile b/tools/testing/selftests/powerpc/mm/Makefile
+index f1fbc15800c4..ed1565809d2b 100644
+--- a/tools/testing/selftests/powerpc/mm/Makefile
++++ b/tools/testing/selftests/powerpc/mm/Makefile
+@@ -4,6 +4,7 @@ noarg:
+
+ TEST_GEN_PROGS := hugetlb_vs_thp_test subpage_prot prot_sao segv_errors wild_bctr \
+ large_vm_fork_separation
++TEST_GEN_PROGS_EXTENDED := tlbie_test
+ TEST_GEN_FILES := tempfile
+
+ top_srcdir = ../../../../..
+@@ -19,3 +20,4 @@ $(OUTPUT)/large_vm_fork_separation: CFLAGS += -m64
+ $(OUTPUT)/tempfile:
+ dd if=/dev/zero of=$@ bs=64k count=1
+
++$(OUTPUT)/tlbie_test: LDLIBS += -lpthread
+diff --git a/tools/testing/selftests/powerpc/mm/tlbie_test.c b/tools/testing/selftests/powerpc/mm/tlbie_test.c
+new file mode 100644
+index 000000000000..f85a0938ab25
+--- /dev/null
++++ b/tools/testing/selftests/powerpc/mm/tlbie_test.c
+@@ -0,0 +1,734 @@
++// SPDX-License-Identifier: GPL-2.0
++
++/*
++ * Copyright 2019, Nick Piggin, Gautham R. Shenoy, Aneesh Kumar K.V, IBM Corp.
++ */
++
++/*
++ *
++ * Test the tlbie/mtpidr race. We have 4 threads doing a
++ * flush/load/compare/store sequence in a loop. The same threads also run a
++ * context-switch task that does sched_yield() in a loop.
++ *
++ * The snapshot thread marks the mmap area PROT_READ in between, makes a copy
++ * and copies it back to the original area. This helps us detect whether any
++ * store continued to happen after we marked the memory PROT_READ.
++ */
++
++#define _GNU_SOURCE
++#include <stdio.h>
++#include <sys/mman.h>
++#include <sys/types.h>
++#include <sys/wait.h>
++#include <sys/ipc.h>
++#include <sys/shm.h>
++#include <sys/stat.h>
++#include <sys/time.h>
++#include <linux/futex.h>
++#include <unistd.h>
++#include <asm/unistd.h>
++#include <string.h>
++#include <stdlib.h>
++#include <fcntl.h>
++#include <sched.h>
++#include <time.h>
++#include <stdarg.h>
++#include <sched.h>
++#include <pthread.h>
++#include <signal.h>
++#include <sys/prctl.h>
++
++static inline void dcbf(volatile unsigned int *addr)
++{
++ __asm__ __volatile__ ("dcbf %y0; sync" : : "Z"(*(unsigned char *)addr) : "memory");
++}
++
++static void err_msg(char *msg)
++{
++
++ time_t now;
++ time(&now);
++ printf("=================================\n");
++ printf(" Error: %s\n", msg);
++ printf(" %s", ctime(&now));
++ printf("=================================\n");
++ exit(1);
++}
++
++static char *map1;
++static char *map2;
++static pid_t rim_process_pid;
++
++/*
++ * A "rim-sequence" is defined to be the sequence of the following
++ * operations performed on a memory word:
++ * 1) FLUSH the contents of that word.
++ * 2) LOAD the contents of that word.
++ * 3) COMPARE the contents of that word with the content that was
++ * previously stored at that word
++ * 4) STORE new content into that word.
++ *
++ * The threads in this test that perform the rim-sequence are termed
++ * as rim_threads.
++ */
++
++/*
++ * A "corruption" is defined to be the failed COMPARE operation in a
++ * rim-sequence.
++ *
++ * A rim_thread that detects a corruption informs about it to all the
++ * other rim_threads, and the mem_snapshot thread.
++ */
++static volatile unsigned int corruption_found;
++
++/*
++ * This defines the maximum number of rim_threads in this test.
++ *
++ * The THREAD_ID_BITS denote the number of bits required
++ * to represent the thread_ids [0..MAX_THREADS - 1].
++ * We are being a bit paranoid here and set it to 8 bits,
++ * though 6 bits suffice.
++ *
++ */
++#define MAX_THREADS 64
++#define THREAD_ID_BITS 8
++#define THREAD_ID_MASK ((1 << THREAD_ID_BITS) - 1)
++static unsigned int rim_thread_ids[MAX_THREADS];
++static pthread_t rim_threads[MAX_THREADS];
++
++
++/*
++ * Each rim_thread works on an exclusive "chunk" of size
++ * RIM_CHUNK_SIZE.
++ *
++ * The ith rim_thread works on the ith chunk.
++ *
++ * The ith chunk begins at
++ * map1 + (i * RIM_CHUNK_SIZE)
++ */
++#define RIM_CHUNK_SIZE 1024
++#define BITS_PER_BYTE 8
++#define WORD_SIZE (sizeof(unsigned int))
++#define WORD_BITS (WORD_SIZE * BITS_PER_BYTE)
++#define WORDS_PER_CHUNK (RIM_CHUNK_SIZE/WORD_SIZE)
++
++static inline char *compute_chunk_start_addr(unsigned int thread_id)
++{
++ char *chunk_start;
++
++ chunk_start = (char *)((unsigned long)map1 +
++ (thread_id * RIM_CHUNK_SIZE));
++
++ return chunk_start;
++}
++
++/*
++ * The "word-offset" of a word-aligned address inside a chunk, is
++ * defined to be the number of words that precede the address in that
++ * chunk.
++ *
++ * WORD_OFFSET_BITS denote the number of bits required to represent
++ * the word-offsets of all the word-aligned addresses of a chunk.
++ */
++#define WORD_OFFSET_BITS (__builtin_ctz(WORDS_PER_CHUNK))
++#define WORD_OFFSET_MASK ((1 << WORD_OFFSET_BITS) - 1)
++
++static inline unsigned int compute_word_offset(char *start, unsigned int *addr)
++{
++ unsigned int delta_bytes, ret;
++ delta_bytes = (unsigned long)addr - (unsigned long)start;
++
++ ret = delta_bytes/WORD_SIZE;
++
++ return ret;
++}
++
++/*
++ * A "sweep" is defined to be the sequential execution of the
++ * rim-sequence by a rim_thread on its chunk one word at a time,
++ * starting from the first word of its chunk and ending with the last
++ * word of its chunk.
++ *
++ * Each sweep of a rim_thread is uniquely identified by a sweep_id.
++ * SWEEP_ID_BITS denote the number of bits required to represent
++ * the sweep_ids of rim_threads.
++ *
++ * As to why SWEEP_ID_BITS are computed as a function of THREAD_ID_BITS,
++ * WORD_OFFSET_BITS, and WORD_BITS, see the "store-pattern" below.
++ */
++#define SWEEP_ID_BITS (WORD_BITS - (THREAD_ID_BITS + WORD_OFFSET_BITS))
++#define SWEEP_ID_MASK ((1 << SWEEP_ID_BITS) - 1)
++
++/*
++ * A "store-pattern" is the word-pattern that is stored into a word
++ * location in the 4)STORE step of the rim-sequence.
++ *
++ * In the store-pattern, we shall encode:
++ *
++ * - The thread-id of the rim_thread performing the store
++ * (The most significant THREAD_ID_BITS)
++ *
++ * - The word-offset of the address into which the store is being
++ * performed (The next WORD_OFFSET_BITS)
++ *
++ * - The sweep_id of the current sweep in which the store is
++ * being performed. (The lower SWEEP_ID_BITS)
++ *
++ * Store Pattern: 32 bits
++ * |------------------|--------------------|---------------------------------|
++ * | Thread id | Word offset | sweep_id |
++ * |------------------|--------------------|---------------------------------|
++ * THREAD_ID_BITS WORD_OFFSET_BITS SWEEP_ID_BITS
++ *
++ * In the store pattern, the (Thread-id + Word-offset) uniquely identify the
++ * address to which the store is being performed i.e,
++ * address == map1 +
++ * (Thread-id * RIM_CHUNK_SIZE) + (Word-offset * WORD_SIZE)
++ *
++ * And the sweep_id in the store pattern identifies the time when the
++ * store was performed by the rim_thread.
++ *
++ * We shall use this property in the 3)COMPARE step of the
++ * rim-sequence.
++ */
++#define SWEEP_ID_SHIFT 0
++#define WORD_OFFSET_SHIFT (SWEEP_ID_BITS)
++#define THREAD_ID_SHIFT (WORD_OFFSET_BITS + SWEEP_ID_BITS)
++
++/*
++ * Compute the store pattern for a given thread with id @tid, at
++ * location @addr in the sweep identified by @sweep_id
++ */
++static inline unsigned int compute_store_pattern(unsigned int tid,
++ unsigned int *addr,
++ unsigned int sweep_id)
++{
++ unsigned int ret = 0;
++ char *start = compute_chunk_start_addr(tid);
++ unsigned int word_offset = compute_word_offset(start, addr);
++
++ ret += (tid & THREAD_ID_MASK) << THREAD_ID_SHIFT;
++ ret += (word_offset & WORD_OFFSET_MASK) << WORD_OFFSET_SHIFT;
++ ret += (sweep_id & SWEEP_ID_MASK) << SWEEP_ID_SHIFT;
++ return ret;
++}
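++
++/*
++ * Worked example with illustrative values: RIM_CHUNK_SIZE = 1024 and
++ * WORD_SIZE = 4 give WORDS_PER_CHUNK = 256, hence WORD_OFFSET_BITS = 8 and
++ * SWEEP_ID_BITS = 32 - (8 + 8) = 16. For tid = 3 storing to the word at
++ * offset 5 of its chunk during sweep 9, the pattern is
++ * (3 << 24) | (5 << 16) | 9 == 0x03050009, so a corrupted word identifies
++ * who wrote it, where, and when.
++ */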
++
++/* Extract the thread-id from the given store-pattern */
++static inline unsigned int extract_tid(unsigned int pattern)
++{
++ unsigned int ret;
++
++ ret = (pattern >> THREAD_ID_SHIFT) & THREAD_ID_MASK;
++ return ret;
++}
++
++/* Extract the word-offset from the given store-pattern */
++static inline unsigned int extract_word_offset(unsigned int pattern)
++{
++ unsigned int ret;
++
++ ret = (pattern >> WORD_OFFSET_SHIFT) & WORD_OFFSET_MASK;
++
++ return ret;
++}
++
++/* Extract the sweep-id from the given store-pattern */
++static inline unsigned int extract_sweep_id(unsigned int pattern)
++{
++ unsigned int ret;
++
++ ret = (pattern >> SWEEP_ID_SHIFT) & SWEEP_ID_MASK;
++
++ return ret;
++}
++
++/************************************************************
++ * *
++ * Logging the output of the verification *
++ * *
++ ************************************************************/
++#define LOGDIR_NAME_SIZE 100
++static char logdir[LOGDIR_NAME_SIZE];
++
++static FILE *fp[MAX_THREADS];
++static const char logfilename[] = "Thread-%02d-Chunk";
++
++static inline void start_verification_log(unsigned int tid,
++ unsigned int *addr,
++ unsigned int cur_sweep_id,
++ unsigned int prev_sweep_id)
++{
++ FILE *f;
++ char logfile[30];
++ char path[LOGDIR_NAME_SIZE + 30];
++ char separator[2] = "/";
++ char *chunk_start = compute_chunk_start_addr(tid);
++ unsigned int size = RIM_CHUNK_SIZE;
++
++ sprintf(logfile, logfilename, tid);
++ strcpy(path, logdir);
++ strcat(path, separator);
++ strcat(path, logfile);
++ f = fopen(path, "w");
++
++ if (!f) {
++ err_msg("Unable to create logfile\n");
++ }
++
++ fp[tid] = f;
++
++ fprintf(f, "----------------------------------------------------------\n");
++ fprintf(f, "PID = %d\n", rim_process_pid);
++ fprintf(f, "Thread id = %02d\n", tid);
++ fprintf(f, "Chunk Start Addr = 0x%016lx\n", (unsigned long)chunk_start);
++ fprintf(f, "Chunk Size = %d\n", size);
++ fprintf(f, "Next Store Addr = 0x%016lx\n", (unsigned long)addr);
++ fprintf(f, "Current sweep-id = 0x%08x\n", cur_sweep_id);
++ fprintf(f, "Previous sweep-id = 0x%08x\n", prev_sweep_id);
++ fprintf(f, "----------------------------------------------------------\n");
++}
++
++static inline void log_anamoly(unsigned int tid, unsigned int *addr,
++ unsigned int expected, unsigned int observed)
++{
++ FILE *f = fp[tid];
++
++ fprintf(f, "Thread %02d: Addr 0x%lx: Expected 0x%x, Observed 0x%x\n",
++ tid, (unsigned long)addr, expected, observed);
++ fprintf(f, "Thread %02d: Expected Thread id = %02d\n", tid, extract_tid(expected));
++ fprintf(f, "Thread %02d: Observed Thread id = %02d\n", tid, extract_tid(observed));
++ fprintf(f, "Thread %02d: Expected Word offset = %03d\n", tid, extract_word_offset(expected));
++ fprintf(f, "Thread %02d: Observed Word offset = %03d\n", tid, extract_word_offset(observed));
++ fprintf(f, "Thread %02d: Expected sweep-id = 0x%x\n", tid, extract_sweep_id(expected));
++ fprintf(f, "Thread %02d: Observed sweep-id = 0x%x\n", tid, extract_sweep_id(observed));
++ fprintf(f, "----------------------------------------------------------\n");
++}
++
++static inline void end_verification_log(unsigned int tid, unsigned nr_anamolies)
++{
++ FILE *f = fp[tid];
++ char logfile[30];
++ char path[LOGDIR_NAME_SIZE + 30];
++ char separator[] = "/";
++
++ fclose(f);
++
++ /* build the path before it is used by remove() below */
++ sprintf(logfile, logfilename, tid);
++ strcpy(path, logdir);
++ strcat(path, separator);
++ strcat(path, logfile);
++
++ if (nr_anamolies == 0) {
++ remove(path);
++ return;
++ }
++
++ printf("Thread %02d chunk has %d corrupted words. For details check %s\n",
++ tid, nr_anamolies, path);
++}
++
++/*
++ * When a COMPARE step of a rim-sequence fails, the rim_thread informs
++ * everyone else via the shared_memory pointed to by
++ * corruption_found variable. On seeing this, every thread verifies the
++ * content of its chunk as follows.
++ *
++ * Suppose a thread identified with @tid was about to store (but not
++ * yet stored) to @next_store_addr in its current sweep identified
++ * @cur_sweep_id. Let @prev_sweep_id indicate the previous sweep_id.
++ *
++ * This implies that for all the addresses @addr < @next_store_addr,
++ * Thread @tid has already performed a store as part of its current
++ * sweep. Hence we expect the content of such @addr to be:
++ * |-------------------------------------------------|
++ * | tid | word_offset(addr) | cur_sweep_id |
++ * |-------------------------------------------------|
++ *
++ * Since Thread @tid is yet to perform stores on address
++ * @next_store_addr and above, we expect the content of such an
++ * address @addr to be:
++ * |-------------------------------------------------|
++ * | tid | word_offset(addr) | prev_sweep_id |
++ * |-------------------------------------------------|
++ *
++ * The verifier function @verify_chunk does this verification and logs
++ * any anomalies that it finds.
++ */
++static void verify_chunk(unsigned int tid, unsigned int *next_store_addr,
++ unsigned int cur_sweep_id,
++ unsigned int prev_sweep_id)
++{
++ unsigned int *iter_ptr;
++ unsigned int size = RIM_CHUNK_SIZE;
++ unsigned int expected;
++ unsigned int observed;
++ char *chunk_start = compute_chunk_start_addr(tid);
++
++ int nr_anamolies = 0;
++
++ start_verification_log(tid, next_store_addr,
++ cur_sweep_id, prev_sweep_id);
++
++ for (iter_ptr = (unsigned int *)chunk_start;
++ (unsigned long)iter_ptr < (unsigned long)chunk_start + size;
++ iter_ptr++) {
++ unsigned int expected_sweep_id;
++
++ if (iter_ptr < next_store_addr) {
++ expected_sweep_id = cur_sweep_id;
++ } else {
++ expected_sweep_id = prev_sweep_id;
++ }
++
++ expected = compute_store_pattern(tid, iter_ptr, expected_sweep_id);
++
++ dcbf((volatile unsigned int*)iter_ptr); //Flush before reading
++ observed = *iter_ptr;
++
++ if (observed != expected) {
++ nr_anamolies++;
++ log_anamoly(tid, iter_ptr, expected, observed);
++ }
++ }
++
++ end_verification_log(tid, nr_anamolies);
++}
++
++static void set_pthread_cpu(pthread_t th, int cpu)
++{
++ cpu_set_t run_cpu_mask;
++ struct sched_param param;
++
++ CPU_ZERO(&run_cpu_mask);
++ CPU_SET(cpu, &run_cpu_mask);
++ pthread_setaffinity_np(th, sizeof(cpu_set_t), &run_cpu_mask);
++
++ param.sched_priority = 1;
++ if (0 && sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
++ /* haven't reproduced with this setting, it kills random preemption which may be a factor */
++ fprintf(stderr, "could not set SCHED_FIFO, run as root?\n");
++ }
++}
++
++static void set_mycpu(int cpu)
++{
++ cpu_set_t run_cpu_mask;
++ struct sched_param param;
++
++ CPU_ZERO(&run_cpu_mask);
++ CPU_SET(cpu, &run_cpu_mask);
++ sched_setaffinity(0, sizeof(cpu_set_t), &run_cpu_mask);
++
++ param.sched_priority = 1;
++ if (0 && sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
++ fprintf(stderr, "could not set SCHED_FIFO, run as root?\n");
++ }
++}
++
++static volatile int segv_wait;
++
++static void segv_handler(int signo, siginfo_t *info, void *extra)
++{
++ while (segv_wait) {
++ sched_yield();
++ }
++}
++
++static void set_segv_handler(void)
++{
++ struct sigaction sa;
++
++ sa.sa_flags = SA_SIGINFO;
++ sa.sa_sigaction = segv_handler;
++
++ if (sigaction(SIGSEGV, &sa, NULL) == -1) {
++ perror("sigaction");
++ exit(EXIT_FAILURE);
++ }
++}
++
++int timeout = 0;
++/*
++ * This function is executed by every rim_thread.
++ *
++ * This function performs sweeps over the exclusive chunks of the
++ * rim_threads executing the rim-sequence one word at a time.
++ */
++static void *rim_fn(void *arg)
++{
++ unsigned int tid = *((unsigned int *)arg);
++
++ int size = RIM_CHUNK_SIZE;
++ char *chunk_start = compute_chunk_start_addr(tid);
++
++ unsigned int prev_sweep_id;
++ unsigned int cur_sweep_id = 0;
++
++ /* word access */
++ unsigned int pattern = cur_sweep_id;
++ unsigned int *pattern_ptr = &pattern;
++ unsigned int *w_ptr, read_data;
++
++ set_segv_handler();
++
++ /*
++ * Let us initialize the chunk:
++ *
++ * Each word-aligned address addr in the chunk,
++ * is initialized to :
++ * |-------------------------------------------------|
++ * | tid | word_offset(addr) | 0 |
++ * |-------------------------------------------------|
++ */
++ for (w_ptr = (unsigned int *)chunk_start;
++ (unsigned long)w_ptr < (unsigned long)(chunk_start) + size;
++ w_ptr++) {
++
++ *pattern_ptr = compute_store_pattern(tid, w_ptr, cur_sweep_id);
++ *w_ptr = *pattern_ptr;
++ }
++
++ while (!corruption_found && !timeout) {
++ prev_sweep_id = cur_sweep_id;
++ cur_sweep_id = cur_sweep_id + 1;
++
++ for (w_ptr = (unsigned int *)chunk_start;
++ (unsigned long)w_ptr < (unsigned long)(chunk_start) + size;
++ w_ptr++) {
++ unsigned int old_pattern;
++
++ /*
++ * Compute the pattern that we would have
++ * stored at this location in the previous
++ * sweep.
++ */
++ old_pattern = compute_store_pattern(tid, w_ptr, prev_sweep_id);
++
++ /*
++ * FLUSH:Ensure that we flush the contents of
++ * the cache before loading
++ */
++ dcbf((volatile unsigned int*)w_ptr); //Flush
++
++ /* LOAD: Read the value */
++ read_data = *w_ptr; //Load
++
++ /*
++ * COMPARE: Is it the same as what we had stored
++ * in the previous sweep ? It better be!
++ */
++ if (read_data != old_pattern) {
++ /* No it isn't! Tell everyone */
++ corruption_found = 1;
++ }
++
++ /*
++ * Before performing a store, let us check if
++ * any rim_thread has found a corruption.
++ */
++ if (corruption_found || timeout) {
++ /*
++ * Yes. Someone (including us!) has found
++ * a corruption :(
++ *
++ * Let us verify that our chunk is
++ * correct.
++ */
++ /* But first, let us allow the dust to settle down! */
++ verify_chunk(tid, w_ptr, cur_sweep_id, prev_sweep_id);
++
++ return 0;
++ }
++
++ /*
++ * Compute the new pattern that we are going
++ * to write to this location
++ */
++ *pattern_ptr = compute_store_pattern(tid, w_ptr, cur_sweep_id);
++
++ /*
++ * STORE: Now let us write this pattern into
++ * the location
++ */
++ *w_ptr = *pattern_ptr;
++ }
++ }
++
++ return NULL;
++}
++
++
++static unsigned long start_cpu = 0;
++static unsigned long nrthreads = 4;
++
++static pthread_t mem_snapshot_thread;
++
++static void *mem_snapshot_fn(void *arg)
++{
++ int page_size = getpagesize();
++ size_t size = page_size;
++ void *tmp = malloc(size);
++
++ while (!corruption_found && !timeout) {
++ /* Stop memory migration once corruption is found */
++ segv_wait = 1;
++
++ mprotect(map1, size, PROT_READ);
++
++ /*
++ * Load from the working alias (map1). Loading from map2
++ * also fails.
++ */
++ memcpy(tmp, map1, size);
++
++ /*
++ * Stores must go via map2 which has write permissions, but
++ * the corrupted data tends to be seen in the snapshot buffer,
++ * so corruption does not appear to be introduced at the
++ * copy-back via map2 alias here.
++ */
++ memcpy(map2, tmp, size);
++ /*
++ * Before releasing other threads, we must ensure the copy
++ * back to the original mapping has completed.
++ */
++ asm volatile("sync" ::: "memory");
++ mprotect(map1, size, PROT_READ|PROT_WRITE);
++ asm volatile("sync" ::: "memory");
++ segv_wait = 0;
++
++ usleep(1); /* This value makes a big difference */
++ }
++
++ return 0;
++}
++
++void alrm_sighandler(int sig)
++{
++ timeout = 1;
++}
++
++int main(int argc, char *argv[])
++{
++ int c;
++ int page_size = getpagesize();
++ time_t now;
++ int i, dir_error;
++ pthread_attr_t attr;
++ key_t shm_key = (key_t) getpid();
++ int shmid, run_time = 20 * 60;
++ struct sigaction sa_alrm;
++
++ snprintf(logdir, LOGDIR_NAME_SIZE,
++ "/tmp/logdir-%u", (unsigned int)getpid());
++ while ((c = getopt(argc, argv, "r:hn:l:t:")) != -1) {
++ switch(c) {
++ case 'r':
++ start_cpu = strtoul(optarg, NULL, 10);
++ break;
++ case 'h':
++ printf("%s [-r <start_cpu>] [-n <nrthreads>] [-l <logdir>] [-t <timeout>]\n", argv[0]);
++ exit(0);
++ break;
++ case 'n':
++ nrthreads = strtoul(optarg, NULL, 10);
++ break;
++ case 'l':
++ strncpy(logdir, optarg, LOGDIR_NAME_SIZE - 1);
++ break;
++ case 't':
++ run_time = strtoul(optarg, NULL, 10);
++ break;
++ default:
++ printf("invalid option\n");
++ exit(0);
++ break;
++ }
++ }
++
++ if (nrthreads > MAX_THREADS)
++ nrthreads = MAX_THREADS;
++
++ shmid = shmget(shm_key, page_size, IPC_CREAT|0666);
++ if (shmid < 0) {
++ err_msg("Failed shmget\n");
++ }
++
++ map1 = shmat(shmid, NULL, 0);
++ if (map1 == (void *) -1) {
++ err_msg("Failed shmat");
++ }
++
++ map2 = shmat(shmid, NULL, 0);
++ if (map2 == (void *) -1) {
++ err_msg("Failed shmat");
++ }
++
++ dir_error = mkdir(logdir, 0755);
++
++ if (dir_error) {
++ err_msg("Failed mkdir");
++ }
++
++ printf("start_cpu list:%lu\n", start_cpu);
++ printf("number of worker threads:%lu + 1 snapshot thread\n", nrthreads);
++ printf("Allocated address:0x%016lx + secondary map:0x%016lx\n", (unsigned long)map1, (unsigned long)map2);
++ printf("logdir at : %s\n", logdir);
++ printf("Timeout: %d seconds\n", run_time);
++
++ time(&now);
++ printf("=================================\n");
++ printf(" Starting Test\n");
++ printf(" %s", ctime(&now));
++ printf("=================================\n");
++
++ for (i = 0; i < nrthreads; i++) {
++ if (1 && !fork()) {
++ prctl(PR_SET_PDEATHSIG, SIGKILL);
++ set_mycpu(start_cpu + i);
++ for (;;)
++ sched_yield();
++ exit(0);
++ }
++ }
++
++
++ sa_alrm.sa_handler = &alrm_sighandler;
++ sigemptyset(&sa_alrm.sa_mask);
++ sa_alrm.sa_flags = 0;
++
++ if (sigaction(SIGALRM, &sa_alrm, 0) == -1) {
++ err_msg("Failed signal handler registration\n");
++ }
++
++ alarm(run_time);
++
++ pthread_attr_init(&attr);
++ for (i = 0; i < nrthreads; i++) {
++ rim_thread_ids[i] = i;
++ pthread_create(&rim_threads[i], &attr, rim_fn, &rim_thread_ids[i]);
++ set_pthread_cpu(rim_threads[i], start_cpu + i);
++ }
++
++ pthread_create(&mem_snapshot_thread, &attr, mem_snapshot_fn, map1);
++ set_pthread_cpu(mem_snapshot_thread, start_cpu + i);
++
++
++ pthread_join(mem_snapshot_thread, NULL);
++ for (i = 0; i < nrthreads; i++) {
++ pthread_join(rim_threads[i], NULL);
++ }
++
++ if (!timeout) {
++ time(&now);
++ printf("=================================\n");
++ printf(" Data Corruption Detected\n");
++ printf(" %s", ctime(&now));
++ printf(" See logfiles in %s\n", logdir);
++ printf("=================================\n");
++ return 1;
++ }
++ return 0;
++}
* [gentoo-commits] proj/linux-patches:5.3 commit in: /
@ 2019-11-12 19:36 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2019-11-12 19:36 UTC (permalink / raw
To: gentoo-commits
commit: d92b6599fe393f699614a7931661efd90b617e05
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Nov 12 19:36:13 2019 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Nov 12 19:36:13 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d92b6599
Linux patch 5.3.11
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1010_linux-5.3.11.patch | 12804 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 12808 insertions(+)
diff --git a/0000_README b/0000_README
index 6cbaced..075d9be 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch: 1009_linux-5.3.10.patch
From: http://www.kernel.org
Desc: Linux 5.3.10
+Patch: 1010_linux-5.3.11.patch
+From: http://www.kernel.org
+Desc: Linux 5.3.11
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1010_linux-5.3.11.patch b/1010_linux-5.3.11.patch
new file mode 100644
index 0000000..2d9e97b
--- /dev/null
+++ b/1010_linux-5.3.11.patch
@@ -0,0 +1,12804 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index 5f7d7b14fa44..0c28d07654bd 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -486,6 +486,8 @@ What: /sys/devices/system/cpu/vulnerabilities
+ /sys/devices/system/cpu/vulnerabilities/spec_store_bypass
+ /sys/devices/system/cpu/vulnerabilities/l1tf
+ /sys/devices/system/cpu/vulnerabilities/mds
++ /sys/devices/system/cpu/vulnerabilities/tsx_async_abort
++ /sys/devices/system/cpu/vulnerabilities/itlb_multihit
+ Date: January 2018
+ Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
+ Description: Information about CPU vulnerabilities
+diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
+index 49311f3da6f2..0795e3c2643f 100644
+--- a/Documentation/admin-guide/hw-vuln/index.rst
++++ b/Documentation/admin-guide/hw-vuln/index.rst
+@@ -12,3 +12,5 @@ are configurable at compile, boot or run time.
+ spectre
+ l1tf
+ mds
++ tsx_async_abort
++ multihit
+diff --git a/Documentation/admin-guide/hw-vuln/multihit.rst b/Documentation/admin-guide/hw-vuln/multihit.rst
+new file mode 100644
+index 000000000000..ba9988d8bce5
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/multihit.rst
+@@ -0,0 +1,163 @@
++iTLB multihit
++=============
++
++iTLB multihit is an erratum where some processors may incur a machine check
++error, possibly resulting in an unrecoverable CPU lockup, when an
++instruction fetch hits multiple entries in the instruction TLB. This can
++occur when the page size is changed along with either the physical address
++or cache type. A malicious guest running on a virtualized system can
++exploit this erratum to perform a denial of service attack.
++
++
++Affected processors
++-------------------
++
++Variations of this erratum are present on most Intel Core and Xeon processor
++models. The erratum is not present on:
++
++ - non-Intel processors
++
++ - Some Atoms (Airmont, Bonnell, Goldmont, GoldmontPlus, Saltwell, Silvermont)
++
++ - Intel processors that have the PSCHANGE_MC_NO bit set in the
++ IA32_ARCH_CAPABILITIES MSR.
++
++
++Related CVEs
++------------
++
++The following CVE entry is related to this issue:
++
++ ============== =================================================
++ CVE-2018-12207 Machine Check Error Avoidance on Page Size Change
++ ============== =================================================
++
++
++Problem
++-------
++
++Privileged software, including the OS and virtual machine managers (VMM), is in
++charge of memory management. A key component in memory management is the control
++of the page tables. Modern processors use virtual memory, a technique that creates
++the illusion of a very large memory for processors. This virtual space is split
++into pages of a given size. Page tables translate virtual addresses to physical
++addresses.
++
++To reduce latency when performing a virtual to physical address translation,
++processors include a structure, called TLB, that caches recent translations.
++There are separate TLBs for instruction (iTLB) and data (dTLB).
++
++Under this erratum, instructions are fetched from a linear address translated
++using a 4 KB translation cached in the iTLB. Privileged software then modifies
++the paging structure so that the same linear address is mapped using a large
++page size (2 MB, 4 MB, 1 GB) with a different physical address or memory
++type. After the page
++structure modification but before the software invalidates any iTLB entries for
++the linear address, a code fetch that happens on the same linear address may
++cause a machine-check error which can result in a system hang or shutdown.
++
++
++Attack scenarios
++----------------
++
++Attacks against the iTLB multihit erratum can be mounted from malicious
++guests in a virtualized system.
++
++
++iTLB multihit system information
++--------------------------------
++
++The Linux kernel provides a sysfs interface to enumerate the current iTLB
++multihit status of the system: whether the system is vulnerable and which
++mitigations are active. The relevant sysfs file is:
++
++/sys/devices/system/cpu/vulnerabilities/itlb_multihit
++
++The possible values in this file are:
++
++.. list-table::
++
++ * - Not affected
++ - The processor is not vulnerable.
++ * - KVM: Mitigation: Split huge pages
++ - Software changes mitigate this issue.
++ * - KVM: Vulnerable
++ - The processor is vulnerable, but no mitigation is enabled.
++
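++A userspace program can read this file like any other sysfs attribute. A
++minimal sketch in C, with error handling reduced to the essentials::
++
++  #include <stdio.h>
++
++  int main(void)
++  {
++      char buf[128];
++      FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/itlb_multihit", "r");
++
++      if (!f)
++          return 1;
++      if (fgets(buf, sizeof(buf), f))
++          printf("itlb_multihit: %s", buf); /* e.g. "Not affected" */
++      fclose(f);
++      return 0;
++  }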
++
++Enumeration of the erratum
++--------------------------------
++
++A new bit has been allocated in the IA32_ARCH_CAPABILITIES MSR (PSCHANGE_MC_NO)
++and will be set on CPUs which are mitigated against this issue.
++
++ ======================================= =========== ===============================
++ IA32_ARCH_CAPABILITIES MSR Not present Possibly vulnerable, check model
++ IA32_ARCH_CAPABILITIES[PSCHANGE_MC_NO] '0' Likely vulnerable, check model
++ IA32_ARCH_CAPABILITIES[PSCHANGE_MC_NO] '1' Not vulnerable
++ ======================================= =========== ===============================
++
++
++Mitigation mechanism
++-------------------------
++
++This erratum can be mitigated by restricting the use of large page sizes to
++non-executable pages. This forces all iTLB entries to be 4K, and removes
++the possibility of multiple hits.
++
++In order to mitigate the vulnerability, KVM initially marks all huge pages
++as non-executable. If the guest attempts to execute in one of those pages,
++the page is broken down into 4K pages, which are then marked executable.
++
++If EPT is disabled or not available on the host, KVM is in control of TLB
++flushes and the problematic situation cannot happen. However, the shadow
++EPT paging mechanism used by nested virtualization is vulnerable, because
++the nested guest can trigger multiple iTLB hits by modifying its own
++(non-nested) page tables. For simplicity, KVM will make large pages
++non-executable in all shadow paging modes.
++
++Mitigation control on the kernel command line and KVM - module parameter
++------------------------------------------------------------------------
++
++The KVM hypervisor mitigation mechanism for marking huge pages as
++non-executable can be controlled with a module parameter "nx_huge_pages=".
++The kernel command line allows controlling the iTLB multihit mitigations at
++boot time with the option "kvm.nx_huge_pages=".
++
++The valid arguments for these options are:
++
++ ========== ================================================================
++ force Mitigation is enabled. In this case, the mitigation implements
++ non-executable huge pages in Linux kernel KVM module. All huge
++ pages in the EPT are marked as non-executable.
++ If a guest attempts to execute in one of those pages, the page is
++ broken down into 4K pages, which are then marked executable.
++
++ off Mitigation is disabled.
++
++ auto Enable mitigation only if the platform is affected and the kernel
++ was not booted with the "mitigations=off" command line parameter.
++ This is the default option.
++ ========== ================================================================
++
++
++Mitigation selection guide
++--------------------------
++
++1. No virtualization in use
++^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++ The system is protected by the kernel unconditionally and no further
++ action is required.
++
++2. Virtualization with trusted guests
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++ If the guest comes from a trusted source, you may assume that the guest will
++ not attempt to maliciously exploit these errata and no further action is
++ required.
++
++3. Virtualization with untrusted guests
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++ If the guest comes from an untrusted source, the host kernel will need
++ to apply the iTLB multihit mitigation via the kernel command line or the
++ kvm module parameter.
+diff --git a/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst b/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
+new file mode 100644
+index 000000000000..fddbd7579c53
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
+@@ -0,0 +1,276 @@
++.. SPDX-License-Identifier: GPL-2.0
++
++TAA - TSX Asynchronous Abort
++======================================
++
++TAA is a hardware vulnerability that allows unprivileged speculative access to
++data which is available in various CPU internal buffers by using asynchronous
++aborts within an Intel TSX transactional region.
++
++Affected processors
++-------------------
++
++This vulnerability only affects Intel processors that support Intel
++Transactional Synchronization Extensions (TSX) when the TAA_NO bit (bit 8)
++is 0 in the IA32_ARCH_CAPABILITIES MSR. On processors where the MDS_NO bit
++(bit 5) is 0 in the IA32_ARCH_CAPABILITIES MSR, the existing MDS mitigations
++also mitigate against TAA.
++
++Whether a processor is affected or not can be read out from the TAA
++vulnerability file in sysfs. See :ref:`tsx_async_abort_sys_info`.
++
++Related CVEs
++------------
++
++The following CVE entry is related to this TAA issue:
++
++ ============== ===== ===================================================
++ CVE-2019-11135 TAA TSX Asynchronous Abort (TAA) condition on some
++ microprocessors utilizing speculative execution may
++ allow an authenticated user to potentially enable
++ information disclosure via a side channel with
++ local access.
++ ============== ===== ===================================================
++
++Problem
++-------
++
++When performing store, load or L1 refill operations, processors write
++data into temporary microarchitectural structures (buffers). The data in
++those buffers can be forwarded to load operations as an optimization.
++
++Intel TSX is an extension to the x86 instruction set architecture that adds
++hardware transactional memory support to improve performance of multi-threaded
++software. TSX lets the processor expose and exploit concurrency hidden in an
++application due to dynamically avoiding unnecessary synchronization.
++
++TSX supports atomic memory transactions that are either committed (success) or
++aborted. During an abort, operations that happened within the transactional region
++are rolled back. An asynchronous abort takes place, among other options, when a
++different thread accesses a cache line that is also used within the transactional
++region when that access might lead to a data race.
++
++Immediately after an uncompleted asynchronous abort, certain speculatively
++executed loads may read data from those internal buffers and pass it to dependent
++operations. This can be then used to infer the value via a cache side channel
++attack.
++
++Because the buffers are potentially shared between Hyper-Threads, cross
++Hyper-Thread attacks are possible.
++
++The victim of a malicious actor does not need to make use of TSX. Only the
++attacker needs to begin a TSX transaction and raise an asynchronous abort
++which in turn potentially leaks data stored in the buffers.
++
++More detailed technical information is available in the TAA specific x86
++architecture section: :ref:`Documentation/x86/tsx_async_abort.rst <tsx_async_abort>`.
++
++
++Attack scenarios
++----------------
++
++Attacks against the TAA vulnerability can be implemented from unprivileged
++applications running on hosts or guests.
++
++As for MDS, the attacker has no control over the memory addresses that can
++be leaked. Only the victim is responsible for bringing data to the CPU. As
++a result, the malicious actor has to sample as much data as possible and
++then postprocess it to try to infer any useful information from it.
++
++A potential attacker only has read access to the data. Also, there is no direct
++privilege escalation by using this technique.
++
++
++.. _tsx_async_abort_sys_info:
++
++TAA system information
++-----------------------
++
++The Linux kernel provides a sysfs interface to enumerate the current TAA status
++of mitigated systems. The relevant sysfs file is:
++
++/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
++
++The possible values in this file are:
++
++.. list-table::
++
++ * - 'Vulnerable'
++ - The CPU is affected by this vulnerability and the microcode and kernel mitigation are not applied.
++ * - 'Vulnerable: Clear CPU buffers attempted, no microcode'
++ - The system tries to clear the buffers but the microcode might not support the operation.
++ * - 'Mitigation: Clear CPU buffers'
++ - The microcode has been updated to clear the buffers. TSX is still enabled.
++ * - 'Mitigation: TSX disabled'
++ - TSX is disabled.
++ * - 'Not affected'
++ - The CPU is not affected by this issue.
++
++.. _ucode_needed:
++
++Best effort mitigation mode
++^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++If the processor is vulnerable, but the availability of the microcode-based
++mitigation mechanism is not advertised via CPUID the kernel selects a best
++effort mitigation mode. This mode invokes the mitigation instructions
++without a guarantee that they clear the CPU buffers.
++
++This is done to address virtualization scenarios where the host has the
++microcode update applied, but the hypervisor is not yet updated to expose the
++CPUID to the guest. If the host has updated microcode the protection takes
++effect; otherwise a few CPU cycles are wasted pointlessly.
++
++The state in the tsx_async_abort sysfs file reflects this situation
++accordingly.
++
++
++Mitigation mechanism
++--------------------
++
++The kernel detects the affected CPUs and the presence of the microcode which is
++required. If a CPU is affected and the microcode is available, then the kernel
++enables the mitigation by default.
++
++
++The mitigation can be controlled at boot time via a kernel command line option.
++See :ref:`taa_mitigation_control_command_line`.
++
++.. _virt_mechanism:
++
++Virtualization mitigation
++^^^^^^^^^^^^^^^^^^^^^^^^^
++
++Affected systems where the host has TAA microcode and TAA is mitigated by
++having disabled TSX previously, are not vulnerable regardless of the status
++of the VMs.
++
++In all other cases, if the host either does not have the TAA microcode or
++the kernel is not mitigated, the system might be vulnerable.
++
++
++.. _taa_mitigation_control_command_line:
++
++Mitigation control on the kernel command line
++---------------------------------------------
++
++The kernel command line allows controlling the TAA mitigations at boot time with
++the option "tsx_async_abort=". The valid arguments for this option are:
++
++ ============ =============================================================
++ off This option disables the TAA mitigation on affected platforms.
++ If the system has TSX enabled (see next parameter) and the CPU
++ is affected, the system is vulnerable.
++
++ full TAA mitigation is enabled. If TSX is enabled, on an affected
++ system it will clear CPU buffers on ring transitions. On
++ systems which are MDS-affected and deploy MDS mitigation,
++ TAA is also mitigated. Specifying this option on those
++ systems will have no effect.
++
++ full,nosmt The same as tsx_async_abort=full, with SMT disabled on
++ vulnerable CPUs that have TSX enabled. This is the complete
++ mitigation. When TSX is disabled, SMT is not disabled because
++ the CPU is not vulnerable to cross-thread TAA attacks.
++ ============ =============================================================
++
++Not specifying this option is equivalent to "tsx_async_abort=full".
++
++The kernel command line also allows controlling the TSX feature using the
++parameter "tsx=" on CPUs which support TSX control. MSR_IA32_TSX_CTRL is used
++to control the TSX feature and the enumeration of the TSX feature bits (RTM
++and HLE) in CPUID.
++
++The valid options are:
++
++ ============ =============================================================
++ off Disables TSX on the system.
++
++ Note that this option takes effect only on newer CPUs which are
++ not vulnerable to MDS, i.e., have MSR_IA32_ARCH_CAPABILITIES.MDS_NO=1
++ and which get the new IA32_TSX_CTRL MSR through a microcode
++ update. This new MSR allows for the reliable deactivation of
++ the TSX functionality.
++
++ on Enables TSX.
++
++ Although there are mitigations for all known security
++ vulnerabilities, TSX has been known to be an accelerator for
++ several previous speculation-related CVEs, and so there may be
++ unknown security risks associated with leaving it enabled.
++
++ auto Disables TSX if X86_BUG_TAA is present, otherwise enables TSX
++ on the system.
++ ============ =============================================================
++
++Not specifying this option is equivalent to "tsx=off".
++
++The following combinations of the "tsx_async_abort" and "tsx" options are
++possible. For affected platforms, tsx=auto is equivalent to tsx=off and the
++result will be:
++
++ ========= ========================== =========================================
++ tsx=on tsx_async_abort=full The system will use VERW to clear CPU
++ buffers. Cross-thread attacks are still
++ possible on SMT machines.
++ tsx=on tsx_async_abort=full,nosmt As above, cross-thread attacks on SMT
++ are mitigated.
++ tsx=on tsx_async_abort=off The system is vulnerable.
++ tsx=off tsx_async_abort=full TSX might be disabled if microcode
++ provides a TSX control MSR. If so,
++ system is not vulnerable.
++ tsx=off tsx_async_abort=full,nosmt Ditto
++ tsx=off tsx_async_abort=off Ditto
++ ========= ========================== =========================================
++
++
++For unaffected platforms "tsx=on" and "tsx_async_abort=full" do not clear CPU
++buffers. For platforms without TSX control (MSR_IA32_ARCH_CAPABILITIES.MDS_NO=0),
++the "tsx" command line argument has no effect.
++
++For the affected platforms, the table below indicates the mitigation status for the
++combinations of CPUID bit MD_CLEAR and IA32_ARCH_CAPABILITIES MSR bits MDS_NO
++and TSX_CTRL_MSR.
++
++ ======= ========= ============= ========================================
++ MDS_NO MD_CLEAR TSX_CTRL_MSR Status
++ ======= ========= ============= ========================================
++ 0 0 0 Vulnerable (needs microcode)
++ 0 1 0 MDS and TAA mitigated via VERW
++ 1 1 0 MDS fixed, TAA vulnerable if TSX enabled
++ because MD_CLEAR has no meaning and
++ VERW is not guaranteed to clear buffers
++ 1 X 1 MDS fixed, TAA can be mitigated by
++ VERW or TSX_CTRL_MSR
++ ======= ========= ============= ========================================
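++
++The MSR bits in this table can be inspected from userspace through the msr
++driver (modprobe msr). A hedged sketch in C reading IA32_ARCH_CAPABILITIES
++(address 0x10a) for CPU 0, where MDS_NO is bit 5, TSX_CTRL bit 7 and TAA_NO
++bit 8::
++
++  #include <fcntl.h>
++  #include <stdint.h>
++  #include <stdio.h>
++  #include <unistd.h>
++
++  int main(void)
++  {
++      uint64_t caps;
++      int fd = open("/dev/cpu/0/msr", O_RDONLY); /* needs root */
++
++      if (fd < 0 || pread(fd, &caps, sizeof(caps), 0x10a) != sizeof(caps))
++          return 1;
++      printf("MDS_NO=%d TSX_CTRL=%d TAA_NO=%d\n",
++             (int)((caps >> 5) & 1), (int)((caps >> 7) & 1),
++             (int)((caps >> 8) & 1));
++      close(fd);
++      return 0;
++  }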
++
++Mitigation selection guide
++--------------------------
++
++1. Trusted userspace and guests
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++If all user space applications are from a trusted source and do not execute
++untrusted code which is supplied externally, then the mitigation can be
++disabled. The same applies to virtualized environments with trusted guests.
++
++
++2. Untrusted userspace and guests
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++If there are untrusted applications or guests on the system, enabling TSX
++might allow a malicious actor to leak data from the host or from other
++processes running on the same physical core.
++
++If the microcode is available and the TSX is disabled on the host, attacks
++are prevented in a virtualized environment as well, even if the VMs do not
++explicitly enable the mitigation.
++
++
++.. _taa_default_mitigations:
++
++Default mitigations
++-------------------
++
++The kernel's default action for vulnerable processors is:
++
++ - Deploy TSX disable mitigation (tsx_async_abort=full tsx=off).
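
Once this series is applied, the resulting state is also exported through
sysfs. A minimal sketch for querying it (not part of the patch; the file does
not exist on kernels without the TAA changes):

    #include <stdio.h>

    int main(void)
    {
        char line[128];
        FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/tsx_async_abort", "r");

        if (!f) {
            perror("tsx_async_abort"); /* pre-TAA kernels lack this file */
            return 1;
        }
        /* Prints e.g. "Mitigation: TSX disabled" or "Vulnerable". */
        if (fgets(line, sizeof(line), f))
            fputs(line, stdout);
        fclose(f);
        return 0;
    }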
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 5ea005c9e2d6..49d1719177ea 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -2040,6 +2040,25 @@
+ KVM MMU at runtime.
+ Default is 0 (off)
+
++ kvm.nx_huge_pages=
++ [KVM] Controls the software workaround for the
++ X86_BUG_ITLB_MULTIHIT bug.
++ force : Always deploy workaround.
++ off : Never deploy workaround.
++ auto : Deploy workaround based on the presence of
++ X86_BUG_ITLB_MULTIHIT.
++
++ Default is 'auto'.
++
++			If the software workaround is enabled for the host,
++			guests do not need to enable it for nested guests.
++
++ kvm.nx_huge_pages_recovery_ratio=
++			[KVM] Controls how many 4KiB pages are periodically zapped
++			back to huge pages. 0 disables the recovery; otherwise, if
++			the value is N, KVM will zap 1/Nth of the 4KiB pages every
++			minute. The default is 60.
++
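
The ratio works out to a simple division per recovery pass. An illustrative
stand-alone sketch of the 1/N-per-minute behaviour described above (the real
accounting lives in KVM's recovery worker; the function and variable names
here are made up):

    #include <stdio.h>

    /* Hypothetical mirror of the documented behaviour: zap 1/N per minute. */
    static unsigned long pages_to_zap(unsigned long nx_lpage_splits,
                                      unsigned int ratio)
    {
        return ratio ? nx_lpage_splits / ratio : 0; /* 0 disables recovery */
    }

    int main(void)
    {
        /* With the default ratio of 60: 6000 split pages -> 100 per minute. */
        printf("%lu\n", pages_to_zap(6000, 60));
        return 0;
    }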
+ kvm-amd.nested= [KVM,AMD] Allow nested virtualization in KVM/SVM.
+ Default is 1 (enabled)
+
+@@ -2612,6 +2631,13 @@
+ ssbd=force-off [ARM64]
+ l1tf=off [X86]
+ mds=off [X86]
++ tsx_async_abort=off [X86]
++ kvm.nx_huge_pages=off [X86]
++
++ Exceptions:
++ This does not have any effect on
++ kvm.nx_huge_pages when
++ kvm.nx_huge_pages=force.
+
+ auto (default)
+ Mitigate all CPU vulnerabilities, but leave SMT
+@@ -2627,6 +2653,7 @@
+ be fully mitigated, even if it means losing SMT.
+ Equivalent to: l1tf=flush,nosmt [X86]
+ mds=full,nosmt [X86]
++ tsx_async_abort=full,nosmt [X86]
+
+ mminit_loglevel=
+ [KNL] When CONFIG_DEBUG_MEMORY_INIT is set, this
+@@ -4813,6 +4840,71 @@
+ interruptions from clocksource watchdog are not
+ acceptable).
+
++ tsx= [X86] Control Transactional Synchronization
++ Extensions (TSX) feature in Intel processors that
++ support TSX control.
++
++ This parameter controls the TSX feature. The options are:
++
++ on - Enable TSX on the system. Although there are
++ mitigations for all known security vulnerabilities,
++ TSX has been known to be an accelerator for
++ several previous speculation-related CVEs, and
++ so there may be unknown security risks associated
++ with leaving it enabled.
++
++ off - Disable TSX on the system. (Note that this
++ option takes effect only on newer CPUs which are
++ not vulnerable to MDS, i.e., have
++ MSR_IA32_ARCH_CAPABILITIES.MDS_NO=1 and which get
++ the new IA32_TSX_CTRL MSR through a microcode
++ update. This new MSR allows for the reliable
++ deactivation of the TSX functionality.)
++
++ auto - Disable TSX if X86_BUG_TAA is present,
++ otherwise enable TSX on the system.
++
++ Not specifying this option is equivalent to tsx=off.
++
++ See Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
++ for more details.
++
++ tsx_async_abort= [X86,INTEL] Control mitigation for the TSX Async
++ Abort (TAA) vulnerability.
++
++			Similar to Micro-architectural Data Sampling (MDS),
++			certain CPUs that support Transactional
++ Synchronization Extensions (TSX) are vulnerable to an
++ exploit against CPU internal buffers which can forward
++ information to a disclosure gadget under certain
++ conditions.
++
++ In vulnerable processors, the speculatively forwarded
++ data can be used in a cache side channel attack, to
++ access data to which the attacker does not have direct
++ access.
++
++ This parameter controls the TAA mitigation. The
++ options are:
++
++ full - Enable TAA mitigation on vulnerable CPUs
++ if TSX is enabled.
++
++ full,nosmt - Enable TAA mitigation and disable SMT on
++ vulnerable CPUs. If TSX is disabled, SMT
++				is not disabled because the CPU is not
++ vulnerable to cross-thread TAA attacks.
++ off - Unconditionally disable TAA mitigation
++
++ Not specifying this option is equivalent to
++ tsx_async_abort=full. On CPUs which are MDS affected
++ and deploy MDS mitigation, TAA mitigation is not
++ required and doesn't provide any additional
++ mitigation.
++
++ For details see:
++ Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
++
+ turbografx.map[2|3]= [HW,JOY]
+ TurboGraFX parallel port interface
+ Format:
+diff --git a/Documentation/arm64/silicon-errata.rst b/Documentation/arm64/silicon-errata.rst
+index 6e52d334bc55..d5f72a5b214f 100644
+--- a/Documentation/arm64/silicon-errata.rst
++++ b/Documentation/arm64/silicon-errata.rst
+@@ -91,6 +91,11 @@ stable kernels.
+ | ARM | MMU-500 | #841119,826419 | N/A |
+ +----------------+-----------------+-----------------+-----------------------------+
+ +----------------+-----------------+-----------------+-----------------------------+
++| Broadcom | Brahma-B53 | N/A | ARM64_ERRATUM_845719 |
+++----------------+-----------------+-----------------+-----------------------------+
++| Broadcom | Brahma-B53 | N/A | ARM64_ERRATUM_843419 |
+++----------------+-----------------+-----------------+-----------------------------+
+++----------------+-----------------+-----------------+-----------------------------+
+ | Cavium | ThunderX ITS | #22375,24313 | CAVIUM_ERRATUM_22375 |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Cavium | ThunderX ITS | #23144 | CAVIUM_ERRATUM_23144 |
+@@ -124,7 +129,7 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Qualcomm Tech. | Kryo/Falkor v1 | E1003 | QCOM_FALKOR_ERRATUM_1003 |
+ +----------------+-----------------+-----------------+-----------------------------+
+-| Qualcomm Tech. | Falkor v1 | E1009 | QCOM_FALKOR_ERRATUM_1009 |
++| Qualcomm Tech. | Kryo/Falkor v1 | E1009 | QCOM_FALKOR_ERRATUM_1009 |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Qualcomm Tech. | QDF2400 ITS | E0065 | QCOM_QDF2400_ERRATUM_0065 |
+ +----------------+-----------------+-----------------+-----------------------------+
+diff --git a/Documentation/x86/index.rst b/Documentation/x86/index.rst
+index af64c4bb4447..a8de2fbc1caa 100644
+--- a/Documentation/x86/index.rst
++++ b/Documentation/x86/index.rst
+@@ -27,6 +27,7 @@ x86-specific Documentation
+ mds
+ microcode
+ resctrl_ui
++ tsx_async_abort
+ usb-legacy-support
+ i386/index
+ x86_64/index
+diff --git a/Documentation/x86/tsx_async_abort.rst b/Documentation/x86/tsx_async_abort.rst
+new file mode 100644
+index 000000000000..583ddc185ba2
+--- /dev/null
++++ b/Documentation/x86/tsx_async_abort.rst
+@@ -0,0 +1,117 @@
++.. SPDX-License-Identifier: GPL-2.0
++
++TSX Async Abort (TAA) mitigation
++================================
++
++.. _tsx_async_abort:
++
++Overview
++--------
++
++TSX Async Abort (TAA) is a side channel attack on internal buffers in some
++Intel processors similar to Microarchitectural Data Sampling (MDS). In this
++case, certain loads may speculatively pass invalid data to dependent operations
++when an asynchronous abort condition is pending in a Transactional
++Synchronization Extensions (TSX) transaction. This includes loads with no
++fault or assist condition. Such loads may speculatively expose stale data from
++the same uarch data structures as in MDS, with the same scope of exposure, i.e.
++same-thread and cross-thread. This issue affects all current processors that
++support TSX.
++
++Mitigation strategy
++-------------------
++
++a) TSX disable - one of the mitigations is to disable TSX. A new MSR,
++IA32_TSX_CTRL, will be available on future and current processors after a
++microcode update and can be used to disable TSX. In addition, it
++controls the enumeration of the TSX feature bits (RTM and HLE) in CPUID.
++
++b) Clear CPU buffers - similar to MDS, clearing the CPU buffers mitigates this
++vulnerability. More details on this approach can be found in
++:ref:`Documentation/admin-guide/hw-vuln/mds.rst <mds>`.
++
++Kernel internal mitigation modes
++--------------------------------
++
++ ============= ============================================================
++ off           Mitigation is disabled. Either the CPU is not affected or
++               tsx_async_abort=off is supplied on the kernel command line.
++
++ tsx disabled  Mitigation is enabled. TSX feature is disabled by default at
++               bootup on processors that support TSX control.
++
++ verw          Mitigation is enabled. CPU is affected and MD_CLEAR is
++               advertised in CPUID.
++
++ ucode needed  Mitigation is enabled. CPU is affected and MD_CLEAR is not
++               advertised in CPUID. That is mainly for virtualization
++               scenarios where the host has the updated microcode but the
++               hypervisor does not expose MD_CLEAR in CPUID. It's a best
++               effort approach without guarantee.
++ ============= ============================================================
++
++If the CPU is affected and the "tsx_async_abort" kernel command line parameter
++is not provided, then the kernel selects an appropriate mitigation depending on
++the status of the RTM and MD_CLEAR CPUID bits.
++
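
Distilled into plain C, the selection order reads roughly as follows (a
simplified sketch of the taa_select_mitigation() logic added later in this
series; it omits the extra MDS_NO/TSX_CTRL microcode check):

    #include <stdbool.h>
    #include <stdio.h>

    static const char *taa_state(bool rtm, bool md_clear)
    {
        if (!rtm)           /* TSX already off, nothing to leak through */
            return "Mitigation: TSX disabled";
        if (md_clear)       /* updated microcode: VERW clears buffers */
            return "Mitigation: Clear CPU buffers";
        return "Vulnerable: Clear CPU buffers attempted, no microcode";
    }

    int main(void)
    {
        printf("%s\n", taa_state(true, true));
        return 0;
    }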
++The tables below indicate the impact of the tsx=on|off|auto cmdline options on
++the state of the TAA mitigation, VERW behavior and the TSX feature for various
++combinations of MSR_IA32_ARCH_CAPABILITIES bits.
++
++1. "tsx=off"
++
++========= ========= ============ ============ ============== =================== ======================
++MSR_IA32_ARCH_CAPABILITIES bits  Result with cmdline tsx=off
++-------------------------------- ----------------------------------------------------------------------
++TAA_NO    MDS_NO    TSX_CTRL_MSR TSX state    VERW can clear TAA mitigation      TAA mitigation
++                                 after bootup CPU buffers    tsx_async_abort=off tsx_async_abort=full
++========= ========= ============ ============ ============== =================== ======================
++    0         0          0       HW default   Yes            Same as MDS         Same as MDS
++    0         0          1       Invalid case Invalid case   Invalid case        Invalid case
++    0         1          0       HW default   No             Need ucode update   Need ucode update
++    0         1          1       Disabled     Yes            TSX disabled        TSX disabled
++    1         X          1       Disabled     X              None needed         None needed
++========= ========= ============ ============ ============== =================== ======================
++
++2. "tsx=on"
++
++========= ========= ============ ============ ============== =================== ======================
++MSR_IA32_ARCH_CAPABILITIES bits  Result with cmdline tsx=on
++-------------------------------- ----------------------------------------------------------------------
++TAA_NO    MDS_NO    TSX_CTRL_MSR TSX state    VERW can clear TAA mitigation      TAA mitigation
++                                 after bootup CPU buffers    tsx_async_abort=off tsx_async_abort=full
++========= ========= ============ ============ ============== =================== ======================
++    0         0          0       HW default   Yes            Same as MDS         Same as MDS
++    0         0          1       Invalid case Invalid case   Invalid case        Invalid case
++    0         1          0       HW default   No             Need ucode update   Need ucode update
++    0         1          1       Enabled      Yes            None                Same as MDS
++    1         X          1       Enabled      X              None needed         None needed
++========= ========= ============ ============ ============== =================== ======================
++
++3. "tsx=auto"
++
++========= ========= ============ ============ ============== =================== ======================
++MSR_IA32_ARCH_CAPABILITIES bits  Result with cmdline tsx=auto
++-------------------------------- ----------------------------------------------------------------------
++TAA_NO    MDS_NO    TSX_CTRL_MSR TSX state    VERW can clear TAA mitigation      TAA mitigation
++                                 after bootup CPU buffers    tsx_async_abort=off tsx_async_abort=full
++========= ========= ============ ============ ============== =================== ======================
++    0         0          0       HW default   Yes            Same as MDS         Same as MDS
++    0         0          1       Invalid case Invalid case   Invalid case        Invalid case
++    0         1          0       HW default   No             Need ucode update   Need ucode update
++    0         1          1       Disabled     Yes            TSX disabled        TSX disabled
++    1         X          1       Enabled      X              None needed         None needed
++========= ========= ============ ============ ============== =================== ======================
++
++In the tables, TSX_CTRL_MSR is a new bit in MSR_IA32_ARCH_CAPABILITIES that
++indicates whether MSR_IA32_TSX_CTRL is supported.
++
++There are two control bits in IA32_TSX_CTRL MSR:
++
++ Bit 0: When set, it disables the Restricted Transactional Memory (RTM)
++        sub-feature of TSX (will force all transactions to abort on the
++        XBEGIN instruction).
++
++ Bit 1: When set, it disables the enumeration of the RTM and HLE features
++        (i.e. it will make CPUID(EAX=7).EBX{bit4} and
++        CPUID(EAX=7).EBX{bit11} read as 0).
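
Putting the two bits together, the value a kernel writes to turn TSX off could
be composed as below (a sketch only; the authoritative code is tsx_disable()
in arch/x86/kernel/cpu/tsx.c, added later in this patch):

    #include <stdint.h>
    #include <stdio.h>

    #define MSR_IA32_TSX_CTRL    0x122
    #define TSX_CTRL_RTM_DISABLE (1ULL << 0) /* force XBEGIN to abort */
    #define TSX_CTRL_CPUID_CLEAR (1ULL << 1) /* hide RTM/HLE from CPUID */

    int main(void)
    {
        uint64_t tsx = 0; /* stands in for the value read back with rdmsr */

        tsx |= TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR;
        printf("wrmsr 0x%x <- 0x%llx\n", MSR_IA32_TSX_CTRL,
               (unsigned long long)tsx);
        return 0;
    }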
+diff --git a/Makefile b/Makefile
+index e2a8b4534da5..40148c01ffe2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 3
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Bobtail Squid
+
+diff --git a/arch/arc/boot/dts/hsdk.dts b/arch/arc/boot/dts/hsdk.dts
+index bfc7f5f5d6f2..9bea5daadd23 100644
+--- a/arch/arc/boot/dts/hsdk.dts
++++ b/arch/arc/boot/dts/hsdk.dts
+@@ -264,6 +264,14 @@
+ clocks = <&input_clk>;
+ cs-gpios = <&creg_gpio 0 GPIO_ACTIVE_LOW>,
+ <&creg_gpio 1 GPIO_ACTIVE_LOW>;
++
++ spi-flash@0 {
++ compatible = "sst26wf016b", "jedec,spi-nor";
++ reg = <0>;
++ #address-cells = <1>;
++ #size-cells = <1>;
++ spi-max-frequency = <4000000>;
++ };
+ };
+
+ creg_gpio: gpio@14b0 {
+diff --git a/arch/arc/configs/hsdk_defconfig b/arch/arc/configs/hsdk_defconfig
+index 403125d9c9a3..fe9de80e41ee 100644
+--- a/arch/arc/configs/hsdk_defconfig
++++ b/arch/arc/configs/hsdk_defconfig
+@@ -31,6 +31,8 @@ CONFIG_INET=y
+ CONFIG_DEVTMPFS=y
+ # CONFIG_STANDALONE is not set
+ # CONFIG_PREVENT_FIRMWARE_BUILD is not set
++CONFIG_MTD=y
++CONFIG_MTD_SPI_NOR=y
+ CONFIG_SCSI=y
+ CONFIG_BLK_DEV_SD=y
+ CONFIG_NETDEVICES=y
+diff --git a/arch/arm/boot/dts/imx6-logicpd-baseboard.dtsi b/arch/arm/boot/dts/imx6-logicpd-baseboard.dtsi
+index 2a6ce87071f9..9e027b9a5f91 100644
+--- a/arch/arm/boot/dts/imx6-logicpd-baseboard.dtsi
++++ b/arch/arm/boot/dts/imx6-logicpd-baseboard.dtsi
+@@ -328,6 +328,10 @@
+ pinctrl-0 = <&pinctrl_pwm3>;
+ };
+
++&snvs_pwrkey {
++ status = "okay";
++};
++
+ &ssi2 {
+ status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/stm32mp157c-ev1.dts b/arch/arm/boot/dts/stm32mp157c-ev1.dts
+index feb8f7727270..541bad97248a 100644
+--- a/arch/arm/boot/dts/stm32mp157c-ev1.dts
++++ b/arch/arm/boot/dts/stm32mp157c-ev1.dts
+@@ -206,7 +206,6 @@
+
+ joystick_pins: joystick {
+ pins = "gpio0", "gpio1", "gpio2", "gpio3", "gpio4";
+- drive-push-pull;
+ bias-pull-down;
+ };
+
+diff --git a/arch/arm/mach-sunxi/mc_smp.c b/arch/arm/mach-sunxi/mc_smp.c
+index 239084cf8192..26cbce135338 100644
+--- a/arch/arm/mach-sunxi/mc_smp.c
++++ b/arch/arm/mach-sunxi/mc_smp.c
+@@ -481,14 +481,18 @@ static void sunxi_mc_smp_cpu_die(unsigned int l_cpu)
+ static int sunxi_cpu_powerdown(unsigned int cpu, unsigned int cluster)
+ {
+ u32 reg;
++ int gating_bit = cpu;
+
+ pr_debug("%s: cluster %u cpu %u\n", __func__, cluster, cpu);
+ if (cpu >= SUNXI_CPUS_PER_CLUSTER || cluster >= SUNXI_NR_CLUSTERS)
+ return -EINVAL;
+
++ if (is_a83t && cpu == 0)
++ gating_bit = 4;
++
+ /* gate processor power */
+ reg = readl(prcm_base + PRCM_PWROFF_GATING_REG(cluster));
+- reg |= PRCM_PWROFF_GATING_REG_CORE(cpu);
++ reg |= PRCM_PWROFF_GATING_REG_CORE(gating_bit);
+ writel(reg, prcm_base + PRCM_PWROFF_GATING_REG(cluster));
+ udelay(20);
+
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index b1454d117cd2..aca07c2f6e6e 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -79,6 +79,7 @@
+ #define CAVIUM_CPU_PART_THUNDERX_83XX 0x0A3
+ #define CAVIUM_CPU_PART_THUNDERX2 0x0AF
+
++#define BRCM_CPU_PART_BRAHMA_B53 0x100
+ #define BRCM_CPU_PART_VULCAN 0x516
+
+ #define QCOM_CPU_PART_FALKOR_V1 0x800
+@@ -105,6 +106,7 @@
+ #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
+ #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
+ #define MIDR_CAVIUM_THUNDERX2 MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX2)
++#define MIDR_BRAHMA_B53 MIDR_CPU_MODEL(ARM_CPU_IMP_BRCM, BRCM_CPU_PART_BRAHMA_B53)
+ #define MIDR_BRCM_VULCAN MIDR_CPU_MODEL(ARM_CPU_IMP_BRCM, BRCM_CPU_PART_VULCAN)
+ #define MIDR_QCOM_FALKOR_V1 MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR_V1)
+ #define MIDR_QCOM_FALKOR MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR)
+diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
+index 8eb5c0fbdee6..b15f90511d4f 100644
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -283,23 +283,6 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
+ set_pte(ptep, pte);
+ }
+
+-#define __HAVE_ARCH_PTE_SAME
+-static inline int pte_same(pte_t pte_a, pte_t pte_b)
+-{
+- pteval_t lhs, rhs;
+-
+- lhs = pte_val(pte_a);
+- rhs = pte_val(pte_b);
+-
+- if (pte_present(pte_a))
+- lhs &= ~PTE_RDONLY;
+-
+- if (pte_present(pte_b))
+- rhs &= ~PTE_RDONLY;
+-
+- return (lhs == rhs);
+-}
+-
+ /*
+ * Huge pte definitions.
+ */
+diff --git a/arch/arm64/include/asm/vdso/vsyscall.h b/arch/arm64/include/asm/vdso/vsyscall.h
+index 0c731bfc7c8c..0c20a7c1bee5 100644
+--- a/arch/arm64/include/asm/vdso/vsyscall.h
++++ b/arch/arm64/include/asm/vdso/vsyscall.h
+@@ -30,13 +30,6 @@ int __arm64_get_clock_mode(struct timekeeper *tk)
+ }
+ #define __arch_get_clock_mode __arm64_get_clock_mode
+
+-static __always_inline
+-int __arm64_use_vsyscall(struct vdso_data *vdata)
+-{
+- return !vdata[CS_HRES_COARSE].clock_mode;
+-}
+-#define __arch_use_vsyscall __arm64_use_vsyscall
+-
+ static __always_inline
+ void __arm64_update_vsyscall(struct vdso_data *vdata, struct timekeeper *tk)
+ {
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 1e0b9ae9bf7e..169549f939e2 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -129,8 +129,8 @@ static void install_bp_hardening_cb(bp_hardening_cb_t fn,
+ int cpu, slot = -1;
+
+ /*
+- * enable_smccc_arch_workaround_1() passes NULL for the hyp_vecs
+- * start/end if we're a guest. Skip the hyp-vectors work.
++ * detect_harden_bp_fw() passes NULL for the hyp_vecs start/end if
++ * we're a guest. Skip the hyp-vectors work.
+ */
+ if (!hyp_vecs_start) {
+ __this_cpu_write(bp_hardening_data.fn, fn);
+@@ -489,6 +489,7 @@ static const struct midr_range arm64_ssb_cpus[] = {
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
++ MIDR_ALL_VERSIONS(MIDR_BRAHMA_B53),
+ {},
+ };
+
+@@ -573,6 +574,7 @@ static const struct midr_range spectre_v2_safe_list[] = {
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
++ MIDR_ALL_VERSIONS(MIDR_BRAHMA_B53),
+ { /* sentinel */ }
+ };
+
+@@ -659,17 +661,23 @@ static const struct midr_range arm64_harden_el2_vectors[] = {
+ #endif
+
+ #ifdef CONFIG_ARM64_WORKAROUND_REPEAT_TLBI
+-
+-static const struct midr_range arm64_repeat_tlbi_cpus[] = {
++static const struct arm64_cpu_capabilities arm64_repeat_tlbi_list[] = {
+ #ifdef CONFIG_QCOM_FALKOR_ERRATUM_1009
+- MIDR_RANGE(MIDR_QCOM_FALKOR_V1, 0, 0, 0, 0),
++ {
++ ERRATA_MIDR_REV(MIDR_QCOM_FALKOR_V1, 0, 0)
++ },
++ {
++ .midr_range.model = MIDR_QCOM_KRYO,
++ .matches = is_kryo_midr,
++ },
+ #endif
+ #ifdef CONFIG_ARM64_ERRATUM_1286807
+- MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 3, 0),
++ {
++ ERRATA_MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 3, 0),
++ },
+ #endif
+ {},
+ };
+-
+ #endif
+
+ #ifdef CONFIG_CAVIUM_ERRATUM_27456
+@@ -737,6 +745,33 @@ static const struct midr_range erratum_1418040_list[] = {
+ };
+ #endif
+
++#ifdef CONFIG_ARM64_ERRATUM_845719
++static const struct midr_range erratum_845719_list[] = {
++ /* Cortex-A53 r0p[01234] */
++ MIDR_REV_RANGE(MIDR_CORTEX_A53, 0, 0, 4),
++ /* Brahma-B53 r0p[0] */
++ MIDR_REV(MIDR_BRAHMA_B53, 0, 0),
++ {},
++};
++#endif
++
++#ifdef CONFIG_ARM64_ERRATUM_843419
++static const struct arm64_cpu_capabilities erratum_843419_list[] = {
++ {
++ /* Cortex-A53 r0p[01234] */
++ .matches = is_affected_midr_range,
++ ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A53, 0, 0, 4),
++ MIDR_FIXED(0x4, BIT(8)),
++ },
++ {
++ /* Brahma-B53 r0p[0] */
++ .matches = is_affected_midr_range,
++ ERRATA_MIDR_REV(MIDR_BRAHMA_B53, 0, 0),
++ },
++ {},
++};
++#endif
++
+ const struct arm64_cpu_capabilities arm64_errata[] = {
+ #ifdef CONFIG_ARM64_WORKAROUND_CLEAN_CACHE
+ {
+@@ -768,19 +803,18 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ #endif
+ #ifdef CONFIG_ARM64_ERRATUM_843419
+ {
+- /* Cortex-A53 r0p[01234] */
+ .desc = "ARM erratum 843419",
+ .capability = ARM64_WORKAROUND_843419,
+- ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A53, 0, 0, 4),
+- MIDR_FIXED(0x4, BIT(8)),
++ .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
++ .matches = cpucap_multi_entry_cap_matches,
++ .match_list = erratum_843419_list,
+ },
+ #endif
+ #ifdef CONFIG_ARM64_ERRATUM_845719
+ {
+- /* Cortex-A53 r0p[01234] */
+ .desc = "ARM erratum 845719",
+ .capability = ARM64_WORKAROUND_845719,
+- ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A53, 0, 0, 4),
++ ERRATA_MIDR_RANGE_LIST(erratum_845719_list),
+ },
+ #endif
+ #ifdef CONFIG_CAVIUM_ERRATUM_23154
+@@ -825,7 +859,9 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ {
+ .desc = "Qualcomm erratum 1009, ARM erratum 1286807",
+ .capability = ARM64_WORKAROUND_REPEAT_TLBI,
+- ERRATA_MIDR_RANGE_LIST(arm64_repeat_tlbi_cpus),
++ .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
++ .matches = cpucap_multi_entry_cap_matches,
++ .match_list = arm64_repeat_tlbi_list,
+ },
+ #endif
+ #ifdef CONFIG_ARM64_ERRATUM_858921
+diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h
+index 677e9babef80..f9dc597b0b86 100644
+--- a/arch/powerpc/include/asm/book3s/32/kup.h
++++ b/arch/powerpc/include/asm/book3s/32/kup.h
+@@ -91,6 +91,7 @@
+
+ static inline void kuap_update_sr(u32 sr, u32 addr, u32 end)
+ {
++ addr &= 0xf0000000; /* align addr to start of segment */
+ barrier(); /* make sure thread.kuap is updated before playing with SRs */
+ while (addr < end) {
+ mtsrin(sr, addr);
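
The new masking line relies on 32-bit book3s segments being 256 MiB each, so
clearing the low 28 bits rounds addr down to the start of its segment before
the mtsrin loop walks segment by segment. A stand-alone sketch of the
arithmetic (illustrative values, not kernel code):

    #include <stdio.h>

    int main(void)
    {
        unsigned int addr = 0x1234abcd;

        /* Keep only the top nibble: the 256 MiB segment base. */
        printf("segment base: 0x%08x\n", addr & 0xf0000000); /* 0x10000000 */
        return 0;
    }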
+diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
+index d7fcdfa7fee4..ec2547cc5ecb 100644
+--- a/arch/powerpc/kvm/book3s.c
++++ b/arch/powerpc/kvm/book3s.c
+@@ -36,8 +36,8 @@
+ #include "book3s.h"
+ #include "trace.h"
+
+-#define VM_STAT(x) offsetof(struct kvm, stat.x), KVM_STAT_VM
+-#define VCPU_STAT(x) offsetof(struct kvm_vcpu, stat.x), KVM_STAT_VCPU
++#define VM_STAT(x, ...) offsetof(struct kvm, stat.x), KVM_STAT_VM, ## __VA_ARGS__
++#define VCPU_STAT(x, ...) offsetof(struct kvm_vcpu, stat.x), KVM_STAT_VCPU, ## __VA_ARGS__
+
+ /* #define EXIT_DEBUG */
+
+@@ -69,8 +69,8 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
+ { "pthru_all", VCPU_STAT(pthru_all) },
+ { "pthru_host", VCPU_STAT(pthru_host) },
+ { "pthru_bad_aff", VCPU_STAT(pthru_bad_aff) },
+- { "largepages_2M", VM_STAT(num_2M_pages) },
+- { "largepages_1G", VM_STAT(num_1G_pages) },
++ { "largepages_2M", VM_STAT(num_2M_pages, .mode = 0444) },
++ { "largepages_1G", VM_STAT(num_1G_pages, .mode = 0444) },
+ { NULL }
+ };
+
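
The reworked VM_STAT()/VCPU_STAT() macros rely on the GNU ", ## __VA_ARGS__"
extension, which swallows the comma when no extra argument is passed, so the
optional ".mode = 0444" designated initializer can be tacked onto individual
entries. A self-contained sketch of the trick (types and names are
illustrative, not the kernel's):

    #include <stddef.h>
    #include <stdio.h>

    struct dbgfs_item { size_t offset; int kind; unsigned short mode; };

    /* ", ## __VA_ARGS__" (GNU C) drops the comma when no override is given. */
    #define STAT(off, kind, ...) { (off), (kind), ## __VA_ARGS__ }

    static struct dbgfs_item items[] = {
        STAT(0, 1),               /* mode defaults to 0 */
        STAT(8, 1, .mode = 0444), /* optional read-only override */
    };

    int main(void)
    {
        printf("%o %o\n", items[0].mode, items[1].mode); /* "0 444" */
        return 0;
    }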
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 222855cc0158..8a717c681d3c 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -1934,6 +1934,51 @@ config X86_INTEL_MEMORY_PROTECTION_KEYS
+
+ If unsure, say y.
+
++choice
++ prompt "TSX enable mode"
++ depends on CPU_SUP_INTEL
++ default X86_INTEL_TSX_MODE_OFF
++ help
++	  Intel's TSX (Transactional Synchronization Extensions) feature
++	  allows optimizing locking protocols through lock elision, which
++	  can lead to a noticeable performance boost.
++
++ On the other hand it has been shown that TSX can be exploited
++ to form side channel attacks (e.g. TAA) and chances are there
++ will be more of those attacks discovered in the future.
++
++	  Therefore, TSX is not enabled by default (i.e. tsx=off). An admin
++	  might override this decision with the tsx=on command line parameter.
++ Even with TSX enabled, the kernel will attempt to enable the best
++ possible TAA mitigation setting depending on the microcode available
++ for the particular machine.
++
++	  This option allows setting the default tsx mode to tsx=on, =off or
++	  =auto. See Documentation/admin-guide/kernel-parameters.txt for more
++	  details.
++
++	  Say off if not sure, auto if TSX is in use but should only be enabled
++	  on platforms believed to be safe, or on if TSX is in use and the
++	  security aspect of TSX is not relevant.
++
++config X86_INTEL_TSX_MODE_OFF
++ bool "off"
++ help
++	  TSX is disabled if possible - equals the tsx=off command line parameter.
++
++config X86_INTEL_TSX_MODE_ON
++ bool "on"
++ help
++ TSX is always enabled on TSX capable HW - equals the tsx=on command
++ line parameter.
++
++config X86_INTEL_TSX_MODE_AUTO
++ bool "auto"
++ help
++	  TSX is enabled on TSX capable HW that is believed to be safe against
++	  side channel attacks - equals the tsx=auto command line parameter.
++endchoice
++
+ config EFI
+ bool "EFI runtime service support"
+ depends on ACPI
+diff --git a/arch/x86/boot/compressed/eboot.c b/arch/x86/boot/compressed/eboot.c
+index d6662fdef300..82bc60c8acb2 100644
+--- a/arch/x86/boot/compressed/eboot.c
++++ b/arch/x86/boot/compressed/eboot.c
+@@ -13,6 +13,7 @@
+ #include <asm/e820/types.h>
+ #include <asm/setup.h>
+ #include <asm/desc.h>
++#include <asm/boot.h>
+
+ #include "../string.h"
+ #include "eboot.h"
+@@ -813,7 +814,8 @@ efi_main(struct efi_config *c, struct boot_params *boot_params)
+ status = efi_relocate_kernel(sys_table, &bzimage_addr,
+ hdr->init_size, hdr->init_size,
+ hdr->pref_address,
+- hdr->kernel_alignment);
++ hdr->kernel_alignment,
++ LOAD_PHYSICAL_ADDR);
+ if (status != EFI_SUCCESS) {
+ efi_printk(sys_table, "efi_relocate_kernel() failed!\n");
+ goto fail;
+diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
+index 5b35b7ea5d72..26c36357c4c9 100644
+--- a/arch/x86/events/amd/ibs.c
++++ b/arch/x86/events/amd/ibs.c
+@@ -377,7 +377,8 @@ static inline void perf_ibs_disable_event(struct perf_ibs *perf_ibs,
+ struct hw_perf_event *hwc, u64 config)
+ {
+ config &= ~perf_ibs->cnt_mask;
+- wrmsrl(hwc->config_base, config);
++ if (boot_cpu_data.x86 == 0x10)
++ wrmsrl(hwc->config_base, config);
+ config &= ~perf_ibs->enable_mask;
+ wrmsrl(hwc->config_base, config);
+ }
+@@ -553,7 +554,8 @@ static struct perf_ibs perf_ibs_op = {
+ },
+ .msr = MSR_AMD64_IBSOPCTL,
+ .config_mask = IBS_OP_CONFIG_MASK,
+- .cnt_mask = IBS_OP_MAX_CNT,
++ .cnt_mask = IBS_OP_MAX_CNT | IBS_OP_CUR_CNT |
++ IBS_OP_CUR_CNT_RAND,
+ .enable_mask = IBS_OP_ENABLE,
+ .valid_mask = IBS_OP_VAL,
+ .max_period = IBS_OP_MAX_CNT << 4,
+@@ -614,7 +616,7 @@ fail:
+ if (event->attr.sample_type & PERF_SAMPLE_RAW)
+ offset_max = perf_ibs->offset_max;
+ else if (check_rip)
+- offset_max = 2;
++ offset_max = 3;
+ else
+ offset_max = 1;
+ do {
+diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
+index 3694a5d0703d..f7b191d3c9b0 100644
+--- a/arch/x86/events/intel/uncore.c
++++ b/arch/x86/events/intel/uncore.c
+@@ -502,10 +502,8 @@ void uncore_pmu_event_start(struct perf_event *event, int flags)
+ local64_set(&event->hw.prev_count, uncore_read_counter(box, event));
+ uncore_enable_event(box, event);
+
+- if (box->n_active == 1) {
+- uncore_enable_box(box);
++ if (box->n_active == 1)
+ uncore_pmu_start_hrtimer(box);
+- }
+ }
+
+ void uncore_pmu_event_stop(struct perf_event *event, int flags)
+@@ -529,10 +527,8 @@ void uncore_pmu_event_stop(struct perf_event *event, int flags)
+ WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
+ hwc->state |= PERF_HES_STOPPED;
+
+- if (box->n_active == 0) {
+- uncore_disable_box(box);
++ if (box->n_active == 0)
+ uncore_pmu_cancel_hrtimer(box);
+- }
+ }
+
+ if ((flags & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
+@@ -778,6 +774,40 @@ static int uncore_pmu_event_init(struct perf_event *event)
+ return ret;
+ }
+
++static void uncore_pmu_enable(struct pmu *pmu)
++{
++ struct intel_uncore_pmu *uncore_pmu;
++ struct intel_uncore_box *box;
++
++ uncore_pmu = container_of(pmu, struct intel_uncore_pmu, pmu);
++ if (!uncore_pmu)
++ return;
++
++ box = uncore_pmu_to_box(uncore_pmu, smp_processor_id());
++ if (!box)
++ return;
++
++ if (uncore_pmu->type->ops->enable_box)
++ uncore_pmu->type->ops->enable_box(box);
++}
++
++static void uncore_pmu_disable(struct pmu *pmu)
++{
++ struct intel_uncore_pmu *uncore_pmu;
++ struct intel_uncore_box *box;
++
++ uncore_pmu = container_of(pmu, struct intel_uncore_pmu, pmu);
++ if (!uncore_pmu)
++ return;
++
++ box = uncore_pmu_to_box(uncore_pmu, smp_processor_id());
++ if (!box)
++ return;
++
++ if (uncore_pmu->type->ops->disable_box)
++ uncore_pmu->type->ops->disable_box(box);
++}
++
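
Both new callbacks recover the wrapping intel_uncore_pmu from the embedded
struct pmu via container_of(). A generic user-space sketch of that pattern
(types are simplified stand-ins, not the kernel's):

    #include <stddef.h>
    #include <stdio.h>

    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    struct pmu { const char *name; };
    struct uncore_pmu { int boxes; struct pmu pmu; };

    int main(void)
    {
        struct uncore_pmu u = { .boxes = 4, .pmu = { .name = "uncore" } };
        struct pmu *p = &u.pmu; /* what the perf core hands back */
        struct uncore_pmu *up = container_of(p, struct uncore_pmu, pmu);

        printf("%s has %d boxes\n", p->name, up->boxes);
        return 0;
    }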
+ static ssize_t uncore_get_attr_cpumask(struct device *dev,
+ struct device_attribute *attr, char *buf)
+ {
+@@ -803,6 +833,8 @@ static int uncore_pmu_register(struct intel_uncore_pmu *pmu)
+ pmu->pmu = (struct pmu) {
+ .attr_groups = pmu->type->attr_groups,
+ .task_ctx_nr = perf_invalid_context,
++ .pmu_enable = uncore_pmu_enable,
++ .pmu_disable = uncore_pmu_disable,
+ .event_init = uncore_pmu_event_init,
+ .add = uncore_pmu_event_add,
+ .del = uncore_pmu_event_del,
+diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
+index f36f7bebbc1b..bbfdaa720b45 100644
+--- a/arch/x86/events/intel/uncore.h
++++ b/arch/x86/events/intel/uncore.h
+@@ -441,18 +441,6 @@ static inline int uncore_freerunning_hw_config(struct intel_uncore_box *box,
+ return -EINVAL;
+ }
+
+-static inline void uncore_disable_box(struct intel_uncore_box *box)
+-{
+- if (box->pmu->type->ops->disable_box)
+- box->pmu->type->ops->disable_box(box);
+-}
+-
+-static inline void uncore_enable_box(struct intel_uncore_box *box)
+-{
+- if (box->pmu->type->ops->enable_box)
+- box->pmu->type->ops->enable_box(box);
+-}
+-
+ static inline void uncore_disable_event(struct intel_uncore_box *box,
+ struct perf_event *event)
+ {
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index e880f2408e29..5f9ae6b5be35 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -397,5 +397,7 @@
+ #define X86_BUG_MDS X86_BUG(19) /* CPU is affected by Microarchitectural data sampling */
+ #define X86_BUG_MSBDS_ONLY X86_BUG(20) /* CPU is only affected by the MSDBS variant of BUG_MDS */
+ #define X86_BUG_SWAPGS X86_BUG(21) /* CPU is affected by speculation through SWAPGS */
++#define X86_BUG_TAA X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
++#define X86_BUG_ITLB_MULTIHIT X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
+
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index dd0ca154a958..f68e174f452f 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -319,8 +319,11 @@ struct kvm_rmap_head {
+ struct kvm_mmu_page {
+ struct list_head link;
+ struct hlist_node hash_link;
++ struct list_head lpage_disallowed_link;
++
+ bool unsync;
+ bool mmio_cached;
++ bool lpage_disallowed; /* Can't be replaced by an equiv large page */
+
+ /*
+ * The following two entries are used to key the shadow page in the
+@@ -863,6 +866,7 @@ struct kvm_arch {
+ * Hash table of struct kvm_mmu_page.
+ */
+ struct list_head active_mmu_pages;
++ struct list_head lpage_disallowed_mmu_pages;
+ struct kvm_page_track_notifier_node mmu_sp_tracker;
+ struct kvm_page_track_notifier_head track_notifier_head;
+
+@@ -937,6 +941,7 @@ struct kvm_arch {
+ bool exception_payload_enabled;
+
+ struct kvm_pmu_event_filter *pmu_event_filter;
++ struct task_struct *nx_lpage_recovery_thread;
+ };
+
+ struct kvm_vm_stat {
+@@ -950,6 +955,7 @@ struct kvm_vm_stat {
+ ulong mmu_unsync;
+ ulong remote_tlb_flush;
+ ulong lpages;
++ ulong nx_lpage_splits;
+ ulong max_mmu_page_hash_collisions;
+ };
+
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 271d837d69a8..f1fbb29539c4 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -93,6 +93,18 @@
+ * Microarchitectural Data
+ * Sampling (MDS) vulnerabilities.
+ */
++#define ARCH_CAP_PSCHANGE_MC_NO BIT(6) /*
++ * The processor is not susceptible to a
++ * machine check error due to modifying the
++ * code page size along with either the
++ * physical address or cache type
++ * without TLB invalidation.
++ */
++#define ARCH_CAP_TSX_CTRL_MSR BIT(7) /* MSR for TSX control is available. */
++#define ARCH_CAP_TAA_NO BIT(8) /*
++ * Not susceptible to
++ * TSX Async Abort (TAA) vulnerabilities.
++ */
+
+ #define MSR_IA32_FLUSH_CMD 0x0000010b
+ #define L1D_FLUSH BIT(0) /*
+@@ -103,6 +115,10 @@
+ #define MSR_IA32_BBL_CR_CTL 0x00000119
+ #define MSR_IA32_BBL_CR_CTL3 0x0000011e
+
++#define MSR_IA32_TSX_CTRL 0x00000122
++#define TSX_CTRL_RTM_DISABLE BIT(0) /* Disable RTM feature */
++#define TSX_CTRL_CPUID_CLEAR BIT(1) /* Disable TSX enumeration */
++
+ #define MSR_IA32_SYSENTER_CS 0x00000174
+ #define MSR_IA32_SYSENTER_ESP 0x00000175
+ #define MSR_IA32_SYSENTER_EIP 0x00000176
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index 80bc209c0708..5c24a7b35166 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -314,7 +314,7 @@ DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
+ #include <asm/segment.h>
+
+ /**
+- * mds_clear_cpu_buffers - Mitigation for MDS vulnerability
++ * mds_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
+ *
+ * This uses the otherwise unused and obsolete VERW instruction in
+ * combination with microcode which triggers a CPU buffer flush when the
+@@ -337,7 +337,7 @@ static inline void mds_clear_cpu_buffers(void)
+ }
+
+ /**
+- * mds_user_clear_cpu_buffers - Mitigation for MDS vulnerability
++ * mds_user_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
+ *
+ * Clear CPU buffers if the corresponding static key is enabled
+ */
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 6e0a3b43d027..54f5d54280f6 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -988,4 +988,11 @@ enum mds_mitigations {
+ MDS_MITIGATION_VMWERV,
+ };
+
++enum taa_mitigations {
++ TAA_MITIGATION_OFF,
++ TAA_MITIGATION_UCODE_NEEDED,
++ TAA_MITIGATION_VERW,
++ TAA_MITIGATION_TSX_DISABLED,
++};
++
+ #endif /* _ASM_X86_PROCESSOR_H */
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index ad0d5ced82b3..2c6bab985a6a 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -1573,9 +1573,6 @@ static void setup_local_APIC(void)
+ {
+ int cpu = smp_processor_id();
+ unsigned int value;
+-#ifdef CONFIG_X86_32
+- int logical_apicid, ldr_apicid;
+-#endif
+
+
+ if (disable_apic) {
+@@ -1616,16 +1613,21 @@ static void setup_local_APIC(void)
+ apic->init_apic_ldr();
+
+ #ifdef CONFIG_X86_32
+- /*
+- * APIC LDR is initialized. If logical_apicid mapping was
+- * initialized during get_smp_config(), make sure it matches the
+- * actual value.
+- */
+- logical_apicid = early_per_cpu(x86_cpu_to_logical_apicid, cpu);
+- ldr_apicid = GET_APIC_LOGICAL_ID(apic_read(APIC_LDR));
+- WARN_ON(logical_apicid != BAD_APICID && logical_apicid != ldr_apicid);
+- /* always use the value from LDR */
+- early_per_cpu(x86_cpu_to_logical_apicid, cpu) = ldr_apicid;
++ if (apic->dest_logical) {
++ int logical_apicid, ldr_apicid;
++
++ /*
++ * APIC LDR is initialized. If logical_apicid mapping was
++ * initialized during get_smp_config(), make sure it matches
++ * the actual value.
++ */
++ logical_apicid = early_per_cpu(x86_cpu_to_logical_apicid, cpu);
++ ldr_apicid = GET_APIC_LOGICAL_ID(apic_read(APIC_LDR));
++ if (logical_apicid != BAD_APICID)
++ WARN_ON(logical_apicid != ldr_apicid);
++ /* Always use the value from LDR. */
++ early_per_cpu(x86_cpu_to_logical_apicid, cpu) = ldr_apicid;
++ }
+ #endif
+
+ /*
+diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
+index d7a1e5a9331c..890f60083eca 100644
+--- a/arch/x86/kernel/cpu/Makefile
++++ b/arch/x86/kernel/cpu/Makefile
+@@ -30,7 +30,7 @@ obj-$(CONFIG_PROC_FS) += proc.o
+ obj-$(CONFIG_X86_FEATURE_NAMES) += capflags.o powerflags.o
+
+ ifdef CONFIG_CPU_SUP_INTEL
+-obj-y += intel.o intel_pconfig.o
++obj-y += intel.o intel_pconfig.o tsx.o
+ obj-$(CONFIG_PM) += intel_epb.o
+ endif
+ obj-$(CONFIG_CPU_SUP_AMD) += amd.o
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index c6fa3ef10b4e..9b7586204cd2 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -39,6 +39,7 @@ static void __init spectre_v2_select_mitigation(void);
+ static void __init ssb_select_mitigation(void);
+ static void __init l1tf_select_mitigation(void);
+ static void __init mds_select_mitigation(void);
++static void __init taa_select_mitigation(void);
+
+ /* The base value of the SPEC_CTRL MSR that always has to be preserved. */
+ u64 x86_spec_ctrl_base;
+@@ -105,6 +106,7 @@ void __init check_bugs(void)
+ ssb_select_mitigation();
+ l1tf_select_mitigation();
+ mds_select_mitigation();
++ taa_select_mitigation();
+
+ arch_smt_update();
+
+@@ -268,6 +270,100 @@ static int __init mds_cmdline(char *str)
+ }
+ early_param("mds", mds_cmdline);
+
++#undef pr_fmt
++#define pr_fmt(fmt) "TAA: " fmt
++
++/* Default mitigation for TAA-affected CPUs */
++static enum taa_mitigations taa_mitigation __ro_after_init = TAA_MITIGATION_VERW;
++static bool taa_nosmt __ro_after_init;
++
++static const char * const taa_strings[] = {
++ [TAA_MITIGATION_OFF] = "Vulnerable",
++ [TAA_MITIGATION_UCODE_NEEDED] = "Vulnerable: Clear CPU buffers attempted, no microcode",
++ [TAA_MITIGATION_VERW] = "Mitigation: Clear CPU buffers",
++ [TAA_MITIGATION_TSX_DISABLED] = "Mitigation: TSX disabled",
++};
++
++static void __init taa_select_mitigation(void)
++{
++ u64 ia32_cap;
++
++ if (!boot_cpu_has_bug(X86_BUG_TAA)) {
++ taa_mitigation = TAA_MITIGATION_OFF;
++ return;
++ }
++
++ /* TSX previously disabled by tsx=off */
++ if (!boot_cpu_has(X86_FEATURE_RTM)) {
++ taa_mitigation = TAA_MITIGATION_TSX_DISABLED;
++ goto out;
++ }
++
++ if (cpu_mitigations_off()) {
++ taa_mitigation = TAA_MITIGATION_OFF;
++ return;
++ }
++
++ /* TAA mitigation is turned off on the cmdline (tsx_async_abort=off) */
++ if (taa_mitigation == TAA_MITIGATION_OFF)
++ goto out;
++
++ if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
++ taa_mitigation = TAA_MITIGATION_VERW;
++ else
++ taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
++
++ /*
++ * VERW doesn't clear the CPU buffers when MD_CLEAR=1 and MDS_NO=1.
++ * A microcode update fixes this behavior to clear CPU buffers. It also
++ * adds support for MSR_IA32_TSX_CTRL which is enumerated by the
++ * ARCH_CAP_TSX_CTRL_MSR bit.
++ *
++ * On MDS_NO=1 CPUs if ARCH_CAP_TSX_CTRL_MSR is not set, microcode
++ * update is required.
++ */
++ ia32_cap = x86_read_arch_cap_msr();
++ if ( (ia32_cap & ARCH_CAP_MDS_NO) &&
++ !(ia32_cap & ARCH_CAP_TSX_CTRL_MSR))
++ taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
++
++ /*
++ * TSX is enabled, select alternate mitigation for TAA which is
++ * the same as MDS. Enable MDS static branch to clear CPU buffers.
++ *
++	 * present on the host, enable the mitigation for UCODE_NEEDED as well.
++ * present on host, enable the mitigation for UCODE_NEEDED as well.
++ */
++ static_branch_enable(&mds_user_clear);
++
++ if (taa_nosmt || cpu_mitigations_auto_nosmt())
++ cpu_smt_disable(false);
++
++out:
++ pr_info("%s\n", taa_strings[taa_mitigation]);
++}
++
++static int __init tsx_async_abort_parse_cmdline(char *str)
++{
++ if (!boot_cpu_has_bug(X86_BUG_TAA))
++ return 0;
++
++ if (!str)
++ return -EINVAL;
++
++ if (!strcmp(str, "off")) {
++ taa_mitigation = TAA_MITIGATION_OFF;
++ } else if (!strcmp(str, "full")) {
++ taa_mitigation = TAA_MITIGATION_VERW;
++ } else if (!strcmp(str, "full,nosmt")) {
++ taa_mitigation = TAA_MITIGATION_VERW;
++ taa_nosmt = true;
++ }
++
++ return 0;
++}
++early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);
++
+ #undef pr_fmt
+ #define pr_fmt(fmt) "Spectre V1 : " fmt
+
+@@ -786,13 +882,10 @@ static void update_mds_branch_idle(void)
+ }
+
+ #define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n"
++#define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n"
+
+ void arch_smt_update(void)
+ {
+- /* Enhanced IBRS implies STIBP. No update required. */
+- if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+- return;
+-
+ mutex_lock(&spec_ctrl_mutex);
+
+ switch (spectre_v2_user) {
+@@ -819,6 +912,17 @@ void arch_smt_update(void)
+ break;
+ }
+
++ switch (taa_mitigation) {
++ case TAA_MITIGATION_VERW:
++ case TAA_MITIGATION_UCODE_NEEDED:
++ if (sched_smt_active())
++ pr_warn_once(TAA_MSG_SMT);
++ break;
++ case TAA_MITIGATION_TSX_DISABLED:
++ case TAA_MITIGATION_OFF:
++ break;
++ }
++
+ mutex_unlock(&spec_ctrl_mutex);
+ }
+
+@@ -1149,6 +1253,9 @@ void x86_spec_ctrl_setup_ap(void)
+ x86_amd_ssb_disable();
+ }
+
++bool itlb_multihit_kvm_mitigation;
++EXPORT_SYMBOL_GPL(itlb_multihit_kvm_mitigation);
++
+ #undef pr_fmt
+ #define pr_fmt(fmt) "L1TF: " fmt
+
+@@ -1304,11 +1411,24 @@ static ssize_t l1tf_show_state(char *buf)
+ l1tf_vmx_states[l1tf_vmx_mitigation],
+ sched_smt_active() ? "vulnerable" : "disabled");
+ }
++
++static ssize_t itlb_multihit_show_state(char *buf)
++{
++ if (itlb_multihit_kvm_mitigation)
++ return sprintf(buf, "KVM: Mitigation: Split huge pages\n");
++ else
++ return sprintf(buf, "KVM: Vulnerable\n");
++}
+ #else
+ static ssize_t l1tf_show_state(char *buf)
+ {
+ return sprintf(buf, "%s\n", L1TF_DEFAULT_MSG);
+ }
++
++static ssize_t itlb_multihit_show_state(char *buf)
++{
++ return sprintf(buf, "Processor vulnerable\n");
++}
+ #endif
+
+ static ssize_t mds_show_state(char *buf)
+@@ -1328,6 +1448,21 @@ static ssize_t mds_show_state(char *buf)
+ sched_smt_active() ? "vulnerable" : "disabled");
+ }
+
++static ssize_t tsx_async_abort_show_state(char *buf)
++{
++ if ((taa_mitigation == TAA_MITIGATION_TSX_DISABLED) ||
++ (taa_mitigation == TAA_MITIGATION_OFF))
++ return sprintf(buf, "%s\n", taa_strings[taa_mitigation]);
++
++ if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
++ return sprintf(buf, "%s; SMT Host state unknown\n",
++ taa_strings[taa_mitigation]);
++ }
++
++ return sprintf(buf, "%s; SMT %s\n", taa_strings[taa_mitigation],
++ sched_smt_active() ? "vulnerable" : "disabled");
++}
++
+ static char *stibp_state(void)
+ {
+ if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+@@ -1398,6 +1533,12 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ case X86_BUG_MDS:
+ return mds_show_state(buf);
+
++ case X86_BUG_TAA:
++ return tsx_async_abort_show_state(buf);
++
++ case X86_BUG_ITLB_MULTIHIT:
++ return itlb_multihit_show_state(buf);
++
+ default:
+ break;
+ }
+@@ -1434,4 +1575,14 @@ ssize_t cpu_show_mds(struct device *dev, struct device_attribute *attr, char *bu
+ {
+ return cpu_show_common(dev, attr, buf, X86_BUG_MDS);
+ }
++
++ssize_t cpu_show_tsx_async_abort(struct device *dev, struct device_attribute *attr, char *buf)
++{
++ return cpu_show_common(dev, attr, buf, X86_BUG_TAA);
++}
++
++ssize_t cpu_show_itlb_multihit(struct device *dev, struct device_attribute *attr, char *buf)
++{
++ return cpu_show_common(dev, attr, buf, X86_BUG_ITLB_MULTIHIT);
++}
+ #endif
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index f125bf7ecb6f..663b27bdea88 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1016,13 +1016,14 @@ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
+ #endif
+ }
+
+-#define NO_SPECULATION BIT(0)
+-#define NO_MELTDOWN BIT(1)
+-#define NO_SSB BIT(2)
+-#define NO_L1TF BIT(3)
+-#define NO_MDS BIT(4)
+-#define MSBDS_ONLY BIT(5)
+-#define NO_SWAPGS BIT(6)
++#define NO_SPECULATION BIT(0)
++#define NO_MELTDOWN BIT(1)
++#define NO_SSB BIT(2)
++#define NO_L1TF BIT(3)
++#define NO_MDS BIT(4)
++#define MSBDS_ONLY BIT(5)
++#define NO_SWAPGS BIT(6)
++#define NO_ITLB_MULTIHIT BIT(7)
+
+ #define VULNWL(_vendor, _family, _model, _whitelist) \
+ { X86_VENDOR_##_vendor, _family, _model, X86_FEATURE_ANY, _whitelist }
+@@ -1043,26 +1044,26 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ VULNWL(NSC, 5, X86_MODEL_ANY, NO_SPECULATION),
+
+ /* Intel Family 6 */
+- VULNWL_INTEL(ATOM_SALTWELL, NO_SPECULATION),
+- VULNWL_INTEL(ATOM_SALTWELL_TABLET, NO_SPECULATION),
+- VULNWL_INTEL(ATOM_SALTWELL_MID, NO_SPECULATION),
+- VULNWL_INTEL(ATOM_BONNELL, NO_SPECULATION),
+- VULNWL_INTEL(ATOM_BONNELL_MID, NO_SPECULATION),
+-
+- VULNWL_INTEL(ATOM_SILVERMONT, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
+- VULNWL_INTEL(ATOM_SILVERMONT_X, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
+- VULNWL_INTEL(ATOM_SILVERMONT_MID, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
+- VULNWL_INTEL(ATOM_AIRMONT, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
+- VULNWL_INTEL(XEON_PHI_KNL, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
+- VULNWL_INTEL(XEON_PHI_KNM, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
++ VULNWL_INTEL(ATOM_SALTWELL, NO_SPECULATION | NO_ITLB_MULTIHIT),
++ VULNWL_INTEL(ATOM_SALTWELL_TABLET, NO_SPECULATION | NO_ITLB_MULTIHIT),
++ VULNWL_INTEL(ATOM_SALTWELL_MID, NO_SPECULATION | NO_ITLB_MULTIHIT),
++ VULNWL_INTEL(ATOM_BONNELL, NO_SPECULATION | NO_ITLB_MULTIHIT),
++ VULNWL_INTEL(ATOM_BONNELL_MID, NO_SPECULATION | NO_ITLB_MULTIHIT),
++
++ VULNWL_INTEL(ATOM_SILVERMONT, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT),
++ VULNWL_INTEL(ATOM_SILVERMONT_X, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT),
++ VULNWL_INTEL(ATOM_SILVERMONT_MID, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT),
++ VULNWL_INTEL(ATOM_AIRMONT, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT),
++ VULNWL_INTEL(XEON_PHI_KNL, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT),
++ VULNWL_INTEL(XEON_PHI_KNM, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT),
+
+ VULNWL_INTEL(CORE_YONAH, NO_SSB),
+
+- VULNWL_INTEL(ATOM_AIRMONT_MID, NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
++ VULNWL_INTEL(ATOM_AIRMONT_MID, NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT),
+
+- VULNWL_INTEL(ATOM_GOLDMONT, NO_MDS | NO_L1TF | NO_SWAPGS),
+- VULNWL_INTEL(ATOM_GOLDMONT_X, NO_MDS | NO_L1TF | NO_SWAPGS),
+- VULNWL_INTEL(ATOM_GOLDMONT_PLUS, NO_MDS | NO_L1TF | NO_SWAPGS),
++ VULNWL_INTEL(ATOM_GOLDMONT, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
++ VULNWL_INTEL(ATOM_GOLDMONT_X, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
++ VULNWL_INTEL(ATOM_GOLDMONT_PLUS, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
+
+ /*
+ * Technically, swapgs isn't serializing on AMD (despite it previously
+@@ -1072,15 +1073,17 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ * good enough for our purposes.
+ */
+
++ VULNWL_INTEL(ATOM_TREMONT_X, NO_ITLB_MULTIHIT),
++
+ /* AMD Family 0xf - 0x12 */
+- VULNWL_AMD(0x0f, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS),
+- VULNWL_AMD(0x10, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS),
+- VULNWL_AMD(0x11, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS),
+- VULNWL_AMD(0x12, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS),
++ VULNWL_AMD(0x0f, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
++ VULNWL_AMD(0x10, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
++ VULNWL_AMD(0x11, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
++ VULNWL_AMD(0x12, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
+
+ /* FAMILY_ANY must be last, otherwise 0x0f - 0x12 matches won't work */
+- VULNWL_AMD(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS),
+- VULNWL_HYGON(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS),
++ VULNWL_AMD(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
++ VULNWL_HYGON(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
+ {}
+ };
+
+@@ -1091,19 +1094,30 @@ static bool __init cpu_matches(unsigned long which)
+ return m && !!(m->driver_data & which);
+ }
+
+-static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
++u64 x86_read_arch_cap_msr(void)
+ {
+ u64 ia32_cap = 0;
+
++ if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
++ rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
++
++ return ia32_cap;
++}
++
++static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
++{
++ u64 ia32_cap = x86_read_arch_cap_msr();
++
++ /* Set ITLB_MULTIHIT bug if cpu is not in the whitelist and not mitigated */
++ if (!cpu_matches(NO_ITLB_MULTIHIT) && !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
++ setup_force_cpu_bug(X86_BUG_ITLB_MULTIHIT);
++
+ if (cpu_matches(NO_SPECULATION))
+ return;
+
+ setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
+ setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
+
+- if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
+- rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
+-
+ if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) &&
+ !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
+ setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
+@@ -1120,6 +1134,21 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ if (!cpu_matches(NO_SWAPGS))
+ setup_force_cpu_bug(X86_BUG_SWAPGS);
+
++ /*
++ * When the CPU is not mitigated for TAA (TAA_NO=0) set TAA bug when:
++ * - TSX is supported or
++ * - TSX_CTRL is present
++ *
++ * TSX_CTRL check is needed for cases when TSX could be disabled before
++ * the kernel boot e.g. kexec.
++	 * TSX_CTRL check alone is not sufficient for cases when the microcode
++	 * update is not present, or when running as a guest that doesn't get
++	 * TSX_CTRL.
++ */
++ if (!(ia32_cap & ARCH_CAP_TAA_NO) &&
++ (cpu_has(c, X86_FEATURE_RTM) ||
++ (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
++ setup_force_cpu_bug(X86_BUG_TAA);
++
+ if (cpu_matches(NO_MELTDOWN))
+ return;
+
+@@ -1553,6 +1582,8 @@ void __init identify_boot_cpu(void)
+ #endif
+ cpu_detect_tlb(&boot_cpu_data);
+ setup_cr_pinning();
++
++ tsx_init();
+ }
+
+ void identify_secondary_cpu(struct cpuinfo_x86 *c)
+diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
+index c0e2407abdd6..38ab6e115eac 100644
+--- a/arch/x86/kernel/cpu/cpu.h
++++ b/arch/x86/kernel/cpu/cpu.h
+@@ -44,6 +44,22 @@ struct _tlb_table {
+ extern const struct cpu_dev *const __x86_cpu_dev_start[],
+ *const __x86_cpu_dev_end[];
+
++#ifdef CONFIG_CPU_SUP_INTEL
++enum tsx_ctrl_states {
++ TSX_CTRL_ENABLE,
++ TSX_CTRL_DISABLE,
++ TSX_CTRL_NOT_SUPPORTED,
++};
++
++extern __ro_after_init enum tsx_ctrl_states tsx_ctrl_state;
++
++extern void __init tsx_init(void);
++extern void tsx_enable(void);
++extern void tsx_disable(void);
++#else
++static inline void tsx_init(void) { }
++#endif /* CONFIG_CPU_SUP_INTEL */
++
+ extern void get_cpu_cap(struct cpuinfo_x86 *c);
+ extern void get_cpu_address_sizes(struct cpuinfo_x86 *c);
+ extern void cpu_detect_cache_sizes(struct cpuinfo_x86 *c);
+@@ -62,4 +78,6 @@ unsigned int aperfmperf_get_khz(int cpu);
+
+ extern void x86_spec_ctrl_setup_ap(void);
+
++extern u64 x86_read_arch_cap_msr(void);
++
+ #endif /* ARCH_X86_CPU_H */
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index 8d6d92ebeb54..cc9f24818e49 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -761,6 +761,11 @@ static void init_intel(struct cpuinfo_x86 *c)
+ detect_tme(c);
+
+ init_intel_misc_features(c);
++
++ if (tsx_ctrl_state == TSX_CTRL_ENABLE)
++ tsx_enable();
++ if (tsx_ctrl_state == TSX_CTRL_DISABLE)
++ tsx_disable();
+ }
+
+ #ifdef CONFIG_X86_32
+diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
+new file mode 100644
+index 000000000000..3e20d322bc98
+--- /dev/null
++++ b/arch/x86/kernel/cpu/tsx.c
+@@ -0,0 +1,140 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Intel Transactional Synchronization Extensions (TSX) control.
++ *
++ * Copyright (C) 2019 Intel Corporation
++ *
++ * Author:
++ * Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
++ */
++
++#include <linux/cpufeature.h>
++
++#include <asm/cmdline.h>
++
++#include "cpu.h"
++
++enum tsx_ctrl_states tsx_ctrl_state __ro_after_init = TSX_CTRL_NOT_SUPPORTED;
++
++void tsx_disable(void)
++{
++ u64 tsx;
++
++ rdmsrl(MSR_IA32_TSX_CTRL, tsx);
++
++ /* Force all transactions to immediately abort */
++ tsx |= TSX_CTRL_RTM_DISABLE;
++
++ /*
++ * Ensure TSX support is not enumerated in CPUID.
++ * This is visible to userspace and will ensure they
++ * do not waste resources trying TSX transactions that
++ * will always abort.
++ */
++ tsx |= TSX_CTRL_CPUID_CLEAR;
++
++ wrmsrl(MSR_IA32_TSX_CTRL, tsx);
++}
++
++void tsx_enable(void)
++{
++ u64 tsx;
++
++ rdmsrl(MSR_IA32_TSX_CTRL, tsx);
++
++ /* Enable the RTM feature in the cpu */
++ tsx &= ~TSX_CTRL_RTM_DISABLE;
++
++ /*
++ * Ensure TSX support is enumerated in CPUID.
++ * This is visible to userspace and will ensure they
++ * can enumerate and use the TSX feature.
++ */
++ tsx &= ~TSX_CTRL_CPUID_CLEAR;
++
++ wrmsrl(MSR_IA32_TSX_CTRL, tsx);
++}
++
++static bool __init tsx_ctrl_is_supported(void)
++{
++ u64 ia32_cap = x86_read_arch_cap_msr();
++
++ /*
++ * TSX is controlled via MSR_IA32_TSX_CTRL. However, support for this
++ * MSR is enumerated by the ARCH_CAP_TSX_CTRL_MSR bit in MSR_IA32_ARCH_CAPABILITIES.
++ *
++ * TSX control (aka MSR_IA32_TSX_CTRL) is only available after a
++ * microcode update on CPUs that have their MSR_IA32_ARCH_CAPABILITIES
++ * bit MDS_NO=1. CPUs with MDS_NO=0 are not planned to get
++ * MSR_IA32_TSX_CTRL support even after a microcode update. Thus,
++ * tsx= cmdline requests will do nothing on CPUs without
++ * MSR_IA32_TSX_CTRL support.
++ */
++ return !!(ia32_cap & ARCH_CAP_TSX_CTRL_MSR);
++}
++
++static enum tsx_ctrl_states x86_get_tsx_auto_mode(void)
++{
++ if (boot_cpu_has_bug(X86_BUG_TAA))
++ return TSX_CTRL_DISABLE;
++
++ return TSX_CTRL_ENABLE;
++}
++
++void __init tsx_init(void)
++{
++ char arg[5] = {};
++ int ret;
++
++ if (!tsx_ctrl_is_supported())
++ return;
++
++ ret = cmdline_find_option(boot_command_line, "tsx", arg, sizeof(arg));
++ if (ret >= 0) {
++ if (!strcmp(arg, "on")) {
++ tsx_ctrl_state = TSX_CTRL_ENABLE;
++ } else if (!strcmp(arg, "off")) {
++ tsx_ctrl_state = TSX_CTRL_DISABLE;
++ } else if (!strcmp(arg, "auto")) {
++ tsx_ctrl_state = x86_get_tsx_auto_mode();
++ } else {
++ tsx_ctrl_state = TSX_CTRL_DISABLE;
++ pr_err("tsx: invalid option, defaulting to off\n");
++ }
++ } else {
++ /* tsx= not provided */
++ if (IS_ENABLED(CONFIG_X86_INTEL_TSX_MODE_AUTO))
++ tsx_ctrl_state = x86_get_tsx_auto_mode();
++ else if (IS_ENABLED(CONFIG_X86_INTEL_TSX_MODE_OFF))
++ tsx_ctrl_state = TSX_CTRL_DISABLE;
++ else
++ tsx_ctrl_state = TSX_CTRL_ENABLE;
++ }
++
++ if (tsx_ctrl_state == TSX_CTRL_DISABLE) {
++ tsx_disable();
++
++ /*
++ * tsx_disable() will change the state of the
++ * RTM CPUID bit. Clear it here since it is now
++ * expected to be not set.
++ */
++ setup_clear_cpu_cap(X86_FEATURE_RTM);
++ } else if (tsx_ctrl_state == TSX_CTRL_ENABLE) {
++
++ /*
++ * HW defaults TSX to be enabled at bootup.
++ * We may still need the TSX enable support
++ * during init for special cases like
++ * kexec after TSX is disabled.
++ */
++ tsx_enable();
++
++ /*
++ * tsx_enable() will change the state of the
++ * RTM CPUID bit. Force it here since it is now
++ * expected to be set.
++ */
++ setup_force_cpu_cap(X86_FEATURE_RTM);
++ }
++}
+diff --git a/arch/x86/kernel/dumpstack_64.c b/arch/x86/kernel/dumpstack_64.c
+index 753b8cfe8b8a..87b97897a881 100644
+--- a/arch/x86/kernel/dumpstack_64.c
++++ b/arch/x86/kernel/dumpstack_64.c
+@@ -94,6 +94,13 @@ static bool in_exception_stack(unsigned long *stack, struct stack_info *info)
+ BUILD_BUG_ON(N_EXCEPTION_STACKS != 6);
+
+ begin = (unsigned long)__this_cpu_read(cea_exception_stacks);
++ /*
++	 * Handle the case where the stack trace is collected _before_
++	 * cea_exception_stacks has been initialized.
++ */
++ if (!begin)
++ return false;
++
+ end = begin + sizeof(struct cea_exception_stacks);
+ /* Bail if @stack is outside the exception stack area. */
+ if (stk < begin || stk >= end)
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index 94aa6102010d..32b1c6136c6a 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -37,6 +37,7 @@
+ #include <linux/uaccess.h>
+ #include <linux/hash.h>
+ #include <linux/kern_levels.h>
++#include <linux/kthread.h>
+
+ #include <asm/page.h>
+ #include <asm/pat.h>
+@@ -47,6 +48,30 @@
+ #include <asm/kvm_page_track.h>
+ #include "trace.h"
+
++extern bool itlb_multihit_kvm_mitigation;
++
++static int __read_mostly nx_huge_pages = -1;
++static uint __read_mostly nx_huge_pages_recovery_ratio = 60;
++
++static int set_nx_huge_pages(const char *val, const struct kernel_param *kp);
++static int set_nx_huge_pages_recovery_ratio(const char *val, const struct kernel_param *kp);
++
++static struct kernel_param_ops nx_huge_pages_ops = {
++ .set = set_nx_huge_pages,
++ .get = param_get_bool,
++};
++
++static struct kernel_param_ops nx_huge_pages_recovery_ratio_ops = {
++ .set = set_nx_huge_pages_recovery_ratio,
++ .get = param_get_uint,
++};
++
++module_param_cb(nx_huge_pages, &nx_huge_pages_ops, &nx_huge_pages, 0644);
++__MODULE_PARM_TYPE(nx_huge_pages, "bool");
++module_param_cb(nx_huge_pages_recovery_ratio, &nx_huge_pages_recovery_ratio_ops,
++ &nx_huge_pages_recovery_ratio, 0644);
++__MODULE_PARM_TYPE(nx_huge_pages_recovery_ratio, "uint");
++
+ /*
+ * When setting this variable to true it enables Two-Dimensional-Paging
+ * where the hardware walks 2 page tables:
+@@ -318,6 +343,11 @@ static inline bool spte_ad_enabled(u64 spte)
+ return !(spte & shadow_acc_track_value);
+ }
+
++static bool is_nx_huge_page_enabled(void)
++{
++ return READ_ONCE(nx_huge_pages);
++}
++
+ static inline u64 spte_shadow_accessed_mask(u64 spte)
+ {
+ MMU_WARN_ON((spte & shadow_mmio_mask) == shadow_mmio_value);
+@@ -1162,6 +1192,17 @@ static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
+ kvm_mmu_gfn_disallow_lpage(slot, gfn);
+ }
+
++static void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp)
++{
++ if (sp->lpage_disallowed)
++ return;
++
++ ++kvm->stat.nx_lpage_splits;
++ list_add_tail(&sp->lpage_disallowed_link,
++ &kvm->arch.lpage_disallowed_mmu_pages);
++ sp->lpage_disallowed = true;
++}
++
+ static void unaccount_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
+ {
+ struct kvm_memslots *slots;
+@@ -1179,6 +1220,13 @@ static void unaccount_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
+ kvm_mmu_gfn_allow_lpage(slot, gfn);
+ }
+
++static void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp)
++{
++ --kvm->stat.nx_lpage_splits;
++ sp->lpage_disallowed = false;
++ list_del(&sp->lpage_disallowed_link);
++}
++
+ static bool __mmu_gfn_lpage_is_disallowed(gfn_t gfn, int level,
+ struct kvm_memory_slot *slot)
+ {
+@@ -2753,6 +2801,9 @@ static bool __kvm_mmu_prepare_zap_page(struct kvm *kvm,
+ kvm_reload_remote_mmus(kvm);
+ }
+
++ if (sp->lpage_disallowed)
++ unaccount_huge_nx_page(kvm, sp);
++
+ sp->role.invalid = 1;
+ return list_unstable;
+ }
+@@ -2972,6 +3023,11 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
+ if (!speculative)
+ spte |= spte_shadow_accessed_mask(spte);
+
++ if (level > PT_PAGE_TABLE_LEVEL && (pte_access & ACC_EXEC_MASK) &&
++ is_nx_huge_page_enabled()) {
++ pte_access &= ~ACC_EXEC_MASK;
++ }
++
+ if (pte_access & ACC_EXEC_MASK)
+ spte |= shadow_x_mask;
+ else
+@@ -3192,9 +3248,32 @@ static void direct_pte_prefetch(struct kvm_vcpu *vcpu, u64 *sptep)
+ __direct_pte_prefetch(vcpu, sp, sptep);
+ }
+
++static void disallowed_hugepage_adjust(struct kvm_shadow_walk_iterator it,
++ gfn_t gfn, kvm_pfn_t *pfnp, int *levelp)
++{
++ int level = *levelp;
++ u64 spte = *it.sptep;
++
++ if (it.level == level && level > PT_PAGE_TABLE_LEVEL &&
++ is_nx_huge_page_enabled() &&
++ is_shadow_present_pte(spte) &&
++ !is_large_pte(spte)) {
++ /*
++ * A small SPTE exists for this pfn, but FNAME(fetch)
++ * and __direct_map would like to create a large PTE
++ * instead: just force them to go down another level,
++ * patching the next 9 bits of the address back into
++ * pfn for them.
++ */
++ u64 page_mask = KVM_PAGES_PER_HPAGE(level) - KVM_PAGES_PER_HPAGE(level - 1);
++ *pfnp |= gfn & page_mask;
++ (*levelp)--;
++ }
++}
++
+ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, int write,
+ int map_writable, int level, kvm_pfn_t pfn,
+- bool prefault)
++ bool prefault, bool lpage_disallowed)
+ {
+ struct kvm_shadow_walk_iterator it;
+ struct kvm_mmu_page *sp;
+@@ -3207,6 +3286,12 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, int write,
+
+ trace_kvm_mmu_spte_requested(gpa, level, pfn);
+ for_each_shadow_entry(vcpu, gpa, it) {
++ /*
++ * We cannot overwrite existing page tables with an NX
++ * large page, as the leaf could be executable.
++ */
++ disallowed_hugepage_adjust(it, gfn, &pfn, &level);
++
+ base_gfn = gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
+ if (it.level == level)
+ break;
+@@ -3217,6 +3302,8 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, int write,
+ it.level - 1, true, ACC_ALL);
+
+ link_shadow_page(vcpu, it.sptep, sp);
++ if (lpage_disallowed)
++ account_huge_nx_page(vcpu->kvm, sp);
+ }
+ }
+
+@@ -3508,11 +3595,14 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
+ {
+ int r;
+ int level;
+- bool force_pt_level = false;
++ bool force_pt_level;
+ kvm_pfn_t pfn;
+ unsigned long mmu_seq;
+ bool map_writable, write = error_code & PFERR_WRITE_MASK;
++ bool lpage_disallowed = (error_code & PFERR_FETCH_MASK) &&
++ is_nx_huge_page_enabled();
+
++ force_pt_level = lpage_disallowed;
+ level = mapping_level(vcpu, gfn, &force_pt_level);
+ if (likely(!force_pt_level)) {
+ /*
+@@ -3546,7 +3636,8 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
+ goto out_unlock;
+ if (likely(!force_pt_level))
+ transparent_hugepage_adjust(vcpu, gfn, &pfn, &level);
+- r = __direct_map(vcpu, v, write, map_writable, level, pfn, prefault);
++ r = __direct_map(vcpu, v, write, map_writable, level, pfn,
++ prefault, false);
+ out_unlock:
+ spin_unlock(&vcpu->kvm->mmu_lock);
+ kvm_release_pfn_clean(pfn);
+@@ -4132,6 +4223,8 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
+ unsigned long mmu_seq;
+ int write = error_code & PFERR_WRITE_MASK;
+ bool map_writable;
++ bool lpage_disallowed = (error_code & PFERR_FETCH_MASK) &&
++ is_nx_huge_page_enabled();
+
+ MMU_WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root_hpa));
+
+@@ -4142,8 +4235,9 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
+ if (r)
+ return r;
+
+- force_pt_level = !check_hugepage_cache_consistency(vcpu, gfn,
+- PT_DIRECTORY_LEVEL);
++ force_pt_level =
++ lpage_disallowed ||
++ !check_hugepage_cache_consistency(vcpu, gfn, PT_DIRECTORY_LEVEL);
+ level = mapping_level(vcpu, gfn, &force_pt_level);
+ if (likely(!force_pt_level)) {
+ if (level > PT_DIRECTORY_LEVEL &&
+@@ -4172,7 +4266,8 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
+ goto out_unlock;
+ if (likely(!force_pt_level))
+ transparent_hugepage_adjust(vcpu, gfn, &pfn, &level);
+- r = __direct_map(vcpu, gpa, write, map_writable, level, pfn, prefault);
++ r = __direct_map(vcpu, gpa, write, map_writable, level, pfn,
++ prefault, lpage_disallowed);
+ out_unlock:
+ spin_unlock(&vcpu->kvm->mmu_lock);
+ kvm_release_pfn_clean(pfn);
+@@ -6099,10 +6194,60 @@ static void kvm_set_mmio_spte_mask(void)
+ kvm_mmu_set_mmio_spte_mask(mask, mask);
+ }
+
++static bool get_nx_auto_mode(void)
++{
++ /* Return true when CPU has the bug, and mitigations are ON */
++ return boot_cpu_has_bug(X86_BUG_ITLB_MULTIHIT) && !cpu_mitigations_off();
++}
++
++static void __set_nx_huge_pages(bool val)
++{
++ nx_huge_pages = itlb_multihit_kvm_mitigation = val;
++}
++
++static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
++{
++ bool old_val = nx_huge_pages;
++ bool new_val;
++
++ /* In "auto" mode deploy workaround only if CPU has the bug. */
++ if (sysfs_streq(val, "off"))
++ new_val = 0;
++ else if (sysfs_streq(val, "force"))
++ new_val = 1;
++ else if (sysfs_streq(val, "auto"))
++ new_val = get_nx_auto_mode();
++ else if (strtobool(val, &new_val) < 0)
++ return -EINVAL;
++
++ __set_nx_huge_pages(new_val);
++
++ if (new_val != old_val) {
++ struct kvm *kvm;
++ int idx;
++
++ mutex_lock(&kvm_lock);
++
++ list_for_each_entry(kvm, &vm_list, vm_list) {
++ idx = srcu_read_lock(&kvm->srcu);
++ kvm_mmu_zap_all_fast(kvm);
++ srcu_read_unlock(&kvm->srcu, idx);
++
++ wake_up_process(kvm->arch.nx_lpage_recovery_thread);
++ }
++ mutex_unlock(&kvm_lock);
++ }
++
++ return 0;
++}
++
+ int kvm_mmu_module_init(void)
+ {
+ int ret = -ENOMEM;
+
++ if (nx_huge_pages == -1)
++ __set_nx_huge_pages(get_nx_auto_mode());
++
+ /*
+ * MMU roles use union aliasing which is, generally speaking, an
+ * undefined behavior. However, we supposedly know how compilers behave
+@@ -6182,3 +6327,116 @@ void kvm_mmu_module_exit(void)
+ unregister_shrinker(&mmu_shrinker);
+ mmu_audit_disable();
+ }
++
++static int set_nx_huge_pages_recovery_ratio(const char *val, const struct kernel_param *kp)
++{
++ unsigned int old_val;
++ int err;
++
++ old_val = nx_huge_pages_recovery_ratio;
++ err = param_set_uint(val, kp);
++ if (err)
++ return err;
++
++ if (READ_ONCE(nx_huge_pages) &&
++ !old_val && nx_huge_pages_recovery_ratio) {
++ struct kvm *kvm;
++
++ mutex_lock(&kvm_lock);
++
++ list_for_each_entry(kvm, &vm_list, vm_list)
++ wake_up_process(kvm->arch.nx_lpage_recovery_thread);
++
++ mutex_unlock(&kvm_lock);
++ }
++
++ return err;
++}
++
++static void kvm_recover_nx_lpages(struct kvm *kvm)
++{
++ int rcu_idx;
++ struct kvm_mmu_page *sp;
++ unsigned int ratio;
++ LIST_HEAD(invalid_list);
++ ulong to_zap;
++
++ rcu_idx = srcu_read_lock(&kvm->srcu);
++ spin_lock(&kvm->mmu_lock);
++
++ ratio = READ_ONCE(nx_huge_pages_recovery_ratio);
++ to_zap = ratio ? DIV_ROUND_UP(kvm->stat.nx_lpage_splits, ratio) : 0;
++ while (to_zap && !list_empty(&kvm->arch.lpage_disallowed_mmu_pages)) {
++ /*
++ * We use a separate list instead of just using active_mmu_pages
++ * because the number of lpage_disallowed pages is expected to
++ * be relatively small compared to the total.
++ */
++ sp = list_first_entry(&kvm->arch.lpage_disallowed_mmu_pages,
++ struct kvm_mmu_page,
++ lpage_disallowed_link);
++ WARN_ON_ONCE(!sp->lpage_disallowed);
++ kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
++ WARN_ON_ONCE(sp->lpage_disallowed);
++
++ if (!--to_zap || need_resched() || spin_needbreak(&kvm->mmu_lock)) {
++ kvm_mmu_commit_zap_page(kvm, &invalid_list);
++ if (to_zap)
++ cond_resched_lock(&kvm->mmu_lock);
++ }
++ }
++
++ spin_unlock(&kvm->mmu_lock);
++ srcu_read_unlock(&kvm->srcu, rcu_idx);
++}
++
++static long get_nx_lpage_recovery_timeout(u64 start_time)
++{
++ return READ_ONCE(nx_huge_pages) && READ_ONCE(nx_huge_pages_recovery_ratio)
++ ? start_time + 60 * HZ - get_jiffies_64()
++ : MAX_SCHEDULE_TIMEOUT;
++}
++
++static int kvm_nx_lpage_recovery_worker(struct kvm *kvm, uintptr_t data)
++{
++ u64 start_time;
++ long remaining_time;
++
++ while (true) {
++ start_time = get_jiffies_64();
++ remaining_time = get_nx_lpage_recovery_timeout(start_time);
++
++ set_current_state(TASK_INTERRUPTIBLE);
++ while (!kthread_should_stop() && remaining_time > 0) {
++ schedule_timeout(remaining_time);
++ remaining_time = get_nx_lpage_recovery_timeout(start_time);
++ set_current_state(TASK_INTERRUPTIBLE);
++ }
++
++ set_current_state(TASK_RUNNING);
++
++ if (kthread_should_stop())
++ return 0;
++
++ kvm_recover_nx_lpages(kvm);
++ }
++}
++
++int kvm_mmu_post_init_vm(struct kvm *kvm)
++{
++ int err;
++
++ err = kvm_vm_create_worker_thread(kvm, kvm_nx_lpage_recovery_worker, 0,
++ "kvm-nx-lpage-recovery",
++ &kvm->arch.nx_lpage_recovery_thread);
++ if (!err)
++ kthread_unpark(kvm->arch.nx_lpage_recovery_thread);
++
++ return err;
++}
++
++void kvm_mmu_pre_destroy_vm(struct kvm *kvm)
++{
++ if (kvm->arch.nx_lpage_recovery_thread)
++ kthread_stop(kvm->arch.nx_lpage_recovery_thread);
++}
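
To make the recovery pacing concrete: kvm_recover_nx_lpages() zaps 1/ratio of the currently split pages on each wakeup, and get_nx_lpage_recovery_timeout() rearms the worker every 60*HZ jiffies, so the default ratio of 60 cycles through the whole list in roughly an hour. A self-contained sketch of the to_zap arithmetic (DIV_ROUND_UP restated locally; the numbers are illustrative):

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
    unsigned long nx_lpage_splits = 1000; /* example stat value */
    unsigned int ratio = 60;              /* default module param */
    unsigned long to_zap;

    /* Same formula as kvm_recover_nx_lpages(); ratio == 0 disables
     * recovery entirely.
     */
    to_zap = ratio ? DIV_ROUND_UP(nx_lpage_splits, ratio) : 0;
    printf("%lu\n", to_zap); /* 17 pages zapped this minute */
    return 0;
}
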
+diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
+index 54c2a377795b..4610230ddaea 100644
+--- a/arch/x86/kvm/mmu.h
++++ b/arch/x86/kvm/mmu.h
+@@ -210,4 +210,8 @@ void kvm_mmu_gfn_allow_lpage(struct kvm_memory_slot *slot, gfn_t gfn);
+ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
+ struct kvm_memory_slot *slot, u64 gfn);
+ int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu);
++
++int kvm_mmu_post_init_vm(struct kvm *kvm);
++void kvm_mmu_pre_destroy_vm(struct kvm *kvm);
++
+ #endif
+diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
+index 7d5cdb3af594..97b21e7fd013 100644
+--- a/arch/x86/kvm/paging_tmpl.h
++++ b/arch/x86/kvm/paging_tmpl.h
+@@ -614,13 +614,14 @@ static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
+ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
+ struct guest_walker *gw,
+ int write_fault, int hlevel,
+- kvm_pfn_t pfn, bool map_writable, bool prefault)
++ kvm_pfn_t pfn, bool map_writable, bool prefault,
++ bool lpage_disallowed)
+ {
+ struct kvm_mmu_page *sp = NULL;
+ struct kvm_shadow_walk_iterator it;
+ unsigned direct_access, access = gw->pt_access;
+ int top_level, ret;
+- gfn_t base_gfn;
++ gfn_t gfn, base_gfn;
+
+ direct_access = gw->pte_access;
+
+@@ -665,13 +666,25 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
+ link_shadow_page(vcpu, it.sptep, sp);
+ }
+
+- base_gfn = gw->gfn;
++ /*
++ * FNAME(page_fault) might have clobbered the bottom bits of
++ * gw->gfn, restore them from the virtual address.
++ */
++ gfn = gw->gfn | ((addr & PT_LVL_OFFSET_MASK(gw->level)) >> PAGE_SHIFT);
++ base_gfn = gfn;
+
+ trace_kvm_mmu_spte_requested(addr, gw->level, pfn);
+
+ for (; shadow_walk_okay(&it); shadow_walk_next(&it)) {
+ clear_sp_write_flooding_count(it.sptep);
+- base_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
++
++ /*
++ * We cannot overwrite existing page tables with an NX
++ * large page, as the leaf could be executable.
++ */
++ disallowed_hugepage_adjust(it, gfn, &pfn, &hlevel);
++
++ base_gfn = gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
+ if (it.level == hlevel)
+ break;
+
+@@ -683,6 +696,8 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
+ sp = kvm_mmu_get_page(vcpu, base_gfn, addr,
+ it.level - 1, true, direct_access);
+ link_shadow_page(vcpu, it.sptep, sp);
++ if (lpage_disallowed)
++ account_huge_nx_page(vcpu->kvm, sp);
+ }
+ }
+
+@@ -759,9 +774,11 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
+ int r;
+ kvm_pfn_t pfn;
+ int level = PT_PAGE_TABLE_LEVEL;
+- bool force_pt_level = false;
+ unsigned long mmu_seq;
+ bool map_writable, is_self_change_mapping;
++ bool lpage_disallowed = (error_code & PFERR_FETCH_MASK) &&
++ is_nx_huge_page_enabled();
++ bool force_pt_level = lpage_disallowed;
+
+ pgprintk("%s: addr %lx err %x\n", __func__, addr, error_code);
+
+@@ -851,7 +868,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
+ if (!force_pt_level)
+ transparent_hugepage_adjust(vcpu, walker.gfn, &pfn, &level);
+ r = FNAME(fetch)(vcpu, addr, &walker, write_fault,
+- level, pfn, map_writable, prefault);
++ level, pfn, map_writable, prefault, lpage_disallowed);
+ kvm_mmu_audit(vcpu, AUDIT_POST_PAGE_FAULT);
+
+ out_unlock:
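
The gfn restoration above is plain bit arithmetic. A hedged sketch for the 2MB case (level 2 on x86-64, where PT_LVL_OFFSET_MASK(2) covers address bits 20:12; the constants and example address below are assumptions for illustration):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12

int main(void)
{
    uint64_t addr = 0x7f1234567000ULL;  /* guest virtual address */
    uint64_t gw_gfn = addr >> 21 << 9;  /* gfn with its low 9 bits
                                         * dropped, as the walker may
                                         * have clobbered them */
    /* Bits 20:12 of the address are the low 9 bits of the gfn. */
    uint64_t mask = ((1ULL << 21) - 1) & ~((1ULL << PAGE_SHIFT) - 1);
    uint64_t gfn = gw_gfn | ((addr & mask) >> PAGE_SHIFT);

    printf("%#llx\n", (unsigned long long)gfn); /* 0x7f1234567 */
    return 0;
}
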
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index e5ccfb33dbea..f82f766c81c8 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -92,8 +92,8 @@ u64 __read_mostly efer_reserved_bits = ~((u64)(EFER_SCE | EFER_LME | EFER_LMA));
+ static u64 __read_mostly efer_reserved_bits = ~((u64)EFER_SCE);
+ #endif
+
+-#define VM_STAT(x) offsetof(struct kvm, stat.x), KVM_STAT_VM
+-#define VCPU_STAT(x) offsetof(struct kvm_vcpu, stat.x), KVM_STAT_VCPU
++#define VM_STAT(x, ...) offsetof(struct kvm, stat.x), KVM_STAT_VM, ## __VA_ARGS__
++#define VCPU_STAT(x, ...) offsetof(struct kvm_vcpu, stat.x), KVM_STAT_VCPU, ## __VA_ARGS__
+
+ #define KVM_X2APIC_API_VALID_FLAGS (KVM_X2APIC_API_USE_32BIT_IDS | \
+ KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK)
+@@ -212,7 +212,8 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
+ { "mmu_cache_miss", VM_STAT(mmu_cache_miss) },
+ { "mmu_unsync", VM_STAT(mmu_unsync) },
+ { "remote_tlb_flush", VM_STAT(remote_tlb_flush) },
+- { "largepages", VM_STAT(lpages) },
++ { "largepages", VM_STAT(lpages, .mode = 0444) },
++ { "nx_largepages_splitted", VM_STAT(nx_lpage_splits, .mode = 0444) },
+ { "max_mmu_page_hash_collisions",
+ VM_STAT(max_mmu_page_hash_collisions) },
+ { NULL }
+@@ -1255,6 +1256,14 @@ static u64 kvm_get_arch_capabilities(void)
+ if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
+ rdmsrl(MSR_IA32_ARCH_CAPABILITIES, data);
+
++ /*
++ * If nx_huge_pages is enabled, KVM's shadow paging will ensure that
++ * the nested hypervisor runs with NX huge pages. If it is not,
++ * L1 is anyway vulnerable to ITLB_MULTIHIT exploits from other
++ * L1 guests, so it need not worry about its own (L2) guests.
++ */
++ data |= ARCH_CAP_PSCHANGE_MC_NO;
++
+ /*
+ * If we're doing cache flushes (either "always" or "cond")
+ * we will do one whenever the guest does a vmlaunch/vmresume.
+@@ -1267,6 +1276,25 @@ static u64 kvm_get_arch_capabilities(void)
+ if (l1tf_vmx_mitigation != VMENTER_L1D_FLUSH_NEVER)
+ data |= ARCH_CAP_SKIP_VMENTRY_L1DFLUSH;
+
++ /*
++ * On TAA affected systems, export MDS_NO=0 when:
++ * - TSX is enabled on the host, i.e. X86_FEATURE_RTM=1.
++ * - Updated microcode is present. This is detected by
++ * the presence of ARCH_CAP_TSX_CTRL_MSR and ensures
++ * that VERW clears CPU buffers.
++ *
++ * When MDS_NO=0 is exported, guests deploy clear CPU buffer
++ * mitigation and don't complain:
++ *
++ * "Vulnerable: Clear CPU buffers attempted, no microcode"
++ *
++ * If TSX is disabled on the system, guests are also mitigated against
++ * TAA and clear CPU buffer mitigation is not required for guests.
++ */
++ if (boot_cpu_has_bug(X86_BUG_TAA) && boot_cpu_has(X86_FEATURE_RTM) &&
++ (data & ARCH_CAP_TSX_CTRL_MSR))
++ data &= ~ARCH_CAP_MDS_NO;
++
+ return data;
+ }
+
+@@ -9314,6 +9342,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
+
+ INIT_HLIST_HEAD(&kvm->arch.mask_notifier_list);
+ INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
++ INIT_LIST_HEAD(&kvm->arch.lpage_disallowed_mmu_pages);
+ INIT_LIST_HEAD(&kvm->arch.assigned_dev_head);
+ atomic_set(&kvm->arch.noncoherent_dma_count, 0);
+
+@@ -9345,6 +9374,11 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
+ return 0;
+ }
+
++int kvm_arch_post_init_vm(struct kvm *kvm)
++{
++ return kvm_mmu_post_init_vm(kvm);
++}
++
+ static void kvm_unload_vcpu_mmu(struct kvm_vcpu *vcpu)
+ {
+ vcpu_load(vcpu);
+@@ -9446,6 +9480,11 @@ int x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, u32 size)
+ }
+ EXPORT_SYMBOL_GPL(x86_set_memory_region);
+
++void kvm_arch_pre_destroy_vm(struct kvm *kvm)
++{
++ kvm_mmu_pre_destroy_vm(kvm);
++}
++
+ void kvm_arch_destroy_vm(struct kvm *kvm)
+ {
+ if (current->mm == kvm->mm) {
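
The two ARCH_CAPABILITIES adjustments above are simple bit operations on the MSR value. A self-contained sketch (the bit positions restate msr-index.h values current at the time of this patch, but treat them as illustrative):

#include <stdio.h>
#include <stdint.h>

#define ARCH_CAP_MDS_NO         (1ULL << 5)
#define ARCH_CAP_PSCHANGE_MC_NO (1ULL << 6)
#define ARCH_CAP_TSX_CTRL_MSR   (1ULL << 7)

static uint64_t adjust_arch_caps(uint64_t data, int taa_bug, int has_rtm)
{
    /* KVM's shadow paging mitigates ITLB multihit for guests. */
    data |= ARCH_CAP_PSCHANGE_MC_NO;

    /* Hide MDS_NO when TSX is on and updated microcode is present,
     * so guests deploy the VERW clear-CPU-buffers mitigation.
     */
    if (taa_bug && has_rtm && (data & ARCH_CAP_TSX_CTRL_MSR))
        data &= ~ARCH_CAP_MDS_NO;

    return data;
}

int main(void)
{
    uint64_t host = ARCH_CAP_MDS_NO | ARCH_CAP_TSX_CTRL_MSR;

    /* prints 0xc0: PSCHANGE_MC_NO set, MDS_NO cleared */
    printf("%#llx\n", (unsigned long long)adjust_arch_caps(host, 1, 1));
    return 0;
}
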
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 55a7dc227dfb..4c909ae07093 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -908,9 +908,14 @@ static int blkcg_print_stat(struct seq_file *sf, void *v)
+ int i;
+ bool has_stats = false;
+
++ spin_lock_irq(&blkg->q->queue_lock);
++
++ if (!blkg->online)
++ goto skip;
++
+ dname = blkg_dev_name(blkg);
+ if (!dname)
+- continue;
++ goto skip;
+
+ /*
+ * Hooray string manipulation, count is the size written NOT
+@@ -920,8 +925,6 @@ static int blkcg_print_stat(struct seq_file *sf, void *v)
+ */
+ off += scnprintf(buf+off, size-off, "%s ", dname);
+
+- spin_lock_irq(&blkg->q->queue_lock);
+-
+ blkg_rwstat_recursive_sum(blkg, NULL,
+ offsetof(struct blkcg_gq, stat_bytes), &rwstat);
+ rbytes = rwstat.cnt[BLKG_RWSTAT_READ];
+@@ -934,8 +937,6 @@ static int blkcg_print_stat(struct seq_file *sf, void *v)
+ wios = rwstat.cnt[BLKG_RWSTAT_WRITE];
+ dios = rwstat.cnt[BLKG_RWSTAT_DISCARD];
+
+- spin_unlock_irq(&blkg->q->queue_lock);
+-
+ if (rbytes || wbytes || rios || wios) {
+ has_stats = true;
+ off += scnprintf(buf+off, size-off,
+@@ -973,6 +974,8 @@ static int blkcg_print_stat(struct seq_file *sf, void *v)
+ seq_commit(sf, -1);
+ }
+ }
++ skip:
++ spin_unlock_irq(&blkg->q->queue_lock);
+ }
+
+ rcu_read_unlock();
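
The blk-cgroup fix above is a pure locking reorder: take queue_lock before any per-blkg state (online flag, device name, stats) is inspected, and leave through a single unlock label. A minimal userspace sketch of the pattern using pthreads (names are illustrative):

#include <pthread.h>
#include <stdio.h>

struct blkg {
    pthread_mutex_t lock;
    int online;
    long rbytes, wbytes;
};

/* Take the lock first, read everything under it, exit once. */
static void print_one(struct blkg *blkg)
{
    pthread_mutex_lock(&blkg->lock);

    if (!blkg->online)
        goto skip; /* previously bailed out with no lock held */

    printf("%ld %ld\n", blkg->rbytes, blkg->wbytes);
skip:
    pthread_mutex_unlock(&blkg->lock);
}

int main(void)
{
    struct blkg g = { PTHREAD_MUTEX_INITIALIZER, 1, 4096, 8192 };

    print_one(&g);
    return 0;
}
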
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index cc37511de866..6265871a4af2 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -554,12 +554,27 @@ ssize_t __weak cpu_show_mds(struct device *dev,
+ return sprintf(buf, "Not affected\n");
+ }
+
++ssize_t __weak cpu_show_tsx_async_abort(struct device *dev,
++ struct device_attribute *attr,
++ char *buf)
++{
++ return sprintf(buf, "Not affected\n");
++}
++
++ssize_t __weak cpu_show_itlb_multihit(struct device *dev,
++ struct device_attribute *attr, char *buf)
++{
++ return sprintf(buf, "Not affected\n");
++}
++
+ static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
+ static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
+ static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
+ static DEVICE_ATTR(spec_store_bypass, 0444, cpu_show_spec_store_bypass, NULL);
+ static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
+ static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
++static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
++static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL);
+
+ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ &dev_attr_meltdown.attr,
+@@ -568,6 +583,8 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ &dev_attr_spec_store_bypass.attr,
+ &dev_attr_l1tf.attr,
+ &dev_attr_mds.attr,
++ &dev_attr_tsx_async_abort.attr,
++ &dev_attr_itlb_multihit.attr,
+ NULL
+ };
+
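
The new attributes surface as files under /sys/devices/system/cpu/vulnerabilities/. A short sketch of reading one of them (the path follows from the DEVICE_ATTR names above; the file only exists on kernels carrying this change):

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/tsx_async_abort", "r");
    char line[128];

    if (f && fgets(line, sizeof(line), f))
        fputs(line, stdout); /* e.g. "Not affected" */
    if (f)
        fclose(f);
    return 0;
}
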
+diff --git a/drivers/clk/imx/clk-imx8mm.c b/drivers/clk/imx/clk-imx8mm.c
+index 6f46bcb1d643..59ce93691e97 100644
+--- a/drivers/clk/imx/clk-imx8mm.c
++++ b/drivers/clk/imx/clk-imx8mm.c
+@@ -666,7 +666,7 @@ static int __init imx8mm_clocks_init(struct device_node *ccm_node)
+ clks[IMX8MM_CLK_A53_DIV],
+ clks[IMX8MM_CLK_A53_SRC],
+ clks[IMX8MM_ARM_PLL_OUT],
+- clks[IMX8MM_CLK_24M]);
++ clks[IMX8MM_SYS_PLL1_800M]);
+
+ imx_check_clocks(clks, ARRAY_SIZE(clks));
+
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index cc27d4c59dca..6a89d3383a21 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -846,11 +846,9 @@ static void intel_pstate_hwp_force_min_perf(int cpu)
+ value |= HWP_MAX_PERF(min_perf);
+ value |= HWP_MIN_PERF(min_perf);
+
+- /* Set EPP/EPB to min */
++ /* Set EPP to min */
+ if (boot_cpu_has(X86_FEATURE_HWP_EPP))
+ value |= HWP_ENERGY_PERF_PREFERENCE(HWP_EPP_POWERSAVE);
+- else
+- intel_pstate_set_epb(cpu, HWP_EPP_BALANCE_POWERSAVE);
+
+ wrmsrl_on_cpu(cpu, MSR_HWP_REQUEST, value);
+ }
+diff --git a/drivers/dma/sprd-dma.c b/drivers/dma/sprd-dma.c
+index 525dc7338fe3..8546ad034720 100644
+--- a/drivers/dma/sprd-dma.c
++++ b/drivers/dma/sprd-dma.c
+@@ -134,6 +134,10 @@
+ #define SPRD_DMA_SRC_TRSF_STEP_OFFSET 0
+ #define SPRD_DMA_TRSF_STEP_MASK GENMASK(15, 0)
+
++/* SPRD DMA_SRC_BLK_STEP register definition */
++#define SPRD_DMA_LLIST_HIGH_MASK GENMASK(31, 28)
++#define SPRD_DMA_LLIST_HIGH_SHIFT 28
++
+ /* define DMA channel mode & trigger mode mask */
+ #define SPRD_DMA_CHN_MODE_MASK GENMASK(7, 0)
+ #define SPRD_DMA_TRG_MODE_MASK GENMASK(7, 0)
+@@ -208,6 +212,7 @@ struct sprd_dma_dev {
+ struct sprd_dma_chn channels[0];
+ };
+
++static void sprd_dma_free_desc(struct virt_dma_desc *vd);
+ static bool sprd_dma_filter_fn(struct dma_chan *chan, void *param);
+ static struct of_dma_filter_info sprd_dma_info = {
+ .filter_fn = sprd_dma_filter_fn,
+@@ -609,12 +614,19 @@ static int sprd_dma_alloc_chan_resources(struct dma_chan *chan)
+ static void sprd_dma_free_chan_resources(struct dma_chan *chan)
+ {
+ struct sprd_dma_chn *schan = to_sprd_dma_chan(chan);
++ struct virt_dma_desc *cur_vd = NULL;
+ unsigned long flags;
+
+ spin_lock_irqsave(&schan->vc.lock, flags);
++ if (schan->cur_desc)
++ cur_vd = &schan->cur_desc->vd;
++
+ sprd_dma_stop(schan);
+ spin_unlock_irqrestore(&schan->vc.lock, flags);
+
++ if (cur_vd)
++ sprd_dma_free_desc(cur_vd);
++
+ vchan_free_chan_resources(&schan->vc);
+ pm_runtime_put(chan->device->dev);
+ }
+@@ -717,6 +729,7 @@ static int sprd_dma_fill_desc(struct dma_chan *chan,
+ u32 int_mode = flags & SPRD_DMA_INT_MASK;
+ int src_datawidth, dst_datawidth, src_step, dst_step;
+ u32 temp, fix_mode = 0, fix_en = 0;
++ phys_addr_t llist_ptr;
+
+ if (dir == DMA_MEM_TO_DEV) {
+ src_step = sprd_dma_get_step(slave_cfg->src_addr_width);
+@@ -814,13 +827,16 @@ static int sprd_dma_fill_desc(struct dma_chan *chan,
+ * Set the link-list pointer point to next link-list
+ * configuration's physical address.
+ */
+- hw->llist_ptr = schan->linklist.phy_addr + temp;
++ llist_ptr = schan->linklist.phy_addr + temp;
++ hw->llist_ptr = lower_32_bits(llist_ptr);
++ hw->src_blk_step = (upper_32_bits(llist_ptr) << SPRD_DMA_LLIST_HIGH_SHIFT) &
++ SPRD_DMA_LLIST_HIGH_MASK;
+ } else {
+ hw->llist_ptr = 0;
++ hw->src_blk_step = 0;
+ }
+
+ hw->frg_step = 0;
+- hw->src_blk_step = 0;
+ hw->des_blk_step = 0;
+ return 0;
+ }
+@@ -1023,15 +1039,22 @@ static int sprd_dma_resume(struct dma_chan *chan)
+ static int sprd_dma_terminate_all(struct dma_chan *chan)
+ {
+ struct sprd_dma_chn *schan = to_sprd_dma_chan(chan);
++ struct virt_dma_desc *cur_vd = NULL;
+ unsigned long flags;
+ LIST_HEAD(head);
+
+ spin_lock_irqsave(&schan->vc.lock, flags);
++ if (schan->cur_desc)
++ cur_vd = &schan->cur_desc->vd;
++
+ sprd_dma_stop(schan);
+
+ vchan_get_all_descriptors(&schan->vc, &head);
+ spin_unlock_irqrestore(&schan->vc.lock, flags);
+
++ if (cur_vd)
++ sprd_dma_free_desc(cur_vd);
++
+ vchan_dma_desc_free_list(&schan->vc, &head);
+ return 0;
+ }
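
The link-list change above packs the physical-address bits beyond 32 into the top nibble of the SRC_BLK_STEP register. A self-contained sketch of the split (GENMASK and the bit helpers are restated locally; the address is an example):

#include <stdio.h>
#include <stdint.h>

#define GENMASK(h, l)    (((~0ULL) << (l)) & (~0ULL >> (63 - (h))))
#define LLIST_HIGH_MASK  GENMASK(31, 28)
#define LLIST_HIGH_SHIFT 28

static uint32_t lower_32_bits(uint64_t v) { return (uint32_t)v; }
static uint32_t upper_32_bits(uint64_t v) { return (uint32_t)(v >> 32); }

int main(void)
{
    uint64_t llist_ptr = 0x234567890ULL; /* example 34-bit address */
    uint32_t lo = lower_32_bits(llist_ptr);
    uint32_t step = (upper_32_bits(llist_ptr) << LLIST_HIGH_SHIFT) &
                    LLIST_HIGH_MASK;

    /* Bits 35:32 of the address land in bits 31:28 of the register. */
    printf("%#x %#x\n", lo, step); /* 0x34567890 0x20000000 */
    return 0;
}
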
+diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
+index e7dc3c4dc8e0..5d56f1e4d332 100644
+--- a/drivers/dma/xilinx/xilinx_dma.c
++++ b/drivers/dma/xilinx/xilinx_dma.c
+@@ -68,6 +68,9 @@
+ #define XILINX_DMA_DMACR_CIRC_EN BIT(1)
+ #define XILINX_DMA_DMACR_RUNSTOP BIT(0)
+ #define XILINX_DMA_DMACR_FSYNCSRC_MASK GENMASK(6, 5)
++#define XILINX_DMA_DMACR_DELAY_MASK GENMASK(31, 24)
++#define XILINX_DMA_DMACR_FRAME_COUNT_MASK GENMASK(23, 16)
++#define XILINX_DMA_DMACR_MASTER_MASK GENMASK(11, 8)
+
+ #define XILINX_DMA_REG_DMASR 0x0004
+ #define XILINX_DMA_DMASR_EOL_LATE_ERR BIT(15)
+@@ -1354,7 +1357,8 @@ static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
+ node);
+ hw = &segment->hw;
+
+- xilinx_write(chan, XILINX_DMA_REG_SRCDSTADDR, hw->buf_addr);
++ xilinx_write(chan, XILINX_DMA_REG_SRCDSTADDR,
++ xilinx_prep_dma_addr_t(hw->buf_addr));
+
+ /* Start the transfer */
+ dma_ctrl_write(chan, XILINX_DMA_REG_BTT,
+@@ -2117,8 +2121,10 @@ int xilinx_vdma_channel_set_config(struct dma_chan *dchan,
+ chan->config.gen_lock = cfg->gen_lock;
+ chan->config.master = cfg->master;
+
++ dmacr &= ~XILINX_DMA_DMACR_GENLOCK_EN;
+ if (cfg->gen_lock && chan->genlock) {
+ dmacr |= XILINX_DMA_DMACR_GENLOCK_EN;
++ dmacr &= ~XILINX_DMA_DMACR_MASTER_MASK;
+ dmacr |= cfg->master << XILINX_DMA_DMACR_MASTER_SHIFT;
+ }
+
+@@ -2134,11 +2140,13 @@ int xilinx_vdma_channel_set_config(struct dma_chan *dchan,
+ chan->config.delay = cfg->delay;
+
+ if (cfg->coalesc <= XILINX_DMA_DMACR_FRAME_COUNT_MAX) {
++ dmacr &= ~XILINX_DMA_DMACR_FRAME_COUNT_MASK;
+ dmacr |= cfg->coalesc << XILINX_DMA_DMACR_FRAME_COUNT_SHIFT;
+ chan->config.coalesc = cfg->coalesc;
+ }
+
+ if (cfg->delay <= XILINX_DMA_DMACR_DELAY_MAX) {
++ dmacr &= ~XILINX_DMA_DMACR_DELAY_MASK;
+ dmacr |= cfg->delay << XILINX_DMA_DMACR_DELAY_SHIFT;
+ chan->config.delay = cfg->delay;
+ }
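
All three DMACR fixes above apply the same read-modify-write discipline: clear a field's mask before OR-ing in its new value, so bits left over from a previous configuration cannot merge with the new one. A minimal sketch with an assumed 8-bit field at bits 23:16:

#include <stdio.h>
#include <stdint.h>

#define FRAME_COUNT_SHIFT 16
#define FRAME_COUNT_MASK  (0xFFu << FRAME_COUNT_SHIFT) /* bits 23:16 */

static uint32_t set_frame_count(uint32_t dmacr, uint32_t count)
{
    dmacr &= ~FRAME_COUNT_MASK; /* the step the patch adds */
    dmacr |= (count << FRAME_COUNT_SHIFT) & FRAME_COUNT_MASK;
    return dmacr;
}

int main(void)
{
    /* Starting from a register with the field already at 0xFF. */
    printf("%#x\n", set_frame_count(0x00FF0000u, 0x05)); /* 0x50000 */
    return 0;
}
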
+diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
+index 0460c7581220..ee0661ddb25b 100644
+--- a/drivers/firmware/efi/libstub/Makefile
++++ b/drivers/firmware/efi/libstub/Makefile
+@@ -52,6 +52,7 @@ lib-$(CONFIG_EFI_ARMSTUB) += arm-stub.o fdt.o string.o random.o \
+
+ lib-$(CONFIG_ARM) += arm32-stub.o
+ lib-$(CONFIG_ARM64) += arm64-stub.o
++CFLAGS_arm32-stub.o := -DTEXT_OFFSET=$(TEXT_OFFSET)
+ CFLAGS_arm64-stub.o := -DTEXT_OFFSET=$(TEXT_OFFSET)
+
+ #
+diff --git a/drivers/firmware/efi/libstub/arm32-stub.c b/drivers/firmware/efi/libstub/arm32-stub.c
+index e8f7aefb6813..41213bf5fcf5 100644
+--- a/drivers/firmware/efi/libstub/arm32-stub.c
++++ b/drivers/firmware/efi/libstub/arm32-stub.c
+@@ -195,6 +195,7 @@ efi_status_t handle_kernel_image(efi_system_table_t *sys_table,
+ unsigned long dram_base,
+ efi_loaded_image_t *image)
+ {
++ unsigned long kernel_base;
+ efi_status_t status;
+
+ /*
+@@ -204,9 +205,18 @@ efi_status_t handle_kernel_image(efi_system_table_t *sys_table,
+ * loaded. These assumptions are made by the decompressor,
+ * before any memory map is available.
+ */
+- dram_base = round_up(dram_base, SZ_128M);
++ kernel_base = round_up(dram_base, SZ_128M);
+
+- status = reserve_kernel_base(sys_table, dram_base, reserve_addr,
++ /*
++ * Note that some platforms (notably, the Raspberry Pi 2) put
++ * spin-tables and other pieces of firmware at the base of RAM,
++ * abusing the fact that the window of TEXT_OFFSET bytes at the
++ * base of the kernel image is only partially used at the moment.
++ * (Up to 5 pages are used for the swapper page tables)
++ */
++ kernel_base += TEXT_OFFSET - 5 * PAGE_SIZE;
++
++ status = reserve_kernel_base(sys_table, kernel_base, reserve_addr,
+ reserve_size);
+ if (status != EFI_SUCCESS) {
+ pr_efi_err(sys_table, "Unable to allocate memory for uncompressed kernel.\n");
+@@ -220,7 +230,7 @@ efi_status_t handle_kernel_image(efi_system_table_t *sys_table,
+ *image_size = image->image_size;
+ status = efi_relocate_kernel(sys_table, image_addr, *image_size,
+ *image_size,
+- dram_base + MAX_UNCOMP_KERNEL_SIZE, 0);
++ kernel_base + MAX_UNCOMP_KERNEL_SIZE, 0, 0);
+ if (status != EFI_SUCCESS) {
+ pr_efi_err(sys_table, "Failed to relocate kernel.\n");
+ efi_free(sys_table, *reserve_size, *reserve_addr);
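
Concretely, with typical 32-bit ARM values (TEXT_OFFSET = 0x8000 and 4KB pages, both build-time configurable, so treat the numbers as assumptions), the reservation starts 12KB above the 128MB-aligned base: the first 12KB of the TEXT_OFFSET window stays available to firmware, while the up-to-5 pages of swapper page tables just below the kernel remain covered:

#include <stdio.h>

int main(void)
{
    unsigned long dram_base   = 0;          /* example */
    unsigned long sz_128m     = 128UL << 20;
    unsigned long text_offset = 0x8000;     /* typical ARM value */
    unsigned long page_size   = 4096;
    unsigned long kernel_base;

    /* round_up(dram_base, SZ_128M) */
    kernel_base = (dram_base + sz_128m - 1) & ~(sz_128m - 1);
    kernel_base += text_offset - 5 * page_size;

    printf("%#lx\n", kernel_base); /* 0x3000: 12KB above the base */
    return 0;
}
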
+diff --git a/drivers/firmware/efi/libstub/efi-stub-helper.c b/drivers/firmware/efi/libstub/efi-stub-helper.c
+index 3caae7f2cf56..35dbc2791c97 100644
+--- a/drivers/firmware/efi/libstub/efi-stub-helper.c
++++ b/drivers/firmware/efi/libstub/efi-stub-helper.c
+@@ -260,11 +260,11 @@ fail:
+ }
+
+ /*
+- * Allocate at the lowest possible address.
++ * Allocate at the lowest possible address that is not below 'min'.
+ */
+-efi_status_t efi_low_alloc(efi_system_table_t *sys_table_arg,
+- unsigned long size, unsigned long align,
+- unsigned long *addr)
++efi_status_t efi_low_alloc_above(efi_system_table_t *sys_table_arg,
++ unsigned long size, unsigned long align,
++ unsigned long *addr, unsigned long min)
+ {
+ unsigned long map_size, desc_size, buff_size;
+ efi_memory_desc_t *map;
+@@ -311,13 +311,8 @@ efi_status_t efi_low_alloc(efi_system_table_t *sys_table_arg,
+ start = desc->phys_addr;
+ end = start + desc->num_pages * EFI_PAGE_SIZE;
+
+- /*
+- * Don't allocate at 0x0. It will confuse code that
+- * checks pointers against NULL. Skip the first 8
+- * bytes so we start at a nice even number.
+- */
+- if (start == 0x0)
+- start += 8;
++ if (start < min)
++ start = min;
+
+ start = round_up(start, align);
+ if ((start + size) > end)
+@@ -698,7 +693,8 @@ efi_status_t efi_relocate_kernel(efi_system_table_t *sys_table_arg,
+ unsigned long image_size,
+ unsigned long alloc_size,
+ unsigned long preferred_addr,
+- unsigned long alignment)
++ unsigned long alignment,
++ unsigned long min_addr)
+ {
+ unsigned long cur_image_addr;
+ unsigned long new_addr = 0;
+@@ -731,8 +727,8 @@ efi_status_t efi_relocate_kernel(efi_system_table_t *sys_table_arg,
+ * possible.
+ */
+ if (status != EFI_SUCCESS) {
+- status = efi_low_alloc(sys_table_arg, alloc_size, alignment,
+- &new_addr);
++ status = efi_low_alloc_above(sys_table_arg, alloc_size,
++ alignment, &new_addr, min_addr);
+ }
+ if (status != EFI_SUCCESS) {
+ pr_efi_err(sys_table_arg, "Failed to allocate usable memory for kernel.\n");
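
The generalization above replaces the hard-coded skip of address zero with an arbitrary floor. Per memory descriptor the candidate address is clamped to min, aligned, and checked against the descriptor's end; a small sketch of that computation with example values:

#include <stdio.h>

static unsigned long round_up(unsigned long v, unsigned long a)
{
    return (v + a - 1) & ~(a - 1);
}

int main(void)
{
    unsigned long start = 0x1000, end = 0x100000; /* one descriptor */
    unsigned long min = 0x8000, align = 0x10000, size = 0x4000;

    if (start < min)
        start = min;
    start = round_up(start, align); /* 0x10000 */

    printf("fits=%d start=%#lx\n", start + size <= end, start);
    return 0;
}

The old behavior falls out as min = 8 (skip NULL, start at a nice even offset).
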
+diff --git a/drivers/firmware/efi/tpm.c b/drivers/firmware/efi/tpm.c
+index ebd7977653a8..31f9f0e369b9 100644
+--- a/drivers/firmware/efi/tpm.c
++++ b/drivers/firmware/efi/tpm.c
+@@ -88,6 +88,7 @@ int __init efi_tpm_eventlog_init(void)
+
+ if (tbl_size < 0) {
+ pr_err(FW_BUG "Failed to parse event in TPM Final Events Log\n");
++ ret = -EINVAL;
+ goto out_calc;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+index 9d76e0923a5a..96b2a31ccfed 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+@@ -218,7 +218,7 @@ static struct dma_fence *amdgpu_job_run(struct drm_sched_job *sched_job)
+ struct amdgpu_ring *ring = to_amdgpu_ring(sched_job->sched);
+ struct dma_fence *fence = NULL, *finished;
+ struct amdgpu_job *job;
+- int r;
++ int r = 0;
+
+ job = to_amdgpu_job(sched_job);
+ finished = &job->base.s_fence->finished;
+@@ -243,6 +243,8 @@ static struct dma_fence *amdgpu_job_run(struct drm_sched_job *sched_job)
+ job->fence = dma_fence_get(fence);
+
+ amdgpu_job_free_resources(job);
++
++ fence = r ? ERR_PTR(r) : fence;
+ return fence;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+index 5eeb72fcc123..6a51e6a4a035 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+@@ -264,6 +264,7 @@ static void gmc_v10_0_flush_gpu_tlb(struct amdgpu_device *adev,
+
+ job->vm_pd_addr = amdgpu_gmc_pd_addr(adev->gart.bo);
+ job->vm_needs_flush = true;
++ job->ibs->ptr[job->ibs->length_dw++] = ring->funcs->nop;
+ amdgpu_ring_pad_ib(ring, &job->ibs[0]);
+ r = amdgpu_job_submit(job, &adev->mman.entity,
+ AMDGPU_FENCE_OWNER_UNDEFINED, &fence);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 730f97ba8dbb..dd4731ab935c 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -566,6 +566,10 @@ static bool construct(struct dc *dc,
+ #ifdef CONFIG_DRM_AMD_DC_DCN2_0
+ // Allocate memory for the vm_helper
+ dc->vm_helper = kzalloc(sizeof(struct vm_helper), GFP_KERNEL);
++ if (!dc->vm_helper) {
++ dm_error("%s: failed to create dc->vm_helper\n", __func__);
++ goto fail;
++ }
+
+ #endif
+ memcpy(&dc->bb_overrides, &init_params->bb_overrides, sizeof(dc->bb_overrides));
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c
+index e6da8506128b..623c1ab4d3db 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c
+@@ -374,6 +374,7 @@ void dal_ddc_service_i2c_query_dp_dual_mode_adaptor(
+ enum display_dongle_type *dongle = &sink_cap->dongle_type;
+ uint8_t type2_dongle_buf[DP_ADAPTOR_TYPE2_SIZE];
+ bool is_type2_dongle = false;
++ int retry_count = 2;
+ struct dp_hdmi_dongle_signature_data *dongle_signature;
+
+ /* Assume we have no valid DP passive dongle connected */
+@@ -386,13 +387,24 @@ void dal_ddc_service_i2c_query_dp_dual_mode_adaptor(
+ DP_HDMI_DONGLE_ADDRESS,
+ type2_dongle_buf,
+ sizeof(type2_dongle_buf))) {
+- *dongle = DISPLAY_DONGLE_DP_DVI_DONGLE;
+- sink_cap->max_hdmi_pixel_clock = DP_ADAPTOR_DVI_MAX_TMDS_CLK;
++ /* Passive HDMI dongles can sometimes fail here without retrying */
++ while (retry_count > 0) {
++ if (i2c_read(ddc,
++ DP_HDMI_DONGLE_ADDRESS,
++ type2_dongle_buf,
++ sizeof(type2_dongle_buf)))
++ break;
++ retry_count--;
++ }
++ if (retry_count == 0) {
++ *dongle = DISPLAY_DONGLE_DP_DVI_DONGLE;
++ sink_cap->max_hdmi_pixel_clock = DP_ADAPTOR_DVI_MAX_TMDS_CLK;
+
+- CONN_DATA_DETECT(ddc->link, type2_dongle_buf, sizeof(type2_dongle_buf),
+- "DP-DVI passive dongle %dMhz: ",
+- DP_ADAPTOR_DVI_MAX_TMDS_CLK / 1000);
+- return;
++ CONN_DATA_DETECT(ddc->link, type2_dongle_buf, sizeof(type2_dongle_buf),
++ "DP-DVI passive dongle %dMhz: ",
++ DP_ADAPTOR_DVI_MAX_TMDS_CLK / 1000);
++ return;
++ }
+ }
+
+ /* Check if Type 2 dongle.*/
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index 68db60e4caf3..d1a33e04570f 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -394,6 +394,9 @@ bool resource_are_streams_timing_synchronizable(
+ if (stream1->view_format != stream2->view_format)
+ return false;
+
++ if (stream1->ignore_msa_timing_param || stream2->ignore_msa_timing_param)
++ return false;
++
+ return true;
+ }
+ static bool is_dp_and_hdmi_sharable(
+@@ -1566,6 +1569,9 @@ bool dc_is_stream_unchanged(
+ if (!are_stream_backends_same(old_stream, stream))
+ return false;
+
++ if (old_stream->ignore_msa_timing_param != stream->ignore_msa_timing_param)
++ return false;
++
+ return true;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c
+index 649883777f62..6c6c486b774a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c
+@@ -2577,7 +2577,8 @@ static void dml20_DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPer
+ mode_lib->vba.MinActiveDRAMClockChangeMargin
+ + mode_lib->vba.DRAMClockChangeLatency;
+
+- if (mode_lib->vba.MinActiveDRAMClockChangeMargin > 0) {
++ if (mode_lib->vba.MinActiveDRAMClockChangeMargin > 50) {
++ mode_lib->vba.DRAMClockChangeWatermark += 25;
+ mode_lib->vba.DRAMClockChangeSupport[0][0] = dm_dram_clock_change_vactive;
+ } else {
+ if (mode_lib->vba.SynchronizedVBlank || mode_lib->vba.NumberOfActivePlanes == 1) {
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
+index 0f2c22a3bcb6..f822b7730308 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
+@@ -315,6 +315,8 @@ static void i915_gem_context_free(struct i915_gem_context *ctx)
+ free_engines(rcu_access_pointer(ctx->engines));
+ mutex_destroy(&ctx->engines_mutex);
+
++ kfree(ctx->jump_whitelist);
++
+ if (ctx->timeline)
+ i915_timeline_put(ctx->timeline);
+
+@@ -465,6 +467,9 @@ __create_context(struct drm_i915_private *i915)
+ for (i = 0; i < ARRAY_SIZE(ctx->hang_timestamp); i++)
+ ctx->hang_timestamp[i] = jiffies - CONTEXT_FAST_HANG_JIFFIES;
+
++ ctx->jump_whitelist = NULL;
++ ctx->jump_whitelist_cmds = 0;
++
+ return ctx;
+
+ err_free:
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
+index cc513410eeef..d284b30d591a 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
++++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
+@@ -197,6 +197,11 @@ struct i915_gem_context {
+ * per vm, which may be one per context or shared with the global GTT)
+ */
+ struct radix_tree_root handles_vma;
++
++ /** jump_whitelist: Bit array for tracking cmds during cmdparsing */
++ unsigned long *jump_whitelist;
++ /** jump_whitelist_cmds: Number of cmd slots available */
++ u32 jump_whitelist_cmds;
+ };
+
+ #endif /* __I915_GEM_CONTEXT_TYPES_H__ */
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+index 41dab9ea33cd..4a1debd021ed 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+@@ -295,7 +295,9 @@ static inline u64 gen8_noncanonical_addr(u64 address)
+
+ static inline bool eb_use_cmdparser(const struct i915_execbuffer *eb)
+ {
+- return intel_engine_needs_cmd_parser(eb->engine) && eb->batch_len;
++ return intel_engine_requires_cmd_parser(eb->engine) ||
++ (intel_engine_using_cmd_parser(eb->engine) &&
++ eb->args->batch_len);
+ }
+
+ static int eb_create(struct i915_execbuffer *eb)
+@@ -2009,10 +2011,39 @@ static int i915_reset_gen7_sol_offsets(struct i915_request *rq)
+ return 0;
+ }
+
+-static struct i915_vma *eb_parse(struct i915_execbuffer *eb, bool is_master)
++static struct i915_vma *
++shadow_batch_pin(struct i915_execbuffer *eb, struct drm_i915_gem_object *obj)
++{
++ struct drm_i915_private *dev_priv = eb->i915;
++ struct i915_vma * const vma = *eb->vma;
++ struct i915_address_space *vm;
++ u64 flags;
++
++ /*
++ * PPGTT backed shadow buffers must be mapped RO, to prevent
++ * post-scan tampering
++ */
++ if (CMDPARSER_USES_GGTT(dev_priv)) {
++ flags = PIN_GLOBAL;
++ vm = &dev_priv->ggtt.vm;
++ } else if (vma->vm->has_read_only) {
++ flags = PIN_USER;
++ vm = vma->vm;
++ i915_gem_object_set_readonly(obj);
++ } else {
++ DRM_DEBUG("Cannot prevent post-scan tampering without RO capable vm\n");
++ return ERR_PTR(-EINVAL);
++ }
++
++ return i915_gem_object_pin(obj, vm, NULL, 0, 0, flags);
++}
++
++static struct i915_vma *eb_parse(struct i915_execbuffer *eb)
+ {
+ struct drm_i915_gem_object *shadow_batch_obj;
+ struct i915_vma *vma;
++ u64 batch_start;
++ u64 shadow_batch_start;
+ int err;
+
+ shadow_batch_obj = i915_gem_batch_pool_get(&eb->engine->batch_pool,
+@@ -2020,30 +2051,53 @@ static struct i915_vma *eb_parse(struct i915_execbuffer *eb, bool is_master)
+ if (IS_ERR(shadow_batch_obj))
+ return ERR_CAST(shadow_batch_obj);
+
+- err = intel_engine_cmd_parser(eb->engine,
++ vma = shadow_batch_pin(eb, shadow_batch_obj);
++ if (IS_ERR(vma))
++ goto out;
++
++ batch_start = gen8_canonical_addr(eb->batch->node.start) +
++ eb->batch_start_offset;
++
++ shadow_batch_start = gen8_canonical_addr(vma->node.start);
++
++ err = intel_engine_cmd_parser(eb->gem_context,
++ eb->engine,
+ eb->batch->obj,
+- shadow_batch_obj,
++ batch_start,
+ eb->batch_start_offset,
+ eb->batch_len,
+- is_master);
++ shadow_batch_obj,
++ shadow_batch_start);
++
+ if (err) {
+- if (err == -EACCES) /* unhandled chained batch */
++ i915_vma_unpin(vma);
++
++ /*
++ * Unsafe GGTT-backed buffers can still be submitted safely
++ * as non-secure.
++ * For PPGTT backing however, we have no choice but to forcibly
++ * reject unsafe buffers
++ */
++ if (CMDPARSER_USES_GGTT(eb->i915) && (err == -EACCES))
++ /* Execute original buffer non-secure */
+ vma = NULL;
+ else
+ vma = ERR_PTR(err);
+ goto out;
+ }
+
+- vma = i915_gem_object_ggtt_pin(shadow_batch_obj, NULL, 0, 0, 0);
+- if (IS_ERR(vma))
+- goto out;
+-
+ eb->vma[eb->buffer_count] = i915_vma_get(vma);
+ eb->flags[eb->buffer_count] =
+ __EXEC_OBJECT_HAS_PIN | __EXEC_OBJECT_HAS_REF;
+ vma->exec_flags = &eb->flags[eb->buffer_count];
+ eb->buffer_count++;
++ eb->batch_start_offset = 0;
++ eb->batch = vma;
++
++ if (CMDPARSER_USES_GGTT(eb->i915))
++ eb->batch_flags |= I915_DISPATCH_SECURE;
+
++ /* eb->batch_len unchanged */
+ out:
+ i915_gem_object_unpin_pages(shadow_batch_obj);
+ return vma;
+@@ -2351,6 +2405,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
+ struct drm_i915_gem_exec_object2 *exec,
+ struct drm_syncobj **fences)
+ {
++ struct drm_i915_private *i915 = to_i915(dev);
+ struct i915_execbuffer eb;
+ struct dma_fence *in_fence = NULL;
+ struct dma_fence *exec_fence = NULL;
+@@ -2362,7 +2417,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
+ BUILD_BUG_ON(__EXEC_OBJECT_INTERNAL_FLAGS &
+ ~__EXEC_OBJECT_UNKNOWN_FLAGS);
+
+- eb.i915 = to_i915(dev);
++ eb.i915 = i915;
+ eb.file = file;
+ eb.args = args;
+ if (DBG_FORCE_RELOC || !(args->flags & I915_EXEC_NO_RELOC))
+@@ -2382,8 +2437,15 @@ i915_gem_do_execbuffer(struct drm_device *dev,
+
+ eb.batch_flags = 0;
+ if (args->flags & I915_EXEC_SECURE) {
++ if (INTEL_GEN(i915) >= 11)
++ return -ENODEV;
++
++ /* Return -EPERM to trigger fallback code on old binaries. */
++ if (!HAS_SECURE_BATCHES(i915))
++ return -EPERM;
++
+ if (!drm_is_current_master(file) || !capable(CAP_SYS_ADMIN))
+- return -EPERM;
++ return -EPERM;
+
+ eb.batch_flags |= I915_DISPATCH_SECURE;
+ }
+@@ -2473,34 +2535,19 @@ i915_gem_do_execbuffer(struct drm_device *dev,
+ goto err_vma;
+ }
+
++ if (eb.batch_len == 0)
++ eb.batch_len = eb.batch->size - eb.batch_start_offset;
++
+ if (eb_use_cmdparser(&eb)) {
+ struct i915_vma *vma;
+
+- vma = eb_parse(&eb, drm_is_current_master(file));
++ vma = eb_parse(&eb);
+ if (IS_ERR(vma)) {
+ err = PTR_ERR(vma);
+ goto err_vma;
+ }
+-
+- if (vma) {
+- /*
+- * Batch parsed and accepted:
+- *
+- * Set the DISPATCH_SECURE bit to remove the NON_SECURE
+- * bit from MI_BATCH_BUFFER_START commands issued in
+- * the dispatch_execbuffer implementations. We
+- * specifically don't want that set on batches the
+- * command parser has accepted.
+- */
+- eb.batch_flags |= I915_DISPATCH_SECURE;
+- eb.batch_start_offset = 0;
+- eb.batch = vma;
+- }
+ }
+
+- if (eb.batch_len == 0)
+- eb.batch_len = eb.batch->size - eb.batch_start_offset;
+-
+ /*
+ * snb/ivb/vlv conflate the "batch in ppgtt" bit with the "non-secure
+ * batch" bit. Hence we need to pin secure batches into the global gtt.
+diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
+index 43e975a26016..3f6c58b68bb1 100644
+--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
++++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
+@@ -460,12 +460,13 @@ struct intel_engine_cs {
+
+ struct intel_engine_hangcheck hangcheck;
+
+-#define I915_ENGINE_NEEDS_CMD_PARSER BIT(0)
++#define I915_ENGINE_USING_CMD_PARSER BIT(0)
+ #define I915_ENGINE_SUPPORTS_STATS BIT(1)
+ #define I915_ENGINE_HAS_PREEMPTION BIT(2)
+ #define I915_ENGINE_HAS_SEMAPHORES BIT(3)
+ #define I915_ENGINE_NEEDS_BREADCRUMB_TASKLET BIT(4)
+ #define I915_ENGINE_IS_VIRTUAL BIT(5)
++#define I915_ENGINE_REQUIRES_CMD_PARSER BIT(7)
+ unsigned int flags;
+
+ /*
+@@ -526,9 +527,15 @@ struct intel_engine_cs {
+ };
+
+ static inline bool
+-intel_engine_needs_cmd_parser(const struct intel_engine_cs *engine)
++intel_engine_using_cmd_parser(const struct intel_engine_cs *engine)
+ {
+- return engine->flags & I915_ENGINE_NEEDS_CMD_PARSER;
++ return engine->flags & I915_ENGINE_USING_CMD_PARSER;
++}
++
++static inline bool
++intel_engine_requires_cmd_parser(const struct intel_engine_cs *engine)
++{
++ return engine->flags & I915_ENGINE_REQUIRES_CMD_PARSER;
+ }
+
+ static inline bool
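
The flag split above separates engines that may use the parser (the Gen7-style security case) from engines that must (the new Gen9 blitter case); eb_use_cmdparser() earlier in this patch combines the two predicates. A condensed sketch of that decision (flag values copied from the header above, everything else illustrative):

#include <stdio.h>

#define USING_CMD_PARSER    (1u << 0)
#define REQUIRES_CMD_PARSER (1u << 7)

/* A REQUIRES engine is always parsed; a USING engine is parsed only
 * when there is a batch length to scan.
 */
static int use_cmdparser(unsigned int engine_flags, unsigned int batch_len)
{
    return (engine_flags & REQUIRES_CMD_PARSER) ||
           ((engine_flags & USING_CMD_PARSER) && batch_len);
}

int main(void)
{
    printf("%d\n", use_cmdparser(REQUIRES_CMD_PARSER, 0)); /* 1 */
    printf("%d\n", use_cmdparser(USING_CMD_PARSER, 0));    /* 0 */
    return 0;
}
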
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.c b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
+index 9f8f7f54191f..3f6f42592708 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt_pm.c
++++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
+@@ -36,6 +36,9 @@ static int intel_gt_unpark(struct intel_wakeref *wf)
+ i915->gt.awake = intel_display_power_get(i915, POWER_DOMAIN_GT_IRQ);
+ GEM_BUG_ON(!i915->gt.awake);
+
++ if (NEEDS_RC6_CTX_CORRUPTION_WA(i915))
++ intel_uncore_forcewake_get(&i915->uncore, FORCEWAKE_ALL);
++
+ intel_enable_gt_powersave(i915);
+
+ i915_update_gfx_val(i915);
+@@ -70,6 +73,11 @@ static int intel_gt_park(struct intel_wakeref *wf)
+ if (INTEL_GEN(i915) >= 6)
+ gen6_rps_idle(i915);
+
++ if (NEEDS_RC6_CTX_CORRUPTION_WA(i915)) {
++ intel_rc6_ctx_wa_check(i915);
++ intel_uncore_forcewake_put(&i915->uncore, FORCEWAKE_ALL);
++ }
++
+ GEM_BUG_ON(!wakeref);
+ intel_display_power_put(i915, POWER_DOMAIN_GT_IRQ, wakeref);
+
+diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
+index a28bcd2d7c09..a412e346b29c 100644
+--- a/drivers/gpu/drm/i915/i915_cmd_parser.c
++++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
+@@ -52,13 +52,11 @@
+ * granting userspace undue privileges. There are three categories of privilege.
+ *
+ * First, commands which are explicitly defined as privileged or which should
+- * only be used by the kernel driver. The parser generally rejects such
+- * commands, though it may allow some from the drm master process.
++ * only be used by the kernel driver. The parser rejects such commands
+ *
+ * Second, commands which access registers. To support correct/enhanced
+ * userspace functionality, particularly certain OpenGL extensions, the parser
+- * provides a whitelist of registers which userspace may safely access (for both
+- * normal and drm master processes).
++ * provides a whitelist of registers which userspace may safely access
+ *
+ * Third, commands which access privileged memory (i.e. GGTT, HWS page, etc).
+ * The parser always rejects such commands.
+@@ -83,9 +81,9 @@
+ * in the per-engine command tables.
+ *
+ * Other command table entries map fairly directly to high level categories
+- * mentioned above: rejected, master-only, register whitelist. The parser
+- * implements a number of checks, including the privileged memory checks, via a
+- * general bitmasking mechanism.
++ * mentioned above: rejected, register whitelist. The parser implements a number
++ * of checks, including the privileged memory checks, via a general bitmasking
++ * mechanism.
+ */
+
+ /*
+@@ -103,8 +101,6 @@ struct drm_i915_cmd_descriptor {
+ * CMD_DESC_REJECT: The command is never allowed
+ * CMD_DESC_REGISTER: The command should be checked against the
+ * register whitelist for the appropriate ring
+- * CMD_DESC_MASTER: The command is allowed if the submitting process
+- * is the DRM master
+ */
+ u32 flags;
+ #define CMD_DESC_FIXED (1<<0)
+@@ -112,7 +108,6 @@ struct drm_i915_cmd_descriptor {
+ #define CMD_DESC_REJECT (1<<2)
+ #define CMD_DESC_REGISTER (1<<3)
+ #define CMD_DESC_BITMASK (1<<4)
+-#define CMD_DESC_MASTER (1<<5)
+
+ /*
+ * The command's unique identification bits and the bitmask to get them.
+@@ -193,7 +188,7 @@ struct drm_i915_cmd_table {
+ #define CMD(op, opm, f, lm, fl, ...) \
+ { \
+ .flags = (fl) | ((f) ? CMD_DESC_FIXED : 0), \
+- .cmd = { (op), ~0u << (opm) }, \
++ .cmd = { (op & ~0u << (opm)), ~0u << (opm) }, \
+ .length = { (lm) }, \
+ __VA_ARGS__ \
+ }
+@@ -208,14 +203,13 @@ struct drm_i915_cmd_table {
+ #define R CMD_DESC_REJECT
+ #define W CMD_DESC_REGISTER
+ #define B CMD_DESC_BITMASK
+-#define M CMD_DESC_MASTER
+
+ /* Command Mask Fixed Len Action
+ ---------------------------------------------------------- */
+-static const struct drm_i915_cmd_descriptor common_cmds[] = {
++static const struct drm_i915_cmd_descriptor gen7_common_cmds[] = {
+ CMD( MI_NOOP, SMI, F, 1, S ),
+ CMD( MI_USER_INTERRUPT, SMI, F, 1, R ),
+- CMD( MI_WAIT_FOR_EVENT, SMI, F, 1, M ),
++ CMD( MI_WAIT_FOR_EVENT, SMI, F, 1, R ),
+ CMD( MI_ARB_CHECK, SMI, F, 1, S ),
+ CMD( MI_REPORT_HEAD, SMI, F, 1, S ),
+ CMD( MI_SUSPEND_FLUSH, SMI, F, 1, S ),
+@@ -245,7 +239,7 @@ static const struct drm_i915_cmd_descriptor common_cmds[] = {
+ CMD( MI_BATCH_BUFFER_START, SMI, !F, 0xFF, S ),
+ };
+
+-static const struct drm_i915_cmd_descriptor render_cmds[] = {
++static const struct drm_i915_cmd_descriptor gen7_render_cmds[] = {
+ CMD( MI_FLUSH, SMI, F, 1, S ),
+ CMD( MI_ARB_ON_OFF, SMI, F, 1, R ),
+ CMD( MI_PREDICATE, SMI, F, 1, S ),
+@@ -312,7 +306,7 @@ static const struct drm_i915_cmd_descriptor hsw_render_cmds[] = {
+ CMD( MI_URB_ATOMIC_ALLOC, SMI, F, 1, S ),
+ CMD( MI_SET_APPID, SMI, F, 1, S ),
+ CMD( MI_RS_CONTEXT, SMI, F, 1, S ),
+- CMD( MI_LOAD_SCAN_LINES_INCL, SMI, !F, 0x3F, M ),
++ CMD( MI_LOAD_SCAN_LINES_INCL, SMI, !F, 0x3F, R ),
+ CMD( MI_LOAD_SCAN_LINES_EXCL, SMI, !F, 0x3F, R ),
+ CMD( MI_LOAD_REGISTER_REG, SMI, !F, 0xFF, W,
+ .reg = { .offset = 1, .mask = 0x007FFFFC, .step = 1 } ),
+@@ -329,7 +323,7 @@ static const struct drm_i915_cmd_descriptor hsw_render_cmds[] = {
+ CMD( GFX_OP_3DSTATE_BINDING_TABLE_EDIT_PS, S3D, !F, 0x1FF, S ),
+ };
+
+-static const struct drm_i915_cmd_descriptor video_cmds[] = {
++static const struct drm_i915_cmd_descriptor gen7_video_cmds[] = {
+ CMD( MI_ARB_ON_OFF, SMI, F, 1, R ),
+ CMD( MI_SET_APPID, SMI, F, 1, S ),
+ CMD( MI_STORE_DWORD_IMM, SMI, !F, 0xFF, B,
+@@ -373,7 +367,7 @@ static const struct drm_i915_cmd_descriptor video_cmds[] = {
+ CMD( MFX_WAIT, SMFX, F, 1, S ),
+ };
+
+-static const struct drm_i915_cmd_descriptor vecs_cmds[] = {
++static const struct drm_i915_cmd_descriptor gen7_vecs_cmds[] = {
+ CMD( MI_ARB_ON_OFF, SMI, F, 1, R ),
+ CMD( MI_SET_APPID, SMI, F, 1, S ),
+ CMD( MI_STORE_DWORD_IMM, SMI, !F, 0xFF, B,
+@@ -411,7 +405,7 @@ static const struct drm_i915_cmd_descriptor vecs_cmds[] = {
+ }}, ),
+ };
+
+-static const struct drm_i915_cmd_descriptor blt_cmds[] = {
++static const struct drm_i915_cmd_descriptor gen7_blt_cmds[] = {
+ CMD( MI_DISPLAY_FLIP, SMI, !F, 0xFF, R ),
+ CMD( MI_STORE_DWORD_IMM, SMI, !F, 0x3FF, B,
+ .bits = {{
+@@ -445,10 +439,64 @@ static const struct drm_i915_cmd_descriptor blt_cmds[] = {
+ };
+
+ static const struct drm_i915_cmd_descriptor hsw_blt_cmds[] = {
+- CMD( MI_LOAD_SCAN_LINES_INCL, SMI, !F, 0x3F, M ),
++ CMD( MI_LOAD_SCAN_LINES_INCL, SMI, !F, 0x3F, R ),
+ CMD( MI_LOAD_SCAN_LINES_EXCL, SMI, !F, 0x3F, R ),
+ };
+
++/*
++ * For Gen9 we can still rely on the h/w to enforce cmd security, and only
++ * need to re-enforce the register access checks. We therefore only need to
++ * teach the cmdparser how to find the end of each command, and identify
++ * register accesses. The table doesn't need to reject any commands, and so
++ * the only commands listed here are:
++ * 1) Those that touch registers
++ * 2) Those that do not have the default 8-bit length
++ *
++ * Note that the default MI length mask chosen for this table is 0xFF, not
++ * the 0x3F used on older devices. This is because the vast majority of MI
++ * cmds on Gen9 use a standard 8-bit Length field.
++ * All the Gen9 blitter instructions are standard 0xFF length mask, and
++ * none allow access to non-general registers, so in fact no BLT cmds are
++ * included in the table at all.
++ *
++ */
++static const struct drm_i915_cmd_descriptor gen9_blt_cmds[] = {
++ CMD( MI_NOOP, SMI, F, 1, S ),
++ CMD( MI_USER_INTERRUPT, SMI, F, 1, S ),
++ CMD( MI_WAIT_FOR_EVENT, SMI, F, 1, S ),
++ CMD( MI_FLUSH, SMI, F, 1, S ),
++ CMD( MI_ARB_CHECK, SMI, F, 1, S ),
++ CMD( MI_REPORT_HEAD, SMI, F, 1, S ),
++ CMD( MI_ARB_ON_OFF, SMI, F, 1, S ),
++ CMD( MI_SUSPEND_FLUSH, SMI, F, 1, S ),
++ CMD( MI_LOAD_SCAN_LINES_INCL, SMI, !F, 0x3F, S ),
++ CMD( MI_LOAD_SCAN_LINES_EXCL, SMI, !F, 0x3F, S ),
++ CMD( MI_STORE_DWORD_IMM, SMI, !F, 0x3FF, S ),
++ CMD( MI_LOAD_REGISTER_IMM(1), SMI, !F, 0xFF, W,
++ .reg = { .offset = 1, .mask = 0x007FFFFC, .step = 2 } ),
++ CMD( MI_UPDATE_GTT, SMI, !F, 0x3FF, S ),
++ CMD( MI_STORE_REGISTER_MEM_GEN8, SMI, F, 4, W,
++ .reg = { .offset = 1, .mask = 0x007FFFFC } ),
++ CMD( MI_FLUSH_DW, SMI, !F, 0x3F, S ),
++ CMD( MI_LOAD_REGISTER_MEM_GEN8, SMI, F, 4, W,
++ .reg = { .offset = 1, .mask = 0x007FFFFC } ),
++ CMD( MI_LOAD_REGISTER_REG, SMI, !F, 0xFF, W,
++ .reg = { .offset = 1, .mask = 0x007FFFFC, .step = 1 } ),
++
++ /*
++ * We allow BB_START but apply further checks. We just sanitize the
++ * basic fields here.
++ */
++#define MI_BB_START_OPERAND_MASK GENMASK(SMI-1, 0)
++#define MI_BB_START_OPERAND_EXPECT (MI_BATCH_PPGTT_HSW | 1)
++ CMD( MI_BATCH_BUFFER_START_GEN8, SMI, !F, 0xFF, B,
++ .bits = {{
++ .offset = 0,
++ .mask = MI_BB_START_OPERAND_MASK,
++ .expected = MI_BB_START_OPERAND_EXPECT,
++ }}, ),
++};
++
+ static const struct drm_i915_cmd_descriptor noop_desc =
+ CMD(MI_NOOP, SMI, F, 1, S);
+
+@@ -462,40 +510,44 @@ static const struct drm_i915_cmd_descriptor noop_desc =
+ #undef R
+ #undef W
+ #undef B
+-#undef M
+
+-static const struct drm_i915_cmd_table gen7_render_cmds[] = {
+- { common_cmds, ARRAY_SIZE(common_cmds) },
+- { render_cmds, ARRAY_SIZE(render_cmds) },
++static const struct drm_i915_cmd_table gen7_render_cmd_table[] = {
++ { gen7_common_cmds, ARRAY_SIZE(gen7_common_cmds) },
++ { gen7_render_cmds, ARRAY_SIZE(gen7_render_cmds) },
+ };
+
+-static const struct drm_i915_cmd_table hsw_render_ring_cmds[] = {
+- { common_cmds, ARRAY_SIZE(common_cmds) },
+- { render_cmds, ARRAY_SIZE(render_cmds) },
++static const struct drm_i915_cmd_table hsw_render_ring_cmd_table[] = {
++ { gen7_common_cmds, ARRAY_SIZE(gen7_common_cmds) },
++ { gen7_render_cmds, ARRAY_SIZE(gen7_render_cmds) },
+ { hsw_render_cmds, ARRAY_SIZE(hsw_render_cmds) },
+ };
+
+-static const struct drm_i915_cmd_table gen7_video_cmds[] = {
+- { common_cmds, ARRAY_SIZE(common_cmds) },
+- { video_cmds, ARRAY_SIZE(video_cmds) },
++static const struct drm_i915_cmd_table gen7_video_cmd_table[] = {
++ { gen7_common_cmds, ARRAY_SIZE(gen7_common_cmds) },
++ { gen7_video_cmds, ARRAY_SIZE(gen7_video_cmds) },
+ };
+
+-static const struct drm_i915_cmd_table hsw_vebox_cmds[] = {
+- { common_cmds, ARRAY_SIZE(common_cmds) },
+- { vecs_cmds, ARRAY_SIZE(vecs_cmds) },
++static const struct drm_i915_cmd_table hsw_vebox_cmd_table[] = {
++ { gen7_common_cmds, ARRAY_SIZE(gen7_common_cmds) },
++ { gen7_vecs_cmds, ARRAY_SIZE(gen7_vecs_cmds) },
+ };
+
+-static const struct drm_i915_cmd_table gen7_blt_cmds[] = {
+- { common_cmds, ARRAY_SIZE(common_cmds) },
+- { blt_cmds, ARRAY_SIZE(blt_cmds) },
++static const struct drm_i915_cmd_table gen7_blt_cmd_table[] = {
++ { gen7_common_cmds, ARRAY_SIZE(gen7_common_cmds) },
++ { gen7_blt_cmds, ARRAY_SIZE(gen7_blt_cmds) },
+ };
+
+-static const struct drm_i915_cmd_table hsw_blt_ring_cmds[] = {
+- { common_cmds, ARRAY_SIZE(common_cmds) },
+- { blt_cmds, ARRAY_SIZE(blt_cmds) },
++static const struct drm_i915_cmd_table hsw_blt_ring_cmd_table[] = {
++ { gen7_common_cmds, ARRAY_SIZE(gen7_common_cmds) },
++ { gen7_blt_cmds, ARRAY_SIZE(gen7_blt_cmds) },
+ { hsw_blt_cmds, ARRAY_SIZE(hsw_blt_cmds) },
+ };
+
++static const struct drm_i915_cmd_table gen9_blt_cmd_table[] = {
++ { gen9_blt_cmds, ARRAY_SIZE(gen9_blt_cmds) },
++};
++
++
+ /*
+ * Register whitelists, sorted by increasing register offset.
+ */
+@@ -611,17 +663,27 @@ static const struct drm_i915_reg_descriptor gen7_blt_regs[] = {
+ REG64_IDX(RING_TIMESTAMP, BLT_RING_BASE),
+ };
+
+-static const struct drm_i915_reg_descriptor ivb_master_regs[] = {
+- REG32(FORCEWAKE_MT),
+- REG32(DERRMR),
+- REG32(GEN7_PIPE_DE_LOAD_SL(PIPE_A)),
+- REG32(GEN7_PIPE_DE_LOAD_SL(PIPE_B)),
+- REG32(GEN7_PIPE_DE_LOAD_SL(PIPE_C)),
+-};
+-
+-static const struct drm_i915_reg_descriptor hsw_master_regs[] = {
+- REG32(FORCEWAKE_MT),
+- REG32(DERRMR),
++static const struct drm_i915_reg_descriptor gen9_blt_regs[] = {
++ REG64_IDX(RING_TIMESTAMP, RENDER_RING_BASE),
++ REG64_IDX(RING_TIMESTAMP, BSD_RING_BASE),
++ REG32(BCS_SWCTRL),
++ REG64_IDX(RING_TIMESTAMP, BLT_RING_BASE),
++ REG64_IDX(BCS_GPR, 0),
++ REG64_IDX(BCS_GPR, 1),
++ REG64_IDX(BCS_GPR, 2),
++ REG64_IDX(BCS_GPR, 3),
++ REG64_IDX(BCS_GPR, 4),
++ REG64_IDX(BCS_GPR, 5),
++ REG64_IDX(BCS_GPR, 6),
++ REG64_IDX(BCS_GPR, 7),
++ REG64_IDX(BCS_GPR, 8),
++ REG64_IDX(BCS_GPR, 9),
++ REG64_IDX(BCS_GPR, 10),
++ REG64_IDX(BCS_GPR, 11),
++ REG64_IDX(BCS_GPR, 12),
++ REG64_IDX(BCS_GPR, 13),
++ REG64_IDX(BCS_GPR, 14),
++ REG64_IDX(BCS_GPR, 15),
+ };
+
+ #undef REG64
+@@ -630,28 +692,27 @@ static const struct drm_i915_reg_descriptor hsw_master_regs[] = {
+ struct drm_i915_reg_table {
+ const struct drm_i915_reg_descriptor *regs;
+ int num_regs;
+- bool master;
+ };
+
+ static const struct drm_i915_reg_table ivb_render_reg_tables[] = {
+- { gen7_render_regs, ARRAY_SIZE(gen7_render_regs), false },
+- { ivb_master_regs, ARRAY_SIZE(ivb_master_regs), true },
++ { gen7_render_regs, ARRAY_SIZE(gen7_render_regs) },
+ };
+
+ static const struct drm_i915_reg_table ivb_blt_reg_tables[] = {
+- { gen7_blt_regs, ARRAY_SIZE(gen7_blt_regs), false },
+- { ivb_master_regs, ARRAY_SIZE(ivb_master_regs), true },
++ { gen7_blt_regs, ARRAY_SIZE(gen7_blt_regs) },
+ };
+
+ static const struct drm_i915_reg_table hsw_render_reg_tables[] = {
+- { gen7_render_regs, ARRAY_SIZE(gen7_render_regs), false },
+- { hsw_render_regs, ARRAY_SIZE(hsw_render_regs), false },
+- { hsw_master_regs, ARRAY_SIZE(hsw_master_regs), true },
++ { gen7_render_regs, ARRAY_SIZE(gen7_render_regs) },
++ { hsw_render_regs, ARRAY_SIZE(hsw_render_regs) },
+ };
+
+ static const struct drm_i915_reg_table hsw_blt_reg_tables[] = {
+- { gen7_blt_regs, ARRAY_SIZE(gen7_blt_regs), false },
+- { hsw_master_regs, ARRAY_SIZE(hsw_master_regs), true },
++ { gen7_blt_regs, ARRAY_SIZE(gen7_blt_regs) },
++};
++
++static const struct drm_i915_reg_table gen9_blt_reg_tables[] = {
++ { gen9_blt_regs, ARRAY_SIZE(gen9_blt_regs) },
+ };
+
+ static u32 gen7_render_get_cmd_length_mask(u32 cmd_header)
+@@ -709,6 +770,17 @@ static u32 gen7_blt_get_cmd_length_mask(u32 cmd_header)
+ return 0;
+ }
+
++static u32 gen9_blt_get_cmd_length_mask(u32 cmd_header)
++{
++ u32 client = cmd_header >> INSTR_CLIENT_SHIFT;
++
++ if (client == INSTR_MI_CLIENT || client == INSTR_BC_CLIENT)
++ return 0xFF;
++
++ DRM_DEBUG_DRIVER("CMD: Abnormal blt cmd length! 0x%08X\n", cmd_header);
++ return 0;
++}
++
+ static bool validate_cmds_sorted(const struct intel_engine_cs *engine,
+ const struct drm_i915_cmd_table *cmd_tables,
+ int cmd_table_count)
+@@ -866,18 +938,19 @@ void intel_engine_init_cmd_parser(struct intel_engine_cs *engine)
+ int cmd_table_count;
+ int ret;
+
+- if (!IS_GEN(engine->i915, 7))
++ if (!IS_GEN(engine->i915, 7) && !(IS_GEN(engine->i915, 9) &&
++ engine->class == COPY_ENGINE_CLASS))
+ return;
+
+ switch (engine->class) {
+ case RENDER_CLASS:
+ if (IS_HASWELL(engine->i915)) {
+- cmd_tables = hsw_render_ring_cmds;
++ cmd_tables = hsw_render_ring_cmd_table;
+ cmd_table_count =
+- ARRAY_SIZE(hsw_render_ring_cmds);
++ ARRAY_SIZE(hsw_render_ring_cmd_table);
+ } else {
+- cmd_tables = gen7_render_cmds;
+- cmd_table_count = ARRAY_SIZE(gen7_render_cmds);
++ cmd_tables = gen7_render_cmd_table;
++ cmd_table_count = ARRAY_SIZE(gen7_render_cmd_table);
+ }
+
+ if (IS_HASWELL(engine->i915)) {
+@@ -887,36 +960,46 @@ void intel_engine_init_cmd_parser(struct intel_engine_cs *engine)
+ engine->reg_tables = ivb_render_reg_tables;
+ engine->reg_table_count = ARRAY_SIZE(ivb_render_reg_tables);
+ }
+-
+ engine->get_cmd_length_mask = gen7_render_get_cmd_length_mask;
+ break;
+ case VIDEO_DECODE_CLASS:
+- cmd_tables = gen7_video_cmds;
+- cmd_table_count = ARRAY_SIZE(gen7_video_cmds);
++ cmd_tables = gen7_video_cmd_table;
++ cmd_table_count = ARRAY_SIZE(gen7_video_cmd_table);
+ engine->get_cmd_length_mask = gen7_bsd_get_cmd_length_mask;
+ break;
+ case COPY_ENGINE_CLASS:
+- if (IS_HASWELL(engine->i915)) {
+- cmd_tables = hsw_blt_ring_cmds;
+- cmd_table_count = ARRAY_SIZE(hsw_blt_ring_cmds);
++ engine->get_cmd_length_mask = gen7_blt_get_cmd_length_mask;
++ if (IS_GEN(engine->i915, 9)) {
++ cmd_tables = gen9_blt_cmd_table;
++ cmd_table_count = ARRAY_SIZE(gen9_blt_cmd_table);
++ engine->get_cmd_length_mask =
++ gen9_blt_get_cmd_length_mask;
++
++ /* BCS Engine unsafe without parser */
++ engine->flags |= I915_ENGINE_REQUIRES_CMD_PARSER;
++ } else if (IS_HASWELL(engine->i915)) {
++ cmd_tables = hsw_blt_ring_cmd_table;
++ cmd_table_count = ARRAY_SIZE(hsw_blt_ring_cmd_table);
+ } else {
+- cmd_tables = gen7_blt_cmds;
+- cmd_table_count = ARRAY_SIZE(gen7_blt_cmds);
++ cmd_tables = gen7_blt_cmd_table;
++ cmd_table_count = ARRAY_SIZE(gen7_blt_cmd_table);
+ }
+
+- if (IS_HASWELL(engine->i915)) {
++ if (IS_GEN(engine->i915, 9)) {
++ engine->reg_tables = gen9_blt_reg_tables;
++ engine->reg_table_count =
++ ARRAY_SIZE(gen9_blt_reg_tables);
++ } else if (IS_HASWELL(engine->i915)) {
+ engine->reg_tables = hsw_blt_reg_tables;
+ engine->reg_table_count = ARRAY_SIZE(hsw_blt_reg_tables);
+ } else {
+ engine->reg_tables = ivb_blt_reg_tables;
+ engine->reg_table_count = ARRAY_SIZE(ivb_blt_reg_tables);
+ }
+-
+- engine->get_cmd_length_mask = gen7_blt_get_cmd_length_mask;
+ break;
+ case VIDEO_ENHANCEMENT_CLASS:
+- cmd_tables = hsw_vebox_cmds;
+- cmd_table_count = ARRAY_SIZE(hsw_vebox_cmds);
++ cmd_tables = hsw_vebox_cmd_table;
++ cmd_table_count = ARRAY_SIZE(hsw_vebox_cmd_table);
+ /* VECS can use the same length_mask function as VCS */
+ engine->get_cmd_length_mask = gen7_bsd_get_cmd_length_mask;
+ break;
+@@ -942,7 +1025,7 @@ void intel_engine_init_cmd_parser(struct intel_engine_cs *engine)
+ return;
+ }
+
+- engine->flags |= I915_ENGINE_NEEDS_CMD_PARSER;
++ engine->flags |= I915_ENGINE_USING_CMD_PARSER;
+ }
+
+ /**
+@@ -954,7 +1037,7 @@ void intel_engine_init_cmd_parser(struct intel_engine_cs *engine)
+ */
+ void intel_engine_cleanup_cmd_parser(struct intel_engine_cs *engine)
+ {
+- if (!intel_engine_needs_cmd_parser(engine))
++ if (!intel_engine_using_cmd_parser(engine))
+ return;
+
+ fini_hash_table(engine);
+@@ -1028,22 +1111,16 @@ __find_reg(const struct drm_i915_reg_descriptor *table, int count, u32 addr)
+ }
+
+ static const struct drm_i915_reg_descriptor *
+-find_reg(const struct intel_engine_cs *engine, bool is_master, u32 addr)
++find_reg(const struct intel_engine_cs *engine, u32 addr)
+ {
+ const struct drm_i915_reg_table *table = engine->reg_tables;
++ const struct drm_i915_reg_descriptor *reg = NULL;
+ int count = engine->reg_table_count;
+
+- for (; count > 0; ++table, --count) {
+- if (!table->master || is_master) {
+- const struct drm_i915_reg_descriptor *reg;
++ for (; !reg && (count > 0); ++table, --count)
++ reg = __find_reg(table->regs, table->num_regs, addr);
+
+- reg = __find_reg(table->regs, table->num_regs, addr);
+- if (reg != NULL)
+- return reg;
+- }
+- }
+-
+- return NULL;
++ return reg;
+ }
+
+ /* Returns a vmap'd pointer to dst_obj, which the caller must unmap */
+@@ -1127,8 +1204,7 @@ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
+
+ static bool check_cmd(const struct intel_engine_cs *engine,
+ const struct drm_i915_cmd_descriptor *desc,
+- const u32 *cmd, u32 length,
+- const bool is_master)
++ const u32 *cmd, u32 length)
+ {
+ if (desc->flags & CMD_DESC_SKIP)
+ return true;
+@@ -1138,12 +1214,6 @@ static bool check_cmd(const struct intel_engine_cs *engine,
+ return false;
+ }
+
+- if ((desc->flags & CMD_DESC_MASTER) && !is_master) {
+- DRM_DEBUG_DRIVER("CMD: Rejected master-only command: 0x%08X\n",
+- *cmd);
+- return false;
+- }
+-
+ if (desc->flags & CMD_DESC_REGISTER) {
+ /*
+ * Get the distance between individual register offset
+@@ -1157,7 +1227,7 @@ static bool check_cmd(const struct intel_engine_cs *engine,
+ offset += step) {
+ const u32 reg_addr = cmd[offset] & desc->reg.mask;
+ const struct drm_i915_reg_descriptor *reg =
+- find_reg(engine, is_master, reg_addr);
++ find_reg(engine, reg_addr);
+
+ if (!reg) {
+ DRM_DEBUG_DRIVER("CMD: Rejected register 0x%08X in command: 0x%08X (%s)\n",
+@@ -1235,16 +1305,112 @@ static bool check_cmd(const struct intel_engine_cs *engine,
+ return true;
+ }
+
++static int check_bbstart(const struct i915_gem_context *ctx,
++ u32 *cmd, u32 offset, u32 length,
++ u32 batch_len,
++ u64 batch_start,
++ u64 shadow_batch_start)
++{
++ u64 jump_offset, jump_target;
++ u32 target_cmd_offset, target_cmd_index;
++
++ /* For igt compatibility on older platforms */
++ if (CMDPARSER_USES_GGTT(ctx->i915)) {
++ DRM_DEBUG("CMD: Rejecting BB_START for ggtt based submission\n");
++ return -EACCES;
++ }
++
++ if (length != 3) {
++ DRM_DEBUG("CMD: Recursive BB_START with bad length(%u)\n",
++ length);
++ return -EINVAL;
++ }
++
++ jump_target = *(u64*)(cmd+1);
++ jump_offset = jump_target - batch_start;
++
++ /*
++ * Any underflow of jump_target is guaranteed to be outside the range
++ * of a u32, so >= test catches both too large and too small
++ */
++ if (jump_offset >= batch_len) {
++ DRM_DEBUG("CMD: BB_START to 0x%llx jumps out of BB\n",
++ jump_target);
++ return -EINVAL;
++ }
++
++ /*
++ * This cannot overflow a u32 because we already checked jump_offset
++ * is within the BB, and the batch_len is a u32
++ */
++ target_cmd_offset = lower_32_bits(jump_offset);
++ target_cmd_index = target_cmd_offset / sizeof(u32);
++
++ *(u64*)(cmd + 1) = shadow_batch_start + target_cmd_offset;
++
++ if (target_cmd_index == offset)
++ return 0;
++
++ if (ctx->jump_whitelist_cmds <= target_cmd_index) {
++ DRM_DEBUG("CMD: Rejecting BB_START - truncated whitelist array\n");
++ return -EINVAL;
++ } else if (!test_bit(target_cmd_index, ctx->jump_whitelist)) {
++ DRM_DEBUG("CMD: BB_START to 0x%llx not a previously executed cmd\n",
++ jump_target);
++ return -EINVAL;
++ }
++
++ return 0;
++}
++
++static void init_whitelist(struct i915_gem_context *ctx, u32 batch_len)
++{
++ const u32 batch_cmds = DIV_ROUND_UP(batch_len, sizeof(u32));
++ const u32 exact_size = BITS_TO_LONGS(batch_cmds);
++ u32 next_size = BITS_TO_LONGS(roundup_pow_of_two(batch_cmds));
++ unsigned long *next_whitelist;
++
++ if (CMDPARSER_USES_GGTT(ctx->i915))
++ return;
++
++ if (batch_cmds <= ctx->jump_whitelist_cmds) {
++ bitmap_zero(ctx->jump_whitelist, batch_cmds);
++ return;
++ }
++
++again:
++ next_whitelist = kcalloc(next_size, sizeof(long), GFP_KERNEL);
++ if (next_whitelist) {
++ kfree(ctx->jump_whitelist);
++ ctx->jump_whitelist = next_whitelist;
++ ctx->jump_whitelist_cmds =
++ next_size * BITS_PER_BYTE * sizeof(long);
++ return;
++ }
++
++ if (next_size > exact_size) {
++ next_size = exact_size;
++ goto again;
++ }
++
++ DRM_DEBUG("CMD: Failed to extend whitelist. BB_START may be disallowed\n");
++ bitmap_zero(ctx->jump_whitelist, ctx->jump_whitelist_cmds);
++
++ return;
++}
++
+ #define LENGTH_BIAS 2
+
+ /**
+ * i915_parse_cmds() - parse a submitted batch buffer for privilege violations
++ * @ctx: the context in which the batch is to execute
+ * @engine: the engine on which the batch is to execute
+ * @batch_obj: the batch buffer in question
+- * @shadow_batch_obj: copy of the batch buffer in question
++ * @batch_start: Canonical base address of batch
+ * @batch_start_offset: byte offset in the batch at which execution starts
+ * @batch_len: length of the commands in batch_obj
+- * @is_master: is the submitting process the drm master?
++ * @shadow_batch_obj: copy of the batch buffer in question
++ * @shadow_batch_start: Canonical base address of shadow_batch_obj
+ *
+ * Parses the specified batch buffer looking for privilege violations as
+ * described in the overview.
+@@ -1252,14 +1418,17 @@ static bool check_cmd(const struct intel_engine_cs *engine,
+ * Return: non-zero if the parser finds violations or otherwise fails; -EACCES
+ * if the batch appears legal but should use hardware parsing
+ */
+-int intel_engine_cmd_parser(struct intel_engine_cs *engine,
++
++int intel_engine_cmd_parser(struct i915_gem_context *ctx,
++ struct intel_engine_cs *engine,
+ struct drm_i915_gem_object *batch_obj,
+- struct drm_i915_gem_object *shadow_batch_obj,
++ u64 batch_start,
+ u32 batch_start_offset,
+ u32 batch_len,
+- bool is_master)
++ struct drm_i915_gem_object *shadow_batch_obj,
++ u64 shadow_batch_start)
+ {
+- u32 *cmd, *batch_end;
++ u32 *cmd, *batch_end, offset = 0;
+ struct drm_i915_cmd_descriptor default_desc = noop_desc;
+ const struct drm_i915_cmd_descriptor *desc = &default_desc;
+ bool needs_clflush_after = false;
+@@ -1273,6 +1442,8 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
+ return PTR_ERR(cmd);
+ }
+
++ init_whitelist(ctx, batch_len);
++
+ /*
+ * We use the batch length as size because the shadow object is as
+ * large or larger and copy_batch() will write MI_NOPs to the extra
+@@ -1282,31 +1453,15 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
+ do {
+ u32 length;
+
+- if (*cmd == MI_BATCH_BUFFER_END) {
+- if (needs_clflush_after) {
+- void *ptr = page_mask_bits(shadow_batch_obj->mm.mapping);
+- drm_clflush_virt_range(ptr,
+- (void *)(cmd + 1) - ptr);
+- }
++ if (*cmd == MI_BATCH_BUFFER_END)
+ break;
+- }
+
+ desc = find_cmd(engine, *cmd, desc, &default_desc);
+ if (!desc) {
+ DRM_DEBUG_DRIVER("CMD: Unrecognized command: 0x%08X\n",
+ *cmd);
+ ret = -EINVAL;
+- break;
+- }
+-
+- /*
+- * If the batch buffer contains a chained batch, return an
+- * error that tells the caller to abort and dispatch the
+- * workload as a non-secure batch.
+- */
+- if (desc->cmd.value == MI_BATCH_BUFFER_START) {
+- ret = -EACCES;
+- break;
++ goto err;
+ }
+
+ if (desc->flags & CMD_DESC_FIXED)
+@@ -1320,22 +1475,43 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
+ length,
+ batch_end - cmd);
+ ret = -EINVAL;
+- break;
++ goto err;
+ }
+
+- if (!check_cmd(engine, desc, cmd, length, is_master)) {
++ if (!check_cmd(engine, desc, cmd, length)) {
+ ret = -EACCES;
++ goto err;
++ }
++
++ if (desc->cmd.value == MI_BATCH_BUFFER_START) {
++ ret = check_bbstart(ctx, cmd, offset, length,
++ batch_len, batch_start,
++ shadow_batch_start);
++
++ if (ret)
++ goto err;
+ break;
+ }
+
++ if (ctx->jump_whitelist_cmds > offset)
++ set_bit(offset, ctx->jump_whitelist);
++
+ cmd += length;
++ offset += length;
+ if (cmd >= batch_end) {
+ DRM_DEBUG_DRIVER("CMD: Got to the end of the buffer w/o a BBE cmd!\n");
+ ret = -EINVAL;
+- break;
++ goto err;
+ }
+ } while (1);
+
++ if (needs_clflush_after) {
++ void *ptr = page_mask_bits(shadow_batch_obj->mm.mapping);
++
++ drm_clflush_virt_range(ptr, (void *)(cmd + 1) - ptr);
++ }
++
++err:
+ i915_gem_object_unpin_map(shadow_batch_obj);
+ return ret;
+ }
+@@ -1357,7 +1533,7 @@ int i915_cmd_parser_get_version(struct drm_i915_private *dev_priv)
+
+ /* If the command parser is not enabled, report 0 - unsupported */
+ for_each_engine(engine, dev_priv, id) {
+- if (intel_engine_needs_cmd_parser(engine)) {
++ if (intel_engine_using_cmd_parser(engine)) {
+ active = true;
+ break;
+ }
+@@ -1382,6 +1558,7 @@ int i915_cmd_parser_get_version(struct drm_i915_private *dev_priv)
+ * the parser enabled.
+ * 9. Don't whitelist or handle oacontrol specially, as ownership
+ * for oacontrol state is moving to i915-perf.
++ * 10. Support for Gen9 BCS Parsing
+ */
+- return 9;
++ return 10;
+ }
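A note on the BB_START validation added in this patch: the bounds test relies on unsigned wrap-around, so any jump target below batch_start underflows the u64 subtraction and lands far beyond any 32-bit batch length, letting a single comparison reject jumps in both directions. A minimal user-space sketch of that check (validate_jump and the addresses are illustrative, not driver symbols):

#include <stdint.h>
#include <stdio.h>

/*
 * Mirrors the check_bbstart() bounds test above: jump_offset
 * underflows for targets below batch_start, so ">= batch_len"
 * rejects both too-small and too-large targets at once.
 */
static int validate_jump(uint64_t batch_start, uint32_t batch_len,
			 uint64_t jump_target)
{
	uint64_t jump_offset = jump_target - batch_start;

	if (jump_offset >= batch_len)
		return -1;	/* outside the batch: reject */
	return 0;
}

int main(void)
{
	/* hypothetical 4 KiB batch mapped at 0x100000 */
	printf("in-batch: %d\n", validate_jump(0x100000, 4096, 0x100040));
	printf("below:    %d\n", validate_jump(0x100000, 4096, 0x0ff000));
	printf("above:    %d\n", validate_jump(0x100000, 4096, 0x101000));
	return 0;
}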
+diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
+index 5b895df09ebf..b6d51514cf9c 100644
+--- a/drivers/gpu/drm/i915/i915_drv.c
++++ b/drivers/gpu/drm/i915/i915_drv.c
+@@ -387,7 +387,7 @@ static int i915_getparam_ioctl(struct drm_device *dev, void *data,
+ value = !!(dev_priv->caps.scheduler & I915_SCHEDULER_CAP_SEMAPHORES);
+ break;
+ case I915_PARAM_HAS_SECURE_BATCHES:
+- value = capable(CAP_SYS_ADMIN);
++ value = HAS_SECURE_BATCHES(dev_priv) && capable(CAP_SYS_ADMIN);
+ break;
+ case I915_PARAM_CMD_PARSER_VERSION:
+ value = i915_cmd_parser_get_version(dev_priv);
+@@ -2156,6 +2156,8 @@ static int i915_drm_suspend_late(struct drm_device *dev, bool hibernation)
+
+ i915_gem_suspend_late(dev_priv);
+
++ intel_rc6_ctx_wa_suspend(dev_priv);
++
+ intel_uncore_suspend(&dev_priv->uncore);
+
+ intel_power_domains_suspend(dev_priv,
+@@ -2372,6 +2374,8 @@ static int i915_drm_resume_early(struct drm_device *dev)
+
+ intel_power_domains_resume(dev_priv);
+
++ intel_rc6_ctx_wa_resume(dev_priv);
++
+ intel_gt_sanitize(dev_priv, true);
+
+ enable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index edb88406cb75..a992f0749859 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -696,6 +696,8 @@ struct intel_rps {
+
+ struct intel_rc6 {
+ bool enabled;
++ bool ctx_corrupted;
++ intel_wakeref_t ctx_corrupted_wakeref;
+ u64 prev_hw_residency[4];
+ u64 cur_residency[4];
+ };
+@@ -2246,9 +2248,16 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
+ #define VEBOX_MASK(dev_priv) \
+ ENGINE_INSTANCES_MASK(dev_priv, VECS0, I915_MAX_VECS)
+
++/*
++ * The Gen7 cmdparser copies the scanned buffer to the ggtt for execution.
++ * All later gens can run the final buffer from the ppgtt.
++ */
++#define CMDPARSER_USES_GGTT(dev_priv) IS_GEN(dev_priv, 7)
++
+ #define HAS_LLC(dev_priv) (INTEL_INFO(dev_priv)->has_llc)
+ #define HAS_SNOOP(dev_priv) (INTEL_INFO(dev_priv)->has_snoop)
+ #define HAS_EDRAM(dev_priv) ((dev_priv)->edram_size_mb)
++#define HAS_SECURE_BATCHES(dev_priv) (INTEL_GEN(dev_priv) < 6)
+ #define HAS_WT(dev_priv) ((IS_HASWELL(dev_priv) || \
+ IS_BROADWELL(dev_priv)) && HAS_EDRAM(dev_priv))
+
+@@ -2281,10 +2290,12 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
+ /* Early gen2 have a totally busted CS tlb and require pinned batches. */
+ #define HAS_BROKEN_CS_TLB(dev_priv) (IS_I830(dev_priv) || IS_I845G(dev_priv))
+
++#define NEEDS_RC6_CTX_CORRUPTION_WA(dev_priv) \
++ (IS_BROADWELL(dev_priv) || IS_GEN(dev_priv, 9))
++
+ /* WaRsDisableCoarsePowerGating:skl,cnl */
+ #define NEEDS_WaRsDisableCoarsePowerGating(dev_priv) \
+- (IS_CANNONLAKE(dev_priv) || \
+- IS_SKL_GT3(dev_priv) || IS_SKL_GT4(dev_priv))
++ (IS_CANNONLAKE(dev_priv) || IS_GEN(dev_priv, 9))
+
+ #define HAS_GMBUS_IRQ(dev_priv) (INTEL_GEN(dev_priv) >= 4)
+ #define HAS_GMBUS_BURST_READ(dev_priv) (INTEL_GEN(dev_priv) >= 10 || \
+@@ -2528,6 +2539,14 @@ i915_gem_object_ggtt_pin(struct drm_i915_gem_object *obj,
+
+ int i915_gem_object_unbind(struct drm_i915_gem_object *obj);
+
++struct i915_vma * __must_check
++i915_gem_object_pin(struct drm_i915_gem_object *obj,
++ struct i915_address_space *vm,
++ const struct i915_ggtt_view *view,
++ u64 size,
++ u64 alignment,
++ u64 flags);
++
+ void i915_gem_runtime_suspend(struct drm_i915_private *dev_priv);
+
+ static inline int __must_check
+@@ -2712,12 +2731,14 @@ const char *i915_cache_level_str(struct drm_i915_private *i915, int type);
+ int i915_cmd_parser_get_version(struct drm_i915_private *dev_priv);
+ void intel_engine_init_cmd_parser(struct intel_engine_cs *engine);
+ void intel_engine_cleanup_cmd_parser(struct intel_engine_cs *engine);
+-int intel_engine_cmd_parser(struct intel_engine_cs *engine,
++int intel_engine_cmd_parser(struct i915_gem_context *cxt,
++ struct intel_engine_cs *engine,
+ struct drm_i915_gem_object *batch_obj,
+- struct drm_i915_gem_object *shadow_batch_obj,
++ u64 user_batch_start,
+ u32 batch_start_offset,
+ u32 batch_len,
+- bool is_master);
++ struct drm_i915_gem_object *shadow_batch_obj,
++ u64 shadow_batch_start);
+
+ /* i915_perf.c */
+ extern void i915_perf_init(struct drm_i915_private *dev_priv);
+diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
+index 7f6af4ca0968..a88e34b9350f 100644
+--- a/drivers/gpu/drm/i915/i915_gem.c
++++ b/drivers/gpu/drm/i915/i915_gem.c
+@@ -1025,6 +1025,20 @@ i915_gem_object_ggtt_pin(struct drm_i915_gem_object *obj,
+ {
+ struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
+ struct i915_address_space *vm = &dev_priv->ggtt.vm;
++
++ return i915_gem_object_pin(obj, vm, view, size, alignment,
++ flags | PIN_GLOBAL);
++}
++
++struct i915_vma *
++i915_gem_object_pin(struct drm_i915_gem_object *obj,
++ struct i915_address_space *vm,
++ const struct i915_ggtt_view *view,
++ u64 size,
++ u64 alignment,
++ u64 flags)
++{
++ struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
+ struct i915_vma *vma;
+ int ret;
+
+@@ -1091,7 +1105,7 @@ i915_gem_object_ggtt_pin(struct drm_i915_gem_object *obj,
+ return ERR_PTR(ret);
+ }
+
+- ret = i915_vma_pin(vma, size, alignment, flags | PIN_GLOBAL);
++ ret = i915_vma_pin(vma, size, alignment, flags);
+ if (ret)
+ return ERR_PTR(ret);
+
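The i915_gem.c hunk above is an extract-the-worker refactor: the GGTT entry point keeps its behavior by delegating to the new i915_gem_object_pin(), which takes the address space explicitly and no longer forces PIN_GLOBAL itself. A standalone sketch of that shape, with made-up types and flag values:

#include <stdio.h>

#define PIN_GLOBAL 0x1	/* illustrative flag value */

struct address_space { const char *name; };

/* Generic worker: the caller picks the address space and full flags. */
static int object_pin(struct address_space *vm, unsigned int flags)
{
	printf("pin into %s, flags=0x%x\n", vm->name, flags);
	return 0;
}

static struct address_space ggtt = { "ggtt" };

/*
 * The old entry point becomes a thin wrapper that fixes the vm and
 * adds the global-binding flag the worker used to add unconditionally.
 */
static int object_ggtt_pin(unsigned int flags)
{
	return object_pin(&ggtt, flags | PIN_GLOBAL);
}

int main(void) { return object_ggtt_pin(0x10); }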
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index d6483b5dc8e5..177a26275811 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -493,6 +493,8 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
+ #define ECOCHK_PPGTT_WT_HSW (0x2 << 3)
+ #define ECOCHK_PPGTT_WB_HSW (0x3 << 3)
+
++#define GEN8_RC6_CTX_INFO _MMIO(0x8504)
++
+ #define GAC_ECO_BITS _MMIO(0x14090)
+ #define ECOBITS_SNB_BIT (1 << 13)
+ #define ECOBITS_PPGTT_CACHE64B (3 << 8)
+@@ -577,6 +579,10 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
+ */
+ #define BCS_SWCTRL _MMIO(0x22200)
+
++/* There are 16 GPR registers */
++#define BCS_GPR(n) _MMIO(0x22600 + (n) * 8)
++#define BCS_GPR_UDW(n) _MMIO(0x22600 + (n) * 8 + 4)
++
+ #define GPGPU_THREADS_DISPATCHED _MMIO(0x2290)
+ #define GPGPU_THREADS_DISPATCHED_UDW _MMIO(0x2290 + 4)
+ #define HS_INVOCATION_COUNT _MMIO(0x2300)
+@@ -7229,6 +7235,10 @@ enum {
+ #define SKL_CSR_DC5_DC6_COUNT _MMIO(0x8002C)
+ #define BXT_CSR_DC3_DC5_COUNT _MMIO(0x80038)
+
++/* Display Internal Timeout Register */
++#define RM_TIMEOUT _MMIO(0x42060)
++#define MMIO_TIMEOUT_US(us) ((us) << 0)
++
+ /* interrupts */
+ #define DE_MASTER_IRQ_CONTROL (1 << 31)
+ #define DE_SPRITEB_FLIP_DONE (1 << 29)
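The BCS_GPR definitions above describe the blitter's sixteen 64-bit general-purpose registers as pairs of 32-bit halves spaced 8 bytes apart, which is why the gen9 whitelist earlier lists REG64_IDX(BCS_GPR, 0..15). A quick standalone check of the offset arithmetic (macro bodies copied, the loop is only a demonstration):

#include <stdio.h>

#define BCS_GPR(n)	(0x22600 + (n) * 8)	/* low dword */
#define BCS_GPR_UDW(n)	(0x22600 + (n) * 8 + 4)	/* high dword */

int main(void)
{
	for (int n = 0; n < 16; n++)
		printf("GPR%-2d LDW=0x%05x UDW=0x%05x\n",
		       n, BCS_GPR(n), BCS_GPR_UDW(n));
	return 0;
}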
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index d9a7a13ce32a..9546900fd72b 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -125,6 +125,14 @@ static void bxt_init_clock_gating(struct drm_i915_private *dev_priv)
+ */
+ I915_WRITE(GEN9_CLKGATE_DIS_0, I915_READ(GEN9_CLKGATE_DIS_0) |
+ PWM1_GATING_DIS | PWM2_GATING_DIS);
++
++ /*
++ * Lower the display internal timeout.
++ * This is needed to avoid any hard hangs when DSI port PLL
++ * is off and a MMIO access is attempted by any privilege
++ * application, using batch buffers or any other means.
++ */
++ I915_WRITE(RM_TIMEOUT, MMIO_TIMEOUT_US(950));
+ }
+
+ static void glk_init_clock_gating(struct drm_i915_private *dev_priv)
+@@ -8556,6 +8564,95 @@ static void intel_init_emon(struct drm_i915_private *dev_priv)
+ dev_priv->ips.corr = (lcfuse & LCFUSE_HIV_MASK);
+ }
+
++static bool intel_rc6_ctx_corrupted(struct drm_i915_private *dev_priv)
++{
++ return !I915_READ(GEN8_RC6_CTX_INFO);
++}
++
++static void intel_rc6_ctx_wa_init(struct drm_i915_private *i915)
++{
++ if (!NEEDS_RC6_CTX_CORRUPTION_WA(i915))
++ return;
++
++ if (intel_rc6_ctx_corrupted(i915)) {
++ DRM_INFO("RC6 context corrupted, disabling runtime power management\n");
++ i915->gt_pm.rc6.ctx_corrupted = true;
++ i915->gt_pm.rc6.ctx_corrupted_wakeref =
++ intel_runtime_pm_get(&i915->runtime_pm);
++ }
++}
++
++static void intel_rc6_ctx_wa_cleanup(struct drm_i915_private *i915)
++{
++ if (i915->gt_pm.rc6.ctx_corrupted) {
++ intel_runtime_pm_put(&i915->runtime_pm,
++ i915->gt_pm.rc6.ctx_corrupted_wakeref);
++ i915->gt_pm.rc6.ctx_corrupted = false;
++ }
++}
++
++/**
++ * intel_rc6_ctx_wa_suspend - system suspend sequence for the RC6 CTX WA
++ * @i915: i915 device
++ *
++ * Perform any steps needed to clean up the RC6 CTX WA before system suspend.
++ */
++void intel_rc6_ctx_wa_suspend(struct drm_i915_private *i915)
++{
++ if (i915->gt_pm.rc6.ctx_corrupted)
++ intel_runtime_pm_put(&i915->runtime_pm,
++ i915->gt_pm.rc6.ctx_corrupted_wakeref);
++}
++
++/**
++ * intel_rc6_ctx_wa_resume - system resume sequence for the RC6 CTX WA
++ * @i915: i915 device
++ *
++ * Perform any steps needed to re-init the RC6 CTX WA after system resume.
++ */
++void intel_rc6_ctx_wa_resume(struct drm_i915_private *i915)
++{
++ if (!i915->gt_pm.rc6.ctx_corrupted)
++ return;
++
++ if (intel_rc6_ctx_corrupted(i915)) {
++ i915->gt_pm.rc6.ctx_corrupted_wakeref =
++ intel_runtime_pm_get(&i915->runtime_pm);
++ return;
++ }
++
++ DRM_INFO("RC6 context restored, re-enabling runtime power management\n");
++ i915->gt_pm.rc6.ctx_corrupted = false;
++}
++
++static void intel_disable_rc6(struct drm_i915_private *dev_priv);
++
++/**
++ * intel_rc6_ctx_wa_check - check for a new RC6 CTX corruption
++ * @i915: i915 device
++ *
++ * Check if an RC6 CTX corruption has happened since the last check and if so
++ * disable RC6 and runtime power management.
++*/
++void intel_rc6_ctx_wa_check(struct drm_i915_private *i915)
++{
++ if (!NEEDS_RC6_CTX_CORRUPTION_WA(i915))
++ return;
++
++ if (i915->gt_pm.rc6.ctx_corrupted)
++ return;
++
++ if (!intel_rc6_ctx_corrupted(i915))
++ return;
++
++ DRM_NOTE("RC6 context corruption, disabling runtime power management\n");
++
++ intel_disable_rc6(i915);
++ i915->gt_pm.rc6.ctx_corrupted = true;
++ i915->gt_pm.rc6.ctx_corrupted_wakeref =
++ intel_runtime_pm_get_noresume(&i915->runtime_pm);
++}
++
+ void intel_init_gt_powersave(struct drm_i915_private *dev_priv)
+ {
+ struct intel_rps *rps = &dev_priv->gt_pm.rps;
+@@ -8569,6 +8666,8 @@ void intel_init_gt_powersave(struct drm_i915_private *dev_priv)
+ pm_runtime_get(&dev_priv->drm.pdev->dev);
+ }
+
++ intel_rc6_ctx_wa_init(dev_priv);
++
+ /* Initialize RPS limits (for userspace) */
+ if (IS_CHERRYVIEW(dev_priv))
+ cherryview_init_gt_powersave(dev_priv);
+@@ -8607,6 +8706,8 @@ void intel_cleanup_gt_powersave(struct drm_i915_private *dev_priv)
+ if (IS_VALLEYVIEW(dev_priv))
+ valleyview_cleanup_gt_powersave(dev_priv);
+
++ intel_rc6_ctx_wa_cleanup(dev_priv);
++
+ if (!HAS_RC6(dev_priv))
+ pm_runtime_put(&dev_priv->drm.pdev->dev);
+ }
+@@ -8635,7 +8736,7 @@ static inline void intel_disable_llc_pstate(struct drm_i915_private *i915)
+ i915->gt_pm.llc_pstate.enabled = false;
+ }
+
+-static void intel_disable_rc6(struct drm_i915_private *dev_priv)
++static void __intel_disable_rc6(struct drm_i915_private *dev_priv)
+ {
+ lockdep_assert_held(&dev_priv->gt_pm.rps.lock);
+
+@@ -8654,6 +8755,13 @@ static void intel_disable_rc6(struct drm_i915_private *dev_priv)
+ dev_priv->gt_pm.rc6.enabled = false;
+ }
+
++static void intel_disable_rc6(struct drm_i915_private *dev_priv)
++{
++ mutex_lock(&dev_priv->gt_pm.rps.lock);
++ __intel_disable_rc6(dev_priv);
++ mutex_unlock(&dev_priv->gt_pm.rps.lock);
++}
++
+ static void intel_disable_rps(struct drm_i915_private *dev_priv)
+ {
+ lockdep_assert_held(&dev_priv->gt_pm.rps.lock);
+@@ -8679,7 +8787,7 @@ void intel_disable_gt_powersave(struct drm_i915_private *dev_priv)
+ {
+ mutex_lock(&dev_priv->gt_pm.rps.lock);
+
+- intel_disable_rc6(dev_priv);
++ __intel_disable_rc6(dev_priv);
+ intel_disable_rps(dev_priv);
+ if (HAS_LLC(dev_priv))
+ intel_disable_llc_pstate(dev_priv);
+@@ -8706,6 +8814,9 @@ static void intel_enable_rc6(struct drm_i915_private *dev_priv)
+ if (dev_priv->gt_pm.rc6.enabled)
+ return;
+
++ if (dev_priv->gt_pm.rc6.ctx_corrupted)
++ return;
++
+ if (IS_CHERRYVIEW(dev_priv))
+ cherryview_enable_rc6(dev_priv);
+ else if (IS_VALLEYVIEW(dev_priv))
+diff --git a/drivers/gpu/drm/i915/intel_pm.h b/drivers/gpu/drm/i915/intel_pm.h
+index 1b489fa399e1..4ccb5a53b61c 100644
+--- a/drivers/gpu/drm/i915/intel_pm.h
++++ b/drivers/gpu/drm/i915/intel_pm.h
+@@ -36,6 +36,9 @@ void intel_cleanup_gt_powersave(struct drm_i915_private *dev_priv);
+ void intel_sanitize_gt_powersave(struct drm_i915_private *dev_priv);
+ void intel_enable_gt_powersave(struct drm_i915_private *dev_priv);
+ void intel_disable_gt_powersave(struct drm_i915_private *dev_priv);
++void intel_rc6_ctx_wa_check(struct drm_i915_private *i915);
++void intel_rc6_ctx_wa_suspend(struct drm_i915_private *i915);
++void intel_rc6_ctx_wa_resume(struct drm_i915_private *i915);
+ void gen6_rps_busy(struct drm_i915_private *dev_priv);
+ void gen6_rps_idle(struct drm_i915_private *dev_priv);
+ void gen6_rps_boost(struct i915_request *rq);
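Taken together, the intel_pm changes above implement one lifecycle for the RC6 context workaround: take a runtime-PM wakeref when corruption is first detected, drop it across system suspend, re-take it on resume if the corruption persists, and retire the workaround once the context reads back intact. A compressed user-space analogue of that state machine, assuming only that a refcount can stand in for the wakeref:

#include <stdio.h>
#include <stdbool.h>

static int wakeref_count;	/* stands in for the runtime-PM wakeref */
static bool ctx_corrupted;

static void rc6_ctx_wa_check(bool hw_corrupted)
{
	if (ctx_corrupted || !hw_corrupted)
		return;
	ctx_corrupted = true;
	wakeref_count++;	/* block runtime suspend from now on */
}

static void rc6_ctx_wa_suspend(void)
{
	if (ctx_corrupted)
		wakeref_count--;	/* released for system suspend */
}

static void rc6_ctx_wa_resume(bool hw_corrupted)
{
	if (!ctx_corrupted)
		return;
	if (hw_corrupted) {
		wakeref_count++;	/* still bad: re-take the wakeref */
		return;
	}
	ctx_corrupted = false;		/* context restored: WA retired */
}

int main(void)
{
	rc6_ctx_wa_check(true);
	rc6_ctx_wa_suspend();
	rc6_ctx_wa_resume(false);
	printf("corrupted=%d wakerefs=%d\n", ctx_corrupted, wakeref_count);
	return 0;
}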
+diff --git a/drivers/gpu/drm/radeon/si_dpm.c b/drivers/gpu/drm/radeon/si_dpm.c
+index 460fd98e40a7..a0b382a637a6 100644
+--- a/drivers/gpu/drm/radeon/si_dpm.c
++++ b/drivers/gpu/drm/radeon/si_dpm.c
+@@ -1958,6 +1958,7 @@ static void si_initialize_powertune_defaults(struct radeon_device *rdev)
+ case 0x682C:
+ si_pi->cac_weights = cac_weights_cape_verde_pro;
+ si_pi->dte_data = dte_data_sun_xt;
++ update_dte_from_pl2 = true;
+ break;
+ case 0x6825:
+ case 0x6827:
+diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
+index c1058eece16b..27e6449da24a 100644
+--- a/drivers/gpu/drm/scheduler/sched_main.c
++++ b/drivers/gpu/drm/scheduler/sched_main.c
+@@ -478,6 +478,7 @@ void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched)
+ struct drm_sched_job *s_job, *tmp;
+ uint64_t guilty_context;
+ bool found_guilty = false;
++ struct dma_fence *fence;
+
+ list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
+ struct drm_sched_fence *s_fence = s_job->s_fence;
+@@ -491,7 +492,16 @@ void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched)
+ dma_fence_set_error(&s_fence->finished, -ECANCELED);
+
+ dma_fence_put(s_job->s_fence->parent);
+- s_job->s_fence->parent = sched->ops->run_job(s_job);
++ fence = sched->ops->run_job(s_job);
++
++ if (IS_ERR_OR_NULL(fence)) {
++ s_job->s_fence->parent = NULL;
++ dma_fence_set_error(&s_fence->finished, PTR_ERR(fence));
++ } else {
++ s_job->s_fence->parent = fence;
++ }
++
++
+ }
+ }
+ EXPORT_SYMBOL(drm_sched_resubmit_jobs);
+@@ -719,7 +729,7 @@ static int drm_sched_main(void *param)
+ fence = sched->ops->run_job(sched_job);
+ drm_sched_fence_scheduled(s_fence);
+
+- if (fence) {
++ if (!IS_ERR_OR_NULL(fence)) {
+ s_fence->parent = dma_fence_get(fence);
+ r = dma_fence_add_callback(fence, &sched_job->cb,
+ drm_sched_process_job);
+@@ -729,8 +739,11 @@ static int drm_sched_main(void *param)
+ DRM_ERROR("fence add callback failed (%d)\n",
+ r);
+ dma_fence_put(fence);
+- } else
++ } else {
++
++ dma_fence_set_error(&s_fence->finished, PTR_ERR(fence));
+ drm_sched_process_job(NULL, &sched_job->cb);
++ }
+
+ wake_up(&sched->job_scheduled);
+ }
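The scheduler hunks above widen the fence check from a plain NULL test to IS_ERR_OR_NULL(), since a backend's run_job() may now return either no fence at all or an ERR_PTR-encoded failure. A standalone sketch of that convention, with the kernel's ERR_PTR encoding re-created locally so it builds in user space:

#include <stdio.h>
#include <errno.h>
#include <stdint.h>

/* Local re-creation of the kernel convention: the top 4095 pointer
 * values encode negative errno codes. */
#define MAX_ERRNO 4095
static void *ERR_PTR(long err)     { return (void *)err; }
static long PTR_ERR(const void *p) { return (long)p; }
static int IS_ERR_OR_NULL(const void *p)
{
	return !p || (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}

/* Stand-in for sched->ops->run_job(): may fail, or return no fence. */
static void *run_job(int fail)
{
	return fail ? ERR_PTR(-ENOMEM) : NULL;
}

int main(void)
{
	void *fence = run_job(1);

	if (IS_ERR_OR_NULL(fence))
		/* note PTR_ERR(NULL) evaluates to 0, i.e. "no error" */
		printf("no parent fence, err=%ld\n", PTR_ERR(fence));
	return 0;
}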
+diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
+index 27e0f87075d9..4dc7e38c99c7 100644
+--- a/drivers/gpu/drm/v3d/v3d_gem.c
++++ b/drivers/gpu/drm/v3d/v3d_gem.c
+@@ -555,13 +555,16 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
+
+ if (args->bcl_start != args->bcl_end) {
+ bin = kcalloc(1, sizeof(*bin), GFP_KERNEL);
+- if (!bin)
++ if (!bin) {
++ v3d_job_put(&render->base);
+ return -ENOMEM;
++ }
+
+ ret = v3d_job_init(v3d, file_priv, &bin->base,
+ v3d_job_free, args->in_sync_bcl);
+ if (ret) {
+ v3d_job_put(&render->base);
++ kfree(bin);
+ return ret;
+ }
+
+diff --git a/drivers/hid/hid-google-hammer.c b/drivers/hid/hid-google-hammer.c
+index ee5e0bdcf078..154f1ce771d5 100644
+--- a/drivers/hid/hid-google-hammer.c
++++ b/drivers/hid/hid-google-hammer.c
+@@ -469,6 +469,10 @@ static int hammer_probe(struct hid_device *hdev,
+ static const struct hid_device_id hammer_devices[] = {
+ { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+ USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_HAMMER) },
++ { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
++ USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_MAGNEMITE) },
++ { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
++ USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_MASTERBALL) },
+ { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+ USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_STAFF) },
+ { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index e4d51ce20a6a..9cf5a95c1bd3 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -475,6 +475,8 @@
+ #define USB_DEVICE_ID_GOOGLE_STAFF 0x502b
+ #define USB_DEVICE_ID_GOOGLE_WAND 0x502d
+ #define USB_DEVICE_ID_GOOGLE_WHISKERS 0x5030
++#define USB_DEVICE_ID_GOOGLE_MASTERBALL 0x503c
++#define USB_DEVICE_ID_GOOGLE_MAGNEMITE 0x503d
+
+ #define USB_VENDOR_ID_GOTOP 0x08f2
+ #define USB_DEVICE_ID_SUPER_Q2 0x007f
+diff --git a/drivers/hid/intel-ish-hid/ishtp/client-buffers.c b/drivers/hid/intel-ish-hid/ishtp/client-buffers.c
+index 1b0a0cc605e7..513d7a4a1b8a 100644
+--- a/drivers/hid/intel-ish-hid/ishtp/client-buffers.c
++++ b/drivers/hid/intel-ish-hid/ishtp/client-buffers.c
+@@ -84,7 +84,7 @@ int ishtp_cl_alloc_tx_ring(struct ishtp_cl *cl)
+ return 0;
+ out:
+ dev_err(&cl->device->dev, "error in allocating Tx pool\n");
+- ishtp_cl_free_rx_ring(cl);
++ ishtp_cl_free_tx_ring(cl);
+ return -ENOMEM;
+ }
+
+diff --git a/drivers/hid/wacom.h b/drivers/hid/wacom.h
+index 4a7f8d363220..203d27d198b8 100644
+--- a/drivers/hid/wacom.h
++++ b/drivers/hid/wacom.h
+@@ -202,6 +202,21 @@ static inline void wacom_schedule_work(struct wacom_wac *wacom_wac,
+ }
+ }
+
++/*
++ * Convert a signed 32-bit integer to an unsigned n-bit integer. Undoes
++ * the normally-helpful work of 'hid_snto32' for fields that use signed
++ * ranges for questionable reasons.
++ */
++static inline __u32 wacom_s32tou(s32 value, __u8 n)
++{
++ switch (n) {
++ case 8: return ((__u8)value);
++ case 16: return ((__u16)value);
++ case 32: return ((__u32)value);
++ }
++ return value & (1 << (n - 1)) ? value & (~(~0U << n)) : value;
++}
++
+ extern const struct hid_device_id wacom_ids[];
+
+ void wacom_wac_irq(struct wacom_wac *wacom_wac, size_t len);
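As a quick standalone check of the wacom_s32tou() helper above: for an 8-bit field that hid_snto32 sign-extended to -1, it recovers the raw 0xFF, and odd widths such as 20 bits go through the mask path. The typedefs and test harness here are ours; the function body is copied verbatim:

#include <stdio.h>
#include <stdint.h>

typedef int32_t s32;
typedef uint8_t __u8;
typedef uint16_t __u16;
typedef uint32_t __u32;

static __u32 wacom_s32tou(s32 value, __u8 n)
{
	switch (n) {
	case 8:  return ((__u8)value);
	case 16: return ((__u16)value);
	case 32: return ((__u32)value);
	}
	return value & (1 << (n - 1)) ? value & (~(~0U << n)) : value;
}

int main(void)
{
	printf("0x%x\n", wacom_s32tou(-1, 8));		/* 0xff */
	printf("0x%x\n", wacom_s32tou(-1, 20));		/* 0xfffff */
	printf("0x%x\n", wacom_s32tou(0x1234, 16));	/* 0x1234 */
	return 0;
}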
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 2b4640397375..4f2b08aa7508 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -2258,7 +2258,7 @@ static void wacom_wac_pen_event(struct hid_device *hdev, struct hid_field *field
+ case HID_DG_TOOLSERIALNUMBER:
+ if (value) {
+ wacom_wac->serial[0] = (wacom_wac->serial[0] & ~0xFFFFFFFFULL);
+- wacom_wac->serial[0] |= (__u32)value;
++ wacom_wac->serial[0] |= wacom_s32tou(value, field->report_size);
+ }
+ return;
+ case HID_DG_TWIST:
+@@ -2274,15 +2274,17 @@ static void wacom_wac_pen_event(struct hid_device *hdev, struct hid_field *field
+ return;
+ case WACOM_HID_WD_SERIALHI:
+ if (value) {
++ __u32 raw_value = wacom_s32tou(value, field->report_size);
++
+ wacom_wac->serial[0] = (wacom_wac->serial[0] & 0xFFFFFFFF);
+- wacom_wac->serial[0] |= ((__u64)value) << 32;
++ wacom_wac->serial[0] |= ((__u64)raw_value) << 32;
+ /*
+ * Non-USI EMR devices may contain additional tool type
+ * information here. See WACOM_HID_WD_TOOLTYPE case for
+ * more details.
+ */
+ if (value >> 20 == 1) {
+- wacom_wac->id[0] |= value & 0xFFFFF;
++ wacom_wac->id[0] |= raw_value & 0xFFFFF;
+ }
+ }
+ return;
+@@ -2294,7 +2296,7 @@ static void wacom_wac_pen_event(struct hid_device *hdev, struct hid_field *field
+ * bitwise OR so the complete value can be built
+ * up over time :(
+ */
+- wacom_wac->id[0] |= value;
++ wacom_wac->id[0] |= wacom_s32tou(value, field->report_size);
+ return;
+ case WACOM_HID_WD_OFFSETLEFT:
+ if (features->offset_left && value != features->offset_left)
+diff --git a/drivers/hwmon/ina3221.c b/drivers/hwmon/ina3221.c
+index 0037e2bdacd6..8a51dcf055ea 100644
+--- a/drivers/hwmon/ina3221.c
++++ b/drivers/hwmon/ina3221.c
+@@ -170,7 +170,7 @@ static inline int ina3221_wait_for_data(struct ina3221_data *ina)
+
+ /* Polling the CVRF bit to make sure read data is ready */
+ return regmap_field_read_poll_timeout(ina->fields[F_CVRF],
+- cvrf, cvrf, wait, 100000);
++ cvrf, cvrf, wait, wait * 2);
+ }
+
+ static int ina3221_read_value(struct ina3221_data *ina, unsigned int reg,
+diff --git a/drivers/hwtracing/intel_th/gth.c b/drivers/hwtracing/intel_th/gth.c
+index fa9d34af87ac..f72803a02391 100644
+--- a/drivers/hwtracing/intel_th/gth.c
++++ b/drivers/hwtracing/intel_th/gth.c
+@@ -626,6 +626,9 @@ static void intel_th_gth_switch(struct intel_th_device *thdev,
+ if (!count)
+ dev_dbg(&thdev->dev, "timeout waiting for CTS Trigger\n");
+
++ /* De-assert the trigger */
++ iowrite32(0, gth->base + REG_CTS_CTL);
++
+ intel_th_gth_stop(gth, output, false);
+ intel_th_gth_start(gth, output);
+ }
+diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c
+index 91dfeba62485..03ca5b1bef9f 100644
+--- a/drivers/hwtracing/intel_th/pci.c
++++ b/drivers/hwtracing/intel_th/pci.c
+@@ -199,6 +199,11 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x02a6),
+ .driver_data = (kernel_ulong_t)&intel_th_2x,
+ },
++ {
++ /* Comet Lake PCH */
++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x06a6),
++ .driver_data = (kernel_ulong_t)&intel_th_2x,
++ },
+ {
+ /* Ice Lake NNPI */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x45c5),
+@@ -209,6 +214,11 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa0a6),
+ .driver_data = (kernel_ulong_t)&intel_th_2x,
+ },
++ {
++ /* Jasper Lake PCH */
++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4da6),
++ .driver_data = (kernel_ulong_t)&intel_th_2x,
++ },
+ { 0 },
+ };
+
+diff --git a/drivers/iio/adc/stm32-adc.c b/drivers/iio/adc/stm32-adc.c
+index b22be473cb03..755627e4ab9e 100644
+--- a/drivers/iio/adc/stm32-adc.c
++++ b/drivers/iio/adc/stm32-adc.c
+@@ -1399,7 +1399,7 @@ static int stm32_adc_dma_start(struct iio_dev *indio_dev)
+ cookie = dmaengine_submit(desc);
+ ret = dma_submit_error(cookie);
+ if (ret) {
+- dmaengine_terminate_all(adc->dma_chan);
++ dmaengine_terminate_sync(adc->dma_chan);
+ return ret;
+ }
+
+@@ -1477,7 +1477,7 @@ static void __stm32_adc_buffer_predisable(struct iio_dev *indio_dev)
+ stm32_adc_conv_irq_disable(adc);
+
+ if (adc->dma_chan)
+- dmaengine_terminate_all(adc->dma_chan);
++ dmaengine_terminate_sync(adc->dma_chan);
+
+ if (stm32_adc_set_trig(indio_dev, NULL))
+ dev_err(&indio_dev->dev, "Can't clear trigger\n");
+diff --git a/drivers/iio/imu/adis16480.c b/drivers/iio/imu/adis16480.c
+index b99d73887c9f..8743b2f376e2 100644
+--- a/drivers/iio/imu/adis16480.c
++++ b/drivers/iio/imu/adis16480.c
+@@ -317,8 +317,11 @@ static int adis16480_set_freq(struct iio_dev *indio_dev, int val, int val2)
+ struct adis16480 *st = iio_priv(indio_dev);
+ unsigned int t, reg;
+
++ if (val < 0 || val2 < 0)
++ return -EINVAL;
++
+ t = val * 1000 + val2 / 1000;
+- if (t <= 0)
++ if (t == 0)
+ return -EINVAL;
+
+ /*
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
+index 8a704cd5bddb..3cb41ac357fa 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
+@@ -114,54 +114,63 @@ static const struct inv_mpu6050_hw hw_info[] = {
+ .name = "MPU6050",
+ .reg = &reg_set_6050,
+ .config = &chip_config_6050,
++ .fifo_size = 1024,
+ },
+ {
+ .whoami = INV_MPU6500_WHOAMI_VALUE,
+ .name = "MPU6500",
+ .reg = &reg_set_6500,
+ .config = &chip_config_6050,
++ .fifo_size = 512,
+ },
+ {
+ .whoami = INV_MPU6515_WHOAMI_VALUE,
+ .name = "MPU6515",
+ .reg = &reg_set_6500,
+ .config = &chip_config_6050,
++ .fifo_size = 512,
+ },
+ {
+ .whoami = INV_MPU6000_WHOAMI_VALUE,
+ .name = "MPU6000",
+ .reg = &reg_set_6050,
+ .config = &chip_config_6050,
++ .fifo_size = 1024,
+ },
+ {
+ .whoami = INV_MPU9150_WHOAMI_VALUE,
+ .name = "MPU9150",
+ .reg = &reg_set_6050,
+ .config = &chip_config_6050,
++ .fifo_size = 1024,
+ },
+ {
+ .whoami = INV_MPU9250_WHOAMI_VALUE,
+ .name = "MPU9250",
+ .reg = &reg_set_6500,
+ .config = &chip_config_6050,
++ .fifo_size = 512,
+ },
+ {
+ .whoami = INV_MPU9255_WHOAMI_VALUE,
+ .name = "MPU9255",
+ .reg = &reg_set_6500,
+ .config = &chip_config_6050,
++ .fifo_size = 512,
+ },
+ {
+ .whoami = INV_ICM20608_WHOAMI_VALUE,
+ .name = "ICM20608",
+ .reg = &reg_set_6500,
+ .config = &chip_config_6050,
++ .fifo_size = 512,
+ },
+ {
+ .whoami = INV_ICM20602_WHOAMI_VALUE,
+ .name = "ICM20602",
+ .reg = &reg_set_icm20602,
+ .config = &chip_config_6050,
++ .fifo_size = 1008,
+ },
+ };
+
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h b/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
+index db1c6904388b..51235677c534 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
+@@ -100,12 +100,14 @@ struct inv_mpu6050_chip_config {
+ * @name: name of the chip.
+ * @reg: register map of the chip.
+ * @config: configuration of the chip.
++ * @fifo_size: size of the FIFO in bytes.
+ */
+ struct inv_mpu6050_hw {
+ u8 whoami;
+ u8 *name;
+ const struct inv_mpu6050_reg_map *reg;
+ const struct inv_mpu6050_chip_config *config;
++ size_t fifo_size;
+ };
+
+ /*
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
+index 5f9a5de0bab4..72d8c5790076 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
+@@ -180,9 +180,6 @@ irqreturn_t inv_mpu6050_read_fifo(int irq, void *p)
+ "failed to ack interrupt\n");
+ goto flush_fifo;
+ }
+- /* handle fifo overflow by reseting fifo */
+- if (int_status & INV_MPU6050_BIT_FIFO_OVERFLOW_INT)
+- goto flush_fifo;
+ if (!(int_status & INV_MPU6050_BIT_RAW_DATA_RDY_INT)) {
+ dev_warn(regmap_get_device(st->map),
+ "spurious interrupt with status 0x%x\n", int_status);
+@@ -211,6 +208,18 @@ irqreturn_t inv_mpu6050_read_fifo(int irq, void *p)
+ if (result)
+ goto end_session;
+ fifo_count = get_unaligned_be16(&data[0]);
++
++ /*
++ * Handle fifo overflow by resetting fifo.
++ * Reset if there are only 3 data sets free remaining to mitigate
++ * possible delay between reading fifo count and fifo data.
++ */
++ nb = 3 * bytes_per_datum;
++ if (fifo_count >= st->hw->fifo_size - nb) {
++ dev_warn(regmap_get_device(st->map), "fifo overflow reset\n");
++ goto flush_fifo;
++ }
++
+ /* compute and process all complete datum */
+ nb = fifo_count / bytes_per_datum;
+ inv_mpu6050_update_period(st, pf->timestamp, nb);
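The new guard above flushes the FIFO once fewer than three sample slots remain, absorbing the delay between reading the count register and draining the data. Worked numbers, assuming a 12-byte accel-plus-gyro datum on an MPU6500 with the 512-byte FIFO from the table above:

#include <stdio.h>

int main(void)
{
	const unsigned int fifo_size = 512;		/* MPU6500, per hw_info */
	const unsigned int bytes_per_datum = 12;	/* assumed: 3-axis accel+gyro */
	unsigned int nb = 3 * bytes_per_datum;

	/* Same comparison as the driver: flush when the count nears the top. */
	printf("flush once fifo_count >= %u bytes\n", fifo_size - nb);
	return 0;
}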
+diff --git a/drivers/iio/proximity/srf04.c b/drivers/iio/proximity/srf04.c
+index 8b50d56b0a03..01eb8cc63076 100644
+--- a/drivers/iio/proximity/srf04.c
++++ b/drivers/iio/proximity/srf04.c
+@@ -110,7 +110,7 @@ static int srf04_read(struct srf04_data *data)
+ udelay(data->cfg->trigger_pulse_us);
+ gpiod_set_value(data->gpiod_trig, 0);
+
+- /* it cannot take more than 20 ms */
++ /* it should not take more than 20 ms until echo is rising */
+ ret = wait_for_completion_killable_timeout(&data->rising, HZ/50);
+ if (ret < 0) {
+ mutex_unlock(&data->lock);
+@@ -120,7 +120,8 @@ static int srf04_read(struct srf04_data *data)
+ return -ETIMEDOUT;
+ }
+
+- ret = wait_for_completion_killable_timeout(&data->falling, HZ/50);
++ /* it cannot take more than 50 ms until echo is falling */
++ ret = wait_for_completion_killable_timeout(&data->falling, HZ/20);
+ if (ret < 0) {
+ mutex_unlock(&data->lock);
+ return ret;
+@@ -135,19 +136,19 @@ static int srf04_read(struct srf04_data *data)
+
+ dt_ns = ktime_to_ns(ktime_dt);
+ /*
+- * measuring more than 3 meters is beyond the capabilities of
+- * the sensor
++ * measuring more than 6,45 meters is beyond the capabilities of
++ * the supported sensors
+ * ==> filter out invalid results for not measuring echos of
+ * another us sensor
+ *
+ * formula:
+- * distance 3 m
+- * time = ---------- = --------- = 9404389 ns
+- * speed 319 m/s
++ * distance 6,45 * 2 m
++ * time = ---------- = ------------ = 40438871 ns
++ * speed 319 m/s
+ *
+ * using a minimum speed at -20 °C of 319 m/s
+ */
+- if (dt_ns > 9404389)
++ if (dt_ns > 40438871)
+ return -EIO;
+
+ time_ns = dt_ns;
+@@ -159,20 +160,20 @@ static int srf04_read(struct srf04_data *data)
+ * with Temp in °C
+ * and speed in m/s
+ *
+- * use 343 m/s as ultrasonic speed at 20 °C here in absence of the
++ * use 343,5 m/s as ultrasonic speed at 20 °C here in absence of the
+ * temperature
+ *
+ * therefore:
+- * time 343
+- * distance = ------ * -----
+- * 10^6 2
++ * time 343,5 time * 106
++ * distance = ------ * ------- = ------------
++ * 10^6 2 617176
+ * with time in ns
+ * and distance in mm (one way)
+ *
+- * because we limit to 3 meters the multiplication with 343 just
++ * because we limit to 6,45 meters the multiplication with 106 just
+ * fits into 32 bit
+ */
+- distance_mm = time_ns * 343 / 2000000;
++ distance_mm = time_ns * 106 / 617176;
+
+ return distance_mm;
+ }
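To sanity-check the constants in the srf04 hunk above: 106 / 617176 equals 343.5 / 2,000,000 to within rounding, and the 40438871 ns cut-off keeps the 32-bit multiply from wrapping. A standalone verification:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t dt_ns = 40438871;		/* driver's upper bound */
	uint64_t product = (uint64_t)dt_ns * 106;

	/* 4286520326 < 4294967296: the u32 multiply cannot overflow */
	printf("max product  = %llu (UINT32_MAX = %u)\n",
	       (unsigned long long)product, (unsigned int)UINT32_MAX);
	printf("max distance = %u mm one way\n",
	       (uint32_t)(product / 617176));
	printf("scale check  : 106/617176 = %.9f, 343.5/2e6 = %.9f\n",
	       106.0 / 617176.0, 343.5 / 2e6);
	return 0;
}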
+diff --git a/drivers/infiniband/core/nldev.c b/drivers/infiniband/core/nldev.c
+index f42e856f3072..4300e2186584 100644
+--- a/drivers/infiniband/core/nldev.c
++++ b/drivers/infiniband/core/nldev.c
+@@ -778,7 +778,7 @@ static int fill_res_counter_entry(struct sk_buff *msg, bool has_cap_net_admin,
+ container_of(res, struct rdma_counter, res);
+
+ if (port && port != counter->port)
+- return 0;
++ return -EAGAIN;
+
+ /* Dump it even query failed */
+ rdma_counter_query_stats(counter);
+diff --git a/drivers/infiniband/core/uverbs.h b/drivers/infiniband/core/uverbs.h
+index 1e5aeb39f774..63f7f7db5902 100644
+--- a/drivers/infiniband/core/uverbs.h
++++ b/drivers/infiniband/core/uverbs.h
+@@ -98,7 +98,7 @@ ib_uverbs_init_udata_buf_or_null(struct ib_udata *udata,
+
+ struct ib_uverbs_device {
+ atomic_t refcount;
+- int num_comp_vectors;
++ u32 num_comp_vectors;
+ struct completion comp;
+ struct device dev;
+ /* First group for device attributes, NULL terminated array */
+diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
+index 92349bf37589..5b1dc11a7283 100644
+--- a/drivers/infiniband/core/verbs.c
++++ b/drivers/infiniband/core/verbs.c
+@@ -662,16 +662,17 @@ static bool find_gid_index(const union ib_gid *gid,
+ void *context)
+ {
+ struct find_gid_index_context *ctx = context;
++ u16 vlan_id = 0xffff;
++ int ret;
+
+ if (ctx->gid_type != gid_attr->gid_type)
+ return false;
+
+- if ((!!(ctx->vlan_id != 0xffff) == !is_vlan_dev(gid_attr->ndev)) ||
+- (is_vlan_dev(gid_attr->ndev) &&
+- vlan_dev_vlan_id(gid_attr->ndev) != ctx->vlan_id))
++ ret = rdma_read_gid_l2_fields(gid_attr, &vlan_id, NULL);
++ if (ret)
+ return false;
+
+- return true;
++ return ctx->vlan_id == vlan_id;
+ }
+
+ static const struct ib_gid_attr *
+diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
+index e87fc0408470..347dc242fb88 100644
+--- a/drivers/infiniband/hw/cxgb4/cm.c
++++ b/drivers/infiniband/hw/cxgb4/cm.c
+@@ -495,7 +495,6 @@ static int _put_ep_safe(struct c4iw_dev *dev, struct sk_buff *skb)
+
+ ep = *((struct c4iw_ep **)(skb->cb + 2 * sizeof(void *)));
+ release_ep_resources(ep);
+- kfree_skb(skb);
+ return 0;
+ }
+
+@@ -506,7 +505,6 @@ static int _put_pass_ep_safe(struct c4iw_dev *dev, struct sk_buff *skb)
+ ep = *((struct c4iw_ep **)(skb->cb + 2 * sizeof(void *)));
+ c4iw_put_ep(&ep->parent_ep->com);
+ release_ep_resources(ep);
+- kfree_skb(skb);
+ return 0;
+ }
+
+@@ -2424,20 +2422,6 @@ static int accept_cr(struct c4iw_ep *ep, struct sk_buff *skb,
+ enum chip_type adapter_type = ep->com.dev->rdev.lldi.adapter_type;
+
+ pr_debug("ep %p tid %u\n", ep, ep->hwtid);
+-
+- skb_get(skb);
+- rpl = cplhdr(skb);
+- if (!is_t4(adapter_type)) {
+- skb_trim(skb, roundup(sizeof(*rpl5), 16));
+- rpl5 = (void *)rpl;
+- INIT_TP_WR(rpl5, ep->hwtid);
+- } else {
+- skb_trim(skb, sizeof(*rpl));
+- INIT_TP_WR(rpl, ep->hwtid);
+- }
+- OPCODE_TID(rpl) = cpu_to_be32(MK_OPCODE_TID(CPL_PASS_ACCEPT_RPL,
+- ep->hwtid));
+-
+ cxgb_best_mtu(ep->com.dev->rdev.lldi.mtus, ep->mtu, &mtu_idx,
+ enable_tcp_timestamps && req->tcpopt.tstamp,
+ (ep->com.remote_addr.ss_family == AF_INET) ? 0 : 1);
+@@ -2483,6 +2467,20 @@ static int accept_cr(struct c4iw_ep *ep, struct sk_buff *skb,
+ if (tcph->ece && tcph->cwr)
+ opt2 |= CCTRL_ECN_V(1);
+ }
++
++ skb_get(skb);
++ rpl = cplhdr(skb);
++ if (!is_t4(adapter_type)) {
++ skb_trim(skb, roundup(sizeof(*rpl5), 16));
++ rpl5 = (void *)rpl;
++ INIT_TP_WR(rpl5, ep->hwtid);
++ } else {
++ skb_trim(skb, sizeof(*rpl));
++ INIT_TP_WR(rpl, ep->hwtid);
++ }
++ OPCODE_TID(rpl) = cpu_to_be32(MK_OPCODE_TID(CPL_PASS_ACCEPT_RPL,
++ ep->hwtid));
++
+ if (CHELSIO_CHIP_VERSION(adapter_type) > CHELSIO_T4) {
+ u32 isn = (prandom_u32() & ~7UL) - 1;
+ opt2 |= T5_OPT_2_VALID_F;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index b76e3beeafb8..854898433916 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -5268,9 +5268,9 @@ static void hns_roce_v2_free_eq(struct hns_roce_dev *hr_dev,
+ return;
+ }
+
+- if (eq->buf_list)
+- dma_free_coherent(hr_dev->dev, buf_chk_sz,
+- eq->buf_list->buf, eq->buf_list->map);
++ dma_free_coherent(hr_dev->dev, buf_chk_sz, eq->buf_list->buf,
++ eq->buf_list->map);
++ kfree(eq->buf_list);
+ }
+
+ static void hns_roce_config_eqc(struct hns_roce_dev *hr_dev,
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index 72869ff4a334..3903141a387e 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -3249,10 +3249,12 @@ static int modify_raw_packet_qp_sq(
+ }
+
+ /* Only remove the old rate after new rate was set */
+- if ((old_rl.rate &&
+- !mlx5_rl_are_equal(&old_rl, &new_rl)) ||
+- (new_state != MLX5_SQC_STATE_RDY))
++ if ((old_rl.rate && !mlx5_rl_are_equal(&old_rl, &new_rl)) ||
++ (new_state != MLX5_SQC_STATE_RDY)) {
+ mlx5_rl_remove_rate(dev, &old_rl);
++ if (new_state != MLX5_SQC_STATE_RDY)
++ memset(&new_rl, 0, sizeof(new_rl));
++ }
+
+ ibqp->rl = new_rl;
+ sq->state = new_state;
+diff --git a/drivers/infiniband/hw/qedr/main.c b/drivers/infiniband/hw/qedr/main.c
+index f97b3d65b30c..2fef7a48f77b 100644
+--- a/drivers/infiniband/hw/qedr/main.c
++++ b/drivers/infiniband/hw/qedr/main.c
+@@ -76,7 +76,7 @@ static void qedr_get_dev_fw_str(struct ib_device *ibdev, char *str)
+ struct qedr_dev *qedr = get_qedr_dev(ibdev);
+ u32 fw_ver = (u32)qedr->attr.fw_ver;
+
+- snprintf(str, IB_FW_VERSION_NAME_MAX, "%d. %d. %d. %d",
++ snprintf(str, IB_FW_VERSION_NAME_MAX, "%d.%d.%d.%d",
+ (fw_ver >> 24) & 0xFF, (fw_ver >> 16) & 0xFF,
+ (fw_ver >> 8) & 0xFF, fw_ver & 0xFF);
+ }
+diff --git a/drivers/infiniband/sw/siw/siw_qp.c b/drivers/infiniband/sw/siw/siw_qp.c
+index 52d402f39df9..b4317480cee7 100644
+--- a/drivers/infiniband/sw/siw/siw_qp.c
++++ b/drivers/infiniband/sw/siw/siw_qp.c
+@@ -1312,6 +1312,7 @@ int siw_qp_add(struct siw_device *sdev, struct siw_qp *qp)
+ void siw_free_qp(struct kref *ref)
+ {
+ struct siw_qp *found, *qp = container_of(ref, struct siw_qp, ref);
++ struct siw_base_qp *siw_base_qp = to_siw_base_qp(qp->ib_qp);
+ struct siw_device *sdev = qp->sdev;
+ unsigned long flags;
+
+@@ -1334,4 +1335,5 @@ void siw_free_qp(struct kref *ref)
+ atomic_dec(&sdev->num_qp);
+ siw_dbg_qp(qp, "free QP\n");
+ kfree_rcu(qp, rcu);
++ kfree(siw_base_qp);
+ }
+diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c
+index da52c90e06d4..ac08d84d84cb 100644
+--- a/drivers/infiniband/sw/siw/siw_verbs.c
++++ b/drivers/infiniband/sw/siw/siw_verbs.c
+@@ -603,7 +603,6 @@ out:
+ int siw_destroy_qp(struct ib_qp *base_qp, struct ib_udata *udata)
+ {
+ struct siw_qp *qp = to_siw_qp(base_qp);
+- struct siw_base_qp *siw_base_qp = to_siw_base_qp(base_qp);
+ struct siw_ucontext *uctx =
+ rdma_udata_to_drv_context(udata, struct siw_ucontext,
+ base_ucontext);
+@@ -640,7 +639,6 @@ int siw_destroy_qp(struct ib_qp *base_qp, struct ib_udata *udata)
+ qp->scq = qp->rcq = NULL;
+
+ siw_qp_put(qp);
+- kfree(siw_base_qp);
+
+ return 0;
+ }
+diff --git a/drivers/iommu/amd_iommu_quirks.c b/drivers/iommu/amd_iommu_quirks.c
+index c235f79b7a20..5120ce4fdce3 100644
+--- a/drivers/iommu/amd_iommu_quirks.c
++++ b/drivers/iommu/amd_iommu_quirks.c
+@@ -73,6 +73,19 @@ static const struct dmi_system_id ivrs_quirks[] __initconst = {
+ },
+ .driver_data = (void *)&ivrs_ioapic_quirks[DELL_LATITUDE_5495],
+ },
++ {
++ /*
++ * Acer Aspire A315-41 requires the very same workaround as
++ * Dell Latitude 5495
++ */
++ .callback = ivrs_ioapic_quirk_cb,
++ .ident = "Acer Aspire A315-41",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Aspire A315-41"),
++ },
++ .driver_data = (void *)&ivrs_ioapic_quirks[DELL_LATITUDE_5495],
++ },
+ {
+ .callback = ivrs_ioapic_quirk_cb,
+ .ident = "Lenovo ideapad 330S-15ARR",
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 21d8fcc83c9c..8550822095be 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -1816,7 +1816,8 @@ err_detach:
+ slave_disable_netpoll(new_slave);
+
+ err_close:
+- slave_dev->priv_flags &= ~IFF_BONDING;
++ if (!netif_is_bond_master(slave_dev))
++ slave_dev->priv_flags &= ~IFF_BONDING;
+ dev_close(slave_dev);
+
+ err_restore_mac:
+@@ -2017,7 +2018,8 @@ static int __bond_release_one(struct net_device *bond_dev,
+ else
+ dev_set_mtu(slave_dev, slave->original_mtu);
+
+- slave_dev->priv_flags &= ~IFF_BONDING;
++ if (!netif_is_bond_master(slave_dev))
++ slave_dev->priv_flags &= ~IFF_BONDING;
+
+ bond_free_slave(slave);
+
+@@ -2086,8 +2088,7 @@ static int bond_miimon_inspect(struct bonding *bond)
+ ignore_updelay = !rcu_dereference(bond->curr_active_slave);
+
+ bond_for_each_slave_rcu(bond, slave, iter) {
+- slave->new_link = BOND_LINK_NOCHANGE;
+- slave->link_new_state = slave->link;
++ bond_propose_link_state(slave, BOND_LINK_NOCHANGE);
+
+ link_state = bond_check_dev_link(bond, slave->dev, 0);
+
+@@ -2121,7 +2122,7 @@ static int bond_miimon_inspect(struct bonding *bond)
+ }
+
+ if (slave->delay <= 0) {
+- slave->new_link = BOND_LINK_DOWN;
++ bond_propose_link_state(slave, BOND_LINK_DOWN);
+ commit++;
+ continue;
+ }
+@@ -2158,7 +2159,7 @@ static int bond_miimon_inspect(struct bonding *bond)
+ slave->delay = 0;
+
+ if (slave->delay <= 0) {
+- slave->new_link = BOND_LINK_UP;
++ bond_propose_link_state(slave, BOND_LINK_UP);
+ commit++;
+ ignore_updelay = false;
+ continue;
+@@ -2196,7 +2197,7 @@ static void bond_miimon_commit(struct bonding *bond)
+ struct slave *slave, *primary;
+
+ bond_for_each_slave(bond, slave, iter) {
+- switch (slave->new_link) {
++ switch (slave->link_new_state) {
+ case BOND_LINK_NOCHANGE:
+ /* For 802.3ad mode, check current slave speed and
+ * duplex again in case its port was disabled after
+@@ -2268,8 +2269,8 @@ static void bond_miimon_commit(struct bonding *bond)
+
+ default:
+ slave_err(bond->dev, slave->dev, "invalid new link %d on slave\n",
+- slave->new_link);
+- slave->new_link = BOND_LINK_NOCHANGE;
++ slave->link_new_state);
++ bond_propose_link_state(slave, BOND_LINK_NOCHANGE);
+
+ continue;
+ }
+@@ -2677,13 +2678,13 @@ static void bond_loadbalance_arp_mon(struct bonding *bond)
+ bond_for_each_slave_rcu(bond, slave, iter) {
+ unsigned long trans_start = dev_trans_start(slave->dev);
+
+- slave->new_link = BOND_LINK_NOCHANGE;
++ bond_propose_link_state(slave, BOND_LINK_NOCHANGE);
+
+ if (slave->link != BOND_LINK_UP) {
+ if (bond_time_in_interval(bond, trans_start, 1) &&
+ bond_time_in_interval(bond, slave->last_rx, 1)) {
+
+- slave->new_link = BOND_LINK_UP;
++ bond_propose_link_state(slave, BOND_LINK_UP);
+ slave_state_changed = 1;
+
+ /* primary_slave has no meaning in round-robin
+@@ -2708,7 +2709,7 @@ static void bond_loadbalance_arp_mon(struct bonding *bond)
+ if (!bond_time_in_interval(bond, trans_start, 2) ||
+ !bond_time_in_interval(bond, slave->last_rx, 2)) {
+
+- slave->new_link = BOND_LINK_DOWN;
++ bond_propose_link_state(slave, BOND_LINK_DOWN);
+ slave_state_changed = 1;
+
+ if (slave->link_failure_count < UINT_MAX)
+@@ -2739,8 +2740,8 @@ static void bond_loadbalance_arp_mon(struct bonding *bond)
+ goto re_arm;
+
+ bond_for_each_slave(bond, slave, iter) {
+- if (slave->new_link != BOND_LINK_NOCHANGE)
+- slave->link = slave->new_link;
++ if (slave->link_new_state != BOND_LINK_NOCHANGE)
++ slave->link = slave->link_new_state;
+ }
+
+ if (slave_state_changed) {
+@@ -2763,9 +2764,9 @@ re_arm:
+ }
+
+ /* Called to inspect slaves for active-backup mode ARP monitor link state
+- * changes. Sets new_link in slaves to specify what action should take
+- * place for the slave. Returns 0 if no changes are found, >0 if changes
+- * to link states must be committed.
++ * changes. Sets proposed link state in slaves to specify what action
++ * should take place for the slave. Returns 0 if no changes are found, >0
++ * if changes to link states must be committed.
+ *
+ * Called with rcu_read_lock held.
+ */
+@@ -2777,12 +2778,12 @@ static int bond_ab_arp_inspect(struct bonding *bond)
+ int commit = 0;
+
+ bond_for_each_slave_rcu(bond, slave, iter) {
+- slave->new_link = BOND_LINK_NOCHANGE;
++ bond_propose_link_state(slave, BOND_LINK_NOCHANGE);
+ last_rx = slave_last_rx(bond, slave);
+
+ if (slave->link != BOND_LINK_UP) {
+ if (bond_time_in_interval(bond, last_rx, 1)) {
+- slave->new_link = BOND_LINK_UP;
++ bond_propose_link_state(slave, BOND_LINK_UP);
+ commit++;
+ }
+ continue;
+@@ -2810,7 +2811,7 @@ static int bond_ab_arp_inspect(struct bonding *bond)
+ if (!bond_is_active_slave(slave) &&
+ !rcu_access_pointer(bond->current_arp_slave) &&
+ !bond_time_in_interval(bond, last_rx, 3)) {
+- slave->new_link = BOND_LINK_DOWN;
++ bond_propose_link_state(slave, BOND_LINK_DOWN);
+ commit++;
+ }
+
+@@ -2823,7 +2824,7 @@ static int bond_ab_arp_inspect(struct bonding *bond)
+ if (bond_is_active_slave(slave) &&
+ (!bond_time_in_interval(bond, trans_start, 2) ||
+ !bond_time_in_interval(bond, last_rx, 2))) {
+- slave->new_link = BOND_LINK_DOWN;
++ bond_propose_link_state(slave, BOND_LINK_DOWN);
+ commit++;
+ }
+ }
+@@ -2843,7 +2844,7 @@ static void bond_ab_arp_commit(struct bonding *bond)
+ struct slave *slave;
+
+ bond_for_each_slave(bond, slave, iter) {
+- switch (slave->new_link) {
++ switch (slave->link_new_state) {
+ case BOND_LINK_NOCHANGE:
+ continue;
+
+@@ -2893,8 +2894,9 @@ static void bond_ab_arp_commit(struct bonding *bond)
+ continue;
+
+ default:
+- slave_err(bond->dev, slave->dev, "impossible: new_link %d on slave\n",
+- slave->new_link);
++ slave_err(bond->dev, slave->dev,
++ "impossible: link_new_state %d on slave\n",
++ slave->link_new_state);
+ continue;
+ }
+
+@@ -3457,7 +3459,7 @@ static void bond_get_stats(struct net_device *bond_dev,
+ struct list_head *iter;
+ struct slave *slave;
+
+- spin_lock_nested(&bond->stats_lock, bond_get_nest_level(bond_dev));
++ spin_lock(&bond->stats_lock);
+ memcpy(stats, &bond->bond_stats, sizeof(*stats));
+
+ rcu_read_lock();
+@@ -4296,7 +4298,6 @@ void bond_setup(struct net_device *bond_dev)
+ struct bonding *bond = netdev_priv(bond_dev);
+
+ spin_lock_init(&bond->mode_lock);
+- spin_lock_init(&bond->stats_lock);
+ bond->params = bonding_defaults;
+
+ /* Initialize pointers */
+@@ -4365,6 +4366,7 @@ static void bond_uninit(struct net_device *bond_dev)
+
+ list_del(&bond->bond_list);
+
++ lockdep_unregister_key(&bond->stats_lock_key);
+ bond_debug_unregister(bond);
+ }
+
+@@ -4771,6 +4773,10 @@ static int bond_init(struct net_device *bond_dev)
+ bond->nest_level = SINGLE_DEPTH_NESTING;
+ netdev_lockdep_set_classes(bond_dev);
+
++ spin_lock_init(&bond->stats_lock);
++ lockdep_register_key(&bond->stats_lock_key);
++ lockdep_set_class(&bond->stats_lock, &bond->stats_lock_key);
++
+ list_add_tail(&bond->bond_list, &bn->dev_list);
+
+ bond_prepare_sysfs_group(bond);
+diff --git a/drivers/net/can/c_can/c_can.c b/drivers/net/can/c_can/c_can.c
+index 606b7d8ffe13..9b61bfbea6cd 100644
+--- a/drivers/net/can/c_can/c_can.c
++++ b/drivers/net/can/c_can/c_can.c
+@@ -97,6 +97,9 @@
+ #define BTR_TSEG2_SHIFT 12
+ #define BTR_TSEG2_MASK (0x7 << BTR_TSEG2_SHIFT)
+
++/* interrupt register */
++#define INT_STS_PENDING 0x8000
++
+ /* brp extension register */
+ #define BRP_EXT_BRPE_MASK 0x0f
+ #define BRP_EXT_BRPE_SHIFT 0
+@@ -1029,10 +1032,16 @@ static int c_can_poll(struct napi_struct *napi, int quota)
+ u16 curr, last = priv->last_status;
+ int work_done = 0;
+
+- priv->last_status = curr = priv->read_reg(priv, C_CAN_STS_REG);
+- /* Ack status on C_CAN. D_CAN is self clearing */
+- if (priv->type != BOSCH_D_CAN)
+- priv->write_reg(priv, C_CAN_STS_REG, LEC_UNUSED);
++ /* Only read the status register if a status interrupt was pending */
++ if (atomic_xchg(&priv->sie_pending, 0)) {
++ priv->last_status = curr = priv->read_reg(priv, C_CAN_STS_REG);
++ /* Ack status on C_CAN. D_CAN is self clearing */
++ if (priv->type != BOSCH_D_CAN)
++ priv->write_reg(priv, C_CAN_STS_REG, LEC_UNUSED);
++ } else {
++ /* no change detected ... */
++ curr = last;
++ }
+
+ /* handle state changes */
+ if ((curr & STATUS_EWARN) && (!(last & STATUS_EWARN))) {
+@@ -1083,10 +1092,16 @@ static irqreturn_t c_can_isr(int irq, void *dev_id)
+ {
+ struct net_device *dev = (struct net_device *)dev_id;
+ struct c_can_priv *priv = netdev_priv(dev);
++ int reg_int;
+
+- if (!priv->read_reg(priv, C_CAN_INT_REG))
++ reg_int = priv->read_reg(priv, C_CAN_INT_REG);
++ if (!reg_int)
+ return IRQ_NONE;
+
++ /* save for later use */
++ if (reg_int & INT_STS_PENDING)
++ atomic_set(&priv->sie_pending, 1);
++
+ /* disable all interrupts and schedule the NAPI */
+ c_can_irq_control(priv, false);
+ napi_schedule(&priv->napi);
+diff --git a/drivers/net/can/c_can/c_can.h b/drivers/net/can/c_can/c_can.h
+index 8acdc7fa4792..d5567a7c1c6d 100644
+--- a/drivers/net/can/c_can/c_can.h
++++ b/drivers/net/can/c_can/c_can.h
+@@ -198,6 +198,7 @@ struct c_can_priv {
+ struct net_device *dev;
+ struct device *device;
+ atomic_t tx_active;
++ atomic_t sie_pending;
+ unsigned long tx_dir;
+ int last_status;
+ u16 (*read_reg) (const struct c_can_priv *priv, enum reg index);
+diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c
+index 483d270664cc..99fa712b48b3 100644
+--- a/drivers/net/can/dev.c
++++ b/drivers/net/can/dev.c
+@@ -842,6 +842,7 @@ void of_can_transceiver(struct net_device *dev)
+ return;
+
+ ret = of_property_read_u32(dn, "max-bitrate", &priv->bitrate_max);
++ of_node_put(dn);
+ if ((ret && ret != -EINVAL) || (!ret && !priv->bitrate_max))
+ netdev_warn(dev, "Invalid value for transceiver max bitrate. Ignoring bitrate limit.\n");
+ }
+diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c
+index fcec8bcb53d6..56fa98d7aa90 100644
+--- a/drivers/net/can/flexcan.c
++++ b/drivers/net/can/flexcan.c
+@@ -1169,6 +1169,7 @@ static int flexcan_chip_start(struct net_device *dev)
+ reg_mecr = priv->read(&regs->mecr);
+ reg_mecr &= ~FLEXCAN_MECR_ECRWRDIS;
+ priv->write(reg_mecr, &regs->mecr);
++ reg_mecr |= FLEXCAN_MECR_ECCDIS;
+ reg_mecr &= ~(FLEXCAN_MECR_NCEFAFRZ | FLEXCAN_MECR_HANCEI_MSK |
+ FLEXCAN_MECR_FANCEI_MSK);
+ priv->write(reg_mecr, &regs->mecr);
+diff --git a/drivers/net/can/rx-offload.c b/drivers/net/can/rx-offload.c
+index e6a668ee7730..663697439d1c 100644
+--- a/drivers/net/can/rx-offload.c
++++ b/drivers/net/can/rx-offload.c
+@@ -207,8 +207,10 @@ int can_rx_offload_queue_sorted(struct can_rx_offload *offload,
+ unsigned long flags;
+
+ if (skb_queue_len(&offload->skb_queue) >
+- offload->skb_queue_len_max)
+- return -ENOMEM;
++ offload->skb_queue_len_max) {
++ kfree_skb(skb);
++ return -ENOBUFS;
++ }
+
+ cb = can_rx_offload_get_cb(skb);
+ cb->timestamp = timestamp;
+diff --git a/drivers/net/can/usb/gs_usb.c b/drivers/net/can/usb/gs_usb.c
+index bd6eb9967630..2f74f6704c12 100644
+--- a/drivers/net/can/usb/gs_usb.c
++++ b/drivers/net/can/usb/gs_usb.c
+@@ -623,6 +623,7 @@ static int gs_can_open(struct net_device *netdev)
+ rc);
+
+ usb_unanchor_urb(urb);
++ usb_free_urb(urb);
+ break;
+ }
+
+diff --git a/drivers/net/can/usb/mcba_usb.c b/drivers/net/can/usb/mcba_usb.c
+index 19a702ac49e4..21faa2ec4632 100644
+--- a/drivers/net/can/usb/mcba_usb.c
++++ b/drivers/net/can/usb/mcba_usb.c
+@@ -876,9 +876,8 @@ static void mcba_usb_disconnect(struct usb_interface *intf)
+ netdev_info(priv->netdev, "device disconnected\n");
+
+ unregister_candev(priv->netdev);
+- free_candev(priv->netdev);
+-
+ mcba_urb_unlink(priv);
++ free_candev(priv->netdev);
+ }
+
+ static struct usb_driver mcba_usb_driver = {
+diff --git a/drivers/net/can/usb/peak_usb/pcan_usb.c b/drivers/net/can/usb/peak_usb/pcan_usb.c
+index 617da295b6c1..5a66c9f53aae 100644
+--- a/drivers/net/can/usb/peak_usb/pcan_usb.c
++++ b/drivers/net/can/usb/peak_usb/pcan_usb.c
+@@ -100,7 +100,7 @@ struct pcan_usb_msg_context {
+ u8 *end;
+ u8 rec_cnt;
+ u8 rec_idx;
+- u8 rec_data_idx;
++ u8 rec_ts_idx;
+ struct net_device *netdev;
+ struct pcan_usb *pdev;
+ };
+@@ -547,10 +547,15 @@ static int pcan_usb_decode_status(struct pcan_usb_msg_context *mc,
+ mc->ptr += PCAN_USB_CMD_ARGS;
+
+ if (status_len & PCAN_USB_STATUSLEN_TIMESTAMP) {
+- int err = pcan_usb_decode_ts(mc, !mc->rec_idx);
++ int err = pcan_usb_decode_ts(mc, !mc->rec_ts_idx);
+
+ if (err)
+ return err;
++
++ /* Next packet in the buffer will have a timestamp on a single
++ * byte
++ */
++ mc->rec_ts_idx++;
+ }
+
+ switch (f) {
+@@ -632,10 +637,13 @@ static int pcan_usb_decode_data(struct pcan_usb_msg_context *mc, u8 status_len)
+
+ cf->can_dlc = get_can_dlc(rec_len);
+
+- /* first data packet timestamp is a word */
+- if (pcan_usb_decode_ts(mc, !mc->rec_data_idx))
++ /* Only first packet timestamp is a word */
++ if (pcan_usb_decode_ts(mc, !mc->rec_ts_idx))
+ goto decode_failed;
+
++ /* Next packet in the buffer will have a timestamp on a single byte */
++ mc->rec_ts_idx++;
++
+ /* read data */
+ memset(cf->data, 0x0, sizeof(cf->data));
+ if (status_len & PCAN_USB_STATUSLEN_RTR) {
+@@ -688,7 +696,6 @@ static int pcan_usb_decode_msg(struct peak_usb_device *dev, u8 *ibuf, u32 lbuf)
+ /* handle normal can frames here */
+ } else {
+ err = pcan_usb_decode_data(&mc, sl);
+- mc.rec_data_idx++;
+ }
+ }
+
+diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_core.c b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
+index 65dce642b86b..0b7766b715fd 100644
+--- a/drivers/net/can/usb/peak_usb/pcan_usb_core.c
++++ b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
+@@ -750,7 +750,7 @@ static int peak_usb_create_dev(const struct peak_usb_adapter *peak_usb_adapter,
+ dev = netdev_priv(netdev);
+
+ /* allocate a buffer large enough to send commands */
+- dev->cmd_buf = kmalloc(PCAN_USB_MAX_CMD_LEN, GFP_KERNEL);
++ dev->cmd_buf = kzalloc(PCAN_USB_MAX_CMD_LEN, GFP_KERNEL);
+ if (!dev->cmd_buf) {
+ err = -ENOMEM;
+ goto lbl_free_candev;
+diff --git a/drivers/net/can/usb/usb_8dev.c b/drivers/net/can/usb/usb_8dev.c
+index d596a2ad7f78..8fa224b28218 100644
+--- a/drivers/net/can/usb/usb_8dev.c
++++ b/drivers/net/can/usb/usb_8dev.c
+@@ -996,9 +996,8 @@ static void usb_8dev_disconnect(struct usb_interface *intf)
+ netdev_info(priv->netdev, "device disconnected\n");
+
+ unregister_netdev(priv->netdev);
+- free_candev(priv->netdev);
+-
+ unlink_all_urbs(priv);
++ free_candev(priv->netdev);
+ }
+
+ }
+diff --git a/drivers/net/ethernet/arc/emac_rockchip.c b/drivers/net/ethernet/arc/emac_rockchip.c
+index 42d2e1b02c44..664d664e0925 100644
+--- a/drivers/net/ethernet/arc/emac_rockchip.c
++++ b/drivers/net/ethernet/arc/emac_rockchip.c
+@@ -256,6 +256,9 @@ static int emac_rockchip_remove(struct platform_device *pdev)
+ if (priv->regulator)
+ regulator_disable(priv->regulator);
+
++ if (priv->soc_data->need_div_macclk)
++ clk_disable_unprepare(priv->macclk);
++
+ free_netdev(ndev);
+ return err;
+ }
+diff --git a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
+index 0e5de88fd6e8..cdd7e5da4a74 100644
+--- a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
++++ b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
+@@ -1499,7 +1499,7 @@ static int octeon_mgmt_probe(struct platform_device *pdev)
+ netdev->ethtool_ops = &octeon_mgmt_ethtool_ops;
+
+ netdev->min_mtu = 64 - OCTEON_MGMT_RX_HEADROOM;
+- netdev->max_mtu = 16383 - OCTEON_MGMT_RX_HEADROOM;
++ netdev->max_mtu = 16383 - OCTEON_MGMT_RX_HEADROOM - VLAN_HLEN;
+
+ mac = of_get_mac_address(pdev->dev.of_node);
+
+diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
+index 59564ac99d2a..edec61dfc868 100644
+--- a/drivers/net/ethernet/google/gve/gve_rx.c
++++ b/drivers/net/ethernet/google/gve/gve_rx.c
+@@ -289,6 +289,8 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc,
+
+ len = be16_to_cpu(rx_desc->len) - GVE_RX_PAD;
+ page_info = &rx->data.page_info[idx];
++ dma_sync_single_for_cpu(&priv->pdev->dev, rx->data.qpl->page_buses[idx],
++ PAGE_SIZE, DMA_FROM_DEVICE);
+
+ /* gvnic can only receive into registered segments. If the buffer
+ * can't be recycled, our only choice is to copy the data out of
+diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
+index 778b87b5a06c..0a9a7ee2a866 100644
+--- a/drivers/net/ethernet/google/gve/gve_tx.c
++++ b/drivers/net/ethernet/google/gve/gve_tx.c
+@@ -390,7 +390,21 @@ static void gve_tx_fill_seg_desc(union gve_tx_desc *seg_desc,
+ seg_desc->seg.seg_addr = cpu_to_be64(addr);
+ }
+
+-static int gve_tx_add_skb(struct gve_tx_ring *tx, struct sk_buff *skb)
++static void gve_dma_sync_for_device(struct device *dev, dma_addr_t *page_buses,
++ u64 iov_offset, u64 iov_len)
++{
++ dma_addr_t dma;
++ u64 addr;
++
++ for (addr = iov_offset; addr < iov_offset + iov_len;
++ addr += PAGE_SIZE) {
++ dma = page_buses[addr / PAGE_SIZE];
++ dma_sync_single_for_device(dev, dma, PAGE_SIZE, DMA_TO_DEVICE);
++ }
++}
++
++static int gve_tx_add_skb(struct gve_tx_ring *tx, struct sk_buff *skb,
++ struct device *dev)
+ {
+ int pad_bytes, hlen, hdr_nfrags, payload_nfrags, l4_hdr_offset;
+ union gve_tx_desc *pkt_desc, *seg_desc;
+@@ -432,6 +446,9 @@ static int gve_tx_add_skb(struct gve_tx_ring *tx, struct sk_buff *skb)
+ skb_copy_bits(skb, 0,
+ tx->tx_fifo.base + info->iov[hdr_nfrags - 1].iov_offset,
+ hlen);
++ gve_dma_sync_for_device(dev, tx->tx_fifo.qpl->page_buses,
++ info->iov[hdr_nfrags - 1].iov_offset,
++ info->iov[hdr_nfrags - 1].iov_len);
+ copy_offset = hlen;
+
+ for (i = payload_iov; i < payload_nfrags + payload_iov; i++) {
+@@ -445,6 +462,9 @@ static int gve_tx_add_skb(struct gve_tx_ring *tx, struct sk_buff *skb)
+ skb_copy_bits(skb, copy_offset,
+ tx->tx_fifo.base + info->iov[i].iov_offset,
+ info->iov[i].iov_len);
++ gve_dma_sync_for_device(dev, tx->tx_fifo.qpl->page_buses,
++ info->iov[i].iov_offset,
++ info->iov[i].iov_len);
+ copy_offset += info->iov[i].iov_len;
+ }
+
+@@ -473,7 +493,7 @@ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev)
+ gve_tx_put_doorbell(priv, tx->q_resources, tx->req);
+ return NETDEV_TX_BUSY;
+ }
+- nsegs = gve_tx_add_skb(tx, skb);
++ nsegs = gve_tx_add_skb(tx, skb, &priv->pdev->dev);
+
+ netdev_tx_sent_queue(tx->netdev_txq, skb->len);
+ skb_tx_timestamp(skb);
+diff --git a/drivers/net/ethernet/hisilicon/hip04_eth.c b/drivers/net/ethernet/hisilicon/hip04_eth.c
+index f51bc0255556..4606a7e4a6d1 100644
+--- a/drivers/net/ethernet/hisilicon/hip04_eth.c
++++ b/drivers/net/ethernet/hisilicon/hip04_eth.c
+@@ -1041,7 +1041,6 @@ static int hip04_remove(struct platform_device *pdev)
+
+ hip04_free_ring(ndev, d);
+ unregister_netdev(ndev);
+- free_irq(ndev->irq, ndev);
+ of_node_put(priv->phy_node);
+ cancel_work_sync(&priv->tx_timeout_task);
+ free_netdev(ndev);
+diff --git a/drivers/net/ethernet/hisilicon/hns/hnae.c b/drivers/net/ethernet/hisilicon/hns/hnae.c
+index 6d0457eb4faa..08339278c722 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hnae.c
++++ b/drivers/net/ethernet/hisilicon/hns/hnae.c
+@@ -199,7 +199,6 @@ hnae_init_ring(struct hnae_queue *q, struct hnae_ring *ring, int flags)
+
+ ring->q = q;
+ ring->flags = flags;
+- spin_lock_init(&ring->lock);
+ ring->coal_param = q->handle->coal_param;
+ assert(!ring->desc && !ring->desc_cb && !ring->desc_dma_addr);
+
+diff --git a/drivers/net/ethernet/hisilicon/hns/hnae.h b/drivers/net/ethernet/hisilicon/hns/hnae.h
+index e9c67c06bfd2..6ab9458302e1 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hnae.h
++++ b/drivers/net/ethernet/hisilicon/hns/hnae.h
+@@ -274,9 +274,6 @@ struct hnae_ring {
+ /* statistic */
+ struct ring_stats stats;
+
+- /* ring lock for poll one */
+- spinlock_t lock;
+-
+ dma_addr_t desc_dma_addr;
+ u32 buf_size; /* size for hnae_desc->addr, preset by AE */
+ u16 desc_num; /* total number of desc */
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+index 2235dd55fab2..56e8d4dee0e0 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+@@ -943,15 +943,6 @@ static int is_valid_clean_head(struct hnae_ring *ring, int h)
+ return u > c ? (h > c && h <= u) : (h > c || h <= u);
+ }
+
+-/* netif_tx_lock will turn down the performance, set only when necessary */
+-#ifdef CONFIG_NET_POLL_CONTROLLER
+-#define NETIF_TX_LOCK(ring) spin_lock(&(ring)->lock)
+-#define NETIF_TX_UNLOCK(ring) spin_unlock(&(ring)->lock)
+-#else
+-#define NETIF_TX_LOCK(ring)
+-#define NETIF_TX_UNLOCK(ring)
+-#endif
+-
+ /* reclaim all desc in one budget
+ * return error or number of desc left
+ */
+@@ -965,21 +956,16 @@ static int hns_nic_tx_poll_one(struct hns_nic_ring_data *ring_data,
+ int head;
+ int bytes, pkts;
+
+- NETIF_TX_LOCK(ring);
+-
+ head = readl_relaxed(ring->io_base + RCB_REG_HEAD);
+ rmb(); /* make sure head is ready before touch any data */
+
+- if (is_ring_empty(ring) || head == ring->next_to_clean) {
+- NETIF_TX_UNLOCK(ring);
++ if (is_ring_empty(ring) || head == ring->next_to_clean)
+ return 0; /* no data to poll */
+- }
+
+ if (!is_valid_clean_head(ring, head)) {
+ netdev_err(ndev, "wrong head (%d, %d-%d)\n", head,
+ ring->next_to_use, ring->next_to_clean);
+ ring->stats.io_err_cnt++;
+- NETIF_TX_UNLOCK(ring);
+ return -EIO;
+ }
+
+@@ -994,8 +980,6 @@ static int hns_nic_tx_poll_one(struct hns_nic_ring_data *ring_data,
+ ring->stats.tx_pkts += pkts;
+ ring->stats.tx_bytes += bytes;
+
+- NETIF_TX_UNLOCK(ring);
+-
+ dev_queue = netdev_get_tx_queue(ndev, ring_data->queue_index);
+ netdev_tx_completed_queue(dev_queue, pkts, bytes);
+
+@@ -1055,16 +1039,12 @@ static void hns_nic_tx_clr_all_bufs(struct hns_nic_ring_data *ring_data)
+ int head;
+ int bytes, pkts;
+
+- NETIF_TX_LOCK(ring);
+-
+ head = ring->next_to_use; /* ntu :soft setted ring position*/
+ bytes = 0;
+ pkts = 0;
+ while (head != ring->next_to_clean)
+ hns_nic_reclaim_one_desc(ring, &bytes, &pkts);
+
+- NETIF_TX_UNLOCK(ring);
+-
+ dev_queue = netdev_get_tx_queue(ndev, ring_data->queue_index);
+ netdev_tx_reset_queue(dev_queue);
+ }
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 964e7d62f4b1..5ef7704cd98d 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -1723,6 +1723,86 @@ static int ibmvnic_set_mac(struct net_device *netdev, void *p)
+ return rc;
+ }
+
++/**
++ * do_change_param_reset returns zero if we are able to keep processing reset
++ * events, or non-zero if we hit a fatal error and must halt.
++ */
++static int do_change_param_reset(struct ibmvnic_adapter *adapter,
++ struct ibmvnic_rwi *rwi,
++ u32 reset_state)
++{
++ struct net_device *netdev = adapter->netdev;
++ int i, rc;
++
++ netdev_dbg(adapter->netdev, "Change param resetting driver (%d)\n",
++ rwi->reset_reason);
++
++ netif_carrier_off(netdev);
++ adapter->reset_reason = rwi->reset_reason;
++
++ ibmvnic_cleanup(netdev);
++
++ if (reset_state == VNIC_OPEN) {
++ rc = __ibmvnic_close(netdev);
++ if (rc)
++ return rc;
++ }
++
++ release_resources(adapter);
++ release_sub_crqs(adapter, 1);
++ release_crq_queue(adapter);
++
++ adapter->state = VNIC_PROBED;
++
++ rc = init_crq_queue(adapter);
++
++ if (rc) {
++ netdev_err(adapter->netdev,
++ "Couldn't initialize crq. rc=%d\n", rc);
++ return rc;
++ }
++
++ rc = ibmvnic_reset_init(adapter);
++ if (rc)
++ return IBMVNIC_INIT_FAILED;
++
++ /* If the adapter was in PROBE state prior to the reset,
++ * exit here.
++ */
++ if (reset_state == VNIC_PROBED)
++ return 0;
++
++ rc = ibmvnic_login(netdev);
++ if (rc) {
++ adapter->state = reset_state;
++ return rc;
++ }
++
++ rc = init_resources(adapter);
++ if (rc)
++ return rc;
++
++ ibmvnic_disable_irqs(adapter);
++
++ adapter->state = VNIC_CLOSED;
++
++ if (reset_state == VNIC_CLOSED)
++ return 0;
++
++ rc = __ibmvnic_open(netdev);
++ if (rc)
++ return IBMVNIC_OPEN_FAILED;
++
++ /* refresh device's multicast list */
++ ibmvnic_set_multi(netdev);
++
++ /* kick napi */
++ for (i = 0; i < adapter->req_rx_queues; i++)
++ napi_schedule(&adapter->napi[i]);
++
++ return 0;
++}
++
+ /**
+ * do_reset returns zero if we are able to keep processing reset events, or
+ * non-zero if we hit a fatal error and must halt.
+@@ -1738,6 +1818,8 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+ netdev_dbg(adapter->netdev, "Re-setting driver (%d)\n",
+ rwi->reset_reason);
+
++ rtnl_lock();
++
+ netif_carrier_off(netdev);
+ adapter->reset_reason = rwi->reset_reason;
+
+@@ -1751,16 +1833,25 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+ if (reset_state == VNIC_OPEN &&
+ adapter->reset_reason != VNIC_RESET_MOBILITY &&
+ adapter->reset_reason != VNIC_RESET_FAILOVER) {
+- rc = __ibmvnic_close(netdev);
++ adapter->state = VNIC_CLOSING;
++
++ /* Release the RTNL lock before link state change and
++ * re-acquire after the link state change to allow
++ * linkwatch_event to grab the RTNL lock and run during
++ * a reset.
++ */
++ rtnl_unlock();
++ rc = set_link_state(adapter, IBMVNIC_LOGICAL_LNK_DN);
++ rtnl_lock();
+ if (rc)
+- return rc;
+- }
++ goto out;
+
+- if (adapter->reset_reason == VNIC_RESET_CHANGE_PARAM ||
+- adapter->wait_for_reset) {
+- release_resources(adapter);
+- release_sub_crqs(adapter, 1);
+- release_crq_queue(adapter);
++ if (adapter->state != VNIC_CLOSING) {
++ rc = -1;
++ goto out;
++ }
++
++ adapter->state = VNIC_CLOSED;
+ }
+
+ if (adapter->reset_reason != VNIC_RESET_NON_FATAL) {
+@@ -1769,9 +1860,7 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+ */
+ adapter->state = VNIC_PROBED;
+
+- if (adapter->wait_for_reset) {
+- rc = init_crq_queue(adapter);
+- } else if (adapter->reset_reason == VNIC_RESET_MOBILITY) {
++ if (adapter->reset_reason == VNIC_RESET_MOBILITY) {
+ rc = ibmvnic_reenable_crq_queue(adapter);
+ release_sub_crqs(adapter, 1);
+ } else {
+@@ -1783,36 +1872,35 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+ if (rc) {
+ netdev_err(adapter->netdev,
+ "Couldn't initialize crq. rc=%d\n", rc);
+- return rc;
++ goto out;
+ }
+
+ rc = ibmvnic_reset_init(adapter);
+- if (rc)
+- return IBMVNIC_INIT_FAILED;
++ if (rc) {
++ rc = IBMVNIC_INIT_FAILED;
++ goto out;
++ }
+
+ /* If the adapter was in PROBE state prior to the reset,
+ * exit here.
+ */
+- if (reset_state == VNIC_PROBED)
+- return 0;
++ if (reset_state == VNIC_PROBED) {
++ rc = 0;
++ goto out;
++ }
+
+ rc = ibmvnic_login(netdev);
+ if (rc) {
+ adapter->state = reset_state;
+- return rc;
++ goto out;
+ }
+
+- if (adapter->reset_reason == VNIC_RESET_CHANGE_PARAM ||
+- adapter->wait_for_reset) {
+- rc = init_resources(adapter);
+- if (rc)
+- return rc;
+- } else if (adapter->req_rx_queues != old_num_rx_queues ||
+- adapter->req_tx_queues != old_num_tx_queues ||
+- adapter->req_rx_add_entries_per_subcrq !=
+- old_num_rx_slots ||
+- adapter->req_tx_entries_per_subcrq !=
+- old_num_tx_slots) {
++ if (adapter->req_rx_queues != old_num_rx_queues ||
++ adapter->req_tx_queues != old_num_tx_queues ||
++ adapter->req_rx_add_entries_per_subcrq !=
++ old_num_rx_slots ||
++ adapter->req_tx_entries_per_subcrq !=
++ old_num_tx_slots) {
+ release_rx_pools(adapter);
+ release_tx_pools(adapter);
+ release_napi(adapter);
+@@ -1820,32 +1908,30 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+
+ rc = init_resources(adapter);
+ if (rc)
+- return rc;
++ goto out;
+
+ } else {
+ rc = reset_tx_pools(adapter);
+ if (rc)
+- return rc;
++ goto out;
+
+ rc = reset_rx_pools(adapter);
+ if (rc)
+- return rc;
++ goto out;
+ }
+ ibmvnic_disable_irqs(adapter);
+ }
+ adapter->state = VNIC_CLOSED;
+
+- if (reset_state == VNIC_CLOSED)
+- return 0;
++ if (reset_state == VNIC_CLOSED) {
++ rc = 0;
++ goto out;
++ }
+
+ rc = __ibmvnic_open(netdev);
+ if (rc) {
+- if (list_empty(&adapter->rwi_list))
+- adapter->state = VNIC_CLOSED;
+- else
+- adapter->state = reset_state;
+-
+- return 0;
++ rc = IBMVNIC_OPEN_FAILED;
++ goto out;
+ }
+
+ /* refresh device's multicast list */
+@@ -1855,11 +1941,15 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+ for (i = 0; i < adapter->req_rx_queues; i++)
+ napi_schedule(&adapter->napi[i]);
+
+- if (adapter->reset_reason != VNIC_RESET_FAILOVER &&
+- adapter->reset_reason != VNIC_RESET_CHANGE_PARAM)
++ if (adapter->reset_reason != VNIC_RESET_FAILOVER)
+ call_netdevice_notifiers(NETDEV_NOTIFY_PEERS, netdev);
+
+- return 0;
++ rc = 0;
++
++out:
++ rtnl_unlock();
++
++ return rc;
+ }
+
+ static int do_hard_reset(struct ibmvnic_adapter *adapter,
+@@ -1919,14 +2009,8 @@ static int do_hard_reset(struct ibmvnic_adapter *adapter,
+ return 0;
+
+ rc = __ibmvnic_open(netdev);
+- if (rc) {
+- if (list_empty(&adapter->rwi_list))
+- adapter->state = VNIC_CLOSED;
+- else
+- adapter->state = reset_state;
+-
+- return 0;
+- }
++ if (rc)
++ return IBMVNIC_OPEN_FAILED;
+
+ return 0;
+ }
+@@ -1965,20 +2049,11 @@ static void __ibmvnic_reset(struct work_struct *work)
+ {
+ struct ibmvnic_rwi *rwi;
+ struct ibmvnic_adapter *adapter;
+- bool we_lock_rtnl = false;
+ u32 reset_state;
+ int rc = 0;
+
+ adapter = container_of(work, struct ibmvnic_adapter, ibmvnic_reset);
+
+- /* netif_set_real_num_xx_queues needs to take rtnl lock here
+- * unless wait_for_reset is set, in which case the rtnl lock
+- * has already been taken before initializing the reset
+- */
+- if (!adapter->wait_for_reset) {
+- rtnl_lock();
+- we_lock_rtnl = true;
+- }
+ reset_state = adapter->state;
+
+ rwi = get_next_rwi(adapter);
+@@ -1990,14 +2065,32 @@ static void __ibmvnic_reset(struct work_struct *work)
+ break;
+ }
+
+- if (adapter->force_reset_recovery) {
+- adapter->force_reset_recovery = false;
+- rc = do_hard_reset(adapter, rwi, reset_state);
++ if (rwi->reset_reason == VNIC_RESET_CHANGE_PARAM) {
++ /* CHANGE_PARAM requestor holds rtnl_lock */
++ rc = do_change_param_reset(adapter, rwi, reset_state);
++ } else if (adapter->force_reset_recovery) {
++ /* Transport event occurred during previous reset */
++ if (adapter->wait_for_reset) {
++ /* Previous was CHANGE_PARAM; caller locked */
++ adapter->force_reset_recovery = false;
++ rc = do_hard_reset(adapter, rwi, reset_state);
++ } else {
++ rtnl_lock();
++ adapter->force_reset_recovery = false;
++ rc = do_hard_reset(adapter, rwi, reset_state);
++ rtnl_unlock();
++ }
+ } else {
+ rc = do_reset(adapter, rwi, reset_state);
+ }
+ kfree(rwi);
+- if (rc && rc != IBMVNIC_INIT_FAILED &&
++ if (rc == IBMVNIC_OPEN_FAILED) {
++ if (list_empty(&adapter->rwi_list))
++ adapter->state = VNIC_CLOSED;
++ else
++ adapter->state = reset_state;
++ rc = 0;
++ } else if (rc && rc != IBMVNIC_INIT_FAILED &&
+ !adapter->force_reset_recovery)
+ break;
+
+@@ -2005,7 +2098,6 @@ static void __ibmvnic_reset(struct work_struct *work)
+ }
+
+ if (adapter->wait_for_reset) {
+- adapter->wait_for_reset = false;
+ adapter->reset_done_rc = rc;
+ complete(&adapter->reset_done);
+ }
+@@ -2016,8 +2108,6 @@ static void __ibmvnic_reset(struct work_struct *work)
+ }
+
+ adapter->resetting = false;
+- if (we_lock_rtnl)
+- rtnl_unlock();
+ }
+
+ static int ibmvnic_reset(struct ibmvnic_adapter *adapter,
+@@ -2078,8 +2168,6 @@ static int ibmvnic_reset(struct ibmvnic_adapter *adapter,
+
+ return 0;
+ err:
+- if (adapter->wait_for_reset)
+- adapter->wait_for_reset = false;
+ return -ret;
+ }
+
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
+index 70bd286f8932..9d3d35cc91d6 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.h
++++ b/drivers/net/ethernet/ibm/ibmvnic.h
+@@ -20,6 +20,7 @@
+ #define IBMVNIC_INVALID_MAP -1
+ #define IBMVNIC_STATS_TIMEOUT 1
+ #define IBMVNIC_INIT_FAILED 2
++#define IBMVNIC_OPEN_FAILED 3
+
+ /* basic structures plus 100 2k buffers */
+ #define IBMVNIC_IO_ENTITLEMENT_DEFAULT 610305
+diff --git a/drivers/net/ethernet/intel/e1000/e1000_ethtool.c b/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
+index a41008523c98..2e07ffa87e34 100644
+--- a/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
++++ b/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
+@@ -607,6 +607,7 @@ static int e1000_set_ringparam(struct net_device *netdev,
+ for (i = 0; i < adapter->num_rx_queues; i++)
+ rxdr[i].count = rxdr->count;
+
++ err = 0;
+ if (netif_running(adapter->netdev)) {
+ /* Try to get new resources before deleting old */
+ err = e1000_setup_all_rx_resources(adapter);
+@@ -627,14 +628,13 @@ static int e1000_set_ringparam(struct net_device *netdev,
+ adapter->rx_ring = rxdr;
+ adapter->tx_ring = txdr;
+ err = e1000_up(adapter);
+- if (err)
+- goto err_setup;
+ }
+ kfree(tx_old);
+ kfree(rx_old);
+
+ clear_bit(__E1000_RESETTING, &adapter->flags);
+- return 0;
++ return err;
++
+ err_setup_tx:
+ e1000_free_all_rx_resources(adapter);
+ err_setup_rx:
+@@ -646,7 +646,6 @@ err_alloc_rx:
+ err_alloc_tx:
+ if (netif_running(adapter->netdev))
+ e1000_up(adapter);
+-err_setup:
+ clear_bit(__E1000_RESETTING, &adapter->flags);
+ return err;
+ }
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index b4df3e319467..93a1352f5be9 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -2064,7 +2064,8 @@ static void igb_check_swap_media(struct igb_adapter *adapter)
+ if ((hw->phy.media_type == e1000_media_type_copper) &&
+ (!(connsw & E1000_CONNSW_AUTOSENSE_EN))) {
+ swap_now = true;
+- } else if (!(connsw & E1000_CONNSW_SERDESD)) {
++ } else if ((hw->phy.media_type != e1000_media_type_copper) &&
++ !(connsw & E1000_CONNSW_SERDESD)) {
+ /* copper signal takes time to appear */
+ if (adapter->copper_tries < 4) {
+ adapter->copper_tries++;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
+index b7298f9ee3d3..c4c128908b6e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
+@@ -86,7 +86,7 @@ struct sk_buff *mlx5e_ktls_handle_tx_skb(struct net_device *netdev,
+ struct mlx5e_tx_wqe **wqe, u16 *pi);
+ void mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq,
+ struct mlx5e_tx_wqe_info *wi,
+- struct mlx5e_sq_dma *dma);
++ u32 *dma_fifo_cc);
+
+ #else
+
+@@ -94,6 +94,11 @@ static inline void mlx5e_ktls_build_netdev(struct mlx5e_priv *priv)
+ {
+ }
+
++static inline void
++mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq,
++ struct mlx5e_tx_wqe_info *wi,
++ u32 *dma_fifo_cc) {}
++
+ #endif
+
+ #endif /* __MLX5E_TLS_H__ */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+index 7833ddef0427..002245bb6b28 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+@@ -304,9 +304,16 @@ tx_post_resync_dump(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+
+ void mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq,
+ struct mlx5e_tx_wqe_info *wi,
+- struct mlx5e_sq_dma *dma)
++ u32 *dma_fifo_cc)
+ {
+- struct mlx5e_sq_stats *stats = sq->stats;
++ struct mlx5e_sq_stats *stats;
++ struct mlx5e_sq_dma *dma;
++
++ if (!wi->resync_dump_frag)
++ return;
++
++ dma = mlx5e_dma_get(sq, (*dma_fifo_cc)++);
++ stats = sq->stats;
+
+ mlx5e_tx_dma_unmap(sq->pdev, dma);
+ __skb_frag_unref(wi->resync_dump_frag);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 9d5f6e56188f..f3a2970c3fcf 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -1347,9 +1347,13 @@ static void mlx5e_deactivate_txqsq(struct mlx5e_txqsq *sq)
+ /* last doorbell out, godspeed .. */
+ if (mlx5e_wqc_has_room_for(wq, sq->cc, sq->pc, 1)) {
+ u16 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
++ struct mlx5e_tx_wqe_info *wi;
+ struct mlx5e_tx_wqe *nop;
+
+- sq->db.wqe_info[pi].skb = NULL;
++ wi = &sq->db.wqe_info[pi];
++
++ memset(wi, 0, sizeof(*wi));
++ wi->num_wqebbs = 1;
+ nop = mlx5e_post_nop(wq, sq->sqn, &sq->pc);
+ mlx5e_notify_hw(wq, sq->pc, sq->uar_map, &nop->ctrl);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+index 600e92cb629a..d5d2b1af3dbc 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+@@ -404,7 +404,10 @@ netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev)
+ static void mlx5e_dump_error_cqe(struct mlx5e_txqsq *sq,
+ struct mlx5_err_cqe *err_cqe)
+ {
+- u32 ci = mlx5_cqwq_get_ci(&sq->cq.wq);
++ struct mlx5_cqwq *wq = &sq->cq.wq;
++ u32 ci;
++
++ ci = mlx5_cqwq_ctr2ix(wq, wq->cc - 1);
+
+ netdev_err(sq->channel->netdev,
+ "Error cqe on cqn 0x%x, ci 0x%x, sqn 0x%x, opcode 0x%x, syndrome 0x%x, vendor syndrome 0x%x\n",
+@@ -480,14 +483,7 @@ bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget)
+ skb = wi->skb;
+
+ if (unlikely(!skb)) {
+-#ifdef CONFIG_MLX5_EN_TLS
+- if (wi->resync_dump_frag) {
+- struct mlx5e_sq_dma *dma =
+- mlx5e_dma_get(sq, dma_fifo_cc++);
+-
+- mlx5e_ktls_tx_handle_resync_dump_comp(sq, wi, dma);
+- }
+-#endif
++ mlx5e_ktls_tx_handle_resync_dump_comp(sq, wi, &dma_fifo_cc);
+ sqcc += wi->num_wqebbs;
+ continue;
+ }
+@@ -543,29 +539,38 @@ void mlx5e_free_txqsq_descs(struct mlx5e_txqsq *sq)
+ {
+ struct mlx5e_tx_wqe_info *wi;
+ struct sk_buff *skb;
++ u32 dma_fifo_cc;
++ u16 sqcc;
+ u16 ci;
+ int i;
+
+- while (sq->cc != sq->pc) {
+- ci = mlx5_wq_cyc_ctr2ix(&sq->wq, sq->cc);
++ sqcc = sq->cc;
++ dma_fifo_cc = sq->dma_fifo_cc;
++
++ while (sqcc != sq->pc) {
++ ci = mlx5_wq_cyc_ctr2ix(&sq->wq, sqcc);
+ wi = &sq->db.wqe_info[ci];
+ skb = wi->skb;
+
+- if (!skb) { /* nop */
+- sq->cc++;
++ if (!skb) {
++ mlx5e_ktls_tx_handle_resync_dump_comp(sq, wi, &dma_fifo_cc);
++ sqcc += wi->num_wqebbs;
+ continue;
+ }
+
+ for (i = 0; i < wi->num_dma; i++) {
+ struct mlx5e_sq_dma *dma =
+- mlx5e_dma_get(sq, sq->dma_fifo_cc++);
++ mlx5e_dma_get(sq, dma_fifo_cc++);
+
+ mlx5e_tx_dma_unmap(sq->pdev, dma);
+ }
+
+ dev_kfree_skb_any(skb);
+- sq->cc += wi->num_wqebbs;
++ sqcc += wi->num_wqebbs;
+ }
++
++ sq->dma_fifo_cc = dma_fifo_cc;
++ sq->cc = sqcc;
+ }
+
+ #ifdef CONFIG_MLX5_CORE_IPOIB
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c b/drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c
+index 4c50efe4e7f1..61021133029e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c
+@@ -464,8 +464,10 @@ static int mlx5_fpga_conn_create_cq(struct mlx5_fpga_conn *conn, int cq_size)
+ }
+
+ err = mlx5_vector2eqn(mdev, smp_processor_id(), &eqn, &irqn);
+- if (err)
++ if (err) {
++ kvfree(in);
+ goto err_cqwq;
++ }
+
+ cqc = MLX5_ADDR_OF(create_cq_in, in, cq_context);
+ MLX5_SET(cqc, cqc, log_cq_size, ilog2(cq_size));
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+index d685122d9ff7..c07f3154437c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+@@ -572,7 +572,7 @@ mlx5_fw_fatal_reporter_dump(struct devlink_health_reporter *reporter,
+ return -ENOMEM;
+ err = mlx5_crdump_collect(dev, cr_data);
+ if (err)
+- return err;
++ goto free_data;
+
+ if (priv_ctx) {
+ struct mlx5_fw_reporter_ctx *fw_reporter_ctx = priv_ctx;
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index 6932e615d4b0..7ffe5959a7e7 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -260,8 +260,15 @@ static int ocelot_vlan_vid_add(struct net_device *dev, u16 vid, bool pvid,
+ port->pvid = vid;
+
+ /* Untagged egress vlan clasification */
+- if (untagged)
++ if (untagged && port->vid != vid) {
++ if (port->vid) {
++ dev_err(ocelot->dev,
++ "Port already has a native VLAN: %d\n",
++ port->vid);
++ return -EBUSY;
++ }
+ port->vid = vid;
++ }
+
+ ocelot_vlan_port_apply(ocelot, port);
+
+@@ -877,7 +884,7 @@ end:
+ static int ocelot_vlan_rx_add_vid(struct net_device *dev, __be16 proto,
+ u16 vid)
+ {
+- return ocelot_vlan_vid_add(dev, vid, false, true);
++ return ocelot_vlan_vid_add(dev, vid, false, false);
+ }
+
+ static int ocelot_vlan_rx_kill_vid(struct net_device *dev, __be16 proto,
+@@ -1499,9 +1506,6 @@ static int ocelot_netdevice_port_event(struct net_device *dev,
+ struct ocelot_port *ocelot_port = netdev_priv(dev);
+ int err = 0;
+
+- if (!ocelot_netdevice_dev_check(dev))
+- return 0;
+-
+ switch (event) {
+ case NETDEV_CHANGEUPPER:
+ if (netif_is_bridge_master(info->upper_dev)) {
+@@ -1538,12 +1542,16 @@ static int ocelot_netdevice_event(struct notifier_block *unused,
+ struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+ int ret = 0;
+
++ if (!ocelot_netdevice_dev_check(dev))
++ return 0;
++
+ if (event == NETDEV_PRECHANGEUPPER &&
+ netif_is_lag_master(info->upper_dev)) {
+ struct netdev_lag_upper_info *lag_upper_info = info->upper_info;
+ struct netlink_ext_ack *extack;
+
+- if (lag_upper_info->tx_type != NETDEV_LAG_TX_TYPE_HASH) {
++ if (lag_upper_info &&
++ lag_upper_info->tx_type != NETDEV_LAG_TX_TYPE_HASH) {
+ extack = netdev_notifier_info_to_extack(&info->info);
+ NL_SET_ERR_MSG_MOD(extack, "LAG device using unsupported Tx type");
+
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c
+index 8d1c208f778f..a220cc7c947a 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_main.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_main.c
+@@ -1208,8 +1208,16 @@ enum qede_remove_mode {
+ static void __qede_remove(struct pci_dev *pdev, enum qede_remove_mode mode)
+ {
+ struct net_device *ndev = pci_get_drvdata(pdev);
+- struct qede_dev *edev = netdev_priv(ndev);
+- struct qed_dev *cdev = edev->cdev;
++ struct qede_dev *edev;
++ struct qed_dev *cdev;
++
++ if (!ndev) {
++ dev_info(&pdev->dev, "Device has already been removed\n");
++ return;
++ }
++
++ edev = netdev_priv(ndev);
++ cdev = edev->cdev;
+
+ DP_INFO(edev, "Starting qede_remove\n");
+
+diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
+index 9c54b715228e..06de59521fc4 100644
+--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
++++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
+@@ -57,10 +57,10 @@ static int rmnet_unregister_real_device(struct net_device *real_dev,
+ if (port->nr_rmnet_devs)
+ return -EINVAL;
+
+- kfree(port);
+-
+ netdev_rx_handler_unregister(real_dev);
+
++ kfree(port);
++
+ /* release reference on real_dev */
+ dev_put(real_dev);
+
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 00c86c7dd42d..efb5b000489f 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -863,6 +863,9 @@ static void r8168g_mdio_write(struct rtl8169_private *tp, int reg, int value)
+
+ static int r8168g_mdio_read(struct rtl8169_private *tp, int reg)
+ {
++ if (reg == 0x1f)
++ return tp->ocp_base == OCP_STD_PHY_BASE ? 0 : tp->ocp_base >> 4;
++
+ if (tp->ocp_base != OCP_STD_PHY_BASE)
+ reg -= 0x10;
+
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index fe2d3029de5e..ed0e694a0855 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -2906,6 +2906,7 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
+ } else {
+ stmmac_set_desc_addr(priv, first, des);
+ tmp_pay_len = pay_len;
++ des += proto_hdr_len;
+ }
+
+ stmmac_tso_allocator(priv, des, tmp_pay_len, (nfrags == 0), queue);
+diff --git a/drivers/net/fjes/fjes_main.c b/drivers/net/fjes/fjes_main.c
+index bbbc1dcb6ab5..b517c1af9de0 100644
+--- a/drivers/net/fjes/fjes_main.c
++++ b/drivers/net/fjes/fjes_main.c
+@@ -1237,8 +1237,17 @@ static int fjes_probe(struct platform_device *plat_dev)
+ adapter->open_guard = false;
+
+ adapter->txrx_wq = alloc_workqueue(DRV_NAME "/txrx", WQ_MEM_RECLAIM, 0);
++ if (unlikely(!adapter->txrx_wq)) {
++ err = -ENOMEM;
++ goto err_free_netdev;
++ }
++
+ adapter->control_wq = alloc_workqueue(DRV_NAME "/control",
+ WQ_MEM_RECLAIM, 0);
++ if (unlikely(!adapter->control_wq)) {
++ err = -ENOMEM;
++ goto err_free_txrx_wq;
++ }
+
+ INIT_WORK(&adapter->tx_stall_task, fjes_tx_stall_task);
+ INIT_WORK(&adapter->raise_intr_rxdata_task,
+@@ -1255,7 +1264,7 @@ static int fjes_probe(struct platform_device *plat_dev)
+ hw->hw_res.irq = platform_get_irq(plat_dev, 0);
+ err = fjes_hw_init(&adapter->hw);
+ if (err)
+- goto err_free_netdev;
++ goto err_free_control_wq;
+
+ /* setup MAC address (02:00:00:00:00:[epid])*/
+ netdev->dev_addr[0] = 2;
+@@ -1277,6 +1286,10 @@ static int fjes_probe(struct platform_device *plat_dev)
+
+ err_hw_exit:
+ fjes_hw_exit(&adapter->hw);
++err_free_control_wq:
++ destroy_workqueue(adapter->control_wq);
++err_free_txrx_wq:
++ destroy_workqueue(adapter->txrx_wq);
+ err_free_netdev:
+ free_netdev(netdev);
+ err_out:
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index e8fce6d715ef..8ed79b418d88 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -982,7 +982,7 @@ static int netvsc_attach(struct net_device *ndev,
+ if (netif_running(ndev)) {
+ ret = rndis_filter_open(nvdev);
+ if (ret)
+- return ret;
++ goto err;
+
+ rdev = nvdev->extension;
+ if (!rdev->link_state)
+@@ -990,6 +990,13 @@ static int netvsc_attach(struct net_device *ndev,
+ }
+
+ return 0;
++
++err:
++ netif_device_detach(ndev);
++
++ rndis_filter_device_remove(hdev, nvdev);
++
++ return ret;
+ }
+
+ static int netvsc_set_channels(struct net_device *net,
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index cb7637364b40..1bd113b142ea 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -3001,12 +3001,10 @@ static const struct nla_policy macsec_rtnl_policy[IFLA_MACSEC_MAX + 1] = {
+ static void macsec_free_netdev(struct net_device *dev)
+ {
+ struct macsec_dev *macsec = macsec_priv(dev);
+- struct net_device *real_dev = macsec->real_dev;
+
+ free_percpu(macsec->stats);
+ free_percpu(macsec->secy.tx_sc.stats);
+
+- dev_put(real_dev);
+ }
+
+ static void macsec_setup(struct net_device *dev)
+@@ -3261,8 +3259,6 @@ static int macsec_newlink(struct net *net, struct net_device *dev,
+ if (err < 0)
+ return err;
+
+- dev_hold(real_dev);
+-
+ macsec->nest_level = dev_get_nest_level(real_dev) + 1;
+ netdev_lockdep_set_classes(dev);
+ lockdep_set_class_and_subclass(&dev->addr_list_lock,
+diff --git a/drivers/net/phy/smsc.c b/drivers/net/phy/smsc.c
+index dc3d92d340c4..b73298250793 100644
+--- a/drivers/net/phy/smsc.c
++++ b/drivers/net/phy/smsc.c
+@@ -327,6 +327,7 @@ static struct phy_driver smsc_phy_driver[] = {
+ .name = "SMSC LAN8740",
+
+ /* PHY_BASIC_FEATURES */
++ .flags = PHY_RST_AFTER_CLK_EN,
+
+ .probe = smsc_phy_probe,
+
+diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
+index 00cab3f43a4c..a245597a3902 100644
+--- a/drivers/net/usb/cdc_ncm.c
++++ b/drivers/net/usb/cdc_ncm.c
+@@ -578,8 +578,8 @@ static void cdc_ncm_set_dgram_size(struct usbnet *dev, int new_size)
+ /* read current mtu value from device */
+ err = usbnet_read_cmd(dev, USB_CDC_GET_MAX_DATAGRAM_SIZE,
+ USB_TYPE_CLASS | USB_DIR_IN | USB_RECIP_INTERFACE,
+- 0, iface_no, &max_datagram_size, 2);
+- if (err < 0) {
++ 0, iface_no, &max_datagram_size, sizeof(max_datagram_size));
++ if (err < sizeof(max_datagram_size)) {
+ dev_dbg(&dev->intf->dev, "GET_MAX_DATAGRAM_SIZE failed\n");
+ goto out;
+ }
+@@ -590,7 +590,7 @@ static void cdc_ncm_set_dgram_size(struct usbnet *dev, int new_size)
+ max_datagram_size = cpu_to_le16(ctx->max_datagram_size);
+ err = usbnet_write_cmd(dev, USB_CDC_SET_MAX_DATAGRAM_SIZE,
+ USB_TYPE_CLASS | USB_DIR_OUT | USB_RECIP_INTERFACE,
+- 0, iface_no, &max_datagram_size, 2);
++ 0, iface_no, &max_datagram_size, sizeof(max_datagram_size));
+ if (err < 0)
+ dev_dbg(&dev->intf->dev, "SET_MAX_DATAGRAM_SIZE failed\n");
+
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 3d77cd402ba9..ba682bba7851 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1361,6 +1361,7 @@ static const struct usb_device_id products[] = {
+ {QMI_FIXED_INTF(0x413c, 0x81b6, 8)}, /* Dell Wireless 5811e */
+ {QMI_FIXED_INTF(0x413c, 0x81b6, 10)}, /* Dell Wireless 5811e */
+ {QMI_FIXED_INTF(0x413c, 0x81d7, 0)}, /* Dell Wireless 5821e */
++ {QMI_FIXED_INTF(0x413c, 0x81e0, 0)}, /* Dell Wireless 5821e with eSIM support*/
+ {QMI_FIXED_INTF(0x03f0, 0x4e1d, 8)}, /* HP lt4111 LTE/EV-DO/HSPA+ Gobi 4G Module */
+ {QMI_FIXED_INTF(0x03f0, 0x9d1d, 1)}, /* HP lt4120 Snapdragon X5 LTE */
+ {QMI_FIXED_INTF(0x22de, 0x9061, 3)}, /* WeTelecom WPD-600N */
+diff --git a/drivers/net/wimax/i2400m/op-rfkill.c b/drivers/net/wimax/i2400m/op-rfkill.c
+index 8efb493ceec2..5c79f052cad2 100644
+--- a/drivers/net/wimax/i2400m/op-rfkill.c
++++ b/drivers/net/wimax/i2400m/op-rfkill.c
+@@ -127,12 +127,12 @@ int i2400m_op_rfkill_sw_toggle(struct wimax_dev *wimax_dev,
+ "%d\n", result);
+ result = 0;
+ error_cmd:
+- kfree(cmd);
+ kfree_skb(ack_skb);
+ error_msg_to_dev:
+ error_alloc:
+ d_fnend(4, dev, "(wimax_dev %p state %d) = %d\n",
+ wimax_dev, state, result);
++ kfree(cmd);
+ return result;
+ }
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index acbadfdbdd3f..2ee5c5dc78cb 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -573,20 +573,20 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0x2526, 0x0034, iwl9560_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x2526, 0x0038, iwl9560_2ac_160_cfg)},
+ {IWL_PCI_DEVICE(0x2526, 0x003C, iwl9560_2ac_160_cfg)},
+- {IWL_PCI_DEVICE(0x2526, 0x0060, iwl9460_2ac_cfg)},
+- {IWL_PCI_DEVICE(0x2526, 0x0064, iwl9460_2ac_cfg)},
+- {IWL_PCI_DEVICE(0x2526, 0x00A0, iwl9460_2ac_cfg)},
+- {IWL_PCI_DEVICE(0x2526, 0x00A4, iwl9460_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x2526, 0x0060, iwl9461_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x2526, 0x0064, iwl9461_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x2526, 0x00A0, iwl9462_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x2526, 0x00A4, iwl9462_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x2526, 0x0210, iwl9260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x2526, 0x0214, iwl9260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x2526, 0x0230, iwl9560_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x2526, 0x0234, iwl9560_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x2526, 0x0238, iwl9560_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x2526, 0x023C, iwl9560_2ac_cfg)},
+- {IWL_PCI_DEVICE(0x2526, 0x0260, iwl9460_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x2526, 0x0260, iwl9461_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x2526, 0x0264, iwl9461_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x2526, 0x02A0, iwl9460_2ac_cfg)},
+- {IWL_PCI_DEVICE(0x2526, 0x02A4, iwl9460_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x2526, 0x02A0, iwl9462_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x2526, 0x02A4, iwl9462_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x2526, 0x1010, iwl9260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x2526, 0x1030, iwl9560_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x2526, 0x1210, iwl9260_2ac_cfg)},
+@@ -603,7 +603,7 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0x2526, 0x401C, iwl9260_2ac_160_cfg)},
+ {IWL_PCI_DEVICE(0x2526, 0x4030, iwl9560_2ac_160_cfg)},
+ {IWL_PCI_DEVICE(0x2526, 0x4034, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x2526, 0x40A4, iwl9460_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x2526, 0x40A4, iwl9462_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x2526, 0x4234, iwl9560_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x2526, 0x42A4, iwl9462_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x2526, 0x6010, iwl9260_2ac_160_cfg)},
+@@ -618,60 +618,61 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0x271B, 0x0210, iwl9160_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x271B, 0x0214, iwl9260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x271C, 0x0214, iwl9260_2ac_cfg)},
+- {IWL_PCI_DEVICE(0x2720, 0x0034, iwl9560_2ac_160_cfg)},
+- {IWL_PCI_DEVICE(0x2720, 0x0038, iwl9560_2ac_160_cfg)},
+- {IWL_PCI_DEVICE(0x2720, 0x003C, iwl9560_2ac_160_cfg)},
+- {IWL_PCI_DEVICE(0x2720, 0x0060, iwl9461_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x2720, 0x0064, iwl9461_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x2720, 0x00A0, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x2720, 0x00A4, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x2720, 0x0230, iwl9560_2ac_cfg)},
+- {IWL_PCI_DEVICE(0x2720, 0x0234, iwl9560_2ac_cfg)},
+- {IWL_PCI_DEVICE(0x2720, 0x0238, iwl9560_2ac_cfg)},
+- {IWL_PCI_DEVICE(0x2720, 0x023C, iwl9560_2ac_cfg)},
+- {IWL_PCI_DEVICE(0x2720, 0x0260, iwl9461_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x2720, 0x0264, iwl9461_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x2720, 0x02A0, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x2720, 0x02A4, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x2720, 0x1010, iwl9260_2ac_cfg)},
+- {IWL_PCI_DEVICE(0x2720, 0x1030, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x2720, 0x1210, iwl9260_2ac_cfg)},
+- {IWL_PCI_DEVICE(0x2720, 0x1551, iwl9560_killer_s_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x2720, 0x1552, iwl9560_killer_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x2720, 0x2030, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x2720, 0x2034, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x2720, 0x4030, iwl9560_2ac_160_cfg)},
+- {IWL_PCI_DEVICE(0x2720, 0x4034, iwl9560_2ac_160_cfg_soc)},
+- {IWL_PCI_DEVICE(0x2720, 0x40A4, iwl9462_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x2720, 0x4234, iwl9560_2ac_cfg_soc)},
+- {IWL_PCI_DEVICE(0x2720, 0x42A4, iwl9462_2ac_cfg_soc)},
+-
+- {IWL_PCI_DEVICE(0x30DC, 0x0030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x0034, iwl9560_2ac_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x0038, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x003C, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x0060, iwl9461_2ac_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x0064, iwl9461_2ac_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x00A0, iwl9462_2ac_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x00A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x0230, iwl9560_2ac_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x0234, iwl9560_2ac_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x0238, iwl9560_2ac_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x023C, iwl9560_2ac_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x0260, iwl9461_2ac_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x0264, iwl9461_2ac_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x02A0, iwl9462_2ac_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x02A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x1030, iwl9560_2ac_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x1551, killer1550s_2ac_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x1552, killer1550i_2ac_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x2030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x2034, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x4030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x4034, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x40A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x4234, iwl9560_2ac_cfg_qu_b0_jf_b0)},
+- {IWL_PCI_DEVICE(0x30DC, 0x42A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++
++ {IWL_PCI_DEVICE(0x2720, 0x0034, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x0038, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x003C, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x0060, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x0064, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x00A0, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x00A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x0230, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x0234, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x0238, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x023C, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x0260, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x0264, iwl9461_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x02A0, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x02A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x1030, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x1551, killer1550s_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x1552, killer1550i_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x2030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x2034, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x4030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x4034, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x40A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x4234, iwl9560_2ac_cfg_qu_b0_jf_b0)},
++ {IWL_PCI_DEVICE(0x2720, 0x42A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
++
++ {IWL_PCI_DEVICE(0x30DC, 0x0030, iwl9560_2ac_160_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x0034, iwl9560_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x0038, iwl9560_2ac_160_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x003C, iwl9560_2ac_160_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x0060, iwl9460_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x0064, iwl9461_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x00A0, iwl9462_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x00A4, iwl9462_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x0230, iwl9560_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x0234, iwl9560_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x0238, iwl9560_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x023C, iwl9560_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x0260, iwl9461_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x0264, iwl9461_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x02A0, iwl9462_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x02A4, iwl9462_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x1010, iwl9260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x30DC, 0x1030, iwl9560_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x1210, iwl9260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x30DC, 0x1551, iwl9560_killer_s_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x1552, iwl9560_killer_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x2030, iwl9560_2ac_160_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x2034, iwl9560_2ac_160_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x4030, iwl9560_2ac_160_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x4034, iwl9560_2ac_160_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x40A4, iwl9462_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x4234, iwl9560_2ac_cfg_soc)},
++ {IWL_PCI_DEVICE(0x30DC, 0x42A4, iwl9462_2ac_cfg_soc)},
+
+ {IWL_PCI_DEVICE(0x31DC, 0x0030, iwl9560_2ac_160_cfg_shared_clk)},
+ {IWL_PCI_DEVICE(0x31DC, 0x0034, iwl9560_2ac_cfg_shared_clk)},
+diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
+index d8f61e540bfd..ed744cd19819 100644
+--- a/drivers/net/wireless/mediatek/mt76/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/dma.c
+@@ -64,8 +64,10 @@ mt76_dma_add_buf(struct mt76_dev *dev, struct mt76_queue *q,
+ u32 ctrl;
+ int i, idx = -1;
+
+- if (txwi)
++ if (txwi) {
+ q->entry[q->head].txwi = DMA_DUMMY_DATA;
++ q->entry[q->head].skip_buf0 = true;
++ }
+
+ for (i = 0; i < nbufs; i += 2, buf += 2) {
+ u32 buf0 = buf[0].addr, buf1 = 0;
+@@ -108,7 +110,7 @@ mt76_dma_tx_cleanup_idx(struct mt76_dev *dev, struct mt76_queue *q, int idx,
+ __le32 __ctrl = READ_ONCE(q->desc[idx].ctrl);
+ u32 ctrl = le32_to_cpu(__ctrl);
+
+- if (!e->txwi || !e->skb) {
++ if (!e->skip_buf0) {
+ __le32 addr = READ_ONCE(q->desc[idx].buf0);
+ u32 len = FIELD_GET(MT_DMA_CTL_SD_LEN0, ctrl);
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index 989386ecb5e4..e98859ab480b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -102,8 +102,9 @@ struct mt76_queue_entry {
+ struct urb *urb;
+ };
+ enum mt76_txq_id qid;
+- bool schedule;
+- bool done;
++ bool skip_buf0:1;
++ bool schedule:1;
++ bool done:1;
+ };
+
+ struct mt76_queue_regs {
+diff --git a/drivers/net/wireless/virt_wifi.c b/drivers/net/wireless/virt_wifi.c
+index be92e1220284..7997cc6de334 100644
+--- a/drivers/net/wireless/virt_wifi.c
++++ b/drivers/net/wireless/virt_wifi.c
+@@ -548,6 +548,7 @@ static int virt_wifi_newlink(struct net *src_net, struct net_device *dev,
+ priv->is_connected = false;
+ priv->is_up = false;
+ INIT_DELAYED_WORK(&priv->connect, virt_wifi_connect_complete);
++ __module_get(THIS_MODULE);
+
+ return 0;
+ unregister_netdev:
+@@ -578,6 +579,7 @@ static void virt_wifi_dellink(struct net_device *dev,
+ netdev_upper_dev_unlink(priv->lowerdev, dev);
+
+ unregister_netdevice_queue(dev, head);
++ module_put(THIS_MODULE);
+
+ /* Deleting the wiphy is handled in the module destructor. */
+ }
+@@ -590,6 +592,42 @@ static struct rtnl_link_ops virt_wifi_link_ops = {
+ .priv_size = sizeof(struct virt_wifi_netdev_priv),
+ };
+
++static bool netif_is_virt_wifi_dev(const struct net_device *dev)
++{
++ return rcu_access_pointer(dev->rx_handler) == virt_wifi_rx_handler;
++}
++
++static int virt_wifi_event(struct notifier_block *this, unsigned long event,
++ void *ptr)
++{
++ struct net_device *lower_dev = netdev_notifier_info_to_dev(ptr);
++ struct virt_wifi_netdev_priv *priv;
++ struct net_device *upper_dev;
++ LIST_HEAD(list_kill);
++
++ if (!netif_is_virt_wifi_dev(lower_dev))
++ return NOTIFY_DONE;
++
++ switch (event) {
++ case NETDEV_UNREGISTER:
++ priv = rtnl_dereference(lower_dev->rx_handler_data);
++ if (!priv)
++ return NOTIFY_DONE;
++
++ upper_dev = priv->upperdev;
++
++ upper_dev->rtnl_link_ops->dellink(upper_dev, &list_kill);
++ unregister_netdevice_many(&list_kill);
++ break;
++ }
++
++ return NOTIFY_DONE;
++}
++
++static struct notifier_block virt_wifi_notifier = {
++ .notifier_call = virt_wifi_event,
++};
++
+ /* Acquires and releases the rtnl lock. */
+ static int __init virt_wifi_init_module(void)
+ {
+@@ -598,14 +636,25 @@ static int __init virt_wifi_init_module(void)
+ /* Guaranteed to be locally-administered and not multicast. */
+ eth_random_addr(fake_router_bssid);
+
++ err = register_netdevice_notifier(&virt_wifi_notifier);
++ if (err)
++ return err;
++
++ err = -ENOMEM;
+ common_wiphy = virt_wifi_make_wiphy();
+ if (!common_wiphy)
+- return -ENOMEM;
++ goto notifier;
+
+ err = rtnl_link_register(&virt_wifi_link_ops);
+ if (err)
+- virt_wifi_destroy_wiphy(common_wiphy);
++ goto destroy_wiphy;
+
++ return 0;
++
++destroy_wiphy:
++ virt_wifi_destroy_wiphy(common_wiphy);
++notifier:
++ unregister_netdevice_notifier(&virt_wifi_notifier);
+ return err;
+ }
+
+@@ -615,6 +664,7 @@ static void __exit virt_wifi_cleanup_module(void)
+ /* Will delete any devices that depend on the wiphy. */
+ rtnl_link_unregister(&virt_wifi_link_ops);
+ virt_wifi_destroy_wiphy(common_wiphy);
++ unregister_netdevice_notifier(&virt_wifi_notifier);
+ }
+
+ module_init(virt_wifi_init_module);
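
The virt_wifi changes pair __module_get()/module_put() with the netdev lifetime and rework the init path into goto-based, reverse-order unwinding. A small userspace sketch of that unwind pattern, with stand-in functions rather than the kernel APIs (every name below is invented):

/* Illustrative analog of the init-path unwinding added to virt_wifi:
 * register resources in order, undo them in strict reverse order on
 * failure. None of these functions are real kernel APIs. */
#include <stdio.h>

static int register_notifier(void)    { puts("notifier on");  return 0; }
static void unregister_notifier(void) { puts("notifier off"); }
static int make_wiphy(void)           { puts("wiphy made");   return 0; }
static void destroy_wiphy(void)       { puts("wiphy gone");   }
static int link_register(void)        { puts("link fail");    return -1; }

static int init_module_like(void)
{
    int err;

    err = register_notifier();
    if (err)
        return err;

    err = make_wiphy();
    if (err)
        goto out_notifier;

    err = link_register();
    if (err)
        goto out_wiphy;

    return 0;

out_wiphy:                  /* undo in reverse order of setup */
    destroy_wiphy();
out_notifier:
    unregister_notifier();
    return err;
}

int main(void)
{
    return init_module_like() ? 1 : 0;
}
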
+diff --git a/drivers/nfc/fdp/i2c.c b/drivers/nfc/fdp/i2c.c
+index 1cd113c8d7cb..ad0abb1f0bae 100644
+--- a/drivers/nfc/fdp/i2c.c
++++ b/drivers/nfc/fdp/i2c.c
+@@ -259,7 +259,7 @@ static void fdp_nci_i2c_read_device_properties(struct device *dev,
+ *fw_vsc_cfg, len);
+
+ if (r) {
+- devm_kfree(dev, fw_vsc_cfg);
++ devm_kfree(dev, *fw_vsc_cfg);
+ goto vsc_read_err;
+ }
+ } else {
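
The fdp/i2c one-liner fixes a classic out-parameter bug: the function receives u8 **fw_vsc_cfg and allocates into *fw_vsc_cfg, so the failure path must free *fw_vsc_cfg, not the pointer-to-pointer itself. A hedged sketch of the pitfall, using hypothetical names and plain malloc instead of devm allocation:

/* Sketch of the out-parameter pitfall: a helper that allocates into
 * *buf must free *buf on failure, never buf itself. */
#include <stdlib.h>
#include <string.h>

static int read_config(unsigned char **buf, size_t len)
{
    *buf = malloc(len);
    if (!*buf)
        return -1;

    memset(*buf, 0xA5, len);   /* pretend to read the property */

    if (len > 64) {            /* pretend validation failed */
        free(*buf);            /* correct: free the allocation */
        *buf = NULL;           /* free(buf) would corrupt the caller */
        return -1;
    }
    return 0;
}

int main(void)
{
    unsigned char *cfg = NULL;

    if (read_config(&cfg, 128) == 0)
        free(cfg);
    return 0;
}
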
+diff --git a/drivers/nfc/st21nfca/core.c b/drivers/nfc/st21nfca/core.c
+index f9ac176cf257..2ce17932a073 100644
+--- a/drivers/nfc/st21nfca/core.c
++++ b/drivers/nfc/st21nfca/core.c
+@@ -708,6 +708,7 @@ static int st21nfca_hci_complete_target_discovered(struct nfc_hci_dev *hdev,
+ NFC_PROTO_FELICA_MASK;
+ } else {
+ kfree_skb(nfcid_skb);
++ nfcid_skb = NULL;
+ /* P2P in type A */
+ r = nfc_hci_get_param(hdev, ST21NFCA_RF_READER_F_GATE,
+ ST21NFCA_RF_READER_F_NFCID1,
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 30de7efef003..d320684d25b2 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -715,7 +715,7 @@ int nvme_mpath_init(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
+ goto out;
+ }
+
+- error = nvme_read_ana_log(ctrl, true);
++ error = nvme_read_ana_log(ctrl, false);
+ if (error)
+ goto out_free_ana_log_buf;
+ return 0;
+diff --git a/drivers/pinctrl/intel/pinctrl-cherryview.c b/drivers/pinctrl/intel/pinctrl-cherryview.c
+index bf049d1bbb87..17a248b723b9 100644
+--- a/drivers/pinctrl/intel/pinctrl-cherryview.c
++++ b/drivers/pinctrl/intel/pinctrl-cherryview.c
+@@ -1584,7 +1584,7 @@ static int chv_gpio_probe(struct chv_pinctrl *pctrl, int irq)
+ intsel >>= CHV_PADCTRL0_INTSEL_SHIFT;
+
+ if (need_valid_mask && intsel >= community->nirqs)
+- clear_bit(i, chip->irq.valid_mask);
++ clear_bit(desc->number, chip->irq.valid_mask);
+ }
+
+ /*
+diff --git a/drivers/pinctrl/intel/pinctrl-intel.c b/drivers/pinctrl/intel/pinctrl-intel.c
+index 4323796cbe11..8fb6c9668c37 100644
+--- a/drivers/pinctrl/intel/pinctrl-intel.c
++++ b/drivers/pinctrl/intel/pinctrl-intel.c
+@@ -52,6 +52,7 @@
+ #define PADCFG0_GPIROUTNMI BIT(17)
+ #define PADCFG0_PMODE_SHIFT 10
+ #define PADCFG0_PMODE_MASK GENMASK(13, 10)
++#define PADCFG0_PMODE_GPIO 0
+ #define PADCFG0_GPIORXDIS BIT(9)
+ #define PADCFG0_GPIOTXDIS BIT(8)
+ #define PADCFG0_GPIORXSTATE BIT(1)
+@@ -307,7 +308,7 @@ static void intel_pin_dbg_show(struct pinctrl_dev *pctldev, struct seq_file *s,
+ cfg1 = readl(intel_get_padcfg(pctrl, pin, PADCFG1));
+
+ mode = (cfg0 & PADCFG0_PMODE_MASK) >> PADCFG0_PMODE_SHIFT;
+- if (!mode)
++ if (mode == PADCFG0_PMODE_GPIO)
+ seq_puts(s, "GPIO ");
+ else
+ seq_printf(s, "mode %d ", mode);
+@@ -428,6 +429,11 @@ static void __intel_gpio_set_direction(void __iomem *padcfg0, bool input)
+ writel(value, padcfg0);
+ }
+
++static int intel_gpio_get_gpio_mode(void __iomem *padcfg0)
++{
++ return (readl(padcfg0) & PADCFG0_PMODE_MASK) >> PADCFG0_PMODE_SHIFT;
++}
++
+ static void intel_gpio_set_gpio_mode(void __iomem *padcfg0)
+ {
+ u32 value;
+@@ -456,7 +462,20 @@ static int intel_gpio_request_enable(struct pinctrl_dev *pctldev,
+ }
+
+ padcfg0 = intel_get_padcfg(pctrl, pin, PADCFG0);
++
++ /*
++ * If the pin is already configured in GPIO mode, we assume that
++ * firmware provides the correct settings. In that case we avoid
++ * potential glitches on the pin. Otherwise, for a pin in an
++ * alternative mode, the consumer has to supply the respective flags.
++ */
++ if (intel_gpio_get_gpio_mode(padcfg0) == PADCFG0_PMODE_GPIO) {
++ raw_spin_unlock_irqrestore(&pctrl->lock, flags);
++ return 0;
++ }
++
+ intel_gpio_set_gpio_mode(padcfg0);
++
+ /* Disable TX buffer and enable RX (this will be input) */
+ __intel_gpio_set_direction(padcfg0, true);
+
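
The pinctrl-intel hunk reads the pad-mode field out of PADCFG0 and skips reconfiguration when the pad already reports GPIO mode. A rough userspace model of that mask-and-shift field extraction follows; the mask over bits 13:10 mirrors the patch, everything else is illustrative:

/* Illustrative field extraction; only the bit positions come from the
 * patch, the helpers and values are made up. */
#include <stdint.h>
#include <stdio.h>

#define GENMASK32(h, l)  (((~0u) >> (31 - (h))) & ((~0u) << (l)))
#define PMODE_SHIFT      10
#define PMODE_MASK       GENMASK32(13, 10)
#define PMODE_GPIO       0

static unsigned int pad_get_mode(uint32_t padcfg0)
{
    return (padcfg0 & PMODE_MASK) >> PMODE_SHIFT;
}

int main(void)
{
    uint32_t padcfg0 = 2u << PMODE_SHIFT;   /* pad muxed to function 2 */

    if (pad_get_mode(padcfg0) == PMODE_GPIO)
        puts("already GPIO, leave firmware settings untouched");
    else
        printf("mode %u, switch pad to GPIO\n", pad_get_mode(padcfg0));
    return 0;
}
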
+diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c
+index 59252bfca14e..41309ac65693 100644
+--- a/drivers/scsi/lpfc/lpfc_nportdisc.c
++++ b/drivers/scsi/lpfc/lpfc_nportdisc.c
+@@ -845,9 +845,9 @@ lpfc_disc_set_adisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+
+ if (!(vport->fc_flag & FC_PT2PT)) {
+ /* Check config parameter use-adisc or FCP-2 */
+- if ((vport->cfg_use_adisc && (vport->fc_flag & FC_RSCN_MODE)) ||
++ if (vport->cfg_use_adisc && ((vport->fc_flag & FC_RSCN_MODE) ||
+ ((ndlp->nlp_fcp_info & NLP_FCP_2_DEVICE) &&
+- (ndlp->nlp_type & NLP_FCP_TARGET))) {
++ (ndlp->nlp_type & NLP_FCP_TARGET)))) {
+ spin_lock_irq(shost->host_lock);
+ ndlp->nlp_flag |= NLP_NPR_ADISC;
+ spin_unlock_irq(shost->host_lock);
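
The lpfc change is purely about grouping: ADISC should be used when use-adisc is set AND (RSCN mode OR an FCP-2 target), not when (use-adisc AND RSCN) OR an FCP-2 target on its own. A tiny truth check with stand-in flag values makes the difference concrete:

/* Demonstrates how the parenthesization changes the result; the flag
 * names are simplified stand-ins for the vport/ndlp fields. */
#include <stdio.h>

int main(void)
{
    int use_adisc = 0, rscn = 0, fcp2_target = 1;

    int before = (use_adisc && rscn) || fcp2_target;   /* old grouping */
    int after  = use_adisc && (rscn || fcp2_target);   /* patched */

    /* before == 1, after == 0: the old code enabled ADISC even with
     * the config knob off, which is what the fix prevents. */
    printf("before=%d after=%d\n", before, after);
    return 0;
}
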
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index f9e6a135d656..c7027ecd4d19 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -7898,7 +7898,7 @@ lpfc_sli4_process_missed_mbox_completions(struct lpfc_hba *phba)
+ if (sli4_hba->hdwq) {
+ for (eqidx = 0; eqidx < phba->cfg_irq_chann; eqidx++) {
+ eq = phba->sli4_hba.hba_eq_hdl[eqidx].eq;
+- if (eq->queue_id == sli4_hba->mbx_cq->assoc_qid) {
++ if (eq && eq->queue_id == sli4_hba->mbx_cq->assoc_qid) {
+ fpeq = eq;
+ break;
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_bsg.c b/drivers/scsi/qla2xxx/qla_bsg.c
+index 5441557b424b..3084c2cff7bd 100644
+--- a/drivers/scsi/qla2xxx/qla_bsg.c
++++ b/drivers/scsi/qla2xxx/qla_bsg.c
+@@ -257,7 +257,7 @@ qla2x00_process_els(struct bsg_job *bsg_job)
+ srb_t *sp;
+ const char *type;
+ int req_sg_cnt, rsp_sg_cnt;
+- int rval = (DRIVER_ERROR << 16);
++ int rval = (DID_ERROR << 16);
+ uint16_t nextlid = 0;
+
+ if (bsg_request->msgcode == FC_BSG_RPT_ELS) {
+@@ -432,7 +432,7 @@ qla2x00_process_ct(struct bsg_job *bsg_job)
+ struct Scsi_Host *host = fc_bsg_to_shost(bsg_job);
+ scsi_qla_host_t *vha = shost_priv(host);
+ struct qla_hw_data *ha = vha->hw;
+- int rval = (DRIVER_ERROR << 16);
++ int rval = (DID_ERROR << 16);
+ int req_sg_cnt, rsp_sg_cnt;
+ uint16_t loop_id;
+ struct fc_port *fcport;
+@@ -1951,7 +1951,7 @@ qlafx00_mgmt_cmd(struct bsg_job *bsg_job)
+ struct Scsi_Host *host = fc_bsg_to_shost(bsg_job);
+ scsi_qla_host_t *vha = shost_priv(host);
+ struct qla_hw_data *ha = vha->hw;
+- int rval = (DRIVER_ERROR << 16);
++ int rval = (DID_ERROR << 16);
+ struct qla_mt_iocb_rqst_fx00 *piocb_rqst;
+ srb_t *sp;
+ int req_sg_cnt = 0, rsp_sg_cnt = 0;
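
Context for the DRIVER_ERROR to DID_ERROR change above: in the SCSI result word, bits 16..23 carry the host byte, so a host-level failure must be encoded as (DID_xxx << 16). The constant below matches the longstanding SCSI definition; the helper is illustrative:

#include <stdio.h>

#define DID_ERROR 0x07   /* internal host adapter error */

static unsigned int host_byte(unsigned int result)
{
    return (result >> 16) & 0xff;   /* host byte lives in bits 16..23 */
}

int main(void)
{
    unsigned int rval = DID_ERROR << 16;

    printf("host byte = 0x%02x\n", host_byte(rval));  /* prints 0x07 */
    return 0;
}
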
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index abfb9c800ce2..ac4640f45678 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -710,6 +710,7 @@ qla2x00_execute_fw(scsi_qla_host_t *vha, uint32_t risc_addr)
+ mcp->mb[2] = LSW(risc_addr);
+ mcp->mb[3] = 0;
+ mcp->mb[4] = 0;
++ mcp->mb[11] = 0;
+ ha->flags.using_lr_setting = 0;
+ if (IS_QLA25XX(ha) || IS_QLA81XX(ha) || IS_QLA83XX(ha) ||
+ IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
+@@ -754,7 +755,7 @@ qla2x00_execute_fw(scsi_qla_host_t *vha, uint32_t risc_addr)
+ if (ha->flags.exchoffld_enabled)
+ mcp->mb[4] |= ENABLE_EXCHANGE_OFFLD;
+
+- mcp->out_mb |= MBX_4|MBX_3|MBX_2|MBX_1;
++ mcp->out_mb |= MBX_4 | MBX_3 | MBX_2 | MBX_1 | MBX_11;
+ mcp->in_mb |= MBX_3 | MBX_2 | MBX_1;
+ } else {
+ mcp->mb[1] = LSW(risc_addr);
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 04cf6986eb8e..ac96771bb06d 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -3543,6 +3543,10 @@ qla2x00_shutdown(struct pci_dev *pdev)
+ qla2x00_try_to_stop_firmware(vha);
+ }
+
++ /* Disable timer */
++ if (vha->timer_active)
++ qla2x00_stop_timer(vha);
++
+ /* Turn adapter off line */
+ vha->flags.online = 0;
+
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 2d77f32e13d5..9dc367e2e742 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -1166,11 +1166,12 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd)
+ sector_t lba = sectors_to_logical(sdp, blk_rq_pos(rq));
+ sector_t threshold;
+ unsigned int nr_blocks = sectors_to_logical(sdp, blk_rq_sectors(rq));
+- bool dif, dix;
+ unsigned int mask = logical_to_sectors(sdp, 1) - 1;
+ bool write = rq_data_dir(rq) == WRITE;
+ unsigned char protect, fua;
+ blk_status_t ret;
++ unsigned int dif;
++ bool dix;
+
+ ret = scsi_init_io(cmd);
+ if (ret != BLK_STS_OK)
+diff --git a/drivers/scsi/ufs/ufs_bsg.c b/drivers/scsi/ufs/ufs_bsg.c
+index a9344eb4e047..dc2f6d2b46ed 100644
+--- a/drivers/scsi/ufs/ufs_bsg.c
++++ b/drivers/scsi/ufs/ufs_bsg.c
+@@ -98,6 +98,8 @@ static int ufs_bsg_request(struct bsg_job *job)
+
+ bsg_reply->reply_payload_rcv_len = 0;
+
++ pm_runtime_get_sync(hba->dev);
++
+ msgcode = bsg_request->msgcode;
+ switch (msgcode) {
+ case UPIU_TRANSACTION_QUERY_REQ:
+@@ -135,6 +137,8 @@ static int ufs_bsg_request(struct bsg_job *job)
+ break;
+ }
+
++ pm_runtime_put_sync(hba->dev);
++
+ if (!desc_buff)
+ goto out;
+
+diff --git a/drivers/soundwire/Kconfig b/drivers/soundwire/Kconfig
+index f518273cfbe3..c8c80df090d1 100644
+--- a/drivers/soundwire/Kconfig
++++ b/drivers/soundwire/Kconfig
+@@ -5,6 +5,7 @@
+
+ menuconfig SOUNDWIRE
+ tristate "SoundWire support"
++ depends on ACPI || OF
+ help
+ SoundWire is a 2-Pin interface with data and clock line ratified
+ by the MIPI Alliance. SoundWire is used for transporting data
+diff --git a/drivers/soundwire/bus.c b/drivers/soundwire/bus.c
+index fe745830a261..90b2127cc203 100644
+--- a/drivers/soundwire/bus.c
++++ b/drivers/soundwire/bus.c
+@@ -803,7 +803,7 @@ static int sdw_handle_port_interrupt(struct sdw_slave *slave,
+ static int sdw_handle_slave_alerts(struct sdw_slave *slave)
+ {
+ struct sdw_slave_intr_status slave_intr;
+- u8 clear = 0, bit, port_status[15];
++ u8 clear = 0, bit, port_status[15] = {0};
+ int port_num, stat, ret, count = 0;
+ unsigned long port;
+ bool slave_notify = false;
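
The soundwire fix relies on a C guarantee: when any initializer is present, the remaining aggregate elements are zeroed, so "port_status[15] = {0}" can never hand uninitialized stack bytes to the interrupt handler. A minimal check:

/* Verifies the zero-initialization guarantee the patch depends on. */
#include <stdio.h>

int main(void)
{
    unsigned char port_status[15] = {0};
    unsigned int sum = 0;

    for (int i = 0; i < 15; i++)
        sum += port_status[i];
    printf("sum = %u\n", sum);   /* always 0, never stack garbage */
    return 0;
}
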
+diff --git a/drivers/usb/core/config.c b/drivers/usb/core/config.c
+index 151a74a54386..1ac1095bfeac 100644
+--- a/drivers/usb/core/config.c
++++ b/drivers/usb/core/config.c
+@@ -348,6 +348,11 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno, int inum,
+
+ /* Validate the wMaxPacketSize field */
+ maxp = usb_endpoint_maxp(&endpoint->desc);
++ if (maxp == 0) {
++ dev_warn(ddev, "config %d interface %d altsetting %d endpoint 0x%X has wMaxPacketSize 0, skipping\n",
++ cfgno, inum, asnum, d->bEndpointAddress);
++ goto skip_to_next_endpoint_or_interface_descriptor;
++ }
+
+ /* Find the highest legal maxpacket size for this endpoint */
+ i = 0; /* additional transactions per microframe */
+diff --git a/drivers/usb/dwc3/Kconfig b/drivers/usb/dwc3/Kconfig
+index 89abc6078703..556a876c7896 100644
+--- a/drivers/usb/dwc3/Kconfig
++++ b/drivers/usb/dwc3/Kconfig
+@@ -102,6 +102,7 @@ config USB_DWC3_MESON_G12A
+ depends on ARCH_MESON || COMPILE_TEST
+ default USB_DWC3
+ select USB_ROLE_SWITCH
++ select REGMAP_MMIO
+ help
+ Support USB2/3 functionality in Amlogic G12A platforms.
+ Say 'Y' or 'M' if you have one such device.
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index c9bb93a2c81e..06d7e8612dfe 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -300,8 +300,7 @@ static void dwc3_frame_length_adjustment(struct dwc3 *dwc)
+
+ reg = dwc3_readl(dwc->regs, DWC3_GFLADJ);
+ dft = reg & DWC3_GFLADJ_30MHZ_MASK;
+- if (!dev_WARN_ONCE(dwc->dev, dft == dwc->fladj,
+- "request value same as default, ignoring\n")) {
++ if (dft != dwc->fladj) {
+ reg &= ~DWC3_GFLADJ_30MHZ_MASK;
+ reg |= DWC3_GFLADJ_30MHZ_SDBND_SEL | dwc->fladj;
+ dwc3_writel(dwc->regs, DWC3_GFLADJ, reg);
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index 5e8e18222f92..023f0357efd7 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -258,7 +258,7 @@ static int dwc3_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
+
+ ret = platform_device_add_properties(dwc->dwc3, p);
+ if (ret < 0)
+- return ret;
++ goto err;
+
+ ret = dwc3_pci_quirks(dwc);
+ if (ret)
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 173f5329d3d9..56bd6ae0c18f 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -707,6 +707,12 @@ static void dwc3_remove_requests(struct dwc3 *dwc, struct dwc3_ep *dep)
+
+ dwc3_gadget_giveback(dep, req, -ESHUTDOWN);
+ }
++
++ while (!list_empty(&dep->cancelled_list)) {
++ req = next_request(&dep->cancelled_list);
++
++ dwc3_gadget_giveback(dep, req, -ESHUTDOWN);
++ }
+ }
+
+ /**
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index 76883ff4f5bb..c8ae07cd6fbf 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -2156,14 +2156,18 @@ void composite_dev_cleanup(struct usb_composite_dev *cdev)
+ usb_ep_dequeue(cdev->gadget->ep0, cdev->os_desc_req);
+
+ kfree(cdev->os_desc_req->buf);
++ cdev->os_desc_req->buf = NULL;
+ usb_ep_free_request(cdev->gadget->ep0, cdev->os_desc_req);
++ cdev->os_desc_req = NULL;
+ }
+ if (cdev->req) {
+ if (cdev->setup_pending)
+ usb_ep_dequeue(cdev->gadget->ep0, cdev->req);
+
+ kfree(cdev->req->buf);
++ cdev->req->buf = NULL;
+ usb_ep_free_request(cdev->gadget->ep0, cdev->req);
++ cdev->req = NULL;
+ }
+ cdev->next_string_id = 0;
+ device_remove_file(&cdev->gadget->dev, &dev_attr_suspended);
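
The composite.c hunk pairs every kfree() with a NULL assignment so a later cleanup pass sees an empty slot instead of a stale pointer. A userspace sketch of the same free-and-NULL discipline (names invented):

#include <stdlib.h>

struct dev_state {
    char *req_buf;
};

static void cleanup(struct dev_state *st)
{
    /* Safe to call any number of times: free(NULL) is a no-op and the
     * pointer is cleared immediately after release. */
    free(st->req_buf);
    st->req_buf = NULL;
}

int main(void)
{
    struct dev_state st = { .req_buf = malloc(32) };

    cleanup(&st);
    cleanup(&st);   /* no double free thanks to the NULL reset */
    return 0;
}
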
+diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c
+index 025129942894..33852c2b29d1 100644
+--- a/drivers/usb/gadget/configfs.c
++++ b/drivers/usb/gadget/configfs.c
+@@ -61,6 +61,8 @@ struct gadget_info {
+ bool use_os_desc;
+ char b_vendor_code;
+ char qw_sign[OS_STRING_QW_SIGN_LEN];
++ spinlock_t spinlock;
++ bool unbind;
+ };
+
+ static inline struct gadget_info *to_gadget_info(struct config_item *item)
+@@ -1244,6 +1246,7 @@ static int configfs_composite_bind(struct usb_gadget *gadget,
+ int ret;
+
+ /* the gi->lock is held by the caller */
++ gi->unbind = 0;
+ cdev->gadget = gadget;
+ set_gadget_data(gadget, cdev);
+ ret = composite_dev_prepare(composite, cdev);
+@@ -1376,31 +1379,128 @@ static void configfs_composite_unbind(struct usb_gadget *gadget)
+ {
+ struct usb_composite_dev *cdev;
+ struct gadget_info *gi;
++ unsigned long flags;
+
+ /* the gi->lock is held by the caller */
+
+ cdev = get_gadget_data(gadget);
+ gi = container_of(cdev, struct gadget_info, cdev);
++ spin_lock_irqsave(&gi->spinlock, flags);
++ gi->unbind = 1;
++ spin_unlock_irqrestore(&gi->spinlock, flags);
+
+ kfree(otg_desc[0]);
+ otg_desc[0] = NULL;
+ purge_configs_funcs(gi);
+ composite_dev_cleanup(cdev);
+ usb_ep_autoconfig_reset(cdev->gadget);
++ spin_lock_irqsave(&gi->spinlock, flags);
+ cdev->gadget = NULL;
+ set_gadget_data(gadget, NULL);
++ spin_unlock_irqrestore(&gi->spinlock, flags);
++}
++
++static int configfs_composite_setup(struct usb_gadget *gadget,
++ const struct usb_ctrlrequest *ctrl)
++{
++ struct usb_composite_dev *cdev;
++ struct gadget_info *gi;
++ unsigned long flags;
++ int ret;
++
++ cdev = get_gadget_data(gadget);
++ if (!cdev)
++ return 0;
++
++ gi = container_of(cdev, struct gadget_info, cdev);
++ spin_lock_irqsave(&gi->spinlock, flags);
++ cdev = get_gadget_data(gadget);
++ if (!cdev || gi->unbind) {
++ spin_unlock_irqrestore(&gi->spinlock, flags);
++ return 0;
++ }
++
++ ret = composite_setup(gadget, ctrl);
++ spin_unlock_irqrestore(&gi->spinlock, flags);
++ return ret;
++}
++
++static void configfs_composite_disconnect(struct usb_gadget *gadget)
++{
++ struct usb_composite_dev *cdev;
++ struct gadget_info *gi;
++ unsigned long flags;
++
++ cdev = get_gadget_data(gadget);
++ if (!cdev)
++ return;
++
++ gi = container_of(cdev, struct gadget_info, cdev);
++ spin_lock_irqsave(&gi->spinlock, flags);
++ cdev = get_gadget_data(gadget);
++ if (!cdev || gi->unbind) {
++ spin_unlock_irqrestore(&gi->spinlock, flags);
++ return;
++ }
++
++ composite_disconnect(gadget);
++ spin_unlock_irqrestore(&gi->spinlock, flags);
++}
++
++static void configfs_composite_suspend(struct usb_gadget *gadget)
++{
++ struct usb_composite_dev *cdev;
++ struct gadget_info *gi;
++ unsigned long flags;
++
++ cdev = get_gadget_data(gadget);
++ if (!cdev)
++ return;
++
++ gi = container_of(cdev, struct gadget_info, cdev);
++ spin_lock_irqsave(&gi->spinlock, flags);
++ cdev = get_gadget_data(gadget);
++ if (!cdev || gi->unbind) {
++ spin_unlock_irqrestore(&gi->spinlock, flags);
++ return;
++ }
++
++ composite_suspend(gadget);
++ spin_unlock_irqrestore(&gi->spinlock, flags);
++}
++
++static void configfs_composite_resume(struct usb_gadget *gadget)
++{
++ struct usb_composite_dev *cdev;
++ struct gadget_info *gi;
++ unsigned long flags;
++
++ cdev = get_gadget_data(gadget);
++ if (!cdev)
++ return;
++
++ gi = container_of(cdev, struct gadget_info, cdev);
++ spin_lock_irqsave(&gi->spinlock, flags);
++ cdev = get_gadget_data(gadget);
++ if (!cdev || gi->unbind) {
++ spin_unlock_irqrestore(&gi->spinlock, flags);
++ return;
++ }
++
++ composite_resume(gadget);
++ spin_unlock_irqrestore(&gi->spinlock, flags);
+ }
+
+ static const struct usb_gadget_driver configfs_driver_template = {
+ .bind = configfs_composite_bind,
+ .unbind = configfs_composite_unbind,
+
+- .setup = composite_setup,
+- .reset = composite_disconnect,
+- .disconnect = composite_disconnect,
++ .setup = configfs_composite_setup,
++ .reset = configfs_composite_disconnect,
++ .disconnect = configfs_composite_disconnect,
+
+- .suspend = composite_suspend,
+- .resume = composite_resume,
++ .suspend = configfs_composite_suspend,
++ .resume = configfs_composite_resume,
+
+ .max_speed = USB_SPEED_SUPER,
+ .driver = {
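
Every wrapper the configfs gadget patch adds has the same shape: take the spinlock, re-check that the device is still bound, and bail out quietly if unbind raced in. A sketch of that guard with a pthread mutex standing in for the kernel spinlock (all names illustrative):

#include <pthread.h>
#include <stdio.h>

struct gadget_info_like {
    pthread_mutex_t lock;
    int unbind;     /* set by the unbind path under the lock */
    void *cdev;     /* cleared by the unbind path under the lock */
};

static void guarded_disconnect(struct gadget_info_like *gi)
{
    pthread_mutex_lock(&gi->lock);
    if (!gi->cdev || gi->unbind) {   /* raced with unbind: do nothing */
        pthread_mutex_unlock(&gi->lock);
        return;
    }
    puts("disconnect handled");      /* real code calls composite_disconnect() */
    pthread_mutex_unlock(&gi->lock);
}

int main(void)
{
    struct gadget_info_like gi = {
        .lock = PTHREAD_MUTEX_INITIALIZER, .unbind = 0, .cdev = &gi,
    };

    guarded_disconnect(&gi);   /* runs */
    gi.unbind = 1;
    guarded_disconnect(&gi);   /* skipped safely */
    return 0;
}
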
+diff --git a/drivers/usb/gadget/udc/atmel_usba_udc.c b/drivers/usb/gadget/udc/atmel_usba_udc.c
+index 503d275bc4c4..761e8a808857 100644
+--- a/drivers/usb/gadget/udc/atmel_usba_udc.c
++++ b/drivers/usb/gadget/udc/atmel_usba_udc.c
+@@ -448,9 +448,11 @@ static void submit_request(struct usba_ep *ep, struct usba_request *req)
+ next_fifo_transaction(ep, req);
+ if (req->last_transaction) {
+ usba_ep_writel(ep, CTL_DIS, USBA_TX_PK_RDY);
+- usba_ep_writel(ep, CTL_ENB, USBA_TX_COMPLETE);
++ if (ep_is_control(ep))
++ usba_ep_writel(ep, CTL_ENB, USBA_TX_COMPLETE);
+ } else {
+- usba_ep_writel(ep, CTL_DIS, USBA_TX_COMPLETE);
++ if (ep_is_control(ep))
++ usba_ep_writel(ep, CTL_DIS, USBA_TX_COMPLETE);
+ usba_ep_writel(ep, CTL_ENB, USBA_TX_PK_RDY);
+ }
+ }
+diff --git a/drivers/usb/gadget/udc/fsl_udc_core.c b/drivers/usb/gadget/udc/fsl_udc_core.c
+index 20141c3096f6..9a05863b2876 100644
+--- a/drivers/usb/gadget/udc/fsl_udc_core.c
++++ b/drivers/usb/gadget/udc/fsl_udc_core.c
+@@ -2576,7 +2576,7 @@ static int fsl_udc_remove(struct platform_device *pdev)
+ dma_pool_destroy(udc_controller->td_pool);
+ free_irq(udc_controller->irq, udc_controller);
+ iounmap(dr_regs);
+- if (pdata->operating_mode == FSL_USB2_DR_DEVICE)
++ if (res && (pdata->operating_mode == FSL_USB2_DR_DEVICE))
+ release_mem_region(res->start, resource_size(res));
+
+ /* free udc -- wait for release() to finish */
+diff --git a/drivers/usb/misc/ldusb.c b/drivers/usb/misc/ldusb.c
+index f5e34c503454..8f86b4ebca89 100644
+--- a/drivers/usb/misc/ldusb.c
++++ b/drivers/usb/misc/ldusb.c
+@@ -487,7 +487,7 @@ static ssize_t ld_usb_read(struct file *file, char __user *buffer, size_t count,
+ }
+ bytes_to_read = min(count, *actual_buffer);
+ if (bytes_to_read < *actual_buffer)
+- dev_warn(&dev->intf->dev, "Read buffer overflow, %zd bytes dropped\n",
++ dev_warn(&dev->intf->dev, "Read buffer overflow, %zu bytes dropped\n",
+ *actual_buffer-bytes_to_read);
+
+ /* copy one interrupt_in_buffer from ring_buffer into userspace */
+@@ -562,8 +562,9 @@ static ssize_t ld_usb_write(struct file *file, const char __user *buffer,
+ /* write the data into interrupt_out_buffer from userspace */
+ bytes_to_write = min(count, write_buffer_size*dev->interrupt_out_endpoint_size);
+ if (bytes_to_write < count)
+- dev_warn(&dev->intf->dev, "Write buffer overflow, %zd bytes dropped\n", count-bytes_to_write);
+- dev_dbg(&dev->intf->dev, "%s: count = %zd, bytes_to_write = %zd\n",
++ dev_warn(&dev->intf->dev, "Write buffer overflow, %zu bytes dropped\n",
++ count - bytes_to_write);
++ dev_dbg(&dev->intf->dev, "%s: count = %zu, bytes_to_write = %zu\n",
+ __func__, count, bytes_to_write);
+
+ if (copy_from_user(dev->interrupt_out_buffer, buffer, bytes_to_write)) {
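
The ldusb hunk is a printf-format fix: size_t is unsigned, so the matching conversion is %zu, while %zd expects the signed ssize_t. A quick demonstration:

#include <stdio.h>

int main(void)
{
    size_t count = 10, written = 4;

    /* With %zd a large unsigned difference could print as negative;
     * %zu always shows the true value. */
    printf("Write buffer overflow, %zu bytes dropped\n", count - written);
    return 0;
}
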
+diff --git a/drivers/usb/usbip/stub.h b/drivers/usb/usbip/stub.h
+index 35618ceb2791..d11270560c24 100644
+--- a/drivers/usb/usbip/stub.h
++++ b/drivers/usb/usbip/stub.h
+@@ -52,7 +52,11 @@ struct stub_priv {
+ unsigned long seqnum;
+ struct list_head list;
+ struct stub_device *sdev;
+- struct urb *urb;
++ struct urb **urbs;
++ struct scatterlist *sgl;
++ int num_urbs;
++ int completed_urbs;
++ int urb_status;
+
+ int unlinking;
+ };
+@@ -86,6 +90,7 @@ extern struct usb_device_driver stub_driver;
+ struct bus_id_priv *get_busid_priv(const char *busid);
+ void put_busid_priv(struct bus_id_priv *bid);
+ int del_match_busid(char *busid);
++void stub_free_priv_and_urb(struct stub_priv *priv);
+ void stub_device_cleanup_urbs(struct stub_device *sdev);
+
+ /* stub_rx.c */
+diff --git a/drivers/usb/usbip/stub_main.c b/drivers/usb/usbip/stub_main.c
+index 2e4bfccd4bfc..c1c0bbc9f8b1 100644
+--- a/drivers/usb/usbip/stub_main.c
++++ b/drivers/usb/usbip/stub_main.c
+@@ -6,6 +6,7 @@
+ #include <linux/string.h>
+ #include <linux/module.h>
+ #include <linux/device.h>
++#include <linux/scatterlist.h>
+
+ #include "usbip_common.h"
+ #include "stub.h"
+@@ -281,13 +282,49 @@ static struct stub_priv *stub_priv_pop_from_listhead(struct list_head *listhead)
+ struct stub_priv *priv, *tmp;
+
+ list_for_each_entry_safe(priv, tmp, listhead, list) {
+- list_del(&priv->list);
++ list_del_init(&priv->list);
+ return priv;
+ }
+
+ return NULL;
+ }
+
++void stub_free_priv_and_urb(struct stub_priv *priv)
++{
++ struct urb *urb;
++ int i;
++
++ for (i = 0; i < priv->num_urbs; i++) {
++ urb = priv->urbs[i];
++
++ if (!urb)
++ return;
++
++ kfree(urb->setup_packet);
++ urb->setup_packet = NULL;
++
++ if (urb->transfer_buffer && !priv->sgl) {
++ kfree(urb->transfer_buffer);
++ urb->transfer_buffer = NULL;
++ }
++
++ if (urb->num_sgs) {
++ sgl_free(urb->sg);
++ urb->sg = NULL;
++ urb->num_sgs = 0;
++ }
++
++ usb_free_urb(urb);
++ }
++ if (!list_empty(&priv->list))
++ list_del(&priv->list);
++ if (priv->sgl)
++ sgl_free(priv->sgl);
++ kfree(priv->urbs);
++ kmem_cache_free(stub_priv_cache, priv);
++}
++
+ static struct stub_priv *stub_priv_pop(struct stub_device *sdev)
+ {
+ unsigned long flags;
+@@ -314,25 +351,15 @@ done:
+ void stub_device_cleanup_urbs(struct stub_device *sdev)
+ {
+ struct stub_priv *priv;
+- struct urb *urb;
++ int i;
+
+ dev_dbg(&sdev->udev->dev, "Stub device cleaning up urbs\n");
+
+ while ((priv = stub_priv_pop(sdev))) {
+- urb = priv->urb;
+- dev_dbg(&sdev->udev->dev, "free urb seqnum %lu\n",
+- priv->seqnum);
+- usb_kill_urb(urb);
+-
+- kmem_cache_free(stub_priv_cache, priv);
++ for (i = 0; i < priv->num_urbs; i++)
++ usb_kill_urb(priv->urbs[i]);
+
+- kfree(urb->transfer_buffer);
+- urb->transfer_buffer = NULL;
+-
+- kfree(urb->setup_packet);
+- urb->setup_packet = NULL;
+-
+- usb_free_urb(urb);
++ stub_free_priv_and_urb(priv);
+ }
+ }
+
+diff --git a/drivers/usb/usbip/stub_rx.c b/drivers/usb/usbip/stub_rx.c
+index b0a855acafa3..66edfeea68fe 100644
+--- a/drivers/usb/usbip/stub_rx.c
++++ b/drivers/usb/usbip/stub_rx.c
+@@ -7,6 +7,7 @@
+ #include <linux/kthread.h>
+ #include <linux/usb.h>
+ #include <linux/usb/hcd.h>
++#include <linux/scatterlist.h>
+
+ #include "usbip_common.h"
+ #include "stub.h"
+@@ -201,7 +202,7 @@ static void tweak_special_requests(struct urb *urb)
+ static int stub_recv_cmd_unlink(struct stub_device *sdev,
+ struct usbip_header *pdu)
+ {
+- int ret;
++ int ret, i;
+ unsigned long flags;
+ struct stub_priv *priv;
+
+@@ -246,12 +247,14 @@ static int stub_recv_cmd_unlink(struct stub_device *sdev,
+ * so a driver in a client host will know the failure
+ * of the unlink request ?
+ */
+- ret = usb_unlink_urb(priv->urb);
+- if (ret != -EINPROGRESS)
+- dev_err(&priv->urb->dev->dev,
+- "failed to unlink a urb # %lu, ret %d\n",
+- priv->seqnum, ret);
+-
++ for (i = priv->completed_urbs; i < priv->num_urbs; i++) {
++ ret = usb_unlink_urb(priv->urbs[i]);
++ if (ret != -EINPROGRESS)
++ dev_err(&priv->urbs[i]->dev->dev,
++ "failed to unlink %d/%d urb of seqnum %lu, ret %d\n",
++ i + 1, priv->num_urbs,
++ priv->seqnum, ret);
++ }
+ return 0;
+ }
+
+@@ -433,14 +436,36 @@ static void masking_bogus_flags(struct urb *urb)
+ urb->transfer_flags &= allowed;
+ }
+
++static int stub_recv_xbuff(struct usbip_device *ud, struct stub_priv *priv)
++{
++ int ret;
++ int i;
++
++ for (i = 0; i < priv->num_urbs; i++) {
++ ret = usbip_recv_xbuff(ud, priv->urbs[i]);
++ if (ret < 0)
++ break;
++ }
++
++ return ret;
++}
++
+ static void stub_recv_cmd_submit(struct stub_device *sdev,
+ struct usbip_header *pdu)
+ {
+- int ret;
+ struct stub_priv *priv;
+ struct usbip_device *ud = &sdev->ud;
+ struct usb_device *udev = sdev->udev;
++ struct scatterlist *sgl = NULL, *sg;
++ void *buffer = NULL;
++ unsigned long long buf_len;
++ int nents;
++ int num_urbs = 1;
+ int pipe = get_pipe(sdev, pdu);
++ int use_sg = pdu->u.cmd_submit.transfer_flags & URB_DMA_MAP_SG;
++ int support_sg = 1;
++ int np = 0;
++ int ret, i;
+
+ if (pipe == -1)
+ return;
+@@ -449,76 +474,139 @@ static void stub_recv_cmd_submit(struct stub_device *sdev,
+ if (!priv)
+ return;
+
+- /* setup a urb */
+- if (usb_pipeisoc(pipe))
+- priv->urb = usb_alloc_urb(pdu->u.cmd_submit.number_of_packets,
+- GFP_KERNEL);
+- else
+- priv->urb = usb_alloc_urb(0, GFP_KERNEL);
++ buf_len = (unsigned long long)pdu->u.cmd_submit.transfer_buffer_length;
+
+- if (!priv->urb) {
+- usbip_event_add(ud, SDEV_EVENT_ERROR_MALLOC);
+- return;
++ /* allocate urb transfer buffer, if needed */
++ if (buf_len) {
++ if (use_sg) {
++ sgl = sgl_alloc(buf_len, GFP_KERNEL, &nents);
++ if (!sgl)
++ goto err_malloc;
++ } else {
++ buffer = kzalloc(buf_len, GFP_KERNEL);
++ if (!buffer)
++ goto err_malloc;
++ }
+ }
+
+- /* allocate urb transfer buffer, if needed */
+- if (pdu->u.cmd_submit.transfer_buffer_length > 0) {
+- priv->urb->transfer_buffer =
+- kzalloc(pdu->u.cmd_submit.transfer_buffer_length,
+- GFP_KERNEL);
+- if (!priv->urb->transfer_buffer) {
++ /* Check if the server's HCD supports SG */
++ if (use_sg && !udev->bus->sg_tablesize) {
++ /*
++ * If the server's HCD doesn't support SG, break a single SG
++ * request into several URBs and map each SG list entry to a
++ * corresponding URB buffer. The previously allocated SG
++ * list is stored in priv->sgl (if the server's HCD supports SG,
++ * the SG list is stored only in urb->sg) and is used as an
++ * indicator that the server split a single SG request into
++ * several URBs. Later, priv->sgl is used by stub_complete() and
++ * stub_send_ret_submit() to reassemble the divided URBs.
++ */
++ support_sg = 0;
++ num_urbs = nents;
++ priv->completed_urbs = 0;
++ pdu->u.cmd_submit.transfer_flags &= ~URB_DMA_MAP_SG;
++ }
++
++ /* allocate urb array */
++ priv->num_urbs = num_urbs;
++ priv->urbs = kmalloc_array(num_urbs, sizeof(*priv->urbs), GFP_KERNEL);
++ if (!priv->urbs)
++ goto err_urbs;
++
++ /* setup a urb */
++ if (support_sg) {
++ if (usb_pipeisoc(pipe))
++ np = pdu->u.cmd_submit.number_of_packets;
++
++ priv->urbs[0] = usb_alloc_urb(np, GFP_KERNEL);
++ if (!priv->urbs[0])
++ goto err_urb;
++
++ if (buf_len) {
++ if (use_sg) {
++ priv->urbs[0]->sg = sgl;
++ priv->urbs[0]->num_sgs = nents;
++ priv->urbs[0]->transfer_buffer = NULL;
++ } else {
++ priv->urbs[0]->transfer_buffer = buffer;
++ }
++ }
++
++ /* copy urb setup packet */
++ priv->urbs[0]->setup_packet = kmemdup(&pdu->u.cmd_submit.setup,
++ 8, GFP_KERNEL);
++ if (!priv->urbs[0]->setup_packet) {
+ usbip_event_add(ud, SDEV_EVENT_ERROR_MALLOC);
+ return;
+ }
+- }
+
+- /* copy urb setup packet */
+- priv->urb->setup_packet = kmemdup(&pdu->u.cmd_submit.setup, 8,
+- GFP_KERNEL);
+- if (!priv->urb->setup_packet) {
+- dev_err(&udev->dev, "allocate setup_packet\n");
+- usbip_event_add(ud, SDEV_EVENT_ERROR_MALLOC);
+- return;
++ usbip_pack_pdu(pdu, priv->urbs[0], USBIP_CMD_SUBMIT, 0);
++ } else {
++ for_each_sg(sgl, sg, nents, i) {
++ priv->urbs[i] = usb_alloc_urb(0, GFP_KERNEL);
++ /* The URBs which were previously allocated will be freed
++ * in stub_device_cleanup_urbs() if an error occurs.
++ */
++ if (!priv->urbs[i])
++ goto err_urb;
++
++ usbip_pack_pdu(pdu, priv->urbs[i], USBIP_CMD_SUBMIT, 0);
++ priv->urbs[i]->transfer_buffer = sg_virt(sg);
++ priv->urbs[i]->transfer_buffer_length = sg->length;
++ }
++ priv->sgl = sgl;
+ }
+
+- /* set other members from the base header of pdu */
+- priv->urb->context = (void *) priv;
+- priv->urb->dev = udev;
+- priv->urb->pipe = pipe;
+- priv->urb->complete = stub_complete;
++ for (i = 0; i < num_urbs; i++) {
++ /* set other members from the base header of pdu */
++ priv->urbs[i]->context = (void *) priv;
++ priv->urbs[i]->dev = udev;
++ priv->urbs[i]->pipe = pipe;
++ priv->urbs[i]->complete = stub_complete;
+
+- usbip_pack_pdu(pdu, priv->urb, USBIP_CMD_SUBMIT, 0);
++ /* no need to submit an intercepted request, but harmless? */
++ tweak_special_requests(priv->urbs[i]);
+
++ masking_bogus_flags(priv->urbs[i]);
++ }
+
+- if (usbip_recv_xbuff(ud, priv->urb) < 0)
++ if (stub_recv_xbuff(ud, priv) < 0)
+ return;
+
+- if (usbip_recv_iso(ud, priv->urb) < 0)
++ if (usbip_recv_iso(ud, priv->urbs[0]) < 0)
+ return;
+
+- /* no need to submit an intercepted request, but harmless? */
+- tweak_special_requests(priv->urb);
+-
+- masking_bogus_flags(priv->urb);
+ /* urb is now ready to submit */
+- ret = usb_submit_urb(priv->urb, GFP_KERNEL);
+-
+- if (ret == 0)
+- usbip_dbg_stub_rx("submit urb ok, seqnum %u\n",
+- pdu->base.seqnum);
+- else {
+- dev_err(&udev->dev, "submit_urb error, %d\n", ret);
+- usbip_dump_header(pdu);
+- usbip_dump_urb(priv->urb);
+-
+- /*
+- * Pessimistic.
+- * This connection will be discarded.
+- */
+- usbip_event_add(ud, SDEV_EVENT_ERROR_SUBMIT);
++ for (i = 0; i < priv->num_urbs; i++) {
++ ret = usb_submit_urb(priv->urbs[i], GFP_KERNEL);
++
++ if (ret == 0)
++ usbip_dbg_stub_rx("submit urb ok, seqnum %u\n",
++ pdu->base.seqnum);
++ else {
++ dev_err(&udev->dev, "submit_urb error, %d\n", ret);
++ usbip_dump_header(pdu);
++ usbip_dump_urb(priv->urbs[i]);
++
++ /*
++ * Pessimistic.
++ * This connection will be discarded.
++ */
++ usbip_event_add(ud, SDEV_EVENT_ERROR_SUBMIT);
++ break;
++ }
+ }
+
+ usbip_dbg_stub_rx("Leave\n");
++ return;
++
++err_urb:
++ kfree(priv->urbs);
++err_urbs:
++ kfree(buffer);
++ sgl_free(sgl);
++err_malloc:
++ usbip_event_add(ud, SDEV_EVENT_ERROR_MALLOC);
+ }
+
+ /* recv a pdu */
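
The core idea of the stub_rx rework, modeled in userspace: when the backend cannot take one scatter list, split the transfer into one sub-request per chunk, each pointing into the shared buffer, exactly as the patch maps sg_virt(sg) and sg->length onto per-URB buffers. The chunk and sub-request structs below are invented for illustration:

#include <stdio.h>
#include <stdlib.h>

struct chunk  { unsigned char *base; size_t len; };
struct subreq { unsigned char *buf;  size_t len; };

int main(void)
{
    unsigned char backing[96];
    struct chunk sg[3] = {
        { backing +  0, 32 }, { backing + 32, 32 }, { backing + 64, 32 },
    };
    size_t nents = 3;

    struct subreq *reqs = calloc(nents, sizeof(*reqs));
    if (!reqs)
        return 1;

    for (size_t i = 0; i < nents; i++) {
        reqs[i].buf = sg[i].base;   /* like transfer_buffer = sg_virt(sg) */
        reqs[i].len = sg[i].len;    /* like transfer_buffer_length = sg->length */
        printf("subreq %zu: %zu bytes\n", i, reqs[i].len);
    }

    free(reqs);
    return 0;
}
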
+diff --git a/drivers/usb/usbip/stub_tx.c b/drivers/usb/usbip/stub_tx.c
+index f0ec41a50cbc..36010a82b359 100644
+--- a/drivers/usb/usbip/stub_tx.c
++++ b/drivers/usb/usbip/stub_tx.c
+@@ -5,25 +5,11 @@
+
+ #include <linux/kthread.h>
+ #include <linux/socket.h>
++#include <linux/scatterlist.h>
+
+ #include "usbip_common.h"
+ #include "stub.h"
+
+-static void stub_free_priv_and_urb(struct stub_priv *priv)
+-{
+- struct urb *urb = priv->urb;
+-
+- kfree(urb->setup_packet);
+- urb->setup_packet = NULL;
+-
+- kfree(urb->transfer_buffer);
+- urb->transfer_buffer = NULL;
+-
+- list_del(&priv->list);
+- kmem_cache_free(stub_priv_cache, priv);
+- usb_free_urb(urb);
+-}
+-
+ /* be in spin_lock_irqsave(&sdev->priv_lock, flags) */
+ void stub_enqueue_ret_unlink(struct stub_device *sdev, __u32 seqnum,
+ __u32 status)
+@@ -85,6 +71,22 @@ void stub_complete(struct urb *urb)
+ break;
+ }
+
++ /*
++ * If the server breaks a single SG request into several URBs, the
++ * URBs must be reassembled before sending the completed URB to the vhci.
++ * Don't wake up the tx thread until all the URBs are completed.
++ */
++ if (priv->sgl) {
++ priv->completed_urbs++;
++
++ /* Only save the first error status */
++ if (urb->status && !priv->urb_status)
++ priv->urb_status = urb->status;
++
++ if (priv->completed_urbs < priv->num_urbs)
++ return;
++ }
++
+ /* link a urb to the queue of tx. */
+ spin_lock_irqsave(&sdev->priv_lock, flags);
+ if (sdev->ud.tcp_socket == NULL) {
+@@ -156,18 +158,22 @@ static int stub_send_ret_submit(struct stub_device *sdev)
+ size_t total_size = 0;
+
+ while ((priv = dequeue_from_priv_tx(sdev)) != NULL) {
+- int ret;
+- struct urb *urb = priv->urb;
++ struct urb *urb = priv->urbs[0];
+ struct usbip_header pdu_header;
+ struct usbip_iso_packet_descriptor *iso_buffer = NULL;
+ struct kvec *iov = NULL;
++ struct scatterlist *sg;
++ u32 actual_length = 0;
+ int iovnum = 0;
++ int ret;
++ int i;
+
+ txsize = 0;
+ memset(&pdu_header, 0, sizeof(pdu_header));
+ memset(&msg, 0, sizeof(msg));
+
+- if (urb->actual_length > 0 && !urb->transfer_buffer) {
++ if (urb->actual_length > 0 && !urb->transfer_buffer &&
++ !urb->num_sgs) {
+ dev_err(&sdev->udev->dev,
+ "urb: actual_length %d transfer_buffer null\n",
+ urb->actual_length);
+@@ -176,6 +182,11 @@ static int stub_send_ret_submit(struct stub_device *sdev)
+
+ if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS)
+ iovnum = 2 + urb->number_of_packets;
++ else if (usb_pipein(urb->pipe) && urb->actual_length > 0 &&
++ urb->num_sgs)
++ iovnum = 1 + urb->num_sgs;
++ else if (usb_pipein(urb->pipe) && priv->sgl)
++ iovnum = 1 + priv->num_urbs;
+ else
+ iovnum = 2;
+
+@@ -192,6 +203,15 @@ static int stub_send_ret_submit(struct stub_device *sdev)
+ setup_ret_submit_pdu(&pdu_header, urb);
+ usbip_dbg_stub_tx("setup txdata seqnum: %d\n",
+ pdu_header.base.seqnum);
++
++ if (priv->sgl) {
++ for (i = 0; i < priv->num_urbs; i++)
++ actual_length += priv->urbs[i]->actual_length;
++
++ pdu_header.u.ret_submit.status = priv->urb_status;
++ pdu_header.u.ret_submit.actual_length = actual_length;
++ }
++
+ usbip_header_correct_endian(&pdu_header, 1);
+
+ iov[iovnum].iov_base = &pdu_header;
+@@ -200,12 +220,47 @@ static int stub_send_ret_submit(struct stub_device *sdev)
+ txsize += sizeof(pdu_header);
+
+ /* 2. setup transfer buffer */
+- if (usb_pipein(urb->pipe) &&
++ if (usb_pipein(urb->pipe) && priv->sgl) {
++ /* If the server split a single SG request into several
++ * URBs because the server's HCD doesn't support SG,
++ * reassemble the split URB buffers into a single
++ * return command.
++ */
++ for (i = 0; i < priv->num_urbs; i++) {
++ iov[iovnum].iov_base =
++ priv->urbs[i]->transfer_buffer;
++ iov[iovnum].iov_len =
++ priv->urbs[i]->actual_length;
++ iovnum++;
++ }
++ txsize += actual_length;
++ } else if (usb_pipein(urb->pipe) &&
+ usb_pipetype(urb->pipe) != PIPE_ISOCHRONOUS &&
+ urb->actual_length > 0) {
+- iov[iovnum].iov_base = urb->transfer_buffer;
+- iov[iovnum].iov_len = urb->actual_length;
+- iovnum++;
++ if (urb->num_sgs) {
++ unsigned int copy = urb->actual_length;
++ int size;
++
++ for_each_sg(urb->sg, sg, urb->num_sgs, i) {
++ if (copy == 0)
++ break;
++
++ if (copy < sg->length)
++ size = copy;
++ else
++ size = sg->length;
++
++ iov[iovnum].iov_base = sg_virt(sg);
++ iov[iovnum].iov_len = size;
++
++ iovnum++;
++ copy -= size;
++ }
++ } else {
++ iov[iovnum].iov_base = urb->transfer_buffer;
++ iov[iovnum].iov_len = urb->actual_length;
++ iovnum++;
++ }
+ txsize += urb->actual_length;
+ } else if (usb_pipein(urb->pipe) &&
+ usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) {
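
What stub_complete() and stub_send_ret_submit() now do for split requests, condensed: count completions, keep only the first error status, and report the summed actual length once everything has finished. A plain C model with invented types:

#include <stdio.h>

struct split_req {
    int num, completed;
    int status;            /* first error wins */
    size_t actual[3];
};

static void complete_one(struct split_req *r, int idx, int st, size_t len)
{
    r->actual[idx] = len;
    if (st && !r->status)
        r->status = st;    /* only save the first error status */
    if (++r->completed < r->num)
        return;            /* don't report until all sub-requests are done */

    size_t total = 0;
    for (int i = 0; i < r->num; i++)
        total += r->actual[i];
    printf("ret_submit: status=%d actual_length=%zu\n", r->status, total);
}

int main(void)
{
    struct split_req r = { .num = 3 };

    complete_one(&r, 0, 0, 32);
    complete_one(&r, 1, -5, 16);   /* first (and reported) error */
    complete_one(&r, 2, -7, 0);    /* later error is ignored */
    return 0;
}
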
+diff --git a/drivers/usb/usbip/usbip_common.c b/drivers/usb/usbip/usbip_common.c
+index 45da3e01c7b0..6532d68e8808 100644
+--- a/drivers/usb/usbip/usbip_common.c
++++ b/drivers/usb/usbip/usbip_common.c
+@@ -680,8 +680,12 @@ EXPORT_SYMBOL_GPL(usbip_pad_iso);
+ /* some members of urb must be substituted before. */
+ int usbip_recv_xbuff(struct usbip_device *ud, struct urb *urb)
+ {
+- int ret;
++ struct scatterlist *sg;
++ int ret = 0;
++ int recv;
+ int size;
++ int copy;
++ int i;
+
+ if (ud->side == USBIP_STUB || ud->side == USBIP_VUDC) {
+ /* the direction of urb must be OUT. */
+@@ -701,29 +705,48 @@ int usbip_recv_xbuff(struct usbip_device *ud, struct urb *urb)
+ if (!(size > 0))
+ return 0;
+
+- if (size > urb->transfer_buffer_length) {
++ if (size > urb->transfer_buffer_length)
+ /* should not happen, probably malicious packet */
+- if (ud->side == USBIP_STUB) {
+- usbip_event_add(ud, SDEV_EVENT_ERROR_TCP);
+- return 0;
+- } else {
+- usbip_event_add(ud, VDEV_EVENT_ERROR_TCP);
+- return -EPIPE;
+- }
+- }
++ goto error;
+
+- ret = usbip_recv(ud->tcp_socket, urb->transfer_buffer, size);
+- if (ret != size) {
+- dev_err(&urb->dev->dev, "recv xbuf, %d\n", ret);
+- if (ud->side == USBIP_STUB || ud->side == USBIP_VUDC) {
+- usbip_event_add(ud, SDEV_EVENT_ERROR_TCP);
+- } else {
+- usbip_event_add(ud, VDEV_EVENT_ERROR_TCP);
+- return -EPIPE;
++ if (urb->num_sgs) {
++ copy = size;
++ for_each_sg(urb->sg, sg, urb->num_sgs, i) {
++ int recv_size;
++
++ if (copy < sg->length)
++ recv_size = copy;
++ else
++ recv_size = sg->length;
++
++ recv = usbip_recv(ud->tcp_socket, sg_virt(sg),
++ recv_size);
++
++ if (recv != recv_size)
++ goto error;
++
++ copy -= recv;
++ ret += recv;
+ }
++
++ if (ret != size)
++ goto error;
++ } else {
++ ret = usbip_recv(ud->tcp_socket, urb->transfer_buffer, size);
++ if (ret != size)
++ goto error;
+ }
+
+ return ret;
++
++error:
++ dev_err(&urb->dev->dev, "recv xbuf, %d\n", ret);
++ if (ud->side == USBIP_STUB || ud->side == USBIP_VUDC)
++ usbip_event_add(ud, SDEV_EVENT_ERROR_TCP);
++ else
++ usbip_event_add(ud, VDEV_EVENT_ERROR_TCP);
++
++ return -EPIPE;
+ }
+ EXPORT_SYMBOL_GPL(usbip_recv_xbuff);
+
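
The usbip_recv_xbuff rework in miniature: fill a list of chunks from a stream, stopping at each chunk boundary, and funnel every failure through one shared error label. Here read() stands in for usbip_recv() and the structures are illustrative:

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

struct chunk { unsigned char buf[16]; size_t len; };

static ssize_t recv_chunks(int fd, struct chunk *sg, int nents, size_t size)
{
    size_t copy = size;
    ssize_t total = 0;

    for (int i = 0; i < nents && copy > 0; i++) {
        size_t want = copy < sg[i].len ? copy : sg[i].len;
        ssize_t got = read(fd, sg[i].buf, want);

        if (got != (ssize_t)want)
            goto error;          /* short read: one shared failure path */
        copy -= got;
        total += got;
    }
    if ((size_t)total != size)
        goto error;
    return total;

error:
    fprintf(stderr, "recv xbuf, %zd\n", total);
    return -1;
}

int main(void)
{
    struct chunk sg[2] = { { .len = 16 }, { .len = 16 } };
    int fd = open("/dev/zero", O_RDONLY);

    if (fd < 0)
        return 1;
    printf("received %zd bytes\n", recv_chunks(fd, sg, 2, 24));
    close(fd);
    return 0;
}
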
+diff --git a/drivers/usb/usbip/vhci_hcd.c b/drivers/usb/usbip/vhci_hcd.c
+index 000ab7225717..585a84d319bd 100644
+--- a/drivers/usb/usbip/vhci_hcd.c
++++ b/drivers/usb/usbip/vhci_hcd.c
+@@ -697,7 +697,8 @@ static int vhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flag
+ }
+ vdev = &vhci_hcd->vdev[portnum-1];
+
+- if (!urb->transfer_buffer && urb->transfer_buffer_length) {
++ if (!urb->transfer_buffer && !urb->num_sgs &&
++ urb->transfer_buffer_length) {
+ dev_dbg(dev, "Null URB transfer buffer\n");
+ return -EINVAL;
+ }
+@@ -1143,6 +1144,15 @@ static int vhci_setup(struct usb_hcd *hcd)
+ hcd->speed = HCD_USB3;
+ hcd->self.root_hub->speed = USB_SPEED_SUPER;
+ }
++
++ /*
++ * Support SG.
++ * sg_tablesize is an arbitrary value to alleviate memory pressure
++ * on the host.
++ */
++ hcd->self.sg_tablesize = 32;
++ hcd->self.no_sg_constraint = 1;
++
+ return 0;
+ }
+
+diff --git a/drivers/usb/usbip/vhci_rx.c b/drivers/usb/usbip/vhci_rx.c
+index 44cd64518925..33f8972ba842 100644
+--- a/drivers/usb/usbip/vhci_rx.c
++++ b/drivers/usb/usbip/vhci_rx.c
+@@ -90,6 +90,9 @@ static void vhci_recv_ret_submit(struct vhci_device *vdev,
+ if (usbip_dbg_flag_vhci_rx)
+ usbip_dump_urb(urb);
+
++ if (urb->num_sgs)
++ urb->transfer_flags &= ~URB_DMA_MAP_SG;
++
+ usbip_dbg_vhci_rx("now giveback urb %u\n", pdu->base.seqnum);
+
+ spin_lock_irqsave(&vhci->lock, flags);
+diff --git a/drivers/usb/usbip/vhci_tx.c b/drivers/usb/usbip/vhci_tx.c
+index 2fa26d0578d7..0ae40a13a9fe 100644
+--- a/drivers/usb/usbip/vhci_tx.c
++++ b/drivers/usb/usbip/vhci_tx.c
+@@ -5,6 +5,7 @@
+
+ #include <linux/kthread.h>
+ #include <linux/slab.h>
++#include <linux/scatterlist.h>
+
+ #include "usbip_common.h"
+ #include "vhci.h"
+@@ -50,19 +51,23 @@ static struct vhci_priv *dequeue_from_priv_tx(struct vhci_device *vdev)
+
+ static int vhci_send_cmd_submit(struct vhci_device *vdev)
+ {
++ struct usbip_iso_packet_descriptor *iso_buffer = NULL;
+ struct vhci_priv *priv = NULL;
++ struct scatterlist *sg;
+
+ struct msghdr msg;
+- struct kvec iov[3];
++ struct kvec *iov;
+ size_t txsize;
+
+ size_t total_size = 0;
++ int iovnum;
++ int err = -ENOMEM;
++ int i;
+
+ while ((priv = dequeue_from_priv_tx(vdev)) != NULL) {
+ int ret;
+ struct urb *urb = priv->urb;
+ struct usbip_header pdu_header;
+- struct usbip_iso_packet_descriptor *iso_buffer = NULL;
+
+ txsize = 0;
+ memset(&pdu_header, 0, sizeof(pdu_header));
+@@ -72,18 +77,45 @@ static int vhci_send_cmd_submit(struct vhci_device *vdev)
+ usbip_dbg_vhci_tx("setup txdata urb seqnum %lu\n",
+ priv->seqnum);
+
++ if (urb->num_sgs && usb_pipeout(urb->pipe))
++ iovnum = 2 + urb->num_sgs;
++ else
++ iovnum = 3;
++
++ iov = kcalloc(iovnum, sizeof(*iov), GFP_KERNEL);
++ if (!iov) {
++ usbip_event_add(&vdev->ud, SDEV_EVENT_ERROR_MALLOC);
++ return -ENOMEM;
++ }
++
++ if (urb->num_sgs)
++ urb->transfer_flags |= URB_DMA_MAP_SG;
++
+ /* 1. setup usbip_header */
+ setup_cmd_submit_pdu(&pdu_header, urb);
+ usbip_header_correct_endian(&pdu_header, 1);
++ iovnum = 0;
+
+- iov[0].iov_base = &pdu_header;
+- iov[0].iov_len = sizeof(pdu_header);
++ iov[iovnum].iov_base = &pdu_header;
++ iov[iovnum].iov_len = sizeof(pdu_header);
+ txsize += sizeof(pdu_header);
++ iovnum++;
+
+ /* 2. setup transfer buffer */
+ if (!usb_pipein(urb->pipe) && urb->transfer_buffer_length > 0) {
+- iov[1].iov_base = urb->transfer_buffer;
+- iov[1].iov_len = urb->transfer_buffer_length;
++ if (urb->num_sgs &&
++ !usb_endpoint_xfer_isoc(&urb->ep->desc)) {
++ for_each_sg(urb->sg, sg, urb->num_sgs, i) {
++ iov[iovnum].iov_base = sg_virt(sg);
++ iov[iovnum].iov_len = sg->length;
++ iovnum++;
++ }
++ } else {
++ iov[iovnum].iov_base = urb->transfer_buffer;
++ iov[iovnum].iov_len =
++ urb->transfer_buffer_length;
++ iovnum++;
++ }
+ txsize += urb->transfer_buffer_length;
+ }
+
+@@ -95,30 +127,43 @@ static int vhci_send_cmd_submit(struct vhci_device *vdev)
+ if (!iso_buffer) {
+ usbip_event_add(&vdev->ud,
+ SDEV_EVENT_ERROR_MALLOC);
+- return -1;
++ goto err_iso_buffer;
+ }
+
+- iov[2].iov_base = iso_buffer;
+- iov[2].iov_len = len;
++ iov[iovnum].iov_base = iso_buffer;
++ iov[iovnum].iov_len = len;
++ iovnum++;
+ txsize += len;
+ }
+
+- ret = kernel_sendmsg(vdev->ud.tcp_socket, &msg, iov, 3, txsize);
++ ret = kernel_sendmsg(vdev->ud.tcp_socket, &msg, iov, iovnum,
++ txsize);
+ if (ret != txsize) {
+ pr_err("sendmsg failed!, ret=%d for %zd\n", ret,
+ txsize);
+- kfree(iso_buffer);
+ usbip_event_add(&vdev->ud, VDEV_EVENT_ERROR_TCP);
+- return -1;
++ err = -EPIPE;
++ goto err_tx;
+ }
+
++ kfree(iov);
++ /* This is only for isochronous case */
+ kfree(iso_buffer);
++ iso_buffer = NULL;
++
+ usbip_dbg_vhci_tx("send txdata\n");
+
+ total_size += txsize;
+ }
+
+ return total_size;
++
++err_tx:
++ kfree(iso_buffer);
++err_iso_buffer:
++ kfree(iov);
++
++ return err;
+ }
+
+ static struct vhci_unlink *dequeue_from_unlink_tx(struct vhci_device *vdev)
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 1b85278471f6..a0318bc57fa6 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -472,6 +472,7 @@ static noinline void compress_file_range(struct async_chunk *async_chunk,
+ u64 start = async_chunk->start;
+ u64 end = async_chunk->end;
+ u64 actual_end;
++ u64 i_size;
+ int ret = 0;
+ struct page **pages = NULL;
+ unsigned long nr_pages;
+@@ -485,7 +486,19 @@ static noinline void compress_file_range(struct async_chunk *async_chunk,
+ inode_should_defrag(BTRFS_I(inode), start, end, end - start + 1,
+ SZ_16K);
+
+- actual_end = min_t(u64, i_size_read(inode), end + 1);
++ /*
++ * We need to save i_size before now because it could change in between
++ * us evaluating the size and assigning it. This is because we lock and
++ * unlock the page in truncate and fallocate, and then modify the i_size
++ * later on.
++ *
++ * The barriers are to emulate READ_ONCE, remove that once i_size_read
++ * does that for us.
++ */
++ barrier();
++ i_size = i_size_read(inode);
++ barrier();
++ actual_end = min_t(u64, i_size, end + 1);
+ again:
+ will_compress = 0;
+ nr_pages = (end >> PAGE_SHIFT) - (start >> PAGE_SHIFT) + 1;
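
The btrfs hunk emulates READ_ONCE with a pair of barrier() calls; the point is to force one coherent load of i_size instead of letting the compiler re-read it between the comparison and the assignment. A common userspace equivalent uses a volatile access; the macro below is illustrative, not the kernel's:

#include <stdio.h>

#define READ_ONCE_LIKE(x) (*(volatile __typeof__(x) *)&(x))

static unsigned long long i_size = 4096;

int main(void)
{
    /* One load; every later use sees the same snapshot even if another
     * thread changes i_size concurrently. */
    unsigned long long snap = READ_ONCE_LIKE(i_size);
    unsigned long long end = 8191;
    unsigned long long actual_end = snap < end + 1 ? snap : end + 1;

    printf("actual_end = %llu\n", actual_end);
    return 0;
}
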
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index 9634cae1e1b1..24f36e2dac06 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -686,9 +686,7 @@ static void dev_item_err(const struct extent_buffer *eb, int slot,
+ static int check_dev_item(struct extent_buffer *leaf,
+ struct btrfs_key *key, int slot)
+ {
+- struct btrfs_fs_info *fs_info = leaf->fs_info;
+ struct btrfs_dev_item *ditem;
+- u64 max_devid = max(BTRFS_MAX_DEVS(fs_info), BTRFS_MAX_DEVS_SYS_CHUNK);
+
+ if (key->objectid != BTRFS_DEV_ITEMS_OBJECTID) {
+ dev_item_err(leaf, slot,
+@@ -696,12 +694,6 @@ static int check_dev_item(struct extent_buffer *leaf,
+ key->objectid, BTRFS_DEV_ITEMS_OBJECTID);
+ return -EUCLEAN;
+ }
+- if (key->offset > max_devid) {
+- dev_item_err(leaf, slot,
+- "invalid devid: has=%llu expect=[0, %llu]",
+- key->offset, max_devid);
+- return -EUCLEAN;
+- }
+ ditem = btrfs_item_ptr(leaf, slot, struct btrfs_dev_item);
+ if (btrfs_device_id(leaf, ditem) != key->offset) {
+ dev_item_err(leaf, slot,
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 9c057609eaec..0084fb9fa91e 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -4976,6 +4976,7 @@ static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
+ } else if (type & BTRFS_BLOCK_GROUP_SYSTEM) {
+ max_stripe_size = SZ_32M;
+ max_chunk_size = 2 * max_stripe_size;
++ devs_max = min_t(int, devs_max, BTRFS_MAX_DEVS_SYS_CHUNK);
+ } else {
+ btrfs_err(info, "invalid chunk type 0x%llx requested",
+ type);
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index 8fd530112810..966cec9d62e4 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -1087,6 +1087,11 @@ void __ceph_remove_cap(struct ceph_cap *cap, bool queue_release)
+
+ dout("__ceph_remove_cap %p from %p\n", cap, &ci->vfs_inode);
+
++ /* remove from inode's cap rbtree, and clear auth cap */
++ rb_erase(&cap->ci_node, &ci->i_caps);
++ if (ci->i_auth_cap == cap)
++ ci->i_auth_cap = NULL;
++
+ /* remove from session list */
+ spin_lock(&session->s_cap_lock);
+ if (session->s_cap_iterator == cap) {
+@@ -1120,11 +1125,6 @@ void __ceph_remove_cap(struct ceph_cap *cap, bool queue_release)
+
+ spin_unlock(&session->s_cap_lock);
+
+- /* remove from inode list */
+- rb_erase(&cap->ci_node, &ci->i_caps);
+- if (ci->i_auth_cap == cap)
+- ci->i_auth_cap = NULL;
+-
+ if (removed)
+ ceph_put_cap(mdsc, cap);
+
+diff --git a/fs/ceph/dir.c b/fs/ceph/dir.c
+index 4ca0b8ff9a72..d17a789fd856 100644
+--- a/fs/ceph/dir.c
++++ b/fs/ceph/dir.c
+@@ -1553,36 +1553,37 @@ static int ceph_d_revalidate(struct dentry *dentry, unsigned int flags)
+ {
+ int valid = 0;
+ struct dentry *parent;
+- struct inode *dir;
++ struct inode *dir, *inode;
+
+ if (flags & LOOKUP_RCU) {
+ parent = READ_ONCE(dentry->d_parent);
+ dir = d_inode_rcu(parent);
+ if (!dir)
+ return -ECHILD;
++ inode = d_inode_rcu(dentry);
+ } else {
+ parent = dget_parent(dentry);
+ dir = d_inode(parent);
++ inode = d_inode(dentry);
+ }
+
+ dout("d_revalidate %p '%pd' inode %p offset %lld\n", dentry,
+- dentry, d_inode(dentry), ceph_dentry(dentry)->offset);
++ dentry, inode, ceph_dentry(dentry)->offset);
+
+ /* always trust cached snapped dentries, snapdir dentry */
+ if (ceph_snap(dir) != CEPH_NOSNAP) {
+ dout("d_revalidate %p '%pd' inode %p is SNAPPED\n", dentry,
+- dentry, d_inode(dentry));
++ dentry, inode);
+ valid = 1;
+- } else if (d_really_is_positive(dentry) &&
+- ceph_snap(d_inode(dentry)) == CEPH_SNAPDIR) {
++ } else if (inode && ceph_snap(inode) == CEPH_SNAPDIR) {
+ valid = 1;
+ } else {
+ valid = dentry_lease_is_valid(dentry, flags);
+ if (valid == -ECHILD)
+ return valid;
+ if (valid || dir_lease_is_valid(dir, dentry)) {
+- if (d_really_is_positive(dentry))
+- valid = ceph_is_any_caps(d_inode(dentry));
++ if (inode)
++ valid = ceph_is_any_caps(inode);
+ else
+ valid = 1;
+ }
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index 685a03cc4b77..8273d86bf499 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -458,6 +458,9 @@ int ceph_atomic_open(struct inode *dir, struct dentry *dentry,
+ err = ceph_security_init_secctx(dentry, mode, &as_ctx);
+ if (err < 0)
+ goto out_ctx;
++ } else if (!d_in_lookup(dentry)) {
++ /* If it's not being looked up, it's negative */
++ return -ENOENT;
+ }
+
+ /* do the open */
+@@ -1931,10 +1934,18 @@ static ssize_t __ceph_copy_file_range(struct file *src_file, loff_t src_off,
+ if (ceph_test_mount_opt(ceph_inode_to_client(src_inode), NOCOPYFROM))
+ return -EOPNOTSUPP;
+
++ /*
++ * Striped file layouts require that we copy partial objects, but the
++ * OSD copy-from operation only supports full-object copies. Limit
++ * this to non-striped file layouts for now.
++ */
+ if ((src_ci->i_layout.stripe_unit != dst_ci->i_layout.stripe_unit) ||
+- (src_ci->i_layout.stripe_count != dst_ci->i_layout.stripe_count) ||
+- (src_ci->i_layout.object_size != dst_ci->i_layout.object_size))
++ (src_ci->i_layout.stripe_count != 1) ||
++ (dst_ci->i_layout.stripe_count != 1) ||
++ (src_ci->i_layout.object_size != dst_ci->i_layout.object_size)) {
++ dout("Invalid src/dst files layout\n");
+ return -EOPNOTSUPP;
++ }
+
+ if (len < src_ci->i_layout.object_size)
+ return -EOPNOTSUPP; /* no remote copy will be done */
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index 3b537e7038c7..1676a46822ad 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -1432,6 +1432,7 @@ retry_lookup:
+ dout(" final dn %p\n", dn);
+ } else if ((req->r_op == CEPH_MDS_OP_LOOKUPSNAP ||
+ req->r_op == CEPH_MDS_OP_MKSNAP) &&
++ test_bit(CEPH_MDS_R_PARENT_LOCKED, &req->r_req_flags) &&
+ !test_bit(CEPH_MDS_R_ABORTED, &req->r_req_flags)) {
+ struct inode *dir = req->r_parent;
+
+diff --git a/fs/cifs/smb2pdu.h b/fs/cifs/smb2pdu.h
+index 747de9317659..a7f3eb12472f 100644
+--- a/fs/cifs/smb2pdu.h
++++ b/fs/cifs/smb2pdu.h
+@@ -836,6 +836,7 @@ struct create_durable_handle_reconnect_v2 {
+ struct create_context ccontext;
+ __u8 Name[8];
+ struct durable_reconnect_context_v2 dcontext;
++ __u8 Pad[4];
+ } __packed;
+
+ /* See MS-SMB2 2.2.13.2.5 */
+diff --git a/fs/configfs/symlink.c b/fs/configfs/symlink.c
+index 91eac6c55e07..f3881e4caedd 100644
+--- a/fs/configfs/symlink.c
++++ b/fs/configfs/symlink.c
+@@ -143,11 +143,42 @@ int configfs_symlink(struct inode *dir, struct dentry *dentry, const char *symna
+ !type->ct_item_ops->allow_link)
+ goto out_put;
+
++ /*
++ * This is really sick. What they wanted was a hybrid of
++ * link(2) and symlink(2) - they wanted the target resolved
++ * at syscall time (as link(2) would've done), be a directory
++ * (which link(2) would've refused to do) *AND* be a deep
++ * fucking magic, making the target busy from rmdir POV.
++ * symlink(2) is nothing of that sort, and the locking it
++ * gets matches the normal symlink(2) semantics. Without
++ * attempts to resolve the target (which might very well
++ * not even exist yet) done prior to locking the parent
++ * directory. This perversion, OTOH, needs to resolve
++ * the target, which would lead to obvious deadlocks if
++ * attempted with any directories locked.
++ *
++ * Unfortunately, that garbage is userland ABI and we should've
++ * said "no" back in 2005. Too late now, so we get to
++ * play very ugly games with locking.
++ *
++ * Try *ANYTHING* of that sort in new code, and you will
++ * really regret it. Just ask yourself - what could a BOFH
++ * do to me and do I want to find it out first-hand?
++ *
++ * AV, a thoroughly annoyed bastard.
++ */
++ inode_unlock(dir);
+ ret = get_target(symname, &path, &target_item, dentry->d_sb);
++ inode_lock(dir);
+ if (ret)
+ goto out_put;
+
+- ret = type->ct_item_ops->allow_link(parent_item, target_item);
++ if (dentry->d_inode || d_unhashed(dentry))
++ ret = -EEXIST;
++ else
++ ret = inode_permission(dir, MAY_WRITE | MAY_EXEC);
++ if (!ret)
++ ret = type->ct_item_ops->allow_link(parent_item, target_item);
+ if (!ret) {
+ mutex_lock(&configfs_symlink_mutex);
+ ret = create_link(parent_item, target_item, dentry);
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index 542b02d170f8..57b23703c679 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -577,10 +577,13 @@ void wbc_attach_and_unlock_inode(struct writeback_control *wbc,
+ spin_unlock(&inode->i_lock);
+
+ /*
+- * A dying wb indicates that the memcg-blkcg mapping has changed
+- * and a new wb is already serving the memcg. Switch immediately.
++ * A dying wb indicates that either the blkcg associated with the
++ * memcg changed or the associated memcg is dying. In the first
++ * case, a replacement wb should already be available and we should
++ * refresh the wb immediately. In the second case, trying to
++ * refresh will keep failing.
+ */
+- if (unlikely(wb_dying(wbc->wb)))
++ if (unlikely(wb_dying(wbc->wb) && !css_is_dying(wbc->wb->memcg_css)))
+ inode_switch_wbs(inode, wbc->wb_id);
+ }
+ EXPORT_SYMBOL_GPL(wbc_attach_and_unlock_inode);
+diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
+index ad7a77101471..af549d70ec50 100644
+--- a/fs/nfs/delegation.c
++++ b/fs/nfs/delegation.c
+@@ -53,6 +53,16 @@ nfs4_is_valid_delegation(const struct nfs_delegation *delegation,
+ return false;
+ }
+
++struct nfs_delegation *nfs4_get_valid_delegation(const struct inode *inode)
++{
++ struct nfs_delegation *delegation;
++
++ delegation = rcu_dereference(NFS_I(inode)->delegation);
++ if (nfs4_is_valid_delegation(delegation, 0))
++ return delegation;
++ return NULL;
++}
++
+ static int
+ nfs4_do_check_delegation(struct inode *inode, fmode_t flags, bool mark)
+ {
+diff --git a/fs/nfs/delegation.h b/fs/nfs/delegation.h
+index 9eb87ae4c982..8b14d441e699 100644
+--- a/fs/nfs/delegation.h
++++ b/fs/nfs/delegation.h
+@@ -68,6 +68,7 @@ int nfs4_lock_delegation_recall(struct file_lock *fl, struct nfs4_state *state,
+ bool nfs4_copy_delegation_stateid(struct inode *inode, fmode_t flags, nfs4_stateid *dst, const struct cred **cred);
+ bool nfs4_refresh_delegation_stateid(nfs4_stateid *dst, struct inode *inode);
+
++struct nfs_delegation *nfs4_get_valid_delegation(const struct inode *inode);
+ void nfs_mark_delegation_referenced(struct nfs_delegation *delegation);
+ int nfs4_have_delegation(struct inode *inode, fmode_t flags);
+ int nfs4_check_delegation(struct inode *inode, fmode_t flags);
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index e1e7d2724b97..e600f28b1ddb 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -1435,8 +1435,6 @@ static int can_open_delegated(struct nfs_delegation *delegation, fmode_t fmode,
+ return 0;
+ if ((delegation->type & fmode) != fmode)
+ return 0;
+- if (test_bit(NFS_DELEGATION_RETURNING, &delegation->flags))
+- return 0;
+ switch (claim) {
+ case NFS4_OPEN_CLAIM_NULL:
+ case NFS4_OPEN_CLAIM_FH:
+@@ -1805,7 +1803,6 @@ static void nfs4_return_incompatible_delegation(struct inode *inode, fmode_t fmo
+ static struct nfs4_state *nfs4_try_open_cached(struct nfs4_opendata *opendata)
+ {
+ struct nfs4_state *state = opendata->state;
+- struct nfs_inode *nfsi = NFS_I(state->inode);
+ struct nfs_delegation *delegation;
+ int open_mode = opendata->o_arg.open_flags;
+ fmode_t fmode = opendata->o_arg.fmode;
+@@ -1822,7 +1819,7 @@ static struct nfs4_state *nfs4_try_open_cached(struct nfs4_opendata *opendata)
+ }
+ spin_unlock(&state->owner->so_lock);
+ rcu_read_lock();
+- delegation = rcu_dereference(nfsi->delegation);
++ delegation = nfs4_get_valid_delegation(state->inode);
+ if (!can_open_delegated(delegation, fmode, claim)) {
+ rcu_read_unlock();
+ break;
+@@ -2366,7 +2363,7 @@ static void nfs4_open_prepare(struct rpc_task *task, void *calldata)
+ data->o_arg.open_flags, claim))
+ goto out_no_action;
+ rcu_read_lock();
+- delegation = rcu_dereference(NFS_I(data->state->inode)->delegation);
++ delegation = nfs4_get_valid_delegation(data->state->inode);
+ if (can_open_delegated(delegation, data->o_arg.fmode, claim))
+ goto unlock_no_action;
+ rcu_read_unlock();
+diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
+index 4435df3e5adb..f6d790b7f2e2 100644
+--- a/fs/ocfs2/file.c
++++ b/fs/ocfs2/file.c
+@@ -2092,54 +2092,90 @@ static int ocfs2_is_io_unaligned(struct inode *inode, size_t count, loff_t pos)
+ return 0;
+ }
+
+-static int ocfs2_prepare_inode_for_refcount(struct inode *inode,
+- struct file *file,
+- loff_t pos, size_t count,
+- int *meta_level)
++static int ocfs2_inode_lock_for_extent_tree(struct inode *inode,
++ struct buffer_head **di_bh,
++ int meta_level,
++ int overwrite_io,
++ int write_sem,
++ int wait)
+ {
+- int ret;
+- struct buffer_head *di_bh = NULL;
+- u32 cpos = pos >> OCFS2_SB(inode->i_sb)->s_clustersize_bits;
+- u32 clusters =
+- ocfs2_clusters_for_bytes(inode->i_sb, pos + count) - cpos;
++ int ret = 0;
+
+- ret = ocfs2_inode_lock(inode, &di_bh, 1);
+- if (ret) {
+- mlog_errno(ret);
++ if (wait)
++ ret = ocfs2_inode_lock(inode, NULL, meta_level);
++ else
++ ret = ocfs2_try_inode_lock(inode,
++ overwrite_io ? NULL : di_bh, meta_level);
++ if (ret < 0)
+ goto out;
++
++ if (wait) {
++ if (write_sem)
++ down_write(&OCFS2_I(inode)->ip_alloc_sem);
++ else
++ down_read(&OCFS2_I(inode)->ip_alloc_sem);
++ } else {
++ if (write_sem)
++ ret = down_write_trylock(&OCFS2_I(inode)->ip_alloc_sem);
++ else
++ ret = down_read_trylock(&OCFS2_I(inode)->ip_alloc_sem);
++
++ if (!ret) {
++ ret = -EAGAIN;
++ goto out_unlock;
++ }
+ }
+
+- *meta_level = 1;
++ return ret;
+
+- ret = ocfs2_refcount_cow(inode, di_bh, cpos, clusters, UINT_MAX);
+- if (ret)
+- mlog_errno(ret);
++out_unlock:
++ brelse(*di_bh);
++ ocfs2_inode_unlock(inode, meta_level);
+ out:
+- brelse(di_bh);
+ return ret;
+ }
+
++static void ocfs2_inode_unlock_for_extent_tree(struct inode *inode,
++ struct buffer_head **di_bh,
++ int meta_level,
++ int write_sem)
++{
++ if (write_sem)
++ up_write(&OCFS2_I(inode)->ip_alloc_sem);
++ else
++ up_read(&OCFS2_I(inode)->ip_alloc_sem);
++
++ brelse(*di_bh);
++ *di_bh = NULL;
++
++ if (meta_level >= 0)
++ ocfs2_inode_unlock(inode, meta_level);
++}
++
+ static int ocfs2_prepare_inode_for_write(struct file *file,
+ loff_t pos, size_t count, int wait)
+ {
+ int ret = 0, meta_level = 0, overwrite_io = 0;
++ int write_sem = 0;
+ struct dentry *dentry = file->f_path.dentry;
+ struct inode *inode = d_inode(dentry);
+ struct buffer_head *di_bh = NULL;
+ loff_t end;
++ u32 cpos;
++ u32 clusters;
+
+ /*
+ * We start with a read level meta lock and only jump to an ex
+ * if we need to make modifications here.
+ */
+ for(;;) {
+- if (wait)
+- ret = ocfs2_inode_lock(inode, NULL, meta_level);
+- else
+- ret = ocfs2_try_inode_lock(inode,
+- overwrite_io ? NULL : &di_bh, meta_level);
++ ret = ocfs2_inode_lock_for_extent_tree(inode,
++ &di_bh,
++ meta_level,
++ overwrite_io,
++ write_sem,
++ wait);
+ if (ret < 0) {
+- meta_level = -1;
+ if (ret != -EAGAIN)
+ mlog_errno(ret);
+ goto out;
+@@ -2151,15 +2187,8 @@ static int ocfs2_prepare_inode_for_write(struct file *file,
+ */
+ if (!wait && !overwrite_io) {
+ overwrite_io = 1;
+- if (!down_read_trylock(&OCFS2_I(inode)->ip_alloc_sem)) {
+- ret = -EAGAIN;
+- goto out_unlock;
+- }
+
+ ret = ocfs2_overwrite_io(inode, di_bh, pos, count);
+- brelse(di_bh);
+- di_bh = NULL;
+- up_read(&OCFS2_I(inode)->ip_alloc_sem);
+ if (ret < 0) {
+ if (ret != -EAGAIN)
+ mlog_errno(ret);
+@@ -2178,7 +2207,10 @@ static int ocfs2_prepare_inode_for_write(struct file *file,
+ * set inode->i_size at the end of a write. */
+ if (should_remove_suid(dentry)) {
+ if (meta_level == 0) {
+- ocfs2_inode_unlock(inode, meta_level);
++ ocfs2_inode_unlock_for_extent_tree(inode,
++ &di_bh,
++ meta_level,
++ write_sem);
+ meta_level = 1;
+ continue;
+ }
+@@ -2194,18 +2226,32 @@ static int ocfs2_prepare_inode_for_write(struct file *file,
+
+ ret = ocfs2_check_range_for_refcount(inode, pos, count);
+ if (ret == 1) {
+- ocfs2_inode_unlock(inode, meta_level);
+- meta_level = -1;
+-
+- ret = ocfs2_prepare_inode_for_refcount(inode,
+- file,
+- pos,
+- count,
+- &meta_level);
++ ocfs2_inode_unlock_for_extent_tree(inode,
++ &di_bh,
++ meta_level,
++ write_sem);
++ ret = ocfs2_inode_lock_for_extent_tree(inode,
++ &di_bh,
++ meta_level,
++ overwrite_io,
++ 1,
++ wait);
++ write_sem = 1;
++ if (ret < 0) {
++ if (ret != -EAGAIN)
++ mlog_errno(ret);
++ goto out;
++ }
++
++ cpos = pos >> OCFS2_SB(inode->i_sb)->s_clustersize_bits;
++ clusters =
++ ocfs2_clusters_for_bytes(inode->i_sb, pos + count) - cpos;
++ ret = ocfs2_refcount_cow(inode, di_bh, cpos, clusters, UINT_MAX);
+ }
+
+ if (ret < 0) {
+- mlog_errno(ret);
++ if (ret != -EAGAIN)
++ mlog_errno(ret);
+ goto out_unlock;
+ }
+
+@@ -2216,10 +2262,10 @@ out_unlock:
+ trace_ocfs2_prepare_inode_for_write(OCFS2_I(inode)->ip_blkno,
+ pos, count, wait);
+
+- brelse(di_bh);
+-
+- if (meta_level >= 0)
+- ocfs2_inode_unlock(inode, meta_level);
++ ocfs2_inode_unlock_for_extent_tree(inode,
++ &di_bh,
++ meta_level,
++ write_sem);
+
+ out:
+ return ret;
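The refactor folds the old refcount-specific helper into one lock/unlock pair so the cluster lock and ip_alloc_sem are always taken and released in the same order, and so the non-blocking path can fail with -EAGAIN instead of sleeping. A rough sketch of the pairing, using the signatures introduced above:

	ret = ocfs2_inode_lock_for_extent_tree(inode, &di_bh, meta_level,
					       overwrite_io, 0 /* read sem */, wait);
	if (ret < 0)
		goto out;	/* -EAGAIN here means "retry without blocking" */

	/* ... examine the extent tree; for CoW, drop and retake with write_sem=1 ... */

	ocfs2_inode_unlock_for_extent_tree(inode, &di_bh, meta_level, write_sem);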
+diff --git a/include/asm-generic/vdso/vsyscall.h b/include/asm-generic/vdso/vsyscall.h
+index e94b19782c92..ce4103208619 100644
+--- a/include/asm-generic/vdso/vsyscall.h
++++ b/include/asm-generic/vdso/vsyscall.h
+@@ -25,13 +25,6 @@ static __always_inline int __arch_get_clock_mode(struct timekeeper *tk)
+ }
+ #endif /* __arch_get_clock_mode */
+
+-#ifndef __arch_use_vsyscall
+-static __always_inline int __arch_use_vsyscall(struct vdso_data *vdata)
+-{
+- return 1;
+-}
+-#endif /* __arch_use_vsyscall */
+-
+ #ifndef __arch_update_vsyscall
+ static __always_inline void __arch_update_vsyscall(struct vdso_data *vdata,
+ struct timekeeper *tk)
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index fcb1386bb0d4..4643fcf55474 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -59,6 +59,11 @@ extern ssize_t cpu_show_l1tf(struct device *dev,
+ struct device_attribute *attr, char *buf);
+ extern ssize_t cpu_show_mds(struct device *dev,
+ struct device_attribute *attr, char *buf);
++extern ssize_t cpu_show_tsx_async_abort(struct device *dev,
++ struct device_attribute *attr,
++ char *buf);
++extern ssize_t cpu_show_itlb_multihit(struct device *dev,
++ struct device_attribute *attr, char *buf);
+
+ extern __printf(4, 5)
+ struct device *cpu_device_create(struct device *parent, void *drvdata,
+@@ -211,28 +216,7 @@ static inline int cpuhp_smt_enable(void) { return 0; }
+ static inline int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval) { return 0; }
+ #endif
+
+-/*
+- * These are used for a global "mitigations=" cmdline option for toggling
+- * optional CPU mitigations.
+- */
+-enum cpu_mitigations {
+- CPU_MITIGATIONS_OFF,
+- CPU_MITIGATIONS_AUTO,
+- CPU_MITIGATIONS_AUTO_NOSMT,
+-};
+-
+-extern enum cpu_mitigations cpu_mitigations;
+-
+-/* mitigations=off */
+-static inline bool cpu_mitigations_off(void)
+-{
+- return cpu_mitigations == CPU_MITIGATIONS_OFF;
+-}
+-
+-/* mitigations=auto,nosmt */
+-static inline bool cpu_mitigations_auto_nosmt(void)
+-{
+- return cpu_mitigations == CPU_MITIGATIONS_AUTO_NOSMT;
+-}
++extern bool cpu_mitigations_off(void);
++extern bool cpu_mitigations_auto_nosmt(void);
+
+ #endif /* _LINUX_CPU_H_ */
+diff --git a/include/linux/efi.h b/include/linux/efi.h
+index f87fabea4a85..b3a93f8e6e59 100644
+--- a/include/linux/efi.h
++++ b/include/linux/efi.h
+@@ -1585,9 +1585,22 @@ char *efi_convert_cmdline(efi_system_table_t *sys_table_arg,
+ efi_status_t efi_get_memory_map(efi_system_table_t *sys_table_arg,
+ struct efi_boot_memmap *map);
+
++efi_status_t efi_low_alloc_above(efi_system_table_t *sys_table_arg,
++ unsigned long size, unsigned long align,
++ unsigned long *addr, unsigned long min);
++
++static inline
+ efi_status_t efi_low_alloc(efi_system_table_t *sys_table_arg,
+ unsigned long size, unsigned long align,
+- unsigned long *addr);
++ unsigned long *addr)
++{
++ /*
++ * Don't allocate at 0x0. It will confuse code that
++ * checks pointers against NULL. Skip the first 8
++ * bytes so we start at a nice even number.
++ */
++ return efi_low_alloc_above(sys_table_arg, size, align, addr, 0x8);
++}
+
+ efi_status_t efi_high_alloc(efi_system_table_t *sys_table_arg,
+ unsigned long size, unsigned long align,
+@@ -1598,7 +1611,8 @@ efi_status_t efi_relocate_kernel(efi_system_table_t *sys_table_arg,
+ unsigned long image_size,
+ unsigned long alloc_size,
+ unsigned long preferred_addr,
+- unsigned long alignment);
++ unsigned long alignment,
++ unsigned long min_addr);
+
+ efi_status_t handle_cmdline_files(efi_system_table_t *sys_table_arg,
+ efi_loaded_image_t *image,
+diff --git a/include/linux/filter.h b/include/linux/filter.h
+index 92c6e31fb008..38716f93825f 100644
+--- a/include/linux/filter.h
++++ b/include/linux/filter.h
+@@ -1099,7 +1099,6 @@ static inline void bpf_get_prog_name(const struct bpf_prog *prog, char *sym)
+
+ #endif /* CONFIG_BPF_JIT */
+
+-void bpf_prog_kallsyms_del_subprogs(struct bpf_prog *fp);
+ void bpf_prog_kallsyms_del_all(struct bpf_prog *fp);
+
+ #define BPF_ANC BIT(15)
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index fcb46b3374c6..52ed5f66e8f9 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -1090,6 +1090,7 @@ enum kvm_stat_kind {
+
+ struct kvm_stat_data {
+ int offset;
++ int mode;
+ struct kvm *kvm;
+ };
+
+@@ -1097,6 +1098,7 @@ struct kvm_stats_debugfs_item {
+ const char *name;
+ int offset;
+ enum kvm_stat_kind kind;
++ int mode;
+ };
+ extern struct kvm_stats_debugfs_item debugfs_entries[];
+ extern struct dentry *kvm_debugfs_dir;
+@@ -1380,4 +1382,10 @@ static inline int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
+ }
+ #endif /* CONFIG_HAVE_KVM_VCPU_RUN_PID_CHANGE */
+
++typedef int (*kvm_vm_thread_fn_t)(struct kvm *kvm, uintptr_t data);
++
++int kvm_vm_create_worker_thread(struct kvm *kvm, kvm_vm_thread_fn_t thread_fn,
++ uintptr_t data, const char *name,
++ struct task_struct **thread_ptr);
++
+ #endif
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index fe4552e1c40b..f17931c40dfb 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -695,11 +695,6 @@ static inline void *kvcalloc(size_t n, size_t size, gfp_t flags)
+
+ extern void kvfree(const void *addr);
+
+-static inline atomic_t *compound_mapcount_ptr(struct page *page)
+-{
+- return &page[1].compound_mapcount;
+-}
+-
+ static inline int compound_mapcount(struct page *page)
+ {
+ VM_BUG_ON_PAGE(!PageCompound(page), page);
+diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
+index 6a7a1083b6fb..a1b50f12e648 100644
+--- a/include/linux/mm_types.h
++++ b/include/linux/mm_types.h
+@@ -221,6 +221,11 @@ struct page {
+ #endif
+ } _struct_page_alignment;
+
++static inline atomic_t *compound_mapcount_ptr(struct page *page)
++{
++ return &page[1].compound_mapcount;
++}
++
+ /*
+ * Used for sizing the vmemmap region on some architectures
+ */
+diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
+index f91cb8898ff0..1bf83c8fcaa7 100644
+--- a/include/linux/page-flags.h
++++ b/include/linux/page-flags.h
+@@ -622,12 +622,28 @@ static inline int PageTransCompound(struct page *page)
+ *
+ * Unlike PageTransCompound, this is safe to be called only while
+ * split_huge_pmd() cannot run from under us, like if protected by the
+- * MMU notifier, otherwise it may result in page->_mapcount < 0 false
++ * MMU notifier, otherwise the page->_mapcount check may produce false
+ * positives.
++ *
++ * We have to treat page cache THP differently since every subpage of it
++ * would get _mapcount inc'ed once it is PMD mapped. But, it may be PTE
++ * mapped in the current process, so compare the subpage's _mapcount with
++ * compound_mapcount to filter out the PTE mapped case.
+ */
+ static inline int PageTransCompoundMap(struct page *page)
+ {
+- return PageTransCompound(page) && atomic_read(&page->_mapcount) < 0;
++ struct page *head;
++
++ if (!PageTransCompound(page))
++ return 0;
++
++ if (PageAnon(page))
++ return atomic_read(&page->_mapcount) < 0;
++
++ head = compound_head(page);
++ /* File THP is PMD mapped and not PTE mapped */
++ return atomic_read(&page->_mapcount) ==
++ atomic_read(compound_mapcount_ptr(head));
+ }
+
+ /*
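The new file-THP branch relies on an invariant of the mapcount scheme: a PMD mapping bumps compound_mapcount and every subpage's _mapcount together, while a PTE mapping bumps only the subpage counter. A condensed sketch of the comparison, with head and page as in the hunk above:

	/* Equal counters => the subpage is mapped only via the PMD. */
	head = compound_head(page);
	pmd_mapped_only = atomic_read(&page->_mapcount) ==
			  atomic_read(compound_mapcount_ptr(head));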
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index e4b3fb4bb77c..ce7055259877 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -139,6 +139,11 @@ static inline void sk_msg_apply_bytes(struct sk_psock *psock, u32 bytes)
+ }
+ }
+
++static inline u32 sk_msg_iter_dist(u32 start, u32 end)
++{
++ return end >= start ? end - start : end + (MAX_MSG_FRAGS - start);
++}
++
+ #define sk_msg_iter_var_prev(var) \
+ do { \
+ if (var == 0) \
+@@ -198,9 +203,7 @@ static inline u32 sk_msg_elem_used(const struct sk_msg *msg)
+ if (sk_msg_full(msg))
+ return MAX_MSG_FRAGS;
+
+- return msg->sg.end >= msg->sg.start ?
+- msg->sg.end - msg->sg.start :
+- msg->sg.end + (MAX_MSG_FRAGS - msg->sg.start);
++ return sk_msg_iter_dist(msg->sg.start, msg->sg.end);
+ }
+
+ static inline struct scatterlist *sk_msg_elem(struct sk_msg *msg, int which)
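sk_msg_iter_dist() measures how many ring slots lie between two indices on the MAX_MSG_FRAGS ring, handling wraparound. A worked sketch of the two cases, with illustrative index values:

	/* Non-wrapped: start=2,  end=9 -> 9 - 2 = 7 elements in use.
	 * Wrapped:     start=14, end=3 -> 3 + (MAX_MSG_FRAGS - 14).
	 */
	u32 used = sk_msg_iter_dist(msg->sg.start, msg->sg.end);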
+diff --git a/include/linux/sunrpc/bc_xprt.h b/include/linux/sunrpc/bc_xprt.h
+index 87d27e13d885..d796058cdff2 100644
+--- a/include/linux/sunrpc/bc_xprt.h
++++ b/include/linux/sunrpc/bc_xprt.h
+@@ -64,6 +64,11 @@ static inline int xprt_setup_backchannel(struct rpc_xprt *xprt,
+ return 0;
+ }
+
++static inline void xprt_destroy_backchannel(struct rpc_xprt *xprt,
++ unsigned int max_reqs)
++{
++}
++
+ static inline bool svc_is_backchannel(const struct svc_rqst *rqstp)
+ {
+ return false;
+diff --git a/include/net/bonding.h b/include/net/bonding.h
+index f7fe45689142..be404b272d6b 100644
+--- a/include/net/bonding.h
++++ b/include/net/bonding.h
+@@ -159,7 +159,6 @@ struct slave {
+ unsigned long target_last_arp_rx[BOND_MAX_ARP_TARGETS];
+ s8 link; /* one of BOND_LINK_XXXX */
+ s8 link_new_state; /* one of BOND_LINK_XXXX */
+- s8 new_link;
+ u8 backup:1, /* indicates backup slave. Value corresponds with
+ BOND_STATE_ACTIVE and BOND_STATE_BACKUP */
+ inactive:1, /* indicates inactive slave */
+@@ -239,6 +238,7 @@ struct bonding {
+ struct dentry *debug_dir;
+ #endif /* CONFIG_DEBUG_FS */
+ struct rtnl_link_stats64 bond_stats;
++ struct lock_class_key stats_lock_key;
+ };
+
+ #define bond_slave_get_rcu(dev) \
+@@ -549,7 +549,7 @@ static inline void bond_propose_link_state(struct slave *slave, int state)
+
+ static inline void bond_commit_link_state(struct slave *slave, bool notify)
+ {
+- if (slave->link == slave->link_new_state)
++ if (slave->link_new_state == BOND_LINK_NOCHANGE)
+ return;
+
+ slave->link = slave->link_new_state;
+diff --git a/include/net/ip_vs.h b/include/net/ip_vs.h
+index 3759167f91f5..078887c8c586 100644
+--- a/include/net/ip_vs.h
++++ b/include/net/ip_vs.h
+@@ -889,6 +889,7 @@ struct netns_ipvs {
+ struct delayed_work defense_work; /* Work handler */
+ int drop_rate;
+ int drop_counter;
++ int old_secure_tcp;
+ atomic_t dropentry;
+ /* locks in ctl.c */
+ spinlock_t dropentry_lock; /* drop entry handling */
+diff --git a/include/net/neighbour.h b/include/net/neighbour.h
+index 50a67bd6a434..b8452cc0e059 100644
+--- a/include/net/neighbour.h
++++ b/include/net/neighbour.h
+@@ -439,8 +439,8 @@ static inline int neigh_event_send(struct neighbour *neigh, struct sk_buff *skb)
+ {
+ unsigned long now = jiffies;
+
+- if (neigh->used != now)
+- neigh->used = now;
++ if (READ_ONCE(neigh->used) != now)
++ WRITE_ONCE(neigh->used, now);
+ if (!(neigh->nud_state&(NUD_CONNECTED|NUD_DELAY|NUD_PROBE)))
+ return __neigh_event_send(neigh, skb);
+ return 0;
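neigh->used is updated locklessly from multiple contexts, so the plain load/store pair was a data race; the marked accesses keep the compiler from tearing or fusing them. The general idiom, sketched with a hypothetical object:

	unsigned long now = jiffies;

	/* One marked load, one marked store, no lock taken; the test
	 * avoids dirtying the cache line when the value is current.
	 */
	if (READ_ONCE(obj->used) != now)
		WRITE_ONCE(obj->used, now);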
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 7f7a4d9137e5..f9a8bcb37da0 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -801,7 +801,8 @@ struct nft_expr_ops {
+ */
+ struct nft_expr {
+ const struct nft_expr_ops *ops;
+- unsigned char data[];
++ unsigned char data[]
++ __attribute__((aligned(__alignof__(u64))));
+ };
+
+ static inline void *nft_expr_priv(const struct nft_expr *expr)
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 58b1fbc884a7..50ea27d0f7c5 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -13,6 +13,7 @@
+ #include <linux/refcount.h>
+ #include <linux/workqueue.h>
+ #include <linux/mutex.h>
++#include <linux/hashtable.h>
+ #include <net/gen_stats.h>
+ #include <net/rtnetlink.h>
+ #include <net/flow_offload.h>
+@@ -359,6 +360,7 @@ struct tcf_proto {
+ bool deleting;
+ refcount_t refcnt;
+ struct rcu_head rcu;
++ struct hlist_node destroy_ht_node;
+ };
+
+ struct qdisc_skb_cb {
+@@ -409,6 +411,8 @@ struct tcf_block {
+ struct list_head filter_chain_list;
+ } chain0;
+ struct rcu_head rcu;
++ DECLARE_HASHTABLE(proto_destroy_ht, 7);
++ struct mutex proto_destroy_lock; /* Lock for proto_destroy hashtable. */
+ };
+
+ #ifdef CONFIG_PROVE_LOCKING
+diff --git a/include/net/sock.h b/include/net/sock.h
+index b03f96370f8e..59220da079e8 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -2331,7 +2331,7 @@ static inline ktime_t sock_read_timestamp(struct sock *sk)
+
+ return kt;
+ #else
+- return sk->sk_stamp;
++ return READ_ONCE(sk->sk_stamp);
+ #endif
+ }
+
+@@ -2342,7 +2342,7 @@ static inline void sock_write_timestamp(struct sock *sk, ktime_t kt)
+ sk->sk_stamp = kt;
+ write_sequnlock(&sk->sk_stamp_seq);
+ #else
+- sk->sk_stamp = kt;
++ WRITE_ONCE(sk->sk_stamp, kt);
+ #endif
+ }
+
+diff --git a/include/net/tls.h b/include/net/tls.h
+index 41b2d41bb1b8..bd1ef1a915e9 100644
+--- a/include/net/tls.h
++++ b/include/net/tls.h
+@@ -40,6 +40,7 @@
+ #include <linux/socket.h>
+ #include <linux/tcp.h>
+ #include <linux/skmsg.h>
++#include <linux/mutex.h>
+ #include <linux/netdevice.h>
+
+ #include <net/tcp.h>
+@@ -268,6 +269,10 @@ struct tls_context {
+
+ bool in_tcp_sendpages;
+ bool pending_open_record_frags;
++
++ struct mutex tx_lock; /* protects partially_sent_* fields and
++ * per-type TX fields
++ */
+ unsigned long flags;
+
+ /* cache cold stuff */
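tx_lock is taken by the later hunks in net/tls/ around every sendmsg/sendpage and partial-record push, since those paths can otherwise race while the socket lock is dropped. The ordering they establish, sketched:

	mutex_lock(&tls_ctx->tx_lock);	/* always before the socket lock */
	lock_sock(sk);
	/* ... build or push records, touching partially_sent_* state ... */
	release_sock(sk);
	mutex_unlock(&tls_ctx->tx_lock);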
+diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
+index 4f225175cb91..77d8df451805 100644
+--- a/include/rdma/ib_verbs.h
++++ b/include/rdma/ib_verbs.h
+@@ -327,7 +327,7 @@ struct ib_tm_caps {
+
+ struct ib_cq_init_attr {
+ unsigned int cqe;
+- int comp_vector;
++ u32 comp_vector;
+ u32 flags;
+ };
+
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 66088a9e9b9e..ef0e1e3e66f4 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -502,7 +502,7 @@ int bpf_remove_insns(struct bpf_prog *prog, u32 off, u32 cnt)
+ return WARN_ON_ONCE(bpf_adj_branches(prog, off, off + cnt, off, false));
+ }
+
+-void bpf_prog_kallsyms_del_subprogs(struct bpf_prog *fp)
++static void bpf_prog_kallsyms_del_subprogs(struct bpf_prog *fp)
+ {
+ int i;
+
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 272071e9112f..aac966b32c42 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -1316,24 +1316,32 @@ static void __bpf_prog_put_rcu(struct rcu_head *rcu)
+ {
+ struct bpf_prog_aux *aux = container_of(rcu, struct bpf_prog_aux, rcu);
+
++ kvfree(aux->func_info);
+ free_used_maps(aux);
+ bpf_prog_uncharge_memlock(aux->prog);
+ security_bpf_prog_free(aux);
+ bpf_prog_free(aux->prog);
+ }
+
++static void __bpf_prog_put_noref(struct bpf_prog *prog, bool deferred)
++{
++ bpf_prog_kallsyms_del_all(prog);
++ btf_put(prog->aux->btf);
++ bpf_prog_free_linfo(prog);
++
++ if (deferred)
++ call_rcu(&prog->aux->rcu, __bpf_prog_put_rcu);
++ else
++ __bpf_prog_put_rcu(&prog->aux->rcu);
++}
++
+ static void __bpf_prog_put(struct bpf_prog *prog, bool do_idr_lock)
+ {
+ if (atomic_dec_and_test(&prog->aux->refcnt)) {
+ perf_event_bpf_event(prog, PERF_BPF_EVENT_PROG_UNLOAD, 0);
+ /* bpf_prog_free_id() must be called first */
+ bpf_prog_free_id(prog, do_idr_lock);
+- bpf_prog_kallsyms_del_all(prog);
+- btf_put(prog->aux->btf);
+- kvfree(prog->aux->func_info);
+- bpf_prog_free_linfo(prog);
+-
+- call_rcu(&prog->aux->rcu, __bpf_prog_put_rcu);
++ __bpf_prog_put_noref(prog, true);
+ }
+ }
+
+@@ -1730,11 +1738,12 @@ static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr)
+ return err;
+
+ free_used_maps:
+- bpf_prog_free_linfo(prog);
+- kvfree(prog->aux->func_info);
+- btf_put(prog->aux->btf);
+- bpf_prog_kallsyms_del_subprogs(prog);
+- free_used_maps(prog->aux);
++ /* In case we have subprogs, we need to wait for a grace
++ * period before we can tear down JIT memory since symbols
++ * are already exposed under kallsyms.
++ */
++ __bpf_prog_put_noref(prog, prog->aux->func_cnt);
++ return err;
+ free_prog:
+ bpf_prog_uncharge_memlock(prog);
+ free_prog_sec:
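The error path in bpf_prog_load() used to tear everything down synchronously even though subprog symbols may already be visible in kallsyms; routing it through __bpf_prog_put_noref() defers the free through RCU exactly when subprogs exist (prog->aux->func_cnt != 0). The deferred-vs-immediate shape, sketched generically with a hypothetical obj and obj_free_rcu():

	/* Published state (e.g. kallsyms entries) forces a grace period;
	 * otherwise the object can be freed synchronously.
	 */
	if (deferred)
		call_rcu(&obj->rcu, obj_free_rcu);
	else
		obj_free_rcu(&obj->rcu);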
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 5aa37531ce76..a8122c405603 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -786,7 +786,8 @@ static int generate_sched_domains(cpumask_var_t **domains,
+ cpumask_subset(cp->cpus_allowed, top_cpuset.effective_cpus))
+ continue;
+
+- if (is_sched_load_balance(cp))
++ if (is_sched_load_balance(cp) &&
++ !cpumask_empty(cp->effective_cpus))
+ csa[csn++] = cp;
+
+ /* skip @cp's subtree if not a partition root */
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index e84c0873559e..df186823dda6 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -2339,7 +2339,18 @@ void __init boot_cpu_hotplug_init(void)
+ this_cpu_write(cpuhp_state.state, CPUHP_ONLINE);
+ }
+
+-enum cpu_mitigations cpu_mitigations __ro_after_init = CPU_MITIGATIONS_AUTO;
++/*
++ * These are used for a global "mitigations=" cmdline option for toggling
++ * optional CPU mitigations.
++ */
++enum cpu_mitigations {
++ CPU_MITIGATIONS_OFF,
++ CPU_MITIGATIONS_AUTO,
++ CPU_MITIGATIONS_AUTO_NOSMT,
++};
++
++static enum cpu_mitigations cpu_mitigations __ro_after_init =
++ CPU_MITIGATIONS_AUTO;
+
+ static int __init mitigations_parse_cmdline(char *arg)
+ {
+@@ -2356,3 +2367,17 @@ static int __init mitigations_parse_cmdline(char *arg)
+ return 0;
+ }
+ early_param("mitigations", mitigations_parse_cmdline);
++
++/* mitigations=off */
++bool cpu_mitigations_off(void)
++{
++ return cpu_mitigations == CPU_MITIGATIONS_OFF;
++}
++EXPORT_SYMBOL_GPL(cpu_mitigations_off);
++
++/* mitigations=auto,nosmt */
++bool cpu_mitigations_auto_nosmt(void)
++{
++ return cpu_mitigations == CPU_MITIGATIONS_AUTO_NOSMT;
++}
++EXPORT_SYMBOL_GPL(cpu_mitigations_auto_nosmt);
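With the enum and variable now private to kernel/cpu.c, callers can only observe the command-line state through the two exported predicates, which also makes them usable from modules. A hedged sketch of a typical call site; example_select_mitigation() is hypothetical:

	static void example_select_mitigation(void)
	{
		if (cpu_mitigations_off())
			return;		/* booted with mitigations=off */

		if (cpu_mitigations_auto_nosmt())
			pr_info("mitigation will also disable SMT\n");

		/* ... enable the default mitigation ... */
	}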
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 3647097e6783..8bbd39585301 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -2586,7 +2586,35 @@ noinline static int copy_clone_args_from_user(struct kernel_clone_args *kargs,
+ return 0;
+ }
+
+-static bool clone3_args_valid(const struct kernel_clone_args *kargs)
++/**
++ * clone3_stack_valid - check and prepare stack
++ * @kargs: kernel clone args
++ *
++ * Verify that the stack arguments userspace gave us are sane.
++ * In addition, set the stack direction for userspace since it's easy for us to
++ * determine.
++ */
++static inline bool clone3_stack_valid(struct kernel_clone_args *kargs)
++{
++ if (kargs->stack == 0) {
++ if (kargs->stack_size > 0)
++ return false;
++ } else {
++ if (kargs->stack_size == 0)
++ return false;
++
++ if (!access_ok((void __user *)kargs->stack, kargs->stack_size))
++ return false;
++
++#if !defined(CONFIG_STACK_GROWSUP) && !defined(CONFIG_IA64)
++ kargs->stack += kargs->stack_size;
++#endif
++ }
++
++ return true;
++}
++
++static bool clone3_args_valid(struct kernel_clone_args *kargs)
+ {
+ /*
+ * All lower bits of the flag word are taken.
+@@ -2606,6 +2634,9 @@ static bool clone3_args_valid(const struct kernel_clone_args *kargs)
+ kargs->exit_signal)
+ return false;
+
++ if (!clone3_stack_valid(kargs))
++ return false;
++
+ return true;
+ }
+
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index f751ce0b783e..93a8749763ea 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -1927,7 +1927,7 @@ next_level:
+ static int
+ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *attr)
+ {
+- enum s_alloc alloc_state;
++ enum s_alloc alloc_state = sa_none;
+ struct sched_domain *sd;
+ struct s_data d;
+ struct rq *rq = NULL;
+@@ -1935,6 +1935,9 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
+ struct sched_domain_topology_level *tl_asym;
+ bool has_asym = false;
+
++ if (WARN_ON(cpumask_empty(cpu_map)))
++ goto error;
++
+ alloc_state = __visit_domain_allocation_hell(&d, cpu_map);
+ if (alloc_state != sa_rootdomain)
+ goto error;
+@@ -2005,7 +2008,7 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
+ rcu_read_unlock();
+
+ if (has_asym)
+- static_branch_enable_cpuslocked(&sched_asym_cpucapacity);
++ static_branch_inc_cpuslocked(&sched_asym_cpucapacity);
+
+ if (rq && sched_debug_enabled) {
+ pr_info("root domain span: %*pbl (max cpu_capacity = %lu)\n",
+@@ -2100,8 +2103,12 @@ int sched_init_domains(const struct cpumask *cpu_map)
+ */
+ static void detach_destroy_domains(const struct cpumask *cpu_map)
+ {
++ unsigned int cpu = cpumask_any(cpu_map);
+ int i;
+
++ if (rcu_access_pointer(per_cpu(sd_asym_cpucapacity, cpu)))
++ static_branch_dec_cpuslocked(&sched_asym_cpucapacity);
++
+ rcu_read_lock();
+ for_each_cpu(i, cpu_map)
+ cpu_attach_domain(NULL, &def_root_domain, i);
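Replacing enable with inc/dec turns sched_asym_cpucapacity into a reference-counted static key: each asymmetric root domain built takes a reference and each detach drops one, so hotplug on one domain no longer flips the key off for another. The generic pattern, assuming cpus_read_lock is held as the _cpuslocked suffix requires and with a hypothetical fast_path():

	static DEFINE_STATIC_KEY_FALSE(feature_key);

	static_branch_inc_cpuslocked(&feature_key);	/* attach: take a reference */
	/* ... */
	static_branch_dec_cpuslocked(&feature_key);	/* detach: drop it */

	if (static_branch_unlikely(&feature_key))
		fast_path();				/* patched in while refs > 0 */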
+diff --git a/kernel/time/vsyscall.c b/kernel/time/vsyscall.c
+index 4bc37ac3bb05..5ee0f7709410 100644
+--- a/kernel/time/vsyscall.c
++++ b/kernel/time/vsyscall.c
+@@ -110,8 +110,7 @@ void update_vsyscall(struct timekeeper *tk)
+ nsec = nsec + tk->wall_to_monotonic.tv_nsec;
+ vdso_ts->sec += __iter_div_u64_rem(nsec, NSEC_PER_SEC, &vdso_ts->nsec);
+
+- if (__arch_use_vsyscall(vdata))
+- update_vdso_data(vdata, tk);
++ update_vdso_data(vdata, tk);
+
+ __arch_update_vsyscall(vdata, tk);
+
+@@ -124,10 +123,8 @@ void update_vsyscall_tz(void)
+ {
+ struct vdso_data *vdata = __arch_get_k_vdso_data();
+
+- if (__arch_use_vsyscall(vdata)) {
+- vdata[CS_HRES_COARSE].tz_minuteswest = sys_tz.tz_minuteswest;
+- vdata[CS_HRES_COARSE].tz_dsttime = sys_tz.tz_dsttime;
+- }
++ vdata[CS_HRES_COARSE].tz_minuteswest = sys_tz.tz_minuteswest;
++ vdata[CS_HRES_COARSE].tz_dsttime = sys_tz.tz_dsttime;
+
+ __arch_sync_vdso_data(vdata);
+ }
+diff --git a/lib/dump_stack.c b/lib/dump_stack.c
+index 5cff72f18c4a..33ffbf308853 100644
+--- a/lib/dump_stack.c
++++ b/lib/dump_stack.c
+@@ -106,7 +106,12 @@ retry:
+ was_locked = 1;
+ } else {
+ local_irq_restore(flags);
+- cpu_relax();
++ /*
++ * Wait for the lock to release before jumping to
++ * atomic_cmpxchg() in order to mitigate the thundering herd
++ * problem.
++ */
++ do { cpu_relax(); } while (atomic_read(&dump_lock) != -1);
+ goto retry;
+ }
+
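Before the fix every waiting CPU retried atomic_cmpxchg() immediately, bouncing the dump_lock cache line; spinning on a plain read first is the classic test-and-test-and-set shape. Condensed, with the re-entrancy handling of the real function omitted:

	/* dump_lock is -1 when free; the holder stores its CPU id. */
	while (atomic_cmpxchg(&dump_lock, -1, cpu) != -1) {
		do
			cpu_relax();			/* cheap, cache-local */
		while (atomic_read(&dump_lock) != -1);	/* wait until it looks free */
	}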
+diff --git a/mm/filemap.c b/mm/filemap.c
+index d0cf700bf201..d9572593e5c7 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -408,7 +408,8 @@ int __filemap_fdatawrite_range(struct address_space *mapping, loff_t start,
+ .range_end = end,
+ };
+
+- if (!mapping_cap_writeback_dirty(mapping))
++ if (!mapping_cap_writeback_dirty(mapping) ||
++ !mapping_tagged(mapping, PAGECACHE_TAG_DIRTY))
+ return 0;
+
+ wbc_attach_fdatawrite_inode(&wbc, mapping->host);
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index eaaa21b23215..5ce6d8728e2b 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -1016,12 +1016,13 @@ static void collapse_huge_page(struct mm_struct *mm,
+
+ anon_vma_lock_write(vma->anon_vma);
+
+- pte = pte_offset_map(pmd, address);
+- pte_ptl = pte_lockptr(mm, pmd);
+-
+ mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, NULL, mm,
+ address, address + HPAGE_PMD_SIZE);
+ mmu_notifier_invalidate_range_start(&range);
++
++ pte = pte_offset_map(pmd, address);
++ pte_ptl = pte_lockptr(mm, pmd);
++
+ pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
+ /*
+ * After this gup_fast can't run anymore. This also removes
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index e18108b2b786..89fd0829ebd0 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -486,7 +486,7 @@ ino_t page_cgroup_ino(struct page *page)
+ unsigned long ino = 0;
+
+ rcu_read_lock();
+- if (PageHead(page) && PageSlab(page))
++ if (PageSlab(page) && !PageTail(page))
+ memcg = memcg_from_slab_page(page);
+ else
+ memcg = READ_ONCE(page->mem_cgroup);
+@@ -2407,6 +2407,15 @@ retry:
+ goto retry;
+ }
+
++ /*
++ * Memcg doesn't have a dedicated reserve for atomic
++ * allocations. But like the global atomic pool, we need to
++ * put the burden of reclaim on regular allocation requests
++ * and let these go through as privileged allocations.
++ */
++ if (gfp_mask & __GFP_ATOMIC)
++ goto force;
++
+ /*
+ * Unlike in global OOM situations, memcg is not in a physical
+ * memory shortage. Allow dying and OOM-killed tasks to
+@@ -4763,12 +4772,6 @@ static void __mem_cgroup_free(struct mem_cgroup *memcg)
+ {
+ int node;
+
+- /*
+- * Flush percpu vmstats and vmevents to guarantee the value correctness
+- * on parent's and all ancestor levels.
+- */
+- memcg_flush_percpu_vmstats(memcg, false);
+- memcg_flush_percpu_vmevents(memcg);
+ for_each_node(node)
+ free_mem_cgroup_per_node_info(memcg, node);
+ free_percpu(memcg->vmstats_percpu);
+@@ -4779,6 +4782,12 @@ static void __mem_cgroup_free(struct mem_cgroup *memcg)
+ static void mem_cgroup_free(struct mem_cgroup *memcg)
+ {
+ memcg_wb_domain_exit(memcg);
++ /*
++ * Flush percpu vmstats and vmevents to guarantee the value correctness
++ * on parent's and all ancestor levels.
++ */
++ memcg_flush_percpu_vmstats(memcg, false);
++ memcg_flush_percpu_vmevents(memcg);
+ __mem_cgroup_free(memcg);
+ }
+
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index bfa5815e59f8..702a1d02fc62 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -1946,6 +1946,14 @@ void __init page_alloc_init_late(void)
+ /* Block until all are initialised */
+ wait_for_completion(&pgdat_init_all_done_comp);
+
++ /*
++ * The number of managed pages has changed due to the initialisation
++ * so the pcpu batch and high limits need to be updated or the limits
++ * will be artificially small.
++ */
++ for_each_populated_zone(zone)
++ zone_pcp_update(zone);
++
+ /*
+ * We initialized the rest of the deferred pages. Permanently disable
+ * on-demand struct page initialization.
+@@ -8479,7 +8487,6 @@ void free_contig_range(unsigned long pfn, unsigned int nr_pages)
+ WARN(count != 0, "%d pages are still in use!\n", count);
+ }
+
+-#ifdef CONFIG_MEMORY_HOTPLUG
+ /*
+ * The zone indicated has a new number of managed_pages; batch sizes and percpu
+ * page high values need to be recalculated.
+@@ -8493,7 +8500,6 @@ void __meminit zone_pcp_update(struct zone *zone)
+ per_cpu_ptr(zone->pageset, cpu));
+ mutex_unlock(&pcp_batch_high_lock);
+ }
+-#endif
+
+ void zone_pcp_reset(struct zone *zone)
+ {
+diff --git a/mm/slab.h b/mm/slab.h
+index 9057b8056b07..8d830e722398 100644
+--- a/mm/slab.h
++++ b/mm/slab.h
+@@ -259,8 +259,8 @@ static inline struct kmem_cache *memcg_root_cache(struct kmem_cache *s)
+ * Expects a pointer to a slab page. Please note, that PageSlab() check
+ * isn't sufficient, as it returns true also for tail compound slab pages,
+ * which do not have slab_cache pointer set.
+- * So this function assumes that the page can pass PageHead() and PageSlab()
+- * checks.
++ * So this function assumes that the page can pass PageSlab() && !PageTail()
++ * check.
+ *
+ * The kmem_cache can be reparented asynchronously. The caller must ensure
+ * the memcg lifetime, e.g. by taking rcu_read_lock() or cgroup_mutex.
+diff --git a/mm/vmstat.c b/mm/vmstat.c
+index fd7e16ca6996..3c85f760bdd0 100644
+--- a/mm/vmstat.c
++++ b/mm/vmstat.c
+@@ -1970,7 +1970,7 @@ void __init init_mm_internals(void)
+ #endif
+ #ifdef CONFIG_PROC_FS
+ proc_create_seq("buddyinfo", 0444, NULL, &fragmentation_op);
+- proc_create_seq("pagetypeinfo", 0444, NULL, &pagetypeinfo_op);
++ proc_create_seq("pagetypeinfo", 0400, NULL, &pagetypeinfo_op);
+ proc_create_seq("vmstat", 0444, NULL, &vmstat_op);
+ proc_create_seq("zoneinfo", 0444, NULL, &zoneinfo_op);
+ #endif
+diff --git a/net/core/lwt_bpf.c b/net/core/lwt_bpf.c
+index f93785e5833c..74cfb8b5ab33 100644
+--- a/net/core/lwt_bpf.c
++++ b/net/core/lwt_bpf.c
+@@ -88,11 +88,16 @@ static int bpf_lwt_input_reroute(struct sk_buff *skb)
+ int err = -EINVAL;
+
+ if (skb->protocol == htons(ETH_P_IP)) {
++ struct net_device *dev = skb_dst(skb)->dev;
+ struct iphdr *iph = ip_hdr(skb);
+
++ dev_hold(dev);
++ skb_dst_drop(skb);
+ err = ip_route_input_noref(skb, iph->daddr, iph->saddr,
+- iph->tos, skb_dst(skb)->dev);
++ iph->tos, dev);
++ dev_put(dev);
+ } else if (skb->protocol == htons(ETH_P_IPV6)) {
++ skb_dst_drop(skb);
+ err = ipv6_stub->ipv6_route_input(skb);
+ } else {
+ err = -EAFNOSUPPORT;
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 6832eeb4b785..c10e3e56006e 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -271,18 +271,28 @@ void sk_msg_trim(struct sock *sk, struct sk_msg *msg, int len)
+
+ msg->sg.data[i].length -= trim;
+ sk_mem_uncharge(sk, trim);
++ /* Adjust copybreak if it falls into the trimmed part of last buf */
++ if (msg->sg.curr == i && msg->sg.copybreak > msg->sg.data[i].length)
++ msg->sg.copybreak = msg->sg.data[i].length;
+ out:
+- /* If we trim data before curr pointer update copybreak and current
+- * so that any future copy operations start at new copy location.
++ sk_msg_iter_var_next(i);
++ msg->sg.end = i;
++
++ /* If we trim data a full sg elem before the curr pointer, update
++ * copybreak and curr so that any future copy operations
++ * start at the new copy location.
+ * However trimmed data that has not yet been used in a copy op
+ * does not require an update.
+ */
+- if (msg->sg.curr >= i) {
++ if (!msg->sg.size) {
++ msg->sg.curr = msg->sg.start;
++ msg->sg.copybreak = 0;
++ } else if (sk_msg_iter_dist(msg->sg.start, msg->sg.curr) >=
++ sk_msg_iter_dist(msg->sg.start, msg->sg.end)) {
++ sk_msg_iter_var_prev(i);
+ msg->sg.curr = i;
+ msg->sg.copybreak = msg->sg.data[i].length;
+ }
+- sk_msg_iter_var_next(i);
+- msg->sg.end = i;
+ }
+ EXPORT_SYMBOL_GPL(sk_msg_trim);
+
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index 0913a090b2bf..f1888c683426 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -1814,8 +1814,8 @@ int fib_sync_down_addr(struct net_device *dev, __be32 local)
+ int ret = 0;
+ unsigned int hash = fib_laddr_hashfn(local);
+ struct hlist_head *head = &fib_info_laddrhash[hash];
++ int tb_id = l3mdev_fib_table(dev) ? : RT_TABLE_MAIN;
+ struct net *net = dev_net(dev);
+- int tb_id = l3mdev_fib_table(dev);
+ struct fib_info *fi;
+
+ if (!fib_info_laddrhash || local == 0)
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 546088e50815..2b25a0de0364 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -621,6 +621,7 @@ static void rt6_probe(struct fib6_nh *fib6_nh)
+ {
+ struct __rt6_probe_work *work = NULL;
+ const struct in6_addr *nh_gw;
++ unsigned long last_probe;
+ struct neighbour *neigh;
+ struct net_device *dev;
+ struct inet6_dev *idev;
+@@ -639,6 +640,7 @@ static void rt6_probe(struct fib6_nh *fib6_nh)
+ nh_gw = &fib6_nh->fib_nh_gw6;
+ dev = fib6_nh->fib_nh_dev;
+ rcu_read_lock_bh();
++ last_probe = READ_ONCE(fib6_nh->last_probe);
+ idev = __in6_dev_get(dev);
+ neigh = __ipv6_neigh_lookup_noref(dev, nh_gw);
+ if (neigh) {
+@@ -654,13 +656,15 @@ static void rt6_probe(struct fib6_nh *fib6_nh)
+ __neigh_set_probe_once(neigh);
+ }
+ write_unlock(&neigh->lock);
+- } else if (time_after(jiffies, fib6_nh->last_probe +
++ } else if (time_after(jiffies, last_probe +
+ idev->cnf.rtr_probe_interval)) {
+ work = kmalloc(sizeof(*work), GFP_ATOMIC);
+ }
+
+- if (work) {
+- fib6_nh->last_probe = jiffies;
++ if (!work || cmpxchg(&fib6_nh->last_probe,
++ last_probe, jiffies) != last_probe) {
++ kfree(work);
++ } else {
+ INIT_WORK(&work->work, rt6_probe_deferred);
+ work->target = *nh_gw;
+ dev_hold(dev);
+@@ -3385,6 +3389,9 @@ int fib6_nh_init(struct net *net, struct fib6_nh *fib6_nh,
+ int err;
+
+ fib6_nh->fib_nh_family = AF_INET6;
++#ifdef CONFIG_IPV6_ROUTER_PREF
++ fib6_nh->last_probe = jiffies;
++#endif
+
+ err = -ENODEV;
+ if (cfg->fc_ifindex) {
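Sampling last_probe once with READ_ONCE() and re-checking it with cmpxchg() elects a single winner among CPUs that race past the time_after() test; the losers free their work item instead of queuing a duplicate probe. The lockless-claim idiom, with a hypothetical object and interval:

	unsigned long old = READ_ONCE(obj->stamp);

	if (time_after(jiffies, old + interval) &&
	    cmpxchg(&obj->stamp, old, jiffies) == old) {
		/* we won the race: only this CPU schedules the work */
	}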
+diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
+index e64d5f9a89dd..e7288eab7512 100644
+--- a/net/netfilter/ipset/ip_set_core.c
++++ b/net/netfilter/ipset/ip_set_core.c
+@@ -2069,8 +2069,9 @@ ip_set_sockfn_get(struct sock *sk, int optval, void __user *user, int *len)
+ }
+
+ req_version->version = IPSET_PROTOCOL;
+- ret = copy_to_user(user, req_version,
+- sizeof(struct ip_set_req_version));
++ if (copy_to_user(user, req_version,
++ sizeof(struct ip_set_req_version)))
++ ret = -EFAULT;
+ goto done;
+ }
+ case IP_SET_OP_GET_BYNAME: {
+@@ -2129,7 +2130,8 @@ ip_set_sockfn_get(struct sock *sk, int optval, void __user *user, int *len)
+ } /* end of switch(op) */
+
+ copy:
+- ret = copy_to_user(user, data, copylen);
++ if (copy_to_user(user, data, copylen))
++ ret = -EFAULT;
+
+ done:
+ vfree(data);
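copy_to_user() returns the number of bytes it could not copy, not an errno, so storing its result in ret could leak a positive count back to the caller as a bogus "success" value. The correct idiom, as now used in both hunks:

	if (copy_to_user(user, data, copylen))
		ret = -EFAULT;	/* never return the raw remaining-byte count */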
+diff --git a/net/netfilter/ipset/ip_set_hash_ipmac.c b/net/netfilter/ipset/ip_set_hash_ipmac.c
+index 24d8f4df4230..4ce563eb927d 100644
+--- a/net/netfilter/ipset/ip_set_hash_ipmac.c
++++ b/net/netfilter/ipset/ip_set_hash_ipmac.c
+@@ -209,7 +209,7 @@ hash_ipmac6_kadt(struct ip_set *set, const struct sk_buff *skb,
+ (skb_mac_header(skb) + ETH_HLEN) > skb->data)
+ return -EINVAL;
+
+- if (opt->flags & IPSET_DIM_ONE_SRC)
++ if (opt->flags & IPSET_DIM_TWO_SRC)
+ ether_addr_copy(e.ether, eth_hdr(skb)->h_source);
+ else
+ ether_addr_copy(e.ether, eth_hdr(skb)->h_dest);
+diff --git a/net/netfilter/ipvs/ip_vs_app.c b/net/netfilter/ipvs/ip_vs_app.c
+index 4515056ef1c2..f9b16f2b2219 100644
+--- a/net/netfilter/ipvs/ip_vs_app.c
++++ b/net/netfilter/ipvs/ip_vs_app.c
+@@ -193,21 +193,29 @@ struct ip_vs_app *register_ip_vs_app(struct netns_ipvs *ipvs, struct ip_vs_app *
+
+ mutex_lock(&__ip_vs_app_mutex);
+
++ /* increase the module use count */
++ if (!ip_vs_use_count_inc()) {
++ err = -ENOENT;
++ goto out_unlock;
++ }
++
+ list_for_each_entry(a, &ipvs->app_list, a_list) {
+ if (!strcmp(app->name, a->name)) {
+ err = -EEXIST;
++ /* decrease the module use count */
++ ip_vs_use_count_dec();
+ goto out_unlock;
+ }
+ }
+ a = kmemdup(app, sizeof(*app), GFP_KERNEL);
+ if (!a) {
+ err = -ENOMEM;
++ /* decrease the module use count */
++ ip_vs_use_count_dec();
+ goto out_unlock;
+ }
+ INIT_LIST_HEAD(&a->incs_list);
+ list_add(&a->a_list, &ipvs->app_list);
+- /* increase the module use count */
+- ip_vs_use_count_inc();
+
+ out_unlock:
+ mutex_unlock(&__ip_vs_app_mutex);
+diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
+index 060565e7d227..e29b00f514a0 100644
+--- a/net/netfilter/ipvs/ip_vs_ctl.c
++++ b/net/netfilter/ipvs/ip_vs_ctl.c
+@@ -93,7 +93,6 @@ static bool __ip_vs_addr_is_local_v6(struct net *net,
+ static void update_defense_level(struct netns_ipvs *ipvs)
+ {
+ struct sysinfo i;
+- static int old_secure_tcp = 0;
+ int availmem;
+ int nomem;
+ int to_change = -1;
+@@ -174,35 +173,35 @@ static void update_defense_level(struct netns_ipvs *ipvs)
+ spin_lock(&ipvs->securetcp_lock);
+ switch (ipvs->sysctl_secure_tcp) {
+ case 0:
+- if (old_secure_tcp >= 2)
++ if (ipvs->old_secure_tcp >= 2)
+ to_change = 0;
+ break;
+ case 1:
+ if (nomem) {
+- if (old_secure_tcp < 2)
++ if (ipvs->old_secure_tcp < 2)
+ to_change = 1;
+ ipvs->sysctl_secure_tcp = 2;
+ } else {
+- if (old_secure_tcp >= 2)
++ if (ipvs->old_secure_tcp >= 2)
+ to_change = 0;
+ }
+ break;
+ case 2:
+ if (nomem) {
+- if (old_secure_tcp < 2)
++ if (ipvs->old_secure_tcp < 2)
+ to_change = 1;
+ } else {
+- if (old_secure_tcp >= 2)
++ if (ipvs->old_secure_tcp >= 2)
+ to_change = 0;
+ ipvs->sysctl_secure_tcp = 1;
+ }
+ break;
+ case 3:
+- if (old_secure_tcp < 2)
++ if (ipvs->old_secure_tcp < 2)
+ to_change = 1;
+ break;
+ }
+- old_secure_tcp = ipvs->sysctl_secure_tcp;
++ ipvs->old_secure_tcp = ipvs->sysctl_secure_tcp;
+ if (to_change >= 0)
+ ip_vs_protocol_timeout_change(ipvs,
+ ipvs->sysctl_secure_tcp > 1);
+@@ -1275,7 +1274,8 @@ ip_vs_add_service(struct netns_ipvs *ipvs, struct ip_vs_service_user_kern *u,
+ struct ip_vs_service *svc = NULL;
+
+ /* increase the module use count */
+- ip_vs_use_count_inc();
++ if (!ip_vs_use_count_inc())
++ return -ENOPROTOOPT;
+
+ /* Lookup the scheduler by 'u->sched_name' */
+ if (strcmp(u->sched_name, "none")) {
+@@ -2434,9 +2434,6 @@ do_ip_vs_set_ctl(struct sock *sk, int cmd, void __user *user, unsigned int len)
+ if (copy_from_user(arg, user, len) != 0)
+ return -EFAULT;
+
+- /* increase the module use count */
+- ip_vs_use_count_inc();
+-
+ /* Handle daemons since they have another lock */
+ if (cmd == IP_VS_SO_SET_STARTDAEMON ||
+ cmd == IP_VS_SO_SET_STOPDAEMON) {
+@@ -2449,13 +2446,13 @@ do_ip_vs_set_ctl(struct sock *sk, int cmd, void __user *user, unsigned int len)
+ ret = -EINVAL;
+ if (strscpy(cfg.mcast_ifn, dm->mcast_ifn,
+ sizeof(cfg.mcast_ifn)) <= 0)
+- goto out_dec;
++ return ret;
+ cfg.syncid = dm->syncid;
+ ret = start_sync_thread(ipvs, &cfg, dm->state);
+ } else {
+ ret = stop_sync_thread(ipvs, dm->state);
+ }
+- goto out_dec;
++ return ret;
+ }
+
+ mutex_lock(&__ip_vs_mutex);
+@@ -2550,10 +2547,6 @@ do_ip_vs_set_ctl(struct sock *sk, int cmd, void __user *user, unsigned int len)
+
+ out_unlock:
+ mutex_unlock(&__ip_vs_mutex);
+- out_dec:
+- /* decrease the module use count */
+- ip_vs_use_count_dec();
+-
+ return ret;
+ }
+
+diff --git a/net/netfilter/ipvs/ip_vs_pe.c b/net/netfilter/ipvs/ip_vs_pe.c
+index 8e104dff7abc..166c669f0763 100644
+--- a/net/netfilter/ipvs/ip_vs_pe.c
++++ b/net/netfilter/ipvs/ip_vs_pe.c
+@@ -68,7 +68,8 @@ int register_ip_vs_pe(struct ip_vs_pe *pe)
+ struct ip_vs_pe *tmp;
+
+ /* increase the module use count */
+- ip_vs_use_count_inc();
++ if (!ip_vs_use_count_inc())
++ return -ENOENT;
+
+ mutex_lock(&ip_vs_pe_mutex);
+ /* Make sure that the pe with this name doesn't exist
+diff --git a/net/netfilter/ipvs/ip_vs_sched.c b/net/netfilter/ipvs/ip_vs_sched.c
+index 2f9d5cd5daee..d4903723be7e 100644
+--- a/net/netfilter/ipvs/ip_vs_sched.c
++++ b/net/netfilter/ipvs/ip_vs_sched.c
+@@ -179,7 +179,8 @@ int register_ip_vs_scheduler(struct ip_vs_scheduler *scheduler)
+ }
+
+ /* increase the module use count */
+- ip_vs_use_count_inc();
++ if (!ip_vs_use_count_inc())
++ return -ENOENT;
+
+ mutex_lock(&ip_vs_sched_mutex);
+
+diff --git a/net/netfilter/ipvs/ip_vs_sync.c b/net/netfilter/ipvs/ip_vs_sync.c
+index a4a78c4b06de..8dc892a9dc91 100644
+--- a/net/netfilter/ipvs/ip_vs_sync.c
++++ b/net/netfilter/ipvs/ip_vs_sync.c
+@@ -1762,6 +1762,10 @@ int start_sync_thread(struct netns_ipvs *ipvs, struct ipvs_sync_daemon_cfg *c,
+ IP_VS_DBG(7, "Each ip_vs_sync_conn entry needs %zd bytes\n",
+ sizeof(struct ip_vs_sync_conn_v0));
+
++ /* increase the module use count */
++ if (!ip_vs_use_count_inc())
++ return -ENOPROTOOPT;
++
+ /* Do not hold one mutex and then block on another */

+ for (;;) {
+ rtnl_lock();
+@@ -1892,9 +1896,6 @@ int start_sync_thread(struct netns_ipvs *ipvs, struct ipvs_sync_daemon_cfg *c,
+ mutex_unlock(&ipvs->sync_mutex);
+ rtnl_unlock();
+
+- /* increase the module use count */
+- ip_vs_use_count_inc();
+-
+ return 0;
+
+ out:
+@@ -1924,11 +1925,17 @@ out:
+ }
+ kfree(ti);
+ }
++
++ /* decrease the module use count */
++ ip_vs_use_count_dec();
+ return result;
+
+ out_early:
+ mutex_unlock(&ipvs->sync_mutex);
+ rtnl_unlock();
++
++ /* decrease the module use count */
++ ip_vs_use_count_dec();
+ return result;
+ }
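The series moves the module reference in front of anything that can outlive the syscall (registered apps, schedulers, persistence engines, sync threads) and drops it on every failure path; ip_vs_use_count_inc() appears to be a thin wrapper around try_module_get(THIS_MODULE). The generic shape, with a hypothetical worker starter:

	if (!try_module_get(THIS_MODULE))	/* module already going away */
		return -ENOENT;

	err = start_long_lived_work();
	if (err) {
		module_put(THIS_MODULE);	/* unwind on every failure path */
		return err;
	}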
+
+diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
+index a0b4bf654de2..4c2f8959de58 100644
+--- a/net/netfilter/nf_flow_table_core.c
++++ b/net/netfilter/nf_flow_table_core.c
+@@ -201,6 +201,8 @@ int flow_offload_add(struct nf_flowtable *flow_table, struct flow_offload *flow)
+ {
+ int err;
+
++ flow->timeout = (u32)jiffies + NF_FLOW_TIMEOUT;
++
+ err = rhashtable_insert_fast(&flow_table->rhashtable,
+ &flow->tuplehash[0].node,
+ nf_flow_offload_rhash_params);
+@@ -217,7 +219,6 @@ int flow_offload_add(struct nf_flowtable *flow_table, struct flow_offload *flow)
+ return err;
+ }
+
+- flow->timeout = (u32)jiffies + NF_FLOW_TIMEOUT;
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(flow_offload_add);
+diff --git a/net/netfilter/nft_payload.c b/net/netfilter/nft_payload.c
+index 22a80eb60222..5cb2d8908d2a 100644
+--- a/net/netfilter/nft_payload.c
++++ b/net/netfilter/nft_payload.c
+@@ -161,13 +161,21 @@ static int nft_payload_offload_ll(struct nft_offload_ctx *ctx,
+
+ switch (priv->offset) {
+ case offsetof(struct ethhdr, h_source):
++ if (priv->len != ETH_ALEN)
++ return -EOPNOTSUPP;
++
+ NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_ETH_ADDRS, eth_addrs,
+ src, ETH_ALEN, reg);
+ break;
+ case offsetof(struct ethhdr, h_dest):
++ if (priv->len != ETH_ALEN)
++ return -EOPNOTSUPP;
++
+ NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_ETH_ADDRS, eth_addrs,
+ dst, ETH_ALEN, reg);
+ break;
++ default:
++ return -EOPNOTSUPP;
+ }
+
+ return 0;
+@@ -181,14 +189,23 @@ static int nft_payload_offload_ip(struct nft_offload_ctx *ctx,
+
+ switch (priv->offset) {
+ case offsetof(struct iphdr, saddr):
++ if (priv->len != sizeof(struct in_addr))
++ return -EOPNOTSUPP;
++
+ NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_IPV4_ADDRS, ipv4, src,
+ sizeof(struct in_addr), reg);
+ break;
+ case offsetof(struct iphdr, daddr):
++ if (priv->len != sizeof(struct in_addr))
++ return -EOPNOTSUPP;
++
+ NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_IPV4_ADDRS, ipv4, dst,
+ sizeof(struct in_addr), reg);
+ break;
+ case offsetof(struct iphdr, protocol):
++ if (priv->len != sizeof(__u8))
++ return -EOPNOTSUPP;
++
+ NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_BASIC, basic, ip_proto,
+ sizeof(__u8), reg);
+ nft_offload_set_dependency(ctx, NFT_OFFLOAD_DEP_TRANSPORT);
+@@ -208,14 +225,23 @@ static int nft_payload_offload_ip6(struct nft_offload_ctx *ctx,
+
+ switch (priv->offset) {
+ case offsetof(struct ipv6hdr, saddr):
++ if (priv->len != sizeof(struct in6_addr))
++ return -EOPNOTSUPP;
++
+ NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_IPV6_ADDRS, ipv6, src,
+ sizeof(struct in6_addr), reg);
+ break;
+ case offsetof(struct ipv6hdr, daddr):
++ if (priv->len != sizeof(struct in6_addr))
++ return -EOPNOTSUPP;
++
+ NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_IPV6_ADDRS, ipv6, dst,
+ sizeof(struct in6_addr), reg);
+ break;
+ case offsetof(struct ipv6hdr, nexthdr):
++ if (priv->len != sizeof(__u8))
++ return -EOPNOTSUPP;
++
+ NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_BASIC, basic, ip_proto,
+ sizeof(__u8), reg);
+ nft_offload_set_dependency(ctx, NFT_OFFLOAD_DEP_TRANSPORT);
+@@ -255,10 +281,16 @@ static int nft_payload_offload_tcp(struct nft_offload_ctx *ctx,
+
+ switch (priv->offset) {
+ case offsetof(struct tcphdr, source):
++ if (priv->len != sizeof(__be16))
++ return -EOPNOTSUPP;
++
+ NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_PORTS, tp, src,
+ sizeof(__be16), reg);
+ break;
+ case offsetof(struct tcphdr, dest):
++ if (priv->len != sizeof(__be16))
++ return -EOPNOTSUPP;
++
+ NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_PORTS, tp, dst,
+ sizeof(__be16), reg);
+ break;
+@@ -277,10 +309,16 @@ static int nft_payload_offload_udp(struct nft_offload_ctx *ctx,
+
+ switch (priv->offset) {
+ case offsetof(struct udphdr, source):
++ if (priv->len != sizeof(__be16))
++ return -EOPNOTSUPP;
++
+ NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_PORTS, tp, src,
+ sizeof(__be16), reg);
+ break;
+ case offsetof(struct udphdr, dest):
++ if (priv->len != sizeof(__be16))
++ return -EOPNOTSUPP;
++
+ NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_PORTS, tp, dst,
+ sizeof(__be16), reg);
+ break;
+diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c
+index 17e6ca62f1be..afde0d763039 100644
+--- a/net/nfc/netlink.c
++++ b/net/nfc/netlink.c
+@@ -1099,7 +1099,6 @@ static int nfc_genl_llc_set_params(struct sk_buff *skb, struct genl_info *info)
+
+ local = nfc_llcp_find_local(dev);
+ if (!local) {
+- nfc_put_device(dev);
+ rc = -ENODEV;
+ goto exit;
+ }
+@@ -1159,7 +1158,6 @@ static int nfc_genl_llc_sdreq(struct sk_buff *skb, struct genl_info *info)
+
+ local = nfc_llcp_find_local(dev);
+ if (!local) {
+- nfc_put_device(dev);
+ rc = -ENODEV;
+ goto exit;
+ }
+diff --git a/net/openvswitch/vport-internal_dev.c b/net/openvswitch/vport-internal_dev.c
+index d2437b5b2f6a..baa33103108a 100644
+--- a/net/openvswitch/vport-internal_dev.c
++++ b/net/openvswitch/vport-internal_dev.c
+@@ -137,7 +137,7 @@ static void do_setup(struct net_device *netdev)
+ netdev->priv_flags |= IFF_LIVE_ADDR_CHANGE | IFF_OPENVSWITCH |
+ IFF_NO_QUEUE;
+ netdev->needs_free_netdev = true;
+- netdev->priv_destructor = internal_dev_destructor;
++ netdev->priv_destructor = NULL;
+ netdev->ethtool_ops = &internal_dev_ethtool_ops;
+ netdev->rtnl_link_ops = &internal_dev_link_ops;
+
+@@ -159,7 +159,6 @@ static struct vport *internal_dev_create(const struct vport_parms *parms)
+ struct internal_dev *internal_dev;
+ struct net_device *dev;
+ int err;
+- bool free_vport = true;
+
+ vport = ovs_vport_alloc(0, &ovs_internal_vport_ops, parms);
+ if (IS_ERR(vport)) {
+@@ -190,10 +189,9 @@ static struct vport *internal_dev_create(const struct vport_parms *parms)
+
+ rtnl_lock();
+ err = register_netdevice(vport->dev);
+- if (err) {
+- free_vport = false;
++ if (err)
+ goto error_unlock;
+- }
++ vport->dev->priv_destructor = internal_dev_destructor;
+
+ dev_set_promiscuity(vport->dev, 1);
+ rtnl_unlock();
+@@ -207,8 +205,7 @@ error_unlock:
+ error_free_netdev:
+ free_netdev(dev);
+ error_free_vport:
+- if (free_vport)
+- ovs_vport_free(vport);
++ ovs_vport_free(vport);
+ error:
+ return ERR_PTR(err);
+ }
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 6b12883e04b8..5c1769999a92 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -21,6 +21,7 @@
+ #include <linux/slab.h>
+ #include <linux/idr.h>
+ #include <linux/rhashtable.h>
++#include <linux/jhash.h>
+ #include <net/net_namespace.h>
+ #include <net/sock.h>
+ #include <net/netlink.h>
+@@ -45,6 +46,62 @@ static LIST_HEAD(tcf_proto_base);
+ /* Protects list of registered TC modules. It is pure SMP lock. */
+ static DEFINE_RWLOCK(cls_mod_lock);
+
++static u32 destroy_obj_hashfn(const struct tcf_proto *tp)
++{
++ return jhash_3words(tp->chain->index, tp->prio,
++ (__force __u32)tp->protocol, 0);
++}
++
++static void tcf_proto_signal_destroying(struct tcf_chain *chain,
++ struct tcf_proto *tp)
++{
++ struct tcf_block *block = chain->block;
++
++ mutex_lock(&block->proto_destroy_lock);
++ hash_add_rcu(block->proto_destroy_ht, &tp->destroy_ht_node,
++ destroy_obj_hashfn(tp));
++ mutex_unlock(&block->proto_destroy_lock);
++}
++
++static bool tcf_proto_cmp(const struct tcf_proto *tp1,
++ const struct tcf_proto *tp2)
++{
++ return tp1->chain->index == tp2->chain->index &&
++ tp1->prio == tp2->prio &&
++ tp1->protocol == tp2->protocol;
++}
++
++static bool tcf_proto_exists_destroying(struct tcf_chain *chain,
++ struct tcf_proto *tp)
++{
++ u32 hash = destroy_obj_hashfn(tp);
++ struct tcf_proto *iter;
++ bool found = false;
++
++ rcu_read_lock();
++ hash_for_each_possible_rcu(chain->block->proto_destroy_ht, iter,
++ destroy_ht_node, hash) {
++ if (tcf_proto_cmp(tp, iter)) {
++ found = true;
++ break;
++ }
++ }
++ rcu_read_unlock();
++
++ return found;
++}
++
++static void
++tcf_proto_signal_destroyed(struct tcf_chain *chain, struct tcf_proto *tp)
++{
++ struct tcf_block *block = chain->block;
++
++ mutex_lock(&block->proto_destroy_lock);
++ if (hash_hashed(&tp->destroy_ht_node))
++ hash_del_rcu(&tp->destroy_ht_node);
++ mutex_unlock(&block->proto_destroy_lock);
++}
++
+ /* Find classifier type by string name */
+
+ static const struct tcf_proto_ops *__tcf_proto_lookup_ops(const char *kind)
+@@ -232,9 +289,11 @@ static void tcf_proto_get(struct tcf_proto *tp)
+ static void tcf_chain_put(struct tcf_chain *chain);
+
+ static void tcf_proto_destroy(struct tcf_proto *tp, bool rtnl_held,
+- struct netlink_ext_ack *extack)
++ bool sig_destroy, struct netlink_ext_ack *extack)
+ {
+ tp->ops->destroy(tp, rtnl_held, extack);
++ if (sig_destroy)
++ tcf_proto_signal_destroyed(tp->chain, tp);
+ tcf_chain_put(tp->chain);
+ module_put(tp->ops->owner);
+ kfree_rcu(tp, rcu);
+@@ -244,7 +303,7 @@ static void tcf_proto_put(struct tcf_proto *tp, bool rtnl_held,
+ struct netlink_ext_ack *extack)
+ {
+ if (refcount_dec_and_test(&tp->refcnt))
+- tcf_proto_destroy(tp, rtnl_held, extack);
++ tcf_proto_destroy(tp, rtnl_held, true, extack);
+ }
+
+ static int walker_check_empty(struct tcf_proto *tp, void *fh,
+@@ -368,6 +427,7 @@ static bool tcf_chain_detach(struct tcf_chain *chain)
+ static void tcf_block_destroy(struct tcf_block *block)
+ {
+ mutex_destroy(&block->lock);
++ mutex_destroy(&block->proto_destroy_lock);
+ kfree_rcu(block, rcu);
+ }
+
+@@ -543,6 +603,12 @@ static void tcf_chain_flush(struct tcf_chain *chain, bool rtnl_held)
+
+ mutex_lock(&chain->filter_chain_lock);
+ tp = tcf_chain_dereference(chain->filter_chain, chain);
++ while (tp) {
++ tp_next = rcu_dereference_protected(tp->next, 1);
++ tcf_proto_signal_destroying(chain, tp);
++ tp = tp_next;
++ }
++ tp = tcf_chain_dereference(chain->filter_chain, chain);
+ RCU_INIT_POINTER(chain->filter_chain, NULL);
+ tcf_chain0_head_change(chain, NULL);
+ chain->flushing = true;
+@@ -1002,6 +1068,7 @@ static struct tcf_block *tcf_block_create(struct net *net, struct Qdisc *q,
+ return ERR_PTR(-ENOMEM);
+ }
+ mutex_init(&block->lock);
++ mutex_init(&block->proto_destroy_lock);
+ flow_block_init(&block->flow_block);
+ INIT_LIST_HEAD(&block->chain_list);
+ INIT_LIST_HEAD(&block->owner_list);
+@@ -1754,6 +1821,12 @@ static struct tcf_proto *tcf_chain_tp_insert_unique(struct tcf_chain *chain,
+
+ mutex_lock(&chain->filter_chain_lock);
+
++ if (tcf_proto_exists_destroying(chain, tp_new)) {
++ mutex_unlock(&chain->filter_chain_lock);
++ tcf_proto_destroy(tp_new, rtnl_held, false, NULL);
++ return ERR_PTR(-EAGAIN);
++ }
++
+ tp = tcf_chain_tp_find(chain, &chain_info,
+ protocol, prio, false);
+ if (!tp)
+@@ -1761,10 +1834,10 @@ static struct tcf_proto *tcf_chain_tp_insert_unique(struct tcf_chain *chain,
+ mutex_unlock(&chain->filter_chain_lock);
+
+ if (tp) {
+- tcf_proto_destroy(tp_new, rtnl_held, NULL);
++ tcf_proto_destroy(tp_new, rtnl_held, false, NULL);
+ tp_new = tp;
+ } else if (err) {
+- tcf_proto_destroy(tp_new, rtnl_held, NULL);
++ tcf_proto_destroy(tp_new, rtnl_held, false, NULL);
+ tp_new = ERR_PTR(err);
+ }
+
+@@ -1802,6 +1875,7 @@ static void tcf_chain_tp_delete_empty(struct tcf_chain *chain,
+ return;
+ }
+
++ tcf_proto_signal_destroying(chain, tp);
+ next = tcf_chain_dereference(chain_info.next, chain);
+ if (tp == chain->filter_chain)
+ tcf_chain0_head_change(chain, next);
+@@ -2321,6 +2395,7 @@ static int tc_del_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
+ err = -EINVAL;
+ goto errout_locked;
+ } else if (t->tcm_handle == 0) {
++ tcf_proto_signal_destroying(chain, tp);
+ tcf_chain_tp_remove(chain, &chain_info, tp);
+ mutex_unlock(&chain->filter_chain_lock);
+
+diff --git a/net/smc/smc_pnet.c b/net/smc/smc_pnet.c
+index bab2da8cf17a..a20594056fef 100644
+--- a/net/smc/smc_pnet.c
++++ b/net/smc/smc_pnet.c
+@@ -376,8 +376,6 @@ static int smc_pnet_fill_entry(struct net *net,
+ return 0;
+
+ error:
+- if (pnetelem->ndev)
+- dev_put(pnetelem->ndev);
+ return rc;
+ }
+
+diff --git a/net/sunrpc/backchannel_rqst.c b/net/sunrpc/backchannel_rqst.c
+index 339e8c077c2d..195b40c5dae4 100644
+--- a/net/sunrpc/backchannel_rqst.c
++++ b/net/sunrpc/backchannel_rqst.c
+@@ -220,7 +220,7 @@ void xprt_destroy_bc(struct rpc_xprt *xprt, unsigned int max_reqs)
+ goto out;
+
+ spin_lock_bh(&xprt->bc_pa_lock);
+- xprt->bc_alloc_max -= max_reqs;
++ xprt->bc_alloc_max -= min(max_reqs, xprt->bc_alloc_max);
+ list_for_each_entry_safe(req, tmp, &xprt->bc_pa_list, rq_bc_pa_list) {
+ dprintk("RPC: req=%p\n", req);
+ list_del(&req->rq_bc_pa_list);
+@@ -307,8 +307,8 @@ void xprt_free_bc_rqst(struct rpc_rqst *req)
+ */
+ dprintk("RPC: Last session removed req=%p\n", req);
+ xprt_free_allocation(req);
+- return;
+ }
++ xprt_put(xprt);
+ }
+
+ /*
+@@ -339,7 +339,7 @@ found:
+ spin_unlock(&xprt->bc_pa_lock);
+ if (new) {
+ if (req != new)
+- xprt_free_bc_rqst(new);
++ xprt_free_allocation(new);
+ break;
+ } else if (req)
+ break;
+@@ -368,6 +368,7 @@ void xprt_complete_bc_request(struct rpc_rqst *req, uint32_t copied)
+ set_bit(RPC_BC_PA_IN_USE, &req->rq_bc_pa_state);
+
+ dprintk("RPC: add callback request to list\n");
++ xprt_get(xprt);
+ spin_lock(&bc_serv->sv_cb_lock);
+ list_add(&req->rq_bc_list, &bc_serv->sv_cb_list);
+ wake_up(&bc_serv->sv_cb_waitq);
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index 20631d64312c..ac796f3d4240 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -1935,6 +1935,11 @@ static void xprt_destroy_cb(struct work_struct *work)
+ rpc_destroy_wait_queue(&xprt->sending);
+ rpc_destroy_wait_queue(&xprt->backlog);
+ kfree(xprt->servername);
++ /*
++ * Destroy any existing back channel
++ */
++ xprt_destroy_backchannel(xprt, UINT_MAX);
++
+ /*
+ * Tear down transport state and free the rpc_xprt
+ */
+diff --git a/net/sunrpc/xprtrdma/backchannel.c b/net/sunrpc/xprtrdma/backchannel.c
+index 59e624b1d7a0..7cccaab9a17a 100644
+--- a/net/sunrpc/xprtrdma/backchannel.c
++++ b/net/sunrpc/xprtrdma/backchannel.c
+@@ -165,6 +165,7 @@ void xprt_rdma_bc_free_rqst(struct rpc_rqst *rqst)
+ spin_lock(&xprt->bc_pa_lock);
+ list_add_tail(&rqst->rq_bc_pa_list, &xprt->bc_pa_list);
+ spin_unlock(&xprt->bc_pa_lock);
++ xprt_put(xprt);
+ }
+
+ static struct rpc_rqst *rpcrdma_bc_rqst_get(struct rpcrdma_xprt *r_xprt)
+@@ -261,6 +262,7 @@ void rpcrdma_bc_receive_call(struct rpcrdma_xprt *r_xprt,
+
+ /* Queue rqst for ULP's callback service */
+ bc_serv = xprt->bc_serv;
++ xprt_get(xprt);
+ spin_lock(&bc_serv->sv_cb_lock);
+ list_add(&rqst->rq_bc_list, &bc_serv->sv_cb_list);
+ spin_unlock(&bc_serv->sv_cb_lock);
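The RDMA backchannel gets the same refcount rule as the socket one above: a request queued on bc_serv->sv_cb_list must hold a transport reference (taken at enqueue), dropped when the request goes back to the free list in xprt_rdma_bc_free_rqst(). A compact standalone sketch of the enqueue-pins/free-releases pairing, with made-up names:

#include <assert.h>

struct xprt_demo { int refcnt; };

static void xprt_get_demo(struct xprt_demo *x) { x->refcnt++; }
static void xprt_put_demo(struct xprt_demo *x) { assert(--x->refcnt >= 0); }

static void enqueue_rqst(struct xprt_demo *x) { xprt_get_demo(x); /* + list_add */ }
static void free_rqst(struct xprt_demo *x)    { /* list_del + */ xprt_put_demo(x); }

int main(void)
{
	struct xprt_demo x = { .refcnt = 1 };

	enqueue_rqst(&x);    /* queued request pins the transport */
	free_rqst(&x);       /* balance restored on free */
	assert(x.refcnt == 1);
	return 0;
}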
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index 43922d86e510..6b0c9b798d9c 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -482,8 +482,10 @@ last_record:
+ int tls_device_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
+ {
+ unsigned char record_type = TLS_RECORD_TYPE_DATA;
++ struct tls_context *tls_ctx = tls_get_ctx(sk);
+ int rc;
+
++ mutex_lock(&tls_ctx->tx_lock);
+ lock_sock(sk);
+
+ if (unlikely(msg->msg_controllen)) {
+@@ -497,12 +499,14 @@ int tls_device_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
+
+ out:
+ release_sock(sk);
++ mutex_unlock(&tls_ctx->tx_lock);
+ return rc;
+ }
+
+ int tls_device_sendpage(struct sock *sk, struct page *page,
+ int offset, size_t size, int flags)
+ {
++ struct tls_context *tls_ctx = tls_get_ctx(sk);
+ struct iov_iter msg_iter;
+ char *kaddr = kmap(page);
+ struct kvec iov;
+@@ -511,6 +515,7 @@ int tls_device_sendpage(struct sock *sk, struct page *page,
+ if (flags & MSG_SENDPAGE_NOTLAST)
+ flags |= MSG_MORE;
+
++ mutex_lock(&tls_ctx->tx_lock);
+ lock_sock(sk);
+
+ if (flags & MSG_OOB) {
+@@ -527,6 +532,7 @@ int tls_device_sendpage(struct sock *sk, struct page *page,
+
+ out:
+ release_sock(sk);
++ mutex_unlock(&tls_ctx->tx_lock);
+ return rc;
+ }
+
+@@ -575,9 +581,11 @@ static int tls_device_push_pending_record(struct sock *sk, int flags)
+
+ void tls_device_write_space(struct sock *sk, struct tls_context *ctx)
+ {
+- if (!sk->sk_write_pending && tls_is_partially_sent_record(ctx)) {
++ if (tls_is_partially_sent_record(ctx)) {
+ gfp_t sk_allocation = sk->sk_allocation;
+
++ WARN_ON_ONCE(sk->sk_write_pending);
++
+ sk->sk_allocation = GFP_ATOMIC;
+ tls_push_partial_record(sk, ctx,
+ MSG_DONTWAIT | MSG_NOSIGNAL |
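These tls_device hunks, together with the tls_sw ones below, introduce a per-context tx_lock taken outside lock_sock() on every transmit path, replacing the old wait_on_pending_writer() logic. Because sendmsg, sendpage and the deferred tx_work handler all take the two locks in the same order, the transmit paths serialize cleanly with no lock-order inversion. A userspace analogue of the ordering, assuming pthreads (illustrative only):

#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t tx_lock   = PTHREAD_MUTEX_INITIALIZER; /* outer */
static pthread_mutex_t sock_lock = PTHREAD_MUTEX_INITIALIZER; /* inner */

static void tx_path(void (*body)(void))
{
	pthread_mutex_lock(&tx_lock);     /* like mutex_lock(&tls_ctx->tx_lock) */
	pthread_mutex_lock(&sock_lock);   /* like lock_sock(sk) */
	if (body)
		body();
	pthread_mutex_unlock(&sock_lock); /* like release_sock(sk) */
	pthread_mutex_unlock(&tx_lock);   /* reverse order on the way out */
}

int main(void)
{
	tx_path(NULL);  /* sendmsg, sendpage and tx_work all use this shape */
	return 0;
}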
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 43252a801c3f..9313dd51023a 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -258,6 +258,7 @@ void tls_ctx_free(struct tls_context *ctx)
+
+ memzero_explicit(&ctx->crypto_send, sizeof(ctx->crypto_send));
+ memzero_explicit(&ctx->crypto_recv, sizeof(ctx->crypto_recv));
++ mutex_destroy(&ctx->tx_lock);
+ kfree(ctx);
+ }
+
+@@ -615,6 +616,7 @@ static struct tls_context *create_ctx(struct sock *sk)
+ ctx->getsockopt = sk->sk_prot->getsockopt;
+ ctx->sk_proto_close = sk->sk_prot->close;
+ ctx->unhash = sk->sk_prot->unhash;
++ mutex_init(&ctx->tx_lock);
+ return ctx;
+ }
+
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 91d21b048a9b..881f06f465f8 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -897,15 +897,9 @@ int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
+ if (msg->msg_flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL))
+ return -ENOTSUPP;
+
++ mutex_lock(&tls_ctx->tx_lock);
+ lock_sock(sk);
+
+- /* Wait till there is any pending write on socket */
+- if (unlikely(sk->sk_write_pending)) {
+- ret = wait_on_pending_writer(sk, &timeo);
+- if (unlikely(ret))
+- goto send_end;
+- }
+-
+ if (unlikely(msg->msg_controllen)) {
+ ret = tls_proccess_cmsg(sk, msg, &record_type);
+ if (ret) {
+@@ -1091,6 +1085,7 @@ send_end:
+ ret = sk_stream_error(sk, msg->msg_flags, ret);
+
+ release_sock(sk);
++ mutex_unlock(&tls_ctx->tx_lock);
+ return copied ? copied : ret;
+ }
+
+@@ -1114,13 +1109,6 @@ static int tls_sw_do_sendpage(struct sock *sk, struct page *page,
+ eor = !(flags & (MSG_MORE | MSG_SENDPAGE_NOTLAST));
+ sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk);
+
+- /* Wait till there is any pending write on socket */
+- if (unlikely(sk->sk_write_pending)) {
+- ret = wait_on_pending_writer(sk, &timeo);
+- if (unlikely(ret))
+- goto sendpage_end;
+- }
+-
+ /* Call the sk_stream functions to manage the sndbuf mem. */
+ while (size > 0) {
+ size_t copy, required_size;
+@@ -1219,15 +1207,18 @@ sendpage_end:
+ int tls_sw_sendpage(struct sock *sk, struct page *page,
+ int offset, size_t size, int flags)
+ {
++ struct tls_context *tls_ctx = tls_get_ctx(sk);
+ int ret;
+
+ if (flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL |
+ MSG_SENDPAGE_NOTLAST | MSG_SENDPAGE_NOPOLICY))
+ return -ENOTSUPP;
+
++ mutex_lock(&tls_ctx->tx_lock);
+ lock_sock(sk);
+ ret = tls_sw_do_sendpage(sk, page, offset, size, flags);
+ release_sock(sk);
++ mutex_unlock(&tls_ctx->tx_lock);
+ return ret;
+ }
+
+@@ -2172,9 +2163,11 @@ static void tx_work_handler(struct work_struct *work)
+
+ if (!test_and_clear_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask))
+ return;
++ mutex_lock(&tls_ctx->tx_lock);
+ lock_sock(sk);
+ tls_tx_records(sk, -1);
+ release_sock(sk);
++ mutex_unlock(&tls_ctx->tx_lock);
+ }
+
+ void tls_sw_write_space(struct sock *sk, struct tls_context *ctx)
+@@ -2182,12 +2175,9 @@ void tls_sw_write_space(struct sock *sk, struct tls_context *ctx)
+ struct tls_sw_context_tx *tx_ctx = tls_sw_ctx_tx(ctx);
+
+ /* Schedule the transmission if tx list is ready */
+- if (is_tx_ready(tx_ctx) && !sk->sk_write_pending) {
+- /* Schedule the transmission */
+- if (!test_and_set_bit(BIT_TX_SCHEDULED,
+- &tx_ctx->tx_bitmask))
+- schedule_delayed_work(&tx_ctx->tx_work.work, 0);
+- }
++ if (is_tx_ready(tx_ctx) &&
++ !test_and_set_bit(BIT_TX_SCHEDULED, &tx_ctx->tx_bitmask))
++ schedule_delayed_work(&tx_ctx->tx_work.work, 0);
+ }
+
+ void tls_sw_strparser_arm(struct sock *sk, struct tls_context *tls_ctx)
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index a7adffd062c7..058d59fceddd 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -870,9 +870,11 @@ virtio_transport_recv_connected(struct sock *sk,
+ if (le32_to_cpu(pkt->hdr.flags) & VIRTIO_VSOCK_SHUTDOWN_SEND)
+ vsk->peer_shutdown |= SEND_SHUTDOWN;
+ if (vsk->peer_shutdown == SHUTDOWN_MASK &&
+- vsock_stream_has_data(vsk) <= 0) {
+- sock_set_flag(sk, SOCK_DONE);
+- sk->sk_state = TCP_CLOSING;
++ vsock_stream_has_data(vsk) <= 0 &&
++ !sock_flag(sk, SOCK_DONE)) {
++ (void)virtio_transport_reset(vsk, NULL);
++
++ virtio_transport_do_close(vsk, true);
+ }
+ if (le32_to_cpu(pkt->hdr.flags))
+ sk->sk_state_change(sk);
+diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
+index 688aac7a6943..182f9eb48dde 100644
+--- a/net/xdp/xdp_umem.c
++++ b/net/xdp/xdp_umem.c
+@@ -26,6 +26,9 @@ void xdp_add_sk_umem(struct xdp_umem *umem, struct xdp_sock *xs)
+ {
+ unsigned long flags;
+
++ if (!xs->tx)
++ return;
++
+ spin_lock_irqsave(&umem->xsk_list_lock, flags);
+ list_add_rcu(&xs->list, &umem->xsk_list);
+ spin_unlock_irqrestore(&umem->xsk_list_lock, flags);
+@@ -35,6 +38,9 @@ void xdp_del_sk_umem(struct xdp_umem *umem, struct xdp_sock *xs)
+ {
+ unsigned long flags;
+
++ if (!xs->tx)
++ return;
++
+ spin_lock_irqsave(&umem->xsk_list_lock, flags);
+ list_del_rcu(&xs->list);
+ spin_unlock_irqrestore(&umem->xsk_list_lock, flags);
+diff --git a/sound/core/timer.c b/sound/core/timer.c
+index 6b724d2ee2de..59ae21b0bb93 100644
+--- a/sound/core/timer.c
++++ b/sound/core/timer.c
+@@ -284,11 +284,11 @@ int snd_timer_open(struct snd_timer_instance **ti,
+ goto unlock;
+ }
+ if (!list_empty(&timer->open_list_head)) {
+- timeri = list_entry(timer->open_list_head.next,
++ struct snd_timer_instance *t =
++ list_entry(timer->open_list_head.next,
+ struct snd_timer_instance, open_list);
+- if (timeri->flags & SNDRV_TIMER_IFLG_EXCLUSIVE) {
++ if (t->flags & SNDRV_TIMER_IFLG_EXCLUSIVE) {
+ err = -EBUSY;
+- timeri = NULL;
+ goto unlock;
+ }
+ }
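The new local 't' matters because the old code parked the peeked list head in 'timeri', the very pointer later handed back to the caller, and had to remember to NULL it on the -EBUSY path. With a scratch local, the output variable only ever receives a fully opened instance. A standalone sketch of that error-path discipline (made-up function, -16 standing in for -EBUSY):

#include <stdio.h>

static int open_instance(int head_is_exclusive, int **out)
{
	static int instance = 42;

	if (head_is_exclusive)      /* the peek used only a local, so   */
		return -16;         /* *out stays untouched on -EBUSY   */
	*out = &instance;
	return 0;
}

int main(void)
{
	int *ti = NULL;

	if (open_instance(1, &ti) < 0)
		printf("busy, ti still %p\n", (void *)ti);  /* NULL */
	return 0;
}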
+diff --git a/sound/firewire/bebob/bebob_focusrite.c b/sound/firewire/bebob/bebob_focusrite.c
+index 32b864bee25f..06d6a37cd853 100644
+--- a/sound/firewire/bebob/bebob_focusrite.c
++++ b/sound/firewire/bebob/bebob_focusrite.c
+@@ -27,6 +27,8 @@
+ #define SAFFIRE_CLOCK_SOURCE_SPDIF 1
+
+ /* clock sources as returned from register of Saffire Pro 10 and 26 */
++#define SAFFIREPRO_CLOCK_SOURCE_SELECT_MASK 0x000000ff
++#define SAFFIREPRO_CLOCK_SOURCE_DETECT_MASK 0x0000ff00
+ #define SAFFIREPRO_CLOCK_SOURCE_INTERNAL 0
+ #define SAFFIREPRO_CLOCK_SOURCE_SKIP 1 /* never used on hardware */
+ #define SAFFIREPRO_CLOCK_SOURCE_SPDIF 2
+@@ -189,6 +191,7 @@ saffirepro_both_clk_src_get(struct snd_bebob *bebob, unsigned int *id)
+ map = saffirepro_clk_maps[1];
+
+ /* In a case that this driver cannot handle the value of register. */
++ value &= SAFFIREPRO_CLOCK_SOURCE_SELECT_MASK;
+ if (value >= SAFFIREPRO_CLOCK_SOURCE_COUNT || map[value] < 0) {
+ err = -EIO;
+ goto end;
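The new SELECT/DETECT masks document that the Saffire Pro clock register packs the selected source in its low byte and detection flags above it; masking before the table lookup keeps a set detect bit from turning into an out-of-bounds map index. A standalone sketch of the same mask-then-range-check pattern (map contents invented):

#include <stdio.h>

#define SELECT_MASK 0x000000ffu          /* low byte = selected source */

static const int clk_map[] = { 0, -1, 1, 2 };  /* illustrative map */

static int decode_clock(unsigned int reg)
{
	unsigned int value = reg & SELECT_MASK;  /* drop detect bits */

	if (value >= sizeof(clk_map) / sizeof(clk_map[0]) || clk_map[value] < 0)
		return -5;                       /* -EIO in the driver */
	return clk_map[value];
}

int main(void)
{
	/* 0x0102: a detect bit is set; unmasked this would index past the map */
	printf("%d\n", decode_clock(0x00000102));   /* prints 1 */
	return 0;
}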
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index 6d1fb7c11f17..b7a1abb3e231 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -7604,7 +7604,7 @@ static void hp_callback(struct hda_codec *codec, struct hda_jack_callback *cb)
+ /* Delay enabling the HP amp, to let the mic-detection
+ * state machine run.
+ */
+- cancel_delayed_work_sync(&spec->unsol_hp_work);
++ cancel_delayed_work(&spec->unsol_hp_work);
+ schedule_delayed_work(&spec->unsol_hp_work, msecs_to_jiffies(500));
+ tbl = snd_hda_jack_tbl_get(codec, cb->nid);
+ if (tbl)
+diff --git a/sound/soc/sh/rcar/dma.c b/sound/soc/sh/rcar/dma.c
+index 0324a5c39619..28f65eba2bb4 100644
+--- a/sound/soc/sh/rcar/dma.c
++++ b/sound/soc/sh/rcar/dma.c
+@@ -508,10 +508,10 @@ static struct rsnd_mod_ops rsnd_dmapp_ops = {
+ #define RDMA_SSI_I_N(addr, i) (addr ##_reg - 0x00300000 + (0x40 * i) + 0x8)
+ #define RDMA_SSI_O_N(addr, i) (addr ##_reg - 0x00300000 + (0x40 * i) + 0xc)
+
+-#define RDMA_SSIU_I_N(addr, i, j) (addr ##_reg - 0x00441000 + (0x1000 * (i)) + (((j) / 4) * 0xA000) + (((j) % 4) * 0x400))
++#define RDMA_SSIU_I_N(addr, i, j) (addr ##_reg - 0x00441000 + (0x1000 * (i)) + (((j) / 4) * 0xA000) + (((j) % 4) * 0x400) - (0x4000 * ((i) / 9) * ((j) / 4)))
+ #define RDMA_SSIU_O_N(addr, i, j) RDMA_SSIU_I_N(addr, i, j)
+
+-#define RDMA_SSIU_I_P(addr, i, j) (addr ##_reg - 0x00141000 + (0x1000 * (i)) + (((j) / 4) * 0xA000) + (((j) % 4) * 0x400))
++#define RDMA_SSIU_I_P(addr, i, j) (addr ##_reg - 0x00141000 + (0x1000 * (i)) + (((j) / 4) * 0xA000) + (((j) % 4) * 0x400) - (0x4000 * ((i) / 9) * ((j) / 4)))
+ #define RDMA_SSIU_O_P(addr, i, j) RDMA_SSIU_I_P(addr, i, j)
+
+ #define RDMA_SRC_I_N(addr, i) (addr ##_reg - 0x00500000 + (0x400 * i))
+diff --git a/sound/soc/sof/intel/hda-stream.c b/sound/soc/sof/intel/hda-stream.c
+index 2c7447188402..0c11fceb28a7 100644
+--- a/sound/soc/sof/intel/hda-stream.c
++++ b/sound/soc/sof/intel/hda-stream.c
+@@ -190,7 +190,7 @@ hda_dsp_stream_get(struct snd_sof_dev *sdev, int direction)
+ * Workaround to address a known issue with host DMA that results
+ * in xruns during pause/release in capture scenarios.
+ */
+- if (!IS_ENABLED(SND_SOC_SOF_HDA_ALWAYS_ENABLE_DMI_L1))
++ if (!IS_ENABLED(CONFIG_SND_SOC_SOF_HDA_ALWAYS_ENABLE_DMI_L1))
+ if (stream && direction == SNDRV_PCM_STREAM_CAPTURE)
+ snd_sof_dsp_update_bits(sdev, HDA_DSP_HDA_BAR,
+ HDA_VS_INTEL_EM2,
+@@ -228,7 +228,7 @@ int hda_dsp_stream_put(struct snd_sof_dev *sdev, int direction, int stream_tag)
+ spin_unlock_irq(&bus->reg_lock);
+
+ /* Enable DMI L1 entry if there are no capture streams open */
+- if (!IS_ENABLED(SND_SOC_SOF_HDA_ALWAYS_ENABLE_DMI_L1))
++ if (!IS_ENABLED(CONFIG_SND_SOC_SOF_HDA_ALWAYS_ENABLE_DMI_L1))
+ if (!active_capture_stream)
+ snd_sof_dsp_update_bits(sdev, HDA_DSP_HDA_BAR,
+ HDA_VS_INTEL_EM2,
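The missing CONFIG_ prefix made both tests constant-false: IS_ENABLED() can only react to macros Kconfig generates, and those are all spelled CONFIG_*, so the bare symbol was never defined and '!IS_ENABLED(...)' always took the workaround branch even with the option enabled. A simplified, self-contained reimplementation shows the mechanism (this compresses include/linux/kconfig.h and relies on the same GNU C variadic-macro tolerance the kernel uses):

#include <stdio.h>

#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(__ignored, val, ...) val
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
#define __is_defined(x) ___is_defined(x)
#define IS_ENABLED(option) __is_defined(option)   /* simplified */

#define CONFIG_DEMO 1            /* what Kconfig would emit */

int main(void)
{
	printf("%d\n", IS_ENABLED(CONFIG_DEMO)); /* 1 */
	printf("%d\n", IS_ENABLED(DEMO));        /* 0: bare name, never set */
	return 0;
}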
+diff --git a/sound/usb/Makefile b/sound/usb/Makefile
+index e1ce257ab705..d27a21b0ff9c 100644
+--- a/sound/usb/Makefile
++++ b/sound/usb/Makefile
+@@ -16,7 +16,8 @@ snd-usb-audio-objs := card.o \
+ power.o \
+ proc.o \
+ quirks.o \
+- stream.o
++ stream.o \
++ validate.o
+
+ snd-usb-audio-$(CONFIG_SND_USB_AUDIO_USE_MEDIA_CONTROLLER) += media.o
+
+diff --git a/sound/usb/clock.c b/sound/usb/clock.c
+index 72e9bdf76115..6b8c14f9b5d4 100644
+--- a/sound/usb/clock.c
++++ b/sound/usb/clock.c
+@@ -38,39 +38,37 @@ static void *find_uac_clock_desc(struct usb_host_interface *iface, int id,
+ static bool validate_clock_source_v2(void *p, int id)
+ {
+ struct uac_clock_source_descriptor *cs = p;
+- return cs->bLength == sizeof(*cs) && cs->bClockID == id;
++ return cs->bClockID == id;
+ }
+
+ static bool validate_clock_source_v3(void *p, int id)
+ {
+ struct uac3_clock_source_descriptor *cs = p;
+- return cs->bLength == sizeof(*cs) && cs->bClockID == id;
++ return cs->bClockID == id;
+ }
+
+ static bool validate_clock_selector_v2(void *p, int id)
+ {
+ struct uac_clock_selector_descriptor *cs = p;
+- return cs->bLength >= sizeof(*cs) && cs->bClockID == id &&
+- cs->bLength == 7 + cs->bNrInPins;
++ return cs->bClockID == id;
+ }
+
+ static bool validate_clock_selector_v3(void *p, int id)
+ {
+ struct uac3_clock_selector_descriptor *cs = p;
+- return cs->bLength >= sizeof(*cs) && cs->bClockID == id &&
+- cs->bLength == 11 + cs->bNrInPins;
++ return cs->bClockID == id;
+ }
+
+ static bool validate_clock_multiplier_v2(void *p, int id)
+ {
+ struct uac_clock_multiplier_descriptor *cs = p;
+- return cs->bLength == sizeof(*cs) && cs->bClockID == id;
++ return cs->bClockID == id;
+ }
+
+ static bool validate_clock_multiplier_v3(void *p, int id)
+ {
+ struct uac3_clock_multiplier_descriptor *cs = p;
+- return cs->bLength == sizeof(*cs) && cs->bClockID == id;
++ return cs->bClockID == id;
+ }
+
+ #define DEFINE_FIND_HELPER(name, obj, validator, type) \
+diff --git a/sound/usb/helper.h b/sound/usb/helper.h
+index 6afb70156ec4..5e8a18b4e7b9 100644
+--- a/sound/usb/helper.h
++++ b/sound/usb/helper.h
+@@ -31,4 +31,8 @@ static inline int snd_usb_ctrl_intf(struct snd_usb_audio *chip)
+ return get_iface_desc(chip->ctrl_intf)->bInterfaceNumber;
+ }
+
++/* in validate.c */
++bool snd_usb_validate_audio_desc(void *p, int protocol);
++bool snd_usb_validate_midi_desc(void *p);
++
+ #endif /* __USBAUDIO_HELPER_H */
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index eceab19766db..673652ad7018 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -740,13 +740,6 @@ static int uac_mixer_unit_get_channels(struct mixer_build *state,
+ {
+ int mu_channels;
+
+- if (desc->bLength < sizeof(*desc))
+- return -EINVAL;
+- if (!desc->bNrInPins)
+- return -EINVAL;
+- if (desc->bLength < sizeof(*desc) + desc->bNrInPins)
+- return -EINVAL;
+-
+ switch (state->mixer->protocol) {
+ case UAC_VERSION_1:
+ case UAC_VERSION_2:
+@@ -765,222 +758,242 @@ static int uac_mixer_unit_get_channels(struct mixer_build *state,
+ }
+
+ /*
+- * parse the source unit recursively until it reaches to a terminal
+- * or a branched unit.
++ * Parse Input Terminal Unit
+ */
+ static int __check_input_term(struct mixer_build *state, int id,
+- struct usb_audio_term *term)
++ struct usb_audio_term *term);
++
++static int parse_term_uac1_iterm_unit(struct mixer_build *state,
++ struct usb_audio_term *term,
++ void *p1, int id)
+ {
+- int protocol = state->mixer->protocol;
++ struct uac_input_terminal_descriptor *d = p1;
++
++ term->type = le16_to_cpu(d->wTerminalType);
++ term->channels = d->bNrChannels;
++ term->chconfig = le16_to_cpu(d->wChannelConfig);
++ term->name = d->iTerminal;
++ return 0;
++}
++
++static int parse_term_uac2_iterm_unit(struct mixer_build *state,
++ struct usb_audio_term *term,
++ void *p1, int id)
++{
++ struct uac2_input_terminal_descriptor *d = p1;
+ int err;
+- void *p1;
+- unsigned char *hdr;
+
+- memset(term, 0, sizeof(*term));
+- for (;;) {
+- /* a loop in the terminal chain? */
+- if (test_and_set_bit(id, state->termbitmap))
+- return -EINVAL;
++ /* call recursively to verify the referenced clock entity */
++ err = __check_input_term(state, d->bCSourceID, term);
++ if (err < 0)
++ return err;
+
+- p1 = find_audio_control_unit(state, id);
+- if (!p1)
+- break;
++ /* save input term properties after recursion,
++	 * to ensure they are not overridden by the recursion calls
++ */
++ term->id = id;
++ term->type = le16_to_cpu(d->wTerminalType);
++ term->channels = d->bNrChannels;
++ term->chconfig = le32_to_cpu(d->bmChannelConfig);
++ term->name = d->iTerminal;
++ return 0;
++}
+
+- hdr = p1;
+- term->id = id;
++static int parse_term_uac3_iterm_unit(struct mixer_build *state,
++ struct usb_audio_term *term,
++ void *p1, int id)
++{
++ struct uac3_input_terminal_descriptor *d = p1;
++ int err;
+
+- if (protocol == UAC_VERSION_1 || protocol == UAC_VERSION_2) {
+- switch (hdr[2]) {
+- case UAC_INPUT_TERMINAL:
+- if (protocol == UAC_VERSION_1) {
+- struct uac_input_terminal_descriptor *d = p1;
+-
+- term->type = le16_to_cpu(d->wTerminalType);
+- term->channels = d->bNrChannels;
+- term->chconfig = le16_to_cpu(d->wChannelConfig);
+- term->name = d->iTerminal;
+- } else { /* UAC_VERSION_2 */
+- struct uac2_input_terminal_descriptor *d = p1;
+-
+- /* call recursively to verify that the
+- * referenced clock entity is valid */
+- err = __check_input_term(state, d->bCSourceID, term);
+- if (err < 0)
+- return err;
++ /* call recursively to verify the referenced clock entity */
++ err = __check_input_term(state, d->bCSourceID, term);
++ if (err < 0)
++ return err;
+
+- /* save input term properties after recursion,
+- * to ensure they are not overriden by the
+- * recursion calls */
+- term->id = id;
+- term->type = le16_to_cpu(d->wTerminalType);
+- term->channels = d->bNrChannels;
+- term->chconfig = le32_to_cpu(d->bmChannelConfig);
+- term->name = d->iTerminal;
+- }
+- return 0;
+- case UAC_FEATURE_UNIT: {
+- /* the header is the same for v1 and v2 */
+- struct uac_feature_unit_descriptor *d = p1;
++ /* save input term properties after recursion,
++	 * to ensure they are not overridden by the recursion calls
++ */
++ term->id = id;
++ term->type = le16_to_cpu(d->wTerminalType);
+
+- id = d->bSourceID;
+- break; /* continue to parse */
+- }
+- case UAC_MIXER_UNIT: {
+- struct uac_mixer_unit_descriptor *d = p1;
+-
+- term->type = UAC3_MIXER_UNIT << 16; /* virtual type */
+- term->channels = uac_mixer_unit_bNrChannels(d);
+- term->chconfig = uac_mixer_unit_wChannelConfig(d, protocol);
+- term->name = uac_mixer_unit_iMixer(d);
+- return 0;
+- }
+- case UAC_SELECTOR_UNIT:
+- case UAC2_CLOCK_SELECTOR: {
+- struct uac_selector_unit_descriptor *d = p1;
+- /* call recursively to retrieve the channel info */
+- err = __check_input_term(state, d->baSourceID[0], term);
+- if (err < 0)
+- return err;
+- term->type = UAC3_SELECTOR_UNIT << 16; /* virtual type */
+- term->id = id;
+- term->name = uac_selector_unit_iSelector(d);
+- return 0;
+- }
+- case UAC1_PROCESSING_UNIT:
+- /* UAC2_EFFECT_UNIT */
+- if (protocol == UAC_VERSION_1)
+- term->type = UAC3_PROCESSING_UNIT << 16; /* virtual type */
+- else /* UAC_VERSION_2 */
+- term->type = UAC3_EFFECT_UNIT << 16; /* virtual type */
+- /* fall through */
+- case UAC1_EXTENSION_UNIT:
+- /* UAC2_PROCESSING_UNIT_V2 */
+- if (protocol == UAC_VERSION_1 && !term->type)
+- term->type = UAC3_EXTENSION_UNIT << 16; /* virtual type */
+- else if (protocol == UAC_VERSION_2 && !term->type)
+- term->type = UAC3_PROCESSING_UNIT << 16; /* virtual type */
+- /* fall through */
+- case UAC2_EXTENSION_UNIT_V2: {
+- struct uac_processing_unit_descriptor *d = p1;
+-
+- if (protocol == UAC_VERSION_2 &&
+- hdr[2] == UAC2_EFFECT_UNIT) {
+- /* UAC2/UAC1 unit IDs overlap here in an
+- * uncompatible way. Ignore this unit for now.
+- */
+- return 0;
+- }
++ err = get_cluster_channels_v3(state, le16_to_cpu(d->wClusterDescrID));
++ if (err < 0)
++ return err;
++ term->channels = err;
+
+- if (d->bNrInPins) {
+- id = d->baSourceID[0];
+- break; /* continue to parse */
+- }
+- if (!term->type)
+- term->type = UAC3_EXTENSION_UNIT << 16; /* virtual type */
++ /* REVISIT: UAC3 IT doesn't have channels cfg */
++ term->chconfig = 0;
+
+- term->channels = uac_processing_unit_bNrChannels(d);
+- term->chconfig = uac_processing_unit_wChannelConfig(d, protocol);
+- term->name = uac_processing_unit_iProcessing(d, protocol);
+- return 0;
+- }
+- case UAC2_CLOCK_SOURCE: {
+- struct uac_clock_source_descriptor *d = p1;
++ term->name = le16_to_cpu(d->wTerminalDescrStr);
++ return 0;
++}
+
+- term->type = UAC3_CLOCK_SOURCE << 16; /* virtual type */
+- term->id = id;
+- term->name = d->iClockSource;
+- return 0;
+- }
+- default:
+- return -ENODEV;
+- }
+- } else { /* UAC_VERSION_3 */
+- switch (hdr[2]) {
+- case UAC_INPUT_TERMINAL: {
+- struct uac3_input_terminal_descriptor *d = p1;
+-
+- /* call recursively to verify that the
+- * referenced clock entity is valid */
+- err = __check_input_term(state, d->bCSourceID, term);
+- if (err < 0)
+- return err;
++static int parse_term_mixer_unit(struct mixer_build *state,
++ struct usb_audio_term *term,
++ void *p1, int id)
++{
++ struct uac_mixer_unit_descriptor *d = p1;
++ int protocol = state->mixer->protocol;
++ int err;
+
+- /* save input term properties after recursion,
+- * to ensure they are not overriden by the
+- * recursion calls */
+- term->id = id;
+- term->type = le16_to_cpu(d->wTerminalType);
++ err = uac_mixer_unit_get_channels(state, d);
++ if (err <= 0)
++ return err;
+
+- err = get_cluster_channels_v3(state, le16_to_cpu(d->wClusterDescrID));
+- if (err < 0)
+- return err;
+- term->channels = err;
++ term->type = UAC3_MIXER_UNIT << 16; /* virtual type */
++ term->channels = err;
++ if (protocol != UAC_VERSION_3) {
++ term->chconfig = uac_mixer_unit_wChannelConfig(d, protocol);
++ term->name = uac_mixer_unit_iMixer(d);
++ }
++ return 0;
++}
+
+- /* REVISIT: UAC3 IT doesn't have channels cfg */
+- term->chconfig = 0;
++static int parse_term_selector_unit(struct mixer_build *state,
++ struct usb_audio_term *term,
++ void *p1, int id)
++{
++ struct uac_selector_unit_descriptor *d = p1;
++ int err;
+
+- term->name = le16_to_cpu(d->wTerminalDescrStr);
+- return 0;
+- }
+- case UAC3_FEATURE_UNIT: {
+- struct uac3_feature_unit_descriptor *d = p1;
++ /* call recursively to retrieve the channel info */
++ err = __check_input_term(state, d->baSourceID[0], term);
++ if (err < 0)
++ return err;
++ term->type = UAC3_SELECTOR_UNIT << 16; /* virtual type */
++ term->id = id;
++ if (state->mixer->protocol != UAC_VERSION_3)
++ term->name = uac_selector_unit_iSelector(d);
++ return 0;
++}
+
+- id = d->bSourceID;
+- break; /* continue to parse */
+- }
+- case UAC3_CLOCK_SOURCE: {
+- struct uac3_clock_source_descriptor *d = p1;
++static int parse_term_proc_unit(struct mixer_build *state,
++ struct usb_audio_term *term,
++ void *p1, int id, int vtype)
++{
++ struct uac_processing_unit_descriptor *d = p1;
++ int protocol = state->mixer->protocol;
++ int err;
+
+- term->type = UAC3_CLOCK_SOURCE << 16; /* virtual type */
+- term->id = id;
+- term->name = le16_to_cpu(d->wClockSourceStr);
+- return 0;
+- }
+- case UAC3_MIXER_UNIT: {
+- struct uac_mixer_unit_descriptor *d = p1;
++ if (d->bNrInPins) {
++ /* call recursively to retrieve the channel info */
++ err = __check_input_term(state, d->baSourceID[0], term);
++ if (err < 0)
++ return err;
++ }
+
+- err = uac_mixer_unit_get_channels(state, d);
+- if (err <= 0)
+- return err;
++ term->type = vtype << 16; /* virtual type */
++ term->id = id;
+
+- term->channels = err;
+- term->type = UAC3_MIXER_UNIT << 16; /* virtual type */
++ if (protocol == UAC_VERSION_3)
++ return 0;
+
+- return 0;
+- }
+- case UAC3_SELECTOR_UNIT:
+- case UAC3_CLOCK_SELECTOR: {
+- struct uac_selector_unit_descriptor *d = p1;
+- /* call recursively to retrieve the channel info */
+- err = __check_input_term(state, d->baSourceID[0], term);
+- if (err < 0)
+- return err;
+- term->type = UAC3_SELECTOR_UNIT << 16; /* virtual type */
+- term->id = id;
+- term->name = 0; /* TODO: UAC3 Class-specific strings */
++ if (!term->channels) {
++ term->channels = uac_processing_unit_bNrChannels(d);
++ term->chconfig = uac_processing_unit_wChannelConfig(d, protocol);
++ }
++ term->name = uac_processing_unit_iProcessing(d, protocol);
++ return 0;
++}
+
+- return 0;
+- }
+- case UAC3_PROCESSING_UNIT: {
+- struct uac_processing_unit_descriptor *d = p1;
++static int parse_term_uac2_clock_source(struct mixer_build *state,
++ struct usb_audio_term *term,
++ void *p1, int id)
++{
++ struct uac_clock_source_descriptor *d = p1;
+
+- if (!d->bNrInPins)
+- return -EINVAL;
++ term->type = UAC3_CLOCK_SOURCE << 16; /* virtual type */
++ term->id = id;
++ term->name = d->iClockSource;
++ return 0;
++}
+
+- /* call recursively to retrieve the channel info */
+- err = __check_input_term(state, d->baSourceID[0], term);
+- if (err < 0)
+- return err;
++static int parse_term_uac3_clock_source(struct mixer_build *state,
++ struct usb_audio_term *term,
++ void *p1, int id)
++{
++ struct uac3_clock_source_descriptor *d = p1;
++
++ term->type = UAC3_CLOCK_SOURCE << 16; /* virtual type */
++ term->id = id;
++ term->name = le16_to_cpu(d->wClockSourceStr);
++ return 0;
++}
+
+- term->type = UAC3_PROCESSING_UNIT << 16; /* virtual type */
+- term->id = id;
+- term->name = 0; /* TODO: UAC3 Class-specific strings */
++#define PTYPE(a, b) ((a) << 8 | (b))
+
+- return 0;
+- }
+- default:
+- return -ENODEV;
+- }
++/*
++ * parse the source unit recursively until it reaches to a terminal
++ * or a branched unit.
++ */
++static int __check_input_term(struct mixer_build *state, int id,
++ struct usb_audio_term *term)
++{
++ int protocol = state->mixer->protocol;
++ void *p1;
++ unsigned char *hdr;
++
++ for (;;) {
++ /* a loop in the terminal chain? */
++ if (test_and_set_bit(id, state->termbitmap))
++ return -EINVAL;
++
++ p1 = find_audio_control_unit(state, id);
++ if (!p1)
++ break;
++ if (!snd_usb_validate_audio_desc(p1, protocol))
++ break; /* bad descriptor */
++
++ hdr = p1;
++ term->id = id;
++
++ switch (PTYPE(protocol, hdr[2])) {
++ case PTYPE(UAC_VERSION_1, UAC_FEATURE_UNIT):
++ case PTYPE(UAC_VERSION_2, UAC_FEATURE_UNIT):
++ case PTYPE(UAC_VERSION_3, UAC3_FEATURE_UNIT): {
++ /* the header is the same for all versions */
++ struct uac_feature_unit_descriptor *d = p1;
++
++ id = d->bSourceID;
++ break; /* continue to parse */
++ }
++ case PTYPE(UAC_VERSION_1, UAC_INPUT_TERMINAL):
++ return parse_term_uac1_iterm_unit(state, term, p1, id);
++ case PTYPE(UAC_VERSION_2, UAC_INPUT_TERMINAL):
++ return parse_term_uac2_iterm_unit(state, term, p1, id);
++ case PTYPE(UAC_VERSION_3, UAC_INPUT_TERMINAL):
++ return parse_term_uac3_iterm_unit(state, term, p1, id);
++ case PTYPE(UAC_VERSION_1, UAC_MIXER_UNIT):
++ case PTYPE(UAC_VERSION_2, UAC_MIXER_UNIT):
++ case PTYPE(UAC_VERSION_3, UAC3_MIXER_UNIT):
++ return parse_term_mixer_unit(state, term, p1, id);
++ case PTYPE(UAC_VERSION_1, UAC_SELECTOR_UNIT):
++ case PTYPE(UAC_VERSION_2, UAC_SELECTOR_UNIT):
++ case PTYPE(UAC_VERSION_2, UAC2_CLOCK_SELECTOR):
++ case PTYPE(UAC_VERSION_3, UAC3_SELECTOR_UNIT):
++ case PTYPE(UAC_VERSION_3, UAC3_CLOCK_SELECTOR):
++ return parse_term_selector_unit(state, term, p1, id);
++ case PTYPE(UAC_VERSION_1, UAC1_PROCESSING_UNIT):
++ case PTYPE(UAC_VERSION_2, UAC2_PROCESSING_UNIT_V2):
++ case PTYPE(UAC_VERSION_3, UAC3_PROCESSING_UNIT):
++ return parse_term_proc_unit(state, term, p1, id,
++ UAC3_PROCESSING_UNIT);
++ case PTYPE(UAC_VERSION_2, UAC2_EFFECT_UNIT):
++ case PTYPE(UAC_VERSION_3, UAC3_EFFECT_UNIT):
++ return parse_term_proc_unit(state, term, p1, id,
++ UAC3_EFFECT_UNIT);
++ case PTYPE(UAC_VERSION_1, UAC1_EXTENSION_UNIT):
++ case PTYPE(UAC_VERSION_2, UAC2_EXTENSION_UNIT_V2):
++ case PTYPE(UAC_VERSION_3, UAC3_EXTENSION_UNIT):
++ return parse_term_proc_unit(state, term, p1, id,
++ UAC3_EXTENSION_UNIT);
++ case PTYPE(UAC_VERSION_2, UAC2_CLOCK_SOURCE):
++ return parse_term_uac2_clock_source(state, term, p1, id);
++ case PTYPE(UAC_VERSION_3, UAC3_CLOCK_SOURCE):
++ return parse_term_uac3_clock_source(state, term, p1, id);
++ default:
++ return -ENODEV;
+ }
+ }
+ return -ENODEV;
+@@ -1024,10 +1037,15 @@ static struct usb_feature_control_info audio_feature_info[] = {
+ { UAC2_FU_PHASE_INVERTER, "Phase Inverter Control", USB_MIXER_BOOLEAN, -1 },
+ };
+
++static void usb_mixer_elem_info_free(struct usb_mixer_elem_info *cval)
++{
++ kfree(cval);
++}
++
+ /* private_free callback */
+ void snd_usb_mixer_elem_free(struct snd_kcontrol *kctl)
+ {
+- kfree(kctl->private_data);
++ usb_mixer_elem_info_free(kctl->private_data);
+ kctl->private_data = NULL;
+ }
+
+@@ -1550,7 +1568,7 @@ static void __build_feature_ctl(struct usb_mixer_interface *mixer,
+
+ ctl_info = get_feature_control_info(control);
+ if (!ctl_info) {
+- kfree(cval);
++ usb_mixer_elem_info_free(cval);
+ return;
+ }
+ if (mixer->protocol == UAC_VERSION_1)
+@@ -1583,7 +1601,7 @@ static void __build_feature_ctl(struct usb_mixer_interface *mixer,
+
+ if (!kctl) {
+ usb_audio_err(mixer->chip, "cannot malloc kcontrol\n");
+- kfree(cval);
++ usb_mixer_elem_info_free(cval);
+ return;
+ }
+ kctl->private_free = snd_usb_mixer_elem_free;
+@@ -1753,7 +1771,7 @@ static void build_connector_control(struct usb_mixer_interface *mixer,
+ kctl = snd_ctl_new1(&usb_connector_ctl_ro, cval);
+ if (!kctl) {
+ usb_audio_err(mixer->chip, "cannot malloc kcontrol\n");
+- kfree(cval);
++ usb_mixer_elem_info_free(cval);
+ return;
+ }
+ get_connector_control_name(mixer, term, is_input, kctl->id.name,
+@@ -1774,13 +1792,6 @@ static int parse_clock_source_unit(struct mixer_build *state, int unitid,
+ if (state->mixer->protocol != UAC_VERSION_2)
+ return -EINVAL;
+
+- if (hdr->bLength != sizeof(*hdr)) {
+- usb_audio_dbg(state->chip,
+- "Bogus clock source descriptor length of %d, ignoring.\n",
+- hdr->bLength);
+- return 0;
+- }
+-
+ /*
+ * The only property of this unit we are interested in is the
+ * clock source validity. If that isn't readable, just bail out.
+@@ -1806,7 +1817,7 @@ static int parse_clock_source_unit(struct mixer_build *state, int unitid,
+ kctl = snd_ctl_new1(&usb_bool_master_control_ctl_ro, cval);
+
+ if (!kctl) {
+- kfree(cval);
++ usb_mixer_elem_info_free(cval);
+ return -ENOMEM;
+ }
+
+@@ -1839,62 +1850,20 @@ static int parse_audio_feature_unit(struct mixer_build *state, int unitid,
+ __u8 *bmaControls;
+
+ if (state->mixer->protocol == UAC_VERSION_1) {
+- if (hdr->bLength < 7) {
+- usb_audio_err(state->chip,
+- "unit %u: invalid UAC_FEATURE_UNIT descriptor\n",
+- unitid);
+- return -EINVAL;
+- }
+ csize = hdr->bControlSize;
+- if (!csize) {
+- usb_audio_dbg(state->chip,
+- "unit %u: invalid bControlSize == 0\n",
+- unitid);
+- return -EINVAL;
+- }
+ channels = (hdr->bLength - 7) / csize - 1;
+ bmaControls = hdr->bmaControls;
+- if (hdr->bLength < 7 + csize) {
+- usb_audio_err(state->chip,
+- "unit %u: invalid UAC_FEATURE_UNIT descriptor\n",
+- unitid);
+- return -EINVAL;
+- }
+ } else if (state->mixer->protocol == UAC_VERSION_2) {
+ struct uac2_feature_unit_descriptor *ftr = _ftr;
+- if (hdr->bLength < 6) {
+- usb_audio_err(state->chip,
+- "unit %u: invalid UAC_FEATURE_UNIT descriptor\n",
+- unitid);
+- return -EINVAL;
+- }
+ csize = 4;
+ channels = (hdr->bLength - 6) / 4 - 1;
+ bmaControls = ftr->bmaControls;
+- if (hdr->bLength < 6 + csize) {
+- usb_audio_err(state->chip,
+- "unit %u: invalid UAC_FEATURE_UNIT descriptor\n",
+- unitid);
+- return -EINVAL;
+- }
+ } else { /* UAC_VERSION_3 */
+ struct uac3_feature_unit_descriptor *ftr = _ftr;
+
+- if (hdr->bLength < 7) {
+- usb_audio_err(state->chip,
+- "unit %u: invalid UAC3_FEATURE_UNIT descriptor\n",
+- unitid);
+- return -EINVAL;
+- }
+ csize = 4;
+ channels = (ftr->bLength - 7) / 4 - 1;
+ bmaControls = ftr->bmaControls;
+- if (hdr->bLength < 7 + csize) {
+- usb_audio_err(state->chip,
+- "unit %u: invalid UAC3_FEATURE_UNIT descriptor\n",
+- unitid);
+- return -EINVAL;
+- }
+ }
+
+ /* parse the source unit */
+@@ -2068,7 +2037,7 @@ static void build_mixer_unit_ctl(struct mixer_build *state,
+ kctl = snd_ctl_new1(&usb_feature_unit_ctl, cval);
+ if (!kctl) {
+ usb_audio_err(state->chip, "cannot malloc kcontrol\n");
+- kfree(cval);
++ usb_mixer_elem_info_free(cval);
+ return;
+ }
+ kctl->private_free = snd_usb_mixer_elem_free;
+@@ -2094,15 +2063,11 @@ static int parse_audio_input_terminal(struct mixer_build *state, int unitid,
+
+ if (state->mixer->protocol == UAC_VERSION_2) {
+ struct uac2_input_terminal_descriptor *d_v2 = raw_desc;
+- if (d_v2->bLength < sizeof(*d_v2))
+- return -EINVAL;
+ control = UAC2_TE_CONNECTOR;
+ term_id = d_v2->bTerminalID;
+ bmctls = le16_to_cpu(d_v2->bmControls);
+ } else if (state->mixer->protocol == UAC_VERSION_3) {
+ struct uac3_input_terminal_descriptor *d_v3 = raw_desc;
+- if (d_v3->bLength < sizeof(*d_v3))
+- return -EINVAL;
+ control = UAC3_TE_INSERTION;
+ term_id = d_v3->bTerminalID;
+ bmctls = le32_to_cpu(d_v3->bmControls);
+@@ -2364,18 +2329,7 @@ static int build_audio_procunit(struct mixer_build *state, int unitid,
+ const char *name = extension_unit ?
+ "Extension Unit" : "Processing Unit";
+
+- if (desc->bLength < 13) {
+- usb_audio_err(state->chip, "invalid %s descriptor (id %d)\n", name, unitid);
+- return -EINVAL;
+- }
+-
+ num_ins = desc->bNrInPins;
+- if (desc->bLength < 13 + num_ins ||
+- desc->bLength < num_ins + uac_processing_unit_bControlSize(desc, state->mixer->protocol)) {
+- usb_audio_err(state->chip, "invalid %s descriptor (id %d)\n", name, unitid);
+- return -EINVAL;
+- }
+-
+ for (i = 0; i < num_ins; i++) {
+ err = parse_audio_unit(state, desc->baSourceID[i]);
+ if (err < 0)
+@@ -2466,7 +2420,7 @@ static int build_audio_procunit(struct mixer_build *state, int unitid,
+
+ kctl = snd_ctl_new1(&mixer_procunit_ctl, cval);
+ if (!kctl) {
+- kfree(cval);
++ usb_mixer_elem_info_free(cval);
+ return -ENOMEM;
+ }
+ kctl->private_free = snd_usb_mixer_elem_free;
+@@ -2604,7 +2558,7 @@ static void usb_mixer_selector_elem_free(struct snd_kcontrol *kctl)
+ if (kctl->private_data) {
+ struct usb_mixer_elem_info *cval = kctl->private_data;
+ num_ins = cval->max;
+- kfree(cval);
++ usb_mixer_elem_info_free(cval);
+ kctl->private_data = NULL;
+ }
+ if (kctl->private_value) {
+@@ -2630,13 +2584,6 @@ static int parse_audio_selector_unit(struct mixer_build *state, int unitid,
+ const struct usbmix_name_map *map;
+ char **namelist;
+
+- if (desc->bLength < 5 || !desc->bNrInPins ||
+- desc->bLength < 5 + desc->bNrInPins) {
+- usb_audio_err(state->chip,
+- "invalid SELECTOR UNIT descriptor %d\n", unitid);
+- return -EINVAL;
+- }
+-
+ for (i = 0; i < desc->bNrInPins; i++) {
+ err = parse_audio_unit(state, desc->baSourceID[i]);
+ if (err < 0)
+@@ -2676,10 +2623,10 @@ static int parse_audio_selector_unit(struct mixer_build *state, int unitid,
+ break;
+ }
+
+- namelist = kmalloc_array(desc->bNrInPins, sizeof(char *), GFP_KERNEL);
++ namelist = kcalloc(desc->bNrInPins, sizeof(char *), GFP_KERNEL);
+ if (!namelist) {
+- kfree(cval);
+- return -ENOMEM;
++ err = -ENOMEM;
++ goto error_cval;
+ }
+ #define MAX_ITEM_NAME_LEN 64
+ for (i = 0; i < desc->bNrInPins; i++) {
+@@ -2687,11 +2634,8 @@ static int parse_audio_selector_unit(struct mixer_build *state, int unitid,
+ len = 0;
+ namelist[i] = kmalloc(MAX_ITEM_NAME_LEN, GFP_KERNEL);
+ if (!namelist[i]) {
+- while (i--)
+- kfree(namelist[i]);
+- kfree(namelist);
+- kfree(cval);
+- return -ENOMEM;
++ err = -ENOMEM;
++ goto error_name;
+ }
+ len = check_mapped_selector_name(state, unitid, i, namelist[i],
+ MAX_ITEM_NAME_LEN);
+@@ -2705,11 +2649,8 @@ static int parse_audio_selector_unit(struct mixer_build *state, int unitid,
+ kctl = snd_ctl_new1(&mixer_selectunit_ctl, cval);
+ if (! kctl) {
+ usb_audio_err(state->chip, "cannot malloc kcontrol\n");
+- for (i = 0; i < desc->bNrInPins; i++)
+- kfree(namelist[i]);
+- kfree(namelist);
+- kfree(cval);
+- return -ENOMEM;
++ err = -ENOMEM;
++ goto error_name;
+ }
+ kctl->private_value = (unsigned long)namelist;
+ kctl->private_free = usb_mixer_selector_elem_free;
+@@ -2755,6 +2696,14 @@ static int parse_audio_selector_unit(struct mixer_build *state, int unitid,
+ usb_audio_dbg(state->chip, "[%d] SU [%s] items = %d\n",
+ cval->head.id, kctl->id.name, desc->bNrInPins);
+ return snd_usb_mixer_add_control(&cval->head, kctl);
++
++ error_name:
++ for (i = 0; i < desc->bNrInPins; i++)
++ kfree(namelist[i]);
++ kfree(namelist);
++ error_cval:
++ usb_mixer_elem_info_free(cval);
++ return err;
+ }
+
+ /*
+@@ -2775,62 +2724,49 @@ static int parse_audio_unit(struct mixer_build *state, int unitid)
+ return -EINVAL;
+ }
+
+- if (protocol == UAC_VERSION_1 || protocol == UAC_VERSION_2) {
+- switch (p1[2]) {
+- case UAC_INPUT_TERMINAL:
+- return parse_audio_input_terminal(state, unitid, p1);
+- case UAC_MIXER_UNIT:
+- return parse_audio_mixer_unit(state, unitid, p1);
+- case UAC2_CLOCK_SOURCE:
+- return parse_clock_source_unit(state, unitid, p1);
+- case UAC_SELECTOR_UNIT:
+- case UAC2_CLOCK_SELECTOR:
+- return parse_audio_selector_unit(state, unitid, p1);
+- case UAC_FEATURE_UNIT:
+- return parse_audio_feature_unit(state, unitid, p1);
+- case UAC1_PROCESSING_UNIT:
+- /* UAC2_EFFECT_UNIT has the same value */
+- if (protocol == UAC_VERSION_1)
+- return parse_audio_processing_unit(state, unitid, p1);
+- else
+- return 0; /* FIXME - effect units not implemented yet */
+- case UAC1_EXTENSION_UNIT:
+- /* UAC2_PROCESSING_UNIT_V2 has the same value */
+- if (protocol == UAC_VERSION_1)
+- return parse_audio_extension_unit(state, unitid, p1);
+- else /* UAC_VERSION_2 */
+- return parse_audio_processing_unit(state, unitid, p1);
+- case UAC2_EXTENSION_UNIT_V2:
+- return parse_audio_extension_unit(state, unitid, p1);
+- default:
+- usb_audio_err(state->chip,
+- "unit %u: unexpected type 0x%02x\n", unitid, p1[2]);
+- return -EINVAL;
+- }
+- } else { /* UAC_VERSION_3 */
+- switch (p1[2]) {
+- case UAC_INPUT_TERMINAL:
+- return parse_audio_input_terminal(state, unitid, p1);
+- case UAC3_MIXER_UNIT:
+- return parse_audio_mixer_unit(state, unitid, p1);
+- case UAC3_CLOCK_SOURCE:
+- return parse_clock_source_unit(state, unitid, p1);
+- case UAC3_SELECTOR_UNIT:
+- case UAC3_CLOCK_SELECTOR:
+- return parse_audio_selector_unit(state, unitid, p1);
+- case UAC3_FEATURE_UNIT:
+- return parse_audio_feature_unit(state, unitid, p1);
+- case UAC3_EFFECT_UNIT:
+- return 0; /* FIXME - effect units not implemented yet */
+- case UAC3_PROCESSING_UNIT:
+- return parse_audio_processing_unit(state, unitid, p1);
+- case UAC3_EXTENSION_UNIT:
+- return parse_audio_extension_unit(state, unitid, p1);
+- default:
+- usb_audio_err(state->chip,
+- "unit %u: unexpected type 0x%02x\n", unitid, p1[2]);
+- return -EINVAL;
+- }
++ if (!snd_usb_validate_audio_desc(p1, protocol)) {
++ usb_audio_dbg(state->chip, "invalid unit %d\n", unitid);
++ return 0; /* skip invalid unit */
++ }
++
++ switch (PTYPE(protocol, p1[2])) {
++ case PTYPE(UAC_VERSION_1, UAC_INPUT_TERMINAL):
++ case PTYPE(UAC_VERSION_2, UAC_INPUT_TERMINAL):
++ case PTYPE(UAC_VERSION_3, UAC_INPUT_TERMINAL):
++ return parse_audio_input_terminal(state, unitid, p1);
++ case PTYPE(UAC_VERSION_1, UAC_MIXER_UNIT):
++ case PTYPE(UAC_VERSION_2, UAC_MIXER_UNIT):
++ case PTYPE(UAC_VERSION_3, UAC3_MIXER_UNIT):
++ return parse_audio_mixer_unit(state, unitid, p1);
++ case PTYPE(UAC_VERSION_2, UAC2_CLOCK_SOURCE):
++ case PTYPE(UAC_VERSION_3, UAC3_CLOCK_SOURCE):
++ return parse_clock_source_unit(state, unitid, p1);
++ case PTYPE(UAC_VERSION_1, UAC_SELECTOR_UNIT):
++ case PTYPE(UAC_VERSION_2, UAC_SELECTOR_UNIT):
++ case PTYPE(UAC_VERSION_3, UAC3_SELECTOR_UNIT):
++ case PTYPE(UAC_VERSION_2, UAC2_CLOCK_SELECTOR):
++ case PTYPE(UAC_VERSION_3, UAC3_CLOCK_SELECTOR):
++ return parse_audio_selector_unit(state, unitid, p1);
++ case PTYPE(UAC_VERSION_1, UAC_FEATURE_UNIT):
++ case PTYPE(UAC_VERSION_2, UAC_FEATURE_UNIT):
++ case PTYPE(UAC_VERSION_3, UAC3_FEATURE_UNIT):
++ return parse_audio_feature_unit(state, unitid, p1);
++ case PTYPE(UAC_VERSION_1, UAC1_PROCESSING_UNIT):
++ case PTYPE(UAC_VERSION_2, UAC2_PROCESSING_UNIT_V2):
++ case PTYPE(UAC_VERSION_3, UAC3_PROCESSING_UNIT):
++ return parse_audio_processing_unit(state, unitid, p1);
++ case PTYPE(UAC_VERSION_1, UAC1_EXTENSION_UNIT):
++ case PTYPE(UAC_VERSION_2, UAC2_EXTENSION_UNIT_V2):
++ case PTYPE(UAC_VERSION_3, UAC3_EXTENSION_UNIT):
++ return parse_audio_extension_unit(state, unitid, p1);
++ case PTYPE(UAC_VERSION_2, UAC2_EFFECT_UNIT):
++ case PTYPE(UAC_VERSION_3, UAC3_EFFECT_UNIT):
++ return 0; /* FIXME - effect units not implemented yet */
++ default:
++ usb_audio_err(state->chip,
++ "unit %u: unexpected type 0x%02x\n",
++ unitid, p1[2]);
++ return -EINVAL;
+ }
+ }
+
+@@ -3145,11 +3081,12 @@ static int snd_usb_mixer_controls(struct usb_mixer_interface *mixer)
+ while ((p = snd_usb_find_csint_desc(mixer->hostif->extra,
+ mixer->hostif->extralen,
+ p, UAC_OUTPUT_TERMINAL)) != NULL) {
++ if (!snd_usb_validate_audio_desc(p, mixer->protocol))
++ continue; /* skip invalid descriptor */
++
+ if (mixer->protocol == UAC_VERSION_1) {
+ struct uac1_output_terminal_descriptor *desc = p;
+
+- if (desc->bLength < sizeof(*desc))
+- continue; /* invalid descriptor? */
+ /* mark terminal ID as visited */
+ set_bit(desc->bTerminalID, state.unitbitmap);
+ state.oterm.id = desc->bTerminalID;
+@@ -3161,8 +3098,6 @@ static int snd_usb_mixer_controls(struct usb_mixer_interface *mixer)
+ } else if (mixer->protocol == UAC_VERSION_2) {
+ struct uac2_output_terminal_descriptor *desc = p;
+
+- if (desc->bLength < sizeof(*desc))
+- continue; /* invalid descriptor? */
+ /* mark terminal ID as visited */
+ set_bit(desc->bTerminalID, state.unitbitmap);
+ state.oterm.id = desc->bTerminalID;
+@@ -3188,8 +3123,6 @@ static int snd_usb_mixer_controls(struct usb_mixer_interface *mixer)
+ } else { /* UAC_VERSION_3 */
+ struct uac3_output_terminal_descriptor *desc = p;
+
+- if (desc->bLength < sizeof(*desc))
+- continue; /* invalid descriptor? */
+ /* mark terminal ID as visited */
+ set_bit(desc->bTerminalID, state.unitbitmap);
+ state.oterm.id = desc->bTerminalID;
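The mixer.c rewrite above collapses two nested protocol/subtype switches into one flat switch keyed by PTYPE(protocol, subtype), which packs both values into a single integer. Both inputs are compile-time constants, so the packed values remain valid case labels and shared handlers simply stack their labels. A standalone sketch with invented constant values:

#include <stdio.h>

#define PTYPE(a, b) ((a) << 8 | (b))

enum { UAC_V1 = 0x00, UAC_V2 = 0x20 };            /* illustrative */
enum { INPUT_TERMINAL = 0x02, MIXER_UNIT = 0x04 };

static const char *classify(int protocol, int subtype)
{
	switch (PTYPE(protocol, subtype)) {
	case PTYPE(UAC_V1, INPUT_TERMINAL):
	case PTYPE(UAC_V2, INPUT_TERMINAL):        /* shared handler */
		return "input terminal";
	case PTYPE(UAC_V1, MIXER_UNIT):
		return "v1 mixer unit";
	default:
		return "unexpected";
	}
}

int main(void)
{
	printf("%s\n", classify(UAC_V2, INPUT_TERMINAL));
	return 0;
}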
+diff --git a/sound/usb/power.c b/sound/usb/power.c
+index bd303a1ba1b7..606a2cb23eab 100644
+--- a/sound/usb/power.c
++++ b/sound/usb/power.c
+@@ -31,6 +31,8 @@ snd_usb_find_power_domain(struct usb_host_interface *ctrl_iface,
+ struct uac3_power_domain_descriptor *pd_desc = p;
+ int i;
+
++ if (!snd_usb_validate_audio_desc(p, UAC_VERSION_3))
++ continue;
+ for (i = 0; i < pd_desc->bNrEntities; i++) {
+ if (pd_desc->baEntityID[i] == id) {
+ pd->pd_id = pd_desc->bPowerDomainID;
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 059b70313f35..0bbe1201a6ac 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -248,6 +248,9 @@ static int create_yamaha_midi_quirk(struct snd_usb_audio *chip,
+ NULL, USB_MS_MIDI_OUT_JACK);
+ if (!injd && !outjd)
+ return -ENODEV;
++ if (!(injd && snd_usb_validate_midi_desc(injd)) ||
++ !(outjd && snd_usb_validate_midi_desc(outjd)))
++ return -ENODEV;
+ if (injd && (injd->bLength < 5 ||
+ (injd->bJackType != USB_MS_EMBEDDED &&
+ injd->bJackType != USB_MS_EXTERNAL)))
+diff --git a/sound/usb/stream.c b/sound/usb/stream.c
+index e852c7fd6109..a0649c8ae460 100644
+--- a/sound/usb/stream.c
++++ b/sound/usb/stream.c
+@@ -627,16 +627,14 @@ static int parse_uac_endpoint_attributes(struct snd_usb_audio *chip,
+ */
+ static void *
+ snd_usb_find_input_terminal_descriptor(struct usb_host_interface *ctrl_iface,
+- int terminal_id, bool uac23)
++ int terminal_id, int protocol)
+ {
+ struct uac2_input_terminal_descriptor *term = NULL;
+- size_t minlen = uac23 ? sizeof(struct uac2_input_terminal_descriptor) :
+- sizeof(struct uac_input_terminal_descriptor);
+
+ while ((term = snd_usb_find_csint_desc(ctrl_iface->extra,
+ ctrl_iface->extralen,
+ term, UAC_INPUT_TERMINAL))) {
+- if (term->bLength < minlen)
++ if (!snd_usb_validate_audio_desc(term, protocol))
+ continue;
+ if (term->bTerminalID == terminal_id)
+ return term;
+@@ -647,7 +645,7 @@ snd_usb_find_input_terminal_descriptor(struct usb_host_interface *ctrl_iface,
+
+ static void *
+ snd_usb_find_output_terminal_descriptor(struct usb_host_interface *ctrl_iface,
+- int terminal_id)
++ int terminal_id, int protocol)
+ {
+ /* OK to use with both UAC2 and UAC3 */
+ struct uac2_output_terminal_descriptor *term = NULL;
+@@ -655,8 +653,9 @@ snd_usb_find_output_terminal_descriptor(struct usb_host_interface *ctrl_iface,
+ while ((term = snd_usb_find_csint_desc(ctrl_iface->extra,
+ ctrl_iface->extralen,
+ term, UAC_OUTPUT_TERMINAL))) {
+- if (term->bLength >= sizeof(*term) &&
+- term->bTerminalID == terminal_id)
++ if (!snd_usb_validate_audio_desc(term, protocol))
++ continue;
++ if (term->bTerminalID == terminal_id)
+ return term;
+ }
+
+@@ -731,7 +730,7 @@ snd_usb_get_audioformat_uac12(struct snd_usb_audio *chip,
+
+ iterm = snd_usb_find_input_terminal_descriptor(chip->ctrl_intf,
+ as->bTerminalLink,
+- false);
++ protocol);
+ if (iterm) {
+ num_channels = iterm->bNrChannels;
+ chconfig = le16_to_cpu(iterm->wChannelConfig);
+@@ -767,7 +766,7 @@ snd_usb_get_audioformat_uac12(struct snd_usb_audio *chip,
+ */
+ input_term = snd_usb_find_input_terminal_descriptor(chip->ctrl_intf,
+ as->bTerminalLink,
+- true);
++ protocol);
+ if (input_term) {
+ clock = input_term->bCSourceID;
+ if (!chconfig && (num_channels == input_term->bNrChannels))
+@@ -776,7 +775,8 @@ snd_usb_get_audioformat_uac12(struct snd_usb_audio *chip,
+ }
+
+ output_term = snd_usb_find_output_terminal_descriptor(chip->ctrl_intf,
+- as->bTerminalLink);
++ as->bTerminalLink,
++ protocol);
+ if (output_term) {
+ clock = output_term->bCSourceID;
+ goto found_clock;
+@@ -1002,14 +1002,15 @@ snd_usb_get_audioformat_uac3(struct snd_usb_audio *chip,
+ */
+ input_term = snd_usb_find_input_terminal_descriptor(chip->ctrl_intf,
+ as->bTerminalLink,
+- true);
++ UAC_VERSION_3);
+ if (input_term) {
+ clock = input_term->bCSourceID;
+ goto found_clock;
+ }
+
+ output_term = snd_usb_find_output_terminal_descriptor(chip->ctrl_intf,
+- as->bTerminalLink);
++ as->bTerminalLink,
++ UAC_VERSION_3);
+ if (output_term) {
+ clock = output_term->bCSourceID;
+ goto found_clock;
+diff --git a/sound/usb/validate.c b/sound/usb/validate.c
+new file mode 100644
+index 000000000000..a5e584b60dcd
+--- /dev/null
++++ b/sound/usb/validate.c
+@@ -0,0 +1,332 @@
++// SPDX-License-Identifier: GPL-2.0-or-later
++//
++// Validation of USB-audio class descriptors
++//
++
++#include <linux/init.h>
++#include <linux/usb.h>
++#include <linux/usb/audio.h>
++#include <linux/usb/audio-v2.h>
++#include <linux/usb/audio-v3.h>
++#include <linux/usb/midi.h>
++#include "usbaudio.h"
++#include "helper.h"
++
++struct usb_desc_validator {
++ unsigned char protocol;
++ unsigned char type;
++ bool (*func)(const void *p, const struct usb_desc_validator *v);
++ size_t size;
++};
++
++#define UAC_VERSION_ALL (unsigned char)(-1)
++
++/* UAC1 only */
++static bool validate_uac1_header(const void *p,
++ const struct usb_desc_validator *v)
++{
++ const struct uac1_ac_header_descriptor *d = p;
++
++ return d->bLength >= sizeof(*d) &&
++ d->bLength >= sizeof(*d) + d->bInCollection;
++}
++
++/* for mixer unit; covering all UACs */
++static bool validate_mixer_unit(const void *p,
++ const struct usb_desc_validator *v)
++{
++ const struct uac_mixer_unit_descriptor *d = p;
++ size_t len;
++
++ if (d->bLength < sizeof(*d) || !d->bNrInPins)
++ return false;
++ len = sizeof(*d) + d->bNrInPins;
++ /* We can't determine the bitmap size only from this unit descriptor,
++ * so just check with the remaining length.
++ * The actual bitmap is checked at mixer unit parser.
++ */
++ switch (v->protocol) {
++ case UAC_VERSION_1:
++ default:
++ len += 2 + 1; /* wChannelConfig, iChannelNames */
++ /* bmControls[n*m] */
++ len += 1; /* iMixer */
++ break;
++ case UAC_VERSION_2:
++ len += 4 + 1; /* bmChannelConfig, iChannelNames */
++ /* bmMixerControls[n*m] */
++ len += 1 + 1; /* bmControls, iMixer */
++ break;
++ case UAC_VERSION_3:
++ len += 2; /* wClusterDescrID */
++ /* bmMixerControls[n*m] */
++ break;
++ }
++ return d->bLength >= len;
++}
++
++/* both for processing and extension units; covering all UACs */
++static bool validate_processing_unit(const void *p,
++ const struct usb_desc_validator *v)
++{
++ const struct uac_processing_unit_descriptor *d = p;
++ const unsigned char *hdr = p;
++ size_t len, m;
++
++ if (d->bLength < sizeof(*d))
++ return false;
++ len = sizeof(*d) + d->bNrInPins;
++ if (d->bLength < len)
++ return false;
++ switch (v->protocol) {
++ case UAC_VERSION_1:
++ default:
++ /* bNrChannels, wChannelConfig, iChannelNames, bControlSize */
++ len += 1 + 2 + 1 + 1;
++ if (d->bLength < len) /* bControlSize */
++ return false;
++ m = hdr[len];
++ len += 1 + m + 1; /* bControlSize, bmControls, iProcessing */
++ break;
++ case UAC_VERSION_2:
++ /* bNrChannels, bmChannelConfig, iChannelNames */
++ len += 1 + 4 + 1;
++ if (v->type == UAC2_PROCESSING_UNIT_V2)
++ len += 2; /* bmControls -- 2 bytes for PU */
++ else
++ len += 1; /* bmControls -- 1 byte for EU */
++ len += 1; /* iProcessing */
++ break;
++ case UAC_VERSION_3:
++ /* wProcessingDescrStr, bmControls */
++ len += 2 + 4;
++ break;
++ }
++ if (d->bLength < len)
++ return false;
++
++ switch (v->protocol) {
++ case UAC_VERSION_1:
++ default:
++ if (v->type == UAC1_EXTENSION_UNIT)
++ return true; /* OK */
++ switch (d->wProcessType) {
++ case UAC_PROCESS_UP_DOWNMIX:
++ case UAC_PROCESS_DOLBY_PROLOGIC:
++ if (d->bLength < len + 1) /* bNrModes */
++ return false;
++ m = hdr[len];
++ len += 1 + m * 2; /* bNrModes, waModes(n) */
++ break;
++ default:
++ break;
++ }
++ break;
++ case UAC_VERSION_2:
++ if (v->type == UAC2_EXTENSION_UNIT_V2)
++ return true; /* OK */
++ switch (d->wProcessType) {
++ case UAC2_PROCESS_UP_DOWNMIX:
++ case UAC2_PROCESS_DOLBY_PROLOCIC: /* SiC! */
++ if (d->bLength < len + 1) /* bNrModes */
++ return false;
++ m = hdr[len];
++ len += 1 + m * 4; /* bNrModes, daModes(n) */
++ break;
++ default:
++ break;
++ }
++ break;
++ case UAC_VERSION_3:
++ if (v->type == UAC3_EXTENSION_UNIT) {
++ len += 2; /* wClusterDescrID */
++ break;
++ }
++ switch (d->wProcessType) {
++ case UAC3_PROCESS_UP_DOWNMIX:
++ if (d->bLength < len + 1) /* bNrModes */
++ return false;
++ m = hdr[len];
++ len += 1 + m * 2; /* bNrModes, waClusterDescrID(n) */
++ break;
++ case UAC3_PROCESS_MULTI_FUNCTION:
++		len += 2 + 4; /* wClusterDescrID, bmAlgorithms */
++ break;
++ default:
++ break;
++ }
++ break;
++ }
++ if (d->bLength < len)
++ return false;
++
++ return true;
++}
++
++/* both for selector and clock selector units; covering all UACs */
++static bool validate_selector_unit(const void *p,
++ const struct usb_desc_validator *v)
++{
++ const struct uac_selector_unit_descriptor *d = p;
++ size_t len;
++
++ if (d->bLength < sizeof(*d))
++ return false;
++ len = sizeof(*d) + d->bNrInPins;
++ switch (v->protocol) {
++ case UAC_VERSION_1:
++ default:
++ len += 1; /* iSelector */
++ break;
++ case UAC_VERSION_2:
++ len += 1 + 1; /* bmControls, iSelector */
++ break;
++ case UAC_VERSION_3:
++ len += 4 + 2; /* bmControls, wSelectorDescrStr */
++ break;
++ }
++ return d->bLength >= len;
++}
++
++static bool validate_uac1_feature_unit(const void *p,
++ const struct usb_desc_validator *v)
++{
++ const struct uac_feature_unit_descriptor *d = p;
++
++ if (d->bLength < sizeof(*d) || !d->bControlSize)
++ return false;
++ /* at least bmaControls(0) for master channel + iFeature */
++ return d->bLength >= sizeof(*d) + d->bControlSize + 1;
++}
++
++static bool validate_uac2_feature_unit(const void *p,
++ const struct usb_desc_validator *v)
++{
++ const struct uac2_feature_unit_descriptor *d = p;
++
++ if (d->bLength < sizeof(*d))
++ return false;
++ /* at least bmaControls(0) for master channel + iFeature */
++ return d->bLength >= sizeof(*d) + 4 + 1;
++}
++
++static bool validate_uac3_feature_unit(const void *p,
++ const struct usb_desc_validator *v)
++{
++ const struct uac3_feature_unit_descriptor *d = p;
++
++ if (d->bLength < sizeof(*d))
++ return false;
++ /* at least bmaControls(0) for master channel + wFeatureDescrStr */
++ return d->bLength >= sizeof(*d) + 4 + 2;
++}
++
++static bool validate_midi_out_jack(const void *p,
++ const struct usb_desc_validator *v)
++{
++ const struct usb_midi_out_jack_descriptor *d = p;
++
++ return d->bLength >= sizeof(*d) &&
++ d->bLength >= sizeof(*d) + d->bNrInputPins * 2;
++}
++
++#define FIXED(p, t, s) { .protocol = (p), .type = (t), .size = sizeof(s) }
++#define FUNC(p, t, f) { .protocol = (p), .type = (t), .func = (f) }
++
++static struct usb_desc_validator audio_validators[] = {
++ /* UAC1 */
++ FUNC(UAC_VERSION_1, UAC_HEADER, validate_uac1_header),
++ FIXED(UAC_VERSION_1, UAC_INPUT_TERMINAL,
++ struct uac_input_terminal_descriptor),
++ FIXED(UAC_VERSION_1, UAC_OUTPUT_TERMINAL,
++ struct uac1_output_terminal_descriptor),
++ FUNC(UAC_VERSION_1, UAC_MIXER_UNIT, validate_mixer_unit),
++ FUNC(UAC_VERSION_1, UAC_SELECTOR_UNIT, validate_selector_unit),
++ FUNC(UAC_VERSION_1, UAC_FEATURE_UNIT, validate_uac1_feature_unit),
++ FUNC(UAC_VERSION_1, UAC1_PROCESSING_UNIT, validate_processing_unit),
++ FUNC(UAC_VERSION_1, UAC1_EXTENSION_UNIT, validate_processing_unit),
++
++ /* UAC2 */
++ FIXED(UAC_VERSION_2, UAC_HEADER, struct uac2_ac_header_descriptor),
++ FIXED(UAC_VERSION_2, UAC_INPUT_TERMINAL,
++ struct uac2_input_terminal_descriptor),
++ FIXED(UAC_VERSION_2, UAC_OUTPUT_TERMINAL,
++ struct uac2_output_terminal_descriptor),
++ FUNC(UAC_VERSION_2, UAC_MIXER_UNIT, validate_mixer_unit),
++ FUNC(UAC_VERSION_2, UAC_SELECTOR_UNIT, validate_selector_unit),
++ FUNC(UAC_VERSION_2, UAC_FEATURE_UNIT, validate_uac2_feature_unit),
++ /* UAC_VERSION_2, UAC2_EFFECT_UNIT: not implemented yet */
++ FUNC(UAC_VERSION_2, UAC2_PROCESSING_UNIT_V2, validate_processing_unit),
++ FUNC(UAC_VERSION_2, UAC2_EXTENSION_UNIT_V2, validate_processing_unit),
++ FIXED(UAC_VERSION_2, UAC2_CLOCK_SOURCE,
++ struct uac_clock_source_descriptor),
++ FUNC(UAC_VERSION_2, UAC2_CLOCK_SELECTOR, validate_selector_unit),
++ FIXED(UAC_VERSION_2, UAC2_CLOCK_MULTIPLIER,
++ struct uac_clock_multiplier_descriptor),
++ /* UAC_VERSION_2, UAC2_SAMPLE_RATE_CONVERTER: not implemented yet */
++
++ /* UAC3 */
++	FIXED(UAC_VERSION_3, UAC_HEADER, struct uac3_ac_header_descriptor),
++ FIXED(UAC_VERSION_3, UAC_INPUT_TERMINAL,
++ struct uac3_input_terminal_descriptor),
++ FIXED(UAC_VERSION_3, UAC_OUTPUT_TERMINAL,
++ struct uac3_output_terminal_descriptor),
++ /* UAC_VERSION_3, UAC3_EXTENDED_TERMINAL: not implemented yet */
++ FUNC(UAC_VERSION_3, UAC3_MIXER_UNIT, validate_mixer_unit),
++ FUNC(UAC_VERSION_3, UAC3_SELECTOR_UNIT, validate_selector_unit),
++ FUNC(UAC_VERSION_3, UAC_FEATURE_UNIT, validate_uac3_feature_unit),
++ /* UAC_VERSION_3, UAC3_EFFECT_UNIT: not implemented yet */
++ FUNC(UAC_VERSION_3, UAC3_PROCESSING_UNIT, validate_processing_unit),
++ FUNC(UAC_VERSION_3, UAC3_EXTENSION_UNIT, validate_processing_unit),
++ FIXED(UAC_VERSION_3, UAC3_CLOCK_SOURCE,
++ struct uac3_clock_source_descriptor),
++ FUNC(UAC_VERSION_3, UAC3_CLOCK_SELECTOR, validate_selector_unit),
++ FIXED(UAC_VERSION_3, UAC3_CLOCK_MULTIPLIER,
++ struct uac3_clock_multiplier_descriptor),
++ /* UAC_VERSION_3, UAC3_SAMPLE_RATE_CONVERTER: not implemented yet */
++ /* UAC_VERSION_3, UAC3_CONNECTORS: not implemented yet */
++ { } /* terminator */
++};
++
++static struct usb_desc_validator midi_validators[] = {
++ FIXED(UAC_VERSION_ALL, USB_MS_HEADER,
++ struct usb_ms_header_descriptor),
++ FIXED(UAC_VERSION_ALL, USB_MS_MIDI_IN_JACK,
++ struct usb_midi_in_jack_descriptor),
++ FUNC(UAC_VERSION_ALL, USB_MS_MIDI_OUT_JACK,
++ validate_midi_out_jack),
++ { } /* terminator */
++};
++
++
++/* Validate the given unit descriptor, return true if it's OK */
++static bool validate_desc(unsigned char *hdr, int protocol,
++ const struct usb_desc_validator *v)
++{
++ if (hdr[1] != USB_DT_CS_INTERFACE)
++ return true; /* don't care */
++
++ for (; v->type; v++) {
++ if (v->type == hdr[2] &&
++ (v->protocol == UAC_VERSION_ALL ||
++ v->protocol == protocol)) {
++ if (v->func)
++ return v->func(hdr, v);
++ /* check for the fixed size */
++ return hdr[0] >= v->size;
++ }
++ }
++
++ return true; /* not matching, skip validation */
++}
++
++bool snd_usb_validate_audio_desc(void *p, int protocol)
++{
++ return validate_desc(p, protocol, audio_validators);
++}
++
++bool snd_usb_validate_midi_desc(void *p)
++{
++ return validate_desc(p, UAC_VERSION_1, midi_validators);
++}
++
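The new validate.c funnels every descriptor length check through one table walk: each entry declares either a fixed minimum size (FIXED) or a callback for variable-length layouts (FUNC), and unknown subtypes deliberately pass so unusual hardware is skipped rather than rejected. A self-contained miniature of the same pattern (types and sizes invented for the demo):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct validator {
	unsigned char type;
	bool (*func)(const unsigned char *desc);
	size_t size;
};

/* variable-length rule: header + 2 bytes per input pin (desc[5]) */
static bool check_jack(const unsigned char *d)
{
	return d[0] >= 6 + d[5] * 2;
}

static const struct validator table[] = {
	{ .type = 0x01, .size = 9 },            /* fixed-size rule */
	{ .type = 0x03, .func = check_jack },   /* callback rule */
	{ }                                     /* terminator */
};

static bool validate(const unsigned char *desc)
{
	const struct validator *v;

	for (v = table; v->type; v++)
		if (v->type == desc[2])
			return v->func ? v->func(desc) : desc[0] >= v->size;
	return true;    /* unknown subtype: skip, don't reject */
}

int main(void)
{
	const unsigned char short_desc[] = { 7, 0x24, 0x01, 0, 0, 0, 0 };

	printf("%s\n", validate(short_desc) ? "ok" : "too short"); /* too short */
	return 0;
}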
+diff --git a/tools/gpio/Makefile b/tools/gpio/Makefile
+index 6ecdd1067826..1178d302757e 100644
+--- a/tools/gpio/Makefile
++++ b/tools/gpio/Makefile
+@@ -3,7 +3,11 @@ include ../scripts/Makefile.include
+
+ bindir ?= /usr/bin
+
+-ifeq ($(srctree),)
++# This will work when gpio is built in the tools env, where srctree
++# isn't set, and when invoked from the selftests build, where srctree
++# is set to ".". building_out_of_srctree is undefined for in-srctree
++# builds.
++ifndef building_out_of_srctree
+ srctree := $(patsubst %/,%,$(dir $(CURDIR)))
+ srctree := $(patsubst %/,%,$(dir $(srctree)))
+ endif
+diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
+index 6bd270a1e93e..7ec174361c54 100644
+--- a/tools/perf/util/hist.c
++++ b/tools/perf/util/hist.c
+@@ -1618,7 +1618,7 @@ int hists__collapse_resort(struct hists *hists, struct ui_progress *prog)
+ return 0;
+ }
+
+-static int hist_entry__sort(struct hist_entry *a, struct hist_entry *b)
++static int64_t hist_entry__sort(struct hist_entry *a, struct hist_entry *b)
+ {
+ struct hists *hists = a->hists;
+ struct perf_hpp_fmt *fmt;
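The return-type change is the entire fix: hist_entry__sort() combines 64-bit comparison results, and narrowing them to int can throw away exactly the bits that distinguish two entries, so entries differing by a multiple of 2^32 compare as equal. A two-function reproduction (the truncating conversion is implementation-defined but yields 0 on the usual ABIs):

#include <stdint.h>
#include <stdio.h>

static int     cmp_bad(int64_t a, int64_t b)  { return (int)(a - b); } /* truncates */
static int64_t cmp_good(int64_t a, int64_t b) { return a - b; }

int main(void)
{
	int64_t a = 0, b = ((int64_t)1) << 32;     /* differ by 2^32 */

	printf("bad:  %d\n", cmp_bad(a, b));       /* 0: looks equal */
	printf("good: %lld\n", (long long)cmp_good(a, b)); /* -4294967296 */
	return 0;
}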
+diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
+index f18113581cf0..b7691d4af521 100644
+--- a/tools/perf/util/map.c
++++ b/tools/perf/util/map.c
+@@ -637,7 +637,7 @@ bool map_groups__empty(struct map_groups *mg)
+
+ struct map_groups *map_groups__new(struct machine *machine)
+ {
+- struct map_groups *mg = malloc(sizeof(*mg));
++ struct map_groups *mg = zalloc(sizeof(*mg));
+
+ if (mg != NULL)
+ map_groups__init(mg, machine);
+diff --git a/tools/testing/selftests/bpf/test_tc_edt.sh b/tools/testing/selftests/bpf/test_tc_edt.sh
+index f38567ef694b..daa7d1b8d309 100755
+--- a/tools/testing/selftests/bpf/test_tc_edt.sh
++++ b/tools/testing/selftests/bpf/test_tc_edt.sh
+@@ -59,7 +59,7 @@ ip netns exec ${NS_SRC} tc filter add dev veth_src egress \
+
+ # start the listener
+ ip netns exec ${NS_DST} bash -c \
+- "nc -4 -l -s ${IP_DST} -p 9000 >/dev/null &"
++ "nc -4 -l -p 9000 >/dev/null &"
+ declare -i NC_PID=$!
+ sleep 1
+
+diff --git a/tools/testing/selftests/net/tls.c b/tools/testing/selftests/net/tls.c
+index 4c285b6e1db8..1c8f194d6556 100644
+--- a/tools/testing/selftests/net/tls.c
++++ b/tools/testing/selftests/net/tls.c
+@@ -898,6 +898,114 @@ TEST_F(tls, nonblocking)
+ }
+ }
+
++static void
++test_mutliproc(struct __test_metadata *_metadata, struct _test_data_tls *self,
++ bool sendpg, unsigned int n_readers, unsigned int n_writers)
++{
++ const unsigned int n_children = n_readers + n_writers;
++ const size_t data = 6 * 1000 * 1000;
++ const size_t file_sz = data / 100;
++ size_t read_bias, write_bias;
++ int i, fd, child_id;
++ char buf[file_sz];
++ pid_t pid;
++
++ /* Only allow multiples for simplicity */
++ ASSERT_EQ(!(n_readers % n_writers) || !(n_writers % n_readers), true);
++ read_bias = n_writers / n_readers ?: 1;
++ write_bias = n_readers / n_writers ?: 1;
++
++ /* prep a file to send */
++ fd = open("/tmp/", O_TMPFILE | O_RDWR, 0600);
++ ASSERT_GE(fd, 0);
++
++ memset(buf, 0xac, file_sz);
++ ASSERT_EQ(write(fd, buf, file_sz), file_sz);
++
++ /* spawn children */
++ for (child_id = 0; child_id < n_children; child_id++) {
++ pid = fork();
++ ASSERT_NE(pid, -1);
++ if (!pid)
++ break;
++ }
++
++ /* parent waits for all children */
++ if (pid) {
++ for (i = 0; i < n_children; i++) {
++ int status;
++
++ wait(&status);
++ EXPECT_EQ(status, 0);
++ }
++
++ return;
++ }
++
++ /* Split threads for reading and writing */
++ if (child_id < n_readers) {
++ size_t left = data * read_bias;
++ char rb[8001];
++
++ while (left) {
++ int res;
++
++ res = recv(self->cfd, rb,
++ left > sizeof(rb) ? sizeof(rb) : left, 0);
++
++ EXPECT_GE(res, 0);
++ left -= res;
++ }
++ } else {
++ size_t left = data * write_bias;
++
++ while (left) {
++ int res;
++
++ ASSERT_EQ(lseek(fd, 0, SEEK_SET), 0);
++ if (sendpg)
++ res = sendfile(self->fd, fd, NULL,
++ left > file_sz ? file_sz : left);
++ else
++ res = send(self->fd, buf,
++ left > file_sz ? file_sz : left, 0);
++
++ EXPECT_GE(res, 0);
++ left -= res;
++ }
++ }
++}
++
++TEST_F(tls, mutliproc_even)
++{
++ test_mutliproc(_metadata, self, false, 6, 6);
++}
++
++TEST_F(tls, mutliproc_readers)
++{
++ test_mutliproc(_metadata, self, false, 4, 12);
++}
++
++TEST_F(tls, mutliproc_writers)
++{
++ test_mutliproc(_metadata, self, false, 10, 2);
++}
++
++TEST_F(tls, mutliproc_sendpage_even)
++{
++ test_mutliproc(_metadata, self, true, 6, 6);
++}
++
++TEST_F(tls, mutliproc_sendpage_readers)
++{
++ test_mutliproc(_metadata, self, true, 4, 12);
++}
++
++TEST_F(tls, mutliproc_sendpage_writers)
++{
++ test_mutliproc(_metadata, self, true, 10, 2);
++}
++
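
The read/write biases keep both sides of the test moving the same total when
reader and writer counts differ: with 4 readers and 12 writers, read_bias is
12/4 = 3, while 4/12 truncates to 0 and the ?: fallback bumps it to 1, so
readers consume 4 * 3 and writers produce 12 * 1 units of 'data'. A compilable
restatement of that arithmetic:

    #include <stdio.h>

    int main(void)
    {
        unsigned int n_readers = 4, n_writers = 12;
        /* gcc's a ?: b extension, as in the test: fall back to 1 on zero */
        unsigned int read_bias  = n_writers / n_readers ?: 1;  /* 3 */
        unsigned int write_bias = n_readers / n_writers ?: 1;  /* 0 -> 1 */

        /* both sides move the same total: 4*3 == 12*1 units of 'data' */
        printf("%u %u\n", n_readers * read_bias, n_writers * write_bias);
        return 0;
    }
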
+ TEST_F(tls, control_msg)
+ {
+ if (self->notls)
+diff --git a/tools/usb/usbip/libsrc/usbip_device_driver.c b/tools/usb/usbip/libsrc/usbip_device_driver.c
+index 5a3726eb44ab..b237a43e6299 100644
+--- a/tools/usb/usbip/libsrc/usbip_device_driver.c
++++ b/tools/usb/usbip/libsrc/usbip_device_driver.c
+@@ -69,7 +69,7 @@ int read_usb_vudc_device(struct udev_device *sdev, struct usbip_usb_device *dev)
+ FILE *fd = NULL;
+ struct udev_device *plat;
+ const char *speed;
+- int ret = 0;
++ size_t ret;
+
+ plat = udev_device_get_parent(sdev);
+ path = udev_device_get_syspath(plat);
+@@ -79,8 +79,10 @@ int read_usb_vudc_device(struct udev_device *sdev, struct usbip_usb_device *dev)
+ if (!fd)
+ return -1;
+ ret = fread((char *) &descr, sizeof(descr), 1, fd);
+- if (ret < 0)
++ if (ret != 1) {
++ err("Cannot read vudc device descr file: %s", strerror(errno));
+ goto err;
++ }
+ fclose(fd);
+
+ copy_descr_attr(dev, &descr, bDeviceClass);
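
The fix works because fread() returns a count of complete items as a size_t,
which can never be negative, so the old 'ret < 0' test could never fire; the
only meaningful success test is comparison against the requested item count.
A hedged sketch of the corrected pattern:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* read exactly one fixed-size record; 0 on success, -1 on failure */
    static int read_record(FILE *fp, void *buf, size_t size)
    {
        size_t ret = fread(buf, size, 1, fp);

        if (ret != 1) {
            /* distinguish EOF from a real I/O error if needed */
            if (ferror(fp))
                fprintf(stderr, "read failed: %s\n", strerror(errno));
            else
                fprintf(stderr, "short read (EOF)\n");
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        char buf[16];
        return read_record(stdin, buf, sizeof(buf)) ? 1 : 0;
    }
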
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index c6a91b044d8d..9d4e03eddccf 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -50,6 +50,7 @@
+ #include <linux/bsearch.h>
+ #include <linux/io.h>
+ #include <linux/lockdep.h>
++#include <linux/kthread.h>
+
+ #include <asm/processor.h>
+ #include <asm/ioctl.h>
+@@ -617,13 +618,31 @@ static int kvm_create_vm_debugfs(struct kvm *kvm, int fd)
+
+ stat_data->kvm = kvm;
+ stat_data->offset = p->offset;
++ stat_data->mode = p->mode ? p->mode : 0644;
+ kvm->debugfs_stat_data[p - debugfs_entries] = stat_data;
+- debugfs_create_file(p->name, 0644, kvm->debugfs_dentry,
++ debugfs_create_file(p->name, stat_data->mode, kvm->debugfs_dentry,
+ stat_data, stat_fops_per_vm[p->kind]);
+ }
+ return 0;
+ }
+
++/*
++ * Called after the VM is otherwise initialized, but just before adding it to
++ * the vm_list.
++ */
++int __weak kvm_arch_post_init_vm(struct kvm *kvm)
++{
++ return 0;
++}
++
++/*
++ * Called just after removing the VM from the vm_list, but before doing any
++ * other destruction.
++ */
++void __weak kvm_arch_pre_destroy_vm(struct kvm *kvm)
++{
++}
++
+ static struct kvm *kvm_create_vm(unsigned long type)
+ {
+ int r, i;
+@@ -674,10 +693,14 @@ static struct kvm *kvm_create_vm(unsigned long type)
+ rcu_assign_pointer(kvm->buses[i],
+ kzalloc(sizeof(struct kvm_io_bus), GFP_KERNEL_ACCOUNT));
+ if (!kvm->buses[i])
+- goto out_err;
++ goto out_err_no_mmu_notifier;
+ }
+
+ r = kvm_init_mmu_notifier(kvm);
++ if (r)
++ goto out_err_no_mmu_notifier;
++
++ r = kvm_arch_post_init_vm(kvm);
+ if (r)
+ goto out_err;
+
+@@ -690,6 +713,11 @@ static struct kvm *kvm_create_vm(unsigned long type)
+ return kvm;
+
+ out_err:
++#if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER)
++ if (kvm->mmu_notifier.ops)
++ mmu_notifier_unregister(&kvm->mmu_notifier, current->mm);
++#endif
++out_err_no_mmu_notifier:
+ cleanup_srcu_struct(&kvm->irq_srcu);
+ out_err_no_irq_srcu:
+ cleanup_srcu_struct(&kvm->srcu);
+@@ -732,6 +760,8 @@ static void kvm_destroy_vm(struct kvm *kvm)
+ mutex_lock(&kvm_lock);
+ list_del(&kvm->vm_list);
+ mutex_unlock(&kvm_lock);
++ kvm_arch_pre_destroy_vm(kvm);
++
+ kvm_free_irq_routing(kvm);
+ for (i = 0; i < KVM_NR_BUSES; i++) {
+ struct kvm_io_bus *bus = kvm_get_bus(kvm, i);
+@@ -3930,7 +3960,9 @@ static int kvm_debugfs_open(struct inode *inode, struct file *file,
+ if (!refcount_inc_not_zero(&stat_data->kvm->users_count))
+ return -ENOENT;
+
+- if (simple_attr_open(inode, file, get, set, fmt)) {
++ if (simple_attr_open(inode, file, get,
++ stat_data->mode & S_IWUGO ? set : NULL,
++ fmt)) {
+ kvm_put_kvm(stat_data->kvm);
+ return -ENOMEM;
+ }
+@@ -4178,7 +4210,8 @@ static void kvm_init_debug(void)
+
+ kvm_debugfs_num_entries = 0;
+ for (p = debugfs_entries; p->name; ++p, kvm_debugfs_num_entries++) {
+- debugfs_create_file(p->name, 0644, kvm_debugfs_dir,
++ int mode = p->mode ? p->mode : 0644;
++ debugfs_create_file(p->name, mode, kvm_debugfs_dir,
+ (void *)(long)p->offset,
+ stat_fops[p->kind]);
+ }
+@@ -4361,3 +4394,86 @@ void kvm_exit(void)
+ kvm_vfio_ops_exit();
+ }
+ EXPORT_SYMBOL_GPL(kvm_exit);
++
++struct kvm_vm_worker_thread_context {
++ struct kvm *kvm;
++ struct task_struct *parent;
++ struct completion init_done;
++ kvm_vm_thread_fn_t thread_fn;
++ uintptr_t data;
++ int err;
++};
++
++static int kvm_vm_worker_thread(void *context)
++{
++ /*
++ * The init_context is allocated on the stack of the parent thread, so
++ * we have to locally copy anything that is needed beyond initialization
++ */
++ struct kvm_vm_worker_thread_context *init_context = context;
++ struct kvm *kvm = init_context->kvm;
++ kvm_vm_thread_fn_t thread_fn = init_context->thread_fn;
++ uintptr_t data = init_context->data;
++ int err;
++
++ err = kthread_park(current);
++ /* kthread_park(current) is never supposed to return an error */
++ WARN_ON(err != 0);
++ if (err)
++ goto init_complete;
++
++ err = cgroup_attach_task_all(init_context->parent, current);
++ if (err) {
++ kvm_err("%s: cgroup_attach_task_all failed with err %d\n",
++ __func__, err);
++ goto init_complete;
++ }
++
++ set_user_nice(current, task_nice(init_context->parent));
++
++init_complete:
++ init_context->err = err;
++ complete(&init_context->init_done);
++ init_context = NULL;
++
++ if (err)
++ return err;
++
++ /* Wait to be woken up by the spawner before proceeding. */
++ kthread_parkme();
++
++ if (!kthread_should_stop())
++ err = thread_fn(kvm, data);
++
++ return err;
++}
++
++int kvm_vm_create_worker_thread(struct kvm *kvm, kvm_vm_thread_fn_t thread_fn,
++ uintptr_t data, const char *name,
++ struct task_struct **thread_ptr)
++{
++ struct kvm_vm_worker_thread_context init_context = {};
++ struct task_struct *thread;
++
++ *thread_ptr = NULL;
++ init_context.kvm = kvm;
++ init_context.parent = current;
++ init_context.thread_fn = thread_fn;
++ init_context.data = data;
++ init_completion(&init_context.init_done);
++
++ thread = kthread_run(kvm_vm_worker_thread, &init_context,
++ "%s-%d", name, task_pid_nr(current));
++ if (IS_ERR(thread))
++ return PTR_ERR(thread);
++
++ /* kthread_run is never supposed to return NULL */
++ WARN_ON(thread == NULL);
++
++ wait_for_completion(&init_context.init_done);
++
++ if (!init_context.err)
++ *thread_ptr = thread;
++
++ return init_context.err;
++}
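
The worker copies everything it needs out of init_context before completing
init_done, because that context lives on the spawner's stack and dies when
kvm_vm_create_worker_thread() returns. A rough pthreads analogue of the
handshake (the park/unpark and cgroup steps are omitted, and the semaphore
merely stands in for struct completion):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    struct init_ctx {
        int data;        /* lives on the spawner's stack */
        sem_t init_done; /* stands in for struct completion */
        int err;
    };

    static void *worker(void *arg)
    {
        struct init_ctx *init = arg;
        int data = init->data;   /* copy before signalling! */

        init->err = 0;
        sem_post(&init->init_done);
        init = NULL;             /* the spawner's frame may be gone now */

        printf("worker got %d\n", data);
        return NULL;
    }

    int main(void)
    {
        struct init_ctx ctx = { .data = 42 };
        pthread_t t;

        sem_init(&ctx.init_done, 0, 0);
        if (pthread_create(&t, NULL, worker, &ctx))
            return 1;
        sem_wait(&ctx.init_done); /* don't proceed before the copy is made */
        /* ctx could now safely go out of scope */
        pthread_join(t, NULL);
        return ctx.err;
    }
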
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [gentoo-commits] proj/linux-patches:5.3 commit in: /
@ 2019-11-14 23:08 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2019-11-14 23:08 UTC (permalink / raw
To: gentoo-commits
commit: c2f22be66c4c66dfca8c9700e773ecf41429d30b
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Nov 14 23:07:49 2019 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Nov 14 23:07:49 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c2f22be6
x86/insn: Fix awk regexp warnings. See bug #696846.
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++
2900_awk-regexp-warnings.patch | 89 ++++++++++++++++++++++++++++++++++++++++++
2 files changed, 93 insertions(+)
diff --git a/0000_README b/0000_README
index 075d9be..0d383a1 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch: 2600_enable-key-swapping-for-apple-mac.patch
From: https://github.com/free5lot/hid-apple-patched
Desc: This hid-apple patch enables swapping of the FN and left Control keys and some additional keys on some Apple keyboards. See bug #622902
+Patch: 2900_awk-regexp-warnings.patch
+From: https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/commit/?id=700c1018b86d0d4b3f1f2d459708c0cdf42b521d
+Desc: x86/insn: Fix awk regexp warnings. See bug #696846.
+
Patch: 4567_distro-Gentoo-Kconfig.patch
From: Tom Wijsman <TomWij@gentoo.org>
Desc: Add Gentoo Linux support config settings and defaults.
diff --git a/2900_awk-regexp-warnings.patch b/2900_awk-regexp-warnings.patch
new file mode 100644
index 0000000..5e62625
--- /dev/null
+++ b/2900_awk-regexp-warnings.patch
@@ -0,0 +1,89 @@
+From 700c1018b86d0d4b3f1f2d459708c0cdf42b521d Mon Sep 17 00:00:00 2001
+From: Alexander Kapshuk <alexander.kapshuk@gmail.com>
+Date: Tue, 24 Sep 2019 07:46:59 +0300
+Subject: x86/insn: Fix awk regexp warnings
+
+gawk 5.0.1 generates the following regexp warnings:
+
+ GEN /home/sasha/torvalds/tools/objtool/arch/x86/lib/inat-tables.c
+ awk: ../arch/x86/tools/gen-insn-attr-x86.awk:260: warning: regexp escape sequence `\:' is not a known regexp operator
+ awk: ../arch/x86/tools/gen-insn-attr-x86.awk:350: (FILENAME=../arch/x86/lib/x86-opcode-map.txt FNR=41) warning: regexp escape sequence `\&' is not a known regexp operator
+
+Earlier versions of gawk are not known to generate these warnings. The
+gawk manual referenced below does not list characters ':' and '&' as
+needing escaping, so 'unescape' them. See
+
+ https://www.gnu.org/software/gawk/manual/html_node/Escape-Sequences.html
+
+for more info.
+
+Running diff on the output generated by the script before and after
+applying the patch reported no differences.
+
+ [ bp: Massage commit message. ]
+
+[ Caught the respective tools header discrepancy. ]
+Reported-by: kbuild test robot <lkp@intel.com>
+Signed-off-by: Alexander Kapshuk <alexander.kapshuk@gmail.com>
+Signed-off-by: Borislav Petkov <bp@suse.de>
+Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
+Cc: "H. Peter Anvin" <hpa@zytor.com>
+Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
+Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
+Cc: Ingo Molnar <mingo@redhat.com>
+Cc: Josh Poimboeuf <jpoimboe@redhat.com>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Cc: x86-ml <x86@kernel.org>
+Link: https://lkml.kernel.org/r/20190924044659.3785-1-alexander.kapshuk@gmail.com
+---
+ arch/x86/tools/gen-insn-attr-x86.awk | 4 ++--
+ tools/arch/x86/tools/gen-insn-attr-x86.awk | 4 ++--
+ 2 files changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/arch/x86/tools/gen-insn-attr-x86.awk b/arch/x86/tools/gen-insn-attr-x86.awk
+index b02a36b2c14f..a42015b305f4 100644
+--- a/arch/x86/tools/gen-insn-attr-x86.awk
++++ b/arch/x86/tools/gen-insn-attr-x86.awk
+@@ -69,7 +69,7 @@ BEGIN {
+
+ lprefix1_expr = "\\((66|!F3)\\)"
+ lprefix2_expr = "\\(F3\\)"
+- lprefix3_expr = "\\((F2|!F3|66\\&F2)\\)"
++ lprefix3_expr = "\\((F2|!F3|66&F2)\\)"
+ lprefix_expr = "\\((66|F2|F3)\\)"
+ max_lprefix = 4
+
+@@ -257,7 +257,7 @@ function convert_operands(count,opnd, i,j,imm,mod)
+ return add_flags(imm, mod)
+ }
+
+-/^[0-9a-f]+\:/ {
++/^[0-9a-f]+:/ {
+ if (NR == 1)
+ next
+ # get index
+diff --git a/tools/arch/x86/tools/gen-insn-attr-x86.awk b/tools/arch/x86/tools/gen-insn-attr-x86.awk
+index b02a36b2c14f..a42015b305f4 100644
+--- a/tools/arch/x86/tools/gen-insn-attr-x86.awk
++++ b/tools/arch/x86/tools/gen-insn-attr-x86.awk
+@@ -69,7 +69,7 @@ BEGIN {
+
+ lprefix1_expr = "\\((66|!F3)\\)"
+ lprefix2_expr = "\\(F3\\)"
+- lprefix3_expr = "\\((F2|!F3|66\\&F2)\\)"
++ lprefix3_expr = "\\((F2|!F3|66&F2)\\)"
+ lprefix_expr = "\\((66|F2|F3)\\)"
+ max_lprefix = 4
+
+@@ -257,7 +257,7 @@ function convert_operands(count,opnd, i,j,imm,mod)
+ return add_flags(imm, mod)
+ }
+
+-/^[0-9a-f]+\:/ {
++/^[0-9a-f]+:/ {
+ if (NR == 1)
+ next
+ # get index
+--
+cgit 1.2-0.3.lf.el7
+
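
In POSIX extended regular expressions, ':' and '&' are ordinary characters,
so the removed backslashes never changed what matched; newer gawk simply began
warning about the pointless escapes. The same property can be checked against
the C regex engine:

    #include <regex.h>
    #include <stdio.h>

    int main(void)
    {
        regex_t re;

        /* unescaped ':' is an ordinary ERE character, as in the .awk fix */
        if (regcomp(&re, "^[0-9a-f]+:", REG_EXTENDED))
            return 1;
        printf("%s\n", regexec(&re, "00fa: add", 0, NULL, 0) == 0 ?
               "match" : "no match");             /* prints "match" */
        regfree(&re);
        return 0;
    }
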
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [gentoo-commits] proj/linux-patches:5.3 commit in: /
@ 2019-11-20 16:39 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2019-11-20 16:39 UTC (permalink / raw
To: gentoo-commits
commit: 5933a9239408065bb06ea1767b8294503c34ca86
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Nov 20 16:39:39 2019 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 20 16:39:39 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5933a923
Linux patch 5.3.12
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1011_linux-5.3.12.patch | 1501 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1505 insertions(+)
diff --git a/0000_README b/0000_README
index 0d383a1..bb387d0 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch: 1010_linux-5.3.11.patch
From: http://www.kernel.org
Desc: Linux 5.3.11
+Patch: 1011_linux-5.3.12.patch
+From: http://www.kernel.org
+Desc: Linux 5.3.12
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1011_linux-5.3.12.patch b/1011_linux-5.3.12.patch
new file mode 100644
index 0000000..1e831b7
--- /dev/null
+++ b/1011_linux-5.3.12.patch
@@ -0,0 +1,1501 @@
+diff --git a/Makefile b/Makefile
+index 40148c01ffe2..2f0c428ed2b6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 3
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Bobtail Squid
+
+diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.c
+index 6c4f01540833..43abebc2fc77 100644
+--- a/arch/x86/kernel/early-quirks.c
++++ b/arch/x86/kernel/early-quirks.c
+@@ -709,6 +709,8 @@ static struct chipset early_qrk[] __initdata = {
+ */
+ { PCI_VENDOR_ID_INTEL, 0x0f00,
+ PCI_CLASS_BRIDGE_HOST, PCI_ANY_ID, 0, force_disable_hpet},
++ { PCI_VENDOR_ID_INTEL, 0x3ec4,
++ PCI_CLASS_BRIDGE_HOST, PCI_ANY_ID, 0, force_disable_hpet},
+ { PCI_VENDOR_ID_BROADCOM, 0x4331,
+ PCI_CLASS_NETWORK_OTHER, PCI_ANY_ID, 0, apple_airport_reset},
+ {}
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index 32b1c6136c6a..2812e5c4ab7b 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -3352,7 +3352,7 @@ static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
+ * here.
+ */
+ if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn) &&
+- level == PT_PAGE_TABLE_LEVEL &&
++ !kvm_is_zone_device_pfn(pfn) && level == PT_PAGE_TABLE_LEVEL &&
+ PageTransCompoundMap(pfn_to_page(pfn)) &&
+ !mmu_gfn_lpage_is_disallowed(vcpu, gfn, PT_DIRECTORY_LEVEL)) {
+ unsigned long mask;
+@@ -5961,9 +5961,9 @@ restart:
+ * the guest, and the guest page table is using 4K page size
+ * mapping if the indirect sp has level = 1.
+ */
+- if (sp->role.direct &&
+- !kvm_is_reserved_pfn(pfn) &&
+- PageTransCompoundMap(pfn_to_page(pfn))) {
++ if (sp->role.direct && !kvm_is_reserved_pfn(pfn) &&
++ !kvm_is_zone_device_pfn(pfn) &&
++ PageTransCompoundMap(pfn_to_page(pfn))) {
+ pte_list_remove(rmap_head, sptep);
+
+ if (kvm_available_flush_tlb_with_range())
+diff --git a/drivers/base/memory.c b/drivers/base/memory.c
+index 9b9abc4fcfb7..c6791a59bce7 100644
+--- a/drivers/base/memory.c
++++ b/drivers/base/memory.c
+@@ -884,3 +884,39 @@ int walk_memory_blocks(unsigned long start, unsigned long size,
+ }
+ return ret;
+ }
++
++struct for_each_memory_block_cb_data {
++ walk_memory_blocks_func_t func;
++ void *arg;
++};
++
++static int for_each_memory_block_cb(struct device *dev, void *data)
++{
++ struct memory_block *mem = to_memory_block(dev);
++ struct for_each_memory_block_cb_data *cb_data = data;
++
++ return cb_data->func(mem, cb_data->arg);
++}
++
++/**
++ * for_each_memory_block - walk through all present memory blocks
++ *
++ * @arg: argument passed to func
++ * @func: callback for each memory block walked
++ *
++ * This function walks through all present memory blocks, calling func on
++ * each memory block.
++ *
++ * In case func() returns an error, walking is aborted and the error is
++ * returned.
++ */
++int for_each_memory_block(void *arg, walk_memory_blocks_func_t func)
++{
++ struct for_each_memory_block_cb_data cb_data = {
++ .func = func,
++ .arg = arg,
++ };
++
++ return bus_for_each_dev(&memory_subsys, NULL, &cb_data,
++ for_each_memory_block_cb);
++}
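
for_each_memory_block() is the usual adapter trick: the caller's typed
callback and argument ride inside a small cb_data struct so a generic
iterator can drive them, and a nonzero return aborts the walk. A stripped-down
userspace sketch of the wrapper (the types and the -EEXIST convention are
illustrative):

    #include <stdio.h>

    struct block { int nid; };

    typedef int (*walk_func_t)(struct block *, void *);

    struct cb_data {
        walk_func_t func;
        void *arg;
    };

    /* generic iterator; knows nothing about the callback's real types */
    static int for_each(struct block *blocks, int n,
                        int (*cb)(struct block *, void *), void *data)
    {
        for (int i = 0; i < n; i++) {
            int ret = cb(&blocks[i], data);
            if (ret)             /* nonzero aborts the walk */
                return ret;
        }
        return 0;
    }

    static int adapter(struct block *b, void *data)
    {
        struct cb_data *cb = data;
        return cb->func(b, cb->arg);
    }

    /* mirrors check_no_memblock_for_node_cb(): stop on first hit */
    static int has_nid(struct block *b, void *arg)
    {
        return b->nid == *(int *)arg ? -17 /* -EEXIST */ : 0;
    }

    int main(void)
    {
        struct block blocks[] = { { 0 }, { 1 }, { 1 } };
        int nid = 1;
        struct cb_data cb = { .func = has_nid, .arg = &nid };

        printf("%d\n", for_each(blocks, 3, adapter, &cb)); /* -17 */
        return 0;
    }
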
+diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c
+index 2d1939db108f..dd1a43a366f2 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_power.c
++++ b/drivers/gpu/drm/i915/display/intel_display_power.c
+@@ -4345,6 +4345,9 @@ void intel_power_domains_init_hw(struct drm_i915_private *i915, bool resume)
+
+ power_domains->initializing = true;
+
++ /* Must happen before power domain init on VLV/CHV */
++ intel_update_rawclk(i915);
++
+ if (INTEL_GEN(i915) >= 11) {
+ icl_display_core_init(i915, resume);
+ } else if (IS_CANNONLAKE(i915)) {
+diff --git a/drivers/gpu/drm/i915/gt/intel_mocs.c b/drivers/gpu/drm/i915/gt/intel_mocs.c
+index 1f9db50b1869..79df66022d3a 100644
+--- a/drivers/gpu/drm/i915/gt/intel_mocs.c
++++ b/drivers/gpu/drm/i915/gt/intel_mocs.c
+@@ -200,14 +200,6 @@ static const struct drm_i915_mocs_entry broxton_mocs_table[] = {
+ MOCS_ENTRY(15, \
+ LE_3_WB | LE_TC_1_LLC | LE_LRUM(2) | LE_AOM(1), \
+ L3_3_WB), \
+- /* Bypass LLC - Uncached (EHL+) */ \
+- MOCS_ENTRY(16, \
+- LE_1_UC | LE_TC_1_LLC | LE_SCF(1), \
+- L3_1_UC), \
+- /* Bypass LLC - L3 (Read-Only) (EHL+) */ \
+- MOCS_ENTRY(17, \
+- LE_1_UC | LE_TC_1_LLC | LE_SCF(1), \
+- L3_3_WB), \
+ /* Self-Snoop - L3 + LLC */ \
+ MOCS_ENTRY(18, \
+ LE_3_WB | LE_TC_1_LLC | LE_LRUM(3) | LE_SSE(3), \
+diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
+index b6d51514cf9c..942d8b9fff3c 100644
+--- a/drivers/gpu/drm/i915/i915_drv.c
++++ b/drivers/gpu/drm/i915/i915_drv.c
+@@ -708,9 +708,6 @@ static int i915_load_modeset_init(struct drm_device *dev)
+ if (ret)
+ goto cleanup_vga_client;
+
+- /* must happen before intel_power_domains_init_hw() on VLV/CHV */
+- intel_update_rawclk(dev_priv);
+-
+ intel_power_domains_init_hw(dev_priv, false);
+
+ intel_csr_ucode_init(dev_priv);
+diff --git a/drivers/i2c/i2c-core-acpi.c b/drivers/i2c/i2c-core-acpi.c
+index 4dbbc9a35f65..a2c68c2f444a 100644
+--- a/drivers/i2c/i2c-core-acpi.c
++++ b/drivers/i2c/i2c-core-acpi.c
+@@ -39,6 +39,7 @@ struct i2c_acpi_lookup {
+ int index;
+ u32 speed;
+ u32 min_speed;
++ u32 force_speed;
+ };
+
+ /**
+@@ -285,6 +286,19 @@ i2c_acpi_match_device(const struct acpi_device_id *matches,
+ return acpi_match_device(matches, &client->dev);
+ }
+
++static const struct acpi_device_id i2c_acpi_force_400khz_device_ids[] = {
++ /*
++ * These Silead touchscreen controllers only work at 400KHz, for
++ * some reason they do not work at 100KHz. On some devices the ACPI
++ * tables list another device at their bus as only being capable
++ * of 100KHz, testing has shown that these other devices work fine
++ * at 400KHz (as can be expected of any recent i2c hw) so we force
++ * the speed of the bus to 400 KHz if a Silead device is present.
++ */
++ { "MSSL1680", 0 },
++ {}
++};
++
+ static acpi_status i2c_acpi_lookup_speed(acpi_handle handle, u32 level,
+ void *data, void **return_value)
+ {
+@@ -303,6 +317,9 @@ static acpi_status i2c_acpi_lookup_speed(acpi_handle handle, u32 level,
+ if (lookup->speed <= lookup->min_speed)
+ lookup->min_speed = lookup->speed;
+
++ if (acpi_match_device_ids(adev, i2c_acpi_force_400khz_device_ids) == 0)
++ lookup->force_speed = 400000;
++
+ return AE_OK;
+ }
+
+@@ -340,7 +357,16 @@ u32 i2c_acpi_find_bus_speed(struct device *dev)
+ return 0;
+ }
+
+- return lookup.min_speed != UINT_MAX ? lookup.min_speed : 0;
++ if (lookup.force_speed) {
++ if (lookup.force_speed != lookup.min_speed)
++ dev_warn(dev, FW_BUG "DSDT uses known not-working I2C bus speed %d, forcing it to %d\n",
++ lookup.min_speed, lookup.force_speed);
++ return lookup.force_speed;
++ } else if (lookup.min_speed != UINT_MAX) {
++ return lookup.min_speed;
++ } else {
++ return 0;
++ }
+ }
+ EXPORT_SYMBOL_GPL(i2c_acpi_find_bus_speed);
+
+diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
+index 71cb9525c074..26b792bb1027 100644
+--- a/drivers/infiniband/hw/hfi1/init.c
++++ b/drivers/infiniband/hw/hfi1/init.c
+@@ -1489,7 +1489,6 @@ static int __init hfi1_mod_init(void)
+ goto bail_dev;
+ }
+
+- hfi1_compute_tid_rdma_flow_wt();
+ /*
+ * These must be called before the driver is registered with
+ * the PCI subsystem.
+diff --git a/drivers/infiniband/hw/hfi1/pcie.c b/drivers/infiniband/hw/hfi1/pcie.c
+index 61aa5504d7c3..61362bd6d3ce 100644
+--- a/drivers/infiniband/hw/hfi1/pcie.c
++++ b/drivers/infiniband/hw/hfi1/pcie.c
+@@ -319,7 +319,9 @@ int pcie_speeds(struct hfi1_devdata *dd)
+ /*
+ * bus->max_bus_speed is set from the bridge's linkcap Max Link Speed
+ */
+- if (parent && dd->pcidev->bus->max_bus_speed != PCIE_SPEED_8_0GT) {
++ if (parent &&
++ (dd->pcidev->bus->max_bus_speed == PCIE_SPEED_2_5GT ||
++ dd->pcidev->bus->max_bus_speed == PCIE_SPEED_5_0GT)) {
+ dd_dev_info(dd, "Parent PCIe bridge does not support Gen3\n");
+ dd->link_gen3_capable = 0;
+ }
+diff --git a/drivers/infiniband/hw/hfi1/rc.c b/drivers/infiniband/hw/hfi1/rc.c
+index 024a7c2b6124..de90df3816f2 100644
+--- a/drivers/infiniband/hw/hfi1/rc.c
++++ b/drivers/infiniband/hw/hfi1/rc.c
+@@ -2210,15 +2210,15 @@ int do_rc_ack(struct rvt_qp *qp, u32 aeth, u32 psn, int opcode,
+ if (qp->s_flags & RVT_S_WAIT_RNR)
+ goto bail_stop;
+ rdi = ib_to_rvt(qp->ibqp.device);
+- if (qp->s_rnr_retry == 0 &&
+- !((rdi->post_parms[wqe->wr.opcode].flags &
+- RVT_OPERATION_IGN_RNR_CNT) &&
+- qp->s_rnr_retry_cnt == 0)) {
+- status = IB_WC_RNR_RETRY_EXC_ERR;
+- goto class_b;
++ if (!(rdi->post_parms[wqe->wr.opcode].flags &
++ RVT_OPERATION_IGN_RNR_CNT)) {
++ if (qp->s_rnr_retry == 0) {
++ status = IB_WC_RNR_RETRY_EXC_ERR;
++ goto class_b;
++ }
++ if (qp->s_rnr_retry_cnt < 7 && qp->s_rnr_retry_cnt > 0)
++ qp->s_rnr_retry--;
+ }
+- if (qp->s_rnr_retry_cnt < 7 && qp->s_rnr_retry_cnt > 0)
+- qp->s_rnr_retry--;
+
+ /*
+ * The last valid PSN is the previous PSN. For TID RDMA WRITE
+diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
+index 2ed7bfd5feea..c61b6022575e 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.c
++++ b/drivers/infiniband/hw/hfi1/sdma.c
+@@ -65,6 +65,7 @@
+ #define SDMA_DESCQ_CNT 2048
+ #define SDMA_DESC_INTR 64
+ #define INVALID_TAIL 0xffff
++#define SDMA_PAD max_t(size_t, MAX_16B_PADDING, sizeof(u32))
+
+ static uint sdma_descq_cnt = SDMA_DESCQ_CNT;
+ module_param(sdma_descq_cnt, uint, S_IRUGO);
+@@ -1296,7 +1297,7 @@ void sdma_clean(struct hfi1_devdata *dd, size_t num_engines)
+ struct sdma_engine *sde;
+
+ if (dd->sdma_pad_dma) {
+- dma_free_coherent(&dd->pcidev->dev, 4,
++ dma_free_coherent(&dd->pcidev->dev, SDMA_PAD,
+ (void *)dd->sdma_pad_dma,
+ dd->sdma_pad_phys);
+ dd->sdma_pad_dma = NULL;
+@@ -1491,7 +1492,7 @@ int sdma_init(struct hfi1_devdata *dd, u8 port)
+ }
+
+ /* Allocate memory for pad */
+- dd->sdma_pad_dma = dma_alloc_coherent(&dd->pcidev->dev, sizeof(u32),
++ dd->sdma_pad_dma = dma_alloc_coherent(&dd->pcidev->dev, SDMA_PAD,
+ &dd->sdma_pad_phys, GFP_KERNEL);
+ if (!dd->sdma_pad_dma) {
+ dd_dev_err(dd, "failed to allocate SendDMA pad memory\n");
+diff --git a/drivers/infiniband/hw/hfi1/tid_rdma.c b/drivers/infiniband/hw/hfi1/tid_rdma.c
+index 536d974c78cf..09838980f827 100644
+--- a/drivers/infiniband/hw/hfi1/tid_rdma.c
++++ b/drivers/infiniband/hw/hfi1/tid_rdma.c
+@@ -107,8 +107,6 @@ static u32 mask_generation(u32 a)
+ * C - Capcode
+ */
+
+-static u32 tid_rdma_flow_wt;
+-
+ static void tid_rdma_trigger_resume(struct work_struct *work);
+ static void hfi1_kern_exp_rcv_free_flows(struct tid_rdma_request *req);
+ static int hfi1_kern_exp_rcv_alloc_flows(struct tid_rdma_request *req,
+@@ -136,6 +134,26 @@ static void update_r_next_psn_fecn(struct hfi1_packet *packet,
+ struct tid_rdma_flow *flow,
+ bool fecn);
+
++static void validate_r_tid_ack(struct hfi1_qp_priv *priv)
++{
++ if (priv->r_tid_ack == HFI1_QP_WQE_INVALID)
++ priv->r_tid_ack = priv->r_tid_tail;
++}
++
++static void tid_rdma_schedule_ack(struct rvt_qp *qp)
++{
++ struct hfi1_qp_priv *priv = qp->priv;
++
++ priv->s_flags |= RVT_S_ACK_PENDING;
++ hfi1_schedule_tid_send(qp);
++}
++
++static void tid_rdma_trigger_ack(struct rvt_qp *qp)
++{
++ validate_r_tid_ack(qp->priv);
++ tid_rdma_schedule_ack(qp);
++}
++
+ static u64 tid_rdma_opfn_encode(struct tid_rdma_params *p)
+ {
+ return
+@@ -2997,10 +3015,7 @@ nak_psn:
+ qpriv->s_nak_state = IB_NAK_PSN_ERROR;
+ /* We are NAK'ing the next expected PSN */
+ qpriv->s_nak_psn = mask_psn(flow->flow_state.r_next_psn);
+- qpriv->s_flags |= RVT_S_ACK_PENDING;
+- if (qpriv->r_tid_ack == HFI1_QP_WQE_INVALID)
+- qpriv->r_tid_ack = qpriv->r_tid_tail;
+- hfi1_schedule_tid_send(qp);
++ tid_rdma_trigger_ack(qp);
+ }
+ goto unlock;
+ }
+@@ -3363,18 +3378,17 @@ u32 hfi1_build_tid_rdma_write_req(struct rvt_qp *qp, struct rvt_swqe *wqe,
+ return sizeof(ohdr->u.tid_rdma.w_req) / sizeof(u32);
+ }
+
+-void hfi1_compute_tid_rdma_flow_wt(void)
++static u32 hfi1_compute_tid_rdma_flow_wt(struct rvt_qp *qp)
+ {
+ /*
+ * Heuristic for computing the RNR timeout when waiting on the flow
+ * queue. Rather than a computationaly expensive exact estimate of when
+ * a flow will be available, we assume that if a QP is at position N in
+ * the flow queue it has to wait approximately (N + 1) * (number of
+- * segments between two sync points), assuming PMTU of 4K. The rationale
+- * for this is that flows are released and recycled at each sync point.
++ * segments between two sync points). The rationale for this is that
++ * flows are released and recycled at each sync point.
+ */
+- tid_rdma_flow_wt = MAX_TID_FLOW_PSN * enum_to_mtu(OPA_MTU_4096) /
+- TID_RDMA_MAX_SEGMENT_SIZE;
++ return (MAX_TID_FLOW_PSN * qp->pmtu) >> TID_RDMA_SEGMENT_SHIFT;
+ }
+
+ static u32 position_in_queue(struct hfi1_qp_priv *qpriv,
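
The rewritten heuristic scales with the QP's real MTU instead of hard-coding
4K; shifting by TID_RDMA_SEGMENT_SHIFT divides by the 256 KiB segment size.
As a worked example, if MAX_TID_FLOW_PSN were 2048 (an illustrative value,
not taken from this patch) and the pmtu 4 KiB:

    #include <stdio.h>

    int main(void)
    {
        /* assumed values: MAX_TID_FLOW_PSN = 2048, pmtu = 4 KiB */
        unsigned long max_flow_psn = 2048, pmtu = 4096;

        /* (2048 * 4096) >> 18 = 32 segments per queue position */
        printf("%lu\n", (max_flow_psn * pmtu) >> 18);
        return 0;
    }
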
+@@ -3497,7 +3511,7 @@ static void hfi1_tid_write_alloc_resources(struct rvt_qp *qp, bool intr_ctx)
+ if (qpriv->flow_state.index >= RXE_NUM_TID_FLOWS) {
+ ret = hfi1_kern_setup_hw_flow(qpriv->rcd, qp);
+ if (ret) {
+- to_seg = tid_rdma_flow_wt *
++ to_seg = hfi1_compute_tid_rdma_flow_wt(qp) *
+ position_in_queue(qpriv,
+ &rcd->flow_queue);
+ break;
+@@ -3518,7 +3532,7 @@ static void hfi1_tid_write_alloc_resources(struct rvt_qp *qp, bool intr_ctx)
+ /*
+ * If overtaking req->acked_tail, send an RNR NAK. Because the
+ * QP is not queued in this case, and the issue can only be
+- * caused due a delay in scheduling the second leg which we
++ * caused by a delay in scheduling the second leg which we
+ * cannot estimate, we use a rather arbitrary RNR timeout of
+ * (MAX_FLOWS / 2) segments
+ */
+@@ -3526,8 +3540,7 @@ static void hfi1_tid_write_alloc_resources(struct rvt_qp *qp, bool intr_ctx)
+ MAX_FLOWS)) {
+ ret = -EAGAIN;
+ to_seg = MAX_FLOWS >> 1;
+- qpriv->s_flags |= RVT_S_ACK_PENDING;
+- hfi1_schedule_tid_send(qp);
++ tid_rdma_trigger_ack(qp);
+ break;
+ }
+
+@@ -4327,8 +4340,7 @@ void hfi1_rc_rcv_tid_rdma_write_data(struct hfi1_packet *packet)
+ trace_hfi1_tid_req_rcv_write_data(qp, 0, e->opcode, e->psn, e->lpsn,
+ req);
+ trace_hfi1_tid_write_rsp_rcv_data(qp);
+- if (priv->r_tid_ack == HFI1_QP_WQE_INVALID)
+- priv->r_tid_ack = priv->r_tid_tail;
++ validate_r_tid_ack(priv);
+
+ if (opcode == TID_OP(WRITE_DATA_LAST)) {
+ release_rdma_sge_mr(e);
+@@ -4367,8 +4379,7 @@ void hfi1_rc_rcv_tid_rdma_write_data(struct hfi1_packet *packet)
+ }
+
+ done:
+- priv->s_flags |= RVT_S_ACK_PENDING;
+- hfi1_schedule_tid_send(qp);
++ tid_rdma_schedule_ack(qp);
+ exit:
+ priv->r_next_psn_kdeth = flow->flow_state.r_next_psn;
+ if (fecn)
+@@ -4380,10 +4391,7 @@ send_nak:
+ if (!priv->s_nak_state) {
+ priv->s_nak_state = IB_NAK_PSN_ERROR;
+ priv->s_nak_psn = flow->flow_state.r_next_psn;
+- priv->s_flags |= RVT_S_ACK_PENDING;
+- if (priv->r_tid_ack == HFI1_QP_WQE_INVALID)
+- priv->r_tid_ack = priv->r_tid_tail;
+- hfi1_schedule_tid_send(qp);
++ tid_rdma_trigger_ack(qp);
+ }
+ goto done;
+ }
+@@ -4931,8 +4939,7 @@ void hfi1_rc_rcv_tid_rdma_resync(struct hfi1_packet *packet)
+ qpriv->resync = true;
+ /* RESYNC request always gets a TID RDMA ACK. */
+ qpriv->s_nak_state = 0;
+- qpriv->s_flags |= RVT_S_ACK_PENDING;
+- hfi1_schedule_tid_send(qp);
++ tid_rdma_trigger_ack(qp);
+ bail:
+ if (fecn)
+ qp->s_flags |= RVT_S_ECN;
+diff --git a/drivers/infiniband/hw/hfi1/tid_rdma.h b/drivers/infiniband/hw/hfi1/tid_rdma.h
+index 1c536185261e..6e82df2190b7 100644
+--- a/drivers/infiniband/hw/hfi1/tid_rdma.h
++++ b/drivers/infiniband/hw/hfi1/tid_rdma.h
+@@ -17,6 +17,7 @@
+ #define TID_RDMA_MIN_SEGMENT_SIZE BIT(18) /* 256 KiB (for now) */
+ #define TID_RDMA_MAX_SEGMENT_SIZE BIT(18) /* 256 KiB (for now) */
+ #define TID_RDMA_MAX_PAGES (BIT(18) >> PAGE_SHIFT)
++#define TID_RDMA_SEGMENT_SHIFT 18
+
+ /*
+ * Bit definitions for priv->s_flags.
+@@ -274,8 +275,6 @@ u32 hfi1_build_tid_rdma_write_req(struct rvt_qp *qp, struct rvt_swqe *wqe,
+ struct ib_other_headers *ohdr,
+ u32 *bth1, u32 *bth2, u32 *len);
+
+-void hfi1_compute_tid_rdma_flow_wt(void);
+-
+ void hfi1_rc_rcv_tid_rdma_write_req(struct hfi1_packet *packet);
+
+ u32 hfi1_build_tid_rdma_write_resp(struct rvt_qp *qp, struct rvt_ack_entry *e,
+diff --git a/drivers/infiniband/hw/hfi1/verbs.c b/drivers/infiniband/hw/hfi1/verbs.c
+index 9f53f63b1453..4add0c9b8c77 100644
+--- a/drivers/infiniband/hw/hfi1/verbs.c
++++ b/drivers/infiniband/hw/hfi1/verbs.c
+@@ -147,9 +147,6 @@ static int pio_wait(struct rvt_qp *qp,
+ /* Length of buffer to create verbs txreq cache name */
+ #define TXREQ_NAME_LEN 24
+
+-/* 16B trailing buffer */
+-static const u8 trail_buf[MAX_16B_PADDING];
+-
+ static uint wss_threshold = 80;
+ module_param(wss_threshold, uint, S_IRUGO);
+ MODULE_PARM_DESC(wss_threshold, "Percentage (1-100) of LLC to use as a threshold for a cacheless copy");
+@@ -820,8 +817,8 @@ static int build_verbs_tx_desc(
+
+ /* add icrc, lt byte, and padding to flit */
+ if (extra_bytes)
+- ret = sdma_txadd_kvaddr(sde->dd, &tx->txreq,
+- (void *)trail_buf, extra_bytes);
++ ret = sdma_txadd_daddr(sde->dd, &tx->txreq,
++ sde->dd->sdma_pad_phys, extra_bytes);
+
+ bail_txadd:
+ return ret;
+@@ -1089,7 +1086,8 @@ int hfi1_verbs_send_pio(struct rvt_qp *qp, struct hfi1_pkt_state *ps,
+ }
+ /* add icrc, lt byte, and padding to flit */
+ if (extra_bytes)
+- seg_pio_copy_mid(pbuf, trail_buf, extra_bytes);
++ seg_pio_copy_mid(pbuf, ppd->dd->sdma_pad_dma,
++ extra_bytes);
+
+ seg_pio_copy_end(pbuf);
+ }
+diff --git a/drivers/input/ff-memless.c b/drivers/input/ff-memless.c
+index 1cb40c7475af..8229a9006917 100644
+--- a/drivers/input/ff-memless.c
++++ b/drivers/input/ff-memless.c
+@@ -489,6 +489,15 @@ static void ml_ff_destroy(struct ff_device *ff)
+ {
+ struct ml_device *ml = ff->private;
+
++ /*
++ * Even though we stop all playing effects when tearing down
++ * an input device (via input_device_flush() that calls into
++ * input_ff_flush() that stops and erases all effects), we
++ * do not actually stop the timer, and therefore we should
++ * do it here.
++ */
++ del_timer_sync(&ml->timer);
++
+ kfree(ml->private);
+ }
+
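
The comment above states the rule: stopping playback does not cancel a
pending timer, so the destroy path must kill the timer before freeing
anything its handler touches. A POSIX-timer analogue of the ordering
(timer_delete() lacks del_timer_sync()'s running-handler guarantee, so this
only illustrates the teardown order; link with -lrt on older glibc):

    #include <signal.h>
    #include <stdlib.h>
    #include <time.h>

    struct device {
        timer_t timer;
        int *state;              /* what the timer handler would touch */
    };

    static void destroy(struct device *dev)
    {
        /*
         * Cancel the timer *before* freeing state -- the moral
         * equivalent of the del_timer_sync() added to ml_ff_destroy().
         */
        timer_delete(dev->timer);
        free(dev->state);
    }

    int main(void)
    {
        struct device dev;

        dev.state = calloc(1, sizeof(*dev.state));
        if (timer_create(CLOCK_MONOTONIC, NULL, &dev.timer))
            return 1;
        destroy(&dev);
        return 0;
    }
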
+diff --git a/drivers/input/rmi4/rmi_f11.c b/drivers/input/rmi4/rmi_f11.c
+index f28a7158b2ef..26c239325f95 100644
+--- a/drivers/input/rmi4/rmi_f11.c
++++ b/drivers/input/rmi4/rmi_f11.c
+@@ -1284,8 +1284,8 @@ static irqreturn_t rmi_f11_attention(int irq, void *ctx)
+ valid_bytes = f11->sensor.attn_size;
+ memcpy(f11->sensor.data_pkt, drvdata->attn_data.data,
+ valid_bytes);
+- drvdata->attn_data.data += f11->sensor.attn_size;
+- drvdata->attn_data.size -= f11->sensor.attn_size;
++ drvdata->attn_data.data += valid_bytes;
++ drvdata->attn_data.size -= valid_bytes;
+ } else {
+ error = rmi_read_block(rmi_dev,
+ data_base_addr, f11->sensor.data_pkt,
+diff --git a/drivers/input/rmi4/rmi_f12.c b/drivers/input/rmi4/rmi_f12.c
+index d20a5d6780d1..7e97944f7616 100644
+--- a/drivers/input/rmi4/rmi_f12.c
++++ b/drivers/input/rmi4/rmi_f12.c
+@@ -55,6 +55,9 @@ struct f12_data {
+
+ const struct rmi_register_desc_item *data15;
+ u16 data15_offset;
++
++ unsigned long *abs_mask;
++ unsigned long *rel_mask;
+ };
+
+ static int rmi_f12_read_sensor_tuning(struct f12_data *f12)
+@@ -209,8 +212,8 @@ static irqreturn_t rmi_f12_attention(int irq, void *ctx)
+ valid_bytes = sensor->attn_size;
+ memcpy(sensor->data_pkt, drvdata->attn_data.data,
+ valid_bytes);
+- drvdata->attn_data.data += sensor->attn_size;
+- drvdata->attn_data.size -= sensor->attn_size;
++ drvdata->attn_data.data += valid_bytes;
++ drvdata->attn_data.size -= valid_bytes;
+ } else {
+ retval = rmi_read_block(rmi_dev, f12->data_addr,
+ sensor->data_pkt, sensor->pkt_size);
+@@ -291,9 +294,18 @@ static int rmi_f12_write_control_regs(struct rmi_function *fn)
+ static int rmi_f12_config(struct rmi_function *fn)
+ {
+ struct rmi_driver *drv = fn->rmi_dev->driver;
++ struct f12_data *f12 = dev_get_drvdata(&fn->dev);
++ struct rmi_2d_sensor *sensor;
+ int ret;
+
+- drv->set_irq_bits(fn->rmi_dev, fn->irq_mask);
++ sensor = &f12->sensor;
++
++ if (!sensor->report_abs)
++ drv->clear_irq_bits(fn->rmi_dev, f12->abs_mask);
++ else
++ drv->set_irq_bits(fn->rmi_dev, f12->abs_mask);
++
++ drv->clear_irq_bits(fn->rmi_dev, f12->rel_mask);
+
+ ret = rmi_f12_write_control_regs(fn);
+ if (ret)
+@@ -315,9 +327,12 @@ static int rmi_f12_probe(struct rmi_function *fn)
+ struct rmi_device_platform_data *pdata = rmi_get_platform_data(rmi_dev);
+ struct rmi_driver_data *drvdata = dev_get_drvdata(&rmi_dev->dev);
+ u16 data_offset = 0;
++ int mask_size;
+
+ rmi_dbg(RMI_DEBUG_FN, &fn->dev, "%s\n", __func__);
+
++ mask_size = BITS_TO_LONGS(drvdata->irq_count) * sizeof(unsigned long);
++
+ ret = rmi_read(fn->rmi_dev, query_addr, &buf);
+ if (ret < 0) {
+ dev_err(&fn->dev, "Failed to read general info register: %d\n",
+@@ -332,10 +347,19 @@ static int rmi_f12_probe(struct rmi_function *fn)
+ return -ENODEV;
+ }
+
+- f12 = devm_kzalloc(&fn->dev, sizeof(struct f12_data), GFP_KERNEL);
++ f12 = devm_kzalloc(&fn->dev, sizeof(struct f12_data) + mask_size * 2,
++ GFP_KERNEL);
+ if (!f12)
+ return -ENOMEM;
+
++ f12->abs_mask = (unsigned long *)((char *)f12
++ + sizeof(struct f12_data));
++ f12->rel_mask = (unsigned long *)((char *)f12
++ + sizeof(struct f12_data) + mask_size);
++
++ set_bit(fn->irq_pos, f12->abs_mask);
++ set_bit(fn->irq_pos + 1, f12->rel_mask);
++
+ f12->has_dribble = !!(buf & BIT(3));
+
+ if (fn->dev.of_node) {
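
Both IRQ masks are carved out of the tail of the single devm_kzalloc() so
they live and die with the f12 structure. A minimal sketch of that
one-allocation layout, with plain calloc standing in for devm_kzalloc:

    #include <stdio.h>
    #include <stdlib.h>

    struct f12 {
        int some_field;
        unsigned long *abs_mask;
        unsigned long *rel_mask;
    };

    int main(void)
    {
        size_t mask_size = 2 * sizeof(unsigned long); /* BITS_TO_LONGS() */
        struct f12 *f12 = calloc(1, sizeof(*f12) + 2 * mask_size);

        if (!f12)
            return 1;
        /* both masks live in the same allocation, after the struct */
        f12->abs_mask = (unsigned long *)((char *)f12 + sizeof(*f12));
        f12->rel_mask = (unsigned long *)((char *)f12 + sizeof(*f12) +
                                          mask_size);

        f12->abs_mask[0] |= 1UL;  /* like set_bit(fn->irq_pos, ...) */
        printf("%lu\n", f12->abs_mask[0]);
        free(f12);                /* one free releases everything */
        return 0;
    }
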
+diff --git a/drivers/input/rmi4/rmi_f54.c b/drivers/input/rmi4/rmi_f54.c
+index 710b02595486..897105b9a98b 100644
+--- a/drivers/input/rmi4/rmi_f54.c
++++ b/drivers/input/rmi4/rmi_f54.c
+@@ -359,7 +359,7 @@ static const struct vb2_ops rmi_f54_queue_ops = {
+ static const struct vb2_queue rmi_f54_queue = {
+ .type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
+ .io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF | VB2_READ,
+- .buf_struct_size = sizeof(struct vb2_buffer),
++ .buf_struct_size = sizeof(struct vb2_v4l2_buffer),
+ .ops = &rmi_f54_queue_ops,
+ .mem_ops = &vb2_vmalloc_memops,
+ .timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC,
+@@ -601,7 +601,7 @@ static int rmi_f54_config(struct rmi_function *fn)
+ {
+ struct rmi_driver *drv = fn->rmi_dev->driver;
+
+- drv->set_irq_bits(fn->rmi_dev, fn->irq_mask);
++ drv->clear_irq_bits(fn->rmi_dev, fn->irq_mask);
+
+ return 0;
+ }
+@@ -730,6 +730,7 @@ static void rmi_f54_remove(struct rmi_function *fn)
+
+ video_unregister_device(&f54->vdev);
+ v4l2_device_unregister(&f54->v4l2);
++ destroy_workqueue(f54->workqueue);
+ }
+
+ struct rmi_function_handler rmi_f54_handler = {
+diff --git a/drivers/mmc/host/sdhci-of-at91.c b/drivers/mmc/host/sdhci-of-at91.c
+index e7d1920729fb..0ae986c42bc8 100644
+--- a/drivers/mmc/host/sdhci-of-at91.c
++++ b/drivers/mmc/host/sdhci-of-at91.c
+@@ -358,7 +358,7 @@ static int sdhci_at91_probe(struct platform_device *pdev)
+ pm_runtime_use_autosuspend(&pdev->dev);
+
+ /* HS200 is broken at this moment */
+- host->quirks2 = SDHCI_QUIRK2_BROKEN_HS200;
++ host->quirks2 |= SDHCI_QUIRK2_BROKEN_HS200;
+
+ ret = sdhci_add_host(host);
+ if (ret)
+diff --git a/drivers/net/can/slcan.c b/drivers/net/can/slcan.c
+index aa97dbc797b6..5d338b2ac39e 100644
+--- a/drivers/net/can/slcan.c
++++ b/drivers/net/can/slcan.c
+@@ -613,6 +613,7 @@ err_free_chan:
+ sl->tty = NULL;
+ tty->disc_data = NULL;
+ clear_bit(SLF_INUSE, &sl->flags);
++ free_netdev(sl->dev);
+
+ err_exit:
+ rtnl_unlock();
+diff --git a/drivers/net/ethernet/cortina/gemini.c b/drivers/net/ethernet/cortina/gemini.c
+index 9003eb6716cd..01e23a922982 100644
+--- a/drivers/net/ethernet/cortina/gemini.c
++++ b/drivers/net/ethernet/cortina/gemini.c
+@@ -2527,6 +2527,7 @@ static int gemini_ethernet_port_remove(struct platform_device *pdev)
+ struct gemini_ethernet_port *port = platform_get_drvdata(pdev);
+
+ gemini_port_remove(port);
++ free_netdev(port->netdev);
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index 0acb11557ed1..5d2da74e2306 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -2166,8 +2166,16 @@ err_set_cdan:
+ err_service_reg:
+ free_channel(priv, channel);
+ err_alloc_ch:
+- if (err == -EPROBE_DEFER)
++ if (err == -EPROBE_DEFER) {
++ for (i = 0; i < priv->num_channels; i++) {
++ channel = priv->channel[i];
++ nctx = &channel->nctx;
++ dpaa2_io_service_deregister(channel->dpio, nctx, dev);
++ free_channel(priv, channel);
++ }
++ priv->num_channels = 0;
+ return err;
++ }
+
+ if (cpumask_empty(&priv->dpio_cpumask)) {
+ dev_err(dev, "No cpu with an affine DPIO/DPCON\n");
+diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c
+index 309470ec0219..d3654c35d2dd 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/main.c
++++ b/drivers/net/ethernet/mellanox/mlx4/main.c
+@@ -3982,6 +3982,7 @@ static int mlx4_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ goto err_params_unregister;
+
+ devlink_params_publish(devlink);
++ devlink_reload_enable(devlink);
+ pci_save_state(pdev);
+ return 0;
+
+@@ -4093,6 +4094,8 @@ static void mlx4_remove_one(struct pci_dev *pdev)
+ struct devlink *devlink = priv_to_devlink(priv);
+ int active_vfs = 0;
+
++ devlink_reload_disable(devlink);
++
+ if (mlx4_is_slave(dev))
+ persist->interface_state |= MLX4_INTERFACE_STATE_NOWAIT;
+
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c
+index b94cdbd7bb18..6e8e7ca7ac76 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
+@@ -1131,6 +1131,9 @@ __mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info,
+ if (mlxsw_driver->params_register)
+ devlink_params_publish(devlink);
+
++ if (!reload)
++ devlink_reload_enable(devlink);
++
+ return 0;
+
+ err_thermal_init:
+@@ -1191,6 +1194,8 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core,
+ {
+ struct devlink *devlink = priv_to_devlink(mlxsw_core);
+
++ if (!reload)
++ devlink_reload_disable(devlink);
+ if (mlxsw_core->reload_fail) {
+ if (!reload)
+ /* Only the parts that were not de-initialized in the
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+index f97a4096f8fc..1c6f1b3a3229 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+@@ -1225,7 +1225,7 @@ static int sun8i_dwmac_probe(struct platform_device *pdev)
+ dwmac_mux:
+ sun8i_dwmac_unset_syscon(gmac);
+ dwmac_exit:
+- sun8i_dwmac_exit(pdev, plat_dat->bsp_priv);
++ stmmac_pltfr_remove(pdev);
+ return ret;
+ }
+
+diff --git a/drivers/net/netdevsim/dev.c b/drivers/net/netdevsim/dev.c
+index bcc40a236624..b2fe271a4f5d 100644
+--- a/drivers/net/netdevsim/dev.c
++++ b/drivers/net/netdevsim/dev.c
+@@ -297,6 +297,7 @@ nsim_dev_create(struct nsim_bus_dev *nsim_bus_dev, unsigned int port_count)
+ if (err)
+ goto err_debugfs_exit;
+
++ devlink_reload_enable(devlink);
+ return nsim_dev;
+
+ err_debugfs_exit:
+@@ -314,6 +315,7 @@ static void nsim_dev_destroy(struct nsim_dev *nsim_dev)
+ {
+ struct devlink *devlink = priv_to_devlink(nsim_dev);
+
++ devlink_reload_disable(devlink);
+ nsim_bpf_dev_exit(nsim_dev);
+ nsim_dev_debugfs_exit(nsim_dev);
+ devlink_unregister(devlink);
+diff --git a/drivers/net/slip/slip.c b/drivers/net/slip/slip.c
+index cac64b96d545..4d479e3c817d 100644
+--- a/drivers/net/slip/slip.c
++++ b/drivers/net/slip/slip.c
+@@ -855,6 +855,7 @@ err_free_chan:
+ sl->tty = NULL;
+ tty->disc_data = NULL;
+ clear_bit(SLF_INUSE, &sl->flags);
++ free_netdev(sl->dev);
+
+ err_exit:
+ rtnl_unlock();
+diff --git a/drivers/net/usb/ax88172a.c b/drivers/net/usb/ax88172a.c
+index 011bd4cb546e..af3994e0853b 100644
+--- a/drivers/net/usb/ax88172a.c
++++ b/drivers/net/usb/ax88172a.c
+@@ -196,7 +196,7 @@ static int ax88172a_bind(struct usbnet *dev, struct usb_interface *intf)
+
+ /* Get the MAC address */
+ ret = asix_read_cmd(dev, AX_CMD_READ_NODE_ID, 0, 0, ETH_ALEN, buf, 0);
+- if (ret < 0) {
++ if (ret < ETH_ALEN) {
+ netdev_err(dev->net, "Failed to read MAC address: %d\n", ret);
+ goto free;
+ }
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index ba682bba7851..44aee7a431ea 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1370,6 +1370,8 @@ static const struct usb_device_id products[] = {
+ {QMI_QUIRK_SET_DTR(0x2c7c, 0x0191, 4)}, /* Quectel EG91 */
+ {QMI_FIXED_INTF(0x2c7c, 0x0296, 4)}, /* Quectel BG96 */
+ {QMI_QUIRK_SET_DTR(0x2cb7, 0x0104, 4)}, /* Fibocom NL678 series */
++ {QMI_FIXED_INTF(0x0489, 0xe0b4, 0)}, /* Foxconn T77W968 LTE */
++ {QMI_FIXED_INTF(0x0489, 0xe0b5, 0)}, /* Foxconn T77W968 LTE with eSIM support */
+
+ /* 4. Gobi 1000 devices */
+ {QMI_GOBI1K_DEVICE(0x05c6, 0x9212)}, /* Acer Gobi Modem Device */
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 4e88d7e9cf9a..7a2f80db8349 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -1854,7 +1854,8 @@ int scsi_mq_setup_tags(struct Scsi_Host *shost)
+ {
+ unsigned int cmd_size, sgl_size;
+
+- sgl_size = scsi_mq_inline_sgl_size(shost);
++ sgl_size = max_t(unsigned int, sizeof(struct scatterlist),
++ scsi_mq_inline_sgl_size(shost));
+ cmd_size = sizeof(struct scsi_cmnd) + shost->hostt->cmd_size + sgl_size;
+ if (scsi_host_get_prot(shost))
+ cmd_size += sizeof(struct scsi_data_buffer) +
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index a0318bc57fa6..5b7768ccd20b 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -9723,6 +9723,18 @@ out_fail:
+ commit_transaction = true;
+ }
+ if (commit_transaction) {
++ /*
++ * We may have set commit_transaction when logging the new name
++ * in the destination root, in which case we left the source
++ * root context in the list of log contexts. So make sure we
++ * remove it to avoid invalid memory accesses, since the context
++ * was allocated in our stack frame.
++ */
++ if (sync_log_root) {
++ mutex_lock(&root->log_mutex);
++ list_del_init(&ctx_root.list);
++ mutex_unlock(&root->log_mutex);
++ }
+ ret = btrfs_commit_transaction(trans);
+ } else {
+ int ret2;
+@@ -9736,6 +9748,9 @@ out_notrans:
+ if (old_ino == BTRFS_FIRST_FREE_OBJECTID)
+ up_read(&fs_info->subvol_sem);
+
++ ASSERT(list_empty(&ctx_root.list));
++ ASSERT(list_empty(&ctx_dest.list));
++
+ return ret;
+ }
+
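
ctx_root is a stack variable, and the early-commit path previously returned
with it still linked on the root's log-context list, leaving the list
pointing into a dead frame. The bug class reduced to a few lines (an
illustrative intrusive list, not the btrfs structures):

    #include <stdio.h>

    struct node { struct node *next; };

    static struct node head = { 0 };   /* global list a subsystem walks */

    static void leaves_node_linked(void)
    {
        struct node ctx;               /* stack-allocated, like ctx_root */

        ctx.next = head.next;
        head.next = &ctx;
        /* BUG: returning with &ctx still reachable from 'head' */
    }

    int main(void)
    {
        leaves_node_linked();
        /* head.next now points into a dead stack frame; the fix is to
         * unlink (list_del_init) on every return path. */
        printf("dangling: %p\n", (void *)head.next);
        return 0;
    }
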
+diff --git a/fs/ecryptfs/inode.c b/fs/ecryptfs/inode.c
+index 18426f4855f1..0c7ea4596202 100644
+--- a/fs/ecryptfs/inode.c
++++ b/fs/ecryptfs/inode.c
+@@ -311,9 +311,9 @@ static int ecryptfs_i_size_read(struct dentry *dentry, struct inode *inode)
+ static struct dentry *ecryptfs_lookup_interpose(struct dentry *dentry,
+ struct dentry *lower_dentry)
+ {
+- struct inode *inode, *lower_inode = d_inode(lower_dentry);
++ struct path *path = ecryptfs_dentry_to_lower_path(dentry->d_parent);
++ struct inode *inode, *lower_inode;
+ struct ecryptfs_dentry_info *dentry_info;
+- struct vfsmount *lower_mnt;
+ int rc = 0;
+
+ dentry_info = kmem_cache_alloc(ecryptfs_dentry_info_cache, GFP_KERNEL);
+@@ -322,16 +322,23 @@ static struct dentry *ecryptfs_lookup_interpose(struct dentry *dentry,
+ return ERR_PTR(-ENOMEM);
+ }
+
+- lower_mnt = mntget(ecryptfs_dentry_to_lower_mnt(dentry->d_parent));
+ fsstack_copy_attr_atime(d_inode(dentry->d_parent),
+- d_inode(lower_dentry->d_parent));
++ d_inode(path->dentry));
+ BUG_ON(!d_count(lower_dentry));
+
+ ecryptfs_set_dentry_private(dentry, dentry_info);
+- dentry_info->lower_path.mnt = lower_mnt;
++ dentry_info->lower_path.mnt = mntget(path->mnt);
+ dentry_info->lower_path.dentry = lower_dentry;
+
+- if (d_really_is_negative(lower_dentry)) {
++ /*
++ * negative dentry can go positive under us here - its parent is not
++ * locked. That's OK and that could happen just as we return from
++ * ecryptfs_lookup() anyway. Just need to be careful and fetch
++ * ->d_inode only once - it's not stable here.
++ */
++ lower_inode = READ_ONCE(lower_dentry->d_inode);
++
++ if (!lower_inode) {
+ /* We want to add because we couldn't find in lower */
+ d_add(dentry, NULL);
+ return NULL;
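
Since the parent is unlocked, lower_dentry->d_inode can flip from NULL to a
valid inode at any moment; reading it exactly once means the NULL check and
every later use see the same snapshot. A userspace analogue with C11 atomics,
where the relaxed load stands in for READ_ONCE():

    #include <stdatomic.h>
    #include <stdio.h>

    struct inode { int ino; };

    /* may flip from NULL to non-NULL under us, like d_inode here */
    static _Atomic(struct inode *) d_inode;

    static int lookup(void)
    {
        /* one snapshot; never re-read d_inode after this point */
        struct inode *inode = atomic_load_explicit(&d_inode,
                                                   memory_order_relaxed);

        if (!inode)
            return -1;           /* treated as negative throughout */
        return inode->ino;       /* same pointer the NULL check saw */
    }

    int main(void)
    {
        static struct inode i = { .ino = 7 };

        printf("%d\n", lookup());    /* -1: still negative */
        atomic_store(&d_inode, &i);
        printf("%d\n", lookup());    /* 7 */
        return 0;
    }
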
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 37da4ea68f50..56c23dee9811 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -1179,7 +1179,7 @@ static int io_import_fixed(struct io_ring_ctx *ctx, int rw,
+ }
+ }
+
+- return 0;
++ return len;
+ }
+
+ static ssize_t io_import_iovec(struct io_ring_ctx *ctx, int rw,
+diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
+index 4fc6454f7ebb..d4ca9827a6bb 100644
+--- a/include/linux/intel-iommu.h
++++ b/include/linux/intel-iommu.h
+@@ -334,7 +334,8 @@ enum {
+ #define QI_DEV_IOTLB_SID(sid) ((u64)((sid) & 0xffff) << 32)
+ #define QI_DEV_IOTLB_QDEP(qdep) (((qdep) & 0x1f) << 16)
+ #define QI_DEV_IOTLB_ADDR(addr) ((u64)(addr) & VTD_PAGE_MASK)
+-#define QI_DEV_IOTLB_PFSID(pfsid) (((u64)(pfsid & 0xf) << 12) | ((u64)(pfsid & 0xfff) << 52))
++#define QI_DEV_IOTLB_PFSID(pfsid) (((u64)(pfsid & 0xf) << 12) | \
++ ((u64)((pfsid >> 4) & 0xfff) << 52))
+ #define QI_DEV_IOTLB_SIZE 1
+ #define QI_DEV_IOTLB_MAX_INVS 32
+
+@@ -358,7 +359,8 @@ enum {
+ #define QI_DEV_EIOTLB_PASID(p) (((u64)p) << 32)
+ #define QI_DEV_EIOTLB_SID(sid) ((u64)((sid) & 0xffff) << 16)
+ #define QI_DEV_EIOTLB_QDEP(qd) ((u64)((qd) & 0x1f) << 4)
+-#define QI_DEV_EIOTLB_PFSID(pfsid) (((u64)(pfsid & 0xf) << 12) | ((u64)(pfsid & 0xfff) << 52))
++#define QI_DEV_EIOTLB_PFSID(pfsid) (((u64)(pfsid & 0xf) << 12) | \
++ ((u64)((pfsid >> 4) & 0xfff) << 52))
+ #define QI_DEV_EIOTLB_MAX_INVS 32
+
+ /* Page group response descriptor QW0 */
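
The old PFSID macros placed bits 0-3 at bit 12 but then masked bits 0-11,
rather than bits 4-15, into the field at bit 52, mangling any PFSID above 15.
A quick comparison of the two encodings:

    #include <stdio.h>

    #define OLD(pfsid) (((unsigned long long)((pfsid) & 0xf) << 12) | \
                        ((unsigned long long)((pfsid) & 0xfff) << 52))
    #define NEW(pfsid) (((unsigned long long)((pfsid) & 0xf) << 12) | \
                        ((unsigned long long)(((pfsid) >> 4) & 0xfff) << 52))

    int main(void)
    {
        unsigned int pfsid = 0x25;   /* low nibble 0x5, high bits 0x2 */

        /* old: 0x2 belongs at bit 52, but the mask grabs 0x25 instead */
        printf("old=%#llx new=%#llx\n", OLD(pfsid), NEW(pfsid));
        return 0;
    }
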
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index 52ed5f66e8f9..d41c521a39da 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -966,6 +966,7 @@ int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu);
+ void kvm_vcpu_kick(struct kvm_vcpu *vcpu);
+
+ bool kvm_is_reserved_pfn(kvm_pfn_t pfn);
++bool kvm_is_zone_device_pfn(kvm_pfn_t pfn);
+
+ struct kvm_irq_ack_notifier {
+ struct hlist_node link;
+diff --git a/include/linux/memory.h b/include/linux/memory.h
+index 02e633f3ede0..c0cb7e93b880 100644
+--- a/include/linux/memory.h
++++ b/include/linux/memory.h
+@@ -120,6 +120,7 @@ extern struct memory_block *find_memory_block(struct mem_section *);
+ typedef int (*walk_memory_blocks_func_t)(struct memory_block *, void *);
+ extern int walk_memory_blocks(unsigned long start, unsigned long size,
+ void *arg, walk_memory_blocks_func_t func);
++extern int for_each_memory_block(void *arg, walk_memory_blocks_func_t func);
+ #define CONFIG_MEM_BLOCK_SIZE (PAGES_PER_SECTION<<PAGE_SHIFT)
+ #endif /* CONFIG_MEMORY_HOTPLUG_SPARSE */
+
+diff --git a/include/net/devlink.h b/include/net/devlink.h
+index bc36f942a7d5..ffa506ae5018 100644
+--- a/include/net/devlink.h
++++ b/include/net/devlink.h
+@@ -35,6 +35,7 @@ struct devlink {
+ struct device *dev;
+ possible_net_t _net;
+ struct mutex lock;
++ u8 reload_enabled:1;
+ char priv[0] __aligned(NETDEV_ALIGN);
+ };
+
+@@ -594,6 +595,8 @@ struct ib_device;
+ struct devlink *devlink_alloc(const struct devlink_ops *ops, size_t priv_size);
+ int devlink_register(struct devlink *devlink, struct device *dev);
+ void devlink_unregister(struct devlink *devlink);
++void devlink_reload_enable(struct devlink *devlink);
++void devlink_reload_disable(struct devlink *devlink);
+ void devlink_free(struct devlink *devlink);
+ int devlink_port_register(struct devlink *devlink,
+ struct devlink_port *devlink_port,
+diff --git a/include/trace/events/tcp.h b/include/trace/events/tcp.h
+index 2bc9960a31aa..cf97f6339acb 100644
+--- a/include/trace/events/tcp.h
++++ b/include/trace/events/tcp.h
+@@ -86,7 +86,7 @@ DECLARE_EVENT_CLASS(tcp_event_sk_skb,
+ sk->sk_v6_rcv_saddr, sk->sk_v6_daddr);
+ ),
+
+- TP_printk("sport=%hu dport=%hu saddr=%pI4 daddr=%pI4 saddrv6=%pI6c daddrv6=%pI6c state=%s\n",
++ TP_printk("sport=%hu dport=%hu saddr=%pI4 daddr=%pI4 saddrv6=%pI6c daddrv6=%pI6c state=%s",
+ __entry->sport, __entry->dport, __entry->saddr, __entry->daddr,
+ __entry->saddr_v6, __entry->daddr_v6,
+ show_tcp_state_name(__entry->state))
+diff --git a/include/uapi/linux/devlink.h b/include/uapi/linux/devlink.h
+index ffc993256527..f0953046bc17 100644
+--- a/include/uapi/linux/devlink.h
++++ b/include/uapi/linux/devlink.h
+@@ -348,6 +348,7 @@ enum devlink_attr {
+ DEVLINK_ATTR_PORT_PCI_PF_NUMBER, /* u16 */
+ DEVLINK_ATTR_PORT_PCI_VF_NUMBER, /* u16 */
+
++ DEVLINK_ATTR_HEALTH_REPORTER_DUMP_TS_NS, /* u64 */
+ /* add new attributes above here, update the policy in devlink.c */
+
+ __DEVLINK_ATTR_MAX,
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 534fec266a33..f8eed866ef94 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -2205,8 +2205,8 @@ static void ptrace_stop(int exit_code, int why, int clear_code, kernel_siginfo_t
+ */
+ preempt_disable();
+ read_unlock(&tasklist_lock);
+- preempt_enable_no_resched();
+ cgroup_enter_frozen();
++ preempt_enable_no_resched();
+ freezable_schedule();
+ cgroup_leave_frozen(true);
+ } else {
+diff --git a/kernel/time/ntp.c b/kernel/time/ntp.c
+index 65eb796610dc..069ca78fb0bf 100644
+--- a/kernel/time/ntp.c
++++ b/kernel/time/ntp.c
+@@ -771,7 +771,7 @@ int __do_adjtimex(struct __kernel_timex *txc, const struct timespec64 *ts,
+ /* fill PPS status fields */
+ pps_fill_timex(txc);
+
+- txc->time.tv_sec = (time_t)ts->tv_sec;
++ txc->time.tv_sec = ts->tv_sec;
+ txc->time.tv_usec = ts->tv_nsec;
+ if (!(time_status & STA_NANO))
+ txc->time.tv_usec = ts->tv_nsec / NSEC_PER_USEC;
+diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
+index 68c2f2f3c05b..7a93e1e439dd 100644
+--- a/mm/hugetlb_cgroup.c
++++ b/mm/hugetlb_cgroup.c
+@@ -196,7 +196,7 @@ int hugetlb_cgroup_charge_cgroup(int idx, unsigned long nr_pages,
+ again:
+ rcu_read_lock();
+ h_cg = hugetlb_cgroup_from_task(current);
+- if (!css_tryget_online(&h_cg->css)) {
++ if (!css_tryget(&h_cg->css)) {
+ rcu_read_unlock();
+ goto again;
+ }
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 89fd0829ebd0..515b050b7533 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -962,7 +962,7 @@ struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
+ if (unlikely(!memcg))
+ memcg = root_mem_cgroup;
+ }
+- } while (!css_tryget_online(&memcg->css));
++ } while (!css_tryget(&memcg->css));
+ rcu_read_unlock();
+ return memcg;
+ }
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index c73f09913165..2c1a66cd47df 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1687,6 +1687,18 @@ static int check_cpu_on_node(pg_data_t *pgdat)
+ return 0;
+ }
+
++static int check_no_memblock_for_node_cb(struct memory_block *mem, void *arg)
++{
++ int nid = *(int *)arg;
++
++ /*
++ * If a memory block belongs to multiple nodes, the stored nid is not
++ * reliable. However, such blocks are always online (e.g., cannot get
++ * offlined) and, therefore, are still spanned by the node.
++ */
++ return mem->nid == nid ? -EEXIST : 0;
++}
++
+ /**
+ * try_offline_node
+ * @nid: the node ID
+@@ -1699,25 +1711,24 @@ static int check_cpu_on_node(pg_data_t *pgdat)
+ void try_offline_node(int nid)
+ {
+ pg_data_t *pgdat = NODE_DATA(nid);
+- unsigned long start_pfn = pgdat->node_start_pfn;
+- unsigned long end_pfn = start_pfn + pgdat->node_spanned_pages;
+- unsigned long pfn;
+-
+- for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
+- unsigned long section_nr = pfn_to_section_nr(pfn);
+-
+- if (!present_section_nr(section_nr))
+- continue;
++ int rc;
+
+- if (pfn_to_nid(pfn) != nid)
+- continue;
++ /*
++ * If the node still spans pages (especially ZONE_DEVICE), don't
++ * offline it. A node spans memory after move_pfn_range_to_zone(),
++ * e.g., after the memory block was onlined.
++ */
++ if (pgdat->node_spanned_pages)
++ return;
+
+- /*
+- * some memory sections of this node are not removed, and we
+- * can't offline node now.
+- */
++ /*
++ * Especially offline memory blocks might not be spanned by the
++ * node. They will get spanned by the node once they get onlined.
++ * However, they link to the node in sysfs and can get onlined later.
++ */
++ rc = for_each_memory_block(&nid, check_no_memblock_for_node_cb);
++ if (rc)
+ return;
+- }
+
+ if (check_cpu_on_node(pgdat))
+ return;
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index 65e0874fce17..d9fd28f7ca44 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -666,7 +666,9 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
+ * 1 - there is unmovable page, but MPOL_MF_MOVE* & MPOL_MF_STRICT were
+ * specified.
+ * 0 - queue pages successfully or no misplaced page.
+- * -EIO - there is misplaced page and only MPOL_MF_STRICT was specified.
++ * errno - i.e. misplaced pages with MPOL_MF_STRICT specified (-EIO) or
++ * memory range specified by nodemask and maxnode points outside
++ * your accessible address space (-EFAULT)
+ */
+ static int
+ queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end,
+@@ -1287,7 +1289,7 @@ static long do_mbind(unsigned long start, unsigned long len,
+ flags | MPOL_MF_INVERT, &pagelist);
+
+ if (ret < 0) {
+- err = -EIO;
++ err = ret;
+ goto up_out;
+ }
+
+@@ -1306,10 +1308,12 @@ static long do_mbind(unsigned long start, unsigned long len,
+
+ if ((ret > 0) || (nr_failed && (flags & MPOL_MF_STRICT)))
+ err = -EIO;
+- } else
+- putback_movable_pages(&pagelist);
+-
++ } else {
+ up_out:
++ if (!list_empty(&pagelist))
++ putback_movable_pages(&pagelist);
++ }
++
+ up_write(&mm->mmap_sem);
+ mpol_out:
+ mpol_put(new);
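Two things change in do_mbind(): the callee's error code is propagated instead of being collapsed to -EIO, so callers can tell -EFAULT from -EIO, and the error path now drains the isolated-page list as well. A compact, hedged model of the corrected control flow (names are illustrative):

#include <errno.h>
#include <stdio.h>

static int queue_pages(int fail_with)   /* <0 on error, 0 on success */
{
        return fail_with;
}

static int do_bind(int fail_with)
{
        int err = 0;
        int ret = queue_pages(fail_with);

        if (ret < 0) {
                err = ret;              /* propagate -EFAULT vs -EIO as-is */
                goto up_out;
        }
        /* ... migration would happen here ... */
up_out:
        /* the error path also drains the isolated-page list now */
        return err;
}

int main(void)
{
        printf("do_bind -> %d (expect %d)\n", do_bind(-EFAULT), -EFAULT);
        return 0;
}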
+diff --git a/mm/page_io.c b/mm/page_io.c
+index 24ee600f9131..60a66a58b9bf 100644
+--- a/mm/page_io.c
++++ b/mm/page_io.c
+@@ -73,6 +73,7 @@ static void swap_slot_free_notify(struct page *page)
+ {
+ struct swap_info_struct *sis;
+ struct gendisk *disk;
++ swp_entry_t entry;
+
+ /*
+ * There is no guarantee that the page is in swap cache - the software
+@@ -104,11 +105,10 @@ static void swap_slot_free_notify(struct page *page)
+ * we again wish to reclaim it.
+ */
+ disk = sis->bdev->bd_disk;
+- if (disk->fops->swap_slot_free_notify) {
+- swp_entry_t entry;
++ entry.val = page_private(page);
++ if (disk->fops->swap_slot_free_notify && __swap_count(entry) == 1) {
+ unsigned long offset;
+
+- entry.val = page_private(page);
+ offset = swp_offset(entry);
+
+ SetPageDirty(page);
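The page_io.c reordering hoists the swp_entry_t read so the swap count can gate the callback: the block driver is only told the slot is free when exactly one reference remains, since discarding a still-shared slot would pull it out from under another user. The guard, modeled in plain C with hypothetical names:

#include <stdbool.h>
#include <stdio.h>

struct slot { int swap_count; bool discarded; };

/* Only notify the driver (and let it discard) when we hold the last
 * reference; a shared slot must survive for its other users. */
static void slot_free_notify(struct slot *s)
{
        if (s->swap_count == 1)
                s->discarded = true;
}

int main(void)
{
        struct slot shared = { .swap_count = 2 };
        struct slot last = { .swap_count = 1 };

        slot_free_notify(&shared);
        slot_free_notify(&last);
        printf("shared discarded: %d, last discarded: %d\n",
               shared.discarded, last.discarded);
        return 0;
}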
+diff --git a/mm/slub.c b/mm/slub.c
+index dac41cf0b94a..d2445dd1c7ed 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -1432,12 +1432,15 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
+ void *old_tail = *tail ? *tail : *head;
+ int rsize;
+
+- if (slab_want_init_on_free(s)) {
+- void *p = NULL;
++ /* Head and tail of the reconstructed freelist */
++ *head = NULL;
++ *tail = NULL;
+
+- do {
+- object = next;
+- next = get_freepointer(s, object);
++ do {
++ object = next;
++ next = get_freepointer(s, object);
++
++ if (slab_want_init_on_free(s)) {
+ /*
+ * Clear the object and the metadata, but don't touch
+ * the redzone.
+@@ -1447,29 +1450,8 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
+ : 0;
+ memset((char *)object + s->inuse, 0,
+ s->size - s->inuse - rsize);
+- set_freepointer(s, object, p);
+- p = object;
+- } while (object != old_tail);
+- }
+-
+-/*
+- * Compiler cannot detect this function can be removed if slab_free_hook()
+- * evaluates to nothing. Thus, catch all relevant config debug options here.
+- */
+-#if defined(CONFIG_LOCKDEP) || \
+- defined(CONFIG_DEBUG_KMEMLEAK) || \
+- defined(CONFIG_DEBUG_OBJECTS_FREE) || \
+- defined(CONFIG_KASAN)
+
+- next = *head;
+-
+- /* Head and tail of the reconstructed freelist */
+- *head = NULL;
+- *tail = NULL;
+-
+- do {
+- object = next;
+- next = get_freepointer(s, object);
++ }
+ /* If object's reuse doesn't have to be delayed */
+ if (!slab_free_hook(s, object)) {
+ /* Move object to the new freelist */
+@@ -1484,9 +1466,6 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
+ *tail = NULL;
+
+ return *head != NULL;
+-#else
+- return true;
+-#endif
+ }
+
+ static void *setup_object(struct kmem_cache *s, struct page *page,
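The restructured slab_free_freelist_hook() now runs one unconditional loop that optionally zeroes each object and rebuilds the freelist, dropping objects whose free must be delayed (e.g. by KASAN quarantine), instead of two loops behind an #ifdef maze. The core idiom, rebuilding a singly linked list while filtering, as a self-contained sketch:

#include <stdbool.h>
#include <stdio.h>

struct obj { int id; struct obj *next; };

static bool must_delay_free(struct obj *o)
{
        return o->id == 2;      /* pretend object 2 is quarantined */
}

/* Rebuild the list in place, keeping only objects freeable right now.
 * Returns true if anything is left, like slab_free_freelist_hook(). */
static bool rebuild(struct obj **head, struct obj **tail)
{
        struct obj *next = *head;

        *head = *tail = NULL;
        while (next) {
                struct obj *o = next;

                next = o->next;
                if (!must_delay_free(o)) {
                        o->next = *head;        /* push onto new list */
                        *head = o;
                        if (!*tail)
                                *tail = o;
                }
        }
        return *head != NULL;
}

int main(void)
{
        struct obj c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };
        struct obj *head = &a, *tail = &c;

        rebuild(&head, &tail);
        for (struct obj *o = head; o; o = o->next)
                printf("free %d\n", o->id);     /* 3 then 1; 2 was skipped */
        return 0;
}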
+diff --git a/net/core/devlink.c b/net/core/devlink.c
+index 4f40aeace902..d40f6cc48690 100644
+--- a/net/core/devlink.c
++++ b/net/core/devlink.c
+@@ -2677,7 +2677,7 @@ static int devlink_nl_cmd_reload(struct sk_buff *skb, struct genl_info *info)
+ struct devlink *devlink = info->user_ptr[0];
+ int err;
+
+- if (!devlink->ops->reload)
++ if (!devlink->ops->reload || !devlink->reload_enabled)
+ return -EOPNOTSUPP;
+
+ err = devlink_resources_validate(devlink, NULL, info);
+@@ -4577,6 +4577,7 @@ struct devlink_health_reporter {
+ bool auto_recover;
+ u8 health_state;
+ u64 dump_ts;
++ u64 dump_real_ts;
+ u64 error_count;
+ u64 recovery_count;
+ u64 last_recovery_ts;
+@@ -4749,6 +4750,7 @@ static int devlink_health_do_dump(struct devlink_health_reporter *reporter,
+ goto dump_err;
+
+ reporter->dump_ts = jiffies;
++ reporter->dump_real_ts = ktime_get_real_ns();
+
+ return 0;
+
+@@ -4911,6 +4913,10 @@ devlink_nl_health_reporter_fill(struct sk_buff *msg,
+ jiffies_to_msecs(reporter->dump_ts),
+ DEVLINK_ATTR_PAD))
+ goto reporter_nest_cancel;
++ if (reporter->dump_fmsg &&
++ nla_put_u64_64bit(msg, DEVLINK_ATTR_HEALTH_REPORTER_DUMP_TS_NS,
++ reporter->dump_real_ts, DEVLINK_ATTR_PAD))
++ goto reporter_nest_cancel;
+
+ nla_nest_end(msg, reporter_attr);
+ genlmsg_end(msg, hdr);
+@@ -5559,12 +5565,49 @@ EXPORT_SYMBOL_GPL(devlink_register);
+ void devlink_unregister(struct devlink *devlink)
+ {
+ mutex_lock(&devlink_mutex);
++ WARN_ON(devlink->ops->reload &&
++ devlink->reload_enabled);
+ devlink_notify(devlink, DEVLINK_CMD_DEL);
+ list_del(&devlink->list);
+ mutex_unlock(&devlink_mutex);
+ }
+ EXPORT_SYMBOL_GPL(devlink_unregister);
+
++/**
++ * devlink_reload_enable - Enable reload of devlink instance
++ *
++ * @devlink: devlink
++ *
++ * Should be called at end of device initialization
++ * process when reload operation is supported.
++ */
++void devlink_reload_enable(struct devlink *devlink)
++{
++ mutex_lock(&devlink_mutex);
++ devlink->reload_enabled = true;
++ mutex_unlock(&devlink_mutex);
++}
++EXPORT_SYMBOL_GPL(devlink_reload_enable);
++
++/**
++ * devlink_reload_disable - Disable reload of devlink instance
++ *
++ * @devlink: devlink
++ *
++ * Should be called at the beginning of device cleanup
++ * process when reload operation is supported.
++ */
++void devlink_reload_disable(struct devlink *devlink)
++{
++ mutex_lock(&devlink_mutex);
++ /* Holding the mutex ensures that no reload operation is in
++ * progress while setting up the forbidden flag.
++ */
++ devlink->reload_enabled = false;
++ mutex_unlock(&devlink_mutex);
++}
++EXPORT_SYMBOL_GPL(devlink_reload_disable);
++
+ /**
+ * devlink_free - Free devlink instance resources
+ *
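devlink_reload_enable() and devlink_reload_disable() bracket the window in which a reload request is honoured: the presence of the reload op alone no longer suffices, the flag set at the end of probe and cleared at the start of teardown must also be true, and the mutex orders the flag flip against any in-flight reload. The gating pattern in a runnable sketch (pthread-based, illustrative names):

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool reload_enabled;     /* set late in init, cleared early in teardown */

static int do_reload(void)
{
        int err = 0;

        pthread_mutex_lock(&lock);
        if (!reload_enabled)
                err = -EOPNOTSUPP;      /* driver not (or no longer) ready */
        else
                puts("reloading");
        pthread_mutex_unlock(&lock);
        return err;
}

int main(void)
{
        printf("before enable: %d\n", do_reload());

        pthread_mutex_lock(&lock);      /* devlink_reload_enable() */
        reload_enabled = true;
        pthread_mutex_unlock(&lock);

        printf("after enable: %d\n", do_reload());
        return 0;
}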
+diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
+index c07bc82cbbe9..f2daddf1afac 100644
+--- a/net/ipv4/ipmr.c
++++ b/net/ipv4/ipmr.c
+@@ -2289,7 +2289,8 @@ int ipmr_get_route(struct net *net, struct sk_buff *skb,
+ rcu_read_unlock();
+ return -ENODEV;
+ }
+- skb2 = skb_clone(skb, GFP_ATOMIC);
++
++ skb2 = skb_realloc_headroom(skb, sizeof(struct iphdr));
+ if (!skb2) {
+ read_unlock(&mrt_lock);
+ rcu_read_unlock();
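The ipmr fix swaps skb_clone() for skb_realloc_headroom(): the copy that will have an IP header pushed in front of it must actually own enough headroom for that header, which a plain clone of a forwarded packet may not. The underlying buffer discipline, modeled with a flat byte buffer (the 20-byte header size mirrors sizeof(struct iphdr)):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define HDR_LEN 20      /* sizeof(struct iphdr) in the real code */

/* Copy the payload into a fresh buffer that reserves 'headroom' bytes
 * in front, so a header can later be prepended without another copy. */
static unsigned char *copy_with_headroom(const unsigned char *data,
                                         size_t len, size_t headroom)
{
        unsigned char *buf = malloc(headroom + len);

        if (buf)
                memcpy(buf + headroom, data, len);
        return buf;
}

int main(void)
{
        const unsigned char payload[] = "packet payload";
        unsigned char *pkt = copy_with_headroom(payload, sizeof(payload),
                                                HDR_LEN);

        if (!pkt)
                return 1;
        memset(pkt, 0x45, HDR_LEN);     /* "push" a header into the headroom */
        printf("payload survives: %s\n", pkt + HDR_LEN);
        free(pkt);
        return 0;
}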
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 47946f489fd4..737b49909a7a 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -796,6 +796,7 @@ static void smc_connect_work(struct work_struct *work)
+ smc->sk.sk_err = EPIPE;
+ else if (signal_pending(current))
+ smc->sk.sk_err = -sock_intr_errno(timeo);
++ sock_put(&smc->sk); /* passive closing */
+ goto out;
+ }
+
+@@ -1731,7 +1732,7 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
+ case TCP_FASTOPEN_KEY:
+ case TCP_FASTOPEN_NO_COOKIE:
+ /* option not supported by SMC */
+- if (sk->sk_state == SMC_INIT) {
++ if (sk->sk_state == SMC_INIT && !smc->connect_nonblock) {
+ smc_switch_to_fallback(smc);
+ smc->fallback_rsn = SMC_CLC_DECL_OPTUNSUPP;
+ } else {
+diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c
+index a2ab8e8d3a93..4a9a2f6ef5a4 100644
+--- a/sound/usb/endpoint.c
++++ b/sound/usb/endpoint.c
+@@ -388,6 +388,9 @@ static void snd_complete_urb(struct urb *urb)
+ }
+
+ prepare_outbound_urb(ep, ctx);
++ /* can be stopped during prepare callback */
++ if (unlikely(!test_bit(EP_FLAG_RUNNING, &ep->flags)))
++ goto exit_clear;
+ } else {
+ retire_inbound_urb(ep, ctx);
+ /* can be stopped during retire callback */
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 673652ad7018..90cd59a1869a 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -1229,7 +1229,8 @@ static int get_min_max_with_quirks(struct usb_mixer_elem_info *cval,
+ if (cval->min + cval->res < cval->max) {
+ int last_valid_res = cval->res;
+ int saved, test, check;
+- get_cur_mix_raw(cval, minchn, &saved);
++ if (get_cur_mix_raw(cval, minchn, &saved) < 0)
++ goto no_res_check;
+ for (;;) {
+ test = saved;
+ if (test < cval->max)
+@@ -1249,6 +1250,7 @@ static int get_min_max_with_quirks(struct usb_mixer_elem_info *cval,
+ snd_usb_set_cur_mix_value(cval, minchn, 0, saved);
+ }
+
++no_res_check:
+ cval->initialized = 1;
+ }
+
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 0bbe1201a6ac..349e1e52996d 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -248,8 +248,8 @@ static int create_yamaha_midi_quirk(struct snd_usb_audio *chip,
+ NULL, USB_MS_MIDI_OUT_JACK);
+ if (!injd && !outjd)
+ return -ENODEV;
+- if (!(injd && snd_usb_validate_midi_desc(injd)) ||
+- !(outjd && snd_usb_validate_midi_desc(outjd)))
++ if ((injd && !snd_usb_validate_midi_desc(injd)) ||
++ (outjd && !snd_usb_validate_midi_desc(outjd)))
+ return -ENODEV;
+ if (injd && (injd->bLength < 5 ||
+ (injd->bJackType != USB_MS_EMBEDDED &&
+diff --git a/sound/usb/validate.c b/sound/usb/validate.c
+index a5e584b60dcd..389e8657434a 100644
+--- a/sound/usb/validate.c
++++ b/sound/usb/validate.c
+@@ -81,9 +81,9 @@ static bool validate_processing_unit(const void *p,
+ switch (v->protocol) {
+ case UAC_VERSION_1:
+ default:
+- /* bNrChannels, wChannelConfig, iChannelNames, bControlSize */
+- len += 1 + 2 + 1 + 1;
+- if (d->bLength < len) /* bControlSize */
++ /* bNrChannels, wChannelConfig, iChannelNames */
++ len += 1 + 2 + 1;
++ if (d->bLength < len + 1) /* bControlSize */
+ return false;
+ m = hdr[len];
+ len += 1 + m + 1; /* bControlSize, bmControls, iProcessing */
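The validate.c change fixes an off-by-one in descriptor parsing: bControlSize lives at offset len, so the buffer must be at least len + 1 bytes before hdr[len] may be read. The pattern, bounds-checking before every variable-sized field, in a standalone sketch of a similar (hypothetical) descriptor layout:

#include <stdbool.h>
#include <stdio.h>

/* Parse: a fixed 4-byte prefix, then bControlSize, then that many
 * control bytes plus one trailing byte. Every read is preceded by a
 * bounds check against the declared total length. */
static bool validate_unit(const unsigned char *d, unsigned int blength)
{
        unsigned int len = 4;           /* fixed fields already accounted */
        unsigned int m;

        if (blength < len + 1)          /* need one more byte: bControlSize */
                return false;
        m = d[len];
        len += 1 + m + 1;               /* bControlSize, controls, trailer */
        return blength >= len;
}

int main(void)
{
        unsigned char ok[] = { 0, 0, 0, 0, 1, 0xff, 0 };
        unsigned char too_short[] = { 0, 0, 0, 0 };  /* bControlSize missing */

        printf("ok: %d, short: %d\n",
               validate_unit(ok, sizeof(ok)),
               validate_unit(too_short, sizeof(too_short)));
        return 0;
}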
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 9d4e03eddccf..49ef54267061 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -150,10 +150,30 @@ __weak int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
+ return 0;
+ }
+
++bool kvm_is_zone_device_pfn(kvm_pfn_t pfn)
++{
++ /*
++ * The metadata used by is_zone_device_page() to determine whether or
++ * not a page is ZONE_DEVICE is guaranteed to be valid if and only if
++ * the device has been pinned, e.g. by get_user_pages(). WARN if the
++ * page_count() is zero to help detect bad usage of this helper.
++ */
++ if (!pfn_valid(pfn) || WARN_ON_ONCE(!page_count(pfn_to_page(pfn))))
++ return false;
++
++ return is_zone_device_page(pfn_to_page(pfn));
++}
++
+ bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
+ {
++ /*
++ * ZONE_DEVICE pages currently set PG_reserved, but from a refcounting
++ * perspective they are "normal" pages, albeit with slightly different
++ * usage rules.
++ */
+ if (pfn_valid(pfn))
+- return PageReserved(pfn_to_page(pfn));
++ return PageReserved(pfn_to_page(pfn)) &&
++ !kvm_is_zone_device_pfn(pfn);
+
+ return true;
+ }
+@@ -1882,7 +1902,7 @@ EXPORT_SYMBOL_GPL(kvm_release_pfn_dirty);
+
+ void kvm_set_pfn_dirty(kvm_pfn_t pfn)
+ {
+- if (!kvm_is_reserved_pfn(pfn)) {
++ if (!kvm_is_reserved_pfn(pfn) && !kvm_is_zone_device_pfn(pfn)) {
+ struct page *page = pfn_to_page(pfn);
+
+ SetPageDirty(page);
+@@ -1892,7 +1912,7 @@ EXPORT_SYMBOL_GPL(kvm_set_pfn_dirty);
+
+ void kvm_set_pfn_accessed(kvm_pfn_t pfn)
+ {
+- if (!kvm_is_reserved_pfn(pfn))
++ if (!kvm_is_reserved_pfn(pfn) && !kvm_is_zone_device_pfn(pfn))
+ mark_page_accessed(pfn_to_page(pfn));
+ }
+ EXPORT_SYMBOL_GPL(kvm_set_pfn_accessed);
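kvm_is_zone_device_pfn() encodes two guards before classifying a pfn: the pfn must map to a valid struct page, and that page must still be pinned (non-zero refcount), because ZONE_DEVICE metadata is only trustworthy while the device page is held. The guard-then-classify shape, as a rough userspace analogue (struct page here is a toy, not the kernel's):

#include <stdbool.h>
#include <stdio.h>

struct page { int refcount; bool zone_device; };

/* NULL models an invalid pfn with no struct page behind it. */
static struct page *pfn_to_page_or_null(int pfn, struct page *table, int n)
{
        return (pfn >= 0 && pfn < n) ? &table[pfn] : NULL;
}

/* Classify only when the metadata can be trusted: a valid pfn and a
 * held (non-zero refcount) page, as kvm_is_zone_device_pfn() requires. */
static bool is_zone_device(int pfn, struct page *table, int n)
{
        struct page *p = pfn_to_page_or_null(pfn, table, n);

        if (!p || p->refcount == 0)
                return false;
        return p->zone_device;
}

int main(void)
{
        struct page pages[] = {
                { .refcount = 1, .zone_device = true },
                { .refcount = 0, .zone_device = true }, /* unpinned: ignore */
        };

        printf("pfn0: %d, pfn1: %d, pfn9: %d\n",
               is_zone_device(0, pages, 2),
               is_zone_device(1, pages, 2),
               is_zone_device(9, pages, 2));
        return 0;
}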
* [gentoo-commits] proj/linux-patches:5.3 commit in: /
@ 2019-11-24 15:45 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2019-11-24 15:45 UTC (permalink / raw
To: gentoo-commits
commit: b46fdd9d1832e5019b9f4a733e8d56204f769b2d
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Nov 24 15:44:49 2019 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Nov 24 15:44:49 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b46fdd9d
Linux patch 5.3.13
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1012_linux-5.3.13.patch | 451 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 455 insertions(+)
diff --git a/0000_README b/0000_README
index bb387d0..5f3156b 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch: 1011_linux-5.3.12.patch
From: http://www.kernel.org
Desc: Linux 5.3.12
+Patch: 1012_linux-5.3.13.patch
+From: http://www.kernel.org
+Desc: Linux 5.3.13
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1012_linux-5.3.13.patch b/1012_linux-5.3.13.patch
new file mode 100644
index 0000000..8684d09
--- /dev/null
+++ b/1012_linux-5.3.13.patch
@@ -0,0 +1,451 @@
+diff --git a/Makefile b/Makefile
+index 2f0c428ed2b6..f9d3d58ae801 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 3
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Bobtail Squid
+
+diff --git a/arch/arm64/lib/clear_user.S b/arch/arm64/lib/clear_user.S
+index 10415572e82f..322b55664cca 100644
+--- a/arch/arm64/lib/clear_user.S
++++ b/arch/arm64/lib/clear_user.S
+@@ -48,5 +48,6 @@ EXPORT_SYMBOL(__arch_clear_user)
+ .section .fixup,"ax"
+ .align 2
+ 9: mov x0, x2 // return the original size
++ uaccess_disable_not_uao x2, x3
+ ret
+ .previous
+diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
+index 680e74409ff9..8472dc7798b3 100644
+--- a/arch/arm64/lib/copy_from_user.S
++++ b/arch/arm64/lib/copy_from_user.S
+@@ -66,5 +66,6 @@ EXPORT_SYMBOL(__arch_copy_from_user)
+ .section .fixup,"ax"
+ .align 2
+ 9998: sub x0, end, dst // bytes not copied
++ uaccess_disable_not_uao x3, x4
+ ret
+ .previous
+diff --git a/arch/arm64/lib/copy_in_user.S b/arch/arm64/lib/copy_in_user.S
+index 0bedae3f3792..8e0355c1e318 100644
+--- a/arch/arm64/lib/copy_in_user.S
++++ b/arch/arm64/lib/copy_in_user.S
+@@ -68,5 +68,6 @@ EXPORT_SYMBOL(__arch_copy_in_user)
+ .section .fixup,"ax"
+ .align 2
+ 9998: sub x0, end, dst // bytes not copied
++ uaccess_disable_not_uao x3, x4
+ ret
+ .previous
+diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
+index 2d88c736e8f2..6085214654dc 100644
+--- a/arch/arm64/lib/copy_to_user.S
++++ b/arch/arm64/lib/copy_to_user.S
+@@ -65,5 +65,6 @@ EXPORT_SYMBOL(__arch_copy_to_user)
+ .section .fixup,"ax"
+ .align 2
+ 9998: sub x0, end, dst // bytes not copied
++ uaccess_disable_not_uao x3, x4
+ ret
+ .previous
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 70bcbd02edcb..aabc8c1ab0cd 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2699,6 +2699,28 @@ static void bfq_bfqq_save_state(struct bfq_queue *bfqq)
+ }
+ }
+
++
++static
++void bfq_release_process_ref(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++ /*
++ * To prevent bfqq's service guarantees from being violated,
++ * bfqq may be left busy, i.e., queued for service, even if
++ * empty (see comments in __bfq_bfqq_expire() for
++ * details). But, if no process will send requests to bfqq any
++ * longer, then there is no point in keeping bfqq queued for
++ * service. In addition, keeping bfqq queued for service, but
++ * with no process ref any longer, may have caused bfqq to be
++ * freed when dequeued from service. But this is assumed to
++ * never happen.
++ */
++ if (bfq_bfqq_busy(bfqq) && RB_EMPTY_ROOT(&bfqq->sort_list) &&
++ bfqq != bfqd->in_service_queue)
++ bfq_del_bfqq_busy(bfqd, bfqq, false);
++
++ bfq_put_queue(bfqq);
++}
++
+ static void
+ bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
+ struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
+@@ -2769,8 +2791,7 @@ bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
+ */
+ new_bfqq->pid = -1;
+ bfqq->bic = NULL;
+- /* release process reference to bfqq */
+- bfq_put_queue(bfqq);
++ bfq_release_process_ref(bfqd, bfqq);
+ }
+
+ static bool bfq_allow_bio_merge(struct request_queue *q, struct request *rq,
+@@ -4885,7 +4906,7 @@ static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+
+ bfq_put_cooperator(bfqq);
+
+- bfq_put_queue(bfqq); /* release process reference */
++ bfq_release_process_ref(bfqd, bfqq);
+ }
+
+ static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic, bool is_sync)
+@@ -4987,8 +5008,7 @@ static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio)
+
+ bfqq = bic_to_bfqq(bic, false);
+ if (bfqq) {
+- /* release process reference on this queue */
+- bfq_put_queue(bfqq);
++ bfq_release_process_ref(bfqd, bfqq);
+ bfqq = bfq_get_queue(bfqd, bio, BLK_RW_ASYNC, bic);
+ bic_set_bfqq(bic, bfqq, false);
+ }
+@@ -5948,7 +5968,7 @@ bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
+
+ bfq_put_cooperator(bfqq);
+
+- bfq_put_queue(bfqq);
++ bfq_release_process_ref(bfqq->bfqd, bfqq);
+ return NULL;
+ }
+
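bfq_release_process_ref() centralizes the drop of a process reference: if the queue is still flagged busy but has gone empty and is not in service, it is dequeued before the reference is put, so the put can never free a queue that scheduler bookkeeping still points at. The dequeue-before-put idiom in miniature (illustrative types, not bfq's):

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct queue {
        int refs;
        bool busy;      /* queued for service */
        bool empty;     /* no pending requests */
        bool in_service;
};

static void put_queue(struct queue *q)
{
        if (--q->refs == 0) {
                puts("queue freed");
                free(q);
        }
}

/* Mirror of bfq_release_process_ref(): dequeue a now-useless busy queue
 * before dropping the ref, so freeing it cannot leave it enqueued. */
static void release_process_ref(struct queue *q)
{
        if (q->busy && q->empty && !q->in_service) {
                q->busy = false;        /* remove from the busy list first */
                puts("dequeued empty busy queue");
        }
        put_queue(q);
}

int main(void)
{
        struct queue *q = malloc(sizeof(*q));

        *q = (struct queue){ .refs = 1, .busy = true, .empty = true };
        release_process_ref(q);
        return 0;
}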
+diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
+index a245597a3902..c2c82e6391b4 100644
+--- a/drivers/net/usb/cdc_ncm.c
++++ b/drivers/net/usb/cdc_ncm.c
+@@ -579,7 +579,7 @@ static void cdc_ncm_set_dgram_size(struct usbnet *dev, int new_size)
+ err = usbnet_read_cmd(dev, USB_CDC_GET_MAX_DATAGRAM_SIZE,
+ USB_TYPE_CLASS | USB_DIR_IN | USB_RECIP_INTERFACE,
+ 0, iface_no, &max_datagram_size, sizeof(max_datagram_size));
+- if (err < sizeof(max_datagram_size)) {
++ if (err != sizeof(max_datagram_size)) {
+ dev_dbg(&dev->intf->dev, "GET_MAX_DATAGRAM_SIZE failed\n");
+ goto out;
+ }
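The one-character cdc_ncm fix is really a signed/unsigned trap: usbnet_read_cmd() returns a negative errno in an int, and comparing that with < against sizeof(...) (a size_t) promotes the negative value to a huge unsigned number, so errors sailed straight past the check; != avoids the trap because inequality still holds after promotion. A short demonstration of the pitfall:

#include <stdio.h>

int main(void)
{
        int err = -5;                   /* e.g. -EIO from the read helper */
        unsigned short max_datagram_size;

        /* Promoted to size_t: -5 becomes huge, so this test is FALSE
         * and the error would be silently accepted. */
        if (err < sizeof(max_datagram_size))
                puts("short read detected (never printed)");

        /* Robust: catches both negative errors and short reads. */
        if (err != sizeof(max_datagram_size))
                puts("error or short read detected");
        return 0;
}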
+diff --git a/drivers/video/fbdev/core/fbmon.c b/drivers/video/fbdev/core/fbmon.c
+index 3558a70a6664..8e2e19f3bf44 100644
+--- a/drivers/video/fbdev/core/fbmon.c
++++ b/drivers/video/fbdev/core/fbmon.c
+@@ -999,98 +999,6 @@ void fb_edid_to_monspecs(unsigned char *edid, struct fb_monspecs *specs)
+ DPRINTK("========================================\n");
+ }
+
+-/**
+- * fb_edid_add_monspecs() - add monitor video modes from E-EDID data
+- * @edid: 128 byte array with an E-EDID block
+- * @spacs: monitor specs to be extended
+- */
+-void fb_edid_add_monspecs(unsigned char *edid, struct fb_monspecs *specs)
+-{
+- unsigned char *block;
+- struct fb_videomode *m;
+- int num = 0, i;
+- u8 svd[64], edt[(128 - 4) / DETAILED_TIMING_DESCRIPTION_SIZE];
+- u8 pos = 4, svd_n = 0;
+-
+- if (!edid)
+- return;
+-
+- if (!edid_checksum(edid))
+- return;
+-
+- if (edid[0] != 0x2 ||
+- edid[2] < 4 || edid[2] > 128 - DETAILED_TIMING_DESCRIPTION_SIZE)
+- return;
+-
+- DPRINTK(" Short Video Descriptors\n");
+-
+- while (pos < edid[2]) {
+- u8 len = edid[pos] & 0x1f, type = (edid[pos] >> 5) & 7;
+- pr_debug("Data block %u of %u bytes\n", type, len);
+- if (type == 2) {
+- for (i = pos; i < pos + len; i++) {
+- u8 idx = edid[pos + i] & 0x7f;
+- svd[svd_n++] = idx;
+- pr_debug("N%sative mode #%d\n",
+- edid[pos + i] & 0x80 ? "" : "on-n", idx);
+- }
+- } else if (type == 3 && len >= 3) {
+- /* Check Vendor Specific Data Block. For HDMI,
+- it is always 00-0C-03 for HDMI Licensing, LLC. */
+- if (edid[pos + 1] == 3 && edid[pos + 2] == 0xc &&
+- edid[pos + 3] == 0)
+- specs->misc |= FB_MISC_HDMI;
+- }
+- pos += len + 1;
+- }
+-
+- block = edid + edid[2];
+-
+- DPRINTK(" Extended Detailed Timings\n");
+-
+- for (i = 0; i < (128 - edid[2]) / DETAILED_TIMING_DESCRIPTION_SIZE;
+- i++, block += DETAILED_TIMING_DESCRIPTION_SIZE)
+- if (PIXEL_CLOCK != 0)
+- edt[num++] = block - edid;
+-
+- /* Yikes, EDID data is totally useless */
+- if (!(num + svd_n))
+- return;
+-
+- m = kcalloc(specs->modedb_len + num + svd_n,
+- sizeof(struct fb_videomode),
+- GFP_KERNEL);
+-
+- if (!m)
+- return;
+-
+- memcpy(m, specs->modedb, specs->modedb_len * sizeof(struct fb_videomode));
+-
+- for (i = specs->modedb_len; i < specs->modedb_len + num; i++) {
+- get_detailed_timing(edid + edt[i - specs->modedb_len], &m[i]);
+- if (i == specs->modedb_len)
+- m[i].flag |= FB_MODE_IS_FIRST;
+- pr_debug("Adding %ux%u@%u\n", m[i].xres, m[i].yres, m[i].refresh);
+- }
+-
+- for (i = specs->modedb_len + num; i < specs->modedb_len + num + svd_n; i++) {
+- int idx = svd[i - specs->modedb_len - num];
+- if (!idx || idx >= ARRAY_SIZE(cea_modes)) {
+- pr_warn("Reserved SVD code %d\n", idx);
+- } else if (!cea_modes[idx].xres) {
+- pr_warn("Unimplemented SVD code %d\n", idx);
+- } else {
+- memcpy(&m[i], cea_modes + idx, sizeof(m[i]));
+- pr_debug("Adding SVD #%d: %ux%u@%u\n", idx,
+- m[i].xres, m[i].yres, m[i].refresh);
+- }
+- }
+-
+- kfree(specs->modedb);
+- specs->modedb = m;
+- specs->modedb_len = specs->modedb_len + num + svd_n;
+-}
+-
+ /*
+ * VESA Generalized Timing Formula (GTF)
+ */
+@@ -1500,9 +1408,6 @@ int fb_parse_edid(unsigned char *edid, struct fb_var_screeninfo *var)
+ void fb_edid_to_monspecs(unsigned char *edid, struct fb_monspecs *specs)
+ {
+ }
+-void fb_edid_add_monspecs(unsigned char *edid, struct fb_monspecs *specs)
+-{
+-}
+ void fb_destroy_modedb(struct fb_videomode *modedb)
+ {
+ }
+@@ -1610,7 +1515,6 @@ EXPORT_SYMBOL(fb_firmware_edid);
+
+ EXPORT_SYMBOL(fb_parse_edid);
+ EXPORT_SYMBOL(fb_edid_to_monspecs);
+-EXPORT_SYMBOL(fb_edid_add_monspecs);
+ EXPORT_SYMBOL(fb_get_mode);
+ EXPORT_SYMBOL(fb_validate_mode);
+ EXPORT_SYMBOL(fb_destroy_modedb);
+diff --git a/drivers/video/fbdev/core/modedb.c b/drivers/video/fbdev/core/modedb.c
+index ac049871704d..6473e0dfe146 100644
+--- a/drivers/video/fbdev/core/modedb.c
++++ b/drivers/video/fbdev/core/modedb.c
+@@ -289,63 +289,6 @@ static const struct fb_videomode modedb[] = {
+ };
+
+ #ifdef CONFIG_FB_MODE_HELPERS
+-const struct fb_videomode cea_modes[65] = {
+- /* #1: 640x480p@59.94/60Hz */
+- [1] = {
+- NULL, 60, 640, 480, 39722, 48, 16, 33, 10, 96, 2, 0,
+- FB_VMODE_NONINTERLACED, 0,
+- },
+- /* #3: 720x480p@59.94/60Hz */
+- [3] = {
+- NULL, 60, 720, 480, 37037, 60, 16, 30, 9, 62, 6, 0,
+- FB_VMODE_NONINTERLACED, 0,
+- },
+- /* #5: 1920x1080i@59.94/60Hz */
+- [5] = {
+- NULL, 60, 1920, 1080, 13763, 148, 88, 15, 2, 44, 5,
+- FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT,
+- FB_VMODE_INTERLACED, 0,
+- },
+- /* #7: 720(1440)x480iH@59.94/60Hz */
+- [7] = {
+- NULL, 60, 1440, 480, 18554/*37108*/, 114, 38, 15, 4, 124, 3, 0,
+- FB_VMODE_INTERLACED, 0,
+- },
+- /* #9: 720(1440)x240pH@59.94/60Hz */
+- [9] = {
+- NULL, 60, 1440, 240, 18554, 114, 38, 16, 4, 124, 3, 0,
+- FB_VMODE_NONINTERLACED, 0,
+- },
+- /* #18: 720x576pH@50Hz */
+- [18] = {
+- NULL, 50, 720, 576, 37037, 68, 12, 39, 5, 64, 5, 0,
+- FB_VMODE_NONINTERLACED, 0,
+- },
+- /* #19: 1280x720p@50Hz */
+- [19] = {
+- NULL, 50, 1280, 720, 13468, 220, 440, 20, 5, 40, 5,
+- FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT,
+- FB_VMODE_NONINTERLACED, 0,
+- },
+- /* #20: 1920x1080i@50Hz */
+- [20] = {
+- NULL, 50, 1920, 1080, 13480, 148, 528, 15, 5, 528, 5,
+- FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT,
+- FB_VMODE_INTERLACED, 0,
+- },
+- /* #32: 1920x1080p@23.98/24Hz */
+- [32] = {
+- NULL, 24, 1920, 1080, 13468, 148, 638, 36, 4, 44, 5,
+- FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT,
+- FB_VMODE_NONINTERLACED, 0,
+- },
+- /* #35: (2880)x480p4x@59.94/60Hz */
+- [35] = {
+- NULL, 60, 2880, 480, 9250, 240, 64, 30, 9, 248, 6, 0,
+- FB_VMODE_NONINTERLACED, 0,
+- },
+-};
+-
+ const struct fb_videomode vesa_modes[] = {
+ /* 0 640x350-85 VESA */
+ { NULL, 85, 640, 350, 31746, 96, 32, 60, 32, 64, 3,
+diff --git a/include/linux/fb.h b/include/linux/fb.h
+index 303771264644..50948e519897 100644
+--- a/include/linux/fb.h
++++ b/include/linux/fb.h
+@@ -721,8 +721,6 @@ extern int fb_parse_edid(unsigned char *edid, struct fb_var_screeninfo *var);
+ extern const unsigned char *fb_firmware_edid(struct device *device);
+ extern void fb_edid_to_monspecs(unsigned char *edid,
+ struct fb_monspecs *specs);
+-extern void fb_edid_add_monspecs(unsigned char *edid,
+- struct fb_monspecs *specs);
+ extern void fb_destroy_modedb(struct fb_videomode *modedb);
+ extern int fb_find_mode_cvt(struct fb_videomode *mode, int margins, int rb);
+ extern unsigned char *fb_ddc_read(struct i2c_adapter *adapter);
+@@ -796,7 +794,6 @@ struct dmt_videomode {
+
+ extern const char *fb_mode_option;
+ extern const struct fb_videomode vesa_modes[];
+-extern const struct fb_videomode cea_modes[65];
+ extern const struct dmt_videomode dmt_modes[];
+
+ struct fb_modelist {
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 2c1a66cd47df..f363fed0db4f 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -436,67 +436,33 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
+ zone_span_writeunlock(zone);
+ }
+
+-static void shrink_pgdat_span(struct pglist_data *pgdat,
+- unsigned long start_pfn, unsigned long end_pfn)
++static void update_pgdat_span(struct pglist_data *pgdat)
+ {
+- unsigned long pgdat_start_pfn = pgdat->node_start_pfn;
+- unsigned long p = pgdat_end_pfn(pgdat); /* pgdat_end_pfn namespace clash */
+- unsigned long pgdat_end_pfn = p;
+- unsigned long pfn;
+- int nid = pgdat->node_id;
+-
+- if (pgdat_start_pfn == start_pfn) {
+- /*
+- * If the section is smallest section in the pgdat, it need
+- * shrink pgdat->node_start_pfn and pgdat->node_spanned_pages.
+- * In this case, we find second smallest valid mem_section
+- * for shrinking zone.
+- */
+- pfn = find_smallest_section_pfn(nid, NULL, end_pfn,
+- pgdat_end_pfn);
+- if (pfn) {
+- pgdat->node_start_pfn = pfn;
+- pgdat->node_spanned_pages = pgdat_end_pfn - pfn;
+- }
+- } else if (pgdat_end_pfn == end_pfn) {
+- /*
+- * If the section is biggest section in the pgdat, it need
+- * shrink pgdat->node_spanned_pages.
+- * In this case, we find second biggest valid mem_section for
+- * shrinking zone.
+- */
+- pfn = find_biggest_section_pfn(nid, NULL, pgdat_start_pfn,
+- start_pfn);
+- if (pfn)
+- pgdat->node_spanned_pages = pfn - pgdat_start_pfn + 1;
+- }
++ unsigned long node_start_pfn = 0, node_end_pfn = 0;
++ struct zone *zone;
+
+- /*
+- * If the section is not biggest or smallest mem_section in the pgdat,
+- * it only creates a hole in the pgdat. So in this case, we need not
+- * change the pgdat.
+- * But perhaps, the pgdat has only hole data. Thus it check the pgdat
+- * has only hole or not.
+- */
+- pfn = pgdat_start_pfn;
+- for (; pfn < pgdat_end_pfn; pfn += PAGES_PER_SUBSECTION) {
+- if (unlikely(!pfn_valid(pfn)))
+- continue;
++ for (zone = pgdat->node_zones;
++ zone < pgdat->node_zones + MAX_NR_ZONES; zone++) {
++ unsigned long zone_end_pfn = zone->zone_start_pfn +
++ zone->spanned_pages;
+
+- if (pfn_to_nid(pfn) != nid)
++ /* No need to lock the zones, they can't change. */
++ if (!zone->spanned_pages)
+ continue;
+-
+- /* Skip range to be removed */
+- if (pfn >= start_pfn && pfn < end_pfn)
++ if (!node_end_pfn) {
++ node_start_pfn = zone->zone_start_pfn;
++ node_end_pfn = zone_end_pfn;
+ continue;
++ }
+
+- /* If we find valid section, we have nothing to do */
+- return;
++ if (zone_end_pfn > node_end_pfn)
++ node_end_pfn = zone_end_pfn;
++ if (zone->zone_start_pfn < node_start_pfn)
++ node_start_pfn = zone->zone_start_pfn;
+ }
+
+- /* The pgdat has no valid section */
+- pgdat->node_start_pfn = 0;
+- pgdat->node_spanned_pages = 0;
++ pgdat->node_start_pfn = node_start_pfn;
++ pgdat->node_spanned_pages = node_end_pfn - node_start_pfn;
+ }
+
+ static void __remove_zone(struct zone *zone, unsigned long start_pfn,
+@@ -507,7 +473,7 @@ static void __remove_zone(struct zone *zone, unsigned long start_pfn,
+
+ pgdat_resize_lock(zone->zone_pgdat, &flags);
+ shrink_zone_span(zone, start_pfn, start_pfn + nr_pages);
+- shrink_pgdat_span(pgdat, start_pfn, start_pfn + nr_pages);
++ update_pgdat_span(pgdat);
+ pgdat_resize_unlock(zone->zone_pgdat, &flags);
+ }
+
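update_pgdat_span() replaces the fragile shrink-by-removed-range logic with a full recomputation: the node span is simply the minimum start and maximum end over all non-empty zones. The same fold over ranges, runnable (end == 0 doubles as the "nothing seen yet" sentinel, just like the kernel version):

#include <stdio.h>

struct zone { unsigned long start, pages; };

/* Recompute the covering span of all non-empty zones from scratch,
 * as update_pgdat_span() now does for the node. */
static void span(const struct zone *z, int n,
                 unsigned long *start, unsigned long *pages)
{
        unsigned long lo = 0, hi = 0;

        for (int i = 0; i < n; i++) {
                unsigned long end = z[i].start + z[i].pages;

                if (!z[i].pages)
                        continue;               /* skip empty zones */
                if (!hi) {                      /* first non-empty zone */
                        lo = z[i].start;
                        hi = end;
                        continue;
                }
                if (end > hi)
                        hi = end;
                if (z[i].start < lo)
                        lo = z[i].start;
        }
        *start = lo;
        *pages = hi - lo;
}

int main(void)
{
        struct zone zones[] = { { 100, 50 }, { 0, 0 }, { 300, 10 } };
        unsigned long start, pages;

        span(zones, 3, &start, &pages);
        printf("node spans [%lu, %lu)\n", start, start + pages); /* [100, 310) */
        return 0;
}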
* [gentoo-commits] proj/linux-patches:5.3 commit in: /
@ 2019-11-29 21:38 Thomas Deutschmann
0 siblings, 0 replies; 21+ messages in thread
From: Thomas Deutschmann @ 2019-11-29 21:38 UTC (permalink / raw
To: gentoo-commits
commit: 57acb6027fa35bec9bb58c8f2fa8a716595c302b
Author: Thomas Deutschmann <whissi <AT> whissi <DOT> de>
AuthorDate: Fri Nov 29 21:37:56 2019 +0000
Commit: Thomas Deutschmann <whissi <AT> gentoo <DOT> org>
CommitDate: Fri Nov 29 21:37:56 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=57acb602
Linux patch 5.3.14
Signed-off-by: Thomas Deutschmann <whissi <AT> whissi.de>
1013_linux-5.3.14.patch | 4004 +++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 4004 insertions(+)
diff --git a/1013_linux-5.3.14.patch b/1013_linux-5.3.14.patch
new file mode 100644
index 0000000..038253d
--- /dev/null
+++ b/1013_linux-5.3.14.patch
@@ -0,0 +1,4004 @@
+diff --git a/Documentation/admin-guide/hw-vuln/mds.rst b/Documentation/admin-guide/hw-vuln/mds.rst
+index e3a796c0d3a2..2d19c9f4c1fe 100644
+--- a/Documentation/admin-guide/hw-vuln/mds.rst
++++ b/Documentation/admin-guide/hw-vuln/mds.rst
+@@ -265,8 +265,11 @@ time with the option "mds=". The valid arguments for this option are:
+
+ ============ =============================================================
+
+-Not specifying this option is equivalent to "mds=full".
+-
++Not specifying this option is equivalent to "mds=full". For processors
++that are affected by both TAA (TSX Asynchronous Abort) and MDS,
++specifying just "mds=off" without an accompanying "tsx_async_abort=off"
++will have no effect as the same mitigation is used for both
++vulnerabilities.
+
+ Mitigation selection guide
+ --------------------------
+diff --git a/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst b/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
+index fddbd7579c53..af6865b822d2 100644
+--- a/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
++++ b/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
+@@ -174,7 +174,10 @@ the option "tsx_async_abort=". The valid arguments for this option are:
+ CPU is not vulnerable to cross-thread TAA attacks.
+ ============ =============================================================
+
+-Not specifying this option is equivalent to "tsx_async_abort=full".
++Not specifying this option is equivalent to "tsx_async_abort=full". For
++processors that are affected by both TAA and MDS, specifying just
++"tsx_async_abort=off" without an accompanying "mds=off" will have no
++effect as the same mitigation is used for both vulnerabilities.
+
+ The kernel command line also allows to control the TSX feature using the
+ parameter "tsx=" on CPUs which support TSX control. MSR_IA32_TSX_CTRL is used
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 49d1719177ea..c4894b716fbe 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -2449,6 +2449,12 @@
+ SMT on vulnerable CPUs
+ off - Unconditionally disable MDS mitigation
+
++ On TAA-affected machines, mds=off can be prevented by
++ an active TAA mitigation, as both vulnerabilities are
++ mitigated with the same mechanism; to disable this
++ mitigation, you need to specify tsx_async_abort=off
++ too.
++
+ Not specifying this option is equivalent to
+ mds=full.
+
+@@ -4896,6 +4902,11 @@
+ vulnerable to cross-thread TAA attacks.
+ off - Unconditionally disable TAA mitigation
+
++ On MDS-affected machines, tsx_async_abort=off can be
++ prevented by an active MDS mitigation, as both vulnerabilities
++ are mitigated with the same mechanism; to disable this
++ mitigation, you need to specify mds=off too.
++
+ Not specifying this option is equivalent to
+ tsx_async_abort=full. On CPUs which are MDS affected
+ and deploy MDS mitigation, TAA mitigation is not
+diff --git a/Documentation/devicetree/bindings/net/wireless/qcom,ath10k.txt b/Documentation/devicetree/bindings/net/wireless/qcom,ath10k.txt
+index ae661e65354e..f9499b20d840 100644
+--- a/Documentation/devicetree/bindings/net/wireless/qcom,ath10k.txt
++++ b/Documentation/devicetree/bindings/net/wireless/qcom,ath10k.txt
+@@ -81,6 +81,12 @@ Optional properties:
+ Definition: Name of external front end module used. Some valid FEM names
+ for example: "microsemi-lx5586", "sky85703-11"
+ and "sky85803" etc.
++- qcom,snoc-host-cap-8bit-quirk:
++ Usage: Optional
++ Value type: <empty>
++ Definition: Quirk specifying that the firmware expects the 8bit version
++ of the host capability QMI request
++
+
+ Example (to supply PCI based wifi block details):
+
+diff --git a/Makefile b/Makefile
+index f9d3d58ae801..1e5933d6dc97 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 3
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Bobtail Squid
+
+diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
+index d5e0b908f0ba..25da9b2d9610 100644
+--- a/arch/arm/mm/mmu.c
++++ b/arch/arm/mm/mmu.c
+@@ -1197,6 +1197,9 @@ void __init adjust_lowmem_bounds(void)
+ phys_addr_t block_start = reg->base;
+ phys_addr_t block_end = reg->base + reg->size;
+
++ if (memblock_is_nomap(reg))
++ continue;
++
+ if (reg->base < vmalloc_limit) {
+ if (block_end > lowmem_limit)
+ /*
+diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
+index ec1c97a8e8cb..baaafc9b9d88 100644
+--- a/arch/powerpc/include/asm/asm-prototypes.h
++++ b/arch/powerpc/include/asm/asm-prototypes.h
+@@ -140,9 +140,12 @@ void _kvmppc_save_tm_pr(struct kvm_vcpu *vcpu, u64 guest_msr);
+ /* Patch sites */
+ extern s32 patch__call_flush_count_cache;
+ extern s32 patch__flush_count_cache_return;
++extern s32 patch__flush_link_stack_return;
++extern s32 patch__call_kvm_flush_link_stack;
+ extern s32 patch__memset_nocache, patch__memcpy_nocache;
+
+ extern long flush_count_cache;
++extern long kvm_flush_link_stack;
+
+ #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+ void kvmppc_save_tm_hv(struct kvm_vcpu *vcpu, u64 msr, bool preserve_nv);
+diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h
+index 759597bf0fd8..ccf44c135389 100644
+--- a/arch/powerpc/include/asm/security_features.h
++++ b/arch/powerpc/include/asm/security_features.h
+@@ -81,6 +81,9 @@ static inline bool security_ftr_enabled(unsigned long feature)
+ // Software required to flush count cache on context switch
+ #define SEC_FTR_FLUSH_COUNT_CACHE 0x0000000000000400ull
+
++// Software required to flush link stack on context switch
++#define SEC_FTR_FLUSH_LINK_STACK 0x0000000000001000ull
++
+
+ // Features enabled by default
+ #define SEC_FTR_DEFAULT \
+diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
+index 0a0b5310f54a..81d61770f9c2 100644
+--- a/arch/powerpc/kernel/entry_64.S
++++ b/arch/powerpc/kernel/entry_64.S
+@@ -546,6 +546,7 @@ flush_count_cache:
+ /* Save LR into r9 */
+ mflr r9
+
++ // Flush the link stack
+ .rept 64
+ bl .+4
+ .endr
+@@ -555,6 +556,11 @@ flush_count_cache:
+ .balign 32
+ /* Restore LR */
+ 1: mtlr r9
++
++ // If we're just flushing the link stack, return here
++3: nop
++ patch_site 3b patch__flush_link_stack_return
++
+ li r9,0x7fff
+ mtctr r9
+
+diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
+index e1c9cf079503..bd91dceb7010 100644
+--- a/arch/powerpc/kernel/security.c
++++ b/arch/powerpc/kernel/security.c
+@@ -24,11 +24,12 @@ enum count_cache_flush_type {
+ COUNT_CACHE_FLUSH_HW = 0x4,
+ };
+ static enum count_cache_flush_type count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
++static bool link_stack_flush_enabled;
+
+ bool barrier_nospec_enabled;
+ static bool no_nospec;
+ static bool btb_flush_enabled;
+-#ifdef CONFIG_PPC_FSL_BOOK3E
++#if defined(CONFIG_PPC_FSL_BOOK3E) || defined(CONFIG_PPC_BOOK3S_64)
+ static bool no_spectrev2;
+ #endif
+
+@@ -114,7 +115,7 @@ static __init int security_feature_debugfs_init(void)
+ device_initcall(security_feature_debugfs_init);
+ #endif /* CONFIG_DEBUG_FS */
+
+-#ifdef CONFIG_PPC_FSL_BOOK3E
++#if defined(CONFIG_PPC_FSL_BOOK3E) || defined(CONFIG_PPC_BOOK3S_64)
+ static int __init handle_nospectre_v2(char *p)
+ {
+ no_spectrev2 = true;
+@@ -122,6 +123,9 @@ static int __init handle_nospectre_v2(char *p)
+ return 0;
+ }
+ early_param("nospectre_v2", handle_nospectre_v2);
++#endif /* CONFIG_PPC_FSL_BOOK3E || CONFIG_PPC_BOOK3S_64 */
++
++#ifdef CONFIG_PPC_FSL_BOOK3E
+ void setup_spectre_v2(void)
+ {
+ if (no_spectrev2 || cpu_mitigations_off())
+@@ -209,11 +213,19 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, c
+
+ if (ccd)
+ seq_buf_printf(&s, "Indirect branch cache disabled");
++
++ if (link_stack_flush_enabled)
++ seq_buf_printf(&s, ", Software link stack flush");
++
+ } else if (count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
+ seq_buf_printf(&s, "Mitigation: Software count cache flush");
+
+ if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW)
+ seq_buf_printf(&s, " (hardware accelerated)");
++
++ if (link_stack_flush_enabled)
++ seq_buf_printf(&s, ", Software link stack flush");
++
+ } else if (btb_flush_enabled) {
+ seq_buf_printf(&s, "Mitigation: Branch predictor state flush");
+ } else {
+@@ -374,18 +386,49 @@ static __init int stf_barrier_debugfs_init(void)
+ device_initcall(stf_barrier_debugfs_init);
+ #endif /* CONFIG_DEBUG_FS */
+
++static void no_count_cache_flush(void)
++{
++ count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
++ pr_info("count-cache-flush: software flush disabled.\n");
++}
++
+ static void toggle_count_cache_flush(bool enable)
+ {
+- if (!enable || !security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE)) {
++ if (!security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE) &&
++ !security_ftr_enabled(SEC_FTR_FLUSH_LINK_STACK))
++ enable = false;
++
++ if (!enable) {
+ patch_instruction_site(&patch__call_flush_count_cache, PPC_INST_NOP);
+- count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
+- pr_info("count-cache-flush: software flush disabled.\n");
++#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
++ patch_instruction_site(&patch__call_kvm_flush_link_stack, PPC_INST_NOP);
++#endif
++ pr_info("link-stack-flush: software flush disabled.\n");
++ link_stack_flush_enabled = false;
++ no_count_cache_flush();
+ return;
+ }
+
++ // This enables the branch from _switch to flush_count_cache
+ patch_branch_site(&patch__call_flush_count_cache,
+ (u64)&flush_count_cache, BRANCH_SET_LINK);
+
++#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
++ // This enables the branch from guest_exit_cont to kvm_flush_link_stack
++ patch_branch_site(&patch__call_kvm_flush_link_stack,
++ (u64)&kvm_flush_link_stack, BRANCH_SET_LINK);
++#endif
++
++ pr_info("link-stack-flush: software flush enabled.\n");
++ link_stack_flush_enabled = true;
++
++ // If we just need to flush the link stack, patch an early return
++ if (!security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE)) {
++ patch_instruction_site(&patch__flush_link_stack_return, PPC_INST_BLR);
++ no_count_cache_flush();
++ return;
++ }
++
+ if (!security_ftr_enabled(SEC_FTR_BCCTR_FLUSH_ASSIST)) {
+ count_cache_flush_type = COUNT_CACHE_FLUSH_SW;
+ pr_info("count-cache-flush: full software flush sequence enabled.\n");
+@@ -399,7 +442,26 @@ static void toggle_count_cache_flush(bool enable)
+
+ void setup_count_cache_flush(void)
+ {
+- toggle_count_cache_flush(true);
++ bool enable = true;
++
++ if (no_spectrev2 || cpu_mitigations_off()) {
++ if (security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED) ||
++ security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED))
++ pr_warn("Spectre v2 mitigations not fully under software control, can't disable\n");
++
++ enable = false;
++ }
++
++ /*
++ * There's no firmware feature flag/hypervisor bit to tell us we need to
++ * flush the link stack on context switch. So we set it here if we see
++ * either of the Spectre v2 mitigations that aim to protect userspace.
++ */
++ if (security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED) ||
++ security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE))
++ security_ftr_set(SEC_FTR_FLUSH_LINK_STACK);
++
++ toggle_count_cache_flush(enable);
+ }
+
+ #ifdef CONFIG_DEBUG_FS
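setup_count_cache_flush() now derives the link-stack requirement itself: no firmware bit exists for it, so the flag is inferred whenever either userspace-protecting Spectre v2 mitigation is active, while nospectre_v2 or mitigations=off disables the software paths. The decision logic distilled into a sketch (the booleans are illustrative stand-ins for firmware feature bits and command-line state):

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
        /* Illustrative inputs; on the kernel side these come from
         * firmware feature bits and the command line. */
        bool no_spectrev2 = false, mitigations_off = false;
        bool count_cache_disabled = false, flush_count_cache = true;

        bool enable = !(no_spectrev2 || mitigations_off);
        bool flush_link_stack = false;

        /* There is no firmware flag for the link stack, so infer it
         * from the Spectre v2 mitigations that protect userspace. */
        if (count_cache_disabled || flush_count_cache)
                flush_link_stack = true;

        printf("count-cache flush: %s, link-stack flush: %s\n",
               (enable && flush_count_cache) ? "on" : "off",
               (enable && flush_link_stack) ? "on" : "off");
        return 0;
}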
+diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+index 07181d0dfcb7..0ba1d7abb798 100644
+--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
++++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+@@ -11,6 +11,7 @@
+ */
+
+ #include <asm/ppc_asm.h>
++#include <asm/code-patching-asm.h>
+ #include <asm/kvm_asm.h>
+ #include <asm/reg.h>
+ #include <asm/mmu.h>
+@@ -1458,6 +1459,13 @@ guest_exit_cont: /* r9 = vcpu, r12 = trap, r13 = paca */
+ 1:
+ #endif /* CONFIG_KVM_XICS */
+
++ /*
++ * Possibly flush the link stack here, before we do a blr in
++ * guest_exit_short_path.
++ */
++1: nop
++ patch_site 1b patch__call_kvm_flush_link_stack
++
+ /* If we came in through the P9 short path, go back out to C now */
+ lwz r0, STACK_SLOT_SHORT_PATH(r1)
+ cmpwi r0, 0
+@@ -1933,6 +1941,28 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
+ mtlr r0
+ blr
+
++.balign 32
++.global kvm_flush_link_stack
++kvm_flush_link_stack:
++ /* Save LR into r0 */
++ mflr r0
++
++ /* Flush the link stack. On Power8 it's up to 32 entries in size. */
++ .rept 32
++ bl .+4
++ .endr
++
++ /* And on Power9 it's up to 64. */
++BEGIN_FTR_SECTION
++ .rept 32
++ bl .+4
++ .endr
++END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
++
++ /* Restore LR */
++ mtlr r0
++ blr
++
+ kvmppc_guest_external:
+ /* External interrupt, first check for host_ipi. If this is
+ * set, we know the host wants us out so let's do it now
+diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
+index 4f86928246e7..1153e510cedd 100644
+--- a/arch/x86/entry/entry_32.S
++++ b/arch/x86/entry/entry_32.S
+@@ -172,7 +172,7 @@
+ ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_PTI
+ .if \no_user_check == 0
+ /* coming from usermode? */
+- testl $SEGMENT_RPL_MASK, PT_CS(%esp)
++ testl $USER_SEGMENT_RPL_MASK, PT_CS(%esp)
+ jz .Lend_\@
+ .endif
+ /* On user-cr3? */
+@@ -205,64 +205,76 @@
+ #define CS_FROM_ENTRY_STACK (1 << 31)
+ #define CS_FROM_USER_CR3 (1 << 30)
+ #define CS_FROM_KERNEL (1 << 29)
++#define CS_FROM_ESPFIX (1 << 28)
+
+ .macro FIXUP_FRAME
+ /*
+ * The high bits of the CS dword (__csh) are used for CS_FROM_*.
+ * Clear them in case hardware didn't do this for us.
+ */
+- andl $0x0000ffff, 3*4(%esp)
++ andl $0x0000ffff, 4*4(%esp)
+
+ #ifdef CONFIG_VM86
+- testl $X86_EFLAGS_VM, 4*4(%esp)
++ testl $X86_EFLAGS_VM, 5*4(%esp)
+ jnz .Lfrom_usermode_no_fixup_\@
+ #endif
+- testl $SEGMENT_RPL_MASK, 3*4(%esp)
++ testl $USER_SEGMENT_RPL_MASK, 4*4(%esp)
+ jnz .Lfrom_usermode_no_fixup_\@
+
+- orl $CS_FROM_KERNEL, 3*4(%esp)
++ orl $CS_FROM_KERNEL, 4*4(%esp)
+
+ /*
+ * When we're here from kernel mode, the (exception) stack looks like:
+ *
+- * 5*4(%esp) - <previous context>
+- * 4*4(%esp) - flags
+- * 3*4(%esp) - cs
+- * 2*4(%esp) - ip
+- * 1*4(%esp) - orig_eax
+- * 0*4(%esp) - gs / function
++ * 6*4(%esp) - <previous context>
++ * 5*4(%esp) - flags
++ * 4*4(%esp) - cs
++ * 3*4(%esp) - ip
++ * 2*4(%esp) - orig_eax
++ * 1*4(%esp) - gs / function
++ * 0*4(%esp) - fs
+ *
+ * Lets build a 5 entry IRET frame after that, such that struct pt_regs
+ * is complete and in particular regs->sp is correct. This gives us
+- * the original 5 enties as gap:
++ * the original 6 entries as gap:
+ *
+- * 12*4(%esp) - <previous context>
+- * 11*4(%esp) - gap / flags
+- * 10*4(%esp) - gap / cs
+- * 9*4(%esp) - gap / ip
+- * 8*4(%esp) - gap / orig_eax
+- * 7*4(%esp) - gap / gs / function
+- * 6*4(%esp) - ss
+- * 5*4(%esp) - sp
+- * 4*4(%esp) - flags
+- * 3*4(%esp) - cs
+- * 2*4(%esp) - ip
+- * 1*4(%esp) - orig_eax
+- * 0*4(%esp) - gs / function
++ * 14*4(%esp) - <previous context>
++ * 13*4(%esp) - gap / flags
++ * 12*4(%esp) - gap / cs
++ * 11*4(%esp) - gap / ip
++ * 10*4(%esp) - gap / orig_eax
++ * 9*4(%esp) - gap / gs / function
++ * 8*4(%esp) - gap / fs
++ * 7*4(%esp) - ss
++ * 6*4(%esp) - sp
++ * 5*4(%esp) - flags
++ * 4*4(%esp) - cs
++ * 3*4(%esp) - ip
++ * 2*4(%esp) - orig_eax
++ * 1*4(%esp) - gs / function
++ * 0*4(%esp) - fs
+ */
+
+ pushl %ss # ss
+ pushl %esp # sp (points at ss)
+- addl $6*4, (%esp) # point sp back at the previous context
+- pushl 6*4(%esp) # flags
+- pushl 6*4(%esp) # cs
+- pushl 6*4(%esp) # ip
+- pushl 6*4(%esp) # orig_eax
+- pushl 6*4(%esp) # gs / function
++ addl $7*4, (%esp) # point sp back at the previous context
++ pushl 7*4(%esp) # flags
++ pushl 7*4(%esp) # cs
++ pushl 7*4(%esp) # ip
++ pushl 7*4(%esp) # orig_eax
++ pushl 7*4(%esp) # gs / function
++ pushl 7*4(%esp) # fs
+ .Lfrom_usermode_no_fixup_\@:
+ .endm
+
+ .macro IRET_FRAME
++ /*
++ * We're called with %ds, %es, %fs, and %gs from the interrupted
++ * frame, so we shouldn't use them. Also, we may be in ESPFIX
++ * mode and therefore have a nonzero SS base and an offset ESP,
++ * so any attempt to access the stack needs to use SS (except for
++ * accesses through %esp, which automatically use SS).
++ */
+ testl $CS_FROM_KERNEL, 1*4(%esp)
+ jz .Lfinished_frame_\@
+
+@@ -276,31 +288,40 @@
+ movl 5*4(%esp), %eax # (modified) regs->sp
+
+ movl 4*4(%esp), %ecx # flags
+- movl %ecx, -4(%eax)
++ movl %ecx, %ss:-1*4(%eax)
+
+ movl 3*4(%esp), %ecx # cs
+ andl $0x0000ffff, %ecx
+- movl %ecx, -8(%eax)
++ movl %ecx, %ss:-2*4(%eax)
+
+ movl 2*4(%esp), %ecx # ip
+- movl %ecx, -12(%eax)
++ movl %ecx, %ss:-3*4(%eax)
+
+ movl 1*4(%esp), %ecx # eax
+- movl %ecx, -16(%eax)
++ movl %ecx, %ss:-4*4(%eax)
+
+ popl %ecx
+- lea -16(%eax), %esp
++ lea -4*4(%eax), %esp
+ popl %eax
+ .Lfinished_frame_\@:
+ .endm
+
+-.macro SAVE_ALL pt_regs_ax=%eax switch_stacks=0 skip_gs=0
++.macro SAVE_ALL pt_regs_ax=%eax switch_stacks=0 skip_gs=0 unwind_espfix=0
+ cld
+ .if \skip_gs == 0
+ PUSH_GS
+ .endif
+- FIXUP_FRAME
+ pushl %fs
++
++ pushl %eax
++ movl $(__KERNEL_PERCPU), %eax
++ movl %eax, %fs
++.if \unwind_espfix > 0
++ UNWIND_ESPFIX_STACK
++.endif
++ popl %eax
++
++ FIXUP_FRAME
+ pushl %es
+ pushl %ds
+ pushl \pt_regs_ax
+@@ -313,8 +334,6 @@
+ movl $(__USER_DS), %edx
+ movl %edx, %ds
+ movl %edx, %es
+- movl $(__KERNEL_PERCPU), %edx
+- movl %edx, %fs
+ .if \skip_gs == 0
+ SET_KERNEL_GS %edx
+ .endif
+@@ -324,8 +343,8 @@
+ .endif
+ .endm
+
+-.macro SAVE_ALL_NMI cr3_reg:req
+- SAVE_ALL
++.macro SAVE_ALL_NMI cr3_reg:req unwind_espfix=0
++ SAVE_ALL unwind_espfix=\unwind_espfix
+
+ BUG_IF_WRONG_CR3
+
+@@ -357,6 +376,7 @@
+ 2: popl %es
+ 3: popl %fs
+ POP_GS \pop
++ IRET_FRAME
+ .pushsection .fixup, "ax"
+ 4: movl $0, (%esp)
+ jmp 1b
+@@ -395,7 +415,8 @@
+
+ .macro CHECK_AND_APPLY_ESPFIX
+ #ifdef CONFIG_X86_ESPFIX32
+-#define GDT_ESPFIX_SS PER_CPU_VAR(gdt_page) + (GDT_ENTRY_ESPFIX_SS * 8)
++#define GDT_ESPFIX_OFFSET (GDT_ENTRY_ESPFIX_SS * 8)
++#define GDT_ESPFIX_SS PER_CPU_VAR(gdt_page) + GDT_ESPFIX_OFFSET
+
+ ALTERNATIVE "jmp .Lend_\@", "", X86_BUG_ESPFIX
+
+@@ -1075,7 +1096,6 @@ restore_all:
+ /* Restore user state */
+ RESTORE_REGS pop=4 # skip orig_eax/error_code
+ .Lirq_return:
+- IRET_FRAME
+ /*
+ * ARCH_HAS_MEMBARRIER_SYNC_CORE rely on IRET core serialization
+ * when returning from IPI handler and when returning from
+@@ -1128,30 +1148,43 @@ ENDPROC(entry_INT80_32)
+ * We can't call C functions using the ESPFIX stack. This code reads
+ * the high word of the segment base from the GDT and switches to the
+ * normal stack and adjusts ESP with the matching offset.
++ *
++ * We might be on user CR3 here, so percpu data is not mapped and we can't
++ * access the GDT through the percpu segment. Instead, use SGDT to find
++ * the cpu_entry_area alias of the GDT.
+ */
+ #ifdef CONFIG_X86_ESPFIX32
+ /* fixup the stack */
+- mov GDT_ESPFIX_SS + 4, %al /* bits 16..23 */
+- mov GDT_ESPFIX_SS + 7, %ah /* bits 24..31 */
++ pushl %ecx
++ subl $2*4, %esp
++ sgdt (%esp)
++ movl 2(%esp), %ecx /* GDT address */
++ /*
++ * Careful: ECX is a linear pointer, so we need to force base
++ * zero. %cs is the only known-linear segment we have right now.
++ */
++ mov %cs:GDT_ESPFIX_OFFSET + 4(%ecx), %al /* bits 16..23 */
++ mov %cs:GDT_ESPFIX_OFFSET + 7(%ecx), %ah /* bits 24..31 */
+ shl $16, %eax
++ addl $2*4, %esp
++ popl %ecx
+ addl %esp, %eax /* the adjusted stack pointer */
+ pushl $__KERNEL_DS
+ pushl %eax
+ lss (%esp), %esp /* switch to the normal stack segment */
+ #endif
+ .endm
++
+ .macro UNWIND_ESPFIX_STACK
++ /* It's safe to clobber %eax, all other regs need to be preserved */
+ #ifdef CONFIG_X86_ESPFIX32
+ movl %ss, %eax
+ /* see if on espfix stack */
+ cmpw $__ESPFIX_SS, %ax
+- jne 27f
+- movl $__KERNEL_DS, %eax
+- movl %eax, %ds
+- movl %eax, %es
++ jne .Lno_fixup_\@
+ /* switch to normal stack */
+ FIXUP_ESPFIX_STACK
+-27:
++.Lno_fixup_\@:
+ #endif
+ .endm
+
+@@ -1341,11 +1374,6 @@ END(spurious_interrupt_bug)
+
+ #ifdef CONFIG_XEN_PV
+ ENTRY(xen_hypervisor_callback)
+- pushl $-1 /* orig_ax = -1 => not a system call */
+- SAVE_ALL
+- ENCODE_FRAME_POINTER
+- TRACE_IRQS_OFF
+-
+ /*
+ * Check to see if we got the event in the critical
+ * region in xen_iret_direct, after we've reenabled
+@@ -1353,16 +1381,17 @@ ENTRY(xen_hypervisor_callback)
+ * iret instruction's behaviour where it delivers a
+ * pending interrupt when enabling interrupts:
+ */
+- movl PT_EIP(%esp), %eax
+- cmpl $xen_iret_start_crit, %eax
++ cmpl $xen_iret_start_crit, (%esp)
+ jb 1f
+- cmpl $xen_iret_end_crit, %eax
++ cmpl $xen_iret_end_crit, (%esp)
+ jae 1f
+-
+- jmp xen_iret_crit_fixup
+-
+-ENTRY(xen_do_upcall)
+-1: mov %esp, %eax
++ call xen_iret_crit_fixup
++1:
++ pushl $-1 /* orig_ax = -1 => not a system call */
++ SAVE_ALL
++ ENCODE_FRAME_POINTER
++ TRACE_IRQS_OFF
++ mov %esp, %eax
+ call xen_evtchn_do_upcall
+ #ifndef CONFIG_PREEMPT
+ call xen_maybe_preempt_hcall
+@@ -1449,10 +1478,9 @@ END(page_fault)
+
+ common_exception_read_cr2:
+ /* the function address is in %gs's slot on the stack */
+- SAVE_ALL switch_stacks=1 skip_gs=1
++ SAVE_ALL switch_stacks=1 skip_gs=1 unwind_espfix=1
+
+ ENCODE_FRAME_POINTER
+- UNWIND_ESPFIX_STACK
+
+ /* fixup %gs */
+ GS_TO_REG %ecx
+@@ -1474,9 +1502,8 @@ END(common_exception_read_cr2)
+
+ common_exception:
+ /* the function address is in %gs's slot on the stack */
+- SAVE_ALL switch_stacks=1 skip_gs=1
++ SAVE_ALL switch_stacks=1 skip_gs=1 unwind_espfix=1
+ ENCODE_FRAME_POINTER
+- UNWIND_ESPFIX_STACK
+
+ /* fixup %gs */
+ GS_TO_REG %ecx
+@@ -1515,6 +1542,10 @@ ENTRY(nmi)
+ ASM_CLAC
+
+ #ifdef CONFIG_X86_ESPFIX32
++ /*
++ * ESPFIX_SS is only ever set on the return to user path
++ * after we've switched to the entry stack.
++ */
+ pushl %eax
+ movl %ss, %eax
+ cmpw $__ESPFIX_SS, %ax
+@@ -1550,6 +1581,11 @@ ENTRY(nmi)
+ movl %ebx, %esp
+
+ .Lnmi_return:
++#ifdef CONFIG_X86_ESPFIX32
++ testl $CS_FROM_ESPFIX, PT_CS(%esp)
++ jnz .Lnmi_from_espfix
++#endif
++
+ CHECK_AND_APPLY_ESPFIX
+ RESTORE_ALL_NMI cr3_reg=%edi pop=4
+ jmp .Lirq_return
+@@ -1557,23 +1593,42 @@ ENTRY(nmi)
+ #ifdef CONFIG_X86_ESPFIX32
+ .Lnmi_espfix_stack:
+ /*
+- * create the pointer to lss back
++ * Create the pointer to LSS back
+ */
+ pushl %ss
+ pushl %esp
+ addl $4, (%esp)
+- /* copy the iret frame of 12 bytes */
+- .rept 3
+- pushl 16(%esp)
+- .endr
+- pushl %eax
+- SAVE_ALL_NMI cr3_reg=%edi
++
++ /* Copy the (short) IRET frame */
++ pushl 4*4(%esp) # flags
++ pushl 4*4(%esp) # cs
++ pushl 4*4(%esp) # ip
++
++ pushl %eax # orig_ax
++
++ SAVE_ALL_NMI cr3_reg=%edi unwind_espfix=1
+ ENCODE_FRAME_POINTER
+- FIXUP_ESPFIX_STACK # %eax == %esp
++
++ /* clear CS_FROM_KERNEL, set CS_FROM_ESPFIX */
++ xorl $(CS_FROM_ESPFIX | CS_FROM_KERNEL), PT_CS(%esp)
++
+ xorl %edx, %edx # zero error code
+- call do_nmi
++ movl %esp, %eax # pt_regs pointer
++ jmp .Lnmi_from_sysenter_stack
++
++.Lnmi_from_espfix:
+ RESTORE_ALL_NMI cr3_reg=%edi
+- lss 12+4(%esp), %esp # back to espfix stack
++ /*
++ * Because we cleared CS_FROM_KERNEL, IRET_FRAME 'forgot' to
++ * fix up the gap and long frame:
++ *
++ * 3 - original frame (exception)
++ * 2 - ESPFIX block (above)
++ * 6 - gap (FIXUP_FRAME)
++ * 5 - long frame (FIXUP_FRAME)
++ * 1 - orig_ax
++ */
++ lss (1+5+6)*4(%esp), %esp # back to espfix stack
+ jmp .Lirq_return
+ #endif
+ END(nmi)
+diff --git a/arch/x86/include/asm/cpu_entry_area.h b/arch/x86/include/asm/cpu_entry_area.h
+index cff3f3f3bfe0..6e9c9af3255a 100644
+--- a/arch/x86/include/asm/cpu_entry_area.h
++++ b/arch/x86/include/asm/cpu_entry_area.h
+@@ -78,8 +78,12 @@ struct cpu_entry_area {
+
+ /*
+ * The GDT is just below entry_stack and thus serves (on x86_64) as
+- * a a read-only guard page.
++ * a read-only guard page. On 32-bit the GDT must be writeable, so
++ * it needs an extra guard page.
+ */
++#ifdef CONFIG_X86_32
++ char guard_entry_stack[PAGE_SIZE];
++#endif
+ struct entry_stack_page entry_stack_page;
+
+ /*
+@@ -94,7 +98,6 @@ struct cpu_entry_area {
+ */
+ struct cea_exception_stacks estacks;
+ #endif
+-#ifdef CONFIG_CPU_SUP_INTEL
+ /*
+ * Per CPU debug store for Intel performance monitoring. Wastes a
+ * full page at the moment.
+@@ -105,11 +108,13 @@ struct cpu_entry_area {
+ * Reserve enough fixmap PTEs.
+ */
+ struct debug_store_buffers cpu_debug_buffers;
+-#endif
+ };
+
+-#define CPU_ENTRY_AREA_SIZE (sizeof(struct cpu_entry_area))
+-#define CPU_ENTRY_AREA_TOT_SIZE (CPU_ENTRY_AREA_SIZE * NR_CPUS)
++#define CPU_ENTRY_AREA_SIZE (sizeof(struct cpu_entry_area))
++#define CPU_ENTRY_AREA_ARRAY_SIZE (CPU_ENTRY_AREA_SIZE * NR_CPUS)
++
++/* Total size includes the readonly IDT mapping page as well: */
++#define CPU_ENTRY_AREA_TOTAL_SIZE (CPU_ENTRY_AREA_ARRAY_SIZE + PAGE_SIZE)
+
+ DECLARE_PER_CPU(struct cpu_entry_area *, cpu_entry_area);
+ DECLARE_PER_CPU(struct cea_exception_stacks *, cea_exception_stacks);
+@@ -117,13 +122,14 @@ DECLARE_PER_CPU(struct cea_exception_stacks *, cea_exception_stacks);
+ extern void setup_cpu_entry_areas(void);
+ extern void cea_set_pte(void *cea_vaddr, phys_addr_t pa, pgprot_t flags);
+
++/* Single page reserved for the readonly IDT mapping: */
+ #define CPU_ENTRY_AREA_RO_IDT CPU_ENTRY_AREA_BASE
+ #define CPU_ENTRY_AREA_PER_CPU (CPU_ENTRY_AREA_RO_IDT + PAGE_SIZE)
+
+ #define CPU_ENTRY_AREA_RO_IDT_VADDR ((void *)CPU_ENTRY_AREA_RO_IDT)
+
+ #define CPU_ENTRY_AREA_MAP_SIZE \
+- (CPU_ENTRY_AREA_PER_CPU + CPU_ENTRY_AREA_TOT_SIZE - CPU_ENTRY_AREA_BASE)
++ (CPU_ENTRY_AREA_PER_CPU + CPU_ENTRY_AREA_ARRAY_SIZE - CPU_ENTRY_AREA_BASE)
+
+ extern struct cpu_entry_area *get_cpu_entry_area(int cpu);
+
+diff --git a/arch/x86/include/asm/pgtable_32_types.h b/arch/x86/include/asm/pgtable_32_types.h
+index b0bc0fff5f1f..1636eb8e5a5b 100644
+--- a/arch/x86/include/asm/pgtable_32_types.h
++++ b/arch/x86/include/asm/pgtable_32_types.h
+@@ -44,11 +44,11 @@ extern bool __vmalloc_start_set; /* set once high_memory is set */
+ * Define this here and validate with BUILD_BUG_ON() in pgtable_32.c
+ * to avoid include recursion hell
+ */
+-#define CPU_ENTRY_AREA_PAGES (NR_CPUS * 40)
++#define CPU_ENTRY_AREA_PAGES (NR_CPUS * 39)
+
+-#define CPU_ENTRY_AREA_BASE \
+- ((FIXADDR_TOT_START - PAGE_SIZE * (CPU_ENTRY_AREA_PAGES + 1)) \
+- & PMD_MASK)
++/* The +1 is for the readonly IDT page: */
++#define CPU_ENTRY_AREA_BASE \
++ ((FIXADDR_TOT_START - PAGE_SIZE*(CPU_ENTRY_AREA_PAGES+1)) & PMD_MASK)
+
+ #define LDT_BASE_ADDR \
+ ((CPU_ENTRY_AREA_BASE - PAGE_SIZE) & PMD_MASK)
+diff --git a/arch/x86/include/asm/segment.h b/arch/x86/include/asm/segment.h
+index ac3892920419..6669164abadc 100644
+--- a/arch/x86/include/asm/segment.h
++++ b/arch/x86/include/asm/segment.h
+@@ -31,6 +31,18 @@
+ */
+ #define SEGMENT_RPL_MASK 0x3
+
++/*
++ * When running on Xen PV, the actual privilege level of the kernel is 1,
++ * not 0. Testing the Requested Privilege Level in a segment selector to
++ * determine whether the context is user mode or kernel mode with
++ * SEGMENT_RPL_MASK is wrong because the PV kernel's privilege level
++ * matches the 0x3 mask.
++ *
++ * Testing with USER_SEGMENT_RPL_MASK is valid for both native and Xen PV
++ * kernels because privilege level 2 is never used.
++ */
++#define USER_SEGMENT_RPL_MASK 0x2
++
+ /* User mode is privilege level 3: */
+ #define USER_RPL 0x3
+
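The USER_SEGMENT_RPL_MASK comment is worth unpacking: the kernel runs at ring 0 natively but at ring 1 under Xen PV, while user mode is ring 3; since ring 2 is never used, testing only bit 1 of the selector's RPL distinguishes user (3 = 0b11) from either kernel value (0b00 or 0b01). Verified in a few lines:

#include <stdio.h>

#define SEGMENT_RPL_MASK      0x3
#define USER_SEGMENT_RPL_MASK 0x2

int main(void)
{
        int rings[] = { 0 /* native kernel */, 1 /* Xen PV kernel */,
                        3 /* user mode */ };

        for (int i = 0; i < 3; i++) {
                int rpl = rings[i] & SEGMENT_RPL_MASK;

                /* The old test wrongly flags the Xen PV kernel as user
                 * mode; the new test is right in all three cases. */
                printf("ring %d: old says %s, new says %s\n", rings[i],
                       (rpl & SEGMENT_RPL_MASK) ? "user" : "kernel",
                       (rpl & USER_SEGMENT_RPL_MASK) ? "user" : "kernel");
        }
        return 0;
}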
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 9b7586204cd2..cc5b535d2448 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -39,6 +39,7 @@ static void __init spectre_v2_select_mitigation(void);
+ static void __init ssb_select_mitigation(void);
+ static void __init l1tf_select_mitigation(void);
+ static void __init mds_select_mitigation(void);
++static void __init mds_print_mitigation(void);
+ static void __init taa_select_mitigation(void);
+
+ /* The base value of the SPEC_CTRL MSR that always has to be preserved. */
+@@ -108,6 +109,12 @@ void __init check_bugs(void)
+ mds_select_mitigation();
+ taa_select_mitigation();
+
++ /*
++ * As MDS and TAA mitigations are inter-related, print MDS
++ * mitigation until after TAA mitigation selection is done.
++ */
++ mds_print_mitigation();
++
+ arch_smt_update();
+
+ #ifdef CONFIG_X86_32
+@@ -245,6 +252,12 @@ static void __init mds_select_mitigation(void)
+ (mds_nosmt || cpu_mitigations_auto_nosmt()))
+ cpu_smt_disable(false);
+ }
++}
++
++static void __init mds_print_mitigation(void)
++{
++ if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
++ return;
+
+ pr_info("%s\n", mds_strings[mds_mitigation]);
+ }
+@@ -304,8 +317,12 @@ static void __init taa_select_mitigation(void)
+ return;
+ }
+
+- /* TAA mitigation is turned off on the cmdline (tsx_async_abort=off) */
+- if (taa_mitigation == TAA_MITIGATION_OFF)
++ /*
++ * TAA mitigation via VERW is turned off if both
++ * tsx_async_abort=off and mds=off are specified.
++ */
++ if (taa_mitigation == TAA_MITIGATION_OFF &&
++ mds_mitigation == MDS_MITIGATION_OFF)
+ goto out;
+
+ if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
+@@ -339,6 +356,15 @@ static void __init taa_select_mitigation(void)
+ if (taa_nosmt || cpu_mitigations_auto_nosmt())
+ cpu_smt_disable(false);
+
++ /*
++ * Update MDS mitigation, if necessary, as the mds_user_clear is
++ * now enabled for TAA mitigation.
++ */
++ if (mds_mitigation == MDS_MITIGATION_OFF &&
++ boot_cpu_has_bug(X86_BUG_MDS)) {
++ mds_mitigation = MDS_MITIGATION_FULL;
++ mds_select_mitigation();
++ }
+ out:
+ pr_info("%s\n", taa_strings[taa_mitigation]);
+ }
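
A minimal standalone sketch (not kernel code) of the ordering this hunk enforces: TAA selection can upgrade the MDS mitigation, so printing the MDS status before taa_select_mitigation() runs could report a stale "off". The enum and stub below are invented for illustration:

#include <stdio.h>

enum mit { MIT_OFF, MIT_FULL };

static enum mit mds_mitigation = MIT_OFF;
static enum mit taa_mitigation = MIT_FULL;
static int cpu_has_mds_bug = 1;

static void taa_select_mitigation_stub(void)
{
	/* TAA's VERW-based mitigation also relies on the MDS buffer
	 * clearing, so an affected CPU gets MDS upgraded to full. */
	if (taa_mitigation == MIT_FULL && mds_mitigation == MIT_OFF &&
	    cpu_has_mds_bug)
		mds_mitigation = MIT_FULL;
}

int main(void)
{
	taa_select_mitigation_stub();
	/* Printing only now reports the final state: full, not off. */
	printf("MDS: %s\n", mds_mitigation == MIT_FULL ? "full" : "off");
	return 0;
}
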
+diff --git a/arch/x86/kernel/doublefault.c b/arch/x86/kernel/doublefault.c
+index 0b8cedb20d6d..d5c9b13bafdf 100644
+--- a/arch/x86/kernel/doublefault.c
++++ b/arch/x86/kernel/doublefault.c
+@@ -65,6 +65,9 @@ struct x86_hw_tss doublefault_tss __cacheline_aligned = {
+ .ss = __KERNEL_DS,
+ .ds = __USER_DS,
+ .fs = __KERNEL_PERCPU,
++#ifndef CONFIG_X86_32_LAZY_GS
++ .gs = __KERNEL_STACK_CANARY,
++#endif
+
+ .__cr3 = __pa_nodebug(swapper_pg_dir),
+ };
+diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
+index 30f9cb2c0b55..2e6a0676c1f4 100644
+--- a/arch/x86/kernel/head_32.S
++++ b/arch/x86/kernel/head_32.S
+@@ -571,6 +571,16 @@ ENTRY(initial_page_table)
+ # error "Kernel PMDs should be 1, 2 or 3"
+ # endif
+ .align PAGE_SIZE /* needs to be page-sized too */
++
++#ifdef CONFIG_PAGE_TABLE_ISOLATION
++ /*
++ * PTI needs another page so sync_initial_page_table() works correctly
++ * and does not scribble over the data which is placed behind the
++ * actual initial_page_table. See clone_pgd_range().
++ */
++ .fill 1024, 4, 0
++#endif
++
+ #endif
+
+ .data
+diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
+index 752ad11d6868..d9643647a9ce 100644
+--- a/arch/x86/mm/cpu_entry_area.c
++++ b/arch/x86/mm/cpu_entry_area.c
+@@ -178,7 +178,9 @@ static __init void setup_cpu_entry_area_ptes(void)
+ #ifdef CONFIG_X86_32
+ unsigned long start, end;
+
+- BUILD_BUG_ON(CPU_ENTRY_AREA_PAGES * PAGE_SIZE < CPU_ENTRY_AREA_MAP_SIZE);
++ /* The +1 is for the readonly IDT page: */
++ BUILD_BUG_ON((CPU_ENTRY_AREA_PAGES+1)*PAGE_SIZE != CPU_ENTRY_AREA_MAP_SIZE);
++ BUILD_BUG_ON(CPU_ENTRY_AREA_TOTAL_SIZE != CPU_ENTRY_AREA_MAP_SIZE);
+ BUG_ON(CPU_ENTRY_AREA_BASE & ~PMD_MASK);
+
+ start = CPU_ENTRY_AREA_BASE;
+diff --git a/arch/x86/tools/gen-insn-attr-x86.awk b/arch/x86/tools/gen-insn-attr-x86.awk
+index b02a36b2c14f..a42015b305f4 100644
+--- a/arch/x86/tools/gen-insn-attr-x86.awk
++++ b/arch/x86/tools/gen-insn-attr-x86.awk
+@@ -69,7 +69,7 @@ BEGIN {
+
+ lprefix1_expr = "\\((66|!F3)\\)"
+ lprefix2_expr = "\\(F3\\)"
+- lprefix3_expr = "\\((F2|!F3|66\\&F2)\\)"
++ lprefix3_expr = "\\((F2|!F3|66&F2)\\)"
+ lprefix_expr = "\\((66|F2|F3)\\)"
+ max_lprefix = 4
+
+@@ -257,7 +257,7 @@ function convert_operands(count,opnd, i,j,imm,mod)
+ return add_flags(imm, mod)
+ }
+
+-/^[0-9a-f]+\:/ {
++/^[0-9a-f]+:/ {
+ if (NR == 1)
+ next
+ # get index
+diff --git a/arch/x86/xen/xen-asm_32.S b/arch/x86/xen/xen-asm_32.S
+index c15db060a242..cd177772fe4d 100644
+--- a/arch/x86/xen/xen-asm_32.S
++++ b/arch/x86/xen/xen-asm_32.S
+@@ -126,10 +126,9 @@ hyper_iret:
+ .globl xen_iret_start_crit, xen_iret_end_crit
+
+ /*
+- * This is called by xen_hypervisor_callback in entry.S when it sees
++ * This is called by xen_hypervisor_callback in entry_32.S when it sees
+ * that the EIP at the time of interrupt was between
+- * xen_iret_start_crit and xen_iret_end_crit. We're passed the EIP in
+- * %eax so we can do a more refined determination of what to do.
++ * xen_iret_start_crit and xen_iret_end_crit.
+ *
+ * The stack format at this point is:
+ * ----------------
+@@ -138,70 +137,46 @@ hyper_iret:
+ * eflags } outer exception info
+ * cs }
+ * eip }
+- * ---------------- <- edi (copy dest)
+- * eax : outer eax if it hasn't been restored
+ * ----------------
+- * eflags } nested exception info
+- * cs } (no ss/esp because we're nested
+- * eip } from the same ring)
+- * orig_eax }<- esi (copy src)
+- * - - - - - - - -
+- * fs }
+- * es }
+- * ds } SAVE_ALL state
+- * eax }
+- * : :
+- * ebx }<- esp
++ * eax : outer eax if it hasn't been restored
+ * ----------------
++ * eflags }
++ * cs } nested exception info
++ * eip }
++ * return address : (into xen_hypervisor_callback)
+ *
+- * In order to deliver the nested exception properly, we need to shift
+- * everything from the return addr up to the error code so it sits
+- * just under the outer exception info. This means that when we
+- * handle the exception, we do it in the context of the outer
+- * exception rather than starting a new one.
++ * In order to deliver the nested exception properly, we need to discard the
++ * nested exception frame such that when we handle the exception, we do it
++ * in the context of the outer exception rather than starting a new one.
+ *
+- * The only caveat is that if the outer eax hasn't been restored yet
+- * (ie, it's still on stack), we need to insert its value into the
+- * SAVE_ALL state before going on, since it's usermode state which we
+- * eventually need to restore.
++ * The only caveat is that if the outer eax hasn't been restored yet (i.e.
++ * it's still on stack), we need to restore its value here.
+ */
+ ENTRY(xen_iret_crit_fixup)
+ /*
+ * Paranoia: Make sure we're really coming from kernel space.
+ * One could imagine a case where userspace jumps into the
+ * critical range address, but just before the CPU delivers a
+- * GP, it decides to deliver an interrupt instead. Unlikely?
+- * Definitely. Easy to avoid? Yes. The Intel documents
+- * explicitly say that the reported EIP for a bad jump is the
+- * jump instruction itself, not the destination, but some
+- * virtual environments get this wrong.
++ * PF, it decides to deliver an interrupt instead. Unlikely?
++ * Definitely. Easy to avoid? Yes.
+ */
+- movl PT_CS(%esp), %ecx
+- andl $SEGMENT_RPL_MASK, %ecx
+- cmpl $USER_RPL, %ecx
+- je 2f
+-
+- lea PT_ORIG_EAX(%esp), %esi
+- lea PT_EFLAGS(%esp), %edi
++ testb $2, 2*4(%esp) /* nested CS */
++ jnz 2f
+
+ /*
+ * If eip is before iret_restore_end then stack
+ * hasn't been restored yet.
+ */
+- cmp $iret_restore_end, %eax
++ cmpl $iret_restore_end, 1*4(%esp)
+ jae 1f
+
+- movl 0+4(%edi), %eax /* copy EAX (just above top of frame) */
+- movl %eax, PT_EAX(%esp)
++ movl 4*4(%esp), %eax /* load outer EAX */
++ ret $4*4 /* discard nested EIP, CS, and EFLAGS as
++ * well as the just restored EAX */
+
+- lea ESP_OFFSET(%edi), %edi /* move dest up over saved regs */
+-
+- /* set up the copy */
+-1: std
+- mov $PT_EIP / 4, %ecx /* saved regs up to orig_eax */
+- rep movsl
+- cld
+-
+- lea 4(%edi), %esp /* point esp to new frame */
+-2: jmp xen_do_upcall
++1:
++ ret $3*4 /* discard nested EIP, CS, and EFLAGS */
+
++2:
++ ret
++END(xen_iret_crit_fixup)
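
To make the n*4(%esp) indexing above easier to follow, here is a standalone C model (not kernel code) of the stack slots the rewritten fixup touches. 'ret $3*4' pops the return address and then discards the three nested-exception dwords; 'ret $4*4' additionally drops the just-reloaded outer EAX:

#include <stddef.h>
#include <stdio.h>

/* 32-bit frame: every slot is one 4-byte dword. */
struct fixup_frame {
	unsigned int ret_addr;  /* 0*4(%esp): back into xen_hypervisor_callback */
	unsigned int eip;       /* 1*4(%esp): nested exception EIP */
	unsigned int cs;        /* 2*4(%esp): nested CS, bit 1 set => user mode */
	unsigned int eflags;    /* 3*4(%esp): nested EFLAGS */
	unsigned int outer_eax; /* 4*4(%esp): outer EAX, if not yet restored */
};

int main(void)
{
	printf("eip at %zu, cs at %zu, outer eax at %zu\n",
	       offsetof(struct fixup_frame, eip),
	       offsetof(struct fixup_frame, cs),
	       offsetof(struct fixup_frame, outer_eax));
	return 0;
}
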
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 5f9d12ce91e5..f4140f077324 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -956,6 +956,7 @@ static struct socket *nbd_get_socket(struct nbd_device *nbd, unsigned long fd,
+ if (sock->ops->shutdown == sock_no_shutdown) {
+ dev_err(disk_to_dev(nbd->disk), "Unsupported socket: shutdown callout must be supported.\n");
+ *err = -EINVAL;
++ sockfd_put(sock);
+ return NULL;
+ }
+
+@@ -994,14 +995,15 @@ static int nbd_add_socket(struct nbd_device *nbd, unsigned long arg,
+ sockfd_put(sock);
+ return -ENOMEM;
+ }
++
++ config->socks = socks;
++
+ nsock = kzalloc(sizeof(struct nbd_sock), GFP_KERNEL);
+ if (!nsock) {
+ sockfd_put(sock);
+ return -ENOMEM;
+ }
+
+- config->socks = socks;
+-
+ nsock->fallback_index = -1;
+ nsock->dead = false;
+ mutex_init(&nsock->tx_lock);
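
The reordering above follows a common krealloc() pattern; here is a standalone sketch (not driver code, with userspace realloc() standing in) of why the grown array must be published before any further early return:

#include <stdlib.h>

struct cfg { int *socks; int n; };

static int add_slot(struct cfg *c)
{
	int *socks = realloc(c->socks, (c->n + 1) * sizeof(*socks));

	if (!socks)
		return -1;

	c->socks = socks;	/* publish at once, as the patch now does */

	/* Any failure past this point leaves c->socks valid. Before
	 * the fix, an early return here left the caller holding a
	 * pointer that realloc() may already have freed. */
	c->socks[c->n++] = 0;
	return 0;
}

int main(void)
{
	struct cfg c = { 0, 0 };

	return add_slot(&c) ? 1 : 0;
}
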
+diff --git a/drivers/bluetooth/hci_bcsp.c b/drivers/bluetooth/hci_bcsp.c
+index fe2e307009f4..cf4a56095817 100644
+--- a/drivers/bluetooth/hci_bcsp.c
++++ b/drivers/bluetooth/hci_bcsp.c
+@@ -591,6 +591,7 @@ static int bcsp_recv(struct hci_uart *hu, const void *data, int count)
+ if (*ptr == 0xc0) {
+ BT_ERR("Short BCSP packet");
+ kfree_skb(bcsp->rx_skb);
++ bcsp->rx_skb = NULL;
+ bcsp->rx_state = BCSP_W4_PKT_START;
+ bcsp->rx_count = 0;
+ } else
+@@ -606,6 +607,7 @@ static int bcsp_recv(struct hci_uart *hu, const void *data, int count)
+ bcsp->rx_skb->data[2])) != bcsp->rx_skb->data[3]) {
+ BT_ERR("Error in BCSP hdr checksum");
+ kfree_skb(bcsp->rx_skb);
++ bcsp->rx_skb = NULL;
+ bcsp->rx_state = BCSP_W4_PKT_DELIMITER;
+ bcsp->rx_count = 0;
+ continue;
+@@ -630,6 +632,7 @@ static int bcsp_recv(struct hci_uart *hu, const void *data, int count)
+ bscp_get_crc(bcsp));
+
+ kfree_skb(bcsp->rx_skb);
++ bcsp->rx_skb = NULL;
+ bcsp->rx_state = BCSP_W4_PKT_DELIMITER;
+ bcsp->rx_count = 0;
+ continue;
+diff --git a/drivers/bluetooth/hci_ll.c b/drivers/bluetooth/hci_ll.c
+index 285706618f8a..d9a4c6c691e0 100644
+--- a/drivers/bluetooth/hci_ll.c
++++ b/drivers/bluetooth/hci_ll.c
+@@ -621,13 +621,6 @@ static int ll_setup(struct hci_uart *hu)
+
+ serdev_device_set_flow_control(serdev, true);
+
+- if (hu->oper_speed)
+- speed = hu->oper_speed;
+- else if (hu->proto->oper_speed)
+- speed = hu->proto->oper_speed;
+- else
+- speed = 0;
+-
+ do {
+ /* Reset the Bluetooth device */
+ gpiod_set_value_cansleep(lldev->enable_gpio, 0);
+@@ -639,20 +632,6 @@ static int ll_setup(struct hci_uart *hu)
+ return err;
+ }
+
+- if (speed) {
+- __le32 speed_le = cpu_to_le32(speed);
+- struct sk_buff *skb;
+-
+- skb = __hci_cmd_sync(hu->hdev,
+- HCI_VS_UPDATE_UART_HCI_BAUDRATE,
+- sizeof(speed_le), &speed_le,
+- HCI_INIT_TIMEOUT);
+- if (!IS_ERR(skb)) {
+- kfree_skb(skb);
+- serdev_device_set_baudrate(serdev, speed);
+- }
+- }
+-
+ err = download_firmware(lldev);
+ if (!err)
+ break;
+@@ -677,7 +656,25 @@ static int ll_setup(struct hci_uart *hu)
+ }
+
+ /* Operational speed if any */
++ if (hu->oper_speed)
++ speed = hu->oper_speed;
++ else if (hu->proto->oper_speed)
++ speed = hu->proto->oper_speed;
++ else
++ speed = 0;
++
++ if (speed) {
++ __le32 speed_le = cpu_to_le32(speed);
++ struct sk_buff *skb;
+
++ skb = __hci_cmd_sync(hu->hdev, HCI_VS_UPDATE_UART_HCI_BAUDRATE,
++ sizeof(speed_le), &speed_le,
++ HCI_INIT_TIMEOUT);
++ if (!IS_ERR(skb)) {
++ kfree_skb(skb);
++ serdev_device_set_baudrate(serdev, speed);
++ }
++ }
+
+ return 0;
+ }
+diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
+index 7270e7b69262..3259426f01dc 100644
+--- a/drivers/char/virtio_console.c
++++ b/drivers/char/virtio_console.c
+@@ -1325,24 +1325,24 @@ static void set_console_size(struct port *port, u16 rows, u16 cols)
+ port->cons.ws.ws_col = cols;
+ }
+
+-static unsigned int fill_queue(struct virtqueue *vq, spinlock_t *lock)
++static int fill_queue(struct virtqueue *vq, spinlock_t *lock)
+ {
+ struct port_buffer *buf;
+- unsigned int nr_added_bufs;
++ int nr_added_bufs;
+ int ret;
+
+ nr_added_bufs = 0;
+ do {
+ buf = alloc_buf(vq->vdev, PAGE_SIZE, 0);
+ if (!buf)
+- break;
++ return -ENOMEM;
+
+ spin_lock_irq(lock);
+ ret = add_inbuf(vq, buf);
+ if (ret < 0) {
+ spin_unlock_irq(lock);
+ free_buf(buf, true);
+- break;
++ return ret;
+ }
+ nr_added_bufs++;
+ spin_unlock_irq(lock);
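
fill_queue() now reports 0 or a negative errno instead of a buffer count, letting callers distinguish allocation failure from a queue that is already populated. A standalone sketch (not driver code; the stub and its arguments are invented) of the resulting caller pattern:

#include <errno.h>
#include <stdio.h>

static int fill_queue_stub(int free_slots, int can_alloc)
{
	if (!can_alloc)
		return -ENOMEM;	/* nothing could be allocated */
	if (!free_slots)
		return -ENOSPC;	/* queue already has buffers */
	return 0;
}

int main(void)
{
	int err = fill_queue_stub(0, 1);

	/* add_port() treats -ENOSPC as benign; anything else is fatal. */
	if (err < 0 && err != -ENOSPC)
		fprintf(stderr, "Error allocating inbufs\n");
	return 0;
}
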
+@@ -1362,7 +1362,6 @@ static int add_port(struct ports_device *portdev, u32 id)
+ char debugfs_name[16];
+ struct port *port;
+ dev_t devt;
+- unsigned int nr_added_bufs;
+ int err;
+
+ port = kmalloc(sizeof(*port), GFP_KERNEL);
+@@ -1421,11 +1420,13 @@ static int add_port(struct ports_device *portdev, u32 id)
+ spin_lock_init(&port->outvq_lock);
+ init_waitqueue_head(&port->waitqueue);
+
+- /* Fill the in_vq with buffers so the host can send us data. */
+- nr_added_bufs = fill_queue(port->in_vq, &port->inbuf_lock);
+- if (!nr_added_bufs) {
++ /* We can safely ignore ENOSPC because it means
++ * the queue already has buffers. Buffers are removed
++ * only by virtcons_remove(), not by unplug_port().
++ */
++ err = fill_queue(port->in_vq, &port->inbuf_lock);
++ if (err < 0 && err != -ENOSPC) {
+ dev_err(port->dev, "Error allocating inbufs\n");
+- err = -ENOMEM;
+ goto free_device;
+ }
+
+@@ -2059,14 +2060,11 @@ static int virtcons_probe(struct virtio_device *vdev)
+ INIT_WORK(&portdev->control_work, &control_work_handler);
+
+ if (multiport) {
+- unsigned int nr_added_bufs;
+-
+ spin_lock_init(&portdev->c_ivq_lock);
+ spin_lock_init(&portdev->c_ovq_lock);
+
+- nr_added_bufs = fill_queue(portdev->c_ivq,
+- &portdev->c_ivq_lock);
+- if (!nr_added_bufs) {
++ err = fill_queue(portdev->c_ivq, &portdev->c_ivq_lock);
++ if (err < 0) {
+ dev_err(&vdev->dev,
+ "Error allocating buffers for control queue\n");
+ /*
+@@ -2077,7 +2075,7 @@ static int virtcons_probe(struct virtio_device *vdev)
+ VIRTIO_CONSOLE_DEVICE_READY, 0);
+ /* Device was functional: we need full cleanup. */
+ virtcons_remove(vdev);
+- return -ENOMEM;
++ return err;
+ }
+ } else {
+ /*
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index f970f87ce86e..9b6a674f83de 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -933,6 +933,9 @@ static ssize_t show(struct kobject *kobj, struct attribute *attr, char *buf)
+ struct freq_attr *fattr = to_attr(attr);
+ ssize_t ret;
+
++ if (!fattr->show)
++ return -EIO;
++
+ down_read(&policy->rwsem);
+ ret = fattr->show(policy, buf);
+ up_read(&policy->rwsem);
+@@ -947,6 +950,9 @@ static ssize_t store(struct kobject *kobj, struct attribute *attr,
+ struct freq_attr *fattr = to_attr(attr);
+ ssize_t ret = -EINVAL;
+
++ if (!fattr->store)
++ return -EIO;
++
+ /*
+ * cpus_read_trylock() is used here to work around a circular lock
+ * dependency problem with respect to the cpufreq_register_driver().
+diff --git a/drivers/gpio/gpio-bd70528.c b/drivers/gpio/gpio-bd70528.c
+index fd85605d2dab..01e122c3a9f1 100644
+--- a/drivers/gpio/gpio-bd70528.c
++++ b/drivers/gpio/gpio-bd70528.c
+@@ -25,13 +25,13 @@ static int bd70528_set_debounce(struct bd70528_gpio *bdgpio,
+ case 0:
+ val = BD70528_DEBOUNCE_DISABLE;
+ break;
+- case 1 ... 15:
++ case 1 ... 15000:
+ val = BD70528_DEBOUNCE_15MS;
+ break;
+- case 16 ... 30:
++ case 15001 ... 30000:
+ val = BD70528_DEBOUNCE_30MS;
+ break;
+- case 31 ... 50:
++ case 30001 ... 50000:
+ val = BD70528_DEBOUNCE_50MS;
+ break;
+ default:
+diff --git a/drivers/gpio/gpio-max77620.c b/drivers/gpio/gpio-max77620.c
+index 06e8caaafa81..4ead063bfe38 100644
+--- a/drivers/gpio/gpio-max77620.c
++++ b/drivers/gpio/gpio-max77620.c
+@@ -192,13 +192,13 @@ static int max77620_gpio_set_debounce(struct max77620_gpio *mgpio,
+ case 0:
+ val = MAX77620_CNFG_GPIO_DBNC_None;
+ break;
+- case 1000 ... 8000:
++ case 1 ... 8000:
+ val = MAX77620_CNFG_GPIO_DBNC_8ms;
+ break;
+- case 9000 ... 16000:
++ case 8001 ... 16000:
+ val = MAX77620_CNFG_GPIO_DBNC_16ms;
+ break;
+- case 17000 ... 32000:
++ case 16001 ... 32000:
+ val = MAX77620_CNFG_GPIO_DBNC_32ms;
+ break;
+ default:
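
Both GPIO hunks fix the same unit mismatch: gpiolib passes set_debounce times in microseconds, while the old case ranges compared millisecond values. A standalone sketch (not driver code; the enum names are invented) of the corrected bucketing:

#include <stdio.h>

enum dbnc { DBNC_NONE, DBNC_8MS, DBNC_16MS, DBNC_32MS, DBNC_BAD };

static enum dbnc pick_debounce(unsigned int usec)
{
	switch (usec) {
	case 0:
		return DBNC_NONE;
	case 1 ... 8000:	/* GCC range-case, as in the driver */
		return DBNC_8MS;
	case 8001 ... 16000:
		return DBNC_16MS;
	case 16001 ... 32000:
		return DBNC_32MS;
	default:
		return DBNC_BAD;	/* out of range */
	}
}

int main(void)
{
	printf("5000 us  -> bucket %d (8 ms)\n", pick_debounce(5000));
	printf("16000 us -> bucket %d (16 ms)\n", pick_debounce(16000));
	return 0;
}
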
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+index 56b4c241a14b..65f6619f0c0c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+@@ -635,15 +635,19 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
+ return -ENOMEM;
+ alloc_size = info->read_mmr_reg.count * sizeof(*regs);
+
+- for (i = 0; i < info->read_mmr_reg.count; i++)
++ amdgpu_gfx_off_ctrl(adev, false);
++ for (i = 0; i < info->read_mmr_reg.count; i++) {
+ if (amdgpu_asic_read_register(adev, se_num, sh_num,
+ info->read_mmr_reg.dword_offset + i,
+ ®s[i])) {
+ DRM_DEBUG_KMS("unallowed offset %#x\n",
+ info->read_mmr_reg.dword_offset + i);
+ kfree(regs);
++ amdgpu_gfx_off_ctrl(adev, true);
+ return -EFAULT;
+ }
++ }
++ amdgpu_gfx_off_ctrl(adev, true);
+ n = copy_to_user(out, regs, min(size, alloc_size));
+ kfree(regs);
+ return n ? -EFAULT : 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index c066e1d3f981..75faa56f243a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -596,8 +596,13 @@ static void gfx_v9_0_check_if_need_gfxoff(struct amdgpu_device *adev)
+ case CHIP_VEGA20:
+ break;
+ case CHIP_RAVEN:
+- if (!(adev->rev_id >= 0x8 || adev->pdev->device == 0x15d8)
+- &&((adev->gfx.rlc_fw_version != 106 &&
++ /* Disable GFXOFF on original Raven. There are combinations
++ * of SBIOS and platforms that are not stable.
++ */
++ if (!(adev->rev_id >= 0x8 || adev->pdev->device == 0x15d8))
++ adev->pm.pp_feature &= ~PP_GFXOFF_MASK;
++ else if (!(adev->rev_id >= 0x8 || adev->pdev->device == 0x15d8)
++ &&((adev->gfx.rlc_fw_version != 106 &&
+ adev->gfx.rlc_fw_version < 531) ||
+ (adev->gfx.rlc_fw_version == 53815) ||
+ (adev->gfx.rlc_feature_version < 1) ||
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+index 3c1084de5d59..ec62747b4bbb 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+@@ -3477,18 +3477,31 @@ static int smu7_get_pp_table_entry(struct pp_hwmgr *hwmgr,
+
+ static int smu7_get_gpu_power(struct pp_hwmgr *hwmgr, u32 *query)
+ {
++ struct amdgpu_device *adev = hwmgr->adev;
+ int i;
+ u32 tmp = 0;
+
+ if (!query)
+ return -EINVAL;
+
+- smum_send_msg_to_smc_with_parameter(hwmgr, PPSMC_MSG_GetCurrPkgPwr, 0);
+- tmp = cgs_read_register(hwmgr->device, mmSMC_MSG_ARG_0);
+- *query = tmp;
++ /*
++ * PPSMC_MSG_GetCurrPkgPwr is not supported on:
++ * - Hawaii
++ * - Bonaire
++ * - Fiji
++ * - Tonga
++ */
++ if ((adev->asic_type != CHIP_HAWAII) &&
++ (adev->asic_type != CHIP_BONAIRE) &&
++ (adev->asic_type != CHIP_FIJI) &&
++ (adev->asic_type != CHIP_TONGA)) {
++ smum_send_msg_to_smc_with_parameter(hwmgr, PPSMC_MSG_GetCurrPkgPwr, 0);
++ tmp = cgs_read_register(hwmgr->device, mmSMC_MSG_ARG_0);
++ *query = tmp;
+
+- if (tmp != 0)
+- return 0;
++ if (tmp != 0)
++ return 0;
++ }
+
+ smum_send_msg_to_smc(hwmgr, PPSMC_MSG_PmStatusLogStart);
+ cgs_write_ind_register(hwmgr->device, CGS_IND_REG__SMC,
+diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
+index dae45b6a35b7..5c8c11deb857 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.c
++++ b/drivers/gpu/drm/i915/display/intel_display.c
+@@ -2519,6 +2519,9 @@ u32 intel_plane_fb_max_stride(struct drm_i915_private *dev_priv,
+ * the highest stride limits of them all.
+ */
+ crtc = intel_get_crtc_for_pipe(dev_priv, PIPE_A);
++ if (!crtc)
++ return 0;
++
+ plane = to_intel_plane(crtc->base.primary);
+
+ return plane->max_stride(plane, pixel_format, modifier,
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+index cd30e83c3205..33046a3aef06 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+@@ -663,8 +663,28 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
+ i915_gem_gtt_finish_pages(obj, pages);
+
+ for_each_sgt_page(page, sgt_iter, pages) {
+- if (obj->mm.dirty)
++ if (obj->mm.dirty && trylock_page(page)) {
++ /*
++ * As this may not be anonymous memory (e.g. shmem)
++ * but exist on a real mapping, we have to lock
++ * the page in order to dirty it -- holding
++ * the page reference is not sufficient to
++ * prevent the inode from being truncated.
++ * Play safe and take the lock.
++ *
++ * However...!
++ *
++ * The mmu-notifier can be invalidated for a
++ * migrate_page that is already holding the lock
++ * on the page. Such a try_to_unmap() will result
++ * in us calling put_pages() and so recursively try
++ * to lock the page. We avoid that deadlock with
++ * a trylock_page() and in exchange we risk missing
++ * some page dirtying.
++ */
+ set_page_dirty(page);
++ unlock_page(page);
++ }
+
+ mark_page_accessed(page);
+ put_page(page);
+diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
+index 8fe46ee920a0..c599d9db01ac 100644
+--- a/drivers/gpu/drm/i915/i915_pmu.c
++++ b/drivers/gpu/drm/i915/i915_pmu.c
+@@ -833,8 +833,8 @@ create_event_attributes(struct drm_i915_private *i915)
+ const char *name;
+ const char *unit;
+ } events[] = {
+- __event(I915_PMU_ACTUAL_FREQUENCY, "actual-frequency", "MHz"),
+- __event(I915_PMU_REQUESTED_FREQUENCY, "requested-frequency", "MHz"),
++ __event(I915_PMU_ACTUAL_FREQUENCY, "actual-frequency", "M"),
++ __event(I915_PMU_REQUESTED_FREQUENCY, "requested-frequency", "M"),
+ __event(I915_PMU_INTERRUPTS, "interrupts", NULL),
+ __event(I915_PMU_RC6_RESIDENCY, "rc6-residency", "ns"),
+ };
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index d5216bcc4649..e8446c3cad11 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -2911,21 +2911,18 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ }
+
+ ret = -ENOMEM;
+- cc->io_queue = alloc_workqueue("kcryptd_io/%s",
+- WQ_HIGHPRI | WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM,
+- 1, devname);
++ cc->io_queue = alloc_workqueue("kcryptd_io/%s", WQ_MEM_RECLAIM, 1, devname);
+ if (!cc->io_queue) {
+ ti->error = "Couldn't create kcryptd io queue";
+ goto bad;
+ }
+
+ if (test_bit(DM_CRYPT_SAME_CPU, &cc->flags))
+- cc->crypt_queue = alloc_workqueue("kcryptd/%s",
+- WQ_HIGHPRI | WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM,
++ cc->crypt_queue = alloc_workqueue("kcryptd/%s", WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM,
+ 1, devname);
+ else
+ cc->crypt_queue = alloc_workqueue("kcryptd/%s",
+- WQ_HIGHPRI | WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_UNBOUND,
++ WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_UNBOUND,
+ num_online_cpus(), devname);
+ if (!cc->crypt_queue) {
+ ti->error = "Couldn't create kcryptd queue";
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 8a1354a08a1a..c0c653e35fbb 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -191,7 +191,7 @@ static void * r10buf_pool_alloc(gfp_t gfp_flags, void *data)
+
+ out_free_pages:
+ while (--j >= 0)
+- resync_free_pages(&rps[j * 2]);
++ resync_free_pages(&rps[j]);
+
+ j = 0;
+ out_free_bio:
+diff --git a/drivers/media/platform/vivid/vivid-kthread-cap.c b/drivers/media/platform/vivid/vivid-kthread-cap.c
+index 003319d7816d..31f78d6a05a4 100644
+--- a/drivers/media/platform/vivid/vivid-kthread-cap.c
++++ b/drivers/media/platform/vivid/vivid-kthread-cap.c
+@@ -796,7 +796,11 @@ static int vivid_thread_vid_cap(void *data)
+ if (kthread_should_stop())
+ break;
+
+- mutex_lock(&dev->mutex);
++ if (!mutex_trylock(&dev->mutex)) {
++ schedule_timeout_uninterruptible(1);
++ continue;
++ }
++
+ cur_jiffies = jiffies;
+ if (dev->cap_seq_resync) {
+ dev->jiffies_vid_cap = cur_jiffies;
+@@ -956,8 +960,6 @@ void vivid_stop_generating_vid_cap(struct vivid_dev *dev, bool *pstreaming)
+
+ /* shutdown control thread */
+ vivid_grab_controls(dev, false);
+- mutex_unlock(&dev->mutex);
+ kthread_stop(dev->kthread_vid_cap);
+ dev->kthread_vid_cap = NULL;
+- mutex_lock(&dev->mutex);
+ }
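
The trylock loop above removes a deadlock: the stop path can now call kthread_stop() while still holding dev->mutex. A standalone pthread sketch (not driver code) of the same pattern, with sched_yield() standing in for schedule_timeout_uninterruptible(1):

#include <pthread.h>
#include <sched.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static volatile int should_stop;

static void *worker(void *arg)
{
	(void)arg;
	while (!should_stop) {
		if (pthread_mutex_trylock(&lock)) {
			sched_yield();	/* like schedule_timeout(1) */
			continue;
		}
		/* ... produce one frame ... */
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, worker, NULL);
	pthread_mutex_lock(&lock);	/* stopper holds the lock ... */
	should_stop = 1;		/* ... yet can still stop the worker */
	pthread_join(t, NULL);
	pthread_mutex_unlock(&lock);
	return 0;
}
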
+diff --git a/drivers/media/platform/vivid/vivid-kthread-out.c b/drivers/media/platform/vivid/vivid-kthread-out.c
+index ce5bcda2348c..1e165a6a2207 100644
+--- a/drivers/media/platform/vivid/vivid-kthread-out.c
++++ b/drivers/media/platform/vivid/vivid-kthread-out.c
+@@ -143,7 +143,11 @@ static int vivid_thread_vid_out(void *data)
+ if (kthread_should_stop())
+ break;
+
+- mutex_lock(&dev->mutex);
++ if (!mutex_trylock(&dev->mutex)) {
++ schedule_timeout_uninterruptible(1);
++ continue;
++ }
++
+ cur_jiffies = jiffies;
+ if (dev->out_seq_resync) {
+ dev->jiffies_vid_out = cur_jiffies;
+@@ -301,8 +305,6 @@ void vivid_stop_generating_vid_out(struct vivid_dev *dev, bool *pstreaming)
+
+ /* shutdown control thread */
+ vivid_grab_controls(dev, false);
+- mutex_unlock(&dev->mutex);
+ kthread_stop(dev->kthread_vid_out);
+ dev->kthread_vid_out = NULL;
+- mutex_lock(&dev->mutex);
+ }
+diff --git a/drivers/media/platform/vivid/vivid-sdr-cap.c b/drivers/media/platform/vivid/vivid-sdr-cap.c
+index 9acc709b0740..2b7522e16efc 100644
+--- a/drivers/media/platform/vivid/vivid-sdr-cap.c
++++ b/drivers/media/platform/vivid/vivid-sdr-cap.c
+@@ -141,7 +141,11 @@ static int vivid_thread_sdr_cap(void *data)
+ if (kthread_should_stop())
+ break;
+
+- mutex_lock(&dev->mutex);
++ if (!mutex_trylock(&dev->mutex)) {
++ schedule_timeout_uninterruptible(1);
++ continue;
++ }
++
+ cur_jiffies = jiffies;
+ if (dev->sdr_cap_seq_resync) {
+ dev->jiffies_sdr_cap = cur_jiffies;
+@@ -303,10 +307,8 @@ static void sdr_cap_stop_streaming(struct vb2_queue *vq)
+ }
+
+ /* shutdown control thread */
+- mutex_unlock(&dev->mutex);
+ kthread_stop(dev->kthread_sdr_cap);
+ dev->kthread_sdr_cap = NULL;
+- mutex_lock(&dev->mutex);
+ }
+
+ static void sdr_cap_buf_request_complete(struct vb2_buffer *vb)
+diff --git a/drivers/media/platform/vivid/vivid-vid-cap.c b/drivers/media/platform/vivid/vivid-vid-cap.c
+index 8cbaa0c998ed..2d030732feac 100644
+--- a/drivers/media/platform/vivid/vivid-vid-cap.c
++++ b/drivers/media/platform/vivid/vivid-vid-cap.c
+@@ -223,9 +223,6 @@ static int vid_cap_start_streaming(struct vb2_queue *vq, unsigned count)
+ if (vb2_is_streaming(&dev->vb_vid_out_q))
+ dev->can_loop_video = vivid_vid_can_loop(dev);
+
+- if (dev->kthread_vid_cap)
+- return 0;
+-
+ dev->vid_cap_seq_count = 0;
+ dprintk(dev, 1, "%s\n", __func__);
+ for (i = 0; i < VIDEO_MAX_FRAME; i++)
+diff --git a/drivers/media/platform/vivid/vivid-vid-out.c b/drivers/media/platform/vivid/vivid-vid-out.c
+index 148b663a6075..a0364ac497f9 100644
+--- a/drivers/media/platform/vivid/vivid-vid-out.c
++++ b/drivers/media/platform/vivid/vivid-vid-out.c
+@@ -161,9 +161,6 @@ static int vid_out_start_streaming(struct vb2_queue *vq, unsigned count)
+ if (vb2_is_streaming(&dev->vb_vid_cap_q))
+ dev->can_loop_video = vivid_vid_can_loop(dev);
+
+- if (dev->kthread_vid_out)
+- return 0;
+-
+ dev->vid_out_seq_count = 0;
+ dprintk(dev, 1, "%s\n", __func__);
+ if (dev->start_streaming_error) {
+diff --git a/drivers/media/rc/imon.c b/drivers/media/rc/imon.c
+index 37a850421fbb..c683a244b9fa 100644
+--- a/drivers/media/rc/imon.c
++++ b/drivers/media/rc/imon.c
+@@ -1598,8 +1598,7 @@ static void imon_incoming_packet(struct imon_context *ictx,
+ spin_unlock_irqrestore(&ictx->kc_lock, flags);
+
+ /* send touchscreen events through input subsystem if touchpad data */
+- if (ictx->display_type == IMON_DISPLAY_TYPE_VGA && len == 8 &&
+- buf[7] == 0x86) {
++ if (ictx->touch && len == 8 && buf[7] == 0x86) {
+ imon_touch_event(ictx, buf);
+ return;
+
+diff --git a/drivers/media/rc/mceusb.c b/drivers/media/rc/mceusb.c
+index 9929fcdec74d..b59a4a6d4d34 100644
+--- a/drivers/media/rc/mceusb.c
++++ b/drivers/media/rc/mceusb.c
+@@ -562,7 +562,7 @@ static int mceusb_cmd_datasize(u8 cmd, u8 subcmd)
+ datasize = 4;
+ break;
+ case MCE_CMD_G_REVISION:
+- datasize = 2;
++ datasize = 4;
+ break;
+ case MCE_RSP_EQWAKESUPPORT:
+ case MCE_RSP_GETWAKESOURCE:
+@@ -598,14 +598,9 @@ static void mceusb_dev_printdata(struct mceusb_dev *ir, u8 *buf, int buf_len,
+ char *inout;
+ u8 cmd, subcmd, *data;
+ struct device *dev = ir->dev;
+- int start, skip = 0;
+ u32 carrier, period;
+
+- /* skip meaningless 0xb1 0x60 header bytes on orig receiver */
+- if (ir->flags.microsoft_gen1 && !out && !offset)
+- skip = 2;
+-
+- if (len <= skip)
++ if (offset < 0 || offset >= buf_len)
+ return;
+
+ dev_dbg(dev, "%cx data[%d]: %*ph (len=%d sz=%d)",
+@@ -614,11 +609,32 @@ static void mceusb_dev_printdata(struct mceusb_dev *ir, u8 *buf, int buf_len,
+
+ inout = out ? "Request" : "Got";
+
+- start = offset + skip;
+- cmd = buf[start] & 0xff;
+- subcmd = buf[start + 1] & 0xff;
+- data = buf + start + 2;
++ cmd = buf[offset];
++ subcmd = (offset + 1 < buf_len) ? buf[offset + 1] : 0;
++ data = &buf[offset] + 2;
++
++ /* Trace meaningless 0xb1 0x60 header bytes on original receiver */
++ if (ir->flags.microsoft_gen1 && !out && !offset) {
++ dev_dbg(dev, "MCE gen 1 header");
++ return;
++ }
++
++ /* Trace IR data header or trailer */
++ if (cmd != MCE_CMD_PORT_IR &&
++ (cmd & MCE_PORT_MASK) == MCE_COMMAND_IRDATA) {
++ if (cmd == MCE_IRDATA_TRAILER)
++ dev_dbg(dev, "End of raw IR data");
++ else
++ dev_dbg(dev, "Raw IR data, %d pulse/space samples",
++ cmd & MCE_PACKET_LENGTH_MASK);
++ return;
++ }
++
++ /* Unexpected end of buffer? */
++ if (offset + len > buf_len)
++ return;
+
++ /* Decode MCE command/response */
+ switch (cmd) {
+ case MCE_CMD_NULL:
+ if (subcmd == MCE_CMD_NULL)
+@@ -642,7 +658,7 @@ static void mceusb_dev_printdata(struct mceusb_dev *ir, u8 *buf, int buf_len,
+ dev_dbg(dev, "Get hw/sw rev?");
+ else
+ dev_dbg(dev, "hw/sw rev %*ph",
+- 4, &buf[start + 2]);
++ 4, &buf[offset + 2]);
+ break;
+ case MCE_CMD_RESUME:
+ dev_dbg(dev, "Device resume requested");
+@@ -744,13 +760,6 @@ static void mceusb_dev_printdata(struct mceusb_dev *ir, u8 *buf, int buf_len,
+ default:
+ break;
+ }
+-
+- if (cmd == MCE_IRDATA_TRAILER)
+- dev_dbg(dev, "End of raw IR data");
+- else if ((cmd != MCE_CMD_PORT_IR) &&
+- ((cmd & MCE_PORT_MASK) == MCE_COMMAND_IRDATA))
+- dev_dbg(dev, "Raw IR data, %d pulse/space samples",
+- cmd & MCE_PACKET_LENGTH_MASK);
+ #endif
+ }
+
+@@ -1127,32 +1136,62 @@ static int mceusb_set_rx_carrier_report(struct rc_dev *dev, int enable)
+ }
+
+ /*
++ * Handle PORT_SYS/IR command response received from the MCE device.
++ *
++ * Assumes a single response with all its data (not truncated)
++ * in buf_in[]. The response itself determines its total length
++ * (mceusb_cmd_datasize() + 2) and hence the minimum size of buf_in[].
++ *
+ * We don't do anything but print debug spew for many of the command bits
+ * we receive from the hardware, but some of them are useful information
+ * we want to store so that we can use them.
+ */
+-static void mceusb_handle_command(struct mceusb_dev *ir, int index)
++static void mceusb_handle_command(struct mceusb_dev *ir, u8 *buf_in)
+ {
++ u8 cmd = buf_in[0];
++ u8 subcmd = buf_in[1];
++ u8 *hi = &buf_in[2]; /* read only when required */
++ u8 *lo = &buf_in[3]; /* read only when required */
+ struct ir_raw_event rawir = {};
+- u8 hi = ir->buf_in[index + 1] & 0xff;
+- u8 lo = ir->buf_in[index + 2] & 0xff;
+ u32 carrier_cycles;
+ u32 cycles_fix;
+
+- switch (ir->buf_in[index]) {
+- /* the one and only 5-byte return value command */
+- case MCE_RSP_GETPORTSTATUS:
+- if ((ir->buf_in[index + 4] & 0xff) == 0x00)
+- ir->txports_cabled |= 1 << hi;
+- break;
++ if (cmd == MCE_CMD_PORT_SYS) {
++ switch (subcmd) {
++ /* the one and only 5-byte return value command */
++ case MCE_RSP_GETPORTSTATUS:
++ if (buf_in[5] == 0)
++ ir->txports_cabled |= 1 << *hi;
++ break;
++
++ /* 1-byte return value commands */
++ case MCE_RSP_EQEMVER:
++ ir->emver = *hi;
++ break;
++
++ /* No return value commands */
++ case MCE_RSP_CMD_ILLEGAL:
++ ir->need_reset = true;
++ break;
++
++ default:
++ break;
++ }
++
++ return;
++ }
+
++ if (cmd != MCE_CMD_PORT_IR)
++ return;
++
++ switch (subcmd) {
+ /* 2-byte return value commands */
+ case MCE_RSP_EQIRTIMEOUT:
+- ir->rc->timeout = US_TO_NS((hi << 8 | lo) * MCE_TIME_UNIT);
++ ir->rc->timeout = US_TO_NS((*hi << 8 | *lo) * MCE_TIME_UNIT);
+ break;
+ case MCE_RSP_EQIRNUMPORTS:
+- ir->num_txports = hi;
+- ir->num_rxports = lo;
++ ir->num_txports = *hi;
++ ir->num_rxports = *lo;
+ break;
+ case MCE_RSP_EQIRRXCFCNT:
+ /*
+@@ -1165,7 +1204,7 @@ static void mceusb_handle_command(struct mceusb_dev *ir, int index)
+ */
+ if (ir->carrier_report_enabled && ir->learning_active &&
+ ir->pulse_tunit > 0) {
+- carrier_cycles = (hi << 8 | lo);
++ carrier_cycles = (*hi << 8 | *lo);
+ /*
+ * Adjust carrier cycle count by adding
+ * 1 missed count per pulse "on"
+@@ -1183,24 +1222,24 @@ static void mceusb_handle_command(struct mceusb_dev *ir, int index)
+ break;
+
+ /* 1-byte return value commands */
+- case MCE_RSP_EQEMVER:
+- ir->emver = hi;
+- break;
+ case MCE_RSP_EQIRTXPORTS:
+- ir->tx_mask = hi;
++ ir->tx_mask = *hi;
+ break;
+ case MCE_RSP_EQIRRXPORTEN:
+- ir->learning_active = ((hi & 0x02) == 0x02);
+- if (ir->rxports_active != hi) {
++ ir->learning_active = ((*hi & 0x02) == 0x02);
++ if (ir->rxports_active != *hi) {
+ dev_info(ir->dev, "%s-range (0x%x) receiver active",
+- ir->learning_active ? "short" : "long", hi);
+- ir->rxports_active = hi;
++ ir->learning_active ? "short" : "long", *hi);
++ ir->rxports_active = *hi;
+ }
+ break;
++
++ /* No return value commands */
+ case MCE_RSP_CMD_ILLEGAL:
+ case MCE_RSP_TX_TIMEOUT:
+ ir->need_reset = true;
+ break;
++
+ default:
+ break;
+ }
+@@ -1226,7 +1265,8 @@ static void mceusb_process_ir_data(struct mceusb_dev *ir, int buf_len)
+ ir->rem = mceusb_cmd_datasize(ir->cmd, ir->buf_in[i]);
+ mceusb_dev_printdata(ir, ir->buf_in, buf_len, i - 1,
+ ir->rem + 2, false);
+- mceusb_handle_command(ir, i);
++ if (i + ir->rem < buf_len)
++ mceusb_handle_command(ir, &ir->buf_in[i - 1]);
+ ir->parser_state = CMD_DATA;
+ break;
+ case PARSE_IRDATA:
+@@ -1255,15 +1295,22 @@ static void mceusb_process_ir_data(struct mceusb_dev *ir, int buf_len)
+ ir->rem--;
+ break;
+ case CMD_HEADER:
+- /* decode mce packets of the form (84),AA,BB,CC,DD */
+- /* IR data packets can span USB messages - rem */
+ ir->cmd = ir->buf_in[i];
+ if ((ir->cmd == MCE_CMD_PORT_IR) ||
+ ((ir->cmd & MCE_PORT_MASK) !=
+ MCE_COMMAND_IRDATA)) {
++ /*
++ * got PORT_SYS, PORT_IR, or unknown
++ * command response prefix
++ */
+ ir->parser_state = SUBCMD;
+ continue;
+ }
++ /*
++ * got IR data prefix (0x80 + num_bytes)
++ * decode MCE packets of the form {0x83, AA, BB, CC}
++ * IR data packets can span USB messages
++ */
+ ir->rem = (ir->cmd & MCE_PACKET_LENGTH_MASK);
+ mceusb_dev_printdata(ir, ir->buf_in, buf_len,
+ i, ir->rem + 1, false);
+@@ -1287,6 +1334,14 @@ static void mceusb_process_ir_data(struct mceusb_dev *ir, int buf_len)
+ if (ir->parser_state != CMD_HEADER && !ir->rem)
+ ir->parser_state = CMD_HEADER;
+ }
++
++ /*
++ * Accept IR data spanning multiple rx buffers.
++ * Reject MCE command response spanning multiple rx buffers.
++ */
++ if (ir->parser_state != PARSE_IRDATA || !ir->rem)
++ ir->parser_state = CMD_HEADER;
++
+ if (event) {
+ dev_dbg(ir->dev, "processed IR data");
+ ir_raw_event_handle(ir->rc);
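
For orientation, a standalone sketch (not driver code) of the raw-IR framing those parser comments describe. MCE_COMMAND_IRDATA and MCE_PACKET_LENGTH_MASK mirror the names used in the driver; the 0xe0 value for MCE_PORT_MASK is an assumption made for this example:

#include <stdio.h>

#define MCE_COMMAND_IRDATA	0x80
#define MCE_PORT_MASK		0xe0	/* assumed value */
#define MCE_PACKET_LENGTH_MASK	0x1f

int main(void)
{
	unsigned char cmd = 0x83;	/* 0x80 + 3 data bytes */

	if ((cmd & MCE_PORT_MASK) == MCE_COMMAND_IRDATA)
		printf("IR data, %d pulse/space samples follow\n",
		       cmd & MCE_PACKET_LENGTH_MASK);
	return 0;
}
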
+diff --git a/drivers/media/usb/b2c2/flexcop-usb.c b/drivers/media/usb/b2c2/flexcop-usb.c
+index 1826ff825c2e..1a801dc286f8 100644
+--- a/drivers/media/usb/b2c2/flexcop-usb.c
++++ b/drivers/media/usb/b2c2/flexcop-usb.c
+@@ -538,6 +538,9 @@ static int flexcop_usb_probe(struct usb_interface *intf,
+ struct flexcop_device *fc = NULL;
+ int ret;
+
++ if (intf->cur_altsetting->desc.bNumEndpoints < 1)
++ return -ENODEV;
++
+ if ((fc = flexcop_device_kmalloc(sizeof(struct flexcop_usb))) == NULL) {
+ err("out of memory\n");
+ return -ENOMEM;
+diff --git a/drivers/media/usb/dvb-usb/cxusb.c b/drivers/media/usb/dvb-usb/cxusb.c
+index bac0778f7def..e3d58f5247ae 100644
+--- a/drivers/media/usb/dvb-usb/cxusb.c
++++ b/drivers/media/usb/dvb-usb/cxusb.c
+@@ -542,7 +542,8 @@ static int cxusb_rc_query(struct dvb_usb_device *d)
+ {
+ u8 ircode[4];
+
+- cxusb_ctrl_msg(d, CMD_GET_IR_CODE, NULL, 0, ircode, 4);
++ if (cxusb_ctrl_msg(d, CMD_GET_IR_CODE, NULL, 0, ircode, 4) < 0)
++ return 0;
+
+ if (ircode[2] || ircode[3])
+ rc_keydown(d->rc_dev, RC_PROTO_NEC,
+diff --git a/drivers/media/usb/usbvision/usbvision-video.c b/drivers/media/usb/usbvision/usbvision-video.c
+index 93750af82d98..044d18e9b7ec 100644
+--- a/drivers/media/usb/usbvision/usbvision-video.c
++++ b/drivers/media/usb/usbvision/usbvision-video.c
+@@ -314,6 +314,10 @@ static int usbvision_v4l2_open(struct file *file)
+ if (mutex_lock_interruptible(&usbvision->v4l2_lock))
+ return -ERESTARTSYS;
+
++ if (usbvision->remove_pending) {
++ err_code = -ENODEV;
++ goto unlock;
++ }
+ if (usbvision->user) {
+ err_code = -EBUSY;
+ } else {
+@@ -377,6 +381,7 @@ unlock:
+ static int usbvision_v4l2_close(struct file *file)
+ {
+ struct usb_usbvision *usbvision = video_drvdata(file);
++ int r;
+
+ PDEBUG(DBG_IO, "close");
+
+@@ -391,9 +396,10 @@ static int usbvision_v4l2_close(struct file *file)
+ usbvision_scratch_free(usbvision);
+
+ usbvision->user--;
++ r = usbvision->remove_pending;
+ mutex_unlock(&usbvision->v4l2_lock);
+
+- if (usbvision->remove_pending) {
++ if (r) {
+ printk(KERN_INFO "%s: Final disconnect\n", __func__);
+ usbvision_release(usbvision);
+ return 0;
+@@ -453,6 +459,9 @@ static int vidioc_querycap(struct file *file, void *priv,
+ {
+ struct usb_usbvision *usbvision = video_drvdata(file);
+
++ if (!usbvision->dev)
++ return -ENODEV;
++
+ strscpy(vc->driver, "USBVision", sizeof(vc->driver));
+ strscpy(vc->card,
+ usbvision_device_data[usbvision->dev_model].model_string,
+@@ -1073,6 +1082,11 @@ static int usbvision_radio_open(struct file *file)
+
+ if (mutex_lock_interruptible(&usbvision->v4l2_lock))
+ return -ERESTARTSYS;
++
++ if (usbvision->remove_pending) {
++ err_code = -ENODEV;
++ goto out;
++ }
+ err_code = v4l2_fh_open(file);
+ if (err_code)
+ goto out;
+@@ -1105,21 +1119,24 @@ out:
+ static int usbvision_radio_close(struct file *file)
+ {
+ struct usb_usbvision *usbvision = video_drvdata(file);
++ int r;
+
+ PDEBUG(DBG_IO, "");
+
+ mutex_lock(&usbvision->v4l2_lock);
+ /* Set packet size to 0 */
+ usbvision->iface_alt = 0;
+- usb_set_interface(usbvision->dev, usbvision->iface,
+- usbvision->iface_alt);
++ if (usbvision->dev)
++ usb_set_interface(usbvision->dev, usbvision->iface,
++ usbvision->iface_alt);
+
+ usbvision_audio_off(usbvision);
+ usbvision->radio = 0;
+ usbvision->user--;
++ r = usbvision->remove_pending;
+ mutex_unlock(&usbvision->v4l2_lock);
+
+- if (usbvision->remove_pending) {
++ if (r) {
+ printk(KERN_INFO "%s: Final disconnect\n", __func__);
+ v4l2_fh_release(file);
+ usbvision_release(usbvision);
+@@ -1551,6 +1568,7 @@ err_usb:
+ static void usbvision_disconnect(struct usb_interface *intf)
+ {
+ struct usb_usbvision *usbvision = to_usbvision(usb_get_intfdata(intf));
++ int u;
+
+ PDEBUG(DBG_PROBE, "");
+
+@@ -1567,13 +1585,14 @@ static void usbvision_disconnect(struct usb_interface *intf)
+ v4l2_device_disconnect(&usbvision->v4l2_dev);
+ usbvision_i2c_unregister(usbvision);
+ usbvision->remove_pending = 1; /* Now all ISO data will be ignored */
++ u = usbvision->user;
+
+ usb_put_dev(usbvision->dev);
+ usbvision->dev = NULL; /* USB device is no more */
+
+ mutex_unlock(&usbvision->v4l2_lock);
+
+- if (usbvision->user) {
++ if (u) {
+ printk(KERN_INFO "%s: In use, disconnect pending\n",
+ __func__);
+ wake_up_interruptible(&usbvision->wait_frame);
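
Each usbvision hunk applies the same rule: snapshot the flag (or user count) while v4l2_lock is held and act on the local copy afterwards, because the device may be released once the lock is dropped. A standalone pthread sketch (not driver code) of the pattern:

#include <pthread.h>
#include <stdio.h>

struct dev {
	pthread_mutex_t lock;
	int remove_pending;
	int users;
};

static void close_dev(struct dev *d)
{
	int r;

	pthread_mutex_lock(&d->lock);
	d->users--;
	r = d->remove_pending;		/* snapshot under the lock */
	pthread_mutex_unlock(&d->lock);

	if (r)	/* safe: no further reads of *d are needed */
		printf("final disconnect\n");
}

int main(void)
{
	struct dev d = { PTHREAD_MUTEX_INITIALIZER, 1, 1 };

	close_dev(&d);
	return 0;
}
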
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 66ee168ddc7e..428235ca2635 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -2151,6 +2151,20 @@ static int uvc_probe(struct usb_interface *intf,
+ sizeof(dev->name) - len);
+ }
+
++ /* Initialize the media device. */
++#ifdef CONFIG_MEDIA_CONTROLLER
++ dev->mdev.dev = &intf->dev;
++ strscpy(dev->mdev.model, dev->name, sizeof(dev->mdev.model));
++ if (udev->serial)
++ strscpy(dev->mdev.serial, udev->serial,
++ sizeof(dev->mdev.serial));
++ usb_make_path(udev, dev->mdev.bus_info, sizeof(dev->mdev.bus_info));
++ dev->mdev.hw_revision = le16_to_cpu(udev->descriptor.bcdDevice);
++ media_device_init(&dev->mdev);
++
++ dev->vdev.mdev = &dev->mdev;
++#endif
++
+ /* Parse the Video Class control descriptor. */
+ if (uvc_parse_control(dev) < 0) {
+ uvc_trace(UVC_TRACE_PROBE, "Unable to parse UVC "
+@@ -2171,19 +2185,7 @@ static int uvc_probe(struct usb_interface *intf,
+ "linux-uvc-devel mailing list.\n");
+ }
+
+- /* Initialize the media device and register the V4L2 device. */
+-#ifdef CONFIG_MEDIA_CONTROLLER
+- dev->mdev.dev = &intf->dev;
+- strscpy(dev->mdev.model, dev->name, sizeof(dev->mdev.model));
+- if (udev->serial)
+- strscpy(dev->mdev.serial, udev->serial,
+- sizeof(dev->mdev.serial));
+- usb_make_path(udev, dev->mdev.bus_info, sizeof(dev->mdev.bus_info));
+- dev->mdev.hw_revision = le16_to_cpu(udev->descriptor.bcdDevice);
+- media_device_init(&dev->mdev);
+-
+- dev->vdev.mdev = &dev->mdev;
+-#endif
++ /* Register the V4L2 device. */
+ if (v4l2_device_register(&intf->dev, &dev->vdev) < 0)
+ goto error;
+
+diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
+index 0a9a7ee2a866..f4889431f9b7 100644
+--- a/drivers/net/ethernet/google/gve/gve_tx.c
++++ b/drivers/net/ethernet/google/gve/gve_tx.c
+@@ -393,12 +393,13 @@ static void gve_tx_fill_seg_desc(union gve_tx_desc *seg_desc,
+ static void gve_dma_sync_for_device(struct device *dev, dma_addr_t *page_buses,
+ u64 iov_offset, u64 iov_len)
+ {
++ u64 last_page = (iov_offset + iov_len - 1) / PAGE_SIZE;
++ u64 first_page = iov_offset / PAGE_SIZE;
+ dma_addr_t dma;
+- u64 addr;
++ u64 page;
+
+- for (addr = iov_offset; addr < iov_offset + iov_len;
+- addr += PAGE_SIZE) {
+- dma = page_buses[addr / PAGE_SIZE];
++ for (page = first_page; page <= last_page; page++) {
++ dma = page_buses[page];
+ dma_sync_single_for_device(dev, dma, PAGE_SIZE, DMA_TO_DEVICE);
+ }
+ }
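
The rewritten loop fixes an off-by-one page: stepping by PAGE_SIZE from an unaligned iov_offset can stop before the page that holds the buffer's final bytes. A standalone sketch (not driver code) with invented offsets:

#include <stdio.h>

#define PAGE_SIZE 4096ULL

int main(void)
{
	unsigned long long iov_offset = 4000, iov_len = 200;
	unsigned long long first = iov_offset / PAGE_SIZE;
	unsigned long long last = (iov_offset + iov_len - 1) / PAGE_SIZE;
	unsigned long long addr, old_last = 0;

	/* Old scheme: step addresses by PAGE_SIZE from iov_offset. */
	for (addr = iov_offset; addr < iov_offset + iov_len; addr += PAGE_SIZE)
		old_last = addr / PAGE_SIZE;

	printf("old loop synced through page %llu\n", old_last);	/* 0 */
	printf("new loop syncs pages %llu..%llu\n", first, last);	/* 0..1 */
	return 0;
}
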
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+index 94c59939a8cf..e639a365ac2d 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+@@ -1745,6 +1745,7 @@ static int mlx4_en_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd,
+ err = mlx4_en_get_flow(dev, cmd, cmd->fs.location);
+ break;
+ case ETHTOOL_GRXCLSRLALL:
++ cmd->data = MAX_NUM_OF_FS_RULES;
+ while ((!err || err == -ENOENT) && priority < cmd->rule_cnt) {
+ err = mlx4_en_get_flow(dev, cmd, i);
+ if (!err)
+@@ -1811,6 +1812,7 @@ static int mlx4_en_set_channels(struct net_device *dev,
+ struct mlx4_en_dev *mdev = priv->mdev;
+ struct mlx4_en_port_profile new_prof;
+ struct mlx4_en_priv *tmp;
++ int total_tx_count;
+ int port_up = 0;
+ int xdp_count;
+ int err = 0;
+@@ -1825,13 +1827,12 @@ static int mlx4_en_set_channels(struct net_device *dev,
+
+ mutex_lock(&mdev->state_lock);
+ xdp_count = priv->tx_ring_num[TX_XDP] ? channel->rx_count : 0;
+- if (channel->tx_count * priv->prof->num_up + xdp_count >
+- priv->mdev->profile.max_num_tx_rings_p_up * priv->prof->num_up) {
++ total_tx_count = channel->tx_count * priv->prof->num_up + xdp_count;
++ if (total_tx_count > MAX_TX_RINGS) {
+ err = -EINVAL;
+ en_err(priv,
+ "Total number of TX and XDP rings (%d) exceeds the maximum supported (%d)\n",
+- channel->tx_count * priv->prof->num_up + xdp_count,
+- MAX_TX_RINGS);
++ total_tx_count, MAX_TX_RINGS);
+ goto out;
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+index c1438ae52a11..ba4f195a36d6 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+@@ -91,6 +91,7 @@ int mlx4_en_alloc_tx_queue_per_tc(struct net_device *dev, u8 tc)
+ struct mlx4_en_dev *mdev = priv->mdev;
+ struct mlx4_en_port_profile new_prof;
+ struct mlx4_en_priv *tmp;
++ int total_count;
+ int port_up = 0;
+ int err = 0;
+
+@@ -104,6 +105,14 @@ int mlx4_en_alloc_tx_queue_per_tc(struct net_device *dev, u8 tc)
+ MLX4_EN_NUM_UP_HIGH;
+ new_prof.tx_ring_num[TX] = new_prof.num_tx_rings_p_up *
+ new_prof.num_up;
++ total_count = new_prof.tx_ring_num[TX] + new_prof.tx_ring_num[TX_XDP];
++ if (total_count > MAX_TX_RINGS) {
++ err = -EINVAL;
++ en_err(priv,
++ "Total number of TX and XDP rings (%d) exceeds the maximum supported (%d)\n",
++ total_count, MAX_TX_RINGS);
++ goto out;
++ }
+ err = mlx4_en_try_alloc_resources(priv, tmp, &new_prof, true);
+ if (err)
+ goto out;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+index 310f65ef5446..d41c520ce0a8 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+@@ -232,12 +232,15 @@ int mlx5e_tc_tun_create_header_ipv4(struct mlx5e_priv *priv,
+ if (max_encap_size < ipv4_encap_size) {
+ mlx5_core_warn(priv->mdev, "encap size %d too big, max supported is %d\n",
+ ipv4_encap_size, max_encap_size);
+- return -EOPNOTSUPP;
++ err = -EOPNOTSUPP;
++ goto out;
+ }
+
+ encap_header = kzalloc(ipv4_encap_size, GFP_KERNEL);
+- if (!encap_header)
+- return -ENOMEM;
++ if (!encap_header) {
++ err = -ENOMEM;
++ goto out;
++ }
+
+ /* used by mlx5e_detach_encap to lookup a neigh hash table
+ * entry in the neigh hash table when a user deletes a rule
+@@ -348,12 +351,15 @@ int mlx5e_tc_tun_create_header_ipv6(struct mlx5e_priv *priv,
+ if (max_encap_size < ipv6_encap_size) {
+ mlx5_core_warn(priv->mdev, "encap size %d too big, max supported is %d\n",
+ ipv6_encap_size, max_encap_size);
+- return -EOPNOTSUPP;
++ err = -EOPNOTSUPP;
++ goto out;
+ }
+
+ encap_header = kzalloc(ipv6_encap_size, GFP_KERNEL);
+- if (!encap_header)
+- return -ENOMEM;
++ if (!encap_header) {
++ err = -ENOMEM;
++ goto out;
++ }
+
+ /* used by mlx5e_detach_encap to lookup a neigh hash table
+ * entry in the neigh hash table when a user deletes a rule
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index a9bb8e2b34a7..8d4856860365 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -708,9 +708,9 @@ static int get_fec_supported_advertised(struct mlx5_core_dev *dev,
+
+ static void ptys2ethtool_supported_advertised_port(struct ethtool_link_ksettings *link_ksettings,
+ u32 eth_proto_cap,
+- u8 connector_type)
++ u8 connector_type, bool ext)
+ {
+- if (!connector_type || connector_type >= MLX5E_CONNECTOR_TYPE_NUMBER) {
++ if ((!connector_type && !ext) || connector_type >= MLX5E_CONNECTOR_TYPE_NUMBER) {
+ if (eth_proto_cap & (MLX5E_PROT_MASK(MLX5E_10GBASE_CR)
+ | MLX5E_PROT_MASK(MLX5E_10GBASE_SR)
+ | MLX5E_PROT_MASK(MLX5E_40GBASE_CR4)
+@@ -842,9 +842,9 @@ static int ptys2connector_type[MLX5E_CONNECTOR_TYPE_NUMBER] = {
+ [MLX5E_PORT_OTHER] = PORT_OTHER,
+ };
+
+-static u8 get_connector_port(u32 eth_proto, u8 connector_type)
++static u8 get_connector_port(u32 eth_proto, u8 connector_type, bool ext)
+ {
+- if (connector_type && connector_type < MLX5E_CONNECTOR_TYPE_NUMBER)
++ if ((connector_type || ext) && connector_type < MLX5E_CONNECTOR_TYPE_NUMBER)
+ return ptys2connector_type[connector_type];
+
+ if (eth_proto &
+@@ -945,9 +945,9 @@ int mlx5e_ethtool_get_link_ksettings(struct mlx5e_priv *priv,
+ eth_proto_oper = eth_proto_oper ? eth_proto_oper : eth_proto_cap;
+
+ link_ksettings->base.port = get_connector_port(eth_proto_oper,
+- connector_type);
++ connector_type, ext);
+ ptys2ethtool_supported_advertised_port(link_ksettings, eth_proto_admin,
+- connector_type);
++ connector_type, ext);
+ get_lp_advertising(mdev, eth_proto_lp, link_ksettings);
+
+ if (an_status == MLX5_AN_COMPLETE)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index 1f3891fde2eb..a3b2ce112508 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -2044,7 +2044,7 @@ int mlx5_eswitch_set_vport_state(struct mlx5_eswitch *esw,
+
+ unlock:
+ mutex_unlock(&esw->state_lock);
+- return 0;
++ return err;
+ }
+
+ int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 3e99799bdb40..a6a64531bc43 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -549,7 +549,7 @@ static void del_sw_flow_group(struct fs_node *node)
+
+ rhashtable_destroy(&fg->ftes_hash);
+ ida_destroy(&fg->fte_allocator);
+- if (ft->autogroup.active)
++ if (ft->autogroup.active && fg->max_ftes == ft->autogroup.group_size)
+ ft->autogroup.num_groups--;
+ err = rhltable_remove(&ft->fgs_hash,
+ &fg->hash,
+@@ -1095,6 +1095,8 @@ mlx5_create_auto_grouped_flow_table(struct mlx5_flow_namespace *ns,
+
+ ft->autogroup.active = true;
+ ft->autogroup.required_groups = max_num_groups;
++ /* We save place for flow groups in addition to max types */
++ ft->autogroup.group_size = ft->max_fte / (max_num_groups + 1);
+
+ return ft;
+ }
+@@ -1297,8 +1299,7 @@ static struct mlx5_flow_group *alloc_auto_flow_group(struct mlx5_flow_table *ft
+ return ERR_PTR(-ENOENT);
+
+ if (ft->autogroup.num_groups < ft->autogroup.required_groups)
+- /* We save place for flow groups in addition to max types */
+- group_size = ft->max_fte / (ft->autogroup.required_groups + 1);
++ group_size = ft->autogroup.group_size;
+
+ /* ft->max_fte == ft->autogroup.max_types */
+ if (group_size == 0)
+@@ -1325,7 +1326,8 @@ static struct mlx5_flow_group *alloc_auto_flow_group(struct mlx5_flow_table *ft
+ if (IS_ERR(fg))
+ goto out;
+
+- ft->autogroup.num_groups++;
++ if (group_size == ft->autogroup.group_size)
++ ft->autogroup.num_groups++;
+
+ out:
+ return fg;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+index c1252d6be0ef..80906aff21d7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+@@ -137,6 +137,7 @@ struct mlx5_flow_table {
+ struct {
+ bool active;
+ unsigned int required_groups;
++ unsigned int group_size;
+ unsigned int num_groups;
+ } autogroup;
+ /* Protect fwd_rules */
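
The new group_size field freezes the per-group FTE budget at table creation time: one extra slot is reserved beyond the requested group count, and del_sw_flow_group() now decrements num_groups only for groups of exactly that size. A standalone arithmetic sketch (not driver code) with invented numbers:

#include <stdio.h>

int main(void)
{
	unsigned int max_fte = 1000, max_num_groups = 4;
	/* Space is saved for flow groups in addition to max types. */
	unsigned int group_size = max_fte / (max_num_groups + 1);

	printf("group_size = %u\n", group_size);	/* 200 */
	/* Only groups allocated at exactly group_size count toward
	 * num_groups, so odd-sized groups no longer skew the
	 * autogroup bookkeeping when they are deleted. */
	return 0;
}
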
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index fda4964c5cf4..5e2b56305a3a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -1552,6 +1552,7 @@ static const struct pci_device_id mlx5_core_pci_table[] = {
+ { PCI_VDEVICE(MELLANOX, 0x101c), MLX5_PCI_DEV_IS_VF}, /* ConnectX-6 VF */
+ { PCI_VDEVICE(MELLANOX, 0x101d) }, /* ConnectX-6 Dx */
+ { PCI_VDEVICE(MELLANOX, 0x101e), MLX5_PCI_DEV_IS_VF}, /* ConnectX Family mlx5Gen Virtual Function */
++ { PCI_VDEVICE(MELLANOX, 0x101f) }, /* ConnectX-6 LX */
+ { PCI_VDEVICE(MELLANOX, 0xa2d2) }, /* BlueField integrated ConnectX-5 network controller */
+ { PCI_VDEVICE(MELLANOX, 0xa2d3), MLX5_PCI_DEV_IS_VF}, /* BlueField integrated ConnectX-5 network controller VF */
+ { PCI_VDEVICE(MELLANOX, 0xa2d6) }, /* BlueField-2 integrated ConnectX-6 Dx network controller */
+diff --git a/drivers/net/ethernet/mellanox/mlxfw/mlxfw_fsm.c b/drivers/net/ethernet/mellanox/mlxfw/mlxfw_fsm.c
+index 67990406cba2..29e95d0a6ad1 100644
+--- a/drivers/net/ethernet/mellanox/mlxfw/mlxfw_fsm.c
++++ b/drivers/net/ethernet/mellanox/mlxfw/mlxfw_fsm.c
+@@ -66,6 +66,8 @@ retry:
+ return err;
+
+ if (fsm_state_err != MLXFW_FSM_STATE_ERR_OK) {
++ fsm_state_err = min_t(enum mlxfw_fsm_state_err,
++ fsm_state_err, MLXFW_FSM_STATE_ERR_MAX);
+ pr_err("Firmware flash failed: %s\n",
+ mlxfw_fsm_state_err_str[fsm_state_err]);
+ NL_SET_ERR_MSG_MOD(extack, "Firmware flash failed");
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+index e618be7ce6c6..7b7e50d25d25 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+@@ -994,7 +994,7 @@ u32 mlxsw_sp_ipip_dev_ul_tb_id(const struct net_device *ol_dev)
+ if (d)
+ return l3mdev_fib_table(d) ? : RT_TABLE_MAIN;
+ else
+- return l3mdev_fib_table(ol_dev) ? : RT_TABLE_MAIN;
++ return RT_TABLE_MAIN;
+ }
+
+ static struct mlxsw_sp_rif *
+@@ -1598,27 +1598,10 @@ static int mlxsw_sp_netdevice_ipip_ol_vrf_event(struct mlxsw_sp *mlxsw_sp,
+ {
+ struct mlxsw_sp_ipip_entry *ipip_entry =
+ mlxsw_sp_ipip_entry_find_by_ol_dev(mlxsw_sp, ol_dev);
+- enum mlxsw_sp_l3proto ul_proto;
+- union mlxsw_sp_l3addr saddr;
+- u32 ul_tb_id;
+
+ if (!ipip_entry)
+ return 0;
+
+- /* For flat configuration cases, moving overlay to a different VRF might
+- * cause local address conflict, and the conflicting tunnels need to be
+- * demoted.
+- */
+- ul_tb_id = mlxsw_sp_ipip_dev_ul_tb_id(ol_dev);
+- ul_proto = mlxsw_sp->router->ipip_ops_arr[ipip_entry->ipipt]->ul_proto;
+- saddr = mlxsw_sp_ipip_netdev_saddr(ul_proto, ol_dev);
+- if (mlxsw_sp_ipip_demote_tunnel_by_saddr(mlxsw_sp, ul_proto,
+- saddr, ul_tb_id,
+- ipip_entry)) {
+- mlxsw_sp_ipip_entry_demote_tunnel(mlxsw_sp, ipip_entry);
+- return 0;
+- }
+-
+ return __mlxsw_sp_ipip_entry_update_tunnel(mlxsw_sp, ipip_entry,
+ true, false, false, extack);
+ }
+diff --git a/drivers/net/ethernet/sfc/ptp.c b/drivers/net/ethernet/sfc/ptp.c
+index 02ed6d1b716c..af15a737c675 100644
+--- a/drivers/net/ethernet/sfc/ptp.c
++++ b/drivers/net/ethernet/sfc/ptp.c
+@@ -1531,7 +1531,8 @@ void efx_ptp_remove(struct efx_nic *efx)
+ (void)efx_ptp_disable(efx);
+
+ cancel_work_sync(&efx->ptp_data->work);
+- cancel_work_sync(&efx->ptp_data->pps_work);
++ if (efx->ptp_data->pps_workwq)
++ cancel_work_sync(&efx->ptp_data->pps_work);
+
+ skb_queue_purge(&efx->ptp_data->rxq);
+ skb_queue_purge(&efx->ptp_data->txq);
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index bd04fe762056..2a79c7a7e920 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -68,11 +68,12 @@ static int mdiobus_register_reset(struct mdio_device *mdiodev)
+ if (mdiodev->dev.of_node)
+ reset = devm_reset_control_get_exclusive(&mdiodev->dev,
+ "phy");
+- if (PTR_ERR(reset) == -ENOENT ||
+- PTR_ERR(reset) == -ENOTSUPP)
+- reset = NULL;
+- else if (IS_ERR(reset))
+- return PTR_ERR(reset);
++ if (IS_ERR(reset)) {
++ if (PTR_ERR(reset) == -ENOENT || PTR_ERR(reset) == -ENOTSUPP)
++ reset = NULL;
++ else
++ return PTR_ERR(reset);
++ }
+
+ mdiodev->reset_ctrl = reset;
+
+diff --git a/drivers/net/wireless/ath/ath10k/pci.c b/drivers/net/wireless/ath/ath10k/pci.c
+index a0b4d265c6eb..347bb92e4130 100644
+--- a/drivers/net/wireless/ath/ath10k/pci.c
++++ b/drivers/net/wireless/ath/ath10k/pci.c
+@@ -3490,7 +3490,7 @@ static int ath10k_pci_probe(struct pci_dev *pdev,
+ struct ath10k_pci *ar_pci;
+ enum ath10k_hw_rev hw_rev;
+ struct ath10k_bus_params bus_params = {};
+- bool pci_ps;
++ bool pci_ps, is_qca988x = false;
+ int (*pci_soft_reset)(struct ath10k *ar);
+ int (*pci_hard_reset)(struct ath10k *ar);
+ u32 (*targ_cpu_to_ce_addr)(struct ath10k *ar, u32 addr);
+@@ -3500,6 +3500,7 @@ static int ath10k_pci_probe(struct pci_dev *pdev,
+ case QCA988X_2_0_DEVICE_ID:
+ hw_rev = ATH10K_HW_QCA988X;
+ pci_ps = false;
++ is_qca988x = true;
+ pci_soft_reset = ath10k_pci_warm_reset;
+ pci_hard_reset = ath10k_pci_qca988x_chip_reset;
+ targ_cpu_to_ce_addr = ath10k_pci_qca988x_targ_cpu_to_ce_addr;
+@@ -3619,25 +3620,34 @@ static int ath10k_pci_probe(struct pci_dev *pdev,
+ goto err_deinit_irq;
+ }
+
++ bus_params.dev_type = ATH10K_DEV_TYPE_LL;
++ bus_params.link_can_suspend = true;
++ /* Read CHIP_ID before reset to catch QCA9880-AR1A v1 devices that
++ * fall off the bus during chip_reset. These chips have the same PCI
++ * device ID as the QCA9880 BR4A or 2R4E, hence this check.
++ */
++ if (is_qca988x) {
++ bus_params.chip_id =
++ ath10k_pci_soc_read32(ar, SOC_CHIP_ID_ADDRESS);
++ if (bus_params.chip_id != 0xffffffff) {
++ if (!ath10k_pci_chip_is_supported(pdev->device,
++ bus_params.chip_id))
++ goto err_unsupported;
++ }
++ }
++
+ ret = ath10k_pci_chip_reset(ar);
+ if (ret) {
+ ath10k_err(ar, "failed to reset chip: %d\n", ret);
+ goto err_free_irq;
+ }
+
+- bus_params.dev_type = ATH10K_DEV_TYPE_LL;
+- bus_params.link_can_suspend = true;
+ bus_params.chip_id = ath10k_pci_soc_read32(ar, SOC_CHIP_ID_ADDRESS);
+- if (bus_params.chip_id == 0xffffffff) {
+- ath10k_err(ar, "failed to get chip id\n");
+- goto err_free_irq;
+- }
++ if (bus_params.chip_id == 0xffffffff)
++ goto err_unsupported;
+
+- if (!ath10k_pci_chip_is_supported(pdev->device, bus_params.chip_id)) {
+- ath10k_err(ar, "device %04x with chip_id %08x isn't supported\n",
+- pdev->device, bus_params.chip_id);
++ if (!ath10k_pci_chip_is_supported(pdev->device, bus_params.chip_id))
+ goto err_free_irq;
+- }
+
+ ret = ath10k_core_register(ar, &bus_params);
+ if (ret) {
+@@ -3647,6 +3657,10 @@ static int ath10k_pci_probe(struct pci_dev *pdev,
+
+ return 0;
+
++err_unsupported:
++ ath10k_err(ar, "device %04x with chip_id %08x isn't supported\n",
++ pdev->device, bus_params.chip_id);
++
+ err_free_irq:
+ ath10k_pci_free_irq(ar);
+ ath10k_pci_rx_retry_sync(ar);
+diff --git a/drivers/net/wireless/ath/ath10k/qmi.c b/drivers/net/wireless/ath/ath10k/qmi.c
+index 3b63b6257c43..545ac1f06997 100644
+--- a/drivers/net/wireless/ath/ath10k/qmi.c
++++ b/drivers/net/wireless/ath/ath10k/qmi.c
+@@ -581,22 +581,29 @@ static int ath10k_qmi_host_cap_send_sync(struct ath10k_qmi *qmi)
+ {
+ struct wlfw_host_cap_resp_msg_v01 resp = {};
+ struct wlfw_host_cap_req_msg_v01 req = {};
++ struct qmi_elem_info *req_ei;
+ struct ath10k *ar = qmi->ar;
++ struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar);
+ struct qmi_txn txn;
+ int ret;
+
+ req.daemon_support_valid = 1;
+ req.daemon_support = 0;
+
+- ret = qmi_txn_init(&qmi->qmi_hdl, &txn,
+- wlfw_host_cap_resp_msg_v01_ei, &resp);
++ ret = qmi_txn_init(&qmi->qmi_hdl, &txn, wlfw_host_cap_resp_msg_v01_ei,
++ &resp);
+ if (ret < 0)
+ goto out;
+
++ if (test_bit(ATH10K_SNOC_FLAG_8BIT_HOST_CAP_QUIRK, &ar_snoc->flags))
++ req_ei = wlfw_host_cap_8bit_req_msg_v01_ei;
++ else
++ req_ei = wlfw_host_cap_req_msg_v01_ei;
++
+ ret = qmi_send_request(&qmi->qmi_hdl, NULL, &txn,
+ QMI_WLFW_HOST_CAP_REQ_V01,
+ WLFW_HOST_CAP_REQ_MSG_V01_MAX_MSG_LEN,
+- wlfw_host_cap_req_msg_v01_ei, &req);
++ req_ei, &req);
+ if (ret < 0) {
+ qmi_txn_cancel(&txn);
+ ath10k_err(ar, "failed to send host capability request: %d\n", ret);
+diff --git a/drivers/net/wireless/ath/ath10k/qmi_wlfw_v01.c b/drivers/net/wireless/ath/ath10k/qmi_wlfw_v01.c
+index 1fe05c6218c3..86fcf4e1de5f 100644
+--- a/drivers/net/wireless/ath/ath10k/qmi_wlfw_v01.c
++++ b/drivers/net/wireless/ath/ath10k/qmi_wlfw_v01.c
+@@ -1988,6 +1988,28 @@ struct qmi_elem_info wlfw_host_cap_req_msg_v01_ei[] = {
+ {}
+ };
+
++struct qmi_elem_info wlfw_host_cap_8bit_req_msg_v01_ei[] = {
++ {
++ .data_type = QMI_OPT_FLAG,
++ .elem_len = 1,
++ .elem_size = sizeof(u8),
++ .array_type = NO_ARRAY,
++ .tlv_type = 0x10,
++ .offset = offsetof(struct wlfw_host_cap_req_msg_v01,
++ daemon_support_valid),
++ },
++ {
++ .data_type = QMI_UNSIGNED_1_BYTE,
++ .elem_len = 1,
++ .elem_size = sizeof(u8),
++ .array_type = NO_ARRAY,
++ .tlv_type = 0x10,
++ .offset = offsetof(struct wlfw_host_cap_req_msg_v01,
++ daemon_support),
++ },
++ {}
++};
++
+ struct qmi_elem_info wlfw_host_cap_resp_msg_v01_ei[] = {
+ {
+ .data_type = QMI_STRUCT,
+diff --git a/drivers/net/wireless/ath/ath10k/qmi_wlfw_v01.h b/drivers/net/wireless/ath/ath10k/qmi_wlfw_v01.h
+index bca1186e1560..4d107e1364a8 100644
+--- a/drivers/net/wireless/ath/ath10k/qmi_wlfw_v01.h
++++ b/drivers/net/wireless/ath/ath10k/qmi_wlfw_v01.h
+@@ -575,6 +575,7 @@ struct wlfw_host_cap_req_msg_v01 {
+
+ #define WLFW_HOST_CAP_REQ_MSG_V01_MAX_MSG_LEN 189
+ extern struct qmi_elem_info wlfw_host_cap_req_msg_v01_ei[];
++extern struct qmi_elem_info wlfw_host_cap_8bit_req_msg_v01_ei[];
+
+ struct wlfw_host_cap_resp_msg_v01 {
+ struct qmi_response_type_v01 resp;
+diff --git a/drivers/net/wireless/ath/ath10k/snoc.c b/drivers/net/wireless/ath/ath10k/snoc.c
+index b491361e6ed4..fc15a0037f0e 100644
+--- a/drivers/net/wireless/ath/ath10k/snoc.c
++++ b/drivers/net/wireless/ath/ath10k/snoc.c
+@@ -1261,6 +1261,15 @@ out:
+ return ret;
+ }
+
++static void ath10k_snoc_quirks_init(struct ath10k *ar)
++{
++ struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar);
++ struct device *dev = &ar_snoc->dev->dev;
++
++ if (of_property_read_bool(dev->of_node, "qcom,snoc-host-cap-8bit-quirk"))
++ set_bit(ATH10K_SNOC_FLAG_8BIT_HOST_CAP_QUIRK, &ar_snoc->flags);
++}
++
+ int ath10k_snoc_fw_indication(struct ath10k *ar, u64 type)
+ {
+ struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar);
+@@ -1678,6 +1687,8 @@ static int ath10k_snoc_probe(struct platform_device *pdev)
+ ar->ce_priv = &ar_snoc->ce;
+ msa_size = drv_data->msa_size;
+
++ ath10k_snoc_quirks_init(ar);
++
+ ret = ath10k_snoc_resource_init(ar);
+ if (ret) {
+ ath10k_warn(ar, "failed to initialize resource: %d\n", ret);
+diff --git a/drivers/net/wireless/ath/ath10k/snoc.h b/drivers/net/wireless/ath/ath10k/snoc.h
+index d62f53501fbb..9db823e46314 100644
+--- a/drivers/net/wireless/ath/ath10k/snoc.h
++++ b/drivers/net/wireless/ath/ath10k/snoc.h
+@@ -63,6 +63,7 @@ enum ath10k_snoc_flags {
+ ATH10K_SNOC_FLAG_REGISTERED,
+ ATH10K_SNOC_FLAG_UNREGISTERING,
+ ATH10K_SNOC_FLAG_RECOVERY,
++ ATH10K_SNOC_FLAG_8BIT_HOST_CAP_QUIRK,
+ };
+
+ struct ath10k_snoc {
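
Taken together, the qmi.c, qmi_wlfw_v01.c/.h and snoc.c/.h hunks wire a device-tree quirk into message encoding: probe sets ATH10K_SNOC_FLAG_8BIT_HOST_CAP_QUIRK when the node carries "qcom,snoc-host-cap-8bit-quirk", and each host-capability request then picks the 8-bit element table. A minimal sketch of the selection step, with stand-in tables instead of the real qmi_elem_info arrays:

    #include <stdbool.h>

    static const int host_cap_ei[] = { 0 };      /* stand-in element table */
    static const int host_cap_8bit_ei[] = { 0 }; /* stand-in 8-bit variant */

    static const int *pick_req_table(bool has_8bit_quirk)
    {
            /* the quirk flag is set once at probe time from the DT
             * property; every request afterwards consults it */
            return has_8bit_quirk ? host_cap_8bit_ei : host_cap_ei;
    }
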
+diff --git a/drivers/net/wireless/ath/ath10k/usb.c b/drivers/net/wireless/ath/ath10k/usb.c
+index e1420f67f776..9ebe74ee4aef 100644
+--- a/drivers/net/wireless/ath/ath10k/usb.c
++++ b/drivers/net/wireless/ath/ath10k/usb.c
+@@ -38,6 +38,10 @@ ath10k_usb_alloc_urb_from_pipe(struct ath10k_usb_pipe *pipe)
+ struct ath10k_urb_context *urb_context = NULL;
+ unsigned long flags;
+
++ /* bail if this pipe is not initialized */
++ if (!pipe->ar_usb)
++ return NULL;
++
+ spin_lock_irqsave(&pipe->ar_usb->cs_lock, flags);
+ if (!list_empty(&pipe->urb_list_head)) {
+ urb_context = list_first_entry(&pipe->urb_list_head,
+@@ -55,6 +59,10 @@ static void ath10k_usb_free_urb_to_pipe(struct ath10k_usb_pipe *pipe,
+ {
+ unsigned long flags;
+
++ /* bail if this pipe is not initialized */
++ if (!pipe->ar_usb)
++ return;
++
+ spin_lock_irqsave(&pipe->ar_usb->cs_lock, flags);
+
+ pipe->urb_cnt++;
+diff --git a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
+index 2b29bf4730f6..b4885a700296 100644
+--- a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
++++ b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
+@@ -4183,7 +4183,7 @@ static void ar9003_hw_thermometer_apply(struct ath_hw *ah)
+
+ static void ar9003_hw_thermo_cal_apply(struct ath_hw *ah)
+ {
+- u32 data, ko, kg;
++ u32 data = 0, ko, kg;
+
+ if (!AR_SREV_9462_20_OR_LATER(ah))
+ return;
+diff --git a/drivers/nfc/port100.c b/drivers/nfc/port100.c
+index 145ddf3f0a45..604dba4f18af 100644
+--- a/drivers/nfc/port100.c
++++ b/drivers/nfc/port100.c
+@@ -783,7 +783,7 @@ static int port100_send_frame_async(struct port100 *dev, struct sk_buff *out,
+
+ rc = port100_submit_urb_for_ack(dev, GFP_KERNEL);
+ if (rc)
+- usb_unlink_urb(dev->out_urb);
++ usb_kill_urb(dev->out_urb);
+
+ exit:
+ mutex_unlock(&dev->out_urb_lock);
+diff --git a/drivers/staging/comedi/drivers/usbduxfast.c b/drivers/staging/comedi/drivers/usbduxfast.c
+index 04bc488385e6..4af012968cb6 100644
+--- a/drivers/staging/comedi/drivers/usbduxfast.c
++++ b/drivers/staging/comedi/drivers/usbduxfast.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0+
+ /*
+- * Copyright (C) 2004-2014 Bernd Porr, mail@berndporr.me.uk
++ * Copyright (C) 2004-2019 Bernd Porr, mail@berndporr.me.uk
+ */
+
+ /*
+@@ -8,7 +8,7 @@
+ * Description: University of Stirling USB DAQ & INCITE Technology Limited
+ * Devices: [ITL] USB-DUX-FAST (usbduxfast)
+ * Author: Bernd Porr <mail@berndporr.me.uk>
+- * Updated: 10 Oct 2014
++ * Updated: 16 Nov 2019
+ * Status: stable
+ */
+
+@@ -22,6 +22,7 @@
+ *
+ *
+ * Revision history:
++ * 1.0: Fixed a rounding error in usbduxfast_ai_cmdtest
+ * 0.9: Dropping the first data packet which seems to be from the last transfer.
+ * Buffer overflows in the FX2 are handed over to comedi.
+ * 0.92: Dropping now 4 packets. The quad buffer has to be emptied.
+@@ -350,6 +351,7 @@ static int usbduxfast_ai_cmdtest(struct comedi_device *dev,
+ struct comedi_cmd *cmd)
+ {
+ int err = 0;
++ int err2 = 0;
+ unsigned int steps;
+ unsigned int arg;
+
+@@ -399,11 +401,16 @@ static int usbduxfast_ai_cmdtest(struct comedi_device *dev,
+ */
+ steps = (cmd->convert_arg * 30) / 1000;
+ if (cmd->chanlist_len != 1)
+- err |= comedi_check_trigger_arg_min(&steps,
+- MIN_SAMPLING_PERIOD);
+- err |= comedi_check_trigger_arg_max(&steps, MAX_SAMPLING_PERIOD);
+- arg = (steps * 1000) / 30;
+- err |= comedi_check_trigger_arg_is(&cmd->convert_arg, arg);
++ err2 |= comedi_check_trigger_arg_min(&steps,
++ MIN_SAMPLING_PERIOD);
++ else
++ err2 |= comedi_check_trigger_arg_min(&steps, 1);
++ err2 |= comedi_check_trigger_arg_max(&steps, MAX_SAMPLING_PERIOD);
++ if (err2) {
++ err |= err2;
++ arg = (steps * 1000) / 30;
++ err |= comedi_check_trigger_arg_is(&cmd->convert_arg, arg);
++ }
+
+ if (cmd->stop_src == TRIG_COUNT)
+ err |= comedi_check_trigger_arg_min(&cmd->stop_arg, 1);
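
The usbduxfast change fixes a round-trip error: convert_arg is in nanoseconds but the hardware counts 30 MHz ticks, and (x * 30 / 1000) * 1000 / 30 rarely yields x again, so the old code "corrected" values that were already achievable. Now the value is rounded back only when clamping actually changed the step count. A toy model with illustrative limits (not the driver's real constants):

    #define MIN_STEPS 8     /* illustrative only */
    #define MAX_STEPS 500

    static int check_period(unsigned int *convert_arg_ns, int chanlist_len)
    {
            unsigned int steps = (*convert_arg_ns * 30) / 1000;
            unsigned int min = (chanlist_len != 1) ? MIN_STEPS : 1;
            unsigned int clamped = steps;

            if (clamped < min)
                    clamped = min;
            if (clamped > MAX_STEPS)
                    clamped = MAX_STEPS;

            if (clamped == steps)
                    return 0;       /* already valid: leave the value alone */

            /* only a clamped value is rounded back to nanoseconds */
            *convert_arg_ns = (clamped * 1000) / 30;
            return -1;
    }
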
+diff --git a/drivers/usb/misc/appledisplay.c b/drivers/usb/misc/appledisplay.c
+index ac92725458b5..ba1eaabc7796 100644
+--- a/drivers/usb/misc/appledisplay.c
++++ b/drivers/usb/misc/appledisplay.c
+@@ -164,7 +164,12 @@ static int appledisplay_bl_get_brightness(struct backlight_device *bd)
+ 0,
+ pdata->msgdata, 2,
+ ACD_USB_TIMEOUT);
+- brightness = pdata->msgdata[1];
++ if (retval < 2) {
++ if (retval >= 0)
++ retval = -EMSGSIZE;
++ } else {
++ brightness = pdata->msgdata[1];
++ }
+ mutex_unlock(&pdata->sysfslock);
+
+ if (retval < 0)
+@@ -299,6 +304,7 @@ error:
+ if (pdata) {
+ if (pdata->urb) {
+ usb_kill_urb(pdata->urb);
++ cancel_delayed_work_sync(&pdata->work);
+ if (pdata->urbdata)
+ usb_free_coherent(pdata->udev, ACD_URB_BUFFER_LEN,
+ pdata->urbdata, pdata->urb->transfer_dma);
+diff --git a/drivers/usb/misc/chaoskey.c b/drivers/usb/misc/chaoskey.c
+index 34e6cd6f40d3..87067c3d6109 100644
+--- a/drivers/usb/misc/chaoskey.c
++++ b/drivers/usb/misc/chaoskey.c
+@@ -384,13 +384,17 @@ static int _chaoskey_fill(struct chaoskey *dev)
+ !dev->reading,
+ (started ? NAK_TIMEOUT : ALEA_FIRST_TIMEOUT) );
+
+- if (result < 0)
++ if (result < 0) {
++ usb_kill_urb(dev->urb);
+ goto out;
++ }
+
+- if (result == 0)
++ if (result == 0) {
+ result = -ETIMEDOUT;
+- else
++ usb_kill_urb(dev->urb);
++ } else {
+ result = dev->valid;
++ }
+ out:
+ /* Let the device go back to sleep eventually */
+ usb_autopm_put_interface(dev->interface);
+@@ -526,7 +530,21 @@ static int chaoskey_suspend(struct usb_interface *interface,
+
+ static int chaoskey_resume(struct usb_interface *interface)
+ {
++ struct chaoskey *dev;
++ struct usb_device *udev = interface_to_usbdev(interface);
++
+ usb_dbg(interface, "resume");
++ dev = usb_get_intfdata(interface);
++
++ /*
++ * We may have lost power.
++ * In that case, a device that needs a long time for its
++ * first requests needs the extended timeout again.
++ */
++ if (le16_to_cpu(udev->descriptor.idVendor) == ALEA_VENDOR_ID)
++ dev->reads_started = false;
++
+ return 0;
+ }
+ #else
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 979bef9bfb6b..f5143eedbc48 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -125,6 +125,7 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x10C4, 0x8341) }, /* Siemens MC35PU GPRS Modem */
+ { USB_DEVICE(0x10C4, 0x8382) }, /* Cygnal Integrated Products, Inc. */
+ { USB_DEVICE(0x10C4, 0x83A8) }, /* Amber Wireless AMB2560 */
++ { USB_DEVICE(0x10C4, 0x83AA) }, /* Mark-10 Digital Force Gauge */
+ { USB_DEVICE(0x10C4, 0x83D8) }, /* DekTec DTA Plus VHF/UHF Booster/Attenuator */
+ { USB_DEVICE(0x10C4, 0x8411) }, /* Kyocera GPS Module */
+ { USB_DEVICE(0x10C4, 0x8418) }, /* IRZ Automation Teleport SG-10 GSM/GPRS Modem */
+diff --git a/drivers/usb/serial/mos7720.c b/drivers/usb/serial/mos7720.c
+index 18110225d506..2ec4eeacebc7 100644
+--- a/drivers/usb/serial/mos7720.c
++++ b/drivers/usb/serial/mos7720.c
+@@ -1833,10 +1833,6 @@ static int mos7720_startup(struct usb_serial *serial)
+ product = le16_to_cpu(serial->dev->descriptor.idProduct);
+ dev = serial->dev;
+
+- /* setting configuration feature to one */
+- usb_control_msg(serial->dev, usb_sndctrlpipe(serial->dev, 0),
+- (__u8)0x03, 0x00, 0x01, 0x00, NULL, 0x00, 5000);
+-
+ if (product == MOSCHIP_DEVICE_ID_7715) {
+ struct urb *urb = serial->port[0]->interrupt_in_urb;
+
+diff --git a/drivers/usb/serial/mos7840.c b/drivers/usb/serial/mos7840.c
+index a698d46ba773..ab4bf8d6d7df 100644
+--- a/drivers/usb/serial/mos7840.c
++++ b/drivers/usb/serial/mos7840.c
+@@ -119,11 +119,15 @@
+ /* This driver also supports
+ * ATEN UC2324 device using Moschip MCS7840
+ * ATEN UC2322 device using Moschip MCS7820
++ * MOXA UPort 2210 device using Moschip MCS7820
+ */
+ #define USB_VENDOR_ID_ATENINTL 0x0557
+ #define ATENINTL_DEVICE_ID_UC2324 0x2011
+ #define ATENINTL_DEVICE_ID_UC2322 0x7820
+
++#define USB_VENDOR_ID_MOXA 0x110a
++#define MOXA_DEVICE_ID_2210 0x2210
++
+ /* Interrupt Routine Defines */
+
+ #define SERIAL_IIR_RLS 0x06
+@@ -195,6 +199,7 @@ static const struct usb_device_id id_table[] = {
+ {USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL2_4)},
+ {USB_DEVICE(USB_VENDOR_ID_ATENINTL, ATENINTL_DEVICE_ID_UC2324)},
+ {USB_DEVICE(USB_VENDOR_ID_ATENINTL, ATENINTL_DEVICE_ID_UC2322)},
++ {USB_DEVICE(USB_VENDOR_ID_MOXA, MOXA_DEVICE_ID_2210)},
+ {} /* terminating entry */
+ };
+ MODULE_DEVICE_TABLE(usb, id_table);
+@@ -2020,6 +2025,7 @@ static int mos7840_probe(struct usb_serial *serial,
+ const struct usb_device_id *id)
+ {
+ u16 product = le16_to_cpu(serial->dev->descriptor.idProduct);
++ u16 vid = le16_to_cpu(serial->dev->descriptor.idVendor);
+ u8 *buf;
+ int device_type;
+
+@@ -2030,6 +2036,11 @@ static int mos7840_probe(struct usb_serial *serial,
+ goto out;
+ }
+
++ if (vid == USB_VENDOR_ID_MOXA && product == MOXA_DEVICE_ID_2210) {
++ device_type = MOSCHIP_DEVICE_ID_7820;
++ goto out;
++ }
++
+ buf = kzalloc(VENDOR_READ_LENGTH, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+@@ -2279,11 +2290,6 @@ out:
+ goto error;
+ } else
+ dev_dbg(&port->dev, "ZLP_REG5 Writing success status%d\n", status);
+-
+- /* setting configuration feature to one */
+- usb_control_msg(serial->dev, usb_sndctrlpipe(serial->dev, 0),
+- 0x03, 0x00, 0x01, 0x00, NULL, 0x00,
+- MOS_WDR_TIMEOUT);
+ }
+ return 0;
+ error:
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 06ab016be0b6..e9491d400a24 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -197,6 +197,7 @@ static void option_instat_callback(struct urb *urb);
+ #define DELL_PRODUCT_5804_MINICARD_ATT 0x819b /* Novatel E371 */
+
+ #define DELL_PRODUCT_5821E 0x81d7
++#define DELL_PRODUCT_5821E_ESIM 0x81e0
+
+ #define KYOCERA_VENDOR_ID 0x0c88
+ #define KYOCERA_PRODUCT_KPC650 0x17da
+@@ -1044,6 +1045,8 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(DELL_VENDOR_ID, DELL_PRODUCT_5804_MINICARD_ATT, 0xff, 0xff, 0xff) },
+ { USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5821E),
+ .driver_info = RSVD(0) | RSVD(1) | RSVD(6) },
++ { USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5821E_ESIM),
++ .driver_info = RSVD(0) | RSVD(1) | RSVD(6) },
+ { USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_E100A) }, /* ADU-E100, ADU-310 */
+ { USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_500A) },
+ { USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_620UW) },
+@@ -1990,6 +1993,10 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(0x03f0, 0xa31d, 0xff, 0x06, 0x13) },
+ { USB_DEVICE_AND_INTERFACE_INFO(0x03f0, 0xa31d, 0xff, 0x06, 0x14) },
+ { USB_DEVICE_AND_INTERFACE_INFO(0x03f0, 0xa31d, 0xff, 0x06, 0x1b) },
++ { USB_DEVICE(0x0489, 0xe0b4), /* Foxconn T77W968 */
++ .driver_info = RSVD(0) | RSVD(1) | RSVD(6) },
++ { USB_DEVICE(0x0489, 0xe0b5), /* Foxconn T77W968 ESIM */
++ .driver_info = RSVD(0) | RSVD(1) | RSVD(6) },
+ { USB_DEVICE(0x1508, 0x1001), /* Fibocom NL668 */
+ .driver_info = RSVD(4) | RSVD(5) | RSVD(6) },
+ { USB_DEVICE(0x2cb7, 0x0104), /* Fibocom NL678 series */
+diff --git a/drivers/usb/usbip/Kconfig b/drivers/usb/usbip/Kconfig
+index 2f86b28fa3da..7bbae7a08642 100644
+--- a/drivers/usb/usbip/Kconfig
++++ b/drivers/usb/usbip/Kconfig
+@@ -4,6 +4,7 @@ config USBIP_CORE
+ tristate "USB/IP support"
+ depends on NET
+ select USB_COMMON
++ select SGL_ALLOC
+ ---help---
+ This enables pushing USB packets over IP to allow remote
+ machines direct access to USB devices. It provides the
+diff --git a/drivers/usb/usbip/stub_rx.c b/drivers/usb/usbip/stub_rx.c
+index 66edfeea68fe..e2b019532234 100644
+--- a/drivers/usb/usbip/stub_rx.c
++++ b/drivers/usb/usbip/stub_rx.c
+@@ -470,18 +470,50 @@ static void stub_recv_cmd_submit(struct stub_device *sdev,
+ if (pipe == -1)
+ return;
+
++ /*
++ * Smatch reported the error case where use_sg is true and buf_len is 0.
++ * In this case, it adds SDEV_EVENT_ERROR_MALLOC; stub_priv will be
++ * released by the stub event handler and the connection shut down.
++ */
+ priv = stub_priv_alloc(sdev, pdu);
+ if (!priv)
+ return;
+
+ buf_len = (unsigned long long)pdu->u.cmd_submit.transfer_buffer_length;
+
++ if (use_sg && !buf_len) {
++ dev_err(&udev->dev, "sg buffer with zero length\n");
++ goto err_malloc;
++ }
++
+ /* allocate urb transfer buffer, if needed */
+ if (buf_len) {
+ if (use_sg) {
+ sgl = sgl_alloc(buf_len, GFP_KERNEL, &nents);
+ if (!sgl)
+ goto err_malloc;
++
++ /* Check if the server's HCD supports SG */
++ if (!udev->bus->sg_tablesize) {
++ /*
++ * If the server's HCD doesn't support SG, break
++ * a single SG request into several URBs and map
++ * each SG list entry to corresponding URB
++ * buffer. The previously allocated SG list is
++ * stored in priv->sgl (If the server's HCD
++ * support SG, SG list is stored only in
++ * urb->sg) and it is used as an indicator that
++ * the server split single SG request into
++ * several URBs. Later, priv->sgl is used by
++ * stub_complete() and stub_send_ret_submit() to
++ * reassemble the divided URBs.
++ */
++ support_sg = 0;
++ num_urbs = nents;
++ priv->completed_urbs = 0;
++ pdu->u.cmd_submit.transfer_flags &=
++ ~URB_DMA_MAP_SG;
++ }
+ } else {
+ buffer = kzalloc(buf_len, GFP_KERNEL);
+ if (!buffer)
+@@ -489,24 +521,6 @@ static void stub_recv_cmd_submit(struct stub_device *sdev,
+ }
+ }
+
+- /* Check if the server's HCD supports SG */
+- if (use_sg && !udev->bus->sg_tablesize) {
+- /*
+- * If the server's HCD doesn't support SG, break a single SG
+- * request into several URBs and map each SG list entry to
+- * corresponding URB buffer. The previously allocated SG
+- * list is stored in priv->sgl (If the server's HCD support SG,
+- * SG list is stored only in urb->sg) and it is used as an
+- * indicator that the server split single SG request into
+- * several URBs. Later, priv->sgl is used by stub_complete() and
+- * stub_send_ret_submit() to reassemble the divied URBs.
+- */
+- support_sg = 0;
+- num_urbs = nents;
+- priv->completed_urbs = 0;
+- pdu->u.cmd_submit.transfer_flags &= ~URB_DMA_MAP_SG;
+- }
+-
+ /* allocate urb array */
+ priv->num_urbs = num_urbs;
+ priv->urbs = kmalloc_array(num_urbs, sizeof(*priv->urbs), GFP_KERNEL);
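
The reordering above keeps the zero-length check ahead of any allocation and moves the HCD capability test inside the SG branch, where the freshly built SG list (and its nents) actually exists. A rough model of the resulting URB-count decision, in plain C rather than the usbip structures:

    #include <stdbool.h>

    static int plan_urbs(unsigned long long buf_len, bool use_sg,
                         bool hcd_has_sg, int nents, int *num_urbs)
    {
            if (use_sg && !buf_len)
                    return -22;     /* -EINVAL: zero-length SG request */

            /* one URB normally; one per SG entry when the server's HCD
             * cannot take a whole SG list in a single URB */
            *num_urbs = (use_sg && !hcd_has_sg) ? nents : 1;
            return 0;
    }
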
+diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
+index 6a50e1d0529c..d91fe6dd172c 100644
+--- a/drivers/vhost/vsock.c
++++ b/drivers/vhost/vsock.c
+@@ -102,7 +102,7 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
+ struct iov_iter iov_iter;
+ unsigned out, in;
+ size_t nbytes;
+- size_t len;
++ size_t iov_len, payload_len;
+ int head;
+
+ spin_lock_bh(&vsock->send_pkt_list_lock);
+@@ -147,8 +147,24 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
+ break;
+ }
+
+- len = iov_length(&vq->iov[out], in);
+- iov_iter_init(&iov_iter, READ, &vq->iov[out], in, len);
++ iov_len = iov_length(&vq->iov[out], in);
++ if (iov_len < sizeof(pkt->hdr)) {
++ virtio_transport_free_pkt(pkt);
++ vq_err(vq, "Buffer len [%zu] too small\n", iov_len);
++ break;
++ }
++
++ iov_iter_init(&iov_iter, READ, &vq->iov[out], in, iov_len);
++ payload_len = pkt->len - pkt->off;
++
++ /* If the packet is greater than the space available in the
++ * buffer, we split it using multiple buffers.
++ */
++ if (payload_len > iov_len - sizeof(pkt->hdr))
++ payload_len = iov_len - sizeof(pkt->hdr);
++
++ /* Set the correct length in the header */
++ pkt->hdr.len = cpu_to_le32(payload_len);
+
+ nbytes = copy_to_iter(&pkt->hdr, sizeof(pkt->hdr), &iov_iter);
+ if (nbytes != sizeof(pkt->hdr)) {
+@@ -157,33 +173,47 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
+ break;
+ }
+
+- nbytes = copy_to_iter(pkt->buf, pkt->len, &iov_iter);
+- if (nbytes != pkt->len) {
++ nbytes = copy_to_iter(pkt->buf + pkt->off, payload_len,
++ &iov_iter);
++ if (nbytes != payload_len) {
+ virtio_transport_free_pkt(pkt);
+ vq_err(vq, "Faulted on copying pkt buf\n");
+ break;
+ }
+
+- vhost_add_used(vq, head, sizeof(pkt->hdr) + pkt->len);
++ vhost_add_used(vq, head, sizeof(pkt->hdr) + payload_len);
+ added = true;
+
+- if (pkt->reply) {
+- int val;
+-
+- val = atomic_dec_return(&vsock->queued_replies);
+-
+- /* Do we have resources to resume tx processing? */
+- if (val + 1 == tx_vq->num)
+- restart_tx = true;
+- }
+-
+ /* Deliver to monitoring devices all correctly transmitted
+ * packets.
+ */
+ virtio_transport_deliver_tap_pkt(pkt);
+
+- total_len += pkt->len;
+- virtio_transport_free_pkt(pkt);
++ pkt->off += payload_len;
++ total_len += payload_len;
++
++ /* If we didn't send all the payload we can requeue the packet
++ * to send it with the next available buffer.
++ */
++ if (pkt->off < pkt->len) {
++ spin_lock_bh(&vsock->send_pkt_list_lock);
++ list_add(&pkt->list, &vsock->send_pkt_list);
++ spin_unlock_bh(&vsock->send_pkt_list_lock);
++ } else {
++ if (pkt->reply) {
++ int val;
++
++ val = atomic_dec_return(&vsock->queued_replies);
++
++ /* Do we have resources to resume tx
++ * processing?
++ */
++ if (val + 1 == tx_vq->num)
++ restart_tx = true;
++ }
++
++ virtio_transport_free_pkt(pkt);
++ }
+ } while(likely(!vhost_exceeds_weight(vq, ++pkts, total_len)));
+ if (added)
+ vhost_signal(&vsock->dev, vq);
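
The net effect of the vsock change is that a packet larger than the guest's RX buffer is drained in slices: copy whatever payload fits after the header, advance pkt->off, and requeue the packet until off reaches len. A compilable toy version of one slice, with a flat byte buffer standing in for the vring iov:

    #include <stddef.h>
    #include <string.h>

    struct toy_pkt {
            const char *buf;
            size_t len, off;
    };

    static size_t send_one_slice(struct toy_pkt *p, char *dst, size_t dst_len,
                                 size_t hdr_len)
    {
            size_t payload = p->len - p->off;

            if (dst_len < hdr_len)
                    return 0;                       /* cannot even hold the header */
            if (payload > dst_len - hdr_len)
                    payload = dst_len - hdr_len;    /* split across buffers */

            memcpy(dst + hdr_len, p->buf + p->off, payload);
            p->off += payload;      /* caller requeues while off < len */
            return payload;
    }
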
+diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
+index 226fbb995fb0..b9f8355947d5 100644
+--- a/drivers/virtio/virtio_balloon.c
++++ b/drivers/virtio/virtio_balloon.c
+@@ -820,7 +820,7 @@ static unsigned long virtio_balloon_shrinker_count(struct shrinker *shrinker,
+ unsigned long count;
+
+ count = vb->num_pages / VIRTIO_BALLOON_PAGES_PER_PAGE;
+- count += vb->num_free_page_blocks >> VIRTIO_BALLOON_FREE_PAGE_ORDER;
++ count += vb->num_free_page_blocks << VIRTIO_BALLOON_FREE_PAGE_ORDER;
+
+ return count;
+ }
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index a8041e451e9e..867c7ebd3f10 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -583,7 +583,7 @@ unmap_release:
+ kfree(desc);
+
+ END_USE(vq);
+- return -EIO;
++ return -ENOMEM;
+ }
+
+ static bool virtqueue_kick_prepare_split(struct virtqueue *_vq)
+@@ -1085,7 +1085,7 @@ unmap_release:
+ kfree(desc);
+
+ END_USE(vq);
+- return -EIO;
++ return -ENOMEM;
+ }
+
+ static inline int virtqueue_add_packed(struct virtqueue *_vq,
+diff --git a/fs/ocfs2/xattr.c b/fs/ocfs2/xattr.c
+index d8507972ee13..90c830e3758e 100644
+--- a/fs/ocfs2/xattr.c
++++ b/fs/ocfs2/xattr.c
+@@ -1490,6 +1490,18 @@ static int ocfs2_xa_check_space(struct ocfs2_xa_loc *loc,
+ return loc->xl_ops->xlo_check_space(loc, xi);
+ }
+
++static void ocfs2_xa_add_entry(struct ocfs2_xa_loc *loc, u32 name_hash)
++{
++ loc->xl_ops->xlo_add_entry(loc, name_hash);
++ loc->xl_entry->xe_name_hash = cpu_to_le32(name_hash);
++ /*
++ * We can't leave the new entry's xe_name_offset at zero or
++ * add_namevalue() will go nuts. We set it to the size of our
++ * storage so that it can never be less than any other entry.
++ */
++ loc->xl_entry->xe_name_offset = cpu_to_le16(loc->xl_size);
++}
++
+ static void ocfs2_xa_add_namevalue(struct ocfs2_xa_loc *loc,
+ struct ocfs2_xattr_info *xi)
+ {
+@@ -2121,31 +2133,29 @@ static int ocfs2_xa_prepare_entry(struct ocfs2_xa_loc *loc,
+ if (rc)
+ goto out;
+
+- if (!loc->xl_entry) {
+- rc = -EINVAL;
+- goto out;
+- }
+-
+- if (ocfs2_xa_can_reuse_entry(loc, xi)) {
+- orig_value_size = loc->xl_entry->xe_value_size;
+- rc = ocfs2_xa_reuse_entry(loc, xi, ctxt);
+- if (rc)
+- goto out;
+- goto alloc_value;
+- }
++ if (loc->xl_entry) {
++ if (ocfs2_xa_can_reuse_entry(loc, xi)) {
++ orig_value_size = loc->xl_entry->xe_value_size;
++ rc = ocfs2_xa_reuse_entry(loc, xi, ctxt);
++ if (rc)
++ goto out;
++ goto alloc_value;
++ }
+
+- if (!ocfs2_xattr_is_local(loc->xl_entry)) {
+- orig_clusters = ocfs2_xa_value_clusters(loc);
+- rc = ocfs2_xa_value_truncate(loc, 0, ctxt);
+- if (rc) {
+- mlog_errno(rc);
+- ocfs2_xa_cleanup_value_truncate(loc,
+- "overwriting",
+- orig_clusters);
+- goto out;
++ if (!ocfs2_xattr_is_local(loc->xl_entry)) {
++ orig_clusters = ocfs2_xa_value_clusters(loc);
++ rc = ocfs2_xa_value_truncate(loc, 0, ctxt);
++ if (rc) {
++ mlog_errno(rc);
++ ocfs2_xa_cleanup_value_truncate(loc,
++ "overwriting",
++ orig_clusters);
++ goto out;
++ }
+ }
+- }
+- ocfs2_xa_wipe_namevalue(loc);
++ ocfs2_xa_wipe_namevalue(loc);
++ } else
++ ocfs2_xa_add_entry(loc, name_hash);
+
+ /*
+ * If we get here, we have a blank entry. Fill it. We grow our
+diff --git a/include/net/tls.h b/include/net/tls.h
+index bd1ef1a915e9..9bf04a74a6cb 100644
+--- a/include/net/tls.h
++++ b/include/net/tls.h
+@@ -364,6 +364,8 @@ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx);
+ void tls_sw_strparser_arm(struct sock *sk, struct tls_context *ctx);
+ void tls_sw_strparser_done(struct tls_context *tls_ctx);
+ int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size);
++int tls_sw_sendpage_locked(struct sock *sk, struct page *page,
++ int offset, size_t size, int flags);
+ int tls_sw_sendpage(struct sock *sk, struct page *page,
+ int offset, size_t size, int flags);
+ void tls_sw_cancel_work_tx(struct tls_context *tls_ctx);
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 8bbd39585301..eafb81c99921 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -1713,11 +1713,11 @@ static void pidfd_show_fdinfo(struct seq_file *m, struct file *f)
+ /*
+ * Poll support for process exit notification.
+ */
+-static unsigned int pidfd_poll(struct file *file, struct poll_table_struct *pts)
++static __poll_t pidfd_poll(struct file *file, struct poll_table_struct *pts)
+ {
+ struct task_struct *task;
+ struct pid *pid = file->private_data;
+- int poll_flags = 0;
++ __poll_t poll_flags = 0;
+
+ poll_wait(file, &pid->wait_pidfd, pts);
+
+@@ -1729,7 +1729,7 @@ static unsigned int pidfd_poll(struct file *file, struct poll_table_struct *pts)
+ * group, then poll(2) should block, similar to the wait(2) family.
+ */
+ if (!task || (task->exit_state && thread_group_empty(task)))
+- poll_flags = POLLIN | POLLRDNORM;
++ poll_flags = EPOLLIN | EPOLLRDNORM;
+ rcu_read_unlock();
+
+ return poll_flags;
+diff --git a/kernel/futex.c b/kernel/futex.c
+index 6d50728ef2e7..ff7035567f9f 100644
+--- a/kernel/futex.c
++++ b/kernel/futex.c
+@@ -3454,11 +3454,16 @@ err_unlock:
+ return ret;
+ }
+
++/* Constants for the pending_op argument of handle_futex_death */
++#define HANDLE_DEATH_PENDING true
++#define HANDLE_DEATH_LIST false
++
+ /*
+ * Process a futex-list entry, check whether it's owned by the
+ * dying task, and do notification if so:
+ */
+-static int handle_futex_death(u32 __user *uaddr, struct task_struct *curr, int pi)
++static int handle_futex_death(u32 __user *uaddr, struct task_struct *curr,
++ bool pi, bool pending_op)
+ {
+ u32 uval, uninitialized_var(nval), mval;
+ int err;
+@@ -3471,6 +3476,42 @@ retry:
+ if (get_user(uval, uaddr))
+ return -1;
+
++ /*
++ * Special case for regular (non PI) futexes. The unlock path in
++ * user space has two race scenarios:
++ *
++ * 1. The unlock path releases the user space futex value and
++ * before it can execute the futex() syscall to wake up
++ * waiters it is killed.
++ *
++ * 2. A woken up waiter is killed before it can acquire the
++ * futex in user space.
++ *
++ * In both cases the TID validation below prevents a wakeup of
++ * potential waiters which can cause these waiters to block
++ * forever.
++ *
++ * In both cases the following conditions are met:
++ *
++ * 1) task->robust_list->list_op_pending != NULL
++ * @pending_op == true
++ * 2) User space futex value == 0
++ * 3) Regular futex: @pi == false
++ *
++ * If these conditions are met, it is safe to attempt waking up a
++ * potential waiter without touching the user space futex value and
++ * trying to set the OWNER_DIED bit. The user space futex value is
++ * uncontended and the rest of the user space mutex state is
++ * consistent, so a woken waiter will just take over the
++ * uncontended futex. Setting the OWNER_DIED bit would create
++ * inconsistent state and malfunction of the user space owner died
++ * handling.
++ */
++ if (pending_op && !pi && !uval) {
++ futex_wake(uaddr, 1, 1, FUTEX_BITSET_MATCH_ANY);
++ return 0;
++ }
++
+ if ((uval & FUTEX_TID_MASK) != task_pid_vnr(curr))
+ return 0;
+
+@@ -3590,10 +3631,11 @@ void exit_robust_list(struct task_struct *curr)
+ * A pending lock might already be on the list, so
+ * don't process it twice:
+ */
+- if (entry != pending)
++ if (entry != pending) {
+ if (handle_futex_death((void __user *)entry + futex_offset,
+- curr, pi))
++ curr, pi, HANDLE_DEATH_LIST))
+ return;
++ }
+ if (rc)
+ return;
+ entry = next_entry;
+@@ -3607,9 +3649,10 @@ void exit_robust_list(struct task_struct *curr)
+ cond_resched();
+ }
+
+- if (pending)
++ if (pending) {
+ handle_futex_death((void __user *)pending + futex_offset,
+- curr, pip);
++ curr, pip, HANDLE_DEATH_PENDING);
++ }
+ }
+
+ long do_futex(u32 __user *uaddr, int op, u32 val, ktime_t *timeout,
+@@ -3786,7 +3829,8 @@ void compat_exit_robust_list(struct task_struct *curr)
+ if (entry != pending) {
+ void __user *uaddr = futex_uaddr(entry, futex_offset);
+
+- if (handle_futex_death(uaddr, curr, pi))
++ if (handle_futex_death(uaddr, curr, pi,
++ HANDLE_DEATH_LIST))
+ return;
+ }
+ if (rc)
+@@ -3805,7 +3849,7 @@ void compat_exit_robust_list(struct task_struct *curr)
+ if (pending) {
+ void __user *uaddr = futex_uaddr(pending, futex_offset);
+
+- handle_futex_death(uaddr, curr, pip);
++ handle_futex_death(uaddr, curr, pip, HANDLE_DEATH_PENDING);
+ }
+ }
+
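
The long comment above spells out exactly when the new fast path is safe: a pending, non-PI entry whose futex word is zero is already released and self-consistent, so the exiting task only needs to wake one waiter rather than stamp OWNER_DIED. A toy rendering of that decision (the wake call is a stub, and TID_MASK mirrors FUTEX_TID_MASK):

    #include <stdbool.h>
    #include <stdint.h>

    #define TID_MASK 0x3fffffffu    /* mirrors FUTEX_TID_MASK */

    static void wake_one_waiter(uint32_t *uaddr) { (void)uaddr; /* stub */ }

    static int handle_death(uint32_t *uaddr, uint32_t dead_tid,
                            bool pi, bool pending_op)
    {
            uint32_t uval = *uaddr;

            /* pending, non-PI, value 0: lock already released and
             * consistent, so wake a waiter without touching the word */
            if (pending_op && !pi && !uval) {
                    wake_one_waiter(uaddr);
                    return 0;
            }

            if ((uval & TID_MASK) != dead_tid)
                    return 0;       /* not owned by the dying task */

            /* ... the original OWNER_DIED handling continues here ... */
            return 0;
    }
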
+diff --git a/mm/ksm.c b/mm/ksm.c
+index 3dc4346411e4..4d5998ca31ae 100644
+--- a/mm/ksm.c
++++ b/mm/ksm.c
+@@ -885,13 +885,13 @@ static int remove_stable_node(struct stable_node *stable_node)
+ return 0;
+ }
+
+- if (WARN_ON_ONCE(page_mapped(page))) {
+- /*
+- * This should not happen: but if it does, just refuse to let
+- * merge_across_nodes be switched - there is no need to panic.
+- */
+- err = -EBUSY;
+- } else {
++ /*
++ * Page could be still mapped if this races with __mmput() running in
++ * between ksm_exit() and exit_mmap(). Just refuse to let
++ * merge_across_nodes/max_page_sharing be switched.
++ */
++ err = -EBUSY;
++ if (!page_mapped(page)) {
+ /*
+ * The stable node did not yet appear stale to get_ksm_page(),
+ * since that allows for an unmapped ksm page to be recognized
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index f363fed0db4f..8431897acb54 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -331,7 +331,7 @@ static unsigned long find_smallest_section_pfn(int nid, struct zone *zone,
+ unsigned long end_pfn)
+ {
+ for (; start_pfn < end_pfn; start_pfn += PAGES_PER_SUBSECTION) {
+- if (unlikely(!pfn_valid(start_pfn)))
++ if (unlikely(!pfn_to_online_page(start_pfn)))
+ continue;
+
+ if (unlikely(pfn_to_nid(start_pfn) != nid))
+@@ -356,7 +356,7 @@ static unsigned long find_biggest_section_pfn(int nid, struct zone *zone,
+ /* pfn is the end pfn of a memory section. */
+ pfn = end_pfn - 1;
+ for (; pfn >= start_pfn; pfn -= PAGES_PER_SUBSECTION) {
+- if (unlikely(!pfn_valid(pfn)))
++ if (unlikely(!pfn_to_online_page(pfn)))
+ continue;
+
+ if (unlikely(pfn_to_nid(pfn) != nid))
+@@ -415,7 +415,7 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
+ */
+ pfn = zone_start_pfn;
+ for (; pfn < zone_end_pfn; pfn += PAGES_PER_SUBSECTION) {
+- if (unlikely(!pfn_valid(pfn)))
++ if (unlikely(!pfn_to_online_page(pfn)))
+ continue;
+
+ if (page_zone(pfn_to_page(pfn)) != zone)
+@@ -471,6 +471,16 @@ static void __remove_zone(struct zone *zone, unsigned long start_pfn,
+ struct pglist_data *pgdat = zone->zone_pgdat;
+ unsigned long flags;
+
++#ifdef CONFIG_ZONE_DEVICE
++ /*
++ * Zone shrinking code cannot properly deal with ZONE_DEVICE. So
++ * we will not try to shrink the zones - which is okay as
++ * set_zone_contiguous() cannot deal with ZONE_DEVICE either way.
++ */
++ if (zone_idx(zone) == ZONE_DEVICE)
++ return;
++#endif
++
+ pgdat_resize_lock(zone->zone_pgdat, &flags);
+ shrink_zone_span(zone, start_pfn, start_pfn + nr_pages);
+ update_pgdat_span(pgdat);
+diff --git a/mm/slub.c b/mm/slub.c
+index d2445dd1c7ed..f24ea152cdbb 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -2648,6 +2648,17 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
+ return p;
+ }
+
++/*
++ * If the object has been wiped upon free, make sure it's fully initialized by
++ * zeroing out freelist pointer.
++ */
++static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
++ void *obj)
++{
++ if (unlikely(slab_want_init_on_free(s)) && obj)
++ memset((void *)((char *)obj + s->offset), 0, sizeof(void *));
++}
++
+ /*
+ * Inlined fastpath so that allocation functions (kmalloc, kmem_cache_alloc)
+ * have the fastpath folded into their functions. So no function call
+@@ -2736,12 +2747,8 @@ redo:
+ prefetch_freepointer(s, next_object);
+ stat(s, ALLOC_FASTPATH);
+ }
+- /*
+- * If the object has been wiped upon free, make sure it's fully
+- * initialized by zeroing out freelist pointer.
+- */
+- if (unlikely(slab_want_init_on_free(s)) && object)
+- memset(object + s->offset, 0, sizeof(void *));
++
++ maybe_wipe_obj_freeptr(s, object);
+
+ if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object)
+ memset(object, 0, s->object_size);
+@@ -3155,10 +3162,13 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
+ goto error;
+
+ c = this_cpu_ptr(s->cpu_slab);
++ maybe_wipe_obj_freeptr(s, p[i]);
++
+ continue; /* goto for-loop */
+ }
+ c->freelist = get_freepointer(s, object);
+ p[i] = object;
++ maybe_wipe_obj_freeptr(s, p[i]);
+ }
+ c->tid = next_tid(c->tid);
+ local_irq_enable();
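
maybe_wipe_obj_freeptr() exists because SLUB keeps the freelist pointer inside the object at s->offset: init_on_free wipes it together with the rest of the object, so every allocation path, now including the bulk one, must re-zero that word before handing the object out. A stripped-down model in plain C:

    #include <string.h>

    struct toy_cache {
            size_t offset;          /* where the freelist pointer lives */
            int init_on_free;
    };

    static void maybe_wipe_freeptr(const struct toy_cache *s, void *obj)
    {
            if (s->init_on_free && obj)
                    memset((char *)obj + s->offset, 0, sizeof(void *));
    }
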
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 868a768f7300..60987be7fdaa 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -2195,6 +2195,8 @@ static int do_setvfinfo(struct net_device *dev, struct nlattr **tb)
+ if (tb[IFLA_VF_MAC]) {
+ struct ifla_vf_mac *ivm = nla_data(tb[IFLA_VF_MAC]);
+
++ if (ivm->vf >= INT_MAX)
++ return -EINVAL;
+ err = -EOPNOTSUPP;
+ if (ops->ndo_set_vf_mac)
+ err = ops->ndo_set_vf_mac(dev, ivm->vf,
+@@ -2206,6 +2208,8 @@ static int do_setvfinfo(struct net_device *dev, struct nlattr **tb)
+ if (tb[IFLA_VF_VLAN]) {
+ struct ifla_vf_vlan *ivv = nla_data(tb[IFLA_VF_VLAN]);
+
++ if (ivv->vf >= INT_MAX)
++ return -EINVAL;
+ err = -EOPNOTSUPP;
+ if (ops->ndo_set_vf_vlan)
+ err = ops->ndo_set_vf_vlan(dev, ivv->vf, ivv->vlan,
+@@ -2238,6 +2242,8 @@ static int do_setvfinfo(struct net_device *dev, struct nlattr **tb)
+ if (len == 0)
+ return -EINVAL;
+
++ if (ivvl[0]->vf >= INT_MAX)
++ return -EINVAL;
+ err = ops->ndo_set_vf_vlan(dev, ivvl[0]->vf, ivvl[0]->vlan,
+ ivvl[0]->qos, ivvl[0]->vlan_proto);
+ if (err < 0)
+@@ -2248,6 +2254,8 @@ static int do_setvfinfo(struct net_device *dev, struct nlattr **tb)
+ struct ifla_vf_tx_rate *ivt = nla_data(tb[IFLA_VF_TX_RATE]);
+ struct ifla_vf_info ivf;
+
++ if (ivt->vf >= INT_MAX)
++ return -EINVAL;
+ err = -EOPNOTSUPP;
+ if (ops->ndo_get_vf_config)
+ err = ops->ndo_get_vf_config(dev, ivt->vf, &ivf);
+@@ -2266,6 +2274,8 @@ static int do_setvfinfo(struct net_device *dev, struct nlattr **tb)
+ if (tb[IFLA_VF_RATE]) {
+ struct ifla_vf_rate *ivt = nla_data(tb[IFLA_VF_RATE]);
+
++ if (ivt->vf >= INT_MAX)
++ return -EINVAL;
+ err = -EOPNOTSUPP;
+ if (ops->ndo_set_vf_rate)
+ err = ops->ndo_set_vf_rate(dev, ivt->vf,
+@@ -2278,6 +2288,8 @@ static int do_setvfinfo(struct net_device *dev, struct nlattr **tb)
+ if (tb[IFLA_VF_SPOOFCHK]) {
+ struct ifla_vf_spoofchk *ivs = nla_data(tb[IFLA_VF_SPOOFCHK]);
+
++ if (ivs->vf >= INT_MAX)
++ return -EINVAL;
+ err = -EOPNOTSUPP;
+ if (ops->ndo_set_vf_spoofchk)
+ err = ops->ndo_set_vf_spoofchk(dev, ivs->vf,
+@@ -2289,6 +2301,8 @@ static int do_setvfinfo(struct net_device *dev, struct nlattr **tb)
+ if (tb[IFLA_VF_LINK_STATE]) {
+ struct ifla_vf_link_state *ivl = nla_data(tb[IFLA_VF_LINK_STATE]);
+
++ if (ivl->vf >= INT_MAX)
++ return -EINVAL;
+ err = -EOPNOTSUPP;
+ if (ops->ndo_set_vf_link_state)
+ err = ops->ndo_set_vf_link_state(dev, ivl->vf,
+@@ -2302,6 +2316,8 @@ static int do_setvfinfo(struct net_device *dev, struct nlattr **tb)
+
+ err = -EOPNOTSUPP;
+ ivrssq_en = nla_data(tb[IFLA_VF_RSS_QUERY_EN]);
++ if (ivrssq_en->vf >= INT_MAX)
++ return -EINVAL;
+ if (ops->ndo_set_vf_rss_query_en)
+ err = ops->ndo_set_vf_rss_query_en(dev, ivrssq_en->vf,
+ ivrssq_en->setting);
+@@ -2312,6 +2328,8 @@ static int do_setvfinfo(struct net_device *dev, struct nlattr **tb)
+ if (tb[IFLA_VF_TRUST]) {
+ struct ifla_vf_trust *ivt = nla_data(tb[IFLA_VF_TRUST]);
+
++ if (ivt->vf >= INT_MAX)
++ return -EINVAL;
+ err = -EOPNOTSUPP;
+ if (ops->ndo_set_vf_trust)
+ err = ops->ndo_set_vf_trust(dev, ivt->vf, ivt->setting);
+@@ -2322,15 +2340,18 @@ static int do_setvfinfo(struct net_device *dev, struct nlattr **tb)
+ if (tb[IFLA_VF_IB_NODE_GUID]) {
+ struct ifla_vf_guid *ivt = nla_data(tb[IFLA_VF_IB_NODE_GUID]);
+
++ if (ivt->vf >= INT_MAX)
++ return -EINVAL;
+ if (!ops->ndo_set_vf_guid)
+ return -EOPNOTSUPP;
+-
+ return handle_vf_guid(dev, ivt, IFLA_VF_IB_NODE_GUID);
+ }
+
+ if (tb[IFLA_VF_IB_PORT_GUID]) {
+ struct ifla_vf_guid *ivt = nla_data(tb[IFLA_VF_IB_PORT_GUID]);
+
++ if (ivt->vf >= INT_MAX)
++ return -EINVAL;
+ if (!ops->ndo_set_vf_guid)
+ return -EOPNOTSUPP;
+
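
Every hunk in do_setvfinfo() adds the same guard: the netlink attributes carry the VF index as a u32, while the ndo_* driver hooks take an int, so indices at or above INT_MAX are rejected before dispatch. A compilable sketch of the pattern, with the errno values written out numerically:

    #include <limits.h>
    #include <stdint.h>

    static int set_vf_attr(uint32_t vf, int (*ndo_set)(int vf))
    {
            if (vf >= INT_MAX)
                    return -22;     /* -EINVAL: would overflow the int */
            if (!ndo_set)
                    return -95;     /* -EOPNOTSUPP */
            return ndo_set((int)vf);
    }
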
+diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
+index 0b980e841927..c45b7d738cd1 100644
+--- a/net/ipv4/sysctl_net_ipv4.c
++++ b/net/ipv4/sysctl_net_ipv4.c
+@@ -1028,7 +1028,7 @@ static struct ctl_table ipv4_net_table[] = {
+ .mode = 0644,
+ .proc_handler = proc_fib_multipath_hash_policy,
+ .extra1 = SYSCTL_ZERO,
+- .extra2 = SYSCTL_ONE,
++ .extra2 = &two,
+ },
+ #endif
+ {
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 2b25a0de0364..56c8c990b6f2 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -634,7 +634,7 @@ static void rt6_probe(struct fib6_nh *fib6_nh)
+ * Router Reachability Probe MUST be rate-limited
+ * to no more than one per minute.
+ */
+- if (fib6_nh->fib_nh_gw_family)
++ if (!fib6_nh->fib_nh_gw_family)
+ return;
+
+ nh_gw = &fib6_nh->fib_nh_gw6;
+diff --git a/net/sched/act_pedit.c b/net/sched/act_pedit.c
+index cdfaa79382a2..b5bc631b96b7 100644
+--- a/net/sched/act_pedit.c
++++ b/net/sched/act_pedit.c
+@@ -43,7 +43,7 @@ static struct tcf_pedit_key_ex *tcf_pedit_keys_ex_parse(struct nlattr *nla,
+ int err = -EINVAL;
+ int rem;
+
+- if (!nla || !n)
++ if (!nla)
+ return NULL;
+
+ keys_ex = kcalloc(n, sizeof(*k), GFP_KERNEL);
+@@ -170,6 +170,10 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
+ }
+
+ parm = nla_data(pattr);
++ if (!parm->nkeys) {
++ NL_SET_ERR_MSG_MOD(extack, "Pedit requires keys to be passed");
++ return -EINVAL;
++ }
+ ksize = parm->nkeys * sizeof(struct tc_pedit_key);
+ if (nla_len(pattr) < sizeof(*parm) + ksize) {
+ NL_SET_ERR_MSG_ATTR(extack, pattr, "Length of TCA_PEDIT_PARMS or TCA_PEDIT_PARMS_EX pedit attribute is invalid");
+@@ -183,12 +187,6 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
+ index = parm->index;
+ err = tcf_idr_check_alloc(tn, &index, a, bind);
+ if (!err) {
+- if (!parm->nkeys) {
+- tcf_idr_cleanup(tn, index);
+- NL_SET_ERR_MSG_MOD(extack, "Pedit requires keys to be passed");
+- ret = -EINVAL;
+- goto out_free;
+- }
+ ret = tcf_idr_create(tn, index, est, a,
+ &act_pedit_ops, bind, false);
+ if (ret) {
+diff --git a/net/sched/act_tunnel_key.c b/net/sched/act_tunnel_key.c
+index 2f83a79f76aa..d55669e14741 100644
+--- a/net/sched/act_tunnel_key.c
++++ b/net/sched/act_tunnel_key.c
+@@ -135,6 +135,10 @@ static int tunnel_key_copy_opts(const struct nlattr *nla, u8 *dst,
+ if (opt_len < 0)
+ return opt_len;
+ opts_len += opt_len;
++ if (opts_len > IP_TUNNEL_OPTS_MAX) {
++ NL_SET_ERR_MSG(extack, "Tunnel options exceeds max size");
++ return -EINVAL;
++ }
+ if (dst) {
+ dst_len -= opt_len;
+ dst += opt_len;
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index 76bebe516194..92c0766d7f4f 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -842,7 +842,7 @@ static int taprio_parse_mqprio_opt(struct net_device *dev,
+ }
+
+ /* Verify priority mapping uses valid tcs */
+- for (i = 0; i < TC_BITMASK + 1; i++) {
++ for (i = 0; i <= TC_BITMASK; i++) {
+ if (qopt->prio_tc_map[i] >= qopt->num_tc) {
+ NL_SET_ERR_MSG(extack, "Invalid traffic class in priority to traffic class mapping");
+ return -EINVAL;
+@@ -1014,6 +1014,26 @@ static void setup_txtime(struct taprio_sched *q,
+ }
+ }
+
++static int taprio_mqprio_cmp(const struct net_device *dev,
++ const struct tc_mqprio_qopt *mqprio)
++{
++ int i;
++
++ if (!mqprio || mqprio->num_tc != dev->num_tc)
++ return -1;
++
++ for (i = 0; i < mqprio->num_tc; i++)
++ if (dev->tc_to_txq[i].count != mqprio->count[i] ||
++ dev->tc_to_txq[i].offset != mqprio->offset[i])
++ return -1;
++
++ for (i = 0; i <= TC_BITMASK; i++)
++ if (dev->prio_tc_map[i] != mqprio->prio_tc_map[i])
++ return -1;
++
++ return 0;
++}
++
+ static int taprio_change(struct Qdisc *sch, struct nlattr *opt,
+ struct netlink_ext_ack *extack)
+ {
+@@ -1065,6 +1085,10 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt,
+ admin = rcu_dereference(q->admin_sched);
+ rcu_read_unlock();
+
++ /* no changes - no new mqprio settings */
++ if (!taprio_mqprio_cmp(dev, mqprio))
++ mqprio = NULL;
++
+ if (mqprio && (oper || admin)) {
+ NL_SET_ERR_MSG(extack, "Changing the traffic mapping of a running schedule is not supported");
+ err = -ENOTSUPP;
+@@ -1132,7 +1156,7 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt,
+ mqprio->offset[i]);
+
+ /* Always use supplied priority mappings */
+- for (i = 0; i < TC_BITMASK + 1; i++)
++ for (i = 0; i <= TC_BITMASK; i++)
+ netdev_set_prio_tc_map(dev, i,
+ mqprio->prio_tc_map[i]);
+ }
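
taprio_mqprio_cmp() lets a taprio_change() that merely repeats the device's live tc layout be treated as "no new mqprio settings", so such a request no longer trips the running-schedule rejection; the priority-map loops are also rewritten to the clearer i <= TC_BITMASK bound. A toy comparison over fixed-size arrays standing in for the netdev state:

    #include <stdbool.h>

    #define TOY_MAX_TC 4
    #define TOY_TC_BITMASK 15

    struct toy_tc_conf {
            int num_tc;
            int count[TOY_MAX_TC], offset[TOY_MAX_TC];
            int prio_tc_map[TOY_TC_BITMASK + 1];
    };

    static bool mqprio_same(const struct toy_tc_conf *dev,
                            const struct toy_tc_conf *req)
    {
            int i;

            if (req->num_tc != dev->num_tc)
                    return false;
            for (i = 0; i < dev->num_tc; i++)
                    if (dev->count[i] != req->count[i] ||
                        dev->offset[i] != req->offset[i])
                            return false;
            for (i = 0; i <= TOY_TC_BITMASK; i++)
                    if (dev->prio_tc_map[i] != req->prio_tc_map[i])
                            return false;
            return true;
    }
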
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 9313dd51023a..ac2dfe36022d 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -852,6 +852,7 @@ static int __init tls_register(void)
+ {
+ tls_sw_proto_ops = inet_stream_ops;
+ tls_sw_proto_ops.splice_read = tls_sw_splice_read;
++ tls_sw_proto_ops.sendpage_locked = tls_sw_sendpage_locked;
+
+ #ifdef CONFIG_TLS_DEVICE
+ tls_device_init();
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 881f06f465f8..41b2bdc05ba3 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -1204,6 +1204,17 @@ sendpage_end:
+ return copied ? copied : ret;
+ }
+
++int tls_sw_sendpage_locked(struct sock *sk, struct page *page,
++ int offset, size_t size, int flags)
++{
++ if (flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL |
++ MSG_SENDPAGE_NOTLAST | MSG_SENDPAGE_NOPOLICY |
++ MSG_NO_SHARED_FRAGS))
++ return -ENOTSUPP;
++
++ return tls_sw_do_sendpage(sk, page, offset, size, flags);
++}
++
+ int tls_sw_sendpage(struct sock *sk, struct page *page,
+ int offset, size_t size, int flags)
+ {
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index 058d59fceddd..279d838784e5 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -91,8 +91,17 @@ static struct sk_buff *virtio_transport_build_skb(void *opaque)
+ struct virtio_vsock_pkt *pkt = opaque;
+ struct af_vsockmon_hdr *hdr;
+ struct sk_buff *skb;
++ size_t payload_len;
++ void *payload_buf;
+
+- skb = alloc_skb(sizeof(*hdr) + sizeof(pkt->hdr) + pkt->len,
++ /* A packet could be split to fit the RX buffer, so we can retrieve
++ * the payload length from the header and the buffer pointer taking
++ * care of the offset in the original packet.
++ */
++ payload_len = le32_to_cpu(pkt->hdr.len);
++ payload_buf = pkt->buf + pkt->off;
++
++ skb = alloc_skb(sizeof(*hdr) + sizeof(pkt->hdr) + payload_len,
+ GFP_ATOMIC);
+ if (!skb)
+ return NULL;
+@@ -132,8 +141,8 @@ static struct sk_buff *virtio_transport_build_skb(void *opaque)
+
+ skb_put_data(skb, &pkt->hdr, sizeof(pkt->hdr));
+
+- if (pkt->len) {
+- skb_put_data(skb, pkt->buf, pkt->len);
++ if (payload_len) {
++ skb_put_data(skb, payload_buf, payload_len);
+ }
+
+ return skb;
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 90cd59a1869a..bd1cffb2ab50 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -2930,6 +2930,9 @@ static int snd_usb_mixer_controls_badd(struct usb_mixer_interface *mixer,
+ continue;
+
+ iface = usb_ifnum_to_if(dev, intf);
++ if (!iface)
++ continue;
++
+ num = iface->num_altsetting;
+
+ if (num < 2)
+diff --git a/tools/gpio/Build b/tools/gpio/Build
+index 620c1937d957..4141f35837db 100644
+--- a/tools/gpio/Build
++++ b/tools/gpio/Build
+@@ -1,3 +1,4 @@
++gpio-utils-y += gpio-utils.o
+ lsgpio-y += lsgpio.o gpio-utils.o
+ gpio-hammer-y += gpio-hammer.o gpio-utils.o
+ gpio-event-mon-y += gpio-event-mon.o gpio-utils.o
+diff --git a/tools/gpio/Makefile b/tools/gpio/Makefile
+index 1178d302757e..6080de58861f 100644
+--- a/tools/gpio/Makefile
++++ b/tools/gpio/Makefile
+@@ -35,11 +35,15 @@ $(OUTPUT)include/linux/gpio.h: ../../include/uapi/linux/gpio.h
+
+ prepare: $(OUTPUT)include/linux/gpio.h
+
++GPIO_UTILS_IN := $(OUTPUT)gpio-utils-in.o
++$(GPIO_UTILS_IN): prepare FORCE
++ $(Q)$(MAKE) $(build)=gpio-utils
++
+ #
+ # lsgpio
+ #
+ LSGPIO_IN := $(OUTPUT)lsgpio-in.o
+-$(LSGPIO_IN): prepare FORCE
++$(LSGPIO_IN): prepare FORCE $(OUTPUT)gpio-utils-in.o
+ $(Q)$(MAKE) $(build)=lsgpio
+ $(OUTPUT)lsgpio: $(LSGPIO_IN)
+ $(QUIET_LINK)$(CC) $(CFLAGS) $(LDFLAGS) $< -o $@
+@@ -48,7 +52,7 @@ $(OUTPUT)lsgpio: $(LSGPIO_IN)
+ # gpio-hammer
+ #
+ GPIO_HAMMER_IN := $(OUTPUT)gpio-hammer-in.o
+-$(GPIO_HAMMER_IN): prepare FORCE
++$(GPIO_HAMMER_IN): prepare FORCE $(OUTPUT)gpio-utils-in.o
+ $(Q)$(MAKE) $(build)=gpio-hammer
+ $(OUTPUT)gpio-hammer: $(GPIO_HAMMER_IN)
+ $(QUIET_LINK)$(CC) $(CFLAGS) $(LDFLAGS) $< -o $@
+@@ -57,7 +61,7 @@ $(OUTPUT)gpio-hammer: $(GPIO_HAMMER_IN)
+ # gpio-event-mon
+ #
+ GPIO_EVENT_MON_IN := $(OUTPUT)gpio-event-mon-in.o
+-$(GPIO_EVENT_MON_IN): prepare FORCE
++$(GPIO_EVENT_MON_IN): prepare FORCE $(OUTPUT)gpio-utils-in.o
+ $(Q)$(MAKE) $(build)=gpio-event-mon
+ $(OUTPUT)gpio-event-mon: $(GPIO_EVENT_MON_IN)
+ $(QUIET_LINK)$(CC) $(CFLAGS) $(LDFLAGS) $< -o $@
+diff --git a/tools/objtool/arch/x86/tools/gen-insn-attr-x86.awk b/tools/objtool/arch/x86/tools/gen-insn-attr-x86.awk
+index b02a36b2c14f..a42015b305f4 100644
+--- a/tools/objtool/arch/x86/tools/gen-insn-attr-x86.awk
++++ b/tools/objtool/arch/x86/tools/gen-insn-attr-x86.awk
+@@ -69,7 +69,7 @@ BEGIN {
+
+ lprefix1_expr = "\\((66|!F3)\\)"
+ lprefix2_expr = "\\(F3\\)"
+- lprefix3_expr = "\\((F2|!F3|66\\&F2)\\)"
++ lprefix3_expr = "\\((F2|!F3|66&F2)\\)"
+ lprefix_expr = "\\((66|F2|F3)\\)"
+ max_lprefix = 4
+
+@@ -257,7 +257,7 @@ function convert_operands(count,opnd, i,j,imm,mod)
+ return add_flags(imm, mod)
+ }
+
+-/^[0-9a-f]+\:/ {
++/^[0-9a-f]+:/ {
+ if (NR == 1)
+ next
+ # get index
+diff --git a/tools/testing/selftests/x86/mov_ss_trap.c b/tools/testing/selftests/x86/mov_ss_trap.c
+index 3c3a022654f3..6da0ac3f0135 100644
+--- a/tools/testing/selftests/x86/mov_ss_trap.c
++++ b/tools/testing/selftests/x86/mov_ss_trap.c
+@@ -257,7 +257,8 @@ int main()
+ err(1, "sigaltstack");
+ sethandler(SIGSEGV, handle_and_longjmp, SA_RESETHAND | SA_ONSTACK);
+ nr = SYS_getpid;
+- asm volatile ("mov %[ss], %%ss; SYSENTER" : "+a" (nr)
++ /* Clear EBP first to make sure we segfault cleanly. */
++ asm volatile ("xorl %%ebp, %%ebp; mov %[ss], %%ss; SYSENTER" : "+a" (nr)
+ : [ss] "m" (ss) : "flags", "rcx"
+ #ifdef __x86_64__
+ , "r11"
+diff --git a/tools/testing/selftests/x86/sigreturn.c b/tools/testing/selftests/x86/sigreturn.c
+index 3e49a7873f3e..57c4f67f16ef 100644
+--- a/tools/testing/selftests/x86/sigreturn.c
++++ b/tools/testing/selftests/x86/sigreturn.c
+@@ -451,6 +451,19 @@ static void sigusr1(int sig, siginfo_t *info, void *ctx_void)
+ ctx->uc_mcontext.gregs[REG_SP] = (unsigned long)0x8badf00d5aadc0deULL;
+ ctx->uc_mcontext.gregs[REG_CX] = 0;
+
++#ifdef __i386__
++ /*
++ * Make sure the kernel doesn't inadvertently use DS or ES-relative
++ * accesses in a region where user DS or ES is loaded.
++ *
++ * Skip this for 64-bit builds because long mode doesn't care about
++ * DS and ES and skipping it increases test coverage a little bit,
++ * since 64-bit kernels can still run the 32-bit build.
++ */
++ ctx->uc_mcontext.gregs[REG_DS] = 0;
++ ctx->uc_mcontext.gregs[REG_ES] = 0;
++#endif
++
+ memcpy(&requested_regs, &ctx->uc_mcontext.gregs, sizeof(gregset_t));
+ requested_regs[REG_CX] = *ssptr(ctx); /* The asm code does this. */
+
+diff --git a/tools/usb/usbip/libsrc/usbip_host_common.c b/tools/usb/usbip/libsrc/usbip_host_common.c
+index 2813aa821c82..d1d8ba2a4a40 100644
+--- a/tools/usb/usbip/libsrc/usbip_host_common.c
++++ b/tools/usb/usbip/libsrc/usbip_host_common.c
+@@ -57,7 +57,7 @@ static int32_t read_attr_usbip_status(struct usbip_usb_device *udev)
+ }
+
+ value = atoi(status);
+-
++ close(fd);
+ return value;
+ }
+
* [gentoo-commits] proj/linux-patches:5.3 commit in: /
@ 2019-11-30 13:15 Thomas Deutschmann
0 siblings, 0 replies; 21+ messages in thread
From: Thomas Deutschmann @ 2019-11-30 13:15 UTC (permalink / raw
To: gentoo-commits
commit: c26796850f00de4dbd6fa760c6bd4101e69f0393
Author: Thomas Deutschmann <whissi <AT> whissi <DOT> de>
AuthorDate: Sat Nov 30 13:14:01 2019 +0000
Commit: Thomas Deutschmann <whissi <AT> gentoo <DOT> org>
CommitDate: Sat Nov 30 13:14:01 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c2679685
Drop 2900_awk-regexp-warnings.patch
Signed-off-by: Thomas Deutschmann <whissi <AT> whissi.de>
2900_awk-regexp-warnings.patch | 89 ------------------------------------------
1 file changed, 89 deletions(-)
diff --git a/2900_awk-regexp-warnings.patch b/2900_awk-regexp-warnings.patch
deleted file mode 100644
index 5e62625..0000000
--- a/2900_awk-regexp-warnings.patch
+++ /dev/null
@@ -1,89 +0,0 @@
-From 700c1018b86d0d4b3f1f2d459708c0cdf42b521d Mon Sep 17 00:00:00 2001
-From: Alexander Kapshuk <alexander.kapshuk@gmail.com>
-Date: Tue, 24 Sep 2019 07:46:59 +0300
-Subject: x86/insn: Fix awk regexp warnings
-
-gawk 5.0.1 generates the following regexp warnings:
-
- GEN /home/sasha/torvalds/tools/objtool/arch/x86/lib/inat-tables.c
- awk: ../arch/x86/tools/gen-insn-attr-x86.awk:260: warning: regexp escape sequence `\:' is not a known regexp operator
- awk: ../arch/x86/tools/gen-insn-attr-x86.awk:350: (FILENAME=../arch/x86/lib/x86-opcode-map.txt FNR=41) warning: regexp escape sequence `\&' is not a known regexp operator
-
-Ealier versions of gawk are not known to generate these warnings. The
-gawk manual referenced below does not list characters ':' and '&' as
-needing escaping, so 'unescape' them. See
-
- https://www.gnu.org/software/gawk/manual/html_node/Escape-Sequences.html
-
-for more info.
-
-Running diff on the output generated by the script before and after
-applying the patch reported no differences.
-
- [ bp: Massage commit message. ]
-
-[ Caught the respective tools header discrepancy. ]
-Reported-by: kbuild test robot <lkp@intel.com>
-Signed-off-by: Alexander Kapshuk <alexander.kapshuk@gmail.com>
-Signed-off-by: Borislav Petkov <bp@suse.de>
-Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
-Cc: "H. Peter Anvin" <hpa@zytor.com>
-Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
-Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
-Cc: Ingo Molnar <mingo@redhat.com>
-Cc: Josh Poimboeuf <jpoimboe@redhat.com>
-Cc: Thomas Gleixner <tglx@linutronix.de>
-Cc: x86-ml <x86@kernel.org>
-Link: https://lkml.kernel.org/r/20190924044659.3785-1-alexander.kapshuk@gmail.com
----
- arch/x86/tools/gen-insn-attr-x86.awk | 4 ++--
- tools/arch/x86/tools/gen-insn-attr-x86.awk | 4 ++--
- 2 files changed, 4 insertions(+), 4 deletions(-)
-
-diff --git a/arch/x86/tools/gen-insn-attr-x86.awk b/arch/x86/tools/gen-insn-attr-x86.awk
-index b02a36b2c14f..a42015b305f4 100644
---- a/arch/x86/tools/gen-insn-attr-x86.awk
-+++ b/arch/x86/tools/gen-insn-attr-x86.awk
-@@ -69,7 +69,7 @@ BEGIN {
-
- lprefix1_expr = "\\((66|!F3)\\)"
- lprefix2_expr = "\\(F3\\)"
-- lprefix3_expr = "\\((F2|!F3|66\\&F2)\\)"
-+ lprefix3_expr = "\\((F2|!F3|66&F2)\\)"
- lprefix_expr = "\\((66|F2|F3)\\)"
- max_lprefix = 4
-
-@@ -257,7 +257,7 @@ function convert_operands(count,opnd, i,j,imm,mod)
- return add_flags(imm, mod)
- }
-
--/^[0-9a-f]+\:/ {
-+/^[0-9a-f]+:/ {
- if (NR == 1)
- next
- # get index
-diff --git a/tools/arch/x86/tools/gen-insn-attr-x86.awk b/tools/arch/x86/tools/gen-insn-attr-x86.awk
-index b02a36b2c14f..a42015b305f4 100644
---- a/tools/objtool/arch/x86/tools/gen-insn-attr-x86.awk
-+++ b/tools/objtool/arch/x86/tools/gen-insn-attr-x86.awk
-@@ -69,7 +69,7 @@ BEGIN {
-
- lprefix1_expr = "\\((66|!F3)\\)"
- lprefix2_expr = "\\(F3\\)"
-- lprefix3_expr = "\\((F2|!F3|66\\&F2)\\)"
-+ lprefix3_expr = "\\((F2|!F3|66&F2)\\)"
- lprefix_expr = "\\((66|F2|F3)\\)"
- max_lprefix = 4
-
-@@ -257,7 +257,7 @@ function convert_operands(count,opnd, i,j,imm,mod)
- return add_flags(imm, mod)
- }
-
--/^[0-9a-f]+\:/ {
-+/^[0-9a-f]+:/ {
- if (NR == 1)
- next
- # get index
---
-cgit 1.2-0.3.lf.el7
-
* [gentoo-commits] proj/linux-patches:5.3 commit in: /
@ 2019-12-05 1:06 Thomas Deutschmann
0 siblings, 0 replies; 21+ messages in thread
From: Thomas Deutschmann @ 2019-12-05 1:06 UTC (permalink / raw
To: gentoo-commits
commit: c912848be123dc772ae4f5d65e058ab4942e3f0a
Author: Thomas Deutschmann <whissi <AT> whissi <DOT> de>
AuthorDate: Thu Dec 5 01:06:18 2019 +0000
Commit: Thomas Deutschmann <whissi <AT> gentoo <DOT> org>
CommitDate: Thu Dec 5 01:06:18 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c912848b
Linux patch 5.3.15
Signed-off-by: Thomas Deutschmann <whissi <AT> whissi.de>
1014_linux-5.3.15.patch | 4330 +++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 4330 insertions(+)
diff --git a/1014_linux-5.3.15.patch b/1014_linux-5.3.15.patch
new file mode 100644
index 0000000..cd90588
--- /dev/null
+++ b/1014_linux-5.3.15.patch
@@ -0,0 +1,4330 @@
+diff --git a/Makefile b/Makefile
+index 1e5933d6dc97..5a88d67e9635 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 3
+-SUBLEVEL = 14
++SUBLEVEL = 15
+ EXTRAVERSION =
+ NAME = Bobtail Squid
+
+diff --git a/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi b/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
+index f3404dd10537..cf628465cd0a 100644
+--- a/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
+@@ -230,6 +230,8 @@
+ accelerometer@1c {
+ compatible = "fsl,mma8451";
+ reg = <0x1c>;
++ pinctrl-names = "default";
++ pinctrl-0 = <&pinctrl_mma8451_int>;
+ interrupt-parent = <&gpio6>;
+ interrupts = <31 IRQ_TYPE_LEVEL_LOW>;
+ };
+@@ -628,6 +630,12 @@
+ >;
+ };
+
++ pinctrl_mma8451_int: mma8451intgrp {
++ fsl,pins = <
++ MX6QDL_PAD_EIM_BCLK__GPIO6_IO31 0xb0b1
++ >;
++ };
++
+ pinctrl_pwm3: pwm1grp {
+ fsl,pins = <
+ MX6QDL_PAD_SD4_DAT1__PWM3_OUT 0x1b0b1
+diff --git a/arch/arm/boot/dts/stm32mp157c.dtsi b/arch/arm/boot/dts/stm32mp157c.dtsi
+index 0c4e6ebc3529..31556bea2c93 100644
+--- a/arch/arm/boot/dts/stm32mp157c.dtsi
++++ b/arch/arm/boot/dts/stm32mp157c.dtsi
+@@ -914,7 +914,7 @@
+ interrupt-names = "int0", "int1";
+ clocks = <&rcc CK_HSE>, <&rcc FDCAN_K>;
+ clock-names = "hclk", "cclk";
+- bosch,mram-cfg = <0x1400 0 0 32 0 0 2 2>;
++ bosch,mram-cfg = <0x0 0 0 32 0 0 2 2>;
+ status = "disabled";
+ };
+
+@@ -927,7 +927,7 @@
+ interrupt-names = "int0", "int1";
+ clocks = <&rcc CK_HSE>, <&rcc FDCAN_K>;
+ clock-names = "hclk", "cclk";
+- bosch,mram-cfg = <0x0 0 0 32 0 0 2 2>;
++ bosch,mram-cfg = <0x1400 0 0 32 0 0 2 2>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm/boot/dts/sun8i-a83t-tbs-a711.dts b/arch/arm/boot/dts/sun8i-a83t-tbs-a711.dts
+index 568b90ece342..3bec3e0a81b2 100644
+--- a/arch/arm/boot/dts/sun8i-a83t-tbs-a711.dts
++++ b/arch/arm/boot/dts/sun8i-a83t-tbs-a711.dts
+@@ -192,6 +192,7 @@
+ vqmmc-supply = <®_dldo1>;
+ non-removable;
+ wakeup-source;
++ keep-power-in-suspend;
+ status = "okay";
+
+ brcmf: wifi@1 {
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1028a-qds.dts b/arch/arm64/boot/dts/freescale/fsl-ls1028a-qds.dts
+index de6ef39f3118..fce9343dc017 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1028a-qds.dts
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1028a-qds.dts
+@@ -99,7 +99,7 @@
+ status = "okay";
+
+ i2c-mux@77 {
+- compatible = "nxp,pca9847";
++ compatible = "nxp,pca9547";
+ reg = <0x77>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm.dtsi b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
+index 0d0a6543e5db..a9824b862c41 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
+@@ -370,7 +370,7 @@
+ };
+
+ sdma2: dma-controller@302c0000 {
+- compatible = "fsl,imx8mm-sdma", "fsl,imx7d-sdma";
++ compatible = "fsl,imx8mm-sdma", "fsl,imx8mq-sdma";
+ reg = <0x302c0000 0x10000>;
+ interrupts = <GIC_SPI 103 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk IMX8MM_CLK_SDMA2_ROOT>,
+@@ -381,7 +381,7 @@
+ };
+
+ sdma3: dma-controller@302b0000 {
+- compatible = "fsl,imx8mm-sdma", "fsl,imx7d-sdma";
++ compatible = "fsl,imx8mm-sdma", "fsl,imx8mq-sdma";
+ reg = <0x302b0000 0x10000>;
+ interrupts = <GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk IMX8MM_CLK_SDMA3_ROOT>,
+@@ -693,7 +693,7 @@
+ };
+
+ sdma1: dma-controller@30bd0000 {
+- compatible = "fsl,imx8mm-sdma", "fsl,imx7d-sdma";
++ compatible = "fsl,imx8mm-sdma", "fsl,imx8mq-sdma";
+ reg = <0x30bd0000 0x10000>;
+ interrupts = <GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk IMX8MM_CLK_SDMA1_ROOT>,
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq-zii-ultra.dtsi b/arch/arm64/boot/dts/freescale/imx8mq-zii-ultra.dtsi
+index 3faa652fdf20..c25be32ba37e 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq-zii-ultra.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mq-zii-ultra.dtsi
+@@ -100,7 +100,7 @@
+ regulator-name = "0V9_ARM";
+ regulator-min-microvolt = <900000>;
+ regulator-max-microvolt = <1000000>;
+- gpios = <&gpio3 19 GPIO_ACTIVE_HIGH>;
++ gpios = <&gpio3 16 GPIO_ACTIVE_HIGH>;
+ states = <1000000 0x1
+ 900000 0x0>;
+ regulator-always-on;
+diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
+index 02a59946a78a..be3517ef0574 100644
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -1141,6 +1141,19 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
+ goto out_addrs;
+ }
+
++ /*
++ * If we have seen a tail call, we need a second pass.
++ * This is because bpf_jit_emit_common_epilogue() is called
++ * from bpf_jit_emit_tail_call() with a not yet stable ctx->seen.
++ */
++ if (cgctx.seen & SEEN_TAILCALL) {
++ cgctx.idx = 0;
++ if (bpf_jit_build_body(fp, 0, &cgctx, addrs, false)) {
++ fp = org_fp;
++ goto out_addrs;
++ }
++ }
++
+ /*
+ * Pretend to build prologue, given the features we've seen. This will
+ * update cgctx.idx as it pretends to output instructions, then we can
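
The hunk above is easiest to see with a toy emitter: an epilogue generated from inside a tail call consults ctx->seen before later instructions have finished populating it, so first-pass instruction counts can be stale. A stand-alone C sketch of that hazard and of the second-pass remedy; the names are illustrative, not the kernel's JIT:

    #include <stdio.h>

    #define SEEN_FUNC     0x1
    #define SEEN_TAILCALL 0x2

    struct ctx {
            unsigned int idx;       /* instructions emitted so far */
            unsigned int seen;      /* feature flags discovered so far */
    };

    static void emit(struct ctx *ctx) { ctx->idx++; }

    static void emit_epilogue(struct ctx *ctx)
    {
            emit(ctx);                      /* restore stack frame */
            if (ctx->seen & SEEN_FUNC)
                    emit(ctx);              /* restore LR as well */
    }

    static void build_body(struct ctx *ctx)
    {
            ctx->seen |= SEEN_TAILCALL;
            emit_epilogue(ctx);     /* tail call: epilogue uses current seen */
            ctx->seen |= SEEN_FUNC; /* helper call discovered only later */
            emit(ctx);
    }

    int main(void)
    {
            struct ctx ctx = { 0, 0 };

            build_body(&ctx);
            printf("pass 1: %u insns\n", ctx.idx);  /* 2: seen not yet stable */

            if (ctx.seen & SEEN_TAILCALL) {         /* mirrors the fix above */
                    ctx.idx = 0;
                    build_body(&ctx);
                    printf("pass 2: %u insns\n", ctx.idx);  /* 3: sizes final */
            }
            return 0;
    }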
+diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
+index 4c95c365058a..44c48e34d799 100644
+--- a/arch/x86/include/asm/fpu/internal.h
++++ b/arch/x86/include/asm/fpu/internal.h
+@@ -509,7 +509,7 @@ static inline void __fpu_invalidate_fpregs_state(struct fpu *fpu)
+
+ static inline int fpregs_state_valid(struct fpu *fpu, unsigned int cpu)
+ {
+- return fpu == this_cpu_read_stable(fpu_fpregs_owner_ctx) && cpu == fpu->last_cpu;
++ return fpu == this_cpu_read(fpu_fpregs_owner_ctx) && cpu == fpu->last_cpu;
+ }
+
+ /*
+diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
+index efbd54cc4e69..055c8613b531 100644
+--- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
++++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
+@@ -522,6 +522,10 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg)
+ int ret = 0;
+
+ rdtgrp = rdtgroup_kn_lock_live(of->kn);
++ if (!rdtgrp) {
++ ret = -ENOENT;
++ goto out;
++ }
+
+ md.priv = of->kn->priv;
+ resid = md.u.rid;
+diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
+index 57d87f79558f..04dd3cc6c6ed 100644
+--- a/arch/x86/kernel/tsc.c
++++ b/arch/x86/kernel/tsc.c
+@@ -1505,6 +1505,9 @@ void __init tsc_init(void)
+ return;
+ }
+
++ if (tsc_clocksource_reliable || no_tsc_watchdog)
++ clocksource_tsc_early.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
++
+ clocksource_register_khz(&clocksource_tsc_early, tsc_khz);
+ detect_art();
+ }
+diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
+index 5b248763a672..a18155cdce41 100644
+--- a/drivers/block/drbd/drbd_main.c
++++ b/drivers/block/drbd/drbd_main.c
+@@ -786,7 +786,6 @@ int __drbd_send_protocol(struct drbd_connection *connection, enum drbd_packet cm
+
+ if (nc->tentative && connection->agreed_pro_version < 92) {
+ rcu_read_unlock();
+- mutex_unlock(&sock->mutex);
+ drbd_err(connection, "--dry-run is not supported by peer");
+ return -EOPNOTSUPP;
+ }
+diff --git a/drivers/clk/at91/clk-main.c b/drivers/clk/at91/clk-main.c
+index 311cea0c3ae2..37c22667e831 100644
+--- a/drivers/clk/at91/clk-main.c
++++ b/drivers/clk/at91/clk-main.c
+@@ -156,7 +156,7 @@ at91_clk_register_main_osc(struct regmap *regmap,
+ if (bypass)
+ regmap_update_bits(regmap,
+ AT91_CKGR_MOR, MOR_KEY_MASK |
+- AT91_PMC_MOSCEN,
++ AT91_PMC_OSCBYPASS,
+ AT91_PMC_OSCBYPASS | AT91_PMC_KEY);
+
+ hw = &osc->hw;
+@@ -297,7 +297,10 @@ static int clk_main_probe_frequency(struct regmap *regmap)
+ regmap_read(regmap, AT91_CKGR_MCFR, &mcfr);
+ if (mcfr & AT91_PMC_MAINRDY)
+ return 0;
+- usleep_range(MAINF_LOOP_MIN_WAIT, MAINF_LOOP_MAX_WAIT);
++ if (system_state < SYSTEM_RUNNING)
++ udelay(MAINF_LOOP_MIN_WAIT);
++ else
++ usleep_range(MAINF_LOOP_MIN_WAIT, MAINF_LOOP_MAX_WAIT);
+ } while (time_before(prep_time, timeout));
+
+ return -ETIMEDOUT;
+diff --git a/drivers/clk/at91/sam9x60.c b/drivers/clk/at91/sam9x60.c
+index 9790ddfa5b3c..86238d5ecb4d 100644
+--- a/drivers/clk/at91/sam9x60.c
++++ b/drivers/clk/at91/sam9x60.c
+@@ -43,6 +43,7 @@ static const struct clk_pll_characteristics upll_characteristics = {
+ };
+
+ static const struct clk_programmable_layout sam9x60_programmable_layout = {
++ .pres_mask = 0xff,
+ .pres_shift = 8,
+ .css_mask = 0x1f,
+ .have_slck_mck = 0,
+diff --git a/drivers/clk/at91/sckc.c b/drivers/clk/at91/sckc.c
+index 9bfe9a28294a..fac0ca56d42d 100644
+--- a/drivers/clk/at91/sckc.c
++++ b/drivers/clk/at91/sckc.c
+@@ -76,7 +76,10 @@ static int clk_slow_osc_prepare(struct clk_hw *hw)
+
+ writel(tmp | osc->bits->cr_osc32en, sckcr);
+
+- usleep_range(osc->startup_usec, osc->startup_usec + 1);
++ if (system_state < SYSTEM_RUNNING)
++ udelay(osc->startup_usec);
++ else
++ usleep_range(osc->startup_usec, osc->startup_usec + 1);
+
+ return 0;
+ }
+@@ -187,7 +190,10 @@ static int clk_slow_rc_osc_prepare(struct clk_hw *hw)
+
+ writel(readl(sckcr) | osc->bits->cr_rcen, sckcr);
+
+- usleep_range(osc->startup_usec, osc->startup_usec + 1);
++ if (system_state < SYSTEM_RUNNING)
++ udelay(osc->startup_usec);
++ else
++ usleep_range(osc->startup_usec, osc->startup_usec + 1);
+
+ return 0;
+ }
+@@ -288,7 +294,10 @@ static int clk_sam9x5_slow_set_parent(struct clk_hw *hw, u8 index)
+
+ writel(tmp, sckcr);
+
+- usleep_range(SLOWCK_SW_TIME_USEC, SLOWCK_SW_TIME_USEC + 1);
++ if (system_state < SYSTEM_RUNNING)
++ udelay(SLOWCK_SW_TIME_USEC);
++ else
++ usleep_range(SLOWCK_SW_TIME_USEC, SLOWCK_SW_TIME_USEC + 1);
+
+ return 0;
+ }
+@@ -533,7 +542,10 @@ static int clk_sama5d4_slow_osc_prepare(struct clk_hw *hw)
+ return 0;
+ }
+
+- usleep_range(osc->startup_usec, osc->startup_usec + 1);
++ if (system_state < SYSTEM_RUNNING)
++ udelay(osc->startup_usec);
++ else
++ usleep_range(osc->startup_usec, osc->startup_usec + 1);
+ osc->prepared = true;
+
+ return 0;
+diff --git a/drivers/clk/meson/gxbb.c b/drivers/clk/meson/gxbb.c
+index dab16d9b1af8..9834eb2c1b67 100644
+--- a/drivers/clk/meson/gxbb.c
++++ b/drivers/clk/meson/gxbb.c
+@@ -866,6 +866,7 @@ static struct clk_regmap gxbb_sar_adc_clk_div = {
+ .ops = &clk_regmap_divider_ops,
+ .parent_names = (const char *[]){ "sar_adc_clk_sel" },
+ .num_parents = 1,
++ .flags = CLK_SET_RATE_PARENT,
+ },
+ };
+
+diff --git a/drivers/clk/samsung/clk-exynos5420.c b/drivers/clk/samsung/clk-exynos5420.c
+index 7670cc596c74..31466cd1842f 100644
+--- a/drivers/clk/samsung/clk-exynos5420.c
++++ b/drivers/clk/samsung/clk-exynos5420.c
+@@ -165,12 +165,18 @@ static const unsigned long exynos5x_clk_regs[] __initconst = {
+ GATE_BUS_CPU,
+ GATE_SCLK_CPU,
+ CLKOUT_CMU_CPU,
++ CPLL_CON0,
++ DPLL_CON0,
+ EPLL_CON0,
+ EPLL_CON1,
+ EPLL_CON2,
+ RPLL_CON0,
+ RPLL_CON1,
+ RPLL_CON2,
++ IPLL_CON0,
++ SPLL_CON0,
++ VPLL_CON0,
++ MPLL_CON0,
+ SRC_TOP0,
+ SRC_TOP1,
+ SRC_TOP2,
+@@ -1172,8 +1178,6 @@ static const struct samsung_gate_clock exynos5x_gate_clks[] __initconst = {
+ GATE(CLK_SCLK_ISP_SENSOR2, "sclk_isp_sensor2", "dout_isp_sensor2",
+ GATE_TOP_SCLK_ISP, 12, CLK_SET_RATE_PARENT, 0),
+
+- GATE(CLK_G3D, "g3d", "mout_user_aclk_g3d", GATE_IP_G3D, 9, 0, 0),
+-
+ /* CDREX */
+ GATE(CLK_CLKM_PHY0, "clkm_phy0", "dout_sclk_cdrex",
+ GATE_BUS_CDREX0, 0, 0, 0),
+@@ -1248,6 +1252,15 @@ static struct exynos5_subcmu_reg_dump exynos5x_gsc_suspend_regs[] = {
+ { DIV2_RATIO0, 0, 0x30 }, /* DIV dout_gscl_blk_300 */
+ };
+
++static const struct samsung_gate_clock exynos5x_g3d_gate_clks[] __initconst = {
++ GATE(CLK_G3D, "g3d", "mout_user_aclk_g3d", GATE_IP_G3D, 9, 0, 0),
++};
++
++static struct exynos5_subcmu_reg_dump exynos5x_g3d_suspend_regs[] = {
++ { GATE_IP_G3D, 0x3ff, 0x3ff }, /* G3D gates */
++ { SRC_TOP5, 0, BIT(16) }, /* MUX mout_user_aclk_g3d */
++};
++
+ static const struct samsung_div_clock exynos5x_mfc_div_clks[] __initconst = {
+ DIV(0, "dout_mfc_blk", "mout_user_aclk333", DIV4_RATIO, 0, 2),
+ };
+@@ -1320,6 +1333,14 @@ static const struct exynos5_subcmu_info exynos5x_gsc_subcmu = {
+ .pd_name = "GSC",
+ };
+
++static const struct exynos5_subcmu_info exynos5x_g3d_subcmu = {
++ .gate_clks = exynos5x_g3d_gate_clks,
++ .nr_gate_clks = ARRAY_SIZE(exynos5x_g3d_gate_clks),
++ .suspend_regs = exynos5x_g3d_suspend_regs,
++ .nr_suspend_regs = ARRAY_SIZE(exynos5x_g3d_suspend_regs),
++ .pd_name = "G3D",
++};
++
+ static const struct exynos5_subcmu_info exynos5x_mfc_subcmu = {
+ .div_clks = exynos5x_mfc_div_clks,
+ .nr_div_clks = ARRAY_SIZE(exynos5x_mfc_div_clks),
+@@ -1351,6 +1372,7 @@ static const struct exynos5_subcmu_info exynos5800_mau_subcmu = {
+ static const struct exynos5_subcmu_info *exynos5x_subcmus[] = {
+ &exynos5x_disp_subcmu,
+ &exynos5x_gsc_subcmu,
++ &exynos5x_g3d_subcmu,
+ &exynos5x_mfc_subcmu,
+ &exynos5x_mscl_subcmu,
+ };
+@@ -1358,6 +1380,7 @@ static const struct exynos5_subcmu_info *exynos5x_subcmus[] = {
+ static const struct exynos5_subcmu_info *exynos5800_subcmus[] = {
+ &exynos5x_disp_subcmu,
+ &exynos5x_gsc_subcmu,
++ &exynos5x_g3d_subcmu,
+ &exynos5x_mfc_subcmu,
+ &exynos5x_mscl_subcmu,
+ &exynos5800_mau_subcmu,
+diff --git a/drivers/clk/samsung/clk-exynos5433.c b/drivers/clk/samsung/clk-exynos5433.c
+index 7824c2ba3d8e..4b1aa9382ad2 100644
+--- a/drivers/clk/samsung/clk-exynos5433.c
++++ b/drivers/clk/samsung/clk-exynos5433.c
+@@ -13,6 +13,7 @@
+ #include <linux/of_device.h>
+ #include <linux/platform_device.h>
+ #include <linux/pm_runtime.h>
++#include <linux/slab.h>
+
+ #include <dt-bindings/clock/exynos5433.h>
+
+@@ -5584,6 +5585,8 @@ static int __init exynos5433_cmu_probe(struct platform_device *pdev)
+
+ data->clk_save = samsung_clk_alloc_reg_dump(info->clk_regs,
+ info->nr_clk_regs);
++ if (!data->clk_save)
++ return -ENOMEM;
+ data->nr_clk_save = info->nr_clk_regs;
+ data->clk_suspend = info->suspend_regs;
+ data->nr_clk_suspend = info->nr_suspend_regs;
+@@ -5592,12 +5595,19 @@ static int __init exynos5433_cmu_probe(struct platform_device *pdev)
+ if (data->nr_pclks > 0) {
+ data->pclks = devm_kcalloc(dev, sizeof(struct clk *),
+ data->nr_pclks, GFP_KERNEL);
+-
++ if (!data->pclks) {
++ kfree(data->clk_save);
++ return -ENOMEM;
++ }
+ for (i = 0; i < data->nr_pclks; i++) {
+ struct clk *clk = of_clk_get(dev->of_node, i);
+
+- if (IS_ERR(clk))
++ if (IS_ERR(clk)) {
++ kfree(data->clk_save);
++ while (--i >= 0)
++ clk_put(data->pclks[i]);
+ return PTR_ERR(clk);
++ }
+ data->pclks[i] = clk;
+ }
+ }
+diff --git a/drivers/clk/sunxi-ng/ccu-sun9i-a80.c b/drivers/clk/sunxi-ng/ccu-sun9i-a80.c
+index dcac1391767f..ef29582676f6 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun9i-a80.c
++++ b/drivers/clk/sunxi-ng/ccu-sun9i-a80.c
+@@ -1224,7 +1224,7 @@ static int sun9i_a80_ccu_probe(struct platform_device *pdev)
+
+ /* Enforce d1 = 0, d2 = 0 for Audio PLL */
+ val = readl(reg + SUN9I_A80_PLL_AUDIO_REG);
+- val &= (BIT(16) & BIT(18));
++ val &= ~(BIT(16) | BIT(18));
+ writel(val, reg + SUN9I_A80_PLL_AUDIO_REG);
+
+ /* Enforce P = 1 for both CPU cluster PLLs */
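
The one-line change above replaces a mask that could never clear anything: BIT(16) & BIT(18) is 0, so the old AND wiped the whole register instead of clearing only the two divider bits. A stand-alone C demo of the difference, not kernel code:

    #include <stdint.h>
    #include <stdio.h>

    #define BIT(n) (1U << (n))

    int main(void)
    {
            uint32_t val = 0xffffffff;

            /* Buggy: BIT(16) & BIT(18) == 0, so every bit is cleared. */
            printf("buggy: 0x%08x\n", val & (BIT(16) & BIT(18)));

            /* Fixed: clear only d1 (bit 16) and d2 (bit 18). */
            printf("fixed: 0x%08x\n", val & ~(BIT(16) | BIT(18)));
            return 0;
    }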
+diff --git a/drivers/clk/sunxi/clk-sunxi.c b/drivers/clk/sunxi/clk-sunxi.c
+index d3a43381a792..27201fd26e44 100644
+--- a/drivers/clk/sunxi/clk-sunxi.c
++++ b/drivers/clk/sunxi/clk-sunxi.c
+@@ -1080,8 +1080,8 @@ static struct clk ** __init sunxi_divs_clk_setup(struct device_node *node,
+ rate_hw, rate_ops,
+ gate_hw, &clk_gate_ops,
+ clkflags |
+- data->div[i].critical ?
+- CLK_IS_CRITICAL : 0);
++ (data->div[i].critical ?
++ CLK_IS_CRITICAL : 0));
+
+ WARN_ON(IS_ERR(clk_data->clks[i]));
+ }
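
The added parentheses matter because '|' binds tighter than '?:': without them the whole argument collapses to the ternary's result, so any non-zero clkflags passed CLK_IS_CRITICAL and the original flags were dropped. A stand-alone C demo, not kernel code, with illustrative flag values:

    #include <stdio.h>

    #define CLK_SET_RATE_PARENT 0x004UL
    #define CLK_IS_CRITICAL     0x800UL

    int main(void)
    {
            unsigned long clkflags = CLK_SET_RATE_PARENT;
            int critical = 0;

            /* Parsed as (clkflags | critical) ? CLK_IS_CRITICAL : 0 */
            unsigned long buggy = clkflags | critical ? CLK_IS_CRITICAL : 0;
            unsigned long fixed = clkflags | (critical ? CLK_IS_CRITICAL : 0);

            printf("buggy: 0x%lx\n", buggy); /* 0x800: wrong flag, parent flag lost */
            printf("fixed: 0x%lx\n", fixed); /* 0x4 */
            return 0;
    }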
+diff --git a/drivers/clk/ti/clk-dra7-atl.c b/drivers/clk/ti/clk-dra7-atl.c
+index a01ca9395179..f65e16c4f3c4 100644
+--- a/drivers/clk/ti/clk-dra7-atl.c
++++ b/drivers/clk/ti/clk-dra7-atl.c
+@@ -174,7 +174,6 @@ static void __init of_dra7_atl_clock_setup(struct device_node *node)
+ struct clk_init_data init = { NULL };
+ const char **parent_names = NULL;
+ struct clk *clk;
+- int ret;
+
+ clk_hw = kzalloc(sizeof(*clk_hw), GFP_KERNEL);
+ if (!clk_hw) {
+@@ -207,11 +206,6 @@ static void __init of_dra7_atl_clock_setup(struct device_node *node)
+ clk = ti_clk_register(NULL, &clk_hw->hw, node->name);
+
+ if (!IS_ERR(clk)) {
+- ret = ti_clk_add_alias(NULL, clk, node->name);
+- if (ret) {
+- clk_unregister(clk);
+- goto cleanup;
+- }
+ of_clk_add_provider(node, of_clk_src_simple_get, clk);
+ kfree(parent_names);
+ return;
+diff --git a/drivers/clk/ti/clkctrl.c b/drivers/clk/ti/clkctrl.c
+index 975995eea15c..b0c0690a5a12 100644
+--- a/drivers/clk/ti/clkctrl.c
++++ b/drivers/clk/ti/clkctrl.c
+@@ -100,11 +100,12 @@ static bool _omap4_is_timeout(union omap4_timeout *time, u32 timeout)
+ * can be from a timer that requires pm_runtime access, which
+ * will eventually bring us here with timekeeping_suspended,
+ * during both suspend entry and resume paths. This happens
+- * at least on am43xx platform.
+ * at least on am43xx platform. Account for flakiness
++ * with udelay() by multiplying the timeout value by 2.
+ */
+ if (unlikely(_early_timeout || timekeeping_suspended)) {
+ if (time->cycles++ < timeout) {
+- udelay(1);
++ udelay(1 * 2);
+ return false;
+ }
+ } else {
+diff --git a/drivers/clocksource/timer-mediatek.c b/drivers/clocksource/timer-mediatek.c
+index a562f491b0f8..9318edcd8963 100644
+--- a/drivers/clocksource/timer-mediatek.c
++++ b/drivers/clocksource/timer-mediatek.c
+@@ -268,15 +268,12 @@ static int __init mtk_syst_init(struct device_node *node)
+
+ ret = timer_of_init(node, &to);
+ if (ret)
+- goto err;
++ return ret;
+
+ clockevents_config_and_register(&to.clkevt, timer_of_rate(&to),
+ TIMER_SYNC_TICKS, 0xffffffff);
+
+ return 0;
+-err:
+- timer_of_cleanup(&to);
+- return ret;
+ }
+
+ static int __init mtk_gpt_init(struct device_node *node)
+@@ -293,7 +290,7 @@ static int __init mtk_gpt_init(struct device_node *node)
+
+ ret = timer_of_init(node, &to);
+ if (ret)
+- goto err;
++ return ret;
+
+ /* Configure clock source */
+ mtk_gpt_setup(&to, TIMER_CLK_SRC, GPT_CTRL_OP_FREERUN);
+@@ -311,9 +308,6 @@ static int __init mtk_gpt_init(struct device_node *node)
+ mtk_gpt_enable_irq(&to, TIMER_CLK_EVT);
+
+ return 0;
+-err:
+- timer_of_cleanup(&to);
+- return ret;
+ }
+ TIMER_OF_DECLARE(mtk_mt6577, "mediatek,mt6577-timer", mtk_gpt_init);
+ TIMER_OF_DECLARE(mtk_mt6765, "mediatek,mt6765-timer", mtk_syst_init);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+index 7398b4850649..b7633484d15f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+@@ -597,8 +597,11 @@ void amdgpu_ctx_mgr_entity_fini(struct amdgpu_ctx_mgr *mgr)
+ continue;
+ }
+
+- for (i = 0; i < num_entities; i++)
++ for (i = 0; i < num_entities; i++) {
++ mutex_lock(&ctx->adev->lock_reset);
+ drm_sched_entity_fini(&ctx->entities[0][i].entity);
++ mutex_unlock(&ctx->adev->lock_reset);
++ }
+ }
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 5a7f893cf724..2877ce84aef2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2788,6 +2788,13 @@ fence_driver_init:
+ DRM_INFO("amdgpu: acceleration disabled, skipping benchmarks\n");
+ }
+
++ /*
++ * Register gpu instance before amdgpu_device_enable_mgpu_fan_boost.
++ * Otherwise the mgpu fan boost feature will be skipped because the
++ * gpu instance count would be too low.
++ */
++ amdgpu_register_gpu_instance(adev);
++
+ /* enable clockgating, etc. after ib tests, etc. since some blocks require
+ * explicit gating rather than handling it automatically.
+ */
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+index 65f6619f0c0c..e531ba9195a0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+@@ -190,7 +190,6 @@ int amdgpu_driver_load_kms(struct drm_device *dev, unsigned long flags)
+ pm_runtime_put_autosuspend(dev->dev);
+ }
+
+- amdgpu_register_gpu_instance(adev);
+ out:
+ if (r) {
+ /* balance pm_runtime_get_sync in amdgpu_driver_unload_kms */
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 75faa56f243a..b1388d3e72f7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -538,6 +538,13 @@ static void gfx_v9_0_check_fw_write_wait(struct amdgpu_device *adev)
+ adev->gfx.me_fw_write_wait = false;
+ adev->gfx.mec_fw_write_wait = false;
+
++ if ((adev->gfx.mec_fw_version < 0x000001a5) ||
++ (adev->gfx.mec_feature_version < 46) ||
++ (adev->gfx.pfp_fw_version < 0x000000b7) ||
++ (adev->gfx.pfp_feature_version < 46))
++ DRM_WARN_ONCE("Warning: check cp_fw_version and update it to realize \
++ GRBM requires 1-cycle delay in cp firmware\n");
++
+ switch (adev->asic_type) {
+ case CHIP_VEGA10:
+ if ((adev->gfx.me_fw_version >= 0x0000009c) &&
+diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+index 8bf9f541e7fe..a0ef44d025d6 100644
+--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
++++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+@@ -205,7 +205,7 @@ static int navi10_workload_map[] = {
+ WORKLOAD_MAP(PP_SMC_POWER_PROFILE_POWERSAVING, WORKLOAD_PPLIB_POWER_SAVING_BIT),
+ WORKLOAD_MAP(PP_SMC_POWER_PROFILE_VIDEO, WORKLOAD_PPLIB_VIDEO_BIT),
+ WORKLOAD_MAP(PP_SMC_POWER_PROFILE_VR, WORKLOAD_PPLIB_VR_BIT),
+- WORKLOAD_MAP(PP_SMC_POWER_PROFILE_COMPUTE, WORKLOAD_PPLIB_CUSTOM_BIT),
++ WORKLOAD_MAP(PP_SMC_POWER_PROFILE_COMPUTE, WORKLOAD_PPLIB_COMPUTE_BIT),
+ WORKLOAD_MAP(PP_SMC_POWER_PROFILE_CUSTOM, WORKLOAD_PPLIB_CUSTOM_BIT),
+ };
+
+diff --git a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
+index 6a14497257e4..33ca6c581f21 100644
+--- a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
++++ b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
+@@ -219,7 +219,7 @@ static int vega20_workload_map[] = {
+ WORKLOAD_MAP(PP_SMC_POWER_PROFILE_POWERSAVING, WORKLOAD_PPLIB_POWER_SAVING_BIT),
+ WORKLOAD_MAP(PP_SMC_POWER_PROFILE_VIDEO, WORKLOAD_PPLIB_VIDEO_BIT),
+ WORKLOAD_MAP(PP_SMC_POWER_PROFILE_VR, WORKLOAD_PPLIB_VR_BIT),
+- WORKLOAD_MAP(PP_SMC_POWER_PROFILE_COMPUTE, WORKLOAD_PPLIB_CUSTOM_BIT),
++ WORKLOAD_MAP(PP_SMC_POWER_PROFILE_COMPUTE, WORKLOAD_PPLIB_COMPUTE_BIT),
+ WORKLOAD_MAP(PP_SMC_POWER_PROFILE_CUSTOM, WORKLOAD_PPLIB_CUSTOM_BIT),
+ };
+
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 3af76624e4aa..12149c5c39e4 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -211,6 +211,18 @@ static unsigned hid_lookup_collection(struct hid_parser *parser, unsigned type)
+ return 0; /* we know nothing about this usage type */
+ }
+
++/*
++ * Concatenate usage which defines 16 bits or less with the
++ * currently defined usage page to form a 32 bit usage
++ */
++
++static void complete_usage(struct hid_parser *parser, unsigned int index)
++{
++ parser->local.usage[index] &= 0xFFFF;
++ parser->local.usage[index] |=
++ (parser->global.usage_page & 0xFFFF) << 16;
++}
++
+ /*
+ * Add a usage to the temporary parser table.
+ */
+@@ -222,6 +234,14 @@ static int hid_add_usage(struct hid_parser *parser, unsigned usage, u8 size)
+ return -1;
+ }
+ parser->local.usage[parser->local.usage_index] = usage;
++
++ /*
++ * If Usage item only includes usage id, concatenate it with
++ * currently defined usage page
++ */
++ if (size <= 2)
++ complete_usage(parser, parser->local.usage_index);
++
+ parser->local.usage_size[parser->local.usage_index] = size;
+ parser->local.collection_index[parser->local.usage_index] =
+ parser->collection_stack_ptr ?
+@@ -543,13 +563,32 @@ static int hid_parser_local(struct hid_parser *parser, struct hid_item *item)
+ * usage value."
+ */
+
+-static void hid_concatenate_usage_page(struct hid_parser *parser)
++static void hid_concatenate_last_usage_page(struct hid_parser *parser)
+ {
+ int i;
++ unsigned int usage_page;
++ unsigned int current_page;
+
+- for (i = 0; i < parser->local.usage_index; i++)
+- if (parser->local.usage_size[i] <= 2)
+- parser->local.usage[i] += parser->global.usage_page << 16;
++ if (!parser->local.usage_index)
++ return;
++
++ usage_page = parser->global.usage_page;
++
++ /*
++ * Concatenate usage page again only if last declared Usage Page
++ * has not been already used in previous usages concatenation
++ */
++ for (i = parser->local.usage_index - 1; i >= 0; i--) {
++ if (parser->local.usage_size[i] > 2)
++ /* Ignore extended usages */
++ continue;
++
++ current_page = parser->local.usage[i] >> 16;
++ if (current_page == usage_page)
++ break;
++
++ complete_usage(parser, i);
++ }
+ }
+
+ /*
+@@ -561,7 +600,7 @@ static int hid_parser_main(struct hid_parser *parser, struct hid_item *item)
+ __u32 data;
+ int ret;
+
+- hid_concatenate_usage_page(parser);
++ hid_concatenate_last_usage_page(parser);
+
+ data = item_udata(item);
+
+@@ -772,7 +811,7 @@ static int hid_scan_main(struct hid_parser *parser, struct hid_item *item)
+ __u32 data;
+ int i;
+
+- hid_concatenate_usage_page(parser);
++ hid_concatenate_last_usage_page(parser);
+
+ data = item_udata(item);
+
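
Both the scanning and main parsers now defer to hid_concatenate_last_usage_page(), and complete_usage() above is the shared primitive: a Usage item of 16 bits or less carries only a usage id and must be merged with the current Usage Page to form the 32-bit usage. A stand-alone sketch of that merge, not kernel code:

    #include <stdint.h>
    #include <stdio.h>

    /* Combine a bare usage id with the active usage page, as the
     * driver's complete_usage() does for items of 16 bits or less. */
    static uint32_t complete_usage(uint32_t usage, uint16_t usage_page)
    {
            return (usage & 0xFFFF) | ((uint32_t)usage_page << 16);
    }

    int main(void)
    {
            /* Generic Desktop page (0x01), usage id 0x30 (X axis). */
            printf("0x%08x\n", complete_usage(0x30, 0x01)); /* 0x00010030 */
            return 0;
    }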
+diff --git a/drivers/misc/mei/bus.c b/drivers/misc/mei/bus.c
+index 985bd4fd3328..53bb394ccba6 100644
+--- a/drivers/misc/mei/bus.c
++++ b/drivers/misc/mei/bus.c
+@@ -873,15 +873,16 @@ static const struct device_type mei_cl_device_type = {
+
+ /**
+ * mei_cl_bus_set_name - set device name for me client device
++ * <controller>-<client device>
++ * Example: 0000:00:16.0-55213584-9a29-4916-badf-0fb7ed682aeb
+ *
+ * @cldev: me client device
+ */
+ static inline void mei_cl_bus_set_name(struct mei_cl_device *cldev)
+ {
+- dev_set_name(&cldev->dev, "mei:%s:%pUl:%02X",
+- cldev->name,
+- mei_me_cl_uuid(cldev->me_cl),
+- mei_me_cl_ver(cldev->me_cl));
++ dev_set_name(&cldev->dev, "%s-%pUl",
++ dev_name(cldev->bus->dev),
++ mei_me_cl_uuid(cldev->me_cl));
+ }
+
+ /**
+diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
+index c09f8bb49495..b359f06f05e7 100644
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -81,6 +81,7 @@
+
+ #define MEI_DEV_ID_CMP_LP 0x02e0 /* Comet Point LP */
+ #define MEI_DEV_ID_CMP_LP_3 0x02e4 /* Comet Point LP 3 (iTouch) */
++#define MEI_DEV_ID_CMP_V 0xA3BA /* Comet Point Lake V */
+
+ #define MEI_DEV_ID_ICP_LP 0x34E0 /* Ice Lake Point LP */
+
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index 3a2eadcd0378..b1c518abc10e 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -98,6 +98,7 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+
+ {MEI_PCI_DEVICE(MEI_DEV_ID_CMP_LP, MEI_ME_PCH12_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_CMP_LP_3, MEI_ME_PCH8_CFG)},
++ {MEI_PCI_DEVICE(MEI_DEV_ID_CMP_V, MEI_ME_PCH12_CFG)},
+
+ {MEI_PCI_DEVICE(MEI_DEV_ID_ICP_LP, MEI_ME_PCH12_CFG)},
+
+diff --git a/drivers/net/can/c_can/c_can.c b/drivers/net/can/c_can/c_can.c
+index 9b61bfbea6cd..24c6015f6c92 100644
+--- a/drivers/net/can/c_can/c_can.c
++++ b/drivers/net/can/c_can/c_can.c
+@@ -52,6 +52,7 @@
+ #define CONTROL_EX_PDR BIT(8)
+
+ /* control register */
++#define CONTROL_SWR BIT(15)
+ #define CONTROL_TEST BIT(7)
+ #define CONTROL_CCE BIT(6)
+ #define CONTROL_DISABLE_AR BIT(5)
+@@ -572,6 +573,26 @@ static void c_can_configure_msg_objects(struct net_device *dev)
+ IF_MCONT_RCV_EOB);
+ }
+
++static int c_can_software_reset(struct net_device *dev)
++{
++ struct c_can_priv *priv = netdev_priv(dev);
++ int retry = 0;
++
++ if (priv->type != BOSCH_D_CAN)
++ return 0;
++
++ priv->write_reg(priv, C_CAN_CTRL_REG, CONTROL_SWR | CONTROL_INIT);
++ while (priv->read_reg(priv, C_CAN_CTRL_REG) & CONTROL_SWR) {
++ msleep(20);
++ if (retry++ > 100) {
++ netdev_err(dev, "CCTRL: software reset failed\n");
++ return -EIO;
++ }
++ }
++
++ return 0;
++}
++
+ /*
+ * Configure C_CAN chip:
+ * - enable/disable auto-retransmission
+@@ -581,6 +602,11 @@ static void c_can_configure_msg_objects(struct net_device *dev)
+ static int c_can_chip_config(struct net_device *dev)
+ {
+ struct c_can_priv *priv = netdev_priv(dev);
++ int err;
++
++ err = c_can_software_reset(dev);
++ if (err)
++ return err;
+
+ /* enable automatic retransmission */
+ priv->write_reg(priv, C_CAN_CTRL_REG, CONTROL_ENABLE_AR);
+diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c
+index 56fa98d7aa90..a4f0fa94d136 100644
+--- a/drivers/net/can/flexcan.c
++++ b/drivers/net/can/flexcan.c
+@@ -658,6 +658,7 @@ static void flexcan_irq_bus_err(struct net_device *dev, u32 reg_esr)
+ struct can_frame *cf;
+ bool rx_errors = false, tx_errors = false;
+ u32 timestamp;
++ int err;
+
+ timestamp = priv->read(&regs->timer) << 16;
+
+@@ -706,7 +707,9 @@ static void flexcan_irq_bus_err(struct net_device *dev, u32 reg_esr)
+ if (tx_errors)
+ dev->stats.tx_errors++;
+
+- can_rx_offload_queue_sorted(&priv->offload, skb, timestamp);
++ err = can_rx_offload_queue_sorted(&priv->offload, skb, timestamp);
++ if (err)
++ dev->stats.rx_fifo_errors++;
+ }
+
+ static void flexcan_irq_state(struct net_device *dev, u32 reg_esr)
+@@ -719,6 +722,7 @@ static void flexcan_irq_state(struct net_device *dev, u32 reg_esr)
+ int flt;
+ struct can_berr_counter bec;
+ u32 timestamp;
++ int err;
+
+ timestamp = priv->read(&regs->timer) << 16;
+
+@@ -750,7 +754,9 @@ static void flexcan_irq_state(struct net_device *dev, u32 reg_esr)
+ if (unlikely(new_state == CAN_STATE_BUS_OFF))
+ can_bus_off(dev);
+
+- can_rx_offload_queue_sorted(&priv->offload, skb, timestamp);
++ err = can_rx_offload_queue_sorted(&priv->offload, skb, timestamp);
++ if (err)
++ dev->stats.rx_fifo_errors++;
+ }
+
+ static inline struct flexcan_priv *rx_offload_to_priv(struct can_rx_offload *offload)
+diff --git a/drivers/net/can/rx-offload.c b/drivers/net/can/rx-offload.c
+index 663697439d1c..84cae167e42f 100644
+--- a/drivers/net/can/rx-offload.c
++++ b/drivers/net/can/rx-offload.c
+@@ -107,37 +107,95 @@ static int can_rx_offload_compare(struct sk_buff *a, struct sk_buff *b)
+ return cb_b->timestamp - cb_a->timestamp;
+ }
+
+-static struct sk_buff *can_rx_offload_offload_one(struct can_rx_offload *offload, unsigned int n)
++/**
++ * can_rx_offload_offload_one() - Read one CAN frame from HW
++ * @offload: pointer to rx_offload context
++ * @n: number of mailbox to read
++ *
++ * The task of this function is to read a CAN frame from mailbox @n
++ * from the device and return the mailbox's content as a struct
++ * sk_buff.
++ *
++ * If the struct can_rx_offload::skb_queue exceeds the maximal queue
++ * length (struct can_rx_offload::skb_queue_len_max) or no skb can be
++ * allocated, the mailbox contents are discarded by reading them into an
++ * overflow buffer. This way the mailbox is marked as free by the
++ * driver.
++ *
++ * Return: A pointer to skb containing the CAN frame on success.
++ *
++ * NULL if the mailbox @n is empty.
++ *
++ * ERR_PTR() in case of an error
++ */
++static struct sk_buff *
++can_rx_offload_offload_one(struct can_rx_offload *offload, unsigned int n)
+ {
+- struct sk_buff *skb = NULL;
++ struct sk_buff *skb = NULL, *skb_error = NULL;
+ struct can_rx_offload_cb *cb;
+ struct can_frame *cf;
+ int ret;
+
+- /* If queue is full or skb not available, read to discard mailbox */
+- if (likely(skb_queue_len(&offload->skb_queue) <=
+- offload->skb_queue_len_max))
++ if (likely(skb_queue_len(&offload->skb_queue) <
++ offload->skb_queue_len_max)) {
+ skb = alloc_can_skb(offload->dev, &cf);
++ if (unlikely(!skb))
++ skb_error = ERR_PTR(-ENOMEM); /* skb alloc failed */
++ } else {
++ skb_error = ERR_PTR(-ENOBUFS); /* skb_queue is full */
++ }
+
+- if (!skb) {
++ /* If queue is full or skb not available, drop by reading into
++ * overflow buffer.
++ */
++ if (unlikely(skb_error)) {
+ struct can_frame cf_overflow;
+ u32 timestamp;
+
+ ret = offload->mailbox_read(offload, &cf_overflow,
+ &timestamp, n);
+- if (ret)
+- offload->dev->stats.rx_dropped++;
+
+- return NULL;
++ /* Mailbox was empty. */
++ if (unlikely(!ret))
++ return NULL;
++
++ /* Mailbox has been read and we're dropping it or
++ * there was a problem reading the mailbox.
++ *
++ * Increment error counters in any case.
++ */
++ offload->dev->stats.rx_dropped++;
++ offload->dev->stats.rx_fifo_errors++;
++
++ /* There was a problem reading the mailbox, propagate
++ * error value.
++ */
++ if (unlikely(ret < 0))
++ return ERR_PTR(ret);
++
++ return skb_error;
+ }
+
+ cb = can_rx_offload_get_cb(skb);
+ ret = offload->mailbox_read(offload, cf, &cb->timestamp, n);
+- if (!ret) {
++
++ /* Mailbox was empty. */
++ if (unlikely(!ret)) {
+ kfree_skb(skb);
+ return NULL;
+ }
+
++ /* There was a problem reading the mailbox, propagate error value. */
++ if (unlikely(ret < 0)) {
++ kfree_skb(skb);
++
++ offload->dev->stats.rx_dropped++;
++ offload->dev->stats.rx_fifo_errors++;
++
++ return ERR_PTR(ret);
++ }
++
++ /* Mailbox was read. */
+ return skb;
+ }
+
+@@ -157,8 +215,8 @@ int can_rx_offload_irq_offload_timestamp(struct can_rx_offload *offload, u64 pen
+ continue;
+
+ skb = can_rx_offload_offload_one(offload, i);
+- if (!skb)
+- break;
++ if (IS_ERR_OR_NULL(skb))
++ continue;
+
+ __skb_queue_add_sort(&skb_queue, skb, can_rx_offload_compare);
+ }
+@@ -188,7 +246,13 @@ int can_rx_offload_irq_offload_fifo(struct can_rx_offload *offload)
+ struct sk_buff *skb;
+ int received = 0;
+
+- while ((skb = can_rx_offload_offload_one(offload, 0))) {
++ while (1) {
++ skb = can_rx_offload_offload_one(offload, 0);
++ if (IS_ERR(skb))
++ continue;
++ if (!skb)
++ break;
++
+ skb_queue_tail(&offload->skb_queue, skb);
+ received++;
+ }
+@@ -252,8 +316,10 @@ int can_rx_offload_queue_tail(struct can_rx_offload *offload,
+ struct sk_buff *skb)
+ {
+ if (skb_queue_len(&offload->skb_queue) >
+- offload->skb_queue_len_max)
+- return -ENOMEM;
++ offload->skb_queue_len_max) {
++ kfree_skb(skb);
++ return -ENOBUFS;
++ }
+
+ skb_queue_tail(&offload->skb_queue, skb);
+ can_rx_offload_schedule(offload);
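
The kernel-doc added above gives can_rx_offload_offload_one() a three-way contract: a valid skb, NULL for an empty mailbox, or an ERR_PTR()-encoded errno, which is why the irq paths switch from plain NULL checks to IS_ERR_OR_NULL()/IS_ERR(). A stand-alone sketch of consuming such a contract; the helpers are simplified stand-ins for the kernel's ERR_PTR machinery, not the driver itself:

    #include <errno.h>
    #include <stdio.h>

    /* Simplified versions of the kernel's pointer-encoded errnos. */
    static void *ERR_PTR(long err) { return (void *)err; }
    static int IS_ERR(const void *p)
    {
            return (unsigned long)p >= (unsigned long)-4095;
    }
    static int IS_ERR_OR_NULL(const void *p) { return !p || IS_ERR(p); }

    static int frame = 42;

    static void *offload_one(int mailbox)
    {
            if (mailbox == 0)
                    return NULL;                    /* mailbox empty */
            if (mailbox == 1)
                    return ERR_PTR(-ENOBUFS);       /* queue full, dropped */
            return &frame;                          /* frame read OK */
    }

    int main(void)
    {
            for (int n = 0; n < 3; n++) {
                    void *skb = offload_one(n);

                    if (IS_ERR_OR_NULL(skb)) {      /* as in the irq loops */
                            printf("mailbox %d: %s\n", n,
                                   skb ? "error, keep going" : "empty, stop");
                            continue;
                    }
                    printf("mailbox %d: queue frame %d\n", n, *(int *)skb);
            }
            return 0;
    }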
+diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c
+index 5d6f8977df3f..c0ee0fa90970 100644
+--- a/drivers/net/can/spi/mcp251x.c
++++ b/drivers/net/can/spi/mcp251x.c
+@@ -759,6 +759,7 @@ static void mcp251x_restart_work_handler(struct work_struct *ws)
+ if (priv->after_suspend) {
+ mcp251x_hw_reset(spi);
+ mcp251x_setup(net, spi);
++ priv->force_quit = 0;
+ if (priv->after_suspend & AFTER_SUSPEND_RESTART) {
+ mcp251x_set_normal_mode(spi);
+ } else if (priv->after_suspend & AFTER_SUSPEND_UP) {
+@@ -770,7 +771,6 @@ static void mcp251x_restart_work_handler(struct work_struct *ws)
+ mcp251x_hw_sleep(spi);
+ }
+ priv->after_suspend = 0;
+- priv->force_quit = 0;
+ }
+
+ if (priv->restart_tx) {
+diff --git a/drivers/net/can/usb/peak_usb/pcan_usb.c b/drivers/net/can/usb/peak_usb/pcan_usb.c
+index 5a66c9f53aae..d2539c95adb6 100644
+--- a/drivers/net/can/usb/peak_usb/pcan_usb.c
++++ b/drivers/net/can/usb/peak_usb/pcan_usb.c
+@@ -436,8 +436,8 @@ static int pcan_usb_decode_error(struct pcan_usb_msg_context *mc, u8 n,
+ }
+ if ((n & PCAN_USB_ERROR_BUS_LIGHT) == 0) {
+ /* no error (back to active state) */
+- mc->pdev->dev.can.state = CAN_STATE_ERROR_ACTIVE;
+- return 0;
++ new_state = CAN_STATE_ERROR_ACTIVE;
++ break;
+ }
+ break;
+
+@@ -460,9 +460,9 @@ static int pcan_usb_decode_error(struct pcan_usb_msg_context *mc, u8 n,
+ }
+
+ if ((n & PCAN_USB_ERROR_BUS_HEAVY) == 0) {
+- /* no error (back to active state) */
+- mc->pdev->dev.can.state = CAN_STATE_ERROR_ACTIVE;
+- return 0;
++ /* no error (back to warning state) */
++ new_state = CAN_STATE_ERROR_WARNING;
++ break;
+ }
+ break;
+
+@@ -501,6 +501,11 @@ static int pcan_usb_decode_error(struct pcan_usb_msg_context *mc, u8 n,
+ mc->pdev->dev.can.can_stats.error_warning++;
+ break;
+
++ case CAN_STATE_ERROR_ACTIVE:
++ cf->can_id |= CAN_ERR_CRTL;
++ cf->data[1] = CAN_ERR_CRTL_ACTIVE;
++ break;
++
+ default:
+ /* CAN_STATE_MAX (trick to handle other errors) */
+ cf->can_id |= CAN_ERR_CRTL;
+diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
+index 296286f4fb39..5763ae6c6c6a 100644
+--- a/drivers/net/dsa/sja1105/sja1105_main.c
++++ b/drivers/net/dsa/sja1105/sja1105_main.c
+@@ -591,15 +591,15 @@ static int sja1105_parse_rgmii_delays(struct sja1105_private *priv,
+ int i;
+
+ for (i = 0; i < SJA1105_NUM_PORTS; i++) {
+- if (ports->role == XMII_MAC)
++ if (ports[i].role == XMII_MAC)
+ continue;
+
+- if (ports->phy_mode == PHY_INTERFACE_MODE_RGMII_RXID ||
+- ports->phy_mode == PHY_INTERFACE_MODE_RGMII_ID)
++ if (ports[i].phy_mode == PHY_INTERFACE_MODE_RGMII_RXID ||
++ ports[i].phy_mode == PHY_INTERFACE_MODE_RGMII_ID)
+ priv->rgmii_rx_delay[i] = true;
+
+- if (ports->phy_mode == PHY_INTERFACE_MODE_RGMII_TXID ||
+- ports->phy_mode == PHY_INTERFACE_MODE_RGMII_ID)
++ if (ports[i].phy_mode == PHY_INTERFACE_MODE_RGMII_TXID ||
++ ports[i].phy_mode == PHY_INTERFACE_MODE_RGMII_ID)
+ priv->rgmii_tx_delay[i] = true;
+
+ if ((priv->rgmii_rx_delay[i] || priv->rgmii_tx_delay[i]) &&
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index 06e2581b28ea..2f0011465af0 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -1996,8 +1996,6 @@ static void reset_umac(struct bcmgenet_priv *priv)
+
+ /* issue soft reset with (rg)mii loopback to ensure a stable rxclk */
+ bcmgenet_umac_writel(priv, CMD_SW_RESET | CMD_LCL_LOOP_EN, UMAC_CMD);
+- udelay(2);
+- bcmgenet_umac_writel(priv, 0, UMAC_CMD);
+ }
+
+ static void bcmgenet_intr_disable(struct bcmgenet_priv *priv)
+@@ -2619,8 +2617,10 @@ static void bcmgenet_irq_task(struct work_struct *work)
+ spin_unlock_irq(&priv->lock);
+
+ if (status & UMAC_IRQ_PHY_DET_R &&
+- priv->dev->phydev->autoneg != AUTONEG_ENABLE)
++ priv->dev->phydev->autoneg != AUTONEG_ENABLE) {
+ phy_init_hw(priv->dev->phydev);
++ genphy_config_aneg(priv->dev->phydev);
++ }
+
+ /* Link UP/DOWN event */
+ if (status & UMAC_IRQ_LINK_EVENT)
+@@ -3643,6 +3643,7 @@ static int bcmgenet_resume(struct device *d)
+ phy_init_hw(dev->phydev);
+
+ /* Speed settings must be restored */
++ genphy_config_aneg(dev->phydev);
+ bcmgenet_mii_config(priv->dev, false);
+
+ bcmgenet_set_hw_addr(priv, dev->dev_addr);
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index e7c291bf4ed1..dbe18cdf6c1b 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -181,8 +181,38 @@ int bcmgenet_mii_config(struct net_device *dev, bool init)
+ const char *phy_name = NULL;
+ u32 id_mode_dis = 0;
+ u32 port_ctrl;
++ int bmcr = -1;
++ int ret;
+ u32 reg;
+
++ /* MAC clocking workaround during reset of umac state machines */
++ reg = bcmgenet_umac_readl(priv, UMAC_CMD);
++ if (reg & CMD_SW_RESET) {
++ /* An MII PHY must be isolated to prevent TXC contention */
++ if (priv->phy_interface == PHY_INTERFACE_MODE_MII) {
++ ret = phy_read(phydev, MII_BMCR);
++ if (ret >= 0) {
++ bmcr = ret;
++ ret = phy_write(phydev, MII_BMCR,
++ bmcr | BMCR_ISOLATE);
++ }
++ if (ret) {
++ netdev_err(dev, "failed to isolate PHY\n");
++ return ret;
++ }
++ }
++ /* Switch MAC clocking to RGMII generated clock */
++ bcmgenet_sys_writel(priv, PORT_MODE_EXT_GPHY, SYS_PORT_CTRL);
++ /* Ensure 5 clks with Rx disabled
++ * followed by 5 clks with Reset asserted
++ */
++ udelay(4);
++ reg &= ~(CMD_SW_RESET | CMD_LCL_LOOP_EN);
++ bcmgenet_umac_writel(priv, reg, UMAC_CMD);
++ /* Ensure 5 more clocks before Rx is enabled */
++ udelay(2);
++ }
++
+ priv->ext_phy = !priv->internal_phy &&
+ (priv->phy_interface != PHY_INTERFACE_MODE_MOCA);
+
+@@ -214,6 +244,9 @@ int bcmgenet_mii_config(struct net_device *dev, bool init)
+ phy_set_max_speed(phydev, SPEED_100);
+ bcmgenet_sys_writel(priv,
+ PORT_MODE_EXT_EPHY, SYS_PORT_CTRL);
++ /* Restore the MII PHY after isolation */
++ if (bmcr >= 0)
++ phy_write(phydev, MII_BMCR, bmcr);
+ break;
+
+ case PHY_INTERFACE_MODE_REVMII:
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 35b59b5edf0f..a09a4be1d055 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -4393,6 +4393,7 @@ static int macb_remove(struct platform_device *pdev)
+ mdiobus_free(bp->mii_bus);
+
+ unregister_netdev(dev);
++ tasklet_kill(&bp->hresp_err_tasklet);
+ pm_runtime_disable(&pdev->dev);
+ pm_runtime_dont_use_autosuspend(&pdev->dev);
+ if (!pm_runtime_suspended(&pdev->dev)) {
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index e5610a4da539..9123f3febee1 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -3580,6 +3580,11 @@ fec_drv_remove(struct platform_device *pdev)
+ struct net_device *ndev = platform_get_drvdata(pdev);
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ struct device_node *np = pdev->dev.of_node;
++ int ret;
++
++ ret = pm_runtime_get_sync(&pdev->dev);
++ if (ret < 0)
++ return ret;
+
+ cancel_work_sync(&fep->tx_timeout_work);
+ fec_ptp_stop(pdev);
+@@ -3587,13 +3592,17 @@ fec_drv_remove(struct platform_device *pdev)
+ fec_enet_mii_remove(fep);
+ if (fep->reg_phy)
+ regulator_disable(fep->reg_phy);
+- pm_runtime_put(&pdev->dev);
+- pm_runtime_disable(&pdev->dev);
++
+ if (of_phy_is_fixed_link(np))
+ of_phy_deregister_fixed_link(np);
+ of_node_put(fep->phy_node);
+ free_netdev(ndev);
+
++ clk_disable_unprepare(fep->clk_ahb);
++ clk_disable_unprepare(fep->clk_ipg);
++ pm_runtime_put_noidle(&pdev->dev);
++ pm_runtime_disable(&pdev->dev);
++
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index aca95f64bde8..9b7a8db9860f 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -544,7 +544,7 @@ static int gve_alloc_queue_page_list(struct gve_priv *priv, u32 id,
+ }
+
+ qpl->id = id;
+- qpl->num_entries = pages;
++ qpl->num_entries = 0;
+ qpl->pages = kvzalloc(pages * sizeof(*qpl->pages), GFP_KERNEL);
+ /* caller handles clean up */
+ if (!qpl->pages)
+@@ -562,6 +562,7 @@ static int gve_alloc_queue_page_list(struct gve_priv *priv, u32 id,
+ /* caller handles clean up */
+ if (err)
+ return -ENOMEM;
++ qpl->num_entries++;
+ }
+ priv->num_registered_pages += pages;
+
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
+index 906cf68d3453..4a53bfc017b1 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_common.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
+@@ -1861,7 +1861,8 @@ i40e_status i40e_aq_get_link_info(struct i40e_hw *hw,
+ hw->aq.fw_min_ver < 40)) && hw_link_info->phy_type == 0xE)
+ hw_link_info->phy_type = I40E_PHY_TYPE_10GBASE_SFPP_CU;
+
+- if (hw->flags & I40E_HW_FLAG_AQ_PHY_ACCESS_CAPABLE) {
++ if (hw->flags & I40E_HW_FLAG_AQ_PHY_ACCESS_CAPABLE &&
++ hw->mac.type != I40E_MAC_X722) {
+ __le32 tmp;
+
+ memcpy(&tmp, resp->link_type, sizeof(tmp));
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 9d2b50964a08..fa857b60ba2b 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -336,7 +336,7 @@ iavf_map_vector_to_rxq(struct iavf_adapter *adapter, int v_idx, int r_idx)
+ q_vector->rx.target_itr = ITR_TO_REG(rx_ring->itr_setting);
+ q_vector->ring_mask |= BIT(r_idx);
+ wr32(hw, IAVF_VFINT_ITRN1(IAVF_RX_ITR, q_vector->reg_idx),
+- q_vector->rx.current_itr);
++ q_vector->rx.current_itr >> 1);
+ q_vector->rx.current_itr = q_vector->rx.target_itr;
+ }
+
+@@ -362,7 +362,7 @@ iavf_map_vector_to_txq(struct iavf_adapter *adapter, int v_idx, int t_idx)
+ q_vector->tx.target_itr = ITR_TO_REG(tx_ring->itr_setting);
+ q_vector->num_ringpairs++;
+ wr32(hw, IAVF_VFINT_ITRN1(IAVF_TX_ITR, q_vector->reg_idx),
+- q_vector->tx.target_itr);
++ q_vector->tx.target_itr >> 1);
+ q_vector->tx.current_itr = q_vector->tx.target_itr;
+ }
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
+index 2a232504379d..602b0fd84c29 100644
+--- a/drivers/net/ethernet/intel/ice/ice_sched.c
++++ b/drivers/net/ethernet/intel/ice/ice_sched.c
+@@ -1052,7 +1052,7 @@ enum ice_status ice_sched_query_res_alloc(struct ice_hw *hw)
+ struct ice_aqc_query_txsched_res_resp *buf;
+ enum ice_status status = 0;
+ __le16 max_sibl;
+- u8 i;
++ u16 i;
+
+ if (hw->layer_info)
+ return status;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index 35945cdd0a61..3ac6104e9924 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -1085,7 +1085,7 @@ static int esw_create_offloads_fdb_tables(struct mlx5_eswitch *esw, int nvports)
+ MLX5_CAP_GEN(dev, max_flow_counter_15_0);
+ fdb_max = 1 << MLX5_CAP_ESW_FLOWTABLE_FDB(dev, log_max_ft_size);
+
+- esw_debug(dev, "Create offloads FDB table, min (max esw size(2^%d), max counters(%d), groups(%d), max flow table size(2^%d))\n",
++ esw_debug(dev, "Create offloads FDB table, min (max esw size(2^%d), max counters(%d), groups(%d), max flow table size(%d))\n",
+ MLX5_CAP_ESW_FLOWTABLE_FDB(dev, log_max_ft_size),
+ max_flow_counter, ESW_OFFLOADS_NUM_GROUPS,
+ fdb_max);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
+index 7879e1746297..366bda1bb1c3 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
+@@ -183,7 +183,8 @@ static bool mlx5_eswitch_offload_is_uplink_port(const struct mlx5_eswitch *esw,
+ u32 port_mask, port_value;
+
+ if (MLX5_CAP_ESW_FLOWTABLE(esw->dev, flow_source))
+- return spec->flow_context.flow_source == MLX5_VPORT_UPLINK;
++ return spec->flow_context.flow_source ==
++ MLX5_FLOW_CONTEXT_FLOW_SOURCE_UPLINK;
+
+ port_mask = MLX5_GET(fte_match_param, spec->match_criteria,
+ misc_parameters.source_port);
+diff --git a/drivers/net/ethernet/mscc/ocelot.h b/drivers/net/ethernet/mscc/ocelot.h
+index f7eeb4806897..aa372aba66c8 100644
+--- a/drivers/net/ethernet/mscc/ocelot.h
++++ b/drivers/net/ethernet/mscc/ocelot.h
+@@ -479,7 +479,7 @@ void __ocelot_write_ix(struct ocelot *ocelot, u32 val, u32 reg, u32 offset);
+ #define ocelot_write_rix(ocelot, val, reg, ri) __ocelot_write_ix(ocelot, val, reg, reg##_RSZ * (ri))
+ #define ocelot_write(ocelot, val, reg) __ocelot_write_ix(ocelot, val, reg, 0)
+
+-void __ocelot_rmw_ix(struct ocelot *ocelot, u32 val, u32 reg, u32 mask,
++void __ocelot_rmw_ix(struct ocelot *ocelot, u32 val, u32 mask, u32 reg,
+ u32 offset);
+ #define ocelot_rmw_ix(ocelot, val, m, reg, gi, ri) __ocelot_rmw_ix(ocelot, val, m, reg, reg##_GSZ * (gi) + reg##_RSZ * (ri))
+ #define ocelot_rmw_gix(ocelot, val, m, reg, gi) __ocelot_rmw_ix(ocelot, val, m, reg, reg##_GSZ * (gi))
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+index 9c73fb759b57..ff830bb5fcaf 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+@@ -438,7 +438,7 @@ static void dwmac4_set_filter(struct mac_device_info *hw,
+ * bits used depends on the hardware configuration
+ * selected at core configuration time.
+ */
+- int bit_nr = bitrev32(~crc32_le(~0, ha->addr,
++ u32 bit_nr = bitrev32(~crc32_le(~0, ha->addr,
+ ETH_ALEN)) >> (32 - mcbitslog2);
+ /* The most significant bit determines the register to
+ * use (H/L) while the other 5 bits determine the bit
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
+index 46d74f407aab..341c7a70fc71 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
+@@ -196,6 +196,7 @@ static void dwxgmac2_config_cbs(struct mac_device_info *hw,
+ writel(low_credit, ioaddr + XGMAC_MTL_TCx_LOCREDIT(queue));
+
+ value = readl(ioaddr + XGMAC_MTL_TCx_ETS_CONTROL(queue));
++ value &= ~XGMAC_TSA;
+ value |= XGMAC_CC | XGMAC_CBS;
+ writel(value, ioaddr + XGMAC_MTL_TCx_ETS_CONTROL(queue));
+ }
+@@ -361,7 +362,7 @@ static void dwxgmac2_set_filter(struct mac_device_info *hw,
+ value |= XGMAC_FILTER_HMC;
+
+ netdev_for_each_mc_addr(ha, dev) {
+- int nr = (bitrev32(~crc32_le(~0, ha->addr, 6)) >>
++ u32 nr = (bitrev32(~crc32_le(~0, ha->addr, 6)) >>
+ (32 - mcbitslog2));
+ mc_filter[nr >> 5] |= (1 << (nr & 0x1F));
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
+index a4f236e3593e..28dc3b33606e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
+@@ -441,6 +441,7 @@ static void dwxgmac2_enable_tso(void __iomem *ioaddr, bool en, u32 chan)
+ static void dwxgmac2_qmode(void __iomem *ioaddr, u32 channel, u8 qmode)
+ {
+ u32 value = readl(ioaddr + XGMAC_MTL_TXQ_OPMODE(channel));
++ u32 flow = readl(ioaddr + XGMAC_RX_FLOW_CTRL);
+
+ value &= ~XGMAC_TXQEN;
+ if (qmode != MTL_QUEUE_AVB) {
+@@ -448,6 +449,7 @@ static void dwxgmac2_qmode(void __iomem *ioaddr, u32 channel, u8 qmode)
+ writel(0, ioaddr + XGMAC_MTL_TCx_ETS_CONTROL(channel));
+ } else {
+ value |= 0x1 << XGMAC_TXQEN_SHIFT;
++ writel(flow & (~XGMAC_RFE), ioaddr + XGMAC_RX_FLOW_CTRL);
+ }
+
+ writel(value, ioaddr + XGMAC_MTL_TXQ_OPMODE(channel));
+diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c
+index 940192c057b6..16b86fa60962 100644
+--- a/drivers/net/macvlan.c
++++ b/drivers/net/macvlan.c
+@@ -359,10 +359,11 @@ static void macvlan_broadcast_enqueue(struct macvlan_port *port,
+ }
+ spin_unlock(&port->bc_queue.lock);
+
++ schedule_work(&port->bc_work);
++
+ if (err)
+ goto free_nskb;
+
+- schedule_work(&port->bc_work);
+ return;
+
+ free_nskb:
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index 2a79c7a7e920..9fa1c93ece7a 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -66,8 +66,8 @@ static int mdiobus_register_reset(struct mdio_device *mdiodev)
+ struct reset_control *reset = NULL;
+
+ if (mdiodev->dev.of_node)
+- reset = devm_reset_control_get_exclusive(&mdiodev->dev,
+- "phy");
++ reset = of_reset_control_get_exclusive(mdiodev->dev.of_node,
++ "phy");
+ if (IS_ERR(reset)) {
+ if (PTR_ERR(reset) == -ENOENT || PTR_ERR(reset) == -ENOTSUPP)
+ reset = NULL;
+@@ -111,6 +111,8 @@ int mdiobus_unregister_device(struct mdio_device *mdiodev)
+ if (mdiodev->bus->mdio_map[mdiodev->addr] != mdiodev)
+ return -EINVAL;
+
++ reset_control_put(mdiodev->reset_ctrl);
++
+ mdiodev->bus->mdio_map[mdiodev->addr] = NULL;
+
+ return 0;
+diff --git a/drivers/net/slip/slip.c b/drivers/net/slip/slip.c
+index 4d479e3c817d..2a91c192659f 100644
+--- a/drivers/net/slip/slip.c
++++ b/drivers/net/slip/slip.c
+@@ -855,6 +855,7 @@ err_free_chan:
+ sl->tty = NULL;
+ tty->disc_data = NULL;
+ clear_bit(SLF_INUSE, &sl->flags);
++ sl_free_netdev(sl->dev);
+ free_netdev(sl->dev);
+
+ err_exit:
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index d320684d25b2..a5c809c85f6d 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -158,9 +158,11 @@ void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl)
+ struct nvme_ns *ns;
+
+ mutex_lock(&ctrl->scan_lock);
++ down_read(&ctrl->namespaces_rwsem);
+ list_for_each_entry(ns, &ctrl->namespaces, list)
+ if (nvme_mpath_clear_current_path(ns))
+ kblockd_schedule_work(&ns->head->requeue_work);
++ up_read(&ctrl->namespaces_rwsem);
+ mutex_unlock(&ctrl->scan_lock);
+ }
+
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 842ef876724f..439e66769f25 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -2118,8 +2118,16 @@ err_unreg_client:
+
+ static void __exit nvme_rdma_cleanup_module(void)
+ {
++ struct nvme_rdma_ctrl *ctrl;
++
+ nvmf_unregister_transport(&nvme_rdma_transport);
+ ib_unregister_client(&nvme_rdma_ib_client);
++
++ mutex_lock(&nvme_rdma_ctrl_mutex);
++ list_for_each_entry(ctrl, &nvme_rdma_ctrl_list, list)
++ nvme_delete_ctrl(&ctrl->ctrl);
++ mutex_unlock(&nvme_rdma_ctrl_mutex);
++ flush_workqueue(nvme_delete_wq);
+ }
+
+ module_init(nvme_rdma_init_module);
+diff --git a/drivers/pinctrl/intel/pinctrl-cherryview.c b/drivers/pinctrl/intel/pinctrl-cherryview.c
+index 17a248b723b9..8dfaf8e8c3a0 100644
+--- a/drivers/pinctrl/intel/pinctrl-cherryview.c
++++ b/drivers/pinctrl/intel/pinctrl-cherryview.c
+@@ -147,6 +147,7 @@ struct chv_pin_context {
+ * @pctldesc: Pin controller description
+ * @pctldev: Pointer to the pin controller device
+ * @chip: GPIO chip in this pin controller
++ * @irqchip: IRQ chip in this pin controller
+ * @regs: MMIO registers
+ * @intr_lines: Stores mapping between 16 HW interrupt wires and GPIO
+ * offset (in GPIO number space)
+@@ -162,6 +163,7 @@ struct chv_pinctrl {
+ struct pinctrl_desc pctldesc;
+ struct pinctrl_dev *pctldev;
+ struct gpio_chip chip;
++ struct irq_chip irqchip;
+ void __iomem *regs;
+ unsigned intr_lines[16];
+ const struct chv_community *community;
+@@ -1466,16 +1468,6 @@ static int chv_gpio_irq_type(struct irq_data *d, unsigned int type)
+ return 0;
+ }
+
+-static struct irq_chip chv_gpio_irqchip = {
+- .name = "chv-gpio",
+- .irq_startup = chv_gpio_irq_startup,
+- .irq_ack = chv_gpio_irq_ack,
+- .irq_mask = chv_gpio_irq_mask,
+- .irq_unmask = chv_gpio_irq_unmask,
+- .irq_set_type = chv_gpio_irq_type,
+- .flags = IRQCHIP_SKIP_SET_WAKE,
+-};
+-
+ static void chv_gpio_irq_handler(struct irq_desc *desc)
+ {
+ struct gpio_chip *gc = irq_desc_get_handler_data(desc);
+@@ -1615,7 +1607,15 @@ static int chv_gpio_probe(struct chv_pinctrl *pctrl, int irq)
+ }
+ }
+
+- ret = gpiochip_irqchip_add(chip, &chv_gpio_irqchip, 0,
++ pctrl->irqchip.name = "chv-gpio";
++ pctrl->irqchip.irq_startup = chv_gpio_irq_startup;
++ pctrl->irqchip.irq_ack = chv_gpio_irq_ack;
++ pctrl->irqchip.irq_mask = chv_gpio_irq_mask;
++ pctrl->irqchip.irq_unmask = chv_gpio_irq_unmask;
++ pctrl->irqchip.irq_set_type = chv_gpio_irq_type;
++ pctrl->irqchip.flags = IRQCHIP_SKIP_SET_WAKE;
++
++ ret = gpiochip_irqchip_add(chip, &pctrl->irqchip, 0,
+ handle_bad_irq, IRQ_TYPE_NONE);
+ if (ret) {
+ dev_err(pctrl->dev, "failed to add IRQ chip\n");
+@@ -1632,7 +1632,7 @@ static int chv_gpio_probe(struct chv_pinctrl *pctrl, int irq)
+ }
+ }
+
+- gpiochip_set_chained_irqchip(chip, &chv_gpio_irqchip, irq,
++ gpiochip_set_chained_irqchip(chip, &pctrl->irqchip, irq,
+ chv_gpio_irq_handler);
+ return 0;
+ }
+diff --git a/drivers/platform/x86/hp-wmi.c b/drivers/platform/x86/hp-wmi.c
+index 2521e45280b8..2daad1430945 100644
+--- a/drivers/platform/x86/hp-wmi.c
++++ b/drivers/platform/x86/hp-wmi.c
+@@ -65,7 +65,7 @@ struct bios_args {
+ u32 command;
+ u32 commandtype;
+ u32 datasize;
+- u32 data;
++ u8 data[128];
+ };
+
+ enum hp_wmi_commandtype {
+@@ -216,7 +216,7 @@ static int hp_wmi_perform_query(int query, enum hp_wmi_command command,
+ .command = command,
+ .commandtype = query,
+ .datasize = insize,
+- .data = 0,
++ .data = { 0 },
+ };
+ struct acpi_buffer input = { sizeof(struct bios_args), &args };
+ struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
+@@ -228,7 +228,7 @@ static int hp_wmi_perform_query(int query, enum hp_wmi_command command,
+
+ if (WARN_ON(insize > sizeof(args.data)))
+ return -EINVAL;
+- memcpy(&args.data, buffer, insize);
++ memcpy(&args.data[0], buffer, insize);
+
+ wmi_evaluate_method(HPWMI_BIOS_GUID, 0, mid, &input, &output);
+
+@@ -380,7 +380,7 @@ static int hp_wmi_rfkill2_refresh(void)
+ int err, i;
+
+ err = hp_wmi_perform_query(HPWMI_WIRELESS2_QUERY, HPWMI_READ, &state,
+- 0, sizeof(state));
++ sizeof(state), sizeof(state));
+ if (err)
+ return err;
+
+@@ -777,7 +777,7 @@ static int __init hp_wmi_rfkill2_setup(struct platform_device *device)
+ int err, i;
+
+ err = hp_wmi_perform_query(HPWMI_WIRELESS2_QUERY, HPWMI_READ, &state,
+- 0, sizeof(state));
++ sizeof(state), sizeof(state));
+ if (err)
+ return err < 0 ? err : -EINVAL;
+
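
The driver change above is at bottom a bounds problem: the old u32 data member is 4 bytes, yet hp_wmi_perform_query() memcpy()s insize bytes into it, and callers now pass the real input size instead of 0. A stand-alone sketch of the bounded copy; fill_args() is a hypothetical helper, not the driver's API:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct bios_args {
            uint32_t signature;
            uint32_t command;
            uint32_t commandtype;
            uint32_t datasize;
            uint8_t data[128];      /* was: u32 data - only 4 bytes */
    };

    static int fill_args(struct bios_args *args, const void *buf, size_t insize)
    {
            if (insize > sizeof(args->data))        /* mirrors the WARN_ON */
                    return -1;
            args->datasize = (uint32_t)insize;
            memcpy(args->data, buf, insize);        /* now always in bounds */
            return 0;
    }

    int main(void)
    {
            struct bios_args args = { 0 };
            uint8_t state[16] = { 0 };

            /* 16 bytes would have overflowed the old 4-byte member. */
            printf("%d\n", fill_args(&args, state, sizeof(state)));
            return 0;
    }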
+diff --git a/drivers/pwm/pwm-bcm-iproc.c b/drivers/pwm/pwm-bcm-iproc.c
+index d961a8207b1c..31b01035d0ab 100644
+--- a/drivers/pwm/pwm-bcm-iproc.c
++++ b/drivers/pwm/pwm-bcm-iproc.c
+@@ -187,6 +187,7 @@ static int iproc_pwmc_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ static const struct pwm_ops iproc_pwm_ops = {
+ .apply = iproc_pwmc_apply,
+ .get_state = iproc_pwmc_get_state,
++ .owner = THIS_MODULE,
+ };
+
+ static int iproc_pwmc_probe(struct platform_device *pdev)
+diff --git a/drivers/reset/core.c b/drivers/reset/core.c
+index 213ff40dda11..36b1ff69b1e2 100644
+--- a/drivers/reset/core.c
++++ b/drivers/reset/core.c
+@@ -748,6 +748,7 @@ static void reset_control_array_put(struct reset_control_array *resets)
+ for (i = 0; i < resets->num_rstcs; i++)
+ __reset_control_put_internal(resets->rstc[i]);
+ mutex_unlock(&reset_list_mutex);
++ kfree(resets);
+ }
+
+ /**
+diff --git a/drivers/soc/imx/gpc.c b/drivers/soc/imx/gpc.c
+index d9231bd3c691..98b9d9a902ae 100644
+--- a/drivers/soc/imx/gpc.c
++++ b/drivers/soc/imx/gpc.c
+@@ -249,13 +249,13 @@ static struct genpd_power_state imx6_pm_domain_pu_state = {
+ };
+
+ static struct imx_pm_domain imx_gpc_domains[] = {
+- [GPC_PGC_DOMAIN_ARM] {
++ [GPC_PGC_DOMAIN_ARM] = {
+ .base = {
+ .name = "ARM",
+ .flags = GENPD_FLAG_ALWAYS_ON,
+ },
+ },
+- [GPC_PGC_DOMAIN_PU] {
++ [GPC_PGC_DOMAIN_PU] = {
+ .base = {
+ .name = "PU",
+ .power_off = imx6_pm_domain_power_off,
+@@ -266,7 +266,7 @@ static struct imx_pm_domain imx_gpc_domains[] = {
+ .reg_offs = 0x260,
+ .cntr_pdn_bit = 0,
+ },
+- [GPC_PGC_DOMAIN_DISPLAY] {
++ [GPC_PGC_DOMAIN_DISPLAY] = {
+ .base = {
+ .name = "DISPLAY",
+ .power_off = imx6_pm_domain_power_off,
+@@ -275,7 +275,7 @@ static struct imx_pm_domain imx_gpc_domains[] = {
+ .reg_offs = 0x240,
+ .cntr_pdn_bit = 4,
+ },
+- [GPC_PGC_DOMAIN_PCI] {
++ [GPC_PGC_DOMAIN_PCI] = {
+ .base = {
+ .name = "PCI",
+ .power_off = imx6_pm_domain_power_off,
+diff --git a/drivers/soundwire/intel.c b/drivers/soundwire/intel.c
+index ec25a71d0887..db9c138adb1f 100644
+--- a/drivers/soundwire/intel.c
++++ b/drivers/soundwire/intel.c
+@@ -765,7 +765,7 @@ static int intel_register_dai(struct sdw_intel *sdw)
+ /* Create PCM DAIs */
+ stream = &cdns->pcm;
+
+- ret = intel_create_dai(cdns, dais, INTEL_PDI_IN, stream->num_in,
++ ret = intel_create_dai(cdns, dais, INTEL_PDI_IN, cdns->pcm.num_in,
+ off, stream->num_ch_in, true);
+ if (ret)
+ return ret;
+@@ -796,7 +796,7 @@ static int intel_register_dai(struct sdw_intel *sdw)
+ if (ret)
+ return ret;
+
+- off += cdns->pdm.num_bd;
++ off += cdns->pdm.num_out;
+ ret = intel_create_dai(cdns, dais, INTEL_PDI_BD, cdns->pdm.num_bd,
+ off, stream->num_ch_bd, false);
+ if (ret)
+diff --git a/drivers/staging/rtl8192e/rtl8192e/rtl_core.c b/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
+index f932cb15e4e5..c702ee9691b1 100644
+--- a/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
++++ b/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
+@@ -1616,14 +1616,15 @@ static void _rtl92e_hard_data_xmit(struct sk_buff *skb, struct net_device *dev,
+ memcpy((unsigned char *)(skb->cb), &dev, sizeof(dev));
+ skb_push(skb, priv->rtllib->tx_headroom);
+ ret = _rtl92e_tx(dev, skb);
+- if (ret != 0)
+- kfree_skb(skb);
+
+ if (queue_index != MGNT_QUEUE) {
+ priv->rtllib->stats.tx_bytes += (skb->len -
+ priv->rtllib->tx_headroom);
+ priv->rtllib->stats.tx_packets++;
+ }
++
++ if (ret != 0)
++ kfree_skb(skb);
+ }
+
+ static int _rtl92e_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
+diff --git a/drivers/staging/rtl8723bs/os_dep/sdio_intf.c b/drivers/staging/rtl8723bs/os_dep/sdio_intf.c
+index 540a7eed621d..96b1437d3b25 100644
+--- a/drivers/staging/rtl8723bs/os_dep/sdio_intf.c
++++ b/drivers/staging/rtl8723bs/os_dep/sdio_intf.c
+@@ -18,18 +18,13 @@
+ static const struct sdio_device_id sdio_ids[] =
+ {
+ { SDIO_DEVICE(0x024c, 0x0523), },
++ { SDIO_DEVICE(0x024c, 0x0525), },
+ { SDIO_DEVICE(0x024c, 0x0623), },
+ { SDIO_DEVICE(0x024c, 0x0626), },
+ { SDIO_DEVICE(0x024c, 0xb723), },
+ { /* end: all zeroes */ },
+ };
+-static const struct acpi_device_id acpi_ids[] = {
+- {"OBDA8723", 0x0000},
+- {}
+-};
+-
+ MODULE_DEVICE_TABLE(sdio, sdio_ids);
+-MODULE_DEVICE_TABLE(acpi, acpi_ids);
+
+ static int rtw_drv_init(struct sdio_func *func, const struct sdio_device_id *id);
+ static void rtw_dev_remove(struct sdio_func *func);
+diff --git a/drivers/staging/wilc1000/wilc_hif.c b/drivers/staging/wilc1000/wilc_hif.c
+index 9345cabe3c93..b5f3781805c4 100644
+--- a/drivers/staging/wilc1000/wilc_hif.c
++++ b/drivers/staging/wilc1000/wilc_hif.c
+@@ -477,16 +477,21 @@ void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ memcpy(&param->supp_rates[1], rates_ie + 2, rates_len);
+ }
+
+- supp_rates_ie = cfg80211_find_ie(WLAN_EID_EXT_SUPP_RATES, ies->data,
+- ies->len);
+- if (supp_rates_ie) {
+- if (supp_rates_ie[1] > (WILC_MAX_RATES_SUPPORTED - rates_len))
+- param->supp_rates[0] = WILC_MAX_RATES_SUPPORTED;
+- else
+- param->supp_rates[0] += supp_rates_ie[1];
+-
+- memcpy(&param->supp_rates[rates_len + 1], supp_rates_ie + 2,
+- (param->supp_rates[0] - rates_len));
++ if (rates_len < WILC_MAX_RATES_SUPPORTED) {
++ supp_rates_ie = cfg80211_find_ie(WLAN_EID_EXT_SUPP_RATES,
++ ies->data, ies->len);
++ if (supp_rates_ie) {
++ u8 ext_rates = supp_rates_ie[1];
++
++ if (ext_rates > (WILC_MAX_RATES_SUPPORTED - rates_len))
++ param->supp_rates[0] = WILC_MAX_RATES_SUPPORTED;
++ else
++ param->supp_rates[0] += ext_rates;
++
++ memcpy(&param->supp_rates[rates_len + 1],
++ supp_rates_ie + 2,
++ (param->supp_rates[0] - rates_len));
++ }
+ }
+
+ ht_ie = cfg80211_find_ie(WLAN_EID_HT_CAPABILITY, ies->data, ies->len);
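The wilc1000 rework above adds the bounds check that was missing around the extended-supported-rates copy: if the basic rates already fill the array, the memcpy could run past the end of supp_rates[]. Roughly the same clamp-then-copy pattern in isolation (WILC_MAX_RATES is a stand-in for WILC_MAX_RATES_SUPPORTED):

#include <stdio.h>
#include <string.h>

#define WILC_MAX_RATES 12

static size_t append_rates(unsigned char *dst, size_t cur,
                           const unsigned char *ext, size_t ext_len)
{
        if (cur >= WILC_MAX_RATES)              /* no room left: skip */
                return cur;
        if (ext_len > WILC_MAX_RATES - cur)
                ext_len = WILC_MAX_RATES - cur; /* clamp to capacity */
        memcpy(dst + cur, ext, ext_len);
        return cur + ext_len;
}

int main(void)
{
        unsigned char rates[WILC_MAX_RATES] = { 2, 4, 11, 22 };
        const unsigned char ext[] = { 12, 18, 24, 36, 48, 72, 96, 108, 1, 2 };
        size_t n = append_rates(rates, 4, ext, sizeof(ext));

        printf("total rates: %zu\n", n);        /* clamped to 12 */
        return 0;
}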
+diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
+index 5668a44e0653..0439ab5ba5cc 100644
+--- a/drivers/thunderbolt/switch.c
++++ b/drivers/thunderbolt/switch.c
+@@ -168,7 +168,7 @@ static int nvm_validate_and_write(struct tb_switch *sw)
+
+ static int nvm_authenticate_host(struct tb_switch *sw)
+ {
+- int ret;
++ int ret = 0;
+
+ /*
+ * Root switch NVM upgrade requires that we disconnect the
+@@ -176,6 +176,8 @@ static int nvm_authenticate_host(struct tb_switch *sw)
+ * already).
+ */
+ if (!sw->safe_mode) {
++ u32 status;
++
+ ret = tb_domain_disconnect_all_paths(sw->tb);
+ if (ret)
+ return ret;
+@@ -184,7 +186,16 @@ static int nvm_authenticate_host(struct tb_switch *sw)
+ * everything goes well so getting timeout is expected.
+ */
+ ret = dma_port_flash_update_auth(sw->dma_port);
+- return ret == -ETIMEDOUT ? 0 : ret;
++ if (!ret || ret == -ETIMEDOUT)
++ return 0;
++
++ /*
++ * Any error from update auth operation requires power
++ * cycling of the host router.
++ */
++ tb_sw_warn(sw, "failed to authenticate NVM, power cycling\n");
++ if (dma_port_flash_update_auth_status(sw->dma_port, &status) > 0)
++ nvm_set_auth_status(sw, status);
+ }
+
+ /*
+@@ -192,7 +203,7 @@ static int nvm_authenticate_host(struct tb_switch *sw)
+ * switch.
+ */
+ dma_port_power_cycle(sw->dma_port);
+- return 0;
++ return ret;
+ }
+
+ static int nvm_authenticate_device(struct tb_switch *sw)
+@@ -200,8 +211,16 @@ static int nvm_authenticate_device(struct tb_switch *sw)
+ int ret, retries = 10;
+
+ ret = dma_port_flash_update_auth(sw->dma_port);
+- if (ret && ret != -ETIMEDOUT)
++ switch (ret) {
++ case 0:
++ case -ETIMEDOUT:
++ case -EACCES:
++ case -EINVAL:
++ /* Power cycle is required */
++ break;
++ default:
+ return ret;
++ }
+
+ /*
+ * Poll here for the authentication status. It takes some time
+@@ -887,12 +906,13 @@ int tb_dp_port_set_hops(struct tb_port *port, unsigned int video,
+ */
+ bool tb_dp_port_is_enabled(struct tb_port *port)
+ {
+- u32 data;
++ u32 data[2];
+
+- if (tb_port_read(port, &data, TB_CFG_PORT, port->cap_adap, 1))
++ if (tb_port_read(port, data, TB_CFG_PORT, port->cap_adap,
++ ARRAY_SIZE(data)))
+ return false;
+
+- return !!(data & (TB_DP_VIDEO_EN | TB_DP_AUX_EN));
++ return !!(data[0] & (TB_DP_VIDEO_EN | TB_DP_AUX_EN));
+ }
+
+ /**
+@@ -905,19 +925,21 @@ bool tb_dp_port_is_enabled(struct tb_port *port)
+ */
+ int tb_dp_port_enable(struct tb_port *port, bool enable)
+ {
+- u32 data;
++ u32 data[2];
+ int ret;
+
+- ret = tb_port_read(port, &data, TB_CFG_PORT, port->cap_adap, 1);
++ ret = tb_port_read(port, data, TB_CFG_PORT, port->cap_adap,
++ ARRAY_SIZE(data));
+ if (ret)
+ return ret;
+
+ if (enable)
+- data |= TB_DP_VIDEO_EN | TB_DP_AUX_EN;
++ data[0] |= TB_DP_VIDEO_EN | TB_DP_AUX_EN;
+ else
+- data &= ~(TB_DP_VIDEO_EN | TB_DP_AUX_EN);
++ data[0] &= ~(TB_DP_VIDEO_EN | TB_DP_AUX_EN);
+
+- return tb_port_write(port, &data, TB_CFG_PORT, port->cap_adap, 1);
++ return tb_port_write(port, data, TB_CFG_PORT, port->cap_adap,
++ ARRAY_SIZE(data));
+ }
+
+ /* switch utility functions */
+@@ -1022,13 +1044,6 @@ static int tb_switch_set_authorized(struct tb_switch *sw, unsigned int val)
+ if (sw->authorized)
+ goto unlock;
+
+- /*
+- * Make sure there is no PCIe rescan ongoing when a new PCIe
+- * tunnel is created. Otherwise the PCIe rescan code might find
+- * the new tunnel too early.
+- */
+- pci_lock_rescan_remove();
+-
+ switch (val) {
+ /* Approve switch */
+ case 1:
+@@ -1048,8 +1063,6 @@ static int tb_switch_set_authorized(struct tb_switch *sw, unsigned int val)
+ break;
+ }
+
+- pci_unlock_rescan_remove();
+-
+ if (!ret) {
+ sw->authorized = val;
+ /* Notify status change to the userspace */
+@@ -1243,8 +1256,6 @@ static ssize_t nvm_authenticate_store(struct device *dev,
+ */
+ nvm_authenticate_start(sw);
+ ret = nvm_authenticate_host(sw);
+- if (ret)
+- nvm_authenticate_complete(sw);
+ } else {
+ ret = nvm_authenticate_device(sw);
+ }
+@@ -1670,13 +1681,16 @@ static int tb_switch_add_dma_port(struct tb_switch *sw)
+ int ret;
+
+ switch (sw->generation) {
+- case 3:
+- break;
+-
+ case 2:
+ /* Only root switch can be upgraded */
+ if (tb_route(sw))
+ return 0;
++
++ /* fallthrough */
++ case 3:
++ ret = tb_switch_set_uuid(sw);
++ if (ret)
++ return ret;
+ break;
+
+ default:
+@@ -1696,6 +1710,19 @@ static int tb_switch_add_dma_port(struct tb_switch *sw)
+ if (!sw->dma_port)
+ return 0;
+
++ /*
++ * If there is status already set then authentication failed
++ * when the dma_port_flash_update_auth() returned. Power cycling
++ * is not needed (it was done already) so only thing we do here
++ * is to unblock runtime PM of the root port.
++ */
++ nvm_get_auth_status(sw, &status);
++ if (status) {
++ if (!tb_route(sw))
++ nvm_authenticate_complete(sw);
++ return 0;
++ }
++
+ /*
+ * Check status of the previous flash authentication. If there
+ * is one we need to power cycle the switch in any case to make
+@@ -1711,9 +1738,6 @@ static int tb_switch_add_dma_port(struct tb_switch *sw)
+
+ if (status) {
+ tb_sw_info(sw, "switch flash authentication failed\n");
+- ret = tb_switch_set_uuid(sw);
+- if (ret)
+- return ret;
+ nvm_set_auth_status(sw, status);
+ }
+
+diff --git a/drivers/usb/dwc2/core.c b/drivers/usb/dwc2/core.c
+index 8e41d70fd298..78a4925aa118 100644
+--- a/drivers/usb/dwc2/core.c
++++ b/drivers/usb/dwc2/core.c
+@@ -524,7 +524,7 @@ int dwc2_core_reset(struct dwc2_hsotg *hsotg, bool skip_wait)
+ greset |= GRSTCTL_CSFTRST;
+ dwc2_writel(hsotg, greset, GRSTCTL);
+
+- if (dwc2_hsotg_wait_bit_clear(hsotg, GRSTCTL, GRSTCTL_CSFTRST, 50)) {
++ if (dwc2_hsotg_wait_bit_clear(hsotg, GRSTCTL, GRSTCTL_CSFTRST, 10000)) {
+ dev_warn(hsotg->dev, "%s: HANG! Soft Reset timeout GRSTCTL GRSTCTL_CSFTRST\n",
+ __func__);
+ return -EBUSY;
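The dwc2 change above only raises the soft-reset poll budget from 50 ms to 10000 ms, since some cores take far longer than 50 ms to clear GRSTCTL_CSFTRST. The shape of such a wait-bit-clear helper, sketched in userspace with a simulated register (read_reg() and the 1 ms poll interval are assumptions, not the driver's internals):

#include <stdio.h>
#include <unistd.h>

static unsigned int polls;

/* Hypothetical register read: bit 0 (the reset-in-progress flag)
 * clears only on the fourth poll, standing in for slow hardware. */
static unsigned int read_reg(void)
{
        return ++polls < 4 ? 0x1 : 0x0;
}

static int wait_bit_clear(unsigned int bit, unsigned int timeout_ms)
{
        while (timeout_ms--) {
                if (!(read_reg() & bit))
                        return 0;
                usleep(1000);           /* 1 ms per iteration */
        }
        return -16;                     /* -EBUSY, as in the driver */
}

int main(void)
{
        printf("reset done: %d\n", wait_bit_clear(0x1, 10000));
        return 0;
}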
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index e25352932ba7..fc9720941253 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -1033,6 +1033,9 @@ static const struct usb_device_id id_table_combined[] = {
+ /* Sienna devices */
+ { USB_DEVICE(FTDI_VID, FTDI_SIENNA_PID) },
+ { USB_DEVICE(ECHELON_VID, ECHELON_U20_PID) },
++ /* U-Blox devices */
++ { USB_DEVICE(UBLOX_VID, UBLOX_C099F9P_ZED_PID) },
++ { USB_DEVICE(UBLOX_VID, UBLOX_C099F9P_ODIN_PID) },
+ { } /* Terminating entry */
+ };
+
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index 22d66217cb41..e8373528264c 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -1558,3 +1558,10 @@
+ */
+ #define UNJO_VID 0x22B7
+ #define UNJO_ISODEBUG_V1_PID 0x150D
++
++/*
++ * U-Blox products (http://www.u-blox.com).
++ */
++#define UBLOX_VID 0x1546
++#define UBLOX_C099F9P_ZED_PID 0x0502
++#define UBLOX_C099F9P_ODIN_PID 0x0503
+diff --git a/drivers/video/fbdev/c2p_core.h b/drivers/video/fbdev/c2p_core.h
+index e1035a865fb9..45a6d895a7d7 100644
+--- a/drivers/video/fbdev/c2p_core.h
++++ b/drivers/video/fbdev/c2p_core.h
+@@ -29,7 +29,7 @@ static inline void _transp(u32 d[], unsigned int i1, unsigned int i2,
+
+ extern void c2p_unsupported(void);
+
+-static inline u32 get_mask(unsigned int n)
++static __always_inline u32 get_mask(unsigned int n)
+ {
+ switch (n) {
+ case 1:
+@@ -57,7 +57,7 @@ static inline u32 get_mask(unsigned int n)
+ * Transpose operations on 8 32-bit words
+ */
+
+-static inline void transp8(u32 d[], unsigned int n, unsigned int m)
++static __always_inline void transp8(u32 d[], unsigned int n, unsigned int m)
+ {
+ u32 mask = get_mask(n);
+
+@@ -99,7 +99,7 @@ static inline void transp8(u32 d[], unsigned int n, unsigned int m)
+ * Transpose operations on 4 32-bit words
+ */
+
+-static inline void transp4(u32 d[], unsigned int n, unsigned int m)
++static __always_inline void transp4(u32 d[], unsigned int n, unsigned int m)
+ {
+ u32 mask = get_mask(n);
+
+@@ -126,7 +126,7 @@ static inline void transp4(u32 d[], unsigned int n, unsigned int m)
+ * Transpose operations on 4 32-bit words (reverse order)
+ */
+
+-static inline void transp4x(u32 d[], unsigned int n, unsigned int m)
++static __always_inline void transp4x(u32 d[], unsigned int n, unsigned int m)
+ {
+ u32 mask = get_mask(n);
+
+diff --git a/drivers/watchdog/bd70528_wdt.c b/drivers/watchdog/bd70528_wdt.c
+index b0152fef4fc7..bc60e036627a 100644
+--- a/drivers/watchdog/bd70528_wdt.c
++++ b/drivers/watchdog/bd70528_wdt.c
+@@ -288,3 +288,4 @@ module_platform_driver(bd70528_wdt);
+ MODULE_AUTHOR("Matti Vaittinen <matti.vaittinen@fi.rohmeurope.com>");
+ MODULE_DESCRIPTION("BD70528 watchdog driver");
+ MODULE_LICENSE("GPL");
++MODULE_ALIAS("platform:bd70528-wdt");
+diff --git a/drivers/watchdog/imx_sc_wdt.c b/drivers/watchdog/imx_sc_wdt.c
+index 78eaaf75a263..9545d1e07421 100644
+--- a/drivers/watchdog/imx_sc_wdt.c
++++ b/drivers/watchdog/imx_sc_wdt.c
+@@ -99,8 +99,14 @@ static int imx_sc_wdt_set_pretimeout(struct watchdog_device *wdog,
+ {
+ struct arm_smccc_res res;
+
++ /*
++ * SCU firmware calculates pretimeout based on current time
++ * stamp instead of watchdog timeout stamp, need to convert
++ * the pretimeout to SCU firmware's timeout value.
++ */
+ arm_smccc_smc(IMX_SIP_TIMER, IMX_SIP_TIMER_SET_PRETIME_WDOG,
+- pretimeout * 1000, 0, 0, 0, 0, 0, &res);
++ (wdog->timeout - pretimeout) * 1000, 0, 0, 0,
++ 0, 0, &res);
+ if (res.a0)
+ return -EACCES;
+
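The imx_sc_wdt fix above is pure arithmetic: userspace expresses the pretimeout as seconds before expiry, but the SCU firmware expects seconds from now, so the driver must program (timeout - pretimeout). In numbers, assuming the illustrative values below:

#include <stdio.h>

int main(void)
{
        unsigned int timeout = 60, pretimeout = 10;     /* seconds */

        printf("firmware pretimeout: %u ms\n",
               (timeout - pretimeout) * 1000);  /* 50000, not 10000 */
        return 0;
}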
+diff --git a/drivers/watchdog/meson_gxbb_wdt.c b/drivers/watchdog/meson_gxbb_wdt.c
+index d17c1a6ed723..5a9ca10fbcfa 100644
+--- a/drivers/watchdog/meson_gxbb_wdt.c
++++ b/drivers/watchdog/meson_gxbb_wdt.c
+@@ -89,8 +89,8 @@ static unsigned int meson_gxbb_wdt_get_timeleft(struct watchdog_device *wdt_dev)
+
+ reg = readl(data->reg_base + GXBB_WDT_TCNT_REG);
+
+- return ((reg >> GXBB_WDT_TCNT_CNT_SHIFT) -
+- (reg & GXBB_WDT_TCNT_SETUP_MASK)) / 1000;
++ return ((reg & GXBB_WDT_TCNT_SETUP_MASK) -
++ (reg >> GXBB_WDT_TCNT_CNT_SHIFT)) / 1000;
+ }
+
+ static const struct watchdog_ops meson_gxbb_wdt_ops = {
+diff --git a/drivers/watchdog/pm8916_wdt.c b/drivers/watchdog/pm8916_wdt.c
+index 2d3652004e39..1213179f863c 100644
+--- a/drivers/watchdog/pm8916_wdt.c
++++ b/drivers/watchdog/pm8916_wdt.c
+@@ -163,9 +163,17 @@ static int pm8916_wdt_probe(struct platform_device *pdev)
+
+ irq = platform_get_irq(pdev, 0);
+ if (irq > 0) {
+- if (devm_request_irq(dev, irq, pm8916_wdt_isr, 0, "pm8916_wdt",
+- wdt))
+- irq = 0;
++ err = devm_request_irq(dev, irq, pm8916_wdt_isr, 0,
++ "pm8916_wdt", wdt);
++ if (err)
++ return err;
++
++ wdt->wdev.info = &pm8916_wdt_pt_ident;
++ } else {
++ if (irq == -EPROBE_DEFER)
++ return -EPROBE_DEFER;
++
++ wdt->wdev.info = &pm8916_wdt_ident;
+ }
+
+ /* Configure watchdog to hard-reset mode */
+@@ -177,7 +185,6 @@ static int pm8916_wdt_probe(struct platform_device *pdev)
+ return err;
+ }
+
+- wdt->wdev.info = (irq > 0) ? &pm8916_wdt_pt_ident : &pm8916_wdt_ident,
+ wdt->wdev.ops = &pm8916_wdt_ops,
+ wdt->wdev.parent = dev;
+ wdt->wdev.min_timeout = PM8916_WDT_MIN_TIMEOUT;
+diff --git a/fs/ceph/super.c b/fs/ceph/super.c
+index ab4868c7308e..b565c55ed064 100644
+--- a/fs/ceph/super.c
++++ b/fs/ceph/super.c
+@@ -255,6 +255,7 @@ static int parse_fsopt_token(char *c, void *private)
+ return -ENOMEM;
+ break;
+ case Opt_fscache_uniq:
++#ifdef CONFIG_CEPH_FSCACHE
+ kfree(fsopt->fscache_uniq);
+ fsopt->fscache_uniq = kstrndup(argstr[0].from,
+ argstr[0].to-argstr[0].from,
+@@ -263,7 +264,10 @@ static int parse_fsopt_token(char *c, void *private)
+ return -ENOMEM;
+ fsopt->flags |= CEPH_MOUNT_OPT_FSCACHE;
+ break;
+- /* misc */
++#else
++ pr_err("fscache support is disabled\n");
++ return -EINVAL;
++#endif
+ case Opt_wsize:
+ if (intval < (int)PAGE_SIZE || intval > CEPH_MAX_WRITE_SIZE)
+ return -EINVAL;
+@@ -340,10 +344,15 @@ static int parse_fsopt_token(char *c, void *private)
+ fsopt->flags &= ~CEPH_MOUNT_OPT_INO32;
+ break;
+ case Opt_fscache:
++#ifdef CONFIG_CEPH_FSCACHE
+ fsopt->flags |= CEPH_MOUNT_OPT_FSCACHE;
+ kfree(fsopt->fscache_uniq);
+ fsopt->fscache_uniq = NULL;
+ break;
++#else
++ pr_err("fscache support is disabled\n");
++ return -EINVAL;
++#endif
+ case Opt_nofscache:
+ fsopt->flags &= ~CEPH_MOUNT_OPT_FSCACHE;
+ kfree(fsopt->fscache_uniq);
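The ceph changes above gate the fscache mount options on CONFIG_CEPH_FSCACHE, so a kernel built without fscache rejects them loudly instead of accepting them as silent no-ops. The same compile-time gate in miniature (CONFIG_DEMO_FSCACHE is a hypothetical stand-in):

#include <stdio.h>

static int parse_opt_fscache(void)
{
#ifdef CONFIG_DEMO_FSCACHE
        printf("fscache enabled\n");
        return 0;
#else
        fprintf(stderr, "fscache support is disabled\n");
        return -22;     /* -EINVAL */
#endif
}

int main(void)
{
        return parse_opt_fscache() ? 1 : 0;
}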
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 723b0d1a3881..819dcc475e5d 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -5942,8 +5942,23 @@ static int __ext4_expand_extra_isize(struct inode *inode,
+ {
+ struct ext4_inode *raw_inode;
+ struct ext4_xattr_ibody_header *header;
++ unsigned int inode_size = EXT4_INODE_SIZE(inode->i_sb);
++ struct ext4_inode_info *ei = EXT4_I(inode);
+ int error;
+
++ /* this was checked at iget time, but double check for good measure */
++ if ((EXT4_GOOD_OLD_INODE_SIZE + ei->i_extra_isize > inode_size) ||
++ (ei->i_extra_isize & 3)) {
++ EXT4_ERROR_INODE(inode, "bad extra_isize %u (inode size %u)",
++ ei->i_extra_isize,
++ EXT4_INODE_SIZE(inode->i_sb));
++ return -EFSCORRUPTED;
++ }
++ if ((new_extra_isize < ei->i_extra_isize) ||
++ (new_extra_isize < 4) ||
++ (new_extra_isize > inode_size - EXT4_GOOD_OLD_INODE_SIZE))
++ return -EINVAL; /* Should never happen */
++
+ raw_inode = ext4_raw_inode(iloc);
+
+ header = IHDR(inode, raw_inode);
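The __ext4_expand_extra_isize() hunk re-validates i_extra_isize even though iget checked it once: the value must be 4-byte aligned and must fit between the good-old 128-byte inode and the full on-disk inode size, or expansion would scribble past the inode. Those predicates in a standalone form (constants illustrative):

#include <stdio.h>

#define GOOD_OLD_INODE_SIZE 128

static int check_extra_isize(unsigned int extra, unsigned int inode_size)
{
        if (GOOD_OLD_INODE_SIZE + extra > inode_size || (extra & 3))
                return -117;    /* -EFSCORRUPTED (-EUCLEAN) */
        return 0;
}

int main(void)
{
        printf("%d\n", check_extra_isize(32, 256));     /* 0: ok */
        printf("%d\n", check_extra_isize(30, 256));     /* misaligned */
        printf("%d\n", check_extra_isize(160, 256));    /* past inode end */
        return 0;
}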
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 4079605d437a..c3bbe57ebc43 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -3544,12 +3544,15 @@ static void ext4_clamp_want_extra_isize(struct super_block *sb)
+ {
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
+ struct ext4_super_block *es = sbi->s_es;
++ unsigned def_extra_isize = sizeof(struct ext4_inode) -
++ EXT4_GOOD_OLD_INODE_SIZE;
+
+- /* determine the minimum size of new large inodes, if present */
+- if (sbi->s_inode_size > EXT4_GOOD_OLD_INODE_SIZE &&
+- sbi->s_want_extra_isize == 0) {
+- sbi->s_want_extra_isize = sizeof(struct ext4_inode) -
+- EXT4_GOOD_OLD_INODE_SIZE;
++ if (sbi->s_inode_size == EXT4_GOOD_OLD_INODE_SIZE) {
++ sbi->s_want_extra_isize = 0;
++ return;
++ }
++ if (sbi->s_want_extra_isize < 4) {
++ sbi->s_want_extra_isize = def_extra_isize;
+ if (ext4_has_feature_extra_isize(sb)) {
+ if (sbi->s_want_extra_isize <
+ le16_to_cpu(es->s_want_extra_isize))
+@@ -3562,10 +3565,10 @@ static void ext4_clamp_want_extra_isize(struct super_block *sb)
+ }
+ }
+ /* Check if enough inode space is available */
+- if (EXT4_GOOD_OLD_INODE_SIZE + sbi->s_want_extra_isize >
+- sbi->s_inode_size) {
+- sbi->s_want_extra_isize = sizeof(struct ext4_inode) -
+- EXT4_GOOD_OLD_INODE_SIZE;
++ if ((sbi->s_want_extra_isize > sbi->s_inode_size) ||
++ (EXT4_GOOD_OLD_INODE_SIZE + sbi->s_want_extra_isize >
++ sbi->s_inode_size)) {
++ sbi->s_want_extra_isize = def_extra_isize;
+ ext4_msg(sb, KERN_INFO,
+ "required extra inode space not available");
+ }
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 56c23dee9811..f563a581b924 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -260,6 +260,8 @@ struct io_ring_ctx {
+
+ struct user_struct *user;
+
++ struct cred *creds;
++
+ struct completion ctx_done;
+
+ struct {
+@@ -1633,8 +1635,11 @@ static void io_poll_complete_work(struct work_struct *work)
+ struct io_poll_iocb *poll = &req->poll;
+ struct poll_table_struct pt = { ._key = poll->events };
+ struct io_ring_ctx *ctx = req->ctx;
++ const struct cred *old_cred;
+ __poll_t mask = 0;
+
++ old_cred = override_creds(ctx->creds);
++
+ if (!READ_ONCE(poll->canceled))
+ mask = vfs_poll(poll->file, &pt) & poll->events;
+
+@@ -1649,7 +1654,7 @@ static void io_poll_complete_work(struct work_struct *work)
+ if (!mask && !READ_ONCE(poll->canceled)) {
+ add_wait_queue(poll->head, &poll->wait);
+ spin_unlock_irq(&ctx->completion_lock);
+- return;
++ goto out;
+ }
+ list_del_init(&req->list);
+ io_poll_complete(ctx, req, mask);
+@@ -1657,6 +1662,8 @@ static void io_poll_complete_work(struct work_struct *work)
+
+ io_cqring_ev_posted(ctx);
+ io_put_req(req);
++out:
++ revert_creds(old_cred);
+ }
+
+ static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+@@ -1906,10 +1913,12 @@ static void io_sq_wq_submit_work(struct work_struct *work)
+ struct io_ring_ctx *ctx = req->ctx;
+ struct mm_struct *cur_mm = NULL;
+ struct async_list *async_list;
++ const struct cred *old_cred;
+ LIST_HEAD(req_list);
+ mm_segment_t old_fs;
+ int ret;
+
++ old_cred = override_creds(ctx->creds);
+ async_list = io_async_list_from_sqe(ctx, req->submit.sqe);
+ restart:
+ do {
+@@ -2017,6 +2026,7 @@ out:
+ unuse_mm(cur_mm);
+ mmput(cur_mm);
+ }
++ revert_creds(old_cred);
+ }
+
+ /*
+@@ -2354,6 +2364,7 @@ static int io_sq_thread(void *data)
+ {
+ struct io_ring_ctx *ctx = data;
+ struct mm_struct *cur_mm = NULL;
++ const struct cred *old_cred;
+ mm_segment_t old_fs;
+ DEFINE_WAIT(wait);
+ unsigned inflight;
+@@ -2363,6 +2374,7 @@ static int io_sq_thread(void *data)
+
+ old_fs = get_fs();
+ set_fs(USER_DS);
++ old_cred = override_creds(ctx->creds);
+
+ timeout = inflight = 0;
+ while (!kthread_should_park()) {
+@@ -2473,6 +2485,7 @@ static int io_sq_thread(void *data)
+ unuse_mm(cur_mm);
+ mmput(cur_mm);
+ }
++ revert_creds(old_cred);
+
+ kthread_parkme();
+
+@@ -3142,6 +3155,8 @@ static void io_ring_ctx_free(struct io_ring_ctx *ctx)
+ io_unaccount_mem(ctx->user,
+ ring_pages(ctx->sq_entries, ctx->cq_entries));
+ free_uid(ctx->user);
++ if (ctx->creds)
++ put_cred(ctx->creds);
+ kfree(ctx);
+ }
+
+@@ -3419,6 +3434,12 @@ static int io_uring_create(unsigned entries, struct io_uring_params *p)
+ ctx->account_mem = account_mem;
+ ctx->user = user;
+
++ ctx->creds = prepare_creds();
++ if (!ctx->creds) {
++ ret = -ENOMEM;
++ goto err;
++ }
++
+ ret = io_allocate_scq_urings(ctx, p);
+ if (ret)
+ goto err;
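The io_uring series above captures the submitting task's credentials at ring setup (prepare_creds()) and brackets every deferred execution path (poll work, the async punt, the SQPOLL thread) in override_creds()/revert_creds(), so work never runs with the kernel worker's identity. A userspace sketch of that capture-and-wrap pattern, with a toy cred type standing in for struct cred:

#include <stdio.h>

struct cred { int uid; };

static struct cred current_cred_storage = { .uid = 1000 };
static struct cred *current_cred = &current_cred_storage;

static struct cred *override_creds(struct cred *new)
{
        struct cred *old = current_cred;

        current_cred = new;
        return old;
}

static void revert_creds(struct cred *old)
{
        current_cred = old;
}

static void deferred_work(struct cred *ctx_creds)
{
        struct cred *old = override_creds(ctx_creds);

        printf("work runs as uid %d\n", current_cred->uid);
        revert_creds(old);
}

int main(void)
{
        struct cred ring_creds = { .uid = 1000 };       /* prepare_creds() */

        current_cred_storage.uid = 0;   /* worker thread identity */
        deferred_work(&ring_creds);     /* still runs as uid 1000 */
        return 0;
}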
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 18f4cc2c6acd..5d177c0c7fe3 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -651,11 +651,11 @@ void bpf_map_put_with_uref(struct bpf_map *map);
+ void bpf_map_put(struct bpf_map *map);
+ int bpf_map_charge_memlock(struct bpf_map *map, u32 pages);
+ void bpf_map_uncharge_memlock(struct bpf_map *map, u32 pages);
+-int bpf_map_charge_init(struct bpf_map_memory *mem, size_t size);
++int bpf_map_charge_init(struct bpf_map_memory *mem, u64 size);
+ void bpf_map_charge_finish(struct bpf_map_memory *mem);
+ void bpf_map_charge_move(struct bpf_map_memory *dst,
+ struct bpf_map_memory *src);
+-void *bpf_map_area_alloc(size_t size, int numa_node);
++void *bpf_map_area_alloc(u64 size, int numa_node);
+ void bpf_map_area_free(void *base);
+ void bpf_map_init_from_attr(struct bpf_map *map, union bpf_attr *attr);
+
+diff --git a/include/linux/idr.h b/include/linux/idr.h
+index 4ec8986e5dfb..ac6e946b6767 100644
+--- a/include/linux/idr.h
++++ b/include/linux/idr.h
+@@ -185,7 +185,7 @@ static inline void idr_preload_end(void)
+ * is convenient for a "not found" value.
+ */
+ #define idr_for_each_entry(idr, entry, id) \
+- for (id = 0; ((entry) = idr_get_next(idr, &(id))) != NULL; ++id)
++ for (id = 0; ((entry) = idr_get_next(idr, &(id))) != NULL; id += 1U)
+
+ /**
+ * idr_for_each_entry_ul() - Iterate over an IDR's elements of a given type.
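The one-character idr.h change above ("++id" to "id += 1U") avoids undefined behaviour: when iteration reaches id == INT_MAX, a signed increment overflows, whereas adding 1U promotes to unsigned arithmetic, whose wrap is well defined (the conversion back to int is implementation-defined, but wraps to INT_MIN on gcc and clang, letting the loop's NULL test terminate it). Demonstrated in isolation:

#include <limits.h>
#include <stdio.h>

int main(void)
{
        int id = INT_MAX;

        /* "++id" here would be signed overflow: undefined behaviour. */
        id += 1U;       /* unsigned wrap; converts back to INT_MIN
                         * on gcc/clang (implementation-defined) */
        printf("%d\n", id == INT_MIN);
        return 0;
}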
+diff --git a/include/linux/reset-controller.h b/include/linux/reset-controller.h
+index 9326d671b6e6..8675ec64987b 100644
+--- a/include/linux/reset-controller.h
++++ b/include/linux/reset-controller.h
+@@ -7,7 +7,7 @@
+ struct reset_controller_dev;
+
+ /**
+- * struct reset_control_ops
++ * struct reset_control_ops - reset controller driver callbacks
+ *
+ * @reset: for self-deasserting resets, does all necessary
+ * things to reset the device
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index ce7055259877..da4caff7efa4 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -14,6 +14,7 @@
+ #include <net/strparser.h>
+
+ #define MAX_MSG_FRAGS MAX_SKB_FRAGS
++#define NR_MSG_FRAG_IDS (MAX_MSG_FRAGS + 1)
+
+ enum __sk_action {
+ __SK_DROP = 0,
+@@ -29,11 +30,13 @@ struct sk_msg_sg {
+ u32 size;
+ u32 copybreak;
+ bool copy[MAX_MSG_FRAGS];
+- /* The extra element is used for chaining the front and sections when
+- * the list becomes partitioned (e.g. end < start). The crypto APIs
+- * require the chaining.
++ /* The extra two elements:
++ * 1) used for chaining the front and sections when the list becomes
++ * partitioned (e.g. end < start). The crypto APIs require the
++ * chaining;
++ * 2) to chain tailer SG entries after the message.
+ */
+- struct scatterlist data[MAX_MSG_FRAGS + 1];
++ struct scatterlist data[MAX_MSG_FRAGS + 2];
+ };
+
+ /* UAPI in filter.c depends on struct sk_msg_sg being first element. */
+@@ -141,13 +144,13 @@ static inline void sk_msg_apply_bytes(struct sk_psock *psock, u32 bytes)
+
+ static inline u32 sk_msg_iter_dist(u32 start, u32 end)
+ {
+- return end >= start ? end - start : end + (MAX_MSG_FRAGS - start);
++ return end >= start ? end - start : end + (NR_MSG_FRAG_IDS - start);
+ }
+
+ #define sk_msg_iter_var_prev(var) \
+ do { \
+ if (var == 0) \
+- var = MAX_MSG_FRAGS - 1; \
++ var = NR_MSG_FRAG_IDS - 1; \
+ else \
+ var--; \
+ } while (0)
+@@ -155,7 +158,7 @@ static inline u32 sk_msg_iter_dist(u32 start, u32 end)
+ #define sk_msg_iter_var_next(var) \
+ do { \
+ var++; \
+- if (var == MAX_MSG_FRAGS) \
++ if (var == NR_MSG_FRAG_IDS) \
+ var = 0; \
+ } while (0)
+
+@@ -172,9 +175,9 @@ static inline void sk_msg_clear_meta(struct sk_msg *msg)
+
+ static inline void sk_msg_init(struct sk_msg *msg)
+ {
+- BUILD_BUG_ON(ARRAY_SIZE(msg->sg.data) - 1 != MAX_MSG_FRAGS);
++ BUILD_BUG_ON(ARRAY_SIZE(msg->sg.data) - 1 != NR_MSG_FRAG_IDS);
+ memset(msg, 0, sizeof(*msg));
+- sg_init_marker(msg->sg.data, MAX_MSG_FRAGS);
++ sg_init_marker(msg->sg.data, NR_MSG_FRAG_IDS);
+ }
+
+ static inline void sk_msg_xfer(struct sk_msg *dst, struct sk_msg *src,
+@@ -195,14 +198,11 @@ static inline void sk_msg_xfer_full(struct sk_msg *dst, struct sk_msg *src)
+
+ static inline bool sk_msg_full(const struct sk_msg *msg)
+ {
+- return (msg->sg.end == msg->sg.start) && msg->sg.size;
++ return sk_msg_iter_dist(msg->sg.start, msg->sg.end) == MAX_MSG_FRAGS;
+ }
+
+ static inline u32 sk_msg_elem_used(const struct sk_msg *msg)
+ {
+- if (sk_msg_full(msg))
+- return MAX_MSG_FRAGS;
+-
+ return sk_msg_iter_dist(msg->sg.start, msg->sg.end);
+ }
+
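The skmsg.h rework above is a classic ring-buffer disambiguation: with MAX_MSG_FRAGS slots but NR_MSG_FRAG_IDS = MAX_MSG_FRAGS + 1 distinct index values, end == start can only ever mean empty, and full becomes a distance of exactly MAX_MSG_FRAGS, which is why sk_msg_full() loses its special-case size test. The scheme in miniature:

#include <stdio.h>

#define MAX_FRAGS 3
#define NR_IDS (MAX_FRAGS + 1)

static unsigned int dist(unsigned int start, unsigned int end)
{
        return end >= start ? end - start : end + (NR_IDS - start);
}

int main(void)
{
        unsigned int start = 0, end = 0;

        printf("empty: used=%u\n", dist(start, end));   /* 0 */
        for (int i = 0; i < MAX_FRAGS; i++)
                end = (end + 1) % NR_IDS;               /* sk_msg_iter_var_next */
        printf("full: used=%u\n", dist(start, end));    /* 3 == MAX_FRAGS */
        return 0;
}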
+diff --git a/include/net/fq_impl.h b/include/net/fq_impl.h
+index 107c0d700ed6..38a9a3d1222b 100644
+--- a/include/net/fq_impl.h
++++ b/include/net/fq_impl.h
+@@ -313,7 +313,7 @@ static int fq_init(struct fq *fq, int flows_cnt)
+ fq->limit = 8192;
+ fq->memory_limit = 16 << 20; /* 16 MBytes */
+
+- fq->flows = kcalloc(fq->flows_cnt, sizeof(fq->flows[0]), GFP_KERNEL);
++ fq->flows = kvcalloc(fq->flows_cnt, sizeof(fq->flows[0]), GFP_KERNEL);
+ if (!fq->flows)
+ return -ENOMEM;
+
+@@ -331,7 +331,7 @@ static void fq_reset(struct fq *fq,
+ for (i = 0; i < fq->flows_cnt; i++)
+ fq_flow_reset(fq, &fq->flows[i], free_func);
+
+- kfree(fq->flows);
++ kvfree(fq->flows);
+ fq->flows = NULL;
+ }
+
+diff --git a/include/net/sctp/structs.h b/include/net/sctp/structs.h
+index ba5c4f6eede5..eeee040b5397 100644
+--- a/include/net/sctp/structs.h
++++ b/include/net/sctp/structs.h
+@@ -1239,6 +1239,9 @@ struct sctp_ep_common {
+ /* What socket does this endpoint belong to? */
+ struct sock *sk;
+
++ /* Cache netns and it won't change once set */
++ struct net *net;
++
+ /* This is where we receive inbound chunks. */
+ struct sctp_inq inqueue;
+
+diff --git a/include/net/tls.h b/include/net/tls.h
+index 9bf04a74a6cb..e46d4aa27ee7 100644
+--- a/include/net/tls.h
++++ b/include/net/tls.h
+@@ -121,7 +121,6 @@ struct tls_rec {
+ struct list_head list;
+ int tx_ready;
+ int tx_flags;
+- int inplace_crypto;
+
+ struct sk_msg msg_plaintext;
+ struct sk_msg msg_encrypted;
+@@ -408,7 +407,7 @@ int tls_push_sg(struct sock *sk, struct tls_context *ctx,
+ int flags);
+ int tls_push_partial_record(struct sock *sk, struct tls_context *ctx,
+ int flags);
+-bool tls_free_partial_record(struct sock *sk, struct tls_context *ctx);
++void tls_free_partial_record(struct sock *sk, struct tls_context *ctx);
+
+ static inline struct tls_msg *tls_msg(struct sk_buff *skb)
+ {
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index 0a00eaca6fae..9ad7cd3267f5 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -1302,12 +1302,12 @@ static bool sysctl_is_valid_access(int off, int size, enum bpf_access_type type,
+ return false;
+
+ switch (off) {
+- case offsetof(struct bpf_sysctl, write):
++ case bpf_ctx_range(struct bpf_sysctl, write):
+ if (type != BPF_READ)
+ return false;
+ bpf_ctx_record_field_size(info, size_default);
+ return bpf_ctx_narrow_access_ok(off, size, size_default);
+- case offsetof(struct bpf_sysctl, file_pos):
++ case bpf_ctx_range(struct bpf_sysctl, file_pos):
+ if (type == BPF_READ) {
+ bpf_ctx_record_field_size(info, size_default);
+ return bpf_ctx_narrow_access_ok(off, size, size_default);
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index aac966b32c42..ee3087462bc9 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -126,7 +126,7 @@ static struct bpf_map *find_and_alloc_map(union bpf_attr *attr)
+ return map;
+ }
+
+-void *bpf_map_area_alloc(size_t size, int numa_node)
++void *bpf_map_area_alloc(u64 size, int numa_node)
+ {
+ /* We really just want to fail instead of triggering OOM killer
+ * under memory pressure, therefore we set __GFP_NORETRY to kmalloc,
+@@ -141,6 +141,9 @@ void *bpf_map_area_alloc(size_t size, int numa_node)
+ const gfp_t flags = __GFP_NOWARN | __GFP_ZERO;
+ void *area;
+
++ if (size >= SIZE_MAX)
++ return NULL;
++
+ if (size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER)) {
+ area = kmalloc_node(size, GFP_USER | __GFP_NORETRY | flags,
+ numa_node);
+@@ -197,7 +200,7 @@ static void bpf_uncharge_memlock(struct user_struct *user, u32 pages)
+ atomic_long_sub(pages, &user->locked_vm);
+ }
+
+-int bpf_map_charge_init(struct bpf_map_memory *mem, size_t size)
++int bpf_map_charge_init(struct bpf_map_memory *mem, u64 size)
+ {
+ u32 pages = round_up(size, PAGE_SIZE) >> PAGE_SHIFT;
+ struct user_struct *user;
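Widening bpf_map_area_alloc() and bpf_map_charge_init() from size_t to u64 above matters on 32-bit kernels, where a 64-bit size computed from map attributes would otherwise be truncated before the allocator ever saw it; the new size >= SIZE_MAX check turns that into an explicit failure. A userspace sketch of the truncation being guarded against:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static void *area_alloc(uint64_t size)
{
        if (size >= SIZE_MAX)           /* would truncate on 32-bit */
                return NULL;            /* fail explicitly instead */
        return malloc((size_t)size);
}

int main(void)
{
        uint64_t huge = (uint64_t)1 << 33;      /* 8 GiB */
        void *p = area_alloc(sizeof(int) * 4);

        /* On 32-bit, (size_t)huge would be 0: malloc(0) "succeeds". */
        printf("small alloc ok: %s\n", p ? "yes" : "no");
        printf("huge rejected: %s\n",
               area_alloc(huge) ? "no (64-bit host)" : "yes");
        free(p);
        return 0;
}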
+diff --git a/kernel/stacktrace.c b/kernel/stacktrace.c
+index f5440abb7532..9bbfbdb96ae5 100644
+--- a/kernel/stacktrace.c
++++ b/kernel/stacktrace.c
+@@ -141,7 +141,8 @@ unsigned int stack_trace_save_tsk(struct task_struct *tsk, unsigned long *store,
+ struct stacktrace_cookie c = {
+ .store = store,
+ .size = size,
+- .skip = skipnr + 1,
++ /* skip this function if they are tracing us */
++ .skip = skipnr + !!(current == tsk),
+ };
+
+ if (!try_get_task_stack(tsk))
+@@ -298,7 +299,8 @@ unsigned int stack_trace_save_tsk(struct task_struct *task,
+ struct stack_trace trace = {
+ .entries = store,
+ .max_entries = size,
+- .skip = skipnr + 1,
++ /* skip this function if they are tracing us */
++ .skip = skipnr + !!(current == task),
+ };
+
+ save_stack_trace_tsk(task, &trace);
+diff --git a/lib/idr.c b/lib/idr.c
+index 66a374892482..c2cf2c52bbde 100644
+--- a/lib/idr.c
++++ b/lib/idr.c
+@@ -215,7 +215,7 @@ int idr_for_each(const struct idr *idr,
+ EXPORT_SYMBOL(idr_for_each);
+
+ /**
+- * idr_get_next() - Find next populated entry.
++ * idr_get_next_ul() - Find next populated entry.
+ * @idr: IDR handle.
+ * @nextid: Pointer to an ID.
+ *
+@@ -224,7 +224,7 @@ EXPORT_SYMBOL(idr_for_each);
+ * to the ID of the found value. To use in a loop, the value pointed to by
+ * nextid must be incremented by the user.
+ */
+-void *idr_get_next(struct idr *idr, int *nextid)
++void *idr_get_next_ul(struct idr *idr, unsigned long *nextid)
+ {
+ struct radix_tree_iter iter;
+ void __rcu **slot;
+@@ -245,18 +245,14 @@ void *idr_get_next(struct idr *idr, int *nextid)
+ }
+ if (!slot)
+ return NULL;
+- id = iter.index + base;
+-
+- if (WARN_ON_ONCE(id > INT_MAX))
+- return NULL;
+
+- *nextid = id;
++ *nextid = iter.index + base;
+ return entry;
+ }
+-EXPORT_SYMBOL(idr_get_next);
++EXPORT_SYMBOL(idr_get_next_ul);
+
+ /**
+- * idr_get_next_ul() - Find next populated entry.
++ * idr_get_next() - Find next populated entry.
+ * @idr: IDR handle.
+ * @nextid: Pointer to an ID.
+ *
+@@ -265,22 +261,17 @@ EXPORT_SYMBOL(idr_get_next);
+ * to the ID of the found value. To use in a loop, the value pointed to by
+ * nextid must be incremented by the user.
+ */
+-void *idr_get_next_ul(struct idr *idr, unsigned long *nextid)
++void *idr_get_next(struct idr *idr, int *nextid)
+ {
+- struct radix_tree_iter iter;
+- void __rcu **slot;
+- unsigned long base = idr->idr_base;
+ unsigned long id = *nextid;
++ void *entry = idr_get_next_ul(idr, &id);
+
+- id = (id < base) ? 0 : id - base;
+- slot = radix_tree_iter_find(&idr->idr_rt, &iter, id);
+- if (!slot)
++ if (WARN_ON_ONCE(id > INT_MAX))
+ return NULL;
+-
+- *nextid = iter.index + base;
+- return rcu_dereference_raw(*slot);
++ *nextid = id;
++ return entry;
+ }
+-EXPORT_SYMBOL(idr_get_next_ul);
++EXPORT_SYMBOL(idr_get_next);
+
+ /**
+ * idr_replace() - replace pointer for given ID.
+diff --git a/lib/radix-tree.c b/lib/radix-tree.c
+index 18c1dfbb1765..c8fa1d274530 100644
+--- a/lib/radix-tree.c
++++ b/lib/radix-tree.c
+@@ -1529,7 +1529,7 @@ void __rcu **idr_get_free(struct radix_tree_root *root,
+ offset = radix_tree_find_next_bit(node, IDR_FREE,
+ offset + 1);
+ start = next_index(start, node, offset);
+- if (start > max)
++ if (start > max || start == 0)
+ return ERR_PTR(-ENOSPC);
+ while (offset == RADIX_TREE_MAP_SIZE) {
+ offset = node->offset + 1;
+diff --git a/lib/test_xarray.c b/lib/test_xarray.c
+index 9d631a7b6a70..7df4f7f395bf 100644
+--- a/lib/test_xarray.c
++++ b/lib/test_xarray.c
+@@ -1110,6 +1110,28 @@ static noinline void check_find_entry(struct xarray *xa)
+ XA_BUG_ON(xa, !xa_empty(xa));
+ }
+
++static noinline void check_move_tiny(struct xarray *xa)
++{
++ XA_STATE(xas, xa, 0);
++
++ XA_BUG_ON(xa, !xa_empty(xa));
++ rcu_read_lock();
++ XA_BUG_ON(xa, xas_next(&xas) != NULL);
++ XA_BUG_ON(xa, xas_next(&xas) != NULL);
++ rcu_read_unlock();
++ xa_store_index(xa, 0, GFP_KERNEL);
++ rcu_read_lock();
++ xas_set(&xas, 0);
++ XA_BUG_ON(xa, xas_next(&xas) != xa_mk_index(0));
++ XA_BUG_ON(xa, xas_next(&xas) != NULL);
++ xas_set(&xas, 0);
++ XA_BUG_ON(xa, xas_prev(&xas) != xa_mk_index(0));
++ XA_BUG_ON(xa, xas_prev(&xas) != NULL);
++ rcu_read_unlock();
++ xa_erase_index(xa, 0);
++ XA_BUG_ON(xa, !xa_empty(xa));
++}
++
+ static noinline void check_move_small(struct xarray *xa, unsigned long idx)
+ {
+ XA_STATE(xas, xa, 0);
+@@ -1217,6 +1239,8 @@ static noinline void check_move(struct xarray *xa)
+
+ xa_destroy(xa);
+
++ check_move_tiny(xa);
++
+ for (i = 0; i < 16; i++)
+ check_move_small(xa, 1UL << i);
+
+diff --git a/lib/xarray.c b/lib/xarray.c
+index 446b956c9188..1237c213f52b 100644
+--- a/lib/xarray.c
++++ b/lib/xarray.c
+@@ -994,6 +994,8 @@ void *__xas_prev(struct xa_state *xas)
+
+ if (!xas_frozen(xas->xa_node))
+ xas->xa_index--;
++ if (!xas->xa_node)
++ return set_bounds(xas);
+ if (xas_not_node(xas->xa_node))
+ return xas_load(xas);
+
+@@ -1031,6 +1033,8 @@ void *__xas_next(struct xa_state *xas)
+
+ if (!xas_frozen(xas->xa_node))
+ xas->xa_index++;
++ if (!xas->xa_node)
++ return set_bounds(xas);
+ if (xas_not_node(xas->xa_node))
+ return xas_load(xas);
+
+diff --git a/net/bridge/netfilter/ebt_dnat.c b/net/bridge/netfilter/ebt_dnat.c
+index ed91ea31978a..12a4f4d93681 100644
+--- a/net/bridge/netfilter/ebt_dnat.c
++++ b/net/bridge/netfilter/ebt_dnat.c
+@@ -20,7 +20,6 @@ static unsigned int
+ ebt_dnat_tg(struct sk_buff *skb, const struct xt_action_param *par)
+ {
+ const struct ebt_nat_info *info = par->targinfo;
+- struct net_device *dev;
+
+ if (skb_ensure_writable(skb, ETH_ALEN))
+ return EBT_DROP;
+@@ -33,10 +32,22 @@ ebt_dnat_tg(struct sk_buff *skb, const struct xt_action_param *par)
+ else
+ skb->pkt_type = PACKET_MULTICAST;
+ } else {
+- if (xt_hooknum(par) != NF_BR_BROUTING)
+- dev = br_port_get_rcu(xt_in(par))->br->dev;
+- else
++ const struct net_device *dev;
++
++ switch (xt_hooknum(par)) {
++ case NF_BR_BROUTING:
+ dev = xt_in(par);
++ break;
++ case NF_BR_PRE_ROUTING:
++ dev = br_port_get_rcu(xt_in(par))->br->dev;
++ break;
++ default:
++ dev = NULL;
++ break;
++ }
++
++ if (!dev) /* NF_BR_LOCAL_OUT */
++ return info->target;
+
+ if (ether_addr_equal(info->mac, dev->dev_addr))
+ skb->pkt_type = PACKET_HOST;
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 4c6a252d4212..d81a5a5090bd 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2299,7 +2299,7 @@ BPF_CALL_4(bpf_msg_pull_data, struct sk_msg *, msg, u32, start,
+ WARN_ON_ONCE(last_sge == first_sge);
+ shift = last_sge > first_sge ?
+ last_sge - first_sge - 1 :
+- MAX_SKB_FRAGS - first_sge + last_sge - 1;
++ NR_MSG_FRAG_IDS - first_sge + last_sge - 1;
+ if (!shift)
+ goto out;
+
+@@ -2308,8 +2308,8 @@ BPF_CALL_4(bpf_msg_pull_data, struct sk_msg *, msg, u32, start,
+ do {
+ u32 move_from;
+
+- if (i + shift >= MAX_MSG_FRAGS)
+- move_from = i + shift - MAX_MSG_FRAGS;
++ if (i + shift >= NR_MSG_FRAG_IDS)
++ move_from = i + shift - NR_MSG_FRAG_IDS;
+ else
+ move_from = i + shift;
+ if (move_from == msg->sg.end)
+@@ -2323,7 +2323,7 @@ BPF_CALL_4(bpf_msg_pull_data, struct sk_msg *, msg, u32, start,
+ } while (1);
+
+ msg->sg.end = msg->sg.end - shift > msg->sg.end ?
+- msg->sg.end - shift + MAX_MSG_FRAGS :
++ msg->sg.end - shift + NR_MSG_FRAG_IDS :
+ msg->sg.end - shift;
+ out:
+ msg->data = sg_virt(&msg->sg.data[first_sge]) + start - offset;
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index c10e3e56006e..74c1f9909e88 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -422,7 +422,7 @@ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb)
+ copied = skb->len;
+ msg->sg.start = 0;
+ msg->sg.size = copied;
+- msg->sg.end = num_sge == MAX_MSG_FRAGS ? 0 : num_sge;
++ msg->sg.end = num_sge;
+ msg->skb = skb;
+
+ sk_psock_queue_msg(psock, msg);
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index 8a56e09cfb0e..e38705165ac9 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -301,7 +301,7 @@ EXPORT_SYMBOL_GPL(tcp_bpf_sendmsg_redir);
+ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
+ struct sk_msg *msg, int *copied, int flags)
+ {
+- bool cork = false, enospc = msg->sg.start == msg->sg.end;
++ bool cork = false, enospc = sk_msg_full(msg);
+ struct sock *sk_redir;
+ u32 tosend, delta = 0;
+ int ret;
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index 4c2702f128f3..868705ed5cbb 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -1297,8 +1297,8 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
+ ieee80211_remove_interfaces(local);
+ fail_rate:
+ rtnl_unlock();
+- ieee80211_led_exit(local);
+ fail_flows:
++ ieee80211_led_exit(local);
+ destroy_workqueue(local->workqueue);
+ fail_workqueue:
+ wiphy_unregister(local->hw.wiphy);
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index 5fb368cc2633..0030b13c2f50 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -2455,7 +2455,8 @@ unsigned long ieee80211_sta_last_active(struct sta_info *sta)
+ {
+ struct ieee80211_sta_rx_stats *stats = sta_get_last_rx_stats(sta);
+
+- if (time_after(stats->last_rx, sta->status_stats.last_ack))
++ if (!sta->status_stats.last_ack ||
++ time_after(stats->last_rx, sta->status_stats.last_ack))
+ return stats->last_rx;
+ return sta->status_stats.last_ack;
+ }
+diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
+index e7288eab7512..d73d1828216a 100644
+--- a/net/netfilter/ipset/ip_set_core.c
++++ b/net/netfilter/ipset/ip_set_core.c
+@@ -296,7 +296,8 @@ ip_set_get_ipaddr4(struct nlattr *nla, __be32 *ipaddr)
+
+ if (unlikely(!flag_nested(nla)))
+ return -IPSET_ERR_PROTOCOL;
+- if (nla_parse_nested_deprecated(tb, IPSET_ATTR_IPADDR_MAX, nla, ipaddr_policy, NULL))
++ if (nla_parse_nested(tb, IPSET_ATTR_IPADDR_MAX, nla,
++ ipaddr_policy, NULL))
+ return -IPSET_ERR_PROTOCOL;
+ if (unlikely(!ip_set_attr_netorder(tb, IPSET_ATTR_IPADDR_IPV4)))
+ return -IPSET_ERR_PROTOCOL;
+@@ -314,7 +315,8 @@ ip_set_get_ipaddr6(struct nlattr *nla, union nf_inet_addr *ipaddr)
+ if (unlikely(!flag_nested(nla)))
+ return -IPSET_ERR_PROTOCOL;
+
+- if (nla_parse_nested_deprecated(tb, IPSET_ATTR_IPADDR_MAX, nla, ipaddr_policy, NULL))
++ if (nla_parse_nested(tb, IPSET_ATTR_IPADDR_MAX, nla,
++ ipaddr_policy, NULL))
+ return -IPSET_ERR_PROTOCOL;
+ if (unlikely(!ip_set_attr_netorder(tb, IPSET_ATTR_IPADDR_IPV6)))
+ return -IPSET_ERR_PROTOCOL;
+@@ -934,7 +936,8 @@ static int ip_set_create(struct net *net, struct sock *ctnl,
+
+ /* Without holding any locks, create private part. */
+ if (attr[IPSET_ATTR_DATA] &&
+- nla_parse_nested_deprecated(tb, IPSET_ATTR_CREATE_MAX, attr[IPSET_ATTR_DATA], set->type->create_policy, NULL)) {
++ nla_parse_nested(tb, IPSET_ATTR_CREATE_MAX, attr[IPSET_ATTR_DATA],
++ set->type->create_policy, NULL)) {
+ ret = -IPSET_ERR_PROTOCOL;
+ goto put_out;
+ }
+@@ -1281,6 +1284,14 @@ dump_attrs(struct nlmsghdr *nlh)
+ }
+ }
+
++static const struct nla_policy
++ip_set_dump_policy[IPSET_ATTR_CMD_MAX + 1] = {
++ [IPSET_ATTR_PROTOCOL] = { .type = NLA_U8 },
++ [IPSET_ATTR_SETNAME] = { .type = NLA_NUL_STRING,
++ .len = IPSET_MAXNAMELEN - 1 },
++ [IPSET_ATTR_FLAGS] = { .type = NLA_U32 },
++};
++
+ static int
+ dump_init(struct netlink_callback *cb, struct ip_set_net *inst)
+ {
+@@ -1292,9 +1303,9 @@ dump_init(struct netlink_callback *cb, struct ip_set_net *inst)
+ ip_set_id_t index;
+ int ret;
+
+- ret = nla_parse_deprecated(cda, IPSET_ATTR_CMD_MAX, attr,
+- nlh->nlmsg_len - min_len,
+- ip_set_setname_policy, NULL);
++ ret = nla_parse(cda, IPSET_ATTR_CMD_MAX, attr,
++ nlh->nlmsg_len - min_len,
++ ip_set_dump_policy, NULL);
+ if (ret)
+ return ret;
+
+@@ -1543,9 +1554,9 @@ call_ad(struct sock *ctnl, struct sk_buff *skb, struct ip_set *set,
+ memcpy(&errmsg->msg, nlh, nlh->nlmsg_len);
+ cmdattr = (void *)&errmsg->msg + min_len;
+
+- ret = nla_parse_deprecated(cda, IPSET_ATTR_CMD_MAX, cmdattr,
+- nlh->nlmsg_len - min_len,
+- ip_set_adt_policy, NULL);
++ ret = nla_parse(cda, IPSET_ATTR_CMD_MAX, cmdattr,
++ nlh->nlmsg_len - min_len, ip_set_adt_policy,
++ NULL);
+
+ if (ret) {
+ nlmsg_free(skb2);
+@@ -1596,7 +1607,9 @@ static int ip_set_ad(struct net *net, struct sock *ctnl,
+
+ use_lineno = !!attr[IPSET_ATTR_LINENO];
+ if (attr[IPSET_ATTR_DATA]) {
+- if (nla_parse_nested_deprecated(tb, IPSET_ATTR_ADT_MAX, attr[IPSET_ATTR_DATA], set->type->adt_policy, NULL))
++ if (nla_parse_nested(tb, IPSET_ATTR_ADT_MAX,
++ attr[IPSET_ATTR_DATA],
++ set->type->adt_policy, NULL))
+ return -IPSET_ERR_PROTOCOL;
+ ret = call_ad(ctnl, skb, set, tb, adt, flags,
+ use_lineno);
+@@ -1606,7 +1619,8 @@ static int ip_set_ad(struct net *net, struct sock *ctnl,
+ nla_for_each_nested(nla, attr[IPSET_ATTR_ADT], nla_rem) {
+ if (nla_type(nla) != IPSET_ATTR_DATA ||
+ !flag_nested(nla) ||
+- nla_parse_nested_deprecated(tb, IPSET_ATTR_ADT_MAX, nla, set->type->adt_policy, NULL))
++ nla_parse_nested(tb, IPSET_ATTR_ADT_MAX, nla,
++ set->type->adt_policy, NULL))
+ return -IPSET_ERR_PROTOCOL;
+ ret = call_ad(ctnl, skb, set, tb, adt,
+ flags, use_lineno);
+@@ -1655,7 +1669,8 @@ static int ip_set_utest(struct net *net, struct sock *ctnl, struct sk_buff *skb,
+ if (!set)
+ return -ENOENT;
+
+- if (nla_parse_nested_deprecated(tb, IPSET_ATTR_ADT_MAX, attr[IPSET_ATTR_DATA], set->type->adt_policy, NULL))
++ if (nla_parse_nested(tb, IPSET_ATTR_ADT_MAX, attr[IPSET_ATTR_DATA],
++ set->type->adt_policy, NULL))
+ return -IPSET_ERR_PROTOCOL;
+
+ rcu_read_lock_bh();
+@@ -1961,7 +1976,7 @@ static const struct nfnl_callback ip_set_netlink_subsys_cb[IPSET_MSG_MAX] = {
+ [IPSET_CMD_LIST] = {
+ .call = ip_set_dump,
+ .attr_count = IPSET_ATTR_CMD_MAX,
+- .policy = ip_set_setname_policy,
++ .policy = ip_set_dump_policy,
+ },
+ [IPSET_CMD_SAVE] = {
+ .call = ip_set_dump,
+diff --git a/net/netfilter/ipset/ip_set_hash_net.c b/net/netfilter/ipset/ip_set_hash_net.c
+index c259cbc3ef45..3d932de0ad29 100644
+--- a/net/netfilter/ipset/ip_set_hash_net.c
++++ b/net/netfilter/ipset/ip_set_hash_net.c
+@@ -368,6 +368,7 @@ static struct ip_set_type hash_net_type __read_mostly = {
+ [IPSET_ATTR_IP_TO] = { .type = NLA_NESTED },
+ [IPSET_ATTR_CIDR] = { .type = NLA_U8 },
+ [IPSET_ATTR_TIMEOUT] = { .type = NLA_U32 },
++ [IPSET_ATTR_LINENO] = { .type = NLA_U32 },
+ [IPSET_ATTR_CADT_FLAGS] = { .type = NLA_U32 },
+ [IPSET_ATTR_BYTES] = { .type = NLA_U64 },
+ [IPSET_ATTR_PACKETS] = { .type = NLA_U64 },
+diff --git a/net/netfilter/ipset/ip_set_hash_netnet.c b/net/netfilter/ipset/ip_set_hash_netnet.c
+index a3ae69bfee66..4398322fad59 100644
+--- a/net/netfilter/ipset/ip_set_hash_netnet.c
++++ b/net/netfilter/ipset/ip_set_hash_netnet.c
+@@ -476,6 +476,7 @@ static struct ip_set_type hash_netnet_type __read_mostly = {
+ [IPSET_ATTR_CIDR] = { .type = NLA_U8 },
+ [IPSET_ATTR_CIDR2] = { .type = NLA_U8 },
+ [IPSET_ATTR_TIMEOUT] = { .type = NLA_U32 },
++ [IPSET_ATTR_LINENO] = { .type = NLA_U32 },
+ [IPSET_ATTR_CADT_FLAGS] = { .type = NLA_U32 },
+ [IPSET_ATTR_BYTES] = { .type = NLA_U64 },
+ [IPSET_ATTR_PACKETS] = { .type = NLA_U64 },
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 3b81323fa017..5dbc6bfb532c 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1922,6 +1922,7 @@ static int nf_tables_newchain(struct net *net, struct sock *nlsk,
+ if (nlh->nlmsg_flags & NLM_F_REPLACE)
+ return -EOPNOTSUPP;
+
++ flags |= chain->flags & NFT_BASE_CHAIN;
+ return nf_tables_updchain(&ctx, genmask, policy, flags);
+ }
+
+diff --git a/net/netfilter/nf_tables_offload.c b/net/netfilter/nf_tables_offload.c
+index c0d18c1d77ac..04fbab60e808 100644
+--- a/net/netfilter/nf_tables_offload.c
++++ b/net/netfilter/nf_tables_offload.c
+@@ -241,7 +241,8 @@ int nft_flow_rule_offload_commit(struct net *net)
+
+ switch (trans->msg_type) {
+ case NFT_MSG_NEWCHAIN:
+- if (!(trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD))
++ if (!(trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD) ||
++ nft_trans_chain_update(trans))
+ continue;
+
+ err = nft_flow_offload_chain(trans, FLOW_BLOCK_BIND);
+diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
+index 43aeca12208c..f40757dbfb28 100644
+--- a/net/openvswitch/datapath.c
++++ b/net/openvswitch/datapath.c
+@@ -701,9 +701,13 @@ static size_t ovs_flow_cmd_msg_size(const struct sw_flow_actions *acts,
+ {
+ size_t len = NLMSG_ALIGN(sizeof(struct ovs_header));
+
+- /* OVS_FLOW_ATTR_UFID */
++ /* OVS_FLOW_ATTR_UFID, or unmasked flow key as fallback
++ * see ovs_nla_put_identifier()
++ */
+ if (sfid && ovs_identifier_is_ufid(sfid))
+ len += nla_total_size(sfid->ufid_len);
++ else
++ len += nla_total_size(ovs_key_attr_size());
+
+ /* OVS_FLOW_ATTR_KEY */
+ if (!sfid || should_fill_key(sfid, ufid_flags))
+@@ -879,7 +883,10 @@ static struct sk_buff *ovs_flow_cmd_build_info(const struct sw_flow *flow,
+ retval = ovs_flow_cmd_fill_info(flow, dp_ifindex, skb,
+ info->snd_portid, info->snd_seq, 0,
+ cmd, ufid_flags);
+- BUG_ON(retval < 0);
++ if (WARN_ON_ONCE(retval < 0)) {
++ kfree_skb(skb);
++ skb = ERR_PTR(retval);
++ }
+ return skb;
+ }
+
+@@ -1343,7 +1350,10 @@ static int ovs_flow_cmd_del(struct sk_buff *skb, struct genl_info *info)
+ OVS_FLOW_CMD_DEL,
+ ufid_flags);
+ rcu_read_unlock();
+- BUG_ON(err < 0);
++ if (WARN_ON_ONCE(err < 0)) {
++ kfree_skb(reply);
++ goto out_free;
++ }
+
+ ovs_notify(&dp_flow_genl_family, reply, info);
+ } else {
+@@ -1351,6 +1361,7 @@ static int ovs_flow_cmd_del(struct sk_buff *skb, struct genl_info *info)
+ }
+ }
+
++out_free:
+ ovs_flow_free(flow, true);
+ return 0;
+ unlock:
+diff --git a/net/psample/psample.c b/net/psample/psample.c
+index 66e4b61a350d..a3f7e35dccac 100644
+--- a/net/psample/psample.c
++++ b/net/psample/psample.c
+@@ -221,7 +221,7 @@ void psample_sample_packet(struct psample_group *group, struct sk_buff *skb,
+ data_len = PSAMPLE_MAX_PACKET_SIZE - meta_len - NLA_HDRLEN
+ - NLA_ALIGNTO;
+
+- nl_skb = genlmsg_new(meta_len + data_len, GFP_ATOMIC);
++ nl_skb = genlmsg_new(meta_len + nla_total_size(data_len), GFP_ATOMIC);
+ if (unlikely(!nl_skb))
+ return;
+
+diff --git a/net/sched/sch_mq.c b/net/sched/sch_mq.c
+index 0d578333e967..278c0b2dc523 100644
+--- a/net/sched/sch_mq.c
++++ b/net/sched/sch_mq.c
+@@ -245,7 +245,8 @@ static int mq_dump_class_stats(struct Qdisc *sch, unsigned long cl,
+ struct netdev_queue *dev_queue = mq_queue_get(sch, cl);
+
+ sch = dev_queue->qdisc_sleeping;
+- if (gnet_stats_copy_basic(&sch->running, d, NULL, &sch->bstats) < 0 ||
++ if (gnet_stats_copy_basic(&sch->running, d, sch->cpu_bstats,
++ &sch->bstats) < 0 ||
+ qdisc_qstats_copy(d, sch) < 0)
+ return -1;
+ return 0;
+diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
+index 46980b8d66c5..0d0113a24962 100644
+--- a/net/sched/sch_mqprio.c
++++ b/net/sched/sch_mqprio.c
+@@ -557,8 +557,8 @@ static int mqprio_dump_class_stats(struct Qdisc *sch, unsigned long cl,
+ struct netdev_queue *dev_queue = mqprio_queue_get(sch, cl);
+
+ sch = dev_queue->qdisc_sleeping;
+- if (gnet_stats_copy_basic(qdisc_root_sleeping_running(sch),
+- d, NULL, &sch->bstats) < 0 ||
++ if (gnet_stats_copy_basic(qdisc_root_sleeping_running(sch), d,
++ sch->cpu_bstats, &sch->bstats) < 0 ||
+ qdisc_qstats_copy(d, sch) < 0)
+ return -1;
+ }
+diff --git a/net/sched/sch_multiq.c b/net/sched/sch_multiq.c
+index e1087746f6a2..5cdf3b6abae6 100644
+--- a/net/sched/sch_multiq.c
++++ b/net/sched/sch_multiq.c
+@@ -330,7 +330,7 @@ static int multiq_dump_class_stats(struct Qdisc *sch, unsigned long cl,
+
+ cl_q = q->queues[cl - 1];
+ if (gnet_stats_copy_basic(qdisc_root_sleeping_running(sch),
+- d, NULL, &cl_q->bstats) < 0 ||
++ d, cl_q->cpu_bstats, &cl_q->bstats) < 0 ||
+ qdisc_qstats_copy(d, cl_q) < 0)
+ return -1;
+
+diff --git a/net/sched/sch_prio.c b/net/sched/sch_prio.c
+index 0f8fedb8809a..18b884cfdfe8 100644
+--- a/net/sched/sch_prio.c
++++ b/net/sched/sch_prio.c
+@@ -356,7 +356,7 @@ static int prio_dump_class_stats(struct Qdisc *sch, unsigned long cl,
+
+ cl_q = q->queues[cl - 1];
+ if (gnet_stats_copy_basic(qdisc_root_sleeping_running(sch),
+- d, NULL, &cl_q->bstats) < 0 ||
++ d, cl_q->cpu_bstats, &cl_q->bstats) < 0 ||
+ qdisc_qstats_copy(d, cl_q) < 0)
+ return -1;
+
+diff --git a/net/sctp/associola.c b/net/sctp/associola.c
+index 5010cce52c93..a40b80cdb4b3 100644
+--- a/net/sctp/associola.c
++++ b/net/sctp/associola.c
+@@ -65,6 +65,7 @@ static struct sctp_association *sctp_association_init(
+ /* Discarding const is appropriate here. */
+ asoc->ep = (struct sctp_endpoint *)ep;
+ asoc->base.sk = (struct sock *)sk;
++ asoc->base.net = sock_net(sk);
+
+ sctp_endpoint_hold(asoc->ep);
+ sock_hold(asoc->base.sk);
+diff --git a/net/sctp/endpointola.c b/net/sctp/endpointola.c
+index 69cebb2c998b..046da0bdc539 100644
+--- a/net/sctp/endpointola.c
++++ b/net/sctp/endpointola.c
+@@ -152,6 +152,7 @@ static struct sctp_endpoint *sctp_endpoint_init(struct sctp_endpoint *ep,
+
+ /* Remember who we are attached to. */
+ ep->base.sk = sk;
++ ep->base.net = sock_net(sk);
+ sock_hold(ep->base.sk);
+
+ return ep;
+diff --git a/net/sctp/input.c b/net/sctp/input.c
+index 1008cdc44dd6..2b43b5ed3241 100644
+--- a/net/sctp/input.c
++++ b/net/sctp/input.c
+@@ -876,7 +876,7 @@ static inline int sctp_hash_cmp(struct rhashtable_compare_arg *arg,
+ if (!sctp_transport_hold(t))
+ return err;
+
+- if (!net_eq(sock_net(t->asoc->base.sk), x->net))
++ if (!net_eq(t->asoc->base.net, x->net))
+ goto out;
+ if (x->lport != htons(t->asoc->base.bind_addr.port))
+ goto out;
+@@ -891,7 +891,7 @@ static inline __u32 sctp_hash_obj(const void *data, u32 len, u32 seed)
+ {
+ const struct sctp_transport *t = data;
+
+- return sctp_hashfn(sock_net(t->asoc->base.sk),
++ return sctp_hashfn(t->asoc->base.net,
+ htons(t->asoc->base.bind_addr.port),
+ &t->ipaddr, seed);
+ }
+diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
+index 2c244b29a199..9eeea0d8e4cf 100644
+--- a/net/sctp/sm_statefuns.c
++++ b/net/sctp/sm_statefuns.c
+@@ -2160,8 +2160,10 @@ enum sctp_disposition sctp_sf_do_5_2_4_dupcook(
+
+ /* Update socket peer label if first association. */
+ if (security_sctp_assoc_request((struct sctp_endpoint *)ep,
+- chunk->skb))
++ chunk->skb)) {
++ sctp_association_free(new_asoc);
+ return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
++ }
+
+ /* Set temp so that it won't be added into hashtable */
+ new_asoc->temp = 1;
+diff --git a/net/socket.c b/net/socket.c
+index 6a9ab7a8b1d2..d7a106028f0e 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -2232,15 +2232,10 @@ static int copy_msghdr_from_user(struct msghdr *kmsg,
+ return err < 0 ? err : 0;
+ }
+
+-static int ___sys_sendmsg(struct socket *sock, struct user_msghdr __user *msg,
+- struct msghdr *msg_sys, unsigned int flags,
+- struct used_address *used_address,
+- unsigned int allowed_msghdr_flags)
++static int ____sys_sendmsg(struct socket *sock, struct msghdr *msg_sys,
++ unsigned int flags, struct used_address *used_address,
++ unsigned int allowed_msghdr_flags)
+ {
+- struct compat_msghdr __user *msg_compat =
+- (struct compat_msghdr __user *)msg;
+- struct sockaddr_storage address;
+- struct iovec iovstack[UIO_FASTIOV], *iov = iovstack;
+ unsigned char ctl[sizeof(struct cmsghdr) + 20]
+ __aligned(sizeof(__kernel_size_t));
+ /* 20 is size of ipv6_pktinfo */
+@@ -2248,19 +2243,10 @@ static int ___sys_sendmsg(struct socket *sock, struct user_msghdr __user *msg,
+ int ctl_len;
+ ssize_t err;
+
+- msg_sys->msg_name = &address;
+-
+- if (MSG_CMSG_COMPAT & flags)
+- err = get_compat_msghdr(msg_sys, msg_compat, NULL, &iov);
+- else
+- err = copy_msghdr_from_user(msg_sys, msg, NULL, &iov);
+- if (err < 0)
+- return err;
+-
+ err = -ENOBUFS;
+
+ if (msg_sys->msg_controllen > INT_MAX)
+- goto out_freeiov;
++ goto out;
+ flags |= (msg_sys->msg_flags & allowed_msghdr_flags);
+ ctl_len = msg_sys->msg_controllen;
+ if ((MSG_CMSG_COMPAT & flags) && ctl_len) {
+@@ -2268,7 +2254,7 @@ static int ___sys_sendmsg(struct socket *sock, struct user_msghdr __user *msg,
+ cmsghdr_from_user_compat_to_kern(msg_sys, sock->sk, ctl,
+ sizeof(ctl));
+ if (err)
+- goto out_freeiov;
++ goto out;
+ ctl_buf = msg_sys->msg_control;
+ ctl_len = msg_sys->msg_controllen;
+ } else if (ctl_len) {
+@@ -2277,7 +2263,7 @@ static int ___sys_sendmsg(struct socket *sock, struct user_msghdr __user *msg,
+ if (ctl_len > sizeof(ctl)) {
+ ctl_buf = sock_kmalloc(sock->sk, ctl_len, GFP_KERNEL);
+ if (ctl_buf == NULL)
+- goto out_freeiov;
++ goto out;
+ }
+ err = -EFAULT;
+ /*
+@@ -2323,7 +2309,47 @@ static int ___sys_sendmsg(struct socket *sock, struct user_msghdr __user *msg,
+ out_freectl:
+ if (ctl_buf != ctl)
+ sock_kfree_s(sock->sk, ctl_buf, ctl_len);
+-out_freeiov:
++out:
++ return err;
++}
++
++static int sendmsg_copy_msghdr(struct msghdr *msg,
++ struct user_msghdr __user *umsg, unsigned flags,
++ struct iovec **iov)
++{
++ int err;
++
++ if (flags & MSG_CMSG_COMPAT) {
++ struct compat_msghdr __user *msg_compat;
++
++ msg_compat = (struct compat_msghdr __user *) umsg;
++ err = get_compat_msghdr(msg, msg_compat, NULL, iov);
++ } else {
++ err = copy_msghdr_from_user(msg, umsg, NULL, iov);
++ }
++ if (err < 0)
++ return err;
++
++ return 0;
++}
++
++static int ___sys_sendmsg(struct socket *sock, struct user_msghdr __user *msg,
++ struct msghdr *msg_sys, unsigned int flags,
++ struct used_address *used_address,
++ unsigned int allowed_msghdr_flags)
++{
++ struct sockaddr_storage address;
++ struct iovec iovstack[UIO_FASTIOV], *iov = iovstack;
++ ssize_t err;
++
++ msg_sys->msg_name = &address;
++
++ err = sendmsg_copy_msghdr(msg_sys, msg, flags, &iov);
++ if (err < 0)
++ return err;
++
++ err = ____sys_sendmsg(sock, msg_sys, flags, used_address,
++ allowed_msghdr_flags);
+ kfree(iov);
+ return err;
+ }
+@@ -2331,12 +2357,27 @@ out_freeiov:
+ /*
+ * BSD sendmsg interface
+ */
+-long __sys_sendmsg_sock(struct socket *sock, struct user_msghdr __user *msg,
++long __sys_sendmsg_sock(struct socket *sock, struct user_msghdr __user *umsg,
+ unsigned int flags)
+ {
+- struct msghdr msg_sys;
++ struct iovec iovstack[UIO_FASTIOV], *iov = iovstack;
++ struct sockaddr_storage address;
++ struct msghdr msg = { .msg_name = &address };
++ ssize_t err;
++
++ err = sendmsg_copy_msghdr(&msg, umsg, flags, &iov);
++ if (err)
++ return err;
++ /* disallow ancillary data requests from this path */
++ if (msg.msg_control || msg.msg_controllen) {
++ err = -EINVAL;
++ goto out;
++ }
+
+- return ___sys_sendmsg(sock, msg, &msg_sys, flags, NULL, 0);
++ err = ____sys_sendmsg(sock, &msg, flags, NULL, 0);
++out:
++ kfree(iov);
++ return err;
+ }
+
+ long __sys_sendmsg(int fd, struct user_msghdr __user *msg, unsigned int flags,
+@@ -2442,33 +2483,41 @@ SYSCALL_DEFINE4(sendmmsg, int, fd, struct mmsghdr __user *, mmsg,
+ return __sys_sendmmsg(fd, mmsg, vlen, flags, true);
+ }
+
+-static int ___sys_recvmsg(struct socket *sock, struct user_msghdr __user *msg,
+- struct msghdr *msg_sys, unsigned int flags, int nosec)
++static int recvmsg_copy_msghdr(struct msghdr *msg,
++ struct user_msghdr __user *umsg, unsigned flags,
++ struct sockaddr __user **uaddr,
++ struct iovec **iov)
+ {
+- struct compat_msghdr __user *msg_compat =
+- (struct compat_msghdr __user *)msg;
+- struct iovec iovstack[UIO_FASTIOV];
+- struct iovec *iov = iovstack;
+- unsigned long cmsg_ptr;
+- int len;
+ ssize_t err;
+
+- /* kernel mode address */
+- struct sockaddr_storage addr;
++ if (MSG_CMSG_COMPAT & flags) {
++ struct compat_msghdr __user *msg_compat;
+
+- /* user mode address pointers */
+- struct sockaddr __user *uaddr;
+- int __user *uaddr_len = COMPAT_NAMELEN(msg);
+-
+- msg_sys->msg_name = &addr;
+-
+- if (MSG_CMSG_COMPAT & flags)
+- err = get_compat_msghdr(msg_sys, msg_compat, &uaddr, &iov);
+- else
+- err = copy_msghdr_from_user(msg_sys, msg, &uaddr, &iov);
++ msg_compat = (struct compat_msghdr __user *) umsg;
++ err = get_compat_msghdr(msg, msg_compat, uaddr, iov);
++ } else {
++ err = copy_msghdr_from_user(msg, umsg, uaddr, iov);
++ }
+ if (err < 0)
+ return err;
+
++ return 0;
++}
++
++static int ____sys_recvmsg(struct socket *sock, struct msghdr *msg_sys,
++ struct user_msghdr __user *msg,
++ struct sockaddr __user *uaddr,
++ unsigned int flags, int nosec)
++{
++ struct compat_msghdr __user *msg_compat =
++ (struct compat_msghdr __user *) msg;
++ int __user *uaddr_len = COMPAT_NAMELEN(msg);
++ struct sockaddr_storage addr;
++ unsigned long cmsg_ptr;
++ int len;
++ ssize_t err;
++
++ msg_sys->msg_name = &addr;
+ cmsg_ptr = (unsigned long)msg_sys->msg_control;
+ msg_sys->msg_flags = flags & (MSG_CMSG_CLOEXEC|MSG_CMSG_COMPAT);
+
+@@ -2479,7 +2528,7 @@ static int ___sys_recvmsg(struct socket *sock, struct user_msghdr __user *msg,
+ flags |= MSG_DONTWAIT;
+ err = (nosec ? sock_recvmsg_nosec : sock_recvmsg)(sock, msg_sys, flags);
+ if (err < 0)
+- goto out_freeiov;
++ goto out;
+ len = err;
+
+ if (uaddr != NULL) {
+@@ -2487,12 +2536,12 @@ static int ___sys_recvmsg(struct socket *sock, struct user_msghdr __user *msg,
+ msg_sys->msg_namelen, uaddr,
+ uaddr_len);
+ if (err < 0)
+- goto out_freeiov;
++ goto out;
+ }
+ err = __put_user((msg_sys->msg_flags & ~MSG_CMSG_COMPAT),
+ COMPAT_FLAGS(msg));
+ if (err)
+- goto out_freeiov;
++ goto out;
+ if (MSG_CMSG_COMPAT & flags)
+ err = __put_user((unsigned long)msg_sys->msg_control - cmsg_ptr,
+ &msg_compat->msg_controllen);
+@@ -2500,10 +2549,25 @@ static int ___sys_recvmsg(struct socket *sock, struct user_msghdr __user *msg,
+ err = __put_user((unsigned long)msg_sys->msg_control - cmsg_ptr,
+ &msg->msg_controllen);
+ if (err)
+- goto out_freeiov;
++ goto out;
+ err = len;
++out:
++ return err;
++}
++
++static int ___sys_recvmsg(struct socket *sock, struct user_msghdr __user *msg,
++ struct msghdr *msg_sys, unsigned int flags, int nosec)
++{
++ struct iovec iovstack[UIO_FASTIOV], *iov = iovstack;
++ /* user mode address pointers */
++ struct sockaddr __user *uaddr;
++ ssize_t err;
++
++ err = recvmsg_copy_msghdr(msg_sys, msg, flags, &uaddr, &iov);
++ if (err < 0)
++ return err;
+
+-out_freeiov:
++ err = ____sys_recvmsg(sock, msg_sys, msg, uaddr, flags, nosec);
+ kfree(iov);
+ return err;
+ }
+@@ -2512,12 +2576,28 @@ out_freeiov:
+ * BSD recvmsg interface
+ */
+
+-long __sys_recvmsg_sock(struct socket *sock, struct user_msghdr __user *msg,
++long __sys_recvmsg_sock(struct socket *sock, struct user_msghdr __user *umsg,
+ unsigned int flags)
+ {
+- struct msghdr msg_sys;
++ struct iovec iovstack[UIO_FASTIOV], *iov = iovstack;
++ struct sockaddr_storage address;
++ struct msghdr msg = { .msg_name = &address };
++ struct sockaddr __user *uaddr;
++ ssize_t err;
++
++ err = recvmsg_copy_msghdr(&msg, umsg, flags, &uaddr, &iov);
++ if (err)
++ return err;
++ /* disallow ancillary data requests from this path */
++ if (msg.msg_control || msg.msg_controllen) {
++ err = -EINVAL;
++ goto out;
++ }
+
+- return ___sys_recvmsg(sock, msg, &msg_sys, flags, 0);
++ err = ____sys_recvmsg(sock, &msg, umsg, uaddr, flags, 0);
++out:
++ kfree(iov);
++ return err;
+ }
+
+ long __sys_recvmsg(int fd, struct user_msghdr __user *msg, unsigned int flags,
+diff --git a/net/tipc/netlink_compat.c b/net/tipc/netlink_compat.c
+index e135d4e11231..d4d2928424e2 100644
+--- a/net/tipc/netlink_compat.c
++++ b/net/tipc/netlink_compat.c
+@@ -550,7 +550,7 @@ static int tipc_nl_compat_link_stat_dump(struct tipc_nl_compat_msg *msg,
+ if (len <= 0)
+ return -EINVAL;
+
+- len = min_t(int, len, TIPC_MAX_BEARER_NAME);
++ len = min_t(int, len, TIPC_MAX_LINK_NAME);
+ if (!string_is_valid(name, len))
+ return -EINVAL;
+
+@@ -822,7 +822,7 @@ static int tipc_nl_compat_link_reset_stats(struct tipc_nl_compat_cmd_doit *cmd,
+ if (len <= 0)
+ return -EINVAL;
+
+- len = min_t(int, len, TIPC_MAX_BEARER_NAME);
++ len = min_t(int, len, TIPC_MAX_LINK_NAME);
+ if (!string_is_valid(name, len))
+ return -EINVAL;
+
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index ac2dfe36022d..c7ecd053d4e7 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -208,24 +208,15 @@ int tls_push_partial_record(struct sock *sk, struct tls_context *ctx,
+ return tls_push_sg(sk, ctx, sg, offset, flags);
+ }
+
+-bool tls_free_partial_record(struct sock *sk, struct tls_context *ctx)
++void tls_free_partial_record(struct sock *sk, struct tls_context *ctx)
+ {
+ struct scatterlist *sg;
+
+- sg = ctx->partially_sent_record;
+- if (!sg)
+- return false;
+-
+- while (1) {
++ for (sg = ctx->partially_sent_record; sg; sg = sg_next(sg)) {
+ put_page(sg_page(sg));
+ sk_mem_uncharge(sk, sg->length);
+-
+- if (sg_is_last(sg))
+- break;
+- sg++;
+ }
+ ctx->partially_sent_record = NULL;
+- return true;
+ }
+
+ static void tls_write_space(struct sock *sk)
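
A note on the tls_free_partial_record() hunk above: the open-coded walk
(sg++ until sg_is_last()) becomes a for loop over sg_next(), which also
follows chained scatterlists and returns NULL past the last entry, so no
explicit terminator check is needed. A minimal sketch of the idiom under
those assumptions -- illustrative, not the exact kernel code:

    #include <linux/mm.h>
    #include <linux/scatterlist.h>
    #include <net/sock.h>

    static void release_sg_pages(struct sock *sk, struct scatterlist *head)
    {
            struct scatterlist *sg;

            /* sg_next() yields NULL after the final entry and
             * transparently hops across chained tables. */
            for (sg = head; sg; sg = sg_next(sg)) {
                    put_page(sg_page(sg));           /* drop the page ref */
                    sk_mem_uncharge(sk, sg->length); /* return accounting */
            }
    }
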
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 41b2bdc05ba3..45e993c4e8f6 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -705,8 +705,7 @@ static int tls_push_record(struct sock *sk, int flags,
+ }
+
+ i = msg_pl->sg.start;
+- sg_chain(rec->sg_aead_in, 2, rec->inplace_crypto ?
+- &msg_en->sg.data[i] : &msg_pl->sg.data[i]);
++ sg_chain(rec->sg_aead_in, 2, &msg_pl->sg.data[i]);
+
+ i = msg_en->sg.end;
+ sk_msg_iter_var_prev(i);
+@@ -766,8 +765,14 @@ static int bpf_exec_tx_verdict(struct sk_msg *msg, struct sock *sk,
+
+ policy = !(flags & MSG_SENDPAGE_NOPOLICY);
+ psock = sk_psock_get(sk);
+- if (!psock || !policy)
+- return tls_push_record(sk, flags, record_type);
++ if (!psock || !policy) {
++ err = tls_push_record(sk, flags, record_type);
++ if (err) {
++ *copied -= sk_msg_free(sk, msg);
++ tls_free_open_rec(sk);
++ }
++ return err;
++ }
+ more_data:
+ enospc = sk_msg_full(msg);
+ if (psock->eval == __SK_NONE) {
+@@ -965,8 +970,6 @@ alloc_encrypted:
+ if (ret)
+ goto fallback_to_reg_send;
+
+- rec->inplace_crypto = 0;
+-
+ num_zc++;
+ copied += try_to_copy;
+
+@@ -979,7 +982,7 @@ alloc_encrypted:
+ num_async++;
+ else if (ret == -ENOMEM)
+ goto wait_for_memory;
+- else if (ret == -ENOSPC)
++ else if (ctx->open_rec && ret == -ENOSPC)
+ goto rollback_iter;
+ else if (ret != -EAGAIN)
+ goto send_end;
+@@ -1048,11 +1051,12 @@ wait_for_memory:
+ ret = sk_stream_wait_memory(sk, &timeo);
+ if (ret) {
+ trim_sgl:
+- tls_trim_both_msgs(sk, orig_size);
++ if (ctx->open_rec)
++ tls_trim_both_msgs(sk, orig_size);
+ goto send_end;
+ }
+
+- if (msg_en->sg.size < required_size)
++ if (ctx->open_rec && msg_en->sg.size < required_size)
+ goto alloc_encrypted;
+ }
+
+@@ -1164,7 +1168,6 @@ alloc_payload:
+
+ tls_ctx->pending_open_record_frags = true;
+ if (full_record || eor || sk_msg_full(msg_pl)) {
+- rec->inplace_crypto = 0;
+ ret = bpf_exec_tx_verdict(msg_pl, sk, full_record,
+ record_type, &copied, flags);
+ if (ret) {
+@@ -1185,11 +1188,13 @@ wait_for_sndbuf:
+ wait_for_memory:
+ ret = sk_stream_wait_memory(sk, &timeo);
+ if (ret) {
+- tls_trim_both_msgs(sk, msg_pl->sg.size);
++ if (ctx->open_rec)
++ tls_trim_both_msgs(sk, msg_pl->sg.size);
+ goto sendpage_end;
+ }
+
+- goto alloc_payload;
++ if (ctx->open_rec)
++ goto alloc_payload;
+ }
+
+ if (num_async) {
+@@ -2081,7 +2086,8 @@ void tls_sw_release_resources_tx(struct sock *sk)
+ /* Free up un-sent records in tx_list. First, free
+ * the partially sent record if any at head of tx_list.
+ */
+- if (tls_free_partial_record(sk, tls_ctx)) {
++ if (tls_ctx->partially_sent_record) {
++ tls_free_partial_record(sk, tls_ctx);
+ rec = list_first_entry(&ctx->tx_list,
+ struct tls_rec, list);
+ list_del(&rec->list);
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index c6f3c4a1bd99..f3423562d933 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -495,6 +495,8 @@ static void ___xfrm_state_destroy(struct xfrm_state *x)
+ x->type->destructor(x);
+ xfrm_put_type(x->type);
+ }
++ if (x->xfrag.page)
++ put_page(x->xfrag.page);
+ xfrm_dev_state_free(x);
+ security_xfrm_state_free(x);
+ xfrm_state_free(x);
+diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
+index 1d9be26b4edd..42b571cde177 100644
+--- a/samples/bpf/Makefile
++++ b/samples/bpf/Makefile
+@@ -176,6 +176,7 @@ KBUILD_HOSTCFLAGS += -I$(srctree)/tools/lib/bpf/
+ KBUILD_HOSTCFLAGS += -I$(srctree)/tools/testing/selftests/bpf/
+ KBUILD_HOSTCFLAGS += -I$(srctree)/tools/lib/ -I$(srctree)/tools/include
+ KBUILD_HOSTCFLAGS += -I$(srctree)/tools/perf
++KBUILD_HOSTCFLAGS += -DHAVE_ATTR_TEST=0
+
+ HOSTCFLAGS_bpf_load.o += -I$(objtree)/usr/include -Wno-unused-variable
+
+diff --git a/scripts/gdb/linux/symbols.py b/scripts/gdb/linux/symbols.py
+index 2f5b95f09fa0..3c2950430289 100644
+--- a/scripts/gdb/linux/symbols.py
++++ b/scripts/gdb/linux/symbols.py
+@@ -99,7 +99,8 @@ lx-symbols command."""
+ attrs[n]['name'].string(): attrs[n]['address']
+ for n in range(int(sect_attrs['nsections']))}
+ args = []
+- for section_name in [".data", ".data..read_mostly", ".rodata", ".bss"]:
++ for section_name in [".data", ".data..read_mostly", ".rodata", ".bss",
++ ".text", ".text.hot", ".text.unlikely"]:
+ address = section_name_to_address.get(section_name)
+ if address:
+ args.append(" -s {name} {addr}".format(
+diff --git a/sound/core/compress_offload.c b/sound/core/compress_offload.c
+index 41905afada63..f34ce564d92c 100644
+--- a/sound/core/compress_offload.c
++++ b/sound/core/compress_offload.c
+@@ -528,7 +528,7 @@ static int snd_compress_check_input(struct snd_compr_params *params)
+ {
+ /* first let's check the buffer parameters */
+ if (params->buffer.fragment_size == 0 ||
+- params->buffer.fragments > INT_MAX / params->buffer.fragment_size ||
++ params->buffer.fragments > U32_MAX / params->buffer.fragment_size ||
+ params->buffer.fragments == 0)
+ return -EINVAL;
+
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 00796c7727ea..ff99f5feaace 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -2703,6 +2703,18 @@ static int patch_i915_icl_hdmi(struct hda_codec *codec)
+ return intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map));
+ }
+
++static int patch_i915_tgl_hdmi(struct hda_codec *codec)
++{
++ /*
++ * pin to port mapping table where the value indicates the pin number and
++ * the index indicates the port number, which is 1-based.
++ */
++ static const int map[] = {0x4, 0x6, 0x8, 0xa, 0xb, 0xc, 0xd, 0xe, 0xf};
++
++ return intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map));
++}
++
++
+ /* Intel Baytrail and Braswell; with eld notifier */
+ static int patch_i915_byt_hdmi(struct hda_codec *codec)
+ {
+@@ -3960,6 +3972,7 @@ HDA_CODEC_ENTRY(0x8086280b, "Kabylake HDMI", patch_i915_hsw_hdmi),
+ HDA_CODEC_ENTRY(0x8086280c, "Cannonlake HDMI", patch_i915_glk_hdmi),
+ HDA_CODEC_ENTRY(0x8086280d, "Geminilake HDMI", patch_i915_glk_hdmi),
+ HDA_CODEC_ENTRY(0x8086280f, "Icelake HDMI", patch_i915_icl_hdmi),
++HDA_CODEC_ENTRY(0x80862812, "Tigerlake HDMI", patch_i915_tgl_hdmi),
+ HDA_CODEC_ENTRY(0x80862880, "CedarTrail HDMI", patch_generic_hdmi),
+ HDA_CODEC_ENTRY(0x80862882, "Valleyview2 HDMI", patch_i915_byt_hdmi),
+ HDA_CODEC_ENTRY(0x80862883, "Braswell HDMI", patch_i915_byt_hdmi),
+diff --git a/sound/soc/codecs/hdac_hda.c b/sound/soc/codecs/hdac_hda.c
+index 91242b6f8ea7..4570f662fb48 100644
+--- a/sound/soc/codecs/hdac_hda.c
++++ b/sound/soc/codecs/hdac_hda.c
+@@ -410,8 +410,8 @@ static void hdac_hda_codec_remove(struct snd_soc_component *component)
+ return;
+ }
+
+- snd_hdac_ext_bus_link_put(hdev->bus, hlink);
+ pm_runtime_disable(&hdev->dev);
++ snd_hdac_ext_bus_link_put(hdev->bus, hlink);
+ }
+
+ static const struct snd_soc_dapm_route hdac_hda_dapm_routes[] = {
+diff --git a/sound/soc/codecs/msm8916-wcd-analog.c b/sound/soc/codecs/msm8916-wcd-analog.c
+index 368b6c09474b..aa9a8ac987dc 100644
+--- a/sound/soc/codecs/msm8916-wcd-analog.c
++++ b/sound/soc/codecs/msm8916-wcd-analog.c
+@@ -306,7 +306,7 @@ struct pm8916_wcd_analog_priv {
+ };
+
+ static const char *const adc2_mux_text[] = { "ZERO", "INP2", "INP3" };
+-static const char *const rdac2_mux_text[] = { "ZERO", "RX2", "RX1" };
++static const char *const rdac2_mux_text[] = { "RX1", "RX2" };
+ static const char *const hph_text[] = { "ZERO", "Switch", };
+
+ static const struct soc_enum hph_enum = SOC_ENUM_SINGLE_VIRT(
+@@ -321,7 +321,7 @@ static const struct soc_enum adc2_enum = SOC_ENUM_SINGLE_VIRT(
+
+ /* RDAC2 MUX */
+ static const struct soc_enum rdac2_mux_enum = SOC_ENUM_SINGLE(
+- CDC_D_CDC_CONN_HPHR_DAC_CTL, 0, 3, rdac2_mux_text);
++ CDC_D_CDC_CONN_HPHR_DAC_CTL, 0, 2, rdac2_mux_text);
+
+ static const struct snd_kcontrol_new spkr_switch[] = {
+ SOC_DAPM_SINGLE("Switch", CDC_A_SPKR_DAC_CTL, 7, 1, 0)
+diff --git a/sound/soc/kirkwood/kirkwood-i2s.c b/sound/soc/kirkwood/kirkwood-i2s.c
+index 3446a113f482..eb38cdb37f0e 100644
+--- a/sound/soc/kirkwood/kirkwood-i2s.c
++++ b/sound/soc/kirkwood/kirkwood-i2s.c
+@@ -559,10 +559,6 @@ static int kirkwood_i2s_dev_probe(struct platform_device *pdev)
+ return PTR_ERR(priv->clk);
+ }
+
+- err = clk_prepare_enable(priv->clk);
+- if (err < 0)
+- return err;
+-
+ priv->extclk = devm_clk_get(&pdev->dev, "extclk");
+ if (IS_ERR(priv->extclk)) {
+ if (PTR_ERR(priv->extclk) == -EPROBE_DEFER)
+@@ -578,6 +574,10 @@ static int kirkwood_i2s_dev_probe(struct platform_device *pdev)
+ }
+ }
+
++ err = clk_prepare_enable(priv->clk);
++ if (err < 0)
++ return err;
++
+ /* Some sensible defaults - this reflects the powerup values */
+ priv->ctl_play = KIRKWOOD_PLAYCTL_SIZE_24;
+ priv->ctl_rec = KIRKWOOD_RECCTL_SIZE_24;
+@@ -591,7 +591,7 @@ static int kirkwood_i2s_dev_probe(struct platform_device *pdev)
+ priv->ctl_rec |= KIRKWOOD_RECCTL_BURST_128;
+ }
+
+- err = devm_snd_soc_register_component(&pdev->dev, &kirkwood_soc_component,
++ err = snd_soc_register_component(&pdev->dev, &kirkwood_soc_component,
+ soc_dai, 2);
+ if (err) {
+ dev_err(&pdev->dev, "snd_soc_register_component failed\n");
+@@ -614,6 +614,7 @@ static int kirkwood_i2s_dev_remove(struct platform_device *pdev)
+ {
+ struct kirkwood_dma_data *priv = dev_get_drvdata(&pdev->dev);
+
++ snd_soc_unregister_component(&pdev->dev);
+ if (!IS_ERR(priv->extclk))
+ clk_disable_unprepare(priv->extclk);
+ clk_disable_unprepare(priv->clk);
+diff --git a/sound/soc/rockchip/rockchip_max98090.c b/sound/soc/rockchip/rockchip_max98090.c
+index 782e534d4c0d..f2add1fe2e79 100644
+--- a/sound/soc/rockchip/rockchip_max98090.c
++++ b/sound/soc/rockchip/rockchip_max98090.c
+@@ -67,10 +67,13 @@ static int rk_jack_event(struct notifier_block *nb, unsigned long event,
+ struct snd_soc_jack *jack = (struct snd_soc_jack *)data;
+ struct snd_soc_dapm_context *dapm = &jack->card->dapm;
+
+- if (event & SND_JACK_MICROPHONE)
++ if (event & SND_JACK_MICROPHONE) {
+ snd_soc_dapm_force_enable_pin(dapm, "MICBIAS");
+- else
++ snd_soc_dapm_force_enable_pin(dapm, "SHDN");
++ } else {
+ snd_soc_dapm_disable_pin(dapm, "MICBIAS");
++ snd_soc_dapm_disable_pin(dapm, "SHDN");
++ }
+
+ snd_soc_dapm_sync(dapm);
+
+diff --git a/sound/soc/sof/ipc.c b/sound/soc/sof/ipc.c
+index 20dfca9c93b7..c4086186722f 100644
+--- a/sound/soc/sof/ipc.c
++++ b/sound/soc/sof/ipc.c
+@@ -578,8 +578,10 @@ static int sof_set_get_large_ctrl_data(struct snd_sof_dev *sdev,
+ else
+ err = sof_get_ctrl_copy_params(cdata->type, partdata, cdata,
+ sparams);
+- if (err < 0)
++ if (err < 0) {
++ kfree(partdata);
+ return err;
++ }
+
+ msg_bytes = sparams->msg_bytes;
+ pl_size = sparams->pl_size;
+diff --git a/sound/soc/sof/topology.c b/sound/soc/sof/topology.c
+index 96230329e678..355f04663f57 100644
+--- a/sound/soc/sof/topology.c
++++ b/sound/soc/sof/topology.c
+@@ -533,15 +533,16 @@ static int sof_control_load_bytes(struct snd_soc_component *scomp,
+ struct soc_bytes_ext *sbe = (struct soc_bytes_ext *)kc->private_value;
+ int max_size = sbe->max;
+
+- if (le32_to_cpu(control->priv.size) > max_size) {
++ /* init the get/put bytes data */
++ scontrol->size = sizeof(struct sof_ipc_ctrl_data) +
++ le32_to_cpu(control->priv.size);
++
++ if (scontrol->size > max_size) {
+ dev_err(sdev->dev, "err: bytes data size %d exceeds max %d.\n",
+- control->priv.size, max_size);
++ scontrol->size, max_size);
+ return -EINVAL;
+ }
+
+- /* init the get/put bytes data */
+- scontrol->size = sizeof(struct sof_ipc_ctrl_data) +
+- le32_to_cpu(control->priv.size);
+ scontrol->control_data = kzalloc(max_size, GFP_KERNEL);
+ cdata = scontrol->control_data;
+ if (!scontrol->control_data)
+diff --git a/sound/soc/stm/stm32_sai_sub.c b/sound/soc/stm/stm32_sai_sub.c
+index d7501f88aaa6..34e73071d4db 100644
+--- a/sound/soc/stm/stm32_sai_sub.c
++++ b/sound/soc/stm/stm32_sai_sub.c
+@@ -1217,6 +1217,16 @@ static int stm32_sai_pcm_process_spdif(struct snd_pcm_substream *substream,
+ return 0;
+ }
+
++/* No support of mmap in S/PDIF mode */
++static const struct snd_pcm_hardware stm32_sai_pcm_hw_spdif = {
++ .info = SNDRV_PCM_INFO_INTERLEAVED,
++ .buffer_bytes_max = 8 * PAGE_SIZE,
++ .period_bytes_min = 1024,
++ .period_bytes_max = PAGE_SIZE,
++ .periods_min = 2,
++ .periods_max = 8,
++};
++
+ static const struct snd_pcm_hardware stm32_sai_pcm_hw = {
+ .info = SNDRV_PCM_INFO_INTERLEAVED | SNDRV_PCM_INFO_MMAP,
+ .buffer_bytes_max = 8 * PAGE_SIZE,
+@@ -1269,7 +1279,7 @@ static const struct snd_dmaengine_pcm_config stm32_sai_pcm_config = {
+ };
+
+ static const struct snd_dmaengine_pcm_config stm32_sai_pcm_config_spdif = {
+- .pcm_hardware = &stm32_sai_pcm_hw,
++ .pcm_hardware = &stm32_sai_pcm_hw_spdif,
+ .prepare_slave_config = snd_dmaengine_pcm_prepare_slave_config,
+ .process = stm32_sai_pcm_process_spdif,
+ };
+diff --git a/sound/soc/ti/sdma-pcm.c b/sound/soc/ti/sdma-pcm.c
+index a236350beb10..2b0bc234e1b6 100644
+--- a/sound/soc/ti/sdma-pcm.c
++++ b/sound/soc/ti/sdma-pcm.c
+@@ -62,7 +62,7 @@ int sdma_pcm_platform_register(struct device *dev,
+ config->chan_names[0] = txdmachan;
+ config->chan_names[1] = rxdmachan;
+
+- return devm_snd_dmaengine_pcm_register(dev, config, 0);
++ return devm_snd_dmaengine_pcm_register(dev, config, flags);
+ }
+ EXPORT_SYMBOL_GPL(sdma_pcm_platform_register);
+
+diff --git a/tools/perf/util/scripting-engines/trace-event-perl.c b/tools/perf/util/scripting-engines/trace-event-perl.c
+index 61aa7f3df915..6a0dcaee3f3e 100644
+--- a/tools/perf/util/scripting-engines/trace-event-perl.c
++++ b/tools/perf/util/scripting-engines/trace-event-perl.c
+@@ -539,10 +539,11 @@ static int perl_stop_script(void)
+
+ static int perl_generate_script(struct tep_handle *pevent, const char *outfile)
+ {
++ int i, not_first, count, nr_events;
++ struct tep_event **all_events;
+ struct tep_event *event = NULL;
+ struct tep_format_field *f;
+ char fname[PATH_MAX];
+- int not_first, count;
+ FILE *ofp;
+
+ sprintf(fname, "%s.pl", outfile);
+@@ -603,8 +604,11 @@ sub print_backtrace\n\
+ }\n\n\
+ ");
+
++ nr_events = tep_get_events_count(pevent);
++ all_events = tep_list_events(pevent, TEP_EVENT_SORT_ID);
+
+- while ((event = trace_find_next_event(pevent, event))) {
++ for (i = 0; all_events && i < nr_events; i++) {
++ event = all_events[i];
+ fprintf(ofp, "sub %s::%s\n{\n", event->system, event->name);
+ fprintf(ofp, "\tmy (");
+
+diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
+index 25dc1d765553..df5ebb6af9fc 100644
+--- a/tools/perf/util/scripting-engines/trace-event-python.c
++++ b/tools/perf/util/scripting-engines/trace-event-python.c
+@@ -1687,10 +1687,11 @@ static int python_stop_script(void)
+
+ static int python_generate_script(struct tep_handle *pevent, const char *outfile)
+ {
++ int i, not_first, count, nr_events;
++ struct tep_event **all_events;
+ struct tep_event *event = NULL;
+ struct tep_format_field *f;
+ char fname[PATH_MAX];
+- int not_first, count;
+ FILE *ofp;
+
+ sprintf(fname, "%s.py", outfile);
+@@ -1735,7 +1736,11 @@ static int python_generate_script(struct tep_handle *pevent, const char *outfile
+ fprintf(ofp, "def trace_end():\n");
+ fprintf(ofp, "\tprint(\"in trace_end\")\n\n");
+
+- while ((event = trace_find_next_event(pevent, event))) {
++ nr_events = tep_get_events_count(pevent);
++ all_events = tep_list_events(pevent, TEP_EVENT_SORT_ID);
++
++ for (i = 0; all_events && i < nr_events; i++) {
++ event = all_events[i];
+ fprintf(ofp, "def %s__%s(", event->system, event->name);
+ fprintf(ofp, "event_name, ");
+ fprintf(ofp, "context, ");
+diff --git a/tools/testing/selftests/bpf/test_sockmap.c b/tools/testing/selftests/bpf/test_sockmap.c
+index 3845144e2c91..4a851513c842 100644
+--- a/tools/testing/selftests/bpf/test_sockmap.c
++++ b/tools/testing/selftests/bpf/test_sockmap.c
+@@ -240,14 +240,14 @@ static int sockmap_init_sockets(int verbose)
+ addr.sin_port = htons(S1_PORT);
+ err = bind(s1, (struct sockaddr *)&addr, sizeof(addr));
+ if (err < 0) {
+- perror("bind s1 failed()\n");
++ perror("bind s1 failed()");
+ return errno;
+ }
+
+ addr.sin_port = htons(S2_PORT);
+ err = bind(s2, (struct sockaddr *)&addr, sizeof(addr));
+ if (err < 0) {
+- perror("bind s2 failed()\n");
++ perror("bind s2 failed()");
+ return errno;
+ }
+
+@@ -255,14 +255,14 @@ static int sockmap_init_sockets(int verbose)
+ addr.sin_port = htons(S1_PORT);
+ err = listen(s1, 32);
+ if (err < 0) {
+- perror("listen s1 failed()\n");
++ perror("listen s1 failed()");
+ return errno;
+ }
+
+ addr.sin_port = htons(S2_PORT);
+ err = listen(s2, 32);
+ if (err < 0) {
+- perror("listen s1 failed()\n");
++ perror("listen s1 failed()");
+ return errno;
+ }
+
+@@ -270,14 +270,14 @@ static int sockmap_init_sockets(int verbose)
+ addr.sin_port = htons(S1_PORT);
+ err = connect(c1, (struct sockaddr *)&addr, sizeof(addr));
+ if (err < 0 && errno != EINPROGRESS) {
+- perror("connect c1 failed()\n");
++ perror("connect c1 failed()");
+ return errno;
+ }
+
+ addr.sin_port = htons(S2_PORT);
+ err = connect(c2, (struct sockaddr *)&addr, sizeof(addr));
+ if (err < 0 && errno != EINPROGRESS) {
+- perror("connect c2 failed()\n");
++ perror("connect c2 failed()");
+ return errno;
+ } else if (err < 0) {
+ err = 0;
+@@ -286,13 +286,13 @@ static int sockmap_init_sockets(int verbose)
+ /* Accept Connections */
+ p1 = accept(s1, NULL, NULL);
+ if (p1 < 0) {
+- perror("accept s1 failed()\n");
++ perror("accept s1 failed()");
+ return errno;
+ }
+
+ p2 = accept(s2, NULL, NULL);
+ if (p2 < 0) {
+- perror("accept s1 failed()\n");
++ perror("accept s1 failed()");
+ return errno;
+ }
+
+@@ -332,6 +332,10 @@ static int msg_loop_sendpage(int fd, int iov_length, int cnt,
+ int i, fp;
+
+ file = fopen(".sendpage_tst.tmp", "w+");
++ if (!file) {
++ perror("create file for sendpage");
++ return 1;
++ }
+ for (i = 0; i < iov_length * cnt; i++, k++)
+ fwrite(&k, sizeof(char), 1, file);
+ fflush(file);
+@@ -339,12 +343,17 @@ static int msg_loop_sendpage(int fd, int iov_length, int cnt,
+ fclose(file);
+
+ fp = open(".sendpage_tst.tmp", O_RDONLY);
++ if (fp < 0) {
++ perror("reopen file for sendpage");
++ return 1;
++ }
++
+ clock_gettime(CLOCK_MONOTONIC, &s->start);
+ for (i = 0; i < cnt; i++) {
+ int sent = sendfile(fd, fp, NULL, iov_length);
+
+ if (!drop && sent < 0) {
+- perror("send loop error:");
++ perror("send loop error");
+ close(fp);
+ return sent;
+ } else if (drop && sent >= 0) {
+@@ -463,7 +472,7 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
+ int sent = sendmsg(fd, &msg, flags);
+
+ if (!drop && sent < 0) {
+- perror("send loop error:");
++ perror("send loop error");
+ goto out_errno;
+ } else if (drop && sent >= 0) {
+ printf("send loop error expected: %i\n", sent);
+@@ -499,7 +508,7 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
+ total_bytes -= txmsg_pop_total;
+ err = clock_gettime(CLOCK_MONOTONIC, &s->start);
+ if (err < 0)
+- perror("recv start time: ");
++ perror("recv start time");
+ while (s->bytes_recvd < total_bytes) {
+ if (txmsg_cork) {
+ timeout.tv_sec = 0;
+@@ -543,7 +552,7 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
+ if (recv < 0) {
+ if (errno != EWOULDBLOCK) {
+ clock_gettime(CLOCK_MONOTONIC, &s->end);
+- perror("recv failed()\n");
++ perror("recv failed()");
+ goto out_errno;
+ }
+ }
+@@ -557,7 +566,7 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
+
+ errno = msg_verify_data(&msg, recv, chunk_sz);
+ if (errno) {
+- perror("data verify msg failed\n");
++ perror("data verify msg failed");
+ goto out_errno;
+ }
+ if (recvp) {
+@@ -565,7 +574,7 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
+ recvp,
+ chunk_sz);
+ if (errno) {
+- perror("data verify msg_peek failed\n");
++ perror("data verify msg_peek failed");
+ goto out_errno;
+ }
+ }
+@@ -654,7 +663,7 @@ static int sendmsg_test(struct sockmap_options *opt)
+ err = 0;
+ exit(err ? 1 : 0);
+ } else if (rxpid == -1) {
+- perror("msg_loop_rx: ");
++ perror("msg_loop_rx");
+ return errno;
+ }
+
+@@ -681,7 +690,7 @@ static int sendmsg_test(struct sockmap_options *opt)
+ s.bytes_recvd, recvd_Bps, recvd_Bps/giga);
+ exit(err ? 1 : 0);
+ } else if (txpid == -1) {
+- perror("msg_loop_tx: ");
++ perror("msg_loop_tx");
+ return errno;
+ }
+
+@@ -715,7 +724,7 @@ static int forever_ping_pong(int rate, struct sockmap_options *opt)
+ /* Ping/Pong data from client to server */
+ sc = send(c1, buf, sizeof(buf), 0);
+ if (sc < 0) {
+- perror("send failed()\n");
++ perror("send failed()");
+ return sc;
+ }
+
+@@ -748,7 +757,7 @@ static int forever_ping_pong(int rate, struct sockmap_options *opt)
+ rc = recv(i, buf, sizeof(buf), 0);
+ if (rc < 0) {
+ if (errno != EWOULDBLOCK) {
+- perror("recv failed()\n");
++ perror("recv failed()");
+ return rc;
+ }
+ }
+@@ -760,7 +769,7 @@ static int forever_ping_pong(int rate, struct sockmap_options *opt)
+
+ sc = send(i, buf, rc, 0);
+ if (sc < 0) {
+- perror("send failed()\n");
++ perror("send failed()");
+ return sc;
+ }
+ }
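
The perror() cleanups in this file drop trailing "\n" from the message
argument: perror() itself appends ": <strerror(errno)>" and a newline, so
an embedded "\n" splits the output across two lines. A small
self-contained illustration (ordinary userspace C, not part of the patch):

    #include <errno.h>
    #include <stdio.h>

    int main(void)
    {
            errno = ECONNREFUSED;
            /* Prints "connect c1 failed()" and then ": Connection refused"
             * on the next line -- the embedded newline lands before the
             * colon that perror() adds. */
            perror("connect c1 failed()\n");
            /* Prints one tidy line: "connect c1 failed(): Connection refused" */
            perror("connect c1 failed()");
            return 0;
    }
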
+diff --git a/tools/testing/selftests/bpf/test_sysctl.c b/tools/testing/selftests/bpf/test_sysctl.c
+index a3bebd7c68dd..c938f1767ca7 100644
+--- a/tools/testing/selftests/bpf/test_sysctl.c
++++ b/tools/testing/selftests/bpf/test_sysctl.c
+@@ -158,9 +158,14 @@ static struct sysctl_test tests[] = {
+ .descr = "ctx:file_pos sysctl:read read ok narrow",
+ .insns = {
+ /* If (file_pos == X) */
++#if __BYTE_ORDER == __LITTLE_ENDIAN
+ BPF_LDX_MEM(BPF_B, BPF_REG_7, BPF_REG_1,
+ offsetof(struct bpf_sysctl, file_pos)),
+- BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 2),
++#else
++ BPF_LDX_MEM(BPF_B, BPF_REG_7, BPF_REG_1,
++ offsetof(struct bpf_sysctl, file_pos) + 3),
++#endif
++ BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 4, 2),
+
+ /* return ALLOW; */
+ BPF_MOV64_IMM(BPF_REG_0, 1),
+@@ -173,6 +178,7 @@ static struct sysctl_test tests[] = {
+ .attach_type = BPF_CGROUP_SYSCTL,
+ .sysctl = "kernel/ostype",
+ .open_flags = O_RDONLY,
++ .seek = 4,
+ .result = SUCCESS,
+ },
+ {
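
The "+ 3" in the big-endian branch above reflects byte placement: a
one-byte BPF_B load of a 32-bit field reads its least significant byte at
offset 0 only on little-endian machines; on big-endian ones the LSB sits
at offset 3. A tiny standalone demonstration of the layout (plain C,
separate from the BPF test itself):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint32_t file_pos = 4;            /* the seeked position */
            uint8_t *b = (uint8_t *)&file_pos;

            /* little-endian: byte0=4 byte3=0; big-endian: byte0=0 byte3=4 */
            printf("byte0=%u byte3=%u\n", b[0], b[3]);
            return 0;
    }
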
+diff --git a/tools/testing/selftests/bpf/xdping.c b/tools/testing/selftests/bpf/xdping.c
+index d60a343b1371..842d9155d36c 100644
+--- a/tools/testing/selftests/bpf/xdping.c
++++ b/tools/testing/selftests/bpf/xdping.c
+@@ -45,7 +45,7 @@ static int get_stats(int fd, __u16 count, __u32 raddr)
+ printf("\nXDP RTT data:\n");
+
+ if (bpf_map_lookup_elem(fd, &raddr, &pinginfo)) {
+- perror("bpf_map_lookup elem: ");
++ perror("bpf_map_lookup elem");
+ return 1;
+ }
+
+diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh
+index ab367e75f095..d697815d2785 100755
+--- a/tools/testing/selftests/net/pmtu.sh
++++ b/tools/testing/selftests/net/pmtu.sh
+@@ -1249,8 +1249,7 @@ test_list_flush_ipv4_exception() {
+ done
+ run_cmd ${ns_a} ping -q -M want -i 0.1 -c 2 -s 1800 "${dst2}"
+
+- # Each exception is printed as two lines
+- if [ "$(${ns_a} ip route list cache | wc -l)" -ne 202 ]; then
++ if [ "$(${ns_a} ip -oneline route list cache | wc -l)" -ne 101 ]; then
+ err " can't list cached exceptions"
+ fail=1
+ fi
+@@ -1300,7 +1299,7 @@ test_list_flush_ipv6_exception() {
+ run_cmd ${ns_a} ping -q -M want -i 0.1 -w 1 -s 1800 "${dst_prefix1}${i}"
+ done
+ run_cmd ${ns_a} ping -q -M want -i 0.1 -w 1 -s 1800 "${dst2}"
+- if [ "$(${ns_a} ip -6 route list cache | wc -l)" -ne 101 ]; then
++ if [ "$(${ns_a} ip -oneline -6 route list cache | wc -l)" -ne 101 ]; then
+ err " can't list cached exceptions"
+ fail=1
+ fi
+diff --git a/tools/testing/selftests/net/tls.c b/tools/testing/selftests/net/tls.c
+index 1c8f194d6556..46abcae47dee 100644
+--- a/tools/testing/selftests/net/tls.c
++++ b/tools/testing/selftests/net/tls.c
+@@ -268,6 +268,38 @@ TEST_F(tls, sendmsg_single)
+ EXPECT_EQ(memcmp(buf, test_str, send_len), 0);
+ }
+
++#define MAX_FRAGS 64
++#define SEND_LEN 13
++TEST_F(tls, sendmsg_fragmented)
++{
++ char const *test_str = "test_sendmsg";
++ char buf[SEND_LEN * MAX_FRAGS];
++ struct iovec vec[MAX_FRAGS];
++ struct msghdr msg;
++ int i, frags;
++
++ for (frags = 1; frags <= MAX_FRAGS; frags++) {
++ for (i = 0; i < frags; i++) {
++ vec[i].iov_base = (char *)test_str;
++ vec[i].iov_len = SEND_LEN;
++ }
++
++ memset(&msg, 0, sizeof(struct msghdr));
++ msg.msg_iov = vec;
++ msg.msg_iovlen = frags;
++
++ EXPECT_EQ(sendmsg(self->fd, &msg, 0), SEND_LEN * frags);
++ EXPECT_EQ(recv(self->cfd, buf, SEND_LEN * frags, MSG_WAITALL),
++ SEND_LEN * frags);
++
++ for (i = 0; i < frags; i++)
++ EXPECT_EQ(memcmp(buf + SEND_LEN * i,
++ test_str, SEND_LEN), 0);
++ }
++}
++#undef MAX_FRAGS
++#undef SEND_LEN
++
+ TEST_F(tls, sendmsg_large)
+ {
+ void *mem = malloc(16384);
+@@ -694,6 +726,34 @@ TEST_F(tls, recv_lowat)
+ EXPECT_EQ(memcmp(send_mem, recv_mem + 10, 5), 0);
+ }
+
++TEST_F(tls, recv_rcvbuf)
++{
++ char send_mem[4096];
++ char recv_mem[4096];
++ int rcv_buf = 1024;
++
++ memset(send_mem, 0x1c, sizeof(send_mem));
++
++ EXPECT_EQ(setsockopt(self->cfd, SOL_SOCKET, SO_RCVBUF,
++ &rcv_buf, sizeof(rcv_buf)), 0);
++
++ EXPECT_EQ(send(self->fd, send_mem, 512, 0), 512);
++ memset(recv_mem, 0, sizeof(recv_mem));
++ EXPECT_EQ(recv(self->cfd, recv_mem, sizeof(recv_mem), 0), 512);
++ EXPECT_EQ(memcmp(send_mem, recv_mem, 512), 0);
++
++ if (self->notls)
++ return;
++
++ EXPECT_EQ(send(self->fd, send_mem, 4096, 0), 4096);
++ memset(recv_mem, 0, sizeof(recv_mem));
++ EXPECT_EQ(recv(self->cfd, recv_mem, sizeof(recv_mem), 0), -1);
++ EXPECT_EQ(errno, EMSGSIZE);
++
++ EXPECT_EQ(recv(self->cfd, recv_mem, sizeof(recv_mem), 0), -1);
++ EXPECT_EQ(errno, EMSGSIZE);
++}
++
+ TEST_F(tls, bidir)
+ {
+ char const *test_str = "test_read";
+diff --git a/tools/testing/selftests/vm/gup_benchmark.c b/tools/testing/selftests/vm/gup_benchmark.c
+index c0534e298b51..8e9929ce64cd 100644
+--- a/tools/testing/selftests/vm/gup_benchmark.c
++++ b/tools/testing/selftests/vm/gup_benchmark.c
+@@ -71,7 +71,7 @@ int main(int argc, char **argv)
+ flags |= MAP_SHARED;
+ break;
+ case 'H':
+- flags |= MAP_HUGETLB;
++ flags |= (MAP_HUGETLB | MAP_ANONYMOUS);
+ break;
+ default:
+ return -1;
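
On the gup_benchmark fix just above: MAP_HUGETLB describes anonymous
huge-page mappings, so it needs MAP_ANONYMOUS (or a hugetlbfs file
descriptor in place of fd = -1); the bare MAP_HUGETLB the test used before
makes mmap() fail. A hedged userspace sketch of the corrected call
(succeeding only with reserved huge pages, e.g. vm.nr_hugepages > 0):

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
            /* 2 MiB anonymous huge-page mapping, as the fixed flags request. */
            void *p = mmap(NULL, 2UL << 20, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
            if (p == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }
            munmap(p, 2UL << 20);
            return 0;
    }
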
* [gentoo-commits] proj/linux-patches:5.3 commit in: /
@ 2019-12-13 12:37 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2019-12-13 12:37 UTC (permalink / raw
To: gentoo-commits
commit: 0ed25f649e3464d2d4a156d1ec101d471e87f711
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 13 12:36:47 2019 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Dec 13 12:36:47 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0ed25f64
Linux patch 5.3.16 and add missing entries in README
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 12 +
1015_linux-5.3.16.patch | 3370 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 3382 insertions(+)
diff --git a/0000_README b/0000_README
index 5f3156b..0825437 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,18 @@ Patch: 1012_linux-5.3.13.patch
From: http://www.kernel.org
Desc: Linux 5.3.13
+Patch: 1013_linux-5.3.14.patch
+From: http://www.kernel.org
+Desc: Linux 5.3.14
+
+Patch: 1014_linux-5.3.15.patch
+From: http://www.kernel.org
+Desc: Linux 5.3.15
+
+Patch: 1015_linux-5.3.16.patch
+From: http://www.kernel.org
+Desc: Linux 5.3.16
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1015_linux-5.3.16.patch b/1015_linux-5.3.16.patch
new file mode 100644
index 0000000..ad0944a
--- /dev/null
+++ b/1015_linux-5.3.16.patch
@@ -0,0 +1,3370 @@
+diff --git a/Makefile b/Makefile
+index 5a88d67e9635..ced7342b61ff 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 3
+-SUBLEVEL = 15
++SUBLEVEL = 16
+ EXTRAVERSION =
+ NAME = Bobtail Squid
+
+diff --git a/arch/arm64/boot/dts/exynos/exynos5433.dtsi b/arch/arm64/boot/dts/exynos/exynos5433.dtsi
+index a76f620f7f35..a5f8752f607b 100644
+--- a/arch/arm64/boot/dts/exynos/exynos5433.dtsi
++++ b/arch/arm64/boot/dts/exynos/exynos5433.dtsi
+@@ -18,8 +18,8 @@
+
+ / {
+ compatible = "samsung,exynos5433";
+- #address-cells = <1>;
+- #size-cells = <1>;
++ #address-cells = <2>;
++ #size-cells = <2>;
+
+ interrupt-parent = <&gic>;
+
+@@ -311,7 +311,7 @@
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+- ranges;
++ ranges = <0x0 0x0 0x0 0x18000000>;
+
+ chipid@10000000 {
+ compatible = "samsung,exynos4210-chipid";
+diff --git a/arch/arm64/boot/dts/exynos/exynos7.dtsi b/arch/arm64/boot/dts/exynos/exynos7.dtsi
+index bcb9d8cee267..0821489a874d 100644
+--- a/arch/arm64/boot/dts/exynos/exynos7.dtsi
++++ b/arch/arm64/boot/dts/exynos/exynos7.dtsi
+@@ -12,8 +12,8 @@
+ / {
+ compatible = "samsung,exynos7";
+ interrupt-parent = <&gic>;
+- #address-cells = <1>;
+- #size-cells = <1>;
++ #address-cells = <2>;
++ #size-cells = <2>;
+
+ aliases {
+ pinctrl0 = &pinctrl_alive;
+@@ -98,7 +98,7 @@
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+- ranges;
++ ranges = <0 0 0 0x18000000>;
+
+ chipid@10000000 {
+ compatible = "samsung,exynos4210-chipid";
+diff --git a/arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi b/arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi
+index a7dc319214a4..b0095072bc28 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi
+@@ -1612,7 +1612,7 @@
+ regulator-name = "VDD_HDMI_5V0";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+- gpio = <&exp1 12 GPIO_ACTIVE_LOW>;
++ gpio = <&exp1 12 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
+ vin-supply = <&vdd_5v0_sys>;
+ };
+diff --git a/arch/mips/sgi-ip27/Kconfig b/arch/mips/sgi-ip27/Kconfig
+index ef3847e7aee0..e5b6cadbec85 100644
+--- a/arch/mips/sgi-ip27/Kconfig
++++ b/arch/mips/sgi-ip27/Kconfig
+@@ -38,10 +38,3 @@ config REPLICATE_KTEXT
+ Say Y here to enable replicating the kernel text across multiple
+ nodes in a NUMA cluster. This trades memory for speed.
+
+-config REPLICATE_EXHANDLERS
+- bool "Exception handler replication support"
+- depends on SGI_IP27
+- help
+- Say Y here to enable replicating the kernel exception handlers
+- across multiple nodes in a NUMA cluster. This trades memory for
+- speed.
+diff --git a/arch/mips/sgi-ip27/ip27-init.c b/arch/mips/sgi-ip27/ip27-init.c
+index 066b33f50bcc..db58ebf02870 100644
+--- a/arch/mips/sgi-ip27/ip27-init.c
++++ b/arch/mips/sgi-ip27/ip27-init.c
+@@ -69,23 +69,14 @@ static void per_hub_init(cnodeid_t cnode)
+
+ hub_rtc_init(cnode);
+
+-#ifdef CONFIG_REPLICATE_EXHANDLERS
+- /*
+- * If this is not a headless node initialization,
+- * copy over the caliased exception handlers.
+- */
+- if (get_compact_nodeid() == cnode) {
+- extern char except_vec2_generic, except_vec3_generic;
+- extern void build_tlb_refill_handler(void);
+-
+- memcpy((void *)(CKSEG0 + 0x100), &except_vec2_generic, 0x80);
+- memcpy((void *)(CKSEG0 + 0x180), &except_vec3_generic, 0x80);
+- build_tlb_refill_handler();
+- memcpy((void *)(CKSEG0 + 0x100), (void *) CKSEG0, 0x80);
+- memcpy((void *)(CKSEG0 + 0x180), &except_vec3_generic, 0x100);
++ if (nasid) {
++ /* copy exception handlers from first node to current node */
++ memcpy((void *)NODE_OFFSET_TO_K0(nasid, 0),
++ (void *)CKSEG0, 0x200);
+ __flush_cache_all();
++ /* switch to node local exception handlers */
++ REMOTE_HUB_S(nasid, PI_CALIAS_SIZE, PI_CALIAS_SIZE_8K);
+ }
+-#endif
+ }
+
+ void per_cpu_init(void)
+diff --git a/arch/mips/sgi-ip27/ip27-memory.c b/arch/mips/sgi-ip27/ip27-memory.c
+index fb077a947575..8624a885d95b 100644
+--- a/arch/mips/sgi-ip27/ip27-memory.c
++++ b/arch/mips/sgi-ip27/ip27-memory.c
+@@ -332,11 +332,7 @@ static void __init mlreset(void)
+ * thinks it is a node 0 address.
+ */
+ REMOTE_HUB_S(nasid, PI_REGION_PRESENT, (region_mask | 1));
+-#ifdef CONFIG_REPLICATE_EXHANDLERS
+- REMOTE_HUB_S(nasid, PI_CALIAS_SIZE, PI_CALIAS_SIZE_8K);
+-#else
+ REMOTE_HUB_S(nasid, PI_CALIAS_SIZE, PI_CALIAS_SIZE_0);
+-#endif
+
+ #ifdef LATER
+ /*
+diff --git a/arch/powerpc/kvm/book3s_xive.c b/arch/powerpc/kvm/book3s_xive.c
+index a3f9c665bb5b..baa740815b3c 100644
+--- a/arch/powerpc/kvm/book3s_xive.c
++++ b/arch/powerpc/kvm/book3s_xive.c
+@@ -2005,6 +2005,10 @@ static int kvmppc_xive_create(struct kvm_device *dev, u32 type)
+
+ pr_devel("Creating xive for partition\n");
+
++ /* Already there ? */
++ if (kvm->arch.xive)
++ return -EEXIST;
++
+ xive = kvmppc_xive_get_device(kvm, type);
+ if (!xive)
+ return -ENOMEM;
+@@ -2014,12 +2018,6 @@ static int kvmppc_xive_create(struct kvm_device *dev, u32 type)
+ xive->kvm = kvm;
+ mutex_init(&xive->lock);
+
+- /* Already there ? */
+- if (kvm->arch.xive)
+- ret = -EEXIST;
+- else
+- kvm->arch.xive = xive;
+-
+ /* We use the default queue size set by the host */
+ xive->q_order = xive_native_default_eq_shift();
+ if (xive->q_order < PAGE_SHIFT)
+@@ -2039,6 +2037,7 @@ static int kvmppc_xive_create(struct kvm_device *dev, u32 type)
+ if (ret)
+ return ret;
+
++ kvm->arch.xive = xive;
+ return 0;
+ }
+
+diff --git a/arch/powerpc/kvm/book3s_xive_native.c b/arch/powerpc/kvm/book3s_xive_native.c
+index 78b906ffa0d2..5a3373e06e60 100644
+--- a/arch/powerpc/kvm/book3s_xive_native.c
++++ b/arch/powerpc/kvm/book3s_xive_native.c
+@@ -50,6 +50,24 @@ static void kvmppc_xive_native_cleanup_queue(struct kvm_vcpu *vcpu, int prio)
+ }
+ }
+
++static int kvmppc_xive_native_configure_queue(u32 vp_id, struct xive_q *q,
++ u8 prio, __be32 *qpage,
++ u32 order, bool can_escalate)
++{
++ int rc;
++ __be32 *qpage_prev = q->qpage;
++
++ rc = xive_native_configure_queue(vp_id, q, prio, qpage, order,
++ can_escalate);
++ if (rc)
++ return rc;
++
++ if (qpage_prev)
++ put_page(virt_to_page(qpage_prev));
++
++ return rc;
++}
++
+ void kvmppc_xive_native_cleanup_vcpu(struct kvm_vcpu *vcpu)
+ {
+ struct kvmppc_xive_vcpu *xc = vcpu->arch.xive_vcpu;
+@@ -582,19 +600,14 @@ static int kvmppc_xive_native_set_queue_config(struct kvmppc_xive *xive,
+ q->guest_qaddr = 0;
+ q->guest_qshift = 0;
+
+- rc = xive_native_configure_queue(xc->vp_id, q, priority,
+- NULL, 0, true);
++ rc = kvmppc_xive_native_configure_queue(xc->vp_id, q, priority,
++ NULL, 0, true);
+ if (rc) {
+ pr_err("Failed to reset queue %d for VCPU %d: %d\n",
+ priority, xc->server_num, rc);
+ return rc;
+ }
+
+- if (q->qpage) {
+- put_page(virt_to_page(q->qpage));
+- q->qpage = NULL;
+- }
+-
+ return 0;
+ }
+
+@@ -624,12 +637,6 @@ static int kvmppc_xive_native_set_queue_config(struct kvmppc_xive *xive,
+
+ srcu_idx = srcu_read_lock(&kvm->srcu);
+ gfn = gpa_to_gfn(kvm_eq.qaddr);
+- page = gfn_to_page(kvm, gfn);
+- if (is_error_page(page)) {
+- srcu_read_unlock(&kvm->srcu, srcu_idx);
+- pr_err("Couldn't get queue page %llx!\n", kvm_eq.qaddr);
+- return -EINVAL;
+- }
+
+ page_size = kvm_host_page_size(kvm, gfn);
+ if (1ull << kvm_eq.qshift > page_size) {
+@@ -638,6 +645,13 @@ static int kvmppc_xive_native_set_queue_config(struct kvmppc_xive *xive,
+ return -EINVAL;
+ }
+
++ page = gfn_to_page(kvm, gfn);
++ if (is_error_page(page)) {
++ srcu_read_unlock(&kvm->srcu, srcu_idx);
++ pr_err("Couldn't get queue page %llx!\n", kvm_eq.qaddr);
++ return -EINVAL;
++ }
++
+ qaddr = page_to_virt(page) + (kvm_eq.qaddr & ~PAGE_MASK);
+ srcu_read_unlock(&kvm->srcu, srcu_idx);
+
+@@ -653,8 +667,8 @@ static int kvmppc_xive_native_set_queue_config(struct kvmppc_xive *xive,
+ * OPAL level because the use of END ESBs is not supported by
+ * Linux.
+ */
+- rc = xive_native_configure_queue(xc->vp_id, q, priority,
+- (__be32 *) qaddr, kvm_eq.qshift, true);
++ rc = kvmppc_xive_native_configure_queue(xc->vp_id, q, priority,
++ (__be32 *) qaddr, kvm_eq.qshift, true);
+ if (rc) {
+ pr_err("Failed to configure queue %d for VCPU %d: %d\n",
+ priority, xc->server_num, rc);
+@@ -1081,7 +1095,6 @@ static int kvmppc_xive_native_create(struct kvm_device *dev, u32 type)
+ dev->private = xive;
+ xive->dev = dev;
+ xive->kvm = kvm;
+- kvm->arch.xive = xive;
+ mutex_init(&xive->mapping_lock);
+ mutex_init(&xive->lock);
+
+@@ -1102,6 +1115,7 @@ static int kvmppc_xive_native_create(struct kvm_device *dev, u32 type)
+ if (ret)
+ return ret;
+
++ kvm->arch.xive = xive;
+ return 0;
+ }
+
+diff --git a/arch/sparc/include/asm/io_64.h b/arch/sparc/include/asm/io_64.h
+index 688911051b44..f4afa301954a 100644
+--- a/arch/sparc/include/asm/io_64.h
++++ b/arch/sparc/include/asm/io_64.h
+@@ -407,6 +407,7 @@ static inline void __iomem *ioremap(unsigned long offset, unsigned long size)
+ }
+
+ #define ioremap_nocache(X,Y) ioremap((X),(Y))
++#define ioremap_uc(X,Y) ioremap((X),(Y))
+ #define ioremap_wc(X,Y) ioremap((X),(Y))
+ #define ioremap_wt(X,Y) ioremap((X),(Y))
+
+diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+index a46dee8e78db..2e3b06d6bbc6 100644
+--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
++++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+@@ -461,10 +461,8 @@ static ssize_t rdtgroup_cpus_write(struct kernfs_open_file *of,
+ }
+
+ rdtgrp = rdtgroup_kn_lock_live(of->kn);
+- rdt_last_cmd_clear();
+ if (!rdtgrp) {
+ ret = -ENOENT;
+- rdt_last_cmd_puts("Directory was removed\n");
+ goto unlock;
+ }
+
+@@ -2648,10 +2646,8 @@ static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
+ int ret;
+
+ prdtgrp = rdtgroup_kn_lock_live(prgrp_kn);
+- rdt_last_cmd_clear();
+ if (!prdtgrp) {
+ ret = -ENODEV;
+- rdt_last_cmd_puts("Directory was removed\n");
+ goto out_unlock;
+ }
+
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index e7d25f436466..9d2e0a1967a1 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -497,7 +497,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
+
+ r = -E2BIG;
+
+- if (*nent >= maxnent)
++ if (WARN_ON(*nent >= maxnent))
+ goto out;
+
+ do_host_cpuid(entry, function, 0);
+@@ -794,6 +794,9 @@ out:
+ static int do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 func,
+ int *nent, int maxnent, unsigned int type)
+ {
++ if (*nent >= maxnent)
++ return -E2BIG;
++
+ if (type == KVM_GET_EMULATED_CPUID)
+ return __do_cpuid_func_emulated(entry, func, nent, maxnent);
+
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 61aa9421e27a..69c2a4fed5e1 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -2392,6 +2392,16 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+ entry_failure_code))
+ return -EINVAL;
+
++ /*
++ * Immediately write vmcs02.GUEST_CR3. It will be propagated to vmcs12
++ * on nested VM-Exit, which can occur without actually running L2 and
++ * thus without hitting vmx_set_cr3(), e.g. if L1 is entering L2 with
++ * vmcs12.GUEST_ACTIVITYSTATE=HLT, in which case KVM will intercept the
++ * transition to HLT instead of running L2.
++ */
++ if (enable_ept)
++ vmcs_writel(GUEST_CR3, vmcs12->guest_cr3);
++
+ /* Late preparation of GUEST_PDPTRs now that EFER and CRs are set. */
+ if (load_guest_pdptrs_vmcs12 && nested_cpu_has_ept(vmcs12) &&
+ is_pae_paging(vcpu)) {
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 2a0e281542cc..9e4b0036141f 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -2878,6 +2878,7 @@ u64 construct_eptp(struct kvm_vcpu *vcpu, unsigned long root_hpa)
+ void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
+ {
+ struct kvm *kvm = vcpu->kvm;
++ bool update_guest_cr3 = true;
+ unsigned long guest_cr3;
+ u64 eptp;
+
+@@ -2894,15 +2895,18 @@ void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
+ spin_unlock(&to_kvm_vmx(kvm)->ept_pointer_lock);
+ }
+
+- if (enable_unrestricted_guest || is_paging(vcpu) ||
+- is_guest_mode(vcpu))
++ /* Loading vmcs02.GUEST_CR3 is handled by nested VM-Enter. */
++ if (is_guest_mode(vcpu))
++ update_guest_cr3 = false;
++ else if (enable_unrestricted_guest || is_paging(vcpu))
+ guest_cr3 = kvm_read_cr3(vcpu);
+ else
+ guest_cr3 = to_kvm_vmx(kvm)->ept_identity_map_addr;
+ ept_load_pdptrs(vcpu);
+ }
+
+- vmcs_writel(GUEST_CR3, guest_cr3);
++ if (update_guest_cr3)
++ vmcs_writel(GUEST_CR3, guest_cr3);
+ }
+
+ int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index f82f766c81c8..2826f60c558d 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -300,13 +300,14 @@ int kvm_set_shared_msr(unsigned slot, u64 value, u64 mask)
+ struct kvm_shared_msrs *smsr = per_cpu_ptr(shared_msrs, cpu);
+ int err;
+
+- if (((value ^ smsr->values[slot].curr) & mask) == 0)
++ value = (value & mask) | (smsr->values[slot].host & ~mask);
++ if (value == smsr->values[slot].curr)
+ return 0;
+- smsr->values[slot].curr = value;
+ err = wrmsrl_safe(shared_msrs_global.msrs[slot], value);
+ if (err)
+ return 1;
+
++ smsr->values[slot].curr = value;
+ if (!smsr->registered) {
+ smsr->urn.on_user_return = kvm_on_user_return;
+ user_return_notifier_register(&smsr->urn);
+@@ -1291,10 +1292,15 @@ static u64 kvm_get_arch_capabilities(void)
+ * If TSX is disabled on the system, guests are also mitigated against
+ * TAA and clear CPU buffer mitigation is not required for guests.
+ */
+- if (boot_cpu_has_bug(X86_BUG_TAA) && boot_cpu_has(X86_FEATURE_RTM) &&
+- (data & ARCH_CAP_TSX_CTRL_MSR))
++ if (!boot_cpu_has(X86_FEATURE_RTM))
++ data &= ~ARCH_CAP_TAA_NO;
++ else if (!boot_cpu_has_bug(X86_BUG_TAA))
++ data |= ARCH_CAP_TAA_NO;
++ else if (data & ARCH_CAP_TSX_CTRL_MSR)
+ data &= ~ARCH_CAP_MDS_NO;
+
++ /* KVM does not emulate MSR_IA32_TSX_CTRL. */
++ data &= ~ARCH_CAP_TSX_CTRL_MSR;
+ return data;
+ }
+
+@@ -4327,6 +4333,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
+ case KVM_SET_NESTED_STATE: {
+ struct kvm_nested_state __user *user_kvm_nested_state = argp;
+ struct kvm_nested_state kvm_state;
++ int idx;
+
+ r = -EINVAL;
+ if (!kvm_x86_ops->set_nested_state)
+@@ -4350,7 +4357,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
+ && !(kvm_state.flags & KVM_STATE_NESTED_GUEST_MODE))
+ break;
+
++ idx = srcu_read_lock(&vcpu->kvm->srcu);
+ r = kvm_x86_ops->set_nested_state(vcpu, user_kvm_nested_state, &kvm_state);
++ srcu_read_unlock(&vcpu->kvm->srcu, idx);
+ break;
+ }
+ case KVM_GET_SUPPORTED_HV_CPUID: {
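
The kvm_set_shared_msr() change above tightens a read-modify-write under a
mask: the new value takes only the caller's masked bits, keeps the host's
bits elsewhere, skips the WRMSR when nothing changes, and updates ->curr
only after a successful write. A compact sketch of the masking idiom, with
a plain variable standing in for the MSR (names hypothetical):

    #include <stdint.h>

    /* Merge 'value' into 'host' under 'mask'; write back only on change. */
    static int update_masked(uint64_t *curr, uint64_t host,
                             uint64_t value, uint64_t mask)
    {
            uint64_t next = (value & mask) | (host & ~mask);

            if (next == *curr)
                    return 0;       /* nothing to do */
            *curr = next;           /* stands in for wrmsrl_safe() */
            return 1;
    }
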
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index 9ceacd1156db..304d31d8cbbc 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -197,7 +197,7 @@ void vmalloc_sync_all(void)
+ return;
+
+ for (address = VMALLOC_START & PMD_MASK;
+- address >= TASK_SIZE_MAX && address < FIXADDR_TOP;
++ address >= TASK_SIZE_MAX && address < VMALLOC_END;
+ address += PMD_SIZE) {
+ struct page *page;
+
+diff --git a/arch/x86/pci/fixup.c b/arch/x86/pci/fixup.c
+index 527e69b12002..e723559c386a 100644
+--- a/arch/x86/pci/fixup.c
++++ b/arch/x86/pci/fixup.c
+@@ -588,6 +588,17 @@ static void pci_fixup_amd_ehci_pme(struct pci_dev *dev)
+ }
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x7808, pci_fixup_amd_ehci_pme);
+
++/*
++ * Device [1022:7914]
++ * When in D0, PME# doesn't get asserted when plugging in a USB 2.0 device.
++ */
++static void pci_fixup_amd_fch_xhci_pme(struct pci_dev *dev)
++{
++ dev_info(&dev->dev, "PME# does not work under D0, disabling it\n");
++ dev->pme_support &= ~(PCI_PM_CAP_PME_D0 >> PCI_PM_CAP_PME_SHIFT);
++}
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x7914, pci_fixup_amd_fch_xhci_pme);
++
+ /*
+ * Apple MacBook Pro: Avoid [mem 0x7fa00000-0x7fbfffff]
+ *
+diff --git a/block/bio.c b/block/bio.c
+index 299a0e7651ec..31d56e7e2ce0 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -769,7 +769,7 @@ bool __bio_try_merge_page(struct bio *bio, struct page *page,
+ if (WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED)))
+ return false;
+
+- if (bio->bi_vcnt > 0) {
++ if (bio->bi_vcnt > 0 && !bio_full(bio, len)) {
+ struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
+
+ if (page_is_mergeable(bv, page, len, off, same_page)) {
+diff --git a/crypto/af_alg.c b/crypto/af_alg.c
+index 879cf23f7489..0dceaabc6321 100644
+--- a/crypto/af_alg.c
++++ b/crypto/af_alg.c
+@@ -1043,7 +1043,7 @@ void af_alg_async_cb(struct crypto_async_request *_req, int err)
+ af_alg_free_resources(areq);
+ sock_put(sk);
+
+- iocb->ki_complete(iocb, err ? err : resultlen, 0);
++ iocb->ki_complete(iocb, err ? err : (int)resultlen, 0);
+ }
+ EXPORT_SYMBOL_GPL(af_alg_async_cb);
+
+diff --git a/crypto/crypto_user_base.c b/crypto/crypto_user_base.c
+index c65e39005ce2..a4db71846af5 100644
+--- a/crypto/crypto_user_base.c
++++ b/crypto/crypto_user_base.c
+@@ -214,8 +214,10 @@ static int crypto_report(struct sk_buff *in_skb, struct nlmsghdr *in_nlh,
+ drop_alg:
+ crypto_mod_put(alg);
+
+- if (err)
++ if (err) {
++ kfree_skb(skb);
+ return err;
++ }
+
+ return nlmsg_unicast(crypto_nlsk, skb, NETLINK_CB(in_skb).portid);
+ }
+diff --git a/crypto/crypto_user_stat.c b/crypto/crypto_user_stat.c
+index a03f326a63d3..30f77cf9122d 100644
+--- a/crypto/crypto_user_stat.c
++++ b/crypto/crypto_user_stat.c
+@@ -326,8 +326,10 @@ int crypto_reportstat(struct sk_buff *in_skb, struct nlmsghdr *in_nlh,
+ drop_alg:
+ crypto_mod_put(alg);
+
+- if (err)
++ if (err) {
++ kfree_skb(skb);
+ return err;
++ }
+
+ return nlmsg_unicast(crypto_nlsk, skb, NETLINK_CB(in_skb).portid);
+ }
+diff --git a/crypto/ecc.c b/crypto/ecc.c
+index dfe114bc0c4a..8ee787723c5c 100644
+--- a/crypto/ecc.c
++++ b/crypto/ecc.c
+@@ -1284,10 +1284,11 @@ EXPORT_SYMBOL(ecc_point_mult_shamir);
+ static inline void ecc_swap_digits(const u64 *in, u64 *out,
+ unsigned int ndigits)
+ {
++ const __be64 *src = (__force __be64 *)in;
+ int i;
+
+ for (i = 0; i < ndigits; i++)
+- out[i] = __swab64(in[ndigits - 1 - i]);
++ out[i] = be64_to_cpu(src[ndigits - 1 - i]);
+ }
+
+ static int __ecc_is_key_valid(const struct ecc_curve *curve,
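
On the ecc_swap_digits() hunk above: the input digits are big-endian, and
be64_to_cpu() converts them correctly on any host (byte swap on
little-endian, no-op on big-endian), whereas the old __swab64() swapped
unconditionally and so mangled the digits on big-endian machines. A
userspace sketch of the same conversion using the glibc analogue
be64toh():

    #include <endian.h>
    #include <stdint.h>

    /* Convert n big-endian wire digits to host order, most significant
     * digit first in the input, least significant first in the output --
     * mirroring the kernel helper's word reversal. */
    static void swap_digits(const uint64_t *in, uint64_t *out, unsigned int n)
    {
            for (unsigned int i = 0; i < n; i++)
                    out[i] = be64toh(in[n - 1 - i]);
    }
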
+diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
+index 8fe99b20ca02..bb0a732cd98a 100644
+--- a/drivers/android/binder_alloc.c
++++ b/drivers/android/binder_alloc.c
+@@ -277,8 +277,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
+ return 0;
+
+ free_range:
+- for (page_addr = end - PAGE_SIZE; page_addr >= start;
+- page_addr -= PAGE_SIZE) {
++ for (page_addr = end - PAGE_SIZE; 1; page_addr -= PAGE_SIZE) {
+ bool ret;
+ size_t index;
+
+@@ -291,6 +290,8 @@ free_range:
+ WARN_ON(!ret);
+
+ trace_binder_free_lru_end(alloc, index);
++ if (page_addr == start)
++ break;
+ continue;
+
+ err_vm_insert_page_failed:
+@@ -298,7 +299,8 @@ err_vm_insert_page_failed:
+ page->page_ptr = NULL;
+ err_alloc_page_failed:
+ err_page_ptr_cleared:
+- ;
++ if (page_addr == start)
++ break;
+ }
+ err_no_vma:
+ if (mm) {
+@@ -681,17 +683,17 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
+ struct binder_buffer *buffer;
+
+ mutex_lock(&binder_alloc_mmap_lock);
+- if (alloc->buffer) {
++ if (alloc->buffer_size) {
+ ret = -EBUSY;
+ failure_string = "already mapped";
+ goto err_already_mapped;
+ }
++ alloc->buffer_size = min_t(unsigned long, vma->vm_end - vma->vm_start,
++ SZ_4M);
++ mutex_unlock(&binder_alloc_mmap_lock);
+
+ alloc->buffer = (void __user *)vma->vm_start;
+- mutex_unlock(&binder_alloc_mmap_lock);
+
+- alloc->buffer_size = min_t(unsigned long, vma->vm_end - vma->vm_start,
+- SZ_4M);
+ alloc->pages = kcalloc(alloc->buffer_size / PAGE_SIZE,
+ sizeof(alloc->pages[0]),
+ GFP_KERNEL);
+@@ -722,8 +724,9 @@ err_alloc_buf_struct_failed:
+ kfree(alloc->pages);
+ alloc->pages = NULL;
+ err_alloc_pages_failed:
+- mutex_lock(&binder_alloc_mmap_lock);
+ alloc->buffer = NULL;
++ mutex_lock(&binder_alloc_mmap_lock);
++ alloc->buffer_size = 0;
+ err_already_mapped:
+ mutex_unlock(&binder_alloc_mmap_lock);
+ binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
+@@ -841,14 +844,20 @@ void binder_alloc_print_pages(struct seq_file *m,
+ int free = 0;
+
+ mutex_lock(&alloc->mutex);
+- for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
+- page = &alloc->pages[i];
+- if (!page->page_ptr)
+- free++;
+- else if (list_empty(&page->lru))
+- active++;
+- else
+- lru++;
++ /*
++ * Make sure the binder_alloc is fully initialized, otherwise we might
++ * read inconsistent state.
++ */
++ if (binder_alloc_get_vma(alloc) != NULL) {
++ for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
++ page = &alloc->pages[i];
++ if (!page->page_ptr)
++ free++;
++ else if (list_empty(&page->lru))
++ active++;
++ else
++ lru++;
++ }
+ }
+ mutex_unlock(&alloc->mutex);
+ seq_printf(m, " pages: %d:%d:%d\n", active, lru, free);
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index c8fb886aebd4..64e364c4a0fb 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -2089,7 +2089,7 @@ static int rbd_object_map_update_finish(struct rbd_obj_request *obj_req,
+ struct rbd_device *rbd_dev = obj_req->img_request->rbd_dev;
+ struct ceph_osd_data *osd_data;
+ u64 objno;
+- u8 state, new_state, current_state;
++ u8 state, new_state, uninitialized_var(current_state);
+ bool has_current_state;
+ void *p;
+
+diff --git a/drivers/block/rsxx/core.c b/drivers/block/rsxx/core.c
+index 76b73ddf8fd7..10f6368117d8 100644
+--- a/drivers/block/rsxx/core.c
++++ b/drivers/block/rsxx/core.c
+@@ -1000,8 +1000,10 @@ static void rsxx_pci_remove(struct pci_dev *dev)
+
+ cancel_work_sync(&card->event_work);
+
++ destroy_workqueue(card->event_wq);
+ rsxx_destroy_dev(card);
+ rsxx_dma_destroy(card);
++ destroy_workqueue(card->creg_ctrl.creg_wq);
+
+ spin_lock_irqsave(&card->irq_lock, flags);
+ rsxx_disable_ier_and_isr(card, CR_INTR_ALL);
+diff --git a/drivers/char/lp.c b/drivers/char/lp.c
+index 7c9269e3477a..bd95aba1f9fe 100644
+--- a/drivers/char/lp.c
++++ b/drivers/char/lp.c
+@@ -713,6 +713,10 @@ static int lp_set_timeout64(unsigned int minor, void __user *arg)
+ if (copy_from_user(karg, arg, sizeof(karg)))
+ return -EFAULT;
+
++ /* sparc64 suseconds_t is 32-bit only */
++ if (IS_ENABLED(CONFIG_SPARC64) && !in_compat_syscall())
++ karg[1] >>= 32;
++
+ return lp_set_timeout(minor, karg[0], karg[1]);
+ }
+
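
Background on the lp.c hunk above: sparc64 is big-endian and defines
suseconds_t as 32 bits, so when userspace stores tv_usec into the second
64-bit slot of the LPSETTIMEOUT argument, the value occupies the slot's
high half; shifting right by 32 recovers it. A small simulation of that
layout (runnable anywhere, since it builds the big-endian placement by
hand rather than relying on the host's byte order):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            /* On big-endian sparc64, a 32-bit tv_usec written at the
             * start of an 8-byte slot reads back in the high half: */
            uint64_t slot = (uint64_t)500000 << 32; /* what the kernel sees */
            int64_t usec = (int64_t)slot >> 32;     /* the hunk's shift */

            printf("%lld\n", (long long)usec);      /* prints 500000 */
            return 0;
    }
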
+diff --git a/drivers/cpufreq/imx-cpufreq-dt.c b/drivers/cpufreq/imx-cpufreq-dt.c
+index 35db14cf3102..85a6efd6b68f 100644
+--- a/drivers/cpufreq/imx-cpufreq-dt.c
++++ b/drivers/cpufreq/imx-cpufreq-dt.c
+@@ -44,19 +44,19 @@ static int imx_cpufreq_dt_probe(struct platform_device *pdev)
+ mkt_segment = (cell_value & OCOTP_CFG3_MKT_SEGMENT_MASK) >> OCOTP_CFG3_MKT_SEGMENT_SHIFT;
+
+ /*
+- * Early samples without fuses written report "0 0" which means
+- * consumer segment and minimum speed grading.
+- *
+- * According to datasheet minimum speed grading is not supported for
+- * consumer parts so clamp to 1 to avoid warning for "no OPPs"
++ * Early samples without fuses written report "0 0" which may NOT
++ * match any OPP defined in DT. So clamp to minimum OPP defined in
++ * DT to avoid warning for "no OPPs".
+ *
+ * Applies to i.MX8M series SoCs.
+ */
+- if (mkt_segment == 0 && speed_grade == 0 && (
+- of_machine_is_compatible("fsl,imx8mm") ||
+- of_machine_is_compatible("fsl,imx8mn") ||
+- of_machine_is_compatible("fsl,imx8mq")))
+- speed_grade = 1;
++ if (mkt_segment == 0 && speed_grade == 0) {
++ if (of_machine_is_compatible("fsl,imx8mm") ||
++ of_machine_is_compatible("fsl,imx8mq"))
++ speed_grade = 1;
++ if (of_machine_is_compatible("fsl,imx8mn"))
++ speed_grade = 0xb;
++ }
+
+ supported_hw[0] = BIT(speed_grade);
+ supported_hw[1] = BIT(mkt_segment);
+diff --git a/drivers/crypto/amcc/crypto4xx_core.c b/drivers/crypto/amcc/crypto4xx_core.c
+index de5e9352e920..7d6b695c4ab3 100644
+--- a/drivers/crypto/amcc/crypto4xx_core.c
++++ b/drivers/crypto/amcc/crypto4xx_core.c
+@@ -365,12 +365,8 @@ static u32 crypto4xx_build_sdr(struct crypto4xx_device *dev)
+ dma_alloc_coherent(dev->core_dev->device,
+ PPC4XX_SD_BUFFER_SIZE * PPC4XX_NUM_SD,
+ &dev->scatter_buffer_pa, GFP_ATOMIC);
+- if (!dev->scatter_buffer_va) {
+- dma_free_coherent(dev->core_dev->device,
+- sizeof(struct ce_sd) * PPC4XX_NUM_SD,
+- dev->sdr, dev->sdr_pa);
++ if (!dev->scatter_buffer_va)
+ return -ENOMEM;
+- }
+
+ for (i = 0; i < PPC4XX_NUM_SD; i++) {
+ dev->sdr[i].ptr = dev->scatter_buffer_pa +
+diff --git a/drivers/crypto/atmel-aes.c b/drivers/crypto/atmel-aes.c
+index 2b7af44c7b85..eb0dbdc12f47 100644
+--- a/drivers/crypto/atmel-aes.c
++++ b/drivers/crypto/atmel-aes.c
+@@ -490,6 +490,29 @@ static inline bool atmel_aes_is_encrypt(const struct atmel_aes_dev *dd)
+ static void atmel_aes_authenc_complete(struct atmel_aes_dev *dd, int err);
+ #endif
+
++static void atmel_aes_set_iv_as_last_ciphertext_block(struct atmel_aes_dev *dd)
++{
++ struct ablkcipher_request *req = ablkcipher_request_cast(dd->areq);
++ struct atmel_aes_reqctx *rctx = ablkcipher_request_ctx(req);
++ struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req);
++ unsigned int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
++
++ if (req->nbytes < ivsize)
++ return;
++
++ if (rctx->mode & AES_FLAGS_ENCRYPT) {
++ scatterwalk_map_and_copy(req->info, req->dst,
++ req->nbytes - ivsize, ivsize, 0);
++ } else {
++ if (req->src == req->dst)
++ memcpy(req->info, rctx->lastc, ivsize);
++ else
++ scatterwalk_map_and_copy(req->info, req->src,
++ req->nbytes - ivsize,
++ ivsize, 0);
++ }
++}
++
+ static inline int atmel_aes_complete(struct atmel_aes_dev *dd, int err)
+ {
+ #ifdef CONFIG_CRYPTO_DEV_ATMEL_AUTHENC
+@@ -500,26 +523,8 @@ static inline int atmel_aes_complete(struct atmel_aes_dev *dd, int err)
+ clk_disable(dd->iclk);
+ dd->flags &= ~AES_FLAGS_BUSY;
+
+- if (!dd->ctx->is_aead) {
+- struct ablkcipher_request *req =
+- ablkcipher_request_cast(dd->areq);
+- struct atmel_aes_reqctx *rctx = ablkcipher_request_ctx(req);
+- struct crypto_ablkcipher *ablkcipher =
+- crypto_ablkcipher_reqtfm(req);
+- int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
+-
+- if (rctx->mode & AES_FLAGS_ENCRYPT) {
+- scatterwalk_map_and_copy(req->info, req->dst,
+- req->nbytes - ivsize, ivsize, 0);
+- } else {
+- if (req->src == req->dst) {
+- memcpy(req->info, rctx->lastc, ivsize);
+- } else {
+- scatterwalk_map_and_copy(req->info, req->src,
+- req->nbytes - ivsize, ivsize, 0);
+- }
+- }
+- }
++ if (!dd->ctx->is_aead)
++ atmel_aes_set_iv_as_last_ciphertext_block(dd);
+
+ if (dd->is_async)
+ dd->areq->complete(dd->areq, err);
+@@ -1125,10 +1130,12 @@ static int atmel_aes_crypt(struct ablkcipher_request *req, unsigned long mode)
+ rctx->mode = mode;
+
+ if (!(mode & AES_FLAGS_ENCRYPT) && (req->src == req->dst)) {
+- int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
++ unsigned int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
+
+- scatterwalk_map_and_copy(rctx->lastc, req->src,
+- (req->nbytes - ivsize), ivsize, 0);
++ if (req->nbytes >= ivsize)
++ scatterwalk_map_and_copy(rctx->lastc, req->src,
++ req->nbytes - ivsize,
++ ivsize, 0);
+ }
+
+ return atmel_aes_handle_queue(dd, &req->base);
+diff --git a/drivers/crypto/ccp/ccp-dmaengine.c b/drivers/crypto/ccp/ccp-dmaengine.c
+index 7f22a45bbc11..03817df17728 100644
+--- a/drivers/crypto/ccp/ccp-dmaengine.c
++++ b/drivers/crypto/ccp/ccp-dmaengine.c
+@@ -337,6 +337,7 @@ static struct ccp_dma_desc *ccp_alloc_dma_desc(struct ccp_dma_chan *chan,
+ desc->tx_desc.flags = flags;
+ desc->tx_desc.tx_submit = ccp_tx_submit;
+ desc->ccp = chan->ccp;
++ INIT_LIST_HEAD(&desc->entry);
+ INIT_LIST_HEAD(&desc->pending);
+ INIT_LIST_HEAD(&desc->active);
+ desc->status = DMA_IN_PROGRESS;
+diff --git a/drivers/crypto/geode-aes.c b/drivers/crypto/geode-aes.c
+index d81a1297cb9e..940485112d15 100644
+--- a/drivers/crypto/geode-aes.c
++++ b/drivers/crypto/geode-aes.c
+@@ -10,6 +10,7 @@
+ #include <linux/spinlock.h>
+ #include <crypto/algapi.h>
+ #include <crypto/aes.h>
++#include <crypto/skcipher.h>
+
+ #include <linux/io.h>
+ #include <linux/delay.h>
+@@ -166,13 +167,15 @@ static int geode_setkey_blk(struct crypto_tfm *tfm, const u8 *key,
+ /*
+ * The requested key size is not supported by HW, do a fallback
+ */
+- op->fallback.blk->base.crt_flags &= ~CRYPTO_TFM_REQ_MASK;
+- op->fallback.blk->base.crt_flags |= (tfm->crt_flags & CRYPTO_TFM_REQ_MASK);
++ crypto_sync_skcipher_clear_flags(op->fallback.blk, CRYPTO_TFM_REQ_MASK);
++ crypto_sync_skcipher_set_flags(op->fallback.blk,
++ tfm->crt_flags & CRYPTO_TFM_REQ_MASK);
+
+- ret = crypto_blkcipher_setkey(op->fallback.blk, key, len);
++ ret = crypto_sync_skcipher_setkey(op->fallback.blk, key, len);
+ if (ret) {
+ tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK;
+- tfm->crt_flags |= (op->fallback.blk->base.crt_flags & CRYPTO_TFM_RES_MASK);
++ tfm->crt_flags |= crypto_sync_skcipher_get_flags(op->fallback.blk) &
++ CRYPTO_TFM_RES_MASK;
+ }
+ return ret;
+ }
+@@ -181,33 +184,28 @@ static int fallback_blk_dec(struct blkcipher_desc *desc,
+ struct scatterlist *dst, struct scatterlist *src,
+ unsigned int nbytes)
+ {
+- unsigned int ret;
+- struct crypto_blkcipher *tfm;
+ struct geode_aes_op *op = crypto_blkcipher_ctx(desc->tfm);
++ SYNC_SKCIPHER_REQUEST_ON_STACK(req, op->fallback.blk);
+
+- tfm = desc->tfm;
+- desc->tfm = op->fallback.blk;
+-
+- ret = crypto_blkcipher_decrypt_iv(desc, dst, src, nbytes);
++ skcipher_request_set_sync_tfm(req, op->fallback.blk);
++ skcipher_request_set_callback(req, 0, NULL, NULL);
++ skcipher_request_set_crypt(req, src, dst, nbytes, desc->info);
+
+- desc->tfm = tfm;
+- return ret;
++ return crypto_skcipher_decrypt(req);
+ }
++
+ static int fallback_blk_enc(struct blkcipher_desc *desc,
+ struct scatterlist *dst, struct scatterlist *src,
+ unsigned int nbytes)
+ {
+- unsigned int ret;
+- struct crypto_blkcipher *tfm;
+ struct geode_aes_op *op = crypto_blkcipher_ctx(desc->tfm);
++ SYNC_SKCIPHER_REQUEST_ON_STACK(req, op->fallback.blk);
+
+- tfm = desc->tfm;
+- desc->tfm = op->fallback.blk;
+-
+- ret = crypto_blkcipher_encrypt_iv(desc, dst, src, nbytes);
++ skcipher_request_set_sync_tfm(req, op->fallback.blk);
++ skcipher_request_set_callback(req, 0, NULL, NULL);
++ skcipher_request_set_crypt(req, src, dst, nbytes, desc->info);
+
+- desc->tfm = tfm;
+- return ret;
++ return crypto_skcipher_encrypt(req);
+ }
+
+ static void
+@@ -307,6 +305,9 @@ geode_cbc_decrypt(struct blkcipher_desc *desc,
+ struct blkcipher_walk walk;
+ int err, ret;
+
++ if (nbytes % AES_BLOCK_SIZE)
++ return -EINVAL;
++
+ if (unlikely(op->keylen != AES_KEYSIZE_128))
+ return fallback_blk_dec(desc, dst, src, nbytes);
+
+@@ -339,6 +340,9 @@ geode_cbc_encrypt(struct blkcipher_desc *desc,
+ struct blkcipher_walk walk;
+ int err, ret;
+
++ if (nbytes % AES_BLOCK_SIZE)
++ return -EINVAL;
++
+ if (unlikely(op->keylen != AES_KEYSIZE_128))
+ return fallback_blk_enc(desc, dst, src, nbytes);
+
+@@ -366,9 +370,8 @@ static int fallback_init_blk(struct crypto_tfm *tfm)
+ const char *name = crypto_tfm_alg_name(tfm);
+ struct geode_aes_op *op = crypto_tfm_ctx(tfm);
+
+- op->fallback.blk = crypto_alloc_blkcipher(name, 0,
+- CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
+-
++ op->fallback.blk = crypto_alloc_sync_skcipher(name, 0,
++ CRYPTO_ALG_NEED_FALLBACK);
+ if (IS_ERR(op->fallback.blk)) {
+ printk(KERN_ERR "Error allocating fallback algo %s\n", name);
+ return PTR_ERR(op->fallback.blk);
+@@ -381,7 +384,7 @@ static void fallback_exit_blk(struct crypto_tfm *tfm)
+ {
+ struct geode_aes_op *op = crypto_tfm_ctx(tfm);
+
+- crypto_free_blkcipher(op->fallback.blk);
++ crypto_free_sync_skcipher(op->fallback.blk);
+ op->fallback.blk = NULL;
+ }
+
+@@ -420,6 +423,9 @@ geode_ecb_decrypt(struct blkcipher_desc *desc,
+ struct blkcipher_walk walk;
+ int err, ret;
+
++ if (nbytes % AES_BLOCK_SIZE)
++ return -EINVAL;
++
+ if (unlikely(op->keylen != AES_KEYSIZE_128))
+ return fallback_blk_dec(desc, dst, src, nbytes);
+
+@@ -450,6 +456,9 @@ geode_ecb_encrypt(struct blkcipher_desc *desc,
+ struct blkcipher_walk walk;
+ int err, ret;
+
++ if (nbytes % AES_BLOCK_SIZE)
++ return -EINVAL;
++
+ if (unlikely(op->keylen != AES_KEYSIZE_128))
+ return fallback_blk_enc(desc, dst, src, nbytes);
+
+diff --git a/drivers/crypto/geode-aes.h b/drivers/crypto/geode-aes.h
+index 5c6e131a8f9d..f8a86898ac22 100644
+--- a/drivers/crypto/geode-aes.h
++++ b/drivers/crypto/geode-aes.h
+@@ -60,7 +60,7 @@ struct geode_aes_op {
+ u8 *iv;
+
+ union {
+- struct crypto_blkcipher *blk;
++ struct crypto_sync_skcipher *blk;
+ struct crypto_cipher *cip;
+ } fallback;
+ u32 keylen;
+diff --git a/drivers/edac/ghes_edac.c b/drivers/edac/ghes_edac.c
+index 2059e43ccc01..1163c382d4a5 100644
+--- a/drivers/edac/ghes_edac.c
++++ b/drivers/edac/ghes_edac.c
+@@ -26,9 +26,18 @@ struct ghes_edac_pvt {
+ char msg[80];
+ };
+
+-static atomic_t ghes_init = ATOMIC_INIT(0);
++static refcount_t ghes_refcount = REFCOUNT_INIT(0);
++
++/*
++ * Access to ghes_pvt must be protected by ghes_lock. The spinlock
++ * also provides the necessary (implicit) memory barrier for the SMP
++ * case to make the pointer visible on another CPU.
++ */
+ static struct ghes_edac_pvt *ghes_pvt;
+
++/* GHES registration mutex */
++static DEFINE_MUTEX(ghes_reg_mutex);
++
+ /*
+ * Sync with other, potentially concurrent callers of
+ * ghes_edac_report_mem_error(). We don't know what the
+@@ -79,9 +88,8 @@ static void ghes_edac_count_dimms(const struct dmi_header *dh, void *arg)
+ (*num_dimm)++;
+ }
+
+-static int get_dimm_smbios_index(u16 handle)
++static int get_dimm_smbios_index(struct mem_ctl_info *mci, u16 handle)
+ {
+- struct mem_ctl_info *mci = ghes_pvt->mci;
+ int i;
+
+ for (i = 0; i < mci->tot_dimms; i++) {
+@@ -198,14 +206,11 @@ void ghes_edac_report_mem_error(int sev, struct cper_sec_mem_err *mem_err)
+ enum hw_event_mc_err_type type;
+ struct edac_raw_error_desc *e;
+ struct mem_ctl_info *mci;
+- struct ghes_edac_pvt *pvt = ghes_pvt;
++ struct ghes_edac_pvt *pvt;
+ unsigned long flags;
+ char *p;
+ u8 grain_bits;
+
+- if (!pvt)
+- return;
+-
+ /*
+ * We can do the locking below because GHES defers error processing
+ * from NMI to IRQ context. Whenever that changes, we'd at least
+@@ -216,6 +221,10 @@ void ghes_edac_report_mem_error(int sev, struct cper_sec_mem_err *mem_err)
+
+ spin_lock_irqsave(&ghes_lock, flags);
+
++ pvt = ghes_pvt;
++ if (!pvt)
++ goto unlock;
++
+ mci = pvt->mci;
+ e = &mci->error_desc;
+
+@@ -348,7 +357,7 @@ void ghes_edac_report_mem_error(int sev, struct cper_sec_mem_err *mem_err)
+ p += sprintf(p, "DIMM DMI handle: 0x%.4x ",
+ mem_err->mem_dev_handle);
+
+- index = get_dimm_smbios_index(mem_err->mem_dev_handle);
++ index = get_dimm_smbios_index(mci, mem_err->mem_dev_handle);
+ if (index >= 0) {
+ e->top_layer = index;
+ e->enable_per_layer_report = true;
+@@ -443,6 +452,8 @@ void ghes_edac_report_mem_error(int sev, struct cper_sec_mem_err *mem_err)
+ grain_bits, e->syndrome, pvt->detail_location);
+
+ edac_raw_mc_handle_error(type, mci, e);
++
++unlock:
+ spin_unlock_irqrestore(&ghes_lock, flags);
+ }
+
+@@ -457,10 +468,12 @@ static struct acpi_platform_list plat_list[] = {
+ int ghes_edac_register(struct ghes *ghes, struct device *dev)
+ {
+ bool fake = false;
+- int rc, num_dimm = 0;
++ int rc = 0, num_dimm = 0;
+ struct mem_ctl_info *mci;
++ struct ghes_edac_pvt *pvt;
+ struct edac_mc_layer layers[1];
+ struct ghes_edac_dimm_fill dimm_fill;
++ unsigned long flags;
+ int idx = -1;
+
+ if (IS_ENABLED(CONFIG_X86)) {
+@@ -472,11 +485,14 @@ int ghes_edac_register(struct ghes *ghes, struct device *dev)
+ idx = 0;
+ }
+
++ /* finish another registration/unregistration instance first */
++ mutex_lock(&ghes_reg_mutex);
++
+ /*
+ * We have only one logical memory controller to which all DIMMs belong.
+ */
+- if (atomic_inc_return(&ghes_init) > 1)
+- return 0;
++ if (refcount_inc_not_zero(&ghes_refcount))
++ goto unlock;
+
+ /* Get the number of DIMMs */
+ dmi_walk(ghes_edac_count_dimms, &num_dimm);
+@@ -494,12 +510,13 @@ int ghes_edac_register(struct ghes *ghes, struct device *dev)
+ mci = edac_mc_alloc(0, ARRAY_SIZE(layers), layers, sizeof(struct ghes_edac_pvt));
+ if (!mci) {
+ pr_info("Can't allocate memory for EDAC data\n");
+- return -ENOMEM;
++ rc = -ENOMEM;
++ goto unlock;
+ }
+
+- ghes_pvt = mci->pvt_info;
+- ghes_pvt->ghes = ghes;
+- ghes_pvt->mci = mci;
++ pvt = mci->pvt_info;
++ pvt->ghes = ghes;
++ pvt->mci = mci;
+
+ mci->pdev = dev;
+ mci->mtype_cap = MEM_FLAG_EMPTY;
+@@ -541,23 +558,48 @@ int ghes_edac_register(struct ghes *ghes, struct device *dev)
+ if (rc < 0) {
+ pr_info("Can't register at EDAC core\n");
+ edac_mc_free(mci);
+- return -ENODEV;
++ rc = -ENODEV;
++ goto unlock;
+ }
+- return 0;
++
++ spin_lock_irqsave(&ghes_lock, flags);
++ ghes_pvt = pvt;
++ spin_unlock_irqrestore(&ghes_lock, flags);
++
++ /* only increment on success */
++ refcount_inc(&ghes_refcount);
++
++unlock:
++ mutex_unlock(&ghes_reg_mutex);
++
++ return rc;
+ }
+
+ void ghes_edac_unregister(struct ghes *ghes)
+ {
+ struct mem_ctl_info *mci;
++ unsigned long flags;
+
+- if (!ghes_pvt)
+- return;
++ mutex_lock(&ghes_reg_mutex);
+
+- if (atomic_dec_return(&ghes_init))
+- return;
++ if (!refcount_dec_and_test(&ghes_refcount))
++ goto unlock;
+
+- mci = ghes_pvt->mci;
++ /*
++ * Wait for the irq handler being finished.
++ */
++ spin_lock_irqsave(&ghes_lock, flags);
++ mci = ghes_pvt ? ghes_pvt->mci : NULL;
+ ghes_pvt = NULL;
+- edac_mc_del_mc(mci->pdev);
+- edac_mc_free(mci);
++ spin_unlock_irqrestore(&ghes_lock, flags);
++
++ if (!mci)
++ goto unlock;
++
++ mci = edac_mc_del_mc(mci->pdev);
++ if (mci)
++ edac_mc_free(mci);
++
++unlock:
++ mutex_unlock(&ghes_reg_mutex);
+ }
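
The ghes_edac changes above replace a bare atomic counter with a registration scheme that can actually fail and unwind: a mutex serializes register/unregister, the count is bumped only on success, and the shared private pointer is published and cleared under the same lock the report path takes, so a late error report either sees a fully set-up instance or NULL. A rough userspace analogue of that scheme, using pthreads purely for illustration:

/* Minimal userspace analogue of the ghes_edac locking scheme -- an
 * illustration, not kernel code: one mutex serializes register/
 * unregister, a counter tracks users, and the shared pointer is only
 * published or cleared under a second lock that the "report" path
 * also takes. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t reg_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t ptr_lock  = PTHREAD_MUTEX_INITIALIZER;
static int refcount;
static int *shared_state;		/* stands in for ghes_pvt */

static int do_register(void)
{
	pthread_mutex_lock(&reg_mutex);
	if (refcount++ == 0) {		/* first user sets things up */
		int *state = malloc(sizeof(*state));

		if (!state) {
			refcount--;	/* roll back on failure */
			pthread_mutex_unlock(&reg_mutex);
			return -1;
		}
		*state = 42;
		pthread_mutex_lock(&ptr_lock);	/* publish atomically */
		shared_state = state;
		pthread_mutex_unlock(&ptr_lock);
	}
	pthread_mutex_unlock(&reg_mutex);
	return 0;
}

static void do_unregister(void)
{
	pthread_mutex_lock(&reg_mutex);
	if (--refcount == 0) {
		int *state;

		pthread_mutex_lock(&ptr_lock);	/* hide from reporters */
		state = shared_state;
		shared_state = NULL;
		pthread_mutex_unlock(&ptr_lock);
		free(state);
	}
	pthread_mutex_unlock(&reg_mutex);
}

static void report(void)		/* analogue of the IRQ path */
{
	pthread_mutex_lock(&ptr_lock);
	if (shared_state)
		printf("state=%d\n", *shared_state);
	pthread_mutex_unlock(&ptr_lock);
}

int main(void)
{
	do_register();
	report();
	do_unregister();
	report();			/* safely sees NULL now */
	return 0;
}
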
+diff --git a/drivers/gpu/drm/drm_damage_helper.c b/drivers/gpu/drm/drm_damage_helper.c
+index 8230dac01a89..3a4126dc2520 100644
+--- a/drivers/gpu/drm/drm_damage_helper.c
++++ b/drivers/gpu/drm/drm_damage_helper.c
+@@ -212,8 +212,14 @@ retry:
+ drm_for_each_plane(plane, fb->dev) {
+ struct drm_plane_state *plane_state;
+
+- if (plane->state->fb != fb)
++ ret = drm_modeset_lock(&plane->mutex, state->acquire_ctx);
++ if (ret)
++ goto out;
++
++ if (plane->state->fb != fb) {
++ drm_modeset_unlock(&plane->mutex);
+ continue;
++ }
+
+ plane_state = drm_atomic_get_plane_state(state, plane);
+ if (IS_ERR(plane_state)) {
+diff --git a/drivers/gpu/drm/i810/i810_dma.c b/drivers/gpu/drm/i810/i810_dma.c
+index 3b378936f575..a9b15001416a 100644
+--- a/drivers/gpu/drm/i810/i810_dma.c
++++ b/drivers/gpu/drm/i810/i810_dma.c
+@@ -721,7 +721,7 @@ static void i810_dma_dispatch_vertex(struct drm_device *dev,
+ if (nbox > I810_NR_SAREA_CLIPRECTS)
+ nbox = I810_NR_SAREA_CLIPRECTS;
+
+- if (used > 4 * 1024)
++ if (used < 0 || used > 4 * 1024)
+ used = 0;
+
+ if (sarea_priv->dirty)
+@@ -1041,7 +1041,7 @@ static void i810_dma_dispatch_mc(struct drm_device *dev, struct drm_buf *buf, in
+ if (u != I810_BUF_CLIENT)
+ DRM_DEBUG("MC found buffer that isn't mine!\n");
+
+- if (used > 4 * 1024)
++ if (used < 0 || used > 4 * 1024)
+ used = 0;
+
+ sarea_priv->dirty = 0x7f;
+diff --git a/drivers/gpu/drm/mcde/mcde_drv.c b/drivers/gpu/drm/mcde/mcde_drv.c
+index a810568c76df..1bd744231aad 100644
+--- a/drivers/gpu/drm/mcde/mcde_drv.c
++++ b/drivers/gpu/drm/mcde/mcde_drv.c
+@@ -487,7 +487,8 @@ static int mcde_probe(struct platform_device *pdev)
+ }
+ if (!match) {
+ dev_err(dev, "no matching components\n");
+- return -ENODEV;
++ ret = -ENODEV;
++ goto clk_disable;
+ }
+ if (IS_ERR(match)) {
+ dev_err(dev, "could not create component match\n");
+diff --git a/drivers/gpu/drm/msm/msm_debugfs.c b/drivers/gpu/drm/msm/msm_debugfs.c
+index a0a8df591e93..dd43681b8662 100644
+--- a/drivers/gpu/drm/msm/msm_debugfs.c
++++ b/drivers/gpu/drm/msm/msm_debugfs.c
+@@ -42,12 +42,8 @@ static int msm_gpu_release(struct inode *inode, struct file *file)
+ struct msm_gpu_show_priv *show_priv = m->private;
+ struct msm_drm_private *priv = show_priv->dev->dev_private;
+ struct msm_gpu *gpu = priv->gpu;
+- int ret;
+-
+- ret = mutex_lock_interruptible(&show_priv->dev->struct_mutex);
+- if (ret)
+- return ret;
+
++ mutex_lock(&show_priv->dev->struct_mutex);
+ gpu->funcs->gpu_state_put(show_priv->state);
+ mutex_unlock(&show_priv->dev->struct_mutex);
+
+diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.c b/drivers/gpu/drm/sun4i/sun4i_tcon.c
+index df0cc8f46d7b..3491c4c7659e 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_tcon.c
++++ b/drivers/gpu/drm/sun4i/sun4i_tcon.c
+@@ -486,7 +486,7 @@ static void sun4i_tcon0_mode_set_rgb(struct sun4i_tcon *tcon,
+
+ WARN_ON(!tcon->quirks->has_channel_0);
+
+- tcon->dclk_min_div = 6;
++ tcon->dclk_min_div = 1;
+ tcon->dclk_max_div = 127;
+ sun4i_tcon0_mode_set_common(tcon, mode);
+
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
+index a0365e23678e..f5fb1e7a9c17 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
+@@ -655,10 +655,13 @@ static ssize_t cyc_threshold_store(struct device *dev,
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
++
++ /* mask off max threshold before checking min value */
++ val &= ETM_CYC_THRESHOLD_MASK;
+ if (val < drvdata->ccitmin)
+ return -EINVAL;
+
+- config->ccctlr = val & ETM_CYC_THRESHOLD_MASK;
++ config->ccctlr = val;
+ return size;
+ }
+ static DEVICE_ATTR_RW(cyc_threshold);
+@@ -689,14 +692,16 @@ static ssize_t bb_ctrl_store(struct device *dev,
+ return -EINVAL;
+ if (!drvdata->nr_addr_cmp)
+ return -EINVAL;
++
+ /*
+- * Bit[7:0] selects which address range comparator is used for
+- * branch broadcast control.
++ * Bit[8] controls include(1) / exclude(0), bits[0-7] select
++ * individual range comparators. If include then at least 1
++ * range must be selected.
+ */
+- if (BMVAL(val, 0, 7) > drvdata->nr_addr_cmp)
++ if ((val & BIT(8)) && (BMVAL(val, 0, 7) == 0))
+ return -EINVAL;
+
+- config->bb_ctrl = val;
++ config->bb_ctrl = val & GENMASK(8, 0);
+ return size;
+ }
+ static DEVICE_ATTR_RW(bb_ctrl);
+@@ -1329,8 +1334,8 @@ static ssize_t seq_event_store(struct device *dev,
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->seq_idx;
+- /* RST, bits[7:0] */
+- config->seq_ctrl[idx] = val & 0xFF;
++ /* Seq control has two masks B[15:8] F[7:0] */
++ config->seq_ctrl[idx] = val & 0xFFFF;
+ spin_unlock(&drvdata->spinlock);
+ return size;
+ }
+@@ -1585,7 +1590,7 @@ static ssize_t res_ctrl_store(struct device *dev,
+ if (idx % 2 != 0)
+ /* PAIRINV, bit[21] */
+ val &= ~BIT(21);
+- config->res_ctrl[idx] = val;
++ config->res_ctrl[idx] = val & GENMASK(21, 0);
+ spin_unlock(&drvdata->spinlock);
+ return size;
+ }
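
The coresight sysfs hunks above share one pattern: a value written from userspace is masked down to the bits the trace unit actually implements before it is range-checked or stored, instead of being stored wide and truncated later. A sketch of that pattern in plain C (GENMASK here is a local stand-in for the kernel macro, assuming 64-bit longs):

#include <stdio.h>

#define GENMASK(h, l) (((~0UL) << (l)) & (~0UL >> (63 - (h))))

static unsigned long store_masked(unsigned long val,
				  unsigned long valid_mask,
				  unsigned long min)
{
	val &= valid_mask;	/* drop bits the hardware ignores */
	if (val < min)		/* then validate what remains */
		return 0;	/* caller would return -EINVAL */
	return val;
}

int main(void)
{
	/* 0x1FFF against a 12-bit field keeps only 0xFFF */
	printf("0x%lx\n", store_masked(0x1FFF, GENMASK(11, 0), 1));
	return 0;
}
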
+diff --git a/drivers/i2c/i2c-core-of.c b/drivers/i2c/i2c-core-of.c
+index d1c48dec7118..9b2fce4906c4 100644
+--- a/drivers/i2c/i2c-core-of.c
++++ b/drivers/i2c/i2c-core-of.c
+@@ -250,14 +250,14 @@ static int of_i2c_notify(struct notifier_block *nb, unsigned long action,
+ }
+
+ client = of_i2c_register_device(adap, rd->dn);
+- put_device(&adap->dev);
+-
+ if (IS_ERR(client)) {
+ dev_err(&adap->dev, "failed to create client for '%pOF'\n",
+ rd->dn);
++ put_device(&adap->dev);
+ of_node_clear_flag(rd->dn, OF_POPULATED);
+ return notifier_from_errno(PTR_ERR(client));
+ }
++ put_device(&adap->dev);
+ break;
+ case OF_RECONFIG_CHANGE_REMOVE:
+ /* already depopulated? */
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.h b/drivers/infiniband/hw/hns/hns_roce_hem.h
+index f1ccb8f35fe5..e41ebc25b1f9 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.h
+@@ -59,7 +59,7 @@ enum {
+
+ #define HNS_ROCE_HEM_CHUNK_LEN \
+ ((256 - sizeof(struct list_head) - 2 * sizeof(int)) / \
+- (sizeof(struct scatterlist)))
++ (sizeof(struct scatterlist) + sizeof(void *)))
+
+ #define check_whether_bt_num_3(type, hop_num) \
+ (type < HEM_TYPE_MTT && hop_num == 2)
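
The HNS_ROCE_HEM_CHUNK_LEN change above shrinks the per-chunk entry count because each chunk also carries a companion array of per-entry pointers; without the extra sizeof(void *) in the divisor the chunk overruns its 256-byte budget. A quick arithmetic check (illustrative userspace C with typical 64-bit sizes, not the real struct layouts):

#include <stdio.h>

int main(void)
{
	size_t list_head = 16, scatterlist = 32, budget = 256;
	size_t old_len = (budget - list_head - 2 * sizeof(int))
			 / scatterlist;
	size_t new_len = (budget - list_head - 2 * sizeof(int))
			 / (scatterlist + sizeof(void *));

	/* old: 7 entries, but 7 * (32 + 8) + 16 + 8 = 304 > 256;
	 * new: 5 entries, 5 * (32 + 8) + 16 + 8 = 224 <= 256 */
	printf("old=%zu new=%zu\n", old_len, new_len);
	return 0;
}
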
+diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c
+index 38bb548eaa6d..9768e377cd22 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_srq.c
++++ b/drivers/infiniband/hw/hns/hns_roce_srq.c
+@@ -221,7 +221,7 @@ int hns_roce_create_srq(struct ib_srq *ib_srq,
+ srq->max = roundup_pow_of_two(srq_init_attr->attr.max_wr + 1);
+ srq->max_gs = srq_init_attr->attr.max_sge;
+
+- srq_desc_size = max(16, 16 * srq->max_gs);
++ srq_desc_size = roundup_pow_of_two(max(16, 16 * srq->max_gs));
+
+ srq->wqe_shift = ilog2(srq_desc_size);
+
+diff --git a/drivers/infiniband/hw/qib/qib_sysfs.c b/drivers/infiniband/hw/qib/qib_sysfs.c
+index 905206a0c2d5..d4f5e8cd438d 100644
+--- a/drivers/infiniband/hw/qib/qib_sysfs.c
++++ b/drivers/infiniband/hw/qib/qib_sysfs.c
+@@ -301,6 +301,9 @@ static ssize_t qib_portattr_show(struct kobject *kobj,
+ struct qib_pportdata *ppd =
+ container_of(kobj, struct qib_pportdata, pport_kobj);
+
++ if (!pattr->show)
++ return -EIO;
++
+ return pattr->show(ppd, buf);
+ }
+
+@@ -312,6 +315,9 @@ static ssize_t qib_portattr_store(struct kobject *kobj,
+ struct qib_pportdata *ppd =
+ container_of(kobj, struct qib_pportdata, pport_kobj);
+
++ if (!pattr->store)
++ return -EIO;
++
+ return pattr->store(ppd, buf, len);
+ }
+
+diff --git a/drivers/input/joystick/psxpad-spi.c b/drivers/input/joystick/psxpad-spi.c
+index 7eee1b0e360f..99a6052500ca 100644
+--- a/drivers/input/joystick/psxpad-spi.c
++++ b/drivers/input/joystick/psxpad-spi.c
+@@ -292,7 +292,7 @@ static int psxpad_spi_probe(struct spi_device *spi)
+ if (!pad)
+ return -ENOMEM;
+
+- pdev = input_allocate_polled_device();
++ pdev = devm_input_allocate_polled_device(&spi->dev);
+ if (!pdev) {
+ dev_err(&spi->dev, "failed to allocate input device\n");
+ return -ENOMEM;
+diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
+index 46bbe99d6511..13a92e8823f1 100644
+--- a/drivers/input/mouse/synaptics.c
++++ b/drivers/input/mouse/synaptics.c
+@@ -172,6 +172,7 @@ static const char * const smbus_pnp_ids[] = {
+ "LEN0071", /* T480 */
+ "LEN0072", /* X1 Carbon Gen 5 (2017) - Elan/ALPS trackpoint */
+ "LEN0073", /* X1 Carbon G5 (Elantech) */
++ "LEN0091", /* X1 Carbon 6 */
+ "LEN0092", /* X1 Carbon 6 */
+ "LEN0093", /* T480 */
+ "LEN0096", /* X280 */
+diff --git a/drivers/input/rmi4/rmi_f34v7.c b/drivers/input/rmi4/rmi_f34v7.c
+index a4cabf52740c..74f7c6f214ff 100644
+--- a/drivers/input/rmi4/rmi_f34v7.c
++++ b/drivers/input/rmi4/rmi_f34v7.c
+@@ -1189,6 +1189,9 @@ int rmi_f34v7_do_reflash(struct f34_data *f34, const struct firmware *fw)
+ {
+ int ret;
+
++ f34->fn->rmi_dev->driver->set_irq_bits(f34->fn->rmi_dev,
++ f34->fn->irq_mask);
++
+ rmi_f34v7_read_queries_bl_version(f34);
+
+ f34->v7.image = fw->data;
+diff --git a/drivers/input/rmi4/rmi_smbus.c b/drivers/input/rmi4/rmi_smbus.c
+index 2407ea43de59..b313c579914f 100644
+--- a/drivers/input/rmi4/rmi_smbus.c
++++ b/drivers/input/rmi4/rmi_smbus.c
+@@ -163,7 +163,6 @@ static int rmi_smb_write_block(struct rmi_transport_dev *xport, u16 rmiaddr,
+ /* prepare to write next block of bytes */
+ cur_len -= SMB_MAX_COUNT;
+ databuff += SMB_MAX_COUNT;
+- rmiaddr += SMB_MAX_COUNT;
+ }
+ exit:
+ mutex_unlock(&rmi_smb->page_mutex);
+@@ -215,7 +214,6 @@ static int rmi_smb_read_block(struct rmi_transport_dev *xport, u16 rmiaddr,
+ /* prepare to read next block of bytes */
+ cur_len -= SMB_MAX_COUNT;
+ databuff += SMB_MAX_COUNT;
+- rmiaddr += SMB_MAX_COUNT;
+ }
+
+ retval = 0;
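
The two rmi_smbus hunks above delete the address increment from the chunked-transfer loops: over SMBus the RMI device keeps its own cursor within a mapped command, so only the host buffer pointer may advance between chunks, and bumping rmiaddr as well skipped ahead in the register map. The loop shape after the fix, as a self-contained sketch (the bus callback below is a dummy stand-in, not the real API):

#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define SMB_MAX_COUNT 32

/* dummy stand-in for the actual SMBus block access */
static int smb_block_read(int cmd, unsigned char *buf, size_t len)
{
	memset(buf, cmd, len);
	return 0;
}

static int read_block(int cmd, unsigned char *databuff, size_t len)
{
	while (len > 0) {
		size_t chunk = len > SMB_MAX_COUNT ? SMB_MAX_COUNT : len;
		int ret = smb_block_read(cmd, databuff, chunk);

		if (ret < 0)
			return ret;
		databuff += chunk; /* advance the destination buffer... */
		len -= chunk;	   /* ...but never the device address */
	}
	return 0;
}

int main(void)
{
	unsigned char buf[100];

	printf("%d\n", read_block(0x2a, buf, sizeof(buf)));
	return 0;
}
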
+diff --git a/drivers/input/touchscreen/cyttsp4_core.c b/drivers/input/touchscreen/cyttsp4_core.c
+index 4b22d49a0f49..6bcffc930384 100644
+--- a/drivers/input/touchscreen/cyttsp4_core.c
++++ b/drivers/input/touchscreen/cyttsp4_core.c
+@@ -1990,11 +1990,6 @@ static int cyttsp4_mt_probe(struct cyttsp4 *cd)
+
+ /* get sysinfo */
+ md->si = &cd->sysinfo;
+- if (!md->si) {
+- dev_err(dev, "%s: Fail get sysinfo pointer from core p=%p\n",
+- __func__, md->si);
+- goto error_get_sysinfo;
+- }
+
+ rc = cyttsp4_setup_input_device(cd);
+ if (rc)
+@@ -2004,8 +1999,6 @@ static int cyttsp4_mt_probe(struct cyttsp4 *cd)
+
+ error_init_input:
+ input_free_device(md->input);
+-error_get_sysinfo:
+- input_set_drvdata(md->input, NULL);
+ error_alloc_failed:
+ dev_err(dev, "%s failed.\n", __func__);
+ return rc;
+diff --git a/drivers/input/touchscreen/goodix.c b/drivers/input/touchscreen/goodix.c
+index 5178ea8b5f30..b99ace9b9a0e 100644
+--- a/drivers/input/touchscreen/goodix.c
++++ b/drivers/input/touchscreen/goodix.c
+@@ -126,6 +126,15 @@ static const unsigned long goodix_irq_flags[] = {
+ */
+ static const struct dmi_system_id rotated_screen[] = {
+ #if defined(CONFIG_DMI) && defined(CONFIG_X86)
++ {
++ .ident = "Teclast X89",
++ .matches = {
++ /* tPAD is too generic, also match on bios date */
++ DMI_MATCH(DMI_BOARD_VENDOR, "TECLAST"),
++ DMI_MATCH(DMI_BOARD_NAME, "tPAD"),
++ DMI_MATCH(DMI_BIOS_DATE, "12/19/2014"),
++ },
++ },
+ {
+ .ident = "WinBook TW100",
+ .matches = {
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index c3445d2cedb9..94c3f1a6fb5c 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -612,7 +612,7 @@ static bool raid0_make_request(struct mddev *mddev, struct bio *bio)
+ tmp_dev = map_sector(mddev, zone, sector, &sector);
+ break;
+ default:
+- WARN("md/raid0:%s: Invalid layout\n", mdname(mddev));
++ WARN(1, "md/raid0:%s: Invalid layout\n", mdname(mddev));
+ bio_io_error(bio);
+ return true;
+ }
+diff --git a/drivers/media/rc/rc-main.c b/drivers/media/rc/rc-main.c
+index 13da4c5c7d17..7741151606ef 100644
+--- a/drivers/media/rc/rc-main.c
++++ b/drivers/media/rc/rc-main.c
+@@ -1773,6 +1773,7 @@ static int rc_prepare_rx_device(struct rc_dev *dev)
+ set_bit(MSC_SCAN, dev->input_dev->mscbit);
+
+ /* Pointer/mouse events */
++ set_bit(INPUT_PROP_POINTING_STICK, dev->input_dev->propbit);
+ set_bit(EV_REL, dev->input_dev->evbit);
+ set_bit(REL_X, dev->input_dev->relbit);
+ set_bit(REL_Y, dev->input_dev->relbit);
+diff --git a/drivers/net/can/slcan.c b/drivers/net/can/slcan.c
+index 5d338b2ac39e..cf0769ad39cd 100644
+--- a/drivers/net/can/slcan.c
++++ b/drivers/net/can/slcan.c
+@@ -613,6 +613,7 @@ err_free_chan:
+ sl->tty = NULL;
+ tty->disc_data = NULL;
+ clear_bit(SLF_INUSE, &sl->flags);
++ slc_free_netdev(sl->dev);
+ free_netdev(sl->dev);
+
+ err_exit:
+diff --git a/drivers/net/can/usb/ucan.c b/drivers/net/can/usb/ucan.c
+index 04aac3bb54ef..81e942f713e6 100644
+--- a/drivers/net/can/usb/ucan.c
++++ b/drivers/net/can/usb/ucan.c
+@@ -792,7 +792,7 @@ resubmit:
+ up);
+
+ usb_anchor_urb(urb, &up->rx_urbs);
+- ret = usb_submit_urb(urb, GFP_KERNEL);
++ ret = usb_submit_urb(urb, GFP_ATOMIC);
+
+ if (ret < 0) {
+ netdev_err(up->netdev,
+diff --git a/drivers/net/ethernet/cirrus/ep93xx_eth.c b/drivers/net/ethernet/cirrus/ep93xx_eth.c
+index f1a0c4dceda0..f37c9a08c4cf 100644
+--- a/drivers/net/ethernet/cirrus/ep93xx_eth.c
++++ b/drivers/net/ethernet/cirrus/ep93xx_eth.c
+@@ -763,6 +763,7 @@ static int ep93xx_eth_remove(struct platform_device *pdev)
+ {
+ struct net_device *dev;
+ struct ep93xx_priv *ep;
++ struct resource *mem;
+
+ dev = platform_get_drvdata(pdev);
+ if (dev == NULL)
+@@ -778,8 +779,8 @@ static int ep93xx_eth_remove(struct platform_device *pdev)
+ iounmap(ep->base_addr);
+
+ if (ep->res != NULL) {
+- release_resource(ep->res);
+- kfree(ep->res);
++ mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++ release_mem_region(mem->start, resource_size(mem));
+ }
+
+ free_netdev(dev);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+index bac4ce13f6ae..f5c323e79834 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+@@ -124,7 +124,7 @@ static int hclge_ets_validate(struct hclge_dev *hdev, struct ieee_ets *ets,
+ if (ret)
+ return ret;
+
+- for (i = 0; i < HNAE3_MAX_TC; i++) {
++ for (i = 0; i < hdev->tc_max; i++) {
+ switch (ets->tc_tsa[i]) {
+ case IEEE_8021QAZ_TSA_STRICT:
+ if (hdev->tm_info.tc_info[i].tc_sch_mode !=
+@@ -302,6 +302,7 @@ static int hclge_ieee_setpfc(struct hnae3_handle *h, struct ieee_pfc *pfc)
+ struct hclge_vport *vport = hclge_get_vport(h);
+ struct hclge_dev *hdev = vport->back;
+ u8 i, j, pfc_map, *prio_tc;
++ int ret;
+
+ if (!(hdev->dcbx_cap & DCB_CAP_DCBX_VER_IEEE) ||
+ hdev->flag & HCLGE_FLAG_MQPRIO_ENABLE)
+@@ -327,7 +328,21 @@ static int hclge_ieee_setpfc(struct hnae3_handle *h, struct ieee_pfc *pfc)
+
+ hclge_tm_pfc_info_update(hdev);
+
+- return hclge_pause_setup_hw(hdev, false);
++ ret = hclge_pause_setup_hw(hdev, false);
++ if (ret)
++ return ret;
++
++ ret = hclge_notify_client(hdev, HNAE3_DOWN_CLIENT);
++ if (ret)
++ return ret;
++
++ ret = hclge_buffer_alloc(hdev);
++ if (ret) {
++ hclge_notify_client(hdev, HNAE3_UP_CLIENT);
++ return ret;
++ }
++
++ return hclge_notify_client(hdev, HNAE3_UP_CLIENT);
+ }
+
+ /* DCBX configuration */
+diff --git a/drivers/net/ethernet/renesas/ravb.h b/drivers/net/ethernet/renesas/ravb.h
+index ac9195add811..709022939822 100644
+--- a/drivers/net/ethernet/renesas/ravb.h
++++ b/drivers/net/ethernet/renesas/ravb.h
+@@ -960,6 +960,8 @@ enum RAVB_QUEUE {
+ #define NUM_RX_QUEUE 2
+ #define NUM_TX_QUEUE 2
+
++#define RX_BUF_SZ (2048 - ETH_FCS_LEN + sizeof(__sum16))
++
+ /* TX descriptors per packet */
+ #define NUM_TX_DESC_GEN2 2
+ #define NUM_TX_DESC_GEN3 1
+@@ -1023,7 +1025,6 @@ struct ravb_private {
+ u32 dirty_rx[NUM_RX_QUEUE]; /* Producer ring indices */
+ u32 cur_tx[NUM_TX_QUEUE];
+ u32 dirty_tx[NUM_TX_QUEUE];
+- u32 rx_buf_sz; /* Based on MTU+slack. */
+ struct napi_struct napi[NUM_RX_QUEUE];
+ struct work_struct work;
+ /* MII transceiver section. */
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index 6cacd5e893ac..393644833cd5 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -230,7 +230,7 @@ static void ravb_ring_free(struct net_device *ndev, int q)
+ le32_to_cpu(desc->dptr)))
+ dma_unmap_single(ndev->dev.parent,
+ le32_to_cpu(desc->dptr),
+- priv->rx_buf_sz,
++ RX_BUF_SZ,
+ DMA_FROM_DEVICE);
+ }
+ ring_size = sizeof(struct ravb_ex_rx_desc) *
+@@ -293,9 +293,9 @@ static void ravb_ring_format(struct net_device *ndev, int q)
+ for (i = 0; i < priv->num_rx_ring[q]; i++) {
+ /* RX descriptor */
+ rx_desc = &priv->rx_ring[q][i];
+- rx_desc->ds_cc = cpu_to_le16(priv->rx_buf_sz);
++ rx_desc->ds_cc = cpu_to_le16(RX_BUF_SZ);
+ dma_addr = dma_map_single(ndev->dev.parent, priv->rx_skb[q][i]->data,
+- priv->rx_buf_sz,
++ RX_BUF_SZ,
+ DMA_FROM_DEVICE);
+ /* We just set the data size to 0 for a failed mapping which
+ * should prevent DMA from happening...
+@@ -342,9 +342,6 @@ static int ravb_ring_init(struct net_device *ndev, int q)
+ int ring_size;
+ int i;
+
+- priv->rx_buf_sz = (ndev->mtu <= 1492 ? PKT_BUF_SZ : ndev->mtu) +
+- ETH_HLEN + VLAN_HLEN + sizeof(__sum16);
+-
+ /* Allocate RX and TX skb rings */
+ priv->rx_skb[q] = kcalloc(priv->num_rx_ring[q],
+ sizeof(*priv->rx_skb[q]), GFP_KERNEL);
+@@ -354,7 +351,7 @@ static int ravb_ring_init(struct net_device *ndev, int q)
+ goto error;
+
+ for (i = 0; i < priv->num_rx_ring[q]; i++) {
+- skb = netdev_alloc_skb(ndev, priv->rx_buf_sz + RAVB_ALIGN - 1);
++ skb = netdev_alloc_skb(ndev, RX_BUF_SZ + RAVB_ALIGN - 1);
+ if (!skb)
+ goto error;
+ ravb_set_buffer_align(skb);
+@@ -590,7 +587,7 @@ static bool ravb_rx(struct net_device *ndev, int *quota, int q)
+ skb = priv->rx_skb[q][entry];
+ priv->rx_skb[q][entry] = NULL;
+ dma_unmap_single(ndev->dev.parent, le32_to_cpu(desc->dptr),
+- priv->rx_buf_sz,
++ RX_BUF_SZ,
+ DMA_FROM_DEVICE);
+ get_ts &= (q == RAVB_NC) ?
+ RAVB_RXTSTAMP_TYPE_V2_L2_EVENT :
+@@ -623,11 +620,11 @@ static bool ravb_rx(struct net_device *ndev, int *quota, int q)
+ for (; priv->cur_rx[q] - priv->dirty_rx[q] > 0; priv->dirty_rx[q]++) {
+ entry = priv->dirty_rx[q] % priv->num_rx_ring[q];
+ desc = &priv->rx_ring[q][entry];
+- desc->ds_cc = cpu_to_le16(priv->rx_buf_sz);
++ desc->ds_cc = cpu_to_le16(RX_BUF_SZ);
+
+ if (!priv->rx_skb[q][entry]) {
+ skb = netdev_alloc_skb(ndev,
+- priv->rx_buf_sz +
++ RX_BUF_SZ +
+ RAVB_ALIGN - 1);
+ if (!skb)
+ break; /* Better luck next round. */
+@@ -1814,10 +1811,15 @@ static int ravb_do_ioctl(struct net_device *ndev, struct ifreq *req, int cmd)
+
+ static int ravb_change_mtu(struct net_device *ndev, int new_mtu)
+ {
+- if (netif_running(ndev))
+- return -EBUSY;
++ struct ravb_private *priv = netdev_priv(ndev);
+
+ ndev->mtu = new_mtu;
++
++ if (netif_running(ndev)) {
++ synchronize_irq(priv->emac_irq);
++ ravb_emac_init(ndev);
++ }
++
+ netdev_update_features(ndev);
+
+ return 0;
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
+index 9ef6b8fe03c1..0fbf8c1d5c98 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
+@@ -252,27 +252,23 @@ static int iwl_pcie_gen2_build_amsdu(struct iwl_trans *trans,
+ struct ieee80211_hdr *hdr = (void *)skb->data;
+ unsigned int snap_ip_tcp_hdrlen, ip_hdrlen, total_len, hdr_room;
+ unsigned int mss = skb_shinfo(skb)->gso_size;
+- u16 length, iv_len, amsdu_pad;
++ u16 length, amsdu_pad;
+ u8 *start_hdr;
+ struct iwl_tso_hdr_page *hdr_page;
+ struct page **page_ptr;
+ struct tso_t tso;
+
+- /* if the packet is protected, then it must be CCMP or GCMP */
+- iv_len = ieee80211_has_protected(hdr->frame_control) ?
+- IEEE80211_CCMP_HDR_LEN : 0;
+-
+ trace_iwlwifi_dev_tx(trans->dev, skb, tfd, sizeof(*tfd),
+ &dev_cmd->hdr, start_len, 0);
+
+ ip_hdrlen = skb_transport_header(skb) - skb_network_header(skb);
+ snap_ip_tcp_hdrlen = 8 + ip_hdrlen + tcp_hdrlen(skb);
+- total_len = skb->len - snap_ip_tcp_hdrlen - hdr_len - iv_len;
++ total_len = skb->len - snap_ip_tcp_hdrlen - hdr_len;
+ amsdu_pad = 0;
+
+ /* total amount of header we may need for this A-MSDU */
+ hdr_room = DIV_ROUND_UP(total_len, mss) *
+- (3 + snap_ip_tcp_hdrlen + sizeof(struct ethhdr)) + iv_len;
++ (3 + snap_ip_tcp_hdrlen + sizeof(struct ethhdr));
+
+ /* Our device supports 9 segments at most, it will fit in 1 page */
+ hdr_page = get_page_hdr(trans, hdr_room);
+@@ -283,14 +279,12 @@ static int iwl_pcie_gen2_build_amsdu(struct iwl_trans *trans,
+ start_hdr = hdr_page->pos;
+ page_ptr = (void *)((u8 *)skb->cb + trans_pcie->page_offs);
+ *page_ptr = hdr_page->page;
+- memcpy(hdr_page->pos, skb->data + hdr_len, iv_len);
+- hdr_page->pos += iv_len;
+
+ /*
+- * Pull the ieee80211 header + IV to be able to use TSO core,
++ * Pull the ieee80211 header to be able to use TSO core,
+ * we will restore it for the tx_status flow.
+ */
+- skb_pull(skb, hdr_len + iv_len);
++ skb_pull(skb, hdr_len);
+
+ /*
+ * Remove the length of all the headers that we don't actually
+@@ -365,8 +359,8 @@ static int iwl_pcie_gen2_build_amsdu(struct iwl_trans *trans,
+ }
+ }
+
+- /* re -add the WiFi header and IV */
+- skb_push(skb, hdr_len + iv_len);
++ /* re -add the WiFi header */
++ skb_push(skb, hdr_len);
+
+ return 0;
+
+diff --git a/drivers/net/wireless/rsi/rsi_91x_mgmt.c b/drivers/net/wireless/rsi/rsi_91x_mgmt.c
+index 6c7f26ef6476..9cc8a335d519 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_mgmt.c
++++ b/drivers/net/wireless/rsi/rsi_91x_mgmt.c
+@@ -1756,6 +1756,7 @@ static int rsi_send_beacon(struct rsi_common *common)
+ skb_pull(skb, (64 - dword_align_bytes));
+ if (rsi_prepare_beacon(common, skb)) {
+ rsi_dbg(ERR_ZONE, "Failed to prepare beacon\n");
++ dev_kfree_skb(skb);
+ return -EINVAL;
+ }
+ skb_queue_tail(&common->tx_queue[MGMT_BEACON_Q], skb);
+diff --git a/drivers/nfc/nxp-nci/i2c.c b/drivers/nfc/nxp-nci/i2c.c
+index 4aeb3861b409..6c468899f2ff 100644
+--- a/drivers/nfc/nxp-nci/i2c.c
++++ b/drivers/nfc/nxp-nci/i2c.c
+@@ -225,8 +225,10 @@ static irqreturn_t nxp_nci_i2c_irq_thread_fn(int irq, void *phy_id)
+
+ if (r == -EREMOTEIO) {
+ phy->hard_fault = r;
+- skb = NULL;
+- } else if (r < 0) {
++ if (info->mode == NXP_NCI_MODE_FW)
++ nxp_nci_fw_recv_frame(phy->ndev, NULL);
++ }
++ if (r < 0) {
+ nfc_err(&client->dev, "Read failed with error %d\n", r);
+ goto exit_irq_handled;
+ }
+diff --git a/drivers/spi/spi-atmel.c b/drivers/spi/spi-atmel.c
+index f00b367523cd..6a4540ba65ba 100644
+--- a/drivers/spi/spi-atmel.c
++++ b/drivers/spi/spi-atmel.c
+@@ -1182,10 +1182,8 @@ static int atmel_spi_setup(struct spi_device *spi)
+ as = spi_master_get_devdata(spi->master);
+
+ /* see notes above re chipselect */
+- if (!atmel_spi_is_v2(as)
+- && spi->chip_select == 0
+- && (spi->mode & SPI_CS_HIGH)) {
+- dev_dbg(&spi->dev, "setup: can't be active-high\n");
++ if (!as->use_cs_gpios && (spi->mode & SPI_CS_HIGH)) {
++ dev_warn(&spi->dev, "setup: non GPIO CS can't be active-high\n");
+ return -EINVAL;
+ }
+
+diff --git a/drivers/spi/spi-fsl-qspi.c b/drivers/spi/spi-fsl-qspi.c
+index 448c00e4065b..609e10aa2535 100644
+--- a/drivers/spi/spi-fsl-qspi.c
++++ b/drivers/spi/spi-fsl-qspi.c
+@@ -63,6 +63,11 @@
+ #define QUADSPI_IPCR 0x08
+ #define QUADSPI_IPCR_SEQID(x) ((x) << 24)
+
++#define QUADSPI_FLSHCR 0x0c
++#define QUADSPI_FLSHCR_TCSS_MASK GENMASK(3, 0)
++#define QUADSPI_FLSHCR_TCSH_MASK GENMASK(11, 8)
++#define QUADSPI_FLSHCR_TDH_MASK GENMASK(17, 16)
++
+ #define QUADSPI_BUF3CR 0x1c
+ #define QUADSPI_BUF3CR_ALLMST_MASK BIT(31)
+ #define QUADSPI_BUF3CR_ADATSZ(x) ((x) << 8)
+@@ -95,6 +100,9 @@
+ #define QUADSPI_FR 0x160
+ #define QUADSPI_FR_TFF_MASK BIT(0)
+
++#define QUADSPI_RSER 0x164
++#define QUADSPI_RSER_TFIE BIT(0)
++
+ #define QUADSPI_SPTRCLR 0x16c
+ #define QUADSPI_SPTRCLR_IPPTRC BIT(8)
+ #define QUADSPI_SPTRCLR_BFPTRC BIT(0)
+@@ -112,9 +120,6 @@
+ #define QUADSPI_LCKER_LOCK BIT(0)
+ #define QUADSPI_LCKER_UNLOCK BIT(1)
+
+-#define QUADSPI_RSER 0x164
+-#define QUADSPI_RSER_TFIE BIT(0)
+-
+ #define QUADSPI_LUT_BASE 0x310
+ #define QUADSPI_LUT_OFFSET (SEQID_LUT * 4 * 4)
+ #define QUADSPI_LUT_REG(idx) \
+@@ -181,6 +186,12 @@
+ */
+ #define QUADSPI_QUIRK_BASE_INTERNAL BIT(4)
+
++/*
++ * Controller uses TDH bits in register QUADSPI_FLSHCR.
++ * They need to be set in accordance with the DDR/SDR mode.
++ */
++#define QUADSPI_QUIRK_USE_TDH_SETTING BIT(5)
++
+ struct fsl_qspi_devtype_data {
+ unsigned int rxfifo;
+ unsigned int txfifo;
+@@ -209,7 +220,8 @@ static const struct fsl_qspi_devtype_data imx7d_data = {
+ .rxfifo = SZ_128,
+ .txfifo = SZ_512,
+ .ahb_buf_size = SZ_1K,
+- .quirks = QUADSPI_QUIRK_TKT253890 | QUADSPI_QUIRK_4X_INT_CLK,
++ .quirks = QUADSPI_QUIRK_TKT253890 | QUADSPI_QUIRK_4X_INT_CLK |
++ QUADSPI_QUIRK_USE_TDH_SETTING,
+ .little_endian = true,
+ };
+
+@@ -217,7 +229,8 @@ static const struct fsl_qspi_devtype_data imx6ul_data = {
+ .rxfifo = SZ_128,
+ .txfifo = SZ_512,
+ .ahb_buf_size = SZ_1K,
+- .quirks = QUADSPI_QUIRK_TKT253890 | QUADSPI_QUIRK_4X_INT_CLK,
++ .quirks = QUADSPI_QUIRK_TKT253890 | QUADSPI_QUIRK_4X_INT_CLK |
++ QUADSPI_QUIRK_USE_TDH_SETTING,
+ .little_endian = true,
+ };
+
+@@ -275,6 +288,11 @@ static inline int needs_amba_base_offset(struct fsl_qspi *q)
+ return !(q->devtype_data->quirks & QUADSPI_QUIRK_BASE_INTERNAL);
+ }
+
++static inline int needs_tdh_setting(struct fsl_qspi *q)
++{
++ return q->devtype_data->quirks & QUADSPI_QUIRK_USE_TDH_SETTING;
++}
++
+ /*
+ * An IC bug makes it necessary to rearrange the 32-bit data.
+ * Later chips, such as IMX6SLX, have fixed this bug.
+@@ -710,6 +728,16 @@ static int fsl_qspi_default_setup(struct fsl_qspi *q)
+ qspi_writel(q, QUADSPI_MCR_MDIS_MASK | QUADSPI_MCR_RESERVED_MASK,
+ base + QUADSPI_MCR);
+
++ /*
++ * Previous boot stages (BootROM, bootloader) might have used DDR
++ * mode and did not clear the TDH bits. As we currently use SDR mode
++ * only, clear the TDH bits if necessary.
++ */
++ if (needs_tdh_setting(q))
++ qspi_writel(q, qspi_readl(q, base + QUADSPI_FLSHCR) &
++ ~QUADSPI_FLSHCR_TDH_MASK,
++ base + QUADSPI_FLSHCR);
++
+ reg = qspi_readl(q, base + QUADSPI_SMPR);
+ qspi_writel(q, reg & ~(QUADSPI_SMPR_FSDLY_MASK
+ | QUADSPI_SMPR_FSPHS_MASK
+diff --git a/drivers/spi/spi-stm32-qspi.c b/drivers/spi/spi-stm32-qspi.c
+index 655e4afbfb2a..c9a782010006 100644
+--- a/drivers/spi/spi-stm32-qspi.c
++++ b/drivers/spi/spi-stm32-qspi.c
+@@ -528,7 +528,6 @@ static void stm32_qspi_release(struct stm32_qspi *qspi)
+ stm32_qspi_dma_free(qspi);
+ mutex_destroy(&qspi->lock);
+ clk_disable_unprepare(qspi->clk);
+- spi_master_put(qspi->ctrl);
+ }
+
+ static int stm32_qspi_probe(struct platform_device *pdev)
+@@ -629,6 +628,8 @@ static int stm32_qspi_probe(struct platform_device *pdev)
+
+ err:
+ stm32_qspi_release(qspi);
++ spi_master_put(qspi->ctrl);
++
+ return ret;
+ }
+
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 75ac046cae52..f3b54025986f 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -1710,15 +1710,7 @@ static int of_spi_parse_dt(struct spi_controller *ctlr, struct spi_device *spi,
+ spi->mode |= SPI_3WIRE;
+ if (of_property_read_bool(nc, "spi-lsb-first"))
+ spi->mode |= SPI_LSB_FIRST;
+-
+- /*
+- * For descriptors associated with the device, polarity inversion is
+- * handled in the gpiolib, so all chip selects are "active high" in
+- * the logical sense, the gpiolib will invert the line if need be.
+- */
+- if (ctlr->use_gpio_descriptors)
+- spi->mode |= SPI_CS_HIGH;
+- else if (of_property_read_bool(nc, "spi-cs-high"))
++ if (of_property_read_bool(nc, "spi-cs-high"))
+ spi->mode |= SPI_CS_HIGH;
+
+ /* Device DUAL/QUAD mode */
+@@ -1782,6 +1774,15 @@ static int of_spi_parse_dt(struct spi_controller *ctlr, struct spi_device *spi,
+ }
+ spi->chip_select = value;
+
++ /*
++ * For descriptors associated with the device, polarity inversion is
++ * handled in the gpiolib, so all gpio chip selects are "active high"
++ * in the logical sense, the gpiolib will invert the line if need be.
++ */
++ if ((ctlr->use_gpio_descriptors) && ctlr->cs_gpiods &&
++ ctlr->cs_gpiods[spi->chip_select])
++ spi->mode |= SPI_CS_HIGH;
++
+ /* Device speed */
+ rc = of_property_read_u32(nc, "spi-max-frequency", &value);
+ if (rc) {
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index ebe15f2cf7fc..7f38ba134bb9 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -304,7 +304,7 @@ static void thermal_zone_device_set_polling(struct thermal_zone_device *tz,
+ &tz->poll_queue,
+ msecs_to_jiffies(delay));
+ else
+- cancel_delayed_work_sync(&tz->poll_queue);
++ cancel_delayed_work(&tz->poll_queue);
+ }
+
+ static void monitor_thermal_zone(struct thermal_zone_device *tz)
+@@ -1404,7 +1404,7 @@ void thermal_zone_device_unregister(struct thermal_zone_device *tz)
+
+ mutex_unlock(&thermal_list_lock);
+
+- thermal_zone_device_set_polling(tz, 0);
++ cancel_delayed_work_sync(&tz->poll_queue);
+
+ thermal_set_governor(tz, NULL);
+
+diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
+index 5921a33b2a07..eba7266cef87 100644
+--- a/drivers/tty/serial/amba-pl011.c
++++ b/drivers/tty/serial/amba-pl011.c
+@@ -813,10 +813,8 @@ __acquires(&uap->port.lock)
+ if (!uap->using_tx_dma)
+ return;
+
+- /* Avoid deadlock with the DMA engine callback */
+- spin_unlock(&uap->port.lock);
+- dmaengine_terminate_all(uap->dmatx.chan);
+- spin_lock(&uap->port.lock);
++ dmaengine_terminate_async(uap->dmatx.chan);
++
+ if (uap->dmatx.queued) {
+ dma_unmap_sg(uap->dmatx.chan->device->dev, &uap->dmatx.sg, 1,
+ DMA_TO_DEVICE);
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 92dad2b4ec36..2cd1a9dd8877 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -436,8 +436,8 @@ static void lpuart_dma_tx(struct lpuart_port *sport)
+ }
+
+ sport->dma_tx_desc = dmaengine_prep_slave_sg(sport->dma_tx_chan, sgl,
+- sport->dma_tx_nents,
+- DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
++ ret, DMA_MEM_TO_DEV,
++ DMA_PREP_INTERRUPT);
+ if (!sport->dma_tx_desc) {
+ dma_unmap_sg(dev, sgl, sport->dma_tx_nents, DMA_TO_DEVICE);
+ dev_err(dev, "Cannot prepare TX slave DMA!\n");
+diff --git a/drivers/tty/serial/ifx6x60.c b/drivers/tty/serial/ifx6x60.c
+index ffefd218761e..31033d517e82 100644
+--- a/drivers/tty/serial/ifx6x60.c
++++ b/drivers/tty/serial/ifx6x60.c
+@@ -1230,6 +1230,9 @@ static int ifx_spi_spi_remove(struct spi_device *spi)
+ struct ifx_spi_device *ifx_dev = spi_get_drvdata(spi);
+ /* stop activity */
+ tasklet_kill(&ifx_dev->io_work_tasklet);
++
++ pm_runtime_disable(&spi->dev);
++
+ /* free irq */
+ free_irq(gpio_to_irq(ifx_dev->gpio.reset_out), ifx_dev);
+ free_irq(gpio_to_irq(ifx_dev->gpio.srdy), ifx_dev);
+diff --git a/drivers/tty/serial/msm_serial.c b/drivers/tty/serial/msm_serial.c
+index 3657a24913fc..00964b6e4ac1 100644
+--- a/drivers/tty/serial/msm_serial.c
++++ b/drivers/tty/serial/msm_serial.c
+@@ -980,6 +980,7 @@ static unsigned int msm_get_mctrl(struct uart_port *port)
+ static void msm_reset(struct uart_port *port)
+ {
+ struct msm_port *msm_port = UART_TO_MSM(port);
++ unsigned int mr;
+
+ /* reset everything */
+ msm_write(port, UART_CR_CMD_RESET_RX, UART_CR);
+@@ -987,7 +988,10 @@ static void msm_reset(struct uart_port *port)
+ msm_write(port, UART_CR_CMD_RESET_ERR, UART_CR);
+ msm_write(port, UART_CR_CMD_RESET_BREAK_INT, UART_CR);
+ msm_write(port, UART_CR_CMD_RESET_CTS, UART_CR);
+- msm_write(port, UART_CR_CMD_SET_RFR, UART_CR);
++ msm_write(port, UART_CR_CMD_RESET_RFR, UART_CR);
++ mr = msm_read(port, UART_MR1);
++ mr &= ~UART_MR1_RX_RDY_CTL;
++ msm_write(port, mr, UART_MR1);
+
+ /* Disable DM modes */
+ if (msm_port->is_uartdm)
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 4223cb496764..efaf6f9ca17f 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -1106,7 +1106,7 @@ static int uart_break_ctl(struct tty_struct *tty, int break_state)
+ if (!uport)
+ goto out;
+
+- if (uport->type != PORT_UNKNOWN)
++ if (uport->type != PORT_UNKNOWN && uport->ops->break_ctl)
+ uport->ops->break_ctl(uport, break_state);
+ ret = 0;
+ out:
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index 24a2261f879a..0b97268b0055 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -239,8 +239,8 @@ static void stm32_receive_chars(struct uart_port *port, bool threaded)
+ * cleared by the sequence [read SR - read DR].
+ */
+ if ((sr & USART_SR_ERR_MASK) && ofs->icr != UNDEF_REG)
+- stm32_clr_bits(port, ofs->icr, USART_ICR_ORECF |
+- USART_ICR_PECF | USART_ICR_FECF);
++ writel_relaxed(sr & USART_SR_ERR_MASK,
++ port->membase + ofs->icr);
+
+ c = stm32_get_char(port, &sr, &stm32_port->last_res);
+ port->icount.rx++;
+@@ -434,7 +434,7 @@ static void stm32_transmit_chars(struct uart_port *port)
+ if (ofs->icr == UNDEF_REG)
+ stm32_clr_bits(port, ofs->isr, USART_SR_TC);
+ else
+- stm32_set_bits(port, ofs->icr, USART_ICR_TCCF);
++ writel_relaxed(USART_ICR_TCCF, port->membase + ofs->icr);
+
+ if (stm32_port->tx_ch)
+ stm32_transmit_chars_dma(port);
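
The stm32-usart hunks above swap read-modify-write helpers for plain writes when clearing status: ICR is a write-1-to-clear register, so ORing bits into whatever a read returns can acknowledge flags that were never handled. A toy model of why the plain write is the right shape (illustrative only; the backing variable stands in for the status flags an ICR write clears):

#include <stdint.h>
#include <stdio.h>

static uint32_t status_flags;		/* flags a W1C write clears */

static void icr_write(uint32_t v)
{
	status_flags &= ~v;		/* write-1-to-clear semantics */
}

int main(void)
{
	status_flags = 0x7;		/* three events pending */
	icr_write(1 << 1);		/* ack exactly one of them */
	printf("0x%x\n", status_flags);	/* 0x5: the others survive */
	return 0;
}
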
+diff --git a/drivers/tty/vt/keyboard.c b/drivers/tty/vt/keyboard.c
+index 515fc095e3b4..15d33fa0c925 100644
+--- a/drivers/tty/vt/keyboard.c
++++ b/drivers/tty/vt/keyboard.c
+@@ -1491,7 +1491,7 @@ static void kbd_event(struct input_handle *handle, unsigned int event_type,
+
+ if (event_type == EV_MSC && event_code == MSC_RAW && HW_RAW(handle->dev))
+ kbd_rawcode(value);
+- if (event_type == EV_KEY)
++ if (event_type == EV_KEY && event_code <= KEY_MAX)
+ kbd_keycode(event_code, value, HW_RAW(handle->dev));
+
+ spin_unlock(&kbd_event_lock);
+diff --git a/drivers/tty/vt/vc_screen.c b/drivers/tty/vt/vc_screen.c
+index 1f042346e722..778f83ea2249 100644
+--- a/drivers/tty/vt/vc_screen.c
++++ b/drivers/tty/vt/vc_screen.c
+@@ -456,6 +456,9 @@ vcs_write(struct file *file, const char __user *buf, size_t count, loff_t *ppos)
+ size_t ret;
+ char *con_buf;
+
++ if (use_unicode(inode))
++ return -EOPNOTSUPP;
++
+ con_buf = (char *) __get_free_page(GFP_KERNEL);
+ if (!con_buf)
+ return -ENOMEM;
+diff --git a/drivers/usb/gadget/function/u_serial.c b/drivers/usb/gadget/function/u_serial.c
+index 65f634ec7fc2..bb1e2e1d0076 100644
+--- a/drivers/usb/gadget/function/u_serial.c
++++ b/drivers/usb/gadget/function/u_serial.c
+@@ -1239,8 +1239,10 @@ int gserial_alloc_line(unsigned char *line_num)
+ __func__, port_num, PTR_ERR(tty_dev));
+
+ ret = PTR_ERR(tty_dev);
++ mutex_lock(&ports[port_num].lock);
+ port = ports[port_num].port;
+ ports[port_num].port = NULL;
++ mutex_unlock(&ports[port_num].lock);
+ gserial_free_port(port);
+ goto err;
+ }
+diff --git a/drivers/watchdog/aspeed_wdt.c b/drivers/watchdog/aspeed_wdt.c
+index 5b64bc2e8788..c59e981841e1 100644
+--- a/drivers/watchdog/aspeed_wdt.c
++++ b/drivers/watchdog/aspeed_wdt.c
+@@ -202,11 +202,6 @@ static int aspeed_wdt_probe(struct platform_device *pdev)
+ if (IS_ERR(wdt->base))
+ return PTR_ERR(wdt->base);
+
+- /*
+- * The ast2400 wdt can run at PCLK, or 1MHz. The ast2500 only
+- * runs at 1MHz. We chose to always run at 1MHz, as there's no
+- * good reason to have a faster watchdog counter.
+- */
+ wdt->wdd.info = &aspeed_wdt_info;
+ wdt->wdd.ops = &aspeed_wdt_ops;
+ wdt->wdd.max_hw_heartbeat_ms = WDT_MAX_TIMEOUT_MS;
+@@ -222,7 +217,16 @@ static int aspeed_wdt_probe(struct platform_device *pdev)
+ return -EINVAL;
+ config = ofdid->data;
+
+- wdt->ctrl = WDT_CTRL_1MHZ_CLK;
++ /*
++ * On clock rates:
++ * - ast2400 wdt can run at PCLK, or 1MHz
++ * - ast2500 only runs at 1MHz, hard coding bit 4 to 1
++ * - ast2600 always runs at 1MHz
++ *
++ * Set the ast2400 to run at 1MHz as it simplifies the driver.
++ */
++ if (of_device_is_compatible(np, "aspeed,ast2400-wdt"))
++ wdt->ctrl = WDT_CTRL_1MHZ_CLK;
+
+ /*
+ * Control reset on a per-device basis to ensure the
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index 139b4e3cc946..f4fdf3eaa570 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -803,7 +803,12 @@ success:
+ continue;
+
+ if (cookie->inodes[i]) {
+- afs_vnode_commit_status(&fc, AFS_FS_I(cookie->inodes[i]),
++ struct afs_vnode *iv = AFS_FS_I(cookie->inodes[i]);
++
++ if (test_bit(AFS_VNODE_UNSET, &iv->flags))
++ continue;
++
++ afs_vnode_commit_status(&fc, iv,
+ scb->cb_break, NULL, scb);
+ continue;
+ }
+diff --git a/fs/aio.c b/fs/aio.c
+index 01e0fb9ae45a..0d9a559d488c 100644
+--- a/fs/aio.c
++++ b/fs/aio.c
+@@ -2179,7 +2179,7 @@ SYSCALL_DEFINE5(io_getevents_time32, __u32, ctx_id,
+ #ifdef CONFIG_COMPAT
+
+ struct __compat_aio_sigset {
+- compat_sigset_t __user *sigmask;
++ compat_uptr_t sigmask;
+ compat_size_t sigsetsize;
+ };
+
+@@ -2193,7 +2193,7 @@ COMPAT_SYSCALL_DEFINE6(io_pgetevents,
+ struct old_timespec32 __user *, timeout,
+ const struct __compat_aio_sigset __user *, usig)
+ {
+- struct __compat_aio_sigset ksig = { NULL, };
++ struct __compat_aio_sigset ksig = { 0, };
+ struct timespec64 t;
+ bool interrupted;
+ int ret;
+@@ -2204,7 +2204,7 @@ COMPAT_SYSCALL_DEFINE6(io_pgetevents,
+ if (usig && copy_from_user(&ksig, usig, sizeof(ksig)))
+ return -EFAULT;
+
+- ret = set_compat_user_sigmask(ksig.sigmask, ksig.sigsetsize);
++ ret = set_compat_user_sigmask(compat_ptr(ksig.sigmask), ksig.sigsetsize);
+ if (ret)
+ return ret;
+
+@@ -2228,7 +2228,7 @@ COMPAT_SYSCALL_DEFINE6(io_pgetevents_time64,
+ struct __kernel_timespec __user *, timeout,
+ const struct __compat_aio_sigset __user *, usig)
+ {
+- struct __compat_aio_sigset ksig = { NULL, };
++ struct __compat_aio_sigset ksig = { 0, };
+ struct timespec64 t;
+ bool interrupted;
+ int ret;
+@@ -2239,7 +2239,7 @@ COMPAT_SYSCALL_DEFINE6(io_pgetevents_time64,
+ if (usig && copy_from_user(&ksig, usig, sizeof(ksig)))
+ return -EFAULT;
+
+- ret = set_compat_user_sigmask(ksig.sigmask, ksig.sigsetsize);
++ ret = set_compat_user_sigmask(compat_ptr(ksig.sigmask), ksig.sigsetsize);
+ if (ret)
+ return ret;
+
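
The aio hunks above change the compat sigset field from a kernel pointer to a compat_uptr_t -- a 32-bit handle converted with compat_ptr() -- so the struct the kernel copies matches the 8-byte layout a 32-bit caller actually prepared. The size mismatch the old declaration caused, shown with stand-in userspace types:

/* Layout sketch (userspace C, illustrative): a 32-bit task writes a
 * 4-byte pointer followed by a 4-byte size. Declaring the field as a
 * native pointer, as before, widens the struct on 64-bit kernels; a
 * fixed u32 handle keeps the two layouts in step. */
#include <stdint.h>
#include <stdio.h>

struct compat_sigset_arg_bad {		/* old: pointer-sized field */
	void *sigmask;
	uint32_t sigsetsize;
};

struct compat_sigset_arg_good {		/* new: fixed 32-bit handle */
	uint32_t sigmask;
	uint32_t sigsetsize;
};

int main(void)
{
	/* On a 64-bit build the "bad" struct is 16 bytes, while 32-bit
	 * userspace hands in 8 -- copy_from_user would read past what
	 * the caller prepared. */
	printf("bad=%zu good=%zu\n",
	       sizeof(struct compat_sigset_arg_bad),
	       sizeof(struct compat_sigset_arg_good));
	return 0;
}
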
+diff --git a/fs/autofs/expire.c b/fs/autofs/expire.c
+index cdff0567aacb..2d01553a6d58 100644
+--- a/fs/autofs/expire.c
++++ b/fs/autofs/expire.c
+@@ -498,9 +498,10 @@ static struct dentry *autofs_expire_indirect(struct super_block *sb,
+ */
+ how &= ~AUTOFS_EXP_LEAVES;
+ found = should_expire(expired, mnt, timeout, how);
+- if (!found || found != expired)
+- /* Something has changed, continue */
++ if (found != expired) { // something has changed, continue
++ dput(found);
+ goto next;
++ }
+
+ if (expired != dentry)
+ dput(dentry);
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index facb52d37d19..c9abc789c6b5 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -313,9 +313,6 @@ cifs_new_fileinfo(struct cifs_fid *fid, struct file *file,
+ INIT_LIST_HEAD(&fdlocks->locks);
+ fdlocks->cfile = cfile;
+ cfile->llist = fdlocks;
+- cifs_down_write(&cinode->lock_sem);
+- list_add(&fdlocks->llist, &cinode->llist);
+- up_write(&cinode->lock_sem);
+
+ cfile->count = 1;
+ cfile->pid = current->tgid;
+@@ -339,6 +336,10 @@ cifs_new_fileinfo(struct cifs_fid *fid, struct file *file,
+ oplock = 0;
+ }
+
++ cifs_down_write(&cinode->lock_sem);
++ list_add(&fdlocks->llist, &cinode->llist);
++ up_write(&cinode->lock_sem);
++
+ spin_lock(&tcon->open_file_lock);
+ if (fid->pending_open->oplock != CIFS_OPLOCK_NO_CHANGE && oplock)
+ oplock = fid->pending_open->oplock;
+diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
+index e311f58dc1c8..449d1584ff72 100644
+--- a/fs/cifs/smb2misc.c
++++ b/fs/cifs/smb2misc.c
+@@ -673,10 +673,10 @@ smb2_is_valid_oplock_break(char *buffer, struct TCP_Server_Info *server)
+ spin_lock(&cifs_tcp_ses_lock);
+ list_for_each(tmp, &server->smb_ses_list) {
+ ses = list_entry(tmp, struct cifs_ses, smb_ses_list);
++
+ list_for_each(tmp1, &ses->tcon_list) {
+ tcon = list_entry(tmp1, struct cifs_tcon, tcon_list);
+
+- cifs_stats_inc(&tcon->stats.cifs_stats.num_oplock_brks);
+ spin_lock(&tcon->open_file_lock);
+ list_for_each(tmp2, &tcon->openFileList) {
+ cfile = list_entry(tmp2, struct cifsFileInfo,
+@@ -688,6 +688,8 @@ smb2_is_valid_oplock_break(char *buffer, struct TCP_Server_Info *server)
+ continue;
+
+ cifs_dbg(FYI, "file id match, oplock break\n");
++ cifs_stats_inc(
++ &tcon->stats.cifs_stats.num_oplock_brks);
+ cinode = CIFS_I(d_inode(cfile->dentry));
+ spin_lock(&cfile->file_info_lock);
+ if (!CIFS_CACHE_WRITE(cinode) &&
+@@ -720,9 +722,6 @@ smb2_is_valid_oplock_break(char *buffer, struct TCP_Server_Info *server)
+ return true;
+ }
+ spin_unlock(&tcon->open_file_lock);
+- spin_unlock(&cifs_tcp_ses_lock);
+- cifs_dbg(FYI, "No matching file for oplock break\n");
+- return true;
+ }
+ }
+ spin_unlock(&cifs_tcp_ses_lock);
+diff --git a/fs/ecryptfs/inode.c b/fs/ecryptfs/inode.c
+index 0c7ea4596202..e23752d9a79f 100644
+--- a/fs/ecryptfs/inode.c
++++ b/fs/ecryptfs/inode.c
+@@ -128,13 +128,20 @@ static int ecryptfs_do_unlink(struct inode *dir, struct dentry *dentry,
+ struct inode *inode)
+ {
+ struct dentry *lower_dentry = ecryptfs_dentry_to_lower(dentry);
+- struct inode *lower_dir_inode = ecryptfs_inode_to_lower(dir);
+ struct dentry *lower_dir_dentry;
++ struct inode *lower_dir_inode;
+ int rc;
+
+- dget(lower_dentry);
+- lower_dir_dentry = lock_parent(lower_dentry);
+- rc = vfs_unlink(lower_dir_inode, lower_dentry, NULL);
++ lower_dir_dentry = ecryptfs_dentry_to_lower(dentry->d_parent);
++ lower_dir_inode = d_inode(lower_dir_dentry);
++ inode_lock_nested(lower_dir_inode, I_MUTEX_PARENT);
++ dget(lower_dentry); // don't even try to make the lower negative
++ if (lower_dentry->d_parent != lower_dir_dentry)
++ rc = -EINVAL;
++ else if (d_unhashed(lower_dentry))
++ rc = -EINVAL;
++ else
++ rc = vfs_unlink(lower_dir_inode, lower_dentry, NULL);
+ if (rc) {
+ printk(KERN_ERR "Error in vfs_unlink; rc = [%d]\n", rc);
+ goto out_unlock;
+@@ -142,10 +149,11 @@ static int ecryptfs_do_unlink(struct inode *dir, struct dentry *dentry,
+ fsstack_copy_attr_times(dir, lower_dir_inode);
+ set_nlink(inode, ecryptfs_inode_to_lower(inode)->i_nlink);
+ inode->i_ctime = dir->i_ctime;
+- d_drop(dentry);
+ out_unlock:
+- unlock_dir(lower_dir_dentry);
+ dput(lower_dentry);
++ inode_unlock(lower_dir_inode);
++ if (!rc)
++ d_drop(dentry);
+ return rc;
+ }
+
+@@ -519,22 +527,30 @@ static int ecryptfs_rmdir(struct inode *dir, struct dentry *dentry)
+ {
+ struct dentry *lower_dentry;
+ struct dentry *lower_dir_dentry;
++ struct inode *lower_dir_inode;
+ int rc;
+
+ lower_dentry = ecryptfs_dentry_to_lower(dentry);
+- dget(dentry);
+- lower_dir_dentry = lock_parent(lower_dentry);
+- dget(lower_dentry);
+- rc = vfs_rmdir(d_inode(lower_dir_dentry), lower_dentry);
+- dput(lower_dentry);
+- if (!rc && d_really_is_positive(dentry))
++ lower_dir_dentry = ecryptfs_dentry_to_lower(dentry->d_parent);
++ lower_dir_inode = d_inode(lower_dir_dentry);
++
++ inode_lock_nested(lower_dir_inode, I_MUTEX_PARENT);
++ dget(lower_dentry); // don't even try to make the lower negative
++ if (lower_dentry->d_parent != lower_dir_dentry)
++ rc = -EINVAL;
++ else if (d_unhashed(lower_dentry))
++ rc = -EINVAL;
++ else
++ rc = vfs_rmdir(lower_dir_inode, lower_dentry);
++ if (!rc) {
+ clear_nlink(d_inode(dentry));
+- fsstack_copy_attr_times(dir, d_inode(lower_dir_dentry));
+- set_nlink(dir, d_inode(lower_dir_dentry)->i_nlink);
+- unlock_dir(lower_dir_dentry);
++ fsstack_copy_attr_times(dir, lower_dir_inode);
++ set_nlink(dir, lower_dir_inode->i_nlink);
++ }
++ dput(lower_dentry);
++ inode_unlock(lower_dir_inode);
+ if (!rc)
+ d_drop(dentry);
+- dput(dentry);
+ return rc;
+ }
+
+@@ -572,20 +588,22 @@ ecryptfs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ struct dentry *lower_new_dentry;
+ struct dentry *lower_old_dir_dentry;
+ struct dentry *lower_new_dir_dentry;
+- struct dentry *trap = NULL;
++ struct dentry *trap;
+ struct inode *target_inode;
+
+ if (flags)
+ return -EINVAL;
+
++ lower_old_dir_dentry = ecryptfs_dentry_to_lower(old_dentry->d_parent);
++ lower_new_dir_dentry = ecryptfs_dentry_to_lower(new_dentry->d_parent);
++
+ lower_old_dentry = ecryptfs_dentry_to_lower(old_dentry);
+ lower_new_dentry = ecryptfs_dentry_to_lower(new_dentry);
+- dget(lower_old_dentry);
+- dget(lower_new_dentry);
+- lower_old_dir_dentry = dget_parent(lower_old_dentry);
+- lower_new_dir_dentry = dget_parent(lower_new_dentry);
++
+ target_inode = d_inode(new_dentry);
++
+ trap = lock_rename(lower_old_dir_dentry, lower_new_dir_dentry);
++ dget(lower_new_dentry);
+ rc = -EINVAL;
+ if (lower_old_dentry->d_parent != lower_old_dir_dentry)
+ goto out_lock;
+@@ -613,11 +631,8 @@ ecryptfs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ if (new_dir != old_dir)
+ fsstack_copy_attr_all(old_dir, d_inode(lower_old_dir_dentry));
+ out_lock:
+- unlock_rename(lower_old_dir_dentry, lower_new_dir_dentry);
+- dput(lower_new_dir_dentry);
+- dput(lower_old_dir_dentry);
+ dput(lower_new_dentry);
+- dput(lower_old_dentry);
++ unlock_rename(lower_old_dir_dentry, lower_new_dir_dentry);
+ return rc;
+ }
+
+diff --git a/fs/exportfs/expfs.c b/fs/exportfs/expfs.c
+index f0e549783caf..ba6de72a3e34 100644
+--- a/fs/exportfs/expfs.c
++++ b/fs/exportfs/expfs.c
+@@ -519,26 +519,33 @@ struct dentry *exportfs_decode_fh(struct vfsmount *mnt, struct fid *fid,
+ * inode is actually connected to the parent.
+ */
+ err = exportfs_get_name(mnt, target_dir, nbuf, result);
+- if (!err) {
+- inode_lock(target_dir->d_inode);
+- nresult = lookup_one_len(nbuf, target_dir,
+- strlen(nbuf));
+- inode_unlock(target_dir->d_inode);
+- if (!IS_ERR(nresult)) {
+- if (nresult->d_inode) {
+- dput(result);
+- result = nresult;
+- } else
+- dput(nresult);
+- }
++ if (err) {
++ dput(target_dir);
++ goto err_result;
+ }
+
++ inode_lock(target_dir->d_inode);
++ nresult = lookup_one_len(nbuf, target_dir, strlen(nbuf));
++ if (!IS_ERR(nresult)) {
++ if (unlikely(nresult->d_inode != result->d_inode)) {
++ dput(nresult);
++ nresult = ERR_PTR(-ESTALE);
++ }
++ }
++ inode_unlock(target_dir->d_inode);
+ /*
+ * At this point we are done with the parent, but it's pinned
+ * by the child dentry anyway.
+ */
+ dput(target_dir);
+
++ if (IS_ERR(nresult)) {
++ err = PTR_ERR(nresult);
++ goto err_result;
++ }
++ dput(result);
++ result = nresult;
++
+ /*
+ * And finally make sure the dentry is actually acceptable
+ * to NFSD.
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index 0c4b6a41e385..2d417bf778ef 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -214,7 +214,8 @@ static int fuse_dentry_revalidate(struct dentry *entry, unsigned int flags)
+ kfree(forget);
+ if (ret == -ENOMEM)
+ goto out;
+- if (ret || (outarg.attr.mode ^ inode->i_mode) & S_IFMT)
++ if (ret || fuse_invalid_attr(&outarg.attr) ||
++ (outarg.attr.mode ^ inode->i_mode) & S_IFMT)
+ goto invalid;
+
+ forget_all_cached_acls(inode);
+@@ -272,6 +273,12 @@ int fuse_valid_type(int m)
+ S_ISBLK(m) || S_ISFIFO(m) || S_ISSOCK(m);
+ }
+
++bool fuse_invalid_attr(struct fuse_attr *attr)
++{
++ return !fuse_valid_type(attr->mode) ||
++ attr->size > LLONG_MAX;
++}
++
+ int fuse_lookup_name(struct super_block *sb, u64 nodeid, const struct qstr *name,
+ struct fuse_entry_out *outarg, struct inode **inode)
+ {
+@@ -303,7 +310,7 @@ int fuse_lookup_name(struct super_block *sb, u64 nodeid, const struct qstr *name
+ err = -EIO;
+ if (!outarg->nodeid)
+ goto out_put_forget;
+- if (!fuse_valid_type(outarg->attr.mode))
++ if (fuse_invalid_attr(&outarg->attr))
+ goto out_put_forget;
+
+ *inode = fuse_iget(sb, outarg->nodeid, outarg->generation,
+@@ -427,7 +434,8 @@ static int fuse_create_open(struct inode *dir, struct dentry *entry,
+ goto out_free_ff;
+
+ err = -EIO;
+- if (!S_ISREG(outentry.attr.mode) || invalid_nodeid(outentry.nodeid))
++ if (!S_ISREG(outentry.attr.mode) || invalid_nodeid(outentry.nodeid) ||
++ fuse_invalid_attr(&outentry.attr))
+ goto out_free_ff;
+
+ ff->fh = outopen.fh;
+@@ -535,7 +543,7 @@ static int create_new_entry(struct fuse_conn *fc, struct fuse_args *args,
+ goto out_put_forget_req;
+
+ err = -EIO;
+- if (invalid_nodeid(outarg.nodeid))
++ if (invalid_nodeid(outarg.nodeid) || fuse_invalid_attr(&outarg.attr))
+ goto out_put_forget_req;
+
+ if ((outarg.attr.mode ^ mode) & S_IFMT)
+@@ -814,7 +822,8 @@ static int fuse_link(struct dentry *entry, struct inode *newdir,
+
+ spin_lock(&fi->lock);
+ fi->attr_version = atomic64_inc_return(&fc->attr_version);
+- inc_nlink(inode);
++ if (likely(inode->i_nlink < UINT_MAX))
++ inc_nlink(inode);
+ spin_unlock(&fi->lock);
+ fuse_invalidate_attr(inode);
+ fuse_update_ctime(inode);
+@@ -894,7 +903,8 @@ static int fuse_do_getattr(struct inode *inode, struct kstat *stat,
+ args.out.args[0].value = &outarg;
+ err = fuse_simple_request(fc, &args);
+ if (!err) {
+- if ((inode->i_mode ^ outarg.attr.mode) & S_IFMT) {
++ if (fuse_invalid_attr(&outarg.attr) ||
++ (inode->i_mode ^ outarg.attr.mode) & S_IFMT) {
+ make_bad_inode(inode);
+ err = -EIO;
+ } else {
+@@ -1517,7 +1527,8 @@ int fuse_do_setattr(struct dentry *dentry, struct iattr *attr,
+ goto error;
+ }
+
+- if ((inode->i_mode ^ outarg.attr.mode) & S_IFMT) {
++ if (fuse_invalid_attr(&outarg.attr) ||
++ (inode->i_mode ^ outarg.attr.mode) & S_IFMT) {
+ make_bad_inode(inode);
+ err = -EIO;
+ goto error;
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index 89bdc41e0d86..28ba17462325 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -1008,6 +1008,8 @@ void fuse_ctl_remove_conn(struct fuse_conn *fc);
+ */
+ int fuse_valid_type(int m);
+
++bool fuse_invalid_attr(struct fuse_attr *attr);
++
+ /**
+ * Is current process allowed to perform filesystem operation?
+ */
+diff --git a/fs/fuse/readdir.c b/fs/fuse/readdir.c
+index b2da3de6a78e..656afd035e21 100644
+--- a/fs/fuse/readdir.c
++++ b/fs/fuse/readdir.c
+@@ -184,7 +184,7 @@ static int fuse_direntplus_link(struct file *file,
+
+ if (invalid_nodeid(o->nodeid))
+ return -EIO;
+- if (!fuse_valid_type(o->attr.mode))
++ if (fuse_invalid_attr(&o->attr))
+ return -EIO;
+
+ fc = get_fuse_conn(dir);
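The FUSE hunks above funnel every attribute reply from the userspace server
through the new fuse_invalid_attr() helper, so an impossible file type or an
out-of-range size is refused with -EIO at each entry point (revalidate,
lookup, create, link, getattr, setattr and readdir-plus) instead of reaching
the inode. The check itself reduces to roughly this:

/* Server-supplied attributes are untrusted: the mode must carry a known
 * file type and the size must fit in loff_t. Mirrors fuse_invalid_attr(). */
static bool attr_is_sane(u32 mode, u64 size)
{
        bool known_type = S_ISREG(mode) || S_ISDIR(mode) || S_ISLNK(mode) ||
                          S_ISCHR(mode) || S_ISBLK(mode) ||
                          S_ISFIFO(mode) || S_ISSOCK(mode);

        return known_type && size <= LLONG_MAX;
}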
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index f563a581b924..7a83ecab9f4a 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -1535,6 +1535,8 @@ static int io_send_recvmsg(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+ ret = fn(sock, msg, flags);
+ if (force_nonblock && ret == -EAGAIN)
+ return ret;
++ if (ret == -ERESTARTSYS)
++ ret = -EINTR;
+ }
+
+ io_cqring_add_event(req->ctx, sqe->user_data, ret);
+@@ -1785,7 +1787,7 @@ static int io_poll_add(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ }
+
+ static int io_req_defer(struct io_ring_ctx *ctx, struct io_kiocb *req,
+- const struct io_uring_sqe *sqe)
++ struct sqe_submit *s)
+ {
+ struct io_uring_sqe *sqe_copy;
+
+@@ -1803,7 +1805,8 @@ static int io_req_defer(struct io_ring_ctx *ctx, struct io_kiocb *req,
+ return 0;
+ }
+
+- memcpy(sqe_copy, sqe, sizeof(*sqe_copy));
++ memcpy(&req->submit, s, sizeof(*s));
++ memcpy(sqe_copy, s->sqe, sizeof(*sqe_copy));
+ req->submit.sqe = sqe_copy;
+
+ INIT_WORK(&req->work, io_sq_wq_submit_work);
+@@ -2112,7 +2115,7 @@ static int io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
+ {
+ int ret;
+
+- ret = io_req_defer(ctx, req, s->sqe);
++ ret = io_req_defer(ctx, req, s);
+ if (ret) {
+ if (ret != -EIOCBQUEUED) {
+ io_free_req(req);
+diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
+index 10517cea9682..23b8059812c0 100644
+--- a/fs/iomap/direct-io.c
++++ b/fs/iomap/direct-io.c
+@@ -501,8 +501,15 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
+ }
+ pos += ret;
+
+- if (iov_iter_rw(iter) == READ && pos >= dio->i_size)
++ if (iov_iter_rw(iter) == READ && pos >= dio->i_size) {
++ /*
++ * We only report that we've read data up to i_size.
++ * Revert iter to a state corresponding to that as
++ * some callers (such as splice code) rely on it.
++ */
++ iov_iter_revert(iter, pos - dio->i_size);
+ break;
++ }
+ } while ((count = iov_iter_count(iter)) > 0);
+ blk_finish_plug(&plug);
+
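The iomap hunk matters because a direct read can consume iterator bytes past
i_size even though only data up to i_size is reported back, and callers such
as the splice code trust the iterator's remaining count. iov_iter_revert()
pushes the overshoot back; a sketch under illustrative names:

/* Rewind an iterator after a read advanced `pos` past `eof`, so it
 * accounts only for the bytes actually reported to the caller. */
static void clamp_iter_to_eof(struct iov_iter *iter, loff_t pos, loff_t eof)
{
        if (pos > eof)
                iov_iter_revert(iter, pos - eof);
}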
+diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
+index a387534c9577..a5a65de50318 100644
+--- a/fs/kernfs/dir.c
++++ b/fs/kernfs/dir.c
+@@ -621,7 +621,6 @@ static struct kernfs_node *__kernfs_new_node(struct kernfs_root *root,
+ {
+ struct kernfs_node *kn;
+ u32 gen;
+- int cursor;
+ int ret;
+
+ name = kstrdup_const(name, GFP_KERNEL);
+@@ -634,11 +633,11 @@ static struct kernfs_node *__kernfs_new_node(struct kernfs_root *root,
+
+ idr_preload(GFP_KERNEL);
+ spin_lock(&kernfs_idr_lock);
+- cursor = idr_get_cursor(&root->ino_idr);
+ ret = idr_alloc_cyclic(&root->ino_idr, kn, 1, 0, GFP_ATOMIC);
+- if (ret >= 0 && ret < cursor)
++ if (ret >= 0 && ret < root->last_ino)
+ root->next_generation++;
+ gen = root->next_generation;
++ root->last_ino = ret;
+ spin_unlock(&kernfs_idr_lock);
+ idr_preload_end();
+ if (ret < 0)
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 8beda999e134..c187d892e656 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1083,7 +1083,8 @@ nfsd4_clone(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ goto out;
+
+ status = nfsd4_clone_file_range(src, clone->cl_src_pos,
+- dst, clone->cl_dst_pos, clone->cl_count);
++ dst, clone->cl_dst_pos, clone->cl_count,
++ EX_ISSYNC(cstate->current_fh.fh_export));
+
+ fput(dst);
+ fput(src);
+diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
+index 18d94ea984ba..7f938bcb927d 100644
+--- a/fs/nfsd/nfssvc.c
++++ b/fs/nfsd/nfssvc.c
+@@ -94,12 +94,11 @@ static const struct svc_version *nfsd_acl_version[] = {
+
+ #define NFSD_ACL_MINVERS 2
+ #define NFSD_ACL_NRVERS ARRAY_SIZE(nfsd_acl_version)
+-static const struct svc_version *nfsd_acl_versions[NFSD_ACL_NRVERS];
+
+ static struct svc_program nfsd_acl_program = {
+ .pg_prog = NFS_ACL_PROGRAM,
+ .pg_nvers = NFSD_ACL_NRVERS,
+- .pg_vers = nfsd_acl_versions,
++ .pg_vers = nfsd_acl_version,
+ .pg_name = "nfsacl",
+ .pg_class = "nfsd",
+ .pg_stats = &nfsd_acl_svcstats,
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index c85783e536d5..18ffb590f008 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -552,7 +552,7 @@ __be32 nfsd4_set_nfs4_label(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ #endif
+
+ __be32 nfsd4_clone_file_range(struct file *src, u64 src_pos, struct file *dst,
+- u64 dst_pos, u64 count)
++ u64 dst_pos, u64 count, bool sync)
+ {
+ loff_t cloned;
+
+@@ -561,6 +561,12 @@ __be32 nfsd4_clone_file_range(struct file *src, u64 src_pos, struct file *dst,
+ return nfserrno(cloned);
+ if (count && cloned != count)
+ return nfserrno(-EINVAL);
++ if (sync) {
++ loff_t dst_end = count ? dst_pos + count - 1 : LLONG_MAX;
++ int status = vfs_fsync_range(dst, dst_pos, dst_end, 0);
++ if (status < 0)
++ return nfserrno(status);
++ }
+ return 0;
+ }
+
+diff --git a/fs/nfsd/vfs.h b/fs/nfsd/vfs.h
+index db351247892d..02b0a140af8c 100644
+--- a/fs/nfsd/vfs.h
++++ b/fs/nfsd/vfs.h
+@@ -58,7 +58,7 @@ __be32 nfsd4_set_nfs4_label(struct svc_rqst *, struct svc_fh *,
+ __be32 nfsd4_vfs_fallocate(struct svc_rqst *, struct svc_fh *,
+ struct file *, loff_t, loff_t, int);
+ __be32 nfsd4_clone_file_range(struct file *, u64, struct file *,
+- u64, u64);
++ u64, u64, bool);
+ #endif /* CONFIG_NFSD_V4 */
+ __be32 nfsd_create_locked(struct svc_rqst *, struct svc_fh *,
+ char *name, int len, struct iattr *attrs,
+diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
+index df03825ad1a1..b20ef2c0812d 100644
+--- a/include/linux/jbd2.h
++++ b/include/linux/jbd2.h
+@@ -1584,7 +1584,7 @@ static inline int jbd2_space_needed(journal_t *journal)
+ static inline unsigned long jbd2_log_space_left(journal_t *journal)
+ {
+ /* Allow for rounding errors */
+- unsigned long free = journal->j_free - 32;
++ long free = journal->j_free - 32;
+
+ if (journal->j_committing_transaction) {
+ unsigned long committing = atomic_read(&journal->
+@@ -1593,7 +1593,7 @@ static inline unsigned long jbd2_log_space_left(journal_t *journal)
+ /* Transaction + control blocks */
+ free -= committing + (committing >> JBD2_CONTROL_BLOCKS_SHIFT);
+ }
+- return free;
++ return max_t(long, free, 0);
+ }
+
+ /*
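The jbd2 change is a textbook unsigned-underflow fix: j_free can be smaller
than the 32-block rounding slack (or than the committing transaction's
reservation), and subtracting in an unsigned long then wraps to an enormous
bogus "free" count. Doing the arithmetic in a signed type and clamping at
zero gives the intended answer; a self-contained illustration in plain C:

#include <stdio.h>

int main(void)
{
        unsigned long j_free = 16;              /* journal nearly full */
        unsigned long wrapped = j_free - 32;    /* wraps: huge "free" value */
        long clamped = (long)j_free - 32;

        if (clamped < 0)                        /* max_t(long, free, 0) */
                clamped = 0;
        printf("wrapped=%lu clamped=%ld\n", wrapped, clamped);
        return 0;
}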
+diff --git a/include/linux/kernfs.h b/include/linux/kernfs.h
+index 936b61bd504e..f797ccc650e7 100644
+--- a/include/linux/kernfs.h
++++ b/include/linux/kernfs.h
+@@ -187,6 +187,7 @@ struct kernfs_root {
+
+ /* private fields, do not use outside kernfs proper */
+ struct idr ino_idr;
++ u32 last_ino;
+ u32 next_generation;
+ struct kernfs_syscall_ops *syscall_ops;
+
+diff --git a/include/sound/hdaudio.h b/include/sound/hdaudio.h
+index 612a17e375d0..3f0330a505b3 100644
+--- a/include/sound/hdaudio.h
++++ b/include/sound/hdaudio.h
+@@ -504,6 +504,7 @@ struct hdac_stream {
+ bool prepared:1;
+ bool no_period_wakeup:1;
+ bool locked:1;
++ bool stripe:1; /* apply stripe control */
+
+ /* timestamp */
+ unsigned long start_wallclk; /* start + minimum wallclk */
+diff --git a/kernel/audit_watch.c b/kernel/audit_watch.c
+index 1f31c2f1e6fc..4508d5e0cf69 100644
+--- a/kernel/audit_watch.c
++++ b/kernel/audit_watch.c
+@@ -351,12 +351,12 @@ static int audit_get_nd(struct audit_watch *watch, struct path *parent)
+ struct dentry *d = kern_path_locked(watch->path, parent);
+ if (IS_ERR(d))
+ return PTR_ERR(d);
+- inode_unlock(d_backing_inode(parent->dentry));
+ if (d_is_positive(d)) {
+ /* update watch filter fields */
+ watch->dev = d->d_sb->s_dev;
+ watch->ino = d_backing_inode(d)->i_ino;
+ }
++ inode_unlock(d_backing_inode(parent->dentry));
+ dput(d);
+ return 0;
+ }
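The audit_watch fix moves the unlock of the parent directory below the reads
of d->d_sb and d_backing_inode(d)->i_ino: once the lock is dropped, a
concurrent unlink can turn the dentry negative and the inode dereference
races with it. The shape of the fix, with record_watch() as an illustrative
stand-in for the watch field updates:

d = kern_path_locked(path, parent);
if (IS_ERR(d))
        return PTR_ERR(d);
if (d_is_positive(d))
        record_watch(d->d_sb->s_dev, d_backing_inode(d)->i_ino);
inode_unlock(d_backing_inode(parent->dentry));  /* only after the reads */
dput(d);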
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 8be1da1ebd9a..f23862fa1514 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -2119,11 +2119,12 @@ int cgroup_do_get_tree(struct fs_context *fc)
+
+ nsdentry = kernfs_node_dentry(cgrp->kn, sb);
+ dput(fc->root);
+- fc->root = nsdentry;
+ if (IS_ERR(nsdentry)) {
+- ret = PTR_ERR(nsdentry);
+ deactivate_locked_super(sb);
++ ret = PTR_ERR(nsdentry);
++ nsdentry = NULL;
+ }
++ fc->root = nsdentry;
+ }
+
+ if (!ctx->kfc.new_sb_created)
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 53173883513c..25942e43b8d4 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -11719,7 +11719,7 @@ inherit_event(struct perf_event *parent_event,
+ GFP_KERNEL);
+ if (!child_ctx->task_ctx_data) {
+ free_event(child_event);
+- return NULL;
++ return ERR_PTR(-ENOMEM);
+ }
+ }
+
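The perf fix matters because inherit_event()'s callers treat NULL as "nothing
to inherit, continue" and only IS_ERR() values as failures, so returning NULL
on a failed task_ctx_data allocation silently swallowed the -ENOMEM. The
ERR_PTR idiom in miniature (the struct name is illustrative):

/* ERR_PTR() encodes a negative errno in a pointer; IS_ERR()/PTR_ERR()
 * decode it, leaving NULL free to mean "absent" rather than "failed". */
static struct ctx_data *alloc_ctx_data(void)
{
        struct ctx_data *d = kzalloc(sizeof(*d), GFP_KERNEL);

        if (!d)
                return ERR_PTR(-ENOMEM);        /* failure, not "absent" */
        return d;
}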
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index fffe790d98bb..9a839798851c 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -5874,10 +5874,11 @@ void init_idle(struct task_struct *idle, int cpu)
+ struct rq *rq = cpu_rq(cpu);
+ unsigned long flags;
+
++ __sched_fork(0, idle);
++
+ raw_spin_lock_irqsave(&idle->pi_lock, flags);
+ raw_spin_lock(&rq->lock);
+
+- __sched_fork(0, idle);
+ idle->state = TASK_RUNNING;
+ idle->se.exec_start = sched_clock();
+ idle->flags |= PF_IDLE;
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 649c6b60929e..ba7cc68a3993 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -7530,6 +7530,19 @@ static void update_blocked_averages(int cpu)
+ rq_lock_irqsave(rq, &rf);
+ update_rq_clock(rq);
+
++ /*
++ * update_cfs_rq_load_avg() can call cpufreq_update_util(). Make sure
++ * that RT, DL and IRQ signals have been updated before updating CFS.
++ */
++ curr_class = rq->curr->sched_class;
++ update_rt_rq_load_avg(rq_clock_pelt(rq), rq, curr_class == &rt_sched_class);
++ update_dl_rq_load_avg(rq_clock_pelt(rq), rq, curr_class == &dl_sched_class);
++ update_irq_load_avg(rq, 0);
++
++ /* Don't need periodic decay once load/util_avg are null */
++ if (others_have_blocked(rq))
++ done = false;
++
+ /*
+ * Iterates the task_group tree in a bottom up fashion, see
+ * list_add_leaf_cfs_rq() for details.
+@@ -7557,14 +7570,6 @@ static void update_blocked_averages(int cpu)
+ done = false;
+ }
+
+- curr_class = rq->curr->sched_class;
+- update_rt_rq_load_avg(rq_clock_pelt(rq), rq, curr_class == &rt_sched_class);
+- update_dl_rq_load_avg(rq_clock_pelt(rq), rq, curr_class == &dl_sched_class);
+- update_irq_load_avg(rq, 0);
+- /* Don't need periodic decay once load/util_avg are null */
+- if (others_have_blocked(rq))
+- done = false;
+-
+ update_blocked_load_status(rq, !done);
+ rq_unlock_irqrestore(rq, &rf);
+ }
+@@ -7625,12 +7630,18 @@ static inline void update_blocked_averages(int cpu)
+
+ rq_lock_irqsave(rq, &rf);
+ update_rq_clock(rq);
+- update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq);
+
++ /*
++ * update_cfs_rq_load_avg() can call cpufreq_update_util(). Make sure
++ * that RT, DL and IRQ signals have been updated before updating CFS.
++ */
+ curr_class = rq->curr->sched_class;
+ update_rt_rq_load_avg(rq_clock_pelt(rq), rq, curr_class == &rt_sched_class);
+ update_dl_rq_load_avg(rq_clock_pelt(rq), rq, curr_class == &dl_sched_class);
+ update_irq_load_avg(rq, 0);
++
++ update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq);
++
+ update_blocked_load_status(rq, cfs_rq_has_blocked(cfs_rq) || others_have_blocked(rq));
+ rq_unlock_irqrestore(rq, &rf);
+ }
+diff --git a/kernel/time/time.c b/kernel/time/time.c
+index 5c54ca632d08..83f403e7a15c 100644
+--- a/kernel/time/time.c
++++ b/kernel/time/time.c
+@@ -881,7 +881,8 @@ int get_timespec64(struct timespec64 *ts,
+ ts->tv_sec = kts.tv_sec;
+
+ /* Zero out the padding for 32 bit systems or in compat mode */
+- if (IS_ENABLED(CONFIG_64BIT_TIME) && in_compat_syscall())
++ if (IS_ENABLED(CONFIG_64BIT_TIME) && (!IS_ENABLED(CONFIG_64BIT) ||
++ in_compat_syscall()))
+ kts.tv_nsec &= 0xFFFFFFFFUL;
+
+ ts->tv_nsec = kts.tv_nsec;
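The get_timespec64() fix widens the padding-zeroing condition: a 32-bit
struct timespec from userspace carries uninitialized bits in the upper half
of tv_nsec both on native 32-bit kernels and in compat syscalls on 64-bit
kernels, while the old test only caught the compat case. Restated with a
named intermediate, the corrected predicate is:

/* tv_nsec arrives as 32 bits whenever userspace is 32-bit: on a native
 * 32-bit kernel (!CONFIG_64BIT) or inside a compat syscall. */
bool userspace_is_32bit = !IS_ENABLED(CONFIG_64BIT) || in_compat_syscall();

if (IS_ENABLED(CONFIG_64BIT_TIME) && userspace_is_32bit)
        kts.tv_nsec &= 0xFFFFFFFFUL;            /* drop the garbage word */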
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index 53934fe73a9d..d6a27b99efd7 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -260,7 +260,7 @@ static void __rpc_init_priority_wait_queue(struct rpc_wait_queue *queue, const c
+ rpc_reset_waitqueue_priority(queue);
+ queue->qlen = 0;
+ queue->timer_list.expires = 0;
+- INIT_DEFERRABLE_WORK(&queue->timer_list.dwork, __rpc_queue_timer_fn);
++ INIT_DELAYED_WORK(&queue->timer_list.dwork, __rpc_queue_timer_fn);
+ INIT_LIST_HEAD(&queue->timer_list.list);
+ rpc_assign_waitqueue_name(queue, qname);
+ }
+diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c
+index 6088bc2dc11e..fcd4b1f36e66 100644
+--- a/net/xfrm/xfrm_input.c
++++ b/net/xfrm/xfrm_input.c
+@@ -480,6 +480,9 @@ int xfrm_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type)
+ else
+ XFRM_INC_STATS(net,
+ LINUX_MIB_XFRMINSTATEINVALID);
++
++ if (encap_type == -1)
++ dev_put(skb->dev);
+ goto drop;
+ }
+
+diff --git a/sound/core/oss/linear.c b/sound/core/oss/linear.c
+index 2045697f449d..797d838a2f9e 100644
+--- a/sound/core/oss/linear.c
++++ b/sound/core/oss/linear.c
+@@ -107,6 +107,8 @@ static snd_pcm_sframes_t linear_transfer(struct snd_pcm_plugin *plugin,
+ }
+ }
+ #endif
++ if (frames > dst_channels[0].frames)
++ frames = dst_channels[0].frames;
+ convert(plugin, src_channels, dst_channels, frames);
+ return frames;
+ }
+diff --git a/sound/core/oss/mulaw.c b/sound/core/oss/mulaw.c
+index 7915564bd394..3788906421a7 100644
+--- a/sound/core/oss/mulaw.c
++++ b/sound/core/oss/mulaw.c
+@@ -269,6 +269,8 @@ static snd_pcm_sframes_t mulaw_transfer(struct snd_pcm_plugin *plugin,
+ }
+ }
+ #endif
++ if (frames > dst_channels[0].frames)
++ frames = dst_channels[0].frames;
+ data = (struct mulaw_priv *)plugin->extra_data;
+ data->func(plugin, src_channels, dst_channels, frames);
+ return frames;
+diff --git a/sound/core/oss/route.c b/sound/core/oss/route.c
+index c8171f5783c8..72dea04197ef 100644
+--- a/sound/core/oss/route.c
++++ b/sound/core/oss/route.c
+@@ -57,6 +57,8 @@ static snd_pcm_sframes_t route_transfer(struct snd_pcm_plugin *plugin,
+ return -ENXIO;
+ if (frames == 0)
+ return 0;
++ if (frames > dst_channels[0].frames)
++ frames = dst_channels[0].frames;
+
+ nsrcs = plugin->src_format.channels;
+ ndsts = plugin->dst_format.channels;
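The three OSS plugin hunks (linear, mulaw, route) are one and the same
overflow fix: each transfer callback writes into dst_channels[], and when an
earlier conversion stage leaves the destination smaller than the source, an
unclamped frame count overruns the destination buffer. The shared guard,
written as a helper for clarity (the helper name is illustrative):

static snd_pcm_sframes_t clamp_to_dst(snd_pcm_sframes_t frames,
                                      const struct snd_pcm_plugin_channel *dst)
{
        /* dst[0].frames is the destination capacity in frames */
        return frames > dst[0].frames ? dst[0].frames : frames;
}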
+diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
+index d80041ea4e01..2236b5e0c1f2 100644
+--- a/sound/core/pcm_lib.c
++++ b/sound/core/pcm_lib.c
+@@ -1782,11 +1782,14 @@ void snd_pcm_period_elapsed(struct snd_pcm_substream *substream)
+ struct snd_pcm_runtime *runtime;
+ unsigned long flags;
+
+- if (PCM_RUNTIME_CHECK(substream))
++ if (snd_BUG_ON(!substream))
+ return;
+- runtime = substream->runtime;
+
+ snd_pcm_stream_lock_irqsave(substream, flags);
++ if (PCM_RUNTIME_CHECK(substream))
++ goto _unlock;
++ runtime = substream->runtime;
++
+ if (!snd_pcm_running(substream) ||
+ snd_pcm_update_hw_ptr0(substream, 1) < 0)
+ goto _end;
+@@ -1797,6 +1800,7 @@ void snd_pcm_period_elapsed(struct snd_pcm_substream *substream)
+ #endif
+ _end:
+ kill_fasync(&runtime->fasync, SIGIO, POLL_IN);
++ _unlock:
+ snd_pcm_stream_unlock_irqrestore(substream, flags);
+ }
+ EXPORT_SYMBOL(snd_pcm_period_elapsed);
+diff --git a/sound/hda/hdac_stream.c b/sound/hda/hdac_stream.c
+index 55d53b89ac21..581706acf486 100644
+--- a/sound/hda/hdac_stream.c
++++ b/sound/hda/hdac_stream.c
+@@ -96,12 +96,14 @@ void snd_hdac_stream_start(struct hdac_stream *azx_dev, bool fresh_start)
+ 1 << azx_dev->index,
+ 1 << azx_dev->index);
+ /* set stripe control */
+- if (azx_dev->substream)
+- stripe_ctl = snd_hdac_get_stream_stripe_ctl(bus, azx_dev->substream);
+- else
+- stripe_ctl = 0;
+- snd_hdac_stream_updateb(azx_dev, SD_CTL_3B, SD_CTL_STRIPE_MASK,
+- stripe_ctl);
++ if (azx_dev->stripe) {
++ if (azx_dev->substream)
++ stripe_ctl = snd_hdac_get_stream_stripe_ctl(bus, azx_dev->substream);
++ else
++ stripe_ctl = 0;
++ snd_hdac_stream_updateb(azx_dev, SD_CTL_3B, SD_CTL_STRIPE_MASK,
++ stripe_ctl);
++ }
+ /* set DMA start and interrupt mask */
+ snd_hdac_stream_updateb(azx_dev, SD_CTL,
+ 0, SD_CTL_DMA_START | SD_INT_MASK);
+@@ -118,7 +120,10 @@ void snd_hdac_stream_clear(struct hdac_stream *azx_dev)
+ snd_hdac_stream_updateb(azx_dev, SD_CTL,
+ SD_CTL_DMA_START | SD_INT_MASK, 0);
+ snd_hdac_stream_writeb(azx_dev, SD_STS, SD_INT_MASK); /* to be sure */
+- snd_hdac_stream_updateb(azx_dev, SD_CTL_3B, SD_CTL_STRIPE_MASK, 0);
++ if (azx_dev->stripe) {
++ snd_hdac_stream_updateb(azx_dev, SD_CTL_3B, SD_CTL_STRIPE_MASK, 0);
++ azx_dev->stripe = 0;
++ }
+ azx_dev->running = false;
+ }
+ EXPORT_SYMBOL_GPL(snd_hdac_stream_clear);
+diff --git a/sound/pci/hda/hda_bind.c b/sound/pci/hda/hda_bind.c
+index 8272b50b8349..6a8564566375 100644
+--- a/sound/pci/hda/hda_bind.c
++++ b/sound/pci/hda/hda_bind.c
+@@ -43,6 +43,10 @@ static void hda_codec_unsol_event(struct hdac_device *dev, unsigned int ev)
+ {
+ struct hda_codec *codec = container_of(dev, struct hda_codec, core);
+
++ /* ignore unsol events during shutdown */
++ if (codec->bus->shutdown)
++ return;
++
+ if (codec->patch_ops.unsol_event)
+ codec->patch_ops.unsol_event(codec, ev);
+ }
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index e1791d01ccc0..c2740366815d 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -1383,8 +1383,11 @@ static int azx_free(struct azx *chip)
+ static int azx_dev_disconnect(struct snd_device *device)
+ {
+ struct azx *chip = device->device_data;
++ struct hdac_bus *bus = azx_bus(chip);
+
+ chip->bus.shutdown = 1;
++ cancel_work_sync(&bus->unsol_work);
++
+ return 0;
+ }
+
+@@ -2428,6 +2431,9 @@ static const struct pci_device_id azx_ids[] = {
+ /* CometLake-H */
+ { PCI_DEVICE(0x8086, 0x06C8),
+ .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
++ /* CometLake-S */
++ { PCI_DEVICE(0x8086, 0xa3f0),
++ .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+ /* Icelake */
+ { PCI_DEVICE(0x8086, 0x34c8),
+ .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 968d3caab6ac..90aa0f400a57 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -910,6 +910,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ SND_PCI_QUIRK(0x103c, 0x837f, "HP ProBook 470 G5", CXT_FIXUP_MUTE_LED_GPIO),
+ SND_PCI_QUIRK(0x103c, 0x8299, "HP 800 G3 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x103c, 0x8402, "HP ProBook 645 G4", CXT_FIXUP_MUTE_LED_GPIO),
+ SND_PCI_QUIRK(0x103c, 0x8455, "HP Z2 G4", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x8456, "HP Z2 G4 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x8457, "HP Z2 G4 mini", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index ff99f5feaace..22de30af529e 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -31,6 +31,7 @@
+ #include <sound/hda_codec.h>
+ #include "hda_local.h"
+ #include "hda_jack.h"
++#include "hda_controller.h"
+
+ static bool static_hdmi_pcm;
+ module_param(static_hdmi_pcm, bool, 0644);
+@@ -45,10 +46,12 @@ MODULE_PARM_DESC(static_hdmi_pcm, "Don't restrict PCM parameters per ELD info");
+ ((codec)->core.vendor_id == 0x80862800))
+ #define is_cannonlake(codec) ((codec)->core.vendor_id == 0x8086280c)
+ #define is_icelake(codec) ((codec)->core.vendor_id == 0x8086280f)
++#define is_tigerlake(codec) ((codec)->core.vendor_id == 0x80862812)
+ #define is_haswell_plus(codec) (is_haswell(codec) || is_broadwell(codec) \
+ || is_skylake(codec) || is_broxton(codec) \
+ || is_kabylake(codec) || is_geminilake(codec) \
+- || is_cannonlake(codec) || is_icelake(codec))
++ || is_cannonlake(codec) || is_icelake(codec) \
++ || is_tigerlake(codec))
+ #define is_valleyview(codec) ((codec)->core.vendor_id == 0x80862882)
+ #define is_cherryview(codec) ((codec)->core.vendor_id == 0x80862883)
+ #define is_valleyview_plus(codec) (is_valleyview(codec) || is_cherryview(codec))
+@@ -1226,6 +1229,10 @@ static int hdmi_pcm_open(struct hda_pcm_stream *hinfo,
+ per_pin->cvt_nid = per_cvt->cvt_nid;
+ hinfo->nid = per_cvt->cvt_nid;
+
++ /* flip stripe flag for the assigned stream if supported */
++ if (get_wcaps(codec, per_cvt->cvt_nid) & AC_WCAP_STRIPE)
++ azx_stream(get_azx_dev(substream))->stripe = 1;
++
+ snd_hda_set_dev_select(codec, per_pin->pin_nid, per_pin->dev_id);
+ snd_hda_codec_write_cache(codec, per_pin->pin_nid, 0,
+ AC_VERB_SET_CONNECT_SEL,
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index d4daa3c937ba..4b23e374d367 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -367,9 +367,7 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ case 0x10ec0215:
+ case 0x10ec0233:
+ case 0x10ec0235:
+- case 0x10ec0236:
+ case 0x10ec0255:
+- case 0x10ec0256:
+ case 0x10ec0257:
+ case 0x10ec0282:
+ case 0x10ec0283:
+@@ -381,6 +379,11 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ case 0x10ec0300:
+ alc_update_coef_idx(codec, 0x10, 1<<9, 0);
+ break;
++ case 0x10ec0236:
++ case 0x10ec0256:
++ alc_write_coef_idx(codec, 0x36, 0x5757);
++ alc_update_coef_idx(codec, 0x10, 1<<9, 0);
++ break;
+ case 0x10ec0275:
+ alc_update_coef_idx(codec, 0xe, 0, 1<<0);
+ break;
+@@ -5878,6 +5881,7 @@ enum {
+ ALC299_FIXUP_PREDATOR_SPK,
+ ALC294_FIXUP_ASUS_INTSPK_HEADSET_MIC,
+ ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE,
++ ALC294_FIXUP_ASUS_INTSPK_GPIO,
+ };
+
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -6953,6 +6957,13 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC256_FIXUP_ASUS_HEADSET_MODE
+ },
++ [ALC294_FIXUP_ASUS_INTSPK_GPIO] = {
++ .type = HDA_FIXUP_FUNC,
++ /* The GPIO must be pulled to initialize the AMP */
++ .v.func = alc_fixup_gpio4,
++ .chained = true,
++ .chain_id = ALC294_FIXUP_ASUS_INTSPK_HEADSET_MIC
++ },
+ };
+
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -7112,7 +7123,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
+ SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
+ SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC),
+- SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_INTSPK_HEADSET_MIC),
++ SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_INTSPK_GPIO),
+ SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
+ SND_PCI_QUIRK(0x1043, 0x1a30, "ASUS X705UD", ALC256_FIXUP_ASUS_MIC),
+@@ -7219,6 +7230,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
+ SND_PCI_QUIRK(0x19e5, 0x3204, "Huawei MACH-WX9", ALC256_FIXUP_HUAWEI_MACH_WX9_PINS),
+ SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */
++ SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ SND_PCI_QUIRK(0x10ec, 0x118c, "Medion EE4254 MD62100", ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE),
+
+ #if 0
+diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
+index c14a1cdad80c..b0de5684ebd6 100644
+--- a/tools/perf/builtin-script.c
++++ b/tools/perf/builtin-script.c
+@@ -1075,7 +1075,7 @@ static int perf_sample__fprintf_brstackinsn(struct perf_sample *sample,
+ insn++;
+ }
+ }
+- if (off != (unsigned)len)
++ if (off != end - start)
+ printed += fprintf(fp, "\tmismatch of LBR data and executable\n");
+ }
+
+diff --git a/tools/perf/scripts/python/exported-sql-viewer.py b/tools/perf/scripts/python/exported-sql-viewer.py
+index 61b3911d91e6..4b28c9d08d5a 100755
+--- a/tools/perf/scripts/python/exported-sql-viewer.py
++++ b/tools/perf/scripts/python/exported-sql-viewer.py
+@@ -625,7 +625,7 @@ class CallGraphRootItem(CallGraphLevelItemBase):
+ self.query_done = True
+ if_has_calls = ""
+ if IsSelectable(glb.db, "comms", columns = "has_calls"):
+- if_has_calls = " WHERE has_calls = TRUE"
++ if_has_calls = " WHERE has_calls = " + glb.dbref.TRUE
+ query = QSqlQuery(glb.db)
+ QueryExec(query, "SELECT id, comm FROM comms" + if_has_calls)
+ while query.next():
+@@ -905,7 +905,7 @@ class CallTreeRootItem(CallGraphLevelItemBase):
+ self.query_done = True
+ if_has_calls = ""
+ if IsSelectable(glb.db, "comms", columns = "has_calls"):
+- if_has_calls = " WHERE has_calls = TRUE"
++ if_has_calls = " WHERE has_calls = " + glb.dbref.TRUE
+ query = QSqlQuery(glb.db)
+ QueryExec(query, "SELECT id, comm FROM comms" + if_has_calls)
+ while query.next():
+@@ -3509,6 +3509,12 @@ class DBRef():
+ def __init__(self, is_sqlite3, dbname):
+ self.is_sqlite3 = is_sqlite3
+ self.dbname = dbname
++ self.TRUE = "TRUE"
++ self.FALSE = "FALSE"
++ # SQLite prior to version 3.23 does not support TRUE and FALSE
++ if self.is_sqlite3:
++ self.TRUE = "1"
++ self.FALSE = "0"
+
+ def Open(self, connection_name):
+ dbname = self.dbname
+diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
+index 1779923d7a7b..3e5474110dfd 100644
+--- a/tools/testing/selftests/Makefile
++++ b/tools/testing/selftests/Makefile
+@@ -203,7 +203,7 @@ ifdef INSTALL_PATH
+ @# included in the generated runlist.
+ for TARGET in $(TARGETS); do \
+ BUILD_TARGET=$$BUILD/$$TARGET; \
+- [ ! -d $$INSTALL_PATH/$$TARGET ] && echo "Skipping non-existent dir: $$TARGET" && continue; \
++ [ ! -d $(INSTALL_PATH)/$$TARGET ] && echo "Skipping non-existent dir: $$TARGET" && continue; \
+ echo "[ -w /dev/kmsg ] && echo \"kselftest: Running tests in $$TARGET\" >> /dev/kmsg" >> $(ALL_SCRIPT); \
+ echo "cd $$TARGET" >> $(ALL_SCRIPT); \
+ echo -n "run_many" >> $(ALL_SCRIPT); \
+diff --git a/tools/testing/selftests/kvm/lib/assert.c b/tools/testing/selftests/kvm/lib/assert.c
+index 4911fc77d0f6..d1cf9f6e0e6b 100644
+--- a/tools/testing/selftests/kvm/lib/assert.c
++++ b/tools/testing/selftests/kvm/lib/assert.c
+@@ -55,7 +55,7 @@ static void test_dump_stack(void)
+ #pragma GCC diagnostic pop
+ }
+
+-static pid_t gettid(void)
++static pid_t _gettid(void)
+ {
+ return syscall(SYS_gettid);
+ }
+@@ -72,7 +72,7 @@ test_assert(bool exp, const char *exp_str,
+ fprintf(stderr, "==== Test Assertion Failure ====\n"
+ " %s:%u: %s\n"
+ " pid=%d tid=%d - %s\n",
+- file, line, exp_str, getpid(), gettid(),
++ file, line, exp_str, getpid(), _gettid(),
+ strerror(errno));
+ test_dump_stack();
+ if (fmt) {
+diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c
+index a4ad431c92a9..e821fe242dbc 100644
+--- a/virt/kvm/arm/vgic/vgic-v3.c
++++ b/virt/kvm/arm/vgic/vgic-v3.c
+@@ -363,8 +363,8 @@ retry:
+ int vgic_v3_save_pending_tables(struct kvm *kvm)
+ {
+ struct vgic_dist *dist = &kvm->arch.vgic;
+- int last_byte_offset = -1;
+ struct vgic_irq *irq;
++ gpa_t last_ptr = ~(gpa_t)0;
+ int ret;
+ u8 val;
+
+@@ -384,11 +384,11 @@ int vgic_v3_save_pending_tables(struct kvm *kvm)
+ bit_nr = irq->intid % BITS_PER_BYTE;
+ ptr = pendbase + byte_offset;
+
+- if (byte_offset != last_byte_offset) {
++ if (ptr != last_ptr) {
+ ret = kvm_read_guest_lock(kvm, ptr, &val, 1);
+ if (ret)
+ return ret;
+- last_byte_offset = byte_offset;
++ last_ptr = ptr;
+ }
+
+ stored = val & (1U << bit_nr);

* [gentoo-commits] proj/linux-patches:5.3 commit in: /
@ 2019-12-17 21:57 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2019-12-17 21:57 UTC (permalink / raw
To: gentoo-commits
commit: cfc015bf7c758b2b2f1a6bbe37417d4384673b36
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Dec 17 21:56:43 2019 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Dec 17 21:56:43 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=cfc015bf
Linux patch 5.3.17
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1016_linux-5.3.17.patch | 6754 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 6758 insertions(+)
diff --git a/0000_README b/0000_README
index 0825437..e723ae5 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch: 1015_linux-5.3.16.patch
From: http://www.kernel.org
Desc: Linux 5.3.16
+Patch: 1016_linux-5.3.17.patch
+From: http://www.kernel.org
+Desc: Linux 5.3.17
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1016_linux-5.3.17.patch b/1016_linux-5.3.17.patch
new file mode 100644
index 0000000..86d638f
--- /dev/null
+++ b/1016_linux-5.3.17.patch
@@ -0,0 +1,6754 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index c4894b716fbe..f1e54f44c0e8 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -5066,13 +5066,13 @@
+ Flags is a set of characters, each corresponding
+ to a common usb-storage quirk flag as follows:
+ a = SANE_SENSE (collect more than 18 bytes
+- of sense data);
++ of sense data, not on uas);
+ b = BAD_SENSE (don't collect more than 18
+- bytes of sense data);
++ bytes of sense data, not on uas);
+ c = FIX_CAPACITY (decrease the reported
+ device capacity by one sector);
+ d = NO_READ_DISC_INFO (don't use
+- READ_DISC_INFO command);
++ READ_DISC_INFO command, not on uas);
+ e = NO_READ_CAPACITY_16 (don't use
+ READ_CAPACITY_16 command);
+ f = NO_REPORT_OPCODES (don't use report opcodes
+@@ -5087,17 +5087,18 @@
+ j = NO_REPORT_LUNS (don't use report luns
+ command, uas only);
+ l = NOT_LOCKABLE (don't try to lock and
+- unlock ejectable media);
++ unlock ejectable media, not on uas);
+ m = MAX_SECTORS_64 (don't transfer more
+- than 64 sectors = 32 KB at a time);
++ than 64 sectors = 32 KB at a time,
++ not on uas);
+ n = INITIAL_READ10 (force a retry of the
+- initial READ(10) command);
++ initial READ(10) command, not on uas);
+ o = CAPACITY_OK (accept the capacity
+- reported by the device);
++ reported by the device, not on uas);
+ p = WRITE_CACHE (the device cache is ON
+- by default);
++ by default, not on uas);
+ r = IGNORE_RESIDUE (the device reports
+- bogus residue values);
++ bogus residue values, not on uas);
+ s = SINGLE_LUN (the device has only one
+ Logical Unit);
+ t = NO_ATA_1X (don't allow ATA(12) and ATA(16)
+@@ -5106,7 +5107,8 @@
+ w = NO_WP_DETECT (don't test whether the
+ medium is write-protected).
+ y = ALWAYS_SYNC (issue a SYNCHRONIZE_CACHE
+- even if the device claims no cache)
++ even if the device claims no cache,
++ not on uas)
+ Example: quirks=0419:aaf5:rl,0421:0433:rc
+
+ user_debug= [KNL,ARM]
+diff --git a/Makefile b/Makefile
+index ced7342b61ff..9cce8d426cb8 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 3
+-SUBLEVEL = 16
++SUBLEVEL = 17
+ EXTRAVERSION =
+ NAME = Bobtail Squid
+
+diff --git a/arch/arm/boot/dts/omap3-pandora-common.dtsi b/arch/arm/boot/dts/omap3-pandora-common.dtsi
+index ec5891718ae6..150d5be42d27 100644
+--- a/arch/arm/boot/dts/omap3-pandora-common.dtsi
++++ b/arch/arm/boot/dts/omap3-pandora-common.dtsi
+@@ -226,6 +226,17 @@
+ gpio = <&gpio6 4 GPIO_ACTIVE_HIGH>; /* GPIO_164 */
+ };
+
++ /* wl1251 wifi+bt module */
++ wlan_en: fixed-regulator-wg7210_en {
++ compatible = "regulator-fixed";
++ regulator-name = "vwlan";
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <1800000>;
++ startup-delay-us = <50000>;
++ enable-active-high;
++ gpio = <&gpio1 23 GPIO_ACTIVE_HIGH>;
++ };
++
+ /* wg7210 (wifi+bt module) 32k clock buffer */
+ wg7210_32k: fixed-regulator-wg7210_32k {
+ compatible = "regulator-fixed";
+@@ -522,9 +533,30 @@
+ /*wp-gpios = <&gpio4 31 GPIO_ACTIVE_HIGH>;*/ /* GPIO_127 */
+ };
+
+-/* mmc3 is probed using pdata-quirks to pass wl1251 card data */
+ &mmc3 {
+- status = "disabled";
++ vmmc-supply = <&wlan_en>;
++
++ bus-width = <4>;
++ non-removable;
++ ti,non-removable;
++ cap-power-off-card;
++
++ pinctrl-names = "default";
++ pinctrl-0 = <&mmc3_pins>;
++
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ wlan: wifi@1 {
++ compatible = "ti,wl1251";
++
++ reg = <1>;
++
++ interrupt-parent = <&gpio1>;
++ interrupts = <21 IRQ_TYPE_LEVEL_HIGH>; /* GPIO_21 */
++
++ ti,wl1251-has-eeprom;
++ };
+ };
+
+ /* bluetooth*/
+diff --git a/arch/arm/boot/dts/omap3-tao3530.dtsi b/arch/arm/boot/dts/omap3-tao3530.dtsi
+index a7a04d78deeb..f24e2326cfa7 100644
+--- a/arch/arm/boot/dts/omap3-tao3530.dtsi
++++ b/arch/arm/boot/dts/omap3-tao3530.dtsi
+@@ -222,7 +222,7 @@
+ pinctrl-0 = <&mmc1_pins>;
+ vmmc-supply = <&vmmc1>;
+ vqmmc-supply = <&vsim>;
+- cd-gpios = <&twl_gpio 0 GPIO_ACTIVE_HIGH>;
++ cd-gpios = <&twl_gpio 0 GPIO_ACTIVE_LOW>;
+ bus-width = <8>;
+ };
+
+diff --git a/arch/arm/mach-omap2/pdata-quirks.c b/arch/arm/mach-omap2/pdata-quirks.c
+index 6c6f8fce854e..da21c589dbdf 100644
+--- a/arch/arm/mach-omap2/pdata-quirks.c
++++ b/arch/arm/mach-omap2/pdata-quirks.c
+@@ -7,7 +7,6 @@
+ #include <linux/clk.h>
+ #include <linux/davinci_emac.h>
+ #include <linux/gpio.h>
+-#include <linux/gpio/machine.h>
+ #include <linux/init.h>
+ #include <linux/kernel.h>
+ #include <linux/of_platform.h>
+@@ -304,118 +303,15 @@ static void __init omap3_logicpd_torpedo_init(void)
+ }
+
+ /* omap3pandora legacy devices */
+-#define PANDORA_WIFI_IRQ_GPIO 21
+-#define PANDORA_WIFI_NRESET_GPIO 23
+
+ static struct platform_device pandora_backlight = {
+ .name = "pandora-backlight",
+ .id = -1,
+ };
+
+-static struct regulator_consumer_supply pandora_vmmc3_supply[] = {
+- REGULATOR_SUPPLY("vmmc", "omap_hsmmc.2"),
+-};
+-
+-static struct regulator_init_data pandora_vmmc3 = {
+- .constraints = {
+- .valid_ops_mask = REGULATOR_CHANGE_STATUS,
+- },
+- .num_consumer_supplies = ARRAY_SIZE(pandora_vmmc3_supply),
+- .consumer_supplies = pandora_vmmc3_supply,
+-};
+-
+-static struct fixed_voltage_config pandora_vwlan = {
+- .supply_name = "vwlan",
+- .microvolts = 1800000, /* 1.8V */
+- .startup_delay = 50000, /* 50ms */
+- .init_data = &pandora_vmmc3,
+-};
+-
+-static struct platform_device pandora_vwlan_device = {
+- .name = "reg-fixed-voltage",
+- .id = 1,
+- .dev = {
+- .platform_data = &pandora_vwlan,
+- },
+-};
+-
+-static struct gpiod_lookup_table pandora_vwlan_gpiod_table = {
+- .dev_id = "reg-fixed-voltage.1",
+- .table = {
+- /*
+- * As this is a low GPIO number it should be at the first
+- * GPIO bank.
+- */
+- GPIO_LOOKUP("gpio-0-31", PANDORA_WIFI_NRESET_GPIO,
+- NULL, GPIO_ACTIVE_HIGH),
+- { },
+- },
+-};
+-
+-static void pandora_wl1251_init_card(struct mmc_card *card)
+-{
+- /*
+- * We have TI wl1251 attached to MMC3. Pass this information to
+- * SDIO core because it can't be probed by normal methods.
+- */
+- if (card->type == MMC_TYPE_SDIO || card->type == MMC_TYPE_SD_COMBO) {
+- card->quirks |= MMC_QUIRK_NONSTD_SDIO;
+- card->cccr.wide_bus = 1;
+- card->cis.vendor = 0x104c;
+- card->cis.device = 0x9066;
+- card->cis.blksize = 512;
+- card->cis.max_dtr = 24000000;
+- card->ocr = 0x80;
+- }
+-}
+-
+-static struct omap2_hsmmc_info pandora_mmc3[] = {
+- {
+- .mmc = 3,
+- .caps = MMC_CAP_4_BIT_DATA | MMC_CAP_POWER_OFF_CARD,
+- .init_card = pandora_wl1251_init_card,
+- },
+- {} /* Terminator */
+-};
+-
+-static void __init pandora_wl1251_init(void)
+-{
+- struct wl1251_platform_data pandora_wl1251_pdata;
+- int ret;
+-
+- memset(&pandora_wl1251_pdata, 0, sizeof(pandora_wl1251_pdata));
+-
+- pandora_wl1251_pdata.power_gpio = -1;
+-
+- ret = gpio_request_one(PANDORA_WIFI_IRQ_GPIO, GPIOF_IN, "wl1251 irq");
+- if (ret < 0)
+- goto fail;
+-
+- pandora_wl1251_pdata.irq = gpio_to_irq(PANDORA_WIFI_IRQ_GPIO);
+- if (pandora_wl1251_pdata.irq < 0)
+- goto fail_irq;
+-
+- pandora_wl1251_pdata.use_eeprom = true;
+- ret = wl1251_set_platform_data(&pandora_wl1251_pdata);
+- if (ret < 0)
+- goto fail_irq;
+-
+- return;
+-
+-fail_irq:
+- gpio_free(PANDORA_WIFI_IRQ_GPIO);
+-fail:
+- pr_err("wl1251 board initialisation failed\n");
+-}
+-
+ static void __init omap3_pandora_legacy_init(void)
+ {
+ platform_device_register(&pandora_backlight);
+- gpiod_add_lookup_table(&pandora_vwlan_gpiod_table);
+- platform_device_register(&pandora_vwlan_device);
+- omap_hsmmc_init(pandora_mmc3);
+- omap_hsmmc_late_init(pandora_mmc3);
+- pandora_wl1251_init();
+ }
+ #endif /* CONFIG_ARCH_OMAP3 */
+
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi
+index cd92f546c483..1d362f625a40 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi
+@@ -142,6 +142,15 @@
+ clock-output-names = "ext-osc32k";
+ };
+
++ pmu {
++ compatible = "arm,cortex-a53-pmu";
++ interrupts = <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 119 IRQ_TYPE_LEVEL_HIGH>;
++ interrupt-affinity = <&cpu0>, <&cpu1>, <&cpu2>, <&cpu3>;
++ };
++
+ psci {
+ compatible = "arm,psci-0.2";
+ method = "smc";
+diff --git a/arch/powerpc/include/asm/vdso_datapage.h b/arch/powerpc/include/asm/vdso_datapage.h
+index c61d59ed3b45..2ccb938d8544 100644
+--- a/arch/powerpc/include/asm/vdso_datapage.h
++++ b/arch/powerpc/include/asm/vdso_datapage.h
+@@ -82,6 +82,7 @@ struct vdso_data {
+ __s32 wtom_clock_nsec; /* Wall to monotonic clock nsec */
+ __s64 wtom_clock_sec; /* Wall to monotonic clock sec */
+ struct timespec stamp_xtime; /* xtime as at tb_orig_stamp */
++ __u32 hrtimer_res; /* hrtimer resolution */
+ __u32 syscall_map_64[SYSCALL_MAP_SIZE]; /* map of syscalls */
+ __u32 syscall_map_32[SYSCALL_MAP_SIZE]; /* map of syscalls */
+ };
+@@ -103,6 +104,7 @@ struct vdso_data {
+ __s32 wtom_clock_nsec;
+ struct timespec stamp_xtime; /* xtime as at tb_orig_stamp */
+ __u32 stamp_sec_fraction; /* fractional seconds of stamp_xtime */
++ __u32 hrtimer_res; /* hrtimer resolution */
+ __u32 syscall_map_32[SYSCALL_MAP_SIZE]; /* map of syscalls */
+ __u32 dcache_block_size; /* L1 d-cache block size */
+ __u32 icache_block_size; /* L1 i-cache block size */
+diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
+index 56dfa7a2a6f2..61527f1d4d05 100644
+--- a/arch/powerpc/kernel/Makefile
++++ b/arch/powerpc/kernel/Makefile
+@@ -5,8 +5,8 @@
+
+ CFLAGS_ptrace.o += -DUTS_MACHINE='"$(UTS_MACHINE)"'
+
+-# Disable clang warning for using setjmp without setjmp.h header
+-CFLAGS_crash.o += $(call cc-disable-warning, builtin-requires-header)
++# Avoid clang warnings around longjmp/setjmp declarations
++CFLAGS_crash.o += -ffreestanding
+
+ ifdef CONFIG_PPC64
+ CFLAGS_prom_init.o += $(NO_MINIMAL_TOC)
+diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
+index 4ccb6b3a7fbd..6279053967fd 100644
+--- a/arch/powerpc/kernel/asm-offsets.c
++++ b/arch/powerpc/kernel/asm-offsets.c
+@@ -387,6 +387,7 @@ int main(void)
+ OFFSET(WTOM_CLOCK_NSEC, vdso_data, wtom_clock_nsec);
+ OFFSET(STAMP_XTIME, vdso_data, stamp_xtime);
+ OFFSET(STAMP_SEC_FRAC, vdso_data, stamp_sec_fraction);
++ OFFSET(CLOCK_HRTIMER_RES, vdso_data, hrtimer_res);
+ OFFSET(CFG_ICACHE_BLOCKSZ, vdso_data, icache_block_size);
+ OFFSET(CFG_DCACHE_BLOCKSZ, vdso_data, dcache_block_size);
+ OFFSET(CFG_ICACHE_LOGBLOCKSZ, vdso_data, icache_log_block_size);
+@@ -417,7 +418,6 @@ int main(void)
+ DEFINE(CLOCK_REALTIME_COARSE, CLOCK_REALTIME_COARSE);
+ DEFINE(CLOCK_MONOTONIC_COARSE, CLOCK_MONOTONIC_COARSE);
+ DEFINE(NSEC_PER_SEC, NSEC_PER_SEC);
+- DEFINE(CLOCK_REALTIME_RES, MONOTONIC_RES_NSEC);
+
+ #ifdef CONFIG_BUG
+ DEFINE(BUG_ENTRY_SIZE, sizeof(struct bug_entry));
+diff --git a/arch/powerpc/kernel/misc_64.S b/arch/powerpc/kernel/misc_64.S
+index b55a7b4cb543..9bc0aa9aeb65 100644
+--- a/arch/powerpc/kernel/misc_64.S
++++ b/arch/powerpc/kernel/misc_64.S
+@@ -82,7 +82,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE)
+ subf r8,r6,r4 /* compute length */
+ add r8,r8,r5 /* ensure we get enough */
+ lwz r9,DCACHEL1LOGBLOCKSIZE(r10) /* Get log-2 of cache block size */
+- srw. r8,r8,r9 /* compute line count */
++ srd. r8,r8,r9 /* compute line count */
+ beqlr /* nothing to do? */
+ mtctr r8
+ 1: dcbst 0,r6
+@@ -98,7 +98,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE)
+ subf r8,r6,r4 /* compute length */
+ add r8,r8,r5
+ lwz r9,ICACHEL1LOGBLOCKSIZE(r10) /* Get log-2 of Icache block size */
+- srw. r8,r8,r9 /* compute line count */
++ srd. r8,r8,r9 /* compute line count */
+ beqlr /* nothing to do? */
+ mtctr r8
+ 2: icbi 0,r6
+diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
+index 694522308cd5..619447b1b797 100644
+--- a/arch/powerpc/kernel/time.c
++++ b/arch/powerpc/kernel/time.c
+@@ -959,6 +959,7 @@ void update_vsyscall(struct timekeeper *tk)
+ vdso_data->wtom_clock_nsec = tk->wall_to_monotonic.tv_nsec;
+ vdso_data->stamp_xtime = xt;
+ vdso_data->stamp_sec_fraction = frac_sec;
++ vdso_data->hrtimer_res = hrtimer_resolution;
+ smp_wmb();
+ ++(vdso_data->tb_update_count);
+ }
+diff --git a/arch/powerpc/kernel/vdso32/gettimeofday.S b/arch/powerpc/kernel/vdso32/gettimeofday.S
+index becd9f8767ed..a967e795b96d 100644
+--- a/arch/powerpc/kernel/vdso32/gettimeofday.S
++++ b/arch/powerpc/kernel/vdso32/gettimeofday.S
+@@ -156,12 +156,15 @@ V_FUNCTION_BEGIN(__kernel_clock_getres)
+ cror cr0*4+eq,cr0*4+eq,cr1*4+eq
+ bne cr0,99f
+
++ mflr r12
++ .cfi_register lr,r12
++ bl __get_datapage@local /* get data page */
++ lwz r5, CLOCK_HRTIMER_RES(r3)
++ mtlr r12
+ li r3,0
+ cmpli cr0,r4,0
+ crclr cr0*4+so
+ beqlr
+- lis r5,CLOCK_REALTIME_RES@h
+- ori r5,r5,CLOCK_REALTIME_RES@l
+ stw r3,TSPC32_TV_SEC(r4)
+ stw r5,TSPC32_TV_NSEC(r4)
+ blr
+diff --git a/arch/powerpc/kernel/vdso64/cacheflush.S b/arch/powerpc/kernel/vdso64/cacheflush.S
+index 3f92561a64c4..526f5ba2593e 100644
+--- a/arch/powerpc/kernel/vdso64/cacheflush.S
++++ b/arch/powerpc/kernel/vdso64/cacheflush.S
+@@ -35,7 +35,7 @@ V_FUNCTION_BEGIN(__kernel_sync_dicache)
+ subf r8,r6,r4 /* compute length */
+ add r8,r8,r5 /* ensure we get enough */
+ lwz r9,CFG_DCACHE_LOGBLOCKSZ(r10)
+- srw. r8,r8,r9 /* compute line count */
++ srd. r8,r8,r9 /* compute line count */
+ crclr cr0*4+so
+ beqlr /* nothing to do? */
+ mtctr r8
+@@ -52,7 +52,7 @@ V_FUNCTION_BEGIN(__kernel_sync_dicache)
+ subf r8,r6,r4 /* compute length */
+ add r8,r8,r5
+ lwz r9,CFG_ICACHE_LOGBLOCKSZ(r10)
+- srw. r8,r8,r9 /* compute line count */
++ srd. r8,r8,r9 /* compute line count */
+ crclr cr0*4+so
+ beqlr /* nothing to do? */
+ mtctr r8
+diff --git a/arch/powerpc/kernel/vdso64/gettimeofday.S b/arch/powerpc/kernel/vdso64/gettimeofday.S
+index 07bfe33fe874..81757f06bbd7 100644
+--- a/arch/powerpc/kernel/vdso64/gettimeofday.S
++++ b/arch/powerpc/kernel/vdso64/gettimeofday.S
+@@ -186,12 +186,15 @@ V_FUNCTION_BEGIN(__kernel_clock_getres)
+ cror cr0*4+eq,cr0*4+eq,cr1*4+eq
+ bne cr0,99f
+
++ mflr r12
++ .cfi_register lr,r12
++ bl V_LOCAL_FUNC(__get_datapage)
++ lwz r5, CLOCK_HRTIMER_RES(r3)
++ mtlr r12
+ li r3,0
+ cmpldi cr0,r4,0
+ crclr cr0*4+so
+ beqlr
+- lis r5,CLOCK_REALTIME_RES@h
+- ori r5,r5,CLOCK_REALTIME_RES@l
+ std r3,TSPC64_TV_SEC(r4)
+ std r5,TSPC64_TV_NSEC(r4)
+ blr
+diff --git a/arch/powerpc/platforms/powernv/opal-imc.c b/arch/powerpc/platforms/powernv/opal-imc.c
+index e04b20625cb9..7ccc5c85c74e 100644
+--- a/arch/powerpc/platforms/powernv/opal-imc.c
++++ b/arch/powerpc/platforms/powernv/opal-imc.c
+@@ -285,7 +285,14 @@ static int opal_imc_counters_probe(struct platform_device *pdev)
+ domain = IMC_DOMAIN_THREAD;
+ break;
+ case IMC_TYPE_TRACE:
+- domain = IMC_DOMAIN_TRACE;
++ /*
++ * FIXME. Using trace_imc events to monitor application
++ * or KVM thread performance can cause a checkstop
++ * (system crash).
++ * Disable it for now.
++ */
++ pr_info_once("IMC: disabling trace_imc PMU\n");
++ domain = -1;
+ break;
+ default:
+ pr_warn("IMC Unknown Device type \n");
+diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
+index be86fce1a84e..723db3ceeaa8 100644
+--- a/arch/powerpc/sysdev/xive/common.c
++++ b/arch/powerpc/sysdev/xive/common.c
+@@ -1000,6 +1000,15 @@ static int xive_irq_alloc_data(unsigned int virq, irq_hw_number_t hw)
+ xd->target = XIVE_INVALID_TARGET;
+ irq_set_handler_data(virq, xd);
+
++ /*
++ * Turn OFF by default the interrupt being mapped. A side
++ * effect of this check is the mapping the ESB page of the
++ * interrupt in the Linux address space. This prevents page
++ * fault issues in the crash handler which masks all
++ * interrupts.
++ */
++ xive_esb_read(xd, XIVE_ESB_SET_PQ_01);
++
+ return 0;
+ }
+
+diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
+index 8ef9cf4ebb1c..238207cd6bc3 100644
+--- a/arch/powerpc/sysdev/xive/spapr.c
++++ b/arch/powerpc/sysdev/xive/spapr.c
+@@ -356,20 +356,28 @@ static int xive_spapr_populate_irq_data(u32 hw_irq, struct xive_irq_data *data)
+ data->esb_shift = esb_shift;
+ data->trig_page = trig_page;
+
++ data->hw_irq = hw_irq;
++
+ /*
+ * No chip-id for the sPAPR backend. This has an impact how we
+ * pick a target. See xive_pick_irq_target().
+ */
+ data->src_chip = XIVE_INVALID_CHIP_ID;
+
++ /*
++ * When the H_INT_ESB flag is set, the H_INT_ESB hcall should
++ * be used for interrupt management. Skip the remapping of the
++ * ESB pages which are not available.
++ */
++ if (data->flags & XIVE_IRQ_FLAG_H_INT_ESB)
++ return 0;
++
+ data->eoi_mmio = ioremap(data->eoi_page, 1u << data->esb_shift);
+ if (!data->eoi_mmio) {
+ pr_err("Failed to map EOI page for irq 0x%x\n", hw_irq);
+ return -ENOMEM;
+ }
+
+- data->hw_irq = hw_irq;
+-
+ /* Full function page supports trigger */
+ if (flags & XIVE_SRC_TRIGGER) {
+ data->trig_mmio = data->eoi_mmio;
+diff --git a/arch/powerpc/xmon/Makefile b/arch/powerpc/xmon/Makefile
+index f142570ad860..c3842dbeb1b7 100644
+--- a/arch/powerpc/xmon/Makefile
++++ b/arch/powerpc/xmon/Makefile
+@@ -1,8 +1,8 @@
+ # SPDX-License-Identifier: GPL-2.0
+ # Makefile for xmon
+
+-# Disable clang warning for using setjmp without setjmp.h header
+-subdir-ccflags-y := $(call cc-disable-warning, builtin-requires-header)
++# Avoid clang warnings around longjmp/setjmp declarations
++subdir-ccflags-y := -ffreestanding
+
+ GCOV_PROFILE := n
+ KCOV_INSTRUMENT := n
+diff --git a/arch/s390/boot/startup.c b/arch/s390/boot/startup.c
+index ceeacbeff600..437ddec24723 100644
+--- a/arch/s390/boot/startup.c
++++ b/arch/s390/boot/startup.c
+@@ -164,6 +164,11 @@ void startup_kernel(void)
+ handle_relocs(__kaslr_offset);
+
+ if (__kaslr_offset) {
++ /*
++ * Save KASLR offset for early dumps, before vmcore_info is set.
++ * Mark as uneven to distinguish from real vmcore_info pointer.
++ */
++ S390_lowcore.vmcore_info = __kaslr_offset | 0x1UL;
+ /* Clear non-relocated kernel */
+ if (IS_ENABLED(CONFIG_KERNEL_UNCOMPRESSED))
+ memset(img, 0, vmlinux.image_size);
+diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
+index 70ac23e50cae..a9ee48b66a01 100644
+--- a/arch/s390/include/asm/pgtable.h
++++ b/arch/s390/include/asm/pgtable.h
+@@ -1172,8 +1172,6 @@ void gmap_pmdp_idte_global(struct mm_struct *mm, unsigned long vmaddr);
+ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, pte_t entry)
+ {
+- if (!MACHINE_HAS_NX)
+- pte_val(entry) &= ~_PAGE_NOEXEC;
+ if (pte_present(entry))
+ pte_val(entry) &= ~_PAGE_UNUSED;
+ if (mm_has_pgste(mm))
+@@ -1190,6 +1188,8 @@ static inline pte_t mk_pte_phys(unsigned long physpage, pgprot_t pgprot)
+ {
+ pte_t __pte;
+ pte_val(__pte) = physpage + pgprot_val(pgprot);
++ if (!MACHINE_HAS_NX)
++ pte_val(__pte) &= ~_PAGE_NOEXEC;
+ return pte_mkyoung(__pte);
+ }
+
+diff --git a/arch/s390/kernel/machine_kexec.c b/arch/s390/kernel/machine_kexec.c
+index 444a19125a81..d402ced7f7c3 100644
+--- a/arch/s390/kernel/machine_kexec.c
++++ b/arch/s390/kernel/machine_kexec.c
+@@ -254,10 +254,10 @@ void arch_crash_save_vmcoreinfo(void)
+ VMCOREINFO_SYMBOL(lowcore_ptr);
+ VMCOREINFO_SYMBOL(high_memory);
+ VMCOREINFO_LENGTH(lowcore_ptr, NR_CPUS);
+- mem_assign_absolute(S390_lowcore.vmcore_info, paddr_vmcoreinfo_note());
+ vmcoreinfo_append_str("SDMA=%lx\n", __sdma);
+ vmcoreinfo_append_str("EDMA=%lx\n", __edma);
+ vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
++ mem_assign_absolute(S390_lowcore.vmcore_info, paddr_vmcoreinfo_note());
+ }
+
+ void machine_shutdown(void)
+diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
+index 44974654cbd0..d95c85780e07 100644
+--- a/arch/s390/kernel/smp.c
++++ b/arch/s390/kernel/smp.c
+@@ -262,10 +262,13 @@ static void pcpu_prepare_secondary(struct pcpu *pcpu, int cpu)
+ lc->spinlock_index = 0;
+ lc->percpu_offset = __per_cpu_offset[cpu];
+ lc->kernel_asce = S390_lowcore.kernel_asce;
++ lc->user_asce = S390_lowcore.kernel_asce;
+ lc->machine_flags = S390_lowcore.machine_flags;
+ lc->user_timer = lc->system_timer =
+ lc->steal_timer = lc->avg_steal_timer = 0;
+ __ctl_store(lc->cregs_save_area, 0, 15);
++ lc->cregs_save_area[1] = lc->kernel_asce;
++ lc->cregs_save_area[7] = lc->vdso_asce;
+ save_access_regs((unsigned int *) lc->access_regs_save_area);
+ memcpy(lc->stfle_fac_list, S390_lowcore.stfle_fac_list,
+ sizeof(lc->stfle_fac_list));
+@@ -816,6 +819,8 @@ static void smp_init_secondary(void)
+
+ S390_lowcore.last_update_clock = get_tod_clock();
+ restore_access_regs(S390_lowcore.access_regs_save_area);
++ set_cpu_flag(CIF_ASCE_PRIMARY);
++ set_cpu_flag(CIF_ASCE_SECONDARY);
+ cpu_init();
+ preempt_disable();
+ init_cpu_timer();
+diff --git a/block/bio.c b/block/bio.c
+index 31d56e7e2ce0..853e2a2ec4d9 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -769,10 +769,12 @@ bool __bio_try_merge_page(struct bio *bio, struct page *page,
+ if (WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED)))
+ return false;
+
+- if (bio->bi_vcnt > 0 && !bio_full(bio, len)) {
++ if (bio->bi_vcnt > 0) {
+ struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
+
+ if (page_is_mergeable(bv, page, len, off, same_page)) {
++ if (bio->bi_iter.bi_size > UINT_MAX - len)
++ return false;
+ bv->bv_len += len;
+ bio->bi_iter.bi_size += len;
+ return true;
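The bio change drops the bio_full() shortcut and instead checks, at the
moment of merging, that growing the 32-bit bi_iter.bi_size by len cannot
wrap. Comparing against the remaining headroom rather than against the sum
is the portable overflow guard; self-contained in plain C:

#include <stdbool.h>
#include <stdint.h>
#include <limits.h>

/* Safe "would size + len overflow?" test for a 32-bit accumulator. */
static bool can_grow(uint32_t size, uint32_t len)
{
        return size <= UINT_MAX - len;  /* never compute size + len first */
}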
+diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
+index d6e1a9bd7131..3a8e653eea77 100644
+--- a/block/blk-mq-sysfs.c
++++ b/block/blk-mq-sysfs.c
+@@ -166,20 +166,25 @@ static ssize_t blk_mq_hw_sysfs_nr_reserved_tags_show(struct blk_mq_hw_ctx *hctx,
+
+ static ssize_t blk_mq_hw_sysfs_cpus_show(struct blk_mq_hw_ctx *hctx, char *page)
+ {
++ const size_t size = PAGE_SIZE - 1;
+ unsigned int i, first = 1;
+- ssize_t ret = 0;
++ int ret = 0, pos = 0;
+
+ for_each_cpu(i, hctx->cpumask) {
+ if (first)
+- ret += sprintf(ret + page, "%u", i);
++ ret = snprintf(pos + page, size - pos, "%u", i);
+ else
+- ret += sprintf(ret + page, ", %u", i);
++ ret = snprintf(pos + page, size - pos, ", %u", i);
++
++ if (ret >= size - pos)
++ break;
+
+ first = 0;
++ pos += ret;
+ }
+
+- ret += sprintf(ret + page, "\n");
+- return ret;
++ ret = snprintf(pos + page, size + 1 - pos, "\n");
++ return pos + ret;
+ }
+
+ static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_nr_tags = {
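The blk-mq sysfs fix replaces unbounded sprintf() appends with snprintf()
against the space remaining in the page, breaking out of the loop as soon as
an append no longer fits; with very large cpumasks the old code wrote past
PAGE_SIZE. A self-contained userspace analogue of the corrected loop:

#include <stdio.h>
#include <stddef.h>

/* Append comma-separated ids into a fixed buffer, stopping when full.
 * snprintf() returns the would-be length, so ret >= space means the
 * output was truncated and nothing more will fit. */
static int format_ids(char *buf, size_t size, const unsigned int *ids, int n)
{
        size_t pos = 0;

        for (int i = 0; i < n; i++) {
                int ret = snprintf(buf + pos, size - pos,
                                   i ? ", %u" : "%u", ids[i]);
                if (ret < 0 || (size_t)ret >= size - pos)
                        break;                  /* out of space */
                pos += ret;
        }
        return (int)pos;
}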
+diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
+index 60bbc5090abe..751ed38f2a10 100644
+--- a/drivers/acpi/acpi_lpss.c
++++ b/drivers/acpi/acpi_lpss.c
+@@ -10,6 +10,7 @@
+ #include <linux/acpi.h>
+ #include <linux/clkdev.h>
+ #include <linux/clk-provider.h>
++#include <linux/dmi.h>
+ #include <linux/err.h>
+ #include <linux/io.h>
+ #include <linux/mutex.h>
+@@ -463,6 +464,18 @@ struct lpss_device_links {
+ const char *consumer_hid;
+ const char *consumer_uid;
+ u32 flags;
++ const struct dmi_system_id *dep_missing_ids;
++};
++
++/* Please keep this list sorted alphabetically by vendor and model */
++static const struct dmi_system_id i2c1_dep_missing_dmi_ids[] = {
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "T200TA"),
++ },
++ },
++ {}
+ };
+
+ /*
+@@ -473,9 +486,17 @@ struct lpss_device_links {
+ * the supplier is not enumerated until after the consumer is probed.
+ */
+ static const struct lpss_device_links lpss_device_links[] = {
++ /* CHT External sdcard slot controller depends on PMIC I2C ctrl */
+ {"808622C1", "7", "80860F14", "3", DL_FLAG_PM_RUNTIME},
++ /* CHT iGPU depends on PMIC I2C controller */
+ {"808622C1", "7", "LNXVIDEO", NULL, DL_FLAG_PM_RUNTIME},
++ /* BYT iGPU depends on the Embedded Controller I2C controller (UID 1) */
++ {"80860F41", "1", "LNXVIDEO", NULL, DL_FLAG_PM_RUNTIME,
++ i2c1_dep_missing_dmi_ids},
++ /* BYT CR iGPU depends on PMIC I2C controller (UID 5 on CR) */
+ {"80860F41", "5", "LNXVIDEO", NULL, DL_FLAG_PM_RUNTIME},
++ /* BYT iGPU depends on PMIC I2C controller (UID 7 on non CR) */
++ {"80860F41", "7", "LNXVIDEO", NULL, DL_FLAG_PM_RUNTIME},
+ };
+
+ static bool hid_uid_match(struct acpi_device *adev,
+@@ -570,7 +591,8 @@ static void acpi_lpss_link_consumer(struct device *dev1,
+ if (!dev2)
+ return;
+
+- if (acpi_lpss_dep(ACPI_COMPANION(dev2), ACPI_HANDLE(dev1)))
++ if ((link->dep_missing_ids && dmi_check_system(link->dep_missing_ids))
++ || acpi_lpss_dep(ACPI_COMPANION(dev2), ACPI_HANDLE(dev1)))
+ device_link_add(dev2, dev1, link->flags);
+
+ put_device(dev2);
+@@ -585,7 +607,8 @@ static void acpi_lpss_link_supplier(struct device *dev1,
+ if (!dev2)
+ return;
+
+- if (acpi_lpss_dep(ACPI_COMPANION(dev1), ACPI_HANDLE(dev2)))
++ if ((link->dep_missing_ids && dmi_check_system(link->dep_missing_ids))
++ || acpi_lpss_dep(ACPI_COMPANION(dev1), ACPI_HANDLE(dev2)))
+ device_link_add(dev1, dev2, link->flags);
+
+ put_device(dev2);
+diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
+index 48bc96d45bab..54002670cb7a 100644
+--- a/drivers/acpi/bus.c
++++ b/drivers/acpi/bus.c
+@@ -153,7 +153,7 @@ int acpi_bus_get_private_data(acpi_handle handle, void **data)
+ {
+ acpi_status status;
+
+- if (!*data)
++ if (!data)
+ return -EINVAL;
+
+ status = acpi_get_data(handle, acpi_bus_private_data_handler, data);
+diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
+index f616b16c1f0b..6e302f225358 100644
+--- a/drivers/acpi/device_pm.c
++++ b/drivers/acpi/device_pm.c
+@@ -1309,9 +1309,19 @@ static void acpi_dev_pm_detach(struct device *dev, bool power_off)
+ */
+ int acpi_dev_pm_attach(struct device *dev, bool power_on)
+ {
++ /*
++ * Skip devices whose ACPI companions match the device IDs below,
++ * because they require special power management handling incompatible
++ * with the generic ACPI PM domain.
++ */
++ static const struct acpi_device_id special_pm_ids[] = {
++ {"PNP0C0B", }, /* Generic ACPI fan */
++ {"INT3404", }, /* Fan */
++ {}
++ };
+ struct acpi_device *adev = ACPI_COMPANION(dev);
+
+- if (!adev)
++ if (!adev || !acpi_match_device_ids(adev, special_pm_ids))
+ return 0;
+
+ /*
+diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
+index 9c0edf2fc0dd..f58da6ef259f 100644
+--- a/drivers/acpi/osl.c
++++ b/drivers/acpi/osl.c
+@@ -360,19 +360,21 @@ void *__ref acpi_os_map_memory(acpi_physical_address phys, acpi_size size)
+ }
+ EXPORT_SYMBOL_GPL(acpi_os_map_memory);
+
+-static void acpi_os_drop_map_ref(struct acpi_ioremap *map)
++/* Must be called with mutex_lock(&acpi_ioremap_lock) */
++static unsigned long acpi_os_drop_map_ref(struct acpi_ioremap *map)
+ {
+- if (!--map->refcount)
++ unsigned long refcount = --map->refcount;
++
++ if (!refcount)
+ list_del_rcu(&map->list);
++ return refcount;
+ }
+
+ static void acpi_os_map_cleanup(struct acpi_ioremap *map)
+ {
+- if (!map->refcount) {
+- synchronize_rcu_expedited();
+- acpi_unmap(map->phys, map->virt);
+- kfree(map);
+- }
++ synchronize_rcu_expedited();
++ acpi_unmap(map->phys, map->virt);
++ kfree(map);
+ }
+
+ /**
+@@ -392,6 +394,7 @@ static void acpi_os_map_cleanup(struct acpi_ioremap *map)
+ void __ref acpi_os_unmap_iomem(void __iomem *virt, acpi_size size)
+ {
+ struct acpi_ioremap *map;
++ unsigned long refcount;
+
+ if (!acpi_permanent_mmap) {
+ __acpi_unmap_table(virt, size);
+@@ -405,10 +408,11 @@ void __ref acpi_os_unmap_iomem(void __iomem *virt, acpi_size size)
+ WARN(true, PREFIX "%s: bad address %p\n", __func__, virt);
+ return;
+ }
+- acpi_os_drop_map_ref(map);
++ refcount = acpi_os_drop_map_ref(map);
+ mutex_unlock(&acpi_ioremap_lock);
+
+- acpi_os_map_cleanup(map);
++ if (!refcount)
++ acpi_os_map_cleanup(map);
+ }
+ EXPORT_SYMBOL_GPL(acpi_os_unmap_iomem);
+
+@@ -443,6 +447,7 @@ void acpi_os_unmap_generic_address(struct acpi_generic_address *gas)
+ {
+ u64 addr;
+ struct acpi_ioremap *map;
++ unsigned long refcount;
+
+ if (gas->space_id != ACPI_ADR_SPACE_SYSTEM_MEMORY)
+ return;
+@@ -458,10 +463,11 @@ void acpi_os_unmap_generic_address(struct acpi_generic_address *gas)
+ mutex_unlock(&acpi_ioremap_lock);
+ return;
+ }
+- acpi_os_drop_map_ref(map);
++ refcount = acpi_os_drop_map_ref(map);
+ mutex_unlock(&acpi_ioremap_lock);
+
+- acpi_os_map_cleanup(map);
++ if (!refcount)
++ acpi_os_map_cleanup(map);
+ }
+ EXPORT_SYMBOL(acpi_os_unmap_generic_address);
+
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 1c5278207153..dbc6aacd07a3 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -3332,7 +3332,7 @@ static void binder_transaction(struct binder_proc *proc,
+ binder_size_t parent_offset;
+ struct binder_fd_array_object *fda =
+ to_binder_fd_array_object(hdr);
+- size_t num_valid = (buffer_offset - off_start_offset) *
++ size_t num_valid = (buffer_offset - off_start_offset) /
+ sizeof(binder_size_t);
+ struct binder_buffer_object *parent =
+ binder_validate_ptr(target_proc, t->buffer,
+@@ -3406,7 +3406,7 @@ static void binder_transaction(struct binder_proc *proc,
+ t->buffer->user_data + sg_buf_offset;
+ sg_buf_offset += ALIGN(bp->length, sizeof(u64));
+
+- num_valid = (buffer_offset - off_start_offset) *
++ num_valid = (buffer_offset - off_start_offset) /
+ sizeof(binder_size_t);
+ ret = binder_fixup_parent(t, thread, bp,
+ off_start_offset,
+diff --git a/drivers/char/hw_random/omap-rng.c b/drivers/char/hw_random/omap-rng.c
+index e9b6ac61fb7f..498825242634 100644
+--- a/drivers/char/hw_random/omap-rng.c
++++ b/drivers/char/hw_random/omap-rng.c
+@@ -66,6 +66,13 @@
+ #define OMAP4_RNG_OUTPUT_SIZE 0x8
+ #define EIP76_RNG_OUTPUT_SIZE 0x10
+
++/*
++ * The EIP76 RNG takes approximately 700us to produce 16 bytes of output
++ * data, as per testing results. To account for the unreliability of
++ * udelay(), we keep the timeout at 1000us.
++ */
++#define RNG_DATA_FILL_TIMEOUT 100
++
+ enum {
+ RNG_OUTPUT_0_REG = 0,
+ RNG_OUTPUT_1_REG,
+@@ -176,7 +183,7 @@ static int omap_rng_do_read(struct hwrng *rng, void *data, size_t max,
+ if (max < priv->pdata->data_size)
+ return 0;
+
+- for (i = 0; i < 20; i++) {
++ for (i = 0; i < RNG_DATA_FILL_TIMEOUT; i++) {
+ present = priv->pdata->data_present(priv);
+ if (present || !wait)
+ break;
+diff --git a/drivers/char/ppdev.c b/drivers/char/ppdev.c
+index f0a8adca1eee..61f9ae2805c7 100644
+--- a/drivers/char/ppdev.c
++++ b/drivers/char/ppdev.c
+@@ -619,20 +619,27 @@ static int pp_do_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ if (copy_from_user(time32, argp, sizeof(time32)))
+ return -EFAULT;
+
++ if ((time32[0] < 0) || (time32[1] < 0))
++ return -EINVAL;
++
+ return pp_set_timeout(pp->pdev, time32[0], time32[1]);
+
+ case PPSETTIME64:
+ if (copy_from_user(time64, argp, sizeof(time64)))
+ return -EFAULT;
+
++ if ((time64[0] < 0) || (time64[1] < 0))
++ return -EINVAL;
++
++ if (IS_ENABLED(CONFIG_SPARC64) && !in_compat_syscall())
++ time64[1] >>= 32;
++
+ return pp_set_timeout(pp->pdev, time64[0], time64[1]);
+
+ case PPGETTIME32:
+ jiffies_to_timespec64(pp->pdev->timeout, &ts);
+ time32[0] = ts.tv_sec;
+ time32[1] = ts.tv_nsec / NSEC_PER_USEC;
+- if ((time32[0] < 0) || (time32[1] < 0))
+- return -EINVAL;
+
+ if (copy_to_user(argp, time32, sizeof(time32)))
+ return -EFAULT;
+@@ -643,8 +650,9 @@ static int pp_do_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ jiffies_to_timespec64(pp->pdev->timeout, &ts);
+ time64[0] = ts.tv_sec;
+ time64[1] = ts.tv_nsec / NSEC_PER_USEC;
+- if ((time64[0] < 0) || (time64[1] < 0))
+- return -EINVAL;
++
++ if (IS_ENABLED(CONFIG_SPARC64) && !in_compat_syscall())
++ time64[1] <<= 32;
+
+ if (copy_to_user(argp, time64, sizeof(time64)))
+ return -EFAULT;
+diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c
+index ba9acae83bff..5817dfe5c5d2 100644
+--- a/drivers/char/tpm/tpm2-cmd.c
++++ b/drivers/char/tpm/tpm2-cmd.c
+@@ -939,6 +939,10 @@ static int tpm2_get_cc_attrs_tbl(struct tpm_chip *chip)
+
+ chip->cc_attrs_tbl = devm_kcalloc(&chip->dev, 4, nr_commands,
+ GFP_KERNEL);
++ if (!chip->cc_attrs_tbl) {
++ rc = -ENOMEM;
++ goto out;
++ }
+
+ rc = tpm_buf_init(&buf, TPM2_ST_NO_SESSIONS, TPM2_CC_GET_CAPABILITY);
+ if (rc)
+diff --git a/drivers/cpufreq/powernv-cpufreq.c b/drivers/cpufreq/powernv-cpufreq.c
+index 6061850e59c9..56f4bc0d209e 100644
+--- a/drivers/cpufreq/powernv-cpufreq.c
++++ b/drivers/cpufreq/powernv-cpufreq.c
+@@ -1041,9 +1041,14 @@ static struct cpufreq_driver powernv_cpufreq_driver = {
+
+ static int init_chip_info(void)
+ {
+- unsigned int chip[256];
++ unsigned int *chip;
+ unsigned int cpu, i;
+ unsigned int prev_chip_id = UINT_MAX;
++ int ret = 0;
++
++ chip = kcalloc(num_possible_cpus(), sizeof(*chip), GFP_KERNEL);
++ if (!chip)
++ return -ENOMEM;
+
+ for_each_possible_cpu(cpu) {
+ unsigned int id = cpu_to_chip_id(cpu);
+@@ -1055,8 +1060,10 @@ static int init_chip_info(void)
+ }
+
+ chips = kcalloc(nr_chips, sizeof(struct chip), GFP_KERNEL);
+- if (!chips)
+- return -ENOMEM;
++ if (!chips) {
++ ret = -ENOMEM;
++ goto free_and_return;
++ }
+
+ for (i = 0; i < nr_chips; i++) {
+ chips[i].id = chip[i];
+@@ -1066,7 +1073,9 @@ static int init_chip_info(void)
+ per_cpu(chip_info, cpu) = &chips[i];
+ }
+
+- return 0;
++free_and_return:
++ kfree(chip);
++ return ret;
+ }
+
+ static inline void clean_chip_info(void)
+diff --git a/drivers/cpuidle/driver.c b/drivers/cpuidle/driver.c
+index dc32f34e68d9..01acd88c4193 100644
+--- a/drivers/cpuidle/driver.c
++++ b/drivers/cpuidle/driver.c
+@@ -62,24 +62,23 @@ static inline void __cpuidle_unset_driver(struct cpuidle_driver *drv)
+ * __cpuidle_set_driver - set per CPU driver variables for the given driver.
+ * @drv: a valid pointer to a struct cpuidle_driver
+ *
+- * For each CPU in the driver's cpumask, unset the registered driver per CPU
+- * to @drv.
+- *
+- * Returns 0 on success, -EBUSY if the CPUs have driver(s) already.
++ * Returns 0 on success, -EBUSY if any CPU in the cpumask has a driver
++ * different from drv already.
+ */
+ static inline int __cpuidle_set_driver(struct cpuidle_driver *drv)
+ {
+ int cpu;
+
+ for_each_cpu(cpu, drv->cpumask) {
++ struct cpuidle_driver *old_drv;
+
+- if (__cpuidle_get_cpu_driver(cpu)) {
+- __cpuidle_unset_driver(drv);
++ old_drv = __cpuidle_get_cpu_driver(cpu);
++ if (old_drv && old_drv != drv)
+ return -EBUSY;
+- }
++ }
+
++ for_each_cpu(cpu, drv->cpumask)
+ per_cpu(cpuidle_drivers, cpu) = drv;
+- }
+
+ return 0;
+ }
+diff --git a/drivers/cpuidle/governors/teo.c b/drivers/cpuidle/governors/teo.c
+index 12d9e6cecf1d..9722afd1179b 100644
+--- a/drivers/cpuidle/governors/teo.c
++++ b/drivers/cpuidle/governors/teo.c
+@@ -241,7 +241,7 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ {
+ struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
+ int latency_req = cpuidle_governor_latency_req(dev->cpu);
+- unsigned int duration_us, count;
++ unsigned int duration_us, hits, misses, early_hits;
+ int max_early_idx, constraint_idx, idx, i;
+ ktime_t delta_tick;
+
+@@ -255,7 +255,9 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ cpu_data->sleep_length_ns = tick_nohz_get_sleep_length(&delta_tick);
+ duration_us = ktime_to_us(cpu_data->sleep_length_ns);
+
+- count = 0;
++ hits = 0;
++ misses = 0;
++ early_hits = 0;
+ max_early_idx = -1;
+ constraint_idx = drv->state_count;
+ idx = -1;
+@@ -266,23 +268,61 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+
+ if (s->disabled || su->disable) {
+ /*
+- * If the "early hits" metric of a disabled state is
+- * greater than the current maximum, it should be taken
+- * into account, because it would be a mistake to select
+- * a deeper state with lower "early hits" metric. The
+- * index cannot be changed to point to it, however, so
+- * just increase the max count alone and let the index
+- * still point to a shallower idle state.
++ * Ignore disabled states with target residencies beyond
++ * the anticipated idle duration.
+ */
+- if (max_early_idx >= 0 &&
+- count < cpu_data->states[i].early_hits)
+- count = cpu_data->states[i].early_hits;
++ if (s->target_residency > duration_us)
++ continue;
++
++ /*
++ * This state is disabled, so the range of idle duration
++ * values corresponding to it is covered by the current
++ * candidate state, but still the "hits" and "misses"
++ * metrics of the disabled state need to be used to
++ * decide whether or not the state covering the range in
++ * question is good enough.
++ */
++ hits = cpu_data->states[i].hits;
++ misses = cpu_data->states[i].misses;
++
++ if (early_hits >= cpu_data->states[i].early_hits ||
++ idx < 0)
++ continue;
++
++ /*
++ * If the current candidate state has been the one with
++ * the maximum "early hits" metric so far, the "early
++ * hits" metric of the disabled state replaces the
++ * current "early hits" count to avoid selecting a
++ * deeper state with lower "early hits" metric.
++ */
++ if (max_early_idx == idx) {
++ early_hits = cpu_data->states[i].early_hits;
++ continue;
++ }
++
++ /*
++ * The current candidate state is closer to the disabled
++ * one than the current maximum "early hits" state, so
++ * replace the latter with it; but in case the maximum
++ * "early hits" state index has not been set so far,
++ * check that the current candidate state is not too
++ * shallow for that role.
++ */
++ if (!(tick_nohz_tick_stopped() &&
++ drv->states[idx].target_residency < TICK_USEC)) {
++ early_hits = cpu_data->states[i].early_hits;
++ max_early_idx = idx;
++ }
+
+ continue;
+ }
+
+- if (idx < 0)
++ if (idx < 0) {
+ idx = i; /* first enabled state */
++ hits = cpu_data->states[i].hits;
++ misses = cpu_data->states[i].misses;
++ }
+
+ if (s->target_residency > duration_us)
+ break;
+@@ -291,11 +331,13 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ constraint_idx = i;
+
+ idx = i;
++ hits = cpu_data->states[i].hits;
++ misses = cpu_data->states[i].misses;
+
+- if (count < cpu_data->states[i].early_hits &&
++ if (early_hits < cpu_data->states[i].early_hits &&
+ !(tick_nohz_tick_stopped() &&
+ drv->states[i].target_residency < TICK_USEC)) {
+- count = cpu_data->states[i].early_hits;
++ early_hits = cpu_data->states[i].early_hits;
+ max_early_idx = i;
+ }
+ }
+@@ -308,8 +350,7 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ * "early hits" metric, but if that cannot be determined, just use the
+ * state selected so far.
+ */
+- if (cpu_data->states[idx].hits <= cpu_data->states[idx].misses &&
+- max_early_idx >= 0) {
++ if (hits <= misses && max_early_idx >= 0) {
+ idx = max_early_idx;
+ duration_us = drv->states[idx].target_residency;
+ }
+@@ -324,10 +365,9 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ if (idx < 0) {
+ idx = 0; /* No states enabled. Must use 0. */
+ } else if (idx > 0) {
++ unsigned int count = 0;
+ u64 sum = 0;
+
+- count = 0;
+-
+ /*
+ * Count and sum the most recent idle duration values less than
+ * the current expected idle duration value.
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index a0e19802149f..a3f0955af0f6 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -160,6 +160,7 @@ int devfreq_update_status(struct devfreq *devfreq, unsigned long freq)
+ int lev, prev_lev, ret = 0;
+ unsigned long cur_time;
+
++ lockdep_assert_held(&devfreq->lock);
+ cur_time = jiffies;
+
+ /* Immediately exit if previous_freq is not initialized yet. */
+@@ -1397,12 +1398,17 @@ static ssize_t trans_stat_show(struct device *dev,
+ int i, j;
+ unsigned int max_state = devfreq->profile->max_state;
+
+- if (!devfreq->stop_polling &&
+- devfreq_update_status(devfreq, devfreq->previous_freq))
+- return 0;
+ if (max_state == 0)
+ return sprintf(buf, "Not Supported.\n");
+
++ mutex_lock(&devfreq->lock);
++ if (!devfreq->stop_polling &&
++ devfreq_update_status(devfreq, devfreq->previous_freq)) {
++ mutex_unlock(&devfreq->lock);
++ return 0;
++ }
++ mutex_unlock(&devfreq->lock);
++
+ len = sprintf(buf, " From : To\n");
+ len += sprintf(buf + len, " :");
+ for (i = 0; i < max_state; i++)
+diff --git a/drivers/edac/altera_edac.c b/drivers/edac/altera_edac.c
+index bf024ec0116c..d0860838151b 100644
+--- a/drivers/edac/altera_edac.c
++++ b/drivers/edac/altera_edac.c
+@@ -561,6 +561,7 @@ static const struct regmap_config s10_sdram_regmap_cfg = {
+ .reg_write = s10_protected_reg_write,
+ .use_single_read = true,
+ .use_single_write = true,
++ .fast_io = true,
+ };
+
+ /************** </Stratix10 EDAC Memory Controller Functions> ***********/
+diff --git a/drivers/edac/ghes_edac.c b/drivers/edac/ghes_edac.c
+index 1163c382d4a5..f9c17654045a 100644
+--- a/drivers/edac/ghes_edac.c
++++ b/drivers/edac/ghes_edac.c
+@@ -566,8 +566,8 @@ int ghes_edac_register(struct ghes *ghes, struct device *dev)
+ ghes_pvt = pvt;
+ spin_unlock_irqrestore(&ghes_lock, flags);
+
+- /* only increment on success */
+- refcount_inc(&ghes_refcount);
++ /* only set on success */
++ refcount_set(&ghes_refcount, 1);
+
+ unlock:
+ mutex_unlock(&ghes_reg_mutex);
+diff --git a/drivers/firmware/qcom_scm-64.c b/drivers/firmware/qcom_scm-64.c
+index 91d5ad7cf58b..25e0f60c759a 100644
+--- a/drivers/firmware/qcom_scm-64.c
++++ b/drivers/firmware/qcom_scm-64.c
+@@ -150,7 +150,7 @@ static int qcom_scm_call(struct device *dev, u32 svc_id, u32 cmd_id,
+ kfree(args_virt);
+ }
+
+- if (res->a0 < 0)
++ if ((long)res->a0 < 0)
+ return qcom_scm_remap_error(res->a0);
+
+ return 0;
+diff --git a/drivers/hwtracing/coresight/coresight-funnel.c b/drivers/hwtracing/coresight/coresight-funnel.c
+index fa97cb9ab4f9..f4a33d2cfd73 100644
+--- a/drivers/hwtracing/coresight/coresight-funnel.c
++++ b/drivers/hwtracing/coresight/coresight-funnel.c
+@@ -37,12 +37,14 @@ DEFINE_CORESIGHT_DEVLIST(funnel_devs, "funnel");
+ * @atclk: optional clock for the core parts of the funnel.
+ * @csdev: component vitals needed by the framework.
+ * @priority: port selection order.
++ * @spinlock: serialize enable/disable operations.
+ */
+ struct funnel_drvdata {
+ void __iomem *base;
+ struct clk *atclk;
+ struct coresight_device *csdev;
+ unsigned long priority;
++ spinlock_t spinlock;
+ };
+
+ static int dynamic_funnel_enable_hw(struct funnel_drvdata *drvdata, int port)
+@@ -75,11 +77,21 @@ static int funnel_enable(struct coresight_device *csdev, int inport,
+ {
+ int rc = 0;
+ struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+-
+- if (drvdata->base)
+- rc = dynamic_funnel_enable_hw(drvdata, inport);
+-
++ unsigned long flags;
++ bool first_enable = false;
++
++ spin_lock_irqsave(&drvdata->spinlock, flags);
++ if (atomic_read(&csdev->refcnt[inport]) == 0) {
++ if (drvdata->base)
++ rc = dynamic_funnel_enable_hw(drvdata, inport);
++ if (!rc)
++ first_enable = true;
++ }
+ if (!rc)
++ atomic_inc(&csdev->refcnt[inport]);
++ spin_unlock_irqrestore(&drvdata->spinlock, flags);
++
++ if (first_enable)
+ dev_dbg(&csdev->dev, "FUNNEL inport %d enabled\n", inport);
+ return rc;
+ }
+@@ -106,11 +118,19 @@ static void funnel_disable(struct coresight_device *csdev, int inport,
+ int outport)
+ {
+ struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
++ unsigned long flags;
++ bool last_disable = false;
++
++ spin_lock_irqsave(&drvdata->spinlock, flags);
++ if (atomic_dec_return(&csdev->refcnt[inport]) == 0) {
++ if (drvdata->base)
++ dynamic_funnel_disable_hw(drvdata, inport);
++ last_disable = true;
++ }
++ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+- if (drvdata->base)
+- dynamic_funnel_disable_hw(drvdata, inport);
+-
+- dev_dbg(&csdev->dev, "FUNNEL inport %d disabled\n", inport);
++ if (last_disable)
++ dev_dbg(&csdev->dev, "FUNNEL inport %d disabled\n", inport);
+ }
+
+ static const struct coresight_ops_link funnel_link_ops = {
+diff --git a/drivers/hwtracing/coresight/coresight-replicator.c b/drivers/hwtracing/coresight/coresight-replicator.c
+index b7d6d59d56db..596c4297b03b 100644
+--- a/drivers/hwtracing/coresight/coresight-replicator.c
++++ b/drivers/hwtracing/coresight/coresight-replicator.c
+@@ -31,11 +31,13 @@ DEFINE_CORESIGHT_DEVLIST(replicator_devs, "replicator");
+ * whether this one is programmable or not.
+ * @atclk: optional clock for the core parts of the replicator.
+ * @csdev: component vitals needed by the framework
++ * @spinlock: serialize enable/disable operations.
+ */
+ struct replicator_drvdata {
+ void __iomem *base;
+ struct clk *atclk;
+ struct coresight_device *csdev;
++ spinlock_t spinlock;
+ };
+
+ static void dynamic_replicator_reset(struct replicator_drvdata *drvdata)
+@@ -97,10 +99,22 @@ static int replicator_enable(struct coresight_device *csdev, int inport,
+ {
+ int rc = 0;
+ struct replicator_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+-
+- if (drvdata->base)
+- rc = dynamic_replicator_enable(drvdata, inport, outport);
++ unsigned long flags;
++ bool first_enable = false;
++
++ spin_lock_irqsave(&drvdata->spinlock, flags);
++ if (atomic_read(&csdev->refcnt[outport]) == 0) {
++ if (drvdata->base)
++ rc = dynamic_replicator_enable(drvdata, inport,
++ outport);
++ if (!rc)
++ first_enable = true;
++ }
+ if (!rc)
++ atomic_inc(&csdev->refcnt[outport]);
++ spin_unlock_irqrestore(&drvdata->spinlock, flags);
++
++ if (first_enable)
+ dev_dbg(&csdev->dev, "REPLICATOR enabled\n");
+ return rc;
+ }
+@@ -137,10 +151,19 @@ static void replicator_disable(struct coresight_device *csdev, int inport,
+ int outport)
+ {
+ struct replicator_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
++ unsigned long flags;
++ bool last_disable = false;
++
++ spin_lock_irqsave(&drvdata->spinlock, flags);
++ if (atomic_dec_return(&csdev->refcnt[outport]) == 0) {
++ if (drvdata->base)
++ dynamic_replicator_disable(drvdata, inport, outport);
++ last_disable = true;
++ }
++ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+- if (drvdata->base)
+- dynamic_replicator_disable(drvdata, inport, outport);
+- dev_dbg(&csdev->dev, "REPLICATOR disabled\n");
++ if (last_disable)
++ dev_dbg(&csdev->dev, "REPLICATOR disabled\n");
+ }
+
+ static const struct coresight_ops_link replicator_link_ops = {
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+index 23b7ff00af5c..485311a4bf5d 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-etf.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+@@ -334,9 +334,10 @@ static int tmc_disable_etf_sink(struct coresight_device *csdev)
+ static int tmc_enable_etf_link(struct coresight_device *csdev,
+ int inport, int outport)
+ {
+- int ret;
++ int ret = 0;
+ unsigned long flags;
+ struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
++ bool first_enable = false;
+
+ spin_lock_irqsave(&drvdata->spinlock, flags);
+ if (drvdata->reading) {
+@@ -344,12 +345,18 @@ static int tmc_enable_etf_link(struct coresight_device *csdev,
+ return -EBUSY;
+ }
+
+- ret = tmc_etf_enable_hw(drvdata);
++ if (atomic_read(&csdev->refcnt[0]) == 0) {
++ ret = tmc_etf_enable_hw(drvdata);
++ if (!ret) {
++ drvdata->mode = CS_MODE_SYSFS;
++ first_enable = true;
++ }
++ }
+ if (!ret)
+- drvdata->mode = CS_MODE_SYSFS;
++ atomic_inc(&csdev->refcnt[0]);
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+- if (!ret)
++ if (first_enable)
+ dev_dbg(&csdev->dev, "TMC-ETF enabled\n");
+ return ret;
+ }
+@@ -359,6 +366,7 @@ static void tmc_disable_etf_link(struct coresight_device *csdev,
+ {
+ unsigned long flags;
+ struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
++ bool last_disable = false;
+
+ spin_lock_irqsave(&drvdata->spinlock, flags);
+ if (drvdata->reading) {
+@@ -366,11 +374,15 @@ static void tmc_disable_etf_link(struct coresight_device *csdev,
+ return;
+ }
+
+- tmc_etf_disable_hw(drvdata);
+- drvdata->mode = CS_MODE_DISABLED;
++ if (atomic_dec_return(&csdev->refcnt[0]) == 0) {
++ tmc_etf_disable_hw(drvdata);
++ drvdata->mode = CS_MODE_DISABLED;
++ last_disable = true;
++ }
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+- dev_dbg(&csdev->dev, "TMC-ETF disabled\n");
++ if (last_disable)
++ dev_dbg(&csdev->dev, "TMC-ETF disabled\n");
+ }
+
+ static void *tmc_alloc_etf_buffer(struct coresight_device *csdev,
+diff --git a/drivers/hwtracing/coresight/coresight.c b/drivers/hwtracing/coresight/coresight.c
+index 55db77f6410b..17416419cb71 100644
+--- a/drivers/hwtracing/coresight/coresight.c
++++ b/drivers/hwtracing/coresight/coresight.c
+@@ -253,9 +253,9 @@ static int coresight_enable_link(struct coresight_device *csdev,
+ struct coresight_device *parent,
+ struct coresight_device *child)
+ {
+- int ret;
++ int ret = 0;
+ int link_subtype;
+- int refport, inport, outport;
++ int inport, outport;
+
+ if (!parent || !child)
+ return -EINVAL;
+@@ -264,29 +264,17 @@ static int coresight_enable_link(struct coresight_device *csdev,
+ outport = coresight_find_link_outport(csdev, child);
+ link_subtype = csdev->subtype.link_subtype;
+
+- if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_MERG)
+- refport = inport;
+- else if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_SPLIT)
+- refport = outport;
+- else
+- refport = 0;
+-
+- if (refport < 0)
+- return refport;
++ if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_MERG && inport < 0)
++ return inport;
++ if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_SPLIT && outport < 0)
++ return outport;
+
+- if (atomic_inc_return(&csdev->refcnt[refport]) == 1) {
+- if (link_ops(csdev)->enable) {
+- ret = link_ops(csdev)->enable(csdev, inport, outport);
+- if (ret) {
+- atomic_dec(&csdev->refcnt[refport]);
+- return ret;
+- }
+- }
+- }
+-
+- csdev->enable = true;
++ if (link_ops(csdev)->enable)
++ ret = link_ops(csdev)->enable(csdev, inport, outport);
++ if (!ret)
++ csdev->enable = true;
+
+- return 0;
++ return ret;
+ }
+
+ static void coresight_disable_link(struct coresight_device *csdev,
+@@ -295,7 +283,7 @@ static void coresight_disable_link(struct coresight_device *csdev,
+ {
+ int i, nr_conns;
+ int link_subtype;
+- int refport, inport, outport;
++ int inport, outport;
+
+ if (!parent || !child)
+ return;
+@@ -305,20 +293,15 @@ static void coresight_disable_link(struct coresight_device *csdev,
+ link_subtype = csdev->subtype.link_subtype;
+
+ if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_MERG) {
+- refport = inport;
+ nr_conns = csdev->pdata->nr_inport;
+ } else if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_SPLIT) {
+- refport = outport;
+ nr_conns = csdev->pdata->nr_outport;
+ } else {
+- refport = 0;
+ nr_conns = 1;
+ }
+
+- if (atomic_dec_return(&csdev->refcnt[refport]) == 0) {
+- if (link_ops(csdev)->disable)
+- link_ops(csdev)->disable(csdev, inport, outport);
+- }
++ if (link_ops(csdev)->disable)
++ link_ops(csdev)->disable(csdev, inport, outport);
+
+ for (i = 0; i < nr_conns; i++)
+ if (atomic_read(&csdev->refcnt[i]) != 0)
+diff --git a/drivers/hwtracing/intel_th/core.c b/drivers/hwtracing/intel_th/core.c
+index 55922896d862..ebe992824b56 100644
+--- a/drivers/hwtracing/intel_th/core.c
++++ b/drivers/hwtracing/intel_th/core.c
+@@ -649,10 +649,8 @@ intel_th_subdevice_alloc(struct intel_th *th,
+ }
+
+ err = intel_th_device_add_resources(thdev, res, subdev->nres);
+- if (err) {
+- put_device(&thdev->dev);
++ if (err)
+ goto fail_put_device;
+- }
+
+ if (subdev->type == INTEL_TH_OUTPUT) {
+ if (subdev->mknode)
+@@ -667,10 +665,8 @@ intel_th_subdevice_alloc(struct intel_th *th,
+ }
+
+ err = device_add(&thdev->dev);
+- if (err) {
+- put_device(&thdev->dev);
++ if (err)
+ goto fail_free_res;
+- }
+
+ /* need switch driver to be loaded to enumerate the rest */
+ if (subdev->type == INTEL_TH_SWITCH && !req) {
+diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c
+index 03ca5b1bef9f..ebf3e30e989a 100644
+--- a/drivers/hwtracing/intel_th/pci.c
++++ b/drivers/hwtracing/intel_th/pci.c
+@@ -209,6 +209,16 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x45c5),
+ .driver_data = (kernel_ulong_t)&intel_th_2x,
+ },
++ {
++ /* Ice Lake CPU */
++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x8a29),
++ .driver_data = (kernel_ulong_t)&intel_th_2x,
++ },
++ {
++ /* Tiger Lake CPU */
++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x9a33),
++ .driver_data = (kernel_ulong_t)&intel_th_2x,
++ },
+ {
+ /* Tiger Lake PCH */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa0a6),
+diff --git a/drivers/hwtracing/stm/policy.c b/drivers/hwtracing/stm/policy.c
+index 4b9e44b227d8..4f932a419752 100644
+--- a/drivers/hwtracing/stm/policy.c
++++ b/drivers/hwtracing/stm/policy.c
+@@ -345,7 +345,11 @@ void stp_policy_unbind(struct stp_policy *policy)
+ stm->policy = NULL;
+ policy->stm = NULL;
+
++ /*
++ * Drop the reference on the protocol driver and lose the link.
++ */
+ stm_put_protocol(stm->pdrv);
++ stm->pdrv = NULL;
+ stm_put_device(stm);
+ }
+
+diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c
+index edc6f1cc90b2..3f03abf100b5 100644
+--- a/drivers/iio/adc/ad7124.c
++++ b/drivers/iio/adc/ad7124.c
+@@ -39,6 +39,8 @@
+ #define AD7124_STATUS_POR_FLAG_MSK BIT(4)
+
+ /* AD7124_ADC_CONTROL */
++#define AD7124_ADC_CTRL_REF_EN_MSK BIT(8)
++#define AD7124_ADC_CTRL_REF_EN(x) FIELD_PREP(AD7124_ADC_CTRL_REF_EN_MSK, x)
+ #define AD7124_ADC_CTRL_PWR_MSK GENMASK(7, 6)
+ #define AD7124_ADC_CTRL_PWR(x) FIELD_PREP(AD7124_ADC_CTRL_PWR_MSK, x)
+ #define AD7124_ADC_CTRL_MODE_MSK GENMASK(5, 2)
+@@ -424,7 +426,10 @@ static int ad7124_init_channel_vref(struct ad7124_state *st,
+ break;
+ case AD7124_INT_REF:
+ st->channel_config[channel_number].vref_mv = 2500;
+- break;
++ st->adc_control &= ~AD7124_ADC_CTRL_REF_EN_MSK;
++ st->adc_control |= AD7124_ADC_CTRL_REF_EN(1);
++ return ad_sd_write_reg(&st->sd, AD7124_ADC_CONTROL,
++ 2, st->adc_control);
+ default:
+ dev_err(&st->sd.spi->dev, "Invalid reference %d\n", refsel);
+ return -EINVAL;
+diff --git a/drivers/iio/adc/ad7606.c b/drivers/iio/adc/ad7606.c
+index aba0fd123a51..0e3b085a85bb 100644
+--- a/drivers/iio/adc/ad7606.c
++++ b/drivers/iio/adc/ad7606.c
+@@ -57,7 +57,7 @@ static int ad7606_reset(struct ad7606_state *st)
+
+ static int ad7606_read_samples(struct ad7606_state *st)
+ {
+- unsigned int num = st->chip_info->num_channels;
++ unsigned int num = st->chip_info->num_channels - 1;
+ u16 *data = st->data;
+ int ret;
+
+diff --git a/drivers/iio/adc/ad7949.c b/drivers/iio/adc/ad7949.c
+index ac0ffff6c5ae..6b51bfcad0d0 100644
+--- a/drivers/iio/adc/ad7949.c
++++ b/drivers/iio/adc/ad7949.c
+@@ -57,29 +57,11 @@ struct ad7949_adc_chip {
+ u32 buffer ____cacheline_aligned;
+ };
+
+-static bool ad7949_spi_cfg_is_read_back(struct ad7949_adc_chip *ad7949_adc)
+-{
+- if (!(ad7949_adc->cfg & AD7949_CFG_READ_BACK))
+- return true;
+-
+- return false;
+-}
+-
+-static int ad7949_spi_bits_per_word(struct ad7949_adc_chip *ad7949_adc)
+-{
+- int ret = ad7949_adc->resolution;
+-
+- if (ad7949_spi_cfg_is_read_back(ad7949_adc))
+- ret += AD7949_CFG_REG_SIZE_BITS;
+-
+- return ret;
+-}
+-
+ static int ad7949_spi_write_cfg(struct ad7949_adc_chip *ad7949_adc, u16 val,
+ u16 mask)
+ {
+ int ret;
+- int bits_per_word = ad7949_spi_bits_per_word(ad7949_adc);
++ int bits_per_word = ad7949_adc->resolution;
+ int shift = bits_per_word - AD7949_CFG_REG_SIZE_BITS;
+ struct spi_message msg;
+ struct spi_transfer tx[] = {
+@@ -107,7 +89,8 @@ static int ad7949_spi_read_channel(struct ad7949_adc_chip *ad7949_adc, int *val,
+ unsigned int channel)
+ {
+ int ret;
+- int bits_per_word = ad7949_spi_bits_per_word(ad7949_adc);
++ int i;
++ int bits_per_word = ad7949_adc->resolution;
+ int mask = GENMASK(ad7949_adc->resolution, 0);
+ struct spi_message msg;
+ struct spi_transfer tx[] = {
+@@ -118,12 +101,23 @@ static int ad7949_spi_read_channel(struct ad7949_adc_chip *ad7949_adc, int *val,
+ },
+ };
+
+- ret = ad7949_spi_write_cfg(ad7949_adc,
+- channel << AD7949_OFFSET_CHANNEL_SEL,
+- AD7949_MASK_CHANNEL_SEL);
+- if (ret)
+- return ret;
++ /*
++ * 1: write CFG for sample N and read old data (sample N-2)
++ * 2: if CFG has not changed since sample N-1, we'll get good data on
++ * the next xfer, so we bail out now; otherwise we write something
++ * and read garbage (the sample N-1 configuration).
++ */
++ for (i = 0; i < 2; i++) {
++ ret = ad7949_spi_write_cfg(ad7949_adc,
++ channel << AD7949_OFFSET_CHANNEL_SEL,
++ AD7949_MASK_CHANNEL_SEL);
++ if (ret)
++ return ret;
++ if (channel == ad7949_adc->current_channel)
++ break;
++ }
+
++ /* 3: write something and read actual data */
+ ad7949_adc->buffer = 0;
+ spi_message_init_with_transfers(&msg, tx, 1);
+ ret = spi_sync(ad7949_adc->spi, &msg);
+@@ -138,10 +132,7 @@ static int ad7949_spi_read_channel(struct ad7949_adc_chip *ad7949_adc, int *val,
+
+ ad7949_adc->current_channel = channel;
+
+- if (ad7949_spi_cfg_is_read_back(ad7949_adc))
+- *val = (ad7949_adc->buffer >> AD7949_CFG_REG_SIZE_BITS) & mask;
+- else
+- *val = ad7949_adc->buffer & mask;
++ *val = ad7949_adc->buffer & mask;
+
+ return 0;
+ }
+diff --git a/drivers/iio/humidity/hdc100x.c b/drivers/iio/humidity/hdc100x.c
+index 066e05f92081..ff6666ac5d68 100644
+--- a/drivers/iio/humidity/hdc100x.c
++++ b/drivers/iio/humidity/hdc100x.c
+@@ -229,7 +229,7 @@ static int hdc100x_read_raw(struct iio_dev *indio_dev,
+ *val2 = 65536;
+ return IIO_VAL_FRACTIONAL;
+ } else {
+- *val = 100;
++ *val = 100000;
+ *val2 = 65536;
+ return IIO_VAL_FRACTIONAL;
+ }
+diff --git a/drivers/iio/imu/adis16480.c b/drivers/iio/imu/adis16480.c
+index 8743b2f376e2..7b966a41d623 100644
+--- a/drivers/iio/imu/adis16480.c
++++ b/drivers/iio/imu/adis16480.c
+@@ -623,9 +623,13 @@ static int adis16480_read_raw(struct iio_dev *indio_dev,
+ *val2 = (st->chip_info->temp_scale % 1000) * 1000;
+ return IIO_VAL_INT_PLUS_MICRO;
+ case IIO_PRESSURE:
+- *val = 0;
+- *val2 = 4000; /* 40ubar = 0.004 kPa */
+- return IIO_VAL_INT_PLUS_MICRO;
++ /*
++ * max scale is 1310 mbar
++ * max raw value is 32767, shifted left by 16 bits to fill 32 bits
++ */
++ *val = 131; /* 1310mbar = 131 kPa */
++ *val2 = 32767 << 16;
++ return IIO_VAL_FRACTIONAL;
+ default:
+ return -EINVAL;
+ }
+@@ -786,13 +790,14 @@ static const struct adis16480_chip_info adis16480_chip_info[] = {
+ .channels = adis16485_channels,
+ .num_channels = ARRAY_SIZE(adis16485_channels),
+ /*
+- * storing the value in rad/degree and the scale in degree
+- * gives us the result in rad and better precession than
+- * storing the scale directly in rad.
++ * Typically we put IIO_RAD_TO_DEGREE in the denominator, which
++ * is exactly the same as IIO_DEGREE_TO_RAD in the numerator and
++ * gives a better approximation. However, in this case we cannot
++ * do that, as it would not fit in a 32-bit variable.
+ */
+- .gyro_max_val = IIO_RAD_TO_DEGREE(22887),
+- .gyro_max_scale = 300,
+- .accel_max_val = IIO_M_S_2_TO_G(21973),
++ .gyro_max_val = 22887 << 16,
++ .gyro_max_scale = IIO_DEGREE_TO_RAD(300),
++ .accel_max_val = IIO_M_S_2_TO_G(21973 << 16),
+ .accel_max_scale = 18,
+ .temp_scale = 5650, /* 5.65 milli degree Celsius */
+ .int_clk = 2460000,
+@@ -802,9 +807,9 @@ static const struct adis16480_chip_info adis16480_chip_info[] = {
+ [ADIS16480] = {
+ .channels = adis16480_channels,
+ .num_channels = ARRAY_SIZE(adis16480_channels),
+- .gyro_max_val = IIO_RAD_TO_DEGREE(22500),
+- .gyro_max_scale = 450,
+- .accel_max_val = IIO_M_S_2_TO_G(12500),
++ .gyro_max_val = 22500 << 16,
++ .gyro_max_scale = IIO_DEGREE_TO_RAD(450),
++ .accel_max_val = IIO_M_S_2_TO_G(12500 << 16),
+ .accel_max_scale = 10,
+ .temp_scale = 5650, /* 5.65 milli degree Celsius */
+ .int_clk = 2460000,
+@@ -814,9 +819,9 @@ static const struct adis16480_chip_info adis16480_chip_info[] = {
+ [ADIS16485] = {
+ .channels = adis16485_channels,
+ .num_channels = ARRAY_SIZE(adis16485_channels),
+- .gyro_max_val = IIO_RAD_TO_DEGREE(22500),
+- .gyro_max_scale = 450,
+- .accel_max_val = IIO_M_S_2_TO_G(20000),
++ .gyro_max_val = 22500 << 16,
++ .gyro_max_scale = IIO_DEGREE_TO_RAD(450),
++ .accel_max_val = IIO_M_S_2_TO_G(20000 << 16),
+ .accel_max_scale = 5,
+ .temp_scale = 5650, /* 5.65 milli degree Celsius */
+ .int_clk = 2460000,
+@@ -826,9 +831,9 @@ static const struct adis16480_chip_info adis16480_chip_info[] = {
+ [ADIS16488] = {
+ .channels = adis16480_channels,
+ .num_channels = ARRAY_SIZE(adis16480_channels),
+- .gyro_max_val = IIO_RAD_TO_DEGREE(22500),
+- .gyro_max_scale = 450,
+- .accel_max_val = IIO_M_S_2_TO_G(22500),
++ .gyro_max_val = 22500 << 16,
++ .gyro_max_scale = IIO_DEGREE_TO_RAD(450),
++ .accel_max_val = IIO_M_S_2_TO_G(22500 << 16),
+ .accel_max_scale = 18,
+ .temp_scale = 5650, /* 5.65 milli degree Celsius */
+ .int_clk = 2460000,
+@@ -838,9 +843,9 @@ static const struct adis16480_chip_info adis16480_chip_info[] = {
+ [ADIS16495_1] = {
+ .channels = adis16485_channels,
+ .num_channels = ARRAY_SIZE(adis16485_channels),
+- .gyro_max_val = IIO_RAD_TO_DEGREE(20000),
+- .gyro_max_scale = 125,
+- .accel_max_val = IIO_M_S_2_TO_G(32000),
++ .gyro_max_val = 20000 << 16,
++ .gyro_max_scale = IIO_DEGREE_TO_RAD(125),
++ .accel_max_val = IIO_M_S_2_TO_G(32000 << 16),
+ .accel_max_scale = 8,
+ .temp_scale = 12500, /* 12.5 milli degree Celsius */
+ .int_clk = 4250000,
+@@ -851,9 +856,9 @@ static const struct adis16480_chip_info adis16480_chip_info[] = {
+ [ADIS16495_2] = {
+ .channels = adis16485_channels,
+ .num_channels = ARRAY_SIZE(adis16485_channels),
+- .gyro_max_val = IIO_RAD_TO_DEGREE(18000),
+- .gyro_max_scale = 450,
+- .accel_max_val = IIO_M_S_2_TO_G(32000),
++ .gyro_max_val = 18000 << 16,
++ .gyro_max_scale = IIO_DEGREE_TO_RAD(450),
++ .accel_max_val = IIO_M_S_2_TO_G(32000 << 16),
+ .accel_max_scale = 8,
+ .temp_scale = 12500, /* 12.5 milli degree Celsius */
+ .int_clk = 4250000,
+@@ -864,9 +869,9 @@ static const struct adis16480_chip_info adis16480_chip_info[] = {
+ [ADIS16495_3] = {
+ .channels = adis16485_channels,
+ .num_channels = ARRAY_SIZE(adis16485_channels),
+- .gyro_max_val = IIO_RAD_TO_DEGREE(20000),
+- .gyro_max_scale = 2000,
+- .accel_max_val = IIO_M_S_2_TO_G(32000),
++ .gyro_max_val = 20000 << 16,
++ .gyro_max_scale = IIO_DEGREE_TO_RAD(2000),
++ .accel_max_val = IIO_M_S_2_TO_G(32000 << 16),
+ .accel_max_scale = 8,
+ .temp_scale = 12500, /* 12.5 milli degree Celsius */
+ .int_clk = 4250000,
+@@ -877,9 +882,9 @@ static const struct adis16480_chip_info adis16480_chip_info[] = {
+ [ADIS16497_1] = {
+ .channels = adis16485_channels,
+ .num_channels = ARRAY_SIZE(adis16485_channels),
+- .gyro_max_val = IIO_RAD_TO_DEGREE(20000),
+- .gyro_max_scale = 125,
+- .accel_max_val = IIO_M_S_2_TO_G(32000),
++ .gyro_max_val = 20000 << 16,
++ .gyro_max_scale = IIO_DEGREE_TO_RAD(125),
++ .accel_max_val = IIO_M_S_2_TO_G(32000 << 16),
+ .accel_max_scale = 40,
+ .temp_scale = 12500, /* 12.5 milli degree Celsius */
+ .int_clk = 4250000,
+@@ -890,9 +895,9 @@ static const struct adis16480_chip_info adis16480_chip_info[] = {
+ [ADIS16497_2] = {
+ .channels = adis16485_channels,
+ .num_channels = ARRAY_SIZE(adis16485_channels),
+- .gyro_max_val = IIO_RAD_TO_DEGREE(18000),
+- .gyro_max_scale = 450,
+- .accel_max_val = IIO_M_S_2_TO_G(32000),
++ .gyro_max_val = 18000 << 16,
++ .gyro_max_scale = IIO_DEGREE_TO_RAD(450),
++ .accel_max_val = IIO_M_S_2_TO_G(32000 << 16),
+ .accel_max_scale = 40,
+ .temp_scale = 12500, /* 12.5 milli degree Celsius */
+ .int_clk = 4250000,
+@@ -903,9 +908,9 @@ static const struct adis16480_chip_info adis16480_chip_info[] = {
+ [ADIS16497_3] = {
+ .channels = adis16485_channels,
+ .num_channels = ARRAY_SIZE(adis16485_channels),
+- .gyro_max_val = IIO_RAD_TO_DEGREE(20000),
+- .gyro_max_scale = 2000,
+- .accel_max_val = IIO_M_S_2_TO_G(32000),
++ .gyro_max_val = 20000 << 16,
++ .gyro_max_scale = IIO_DEGREE_TO_RAD(2000),
++ .accel_max_val = IIO_M_S_2_TO_G(32000 << 16),
+ .accel_max_scale = 40,
+ .temp_scale = 12500, /* 12.5 milli degree Celsius */
+ .int_clk = 4250000,
+@@ -919,6 +924,7 @@ static const struct iio_info adis16480_info = {
+ .read_raw = &adis16480_read_raw,
+ .write_raw = &adis16480_write_raw,
+ .update_scan_mode = adis_update_scan_mode,
++ .debugfs_reg_access = adis_debugfs_reg_access,
+ };
+
+ static int adis16480_stop_device(struct iio_dev *indio_dev)
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
+index 3cb41ac357fa..f6bd4f19273c 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
+@@ -115,6 +115,7 @@ static const struct inv_mpu6050_hw hw_info[] = {
+ .reg = ®_set_6050,
+ .config = &chip_config_6050,
+ .fifo_size = 1024,
++ .temp = {INV_MPU6050_TEMP_OFFSET, INV_MPU6050_TEMP_SCALE},
+ },
+ {
+ .whoami = INV_MPU6500_WHOAMI_VALUE,
+@@ -122,6 +123,7 @@ static const struct inv_mpu6050_hw hw_info[] = {
+ .reg = ®_set_6500,
+ .config = &chip_config_6050,
+ .fifo_size = 512,
++ .temp = {INV_MPU6500_TEMP_OFFSET, INV_MPU6500_TEMP_SCALE},
+ },
+ {
+ .whoami = INV_MPU6515_WHOAMI_VALUE,
+@@ -129,6 +131,7 @@ static const struct inv_mpu6050_hw hw_info[] = {
+ .reg = ®_set_6500,
+ .config = &chip_config_6050,
+ .fifo_size = 512,
++ .temp = {INV_MPU6500_TEMP_OFFSET, INV_MPU6500_TEMP_SCALE},
+ },
+ {
+ .whoami = INV_MPU6000_WHOAMI_VALUE,
+@@ -136,6 +139,7 @@ static const struct inv_mpu6050_hw hw_info[] = {
+ .reg = ®_set_6050,
+ .config = &chip_config_6050,
+ .fifo_size = 1024,
++ .temp = {INV_MPU6050_TEMP_OFFSET, INV_MPU6050_TEMP_SCALE},
+ },
+ {
+ .whoami = INV_MPU9150_WHOAMI_VALUE,
+@@ -143,6 +147,7 @@ static const struct inv_mpu6050_hw hw_info[] = {
+ .reg = ®_set_6050,
+ .config = &chip_config_6050,
+ .fifo_size = 1024,
++ .temp = {INV_MPU6050_TEMP_OFFSET, INV_MPU6050_TEMP_SCALE},
+ },
+ {
+ .whoami = INV_MPU9250_WHOAMI_VALUE,
+@@ -150,6 +155,7 @@ static const struct inv_mpu6050_hw hw_info[] = {
+ .reg = ®_set_6500,
+ .config = &chip_config_6050,
+ .fifo_size = 512,
++ .temp = {INV_MPU6500_TEMP_OFFSET, INV_MPU6500_TEMP_SCALE},
+ },
+ {
+ .whoami = INV_MPU9255_WHOAMI_VALUE,
+@@ -157,6 +163,7 @@ static const struct inv_mpu6050_hw hw_info[] = {
+ .reg = ®_set_6500,
+ .config = &chip_config_6050,
+ .fifo_size = 512,
++ .temp = {INV_MPU6500_TEMP_OFFSET, INV_MPU6500_TEMP_SCALE},
+ },
+ {
+ .whoami = INV_ICM20608_WHOAMI_VALUE,
+@@ -164,6 +171,7 @@ static const struct inv_mpu6050_hw hw_info[] = {
+ .reg = ®_set_6500,
+ .config = &chip_config_6050,
+ .fifo_size = 512,
++ .temp = {INV_ICM20608_TEMP_OFFSET, INV_ICM20608_TEMP_SCALE},
+ },
+ {
+ .whoami = INV_ICM20602_WHOAMI_VALUE,
+@@ -171,6 +179,7 @@ static const struct inv_mpu6050_hw hw_info[] = {
+ .reg = ®_set_icm20602,
+ .config = &chip_config_6050,
+ .fifo_size = 1008,
++ .temp = {INV_ICM20608_TEMP_OFFSET, INV_ICM20608_TEMP_SCALE},
+ },
+ };
+
+@@ -471,12 +480,8 @@ inv_mpu6050_read_raw(struct iio_dev *indio_dev,
+
+ return IIO_VAL_INT_PLUS_MICRO;
+ case IIO_TEMP:
+- *val = 0;
+- if (st->chip_type == INV_ICM20602)
+- *val2 = INV_ICM20602_TEMP_SCALE;
+- else
+- *val2 = INV_MPU6050_TEMP_SCALE;
+-
++ *val = st->hw->temp.scale / 1000000;
++ *val2 = st->hw->temp.scale % 1000000;
+ return IIO_VAL_INT_PLUS_MICRO;
+ default:
+ return -EINVAL;
+@@ -484,11 +489,7 @@ inv_mpu6050_read_raw(struct iio_dev *indio_dev,
+ case IIO_CHAN_INFO_OFFSET:
+ switch (chan->type) {
+ case IIO_TEMP:
+- if (st->chip_type == INV_ICM20602)
+- *val = INV_ICM20602_TEMP_OFFSET;
+- else
+- *val = INV_MPU6050_TEMP_OFFSET;
+-
++ *val = st->hw->temp.offset;
+ return IIO_VAL_INT;
+ default:
+ return -EINVAL;
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h b/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
+index 51235677c534..c32bd0c012b5 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
+@@ -101,6 +101,7 @@ struct inv_mpu6050_chip_config {
+ * @reg: register map of the chip.
+ * @config: configuration of the chip.
+ * @fifo_size: size of the FIFO in bytes.
++ * @temp: offset and scale to apply to raw temperature.
+ */
+ struct inv_mpu6050_hw {
+ u8 whoami;
+@@ -108,6 +109,10 @@ struct inv_mpu6050_hw {
+ const struct inv_mpu6050_reg_map *reg;
+ const struct inv_mpu6050_chip_config *config;
+ size_t fifo_size;
++ struct {
++ int offset;
++ int scale;
++ } temp;
+ };
+
+ /*
+@@ -218,16 +223,19 @@ struct inv_mpu6050_state {
+ #define INV_MPU6050_REG_UP_TIME_MIN 5000
+ #define INV_MPU6050_REG_UP_TIME_MAX 10000
+
+-#define INV_MPU6050_TEMP_OFFSET 12421
+-#define INV_MPU6050_TEMP_SCALE 2941
++#define INV_MPU6050_TEMP_OFFSET 12420
++#define INV_MPU6050_TEMP_SCALE 2941176
+ #define INV_MPU6050_MAX_GYRO_FS_PARAM 3
+ #define INV_MPU6050_MAX_ACCL_FS_PARAM 3
+ #define INV_MPU6050_THREE_AXIS 3
+ #define INV_MPU6050_GYRO_CONFIG_FSR_SHIFT 3
+ #define INV_MPU6050_ACCL_CONFIG_FSR_SHIFT 3
+
+-#define INV_ICM20602_TEMP_OFFSET 8170
+-#define INV_ICM20602_TEMP_SCALE 3060
++#define INV_MPU6500_TEMP_OFFSET 7011
++#define INV_MPU6500_TEMP_SCALE 2995178
++
++#define INV_ICM20608_TEMP_OFFSET 8170
++#define INV_ICM20608_TEMP_SCALE 3059976
+
+ /* 6 + 6 round up and plus 8 */
+ #define INV_MPU6050_OUTPUT_DATA_SIZE 24
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
+index c14bf533b66b..ceee4e1aa5d4 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
+@@ -198,6 +198,7 @@ struct st_lsm6dsx_ext_dev_settings {
+ * @wai: Sensor WhoAmI default value.
+ * @max_fifo_size: Sensor max fifo length in FIFO words.
+ * @id: List of hw id/device name supported by the driver configuration.
++ * @odr_table: Hw sensors odr table (Hz + val).
+ * @decimator: List of decimator register info (addr + mask).
+ * @batch: List of FIFO batching register info (addr + mask).
+ * @fifo_ops: Sensor hw FIFO parameters.
+@@ -211,6 +212,7 @@ struct st_lsm6dsx_settings {
+ enum st_lsm6dsx_hw_id hw_id;
+ const char *name;
+ } id[ST_LSM6DSX_MAX_ID];
++ struct st_lsm6dsx_odr_table_entry odr_table[2];
+ struct st_lsm6dsx_reg decimator[ST_LSM6DSX_MAX_ID];
+ struct st_lsm6dsx_reg batch[ST_LSM6DSX_MAX_ID];
+ struct st_lsm6dsx_fifo_ops fifo_ops;
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+index a6702a74570e..ba89cbbb73ae 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+@@ -69,33 +69,6 @@
+ #define ST_LSM6DSX_REG_GYRO_OUT_Y_L_ADDR 0x24
+ #define ST_LSM6DSX_REG_GYRO_OUT_Z_L_ADDR 0x26
+
+-static const struct st_lsm6dsx_odr_table_entry st_lsm6dsx_odr_table[] = {
+- [ST_LSM6DSX_ID_ACC] = {
+- .reg = {
+- .addr = 0x10,
+- .mask = GENMASK(7, 4),
+- },
+- .odr_avl[0] = { 13, 0x01 },
+- .odr_avl[1] = { 26, 0x02 },
+- .odr_avl[2] = { 52, 0x03 },
+- .odr_avl[3] = { 104, 0x04 },
+- .odr_avl[4] = { 208, 0x05 },
+- .odr_avl[5] = { 416, 0x06 },
+- },
+- [ST_LSM6DSX_ID_GYRO] = {
+- .reg = {
+- .addr = 0x11,
+- .mask = GENMASK(7, 4),
+- },
+- .odr_avl[0] = { 13, 0x01 },
+- .odr_avl[1] = { 26, 0x02 },
+- .odr_avl[2] = { 52, 0x03 },
+- .odr_avl[3] = { 104, 0x04 },
+- .odr_avl[4] = { 208, 0x05 },
+- .odr_avl[5] = { 416, 0x06 },
+- }
+-};
+-
+ static const struct st_lsm6dsx_fs_table_entry st_lsm6dsx_fs_table[] = {
+ [ST_LSM6DSX_ID_ACC] = {
+ .reg = {
+@@ -129,6 +102,32 @@ static const struct st_lsm6dsx_settings st_lsm6dsx_sensor_settings[] = {
+ .name = ST_LSM6DS3_DEV_NAME,
+ },
+ },
++ .odr_table = {
++ [ST_LSM6DSX_ID_ACC] = {
++ .reg = {
++ .addr = 0x10,
++ .mask = GENMASK(7, 4),
++ },
++ .odr_avl[0] = { 13, 0x01 },
++ .odr_avl[1] = { 26, 0x02 },
++ .odr_avl[2] = { 52, 0x03 },
++ .odr_avl[3] = { 104, 0x04 },
++ .odr_avl[4] = { 208, 0x05 },
++ .odr_avl[5] = { 416, 0x06 },
++ },
++ [ST_LSM6DSX_ID_GYRO] = {
++ .reg = {
++ .addr = 0x11,
++ .mask = GENMASK(7, 4),
++ },
++ .odr_avl[0] = { 13, 0x01 },
++ .odr_avl[1] = { 26, 0x02 },
++ .odr_avl[2] = { 52, 0x03 },
++ .odr_avl[3] = { 104, 0x04 },
++ .odr_avl[4] = { 208, 0x05 },
++ .odr_avl[5] = { 416, 0x06 },
++ },
++ },
+ .decimator = {
+ [ST_LSM6DSX_ID_ACC] = {
+ .addr = 0x08,
+@@ -179,6 +178,32 @@ static const struct st_lsm6dsx_settings st_lsm6dsx_sensor_settings[] = {
+ .name = ST_LSM6DS3H_DEV_NAME,
+ },
+ },
++ .odr_table = {
++ [ST_LSM6DSX_ID_ACC] = {
++ .reg = {
++ .addr = 0x10,
++ .mask = GENMASK(7, 4),
++ },
++ .odr_avl[0] = { 13, 0x01 },
++ .odr_avl[1] = { 26, 0x02 },
++ .odr_avl[2] = { 52, 0x03 },
++ .odr_avl[3] = { 104, 0x04 },
++ .odr_avl[4] = { 208, 0x05 },
++ .odr_avl[5] = { 416, 0x06 },
++ },
++ [ST_LSM6DSX_ID_GYRO] = {
++ .reg = {
++ .addr = 0x11,
++ .mask = GENMASK(7, 4),
++ },
++ .odr_avl[0] = { 13, 0x01 },
++ .odr_avl[1] = { 26, 0x02 },
++ .odr_avl[2] = { 52, 0x03 },
++ .odr_avl[3] = { 104, 0x04 },
++ .odr_avl[4] = { 208, 0x05 },
++ .odr_avl[5] = { 416, 0x06 },
++ },
++ },
+ .decimator = {
+ [ST_LSM6DSX_ID_ACC] = {
+ .addr = 0x08,
+@@ -235,6 +260,32 @@ static const struct st_lsm6dsx_settings st_lsm6dsx_sensor_settings[] = {
+ .name = ST_ISM330DLC_DEV_NAME,
+ },
+ },
++ .odr_table = {
++ [ST_LSM6DSX_ID_ACC] = {
++ .reg = {
++ .addr = 0x10,
++ .mask = GENMASK(7, 4),
++ },
++ .odr_avl[0] = { 13, 0x01 },
++ .odr_avl[1] = { 26, 0x02 },
++ .odr_avl[2] = { 52, 0x03 },
++ .odr_avl[3] = { 104, 0x04 },
++ .odr_avl[4] = { 208, 0x05 },
++ .odr_avl[5] = { 416, 0x06 },
++ },
++ [ST_LSM6DSX_ID_GYRO] = {
++ .reg = {
++ .addr = 0x11,
++ .mask = GENMASK(7, 4),
++ },
++ .odr_avl[0] = { 13, 0x01 },
++ .odr_avl[1] = { 26, 0x02 },
++ .odr_avl[2] = { 52, 0x03 },
++ .odr_avl[3] = { 104, 0x04 },
++ .odr_avl[4] = { 208, 0x05 },
++ .odr_avl[5] = { 416, 0x06 },
++ },
++ },
+ .decimator = {
+ [ST_LSM6DSX_ID_ACC] = {
+ .addr = 0x08,
+@@ -288,6 +339,32 @@ static const struct st_lsm6dsx_settings st_lsm6dsx_sensor_settings[] = {
+ .name = ST_LSM6DSOX_DEV_NAME,
+ },
+ },
++ .odr_table = {
++ [ST_LSM6DSX_ID_ACC] = {
++ .reg = {
++ .addr = 0x10,
++ .mask = GENMASK(7, 4),
++ },
++ .odr_avl[0] = { 13, 0x01 },
++ .odr_avl[1] = { 26, 0x02 },
++ .odr_avl[2] = { 52, 0x03 },
++ .odr_avl[3] = { 104, 0x04 },
++ .odr_avl[4] = { 208, 0x05 },
++ .odr_avl[5] = { 416, 0x06 },
++ },
++ [ST_LSM6DSX_ID_GYRO] = {
++ .reg = {
++ .addr = 0x11,
++ .mask = GENMASK(7, 4),
++ },
++ .odr_avl[0] = { 13, 0x01 },
++ .odr_avl[1] = { 26, 0x02 },
++ .odr_avl[2] = { 52, 0x03 },
++ .odr_avl[3] = { 104, 0x04 },
++ .odr_avl[4] = { 208, 0x05 },
++ .odr_avl[5] = { 416, 0x06 },
++ },
++ },
+ .batch = {
+ [ST_LSM6DSX_ID_ACC] = {
+ .addr = 0x09,
+@@ -356,6 +433,32 @@ static const struct st_lsm6dsx_settings st_lsm6dsx_sensor_settings[] = {
+ .name = ST_ASM330LHH_DEV_NAME,
+ },
+ },
++ .odr_table = {
++ [ST_LSM6DSX_ID_ACC] = {
++ .reg = {
++ .addr = 0x10,
++ .mask = GENMASK(7, 4),
++ },
++ .odr_avl[0] = { 13, 0x01 },
++ .odr_avl[1] = { 26, 0x02 },
++ .odr_avl[2] = { 52, 0x03 },
++ .odr_avl[3] = { 104, 0x04 },
++ .odr_avl[4] = { 208, 0x05 },
++ .odr_avl[5] = { 416, 0x06 },
++ },
++ [ST_LSM6DSX_ID_GYRO] = {
++ .reg = {
++ .addr = 0x11,
++ .mask = GENMASK(7, 4),
++ },
++ .odr_avl[0] = { 13, 0x01 },
++ .odr_avl[1] = { 26, 0x02 },
++ .odr_avl[2] = { 52, 0x03 },
++ .odr_avl[3] = { 104, 0x04 },
++ .odr_avl[4] = { 208, 0x05 },
++ .odr_avl[5] = { 416, 0x06 },
++ },
++ },
+ .batch = {
+ [ST_LSM6DSX_ID_ACC] = {
+ .addr = 0x09,
+@@ -398,6 +501,32 @@ static const struct st_lsm6dsx_settings st_lsm6dsx_sensor_settings[] = {
+ .name = ST_LSM6DSR_DEV_NAME,
+ },
+ },
++ .odr_table = {
++ [ST_LSM6DSX_ID_ACC] = {
++ .reg = {
++ .addr = 0x10,
++ .mask = GENMASK(7, 4),
++ },
++ .odr_avl[0] = { 13, 0x01 },
++ .odr_avl[1] = { 26, 0x02 },
++ .odr_avl[2] = { 52, 0x03 },
++ .odr_avl[3] = { 104, 0x04 },
++ .odr_avl[4] = { 208, 0x05 },
++ .odr_avl[5] = { 416, 0x06 },
++ },
++ [ST_LSM6DSX_ID_GYRO] = {
++ .reg = {
++ .addr = 0x11,
++ .mask = GENMASK(7, 4),
++ },
++ .odr_avl[0] = { 13, 0x01 },
++ .odr_avl[1] = { 26, 0x02 },
++ .odr_avl[2] = { 52, 0x03 },
++ .odr_avl[3] = { 104, 0x04 },
++ .odr_avl[4] = { 208, 0x05 },
++ .odr_avl[5] = { 416, 0x06 },
++ },
++ },
+ .batch = {
+ [ST_LSM6DSX_ID_ACC] = {
+ .addr = 0x09,
+@@ -560,22 +689,23 @@ static int st_lsm6dsx_set_full_scale(struct st_lsm6dsx_sensor *sensor,
+
+ int st_lsm6dsx_check_odr(struct st_lsm6dsx_sensor *sensor, u16 odr, u8 *val)
+ {
++ const struct st_lsm6dsx_odr_table_entry *odr_table;
+ int i;
+
++ odr_table = &sensor->hw->settings->odr_table[sensor->id];
+ for (i = 0; i < ST_LSM6DSX_ODR_LIST_SIZE; i++)
+ /*
+ * ext devices can run at different odr respect to
+ * accel sensor
+ */
+- if (st_lsm6dsx_odr_table[sensor->id].odr_avl[i].hz >= odr)
++ if (odr_table->odr_avl[i].hz >= odr)
+ break;
+
+ if (i == ST_LSM6DSX_ODR_LIST_SIZE)
+ return -EINVAL;
+
+- *val = st_lsm6dsx_odr_table[sensor->id].odr_avl[i].val;
+-
+- return 0;
++ *val = odr_table->odr_avl[i].val;
++ return odr_table->odr_avl[i].hz;
+ }
+
+ static u16 st_lsm6dsx_check_odr_dependency(struct st_lsm6dsx_hw *hw, u16 odr,
+@@ -638,7 +768,7 @@ static int st_lsm6dsx_set_odr(struct st_lsm6dsx_sensor *sensor, u16 req_odr)
+ return err;
+ }
+
+- reg = &st_lsm6dsx_odr_table[ref_sensor->id].reg;
++ reg = &hw->settings->odr_table[ref_sensor->id].reg;
+ data = ST_LSM6DSX_SHIFT_VAL(val, reg->mask);
+ return st_lsm6dsx_update_bits_locked(hw, reg->addr, reg->mask, data);
+ }
+@@ -738,8 +868,10 @@ static int st_lsm6dsx_write_raw(struct iio_dev *iio_dev,
+ case IIO_CHAN_INFO_SAMP_FREQ: {
+ u8 data;
+
+- err = st_lsm6dsx_check_odr(sensor, val, &data);
+- if (!err)
++ val = st_lsm6dsx_check_odr(sensor, val, &data);
++ if (val < 0)
++ err = val;
++ else
+ sensor->odr = val;
+ break;
+ }
+@@ -783,11 +915,12 @@ st_lsm6dsx_sysfs_sampling_frequency_avail(struct device *dev,
+ {
+ struct st_lsm6dsx_sensor *sensor = iio_priv(dev_get_drvdata(dev));
+ enum st_lsm6dsx_sensor_id id = sensor->id;
++ struct st_lsm6dsx_hw *hw = sensor->hw;
+ int i, len = 0;
+
+ for (i = 0; i < ST_LSM6DSX_ODR_LIST_SIZE; i++)
+ len += scnprintf(buf + len, PAGE_SIZE - len, "%d ",
+- st_lsm6dsx_odr_table[id].odr_avl[i].hz);
++ hw->settings->odr_table[id].odr_avl[i].hz);
+ buf[len - 1] = '\n';
+
+ return len;
+@@ -1037,7 +1170,7 @@ static struct iio_dev *st_lsm6dsx_alloc_iiodev(struct st_lsm6dsx_hw *hw,
+ sensor = iio_priv(iio_dev);
+ sensor->id = id;
+ sensor->hw = hw;
+- sensor->odr = st_lsm6dsx_odr_table[id].odr_avl[0].hz;
++ sensor->odr = hw->settings->odr_table[id].odr_avl[0].hz;
+ sensor->gain = st_lsm6dsx_fs_table[id].fs_avl[0].gain;
+ sensor->watermark = 1;
+
+diff --git a/drivers/interconnect/qcom/sdm845.c b/drivers/interconnect/qcom/sdm845.c
+index 4915b78da673..3a897f712da8 100644
+--- a/drivers/interconnect/qcom/sdm845.c
++++ b/drivers/interconnect/qcom/sdm845.c
+@@ -807,9 +807,9 @@ static int qnoc_remove(struct platform_device *pdev)
+ {
+ struct qcom_icc_provider *qp = platform_get_drvdata(pdev);
+ struct icc_provider *provider = &qp->provider;
+- struct icc_node *n;
++ struct icc_node *n, *tmp;
+
+- list_for_each_entry(n, &provider->nodes, node_list) {
++ list_for_each_entry_safe(n, tmp, &provider->nodes, node_list) {
+ icc_node_del(n);
+ icc_node_destroy(n->id);
+ }
+diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
+index 1cb137f0ef9d..598e5fafbeed 100644
+--- a/drivers/md/dm-writecache.c
++++ b/drivers/md/dm-writecache.c
+@@ -1218,7 +1218,8 @@ bio_copy:
+ }
+ } while (bio->bi_iter.bi_size);
+
+- if (unlikely(wc->uncommitted_blocks >= wc->autocommit_blocks))
++ if (unlikely(bio->bi_opf & REQ_FUA ||
++ wc->uncommitted_blocks >= wc->autocommit_blocks))
+ writecache_flush(wc);
+ else
+ writecache_schedule_autocommit(wc);
+diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c
+index 595a73110e17..ac1179ca80d9 100644
+--- a/drivers/md/dm-zoned-metadata.c
++++ b/drivers/md/dm-zoned-metadata.c
+@@ -554,6 +554,7 @@ static struct dmz_mblock *dmz_get_mblock(struct dmz_metadata *zmd,
+ TASK_UNINTERRUPTIBLE);
+ if (test_bit(DMZ_META_ERROR, &mblk->state)) {
+ dmz_release_mblock(zmd, mblk);
++ dmz_check_bdev(zmd->dev);
+ return ERR_PTR(-EIO);
+ }
+
+@@ -625,6 +626,8 @@ static int dmz_rdwr_block(struct dmz_metadata *zmd, int op, sector_t block,
+ ret = submit_bio_wait(bio);
+ bio_put(bio);
+
++ if (ret)
++ dmz_check_bdev(zmd->dev);
+ return ret;
+ }
+
+@@ -691,6 +694,7 @@ static int dmz_write_dirty_mblocks(struct dmz_metadata *zmd,
+ TASK_UNINTERRUPTIBLE);
+ if (test_bit(DMZ_META_ERROR, &mblk->state)) {
+ clear_bit(DMZ_META_ERROR, &mblk->state);
++ dmz_check_bdev(zmd->dev);
+ ret = -EIO;
+ }
+ nr_mblks_submitted--;
+@@ -768,7 +772,7 @@ int dmz_flush_metadata(struct dmz_metadata *zmd)
+ /* If there are no dirty metadata blocks, just flush the device cache */
+ if (list_empty(&write_list)) {
+ ret = blkdev_issue_flush(zmd->dev->bdev, GFP_NOIO, NULL);
+- goto out;
++ goto err;
+ }
+
+ /*
+@@ -778,7 +782,7 @@ int dmz_flush_metadata(struct dmz_metadata *zmd)
+ */
+ ret = dmz_log_dirty_mblocks(zmd, &write_list);
+ if (ret)
+- goto out;
++ goto err;
+
+ /*
+ * The log is on disk. It is now safe to update in place
+@@ -786,11 +790,11 @@ int dmz_flush_metadata(struct dmz_metadata *zmd)
+ */
+ ret = dmz_write_dirty_mblocks(zmd, &write_list, zmd->mblk_primary);
+ if (ret)
+- goto out;
++ goto err;
+
+ ret = dmz_write_sb(zmd, zmd->mblk_primary);
+ if (ret)
+- goto out;
++ goto err;
+
+ while (!list_empty(&write_list)) {
+ mblk = list_first_entry(&write_list, struct dmz_mblock, link);
+@@ -805,16 +809,20 @@ int dmz_flush_metadata(struct dmz_metadata *zmd)
+
+ zmd->sb_gen++;
+ out:
+- if (ret && !list_empty(&write_list)) {
+- spin_lock(&zmd->mblk_lock);
+- list_splice(&write_list, &zmd->mblk_dirty_list);
+- spin_unlock(&zmd->mblk_lock);
+- }
+-
+ dmz_unlock_flush(zmd);
+ up_write(&zmd->mblk_sem);
+
+ return ret;
++
++err:
++ if (!list_empty(&write_list)) {
++ spin_lock(&zmd->mblk_lock);
++ list_splice(&write_list, &zmd->mblk_dirty_list);
++ spin_unlock(&zmd->mblk_lock);
++ }
++ if (!dmz_check_bdev(zmd->dev))
++ ret = -EIO;
++ goto out;
+ }
+
+ /*
+@@ -1244,6 +1252,7 @@ static int dmz_update_zone(struct dmz_metadata *zmd, struct dm_zone *zone)
+ if (ret) {
+ dmz_dev_err(zmd->dev, "Get zone %u report failed",
+ dmz_id(zmd, zone));
++ dmz_check_bdev(zmd->dev);
+ return ret;
+ }
+
+diff --git a/drivers/md/dm-zoned-reclaim.c b/drivers/md/dm-zoned-reclaim.c
+index d240d7ca8a8a..e7ace908a9b7 100644
+--- a/drivers/md/dm-zoned-reclaim.c
++++ b/drivers/md/dm-zoned-reclaim.c
+@@ -82,6 +82,7 @@ static int dmz_reclaim_align_wp(struct dmz_reclaim *zrc, struct dm_zone *zone,
+ "Align zone %u wp %llu to %llu (wp+%u) blocks failed %d",
+ dmz_id(zmd, zone), (unsigned long long)wp_block,
+ (unsigned long long)block, nr_blocks, ret);
++ dmz_check_bdev(zrc->dev);
+ return ret;
+ }
+
+@@ -489,12 +490,7 @@ static void dmz_reclaim_work(struct work_struct *work)
+ ret = dmz_do_reclaim(zrc);
+ if (ret) {
+ dmz_dev_debug(zrc->dev, "Reclaim error %d\n", ret);
+- if (ret == -EIO)
+- /*
+- * LLD might be performing some error handling sequence
+- * at the underlying device. To not interfere, do not
+- * attempt to schedule the next reclaim run immediately.
+- */
++ if (!dmz_check_bdev(zrc->dev))
+ return;
+ }
+
+diff --git a/drivers/md/dm-zoned-target.c b/drivers/md/dm-zoned-target.c
+index d3bcc4197f5d..4574e0dedbd6 100644
+--- a/drivers/md/dm-zoned-target.c
++++ b/drivers/md/dm-zoned-target.c
+@@ -80,6 +80,8 @@ static inline void dmz_bio_endio(struct bio *bio, blk_status_t status)
+
+ if (status != BLK_STS_OK && bio->bi_status == BLK_STS_OK)
+ bio->bi_status = status;
++ if (bio->bi_status != BLK_STS_OK)
++ bioctx->target->dev->flags |= DMZ_CHECK_BDEV;
+
+ if (refcount_dec_and_test(&bioctx->ref)) {
+ struct dm_zone *zone = bioctx->zone;
+@@ -565,31 +567,51 @@ out:
+ }
+
+ /*
+- * Check the backing device availability. If it's on the way out,
++ * Check if the backing device is being removed. If it's on the way out,
+ * start failing I/O. Reclaim and metadata components also call this
+ * function to cleanly abort operation in the event of such failure.
+ */
+ bool dmz_bdev_is_dying(struct dmz_dev *dmz_dev)
+ {
+- struct gendisk *disk;
++ if (dmz_dev->flags & DMZ_BDEV_DYING)
++ return true;
+
+- if (!(dmz_dev->flags & DMZ_BDEV_DYING)) {
+- disk = dmz_dev->bdev->bd_disk;
+- if (blk_queue_dying(bdev_get_queue(dmz_dev->bdev))) {
+- dmz_dev_warn(dmz_dev, "Backing device queue dying");
+- dmz_dev->flags |= DMZ_BDEV_DYING;
+- } else if (disk->fops->check_events) {
+- if (disk->fops->check_events(disk, 0) &
+- DISK_EVENT_MEDIA_CHANGE) {
+- dmz_dev_warn(dmz_dev, "Backing device offline");
+- dmz_dev->flags |= DMZ_BDEV_DYING;
+- }
+- }
++ if (dmz_dev->flags & DMZ_CHECK_BDEV)
++ return !dmz_check_bdev(dmz_dev);
++
++ if (blk_queue_dying(bdev_get_queue(dmz_dev->bdev))) {
++ dmz_dev_warn(dmz_dev, "Backing device queue dying");
++ dmz_dev->flags |= DMZ_BDEV_DYING;
+ }
+
+ return dmz_dev->flags & DMZ_BDEV_DYING;
+ }
+
++/*
++ * Check the backing device availability. This detects events such as
++ * the backing device going offline due to errors, media removal, etc.
++ * This check is less efficient than dmz_bdev_is_dying() and should
++ * only be performed as part of error handling.
++ */
++bool dmz_check_bdev(struct dmz_dev *dmz_dev)
++{
++ struct gendisk *disk;
++
++ dmz_dev->flags &= ~DMZ_CHECK_BDEV;
++
++ if (dmz_bdev_is_dying(dmz_dev))
++ return false;
++
++ disk = dmz_dev->bdev->bd_disk;
++ if (disk->fops->check_events &&
++ disk->fops->check_events(disk, 0) & DISK_EVENT_MEDIA_CHANGE) {
++ dmz_dev_warn(dmz_dev, "Backing device offline");
++ dmz_dev->flags |= DMZ_BDEV_DYING;
++ }
++
++ return !(dmz_dev->flags & DMZ_BDEV_DYING);
++}
++
+ /*
+ * Process a new BIO.
+ */
+@@ -902,8 +924,8 @@ static int dmz_prepare_ioctl(struct dm_target *ti, struct block_device **bdev)
+ {
+ struct dmz_target *dmz = ti->private;
+
+- if (dmz_bdev_is_dying(dmz->dev))
+- return -ENODEV;
++ if (!dmz_check_bdev(dmz->dev))
++ return -EIO;
+
+ *bdev = dmz->dev->bdev;
+
+diff --git a/drivers/md/dm-zoned.h b/drivers/md/dm-zoned.h
+index d8e70b0ade35..5b5e493d479c 100644
+--- a/drivers/md/dm-zoned.h
++++ b/drivers/md/dm-zoned.h
+@@ -72,6 +72,7 @@ struct dmz_dev {
+
+ /* Device flags. */
+ #define DMZ_BDEV_DYING (1 << 0)
++#define DMZ_CHECK_BDEV	(1 << 1)
+
+ /*
+ * Zone descriptor.
+@@ -255,5 +256,6 @@ void dmz_schedule_reclaim(struct dmz_reclaim *zrc);
+ * Functions defined in dm-zoned-target.c
+ */
+ bool dmz_bdev_is_dying(struct dmz_dev *dmz_dev);
++bool dmz_check_bdev(struct dmz_dev *dmz_dev);
+
+ #endif /* DM_ZONED_H */
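+
+The dm-zoned hunks above split failure detection in two: hot paths only set
+the cheap DMZ_CHECK_BDEV flag, and the expensive disk-event query runs later
+from error-handling context via dmz_check_bdev(). A minimal stand-alone sketch
+of that deferred-check pattern (the names and the device_query_offline() stub
+are illustrative, not the dm-zoned API):
+
+#include <stdbool.h>
+
+#define DEV_DYING       (1u << 0)
+#define DEV_NEEDS_CHECK (1u << 1)
+
+struct dev_state { unsigned int flags; };
+
+/* Stub standing in for the expensive media/offline query. */
+static bool device_query_offline(struct dev_state *d)
+{
+	(void)d;
+	return false;	/* a real driver would poll the hardware here */
+}
+
+/* Hot path: an I/O error only records that a check is pending. */
+static void on_io_error(struct dev_state *d)
+{
+	d->flags |= DEV_NEEDS_CHECK;
+}
+
+/* Slow path: run the expensive check once, from error handling.
+ * Returns true while the device is still usable. */
+static bool dev_check(struct dev_state *d)
+{
+	d->flags &= ~DEV_NEEDS_CHECK;
+	if (d->flags & DEV_DYING)
+		return false;
+	if (device_query_offline(d))
+		d->flags |= DEV_DYING;
+	return !(d->flags & DEV_DYING);
+}
+
+/* Fast query used on every I/O: cheap unless a check is pending. */
+static bool dev_is_dying(struct dev_state *d)
+{
+	if (d->flags & DEV_DYING)
+		return true;
+	if (d->flags & DEV_NEEDS_CHECK)
+		return !dev_check(d);
+	return false;
+}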
+diff --git a/drivers/md/md-linear.c b/drivers/md/md-linear.c
+index 7354466ddc90..afcf1d388300 100644
+--- a/drivers/md/md-linear.c
++++ b/drivers/md/md-linear.c
+@@ -244,10 +244,9 @@ static bool linear_make_request(struct mddev *mddev, struct bio *bio)
+ sector_t start_sector, end_sector, data_offset;
+ sector_t bio_sector = bio->bi_iter.bi_sector;
+
+- if (unlikely(bio->bi_opf & REQ_PREFLUSH)) {
+- md_flush_request(mddev, bio);
++ if (unlikely(bio->bi_opf & REQ_PREFLUSH)
++ && md_flush_request(mddev, bio))
+ return true;
+- }
+
+ tmp_dev = which_dev(mddev, bio_sector);
+ start_sector = tmp_dev->end_sector - tmp_dev->rdev->sectors;
+diff --git a/drivers/md/md-multipath.c b/drivers/md/md-multipath.c
+index 6780938d2991..152f9e65a226 100644
+--- a/drivers/md/md-multipath.c
++++ b/drivers/md/md-multipath.c
+@@ -104,10 +104,9 @@ static bool multipath_make_request(struct mddev *mddev, struct bio * bio)
+ struct multipath_bh * mp_bh;
+ struct multipath_info *multipath;
+
+- if (unlikely(bio->bi_opf & REQ_PREFLUSH)) {
+- md_flush_request(mddev, bio);
++ if (unlikely(bio->bi_opf & REQ_PREFLUSH)
++ && md_flush_request(mddev, bio))
+ return true;
+- }
+
+ mp_bh = mempool_alloc(&conf->pool, GFP_NOIO);
+
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 3100dd53c64c..33e67e315c95 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -545,7 +545,13 @@ static void md_submit_flush_data(struct work_struct *ws)
+ }
+ }
+
+-void md_flush_request(struct mddev *mddev, struct bio *bio)
++/*
++ * Manages the consolidation of flushes and submits any flushes needed for
++ * a bio with REQ_PREFLUSH. Returns true if the bio is finished or is
++ * being finished in another context. Returns false if the flushing is
++ * complete but the I/O portion of the bio still needs to be processed.
++ */
++bool md_flush_request(struct mddev *mddev, struct bio *bio)
+ {
+ ktime_t start = ktime_get_boottime();
+ spin_lock_irq(&mddev->lock);
+@@ -570,9 +576,10 @@ void md_flush_request(struct mddev *mddev, struct bio *bio)
+ bio_endio(bio);
+ else {
+ bio->bi_opf &= ~REQ_PREFLUSH;
+- mddev->pers->make_request(mddev, bio);
++ return false;
+ }
+ }
++ return true;
+ }
+ EXPORT_SYMBOL(md_flush_request);
+
+diff --git a/drivers/md/md.h b/drivers/md/md.h
+index 08f2aee383e8..cf070eec6753 100644
+--- a/drivers/md/md.h
++++ b/drivers/md/md.h
+@@ -546,7 +546,7 @@ struct md_personality
+ int level;
+ struct list_head list;
+ struct module *owner;
+- bool (*make_request)(struct mddev *mddev, struct bio *bio);
++ bool __must_check (*make_request)(struct mddev *mddev, struct bio *bio);
+ /*
+ * start up works that do NOT require md_thread. tasks that
+ * requires md_thread should go into start()
+@@ -699,7 +699,7 @@ extern void md_error(struct mddev *mddev, struct md_rdev *rdev);
+ extern void md_finish_reshape(struct mddev *mddev);
+
+ extern int mddev_congested(struct mddev *mddev, int bits);
+-extern void md_flush_request(struct mddev *mddev, struct bio *bio);
++extern bool __must_check md_flush_request(struct mddev *mddev, struct bio *bio);
+ extern void md_super_write(struct mddev *mddev, struct md_rdev *rdev,
+ sector_t sector, int size, struct page *page);
+ extern int md_super_wait(struct mddev *mddev);
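+
+md_flush_request() now returns bool, and the personalities in the hunks above
+and below all adopt the same guard: return early when the flush machinery has
+finished (or taken over) the bio, fall through when only the data portion
+remains. A sketch of the resulting caller contract (kernel-style, compiles
+only in-tree; example_make_request() is illustrative):
+
+/* Returns true once the bio has been fully handled. */
+static bool example_make_request(struct mddev *mddev, struct bio *bio)
+{
+	if (unlikely(bio->bi_opf & REQ_PREFLUSH) &&
+	    md_flush_request(mddev, bio))
+		return true;	/* ended, or being finished elsewhere */
+
+	/* The flush (if any) is complete; process the I/O portion here. */
+	return true;
+}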
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index 94c3f1a6fb5c..8f4046d5789d 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -572,10 +572,9 @@ static bool raid0_make_request(struct mddev *mddev, struct bio *bio)
+ unsigned chunk_sects;
+ unsigned sectors;
+
+- if (unlikely(bio->bi_opf & REQ_PREFLUSH)) {
+- md_flush_request(mddev, bio);
++ if (unlikely(bio->bi_opf & REQ_PREFLUSH)
++ && md_flush_request(mddev, bio))
+ return true;
+- }
+
+ if (unlikely((bio_op(bio) == REQ_OP_DISCARD))) {
+ raid0_handle_discard(mddev, bio);
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 5afbb7df06e7..df4db2d8e1de 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -1564,10 +1564,9 @@ static bool raid1_make_request(struct mddev *mddev, struct bio *bio)
+ {
+ sector_t sectors;
+
+- if (unlikely(bio->bi_opf & REQ_PREFLUSH)) {
+- md_flush_request(mddev, bio);
++ if (unlikely(bio->bi_opf & REQ_PREFLUSH)
++ && md_flush_request(mddev, bio))
+ return true;
+- }
+
+ /*
+ * There is a limit to the maximum size, but
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index c0c653e35fbb..0259c1137b95 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -1523,10 +1523,9 @@ static bool raid10_make_request(struct mddev *mddev, struct bio *bio)
+ int chunk_sects = chunk_mask + 1;
+ int sectors = bio_sectors(bio);
+
+- if (unlikely(bio->bi_opf & REQ_PREFLUSH)) {
+- md_flush_request(mddev, bio);
++ if (unlikely(bio->bi_opf & REQ_PREFLUSH)
++ && md_flush_request(mddev, bio))
+ return true;
+- }
+
+ if (!md_write_start(mddev, bio))
+ return false;
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 39f8ef6ee59c..3ffc1ae2fe72 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -5587,8 +5587,8 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
+ if (ret == 0)
+ return true;
+ if (ret == -ENODEV) {
+- md_flush_request(mddev, bi);
+- return true;
++ if (md_flush_request(mddev, bi))
++ return true;
+ }
+ /* ret == -EAGAIN, fallback */
+ /*
+@@ -5721,7 +5721,7 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
+ do_flush = false;
+ }
+
+- if (!sh->batch_head)
++ if (!sh->batch_head || sh == sh->batch_head)
+ set_bit(STRIPE_HANDLE, &sh->state);
+ clear_bit(STRIPE_DELAYED, &sh->state);
+ if ((!sh->batch_head || sh == sh->batch_head) &&
+diff --git a/drivers/media/platform/qcom/venus/vdec.c b/drivers/media/platform/qcom/venus/vdec.c
+index e1f998656c07..fb399586a3ec 100644
+--- a/drivers/media/platform/qcom/venus/vdec.c
++++ b/drivers/media/platform/qcom/venus/vdec.c
+@@ -1104,9 +1104,6 @@ static const struct v4l2_file_operations vdec_fops = {
+ .unlocked_ioctl = video_ioctl2,
+ .poll = v4l2_m2m_fop_poll,
+ .mmap = v4l2_m2m_fop_mmap,
+-#ifdef CONFIG_COMPAT
+- .compat_ioctl32 = v4l2_compat_ioctl32,
+-#endif
+ };
+
+ static int vdec_probe(struct platform_device *pdev)
+diff --git a/drivers/media/platform/qcom/venus/venc.c b/drivers/media/platform/qcom/venus/venc.c
+index a5f3d2c46bea..114e14e4e41c 100644
+--- a/drivers/media/platform/qcom/venus/venc.c
++++ b/drivers/media/platform/qcom/venus/venc.c
+@@ -1230,9 +1230,6 @@ static const struct v4l2_file_operations venc_fops = {
+ .unlocked_ioctl = video_ioctl2,
+ .poll = v4l2_m2m_fop_poll,
+ .mmap = v4l2_m2m_fop_mmap,
+-#ifdef CONFIG_COMPAT
+- .compat_ioctl32 = v4l2_compat_ioctl32,
+-#endif
+ };
+
+ static int venc_probe(struct platform_device *pdev)
+diff --git a/drivers/media/platform/sti/bdisp/bdisp-v4l2.c b/drivers/media/platform/sti/bdisp/bdisp-v4l2.c
+index 79f7db1a9d18..908e7a144c5b 100644
+--- a/drivers/media/platform/sti/bdisp/bdisp-v4l2.c
++++ b/drivers/media/platform/sti/bdisp/bdisp-v4l2.c
+@@ -651,8 +651,7 @@ static int bdisp_release(struct file *file)
+
+ dev_dbg(bdisp->dev, "%s\n", __func__);
+
+- if (mutex_lock_interruptible(&bdisp->lock))
+- return -ERESTARTSYS;
++ mutex_lock(&bdisp->lock);
+
+ v4l2_m2m_ctx_release(ctx->fh.m2m_ctx);
+
+diff --git a/drivers/media/radio/radio-wl1273.c b/drivers/media/radio/radio-wl1273.c
+index 104ac41c6f96..112376873167 100644
+--- a/drivers/media/radio/radio-wl1273.c
++++ b/drivers/media/radio/radio-wl1273.c
+@@ -1148,8 +1148,7 @@ static int wl1273_fm_fops_release(struct file *file)
+ if (radio->rds_users > 0) {
+ radio->rds_users--;
+ if (radio->rds_users == 0) {
+- if (mutex_lock_interruptible(&core->lock))
+- return -EINTR;
++ mutex_lock(&core->lock);
+
+ radio->irq_flags &= ~WL1273_RDS_EVENT;
+
+diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c
+index 952fa4063ff8..d0df054b0b47 100644
+--- a/drivers/mmc/host/omap_hsmmc.c
++++ b/drivers/mmc/host/omap_hsmmc.c
+@@ -1512,6 +1512,36 @@ static void omap_hsmmc_init_card(struct mmc_host *mmc, struct mmc_card *card)
+
+ if (mmc_pdata(host)->init_card)
+ mmc_pdata(host)->init_card(card);
++ else if (card->type == MMC_TYPE_SDIO ||
++ card->type == MMC_TYPE_SD_COMBO) {
++ struct device_node *np = mmc_dev(mmc)->of_node;
++
++ /*
++ * REVISIT: this should be moved to the SDIO core and made more
++ * general, e.g. by expanding the DT bindings of child nodes
++ * to provide a mechanism for passing this information:
++ * Documentation/devicetree/bindings/mmc/mmc-card.txt
++ */
++
++ np = of_get_compatible_child(np, "ti,wl1251");
++ if (np) {
++ /*
++ * We have TI wl1251 attached to MMC3. Pass this
++ * information to the SDIO core because it can't be
++ * probed by normal methods.
++ */
++
++ dev_info(host->dev, "found wl1251\n");
++ card->quirks |= MMC_QUIRK_NONSTD_SDIO;
++ card->cccr.wide_bus = 1;
++ card->cis.vendor = 0x104c;
++ card->cis.device = 0x9066;
++ card->cis.blksize = 512;
++ card->cis.max_dtr = 24000000;
++ card->ocr = 0x80;
++ of_node_put(np);
++ }
++ }
+ }
+
+ static void omap_hsmmc_enable_sdio_irq(struct mmc_host *mmc, int enable)
+diff --git a/drivers/mtd/devices/spear_smi.c b/drivers/mtd/devices/spear_smi.c
+index 986f81d2f93e..47ad0766affa 100644
+--- a/drivers/mtd/devices/spear_smi.c
++++ b/drivers/mtd/devices/spear_smi.c
+@@ -592,6 +592,26 @@ static int spear_mtd_read(struct mtd_info *mtd, loff_t from, size_t len,
+ return 0;
+ }
+
++/*
++ * The purpose of this function is to ensure a memcpy_toio() done with byte
++ * writes only. Its structure is inspired by the ARM implementation of
++ * _memcpy_toio(), which also does single byte writes but cannot be used here
++ * because it is an implementation detail rather than part of the API, not to
++ * mention the comment stating that _memcpy_toio() should be optimized.
++ */
++static void spear_smi_memcpy_toio_b(volatile void __iomem *dest,
++ const void *src, size_t len)
++{
++ const unsigned char *from = src;
++
++ while (len) {
++ len--;
++ writeb(*from, dest);
++ from++;
++ dest++;
++ }
++}
++
+ static inline int spear_smi_cpy_toio(struct spear_smi *dev, u32 bank,
+ void __iomem *dest, const void *src, size_t len)
+ {
+@@ -614,7 +634,23 @@ static inline int spear_smi_cpy_toio(struct spear_smi *dev, u32 bank,
+ ctrlreg1 = readl(dev->io_base + SMI_CR1);
+ writel((ctrlreg1 | WB_MODE) & ~SW_MODE, dev->io_base + SMI_CR1);
+
+- memcpy_toio(dest, src, len);
++ /*
++ * In Write Burst mode (WB_MODE), the spec states that writes must be:
++ * - incremental
++ * - of the same size
++ * The ARM implementation of memcpy_toio() will minimize the number of
++ * I/O accesses by using as many 4-byte writes as possible, surrounded by
++ * 2-byte/1-byte accesses if:
++ * - the destination is not 4-byte aligned
++ * - the length is not a multiple of 4 bytes.
++ * Avoid this alternation of write access sizes by using our own 'byte
++ * access' helper if at least one of the two conditions above is true.
++ */
++ if (IS_ALIGNED(len, sizeof(u32)) &&
++ IS_ALIGNED((uintptr_t)dest, sizeof(u32)))
++ memcpy_toio(dest, src, len);
++ else
++ spear_smi_memcpy_toio_b(dest, src, len);
+
+ writel(ctrlreg1, dev->io_base + SMI_CR1);
+
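+
+The alignment gate above can be exercised outside the kernel. A user-space
+analogue under the same rule, with memcpy() standing in for memcpy_toio() and
+the helpers re-implemented for illustration:
+
+#include <stddef.h>
+#include <stdint.h>
+#include <string.h>
+
+#define ALIGNED_TO(x, a) (((uintptr_t)(x) % (a)) == 0)
+
+/* Copy byte by byte so every access has the same (1-byte) width. */
+static void copy_bytes(volatile uint8_t *dst, const uint8_t *src, size_t len)
+{
+	while (len--)
+		*dst++ = *src++;
+}
+
+static void burst_safe_copy(volatile void *dst, const void *src, size_t len)
+{
+	/* Word copies are only safe when both the length and the
+	 * destination are word aligned; otherwise fall back to
+	 * uniform byte-wide writes. */
+	if (ALIGNED_TO(len, sizeof(uint32_t)) &&
+	    ALIGNED_TO(dst, sizeof(uint32_t)))
+		memcpy((void *)dst, src, len);
+	else
+		copy_bytes((volatile uint8_t *)dst, (const uint8_t *)src, len);
+}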
+diff --git a/drivers/mtd/nand/raw/nand_base.c b/drivers/mtd/nand/raw/nand_base.c
+index 91f046d4d452..e78cd47e8ec4 100644
+--- a/drivers/mtd/nand/raw/nand_base.c
++++ b/drivers/mtd/nand/raw/nand_base.c
+@@ -292,12 +292,16 @@ int nand_bbm_get_next_page(struct nand_chip *chip, int page)
+ struct mtd_info *mtd = nand_to_mtd(chip);
+ int last_page = ((mtd->erasesize - mtd->writesize) >>
+ chip->page_shift) & chip->pagemask;
++ unsigned int bbm_flags = NAND_BBM_FIRSTPAGE | NAND_BBM_SECONDPAGE
++ | NAND_BBM_LASTPAGE;
+
++ if (page == 0 && !(chip->options & bbm_flags))
++ return 0;
+ if (page == 0 && chip->options & NAND_BBM_FIRSTPAGE)
+ return 0;
+- else if (page <= 1 && chip->options & NAND_BBM_SECONDPAGE)
++ if (page <= 1 && chip->options & NAND_BBM_SECONDPAGE)
+ return 1;
+- else if (page <= last_page && chip->options & NAND_BBM_LASTPAGE)
++ if (page <= last_page && chip->options & NAND_BBM_LASTPAGE)
+ return last_page;
+
+ return -EINVAL;
+diff --git a/drivers/mtd/nand/raw/nand_micron.c b/drivers/mtd/nand/raw/nand_micron.c
+index 8ca9fad6e6ad..56654030ec7f 100644
+--- a/drivers/mtd/nand/raw/nand_micron.c
++++ b/drivers/mtd/nand/raw/nand_micron.c
+@@ -446,8 +446,10 @@ static int micron_nand_init(struct nand_chip *chip)
+ if (ret)
+ goto err_free_manuf_data;
+
++ chip->options |= NAND_BBM_FIRSTPAGE;
++
+ if (mtd->writesize == 2048)
+- chip->options |= NAND_BBM_FIRSTPAGE | NAND_BBM_SECONDPAGE;
++ chip->options |= NAND_BBM_SECONDPAGE;
+
+ ondie = micron_supports_on_die_ecc(chip);
+
+diff --git a/drivers/net/wireless/ath/ar5523/ar5523.c b/drivers/net/wireless/ath/ar5523/ar5523.c
+index b94759daeacc..da2d179430ca 100644
+--- a/drivers/net/wireless/ath/ar5523/ar5523.c
++++ b/drivers/net/wireless/ath/ar5523/ar5523.c
+@@ -255,7 +255,8 @@ static int ar5523_cmd(struct ar5523 *ar, u32 code, const void *idata,
+
+ if (flags & AR5523_CMD_FLAG_MAGIC)
+ hdr->magic = cpu_to_be32(1 << 24);
+- memcpy(hdr + 1, idata, ilen);
++ if (ilen)
++ memcpy(hdr + 1, idata, ilen);
+
+ cmd->odata = odata;
+ cmd->olen = olen;
+diff --git a/drivers/net/wireless/ath/wil6210/wmi.c b/drivers/net/wireless/ath/wil6210/wmi.c
+index 475b1a233cc9..8f7bd3edabf5 100644
+--- a/drivers/net/wireless/ath/wil6210/wmi.c
++++ b/drivers/net/wireless/ath/wil6210/wmi.c
+@@ -2478,7 +2478,8 @@ int wmi_set_ie(struct wil6210_vif *vif, u8 type, u16 ie_len, const void *ie)
+ cmd->mgmt_frm_type = type;
+ /* BUG: FW API define ieLen as u8. Will fix FW */
+ cmd->ie_len = cpu_to_le16(ie_len);
+- memcpy(cmd->ie_info, ie, ie_len);
++ if (ie_len)
++ memcpy(cmd->ie_info, ie, ie_len);
+ rc = wmi_send(wil, WMI_SET_APPIE_CMDID, vif->mid, cmd, len);
+ kfree(cmd);
+ out:
+@@ -2514,7 +2515,8 @@ int wmi_update_ft_ies(struct wil6210_vif *vif, u16 ie_len, const void *ie)
+ }
+
+ cmd->ie_len = cpu_to_le16(ie_len);
+- memcpy(cmd->ie_info, ie, ie_len);
++ if (ie_len)
++ memcpy(cmd->ie_info, ie, ie_len);
+ rc = wmi_send(wil, WMI_UPDATE_FT_IES_CMDID, vif->mid, cmd, len);
+ kfree(cmd);
+
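+Both wireless hunks above add the same guard because passing a NULL source to
+memcpy() is undefined behavior even when the length is zero, and these IE
+buffers may legitimately be absent. The guard in isolation (copy_optional()
+is an illustrative name):
+
+#include <string.h>
+
+/* src may be NULL when len == 0, so test the length first. */
+static void copy_optional(void *dst, const void *src, size_t len)
+{
+	if (len)
+		memcpy(dst, src, len);
+}
+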
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+index 4ea5401c4d6b..88a2a8087a11 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+@@ -1426,6 +1426,8 @@ static int brcmf_pcie_reset(struct device *dev)
+ struct brcmf_fw_request *fwreq;
+ int err;
+
++ brcmf_pcie_intr_disable(devinfo);
++
+ brcmf_pcie_bus_console_read(devinfo, true);
+
+ brcmf_detach(dev);
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
+index 0fbf8c1d5c98..3c5925a95719 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
+@@ -469,6 +469,7 @@ iwl_tfh_tfd *iwl_pcie_gen2_build_tx(struct iwl_trans *trans,
+ dma_addr_t tb_phys;
+ int len, tb1_len, tb2_len;
+ void *tb1_addr;
++ struct sk_buff *frag;
+
+ tb_phys = iwl_pcie_get_first_tb_dma(txq, idx);
+
+@@ -517,6 +518,19 @@ iwl_tfh_tfd *iwl_pcie_gen2_build_tx(struct iwl_trans *trans,
+ if (iwl_pcie_gen2_tx_add_frags(trans, skb, tfd, out_meta))
+ goto out_err;
+
++ skb_walk_frags(skb, frag) {
++ tb_phys = dma_map_single(trans->dev, frag->data,
++ skb_headlen(frag), DMA_TO_DEVICE);
++ if (unlikely(dma_mapping_error(trans->dev, tb_phys)))
++ goto out_err;
++ iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, skb_headlen(frag));
++ trace_iwlwifi_dev_tx_tb(trans->dev, skb,
++ frag->data,
++ skb_headlen(frag));
++ if (iwl_pcie_gen2_tx_add_frags(trans, frag, tfd, out_meta))
++ goto out_err;
++ }
++
+ return tfd;
+
+ out_err:
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/hw.c
+index c7f29a9be50d..146fe144f5f5 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/hw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/hw.c
+@@ -1176,6 +1176,7 @@ void rtl92de_enable_interrupt(struct ieee80211_hw *hw)
+
+ rtl_write_dword(rtlpriv, REG_HIMR, rtlpci->irq_mask[0] & 0xFFFFFFFF);
+ rtl_write_dword(rtlpriv, REG_HIMRE, rtlpci->irq_mask[1] & 0xFFFFFFFF);
++ rtlpci->irq_enabled = true;
+ }
+
+ void rtl92de_disable_interrupt(struct ieee80211_hw *hw)
+@@ -1185,7 +1186,7 @@ void rtl92de_disable_interrupt(struct ieee80211_hw *hw)
+
+ rtl_write_dword(rtlpriv, REG_HIMR, IMR8190_DISABLED);
+ rtl_write_dword(rtlpriv, REG_HIMRE, IMR8190_DISABLED);
+- synchronize_irq(rtlpci->pdev->irq);
++ rtlpci->irq_enabled = false;
+ }
+
+ static void _rtl92de_poweroff_adapter(struct ieee80211_hw *hw)
+@@ -1351,7 +1352,7 @@ void rtl92de_set_beacon_related_registers(struct ieee80211_hw *hw)
+
+ bcn_interval = mac->beacon_interval;
+ atim_window = 2;
+- /*rtl92de_disable_interrupt(hw); */
++ rtl92de_disable_interrupt(hw);
+ rtl_write_word(rtlpriv, REG_ATIMWND, atim_window);
+ rtl_write_word(rtlpriv, REG_BCN_INTERVAL, bcn_interval);
+ rtl_write_word(rtlpriv, REG_BCNTCFG, 0x660f);
+@@ -1371,9 +1372,9 @@ void rtl92de_set_beacon_interval(struct ieee80211_hw *hw)
+
+ RT_TRACE(rtlpriv, COMP_BEACON, DBG_DMESG,
+ "beacon_interval:%d\n", bcn_interval);
+- /* rtl92de_disable_interrupt(hw); */
++ rtl92de_disable_interrupt(hw);
+ rtl_write_word(rtlpriv, REG_BCN_INTERVAL, bcn_interval);
+- /* rtl92de_enable_interrupt(hw); */
++ rtl92de_enable_interrupt(hw);
+ }
+
+ void rtl92de_update_interrupt_mask(struct ieee80211_hw *hw,
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/sw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/sw.c
+index 99e5cd9a5c86..1dbdddce0823 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/sw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/sw.c
+@@ -216,6 +216,7 @@ static struct rtl_hal_ops rtl8192de_hal_ops = {
+ .led_control = rtl92de_led_control,
+ .set_desc = rtl92de_set_desc,
+ .get_desc = rtl92de_get_desc,
++ .is_tx_desc_closed = rtl92de_is_tx_desc_closed,
+ .tx_polling = rtl92de_tx_polling,
+ .enable_hw_sec = rtl92de_enable_hw_security_config,
+ .set_key = rtl92de_set_key,
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.c
+index d162884a9e00..26092bb08f1d 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.c
+@@ -818,13 +818,15 @@ u64 rtl92de_get_desc(struct ieee80211_hw *hw,
+ break;
+ }
+ } else {
+- struct rx_desc_92c *pdesc = (struct rx_desc_92c *)p_desc;
+ switch (desc_name) {
+ case HW_DESC_OWN:
+- ret = GET_RX_DESC_OWN(pdesc);
++ ret = GET_RX_DESC_OWN(p_desc);
+ break;
+ case HW_DESC_RXPKT_LEN:
+- ret = GET_RX_DESC_PKT_LEN(pdesc);
++ ret = GET_RX_DESC_PKT_LEN(p_desc);
++ break;
++ case HW_DESC_RXBUFF_ADDR:
++ ret = GET_RX_DESC_BUFF_ADDR(p_desc);
+ break;
+ default:
+ WARN_ONCE(true, "rtl8192de: ERR rxdesc :%d not processed\n",
+@@ -835,6 +837,23 @@ u64 rtl92de_get_desc(struct ieee80211_hw *hw,
+ return ret;
+ }
+
++bool rtl92de_is_tx_desc_closed(struct ieee80211_hw *hw,
++ u8 hw_queue, u16 index)
++{
++ struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
++ struct rtl8192_tx_ring *ring = &rtlpci->tx_ring[hw_queue];
++ u8 *entry = (u8 *)(&ring->desc[ring->idx]);
++ u8 own = (u8)rtl92de_get_desc(hw, entry, true, HW_DESC_OWN);
++
++ /* a beacon packet will only use the first
++ * descriptor by default, and the own bit may not
++ * be cleared by the hardware
++ */
++ if (own)
++ return false;
++ return true;
++}
++
+ void rtl92de_tx_polling(struct ieee80211_hw *hw, u8 hw_queue)
+ {
+ struct rtl_priv *rtlpriv = rtl_priv(hw);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.h b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.h
+index 36820070fd76..635989e15282 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.h
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.h
+@@ -715,6 +715,8 @@ void rtl92de_set_desc(struct ieee80211_hw *hw, u8 *pdesc, bool istx,
+ u8 desc_name, u8 *val);
+ u64 rtl92de_get_desc(struct ieee80211_hw *hw,
+ u8 *p_desc, bool istx, u8 desc_name);
++bool rtl92de_is_tx_desc_closed(struct ieee80211_hw *hw,
++ u8 hw_queue, u16 index);
+ void rtl92de_tx_polling(struct ieee80211_hw *hw, u8 hw_queue);
+ void rtl92de_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc,
+ bool b_firstseg, bool b_lastseg,
+diff --git a/drivers/net/wireless/virt_wifi.c b/drivers/net/wireless/virt_wifi.c
+index 7997cc6de334..01305ba2d3aa 100644
+--- a/drivers/net/wireless/virt_wifi.c
++++ b/drivers/net/wireless/virt_wifi.c
+@@ -450,7 +450,6 @@ static void virt_wifi_net_device_destructor(struct net_device *dev)
+ */
+ kfree(dev->ieee80211_ptr);
+ dev->ieee80211_ptr = NULL;
+- free_netdev(dev);
+ }
+
+ /* No lock interaction. */
+@@ -458,7 +457,7 @@ static void virt_wifi_setup(struct net_device *dev)
+ {
+ ether_setup(dev);
+ dev->netdev_ops = &virt_wifi_ops;
+- dev->priv_destructor = virt_wifi_net_device_destructor;
++ dev->needs_free_netdev = true;
+ }
+
+ /* Called in a RCU read critical section from netif_receive_skb */
+@@ -544,6 +543,7 @@ static int virt_wifi_newlink(struct net *src_net, struct net_device *dev,
+ goto unregister_netdev;
+ }
+
++ dev->priv_destructor = virt_wifi_net_device_destructor;
+ priv->being_deleted = false;
+ priv->is_connected = false;
+ priv->is_up = false;
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 3304e2c8a448..ac2ac06d870b 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -2270,16 +2270,6 @@ static const struct nvme_core_quirk_entry core_quirks[] = {
+ .vid = 0x14a4,
+ .fr = "22301111",
+ .quirks = NVME_QUIRK_SIMPLE_SUSPEND,
+- },
+- {
+- /*
+- * This Kingston E8FK11.T firmware version has no interrupt
+- * after resume with actions related to suspend to idle
+- * https://bugzilla.kernel.org/show_bug.cgi?id=204887
+- */
+- .vid = 0x2646,
+- .fr = "E8FK11.T",
+- .quirks = NVME_QUIRK_SIMPLE_SUSPEND,
+ }
+ };
+
+diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
+index e4c46637f32f..b3869951c0eb 100644
+--- a/drivers/pci/hotplug/acpiphp_glue.c
++++ b/drivers/pci/hotplug/acpiphp_glue.c
+@@ -449,8 +449,15 @@ static void acpiphp_native_scan_bridge(struct pci_dev *bridge)
+
+ /* Scan non-hotplug bridges that need to be reconfigured */
+ for_each_pci_bridge(dev, bus) {
+- if (!hotplug_is_native(dev))
+- max = pci_scan_bridge(bus, dev, max, 1);
++ if (hotplug_is_native(dev))
++ continue;
++
++ max = pci_scan_bridge(bus, dev, max, 1);
++ if (dev->subordinate) {
++ pcibios_resource_survey_bus(dev->subordinate);
++ pci_bus_size_bridges(dev->subordinate);
++ pci_bus_assign_resources(dev->subordinate);
++ }
+ }
+ }
+
+@@ -480,7 +487,6 @@ static void enable_slot(struct acpiphp_slot *slot, bool bridge)
+ if (PCI_SLOT(dev->devfn) == slot->device)
+ acpiphp_native_scan_bridge(dev);
+ }
+- pci_assign_unassigned_bridge_resources(bus->self);
+ } else {
+ LIST_HEAD(add_list);
+ int max, pass;
+diff --git a/drivers/phy/renesas/phy-rcar-gen3-usb2.c b/drivers/phy/renesas/phy-rcar-gen3-usb2.c
+index b7f6b1324395..6fd1390fd06e 100644
+--- a/drivers/phy/renesas/phy-rcar-gen3-usb2.c
++++ b/drivers/phy/renesas/phy-rcar-gen3-usb2.c
+@@ -21,6 +21,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/regulator/consumer.h>
++#include <linux/string.h>
+ #include <linux/usb/of.h>
+ #include <linux/workqueue.h>
+
+@@ -320,9 +321,9 @@ static ssize_t role_store(struct device *dev, struct device_attribute *attr,
+ if (!ch->is_otg_channel || !rcar_gen3_is_any_rphy_initialized(ch))
+ return -EIO;
+
+- if (!strncmp(buf, "host", strlen("host")))
++ if (sysfs_streq(buf, "host"))
+ new_mode = PHY_MODE_USB_HOST;
+- else if (!strncmp(buf, "peripheral", strlen("peripheral")))
++ else if (sysfs_streq(buf, "peripheral"))
+ new_mode = PHY_MODE_USB_DEVICE;
+ else
+ return -EINVAL;
+diff --git a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+index f2f5fcd9a237..83e585c5a613 100644
+--- a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
++++ b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+@@ -595,10 +595,10 @@ static int armada_37xx_irq_set_type(struct irq_data *d, unsigned int type)
+ regmap_read(info->regmap, in_reg, &in_val);
+
+ /* Set initial polarity based on current input level. */
+- if (in_val & d->mask)
+- val |= d->mask; /* falling */
++ if (in_val & BIT(d->hwirq % GPIO_PER_REG))
++ val |= BIT(d->hwirq % GPIO_PER_REG); /* falling */
+ else
+- val &= ~d->mask; /* rising */
++ val &= ~(BIT(d->hwirq % GPIO_PER_REG)); /* rising */
+ break;
+ }
+ default:
+diff --git a/drivers/pinctrl/pinctrl-rza2.c b/drivers/pinctrl/pinctrl-rza2.c
+index 5b951c7422cc..98e432c5b61b 100644
+--- a/drivers/pinctrl/pinctrl-rza2.c
++++ b/drivers/pinctrl/pinctrl-rza2.c
+@@ -212,8 +212,8 @@ static const char * const rza2_gpio_names[] = {
+ "PC_0", "PC_1", "PC_2", "PC_3", "PC_4", "PC_5", "PC_6", "PC_7",
+ "PD_0", "PD_1", "PD_2", "PD_3", "PD_4", "PD_5", "PD_6", "PD_7",
+ "PE_0", "PE_1", "PE_2", "PE_3", "PE_4", "PE_5", "PE_6", "PE_7",
+- "PF_0", "PF_1", "PF_2", "PF_3", "P0_4", "PF_5", "PF_6", "PF_7",
+- "PG_0", "PG_1", "PG_2", "P0_3", "PG_4", "PG_5", "PG_6", "PG_7",
++ "PF_0", "PF_1", "PF_2", "PF_3", "PF_4", "PF_5", "PF_6", "PF_7",
++ "PG_0", "PG_1", "PG_2", "PG_3", "PG_4", "PG_5", "PG_6", "PG_7",
+ "PH_0", "PH_1", "PH_2", "PH_3", "PH_4", "PH_5", "PH_6", "PH_7",
+ /* port I does not exist */
+ "PJ_0", "PJ_1", "PJ_2", "PJ_3", "PJ_4", "PJ_5", "PJ_6", "PJ_7",
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos.c b/drivers/pinctrl/samsung/pinctrl-exynos.c
+index ebc27b06718c..0599f5127b01 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos.c
++++ b/drivers/pinctrl/samsung/pinctrl-exynos.c
+@@ -486,8 +486,10 @@ int exynos_eint_wkup_init(struct samsung_pinctrl_drv_data *d)
+ if (match) {
+ irq_chip = kmemdup(match->data,
+ sizeof(*irq_chip), GFP_KERNEL);
+- if (!irq_chip)
++ if (!irq_chip) {
++ of_node_put(np);
+ return -ENOMEM;
++ }
+ wkup_np = np;
+ break;
+ }
+@@ -504,6 +506,7 @@ int exynos_eint_wkup_init(struct samsung_pinctrl_drv_data *d)
+ bank->nr_pins, &exynos_eint_irqd_ops, bank);
+ if (!bank->irq_domain) {
+ dev_err(dev, "wkup irq domain add failed\n");
++ of_node_put(wkup_np);
+ return -ENXIO;
+ }
+
+@@ -518,8 +521,10 @@ int exynos_eint_wkup_init(struct samsung_pinctrl_drv_data *d)
+ weint_data = devm_kcalloc(dev,
+ bank->nr_pins, sizeof(*weint_data),
+ GFP_KERNEL);
+- if (!weint_data)
++ if (!weint_data) {
++ of_node_put(wkup_np);
+ return -ENOMEM;
++ }
+
+ for (idx = 0; idx < bank->nr_pins; ++idx) {
+ irq = irq_of_parse_and_map(bank->of_node, idx);
+@@ -536,10 +541,13 @@ int exynos_eint_wkup_init(struct samsung_pinctrl_drv_data *d)
+ }
+ }
+
+- if (!muxed_banks)
++ if (!muxed_banks) {
++ of_node_put(wkup_np);
+ return 0;
++ }
+
+ irq = irq_of_parse_and_map(wkup_np, 0);
++ of_node_put(wkup_np);
+ if (!irq) {
+ dev_err(dev, "irq number for muxed EINTs not found\n");
+ return 0;
+diff --git a/drivers/pinctrl/samsung/pinctrl-s3c24xx.c b/drivers/pinctrl/samsung/pinctrl-s3c24xx.c
+index 7e824e4d20f4..9bd0a3de101d 100644
+--- a/drivers/pinctrl/samsung/pinctrl-s3c24xx.c
++++ b/drivers/pinctrl/samsung/pinctrl-s3c24xx.c
+@@ -490,8 +490,10 @@ static int s3c24xx_eint_init(struct samsung_pinctrl_drv_data *d)
+ return -ENODEV;
+
+ eint_data = devm_kzalloc(dev, sizeof(*eint_data), GFP_KERNEL);
+- if (!eint_data)
++ if (!eint_data) {
++ of_node_put(eint_np);
+ return -ENOMEM;
++ }
+
+ eint_data->drvdata = d;
+
+@@ -503,12 +505,14 @@ static int s3c24xx_eint_init(struct samsung_pinctrl_drv_data *d)
+ irq = irq_of_parse_and_map(eint_np, i);
+ if (!irq) {
+ dev_err(dev, "failed to get wakeup EINT IRQ %d\n", i);
++ of_node_put(eint_np);
+ return -ENXIO;
+ }
+
+ eint_data->parents[i] = irq;
+ irq_set_chained_handler_and_data(irq, handlers[i], eint_data);
+ }
++ of_node_put(eint_np);
+
+ bank = d->pin_banks;
+ for (i = 0; i < d->nr_banks; ++i, ++bank) {
+diff --git a/drivers/pinctrl/samsung/pinctrl-s3c64xx.c b/drivers/pinctrl/samsung/pinctrl-s3c64xx.c
+index c399f0932af5..f97f8179f2b1 100644
+--- a/drivers/pinctrl/samsung/pinctrl-s3c64xx.c
++++ b/drivers/pinctrl/samsung/pinctrl-s3c64xx.c
+@@ -704,8 +704,10 @@ static int s3c64xx_eint_eint0_init(struct samsung_pinctrl_drv_data *d)
+ return -ENODEV;
+
+ data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
+- if (!data)
++ if (!data) {
++ of_node_put(eint0_np);
+ return -ENOMEM;
++ }
+ data->drvdata = d;
+
+ for (i = 0; i < NUM_EINT0_IRQ; ++i) {
+@@ -714,6 +716,7 @@ static int s3c64xx_eint_eint0_init(struct samsung_pinctrl_drv_data *d)
+ irq = irq_of_parse_and_map(eint0_np, i);
+ if (!irq) {
+ dev_err(dev, "failed to get wakeup EINT IRQ %d\n", i);
++ of_node_put(eint0_np);
+ return -ENXIO;
+ }
+
+@@ -721,6 +724,7 @@ static int s3c64xx_eint_eint0_init(struct samsung_pinctrl_drv_data *d)
+ s3c64xx_eint0_handlers[i],
+ data);
+ }
++ of_node_put(eint0_np);
+
+ bank = d->pin_banks;
+ for (i = 0; i < d->nr_banks; ++i, ++bank) {
+diff --git a/drivers/pinctrl/samsung/pinctrl-samsung.c b/drivers/pinctrl/samsung/pinctrl-samsung.c
+index de0477bb469d..f26574ef234a 100644
+--- a/drivers/pinctrl/samsung/pinctrl-samsung.c
++++ b/drivers/pinctrl/samsung/pinctrl-samsung.c
+@@ -272,6 +272,7 @@ static int samsung_dt_node_to_map(struct pinctrl_dev *pctldev,
+ &reserved_maps, num_maps);
+ if (ret < 0) {
+ samsung_dt_free_map(pctldev, *map, *num_maps);
++ of_node_put(np);
+ return ret;
+ }
+ }
+@@ -785,8 +786,10 @@ static struct samsung_pmx_func *samsung_pinctrl_create_functions(
+ if (!of_get_child_count(cfg_np)) {
+ ret = samsung_pinctrl_create_function(dev, drvdata,
+ cfg_np, func);
+- if (ret < 0)
++ if (ret < 0) {
++ of_node_put(cfg_np);
+ return ERR_PTR(ret);
++ }
+ if (ret > 0) {
+ ++func;
+ ++func_cnt;
+@@ -797,8 +800,11 @@ static struct samsung_pmx_func *samsung_pinctrl_create_functions(
+ for_each_child_of_node(cfg_np, func_np) {
+ ret = samsung_pinctrl_create_function(dev, drvdata,
+ func_np, func);
+- if (ret < 0)
++ if (ret < 0) {
++ of_node_put(func_np);
++ of_node_put(cfg_np);
+ return ERR_PTR(ret);
++ }
+ if (ret > 0) {
+ ++func;
+ ++func_cnt;
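+
+The Samsung pinctrl fixes above all plug one leak pattern: the OF iterators
+hand out device nodes with an elevated reference count, so every early return
+inside the walk must drop that reference with of_node_put(). Reduced to a
+sketch (do_setup() is a hypothetical per-node helper):
+
+static int setup_children(struct device_node *parent)
+{
+	struct device_node *np;
+	int err;
+
+	for_each_child_of_node(parent, np) {
+		err = do_setup(np);
+		if (err) {
+			of_node_put(np);	/* drop the iterator's reference */
+			return err;
+		}
+	}
+	return 0;
+}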
+diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c
+index 72b7ddc43116..60a0341d7f58 100644
+--- a/drivers/rtc/interface.c
++++ b/drivers/rtc/interface.c
+@@ -125,7 +125,7 @@ EXPORT_SYMBOL_GPL(rtc_read_time);
+
+ int rtc_set_time(struct rtc_device *rtc, struct rtc_time *tm)
+ {
+- int err;
++ int err, uie;
+
+ err = rtc_valid_tm(tm);
+ if (err != 0)
+@@ -137,6 +137,17 @@ int rtc_set_time(struct rtc_device *rtc, struct rtc_time *tm)
+
+ rtc_subtract_offset(rtc, tm);
+
++#ifdef CONFIG_RTC_INTF_DEV_UIE_EMUL
++ uie = rtc->uie_rtctimer.enabled || rtc->uie_irq_active;
++#else
++ uie = rtc->uie_rtctimer.enabled;
++#endif
++ if (uie) {
++ err = rtc_update_irq_enable(rtc, 0);
++ if (err)
++ return err;
++ }
++
+ err = mutex_lock_interruptible(&rtc->ops_lock);
+ if (err)
+ return err;
+@@ -153,6 +164,12 @@ int rtc_set_time(struct rtc_device *rtc, struct rtc_time *tm)
+ /* A timer might have just expired */
+ schedule_work(&rtc->irqwork);
+
++ if (uie) {
++ err = rtc_update_irq_enable(rtc, 1);
++ if (err)
++ return err;
++ }
++
+ trace_rtc_set_time(rtc_tm_to_time64(tm), err);
+ return err;
+ }
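+
+The rtc_set_time() change above brackets the hardware update with update-IRQ
+teardown and re-arm, so the (possibly emulated) once-per-second timer is
+requeued against the new wall time rather than left pending on the old one.
+In outline (error handling elided; uie_active() and write_hardware_time() are
+illustrative condensations of the code above, not kernel APIs):
+
+static int set_time_outline(struct rtc_device *rtc, struct rtc_time *tm)
+{
+	int uie = uie_active(rtc);		/* uie_rtctimer/uie_irq_active */
+
+	if (uie)
+		rtc_update_irq_enable(rtc, 0);	/* stop timer on the old time */
+	write_hardware_time(rtc, tm);
+	if (uie)
+		rtc_update_irq_enable(rtc, 1);	/* re-arm on the new time */
+	return 0;
+}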
+diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c
+index dccdb41bed8c..1234294700c4 100644
+--- a/drivers/s390/scsi/zfcp_dbf.c
++++ b/drivers/s390/scsi/zfcp_dbf.c
+@@ -95,11 +95,9 @@ void zfcp_dbf_hba_fsf_res(char *tag, int level, struct zfcp_fsf_req *req)
+ memcpy(rec->u.res.fsf_status_qual, &q_head->fsf_status_qual,
+ FSF_STATUS_QUALIFIER_SIZE);
+
+- if (q_head->fsf_command != FSF_QTCB_FCP_CMND) {
+- rec->pl_len = q_head->log_length;
+- zfcp_dbf_pl_write(dbf, (char *)q_pref + q_head->log_start,
+- rec->pl_len, "fsf_res", req->req_id);
+- }
++ rec->pl_len = q_head->log_length;
++ zfcp_dbf_pl_write(dbf, (char *)q_pref + q_head->log_start,
++ rec->pl_len, "fsf_res", req->req_id);
+
+ debug_event(dbf->hba, level, rec, sizeof(*rec));
+ spin_unlock_irqrestore(&dbf->hba_lock, flags);
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index f9df800e7067..6ba4a741a805 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -583,7 +583,7 @@ lpfc_sli4_fcp_xri_aborted(struct lpfc_hba *phba,
+ if (psb->cur_iocbq.sli4_xritag == xri) {
+ list_del(&psb->list);
+ qp->abts_scsi_io_bufs--;
+- psb->exch_busy = 0;
++ psb->flags &= ~LPFC_SBUF_XBUSY;
+ psb->status = IOSTAT_SUCCESS;
+ spin_unlock(
+ &qp->abts_scsi_buf_list_lock);
+@@ -615,7 +615,7 @@ lpfc_sli4_fcp_xri_aborted(struct lpfc_hba *phba,
+ if (iocbq->sli4_xritag != xri)
+ continue;
+ psb = container_of(iocbq, struct lpfc_io_buf, cur_iocbq);
+- psb->exch_busy = 0;
++ psb->flags &= ~LPFC_SBUF_XBUSY;
+ spin_unlock_irqrestore(&phba->hbalock, iflag);
+ if (!list_empty(&pring->txq))
+ lpfc_worker_wake_up(phba);
+@@ -834,7 +834,7 @@ lpfc_release_scsi_buf_s4(struct lpfc_hba *phba, struct lpfc_io_buf *psb)
+ psb->prot_seg_cnt = 0;
+
+ qp = psb->hdwq;
+- if (psb->exch_busy) {
++ if (psb->flags & LPFC_SBUF_XBUSY) {
+ spin_lock_irqsave(&qp->abts_scsi_buf_list_lock, iflag);
+ psb->pCmd = NULL;
+ list_add_tail(&psb->list, &qp->lpfc_abts_scsi_buf_list);
+@@ -3679,7 +3679,10 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
+ lpfc_cmd->result = (pIocbOut->iocb.un.ulpWord[4] & IOERR_PARAM_MASK);
+ lpfc_cmd->status = pIocbOut->iocb.ulpStatus;
+ /* pick up SLI4 exchange busy status from HBA */
+- lpfc_cmd->exch_busy = pIocbOut->iocb_flag & LPFC_EXCHANGE_BUSY;
++ if (pIocbOut->iocb_flag & LPFC_EXCHANGE_BUSY)
++ lpfc_cmd->flags |= LPFC_SBUF_XBUSY;
++ else
++ lpfc_cmd->flags &= ~LPFC_SBUF_XBUSY;
+
+ #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
+ if (lpfc_cmd->prot_data_type) {
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index c7027ecd4d19..6f6e306ff1e6 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -11768,7 +11768,10 @@ lpfc_sli_wake_iocb_wait(struct lpfc_hba *phba,
+ !(cmdiocbq->iocb_flag & LPFC_IO_LIBDFC)) {
+ lpfc_cmd = container_of(cmdiocbq, struct lpfc_io_buf,
+ cur_iocbq);
+- lpfc_cmd->exch_busy = rspiocbq->iocb_flag & LPFC_EXCHANGE_BUSY;
++ if (rspiocbq && (rspiocbq->iocb_flag & LPFC_EXCHANGE_BUSY))
++ lpfc_cmd->flags |= LPFC_SBUF_XBUSY;
++ else
++ lpfc_cmd->flags &= ~LPFC_SBUF_XBUSY;
+ }
+
+ pdone_q = cmdiocbq->context_un.wait_queue;
+diff --git a/drivers/scsi/lpfc/lpfc_sli.h b/drivers/scsi/lpfc/lpfc_sli.h
+index 467b8270f7fd..9449236c231d 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.h
++++ b/drivers/scsi/lpfc/lpfc_sli.h
+@@ -375,14 +375,13 @@ struct lpfc_io_buf {
+
+ struct lpfc_nodelist *ndlp;
+ uint32_t timeout;
+- uint16_t flags; /* TBD convert exch_busy to flags */
++ uint16_t flags;
+ #define LPFC_SBUF_XBUSY 0x1 /* SLI4 hba reported XB on WCQE cmpl */
+ #define LPFC_SBUF_BUMP_QDEPTH 0x2 /* bumped queue depth counter */
+ /* External DIF device IO conversions */
+ #define LPFC_SBUF_NORMAL_DIF 0x4 /* normal mode to insert/strip */
+ #define LPFC_SBUF_PASS_DIF 0x8 /* insert/strip mode to passthru */
+ #define LPFC_SBUF_NOT_POSTED 0x10 /* SGL failed post to FW. */
+- uint16_t exch_busy; /* SLI4 hba reported XB on complete WCQE */
+ uint16_t status; /* From IOCB Word 7- ulpStatus */
+ uint32_t result; /* From IOCB Word 4. */
+
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index 9584c5a48397..d9a0eb1d8a1d 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -725,7 +725,8 @@ qla2x00_sysfs_write_reset(struct file *filp, struct kobject *kobj,
+ break;
+ } else {
+ /* Make sure FC side is not in reset */
+- qla2x00_wait_for_hba_online(vha);
++ WARN_ON_ONCE(qla2x00_wait_for_hba_online(vha) !=
++ QLA_SUCCESS);
+
+ /* Issue MPI reset */
+ scsi_block_requests(vha->host);
+diff --git a/drivers/scsi/qla2xxx/qla_bsg.c b/drivers/scsi/qla2xxx/qla_bsg.c
+index 3084c2cff7bd..be9eeabe965e 100644
+--- a/drivers/scsi/qla2xxx/qla_bsg.c
++++ b/drivers/scsi/qla2xxx/qla_bsg.c
+@@ -341,6 +341,8 @@ qla2x00_process_els(struct bsg_job *bsg_job)
+ dma_map_sg(&ha->pdev->dev, bsg_job->request_payload.sg_list,
+ bsg_job->request_payload.sg_cnt, DMA_TO_DEVICE);
+ if (!req_sg_cnt) {
++ dma_unmap_sg(&ha->pdev->dev, bsg_job->request_payload.sg_list,
++ bsg_job->request_payload.sg_cnt, DMA_TO_DEVICE);
+ rval = -ENOMEM;
+ goto done_free_fcport;
+ }
+@@ -348,6 +350,8 @@ qla2x00_process_els(struct bsg_job *bsg_job)
+ rsp_sg_cnt = dma_map_sg(&ha->pdev->dev, bsg_job->reply_payload.sg_list,
+ bsg_job->reply_payload.sg_cnt, DMA_FROM_DEVICE);
+ if (!rsp_sg_cnt) {
++ dma_unmap_sg(&ha->pdev->dev, bsg_job->reply_payload.sg_list,
++ bsg_job->reply_payload.sg_cnt, DMA_FROM_DEVICE);
+ rval = -ENOMEM;
+ goto done_free_fcport;
+ }
+@@ -1778,8 +1782,8 @@ qla24xx_process_bidir_cmd(struct bsg_job *bsg_job)
+ uint16_t nextlid = 0;
+ uint32_t tot_dsds;
+ srb_t *sp = NULL;
+- uint32_t req_data_len = 0;
+- uint32_t rsp_data_len = 0;
++ uint32_t req_data_len;
++ uint32_t rsp_data_len;
+
+ /* Check the type of the adapter */
+ if (!IS_BIDI_CAPABLE(ha)) {
+@@ -1884,6 +1888,9 @@ qla24xx_process_bidir_cmd(struct bsg_job *bsg_job)
+ goto done_unmap_sg;
+ }
+
++ req_data_len = bsg_job->request_payload.payload_len;
++ rsp_data_len = bsg_job->reply_payload.payload_len;
++
+ if (req_data_len != rsp_data_len) {
+ rval = EXT_STATUS_BUSY;
+ ql_log(ql_log_warn, vha, 0x70aa,
+@@ -1891,10 +1898,6 @@ qla24xx_process_bidir_cmd(struct bsg_job *bsg_job)
+ goto done_unmap_sg;
+ }
+
+- req_data_len = bsg_job->request_payload.payload_len;
+- rsp_data_len = bsg_job->reply_payload.payload_len;
+-
+-
+ /* Alloc SRB structure */
+ sp = qla2x00_get_sp(vha, &(vha->bidir_fcport), GFP_KERNEL);
+ if (!sp) {
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index a2922b17b55b..29af738ca203 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -531,18 +531,23 @@ typedef struct srb {
+ */
+ uint8_t cmd_type;
+ uint8_t pad[3];
+- atomic_t ref_count;
+ struct kref cmd_kref; /* need to migrate ref_count over to this */
+ void *priv;
+ wait_queue_head_t nvme_ls_waitq;
+ struct fc_port *fcport;
+ struct scsi_qla_host *vha;
++ unsigned int start_timer:1;
++ unsigned int abort:1;
++ unsigned int aborted:1;
++ unsigned int completed:1;
++
+ uint32_t handle;
+ uint16_t flags;
+ uint16_t type;
+ const char *name;
+ int iocbs;
+ struct qla_qpair *qpair;
++ struct srb *cmd_sp;
+ struct list_head elem;
+ u32 gen1; /* scratch */
+ u32 gen2; /* scratch */
+@@ -560,7 +565,6 @@ typedef struct srb {
+ } srb_t;
+
+ #define GET_CMD_SP(sp) (sp->u.scmd.cmd)
+-#define SET_CMD_SP(sp, cmd) (sp->u.scmd.cmd = cmd)
+ #define GET_CMD_CTX_SP(sp) (sp->u.scmd.ctx)
+
+ #define GET_CMD_SENSE_LEN(sp) \
+@@ -4629,6 +4633,7 @@ struct secure_flash_update_block_pk {
+ #define QLA_SUSPENDED 0x106
+ #define QLA_BUSY 0x107
+ #define QLA_ALREADY_REGISTERED 0x109
++#define QLA_OS_TIMER_EXPIRED 0x10a
+
+ #define NVRAM_DELAY() udelay(10)
+
+diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
+index 9f58e591666d..97ca95cd174b 100644
+--- a/drivers/scsi/qla2xxx/qla_gs.c
++++ b/drivers/scsi/qla2xxx/qla_gs.c
+@@ -3029,7 +3029,7 @@ static void qla24xx_async_gpsc_sp_done(void *s, int res)
+ fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE);
+
+ if (res == QLA_FUNCTION_TIMEOUT)
+- return;
++ goto done;
+
+ if (res == (DID_ERROR << 16)) {
+ /* entry status error */
+@@ -3674,7 +3674,6 @@ void qla24xx_async_gnnft_done(scsi_qla_host_t *vha, srb_t *sp)
+ list_for_each_entry(fcport, &vha->vp_fcports, list) {
+ if (memcmp(rp->port_name, fcport->port_name, WWN_SIZE))
+ continue;
+- fcport->scan_needed = 0;
+ fcport->scan_state = QLA_FCPORT_FOUND;
+ found = true;
+ /*
+@@ -3683,10 +3682,12 @@ void qla24xx_async_gnnft_done(scsi_qla_host_t *vha, srb_t *sp)
+ if ((fcport->flags & FCF_FABRIC_DEVICE) == 0) {
+ qla2x00_clear_loop_id(fcport);
+ fcport->flags |= FCF_FABRIC_DEVICE;
+- } else if (fcport->d_id.b24 != rp->id.b24) {
++ } else if (fcport->d_id.b24 != rp->id.b24 ||
++ fcport->scan_needed) {
+ qlt_schedule_sess_for_deletion(fcport);
+ }
+ fcport->d_id.b24 = rp->id.b24;
++ fcport->scan_needed = 0;
+ break;
+ }
+
+@@ -4152,7 +4153,7 @@ int qla24xx_async_gpnft(scsi_qla_host_t *vha, u8 fc4_type, srb_t *sp)
+ rspsz,
+ &sp->u.iocb_cmd.u.ctarg.rsp_dma,
+ GFP_KERNEL);
+- sp->u.iocb_cmd.u.ctarg.rsp_allocated_size = sizeof(struct ct_sns_pkt);
++ sp->u.iocb_cmd.u.ctarg.rsp_allocated_size = rspsz;
+ if (!sp->u.iocb_cmd.u.ctarg.rsp) {
+ ql_log(ql_log_warn, vha, 0xffff,
+ "Failed to allocate ct_sns request.\n");
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index cd74cc9651de..646fe7fe91b2 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -99,9 +99,39 @@ static void qla24xx_abort_iocb_timeout(void *data)
+ {
+ srb_t *sp = data;
+ struct srb_iocb *abt = &sp->u.iocb_cmd;
++ struct qla_qpair *qpair = sp->qpair;
++ u32 handle;
++ unsigned long flags;
++
++ if (sp->cmd_sp)
++ ql_dbg(ql_dbg_async, sp->vha, 0x507c,
++ "Abort timeout - cmd hdl=%x, cmd type=%x hdl=%x, type=%x\n",
++ sp->cmd_sp->handle, sp->cmd_sp->type,
++ sp->handle, sp->type);
++ else
++ ql_dbg(ql_dbg_async, sp->vha, 0x507c,
++ "Abort timeout 2 - hdl=%x, type=%x\n",
++ sp->handle, sp->type);
++
++ spin_lock_irqsave(qpair->qp_lock_ptr, flags);
++ for (handle = 1; handle < qpair->req->num_outstanding_cmds; handle++) {
++ if (sp->cmd_sp && (qpair->req->outstanding_cmds[handle] ==
++ sp->cmd_sp))
++ qpair->req->outstanding_cmds[handle] = NULL;
++
++ /* removing the abort */
++ if (qpair->req->outstanding_cmds[handle] == sp) {
++ qpair->req->outstanding_cmds[handle] = NULL;
++ break;
++ }
++ }
++ spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
++
++ if (sp->cmd_sp)
++ sp->cmd_sp->done(sp->cmd_sp, QLA_OS_TIMER_EXPIRED);
+
+ abt->u.abt.comp_status = CS_TIMEOUT;
+- sp->done(sp, QLA_FUNCTION_TIMEOUT);
++ sp->done(sp, QLA_OS_TIMER_EXPIRED);
+ }
+
+ static void qla24xx_abort_sp_done(void *ptr, int res)
+@@ -109,7 +139,8 @@ static void qla24xx_abort_sp_done(void *ptr, int res)
+ srb_t *sp = ptr;
+ struct srb_iocb *abt = &sp->u.iocb_cmd;
+
+- if (del_timer(&sp->u.iocb_cmd.timer)) {
++ if ((res == QLA_OS_TIMER_EXPIRED) ||
++ del_timer(&sp->u.iocb_cmd.timer)) {
+ if (sp->flags & SRB_WAKEUP_ON_COMP)
+ complete(&abt->u.abt.comp);
+ else
+@@ -133,6 +164,7 @@ static int qla24xx_async_abort_cmd(srb_t *cmd_sp, bool wait)
+ sp->type = SRB_ABT_CMD;
+ sp->name = "abort";
+ sp->qpair = cmd_sp->qpair;
++ sp->cmd_sp = cmd_sp;
+ if (wait)
+ sp->flags = SRB_WAKEUP_ON_COMP;
+
+@@ -364,9 +396,6 @@ qla2x00_async_logout(struct scsi_qla_host *vha, fc_port_t *fcport)
+ struct srb_iocb *lio;
+ int rval = QLA_FUNCTION_FAILED;
+
+- if (!vha->flags.online || (fcport->flags & FCF_ASYNC_SENT))
+- return rval;
+-
+ fcport->flags |= FCF_ASYNC_SENT;
+ sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ if (!sp)
+@@ -513,6 +542,7 @@ static int qla_post_els_plogi_work(struct scsi_qla_host *vha, fc_port_t *fcport)
+
+ e->u.fcport.fcport = fcport;
+ fcport->flags |= FCF_ASYNC_ACTIVE;
++ fcport->disc_state = DSC_LOGIN_PEND;
+ return qla2x00_post_work(vha, e);
+ }
+
+@@ -813,6 +843,15 @@ static void qla24xx_handle_gnl_done_event(scsi_qla_host_t *vha,
+ fcport->fw_login_state = current_login_state;
+ fcport->d_id = id;
+ switch (current_login_state) {
++ case DSC_LS_PRLI_PEND:
++ /*
++ * In the middle of PRLI. Let it finish.
++ * Allow the relogin code to recheck the state
++ * with GNL. Push disc_state back to DELETED
++ * so GNL can go out again.
++ */
++ fcport->disc_state = DSC_DELETED;
++ break;
+ case DSC_LS_PRLI_COMP:
+ if ((e->prli_svc_param_word_3[0] & BIT_4) == 0)
+ fcport->port_type = FCT_INITIATOR;
+@@ -1131,13 +1170,11 @@ void qla24xx_async_gpdb_sp_done(void *s, int res)
+ "Async done-%s res %x, WWPN %8phC mb[1]=%x mb[2]=%x \n",
+ sp->name, res, fcport->port_name, mb[1], mb[2]);
+
+- if (res == QLA_FUNCTION_TIMEOUT) {
+- dma_pool_free(sp->vha->hw->s_dma_pool, sp->u.iocb_cmd.u.mbx.in,
+- sp->u.iocb_cmd.u.mbx.in_dma);
+- return;
+- }
+-
+ fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE);
++
++ if (res == QLA_FUNCTION_TIMEOUT)
++ goto done;
++
+ memset(&ea, 0, sizeof(ea));
+ ea.event = FCME_GPDB_DONE;
+ ea.fcport = fcport;
+@@ -1145,6 +1182,7 @@ void qla24xx_async_gpdb_sp_done(void *s, int res)
+
+ qla2x00_fcport_event_handler(vha, &ea);
+
++done:
+ dma_pool_free(ha->s_dma_pool, sp->u.iocb_cmd.u.mbx.in,
+ sp->u.iocb_cmd.u.mbx.in_dma);
+
+@@ -1488,7 +1526,7 @@ int qla24xx_fcport_handle_login(struct scsi_qla_host *vha, fc_port_t *fcport)
+ u64 wwn;
+ u16 sec;
+
+- ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0x20d8,
++ ql_dbg(ql_dbg_disc, vha, 0x20d8,
+ "%s %8phC DS %d LS %d P %d fl %x confl %p rscn %d|%d login %d lid %d scan %d\n",
+ __func__, fcport->port_name, fcport->disc_state,
+ fcport->fw_login_state, fcport->login_pause, fcport->flags,
+@@ -1499,6 +1537,7 @@ int qla24xx_fcport_handle_login(struct scsi_qla_host *vha, fc_port_t *fcport)
+ return 0;
+
+ if ((fcport->loop_id != FC_NO_LOOP_ID) &&
++ qla_dual_mode_enabled(vha) &&
+ ((fcport->fw_login_state == DSC_LS_PLOGI_PEND) ||
+ (fcport->fw_login_state == DSC_LS_PRLI_PEND)))
+ return 0;
+@@ -1668,21 +1707,10 @@ void qla24xx_handle_relogin_event(scsi_qla_host_t *vha,
+ fcport->last_login_gen, fcport->login_gen,
+ fcport->flags);
+
+- if ((fcport->fw_login_state == DSC_LS_PLOGI_PEND) ||
+- (fcport->fw_login_state == DSC_LS_PRLI_PEND))
+- return;
+-
+- if (fcport->fw_login_state == DSC_LS_PLOGI_COMP) {
+- if (time_before_eq(jiffies, fcport->plogi_nack_done_deadline)) {
+- set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
+- return;
+- }
+- }
+-
+ if (fcport->last_rscn_gen != fcport->rscn_gen) {
+- ql_dbg(ql_dbg_disc, vha, 0x20e9, "%s %d %8phC post gidpn\n",
++ ql_dbg(ql_dbg_disc, vha, 0x20e9, "%s %d %8phC post gnl\n",
+ __func__, __LINE__, fcport->port_name);
+-
++ qla24xx_post_gnl_work(vha, fcport);
+ return;
+ }
+
+@@ -3288,6 +3316,8 @@ try_eft:
+ ql_dbg(ql_dbg_init, vha, 0x00c3,
+ "Allocated (%d KB) EFT ...\n", EFT_SIZE / 1024);
+ eft_size = EFT_SIZE;
++ ha->eft_dma = tc_dma;
++ ha->eft = tc;
+ }
+
+ if (IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
+@@ -7595,8 +7625,12 @@ qla27xx_get_active_image(struct scsi_qla_host *vha,
+ goto check_sec_image;
+ }
+
+- qla24xx_read_flash_data(vha, (void *)(&pri_image_status),
+- ha->flt_region_img_status_pri, sizeof(pri_image_status) >> 2);
++ if (qla24xx_read_flash_data(vha, (void *)(&pri_image_status),
++ ha->flt_region_img_status_pri, sizeof(pri_image_status) >> 2) !=
++ QLA_SUCCESS) {
++ WARN_ON_ONCE(true);
++ goto check_sec_image;
++ }
+ qla27xx_print_image(vha, "Primary image", &pri_image_status);
+
+ if (qla27xx_check_image_status_signature(&pri_image_status)) {
+@@ -8350,7 +8384,7 @@ qla81xx_nvram_config(scsi_qla_host_t *vha)
+ active_regions.aux.vpd_nvram == QLA27XX_PRIMARY_IMAGE ?
+ "primary" : "secondary");
+ }
+- qla24xx_read_flash_data(vha, ha->vpd, faddr, ha->vpd_size >> 2);
++ ha->isp_ops->read_optrom(vha, ha->vpd, faddr << 2, ha->vpd_size);
+
+ /* Get NVRAM data into cache and calculate checksum. */
+ faddr = ha->flt_region_nvram;
+@@ -8362,7 +8396,7 @@ qla81xx_nvram_config(scsi_qla_host_t *vha)
+ "Loading %s nvram image.\n",
+ active_regions.aux.vpd_nvram == QLA27XX_PRIMARY_IMAGE ?
+ "primary" : "secondary");
+- qla24xx_read_flash_data(vha, ha->nvram, faddr, ha->nvram_size >> 2);
++ ha->isp_ops->read_optrom(vha, ha->nvram, faddr << 2, ha->nvram_size);
+
+ dptr = (uint32_t *)nv;
+ for (cnt = 0, chksum = 0; cnt < ha->nvram_size >> 2; cnt++, dptr++)
+@@ -9092,8 +9126,6 @@ int qla2xxx_delete_qpair(struct scsi_qla_host *vha, struct qla_qpair *qpair)
+ struct qla_hw_data *ha = qpair->hw;
+
+ qpair->delete_in_progress = 1;
+- while (atomic_read(&qpair->ref_count))
+- msleep(500);
+
+ ret = qla25xx_delete_req_que(vha, qpair->req);
+ if (ret != QLA_SUCCESS)
+diff --git a/drivers/scsi/qla2xxx/qla_inline.h b/drivers/scsi/qla2xxx/qla_inline.h
+index bf063c664352..0c3d907af769 100644
+--- a/drivers/scsi/qla2xxx/qla_inline.h
++++ b/drivers/scsi/qla2xxx/qla_inline.h
+@@ -152,6 +152,18 @@ qla2x00_chip_is_down(scsi_qla_host_t *vha)
+ return (qla2x00_reset_active(vha) || !vha->hw->flags.fw_started);
+ }
+
++static void qla2xxx_init_sp(srb_t *sp, scsi_qla_host_t *vha,
++ struct qla_qpair *qpair, fc_port_t *fcport)
++{
++ memset(sp, 0, sizeof(*sp));
++ sp->fcport = fcport;
++ sp->iocbs = 1;
++ sp->vha = vha;
++ sp->qpair = qpair;
++ sp->cmd_type = TYPE_SRB;
++ INIT_LIST_HEAD(&sp->elem);
++}
++
+ static inline srb_t *
+ qla2xxx_get_qpair_sp(scsi_qla_host_t *vha, struct qla_qpair *qpair,
+ fc_port_t *fcport, gfp_t flag)
+@@ -164,19 +176,9 @@ qla2xxx_get_qpair_sp(scsi_qla_host_t *vha, struct qla_qpair *qpair,
+ return NULL;
+
+ sp = mempool_alloc(qpair->srb_mempool, flag);
+- if (!sp)
+- goto done;
+-
+- memset(sp, 0, sizeof(*sp));
+- sp->fcport = fcport;
+- sp->iocbs = 1;
+- sp->vha = vha;
+- sp->qpair = qpair;
+- sp->cmd_type = TYPE_SRB;
+- INIT_LIST_HEAD(&sp->elem);
+-
+-done:
+- if (!sp)
++ if (sp)
++ qla2xxx_init_sp(sp, vha, qpair, fcport);
++ else
+ QLA_QPAIR_MARK_NOT_BUSY(qpair);
+ return sp;
+ }
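+
+The qla2xxx hunks that follow replace the mempool allocation in queuecommand
+with scsi_cmd_priv(): the SCSI midlayer pre-allocates the srb inside every
+scsi_cmnd, sized by the host template's cmd_size, so submission can no longer
+fail for lack of an srb. The generic shape of that midlayer facility (struct
+my_cmd_priv and my_queuecommand are illustrative, not the qla2xxx code):
+
+struct my_cmd_priv {
+	int state;	/* whatever per-command data the LLD needs */
+};
+
+static struct scsi_host_template my_template = {
+	.name     = "example",
+	.cmd_size = sizeof(struct my_cmd_priv),	/* midlayer allocates it */
+};
+
+static int my_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
+{
+	struct my_cmd_priv *priv = scsi_cmd_priv(cmd);	/* no allocation */
+
+	priv->state = 0;
+	/* build and issue the command here ... */
+	return 0;
+}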
+diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
+index 9312b19ed708..1886de92034c 100644
+--- a/drivers/scsi/qla2xxx/qla_iocb.c
++++ b/drivers/scsi/qla2xxx/qla_iocb.c
+@@ -2540,7 +2540,7 @@ void qla2x00_init_timer(srb_t *sp, unsigned long tmo)
+ sp->free = qla2x00_sp_free;
+ if (IS_QLAFX00(sp->vha->hw) && sp->type == SRB_FXIOCB_DCMD)
+ init_completion(&sp->u.iocb_cmd.u.fxiocb.fxiocb_comp);
+- add_timer(&sp->u.iocb_cmd.timer);
++ sp->start_timer = 1;
+ }
+
+ static void
+@@ -3668,6 +3668,9 @@ qla2x00_start_sp(srb_t *sp)
+ break;
+ }
+
++ if (sp->start_timer)
++ add_timer(&sp->u.iocb_cmd.timer);
++
+ wmb();
+ qla2x00_start_iocbs(vha, qp->req);
+ done:
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index 78aec50abe0f..9e32a90d4d77 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -2473,6 +2473,11 @@ qla2x00_status_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, void *pkt)
+ return;
+ }
+
++ if (sp->abort)
++ sp->aborted = 1;
++ else
++ sp->completed = 1;
++
+ if (sp->cmd_type != TYPE_SRB) {
+ req->outstanding_cmds[handle] = NULL;
+ ql_dbg(ql_dbg_io, vha, 0x3015,
+@@ -3471,10 +3476,8 @@ qla24xx_enable_msix(struct qla_hw_data *ha, struct rsp_que *rsp)
+ ha->msix_count, ret);
+ goto msix_out;
+ } else if (ret < ha->msix_count) {
+- ql_log(ql_log_warn, vha, 0x00c6,
+- "MSI-X: Failed to enable support "
+- "with %d vectors, using %d vectors.\n",
+- ha->msix_count, ret);
++ ql_log(ql_log_info, vha, 0x00c6,
++ "MSI-X: Using %d vectors\n", ret);
+ ha->msix_count = ret;
+ /* Recalculate queue values */
+ if (ha->mqiobase && (ql2xmqsupport || ql2xnvmeenable)) {
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index ac4640f45678..8601e63e4698 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -253,21 +253,9 @@ qla2x00_mailbox_command(scsi_qla_host_t *vha, mbx_cmd_t *mcp)
+ if ((!abort_active && io_lock_on) || IS_NOPOLLING_TYPE(ha)) {
+ set_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags);
+
+- if (IS_P3P_TYPE(ha)) {
+- if (RD_REG_DWORD(®->isp82.hint) &
+- HINT_MBX_INT_PENDING) {
+- ha->flags.mbox_busy = 0;
+- spin_unlock_irqrestore(&ha->hardware_lock,
+- flags);
+-
+- atomic_dec(&ha->num_pend_mbx_stage2);
+- ql_dbg(ql_dbg_mbx, vha, 0x1010,
+- "Pending mailbox timeout, exiting.\n");
+- rval = QLA_FUNCTION_TIMEOUT;
+- goto premature_exit;
+- }
++ if (IS_P3P_TYPE(ha))
+ WRT_REG_DWORD(®->isp82.hint, HINT_MBX_INT_PENDING);
+- } else if (IS_FWI2_CAPABLE(ha))
++ else if (IS_FWI2_CAPABLE(ha))
+ WRT_REG_DWORD(®->isp24.hccr, HCCRX_SET_HOST_INT);
+ else
+ WRT_REG_WORD(®->isp.hccr, HCCR_SET_HOST_INT);
+@@ -6297,17 +6285,13 @@ int qla24xx_send_mb_cmd(struct scsi_qla_host *vha, mbx_cmd_t *mcp)
+ case QLA_SUCCESS:
+ ql_dbg(ql_dbg_mbx, vha, 0x119d, "%s: %s done.\n",
+ __func__, sp->name);
+- sp->free(sp);
+ break;
+ default:
+ ql_dbg(ql_dbg_mbx, vha, 0x119e, "%s: %s Failed. %x.\n",
+ __func__, sp->name, rval);
+- sp->free(sp);
+ break;
+ }
+
+- return rval;
+-
+ done_free_sp:
+ sp->free(sp);
+ done:
+diff --git a/drivers/scsi/qla2xxx/qla_mid.c b/drivers/scsi/qla2xxx/qla_mid.c
+index b2977e49356b..0341dc0e0651 100644
+--- a/drivers/scsi/qla2xxx/qla_mid.c
++++ b/drivers/scsi/qla2xxx/qla_mid.c
+@@ -934,7 +934,7 @@ int qla24xx_control_vp(scsi_qla_host_t *vha, int cmd)
+
+ sp = qla2x00_get_sp(base_vha, NULL, GFP_KERNEL);
+ if (!sp)
+- goto done;
++ return rval;
+
+ sp->type = SRB_CTRL_VP;
+ sp->name = "ctrl_vp";
+@@ -950,7 +950,7 @@ int qla24xx_control_vp(scsi_qla_host_t *vha, int cmd)
+ ql_dbg(ql_dbg_async, vha, 0xffff,
+ "%s: %s Failed submission. %x.\n",
+ __func__, sp->name, rval);
+- goto done_free_sp;
++ goto done;
+ }
+
+ ql_dbg(ql_dbg_vport, vha, 0x113f, "%s hndl %x submitted\n",
+@@ -968,16 +968,13 @@ int qla24xx_control_vp(scsi_qla_host_t *vha, int cmd)
+ case QLA_SUCCESS:
+ ql_dbg(ql_dbg_vport, vha, 0xffff, "%s: %s done.\n",
+ __func__, sp->name);
+- goto done_free_sp;
++ break;
+ default:
+ ql_dbg(ql_dbg_vport, vha, 0xffff, "%s: %s Failed. %x.\n",
+ __func__, sp->name, rval);
+- goto done_free_sp;
++ break;
+ }
+ done:
+- return rval;
+-
+-done_free_sp:
+ sp->free(sp);
+ return rval;
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index 963094b3c300..672da9b838e5 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -227,8 +227,8 @@ static void qla_nvme_abort_work(struct work_struct *work)
+
+ if (ha->flags.host_shutting_down) {
+ ql_log(ql_log_info, sp->fcport->vha, 0xffff,
+- "%s Calling done on sp: %p, type: 0x%x, sp->ref_count: 0x%x\n",
+- __func__, sp, sp->type, atomic_read(&sp->ref_count));
++ "%s Calling done on sp: %p, type: 0x%x\n",
++ __func__, sp, sp->type);
+ sp->done(sp, 0);
+ goto out;
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_nx.c b/drivers/scsi/qla2xxx/qla_nx.c
+index c760ae354174..3a23827e0f0b 100644
+--- a/drivers/scsi/qla2xxx/qla_nx.c
++++ b/drivers/scsi/qla2xxx/qla_nx.c
+@@ -2287,7 +2287,9 @@ qla82xx_disable_intrs(struct qla_hw_data *ha)
+ {
+ scsi_qla_host_t *vha = pci_get_drvdata(ha->pdev);
+
+- qla82xx_mbx_intr_disable(vha);
++ if (ha->interrupts_on)
++ qla82xx_mbx_intr_disable(vha);
++
+ spin_lock_irq(&ha->hardware_lock);
+ if (IS_QLA8044(ha))
+ qla8044_wr_reg(ha, LEG_INTR_MASK_OFFSET, 1);
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index ac96771bb06d..ac2977bdc359 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -716,18 +716,12 @@ qla2x00_sp_compl(void *ptr, int res)
+ struct scsi_cmnd *cmd = GET_CMD_SP(sp);
+ struct completion *comp = sp->comp;
+
+- if (WARN_ON_ONCE(atomic_read(&sp->ref_count) == 0))
+- return;
+-
+- atomic_dec(&sp->ref_count);
+-
+ sp->free(sp);
+ cmd->result = res;
+ CMD_SP(cmd) = NULL;
+ cmd->scsi_done(cmd);
+ if (comp)
+ complete(comp);
+- qla2x00_rel_sp(sp);
+ }
+
+ void
+@@ -821,18 +815,12 @@ qla2xxx_qpair_sp_compl(void *ptr, int res)
+ struct scsi_cmnd *cmd = GET_CMD_SP(sp);
+ struct completion *comp = sp->comp;
+
+- if (WARN_ON_ONCE(atomic_read(&sp->ref_count) == 0))
+- return;
+-
+- atomic_dec(&sp->ref_count);
+-
+ sp->free(sp);
+ cmd->result = res;
+ CMD_SP(cmd) = NULL;
+ cmd->scsi_done(cmd);
+ if (comp)
+ complete(comp);
+- qla2xxx_rel_qpair_sp(sp->qpair, sp);
+ }
+
+ static int
+@@ -925,13 +913,12 @@ qla2xxx_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
+ else
+ goto qc24_target_busy;
+
+- sp = qla2x00_get_sp(vha, fcport, GFP_ATOMIC);
+- if (!sp)
+- goto qc24_host_busy;
++ sp = scsi_cmd_priv(cmd);
++ qla2xxx_init_sp(sp, vha, vha->hw->base_qpair, fcport);
+
+ sp->u.scmd.cmd = cmd;
+ sp->type = SRB_SCSI_CMD;
+- atomic_set(&sp->ref_count, 1);
++
+ CMD_SP(cmd) = (void *)sp;
+ sp->free = qla2x00_sp_free_dma;
+ sp->done = qla2x00_sp_compl;
+@@ -948,9 +935,6 @@ qla2xxx_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
+ qc24_host_busy_free_sp:
+ sp->free(sp);
+
+-qc24_host_busy:
+- return SCSI_MLQUEUE_HOST_BUSY;
+-
+ qc24_target_busy:
+ return SCSI_MLQUEUE_TARGET_BUSY;
+
+@@ -1011,24 +995,21 @@ qla2xxx_mqueuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd,
+ else
+ goto qc24_target_busy;
+
+- sp = qla2xxx_get_qpair_sp(vha, qpair, fcport, GFP_ATOMIC);
+- if (!sp)
+- goto qc24_host_busy;
++ sp = scsi_cmd_priv(cmd);
++ qla2xxx_init_sp(sp, vha, qpair, fcport);
+
+ sp->u.scmd.cmd = cmd;
+ sp->type = SRB_SCSI_CMD;
+- atomic_set(&sp->ref_count, 1);
+ CMD_SP(cmd) = (void *)sp;
+ sp->free = qla2xxx_qpair_sp_free_dma;
+ sp->done = qla2xxx_qpair_sp_compl;
+- sp->qpair = qpair;
+
+ rval = ha->isp_ops->start_scsi_mq(sp);
+ if (rval != QLA_SUCCESS) {
+ ql_dbg(ql_dbg_io + ql_dbg_verbose, vha, 0x3078,
+ "Start scsi failed rval=%d for cmd=%p.\n", rval, cmd);
+ if (rval == QLA_INTERFACE_ERROR)
+- goto qc24_fail_command;
++ goto qc24_free_sp_fail_command;
+ goto qc24_host_busy_free_sp;
+ }
+
+@@ -1037,12 +1018,14 @@ qla2xxx_mqueuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd,
+ qc24_host_busy_free_sp:
+ sp->free(sp);
+
+-qc24_host_busy:
+- return SCSI_MLQUEUE_HOST_BUSY;
+-
+ qc24_target_busy:
+ return SCSI_MLQUEUE_TARGET_BUSY;
+
++qc24_free_sp_fail_command:
++ sp->free(sp);
++ CMD_SP(cmd) = NULL;
++ qla2xxx_rel_qpair_sp(sp->qpair, sp);
++
+ qc24_fail_command:
+ cmd->scsi_done(cmd);
+
+@@ -1212,16 +1195,6 @@ qla2x00_wait_for_chip_reset(scsi_qla_host_t *vha)
+ return return_status;
+ }
+
+-static int
+-sp_get(struct srb *sp)
+-{
+- if (!refcount_inc_not_zero((refcount_t *)&sp->ref_count))
+- /* kref get fail */
+- return ENXIO;
+- else
+- return 0;
+-}
+-
+ #define ISP_REG_DISCONNECT 0xffffffffU
+ /**************************************************************************
+ * qla2x00_isp_reg_stat
+@@ -1270,14 +1243,16 @@ static int
+ qla2xxx_eh_abort(struct scsi_cmnd *cmd)
+ {
+ scsi_qla_host_t *vha = shost_priv(cmd->device->host);
++ DECLARE_COMPLETION_ONSTACK(comp);
+ srb_t *sp;
+ int ret;
+ unsigned int id;
+ uint64_t lun;
+- unsigned long flags;
+ int rval;
+ struct qla_hw_data *ha = vha->hw;
++ uint32_t ratov_j;
+ struct qla_qpair *qpair;
++ unsigned long flags;
+
+ if (qla2x00_isp_reg_stat(ha)) {
+ ql_log(ql_log_info, vha, 0x8042,
+@@ -1289,29 +1264,28 @@ qla2xxx_eh_abort(struct scsi_cmnd *cmd)
+ if (ret != 0)
+ return ret;
+
+- sp = (srb_t *) CMD_SP(cmd);
+- if (!sp)
+- return SUCCESS;
+-
++ sp = scsi_cmd_priv(cmd);
+ qpair = sp->qpair;
+- if (!qpair)
++
++ if ((sp->fcport && sp->fcport->deleted) || !qpair)
+ return SUCCESS;
+
+ spin_lock_irqsave(qpair->qp_lock_ptr, flags);
+- if (sp->type != SRB_SCSI_CMD || GET_CMD_SP(sp) != cmd) {
+- /* there's a chance an interrupt could clear
+- the ptr as part of done & free */
++ if (sp->completed) {
+ spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
+ return SUCCESS;
+ }
+
+- if (sp_get(sp)){
+- /* ref_count is already 0 */
++ if (sp->abort || sp->aborted) {
+ spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
+- return SUCCESS;
++ return FAILED;
+ }
++
++ sp->abort = 1;
++	sp->comp = &comp;
+ spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
+
++
+ id = cmd->device->id;
+ lun = cmd->device->lun;
+
+@@ -1319,28 +1293,37 @@ qla2xxx_eh_abort(struct scsi_cmnd *cmd)
+ "Aborting from RISC nexus=%ld:%d:%llu sp=%p cmd=%p handle=%x\n",
+ vha->host_no, id, lun, sp, cmd, sp->handle);
+
++ /*
++ * Abort will release the original Command/sp from FW. Let the
++	 * original command call scsi_done. In return, it will wake up
++	 * this sleeping thread.
++ */
+ rval = ha->isp_ops->abort_command(sp);
++
+ ql_dbg(ql_dbg_taskm, vha, 0x8003,
+ "Abort command mbx cmd=%p, rval=%x.\n", cmd, rval);
+
++ /* Wait for the command completion. */
++ ratov_j = ha->r_a_tov/10 * 4 * 1000;
++ ratov_j = msecs_to_jiffies(ratov_j);
+ switch (rval) {
+ case QLA_SUCCESS:
+- /*
+- * The command has been aborted. That means that the firmware
+- * won't report a completion.
+- */
+- sp->done(sp, DID_ABORT << 16);
+- ret = SUCCESS;
++ if (!wait_for_completion_timeout(&comp, ratov_j)) {
++ ql_dbg(ql_dbg_taskm, vha, 0xffff,
++ "%s: Abort wait timer (4 * R_A_TOV[%d]) expired\n",
++ __func__, ha->r_a_tov/10);
++ ret = FAILED;
++ } else {
++ ret = SUCCESS;
++ }
+ break;
+ default:
+- /*
+- * Either abort failed or abort and completion raced. Let
+- * the SCSI core retry the abort in the former case.
+- */
+ ret = FAILED;
+ break;
+ }
+
++ sp->comp = NULL;
++
+ ql_log(ql_log_info, vha, 0x801c,
+ "Abort command issued nexus=%ld:%d:%llu -- %x.\n",
+ vha->host_no, id, lun, ret);
+@@ -1723,29 +1706,52 @@ static void qla2x00_abort_srb(struct qla_qpair *qp, srb_t *sp, const int res,
+ scsi_qla_host_t *vha = qp->vha;
+ struct qla_hw_data *ha = vha->hw;
+ int rval;
++ bool ret_cmd;
++ uint32_t ratov_j;
+
+- if (sp_get(sp))
++ if (qla2x00_chip_is_down(vha)) {
++ sp->done(sp, res);
+ return;
++ }
+
+ if (sp->type == SRB_NVME_CMD || sp->type == SRB_NVME_LS ||
+ (sp->type == SRB_SCSI_CMD && !ha->flags.eeh_busy &&
+ !test_bit(ABORT_ISP_ACTIVE, &vha->dpc_flags) &&
+ !qla2x00_isp_reg_stat(ha))) {
++ if (sp->comp) {
++ sp->done(sp, res);
++ return;
++ }
++
+ sp->comp = ∁
++ sp->abort = 1;
+ spin_unlock_irqrestore(qp->qp_lock_ptr, *flags);
+- rval = ha->isp_ops->abort_command(sp);
+
++ rval = ha->isp_ops->abort_command(sp);
++ /* Wait for command completion. */
++ ret_cmd = false;
++ ratov_j = ha->r_a_tov/10 * 4 * 1000;
++ ratov_j = msecs_to_jiffies(ratov_j);
+ switch (rval) {
+ case QLA_SUCCESS:
+- sp->done(sp, res);
++			if (!wait_for_completion_timeout(&comp, ratov_j)) {
++ ql_dbg(ql_dbg_taskm, vha, 0xffff,
++ "%s: Abort wait timer (4 * R_A_TOV[%d]) expired\n",
++ __func__, ha->r_a_tov/10);
++ ret_cmd = true;
++ }
++ /* else FW return SP to driver */
+ break;
+- case QLA_FUNCTION_PARAMETER_ERROR:
+- wait_for_completion(&comp);
++ default:
++ ret_cmd = true;
+ break;
+ }
+
+ spin_lock_irqsave(qp->qp_lock_ptr, *flags);
+- sp->comp = NULL;
++ if (ret_cmd && (!sp->completed || !sp->aborted))
++ sp->done(sp, res);
++ } else {
++ sp->done(sp, res);
+ }
+ }
+
+@@ -1768,7 +1774,6 @@ __qla2x00_abort_all_cmds(struct qla_qpair *qp, int res)
+ for (cnt = 1; cnt < req->num_outstanding_cmds; cnt++) {
+ sp = req->outstanding_cmds[cnt];
+ if (sp) {
+- req->outstanding_cmds[cnt] = NULL;
+ switch (sp->cmd_type) {
+ case TYPE_SRB:
+ qla2x00_abort_srb(qp, sp, res, &flags);
+@@ -1790,6 +1795,7 @@ __qla2x00_abort_all_cmds(struct qla_qpair *qp, int res)
+ default:
+ break;
+ }
++ req->outstanding_cmds[cnt] = NULL;
+ }
+ }
+ spin_unlock_irqrestore(qp->qp_lock_ptr, flags);
+@@ -4671,7 +4677,8 @@ qla2x00_mem_free(struct qla_hw_data *ha)
+ ha->sfp_data = NULL;
+
+ if (ha->flt)
+- dma_free_coherent(&ha->pdev->dev, SFP_DEV_SIZE,
++ dma_free_coherent(&ha->pdev->dev,
++ sizeof(struct qla_flt_header) + FLT_REGIONS_SIZE,
+ ha->flt, ha->flt_dma);
+ ha->flt = NULL;
+ ha->flt_dma = 0;
+@@ -7168,6 +7175,7 @@ struct scsi_host_template qla2xxx_driver_template = {
+
+ .supported_mode = MODE_INITIATOR,
+ .track_queue_depth = 1,
++ .cmd_size = sizeof(srb_t),
+ };
+
+ static const struct pci_error_handlers qla2xxx_err_handler = {
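
Taken together, the qla_os.c hunks above replace the driver's private srb
allocation and hand-rolled ref_count with midlayer-owned per-command data:
.cmd_size = sizeof(srb_t) makes the SCSI core allocate an srb alongside
every scsi_cmnd (retrieved via scsi_cmd_priv()), and the abort paths now
block on a completion that the regular completion path signals, bounded by
4 x R_A_TOV. A condensed, hedged sketch of both pieces; my_priv, my_tmpl
and my_eh_abort are hypothetical names:

	#include <linux/completion.h>
	#include <linux/jiffies.h>
	#include <linux/types.h>
	#include <scsi/scsi.h>
	#include <scsi/scsi_cmnd.h>
	#include <scsi/scsi_host.h>

	struct my_priv {			/* per-command driver data */
		struct completion *comp;
		unsigned int abort:1;
	};

	static struct scsi_host_template my_tmpl = {
		.cmd_size = sizeof(struct my_priv),	/* midlayer allocates it */
	};

	static int my_eh_abort(struct scsi_cmnd *cmd, u32 r_a_tov)
	{
		struct my_priv *priv = scsi_cmd_priv(cmd);
		DECLARE_COMPLETION_ONSTACK(comp);
		unsigned long ratov_j;

		priv->abort = 1;
		priv->comp = &comp;	/* completion path does complete(comp) */

		/* same arithmetic as the patch: 4 * R_A_TOV in jiffies */
		ratov_j = msecs_to_jiffies(r_a_tov / 10 * 4 * 1000);
		if (!wait_for_completion_timeout(&comp, ratov_j)) {
			priv->comp = NULL;
			return FAILED;	/* firmware never returned the command */
		}
		priv->comp = NULL;
		return SUCCESS;
	}
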
+diff --git a/drivers/scsi/qla2xxx/qla_sup.c b/drivers/scsi/qla2xxx/qla_sup.c
+index 1eb82384d933..81c5d3de666b 100644
+--- a/drivers/scsi/qla2xxx/qla_sup.c
++++ b/drivers/scsi/qla2xxx/qla_sup.c
+@@ -680,8 +680,8 @@ qla2xxx_get_flt_info(scsi_qla_host_t *vha, uint32_t flt_addr)
+
+ ha->flt_region_flt = flt_addr;
+ wptr = (uint16_t *)ha->flt;
+- qla24xx_read_flash_data(vha, (void *)flt, flt_addr,
+- (sizeof(struct qla_flt_header) + FLT_REGIONS_SIZE) >> 2);
++ ha->isp_ops->read_optrom(vha, (void *)flt, flt_addr << 2,
++ (sizeof(struct qla_flt_header) + FLT_REGIONS_SIZE));
+
+ if (le16_to_cpu(*wptr) == 0xffff)
+ goto no_flash_data;
+@@ -948,11 +948,11 @@ qla2xxx_get_fdt_info(scsi_qla_host_t *vha)
+ struct req_que *req = ha->req_q_map[0];
+ uint16_t cnt, chksum;
+ uint16_t *wptr = (void *)req->ring;
+- struct qla_fdt_layout *fdt = (void *)req->ring;
++ struct qla_fdt_layout *fdt = (struct qla_fdt_layout *)req->ring;
+ uint8_t man_id, flash_id;
+ uint16_t mid = 0, fid = 0;
+
+- qla24xx_read_flash_data(vha, (void *)fdt, ha->flt_region_fdt,
++ ha->isp_ops->read_optrom(vha, fdt, ha->flt_region_fdt << 2,
+ OPTROM_BURST_DWORDS);
+ if (le16_to_cpu(*wptr) == 0xffff)
+ goto no_flash_data;
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index 1bb0fc9324ea..0b8ec4218d6b 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -3237,7 +3237,8 @@ int qlt_xmit_response(struct qla_tgt_cmd *cmd, int xmit_type,
+ if (!qpair->fw_started || (cmd->reset_count != qpair->chip_reset) ||
+ (cmd->sess && cmd->sess->deleted)) {
+ cmd->state = QLA_TGT_STATE_PROCESSED;
+- return 0;
++ res = 0;
++ goto free;
+ }
+
+ ql_dbg_qp(ql_dbg_tgt, qpair, 0xe018,
+@@ -3248,9 +3249,8 @@ int qlt_xmit_response(struct qla_tgt_cmd *cmd, int xmit_type,
+
+ res = qlt_pre_xmit_response(cmd, &prm, xmit_type, scsi_status,
+ &full_req_cnt);
+- if (unlikely(res != 0)) {
+- return res;
+- }
++ if (unlikely(res != 0))
++ goto free;
+
+ spin_lock_irqsave(qpair->qp_lock_ptr, flags);
+
+@@ -3270,7 +3270,8 @@ int qlt_xmit_response(struct qla_tgt_cmd *cmd, int xmit_type,
+ vha->flags.online, qla2x00_reset_active(vha),
+ cmd->reset_count, qpair->chip_reset);
+ spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
+- return 0;
++ res = 0;
++ goto free;
+ }
+
+ /* Does F/W have an IOCBs for this request */
+@@ -3373,6 +3374,8 @@ out_unmap_unlock:
+ qlt_unmap_sg(vha, cmd);
+ spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
+
++free:
++ vha->hw->tgt.tgt_ops->free_cmd(cmd);
+ return res;
+ }
+ EXPORT_SYMBOL(qlt_xmit_response);
+@@ -6195,7 +6198,6 @@ static void qlt_abort_work(struct qla_tgt *tgt,
+ struct qla_hw_data *ha = vha->hw;
+ struct fc_port *sess = NULL;
+ unsigned long flags = 0, flags2 = 0;
+- uint32_t be_s_id;
+ uint8_t s_id[3];
+ int rc;
+
+@@ -6208,8 +6210,7 @@ static void qlt_abort_work(struct qla_tgt *tgt,
+ s_id[1] = prm->abts.fcp_hdr_le.s_id[1];
+ s_id[2] = prm->abts.fcp_hdr_le.s_id[0];
+
+- sess = ha->tgt.tgt_ops->find_sess_by_s_id(vha,
+- (unsigned char *)&be_s_id);
++ sess = ha->tgt.tgt_ops->find_sess_by_s_id(vha, s_id);
+ if (!sess) {
+ spin_unlock_irqrestore(&ha->tgt.sess_lock, flags2);
+
+@@ -6683,7 +6684,8 @@ qlt_enable_vha(struct scsi_qla_host *vha)
+ } else {
+ set_bit(ISP_ABORT_NEEDED, &base_vha->dpc_flags);
+ qla2xxx_wake_dpc(base_vha);
+- qla2x00_wait_for_hba_online(base_vha);
++ WARN_ON_ONCE(qla2x00_wait_for_hba_online(base_vha) !=
++ QLA_SUCCESS);
+ }
+ mutex_unlock(&ha->optrom_mutex);
+ }
+@@ -6714,7 +6716,9 @@ static void qlt_disable_vha(struct scsi_qla_host *vha)
+
+ set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags);
+ qla2xxx_wake_dpc(vha);
+- qla2x00_wait_for_hba_online(vha);
++ if (qla2x00_wait_for_hba_online(vha) != QLA_SUCCESS)
++ ql_dbg(ql_dbg_tgt, vha, 0xe081,
++ "qla2x00_wait_for_hba_online() failed\n");
+ }
+
+ /*
+diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
+index d15412d3d9bd..0bb06e33ecab 100644
+--- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c
++++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
+@@ -620,6 +620,7 @@ static int tcm_qla2xxx_queue_data_in(struct se_cmd *se_cmd)
+ {
+ struct qla_tgt_cmd *cmd = container_of(se_cmd,
+ struct qla_tgt_cmd, se_cmd);
++ struct scsi_qla_host *vha = cmd->vha;
+
+ if (cmd->aborted) {
+ /* Cmd can loop during Q-full. tcm_qla2xxx_aborted_task
+@@ -632,6 +633,7 @@ static int tcm_qla2xxx_queue_data_in(struct se_cmd *se_cmd)
+ cmd->se_cmd.transport_state,
+ cmd->se_cmd.t_state,
+ cmd->se_cmd.se_cmd_flags);
++ vha->hw->tgt.tgt_ops->free_cmd(cmd);
+ return 0;
+ }
+
+@@ -659,6 +661,7 @@ static int tcm_qla2xxx_queue_status(struct se_cmd *se_cmd)
+ {
+ struct qla_tgt_cmd *cmd = container_of(se_cmd,
+ struct qla_tgt_cmd, se_cmd);
++ struct scsi_qla_host *vha = cmd->vha;
+ int xmit_type = QLA_TGT_XMIT_STATUS;
+
+ if (cmd->aborted) {
+@@ -672,6 +675,7 @@ static int tcm_qla2xxx_queue_status(struct se_cmd *se_cmd)
+ cmd, kref_read(&cmd->se_cmd.cmd_kref),
+ cmd->se_cmd.transport_state, cmd->se_cmd.t_state,
+ cmd->se_cmd.se_cmd_flags);
++ vha->hw->tgt.tgt_ops->free_cmd(cmd);
+ return 0;
+ }
+ cmd->bufflen = se_cmd->data_length;
+diff --git a/drivers/staging/erofs/xattr.c b/drivers/staging/erofs/xattr.c
+index df40654b9fbb..bec68beaeca3 100644
+--- a/drivers/staging/erofs/xattr.c
++++ b/drivers/staging/erofs/xattr.c
+@@ -649,6 +649,8 @@ ssize_t erofs_listxattr(struct dentry *dentry,
+ struct listxattr_iter it;
+
+ ret = init_inode_xattrs(d_inode(dentry));
++ if (ret == -ENOATTR)
++ return 0;
+ if (ret)
+ return ret;
+
+diff --git a/drivers/staging/isdn/gigaset/usb-gigaset.c b/drivers/staging/isdn/gigaset/usb-gigaset.c
+index 1b9b43659bdf..a20c0bfa68f3 100644
+--- a/drivers/staging/isdn/gigaset/usb-gigaset.c
++++ b/drivers/staging/isdn/gigaset/usb-gigaset.c
+@@ -571,8 +571,7 @@ static int gigaset_initcshw(struct cardstate *cs)
+ {
+ struct usb_cardstate *ucs;
+
+- cs->hw.usb = ucs =
+- kmalloc(sizeof(struct usb_cardstate), GFP_KERNEL);
++ cs->hw.usb = ucs = kzalloc(sizeof(struct usb_cardstate), GFP_KERNEL);
+ if (!ucs) {
+ pr_err("out of memory\n");
+ return -ENOMEM;
+@@ -584,9 +583,6 @@ static int gigaset_initcshw(struct cardstate *cs)
+ ucs->bchars[3] = 0;
+ ucs->bchars[4] = 0x11;
+ ucs->bchars[5] = 0x13;
+- ucs->bulk_out_buffer = NULL;
+- ucs->bulk_out_urb = NULL;
+- ucs->read_urb = NULL;
+ tasklet_init(&cs->write_tasklet,
+ gigaset_modem_fill, (unsigned long) cs);
+
+@@ -685,6 +681,11 @@ static int gigaset_probe(struct usb_interface *interface,
+ return -ENODEV;
+ }
+
++ if (hostif->desc.bNumEndpoints < 2) {
++ dev_err(&interface->dev, "missing endpoints\n");
++ return -ENODEV;
++ }
++
+ dev_info(&udev->dev, "%s: Device matched ... !\n", __func__);
+
+ /* allocate memory for our device state and initialize it */
+@@ -704,6 +705,12 @@ static int gigaset_probe(struct usb_interface *interface,
+
+ endpoint = &hostif->endpoint[0].desc;
+
++ if (!usb_endpoint_is_bulk_out(endpoint)) {
++ dev_err(&interface->dev, "missing bulk-out endpoint\n");
++ retval = -ENODEV;
++ goto error;
++ }
++
+ buffer_size = le16_to_cpu(endpoint->wMaxPacketSize);
+ ucs->bulk_out_size = buffer_size;
+ ucs->bulk_out_epnum = usb_endpoint_num(endpoint);
+@@ -723,6 +730,12 @@ static int gigaset_probe(struct usb_interface *interface,
+
+ endpoint = &hostif->endpoint[1].desc;
+
++ if (!usb_endpoint_is_int_in(endpoint)) {
++ dev_err(&interface->dev, "missing int-in endpoint\n");
++ retval = -ENODEV;
++ goto error;
++ }
++
+ ucs->busy = 0;
+
+ ucs->read_urb = usb_alloc_urb(0, GFP_KERNEL);
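
The usb-gigaset.c hunks above stop trusting device-supplied descriptors:
probe now verifies bNumEndpoints and the type/direction of each endpoint
before sizing buffers or submitting URBs, failing with -ENODEV otherwise
(several other hunks in this patch -- adutux, idmouse, rtl8188eu, rtl8712 --
make the related switch from altsetting[0] to cur_altsetting, since the
active altsetting need not be the first). A minimal sketch of the
validation pattern; my_probe is a hypothetical driver probe:

	#include <linux/usb.h>

	static int my_probe(struct usb_interface *intf,
			    const struct usb_device_id *id)
	{
		struct usb_host_interface *alt = intf->cur_altsetting;

		if (alt->desc.bNumEndpoints < 2) {
			dev_err(&intf->dev, "missing endpoints\n");
			return -ENODEV;
		}
		/* endpoint 0 must be bulk-out, endpoint 1 interrupt-in */
		if (!usb_endpoint_is_bulk_out(&alt->endpoint[0].desc) ||
		    !usb_endpoint_is_int_in(&alt->endpoint[1].desc))
			return -ENODEV;

		/* only now is it safe to read wMaxPacketSize, bInterval, ... */
		return 0;
	}
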
+diff --git a/drivers/staging/media/hantro/hantro_v4l2.c b/drivers/staging/media/hantro/hantro_v4l2.c
+index 68f45ee66821..21d454e22f50 100644
+--- a/drivers/staging/media/hantro/hantro_v4l2.c
++++ b/drivers/staging/media/hantro/hantro_v4l2.c
+@@ -356,19 +356,26 @@ vidioc_s_fmt_out_mplane(struct file *file, void *priv, struct v4l2_format *f)
+ {
+ struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp;
+ struct hantro_ctx *ctx = fh_to_ctx(priv);
++ struct vb2_queue *vq = v4l2_m2m_get_vq(ctx->fh.m2m_ctx, f->type);
+ const struct hantro_fmt *formats;
+ unsigned int num_fmts;
+- struct vb2_queue *vq;
+ int ret;
+
+- /* Change not allowed if queue is busy. */
+- vq = v4l2_m2m_get_vq(ctx->fh.m2m_ctx, f->type);
+- if (vb2_is_busy(vq))
+- return -EBUSY;
++ ret = vidioc_try_fmt_out_mplane(file, priv, f);
++ if (ret)
++ return ret;
+
+ if (!hantro_is_encoder_ctx(ctx)) {
+ struct vb2_queue *peer_vq;
+
++ /*
++ * In order to support dynamic resolution change,
++ * the decoder admits a resolution change, as long
++ * as the pixelformat remains. Can't be done if streaming.
++ */
++ if (vb2_is_streaming(vq) || (vb2_is_busy(vq) &&
++ pix_mp->pixelformat != ctx->src_fmt.pixelformat))
++ return -EBUSY;
+ /*
+ * Since format change on the OUTPUT queue will reset
+ * the CAPTURE queue, we can't allow doing so
+@@ -378,12 +385,15 @@ vidioc_s_fmt_out_mplane(struct file *file, void *priv, struct v4l2_format *f)
+ V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE);
+ if (vb2_is_busy(peer_vq))
+ return -EBUSY;
++ } else {
++ /*
++ * The encoder doesn't admit a format change if
++ * there are OUTPUT buffers allocated.
++ */
++ if (vb2_is_busy(vq))
++ return -EBUSY;
+ }
+
+- ret = vidioc_try_fmt_out_mplane(file, priv, f);
+- if (ret)
+- return ret;
+-
+ formats = hantro_get_formats(ctx, &num_fmts);
+ ctx->vpu_src_fmt = hantro_find_format(formats, num_fmts,
+ pix_mp->pixelformat);
+diff --git a/drivers/staging/rtl8188eu/os_dep/usb_intf.c b/drivers/staging/rtl8188eu/os_dep/usb_intf.c
+index 4fac9dca798e..a7cac0719b8b 100644
+--- a/drivers/staging/rtl8188eu/os_dep/usb_intf.c
++++ b/drivers/staging/rtl8188eu/os_dep/usb_intf.c
+@@ -70,7 +70,7 @@ static struct dvobj_priv *usb_dvobj_init(struct usb_interface *usb_intf)
+ phost_conf = pusbd->actconfig;
+ pconf_desc = &phost_conf->desc;
+
+- phost_iface = &usb_intf->altsetting[0];
++ phost_iface = usb_intf->cur_altsetting;
+ piface_desc = &phost_iface->desc;
+
+ pdvobjpriv->NumInterfaces = pconf_desc->bNumInterfaces;
+diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c
+index d0daae0b8299..426f51e302ef 100644
+--- a/drivers/staging/rtl8712/usb_intf.c
++++ b/drivers/staging/rtl8712/usb_intf.c
+@@ -247,7 +247,7 @@ static uint r8712_usb_dvobj_init(struct _adapter *padapter)
+
+ pdvobjpriv->padapter = padapter;
+ padapter->eeprom_address_size = 6;
+- phost_iface = &pintf->altsetting[0];
++ phost_iface = pintf->cur_altsetting;
+ piface_desc = &phost_iface->desc;
+ pdvobjpriv->nr_endpoint = piface_desc->bNumEndpoints;
+ if (pusbd->speed == USB_SPEED_HIGH) {
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+index cc4383d1ec3e..f292ee3065bc 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+@@ -3300,7 +3300,7 @@ static int __init vchiq_driver_init(void)
+ return 0;
+
+ region_unregister:
+- platform_driver_unregister(&vchiq_driver);
++ unregister_chrdev_region(vchiq_devid, 1);
+
+ class_destroy:
+ class_destroy(vchiq_class);
+diff --git a/drivers/usb/atm/ueagle-atm.c b/drivers/usb/atm/ueagle-atm.c
+index 8faa51b1a520..958bae30b3ff 100644
+--- a/drivers/usb/atm/ueagle-atm.c
++++ b/drivers/usb/atm/ueagle-atm.c
+@@ -2124,10 +2124,11 @@ resubmit:
+ /*
+ * Start the modem : init the data and start kernel thread
+ */
+-static int uea_boot(struct uea_softc *sc)
++static int uea_boot(struct uea_softc *sc, struct usb_interface *intf)
+ {
+- int ret, size;
+ struct intr_pkt *intr;
++ int ret = -ENOMEM;
++ int size;
+
+ uea_enters(INS_TO_USBDEV(sc));
+
+@@ -2152,6 +2153,11 @@ static int uea_boot(struct uea_softc *sc)
+ if (UEA_CHIP_VERSION(sc) == ADI930)
+ load_XILINX_firmware(sc);
+
++ if (intf->cur_altsetting->desc.bNumEndpoints < 1) {
++ ret = -ENODEV;
++ goto err0;
++ }
++
+ intr = kmalloc(size, GFP_KERNEL);
+ if (!intr)
+ goto err0;
+@@ -2163,8 +2169,7 @@ static int uea_boot(struct uea_softc *sc)
+ usb_fill_int_urb(sc->urb_int, sc->usb_dev,
+ usb_rcvintpipe(sc->usb_dev, UEA_INTR_PIPE),
+ intr, size, uea_intr, sc,
+- sc->usb_dev->actconfig->interface[0]->altsetting[0].
+- endpoint[0].desc.bInterval);
++ intf->cur_altsetting->endpoint[0].desc.bInterval);
+
+ ret = usb_submit_urb(sc->urb_int, GFP_KERNEL);
+ if (ret < 0) {
+@@ -2179,6 +2184,7 @@ static int uea_boot(struct uea_softc *sc)
+ sc->kthread = kthread_create(uea_kthread, sc, "ueagle-atm");
+ if (IS_ERR(sc->kthread)) {
+ uea_err(INS_TO_USBDEV(sc), "failed to create thread\n");
++ ret = PTR_ERR(sc->kthread);
+ goto err2;
+ }
+
+@@ -2193,7 +2199,7 @@ err1:
+ kfree(intr);
+ err0:
+ uea_leaves(INS_TO_USBDEV(sc));
+- return -ENOMEM;
++ return ret;
+ }
+
+ /*
+@@ -2554,7 +2560,7 @@ static int uea_bind(struct usbatm_data *usbatm, struct usb_interface *intf,
+ if (ret < 0)
+ goto error;
+
+- ret = uea_boot(sc);
++ ret = uea_boot(sc, intf);
+ if (ret < 0)
+ goto error_rm_grp;
+
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 236313f41f4a..dfe9ac8d2375 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -5814,7 +5814,7 @@ re_enumerate_no_bos:
+
+ /**
+ * usb_reset_device - warn interface drivers and perform a USB port reset
+- * @udev: device to reset (not in SUSPENDED or NOTATTACHED state)
++ * @udev: device to reset (not in NOTATTACHED state)
+ *
+ * Warns all drivers bound to registered interfaces (using their pre_reset
+ * method), performs the port reset, and then lets the drivers know that
+@@ -5842,8 +5842,7 @@ int usb_reset_device(struct usb_device *udev)
+ struct usb_host_config *config = udev->actconfig;
+ struct usb_hub *hub = usb_hub_to_struct_hub(udev->parent);
+
+- if (udev->state == USB_STATE_NOTATTACHED ||
+- udev->state == USB_STATE_SUSPENDED) {
++ if (udev->state == USB_STATE_NOTATTACHED) {
+ dev_dbg(&udev->dev, "device reset not allowed in state %d\n",
+ udev->state);
+ return -EINVAL;
+diff --git a/drivers/usb/core/urb.c b/drivers/usb/core/urb.c
+index 0eab79f82ce4..da923ec17612 100644
+--- a/drivers/usb/core/urb.c
++++ b/drivers/usb/core/urb.c
+@@ -45,6 +45,7 @@ void usb_init_urb(struct urb *urb)
+ if (urb) {
+ memset(urb, 0, sizeof(*urb));
+ kref_init(&urb->kref);
++ INIT_LIST_HEAD(&urb->urb_list);
+ INIT_LIST_HEAD(&urb->anchor_list);
+ }
+ }
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index 023f0357efd7..294276f7deb9 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -29,7 +29,8 @@
+ #define PCI_DEVICE_ID_INTEL_BXT_M 0x1aaa
+ #define PCI_DEVICE_ID_INTEL_APL 0x5aaa
+ #define PCI_DEVICE_ID_INTEL_KBP 0xa2b0
+-#define PCI_DEVICE_ID_INTEL_CMLH 0x02ee
++#define PCI_DEVICE_ID_INTEL_CMLLP 0x02ee
++#define PCI_DEVICE_ID_INTEL_CMLH 0x06ee
+ #define PCI_DEVICE_ID_INTEL_GLK 0x31aa
+ #define PCI_DEVICE_ID_INTEL_CNPLP 0x9dee
+ #define PCI_DEVICE_ID_INTEL_CNPH 0xa36e
+@@ -308,6 +309,9 @@ static const struct pci_device_id dwc3_pci_id_table[] = {
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MRFLD),
+ (kernel_ulong_t) &dwc3_pci_mrfld_properties, },
+
++ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_CMLLP),
++ (kernel_ulong_t) &dwc3_pci_intel_properties, },
++
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_CMLH),
+ (kernel_ulong_t) &dwc3_pci_intel_properties, },
+
+diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
+index 3996b9c4ff8d..fd1b100d2927 100644
+--- a/drivers/usb/dwc3/ep0.c
++++ b/drivers/usb/dwc3/ep0.c
+@@ -1117,6 +1117,9 @@ static void dwc3_ep0_xfernotready(struct dwc3 *dwc,
+ void dwc3_ep0_interrupt(struct dwc3 *dwc,
+ const struct dwc3_event_depevt *event)
+ {
++ struct dwc3_ep *dep = dwc->eps[event->endpoint_number];
++ u8 cmd;
++
+ switch (event->endpoint_event) {
+ case DWC3_DEPEVT_XFERCOMPLETE:
+ dwc3_ep0_xfer_complete(dwc, event);
+@@ -1129,7 +1132,12 @@ void dwc3_ep0_interrupt(struct dwc3 *dwc,
+ case DWC3_DEPEVT_XFERINPROGRESS:
+ case DWC3_DEPEVT_RXTXFIFOEVT:
+ case DWC3_DEPEVT_STREAMEVT:
++ break;
+ case DWC3_DEPEVT_EPCMDCMPLT:
++ cmd = DEPEVT_PARAMETER_CMD(event->parameters);
++
++ if (cmd == DWC3_DEPCMD_ENDTRANSFER)
++ dep->flags &= ~DWC3_EP_TRANSFER_STARTED;
+ break;
+ }
+ }
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 56bd6ae0c18f..54f79871be9a 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2471,7 +2471,7 @@ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
+
+ req->request.actual = req->request.length - req->remaining;
+
+- if (!dwc3_gadget_ep_request_completed(req) &&
++ if (!dwc3_gadget_ep_request_completed(req) ||
+ req->num_pending_sgs) {
+ __dwc3_gadget_kick_transfer(dep);
+ goto out;
+@@ -2699,6 +2699,9 @@ static void dwc3_stop_active_transfer(struct dwc3_ep *dep, bool force,
+ WARN_ON_ONCE(ret);
+ dep->resource_index = 0;
+
++ if (!interrupt)
++ dep->flags &= ~DWC3_EP_TRANSFER_STARTED;
++
+ if (dwc3_is_usb31(dwc) || dwc->revision < DWC3_REVISION_310A)
+ udelay(100);
+ }
+diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c
+index 33852c2b29d1..ab9ac48a751a 100644
+--- a/drivers/usb/gadget/configfs.c
++++ b/drivers/usb/gadget/configfs.c
+@@ -1544,6 +1544,7 @@ static struct config_group *gadgets_make(
+ gi->composite.resume = NULL;
+ gi->composite.max_speed = USB_SPEED_SUPER;
+
++ spin_lock_init(&gi->spinlock);
+ mutex_init(&gi->lock);
+ INIT_LIST_HEAD(&gi->string_list);
+ INIT_LIST_HEAD(&gi->available_func);
+diff --git a/drivers/usb/gadget/udc/pch_udc.c b/drivers/usb/gadget/udc/pch_udc.c
+index cded51f36fc1..11a31cf2a8c4 100644
+--- a/drivers/usb/gadget/udc/pch_udc.c
++++ b/drivers/usb/gadget/udc/pch_udc.c
+@@ -1519,7 +1519,6 @@ static void pch_udc_free_dma_chain(struct pch_udc_dev *dev,
+ td = phys_to_virt(addr);
+ addr2 = (dma_addr_t)td->next;
+ dma_pool_free(dev->data_requests, td, addr);
+- td->next = 0x00;
+ addr = addr2;
+ }
+ req->chain_len = 1;
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index 3abe70ff1b1e..1711113d1eb8 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -806,7 +806,7 @@ static void xhci_del_comp_mod_timer(struct xhci_hcd *xhci, u32 status,
+
+ static int xhci_handle_usb2_port_link_resume(struct xhci_port *port,
+ u32 *status, u32 portsc,
+- unsigned long flags)
++ unsigned long *flags)
+ {
+ struct xhci_bus_state *bus_state;
+ struct xhci_hcd *xhci;
+@@ -860,11 +860,11 @@ static int xhci_handle_usb2_port_link_resume(struct xhci_port *port,
+ xhci_test_and_clear_bit(xhci, port, PORT_PLC);
+ xhci_set_link_state(xhci, port, XDEV_U0);
+
+- spin_unlock_irqrestore(&xhci->lock, flags);
++ spin_unlock_irqrestore(&xhci->lock, *flags);
+ time_left = wait_for_completion_timeout(
+ &bus_state->rexit_done[wIndex],
+ msecs_to_jiffies(XHCI_MAX_REXIT_TIMEOUT_MS));
+- spin_lock_irqsave(&xhci->lock, flags);
++ spin_lock_irqsave(&xhci->lock, *flags);
+
+ if (time_left) {
+ slot_id = xhci_find_slot_id_by_port(hcd, xhci,
+@@ -920,11 +920,13 @@ static void xhci_get_usb3_port_status(struct xhci_port *port, u32 *status,
+ {
+ struct xhci_bus_state *bus_state;
+ struct xhci_hcd *xhci;
++ struct usb_hcd *hcd;
+ u32 link_state;
+ u32 portnum;
+
+ bus_state = &port->rhub->bus_state;
+ xhci = hcd_to_xhci(port->rhub->hcd);
++ hcd = port->rhub->hcd;
+ link_state = portsc & PORT_PLS_MASK;
+ portnum = port->hcd_portnum;
+
+@@ -952,12 +954,20 @@ static void xhci_get_usb3_port_status(struct xhci_port *port, u32 *status,
+ bus_state->suspended_ports &= ~(1 << portnum);
+ }
+
++ /* remote wake resume signaling complete */
++ if (bus_state->port_remote_wakeup & (1 << portnum) &&
++ link_state != XDEV_RESUME &&
++ link_state != XDEV_RECOVERY) {
++ bus_state->port_remote_wakeup &= ~(1 << portnum);
++ usb_hcd_end_port_resume(&hcd->self, portnum);
++ }
++
+ xhci_hub_report_usb3_link_state(xhci, status, portsc);
+ xhci_del_comp_mod_timer(xhci, portsc, portnum);
+ }
+
+ static void xhci_get_usb2_port_status(struct xhci_port *port, u32 *status,
+- u32 portsc, unsigned long flags)
++ u32 portsc, unsigned long *flags)
+ {
+ struct xhci_bus_state *bus_state;
+ u32 link_state;
+@@ -1007,7 +1017,7 @@ static void xhci_get_usb2_port_status(struct xhci_port *port, u32 *status,
+ static u32 xhci_get_port_status(struct usb_hcd *hcd,
+ struct xhci_bus_state *bus_state,
+ u16 wIndex, u32 raw_port_status,
+- unsigned long flags)
++ unsigned long *flags)
+ __releases(&xhci->lock)
+ __acquires(&xhci->lock)
+ {
+@@ -1130,7 +1140,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ }
+ trace_xhci_get_port_status(wIndex, temp);
+ status = xhci_get_port_status(hcd, bus_state, wIndex, temp,
+- flags);
++ &flags);
+ if (status == 0xffffffff)
+ goto error;
+
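
The xhci-hub.c hunks above fix a pass-by-value bug: helpers that drop and
re-take xhci->lock were given the caller's saved IRQ flags by value, so the
flags stored by their inner spin_lock_irqsave() landed in a throwaway copy.
Passing unsigned long *flags keeps caller and callee working on one copy.
A small sketch of the corrected shape, with hypothetical names:

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(my_lock);

	/* Callee may drop the lock to sleep, so flags must be a pointer. */
	static void my_helper(unsigned long *flags)
	{
		spin_unlock_irqrestore(&my_lock, *flags);
		/* ... sleep, e.g. wait_for_completion_timeout() ... */
		spin_lock_irqsave(&my_lock, *flags);	/* updates caller's flags */
	}

	static void my_caller(void)
	{
		unsigned long flags;

		spin_lock_irqsave(&my_lock, flags);
		my_helper(&flags);
		spin_unlock_irqrestore(&my_lock, flags);
	}
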
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index cf5e17962179..cd18c1ce0f71 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -1909,13 +1909,17 @@ no_bw:
+ xhci->usb3_rhub.num_ports = 0;
+ xhci->num_active_eps = 0;
+ kfree(xhci->usb2_rhub.ports);
++ kfree(xhci->usb2_rhub.psi);
+ kfree(xhci->usb3_rhub.ports);
++ kfree(xhci->usb3_rhub.psi);
+ kfree(xhci->hw_ports);
+ kfree(xhci->rh_bw);
+ kfree(xhci->ext_caps);
+
+ xhci->usb2_rhub.ports = NULL;
++ xhci->usb2_rhub.psi = NULL;
+ xhci->usb3_rhub.ports = NULL;
++ xhci->usb3_rhub.psi = NULL;
+ xhci->hw_ports = NULL;
+ xhci->rh_bw = NULL;
+ xhci->ext_caps = NULL;
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 1e0236e90687..1904ef56f61c 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -519,6 +519,18 @@ static int xhci_pci_resume(struct usb_hcd *hcd, bool hibernated)
+ }
+ #endif /* CONFIG_PM */
+
++static void xhci_pci_shutdown(struct usb_hcd *hcd)
++{
++ struct xhci_hcd *xhci = hcd_to_xhci(hcd);
++ struct pci_dev *pdev = to_pci_dev(hcd->self.controller);
++
++ xhci_shutdown(hcd);
++
++ /* Yet another workaround for spurious wakeups at shutdown with HSW */
++ if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
++ pci_set_power_state(pdev, PCI_D3hot);
++}
++
+ /*-------------------------------------------------------------------------*/
+
+ /* PCI driver selection metadata; PCI hotplugging uses this */
+@@ -554,6 +566,7 @@ static int __init xhci_pci_init(void)
+ #ifdef CONFIG_PM
+ xhci_pci_hc_driver.pci_suspend = xhci_pci_suspend;
+ xhci_pci_hc_driver.pci_resume = xhci_pci_resume;
++ xhci_pci_hc_driver.shutdown = xhci_pci_shutdown;
+ #endif
+ return pci_register_driver(&xhci_pci_driver);
+ }
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index e7aab31fd9a5..4a2fe56940bd 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1624,7 +1624,6 @@ static void handle_port_status(struct xhci_hcd *xhci,
+ slot_id = xhci_find_slot_id_by_port(hcd, xhci, hcd_portnum + 1);
+ if (slot_id && xhci->devs[slot_id])
+ xhci->devs[slot_id]->flags |= VDEV_PORT_ERROR;
+- bus_state->port_remote_wakeup &= ~(1 << hcd_portnum);
+ }
+
+ if ((portsc & PORT_PLC) && (portsc & PORT_PLS_MASK) == XDEV_RESUME) {
+@@ -1644,6 +1643,7 @@ static void handle_port_status(struct xhci_hcd *xhci,
+ */
+ bus_state->port_remote_wakeup |= 1 << hcd_portnum;
+ xhci_test_and_clear_bit(xhci, port, PORT_PLC);
++ usb_hcd_start_port_resume(&hcd->self, hcd_portnum);
+ xhci_set_link_state(xhci, port, XDEV_U0);
+ /* Need to wait until the next link state change
+ * indicates the device is actually in U0.
+@@ -1684,7 +1684,6 @@ static void handle_port_status(struct xhci_hcd *xhci,
+ if (slot_id && xhci->devs[slot_id])
+ xhci_ring_device(xhci, slot_id);
+ if (bus_state->port_remote_wakeup & (1 << hcd_portnum)) {
+- bus_state->port_remote_wakeup &= ~(1 << hcd_portnum);
+ xhci_test_and_clear_bit(xhci, port, PORT_PLC);
+ usb_wakeup_notification(hcd->self.root_hub,
+ hcd_portnum + 1);
+@@ -2378,7 +2377,8 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+ case COMP_SUCCESS:
+ if (EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)) == 0)
+ break;
+- if (xhci->quirks & XHCI_TRUST_TX_LENGTH)
++ if (xhci->quirks & XHCI_TRUST_TX_LENGTH ||
++ ep_ring->last_td_was_short)
+ trb_comp_code = COMP_SHORT_PACKET;
+ else
+ xhci_warn_ratelimited(xhci,
+diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
+index 2ff7c911fbd0..dc172513a4aa 100644
+--- a/drivers/usb/host/xhci-tegra.c
++++ b/drivers/usb/host/xhci-tegra.c
+@@ -755,7 +755,6 @@ static int tegra_xusb_runtime_suspend(struct device *dev)
+ {
+ struct tegra_xusb *tegra = dev_get_drvdata(dev);
+
+- tegra_xusb_phy_disable(tegra);
+ regulator_bulk_disable(tegra->soc->num_supplies, tegra->supplies);
+ tegra_xusb_clk_disable(tegra);
+
+@@ -779,16 +778,8 @@ static int tegra_xusb_runtime_resume(struct device *dev)
+ goto disable_clk;
+ }
+
+- err = tegra_xusb_phy_enable(tegra);
+- if (err < 0) {
+- dev_err(dev, "failed to enable PHYs: %d\n", err);
+- goto disable_regulator;
+- }
+-
+ return 0;
+
+-disable_regulator:
+- regulator_bulk_disable(tegra->soc->num_supplies, tegra->supplies);
+ disable_clk:
+ tegra_xusb_clk_disable(tegra);
+ return err;
+@@ -1181,6 +1172,12 @@ static int tegra_xusb_probe(struct platform_device *pdev)
+ */
+ platform_set_drvdata(pdev, tegra);
+
++ err = tegra_xusb_phy_enable(tegra);
++ if (err < 0) {
++ dev_err(&pdev->dev, "failed to enable PHYs: %d\n", err);
++ goto put_hcd;
++ }
++
+ pm_runtime_enable(&pdev->dev);
+ if (pm_runtime_enabled(&pdev->dev))
+ err = pm_runtime_get_sync(&pdev->dev);
+@@ -1189,7 +1186,7 @@ static int tegra_xusb_probe(struct platform_device *pdev)
+
+ if (err < 0) {
+ dev_err(&pdev->dev, "failed to enable device: %d\n", err);
+- goto disable_rpm;
++ goto disable_phy;
+ }
+
+ tegra_xusb_config(tegra, regs);
+@@ -1275,9 +1272,11 @@ remove_usb2:
+ put_rpm:
+ if (!pm_runtime_status_suspended(&pdev->dev))
+ tegra_xusb_runtime_suspend(&pdev->dev);
+-disable_rpm:
+- pm_runtime_disable(&pdev->dev);
++put_hcd:
+ usb_put_hcd(tegra->hcd);
++disable_phy:
++ tegra_xusb_phy_disable(tegra);
++ pm_runtime_disable(&pdev->dev);
+ put_powerdomains:
+ if (!of_property_read_bool(pdev->dev.of_node, "power-domains")) {
+ tegra_powergate_power_off(TEGRA_POWERGATE_XUSBC);
+@@ -1314,6 +1313,8 @@ static int tegra_xusb_remove(struct platform_device *pdev)
+ tegra_xusb_powerdomain_remove(&pdev->dev, tegra);
+ }
+
++ tegra_xusb_phy_disable(tegra);
++
+ tegra_xusb_padctl_put(tegra->padctl);
+
+ return 0;
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 270e45058272..73ad81eb2a8e 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -770,7 +770,7 @@ static void xhci_stop(struct usb_hcd *hcd)
+ *
+ * This will only ever be called with the main usb_hcd (the USB3 roothub).
+ */
+-static void xhci_shutdown(struct usb_hcd *hcd)
++void xhci_shutdown(struct usb_hcd *hcd)
+ {
+ struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+
+@@ -789,11 +789,8 @@ static void xhci_shutdown(struct usb_hcd *hcd)
+ xhci_dbg_trace(xhci, trace_xhci_dbg_init,
+ "xhci_shutdown completed - status = %x",
+ readl(&xhci->op_regs->status));
+-
+- /* Yet another workaround for spurious wakeups at shutdown with HSW */
+- if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
+- pci_set_power_state(to_pci_dev(hcd->self.sysdev), PCI_D3hot);
+ }
++EXPORT_SYMBOL_GPL(xhci_shutdown);
+
+ #ifdef CONFIG_PM
+ static void xhci_save_registers(struct xhci_hcd *xhci)
+@@ -973,7 +970,7 @@ static bool xhci_pending_portevent(struct xhci_hcd *xhci)
+ int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup)
+ {
+ int rc = 0;
+- unsigned int delay = XHCI_MAX_HALT_USEC;
++ unsigned int delay = XHCI_MAX_HALT_USEC * 2;
+ struct usb_hcd *hcd = xhci_to_hcd(xhci);
+ u32 command;
+ u32 res;
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index f5c41448d067..73b49bd0451f 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -2050,6 +2050,7 @@ int xhci_start(struct xhci_hcd *xhci);
+ int xhci_reset(struct xhci_hcd *xhci);
+ int xhci_run(struct usb_hcd *hcd);
+ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks);
++void xhci_shutdown(struct usb_hcd *hcd);
+ void xhci_init_driver(struct hc_driver *drv,
+ const struct xhci_driver_overrides *over);
+ int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id);
+diff --git a/drivers/usb/misc/adutux.c b/drivers/usb/misc/adutux.c
+index 6f5edb9fc61e..d8d157c4c271 100644
+--- a/drivers/usb/misc/adutux.c
++++ b/drivers/usb/misc/adutux.c
+@@ -669,7 +669,7 @@ static int adu_probe(struct usb_interface *interface,
+ init_waitqueue_head(&dev->read_wait);
+ init_waitqueue_head(&dev->write_wait);
+
+- res = usb_find_common_endpoints_reverse(&interface->altsetting[0],
++ res = usb_find_common_endpoints_reverse(interface->cur_altsetting,
+ NULL, NULL,
+ &dev->interrupt_in_endpoint,
+ &dev->interrupt_out_endpoint);
+diff --git a/drivers/usb/misc/idmouse.c b/drivers/usb/misc/idmouse.c
+index 20b0f91a5d9b..bb24527f3c70 100644
+--- a/drivers/usb/misc/idmouse.c
++++ b/drivers/usb/misc/idmouse.c
+@@ -337,7 +337,7 @@ static int idmouse_probe(struct usb_interface *interface,
+ int result;
+
+ /* check if we have gotten the data or the hid interface */
+- iface_desc = &interface->altsetting[0];
++ iface_desc = interface->cur_altsetting;
+ if (iface_desc->desc.bInterfaceClass != 0x0A)
+ return -ENODEV;
+
+diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
+index ac2b4fcc265f..f48a23adbc35 100644
+--- a/drivers/usb/mon/mon_bin.c
++++ b/drivers/usb/mon/mon_bin.c
+@@ -1039,12 +1039,18 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg
+
+ mutex_lock(&rp->fetch_lock);
+ spin_lock_irqsave(&rp->b_lock, flags);
+- mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
+- kfree(rp->b_vec);
+- rp->b_vec = vec;
+- rp->b_size = size;
+- rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
+- rp->cnt_lost = 0;
++ if (rp->mmap_active) {
++ mon_free_buff(vec, size/CHUNK_SIZE);
++ kfree(vec);
++ ret = -EBUSY;
++ } else {
++ mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
++ kfree(rp->b_vec);
++ rp->b_vec = vec;
++ rp->b_size = size;
++ rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
++ rp->cnt_lost = 0;
++ }
+ spin_unlock_irqrestore(&rp->b_lock, flags);
+ mutex_unlock(&rp->fetch_lock);
+ }
+@@ -1216,13 +1222,21 @@ mon_bin_poll(struct file *file, struct poll_table_struct *wait)
+ static void mon_bin_vma_open(struct vm_area_struct *vma)
+ {
+ struct mon_reader_bin *rp = vma->vm_private_data;
++ unsigned long flags;
++
++ spin_lock_irqsave(&rp->b_lock, flags);
+ rp->mmap_active++;
++ spin_unlock_irqrestore(&rp->b_lock, flags);
+ }
+
+ static void mon_bin_vma_close(struct vm_area_struct *vma)
+ {
++ unsigned long flags;
++
+ struct mon_reader_bin *rp = vma->vm_private_data;
++ spin_lock_irqsave(&rp->b_lock, flags);
+ rp->mmap_active--;
++ spin_unlock_irqrestore(&rp->b_lock, flags);
+ }
+
+ /*
+@@ -1234,16 +1248,12 @@ static vm_fault_t mon_bin_vma_fault(struct vm_fault *vmf)
+ unsigned long offset, chunk_idx;
+ struct page *pageptr;
+
+- mutex_lock(&rp->fetch_lock);
+ offset = vmf->pgoff << PAGE_SHIFT;
+- if (offset >= rp->b_size) {
+- mutex_unlock(&rp->fetch_lock);
++ if (offset >= rp->b_size)
+ return VM_FAULT_SIGBUS;
+- }
+ chunk_idx = offset / CHUNK_SIZE;
+ pageptr = rp->b_vec[chunk_idx].pg;
+ get_page(pageptr);
+- mutex_unlock(&rp->fetch_lock);
+ vmf->page = pageptr;
+ return 0;
+ }
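
The mon_bin.c hunks above close a race between mmap and the ioctl that
swaps the capture buffer: mmap_active is now read and written only under
b_lock, the ioctl refuses to replace the buffer while a mapping exists
(-EBUSY), and the fault handler no longer needs fetch_lock because a live
mapping now pins the buffer. A sketch of that guard; struct rb and the
function names are hypothetical, and rp->lock is assumed initialized with
spin_lock_init():

	#include <linux/errno.h>
	#include <linux/spinlock.h>

	struct rb {
		spinlock_t lock;
		int mmap_active;	/* only touched under 'lock' */
		void *buf;
	};

	static int swap_buffer(struct rb *rp, void *newbuf)
	{
		int ret = 0;

		spin_lock(&rp->lock);
		if (rp->mmap_active)
			ret = -EBUSY;	/* pages may be mapped: keep old buf */
		else
			rp->buf = newbuf;	/* no mapping can appear meanwhile */
		spin_unlock(&rp->lock);
		return ret;
	}

	static void vma_open(struct rb *rp)
	{
		spin_lock(&rp->lock);
		rp->mmap_active++;	/* counted under the same lock */
		spin_unlock(&rp->lock);
	}
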
+diff --git a/drivers/usb/roles/class.c b/drivers/usb/roles/class.c
+index 86defca6623e..6b3c08730ba3 100644
+--- a/drivers/usb/roles/class.c
++++ b/drivers/usb/roles/class.c
+@@ -144,8 +144,8 @@ EXPORT_SYMBOL_GPL(usb_role_switch_get);
+ void usb_role_switch_put(struct usb_role_switch *sw)
+ {
+ if (!IS_ERR_OR_NULL(sw)) {
+- put_device(&sw->dev);
+ module_put(sw->dev.parent->driver->owner);
++ put_device(&sw->dev);
+ }
+ }
+ EXPORT_SYMBOL_GPL(usb_role_switch_put);
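
The roles/class.c hunk reorders usb_role_switch_put(): module_put() walks
sw->dev.parent->driver->owner, so it must run while the object is still
guaranteed alive; put_device() can drop the final reference and free it.
The general rule is to dereference an object for anything you still need
before dropping your reference. Minimal sketch with hypothetical names:

	#include <linux/module.h>

	struct obj {
		struct module *owner;
	};

	void obj_put(struct obj *o);	/* may free 'o' on last reference */

	static void release(struct obj *o)
	{
		module_put(o->owner);	/* dereference while 'o' is alive */
		obj_put(o);		/* drop the reference last */
	}
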
+diff --git a/drivers/usb/serial/io_edgeport.c b/drivers/usb/serial/io_edgeport.c
+index 48a439298a68..9690a5f4b9d6 100644
+--- a/drivers/usb/serial/io_edgeport.c
++++ b/drivers/usb/serial/io_edgeport.c
+@@ -2901,16 +2901,18 @@ static int edge_startup(struct usb_serial *serial)
+ response = 0;
+
+ if (edge_serial->is_epic) {
++ struct usb_host_interface *alt;
++
++ alt = serial->interface->cur_altsetting;
++
+ /* EPIC thing, set up our interrupt polling now and our read
+ * urb, so that the device knows it really is connected. */
+ interrupt_in_found = bulk_in_found = bulk_out_found = false;
+- for (i = 0; i < serial->interface->altsetting[0]
+- .desc.bNumEndpoints; ++i) {
++ for (i = 0; i < alt->desc.bNumEndpoints; ++i) {
+ struct usb_endpoint_descriptor *endpoint;
+ int buffer_size;
+
+- endpoint = &serial->interface->altsetting[0].
+- endpoint[i].desc;
++ endpoint = &alt->endpoint[i].desc;
+ buffer_size = usb_endpoint_maxp(endpoint);
+ if (!interrupt_in_found &&
+ (usb_endpoint_is_int_in(endpoint))) {
+diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
+index 0d044d59317e..80448ecf6b6a 100644
+--- a/drivers/usb/storage/uas.c
++++ b/drivers/usb/storage/uas.c
+@@ -825,6 +825,10 @@ static int uas_slave_configure(struct scsi_device *sdev)
+ sdev->wce_default_on = 1;
+ }
+
++ /* Some disks cannot handle READ_CAPACITY_16 */
++ if (devinfo->flags & US_FL_NO_READ_CAPACITY_16)
++ sdev->no_read_capacity_16 = 1;
++
+ /*
+ * Some disks return the total number of blocks in response
+ * to READ CAPACITY rather than the highest block number.
+@@ -833,6 +837,12 @@ static int uas_slave_configure(struct scsi_device *sdev)
+ if (devinfo->flags & US_FL_FIX_CAPACITY)
+ sdev->fix_capacity = 1;
+
++ /*
++ * in some cases we have to guess
++ */
++ if (devinfo->flags & US_FL_CAPACITY_HEURISTICS)
++ sdev->guess_capacity = 1;
++
+ /*
+ * Some devices don't like MODE SENSE with page=0x3f,
+ * which is the command used for checking if a device
+diff --git a/drivers/usb/typec/class.c b/drivers/usb/typec/class.c
+index a18285a990a8..78fc5fa963c3 100644
+--- a/drivers/usb/typec/class.c
++++ b/drivers/usb/typec/class.c
+@@ -1604,14 +1604,16 @@ struct typec_port *typec_register_port(struct device *parent,
+
+ port->sw = typec_switch_get(&port->dev);
+ if (IS_ERR(port->sw)) {
++ ret = PTR_ERR(port->sw);
+ put_device(&port->dev);
+- return ERR_CAST(port->sw);
++ return ERR_PTR(ret);
+ }
+
+ port->mux = typec_mux_get(&port->dev, NULL);
+ if (IS_ERR(port->mux)) {
++ ret = PTR_ERR(port->mux);
+ put_device(&port->dev);
+- return ERR_CAST(port->mux);
++ return ERR_PTR(ret);
+ }
+
+ ret = device_add(&port->dev);
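
The typec class.c hunk fixes a use-after-free on the error path:
ERR_CAST(port->sw) read port->sw after put_device() may already have freed
port, so the error value is now captured with PTR_ERR() first. A sketch of
the idiom; the types and helpers here are hypothetical:

	#include <linux/err.h>

	struct dev_obj {
		void *helper;
	};

	void dev_obj_put(struct dev_obj *d);	/* may free 'd' */
	void *helper_get(struct dev_obj *d);	/* ERR_PTR() on failure */

	static void *setup(struct dev_obj *d)
	{
		long err;

		d->helper = helper_get(d);
		if (IS_ERR(d->helper)) {
			err = PTR_ERR(d->helper); /* read while 'd' is alive */
			dev_obj_put(d);		  /* 'd' may be gone now */
			return ERR_PTR(err);	  /* not ERR_CAST(d->helper) */
		}
		return d->helper;
	}
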
+diff --git a/drivers/video/hdmi.c b/drivers/video/hdmi.c
+index b939bc28d886..9c82e2a0a411 100644
+--- a/drivers/video/hdmi.c
++++ b/drivers/video/hdmi.c
+@@ -1576,12 +1576,12 @@ static int hdmi_avi_infoframe_unpack(struct hdmi_avi_infoframe *frame,
+ if (ptr[0] & 0x10)
+ frame->active_aspect = ptr[1] & 0xf;
+ if (ptr[0] & 0x8) {
+- frame->top_bar = (ptr[5] << 8) + ptr[6];
+- frame->bottom_bar = (ptr[7] << 8) + ptr[8];
++ frame->top_bar = (ptr[6] << 8) | ptr[5];
++ frame->bottom_bar = (ptr[8] << 8) | ptr[7];
+ }
+ if (ptr[0] & 0x4) {
+- frame->left_bar = (ptr[9] << 8) + ptr[10];
+- frame->right_bar = (ptr[11] << 8) + ptr[12];
++ frame->left_bar = (ptr[10] << 8) | ptr[9];
++ frame->right_bar = (ptr[12] << 8) | ptr[11];
+ }
+ frame->scan_mode = ptr[0] & 0x3;
+
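
The hdmi.c hunk fixes both the byte offsets and the byte order of the AVI
InfoFrame bar fields: each bar is a little-endian 16-bit value, so the
second byte is the high byte, and OR is the conventional operator for
assembling the halves. A self-contained userspace C illustration with a
hypothetical two-byte payload:

	#include <stdint.h>
	#include <stdio.h>

	/* Assemble a little-endian 16-bit value from two bytes. */
	static uint16_t get_le16(const uint8_t *p)
	{
		return (uint16_t)((p[1] << 8) | p[0]);	/* p[1] is the high byte */
	}

	int main(void)
	{
		const uint8_t bar[] = { 0x02, 0x01 };	/* 0x0102 stored LE */

		printf("0x%04x\n", get_le16(bar));	/* prints 0x0102 */
		return 0;
	}
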
+diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
+index b9f8355947d5..53330a23e507 100644
+--- a/drivers/virtio/virtio_balloon.c
++++ b/drivers/virtio/virtio_balloon.c
+@@ -721,6 +721,17 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info,
+
+ get_page(newpage); /* balloon reference */
+
++ /*
++ * When we migrate a page to a different zone and adjusted the
++ * managed page count when inflating, we have to fixup the count of
++ * both involved zones.
++ */
++ if (!virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM) &&
++ page_zone(page) != page_zone(newpage)) {
++ adjust_managed_page_count(page, 1);
++ adjust_managed_page_count(newpage, -1);
++ }
++
+ /* balloon's page migration 1st step -- inflate "newpage" */
+ spin_lock_irqsave(&vb_dev_info->pages_lock, flags);
+ balloon_page_insert(vb_dev_info, newpage);
+diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
+index 6858a05606dd..8235ac0db4b1 100644
+--- a/fs/btrfs/delayed-inode.c
++++ b/fs/btrfs/delayed-inode.c
+@@ -1948,12 +1948,19 @@ void btrfs_kill_all_delayed_nodes(struct btrfs_root *root)
+ }
+
+ inode_id = delayed_nodes[n - 1]->inode_id + 1;
+-
+- for (i = 0; i < n; i++)
+- refcount_inc(&delayed_nodes[i]->refs);
++ for (i = 0; i < n; i++) {
++ /*
++ * Don't increase refs in case the node is dead and
++ * about to be removed from the tree in the loop below
++ */
++ if (!refcount_inc_not_zero(&delayed_nodes[i]->refs))
++ delayed_nodes[i] = NULL;
++ }
+ spin_unlock(&root->inode_lock);
+
+ for (i = 0; i < n; i++) {
++ if (!delayed_nodes[i])
++ continue;
+ __btrfs_kill_delayed_node(delayed_nodes[i]);
+ btrfs_release_delayed_node(delayed_nodes[i]);
+ }
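
The delayed-inode.c hunk replaces a blind refcount_inc() with
refcount_inc_not_zero() while collecting nodes under root->inode_lock: a
node whose count already reached zero is being freed, and incrementing it
would resurrect a dying object; such entries are NULLed and skipped
instead. A sketch of the try-get idiom with a hypothetical node type:

	#include <linux/refcount.h>

	struct node {
		refcount_t refs;
	};

	void node_release(struct node *n);

	/* Take a reference only if the object is not already dying. */
	static struct node *node_tryget(struct node *n)
	{
		if (!refcount_inc_not_zero(&n->refs))
			return NULL;	/* count hit 0: the freer owns it */
		return n;
	}

	static void node_put(struct node *n)
	{
		if (refcount_dec_and_test(&n->refs))
			node_release(n);
	}
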
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 3e0c8fcb658f..580b462796e5 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -4117,7 +4117,7 @@ retry:
+ for (i = 0; i < nr_pages; i++) {
+ struct page *page = pvec.pages[i];
+
+- done_index = page->index;
++ done_index = page->index + 1;
+ /*
+ * At this point we hold neither the i_pages lock nor
+ * the page lock: the page may be truncated or
+@@ -4152,16 +4152,6 @@ retry:
+
+ ret = __extent_writepage(page, wbc, epd);
+ if (ret < 0) {
+- /*
+- * done_index is set past this page,
+- * so media errors will not choke
+- * background writeout for the entire
+- * file. This has consequences for
+- * range_cyclic semantics (ie. it may
+- * not be suitable for data integrity
+- * writeout).
+- */
+- done_index = page->index + 1;
+ done = 1;
+ break;
+ }
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index a8a2adaf222f..e33e98e2815a 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -1636,6 +1636,7 @@ static noinline ssize_t btrfs_buffered_write(struct kiocb *iocb,
+ break;
+ }
+
++ only_release_metadata = false;
+ sector_offset = pos & (fs_info->sectorsize - 1);
+ reserve_bytes = round_up(write_bytes + sector_offset,
+ fs_info->sectorsize);
+@@ -1791,7 +1792,6 @@ again:
+ set_extent_bit(&BTRFS_I(inode)->io_tree, lockstart,
+ lockend, EXTENT_NORESERVE, NULL,
+ NULL, GFP_NOFS);
+- only_release_metadata = false;
+ }
+
+ btrfs_drop_pages(pages, num_pages);
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index 52ad985cc7f9..0d685a134ea5 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -384,6 +384,12 @@ static int io_ctl_prepare_pages(struct btrfs_io_ctl *io_ctl, struct inode *inode
+ if (uptodate && !PageUptodate(page)) {
+ btrfs_readpage(NULL, page);
+ lock_page(page);
++ if (page->mapping != inode->i_mapping) {
++ btrfs_err(BTRFS_I(inode)->root->fs_info,
++ "free space cache page truncated");
++ io_ctl_drop_pages(io_ctl);
++ return -EIO;
++ }
+ if (!PageUptodate(page)) {
+ btrfs_err(BTRFS_I(inode)->root->fs_info,
+ "error reading free space cache");
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 5b7768ccd20b..d24687cd1efa 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -2175,12 +2175,16 @@ again:
+ mapping_set_error(page->mapping, ret);
+ end_extent_writepage(page, ret, page_start, page_end);
+ ClearPageChecked(page);
+- goto out;
++ goto out_reserved;
+ }
+
+ ClearPageChecked(page);
+ set_page_dirty(page);
++out_reserved:
+ btrfs_delalloc_release_extents(BTRFS_I(inode), PAGE_SIZE);
++ if (ret)
++ btrfs_delalloc_release_space(inode, data_reserved, page_start,
++ PAGE_SIZE, true);
+ out:
+ unlock_extent_cached(&BTRFS_I(inode)->io_tree, page_start, page_end,
+ &cached_state);
+@@ -9529,6 +9533,9 @@ static int btrfs_rename_exchange(struct inode *old_dir,
+ goto out_notrans;
+ }
+
++ if (dest != root)
++ btrfs_record_root_in_trans(trans, dest);
++
+ /*
+ * We need to find a free sequence number both in the source and
+ * in the destination directory for the exchange.
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 91c702b4cae9..2ee98d46e935 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -24,6 +24,14 @@
+ #include "transaction.h"
+ #include "compression.h"
+
++/*
++ * Maximum number of references an extent can have in order for us to attempt to
++ * issue clone operations instead of write operations. This currently exists to
++ * avoid hitting limitations of the backreference walking code (taking a lot of
++ * time and using too much memory for extents with large number of references).
++ */
++#define SEND_MAX_EXTENT_REFS 64
++
+ /*
+ * A fs_path is a helper to dynamically build path names with unknown size.
+ * It reallocates the internal buffer on demand.
+@@ -1287,6 +1295,7 @@ static int find_extent_clone(struct send_ctx *sctx,
+ struct clone_root *cur_clone_root;
+ struct btrfs_key found_key;
+ struct btrfs_path *tmp_path;
++ struct btrfs_extent_item *ei;
+ int compressed;
+ u32 i;
+
+@@ -1334,7 +1343,6 @@ static int find_extent_clone(struct send_ctx *sctx,
+ ret = extent_from_logical(fs_info, disk_byte, tmp_path,
+ &found_key, &flags);
+ up_read(&fs_info->commit_root_sem);
+- btrfs_release_path(tmp_path);
+
+ if (ret < 0)
+ goto out;
+@@ -1343,6 +1351,21 @@ static int find_extent_clone(struct send_ctx *sctx,
+ goto out;
+ }
+
++ ei = btrfs_item_ptr(tmp_path->nodes[0], tmp_path->slots[0],
++ struct btrfs_extent_item);
++ /*
++ * Backreference walking (iterate_extent_inodes() below) is currently
++ * too expensive when an extent has a large number of references, both
++ * in time spent and used memory. So for now just fallback to write
++ * operations instead of clone operations when an extent has more than
++ * a certain amount of references.
++ */
++ if (btrfs_extent_refs(tmp_path->nodes[0], ei) > SEND_MAX_EXTENT_REFS) {
++ ret = -ENOENT;
++ goto out;
++ }
++ btrfs_release_path(tmp_path);
++
+ /*
+ * Setup the clone roots.
+ */
+diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
+index 7f6aa1816409..5ab2920b0c1f 100644
+--- a/fs/btrfs/volumes.h
++++ b/fs/btrfs/volumes.h
+@@ -331,7 +331,6 @@ struct btrfs_bio {
+ u64 map_type; /* get from map_lookup->type */
+ bio_end_io_t *end_io;
+ struct bio *orig_bio;
+- unsigned long flags;
+ void *private;
+ atomic_t error;
+ int max_errors;
+diff --git a/fs/ext2/inode.c b/fs/ext2/inode.c
+index 7004ce581a32..a16c53655e77 100644
+--- a/fs/ext2/inode.c
++++ b/fs/ext2/inode.c
+@@ -701,10 +701,13 @@ static int ext2_get_blocks(struct inode *inode,
+ if (!partial) {
+ count++;
+ mutex_unlock(&ei->truncate_mutex);
+- if (err)
+- goto cleanup;
+ goto got_it;
+ }
++
++ if (err) {
++ mutex_unlock(&ei->truncate_mutex);
++ goto cleanup;
++ }
+ }
+
+ /*
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 819dcc475e5d..e0347a75f4cb 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -196,7 +196,12 @@ void ext4_evict_inode(struct inode *inode)
+ {
+ handle_t *handle;
+ int err;
+- int extra_credits = 3;
++ /*
++ * Credits for final inode cleanup and freeing:
++ * sb + inode (ext4_orphan_del()), block bitmap, group descriptor
++ * (xattr block freeing), bitmap, group descriptor (inode freeing)
++ */
++ int extra_credits = 6;
+ struct ext4_xattr_inode_array *ea_inode_array = NULL;
+
+ trace_ext4_evict_inode(inode);
+@@ -252,8 +257,12 @@ void ext4_evict_inode(struct inode *inode)
+ if (!IS_NOQUOTA(inode))
+ extra_credits += EXT4_MAXQUOTAS_DEL_BLOCKS(inode->i_sb);
+
++ /*
++ * Block bitmap, group descriptor, and inode are accounted in both
++ * ext4_blocks_for_truncate() and extra_credits. So subtract 3.
++ */
+ handle = ext4_journal_start(inode, EXT4_HT_TRUNCATE,
+- ext4_blocks_for_truncate(inode)+extra_credits);
++ ext4_blocks_for_truncate(inode) + extra_credits - 3);
+ if (IS_ERR(handle)) {
+ ext4_std_error(inode->i_sb, PTR_ERR(handle));
+ /*
+@@ -5484,11 +5493,15 @@ static void ext4_wait_for_tail_page_commit(struct inode *inode)
+
+ offset = inode->i_size & (PAGE_SIZE - 1);
+ /*
+- * All buffers in the last page remain valid? Then there's nothing to
+- * do. We do the check mainly to optimize the common PAGE_SIZE ==
+- * blocksize case
++ * If the page is fully truncated, we don't need to wait for any commit
++ * (and we even should not as __ext4_journalled_invalidatepage() may
++ * strip all buffers from the page but keep the page dirty which can then
++ * confuse e.g. concurrent ext4_writepage() seeing dirty page without
++ * buffers). Also we don't need to wait for any commit if all buffers in
++ * the page remain valid. This is most beneficial for the common case of
++ * blocksize == PAGESIZE.
+ */
+- if (offset > PAGE_SIZE - i_blocksize(inode))
++ if (!offset || offset > (PAGE_SIZE - i_blocksize(inode)))
+ return;
+ while (1) {
+ page = find_lock_page(inode->i_mapping,
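
The first ext4 inode.c hunk replaces a magic extra_credits = 3 with 6 and
documents where the credits go; the journal start then subtracts 3 because
ext4_blocks_for_truncate() already budgets the bitmap, group descriptor,
and inode that extra_credits counts again. The arithmetic, spelled out from
the patch's own comments:

	extra_credits = 2 (sb + inode, ext4_orphan_del())
	              + 2 (block bitmap + group descriptor, xattr freeing)
	              + 2 (bitmap + group descriptor, inode freeing)
	              = 6
	requested     = ext4_blocks_for_truncate(inode) + extra_credits - 3
	                /* the 3 overlapping credits would be counted twice */
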
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 129029534075..2fcf80c2337a 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -3182,18 +3182,17 @@ static int ext4_unlink(struct inode *dir, struct dentry *dentry)
+ if (IS_DIRSYNC(dir))
+ ext4_handle_sync(handle);
+
+- if (inode->i_nlink == 0) {
+- ext4_warning_inode(inode, "Deleting file '%.*s' with no links",
+- dentry->d_name.len, dentry->d_name.name);
+- set_nlink(inode, 1);
+- }
+ retval = ext4_delete_entry(handle, dir, de, bh);
+ if (retval)
+ goto end_unlink;
+ dir->i_ctime = dir->i_mtime = current_time(dir);
+ ext4_update_dx_flag(dir);
+ ext4_mark_inode_dirty(handle, dir);
+- drop_nlink(inode);
++ if (inode->i_nlink == 0)
++ ext4_warning_inode(inode, "Deleting file '%.*s' with no links",
++ dentry->d_name.len, dentry->d_name.name);
++ else
++ drop_nlink(inode);
+ if (!inode->i_nlink)
+ ext4_orphan_add(handle, inode);
+ inode->i_ctime = current_time(inode);
+diff --git a/fs/ocfs2/quota_global.c b/fs/ocfs2/quota_global.c
+index 7a922190a8c7..eda83487c9ec 100644
+--- a/fs/ocfs2/quota_global.c
++++ b/fs/ocfs2/quota_global.c
+@@ -728,7 +728,7 @@ static int ocfs2_release_dquot(struct dquot *dquot)
+
+ mutex_lock(&dquot->dq_lock);
+ /* Check whether we are not racing with some other dqget() */
+- if (atomic_read(&dquot->dq_count) > 1)
++ if (dquot_is_busy(dquot))
+ goto out;
+ /* Running from downconvert thread? Postpone quota processing to wq */
+ if (current == osb->dc_task) {
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index 702aa63f6774..29abdb1d3b5c 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -1170,7 +1170,7 @@ static int ovl_rename(struct inode *olddir, struct dentry *old,
+ if (newdentry == trap)
+ goto out_dput;
+
+- if (WARN_ON(olddentry->d_inode == newdentry->d_inode))
++ if (olddentry->d_inode == newdentry->d_inode)
+ goto out_dput;
+
+ err = 0;
+diff --git a/fs/overlayfs/inode.c b/fs/overlayfs/inode.c
+index bc14781886bf..b045cf1826fc 100644
+--- a/fs/overlayfs/inode.c
++++ b/fs/overlayfs/inode.c
+@@ -200,8 +200,14 @@ int ovl_getattr(const struct path *path, struct kstat *stat,
+ if (ovl_test_flag(OVL_INDEX, d_inode(dentry)) ||
+ (!ovl_verify_lower(dentry->d_sb) &&
+ (is_dir || lowerstat.nlink == 1))) {
+- stat->ino = lowerstat.ino;
+ lower_layer = ovl_layer_lower(dentry);
++ /*
++ * Cannot use origin st_dev;st_ino because
++ * origin inode content may differ from overlay
++ * inode content.
++ */
++ if (samefs || lower_layer->fsid)
++ stat->ino = lowerstat.ino;
+ }
+
+ /*
+diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c
+index e9717c2f7d45..f47c591402d7 100644
+--- a/fs/overlayfs/namei.c
++++ b/fs/overlayfs/namei.c
+@@ -325,6 +325,14 @@ int ovl_check_origin_fh(struct ovl_fs *ofs, struct ovl_fh *fh, bool connected,
+ int i;
+
+ for (i = 0; i < ofs->numlower; i++) {
++ /*
++ * If lower fs uuid is not unique among lower fs we cannot match
++ * fh->uuid to layer.
++ */
++ if (ofs->lower_layers[i].fsid &&
++ ofs->lower_layers[i].fs->bad_uuid)
++ continue;
++
+ origin = ovl_decode_real_fh(fh, ofs->lower_layers[i].mnt,
+ connected);
+ if (origin)
+diff --git a/fs/overlayfs/ovl_entry.h b/fs/overlayfs/ovl_entry.h
+index a8279280e88d..28348c44ea5b 100644
+--- a/fs/overlayfs/ovl_entry.h
++++ b/fs/overlayfs/ovl_entry.h
+@@ -22,6 +22,8 @@ struct ovl_config {
+ struct ovl_sb {
+ struct super_block *sb;
+ dev_t pseudo_dev;
++ /* Unusable (conflicting) uuid */
++ bool bad_uuid;
+ };
+
+ struct ovl_layer {
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index afbcb116a7f1..7621ff176d15 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -1255,7 +1255,7 @@ static bool ovl_lower_uuid_ok(struct ovl_fs *ofs, const uuid_t *uuid)
+ {
+ unsigned int i;
+
+- if (!ofs->config.nfs_export && !(ofs->config.index && ofs->upper_mnt))
++ if (!ofs->config.nfs_export && !ofs->upper_mnt)
+ return true;
+
+ for (i = 0; i < ofs->numlowerfs; i++) {
+@@ -1263,9 +1263,13 @@ static bool ovl_lower_uuid_ok(struct ovl_fs *ofs, const uuid_t *uuid)
+ * We use uuid to associate an overlay lower file handle with a
+ * lower layer, so we can accept lower fs with null uuid as long
+ * as all lower layers with null uuid are on the same fs.
++ * if we detect multiple lower fs with the same uuid, we
++ * disable lower file handle decoding on all of them.
+ */
+- if (uuid_equal(&ofs->lower_fs[i].sb->s_uuid, uuid))
++ if (uuid_equal(&ofs->lower_fs[i].sb->s_uuid, uuid)) {
++ ofs->lower_fs[i].bad_uuid = true;
+ return false;
++ }
+ }
+ return true;
+ }
+@@ -1277,6 +1281,7 @@ static int ovl_get_fsid(struct ovl_fs *ofs, const struct path *path)
+ unsigned int i;
+ dev_t dev;
+ int err;
++ bool bad_uuid = false;
+
+ /* fsid 0 is reserved for upper fs even with non upper overlay */
+ if (ofs->upper_mnt && ofs->upper_mnt->mnt_sb == sb)
+@@ -1288,11 +1293,15 @@ static int ovl_get_fsid(struct ovl_fs *ofs, const struct path *path)
+ }
+
+ if (!ovl_lower_uuid_ok(ofs, &sb->s_uuid)) {
+- ofs->config.index = false;
+- ofs->config.nfs_export = false;
+- pr_warn("overlayfs: %s uuid detected in lower fs '%pd2', falling back to index=off,nfs_export=off.\n",
+- uuid_is_null(&sb->s_uuid) ? "null" : "conflicting",
+- path->dentry);
++ bad_uuid = true;
++ if (ofs->config.index || ofs->config.nfs_export) {
++ ofs->config.index = false;
++ ofs->config.nfs_export = false;
++ pr_warn("overlayfs: %s uuid detected in lower fs '%pd2', falling back to index=off,nfs_export=off.\n",
++ uuid_is_null(&sb->s_uuid) ? "null" :
++ "conflicting",
++ path->dentry);
++ }
+ }
+
+ err = get_anon_bdev(&dev);
+@@ -1303,6 +1312,7 @@ static int ovl_get_fsid(struct ovl_fs *ofs, const struct path *path)
+
+ ofs->lower_fs[ofs->numlowerfs].sb = sb;
+ ofs->lower_fs[ofs->numlowerfs].pseudo_dev = dev;
++ ofs->lower_fs[ofs->numlowerfs].bad_uuid = bad_uuid;
+ ofs->numlowerfs++;
+
+ return ofs->numlowerfs;
+diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
+index be9c471cdbc8..48041a6b4e32 100644
+--- a/fs/quota/dquot.c
++++ b/fs/quota/dquot.c
+@@ -497,7 +497,7 @@ int dquot_release(struct dquot *dquot)
+
+ mutex_lock(&dquot->dq_lock);
+ /* Check whether we are not racing with some other dqget() */
+- if (atomic_read(&dquot->dq_count) > 1)
++ if (dquot_is_busy(dquot))
+ goto out_dqlock;
+ if (dqopt->ops[dquot->dq_id.type]->release_dqblk) {
+ ret = dqopt->ops[dquot->dq_id.type]->release_dqblk(dquot);
+@@ -623,7 +623,7 @@ EXPORT_SYMBOL(dquot_scan_active);
+ /* Write all dquot structures to quota files */
+ int dquot_writeback_dquots(struct super_block *sb, int type)
+ {
+- struct list_head *dirty;
++ struct list_head dirty;
+ struct dquot *dquot;
+ struct quota_info *dqopt = sb_dqopt(sb);
+ int cnt;
+@@ -637,9 +637,10 @@ int dquot_writeback_dquots(struct super_block *sb, int type)
+ if (!sb_has_quota_active(sb, cnt))
+ continue;
+ spin_lock(&dq_list_lock);
+- dirty = &dqopt->info[cnt].dqi_dirty_list;
+- while (!list_empty(dirty)) {
+- dquot = list_first_entry(dirty, struct dquot,
++ /* Move list away to avoid livelock. */
++ list_replace_init(&dqopt->info[cnt].dqi_dirty_list, &dirty);
++ while (!list_empty(&dirty)) {
++ dquot = list_first_entry(&dirty, struct dquot,
+ dq_dirty);
+
+ WARN_ON(!test_bit(DQ_ACTIVE_B, &dquot->dq_flags));
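
The dquot_writeback_dquots() hunk above swaps in-place draining of the shared
dirty list for list_replace_init(), which splices the whole list onto a local
head; otherwise concurrent re-dirtying can keep the loop alive forever. A rough
userspace model of that pattern (names illustrative, locking elided to comments):

    #include <stdio.h>

    struct node { struct node *next; int id; };

    static struct node *dirty_head;           /* shared; lock held in the kernel */

    static void mark_dirty(struct node *n)
    {
        n->next = dirty_head;
        dirty_head = n;
    }

    static void writeback_all(void)
    {
        struct node *batch = dirty_head;      /* splice the whole list away... */

        dirty_head = NULL;                    /* ...so re-dirtied entries start a
                                               * fresh list and the walk terminates */
        for (struct node *n = batch; n; n = n->next)
            printf("writing back dquot %d\n", n->id);
    }

    int main(void)
    {
        struct node a = { .id = 1 }, b = { .id = 2 };

        mark_dirty(&a);
        mark_dirty(&b);
        writeback_all();
        return 0;
    }
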
+diff --git a/fs/reiserfs/inode.c b/fs/reiserfs/inode.c
+index 132ec4406ed0..6419e6dacc39 100644
+--- a/fs/reiserfs/inode.c
++++ b/fs/reiserfs/inode.c
+@@ -2097,6 +2097,15 @@ int reiserfs_new_inode(struct reiserfs_transaction_handle *th,
+ goto out_inserted_sd;
+ }
+
++ /*
++ * Mark it private if we're creating the privroot
++ * or something under it.
++ */
++ if (IS_PRIVATE(dir) || dentry == REISERFS_SB(sb)->priv_root) {
++ inode->i_flags |= S_PRIVATE;
++ inode->i_opflags &= ~IOP_XATTR;
++ }
++
+ if (reiserfs_posixacl(inode->i_sb)) {
+ reiserfs_write_unlock(inode->i_sb);
+ retval = reiserfs_inherit_default_acl(th, dir, dentry, inode);
+@@ -2111,8 +2120,7 @@ int reiserfs_new_inode(struct reiserfs_transaction_handle *th,
+ reiserfs_warning(inode->i_sb, "jdm-13090",
+ "ACLs aren't enabled in the fs, "
+ "but vfs thinks they are!");
+- } else if (IS_PRIVATE(dir))
+- inode->i_flags |= S_PRIVATE;
++ }
+
+ if (security->name) {
+ reiserfs_write_unlock(inode->i_sb);
+diff --git a/fs/reiserfs/namei.c b/fs/reiserfs/namei.c
+index 97f3fc4fdd79..959a066b7bb0 100644
+--- a/fs/reiserfs/namei.c
++++ b/fs/reiserfs/namei.c
+@@ -377,10 +377,13 @@ static struct dentry *reiserfs_lookup(struct inode *dir, struct dentry *dentry,
+
+ /*
+ * Propagate the private flag so we know we're
+- * in the priv tree
++ * in the priv tree. Also clear IOP_XATTR
++ * since we don't have xattrs on xattr files.
+ */
+- if (IS_PRIVATE(dir))
++ if (IS_PRIVATE(dir)) {
+ inode->i_flags |= S_PRIVATE;
++ inode->i_opflags &= ~IOP_XATTR;
++ }
+ }
+ reiserfs_write_unlock(dir->i_sb);
+ if (retval == IO_ERROR) {
+diff --git a/fs/reiserfs/reiserfs.h b/fs/reiserfs/reiserfs.h
+index e5ca9ed79e54..726580114d55 100644
+--- a/fs/reiserfs/reiserfs.h
++++ b/fs/reiserfs/reiserfs.h
+@@ -1168,6 +1168,8 @@ static inline int bmap_would_wrap(unsigned bmap_nr)
+ return bmap_nr > ((1LL << 16) - 1);
+ }
+
++extern const struct xattr_handler *reiserfs_xattr_handlers[];
++
+ /*
+ * this says about version of key of all items (but stat data) the
+ * object consists of
+diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c
+index ab028ea0e561..c00a34b801b4 100644
+--- a/fs/reiserfs/super.c
++++ b/fs/reiserfs/super.c
+@@ -2046,6 +2046,8 @@ static int reiserfs_fill_super(struct super_block *s, void *data, int silent)
+ if (replay_only(s))
+ goto error_unlocked;
+
++ s->s_xattr = reiserfs_xattr_handlers;
++
+ if (bdev_read_only(s->s_bdev) && !sb_rdonly(s)) {
+ SWARN(silent, s, "clm-7000",
+ "Detected readonly device, marking FS readonly");
+diff --git a/fs/reiserfs/xattr.c b/fs/reiserfs/xattr.c
+index b5b26d8a192c..62b40df36c98 100644
+--- a/fs/reiserfs/xattr.c
++++ b/fs/reiserfs/xattr.c
+@@ -122,13 +122,13 @@ static struct dentry *open_xa_root(struct super_block *sb, int flags)
+ struct dentry *xaroot;
+
+ if (d_really_is_negative(privroot))
+- return ERR_PTR(-ENODATA);
++ return ERR_PTR(-EOPNOTSUPP);
+
+ inode_lock_nested(d_inode(privroot), I_MUTEX_XATTR);
+
+ xaroot = dget(REISERFS_SB(sb)->xattr_root);
+ if (!xaroot)
+- xaroot = ERR_PTR(-ENODATA);
++ xaroot = ERR_PTR(-EOPNOTSUPP);
+ else if (d_really_is_negative(xaroot)) {
+ int err = -ENODATA;
+
+@@ -619,6 +619,10 @@ int reiserfs_xattr_set(struct inode *inode, const char *name,
+ int error, error2;
+ size_t jbegin_count = reiserfs_xattr_nblocks(inode, buffer_size);
+
++ /* Check early so we don't start a transaction only to do nothing. */
++ if (!d_really_is_positive(REISERFS_SB(inode->i_sb)->priv_root))
++ return -EOPNOTSUPP;
++
+ if (!(flags & XATTR_REPLACE))
+ jbegin_count += reiserfs_xattr_jcreate_nblocks(inode);
+
+@@ -841,8 +845,7 @@ ssize_t reiserfs_listxattr(struct dentry * dentry, char *buffer, size_t size)
+ if (d_really_is_negative(dentry))
+ return -EINVAL;
+
+- if (!dentry->d_sb->s_xattr ||
+- get_inode_sd_version(d_inode(dentry)) == STAT_DATA_V1)
++ if (get_inode_sd_version(d_inode(dentry)) == STAT_DATA_V1)
+ return -EOPNOTSUPP;
+
+ dir = open_xa_dir(d_inode(dentry), XATTR_REPLACE);
+@@ -882,6 +885,7 @@ static int create_privroot(struct dentry *dentry)
+ }
+
+ d_inode(dentry)->i_flags |= S_PRIVATE;
++ d_inode(dentry)->i_opflags &= ~IOP_XATTR;
+ reiserfs_info(dentry->d_sb, "Created %s - reserved for xattr "
+ "storage.\n", PRIVROOT_NAME);
+
+@@ -895,7 +899,7 @@ static int create_privroot(struct dentry *dentry) { return 0; }
+ #endif
+
+ /* Actual operations that are exported to VFS-land */
+-static const struct xattr_handler *reiserfs_xattr_handlers[] = {
++const struct xattr_handler *reiserfs_xattr_handlers[] = {
+ #ifdef CONFIG_REISERFS_FS_XATTR
+ &reiserfs_xattr_user_handler,
+ &reiserfs_xattr_trusted_handler,
+@@ -966,8 +970,10 @@ int reiserfs_lookup_privroot(struct super_block *s)
+ if (!IS_ERR(dentry)) {
+ REISERFS_SB(s)->priv_root = dentry;
+ d_set_d_op(dentry, &xattr_lookup_poison_ops);
+- if (d_really_is_positive(dentry))
++ if (d_really_is_positive(dentry)) {
+ d_inode(dentry)->i_flags |= S_PRIVATE;
++ d_inode(dentry)->i_opflags &= ~IOP_XATTR;
++ }
+ } else
+ err = PTR_ERR(dentry);
+ inode_unlock(d_inode(s->s_root));
+@@ -996,7 +1002,6 @@ int reiserfs_xattr_init(struct super_block *s, int mount_flags)
+ }
+
+ if (d_really_is_positive(privroot)) {
+- s->s_xattr = reiserfs_xattr_handlers;
+ inode_lock(d_inode(privroot));
+ if (!REISERFS_SB(s)->xattr_root) {
+ struct dentry *dentry;
+diff --git a/fs/reiserfs/xattr_acl.c b/fs/reiserfs/xattr_acl.c
+index aa9380bac196..05f666794561 100644
+--- a/fs/reiserfs/xattr_acl.c
++++ b/fs/reiserfs/xattr_acl.c
+@@ -320,10 +320,8 @@ reiserfs_inherit_default_acl(struct reiserfs_transaction_handle *th,
+ * would be useless since permissions are ignored, and a pain because
+ * it introduces locking cycles
+ */
+- if (IS_PRIVATE(dir)) {
+- inode->i_flags |= S_PRIVATE;
++ if (IS_PRIVATE(inode))
+ goto apply_umask;
+- }
+
+ err = posix_acl_create(dir, &inode->i_mode, &default_acl, &acl);
+ if (err)
+diff --git a/fs/splice.c b/fs/splice.c
+index 98412721f056..e509239d7e06 100644
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -945,12 +945,13 @@ ssize_t splice_direct_to_actor(struct file *in, struct splice_desc *sd,
+ WARN_ON_ONCE(pipe->nrbufs != 0);
+
+ while (len) {
++ unsigned int pipe_pages;
+ size_t read_len;
+ loff_t pos = sd->pos, prev_pos = pos;
+
+ /* Don't try to read more than the pipe has space for. */
+- read_len = min_t(size_t, len,
+- (pipe->buffers - pipe->nrbufs) << PAGE_SHIFT);
++ pipe_pages = pipe->buffers - pipe->nrbufs;
++ read_len = min(len, (size_t)pipe_pages << PAGE_SHIFT);
+ ret = do_splice_to(in, &pos, pipe, read_len, flags);
+ if (unlikely(ret <= 0))
+ goto out_release;
+@@ -1180,8 +1181,15 @@ static long do_splice(struct file *in, loff_t __user *off_in,
+
+ pipe_lock(opipe);
+ ret = wait_for_space(opipe, flags);
+- if (!ret)
++ if (!ret) {
++ unsigned int pipe_pages;
++
++ /* Don't try to read more than the pipe has space for. */
++ pipe_pages = opipe->buffers - opipe->nrbufs;
++ len = min(len, (size_t)pipe_pages << PAGE_SHIFT);
++
+ ret = do_splice_to(in, &offset, opipe, len, flags);
++ }
+ pipe_unlock(opipe);
+ if (ret > 0)
+ wakeup_pipe_readers(opipe);
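
Both splice hunks above cap the request at the pipe's free space before calling
do_splice_to(), so the source is never advanced past data the pipe cannot hold.
The (size_t) cast before the shift also keeps the byte count out of 32-bit
arithmetic; a standalone sketch of that promotion hazard (the page count here
is exaggerated for effect):

    #include <stdio.h>

    #define PAGE_SHIFT 12

    int main(void)
    {
        unsigned int pipe_pages = 1u << 20;               /* exaggerated pipe size */

        size_t wrong = pipe_pages << PAGE_SHIFT;          /* wraps in 32-bit math */
        size_t right = (size_t)pipe_pages << PAGE_SHIFT;  /* widened first: 4 GiB */

        printf("wrong=%zu right=%zu\n", wrong, right);
        return 0;
    }
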
+diff --git a/include/acpi/acpi_bus.h b/include/acpi/acpi_bus.h
+index 175f7b40c585..3f6fddeb7519 100644
+--- a/include/acpi/acpi_bus.h
++++ b/include/acpi/acpi_bus.h
+@@ -78,9 +78,6 @@ acpi_evaluate_dsm_typed(acpi_handle handle, const guid_t *guid, u64 rev,
+ bool acpi_dev_found(const char *hid);
+ bool acpi_dev_present(const char *hid, const char *uid, s64 hrv);
+
+-struct acpi_device *
+-acpi_dev_get_first_match_dev(const char *hid, const char *uid, s64 hrv);
+-
+ #ifdef CONFIG_ACPI
+
+ #include <linux/proc_fs.h>
+@@ -683,6 +680,9 @@ static inline bool acpi_device_can_poweroff(struct acpi_device *adev)
+ adev->power.states[ACPI_STATE_D3_HOT].flags.explicit_set);
+ }
+
++struct acpi_device *
++acpi_dev_get_first_match_dev(const char *hid, const char *uid, s64 hrv);
++
+ static inline void acpi_dev_put(struct acpi_device *adev)
+ {
+ put_device(&adev->dev);
+diff --git a/include/linux/mfd/rk808.h b/include/linux/mfd/rk808.h
+index 7cfd2b0504df..a59bf323f713 100644
+--- a/include/linux/mfd/rk808.h
++++ b/include/linux/mfd/rk808.h
+@@ -610,7 +610,7 @@ enum {
+ RK808_ID = 0x0000,
+ RK809_ID = 0x8090,
+ RK817_ID = 0x8170,
+- RK818_ID = 0x8181,
++ RK818_ID = 0x8180,
+ };
+
+ struct rk808 {
+diff --git a/include/linux/quotaops.h b/include/linux/quotaops.h
+index 185d94829701..91e0b7624053 100644
+--- a/include/linux/quotaops.h
++++ b/include/linux/quotaops.h
+@@ -54,6 +54,16 @@ static inline struct dquot *dqgrab(struct dquot *dquot)
+ atomic_inc(&dquot->dq_count);
+ return dquot;
+ }
++
++static inline bool dquot_is_busy(struct dquot *dquot)
++{
++ if (test_bit(DQ_MOD_B, &dquot->dq_flags))
++ return true;
++ if (atomic_read(&dquot->dq_count) > 1)
++ return true;
++ return false;
++}
++
+ void dqput(struct dquot *dquot);
+ int dquot_scan_active(struct super_block *sb,
+ int (*fn)(struct dquot *dquot, unsigned long priv),
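
dquot_is_busy() folds the dirty-bit test into the old dq_count check, so
dquot_release() and ocfs2_release_dquot() no longer release a dquot that still
needs writeback. A minimal userspace model of the test (types and names here
are illustrative, not kernel code):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define DQ_MOD_B 0                        /* "dquot is dirty", as in the kernel */

    struct dquot_model {
        unsigned long dq_flags;
        atomic_int dq_count;
    };

    static bool dquot_is_busy(struct dquot_model *dq)
    {
        if (dq->dq_flags & (1UL << DQ_MOD_B)) /* dirty: writeback still pending */
            return true;
        if (atomic_load(&dq->dq_count) > 1)   /* someone else holds a reference */
            return true;
        return false;
    }

    int main(void)
    {
        struct dquot_model dq = { .dq_flags = 1UL << DQ_MOD_B };

        atomic_init(&dq.dq_count, 1);
        /* Dirty but otherwise unreferenced: the old "dq_count > 1" test would
         * have released it; the new helper keeps it for writeback. */
        printf("busy=%d\n", dquot_is_busy(&dq));
        return 0;
    }
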
+diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
+index 77d8df451805..d9b9cb693314 100644
+--- a/include/rdma/ib_verbs.h
++++ b/include/rdma/ib_verbs.h
+@@ -3978,9 +3978,7 @@ static inline void ib_dma_unmap_sg_attrs(struct ib_device *dev,
+ */
+ static inline unsigned int ib_dma_max_seg_size(struct ib_device *dev)
+ {
+- struct device_dma_parameters *p = dev->dma_device->dma_parms;
+-
+- return p ? p->max_segment_size : UINT_MAX;
++ return dma_get_max_seg_size(dev->dma_device);
+ }
+
+ /**
+diff --git a/include/uapi/linux/cec.h b/include/uapi/linux/cec.h
+index 5704fa0292b5..423859e489c7 100644
+--- a/include/uapi/linux/cec.h
++++ b/include/uapi/linux/cec.h
+@@ -768,8 +768,8 @@ struct cec_event {
+ #define CEC_MSG_SELECT_DIGITAL_SERVICE 0x93
+ #define CEC_MSG_TUNER_DEVICE_STATUS 0x07
+ /* Recording Flag Operand (rec_flag) */
+-#define CEC_OP_REC_FLAG_USED 0
+-#define CEC_OP_REC_FLAG_NOT_USED 1
++#define CEC_OP_REC_FLAG_NOT_USED 0
++#define CEC_OP_REC_FLAG_USED 1
+ /* Tuner Display Info Operand (tuner_display_info) */
+ #define CEC_OP_TUNER_DISPLAY_INFO_DIGITAL 0
+ #define CEC_OP_TUNER_DISPLAY_INFO_NONE 1
+diff --git a/kernel/cgroup/pids.c b/kernel/cgroup/pids.c
+index 8e513a573fe9..138059eb730d 100644
+--- a/kernel/cgroup/pids.c
++++ b/kernel/cgroup/pids.c
+@@ -45,7 +45,7 @@ struct pids_cgroup {
+ * %PIDS_MAX = (%PID_MAX_LIMIT + 1).
+ */
+ atomic64_t counter;
+- int64_t limit;
++ atomic64_t limit;
+
+ /* Handle for "pids.events" */
+ struct cgroup_file events_file;
+@@ -73,8 +73,8 @@ pids_css_alloc(struct cgroup_subsys_state *parent)
+ if (!pids)
+ return ERR_PTR(-ENOMEM);
+
+- pids->limit = PIDS_MAX;
+ atomic64_set(&pids->counter, 0);
++ atomic64_set(&pids->limit, PIDS_MAX);
+ atomic64_set(&pids->events_limit, 0);
+ return &pids->css;
+ }
+@@ -146,13 +146,14 @@ static int pids_try_charge(struct pids_cgroup *pids, int num)
+
+ for (p = pids; parent_pids(p); p = parent_pids(p)) {
+ int64_t new = atomic64_add_return(num, &p->counter);
++ int64_t limit = atomic64_read(&p->limit);
+
+ /*
+ * Since new is capped to the maximum number of pid_t, if
+ * p->limit is %PIDS_MAX then we know that this test will never
+ * fail.
+ */
+- if (new > p->limit)
++ if (new > limit)
+ goto revert;
+ }
+
+@@ -277,7 +278,7 @@ set_limit:
+ * Limit updates don't need to be mutex'd, since it isn't
+ * critical that any racing fork()s follow the new limit.
+ */
+- pids->limit = limit;
++ atomic64_set(&pids->limit, limit);
+ return nbytes;
+ }
+
+@@ -285,7 +286,7 @@ static int pids_max_show(struct seq_file *sf, void *v)
+ {
+ struct cgroup_subsys_state *css = seq_css(sf);
+ struct pids_cgroup *pids = css_pids(css);
+- int64_t limit = pids->limit;
++ int64_t limit = atomic64_read(&pids->limit);
+
+ if (limit >= PIDS_MAX)
+ seq_printf(sf, "%s\n", PIDS_MAX_STR);
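
The pids controller change converts the plain int64_t limit to atomic64_t:
pids_max_write() and pids_try_charge() run concurrently, and on 32-bit targets
a plain 64-bit store can tear, letting a reader observe half of an update. A
userspace sketch of the resulting access pattern:

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    struct pids_model {
        _Atomic int64_t counter;
        _Atomic int64_t limit;                /* was a plain int64_t before the fix */
    };

    static int try_charge(struct pids_model *p, int num)
    {
        int64_t new = atomic_fetch_add(&p->counter, num) + num;
        int64_t limit = atomic_load(&p->limit);   /* one untorn snapshot */

        if (new > limit) {
            atomic_fetch_sub(&p->counter, num);   /* revert, as the kernel does */
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        struct pids_model p;

        atomic_init(&p.counter, 0);
        atomic_init(&p.limit, 2);
        printf("charge 3 -> %d\n", try_charge(&p, 3));   /* rejected: over limit */
        return 0;
    }
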
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 601d61150b65..7f7b05c69835 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -2532,8 +2532,14 @@ repeat:
+ */
+ if (need_to_create_worker(pool)) {
+ spin_lock(&wq_mayday_lock);
+- get_pwq(pwq);
+- list_move_tail(&pwq->mayday_node, &wq->maydays);
++ /*
++ * Queue iff we aren't racing destruction
++ * and somebody else hasn't queued it already.
++ */
++ if (wq->rescuer && list_empty(&pwq->mayday_node)) {
++ get_pwq(pwq);
++ list_add_tail(&pwq->mayday_node, &wq->maydays);
++ }
+ spin_unlock(&wq_mayday_lock);
+ }
+ }
+@@ -4316,9 +4322,29 @@ void destroy_workqueue(struct workqueue_struct *wq)
+ struct pool_workqueue *pwq;
+ int node;
+
++ /*
++ * Remove it from sysfs first so that sanity check failure doesn't
++ * lead to sysfs name conflicts.
++ */
++ workqueue_sysfs_unregister(wq);
++
+ /* drain it before proceeding with destruction */
+ drain_workqueue(wq);
+
++ /* kill rescuer, if sanity checks fail, leave it w/o rescuer */
++ if (wq->rescuer) {
++ struct worker *rescuer = wq->rescuer;
++
++ /* this prevents new queueing */
++ spin_lock_irq(&wq_mayday_lock);
++ wq->rescuer = NULL;
++ spin_unlock_irq(&wq_mayday_lock);
++
++ /* rescuer will empty maydays list before exiting */
++ kthread_stop(rescuer->task);
++ kfree(rescuer);
++ }
++
+ /* sanity checks */
+ mutex_lock(&wq->mutex);
+ for_each_pwq(pwq, wq) {
+@@ -4350,11 +4376,6 @@ void destroy_workqueue(struct workqueue_struct *wq)
+ list_del_rcu(&wq->list);
+ mutex_unlock(&wq_pool_mutex);
+
+- workqueue_sysfs_unregister(wq);
+-
+- if (wq->rescuer)
+- kthread_stop(wq->rescuer->task);
+-
+ if (!(wq->flags & WQ_UNBOUND)) {
+ wq_unregister_lockdep(wq);
+ /*
+@@ -4629,7 +4650,8 @@ static void show_pwq(struct pool_workqueue *pwq)
+ pr_info(" pwq %d:", pool->id);
+ pr_cont_pool_info(pool);
+
+- pr_cont(" active=%d/%d%s\n", pwq->nr_active, pwq->max_active,
++ pr_cont(" active=%d/%d refcnt=%d%s\n",
++ pwq->nr_active, pwq->max_active, pwq->refcnt,
+ !list_empty(&pwq->mayday_node) ? " MAYDAY" : "");
+
+ hash_for_each(pool->busy_hash, bkt, worker, hentry) {
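
The destroy_workqueue() reordering stops the rescuer before the sanity checks
and clears wq->rescuer under wq_mayday_lock, the same lock the mayday path now
re-checks it under, so nothing can queue a mayday against a dying workqueue. A
userspace model of that shutdown idiom (pthread mutex standing in for the
spinlock):

    #include <pthread.h>
    #include <stddef.h>
    #include <stdio.h>

    struct rescuer { const char *name; };

    static struct rescuer the_rescuer = { "rescuer" };
    static pthread_mutex_t mayday_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct rescuer *rescuer_ptr = &the_rescuer;   /* like wq->rescuer */

    static void queue_mayday(void)
    {
        pthread_mutex_lock(&mayday_lock);
        if (rescuer_ptr)                 /* the re-check the mayday path gains */
            printf("mayday queued for %s\n", rescuer_ptr->name);
        pthread_mutex_unlock(&mayday_lock);
    }

    static void stop_rescuer(void)
    {
        struct rescuer *r;

        pthread_mutex_lock(&mayday_lock);
        r = rescuer_ptr;
        rescuer_ptr = NULL;              /* prevents any new queueing */
        pthread_mutex_unlock(&mayday_lock);

        if (r)
            printf("stopping %s; pending maydays drain first\n", r->name);
    }

    int main(void)
    {
        queue_mayday();
        stop_rescuer();
        queue_mayday();                  /* no-op: pointer already cleared */
        return 0;
    }
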
+diff --git a/lib/raid6/unroll.awk b/lib/raid6/unroll.awk
+index c6aa03631df8..0809805a7e23 100644
+--- a/lib/raid6/unroll.awk
++++ b/lib/raid6/unroll.awk
+@@ -13,7 +13,7 @@ BEGIN {
+ for (i = 0; i < rep; ++i) {
+ tmp = $0
+ gsub(/\$\$/, i, tmp)
+- gsub(/\$\#/, n, tmp)
++ gsub(/\$#/, n, tmp)
+ gsub(/\$\*/, "$", tmp)
+ print tmp
+ }
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 2bed4761f279..c3cbf533c914 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -2198,11 +2198,14 @@ static int shmem_mmap(struct file *file, struct vm_area_struct *vma)
+ return -EPERM;
+
+ /*
+- * Since the F_SEAL_FUTURE_WRITE seals allow for a MAP_SHARED
+- * read-only mapping, take care to not allow mprotect to revert
+- * protections.
++ * Since an F_SEAL_FUTURE_WRITE sealed memfd can be mapped as
++ * MAP_SHARED and read-only, take care to not allow mprotect to
++ * revert protections on such mappings. Do this only for shared
++ * mappings. For private mappings, don't need to mask
++ * VM_MAYWRITE as we still want them to be COW-writable.
+ */
+- vma->vm_flags &= ~(VM_MAYWRITE);
++ if (vma->vm_flags & VM_SHARED)
++ vma->vm_flags &= ~(VM_MAYWRITE);
+ }
+
+ file_accessed(file);
+@@ -2727,7 +2730,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
+ }
+
+ shmem_falloc.waitq = &shmem_falloc_waitq;
+- shmem_falloc.start = unmap_start >> PAGE_SHIFT;
++ shmem_falloc.start = (u64)unmap_start >> PAGE_SHIFT;
+ shmem_falloc.next = (unmap_end + 1) >> PAGE_SHIFT;
+ spin_lock(&inode->i_lock);
+ inode->i_private = &shmem_falloc;
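
The shmem_mmap() hunk masks VM_MAYWRITE only for shared mappings, restoring
copy-on-write for MAP_PRIVATE mappings of an F_SEAL_FUTURE_WRITE-sealed memfd.
A userspace demonstration of the intended behavior (assumes Linux >= 5.1 and a
libc exposing memfd_create() and F_SEAL_FUTURE_WRITE; error handling trimmed):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = memfd_create("demo", MFD_ALLOW_SEALING);

        ftruncate(fd, 4096);
        fcntl(fd, F_ADD_SEALS, F_SEAL_FUTURE_WRITE);

        /* Private mapping stays COW-writable after the fix. */
        char *priv = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE, fd, 0);
        if (priv != MAP_FAILED)
            priv[0] = 'x';                   /* copy-on-write, never hits the memfd */

        /* Shared read-only mapping must not be upgradable to writable. */
        char *shared = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
        if (shared != MAP_FAILED &&
            mprotect(shared, 4096, PROT_READ | PROT_WRITE) != 0)
            printf("mprotect blocked as expected: %s\n", strerror(errno));

        return 0;
    }
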
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index 7f492e53a7db..add3e4ca32c9 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -904,6 +904,18 @@ static void flush_memcg_workqueue(struct kmem_cache *s)
+ * previous workitems on workqueue are processed.
+ */
+ flush_workqueue(memcg_kmem_cache_wq);
++
++ /*
++ * If we're racing with children kmem_cache deactivation, it might
++ * take another rcu grace period to complete their destruction.
++ * At this moment the corresponding percpu_ref_kill() call should be
++ * done, but it might take another rcu grace period to complete
++ * switching to the atomic mode.
++ * Note that we check without grabbing the slab_mutex. It's safe
++ * because at this moment the children list can't grow.
++ */
++ if (!list_empty(&s->memcg_params.children))
++ rcu_barrier();
+ }
+ #else
+ static inline int shutdown_memcg_caches(struct kmem_cache *s)
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 5c1769999a92..758ca7e5304c 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -2854,13 +2854,19 @@ static int tc_chain_tmplt_add(struct tcf_chain *chain, struct net *net,
+ struct netlink_ext_ack *extack)
+ {
+ const struct tcf_proto_ops *ops;
++ char name[IFNAMSIZ];
+ void *tmplt_priv;
+
+ /* If kind is not set, user did not specify template. */
+ if (!tca[TCA_KIND])
+ return 0;
+
+- ops = tcf_proto_lookup_ops(nla_data(tca[TCA_KIND]), true, extack);
++ if (tcf_proto_check_kind(tca[TCA_KIND], name)) {
++ NL_SET_ERR_MSG(extack, "Specified TC chain template name too long");
++ return -EINVAL;
++ }
++
++ ops = tcf_proto_lookup_ops(name, true, extack);
+ if (IS_ERR(ops))
+ return PTR_ERR(ops);
+ if (!ops->tmplt_create || !ops->tmplt_destroy || !ops->tmplt_dump) {
+diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
+index b256806d69cd..db116fc8ff44 100644
+--- a/net/sunrpc/xdr.c
++++ b/net/sunrpc/xdr.c
+@@ -436,13 +436,12 @@ xdr_shrink_bufhead(struct xdr_buf *buf, size_t len)
+ }
+
+ /**
+- * xdr_shrink_pagelen
++ * xdr_shrink_pagelen - shrinks buf->pages by up to @len bytes
+ * @buf: xdr_buf
+ * @len: bytes to remove from buf->pages
+ *
+- * Shrinks XDR buffer's page array buf->pages by
+- * 'len' bytes. The extra data is not lost, but is instead
+- * moved into the tail.
++ * The extra data is not lost, but is instead moved into buf->tail.
++ * Returns the actual number of bytes moved.
+ */
+ static unsigned int
+ xdr_shrink_pagelen(struct xdr_buf *buf, size_t len)
+@@ -455,8 +454,8 @@ xdr_shrink_pagelen(struct xdr_buf *buf, size_t len)
+
+ result = 0;
+ tail = buf->tail;
+- BUG_ON (len > pglen);
+-
++ if (len > buf->page_len)
++ len = buf->page_len;
+ tailbuf_len = buf->buflen - buf->head->iov_len - buf->page_len;
+
+ /* Shift the tail first */
+diff --git a/sound/firewire/fireface/ff-pcm.c b/sound/firewire/fireface/ff-pcm.c
+index 9eab3ad283ce..df6ff2df0124 100644
+--- a/sound/firewire/fireface/ff-pcm.c
++++ b/sound/firewire/fireface/ff-pcm.c
+@@ -219,7 +219,7 @@ static int pcm_hw_params(struct snd_pcm_substream *substream,
+ mutex_unlock(&ff->mutex);
+ }
+
+- return 0;
++ return err;
+ }
+
+ static int pcm_hw_free(struct snd_pcm_substream *substream)
+diff --git a/sound/firewire/oxfw/oxfw-pcm.c b/sound/firewire/oxfw/oxfw-pcm.c
+index 7c6d1c277d4d..78d906af9c00 100644
+--- a/sound/firewire/oxfw/oxfw-pcm.c
++++ b/sound/firewire/oxfw/oxfw-pcm.c
+@@ -255,7 +255,7 @@ static int pcm_playback_hw_params(struct snd_pcm_substream *substream,
+ mutex_unlock(&oxfw->mutex);
+ }
+
+- return 0;
++ return err;
+ }
+
+ static int pcm_capture_hw_free(struct snd_pcm_substream *substream)
+diff --git a/sound/soc/codecs/rt5645.c b/sound/soc/codecs/rt5645.c
+index 1c06b3b9218c..19662ee330d6 100644
+--- a/sound/soc/codecs/rt5645.c
++++ b/sound/soc/codecs/rt5645.c
+@@ -3270,6 +3270,9 @@ static void rt5645_jack_detect_work(struct work_struct *work)
+ snd_soc_jack_report(rt5645->mic_jack,
+ report, SND_JACK_MICROPHONE);
+ return;
++ case 4:
++ val = snd_soc_component_read32(rt5645->component, RT5645_A_JD_CTRL1) & 0x0020;
++ break;
+ default: /* read rt5645 jd1_1 status */
+ val = snd_soc_component_read32(rt5645->component, RT5645_INT_IRQ_ST) & 0x1000;
+ break;
+@@ -3603,7 +3606,7 @@ static const struct rt5645_platform_data intel_braswell_platform_data = {
+ static const struct rt5645_platform_data buddy_platform_data = {
+ .dmic1_data_pin = RT5645_DMIC_DATA_GPIO5,
+ .dmic2_data_pin = RT5645_DMIC_DATA_IN2P,
+- .jd_mode = 3,
++ .jd_mode = 4,
+ .level_trigger_irq = true,
+ };
+
+@@ -3999,6 +4002,7 @@ static int rt5645_i2c_probe(struct i2c_client *i2c,
+ RT5645_JD1_MODE_1);
+ break;
+ case 3:
++ case 4:
+ regmap_update_bits(rt5645->regmap, RT5645_A_JD_CTRL1,
+ RT5645_JD1_MODE_MASK,
+ RT5645_JD1_MODE_2);
+diff --git a/sound/soc/fsl/fsl_audmix.c b/sound/soc/fsl/fsl_audmix.c
+index 3897a54a11fe..3f69ce98bc4b 100644
+--- a/sound/soc/fsl/fsl_audmix.c
++++ b/sound/soc/fsl/fsl_audmix.c
+@@ -286,6 +286,7 @@ static int fsl_audmix_dai_trigger(struct snd_pcm_substream *substream, int cmd,
+ struct snd_soc_dai *dai)
+ {
+ struct fsl_audmix *priv = snd_soc_dai_get_drvdata(dai);
++ unsigned long lock_flags;
+
+ /* Capture stream shall not be handled */
+ if (substream->stream == SNDRV_PCM_STREAM_CAPTURE)
+@@ -295,12 +296,16 @@ static int fsl_audmix_dai_trigger(struct snd_pcm_substream *substream, int cmd,
+ case SNDRV_PCM_TRIGGER_START:
+ case SNDRV_PCM_TRIGGER_RESUME:
+ case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
++ spin_lock_irqsave(&priv->lock, lock_flags);
+ priv->tdms |= BIT(dai->driver->id);
++ spin_unlock_irqrestore(&priv->lock, lock_flags);
+ break;
+ case SNDRV_PCM_TRIGGER_STOP:
+ case SNDRV_PCM_TRIGGER_SUSPEND:
+ case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
++ spin_lock_irqsave(&priv->lock, lock_flags);
+ priv->tdms &= ~BIT(dai->driver->id);
++ spin_unlock_irqrestore(&priv->lock, lock_flags);
+ break;
+ default:
+ return -EINVAL;
+@@ -493,6 +498,7 @@ static int fsl_audmix_probe(struct platform_device *pdev)
+ return PTR_ERR(priv->ipg_clk);
+ }
+
++ spin_lock_init(&priv->lock);
+ platform_set_drvdata(pdev, priv);
+ pm_runtime_enable(dev);
+
+diff --git a/sound/soc/fsl/fsl_audmix.h b/sound/soc/fsl/fsl_audmix.h
+index 7812ffec45c5..479f05695d53 100644
+--- a/sound/soc/fsl/fsl_audmix.h
++++ b/sound/soc/fsl/fsl_audmix.h
+@@ -96,6 +96,7 @@ struct fsl_audmix {
+ struct platform_device *pdev;
+ struct regmap *regmap;
+ struct clk *ipg_clk;
++ spinlock_t lock; /* Protect tdms */
+ u8 tdms;
+ };
+
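
The fsl_audmix trigger callback read-modify-writes the shared tdms bitmask from
several DAIs, so the driver now brackets the |=/&= pair with a spinlock;
unprotected, two racing streams can lose an update. The same fix modeled in
userspace with a pthread mutex:

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static uint8_t tdms;                 /* one bit per active TDM stream */

    static void stream_start(int id)
    {
        pthread_mutex_lock(&lock);
        tdms |= 1u << id;                /* RMW now atomic w.r.t. other streams */
        pthread_mutex_unlock(&lock);
    }

    static void stream_stop(int id)
    {
        pthread_mutex_lock(&lock);
        tdms &= ~(1u << id);
        pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
        stream_start(0);
        stream_start(1);
        stream_stop(0);
        printf("tdms=0x%02x\n", tdms);   /* 0x02: only stream 1 active */
        return 0;
    }
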
+diff --git a/sound/soc/soc-jack.c b/sound/soc/soc-jack.c
+index c7b990abdbaa..2a528e73bad2 100644
+--- a/sound/soc/soc-jack.c
++++ b/sound/soc/soc-jack.c
+@@ -100,10 +100,9 @@ void snd_soc_jack_report(struct snd_soc_jack *jack, int status, int mask)
+ unsigned int sync = 0;
+ int enable;
+
+- trace_snd_soc_jack_report(jack, mask, status);
+-
+ if (!jack)
+ return;
++ trace_snd_soc_jack_report(jack, mask, status);
+
+ dapm = &jack->card->dapm;
+
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index 7f8b5c8982e3..b505bb062d07 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -35,6 +35,7 @@
+ #include <stdbool.h>
+ #include <string.h>
+ #include <time.h>
++#include <limits.h>
+ #include <linux/elf.h>
+ #include <sys/uio.h>
+ #include <sys/utsname.h>
+@@ -3077,7 +3078,7 @@ static int user_trap_syscall(int nr, unsigned int flags)
+ return seccomp(SECCOMP_SET_MODE_FILTER, flags, &prog);
+ }
+
+-#define USER_NOTIF_MAGIC 116983961184613L
++#define USER_NOTIF_MAGIC INT_MAX
+ TEST(user_notification_basic)
+ {
+ pid_t pid;
* [gentoo-commits] proj/linux-patches:5.3 commit in: /
@ 2019-12-18 19:31 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2019-12-18 19:31 UTC (permalink / raw
To: gentoo-commits
commit: 82fedfef763a5862f977af8faf71de2441c67ab1
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Dec 18 19:31:13 2019 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Dec 18 19:31:13 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=82fedfef
Linux patch 5.3.18
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1017_linux-5.3.18.patch | 1781 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1785 insertions(+)
diff --git a/0000_README b/0000_README
index e723ae5..5f9ec9a 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,10 @@ Patch: 1016_linux-5.3.17.patch
From: http://www.kernel.org
Desc: Linux 5.3.17
+Patch: 1017_linux-5.3.18.patch
+From: http://www.kernel.org
+Desc: Linux 5.3.18
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1017_linux-5.3.18.patch b/1017_linux-5.3.18.patch
new file mode 100644
index 0000000..3f57093
--- /dev/null
+++ b/1017_linux-5.3.18.patch
@@ -0,0 +1,1781 @@
+diff --git a/Makefile b/Makefile
+index 9cce8d426cb8..a3fb24bb6dd5 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 3
+-SUBLEVEL = 17
++SUBLEVEL = 18
+ EXTRAVERSION =
+ NAME = Bobtail Squid
+
+diff --git a/drivers/infiniband/core/addr.c b/drivers/infiniband/core/addr.c
+index bf539c34ccd3..fca5025d5a1a 100644
+--- a/drivers/infiniband/core/addr.c
++++ b/drivers/infiniband/core/addr.c
+@@ -421,16 +421,15 @@ static int addr6_resolve(struct sockaddr *src_sock,
+ (const struct sockaddr_in6 *)dst_sock;
+ struct flowi6 fl6;
+ struct dst_entry *dst;
+- int ret;
+
+ memset(&fl6, 0, sizeof fl6);
+ fl6.daddr = dst_in->sin6_addr;
+ fl6.saddr = src_in->sin6_addr;
+ fl6.flowi6_oif = addr->bound_dev_if;
+
+- ret = ipv6_stub->ipv6_dst_lookup(addr->net, NULL, &dst, &fl6);
+- if (ret < 0)
+- return ret;
++ dst = ipv6_stub->ipv6_dst_lookup_flow(addr->net, NULL, &fl6, NULL);
++ if (IS_ERR(dst))
++ return PTR_ERR(dst);
+
+ if (ipv6_addr_any(&src_in->sin6_addr))
+ src_in->sin6_addr = fl6.saddr;
+diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
+index 5a3474f9351b..312c2fc961c0 100644
+--- a/drivers/infiniband/sw/rxe/rxe_net.c
++++ b/drivers/infiniband/sw/rxe/rxe_net.c
+@@ -117,10 +117,12 @@ static struct dst_entry *rxe_find_route6(struct net_device *ndev,
+ memcpy(&fl6.daddr, daddr, sizeof(*daddr));
+ fl6.flowi6_proto = IPPROTO_UDP;
+
+- if (unlikely(ipv6_stub->ipv6_dst_lookup(sock_net(recv_sockets.sk6->sk),
+- recv_sockets.sk6->sk, &ndst, &fl6))) {
++ ndst = ipv6_stub->ipv6_dst_lookup_flow(sock_net(recv_sockets.sk6->sk),
++ recv_sockets.sk6->sk, &fl6,
++ NULL);
++ if (unlikely(IS_ERR(ndst))) {
+ pr_err_ratelimited("no route to %pI6\n", daddr);
+- goto put;
++ return NULL;
+ }
+
+ if (unlikely(ndst->error)) {
+diff --git a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
+index acb016834f04..6cc100e7d5c0 100644
+--- a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
++++ b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
+@@ -1115,7 +1115,7 @@ static int bgx_lmac_enable(struct bgx *bgx, u8 lmacid)
+ phy_interface_mode(lmac->lmac_type)))
+ return -ENODEV;
+
+- phy_start_aneg(lmac->phydev);
++ phy_start(lmac->phydev);
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index 65bec19a438f..2120300aa70e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -792,7 +792,7 @@ struct mlx5e_xsk {
+ struct mlx5e_priv {
+ /* priv data path fields - start */
+ struct mlx5e_txqsq *txq2sq[MLX5E_MAX_NUM_CHANNELS * MLX5E_MAX_NUM_TC];
+- int channel_tc2txq[MLX5E_MAX_NUM_CHANNELS][MLX5E_MAX_NUM_TC];
++ int channel_tc2realtxq[MLX5E_MAX_NUM_CHANNELS][MLX5E_MAX_NUM_TC];
+ #ifdef CONFIG_MLX5_CORE_EN_DCB
+ struct mlx5e_dcbx_dp dcbx_dp;
+ #endif
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+index 633b117eb13e..99c7cdd0404a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+@@ -155,8 +155,11 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
+ }
+
+ if (port_buffer->buffer[i].size <
+- (xoff + max_mtu + (1 << MLX5E_BUFFER_CELL_SHIFT)))
++ (xoff + max_mtu + (1 << MLX5E_BUFFER_CELL_SHIFT))) {
++ pr_err("buffer_size[%d]=%d is not enough for lossless buffer\n",
++ i, port_buffer->buffer[i].size);
+ return -ENOMEM;
++ }
+
+ port_buffer->buffer[i].xoff = port_buffer->buffer[i].size - xoff;
+ port_buffer->buffer[i].xon =
+@@ -232,6 +235,26 @@ static int update_buffer_lossy(unsigned int max_mtu,
+ return 0;
+ }
+
++static int fill_pfc_en(struct mlx5_core_dev *mdev, u8 *pfc_en)
++{
++ u32 g_rx_pause, g_tx_pause;
++ int err;
++
++ err = mlx5_query_port_pause(mdev, &g_rx_pause, &g_tx_pause);
++ if (err)
++ return err;
++
++ /* If global pause is enabled, set all active buffers to lossless.
++ * Otherwise, check PFC setting.
++ */
++ if (g_rx_pause || g_tx_pause)
++ *pfc_en = 0xff;
++ else
++ err = mlx5_query_port_pfc(mdev, pfc_en, NULL);
++
++ return err;
++}
++
+ #define MINIMUM_MAX_MTU 9216
+ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ u32 change, unsigned int mtu,
+@@ -277,7 +300,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+
+ if (change & MLX5E_PORT_BUFFER_PRIO2BUFFER) {
+ update_prio2buffer = true;
+- err = mlx5_query_port_pfc(priv->mdev, &curr_pfc_en, NULL);
++ err = fill_pfc_en(priv->mdev, &curr_pfc_en);
+ if (err)
+ return err;
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+index d41c520ce0a8..0d520c93c9ba 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+@@ -137,10 +137,10 @@ static int mlx5e_route_lookup_ipv6(struct mlx5e_priv *priv,
+ #if IS_ENABLED(CONFIG_INET) && IS_ENABLED(CONFIG_IPV6)
+ int ret;
+
+- ret = ipv6_stub->ipv6_dst_lookup(dev_net(mirred_dev), NULL, &dst,
+- fl6);
+- if (ret < 0)
+- return ret;
++ dst = ipv6_stub->ipv6_dst_lookup_flow(dev_net(mirred_dev), NULL, fl6,
++ NULL);
++ if (IS_ERR(dst))
++ return PTR_ERR(dst);
+
+ if (!(*out_ttl))
+ *out_ttl = ip6_dst_hoplimit(dst);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index f3a2970c3fcf..fdf515ca5cf5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -1678,11 +1678,10 @@ static int mlx5e_open_sqs(struct mlx5e_channel *c,
+ struct mlx5e_params *params,
+ struct mlx5e_channel_param *cparam)
+ {
+- struct mlx5e_priv *priv = c->priv;
+ int err, tc;
+
+ for (tc = 0; tc < params->num_tc; tc++) {
+- int txq_ix = c->ix + tc * priv->max_nch;
++ int txq_ix = c->ix + tc * params->num_channels;
+
+ err = mlx5e_open_txqsq(c, c->priv->tisn[tc], txq_ix,
+ params, &cparam->sq, &c->sq[tc], tc);
+@@ -2856,26 +2855,21 @@ static void mlx5e_netdev_set_tcs(struct net_device *netdev)
+ netdev_set_tc_queue(netdev, tc, nch, 0);
+ }
+
+-static void mlx5e_build_tc2txq_maps(struct mlx5e_priv *priv)
++static void mlx5e_build_txq_maps(struct mlx5e_priv *priv)
+ {
+- int i, tc;
++ int i, ch;
+
+- for (i = 0; i < priv->max_nch; i++)
+- for (tc = 0; tc < priv->profile->max_tc; tc++)
+- priv->channel_tc2txq[i][tc] = i + tc * priv->max_nch;
+-}
++ ch = priv->channels.num;
+
+-static void mlx5e_build_tx2sq_maps(struct mlx5e_priv *priv)
+-{
+- struct mlx5e_channel *c;
+- struct mlx5e_txqsq *sq;
+- int i, tc;
++ for (i = 0; i < ch; i++) {
++ int tc;
++
++ for (tc = 0; tc < priv->channels.params.num_tc; tc++) {
++ struct mlx5e_channel *c = priv->channels.c[i];
++ struct mlx5e_txqsq *sq = &c->sq[tc];
+
+- for (i = 0; i < priv->channels.num; i++) {
+- c = priv->channels.c[i];
+- for (tc = 0; tc < c->num_tc; tc++) {
+- sq = &c->sq[tc];
+ priv->txq2sq[sq->txq_ix] = sq;
++ priv->channel_tc2realtxq[i][tc] = i + tc * ch;
+ }
+ }
+ }
+@@ -2890,7 +2884,7 @@ void mlx5e_activate_priv_channels(struct mlx5e_priv *priv)
+ netif_set_real_num_tx_queues(netdev, num_txqs);
+ netif_set_real_num_rx_queues(netdev, num_rxqs);
+
+- mlx5e_build_tx2sq_maps(priv);
++ mlx5e_build_txq_maps(priv);
+ mlx5e_activate_channels(&priv->channels);
+ mlx5e_xdp_tx_enable(priv);
+ netif_tx_start_all_queues(priv->netdev);
+@@ -4968,7 +4962,6 @@ static int mlx5e_nic_init(struct mlx5_core_dev *mdev,
+ if (err)
+ mlx5_core_err(mdev, "TLS initialization failed, %d\n", err);
+ mlx5e_build_nic_netdev(netdev);
+- mlx5e_build_tc2txq_maps(priv);
+
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+index 57f9f346d213..0b394d6d730f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+@@ -1435,7 +1435,7 @@ static int mlx5e_grp_channels_fill_strings(struct mlx5e_priv *priv, u8 *data,
+ for (j = 0; j < NUM_SQ_STATS; j++)
+ sprintf(data + (idx++) * ETH_GSTRING_LEN,
+ sq_stats_desc[j].format,
+- priv->channel_tc2txq[i][tc]);
++ i + tc * max_nch);
+
+ for (i = 0; i < max_nch; i++) {
+ for (j = 0; j < NUM_XSKSQ_STATS * is_xsk; j++)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+index d5d2b1af3dbc..565ac6347fa9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+@@ -93,7 +93,7 @@ u16 mlx5e_select_queue(struct net_device *dev, struct sk_buff *skb,
+ if (txq_ix >= num_channels)
+ txq_ix = priv->txq2sq[txq_ix]->ch_ix;
+
+- return priv->channel_tc2txq[txq_ix][up];
++ return priv->channel_tc2realtxq[txq_ix][up];
+ }
+
+ static inline int mlx5e_skb_l2_header_offset(struct sk_buff *skb)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index ed0e694a0855..d8dd4265d89d 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -1477,10 +1477,8 @@ static void free_dma_rx_desc_resources(struct stmmac_priv *priv)
+ rx_q->dma_erx, rx_q->dma_rx_phy);
+
+ kfree(rx_q->buf_pool);
+- if (rx_q->page_pool) {
+- page_pool_request_shutdown(rx_q->page_pool);
++ if (rx_q->page_pool)
+ page_pool_destroy(rx_q->page_pool);
+- }
+ }
+ }
+
+diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
+index a46b8b2e44e1..1840fa1f8f3c 100644
+--- a/drivers/net/ethernet/ti/cpsw.c
++++ b/drivers/net/ethernet/ti/cpsw.c
+@@ -890,8 +890,8 @@ static irqreturn_t cpsw_rx_interrupt(int irq, void *dev_id)
+ {
+ struct cpsw_common *cpsw = dev_id;
+
+- cpdma_ctlr_eoi(cpsw->dma, CPDMA_EOI_RX);
+ writel(0, &cpsw->wr_regs->rx_en);
++ cpdma_ctlr_eoi(cpsw->dma, CPDMA_EOI_RX);
+
+ if (cpsw->quirk_irq) {
+ disable_irq_nosync(cpsw->irqs_table[0]);
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index cb2ea8facd8d..ac1470a6c64f 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -853,7 +853,9 @@ static struct dst_entry *geneve_get_v6_dst(struct sk_buff *skb,
+ if (dst)
+ return dst;
+ }
+- if (ipv6_stub->ipv6_dst_lookup(geneve->net, gs6->sock->sk, &dst, fl6)) {
++ dst = ipv6_stub->ipv6_dst_lookup_flow(geneve->net, gs6->sock->sk, fl6,
++ NULL);
++ if (IS_ERR(dst)) {
+ netdev_dbg(dev, "no route to %pI6\n", &fl6->daddr);
+ return ERR_PTR(-ENETUNREACH);
+ }
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index e07872869266..838d0390b2f4 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -2276,7 +2276,6 @@ static struct dst_entry *vxlan6_get_route(struct vxlan_dev *vxlan,
+ bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
+ struct dst_entry *ndst;
+ struct flowi6 fl6;
+- int err;
+
+ if (!sock6)
+ return ERR_PTR(-EIO);
+@@ -2299,10 +2298,9 @@ static struct dst_entry *vxlan6_get_route(struct vxlan_dev *vxlan,
+ fl6.fl6_dport = dport;
+ fl6.fl6_sport = sport;
+
+- err = ipv6_stub->ipv6_dst_lookup(vxlan->net,
+- sock6->sock->sk,
+- &ndst, &fl6);
+- if (unlikely(err < 0)) {
++ ndst = ipv6_stub->ipv6_dst_lookup_flow(vxlan->net, sock6->sock->sk,
++ &fl6, NULL);
++ if (unlikely(IS_ERR(ndst))) {
+ netdev_dbg(dev, "no route to %pI6\n", daddr);
+ return ERR_PTR(-ENETUNREACH);
+ }
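
All of the ipv6_dst_lookup() call sites above move to ipv6_dst_lookup_flow(),
which returns the dst_entry directly and encodes failure as an ERR_PTR instead
of an int return plus out-parameter. A userspace sketch of that calling
convention (the macros are simplified restatements, not the kernel's):

    #include <errno.h>
    #include <stdio.h>

    #define MAX_ERRNO 4095
    #define ERR_PTR(err) ((void *)(long)(err))
    #define IS_ERR(ptr)  ((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)
    #define PTR_ERR(ptr) ((long)(ptr))

    struct dst_entry { int dummy; };
    static struct dst_entry good_dst;

    static struct dst_entry *dst_lookup_flow(int fail)
    {
        if (fail)
            return ERR_PTR(-ENETUNREACH); /* error travels in the pointer */
        return &good_dst;
    }

    int main(void)
    {
        struct dst_entry *dst = dst_lookup_flow(1);

        if (IS_ERR(dst))                  /* one check, no out-parameter */
            printf("lookup failed: %ld\n", PTR_ERR(dst));
        return 0;
    }
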
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 88292953aa6f..9d639ea51acd 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -1848,6 +1848,11 @@ struct net_device {
+ unsigned char if_port;
+ unsigned char dma;
+
++ /* Note: dev->mtu is often read without holding a lock.
++ * Writers usually hold RTNL.
++ * It is recommended to use READ_ONCE() to annotate the reads,
++ * and to use WRITE_ONCE() to annotate the writes.
++ */
+ unsigned int mtu;
+ unsigned int min_mtu;
+ unsigned int max_mtu;
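
The new comment on dev->mtu asks lockless readers and RTNL-holding writers to
annotate their accesses. A minimal userspace rendering of the
READ_ONCE()/WRITE_ONCE() idiom it refers to (GCC-style __typeof__, simplified
from the kernel's definitions):

    #include <stdio.h>

    #define READ_ONCE(x)      (*(const volatile __typeof__(x) *)&(x))
    #define WRITE_ONCE(x, v)  (*(volatile __typeof__(x) *)&(x) = (v))

    static unsigned int mtu = 1500;        /* stands in for dev->mtu */

    static void reader(void)
    {
        unsigned int m = READ_ONCE(mtu);   /* one untorn, unrefetched load */
        printf("mtu snapshot: %u\n", m);
    }

    static void writer(unsigned int new_mtu)
    {
        WRITE_ONCE(mtu, new_mtu);          /* pairs with the lockless reads */
    }

    int main(void)
    {
        reader();
        writer(9000);
        reader();
        return 0;
    }
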
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 7647beaac2d2..451b4ef1c0b7 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -3482,8 +3482,9 @@ int __skb_vlan_pop(struct sk_buff *skb, u16 *vlan_tci);
+ int skb_vlan_pop(struct sk_buff *skb);
+ int skb_vlan_push(struct sk_buff *skb, __be16 vlan_proto, u16 vlan_tci);
+ int skb_mpls_push(struct sk_buff *skb, __be32 mpls_lse, __be16 mpls_proto,
+- int mac_len);
+-int skb_mpls_pop(struct sk_buff *skb, __be16 next_proto, int mac_len);
++ int mac_len, bool ethernet);
++int skb_mpls_pop(struct sk_buff *skb, __be16 next_proto, int mac_len,
++ bool ethernet);
+ int skb_mpls_update_lse(struct sk_buff *skb, __be32 mpls_lse);
+ int skb_mpls_dec_ttl(struct sk_buff *skb);
+ struct sk_buff *pskb_extract(struct sk_buff *skb, int off, int to_copy,
+diff --git a/include/linux/time.h b/include/linux/time.h
+index 27d83fd2ae61..5f3e49978837 100644
+--- a/include/linux/time.h
++++ b/include/linux/time.h
+@@ -96,4 +96,17 @@ static inline bool itimerspec64_valid(const struct itimerspec64 *its)
+ */
+ #define time_after32(a, b) ((s32)((u32)(b) - (u32)(a)) < 0)
+ #define time_before32(b, a) time_after32(a, b)
++
++/**
++ * time_between32 - check if a 32-bit timestamp is within a given time range
++ * @t: the time which may be within [l,h]
++ * @l: the lower bound of the range
++ * @h: the higher bound of the range
++ *
++ * time_between32(@t, @l, @h) returns true if @l <= @t <= @h. All operands are
++ * treated as 32-bit integers.
++ *
++ * Equivalent to !(time_before32(@t, @l) || time_after32(@t, @h)).
++ */
++#define time_between32(t, l, h) ((u32)(h) - (u32)(l) >= (u32)(t) - (u32)(l))
+ #endif
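
time_between32() relies on unsigned subtraction to rebase both the upper bound
and the timestamp to the lower end of the range, which keeps the comparison
correct even across a 32-bit wraparound. A small self-test of that property:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define time_between32(t, l, h) ((uint32_t)(h) - (uint32_t)(l) >= \
                                     (uint32_t)(t) - (uint32_t)(l))

    int main(void)
    {
        uint32_t l = 0xfffffff0u;               /* just before wraparound */
        uint32_t h = l + 0x20u;                 /* wraps past zero */

        assert(time_between32(l, l, h));        /* lower bound included */
        assert(time_between32(0x5u, l, h));     /* inside, after the wrap */
        assert(time_between32(h, l, h));        /* upper bound included */
        assert(!time_between32(l - 1u, l, h));  /* just below the range */
        assert(!time_between32(h + 1u, l, h));  /* just above the range */
        puts("time_between32 wraparound checks passed");
        return 0;
    }
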
+diff --git a/include/net/ip.h b/include/net/ip.h
+index e6609ab69161..df712087320a 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -759,4 +759,9 @@ int ip_misc_proc_init(void);
+ int rtm_getroute_parse_ip_proto(struct nlattr *attr, u8 *ip_proto, u8 family,
+ struct netlink_ext_ack *extack);
+
++static inline bool inetdev_valid_mtu(unsigned int mtu)
++{
++ return likely(mtu >= IPV4_MIN_MTU);
++}
++
+ #endif /* _IP_H */
+diff --git a/include/net/ipv6.h b/include/net/ipv6.h
+index 8dfc65639aa4..6a939a7cc988 100644
+--- a/include/net/ipv6.h
++++ b/include/net/ipv6.h
+@@ -1017,7 +1017,7 @@ static inline struct sk_buff *ip6_finish_skb(struct sock *sk)
+
+ int ip6_dst_lookup(struct net *net, struct sock *sk, struct dst_entry **dst,
+ struct flowi6 *fl6);
+-struct dst_entry *ip6_dst_lookup_flow(const struct sock *sk, struct flowi6 *fl6,
++struct dst_entry *ip6_dst_lookup_flow(struct net *net, const struct sock *sk, struct flowi6 *fl6,
+ const struct in6_addr *final_dst);
+ struct dst_entry *ip6_sk_dst_lookup_flow(struct sock *sk, struct flowi6 *fl6,
+ const struct in6_addr *final_dst,
+diff --git a/include/net/ipv6_stubs.h b/include/net/ipv6_stubs.h
+index 5c93e942c50b..3e7d2c0e79ca 100644
+--- a/include/net/ipv6_stubs.h
++++ b/include/net/ipv6_stubs.h
+@@ -24,8 +24,10 @@ struct ipv6_stub {
+ const struct in6_addr *addr);
+ int (*ipv6_sock_mc_drop)(struct sock *sk, int ifindex,
+ const struct in6_addr *addr);
+- int (*ipv6_dst_lookup)(struct net *net, struct sock *sk,
+- struct dst_entry **dst, struct flowi6 *fl6);
++ struct dst_entry *(*ipv6_dst_lookup_flow)(struct net *net,
++ const struct sock *sk,
++ struct flowi6 *fl6,
++ const struct in6_addr *final_dst);
+ int (*ipv6_route_input)(struct sk_buff *skb);
+
+ struct fib6_table *(*fib6_get_table)(struct net *net, u32 id);
+diff --git a/include/net/page_pool.h b/include/net/page_pool.h
+index 2cbcdbdec254..1121faa99c12 100644
+--- a/include/net/page_pool.h
++++ b/include/net/page_pool.h
+@@ -70,7 +70,12 @@ struct page_pool_params {
+ struct page_pool {
+ struct page_pool_params p;
+
+- u32 pages_state_hold_cnt;
++ struct delayed_work release_dw;
++ void (*disconnect)(void *);
++ unsigned long defer_start;
++ unsigned long defer_warn;
++
++ u32 pages_state_hold_cnt;
+
+ /*
+ * Data structure for allocation side
+@@ -129,25 +134,19 @@ inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
+
+ struct page_pool *page_pool_create(const struct page_pool_params *params);
+
+-void __page_pool_free(struct page_pool *pool);
+-static inline void page_pool_free(struct page_pool *pool)
+-{
+- /* When page_pool isn't compiled-in, net/core/xdp.c doesn't
+- * allow registering MEM_TYPE_PAGE_POOL, but shield linker.
+- */
+ #ifdef CONFIG_PAGE_POOL
+- __page_pool_free(pool);
+-#endif
+-}
+-
+-/* Drivers use this instead of page_pool_free */
++void page_pool_destroy(struct page_pool *pool);
++void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *));
++#else
+ static inline void page_pool_destroy(struct page_pool *pool)
+ {
+- if (!pool)
+- return;
++}
+
+- page_pool_free(pool);
++static inline void page_pool_use_xdp_mem(struct page_pool *pool,
++ void (*disconnect)(void *))
++{
+ }
++#endif
+
+ /* Never call this directly, use helpers below */
+ void __page_pool_put_page(struct page_pool *pool,
+@@ -170,24 +169,6 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
+ __page_pool_put_page(pool, page, true);
+ }
+
+-/* API user MUST have disconnected alloc-side (not allowed to call
+- * page_pool_alloc_pages()) before calling this. The free-side can
+- * still run concurrently, to handle in-flight packet-pages.
+- *
+- * A request to shutdown can fail (with false) if there are still
+- * in-flight packet-pages.
+- */
+-bool __page_pool_request_shutdown(struct page_pool *pool);
+-static inline bool page_pool_request_shutdown(struct page_pool *pool)
+-{
+- bool safe_to_remove = false;
+-
+-#ifdef CONFIG_PAGE_POOL
+- safe_to_remove = __page_pool_request_shutdown(pool);
+-#endif
+- return safe_to_remove;
+-}
+-
+ /* Disconnects a page (from a page_pool). API users can have a need
+ * to disconnect a page (from a page_pool), to allow it to be used as
+ * a regular page (that will eventually be returned to the normal
+@@ -216,11 +197,6 @@ static inline bool is_page_pool_compiled_in(void)
+ #endif
+ }
+
+-static inline void page_pool_get(struct page_pool *pool)
+-{
+- refcount_inc(&pool->user_cnt);
+-}
+-
+ static inline bool page_pool_put(struct page_pool *pool)
+ {
+ return refcount_dec_and_test(&pool->user_cnt);
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 81e8ade1e6e4..09910641fcc3 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -484,15 +484,16 @@ static inline void tcp_synq_overflow(const struct sock *sk)
+ reuse = rcu_dereference(sk->sk_reuseport_cb);
+ if (likely(reuse)) {
+ last_overflow = READ_ONCE(reuse->synq_overflow_ts);
+- if (time_after32(now, last_overflow + HZ))
++ if (!time_between32(now, last_overflow,
++ last_overflow + HZ))
+ WRITE_ONCE(reuse->synq_overflow_ts, now);
+ return;
+ }
+ }
+
+- last_overflow = tcp_sk(sk)->rx_opt.ts_recent_stamp;
+- if (time_after32(now, last_overflow + HZ))
+- tcp_sk(sk)->rx_opt.ts_recent_stamp = now;
++ last_overflow = READ_ONCE(tcp_sk(sk)->rx_opt.ts_recent_stamp);
++ if (!time_between32(now, last_overflow, last_overflow + HZ))
++ WRITE_ONCE(tcp_sk(sk)->rx_opt.ts_recent_stamp, now);
+ }
+
+ /* syncookies: no recent synqueue overflow on this listening socket? */
+@@ -507,13 +508,23 @@ static inline bool tcp_synq_no_recent_overflow(const struct sock *sk)
+ reuse = rcu_dereference(sk->sk_reuseport_cb);
+ if (likely(reuse)) {
+ last_overflow = READ_ONCE(reuse->synq_overflow_ts);
+- return time_after32(now, last_overflow +
+- TCP_SYNCOOKIE_VALID);
++ return !time_between32(now, last_overflow - HZ,
++ last_overflow +
++ TCP_SYNCOOKIE_VALID);
+ }
+ }
+
+- last_overflow = tcp_sk(sk)->rx_opt.ts_recent_stamp;
+- return time_after32(now, last_overflow + TCP_SYNCOOKIE_VALID);
++ last_overflow = READ_ONCE(tcp_sk(sk)->rx_opt.ts_recent_stamp);
++
++ /* If last_overflow <= jiffies <= last_overflow + TCP_SYNCOOKIE_VALID,
++ * then we're under synflood. However, we have to use
++ * 'last_overflow - HZ' as lower bound. That's because a concurrent
++ * tcp_synq_overflow() could update .ts_recent_stamp after we read
++ * jiffies but before we store .ts_recent_stamp into last_overflow,
++ * which could lead to rejecting a valid syncookie.
++ */
++ return !time_between32(now, last_overflow - HZ,
++ last_overflow + TCP_SYNCOOKIE_VALID);
+ }
+
+ static inline u32 tcp_cookie_time(void)
+diff --git a/include/net/xdp_priv.h b/include/net/xdp_priv.h
+index 6a8cba6ea79a..a9d5b7603b89 100644
+--- a/include/net/xdp_priv.h
++++ b/include/net/xdp_priv.h
+@@ -12,12 +12,8 @@ struct xdp_mem_allocator {
+ struct page_pool *page_pool;
+ struct zero_copy_allocator *zc_alloc;
+ };
+- int disconnect_cnt;
+- unsigned long defer_start;
+ struct rhash_head node;
+ struct rcu_head rcu;
+- struct delayed_work defer_wq;
+- unsigned long defer_warn;
+ };
+
+ #endif /* __LINUX_NET_XDP_PRIV_H__ */
+diff --git a/include/trace/events/xdp.h b/include/trace/events/xdp.h
+index 68899fdc985b..eabc60f1d129 100644
+--- a/include/trace/events/xdp.h
++++ b/include/trace/events/xdp.h
+@@ -316,19 +316,15 @@ __MEM_TYPE_MAP(__MEM_TYPE_TP_FN)
+
+ TRACE_EVENT(mem_disconnect,
+
+- TP_PROTO(const struct xdp_mem_allocator *xa,
+- bool safe_to_remove, bool force),
++ TP_PROTO(const struct xdp_mem_allocator *xa),
+
+- TP_ARGS(xa, safe_to_remove, force),
++ TP_ARGS(xa),
+
+ TP_STRUCT__entry(
+ __field(const struct xdp_mem_allocator *, xa)
+ __field(u32, mem_id)
+ __field(u32, mem_type)
+ __field(const void *, allocator)
+- __field(bool, safe_to_remove)
+- __field(bool, force)
+- __field(int, disconnect_cnt)
+ ),
+
+ TP_fast_assign(
+@@ -336,19 +332,12 @@ TRACE_EVENT(mem_disconnect,
+ __entry->mem_id = xa->mem.id;
+ __entry->mem_type = xa->mem.type;
+ __entry->allocator = xa->allocator;
+- __entry->safe_to_remove = safe_to_remove;
+- __entry->force = force;
+- __entry->disconnect_cnt = xa->disconnect_cnt;
+ ),
+
+- TP_printk("mem_id=%d mem_type=%s allocator=%p"
+- " safe_to_remove=%s force=%s disconnect_cnt=%d",
++ TP_printk("mem_id=%d mem_type=%s allocator=%p",
+ __entry->mem_id,
+ __print_symbolic(__entry->mem_type, __MEM_TYPE_SYM_TAB),
+- __entry->allocator,
+- __entry->safe_to_remove ? "true" : "false",
+- __entry->force ? "true" : "false",
+- __entry->disconnect_cnt
++ __entry->allocator
+ )
+ );
+
+diff --git a/net/bridge/br_device.c b/net/bridge/br_device.c
+index 681b72862c16..750e8dba38ec 100644
+--- a/net/bridge/br_device.c
++++ b/net/bridge/br_device.c
+@@ -253,6 +253,12 @@ static int br_set_mac_address(struct net_device *dev, void *p)
+ if (!is_valid_ether_addr(addr->sa_data))
+ return -EADDRNOTAVAIL;
+
++ /* dev_set_mac_addr() can be called by a master device on bridge's
++ * NETDEV_UNREGISTER, but since it's being destroyed do nothing
++ */
++ if (dev->reg_state != NETREG_REGISTERED)
++ return -EBUSY;
++
+ spin_lock_bh(&br->lock);
+ if (!ether_addr_equal(dev->dev_addr, addr->sa_data)) {
+ /* Mac address will be changed in br_stp_change_bridge_id(). */
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 33b278b826b5..ae83b3059d67 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -7662,7 +7662,8 @@ int __dev_set_mtu(struct net_device *dev, int new_mtu)
+ if (ops->ndo_change_mtu)
+ return ops->ndo_change_mtu(dev, new_mtu);
+
+- dev->mtu = new_mtu;
++ /* Pairs with all the lockless reads of dev->mtu in the stack */
++ WRITE_ONCE(dev->mtu, new_mtu);
+ return 0;
+ }
+ EXPORT_SYMBOL(__dev_set_mtu);
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index 2f5326a82465..fdcce7ab0cc3 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -853,9 +853,10 @@ bool __skb_flow_dissect(const struct net *net,
+ nhoff = skb_network_offset(skb);
+ hlen = skb_headlen(skb);
+ #if IS_ENABLED(CONFIG_NET_DSA)
+- if (unlikely(skb->dev && netdev_uses_dsa(skb->dev))) {
++ if (unlikely(skb->dev && netdev_uses_dsa(skb->dev) &&
++ proto == htons(ETH_P_XDSA))) {
+ const struct dsa_device_ops *ops;
+- int offset;
++ int offset = 0;
+
+ ops = skb->dev->dsa_ptr->tag_ops;
+ if (ops->flow_dissect &&
+diff --git a/net/core/lwt_bpf.c b/net/core/lwt_bpf.c
+index 74cfb8b5ab33..99a6de52b21d 100644
+--- a/net/core/lwt_bpf.c
++++ b/net/core/lwt_bpf.c
+@@ -230,9 +230,7 @@ static int bpf_lwt_xmit_reroute(struct sk_buff *skb)
+ fl6.daddr = iph6->daddr;
+ fl6.saddr = iph6->saddr;
+
+- err = ipv6_stub->ipv6_dst_lookup(net, skb->sk, &dst, &fl6);
+- if (unlikely(err))
+- goto err;
++ dst = ipv6_stub->ipv6_dst_lookup_flow(net, skb->sk, &fl6, NULL);
+ if (IS_ERR(dst)) {
+ err = PTR_ERR(dst);
+ goto err;
+diff --git a/net/core/page_pool.c b/net/core/page_pool.c
+index 3272dc7a8c81..6e7715243dda 100644
+--- a/net/core/page_pool.c
++++ b/net/core/page_pool.c
+@@ -18,6 +18,9 @@
+
+ #include <trace/events/page_pool.h>
+
++#define DEFER_TIME (msecs_to_jiffies(1000))
++#define DEFER_WARN_INTERVAL (60 * HZ)
++
+ static int page_pool_init(struct page_pool *pool,
+ const struct page_pool_params *params)
+ {
+@@ -200,22 +203,14 @@ static s32 page_pool_inflight(struct page_pool *pool)
+ {
+ u32 release_cnt = atomic_read(&pool->pages_state_release_cnt);
+ u32 hold_cnt = READ_ONCE(pool->pages_state_hold_cnt);
+- s32 distance;
+-
+- distance = _distance(hold_cnt, release_cnt);
+-
+- trace_page_pool_inflight(pool, distance, hold_cnt, release_cnt);
+- return distance;
+-}
++ s32 inflight;
+
+-static bool __page_pool_safe_to_destroy(struct page_pool *pool)
+-{
+- s32 inflight = page_pool_inflight(pool);
++ inflight = _distance(hold_cnt, release_cnt);
+
+- /* The distance should not be able to become negative */
++ trace_page_pool_inflight(pool, inflight, hold_cnt, release_cnt);
+ WARN(inflight < 0, "Negative(%d) inflight packet-pages", inflight);
+
+- return (inflight == 0);
++ return inflight;
+ }
+
+ /* Cleanup page_pool state from page */
+@@ -223,6 +218,7 @@ static void __page_pool_clean_page(struct page_pool *pool,
+ struct page *page)
+ {
+ dma_addr_t dma;
++ int count;
+
+ if (!(pool->p.flags & PP_FLAG_DMA_MAP))
+ goto skip_dma_unmap;
+@@ -234,9 +230,11 @@ static void __page_pool_clean_page(struct page_pool *pool,
+ DMA_ATTR_SKIP_CPU_SYNC);
+ page->dma_addr = 0;
+ skip_dma_unmap:
+- atomic_inc(&pool->pages_state_release_cnt);
+- trace_page_pool_state_release(pool, page,
+- atomic_read(&pool->pages_state_release_cnt));
++ /* This may be the last page returned, releasing the pool, so
++ * it is not safe to reference pool afterwards.
++ */
++ count = atomic_inc_return(&pool->pages_state_release_cnt);
++ trace_page_pool_state_release(pool, page, count);
+ }
+
+ /* unmap the page and clean our state */
+@@ -345,31 +343,10 @@ static void __page_pool_empty_ring(struct page_pool *pool)
+ }
+ }
+
+-static void __warn_in_flight(struct page_pool *pool)
++static void page_pool_free(struct page_pool *pool)
+ {
+- u32 release_cnt = atomic_read(&pool->pages_state_release_cnt);
+- u32 hold_cnt = READ_ONCE(pool->pages_state_hold_cnt);
+- s32 distance;
+-
+- distance = _distance(hold_cnt, release_cnt);
+-
+- /* Drivers should fix this, but only problematic when DMA is used */
+- WARN(1, "Still in-flight pages:%d hold:%u released:%u",
+- distance, hold_cnt, release_cnt);
+-}
+-
+-void __page_pool_free(struct page_pool *pool)
+-{
+- /* Only last user actually free/release resources */
+- if (!page_pool_put(pool))
+- return;
+-
+- WARN(pool->alloc.count, "API usage violation");
+- WARN(!ptr_ring_empty(&pool->ring), "ptr_ring is not empty");
+-
+- /* Can happen due to forced shutdown */
+- if (!__page_pool_safe_to_destroy(pool))
+- __warn_in_flight(pool);
++ if (pool->disconnect)
++ pool->disconnect(pool);
+
+ ptr_ring_cleanup(&pool->ring, NULL);
+
+@@ -378,12 +355,8 @@ void __page_pool_free(struct page_pool *pool)
+
+ kfree(pool);
+ }
+-EXPORT_SYMBOL(__page_pool_free);
+
+-/* Request to shutdown: release pages cached by page_pool, and check
+- * for in-flight pages
+- */
+-bool __page_pool_request_shutdown(struct page_pool *pool)
++static void page_pool_scrub(struct page_pool *pool)
+ {
+ struct page *page;
+
+@@ -400,7 +373,64 @@ bool __page_pool_request_shutdown(struct page_pool *pool)
+ * be in-flight.
+ */
+ __page_pool_empty_ring(pool);
++}
++
++static int page_pool_release(struct page_pool *pool)
++{
++ int inflight;
++
++ page_pool_scrub(pool);
++ inflight = page_pool_inflight(pool);
++ if (!inflight)
++ page_pool_free(pool);
++
++ return inflight;
++}
++
++static void page_pool_release_retry(struct work_struct *wq)
++{
++ struct delayed_work *dwq = to_delayed_work(wq);
++ struct page_pool *pool = container_of(dwq, typeof(*pool), release_dw);
++ int inflight;
++
++ inflight = page_pool_release(pool);
++ if (!inflight)
++ return;
++
++ /* Periodic warning */
++ if (time_after_eq(jiffies, pool->defer_warn)) {
++ int sec = (s32)((u32)jiffies - (u32)pool->defer_start) / HZ;
++
++ pr_warn("%s() stalled pool shutdown %d inflight %d sec\n",
++ __func__, inflight, sec);
++ pool->defer_warn = jiffies + DEFER_WARN_INTERVAL;
++ }
++
++ /* Still not ready to be disconnected, retry later */
++ schedule_delayed_work(&pool->release_dw, DEFER_TIME);
++}
++
++void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *))
++{
++ refcount_inc(&pool->user_cnt);
++ pool->disconnect = disconnect;
++}
++
++void page_pool_destroy(struct page_pool *pool)
++{
++ if (!pool)
++ return;
++
++ if (!page_pool_put(pool))
++ return;
++
++ if (!page_pool_release(pool))
++ return;
++
++ pool->defer_start = jiffies;
++ pool->defer_warn = jiffies + DEFER_WARN_INTERVAL;
+
+- return __page_pool_safe_to_destroy(pool);
++ INIT_DELAYED_WORK(&pool->release_dw, page_pool_release_retry);
++ schedule_delayed_work(&pool->release_dw, DEFER_TIME);
+ }
+-EXPORT_SYMBOL(__page_pool_request_shutdown);
++EXPORT_SYMBOL(page_pool_destroy);
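
The page_pool rework above folds the old __page_pool_safe_to_destroy() check
into page_pool_inflight(), which now traces and returns the hold/release
delta, while page_pool_release_retry() requeues itself every DEFER_TIME until
that delta reaches zero. The count stays correct even when the counters wrap,
because _distance() subtracts free-running u32 counters and reinterprets the
result as s32. A standalone sketch of that arithmetic, assuming
two's-complement wrap (hypothetical demo code, not from the kernel):

#include <stdint.h>
#include <stdio.h>

/* same idea as _distance(): the u32 subtraction wraps modulo 2^32, so
 * the signed reinterpretation is the true gap while it fits in 31 bits */
static int32_t distance(uint32_t hold, uint32_t release)
{
	return (int32_t)(hold - release);
}

int main(void)
{
	uint32_t release = 0xfffffff0u;    /* counter close to wrapping */
	uint32_t hold = release + 50u;     /* wraps past zero to 0x22 */

	printf("inflight = %d\n", distance(hold, release));  /* prints 50 */
	return 0;
}
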
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index d4a47c44daf0..7b62f1bd04a0 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -5472,7 +5472,7 @@ static void skb_mod_eth_type(struct sk_buff *skb, struct ethhdr *hdr,
+ * Returns 0 on success, -errno otherwise.
+ */
+ int skb_mpls_push(struct sk_buff *skb, __be32 mpls_lse, __be16 mpls_proto,
+- int mac_len)
++ int mac_len, bool ethernet)
+ {
+ struct mpls_shim_hdr *lse;
+ int err;
+@@ -5503,7 +5503,7 @@ int skb_mpls_push(struct sk_buff *skb, __be32 mpls_lse, __be16 mpls_proto,
+ lse->label_stack_entry = mpls_lse;
+ skb_postpush_rcsum(skb, lse, MPLS_HLEN);
+
+- if (skb->dev && skb->dev->type == ARPHRD_ETHER)
++ if (ethernet)
+ skb_mod_eth_type(skb, eth_hdr(skb), mpls_proto);
+ skb->protocol = mpls_proto;
+
+@@ -5517,12 +5517,14 @@ EXPORT_SYMBOL_GPL(skb_mpls_push);
+ * @skb: buffer
+ * @next_proto: ethertype of header after popped MPLS header
+ * @mac_len: length of the MAC header
++ * @ethernet: flag to indicate if ethernet header is present in packet
+ *
+ * Expects skb->data at mac header.
+ *
+ * Returns 0 on success, -errno otherwise.
+ */
+-int skb_mpls_pop(struct sk_buff *skb, __be16 next_proto, int mac_len)
++int skb_mpls_pop(struct sk_buff *skb, __be16 next_proto, int mac_len,
++ bool ethernet)
+ {
+ int err;
+
+@@ -5541,7 +5543,7 @@ int skb_mpls_pop(struct sk_buff *skb, __be16 next_proto, int mac_len)
+ skb_reset_mac_header(skb);
+ skb_set_network_header(skb, mac_len);
+
+- if (skb->dev && skb->dev->type == ARPHRD_ETHER) {
++ if (ethernet) {
+ struct ethhdr *hdr;
+
+ /* use mpls_hdr() to get ethertype to account for VLANs. */
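
Taken together, the two skbuff.c hunks change the contract of skb_mpls_push()
and skb_mpls_pop(): instead of the helpers guessing from skb->dev->type
whether an Ethernet header needs its ethertype rewritten, every caller now
passes that decision as an explicit flag. A hedged sketch of the resulting
call-site shape (the signatures come from the hunks above; the surrounding
variables are placeholders and this fragment is not standalone):

	bool ethernet = skb->dev && skb->dev->type == ARPHRD_ETHER;

	err = skb_mpls_push(skb, mpls_lse, mpls_proto, skb->mac_len, ethernet);
	if (!err)
		err = skb_mpls_pop(skb, next_proto, skb->mac_len, ethernet);
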
+diff --git a/net/core/xdp.c b/net/core/xdp.c
+index d7bf62ffbb5e..b3f463c6543f 100644
+--- a/net/core/xdp.c
++++ b/net/core/xdp.c
+@@ -70,10 +70,6 @@ static void __xdp_mem_allocator_rcu_free(struct rcu_head *rcu)
+
+ xa = container_of(rcu, struct xdp_mem_allocator, rcu);
+
+- /* Allocator have indicated safe to remove before this is called */
+- if (xa->mem.type == MEM_TYPE_PAGE_POOL)
+- page_pool_free(xa->page_pool);
+-
+ /* Allow this ID to be reused */
+ ida_simple_remove(&mem_id_pool, xa->mem.id);
+
+@@ -85,62 +81,57 @@ static void __xdp_mem_allocator_rcu_free(struct rcu_head *rcu)
+ kfree(xa);
+ }
+
+-static bool __mem_id_disconnect(int id, bool force)
++static void mem_xa_remove(struct xdp_mem_allocator *xa)
++{
++ trace_mem_disconnect(xa);
++
++ if (!rhashtable_remove_fast(mem_id_ht, &xa->node, mem_id_rht_params))
++ call_rcu(&xa->rcu, __xdp_mem_allocator_rcu_free);
++}
++
++static void mem_allocator_disconnect(void *allocator)
+ {
+ struct xdp_mem_allocator *xa;
+- bool safe_to_remove = true;
++ struct rhashtable_iter iter;
+
+ mutex_lock(&mem_id_lock);
+
+- xa = rhashtable_lookup_fast(mem_id_ht, &id, mem_id_rht_params);
+- if (!xa) {
+- mutex_unlock(&mem_id_lock);
+- WARN(1, "Request remove non-existing id(%d), driver bug?", id);
+- return true;
+- }
+- xa->disconnect_cnt++;
++ rhashtable_walk_enter(mem_id_ht, &iter);
++ do {
++ rhashtable_walk_start(&iter);
+
+- /* Detects in-flight packet-pages for page_pool */
+- if (xa->mem.type == MEM_TYPE_PAGE_POOL)
+- safe_to_remove = page_pool_request_shutdown(xa->page_pool);
++ while ((xa = rhashtable_walk_next(&iter)) && !IS_ERR(xa)) {
++ if (xa->allocator == allocator)
++ mem_xa_remove(xa);
++ }
+
+- trace_mem_disconnect(xa, safe_to_remove, force);
++ rhashtable_walk_stop(&iter);
+
+- if ((safe_to_remove || force) &&
+- !rhashtable_remove_fast(mem_id_ht, &xa->node, mem_id_rht_params))
+- call_rcu(&xa->rcu, __xdp_mem_allocator_rcu_free);
++ } while (xa == ERR_PTR(-EAGAIN));
++ rhashtable_walk_exit(&iter);
+
+ mutex_unlock(&mem_id_lock);
+- return (safe_to_remove|force);
+ }
+
+-#define DEFER_TIME (msecs_to_jiffies(1000))
+-#define DEFER_WARN_INTERVAL (30 * HZ)
+-#define DEFER_MAX_RETRIES 120
+-
+-static void mem_id_disconnect_defer_retry(struct work_struct *wq)
++static void mem_id_disconnect(int id)
+ {
+- struct delayed_work *dwq = to_delayed_work(wq);
+- struct xdp_mem_allocator *xa = container_of(dwq, typeof(*xa), defer_wq);
+- bool force = false;
++ struct xdp_mem_allocator *xa;
+
+- if (xa->disconnect_cnt > DEFER_MAX_RETRIES)
+- force = true;
++ mutex_lock(&mem_id_lock);
+
+- if (__mem_id_disconnect(xa->mem.id, force))
++ xa = rhashtable_lookup_fast(mem_id_ht, &id, mem_id_rht_params);
++ if (!xa) {
++ mutex_unlock(&mem_id_lock);
++ WARN(1, "Request remove non-existing id(%d), driver bug?", id);
+ return;
++ }
+
+- /* Periodic warning */
+- if (time_after_eq(jiffies, xa->defer_warn)) {
+- int sec = (s32)((u32)jiffies - (u32)xa->defer_start) / HZ;
++ trace_mem_disconnect(xa);
+
+- pr_warn("%s() stalled mem.id=%u shutdown %d attempts %d sec\n",
+- __func__, xa->mem.id, xa->disconnect_cnt, sec);
+- xa->defer_warn = jiffies + DEFER_WARN_INTERVAL;
+- }
++ if (!rhashtable_remove_fast(mem_id_ht, &xa->node, mem_id_rht_params))
++ call_rcu(&xa->rcu, __xdp_mem_allocator_rcu_free);
+
+- /* Still not ready to be disconnected, retry later */
+- schedule_delayed_work(&xa->defer_wq, DEFER_TIME);
++ mutex_unlock(&mem_id_lock);
+ }
+
+ void xdp_rxq_info_unreg_mem_model(struct xdp_rxq_info *xdp_rxq)
+@@ -153,38 +144,21 @@ void xdp_rxq_info_unreg_mem_model(struct xdp_rxq_info *xdp_rxq)
+ return;
+ }
+
+- if (xdp_rxq->mem.type != MEM_TYPE_PAGE_POOL &&
+- xdp_rxq->mem.type != MEM_TYPE_ZERO_COPY) {
+- return;
+- }
+-
+ if (id == 0)
+ return;
+
+- if (__mem_id_disconnect(id, false))
+- return;
+-
+- /* Could not disconnect, defer new disconnect attempt to later */
+- mutex_lock(&mem_id_lock);
++ if (xdp_rxq->mem.type == MEM_TYPE_ZERO_COPY)
++ return mem_id_disconnect(id);
+
+- xa = rhashtable_lookup_fast(mem_id_ht, &id, mem_id_rht_params);
+- if (!xa) {
+- mutex_unlock(&mem_id_lock);
+- return;
++ if (xdp_rxq->mem.type == MEM_TYPE_PAGE_POOL) {
++ rcu_read_lock();
++ xa = rhashtable_lookup(mem_id_ht, &id, mem_id_rht_params);
++ page_pool_destroy(xa->page_pool);
++ rcu_read_unlock();
+ }
+- xa->defer_start = jiffies;
+- xa->defer_warn = jiffies + DEFER_WARN_INTERVAL;
+-
+- INIT_DELAYED_WORK(&xa->defer_wq, mem_id_disconnect_defer_retry);
+- mutex_unlock(&mem_id_lock);
+- schedule_delayed_work(&xa->defer_wq, DEFER_TIME);
+ }
+ EXPORT_SYMBOL_GPL(xdp_rxq_info_unreg_mem_model);
+
+-/* This unregister operation will also cleanup and destroy the
+- * allocator. The page_pool_free() operation is first called when it's
+- * safe to remove, possibly deferred to a workqueue.
+- */
+ void xdp_rxq_info_unreg(struct xdp_rxq_info *xdp_rxq)
+ {
+ /* Simplify driver cleanup code paths, allow unreg "unused" */
+@@ -371,7 +345,7 @@ int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
+ }
+
+ if (type == MEM_TYPE_PAGE_POOL)
+- page_pool_get(xdp_alloc->page_pool);
++ page_pool_use_xdp_mem(allocator, mem_allocator_disconnect);
+
+ mutex_unlock(&mem_id_lock);
+
+@@ -402,15 +376,8 @@ static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
+ /* mem->id is valid, checked in xdp_rxq_info_reg_mem_model() */
+ xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
+ page = virt_to_head_page(data);
+- if (likely(xa)) {
+- napi_direct &= !xdp_return_frame_no_direct();
+- page_pool_put_page(xa->page_pool, page, napi_direct);
+- } else {
+- /* Hopefully stack show who to blame for late return */
+- WARN_ONCE(1, "page_pool gone mem.id=%d", mem->id);
+- trace_mem_return_failed(mem, page);
+- put_page(page);
+- }
++ napi_direct &= !xdp_return_frame_no_direct();
++ page_pool_put_page(xa->page_pool, page, napi_direct);
+ rcu_read_unlock();
+ break;
+ case MEM_TYPE_PAGE_SHARED:
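
The xdp.c rework above inverts the shutdown ownership: the memory-model layer
no longer polls whether a page_pool is safe to free; instead
page_pool_use_xdp_mem() takes an extra user reference and registers
mem_allocator_disconnect() as a callback, which the pool invokes from its own
teardown to purge the matching rhashtable entries. A minimal userspace model
of that registration pattern (hypothetical names, not the kernel API):

#include <stdio.h>

struct pool {
	int users;                       /* stands in for refcount_t user_cnt */
	void (*disconnect)(void *pool);  /* invoked from the pool's teardown */
};

static void use_mem(struct pool *p, void (*cb)(void *))
{
	p->users++;          /* the registrar becomes an extra user */
	p->disconnect = cb;
}

static void on_disconnect(void *pool)
{
	printf("disconnect pool %p\n", pool);  /* e.g. drop hashtable entries */
}

int main(void)
{
	struct pool p = { .users = 1 };

	use_mem(&p, on_disconnect);
	p.disconnect(&p);    /* teardown calls back into whoever registered */
	return 0;
}
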
+diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
+index 1b7381ff787b..e81869b7875f 100644
+--- a/net/dccp/ipv6.c
++++ b/net/dccp/ipv6.c
+@@ -210,7 +210,7 @@ static int dccp_v6_send_response(const struct sock *sk, struct request_sock *req
+ final_p = fl6_update_dst(&fl6, rcu_dereference(np->opt), &final);
+ rcu_read_unlock();
+
+- dst = ip6_dst_lookup_flow(sk, &fl6, final_p);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, &fl6, final_p);
+ if (IS_ERR(dst)) {
+ err = PTR_ERR(dst);
+ dst = NULL;
+@@ -281,7 +281,7 @@ static void dccp_v6_ctl_send_reset(const struct sock *sk, struct sk_buff *rxskb)
+ security_skb_classify_flow(rxskb, flowi6_to_flowi(&fl6));
+
+ /* sk = NULL, but it is safe for now. RST socket required. */
+- dst = ip6_dst_lookup_flow(ctl_sk, &fl6, NULL);
++ dst = ip6_dst_lookup_flow(sock_net(ctl_sk), ctl_sk, &fl6, NULL);
+ if (!IS_ERR(dst)) {
+ skb_dst_set(skb, dst);
+ ip6_xmit(ctl_sk, skb, &fl6, 0, NULL, 0);
+@@ -911,7 +911,7 @@ static int dccp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
+ opt = rcu_dereference_protected(np->opt, lockdep_sock_is_held(sk));
+ final_p = fl6_update_dst(&fl6, opt, &final);
+
+- dst = ip6_dst_lookup_flow(sk, &fl6, final_p);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, &fl6, final_p);
+ if (IS_ERR(dst)) {
+ err = PTR_ERR(dst);
+ goto failure;
+diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
+index f509b495451a..b01e1bae4ddc 100644
+--- a/net/hsr/hsr_device.c
++++ b/net/hsr/hsr_device.c
+@@ -227,8 +227,13 @@ static int hsr_dev_xmit(struct sk_buff *skb, struct net_device *dev)
+ struct hsr_port *master;
+
+ master = hsr_port_get_hsr(hsr, HSR_PT_MASTER);
+- skb->dev = master->dev;
+- hsr_forward_skb(skb, master);
++ if (master) {
++ skb->dev = master->dev;
++ hsr_forward_skb(skb, master);
++ } else {
++ atomic_long_inc(&dev->tx_dropped);
++ dev_kfree_skb_any(skb);
++ }
+ return NETDEV_TX_OK;
+ }
+
+diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
+index a4b5bd4d2c89..e4632bd2026d 100644
+--- a/net/ipv4/devinet.c
++++ b/net/ipv4/devinet.c
+@@ -1496,11 +1496,6 @@ skip:
+ }
+ }
+
+-static bool inetdev_valid_mtu(unsigned int mtu)
+-{
+- return mtu >= IPV4_MIN_MTU;
+-}
+-
+ static void inetdev_send_gratuitous_arp(struct net_device *dev,
+ struct in_device *in_dev)
+
+diff --git a/net/ipv4/gre_demux.c b/net/ipv4/gre_demux.c
+index 44bfeecac33e..5fd6e8ed02b5 100644
+--- a/net/ipv4/gre_demux.c
++++ b/net/ipv4/gre_demux.c
+@@ -127,7 +127,7 @@ int gre_parse_header(struct sk_buff *skb, struct tnl_ptk_info *tpi,
+ if (!pskb_may_pull(skb, nhs + hdr_len + sizeof(*ershdr)))
+ return -EINVAL;
+
+- ershdr = (struct erspan_base_hdr *)options;
++ ershdr = (struct erspan_base_hdr *)(skb->data + nhs + hdr_len);
+ tpi->key = cpu_to_be32(get_session_id(ershdr));
+ }
+
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index e780ceab16e1..cd664655806e 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -1258,15 +1258,18 @@ static int ip_setup_cork(struct sock *sk, struct inet_cork *cork,
+ cork->addr = ipc->addr;
+ }
+
+- /*
+- * We steal reference to this route, caller should not release it
+- */
+- *rtp = NULL;
+ cork->fragsize = ip_sk_use_pmtu(sk) ?
+- dst_mtu(&rt->dst) : rt->dst.dev->mtu;
++ dst_mtu(&rt->dst) : READ_ONCE(rt->dst.dev->mtu);
++
++ if (!inetdev_valid_mtu(cork->fragsize))
++ return -ENETUNREACH;
+
+ cork->gso_size = ipc->gso_size;
++
+ cork->dst = &rt->dst;
++ /* We stole this route, caller should not release it. */
++ *rtp = NULL;
++
+ cork->length = 0;
+ cork->ttl = ipc->ttl;
+ cork->tos = ipc->tos;
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 8a645f304e6c..606e17e1aca3 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -755,8 +755,9 @@ static unsigned int tcp_established_options(struct sock *sk, struct sk_buff *skb
+ min_t(unsigned int, eff_sacks,
+ (remaining - TCPOLEN_SACK_BASE_ALIGNED) /
+ TCPOLEN_SACK_PERBLOCK);
+- size += TCPOLEN_SACK_BASE_ALIGNED +
+- opts->num_sack_blocks * TCPOLEN_SACK_PERBLOCK;
++ if (likely(opts->num_sack_blocks))
++ size += TCPOLEN_SACK_BASE_ALIGNED +
++ opts->num_sack_blocks * TCPOLEN_SACK_PERBLOCK;
+ }
+
+ return size;
+diff --git a/net/ipv6/addrconf_core.c b/net/ipv6/addrconf_core.c
+index 783f3c1466da..748a4253650f 100644
+--- a/net/ipv6/addrconf_core.c
++++ b/net/ipv6/addrconf_core.c
+@@ -128,11 +128,12 @@ int inet6addr_validator_notifier_call_chain(unsigned long val, void *v)
+ }
+ EXPORT_SYMBOL(inet6addr_validator_notifier_call_chain);
+
+-static int eafnosupport_ipv6_dst_lookup(struct net *net, struct sock *u1,
+- struct dst_entry **u2,
+- struct flowi6 *u3)
++static struct dst_entry *eafnosupport_ipv6_dst_lookup_flow(struct net *net,
++ const struct sock *sk,
++ struct flowi6 *fl6,
++ const struct in6_addr *final_dst)
+ {
+- return -EAFNOSUPPORT;
++ return ERR_PTR(-EAFNOSUPPORT);
+ }
+
+ static int eafnosupport_ipv6_route_input(struct sk_buff *skb)
+@@ -189,7 +190,7 @@ static int eafnosupport_ip6_del_rt(struct net *net, struct fib6_info *rt)
+ }
+
+ const struct ipv6_stub *ipv6_stub __read_mostly = &(struct ipv6_stub) {
+- .ipv6_dst_lookup = eafnosupport_ipv6_dst_lookup,
++ .ipv6_dst_lookup_flow = eafnosupport_ipv6_dst_lookup_flow,
+ .ipv6_route_input = eafnosupport_ipv6_route_input,
+ .fib6_get_table = eafnosupport_fib6_get_table,
+ .fib6_table_lookup = eafnosupport_fib6_table_lookup,
+diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
+index ef37e0574f54..14ac1d911287 100644
+--- a/net/ipv6/af_inet6.c
++++ b/net/ipv6/af_inet6.c
+@@ -765,7 +765,7 @@ int inet6_sk_rebuild_header(struct sock *sk)
+ &final);
+ rcu_read_unlock();
+
+- dst = ip6_dst_lookup_flow(sk, &fl6, final_p);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, &fl6, final_p);
+ if (IS_ERR(dst)) {
+ sk->sk_route_caps = 0;
+ sk->sk_err_soft = -PTR_ERR(dst);
+@@ -946,7 +946,7 @@ static int ipv6_route_input(struct sk_buff *skb)
+ static const struct ipv6_stub ipv6_stub_impl = {
+ .ipv6_sock_mc_join = ipv6_sock_mc_join,
+ .ipv6_sock_mc_drop = ipv6_sock_mc_drop,
+- .ipv6_dst_lookup = ip6_dst_lookup,
++ .ipv6_dst_lookup_flow = ip6_dst_lookup_flow,
+ .ipv6_route_input = ipv6_route_input,
+ .fib6_get_table = fib6_get_table,
+ .fib6_table_lookup = fib6_table_lookup,
+diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c
+index 96f939248d2f..390bedde21a5 100644
+--- a/net/ipv6/datagram.c
++++ b/net/ipv6/datagram.c
+@@ -85,7 +85,7 @@ int ip6_datagram_dst_update(struct sock *sk, bool fix_sk_saddr)
+ final_p = fl6_update_dst(&fl6, opt, &final);
+ rcu_read_unlock();
+
+- dst = ip6_dst_lookup_flow(sk, &fl6, final_p);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, &fl6, final_p);
+ if (IS_ERR(dst)) {
+ err = PTR_ERR(dst);
+ goto out;
+diff --git a/net/ipv6/inet6_connection_sock.c b/net/ipv6/inet6_connection_sock.c
+index 4da24aa6c696..9f3ef6e02568 100644
+--- a/net/ipv6/inet6_connection_sock.c
++++ b/net/ipv6/inet6_connection_sock.c
+@@ -48,7 +48,7 @@ struct dst_entry *inet6_csk_route_req(const struct sock *sk,
+ fl6->flowi6_uid = sk->sk_uid;
+ security_req_classify_flow(req, flowi6_to_flowi(fl6));
+
+- dst = ip6_dst_lookup_flow(sk, fl6, final_p);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, fl6, final_p);
+ if (IS_ERR(dst))
+ return NULL;
+
+@@ -103,7 +103,7 @@ static struct dst_entry *inet6_csk_route_socket(struct sock *sk,
+
+ dst = __inet6_csk_dst_check(sk, np->dst_cookie);
+ if (!dst) {
+- dst = ip6_dst_lookup_flow(sk, fl6, final_p);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, fl6, final_p);
+
+ if (!IS_ERR(dst))
+ ip6_dst_store(sk, dst, NULL, NULL);
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index e71568f730f9..43c7389922b1 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1144,19 +1144,19 @@ EXPORT_SYMBOL_GPL(ip6_dst_lookup);
+ * It returns a valid dst pointer on success, or a pointer encoded
+ * error code.
+ */
+-struct dst_entry *ip6_dst_lookup_flow(const struct sock *sk, struct flowi6 *fl6,
++struct dst_entry *ip6_dst_lookup_flow(struct net *net, const struct sock *sk, struct flowi6 *fl6,
+ const struct in6_addr *final_dst)
+ {
+ struct dst_entry *dst = NULL;
+ int err;
+
+- err = ip6_dst_lookup_tail(sock_net(sk), sk, &dst, fl6);
++ err = ip6_dst_lookup_tail(net, sk, &dst, fl6);
+ if (err)
+ return ERR_PTR(err);
+ if (final_dst)
+ fl6->daddr = *final_dst;
+
+- return xfrm_lookup_route(sock_net(sk), dst, flowi6_to_flowi(fl6), sk, 0);
++ return xfrm_lookup_route(net, dst, flowi6_to_flowi(fl6), sk, 0);
+ }
+ EXPORT_SYMBOL_GPL(ip6_dst_lookup_flow);
+
+@@ -1188,7 +1188,7 @@ struct dst_entry *ip6_sk_dst_lookup_flow(struct sock *sk, struct flowi6 *fl6,
+ if (dst)
+ return dst;
+
+- dst = ip6_dst_lookup_flow(sk, fl6, final_dst);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, fl6, final_dst);
+ if (connected && !IS_ERR(dst))
+ ip6_sk_dst_store_flow(sk, dst_clone(dst), fl6);
+
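
This ip6_output.c hunk is the core of the ip6_dst_lookup_flow() sweep in this
patch: the network namespace becomes an explicit argument instead of being
derived via sock_net(sk), so callers that only hold a control socket, or no
socket at all (the af_mpls stub path above passes sk == NULL), can still name
the right namespace. A hedged fragment of the two call-site shapes (signature
from the hunk; the surrounding code is illustrative only):

	/* normal case: namespace taken from the full socket */
	dst = ip6_dst_lookup_flow(sock_net(sk), sk, &fl6, final_p);

	/* no usable socket: pass the namespace directly, sk may be NULL */
	dst = ip6_dst_lookup_flow(net, NULL, &fl6, NULL);
	if (IS_ERR(dst))
		return PTR_ERR(dst);
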
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index 8a6131991e38..6889716bf989 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -923,7 +923,7 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+
+ fl6.flowlabel = ip6_make_flowinfo(ipc6.tclass, fl6.flowlabel);
+
+- dst = ip6_dst_lookup_flow(sk, &fl6, final_p);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, &fl6, final_p);
+ if (IS_ERR(dst)) {
+ err = PTR_ERR(dst);
+ goto out;
+diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c
+index 16632e02e9b0..30915f6f31e3 100644
+--- a/net/ipv6/syncookies.c
++++ b/net/ipv6/syncookies.c
+@@ -235,7 +235,7 @@ struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb)
+ fl6.flowi6_uid = sk->sk_uid;
+ security_req_classify_flow(req, flowi6_to_flowi(&fl6));
+
+- dst = ip6_dst_lookup_flow(sk, &fl6, final_p);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, &fl6, final_p);
+ if (IS_ERR(dst))
+ goto out_free;
+ }
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 5da069e91cac..84497e0342bc 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -275,7 +275,7 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
+
+ security_sk_classify_flow(sk, flowi6_to_flowi(&fl6));
+
+- dst = ip6_dst_lookup_flow(sk, &fl6, final_p);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, &fl6, final_p);
+ if (IS_ERR(dst)) {
+ err = PTR_ERR(dst);
+ goto failure;
+@@ -904,7 +904,7 @@ static void tcp_v6_send_response(const struct sock *sk, struct sk_buff *skb, u32
+ * Underlying function will use this to retrieve the network
+ * namespace
+ */
+- dst = ip6_dst_lookup_flow(ctl_sk, &fl6, NULL);
++ dst = ip6_dst_lookup_flow(sock_net(ctl_sk), ctl_sk, &fl6, NULL);
+ if (!IS_ERR(dst)) {
+ skb_dst_set(buff, dst);
+ ip6_xmit(ctl_sk, buff, &fl6, fl6.flowi6_mark, NULL, tclass);
+diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c
+index 687e23a8b326..ad371606cba5 100644
+--- a/net/l2tp/l2tp_ip6.c
++++ b/net/l2tp/l2tp_ip6.c
+@@ -615,7 +615,7 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+
+ fl6.flowlabel = ip6_make_flowinfo(ipc6.tclass, fl6.flowlabel);
+
+- dst = ip6_dst_lookup_flow(sk, &fl6, final_p);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, &fl6, final_p);
+ if (IS_ERR(dst)) {
+ err = PTR_ERR(dst);
+ goto out;
+diff --git a/net/mpls/af_mpls.c b/net/mpls/af_mpls.c
+index c312741df2ce..4701edffb1f7 100644
+--- a/net/mpls/af_mpls.c
++++ b/net/mpls/af_mpls.c
+@@ -617,16 +617,15 @@ static struct net_device *inet6_fib_lookup_dev(struct net *net,
+ struct net_device *dev;
+ struct dst_entry *dst;
+ struct flowi6 fl6;
+- int err;
+
+ if (!ipv6_stub)
+ return ERR_PTR(-EAFNOSUPPORT);
+
+ memset(&fl6, 0, sizeof(fl6));
+ memcpy(&fl6.daddr, addr, sizeof(struct in6_addr));
+- err = ipv6_stub->ipv6_dst_lookup(net, NULL, &dst, &fl6);
+- if (err)
+- return ERR_PTR(err);
++ dst = ipv6_stub->ipv6_dst_lookup_flow(net, NULL, &fl6, NULL);
++ if (IS_ERR(dst))
++ return ERR_CAST(dst);
+
+ dev = dst->dev;
+ dev_hold(dev);
+diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
+index 1c77f520f474..99352f09deaa 100644
+--- a/net/openvswitch/actions.c
++++ b/net/openvswitch/actions.c
+@@ -166,7 +166,8 @@ static int push_mpls(struct sk_buff *skb, struct sw_flow_key *key,
+ int err;
+
+ err = skb_mpls_push(skb, mpls->mpls_lse, mpls->mpls_ethertype,
+- skb->mac_len);
++ skb->mac_len,
++ ovs_key_mac_proto(key) == MAC_PROTO_ETHERNET);
+ if (err)
+ return err;
+
+@@ -179,7 +180,8 @@ static int pop_mpls(struct sk_buff *skb, struct sw_flow_key *key,
+ {
+ int err;
+
+- err = skb_mpls_pop(skb, ethertype, skb->mac_len);
++ err = skb_mpls_pop(skb, ethertype, skb->mac_len,
++ ovs_key_mac_proto(key) == MAC_PROTO_ETHERNET);
+ if (err)
+ return err;
+
+diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c
+index 05249eb45082..283e8f9a5fd2 100644
+--- a/net/openvswitch/conntrack.c
++++ b/net/openvswitch/conntrack.c
+@@ -903,6 +903,17 @@ static int ovs_ct_nat(struct net *net, struct sw_flow_key *key,
+ }
+ err = ovs_ct_nat_execute(skb, ct, ctinfo, &info->range, maniptype);
+
++ if (err == NF_ACCEPT &&
++ ct->status & IPS_SRC_NAT && ct->status & IPS_DST_NAT) {
++ if (maniptype == NF_NAT_MANIP_SRC)
++ maniptype = NF_NAT_MANIP_DST;
++ else
++ maniptype = NF_NAT_MANIP_SRC;
++
++ err = ovs_ct_nat_execute(skb, ct, ctinfo, &info->range,
++ maniptype);
++ }
++
+ /* Mark NAT done if successful and update the flow key. */
+ if (err == NF_ACCEPT)
+ ovs_nat_update_key(key, skb, maniptype);
+diff --git a/net/sched/act_mpls.c b/net/sched/act_mpls.c
+index 4cf6c553bb0b..db570d2bd0e0 100644
+--- a/net/sched/act_mpls.c
++++ b/net/sched/act_mpls.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ /* Copyright (C) 2019 Netronome Systems, Inc. */
+
++#include <linux/if_arp.h>
+ #include <linux/init.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+@@ -76,12 +77,14 @@ static int tcf_mpls_act(struct sk_buff *skb, const struct tc_action *a,
+
+ switch (p->tcfm_action) {
+ case TCA_MPLS_ACT_POP:
+- if (skb_mpls_pop(skb, p->tcfm_proto, mac_len))
++ if (skb_mpls_pop(skb, p->tcfm_proto, mac_len,
++ skb->dev && skb->dev->type == ARPHRD_ETHER))
+ goto drop;
+ break;
+ case TCA_MPLS_ACT_PUSH:
+ new_lse = tcf_mpls_get_lse(NULL, p, !eth_p_mpls(skb->protocol));
+- if (skb_mpls_push(skb, new_lse, p->tcfm_proto, mac_len))
++ if (skb_mpls_push(skb, new_lse, p->tcfm_proto, mac_len,
++ skb->dev && skb->dev->type == ARPHRD_ETHER))
+ goto drop;
+ break;
+ case TCA_MPLS_ACT_MODIFY:
+diff --git a/net/sched/sch_mq.c b/net/sched/sch_mq.c
+index 278c0b2dc523..e79f1afe0cfd 100644
+--- a/net/sched/sch_mq.c
++++ b/net/sched/sch_mq.c
+@@ -153,6 +153,7 @@ static int mq_dump(struct Qdisc *sch, struct sk_buff *skb)
+ __gnet_stats_copy_queue(&sch->qstats,
+ qdisc->cpu_qstats,
+ &qdisc->qstats, qlen);
++ sch->q.qlen += qlen;
+ } else {
+ sch->q.qlen += qdisc->q.qlen;
+ sch->bstats.bytes += qdisc->bstats.bytes;
+diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
+index 0d0113a24962..8766ab5b8788 100644
+--- a/net/sched/sch_mqprio.c
++++ b/net/sched/sch_mqprio.c
+@@ -411,6 +411,7 @@ static int mqprio_dump(struct Qdisc *sch, struct sk_buff *skb)
+ __gnet_stats_copy_queue(&sch->qstats,
+ qdisc->cpu_qstats,
+ &qdisc->qstats, qlen);
++ sch->q.qlen += qlen;
+ } else {
+ sch->q.qlen += qdisc->q.qlen;
+ sch->bstats.bytes += qdisc->bstats.bytes;
+@@ -433,7 +434,7 @@ static int mqprio_dump(struct Qdisc *sch, struct sk_buff *skb)
+ opt.offset[tc] = dev->tc_to_txq[tc].offset;
+ }
+
+- if (nla_put(skb, TCA_OPTIONS, NLA_ALIGN(sizeof(opt)), &opt))
++ if (nla_put(skb, TCA_OPTIONS, sizeof(opt), &opt))
+ goto nla_put_failure;
+
+ if ((priv->flags & TC_MQPRIO_F_MODE) &&
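
The sch_mq and sch_mqprio hunks fix the same dump bug: on the per-CPU
statistics path, a queue length was summed for __gnet_stats_copy_queue() but
never added to the parent qdisc's q.qlen, so the aggregate always read zero
(the related NLA_ALIGN change stops nla_put() from copying padding bytes past
the end of opt). The missing aggregation in a userspace model (hypothetical,
not kernel code):

#include <stdio.h>

#define NR_QUEUES 4

struct q { unsigned int qlen; };

int main(void)
{
	struct q queues[NR_QUEUES] = { {1}, {0}, {3}, {2} };
	unsigned int total = 0;

	for (int i = 0; i < NR_QUEUES; i++) {
		unsigned int qlen = queues[i].qlen;

		/* per-queue stats were copied here... */
		total += qlen;   /* ...but this accumulation was missing */
	}
	printf("q.qlen = %u\n", total);  /* 6, not 0 */
	return 0;
}
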
+diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
+index e5f2fc726a98..e9c2b4dfb542 100644
+--- a/net/sctp/ipv6.c
++++ b/net/sctp/ipv6.c
+@@ -275,7 +275,7 @@ static void sctp_v6_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
+ final_p = fl6_update_dst(fl6, rcu_dereference(np->opt), &final);
+ rcu_read_unlock();
+
+- dst = ip6_dst_lookup_flow(sk, fl6, final_p);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, fl6, final_p);
+ if (!asoc || saddr)
+ goto out;
+
+@@ -328,7 +328,7 @@ static void sctp_v6_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
+ fl6->saddr = laddr->a.v6.sin6_addr;
+ fl6->fl6_sport = laddr->a.v6.sin6_port;
+ final_p = fl6_update_dst(fl6, rcu_dereference(np->opt), &final);
+- bdst = ip6_dst_lookup_flow(sk, fl6, final_p);
++ bdst = ip6_dst_lookup_flow(sock_net(sk), sk, fl6, final_p);
+
+ if (IS_ERR(bdst))
+ continue;
+diff --git a/net/tipc/core.c b/net/tipc/core.c
+index c8370722f0bb..10d5b888a9c1 100644
+--- a/net/tipc/core.c
++++ b/net/tipc/core.c
+@@ -122,14 +122,6 @@ static int __init tipc_init(void)
+ sysctl_tipc_rmem[1] = RCVBUF_DEF;
+ sysctl_tipc_rmem[2] = RCVBUF_MAX;
+
+- err = tipc_netlink_start();
+- if (err)
+- goto out_netlink;
+-
+- err = tipc_netlink_compat_start();
+- if (err)
+- goto out_netlink_compat;
+-
+ err = tipc_register_sysctl();
+ if (err)
+ goto out_sysctl;
+@@ -150,8 +142,21 @@ static int __init tipc_init(void)
+ if (err)
+ goto out_bearer;
+
++ err = tipc_netlink_start();
++ if (err)
++ goto out_netlink;
++
++ err = tipc_netlink_compat_start();
++ if (err)
++ goto out_netlink_compat;
++
+ pr_info("Started in single node mode\n");
+ return 0;
++
++out_netlink_compat:
++ tipc_netlink_stop();
++out_netlink:
++ tipc_bearer_cleanup();
+ out_bearer:
+ unregister_pernet_device(&tipc_topsrv_net_ops);
+ out_pernet_topsrv:
+@@ -161,22 +166,18 @@ out_socket:
+ out_pernet:
+ tipc_unregister_sysctl();
+ out_sysctl:
+- tipc_netlink_compat_stop();
+-out_netlink_compat:
+- tipc_netlink_stop();
+-out_netlink:
+ pr_err("Unable to start in single node mode\n");
+ return err;
+ }
+
+ static void __exit tipc_exit(void)
+ {
++ tipc_netlink_compat_stop();
++ tipc_netlink_stop();
+ tipc_bearer_cleanup();
+ unregister_pernet_device(&tipc_topsrv_net_ops);
+ tipc_socket_stop();
+ unregister_pernet_device(&tipc_net_ops);
+- tipc_netlink_stop();
+- tipc_netlink_compat_stop();
+ tipc_unregister_sysctl();
+
+ pr_info("Deactivated\n");
+diff --git a/net/tipc/udp_media.c b/net/tipc/udp_media.c
+index 287df68721df..186c78431217 100644
+--- a/net/tipc/udp_media.c
++++ b/net/tipc/udp_media.c
+@@ -195,10 +195,13 @@ static int tipc_udp_xmit(struct net *net, struct sk_buff *skb,
+ .saddr = src->ipv6,
+ .flowi6_proto = IPPROTO_UDP
+ };
+- err = ipv6_stub->ipv6_dst_lookup(net, ub->ubsock->sk,
+- &ndst, &fl6);
+- if (err)
++ ndst = ipv6_stub->ipv6_dst_lookup_flow(net,
++ ub->ubsock->sk,
++ &fl6, NULL);
++ if (IS_ERR(ndst)) {
++ err = PTR_ERR(ndst);
+ goto tx_error;
++ }
+ dst_cache_set_ip6(cache, ndst, &fl6.saddr);
+ }
+ ttl = ip6_dst_hoplimit(ndst);
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index 6b0c9b798d9c..d12793e541a4 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -385,7 +385,7 @@ static int tls_push_data(struct sock *sk,
+
+ if (flags &
+ ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL | MSG_SENDPAGE_NOTLAST))
+- return -ENOTSUPP;
++ return -EOPNOTSUPP;
+
+ if (sk->sk_err)
+ return -sk->sk_err;
+@@ -519,7 +519,7 @@ int tls_device_sendpage(struct sock *sk, struct page *page,
+ lock_sock(sk);
+
+ if (flags & MSG_OOB) {
+- rc = -ENOTSUPP;
++ rc = -EOPNOTSUPP;
+ goto out;
+ }
+
+@@ -961,7 +961,7 @@ int tls_set_device_offload(struct sock *sk, struct tls_context *ctx)
+ }
+
+ if (!(netdev->features & NETIF_F_HW_TLS_TX)) {
+- rc = -ENOTSUPP;
++ rc = -EOPNOTSUPP;
+ goto release_netdev;
+ }
+
+@@ -1034,7 +1034,7 @@ int tls_set_device_offload_rx(struct sock *sk, struct tls_context *ctx)
+ }
+
+ if (!(netdev->features & NETIF_F_HW_TLS_RX)) {
+- rc = -ENOTSUPP;
++ rc = -EOPNOTSUPP;
+ goto release_netdev;
+ }
+
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index c7ecd053d4e7..07476df4b13f 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -473,7 +473,7 @@ static int do_tls_setsockopt_conf(struct sock *sk, char __user *optval,
+ /* check version */
+ if (crypto_info->version != TLS_1_2_VERSION &&
+ crypto_info->version != TLS_1_3_VERSION) {
+- rc = -ENOTSUPP;
++ rc = -EINVAL;
+ goto err_crypto_info;
+ }
+
+@@ -782,7 +782,7 @@ static int tls_init(struct sock *sk)
+ * share the ulp context.
+ */
+ if (sk->sk_state != TCP_ESTABLISHED)
+- return -ENOTSUPP;
++ return -ENOTCONN;
+
+ tls_build_proto(sk);
+
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 45e993c4e8f6..8e031926efb4 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -900,7 +900,7 @@ int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
+ int ret = 0;
+
+ if (msg->msg_flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL))
+- return -ENOTSUPP;
++ return -EOPNOTSUPP;
+
+ mutex_lock(&tls_ctx->tx_lock);
+ lock_sock(sk);
+@@ -1215,7 +1215,7 @@ int tls_sw_sendpage_locked(struct sock *sk, struct page *page,
+ if (flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL |
+ MSG_SENDPAGE_NOTLAST | MSG_SENDPAGE_NOPOLICY |
+ MSG_NO_SHARED_FRAGS))
+- return -ENOTSUPP;
++ return -EOPNOTSUPP;
+
+ return tls_sw_do_sendpage(sk, page, offset, size, flags);
+ }
+@@ -1228,7 +1228,7 @@ int tls_sw_sendpage(struct sock *sk, struct page *page,
+
+ if (flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL |
+ MSG_SENDPAGE_NOTLAST | MSG_SENDPAGE_NOPOLICY))
+- return -ENOTSUPP;
++ return -EOPNOTSUPP;
+
+ mutex_lock(&tls_ctx->tx_lock);
+ lock_sock(sk);
+@@ -1928,7 +1928,7 @@ ssize_t tls_sw_splice_read(struct socket *sock, loff_t *ppos,
+
+ /* splice does not support reading control messages */
+ if (ctx->control != TLS_RECORD_TYPE_DATA) {
+- err = -ENOTSUPP;
++ err = -EINVAL;
+ goto splice_read_end;
+ }
+
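
Background for the errno sweep in the TLS hunks: ENOTSUPP (524) is a
kernel-internal code with no libc mapping, so userspace saw "Unknown error
524"; EOPNOTSUPP, EINVAL and ENOTCONN are real POSIX errnos with sensible
strerror() text, which is also why the selftest below can drop its private
ENOTSUPP fallback define. A quick userspace check:

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	printf("EOPNOTSUPP=%d: %s\n", EOPNOTSUPP, strerror(EOPNOTSUPP));
	printf("ENOTCONN=%d: %s\n", ENOTCONN, strerror(ENOTCONN));
	printf("524: %s\n", strerror(524));  /* "Unknown error 524" on glibc */
	return 0;
}
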
+diff --git a/tools/testing/selftests/net/tls.c b/tools/testing/selftests/net/tls.c
+index 46abcae47dee..13e5ef615026 100644
+--- a/tools/testing/selftests/net/tls.c
++++ b/tools/testing/selftests/net/tls.c
+@@ -25,10 +25,6 @@
+ #define TLS_PAYLOAD_MAX_LEN 16384
+ #define SOL_TLS 282
+
+-#ifndef ENOTSUPP
+-#define ENOTSUPP 524
+-#endif
+-
+ FIXTURE(tls_basic)
+ {
+ int fd, cfd;
+@@ -1205,11 +1201,11 @@ TEST(non_established) {
+ /* TLS ULP not supported */
+ if (errno == ENOENT)
+ return;
+- EXPECT_EQ(errno, ENOTSUPP);
++ EXPECT_EQ(errno, ENOTCONN);
+
+ ret = setsockopt(sfd, IPPROTO_TCP, TCP_ULP, "tls", sizeof("tls"));
+ EXPECT_EQ(ret, -1);
+- EXPECT_EQ(errno, ENOTSUPP);
++ EXPECT_EQ(errno, ENOTCONN);
+
+ ret = getsockname(sfd, &addr, &len);
+ ASSERT_EQ(ret, 0);