public inbox for gentoo-commits@lists.gentoo.org
* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-05-06 11:25 Mike Pagano
From: Mike Pagano @ 2019-05-06 11:25 UTC
  To: gentoo-commits

commit:     86c4421eb0e144943cf18a8111fc0ef63e506478
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon May  6 11:25:17 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon May  6 11:25:17 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=86c4421e

Removal of incompatible cpu optimization patches

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |   8 -
 ...-additional-cpu-optimizations-for-gcc-4.9.patch | 545 --------------------
 5011_enable-cpu-optimizations-for-gcc8.patch       | 569 ---------------------
 3 files changed, 1122 deletions(-)
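
Before syncing past a removal like this, it can be worth dry-running the affected patches against a local kernel tree. A minimal sketch, assuming the kernel tree as the working directory and an illustrative path to the 5010 patch:

    # check whether the patch would still apply cleanly (no files are changed):
    patch -p1 --dry-run < /path/to/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
    # if it was already applied, check that it reverts cleanly:
    patch -p1 -R --dry-run < /path/to/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch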

diff --git a/0000_README b/0000_README
index cfba4e3..90e376f 100644
--- a/0000_README
+++ b/0000_README
@@ -62,11 +62,3 @@ Desc:   This hid-apple patch enables swapping of the FN and left Control keys an
 Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.
-
-Patch:  5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
-From:   https://github.com/graysky2/kernel_gcc_patch/
-Desc:   Kernel patch enables gcc >= v4.9 optimizations for additional CPUs.
-
-Patch:  5011_enable-cpu-optimizations-for-gcc8.patch
-From:   https://github.com/graysky2/kernel_gcc_patch/
-Desc:   Kernel patch for gcc >= v8 enables kernel >= v4.13 optimizations for additional CPUs.

diff --git a/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch b/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
deleted file mode 100644
index a8aa759..0000000
--- a/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
+++ /dev/null
@@ -1,545 +0,0 @@
-WARNING
-This patch works with gcc versions 4.9+ and with kernel version 4.13+ and should
-NOT be applied when compiling on older versions of gcc due to key name changes
-of the march flags introduced with the version 4.9 release of gcc.[1]
-
-Use the older version of this patch hosted on the same github for older
-versions of gcc.
-
-FEATURES
-This patch adds additional CPU options to the Linux kernel accessible under:
- Processor type and features  --->
-  Processor family --->
-
-The expanded microarchitectures include:
-* AMD Improved K8-family
-* AMD K10-family
-* AMD Family 10h (Barcelona)
-* AMD Family 14h (Bobcat)
-* AMD Family 16h (Jaguar)
-* AMD Family 15h (Bulldozer)
-* AMD Family 15h (Piledriver)
-* AMD Family 15h (Steamroller)
-* AMD Family 15h (Excavator)
-* AMD Family 17h (Zen)
-* Intel Silvermont low-power processors
-* Intel 1st Gen Core i3/i5/i7 (Nehalem)
-* Intel 1.5 Gen Core i3/i5/i7 (Westmere)
-* Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
-* Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
-* Intel 4th Gen Core i3/i5/i7 (Haswell)
-* Intel 5th Gen Core i3/i5/i7 (Broadwell)
-* Intel 6th Gen Core i3/i5/i7 (Skylake)
-* Intel 6th Gen Core i7/i9 (Skylake X)
-
-It also offers to compile passing the 'native' option which, "selects the CPU
-to generate code for at compilation time by determining the processor type of
-the compiling machine. Using -march=native enables all instruction subsets
-supported by the local machine and will produce code optimized for the local
-machine under the constraints of the selected instruction set."[3]
-
-MINOR NOTES
-This patch also changes 'atom' to 'bonnell' in accordance with the gcc v4.9
-changes. Note that upstream is using the deprecated 'march=atom' flags when I
-believe it should use the newer 'march=bonnell' flag for atom processors.[2]
-
-It is not recommended to compile on Atom-CPUs with the 'native' option.[4] The
-recommendation is to use the 'atom' option instead.
-
-BENEFITS
-Small but real speed increases are measurable using a make endpoint comparing
-a generic kernel to one built with one of the respective microarchs.
-
-See the following experimental evidence supporting this statement:
-https://github.com/graysky2/kernel_gcc_patch
-
-REQUIREMENTS
-linux version >=4.13
-gcc version >=4.9
-
-ACKNOWLEDGMENTS
-This patch builds on the seminal work by Jeroen.[5]
-
-REFERENCES
-1. https://gcc.gnu.org/gcc-4.9/changes.html
-2. https://bugzilla.kernel.org/show_bug.cgi?id=77461
-3. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html
-4. https://github.com/graysky2/kernel_gcc_patch/issues/15
-5. http://www.linuxforge.net/docs/linux/linux-gcc.php
-
---- a/arch/x86/include/asm/module.h	2018-01-28 16:20:33.000000000 -0500
-+++ b/arch/x86/include/asm/module.h	2018-03-10 06:42:38.688317317 -0500
-@@ -25,6 +25,26 @@ struct mod_arch_specific {
- #define MODULE_PROC_FAMILY "586MMX "
- #elif defined CONFIG_MCORE2
- #define MODULE_PROC_FAMILY "CORE2 "
-+#elif defined CONFIG_MNATIVE
-+#define MODULE_PROC_FAMILY "NATIVE "
-+#elif defined CONFIG_MNEHALEM
-+#define MODULE_PROC_FAMILY "NEHALEM "
-+#elif defined CONFIG_MWESTMERE
-+#define MODULE_PROC_FAMILY "WESTMERE "
-+#elif defined CONFIG_MSILVERMONT
-+#define MODULE_PROC_FAMILY "SILVERMONT "
-+#elif defined CONFIG_MSANDYBRIDGE
-+#define MODULE_PROC_FAMILY "SANDYBRIDGE "
-+#elif defined CONFIG_MIVYBRIDGE
-+#define MODULE_PROC_FAMILY "IVYBRIDGE "
-+#elif defined CONFIG_MHASWELL
-+#define MODULE_PROC_FAMILY "HASWELL "
-+#elif defined CONFIG_MBROADWELL
-+#define MODULE_PROC_FAMILY "BROADWELL "
-+#elif defined CONFIG_MSKYLAKE
-+#define MODULE_PROC_FAMILY "SKYLAKE "
-+#elif defined CONFIG_MSKYLAKEX
-+#define MODULE_PROC_FAMILY "SKYLAKEX "
- #elif defined CONFIG_MATOM
- #define MODULE_PROC_FAMILY "ATOM "
- #elif defined CONFIG_M686
-@@ -43,6 +63,26 @@ struct mod_arch_specific {
- #define MODULE_PROC_FAMILY "K7 "
- #elif defined CONFIG_MK8
- #define MODULE_PROC_FAMILY "K8 "
-+#elif defined CONFIG_MK8SSE3
-+#define MODULE_PROC_FAMILY "K8SSE3 "
-+#elif defined CONFIG_MK10
-+#define MODULE_PROC_FAMILY "K10 "
-+#elif defined CONFIG_MBARCELONA
-+#define MODULE_PROC_FAMILY "BARCELONA "
-+#elif defined CONFIG_MBOBCAT
-+#define MODULE_PROC_FAMILY "BOBCAT "
-+#elif defined CONFIG_MBULLDOZER
-+#define MODULE_PROC_FAMILY "BULLDOZER "
-+#elif defined CONFIG_MPILEDRIVER
-+#define MODULE_PROC_FAMILY "PILEDRIVER "
-+#elif defined CONFIG_MSTEAMROLLER
-+#define MODULE_PROC_FAMILY "STEAMROLLER "
-+#elif defined CONFIG_MJAGUAR
-+#define MODULE_PROC_FAMILY "JAGUAR "
-+#elif defined CONFIG_MEXCAVATOR
-+#define MODULE_PROC_FAMILY "EXCAVATOR "
-+#elif defined CONFIG_MZEN
-+#define MODULE_PROC_FAMILY "ZEN "
- #elif defined CONFIG_MELAN
- #define MODULE_PROC_FAMILY "ELAN "
- #elif defined CONFIG_MCRUSOE
---- a/arch/x86/Kconfig.cpu	2018-01-28 16:20:33.000000000 -0500
-+++ b/arch/x86/Kconfig.cpu	2018-03-10 06:45:50.244371799 -0500
-@@ -116,6 +116,7 @@ config MPENTIUMM
- config MPENTIUM4
- 	bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
- 	depends on X86_32
-+	select X86_P6_NOP
- 	---help---
- 	  Select this for Intel Pentium 4 chips.  This includes the
- 	  Pentium 4, Pentium D, P4-based Celeron and Xeon, and
-@@ -148,9 +149,8 @@ config MPENTIUM4
- 		-Paxville
- 		-Dempsey
- 
--
- config MK6
--	bool "K6/K6-II/K6-III"
-+	bool "AMD K6/K6-II/K6-III"
- 	depends on X86_32
- 	---help---
- 	  Select this for an AMD K6-family processor.  Enables use of
-@@ -158,7 +158,7 @@ config MK6
- 	  flags to GCC.
- 
- config MK7
--	bool "Athlon/Duron/K7"
-+	bool "AMD Athlon/Duron/K7"
- 	depends on X86_32
- 	---help---
- 	  Select this for an AMD Athlon K7-family processor.  Enables use of
-@@ -166,12 +166,83 @@ config MK7
- 	  flags to GCC.
- 
- config MK8
--	bool "Opteron/Athlon64/Hammer/K8"
-+	bool "AMD Opteron/Athlon64/Hammer/K8"
- 	---help---
- 	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
- 	  Enables use of some extended instructions, and passes appropriate
- 	  optimization flags to GCC.
- 
-+config MK8SSE3
-+	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
-+	---help---
-+	  Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
-+	  Enables use of some extended instructions, and passes appropriate
-+	  optimization flags to GCC.
-+
-+config MK10
-+	bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
-+	---help---
-+	  Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
-+		Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
-+	  Enables use of some extended instructions, and passes appropriate
-+	  optimization flags to GCC.
-+
-+config MBARCELONA
-+	bool "AMD Barcelona"
-+	---help---
-+	  Select this for AMD Family 10h Barcelona processors.
-+
-+	  Enables -march=barcelona
-+
-+config MBOBCAT
-+	bool "AMD Bobcat"
-+	---help---
-+	  Select this for AMD Family 14h Bobcat processors.
-+
-+	  Enables -march=btver1
-+
-+config MJAGUAR
-+	bool "AMD Jaguar"
-+	---help---
-+	  Select this for AMD Family 16h Jaguar processors.
-+
-+	  Enables -march=btver2
-+
-+config MBULLDOZER
-+	bool "AMD Bulldozer"
-+	---help---
-+	  Select this for AMD Family 15h Bulldozer processors.
-+
-+	  Enables -march=bdver1
-+
-+config MPILEDRIVER
-+	bool "AMD Piledriver"
-+	---help---
-+	  Select this for AMD Family 15h Piledriver processors.
-+
-+	  Enables -march=bdver2
-+
-+config MSTEAMROLLER
-+	bool "AMD Steamroller"
-+	---help---
-+	  Select this for AMD Family 15h Steamroller processors.
-+
-+	  Enables -march=bdver3
-+
-+config MEXCAVATOR
-+	bool "AMD Excavator"
-+	---help---
-+	  Select this for AMD Family 15h Excavator processors.
-+
-+	  Enables -march=bdver4
-+
-+config MZEN
-+	bool "AMD Zen"
-+	---help---
-+	  Select this for AMD Family 17h Zen processors.
-+
-+	  Enables -march=znver1
-+
- config MCRUSOE
- 	bool "Crusoe"
- 	depends on X86_32
-@@ -253,6 +324,7 @@ config MVIAC7
- 
- config MPSC
- 	bool "Intel P4 / older Netburst based Xeon"
-+	select X86_P6_NOP
- 	depends on X86_64
- 	---help---
- 	  Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
-@@ -262,8 +334,19 @@ config MPSC
- 	  using the cpu family field
- 	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
- 
-+config MATOM
-+	bool "Intel Atom"
-+	select X86_P6_NOP
-+	---help---
-+
-+	  Select this for the Intel Atom platform. Intel Atom CPUs have an
-+	  in-order pipelining architecture and thus can benefit from
-+	  accordingly optimized code. Use a recent GCC with specific Atom
-+	  support in order to fully benefit from selecting this option.
-+
- config MCORE2
--	bool "Core 2/newer Xeon"
-+	bool "Intel Core 2"
-+	select X86_P6_NOP
- 	---help---
- 
- 	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
-@@ -271,14 +354,88 @@ config MCORE2
- 	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
- 	  (not a typo)
- 
--config MATOM
--	bool "Intel Atom"
-+	  Enables -march=core2
-+
-+config MNEHALEM
-+	bool "Intel Nehalem"
-+	select X86_P6_NOP
- 	---help---
- 
--	  Select this for the Intel Atom platform. Intel Atom CPUs have an
--	  in-order pipelining architecture and thus can benefit from
--	  accordingly optimized code. Use a recent GCC with specific Atom
--	  support in order to fully benefit from selecting this option.
-+	  Select this for 1st Gen Core processors in the Nehalem family.
-+
-+	  Enables -march=nehalem
-+
-+config MWESTMERE
-+	bool "Intel Westmere"
-+	select X86_P6_NOP
-+	---help---
-+
-+	  Select this for the Intel Westmere formerly Nehalem-C family.
-+
-+	  Enables -march=westmere
-+
-+config MSILVERMONT
-+	bool "Intel Silvermont"
-+	select X86_P6_NOP
-+	---help---
-+
-+	  Select this for the Intel Silvermont platform.
-+
-+	  Enables -march=silvermont
-+
-+config MSANDYBRIDGE
-+	bool "Intel Sandy Bridge"
-+	select X86_P6_NOP
-+	---help---
-+
-+	  Select this for 2nd Gen Core processors in the Sandy Bridge family.
-+
-+	  Enables -march=sandybridge
-+
-+config MIVYBRIDGE
-+	bool "Intel Ivy Bridge"
-+	select X86_P6_NOP
-+	---help---
-+
-+	  Select this for 3rd Gen Core processors in the Ivy Bridge family.
-+
-+	  Enables -march=ivybridge
-+
-+config MHASWELL
-+	bool "Intel Haswell"
-+	select X86_P6_NOP
-+	---help---
-+
-+	  Select this for 4th Gen Core processors in the Haswell family.
-+
-+	  Enables -march=haswell
-+
-+config MBROADWELL
-+	bool "Intel Broadwell"
-+	select X86_P6_NOP
-+	---help---
-+
-+	  Select this for 5th Gen Core processors in the Broadwell family.
-+
-+	  Enables -march=broadwell
-+
-+config MSKYLAKE
-+	bool "Intel Skylake"
-+	select X86_P6_NOP
-+	---help---
-+
-+	  Select this for 6th Gen Core processors in the Skylake family.
-+
-+	  Enables -march=skylake
-+
-+config MSKYLAKEX
-+	bool "Intel Skylake X"
-+	select X86_P6_NOP
-+	---help---
-+
-+	  Select this for 6th Gen Core processors in the Skylake X family.
-+
-+	  Enables -march=skylake-avx512
- 
- config GENERIC_CPU
- 	bool "Generic-x86-64"
-@@ -287,6 +444,19 @@ config GENERIC_CPU
- 	  Generic x86-64 CPU.
- 	  Run equally well on all x86-64 CPUs.
- 
-+config MNATIVE
-+ bool "Native optimizations autodetected by GCC"
-+ ---help---
-+
-+   GCC 4.2 and above support -march=native, which automatically detects
-+   the optimum settings to use based on your processor. -march=native
-+   also detects and applies additional settings beyond -march specific
-+   to your CPU, (eg. -msse4). Unless you have a specific reason not to
-+   (e.g. distcc cross-compiling), you should probably be using
-+   -march=native rather than anything listed below.
-+
-+   Enables -march=native
-+
- endchoice
- 
- config X86_GENERIC
-@@ -311,7 +481,7 @@ config X86_INTERNODE_CACHE_SHIFT
- config X86_L1_CACHE_SHIFT
- 	int
- 	default "7" if MPENTIUM4 || MPSC
--	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
-+	default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
- 	default "4" if MELAN || M486 || MGEODEGX1
- 	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
- 
-@@ -342,35 +512,36 @@ config X86_ALIGNMENT_16
- 
- config X86_INTEL_USERCOPY
- 	def_bool y
--	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
-+	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MNATIVE
- 
- config X86_USE_PPRO_CHECKSUM
- 	def_bool y
--	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
-+	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MATOM || MNATIVE
- 
- config X86_USE_3DNOW
- 	def_bool y
- 	depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
- 
--#
--# P6_NOPs are a relatively minor optimization that require a family >=
--# 6 processor, except that it is broken on certain VIA chips.
--# Furthermore, AMD chips prefer a totally different sequence of NOPs
--# (which work on all CPUs).  In addition, it looks like Virtual PC
--# does not understand them.
--#
--# As a result, disallow these if we're not compiling for X86_64 (these
--# NOPs do work on all x86-64 capable chips); the list of processors in
--# the right-hand clause are the cores that benefit from this optimization.
--#
- config X86_P6_NOP
--	def_bool y
--	depends on X86_64
--	depends on (MCORE2 || MPENTIUM4 || MPSC)
-+	default n
-+	bool "Support for P6_NOPs on Intel chips"
-+	depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT  || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MNATIVE)
-+	---help---
-+	P6_NOPs are a relatively minor optimization that require a family >=
-+	6 processor, except that it is broken on certain VIA chips.
-+	Furthermore, AMD chips prefer a totally different sequence of NOPs
-+	(which work on all CPUs).  In addition, it looks like Virtual PC
-+	does not understand them.
-+
-+	As a result, disallow these if we're not compiling for X86_64 (these
-+	NOPs do work on all x86-64 capable chips); the list of processors in
-+	the right-hand clause are the cores that benefit from this optimization.
-+
-+	Say Y if you have Intel CPU newer than Pentium Pro, N otherwise.
- 
- config X86_TSC
- 	def_bool y
--	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
-+	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MNATIVE || MATOM) || X86_64
- 
- config X86_CMPXCHG64
- 	def_bool y
-@@ -380,7 +551,7 @@ config X86_CMPXCHG64
- # generates cmov.
- config X86_CMOV
- 	def_bool y
--	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
-+	depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
- 
- config X86_MINIMUM_CPU_FAMILY
- 	int
---- a/arch/x86/Makefile	2018-01-28 16:20:33.000000000 -0500
-+++ b/arch/x86/Makefile	2018-03-10 06:47:00.284240139 -0500
-@@ -124,13 +124,42 @@ else
- 	KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
- 
-         # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
-+        cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
-         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
-+        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
-+        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
-+        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
-+        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
-+        cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
-+        cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
-+        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
-+        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
-+        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
-+        cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
-         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
- 
-         cflags-$(CONFIG_MCORE2) += \
--                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
--	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
--		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
-+                $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
-+        cflags-$(CONFIG_MNEHALEM) += \
-+                $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
-+        cflags-$(CONFIG_MWESTMERE) += \
-+                $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
-+        cflags-$(CONFIG_MSILVERMONT) += \
-+                $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
-+        cflags-$(CONFIG_MSANDYBRIDGE) += \
-+                $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
-+        cflags-$(CONFIG_MIVYBRIDGE) += \
-+                $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
-+        cflags-$(CONFIG_MHASWELL) += \
-+                $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
-+        cflags-$(CONFIG_MBROADWELL) += \
-+                $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
-+        cflags-$(CONFIG_MSKYLAKE) += \
-+                $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake))
-+        cflags-$(CONFIG_MSKYLAKEX) += \
-+                $(call cc-option,-march=skylake-avx512,$(call cc-option,-mtune=skylake-avx512))
-+        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
-+                $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
-         cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
-         KBUILD_CFLAGS += $(cflags-y)
- 
---- a/arch/x86/Makefile_32.cpu	2018-01-28 16:20:33.000000000 -0500
-+++ b/arch/x86/Makefile_32.cpu	2018-03-10 06:47:46.025992644 -0500
-@@ -23,7 +23,18 @@ cflags-$(CONFIG_MK6)		+= -march=k6
- # Please note, that patches that add -march=athlon-xp and friends are pointless.
- # They make zero difference whatsosever to performance at this time.
- cflags-$(CONFIG_MK7)		+= -march=athlon
-+cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
- cflags-$(CONFIG_MK8)		+= $(call cc-option,-march=k8,-march=athlon)
-+cflags-$(CONFIG_MK8SSE3)		+= $(call cc-option,-march=k8-sse3,-march=athlon)
-+cflags-$(CONFIG_MK10)	+= $(call cc-option,-march=amdfam10,-march=athlon)
-+cflags-$(CONFIG_MBARCELONA)	+= $(call cc-option,-march=barcelona,-march=athlon)
-+cflags-$(CONFIG_MBOBCAT)	+= $(call cc-option,-march=btver1,-march=athlon)
-+cflags-$(CONFIG_MJAGUAR)	+= $(call cc-option,-march=btver2,-march=athlon)
-+cflags-$(CONFIG_MBULLDOZER)	+= $(call cc-option,-march=bdver1,-march=athlon)
-+cflags-$(CONFIG_MPILEDRIVER)	+= $(call cc-option,-march=bdver2,-march=athlon)
-+cflags-$(CONFIG_MSTEAMROLLER)	+= $(call cc-option,-march=bdver3,-march=athlon)
-+cflags-$(CONFIG_MEXCAVATOR)	+= $(call cc-option,-march=bdver4,-march=athlon)
-+cflags-$(CONFIG_MZEN)	+= $(call cc-option,-march=znver1,-march=athlon)
- cflags-$(CONFIG_MCRUSOE)	+= -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0
- cflags-$(CONFIG_MEFFICEON)	+= -march=i686 $(call tune,pentium3) -falign-functions=0 -falign-jumps=0 -falign-loops=0
- cflags-$(CONFIG_MWINCHIPC6)	+= $(call cc-option,-march=winchip-c6,-march=i586)
-@@ -32,8 +43,17 @@ cflags-$(CONFIG_MCYRIXIII)	+= $(call cc-
- cflags-$(CONFIG_MVIAC3_2)	+= $(call cc-option,-march=c3-2,-march=i686)
- cflags-$(CONFIG_MVIAC7)		+= -march=i686
- cflags-$(CONFIG_MCORE2)		+= -march=i686 $(call tune,core2)
--cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
--	$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
-+cflags-$(CONFIG_MNEHALEM)	+= -march=i686 $(call tune,nehalem)
-+cflags-$(CONFIG_MWESTMERE)	+= -march=i686 $(call tune,westmere)
-+cflags-$(CONFIG_MSILVERMONT)	+= -march=i686 $(call tune,silvermont)
-+cflags-$(CONFIG_MSANDYBRIDGE)	+= -march=i686 $(call tune,sandybridge)
-+cflags-$(CONFIG_MIVYBRIDGE)	+= -march=i686 $(call tune,ivybridge)
-+cflags-$(CONFIG_MHASWELL)	+= -march=i686 $(call tune,haswell)
-+cflags-$(CONFIG_MBROADWELL)	+= -march=i686 $(call tune,broadwell)
-+cflags-$(CONFIG_MSKYLAKE)	+= -march=i686 $(call tune,skylake)
-+cflags-$(CONFIG_MSKYLAKEX)	+= -march=i686 $(call tune,skylake-avx512)
-+cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
-+	$(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
- 
- # AMD Elan support
- cflags-$(CONFIG_MELAN)		+= -march=i486
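
The preamble of the patch above quotes GCC's description of -march=native and cautions against using it on Atom CPUs. To see what 'native' actually resolves to on a given build machine, GCC can be asked directly; a small illustrative sketch (output varies by CPU and GCC version):

    # effective -march/-mtune values chosen by -march=native:
    gcc -march=native -Q --help=target | grep -E -- '-march=|-mtune='
    # full cc1 invocation, spelling out every implied feature flag (-msse4.2, -mavx2, ...):
    gcc -march=native -E -v - </dev/null 2>&1 | grep cc1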

diff --git a/5011_enable-cpu-optimizations-for-gcc8.patch b/5011_enable-cpu-optimizations-for-gcc8.patch
deleted file mode 100644
index bfd2065..0000000
--- a/5011_enable-cpu-optimizations-for-gcc8.patch
+++ /dev/null
@@ -1,569 +0,0 @@
-WARNING
-This patch works with gcc versions 8.1+ and with kernel version 4.13+ and should
-NOT be applied when compiling on older versions of gcc due to key name changes
-of the march flags introduced with the version 4.9 release of gcc.[1]
-
-Use the older version of this patch hosted on the same github for older
-versions of gcc.
-
-FEATURES
-This patch adds additional CPU options to the Linux kernel accessible under:
- Processor type and features  --->
-  Processor family --->
-
-The expanded microarchitectures include:
-* AMD Improved K8-family
-* AMD K10-family
-* AMD Family 10h (Barcelona)
-* AMD Family 14h (Bobcat)
-* AMD Family 16h (Jaguar)
-* AMD Family 15h (Bulldozer)
-* AMD Family 15h (Piledriver)
-* AMD Family 15h (Steamroller)
-* AMD Family 15h (Excavator)
-* AMD Family 17h (Zen)
-* Intel Silvermont low-power processors
-* Intel 1st Gen Core i3/i5/i7 (Nehalem)
-* Intel 1.5 Gen Core i3/i5/i7 (Westmere)
-* Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
-* Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
-* Intel 4th Gen Core i3/i5/i7 (Haswell)
-* Intel 5th Gen Core i3/i5/i7 (Broadwell)
-* Intel 6th Gen Core i3/i5/i7 (Skylake)
-* Intel 6th Gen Core i7/i9 (Skylake X)
-* Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
-* Intel 8th Gen Core i7/i9 (Ice Lake)
-
-It also offers to compile passing the 'native' option which, "selects the CPU
-to generate code for at compilation time by determining the processor type of
-the compiling machine. Using -march=native enables all instruction subsets
-supported by the local machine and will produce code optimized for the local
-machine under the constraints of the selected instruction set."[3]
-
-MINOR NOTES
-This patch also changes 'atom' to 'bonnell' in accordance with the gcc v4.9
-changes. Note that upstream is using the deprecated 'march=atom' flags when I
-believe it should use the newer 'march=bonnell' flag for atom processors.[2]
-
-It is not recommended to compile on Atom-CPUs with the 'native' option.[4] The
-recommendation is to use the 'atom' option instead.
-
-BENEFITS
-Small but real speed increases are measurable using a make endpoint comparing
-a generic kernel to one built with one of the respective microarchs.
-
-See the following experimental evidence supporting this statement:
-https://github.com/graysky2/kernel_gcc_patch
-
-REQUIREMENTS
-linux version >=4.20
-gcc version >=8.1
-
-ACKNOWLEDGMENTS
-This patch builds on the seminal work by Jeroen.[5]
-
-REFERENCES
-1. https://gcc.gnu.org/gcc-4.9/changes.html
-2. https://bugzilla.kernel.org/show_bug.cgi?id=77461
-3. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html
-4. https://github.com/graysky2/kernel_gcc_patch/issues/15
-5. http://www.linuxforge.net/docs/linux/linux-gcc.php
-
---- a/arch/x86/Makefile_32.cpu	2019-02-22 09:22:03.426937735 -0500
-+++ b/arch/x86/Makefile_32.cpu	2019-02-22 09:37:58.680968580 -0500
-@@ -23,7 +23,18 @@ cflags-$(CONFIG_MK6)		+= -march=k6
- # Please note, that patches that add -march=athlon-xp and friends are pointless.
- # They make zero difference whatsosever to performance at this time.
- cflags-$(CONFIG_MK7)		+= -march=athlon
-+cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
- cflags-$(CONFIG_MK8)		+= $(call cc-option,-march=k8,-march=athlon)
-+cflags-$(CONFIG_MK8SSE3)		+= $(call cc-option,-march=k8-sse3,-march=athlon)
-+cflags-$(CONFIG_MK10)	+= $(call cc-option,-march=amdfam10,-march=athlon)
-+cflags-$(CONFIG_MBARCELONA)	+= $(call cc-option,-march=barcelona,-march=athlon)
-+cflags-$(CONFIG_MBOBCAT)	+= $(call cc-option,-march=btver1,-march=athlon)
-+cflags-$(CONFIG_MJAGUAR)	+= $(call cc-option,-march=btver2,-march=athlon)
-+cflags-$(CONFIG_MBULLDOZER)	+= $(call cc-option,-march=bdver1,-march=athlon)
-+cflags-$(CONFIG_MPILEDRIVER)	+= $(call cc-option,-march=bdver2,-march=athlon)
-+cflags-$(CONFIG_MSTEAMROLLER)	+= $(call cc-option,-march=bdver3,-march=athlon)
-+cflags-$(CONFIG_MEXCAVATOR)	+= $(call cc-option,-march=bdver4,-march=athlon)
-+cflags-$(CONFIG_MZEN)	+= $(call cc-option,-march=znver1,-march=athlon)
- cflags-$(CONFIG_MCRUSOE)	+= -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0
- cflags-$(CONFIG_MEFFICEON)	+= -march=i686 $(call tune,pentium3) -falign-functions=0 -falign-jumps=0 -falign-loops=0
- cflags-$(CONFIG_MWINCHIPC6)	+= $(call cc-option,-march=winchip-c6,-march=i586)
-@@ -32,9 +43,20 @@ cflags-$(CONFIG_MCYRIXIII)	+= $(call cc-
- cflags-$(CONFIG_MVIAC3_2)	+= $(call cc-option,-march=c3-2,-march=i686)
- cflags-$(CONFIG_MVIAC7)		+= -march=i686
- cflags-$(CONFIG_MCORE2)		+= -march=i686 $(call tune,core2)
--cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
--	$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
--
-+cflags-$(CONFIG_MNEHALEM)	+= -march=i686 $(call tune,nehalem)
-+cflags-$(CONFIG_MWESTMERE)	+= -march=i686 $(call tune,westmere)
-+cflags-$(CONFIG_MSILVERMONT)	+= -march=i686 $(call tune,silvermont)
-+cflags-$(CONFIG_MSANDYBRIDGE)	+= -march=i686 $(call tune,sandybridge)
-+cflags-$(CONFIG_MIVYBRIDGE)	+= -march=i686 $(call tune,ivybridge)
-+cflags-$(CONFIG_MHASWELL)	+= -march=i686 $(call tune,haswell)
-+cflags-$(CONFIG_MBROADWELL)	+= -march=i686 $(call tune,broadwell)
-+cflags-$(CONFIG_MSKYLAKE)	+= -march=i686 $(call tune,skylake)
-+cflags-$(CONFIG_MSKYLAKEX)	+= -march=i686 $(call tune,skylake-avx512)
-+cflags-$(CONFIG_MCANNONLAKE)	+= -march=i686 $(call tune,cannonlake)
-+cflags-$(CONFIG_MICELAKE)	+= -march=i686 $(call tune,icelake)
-+cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
-+	$(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
-+ 
- # AMD Elan support
- cflags-$(CONFIG_MELAN)		+= -march=i486
- 
---- a/arch/x86/Kconfig.cpu	2019-02-22 09:22:11.576958595 -0500
-+++ b/arch/x86/Kconfig.cpu	2019-02-22 09:34:16.490003911 -0500
-@@ -116,6 +116,7 @@ config MPENTIUMM
- config MPENTIUM4
- 	bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
- 	depends on X86_32
-+	select X86_P6_NOP
- 	---help---
- 	  Select this for Intel Pentium 4 chips.  This includes the
- 	  Pentium 4, Pentium D, P4-based Celeron and Xeon, and
-@@ -150,7 +151,7 @@ config MPENTIUM4
- 
- 
- config MK6
--	bool "K6/K6-II/K6-III"
-+	bool "AMD K6/K6-II/K6-III"
- 	depends on X86_32
- 	---help---
- 	  Select this for an AMD K6-family processor.  Enables use of
-@@ -158,7 +159,7 @@ config MK6
- 	  flags to GCC.
- 
- config MK7
--	bool "Athlon/Duron/K7"
-+	bool "AMD Athlon/Duron/K7"
- 	depends on X86_32
- 	---help---
- 	  Select this for an AMD Athlon K7-family processor.  Enables use of
-@@ -166,11 +167,81 @@ config MK7
- 	  flags to GCC.
- 
- config MK8
--	bool "Opteron/Athlon64/Hammer/K8"
-+	bool "AMD Opteron/Athlon64/Hammer/K8"
- 	---help---
- 	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
- 	  Enables use of some extended instructions, and passes appropriate
- 	  optimization flags to GCC.
-+config MK8SSE3
-+	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
-+	---help---
-+	  Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
-+	  Enables use of some extended instructions, and passes appropriate
-+	  optimization flags to GCC.
-+
-+config MK10
-+	bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
-+	---help---
-+	  Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
-+		Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
-+	  Enables use of some extended instructions, and passes appropriate
-+	  optimization flags to GCC.
-+
-+config MBARCELONA
-+	bool "AMD Barcelona"
-+	---help---
-+	  Select this for AMD Family 10h Barcelona processors.
-+
-+	  Enables -march=barcelona
-+
-+config MBOBCAT
-+	bool "AMD Bobcat"
-+	---help---
-+	  Select this for AMD Family 14h Bobcat processors.
-+
-+	  Enables -march=btver1
-+
-+config MJAGUAR
-+	bool "AMD Jaguar"
-+	---help---
-+	  Select this for AMD Family 16h Jaguar processors.
-+
-+	  Enables -march=btver2
-+
-+config MBULLDOZER
-+	bool "AMD Bulldozer"
-+	---help---
-+	  Select this for AMD Family 15h Bulldozer processors.
-+
-+	  Enables -march=bdver1
-+
-+config MPILEDRIVER
-+	bool "AMD Piledriver"
-+	---help---
-+	  Select this for AMD Family 15h Piledriver processors.
-+
-+	  Enables -march=bdver2
-+
-+config MSTEAMROLLER
-+	bool "AMD Steamroller"
-+	---help---
-+	  Select this for AMD Family 15h Steamroller processors.
-+
-+	  Enables -march=bdver3
-+
-+config MEXCAVATOR
-+	bool "AMD Excavator"
-+	---help---
-+	  Select this for AMD Family 15h Excavator processors.
-+
-+	  Enables -march=bdver4
-+
-+config MZEN
-+	bool "AMD Zen"
-+	---help---
-+	  Select this for AMD Family 17h Zen processors.
-+
-+	  Enables -march=znver1
- 
- config MCRUSOE
- 	bool "Crusoe"
-@@ -253,6 +324,7 @@ config MVIAC7
- 
- config MPSC
- 	bool "Intel P4 / older Netburst based Xeon"
-+	select X86_P6_NOP
- 	depends on X86_64
- 	---help---
- 	  Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
-@@ -262,23 +334,126 @@ config MPSC
- 	  using the cpu family field
- 	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
- 
-+config MATOM
-+	bool "Intel Atom"
-+	select X86_P6_NOP
-+	---help---
-+
-+	  Select this for the Intel Atom platform. Intel Atom CPUs have an
-+	  in-order pipelining architecture and thus can benefit from
-+	  accordingly optimized code. Use a recent GCC with specific Atom
-+	  support in order to fully benefit from selecting this option.
-+
- config MCORE2
--	bool "Core 2/newer Xeon"
-+	bool "Intel Core 2"
-+	select X86_P6_NOP
-+
- 	---help---
- 
- 	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
- 	  53xx) CPUs. You can distinguish newer from older Xeons by the CPU
- 	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
- 	  (not a typo)
-+	  Enables -march=core2
- 
--config MATOM
--	bool "Intel Atom"
-+config MNEHALEM
-+	bool "Intel Nehalem"
-+	select X86_P6_NOP
- 	---help---
- 
--	  Select this for the Intel Atom platform. Intel Atom CPUs have an
--	  in-order pipelining architecture and thus can benefit from
--	  accordingly optimized code. Use a recent GCC with specific Atom
--	  support in order to fully benefit from selecting this option.
-+	  Select this for 1st Gen Core processors in the Nehalem family.
-+
-+	  Enables -march=nehalem
-+
-+config MWESTMERE
-+	bool "Intel Westmere"
-+	select X86_P6_NOP
-+	---help---
-+
-+	  Select this for the Intel Westmere formerly Nehalem-C family.
-+
-+	  Enables -march=westmere
-+
-+config MSILVERMONT
-+	bool "Intel Silvermont"
-+	select X86_P6_NOP
-+	---help---
-+
-+	  Select this for the Intel Silvermont platform.
-+
-+	  Enables -march=silvermont
-+
-+config MSANDYBRIDGE
-+	bool "Intel Sandy Bridge"
-+	select X86_P6_NOP
-+	---help---
-+
-+	  Select this for 2nd Gen Core processors in the Sandy Bridge family.
-+
-+	  Enables -march=sandybridge
-+
-+config MIVYBRIDGE
-+	bool "Intel Ivy Bridge"
-+	select X86_P6_NOP
-+	---help---
-+
-+	  Select this for 3rd Gen Core processors in the Ivy Bridge family.
-+
-+	  Enables -march=ivybridge
-+
-+config MHASWELL
-+	bool "Intel Haswell"
-+	select X86_P6_NOP
-+	---help---
-+
-+	  Select this for 4th Gen Core processors in the Haswell family.
-+
-+	  Enables -march=haswell
-+
-+config MBROADWELL
-+	bool "Intel Broadwell"
-+	select X86_P6_NOP
-+	---help---
-+
-+	  Select this for 5th Gen Core processors in the Broadwell family.
-+
-+	  Enables -march=broadwell
-+
-+config MSKYLAKE
-+	bool "Intel Skylake"
-+	select X86_P6_NOP
-+	---help---
-+
-+	  Select this for 6th Gen Core processors in the Skylake family.
-+
-+	  Enables -march=skylake
-+
-+config MSKYLAKEX
-+	bool "Intel Skylake X"
-+	select X86_P6_NOP
-+	---help---
-+
-+	  Select this for 6th Gen Core processors in the Skylake X family.
-+
-+	  Enables -march=skylake-avx512
-+
-+config MCANNONLAKE
-+	bool "Intel Cannon Lake"
-+	select X86_P6_NOP
-+	---help---
-+
-+	  Select this for 8th Gen Core processors
-+
-+	  Enables -march=cannonlake
-+
-+config MICELAKE
-+	bool "Intel Ice Lake"
-+	select X86_P6_NOP
-+	---help---
-+
-+	  Select this for 8th Gen Core processors in the Ice Lake family.
-+
-+	  Enables -march=icelake
- 
- config GENERIC_CPU
- 	bool "Generic-x86-64"
-@@ -287,6 +462,19 @@ config GENERIC_CPU
- 	  Generic x86-64 CPU.
- 	  Run equally well on all x86-64 CPUs.
- 
-+config MNATIVE
-+ bool "Native optimizations autodetected by GCC"
-+ ---help---
-+
-+   GCC 4.2 and above support -march=native, which automatically detects
-+   the optimum settings to use based on your processor. -march=native
-+   also detects and applies additional settings beyond -march specific
-+   to your CPU, (eg. -msse4). Unless you have a specific reason not to
-+   (e.g. distcc cross-compiling), you should probably be using
-+   -march=native rather than anything listed below.
-+
-+   Enables -march=native
-+
- endchoice
- 
- config X86_GENERIC
-@@ -311,7 +499,7 @@ config X86_INTERNODE_CACHE_SHIFT
- config X86_L1_CACHE_SHIFT
- 	int
- 	default "7" if MPENTIUM4 || MPSC
--	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
-+	default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
- 	default "4" if MELAN || M486 || MGEODEGX1
- 	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
- 
-@@ -329,39 +517,40 @@ config X86_ALIGNMENT_16
- 
- config X86_INTEL_USERCOPY
- 	def_bool y
--	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
-+	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MNATIVE
- 
- config X86_USE_PPRO_CHECKSUM
- 	def_bool y
--	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
-+	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MATOM || MNATIVE
- 
- config X86_USE_3DNOW
- 	def_bool y
- 	depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
- 
--#
--# P6_NOPs are a relatively minor optimization that require a family >=
--# 6 processor, except that it is broken on certain VIA chips.
--# Furthermore, AMD chips prefer a totally different sequence of NOPs
--# (which work on all CPUs).  In addition, it looks like Virtual PC
--# does not understand them.
--#
--# As a result, disallow these if we're not compiling for X86_64 (these
--# NOPs do work on all x86-64 capable chips); the list of processors in
--# the right-hand clause are the cores that benefit from this optimization.
--#
- config X86_P6_NOP
--	def_bool y
--	depends on X86_64
--	depends on (MCORE2 || MPENTIUM4 || MPSC)
-+	default n
-+	bool "Support for P6_NOPs on Intel chips"
-+	depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT  || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MNATIVE)
-+	---help---
-+	P6_NOPs are a relatively minor optimization that require a family >=
-+	6 processor, except that it is broken on certain VIA chips.
-+	Furthermore, AMD chips prefer a totally different sequence of NOPs
-+	(which work on all CPUs).  In addition, it looks like Virtual PC
-+	does not understand them.
-+
-+	As a result, disallow these if we're not compiling for X86_64 (these
-+	NOPs do work on all x86-64 capable chips); the list of processors in
-+	the right-hand clause are the cores that benefit from this optimization.
- 
-+	Say Y if you have Intel CPU newer than Pentium Pro, N otherwise.
-+ 
- config X86_TSC
- 	def_bool y
--	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
-+	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MNATIVE || MATOM) || X86_64
- 
- config X86_CMPXCHG64
- 	def_bool y
--	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8
-+	depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
- 
- # this should be set for all -march=.. options where the compiler
- # generates cmov.
---- a/arch/x86/Makefile	2019-02-22 09:21:58.196924367 -0500
-+++ b/arch/x86/Makefile	2019-02-22 09:36:27.310577832 -0500
-@@ -118,13 +118,46 @@ else
- 	KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
- 
-         # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
-+		cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
-         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
-+        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
-+        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
-+        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
-+        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
-+        cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
-+        cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
-+        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
-+        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
-+        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
-+        cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
-         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
- 
-         cflags-$(CONFIG_MCORE2) += \
--                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
--	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
--		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
-+                $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
-+        cflags-$(CONFIG_MNEHALEM) += \
-+                $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
-+        cflags-$(CONFIG_MWESTMERE) += \
-+                $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
-+        cflags-$(CONFIG_MSILVERMONT) += \
-+                $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
-+        cflags-$(CONFIG_MSANDYBRIDGE) += \
-+                $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
-+        cflags-$(CONFIG_MIVYBRIDGE) += \
-+                $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
-+        cflags-$(CONFIG_MHASWELL) += \
-+                $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
-+        cflags-$(CONFIG_MBROADWELL) += \
-+                $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
-+        cflags-$(CONFIG_MSKYLAKE) += \
-+                $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake))
-+        cflags-$(CONFIG_MSKYLAKEX) += \
-+                $(call cc-option,-march=skylake-avx512,$(call cc-option,-mtune=skylake-avx512))
-+        cflags-$(CONFIG_MCANNONLAKE) += \
-+                $(call cc-option,-march=cannonlake,$(call cc-option,-mtune=cannonlake))
-+        cflags-$(CONFIG_MICELAKE) += \
-+                $(call cc-option,-march=icelake,$(call cc-option,-mtune=icelake))
-+        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
-+                $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
-         cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
-         KBUILD_CFLAGS += $(cflags-y)
- 
---- a/arch/x86/include/asm/module.h	2019-02-22 09:22:26.726997480 -0500
-+++ b/arch/x86/include/asm/module.h	2019-02-22 09:40:04.231493392 -0500
-@@ -25,6 +25,30 @@ struct mod_arch_specific {
- #define MODULE_PROC_FAMILY "586MMX "
- #elif defined CONFIG_MCORE2
- #define MODULE_PROC_FAMILY "CORE2 "
-+#elif defined CONFIG_MNATIVE
-+#define MODULE_PROC_FAMILY "NATIVE "
-+#elif defined CONFIG_MNEHALEM
-+#define MODULE_PROC_FAMILY "NEHALEM "
-+#elif defined CONFIG_MWESTMERE
-+#define MODULE_PROC_FAMILY "WESTMERE "
-+#elif defined CONFIG_MSILVERMONT
-+#define MODULE_PROC_FAMILY "SILVERMONT "
-+#elif defined CONFIG_MSANDYBRIDGE
-+#define MODULE_PROC_FAMILY "SANDYBRIDGE "
-+#elif defined CONFIG_MIVYBRIDGE
-+#define MODULE_PROC_FAMILY "IVYBRIDGE "
-+#elif defined CONFIG_MHASWELL
-+#define MODULE_PROC_FAMILY "HASWELL "
-+#elif defined CONFIG_MBROADWELL
-+#define MODULE_PROC_FAMILY "BROADWELL "
-+#elif defined CONFIG_MSKYLAKE
-+#define MODULE_PROC_FAMILY "SKYLAKE "
-+#elif defined CONFIG_MSKYLAKEX
-+#define MODULE_PROC_FAMILY "SKYLAKEX "
-+#elif defined CONFIG_MCANNONLAKE
-+#define MODULE_PROC_FAMILY "CANNONLAKE "
-+#elif defined CONFIG_MICELAKE
-+#define MODULE_PROC_FAMILY "ICELAKE "
- #elif defined CONFIG_MATOM
- #define MODULE_PROC_FAMILY "ATOM "
- #elif defined CONFIG_M686
-@@ -43,6 +67,26 @@ struct mod_arch_specific {
- #define MODULE_PROC_FAMILY "K7 "
- #elif defined CONFIG_MK8
- #define MODULE_PROC_FAMILY "K8 "
-+#elif defined CONFIG_MK8SSE3
-+#define MODULE_PROC_FAMILY "K8SSE3 "
-+#elif defined CONFIG_MK10
-+#define MODULE_PROC_FAMILY "K10 "
-+#elif defined CONFIG_MBARCELONA
-+#define MODULE_PROC_FAMILY "BARCELONA "
-+#elif defined CONFIG_MBOBCAT
-+#define MODULE_PROC_FAMILY "BOBCAT "
-+#elif defined CONFIG_MBULLDOZER
-+#define MODULE_PROC_FAMILY "BULLDOZER "
-+#elif defined CONFIG_MPILEDRIVER
-+#define MODULE_PROC_FAMILY "PILEDRIVER "
-+#elif defined CONFIG_MSTEAMROLLER
-+#define MODULE_PROC_FAMILY "STEAMROLLER "
-+#elif defined CONFIG_MJAGUAR
-+#define MODULE_PROC_FAMILY "JAGUAR "
-+#elif defined CONFIG_MEXCAVATOR
-+#define MODULE_PROC_FAMILY "EXCAVATOR "
-+#elif defined CONFIG_MZEN
-+#define MODULE_PROC_FAMILY "ZEN "
- #elif defined CONFIG_MELAN
- #define MODULE_PROC_FAMILY "ELAN "
- #elif defined CONFIG_MCRUSOE
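
Both Makefile hunks above lean on Kbuild's cc-option helper, which probes whether the compiler accepts a flag and substitutes a fallback when it does not (e.g. -march=k8-sse3 falling back to -march=athlon on 32-bit). A rough shell equivalent of that probe, for illustration only:

    # approximate $(call cc-option,-march=znver1,-march=athlon):
    # try to compile an empty translation unit with the flag, fall back on failure
    cc_option() {
        if gcc -Werror "$1" -c -x c /dev/null -o /dev/null 2>/dev/null; then
            echo "$1"
        else
            echo "$2"
        fi
    }
    cc_option -march=znver1 -march=athlon  # prints -march=znver1 on a GCC that knows znver1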



* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-05-10 23:40 Mike Pagano
From: Mike Pagano @ 2019-05-10 23:40 UTC
  To: gentoo-commits

commit:     a19b0b75f0aac978ab7823ff4f3b2c031c7ba90c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri May 10 23:39:45 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri May 10 23:39:45 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a19b0b75

Add cpu optimization patches

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |   8 +
 ...-additional-cpu-optimizations-for-gcc-4.9.patch | 545 ++++++++++++++++++++
 5011_enable-cpu-optimizations-for-gcc8.patch       | 569 +++++++++++++++++++++
 3 files changed, 1122 insertions(+)

diff --git a/0000_README b/0000_README
index 90e376f..cfba4e3 100644
--- a/0000_README
+++ b/0000_README
@@ -62,3 +62,11 @@ Desc:   This hid-apple patch enables swapping of the FN and left Control keys an
 Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.
+
+Patch:  5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
+From:   https://github.com/graysky2/kernel_gcc_patch/
+Desc:   Kernel patch enables gcc >= v4.9 optimizations for additional CPUs.
+
+Patch:  5011_enable-cpu-optimizations-for-gcc8.patch
+From:   https://github.com/graysky2/kernel_gcc_patch/
+Desc:   Kernel patch for gcc >= v8 enables kernel >= v4.13 optimizations for additional CPUs.
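
Once these patches are back in the tree, the new entries appear as ordinary Kconfig choices under Processor type and features ---> Processor family. A hypothetical check after selecting "AMD Zen" in menuconfig:

    grep -E '^CONFIG_MZEN|^CONFIG_MNATIVE' .config
    # CONFIG_MZEN=y
    # the arch Makefile then passes -march=znver1 (via cc-option) on the next build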

diff --git a/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch b/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
new file mode 100644
index 0000000..a8aa759
--- /dev/null
+++ b/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
@@ -0,0 +1,545 @@
+WARNING
+This patch works with gcc versions 4.9+ and with kernel version 4.13+ and should
+NOT be applied when compiling on older versions of gcc due to key name changes
+of the march flags introduced with the version 4.9 release of gcc.[1]
+
+Use the older version of this patch hosted on the same github for older
+versions of gcc.
+
+FEATURES
+This patch adds additional CPU options to the Linux kernel accessible under:
+ Processor type and features  --->
+  Processor family --->
+
+The expanded microarchitectures include:
+* AMD Improved K8-family
+* AMD K10-family
+* AMD Family 10h (Barcelona)
+* AMD Family 14h (Bobcat)
+* AMD Family 16h (Jaguar)
+* AMD Family 15h (Bulldozer)
+* AMD Family 15h (Piledriver)
+* AMD Family 15h (Steamroller)
+* AMD Family 15h (Excavator)
+* AMD Family 17h (Zen)
+* Intel Silvermont low-power processors
+* Intel 1st Gen Core i3/i5/i7 (Nehalem)
+* Intel 1.5 Gen Core i3/i5/i7 (Westmere)
+* Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
+* Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
+* Intel 4th Gen Core i3/i5/i7 (Haswell)
+* Intel 5th Gen Core i3/i5/i7 (Broadwell)
+* Intel 6th Gen Core i3/i5/i7 (Skylake)
+* Intel 6th Gen Core i7/i9 (Skylake X)
+
+It also offers to compile passing the 'native' option which, "selects the CPU
+to generate code for at compilation time by determining the processor type of
+the compiling machine. Using -march=native enables all instruction subsets
+supported by the local machine and will produce code optimized for the local
+machine under the constraints of the selected instruction set."[3]
+
+MINOR NOTES
+This patch also changes 'atom' to 'bonnell' in accordance with the gcc v4.9
+changes. Note that upstream is using the deprecated 'march=atom' flags when I
+believe it should use the newer 'march=bonnell' flag for atom processors.[2]
+
+It is not recommended to compile on Atom-CPUs with the 'native' option.[4] The
+recommendation is to use the 'atom' option instead.
+
+BENEFITS
+Small but real speed increases are measurable using a make endpoint comparing
+a generic kernel to one built with one of the respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=4.13
+gcc version >=4.9
+
+ACKNOWLEDGMENTS
+This patch builds on the seminal work by Jeroen.[5]
+
+REFERENCES
+1. https://gcc.gnu.org/gcc-4.9/changes.html
+2. https://bugzilla.kernel.org/show_bug.cgi?id=77461
+3. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html
+4. https://github.com/graysky2/kernel_gcc_patch/issues/15
+5. http://www.linuxforge.net/docs/linux/linux-gcc.php
+
+--- a/arch/x86/include/asm/module.h	2018-01-28 16:20:33.000000000 -0500
++++ b/arch/x86/include/asm/module.h	2018-03-10 06:42:38.688317317 -0500
+@@ -25,6 +25,26 @@ struct mod_arch_specific {
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
++#elif defined CONFIG_MSKYLAKE
++#define MODULE_PROC_FAMILY "SKYLAKE "
++#elif defined CONFIG_MSKYLAKEX
++#define MODULE_PROC_FAMILY "SKYLAKEX "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -43,6 +63,26 @@ struct mod_arch_specific {
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MSTEAMROLLER
++#define MODULE_PROC_FAMILY "STEAMROLLER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
++#elif defined CONFIG_MEXCAVATOR
++#define MODULE_PROC_FAMILY "EXCAVATOR "
++#elif defined CONFIG_MZEN
++#define MODULE_PROC_FAMILY "ZEN "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+--- a/arch/x86/Kconfig.cpu	2018-01-28 16:20:33.000000000 -0500
++++ b/arch/x86/Kconfig.cpu	2018-03-10 06:45:50.244371799 -0500
+@@ -116,6 +116,7 @@ config MPENTIUMM
+ config MPENTIUM4
+ 	bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
+ 	depends on X86_32
++	select X86_P6_NOP
+ 	---help---
+ 	  Select this for Intel Pentium 4 chips.  This includes the
+ 	  Pentium 4, Pentium D, P4-based Celeron and Xeon, and
+@@ -148,9 +149,8 @@ config MPENTIUM4
+ 		-Paxville
+ 		-Dempsey
+ 
+-
+ config MK6
+-	bool "K6/K6-II/K6-III"
++	bool "AMD K6/K6-II/K6-III"
+ 	depends on X86_32
+ 	---help---
+ 	  Select this for an AMD K6-family processor.  Enables use of
+@@ -158,7 +158,7 @@ config MK6
+ 	  flags to GCC.
+ 
+ config MK7
+-	bool "Athlon/Duron/K7"
++	bool "AMD Athlon/Duron/K7"
+ 	depends on X86_32
+ 	---help---
+ 	  Select this for an AMD Athlon K7-family processor.  Enables use of
+@@ -166,12 +166,83 @@ config MK7
+ 	  flags to GCC.
+ 
+ config MK8
+-	bool "Opteron/Athlon64/Hammer/K8"
++	bool "AMD Opteron/Athlon64/Hammer/K8"
+ 	---help---
+ 	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ 	  Enables use of some extended instructions, and passes appropriate
+ 	  optimization flags to GCC.
+ 
++config MK8SSE3
++	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++	---help---
++	  Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MK10
++	bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++	---help---
++	  Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++		Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MBARCELONA
++	bool "AMD Barcelona"
++	---help---
++	  Select this for AMD Family 10h Barcelona processors.
++
++	  Enables -march=barcelona
++
++config MBOBCAT
++	bool "AMD Bobcat"
++	---help---
++	  Select this for AMD Family 14h Bobcat processors.
++
++	  Enables -march=btver1
++
++config MJAGUAR
++	bool "AMD Jaguar"
++	---help---
++	  Select this for AMD Family 16h Jaguar processors.
++
++	  Enables -march=btver2
++
++config MBULLDOZER
++	bool "AMD Bulldozer"
++	---help---
++	  Select this for AMD Family 15h Bulldozer processors.
++
++	  Enables -march=bdver1
++
++config MPILEDRIVER
++	bool "AMD Piledriver"
++	---help---
++	  Select this for AMD Family 15h Piledriver processors.
++
++	  Enables -march=bdver2
++
++config MSTEAMROLLER
++	bool "AMD Steamroller"
++	---help---
++	  Select this for AMD Family 15h Steamroller processors.
++
++	  Enables -march=bdver3
++
++config MEXCAVATOR
++	bool "AMD Excavator"
++	---help---
++	  Select this for AMD Family 15h Excavator processors.
++
++	  Enables -march=bdver4
++
++config MZEN
++	bool "AMD Zen"
++	---help---
++	  Select this for AMD Family 17h Zen processors.
++
++	  Enables -march=znver1
++
+ config MCRUSOE
+ 	bool "Crusoe"
+ 	depends on X86_32
+@@ -253,6 +324,7 @@ config MVIAC7
+ 
+ config MPSC
+ 	bool "Intel P4 / older Netburst based Xeon"
++	select X86_P6_NOP
+ 	depends on X86_64
+ 	---help---
+ 	  Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
+@@ -262,8 +334,19 @@ config MPSC
+ 	  using the cpu family field
+ 	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+ 
++config MATOM
++	bool "Intel Atom"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for the Intel Atom platform. Intel Atom CPUs have an
++	  in-order pipelining architecture and thus can benefit from
++	  accordingly optimized code. Use a recent GCC with specific Atom
++	  support in order to fully benefit from selecting this option.
++
+ config MCORE2
+-	bool "Core 2/newer Xeon"
++	bool "Intel Core 2"
++	select X86_P6_NOP
+ 	---help---
+ 
+ 	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -271,14 +354,88 @@ config MCORE2
+ 	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ 	  (not a typo)
+ 
+-config MATOM
+-	bool "Intel Atom"
++	  Enables -march=core2
++
++config MNEHALEM
++	bool "Intel Nehalem"
++	select X86_P6_NOP
+ 	---help---
+ 
+-	  Select this for the Intel Atom platform. Intel Atom CPUs have an
+-	  in-order pipelining architecture and thus can benefit from
+-	  accordingly optimized code. Use a recent GCC with specific Atom
+-	  support in order to fully benefit from selecting this option.
++	  Select this for 1st Gen Core processors in the Nehalem family.
++
++	  Enables -march=nehalem
++
++config MWESTMERE
++	bool "Intel Westmere"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for the Intel Westmere formerly Nehalem-C family.
++
++	  Enables -march=westmere
++
++config MSILVERMONT
++	bool "Intel Silvermont"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for the Intel Silvermont platform.
++
++	  Enables -march=silvermont
++
++config MSANDYBRIDGE
++	bool "Intel Sandy Bridge"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++	  Enables -march=sandybridge
++
++config MIVYBRIDGE
++	bool "Intel Ivy Bridge"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++	  Enables -march=ivybridge
++
++config MHASWELL
++	bool "Intel Haswell"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 4th Gen Core processors in the Haswell family.
++
++	  Enables -march=haswell
++
++config MBROADWELL
++	bool "Intel Broadwell"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 5th Gen Core processors in the Broadwell family.
++
++	  Enables -march=broadwell
++
++config MSKYLAKE
++	bool "Intel Skylake"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 6th Gen Core processors in the Skylake family.
++
++	  Enables -march=skylake
++
++config MSKYLAKEX
++	bool "Intel Skylake X"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 6th Gen Core processors in the Skylake X family.
++
++	  Enables -march=skylake-avx512
+ 
+ config GENERIC_CPU
+ 	bool "Generic-x86-64"
+@@ -287,6 +444,19 @@ config GENERIC_CPU
+ 	  Generic x86-64 CPU.
+ 	  Run equally well on all x86-64 CPUs.
+ 
++config MNATIVE
++ bool "Native optimizations autodetected by GCC"
++ ---help---
++
++   GCC 4.2 and above support -march=native, which automatically detects
++   the optimum settings to use based on your processor. -march=native
++   also detects and applies additional settings beyond -march specific
++   to your CPU (e.g. -msse4). Unless you have a specific reason not to
++   (e.g. distcc cross-compiling), you should probably be using
++   -march=native rather than anything listed below.
++
++   Enables -march=native
++
+ endchoice
+ 
+ config X86_GENERIC
+@@ -311,7 +481,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ 	int
+ 	default "7" if MPENTIUM4 || MPSC
+-	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++	default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
+ 	default "4" if MELAN || M486 || MGEODEGX1
+ 	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+ 
+@@ -342,35 +512,36 @@ config X86_ALIGNMENT_16
+ 
+ config X86_INTEL_USERCOPY
+ 	def_bool y
+-	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MNATIVE
+ 
+ config X86_USE_PPRO_CHECKSUM
+ 	def_bool y
+-	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MATOM || MNATIVE
+ 
+ config X86_USE_3DNOW
+ 	def_bool y
+ 	depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
+ 
+-#
+-# P6_NOPs are a relatively minor optimization that require a family >=
+-# 6 processor, except that it is broken on certain VIA chips.
+-# Furthermore, AMD chips prefer a totally different sequence of NOPs
+-# (which work on all CPUs).  In addition, it looks like Virtual PC
+-# does not understand them.
+-#
+-# As a result, disallow these if we're not compiling for X86_64 (these
+-# NOPs do work on all x86-64 capable chips); the list of processors in
+-# the right-hand clause are the cores that benefit from this optimization.
+-#
+ config X86_P6_NOP
+-	def_bool y
+-	depends on X86_64
+-	depends on (MCORE2 || MPENTIUM4 || MPSC)
++	default n
++	bool "Support for P6_NOPs on Intel chips"
++	depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT  || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MNATIVE)
++	---help---
++	P6_NOPs are a relatively minor optimization that require a family >=
++	6 processor, except that it is broken on certain VIA chips.
++	Furthermore, AMD chips prefer a totally different sequence of NOPs
++	(which work on all CPUs).  In addition, it looks like Virtual PC
++	does not understand them.
++
++	As a result, disallow these if we're not compiling for X86_64 (these
++	NOPs do work on all x86-64 capable chips); the list of processors in
++	the right-hand clause are the cores that benefit from this optimization.
++
++	Say Y if you have Intel CPU newer than Pentium Pro, N otherwise.
+ 
+ config X86_TSC
+ 	def_bool y
+-	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MNATIVE || MATOM) || X86_64
+ 
+ config X86_CMPXCHG64
+ 	def_bool y
+@@ -380,7 +551,7 @@ config X86_CMPXCHG64
+ # generates cmov.
+ config X86_CMOV
+ 	def_bool y
+-	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++	depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
+ 
+ config X86_MINIMUM_CPU_FAMILY
+ 	int
+--- a/arch/x86/Makefile	2018-01-28 16:20:33.000000000 -0500
++++ b/arch/x86/Makefile	2018-03-10 06:47:00.284240139 -0500
+@@ -124,13 +124,42 @@ else
+ 	KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
+ 
+         # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++        cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
++        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++        cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
++        cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
++        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
++        cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
+         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+ 
+         cflags-$(CONFIG_MCORE2) += \
+-                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+-	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+-		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++                $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
++        cflags-$(CONFIG_MNEHALEM) += \
++                $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
++        cflags-$(CONFIG_MWESTMERE) += \
++                $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
++        cflags-$(CONFIG_MSILVERMONT) += \
++                $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
++        cflags-$(CONFIG_MSANDYBRIDGE) += \
++                $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
++        cflags-$(CONFIG_MIVYBRIDGE) += \
++                $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
++        cflags-$(CONFIG_MHASWELL) += \
++                $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
++        cflags-$(CONFIG_MBROADWELL) += \
++                $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
++        cflags-$(CONFIG_MSKYLAKE) += \
++                $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake))
++        cflags-$(CONFIG_MSKYLAKEX) += \
++                $(call cc-option,-march=skylake-avx512,$(call cc-option,-mtune=skylake-avx512))
++        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
++                $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+         cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+         KBUILD_CFLAGS += $(cflags-y)
+ 
+--- a/arch/x86/Makefile_32.cpu	2018-01-28 16:20:33.000000000 -0500
++++ b/arch/x86/Makefile_32.cpu	2018-03-10 06:47:46.025992644 -0500
+@@ -23,7 +23,18 @@ cflags-$(CONFIG_MK6)		+= -march=k6
+ # Please note, that patches that add -march=athlon-xp and friends are pointless.
+ # They make zero difference whatsosever to performance at this time.
+ cflags-$(CONFIG_MK7)		+= -march=athlon
++cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8)		+= $(call cc-option,-march=k8,-march=athlon)
++cflags-$(CONFIG_MK8SSE3)		+= $(call cc-option,-march=k8-sse3,-march=athlon)
++cflags-$(CONFIG_MK10)	+= $(call cc-option,-march=amdfam10,-march=athlon)
++cflags-$(CONFIG_MBARCELONA)	+= $(call cc-option,-march=barcelona,-march=athlon)
++cflags-$(CONFIG_MBOBCAT)	+= $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MJAGUAR)	+= $(call cc-option,-march=btver2,-march=athlon)
++cflags-$(CONFIG_MBULLDOZER)	+= $(call cc-option,-march=bdver1,-march=athlon)
++cflags-$(CONFIG_MPILEDRIVER)	+= $(call cc-option,-march=bdver2,-march=athlon)
++cflags-$(CONFIG_MSTEAMROLLER)	+= $(call cc-option,-march=bdver3,-march=athlon)
++cflags-$(CONFIG_MEXCAVATOR)	+= $(call cc-option,-march=bdver4,-march=athlon)
++cflags-$(CONFIG_MZEN)	+= $(call cc-option,-march=znver1,-march=athlon)
+ cflags-$(CONFIG_MCRUSOE)	+= -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MEFFICEON)	+= -march=i686 $(call tune,pentium3) -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MWINCHIPC6)	+= $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -32,8 +43,17 @@ cflags-$(CONFIG_MCYRIXIII)	+= $(call cc-
+ cflags-$(CONFIG_MVIAC3_2)	+= $(call cc-option,-march=c3-2,-march=i686)
+ cflags-$(CONFIG_MVIAC7)		+= -march=i686
+ cflags-$(CONFIG_MCORE2)		+= -march=i686 $(call tune,core2)
+-cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
+-	$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++cflags-$(CONFIG_MNEHALEM)	+= -march=i686 $(call tune,nehalem)
++cflags-$(CONFIG_MWESTMERE)	+= -march=i686 $(call tune,westmere)
++cflags-$(CONFIG_MSILVERMONT)	+= -march=i686 $(call tune,silvermont)
++cflags-$(CONFIG_MSANDYBRIDGE)	+= -march=i686 $(call tune,sandybridge)
++cflags-$(CONFIG_MIVYBRIDGE)	+= -march=i686 $(call tune,ivybridge)
++cflags-$(CONFIG_MHASWELL)	+= -march=i686 $(call tune,haswell)
++cflags-$(CONFIG_MBROADWELL)	+= -march=i686 $(call tune,broadwell)
++cflags-$(CONFIG_MSKYLAKE)	+= -march=i686 $(call tune,skylake)
++cflags-$(CONFIG_MSKYLAKEX)	+= -march=i686 $(call tune,skylake-avx512)
++cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
++	$(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+ 
+ # AMD Elan support
+ cflags-$(CONFIG_MELAN)		+= -march=i486

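Both cpu-optimization patches above lean on the kernel's cc-option helper:
$(call cc-option,<flag>,<fallback>) expands to <flag> only when the target
compiler accepts it, and to the (possibly empty) <fallback> otherwise, which
is how an older gcc silently degrades to a safe -march value instead of
breaking the build. Below is a minimal standalone sketch of that probe,
assuming GNU make, a POSIX shell, and a gcc in PATH; the file name
cc-option.mk is hypothetical and the snippet is illustrative only, not part
of this commit:

# cc-option.mk -- standalone approximation of the kernel's cc-option macro.
# It compiles an empty translation unit with the candidate flag; on success
# the flag is echoed back, otherwise the fallback is used.
CC ?= gcc
cc-option = $(shell $(CC) -Werror $(1) -c -x c /dev/null -o /dev/null \
              >/dev/null 2>&1 && echo "$(1)" || echo "$(2)")

# Mirrors the MK8SSE3 hunk: prefer -march=k8-sse3, degrade to -mtune=k8.
KBUILD_CFLAGS += $(call cc-option,-march=k8-sse3,-mtune=k8)

all:
	@echo "resolved: $(KBUILD_CFLAGS)"
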
diff --git a/5011_enable-cpu-optimizations-for-gcc8.patch b/5011_enable-cpu-optimizations-for-gcc8.patch
new file mode 100644
index 0000000..bfd2065
--- /dev/null
+++ b/5011_enable-cpu-optimizations-for-gcc8.patch
@@ -0,0 +1,569 @@
+WARNING
+This patch works with gcc versions 8.1+ and with kernel version 4.13+ and should
+NOT be applied when compiling on older versions of gcc due to key name changes
+of the march flags introduced with the version 4.9 release of gcc.[1]
+
+Use the older version of this patch hosted on the same github for older
+versions of gcc.
+
+FEATURES
+This patch adds additional CPU options to the Linux kernel accessible under:
+ Processor type and features  --->
+  Processor family --->
+
+The expanded microarchitectures include:
+* AMD Improved K8-family
+* AMD K10-family
+* AMD Family 10h (Barcelona)
+* AMD Family 14h (Bobcat)
+* AMD Family 16h (Jaguar)
+* AMD Family 15h (Bulldozer)
+* AMD Family 15h (Piledriver)
+* AMD Family 15h (Steamroller)
+* AMD Family 15h (Excavator)
+* AMD Family 17h (Zen)
+* Intel Silvermont low-power processors
+* Intel 1st Gen Core i3/i5/i7 (Nehalem)
+* Intel 1.5 Gen Core i3/i5/i7 (Westmere)
+* Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
+* Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
+* Intel 4th Gen Core i3/i5/i7 (Haswell)
+* Intel 5th Gen Core i3/i5/i7 (Broadwell)
+* Intel 6th Gen Core i3/i5/i7 (Skylake)
+* Intel 6th Gen Core i7/i9 (Skylake X)
+* Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
+* Intel 8th Gen Core i7/i9 (Ice Lake)
+
+It also offers compiling with the 'native' option, which "selects the CPU
+to generate code for at compilation time by determining the processor type of
+the compiling machine. Using -march=native enables all instruction subsets
+supported by the local machine and will produce code optimized for the local
+machine under the constraints of the selected instruction set."[3]
+
+MINOR NOTES
+This patch also changes 'atom' to 'bonnell' in accordance with the gcc v4.9
+changes. Note that upstream is using the deprecated 'march=atom' flag when I
+believe it should use the newer 'march=bonnell' flag for Atom processors.[2]
+
+It is not recommended to compile on Atom CPUs with the 'native' option.[4] The
+recommendation is to use the 'atom' option instead.
+
+BENEFITS
+Small but real speed increases are measurable using a make-based benchmark
+comparing a generic kernel to one built with one of the respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=4.20
+gcc version >=8.1
+
+ACKNOWLEDGMENTS
+This patch builds on the seminal work by Jeroen.[5]
+
+REFERENCES
+1. https://gcc.gnu.org/gcc-4.9/changes.html
+2. https://bugzilla.kernel.org/show_bug.cgi?id=77461
+3. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html
+4. https://github.com/graysky2/kernel_gcc_patch/issues/15
+5. http://www.linuxforge.net/docs/linux/linux-gcc.php
+
+--- a/arch/x86/Makefile_32.cpu	2019-02-22 09:22:03.426937735 -0500
++++ b/arch/x86/Makefile_32.cpu	2019-02-22 09:37:58.680968580 -0500
+@@ -23,7 +23,18 @@ cflags-$(CONFIG_MK6)		+= -march=k6
+ # Please note, that patches that add -march=athlon-xp and friends are pointless.
+ # They make zero difference whatsosever to performance at this time.
+ cflags-$(CONFIG_MK7)		+= -march=athlon
++cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8)		+= $(call cc-option,-march=k8,-march=athlon)
++cflags-$(CONFIG_MK8SSE3)		+= $(call cc-option,-march=k8-sse3,-march=athlon)
++cflags-$(CONFIG_MK10)	+= $(call cc-option,-march=amdfam10,-march=athlon)
++cflags-$(CONFIG_MBARCELONA)	+= $(call cc-option,-march=barcelona,-march=athlon)
++cflags-$(CONFIG_MBOBCAT)	+= $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MJAGUAR)	+= $(call cc-option,-march=btver2,-march=athlon)
++cflags-$(CONFIG_MBULLDOZER)	+= $(call cc-option,-march=bdver1,-march=athlon)
++cflags-$(CONFIG_MPILEDRIVER)	+= $(call cc-option,-march=bdver2,-march=athlon)
++cflags-$(CONFIG_MSTEAMROLLER)	+= $(call cc-option,-march=bdver3,-march=athlon)
++cflags-$(CONFIG_MEXCAVATOR)	+= $(call cc-option,-march=bdver4,-march=athlon)
++cflags-$(CONFIG_MZEN)	+= $(call cc-option,-march=znver1,-march=athlon)
+ cflags-$(CONFIG_MCRUSOE)	+= -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MEFFICEON)	+= -march=i686 $(call tune,pentium3) -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MWINCHIPC6)	+= $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -32,9 +43,20 @@ cflags-$(CONFIG_MCYRIXIII)	+= $(call cc-
+ cflags-$(CONFIG_MVIAC3_2)	+= $(call cc-option,-march=c3-2,-march=i686)
+ cflags-$(CONFIG_MVIAC7)		+= -march=i686
+ cflags-$(CONFIG_MCORE2)		+= -march=i686 $(call tune,core2)
+-cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
+-	$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
+-
++cflags-$(CONFIG_MNEHALEM)	+= -march=i686 $(call tune,nehalem)
++cflags-$(CONFIG_MWESTMERE)	+= -march=i686 $(call tune,westmere)
++cflags-$(CONFIG_MSILVERMONT)	+= -march=i686 $(call tune,silvermont)
++cflags-$(CONFIG_MSANDYBRIDGE)	+= -march=i686 $(call tune,sandybridge)
++cflags-$(CONFIG_MIVYBRIDGE)	+= -march=i686 $(call tune,ivybridge)
++cflags-$(CONFIG_MHASWELL)	+= -march=i686 $(call tune,haswell)
++cflags-$(CONFIG_MBROADWELL)	+= -march=i686 $(call tune,broadwell)
++cflags-$(CONFIG_MSKYLAKE)	+= -march=i686 $(call tune,skylake)
++cflags-$(CONFIG_MSKYLAKEX)	+= -march=i686 $(call tune,skylake-avx512)
++cflags-$(CONFIG_MCANNONLAKE)	+= -march=i686 $(call tune,cannonlake)
++cflags-$(CONFIG_MICELAKE)	+= -march=i686 $(call tune,icelake)
++cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
++	$(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
++ 
+ # AMD Elan support
+ cflags-$(CONFIG_MELAN)		+= -march=i486
+ 
+--- a/arch/x86/Kconfig.cpu	2019-02-22 09:22:11.576958595 -0500
++++ b/arch/x86/Kconfig.cpu	2019-02-22 09:34:16.490003911 -0500
+@@ -116,6 +116,7 @@ config MPENTIUMM
+ config MPENTIUM4
+ 	bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
+ 	depends on X86_32
++	select X86_P6_NOP
+ 	---help---
+ 	  Select this for Intel Pentium 4 chips.  This includes the
+ 	  Pentium 4, Pentium D, P4-based Celeron and Xeon, and
+@@ -150,7 +151,7 @@ config MPENTIUM4
+ 
+ 
+ config MK6
+-	bool "K6/K6-II/K6-III"
++	bool "AMD K6/K6-II/K6-III"
+ 	depends on X86_32
+ 	---help---
+ 	  Select this for an AMD K6-family processor.  Enables use of
+@@ -158,7 +159,7 @@ config MK6
+ 	  flags to GCC.
+ 
+ config MK7
+-	bool "Athlon/Duron/K7"
++	bool "AMD Athlon/Duron/K7"
+ 	depends on X86_32
+ 	---help---
+ 	  Select this for an AMD Athlon K7-family processor.  Enables use of
+@@ -166,11 +167,81 @@ config MK7
+ 	  flags to GCC.
+ 
+ config MK8
+-	bool "Opteron/Athlon64/Hammer/K8"
++	bool "AMD Opteron/Athlon64/Hammer/K8"
+ 	---help---
+ 	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ 	  Enables use of some extended instructions, and passes appropriate
+ 	  optimization flags to GCC.
++config MK8SSE3
++	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++	---help---
++	  Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MK10
++	bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++	---help---
++	  Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++		Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MBARCELONA
++	bool "AMD Barcelona"
++	---help---
++	  Select this for AMD Family 10h Barcelona processors.
++
++	  Enables -march=barcelona
++
++config MBOBCAT
++	bool "AMD Bobcat"
++	---help---
++	  Select this for AMD Family 14h Bobcat processors.
++
++	  Enables -march=btver1
++
++config MJAGUAR
++	bool "AMD Jaguar"
++	---help---
++	  Select this for AMD Family 16h Jaguar processors.
++
++	  Enables -march=btver2
++
++config MBULLDOZER
++	bool "AMD Bulldozer"
++	---help---
++	  Select this for AMD Family 15h Bulldozer processors.
++
++	  Enables -march=bdver1
++
++config MPILEDRIVER
++	bool "AMD Piledriver"
++	---help---
++	  Select this for AMD Family 15h Piledriver processors.
++
++	  Enables -march=bdver2
++
++config MSTEAMROLLER
++	bool "AMD Steamroller"
++	---help---
++	  Select this for AMD Family 15h Steamroller processors.
++
++	  Enables -march=bdver3
++
++config MEXCAVATOR
++	bool "AMD Excavator"
++	---help---
++	  Select this for AMD Family 15h Excavator processors.
++
++	  Enables -march=bdver4
++
++config MZEN
++	bool "AMD Zen"
++	---help---
++	  Select this for AMD Family 17h Zen processors.
++
++	  Enables -march=znver1
+ 
+ config MCRUSOE
+ 	bool "Crusoe"
+@@ -253,6 +324,7 @@ config MVIAC7
+ 
+ config MPSC
+ 	bool "Intel P4 / older Netburst based Xeon"
++	select X86_P6_NOP
+ 	depends on X86_64
+ 	---help---
+ 	  Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
+@@ -262,23 +334,126 @@ config MPSC
+ 	  using the cpu family field
+ 	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+ 
++config MATOM
++	bool "Intel Atom"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for the Intel Atom platform. Intel Atom CPUs have an
++	  in-order pipelining architecture and thus can benefit from
++	  accordingly optimized code. Use a recent GCC with specific Atom
++	  support in order to fully benefit from selecting this option.
++
+ config MCORE2
+-	bool "Core 2/newer Xeon"
++	bool "Intel Core 2"
++	select X86_P6_NOP
++
+ 	---help---
+ 
+ 	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+ 	  53xx) CPUs. You can distinguish newer from older Xeons by the CPU
+ 	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ 	  (not a typo)
++	  Enables -march=core2
+ 
+-config MATOM
+-	bool "Intel Atom"
++config MNEHALEM
++	bool "Intel Nehalem"
++	select X86_P6_NOP
+ 	---help---
+ 
+-	  Select this for the Intel Atom platform. Intel Atom CPUs have an
+-	  in-order pipelining architecture and thus can benefit from
+-	  accordingly optimized code. Use a recent GCC with specific Atom
+-	  support in order to fully benefit from selecting this option.
++	  Select this for 1st Gen Core processors in the Nehalem family.
++
++	  Enables -march=nehalem
++
++config MWESTMERE
++	bool "Intel Westmere"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for the Intel Westmere formerly Nehalem-C family.
++
++	  Enables -march=westmere
++
++config MSILVERMONT
++	bool "Intel Silvermont"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for the Intel Silvermont platform.
++
++	  Enables -march=silvermont
++
++config MSANDYBRIDGE
++	bool "Intel Sandy Bridge"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++	  Enables -march=sandybridge
++
++config MIVYBRIDGE
++	bool "Intel Ivy Bridge"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++	  Enables -march=ivybridge
++
++config MHASWELL
++	bool "Intel Haswell"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 4th Gen Core processors in the Haswell family.
++
++	  Enables -march=haswell
++
++config MBROADWELL
++	bool "Intel Broadwell"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 5th Gen Core processors in the Broadwell family.
++
++	  Enables -march=broadwell
++
++config MSKYLAKE
++	bool "Intel Skylake"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 6th Gen Core processors in the Skylake family.
++
++	  Enables -march=skylake
++
++config MSKYLAKEX
++	bool "Intel Skylake X"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 6th Gen Core processors in the Skylake X family.
++
++	  Enables -march=skylake-avx512
++
++config MCANNONLAKE
++	bool "Intel Cannon Lake"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 8th Gen Core processors
++
++	  Enables -march=cannonlake
++
++config MICELAKE
++	bool "Intel Ice Lake"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 8th Gen Core processors in the Ice Lake family.
++
++	  Enables -march=icelake
+ 
+ config GENERIC_CPU
+ 	bool "Generic-x86-64"
+@@ -287,6 +462,19 @@ config GENERIC_CPU
+ 	  Generic x86-64 CPU.
+ 	  Run equally well on all x86-64 CPUs.
+ 
++config MNATIVE
++ bool "Native optimizations autodetected by GCC"
++ ---help---
++
++   GCC 4.2 and above support -march=native, which automatically detects
++   the optimum settings to use based on your processor. -march=native
++   also detects and applies additional settings beyond -march specific
++   to your CPU (e.g. -msse4). Unless you have a specific reason not to
++   (e.g. distcc cross-compiling), you should probably be using
++   -march=native rather than anything listed below.
++
++   Enables -march=native
++
+ endchoice
+ 
+ config X86_GENERIC
+@@ -311,7 +499,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ 	int
+ 	default "7" if MPENTIUM4 || MPSC
+-	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++	default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
+ 	default "4" if MELAN || M486 || MGEODEGX1
+ 	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+ 
+@@ -329,39 +517,40 @@ config X86_ALIGNMENT_16
+ 
+ config X86_INTEL_USERCOPY
+ 	def_bool y
+-	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MNATIVE
+ 
+ config X86_USE_PPRO_CHECKSUM
+ 	def_bool y
+-	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MATOM || MNATIVE
+ 
+ config X86_USE_3DNOW
+ 	def_bool y
+ 	depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
+ 
+-#
+-# P6_NOPs are a relatively minor optimization that require a family >=
+-# 6 processor, except that it is broken on certain VIA chips.
+-# Furthermore, AMD chips prefer a totally different sequence of NOPs
+-# (which work on all CPUs).  In addition, it looks like Virtual PC
+-# does not understand them.
+-#
+-# As a result, disallow these if we're not compiling for X86_64 (these
+-# NOPs do work on all x86-64 capable chips); the list of processors in
+-# the right-hand clause are the cores that benefit from this optimization.
+-#
+ config X86_P6_NOP
+-	def_bool y
+-	depends on X86_64
+-	depends on (MCORE2 || MPENTIUM4 || MPSC)
++	default n
++	bool "Support for P6_NOPs on Intel chips"
++	depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT  || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MNATIVE)
++	---help---
++	P6_NOPs are a relatively minor optimization that require a family >=
++	6 processor, except that it is broken on certain VIA chips.
++	Furthermore, AMD chips prefer a totally different sequence of NOPs
++	(which work on all CPUs).  In addition, it looks like Virtual PC
++	does not understand them.
++
++	As a result, disallow these if we're not compiling for X86_64 (these
++	NOPs do work on all x86-64 capable chips); the list of processors in
++	the right-hand clause are the cores that benefit from this optimization.
+ 
++	Say Y if you have Intel CPU newer than Pentium Pro, N otherwise.
++ 
+ config X86_TSC
+ 	def_bool y
+-	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MNATIVE || MATOM) || X86_64
+ 
+ config X86_CMPXCHG64
+ 	def_bool y
+-	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8
++	depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
+ 
+ # this should be set for all -march=.. options where the compiler
+ # generates cmov.
+--- a/arch/x86/Makefile	2019-02-22 09:21:58.196924367 -0500
++++ b/arch/x86/Makefile	2019-02-22 09:36:27.310577832 -0500
+@@ -118,13 +118,46 @@ else
+ 	KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
+ 
+         # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++		cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
++        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++        cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
++        cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
++        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
++        cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
+         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+ 
+         cflags-$(CONFIG_MCORE2) += \
+-                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+-	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+-		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++                $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
++        cflags-$(CONFIG_MNEHALEM) += \
++                $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
++        cflags-$(CONFIG_MWESTMERE) += \
++                $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
++        cflags-$(CONFIG_MSILVERMONT) += \
++                $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
++        cflags-$(CONFIG_MSANDYBRIDGE) += \
++                $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
++        cflags-$(CONFIG_MIVYBRIDGE) += \
++                $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
++        cflags-$(CONFIG_MHASWELL) += \
++                $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
++        cflags-$(CONFIG_MBROADWELL) += \
++                $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
++        cflags-$(CONFIG_MSKYLAKE) += \
++                $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake))
++        cflags-$(CONFIG_MSKYLAKEX) += \
++                $(call cc-option,-march=skylake-avx512,$(call cc-option,-mtune=skylake-avx512))
++        cflags-$(CONFIG_MCANNONLAKE) += \
++                $(call cc-option,-march=cannonlake,$(call cc-option,-mtune=cannonlake))
++        cflags-$(CONFIG_MICELAKE) += \
++                $(call cc-option,-march=icelake,$(call cc-option,-mtune=icelake))
++        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
++                $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+         cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+         KBUILD_CFLAGS += $(cflags-y)
+ 
+--- a/arch/x86/include/asm/module.h	2019-02-22 09:22:26.726997480 -0500
++++ b/arch/x86/include/asm/module.h	2019-02-22 09:40:04.231493392 -0500
+@@ -25,6 +25,30 @@ struct mod_arch_specific {
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
++#elif defined CONFIG_MSKYLAKE
++#define MODULE_PROC_FAMILY "SKYLAKE "
++#elif defined CONFIG_MSKYLAKEX
++#define MODULE_PROC_FAMILY "SKYLAKEX "
++#elif defined CONFIG_MCANNONLAKE
++#define MODULE_PROC_FAMILY "CANNONLAKE "
++#elif defined CONFIG_MICELAKE
++#define MODULE_PROC_FAMILY "ICELAKE "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -43,6 +67,26 @@ struct mod_arch_specific {
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MSTEAMROLLER
++#define MODULE_PROC_FAMILY "STEAMROLLER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
++#elif defined CONFIG_MEXCAVATOR
++#define MODULE_PROC_FAMILY "EXCAVATOR "
++#elif defined CONFIG_MZEN
++#define MODULE_PROC_FAMILY "ZEN "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE


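As a quick sanity check of the MINOR NOTES above (the 'atom' to 'bonnell'
rename), the same try-compile probe can report which spelling the installed
gcc actually accepts; 'bonnell' is the current name and 'atom' the deprecated
alias. A hedged sketch along the same lines, assuming gcc 4.9 or newer and
GNU make; the file name march-probe.mk is hypothetical:

# march-probe.mk -- report which Atom -march spelling this gcc accepts.
CC ?= gcc
try = $(shell $(CC) -Werror $(1) -c -x c /dev/null -o /dev/null \
        >/dev/null 2>&1 && echo yes || echo no)

probe:
	@echo "-march=bonnell accepted: $(call try,-march=bonnell)"
	@echo "-march=atom    accepted: $(call try,-march=atom)"
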

* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-05-11 13:04 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2019-05-11 13:04 UTC (permalink / raw
  To: gentoo-commits

commit:     9ae174120152d4765d74eee7613cfaa065053979
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat May 11 13:03:42 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat May 11 13:03:42 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9ae17412

Linux patch 5.1.1

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1000_linux-5.1.1.patch | 3603 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3607 insertions(+)

diff --git a/0000_README b/0000_README
index cfba4e3..72fa25f 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
+Patch:  1000_linux-5.1.1.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.1.1
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1000_linux-5.1.1.patch b/1000_linux-5.1.1.patch
new file mode 100644
index 0000000..ff16b38
--- /dev/null
+++ b/1000_linux-5.1.1.patch
@@ -0,0 +1,3603 @@
+diff --git a/Makefile b/Makefile
+index 26c92f892d24..bf604f77e5e5 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
+index c7e1a7837706..6fb2214333a2 100644
+--- a/arch/arm64/include/asm/futex.h
++++ b/arch/arm64/include/asm/futex.h
+@@ -23,26 +23,34 @@
+ 
+ #include <asm/errno.h>
+ 
++#define FUTEX_MAX_LOOPS	128 /* What's the largest number you can think of? */
++
+ #define __futex_atomic_op(insn, ret, oldval, uaddr, tmp, oparg)		\
+ do {									\
++	unsigned int loops = FUTEX_MAX_LOOPS;				\
++									\
+ 	uaccess_enable();						\
+ 	asm volatile(							\
+ "	prfm	pstl1strm, %2\n"					\
+ "1:	ldxr	%w1, %2\n"						\
+ 	insn "\n"							\
+ "2:	stlxr	%w0, %w3, %2\n"						\
+-"	cbnz	%w0, 1b\n"						\
+-"	dmb	ish\n"							\
++"	cbz	%w0, 3f\n"						\
++"	sub	%w4, %w4, %w0\n"					\
++"	cbnz	%w4, 1b\n"						\
++"	mov	%w0, %w7\n"						\
+ "3:\n"									\
++"	dmb	ish\n"							\
+ "	.pushsection .fixup,\"ax\"\n"					\
+ "	.align	2\n"							\
+-"4:	mov	%w0, %w5\n"						\
++"4:	mov	%w0, %w6\n"						\
+ "	b	3b\n"							\
+ "	.popsection\n"							\
+ 	_ASM_EXTABLE(1b, 4b)						\
+ 	_ASM_EXTABLE(2b, 4b)						\
+-	: "=&r" (ret), "=&r" (oldval), "+Q" (*uaddr), "=&r" (tmp)	\
+-	: "r" (oparg), "Ir" (-EFAULT)					\
++	: "=&r" (ret), "=&r" (oldval), "+Q" (*uaddr), "=&r" (tmp),	\
++	  "+r" (loops)							\
++	: "r" (oparg), "Ir" (-EFAULT), "Ir" (-EAGAIN)			\
+ 	: "memory");							\
+ 	uaccess_disable();						\
+ } while (0)
+@@ -57,23 +65,23 @@ arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *_uaddr)
+ 
+ 	switch (op) {
+ 	case FUTEX_OP_SET:
+-		__futex_atomic_op("mov	%w3, %w4",
++		__futex_atomic_op("mov	%w3, %w5",
+ 				  ret, oldval, uaddr, tmp, oparg);
+ 		break;
+ 	case FUTEX_OP_ADD:
+-		__futex_atomic_op("add	%w3, %w1, %w4",
++		__futex_atomic_op("add	%w3, %w1, %w5",
+ 				  ret, oldval, uaddr, tmp, oparg);
+ 		break;
+ 	case FUTEX_OP_OR:
+-		__futex_atomic_op("orr	%w3, %w1, %w4",
++		__futex_atomic_op("orr	%w3, %w1, %w5",
+ 				  ret, oldval, uaddr, tmp, oparg);
+ 		break;
+ 	case FUTEX_OP_ANDN:
+-		__futex_atomic_op("and	%w3, %w1, %w4",
++		__futex_atomic_op("and	%w3, %w1, %w5",
+ 				  ret, oldval, uaddr, tmp, ~oparg);
+ 		break;
+ 	case FUTEX_OP_XOR:
+-		__futex_atomic_op("eor	%w3, %w1, %w4",
++		__futex_atomic_op("eor	%w3, %w1, %w5",
+ 				  ret, oldval, uaddr, tmp, oparg);
+ 		break;
+ 	default:
+@@ -93,6 +101,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *_uaddr,
+ 			      u32 oldval, u32 newval)
+ {
+ 	int ret = 0;
++	unsigned int loops = FUTEX_MAX_LOOPS;
+ 	u32 val, tmp;
+ 	u32 __user *uaddr;
+ 
+@@ -104,20 +113,24 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *_uaddr,
+ 	asm volatile("// futex_atomic_cmpxchg_inatomic\n"
+ "	prfm	pstl1strm, %2\n"
+ "1:	ldxr	%w1, %2\n"
+-"	sub	%w3, %w1, %w4\n"
+-"	cbnz	%w3, 3f\n"
+-"2:	stlxr	%w3, %w5, %2\n"
+-"	cbnz	%w3, 1b\n"
+-"	dmb	ish\n"
++"	sub	%w3, %w1, %w5\n"
++"	cbnz	%w3, 4f\n"
++"2:	stlxr	%w3, %w6, %2\n"
++"	cbz	%w3, 3f\n"
++"	sub	%w4, %w4, %w3\n"
++"	cbnz	%w4, 1b\n"
++"	mov	%w0, %w8\n"
+ "3:\n"
++"	dmb	ish\n"
++"4:\n"
+ "	.pushsection .fixup,\"ax\"\n"
+-"4:	mov	%w0, %w6\n"
+-"	b	3b\n"
++"5:	mov	%w0, %w7\n"
++"	b	4b\n"
+ "	.popsection\n"
+-	_ASM_EXTABLE(1b, 4b)
+-	_ASM_EXTABLE(2b, 4b)
+-	: "+r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp)
+-	: "r" (oldval), "r" (newval), "Ir" (-EFAULT)
++	_ASM_EXTABLE(1b, 5b)
++	_ASM_EXTABLE(2b, 5b)
++	: "+r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp), "+r" (loops)
++	: "r" (oldval), "r" (newval), "Ir" (-EFAULT), "Ir" (-EAGAIN)
+ 	: "memory");
+ 	uaccess_disable();
+ 
+diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
+index 1e2a10a06b9d..cf768608437e 100644
+--- a/drivers/acpi/acpi_lpss.c
++++ b/drivers/acpi/acpi_lpss.c
+@@ -1142,8 +1142,8 @@ static struct dev_pm_domain acpi_lpss_pm_domain = {
+ 		.thaw_noirq = acpi_subsys_thaw_noirq,
+ 		.poweroff = acpi_subsys_suspend,
+ 		.poweroff_late = acpi_lpss_suspend_late,
+-		.poweroff_noirq = acpi_subsys_suspend_noirq,
+-		.restore_noirq = acpi_subsys_resume_noirq,
++		.poweroff_noirq = acpi_lpss_suspend_noirq,
++		.restore_noirq = acpi_lpss_resume_noirq,
+ 		.restore_early = acpi_lpss_resume_early,
+ #endif
+ 		.runtime_suspend = acpi_lpss_runtime_suspend,
+diff --git a/drivers/bluetooth/hci_bcm.c b/drivers/bluetooth/hci_bcm.c
+index ddbe518c3e5b..b5d31d583d60 100644
+--- a/drivers/bluetooth/hci_bcm.c
++++ b/drivers/bluetooth/hci_bcm.c
+@@ -228,9 +228,15 @@ static int bcm_gpio_set_power(struct bcm_device *dev, bool powered)
+ 	int err;
+ 
+ 	if (powered && !dev->res_enabled) {
+-		err = regulator_bulk_enable(BCM_NUM_SUPPLIES, dev->supplies);
+-		if (err)
+-			return err;
++		/* Intel Macs use bcm_apple_get_resources() and don't
++		 * have regulator supplies configured.
++		 */
++		if (dev->supplies[0].supply) {
++			err = regulator_bulk_enable(BCM_NUM_SUPPLIES,
++						    dev->supplies);
++			if (err)
++				return err;
++		}
+ 
+ 		/* LPO clock needs to be 32.768 kHz */
+ 		err = clk_set_rate(dev->lpo_clk, 32768);
+@@ -259,7 +265,13 @@ static int bcm_gpio_set_power(struct bcm_device *dev, bool powered)
+ 	if (!powered && dev->res_enabled) {
+ 		clk_disable_unprepare(dev->txco_clk);
+ 		clk_disable_unprepare(dev->lpo_clk);
+-		regulator_bulk_disable(BCM_NUM_SUPPLIES, dev->supplies);
++
++		/* Intel Macs use bcm_apple_get_resources() and don't
++		 * have regulator supplies configured.
++		 */
++		if (dev->supplies[0].supply)
++			regulator_bulk_disable(BCM_NUM_SUPPLIES,
++					       dev->supplies);
+ 	}
+ 
+ 	/* wait for device to power on and come out of reset */
+diff --git a/drivers/cpufreq/armada-37xx-cpufreq.c b/drivers/cpufreq/armada-37xx-cpufreq.c
+index 75491fc841a6..0df16eb1eb3c 100644
+--- a/drivers/cpufreq/armada-37xx-cpufreq.c
++++ b/drivers/cpufreq/armada-37xx-cpufreq.c
+@@ -359,11 +359,11 @@ static int __init armada37xx_cpufreq_driver_init(void)
+ 	struct armada_37xx_dvfs *dvfs;
+ 	struct platform_device *pdev;
+ 	unsigned long freq;
+-	unsigned int cur_frequency;
++	unsigned int cur_frequency, base_frequency;
+ 	struct regmap *nb_pm_base, *avs_base;
+ 	struct device *cpu_dev;
+ 	int load_lvl, ret;
+-	struct clk *clk;
++	struct clk *clk, *parent;
+ 
+ 	nb_pm_base =
+ 		syscon_regmap_lookup_by_compatible("marvell,armada-3700-nb-pm");
+@@ -399,6 +399,22 @@ static int __init armada37xx_cpufreq_driver_init(void)
+ 		return PTR_ERR(clk);
+ 	}
+ 
++	parent = clk_get_parent(clk);
++	if (IS_ERR(parent)) {
++		dev_err(cpu_dev, "Cannot get parent clock for CPU0\n");
++		clk_put(clk);
++		return PTR_ERR(parent);
++	}
++
++	/* Get parent CPU frequency */
++	base_frequency =  clk_get_rate(parent);
++
++	if (!base_frequency) {
++		dev_err(cpu_dev, "Failed to get parent clock rate for CPU\n");
++		clk_put(clk);
++		return -EINVAL;
++	}
++
+ 	/* Get nominal (current) CPU frequency */
+ 	cur_frequency = clk_get_rate(clk);
+ 	if (!cur_frequency) {
+@@ -431,7 +447,7 @@ static int __init armada37xx_cpufreq_driver_init(void)
+ 	for (load_lvl = ARMADA_37XX_DVFS_LOAD_0; load_lvl < LOAD_LEVEL_NR;
+ 	     load_lvl++) {
+ 		unsigned long u_volt = avs_map[dvfs->avs[load_lvl]] * 1000;
+-		freq = cur_frequency / dvfs->divider[load_lvl];
++		freq = base_frequency / dvfs->divider[load_lvl];
+ 		ret = dev_pm_opp_add(cpu_dev, freq, u_volt);
+ 		if (ret)
+ 			goto remove_opp;
+diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c
+index 632d25674e7f..45653029ee18 100644
+--- a/drivers/hv/hv.c
++++ b/drivers/hv/hv.c
+@@ -408,7 +408,6 @@ int hv_synic_cleanup(unsigned int cpu)
+ 
+ 		clockevents_unbind_device(hv_cpu->clk_evt, cpu);
+ 		hv_ce_shutdown(hv_cpu->clk_evt);
+-		put_cpu_ptr(hv_cpu);
+ 	}
+ 
+ 	hv_get_synint_state(VMBUS_MESSAGE_SINT, shared_sint.as_uint64);
+diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c
+index 1cf6290d6435..70f2cb90adc5 100644
+--- a/drivers/hwtracing/intel_th/pci.c
++++ b/drivers/hwtracing/intel_th/pci.c
+@@ -165,6 +165,11 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
+ 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x34a6),
+ 		.driver_data = (kernel_ulong_t)&intel_th_2x,
+ 	},
++	{
++		/* Comet Lake */
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x02a6),
++		.driver_data = (kernel_ulong_t)&intel_th_2x,
++	},
+ 	{ 0 },
+ };
+ 
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index 1412abcff010..5f4bd52121fe 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -385,8 +385,9 @@ static void i3c_bus_set_addr_slot_status(struct i3c_bus *bus, u16 addr,
+ 		return;
+ 
+ 	ptr = bus->addrslots + (bitpos / BITS_PER_LONG);
+-	*ptr &= ~(I3C_ADDR_SLOT_STATUS_MASK << (bitpos % BITS_PER_LONG));
+-	*ptr |= status << (bitpos % BITS_PER_LONG);
++	*ptr &= ~((unsigned long)I3C_ADDR_SLOT_STATUS_MASK <<
++						(bitpos % BITS_PER_LONG));
++	*ptr |= (unsigned long)status << (bitpos % BITS_PER_LONG);
+ }
+ 
+ static bool i3c_bus_dev_addr_is_avail(struct i3c_bus *bus, u8 addr)
+diff --git a/drivers/iio/adc/qcom-spmi-adc5.c b/drivers/iio/adc/qcom-spmi-adc5.c
+index 6a866cc187f7..21fdcde77883 100644
+--- a/drivers/iio/adc/qcom-spmi-adc5.c
++++ b/drivers/iio/adc/qcom-spmi-adc5.c
+@@ -664,6 +664,7 @@ static const struct of_device_id adc5_match_table[] = {
+ 	},
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(of, adc5_match_table);
+ 
+ static int adc5_get_dt_data(struct adc5_chip *adc, struct device_node *node)
+ {
+diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
+index ce3e541434dc..a09a742d7ec1 100644
+--- a/drivers/scsi/lpfc/lpfc_attr.c
++++ b/drivers/scsi/lpfc/lpfc_attr.c
+@@ -114,7 +114,7 @@ static ssize_t
+ lpfc_drvr_version_show(struct device *dev, struct device_attribute *attr,
+ 		       char *buf)
+ {
+-	return snprintf(buf, PAGE_SIZE, LPFC_MODULE_DESC "\n");
++	return scnprintf(buf, PAGE_SIZE, LPFC_MODULE_DESC "\n");
+ }
+ 
+ /**
+@@ -134,9 +134,9 @@ lpfc_enable_fip_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+ 	if (phba->hba_flag & HBA_FIP_SUPPORT)
+-		return snprintf(buf, PAGE_SIZE, "1\n");
++		return scnprintf(buf, PAGE_SIZE, "1\n");
+ 	else
+-		return snprintf(buf, PAGE_SIZE, "0\n");
++		return scnprintf(buf, PAGE_SIZE, "0\n");
+ }
+ 
+ static ssize_t
+@@ -564,14 +564,15 @@ lpfc_bg_info_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	if (phba->cfg_enable_bg)
++	if (phba->cfg_enable_bg) {
+ 		if (phba->sli3_options & LPFC_SLI3_BG_ENABLED)
+-			return snprintf(buf, PAGE_SIZE, "BlockGuard Enabled\n");
++			return scnprintf(buf, PAGE_SIZE,
++					"BlockGuard Enabled\n");
+ 		else
+-			return snprintf(buf, PAGE_SIZE,
++			return scnprintf(buf, PAGE_SIZE,
+ 					"BlockGuard Not Supported\n");
+-	else
+-			return snprintf(buf, PAGE_SIZE,
++	} else
++		return scnprintf(buf, PAGE_SIZE,
+ 					"BlockGuard Disabled\n");
+ }
+ 
+@@ -583,7 +584,7 @@ lpfc_bg_guard_err_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%llu\n",
++	return scnprintf(buf, PAGE_SIZE, "%llu\n",
+ 			(unsigned long long)phba->bg_guard_err_cnt);
+ }
+ 
+@@ -595,7 +596,7 @@ lpfc_bg_apptag_err_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%llu\n",
++	return scnprintf(buf, PAGE_SIZE, "%llu\n",
+ 			(unsigned long long)phba->bg_apptag_err_cnt);
+ }
+ 
+@@ -607,7 +608,7 @@ lpfc_bg_reftag_err_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%llu\n",
++	return scnprintf(buf, PAGE_SIZE, "%llu\n",
+ 			(unsigned long long)phba->bg_reftag_err_cnt);
+ }
+ 
+@@ -625,7 +626,7 @@ lpfc_info_show(struct device *dev, struct device_attribute *attr,
+ {
+ 	struct Scsi_Host *host = class_to_shost(dev);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%s\n",lpfc_info(host));
++	return scnprintf(buf, PAGE_SIZE, "%s\n", lpfc_info(host));
+ }
+ 
+ /**
+@@ -644,7 +645,7 @@ lpfc_serialnum_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%s\n",phba->SerialNumber);
++	return scnprintf(buf, PAGE_SIZE, "%s\n", phba->SerialNumber);
+ }
+ 
+ /**
+@@ -666,7 +667,7 @@ lpfc_temp_sensor_show(struct device *dev, struct device_attribute *attr,
+ 	struct Scsi_Host *shost = class_to_shost(dev);
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+-	return snprintf(buf, PAGE_SIZE, "%d\n",phba->temp_sensor_support);
++	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->temp_sensor_support);
+ }
+ 
+ /**
+@@ -685,7 +686,7 @@ lpfc_modeldesc_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%s\n",phba->ModelDesc);
++	return scnprintf(buf, PAGE_SIZE, "%s\n", phba->ModelDesc);
+ }
+ 
+ /**
+@@ -704,7 +705,7 @@ lpfc_modelname_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%s\n",phba->ModelName);
++	return scnprintf(buf, PAGE_SIZE, "%s\n", phba->ModelName);
+ }
+ 
+ /**
+@@ -723,7 +724,7 @@ lpfc_programtype_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%s\n",phba->ProgramType);
++	return scnprintf(buf, PAGE_SIZE, "%s\n", phba->ProgramType);
+ }
+ 
+ /**
+@@ -741,7 +742,7 @@ lpfc_mlomgmt_show(struct device *dev, struct device_attribute *attr, char *buf)
+ 	struct lpfc_vport *vport = (struct lpfc_vport *)shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n",
++	return scnprintf(buf, PAGE_SIZE, "%d\n",
+ 		(phba->sli.sli_flag & LPFC_MENLO_MAINT));
+ }
+ 
+@@ -761,7 +762,7 @@ lpfc_vportnum_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%s\n",phba->Port);
++	return scnprintf(buf, PAGE_SIZE, "%s\n", phba->Port);
+ }
+ 
+ /**
+@@ -789,10 +790,10 @@ lpfc_fwrev_show(struct device *dev, struct device_attribute *attr,
+ 	sli_family = phba->sli4_hba.pc_sli4_params.sli_family;
+ 
+ 	if (phba->sli_rev < LPFC_SLI_REV4)
+-		len = snprintf(buf, PAGE_SIZE, "%s, sli-%d\n",
++		len = scnprintf(buf, PAGE_SIZE, "%s, sli-%d\n",
+ 			       fwrev, phba->sli_rev);
+ 	else
+-		len = snprintf(buf, PAGE_SIZE, "%s, sli-%d:%d:%x\n",
++		len = scnprintf(buf, PAGE_SIZE, "%s, sli-%d:%d:%x\n",
+ 			       fwrev, phba->sli_rev, if_type, sli_family);
+ 
+ 	return len;
+@@ -816,7 +817,7 @@ lpfc_hdw_show(struct device *dev, struct device_attribute *attr, char *buf)
+ 	lpfc_vpd_t *vp = &phba->vpd;
+ 
+ 	lpfc_jedec_to_ascii(vp->rev.biuRev, hdw);
+-	return snprintf(buf, PAGE_SIZE, "%s\n", hdw);
++	return scnprintf(buf, PAGE_SIZE, "%s\n", hdw);
+ }
+ 
+ /**
+@@ -837,10 +838,11 @@ lpfc_option_rom_version_show(struct device *dev, struct device_attribute *attr,
+ 	char fwrev[FW_REV_STR_SIZE];
+ 
+ 	if (phba->sli_rev < LPFC_SLI_REV4)
+-		return snprintf(buf, PAGE_SIZE, "%s\n", phba->OptionROMVersion);
++		return scnprintf(buf, PAGE_SIZE, "%s\n",
++				phba->OptionROMVersion);
+ 
+ 	lpfc_decode_firmware_rev(phba, fwrev, 1);
+-	return snprintf(buf, PAGE_SIZE, "%s\n", fwrev);
++	return scnprintf(buf, PAGE_SIZE, "%s\n", fwrev);
+ }
+ 
+ /**
+@@ -871,20 +873,20 @@ lpfc_link_state_show(struct device *dev, struct device_attribute *attr,
+ 	case LPFC_LINK_DOWN:
+ 	case LPFC_HBA_ERROR:
+ 		if (phba->hba_flag & LINK_DISABLED)
+-			len += snprintf(buf + len, PAGE_SIZE-len,
++			len += scnprintf(buf + len, PAGE_SIZE-len,
+ 				"Link Down - User disabled\n");
+ 		else
+-			len += snprintf(buf + len, PAGE_SIZE-len,
++			len += scnprintf(buf + len, PAGE_SIZE-len,
+ 				"Link Down\n");
+ 		break;
+ 	case LPFC_LINK_UP:
+ 	case LPFC_CLEAR_LA:
+ 	case LPFC_HBA_READY:
+-		len += snprintf(buf + len, PAGE_SIZE-len, "Link Up - ");
++		len += scnprintf(buf + len, PAGE_SIZE-len, "Link Up - ");
+ 
+ 		switch (vport->port_state) {
+ 		case LPFC_LOCAL_CFG_LINK:
+-			len += snprintf(buf + len, PAGE_SIZE-len,
++			len += scnprintf(buf + len, PAGE_SIZE-len,
+ 					"Configuring Link\n");
+ 			break;
+ 		case LPFC_FDISC:
+@@ -894,38 +896,40 @@ lpfc_link_state_show(struct device *dev, struct device_attribute *attr,
+ 		case LPFC_NS_QRY:
+ 		case LPFC_BUILD_DISC_LIST:
+ 		case LPFC_DISC_AUTH:
+-			len += snprintf(buf + len, PAGE_SIZE - len,
++			len += scnprintf(buf + len, PAGE_SIZE - len,
+ 					"Discovery\n");
+ 			break;
+ 		case LPFC_VPORT_READY:
+-			len += snprintf(buf + len, PAGE_SIZE - len, "Ready\n");
++			len += scnprintf(buf + len, PAGE_SIZE - len,
++					"Ready\n");
+ 			break;
+ 
+ 		case LPFC_VPORT_FAILED:
+-			len += snprintf(buf + len, PAGE_SIZE - len, "Failed\n");
++			len += scnprintf(buf + len, PAGE_SIZE - len,
++					"Failed\n");
+ 			break;
+ 
+ 		case LPFC_VPORT_UNKNOWN:
+-			len += snprintf(buf + len, PAGE_SIZE - len,
++			len += scnprintf(buf + len, PAGE_SIZE - len,
+ 					"Unknown\n");
+ 			break;
+ 		}
+ 		if (phba->sli.sli_flag & LPFC_MENLO_MAINT)
+-			len += snprintf(buf + len, PAGE_SIZE-len,
++			len += scnprintf(buf + len, PAGE_SIZE-len,
+ 					"   Menlo Maint Mode\n");
+ 		else if (phba->fc_topology == LPFC_TOPOLOGY_LOOP) {
+ 			if (vport->fc_flag & FC_PUBLIC_LOOP)
+-				len += snprintf(buf + len, PAGE_SIZE-len,
++				len += scnprintf(buf + len, PAGE_SIZE-len,
+ 						"   Public Loop\n");
+ 			else
+-				len += snprintf(buf + len, PAGE_SIZE-len,
++				len += scnprintf(buf + len, PAGE_SIZE-len,
+ 						"   Private Loop\n");
+ 		} else {
+ 			if (vport->fc_flag & FC_FABRIC)
+-				len += snprintf(buf + len, PAGE_SIZE-len,
++				len += scnprintf(buf + len, PAGE_SIZE-len,
+ 						"   Fabric\n");
+ 			else
+-				len += snprintf(buf + len, PAGE_SIZE-len,
++				len += scnprintf(buf + len, PAGE_SIZE-len,
+ 						"   Point-2-Point\n");
+ 		}
+ 	}
+@@ -937,28 +941,28 @@ lpfc_link_state_show(struct device *dev, struct device_attribute *attr,
+ 		struct lpfc_trunk_link link = phba->trunk_link;
+ 
+ 		if (bf_get(lpfc_conf_trunk_port0, &phba->sli4_hba))
+-			len += snprintf(buf + len, PAGE_SIZE - len,
++			len += scnprintf(buf + len, PAGE_SIZE - len,
+ 				"Trunk port 0: Link %s %s\n",
+ 				(link.link0.state == LPFC_LINK_UP) ?
+ 				 "Up" : "Down. ",
+ 				trunk_errmsg[link.link0.fault]);
+ 
+ 		if (bf_get(lpfc_conf_trunk_port1, &phba->sli4_hba))
+-			len += snprintf(buf + len, PAGE_SIZE - len,
++			len += scnprintf(buf + len, PAGE_SIZE - len,
+ 				"Trunk port 1: Link %s %s\n",
+ 				(link.link1.state == LPFC_LINK_UP) ?
+ 				 "Up" : "Down. ",
+ 				trunk_errmsg[link.link1.fault]);
+ 
+ 		if (bf_get(lpfc_conf_trunk_port2, &phba->sli4_hba))
+-			len += snprintf(buf + len, PAGE_SIZE - len,
++			len += scnprintf(buf + len, PAGE_SIZE - len,
+ 				"Trunk port 2: Link %s %s\n",
+ 				(link.link2.state == LPFC_LINK_UP) ?
+ 				 "Up" : "Down. ",
+ 				trunk_errmsg[link.link2.fault]);
+ 
+ 		if (bf_get(lpfc_conf_trunk_port3, &phba->sli4_hba))
+-			len += snprintf(buf + len, PAGE_SIZE - len,
++			len += scnprintf(buf + len, PAGE_SIZE - len,
+ 				"Trunk port 3: Link %s %s\n",
+ 				(link.link3.state == LPFC_LINK_UP) ?
+ 				 "Up" : "Down. ",
+@@ -986,15 +990,15 @@ lpfc_sli4_protocol_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_hba *phba = vport->phba;
+ 
+ 	if (phba->sli_rev < LPFC_SLI_REV4)
+-		return snprintf(buf, PAGE_SIZE, "fc\n");
++		return scnprintf(buf, PAGE_SIZE, "fc\n");
+ 
+ 	if (phba->sli4_hba.lnk_info.lnk_dv == LPFC_LNK_DAT_VAL) {
+ 		if (phba->sli4_hba.lnk_info.lnk_tp == LPFC_LNK_TYPE_GE)
+-			return snprintf(buf, PAGE_SIZE, "fcoe\n");
++			return scnprintf(buf, PAGE_SIZE, "fcoe\n");
+ 		if (phba->sli4_hba.lnk_info.lnk_tp == LPFC_LNK_TYPE_FC)
+-			return snprintf(buf, PAGE_SIZE, "fc\n");
++			return scnprintf(buf, PAGE_SIZE, "fc\n");
+ 	}
+-	return snprintf(buf, PAGE_SIZE, "unknown\n");
++	return scnprintf(buf, PAGE_SIZE, "unknown\n");
+ }
+ 
+ /**
+@@ -1014,7 +1018,7 @@ lpfc_oas_supported_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *)shost->hostdata;
+ 	struct lpfc_hba *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n",
++	return scnprintf(buf, PAGE_SIZE, "%d\n",
+ 			phba->sli4_hba.pc_sli4_params.oas_supported);
+ }
+ 
+@@ -1072,7 +1076,7 @@ lpfc_num_discovered_ports_show(struct device *dev,
+ 	struct Scsi_Host  *shost = class_to_shost(dev);
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n",
++	return scnprintf(buf, PAGE_SIZE, "%d\n",
+ 			vport->fc_map_cnt + vport->fc_unmap_cnt);
+ }
+ 
+@@ -1586,7 +1590,7 @@ lpfc_nport_evt_cnt_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n", phba->nport_event_cnt);
++	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->nport_event_cnt);
+ }
+ 
+ int
+@@ -1675,7 +1679,7 @@ lpfc_board_mode_show(struct device *dev, struct device_attribute *attr,
+ 	else
+ 		state = "online";
+ 
+-	return snprintf(buf, PAGE_SIZE, "%s\n", state);
++	return scnprintf(buf, PAGE_SIZE, "%s\n", state);
+ }
+ 
+ /**
+@@ -1901,8 +1905,8 @@ lpfc_max_rpi_show(struct device *dev, struct device_attribute *attr,
+ 	uint32_t cnt;
+ 
+ 	if (lpfc_get_hba_info(phba, NULL, NULL, &cnt, NULL, NULL, NULL))
+-		return snprintf(buf, PAGE_SIZE, "%d\n", cnt);
+-	return snprintf(buf, PAGE_SIZE, "Unknown\n");
++		return scnprintf(buf, PAGE_SIZE, "%d\n", cnt);
++	return scnprintf(buf, PAGE_SIZE, "Unknown\n");
+ }
+ 
+ /**
+@@ -1929,8 +1933,8 @@ lpfc_used_rpi_show(struct device *dev, struct device_attribute *attr,
+ 	uint32_t cnt, acnt;
+ 
+ 	if (lpfc_get_hba_info(phba, NULL, NULL, &cnt, &acnt, NULL, NULL))
+-		return snprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
+-	return snprintf(buf, PAGE_SIZE, "Unknown\n");
++		return scnprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
++	return scnprintf(buf, PAGE_SIZE, "Unknown\n");
+ }
+ 
+ /**
+@@ -1957,8 +1961,8 @@ lpfc_max_xri_show(struct device *dev, struct device_attribute *attr,
+ 	uint32_t cnt;
+ 
+ 	if (lpfc_get_hba_info(phba, &cnt, NULL, NULL, NULL, NULL, NULL))
+-		return snprintf(buf, PAGE_SIZE, "%d\n", cnt);
+-	return snprintf(buf, PAGE_SIZE, "Unknown\n");
++		return scnprintf(buf, PAGE_SIZE, "%d\n", cnt);
++	return scnprintf(buf, PAGE_SIZE, "Unknown\n");
+ }
+ 
+ /**
+@@ -1985,8 +1989,8 @@ lpfc_used_xri_show(struct device *dev, struct device_attribute *attr,
+ 	uint32_t cnt, acnt;
+ 
+ 	if (lpfc_get_hba_info(phba, &cnt, &acnt, NULL, NULL, NULL, NULL))
+-		return snprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
+-	return snprintf(buf, PAGE_SIZE, "Unknown\n");
++		return scnprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
++	return scnprintf(buf, PAGE_SIZE, "Unknown\n");
+ }
+ 
+ /**
+@@ -2013,8 +2017,8 @@ lpfc_max_vpi_show(struct device *dev, struct device_attribute *attr,
+ 	uint32_t cnt;
+ 
+ 	if (lpfc_get_hba_info(phba, NULL, NULL, NULL, NULL, &cnt, NULL))
+-		return snprintf(buf, PAGE_SIZE, "%d\n", cnt);
+-	return snprintf(buf, PAGE_SIZE, "Unknown\n");
++		return scnprintf(buf, PAGE_SIZE, "%d\n", cnt);
++	return scnprintf(buf, PAGE_SIZE, "Unknown\n");
+ }
+ 
+ /**
+@@ -2041,8 +2045,8 @@ lpfc_used_vpi_show(struct device *dev, struct device_attribute *attr,
+ 	uint32_t cnt, acnt;
+ 
+ 	if (lpfc_get_hba_info(phba, NULL, NULL, NULL, NULL, &cnt, &acnt))
+-		return snprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
+-	return snprintf(buf, PAGE_SIZE, "Unknown\n");
++		return scnprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
++	return scnprintf(buf, PAGE_SIZE, "Unknown\n");
+ }
+ 
+ /**
+@@ -2067,10 +2071,10 @@ lpfc_npiv_info_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+ 	if (!(phba->max_vpi))
+-		return snprintf(buf, PAGE_SIZE, "NPIV Not Supported\n");
++		return scnprintf(buf, PAGE_SIZE, "NPIV Not Supported\n");
+ 	if (vport->port_type == LPFC_PHYSICAL_PORT)
+-		return snprintf(buf, PAGE_SIZE, "NPIV Physical\n");
+-	return snprintf(buf, PAGE_SIZE, "NPIV Virtual (VPI %d)\n", vport->vpi);
++		return scnprintf(buf, PAGE_SIZE, "NPIV Physical\n");
++	return scnprintf(buf, PAGE_SIZE, "NPIV Virtual (VPI %d)\n", vport->vpi);
+ }
+ 
+ /**
+@@ -2092,7 +2096,7 @@ lpfc_poll_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%#x\n", phba->cfg_poll);
++	return scnprintf(buf, PAGE_SIZE, "%#x\n", phba->cfg_poll);
+ }
+ 
+ /**
+@@ -2196,7 +2200,7 @@ lpfc_fips_level_show(struct device *dev,  struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n", phba->fips_level);
++	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->fips_level);
+ }
+ 
+ /**
+@@ -2215,7 +2219,7 @@ lpfc_fips_rev_show(struct device *dev,  struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n", phba->fips_spec_rev);
++	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->fips_spec_rev);
+ }
+ 
+ /**
+@@ -2234,7 +2238,7 @@ lpfc_dss_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%s - %sOperational\n",
++	return scnprintf(buf, PAGE_SIZE, "%s - %sOperational\n",
+ 			(phba->cfg_enable_dss) ? "Enabled" : "Disabled",
+ 			(phba->sli3_options & LPFC_SLI3_DSS_ENABLED) ?
+ 				"" : "Not ");
+@@ -2263,7 +2267,7 @@ lpfc_sriov_hw_max_virtfn_show(struct device *dev,
+ 	uint16_t max_nr_virtfn;
+ 
+ 	max_nr_virtfn = lpfc_sli_sriov_nr_virtfn_get(phba);
+-	return snprintf(buf, PAGE_SIZE, "%d\n", max_nr_virtfn);
++	return scnprintf(buf, PAGE_SIZE, "%d\n", max_nr_virtfn);
+ }
+ 
+ static inline bool lpfc_rangecheck(uint val, uint min, uint max)
+@@ -2323,7 +2327,7 @@ lpfc_##attr##_show(struct device *dev, struct device_attribute *attr, \
+ 	struct Scsi_Host  *shost = class_to_shost(dev);\
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;\
+ 	struct lpfc_hba   *phba = vport->phba;\
+-	return snprintf(buf, PAGE_SIZE, "%d\n",\
++	return scnprintf(buf, PAGE_SIZE, "%d\n",\
+ 			phba->cfg_##attr);\
+ }
+ 
+@@ -2351,7 +2355,7 @@ lpfc_##attr##_show(struct device *dev, struct device_attribute *attr, \
+ 	struct lpfc_hba   *phba = vport->phba;\
+ 	uint val = 0;\
+ 	val = phba->cfg_##attr;\
+-	return snprintf(buf, PAGE_SIZE, "%#x\n",\
++	return scnprintf(buf, PAGE_SIZE, "%#x\n",\
+ 			phba->cfg_##attr);\
+ }
+ 
+@@ -2487,7 +2491,7 @@ lpfc_##attr##_show(struct device *dev, struct device_attribute *attr, \
+ { \
+ 	struct Scsi_Host  *shost = class_to_shost(dev);\
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;\
+-	return snprintf(buf, PAGE_SIZE, "%d\n", vport->cfg_##attr);\
++	return scnprintf(buf, PAGE_SIZE, "%d\n", vport->cfg_##attr);\
+ }
+ 
+ /**
+@@ -2512,7 +2516,7 @@ lpfc_##attr##_show(struct device *dev, struct device_attribute *attr, \
+ { \
+ 	struct Scsi_Host  *shost = class_to_shost(dev);\
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;\
+-	return snprintf(buf, PAGE_SIZE, "%#x\n", vport->cfg_##attr);\
++	return scnprintf(buf, PAGE_SIZE, "%#x\n", vport->cfg_##attr);\
+ }
+ 
+ /**
+@@ -2784,7 +2788,7 @@ lpfc_soft_wwpn_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "0x%llx\n",
++	return scnprintf(buf, PAGE_SIZE, "0x%llx\n",
+ 			(unsigned long long)phba->cfg_soft_wwpn);
+ }
+ 
+@@ -2881,7 +2885,7 @@ lpfc_soft_wwnn_show(struct device *dev, struct device_attribute *attr,
+ {
+ 	struct Scsi_Host *shost = class_to_shost(dev);
+ 	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;
+-	return snprintf(buf, PAGE_SIZE, "0x%llx\n",
++	return scnprintf(buf, PAGE_SIZE, "0x%llx\n",
+ 			(unsigned long long)phba->cfg_soft_wwnn);
+ }
+ 
+@@ -2947,7 +2951,7 @@ lpfc_oas_tgt_show(struct device *dev, struct device_attribute *attr,
+ 	struct Scsi_Host *shost = class_to_shost(dev);
+ 	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "0x%llx\n",
++	return scnprintf(buf, PAGE_SIZE, "0x%llx\n",
+ 			wwn_to_u64(phba->cfg_oas_tgt_wwpn));
+ }
+ 
+@@ -3015,7 +3019,7 @@ lpfc_oas_priority_show(struct device *dev, struct device_attribute *attr,
+ 	struct Scsi_Host *shost = class_to_shost(dev);
+ 	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n", phba->cfg_oas_priority);
++	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->cfg_oas_priority);
+ }
+ 
+ /**
+@@ -3078,7 +3082,7 @@ lpfc_oas_vpt_show(struct device *dev, struct device_attribute *attr,
+ 	struct Scsi_Host *shost = class_to_shost(dev);
+ 	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "0x%llx\n",
++	return scnprintf(buf, PAGE_SIZE, "0x%llx\n",
+ 			wwn_to_u64(phba->cfg_oas_vpt_wwpn));
+ }
+ 
+@@ -3149,7 +3153,7 @@ lpfc_oas_lun_state_show(struct device *dev, struct device_attribute *attr,
+ 	struct Scsi_Host *shost = class_to_shost(dev);
+ 	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n", phba->cfg_oas_lun_state);
++	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->cfg_oas_lun_state);
+ }
+ 
+ /**
+@@ -3213,7 +3217,7 @@ lpfc_oas_lun_status_show(struct device *dev, struct device_attribute *attr,
+ 	if (!(phba->cfg_oas_flags & OAS_LUN_VALID))
+ 		return -EFAULT;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n", phba->cfg_oas_lun_status);
++	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->cfg_oas_lun_status);
+ }
+ static DEVICE_ATTR(lpfc_xlane_lun_status, S_IRUGO,
+ 		   lpfc_oas_lun_status_show, NULL);
+@@ -3365,7 +3369,7 @@ lpfc_oas_lun_show(struct device *dev, struct device_attribute *attr,
+ 	if (oas_lun != NOT_OAS_ENABLED_LUN)
+ 		phba->cfg_oas_flags |= OAS_LUN_VALID;
+ 
+-	len += snprintf(buf + len, PAGE_SIZE-len, "0x%llx", oas_lun);
++	len += scnprintf(buf + len, PAGE_SIZE-len, "0x%llx", oas_lun);
+ 
+ 	return len;
+ }
+@@ -3499,7 +3503,7 @@ lpfc_iocb_hw_show(struct device *dev, struct device_attribute *attr, char *buf)
+ 	struct Scsi_Host  *shost = class_to_shost(dev);
+ 	struct lpfc_hba   *phba = ((struct lpfc_vport *) shost->hostdata)->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n", phba->iocb_max);
++	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->iocb_max);
+ }
+ 
+ static DEVICE_ATTR(iocb_hw, S_IRUGO,
+@@ -3511,7 +3515,7 @@ lpfc_txq_hw_show(struct device *dev, struct device_attribute *attr, char *buf)
+ 	struct lpfc_hba   *phba = ((struct lpfc_vport *) shost->hostdata)->phba;
+ 	struct lpfc_sli_ring *pring = lpfc_phba_elsring(phba);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n",
++	return scnprintf(buf, PAGE_SIZE, "%d\n",
+ 			pring ? pring->txq_max : 0);
+ }
+ 
+@@ -3525,7 +3529,7 @@ lpfc_txcmplq_hw_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_hba   *phba = ((struct lpfc_vport *) shost->hostdata)->phba;
+ 	struct lpfc_sli_ring *pring = lpfc_phba_elsring(phba);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n",
++	return scnprintf(buf, PAGE_SIZE, "%d\n",
+ 			pring ? pring->txcmplq_max : 0);
+ }
+ 
+@@ -3561,7 +3565,7 @@ lpfc_nodev_tmo_show(struct device *dev, struct device_attribute *attr,
+ 	struct Scsi_Host  *shost = class_to_shost(dev);
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n",	vport->cfg_devloss_tmo);
++	return scnprintf(buf, PAGE_SIZE, "%d\n",	vport->cfg_devloss_tmo);
+ }
+ 
+ /**
+@@ -5169,12 +5173,12 @@ lpfc_fcp_cpu_map_show(struct device *dev, struct device_attribute *attr,
+ 
+ 	switch (phba->cfg_fcp_cpu_map) {
+ 	case 0:
+-		len += snprintf(buf + len, PAGE_SIZE-len,
++		len += scnprintf(buf + len, PAGE_SIZE-len,
+ 				"fcp_cpu_map: No mapping (%d)\n",
+ 				phba->cfg_fcp_cpu_map);
+ 		return len;
+ 	case 1:
+-		len += snprintf(buf + len, PAGE_SIZE-len,
++		len += scnprintf(buf + len, PAGE_SIZE-len,
+ 				"fcp_cpu_map: HBA centric mapping (%d): "
+ 				"%d of %d CPUs online from %d possible CPUs\n",
+ 				phba->cfg_fcp_cpu_map, num_online_cpus(),
+@@ -5188,12 +5192,12 @@ lpfc_fcp_cpu_map_show(struct device *dev, struct device_attribute *attr,
+ 		cpup = &phba->sli4_hba.cpu_map[phba->sli4_hba.curr_disp_cpu];
+ 
+ 		if (!cpu_present(phba->sli4_hba.curr_disp_cpu))
+-			len += snprintf(buf + len, PAGE_SIZE - len,
++			len += scnprintf(buf + len, PAGE_SIZE - len,
+ 					"CPU %02d not present\n",
+ 					phba->sli4_hba.curr_disp_cpu);
+ 		else if (cpup->irq == LPFC_VECTOR_MAP_EMPTY) {
+ 			if (cpup->hdwq == LPFC_VECTOR_MAP_EMPTY)
+-				len += snprintf(
++				len += scnprintf(
+ 					buf + len, PAGE_SIZE - len,
+ 					"CPU %02d hdwq None "
+ 					"physid %d coreid %d ht %d\n",
+@@ -5201,7 +5205,7 @@ lpfc_fcp_cpu_map_show(struct device *dev, struct device_attribute *attr,
+ 					cpup->phys_id,
+ 					cpup->core_id, cpup->hyper);
+ 			else
+-				len += snprintf(
++				len += scnprintf(
+ 					buf + len, PAGE_SIZE - len,
+ 					"CPU %02d EQ %04d hdwq %04d "
+ 					"physid %d coreid %d ht %d\n",
+@@ -5210,7 +5214,7 @@ lpfc_fcp_cpu_map_show(struct device *dev, struct device_attribute *attr,
+ 					cpup->core_id, cpup->hyper);
+ 		} else {
+ 			if (cpup->hdwq == LPFC_VECTOR_MAP_EMPTY)
+-				len += snprintf(
++				len += scnprintf(
+ 					buf + len, PAGE_SIZE - len,
+ 					"CPU %02d hdwq None "
+ 					"physid %d coreid %d ht %d IRQ %d\n",
+@@ -5218,7 +5222,7 @@ lpfc_fcp_cpu_map_show(struct device *dev, struct device_attribute *attr,
+ 					cpup->phys_id,
+ 					cpup->core_id, cpup->hyper, cpup->irq);
+ 			else
+-				len += snprintf(
++				len += scnprintf(
+ 					buf + len, PAGE_SIZE - len,
+ 					"CPU %02d EQ %04d hdwq %04d "
+ 					"physid %d coreid %d ht %d IRQ %d\n",
+@@ -5233,7 +5237,7 @@ lpfc_fcp_cpu_map_show(struct device *dev, struct device_attribute *attr,
+ 		if (phba->sli4_hba.curr_disp_cpu <
+ 				phba->sli4_hba.num_possible_cpu &&
+ 				(len >= (PAGE_SIZE - 64))) {
+-			len += snprintf(buf + len,
++			len += scnprintf(buf + len,
+ 					PAGE_SIZE - len, "more...\n");
+ 			break;
+ 		}
+@@ -5753,10 +5757,10 @@ lpfc_sg_seg_cnt_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_hba   *phba = vport->phba;
+ 	int len;
+ 
+-	len = snprintf(buf, PAGE_SIZE, "SGL sz: %d  total SGEs: %d\n",
++	len = scnprintf(buf, PAGE_SIZE, "SGL sz: %d  total SGEs: %d\n",
+ 		       phba->cfg_sg_dma_buf_size, phba->cfg_total_seg_cnt);
+ 
+-	len += snprintf(buf + len, PAGE_SIZE, "Cfg: %d  SCSI: %d  NVME: %d\n",
++	len += scnprintf(buf + len, PAGE_SIZE, "Cfg: %d  SCSI: %d  NVME: %d\n",
+ 			phba->cfg_sg_seg_cnt, phba->cfg_scsi_seg_cnt,
+ 			phba->cfg_nvme_seg_cnt);
+ 	return len;
+@@ -6755,7 +6759,7 @@ lpfc_show_rport_##field (struct device *dev,				\
+ {									\
+ 	struct fc_rport *rport = transport_class_to_rport(dev);		\
+ 	struct lpfc_rport_data *rdata = rport->hostdata;		\
+-	return snprintf(buf, sz, format_string,				\
++	return scnprintf(buf, sz, format_string,			\
+ 		(rdata->target) ? cast rdata->target->field : 0);	\
+ }
+ 
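The hunks above, and those that follow for the other lpfc files, are all
instances of the same conversion, so the rationale is worth stating once.
These show handlers accumulate output with the idiom

	len += snprintf(buf + len, PAGE_SIZE - len, ...);

snprintf() returns the length the output would have had with an unlimited
buffer, so once the page fills up, len can exceed PAGE_SIZE and the next
"PAGE_SIZE - len" underflows to a huge size_t, silently defeating the
bounds check. scnprintf() returns the number of bytes actually written
(excluding the trailing NUL), which keeps the running total inside the
buffer. A minimal sketch of the failure mode -- the buffer and strings
here are illustrative, not taken from the driver:

	char buf[8];
	size_t len = 0;

	/* snprintf() reports the untruncated length: it returns 10 here,
	 * so len becomes 10 and sizeof(buf) - len wraps to (size_t)-2.
	 * The next write through this idiom is effectively unbounded.
	 */
	len += snprintf(buf + len, sizeof(buf) - len, "0123456789");

	/* scnprintf() reports bytes actually stored: it returns 7 here
	 * (7 chars plus the NUL fill the 8-byte buffer), so len stays at
	 * 7 and the next bound, sizeof(buf) - len, is a safe 1.
	 */
	len = 0;
	len += scnprintf(buf + len, sizeof(buf) - len, "0123456789");
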
+diff --git a/drivers/scsi/lpfc/lpfc_ct.c b/drivers/scsi/lpfc/lpfc_ct.c
+index 7290573110fe..2e3949c6cd07 100644
+--- a/drivers/scsi/lpfc/lpfc_ct.c
++++ b/drivers/scsi/lpfc/lpfc_ct.c
+@@ -1430,7 +1430,7 @@ lpfc_vport_symbolic_port_name(struct lpfc_vport *vport, char *symbol,
+ 	 * Name object.  NPIV is not in play so this integer
+ 	 * value is sufficient and unique per FC-ID.
+ 	 */
+-	n = snprintf(symbol, size, "%d", vport->phba->brd_no);
++	n = scnprintf(symbol, size, "%d", vport->phba->brd_no);
+ 	return n;
+ }
+ 
+@@ -1444,26 +1444,26 @@ lpfc_vport_symbolic_node_name(struct lpfc_vport *vport, char *symbol,
+ 
+ 	lpfc_decode_firmware_rev(vport->phba, fwrev, 0);
+ 
+-	n = snprintf(symbol, size, "Emulex %s", vport->phba->ModelName);
++	n = scnprintf(symbol, size, "Emulex %s", vport->phba->ModelName);
+ 	if (size < n)
+ 		return n;
+ 
+-	n += snprintf(symbol + n, size - n, " FV%s", fwrev);
++	n += scnprintf(symbol + n, size - n, " FV%s", fwrev);
+ 	if (size < n)
+ 		return n;
+ 
+-	n += snprintf(symbol + n, size - n, " DV%s.",
++	n += scnprintf(symbol + n, size - n, " DV%s.",
+ 		      lpfc_release_version);
+ 	if (size < n)
+ 		return n;
+ 
+-	n += snprintf(symbol + n, size - n, " HN:%s.",
++	n += scnprintf(symbol + n, size - n, " HN:%s.",
+ 		      init_utsname()->nodename);
+ 	if (size < n)
+ 		return n;
+ 
+ 	/* Note :- OS name is "Linux" */
+-	n += snprintf(symbol + n, size - n, " OS:%s\n",
++	n += scnprintf(symbol + n, size - n, " OS:%s\n",
+ 		      init_utsname()->sysname);
+ 	return n;
+ }
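One subtlety in the lpfc_ct.c hunks just above, noted as an observation
rather than as part of the patch: the surrounding "if (size < n) return n;"
guards were written against snprintf() semantics, where n could
legitimately exceed size on truncation. With scnprintf() the running total
can never exceed size - 1, so those guards can no longer fire and the
function simply returns the truncated length. A sketch of the patched
shape (the model and firmware strings are made up for illustration):

	static int symbolic_name_sketch(char *symbol, size_t size)
	{
		int n;

		n = scnprintf(symbol, size, "Emulex %s", "LPe32000");
		if (size < n)	/* unreachable now: n <= size - 1 */
			return n;
		n += scnprintf(symbol + n, size - n, " FV%s", "12.0.0");
		return n;
	}
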
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
+index 1215eaa530db..d6410cf18f1c 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.c
++++ b/drivers/scsi/lpfc/lpfc_debugfs.c
+@@ -170,7 +170,7 @@ lpfc_debugfs_disc_trc_data(struct lpfc_vport *vport, char *buf, int size)
+ 		snprintf(buffer,
+ 			LPFC_DEBUG_TRC_ENTRY_SIZE, "%010d:%010d ms:%s\n",
+ 			dtp->seq_cnt, ms, dtp->fmt);
+-		len +=  snprintf(buf+len, size-len, buffer,
++		len +=  scnprintf(buf+len, size-len, buffer,
+ 			dtp->data1, dtp->data2, dtp->data3);
+ 	}
+ 	for (i = 0; i < index; i++) {
+@@ -181,7 +181,7 @@ lpfc_debugfs_disc_trc_data(struct lpfc_vport *vport, char *buf, int size)
+ 		snprintf(buffer,
+ 			LPFC_DEBUG_TRC_ENTRY_SIZE, "%010d:%010d ms:%s\n",
+ 			dtp->seq_cnt, ms, dtp->fmt);
+-		len +=  snprintf(buf+len, size-len, buffer,
++		len +=  scnprintf(buf+len, size-len, buffer,
+ 			dtp->data1, dtp->data2, dtp->data3);
+ 	}
+ 
+@@ -236,7 +236,7 @@ lpfc_debugfs_slow_ring_trc_data(struct lpfc_hba *phba, char *buf, int size)
+ 		snprintf(buffer,
+ 			LPFC_DEBUG_TRC_ENTRY_SIZE, "%010d:%010d ms:%s\n",
+ 			dtp->seq_cnt, ms, dtp->fmt);
+-		len +=  snprintf(buf+len, size-len, buffer,
++		len +=  scnprintf(buf+len, size-len, buffer,
+ 			dtp->data1, dtp->data2, dtp->data3);
+ 	}
+ 	for (i = 0; i < index; i++) {
+@@ -247,7 +247,7 @@ lpfc_debugfs_slow_ring_trc_data(struct lpfc_hba *phba, char *buf, int size)
+ 		snprintf(buffer,
+ 			LPFC_DEBUG_TRC_ENTRY_SIZE, "%010d:%010d ms:%s\n",
+ 			dtp->seq_cnt, ms, dtp->fmt);
+-		len +=  snprintf(buf+len, size-len, buffer,
++		len +=  scnprintf(buf+len, size-len, buffer,
+ 			dtp->data1, dtp->data2, dtp->data3);
+ 	}
+ 
+@@ -307,7 +307,7 @@ lpfc_debugfs_hbqinfo_data(struct lpfc_hba *phba, char *buf, int size)
+ 
+ 	i = lpfc_debugfs_last_hbq;
+ 
+-	len +=  snprintf(buf+len, size-len, "HBQ %d Info\n", i);
++	len +=  scnprintf(buf+len, size-len, "HBQ %d Info\n", i);
+ 
+ 	hbqs =  &phba->hbqs[i];
+ 	posted = 0;
+@@ -315,21 +315,21 @@ lpfc_debugfs_hbqinfo_data(struct lpfc_hba *phba, char *buf, int size)
+ 		posted++;
+ 
+ 	hip =  lpfc_hbq_defs[i];
+-	len +=  snprintf(buf+len, size-len,
++	len +=  scnprintf(buf+len, size-len,
+ 		"idx:%d prof:%d rn:%d bufcnt:%d icnt:%d acnt:%d posted %d\n",
+ 		hip->hbq_index, hip->profile, hip->rn,
+ 		hip->buffer_count, hip->init_count, hip->add_count, posted);
+ 
+ 	raw_index = phba->hbq_get[i];
+ 	getidx = le32_to_cpu(raw_index);
+-	len +=  snprintf(buf+len, size-len,
++	len +=  scnprintf(buf+len, size-len,
+ 		"entries:%d bufcnt:%d Put:%d nPut:%d localGet:%d hbaGet:%d\n",
+ 		hbqs->entry_count, hbqs->buffer_count, hbqs->hbqPutIdx,
+ 		hbqs->next_hbqPutIdx, hbqs->local_hbqGetIdx, getidx);
+ 
+ 	hbqe = (struct lpfc_hbq_entry *) phba->hbqs[i].hbq_virt;
+ 	for (j=0; j<hbqs->entry_count; j++) {
+-		len +=  snprintf(buf+len, size-len,
++		len +=  scnprintf(buf+len, size-len,
+ 			"%03d: %08x %04x %05x ", j,
+ 			le32_to_cpu(hbqe->bde.addrLow),
+ 			le32_to_cpu(hbqe->bde.tus.w),
+@@ -341,14 +341,16 @@ lpfc_debugfs_hbqinfo_data(struct lpfc_hba *phba, char *buf, int size)
+ 		low = hbqs->hbqPutIdx - posted;
+ 		if (low >= 0) {
+ 			if ((j >= hbqs->hbqPutIdx) || (j < low)) {
+-				len +=  snprintf(buf+len, size-len, "Unused\n");
++				len +=  scnprintf(buf + len, size - len,
++						"Unused\n");
+ 				goto skipit;
+ 			}
+ 		}
+ 		else {
+ 			if ((j >= hbqs->hbqPutIdx) &&
+ 				(j < (hbqs->entry_count+low))) {
+-				len +=  snprintf(buf+len, size-len, "Unused\n");
++				len +=  scnprintf(buf + len, size - len,
++						"Unused\n");
+ 				goto skipit;
+ 			}
+ 		}
+@@ -358,7 +360,7 @@ lpfc_debugfs_hbqinfo_data(struct lpfc_hba *phba, char *buf, int size)
+ 			hbq_buf = container_of(d_buf, struct hbq_dmabuf, dbuf);
+ 			phys = ((uint64_t)hbq_buf->dbuf.phys & 0xffffffff);
+ 			if (phys == le32_to_cpu(hbqe->bde.addrLow)) {
+-				len +=  snprintf(buf+len, size-len,
++				len +=  scnprintf(buf+len, size-len,
+ 					"Buf%d: %p %06x\n", i,
+ 					hbq_buf->dbuf.virt, hbq_buf->tag);
+ 				found = 1;
+@@ -367,7 +369,7 @@ lpfc_debugfs_hbqinfo_data(struct lpfc_hba *phba, char *buf, int size)
+ 			i++;
+ 		}
+ 		if (!found) {
+-			len +=  snprintf(buf+len, size-len, "No DMAinfo?\n");
++			len +=  scnprintf(buf+len, size-len, "No DMAinfo?\n");
+ 		}
+ skipit:
+ 		hbqe++;
+@@ -413,14 +415,14 @@ lpfc_debugfs_commonxripools_data(struct lpfc_hba *phba, char *buf, int size)
+ 			break;
+ 		qp = &phba->sli4_hba.hdwq[lpfc_debugfs_last_xripool];
+ 
+-		len +=  snprintf(buf + len, size - len, "HdwQ %d Info ", i);
++		len += scnprintf(buf + len, size - len, "HdwQ %d Info ", i);
+ 		spin_lock_irqsave(&qp->abts_scsi_buf_list_lock, iflag);
+ 		spin_lock(&qp->abts_nvme_buf_list_lock);
+ 		spin_lock(&qp->io_buf_list_get_lock);
+ 		spin_lock(&qp->io_buf_list_put_lock);
+ 		out = qp->total_io_bufs - (qp->get_io_bufs + qp->put_io_bufs +
+ 			qp->abts_scsi_io_bufs + qp->abts_nvme_io_bufs);
+-		len +=  snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				 "tot:%d get:%d put:%d mt:%d "
+ 				 "ABTS scsi:%d nvme:%d Out:%d\n",
+ 			qp->total_io_bufs, qp->get_io_bufs, qp->put_io_bufs,
+@@ -612,9 +614,9 @@ lpfc_debugfs_lockstat_data(struct lpfc_hba *phba, char *buf, int size)
+ 			break;
+ 		qp = &phba->sli4_hba.hdwq[lpfc_debugfs_last_lock];
+ 
+-		len +=  snprintf(buf + len, size - len, "HdwQ %03d Lock ", i);
++		len += scnprintf(buf + len, size - len, "HdwQ %03d Lock ", i);
+ 		if (phba->cfg_xri_rebalancing) {
+-			len +=  snprintf(buf + len, size - len,
++			len += scnprintf(buf + len, size - len,
+ 					 "get_pvt:%d mv_pvt:%d "
+ 					 "mv2pub:%d mv2pvt:%d "
+ 					 "put_pvt:%d put_pub:%d wq:%d\n",
+@@ -626,7 +628,7 @@ lpfc_debugfs_lockstat_data(struct lpfc_hba *phba, char *buf, int size)
+ 					 qp->lock_conflict.free_pub_pool,
+ 					 qp->lock_conflict.wq_access);
+ 		} else {
+-			len +=  snprintf(buf + len, size - len,
++			len += scnprintf(buf + len, size - len,
+ 					 "get:%d put:%d free:%d wq:%d\n",
+ 					 qp->lock_conflict.alloc_xri_get,
+ 					 qp->lock_conflict.alloc_xri_put,
+@@ -678,7 +680,7 @@ lpfc_debugfs_dumpHBASlim_data(struct lpfc_hba *phba, char *buf, int size)
+ 	off = 0;
+ 	spin_lock_irq(&phba->hbalock);
+ 
+-	len +=  snprintf(buf+len, size-len, "HBA SLIM\n");
++	len +=  scnprintf(buf+len, size-len, "HBA SLIM\n");
+ 	lpfc_memcpy_from_slim(buffer,
+ 		phba->MBslimaddr + lpfc_debugfs_last_hba_slim_off, 1024);
+ 
+@@ -692,7 +694,7 @@ lpfc_debugfs_dumpHBASlim_data(struct lpfc_hba *phba, char *buf, int size)
+ 
+ 	i = 1024;
+ 	while (i > 0) {
+-		len +=  snprintf(buf+len, size-len,
++		len +=  scnprintf(buf+len, size-len,
+ 		"%08x: %08x %08x %08x %08x %08x %08x %08x %08x\n",
+ 		off, *ptr, *(ptr+1), *(ptr+2), *(ptr+3), *(ptr+4),
+ 		*(ptr+5), *(ptr+6), *(ptr+7));
+@@ -736,11 +738,11 @@ lpfc_debugfs_dumpHostSlim_data(struct lpfc_hba *phba, char *buf, int size)
+ 	off = 0;
+ 	spin_lock_irq(&phba->hbalock);
+ 
+-	len +=  snprintf(buf+len, size-len, "SLIM Mailbox\n");
++	len +=  scnprintf(buf+len, size-len, "SLIM Mailbox\n");
+ 	ptr = (uint32_t *)phba->slim2p.virt;
+ 	i = sizeof(MAILBOX_t);
+ 	while (i > 0) {
+-		len +=  snprintf(buf+len, size-len,
++		len +=  scnprintf(buf+len, size-len,
+ 		"%08x: %08x %08x %08x %08x %08x %08x %08x %08x\n",
+ 		off, *ptr, *(ptr+1), *(ptr+2), *(ptr+3), *(ptr+4),
+ 		*(ptr+5), *(ptr+6), *(ptr+7));
+@@ -749,11 +751,11 @@ lpfc_debugfs_dumpHostSlim_data(struct lpfc_hba *phba, char *buf, int size)
+ 		off += (8 * sizeof(uint32_t));
+ 	}
+ 
+-	len +=  snprintf(buf+len, size-len, "SLIM PCB\n");
++	len +=  scnprintf(buf+len, size-len, "SLIM PCB\n");
+ 	ptr = (uint32_t *)phba->pcb;
+ 	i = sizeof(PCB_t);
+ 	while (i > 0) {
+-		len +=  snprintf(buf+len, size-len,
++		len +=  scnprintf(buf+len, size-len,
+ 		"%08x: %08x %08x %08x %08x %08x %08x %08x %08x\n",
+ 		off, *ptr, *(ptr+1), *(ptr+2), *(ptr+3), *(ptr+4),
+ 		*(ptr+5), *(ptr+6), *(ptr+7));
+@@ -766,7 +768,7 @@ lpfc_debugfs_dumpHostSlim_data(struct lpfc_hba *phba, char *buf, int size)
+ 		for (i = 0; i < 4; i++) {
+ 			pgpp = &phba->port_gp[i];
+ 			pring = &psli->sli3_ring[i];
+-			len +=  snprintf(buf+len, size-len,
++			len +=  scnprintf(buf+len, size-len,
+ 					 "Ring %d: CMD GetInx:%d "
+ 					 "(Max:%d Next:%d "
+ 					 "Local:%d flg:x%x)  "
+@@ -783,7 +785,7 @@ lpfc_debugfs_dumpHostSlim_data(struct lpfc_hba *phba, char *buf, int size)
+ 		word1 = readl(phba->CAregaddr);
+ 		word2 = readl(phba->HSregaddr);
+ 		word3 = readl(phba->HCregaddr);
+-		len +=  snprintf(buf+len, size-len, "HA:%08x CA:%08x HS:%08x "
++		len +=  scnprintf(buf+len, size-len, "HA:%08x CA:%08x HS:%08x "
+ 				 "HC:%08x\n", word0, word1, word2, word3);
+ 	}
+ 	spin_unlock_irq(&phba->hbalock);
+@@ -821,12 +823,12 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size)
+ 	cnt = (LPFC_NODELIST_SIZE / LPFC_NODELIST_ENTRY_SIZE);
+ 	outio = 0;
+ 
+-	len += snprintf(buf+len, size-len, "\nFCP Nodelist Entries ...\n");
++	len += scnprintf(buf+len, size-len, "\nFCP Nodelist Entries ...\n");
+ 	spin_lock_irq(shost->host_lock);
+ 	list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
+ 		iocnt = 0;
+ 		if (!cnt) {
+-			len +=  snprintf(buf+len, size-len,
++			len +=  scnprintf(buf+len, size-len,
+ 				"Missing Nodelist Entries\n");
+ 			break;
+ 		}
+@@ -864,63 +866,63 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size)
+ 		default:
+ 			statep = "UNKNOWN";
+ 		}
+-		len += snprintf(buf+len, size-len, "%s DID:x%06x ",
++		len += scnprintf(buf+len, size-len, "%s DID:x%06x ",
+ 				statep, ndlp->nlp_DID);
+-		len += snprintf(buf+len, size-len,
++		len += scnprintf(buf+len, size-len,
+ 				"WWPN x%llx ",
+ 				wwn_to_u64(ndlp->nlp_portname.u.wwn));
+-		len += snprintf(buf+len, size-len,
++		len += scnprintf(buf+len, size-len,
+ 				"WWNN x%llx ",
+ 				wwn_to_u64(ndlp->nlp_nodename.u.wwn));
+ 		if (ndlp->nlp_flag & NLP_RPI_REGISTERED)
+-			len += snprintf(buf+len, size-len, "RPI:%03d ",
++			len += scnprintf(buf+len, size-len, "RPI:%03d ",
+ 					ndlp->nlp_rpi);
+ 		else
+-			len += snprintf(buf+len, size-len, "RPI:none ");
+-		len +=  snprintf(buf+len, size-len, "flag:x%08x ",
++			len += scnprintf(buf+len, size-len, "RPI:none ");
++		len +=  scnprintf(buf+len, size-len, "flag:x%08x ",
+ 			ndlp->nlp_flag);
+ 		if (!ndlp->nlp_type)
+-			len += snprintf(buf+len, size-len, "UNKNOWN_TYPE ");
++			len += scnprintf(buf+len, size-len, "UNKNOWN_TYPE ");
+ 		if (ndlp->nlp_type & NLP_FC_NODE)
+-			len += snprintf(buf+len, size-len, "FC_NODE ");
++			len += scnprintf(buf+len, size-len, "FC_NODE ");
+ 		if (ndlp->nlp_type & NLP_FABRIC) {
+-			len += snprintf(buf+len, size-len, "FABRIC ");
++			len += scnprintf(buf+len, size-len, "FABRIC ");
+ 			iocnt = 0;
+ 		}
+ 		if (ndlp->nlp_type & NLP_FCP_TARGET)
+-			len += snprintf(buf+len, size-len, "FCP_TGT sid:%d ",
++			len += scnprintf(buf+len, size-len, "FCP_TGT sid:%d ",
+ 				ndlp->nlp_sid);
+ 		if (ndlp->nlp_type & NLP_FCP_INITIATOR)
+-			len += snprintf(buf+len, size-len, "FCP_INITIATOR ");
++			len += scnprintf(buf+len, size-len, "FCP_INITIATOR ");
+ 		if (ndlp->nlp_type & NLP_NVME_TARGET)
+-			len += snprintf(buf + len,
++			len += scnprintf(buf + len,
+ 					size - len, "NVME_TGT sid:%d ",
+ 					NLP_NO_SID);
+ 		if (ndlp->nlp_type & NLP_NVME_INITIATOR)
+-			len += snprintf(buf + len,
++			len += scnprintf(buf + len,
+ 					size - len, "NVME_INITIATOR ");
+-		len += snprintf(buf+len, size-len, "usgmap:%x ",
++		len += scnprintf(buf+len, size-len, "usgmap:%x ",
+ 			ndlp->nlp_usg_map);
+-		len += snprintf(buf+len, size-len, "refcnt:%x",
++		len += scnprintf(buf+len, size-len, "refcnt:%x",
+ 			kref_read(&ndlp->kref));
+ 		if (iocnt) {
+ 			i = atomic_read(&ndlp->cmd_pending);
+-			len += snprintf(buf + len, size - len,
++			len += scnprintf(buf + len, size - len,
+ 					" OutIO:x%x Qdepth x%x",
+ 					i, ndlp->cmd_qdepth);
+ 			outio += i;
+ 		}
+-		len += snprintf(buf + len, size - len, "defer:%x ",
++		len += scnprintf(buf + len, size - len, "defer:%x ",
+ 			ndlp->nlp_defer_did);
+-		len +=  snprintf(buf+len, size-len, "\n");
++		len +=  scnprintf(buf+len, size-len, "\n");
+ 	}
+ 	spin_unlock_irq(shost->host_lock);
+ 
+-	len += snprintf(buf + len, size - len,
++	len += scnprintf(buf + len, size - len,
+ 			"\nOutstanding IO x%x\n",  outio);
+ 
+ 	if (phba->nvmet_support && phba->targetport && (vport == phba->pport)) {
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"\nNVME Targetport Entry ...\n");
+ 
+ 		/* Port state is only one of two values for now. */
+@@ -928,18 +930,18 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size)
+ 			statep = "REGISTERED";
+ 		else
+ 			statep = "INIT";
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"TGT WWNN x%llx WWPN x%llx State %s\n",
+ 				wwn_to_u64(vport->fc_nodename.u.wwn),
+ 				wwn_to_u64(vport->fc_portname.u.wwn),
+ 				statep);
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"    Targetport DID x%06x\n",
+ 				phba->targetport->port_id);
+ 		goto out_exit;
+ 	}
+ 
+-	len += snprintf(buf + len, size - len,
++	len += scnprintf(buf + len, size - len,
+ 				"\nNVME Lport/Rport Entries ...\n");
+ 
+ 	localport = vport->localport;
+@@ -954,11 +956,11 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size)
+ 	else
+ 		statep = "UNKNOWN ";
+ 
+-	len += snprintf(buf + len, size - len,
++	len += scnprintf(buf + len, size - len,
+ 			"Lport DID x%06x PortState %s\n",
+ 			localport->port_id, statep);
+ 
+-	len += snprintf(buf + len, size - len, "\tRport List:\n");
++	len += scnprintf(buf + len, size - len, "\tRport List:\n");
+ 	list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
+ 		/* local short-hand pointer. */
+ 		spin_lock(&phba->hbalock);
+@@ -985,32 +987,32 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size)
+ 		}
+ 
+ 		/* Tab in to show lport ownership. */
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"\t%s Port ID:x%06x ",
+ 				statep, nrport->port_id);
+-		len += snprintf(buf + len, size - len, "WWPN x%llx ",
++		len += scnprintf(buf + len, size - len, "WWPN x%llx ",
+ 				nrport->port_name);
+-		len += snprintf(buf + len, size - len, "WWNN x%llx ",
++		len += scnprintf(buf + len, size - len, "WWNN x%llx ",
+ 				nrport->node_name);
+ 
+ 		/* An NVME rport can have multiple roles. */
+ 		if (nrport->port_role & FC_PORT_ROLE_NVME_INITIATOR)
+-			len +=  snprintf(buf + len, size - len,
++			len +=  scnprintf(buf + len, size - len,
+ 					 "INITIATOR ");
+ 		if (nrport->port_role & FC_PORT_ROLE_NVME_TARGET)
+-			len +=  snprintf(buf + len, size - len,
++			len +=  scnprintf(buf + len, size - len,
+ 					 "TARGET ");
+ 		if (nrport->port_role & FC_PORT_ROLE_NVME_DISCOVERY)
+-			len +=  snprintf(buf + len, size - len,
++			len +=  scnprintf(buf + len, size - len,
+ 					 "DISCSRVC ");
+ 		if (nrport->port_role & ~(FC_PORT_ROLE_NVME_INITIATOR |
+ 					  FC_PORT_ROLE_NVME_TARGET |
+ 					  FC_PORT_ROLE_NVME_DISCOVERY))
+-			len +=  snprintf(buf + len, size - len,
++			len +=  scnprintf(buf + len, size - len,
+ 					 "UNKNOWN ROLE x%x",
+ 					 nrport->port_role);
+ 		/* Terminate the string. */
+-		len +=  snprintf(buf + len, size - len, "\n");
++		len +=  scnprintf(buf + len, size - len, "\n");
+ 	}
+ 
+ 	spin_unlock_irq(shost->host_lock);
+@@ -1049,35 +1051,35 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
+ 		if (!phba->targetport)
+ 			return len;
+ 		tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"\nNVME Targetport Statistics\n");
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"LS: Rcv %08x Drop %08x Abort %08x\n",
+ 				atomic_read(&tgtp->rcv_ls_req_in),
+ 				atomic_read(&tgtp->rcv_ls_req_drop),
+ 				atomic_read(&tgtp->xmt_ls_abort));
+ 		if (atomic_read(&tgtp->rcv_ls_req_in) !=
+ 		    atomic_read(&tgtp->rcv_ls_req_out)) {
+-			len += snprintf(buf + len, size - len,
++			len += scnprintf(buf + len, size - len,
+ 					"Rcv LS: in %08x != out %08x\n",
+ 					atomic_read(&tgtp->rcv_ls_req_in),
+ 					atomic_read(&tgtp->rcv_ls_req_out));
+ 		}
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"LS: Xmt %08x Drop %08x Cmpl %08x\n",
+ 				atomic_read(&tgtp->xmt_ls_rsp),
+ 				atomic_read(&tgtp->xmt_ls_drop),
+ 				atomic_read(&tgtp->xmt_ls_rsp_cmpl));
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"LS: RSP Abort %08x xb %08x Err %08x\n",
+ 				atomic_read(&tgtp->xmt_ls_rsp_aborted),
+ 				atomic_read(&tgtp->xmt_ls_rsp_xb_set),
+ 				atomic_read(&tgtp->xmt_ls_rsp_error));
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"FCP: Rcv %08x Defer %08x Release %08x "
+ 				"Drop %08x\n",
+ 				atomic_read(&tgtp->rcv_fcp_cmd_in),
+@@ -1087,13 +1089,13 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
+ 
+ 		if (atomic_read(&tgtp->rcv_fcp_cmd_in) !=
+ 		    atomic_read(&tgtp->rcv_fcp_cmd_out)) {
+-			len += snprintf(buf + len, size - len,
++			len += scnprintf(buf + len, size - len,
+ 					"Rcv FCP: in %08x != out %08x\n",
+ 					atomic_read(&tgtp->rcv_fcp_cmd_in),
+ 					atomic_read(&tgtp->rcv_fcp_cmd_out));
+ 		}
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"FCP Rsp: read %08x readrsp %08x "
+ 				"write %08x rsp %08x\n",
+ 				atomic_read(&tgtp->xmt_fcp_read),
+@@ -1101,31 +1103,31 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
+ 				atomic_read(&tgtp->xmt_fcp_write),
+ 				atomic_read(&tgtp->xmt_fcp_rsp));
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"FCP Rsp Cmpl: %08x err %08x drop %08x\n",
+ 				atomic_read(&tgtp->xmt_fcp_rsp_cmpl),
+ 				atomic_read(&tgtp->xmt_fcp_rsp_error),
+ 				atomic_read(&tgtp->xmt_fcp_rsp_drop));
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"FCP Rsp Abort: %08x xb %08x xricqe  %08x\n",
+ 				atomic_read(&tgtp->xmt_fcp_rsp_aborted),
+ 				atomic_read(&tgtp->xmt_fcp_rsp_xb_set),
+ 				atomic_read(&tgtp->xmt_fcp_xri_abort_cqe));
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"ABORT: Xmt %08x Cmpl %08x\n",
+ 				atomic_read(&tgtp->xmt_fcp_abort),
+ 				atomic_read(&tgtp->xmt_fcp_abort_cmpl));
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"ABORT: Sol %08x  Usol %08x Err %08x Cmpl %08x",
+ 				atomic_read(&tgtp->xmt_abort_sol),
+ 				atomic_read(&tgtp->xmt_abort_unsol),
+ 				atomic_read(&tgtp->xmt_abort_rsp),
+ 				atomic_read(&tgtp->xmt_abort_rsp_error));
+ 
+-		len +=  snprintf(buf + len, size - len, "\n");
++		len +=  scnprintf(buf + len, size - len, "\n");
+ 
+ 		cnt = 0;
+ 		spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
+@@ -1136,7 +1138,7 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
+ 		}
+ 		spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
+ 		if (cnt) {
+-			len += snprintf(buf + len, size - len,
++			len += scnprintf(buf + len, size - len,
+ 					"ABORT: %d ctx entries\n", cnt);
+ 			spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
+ 			list_for_each_entry_safe(ctxp, next_ctxp,
+@@ -1144,7 +1146,7 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
+ 				    list) {
+ 				if (len >= (size - LPFC_DEBUG_OUT_LINE_SZ))
+ 					break;
+-				len += snprintf(buf + len, size - len,
++				len += scnprintf(buf + len, size - len,
+ 						"Entry: oxid %x state %x "
+ 						"flag %x\n",
+ 						ctxp->oxid, ctxp->state,
+@@ -1158,7 +1160,7 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
+ 		tot += atomic_read(&tgtp->xmt_fcp_release);
+ 		tot = atomic_read(&tgtp->rcv_fcp_cmd_in) - tot;
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"IO_CTX: %08x  WAIT: cur %08x tot %08x\n"
+ 				"CTX Outstanding %08llx\n",
+ 				phba->sli4_hba.nvmet_xri_cnt,
+@@ -1176,10 +1178,10 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
+ 		if (!lport)
+ 			return len;
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"\nNVME HDWQ Statistics\n");
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"LS: Xmt %016x Cmpl %016x\n",
+ 				atomic_read(&lport->fc4NvmeLsRequests),
+ 				atomic_read(&lport->fc4NvmeLsCmpls));
+@@ -1199,20 +1201,20 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
+ 			if (i >= 32)
+ 				continue;
+ 
+-			len += snprintf(buf + len, PAGE_SIZE - len,
++			len += scnprintf(buf + len, PAGE_SIZE - len,
+ 					"HDWQ (%d): Rd %016llx Wr %016llx "
+ 					"IO %016llx ",
+ 					i, data1, data2, data3);
+-			len += snprintf(buf + len, PAGE_SIZE - len,
++			len += scnprintf(buf + len, PAGE_SIZE - len,
+ 					"Cmpl %016llx OutIO %016llx\n",
+ 					tot, ((data1 + data2 + data3) - tot));
+ 		}
+-		len += snprintf(buf + len, PAGE_SIZE - len,
++		len += scnprintf(buf + len, PAGE_SIZE - len,
+ 				"Total FCP Cmpl %016llx Issue %016llx "
+ 				"OutIO %016llx\n",
+ 				totin, totout, totout - totin);
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"LS Xmt Err: Abrt %08x Err %08x  "
+ 				"Cmpl Err: xb %08x Err %08x\n",
+ 				atomic_read(&lport->xmt_ls_abort),
+@@ -1220,7 +1222,7 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
+ 				atomic_read(&lport->cmpl_ls_xb),
+ 				atomic_read(&lport->cmpl_ls_err));
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"FCP Xmt Err: noxri %06x nondlp %06x "
+ 				"qdepth %06x wqerr %06x err %06x Abrt %06x\n",
+ 				atomic_read(&lport->xmt_fcp_noxri),
+@@ -1230,7 +1232,7 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
+ 				atomic_read(&lport->xmt_fcp_err),
+ 				atomic_read(&lport->xmt_fcp_abort));
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"FCP Cmpl Err: xb %08x Err %08x\n",
+ 				atomic_read(&lport->cmpl_fcp_xb),
+ 				atomic_read(&lport->cmpl_fcp_err));
+@@ -1322,58 +1324,58 @@ lpfc_debugfs_nvmektime_data(struct lpfc_vport *vport, char *buf, int size)
+ 
+ 	if (phba->nvmet_support == 0) {
+ 		/* NVME Initiator */
+-		len += snprintf(buf + len, PAGE_SIZE - len,
++		len += scnprintf(buf + len, PAGE_SIZE - len,
+ 				"ktime %s: Total Samples: %lld\n",
+ 				(phba->ktime_on ?  "Enabled" : "Disabled"),
+ 				phba->ktime_data_samples);
+ 		if (phba->ktime_data_samples == 0)
+ 			return len;
+ 
+-		len += snprintf(
++		len += scnprintf(
+ 			buf + len, PAGE_SIZE - len,
+ 			"Segment 1: Last NVME Cmd cmpl "
+ 			"done -to- Start of next NVME cnd (in driver)\n");
+-		len += snprintf(
++		len += scnprintf(
+ 			buf + len, PAGE_SIZE - len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg1_total,
+ 				phba->ktime_data_samples),
+ 			phba->ktime_seg1_min,
+ 			phba->ktime_seg1_max);
+-		len += snprintf(
++		len += scnprintf(
+ 			buf + len, PAGE_SIZE - len,
+ 			"Segment 2: Driver start of NVME cmd "
+ 			"-to- Firmware WQ doorbell\n");
+-		len += snprintf(
++		len += scnprintf(
+ 			buf + len, PAGE_SIZE - len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg2_total,
+ 				phba->ktime_data_samples),
+ 			phba->ktime_seg2_min,
+ 			phba->ktime_seg2_max);
+-		len += snprintf(
++		len += scnprintf(
+ 			buf + len, PAGE_SIZE - len,
+ 			"Segment 3: Firmware WQ doorbell -to- "
+ 			"MSI-X ISR cmpl\n");
+-		len += snprintf(
++		len += scnprintf(
+ 			buf + len, PAGE_SIZE - len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg3_total,
+ 				phba->ktime_data_samples),
+ 			phba->ktime_seg3_min,
+ 			phba->ktime_seg3_max);
+-		len += snprintf(
++		len += scnprintf(
+ 			buf + len, PAGE_SIZE - len,
+ 			"Segment 4: MSI-X ISR cmpl -to- "
+ 			"NVME cmpl done\n");
+-		len += snprintf(
++		len += scnprintf(
+ 			buf + len, PAGE_SIZE - len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg4_total,
+ 				phba->ktime_data_samples),
+ 			phba->ktime_seg4_min,
+ 			phba->ktime_seg4_max);
+-		len += snprintf(
++		len += scnprintf(
+ 			buf + len, PAGE_SIZE - len,
+ 			"Total IO avg time: %08lld\n",
+ 			div_u64(phba->ktime_seg1_total +
+@@ -1385,7 +1387,7 @@ lpfc_debugfs_nvmektime_data(struct lpfc_vport *vport, char *buf, int size)
+ 	}
+ 
+ 	/* NVME Target */
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"ktime %s: Total Samples: %lld %lld\n",
+ 			(phba->ktime_on ? "Enabled" : "Disabled"),
+ 			phba->ktime_data_samples,
+@@ -1393,46 +1395,46 @@ lpfc_debugfs_nvmektime_data(struct lpfc_vport *vport, char *buf, int size)
+ 	if (phba->ktime_data_samples == 0)
+ 		return len;
+ 
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"Segment 1: MSI-X ISR Rcv cmd -to- "
+ 			"cmd pass to NVME Layer\n");
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg1_total,
+ 				phba->ktime_data_samples),
+ 			phba->ktime_seg1_min,
+ 			phba->ktime_seg1_max);
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"Segment 2: cmd pass to NVME Layer- "
+ 			"-to- Driver rcv cmd OP (action)\n");
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg2_total,
+ 				phba->ktime_data_samples),
+ 			phba->ktime_seg2_min,
+ 			phba->ktime_seg2_max);
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"Segment 3: Driver rcv cmd OP -to- "
+ 			"Firmware WQ doorbell: cmd\n");
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg3_total,
+ 				phba->ktime_data_samples),
+ 			phba->ktime_seg3_min,
+ 			phba->ktime_seg3_max);
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"Segment 4: Firmware WQ doorbell: cmd "
+ 			"-to- MSI-X ISR for cmd cmpl\n");
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg4_total,
+ 				phba->ktime_data_samples),
+ 			phba->ktime_seg4_min,
+ 			phba->ktime_seg4_max);
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"Segment 5: MSI-X ISR for cmd cmpl "
+ 			"-to- NVME layer passed cmd done\n");
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg5_total,
+ 				phba->ktime_data_samples),
+@@ -1440,10 +1442,10 @@ lpfc_debugfs_nvmektime_data(struct lpfc_vport *vport, char *buf, int size)
+ 			phba->ktime_seg5_max);
+ 
+ 	if (phba->ktime_status_samples == 0) {
+-		len += snprintf(buf + len, PAGE_SIZE-len,
++		len += scnprintf(buf + len, PAGE_SIZE-len,
+ 				"Total: cmd received by MSI-X ISR "
+ 				"-to- cmd completed on wire\n");
+-		len += snprintf(buf + len, PAGE_SIZE-len,
++		len += scnprintf(buf + len, PAGE_SIZE-len,
+ 				"avg:%08lld min:%08lld "
+ 				"max %08lld\n",
+ 				div_u64(phba->ktime_seg10_total,
+@@ -1453,46 +1455,46 @@ lpfc_debugfs_nvmektime_data(struct lpfc_vport *vport, char *buf, int size)
+ 		return len;
+ 	}
+ 
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"Segment 6: NVME layer passed cmd done "
+ 			"-to- Driver rcv rsp status OP\n");
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg6_total,
+ 				phba->ktime_status_samples),
+ 			phba->ktime_seg6_min,
+ 			phba->ktime_seg6_max);
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"Segment 7: Driver rcv rsp status OP "
+ 			"-to- Firmware WQ doorbell: status\n");
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg7_total,
+ 				phba->ktime_status_samples),
+ 			phba->ktime_seg7_min,
+ 			phba->ktime_seg7_max);
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"Segment 8: Firmware WQ doorbell: status"
+ 			" -to- MSI-X ISR for status cmpl\n");
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg8_total,
+ 				phba->ktime_status_samples),
+ 			phba->ktime_seg8_min,
+ 			phba->ktime_seg8_max);
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"Segment 9: MSI-X ISR for status cmpl  "
+ 			"-to- NVME layer passed status done\n");
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg9_total,
+ 				phba->ktime_status_samples),
+ 			phba->ktime_seg9_min,
+ 			phba->ktime_seg9_max);
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"Total: cmd received by MSI-X ISR -to- "
+ 			"cmd completed on wire\n");
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg10_total,
+ 				phba->ktime_status_samples),
+@@ -1527,7 +1529,7 @@ lpfc_debugfs_nvmeio_trc_data(struct lpfc_hba *phba, char *buf, int size)
+ 		(phba->nvmeio_trc_size - 1);
+ 	skip = phba->nvmeio_trc_output_idx;
+ 
+-	len += snprintf(buf + len, size - len,
++	len += scnprintf(buf + len, size - len,
+ 			"%s IO Trace %s: next_idx %d skip %d size %d\n",
+ 			(phba->nvmet_support ? "NVME" : "NVMET"),
+ 			(state ? "Enabled" : "Disabled"),
+@@ -1549,18 +1551,18 @@ lpfc_debugfs_nvmeio_trc_data(struct lpfc_hba *phba, char *buf, int size)
+ 		if (!dtp->fmt)
+ 			continue;
+ 
+-		len +=  snprintf(buf + len, size - len, dtp->fmt,
++		len +=  scnprintf(buf + len, size - len, dtp->fmt,
+ 			dtp->data1, dtp->data2, dtp->data3);
+ 
+ 		if (phba->nvmeio_trc_output_idx >= phba->nvmeio_trc_size) {
+ 			phba->nvmeio_trc_output_idx = 0;
+-			len += snprintf(buf + len, size - len,
++			len += scnprintf(buf + len, size - len,
+ 					"Trace Complete\n");
+ 			goto out;
+ 		}
+ 
+ 		if (len >= (size - LPFC_DEBUG_OUT_LINE_SZ)) {
+-			len += snprintf(buf + len, size - len,
++			len += scnprintf(buf + len, size - len,
+ 					"Trace Continue (%d of %d)\n",
+ 					phba->nvmeio_trc_output_idx,
+ 					phba->nvmeio_trc_size);
+@@ -1578,18 +1580,18 @@ lpfc_debugfs_nvmeio_trc_data(struct lpfc_hba *phba, char *buf, int size)
+ 		if (!dtp->fmt)
+ 			continue;
+ 
+-		len +=  snprintf(buf + len, size - len, dtp->fmt,
++		len +=  scnprintf(buf + len, size - len, dtp->fmt,
+ 			dtp->data1, dtp->data2, dtp->data3);
+ 
+ 		if (phba->nvmeio_trc_output_idx >= phba->nvmeio_trc_size) {
+ 			phba->nvmeio_trc_output_idx = 0;
+-			len += snprintf(buf + len, size - len,
++			len += scnprintf(buf + len, size - len,
+ 					"Trace Complete\n");
+ 			goto out;
+ 		}
+ 
+ 		if (len >= (size - LPFC_DEBUG_OUT_LINE_SZ)) {
+-			len += snprintf(buf + len, size - len,
++			len += scnprintf(buf + len, size - len,
+ 					"Trace Continue (%d of %d)\n",
+ 					phba->nvmeio_trc_output_idx,
+ 					phba->nvmeio_trc_size);
+@@ -1597,7 +1599,7 @@ lpfc_debugfs_nvmeio_trc_data(struct lpfc_hba *phba, char *buf, int size)
+ 		}
+ 	}
+ 
+-	len += snprintf(buf + len, size - len,
++	len += scnprintf(buf + len, size - len,
+ 			"Trace Done\n");
+ out:
+ 	return len;
+@@ -1627,17 +1629,17 @@ lpfc_debugfs_cpucheck_data(struct lpfc_vport *vport, char *buf, int size)
+ 	uint32_t tot_rcv;
+ 	uint32_t tot_cmpl;
+ 
+-	len += snprintf(buf + len, PAGE_SIZE - len,
++	len += scnprintf(buf + len, PAGE_SIZE - len,
+ 			"CPUcheck %s ",
+ 			(phba->cpucheck_on & LPFC_CHECK_NVME_IO ?
+ 				"Enabled" : "Disabled"));
+ 	if (phba->nvmet_support) {
+-		len += snprintf(buf + len, PAGE_SIZE - len,
++		len += scnprintf(buf + len, PAGE_SIZE - len,
+ 				"%s\n",
+ 				(phba->cpucheck_on & LPFC_CHECK_NVMET_RCV ?
+ 					"Rcv Enabled\n" : "Rcv Disabled\n"));
+ 	} else {
+-		len += snprintf(buf + len, PAGE_SIZE - len, "\n");
++		len += scnprintf(buf + len, PAGE_SIZE - len, "\n");
+ 	}
+ 	max_cnt = size - LPFC_DEBUG_OUT_LINE_SZ;
+ 
+@@ -1658,7 +1660,7 @@ lpfc_debugfs_cpucheck_data(struct lpfc_vport *vport, char *buf, int size)
+ 		if (!tot_xmt && !tot_cmpl && !tot_rcv)
+ 			continue;
+ 
+-		len += snprintf(buf + len, PAGE_SIZE - len,
++		len += scnprintf(buf + len, PAGE_SIZE - len,
+ 				"HDWQ %03d: ", i);
+ 		for (j = 0; j < LPFC_CHECK_CPU_CNT; j++) {
+ 			/* Only display non-zero counters */
+@@ -1667,22 +1669,22 @@ lpfc_debugfs_cpucheck_data(struct lpfc_vport *vport, char *buf, int size)
+ 			    !qp->cpucheck_rcv_io[j])
+ 				continue;
+ 			if (phba->nvmet_support) {
+-				len += snprintf(buf + len, PAGE_SIZE - len,
++				len += scnprintf(buf + len, PAGE_SIZE - len,
+ 						"CPU %03d: %x/%x/%x ", j,
+ 						qp->cpucheck_rcv_io[j],
+ 						qp->cpucheck_xmt_io[j],
+ 						qp->cpucheck_cmpl_io[j]);
+ 			} else {
+-				len += snprintf(buf + len, PAGE_SIZE - len,
++				len += scnprintf(buf + len, PAGE_SIZE - len,
+ 						"CPU %03d: %x/%x ", j,
+ 						qp->cpucheck_xmt_io[j],
+ 						qp->cpucheck_cmpl_io[j]);
+ 			}
+ 		}
+-		len += snprintf(buf + len, PAGE_SIZE - len,
++		len += scnprintf(buf + len, PAGE_SIZE - len,
+ 				"Total: %x\n", tot_xmt);
+ 		if (len >= max_cnt) {
+-			len += snprintf(buf + len, PAGE_SIZE - len,
++			len += scnprintf(buf + len, PAGE_SIZE - len,
+ 					"Truncated ...\n");
+ 			return len;
+ 		}
+@@ -2258,28 +2260,29 @@ lpfc_debugfs_dif_err_read(struct file *file, char __user *buf,
+ 	int cnt = 0;
+ 
+ 	if (dent == phba->debug_writeGuard)
+-		cnt = snprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_wgrd_cnt);
++		cnt = scnprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_wgrd_cnt);
+ 	else if (dent == phba->debug_writeApp)
+-		cnt = snprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_wapp_cnt);
++		cnt = scnprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_wapp_cnt);
+ 	else if (dent == phba->debug_writeRef)
+-		cnt = snprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_wref_cnt);
++		cnt = scnprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_wref_cnt);
+ 	else if (dent == phba->debug_readGuard)
+-		cnt = snprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_rgrd_cnt);
++		cnt = scnprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_rgrd_cnt);
+ 	else if (dent == phba->debug_readApp)
+-		cnt = snprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_rapp_cnt);
++		cnt = scnprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_rapp_cnt);
+ 	else if (dent == phba->debug_readRef)
+-		cnt = snprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_rref_cnt);
++		cnt = scnprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_rref_cnt);
+ 	else if (dent == phba->debug_InjErrNPortID)
+-		cnt = snprintf(cbuf, 32, "0x%06x\n", phba->lpfc_injerr_nportid);
++		cnt = scnprintf(cbuf, 32, "0x%06x\n",
++				phba->lpfc_injerr_nportid);
+ 	else if (dent == phba->debug_InjErrWWPN) {
+ 		memcpy(&tmp, &phba->lpfc_injerr_wwpn, sizeof(struct lpfc_name));
+ 		tmp = cpu_to_be64(tmp);
+-		cnt = snprintf(cbuf, 32, "0x%016llx\n", tmp);
++		cnt = scnprintf(cbuf, 32, "0x%016llx\n", tmp);
+ 	} else if (dent == phba->debug_InjErrLBA) {
+ 		if (phba->lpfc_injerr_lba == (sector_t)(-1))
+-			cnt = snprintf(cbuf, 32, "off\n");
++			cnt = scnprintf(cbuf, 32, "off\n");
+ 		else
+-			cnt = snprintf(cbuf, 32, "0x%llx\n",
++			cnt = scnprintf(cbuf, 32, "0x%llx\n",
+ 				 (uint64_t) phba->lpfc_injerr_lba);
+ 	} else
+ 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+@@ -3224,17 +3227,17 @@ lpfc_idiag_pcicfg_read(struct file *file, char __user *buf, size_t nbytes,
+ 	switch (count) {
+ 	case SIZE_U8: /* byte (8 bits) */
+ 		pci_read_config_byte(pdev, where, &u8val);
+-		len += snprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
+ 				"%03x: %02x\n", where, u8val);
+ 		break;
+ 	case SIZE_U16: /* word (16 bits) */
+ 		pci_read_config_word(pdev, where, &u16val);
+-		len += snprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
+ 				"%03x: %04x\n", where, u16val);
+ 		break;
+ 	case SIZE_U32: /* double word (32 bits) */
+ 		pci_read_config_dword(pdev, where, &u32val);
+-		len += snprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
+ 				"%03x: %08x\n", where, u32val);
+ 		break;
+ 	case LPFC_PCI_CFG_BROWSE: /* browse all */
+@@ -3254,25 +3257,25 @@ pcicfg_browse:
+ 	offset = offset_label;
+ 
+ 	/* Read PCI config space */
+-	len += snprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
+ 			"%03x: ", offset_label);
+ 	while (index > 0) {
+ 		pci_read_config_dword(pdev, offset, &u32val);
+-		len += snprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
+ 				"%08x ", u32val);
+ 		offset += sizeof(uint32_t);
+ 		if (offset >= LPFC_PCI_CFG_SIZE) {
+-			len += snprintf(pbuffer+len,
++			len += scnprintf(pbuffer+len,
+ 					LPFC_PCI_CFG_SIZE-len, "\n");
+ 			break;
+ 		}
+ 		index -= sizeof(uint32_t);
+ 		if (!index)
+-			len += snprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
++			len += scnprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
+ 					"\n");
+ 		else if (!(index % (8 * sizeof(uint32_t)))) {
+ 			offset_label += (8 * sizeof(uint32_t));
+-			len += snprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
++			len += scnprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
+ 					"\n%03x: ", offset_label);
+ 		}
+ 	}
+@@ -3543,7 +3546,7 @@ lpfc_idiag_baracc_read(struct file *file, char __user *buf, size_t nbytes,
+ 	if (acc_range == SINGLE_WORD) {
+ 		offset_run = offset;
+ 		u32val = readl(mem_mapped_bar + offset_run);
+-		len += snprintf(pbuffer+len, LPFC_PCI_BAR_RD_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_PCI_BAR_RD_BUF_SIZE-len,
+ 				"%05x: %08x\n", offset_run, u32val);
+ 	} else
+ 		goto baracc_browse;
+@@ -3557,35 +3560,35 @@ baracc_browse:
+ 	offset_run = offset_label;
+ 
+ 	/* Read PCI bar memory mapped space */
+-	len += snprintf(pbuffer+len, LPFC_PCI_BAR_RD_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_PCI_BAR_RD_BUF_SIZE-len,
+ 			"%05x: ", offset_label);
+ 	index = LPFC_PCI_BAR_RD_SIZE;
+ 	while (index > 0) {
+ 		u32val = readl(mem_mapped_bar + offset_run);
+-		len += snprintf(pbuffer+len, LPFC_PCI_BAR_RD_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_PCI_BAR_RD_BUF_SIZE-len,
+ 				"%08x ", u32val);
+ 		offset_run += sizeof(uint32_t);
+ 		if (acc_range == LPFC_PCI_BAR_BROWSE) {
+ 			if (offset_run >= bar_size) {
+-				len += snprintf(pbuffer+len,
++				len += scnprintf(pbuffer+len,
+ 					LPFC_PCI_BAR_RD_BUF_SIZE-len, "\n");
+ 				break;
+ 			}
+ 		} else {
+ 			if (offset_run >= offset +
+ 			    (acc_range * sizeof(uint32_t))) {
+-				len += snprintf(pbuffer+len,
++				len += scnprintf(pbuffer+len,
+ 					LPFC_PCI_BAR_RD_BUF_SIZE-len, "\n");
+ 				break;
+ 			}
+ 		}
+ 		index -= sizeof(uint32_t);
+ 		if (!index)
+-			len += snprintf(pbuffer+len,
++			len += scnprintf(pbuffer+len,
+ 					LPFC_PCI_BAR_RD_BUF_SIZE-len, "\n");
+ 		else if (!(index % (8 * sizeof(uint32_t)))) {
+ 			offset_label += (8 * sizeof(uint32_t));
+-			len += snprintf(pbuffer+len,
++			len += scnprintf(pbuffer+len,
+ 					LPFC_PCI_BAR_RD_BUF_SIZE-len,
+ 					"\n%05x: ", offset_label);
+ 		}
+@@ -3758,19 +3761,19 @@ __lpfc_idiag_print_wq(struct lpfc_queue *qp, char *wqtype,
+ 	if (!qp)
+ 		return len;
+ 
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"\t\t%s WQ info: ", wqtype);
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"AssocCQID[%04d]: WQ-STAT[oflow:x%x posted:x%llx]\n",
+ 			qp->assoc_qid, qp->q_cnt_1,
+ 			(unsigned long long)qp->q_cnt_4);
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"\t\tWQID[%02d], QE-CNT[%04d], QE-SZ[%04d], "
+ 			"HST-IDX[%04d], PRT-IDX[%04d], NTFI[%03d]",
+ 			qp->queue_id, qp->entry_count,
+ 			qp->entry_size, qp->host_index,
+ 			qp->hba_index, qp->notify_interval);
+-	len +=  snprintf(pbuffer + len,
++	len +=  scnprintf(pbuffer + len,
+ 			LPFC_QUE_INFO_GET_BUF_SIZE - len, "\n");
+ 	return len;
+ }
+@@ -3810,21 +3813,22 @@ __lpfc_idiag_print_cq(struct lpfc_queue *qp, char *cqtype,
+ 	if (!qp)
+ 		return len;
+ 
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"\t%s CQ info: ", cqtype);
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"AssocEQID[%02d]: CQ STAT[max:x%x relw:x%x "
+ 			"xabt:x%x wq:x%llx]\n",
+ 			qp->assoc_qid, qp->q_cnt_1, qp->q_cnt_2,
+ 			qp->q_cnt_3, (unsigned long long)qp->q_cnt_4);
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"\tCQID[%02d], QE-CNT[%04d], QE-SZ[%04d], "
+ 			"HST-IDX[%04d], NTFI[%03d], PLMT[%03d]",
+ 			qp->queue_id, qp->entry_count,
+ 			qp->entry_size, qp->host_index,
+ 			qp->notify_interval, qp->max_proc_limit);
+ 
+-	len +=  snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len, "\n");
++	len +=  scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++			"\n");
+ 
+ 	return len;
+ }
+@@ -3836,19 +3840,19 @@ __lpfc_idiag_print_rqpair(struct lpfc_queue *qp, struct lpfc_queue *datqp,
+ 	if (!qp || !datqp)
+ 		return len;
+ 
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"\t\t%s RQ info: ", rqtype);
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"AssocCQID[%02d]: RQ-STAT[nopost:x%x nobuf:x%x "
+ 			"posted:x%x rcv:x%llx]\n",
+ 			qp->assoc_qid, qp->q_cnt_1, qp->q_cnt_2,
+ 			qp->q_cnt_3, (unsigned long long)qp->q_cnt_4);
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"\t\tHQID[%02d], QE-CNT[%04d], QE-SZ[%04d], "
+ 			"HST-IDX[%04d], PRT-IDX[%04d], NTFI[%03d]\n",
+ 			qp->queue_id, qp->entry_count, qp->entry_size,
+ 			qp->host_index, qp->hba_index, qp->notify_interval);
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"\t\tDQID[%02d], QE-CNT[%04d], QE-SZ[%04d], "
+ 			"HST-IDX[%04d], PRT-IDX[%04d], NTFI[%03d]\n",
+ 			datqp->queue_id, datqp->entry_count,
+@@ -3927,18 +3931,19 @@ __lpfc_idiag_print_eq(struct lpfc_queue *qp, char *eqtype,
+ 	if (!qp)
+ 		return len;
+ 
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"\n%s EQ info: EQ-STAT[max:x%x noE:x%x "
+ 			"cqe_proc:x%x eqe_proc:x%llx eqd %d]\n",
+ 			eqtype, qp->q_cnt_1, qp->q_cnt_2, qp->q_cnt_3,
+ 			(unsigned long long)qp->q_cnt_4, qp->q_mode);
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"EQID[%02d], QE-CNT[%04d], QE-SZ[%04d], "
+ 			"HST-IDX[%04d], NTFI[%03d], PLMT[%03d], AFFIN[%03d]",
+ 			qp->queue_id, qp->entry_count, qp->entry_size,
+ 			qp->host_index, qp->notify_interval,
+ 			qp->max_proc_limit, qp->chann);
+-	len +=  snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len, "\n");
++	len +=  scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++			"\n");
+ 
+ 	return len;
+ }
+@@ -3991,9 +3996,10 @@ lpfc_idiag_queinfo_read(struct file *file, char __user *buf, size_t nbytes,
+ 		if (phba->lpfc_idiag_last_eq >= phba->cfg_hdw_queue)
+ 			phba->lpfc_idiag_last_eq = 0;
+ 
+-		len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+-					"HDWQ %d out of %d HBA HDWQs\n",
+-					x, phba->cfg_hdw_queue);
++		len += scnprintf(pbuffer + len,
++				 LPFC_QUE_INFO_GET_BUF_SIZE - len,
++				 "HDWQ %d out of %d HBA HDWQs\n",
++				 x, phba->cfg_hdw_queue);
+ 
+ 		/* Fast-path EQ */
+ 		qp = phba->sli4_hba.hdwq[x].hba_eq;
+@@ -4075,7 +4081,7 @@ lpfc_idiag_queinfo_read(struct file *file, char __user *buf, size_t nbytes,
+ 	return simple_read_from_buffer(buf, nbytes, ppos, pbuffer, len);
+ 
+ too_big:
+-	len +=  snprintf(pbuffer + len,
++	len +=  scnprintf(pbuffer + len,
+ 		LPFC_QUE_INFO_GET_BUF_SIZE - len, "Truncated ...\n");
+ out:
+ 	spin_unlock_irq(&phba->hbalock);
+@@ -4131,22 +4137,22 @@ lpfc_idiag_queacc_read_qe(char *pbuffer, int len, struct lpfc_queue *pque,
+ 		return 0;
+ 
+ 	esize = pque->entry_size;
+-	len += snprintf(pbuffer+len, LPFC_QUE_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_QUE_ACC_BUF_SIZE-len,
+ 			"QE-INDEX[%04d]:\n", index);
+ 
+ 	offset = 0;
+ 	pentry = pque->qe[index].address;
+ 	while (esize > 0) {
+-		len += snprintf(pbuffer+len, LPFC_QUE_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_QUE_ACC_BUF_SIZE-len,
+ 				"%08x ", *pentry);
+ 		pentry++;
+ 		offset += sizeof(uint32_t);
+ 		esize -= sizeof(uint32_t);
+ 		if (esize > 0 && !(offset % (4 * sizeof(uint32_t))))
+-			len += snprintf(pbuffer+len,
++			len += scnprintf(pbuffer+len,
+ 					LPFC_QUE_ACC_BUF_SIZE-len, "\n");
+ 	}
+-	len += snprintf(pbuffer+len, LPFC_QUE_ACC_BUF_SIZE-len, "\n");
++	len += scnprintf(pbuffer+len, LPFC_QUE_ACC_BUF_SIZE-len, "\n");
+ 
+ 	return len;
+ }
+@@ -4526,27 +4532,27 @@ lpfc_idiag_drbacc_read_reg(struct lpfc_hba *phba, char *pbuffer,
+ 
+ 	switch (drbregid) {
+ 	case LPFC_DRB_EQ:
+-		len += snprintf(pbuffer + len, LPFC_DRB_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer + len, LPFC_DRB_ACC_BUF_SIZE-len,
+ 				"EQ-DRB-REG: 0x%08x\n",
+ 				readl(phba->sli4_hba.EQDBregaddr));
+ 		break;
+ 	case LPFC_DRB_CQ:
+-		len += snprintf(pbuffer + len, LPFC_DRB_ACC_BUF_SIZE - len,
++		len += scnprintf(pbuffer + len, LPFC_DRB_ACC_BUF_SIZE - len,
+ 				"CQ-DRB-REG: 0x%08x\n",
+ 				readl(phba->sli4_hba.CQDBregaddr));
+ 		break;
+ 	case LPFC_DRB_MQ:
+-		len += snprintf(pbuffer+len, LPFC_DRB_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_DRB_ACC_BUF_SIZE-len,
+ 				"MQ-DRB-REG:   0x%08x\n",
+ 				readl(phba->sli4_hba.MQDBregaddr));
+ 		break;
+ 	case LPFC_DRB_WQ:
+-		len += snprintf(pbuffer+len, LPFC_DRB_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_DRB_ACC_BUF_SIZE-len,
+ 				"WQ-DRB-REG:   0x%08x\n",
+ 				readl(phba->sli4_hba.WQDBregaddr));
+ 		break;
+ 	case LPFC_DRB_RQ:
+-		len += snprintf(pbuffer+len, LPFC_DRB_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_DRB_ACC_BUF_SIZE-len,
+ 				"RQ-DRB-REG:   0x%08x\n",
+ 				readl(phba->sli4_hba.RQDBregaddr));
+ 		break;
+@@ -4736,37 +4742,37 @@ lpfc_idiag_ctlacc_read_reg(struct lpfc_hba *phba, char *pbuffer,
+ 
+ 	switch (ctlregid) {
+ 	case LPFC_CTL_PORT_SEM:
+-		len += snprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
+ 				"Port SemReg:   0x%08x\n",
+ 				readl(phba->sli4_hba.conf_regs_memmap_p +
+ 				      LPFC_CTL_PORT_SEM_OFFSET));
+ 		break;
+ 	case LPFC_CTL_PORT_STA:
+-		len += snprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
+ 				"Port StaReg:   0x%08x\n",
+ 				readl(phba->sli4_hba.conf_regs_memmap_p +
+ 				      LPFC_CTL_PORT_STA_OFFSET));
+ 		break;
+ 	case LPFC_CTL_PORT_CTL:
+-		len += snprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
+ 				"Port CtlReg:   0x%08x\n",
+ 				readl(phba->sli4_hba.conf_regs_memmap_p +
+ 				      LPFC_CTL_PORT_CTL_OFFSET));
+ 		break;
+ 	case LPFC_CTL_PORT_ER1:
+-		len += snprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
+ 				"Port Er1Reg:   0x%08x\n",
+ 				readl(phba->sli4_hba.conf_regs_memmap_p +
+ 				      LPFC_CTL_PORT_ER1_OFFSET));
+ 		break;
+ 	case LPFC_CTL_PORT_ER2:
+-		len += snprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
+ 				"Port Er2Reg:   0x%08x\n",
+ 				readl(phba->sli4_hba.conf_regs_memmap_p +
+ 				      LPFC_CTL_PORT_ER2_OFFSET));
+ 		break;
+ 	case LPFC_CTL_PDEV_CTL:
+-		len += snprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
+ 				"PDev CtlReg:   0x%08x\n",
+ 				readl(phba->sli4_hba.conf_regs_memmap_p +
+ 				      LPFC_CTL_PDEV_CTL_OFFSET));
+@@ -4959,13 +4965,13 @@ lpfc_idiag_mbxacc_get_setup(struct lpfc_hba *phba, char *pbuffer)
+ 	mbx_dump_cnt = idiag.cmd.data[IDIAG_MBXACC_DPCNT_INDX];
+ 	mbx_word_cnt = idiag.cmd.data[IDIAG_MBXACC_WDCNT_INDX];
+ 
+-	len += snprintf(pbuffer+len, LPFC_MBX_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_MBX_ACC_BUF_SIZE-len,
+ 			"mbx_dump_map: 0x%08x\n", mbx_dump_map);
+-	len += snprintf(pbuffer+len, LPFC_MBX_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_MBX_ACC_BUF_SIZE-len,
+ 			"mbx_dump_cnt: %04d\n", mbx_dump_cnt);
+-	len += snprintf(pbuffer+len, LPFC_MBX_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_MBX_ACC_BUF_SIZE-len,
+ 			"mbx_word_cnt: %04d\n", mbx_word_cnt);
+-	len += snprintf(pbuffer+len, LPFC_MBX_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_MBX_ACC_BUF_SIZE-len,
+ 			"mbx_mbox_cmd: 0x%02x\n", mbx_mbox_cmd);
+ 
+ 	return len;
+@@ -5114,35 +5120,35 @@ lpfc_idiag_extacc_avail_get(struct lpfc_hba *phba, char *pbuffer, int len)
+ {
+ 	uint16_t ext_cnt, ext_size;
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\nAvailable Extents Information:\n");
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tPort Available VPI extents: ");
+ 	lpfc_sli4_get_avail_extnt_rsrc(phba, LPFC_RSC_TYPE_FCOE_VPI,
+ 				       &ext_cnt, &ext_size);
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"Count %3d, Size %3d\n", ext_cnt, ext_size);
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tPort Available VFI extents: ");
+ 	lpfc_sli4_get_avail_extnt_rsrc(phba, LPFC_RSC_TYPE_FCOE_VFI,
+ 				       &ext_cnt, &ext_size);
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"Count %3d, Size %3d\n", ext_cnt, ext_size);
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tPort Available RPI extents: ");
+ 	lpfc_sli4_get_avail_extnt_rsrc(phba, LPFC_RSC_TYPE_FCOE_RPI,
+ 				       &ext_cnt, &ext_size);
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"Count %3d, Size %3d\n", ext_cnt, ext_size);
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tPort Available XRI extents: ");
+ 	lpfc_sli4_get_avail_extnt_rsrc(phba, LPFC_RSC_TYPE_FCOE_XRI,
+ 				       &ext_cnt, &ext_size);
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"Count %3d, Size %3d\n", ext_cnt, ext_size);
+ 
+ 	return len;
+@@ -5166,55 +5172,55 @@ lpfc_idiag_extacc_alloc_get(struct lpfc_hba *phba, char *pbuffer, int len)
+ 	uint16_t ext_cnt, ext_size;
+ 	int rc;
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\nAllocated Extents Information:\n");
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tHost Allocated VPI extents: ");
+ 	rc = lpfc_sli4_get_allocated_extnts(phba, LPFC_RSC_TYPE_FCOE_VPI,
+ 					    &ext_cnt, &ext_size);
+ 	if (!rc)
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"Port %d Extent %3d, Size %3d\n",
+ 				phba->brd_no, ext_cnt, ext_size);
+ 	else
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"N/A\n");
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tHost Allocated VFI extents: ");
+ 	rc = lpfc_sli4_get_allocated_extnts(phba, LPFC_RSC_TYPE_FCOE_VFI,
+ 					    &ext_cnt, &ext_size);
+ 	if (!rc)
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"Port %d Extent %3d, Size %3d\n",
+ 				phba->brd_no, ext_cnt, ext_size);
+ 	else
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"N/A\n");
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tHost Allocated RPI extents: ");
+ 	rc = lpfc_sli4_get_allocated_extnts(phba, LPFC_RSC_TYPE_FCOE_RPI,
+ 					    &ext_cnt, &ext_size);
+ 	if (!rc)
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"Port %d Extent %3d, Size %3d\n",
+ 				phba->brd_no, ext_cnt, ext_size);
+ 	else
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"N/A\n");
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tHost Allocated XRI extents: ");
+ 	rc = lpfc_sli4_get_allocated_extnts(phba, LPFC_RSC_TYPE_FCOE_XRI,
+ 					    &ext_cnt, &ext_size);
+ 	if (!rc)
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"Port %d Extent %3d, Size %3d\n",
+ 				phba->brd_no, ext_cnt, ext_size);
+ 	else
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"N/A\n");
+ 
+ 	return len;
+@@ -5238,49 +5244,49 @@ lpfc_idiag_extacc_drivr_get(struct lpfc_hba *phba, char *pbuffer, int len)
+ 	struct lpfc_rsrc_blks *rsrc_blks;
+ 	int index;
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\nDriver Extents Information:\n");
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tVPI extents:\n");
+ 	index = 0;
+ 	list_for_each_entry(rsrc_blks, &phba->lpfc_vpi_blk_list, list) {
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"\t\tBlock %3d: Start %4d, Count %4d\n",
+ 				index, rsrc_blks->rsrc_start,
+ 				rsrc_blks->rsrc_size);
+ 		index++;
+ 	}
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tVFI extents:\n");
+ 	index = 0;
+ 	list_for_each_entry(rsrc_blks, &phba->sli4_hba.lpfc_vfi_blk_list,
+ 			    list) {
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"\t\tBlock %3d: Start %4d, Count %4d\n",
+ 				index, rsrc_blks->rsrc_start,
+ 				rsrc_blks->rsrc_size);
+ 		index++;
+ 	}
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tRPI extents:\n");
+ 	index = 0;
+ 	list_for_each_entry(rsrc_blks, &phba->sli4_hba.lpfc_rpi_blk_list,
+ 			    list) {
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"\t\tBlock %3d: Start %4d, Count %4d\n",
+ 				index, rsrc_blks->rsrc_start,
+ 				rsrc_blks->rsrc_size);
+ 		index++;
+ 	}
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tXRI extents:\n");
+ 	index = 0;
+ 	list_for_each_entry(rsrc_blks, &phba->sli4_hba.lpfc_xri_blk_list,
+ 			    list) {
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"\t\tBlock %3d: Start %4d, Count %4d\n",
+ 				index, rsrc_blks->rsrc_start,
+ 				rsrc_blks->rsrc_size);
+@@ -5706,11 +5712,11 @@ lpfc_idiag_mbxacc_dump_bsg_mbox(struct lpfc_hba *phba, enum nemb_type nemb_tp,
+ 				if (i != 0)
+ 					pr_err("%s\n", line_buf);
+ 				len = 0;
+-				len += snprintf(line_buf+len,
++				len += scnprintf(line_buf+len,
+ 						LPFC_MBX_ACC_LBUF_SZ-len,
+ 						"%03d: ", i);
+ 			}
+-			len += snprintf(line_buf+len, LPFC_MBX_ACC_LBUF_SZ-len,
++			len += scnprintf(line_buf+len, LPFC_MBX_ACC_LBUF_SZ-len,
+ 					"%08x ", (uint32_t)*pword);
+ 			pword++;
+ 		}
+@@ -5773,11 +5779,11 @@ lpfc_idiag_mbxacc_dump_issue_mbox(struct lpfc_hba *phba, MAILBOX_t *pmbox)
+ 					pr_err("%s\n", line_buf);
+ 				len = 0;
+ 				memset(line_buf, 0, LPFC_MBX_ACC_LBUF_SZ);
+-				len += snprintf(line_buf+len,
++				len += scnprintf(line_buf+len,
+ 						LPFC_MBX_ACC_LBUF_SZ-len,
+ 						"%03d: ", i);
+ 			}
+-			len += snprintf(line_buf+len, LPFC_MBX_ACC_LBUF_SZ-len,
++			len += scnprintf(line_buf+len, LPFC_MBX_ACC_LBUF_SZ-len,
+ 					"%08x ",
+ 					((uint32_t)*pword) & 0xffffffff);
+ 			pword++;
+@@ -5796,18 +5802,18 @@ lpfc_idiag_mbxacc_dump_issue_mbox(struct lpfc_hba *phba, MAILBOX_t *pmbox)
+ 					pr_err("%s\n", line_buf);
+ 				len = 0;
+ 				memset(line_buf, 0, LPFC_MBX_ACC_LBUF_SZ);
+-				len += snprintf(line_buf+len,
++				len += scnprintf(line_buf+len,
+ 						LPFC_MBX_ACC_LBUF_SZ-len,
+ 						"%03d: ", i);
+ 			}
+ 			for (j = 0; j < 4; j++) {
+-				len += snprintf(line_buf+len,
++				len += scnprintf(line_buf+len,
+ 						LPFC_MBX_ACC_LBUF_SZ-len,
+ 						"%02x",
+ 						((uint8_t)*pbyte) & 0xff);
+ 				pbyte++;
+ 			}
+-			len += snprintf(line_buf+len,
++			len += scnprintf(line_buf+len,
+ 					LPFC_MBX_ACC_LBUF_SZ-len, " ");
+ 		}
+ 		if ((i - 1) % 8)
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.h b/drivers/scsi/lpfc/lpfc_debugfs.h
+index 93ab7dfb8ee0..2700f373b46a 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.h
++++ b/drivers/scsi/lpfc/lpfc_debugfs.h
+@@ -348,7 +348,7 @@ lpfc_debug_dump_qe(struct lpfc_queue *q, uint32_t idx)
+ 	pword = q->qe[idx].address;
+ 
+ 	len = 0;
+-	len += snprintf(line_buf+len, LPFC_LBUF_SZ-len, "QE[%04d]: ", idx);
++	len += scnprintf(line_buf+len, LPFC_LBUF_SZ-len, "QE[%04d]: ", idx);
+ 	if (qe_word_cnt > 8)
+ 		printk(KERN_ERR "%s\n", line_buf);
+ 
+@@ -359,11 +359,11 @@ lpfc_debug_dump_qe(struct lpfc_queue *q, uint32_t idx)
+ 			if (qe_word_cnt > 8) {
+ 				len = 0;
+ 				memset(line_buf, 0, LPFC_LBUF_SZ);
+-				len += snprintf(line_buf+len, LPFC_LBUF_SZ-len,
++				len += scnprintf(line_buf+len, LPFC_LBUF_SZ-len,
+ 						"%03d: ", i);
+ 			}
+ 		}
+-		len += snprintf(line_buf+len, LPFC_LBUF_SZ-len, "%08x ",
++		len += scnprintf(line_buf+len, LPFC_LBUF_SZ-len, "%08x ",
+ 				((uint32_t)*pword) & 0xffffffff);
+ 		pword++;
+ 	}
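
The whole lpfc block above is one mechanical conversion with a single
rationale: snprintf() returns the length the output would have had, so
chains of "len += snprintf(buf + len, size - len, ...)" can push len past
size, after which "size - len" underflows as an unsigned size and the
next call writes past the buffer. scnprintf() returns the number of
characters actually stored, so len can never outgrow the buffer. A
minimal userspace sketch of the difference, with my_scnprintf() standing
in for the kernel helper:

#include <stdarg.h>
#include <stdio.h>

/* Userspace stand-in for the kernel's scnprintf(): it returns the number
 * of characters actually stored (excluding the NUL), never more than
 * size - 1, whereas snprintf() returns the length the output would have
 * had with an unlimited buffer. */
static int my_scnprintf(char *buf, size_t size, const char *fmt, ...)
{
	va_list args;
	int i;

	va_start(args, fmt);
	i = vsnprintf(buf, size, fmt, args);
	va_end(args);

	if (i >= (int)size)
		return size ? (int)size - 1 : 0;
	return i;
}

int main(void)
{
	char buf[8];
	int len;

	/* snprintf: len exceeds the buffer, so a follow-up call computing
	 * "size - len" with an unsigned size underflows and writes past
	 * the end of buf. */
	len = snprintf(buf, sizeof(buf), "0123456789");
	printf("snprintf: len = %d, buffer holds %zu\n", len, sizeof(buf));

	/* scnprintf: len is capped at what actually fits, so chained
	 * "len += scnprintf(buf + len, size - len, ...)" stays in bounds. */
	len = my_scnprintf(buf, sizeof(buf), "0123456789");
	printf("scnprintf: len = %d\n", len);
	return 0;
}
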
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index f928c4d3a1ef..70d92334e721 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -364,7 +364,7 @@ qla2x00_sysfs_write_optrom_ctl(struct file *filp, struct kobject *kobj,
+ 		}
+ 
+ 		ha->optrom_region_start = start;
+-		ha->optrom_region_size = start + size;
++		ha->optrom_region_size = size;
+ 
+ 		ha->optrom_state = QLA_SREADING;
+ 		ha->optrom_buffer = vmalloc(ha->optrom_region_size);
+@@ -437,7 +437,7 @@ qla2x00_sysfs_write_optrom_ctl(struct file *filp, struct kobject *kobj,
+ 		}
+ 
+ 		ha->optrom_region_start = start;
+-		ha->optrom_region_size = start + size;
++		ha->optrom_region_size = size;
+ 
+ 		ha->optrom_state = QLA_SWRITING;
+ 		ha->optrom_buffer = vmalloc(ha->optrom_region_size);
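
Both qla2xxx sysfs hunks fix the same length-versus-end-offset mix-up:
ha->optrom_region_size is consumed as a length (it sizes the vmalloc()
and bounds later accesses), but it was being assigned start + size, an
end offset, so any non-zero start over-allocated and validated against
the wrong limit. A trivial userspace sketch of the confusion, with
made-up values:

#include <stdio.h>

int main(void)
{
	unsigned int start = 0x4000;	/* region offset into flash */
	unsigned int size  = 0x1000;	/* caller-requested length */

	unsigned int end_offset = start + size;	/* what the bug stored */
	unsigned int length     = size;		/* what vmalloc() needs */

	printf("buggy region length: 0x%x, fixed: 0x%x\n",
	       end_offset, length);
	return 0;
}
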
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index 41c85da3ab32..34dd8bf3fb31 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -615,7 +615,6 @@ static void qla_nvme_unregister_remote_port(struct work_struct *work)
+ 	struct fc_port *fcport = container_of(work, struct fc_port,
+ 	    nvme_del_work);
+ 	struct qla_nvme_rport *qla_rport, *trport;
+-	scsi_qla_host_t *base_vha;
+ 
+ 	if (!IS_ENABLED(CONFIG_NVME_FC))
+ 		return;
+@@ -623,23 +622,19 @@ static void qla_nvme_unregister_remote_port(struct work_struct *work)
+ 	ql_log(ql_log_warn, NULL, 0x2112,
+ 	    "%s: unregister remoteport on %p\n",__func__, fcport);
+ 
+-	base_vha = pci_get_drvdata(fcport->vha->hw->pdev);
+-	if (test_bit(PFLG_DRIVER_REMOVING, &base_vha->pci_flags)) {
+-		ql_dbg(ql_dbg_disc, fcport->vha, 0x2114,
+-		    "%s: Notify FC-NVMe transport, set devloss=0\n",
+-		    __func__);
+-
+-		nvme_fc_set_remoteport_devloss(fcport->nvme_remote_port, 0);
+-	}
+-
+ 	list_for_each_entry_safe(qla_rport, trport,
+ 	    &fcport->vha->nvme_rport_list, list) {
+ 		if (qla_rport->fcport == fcport) {
+ 			ql_log(ql_log_info, fcport->vha, 0x2113,
+ 			    "%s: fcport=%p\n", __func__, fcport);
++			nvme_fc_set_remoteport_devloss
++				(fcport->nvme_remote_port, 0);
+ 			init_completion(&fcport->nvme_del_done);
+-			nvme_fc_unregister_remoteport(
+-			    fcport->nvme_remote_port);
++			if (nvme_fc_unregister_remoteport
++			    (fcport->nvme_remote_port))
++				ql_log(ql_log_info, fcport->vha, 0x2114,
++				    "%s: Failed to unregister nvme_remote_port\n",
++				    __func__);
+ 			wait_for_completion(&fcport->nvme_del_done);
+ 			break;
+ 		}
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index 582d1663f971..697eee1d8847 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -980,6 +980,8 @@ void qlt_free_session_done(struct work_struct *work)
+ 		sess->send_els_logo);
+ 
+ 	if (!IS_SW_RESV_ADDR(sess->d_id)) {
++		qla2x00_mark_device_lost(vha, sess, 0, 0);
++
+ 		if (sess->send_els_logo) {
+ 			qlt_port_logo_t logo;
+ 
+@@ -1160,8 +1162,6 @@ void qlt_unreg_sess(struct fc_port *sess)
+ 	if (sess->se_sess)
+ 		vha->hw->tgt.tgt_ops->clear_nacl_from_fcport_map(sess);
+ 
+-	qla2x00_mark_device_lost(vha, sess, 0, 0);
+-
+ 	sess->deleted = QLA_SESS_DELETION_IN_PROGRESS;
+ 	sess->disc_state = DSC_DELETE_PEND;
+ 	sess->last_rscn_gen = sess->rscn_gen;
+diff --git a/drivers/soc/sunxi/Kconfig b/drivers/soc/sunxi/Kconfig
+index 353b07e40176..e84eb4e59f58 100644
+--- a/drivers/soc/sunxi/Kconfig
++++ b/drivers/soc/sunxi/Kconfig
+@@ -4,6 +4,7 @@
+ config SUNXI_SRAM
+ 	bool
+ 	default ARCH_SUNXI
++	select REGMAP_MMIO
+ 	help
+ 	  Say y here to enable the SRAM controller support. This
+ 	  device is responsible on mapping the SRAM in the sunXi SoCs
+diff --git a/drivers/staging/greybus/power_supply.c b/drivers/staging/greybus/power_supply.c
+index 0529e5628c24..ae5c0285a942 100644
+--- a/drivers/staging/greybus/power_supply.c
++++ b/drivers/staging/greybus/power_supply.c
+@@ -520,7 +520,7 @@ static int gb_power_supply_prop_descriptors_get(struct gb_power_supply *gbpsy)
+ 
+ 	op = gb_operation_create(connection,
+ 				 GB_POWER_SUPPLY_TYPE_GET_PROP_DESCRIPTORS,
+-				 sizeof(req), sizeof(*resp) + props_count *
++				 sizeof(*req), sizeof(*resp) + props_count *
+ 				 sizeof(struct gb_power_supply_props_desc),
+ 				 GFP_KERNEL);
+ 	if (!op)
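
The greybus fix is the classic sizeof(ptr)-versus-sizeof(*ptr) bug: req
is a pointer, so sizeof(req) is the pointer width (8 bytes on 64-bit),
not the size of the request payload the operation must carry. A
userspace sketch, with an illustrative struct rather than the real
Greybus request layout:

#include <stdio.h>

/* Illustrative request layout, not the real Greybus structure. */
struct demo_request {
	unsigned char psy_id;
};

int main(void)
{
	struct demo_request *req = NULL;

	printf("sizeof(req)  = %zu (the pointer)\n", sizeof(req));
	printf("sizeof(*req) = %zu (the request payload)\n", sizeof(*req));
	return 0;
}
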
+diff --git a/drivers/staging/most/cdev/cdev.c b/drivers/staging/most/cdev/cdev.c
+index f2b347cda8b7..d5f236889021 100644
+--- a/drivers/staging/most/cdev/cdev.c
++++ b/drivers/staging/most/cdev/cdev.c
+@@ -549,7 +549,7 @@ static void __exit mod_exit(void)
+ 		destroy_cdev(c);
+ 		destroy_channel(c);
+ 	}
+-	unregister_chrdev_region(comp.devno, 1);
++	unregister_chrdev_region(comp.devno, CHRDEV_REGION_SIZE);
+ 	ida_destroy(&comp.minor_id);
+ 	class_destroy(comp.class);
+ }
+diff --git a/drivers/staging/most/sound/sound.c b/drivers/staging/most/sound/sound.c
+index 79ab3a78c5ec..1e6f47cfe42c 100644
+--- a/drivers/staging/most/sound/sound.c
++++ b/drivers/staging/most/sound/sound.c
+@@ -622,7 +622,7 @@ static int audio_probe_channel(struct most_interface *iface, int channel_id,
+ 	INIT_LIST_HEAD(&adpt->dev_list);
+ 	iface->priv = adpt;
+ 	list_add_tail(&adpt->list, &adpt_list);
+-	ret = snd_card_new(&iface->dev, -1, "INIC", THIS_MODULE,
++	ret = snd_card_new(iface->driver_dev, -1, "INIC", THIS_MODULE,
+ 			   sizeof(*channel), &adpt->card);
+ 	if (ret < 0)
+ 		goto err_free_adpt;
+diff --git a/drivers/staging/wilc1000/wilc_netdev.c b/drivers/staging/wilc1000/wilc_netdev.c
+index 1787154ee088..ba78c08a17f1 100644
+--- a/drivers/staging/wilc1000/wilc_netdev.c
++++ b/drivers/staging/wilc1000/wilc_netdev.c
+@@ -708,7 +708,7 @@ static void wilc_set_multicast_list(struct net_device *dev)
+ 		return;
+ 	}
+ 
+-	mc_list = kmalloc_array(dev->mc.count, ETH_ALEN, GFP_KERNEL);
++	mc_list = kmalloc_array(dev->mc.count, ETH_ALEN, GFP_ATOMIC);
+ 	if (!mc_list)
+ 		return;
+ 
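
The wilc1000 change is an atomic-context fix, not a cosmetic one: an
ndo_set_rx_mode callback such as wilc_set_multicast_list() runs under
the netif address lock with bottom halves disabled, so it must not
sleep, and a GFP_KERNEL allocation may. GFP_ATOMIC lets the allocation
succeed or fail without ever scheduling.
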
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index ec666eb4b7b4..c03aa8550980 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -470,12 +470,12 @@ static void acm_read_bulk_callback(struct urb *urb)
+ 	struct acm *acm = rb->instance;
+ 	unsigned long flags;
+ 	int status = urb->status;
++	bool stopped = false;
++	bool stalled = false;
+ 
+ 	dev_vdbg(&acm->data->dev, "got urb %d, len %d, status %d\n",
+ 		rb->index, urb->actual_length, status);
+ 
+-	set_bit(rb->index, &acm->read_urbs_free);
+-
+ 	if (!acm->dev) {
+ 		dev_dbg(&acm->data->dev, "%s - disconnected\n", __func__);
+ 		return;
+@@ -488,15 +488,16 @@ static void acm_read_bulk_callback(struct urb *urb)
+ 		break;
+ 	case -EPIPE:
+ 		set_bit(EVENT_RX_STALL, &acm->flags);
+-		schedule_work(&acm->work);
+-		return;
++		stalled = true;
++		break;
+ 	case -ENOENT:
+ 	case -ECONNRESET:
+ 	case -ESHUTDOWN:
+ 		dev_dbg(&acm->data->dev,
+ 			"%s - urb shutting down with status: %d\n",
+ 			__func__, status);
+-		return;
++		stopped = true;
++		break;
+ 	default:
+ 		dev_dbg(&acm->data->dev,
+ 			"%s - nonzero urb status received: %d\n",
+@@ -505,10 +506,24 @@ static void acm_read_bulk_callback(struct urb *urb)
+ 	}
+ 
+ 	/*
+-	 * Unthrottle may run on another CPU which needs to see events
+-	 * in the same order. Submission has an implict barrier
++	 * Make sure URB processing is done before marking as free to avoid
++	 * racing with unthrottle() on another CPU. Matches the barriers
++	 * implied by the test_and_clear_bit() in acm_submit_read_urb().
+ 	 */
+ 	smp_mb__before_atomic();
++	set_bit(rb->index, &acm->read_urbs_free);
++	/*
++	 * Make sure URB is marked as free before checking the throttled flag
++	 * to avoid racing with unthrottle() on another CPU. Matches the
++	 * smp_mb() in unthrottle().
++	 */
++	smp_mb__after_atomic();
++
++	if (stopped || stalled) {
++		if (stalled)
++			schedule_work(&acm->work);
++		return;
++	}
+ 
+ 	/* throttle device if requested by tty */
+ 	spin_lock_irqsave(&acm->read_lock, flags);
+@@ -842,6 +857,9 @@ static void acm_tty_unthrottle(struct tty_struct *tty)
+ 	acm->throttle_req = 0;
+ 	spin_unlock_irq(&acm->read_lock);
+ 
++	/* Matches the smp_mb__after_atomic() in acm_read_bulk_callback(). */
++	smp_mb();
++
+ 	if (was_throttled)
+ 		acm_submit_read_urbs(acm, GFP_KERNEL);
+ }
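
The cdc-acm hunks fix a lost-wakeup race: the callback used to mark its
URB free before deciding what to do with it, so acm_tty_unthrottle() on
another CPU could clear the throttle flag, scan, see no free URBs yet,
and return, while the callback then read the stale throttled state and
parked the URB forever. The rewrite publishes the free bit and then
reads the throttle flag on one side, and clears the throttle flag and
then reads the free bits on the other, with a full barrier between store
and load on each side. A hedged, stand-alone C11 sketch of that
store-buffering pattern (illustrative, not the driver code):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int urb_free;		/* bit in acm->read_urbs_free */
static atomic_int throttled = 1;	/* the tty throttle state */

/* Completion side: publish "URB is free", then check the throttle. */
static void read_callback_done(void)
{
	atomic_store_explicit(&urb_free, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst); /* smp_mb__after_atomic() */
	if (!atomic_load_explicit(&throttled, memory_order_relaxed))
		printf("callback resubmits the URB\n");
}

/* Unthrottle side: clear the throttle, then scan for free URBs. */
static void unthrottle(void)
{
	atomic_store_explicit(&throttled, 0, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst); /* smp_mb() */
	if (atomic_load_explicit(&urb_free, memory_order_relaxed))
		printf("unthrottle resubmits the URB\n");
}

int main(void)
{
	/* However the two race, the fences guarantee at least one side
	 * observes the other's store, so a free URB is never stranded. */
	read_callback_done();
	unthrottle();
	return 0;
}
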
+diff --git a/drivers/usb/dwc3/Kconfig b/drivers/usb/dwc3/Kconfig
+index 2b1494460d0c..784309435916 100644
+--- a/drivers/usb/dwc3/Kconfig
++++ b/drivers/usb/dwc3/Kconfig
+@@ -54,7 +54,8 @@ comment "Platform Glue Driver Support"
+ 
+ config USB_DWC3_OMAP
+ 	tristate "Texas Instruments OMAP5 and similar Platforms"
+-	depends on EXTCON && (ARCH_OMAP2PLUS || COMPILE_TEST)
++	depends on ARCH_OMAP2PLUS || COMPILE_TEST
++	depends on EXTCON || !EXTCON
+ 	depends on OF
+ 	default USB_DWC3
+ 	help
+@@ -115,7 +116,8 @@ config USB_DWC3_ST
+ 
+ config USB_DWC3_QCOM
+ 	tristate "Qualcomm Platform"
+-	depends on EXTCON && (ARCH_QCOM || COMPILE_TEST)
++	depends on ARCH_QCOM || COMPILE_TEST
++	depends on EXTCON || !EXTCON
+ 	depends on OF
+ 	default USB_DWC3
+ 	help
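
The odd-looking "depends on EXTCON || !EXTCON" is the standard Kconfig
idiom for an optional dependency: it is vacuously true when EXTCON is y
or n, but when EXTCON=m it restricts the glue driver to m as well, so
built-in dwc3 glue can never reference symbols living in a modular
extcon. Compared with the old "EXTCON && ..." form, these drivers no
longer require extcon at all; they merely must not out-build it.
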
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index a1b126f90261..f944cea4056b 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -1218,7 +1218,7 @@ static void dwc3_get_properties(struct dwc3 *dwc)
+ 	u8			tx_max_burst_prd;
+ 
+ 	/* default to highest possible threshold */
+-	lpm_nyet_threshold = 0xff;
++	lpm_nyet_threshold = 0xf;
+ 
+ 	/* default to -3.5dB de-emphasis */
+ 	tx_de_emphasis = 1;
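
The dwc3 core change is a field-width fix: the LPM NYET threshold lives
in a 4-bit register field, so the old default of 0xff spilled into
neighbouring bits; 0xf is the largest value the field can actually hold.
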
+diff --git a/drivers/usb/musb/Kconfig b/drivers/usb/musb/Kconfig
+index f742fddc5e2c..52f8e2b57ad5 100644
+--- a/drivers/usb/musb/Kconfig
++++ b/drivers/usb/musb/Kconfig
+@@ -67,7 +67,7 @@ config USB_MUSB_SUNXI
+ 	depends on NOP_USB_XCEIV
+ 	depends on PHY_SUN4I_USB
+ 	depends on EXTCON
+-	depends on GENERIC_PHY
++	select GENERIC_PHY
+ 	select SUNXI_SRAM
+ 
+ config USB_MUSB_DAVINCI
+diff --git a/drivers/usb/serial/f81232.c b/drivers/usb/serial/f81232.c
+index 0dcdcb4b2cde..dee6f2caf9b5 100644
+--- a/drivers/usb/serial/f81232.c
++++ b/drivers/usb/serial/f81232.c
+@@ -556,9 +556,12 @@ static int f81232_open(struct tty_struct *tty, struct usb_serial_port *port)
+ 
+ static void f81232_close(struct usb_serial_port *port)
+ {
++	struct f81232_private *port_priv = usb_get_serial_port_data(port);
++
+ 	f81232_port_disable(port);
+ 	usb_serial_generic_close(port);
+ 	usb_kill_urb(port->interrupt_in_urb);
++	flush_work(&port_priv->interrupt_work);
+ }
+ 
+ static void f81232_dtr_rts(struct usb_serial_port *port, int on)
+@@ -632,6 +635,40 @@ static int f81232_port_remove(struct usb_serial_port *port)
+ 	return 0;
+ }
+ 
++static int f81232_suspend(struct usb_serial *serial, pm_message_t message)
++{
++	struct usb_serial_port *port = serial->port[0];
++	struct f81232_private *port_priv = usb_get_serial_port_data(port);
++	int i;
++
++	for (i = 0; i < ARRAY_SIZE(port->read_urbs); ++i)
++		usb_kill_urb(port->read_urbs[i]);
++
++	usb_kill_urb(port->interrupt_in_urb);
++
++	if (port_priv)
++		flush_work(&port_priv->interrupt_work);
++
++	return 0;
++}
++
++static int f81232_resume(struct usb_serial *serial)
++{
++	struct usb_serial_port *port = serial->port[0];
++	int result;
++
++	if (tty_port_initialized(&port->port)) {
++		result = usb_submit_urb(port->interrupt_in_urb, GFP_NOIO);
++		if (result) {
++			dev_err(&port->dev, "submit interrupt urb failed: %d\n",
++					result);
++			return result;
++		}
++	}
++
++	return usb_serial_generic_resume(serial);
++}
++
+ static struct usb_serial_driver f81232_device = {
+ 	.driver = {
+ 		.owner =	THIS_MODULE,
+@@ -655,6 +692,8 @@ static struct usb_serial_driver f81232_device = {
+ 	.read_int_callback =	f81232_read_int_callback,
+ 	.port_probe =		f81232_port_probe,
+ 	.port_remove =		f81232_port_remove,
++	.suspend =		f81232_suspend,
++	.resume =		f81232_resume,
+ };
+ 
+ static struct usb_serial_driver * const serial_drivers[] = {
+diff --git a/drivers/usb/storage/scsiglue.c b/drivers/usb/storage/scsiglue.c
+index a73ea495d5a7..59190d88fa9f 100644
+--- a/drivers/usb/storage/scsiglue.c
++++ b/drivers/usb/storage/scsiglue.c
+@@ -65,6 +65,7 @@ static const char* host_info(struct Scsi_Host *host)
+ static int slave_alloc (struct scsi_device *sdev)
+ {
+ 	struct us_data *us = host_to_us(sdev->host);
++	int maxp;
+ 
+ 	/*
+ 	 * Set the INQUIRY transfer length to 36.  We don't use any of
+@@ -74,20 +75,17 @@ static int slave_alloc (struct scsi_device *sdev)
+ 	sdev->inquiry_len = 36;
+ 
+ 	/*
+-	 * USB has unusual DMA-alignment requirements: Although the
+-	 * starting address of each scatter-gather element doesn't matter,
+-	 * the length of each element except the last must be divisible
+-	 * by the Bulk maxpacket value.  There's currently no way to
+-	 * express this by block-layer constraints, so we'll cop out
+-	 * and simply require addresses to be aligned at 512-byte
+-	 * boundaries.  This is okay since most block I/O involves
+-	 * hardware sectors that are multiples of 512 bytes in length,
+-	 * and since host controllers up through USB 2.0 have maxpacket
+-	 * values no larger than 512.
+-	 *
+-	 * But it doesn't suffice for Wireless USB, where Bulk maxpacket
+-	 * values can be as large as 2048.  To make that work properly
+-	 * will require changes to the block layer.
++	 * USB has unusual scatter-gather requirements: the length of each
++	 * scatterlist element except the last must be divisible by the
++	 * Bulk maxpacket value.  Fortunately this value is always a
++	 * power of 2.  Inform the block layer about this requirement.
++	 */
++	maxp = usb_maxpacket(us->pusb_dev, us->recv_bulk_pipe, 0);
++	blk_queue_virt_boundary(sdev->request_queue, maxp - 1);
++
++	/*
++	 * Some host controllers may have alignment requirements.
++	 * We'll play it safe by requiring 512-byte alignment always.
+ 	 */
+ 	blk_queue_update_dma_alignment(sdev->request_queue, (512 - 1));
+ 
+diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
+index a6d68191c861..047c5922618f 100644
+--- a/drivers/usb/storage/uas.c
++++ b/drivers/usb/storage/uas.c
+@@ -789,24 +789,33 @@ static int uas_slave_alloc(struct scsi_device *sdev)
+ {
+ 	struct uas_dev_info *devinfo =
+ 		(struct uas_dev_info *)sdev->host->hostdata;
++	int maxp;
+ 
+ 	sdev->hostdata = devinfo;
+ 
+ 	/*
+-	 * USB has unusual DMA-alignment requirements: Although the
+-	 * starting address of each scatter-gather element doesn't matter,
+-	 * the length of each element except the last must be divisible
+-	 * by the Bulk maxpacket value.  There's currently no way to
+-	 * express this by block-layer constraints, so we'll cop out
+-	 * and simply require addresses to be aligned at 512-byte
+-	 * boundaries.  This is okay since most block I/O involves
+-	 * hardware sectors that are multiples of 512 bytes in length,
+-	 * and since host controllers up through USB 2.0 have maxpacket
+-	 * values no larger than 512.
++	 * We have two requirements here. We must satisfy the requirements
++	 * of the physical HC and the demands of the protocol, as we
++	 * definitely want no additional memory allocation in this path
++	 * ruling out using bounce buffers.
+ 	 *
+-	 * But it doesn't suffice for Wireless USB, where Bulk maxpacket
+-	 * values can be as large as 2048.  To make that work properly
+-	 * will require changes to the block layer.
++	 * For a transmission on USB to continue we must never send
++	 * a packet that is smaller than maxpacket. Hence the length of each
++	 * scatterlist element except the last must be divisible by the
++	 * Bulk maxpacket value.
++	 * If the HC does not ensure that through SG,
++	 * the upper layer must do that. We must assume nothing
++	 * about the capabilities of the HC, so we use the most
++	 * pessimistic requirement.
++	 */
++
++	maxp = usb_maxpacket(devinfo->udev, devinfo->data_in_pipe, 0);
++	blk_queue_virt_boundary(sdev->request_queue, maxp - 1);
++
++	/*
++	 * The protocol has no requirements on alignment in the strict sense.
++	 * Controllers may or may not have alignment restrictions.
++	 * As this is not exported, we use an extremely conservative guess.
+ 	 */
+ 	blk_queue_update_dma_alignment(sdev->request_queue, (512 - 1));
+ 
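
The two USB storage hunks replace the old 512-byte-alignment apology
with a real constraint: blk_queue_virt_boundary() tells the block layer
that a request segment must not cross a virtual-address boundary of
(mask + 1) bytes. Since the bulk maxpacket value is always a power of
two, segments laid out under a (maxp - 1) mask begin and end on
maxpacket boundaries, so every scatterlist element except the last has
a maxpacket-divisible length, which is what USB bulk pipes need. A
small userspace sketch of the mask arithmetic, assuming a 512-byte
maxpacket:

#include <stdio.h>

int main(void)
{
	unsigned long maxp = 512;	/* bulk maxpacket, a power of 2 */
	unsigned long mask = maxp - 1;	/* value given to the block layer */
	unsigned long seg_start = 0x10000;	/* starts on a boundary */
	unsigned long seg_end   = 0x10600;	/* ends on a boundary */

	/* A segment that neither starts nor ends inside a boundary
	 * window has a length that is a multiple of maxp. */
	if (!(seg_start & mask) && !(seg_end & mask))
		printf("len = %lu, len %% maxp = %lu\n",
		       seg_end - seg_start, (seg_end - seg_start) % maxp);
	return 0;
}
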
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 094e61e07030..05b1b96f4d9e 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -190,6 +190,9 @@ struct adv_info {
+ 
+ #define HCI_MAX_SHORT_NAME_LENGTH	10
+ 
++/* Min encryption key size to match with SMP */
++#define HCI_MIN_ENC_KEY_SIZE		7
++
+ /* Default LE RPA expiry time, 15 minutes */
+ #define HCI_DEFAULT_RPA_TIMEOUT		(15 * 60)
+ 
+diff --git a/kernel/futex.c b/kernel/futex.c
+index 9e40cf7be606..6262f1534ac9 100644
+--- a/kernel/futex.c
++++ b/kernel/futex.c
+@@ -1311,13 +1311,15 @@ static int lookup_pi_state(u32 __user *uaddr, u32 uval,
+ 
+ static int lock_pi_update_atomic(u32 __user *uaddr, u32 uval, u32 newval)
+ {
++	int err;
+ 	u32 uninitialized_var(curval);
+ 
+ 	if (unlikely(should_fail_futex(true)))
+ 		return -EFAULT;
+ 
+-	if (unlikely(cmpxchg_futex_value_locked(&curval, uaddr, uval, newval)))
+-		return -EFAULT;
++	err = cmpxchg_futex_value_locked(&curval, uaddr, uval, newval);
++	if (unlikely(err))
++		return err;
+ 
+ 	/* If user space value changed, let the caller retry */
+ 	return curval != uval ? -EAGAIN : 0;
+@@ -1502,10 +1504,8 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_pi_state *pi_
+ 	if (unlikely(should_fail_futex(true)))
+ 		ret = -EFAULT;
+ 
+-	if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval)) {
+-		ret = -EFAULT;
+-
+-	} else if (curval != uval) {
++	ret = cmpxchg_futex_value_locked(&curval, uaddr, uval, newval);
++	if (!ret && (curval != uval)) {
+ 		/*
+ 		 * If a unconditional UNLOCK_PI operation (user space did not
+ 		 * try the TID->0 transition) raced with a waiter setting the
+@@ -1700,32 +1700,32 @@ retry_private:
+ 	double_lock_hb(hb1, hb2);
+ 	op_ret = futex_atomic_op_inuser(op, uaddr2);
+ 	if (unlikely(op_ret < 0)) {
+-
+ 		double_unlock_hb(hb1, hb2);
+ 
+-#ifndef CONFIG_MMU
+-		/*
+-		 * we don't get EFAULT from MMU faults if we don't have an MMU,
+-		 * but we might get them from range checking
+-		 */
+-		ret = op_ret;
+-		goto out_put_keys;
+-#endif
+-
+-		if (unlikely(op_ret != -EFAULT)) {
++		if (!IS_ENABLED(CONFIG_MMU) ||
++		    unlikely(op_ret != -EFAULT && op_ret != -EAGAIN)) {
++			/*
++			 * we don't get EFAULT from MMU faults if we don't have
++			 * an MMU, but we might get them from range checking
++			 */
+ 			ret = op_ret;
+ 			goto out_put_keys;
+ 		}
+ 
+-		ret = fault_in_user_writeable(uaddr2);
+-		if (ret)
+-			goto out_put_keys;
++		if (op_ret == -EFAULT) {
++			ret = fault_in_user_writeable(uaddr2);
++			if (ret)
++				goto out_put_keys;
++		}
+ 
+-		if (!(flags & FLAGS_SHARED))
++		if (!(flags & FLAGS_SHARED)) {
++			cond_resched();
+ 			goto retry_private;
++		}
+ 
+ 		put_futex_key(&key2);
+ 		put_futex_key(&key1);
++		cond_resched();
+ 		goto retry;
+ 	}
+ 
+@@ -2350,7 +2350,7 @@ static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
+ 	u32 uval, uninitialized_var(curval), newval;
+ 	struct task_struct *oldowner, *newowner;
+ 	u32 newtid;
+-	int ret;
++	int ret, err = 0;
+ 
+ 	lockdep_assert_held(q->lock_ptr);
+ 
+@@ -2421,14 +2421,17 @@ retry:
+ 	if (!pi_state->owner)
+ 		newtid |= FUTEX_OWNER_DIED;
+ 
+-	if (get_futex_value_locked(&uval, uaddr))
+-		goto handle_fault;
++	err = get_futex_value_locked(&uval, uaddr);
++	if (err)
++		goto handle_err;
+ 
+ 	for (;;) {
+ 		newval = (uval & FUTEX_OWNER_DIED) | newtid;
+ 
+-		if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval))
+-			goto handle_fault;
++		err = cmpxchg_futex_value_locked(&curval, uaddr, uval, newval);
++		if (err)
++			goto handle_err;
++
+ 		if (curval == uval)
+ 			break;
+ 		uval = curval;
+@@ -2456,23 +2459,37 @@ retry:
+ 	return 0;
+ 
+ 	/*
+-	 * To handle the page fault we need to drop the locks here. That gives
+-	 * the other task (either the highest priority waiter itself or the
+-	 * task which stole the rtmutex) the chance to try the fixup of the
+-	 * pi_state. So once we are back from handling the fault we need to
+-	 * check the pi_state after reacquiring the locks and before trying to
+-	 * do another fixup. When the fixup has been done already we simply
+-	 * return.
++	 * In order to reschedule or handle a page fault, we need to drop the
++	 * locks here. In the case of a fault, this gives the other task
++	 * (either the highest priority waiter itself or the task which stole
++	 * the rtmutex) the chance to try the fixup of the pi_state. So once we
++	 * are back from handling the fault we need to check the pi_state after
++	 * reacquiring the locks and before trying to do another fixup. When
++	 * the fixup has been done already we simply return.
+ 	 *
+ 	 * Note: we hold both hb->lock and pi_mutex->wait_lock. We can safely
+ 	 * drop hb->lock since the caller owns the hb -> futex_q relation.
+ 	 * Dropping the pi_mutex->wait_lock requires the state revalidate.
+ 	 */
+-handle_fault:
++handle_err:
+ 	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
+ 	spin_unlock(q->lock_ptr);
+ 
+-	ret = fault_in_user_writeable(uaddr);
++	switch (err) {
++	case -EFAULT:
++		ret = fault_in_user_writeable(uaddr);
++		break;
++
++	case -EAGAIN:
++		cond_resched();
++		ret = 0;
++		break;
++
++	default:
++		WARN_ON_ONCE(1);
++		ret = err;
++		break;
++	}
+ 
+ 	spin_lock(q->lock_ptr);
+ 	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
+@@ -3041,10 +3058,8 @@ retry:
+ 		 * A unconditional UNLOCK_PI op raced against a waiter
+ 		 * setting the FUTEX_WAITERS bit. Try again.
+ 		 */
+-		if (ret == -EAGAIN) {
+-			put_futex_key(&key);
+-			goto retry;
+-		}
++		if (ret == -EAGAIN)
++			goto pi_retry;
+ 		/*
+ 		 * wake_futex_pi has detected invalid state. Tell user
+ 		 * space.
+@@ -3059,9 +3074,19 @@ retry:
+ 	 * preserve the WAITERS bit not the OWNER_DIED one. We are the
+ 	 * owner.
+ 	 */
+-	if (cmpxchg_futex_value_locked(&curval, uaddr, uval, 0)) {
++	if ((ret = cmpxchg_futex_value_locked(&curval, uaddr, uval, 0))) {
+ 		spin_unlock(&hb->lock);
+-		goto pi_faulted;
++		switch (ret) {
++		case -EFAULT:
++			goto pi_faulted;
++
++		case -EAGAIN:
++			goto pi_retry;
++
++		default:
++			WARN_ON_ONCE(1);
++			goto out_putkey;
++		}
+ 	}
+ 
+ 	/*
+@@ -3075,6 +3100,11 @@ out_putkey:
+ 	put_futex_key(&key);
+ 	return ret;
+ 
++pi_retry:
++	put_futex_key(&key);
++	cond_resched();
++	goto retry;
++
+ pi_faulted:
+ 	put_futex_key(&key);
+ 
+@@ -3435,6 +3465,7 @@ err_unlock:
+ static int handle_futex_death(u32 __user *uaddr, struct task_struct *curr, int pi)
+ {
+ 	u32 uval, uninitialized_var(nval), mval;
++	int err;
+ 
+ 	/* Futex address must be 32bit aligned */
+ 	if ((((unsigned long)uaddr) % sizeof(*uaddr)) != 0)
+@@ -3444,42 +3475,57 @@ retry:
+ 	if (get_user(uval, uaddr))
+ 		return -1;
+ 
+-	if ((uval & FUTEX_TID_MASK) == task_pid_vnr(curr)) {
+-		/*
+-		 * Ok, this dying thread is truly holding a futex
+-		 * of interest. Set the OWNER_DIED bit atomically
+-		 * via cmpxchg, and if the value had FUTEX_WAITERS
+-		 * set, wake up a waiter (if any). (We have to do a
+-		 * futex_wake() even if OWNER_DIED is already set -
+-		 * to handle the rare but possible case of recursive
+-		 * thread-death.) The rest of the cleanup is done in
+-		 * userspace.
+-		 */
+-		mval = (uval & FUTEX_WAITERS) | FUTEX_OWNER_DIED;
+-		/*
+-		 * We are not holding a lock here, but we want to have
+-		 * the pagefault_disable/enable() protection because
+-		 * we want to handle the fault gracefully. If the
+-		 * access fails we try to fault in the futex with R/W
+-		 * verification via get_user_pages. get_user() above
+-		 * does not guarantee R/W access. If that fails we
+-		 * give up and leave the futex locked.
+-		 */
+-		if (cmpxchg_futex_value_locked(&nval, uaddr, uval, mval)) {
++	if ((uval & FUTEX_TID_MASK) != task_pid_vnr(curr))
++		return 0;
++
++	/*
++	 * Ok, this dying thread is truly holding a futex
++	 * of interest. Set the OWNER_DIED bit atomically
++	 * via cmpxchg, and if the value had FUTEX_WAITERS
++	 * set, wake up a waiter (if any). (We have to do a
++	 * futex_wake() even if OWNER_DIED is already set -
++	 * to handle the rare but possible case of recursive
++	 * thread-death.) The rest of the cleanup is done in
++	 * userspace.
++	 */
++	mval = (uval & FUTEX_WAITERS) | FUTEX_OWNER_DIED;
++
++	/*
++	 * We are not holding a lock here, but we want to have
++	 * the pagefault_disable/enable() protection because
++	 * we want to handle the fault gracefully. If the
++	 * access fails we try to fault in the futex with R/W
++	 * verification via get_user_pages. get_user() above
++	 * does not guarantee R/W access. If that fails we
++	 * give up and leave the futex locked.
++	 */
++	if ((err = cmpxchg_futex_value_locked(&nval, uaddr, uval, mval))) {
++		switch (err) {
++		case -EFAULT:
+ 			if (fault_in_user_writeable(uaddr))
+ 				return -1;
+ 			goto retry;
+-		}
+-		if (nval != uval)
++
++		case -EAGAIN:
++			cond_resched();
+ 			goto retry;
+ 
+-		/*
+-		 * Wake robust non-PI futexes here. The wakeup of
+-		 * PI futexes happens in exit_pi_state():
+-		 */
+-		if (!pi && (uval & FUTEX_WAITERS))
+-			futex_wake(uaddr, 1, 1, FUTEX_BITSET_MATCH_ANY);
++		default:
++			WARN_ON_ONCE(1);
++			return err;
++		}
+ 	}
++
++	if (nval != uval)
++		goto retry;
++
++	/*
++	 * Wake robust non-PI futexes here. The wakeup of
++	 * PI futexes happens in exit_pi_state():
++	 */
++	if (!pi && (uval & FUTEX_WAITERS))
++		futex_wake(uaddr, 1, 1, FUTEX_BITSET_MATCH_ANY);
++
+ 	return 0;
+ }
+ 
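
The futex rework above has a single theme: cmpxchg_futex_value_locked()
and friends can fail for two distinct reasons, a page that is not
writable (-EFAULT) or a transient architecture-level failure (-EAGAIN,
e.g. when bounded LL/SC retries give up), and the old code funnelled
every failure into the page-fault path. The patch propagates the real
error and handles -EAGAIN by yielding and retrying. A userspace sketch
of the dispatch pattern; demo_cmpxchg() is a made-up stand-in that
fails transiently once:

#include <errno.h>
#include <stdio.h>

/* Made-up stand-in for cmpxchg_futex_value_locked(): the real helper
 * can fail because the user page is not writable (-EFAULT) or, on some
 * architectures, transiently (-EAGAIN). This one fails transiently once. */
static int demo_cmpxchg(unsigned int *curval, unsigned int *uaddr,
			unsigned int uval, unsigned int newval)
{
	static int transient = 1;

	if (transient--)
		return -EAGAIN;
	*curval = *uaddr;
	if (*uaddr == uval)
		*uaddr = newval;
	return 0;
}

int main(void)
{
	unsigned int word = 1, curval = 0;
	int err;

retry:
	err = demo_cmpxchg(&curval, &word, 1, 2);
	switch (err) {
	case 0:
		printf("updated: %u -> %u\n", curval, word);
		break;
	case -EAGAIN:
		/* transient failure: the kernel cond_resched()s, then
		 * retries without touching the fault-in path */
		goto retry;
	case -EFAULT:
		/* only here would fault_in_user_writeable() make sense */
		break;
	}
	return 0;
}
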
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index 1401afa0d58a..53a081392115 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -357,8 +357,10 @@ irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify)
+ 	desc->affinity_notify = notify;
+ 	raw_spin_unlock_irqrestore(&desc->lock, flags);
+ 
+-	if (old_notify)
++	if (old_notify) {
++		cancel_work_sync(&old_notify->work);
+ 		kref_put(&old_notify->kref, old_notify->release);
++	}
+ 
+ 	return 0;
+ }
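
The ordering in the irq/manage.c hunk is the entire fix: a replaced
affinity notifier may still have its work item queued or running, so
cancel_work_sync() must finish before kref_put() can drop the last
reference and free the notifier; putting the reference first left a
use-after-free window for the scheduled work.
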
+diff --git a/lib/ubsan.c b/lib/ubsan.c
+index e4162f59a81c..1e9e2ab25539 100644
+--- a/lib/ubsan.c
++++ b/lib/ubsan.c
+@@ -86,11 +86,13 @@ static bool is_inline_int(struct type_descriptor *type)
+ 	return bits <= inline_bits;
+ }
+ 
+-static s_max get_signed_val(struct type_descriptor *type, unsigned long val)
++static s_max get_signed_val(struct type_descriptor *type, void *val)
+ {
+ 	if (is_inline_int(type)) {
+ 		unsigned extra_bits = sizeof(s_max)*8 - type_bit_width(type);
+-		return ((s_max)val) << extra_bits >> extra_bits;
++		unsigned long ulong_val = (unsigned long)val;
++
++		return ((s_max)ulong_val) << extra_bits >> extra_bits;
+ 	}
+ 
+ 	if (type_bit_width(type) == 64)
+@@ -99,15 +101,15 @@ static s_max get_signed_val(struct type_descriptor *type, unsigned long val)
+ 	return *(s_max *)val;
+ }
+ 
+-static bool val_is_negative(struct type_descriptor *type, unsigned long val)
++static bool val_is_negative(struct type_descriptor *type, void *val)
+ {
+ 	return type_is_signed(type) && get_signed_val(type, val) < 0;
+ }
+ 
+-static u_max get_unsigned_val(struct type_descriptor *type, unsigned long val)
++static u_max get_unsigned_val(struct type_descriptor *type, void *val)
+ {
+ 	if (is_inline_int(type))
+-		return val;
++		return (unsigned long)val;
+ 
+ 	if (type_bit_width(type) == 64)
+ 		return *(u64 *)val;
+@@ -116,7 +118,7 @@ static u_max get_unsigned_val(struct type_descriptor *type, unsigned long val)
+ }
+ 
+ static void val_to_string(char *str, size_t size, struct type_descriptor *type,
+-	unsigned long value)
++			void *value)
+ {
+ 	if (type_is_int(type)) {
+ 		if (type_bit_width(type) == 128) {
+@@ -163,8 +165,8 @@ static void ubsan_epilogue(unsigned long *flags)
+ 	current->in_ubsan--;
+ }
+ 
+-static void handle_overflow(struct overflow_data *data, unsigned long lhs,
+-			unsigned long rhs, char op)
++static void handle_overflow(struct overflow_data *data, void *lhs,
++			void *rhs, char op)
+ {
+ 
+ 	struct type_descriptor *type = data->type;
+@@ -191,8 +193,7 @@ static void handle_overflow(struct overflow_data *data, unsigned long lhs,
+ }
+ 
+ void __ubsan_handle_add_overflow(struct overflow_data *data,
+-				unsigned long lhs,
+-				unsigned long rhs)
++				void *lhs, void *rhs)
+ {
+ 
+ 	handle_overflow(data, lhs, rhs, '+');
+@@ -200,23 +201,21 @@ void __ubsan_handle_add_overflow(struct overflow_data *data,
+ EXPORT_SYMBOL(__ubsan_handle_add_overflow);
+ 
+ void __ubsan_handle_sub_overflow(struct overflow_data *data,
+-				unsigned long lhs,
+-				unsigned long rhs)
++				void *lhs, void *rhs)
+ {
+ 	handle_overflow(data, lhs, rhs, '-');
+ }
+ EXPORT_SYMBOL(__ubsan_handle_sub_overflow);
+ 
+ void __ubsan_handle_mul_overflow(struct overflow_data *data,
+-				unsigned long lhs,
+-				unsigned long rhs)
++				void *lhs, void *rhs)
+ {
+ 	handle_overflow(data, lhs, rhs, '*');
+ }
+ EXPORT_SYMBOL(__ubsan_handle_mul_overflow);
+ 
+ void __ubsan_handle_negate_overflow(struct overflow_data *data,
+-				unsigned long old_val)
++				void *old_val)
+ {
+ 	unsigned long flags;
+ 	char old_val_str[VALUE_LENGTH];
+@@ -237,8 +236,7 @@ EXPORT_SYMBOL(__ubsan_handle_negate_overflow);
+ 
+ 
+ void __ubsan_handle_divrem_overflow(struct overflow_data *data,
+-				unsigned long lhs,
+-				unsigned long rhs)
++				void *lhs, void *rhs)
+ {
+ 	unsigned long flags;
+ 	char rhs_val_str[VALUE_LENGTH];
+@@ -323,7 +321,7 @@ static void ubsan_type_mismatch_common(struct type_mismatch_data_common *data,
+ }
+ 
+ void __ubsan_handle_type_mismatch(struct type_mismatch_data *data,
+-				unsigned long ptr)
++				void *ptr)
+ {
+ 	struct type_mismatch_data_common common_data = {
+ 		.location = &data->location,
+@@ -332,12 +330,12 @@ void __ubsan_handle_type_mismatch(struct type_mismatch_data *data,
+ 		.type_check_kind = data->type_check_kind
+ 	};
+ 
+-	ubsan_type_mismatch_common(&common_data, ptr);
++	ubsan_type_mismatch_common(&common_data, (unsigned long)ptr);
+ }
+ EXPORT_SYMBOL(__ubsan_handle_type_mismatch);
+ 
+ void __ubsan_handle_type_mismatch_v1(struct type_mismatch_data_v1 *data,
+-				unsigned long ptr)
++				void *ptr)
+ {
+ 
+ 	struct type_mismatch_data_common common_data = {
+@@ -347,12 +345,12 @@ void __ubsan_handle_type_mismatch_v1(struct type_mismatch_data_v1 *data,
+ 		.type_check_kind = data->type_check_kind
+ 	};
+ 
+-	ubsan_type_mismatch_common(&common_data, ptr);
++	ubsan_type_mismatch_common(&common_data, (unsigned long)ptr);
+ }
+ EXPORT_SYMBOL(__ubsan_handle_type_mismatch_v1);
+ 
+ void __ubsan_handle_vla_bound_not_positive(struct vla_bound_data *data,
+-					unsigned long bound)
++					void *bound)
+ {
+ 	unsigned long flags;
+ 	char bound_str[VALUE_LENGTH];
+@@ -369,8 +367,7 @@ void __ubsan_handle_vla_bound_not_positive(struct vla_bound_data *data,
+ }
+ EXPORT_SYMBOL(__ubsan_handle_vla_bound_not_positive);
+ 
+-void __ubsan_handle_out_of_bounds(struct out_of_bounds_data *data,
+-				unsigned long index)
++void __ubsan_handle_out_of_bounds(struct out_of_bounds_data *data, void *index)
+ {
+ 	unsigned long flags;
+ 	char index_str[VALUE_LENGTH];
+@@ -388,7 +385,7 @@ void __ubsan_handle_out_of_bounds(struct out_of_bounds_data *data,
+ EXPORT_SYMBOL(__ubsan_handle_out_of_bounds);
+ 
+ void __ubsan_handle_shift_out_of_bounds(struct shift_out_of_bounds_data *data,
+-					unsigned long lhs, unsigned long rhs)
++					void *lhs, void *rhs)
+ {
+ 	unsigned long flags;
+ 	struct type_descriptor *rhs_type = data->rhs_type;
+@@ -439,7 +436,7 @@ void __ubsan_handle_builtin_unreachable(struct unreachable_data *data)
+ EXPORT_SYMBOL(__ubsan_handle_builtin_unreachable);
+ 
+ void __ubsan_handle_load_invalid_value(struct invalid_value_data *data,
+-				unsigned long val)
++				void *val)
+ {
+ 	unsigned long flags;
+ 	char val_str[VALUE_LENGTH];
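
The ubsan.c churn changes no logic: the handlers were already decoding a
value that the compiler passes either inline in a pointer-sized argument
(types no wider than a machine word) or by reference, and retyping the
parameter from unsigned long to void * simply declares what is actually
passed (newer GCC versions warn when the prototypes disagree with their
built-in declarations). A userspace sketch of the inline decoding path
only, mirroring get_signed_val() above; two's-complement representation
is assumed, as the kernel assumes it:

#include <stdio.h>

/* Decode a bit_width-bit signed value that arrived encoded in a
 * pointer-sized handler argument. */
static long signed_val_inline(unsigned int bit_width, void *val)
{
	unsigned int extra = sizeof(long) * 8 - bit_width;

	/* Shift in the unsigned domain, then arithmetic-shift back down
	 * to sign-extend (two's complement assumed). */
	return (long)((unsigned long)val << extra) >> extra;
}

int main(void)
{
	/* An 8-bit -5 is handed over as the raw bits 0xfb. */
	printf("%ld\n", signed_val_inline(8, (void *)0xfbUL));
	return 0;
}
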
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index bd4978ce8c45..3cf0764d5793 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -1276,6 +1276,14 @@ int hci_conn_check_link_mode(struct hci_conn *conn)
+ 	    !test_bit(HCI_CONN_ENCRYPT, &conn->flags))
+ 		return 0;
+ 
++	/* The minimum encryption key size needs to be enforced by the
++	 * host stack before establishing any L2CAP connections. The
++	 * specification in theory allows a minimum of 1, but to align
++	 * BR/EDR and LE transports, a minimum of 7 is chosen.
++	 */
++	if (conn->enc_key_size < HCI_MIN_ENC_KEY_SIZE)
++		return 0;
++
+ 	return 1;
+ }
+ 
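The added guard refuses the link when the negotiated encryption key is shorter
than the 7-octet minimum named in the comment. A standalone sketch of the same
gate, with a hypothetical connection structure in place of struct hci_conn:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MIN_ENC_KEY_SIZE 7  /* the minimum chosen in the comment above */

    /* Hypothetical stand-in for the connection fields the check consults. */
    struct conn_state {
        bool authenticated;
        bool encrypted;
        uint8_t enc_key_size;   /* negotiated key size in octets */
    };

    static bool link_mode_ok(const struct conn_state *c)
    {
        if (!c->authenticated || !c->encrypted)
            return false;
        /* Short keys make brute forcing the encryption key practical. */
        return c->enc_key_size >= MIN_ENC_KEY_SIZE;
    }

    int main(void)
    {
        struct conn_state weak = { true, true, 1 };

        printf("1-octet key accepted: %d\n", link_mode_ok(&weak));
        return 0;
    }
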
+diff --git a/net/bluetooth/hidp/sock.c b/net/bluetooth/hidp/sock.c
+index 9f85a1943be9..2151913892ce 100644
+--- a/net/bluetooth/hidp/sock.c
++++ b/net/bluetooth/hidp/sock.c
+@@ -75,6 +75,7 @@ static int do_hidp_sock_ioctl(struct socket *sock, unsigned int cmd, void __user
+ 			sockfd_put(csock);
+ 			return err;
+ 		}
++		ca.name[sizeof(ca.name)-1] = 0;
+ 
+ 		err = hidp_connection_add(&ca, csock, isock);
+ 		if (!err && copy_to_user(argp, &ca, sizeof(ca)))
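
The one-line fix forces NUL termination of a fixed-size name field copied in
from user space before it is used, and later copied back, as a C string. The
same pattern, sketched with a hypothetical request structure in place of the
HIDP one:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical request structure; the real one arrives via ioctl. */
    struct connadd_req {
        char name[128]; /* fixed-size, not guaranteed to be NUL-terminated */
    };

    int main(void)
    {
        struct connadd_req ca;

        /* Simulate an untrusted sender filling the entire buffer. */
        memset(ca.name, 'A', sizeof(ca.name));

        /* The fix above: terminate before any string use or copy-out. */
        ca.name[sizeof(ca.name) - 1] = 0;

        printf("name length is now bounded: %zu\n", strlen(ca.name));
        return 0;
    }
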
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index f17e393b43b4..b53acd6c9a3d 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -510,12 +510,12 @@ void l2cap_chan_set_defaults(struct l2cap_chan *chan)
+ }
+ EXPORT_SYMBOL_GPL(l2cap_chan_set_defaults);
+ 
+-static void l2cap_le_flowctl_init(struct l2cap_chan *chan)
++static void l2cap_le_flowctl_init(struct l2cap_chan *chan, u16 tx_credits)
+ {
+ 	chan->sdu = NULL;
+ 	chan->sdu_last_frag = NULL;
+ 	chan->sdu_len = 0;
+-	chan->tx_credits = 0;
++	chan->tx_credits = tx_credits;
+ 	/* Derive MPS from connection MTU to stop HCI fragmentation */
+ 	chan->mps = min_t(u16, chan->imtu, chan->conn->mtu - L2CAP_HDR_SIZE);
+ 	/* Give enough credits for a full packet */
+@@ -1281,7 +1281,7 @@ static void l2cap_le_connect(struct l2cap_chan *chan)
+ 	if (test_and_set_bit(FLAG_LE_CONN_REQ_SENT, &chan->flags))
+ 		return;
+ 
+-	l2cap_le_flowctl_init(chan);
++	l2cap_le_flowctl_init(chan, 0);
+ 
+ 	req.psm     = chan->psm;
+ 	req.scid    = cpu_to_le16(chan->scid);
+@@ -5532,11 +5532,10 @@ static int l2cap_le_connect_req(struct l2cap_conn *conn,
+ 	chan->dcid = scid;
+ 	chan->omtu = mtu;
+ 	chan->remote_mps = mps;
+-	chan->tx_credits = __le16_to_cpu(req->credits);
+ 
+ 	__l2cap_chan_add(conn, chan);
+ 
+-	l2cap_le_flowctl_init(chan);
++	l2cap_le_flowctl_init(chan, __le16_to_cpu(req->credits));
+ 
+ 	dcid = chan->scid;
+ 	credits = chan->rx_credits;
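
The refactor closes an ordering hazard: the old helper hard-coded tx_credits
to zero, so the credit value assigned just before calling it was silently
discarded. Passing the credits into the init makes the call order irrelevant;
roughly:

    #include <stdint.h>
    #include <stdio.h>

    struct chan {
        uint16_t tx_credits;
    };

    /* Parameterized init in the spirit of the fix above: the caller's
     * credit value is applied inside the init, so no later reset can
     * clobber an assignment made before the call. */
    static void flowctl_init(struct chan *c, uint16_t tx_credits)
    {
        c->tx_credits = tx_credits;
    }

    int main(void)
    {
        struct chan c;

        /* Old, broken shape (sketch): c.tx_credits = 10 followed by an
         * init that zeroed the field would have discarded the 10. */
        flowctl_init(&c, 10);
        printf("tx_credits = %u\n", c.tx_credits);
        return 0;
    }
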
+diff --git a/sound/soc/intel/common/sst-firmware.c b/sound/soc/intel/common/sst-firmware.c
+index 1e067504b604..f830e59f93ea 100644
+--- a/sound/soc/intel/common/sst-firmware.c
++++ b/sound/soc/intel/common/sst-firmware.c
+@@ -1251,11 +1251,15 @@ struct sst_dsp *sst_dsp_new(struct device *dev,
+ 		goto irq_err;
+ 
+ 	err = sst_dma_new(sst);
+-	if (err)
+-		dev_warn(dev, "sst_dma_new failed %d\n", err);
++	if (err)  {
++		dev_err(dev, "sst_dma_new failed %d\n", err);
++		goto dma_err;
++	}
+ 
+ 	return sst;
+ 
++dma_err:
++	free_irq(sst->irq, sst);
+ irq_err:
+ 	if (sst->ops->free)
+ 		sst->ops->free(sst);



* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-05-14 22:26 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2019-05-14 22:26 UTC (permalink / raw
  To: gentoo-commits

commit:     98034f3b0a814b0e857795cb36004bcc0bfc8e50
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue May 14 22:25:53 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue May 14 22:25:53 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=98034f3b

Linux patch 5.1.2

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1001_linux-5.1.2.patch | 2955 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2959 insertions(+)

diff --git a/0000_README b/0000_README
index 72fa25f..b65b94a 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch:  1000_linux-5.1.1.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.1.1
 
+Patch:  1001_linux-5.1.2.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.1.2
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1001_linux-5.1.2.patch b/1001_linux-5.1.2.patch
new file mode 100644
index 0000000..a8d7259
--- /dev/null
+++ b/1001_linux-5.1.2.patch
@@ -0,0 +1,2955 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index 9605dbd4b5b5..141a7bb58b80 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -484,6 +484,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
+ 		/sys/devices/system/cpu/vulnerabilities/spectre_v2
+ 		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
+ 		/sys/devices/system/cpu/vulnerabilities/l1tf
++		/sys/devices/system/cpu/vulnerabilities/mds
+ Date:		January 2018
+ Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
+ Description:	Information about CPU vulnerabilities
+@@ -496,8 +497,7 @@ Description:	Information about CPU vulnerabilities
+ 		"Vulnerable"	  CPU is affected and no mitigation in effect
+ 		"Mitigation: $M"  CPU is affected and mitigation $M is in effect
+ 
+-		Details about the l1tf file can be found in
+-		Documentation/admin-guide/l1tf.rst
++		See also: Documentation/admin-guide/hw-vuln/index.rst
+ 
+ What:		/sys/devices/system/cpu/smt
+ 		/sys/devices/system/cpu/smt/active
+diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
+new file mode 100644
+index 000000000000..ffc064c1ec68
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/index.rst
+@@ -0,0 +1,13 @@
++========================
++Hardware vulnerabilities
++========================
++
++This section describes CPU vulnerabilities and provides an overview of the
++possible mitigations along with guidance for selecting mitigations if they
++are configurable at compile, boot or run time.
++
++.. toctree::
++   :maxdepth: 1
++
++   l1tf
++   mds
+diff --git a/Documentation/admin-guide/hw-vuln/l1tf.rst b/Documentation/admin-guide/hw-vuln/l1tf.rst
+new file mode 100644
+index 000000000000..31653a9f0e1b
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/l1tf.rst
+@@ -0,0 +1,615 @@
++L1TF - L1 Terminal Fault
++========================
++
++L1 Terminal Fault is a hardware vulnerability which allows unprivileged
++speculative access to data which is available in the Level 1 Data Cache
++when the page table entry controlling the virtual address, which is used
++for the access, has the Present bit cleared or other reserved bits set.
++
++Affected processors
++-------------------
++
++This vulnerability affects a wide range of Intel processors. The
++vulnerability is not present on:
++
++   - Processors from AMD, Centaur and other non Intel vendors
++
++   - Older processor models, where the CPU family is < 6
++
++   - A range of Intel ATOM processors (Cedarview, Cloverview, Lincroft,
++     Penwell, Pineview, Silvermont, Airmont, Merrifield)
++
++   - The Intel XEON PHI family
++
++   - Intel processors which have the ARCH_CAP_RDCL_NO bit set in the
++     IA32_ARCH_CAPABILITIES MSR. If the bit is set the CPU is not affected
++     by the Meltdown vulnerability either. These CPUs should become
++     available by end of 2018.
++
++Whether a processor is affected or not can be read out from the L1TF
++vulnerability file in sysfs. See :ref:`l1tf_sys_info`.
++
++Related CVEs
++------------
++
++The following CVE entries are related to the L1TF vulnerability:
++
++   =============  =================  ==============================
++   CVE-2018-3615  L1 Terminal Fault  SGX related aspects
++   CVE-2018-3620  L1 Terminal Fault  OS, SMM related aspects
++   CVE-2018-3646  L1 Terminal Fault  Virtualization related aspects
++   =============  =================  ==============================
++
++Problem
++-------
++
++If an instruction accesses a virtual address for which the relevant page
++table entry (PTE) has the Present bit cleared or other reserved bits set,
++then speculative execution ignores the invalid PTE and loads the referenced
++data if it is present in the Level 1 Data Cache, as if the page referenced
++by the address bits in the PTE was still present and accessible.
++
++While this is a purely speculative mechanism and the instruction will raise
++a page fault when it is retired eventually, the pure act of loading the
++data and making it available to other speculative instructions opens up the
++opportunity for side channel attacks to unprivileged malicious code,
++similar to the Meltdown attack.
++
++While Meltdown breaks the user space to kernel space protection, L1TF
++allows attacking any physical memory address in the system, and the attack
++works across all protection domains. It allows an attack on SGX and also
++works from inside virtual machines because the speculation bypasses the
++extended page table (EPT) protection mechanism.
++
++
++Attack scenarios
++----------------
++
++1. Malicious user space
++^^^^^^^^^^^^^^^^^^^^^^^
++
++   Operating Systems store arbitrary information in the address bits of a
++   PTE which is marked non-present. This allows a malicious user space
++   application to attack the physical memory to which these PTEs resolve.
++   In some cases user-space can maliciously influence the information
++   encoded in the address bits of the PTE, thus making attacks more
++   deterministic and more practical.
++
++   The Linux kernel contains a mitigation for this attack vector, PTE
++   inversion, which is permanently enabled and has no performance
++   impact. The kernel ensures that the address bits of PTEs, which are not
++   marked present, never point to cacheable physical memory space.
++
++   A system with an up to date kernel is protected against attacks from
++   malicious user space applications.
++
++2. Malicious guest in a virtual machine
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   The fact that L1TF breaks all domain protections allows malicious guest
++   OSes, which can control the PTEs directly, and malicious guest user
++   space applications, which run on an unprotected guest kernel lacking the
++   PTE inversion mitigation for L1TF, to attack physical host memory.
++
++   A special aspect of L1TF in the context of virtualization is simultaneous
++   multi-threading (SMT). The Intel implementation of SMT is called
++   HyperThreading. The fact that Hyperthreads on the affected processors
++   share the L1 Data Cache (L1D) is important for this. As the flaw allows
++   only to attack data which is present in L1D, a malicious guest running
++   on one Hyperthread can attack the data which is brought into the L1D by
++   the context which runs on the sibling Hyperthread of the same physical
++   core. This context can be host OS, host user space or a different guest.
++
++   If the processor does not support Extended Page Tables, the attack is
++   only possible when the hypervisor does not sanitize the content of the
++   effective (shadow) page tables.
++
++   While solutions exist to mitigate these attack vectors fully, these
++   mitigations are not enabled by default in the Linux kernel because they
++   can affect performance significantly. The kernel provides several
++   mechanisms which can be utilized to address the problem depending on the
++   deployment scenario. The mitigations, their protection scope and impact
++   are described in the next sections.
++
++   The default mitigations and the rationale for choosing them are explained
++   at the end of this document. See :ref:`default_mitigations`.
++
++.. _l1tf_sys_info:
++
++L1TF system information
++-----------------------
++
++The Linux kernel provides a sysfs interface to enumerate the current L1TF
++status of the system: whether the system is vulnerable, and which
++mitigations are active. The relevant sysfs file is:
++
++/sys/devices/system/cpu/vulnerabilities/l1tf
++
++The possible values in this file are:
++
++  ===========================   ===============================
++  'Not affected'		The processor is not vulnerable
++  'Mitigation: PTE Inversion'	The host protection is active
++  ===========================   ===============================
++
++If KVM/VMX is enabled and the processor is vulnerable then the following
++information is appended to the 'Mitigation: PTE Inversion' part:
++
++  - SMT status:
++
++    =====================  ================
++    'VMX: SMT vulnerable'  SMT is enabled
++    'VMX: SMT disabled'    SMT is disabled
++    =====================  ================
++
++  - L1D Flush mode:
++
++    ================================  ====================================
++    'L1D vulnerable'		      L1D flushing is disabled
++
++    'L1D conditional cache flushes'   L1D flush is conditionally enabled
++
++    'L1D cache flushes'		      L1D flush is unconditionally enabled
++    ================================  ====================================
++
++The resulting grade of protection is discussed in the following sections.
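
A minimal userspace sketch that reads this status file, assuming only that the
path exists on the running kernel:

    #include <stdio.h>

    int main(void)
    {
        char line[256];
        FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/l1tf", "r");

        if (!f) {
            perror("l1tf sysfs file"); /* older kernel or sysfs not mounted */
            return 1;
        }
        if (fgets(line, sizeof(line), f))
            printf("l1tf status: %s", line);
        fclose(f);
        return 0;
    }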
++
++
++Host mitigation mechanism
++-------------------------
++
++The kernel is unconditionally protected against L1TF attacks from malicious
++user space running on the host.
++
++
++Guest mitigation mechanisms
++---------------------------
++
++.. _l1d_flush:
++
++1. L1D flush on VMENTER
++^^^^^^^^^^^^^^^^^^^^^^^
++
++   To make sure that a guest cannot attack data which is present in the L1D
++   the hypervisor flushes the L1D before entering the guest.
++
++   Flushing the L1D evicts not only the data which should not be accessed
++   by a potentially malicious guest, it also flushes the guest
++   data. Flushing the L1D has a performance impact as the processor has to
++   bring the flushed guest data back into the L1D. Depending on the
++   frequency of VMEXIT/VMENTER and the type of computations in the guest,
++   performance degradation in the range of 1% to 50% has been observed. For
++   scenarios where guest VMEXIT/VMENTER are rare the performance impact is
++   minimal. Virtio and mechanisms like posted interrupts are designed to
++   confine the VMEXITs to a bare minimum, but specific configurations and
++   application scenarios might still suffer from a high VMEXIT rate.
++
++   The kernel provides two L1D flush modes:
++    - conditional ('cond')
++    - unconditional ('always')
++
++   The conditional mode avoids L1D flushing after VMEXITs which execute
++   only audited code paths before the corresponding VMENTER. These code
++   paths have been verified not to expose secrets or other
++   interesting data to an attacker, but they can leak information about the
++   address space layout of the hypervisor.
++
++   Unconditional mode flushes L1D on all VMENTER invocations and provides
++   maximum protection. It has a higher overhead than the conditional
++   mode. The overhead cannot be quantified correctly as it depends on the
++   workload scenario and the resulting number of VMEXITs.
++
++   The general recommendation is to enable L1D flush on VMENTER. The kernel
++   defaults to conditional mode on affected processors.
++
++   **Note** that L1D flush does not prevent the SMT problem because the
++   sibling thread will also bring back its data into the L1D which makes it
++   attackable again.
++
++   L1D flush can be controlled by the administrator via the kernel command
++   line and sysfs control files. See :ref:`mitigation_control_command_line`
++   and :ref:`mitigation_control_kvm`.
++
++.. _guest_confinement:
++
++2. Guest VCPU confinement to dedicated physical cores
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   To address the SMT problem, it is possible to make a guest or a group of
++   guests affine to one or more physical cores. The proper mechanism for
++   that is to utilize exclusive cpusets to ensure that no other guest or
++   host tasks can run on these cores.
++
++   If only a single guest or related guests run on sibling SMT threads on
++   the same physical core then they can only attack their own memory and
++   restricted parts of the host memory.
++
++   Host memory is attackable when one of the sibling SMT threads runs in
++   host OS (hypervisor) context and the other in guest context. The amount
++   of valuable information from the host OS context depends on the context
++   which the host OS executes, i.e. interrupts, soft interrupts and kernel
++   threads. The amount of valuable data from these contexts cannot be
++   declared as non-interesting for an attacker without deep inspection of
++   the code.
++
++   **Note** that assigning guests to a fixed set of physical cores affects
++   the ability of the scheduler to do load balancing and might have
++   negative effects on CPU utilization depending on the hosting
++   scenario. Disabling SMT might be a viable alternative for particular
++   scenarios.
++
++   For further information about confining guests to a single or to a group
++   of cores consult the cpusets documentation:
++
++   https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt
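
As a rough illustration, the cgroup-v1 cpuset files can be driven from C along
the following lines; the group name, CPU numbers and memory node are
assumptions, and both cpuset.cpus and cpuset.mems must be populated before
tasks can be attached:

    #include <errno.h>
    #include <stdio.h>
    #include <sys/stat.h>

    static int write_str(const char *path, const char *val)
    {
        FILE *f = fopen(path, "w");

        if (!f)
            return -1;
        fputs(val, f);
        return fclose(f);
    }

    int main(void)
    {
        /* Assumes a cgroup-v1 cpuset hierarchy mounted at this path. */
        if (mkdir("/sys/fs/cgroup/cpuset/guest0", 0755) && errno != EEXIST) {
            perror("mkdir");
            return 1;
        }
        write_str("/sys/fs/cgroup/cpuset/guest0/cpuset.cpus", "2-3");
        write_str("/sys/fs/cgroup/cpuset/guest0/cpuset.mems", "0");
        /* Exclusive use keeps sibling cpusets off these cores; the write
         * fails if another cpuset already overlaps them. */
        write_str("/sys/fs/cgroup/cpuset/guest0/cpuset.cpu_exclusive", "1");
        /* Finally the guest's thread IDs would be written to .../tasks. */
        return 0;
    }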
++
++.. _interrupt_isolation:
++
++3. Interrupt affinity
++^^^^^^^^^^^^^^^^^^^^^
++
++   Interrupts can be made affine to logical CPUs. This is not universally
++   true because there are types of interrupts which are truly per CPU
++   interrupts, e.g. the local timer interrupt. Aside from that, multi-queue
++   devices affine their interrupts to single CPUs or groups of CPUs per
++   queue without allowing the administrator to control the affinities.
++
++   Moving the interrupts, which can be affinity controlled, away from CPUs
++   which run untrusted guests, reduces the attack vector space.
++
++   Whether the interrupts that are affine to CPUs, which run untrusted
++   guests, provide interesting data for an attacker depends on the system
++   configuration and the scenarios which run on the system. While for some
++   of the interrupts it can be assumed that they won't expose interesting
++   information beyond exposing hints about the host OS memory layout, there
++   is no way to make general assumptions.
++
++   Interrupt affinity can be controlled by the administrator via the
++   /proc/irq/$NR/smp_affinity[_list] files. Limited documentation is
++   available at:
++
++   https://www.kernel.org/doc/Documentation/IRQ-affinity.txt
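
A small sketch of steering one such controllable interrupt onto host-only
CPUs; the IRQ number and CPU range are illustrative, and truly per-CPU
interrupts will reject the write:

    #include <stdio.h>

    int main(void)
    {
        /* IRQ 42 is illustrative; pick one from /proc/interrupts. */
        FILE *f = fopen("/proc/irq/42/smp_affinity_list", "w");

        if (!f) {
            perror("smp_affinity_list");
            return 1;
        }
        fprintf(f, "0-1\n"); /* keep delivery on CPUs not running guests */
        return fclose(f) ? 1 : 0; /* invalid masks are rejected at close */
    }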
++
++.. _smt_control:
++
++4. SMT control
++^^^^^^^^^^^^^^
++
++   To prevent the SMT issues of L1TF it might be necessary to disable SMT
++   completely. Disabling SMT can have a significant performance impact, but
++   the impact depends on the hosting scenario and the type of workloads.
++   The impact of disabling SMT also needs to be weighed against the impact
++   of other mitigation solutions like confining guests to dedicated cores.
++
++   The kernel provides a sysfs interface to retrieve the status of SMT and
++   to control it. It also provides a kernel command line interface to
++   control SMT.
++
++   The kernel command line interface consists of the following options:
++
++     =========== ==========================================================
++     nosmt	 Affects the bring up of the secondary CPUs during boot. The
++		 kernel tries to bring all present CPUs online during the
++		 boot process. "nosmt" makes sure that from each physical
++		 core only one - the so called primary (hyper) thread is
++		 activated. Due to a design flaw of Intel processors related
++		 to Machine Check Exceptions the non primary siblings have
++		 to be brought up at least partially and are then shut down
++		 again.  "nosmt" can be undone via the sysfs interface.
++
++     nosmt=force Has the same effect as "nosmt" but it does not allow to
++		 undo the SMT disable via the sysfs interface.
++     =========== ==========================================================
++
++   The sysfs interface provides two files:
++
++   - /sys/devices/system/cpu/smt/control
++   - /sys/devices/system/cpu/smt/active
++
++   /sys/devices/system/cpu/smt/control:
++
++     This file allows the SMT control state to be read out and provides the
++     ability to disable or (re)enable SMT. The possible states are:
++
++	==============  ===================================================
++	on		SMT is supported by the CPU and enabled. All
++			logical CPUs can be onlined and offlined without
++			restrictions.
++
++	off		SMT is supported by the CPU and disabled. Only
++			the so called primary SMT threads can be onlined
++			and offlined without restrictions. An attempt to
++			online a non-primary sibling is rejected.
++
++	forceoff	Same as 'off' but the state cannot be controlled.
++			Attempts to write to the control file are rejected.
++
++	notsupported	The processor does not support SMT. It's therefore
++			not affected by the SMT implications of L1TF.
++			Attempts to write to the control file are rejected.
++	==============  ===================================================
++
++     The possible states which can be written into this file to control SMT
++     state are:
++
++     - on
++     - off
++     - forceoff
++
++   /sys/devices/system/cpu/smt/active:
++
++     This file reports whether SMT is enabled and active, i.e. if on any
++     physical core two or more sibling threads are online.
++
++   SMT control is also possible at boot time via the l1tf kernel command
++   line parameter in combination with L1D flush control. See
++   :ref:`mitigation_control_command_line`.
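
A small sketch that reads the current state and, when SMT is enabled, asks the
kernel to disable it; this needs root, and as the table notes the write is
rejected in the 'forceoff' and 'notsupported' states:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *path = "/sys/devices/system/cpu/smt/control";
        char state[32] = "";
        FILE *f = fopen(path, "r");

        if (!f) {
            perror("smt control"); /* kernel lacks the SMT control interface */
            return 1;
        }
        if (fgets(state, sizeof(state), f))
            printf("current state: %s", state);
        fclose(f);

        if (strncmp(state, "on", 2) == 0) {
            f = fopen(path, "w"); /* requires root */
            if (f) {
                fputs("off", f);  /* offlines all non-primary siblings */
                fclose(f);
            }
        }
        return 0;
    }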
++
++5. Disabling EPT
++^^^^^^^^^^^^^^^^
++
++  Disabling EPT for virtual machines provides full mitigation for L1TF even
++  with SMT enabled, because the effective page tables for guests are
++  managed and sanitized by the hypervisor. However, disabling EPT has a
++  significant performance impact, especially when the Meltdown mitigation
++  KPTI is enabled.
++
++  EPT can be disabled in the hypervisor via the 'kvm-intel.ept' parameter.
++
++There is ongoing research and development for new mitigation mechanisms to
++address the performance impact of disabling SMT or EPT.
++
++.. _mitigation_control_command_line:
++
++Mitigation control on the kernel command line
++---------------------------------------------
++
++The kernel command line allows control of the L1TF mitigations at boot
++time with the option "l1tf=". The valid arguments for this option are:
++
++  ============  =============================================================
++  full		Provides all available mitigations for the L1TF
++		vulnerability. Disables SMT and enables all mitigations in
++		the hypervisors, i.e. unconditional L1D flushing
++
++		SMT control and L1D flush control via the sysfs interface
++		are still possible after boot.  Hypervisors will issue a
++		warning when the first VM is started in a potentially
++		insecure configuration, i.e. SMT enabled or L1D flush
++		disabled.
++
++  full,force	Same as 'full', but disables SMT and L1D flush runtime
++		control. Implies the 'nosmt=force' command line option.
++		(i.e. sysfs control of SMT is disabled.)
++
++  flush		Leaves SMT enabled and enables the default hypervisor
++		mitigation, i.e. conditional L1D flushing
++
++		SMT control and L1D flush control via the sysfs interface
++		are still possible after boot.  Hypervisors will issue a
++		warning when the first VM is started in a potentially
++		insecure configuration, i.e. SMT enabled or L1D flush
++		disabled.
++
++  flush,nosmt	Disables SMT and enables the default hypervisor mitigation,
++		i.e. conditional L1D flushing.
++
++		SMT control and L1D flush control via the sysfs interface
++		are still possible after boot.  Hypervisors will issue a
++		warning when the first VM is started in a potentially
++		insecure configuration, i.e. SMT enabled or L1D flush
++		disabled.
++
++  flush,nowarn	Same as 'flush', but hypervisors will not warn when a VM is
++		started in a potentially insecure configuration.
++
++  off		Disables hypervisor mitigations and doesn't emit any
++		warnings.
++		It also drops the swap size and available RAM limit restrictions
++		on both hypervisor and bare metal.
++
++  ============  =============================================================
++
++The default is 'flush'. For details about L1D flushing see :ref:`l1d_flush`.
++
++
++.. _mitigation_control_kvm:
++
++Mitigation control for KVM - module parameter
++---------------------------------------------
++
++The KVM hypervisor mitigation mechanism, flushing the L1D cache when
++entering a guest, can be controlled with a module parameter.
++
++The option/parameter is "kvm-intel.vmentry_l1d_flush=". It takes the
++following arguments:
++
++  ============  ==============================================================
++  always	L1D cache flush on every VMENTER.
++
++  cond		Flush L1D on VMENTER only when the code between VMEXIT and
++		VMENTER can leak host memory which is considered
++		interesting for an attacker. This still can leak host memory
++		which allows e.g. determining the host's address space layout.
++
++  never		Disables the mitigation
++  ============  ==============================================================
++
++The parameter can be provided on the kernel command line, as a module
++parameter when loading the modules and at runtime modified via the sysfs
++file:
++
++/sys/module/kvm_intel/parameters/vmentry_l1d_flush
++
++The default is 'cond'. If 'l1tf=full,force' is given on the kernel command
++line, then 'always' is enforced and the kvm-intel.vmentry_l1d_flush
++module parameter is ignored and writes to the sysfs file are rejected.
++
++.. _mitigation_selection:
++
++Mitigation selection guide
++--------------------------
++
++1. No virtualization in use
++^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   The system is protected by the kernel unconditionally and no further
++   action is required.
++
++2. Virtualization with trusted guests
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   If the guest comes from a trusted source and the guest OS kernel is
++   guaranteed to have the L1TF mitigations in place the system is fully
++   protected against L1TF and no further action is required.
++
++   To avoid the overhead of the default L1D flushing on VMENTER the
++   administrator can disable the flushing via the kernel command line and
++   sysfs control files. See :ref:`mitigation_control_command_line` and
++   :ref:`mitigation_control_kvm`.
++
++
++3. Virtualization with untrusted guests
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++3.1. SMT not supported or disabled
++""""""""""""""""""""""""""""""""""
++
++  If SMT is not supported by the processor or disabled in the BIOS or by
++  the kernel, it's only required to enforce L1D flushing on VMENTER.
++
++  Conditional L1D flushing is the default behaviour and can be tuned. See
++  :ref:`mitigation_control_command_line` and :ref:`mitigation_control_kvm`.
++
++3.2. EPT not supported or disabled
++""""""""""""""""""""""""""""""""""
++
++  If EPT is not supported by the processor or disabled in the hypervisor,
++  the system is fully protected. SMT can stay enabled and L1D flushing on
++  VMENTER is not required.
++
++  EPT can be disabled in the hypervisor via the 'kvm-intel.ept' parameter.
++
++3.3. SMT and EPT supported and active
++"""""""""""""""""""""""""""""""""""""
++
++  If SMT and EPT are supported and active then various degrees of
++  mitigations can be employed:
++
++  - L1D flushing on VMENTER:
++
++    L1D flushing on VMENTER is the minimal protection requirement, but it
++    is only potent in combination with other mitigation methods.
++
++    Conditional L1D flushing is the default behaviour and can be tuned. See
++    :ref:`mitigation_control_command_line` and :ref:`mitigation_control_kvm`.
++
++  - Guest confinement:
++
++    Confinement of guests to a single or a group of physical cores which
++    are not running any other processes, can reduce the attack surface
++    significantly, but interrupts, soft interrupts and kernel threads can
++    still expose valuable data to a potential attacker. See
++    :ref:`guest_confinement`.
++
++  - Interrupt isolation:
++
++    Isolating the guest CPUs from interrupts can reduce the attack surface
++    further, but still allows a malicious guest to explore a limited amount
++    of host physical memory. This can at least be used to gain knowledge
++    about the host address space layout. The interrupts which have a fixed
++    affinity to the CPUs which run the untrusted guests can, depending on
++    the scenario, still trigger soft interrupts and schedule kernel threads
++    which might expose valuable information. See
++    :ref:`interrupt_isolation`.
++
++The above three mitigation methods combined can provide protection to a
++certain degree, but the risk of the remaining attack surface has to be
++carefully analyzed. For full protection the following methods are
++available:
++
++  - Disabling SMT:
++
++    Disabling SMT and enforcing the L1D flushing provides the maximum
++    amount of protection. This mitigation does not depend on any of the
++    above mitigation methods.
++
++    SMT control and L1D flushing can be tuned by the command line
++    parameters 'nosmt', 'l1tf', 'kvm-intel.vmentry_l1d_flush' and at run
++    time with the matching sysfs control files. See :ref:`smt_control`,
++    :ref:`mitigation_control_command_line` and
++    :ref:`mitigation_control_kvm`.
++
++  - Disabling EPT:
++
++    Disabling EPT provides the maximum amount of protection as well. It does
++    not depend on any of the above mitigation methods. SMT can stay
++    enabled and L1D flushing is not required, but the performance impact is
++    significant.
++
++    EPT can be disabled in the hypervisor via the 'kvm-intel.ept'
++    parameter.
++
++3.4. Nested virtual machines
++""""""""""""""""""""""""""""
++
++When nested virtualization is in use, three operating systems are involved:
++the bare metal hypervisor, the nested hypervisor and the nested virtual
++machine.  VMENTER operations from the nested hypervisor into the nested
++guest will always be processed by the bare metal hypervisor. If KVM is the
++bare metal hypervisor it will:
++
++ - Flush the L1D cache on every switch from the nested hypervisor to the
++   nested virtual machine, so that the nested hypervisor's secrets are not
++   exposed to the nested virtual machine;
++
++ - Flush the L1D cache on every switch from the nested virtual machine to
++   the nested hypervisor; this is a complex operation, and flushing the L1D
++   cache prevents the bare metal hypervisor's secrets from being exposed
++   to the nested virtual machine;
++
++ - Instruct the nested hypervisor to not perform any L1D cache flush. This
++   is an optimization to avoid double L1D flushing.
++
++
++.. _default_mitigations:
++
++Default mitigations
++-------------------
++
++  The kernel default mitigations for vulnerable processors are:
++
++  - PTE inversion to protect against malicious user space. This is done
++    unconditionally and cannot be controlled. The swap storage is limited
++    to ~16TB.
++
++  - L1D conditional flushing on VMENTER when EPT is enabled for
++    a guest.
++
++  The kernel does not by default enforce the disabling of SMT, which leaves
++  SMT systems vulnerable when running untrusted guests with EPT enabled.
++
++  The rationale for this choice is:
++
++  - Force disabling SMT can break existing setups, especially with
++    unattended updates.
++
++  - If regular users run untrusted guests on their machine, then L1TF is
++    just an add on to other malware which might be embedded in an untrusted
++    guest, e.g. spam-bots or attacks on the local network.
++
++    There is no technical way to prevent a user from running untrusted code
++    on their machines blindly.
++
++  - It's technically extremely unlikely and from today's knowledge even
++    impossible that L1TF can be exploited via the most popular attack
++    mechanisms like JavaScript because these mechanisms have no way to
++    control PTEs. If this were possible and no other mitigation were
++    available, then the default might be different.
++
++  - The administrators of cloud and hosting setups have to carefully
++    analyze the risk for their scenarios and make the appropriate
++    mitigation choices, which might even vary across their deployed
++    machines and also result in other changes of their overall setup.
++    There is no way for the kernel to provide a sensible default for this
++    kind of scenario.
+diff --git a/Documentation/admin-guide/hw-vuln/mds.rst b/Documentation/admin-guide/hw-vuln/mds.rst
+new file mode 100644
+index 000000000000..e3a796c0d3a2
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/mds.rst
+@@ -0,0 +1,308 @@
++MDS - Microarchitectural Data Sampling
++======================================
++
++Microarchitectural Data Sampling is a hardware vulnerability which allows
++unprivileged speculative access to data which is available in various CPU
++internal buffers.
++
++Affected processors
++-------------------
++
++This vulnerability affects a wide range of Intel processors. The
++vulnerability is not present on:
++
++   - Processors from AMD, Centaur and other non Intel vendors
++
++   - Older processor models, where the CPU family is < 6
++
++   - Some Atoms (Bonnell, Saltwell, Goldmont, GoldmontPlus)
++
++   - Intel processors which have the ARCH_CAP_MDS_NO bit set in the
++     IA32_ARCH_CAPABILITIES MSR.
++
++Whether a processor is affected or not can be read out from the MDS
++vulnerability file in sysfs. See :ref:`mds_sys_info`.
++
++Not all processors are affected by all variants of MDS, but the mitigation
++is identical for all of them so the kernel treats them as a single
++vulnerability.
++
++Related CVEs
++------------
++
++The following CVE entries are related to the MDS vulnerability:
++
++   ==============  =====  ===================================================
++   CVE-2018-12126  MSBDS  Microarchitectural Store Buffer Data Sampling
++   CVE-2018-12130  MFBDS  Microarchitectural Fill Buffer Data Sampling
++   CVE-2018-12127  MLPDS  Microarchitectural Load Port Data Sampling
++   CVE-2019-11091  MDSUM  Microarchitectural Data Sampling Uncacheable Memory
++   ==============  =====  ===================================================
++
++Problem
++-------
++
++When performing store, load, or L1 refill operations, processors write data
++into temporary microarchitectural structures (buffers). The data in the
++buffer can be forwarded to load operations as an optimization.
++
++Under certain conditions, usually a fault/assist caused by a load
++operation, data unrelated to the load memory address can be speculatively
++forwarded from the buffers. Because the load operation causes a fault or
++assist and its result will be discarded, the forwarded data will not cause
++incorrect program execution or state changes. But a malicious operation
++may be able to forward this speculative data to a disclosure gadget which
++allows in turn to infer the value via a cache side channel attack.
++
++Because the buffers are potentially shared between Hyper-Threads, cross
++Hyper-Thread attacks are possible.
++
++Deeper technical information is available in the MDS specific x86
++architecture section: :ref:`Documentation/x86/mds.rst <mds>`.
++
++
++Attack scenarios
++----------------
++
++Attacks against the MDS vulnerabilities can be mounted from malicious,
++unprivileged user space applications running on hosts or guests. Malicious
++guest OSes can obviously mount attacks as well.
++
++Contrary to other speculation based vulnerabilities the MDS vulnerability
++does not allow the attacker to control the memory target address. As a
++consequence the attacks are purely sampling based, but as demonstrated with
++the TLBleed attack, samples can be postprocessed successfully.
++
++Web browsers
++^^^^^^^^^^^^
++
++  It's unclear whether attacks through web browsers are possible at
++  all. Exploitation through JavaScript is considered very unlikely,
++  but other widely used web technologies like WebAssembly could possibly be
++  abused.
++
++
++.. _mds_sys_info:
++
++MDS system information
++-----------------------
++
++The Linux kernel provides a sysfs interface to enumerate the current MDS
++status of the system: whether the system is vulnerable, and which
++mitigations are active. The relevant sysfs file is:
++
++/sys/devices/system/cpu/vulnerabilities/mds
++
++The possible values in this file are:
++
++  .. list-table::
++
++     * - 'Not affected'
++       - The processor is not vulnerable
++     * - 'Vulnerable'
++       - The processor is vulnerable, but no mitigation enabled
++     * - 'Vulnerable: Clear CPU buffers attempted, no microcode'
++       - The processor is vulnerable but microcode is not updated.
++
++         The mitigation is enabled on a best effort basis. See :ref:`vmwerv`
++     * - 'Mitigation: Clear CPU buffers'
++       - The processor is vulnerable and the CPU buffer clearing mitigation is
++         enabled.
++
++If the processor is vulnerable then the following information is appended
++to the above information:
++
++    ========================  ============================================
++    'SMT vulnerable'          SMT is enabled
++    'SMT mitigated'           SMT is enabled and mitigated
++    'SMT disabled'            SMT is disabled
++    'SMT Host state unknown'  Kernel runs in a VM, Host SMT state unknown
++    ========================  ============================================
++
++.. _vmwerv:
++
++Best effort mitigation mode
++^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++  If the processor is vulnerable, but the availability of the microcode based
++  mitigation mechanism is not advertised via CPUID, the kernel selects a best
++  effort mitigation mode.  This mode invokes the mitigation instructions
++  without a guarantee that they clear the CPU buffers.
++
++  This is done to address virtualization scenarios where the host has the
++  microcode update applied, but the hypervisor is not yet updated to expose
++  the CPUID to the guest. If the host has updated microcode the protection
++  takes effect; otherwise, a few CPU cycles are wasted pointlessly.
++
++  The state in the mds sysfs file reflects this situation accordingly.
++
++
++Mitigation mechanism
++--------------------
++
++The kernel detects the affected CPUs and the presence of the microcode
++which is required.
++
++If a CPU is affected and the microcode is available, then the kernel
++enables the mitigation by default. The mitigation can be controlled at boot
++time via a kernel command line option. See
++:ref:`mds_mitigation_control_command_line`.
++
++.. _cpu_buffer_clear:
++
++CPU buffer clearing
++^^^^^^^^^^^^^^^^^^^
++
++  The mitigation for MDS clears the affected CPU buffers on return to user
++  space and when entering a guest.
++
++  If SMT is enabled it also clears the buffers on idle entry when the CPU
++  is only affected by MSBDS and not any other MDS variant, because the
++  other variants cannot be protected against cross Hyper-Thread attacks.
++
++  For CPUs which are only affected by MSBDS the user space, guest and idle
++  transition mitigations are sufficient and SMT is not affected.
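++
++  On x86 the clearing described above is built around the VERW
++  instruction: with updated (MD_CLEAR-enumerating) microcode, its
++  memory-operand form overwrites the affected buffers as a side effect.
++  A rough, x86-only sketch of the idea; the helper name and selector
++  value are illustrative, not the kernel's:
++
++    #include <stdint.h>
++
++    static inline void clear_cpu_buffers(void)
++    {
++        /* Stand-in for the kernel's data segment selector. */
++        static const uint16_t ds = 0x18;
++
++        /* Memory-operand VERW; without new microcode this is a harmless
++         * segment check, with it the CPU buffers are overwritten too. */
++        __asm__ volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
++    }
++
++    int main(void)
++    {
++        clear_cpu_buffers();
++        return 0;
++    }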
++
++.. _virt_mechanism:
++
++Virtualization mitigation
++^^^^^^^^^^^^^^^^^^^^^^^^^
++
++  The protection for host to guest transition depends on the L1TF
++  vulnerability of the CPU:
++
++  - CPU is affected by L1TF:
++
++    If the L1D flush mitigation is enabled and up to date microcode is
++    available, the L1D flush mitigation is automatically protecting the
++    guest transition.
++
++    If the L1D flush mitigation is disabled then the MDS mitigation is
++    invoked explicitly when the host MDS mitigation is enabled.
++
++    For details on L1TF and virtualization see:
++    :ref:`Documentation/admin-guide/hw-vuln/l1tf.rst <mitigation_control_kvm>`.
++
++  - CPU is not affected by L1TF:
++
++    CPU buffers are flushed before entering the guest when the host MDS
++    mitigation is enabled.
++
++  The resulting MDS protection matrix for the host to guest transition:
++
++  ============ ===== ============= ============ =================
++   L1TF         MDS   VMX-L1FLUSH   Host MDS     MDS-State
++
++   Don't care   No    Don't care    N/A          Not affected
++
++   Yes          Yes   Disabled      Off          Vulnerable
++
++   Yes          Yes   Disabled      Full         Mitigated
++
++   Yes          Yes   Enabled       Don't care   Mitigated
++
++   No           Yes   N/A           Off          Vulnerable
++
++   No           Yes   N/A           Full         Mitigated
++  ============ ===== ============= ============ =================
++
++  This only covers the host to guest transition, i.e. prevents leakage from
++  host to guest, but does not protect the guest internally. Guests need to
++  have their own protections.
++
++.. _xeon_phi:
++
++XEON PHI specific considerations
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++  The XEON PHI processor family is affected by MSBDS which can be exploited
++  cross Hyper-Threads when entering idle states. Some XEON PHI variants
++  allow the use of MWAIT in user space (Ring 3), which opens a potential
++  attack vector for malicious user space. The exposure can be disabled on
++  the kernel command line with the 'ring3mwait=disable' command line option.
++
++  XEON PHI is not affected by the other MDS variants and MSBDS is mitigated
++  before the CPU enters an idle state. As XEON PHI is not affected by L1TF
++  either, disabling SMT is not required for full protection.
++
++.. _mds_smt_control:
++
++SMT control
++^^^^^^^^^^^
++
++  All MDS variants except MSBDS can be attacked cross Hyper-Threads. That
++  means on CPUs which are affected by MFBDS or MLPDS it is necessary to
++  disable SMT for full protection. These are most of the affected CPUs; the
++  exception is XEON PHI, see :ref:`xeon_phi`.
++
++  Disabling SMT can have a significant performance impact, but the impact
++  depends on the type of workloads.
++
++  See the relevant chapter in the L1TF mitigation documentation for details:
++  :ref:`Documentation/admin-guide/hw-vuln/l1tf.rst <smt_control>`.
++
++
++.. _mds_mitigation_control_command_line:
++
++Mitigation control on the kernel command line
++---------------------------------------------
++
++The kernel command line allows control of the MDS mitigations at boot
++time with the option "mds=". The valid arguments for this option are:
++
++  ============  =============================================================
++  full		If the CPU is vulnerable, enable all available mitigations
++		for the MDS vulnerability, CPU buffer clearing on exit to
++		userspace and when entering a VM. Idle transitions are
++		protected as well if SMT is enabled.
++
++		It does not automatically disable SMT.
++
++  full,nosmt	The same as mds=full, with SMT disabled on vulnerable
++		CPUs.  This is the complete mitigation.
++
++  off		Disables MDS mitigations completely.
++
++  ============  =============================================================
++
++Not specifying this option is equivalent to "mds=full".
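
A quick way to confirm which choice the running kernel booted with is to scan
/proc/cmdline, for example:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char cmdline[4096] = "";
        FILE *f = fopen("/proc/cmdline", "r");

        if (!f)
            return 1;
        if (!fgets(cmdline, sizeof(cmdline), f)) {
            fclose(f);
            return 1;
        }
        fclose(f);

        const char *opt = strstr(cmdline, "mds=");
        if (opt)
            printf("%.*s\n", (int)strcspn(opt, " \n"), opt);
        else
            puts("mds= not given (equivalent to mds=full)");
        return 0;
    }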
++
++
++Mitigation selection guide
++--------------------------
++
++1. Trusted userspace
++^^^^^^^^^^^^^^^^^^^^
++
++   If all userspace applications are from a trusted source and do not
++   execute untrusted code which is supplied externally, then the mitigation
++   can be disabled.
++
++
++2. Virtualization with trusted guests
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   The same considerations as for trusted user space above apply.
++
++3. Virtualization with untrusted guests
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   The protection depends on the state of the L1TF mitigations.
++   See :ref:`virt_mechanism`.
++
++   If the MDS mitigation is enabled and SMT is disabled, guest to host and
++   guest to guest attacks are prevented.
++
++.. _mds_default_mitigations:
++
++Default mitigations
++-------------------
++
++  The kernel default mitigations for vulnerable processors are:
++
++  - Enable CPU buffer clearing
++
++  The kernel does not by default enforce the disabling of SMT, which leaves
++  SMT systems vulnerable when running untrusted code. The same rationale as
++  for L1TF applies.
++  See :ref:`Documentation/admin-guide/hw-vuln/l1tf.rst <default_mitigations>`.
+diff --git a/Documentation/admin-guide/index.rst b/Documentation/admin-guide/index.rst
+index 0a491676685e..42247516962a 100644
+--- a/Documentation/admin-guide/index.rst
++++ b/Documentation/admin-guide/index.rst
+@@ -17,14 +17,12 @@ etc.
+    kernel-parameters
+    devices
+ 
+-This section describes CPU vulnerabilities and provides an overview of the
+-possible mitigations along with guidance for selecting mitigations if they
+-are configurable at compile, boot or run time.
++This section describes CPU vulnerabilities and their mitigations.
+ 
+ .. toctree::
+    :maxdepth: 1
+ 
+-   l1tf
++   hw-vuln/index
+ 
+ Here is a set of documents aimed at users who are trying to track down
+ problems and bugs in particular.
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 2b8ee90bb644..c7937f379d22 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -2141,7 +2141,7 @@
+ 
+ 			Default is 'flush'.
+ 
+-			For details see: Documentation/admin-guide/l1tf.rst
++			For details see: Documentation/admin-guide/hw-vuln/l1tf.rst
+ 
+ 	l2cr=		[PPC]
+ 
+@@ -2387,6 +2387,32 @@
+ 			Format: <first>,<last>
+ 			Specifies range of consoles to be captured by the MDA.
+ 
++	mds=		[X86,INTEL]
++			Control mitigation for the Micro-architectural Data
++			Sampling (MDS) vulnerability.
++
++			Certain CPUs are vulnerable to an exploit against CPU
++			internal buffers which can forward information to a
++			disclosure gadget under certain conditions.
++
++			In vulnerable processors, the speculatively
++			forwarded data can be used in a cache side channel
++			attack, to access data to which the attacker does
++			not have direct access.
++
++			This parameter controls the MDS mitigation. The
++			options are:
++
++			full       - Enable MDS mitigation on vulnerable CPUs
++			full,nosmt - Enable MDS mitigation and disable
++				     SMT on vulnerable CPUs
++			off        - Unconditionally disable MDS mitigation
++
++			Not specifying this option is equivalent to
++			mds=full.
++
++			For details see: Documentation/admin-guide/hw-vuln/mds.rst
++
+ 	mem=nn[KMG]	[KNL,BOOT] Force usage of a specific amount of memory
+ 			Amount of memory to be used when the kernel is not able
+ 			to see the whole system memory or for test.
+@@ -2544,6 +2570,40 @@
+ 			in the "bleeding edge" mini2440 support kernel at
+ 			http://repo.or.cz/w/linux-2.6/mini2440.git
+ 
++	mitigations=
++			[X86,PPC,S390] Control optional mitigations for CPU
++			vulnerabilities.  This is a set of curated,
++			arch-independent options, each of which is an
++			aggregation of existing arch-specific options.
++
++			off
++				Disable all optional CPU mitigations.  This
++				improves system performance, but it may also
++				expose users to several CPU vulnerabilities.
++				Equivalent to: nopti [X86,PPC]
++					       nospectre_v1 [PPC]
++					       nobp=0 [S390]
++					       nospectre_v2 [X86,PPC,S390]
++					       spectre_v2_user=off [X86]
++					       spec_store_bypass_disable=off [X86,PPC]
++					       l1tf=off [X86]
++					       mds=off [X86]
++
++			auto (default)
++				Mitigate all CPU vulnerabilities, but leave SMT
++				enabled, even if it's vulnerable.  This is for
++				users who don't want to be surprised by SMT
++				getting disabled across kernel upgrades, or who
++				have other ways of avoiding SMT-based attacks.
++				Equivalent to: (default behavior)
++
++			auto,nosmt
++				Mitigate all CPU vulnerabilities, disabling SMT
++				if needed.  This is for users who always want to
++				be fully mitigated, even if it means losing SMT.
++				Equivalent to: l1tf=flush,nosmt [X86]
++					       mds=full,nosmt [X86]
++
+ 	mminit_loglevel=
+ 			[KNL] When CONFIG_DEBUG_MEMORY_INIT is set, this
+ 			parameter allows control of the logging verbosity for
+diff --git a/Documentation/admin-guide/l1tf.rst b/Documentation/admin-guide/l1tf.rst
+deleted file mode 100644
+index 9af977384168..000000000000
+--- a/Documentation/admin-guide/l1tf.rst
++++ /dev/null
+@@ -1,614 +0,0 @@
+-L1TF - L1 Terminal Fault
+-========================
+-
+-L1 Terminal Fault is a hardware vulnerability which allows unprivileged
+-speculative access to data which is available in the Level 1 Data Cache
+-when the page table entry controlling the virtual address, which is used
+-for the access, has the Present bit cleared or other reserved bits set.
+-
+-Affected processors
+--------------------
+-
+-This vulnerability affects a wide range of Intel processors. The
+-vulnerability is not present on:
+-
+-   - Processors from AMD, Centaur and other non Intel vendors
+-
+-   - Older processor models, where the CPU family is < 6
+-
+-   - A range of Intel ATOM processors (Cedarview, Cloverview, Lincroft,
+-     Penwell, Pineview, Silvermont, Airmont, Merrifield)
+-
+-   - The Intel XEON PHI family
+-
+-   - Intel processors which have the ARCH_CAP_RDCL_NO bit set in the
+-     IA32_ARCH_CAPABILITIES MSR. If the bit is set the CPU is not affected
+-     by the Meltdown vulnerability either. These CPUs should become
+-     available by end of 2018.
+-
+-Whether a processor is affected or not can be read out from the L1TF
+-vulnerability file in sysfs. See :ref:`l1tf_sys_info`.
+-
+-Related CVEs
+-------------
+-
+-The following CVE entries are related to the L1TF vulnerability:
+-
+-   =============  =================  ==============================
+-   CVE-2018-3615  L1 Terminal Fault  SGX related aspects
+-   CVE-2018-3620  L1 Terminal Fault  OS, SMM related aspects
+-   CVE-2018-3646  L1 Terminal Fault  Virtualization related aspects
+-   =============  =================  ==============================
+-
+-Problem
+--------
+-
+-If an instruction accesses a virtual address for which the relevant page
+-table entry (PTE) has the Present bit cleared or other reserved bits set,
+-then speculative execution ignores the invalid PTE and loads the referenced
+-data if it is present in the Level 1 Data Cache, as if the page referenced
+-by the address bits in the PTE was still present and accessible.
+-
+-While this is a purely speculative mechanism and the instruction will raise
+-a page fault when it is retired eventually, the pure act of loading the
+-data and making it available to other speculative instructions opens up the
+-opportunity for side channel attacks to unprivileged malicious code,
+-similar to the Meltdown attack.
+-
+-While Meltdown breaks the user space to kernel space protection, L1TF
+-allows to attack any physical memory address in the system and the attack
+-works across all protection domains. It allows an attack of SGX and also
+-works from inside virtual machines because the speculation bypasses the
+-extended page table (EPT) protection mechanism.
+-
+-
+-Attack scenarios
+-----------------
+-
+-1. Malicious user space
+-^^^^^^^^^^^^^^^^^^^^^^^
+-
+-   Operating Systems store arbitrary information in the address bits of a
+-   PTE which is marked non present. This allows a malicious user space
+-   application to attack the physical memory to which these PTEs resolve.
+-   In some cases user-space can maliciously influence the information
+-   encoded in the address bits of the PTE, thus making attacks more
+-   deterministic and more practical.
+-
+-   The Linux kernel contains a mitigation for this attack vector, PTE
+-   inversion, which is permanently enabled and has no performance
+-   impact. The kernel ensures that the address bits of PTEs, which are not
+-   marked present, never point to cacheable physical memory space.
+-
+-   A system with an up to date kernel is protected against attacks from
+-   malicious user space applications.
+-
+-2. Malicious guest in a virtual machine
+-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-
+-   The fact that L1TF breaks all domain protections allows malicious guest
+-   OSes, which can control the PTEs directly, and malicious guest user
+-   space applications, which run on an unprotected guest kernel lacking the
+-   PTE inversion mitigation for L1TF, to attack physical host memory.
+-
+-   A special aspect of L1TF in the context of virtualization is symmetric
+-   multi threading (SMT). The Intel implementation of SMT is called
+-   HyperThreading. The fact that Hyperthreads on the affected processors
+-   share the L1 Data Cache (L1D) is important for this. As the flaw allows
+-   only to attack data which is present in L1D, a malicious guest running
+-   on one Hyperthread can attack the data which is brought into the L1D by
+-   the context which runs on the sibling Hyperthread of the same physical
+-   core. This context can be host OS, host user space or a different guest.
+-
+-   If the processor does not support Extended Page Tables, the attack is
+-   only possible, when the hypervisor does not sanitize the content of the
+-   effective (shadow) page tables.
+-
+-   While solutions exist to mitigate these attack vectors fully, these
+-   mitigations are not enabled by default in the Linux kernel because they
+-   can affect performance significantly. The kernel provides several
+-   mechanisms which can be utilized to address the problem depending on the
+-   deployment scenario. The mitigations, their protection scope and impact
+-   are described in the next sections.
+-
+-   The default mitigations and the rationale for choosing them are explained
+-   at the end of this document. See :ref:`default_mitigations`.
+-
+-.. _l1tf_sys_info:
+-
+-L1TF system information
+------------------------
+-
+-The Linux kernel provides a sysfs interface to enumerate the current L1TF
+-status of the system: whether the system is vulnerable, and which
+-mitigations are active. The relevant sysfs file is:
+-
+-/sys/devices/system/cpu/vulnerabilities/l1tf
+-
+-The possible values in this file are:
+-
+-  ===========================   ===============================
+-  'Not affected'		The processor is not vulnerable
+-  'Mitigation: PTE Inversion'	The host protection is active
+-  ===========================   ===============================
+-
+-If KVM/VMX is enabled and the processor is vulnerable then the following
+-information is appended to the 'Mitigation: PTE Inversion' part:
+-
+-  - SMT status:
+-
+-    =====================  ================
+-    'VMX: SMT vulnerable'  SMT is enabled
+-    'VMX: SMT disabled'    SMT is disabled
+-    =====================  ================
+-
+-  - L1D Flush mode:
+-
+-    ================================  ====================================
+-    'L1D vulnerable'		      L1D flushing is disabled
+-
+-    'L1D conditional cache flushes'   L1D flush is conditionally enabled
+-
+-    'L1D cache flushes'		      L1D flush is unconditionally enabled
+-    ================================  ====================================
+-
+-The resulting grade of protection is discussed in the following sections.
+-
+-
+-Host mitigation mechanism
+--------------------------
+-
+-The kernel is unconditionally protected against L1TF attacks from malicious
+-user space running on the host.
+-
+-
+-Guest mitigation mechanisms
+----------------------------
+-
+-.. _l1d_flush:
+-
+-1. L1D flush on VMENTER
+-^^^^^^^^^^^^^^^^^^^^^^^
+-
+-   To make sure that a guest cannot attack data which is present in the L1D
+-   the hypervisor flushes the L1D before entering the guest.
+-
+-   Flushing the L1D evicts not only the data which should not be accessed
+-   by a potentially malicious guest, it also flushes the guest
+-   data. Flushing the L1D has a performance impact as the processor has to
+-   bring the flushed guest data back into the L1D. Depending on the
+-   frequency of VMEXIT/VMENTER and the type of computations in the guest
+-   performance degradation in the range of 1% to 50% has been observed. For
+-   scenarios where guest VMEXIT/VMENTER are rare the performance impact is
+-   minimal. Virtio and mechanisms like posted interrupts are designed to
+-   confine the VMEXITs to a bare minimum, but specific configurations and
+-   application scenarios might still suffer from a high VMEXIT rate.
+-
+-   The kernel provides two L1D flush modes:
+-    - conditional ('cond')
+-    - unconditional ('always')
+-
+-   The conditional mode avoids L1D flushing after VMEXITs which execute
+-   only audited code paths before the corresponding VMENTER. These code
+-   paths have been verified not to expose secrets or other interesting
+-   data to an attacker, although they can leak information about the
+-   address space layout of the hypervisor.
+-
+-   Unconditional mode flushes L1D on all VMENTER invocations and provides
+-   maximum protection. It has a higher overhead than the conditional
+-   mode. The overhead cannot be quantified correctly as it depends on the
+-   workload scenario and the resulting number of VMEXITs.
+-
+-   The general recommendation is to enable L1D flush on VMENTER. The kernel
+-   defaults to conditional mode on affected processors.
+-
+-   **Note** that the L1D flush does not prevent the SMT problem because the
+-   sibling thread will also bring back its data into the L1D which makes it
+-   attackable again.
+-
+-   L1D flush can be controlled by the administrator via the kernel command
+-   line and sysfs control files. See :ref:`mitigation_control_command_line`
+-   and :ref:`mitigation_control_kvm`.
+-
+-.. _guest_confinement:
+-
+-2. Guest VCPU confinement to dedicated physical cores
+-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-
+-   To address the SMT problem, it is possible to make a guest or a group of
+-   guests affine to one or more physical cores. The proper mechanism for
+-   that is to utilize exclusive cpusets to ensure that no other guest or
+-   host tasks can run on these cores.
+-
+-   If only a single guest or related guests run on sibling SMT threads on
+-   the same physical core then they can only attack their own memory and
+-   restricted parts of the host memory.
+-
+-   Host memory is attackable when one of the sibling SMT threads runs in
+-   host OS (hypervisor) context and the other in guest context. The amount
+-   of valuable information from the host OS context depends on what the
+-   host OS executes, i.e. interrupts, soft interrupts and kernel
+-   threads. The amount of valuable data from these contexts cannot be
+-   declared as non-interesting for an attacker without deep inspection of
+-   the code.
+-
+-   **Note** that assigning guests to a fixed set of physical cores affects
+-   the ability of the scheduler to do load balancing and might have
+-   negative effects on CPU utilization depending on the hosting
+-   scenario. Disabling SMT might be a viable alternative for particular
+-   scenarios.
+-
+-   For further information about confining guests to a single or to a group
+-   of cores consult the cpusets documentation:
+-
+-   https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt
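+-
+-   As an illustration only, the cpuset interface can also be driven
+-   programmatically. The following C sketch creates an exclusive cpuset
+-   for a guest (the group name, CPU list and PID are hypothetical; cgroup
+-   v1 with the cpuset controller mounted at /sys/fs/cgroup/cpuset is
+-   assumed)::
+-
+-     #include <stdio.h>
+-     #include <sys/stat.h>
+-
+-     static int put(const char *path, const char *val)
+-     {
+-             FILE *f = fopen(path, "w");
+-
+-             if (!f)
+-                     return -1;
+-             fputs(val, f);
+-             return fclose(f);
+-     }
+-
+-     int main(void)
+-     {
+-             /* Dedicate cores 2-3 exclusively to the guest's vCPUs */
+-             mkdir("/sys/fs/cgroup/cpuset/guest0", 0755);
+-             put("/sys/fs/cgroup/cpuset/guest0/cpuset.cpus", "2-3");
+-             put("/sys/fs/cgroup/cpuset/guest0/cpuset.mems", "0");
+-             put("/sys/fs/cgroup/cpuset/guest0/cpuset.cpu_exclusive", "1");
+-             /* Move a vCPU thread (PID 4242, hypothetical) into the set */
+-             put("/sys/fs/cgroup/cpuset/guest0/tasks", "4242");
+-             return 0;
+-     }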
+-
+-.. _interrupt_isolation:
+-
+-3. Interrupt affinity
+-^^^^^^^^^^^^^^^^^^^^^
+-
+-   Interrupts can be made affine to logical CPUs. This is not universally
+-   true because there are types of interrupts which are genuinely per-CPU
+-   interrupts, e.g. the local timer interrupt. Aside from that, multi-queue
+-   devices affine their interrupts to single CPUs or groups of CPUs per
+-   queue without allowing the administrator to control the affinities.
+-
+-   Moving the interrupts which can be affinity-controlled away from CPUs
+-   which run untrusted guests reduces the attack vector space.
+-
+-   Whether the interrupts which are affine to CPUs running untrusted
+-   guests provide interesting data for an attacker depends on the system
+-   configuration and the scenarios which run on the system. While for some
+-   of the interrupts it can be assumed that they won't expose interesting
+-   information beyond hints about the host OS memory layout, there
+-   is no way to make general assumptions.
+-
+-   Interrupt affinity can be controlled by the administrator via the
+-   /proc/irq/$NR/smp_affinity[_list] files. Limited documentation is
+-   available at:
+-
+-   https://www.kernel.org/doc/Documentation/IRQ-affinity.txt
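+-
+-   As a sketch (the IRQ number and CPU list are hypothetical), the same
+-   can be done from a small C helper instead of a shell::
+-
+-     #include <stdio.h>
+-
+-     int main(void)
+-     {
+-             /* Pin IRQ 30 to CPUs 0-1, away from the CPUs running guests */
+-             FILE *f = fopen("/proc/irq/30/smp_affinity_list", "w");
+-
+-             if (!f) {
+-                     perror("smp_affinity_list");
+-                     return 1;
+-             }
+-             fputs("0-1", f);
+-             return fclose(f) ? 1 : 0;
+-     }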
+-
+-.. _smt_control:
+-
+-4. SMT control
+-^^^^^^^^^^^^^^
+-
+-   To prevent the SMT issues of L1TF it might be necessary to disable SMT
+-   completely. Disabling SMT can have a significant performance impact, but
+-   the impact depends on the hosting scenario and the type of workloads.
+-   The impact of disabling SMT also needs to be weighed against the impact
+-   of other mitigation solutions like confining guests to dedicated cores.
+-
+-   The kernel provides a sysfs interface to retrieve the status of SMT and
+-   to control it. It also provides a kernel command line interface to
+-   control SMT.
+-
+-   The kernel command line interface consists of the following options:
+-
+-     =========== ==========================================================
+-     nosmt	 Affects the bring up of the secondary CPUs during boot. The
+-		 kernel tries to bring all present CPUs online during the
+-		 boot process. "nosmt" makes sure that from each physical
+-		 core only one - the so called primary (hyper) thread is
+-		 activated. Due to a design flaw of Intel processors related
+-		 to Machine Check Exceptions the non primary siblings have
+-		 to be brought up at least partially and are then shut down
+-		 again.  "nosmt" can be undone via the sysfs interface.
+-
+-     nosmt=force Has the same effect as "nosmt" but it does not allow to
+-		 undo the SMT disable via the sysfs interface.
+-     =========== ==========================================================
+-
+-   The sysfs interface provides two files:
+-
+-   - /sys/devices/system/cpu/smt/control
+-   - /sys/devices/system/cpu/smt/active
+-
+-   /sys/devices/system/cpu/smt/control:
+-
+-     This file allows reading out the SMT control state and provides the
+-     ability to disable or (re)enable SMT. The possible states are:
+-
+-	==============  ===================================================
+-	on		SMT is supported by the CPU and enabled. All
+-			logical CPUs can be onlined and offlined without
+-			restrictions.
+-
+-	off		SMT is supported by the CPU and disabled. Only
+-			the so called primary SMT threads can be onlined
+-			and offlined without restrictions. An attempt to
+-			online a non-primary sibling is rejected
+-
+-	forceoff	Same as 'off' but the state cannot be controlled.
+-			Attempts to write to the control file are rejected.
+-
+-	notsupported	The processor does not support SMT. It's therefore
+-			not affected by the SMT implications of L1TF.
+-			Attempts to write to the control file are rejected.
+-	==============  ===================================================
+-
+-     The possible states which can be written into this file to control SMT
+-     state are:
+-
+-     - on
+-     - off
+-     - forceoff
+-
+-   /sys/devices/system/cpu/smt/active:
+-
+-     This file reports whether SMT is enabled and active, i.e. if on any
+-     physical core two or more sibling threads are online.
+-
+-   SMT control is also possible at boot time via the l1tf kernel command
+-   line parameter in combination with L1D flush control. See
+-   :ref:`mitigation_control_command_line`.
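+-
+-   A minimal C sketch (illustrative only; requires root and is rejected in
+-   the 'forceoff' state) which disables SMT at run time when it is on::
+-
+-     #include <stdio.h>
+-     #include <string.h>
+-
+-     int main(void)
+-     {
+-             const char *ctl = "/sys/devices/system/cpu/smt/control";
+-             char state[32] = "";
+-             FILE *f = fopen(ctl, "r");
+-
+-             if (f) {
+-                     fgets(state, sizeof(state), f);
+-                     fclose(f);
+-             }
+-             if (strncmp(state, "on", 2) != 0)
+-                     return 0;  /* already off, forceoff or notsupported */
+-             f = fopen(ctl, "w");
+-             if (!f) {
+-                     perror(ctl);
+-                     return 1;
+-             }
+-             fputs("off", f);
+-             return fclose(f) ? 1 : 0;
+-     }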
+-
+-5. Disabling EPT
+-^^^^^^^^^^^^^^^^
+-
+-  Disabling EPT for virtual machines provides full mitigation for L1TF even
+-  with SMT enabled, because the effective page tables for guests are
+-  managed and sanitized by the hypervisor. However, disabling EPT has a
+-  significant performance impact, especially when the Meltdown mitigation
+-  KPTI is enabled.
+-
+-  EPT can be disabled in the hypervisor via the 'kvm-intel.ept' parameter.
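+-
+-  Whether EPT is currently enabled can be checked through the module
+-  parameter's sysfs representation, e.g. with a short C sketch like the
+-  following (illustrative only; the boolean parameter reads back as 'Y'
+-  or 'N')::
+-
+-    #include <stdio.h>
+-
+-    int main(void)
+-    {
+-            char v[8] = "";
+-            FILE *f = fopen("/sys/module/kvm_intel/parameters/ept", "r");
+-
+-            if (f) {
+-                    fgets(v, sizeof(v), f);
+-                    fclose(f);
+-            }
+-            printf("EPT %s\n", v[0] == 'N' ? "disabled" : "enabled");
+-            return 0;
+-    }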
+-
+-There is ongoing research and development for new mitigation mechanisms to
+-address the performance impact of disabling SMT or EPT.
+-
+-.. _mitigation_control_command_line:
+-
+-Mitigation control on the kernel command line
+----------------------------------------------
+-
+-The kernel command line allows controlling the L1TF mitigations at boot
+-time with the option "l1tf=". The valid arguments for this option are:
+-
+-  ============  =============================================================
+-  full		Provides all available mitigations for the L1TF
+-		vulnerability. Disables SMT and enables all mitigations in
+-		the hypervisors, i.e. unconditional L1D flushing
+-
+-		SMT control and L1D flush control via the sysfs interface
+-		is still possible after boot.  Hypervisors will issue a
+-		warning when the first VM is started in a potentially
+-		insecure configuration, i.e. SMT enabled or L1D flush
+-		disabled.
+-
+-  full,force	Same as 'full', but disables SMT and L1D flush runtime
+-		control. Implies the 'nosmt=force' command line option.
+-		(i.e. sysfs control of SMT is disabled.)
+-
+-  flush		Leaves SMT enabled and enables the default hypervisor
+-		mitigation, i.e. conditional L1D flushing
+-
+-		SMT control and L1D flush control via the sysfs interface
+-		is still possible after boot.  Hypervisors will issue a
+-		warning when the first VM is started in a potentially
+-		insecure configuration, i.e. SMT enabled or L1D flush
+-		disabled.
+-
+-  flush,nosmt	Disables SMT and enables the default hypervisor mitigation,
+-		i.e. conditional L1D flushing.
+-
+-		SMT control and L1D flush control via the sysfs interface
+-		is still possible after boot.  Hypervisors will issue a
+-		warning when the first VM is started in a potentially
+-		insecure configuration, i.e. SMT enabled or L1D flush
+-		disabled.
+-
+-  flush,nowarn	Same as 'flush', but hypervisors will not warn when a VM is
+-		started in a potentially insecure configuration.
+-
+-  off		Disables hypervisor mitigations and doesn't emit any
+-		warnings.
+-		It also drops the swap size and available RAM limit restrictions
+-		on both hypervisor and bare metal.
+-
+-  ============  =============================================================
+-
+-The default is 'flush'. For details about L1D flushing see :ref:`l1d_flush`.
+-
+-
+-.. _mitigation_control_kvm:
+-
+-Mitigation control for KVM - module parameter
+--------------------------------------------------------------
+-
+-The KVM hypervisor mitigation mechanism, flushing the L1D cache when
+-entering a guest, can be controlled with a module parameter.
+-
+-The option/parameter is "kvm-intel.vmentry_l1d_flush=". It takes the
+-following arguments:
+-
+-  ============  ==============================================================
+-  always	L1D cache flush on every VMENTER.
+-
+-  cond		Flush L1D on VMENTER only when the code between VMEXIT and
+-		VMENTER can leak host memory which is considered
+-		interesting for an attacker. This can still leak host memory
+-		which allows e.g. determining the host's address space layout.
+-
+-  never		Disables the mitigation
+-  ============  ==============================================================
+-
+-The parameter can be provided on the kernel command line, as a module
+-parameter when loading the module, and modified at runtime via the sysfs
+-file:
+-
+-/sys/module/kvm_intel/parameters/vmentry_l1d_flush
+-
+-The default is 'cond'. If 'l1tf=full,force' is given on the kernel command
+-line, then 'always' is enforced and the kvm-intel.vmentry_l1d_flush
+-module parameter is ignored and writes to the sysfs file are rejected.
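+-
+-A sketch of switching the flush mode at run time (requires root; fails
+-when runtime control is locked out as described above)::
+-
+-  #include <stdio.h>
+-
+-  int main(void)
+-  {
+-          const char *p = "/sys/module/kvm_intel/parameters/vmentry_l1d_flush";
+-          FILE *f = fopen(p, "w");
+-
+-          if (!f) {  /* e.g. l1tf=full,force or module not loaded */
+-                  perror(p);
+-                  return 1;
+-          }
+-          fputs("always", f);  /* switch from 'cond' to unconditional */
+-          return fclose(f) ? 1 : 0;
+-  }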
+-
+-
+-Mitigation selection guide
+---------------------------
+-
+-1. No virtualization in use
+-^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-
+-   The system is protected by the kernel unconditionally and no further
+-   action is required.
+-
+-2. Virtualization with trusted guests
+-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-
+-   If the guest comes from a trusted source and the guest OS kernel is
+-   guaranteed to have the L1TF mitigations in place, the system is fully
+-   protected against L1TF and no further action is required.
+-
+-   To avoid the overhead of the default L1D flushing on VMENTER the
+-   administrator can disable the flushing via the kernel command line and
+-   sysfs control files. See :ref:`mitigation_control_command_line` and
+-   :ref:`mitigation_control_kvm`.
+-
+-
+-3. Virtualization with untrusted guests
+-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-
+-3.1. SMT not supported or disabled
+-""""""""""""""""""""""""""""""""""
+-
+-  If SMT is not supported by the processor or disabled in the BIOS or by
+-  the kernel, it's only required to enforce L1D flushing on VMENTER.
+-
+-  Conditional L1D flushing is the default behaviour and can be tuned. See
+-  :ref:`mitigation_control_command_line` and :ref:`mitigation_control_kvm`.
+-
+-3.2. EPT not supported or disabled
+-""""""""""""""""""""""""""""""""""
+-
+-  If EPT is not supported by the processor or disabled in the hypervisor,
+-  the system is fully protected. SMT can stay enabled and L1D flushing on
+-  VMENTER is not required.
+-
+-  EPT can be disabled in the hypervisor via the 'kvm-intel.ept' parameter.
+-
+-3.3. SMT and EPT supported and active
+-"""""""""""""""""""""""""""""""""""""
+-
+-  If SMT and EPT are supported and active then various degrees of
+-  mitigations can be employed:
+-
+-  - L1D flushing on VMENTER:
+-
+-    L1D flushing on VMENTER is the minimal protection requirement, but it
+-    is only potent in combination with other mitigation methods.
+-
+-    Conditional L1D flushing is the default behaviour and can be tuned. See
+-    :ref:`mitigation_control_command_line` and :ref:`mitigation_control_kvm`.
+-
+-  - Guest confinement:
+-
+-    Confinement of guests to a single or a group of physical cores which
+-    are not running any other processes can reduce the attack surface
+-    significantly, but interrupts, soft interrupts and kernel threads can
+-    still expose valuable data to a potential attacker. See
+-    :ref:`guest_confinement`.
+-
+-  - Interrupt isolation:
+-
+-    Isolating the guest CPUs from interrupts can reduce the attack surface
+-    further, but still allows a malicious guest to explore a limited amount
+-    of host physical memory. This can at least be used to gain knowledge
+-    about the host address space layout. The interrupts which have a fixed
+-    affinity to the CPUs which run the untrusted guests can, depending on
+-    the scenario, still trigger soft interrupts and schedule kernel threads
+-    which might expose valuable information. See
+-    :ref:`interrupt_isolation`.
+-
+-The above three mitigation methods combined can provide protection to a
+-certain degree, but the risk of the remaining attack surface has to be
+-carefully analyzed. For full protection the following methods are
+-available:
+-
+-  - Disabling SMT:
+-
+-    Disabling SMT and enforcing the L1D flushing provides the maximum
+-    amount of protection. This mitigation does not depend on any of the
+-    above mitigation methods.
+-
+-    SMT control and L1D flushing can be tuned by the command line
+-    parameters 'nosmt', 'l1tf', 'kvm-intel.vmentry_l1d_flush' and at run
+-    time with the matching sysfs control files. See :ref:`smt_control`,
+-    :ref:`mitigation_control_command_line` and
+-    :ref:`mitigation_control_kvm`.
+-
+-  - Disabling EPT:
+-
+-    Disabling EPT provides the maximum amount of protection as well. It
+-    does not depend on any of the above mitigation methods. SMT can stay
+-    enabled and L1D flushing is not required, but the performance impact is
+-    significant.
+-
+-    EPT can be disabled in the hypervisor via the 'kvm-intel.ept'
+-    parameter.
+-
+-3.4. Nested virtual machines
+-""""""""""""""""""""""""""""
+-
+-When nested virtualization is in use, three operating systems are involved:
+-the bare metal hypervisor, the nested hypervisor and the nested virtual
+-machine.  VMENTER operations from the nested hypervisor into the nested
+-guest will always be processed by the bare metal hypervisor. If KVM is the
+-bare metal hypervisor it will:
+-
+- - Flush the L1D cache on every switch from the nested hypervisor to the
+-   nested virtual machine, so that the nested hypervisor's secrets are not
+-   exposed to the nested virtual machine;
+-
+- - Flush the L1D cache on every switch from the nested virtual machine to
+-   the nested hypervisor; this is a complex operation, and flushing the L1D
+-   cache prevents the bare metal hypervisor's secrets from being exposed
+-   to the nested virtual machine;
+-
+- - Instruct the nested hypervisor to not perform any L1D cache flush. This
+-   is an optimization to avoid double L1D flushing.
+-
+-
+-.. _default_mitigations:
+-
+-Default mitigations
+--------------------
+-
+-  The kernel default mitigations for vulnerable processors are:
+-
+-  - PTE inversion to protect against malicious user space. This is done
+-    unconditionally and cannot be controlled. The swap storage is limited
+-    to ~16TB.
+-
+-  - L1D conditional flushing on VMENTER when EPT is enabled for
+-    a guest.
+-
+-  The kernel does not by default enforce the disabling of SMT, which leaves
+-  SMT systems vulnerable when running untrusted guests with EPT enabled.
+-
+-  The rationale for this choice is:
+-
+-  - Force disabling SMT can break existing setups, especially with
+-    unattended updates.
+-
+-  - If regular users run untrusted guests on their machine, then L1TF is
+-    just an add-on to other malware which might be embedded in an untrusted
+-    guest, e.g. spam-bots or attacks on the local network.
+-
+-    There is no technical way to prevent a user from blindly running
+-    untrusted code on their machine.
+-
+-  - It's technically extremely unlikely and, from today's knowledge, even
+-    impossible that L1TF can be exploited via the most popular attack
+-    mechanisms like JavaScript because these mechanisms have no way to
+-    control PTEs. If this were possible and no other mitigation were
+-    available, then the default might be different.
+-
+-  - The administrators of cloud and hosting setups have to carefully
+-    analyze the risk for their scenarios and make the appropriate
+-    mitigation choices, which might even vary across their deployed
+-    machines and also result in other changes to their overall setup.
+-    There is no way for the kernel to provide a sensible default for this
+-    kind of scenario.
+diff --git a/Documentation/index.rst b/Documentation/index.rst
+index 80a421cb935e..3511400dc092 100644
+--- a/Documentation/index.rst
++++ b/Documentation/index.rst
+@@ -102,6 +102,7 @@ implementation.
+    :maxdepth: 2
+ 
+    sh/index
++   x86/index
+ 
+ Filesystem Documentation
+ ------------------------
+diff --git a/Documentation/x86/conf.py b/Documentation/x86/conf.py
+new file mode 100644
+index 000000000000..33c5c3142e20
+--- /dev/null
++++ b/Documentation/x86/conf.py
+@@ -0,0 +1,10 @@
++# -*- coding: utf-8; mode: python -*-
++
++project = "X86 architecture specific documentation"
++
++tags.add("subproject")
++
++latex_documents = [
++    ('index', 'x86.tex', project,
++     'The kernel development community', 'manual'),
++]
+diff --git a/Documentation/x86/index.rst b/Documentation/x86/index.rst
+new file mode 100644
+index 000000000000..ef389dcf1b1d
+--- /dev/null
++++ b/Documentation/x86/index.rst
+@@ -0,0 +1,8 @@
++==========================
++x86 architecture specifics
++==========================
++
++.. toctree::
++   :maxdepth: 1
++
++   mds
+diff --git a/Documentation/x86/mds.rst b/Documentation/x86/mds.rst
+new file mode 100644
+index 000000000000..534e9baa4e1d
+--- /dev/null
++++ b/Documentation/x86/mds.rst
+@@ -0,0 +1,225 @@
++Microarchitectural Data Sampling (MDS) mitigation
++=================================================
++
++.. _mds:
++
++Overview
++--------
++
++Microarchitectural Data Sampling (MDS) is a family of side channel attacks
++on internal buffers in Intel CPUs. The variants are:
++
++ - Microarchitectural Store Buffer Data Sampling (MSBDS) (CVE-2018-12126)
++ - Microarchitectural Fill Buffer Data Sampling (MFBDS) (CVE-2018-12130)
++ - Microarchitectural Load Port Data Sampling (MLPDS) (CVE-2018-12127)
++ - Microarchitectural Data Sampling Uncacheable Memory (MDSUM) (CVE-2019-11091)
++
++MSBDS leaks Store Buffer Entries which can be speculatively forwarded to a
++dependent load (store-to-load forwarding) as an optimization. The forward
++can also happen to a faulting or assisting load operation for a different
++memory address, which can be exploited under certain conditions. Store
++buffers are partitioned between Hyper-Threads so cross thread forwarding is
++not possible. But if a thread enters or exits a sleep state, the store
++buffer is repartitioned, which can expose data from one thread to the other.
++
++MFBDS leaks Fill Buffer Entries. Fill buffers are used internally to manage
++L1 miss situations and to hold data which is returned or sent in response
++to a memory or I/O operation. Fill buffers can forward data to a load
++operation and also write data to the cache. When the fill buffer is
++deallocated it can retain the stale data of the preceding operations which
++can then be forwarded to a faulting or assisting load operation, which can
++be exploited under certain conditions. Fill buffers are shared between
++Hyper-Threads so cross thread leakage is possible.
++
++MLPDS leaks Load Port Data. Load ports are used to perform load operations
++from memory or I/O. The received data is then forwarded to the register
++file or a subsequent operation. In some implementations the Load Port can
++contain stale data from a previous operation which can be forwarded to
++faulting or assisting loads under certain conditions, which again can
++eventually be exploited. Load ports are shared between Hyper-Threads so cross
++thread leakage is possible.
++
++MDSUM is a special case of MSBDS, MFBDS and MLPDS. An uncacheable load from
++memory that takes a fault or assist can leave data in a microarchitectural
++structure that may later be observed using one of the same methods used by
++MSBDS, MFBDS or MLPDS.
++
++Exposure assumptions
++--------------------
++
++It is assumed, with one exception, that attack code resides in user space
++or in a guest. The rationale behind this assumption is that the code construct
++needed for exploiting MDS requires:
++
++ - to control the load to trigger a fault or assist
++
++ - to have a disclosure gadget which exposes the speculatively accessed
++   data for consumption through a side channel.
++
++ - to control the pointer through which the disclosure gadget exposes the
++   data
++
++The existence of such a construct in the kernel cannot be excluded with
++100% certainty, but the complexity involved makes it extremely unlikely.
++
++There is one exception, which is untrusted BPF. The functionality of
++untrusted BPF is limited, but it needs to be thoroughly investigated
++whether it can be used to create such a construct.
++
++
++Mitigation strategy
++-------------------
++
++All variants have the same mitigation strategy at least for the single CPU
++thread case (SMT off): Force the CPU to clear the affected buffers.
++
++This is achieved by using the otherwise unused and obsolete VERW
++instruction in combination with a microcode update. The microcode clears
++the affected CPU buffers when the VERW instruction is executed.
++
++For virtualization there are two ways to achieve CPU buffer
++clearing: either the modified VERW instruction or the L1D Flush
++command. The latter is issued when the L1TF mitigation is enabled, so the
++extra VERW can be avoided. If the CPU is not affected by L1TF then VERW
++needs to be issued.
++
++If the VERW instruction with the supplied segment selector argument is
++executed on a CPU without the microcode update, there is no side effect
++other than a small number of pointlessly wasted CPU cycles.
++
++This does not protect against cross Hyper-Thread attacks except for MSBDS,
++which is only exploitable cross Hyper-Thread when one of the Hyper-Threads
++enters a C-state.
++
++The kernel provides a function to invoke the buffer clearing:
++
++    mds_clear_cpu_buffers()
++
++The mitigation is invoked on kernel/userspace, hypervisor/guest and C-state
++(idle) transitions.
++
++As a special quirk to address virtualization scenarios where the host has
++the microcode updated, but the hypervisor does not (yet) expose the
++MD_CLEAR CPUID bit to guests, the kernel issues the VERW instruction in the
++hope that it might actually clear the buffers. The state is reflected
++accordingly.
++
++According to current knowledge additional mitigations inside the kernel
++itself are not required because the necessary gadgets to expose the leaked
++data cannot be controlled in a way which allows exploitation from malicious
++user space or VM guests.
++
++Kernel internal mitigation modes
++--------------------------------
++
++ ======= ============================================================
++ off      Mitigation is disabled. Either the CPU is not affected or
++          mds=off is supplied on the kernel command line
++
++ full     Mitigation is enabled. CPU is affected and MD_CLEAR is
++          advertised in CPUID.
++
++ vmwerv	  Mitigation is enabled. CPU is affected and MD_CLEAR is not
++	  advertised in CPUID. That is mainly for virtualization
++	  scenarios where the host has the updated microcode but the
++	  hypervisor does not expose MD_CLEAR in CPUID. It's a best
++	  effort approach without guarantee.
++ ======= ============================================================
++
++If the CPU is affected and mds=off is not supplied on the kernel command
++line, then the kernel selects the appropriate mitigation mode depending on
++the availability of the MD_CLEAR CPUID bit.
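++
++The selection can be summarized by the following simplified sketch, which
++mirrors the logic of mds_select_mitigation() (an illustrative model, not
++the actual kernel code)::
++
++  enum mds_mitigations { MDS_MITIGATION_OFF, MDS_MITIGATION_FULL,
++                         MDS_MITIGATION_VMWERV };
++
++  static enum mds_mitigations pick_mode(int affected, int mds_off,
++                                        int has_md_clear)
++  {
++          if (!affected || mds_off)
++                  return MDS_MITIGATION_OFF;
++          /* Without MD_CLEAR microcode, VERW is issued on a best-effort basis */
++          return has_md_clear ? MDS_MITIGATION_FULL : MDS_MITIGATION_VMWERV;
++  }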
++
++Mitigation points
++-----------------
++
++1. Return to user space
++^^^^^^^^^^^^^^^^^^^^^^^
++
++   When transitioning from kernel to user space the CPU buffers are flushed
++   on affected CPUs when the mitigation is not disabled on the kernel
++   command line. The mitigation is enabled through the static key
++   mds_user_clear.
++
++   The mitigation is invoked in prepare_exit_to_usermode() which covers
++   most of the kernel to user space transitions. There are a few exceptions
++   which are not invoking prepare_exit_to_usermode() on return to user
++   space. These exceptions use the paranoid exit code.
++
++   - Non Maskable Interrupt (NMI):
++
++     Access to sensitive data like keys or credentials in the NMI context is
++     mostly theoretical: the CPU can do prefetching or execute a
++     misspeculated code path and thereby fetch data which might end up
++     leaking through a buffer.
++
++     But for mounting other attacks the kernel stack address of the task is
++     already valuable information. So in full mitigation mode, the NMI is
++     mitigated on the return from do_nmi() to provide almost complete
++     coverage.
++
++   - Double fault (#DF):
++
++     A double fault is usually fatal, but the ESPFIX workaround, which can
++     be triggered from user space through modify_ldt(2), is a recoverable
++     double fault. #DF uses the paranoid exit path, so explicit mitigation
++     in the double fault handler is required.
++
++   - Machine Check Exception (#MC):
++
++     Another corner case is a #MC which hits between the CPU buffer clear
++     invocation and the actual return to user. As this still is in kernel
++     space it takes the paranoid exit path which does not clear the CPU
++     buffers. So the #MC handler repopulates the buffers to some
++     extent. Machine checks are not reliably controllable and the window is
++     extremely small, so mitigation would just tick a checkbox that this
++     theoretical corner case is covered. To keep the amount of special
++     cases small, ignore #MC.
++
++   - Debug Exception (#DB):
++
++     This takes the paranoid exit path only when the INT1 breakpoint is in
++     kernel space. #DB on a user space address takes the regular exit path,
++     so no extra mitigation required.
++
++
++2. C-State transition
++^^^^^^^^^^^^^^^^^^^^^
++
++   When a CPU goes idle and enters a C-State, the CPU buffers need to be
++   cleared on affected CPUs when SMT is active. This addresses the
++   repartitioning of the store buffer when one of the Hyper-Threads enters
++   a C-State.
++
++   When SMT is inactive, i.e. either the CPU does not support it or all
++   sibling threads are offline, CPU buffer clearing is not required.
++
++   The idle clearing is enabled on CPUs which are only affected by MSBDS
++   and not by any other MDS variant. The other MDS variants cannot be
++   protected against cross Hyper-Thread attacks because the Fill Buffer and
++   the Load Ports are shared. So on CPUs affected by other variants, the
++   idle clearing would be a window dressing exercise and is therefore not
++   activated.
++
++   The invocation is controlled by the static key mds_idle_clear which is
++   switched depending on the chosen mitigation mode and the SMT state of
++   the system.
++
++   The buffer clear is only invoked before entering the C-State to prevent
++   stale data from the idling CPU from spilling to the Hyper-Thread
++   sibling after the store buffer has been repartitioned and all entries
++   are available to the non-idle sibling.
++
++   When coming out of idle the store buffer is partitioned again so each
++   sibling has half of it available. The CPU coming back from idle could
++   then be speculatively exposed to the contents of the sibling. The
++   buffers are flushed either on exit to user space or on VMENTER so
++   malicious code in user space or the guest cannot speculatively access
++   them.
++
++   The mitigation is hooked into all variants of halt()/mwait(), but does
++   not cover the legacy ACPI IO-Port mechanism because the ACPI idle driver
++   was superseded by the intel_idle driver around 2010 and intel_idle is
++   preferred on all affected CPUs which are expected to gain the MD_CLEAR
++   functionality in microcode. Aside from that, the IO-Port mechanism is a
++   legacy interface which is only used on older systems which are either
++   not affected or do not receive microcode updates anymore.
+diff --git a/Makefile b/Makefile
+index bf604f77e5e5..58ec07990e76 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
+index b33bafb8fcea..70568ccbd9fd 100644
+--- a/arch/powerpc/kernel/security.c
++++ b/arch/powerpc/kernel/security.c
+@@ -57,7 +57,7 @@ void setup_barrier_nospec(void)
+ 	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
+ 		 security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR);
+ 
+-	if (!no_nospec)
++	if (!no_nospec && !cpu_mitigations_off())
+ 		enable_barrier_nospec(enable);
+ }
+ 
+@@ -116,7 +116,7 @@ static int __init handle_nospectre_v2(char *p)
+ early_param("nospectre_v2", handle_nospectre_v2);
+ void setup_spectre_v2(void)
+ {
+-	if (no_spectrev2)
++	if (no_spectrev2 || cpu_mitigations_off())
+ 		do_btb_flush_fixups();
+ 	else
+ 		btb_flush_enabled = true;
+@@ -300,7 +300,7 @@ void setup_stf_barrier(void)
+ 
+ 	stf_enabled_flush_types = type;
+ 
+-	if (!no_stf_barrier)
++	if (!no_stf_barrier && !cpu_mitigations_off())
+ 		stf_barrier_enable(enable);
+ }
+ 
+diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
+index ba404dd9ce1d..4f49e1a3594c 100644
+--- a/arch/powerpc/kernel/setup_64.c
++++ b/arch/powerpc/kernel/setup_64.c
+@@ -932,7 +932,7 @@ void setup_rfi_flush(enum l1d_flush_type types, bool enable)
+ 
+ 	enabled_flush_types = types;
+ 
+-	if (!no_rfi_flush)
++	if (!no_rfi_flush && !cpu_mitigations_off())
+ 		rfi_flush_enable(enable);
+ }
+ 
+diff --git a/arch/s390/kernel/nospec-branch.c b/arch/s390/kernel/nospec-branch.c
+index bdddaae96559..649135cbedd5 100644
+--- a/arch/s390/kernel/nospec-branch.c
++++ b/arch/s390/kernel/nospec-branch.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <linux/module.h>
+ #include <linux/device.h>
++#include <linux/cpu.h>
+ #include <asm/nospec-branch.h>
+ 
+ static int __init nobp_setup_early(char *str)
+@@ -58,7 +59,7 @@ early_param("nospectre_v2", nospectre_v2_setup_early);
+ 
+ void __init nospec_auto_detect(void)
+ {
+-	if (test_facility(156)) {
++	if (test_facility(156) || cpu_mitigations_off()) {
+ 		/*
+ 		 * The machine supports etokens.
+ 		 * Disable expolines and disable nobp.
+diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
+index 7bc105f47d21..19f650d729f5 100644
+--- a/arch/x86/entry/common.c
++++ b/arch/x86/entry/common.c
+@@ -31,6 +31,7 @@
+ #include <asm/vdso.h>
+ #include <linux/uaccess.h>
+ #include <asm/cpufeature.h>
++#include <asm/nospec-branch.h>
+ 
+ #define CREATE_TRACE_POINTS
+ #include <trace/events/syscalls.h>
+@@ -212,6 +213,8 @@ __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
+ #endif
+ 
+ 	user_enter_irqoff();
++
++	mds_user_clear_cpu_buffers();
+ }
+ 
+ #define SYSCALL_EXIT_WORK_FLAGS				\
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 981ff9479648..75f27ee2c263 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -344,6 +344,7 @@
+ /* Intel-defined CPU features, CPUID level 0x00000007:0 (EDX), word 18 */
+ #define X86_FEATURE_AVX512_4VNNIW	(18*32+ 2) /* AVX-512 Neural Network Instructions */
+ #define X86_FEATURE_AVX512_4FMAPS	(18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
++#define X86_FEATURE_MD_CLEAR		(18*32+10) /* VERW clears CPU buffers */
+ #define X86_FEATURE_TSX_FORCE_ABORT	(18*32+13) /* "" TSX_FORCE_ABORT */
+ #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
+ #define X86_FEATURE_SPEC_CTRL		(18*32+26) /* "" Speculation Control (IBRS + IBPB) */
+@@ -382,5 +383,7 @@
+ #define X86_BUG_SPECTRE_V2		X86_BUG(16) /* CPU is affected by Spectre variant 2 attack with indirect branches */
+ #define X86_BUG_SPEC_STORE_BYPASS	X86_BUG(17) /* CPU is affected by speculative store bypass attack */
+ #define X86_BUG_L1TF			X86_BUG(18) /* CPU is affected by L1 Terminal Fault */
++#define X86_BUG_MDS			X86_BUG(19) /* CPU is affected by Microarchitectural data sampling */
++#define X86_BUG_MSBDS_ONLY		X86_BUG(20) /* CPU is only affected by the MSBDS variant of BUG_MDS */
+ 
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
+index 058e40fed167..8a0e56e1dcc9 100644
+--- a/arch/x86/include/asm/irqflags.h
++++ b/arch/x86/include/asm/irqflags.h
+@@ -6,6 +6,8 @@
+ 
+ #ifndef __ASSEMBLY__
+ 
++#include <asm/nospec-branch.h>
++
+ /* Provide __cpuidle; we can't safely include <linux/cpu.h> */
+ #define __cpuidle __attribute__((__section__(".cpuidle.text")))
+ 
+@@ -54,11 +56,13 @@ static inline void native_irq_enable(void)
+ 
+ static inline __cpuidle void native_safe_halt(void)
+ {
++	mds_idle_clear_cpu_buffers();
+ 	asm volatile("sti; hlt": : :"memory");
+ }
+ 
+ static inline __cpuidle void native_halt(void)
+ {
++	mds_idle_clear_cpu_buffers();
+ 	asm volatile("hlt": : :"memory");
+ }
+ 
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index ca5bc0eacb95..20f7da552e90 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -2,6 +2,8 @@
+ #ifndef _ASM_X86_MSR_INDEX_H
+ #define _ASM_X86_MSR_INDEX_H
+ 
++#include <linux/bits.h>
++
+ /*
+  * CPU model specific register (MSR) numbers.
+  *
+@@ -40,14 +42,14 @@
+ /* Intel MSRs. Some also available on other CPUs */
+ 
+ #define MSR_IA32_SPEC_CTRL		0x00000048 /* Speculation Control */
+-#define SPEC_CTRL_IBRS			(1 << 0)   /* Indirect Branch Restricted Speculation */
++#define SPEC_CTRL_IBRS			BIT(0)	   /* Indirect Branch Restricted Speculation */
+ #define SPEC_CTRL_STIBP_SHIFT		1	   /* Single Thread Indirect Branch Predictor (STIBP) bit */
+-#define SPEC_CTRL_STIBP			(1 << SPEC_CTRL_STIBP_SHIFT)	/* STIBP mask */
++#define SPEC_CTRL_STIBP			BIT(SPEC_CTRL_STIBP_SHIFT)	/* STIBP mask */
+ #define SPEC_CTRL_SSBD_SHIFT		2	   /* Speculative Store Bypass Disable bit */
+-#define SPEC_CTRL_SSBD			(1 << SPEC_CTRL_SSBD_SHIFT)	/* Speculative Store Bypass Disable */
++#define SPEC_CTRL_SSBD			BIT(SPEC_CTRL_SSBD_SHIFT)	/* Speculative Store Bypass Disable */
+ 
+ #define MSR_IA32_PRED_CMD		0x00000049 /* Prediction Command */
+-#define PRED_CMD_IBPB			(1 << 0)   /* Indirect Branch Prediction Barrier */
++#define PRED_CMD_IBPB			BIT(0)	   /* Indirect Branch Prediction Barrier */
+ 
+ #define MSR_PPIN_CTL			0x0000004e
+ #define MSR_PPIN			0x0000004f
+@@ -69,20 +71,25 @@
+ #define MSR_MTRRcap			0x000000fe
+ 
+ #define MSR_IA32_ARCH_CAPABILITIES	0x0000010a
+-#define ARCH_CAP_RDCL_NO		(1 << 0)   /* Not susceptible to Meltdown */
+-#define ARCH_CAP_IBRS_ALL		(1 << 1)   /* Enhanced IBRS support */
+-#define ARCH_CAP_SKIP_VMENTRY_L1DFLUSH	(1 << 3)   /* Skip L1D flush on vmentry */
+-#define ARCH_CAP_SSB_NO			(1 << 4)   /*
+-						    * Not susceptible to Speculative Store Bypass
+-						    * attack, so no Speculative Store Bypass
+-						    * control required.
+-						    */
++#define ARCH_CAP_RDCL_NO		BIT(0)	/* Not susceptible to Meltdown */
++#define ARCH_CAP_IBRS_ALL		BIT(1)	/* Enhanced IBRS support */
++#define ARCH_CAP_SKIP_VMENTRY_L1DFLUSH	BIT(3)	/* Skip L1D flush on vmentry */
++#define ARCH_CAP_SSB_NO			BIT(4)	/*
++						 * Not susceptible to Speculative Store Bypass
++						 * attack, so no Speculative Store Bypass
++						 * control required.
++						 */
++#define ARCH_CAP_MDS_NO			BIT(5)   /*
++						  * Not susceptible to
++						  * Microarchitectural Data
++						  * Sampling (MDS) vulnerabilities.
++						  */
+ 
+ #define MSR_IA32_FLUSH_CMD		0x0000010b
+-#define L1D_FLUSH			(1 << 0)   /*
+-						    * Writeback and invalidate the
+-						    * L1 data cache.
+-						    */
++#define L1D_FLUSH			BIT(0)	/*
++						 * Writeback and invalidate the
++						 * L1 data cache.
++						 */
+ 
+ #define MSR_IA32_BBL_CR_CTL		0x00000119
+ #define MSR_IA32_BBL_CR_CTL3		0x0000011e
+diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
+index 39a2fb29378a..eb0f80ce8524 100644
+--- a/arch/x86/include/asm/mwait.h
++++ b/arch/x86/include/asm/mwait.h
+@@ -6,6 +6,7 @@
+ #include <linux/sched/idle.h>
+ 
+ #include <asm/cpufeature.h>
++#include <asm/nospec-branch.h>
+ 
+ #define MWAIT_SUBSTATE_MASK		0xf
+ #define MWAIT_CSTATE_MASK		0xf
+@@ -40,6 +41,8 @@ static inline void __monitorx(const void *eax, unsigned long ecx,
+ 
+ static inline void __mwait(unsigned long eax, unsigned long ecx)
+ {
++	mds_idle_clear_cpu_buffers();
++
+ 	/* "mwait %eax, %ecx;" */
+ 	asm volatile(".byte 0x0f, 0x01, 0xc9;"
+ 		     :: "a" (eax), "c" (ecx));
+@@ -74,6 +77,8 @@ static inline void __mwait(unsigned long eax, unsigned long ecx)
+ static inline void __mwaitx(unsigned long eax, unsigned long ebx,
+ 			    unsigned long ecx)
+ {
++	/* No MDS buffer clear as this is AMD/HYGON only */
++
+ 	/* "mwaitx %eax, %ebx, %ecx;" */
+ 	asm volatile(".byte 0x0f, 0x01, 0xfb;"
+ 		     :: "a" (eax), "b" (ebx), "c" (ecx));
+@@ -81,6 +86,8 @@ static inline void __mwaitx(unsigned long eax, unsigned long ebx,
+ 
+ static inline void __sti_mwait(unsigned long eax, unsigned long ecx)
+ {
++	mds_idle_clear_cpu_buffers();
++
+ 	trace_hardirqs_on();
+ 	/* "mwait %eax, %ecx;" */
+ 	asm volatile("sti; .byte 0x0f, 0x01, 0xc9;"
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index dad12b767ba0..4e970390110f 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -318,6 +318,56 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
+ DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
+ DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
+ 
++DECLARE_STATIC_KEY_FALSE(mds_user_clear);
++DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
++
++#include <asm/segment.h>
++
++/**
++ * mds_clear_cpu_buffers - Mitigation for MDS vulnerability
++ *
++ * This uses the otherwise unused and obsolete VERW instruction in
++ * combination with microcode which triggers a CPU buffer flush when the
++ * instruction is executed.
++ */
++static inline void mds_clear_cpu_buffers(void)
++{
++	static const u16 ds = __KERNEL_DS;
++
++	/*
++	 * Has to be the memory-operand variant because only that
++	 * guarantees the CPU buffer flush functionality according to
++	 * documentation. The register-operand variant does not.
++	 * Works with any segment selector, but a valid writable
++	 * data segment is the fastest variant.
++	 *
++	 * "cc" clobber is required because VERW modifies ZF.
++	 */
++	asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
++}
++
++/**
++ * mds_user_clear_cpu_buffers - Mitigation for MDS vulnerability
++ *
++ * Clear CPU buffers if the corresponding static key is enabled
++ */
++static inline void mds_user_clear_cpu_buffers(void)
++{
++	if (static_branch_likely(&mds_user_clear))
++		mds_clear_cpu_buffers();
++}
++
++/**
++ * mds_idle_clear_cpu_buffers - Mitigation for MDS vulnerability
++ *
++ * Clear CPU buffers if the corresponding static key is enabled
++ */
++static inline void mds_idle_clear_cpu_buffers(void)
++{
++	if (static_branch_likely(&mds_idle_clear))
++		mds_clear_cpu_buffers();
++}
++
+ #endif /* __ASSEMBLY__ */
+ 
+ /*
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 2bb3a648fc12..31e9895db75e 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -991,4 +991,10 @@ enum l1tf_mitigations {
+ 
+ extern enum l1tf_mitigations l1tf_mitigation;
+ 
++enum mds_mitigations {
++	MDS_MITIGATION_OFF,
++	MDS_MITIGATION_FULL,
++	MDS_MITIGATION_VMWERV,
++};
++
+ #endif /* _ASM_X86_PROCESSOR_H */
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index b91b3bfa5cfb..03b4cc0ec3a7 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -37,6 +37,7 @@
+ static void __init spectre_v2_select_mitigation(void);
+ static void __init ssb_select_mitigation(void);
+ static void __init l1tf_select_mitigation(void);
++static void __init mds_select_mitigation(void);
+ 
+ /* The base value of the SPEC_CTRL MSR that always has to be preserved. */
+ u64 x86_spec_ctrl_base;
+@@ -63,6 +64,13 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
+ /* Control unconditional IBPB in switch_mm() */
+ DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
+ 
++/* Control MDS CPU buffer clear before returning to user space */
++DEFINE_STATIC_KEY_FALSE(mds_user_clear);
++EXPORT_SYMBOL_GPL(mds_user_clear);
++/* Control MDS CPU buffer clear before idling (halt, mwait) */
++DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
++EXPORT_SYMBOL_GPL(mds_idle_clear);
++
+ void __init check_bugs(void)
+ {
+ 	identify_boot_cpu();
+@@ -101,6 +109,10 @@ void __init check_bugs(void)
+ 
+ 	l1tf_select_mitigation();
+ 
++	mds_select_mitigation();
++
++	arch_smt_update();
++
+ #ifdef CONFIG_X86_32
+ 	/*
+ 	 * Check whether we are able to run this kernel safely on SMP.
+@@ -206,6 +218,61 @@ static void x86_amd_ssb_disable(void)
+ 		wrmsrl(MSR_AMD64_LS_CFG, msrval);
+ }
+ 
++#undef pr_fmt
++#define pr_fmt(fmt)	"MDS: " fmt
++
++/* Default mitigation for MDS-affected CPUs */
++static enum mds_mitigations mds_mitigation __ro_after_init = MDS_MITIGATION_FULL;
++static bool mds_nosmt __ro_after_init = false;
++
++static const char * const mds_strings[] = {
++	[MDS_MITIGATION_OFF]	= "Vulnerable",
++	[MDS_MITIGATION_FULL]	= "Mitigation: Clear CPU buffers",
++	[MDS_MITIGATION_VMWERV]	= "Vulnerable: Clear CPU buffers attempted, no microcode",
++};
++
++static void __init mds_select_mitigation(void)
++{
++	if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off()) {
++		mds_mitigation = MDS_MITIGATION_OFF;
++		return;
++	}
++
++	if (mds_mitigation == MDS_MITIGATION_FULL) {
++		if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
++			mds_mitigation = MDS_MITIGATION_VMWERV;
++
++		static_branch_enable(&mds_user_clear);
++
++		if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
++		    (mds_nosmt || cpu_mitigations_auto_nosmt()))
++			cpu_smt_disable(false);
++	}
++
++	pr_info("%s\n", mds_strings[mds_mitigation]);
++}
++
++static int __init mds_cmdline(char *str)
++{
++	if (!boot_cpu_has_bug(X86_BUG_MDS))
++		return 0;
++
++	if (!str)
++		return -EINVAL;
++
++	if (!strcmp(str, "off"))
++		mds_mitigation = MDS_MITIGATION_OFF;
++	else if (!strcmp(str, "full"))
++		mds_mitigation = MDS_MITIGATION_FULL;
++	else if (!strcmp(str, "full,nosmt")) {
++		mds_mitigation = MDS_MITIGATION_FULL;
++		mds_nosmt = true;
++	}
++
++	return 0;
++}
++early_param("mds", mds_cmdline);
++
+ #undef pr_fmt
+ #define pr_fmt(fmt)     "Spectre V2 : " fmt
+ 
+@@ -440,7 +507,8 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
+ 	char arg[20];
+ 	int ret, i;
+ 
+-	if (cmdline_find_option_bool(boot_command_line, "nospectre_v2"))
++	if (cmdline_find_option_bool(boot_command_line, "nospectre_v2") ||
++	    cpu_mitigations_off())
+ 		return SPECTRE_V2_CMD_NONE;
+ 
+ 	ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg));
+@@ -574,9 +642,6 @@ specv2_set_mode:
+ 
+ 	/* Set up IBPB and STIBP depending on the general spectre V2 command */
+ 	spectre_v2_user_select_mitigation(cmd);
+-
+-	/* Enable STIBP if appropriate */
+-	arch_smt_update();
+ }
+ 
+ static void update_stibp_msr(void * __unused)
+@@ -610,6 +675,31 @@ static void update_indir_branch_cond(void)
+ 		static_branch_disable(&switch_to_cond_stibp);
+ }
+ 
++#undef pr_fmt
++#define pr_fmt(fmt) fmt
++
++/* Update the static key controlling the MDS CPU buffer clear in idle */
++static void update_mds_branch_idle(void)
++{
++	/*
++	 * Enable the idle clearing if SMT is active on CPUs which are
++	 * affected only by MSBDS and not any other MDS variant.
++	 *
++	 * The other variants cannot be mitigated when SMT is enabled, so
++	 * clearing the buffers on idle just to prevent the Store Buffer
++	 * repartitioning leak would be a window dressing exercise.
++	 */
++	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY))
++		return;
++
++	if (sched_smt_active())
++		static_branch_enable(&mds_idle_clear);
++	else
++		static_branch_disable(&mds_idle_clear);
++}
++
++#define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n"
++
+ void arch_smt_update(void)
+ {
+ 	/* Enhanced IBRS implies STIBP. No update required. */
+@@ -631,6 +721,17 @@ void arch_smt_update(void)
+ 		break;
+ 	}
+ 
++	switch (mds_mitigation) {
++	case MDS_MITIGATION_FULL:
++	case MDS_MITIGATION_VMWERV:
++		if (sched_smt_active() && !boot_cpu_has(X86_BUG_MSBDS_ONLY))
++			pr_warn_once(MDS_MSG_SMT);
++		update_mds_branch_idle();
++		break;
++	case MDS_MITIGATION_OFF:
++		break;
++	}
++
+ 	mutex_unlock(&spec_ctrl_mutex);
+ }
+ 
+@@ -672,7 +773,8 @@ static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
+ 	char arg[20];
+ 	int ret, i;
+ 
+-	if (cmdline_find_option_bool(boot_command_line, "nospec_store_bypass_disable")) {
++	if (cmdline_find_option_bool(boot_command_line, "nospec_store_bypass_disable") ||
++	    cpu_mitigations_off()) {
+ 		return SPEC_STORE_BYPASS_CMD_NONE;
+ 	} else {
+ 		ret = cmdline_find_option(boot_command_line, "spec_store_bypass_disable",
+@@ -1008,6 +1110,11 @@ static void __init l1tf_select_mitigation(void)
+ 	if (!boot_cpu_has_bug(X86_BUG_L1TF))
+ 		return;
+ 
++	if (cpu_mitigations_off())
++		l1tf_mitigation = L1TF_MITIGATION_OFF;
++	else if (cpu_mitigations_auto_nosmt())
++		l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
++
+ 	override_cache_bits(&boot_cpu_data);
+ 
+ 	switch (l1tf_mitigation) {
+@@ -1036,7 +1143,7 @@ static void __init l1tf_select_mitigation(void)
+ 		pr_info("You may make it effective by booting the kernel with mem=%llu parameter.\n",
+ 				half_pa);
+ 		pr_info("However, doing so will make a part of your RAM unusable.\n");
+-		pr_info("Reading https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html might help you decide.\n");
++		pr_info("Reading https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html might help you decide.\n");
+ 		return;
+ 	}
+ 
+@@ -1069,6 +1176,7 @@ static int __init l1tf_cmdline(char *str)
+ early_param("l1tf", l1tf_cmdline);
+ 
+ #undef pr_fmt
++#define pr_fmt(fmt) fmt
+ 
+ #ifdef CONFIG_SYSFS
+ 
+@@ -1107,6 +1215,23 @@ static ssize_t l1tf_show_state(char *buf)
+ }
+ #endif
+ 
++static ssize_t mds_show_state(char *buf)
++{
++	if (!hypervisor_is_type(X86_HYPER_NATIVE)) {
++		return sprintf(buf, "%s; SMT Host state unknown\n",
++			       mds_strings[mds_mitigation]);
++	}
++
++	if (boot_cpu_has(X86_BUG_MSBDS_ONLY)) {
++		return sprintf(buf, "%s; SMT %s\n", mds_strings[mds_mitigation],
++			       (mds_mitigation == MDS_MITIGATION_OFF ? "vulnerable" :
++			        sched_smt_active() ? "mitigated" : "disabled"));
++	}
++
++	return sprintf(buf, "%s; SMT %s\n", mds_strings[mds_mitigation],
++		       sched_smt_active() ? "vulnerable" : "disabled");
++}
++
+ static char *stibp_state(void)
+ {
+ 	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+@@ -1173,6 +1298,10 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ 		if (boot_cpu_has(X86_FEATURE_L1TF_PTEINV))
+ 			return l1tf_show_state(buf);
+ 		break;
++
++	case X86_BUG_MDS:
++		return mds_show_state(buf);
++
+ 	default:
+ 		break;
+ 	}
+@@ -1204,4 +1333,9 @@ ssize_t cpu_show_l1tf(struct device *dev, struct device_attribute *attr, char *b
+ {
+ 	return cpu_show_common(dev, attr, buf, X86_BUG_L1TF);
+ }
++
++ssize_t cpu_show_mds(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	return cpu_show_common(dev, attr, buf, X86_BUG_MDS);
++}
+ #endif
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index cb28e98a0659..132a63dc5a76 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -948,61 +948,77 @@ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
+ #endif
+ }
+ 
+-static const __initconst struct x86_cpu_id cpu_no_speculation[] = {
+-	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_SALTWELL,	X86_FEATURE_ANY },
+-	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_SALTWELL_TABLET,	X86_FEATURE_ANY },
+-	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_BONNELL_MID,	X86_FEATURE_ANY },
+-	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_SALTWELL_MID,	X86_FEATURE_ANY },
+-	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_BONNELL,	X86_FEATURE_ANY },
+-	{ X86_VENDOR_CENTAUR,	5 },
+-	{ X86_VENDOR_INTEL,	5 },
+-	{ X86_VENDOR_NSC,	5 },
+-	{ X86_VENDOR_ANY,	4 },
++#define NO_SPECULATION	BIT(0)
++#define NO_MELTDOWN	BIT(1)
++#define NO_SSB		BIT(2)
++#define NO_L1TF		BIT(3)
++#define NO_MDS		BIT(4)
++#define MSBDS_ONLY	BIT(5)
++
++#define VULNWL(_vendor, _family, _model, _whitelist)	\
++	{ X86_VENDOR_##_vendor, _family, _model, X86_FEATURE_ANY, _whitelist }
++
++#define VULNWL_INTEL(model, whitelist)		\
++	VULNWL(INTEL, 6, INTEL_FAM6_##model, whitelist)
++
++#define VULNWL_AMD(family, whitelist)		\
++	VULNWL(AMD, family, X86_MODEL_ANY, whitelist)
++
++#define VULNWL_HYGON(family, whitelist)		\
++	VULNWL(HYGON, family, X86_MODEL_ANY, whitelist)
++
++static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
++	VULNWL(ANY,	4, X86_MODEL_ANY,	NO_SPECULATION),
++	VULNWL(CENTAUR,	5, X86_MODEL_ANY,	NO_SPECULATION),
++	VULNWL(INTEL,	5, X86_MODEL_ANY,	NO_SPECULATION),
++	VULNWL(NSC,	5, X86_MODEL_ANY,	NO_SPECULATION),
++
++	/* Intel Family 6 */
++	VULNWL_INTEL(ATOM_SALTWELL,		NO_SPECULATION),
++	VULNWL_INTEL(ATOM_SALTWELL_TABLET,	NO_SPECULATION),
++	VULNWL_INTEL(ATOM_SALTWELL_MID,		NO_SPECULATION),
++	VULNWL_INTEL(ATOM_BONNELL,		NO_SPECULATION),
++	VULNWL_INTEL(ATOM_BONNELL_MID,		NO_SPECULATION),
++
++	VULNWL_INTEL(ATOM_SILVERMONT,		NO_SSB | NO_L1TF | MSBDS_ONLY),
++	VULNWL_INTEL(ATOM_SILVERMONT_X,		NO_SSB | NO_L1TF | MSBDS_ONLY),
++	VULNWL_INTEL(ATOM_SILVERMONT_MID,	NO_SSB | NO_L1TF | MSBDS_ONLY),
++	VULNWL_INTEL(ATOM_AIRMONT,		NO_SSB | NO_L1TF | MSBDS_ONLY),
++	VULNWL_INTEL(XEON_PHI_KNL,		NO_SSB | NO_L1TF | MSBDS_ONLY),
++	VULNWL_INTEL(XEON_PHI_KNM,		NO_SSB | NO_L1TF | MSBDS_ONLY),
++
++	VULNWL_INTEL(CORE_YONAH,		NO_SSB),
++
++	VULNWL_INTEL(ATOM_AIRMONT_MID,		NO_L1TF | MSBDS_ONLY),
++
++	VULNWL_INTEL(ATOM_GOLDMONT,		NO_MDS | NO_L1TF),
++	VULNWL_INTEL(ATOM_GOLDMONT_X,		NO_MDS | NO_L1TF),
++	VULNWL_INTEL(ATOM_GOLDMONT_PLUS,	NO_MDS | NO_L1TF),
++
++	/* AMD Family 0xf - 0x12 */
++	VULNWL_AMD(0x0f,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS),
++	VULNWL_AMD(0x10,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS),
++	VULNWL_AMD(0x11,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS),
++	VULNWL_AMD(0x12,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS),
++
++	/* FAMILY_ANY must be last, otherwise 0x0f - 0x12 matches won't work */
++	VULNWL_AMD(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS),
++	VULNWL_HYGON(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS),
+ 	{}
+ };
+ 
+-static const __initconst struct x86_cpu_id cpu_no_meltdown[] = {
+-	{ X86_VENDOR_AMD },
+-	{ X86_VENDOR_HYGON },
+-	{}
+-};
+-
+-/* Only list CPUs which speculate but are non susceptible to SSB */
+-static const __initconst struct x86_cpu_id cpu_no_spec_store_bypass[] = {
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT	},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_AIRMONT		},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT_X	},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT_MID	},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_CORE_YONAH		},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_XEON_PHI_KNL		},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_XEON_PHI_KNM		},
+-	{ X86_VENDOR_AMD,	0x12,					},
+-	{ X86_VENDOR_AMD,	0x11,					},
+-	{ X86_VENDOR_AMD,	0x10,					},
+-	{ X86_VENDOR_AMD,	0xf,					},
+-	{}
+-};
++static bool __init cpu_matches(unsigned long which)
++{
++	const struct x86_cpu_id *m = x86_match_cpu(cpu_vuln_whitelist);
+ 
+-static const __initconst struct x86_cpu_id cpu_no_l1tf[] = {
+-	/* in addition to cpu_no_speculation */
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT	},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT_X	},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_AIRMONT		},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT_MID	},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_AIRMONT_MID	},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GOLDMONT	},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GOLDMONT_X	},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GOLDMONT_PLUS	},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_XEON_PHI_KNL		},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_XEON_PHI_KNM		},
+-	{}
+-};
++	return m && !!(m->driver_data & which);
++}
+ 
+ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ {
+ 	u64 ia32_cap = 0;
+ 
+-	if (x86_match_cpu(cpu_no_speculation))
++	if (cpu_matches(NO_SPECULATION))
+ 		return;
+ 
+ 	setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
+@@ -1011,15 +1027,20 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 	if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
+ 		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
+ 
+-	if (!x86_match_cpu(cpu_no_spec_store_bypass) &&
+-	   !(ia32_cap & ARCH_CAP_SSB_NO) &&
++	if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) &&
+ 	   !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
+ 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
+ 
+ 	if (ia32_cap & ARCH_CAP_IBRS_ALL)
+ 		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
+ 
+-	if (x86_match_cpu(cpu_no_meltdown))
++	if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_MDS_NO)) {
++		setup_force_cpu_bug(X86_BUG_MDS);
++		if (cpu_matches(MSBDS_ONLY))
++			setup_force_cpu_bug(X86_BUG_MSBDS_ONLY);
++	}
++
++	if (cpu_matches(NO_MELTDOWN))
+ 		return;
+ 
+ 	/* Rogue Data Cache Load? No! */
+@@ -1028,7 +1049,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 
+ 	setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
+ 
+-	if (x86_match_cpu(cpu_no_l1tf))
++	if (cpu_matches(NO_L1TF))
+ 		return;
+ 
+ 	setup_force_cpu_bug(X86_BUG_L1TF);
+diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
+index 18bc9b51ac9b..086cf1d1d71d 100644
+--- a/arch/x86/kernel/nmi.c
++++ b/arch/x86/kernel/nmi.c
+@@ -34,6 +34,7 @@
+ #include <asm/x86_init.h>
+ #include <asm/reboot.h>
+ #include <asm/cache.h>
++#include <asm/nospec-branch.h>
+ 
+ #define CREATE_TRACE_POINTS
+ #include <trace/events/nmi.h>
+@@ -533,6 +534,9 @@ nmi_restart:
+ 		write_cr2(this_cpu_read(nmi_cr2));
+ 	if (this_cpu_dec_return(nmi_state))
+ 		goto nmi_restart;
++
++	if (user_mode(regs))
++		mds_user_clear_cpu_buffers();
+ }
+ NOKPROBE_SYMBOL(do_nmi);
+ 
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index d26f9e9c3d83..07c7bbe79e8b 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -58,6 +58,7 @@
+ #include <asm/alternative.h>
+ #include <asm/fpu/xstate.h>
+ #include <asm/trace/mpx.h>
++#include <asm/nospec-branch.h>
+ #include <asm/mpx.h>
+ #include <asm/vm86.h>
+ #include <asm/umip.h>
+@@ -367,6 +368,13 @@ dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code)
+ 		regs->ip = (unsigned long)general_protection;
+ 		regs->sp = (unsigned long)&gpregs->orig_ax;
+ 
++		/*
++		 * This situation can be triggered by userspace via
++		 * modify_ldt(2) and the return does not take the regular
++		 * user space exit, so a CPU buffer clear is required when
++		 * MDS mitigation is enabled.
++		 */
++		mds_user_clear_cpu_buffers();
+ 		return;
+ 	}
+ #endif
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index fd3951638ae4..bbbe611f0c49 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -410,7 +410,8 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
+ 	/* cpuid 7.0.edx*/
+ 	const u32 kvm_cpuid_7_0_edx_x86_features =
+ 		F(AVX512_4VNNIW) | F(AVX512_4FMAPS) | F(SPEC_CTRL) |
+-		F(SPEC_CTRL_SSBD) | F(ARCH_CAPABILITIES) | F(INTEL_STIBP);
++		F(SPEC_CTRL_SSBD) | F(ARCH_CAPABILITIES) | F(INTEL_STIBP) |
++		F(MD_CLEAR);
+ 
+ 	/* all calls to cpuid_count() should be made on the same cpu */
+ 	get_cpu();
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 0c955bb286ff..194c6ec11f4c 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -6431,8 +6431,11 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
+ 	 */
+ 	x86_spec_ctrl_set_guest(vmx->spec_ctrl, 0);
+ 
++	/* L1D Flush includes CPU buffer clear to mitigate MDS */
+ 	if (static_branch_unlikely(&vmx_l1d_should_flush))
+ 		vmx_l1d_flush(vcpu);
++	else if (static_branch_unlikely(&mds_user_clear))
++		mds_clear_cpu_buffers();
+ 
+ 	if (vcpu->arch.cr2 != read_cr2())
+ 		write_cr2(vcpu->arch.cr2);
+@@ -6668,8 +6671,8 @@ free_partial_vcpu:
+ 	return ERR_PTR(err);
+ }
+ 
+-#define L1TF_MSG_SMT "L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html for details.\n"
+-#define L1TF_MSG_L1D "L1TF CPU bug present and virtualization mitigation disabled, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html for details.\n"
++#define L1TF_MSG_SMT "L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.\n"
++#define L1TF_MSG_L1D "L1TF CPU bug present and virtualization mitigation disabled, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.\n"
+ 
+ static int vmx_vm_init(struct kvm *kvm)
+ {
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index 139b28a01ce4..d0255d64edce 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -35,6 +35,7 @@
+ #include <linux/spinlock.h>
+ #include <linux/mm.h>
+ #include <linux/uaccess.h>
++#include <linux/cpu.h>
+ 
+ #include <asm/cpufeature.h>
+ #include <asm/hypervisor.h>
+@@ -115,7 +116,8 @@ void __init pti_check_boottime_disable(void)
+ 		}
+ 	}
+ 
+-	if (cmdline_find_option_bool(boot_command_line, "nopti")) {
++	if (cmdline_find_option_bool(boot_command_line, "nopti") ||
++	    cpu_mitigations_off()) {
+ 		pti_mode = PTI_FORCE_OFF;
+ 		pti_print_if_insecure("disabled on command line.");
+ 		return;
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index 668139cfa664..cc37511de866 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -548,11 +548,18 @@ ssize_t __weak cpu_show_l1tf(struct device *dev,
+ 	return sprintf(buf, "Not affected\n");
+ }
+ 
++ssize_t __weak cpu_show_mds(struct device *dev,
++			    struct device_attribute *attr, char *buf)
++{
++	return sprintf(buf, "Not affected\n");
++}
++
+ static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
+ static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
+ static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
+ static DEVICE_ATTR(spec_store_bypass, 0444, cpu_show_spec_store_bypass, NULL);
+ static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
++static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
+ 
+ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_meltdown.attr,
+@@ -560,6 +567,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_spectre_v2.attr,
+ 	&dev_attr_spec_store_bypass.attr,
+ 	&dev_attr_l1tf.attr,
++	&dev_attr_mds.attr,
+ 	NULL
+ };
+ 
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index 5041357d0297..57ae83c4d5f4 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -57,6 +57,8 @@ extern ssize_t cpu_show_spec_store_bypass(struct device *dev,
+ 					  struct device_attribute *attr, char *buf);
+ extern ssize_t cpu_show_l1tf(struct device *dev,
+ 			     struct device_attribute *attr, char *buf);
++extern ssize_t cpu_show_mds(struct device *dev,
++			    struct device_attribute *attr, char *buf);
+ 
+ extern __printf(4, 5)
+ struct device *cpu_device_create(struct device *parent, void *drvdata,
+@@ -187,4 +189,28 @@ static inline void cpu_smt_disable(bool force) { }
+ static inline void cpu_smt_check_topology(void) { }
+ #endif
+ 
++/*
++ * These are used for a global "mitigations=" cmdline option for toggling
++ * optional CPU mitigations.
++ */
++enum cpu_mitigations {
++	CPU_MITIGATIONS_OFF,
++	CPU_MITIGATIONS_AUTO,
++	CPU_MITIGATIONS_AUTO_NOSMT,
++};
++
++extern enum cpu_mitigations cpu_mitigations;
++
++/* mitigations=off */
++static inline bool cpu_mitigations_off(void)
++{
++	return cpu_mitigations == CPU_MITIGATIONS_OFF;
++}
++
++/* mitigations=auto,nosmt */
++static inline bool cpu_mitigations_auto_nosmt(void)
++{
++	return cpu_mitigations == CPU_MITIGATIONS_AUTO_NOSMT;
++}
++
+ #endif /* _LINUX_CPU_H_ */
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 6754f3ecfd94..43e741e88691 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -2304,3 +2304,18 @@ void __init boot_cpu_hotplug_init(void)
+ #endif
+ 	this_cpu_write(cpuhp_state.state, CPUHP_ONLINE);
+ }
++
++enum cpu_mitigations cpu_mitigations __ro_after_init = CPU_MITIGATIONS_AUTO;
++
++static int __init mitigations_parse_cmdline(char *arg)
++{
++	if (!strcmp(arg, "off"))
++		cpu_mitigations = CPU_MITIGATIONS_OFF;
++	else if (!strcmp(arg, "auto"))
++		cpu_mitigations = CPU_MITIGATIONS_AUTO;
++	else if (!strcmp(arg, "auto,nosmt"))
++		cpu_mitigations = CPU_MITIGATIONS_AUTO_NOSMT;
++
++	return 0;
++}
++early_param("mitigations", mitigations_parse_cmdline);
+diff --git a/tools/power/x86/turbostat/Makefile b/tools/power/x86/turbostat/Makefile
+index 1598b4fa0b11..045f5f7d68ab 100644
+--- a/tools/power/x86/turbostat/Makefile
++++ b/tools/power/x86/turbostat/Makefile
+@@ -9,7 +9,7 @@ ifeq ("$(origin O)", "command line")
+ endif
+ 
+ turbostat : turbostat.c
+-override CFLAGS +=	-Wall
++override CFLAGS +=	-Wall -I../../../include
+ override CFLAGS +=	-DMSRHEADER='"../../../../arch/x86/include/asm/msr-index.h"'
+ override CFLAGS +=	-DINTEL_FAMILY_HEADER='"../../../../arch/x86/include/asm/intel-family.h"'
+ 
+diff --git a/tools/power/x86/x86_energy_perf_policy/Makefile b/tools/power/x86/x86_energy_perf_policy/Makefile
+index ae7a0e09b722..1fdeef864e7c 100644
+--- a/tools/power/x86/x86_energy_perf_policy/Makefile
++++ b/tools/power/x86/x86_energy_perf_policy/Makefile
+@@ -9,7 +9,7 @@ ifeq ("$(origin O)", "command line")
+ endif
+ 
+ x86_energy_perf_policy : x86_energy_perf_policy.c
+-override CFLAGS +=	-Wall
++override CFLAGS +=	-Wall -I../../../include
+ override CFLAGS +=	-DMSRHEADER='"../../../../arch/x86/include/asm/msr-index.h"'
+ 
+ %: %.c


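The bulk of the hunk above replaces several per-vulnerability exclusion tables with a single cpu_vuln_whitelist consulted through the new cpu_matches() helper, which tests bit flags stored in the matched entry's driver_data. The standalone C sketch below mimics that lookup pattern; the vendor encoding, flag values, and table entries are invented for illustration and are not the kernel's actual whitelist.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for the NO_* flags carried in driver_data. */
#define NO_SPECULATION	(1UL << 0)
#define NO_MELTDOWN	(1UL << 1)
#define NO_L1TF		(1UL << 2)
#define NO_MDS		(1UL << 3)

struct cpu_id {
	int vendor;		/* 0 = "intel", 1 = "amd" (made up) */
	int family;
	unsigned long driver_data;	/* bitmask of NO_* exemptions */
};

/* Hypothetical whitelist, terminated by a sentinel entry. */
static const struct cpu_id whitelist[] = {
	{ 0, 5,    NO_SPECULATION },
	{ 1, 0x17, NO_MELTDOWN | NO_L1TF },
	{ -1, -1, 0 },
};

/* Same shape as the kernel's cpu_matches(): match the CPU first,
 * then test whether the requested exemption bit is set. */
static bool cpu_matches(int vendor, int family, unsigned long which)
{
	const struct cpu_id *m;

	for (m = whitelist; m->vendor != -1; m++)
		if (m->vendor == vendor && m->family == family)
			return (m->driver_data & which) != 0;
	return false;
}

int main(void)
{
	/* Pretend we booted on "amd" family 0x17: it is whitelisted
	 * for Meltdown and L1TF but not for MDS, so the MDS bug bit
	 * would still be set. */
	printf("set X86_BUG_MDS: %s\n",
	       cpu_matches(1, 0x17, NO_MDS) ? "no" : "yes");
	return 0;
}

Collapsing the tables this way means a new vulnerability only needs a new flag bit rather than another full x86_cpu_id array, which is presumably why the series restructures the code before wiring up the MDS bits.
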

* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-05-16 23:05 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2019-05-16 23:05 UTC (permalink / raw
  To: gentoo-commits

commit:     101fb31f0878ee0018db2d5d28637e01d59b3fe5
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu May 16 23:05:40 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu May 16 23:05:40 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=101fb31f

Linux patch 5.1.3

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1002_linux-5.1.3.patch | 1471 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1475 insertions(+)

diff --git a/0000_README b/0000_README
index b65b94a..781e82e 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch:  1001_linux-5.1.2.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.1.2
 
+Patch:  1002_linux-5.1.3.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.1.3
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1002_linux-5.1.3.patch b/1002_linux-5.1.3.patch
new file mode 100644
index 0000000..76b7926
--- /dev/null
+++ b/1002_linux-5.1.3.patch
@@ -0,0 +1,1471 @@
+diff --git a/Makefile b/Makefile
+index 58ec07990e76..f6c763aff4f3 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h b/arch/powerpc/include/asm/book3s/64/pgalloc.h
+index 138bc2ecc0c4..ba188797d940 100644
+--- a/arch/powerpc/include/asm/book3s/64/pgalloc.h
++++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h
+@@ -81,6 +81,9 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
+ 
+ 	pgd = kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE),
+ 			       pgtable_gfp_flags(mm, GFP_KERNEL));
++	if (unlikely(!pgd))
++		return pgd;
++
+ 	/*
+ 	 * Don't scan the PGD for pointers, it contains references to PUDs but
+ 	 * those references are not full pointers and so can't be recognised by
+diff --git a/arch/powerpc/include/asm/reg_booke.h b/arch/powerpc/include/asm/reg_booke.h
+index eb2a33d5df26..e382bd6ede84 100644
+--- a/arch/powerpc/include/asm/reg_booke.h
++++ b/arch/powerpc/include/asm/reg_booke.h
+@@ -41,7 +41,7 @@
+ #if defined(CONFIG_PPC_BOOK3E_64)
+ #define MSR_64BIT	MSR_CM
+ 
+-#define MSR_		(MSR_ME | MSR_CE)
++#define MSR_		(MSR_ME | MSR_RI | MSR_CE)
+ #define MSR_KERNEL	(MSR_ | MSR_64BIT)
+ #define MSR_USER32	(MSR_ | MSR_PR | MSR_EE)
+ #define MSR_USER64	(MSR_USER32 | MSR_64BIT)
+diff --git a/arch/powerpc/kernel/idle_book3s.S b/arch/powerpc/kernel/idle_book3s.S
+index 7f5ac2e8581b..36178000a2f2 100644
+--- a/arch/powerpc/kernel/idle_book3s.S
++++ b/arch/powerpc/kernel/idle_book3s.S
+@@ -170,6 +170,9 @@ core_idle_lock_held:
+ 	bne-	core_idle_lock_held
+ 	blr
+ 
++/* Reuse an unused pt_regs slot for IAMR */
++#define PNV_POWERSAVE_IAMR	_DAR
++
+ /*
+  * Pass requested state in r3:
+  *	r3 - PNV_THREAD_NAP/SLEEP/WINKLE in POWER8
+@@ -200,6 +203,12 @@ pnv_powersave_common:
+ 	/* Continue saving state */
+ 	SAVE_GPR(2, r1)
+ 	SAVE_NVGPRS(r1)
++
++BEGIN_FTR_SECTION
++	mfspr	r5, SPRN_IAMR
++	std	r5, PNV_POWERSAVE_IAMR(r1)
++END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
++
+ 	mfcr	r5
+ 	std	r5,_CCR(r1)
+ 	std	r1,PACAR1(r13)
+@@ -924,6 +933,17 @@ BEGIN_FTR_SECTION
+ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
+ 	REST_NVGPRS(r1)
+ 	REST_GPR(2, r1)
++
++BEGIN_FTR_SECTION
++	/* IAMR was saved in pnv_powersave_common() */
++	ld	r5, PNV_POWERSAVE_IAMR(r1)
++	mtspr	SPRN_IAMR, r5
++	/*
++	 * We don't need an isync here because the upcoming mtmsrd is
++	 * execution synchronizing.
++	 */
++END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
++
+ 	ld	r4,PACAKMSR(r13)
+ 	ld	r5,_LINK(r1)
+ 	ld	r6,_CCR(r1)
+diff --git a/drivers/hwmon/occ/sysfs.c b/drivers/hwmon/occ/sysfs.c
+index fe3d15e416e7..a71ca94c789f 100644
+--- a/drivers/hwmon/occ/sysfs.c
++++ b/drivers/hwmon/occ/sysfs.c
+@@ -42,16 +42,16 @@ static ssize_t occ_sysfs_show(struct device *dev,
+ 		val = !!(header->status & OCC_STAT_ACTIVE);
+ 		break;
+ 	case 2:
+-		val = !!(header->status & OCC_EXT_STAT_DVFS_OT);
++		val = !!(header->ext_status & OCC_EXT_STAT_DVFS_OT);
+ 		break;
+ 	case 3:
+-		val = !!(header->status & OCC_EXT_STAT_DVFS_POWER);
++		val = !!(header->ext_status & OCC_EXT_STAT_DVFS_POWER);
+ 		break;
+ 	case 4:
+-		val = !!(header->status & OCC_EXT_STAT_MEM_THROTTLE);
++		val = !!(header->ext_status & OCC_EXT_STAT_MEM_THROTTLE);
+ 		break;
+ 	case 5:
+-		val = !!(header->status & OCC_EXT_STAT_QUICK_DROP);
++		val = !!(header->ext_status & OCC_EXT_STAT_QUICK_DROP);
+ 		break;
+ 	case 6:
+ 		val = header->occ_state;
+diff --git a/drivers/hwmon/pwm-fan.c b/drivers/hwmon/pwm-fan.c
+index 167221c7628a..e4c5197417a8 100644
+--- a/drivers/hwmon/pwm-fan.c
++++ b/drivers/hwmon/pwm-fan.c
+@@ -271,7 +271,7 @@ static int pwm_fan_probe(struct platform_device *pdev)
+ 
+ 	ret = pwm_fan_of_get_cooling_data(&pdev->dev, ctx);
+ 	if (ret)
+-		return ret;
++		goto err_pwm_disable;
+ 
+ 	ctx->pwm_fan_state = ctx->pwm_fan_max_state;
+ 	if (IS_ENABLED(CONFIG_THERMAL)) {
+diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
+index 688aa3b5f3ac..2f0f88b79c4b 100644
+--- a/drivers/i2c/i2c-core-base.c
++++ b/drivers/i2c/i2c-core-base.c
+@@ -1871,8 +1871,11 @@ int __i2c_transfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
+ 
+ 	if (WARN_ON(!msgs || num < 1))
+ 		return -EINVAL;
+-	if (WARN_ON(test_bit(I2C_ALF_IS_SUSPENDED, &adap->locked_flags)))
++	if (test_bit(I2C_ALF_IS_SUSPENDED, &adap->locked_flags)) {
++		if (!test_and_set_bit(I2C_ALF_SUSPEND_REPORTED, &adap->locked_flags))
++			dev_WARN(&adap->dev, "Transfer while suspended\n");
+ 		return -ESHUTDOWN;
++	}
+ 
+ 	if (adap->quirks && i2c_check_for_quirks(adap, msgs, num))
+ 		return -EOPNOTSUPP;
+diff --git a/drivers/isdn/gigaset/bas-gigaset.c b/drivers/isdn/gigaset/bas-gigaset.c
+index ecdeb89645d0..149b1aca52a2 100644
+--- a/drivers/isdn/gigaset/bas-gigaset.c
++++ b/drivers/isdn/gigaset/bas-gigaset.c
+@@ -958,6 +958,7 @@ static void write_iso_callback(struct urb *urb)
+  */
+ static int starturbs(struct bc_state *bcs)
+ {
++	struct usb_device *udev = bcs->cs->hw.bas->udev;
+ 	struct bas_bc_state *ubc = bcs->hw.bas;
+ 	struct urb *urb;
+ 	int j, k;
+@@ -975,8 +976,8 @@ static int starturbs(struct bc_state *bcs)
+ 			rc = -EFAULT;
+ 			goto error;
+ 		}
+-		usb_fill_int_urb(urb, bcs->cs->hw.bas->udev,
+-				 usb_rcvisocpipe(urb->dev, 3 + 2 * bcs->channel),
++		usb_fill_int_urb(urb, udev,
++				 usb_rcvisocpipe(udev, 3 + 2 * bcs->channel),
+ 				 ubc->isoinbuf + k * BAS_INBUFSIZE,
+ 				 BAS_INBUFSIZE, read_iso_callback, bcs,
+ 				 BAS_FRAMETIME);
+@@ -1006,8 +1007,8 @@ static int starturbs(struct bc_state *bcs)
+ 			rc = -EFAULT;
+ 			goto error;
+ 		}
+-		usb_fill_int_urb(urb, bcs->cs->hw.bas->udev,
+-				 usb_sndisocpipe(urb->dev, 4 + 2 * bcs->channel),
++		usb_fill_int_urb(urb, udev,
++				 usb_sndisocpipe(udev, 4 + 2 * bcs->channel),
+ 				 ubc->isooutbuf->data,
+ 				 sizeof(ubc->isooutbuf->data),
+ 				 write_iso_callback, &ubc->isoouturbs[k],
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index c033bfcb209e..364dd2f6fa1b 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -4223,26 +4223,15 @@ static void handle_parity_checks6(struct r5conf *conf, struct stripe_head *sh,
+ 	case check_state_check_result:
+ 		sh->check_state = check_state_idle;
+ 
++		if (s->failed > 1)
++			break;
+ 		/* handle a successful check operation, if parity is correct
+ 		 * we are done.  Otherwise update the mismatch count and repair
+ 		 * parity if !MD_RECOVERY_CHECK
+ 		 */
+ 		if (sh->ops.zero_sum_result == 0) {
+-			/* both parities are correct */
+-			if (!s->failed)
+-				set_bit(STRIPE_INSYNC, &sh->state);
+-			else {
+-				/* in contrast to the raid5 case we can validate
+-				 * parity, but still have a failure to write
+-				 * back
+-				 */
+-				sh->check_state = check_state_compute_result;
+-				/* Returning at this point means that we may go
+-				 * off and bring p and/or q uptodate again so
+-				 * we make sure to check zero_sum_result again
+-				 * to verify if p or q need writeback
+-				 */
+-			}
++			/* Any parity checked was correct */
++			set_bit(STRIPE_INSYNC, &sh->state);
+ 		} else {
+ 			atomic64_add(STRIPE_SECTORS, &conf->mddev->resync_mismatches);
+ 			if (test_bit(MD_RECOVERY_CHECK, &conf->mddev->recovery)) {
+diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c
+index da1fc17295d9..b996967af8d9 100644
+--- a/drivers/net/bonding/bond_options.c
++++ b/drivers/net/bonding/bond_options.c
+@@ -1098,13 +1098,6 @@ static int bond_option_arp_validate_set(struct bonding *bond,
+ {
+ 	netdev_dbg(bond->dev, "Setting arp_validate to %s (%llu)\n",
+ 		   newval->string, newval->value);
+-
+-	if (bond->dev->flags & IFF_UP) {
+-		if (!newval->value)
+-			bond->recv_probe = NULL;
+-		else if (bond->params.arp_interval)
+-			bond->recv_probe = bond_arp_rcv;
+-	}
+ 	bond->params.arp_validate = newval->value;
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 3da2795e2486..a6535e226d84 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -2461,12 +2461,12 @@ static int macb_open(struct net_device *dev)
+ 		goto pm_exit;
+ 	}
+ 
+-	bp->macbgem_ops.mog_init_rings(bp);
+-	macb_init_hw(bp);
+-
+ 	for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue)
+ 		napi_enable(&queue->napi);
+ 
++	bp->macbgem_ops.mog_init_rings(bp);
++	macb_init_hw(bp);
++
+ 	/* schedule a link state check */
+ 	phy_start(dev->phydev);
+ 
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+index dfebc30c4841..d3f2408dc9e8 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+@@ -1648,7 +1648,7 @@ static struct sk_buff *dpaa_cleanup_tx_fd(const struct dpaa_priv *priv,
+ 				 qm_sg_entry_get_len(&sgt[0]), dma_dir);
+ 
+ 		/* remaining pages were mapped with skb_frag_dma_map() */
+-		for (i = 1; i < nr_frags; i++) {
++		for (i = 1; i <= nr_frags; i++) {
+ 			WARN_ON(qm_sg_entry_is_ext(&sgt[i]));
+ 
+ 			dma_unmap_page(dev, qm_sg_addr(&sgt[i]),
+diff --git a/drivers/net/ethernet/freescale/ucc_geth_ethtool.c b/drivers/net/ethernet/freescale/ucc_geth_ethtool.c
+index 0beee2cc2ddd..722b6de24816 100644
+--- a/drivers/net/ethernet/freescale/ucc_geth_ethtool.c
++++ b/drivers/net/ethernet/freescale/ucc_geth_ethtool.c
+@@ -252,14 +252,12 @@ uec_set_ringparam(struct net_device *netdev,
+ 		return -EINVAL;
+ 	}
+ 
++	if (netif_running(netdev))
++		return -EBUSY;
++
+ 	ug_info->bdRingLenRx[queue] = ring->rx_pending;
+ 	ug_info->bdRingLenTx[queue] = ring->tx_pending;
+ 
+-	if (netif_running(netdev)) {
+-		/* FIXME: restart automatically */
+-		netdev_info(netdev, "Please re-open the interface\n");
+-	}
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/ethernet/seeq/sgiseeq.c b/drivers/net/ethernet/seeq/sgiseeq.c
+index 70cce63a6081..696037d5ac3d 100644
+--- a/drivers/net/ethernet/seeq/sgiseeq.c
++++ b/drivers/net/ethernet/seeq/sgiseeq.c
+@@ -735,6 +735,7 @@ static int sgiseeq_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	platform_set_drvdata(pdev, dev);
++	SET_NETDEV_DEV(dev, &pdev->dev);
+ 	sp = netdev_priv(dev);
+ 
+ 	/* Make private data page aligned */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+index 195669f550f0..ba124a4da793 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+@@ -1015,6 +1015,8 @@ static struct mac_device_info *sun8i_dwmac_setup(void *ppriv)
+ 	mac->mac = &sun8i_dwmac_ops;
+ 	mac->dma = &sun8i_dwmac_dma_ops;
+ 
++	priv->dev->priv_flags |= IFF_UNICAST_FLT;
++
+ 	/* The loopback bit seems to be re-set when link change
+ 	 * Simply mask it each time
+ 	 * Speed 10/100/1000 are set in BIT(2)/BIT(3)
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 77068c545de0..cd5966b0db57 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -2044,11 +2044,14 @@ bool phy_validate_pause(struct phy_device *phydev,
+ 			struct ethtool_pauseparam *pp)
+ {
+ 	if (!linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT,
+-			       phydev->supported) ||
+-	    (!linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
+-				phydev->supported) &&
+-	     pp->rx_pause != pp->tx_pause))
++			       phydev->supported) && pp->rx_pause)
+ 		return false;
++
++	if (!linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
++			       phydev->supported) &&
++	    pp->rx_pause != pp->tx_pause)
++		return false;
++
+ 	return true;
+ }
+ EXPORT_SYMBOL(phy_validate_pause);
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index e9ca1c088d0b..f4c933ac6edf 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -596,13 +596,18 @@ static u16 tun_automq_select_queue(struct tun_struct *tun, struct sk_buff *skb)
+ static u16 tun_ebpf_select_queue(struct tun_struct *tun, struct sk_buff *skb)
+ {
+ 	struct tun_prog *prog;
++	u32 numqueues;
+ 	u16 ret = 0;
+ 
++	numqueues = READ_ONCE(tun->numqueues);
++	if (!numqueues)
++		return 0;
++
+ 	prog = rcu_dereference(tun->steering_prog);
+ 	if (prog)
+ 		ret = bpf_prog_run_clear_cb(prog->prog, skb);
+ 
+-	return ret % tun->numqueues;
++	return ret % numqueues;
+ }
+ 
+ static u16 tun_select_queue(struct net_device *dev, struct sk_buff *skb,
+@@ -700,6 +705,8 @@ static void __tun_detach(struct tun_file *tfile, bool clean)
+ 				   tun->tfiles[tun->numqueues - 1]);
+ 		ntfile = rtnl_dereference(tun->tfiles[index]);
+ 		ntfile->queue_index = index;
++		rcu_assign_pointer(tun->tfiles[tun->numqueues - 1],
++				   NULL);
+ 
+ 		--tun->numqueues;
+ 		if (clean) {
+@@ -1082,7 +1089,7 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	tfile = rcu_dereference(tun->tfiles[txq]);
+ 
+ 	/* Drop packet if interface is not attached */
+-	if (txq >= tun->numqueues)
++	if (!tfile)
+ 		goto drop;
+ 
+ 	if (!rcu_dereference(tun->steering_prog))
+@@ -1305,6 +1312,7 @@ static int tun_xdp_xmit(struct net_device *dev, int n,
+ 
+ 	rcu_read_lock();
+ 
++resample:
+ 	numqueues = READ_ONCE(tun->numqueues);
+ 	if (!numqueues) {
+ 		rcu_read_unlock();
+@@ -1313,6 +1321,8 @@ static int tun_xdp_xmit(struct net_device *dev, int n,
+ 
+ 	tfile = rcu_dereference(tun->tfiles[smp_processor_id() %
+ 					    numqueues]);
++	if (unlikely(!tfile))
++		goto resample;
+ 
+ 	spin_lock(&tfile->tx_ring.producer_lock);
+ 	for (i = 0; i < n; i++) {
+diff --git a/drivers/net/wireless/marvell/mwl8k.c b/drivers/net/wireless/marvell/mwl8k.c
+index 8e4e9b6919e0..ffc565ac2192 100644
+--- a/drivers/net/wireless/marvell/mwl8k.c
++++ b/drivers/net/wireless/marvell/mwl8k.c
+@@ -441,6 +441,9 @@ static const struct ieee80211_rate mwl8k_rates_50[] = {
+ #define MWL8K_CMD_UPDATE_STADB		0x1123
+ #define MWL8K_CMD_BASTREAM		0x1125
+ 
++#define MWL8K_LEGACY_5G_RATE_OFFSET \
++	(ARRAY_SIZE(mwl8k_rates_24) - ARRAY_SIZE(mwl8k_rates_50))
++
+ static const char *mwl8k_cmd_name(__le16 cmd, char *buf, int bufsize)
+ {
+ 	u16 command = le16_to_cpu(cmd);
+@@ -1016,8 +1019,9 @@ mwl8k_rxd_ap_process(void *_rxd, struct ieee80211_rx_status *status,
+ 
+ 	if (rxd->channel > 14) {
+ 		status->band = NL80211_BAND_5GHZ;
+-		if (!(status->encoding == RX_ENC_HT))
+-			status->rate_idx -= 5;
++		if (!(status->encoding == RX_ENC_HT) &&
++		    status->rate_idx >= MWL8K_LEGACY_5G_RATE_OFFSET)
++			status->rate_idx -= MWL8K_LEGACY_5G_RATE_OFFSET;
+ 	} else {
+ 		status->band = NL80211_BAND_2GHZ;
+ 	}
+@@ -1124,8 +1128,9 @@ mwl8k_rxd_sta_process(void *_rxd, struct ieee80211_rx_status *status,
+ 
+ 	if (rxd->channel > 14) {
+ 		status->band = NL80211_BAND_5GHZ;
+-		if (!(status->encoding == RX_ENC_HT))
+-			status->rate_idx -= 5;
++		if (!(status->encoding == RX_ENC_HT) &&
++		    status->rate_idx >= MWL8K_LEGACY_5G_RATE_OFFSET)
++			status->rate_idx -= MWL8K_LEGACY_5G_RATE_OFFSET;
+ 	} else {
+ 		status->band = NL80211_BAND_2GHZ;
+ 	}
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c
+index 6bab162e1bb8..655460f61bbc 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c
+@@ -1675,6 +1675,7 @@ static void _rtl8723e_read_adapter_info(struct ieee80211_hw *hw,
+ 					rtlhal->oem_id = RT_CID_819X_LENOVO;
+ 					break;
+ 				}
++				break;
+ 			case 0x1025:
+ 				rtlhal->oem_id = RT_CID_819X_ACER;
+ 				break;
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index 95441a35eceb..82acd6155adf 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -1486,6 +1486,21 @@ static void hv_pci_assign_slots(struct hv_pcibus_device *hbus)
+ 	}
+ }
+ 
++/*
++ * Remove entries in sysfs pci slot directory.
++ */
++static void hv_pci_remove_slots(struct hv_pcibus_device *hbus)
++{
++	struct hv_pci_dev *hpdev;
++
++	list_for_each_entry(hpdev, &hbus->children, list_entry) {
++		if (!hpdev->pci_slot)
++			continue;
++		pci_destroy_slot(hpdev->pci_slot);
++		hpdev->pci_slot = NULL;
++	}
++}
++
+ /**
+  * create_root_hv_pci_bus() - Expose a new root PCI bus
+  * @hbus:	Root PCI bus, as understood by this driver
+@@ -1761,6 +1776,10 @@ static void pci_devices_present_work(struct work_struct *work)
+ 		hpdev = list_first_entry(&removed, struct hv_pci_dev,
+ 					 list_entry);
+ 		list_del(&hpdev->list_entry);
++
++		if (hpdev->pci_slot)
++			pci_destroy_slot(hpdev->pci_slot);
++
+ 		put_pcichild(hpdev);
+ 	}
+ 
+@@ -1900,6 +1919,9 @@ static void hv_eject_device_work(struct work_struct *work)
+ 			 sizeof(*ejct_pkt), (unsigned long)&ctxt.pkt,
+ 			 VM_PKT_DATA_INBAND, 0);
+ 
++	/* For the get_pcichild() in hv_pci_eject_device() */
++	put_pcichild(hpdev);
++	/* For the two refs got in new_pcichild_device() */
+ 	put_pcichild(hpdev);
+ 	put_pcichild(hpdev);
+ 	put_hvpcibus(hpdev->hbus);
+@@ -2677,6 +2699,7 @@ static int hv_pci_remove(struct hv_device *hdev)
+ 		pci_lock_rescan_remove();
+ 		pci_stop_root_bus(hbus->pci_bus);
+ 		pci_remove_root_bus(hbus->pci_bus);
++		hv_pci_remove_slots(hbus);
+ 		pci_unlock_rescan_remove();
+ 		hbus->state = hv_pcibus_removed;
+ 	}
+diff --git a/drivers/platform/x86/dell-laptop.c b/drivers/platform/x86/dell-laptop.c
+index 95e6ca116e00..a561f653cf13 100644
+--- a/drivers/platform/x86/dell-laptop.c
++++ b/drivers/platform/x86/dell-laptop.c
+@@ -531,7 +531,7 @@ static void dell_rfkill_query(struct rfkill *rfkill, void *data)
+ 		return;
+ 	}
+ 
+-	dell_fill_request(&buffer, 0, 0x2, 0, 0);
++	dell_fill_request(&buffer, 0x2, 0, 0, 0);
+ 	ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ 	hwswitch = buffer.output[1];
+ 
+@@ -562,7 +562,7 @@ static int dell_debugfs_show(struct seq_file *s, void *data)
+ 		return ret;
+ 	status = buffer.output[1];
+ 
+-	dell_fill_request(&buffer, 0, 0x2, 0, 0);
++	dell_fill_request(&buffer, 0x2, 0, 0, 0);
+ 	hwswitch_ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ 	if (hwswitch_ret)
+ 		return hwswitch_ret;
+@@ -647,7 +647,7 @@ static void dell_update_rfkill(struct work_struct *ignored)
+ 	if (ret != 0)
+ 		return;
+ 
+-	dell_fill_request(&buffer, 0, 0x2, 0, 0);
++	dell_fill_request(&buffer, 0x2, 0, 0, 0);
+ 	ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ 
+ 	if (ret == 0 && (status & BIT(0)))
+diff --git a/drivers/platform/x86/sony-laptop.c b/drivers/platform/x86/sony-laptop.c
+index 4bfbfa3f78e6..2058445fc456 100644
+--- a/drivers/platform/x86/sony-laptop.c
++++ b/drivers/platform/x86/sony-laptop.c
+@@ -4424,14 +4424,16 @@ sony_pic_read_possible_resource(struct acpi_resource *resource, void *context)
+ 			}
+ 			return AE_OK;
+ 		}
++
++	case ACPI_RESOURCE_TYPE_END_TAG:
++		return AE_OK;
++
+ 	default:
+ 		dprintk("Resource %d isn't an IRQ nor an IO port\n",
+ 			resource->type);
++		return AE_CTRL_TERMINATE;
+ 
+-	case ACPI_RESOURCE_TYPE_END_TAG:
+-		return AE_OK;
+ 	}
+-	return AE_CTRL_TERMINATE;
+ }
+ 
+ static int sony_pic_possible_resources(struct acpi_device *device)
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index 726341f2b638..89ce14b35adc 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -79,7 +79,7 @@
+ #include <linux/jiffies.h>
+ #include <linux/workqueue.h>
+ #include <linux/acpi.h>
+-#include <linux/pci_ids.h>
++#include <linux/pci.h>
+ #include <linux/power_supply.h>
+ #include <sound/core.h>
+ #include <sound/control.h>
+@@ -4501,6 +4501,74 @@ static void bluetooth_exit(void)
+ 	bluetooth_shutdown();
+ }
+ 
++static const struct dmi_system_id bt_fwbug_list[] __initconst = {
++	{
++		.ident = "ThinkPad E485",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_BOARD_NAME, "20KU"),
++		},
++	},
++	{
++		.ident = "ThinkPad E585",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_BOARD_NAME, "20KV"),
++		},
++	},
++	{
++		.ident = "ThinkPad A285 - 20MW",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_BOARD_NAME, "20MW"),
++		},
++	},
++	{
++		.ident = "ThinkPad A285 - 20MX",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_BOARD_NAME, "20MX"),
++		},
++	},
++	{
++		.ident = "ThinkPad A485 - 20MU",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_BOARD_NAME, "20MU"),
++		},
++	},
++	{
++		.ident = "ThinkPad A485 - 20MV",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_BOARD_NAME, "20MV"),
++		},
++	},
++	{}
++};
++
++static const struct pci_device_id fwbug_cards_ids[] __initconst = {
++	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x24F3) },
++	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x24FD) },
++	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x2526) },
++	{}
++};
++
++
++static int __init have_bt_fwbug(void)
++{
++	/*
++	 * Some AMD based ThinkPads have a firmware bug that calling
++	 * "GBDC" will cause bluetooth on Intel wireless cards blocked
++	 */
++	if (dmi_check_system(bt_fwbug_list) && pci_dev_present(fwbug_cards_ids)) {
++		vdbg_printk(TPACPI_DBG_INIT | TPACPI_DBG_RFKILL,
++			FW_BUG "disable bluetooth subdriver for Intel cards\n");
++		return 1;
++	} else
++		return 0;
++}
++
+ static int __init bluetooth_init(struct ibm_init_struct *iibm)
+ {
+ 	int res;
+@@ -4513,7 +4581,7 @@ static int __init bluetooth_init(struct ibm_init_struct *iibm)
+ 
+ 	/* bluetooth not supported on 570, 600e/x, 770e, 770x, A21e, A2xm/p,
+ 	   G4x, R30, R31, R40e, R50e, T20-22, X20-21 */
+-	tp_features.bluetooth = hkey_handle &&
++	tp_features.bluetooth = !have_bt_fwbug() && hkey_handle &&
+ 	    acpi_evalf(hkey_handle, &status, "GBDC", "qd");
+ 
+ 	vdbg_printk(TPACPI_DBG_INIT | TPACPI_DBG_RFKILL,
+diff --git a/drivers/usb/serial/generic.c b/drivers/usb/serial/generic.c
+index 2274d9625f63..0fff4968ea1b 100644
+--- a/drivers/usb/serial/generic.c
++++ b/drivers/usb/serial/generic.c
+@@ -376,6 +376,7 @@ void usb_serial_generic_read_bulk_callback(struct urb *urb)
+ 	struct usb_serial_port *port = urb->context;
+ 	unsigned char *data = urb->transfer_buffer;
+ 	unsigned long flags;
++	bool stopped = false;
+ 	int status = urb->status;
+ 	int i;
+ 
+@@ -383,33 +384,51 @@ void usb_serial_generic_read_bulk_callback(struct urb *urb)
+ 		if (urb == port->read_urbs[i])
+ 			break;
+ 	}
+-	set_bit(i, &port->read_urbs_free);
+ 
+ 	dev_dbg(&port->dev, "%s - urb %d, len %d\n", __func__, i,
+ 							urb->actual_length);
+ 	switch (status) {
+ 	case 0:
++		usb_serial_debug_data(&port->dev, __func__, urb->actual_length,
++							data);
++		port->serial->type->process_read_urb(urb);
+ 		break;
+ 	case -ENOENT:
+ 	case -ECONNRESET:
+ 	case -ESHUTDOWN:
+ 		dev_dbg(&port->dev, "%s - urb stopped: %d\n",
+ 							__func__, status);
+-		return;
++		stopped = true;
++		break;
+ 	case -EPIPE:
+ 		dev_err(&port->dev, "%s - urb stopped: %d\n",
+ 							__func__, status);
+-		return;
++		stopped = true;
++		break;
+ 	default:
+ 		dev_dbg(&port->dev, "%s - nonzero urb status: %d\n",
+ 							__func__, status);
+-		goto resubmit;
++		break;
+ 	}
+ 
+-	usb_serial_debug_data(&port->dev, __func__, urb->actual_length, data);
+-	port->serial->type->process_read_urb(urb);
++	/*
++	 * Make sure URB processing is done before marking as free to avoid
++	 * racing with unthrottle() on another CPU. Matches the barriers
++	 * implied by the test_and_clear_bit() in
++	 * usb_serial_generic_submit_read_urb().
++	 */
++	smp_mb__before_atomic();
++	set_bit(i, &port->read_urbs_free);
++	/*
++	 * Make sure URB is marked as free before checking the throttled flag
++	 * to avoid racing with unthrottle() on another CPU. Matches the
++	 * smp_mb() in unthrottle().
++	 */
++	smp_mb__after_atomic();
++
++	if (stopped)
++		return;
+ 
+-resubmit:
+ 	/* Throttle the device if requested by tty */
+ 	spin_lock_irqsave(&port->lock, flags);
+ 	port->throttled = port->throttle_req;
+@@ -484,6 +503,12 @@ void usb_serial_generic_unthrottle(struct tty_struct *tty)
+ 	port->throttled = port->throttle_req = 0;
+ 	spin_unlock_irq(&port->lock);
+ 
++	/*
++	 * Matches the smp_mb__after_atomic() in
++	 * usb_serial_generic_read_bulk_callback().
++	 */
++	smp_mb();
++
+ 	if (was_throttled)
+ 		usb_serial_generic_submit_read_urbs(port, GFP_KERNEL);
+ }
+diff --git a/drivers/virt/fsl_hypervisor.c b/drivers/virt/fsl_hypervisor.c
+index 8ba726e600e9..1bbd910d4ddb 100644
+--- a/drivers/virt/fsl_hypervisor.c
++++ b/drivers/virt/fsl_hypervisor.c
+@@ -215,6 +215,9 @@ static long ioctl_memcpy(struct fsl_hv_ioctl_memcpy __user *p)
+ 	 * hypervisor.
+ 	 */
+ 	lb_offset = param.local_vaddr & (PAGE_SIZE - 1);
++	if (param.count == 0 ||
++	    param.count > U64_MAX - lb_offset - PAGE_SIZE + 1)
++		return -EINVAL;
+ 	num_pages = (param.count + lb_offset + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ 
+ 	/* Allocate the buffers we need */
+@@ -331,8 +334,8 @@ static long ioctl_dtprop(struct fsl_hv_ioctl_prop __user *p, int set)
+ 	struct fsl_hv_ioctl_prop param;
+ 	char __user *upath, *upropname;
+ 	void __user *upropval;
+-	char *path = NULL, *propname = NULL;
+-	void *propval = NULL;
++	char *path, *propname;
++	void *propval;
+ 	int ret = 0;
+ 
+ 	/* Get the parameters from the user. */
+@@ -344,32 +347,30 @@ static long ioctl_dtprop(struct fsl_hv_ioctl_prop __user *p, int set)
+ 	upropval = (void __user *)(uintptr_t)param.propval;
+ 
+ 	path = strndup_user(upath, FH_DTPROP_MAX_PATHLEN);
+-	if (IS_ERR(path)) {
+-		ret = PTR_ERR(path);
+-		goto out;
+-	}
++	if (IS_ERR(path))
++		return PTR_ERR(path);
+ 
+ 	propname = strndup_user(upropname, FH_DTPROP_MAX_PATHLEN);
+ 	if (IS_ERR(propname)) {
+ 		ret = PTR_ERR(propname);
+-		goto out;
++		goto err_free_path;
+ 	}
+ 
+ 	if (param.proplen > FH_DTPROP_MAX_PROPLEN) {
+ 		ret = -EINVAL;
+-		goto out;
++		goto err_free_propname;
+ 	}
+ 
+ 	propval = kmalloc(param.proplen, GFP_KERNEL);
+ 	if (!propval) {
+ 		ret = -ENOMEM;
+-		goto out;
++		goto err_free_propname;
+ 	}
+ 
+ 	if (set) {
+ 		if (copy_from_user(propval, upropval, param.proplen)) {
+ 			ret = -EFAULT;
+-			goto out;
++			goto err_free_propval;
+ 		}
+ 
+ 		param.ret = fh_partition_set_dtprop(param.handle,
+@@ -388,7 +389,7 @@ static long ioctl_dtprop(struct fsl_hv_ioctl_prop __user *p, int set)
+ 			if (copy_to_user(upropval, propval, param.proplen) ||
+ 			    put_user(param.proplen, &p->proplen)) {
+ 				ret = -EFAULT;
+-				goto out;
++				goto err_free_propval;
+ 			}
+ 		}
+ 	}
+@@ -396,10 +397,12 @@ static long ioctl_dtprop(struct fsl_hv_ioctl_prop __user *p, int set)
+ 	if (put_user(param.ret, &p->ret))
+ 		ret = -EFAULT;
+ 
+-out:
+-	kfree(path);
++err_free_propval:
+ 	kfree(propval);
++err_free_propname:
+ 	kfree(propname);
++err_free_path:
++	kfree(path);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/virt/vboxguest/vboxguest_core.c b/drivers/virt/vboxguest/vboxguest_core.c
+index 8ca333f21292..2307b0329aec 100644
+--- a/drivers/virt/vboxguest/vboxguest_core.c
++++ b/drivers/virt/vboxguest/vboxguest_core.c
+@@ -1298,6 +1298,20 @@ static int vbg_ioctl_hgcm_disconnect(struct vbg_dev *gdev,
+ 	return ret;
+ }
+ 
++static bool vbg_param_valid(enum vmmdev_hgcm_function_parameter_type type)
++{
++	switch (type) {
++	case VMMDEV_HGCM_PARM_TYPE_32BIT:
++	case VMMDEV_HGCM_PARM_TYPE_64BIT:
++	case VMMDEV_HGCM_PARM_TYPE_LINADDR:
++	case VMMDEV_HGCM_PARM_TYPE_LINADDR_IN:
++	case VMMDEV_HGCM_PARM_TYPE_LINADDR_OUT:
++		return true;
++	default:
++		return false;
++	}
++}
++
+ static int vbg_ioctl_hgcm_call(struct vbg_dev *gdev,
+ 			       struct vbg_session *session, bool f32bit,
+ 			       struct vbg_ioctl_hgcm_call *call)
+@@ -1333,6 +1347,23 @@ static int vbg_ioctl_hgcm_call(struct vbg_dev *gdev,
+ 	}
+ 	call->hdr.size_out = actual_size;
+ 
++	/* Validate parameter types */
++	if (f32bit) {
++		struct vmmdev_hgcm_function_parameter32 *parm =
++			VBG_IOCTL_HGCM_CALL_PARMS32(call);
++
++		for (i = 0; i < call->parm_count; i++)
++			if (!vbg_param_valid(parm[i].type))
++				return -EINVAL;
++	} else {
++		struct vmmdev_hgcm_function_parameter *parm =
++			VBG_IOCTL_HGCM_CALL_PARMS(call);
++
++		for (i = 0; i < call->parm_count; i++)
++			if (!vbg_param_valid(parm[i].type))
++				return -EINVAL;
++	}
++
+ 	/*
+ 	 * Validate the client id.
+ 	 */
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index 5df92c308286..021010424fa5 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -1004,6 +1004,7 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
+ 
+ 	if (unlikely(vq->vq.num_free < 1)) {
+ 		pr_debug("Can't add buf len 1 - avail = 0\n");
++		kfree(desc);
+ 		END_USE(vq);
+ 		return -ENOSPC;
+ 	}
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 9727944139f2..d87dfa5aa112 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -220,12 +220,14 @@ struct block_device *f2fs_target_device(struct f2fs_sb_info *sbi,
+ 	struct block_device *bdev = sbi->sb->s_bdev;
+ 	int i;
+ 
+-	for (i = 0; i < sbi->s_ndevs; i++) {
+-		if (FDEV(i).start_blk <= blk_addr &&
+-					FDEV(i).end_blk >= blk_addr) {
+-			blk_addr -= FDEV(i).start_blk;
+-			bdev = FDEV(i).bdev;
+-			break;
++	if (f2fs_is_multi_device(sbi)) {
++		for (i = 0; i < sbi->s_ndevs; i++) {
++			if (FDEV(i).start_blk <= blk_addr &&
++			    FDEV(i).end_blk >= blk_addr) {
++				blk_addr -= FDEV(i).start_blk;
++				bdev = FDEV(i).bdev;
++				break;
++			}
+ 		}
+ 	}
+ 	if (bio) {
+@@ -239,6 +241,9 @@ int f2fs_target_device_index(struct f2fs_sb_info *sbi, block_t blkaddr)
+ {
+ 	int i;
+ 
++	if (!f2fs_is_multi_device(sbi))
++		return 0;
++
+ 	for (i = 0; i < sbi->s_ndevs; i++)
+ 		if (FDEV(i).start_blk <= blkaddr && FDEV(i).end_blk >= blkaddr)
+ 			return i;
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 87f75ebd2fd6..7bea1bc6589f 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1366,6 +1366,17 @@ static inline bool time_to_inject(struct f2fs_sb_info *sbi, int type)
+ }
+ #endif
+ 
++/*
++ * Test if the mounted volume is a multi-device volume.
++ *   - For a single regular disk volume, sbi->s_ndevs is 0.
++ *   - For a single zoned disk volume, sbi->s_ndevs is 1.
++ *   - For a multi-device volume, sbi->s_ndevs is always 2 or more.
++ */
++static inline bool f2fs_is_multi_device(struct f2fs_sb_info *sbi)
++{
++	return sbi->s_ndevs > 1;
++}
++
+ /* For write statistics. Suppose sector size is 512 bytes,
+  * and the return value is in kbytes. s is of struct f2fs_sb_info.
+  */
+@@ -3615,7 +3626,7 @@ static inline bool f2fs_force_buffered_io(struct inode *inode,
+ 
+ 	if (f2fs_post_read_required(inode))
+ 		return true;
+-	if (sbi->s_ndevs)
++	if (f2fs_is_multi_device(sbi))
+ 		return true;
+ 	/*
+ 	 * for blkzoned device, fallback direct IO to buffered IO, so
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 5742ab8b57dc..30d49467578e 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -2573,7 +2573,7 @@ static int f2fs_ioc_flush_device(struct file *filp, unsigned long arg)
+ 							sizeof(range)))
+ 		return -EFAULT;
+ 
+-	if (sbi->s_ndevs <= 1 || sbi->s_ndevs - 1 <= range.dev_num ||
++	if (!f2fs_is_multi_device(sbi) || sbi->s_ndevs - 1 <= range.dev_num ||
+ 			__is_large_section(sbi)) {
+ 		f2fs_msg(sbi->sb, KERN_WARNING,
+ 			"Can't flush %u in %d for segs_per_sec %u != 1\n",
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 195cf0f9d9ef..ab764bd106de 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1346,7 +1346,7 @@ void f2fs_build_gc_manager(struct f2fs_sb_info *sbi)
+ 	sbi->gc_pin_file_threshold = DEF_GC_FAILED_PINNED_FILES;
+ 
+ 	/* give warm/cold data area from slower device */
+-	if (sbi->s_ndevs && !__is_large_section(sbi))
++	if (f2fs_is_multi_device(sbi) && !__is_large_section(sbi))
+ 		SIT_I(sbi)->last_victim[ALLOC_NEXT] =
+ 				GET_SEGNO(sbi, FDEV(0).end_blk) + 1;
+ }
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index aa7fe79b62b2..ddfa2eb7ec58 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -580,7 +580,7 @@ static int submit_flush_wait(struct f2fs_sb_info *sbi, nid_t ino)
+ 	int ret = 0;
+ 	int i;
+ 
+-	if (!sbi->s_ndevs)
++	if (!f2fs_is_multi_device(sbi))
+ 		return __submit_flush_wait(sbi, sbi->sb->s_bdev);
+ 
+ 	for (i = 0; i < sbi->s_ndevs; i++) {
+@@ -648,7 +648,8 @@ int f2fs_issue_flush(struct f2fs_sb_info *sbi, nid_t ino)
+ 		return ret;
+ 	}
+ 
+-	if (atomic_inc_return(&fcc->queued_flush) == 1 || sbi->s_ndevs > 1) {
++	if (atomic_inc_return(&fcc->queued_flush) == 1 ||
++	    f2fs_is_multi_device(sbi)) {
+ 		ret = submit_flush_wait(sbi, ino);
+ 		atomic_dec(&fcc->queued_flush);
+ 
+@@ -754,7 +755,7 @@ int f2fs_flush_device_cache(struct f2fs_sb_info *sbi)
+ {
+ 	int ret = 0, i;
+ 
+-	if (!sbi->s_ndevs)
++	if (!f2fs_is_multi_device(sbi))
+ 		return 0;
+ 
+ 	for (i = 1; i < sbi->s_ndevs; i++) {
+@@ -1369,7 +1370,7 @@ static int __queue_discard_cmd(struct f2fs_sb_info *sbi,
+ 
+ 	trace_f2fs_queue_discard(bdev, blkstart, blklen);
+ 
+-	if (sbi->s_ndevs) {
++	if (f2fs_is_multi_device(sbi)) {
+ 		int devi = f2fs_target_device_index(sbi, blkstart);
+ 
+ 		blkstart -= FDEV(devi).start_blk;
+@@ -1732,7 +1733,7 @@ static int __f2fs_issue_discard_zone(struct f2fs_sb_info *sbi,
+ 	block_t lblkstart = blkstart;
+ 	int devi = 0;
+ 
+-	if (sbi->s_ndevs) {
++	if (f2fs_is_multi_device(sbi)) {
+ 		devi = f2fs_target_device_index(sbi, blkstart);
+ 		blkstart -= FDEV(devi).start_blk;
+ 	}
+@@ -3089,7 +3090,7 @@ static void update_device_state(struct f2fs_io_info *fio)
+ 	struct f2fs_sb_info *sbi = fio->sbi;
+ 	unsigned int devidx;
+ 
+-	if (!sbi->s_ndevs)
++	if (!f2fs_is_multi_device(sbi))
+ 		return;
+ 
+ 	devidx = f2fs_target_device_index(sbi, fio->new_blkaddr);
+diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
+index b84d635567d3..1e7a74b8e064 100644
+--- a/fs/kernfs/dir.c
++++ b/fs/kernfs/dir.c
+@@ -650,11 +650,10 @@ static struct kernfs_node *__kernfs_new_node(struct kernfs_root *root,
+ 	kn->id.generation = gen;
+ 
+ 	/*
+-	 * set ino first. This barrier is paired with atomic_inc_not_zero in
++	 * set ino first. This RELEASE is paired with atomic_inc_not_zero in
+ 	 * kernfs_find_and_get_node_by_ino
+ 	 */
+-	smp_mb__before_atomic();
+-	atomic_set(&kn->count, 1);
++	atomic_set_release(&kn->count, 1);
+ 	atomic_set(&kn->active, KN_DEACTIVATED_BIAS);
+ 	RB_CLEAR_NODE(&kn->rb);
+ 
+diff --git a/include/linux/i2c.h b/include/linux/i2c.h
+index 383510b4f083..646dd962d6a1 100644
+--- a/include/linux/i2c.h
++++ b/include/linux/i2c.h
+@@ -682,7 +682,8 @@ struct i2c_adapter {
+ 	int retries;
+ 	struct device dev;		/* the adapter device */
+ 	unsigned long locked_flags;	/* owned by the I2C core */
+-#define I2C_ALF_IS_SUSPENDED	0
++#define I2C_ALF_IS_SUSPENDED		0
++#define I2C_ALF_SUSPEND_REPORTED	1
+ 
+ 	int nr;
+ 	char name[48];
+diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c
+index 8d77b6ee4477..eb98be23423e 100644
+--- a/net/8021q/vlan_dev.c
++++ b/net/8021q/vlan_dev.c
+@@ -367,10 +367,12 @@ static int vlan_dev_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+ 	ifrr.ifr_ifru = ifr->ifr_ifru;
+ 
+ 	switch (cmd) {
++	case SIOCSHWTSTAMP:
++		if (!net_eq(dev_net(dev), &init_net))
++			break;
+ 	case SIOCGMIIPHY:
+ 	case SIOCGMIIREG:
+ 	case SIOCSMIIREG:
+-	case SIOCSHWTSTAMP:
+ 	case SIOCGHWTSTAMP:
+ 		if (netif_device_present(real_dev) && ops->ndo_do_ioctl)
+ 			err = ops->ndo_do_ioctl(real_dev, &ifrr, cmd);
+diff --git a/net/bridge/br_if.c b/net/bridge/br_if.c
+index 41f0a696a65f..0cb0aa0313a8 100644
+--- a/net/bridge/br_if.c
++++ b/net/bridge/br_if.c
+@@ -602,13 +602,15 @@ int br_add_if(struct net_bridge *br, struct net_device *dev,
+ 	call_netdevice_notifiers(NETDEV_JOIN, dev);
+ 
+ 	err = dev_set_allmulti(dev, 1);
+-	if (err)
+-		goto put_back;
++	if (err) {
++		kfree(p);	/* kobject not yet init'd, manually free */
++		goto err1;
++	}
+ 
+ 	err = kobject_init_and_add(&p->kobj, &brport_ktype, &(dev->dev.kobj),
+ 				   SYSFS_BRIDGE_PORT_ATTR);
+ 	if (err)
+-		goto err1;
++		goto err2;
+ 
+ 	err = br_sysfs_addif(p);
+ 	if (err)
+@@ -700,12 +702,9 @@ err3:
+ 	sysfs_remove_link(br->ifobj, p->dev->name);
+ err2:
+ 	kobject_put(&p->kobj);
+-	p = NULL; /* kobject_put frees */
+-err1:
+ 	dev_set_allmulti(dev, -1);
+-put_back:
++err1:
+ 	dev_put(dev);
+-	kfree(p);
+ 	return err;
+ }
+ 
+diff --git a/net/core/fib_rules.c b/net/core/fib_rules.c
+index ffbb827723a2..c49b752ea7eb 100644
+--- a/net/core/fib_rules.c
++++ b/net/core/fib_rules.c
+@@ -756,9 +756,9 @@ int fib_nl_newrule(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 	if (err)
+ 		goto errout;
+ 
+-	if ((nlh->nlmsg_flags & NLM_F_EXCL) &&
+-	    rule_exists(ops, frh, tb, rule)) {
+-		err = -EEXIST;
++	if (rule_exists(ops, frh, tb, rule)) {
++		if (nlh->nlmsg_flags & NLM_F_EXCL)
++			err = -EEXIST;
+ 		goto errout_free;
+ 	}
+ 
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index 94a450b2191a..139470d8d3c0 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -712,7 +712,10 @@ bool __skb_flow_bpf_dissect(struct bpf_prog *prog,
+ 	flow_keys->thoff = flow_keys->nhoff;
+ 
+ 	bpf_compute_data_pointers((struct sk_buff *)skb);
++
++	preempt_disable();
+ 	result = BPF_PROG_RUN(prog, skb);
++	preempt_enable();
+ 
+ 	/* Restore state */
+ 	memcpy(cb, &cb_saved, sizeof(cb_saved));
+diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
+index 36de4f2a3366..cb080efdc7b3 100644
+--- a/net/dsa/dsa.c
++++ b/net/dsa/dsa.c
+@@ -344,15 +344,22 @@ static int __init dsa_init_module(void)
+ 
+ 	rc = dsa_slave_register_notifier();
+ 	if (rc)
+-		return rc;
++		goto register_notifier_fail;
+ 
+ 	rc = dsa_legacy_register();
+ 	if (rc)
+-		return rc;
++		goto legacy_register_fail;
+ 
+ 	dev_add_pack(&dsa_pack_type);
+ 
+ 	return 0;
++
++legacy_register_fail:
++	dsa_slave_unregister_notifier();
++register_notifier_fail:
++	destroy_workqueue(dsa_owq);
++
++	return rc;
+ }
+ module_init(dsa_init_module);
+ 
+diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
+index c55a5432cf37..dc91c27bb788 100644
+--- a/net/ipv4/raw.c
++++ b/net/ipv4/raw.c
+@@ -173,6 +173,7 @@ static int icmp_filter(const struct sock *sk, const struct sk_buff *skb)
+ static int raw_v4_input(struct sk_buff *skb, const struct iphdr *iph, int hash)
+ {
+ 	int sdif = inet_sdif(skb);
++	int dif = inet_iif(skb);
+ 	struct sock *sk;
+ 	struct hlist_head *head;
+ 	int delivered = 0;
+@@ -185,8 +186,7 @@ static int raw_v4_input(struct sk_buff *skb, const struct iphdr *iph, int hash)
+ 
+ 	net = dev_net(skb->dev);
+ 	sk = __raw_v4_lookup(net, __sk_head(head), iph->protocol,
+-			     iph->saddr, iph->daddr,
+-			     skb->dev->ifindex, sdif);
++			     iph->saddr, iph->daddr, dif, sdif);
+ 
+ 	while (sk) {
+ 		delivered = 1;
+diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
+index b2109b74857d..971d60bf9640 100644
+--- a/net/ipv6/sit.c
++++ b/net/ipv6/sit.c
+@@ -1084,7 +1084,7 @@ static void ipip6_tunnel_bind_dev(struct net_device *dev)
+ 	if (!tdev && tunnel->parms.link)
+ 		tdev = __dev_get_by_index(tunnel->net, tunnel->parms.link);
+ 
+-	if (tdev) {
++	if (tdev && !netif_is_l3_master(tdev)) {
+ 		int t_hlen = tunnel->hlen + sizeof(struct iphdr);
+ 
+ 		dev->hard_header_len = tdev->hard_header_len + sizeof(struct iphdr);
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 9b81813dd16a..59da6f5b717d 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -4603,14 +4603,29 @@ static void __exit packet_exit(void)
+ 
+ static int __init packet_init(void)
+ {
+-	int rc = proto_register(&packet_proto, 0);
++	int rc;
+ 
+-	if (rc != 0)
++	rc = proto_register(&packet_proto, 0);
++	if (rc)
+ 		goto out;
++	rc = sock_register(&packet_family_ops);
++	if (rc)
++		goto out_proto;
++	rc = register_pernet_subsys(&packet_net_ops);
++	if (rc)
++		goto out_sock;
++	rc = register_netdevice_notifier(&packet_netdev_notifier);
++	if (rc)
++		goto out_pernet;
+ 
+-	sock_register(&packet_family_ops);
+-	register_pernet_subsys(&packet_net_ops);
+-	register_netdevice_notifier(&packet_netdev_notifier);
++	return 0;
++
++out_pernet:
++	unregister_pernet_subsys(&packet_net_ops);
++out_sock:
++	sock_unregister(PF_PACKET);
++out_proto:
++	proto_unregister(&packet_proto);
+ out:
+ 	return rc;
+ }
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index b542f14ed444..2851937f6e32 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -734,11 +734,11 @@ static __poll_t tipc_poll(struct file *file, struct socket *sock,
+ 
+ 	switch (sk->sk_state) {
+ 	case TIPC_ESTABLISHED:
+-	case TIPC_CONNECTING:
+ 		if (!tsk->cong_link_cnt && !tsk_conn_cong(tsk))
+ 			revents |= EPOLLOUT;
+ 		/* fall through */
+ 	case TIPC_LISTEN:
++	case TIPC_CONNECTING:
+ 		if (!skb_queue_empty(&sk->sk_receive_queue))
+ 			revents |= EPOLLIN | EPOLLRDNORM;
+ 		break;
+@@ -2041,7 +2041,7 @@ static bool tipc_sk_filter_connect(struct tipc_sock *tsk, struct sk_buff *skb)
+ 			if (msg_data_sz(hdr))
+ 				return true;
+ 			/* Empty ACK-, - wake up sleeping connect() and drop */
+-			sk->sk_data_ready(sk);
++			sk->sk_state_change(sk);
+ 			msg_set_dest_droppable(hdr, 1);
+ 			return false;
+ 		}
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 1d0b37af2444..28bff30c2f15 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -4572,7 +4572,7 @@ static int selinux_socket_connect_helper(struct socket *sock,
+ 		struct lsm_network_audit net = {0,};
+ 		struct sockaddr_in *addr4 = NULL;
+ 		struct sockaddr_in6 *addr6 = NULL;
+-		unsigned short snum;
++		unsigned short snum = 0;
+ 		u32 sid, perm;
+ 
+ 		/* sctp_connectx(3) calls via selinux_sctp_bind_connect()
+@@ -4595,12 +4595,12 @@ static int selinux_socket_connect_helper(struct socket *sock,
+ 			break;
+ 		default:
+ 			/* Note that SCTP services expect -EINVAL, whereas
+-			 * others expect -EAFNOSUPPORT.
++			 * others must handle this at the protocol level:
++			 * connect(AF_UNSPEC) on a connected socket is
++			 * a documented way disconnect the socket.
+ 			 */
+ 			if (sksec->sclass == SECCLASS_SCTP_SOCKET)
+ 				return -EINVAL;
+-			else
+-				return -EAFNOSUPPORT;
+ 		}
+ 
+ 		err = sel_netport_sid(sk->sk_protocol, snum, &sid);
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index 5019cdae5d0b..0fad0dc62338 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -3095,9 +3095,9 @@ TEST(user_notification_basic)
+ 
+ 	/* Check that we get -ENOSYS with no listener attached */
+ 	if (pid == 0) {
+-		if (user_trap_syscall(__NR_getpid, 0) < 0)
++		if (user_trap_syscall(__NR_getppid, 0) < 0)
+ 			exit(1);
+-		ret = syscall(__NR_getpid);
++		ret = syscall(__NR_getppid);
+ 		exit(ret >= 0 || errno != ENOSYS);
+ 	}
+ 
+@@ -3112,12 +3112,12 @@ TEST(user_notification_basic)
+ 	EXPECT_EQ(seccomp(SECCOMP_SET_MODE_FILTER, 0, &prog), 0);
+ 
+ 	/* Check that the basic notification machinery works */
+-	listener = user_trap_syscall(__NR_getpid,
++	listener = user_trap_syscall(__NR_getppid,
+ 				     SECCOMP_FILTER_FLAG_NEW_LISTENER);
+ 	ASSERT_GE(listener, 0);
+ 
+ 	/* Installing a second listener in the chain should EBUSY */
+-	EXPECT_EQ(user_trap_syscall(__NR_getpid,
++	EXPECT_EQ(user_trap_syscall(__NR_getppid,
+ 				    SECCOMP_FILTER_FLAG_NEW_LISTENER),
+ 		  -1);
+ 	EXPECT_EQ(errno, EBUSY);
+@@ -3126,7 +3126,7 @@ TEST(user_notification_basic)
+ 	ASSERT_GE(pid, 0);
+ 
+ 	if (pid == 0) {
+-		ret = syscall(__NR_getpid);
++		ret = syscall(__NR_getppid);
+ 		exit(ret != USER_NOTIF_MAGIC);
+ 	}
+ 
+@@ -3144,7 +3144,7 @@ TEST(user_notification_basic)
+ 	EXPECT_GT(poll(&pollfd, 1, -1), 0);
+ 	EXPECT_EQ(pollfd.revents, POLLOUT);
+ 
+-	EXPECT_EQ(req.data.nr,  __NR_getpid);
++	EXPECT_EQ(req.data.nr,  __NR_getppid);
+ 
+ 	resp.id = req.id;
+ 	resp.error = 0;
+@@ -3176,7 +3176,7 @@ TEST(user_notification_kill_in_middle)
+ 		TH_LOG("Kernel does not support PR_SET_NO_NEW_PRIVS!");
+ 	}
+ 
+-	listener = user_trap_syscall(__NR_getpid,
++	listener = user_trap_syscall(__NR_getppid,
+ 				     SECCOMP_FILTER_FLAG_NEW_LISTENER);
+ 	ASSERT_GE(listener, 0);
+ 
+@@ -3188,7 +3188,7 @@ TEST(user_notification_kill_in_middle)
+ 	ASSERT_GE(pid, 0);
+ 
+ 	if (pid == 0) {
+-		ret = syscall(__NR_getpid);
++		ret = syscall(__NR_getppid);
+ 		exit(ret != USER_NOTIF_MAGIC);
+ 	}
+ 
+@@ -3298,7 +3298,7 @@ TEST(user_notification_closed_listener)
+ 		TH_LOG("Kernel does not support PR_SET_NO_NEW_PRIVS!");
+ 	}
+ 
+-	listener = user_trap_syscall(__NR_getpid,
++	listener = user_trap_syscall(__NR_getppid,
+ 				     SECCOMP_FILTER_FLAG_NEW_LISTENER);
+ 	ASSERT_GE(listener, 0);
+ 
+@@ -3309,7 +3309,7 @@ TEST(user_notification_closed_listener)
+ 	ASSERT_GE(pid, 0);
+ 	if (pid == 0) {
+ 		close(listener);
+-		ret = syscall(__NR_getpid);
++		ret = syscall(__NR_getppid);
+ 		exit(ret != -1 && errno != ENOSYS);
+ 	}
+ 
+@@ -3332,14 +3332,15 @@ TEST(user_notification_child_pid_ns)
+ 
+ 	ASSERT_EQ(unshare(CLONE_NEWUSER | CLONE_NEWPID), 0);
+ 
+-	listener = user_trap_syscall(__NR_getpid, SECCOMP_FILTER_FLAG_NEW_LISTENER);
++	listener = user_trap_syscall(__NR_getppid,
++				     SECCOMP_FILTER_FLAG_NEW_LISTENER);
+ 	ASSERT_GE(listener, 0);
+ 
+ 	pid = fork();
+ 	ASSERT_GE(pid, 0);
+ 
+ 	if (pid == 0)
+-		exit(syscall(__NR_getpid) != USER_NOTIF_MAGIC);
++		exit(syscall(__NR_getppid) != USER_NOTIF_MAGIC);
+ 
+ 	EXPECT_EQ(ioctl(listener, SECCOMP_IOCTL_NOTIF_RECV, &req), 0);
+ 	EXPECT_EQ(req.pid, pid);
+@@ -3371,7 +3372,8 @@ TEST(user_notification_sibling_pid_ns)
+ 		TH_LOG("Kernel does not support PR_SET_NO_NEW_PRIVS!");
+ 	}
+ 
+-	listener = user_trap_syscall(__NR_getpid, SECCOMP_FILTER_FLAG_NEW_LISTENER);
++	listener = user_trap_syscall(__NR_getppid,
++				     SECCOMP_FILTER_FLAG_NEW_LISTENER);
+ 	ASSERT_GE(listener, 0);
+ 
+ 	pid = fork();
+@@ -3384,7 +3386,7 @@ TEST(user_notification_sibling_pid_ns)
+ 		ASSERT_GE(pid2, 0);
+ 
+ 		if (pid2 == 0)
+-			exit(syscall(__NR_getpid) != USER_NOTIF_MAGIC);
++			exit(syscall(__NR_getppid) != USER_NOTIF_MAGIC);
+ 
+ 		EXPECT_EQ(waitpid(pid2, &status, 0), pid2);
+ 		EXPECT_EQ(true, WIFEXITED(status));
+@@ -3393,11 +3395,11 @@ TEST(user_notification_sibling_pid_ns)
+ 	}
+ 
+ 	/* Create the sibling ns, and sibling in it. */
+-	EXPECT_EQ(unshare(CLONE_NEWPID), 0);
+-	EXPECT_EQ(errno, 0);
++	ASSERT_EQ(unshare(CLONE_NEWPID), 0);
++	ASSERT_EQ(errno, 0);
+ 
+ 	pid2 = fork();
+-	EXPECT_GE(pid2, 0);
++	ASSERT_GE(pid2, 0);
+ 
+ 	if (pid2 == 0) {
+ 		ASSERT_EQ(ioctl(listener, SECCOMP_IOCTL_NOTIF_RECV, &req), 0);
+@@ -3405,7 +3407,7 @@ TEST(user_notification_sibling_pid_ns)
+ 		 * The pid should be 0, i.e. the task is in some namespace that
+ 		 * we can't "see".
+ 		 */
+-		ASSERT_EQ(req.pid, 0);
++		EXPECT_EQ(req.pid, 0);
+ 
+ 		resp.id = req.id;
+ 		resp.error = 0;
+@@ -3435,14 +3437,15 @@ TEST(user_notification_fault_recv)
+ 
+ 	ASSERT_EQ(unshare(CLONE_NEWUSER), 0);
+ 
+-	listener = user_trap_syscall(__NR_getpid, SECCOMP_FILTER_FLAG_NEW_LISTENER);
++	listener = user_trap_syscall(__NR_getppid,
++				     SECCOMP_FILTER_FLAG_NEW_LISTENER);
+ 	ASSERT_GE(listener, 0);
+ 
+ 	pid = fork();
+ 	ASSERT_GE(pid, 0);
+ 
+ 	if (pid == 0)
+-		exit(syscall(__NR_getpid) != USER_NOTIF_MAGIC);
++		exit(syscall(__NR_getppid) != USER_NOTIF_MAGIC);
+ 
+ 	/* Do a bad recv() */
+ 	EXPECT_EQ(ioctl(listener, SECCOMP_IOCTL_NOTIF_RECV, NULL), -1);


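Several of the 5.1.3 fixes above (packet_init(), dsa_init_module(), ioctl_dtprop()) apply the same idiom: converting an init path that ignored registration failures into a goto-based unwind ladder that rolls back exactly the steps that succeeded. Below is a minimal compilable sketch of that idiom; register_a/b/c and their unregister counterparts are placeholders, not real kernel APIs, and register_b() is rigged to fail so the rollback path is exercised.

#include <stdio.h>

/* Toy registration steps; register_b() simulates a failure. */
static int register_a(void)	{ return 0; }
static void unregister_a(void)	{ }
static int register_b(void)	{ return -1; }
static void unregister_b(void)	{ }
static int register_c(void)	{ return 0; }

static int init_sketch(void)
{
	int rc;

	rc = register_a();
	if (rc)
		goto out;
	rc = register_b();
	if (rc)
		goto out_a;
	rc = register_c();
	if (rc)
		goto out_b;
	return 0;

out_b:	/* unwind in reverse order of setup */
	unregister_b();
out_a:
	unregister_a();
out:
	return rc;
}

int main(void)
{
	printf("init: %d\n", init_sketch());	/* prints "init: -1" */
	return 0;
}

The key property, visible in the af_packet fix, is that each failure label undoes only the registrations made before it, so a partially initialized module never leaves stale notifiers or socket families behind.
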

* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-05-22 11:07 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2019-05-22 11:07 UTC (permalink / raw
  To: gentoo-commits

commit:     7ee0edffbb2ddbea4dfb18101618c5ecc95fb0c9
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 22 11:07:00 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 22 11:07:00 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7ee0edff

Linux patch 5.1.4

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1003_linux-5.1.4.patch | 5966 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5970 insertions(+)

diff --git a/0000_README b/0000_README
index 781e82e..7dd0866 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1002_linux-5.1.3.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.1.3
 
+Patch:  1003_linux-5.1.4.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.1.4
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1003_linux-5.1.4.patch b/1003_linux-5.1.4.patch
new file mode 100644
index 0000000..7803bfb
--- /dev/null
+++ b/1003_linux-5.1.4.patch
@@ -0,0 +1,5966 @@
+diff --git a/Documentation/devicetree/bindings/mmc/mmc.txt b/Documentation/devicetree/bindings/mmc/mmc.txt
+index cdbcfd3a4ff2..c269dbe384fe 100644
+--- a/Documentation/devicetree/bindings/mmc/mmc.txt
++++ b/Documentation/devicetree/bindings/mmc/mmc.txt
+@@ -64,6 +64,8 @@ Optional properties:
+   whether pwrseq-simple is used. Default to 10ms if no available.
+ - supports-cqe : The presence of this property indicates that the corresponding
+   MMC host controller supports HW command queue feature.
++- disable-cqe-dcmd: This property indicates that the MMC controller's command
++  queue engine (CQE) does not support direct commands (DCMDs).
+ 
+ *NOTE* on CD and WP polarity. To use common for all SD/MMC host controllers line
+ polarity properties, we have to fix the meaning of the "normal" and "inverted"
+diff --git a/Documentation/x86/mds.rst b/Documentation/x86/mds.rst
+index 534e9baa4e1d..5d4330be200f 100644
+--- a/Documentation/x86/mds.rst
++++ b/Documentation/x86/mds.rst
+@@ -142,45 +142,13 @@ Mitigation points
+    mds_user_clear.
+ 
+    The mitigation is invoked in prepare_exit_to_usermode() which covers
+-   most of the kernel to user space transitions. There are a few exceptions
+-   which are not invoking prepare_exit_to_usermode() on return to user
+-   space. These exceptions use the paranoid exit code.
++   all but one of the kernel to user space transitions.  The exception
++   is when we return from a Non Maskable Interrupt (NMI), which is
++   handled directly in do_nmi().
+ 
+-   - Non Maskable Interrupt (NMI):
+-
+-     Access to sensible data like keys, credentials in the NMI context is
+-     mostly theoretical: The CPU can do prefetching or execute a
+-     misspeculated code path and thereby fetching data which might end up
+-     leaking through a buffer.
+-
+-     But for mounting other attacks the kernel stack address of the task is
+-     already valuable information. So in full mitigation mode, the NMI is
+-     mitigated on the return from do_nmi() to provide almost complete
+-     coverage.
+-
+-   - Double fault (#DF):
+-
+-     A double fault is usually fatal, but the ESPFIX workaround, which can
+-     be triggered from user space through modify_ldt(2) is a recoverable
+-     double fault. #DF uses the paranoid exit path, so explicit mitigation
+-     in the double fault handler is required.
+-
+-   - Machine Check Exception (#MC):
+-
+-     Another corner case is a #MC which hits between the CPU buffer clear
+-     invocation and the actual return to user. As this still is in kernel
+-     space it takes the paranoid exit path which does not clear the CPU
+-     buffers. So the #MC handler repopulates the buffers to some
+-     extent. Machine checks are not reliably controllable and the window is
+-     extremly small so mitigation would just tick a checkbox that this
+-     theoretical corner case is covered. To keep the amount of special
+-     cases small, ignore #MC.
+-
+-   - Debug Exception (#DB):
+-
+-     This takes the paranoid exit path only when the INT1 breakpoint is in
+-     kernel space. #DB on a user space address takes the regular exit path,
+-     so no extra mitigation required.
++   (The reason that NMI is special is that prepare_exit_to_usermode() can
++    enable IRQs.  In NMI context, NMIs are blocked, and we don't want to
++    enable IRQs with NMIs blocked.)
+ 
+ 
+ 2. C-State transition
+diff --git a/Makefile b/Makefile
+index f6c763aff4f3..acab93537f63 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+@@ -636,7 +636,7 @@ ifeq ($(may-sync-config),1)
+ # Read in dependencies to all Kconfig* files, make sure to run syncconfig if
+ # changes are detected. This should be included after arch/$(SRCARCH)/Makefile
+ # because some architectures define CROSS_COMPILE there.
+--include include/config/auto.conf.cmd
++include include/config/auto.conf.cmd
+ 
+ $(KCONFIG_CONFIG):
+ 	@echo >&2 '***'
+diff --git a/arch/arm/boot/dts/exynos5260.dtsi b/arch/arm/boot/dts/exynos5260.dtsi
+index 55167850619c..33a085ffc447 100644
+--- a/arch/arm/boot/dts/exynos5260.dtsi
++++ b/arch/arm/boot/dts/exynos5260.dtsi
+@@ -223,7 +223,7 @@
+ 			wakeup-interrupt-controller {
+ 				compatible = "samsung,exynos4210-wakeup-eint";
+ 				interrupt-parent = <&gic>;
+-				interrupts = <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>;
++				interrupts = <GIC_SPI 48 IRQ_TYPE_LEVEL_HIGH>;
+ 			};
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/exynos5422-odroidxu3-audio.dtsi b/arch/arm/boot/dts/exynos5422-odroidxu3-audio.dtsi
+index 51a843bd65ed..c3c2d85267da 100644
+--- a/arch/arm/boot/dts/exynos5422-odroidxu3-audio.dtsi
++++ b/arch/arm/boot/dts/exynos5422-odroidxu3-audio.dtsi
+@@ -22,11 +22,12 @@
+ 			"Headphone Jack", "HPL",
+ 			"Headphone Jack", "HPR",
+ 			"Headphone Jack", "MICBIAS",
+-			"IN1", "Headphone Jack",
++			"IN12", "Headphone Jack",
+ 			"Speakers", "SPKL",
+ 			"Speakers", "SPKR",
+ 			"I2S Playback", "Mixer DAI TX",
+-			"HiFi Playback", "Mixer DAI TX";
++			"HiFi Playback", "Mixer DAI TX",
++			"Mixer DAI RX", "HiFi Capture";
+ 
+ 		assigned-clocks = <&clock CLK_MOUT_EPLL>,
+ 				<&clock CLK_MOUT_MAU_EPLL>,
+diff --git a/arch/arm/boot/dts/imx6-logicpd-baseboard.dtsi b/arch/arm/boot/dts/imx6-logicpd-baseboard.dtsi
+index fb01fa6e4224..3cae139e6396 100644
+--- a/arch/arm/boot/dts/imx6-logicpd-baseboard.dtsi
++++ b/arch/arm/boot/dts/imx6-logicpd-baseboard.dtsi
+@@ -216,7 +216,7 @@
+ &fec {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_enet>;
+-	phy-mode = "rgmii";
++	phy-mode = "rgmii-id";
+ 	phy-reset-duration = <10>;
+ 	phy-reset-gpios = <&gpio1 24 GPIO_ACTIVE_LOW>;
+ 	phy-supply = <&reg_enet>;
+diff --git a/arch/arm/boot/dts/imx6dl-riotboard.dts b/arch/arm/boot/dts/imx6dl-riotboard.dts
+index 65c184bb8fb0..d9de49efa802 100644
+--- a/arch/arm/boot/dts/imx6dl-riotboard.dts
++++ b/arch/arm/boot/dts/imx6dl-riotboard.dts
+@@ -92,7 +92,7 @@
+ &fec {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_enet>;
+-	phy-mode = "rgmii";
++	phy-mode = "rgmii-id";
+ 	phy-reset-gpios = <&gpio3 31 GPIO_ACTIVE_LOW>;
+ 	interrupts-extended = <&gpio1 6 IRQ_TYPE_LEVEL_HIGH>,
+ 			      <&intc 0 119 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm/boot/dts/imx6q-ba16.dtsi b/arch/arm/boot/dts/imx6q-ba16.dtsi
+index adc9455e42c7..37c63402157b 100644
+--- a/arch/arm/boot/dts/imx6q-ba16.dtsi
++++ b/arch/arm/boot/dts/imx6q-ba16.dtsi
+@@ -171,7 +171,7 @@
+ &fec {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_enet>;
+-	phy-mode = "rgmii";
++	phy-mode = "rgmii-id";
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm/boot/dts/imx6q-marsboard.dts b/arch/arm/boot/dts/imx6q-marsboard.dts
+index d8ccb533b6b7..84b30bd6908f 100644
+--- a/arch/arm/boot/dts/imx6q-marsboard.dts
++++ b/arch/arm/boot/dts/imx6q-marsboard.dts
+@@ -110,7 +110,7 @@
+ &fec {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_enet>;
+-	phy-mode = "rgmii";
++	phy-mode = "rgmii-id";
+ 	phy-reset-gpios = <&gpio3 31 GPIO_ACTIVE_LOW>;
+ 	status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/imx6q-tbs2910.dts b/arch/arm/boot/dts/imx6q-tbs2910.dts
+index 2ce8399a10ba..bfff87ce2e1f 100644
+--- a/arch/arm/boot/dts/imx6q-tbs2910.dts
++++ b/arch/arm/boot/dts/imx6q-tbs2910.dts
+@@ -98,7 +98,7 @@
+ &fec {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_enet>;
+-	phy-mode = "rgmii";
++	phy-mode = "rgmii-id";
+ 	phy-reset-gpios = <&gpio1 25 GPIO_ACTIVE_LOW>;
+ 	status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/imx6qdl-apf6.dtsi b/arch/arm/boot/dts/imx6qdl-apf6.dtsi
+index 1ebf29f43a24..4738c3c1ab50 100644
+--- a/arch/arm/boot/dts/imx6qdl-apf6.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-apf6.dtsi
+@@ -51,7 +51,7 @@
+ &fec {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_enet>;
+-	phy-mode = "rgmii";
++	phy-mode = "rgmii-id";
+ 	phy-reset-duration = <10>;
+ 	phy-reset-gpios = <&gpio1 24 GPIO_ACTIVE_LOW>;
+ 	status = "okay";
+diff --git a/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi b/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
+index 1280de50a984..f3404dd10537 100644
+--- a/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
+@@ -292,7 +292,7 @@
+ &fec {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_enet>;
+-	phy-mode = "rgmii";
++	phy-mode = "rgmii-id";
+ 	interrupts-extended = <&gpio1 6 IRQ_TYPE_LEVEL_HIGH>,
+ 			      <&intc 0 119 IRQ_TYPE_LEVEL_HIGH>;
+ 	fsl,err006687-workaround-present;
+diff --git a/arch/arm/boot/dts/imx6qdl-sabresd.dtsi b/arch/arm/boot/dts/imx6qdl-sabresd.dtsi
+index a0705066ccba..185fb17a3500 100644
+--- a/arch/arm/boot/dts/imx6qdl-sabresd.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-sabresd.dtsi
+@@ -202,7 +202,7 @@
+ &fec {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_enet>;
+-	phy-mode = "rgmii";
++	phy-mode = "rgmii-id";
+ 	phy-reset-gpios = <&gpio1 25 GPIO_ACTIVE_LOW>;
+ 	status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/imx6qdl-sr-som.dtsi b/arch/arm/boot/dts/imx6qdl-sr-som.dtsi
+index 4ccb7afc4b35..6d7f6b9035bc 100644
+--- a/arch/arm/boot/dts/imx6qdl-sr-som.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-sr-som.dtsi
+@@ -53,7 +53,7 @@
+ &fec {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_microsom_enet_ar8035>;
+-	phy-mode = "rgmii";
++	phy-mode = "rgmii-id";
+ 	phy-reset-duration = <2>;
+ 	phy-reset-gpios = <&gpio4 15 GPIO_ACTIVE_LOW>;
+ 	status = "okay";
+diff --git a/arch/arm/boot/dts/imx6qdl-wandboard.dtsi b/arch/arm/boot/dts/imx6qdl-wandboard.dtsi
+index b7d5fb421404..50d9a989e06a 100644
+--- a/arch/arm/boot/dts/imx6qdl-wandboard.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-wandboard.dtsi
+@@ -224,7 +224,7 @@
+ &fec {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_enet>;
+-	phy-mode = "rgmii";
++	phy-mode = "rgmii-id";
+ 	phy-reset-gpios = <&gpio3 29 GPIO_ACTIVE_LOW>;
+ 	interrupts-extended = <&gpio1 6 IRQ_TYPE_LEVEL_HIGH>,
+ 			      <&intc 0 119 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm/boot/dts/imx6sx-sabreauto.dts b/arch/arm/boot/dts/imx6sx-sabreauto.dts
+index b0ee324afe58..315044ccd65f 100644
+--- a/arch/arm/boot/dts/imx6sx-sabreauto.dts
++++ b/arch/arm/boot/dts/imx6sx-sabreauto.dts
+@@ -75,7 +75,7 @@
+ &fec1 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_enet1>;
+-	phy-mode = "rgmii";
++	phy-mode = "rgmii-id";
+ 	phy-handle = <&ethphy1>;
+ 	fsl,magic-packet;
+ 	status = "okay";
+diff --git a/arch/arm/boot/dts/imx6sx-sdb.dtsi b/arch/arm/boot/dts/imx6sx-sdb.dtsi
+index 08ede56c3f10..f6972deb5e39 100644
+--- a/arch/arm/boot/dts/imx6sx-sdb.dtsi
++++ b/arch/arm/boot/dts/imx6sx-sdb.dtsi
+@@ -191,7 +191,7 @@
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_enet1>;
+ 	phy-supply = <&reg_enet_3v3>;
+-	phy-mode = "rgmii";
++	phy-mode = "rgmii-id";
+ 	phy-handle = <&ethphy1>;
+ 	phy-reset-gpios = <&gpio2 7 GPIO_ACTIVE_LOW>;
+ 	status = "okay";
+diff --git a/arch/arm/boot/dts/imx7d-pico.dtsi b/arch/arm/boot/dts/imx7d-pico.dtsi
+index 3fd595a71202..6f50ebf31a0a 100644
+--- a/arch/arm/boot/dts/imx7d-pico.dtsi
++++ b/arch/arm/boot/dts/imx7d-pico.dtsi
+@@ -92,7 +92,7 @@
+ 			  <&clks IMX7D_ENET1_TIME_ROOT_CLK>;
+ 	assigned-clock-parents = <&clks IMX7D_PLL_ENET_MAIN_100M_CLK>;
+ 	assigned-clock-rates = <0>, <100000000>;
+-	phy-mode = "rgmii";
++	phy-mode = "rgmii-id";
+ 	phy-handle = <&ethphy0>;
+ 	fsl,magic-packet;
+ 	phy-reset-gpios = <&gpio6 11 GPIO_ACTIVE_LOW>;
+diff --git a/arch/arm/boot/dts/qcom-ipq4019.dtsi b/arch/arm/boot/dts/qcom-ipq4019.dtsi
+index 9e75f97770ce..1008dfbcb972 100644
+--- a/arch/arm/boot/dts/qcom-ipq4019.dtsi
++++ b/arch/arm/boot/dts/qcom-ipq4019.dtsi
+@@ -400,8 +400,8 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x81000000 0 0x40200000 0x40200000 0 0x00100000
+-				  0x82000000 0 0x40300000 0x40300000 0 0x400000>;
++			ranges = <0x81000000 0 0x40200000 0x40200000 0 0x00100000>,
++				 <0x82000000 0 0x40300000 0x40300000 0 0x00d00000>;
+ 
+ 			interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi";
+diff --git a/arch/arm/crypto/aes-neonbs-glue.c b/arch/arm/crypto/aes-neonbs-glue.c
+index 07e31941dc67..617c2c99ebfb 100644
+--- a/arch/arm/crypto/aes-neonbs-glue.c
++++ b/arch/arm/crypto/aes-neonbs-glue.c
+@@ -278,6 +278,8 @@ static int __xts_crypt(struct skcipher_request *req,
+ 	int err;
+ 
+ 	err = skcipher_walk_virt(&walk, req, true);
++	if (err)
++		return err;
+ 
+ 	crypto_cipher_encrypt_one(ctx->tweak_tfm, walk.iv, walk.iv);
+ 
+diff --git a/arch/arm/mach-exynos/firmware.c b/arch/arm/mach-exynos/firmware.c
+index d602e3bf3f96..2eaf2dbb8e81 100644
+--- a/arch/arm/mach-exynos/firmware.c
++++ b/arch/arm/mach-exynos/firmware.c
+@@ -196,6 +196,7 @@ bool __init exynos_secure_firmware_available(void)
+ 		return false;
+ 
+ 	addr = of_get_address(nd, 0, NULL, NULL);
++	of_node_put(nd);
+ 	if (!addr) {
+ 		pr_err("%s: No address specified.\n", __func__);
+ 		return false;
+diff --git a/arch/arm/mach-exynos/suspend.c b/arch/arm/mach-exynos/suspend.c
+index 0850505ac78b..9afb0c69db34 100644
+--- a/arch/arm/mach-exynos/suspend.c
++++ b/arch/arm/mach-exynos/suspend.c
+@@ -639,8 +639,10 @@ void __init exynos_pm_init(void)
+ 
+ 	if (WARN_ON(!of_find_property(np, "interrupt-controller", NULL))) {
+ 		pr_warn("Outdated DT detected, suspend/resume will NOT work\n");
++		of_node_put(np);
+ 		return;
+ 	}
++	of_node_put(np);
+ 
+ 	pm_data = (const struct exynos_pm_data *) match->data;
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-rockpro64.dts b/arch/arm64/boot/dts/rockchip/rk3399-rockpro64.dts
+index 1f2394e0587d..d473ce290f0c 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-rockpro64.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-rockpro64.dts
+@@ -504,7 +504,7 @@
+ 	status = "okay";
+ 
+ 	bt656-supply = <&vcc1v8_dvp>;
+-	audio-supply = <&vcca1v8_codec>;
++	audio-supply = <&vcc_3v0>;
+ 	sdmmc-supply = <&vcc_sdio>;
+ 	gpio1830-supply = <&vcc_3v0>;
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399.dtsi b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+index db9d948c0b03..1a16d6ce3ea8 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+@@ -333,6 +333,7 @@
+ 		phys = <&emmc_phy>;
+ 		phy-names = "phy_arasan";
+ 		power-domains = <&power RK3399_PD_EMMC>;
++		disable-cqe-dcmd;
+ 		status = "disabled";
+ 	};
+ 
+diff --git a/arch/arm64/crypto/aes-neonbs-glue.c b/arch/arm64/crypto/aes-neonbs-glue.c
+index e7a95a566462..5cc248967387 100644
+--- a/arch/arm64/crypto/aes-neonbs-glue.c
++++ b/arch/arm64/crypto/aes-neonbs-glue.c
+@@ -304,6 +304,8 @@ static int __xts_crypt(struct skcipher_request *req,
+ 	int err;
+ 
+ 	err = skcipher_walk_virt(&walk, req, false);
++	if (err)
++		return err;
+ 
+ 	kernel_neon_begin();
+ 	neon_aes_ecb_encrypt(walk.iv, walk.iv, ctx->twkey, ctx->key.rounds, 1);
+diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
+index 791ad422c427..089b09286da7 100644
+--- a/arch/arm64/crypto/ghash-ce-glue.c
++++ b/arch/arm64/crypto/ghash-ce-glue.c
+@@ -473,9 +473,11 @@ static int gcm_encrypt(struct aead_request *req)
+ 		put_unaligned_be32(2, iv + GCM_IV_SIZE);
+ 
+ 		while (walk.nbytes >= (2 * AES_BLOCK_SIZE)) {
+-			int blocks = walk.nbytes / AES_BLOCK_SIZE;
++			const int blocks =
++				walk.nbytes / (2 * AES_BLOCK_SIZE) * 2;
+ 			u8 *dst = walk.dst.virt.addr;
+ 			u8 *src = walk.src.virt.addr;
++			int remaining = blocks;
+ 
+ 			do {
+ 				__aes_arm64_encrypt(ctx->aes_key.key_enc,
+@@ -485,9 +487,9 @@ static int gcm_encrypt(struct aead_request *req)
+ 
+ 				dst += AES_BLOCK_SIZE;
+ 				src += AES_BLOCK_SIZE;
+-			} while (--blocks > 0);
++			} while (--remaining > 0);
+ 
+-			ghash_do_update(walk.nbytes / AES_BLOCK_SIZE, dg,
++			ghash_do_update(blocks, dg,
+ 					walk.dst.virt.addr, &ctx->ghash_key,
+ 					NULL, pmull_ghash_update_p64);
+ 
+@@ -609,7 +611,7 @@ static int gcm_decrypt(struct aead_request *req)
+ 		put_unaligned_be32(2, iv + GCM_IV_SIZE);
+ 
+ 		while (walk.nbytes >= (2 * AES_BLOCK_SIZE)) {
+-			int blocks = walk.nbytes / AES_BLOCK_SIZE;
++			int blocks = walk.nbytes / (2 * AES_BLOCK_SIZE) * 2;
+ 			u8 *dst = walk.dst.virt.addr;
+ 			u8 *src = walk.src.virt.addr;
+ 
+diff --git a/arch/arm64/include/asm/arch_timer.h b/arch/arm64/include/asm/arch_timer.h
+index f2a234d6516c..93e07512b4b6 100644
+--- a/arch/arm64/include/asm/arch_timer.h
++++ b/arch/arm64/include/asm/arch_timer.h
+@@ -148,18 +148,47 @@ static inline void arch_timer_set_cntkctl(u32 cntkctl)
+ 	isb();
+ }
+ 
++/*
++ * Ensure that reads of the counter are treated the same as memory reads
++ * for the purposes of ordering by subsequent memory barriers.
++ *
++ * This insanity brought to you by speculative system register reads,
++ * out-of-order memory accesses, sequence locks and Thomas Gleixner.
++ *
++ * http://lists.infradead.org/pipermail/linux-arm-kernel/2019-February/631195.html
++ */
++#define arch_counter_enforce_ordering(val) do {				\
++	u64 tmp, _val = (val);						\
++									\
++	asm volatile(							\
++	"	eor	%0, %1, %1\n"					\
++	"	add	%0, sp, %0\n"					\
++	"	ldr	xzr, [%0]"					\
++	: "=r" (tmp) : "r" (_val));					\
++} while (0)
++
+ static inline u64 arch_counter_get_cntpct(void)
+ {
++	u64 cnt;
++
+ 	isb();
+-	return arch_timer_reg_read_stable(cntpct_el0);
++	cnt = arch_timer_reg_read_stable(cntpct_el0);
++	arch_counter_enforce_ordering(cnt);
++	return cnt;
+ }
+ 
+ static inline u64 arch_counter_get_cntvct(void)
+ {
++	u64 cnt;
++
+ 	isb();
+-	return arch_timer_reg_read_stable(cntvct_el0);
++	cnt = arch_timer_reg_read_stable(cntvct_el0);
++	arch_counter_enforce_ordering(cnt);
++	return cnt;
+ }
+ 
++#undef arch_counter_enforce_ordering
++
+ static inline int arch_timer_arch_init(void)
+ {
+ 	return 0;
+diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
+index 5d9ce62bdebd..228af56ece48 100644
+--- a/arch/arm64/include/asm/processor.h
++++ b/arch/arm64/include/asm/processor.h
+@@ -57,7 +57,15 @@
+ #define TASK_SIZE_64		(UL(1) << vabits_user)
+ 
+ #ifdef CONFIG_COMPAT
++#ifdef CONFIG_ARM64_64K_PAGES
++/*
++ * With CONFIG_ARM64_64K_PAGES enabled, the last page is occupied
++ * by the compat vectors page.
++ */
+ #define TASK_SIZE_32		UL(0x100000000)
++#else
++#define TASK_SIZE_32		(UL(0x100000000) - PAGE_SIZE)
++#endif /* CONFIG_ARM64_64K_PAGES */
+ #define TASK_SIZE		(test_thread_flag(TIF_32BIT) ? \
+ 				TASK_SIZE_32 : TASK_SIZE_64)
+ #define TASK_SIZE_OF(tsk)	(test_tsk_thread_flag(tsk, TIF_32BIT) ? \
+diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
+index d7bb6aefae0a..0fa6db521e44 100644
+--- a/arch/arm64/kernel/debug-monitors.c
++++ b/arch/arm64/kernel/debug-monitors.c
+@@ -135,6 +135,7 @@ NOKPROBE_SYMBOL(disable_debug_monitors);
+  */
+ static int clear_os_lock(unsigned int cpu)
+ {
++	write_sysreg(0, osdlr_el1);
+ 	write_sysreg(0, oslar_el1);
+ 	isb();
+ 	return 0;
+diff --git a/arch/arm64/kernel/sys.c b/arch/arm64/kernel/sys.c
+index b44065fb1616..6f91e8116514 100644
+--- a/arch/arm64/kernel/sys.c
++++ b/arch/arm64/kernel/sys.c
+@@ -31,7 +31,7 @@
+ 
+ SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len,
+ 		unsigned long, prot, unsigned long, flags,
+-		unsigned long, fd, off_t, off)
++		unsigned long, fd, unsigned long, off)
+ {
+ 	if (offset_in_page(off) != 0)
+ 		return -EINVAL;
+diff --git a/arch/arm64/kernel/vdso/gettimeofday.S b/arch/arm64/kernel/vdso/gettimeofday.S
+index c39872a7b03c..e8f60112818f 100644
+--- a/arch/arm64/kernel/vdso/gettimeofday.S
++++ b/arch/arm64/kernel/vdso/gettimeofday.S
+@@ -73,6 +73,13 @@ x_tmp		.req	x8
+ 	movn	x_tmp, #0xff00, lsl #48
+ 	and	\res, x_tmp, \res
+ 	mul	\res, \res, \mult
++	/*
++	 * Fake address dependency from the value computed from the counter
++	 * register to subsequent data page accesses so that the sequence
++	 * locking also orders the read of the counter.
++	 */
++	and	x_tmp, \res, xzr
++	add	vdso_data, vdso_data, x_tmp
+ 	.endm
+ 
+ 	/*
+@@ -147,12 +154,12 @@ ENTRY(__kernel_gettimeofday)
+ 	/* w11 = cs_mono_mult, w12 = cs_shift */
+ 	ldp	w11, w12, [vdso_data, #VDSO_CS_MONO_MULT]
+ 	ldp	x13, x14, [vdso_data, #VDSO_XTIME_CLK_SEC]
+-	seqcnt_check fail=1b
+ 
+ 	get_nsec_per_sec res=x9
+ 	lsl	x9, x9, x12
+ 
+ 	get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11
++	seqcnt_check fail=1b
+ 	get_ts_realtime res_sec=x10, res_nsec=x11, \
+ 		clock_nsec=x15, xtime_sec=x13, xtime_nsec=x14, nsec_to_sec=x9
+ 
+@@ -211,13 +218,13 @@ realtime:
+ 	/* w11 = cs_mono_mult, w12 = cs_shift */
+ 	ldp	w11, w12, [vdso_data, #VDSO_CS_MONO_MULT]
+ 	ldp	x13, x14, [vdso_data, #VDSO_XTIME_CLK_SEC]
+-	seqcnt_check fail=realtime
+ 
+ 	/* All computations are done with left-shifted nsecs. */
+ 	get_nsec_per_sec res=x9
+ 	lsl	x9, x9, x12
+ 
+ 	get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11
++	seqcnt_check fail=realtime
+ 	get_ts_realtime res_sec=x10, res_nsec=x11, \
+ 		clock_nsec=x15, xtime_sec=x13, xtime_nsec=x14, nsec_to_sec=x9
+ 	clock_gettime_return, shift=1
+@@ -231,7 +238,6 @@ monotonic:
+ 	ldp	w11, w12, [vdso_data, #VDSO_CS_MONO_MULT]
+ 	ldp	x13, x14, [vdso_data, #VDSO_XTIME_CLK_SEC]
+ 	ldp	x3, x4, [vdso_data, #VDSO_WTM_CLK_SEC]
+-	seqcnt_check fail=monotonic
+ 
+ 	/* All computations are done with left-shifted nsecs. */
+ 	lsl	x4, x4, x12
+@@ -239,6 +245,7 @@ monotonic:
+ 	lsl	x9, x9, x12
+ 
+ 	get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11
++	seqcnt_check fail=monotonic
+ 	get_ts_realtime res_sec=x10, res_nsec=x11, \
+ 		clock_nsec=x15, xtime_sec=x13, xtime_nsec=x14, nsec_to_sec=x9
+ 
+@@ -253,13 +260,13 @@ monotonic_raw:
+ 	/* w11 = cs_raw_mult, w12 = cs_shift */
+ 	ldp	w12, w11, [vdso_data, #VDSO_CS_SHIFT]
+ 	ldp	x13, x14, [vdso_data, #VDSO_RAW_TIME_SEC]
+-	seqcnt_check fail=monotonic_raw
+ 
+ 	/* All computations are done with left-shifted nsecs. */
+ 	get_nsec_per_sec res=x9
+ 	lsl	x9, x9, x12
+ 
+ 	get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11
++	seqcnt_check fail=monotonic_raw
+ 	get_ts_clock_raw res_sec=x10, res_nsec=x11, \
+ 		clock_nsec=x15, nsec_to_sec=x9
+ 
+diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
+index aa0817c9c4c3..fdd626d34274 100644
+--- a/arch/arm64/mm/proc.S
++++ b/arch/arm64/mm/proc.S
+@@ -65,24 +65,25 @@ ENTRY(cpu_do_suspend)
+ 	mrs	x2, tpidr_el0
+ 	mrs	x3, tpidrro_el0
+ 	mrs	x4, contextidr_el1
+-	mrs	x5, cpacr_el1
+-	mrs	x6, tcr_el1
+-	mrs	x7, vbar_el1
+-	mrs	x8, mdscr_el1
+-	mrs	x9, oslsr_el1
+-	mrs	x10, sctlr_el1
++	mrs	x5, osdlr_el1
++	mrs	x6, cpacr_el1
++	mrs	x7, tcr_el1
++	mrs	x8, vbar_el1
++	mrs	x9, mdscr_el1
++	mrs	x10, oslsr_el1
++	mrs	x11, sctlr_el1
+ alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
+-	mrs	x11, tpidr_el1
++	mrs	x12, tpidr_el1
+ alternative_else
+-	mrs	x11, tpidr_el2
++	mrs	x12, tpidr_el2
+ alternative_endif
+-	mrs	x12, sp_el0
++	mrs	x13, sp_el0
+ 	stp	x2, x3, [x0]
+-	stp	x4, xzr, [x0, #16]
+-	stp	x5, x6, [x0, #32]
+-	stp	x7, x8, [x0, #48]
+-	stp	x9, x10, [x0, #64]
+-	stp	x11, x12, [x0, #80]
++	stp	x4, x5, [x0, #16]
++	stp	x6, x7, [x0, #32]
++	stp	x8, x9, [x0, #48]
++	stp	x10, x11, [x0, #64]
++	stp	x12, x13, [x0, #80]
+ 	ret
+ ENDPROC(cpu_do_suspend)
+ 
+@@ -105,8 +106,8 @@ ENTRY(cpu_do_resume)
+ 	msr	cpacr_el1, x6
+ 
+ 	/* Don't change t0sz here, mask those bits when restoring */
+-	mrs	x5, tcr_el1
+-	bfi	x8, x5, TCR_T0SZ_OFFSET, TCR_TxSZ_WIDTH
++	mrs	x7, tcr_el1
++	bfi	x8, x7, TCR_T0SZ_OFFSET, TCR_TxSZ_WIDTH
+ 
+ 	msr	tcr_el1, x8
+ 	msr	vbar_el1, x9
+@@ -130,6 +131,7 @@ alternative_endif
+ 	/*
+ 	 * Restore oslsr_el1 by writing oslar_el1
+ 	 */
++	msr	osdlr_el1, x5
+ 	ubfx	x11, x11, #1, #1
+ 	msr	oslar_el1, x11
+ 	reset_pmuserenr_el0 x0			// Disable PMU access from EL0
+diff --git a/arch/arm64/net/bpf_jit.h b/arch/arm64/net/bpf_jit.h
+index 783de51a6c4e..6c881659ee8a 100644
+--- a/arch/arm64/net/bpf_jit.h
++++ b/arch/arm64/net/bpf_jit.h
+@@ -100,12 +100,6 @@
+ #define A64_STXR(sf, Rt, Rn, Rs) \
+ 	A64_LSX(sf, Rt, Rn, Rs, STORE_EX)
+ 
+-/* Prefetch */
+-#define A64_PRFM(Rn, type, target, policy) \
+-	aarch64_insn_gen_prefetch(Rn, AARCH64_INSN_PRFM_TYPE_##type, \
+-				  AARCH64_INSN_PRFM_TARGET_##target, \
+-				  AARCH64_INSN_PRFM_POLICY_##policy)
+-
+ /* Add/subtract (immediate) */
+ #define A64_ADDSUB_IMM(sf, Rd, Rn, imm12, type) \
+ 	aarch64_insn_gen_add_sub_imm(Rd, Rn, imm12, \
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index aaddc0217e73..a1420626fca2 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -762,7 +762,6 @@ emit_cond_jmp:
+ 	case BPF_STX | BPF_XADD | BPF_DW:
+ 		emit_a64_mov_i(1, tmp, off, ctx);
+ 		emit(A64_ADD(1, tmp, tmp, dst), ctx);
+-		emit(A64_PRFM(tmp, PST, L1, STRM), ctx);
+ 		emit(A64_LDXR(isdw, tmp2, tmp), ctx);
+ 		emit(A64_ADD(isdw, tmp2, tmp2, src), ctx);
+ 		emit(A64_STXR(isdw, tmp2, tmp, tmp3), ctx);
+diff --git a/arch/powerpc/mm/hash_low_32.S b/arch/powerpc/mm/hash_low_32.S
+index a6c491f18a04..5842cfa0394d 100644
+--- a/arch/powerpc/mm/hash_low_32.S
++++ b/arch/powerpc/mm/hash_low_32.S
+@@ -539,7 +539,8 @@ _GLOBAL(flush_hash_pages)
+ #ifdef CONFIG_SMP
+ 	lis	r9, (mmu_hash_lock - PAGE_OFFSET)@ha
+ 	addi	r9, r9, (mmu_hash_lock - PAGE_OFFSET)@l
+-	lwz	r8,TASK_CPU(r2)
++	tophys	(r8, r2)
++	lwz	r8, TASK_CPU(r8)
+ 	oris	r8,r8,9
+ 10:	lwarx	r0,0,r9
+ 	cmpi	0,r0,0
+diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
+index b6e3d0653002..3c01d5946f3b 100644
+--- a/arch/s390/Kconfig
++++ b/arch/s390/Kconfig
+@@ -149,6 +149,7 @@ config S390
+ 	select HAVE_FUNCTION_TRACER
+ 	select HAVE_FUTEX_CMPXCHG if FUTEX
+ 	select HAVE_GCC_PLUGINS
++	select HAVE_GENERIC_GUP
+ 	select HAVE_KERNEL_BZIP2
+ 	select HAVE_KERNEL_GZIP
+ 	select HAVE_KERNEL_LZ4
+diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
+index 76dc344edb8c..394bec31cb97 100644
+--- a/arch/s390/include/asm/pgtable.h
++++ b/arch/s390/include/asm/pgtable.h
+@@ -1204,42 +1204,79 @@ static inline pte_t mk_pte(struct page *page, pgprot_t pgprot)
+ #define pmd_index(address) (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1))
+ #define pte_index(address) (((address) >> PAGE_SHIFT) & (PTRS_PER_PTE-1))
+ 
+-#define pgd_offset(mm, address) ((mm)->pgd + pgd_index(address))
+-#define pgd_offset_k(address) pgd_offset(&init_mm, address)
+-#define pgd_offset_raw(pgd, addr) ((pgd) + pgd_index(addr))
+-
+ #define pmd_deref(pmd) (pmd_val(pmd) & _SEGMENT_ENTRY_ORIGIN)
+ #define pud_deref(pud) (pud_val(pud) & _REGION_ENTRY_ORIGIN)
+ #define p4d_deref(pud) (p4d_val(pud) & _REGION_ENTRY_ORIGIN)
+ #define pgd_deref(pgd) (pgd_val(pgd) & _REGION_ENTRY_ORIGIN)
+ 
+-static inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address)
++/*
++ * The pgd_offset function *always* adds the index for the top-level
++ * region/segment table. This is done to get a sequence like the
++ * following to work:
++ *	pgdp = pgd_offset(current->mm, addr);
++ *	pgd = READ_ONCE(*pgdp);
++ *	p4dp = p4d_offset(&pgd, addr);
++ *	...
++ * The subsequent p4d_offset, pud_offset and pmd_offset functions
++ * only add an index if they dereferenced the pointer.
++ */
++static inline pgd_t *pgd_offset_raw(pgd_t *pgd, unsigned long address)
+ {
+-	p4d_t *p4d = (p4d_t *) pgd;
++	unsigned long rste;
++	unsigned int shift;
+ 
+-	if ((pgd_val(*pgd) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R1)
+-		p4d = (p4d_t *) pgd_deref(*pgd);
+-	return p4d + p4d_index(address);
++	/* Get the first entry of the top level table */
++	rste = pgd_val(*pgd);
++	/* Pick up the shift from the table type of the first entry */
++	shift = ((rste & _REGION_ENTRY_TYPE_MASK) >> 2) * 11 + 20;
++	return pgd + ((address >> shift) & (PTRS_PER_PGD - 1));
+ }
+ 
+-static inline pud_t *pud_offset(p4d_t *p4d, unsigned long address)
++#define pgd_offset(mm, address) pgd_offset_raw(READ_ONCE((mm)->pgd), address)
++#define pgd_offset_k(address) pgd_offset(&init_mm, address)
++
++static inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address)
+ {
+-	pud_t *pud = (pud_t *) p4d;
++	if ((pgd_val(*pgd) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R1)
++		return (p4d_t *) pgd_deref(*pgd) + p4d_index(address);
++	return (p4d_t *) pgd;
++}
+ 
+-	if ((p4d_val(*p4d) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R2)
+-		pud = (pud_t *) p4d_deref(*p4d);
+-	return pud + pud_index(address);
++static inline pud_t *pud_offset(p4d_t *p4d, unsigned long address)
++{
++	if ((p4d_val(*p4d) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R2)
++		return (pud_t *) p4d_deref(*p4d) + pud_index(address);
++	return (pud_t *) p4d;
+ }
+ 
+ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
+ {
+-	pmd_t *pmd = (pmd_t *) pud;
++	if ((pud_val(*pud) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R3)
++		return (pmd_t *) pud_deref(*pud) + pmd_index(address);
++	return (pmd_t *) pud;
++}
+ 
+-	if ((pud_val(*pud) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R3)
+-		pmd = (pmd_t *) pud_deref(*pud);
+-	return pmd + pmd_index(address);
++static inline pte_t *pte_offset(pmd_t *pmd, unsigned long address)
++{
++	return (pte_t *) pmd_deref(*pmd) + pte_index(address);
+ }
+ 
++#define pte_offset_kernel(pmd, address) pte_offset(pmd, address)
++#define pte_offset_map(pmd, address) pte_offset_kernel(pmd, address)
++#define pte_unmap(pte) do { } while (0)
++
++static inline bool gup_fast_permitted(unsigned long start, int nr_pages)
++{
++	unsigned long len, end;
++
++	len = (unsigned long) nr_pages << PAGE_SHIFT;
++	end = start + len;
++	if (end < start)
++		return false;
++	return end <= current->mm->context.asce_limit;
++}
++#define gup_fast_permitted gup_fast_permitted
++
+ #define pfn_pte(pfn,pgprot) mk_pte_phys(__pa((pfn) << PAGE_SHIFT),(pgprot))
+ #define pte_pfn(x) (pte_val(x) >> PAGE_SHIFT)
+ #define pte_page(x) pfn_to_page(pte_pfn(x))
+@@ -1249,12 +1286,6 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
+ #define p4d_page(p4d) pfn_to_page(p4d_pfn(p4d))
+ #define pgd_page(pgd) pfn_to_page(pgd_pfn(pgd))
+ 
+-/* Find an entry in the lowest level page table.. */
+-#define pte_offset(pmd, addr) ((pte_t *) pmd_deref(*(pmd)) + pte_index(addr))
+-#define pte_offset_kernel(pmd, address) pte_offset(pmd,address)
+-#define pte_offset_map(pmd, address) pte_offset_kernel(pmd, address)
+-#define pte_unmap(pte) do { } while (0)
+-
+ static inline pmd_t pmd_wrprotect(pmd_t pmd)
+ {
+ 	pmd_val(pmd) &= ~_SEGMENT_ENTRY_WRITE;
+diff --git a/arch/s390/mm/Makefile b/arch/s390/mm/Makefile
+index f5880bfd1b0c..3175413186b9 100644
+--- a/arch/s390/mm/Makefile
++++ b/arch/s390/mm/Makefile
+@@ -4,7 +4,7 @@
+ #
+ 
+ obj-y		:= init.o fault.o extmem.o mmap.o vmem.o maccess.o
+-obj-y		+= page-states.o gup.o pageattr.o pgtable.o pgalloc.o
++obj-y		+= page-states.o pageattr.o pgtable.o pgalloc.o
+ 
+ obj-$(CONFIG_CMM)		+= cmm.o
+ obj-$(CONFIG_HUGETLB_PAGE)	+= hugetlbpage.o
+diff --git a/arch/s390/mm/gup.c b/arch/s390/mm/gup.c
+deleted file mode 100644
+index 2809d11c7a28..000000000000
+--- a/arch/s390/mm/gup.c
++++ /dev/null
+@@ -1,300 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- *  Lockless get_user_pages_fast for s390
+- *
+- *  Copyright IBM Corp. 2010
+- *  Author(s): Martin Schwidefsky <schwidefsky@de.ibm.com>
+- */
+-#include <linux/sched.h>
+-#include <linux/mm.h>
+-#include <linux/hugetlb.h>
+-#include <linux/vmstat.h>
+-#include <linux/pagemap.h>
+-#include <linux/rwsem.h>
+-#include <asm/pgtable.h>
+-
+-/*
+- * The performance critical leaf functions are made noinline otherwise gcc
+- * inlines everything into a single function which results in too much
+- * register pressure.
+- */
+-static inline int gup_pte_range(pmd_t *pmdp, pmd_t pmd, unsigned long addr,
+-		unsigned long end, int write, struct page **pages, int *nr)
+-{
+-	struct page *head, *page;
+-	unsigned long mask;
+-	pte_t *ptep, pte;
+-
+-	mask = (write ? _PAGE_PROTECT : 0) | _PAGE_INVALID | _PAGE_SPECIAL;
+-
+-	ptep = ((pte_t *) pmd_deref(pmd)) + pte_index(addr);
+-	do {
+-		pte = *ptep;
+-		barrier();
+-		/* Similar to the PMD case, NUMA hinting must take slow path */
+-		if (pte_protnone(pte))
+-			return 0;
+-		if ((pte_val(pte) & mask) != 0)
+-			return 0;
+-		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
+-		page = pte_page(pte);
+-		head = compound_head(page);
+-		if (!page_cache_get_speculative(head))
+-			return 0;
+-		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
+-			put_page(head);
+-			return 0;
+-		}
+-		VM_BUG_ON_PAGE(compound_head(page) != head, page);
+-		pages[*nr] = page;
+-		(*nr)++;
+-
+-	} while (ptep++, addr += PAGE_SIZE, addr != end);
+-
+-	return 1;
+-}
+-
+-static inline int gup_huge_pmd(pmd_t *pmdp, pmd_t pmd, unsigned long addr,
+-		unsigned long end, int write, struct page **pages, int *nr)
+-{
+-	struct page *head, *page;
+-	unsigned long mask;
+-	int refs;
+-
+-	mask = (write ? _SEGMENT_ENTRY_PROTECT : 0) | _SEGMENT_ENTRY_INVALID;
+-	if ((pmd_val(pmd) & mask) != 0)
+-		return 0;
+-	VM_BUG_ON(!pfn_valid(pmd_val(pmd) >> PAGE_SHIFT));
+-
+-	refs = 0;
+-	head = pmd_page(pmd);
+-	page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
+-	do {
+-		VM_BUG_ON(compound_head(page) != head);
+-		pages[*nr] = page;
+-		(*nr)++;
+-		page++;
+-		refs++;
+-	} while (addr += PAGE_SIZE, addr != end);
+-
+-	if (!page_cache_add_speculative(head, refs)) {
+-		*nr -= refs;
+-		return 0;
+-	}
+-
+-	if (unlikely(pmd_val(pmd) != pmd_val(*pmdp))) {
+-		*nr -= refs;
+-		while (refs--)
+-			put_page(head);
+-		return 0;
+-	}
+-
+-	return 1;
+-}
+-
+-
+-static inline int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr,
+-		unsigned long end, int write, struct page **pages, int *nr)
+-{
+-	unsigned long next;
+-	pmd_t *pmdp, pmd;
+-
+-	pmdp = (pmd_t *) pudp;
+-	if ((pud_val(pud) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R3)
+-		pmdp = (pmd_t *) pud_deref(pud);
+-	pmdp += pmd_index(addr);
+-	do {
+-		pmd = *pmdp;
+-		barrier();
+-		next = pmd_addr_end(addr, end);
+-		if (pmd_none(pmd))
+-			return 0;
+-		if (unlikely(pmd_large(pmd))) {
+-			/*
+-			 * NUMA hinting faults need to be handled in the GUP
+-			 * slowpath for accounting purposes and so that they
+-			 * can be serialised against THP migration.
+-			 */
+-			if (pmd_protnone(pmd))
+-				return 0;
+-			if (!gup_huge_pmd(pmdp, pmd, addr, next,
+-					  write, pages, nr))
+-				return 0;
+-		} else if (!gup_pte_range(pmdp, pmd, addr, next,
+-					  write, pages, nr))
+-			return 0;
+-	} while (pmdp++, addr = next, addr != end);
+-
+-	return 1;
+-}
+-
+-static int gup_huge_pud(pud_t *pudp, pud_t pud, unsigned long addr,
+-		unsigned long end, int write, struct page **pages, int *nr)
+-{
+-	struct page *head, *page;
+-	unsigned long mask;
+-	int refs;
+-
+-	mask = (write ? _REGION_ENTRY_PROTECT : 0) | _REGION_ENTRY_INVALID;
+-	if ((pud_val(pud) & mask) != 0)
+-		return 0;
+-	VM_BUG_ON(!pfn_valid(pud_pfn(pud)));
+-
+-	refs = 0;
+-	head = pud_page(pud);
+-	page = head + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
+-	do {
+-		VM_BUG_ON_PAGE(compound_head(page) != head, page);
+-		pages[*nr] = page;
+-		(*nr)++;
+-		page++;
+-		refs++;
+-	} while (addr += PAGE_SIZE, addr != end);
+-
+-	if (!page_cache_add_speculative(head, refs)) {
+-		*nr -= refs;
+-		return 0;
+-	}
+-
+-	if (unlikely(pud_val(pud) != pud_val(*pudp))) {
+-		*nr -= refs;
+-		while (refs--)
+-			put_page(head);
+-		return 0;
+-	}
+-
+-	return 1;
+-}
+-
+-static inline int gup_pud_range(p4d_t *p4dp, p4d_t p4d, unsigned long addr,
+-		unsigned long end, int write, struct page **pages, int *nr)
+-{
+-	unsigned long next;
+-	pud_t *pudp, pud;
+-
+-	pudp = (pud_t *) p4dp;
+-	if ((p4d_val(p4d) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R2)
+-		pudp = (pud_t *) p4d_deref(p4d);
+-	pudp += pud_index(addr);
+-	do {
+-		pud = *pudp;
+-		barrier();
+-		next = pud_addr_end(addr, end);
+-		if (pud_none(pud))
+-			return 0;
+-		if (unlikely(pud_large(pud))) {
+-			if (!gup_huge_pud(pudp, pud, addr, next, write, pages,
+-					  nr))
+-				return 0;
+-		} else if (!gup_pmd_range(pudp, pud, addr, next, write, pages,
+-					  nr))
+-			return 0;
+-	} while (pudp++, addr = next, addr != end);
+-
+-	return 1;
+-}
+-
+-static inline int gup_p4d_range(pgd_t *pgdp, pgd_t pgd, unsigned long addr,
+-		unsigned long end, int write, struct page **pages, int *nr)
+-{
+-	unsigned long next;
+-	p4d_t *p4dp, p4d;
+-
+-	p4dp = (p4d_t *) pgdp;
+-	if ((pgd_val(pgd) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R1)
+-		p4dp = (p4d_t *) pgd_deref(pgd);
+-	p4dp += p4d_index(addr);
+-	do {
+-		p4d = *p4dp;
+-		barrier();
+-		next = p4d_addr_end(addr, end);
+-		if (p4d_none(p4d))
+-			return 0;
+-		if (!gup_pud_range(p4dp, p4d, addr, next, write, pages, nr))
+-			return 0;
+-	} while (p4dp++, addr = next, addr != end);
+-
+-	return 1;
+-}
+-
+-/*
+- * Like get_user_pages_fast() except its IRQ-safe in that it won't fall
+- * back to the regular GUP.
+- * Note a difference with get_user_pages_fast: this always returns the
+- * number of pages pinned, 0 if no pages were pinned.
+- */
+-int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
+-			  struct page **pages)
+-{
+-	struct mm_struct *mm = current->mm;
+-	unsigned long addr, len, end;
+-	unsigned long next, flags;
+-	pgd_t *pgdp, pgd;
+-	int nr = 0;
+-
+-	start &= PAGE_MASK;
+-	addr = start;
+-	len = (unsigned long) nr_pages << PAGE_SHIFT;
+-	end = start + len;
+-	if ((end <= start) || (end > mm->context.asce_limit))
+-		return 0;
+-	/*
+-	 * local_irq_save() doesn't prevent pagetable teardown, but does
+-	 * prevent the pagetables from being freed on s390.
+-	 *
+-	 * So long as we atomically load page table pointers versus teardown,
+-	 * we can follow the address down to the the page and take a ref on it.
+-	 */
+-	local_irq_save(flags);
+-	pgdp = pgd_offset(mm, addr);
+-	do {
+-		pgd = *pgdp;
+-		barrier();
+-		next = pgd_addr_end(addr, end);
+-		if (pgd_none(pgd))
+-			break;
+-		if (!gup_p4d_range(pgdp, pgd, addr, next, write, pages, &nr))
+-			break;
+-	} while (pgdp++, addr = next, addr != end);
+-	local_irq_restore(flags);
+-
+-	return nr;
+-}
+-
+-/**
+- * get_user_pages_fast() - pin user pages in memory
+- * @start:	starting user address
+- * @nr_pages:	number of pages from start to pin
+- * @write:	whether pages will be written to
+- * @pages:	array that receives pointers to the pages pinned.
+- *		Should be at least nr_pages long.
+- *
+- * Attempt to pin user pages in memory without taking mm->mmap_sem.
+- * If not successful, it will fall back to taking the lock and
+- * calling get_user_pages().
+- *
+- * Returns number of pages pinned. This may be fewer than the number
+- * requested. If nr_pages is 0 or negative, returns 0. If no pages
+- * were pinned, returns -errno.
+- */
+-int get_user_pages_fast(unsigned long start, int nr_pages, int write,
+-			struct page **pages)
+-{
+-	int nr, ret;
+-
+-	might_sleep();
+-	start &= PAGE_MASK;
+-	nr = __get_user_pages_fast(start, nr_pages, write, pages);
+-	if (nr == nr_pages)
+-		return nr;
+-
+-	/* Try to get the remaining pages with get_user_pages */
+-	start += nr << PAGE_SHIFT;
+-	pages += nr;
+-	ret = get_user_pages_unlocked(start, nr_pages - nr, pages,
+-				      write ? FOLL_WRITE : 0);
+-	/* Have to be a bit careful with return values */
+-	if (nr > 0)
+-		ret = (ret < 0) ? nr : ret + nr;
+-	return ret;
+-}
+diff --git a/arch/x86/crypto/crct10dif-pclmul_glue.c b/arch/x86/crypto/crct10dif-pclmul_glue.c
+index 0e785c0b2354..b67cb4c7b2f6 100644
+--- a/arch/x86/crypto/crct10dif-pclmul_glue.c
++++ b/arch/x86/crypto/crct10dif-pclmul_glue.c
+@@ -70,15 +70,14 @@ static int chksum_final(struct shash_desc *desc, u8 *out)
+ 	return 0;
+ }
+ 
+-static int __chksum_finup(__u16 *crcp, const u8 *data, unsigned int len,
+-			u8 *out)
++static int __chksum_finup(__u16 crc, const u8 *data, unsigned int len, u8 *out)
+ {
+ 	if (len >= 16 && irq_fpu_usable()) {
+ 		kernel_fpu_begin();
+-		*(__u16 *)out = crc_t10dif_pcl(*crcp, data, len);
++		*(__u16 *)out = crc_t10dif_pcl(crc, data, len);
+ 		kernel_fpu_end();
+ 	} else
+-		*(__u16 *)out = crc_t10dif_generic(*crcp, data, len);
++		*(__u16 *)out = crc_t10dif_generic(crc, data, len);
+ 	return 0;
+ }
+ 
+@@ -87,15 +86,13 @@ static int chksum_finup(struct shash_desc *desc, const u8 *data,
+ {
+ 	struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);
+ 
+-	return __chksum_finup(&ctx->crc, data, len, out);
++	return __chksum_finup(ctx->crc, data, len, out);
+ }
+ 
+ static int chksum_digest(struct shash_desc *desc, const u8 *data,
+ 			 unsigned int length, u8 *out)
+ {
+-	struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);
+-
+-	return __chksum_finup(&ctx->crc, data, length, out);
++	return __chksum_finup(0, data, length, out);
+ }
+ 
+ static struct shash_alg alg = {
+diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
+index d309f30cf7af..5fc76b755510 100644
+--- a/arch/x86/entry/entry_32.S
++++ b/arch/x86/entry/entry_32.S
+@@ -650,6 +650,7 @@ ENTRY(__switch_to_asm)
+ 	pushl	%ebx
+ 	pushl	%edi
+ 	pushl	%esi
++	pushfl
+ 
+ 	/* switch stack */
+ 	movl	%esp, TASK_threadsp(%eax)
+@@ -672,6 +673,7 @@ ENTRY(__switch_to_asm)
+ #endif
+ 
+ 	/* restore callee-saved registers */
++	popfl
+ 	popl	%esi
+ 	popl	%edi
+ 	popl	%ebx
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index 1f0efdb7b629..4fe27b67d7e2 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -291,6 +291,7 @@ ENTRY(__switch_to_asm)
+ 	pushq	%r13
+ 	pushq	%r14
+ 	pushq	%r15
++	pushfq
+ 
+ 	/* switch stack */
+ 	movq	%rsp, TASK_threadsp(%rdi)
+@@ -313,6 +314,7 @@ ENTRY(__switch_to_asm)
+ #endif
+ 
+ 	/* restore callee-saved registers */
++	popfq
+ 	popq	%r15
+ 	popq	%r14
+ 	popq	%r13
+diff --git a/arch/x86/include/asm/switch_to.h b/arch/x86/include/asm/switch_to.h
+index 7cf1a270d891..157149d4129c 100644
+--- a/arch/x86/include/asm/switch_to.h
++++ b/arch/x86/include/asm/switch_to.h
+@@ -40,6 +40,7 @@ asmlinkage void ret_from_fork(void);
+  * order of the fields must match the code in __switch_to_asm().
+  */
+ struct inactive_task_frame {
++	unsigned long flags;
+ #ifdef CONFIG_X86_64
+ 	unsigned long r15;
+ 	unsigned long r14;
+diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
+index e64de5149e50..d904aafe6409 100644
+--- a/arch/x86/kernel/cpu/mce/amd.c
++++ b/arch/x86/kernel/cpu/mce/amd.c
+@@ -563,33 +563,59 @@ out:
+ 	return offset;
+ }
+ 
++bool amd_filter_mce(struct mce *m)
++{
++	enum smca_bank_types bank_type = smca_get_bank_type(m->bank);
++	struct cpuinfo_x86 *c = &boot_cpu_data;
++	u8 xec = (m->status >> 16) & 0x3F;
++
++	/* See Family 17h Models 10h-2Fh Erratum #1114. */
++	if (c->x86 == 0x17 &&
++	    c->x86_model >= 0x10 && c->x86_model <= 0x2F &&
++	    bank_type == SMCA_IF && xec == 10)
++		return true;
++
++	return false;
++}
++
+ /*
+- * Turn off MC4_MISC thresholding banks on all family 0x15 models since
+- * they're not supported there.
++ * Turn off thresholding banks for the following conditions:
++ * - MC4_MISC thresholding is not supported on Family 0x15.
++ * - Prevent possible spurious interrupts from the IF bank on Family 0x17
++ *   Models 0x10-0x2F due to Erratum #1114.
+  */
+-void disable_err_thresholding(struct cpuinfo_x86 *c)
++void disable_err_thresholding(struct cpuinfo_x86 *c, unsigned int bank)
+ {
+-	int i;
++	int i, num_msrs;
+ 	u64 hwcr;
+ 	bool need_toggle;
+-	u32 msrs[] = {
+-		0x00000413, /* MC4_MISC0 */
+-		0xc0000408, /* MC4_MISC1 */
+-	};
++	u32 msrs[NR_BLOCKS];
++
++	if (c->x86 == 0x15 && bank == 4) {
++		msrs[0] = 0x00000413; /* MC4_MISC0 */
++		msrs[1] = 0xc0000408; /* MC4_MISC1 */
++		num_msrs = 2;
++	} else if (c->x86 == 0x17 &&
++		   (c->x86_model >= 0x10 && c->x86_model <= 0x2F)) {
+ 
+-	if (c->x86 != 0x15)
++		if (smca_get_bank_type(bank) != SMCA_IF)
++			return;
++
++		msrs[0] = MSR_AMD64_SMCA_MCx_MISC(bank);
++		num_msrs = 1;
++	} else {
+ 		return;
++	}
+ 
+ 	rdmsrl(MSR_K7_HWCR, hwcr);
+ 
+ 	/* McStatusWrEn has to be set */
+ 	need_toggle = !(hwcr & BIT(18));
+-
+ 	if (need_toggle)
+ 		wrmsrl(MSR_K7_HWCR, hwcr | BIT(18));
+ 
+ 	/* Clear CntP bit safely */
+-	for (i = 0; i < ARRAY_SIZE(msrs); i++)
++	for (i = 0; i < num_msrs; i++)
+ 		msr_clear_bit(msrs[i], 62);
+ 
+ 	/* restore old settings */
+@@ -604,12 +630,12 @@ void mce_amd_feature_init(struct cpuinfo_x86 *c)
+ 	unsigned int bank, block, cpu = smp_processor_id();
+ 	int offset = -1;
+ 
+-	disable_err_thresholding(c);
+-
+ 	for (bank = 0; bank < mca_cfg.banks; ++bank) {
+ 		if (mce_flags.smca)
+ 			smca_configure(bank, cpu);
+ 
++		disable_err_thresholding(c, bank);
++
+ 		for (block = 0; block < NR_BLOCKS; ++block) {
+ 			address = get_block_address(address, low, high, bank, block);
+ 			if (!address)
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index b7fb541a4873..1a7084ba9a3b 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -1771,6 +1771,14 @@ static void __mcheck_cpu_init_timer(void)
+ 	mce_start_timer(t);
+ }
+ 
++bool filter_mce(struct mce *m)
++{
++	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
++		return amd_filter_mce(m);
++
++	return false;
++}
++
+ /* Handle unconfigured int18 (should never happen) */
+ static void unexpected_machine_check(struct pt_regs *regs, long error_code)
+ {
+diff --git a/arch/x86/kernel/cpu/mce/genpool.c b/arch/x86/kernel/cpu/mce/genpool.c
+index 3395549c51d3..64d1d5a00f39 100644
+--- a/arch/x86/kernel/cpu/mce/genpool.c
++++ b/arch/x86/kernel/cpu/mce/genpool.c
+@@ -99,6 +99,9 @@ int mce_gen_pool_add(struct mce *mce)
+ {
+ 	struct mce_evt_llist *node;
+ 
++	if (filter_mce(mce))
++		return -EINVAL;
++
+ 	if (!mce_evt_pool)
+ 		return -EINVAL;
+ 
+diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
+index af5eab1e65e2..a34b55baa7aa 100644
+--- a/arch/x86/kernel/cpu/mce/internal.h
++++ b/arch/x86/kernel/cpu/mce/internal.h
+@@ -173,4 +173,13 @@ struct mca_msr_regs {
+ 
+ extern struct mca_msr_regs msr_ops;
+ 
++/* Decide whether to add MCE record to MCE event pool or filter it out. */
++extern bool filter_mce(struct mce *m);
++
++#ifdef CONFIG_X86_MCE_AMD
++extern bool amd_filter_mce(struct mce *m);
++#else
++static inline bool amd_filter_mce(struct mce *m)			{ return false; };
++#endif
++
+ #endif /* __X86_MCE_INTERNAL_H__ */
+diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
+index e471d8e6f0b2..70933193878c 100644
+--- a/arch/x86/kernel/process_32.c
++++ b/arch/x86/kernel/process_32.c
+@@ -127,6 +127,13 @@ int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
+ 	struct task_struct *tsk;
+ 	int err;
+ 
++	/*
++	 * For a new task use the RESET flags value since there is no before.
++	 * All the status flags are zero; DF and all the system flags must also
++	 * be 0, specifically IF must be 0 because we context switch to the new
++	 * task with interrupts disabled.
++	 */
++	frame->flags = X86_EFLAGS_FIXED;
+ 	frame->bp = 0;
+ 	frame->ret_addr = (unsigned long) ret_from_fork;
+ 	p->thread.sp = (unsigned long) fork_frame;
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index 6a62f4af9fcf..026a43be9bd1 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -392,6 +392,14 @@ int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
+ 	childregs = task_pt_regs(p);
+ 	fork_frame = container_of(childregs, struct fork_frame, regs);
+ 	frame = &fork_frame->frame;
++
++	/*
++	 * For a new task use the RESET flags value since there is no before.
++	 * All the status flags are zero; DF and all the system flags must also
++	 * be 0, specifically IF must be 0 because we context switch to the new
++	 * task with interrupts disabled.
++	 */
++	frame->flags = X86_EFLAGS_FIXED;
+ 	frame->bp = 0;
+ 	frame->ret_addr = (unsigned long) ret_from_fork;
+ 	p->thread.sp = (unsigned long) fork_frame;
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 07c7bbe79e8b..d26f9e9c3d83 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -58,7 +58,6 @@
+ #include <asm/alternative.h>
+ #include <asm/fpu/xstate.h>
+ #include <asm/trace/mpx.h>
+-#include <asm/nospec-branch.h>
+ #include <asm/mpx.h>
+ #include <asm/vm86.h>
+ #include <asm/umip.h>
+@@ -368,13 +367,6 @@ dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code)
+ 		regs->ip = (unsigned long)general_protection;
+ 		regs->sp = (unsigned long)&gpregs->orig_ax;
+ 
+-		/*
+-		 * This situation can be triggered by userspace via
+-		 * modify_ldt(2) and the return does not take the regular
+-		 * user space exit, so a CPU buffer clear is required when
+-		 * MDS mitigation is enabled.
+-		 */
+-		mds_user_clear_cpu_buffers();
+ 		return;
+ 	}
+ #endif
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index bd13fdddbdc4..ea188545a15c 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -1454,7 +1454,7 @@ static void apic_timer_expired(struct kvm_lapic *apic)
+ 	if (swait_active(q))
+ 		swake_up_one(q);
+ 
+-	if (apic_lvtt_tscdeadline(apic))
++	if (apic_lvtt_tscdeadline(apic) || ktimer->hv_timer_in_use)
+ 		ktimer->expired_tscdeadline = ktimer->tscdeadline;
+ }
+ 
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 194c6ec11f4c..2b4a3d32c511 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -6856,30 +6856,6 @@ static void nested_vmx_entry_exit_ctls_update(struct kvm_vcpu *vcpu)
+ 	}
+ }
+ 
+-static bool guest_cpuid_has_pmu(struct kvm_vcpu *vcpu)
+-{
+-	struct kvm_cpuid_entry2 *entry;
+-	union cpuid10_eax eax;
+-
+-	entry = kvm_find_cpuid_entry(vcpu, 0xa, 0);
+-	if (!entry)
+-		return false;
+-
+-	eax.full = entry->eax;
+-	return (eax.split.version_id > 0);
+-}
+-
+-static void nested_vmx_procbased_ctls_update(struct kvm_vcpu *vcpu)
+-{
+-	struct vcpu_vmx *vmx = to_vmx(vcpu);
+-	bool pmu_enabled = guest_cpuid_has_pmu(vcpu);
+-
+-	if (pmu_enabled)
+-		vmx->nested.msrs.procbased_ctls_high |= CPU_BASED_RDPMC_EXITING;
+-	else
+-		vmx->nested.msrs.procbased_ctls_high &= ~CPU_BASED_RDPMC_EXITING;
+-}
+-
+ static void update_intel_pt_cfg(struct kvm_vcpu *vcpu)
+ {
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+@@ -6968,7 +6944,6 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
+ 	if (nested_vmx_allowed(vcpu)) {
+ 		nested_vmx_cr_fixed1_bits_update(vcpu);
+ 		nested_vmx_entry_exit_ctls_update(vcpu);
+-		nested_vmx_procbased_ctls_update(vcpu);
+ 	}
+ 
+ 	if (boot_cpu_has(X86_FEATURE_INTEL_PT) &&
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index b5edc8e3ce1d..fed1ab6a825c 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1262,31 +1262,42 @@ static int do_get_msr_feature(struct kvm_vcpu *vcpu, unsigned index, u64 *data)
+ 	return 0;
+ }
+ 
+-bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer)
++static bool __kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer)
+ {
+-	if (efer & efer_reserved_bits)
+-		return false;
+-
+ 	if (efer & EFER_FFXSR && !guest_cpuid_has(vcpu, X86_FEATURE_FXSR_OPT))
+-			return false;
++		return false;
+ 
+ 	if (efer & EFER_SVME && !guest_cpuid_has(vcpu, X86_FEATURE_SVM))
+-			return false;
++		return false;
+ 
+ 	return true;
++
++}
++bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer)
++{
++	if (efer & efer_reserved_bits)
++		return false;
++
++	return __kvm_valid_efer(vcpu, efer);
+ }
+ EXPORT_SYMBOL_GPL(kvm_valid_efer);
+ 
+-static int set_efer(struct kvm_vcpu *vcpu, u64 efer)
++static int set_efer(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ {
+ 	u64 old_efer = vcpu->arch.efer;
++	u64 efer = msr_info->data;
+ 
+-	if (!kvm_valid_efer(vcpu, efer))
+-		return 1;
++	if (efer & efer_reserved_bits)
++		return false;
+ 
+-	if (is_paging(vcpu)
+-	    && (vcpu->arch.efer & EFER_LME) != (efer & EFER_LME))
+-		return 1;
++	if (!msr_info->host_initiated) {
++		if (!__kvm_valid_efer(vcpu, efer))
++			return 1;
++
++		if (is_paging(vcpu) &&
++		    (vcpu->arch.efer & EFER_LME) != (efer & EFER_LME))
++			return 1;
++	}
+ 
+ 	efer &= ~EFER_LMA;
+ 	efer |= vcpu->arch.efer & EFER_LMA;
+@@ -2456,7 +2467,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		vcpu->arch.arch_capabilities = data;
+ 		break;
+ 	case MSR_EFER:
+-		return set_efer(vcpu, data);
++		return set_efer(vcpu, msr_info);
+ 	case MSR_K7_HWCR:
+ 		data &= ~(u64)0x40;	/* ignore flush filter disable */
+ 		data &= ~(u64)0x100;	/* ignore ignne emulation enable */
+diff --git a/arch/x86/platform/pvh/enlighten.c b/arch/x86/platform/pvh/enlighten.c
+index 62f5c7045944..1861a2ba0f2b 100644
+--- a/arch/x86/platform/pvh/enlighten.c
++++ b/arch/x86/platform/pvh/enlighten.c
+@@ -44,8 +44,6 @@ void __init __weak mem_map_via_hcall(struct boot_params *ptr __maybe_unused)
+ 
+ static void __init init_pvh_bootparams(bool xen_guest)
+ {
+-	memset(&pvh_bootparams, 0, sizeof(pvh_bootparams));
+-
+ 	if ((pvh_start_info.version > 0) && (pvh_start_info.memmap_entries)) {
+ 		struct hvm_memmap_table_entry *ep;
+ 		int i;
+@@ -103,7 +101,7 @@ static void __init init_pvh_bootparams(bool xen_guest)
+  * If we are trying to boot a Xen PVH guest, it is expected that the kernel
+  * will have been configured to provide the required override for this routine.
+  */
+-void __init __weak xen_pvh_init(void)
++void __init __weak xen_pvh_init(struct boot_params *boot_params)
+ {
+ 	xen_raw_printk("Error: Missing xen PVH initialization\n");
+ 	BUG();
+@@ -112,7 +110,7 @@ void __init __weak xen_pvh_init(void)
+ static void hypervisor_specific_init(bool xen_guest)
+ {
+ 	if (xen_guest)
+-		xen_pvh_init();
++		xen_pvh_init(&pvh_bootparams);
+ }
+ 
+ /*
+@@ -131,6 +129,8 @@ void __init xen_prepare_pvh(void)
+ 		BUG();
+ 	}
+ 
++	memset(&pvh_bootparams, 0, sizeof(pvh_bootparams));
++
+ 	hypervisor_specific_init(xen_guest);
+ 
+ 	init_pvh_bootparams(xen_guest);
+diff --git a/arch/x86/xen/efi.c b/arch/x86/xen/efi.c
+index 1fbb629a9d78..0d3365cb64de 100644
+--- a/arch/x86/xen/efi.c
++++ b/arch/x86/xen/efi.c
+@@ -158,7 +158,7 @@ static enum efi_secureboot_mode xen_efi_get_secureboot(void)
+ 	return efi_secureboot_mode_unknown;
+ }
+ 
+-void __init xen_efi_init(void)
++void __init xen_efi_init(struct boot_params *boot_params)
+ {
+ 	efi_system_table_t *efi_systab_xen;
+ 
+@@ -167,12 +167,12 @@ void __init xen_efi_init(void)
+ 	if (efi_systab_xen == NULL)
+ 		return;
+ 
+-	strncpy((char *)&boot_params.efi_info.efi_loader_signature, "Xen",
+-			sizeof(boot_params.efi_info.efi_loader_signature));
+-	boot_params.efi_info.efi_systab = (__u32)__pa(efi_systab_xen);
+-	boot_params.efi_info.efi_systab_hi = (__u32)(__pa(efi_systab_xen) >> 32);
++	strncpy((char *)&boot_params->efi_info.efi_loader_signature, "Xen",
++			sizeof(boot_params->efi_info.efi_loader_signature));
++	boot_params->efi_info.efi_systab = (__u32)__pa(efi_systab_xen);
++	boot_params->efi_info.efi_systab_hi = (__u32)(__pa(efi_systab_xen) >> 32);
+ 
+-	boot_params.secure_boot = xen_efi_get_secureboot();
++	boot_params->secure_boot = xen_efi_get_secureboot();
+ 
+ 	set_bit(EFI_BOOT, &efi.flags);
+ 	set_bit(EFI_PARAVIRT, &efi.flags);
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index c54a493e139a..4722ba2966ac 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -1403,7 +1403,7 @@ asmlinkage __visible void __init xen_start_kernel(void)
+ 	/* We need this for printk timestamps */
+ 	xen_setup_runstate_info(0);
+ 
+-	xen_efi_init();
++	xen_efi_init(&boot_params);
+ 
+ 	/* Start the world */
+ #ifdef CONFIG_X86_32
+diff --git a/arch/x86/xen/enlighten_pvh.c b/arch/x86/xen/enlighten_pvh.c
+index 35b7599d2d0b..80a79db72fcf 100644
+--- a/arch/x86/xen/enlighten_pvh.c
++++ b/arch/x86/xen/enlighten_pvh.c
+@@ -13,6 +13,8 @@
+ 
+ #include <xen/interface/memory.h>
+ 
++#include "xen-ops.h"
++
+ /*
+  * PVH variables.
+  *
+@@ -21,17 +23,20 @@
+  */
+ bool xen_pvh __attribute__((section(".data"))) = 0;
+ 
+-void __init xen_pvh_init(void)
++void __init xen_pvh_init(struct boot_params *boot_params)
+ {
+ 	u32 msr;
+ 	u64 pfn;
+ 
+ 	xen_pvh = 1;
++	xen_domain_type = XEN_HVM_DOMAIN;
+ 	xen_start_flags = pvh_start_info.flags;
+ 
+ 	msr = cpuid_ebx(xen_cpuid_base() + 2);
+ 	pfn = __pa(hypercall_page);
+ 	wrmsr_safe(msr, (u32)pfn, (u32)(pfn >> 32));
++
++	xen_efi_init(boot_params);
+ }
+ 
+ void __init mem_map_via_hcall(struct boot_params *boot_params_p)
+diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
+index 0e60bd918695..2f111f47ba98 100644
+--- a/arch/x86/xen/xen-ops.h
++++ b/arch/x86/xen/xen-ops.h
+@@ -122,9 +122,9 @@ static inline void __init xen_init_vga(const struct dom0_vga_console_info *info,
+ void __init xen_init_apic(void);
+ 
+ #ifdef CONFIG_XEN_EFI
+-extern void xen_efi_init(void);
++extern void xen_efi_init(struct boot_params *boot_params);
+ #else
+-static inline void __init xen_efi_init(void)
++static inline void __init xen_efi_init(struct boot_params *boot_params)
+ {
+ }
+ #endif
+diff --git a/crypto/ccm.c b/crypto/ccm.c
+index 50df8f001c1c..f4caa149b9d2 100644
+--- a/crypto/ccm.c
++++ b/crypto/ccm.c
+@@ -458,7 +458,6 @@ static void crypto_ccm_free(struct aead_instance *inst)
+ 
+ static int crypto_ccm_create_common(struct crypto_template *tmpl,
+ 				    struct rtattr **tb,
+-				    const char *full_name,
+ 				    const char *ctr_name,
+ 				    const char *mac_name)
+ {
+@@ -486,7 +485,8 @@ static int crypto_ccm_create_common(struct crypto_template *tmpl,
+ 
+ 	mac = __crypto_hash_alg_common(mac_alg);
+ 	err = -EINVAL;
+-	if (mac->digestsize != 16)
++	if (strncmp(mac->base.cra_name, "cbcmac(", 7) != 0 ||
++	    mac->digestsize != 16)
+ 		goto out_put_mac;
+ 
+ 	inst = kzalloc(sizeof(*inst) + sizeof(*ictx), GFP_KERNEL);
+@@ -509,23 +509,27 @@ static int crypto_ccm_create_common(struct crypto_template *tmpl,
+ 
+ 	ctr = crypto_spawn_skcipher_alg(&ictx->ctr);
+ 
+-	/* Not a stream cipher? */
++	/* The skcipher algorithm must be CTR mode, using 16-byte blocks. */
+ 	err = -EINVAL;
+-	if (ctr->base.cra_blocksize != 1)
++	if (strncmp(ctr->base.cra_name, "ctr(", 4) != 0 ||
++	    crypto_skcipher_alg_ivsize(ctr) != 16 ||
++	    ctr->base.cra_blocksize != 1)
+ 		goto err_drop_ctr;
+ 
+-	/* We want the real thing! */
+-	if (crypto_skcipher_alg_ivsize(ctr) != 16)
++	/* ctr and cbcmac must use the same underlying block cipher. */
++	if (strcmp(ctr->base.cra_name + 4, mac->base.cra_name + 7) != 0)
+ 		goto err_drop_ctr;
+ 
+ 	err = -ENAMETOOLONG;
++	if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
++		     "ccm(%s", ctr->base.cra_name + 4) >= CRYPTO_MAX_ALG_NAME)
++		goto err_drop_ctr;
++
+ 	if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+ 		     "ccm_base(%s,%s)", ctr->base.cra_driver_name,
+ 		     mac->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
+ 		goto err_drop_ctr;
+ 
+-	memcpy(inst->alg.base.cra_name, full_name, CRYPTO_MAX_ALG_NAME);
+-
+ 	inst->alg.base.cra_flags = ctr->base.cra_flags & CRYPTO_ALG_ASYNC;
+ 	inst->alg.base.cra_priority = (mac->base.cra_priority +
+ 				       ctr->base.cra_priority) / 2;
+@@ -567,7 +571,6 @@ static int crypto_ccm_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	const char *cipher_name;
+ 	char ctr_name[CRYPTO_MAX_ALG_NAME];
+ 	char mac_name[CRYPTO_MAX_ALG_NAME];
+-	char full_name[CRYPTO_MAX_ALG_NAME];
+ 
+ 	cipher_name = crypto_attr_alg_name(tb[1]);
+ 	if (IS_ERR(cipher_name))
+@@ -581,35 +584,24 @@ static int crypto_ccm_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 		     cipher_name) >= CRYPTO_MAX_ALG_NAME)
+ 		return -ENAMETOOLONG;
+ 
+-	if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "ccm(%s)", cipher_name) >=
+-	    CRYPTO_MAX_ALG_NAME)
+-		return -ENAMETOOLONG;
+-
+-	return crypto_ccm_create_common(tmpl, tb, full_name, ctr_name,
+-					mac_name);
++	return crypto_ccm_create_common(tmpl, tb, ctr_name, mac_name);
+ }
+ 
+ static int crypto_ccm_base_create(struct crypto_template *tmpl,
+ 				  struct rtattr **tb)
+ {
+ 	const char *ctr_name;
+-	const char *cipher_name;
+-	char full_name[CRYPTO_MAX_ALG_NAME];
++	const char *mac_name;
+ 
+ 	ctr_name = crypto_attr_alg_name(tb[1]);
+ 	if (IS_ERR(ctr_name))
+ 		return PTR_ERR(ctr_name);
+ 
+-	cipher_name = crypto_attr_alg_name(tb[2]);
+-	if (IS_ERR(cipher_name))
+-		return PTR_ERR(cipher_name);
+-
+-	if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "ccm_base(%s,%s)",
+-		     ctr_name, cipher_name) >= CRYPTO_MAX_ALG_NAME)
+-		return -ENAMETOOLONG;
++	mac_name = crypto_attr_alg_name(tb[2]);
++	if (IS_ERR(mac_name))
++		return PTR_ERR(mac_name);
+ 
+-	return crypto_ccm_create_common(tmpl, tb, full_name, ctr_name,
+-					cipher_name);
++	return crypto_ccm_create_common(tmpl, tb, ctr_name, mac_name);
+ }
+ 
+ static int crypto_rfc4309_setkey(struct crypto_aead *parent, const u8 *key,
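
The ccm.c hunks above replace the caller-supplied full_name with a name derived from the ctr spawn, after verifying that the ctr and cbcmac templates wrap the same underlying block cipher; the gcm.c hunks further down apply the same idea with ghash. A minimal user-space sketch of that derivation, using stand-in buffers and no kernel APIs:

#include <stdio.h>
#include <string.h>

static int derive_ccm_name(char *out, size_t outlen,
			   const char *ctr_name, const char *mac_name)
{
	/* Both spawns must wrap the same block cipher. */
	if (strncmp(ctr_name, "ctr(", 4) != 0 ||
	    strncmp(mac_name, "cbcmac(", 7) != 0 ||
	    strcmp(ctr_name + 4, mac_name + 7) != 0)
		return -1;

	/* "ctr(aes)" -> "ccm(aes)": reuse the trailing ')' of ctr_name. */
	if ((size_t)snprintf(out, outlen, "ccm(%s", ctr_name + 4) >= outlen)
		return -1;
	return 0;
}

int main(void)
{
	char name[64];

	if (derive_ccm_name(name, sizeof(name), "ctr(aes)", "cbcmac(aes)") == 0)
		printf("%s\n", name);	/* prints "ccm(aes)" */
	return 0;
}
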
+diff --git a/crypto/chacha20poly1305.c b/crypto/chacha20poly1305.c
+index ed2e12e26dd8..279d816ab51d 100644
+--- a/crypto/chacha20poly1305.c
++++ b/crypto/chacha20poly1305.c
+@@ -645,8 +645,8 @@ static int chachapoly_create(struct crypto_template *tmpl, struct rtattr **tb,
+ 
+ 	err = -ENAMETOOLONG;
+ 	if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
+-		     "%s(%s,%s)", name, chacha_name,
+-		     poly_name) >= CRYPTO_MAX_ALG_NAME)
++		     "%s(%s,%s)", name, chacha->base.cra_name,
++		     poly->cra_name) >= CRYPTO_MAX_ALG_NAME)
+ 		goto out_drop_chacha;
+ 	if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+ 		     "%s(%s,%s)", name, chacha->base.cra_driver_name,
+diff --git a/crypto/chacha_generic.c b/crypto/chacha_generic.c
+index 35b583101f4f..90ec0ec1b4f7 100644
+--- a/crypto/chacha_generic.c
++++ b/crypto/chacha_generic.c
+@@ -52,7 +52,7 @@ static int chacha_stream_xor(struct skcipher_request *req,
+ 		unsigned int nbytes = walk.nbytes;
+ 
+ 		if (nbytes < walk.total)
+-			nbytes = round_down(nbytes, walk.stride);
++			nbytes = round_down(nbytes, CHACHA_BLOCK_SIZE);
+ 
+ 		chacha_docrypt(state, walk.dst.virt.addr, walk.src.virt.addr,
+ 			       nbytes, ctx->nrounds);
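
The chacha_generic.c hunk pins the rounding of non-final chunks to CHACHA_BLOCK_SIZE rather than walk.stride, since the keystream counter only advances in whole 64-byte blocks. A small sketch of the arithmetic (round_down here mirrors the kernel macro for power-of-two sizes; the byte counts are made up):

#include <stdio.h>

#define CHACHA_BLOCK_SIZE 64
/* same result as the kernel's round_down() for power-of-two sizes */
#define round_down(x, y) ((x) & ~((y) - 1))

int main(void)
{
	unsigned int nbytes = 200, total = 512;

	if (nbytes < total)	/* not the final chunk */
		nbytes = round_down(nbytes, CHACHA_BLOCK_SIZE);

	printf("%u\n", nbytes);	/* 192: three full 64-byte blocks */
	return 0;
}
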
+diff --git a/crypto/crct10dif_generic.c b/crypto/crct10dif_generic.c
+index 8e94e29dc6fc..d08048ae5552 100644
+--- a/crypto/crct10dif_generic.c
++++ b/crypto/crct10dif_generic.c
+@@ -65,10 +65,9 @@ static int chksum_final(struct shash_desc *desc, u8 *out)
+ 	return 0;
+ }
+ 
+-static int __chksum_finup(__u16 *crcp, const u8 *data, unsigned int len,
+-			u8 *out)
++static int __chksum_finup(__u16 crc, const u8 *data, unsigned int len, u8 *out)
+ {
+-	*(__u16 *)out = crc_t10dif_generic(*crcp, data, len);
++	*(__u16 *)out = crc_t10dif_generic(crc, data, len);
+ 	return 0;
+ }
+ 
+@@ -77,15 +76,13 @@ static int chksum_finup(struct shash_desc *desc, const u8 *data,
+ {
+ 	struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);
+ 
+-	return __chksum_finup(&ctx->crc, data, len, out);
++	return __chksum_finup(ctx->crc, data, len, out);
+ }
+ 
+ static int chksum_digest(struct shash_desc *desc, const u8 *data,
+ 			 unsigned int length, u8 *out)
+ {
+-	struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);
+-
+-	return __chksum_finup(&ctx->crc, data, length, out);
++	return __chksum_finup(0, data, length, out);
+ }
+ 
+ static struct shash_alg alg = {
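
The crct10dif_generic.c hunks make the initial CRC a plain value, which lets the one-shot digest path drop its per-request context entirely: digesting is just finup starting from zero. A self-contained sketch of that relationship, with a slow bitwise CRC16/T10-DIF standing in for the kernel's table-driven crc_t10dif_generic():

#include <stdio.h>

/* Bitwise CRC16/T10-DIF (poly 0x8bb7); slow but equivalent. */
static unsigned short crc_t10dif(unsigned short crc,
				 const unsigned char *data, unsigned int len)
{
	unsigned int i, j;

	for (i = 0; i < len; i++) {
		crc ^= (unsigned short)data[i] << 8;
		for (j = 0; j < 8; j++)
			crc = (crc & 0x8000) ? (crc << 1) ^ 0x8bb7 : crc << 1;
	}
	return crc;
}

static unsigned short chksum_finup(unsigned short crc,
				   const unsigned char *data, unsigned int len)
{
	return crc_t10dif(crc, data, len);	/* continue from 'crc' */
}

static unsigned short chksum_digest(const unsigned char *data, unsigned int len)
{
	return chksum_finup(0, data, len);	/* one-shot starts at zero */
}

int main(void)
{
	const unsigned char msg[] = "123456789";

	/* digest == finup over the two halves chained together */
	printf("%04x %04x\n", chksum_digest(msg, 9),
	       chksum_finup(chksum_finup(0, msg, 4), msg + 4, 5));
	return 0;
}
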
+diff --git a/crypto/gcm.c b/crypto/gcm.c
+index e1a11f529d25..eea16e726ede 100644
+--- a/crypto/gcm.c
++++ b/crypto/gcm.c
+@@ -597,7 +597,6 @@ static void crypto_gcm_free(struct aead_instance *inst)
+ 
+ static int crypto_gcm_create_common(struct crypto_template *tmpl,
+ 				    struct rtattr **tb,
+-				    const char *full_name,
+ 				    const char *ctr_name,
+ 				    const char *ghash_name)
+ {
+@@ -638,7 +637,8 @@ static int crypto_gcm_create_common(struct crypto_template *tmpl,
+ 		goto err_free_inst;
+ 
+ 	err = -EINVAL;
+-	if (ghash->digestsize != 16)
++	if (strcmp(ghash->base.cra_name, "ghash") != 0 ||
++	    ghash->digestsize != 16)
+ 		goto err_drop_ghash;
+ 
+ 	crypto_set_skcipher_spawn(&ctx->ctr, aead_crypto_instance(inst));
+@@ -650,24 +650,24 @@ static int crypto_gcm_create_common(struct crypto_template *tmpl,
+ 
+ 	ctr = crypto_spawn_skcipher_alg(&ctx->ctr);
+ 
+-	/* We only support 16-byte blocks. */
++	/* The skcipher algorithm must be CTR mode, using 16-byte blocks. */
+ 	err = -EINVAL;
+-	if (crypto_skcipher_alg_ivsize(ctr) != 16)
++	if (strncmp(ctr->base.cra_name, "ctr(", 4) != 0 ||
++	    crypto_skcipher_alg_ivsize(ctr) != 16 ||
++	    ctr->base.cra_blocksize != 1)
+ 		goto out_put_ctr;
+ 
+-	/* Not a stream cipher? */
+-	if (ctr->base.cra_blocksize != 1)
++	err = -ENAMETOOLONG;
++	if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
++		     "gcm(%s", ctr->base.cra_name + 4) >= CRYPTO_MAX_ALG_NAME)
+ 		goto out_put_ctr;
+ 
+-	err = -ENAMETOOLONG;
+ 	if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+ 		     "gcm_base(%s,%s)", ctr->base.cra_driver_name,
+ 		     ghash_alg->cra_driver_name) >=
+ 	    CRYPTO_MAX_ALG_NAME)
+ 		goto out_put_ctr;
+ 
+-	memcpy(inst->alg.base.cra_name, full_name, CRYPTO_MAX_ALG_NAME);
+-
+ 	inst->alg.base.cra_flags = (ghash->base.cra_flags |
+ 				    ctr->base.cra_flags) & CRYPTO_ALG_ASYNC;
+ 	inst->alg.base.cra_priority = (ghash->base.cra_priority +
+@@ -709,7 +709,6 @@ static int crypto_gcm_create(struct crypto_template *tmpl, struct rtattr **tb)
+ {
+ 	const char *cipher_name;
+ 	char ctr_name[CRYPTO_MAX_ALG_NAME];
+-	char full_name[CRYPTO_MAX_ALG_NAME];
+ 
+ 	cipher_name = crypto_attr_alg_name(tb[1]);
+ 	if (IS_ERR(cipher_name))
+@@ -719,12 +718,7 @@ static int crypto_gcm_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	    CRYPTO_MAX_ALG_NAME)
+ 		return -ENAMETOOLONG;
+ 
+-	if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "gcm(%s)", cipher_name) >=
+-	    CRYPTO_MAX_ALG_NAME)
+-		return -ENAMETOOLONG;
+-
+-	return crypto_gcm_create_common(tmpl, tb, full_name,
+-					ctr_name, "ghash");
++	return crypto_gcm_create_common(tmpl, tb, ctr_name, "ghash");
+ }
+ 
+ static int crypto_gcm_base_create(struct crypto_template *tmpl,
+@@ -732,7 +726,6 @@ static int crypto_gcm_base_create(struct crypto_template *tmpl,
+ {
+ 	const char *ctr_name;
+ 	const char *ghash_name;
+-	char full_name[CRYPTO_MAX_ALG_NAME];
+ 
+ 	ctr_name = crypto_attr_alg_name(tb[1]);
+ 	if (IS_ERR(ctr_name))
+@@ -742,12 +735,7 @@ static int crypto_gcm_base_create(struct crypto_template *tmpl,
+ 	if (IS_ERR(ghash_name))
+ 		return PTR_ERR(ghash_name);
+ 
+-	if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "gcm_base(%s,%s)",
+-		     ctr_name, ghash_name) >= CRYPTO_MAX_ALG_NAME)
+-		return -ENAMETOOLONG;
+-
+-	return crypto_gcm_create_common(tmpl, tb, full_name,
+-					ctr_name, ghash_name);
++	return crypto_gcm_create_common(tmpl, tb, ctr_name, ghash_name);
+ }
+ 
+ static int crypto_rfc4106_setkey(struct crypto_aead *parent, const u8 *key,
+diff --git a/crypto/lrw.c b/crypto/lrw.c
+index 08a0e458bc3e..cc5c89246193 100644
+--- a/crypto/lrw.c
++++ b/crypto/lrw.c
+@@ -162,8 +162,10 @@ static int xor_tweak(struct skcipher_request *req, bool second_pass)
+ 	}
+ 
+ 	err = skcipher_walk_virt(&w, req, false);
+-	iv = (__be32 *)w.iv;
++	if (err)
++		return err;
+ 
++	iv = (__be32 *)w.iv;
+ 	counter[0] = be32_to_cpu(iv[3]);
+ 	counter[1] = be32_to_cpu(iv[2]);
+ 	counter[2] = be32_to_cpu(iv[1]);
+diff --git a/crypto/salsa20_generic.c b/crypto/salsa20_generic.c
+index 00fce32ae17a..1d7ad0256fd3 100644
+--- a/crypto/salsa20_generic.c
++++ b/crypto/salsa20_generic.c
+@@ -161,7 +161,7 @@ static int salsa20_crypt(struct skcipher_request *req)
+ 
+ 	err = skcipher_walk_virt(&walk, req, false);
+ 
+-	salsa20_init(state, ctx, walk.iv);
++	salsa20_init(state, ctx, req->iv);
+ 
+ 	while (walk.nbytes > 0) {
+ 		unsigned int nbytes = walk.nbytes;
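
The lrw.c and salsa20_generic.c hunks close the same hazard: skcipher_walk_virt() can fail and leave walk.iv unset, so the IV must be read from the request itself or only after the return code is checked. A runnable sketch of the safe ordering, with walk_begin() as an assumed stand-in for the real walk setup:

#include <stdio.h>

struct walk_stub {
	unsigned char *iv;	/* only valid when walk_begin() returns 0 */
};

/* Stand-in for skcipher_walk_virt(): on failure it returns an error
 * and leaves w->iv unset, much like the real walk on a bad request. */
static int walk_begin(struct walk_stub *w, int fail)
{
	static unsigned char iv[16];

	if (fail)
		return -22;	/* -EINVAL */
	w->iv = iv;
	return 0;
}

static int crypt_one(struct walk_stub *w, int fail)
{
	int err = walk_begin(w, fail);

	if (err)
		return err;	/* bail before ever touching w->iv */

	return w->iv[0];	/* only the success path dereferences iv */
}

int main(void)
{
	struct walk_stub w = { 0 };

	printf("%d %d\n", crypt_one(&w, 1), crypt_one(&w, 0));
	return 0;
}
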
+diff --git a/crypto/skcipher.c b/crypto/skcipher.c
+index bcf13d95f54a..2e66f312e2c4 100644
+--- a/crypto/skcipher.c
++++ b/crypto/skcipher.c
+@@ -131,8 +131,13 @@ unmap_src:
+ 		memcpy(walk->dst.virt.addr, walk->page, n);
+ 		skcipher_unmap_dst(walk);
+ 	} else if (unlikely(walk->flags & SKCIPHER_WALK_SLOW)) {
+-		if (WARN_ON(err)) {
+-			/* unexpected case; didn't process all bytes */
++		if (err) {
++			/*
++			 * Didn't process all bytes.  Either the algorithm is
++			 * broken, or this was the last step and it turned out
++			 * the message wasn't evenly divisible into blocks but
++			 * the algorithm requires it.
++			 */
+ 			err = -EINVAL;
+ 			goto finish;
+ 		}
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index 403c4ff15349..e52f1238d2d6 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -977,6 +977,8 @@ static int acpi_s2idle_prepare(void)
+ 	if (acpi_sci_irq_valid())
+ 		enable_irq_wake(acpi_sci_irq);
+ 
++	acpi_enable_wakeup_devices(ACPI_STATE_S0);
++
+ 	/* Change the configuration of GPEs to avoid spurious wakeup. */
+ 	acpi_enable_all_wakeup_gpes();
+ 	acpi_os_wait_events_complete();
+@@ -1027,6 +1029,8 @@ static void acpi_s2idle_restore(void)
+ {
+ 	acpi_enable_all_runtime_gpes();
+ 
++	acpi_disable_wakeup_devices(ACPI_STATE_S0);
++
+ 	if (acpi_sci_irq_valid())
+ 		disable_irq_wake(acpi_sci_irq);
+ 
+diff --git a/drivers/char/ipmi/ipmi_dmi.c b/drivers/char/ipmi/ipmi_dmi.c
+index f2411468f33f..f38e651dd1b5 100644
+--- a/drivers/char/ipmi/ipmi_dmi.c
++++ b/drivers/char/ipmi/ipmi_dmi.c
+@@ -47,9 +47,11 @@ static void __init dmi_add_platform_ipmi(unsigned long base_addr,
+ 	memset(&p, 0, sizeof(p));
+ 
+ 	name = "dmi-ipmi-si";
++	p.iftype = IPMI_PLAT_IF_SI;
+ 	switch (type) {
+ 	case IPMI_DMI_TYPE_SSIF:
+ 		name = "dmi-ipmi-ssif";
++		p.iftype = IPMI_PLAT_IF_SSIF;
+ 		p.type = SI_TYPE_INVALID;
+ 		break;
+ 	case IPMI_DMI_TYPE_BT:
+diff --git a/drivers/char/ipmi/ipmi_plat_data.c b/drivers/char/ipmi/ipmi_plat_data.c
+index 8f0ca2a848eb..28471ff2a3a3 100644
+--- a/drivers/char/ipmi/ipmi_plat_data.c
++++ b/drivers/char/ipmi/ipmi_plat_data.c
+@@ -12,7 +12,7 @@ struct platform_device *ipmi_platform_add(const char *name, unsigned int inst,
+ 					  struct ipmi_plat_data *p)
+ {
+ 	struct platform_device *pdev;
+-	unsigned int num_r = 1, size, pidx = 0;
++	unsigned int num_r = 1, size = 0, pidx = 0;
+ 	struct resource r[4];
+ 	struct property_entry pr[6];
+ 	u32 flags;
+@@ -21,19 +21,22 @@ struct platform_device *ipmi_platform_add(const char *name, unsigned int inst,
+ 	memset(pr, 0, sizeof(pr));
+ 	memset(r, 0, sizeof(r));
+ 
+-	if (p->type == SI_BT)
+-		size = 3;
+-	else if (p->type == SI_TYPE_INVALID)
+-		size = 0;
+-	else
+-		size = 2;
++	if (p->iftype == IPMI_PLAT_IF_SI) {
++		if (p->type == SI_BT)
++			size = 3;
++		else if (p->type != SI_TYPE_INVALID)
++			size = 2;
++
++		if (p->regsize == 0)
++			p->regsize = DEFAULT_REGSIZE;
++		if (p->regspacing == 0)
++			p->regspacing = p->regsize;
+ 
+-	if (p->regsize == 0)
+-		p->regsize = DEFAULT_REGSIZE;
+-	if (p->regspacing == 0)
+-		p->regspacing = p->regsize;
++		pr[pidx++] = PROPERTY_ENTRY_U8("ipmi-type", p->type);
++	} else if (p->iftype == IPMI_PLAT_IF_SSIF) {
++		pr[pidx++] = PROPERTY_ENTRY_U16("i2c-addr", p->addr);
++	}
+ 
+-	pr[pidx++] = PROPERTY_ENTRY_U8("ipmi-type", p->type);
+ 	if (p->slave_addr)
+ 		pr[pidx++] = PROPERTY_ENTRY_U8("slave-addr", p->slave_addr);
+ 	pr[pidx++] = PROPERTY_ENTRY_U8("addr-source", p->addr_source);
+diff --git a/drivers/char/ipmi/ipmi_plat_data.h b/drivers/char/ipmi/ipmi_plat_data.h
+index 567cfcec8ada..9ba744ea9571 100644
+--- a/drivers/char/ipmi/ipmi_plat_data.h
++++ b/drivers/char/ipmi/ipmi_plat_data.h
+@@ -6,7 +6,10 @@
+ 
+ #include <linux/ipmi.h>
+ 
++enum ipmi_plat_interface_type { IPMI_PLAT_IF_SI, IPMI_PLAT_IF_SSIF };
++
+ struct ipmi_plat_data {
++	enum ipmi_plat_interface_type iftype;
+ 	unsigned int type; /* si_type for si, SI_INVALID for others */
+ 	unsigned int space; /* addr_space for si, intf# for ssif. */
+ 	unsigned long addr;
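
The new iftype field threads through all the IPMI hunks above: each creation path (DMI, hardcode, hotmod) now tags the interface class explicitly instead of overloading type with SI_TYPE_INVALID to mean SSIF. A condensed sketch of the pattern, with a stand-in struct in place of ipmi_plat_data:

enum ipmi_plat_interface_type { IPMI_PLAT_IF_SI, IPMI_PLAT_IF_SSIF };

struct plat_stub {
	enum ipmi_plat_interface_type iftype;
	unsigned int type;	/* si_type for SI, unused for SSIF */
};

/* Every creation path sets iftype up front, so the platform-add code
 * no longer has to infer SSIF from a magic SI_TYPE_INVALID value. */
static void init_si(struct plat_stub *p, unsigned int si_type)
{
	p->iftype = IPMI_PLAT_IF_SI;
	p->type = si_type;
}

static void init_ssif(struct plat_stub *p)
{
	p->iftype = IPMI_PLAT_IF_SSIF;	/* p->type stays irrelevant */
}
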
+diff --git a/drivers/char/ipmi/ipmi_si_hardcode.c b/drivers/char/ipmi/ipmi_si_hardcode.c
+index 682221eebd66..f6ece7569504 100644
+--- a/drivers/char/ipmi/ipmi_si_hardcode.c
++++ b/drivers/char/ipmi/ipmi_si_hardcode.c
+@@ -83,6 +83,7 @@ static void __init ipmi_hardcode_init_one(const char *si_type_str,
+ 
+ 	memset(&p, 0, sizeof(p));
+ 
++	p.iftype = IPMI_PLAT_IF_SI;
+ 	if (!si_type_str || !*si_type_str || strcmp(si_type_str, "kcs") == 0) {
+ 		p.type = SI_KCS;
+ 	} else if (strcmp(si_type_str, "smic") == 0) {
+diff --git a/drivers/char/ipmi/ipmi_si_hotmod.c b/drivers/char/ipmi/ipmi_si_hotmod.c
+index 03140f6cdf6f..42a925f8cf69 100644
+--- a/drivers/char/ipmi/ipmi_si_hotmod.c
++++ b/drivers/char/ipmi/ipmi_si_hotmod.c
+@@ -108,6 +108,7 @@ static int parse_hotmod_str(const char *curr, enum hotmod_op *op,
+ 	int rv;
+ 	unsigned int ival;
+ 
++	h->iftype = IPMI_PLAT_IF_SI;
+ 	rv = parse_str(hotmod_ops, &ival, "operation", &curr);
+ 	if (rv)
+ 		return rv;
+diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
+index 8b5aec5430f1..aaccb0ff1ea6 100644
+--- a/drivers/char/ipmi/ipmi_ssif.c
++++ b/drivers/char/ipmi/ipmi_ssif.c
+@@ -727,12 +727,16 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ 			/* End of read */
+ 			len = ssif_info->multi_len;
+ 			data = ssif_info->data;
+-		} else if (blocknum != ssif_info->multi_pos) {
++		} else if (blocknum + 1 != ssif_info->multi_pos) {
+ 			/*
+ 			 * Out of sequence block, just abort.  Block
+ 			 * numbers start at zero for the second block,
+ 			 * but multi_pos starts at one, so the +1.
+ 			 */
++			if (ssif_info->ssif_debug & SSIF_DEBUG_MSG)
++				dev_dbg(&ssif_info->client->dev,
++					"Received message out of sequence, expected %u, got %u\n",
++					ssif_info->multi_pos - 1, blocknum);
+ 			result = -EIO;
+ 		} else {
+ 			ssif_inc_stat(ssif_info, received_message_parts);
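
The ipmi_ssif.c comparison deserves a worked example, since the off-by-one is easy to misread: per the in-hunk comment, the device numbers middle blocks from zero starting at the second block, while multi_pos is already one there, so the in-sequence test must add one. Illustrative values only:

#include <assert.h>

int main(void)
{
	unsigned int blocknum = 0;	/* second block of the response */
	unsigned int multi_pos = 1;	/* driver position counter */

	assert(blocknum + 1 == multi_pos);	/* fixed test: in sequence */
	assert(blocknum != multi_pos);		/* old test: spurious abort */
	return 0;
}
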
+diff --git a/drivers/crypto/amcc/crypto4xx_alg.c b/drivers/crypto/amcc/crypto4xx_alg.c
+index 4092c2aad8e2..3458c5a085d9 100644
+--- a/drivers/crypto/amcc/crypto4xx_alg.c
++++ b/drivers/crypto/amcc/crypto4xx_alg.c
+@@ -141,9 +141,10 @@ static int crypto4xx_setkey_aes(struct crypto_skcipher *cipher,
+ 	/* Setup SA */
+ 	sa = ctx->sa_in;
+ 
+-	set_dynamic_sa_command_0(sa, SA_NOT_SAVE_HASH, (cm == CRYPTO_MODE_CBC ?
+-				 SA_SAVE_IV : SA_NOT_SAVE_IV),
+-				 SA_LOAD_HASH_FROM_SA, SA_LOAD_IV_FROM_STATE,
++	set_dynamic_sa_command_0(sa, SA_NOT_SAVE_HASH, (cm == CRYPTO_MODE_ECB ?
++				 SA_NOT_SAVE_IV : SA_SAVE_IV),
++				 SA_NOT_LOAD_HASH, (cm == CRYPTO_MODE_ECB ?
++				 SA_LOAD_IV_FROM_SA : SA_LOAD_IV_FROM_STATE),
+ 				 SA_NO_HEADER_PROC, SA_HASH_ALG_NULL,
+ 				 SA_CIPHER_ALG_AES, SA_PAD_TYPE_ZERO,
+ 				 SA_OP_GROUP_BASIC, SA_OPCODE_DECRYPT,
+@@ -162,6 +163,11 @@ static int crypto4xx_setkey_aes(struct crypto_skcipher *cipher,
+ 	memcpy(ctx->sa_out, ctx->sa_in, ctx->sa_len * 4);
+ 	sa = ctx->sa_out;
+ 	sa->sa_command_0.bf.dir = DIR_OUTBOUND;
++	/*
++	 * SA_OPCODE_ENCRYPT is the same value as SA_OPCODE_DECRYPT.
++	 * SA_OPCODE_ENCRYPT is the same value as SA_OPCODE_DECRYPT;
++	 * it's the DIR_(IN|OUT)BOUND that matters.
++	sa->sa_command_0.bf.opcode = SA_OPCODE_ENCRYPT;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/crypto/amcc/crypto4xx_core.c b/drivers/crypto/amcc/crypto4xx_core.c
+index 06574a884715..920bd5e720b2 100644
+--- a/drivers/crypto/amcc/crypto4xx_core.c
++++ b/drivers/crypto/amcc/crypto4xx_core.c
+@@ -714,7 +714,23 @@ int crypto4xx_build_pd(struct crypto_async_request *req,
+ 	size_t offset_to_sr_ptr;
+ 	u32 gd_idx = 0;
+ 	int tmp;
+-	bool is_busy;
++	bool is_busy, force_sd;
++
++	/*
++	 * There's a very subtle/disguised "bug" in the hardware that
++	 * gets indirectly mentioned in 18.1.3.5 Encryption/Decryption
++	 * of the hardware spec:
++	 * *drum roll* the AES/(T)DES OFB and CFB modes are listed as
++	 * operation modes for >>> "Block ciphers" <<<.
++	 *
++	 * To workaround this issue and stop the hardware from causing
++	 * "overran dst buffer" on ciphertexts that are not a multiple
++	 * of 16 (AES_BLOCK_SIZE), we force the driver to use the
++	 * scatter buffers.
++	 */
++	force_sd = (req_sa->sa_command_1.bf.crypto_mode9_8 == CRYPTO_MODE_CFB
++		|| req_sa->sa_command_1.bf.crypto_mode9_8 == CRYPTO_MODE_OFB)
++		&& (datalen % AES_BLOCK_SIZE);
+ 
+ 	/* figure how many gd are needed */
+ 	tmp = sg_nents_for_len(src, assoclen + datalen);
+@@ -732,7 +748,7 @@ int crypto4xx_build_pd(struct crypto_async_request *req,
+ 	}
+ 
+ 	/* figure how many sd are needed */
+-	if (sg_is_last(dst)) {
++	if (sg_is_last(dst) && force_sd == false) {
+ 		num_sd = 0;
+ 	} else {
+ 		if (datalen > PPC4XX_SD_BUFFER_SIZE) {
+@@ -807,9 +823,10 @@ int crypto4xx_build_pd(struct crypto_async_request *req,
+ 	pd->sa_len = sa_len;
+ 
+ 	pd_uinfo = &dev->pdr_uinfo[pd_entry];
+-	pd_uinfo->async_req = req;
+ 	pd_uinfo->num_gd = num_gd;
+ 	pd_uinfo->num_sd = num_sd;
++	pd_uinfo->dest_va = dst;
++	pd_uinfo->async_req = req;
+ 
+ 	if (iv_len)
+ 		memcpy(pd_uinfo->sr_va->save_iv, iv, iv_len);
+@@ -828,7 +845,6 @@ int crypto4xx_build_pd(struct crypto_async_request *req,
+ 		/* get first gd we are going to use */
+ 		gd_idx = fst_gd;
+ 		pd_uinfo->first_gd = fst_gd;
+-		pd_uinfo->num_gd = num_gd;
+ 		gd = crypto4xx_get_gdp(dev, &gd_dma, gd_idx);
+ 		pd->src = gd_dma;
+ 		/* enable gather */
+@@ -865,17 +881,14 @@ int crypto4xx_build_pd(struct crypto_async_request *req,
+ 		 * Indicate gather array is not used
+ 		 */
+ 		pd_uinfo->first_gd = 0xffffffff;
+-		pd_uinfo->num_gd = 0;
+ 	}
+-	if (sg_is_last(dst)) {
++	if (!num_sd) {
+ 		/*
+ 		 * we know the application gave us dst as a whole piece of memory,
+ 		 * no need to use scatter ring.
+ 		 */
+ 		pd_uinfo->using_sd = 0;
+ 		pd_uinfo->first_sd = 0xffffffff;
+-		pd_uinfo->num_sd = 0;
+-		pd_uinfo->dest_va = dst;
+ 		sa->sa_command_0.bf.scatter = 0;
+ 		pd->dest = (u32)dma_map_page(dev->core_dev->device,
+ 					     sg_page(dst), dst->offset,
+@@ -889,9 +902,7 @@ int crypto4xx_build_pd(struct crypto_async_request *req,
+ 		nbytes = datalen;
+ 		sa->sa_command_0.bf.scatter = 1;
+ 		pd_uinfo->using_sd = 1;
+-		pd_uinfo->dest_va = dst;
+ 		pd_uinfo->first_sd = fst_sd;
+-		pd_uinfo->num_sd = num_sd;
+ 		sd = crypto4xx_get_sdp(dev, &sd_dma, sd_idx);
+ 		pd->dest = sd_dma;
+ 		/* setup scatter descriptor */
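
The force_sd workaround in crypto4xx_build_pd() routes CFB/OFB requests whose length is not a multiple of the AES block size through the scatter buffers. A sketch of just the trigger condition, with made-up mode constants in place of the driver's CRYPTO_MODE_* encoding:

#include <stdbool.h>
#include <stdio.h>

#define AES_BLOCK_SIZE 16
enum { MODE_ECB, MODE_CBC, MODE_CFB, MODE_OFB };

static bool must_force_scatter(int mode, unsigned int datalen)
{
	return (mode == MODE_CFB || mode == MODE_OFB) &&
	       (datalen % AES_BLOCK_SIZE) != 0;
}

int main(void)
{
	printf("%d %d %d\n",
	       must_force_scatter(MODE_OFB, 20),	/* 1: needs workaround */
	       must_force_scatter(MODE_OFB, 32),	/* 0: block-aligned */
	       must_force_scatter(MODE_CBC, 20));	/* 0: CBC unaffected */
	return 0;
}
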
+diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c
+index c2c1abc68f81..0a72c96708c4 100644
+--- a/drivers/crypto/caam/caamalg_qi2.c
++++ b/drivers/crypto/caam/caamalg_qi2.c
+@@ -2854,6 +2854,7 @@ struct caam_hash_state {
+ 	struct caam_request caam_req;
+ 	dma_addr_t buf_dma;
+ 	dma_addr_t ctx_dma;
++	int ctx_dma_len;
+ 	u8 buf_0[CAAM_MAX_HASH_BLOCK_SIZE] ____cacheline_aligned;
+ 	int buflen_0;
+ 	u8 buf_1[CAAM_MAX_HASH_BLOCK_SIZE] ____cacheline_aligned;
+@@ -2927,6 +2928,7 @@ static inline int ctx_map_to_qm_sg(struct device *dev,
+ 				   struct caam_hash_state *state, int ctx_len,
+ 				   struct dpaa2_sg_entry *qm_sg, u32 flag)
+ {
++	state->ctx_dma_len = ctx_len;
+ 	state->ctx_dma = dma_map_single(dev, state->caam_ctx, ctx_len, flag);
+ 	if (dma_mapping_error(dev, state->ctx_dma)) {
+ 		dev_err(dev, "unable to map ctx\n");
+@@ -3018,13 +3020,13 @@ static void split_key_sh_done(void *cbk_ctx, u32 err)
+ }
+ 
+ /* Digest hash size if it is too large */
+-static int hash_digest_key(struct caam_hash_ctx *ctx, const u8 *key_in,
+-			   u32 *keylen, u8 *key_out, u32 digestsize)
++static int hash_digest_key(struct caam_hash_ctx *ctx, u32 *keylen, u8 *key,
++			   u32 digestsize)
+ {
+ 	struct caam_request *req_ctx;
+ 	u32 *desc;
+ 	struct split_key_sh_result result;
+-	dma_addr_t src_dma, dst_dma;
++	dma_addr_t key_dma;
+ 	struct caam_flc *flc;
+ 	dma_addr_t flc_dma;
+ 	int ret = -ENOMEM;
+@@ -3041,17 +3043,10 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, const u8 *key_in,
+ 	if (!flc)
+ 		goto err_flc;
+ 
+-	src_dma = dma_map_single(ctx->dev, (void *)key_in, *keylen,
+-				 DMA_TO_DEVICE);
+-	if (dma_mapping_error(ctx->dev, src_dma)) {
+-		dev_err(ctx->dev, "unable to map key input memory\n");
+-		goto err_src_dma;
+-	}
+-	dst_dma = dma_map_single(ctx->dev, (void *)key_out, digestsize,
+-				 DMA_FROM_DEVICE);
+-	if (dma_mapping_error(ctx->dev, dst_dma)) {
+-		dev_err(ctx->dev, "unable to map key output memory\n");
+-		goto err_dst_dma;
++	key_dma = dma_map_single(ctx->dev, key, *keylen, DMA_BIDIRECTIONAL);
++	if (dma_mapping_error(ctx->dev, key_dma)) {
++		dev_err(ctx->dev, "unable to map key memory\n");
++		goto err_key_dma;
+ 	}
+ 
+ 	desc = flc->sh_desc;
+@@ -3076,14 +3071,14 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, const u8 *key_in,
+ 
+ 	dpaa2_fl_set_final(in_fle, true);
+ 	dpaa2_fl_set_format(in_fle, dpaa2_fl_single);
+-	dpaa2_fl_set_addr(in_fle, src_dma);
++	dpaa2_fl_set_addr(in_fle, key_dma);
+ 	dpaa2_fl_set_len(in_fle, *keylen);
+ 	dpaa2_fl_set_format(out_fle, dpaa2_fl_single);
+-	dpaa2_fl_set_addr(out_fle, dst_dma);
++	dpaa2_fl_set_addr(out_fle, key_dma);
+ 	dpaa2_fl_set_len(out_fle, digestsize);
+ 
+ 	print_hex_dump_debug("key_in@" __stringify(__LINE__)": ",
+-			     DUMP_PREFIX_ADDRESS, 16, 4, key_in, *keylen, 1);
++			     DUMP_PREFIX_ADDRESS, 16, 4, key, *keylen, 1);
+ 	print_hex_dump_debug("shdesc@" __stringify(__LINE__)": ",
+ 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
+ 			     1);
+@@ -3103,17 +3098,15 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, const u8 *key_in,
+ 		wait_for_completion(&result.completion);
+ 		ret = result.err;
+ 		print_hex_dump_debug("digested key@" __stringify(__LINE__)": ",
+-				     DUMP_PREFIX_ADDRESS, 16, 4, key_in,
++				     DUMP_PREFIX_ADDRESS, 16, 4, key,
+ 				     digestsize, 1);
+ 	}
+ 
+ 	dma_unmap_single(ctx->dev, flc_dma, sizeof(flc->flc) + desc_bytes(desc),
+ 			 DMA_TO_DEVICE);
+ err_flc_dma:
+-	dma_unmap_single(ctx->dev, dst_dma, digestsize, DMA_FROM_DEVICE);
+-err_dst_dma:
+-	dma_unmap_single(ctx->dev, src_dma, *keylen, DMA_TO_DEVICE);
+-err_src_dma:
++	dma_unmap_single(ctx->dev, key_dma, *keylen, DMA_BIDIRECTIONAL);
++err_key_dma:
+ 	kfree(flc);
+ err_flc:
+ 	kfree(req_ctx);
+@@ -3135,12 +3128,10 @@ static int ahash_setkey(struct crypto_ahash *ahash, const u8 *key,
+ 	dev_dbg(ctx->dev, "keylen %d blocksize %d\n", keylen, blocksize);
+ 
+ 	if (keylen > blocksize) {
+-		hashed_key = kmalloc_array(digestsize, sizeof(*hashed_key),
+-					   GFP_KERNEL | GFP_DMA);
++		hashed_key = kmemdup(key, keylen, GFP_KERNEL | GFP_DMA);
+ 		if (!hashed_key)
+ 			return -ENOMEM;
+-		ret = hash_digest_key(ctx, key, &keylen, hashed_key,
+-				      digestsize);
++		ret = hash_digest_key(ctx, &keylen, hashed_key, digestsize);
+ 		if (ret)
+ 			goto bad_free_key;
+ 		key = hashed_key;
+@@ -3165,14 +3156,12 @@ bad_free_key:
+ }
+ 
+ static inline void ahash_unmap(struct device *dev, struct ahash_edesc *edesc,
+-			       struct ahash_request *req, int dst_len)
++			       struct ahash_request *req)
+ {
+ 	struct caam_hash_state *state = ahash_request_ctx(req);
+ 
+ 	if (edesc->src_nents)
+ 		dma_unmap_sg(dev, req->src, edesc->src_nents, DMA_TO_DEVICE);
+-	if (edesc->dst_dma)
+-		dma_unmap_single(dev, edesc->dst_dma, dst_len, DMA_FROM_DEVICE);
+ 
+ 	if (edesc->qm_sg_bytes)
+ 		dma_unmap_single(dev, edesc->qm_sg_dma, edesc->qm_sg_bytes,
+@@ -3187,18 +3176,15 @@ static inline void ahash_unmap(struct device *dev, struct ahash_edesc *edesc,
+ 
+ static inline void ahash_unmap_ctx(struct device *dev,
+ 				   struct ahash_edesc *edesc,
+-				   struct ahash_request *req, int dst_len,
+-				   u32 flag)
++				   struct ahash_request *req, u32 flag)
+ {
+-	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
+-	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+ 	struct caam_hash_state *state = ahash_request_ctx(req);
+ 
+ 	if (state->ctx_dma) {
+-		dma_unmap_single(dev, state->ctx_dma, ctx->ctx_len, flag);
++		dma_unmap_single(dev, state->ctx_dma, state->ctx_dma_len, flag);
+ 		state->ctx_dma = 0;
+ 	}
+-	ahash_unmap(dev, edesc, req, dst_len);
++	ahash_unmap(dev, edesc, req);
+ }
+ 
+ static void ahash_done(void *cbk_ctx, u32 status)
+@@ -3219,16 +3205,13 @@ static void ahash_done(void *cbk_ctx, u32 status)
+ 		ecode = -EIO;
+ 	}
+ 
+-	ahash_unmap(ctx->dev, edesc, req, digestsize);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_FROM_DEVICE);
++	memcpy(req->result, state->caam_ctx, digestsize);
+ 	qi_cache_free(edesc);
+ 
+ 	print_hex_dump_debug("ctx@" __stringify(__LINE__)": ",
+ 			     DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx,
+ 			     ctx->ctx_len, 1);
+-	if (req->result)
+-		print_hex_dump_debug("result@" __stringify(__LINE__)": ",
+-				     DUMP_PREFIX_ADDRESS, 16, 4, req->result,
+-				     digestsize, 1);
+ 
+ 	req->base.complete(&req->base, ecode);
+ }
+@@ -3250,7 +3233,7 @@ static void ahash_done_bi(void *cbk_ctx, u32 status)
+ 		ecode = -EIO;
+ 	}
+ 
+-	ahash_unmap_ctx(ctx->dev, edesc, req, ctx->ctx_len, DMA_BIDIRECTIONAL);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_BIDIRECTIONAL);
+ 	switch_buf(state);
+ 	qi_cache_free(edesc);
+ 
+@@ -3283,16 +3266,13 @@ static void ahash_done_ctx_src(void *cbk_ctx, u32 status)
+ 		ecode = -EIO;
+ 	}
+ 
+-	ahash_unmap_ctx(ctx->dev, edesc, req, digestsize, DMA_TO_DEVICE);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_BIDIRECTIONAL);
++	memcpy(req->result, state->caam_ctx, digestsize);
+ 	qi_cache_free(edesc);
+ 
+ 	print_hex_dump_debug("ctx@" __stringify(__LINE__)": ",
+ 			     DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx,
+ 			     ctx->ctx_len, 1);
+-	if (req->result)
+-		print_hex_dump_debug("result@" __stringify(__LINE__)": ",
+-				     DUMP_PREFIX_ADDRESS, 16, 4, req->result,
+-				     digestsize, 1);
+ 
+ 	req->base.complete(&req->base, ecode);
+ }
+@@ -3314,7 +3294,7 @@ static void ahash_done_ctx_dst(void *cbk_ctx, u32 status)
+ 		ecode = -EIO;
+ 	}
+ 
+-	ahash_unmap_ctx(ctx->dev, edesc, req, ctx->ctx_len, DMA_FROM_DEVICE);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_FROM_DEVICE);
+ 	switch_buf(state);
+ 	qi_cache_free(edesc);
+ 
+@@ -3452,7 +3432,7 @@ static int ahash_update_ctx(struct ahash_request *req)
+ 
+ 	return ret;
+ unmap_ctx:
+-	ahash_unmap_ctx(ctx->dev, edesc, req, ctx->ctx_len, DMA_BIDIRECTIONAL);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_BIDIRECTIONAL);
+ 	qi_cache_free(edesc);
+ 	return ret;
+ }
+@@ -3484,7 +3464,7 @@ static int ahash_final_ctx(struct ahash_request *req)
+ 	sg_table = &edesc->sgt[0];
+ 
+ 	ret = ctx_map_to_qm_sg(ctx->dev, state, ctx->ctx_len, sg_table,
+-			       DMA_TO_DEVICE);
++			       DMA_BIDIRECTIONAL);
+ 	if (ret)
+ 		goto unmap_ctx;
+ 
+@@ -3503,22 +3483,13 @@ static int ahash_final_ctx(struct ahash_request *req)
+ 	}
+ 	edesc->qm_sg_bytes = qm_sg_bytes;
+ 
+-	edesc->dst_dma = dma_map_single(ctx->dev, req->result, digestsize,
+-					DMA_FROM_DEVICE);
+-	if (dma_mapping_error(ctx->dev, edesc->dst_dma)) {
+-		dev_err(ctx->dev, "unable to map dst\n");
+-		edesc->dst_dma = 0;
+-		ret = -ENOMEM;
+-		goto unmap_ctx;
+-	}
+-
+ 	memset(&req_ctx->fd_flt, 0, sizeof(req_ctx->fd_flt));
+ 	dpaa2_fl_set_final(in_fle, true);
+ 	dpaa2_fl_set_format(in_fle, dpaa2_fl_sg);
+ 	dpaa2_fl_set_addr(in_fle, edesc->qm_sg_dma);
+ 	dpaa2_fl_set_len(in_fle, ctx->ctx_len + buflen);
+ 	dpaa2_fl_set_format(out_fle, dpaa2_fl_single);
+-	dpaa2_fl_set_addr(out_fle, edesc->dst_dma);
++	dpaa2_fl_set_addr(out_fle, state->ctx_dma);
+ 	dpaa2_fl_set_len(out_fle, digestsize);
+ 
+ 	req_ctx->flc = &ctx->flc[FINALIZE];
+@@ -3533,7 +3504,7 @@ static int ahash_final_ctx(struct ahash_request *req)
+ 		return ret;
+ 
+ unmap_ctx:
+-	ahash_unmap_ctx(ctx->dev, edesc, req, digestsize, DMA_FROM_DEVICE);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_BIDIRECTIONAL);
+ 	qi_cache_free(edesc);
+ 	return ret;
+ }
+@@ -3586,7 +3557,7 @@ static int ahash_finup_ctx(struct ahash_request *req)
+ 	sg_table = &edesc->sgt[0];
+ 
+ 	ret = ctx_map_to_qm_sg(ctx->dev, state, ctx->ctx_len, sg_table,
+-			       DMA_TO_DEVICE);
++			       DMA_BIDIRECTIONAL);
+ 	if (ret)
+ 		goto unmap_ctx;
+ 
+@@ -3605,22 +3576,13 @@ static int ahash_finup_ctx(struct ahash_request *req)
+ 	}
+ 	edesc->qm_sg_bytes = qm_sg_bytes;
+ 
+-	edesc->dst_dma = dma_map_single(ctx->dev, req->result, digestsize,
+-					DMA_FROM_DEVICE);
+-	if (dma_mapping_error(ctx->dev, edesc->dst_dma)) {
+-		dev_err(ctx->dev, "unable to map dst\n");
+-		edesc->dst_dma = 0;
+-		ret = -ENOMEM;
+-		goto unmap_ctx;
+-	}
+-
+ 	memset(&req_ctx->fd_flt, 0, sizeof(req_ctx->fd_flt));
+ 	dpaa2_fl_set_final(in_fle, true);
+ 	dpaa2_fl_set_format(in_fle, dpaa2_fl_sg);
+ 	dpaa2_fl_set_addr(in_fle, edesc->qm_sg_dma);
+ 	dpaa2_fl_set_len(in_fle, ctx->ctx_len + buflen + req->nbytes);
+ 	dpaa2_fl_set_format(out_fle, dpaa2_fl_single);
+-	dpaa2_fl_set_addr(out_fle, edesc->dst_dma);
++	dpaa2_fl_set_addr(out_fle, state->ctx_dma);
+ 	dpaa2_fl_set_len(out_fle, digestsize);
+ 
+ 	req_ctx->flc = &ctx->flc[FINALIZE];
+@@ -3635,7 +3597,7 @@ static int ahash_finup_ctx(struct ahash_request *req)
+ 		return ret;
+ 
+ unmap_ctx:
+-	ahash_unmap_ctx(ctx->dev, edesc, req, digestsize, DMA_FROM_DEVICE);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_BIDIRECTIONAL);
+ 	qi_cache_free(edesc);
+ 	return ret;
+ }
+@@ -3704,18 +3666,19 @@ static int ahash_digest(struct ahash_request *req)
+ 		dpaa2_fl_set_addr(in_fle, sg_dma_address(req->src));
+ 	}
+ 
+-	edesc->dst_dma = dma_map_single(ctx->dev, req->result, digestsize,
++	state->ctx_dma_len = digestsize;
++	state->ctx_dma = dma_map_single(ctx->dev, state->caam_ctx, digestsize,
+ 					DMA_FROM_DEVICE);
+-	if (dma_mapping_error(ctx->dev, edesc->dst_dma)) {
+-		dev_err(ctx->dev, "unable to map dst\n");
+-		edesc->dst_dma = 0;
++	if (dma_mapping_error(ctx->dev, state->ctx_dma)) {
++		dev_err(ctx->dev, "unable to map ctx\n");
++		state->ctx_dma = 0;
+ 		goto unmap;
+ 	}
+ 
+ 	dpaa2_fl_set_final(in_fle, true);
+ 	dpaa2_fl_set_len(in_fle, req->nbytes);
+ 	dpaa2_fl_set_format(out_fle, dpaa2_fl_single);
+-	dpaa2_fl_set_addr(out_fle, edesc->dst_dma);
++	dpaa2_fl_set_addr(out_fle, state->ctx_dma);
+ 	dpaa2_fl_set_len(out_fle, digestsize);
+ 
+ 	req_ctx->flc = &ctx->flc[DIGEST];
+@@ -3729,7 +3692,7 @@ static int ahash_digest(struct ahash_request *req)
+ 		return ret;
+ 
+ unmap:
+-	ahash_unmap(ctx->dev, edesc, req, digestsize);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_FROM_DEVICE);
+ 	qi_cache_free(edesc);
+ 	return ret;
+ }
+@@ -3755,27 +3718,39 @@ static int ahash_final_no_ctx(struct ahash_request *req)
+ 	if (!edesc)
+ 		return ret;
+ 
+-	state->buf_dma = dma_map_single(ctx->dev, buf, buflen, DMA_TO_DEVICE);
+-	if (dma_mapping_error(ctx->dev, state->buf_dma)) {
+-		dev_err(ctx->dev, "unable to map src\n");
+-		goto unmap;
++	if (buflen) {
++		state->buf_dma = dma_map_single(ctx->dev, buf, buflen,
++						DMA_TO_DEVICE);
++		if (dma_mapping_error(ctx->dev, state->buf_dma)) {
++			dev_err(ctx->dev, "unable to map src\n");
++			goto unmap;
++		}
+ 	}
+ 
+-	edesc->dst_dma = dma_map_single(ctx->dev, req->result, digestsize,
++	state->ctx_dma_len = digestsize;
++	state->ctx_dma = dma_map_single(ctx->dev, state->caam_ctx, digestsize,
+ 					DMA_FROM_DEVICE);
+-	if (dma_mapping_error(ctx->dev, edesc->dst_dma)) {
+-		dev_err(ctx->dev, "unable to map dst\n");
+-		edesc->dst_dma = 0;
++	if (dma_mapping_error(ctx->dev, state->ctx_dma)) {
++		dev_err(ctx->dev, "unable to map ctx\n");
++		state->ctx_dma = 0;
+ 		goto unmap;
+ 	}
+ 
+ 	memset(&req_ctx->fd_flt, 0, sizeof(req_ctx->fd_flt));
+ 	dpaa2_fl_set_final(in_fle, true);
+-	dpaa2_fl_set_format(in_fle, dpaa2_fl_single);
+-	dpaa2_fl_set_addr(in_fle, state->buf_dma);
+-	dpaa2_fl_set_len(in_fle, buflen);
++	/*
++	 * The crypto engine requires the input entry to be present when
++	 * a "frame list" FD is used.
++	 * Since the engine does not support FMT=2'b11 (unused entry type),
++	 * leaving in_fle zeroized (except for the "Final" flag) is the best option.
++	 */
++	if (buflen) {
++		dpaa2_fl_set_format(in_fle, dpaa2_fl_single);
++		dpaa2_fl_set_addr(in_fle, state->buf_dma);
++		dpaa2_fl_set_len(in_fle, buflen);
++	}
+ 	dpaa2_fl_set_format(out_fle, dpaa2_fl_single);
+-	dpaa2_fl_set_addr(out_fle, edesc->dst_dma);
++	dpaa2_fl_set_addr(out_fle, state->ctx_dma);
+ 	dpaa2_fl_set_len(out_fle, digestsize);
+ 
+ 	req_ctx->flc = &ctx->flc[DIGEST];
+@@ -3790,7 +3765,7 @@ static int ahash_final_no_ctx(struct ahash_request *req)
+ 		return ret;
+ 
+ unmap:
+-	ahash_unmap(ctx->dev, edesc, req, digestsize);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_FROM_DEVICE);
+ 	qi_cache_free(edesc);
+ 	return ret;
+ }
+@@ -3870,6 +3845,7 @@ static int ahash_update_no_ctx(struct ahash_request *req)
+ 		}
+ 		edesc->qm_sg_bytes = qm_sg_bytes;
+ 
++		state->ctx_dma_len = ctx->ctx_len;
+ 		state->ctx_dma = dma_map_single(ctx->dev, state->caam_ctx,
+ 						ctx->ctx_len, DMA_FROM_DEVICE);
+ 		if (dma_mapping_error(ctx->dev, state->ctx_dma)) {
+@@ -3918,7 +3894,7 @@ static int ahash_update_no_ctx(struct ahash_request *req)
+ 
+ 	return ret;
+ unmap_ctx:
+-	ahash_unmap_ctx(ctx->dev, edesc, req, ctx->ctx_len, DMA_TO_DEVICE);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_TO_DEVICE);
+ 	qi_cache_free(edesc);
+ 	return ret;
+ }
+@@ -3983,11 +3959,12 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
+ 	}
+ 	edesc->qm_sg_bytes = qm_sg_bytes;
+ 
+-	edesc->dst_dma = dma_map_single(ctx->dev, req->result, digestsize,
++	state->ctx_dma_len = digestsize;
++	state->ctx_dma = dma_map_single(ctx->dev, state->caam_ctx, digestsize,
+ 					DMA_FROM_DEVICE);
+-	if (dma_mapping_error(ctx->dev, edesc->dst_dma)) {
+-		dev_err(ctx->dev, "unable to map dst\n");
+-		edesc->dst_dma = 0;
++	if (dma_mapping_error(ctx->dev, state->ctx_dma)) {
++		dev_err(ctx->dev, "unable to map ctx\n");
++		state->ctx_dma = 0;
+ 		ret = -ENOMEM;
+ 		goto unmap;
+ 	}
+@@ -3998,7 +3975,7 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
+ 	dpaa2_fl_set_addr(in_fle, edesc->qm_sg_dma);
+ 	dpaa2_fl_set_len(in_fle, buflen + req->nbytes);
+ 	dpaa2_fl_set_format(out_fle, dpaa2_fl_single);
+-	dpaa2_fl_set_addr(out_fle, edesc->dst_dma);
++	dpaa2_fl_set_addr(out_fle, state->ctx_dma);
+ 	dpaa2_fl_set_len(out_fle, digestsize);
+ 
+ 	req_ctx->flc = &ctx->flc[DIGEST];
+@@ -4013,7 +3990,7 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
+ 
+ 	return ret;
+ unmap:
+-	ahash_unmap(ctx->dev, edesc, req, digestsize);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_FROM_DEVICE);
+ 	qi_cache_free(edesc);
+ 	return -ENOMEM;
+ }
+@@ -4100,6 +4077,7 @@ static int ahash_update_first(struct ahash_request *req)
+ 			scatterwalk_map_and_copy(next_buf, req->src, to_hash,
+ 						 *next_buflen, 0);
+ 
++		state->ctx_dma_len = ctx->ctx_len;
+ 		state->ctx_dma = dma_map_single(ctx->dev, state->caam_ctx,
+ 						ctx->ctx_len, DMA_FROM_DEVICE);
+ 		if (dma_mapping_error(ctx->dev, state->ctx_dma)) {
+@@ -4143,7 +4121,7 @@ static int ahash_update_first(struct ahash_request *req)
+ 
+ 	return ret;
+ unmap_ctx:
+-	ahash_unmap_ctx(ctx->dev, edesc, req, ctx->ctx_len, DMA_TO_DEVICE);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_TO_DEVICE);
+ 	qi_cache_free(edesc);
+ 	return ret;
+ }
+@@ -4162,6 +4140,7 @@ static int ahash_init(struct ahash_request *req)
+ 	state->final = ahash_final_no_ctx;
+ 
+ 	state->ctx_dma = 0;
++	state->ctx_dma_len = 0;
+ 	state->current_buf = 0;
+ 	state->buf_dma = 0;
+ 	state->buflen_0 = 0;
+diff --git a/drivers/crypto/caam/caamalg_qi2.h b/drivers/crypto/caam/caamalg_qi2.h
+index 20890780fb82..be5085451053 100644
+--- a/drivers/crypto/caam/caamalg_qi2.h
++++ b/drivers/crypto/caam/caamalg_qi2.h
+@@ -162,14 +162,12 @@ struct skcipher_edesc {
+ 
+ /*
+  * ahash_edesc - s/w-extended ahash descriptor
+- * @dst_dma: I/O virtual address of req->result
+  * @qm_sg_dma: I/O virtual address of h/w link table
+  * @src_nents: number of segments in input scatterlist
+  * @qm_sg_bytes: length of dma mapped qm_sg space
+  * @sgt: pointer to h/w link table
+  */
+ struct ahash_edesc {
+-	dma_addr_t dst_dma;
+ 	dma_addr_t qm_sg_dma;
+ 	int src_nents;
+ 	int qm_sg_bytes;
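
The caam hunks above all follow one shape: the engine now writes digests into the DMA-mapped caam_ctx buffer, whose mapped length is recorded in the new ctx_dma_len, and the result is copied to the caller only after unmapping, which is why dst_dma disappears from the edesc. A stripped-down sketch of that completion path, with the unmap call reduced to a comment:

#include <stdio.h>
#include <string.h>

struct state_stub {
	unsigned char caam_ctx[64];	/* engine writes the digest here */
	unsigned long ctx_dma;		/* bus address; 0 when unmapped */
	int ctx_dma_len;		/* length that was actually mapped */
};

static void hash_done(struct state_stub *st, unsigned char *result,
		      int digestsize)
{
	/* dma_unmap_single(dev, st->ctx_dma, st->ctx_dma_len, dir); */
	st->ctx_dma = 0;
	memcpy(result, st->caam_ctx, digestsize);	/* copy after unmap */
}

int main(void)
{
	struct state_stub st = { .ctx_dma = 0xdead, .ctx_dma_len = 32 };
	unsigned char out[32];

	memcpy(st.caam_ctx, "digest-bytes-from-the-engine....", 32);
	hash_done(&st, out, 32);
	printf("%.32s\n", out);
	return 0;
}
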
+diff --git a/drivers/crypto/ccp/psp-dev.c b/drivers/crypto/ccp/psp-dev.c
+index fadf859a14b8..59c4796300e8 100644
+--- a/drivers/crypto/ccp/psp-dev.c
++++ b/drivers/crypto/ccp/psp-dev.c
+@@ -997,7 +997,7 @@ void psp_pci_init(void)
+ 	rc = sev_platform_init(&error);
+ 	if (rc) {
+ 		dev_err(sp->dev, "SEV: failed to INIT error %#x\n", error);
+-		goto err;
++		return;
+ 	}
+ 
+ 	dev_info(sp->dev, "SEV API:%d.%d build:%d\n", psp_master->api_major,
+diff --git a/drivers/crypto/ccree/cc_aead.c b/drivers/crypto/ccree/cc_aead.c
+index a3527c00b29a..009ce649ff25 100644
+--- a/drivers/crypto/ccree/cc_aead.c
++++ b/drivers/crypto/ccree/cc_aead.c
+@@ -424,7 +424,7 @@ static int validate_keys_sizes(struct cc_aead_ctx *ctx)
+ /* This function prepares the user key so it can be passed to the hmac processing
+  * (copy to internal buffer, or hash it in case the key is longer than a block).
+  */
+-static int cc_get_plain_hmac_key(struct crypto_aead *tfm, const u8 *key,
++static int cc_get_plain_hmac_key(struct crypto_aead *tfm, const u8 *authkey,
+ 				 unsigned int keylen)
+ {
+ 	dma_addr_t key_dma_addr = 0;
+@@ -437,6 +437,7 @@ static int cc_get_plain_hmac_key(struct crypto_aead *tfm, const u8 *key,
+ 	unsigned int hashmode;
+ 	unsigned int idx = 0;
+ 	int rc = 0;
++	u8 *key = NULL;
+ 	struct cc_hw_desc desc[MAX_AEAD_SETKEY_SEQ];
+ 	dma_addr_t padded_authkey_dma_addr =
+ 		ctx->auth_state.hmac.padded_authkey_dma_addr;
+@@ -455,11 +456,17 @@ static int cc_get_plain_hmac_key(struct crypto_aead *tfm, const u8 *key,
+ 	}
+ 
+ 	if (keylen != 0) {
++
++		key = kmemdup(authkey, keylen, GFP_KERNEL);
++		if (!key)
++			return -ENOMEM;
++
+ 		key_dma_addr = dma_map_single(dev, (void *)key, keylen,
+ 					      DMA_TO_DEVICE);
+ 		if (dma_mapping_error(dev, key_dma_addr)) {
+ 			dev_err(dev, "Mapping key va=0x%p len=%u for DMA failed\n",
+ 				key, keylen);
++			kzfree(key);
+ 			return -ENOMEM;
+ 		}
+ 		if (keylen > blocksize) {
+@@ -542,6 +549,8 @@ static int cc_get_plain_hmac_key(struct crypto_aead *tfm, const u8 *key,
+ 	if (key_dma_addr)
+ 		dma_unmap_single(dev, key_dma_addr, keylen, DMA_TO_DEVICE);
+ 
++	kzfree(key);
++
+ 	return rc;
+ }
+ 
+diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c
+index 0ee1c52da0a4..583737e50611 100644
+--- a/drivers/crypto/ccree/cc_buffer_mgr.c
++++ b/drivers/crypto/ccree/cc_buffer_mgr.c
+@@ -83,24 +83,17 @@ static void cc_copy_mac(struct device *dev, struct aead_request *req,
+  */
+ static unsigned int cc_get_sgl_nents(struct device *dev,
+ 				     struct scatterlist *sg_list,
+-				     unsigned int nbytes, u32 *lbytes,
+-				     bool *is_chained)
++				     unsigned int nbytes, u32 *lbytes)
+ {
+ 	unsigned int nents = 0;
+ 
+ 	while (nbytes && sg_list) {
+-		if (sg_list->length) {
+-			nents++;
+-			/* get the number of bytes in the last entry */
+-			*lbytes = nbytes;
+-			nbytes -= (sg_list->length > nbytes) ?
+-					nbytes : sg_list->length;
+-			sg_list = sg_next(sg_list);
+-		} else {
+-			sg_list = (struct scatterlist *)sg_page(sg_list);
+-			if (is_chained)
+-				*is_chained = true;
+-		}
++		nents++;
++		/* get the number of bytes in the last entry */
++		*lbytes = nbytes;
++		nbytes -= (sg_list->length > nbytes) ?
++				nbytes : sg_list->length;
++		sg_list = sg_next(sg_list);
+ 	}
+ 	dev_dbg(dev, "nents %d last bytes %d\n", nents, *lbytes);
+ 	return nents;
+@@ -142,7 +135,7 @@ void cc_copy_sg_portion(struct device *dev, u8 *dest, struct scatterlist *sg,
+ {
+ 	u32 nents, lbytes;
+ 
+-	nents = cc_get_sgl_nents(dev, sg, end, &lbytes, NULL);
++	nents = cc_get_sgl_nents(dev, sg, end, &lbytes);
+ 	sg_copy_buffer(sg, nents, (void *)dest, (end - to_skip + 1), to_skip,
+ 		       (direct == CC_SG_TO_BUF));
+ }
+@@ -314,40 +307,10 @@ static void cc_add_sg_entry(struct device *dev, struct buffer_array *sgl_data,
+ 	sgl_data->num_of_buffers++;
+ }
+ 
+-static int cc_dma_map_sg(struct device *dev, struct scatterlist *sg, u32 nents,
+-			 enum dma_data_direction direction)
+-{
+-	u32 i, j;
+-	struct scatterlist *l_sg = sg;
+-
+-	for (i = 0; i < nents; i++) {
+-		if (!l_sg)
+-			break;
+-		if (dma_map_sg(dev, l_sg, 1, direction) != 1) {
+-			dev_err(dev, "dma_map_page() sg buffer failed\n");
+-			goto err;
+-		}
+-		l_sg = sg_next(l_sg);
+-	}
+-	return nents;
+-
+-err:
+-	/* Restore mapped parts */
+-	for (j = 0; j < i; j++) {
+-		if (!sg)
+-			break;
+-		dma_unmap_sg(dev, sg, 1, direction);
+-		sg = sg_next(sg);
+-	}
+-	return 0;
+-}
+-
+ static int cc_map_sg(struct device *dev, struct scatterlist *sg,
+ 		     unsigned int nbytes, int direction, u32 *nents,
+ 		     u32 max_sg_nents, u32 *lbytes, u32 *mapped_nents)
+ {
+-	bool is_chained = false;
+-
+ 	if (sg_is_last(sg)) {
+ 		/* One entry only case -set to DLLI */
+ 		if (dma_map_sg(dev, sg, 1, direction) != 1) {
+@@ -361,35 +324,21 @@ static int cc_map_sg(struct device *dev, struct scatterlist *sg,
+ 		*nents = 1;
+ 		*mapped_nents = 1;
+ 	} else {  /*sg_is_last*/
+-		*nents = cc_get_sgl_nents(dev, sg, nbytes, lbytes,
+-					  &is_chained);
++		*nents = cc_get_sgl_nents(dev, sg, nbytes, lbytes);
+ 		if (*nents > max_sg_nents) {
+ 			*nents = 0;
+ 			dev_err(dev, "Too many fragments. current %d max %d\n",
+ 				*nents, max_sg_nents);
+ 			return -ENOMEM;
+ 		}
+-		if (!is_chained) {
+-			/* In case of mmu the number of mapped nents might
+-			 * be changed from the original sgl nents
+-			 */
+-			*mapped_nents = dma_map_sg(dev, sg, *nents, direction);
+-			if (*mapped_nents == 0) {
+-				*nents = 0;
+-				dev_err(dev, "dma_map_sg() sg buffer failed\n");
+-				return -ENOMEM;
+-			}
+-		} else {
+-			/*In this case the driver maps entry by entry so it
+-			 * must have the same nents before and after map
+-			 */
+-			*mapped_nents = cc_dma_map_sg(dev, sg, *nents,
+-						      direction);
+-			if (*mapped_nents != *nents) {
+-				*nents = *mapped_nents;
+-				dev_err(dev, "dma_map_sg() sg buffer failed\n");
+-				return -ENOMEM;
+-			}
++		/* In case of mmu the number of mapped nents might
++		 * be changed from the original sgl nents
++		 */
++		*mapped_nents = dma_map_sg(dev, sg, *nents, direction);
++		if (*mapped_nents == 0) {
++			*nents = 0;
++			dev_err(dev, "dma_map_sg() sg buffer failed\n");
++			return -ENOMEM;
+ 		}
+ 	}
+ 
+@@ -571,7 +520,6 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct cc_drvdata *drvdata = dev_get_drvdata(dev);
+ 	u32 dummy;
+-	bool chained;
+ 	u32 size_to_unmap = 0;
+ 
+ 	if (areq_ctx->mac_buf_dma_addr) {
+@@ -612,6 +560,7 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
+ 	if (areq_ctx->gen_ctx.iv_dma_addr) {
+ 		dma_unmap_single(dev, areq_ctx->gen_ctx.iv_dma_addr,
+ 				 hw_iv_size, DMA_BIDIRECTIONAL);
++		kzfree(areq_ctx->gen_ctx.iv);
+ 	}
+ 
+ 	/* Release pool */
+@@ -636,15 +585,14 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
+ 		size_to_unmap += crypto_aead_ivsize(tfm);
+ 
+ 	dma_unmap_sg(dev, req->src,
+-		     cc_get_sgl_nents(dev, req->src, size_to_unmap,
+-				      &dummy, &chained),
++		     cc_get_sgl_nents(dev, req->src, size_to_unmap, &dummy),
+ 		     DMA_BIDIRECTIONAL);
+ 	if (req->src != req->dst) {
+ 		dev_dbg(dev, "Unmapping dst sgl: req->dst=%pK\n",
+ 			sg_virt(req->dst));
+ 		dma_unmap_sg(dev, req->dst,
+ 			     cc_get_sgl_nents(dev, req->dst, size_to_unmap,
+-					      &dummy, &chained),
++					      &dummy),
+ 			     DMA_BIDIRECTIONAL);
+ 	}
+ 	if (drvdata->coherent &&
+@@ -717,19 +665,27 @@ static int cc_aead_chain_iv(struct cc_drvdata *drvdata,
+ 	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ 	unsigned int hw_iv_size = areq_ctx->hw_iv_size;
+ 	struct device *dev = drvdata_to_dev(drvdata);
++	gfp_t flags = cc_gfp_flags(&req->base);
+ 	int rc = 0;
+ 
+ 	if (!req->iv) {
+ 		areq_ctx->gen_ctx.iv_dma_addr = 0;
++		areq_ctx->gen_ctx.iv = NULL;
+ 		goto chain_iv_exit;
+ 	}
+ 
+-	areq_ctx->gen_ctx.iv_dma_addr = dma_map_single(dev, req->iv,
+-						       hw_iv_size,
+-						       DMA_BIDIRECTIONAL);
++	areq_ctx->gen_ctx.iv = kmemdup(req->iv, hw_iv_size, flags);
++	if (!areq_ctx->gen_ctx.iv)
++		return -ENOMEM;
++
++	areq_ctx->gen_ctx.iv_dma_addr =
++		dma_map_single(dev, areq_ctx->gen_ctx.iv, hw_iv_size,
++			       DMA_BIDIRECTIONAL);
+ 	if (dma_mapping_error(dev, areq_ctx->gen_ctx.iv_dma_addr)) {
+ 		dev_err(dev, "Mapping iv %u B at va=%pK for DMA failed\n",
+ 			hw_iv_size, req->iv);
++		kzfree(areq_ctx->gen_ctx.iv);
++		areq_ctx->gen_ctx.iv = NULL;
+ 		rc = -ENOMEM;
+ 		goto chain_iv_exit;
+ 	}
+@@ -1022,7 +978,6 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata,
+ 	unsigned int size_for_map = req->assoclen + req->cryptlen;
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	u32 sg_index = 0;
+-	bool chained = false;
+ 	bool is_gcm4543 = areq_ctx->is_gcm4543;
+ 	u32 size_to_skip = req->assoclen;
+ 
+@@ -1043,7 +998,7 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata,
+ 	size_for_map += (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ?
+ 			authsize : 0;
+ 	src_mapped_nents = cc_get_sgl_nents(dev, req->src, size_for_map,
+-					    &src_last_bytes, &chained);
++					    &src_last_bytes);
+ 	sg_index = areq_ctx->src_sgl->length;
+ 	//check where the data starts
+ 	while (sg_index <= size_to_skip) {
+@@ -1083,7 +1038,7 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata,
+ 	}
+ 
+ 	dst_mapped_nents = cc_get_sgl_nents(dev, req->dst, size_for_map,
+-					    &dst_last_bytes, &chained);
++					    &dst_last_bytes);
+ 	sg_index = areq_ctx->dst_sgl->length;
+ 	offset = size_to_skip;
+ 
+@@ -1484,7 +1439,7 @@ int cc_map_hash_request_update(struct cc_drvdata *drvdata, void *ctx,
+ 		dev_dbg(dev, " less than one block: curr_buff=%pK *curr_buff_cnt=0x%X copy_to=%pK\n",
+ 			curr_buff, *curr_buff_cnt, &curr_buff[*curr_buff_cnt]);
+ 		areq_ctx->in_nents =
+-			cc_get_sgl_nents(dev, src, nbytes, &dummy, NULL);
++			cc_get_sgl_nents(dev, src, nbytes, &dummy);
+ 		sg_copy_to_buffer(src, areq_ctx->in_nents,
+ 				  &curr_buff[*curr_buff_cnt], nbytes);
+ 		*curr_buff_cnt += nbytes;
+diff --git a/drivers/crypto/ccree/cc_driver.h b/drivers/crypto/ccree/cc_driver.h
+index 33dbf3e6d15d..a6c500dca529 100644
+--- a/drivers/crypto/ccree/cc_driver.h
++++ b/drivers/crypto/ccree/cc_driver.h
+@@ -168,6 +168,7 @@ struct cc_alg_template {
+ 
+ struct async_gen_req_ctx {
+ 	dma_addr_t iv_dma_addr;
++	u8 *iv;
+ 	enum drv_crypto_direction op_type;
+ };
+ 
+diff --git a/drivers/crypto/ccree/cc_fips.c b/drivers/crypto/ccree/cc_fips.c
+index b4d0a6d983e0..09f708f6418e 100644
+--- a/drivers/crypto/ccree/cc_fips.c
++++ b/drivers/crypto/ccree/cc_fips.c
+@@ -72,20 +72,28 @@ static inline void tee_fips_error(struct device *dev)
+ 		dev_err(dev, "TEE reported error!\n");
+ }
+ 
++/*
++ * This function checks whether a cryptocell TEE FIPS error occurred
++ * and, in such a case, triggers a system error.
++ */
++void cc_tee_handle_fips_error(struct cc_drvdata *p_drvdata)
++{
++	struct device *dev = drvdata_to_dev(p_drvdata);
++
++	if (!cc_get_tee_fips_status(p_drvdata))
++		tee_fips_error(dev);
++}
++
+ /* Deferred service handler, run as interrupt-fired tasklet */
+ static void fips_dsr(unsigned long devarg)
+ {
+ 	struct cc_drvdata *drvdata = (struct cc_drvdata *)devarg;
+-	struct device *dev = drvdata_to_dev(drvdata);
+-	u32 irq, state, val;
++	u32 irq, val;
+ 
+ 	irq = (drvdata->irq & (CC_GPR0_IRQ_MASK));
+ 
+ 	if (irq) {
+-		state = cc_ioread(drvdata, CC_REG(GPR_HOST));
+-
+-		if (state != (CC_FIPS_SYNC_TEE_STATUS | CC_FIPS_SYNC_MODULE_OK))
+-			tee_fips_error(dev);
++		cc_tee_handle_fips_error(drvdata);
+ 	}
+ 
+ 	/* after verifying that there is nothing to do,
+@@ -113,8 +121,7 @@ int cc_fips_init(struct cc_drvdata *p_drvdata)
+ 	dev_dbg(dev, "Initializing fips tasklet\n");
+ 	tasklet_init(&fips_h->tasklet, fips_dsr, (unsigned long)p_drvdata);
+ 
+-	if (!cc_get_tee_fips_status(p_drvdata))
+-		tee_fips_error(dev);
++	cc_tee_handle_fips_error(p_drvdata);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/crypto/ccree/cc_fips.h b/drivers/crypto/ccree/cc_fips.h
+index 645e096a7a82..67d5fbfa09b5 100644
+--- a/drivers/crypto/ccree/cc_fips.h
++++ b/drivers/crypto/ccree/cc_fips.h
+@@ -18,6 +18,7 @@ int cc_fips_init(struct cc_drvdata *p_drvdata);
+ void cc_fips_fini(struct cc_drvdata *drvdata);
+ void fips_handler(struct cc_drvdata *drvdata);
+ void cc_set_ree_fips_status(struct cc_drvdata *drvdata, bool ok);
++void cc_tee_handle_fips_error(struct cc_drvdata *p_drvdata);
+ 
+ #else  /* CONFIG_CRYPTO_FIPS */
+ 
+@@ -30,6 +31,7 @@ static inline void cc_fips_fini(struct cc_drvdata *drvdata) {}
+ static inline void cc_set_ree_fips_status(struct cc_drvdata *drvdata,
+ 					  bool ok) {}
+ static inline void fips_handler(struct cc_drvdata *drvdata) {}
++static inline void cc_tee_handle_fips_error(struct cc_drvdata *p_drvdata) {}
+ 
+ #endif /* CONFIG_CRYPTO_FIPS */
+ 
+diff --git a/drivers/crypto/ccree/cc_hash.c b/drivers/crypto/ccree/cc_hash.c
+index 2c4ddc8fb76b..e44cbf173606 100644
+--- a/drivers/crypto/ccree/cc_hash.c
++++ b/drivers/crypto/ccree/cc_hash.c
+@@ -69,6 +69,7 @@ struct cc_hash_alg {
+ struct hash_key_req_ctx {
+ 	u32 keylen;
+ 	dma_addr_t key_dma_addr;
++	u8 *key;
+ };
+ 
+ /* hash per-session context */
+@@ -730,13 +731,20 @@ static int cc_hash_setkey(struct crypto_ahash *ahash, const u8 *key,
+ 	ctx->key_params.keylen = keylen;
+ 	ctx->key_params.key_dma_addr = 0;
+ 	ctx->is_hmac = true;
++	ctx->key_params.key = NULL;
+ 
+ 	if (keylen) {
++		ctx->key_params.key = kmemdup(key, keylen, GFP_KERNEL);
++		if (!ctx->key_params.key)
++			return -ENOMEM;
++
+ 		ctx->key_params.key_dma_addr =
+-			dma_map_single(dev, (void *)key, keylen, DMA_TO_DEVICE);
++			dma_map_single(dev, (void *)ctx->key_params.key, keylen,
++				       DMA_TO_DEVICE);
+ 		if (dma_mapping_error(dev, ctx->key_params.key_dma_addr)) {
+ 			dev_err(dev, "Mapping key va=0x%p len=%u for DMA failed\n",
+-				key, keylen);
++				ctx->key_params.key, keylen);
++			kzfree(ctx->key_params.key);
+ 			return -ENOMEM;
+ 		}
+ 		dev_dbg(dev, "mapping key-buffer: key_dma_addr=%pad keylen=%u\n",
+@@ -887,6 +895,9 @@ out:
+ 		dev_dbg(dev, "Unmapped key-buffer: key_dma_addr=%pad keylen=%u\n",
+ 			&ctx->key_params.key_dma_addr, ctx->key_params.keylen);
+ 	}
++
++	kzfree(ctx->key_params.key);
++
+ 	return rc;
+ }
+ 
+@@ -913,11 +924,16 @@ static int cc_xcbc_setkey(struct crypto_ahash *ahash,
+ 
+ 	ctx->key_params.keylen = keylen;
+ 
++	ctx->key_params.key = kmemdup(key, keylen, GFP_KERNEL);
++	if (!ctx->key_params.key)
++		return -ENOMEM;
++
+ 	ctx->key_params.key_dma_addr =
+-		dma_map_single(dev, (void *)key, keylen, DMA_TO_DEVICE);
++		dma_map_single(dev, ctx->key_params.key, keylen, DMA_TO_DEVICE);
+ 	if (dma_mapping_error(dev, ctx->key_params.key_dma_addr)) {
+ 		dev_err(dev, "Mapping key va=0x%p len=%u for DMA failed\n",
+ 			key, keylen);
++		kzfree(ctx->key_params.key);
+ 		return -ENOMEM;
+ 	}
+ 	dev_dbg(dev, "mapping key-buffer: key_dma_addr=%pad keylen=%u\n",
+@@ -969,6 +985,8 @@ static int cc_xcbc_setkey(struct crypto_ahash *ahash,
+ 	dev_dbg(dev, "Unmapped key-buffer: key_dma_addr=%pad keylen=%u\n",
+ 		&ctx->key_params.key_dma_addr, ctx->key_params.keylen);
+ 
++	kzfree(ctx->key_params.key);
++
+ 	return rc;
+ }
+ 
+@@ -1621,7 +1639,7 @@ static struct cc_hash_template driver_hash[] = {
+ 			.setkey = cc_hash_setkey,
+ 			.halg = {
+ 				.digestsize = SHA224_DIGEST_SIZE,
+-				.statesize = CC_STATE_SIZE(SHA224_DIGEST_SIZE),
++				.statesize = CC_STATE_SIZE(SHA256_DIGEST_SIZE),
+ 			},
+ 		},
+ 		.hash_mode = DRV_HASH_SHA224,
+@@ -1648,7 +1666,7 @@ static struct cc_hash_template driver_hash[] = {
+ 			.setkey = cc_hash_setkey,
+ 			.halg = {
+ 				.digestsize = SHA384_DIGEST_SIZE,
+-				.statesize = CC_STATE_SIZE(SHA384_DIGEST_SIZE),
++				.statesize = CC_STATE_SIZE(SHA512_DIGEST_SIZE),
+ 			},
+ 		},
+ 		.hash_mode = DRV_HASH_SHA384,
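
The ccree hunks in cc_aead.c and cc_hash.c share a pattern as well: caller-supplied keys and IVs are duplicated with kmemdup() before DMA mapping, since the caller's buffer may not be DMA-safe, and released with kzfree() on every exit path. A user-space sketch with stand-in helpers (note the real kzfree() takes no length argument; the explicit wipe here is illustrative):

#include <stdlib.h>
#include <string.h>

static void *dup_buf(const void *src, size_t len)	/* like kmemdup() */
{
	void *p = malloc(len);

	if (p)
		memcpy(p, src, len);
	return p;
}

static void wipe_free(void *p, size_t len)	/* like kzfree() */
{
	if (p) {
		memset(p, 0, len);	/* wipe key material before free */
		free(p);
	}
}

static int set_key(const unsigned char *key, size_t keylen)
{
	unsigned char *copy = dup_buf(key, keylen);

	if (!copy)
		return -12;	/* -ENOMEM */

	/* map 'copy' for DMA here; on mapping failure the driver does:
	 *	wipe_free(copy, keylen); return -ENOMEM;
	 */

	/* ... issue the request, then on every exit path: */
	wipe_free(copy, keylen);
	return 0;
}
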
+diff --git a/drivers/crypto/ccree/cc_ivgen.c b/drivers/crypto/ccree/cc_ivgen.c
+index 769458323394..1abec3896a78 100644
+--- a/drivers/crypto/ccree/cc_ivgen.c
++++ b/drivers/crypto/ccree/cc_ivgen.c
+@@ -154,9 +154,6 @@ void cc_ivgen_fini(struct cc_drvdata *drvdata)
+ 	}
+ 
+ 	ivgen_ctx->pool = NULL_SRAM_ADDR;
+-
+-	/* release "this" context */
+-	kfree(ivgen_ctx);
+ }
+ 
+ /*!
+@@ -174,10 +171,12 @@ int cc_ivgen_init(struct cc_drvdata *drvdata)
+ 	int rc;
+ 
+ 	/* Allocate "this" context */
+-	ivgen_ctx = kzalloc(sizeof(*ivgen_ctx), GFP_KERNEL);
++	ivgen_ctx = devm_kzalloc(device, sizeof(*ivgen_ctx), GFP_KERNEL);
+ 	if (!ivgen_ctx)
+ 		return -ENOMEM;
+ 
++	drvdata->ivgen_handle = ivgen_ctx;
++
+ 	/* Allocate pool's header for initial enc. key/IV */
+ 	ivgen_ctx->pool_meta = dma_alloc_coherent(device, CC_IVPOOL_META_SIZE,
+ 						  &ivgen_ctx->pool_meta_dma,
+@@ -196,8 +195,6 @@ int cc_ivgen_init(struct cc_drvdata *drvdata)
+ 		goto out;
+ 	}
+ 
+-	drvdata->ivgen_handle = ivgen_ctx;
+-
+ 	return cc_init_iv_sram(drvdata);
+ 
+ out:
+diff --git a/drivers/crypto/ccree/cc_pm.c b/drivers/crypto/ccree/cc_pm.c
+index 6ff7e75ad90e..638082dff183 100644
+--- a/drivers/crypto/ccree/cc_pm.c
++++ b/drivers/crypto/ccree/cc_pm.c
+@@ -11,6 +11,7 @@
+ #include "cc_ivgen.h"
+ #include "cc_hash.h"
+ #include "cc_pm.h"
++#include "cc_fips.h"
+ 
+ #define POWER_DOWN_ENABLE 0x01
+ #define POWER_DOWN_DISABLE 0x00
+@@ -25,13 +26,13 @@ int cc_pm_suspend(struct device *dev)
+ 	int rc;
+ 
+ 	dev_dbg(dev, "set HOST_POWER_DOWN_EN\n");
+-	cc_iowrite(drvdata, CC_REG(HOST_POWER_DOWN_EN), POWER_DOWN_ENABLE);
+ 	rc = cc_suspend_req_queue(drvdata);
+ 	if (rc) {
+ 		dev_err(dev, "cc_suspend_req_queue (%x)\n", rc);
+ 		return rc;
+ 	}
+ 	fini_cc_regs(drvdata);
++	cc_iowrite(drvdata, CC_REG(HOST_POWER_DOWN_EN), POWER_DOWN_ENABLE);
+ 	cc_clk_off(drvdata);
+ 	return 0;
+ }
+@@ -42,19 +43,21 @@ int cc_pm_resume(struct device *dev)
+ 	struct cc_drvdata *drvdata = dev_get_drvdata(dev);
+ 
+ 	dev_dbg(dev, "unset HOST_POWER_DOWN_EN\n");
+-	cc_iowrite(drvdata, CC_REG(HOST_POWER_DOWN_EN), POWER_DOWN_DISABLE);
+-
++	/* Enables the device source clk */
+ 	rc = cc_clk_on(drvdata);
+ 	if (rc) {
+ 		dev_err(dev, "failed getting clock back on. We're toast.\n");
+ 		return rc;
+ 	}
+ 
++	cc_iowrite(drvdata, CC_REG(HOST_POWER_DOWN_EN), POWER_DOWN_DISABLE);
+ 	rc = init_cc_regs(drvdata, false);
+ 	if (rc) {
+ 		dev_err(dev, "init_cc_regs (%x)\n", rc);
+ 		return rc;
+ 	}
++	/* check if tee fips error occurred during power down */
++	cc_tee_handle_fips_error(drvdata);
+ 
+ 	rc = cc_resume_req_queue(drvdata);
+ 	if (rc) {
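
The cc_pm reordering above enforces a symmetric rule: on suspend the power-down enable is written only after the queue is quiesced and the registers torn down, and on resume nothing touches registers until the clock is back, after which the TEE FIPS state is re-checked. A stubbed ordering sketch — every function name below is invented, not the driver's API:

#include <stdio.h>

static int  queue_suspend(void) { puts("suspend: quiesce queue");  return 0; }
static void regs_fini(void)     { puts("suspend: tear down regs"); }
static void set_power_down(int en) { printf("power-down enable = %d\n", en); }
static int  clk_on(void)        { puts("resume: clock on");        return 0; }
static int  regs_init(void)     { puts("resume: re-init regs");    return 0; }
static void check_tee_fips(void){ puts("resume: re-check TEE FIPS state"); }

int main(void)
{
	/* suspend: the device must stay usable until the final step */
	if (queue_suspend())
		return 1;
	regs_fini();
	set_power_down(1);	/* now, and only now, allow power-down */

	/* resume: no register access before the clock is running */
	if (clk_on())
		return 1;
	set_power_down(0);
	if (regs_init())
		return 1;
	check_tee_fips();	/* an error may have hit during power-down */
	return 0;
}
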
+diff --git a/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c b/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c
+index 02dac6ae7e53..7564b4c41afc 100644
+--- a/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c
++++ b/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c
+@@ -250,9 +250,14 @@ static int rk_set_data_start(struct rk_crypto_info *dev)
+ 	u8 *src_last_blk = page_address(sg_page(dev->sg_src)) +
+ 		dev->sg_src->offset + dev->sg_src->length - ivsize;
+ 
+-	/* store the iv that need to be updated in chain mode */
+-	if (ctx->mode & RK_CRYPTO_DEC)
++	/* Store the IV that needs to be updated in chain mode,
++	 * and update the IV buffer to contain the next IV for decryption mode.
++	 */
++	if (ctx->mode & RK_CRYPTO_DEC) {
+ 		memcpy(ctx->iv, src_last_blk, ivsize);
++		sg_pcopy_to_buffer(dev->first, dev->src_nents, req->info,
++				   ivsize, dev->total - ivsize);
++	}
+ 
+ 	err = dev->load_data(dev, dev->sg_src, dev->sg_dst);
+ 	if (!err)
+@@ -288,13 +293,19 @@ static void rk_iv_copyback(struct rk_crypto_info *dev)
+ 	struct ablkcipher_request *req =
+ 		ablkcipher_request_cast(dev->async_req);
+ 	struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
++	struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm);
+ 	u32 ivsize = crypto_ablkcipher_ivsize(tfm);
+ 
+-	if (ivsize == DES_BLOCK_SIZE)
+-		memcpy_fromio(req->info, dev->reg + RK_CRYPTO_TDES_IV_0,
+-			      ivsize);
+-	else if (ivsize == AES_BLOCK_SIZE)
+-		memcpy_fromio(req->info, dev->reg + RK_CRYPTO_AES_IV_0, ivsize);
++	/* Update the IV buffer to contain the next IV for encryption mode. */
++	if (!(ctx->mode & RK_CRYPTO_DEC)) {
++		if (dev->aligned) {
++			memcpy(req->info, sg_virt(dev->sg_dst) +
++				dev->sg_dst->length - ivsize, ivsize);
++		} else {
++			memcpy(req->info, dev->addr_vir +
++				dev->count - ivsize, ivsize);
++		}
++	}
+ }
+ 
+ static void rk_update_iv(struct rk_crypto_info *dev)
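
Both rk3288 hunks implement the same CBC chaining rule: the IV for the next request is the last ciphertext block, which sits in the source buffer when decrypting and in the destination buffer when encrypting. A compilable userspace reduction of that rule — buffer handling only, no cipher, and all names are invented for the sketch:

#include <stdio.h>
#include <string.h>

#define AES_BLOCK_SIZE 16

/* Copy the next chaining IV out of the finished request. */
static void cbc_next_iv(unsigned char *iv, const unsigned char *src,
			const unsigned char *dst, size_t len, int decrypt)
{
	/* ciphertext is the input when decrypting, the output when
	 * encrypting; either way its last block becomes the next IV */
	const unsigned char *ct = decrypt ? src : dst;

	memcpy(iv, ct + len - AES_BLOCK_SIZE, AES_BLOCK_SIZE);
}

int main(void)
{
	unsigned char src[32], dst[32], iv[AES_BLOCK_SIZE];

	memset(src, 0xaa, sizeof(src));	/* pretend ciphertext in */
	memset(dst, 0x55, sizeof(dst));	/* pretend ciphertext out */

	cbc_next_iv(iv, src, dst, sizeof(src), 1);
	printf("decrypt: next iv starts 0x%02x\n", iv[0]);
	cbc_next_iv(iv, src, dst, sizeof(dst), 0);
	printf("encrypt: next iv starts 0x%02x\n", iv[0]);
	return 0;
}
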
+diff --git a/drivers/crypto/vmx/aesp8-ppc.pl b/drivers/crypto/vmx/aesp8-ppc.pl
+index d6a9f63d65ba..de78282b8f44 100644
+--- a/drivers/crypto/vmx/aesp8-ppc.pl
++++ b/drivers/crypto/vmx/aesp8-ppc.pl
+@@ -1854,7 +1854,7 @@ Lctr32_enc8x_three:
+ 	stvx_u		$out1,$x10,$out
+ 	stvx_u		$out2,$x20,$out
+ 	addi		$out,$out,0x30
+-	b		Lcbc_dec8x_done
++	b		Lctr32_enc8x_done
+ 
+ .align	5
+ Lctr32_enc8x_two:
+@@ -1866,7 +1866,7 @@ Lctr32_enc8x_two:
+ 	stvx_u		$out0,$x00,$out
+ 	stvx_u		$out1,$x10,$out
+ 	addi		$out,$out,0x20
+-	b		Lcbc_dec8x_done
++	b		Lctr32_enc8x_done
+ 
+ .align	5
+ Lctr32_enc8x_one:
+diff --git a/drivers/dax/Kconfig b/drivers/dax/Kconfig
+index 5ef624fe3934..a59f338f520f 100644
+--- a/drivers/dax/Kconfig
++++ b/drivers/dax/Kconfig
+@@ -23,7 +23,6 @@ config DEV_DAX
+ config DEV_DAX_PMEM
+ 	tristate "PMEM DAX: direct access to persistent memory"
+ 	depends on LIBNVDIMM && NVDIMM_DAX && DEV_DAX
+-	depends on m # until we can kill DEV_DAX_PMEM_COMPAT
+ 	default DEV_DAX
+ 	help
+ 	  Support raw access to persistent memory.  Note that this
+@@ -50,7 +49,7 @@ config DEV_DAX_KMEM
+ 
+ config DEV_DAX_PMEM_COMPAT
+ 	tristate "PMEM DAX: support the deprecated /sys/class/dax interface"
+-	depends on DEV_DAX_PMEM
++	depends on m && DEV_DAX_PMEM=m
+ 	default DEV_DAX_PMEM
+ 	help
+ 	  Older versions of the libdaxctl library expect to find all
+diff --git a/drivers/dax/device.c b/drivers/dax/device.c
+index e428468ab661..996d68ff992a 100644
+--- a/drivers/dax/device.c
++++ b/drivers/dax/device.c
+@@ -184,8 +184,7 @@ static vm_fault_t __dev_dax_pmd_fault(struct dev_dax *dev_dax,
+ 
+ 	*pfn = phys_to_pfn_t(phys, dax_region->pfn_flags);
+ 
+-	return vmf_insert_pfn_pmd(vmf->vma, vmf->address, vmf->pmd, *pfn,
+-			vmf->flags & FAULT_FLAG_WRITE);
++	return vmf_insert_pfn_pmd(vmf, *pfn, vmf->flags & FAULT_FLAG_WRITE);
+ }
+ 
+ #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+@@ -235,8 +234,7 @@ static vm_fault_t __dev_dax_pud_fault(struct dev_dax *dev_dax,
+ 
+ 	*pfn = phys_to_pfn_t(phys, dax_region->pfn_flags);
+ 
+-	return vmf_insert_pfn_pud(vmf->vma, vmf->address, vmf->pud, *pfn,
+-			vmf->flags & FAULT_FLAG_WRITE);
++	return vmf_insert_pfn_pud(vmf, *pfn, vmf->flags & FAULT_FLAG_WRITE);
+ }
+ #else
+ static vm_fault_t __dev_dax_pud_fault(struct dev_dax *dev_dax,
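
The dax changes track an interface cleanup visible again in the huge_mm.h hunk further down: vmf_insert_pfn_pmd()/_pud() now take the struct vm_fault directly instead of the vma/address/page-table triple already stored inside it. A stand-alone illustration of the shape of the change, with types trimmed to stand-ins rather than the kernel's:

#include <stdbool.h>
#include <stdio.h>

#define FAULT_FLAG_WRITE 0x01

struct vm_fault {			/* trimmed stand-in */
	unsigned long address;
	unsigned int flags;
};

/* new-style helper: everything needed comes from the fault itself */
static int vmf_insert_pfn_pmd(const struct vm_fault *vmf,
			      unsigned long pfn, bool write)
{
	printf("map pfn %lu at %#lx %s\n", pfn, vmf->address,
	       write ? "writable" : "read-only");
	return 0;
}

int main(void)
{
	struct vm_fault vmf = {
		.address = 0x7f0000200000UL,
		.flags = FAULT_FLAG_WRITE,
	};

	/* callers no longer repeat vmf->vma / vmf->address / vmf->pmd */
	return vmf_insert_pfn_pmd(&vmf, 12345,
				  vmf.flags & FAULT_FLAG_WRITE);
}
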
+diff --git a/drivers/edac/mce_amd.c b/drivers/edac/mce_amd.c
+index 0a1814dad6cf..bb0202ad7a13 100644
+--- a/drivers/edac/mce_amd.c
++++ b/drivers/edac/mce_amd.c
+@@ -1004,7 +1004,7 @@ static inline void amd_decode_err_code(u16 ec)
+ /*
+  * Filter out unwanted MCE signatures here.
+  */
+-static bool amd_filter_mce(struct mce *m)
++static bool ignore_mce(struct mce *m)
+ {
+ 	/*
+ 	 * NB GART TLB error reporting is disabled by default.
+@@ -1038,7 +1038,7 @@ amd_decode_mce(struct notifier_block *nb, unsigned long val, void *data)
+ 	unsigned int fam = x86_family(m->cpuid);
+ 	int ecc;
+ 
+-	if (amd_filter_mce(m))
++	if (ignore_mce(m))
+ 		return NOTIFY_STOP;
+ 
+ 	pr_emerg(HW_ERR "%s\n", decode_error_status(m));
+diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
+index b2fd412715b1..d3725c17ce3a 100644
+--- a/drivers/md/bcache/journal.c
++++ b/drivers/md/bcache/journal.c
+@@ -540,11 +540,11 @@ static void journal_reclaim(struct cache_set *c)
+ 				  ca->sb.nr_this_dev);
+ 	}
+ 
+-	bkey_init(k);
+-	SET_KEY_PTRS(k, n);
+-
+-	if (n)
++	if (n) {
++		bkey_init(k);
++		SET_KEY_PTRS(k, n);
+ 		c->journal.blocks_free = c->sb.bucket_size >> c->block_bits;
++	}
+ out:
+ 	if (!journal_full(&c->journal))
+ 		__closure_wake_up(&c->journal.wait);
+@@ -671,6 +671,9 @@ static void journal_write_unlocked(struct closure *cl)
+ 		ca->journal.seq[ca->journal.cur_idx] = w->data->seq;
+ 	}
+ 
++	/* If KEY_PTRS(k) == 0, this jset would simply be lost */
++	BUG_ON(i == 0);
++
+ 	atomic_dec_bug(&fifo_back(&c->journal.pin));
+ 	bch_journal_next(&c->journal);
+ 	journal_reclaim(c);
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index a697a3a923cd..171d5e0f698b 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -1516,6 +1516,7 @@ static void cache_set_free(struct closure *cl)
+ 	bch_btree_cache_free(c);
+ 	bch_journal_free(c);
+ 
++	mutex_lock(&bch_register_lock);
+ 	for_each_cache(ca, c, i)
+ 		if (ca) {
+ 			ca->set = NULL;
+@@ -1534,7 +1535,6 @@ static void cache_set_free(struct closure *cl)
+ 	mempool_exit(&c->search);
+ 	kfree(c->devices);
+ 
+-	mutex_lock(&bch_register_lock);
+ 	list_del(&c->list);
+ 	mutex_unlock(&bch_register_lock);
+ 
+diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
+index 7c364a9c4eeb..b5b9c6142f08 100644
+--- a/drivers/mmc/core/queue.c
++++ b/drivers/mmc/core/queue.c
+@@ -472,6 +472,7 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
+ 		blk_mq_unquiesce_queue(q);
+ 
+ 	blk_cleanup_queue(q);
++	blk_mq_free_tag_set(&mq->tag_set);
+ 
+ 	/*
+ 	 * A request can be completed before the next request, potentially
+diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig
+index 28fcd8f580a1..94032939cbbd 100644
+--- a/drivers/mmc/host/Kconfig
++++ b/drivers/mmc/host/Kconfig
+@@ -92,6 +92,7 @@ config MMC_SDHCI_PCI
+ 	tristate "SDHCI support on PCI bus"
+ 	depends on MMC_SDHCI && PCI
+ 	select MMC_CQHCI
++	select IOSF_MBI if X86
+ 	help
+ 	  This selects the PCI Secure Digital Host Controller Interface.
+ 	  Most controllers found today are PCI devices.
+diff --git a/drivers/mmc/host/sdhci-of-arasan.c b/drivers/mmc/host/sdhci-of-arasan.c
+index c9e3e050ccc8..88dc3f00a5be 100644
+--- a/drivers/mmc/host/sdhci-of-arasan.c
++++ b/drivers/mmc/host/sdhci-of-arasan.c
+@@ -832,7 +832,10 @@ static int sdhci_arasan_probe(struct platform_device *pdev)
+ 		host->mmc_host_ops.start_signal_voltage_switch =
+ 					sdhci_arasan_voltage_switch;
+ 		sdhci_arasan->has_cqe = true;
+-		host->mmc->caps2 |= MMC_CAP2_CQE | MMC_CAP2_CQE_DCMD;
++		host->mmc->caps2 |= MMC_CAP2_CQE;
++
++		if (!of_property_read_bool(np, "disable-cqe-dcmd"))
++			host->mmc->caps2 |= MMC_CAP2_CQE_DCMD;
+ 	}
+ 
+ 	ret = sdhci_arasan_add_host(sdhci_arasan);
+diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
+index 99b0fec2836b..7bd27198178d 100644
+--- a/drivers/mmc/host/sdhci-pci-core.c
++++ b/drivers/mmc/host/sdhci-pci-core.c
+@@ -31,6 +31,10 @@
+ #include <linux/mmc/sdhci-pci-data.h>
+ #include <linux/acpi.h>
+ 
++#ifdef CONFIG_X86
++#include <asm/iosf_mbi.h>
++#endif
++
+ #include "cqhci.h"
+ 
+ #include "sdhci.h"
+@@ -451,6 +455,50 @@ static const struct sdhci_pci_fixes sdhci_intel_pch_sdio = {
+ 	.probe_slot	= pch_hc_probe_slot,
+ };
+ 
++#ifdef CONFIG_X86
++
++#define BYT_IOSF_SCCEP			0x63
++#define BYT_IOSF_OCP_NETCTRL0		0x1078
++#define BYT_IOSF_OCP_TIMEOUT_BASE	GENMASK(10, 8)
++
++static void byt_ocp_setting(struct pci_dev *pdev)
++{
++	u32 val = 0;
++
++	if (pdev->device != PCI_DEVICE_ID_INTEL_BYT_EMMC &&
++	    pdev->device != PCI_DEVICE_ID_INTEL_BYT_SDIO &&
++	    pdev->device != PCI_DEVICE_ID_INTEL_BYT_SD &&
++	    pdev->device != PCI_DEVICE_ID_INTEL_BYT_EMMC2)
++		return;
++
++	if (iosf_mbi_read(BYT_IOSF_SCCEP, MBI_CR_READ, BYT_IOSF_OCP_NETCTRL0,
++			  &val)) {
++		dev_err(&pdev->dev, "%s read error\n", __func__);
++		return;
++	}
++
++	if (!(val & BYT_IOSF_OCP_TIMEOUT_BASE))
++		return;
++
++	val &= ~BYT_IOSF_OCP_TIMEOUT_BASE;
++
++	if (iosf_mbi_write(BYT_IOSF_SCCEP, MBI_CR_WRITE, BYT_IOSF_OCP_NETCTRL0,
++			   val)) {
++		dev_err(&pdev->dev, "%s write error\n", __func__);
++		return;
++	}
++
++	dev_dbg(&pdev->dev, "%s completed\n", __func__);
++}
++
++#else
++
++static inline void byt_ocp_setting(struct pci_dev *pdev)
++{
++}
++
++#endif
++
+ enum {
+ 	INTEL_DSM_FNS		=  0,
+ 	INTEL_DSM_V18_SWITCH	=  3,
+@@ -715,6 +763,8 @@ static void byt_probe_slot(struct sdhci_pci_slot *slot)
+ 
+ 	byt_read_dsm(slot);
+ 
++	byt_ocp_setting(slot->chip->pdev);
++
+ 	ops->execute_tuning = intel_execute_tuning;
+ 	ops->start_signal_voltage_switch = intel_start_signal_voltage_switch;
+ 
+@@ -938,7 +988,35 @@ static int byt_sd_probe_slot(struct sdhci_pci_slot *slot)
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_PM_SLEEP
++
++static int byt_resume(struct sdhci_pci_chip *chip)
++{
++	byt_ocp_setting(chip->pdev);
++
++	return sdhci_pci_resume_host(chip);
++}
++
++#endif
++
++#ifdef CONFIG_PM
++
++static int byt_runtime_resume(struct sdhci_pci_chip *chip)
++{
++	byt_ocp_setting(chip->pdev);
++
++	return sdhci_pci_runtime_resume_host(chip);
++}
++
++#endif
++
+ static const struct sdhci_pci_fixes sdhci_intel_byt_emmc = {
++#ifdef CONFIG_PM_SLEEP
++	.resume		= byt_resume,
++#endif
++#ifdef CONFIG_PM
++	.runtime_resume	= byt_runtime_resume,
++#endif
+ 	.allow_runtime_pm = true,
+ 	.probe_slot	= byt_emmc_probe_slot,
+ 	.quirks		= SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC |
+@@ -972,6 +1050,12 @@ static const struct sdhci_pci_fixes sdhci_intel_glk_emmc = {
+ };
+ 
+ static const struct sdhci_pci_fixes sdhci_ni_byt_sdio = {
++#ifdef CONFIG_PM_SLEEP
++	.resume		= byt_resume,
++#endif
++#ifdef CONFIG_PM
++	.runtime_resume	= byt_runtime_resume,
++#endif
+ 	.quirks		= SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC |
+ 			  SDHCI_QUIRK_NO_LED,
+ 	.quirks2	= SDHCI_QUIRK2_HOST_OFF_CARD_ON |
+@@ -983,6 +1067,12 @@ static const struct sdhci_pci_fixes sdhci_ni_byt_sdio = {
+ };
+ 
+ static const struct sdhci_pci_fixes sdhci_intel_byt_sdio = {
++#ifdef CONFIG_PM_SLEEP
++	.resume		= byt_resume,
++#endif
++#ifdef CONFIG_PM
++	.runtime_resume	= byt_runtime_resume,
++#endif
+ 	.quirks		= SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC |
+ 			  SDHCI_QUIRK_NO_LED,
+ 	.quirks2	= SDHCI_QUIRK2_HOST_OFF_CARD_ON |
+@@ -994,6 +1084,12 @@ static const struct sdhci_pci_fixes sdhci_intel_byt_sdio = {
+ };
+ 
+ static const struct sdhci_pci_fixes sdhci_intel_byt_sd = {
++#ifdef CONFIG_PM_SLEEP
++	.resume		= byt_resume,
++#endif
++#ifdef CONFIG_PM
++	.runtime_resume	= byt_runtime_resume,
++#endif
+ 	.quirks		= SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC |
+ 			  SDHCI_QUIRK_NO_LED,
+ 	.quirks2	= SDHCI_QUIRK2_CARD_ON_NEEDS_BUS_ON |
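
byt_ocp_setting() above is a plain read-modify-write that clears a three-bit timeout field over the IOSF sideband, skipping the write when the field is already zero. The bit arithmetic lifted into userspace C — GENMASK is open-coded and the register value is faked:

#include <stdio.h>

#define GENMASK(h, l)	((~0u << (l)) & (~0u >> (31 - (h))))
#define OCP_TIMEOUT_BASE GENMASK(10, 8)	/* bits 10..8 */

int main(void)
{
	unsigned int val = 0x0700;	/* pretend iosf_mbi_read() result */

	if (!(val & OCP_TIMEOUT_BASE))
		return 0;		/* nothing set, skip the write */

	val &= ~OCP_TIMEOUT_BASE;	/* clear the timeout field */
	printf("write back %#x\n", val);/* pretend iosf_mbi_write() */
	return 0;
}
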
+diff --git a/drivers/mmc/host/sdhci-tegra.c b/drivers/mmc/host/sdhci-tegra.c
+index 32e62904c0d3..46086dd43bfb 100644
+--- a/drivers/mmc/host/sdhci-tegra.c
++++ b/drivers/mmc/host/sdhci-tegra.c
+@@ -779,6 +779,7 @@ static void tegra_sdhci_set_uhs_signaling(struct sdhci_host *host,
+ 	bool set_dqs_trim = false;
+ 	bool do_hs400_dll_cal = false;
+ 
++	tegra_host->ddr_signaling = false;
+ 	switch (timing) {
+ 	case MMC_TIMING_UHS_SDR50:
+ 	case MMC_TIMING_UHS_SDR104:
+diff --git a/drivers/mtd/maps/Kconfig b/drivers/mtd/maps/Kconfig
+index e0cf869c8544..544ed1931843 100644
+--- a/drivers/mtd/maps/Kconfig
++++ b/drivers/mtd/maps/Kconfig
+@@ -10,7 +10,7 @@ config MTD_COMPLEX_MAPPINGS
+ 
+ config MTD_PHYSMAP
+ 	tristate "Flash device in physical memory map"
+-	depends on MTD_CFI || MTD_JEDECPROBE || MTD_ROM || MTD_LPDDR
++	depends on MTD_CFI || MTD_JEDECPROBE || MTD_ROM || MTD_RAM || MTD_LPDDR
+ 	help
+ 	  This provides a 'mapping' driver which allows the NOR Flash and
+ 	  ROM driver code to communicate with chips which are mapped
+diff --git a/drivers/mtd/maps/physmap-core.c b/drivers/mtd/maps/physmap-core.c
+index d9a3e4bebe5d..21b556afc305 100644
+--- a/drivers/mtd/maps/physmap-core.c
++++ b/drivers/mtd/maps/physmap-core.c
+@@ -132,6 +132,8 @@ static void physmap_set_addr_gpios(struct physmap_flash_info *info,
+ 
+ 		gpiod_set_value(info->gpios->desc[i], !!(BIT(i) & ofs));
+ 	}
++
++	info->gpio_values = ofs;
+ }
+ 
+ #define win_mask(order)		(BIT(order) - 1)
+diff --git a/drivers/mtd/spi-nor/intel-spi.c b/drivers/mtd/spi-nor/intel-spi.c
+index af0a22019516..d60cbf23d9aa 100644
+--- a/drivers/mtd/spi-nor/intel-spi.c
++++ b/drivers/mtd/spi-nor/intel-spi.c
+@@ -632,6 +632,10 @@ static ssize_t intel_spi_read(struct spi_nor *nor, loff_t from, size_t len,
+ 	while (len > 0) {
+ 		block_size = min_t(size_t, len, INTEL_SPI_FIFO_SZ);
+ 
++		/* Read cannot cross 4K boundary */
++		block_size = min_t(loff_t, from + block_size,
++				   round_up(from + 1, SZ_4K)) - from;
++
+ 		writel(from, ispi->base + FADDR);
+ 
+ 		val = readl(ispi->base + HSFSTS_CTL);
+@@ -685,6 +689,10 @@ static ssize_t intel_spi_write(struct spi_nor *nor, loff_t to, size_t len,
+ 	while (len > 0) {
+ 		block_size = min_t(size_t, len, INTEL_SPI_FIFO_SZ);
+ 
++		/* Write cannot cross 4K boundary */
++		block_size = min_t(loff_t, to + block_size,
++				   round_up(to + 1, SZ_4K)) - to;
++
+ 		writel(to, ispi->base + FADDR);
+ 
+ 		val = readl(ispi->base + HSFSTS_CTL);
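
The two intel-spi hunks apply the same clamp: a FIFO-sized chunk may not cross a 4 KiB flash boundary, so its length is cut at the next 4K line. The arithmetic on its own, with an assumed FIFO depth:

#include <stdio.h>

#define SZ_4K	4096UL
#define FIFO_SZ	64UL			/* assumed FIFO depth */

static unsigned long round_up_4k(unsigned long x)
{
	return (x + SZ_4K - 1) & ~(SZ_4K - 1);
}

static unsigned long chunk_len(unsigned long from, unsigned long len)
{
	unsigned long block = len < FIFO_SZ ? len : FIFO_SZ;
	unsigned long limit = round_up_4k(from + 1);

	/* transfer cannot cross a 4K boundary */
	if (from + block > limit)
		block = limit - from;
	return block;
}

int main(void)
{
	/* 40 bytes starting 16 below a boundary: clamped to 16 */
	printf("%lu\n", chunk_len(SZ_4K - 16, 40));
	/* boundary-aligned start: the full 64-byte chunk survives */
	printf("%lu\n", chunk_len(SZ_4K, 100));
	return 0;
}
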
+diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
+index f3d753d3169c..2030805aa216 100644
+--- a/drivers/nvdimm/label.c
++++ b/drivers/nvdimm/label.c
+@@ -756,6 +756,17 @@ static const guid_t *to_abstraction_guid(enum nvdimm_claim_class claim_class,
+ 		return &guid_null;
+ }
+ 
++static void reap_victim(struct nd_mapping *nd_mapping,
++		struct nd_label_ent *victim)
++{
++	struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
++	u32 slot = to_slot(ndd, victim->label);
++
++	dev_dbg(ndd->dev, "free: %d\n", slot);
++	nd_label_free_slot(ndd, slot);
++	victim->label = NULL;
++}
++
+ static int __pmem_label_update(struct nd_region *nd_region,
+ 		struct nd_mapping *nd_mapping, struct nd_namespace_pmem *nspm,
+ 		int pos, unsigned long flags)
+@@ -763,9 +774,9 @@ static int __pmem_label_update(struct nd_region *nd_region,
+ 	struct nd_namespace_common *ndns = &nspm->nsio.common;
+ 	struct nd_interleave_set *nd_set = nd_region->nd_set;
+ 	struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
+-	struct nd_label_ent *label_ent, *victim = NULL;
+ 	struct nd_namespace_label *nd_label;
+ 	struct nd_namespace_index *nsindex;
++	struct nd_label_ent *label_ent;
+ 	struct nd_label_id label_id;
+ 	struct resource *res;
+ 	unsigned long *free;
+@@ -834,18 +845,10 @@ static int __pmem_label_update(struct nd_region *nd_region,
+ 	list_for_each_entry(label_ent, &nd_mapping->labels, list) {
+ 		if (!label_ent->label)
+ 			continue;
+-		if (memcmp(nspm->uuid, label_ent->label->uuid,
+-					NSLABEL_UUID_LEN) != 0)
+-			continue;
+-		victim = label_ent;
+-		list_move_tail(&victim->list, &nd_mapping->labels);
+-		break;
+-	}
+-	if (victim) {
+-		dev_dbg(ndd->dev, "free: %d\n", slot);
+-		slot = to_slot(ndd, victim->label);
+-		nd_label_free_slot(ndd, slot);
+-		victim->label = NULL;
++		if (test_and_clear_bit(ND_LABEL_REAP, &label_ent->flags)
++				|| memcmp(nspm->uuid, label_ent->label->uuid,
++					NSLABEL_UUID_LEN) == 0)
++			reap_victim(nd_mapping, label_ent);
+ 	}
+ 
+ 	/* update index */
+diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
+index f293556cbbf6..d0214644e334 100644
+--- a/drivers/nvdimm/namespace_devs.c
++++ b/drivers/nvdimm/namespace_devs.c
+@@ -1247,12 +1247,27 @@ static int namespace_update_uuid(struct nd_region *nd_region,
+ 	for (i = 0; i < nd_region->ndr_mappings; i++) {
+ 		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+ 		struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
++		struct nd_label_ent *label_ent;
+ 		struct resource *res;
+ 
+ 		for_each_dpa_resource(ndd, res)
+ 			if (strcmp(res->name, old_label_id.id) == 0)
+ 				sprintf((void *) res->name, "%s",
+ 						new_label_id.id);
++
++		mutex_lock(&nd_mapping->lock);
++		list_for_each_entry(label_ent, &nd_mapping->labels, list) {
++			struct nd_namespace_label *nd_label = label_ent->label;
++			struct nd_label_id label_id;
++
++			if (!nd_label)
++				continue;
++			nd_label_gen_id(&label_id, nd_label->uuid,
++					__le32_to_cpu(nd_label->flags));
++			if (strcmp(old_label_id.id, label_id.id) == 0)
++				set_bit(ND_LABEL_REAP, &label_ent->flags);
++		}
++		mutex_unlock(&nd_mapping->lock);
+ 	}
+ 	kfree(*old_uuid);
+  out:
+diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
+index a5ac3b240293..191d62af0e51 100644
+--- a/drivers/nvdimm/nd.h
++++ b/drivers/nvdimm/nd.h
+@@ -113,8 +113,12 @@ struct nd_percpu_lane {
+ 	spinlock_t lock;
+ };
+ 
++enum nd_label_flags {
++	ND_LABEL_REAP,
++};
+ struct nd_label_ent {
+ 	struct list_head list;
++	unsigned long flags;
+ 	struct nd_namespace_label *label;
+ };
+ 
+diff --git a/drivers/power/supply/axp288_charger.c b/drivers/power/supply/axp288_charger.c
+index f8c6da9277b3..00b961890a38 100644
+--- a/drivers/power/supply/axp288_charger.c
++++ b/drivers/power/supply/axp288_charger.c
+@@ -833,6 +833,10 @@ static int axp288_charger_probe(struct platform_device *pdev)
+ 	/* Register charger interrupts */
+ 	for (i = 0; i < CHRG_INTR_END; i++) {
+ 		pirq = platform_get_irq(info->pdev, i);
++		if (pirq < 0) {
++			dev_err(&pdev->dev, "Failed to get IRQ: %d\n", pirq);
++			return pirq;
++		}
+ 		info->irq[i] = regmap_irq_get_virq(info->regmap_irqc, pirq);
+ 		if (info->irq[i] < 0) {
+ 			dev_warn(&info->pdev->dev,
+diff --git a/drivers/power/supply/axp288_fuel_gauge.c b/drivers/power/supply/axp288_fuel_gauge.c
+index 9ff2461820d8..368281bc0d2b 100644
+--- a/drivers/power/supply/axp288_fuel_gauge.c
++++ b/drivers/power/supply/axp288_fuel_gauge.c
+@@ -685,6 +685,26 @@ intr_failed:
+  * detection reports one despite it not being there.
+  */
+ static const struct dmi_system_id axp288_fuel_gauge_blacklist[] = {
++	{
++		/* ACEPC T8 Cherry Trail Z8350 mini PC */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "To be filled by O.E.M."),
++			DMI_EXACT_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "T8"),
++			/* also match on somewhat unique bios-version */
++			DMI_EXACT_MATCH(DMI_BIOS_VERSION, "1.000"),
++		},
++	},
++	{
++		/* ACEPC T11 Cherry Trail Z8350 mini PC */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "To be filled by O.E.M."),
++			DMI_EXACT_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "T11"),
++			/* also match on somewhat unique bios-version */
++			DMI_EXACT_MATCH(DMI_BIOS_VERSION, "1.000"),
++		},
++	},
+ 	{
+ 		/* Intel Cherry Trail Compute Stick, Windows version */
+ 		.matches = {
+diff --git a/drivers/tty/hvc/hvc_riscv_sbi.c b/drivers/tty/hvc/hvc_riscv_sbi.c
+index 75155bde2b88..31f53fa77e4a 100644
+--- a/drivers/tty/hvc/hvc_riscv_sbi.c
++++ b/drivers/tty/hvc/hvc_riscv_sbi.c
+@@ -53,7 +53,6 @@ device_initcall(hvc_sbi_init);
+ static int __init hvc_sbi_console_init(void)
+ {
+ 	hvc_instantiate(0, 0, &hvc_sbi_ops);
+-	add_preferred_console("hvc", 0, NULL);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/tty/vt/keyboard.c b/drivers/tty/vt/keyboard.c
+index 88312c6c92cc..0617e87ab343 100644
+--- a/drivers/tty/vt/keyboard.c
++++ b/drivers/tty/vt/keyboard.c
+@@ -123,6 +123,7 @@ static const int NR_TYPES = ARRAY_SIZE(max_vals);
+ static struct input_handler kbd_handler;
+ static DEFINE_SPINLOCK(kbd_event_lock);
+ static DEFINE_SPINLOCK(led_lock);
++static DEFINE_SPINLOCK(func_buf_lock); /* guard 'func_buf'  and friends */
+ static unsigned long key_down[BITS_TO_LONGS(KEY_CNT)];	/* keyboard key bitmap */
+ static unsigned char shift_down[NR_SHIFT];		/* shift state counters.. */
+ static bool dead_key_next;
+@@ -1990,11 +1991,12 @@ int vt_do_kdgkb_ioctl(int cmd, struct kbsentry __user *user_kdgkb, int perm)
+ 	char *p;
+ 	u_char *q;
+ 	u_char __user *up;
+-	int sz;
++	int sz, fnw_sz;
+ 	int delta;
+ 	char *first_free, *fj, *fnw;
+ 	int i, j, k;
+ 	int ret;
++	unsigned long flags;
+ 
+ 	if (!capable(CAP_SYS_TTY_CONFIG))
+ 		perm = 0;
+@@ -2037,7 +2039,14 @@ int vt_do_kdgkb_ioctl(int cmd, struct kbsentry __user *user_kdgkb, int perm)
+ 			goto reterr;
+ 		}
+ 
++		fnw = NULL;
++		fnw_sz = 0;
++		/* race against other writers */
++		again:
++		spin_lock_irqsave(&func_buf_lock, flags);
+ 		q = func_table[i];
++
++		/* fj points to the next entry after 'q' */
+ 		first_free = funcbufptr + (funcbufsize - funcbufleft);
+ 		for (j = i+1; j < MAX_NR_FUNC && !func_table[j]; j++)
+ 			;
+@@ -2045,10 +2054,12 @@ int vt_do_kdgkb_ioctl(int cmd, struct kbsentry __user *user_kdgkb, int perm)
+ 			fj = func_table[j];
+ 		else
+ 			fj = first_free;
+-
++		/* buffer usage change caused by the new entry */
+ 		delta = (q ? -strlen(q) : 1) + strlen(kbs->kb_string);
++
+ 		if (delta <= funcbufleft) { 	/* it fits in current buf */
+ 		    if (j < MAX_NR_FUNC) {
++			/* make enough space for new entry at 'fj' */
+ 			memmove(fj + delta, fj, first_free - fj);
+ 			for (k = j; k < MAX_NR_FUNC; k++)
+ 			    if (func_table[k])
+@@ -2061,20 +2072,28 @@ int vt_do_kdgkb_ioctl(int cmd, struct kbsentry __user *user_kdgkb, int perm)
+ 		    sz = 256;
+ 		    while (sz < funcbufsize - funcbufleft + delta)
+ 		      sz <<= 1;
+-		    fnw = kmalloc(sz, GFP_KERNEL);
+-		    if(!fnw) {
+-		      ret = -ENOMEM;
+-		      goto reterr;
++		    if (fnw_sz != sz) {
++		      spin_unlock_irqrestore(&func_buf_lock, flags);
++		      kfree(fnw);
++		      fnw = kmalloc(sz, GFP_KERNEL);
++		      fnw_sz = sz;
++		      if (!fnw) {
++			ret = -ENOMEM;
++			goto reterr;
++		      }
++		      goto again;
+ 		    }
+ 
+ 		    if (!q)
+ 		      func_table[i] = fj;
++		    /* copy data before insertion point to new location */
+ 		    if (fj > funcbufptr)
+ 			memmove(fnw, funcbufptr, fj - funcbufptr);
+ 		    for (k = 0; k < j; k++)
+ 		      if (func_table[k])
+ 			func_table[k] = fnw + (func_table[k] - funcbufptr);
+ 
++		    /* copy data after insertion point to new location */
+ 		    if (first_free > fj) {
+ 			memmove(fnw + (fj - funcbufptr) + delta, fj, first_free - fj);
+ 			for (k = j; k < MAX_NR_FUNC; k++)
+@@ -2087,7 +2106,9 @@ int vt_do_kdgkb_ioctl(int cmd, struct kbsentry __user *user_kdgkb, int perm)
+ 		    funcbufleft = funcbufleft - delta + sz - funcbufsize;
+ 		    funcbufsize = sz;
+ 		}
++		/* finally insert item itself */
+ 		strcpy(func_table[i], kbs->kb_string);
++		spin_unlock_irqrestore(&func_buf_lock, flags);
+ 		break;
+ 	}
+ 	ret = 0;
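
The keyboard fix above is a textbook instance of a common shape: a GFP_KERNEL allocation cannot happen under a spinlock, so the lock is dropped, the buffer is allocated, and the whole critical section is retried from the again: label, re-checking whether the size computed under the lock is still the one needed. A userspace reduction with a pthread mutex standing in for the spinlock; all names are invented:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;
static char *buf;
static size_t buf_size = 256;		/* stand-in for funcbufsize */

static int grow_buffer(size_t need)
{
	char *fnw = NULL;
	size_t fnw_sz = 0, sz;

again:
	pthread_mutex_lock(&buf_lock);
	sz = buf_size;
	while (sz < need)
		sz <<= 1;
	if (fnw_sz != sz) {
		/* may sleep: never allocate while holding the lock */
		pthread_mutex_unlock(&buf_lock);
		free(fnw);
		fnw = malloc(sz);
		fnw_sz = sz;
		if (!fnw)
			return -1;
		/* another writer may have grown the buffer meanwhile */
		goto again;
	}
	/* ... migrate entries into fnw while still locked ... */
	free(buf);
	buf = fnw;
	buf_size = sz;
	pthread_mutex_unlock(&buf_lock);
	return 0;
}

int main(void)
{
	printf("grow: %d, size now %zu\n", grow_buffer(1000), buf_size);
	free(buf);
	return 0;
}

Build with -lpthread; the second pass through again: is the one that commits, exactly as in the vt_do_kdgkb_ioctl() change.
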
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index 650c66886c80..693b3b4176f5 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -4179,8 +4179,6 @@ void do_blank_screen(int entering_gfx)
+ 		return;
+ 	}
+ 
+-	if (blank_state != blank_normal_wait)
+-		return;
+ 	blank_state = blank_off;
+ 
+ 	/* don't blank graphics */
+diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
+index 11459fe84a29..fd4d968f1b41 100644
+--- a/fs/btrfs/backref.c
++++ b/fs/btrfs/backref.c
+@@ -1460,8 +1460,8 @@ int btrfs_find_all_roots(struct btrfs_trans_handle *trans,
+  * callers (such as fiemap) which want to know whether the extent is
+  * shared but do not need a ref count.
+  *
+- * This attempts to allocate a transaction in order to account for
+- * delayed refs, but continues on even when the alloc fails.
++ * This attempts to attach to the running transaction in order to account for
++ * delayed refs, but continues on even when no running transaction exists.
+  *
+  * Return: 0 if extent is not shared, 1 if it is shared, < 0 on error.
+  */
+@@ -1484,13 +1484,16 @@ int btrfs_check_shared(struct btrfs_root *root, u64 inum, u64 bytenr)
+ 	tmp = ulist_alloc(GFP_NOFS);
+ 	roots = ulist_alloc(GFP_NOFS);
+ 	if (!tmp || !roots) {
+-		ulist_free(tmp);
+-		ulist_free(roots);
+-		return -ENOMEM;
++		ret = -ENOMEM;
++		goto out;
+ 	}
+ 
+-	trans = btrfs_join_transaction(root);
++	trans = btrfs_attach_transaction(root);
+ 	if (IS_ERR(trans)) {
++		if (PTR_ERR(trans) != -ENOENT && PTR_ERR(trans) != -EROFS) {
++			ret = PTR_ERR(trans);
++			goto out;
++		}
+ 		trans = NULL;
+ 		down_read(&fs_info->commit_root_sem);
+ 	} else {
+@@ -1523,6 +1526,7 @@ int btrfs_check_shared(struct btrfs_root *root, u64 inum, u64 bytenr)
+ 	} else {
+ 		up_read(&fs_info->commit_root_sem);
+ 	}
++out:
+ 	ulist_free(tmp);
+ 	ulist_free(roots);
+ 	return ret;
+@@ -1912,13 +1916,19 @@ int iterate_extent_inodes(struct btrfs_fs_info *fs_info,
+ 			extent_item_objectid);
+ 
+ 	if (!search_commit_root) {
+-		trans = btrfs_join_transaction(fs_info->extent_root);
+-		if (IS_ERR(trans))
+-			return PTR_ERR(trans);
++		trans = btrfs_attach_transaction(fs_info->extent_root);
++		if (IS_ERR(trans)) {
++			if (PTR_ERR(trans) != -ENOENT &&
++			    PTR_ERR(trans) != -EROFS)
++				return PTR_ERR(trans);
++			trans = NULL;
++		}
++	}
++
++	if (trans)
+ 		btrfs_get_tree_mod_seq(fs_info, &tree_mod_seq_elem);
+-	} else {
++	else
+ 		down_read(&fs_info->commit_root_sem);
+-	}
+ 
+ 	ret = btrfs_find_all_leafs(trans, fs_info, extent_item_objectid,
+ 				   tree_mod_seq_elem.seq, &refs,
+@@ -1951,7 +1961,7 @@ int iterate_extent_inodes(struct btrfs_fs_info *fs_info,
+ 
+ 	free_leaf_list(refs);
+ out:
+-	if (!search_commit_root) {
++	if (trans) {
+ 		btrfs_put_tree_mod_seq(fs_info, &tree_mod_seq_elem);
+ 		btrfs_end_transaction(trans);
+ 	} else {
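
The backref change swaps btrfs_join_transaction() (which would start a transaction of its own) for btrfs_attach_transaction() (which only latches onto one already running) and demotes -ENOENT/-EROFS to the normal "no transaction" path. A compilable sketch of that error-demotion pattern, imitating the kernel's ERR_PTR convention in userspace with a stub that pretends no transaction is running:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

struct trans { int id; };

#define ERR_PTR(e)	((struct trans *)(intptr_t)(e))
#define PTR_ERR(p)	((long)(intptr_t)(p))
#define IS_ERR(p)	((uintptr_t)(p) >= (uintptr_t)-4095)

/* stub: no transaction is currently running */
static struct trans *attach_transaction(void)
{
	return ERR_PTR(-ENOENT);
}

int main(void)
{
	struct trans *trans = attach_transaction();

	if (IS_ERR(trans)) {
		if (PTR_ERR(trans) != -ENOENT && PTR_ERR(trans) != -EROFS)
			return 1;	/* a real failure */
		trans = NULL;		/* expected: fall back to commit roots */
	}

	puts(trans ? "accounting against the running transaction"
		   : "searching commit roots only");
	return 0;
}
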
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 324df36d28bf..65b12963e72b 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -2416,6 +2416,16 @@ read_block_for_search(struct btrfs_root *root, struct btrfs_path *p,
+ 	if (tmp) {
+ 		/* first we do an atomic uptodate check */
+ 		if (btrfs_buffer_uptodate(tmp, gen, 1) > 0) {
++			/*
++			 * Do an extra check on first_key: eb can be stale due
++			 * to being cached, being read by scrub, or having
++			 * multiple parents (shared tree blocks).
++			 */
++			if (btrfs_verify_level_key(fs_info, tmp,
++					parent_level - 1, &first_key, gen)) {
++				free_extent_buffer(tmp);
++				return -EUCLEAN;
++			}
+ 			*eb_ret = tmp;
+ 			return 0;
+ 		}
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index b3642367a595..71237d8747db 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -1348,6 +1348,12 @@ struct btrfs_root {
+ 	 * manipulation with the read-only status via SUBVOL_SETFLAGS
+ 	 */
+ 	int send_in_progress;
++	/*
++	 * Number of currently running deduplication operations that have a
++	 * destination inode belonging to this root. Protected by the lock
++	 * root_item_lock.
++	 */
++	int dedupe_in_progress;
+ 	struct btrfs_subvolume_writers *subv_writers;
+ 	atomic_t will_be_snapshotted;
+ 	atomic_t snapshot_force_cow;
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 6fe9197f6ee4..875c400453bd 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -414,9 +414,9 @@ static int btrfs_check_super_csum(struct btrfs_fs_info *fs_info,
+ 	return ret;
+ }
+ 
+-static int verify_level_key(struct btrfs_fs_info *fs_info,
+-			    struct extent_buffer *eb, int level,
+-			    struct btrfs_key *first_key, u64 parent_transid)
++int btrfs_verify_level_key(struct btrfs_fs_info *fs_info,
++			   struct extent_buffer *eb, int level,
++			   struct btrfs_key *first_key, u64 parent_transid)
+ {
+ 	int found_level;
+ 	struct btrfs_key found_key;
+@@ -493,8 +493,8 @@ static int btree_read_extent_buffer_pages(struct btrfs_fs_info *fs_info,
+ 			if (verify_parent_transid(io_tree, eb,
+ 						   parent_transid, 0))
+ 				ret = -EIO;
+-			else if (verify_level_key(fs_info, eb, level,
+-						  first_key, parent_transid))
++			else if (btrfs_verify_level_key(fs_info, eb, level,
++						first_key, parent_transid))
+ 				ret = -EUCLEAN;
+ 			else
+ 				break;
+@@ -1018,13 +1018,18 @@ void readahead_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr)
+ {
+ 	struct extent_buffer *buf = NULL;
+ 	struct inode *btree_inode = fs_info->btree_inode;
++	int ret;
+ 
+ 	buf = btrfs_find_create_tree_block(fs_info, bytenr);
+ 	if (IS_ERR(buf))
+ 		return;
+-	read_extent_buffer_pages(&BTRFS_I(btree_inode)->io_tree,
+-				 buf, WAIT_NONE, 0);
+-	free_extent_buffer(buf);
++
++	ret = read_extent_buffer_pages(&BTRFS_I(btree_inode)->io_tree, buf,
++			WAIT_NONE, 0);
++	if (ret < 0)
++		free_extent_buffer_stale(buf);
++	else
++		free_extent_buffer(buf);
+ }
+ 
+ int reada_tree_block_flagged(struct btrfs_fs_info *fs_info, u64 bytenr,
+@@ -1044,12 +1049,12 @@ int reada_tree_block_flagged(struct btrfs_fs_info *fs_info, u64 bytenr,
+ 	ret = read_extent_buffer_pages(io_tree, buf, WAIT_PAGE_LOCK,
+ 				       mirror_num);
+ 	if (ret) {
+-		free_extent_buffer(buf);
++		free_extent_buffer_stale(buf);
+ 		return ret;
+ 	}
+ 
+ 	if (test_bit(EXTENT_BUFFER_CORRUPT, &buf->bflags)) {
+-		free_extent_buffer(buf);
++		free_extent_buffer_stale(buf);
+ 		return -EIO;
+ 	} else if (extent_buffer_uptodate(buf)) {
+ 		*eb = buf;
+@@ -1103,7 +1108,7 @@ struct extent_buffer *read_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr,
+ 	ret = btree_read_extent_buffer_pages(fs_info, buf, parent_transid,
+ 					     level, first_key);
+ 	if (ret) {
+-		free_extent_buffer(buf);
++		free_extent_buffer_stale(buf);
+ 		return ERR_PTR(ret);
+ 	}
+ 	return buf;
+diff --git a/fs/btrfs/disk-io.h b/fs/btrfs/disk-io.h
+index 987a64bc0c66..67a9fe2d29c7 100644
+--- a/fs/btrfs/disk-io.h
++++ b/fs/btrfs/disk-io.h
+@@ -39,6 +39,9 @@ static inline u64 btrfs_sb_offset(int mirror)
+ struct btrfs_device;
+ struct btrfs_fs_devices;
+ 
++int btrfs_verify_level_key(struct btrfs_fs_info *fs_info,
++			   struct extent_buffer *eb, int level,
++			   struct btrfs_key *first_key, u64 parent_transid);
+ struct extent_buffer *read_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr,
+ 				      u64 parent_transid, int level,
+ 				      struct btrfs_key *first_key);
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index c5880329ae37..d789542edc5a 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -11315,9 +11315,9 @@ int btrfs_error_unpin_extent_range(struct btrfs_fs_info *fs_info,
+  * held back allocations.
+  */
+ static int btrfs_trim_free_extents(struct btrfs_device *device,
+-				   u64 minlen, u64 *trimmed)
++				   struct fstrim_range *range, u64 *trimmed)
+ {
+-	u64 start = 0, len = 0;
++	u64 start = range->start, len = 0;
+ 	int ret;
+ 
+ 	*trimmed = 0;
+@@ -11360,8 +11360,8 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 		if (!trans)
+ 			up_read(&fs_info->commit_root_sem);
+ 
+-		ret = find_free_dev_extent_start(trans, device, minlen, start,
+-						 &start, &len);
++		ret = find_free_dev_extent_start(trans, device, range->minlen,
++						 start, &start, &len);
+ 		if (trans) {
+ 			up_read(&fs_info->commit_root_sem);
+ 			btrfs_put_transaction(trans);
+@@ -11374,6 +11374,16 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 			break;
+ 		}
+ 
++		/* If we are out of the passed range, break */
++		if (start > range->start + range->len - 1) {
++			mutex_unlock(&fs_info->chunk_mutex);
++			ret = 0;
++			break;
++		}
++
++		start = max(range->start, start);
++		len = min(range->len, len);
++
+ 		ret = btrfs_issue_discard(device->bdev, start, len, &bytes);
+ 		mutex_unlock(&fs_info->chunk_mutex);
+ 
+@@ -11383,6 +11393,10 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 		start += len;
+ 		*trimmed += bytes;
+ 
++		/* We've trimmed enough */
++		if (*trimmed >= range->len)
++			break;
++
+ 		if (fatal_signal_pending(current)) {
+ 			ret = -ERESTARTSYS;
+ 			break;
+@@ -11466,8 +11480,7 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ 	mutex_lock(&fs_info->fs_devices->device_list_mutex);
+ 	devices = &fs_info->fs_devices->devices;
+ 	list_for_each_entry(device, devices, dev_list) {
+-		ret = btrfs_trim_free_extents(device, range->minlen,
+-					      &group_trimmed);
++		ret = btrfs_trim_free_extents(device, range, &group_trimmed);
+ 		if (ret) {
+ 			dev_failed++;
+ 			dev_ret = ret;
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index cd4e693406a0..96f05fc851d8 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3260,6 +3260,19 @@ static int btrfs_extent_same(struct inode *src, u64 loff, u64 olen,
+ {
+ 	int ret;
+ 	u64 i, tail_len, chunk_count;
++	struct btrfs_root *root_dst = BTRFS_I(dst)->root;
++
++	spin_lock(&root_dst->root_item_lock);
++	if (root_dst->send_in_progress) {
++		btrfs_warn_rl(root_dst->fs_info,
++"cannot deduplicate to root %llu while send operations are using it (%d in progress)",
++			      root_dst->root_key.objectid,
++			      root_dst->send_in_progress);
++		spin_unlock(&root_dst->root_item_lock);
++		return -EAGAIN;
++	}
++	root_dst->dedupe_in_progress++;
++	spin_unlock(&root_dst->root_item_lock);
+ 
+ 	tail_len = olen % BTRFS_MAX_DEDUPE_LEN;
+ 	chunk_count = div_u64(olen, BTRFS_MAX_DEDUPE_LEN);
+@@ -3268,7 +3281,7 @@ static int btrfs_extent_same(struct inode *src, u64 loff, u64 olen,
+ 		ret = btrfs_extent_same_range(src, loff, BTRFS_MAX_DEDUPE_LEN,
+ 					      dst, dst_loff);
+ 		if (ret)
+-			return ret;
++			goto out;
+ 
+ 		loff += BTRFS_MAX_DEDUPE_LEN;
+ 		dst_loff += BTRFS_MAX_DEDUPE_LEN;
+@@ -3277,6 +3290,10 @@ static int btrfs_extent_same(struct inode *src, u64 loff, u64 olen,
+ 	if (tail_len > 0)
+ 		ret = btrfs_extent_same_range(src, loff, tail_len, dst,
+ 					      dst_loff);
++out:
++	spin_lock(&root_dst->root_item_lock);
++	root_dst->dedupe_in_progress--;
++	spin_unlock(&root_dst->root_item_lock);
+ 
+ 	return ret;
+ }
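
The new dedupe_in_progress counter mirrors send_in_progress: each side bumps its own counter under root_item_lock and bails out with -EAGAIN while the other side's counter is non-zero, so send never observes a tree being rewritten by dedupe. The scheme in miniature, with a pthread mutex as the spinlock:

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t root_item_lock = PTHREAD_MUTEX_INITIALIZER;
static int send_in_progress, dedupe_in_progress;

static int start_dedupe(void)
{
	pthread_mutex_lock(&root_item_lock);
	if (send_in_progress) {
		pthread_mutex_unlock(&root_item_lock);
		return -EAGAIN;		/* caller retries later */
	}
	dedupe_in_progress++;
	pthread_mutex_unlock(&root_item_lock);
	return 0;
}

static void end_dedupe(void)
{
	pthread_mutex_lock(&root_item_lock);
	dedupe_in_progress--;
	pthread_mutex_unlock(&root_item_lock);
}

int main(void)
{
	send_in_progress = 1;		/* pretend a send is running */
	printf("dedupe vs send: %d\n", start_dedupe());

	send_in_progress = 0;
	if (start_dedupe() == 0) {
		puts("dedupe admitted");
		end_dedupe();
	}
	return 0;
}
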
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 7ea2d6b1f170..19b00b1668ed 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -6579,6 +6579,38 @@ commit_trans:
+ 	return btrfs_commit_transaction(trans);
+ }
+ 
++/*
++ * Make sure any existing delalloc is flushed for any root used by a send
++ * operation so that we do not miss any data and we do not race with writeback
++ * finishing and changing a tree while send is using the tree. This could
++ * happen if a subvolume is in RW mode, has delalloc, is turned to RO mode and
++ * a send operation then uses the subvolume.
++ * After flushing delalloc, ensure_commit_roots_uptodate() must be called.
++ */
++static int flush_delalloc_roots(struct send_ctx *sctx)
++{
++	struct btrfs_root *root = sctx->parent_root;
++	int ret;
++	int i;
++
++	if (root) {
++		ret = btrfs_start_delalloc_snapshot(root);
++		if (ret)
++			return ret;
++		btrfs_wait_ordered_extents(root, U64_MAX, 0, U64_MAX);
++	}
++
++	for (i = 0; i < sctx->clone_roots_cnt; i++) {
++		root = sctx->clone_roots[i].root;
++		ret = btrfs_start_delalloc_snapshot(root);
++		if (ret)
++			return ret;
++		btrfs_wait_ordered_extents(root, U64_MAX, 0, U64_MAX);
++	}
++
++	return 0;
++}
++
+ static void btrfs_root_dec_send_in_progress(struct btrfs_root* root)
+ {
+ 	spin_lock(&root->root_item_lock);
+@@ -6594,6 +6626,13 @@ static void btrfs_root_dec_send_in_progress(struct btrfs_root* root)
+ 	spin_unlock(&root->root_item_lock);
+ }
+ 
++static void dedupe_in_progress_warn(const struct btrfs_root *root)
++{
++	btrfs_warn_rl(root->fs_info,
++"cannot use root %llu for send while deduplications on it are in progress (%d in progress)",
++		      root->root_key.objectid, root->dedupe_in_progress);
++}
++
+ long btrfs_ioctl_send(struct file *mnt_file, struct btrfs_ioctl_send_args *arg)
+ {
+ 	int ret = 0;
+@@ -6617,6 +6656,11 @@ long btrfs_ioctl_send(struct file *mnt_file, struct btrfs_ioctl_send_args *arg)
+ 	 * making it RW. This also protects against deletion.
+ 	 */
+ 	spin_lock(&send_root->root_item_lock);
++	if (btrfs_root_readonly(send_root) && send_root->dedupe_in_progress) {
++		dedupe_in_progress_warn(send_root);
++		spin_unlock(&send_root->root_item_lock);
++		return -EAGAIN;
++	}
+ 	send_root->send_in_progress++;
+ 	spin_unlock(&send_root->root_item_lock);
+ 
+@@ -6751,6 +6795,13 @@ long btrfs_ioctl_send(struct file *mnt_file, struct btrfs_ioctl_send_args *arg)
+ 				ret = -EPERM;
+ 				goto out;
+ 			}
++			if (clone_root->dedupe_in_progress) {
++				dedupe_in_progress_warn(clone_root);
++				spin_unlock(&clone_root->root_item_lock);
++				srcu_read_unlock(&fs_info->subvol_srcu, index);
++				ret = -EAGAIN;
++				goto out;
++			}
+ 			clone_root->send_in_progress++;
+ 			spin_unlock(&clone_root->root_item_lock);
+ 			srcu_read_unlock(&fs_info->subvol_srcu, index);
+@@ -6785,6 +6836,13 @@ long btrfs_ioctl_send(struct file *mnt_file, struct btrfs_ioctl_send_args *arg)
+ 			ret = -EPERM;
+ 			goto out;
+ 		}
++		if (sctx->parent_root->dedupe_in_progress) {
++			dedupe_in_progress_warn(sctx->parent_root);
++			spin_unlock(&sctx->parent_root->root_item_lock);
++			srcu_read_unlock(&fs_info->subvol_srcu, index);
++			ret = -EAGAIN;
++			goto out;
++		}
+ 		spin_unlock(&sctx->parent_root->root_item_lock);
+ 
+ 		srcu_read_unlock(&fs_info->subvol_srcu, index);
+@@ -6803,6 +6861,10 @@ long btrfs_ioctl_send(struct file *mnt_file, struct btrfs_ioctl_send_args *arg)
+ 			NULL);
+ 	sort_clone_roots = 1;
+ 
++	ret = flush_delalloc_roots(sctx);
++	if (ret)
++		goto out;
++
+ 	ret = ensure_commit_roots_uptodate(sctx);
+ 	if (ret)
+ 		goto out;
+diff --git a/fs/cifs/cifs_debug.c b/fs/cifs/cifs_debug.c
+index 13c1288b04a7..b5c6c4a8ab5d 100644
+--- a/fs/cifs/cifs_debug.c
++++ b/fs/cifs/cifs_debug.c
+@@ -376,6 +376,8 @@ skip_rdma:
+ 				atomic_read(&server->in_send),
+ 				atomic_read(&server->num_waiters));
+ #endif
++			/* dump session id, helpful for use with a network trace */
++			seq_printf(m, " SessionId: 0x%llx", ses->Suid);
+ 			if (ses->session_flags & SMB2_SESSION_FLAG_ENCRYPT_DATA)
+ 				seq_puts(m, " encrypted");
+ 			if (ses->sign)
+diff --git a/fs/dax.c b/fs/dax.c
+index e5e54da1715f..83009875308c 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -1575,8 +1575,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
+ 		}
+ 
+ 		trace_dax_pmd_insert_mapping(inode, vmf, PMD_SIZE, pfn, entry);
+-		result = vmf_insert_pfn_pmd(vma, vmf->address, vmf->pmd, pfn,
+-					    write);
++		result = vmf_insert_pfn_pmd(vmf, pfn, write);
+ 		break;
+ 	case IOMAP_UNWRITTEN:
+ 	case IOMAP_HOLE:
+@@ -1686,8 +1685,7 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order)
+ 		ret = vmf_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn);
+ #ifdef CONFIG_FS_DAX_PMD
+ 	else if (order == PMD_ORDER)
+-		ret = vmf_insert_pfn_pmd(vmf->vma, vmf->address, vmf->pmd,
+-			pfn, true);
++		ret = vmf_insert_pfn_pmd(vmf, pfn, FAULT_FLAG_WRITE);
+ #endif
+ 	else
+ 		ret = VM_FAULT_FALLBACK;
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 0f89f5190cd7..f2c62e2a0c98 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -1035,6 +1035,7 @@ static int ext4_ext_split(handle_t *handle, struct inode *inode,
+ 	__le32 border;
+ 	ext4_fsblk_t *ablocks = NULL; /* array of allocated blocks */
+ 	int err = 0;
++	size_t ext_size = 0;
+ 
+ 	/* make decision: where to split? */
+ 	/* FIXME: now decision is simplest: at current extent */
+@@ -1126,6 +1127,10 @@ static int ext4_ext_split(handle_t *handle, struct inode *inode,
+ 		le16_add_cpu(&neh->eh_entries, m);
+ 	}
+ 
++	/* zero out unused area in the extent block */
++	ext_size = sizeof(struct ext4_extent_header) +
++		sizeof(struct ext4_extent) * le16_to_cpu(neh->eh_entries);
++	memset(bh->b_data + ext_size, 0, inode->i_sb->s_blocksize - ext_size);
+ 	ext4_extent_block_csum_set(inode, neh);
+ 	set_buffer_uptodate(bh);
+ 	unlock_buffer(bh);
+@@ -1205,6 +1210,11 @@ static int ext4_ext_split(handle_t *handle, struct inode *inode,
+ 				sizeof(struct ext4_extent_idx) * m);
+ 			le16_add_cpu(&neh->eh_entries, m);
+ 		}
++		/* zero out unused area in the extent block */
++		ext_size = sizeof(struct ext4_extent_header) +
++		   (sizeof(struct ext4_extent) * le16_to_cpu(neh->eh_entries));
++		memset(bh->b_data + ext_size, 0,
++			inode->i_sb->s_blocksize - ext_size);
+ 		ext4_extent_block_csum_set(inode, neh);
+ 		set_buffer_uptodate(bh);
+ 		unlock_buffer(bh);
+@@ -1270,6 +1280,7 @@ static int ext4_ext_grow_indepth(handle_t *handle, struct inode *inode,
+ 	ext4_fsblk_t newblock, goal = 0;
+ 	struct ext4_super_block *es = EXT4_SB(inode->i_sb)->s_es;
+ 	int err = 0;
++	size_t ext_size = 0;
+ 
+ 	/* Try to prepend new index to old one */
+ 	if (ext_depth(inode))
+@@ -1295,9 +1306,11 @@ static int ext4_ext_grow_indepth(handle_t *handle, struct inode *inode,
+ 		goto out;
+ 	}
+ 
++	ext_size = sizeof(EXT4_I(inode)->i_data);
+ 	/* move top-level index/leaf into new block */
+-	memmove(bh->b_data, EXT4_I(inode)->i_data,
+-		sizeof(EXT4_I(inode)->i_data));
++	memmove(bh->b_data, EXT4_I(inode)->i_data, ext_size);
++	/* zero out unused area in the extent block */
++	memset(bh->b_data + ext_size, 0, inode->i_sb->s_blocksize - ext_size);
+ 
+ 	/* set size of new block */
+ 	neh = ext_block_hdr(bh);
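
All three extents.c hunks close the same information leak: a freshly allocated extent block was only partially written, so whatever the buffer previously held went to disk in the unused tail. The memset arithmetic in isolation, with struct layouts trimmed to illustrative stand-ins:

#include <stdio.h>
#include <string.h>

struct ext_header { unsigned short eh_entries, eh_max; };	/* trimmed */
struct ext_entry  { unsigned int ee_block, ee_len; };		/* trimmed */

int main(void)
{
	unsigned char block[4096];
	unsigned short entries = 5;		/* assumed entry count */
	size_t ext_size;

	memset(block, 0xde, sizeof(block));	/* stale buffer contents */

	ext_size = sizeof(struct ext_header) +
		   sizeof(struct ext_entry) * entries;
	/* zero out the unused area so stale data never hits disk */
	memset(block + ext_size, 0, sizeof(block) - ext_size);

	printf("kept %zu bytes, zeroed %zu\n",
	       ext_size, sizeof(block) - ext_size);
	return 0;
}
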
+diff --git a/fs/ext4/file.c b/fs/ext4/file.c
+index 98ec11f69cd4..2c5baa5e8291 100644
+--- a/fs/ext4/file.c
++++ b/fs/ext4/file.c
+@@ -264,6 +264,13 @@ ext4_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ 	}
+ 
+ 	ret = __generic_file_write_iter(iocb, from);
++	/*
++	 * Unaligned direct AIO must be the only IO in flight. Otherwise
++	 * overlapping aligned IO after unaligned might result in data
++	 * corruption.
++	 */
++	if (ret == -EIOCBQUEUED && unaligned_aio)
++		ext4_unwritten_wait(inode);
+ 	inode_unlock(inode);
+ 
+ 	if (ret > 0)
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index bab3da4f1e0d..20faa6a69238 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -978,7 +978,7 @@ mext_out:
+ 		if (err == 0)
+ 			err = err2;
+ 		mnt_drop_write_file(filp);
+-		if (!err && (o_group > EXT4_SB(sb)->s_groups_count) &&
++		if (!err && (o_group < EXT4_SB(sb)->s_groups_count) &&
+ 		    ext4_has_group_desc_csum(sb) &&
+ 		    test_opt(sb, INIT_INODE_TABLE))
+ 			err = ext4_register_li_request(sb, o_group);
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 6fb76d408093..8ef5f12bbee2 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -1539,7 +1539,7 @@ static int mb_find_extent(struct ext4_buddy *e4b, int block,
+ 		ex->fe_len += 1 << order;
+ 	}
+ 
+-	if (ex->fe_start + ex->fe_len > (1 << (e4b->bd_blkbits + 3))) {
++	if (ex->fe_start + ex->fe_len > EXT4_CLUSTERS_PER_GROUP(e4b->bd_sb)) {
+ 		/* Should never happen! (but apparently sometimes does?!?) */
+ 		WARN_ON(1);
+ 		ext4_error(e4b->bd_sb, "corruption or bug in mb_find_extent "
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 980166a8122a..5d9ffa8efbfd 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -871,12 +871,15 @@ static void dx_release(struct dx_frame *frames)
+ {
+ 	struct dx_root_info *info;
+ 	int i;
++	unsigned int indirect_levels;
+ 
+ 	if (frames[0].bh == NULL)
+ 		return;
+ 
+ 	info = &((struct dx_root *)frames[0].bh->b_data)->info;
+-	for (i = 0; i <= info->indirect_levels; i++) {
++	/* save local copy, "info" may be freed after brelse() */
++	indirect_levels = info->indirect_levels;
++	for (i = 0; i <= indirect_levels; i++) {
+ 		if (frames[i].bh == NULL)
+ 			break;
+ 		brelse(frames[i].bh);
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index e7ae26e36c9c..4d5c0fc9d23a 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -874,6 +874,7 @@ static int add_new_gdb(handle_t *handle, struct inode *inode,
+ 	err = ext4_handle_dirty_metadata(handle, NULL, gdb_bh);
+ 	if (unlikely(err)) {
+ 		ext4_std_error(sb, err);
++		iloc.bh = NULL;
+ 		goto errout;
+ 	}
+ 	brelse(dind);
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 6ed4eb81e674..f5044622ebb1 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -698,7 +698,7 @@ void __ext4_abort(struct super_block *sb, const char *function,
+ 			jbd2_journal_abort(EXT4_SB(sb)->s_journal, -EIO);
+ 		save_error_info(sb, function, line);
+ 	}
+-	if (test_opt(sb, ERRORS_PANIC)) {
++	if (test_opt(sb, ERRORS_PANIC) && !system_going_down()) {
+ 		if (EXT4_SB(sb)->s_journal &&
+ 		  !(EXT4_SB(sb)->s_journal->j_flags & JBD2_REC_ERR))
+ 			return;
+@@ -3513,6 +3513,37 @@ int ext4_calculate_overhead(struct super_block *sb)
+ 	return 0;
+ }
+ 
++static void ext4_clamp_want_extra_isize(struct super_block *sb)
++{
++	struct ext4_sb_info *sbi = EXT4_SB(sb);
++	struct ext4_super_block *es = sbi->s_es;
++
++	/* determine the minimum size of new large inodes, if present */
++	if (sbi->s_inode_size > EXT4_GOOD_OLD_INODE_SIZE &&
++	    sbi->s_want_extra_isize == 0) {
++		sbi->s_want_extra_isize = sizeof(struct ext4_inode) -
++						     EXT4_GOOD_OLD_INODE_SIZE;
++		if (ext4_has_feature_extra_isize(sb)) {
++			if (sbi->s_want_extra_isize <
++			    le16_to_cpu(es->s_want_extra_isize))
++				sbi->s_want_extra_isize =
++					le16_to_cpu(es->s_want_extra_isize);
++			if (sbi->s_want_extra_isize <
++			    le16_to_cpu(es->s_min_extra_isize))
++				sbi->s_want_extra_isize =
++					le16_to_cpu(es->s_min_extra_isize);
++		}
++	}
++	/* Check if enough inode space is available */
++	if (EXT4_GOOD_OLD_INODE_SIZE + sbi->s_want_extra_isize >
++							sbi->s_inode_size) {
++		sbi->s_want_extra_isize = sizeof(struct ext4_inode) -
++						       EXT4_GOOD_OLD_INODE_SIZE;
++		ext4_msg(sb, KERN_INFO,
++			 "required extra inode space not available");
++	}
++}
++
+ static void ext4_set_resv_clusters(struct super_block *sb)
+ {
+ 	ext4_fsblk_t resv_clusters;
+@@ -4238,7 +4269,7 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 				 "data=, fs mounted w/o journal");
+ 			goto failed_mount_wq;
+ 		}
+-		sbi->s_def_mount_opt &= EXT4_MOUNT_JOURNAL_CHECKSUM;
++		sbi->s_def_mount_opt &= ~EXT4_MOUNT_JOURNAL_CHECKSUM;
+ 		clear_opt(sb, JOURNAL_CHECKSUM);
+ 		clear_opt(sb, DATA_FLAGS);
+ 		sbi->s_journal = NULL;
+@@ -4387,30 +4418,7 @@ no_journal:
+ 	} else if (ret)
+ 		goto failed_mount4a;
+ 
+-	/* determine the minimum size of new large inodes, if present */
+-	if (sbi->s_inode_size > EXT4_GOOD_OLD_INODE_SIZE &&
+-	    sbi->s_want_extra_isize == 0) {
+-		sbi->s_want_extra_isize = sizeof(struct ext4_inode) -
+-						     EXT4_GOOD_OLD_INODE_SIZE;
+-		if (ext4_has_feature_extra_isize(sb)) {
+-			if (sbi->s_want_extra_isize <
+-			    le16_to_cpu(es->s_want_extra_isize))
+-				sbi->s_want_extra_isize =
+-					le16_to_cpu(es->s_want_extra_isize);
+-			if (sbi->s_want_extra_isize <
+-			    le16_to_cpu(es->s_min_extra_isize))
+-				sbi->s_want_extra_isize =
+-					le16_to_cpu(es->s_min_extra_isize);
+-		}
+-	}
+-	/* Check if enough inode space is available */
+-	if (EXT4_GOOD_OLD_INODE_SIZE + sbi->s_want_extra_isize >
+-							sbi->s_inode_size) {
+-		sbi->s_want_extra_isize = sizeof(struct ext4_inode) -
+-						       EXT4_GOOD_OLD_INODE_SIZE;
+-		ext4_msg(sb, KERN_INFO, "required extra inode space not"
+-			 "available");
+-	}
++	ext4_clamp_want_extra_isize(sb);
+ 
+ 	ext4_set_resv_clusters(sb);
+ 
+@@ -5194,6 +5202,8 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 		goto restore_opts;
+ 	}
+ 
++	ext4_clamp_want_extra_isize(sb);
++
+ 	if ((old_opts.s_mount_opt & EXT4_MOUNT_JOURNAL_CHECKSUM) ^
+ 	    test_opt(sb, JOURNAL_CHECKSUM)) {
+ 		ext4_msg(sb, KERN_ERR, "changing journal_checksum "
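
The one-character s_def_mount_opt fix above is worth spelling out: opts &= FLAG keeps only FLAG and wipes every other default, while opts &= ~FLAG clears just FLAG. In isolation, with an assumed flag value for the demonstration:

#include <stdio.h>

#define JOURNAL_CHECKSUM 0x0800u	/* assumed bit, sketch only */

int main(void)
{
	unsigned int opts = 0xbeefu;

	printf("opts &=  FLAG -> %#06x (bug: all other bits gone)\n",
	       opts & JOURNAL_CHECKSUM);
	printf("opts &= ~FLAG -> %#06x (fix: only this bit cleared)\n",
	       opts & ~JOURNAL_CHECKSUM);
	return 0;
}
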
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index dc82e7757f67..491f9ee4040e 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -1696,7 +1696,7 @@ static int ext4_xattr_set_entry(struct ext4_xattr_info *i,
+ 
+ 	/* No failures allowed past this point. */
+ 
+-	if (!s->not_found && here->e_value_size && here->e_value_offs) {
++	if (!s->not_found && here->e_value_size && !here->e_value_inum) {
+ 		/* Remove the old value. */
+ 		void *first_val = s->base + min_offs;
+ 		size_t offs = le16_to_cpu(here->e_value_offs);
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index 36855c1f8daf..b16645b417d9 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -523,8 +523,6 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
+ 
+ 	isw->inode = inode;
+ 
+-	atomic_inc(&isw_nr_in_flight);
+-
+ 	/*
+ 	 * In addition to synchronizing among switchers, I_WB_SWITCH tells
+ 	 * the RCU protected stat update paths to grab the i_page
+@@ -532,6 +530,9 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
+ 	 * Let's continue after I_WB_SWITCH is guaranteed to be visible.
+ 	 */
+ 	call_rcu(&isw->rcu_head, inode_switch_wbs_rcu_fn);
++
++	atomic_inc(&isw_nr_in_flight);
++
+ 	goto out_unlock;
+ 
+ out_free:
+@@ -901,7 +902,11 @@ restart:
+ void cgroup_writeback_umount(void)
+ {
+ 	if (atomic_read(&isw_nr_in_flight)) {
+-		synchronize_rcu();
++		/*
++		 * Use rcu_barrier() to wait for all pending callbacks to
++		 * ensure that all in-flight wb switches are in the workqueue.
++		 */
++		rcu_barrier();
+ 		flush_workqueue(isw_wq);
+ 	}
+ }
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index 9285dd4f4b1c..f76e44d1aa54 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -440,9 +440,7 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
+ 			u32 hash;
+ 
+ 			index = page->index;
+-			hash = hugetlb_fault_mutex_hash(h, current->mm,
+-							&pseudo_vma,
+-							mapping, index, 0);
++			hash = hugetlb_fault_mutex_hash(h, mapping, index, 0);
+ 			mutex_lock(&hugetlb_fault_mutex_table[hash]);
+ 
+ 			/*
+@@ -639,8 +637,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
+ 		addr = index * hpage_size;
+ 
+ 		/* mutex taken here, fault path and hole punch */
+-		hash = hugetlb_fault_mutex_hash(h, mm, &pseudo_vma, mapping,
+-						index, addr);
++		hash = hugetlb_fault_mutex_hash(h, mapping, index, addr);
+ 		mutex_lock(&hugetlb_fault_mutex_table[hash]);
+ 
+ 		/* See if already present in mapping to avoid alloc/free */
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index 382c030cc78b..43df0c943229 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -1350,6 +1350,10 @@ static int jbd2_write_superblock(journal_t *journal, int write_flags)
+ 	journal_superblock_t *sb = journal->j_superblock;
+ 	int ret;
+ 
++	/* Buffer got discarded, which means the block device was invalidated */
++	if (!buffer_mapped(bh))
++		return -EIO;
++
+ 	trace_jbd2_write_superblock(journal, write_flags);
+ 	if (!(journal->j_flags & JBD2_BARRIER))
+ 		write_flags &= ~(REQ_FUA | REQ_PREFLUSH);
+@@ -2371,22 +2375,19 @@ static struct kmem_cache *jbd2_journal_head_cache;
+ static atomic_t nr_journal_heads = ATOMIC_INIT(0);
+ #endif
+ 
+-static int jbd2_journal_init_journal_head_cache(void)
++static int __init jbd2_journal_init_journal_head_cache(void)
+ {
+-	int retval;
+-
+-	J_ASSERT(jbd2_journal_head_cache == NULL);
++	J_ASSERT(!jbd2_journal_head_cache);
+ 	jbd2_journal_head_cache = kmem_cache_create("jbd2_journal_head",
+ 				sizeof(struct journal_head),
+ 				0,		/* offset */
+ 				SLAB_TEMPORARY | SLAB_TYPESAFE_BY_RCU,
+ 				NULL);		/* ctor */
+-	retval = 0;
+ 	if (!jbd2_journal_head_cache) {
+-		retval = -ENOMEM;
+ 		printk(KERN_EMERG "JBD2: no memory for journal_head cache\n");
++		return -ENOMEM;
+ 	}
+-	return retval;
++	return 0;
+ }
+ 
+ static void jbd2_journal_destroy_journal_head_cache(void)
+@@ -2632,28 +2633,38 @@ static void __exit jbd2_remove_jbd_stats_proc_entry(void)
+ 
+ struct kmem_cache *jbd2_handle_cache, *jbd2_inode_cache;
+ 
++static int __init jbd2_journal_init_inode_cache(void)
++{
++	J_ASSERT(!jbd2_inode_cache);
++	jbd2_inode_cache = KMEM_CACHE(jbd2_inode, 0);
++	if (!jbd2_inode_cache) {
++		pr_emerg("JBD2: failed to create inode cache\n");
++		return -ENOMEM;
++	}
++	return 0;
++}
++
+ static int __init jbd2_journal_init_handle_cache(void)
+ {
++	J_ASSERT(!jbd2_handle_cache);
+ 	jbd2_handle_cache = KMEM_CACHE(jbd2_journal_handle, SLAB_TEMPORARY);
+-	if (jbd2_handle_cache == NULL) {
++	if (!jbd2_handle_cache) {
+ 		printk(KERN_EMERG "JBD2: failed to create handle cache\n");
+ 		return -ENOMEM;
+ 	}
+-	jbd2_inode_cache = KMEM_CACHE(jbd2_inode, 0);
+-	if (jbd2_inode_cache == NULL) {
+-		printk(KERN_EMERG "JBD2: failed to create inode cache\n");
+-		kmem_cache_destroy(jbd2_handle_cache);
+-		return -ENOMEM;
+-	}
+ 	return 0;
+ }
+ 
++static void jbd2_journal_destroy_inode_cache(void)
++{
++	kmem_cache_destroy(jbd2_inode_cache);
++	jbd2_inode_cache = NULL;
++}
++
+ static void jbd2_journal_destroy_handle_cache(void)
+ {
+ 	kmem_cache_destroy(jbd2_handle_cache);
+ 	jbd2_handle_cache = NULL;
+-	kmem_cache_destroy(jbd2_inode_cache);
+-	jbd2_inode_cache = NULL;
+ }
+ 
+ /*
+@@ -2664,11 +2675,15 @@ static int __init journal_init_caches(void)
+ {
+ 	int ret;
+ 
+-	ret = jbd2_journal_init_revoke_caches();
++	ret = jbd2_journal_init_revoke_record_cache();
++	if (ret == 0)
++		ret = jbd2_journal_init_revoke_table_cache();
+ 	if (ret == 0)
+ 		ret = jbd2_journal_init_journal_head_cache();
+ 	if (ret == 0)
+ 		ret = jbd2_journal_init_handle_cache();
++	if (ret == 0)
++		ret = jbd2_journal_init_inode_cache();
+ 	if (ret == 0)
+ 		ret = jbd2_journal_init_transaction_cache();
+ 	return ret;
+@@ -2676,9 +2691,11 @@ static int __init journal_init_caches(void)
+ 
+ static void jbd2_journal_destroy_caches(void)
+ {
+-	jbd2_journal_destroy_revoke_caches();
++	jbd2_journal_destroy_revoke_record_cache();
++	jbd2_journal_destroy_revoke_table_cache();
+ 	jbd2_journal_destroy_journal_head_cache();
+ 	jbd2_journal_destroy_handle_cache();
++	jbd2_journal_destroy_inode_cache();
+ 	jbd2_journal_destroy_transaction_cache();
+ 	jbd2_journal_destroy_slabs();
+ }
+diff --git a/fs/jbd2/revoke.c b/fs/jbd2/revoke.c
+index a1143e57a718..69b9bc329964 100644
+--- a/fs/jbd2/revoke.c
++++ b/fs/jbd2/revoke.c
+@@ -178,33 +178,41 @@ static struct jbd2_revoke_record_s *find_revoke_record(journal_t *journal,
+ 	return NULL;
+ }
+ 
+-void jbd2_journal_destroy_revoke_caches(void)
++void jbd2_journal_destroy_revoke_record_cache(void)
+ {
+ 	kmem_cache_destroy(jbd2_revoke_record_cache);
+ 	jbd2_revoke_record_cache = NULL;
++}
++
++void jbd2_journal_destroy_revoke_table_cache(void)
++{
+ 	kmem_cache_destroy(jbd2_revoke_table_cache);
+ 	jbd2_revoke_table_cache = NULL;
+ }
+ 
+-int __init jbd2_journal_init_revoke_caches(void)
++int __init jbd2_journal_init_revoke_record_cache(void)
+ {
+ 	J_ASSERT(!jbd2_revoke_record_cache);
+-	J_ASSERT(!jbd2_revoke_table_cache);
+-
+ 	jbd2_revoke_record_cache = KMEM_CACHE(jbd2_revoke_record_s,
+ 					SLAB_HWCACHE_ALIGN|SLAB_TEMPORARY);
+-	if (!jbd2_revoke_record_cache)
+-		goto record_cache_failure;
+ 
++	if (!jbd2_revoke_record_cache) {
++		pr_emerg("JBD2: failed to create revoke_record cache\n");
++		return -ENOMEM;
++	}
++	return 0;
++}
++
++int __init jbd2_journal_init_revoke_table_cache(void)
++{
++	J_ASSERT(!jbd2_revoke_table_cache);
+ 	jbd2_revoke_table_cache = KMEM_CACHE(jbd2_revoke_table_s,
+ 					     SLAB_TEMPORARY);
+-	if (!jbd2_revoke_table_cache)
+-		goto table_cache_failure;
+-	return 0;
+-table_cache_failure:
+-	jbd2_journal_destroy_revoke_caches();
+-record_cache_failure:
++	if (!jbd2_revoke_table_cache) {
++		pr_emerg("JBD2: failed to create revoke_table cache\n");
+ 		return -ENOMEM;
++	}
++	return 0;
+ }
+ 
+ static struct jbd2_revoke_table_s *jbd2_journal_init_revoke_table(int hash_size)
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index f940d31c2adc..8ca4fddc705f 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -42,9 +42,11 @@ int __init jbd2_journal_init_transaction_cache(void)
+ 					0,
+ 					SLAB_HWCACHE_ALIGN|SLAB_TEMPORARY,
+ 					NULL);
+-	if (transaction_cache)
+-		return 0;
+-	return -ENOMEM;
++	if (!transaction_cache) {
++		pr_emerg("JBD2: failed to create transaction cache\n");
++		return -ENOMEM;
++	}
++	return 0;
+ }
+ 
+ void jbd2_journal_destroy_transaction_cache(void)
+diff --git a/fs/ocfs2/export.c b/fs/ocfs2/export.c
+index 4bf8d5854b27..af2888d23de3 100644
+--- a/fs/ocfs2/export.c
++++ b/fs/ocfs2/export.c
+@@ -148,16 +148,24 @@ static struct dentry *ocfs2_get_parent(struct dentry *child)
+ 	u64 blkno;
+ 	struct dentry *parent;
+ 	struct inode *dir = d_inode(child);
++	int set;
+ 
+ 	trace_ocfs2_get_parent(child, child->d_name.len, child->d_name.name,
+ 			       (unsigned long long)OCFS2_I(dir)->ip_blkno);
+ 
++	status = ocfs2_nfs_sync_lock(OCFS2_SB(dir->i_sb), 1);
++	if (status < 0) {
++		mlog(ML_ERROR, "getting nfs sync lock(EX) failed %d\n", status);
++		parent = ERR_PTR(status);
++		goto bail;
++	}
++
+ 	status = ocfs2_inode_lock(dir, NULL, 0);
+ 	if (status < 0) {
+ 		if (status != -ENOENT)
+ 			mlog_errno(status);
+ 		parent = ERR_PTR(status);
+-		goto bail;
++		goto unlock_nfs_sync;
+ 	}
+ 
+ 	status = ocfs2_lookup_ino_from_name(dir, "..", 2, &blkno);
+@@ -166,11 +174,31 @@ static struct dentry *ocfs2_get_parent(struct dentry *child)
+ 		goto bail_unlock;
+ 	}
+ 
++	status = ocfs2_test_inode_bit(OCFS2_SB(dir->i_sb), blkno, &set);
++	if (status < 0) {
++		if (status == -EINVAL) {
++			status = -ESTALE;
++		} else
++			mlog(ML_ERROR, "test inode bit failed %d\n", status);
++		parent = ERR_PTR(status);
++		goto bail_unlock;
++	}
++
++	trace_ocfs2_get_dentry_test_bit(status, set);
++	if (!set) {
++		status = -ESTALE;
++		parent = ERR_PTR(status);
++		goto bail_unlock;
++	}
++
+ 	parent = d_obtain_alias(ocfs2_iget(OCFS2_SB(dir->i_sb), blkno, 0, 0));
+ 
+ bail_unlock:
+ 	ocfs2_inode_unlock(dir, 0);
+ 
++unlock_nfs_sync:
++	ocfs2_nfs_sync_unlock(OCFS2_SB(dir->i_sb), 1);
++
+ bail:
+ 	trace_ocfs2_get_parent_end(parent);
+ 
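[ The ocfs2 hunk above nests a second lock level under the existing one
and unwinds through labels in reverse acquisition order. A minimal
plain-C sketch of that layered goto-unwind shape; names are
illustrative, not the ocfs2 API: ]

static int take_outer_lock(void)  { return 0; }  /* e.g. the NFS sync lock */
static int take_inner_lock(void)  { return 0; }  /* e.g. the inode lock */
static int do_lookup(void)        { return 0; }
static void drop_inner_lock(void) { }
static void drop_outer_lock(void) { }

int get_parent_like(void)
{
	int status = take_outer_lock();

	if (status < 0)
		goto bail;		/* nothing held yet */

	status = take_inner_lock();
	if (status < 0)
		goto unlock_outer;	/* only the outer lock is held */

	status = do_lookup();

	drop_inner_lock();
unlock_outer:
	drop_outer_lock();
bail:
	return status;
}

[ Each failure path releases exactly what was acquired before it, which
is why the patch retargets the inode-lock failure from "bail" to the
new "unlock_nfs_sync" label. ]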
+diff --git a/include/acpi/platform/aclinux.h b/include/acpi/platform/aclinux.h
+index 624b90b34085..310501994c02 100644
+--- a/include/acpi/platform/aclinux.h
++++ b/include/acpi/platform/aclinux.h
+@@ -66,6 +66,11 @@
+ 
+ #define ACPI_INIT_FUNCTION __init
+ 
++/* Use a specific debugging default separate from ACPICA */
++
++#undef ACPI_DEBUG_DEFAULT
++#define ACPI_DEBUG_DEFAULT          (ACPI_LV_INFO | ACPI_LV_REPAIR)
++
+ #ifndef CONFIG_ACPI
+ 
+ /* External globals for __KERNEL__, stubs is needed */
+@@ -82,11 +87,6 @@
+ #define ACPI_NO_ERROR_MESSAGES
+ #undef ACPI_DEBUG_OUTPUT
+ 
+-/* Use a specific debugging default separate from ACPICA */
+-
+-#undef ACPI_DEBUG_DEFAULT
+-#define ACPI_DEBUG_DEFAULT          (ACPI_LV_INFO | ACPI_LV_REPAIR)
+-
+ /* External interface for __KERNEL__, stub is needed */
+ 
+ #define ACPI_EXTERNAL_RETURN_STATUS(prototype) \
+diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
+index 381e872bfde0..7cd5c150c21d 100644
+--- a/include/linux/huge_mm.h
++++ b/include/linux/huge_mm.h
+@@ -47,10 +47,8 @@ extern bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+ extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ 			unsigned long addr, pgprot_t newprot,
+ 			int prot_numa);
+-vm_fault_t vmf_insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
+-			pmd_t *pmd, pfn_t pfn, bool write);
+-vm_fault_t vmf_insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
+-			pud_t *pud, pfn_t pfn, bool write);
++vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write);
++vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write);
+ enum transparent_hugepage_flag {
+ 	TRANSPARENT_HUGEPAGE_FLAG,
+ 	TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index 11943b60f208..edf476c8cfb9 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -123,9 +123,7 @@ void move_hugetlb_state(struct page *oldpage, struct page *newpage, int reason);
+ void free_huge_page(struct page *page);
+ void hugetlb_fix_reserve_counts(struct inode *inode);
+ extern struct mutex *hugetlb_fault_mutex_table;
+-u32 hugetlb_fault_mutex_hash(struct hstate *h, struct mm_struct *mm,
+-				struct vm_area_struct *vma,
+-				struct address_space *mapping,
++u32 hugetlb_fault_mutex_hash(struct hstate *h, struct address_space *mapping,
+ 				pgoff_t idx, unsigned long address);
+ 
+ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud);
+diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
+index 0f919d5fe84f..2cf6e04b08fc 100644
+--- a/include/linux/jbd2.h
++++ b/include/linux/jbd2.h
+@@ -1318,7 +1318,7 @@ extern void		__wait_on_journal (journal_t *);
+ 
+ /* Transaction cache support */
+ extern void jbd2_journal_destroy_transaction_cache(void);
+-extern int  jbd2_journal_init_transaction_cache(void);
++extern int __init jbd2_journal_init_transaction_cache(void);
+ extern void jbd2_journal_free_transaction(transaction_t *);
+ 
+ /*
+@@ -1446,8 +1446,10 @@ static inline void jbd2_free_inode(struct jbd2_inode *jinode)
+ /* Primary revoke support */
+ #define JOURNAL_REVOKE_DEFAULT_HASH 256
+ extern int	   jbd2_journal_init_revoke(journal_t *, int);
+-extern void	   jbd2_journal_destroy_revoke_caches(void);
+-extern int	   jbd2_journal_init_revoke_caches(void);
++extern void	   jbd2_journal_destroy_revoke_record_cache(void);
++extern void	   jbd2_journal_destroy_revoke_table_cache(void);
++extern int __init jbd2_journal_init_revoke_record_cache(void);
++extern int __init jbd2_journal_init_revoke_table_cache(void);
+ 
+ extern void	   jbd2_journal_destroy_revoke(journal_t *);
+ extern int	   jbd2_journal_revoke (handle_t *, unsigned long long, struct buffer_head *);
+diff --git a/include/linux/mfd/da9063/registers.h b/include/linux/mfd/da9063/registers.h
+index 5d42859cb441..844fc2973392 100644
+--- a/include/linux/mfd/da9063/registers.h
++++ b/include/linux/mfd/da9063/registers.h
+@@ -215,9 +215,9 @@
+ 
+ /* DA9063 Configuration registers */
+ /* OTP */
+-#define	DA9063_REG_OPT_COUNT		0x101
+-#define	DA9063_REG_OPT_ADDR		0x102
+-#define	DA9063_REG_OPT_DATA		0x103
++#define	DA9063_REG_OTP_CONT		0x101
++#define	DA9063_REG_OTP_ADDR		0x102
++#define	DA9063_REG_OTP_DATA		0x103
+ 
+ /* Customer Trim and Configuration */
+ #define	DA9063_REG_T_OFFSET		0x104
+diff --git a/include/linux/mfd/max77620.h b/include/linux/mfd/max77620.h
+index ad2a9a852aea..b4fd5a7c2aaa 100644
+--- a/include/linux/mfd/max77620.h
++++ b/include/linux/mfd/max77620.h
+@@ -136,8 +136,8 @@
+ #define MAX77620_FPS_PERIOD_MIN_US		40
+ #define MAX20024_FPS_PERIOD_MIN_US		20
+ 
+-#define MAX77620_FPS_PERIOD_MAX_US		2560
+-#define MAX20024_FPS_PERIOD_MAX_US		5120
++#define MAX20024_FPS_PERIOD_MAX_US		2560
++#define MAX77620_FPS_PERIOD_MAX_US		5120
+ 
+ #define MAX77620_REG_FPS_GPIO1			0x54
+ #define MAX77620_REG_FPS_GPIO2			0x55
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index ff09d32a8a1b..06ba9c5f156b 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -337,7 +337,7 @@ int bpf_prog_calc_tag(struct bpf_prog *fp)
+ }
+ 
+ static int bpf_adj_delta_to_imm(struct bpf_insn *insn, u32 pos, s32 end_old,
+-				s32 end_new, u32 curr, const bool probe_pass)
++				s32 end_new, s32 curr, const bool probe_pass)
+ {
+ 	const s64 imm_min = S32_MIN, imm_max = S32_MAX;
+ 	s32 delta = end_new - end_old;
+@@ -355,7 +355,7 @@ static int bpf_adj_delta_to_imm(struct bpf_insn *insn, u32 pos, s32 end_old,
+ }
+ 
+ static int bpf_adj_delta_to_off(struct bpf_insn *insn, u32 pos, s32 end_old,
+-				s32 end_new, u32 curr, const bool probe_pass)
++				s32 end_new, s32 curr, const bool probe_pass)
+ {
+ 	const s32 off_min = S16_MIN, off_max = S16_MAX;
+ 	s32 delta = end_new - end_old;
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 9dcd18aa210b..2628f3773ca8 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -952,6 +952,15 @@ static void mm_init_aio(struct mm_struct *mm)
+ #endif
+ }
+ 
++static __always_inline void mm_clear_owner(struct mm_struct *mm,
++					   struct task_struct *p)
++{
++#ifdef CONFIG_MEMCG
++	if (mm->owner == p)
++		WRITE_ONCE(mm->owner, NULL);
++#endif
++}
++
+ static void mm_init_owner(struct mm_struct *mm, struct task_struct *p)
+ {
+ #ifdef CONFIG_MEMCG
+@@ -1331,6 +1340,7 @@ static struct mm_struct *dup_mm(struct task_struct *tsk)
+ free_pt:
+ 	/* don't put binfmt in mmput, we haven't got module yet */
+ 	mm->binfmt = NULL;
++	mm_init_owner(mm, NULL);
+ 	mmput(mm);
+ 
+ fail_nomem:
+@@ -1662,6 +1672,21 @@ static inline void rcu_copy_process(struct task_struct *p)
+ #endif /* #ifdef CONFIG_TASKS_RCU */
+ }
+ 
++static void __delayed_free_task(struct rcu_head *rhp)
++{
++	struct task_struct *tsk = container_of(rhp, struct task_struct, rcu);
++
++	free_task(tsk);
++}
++
++static __always_inline void delayed_free_task(struct task_struct *tsk)
++{
++	if (IS_ENABLED(CONFIG_MEMCG))
++		call_rcu(&tsk->rcu, __delayed_free_task);
++	else
++		free_task(tsk);
++}
++
+ /*
+  * This creates a new process as a copy of the old one,
+  * but does not actually start it yet.
+@@ -2123,8 +2148,10 @@ bad_fork_cleanup_io:
+ bad_fork_cleanup_namespaces:
+ 	exit_task_namespaces(p);
+ bad_fork_cleanup_mm:
+-	if (p->mm)
++	if (p->mm) {
++		mm_clear_owner(p->mm, p);
+ 		mmput(p->mm);
++	}
+ bad_fork_cleanup_signal:
+ 	if (!(clone_flags & CLONE_THREAD))
+ 		free_signal_struct(p->signal);
+@@ -2155,7 +2182,7 @@ bad_fork_cleanup_count:
+ bad_fork_free:
+ 	p->state = TASK_DEAD;
+ 	put_task_stack(p);
+-	free_task(p);
++	delayed_free_task(p);
+ fork_out:
+ 	spin_lock_irq(&current->sighand->siglock);
+ 	hlist_del_init(&delayed.node);
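[ The delayed_free_task() helper added above defers the final free
through an RCU grace period when CONFIG_MEMCG is enabled, since
mm->owner readers may still be dereferencing the task. A reduced
sketch of that deferred-free pattern; kernel context, hypothetical
struct, not buildable stand-alone: ]

#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/types.h>

struct obj {
	struct rcu_head rcu;
	/* payload ... */
};

static void obj_free_rcu(struct rcu_head *rhp)
{
	kfree(container_of(rhp, struct obj, rcu));
}

static void obj_release(struct obj *o, bool readers_possible)
{
	if (readers_possible)
		call_rcu(&o->rcu, obj_free_rcu);  /* free after a grace period */
	else
		kfree(o);                         /* no readers, free immediately */
}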
+diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
+index fbe96341beee..59b801de8dd5 100644
+--- a/kernel/locking/rwsem-xadd.c
++++ b/kernel/locking/rwsem-xadd.c
+@@ -130,6 +130,7 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
+ {
+ 	struct rwsem_waiter *waiter, *tmp;
+ 	long oldcount, woken = 0, adjustment = 0;
++	struct list_head wlist;
+ 
+ 	/*
+ 	 * Take a peek at the queue head waiter such that we can determine
+@@ -188,18 +189,42 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
+ 	 * of the queue. We know that woken will be at least 1 as we accounted
+ 	 * for above. Note we increment the 'active part' of the count by the
+ 	 * number of readers before waking any processes up.
++	 *
++	 * We have to do wakeup in 2 passes to prevent the possibility that
++	 * the reader count may be decremented before it is incremented. This
++	 * is because the to-be-woken waiter may not have slept yet, so it
++	 * may see waiter->task cleared, finish its critical section and
++	 * do an unlock before the reader count is incremented.
++	 *
++	 * 1) Collect the read-waiters in a separate list, count them and
++	 *    fully increment the reader count in rwsem.
++	 * 2) For each waiter in the new list, clear waiter->task and
++	 *    put them into wake_q to be woken up later.
+ 	 */
+-	list_for_each_entry_safe(waiter, tmp, &sem->wait_list, list) {
+-		struct task_struct *tsk;
+-
++	list_for_each_entry(waiter, &sem->wait_list, list) {
+ 		if (waiter->type == RWSEM_WAITING_FOR_WRITE)
+ 			break;
+ 
+ 		woken++;
+-		tsk = waiter->task;
++	}
++	list_cut_before(&wlist, &sem->wait_list, &waiter->list);
++
++	adjustment = woken * RWSEM_ACTIVE_READ_BIAS - adjustment;
++	if (list_empty(&sem->wait_list)) {
++		/* hit end of list above */
++		adjustment -= RWSEM_WAITING_BIAS;
++	}
++
++	if (adjustment)
++		atomic_long_add(adjustment, &sem->count);
++
++	/* 2nd pass */
++	list_for_each_entry_safe(waiter, tmp, &wlist, list) {
++		struct task_struct *tsk;
+ 
++		tsk = waiter->task;
+ 		get_task_struct(tsk);
+-		list_del(&waiter->list);
++
+ 		/*
+ 		 * Ensure calling get_task_struct() before setting the reader
+ 		 * waiter to nil such that rwsem_down_read_failed() cannot
+@@ -213,15 +238,6 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
+ 		 */
+ 		wake_q_add_safe(wake_q, tsk);
+ 	}
+-
+-	adjustment = woken * RWSEM_ACTIVE_READ_BIAS - adjustment;
+-	if (list_empty(&sem->wait_list)) {
+-		/* hit end of list above */
+-		adjustment -= RWSEM_WAITING_BIAS;
+-	}
+-
+-	if (adjustment)
+-		atomic_long_add(adjustment, &sem->count);
+ }
+ 
+ /*
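[ The comment block in the hunk above motivates the two-pass wakeup. A
plain-C sketch of the same shape, assuming a singly linked wait list;
toy types, not the kernel's rwsem structures: ]

#include <stdatomic.h>
#include <stddef.h>

struct waiter { struct waiter *next; int is_reader; };

static void wake(struct waiter *w) { (void)w; /* hand off to a scheduler */ }

void mark_wake(struct waiter **head, atomic_long *count)
{
	struct waiter *w, *next, *wlist = NULL, **tail = &wlist;
	long woken = 0;

	/* pass 1: detach the leading readers and count them */
	while ((w = *head) != NULL && w->is_reader) {
		*head = w->next;
		w->next = NULL;
		*tail = w;
		tail = &w->next;
		woken++;
	}

	/* publish the full reader count before any waiter can run */
	atomic_fetch_add(count, woken);

	/* pass 2: wake each detached waiter, reading next before waking */
	for (w = wlist; w != NULL; w = next) {
		next = w->next;
		wake(w);
	}
}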
+diff --git a/mm/compaction.c b/mm/compaction.c
+index 3319e0872d01..444029da4e9d 100644
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -1228,7 +1228,7 @@ fast_isolate_around(struct compact_control *cc, unsigned long pfn, unsigned long
+ 
+ 	/* Pageblock boundaries */
+ 	start_pfn = pageblock_start_pfn(pfn);
+-	end_pfn = min(start_pfn + pageblock_nr_pages, zone_end_pfn(cc->zone));
++	end_pfn = min(pageblock_end_pfn(pfn), zone_end_pfn(cc->zone)) - 1;
+ 
+ 	/* Scan before */
+ 	if (start_pfn != pfn) {
+@@ -1239,7 +1239,7 @@ fast_isolate_around(struct compact_control *cc, unsigned long pfn, unsigned long
+ 
+ 	/* Scan after */
+ 	start_pfn = pfn + nr_isolated;
+-	if (start_pfn != end_pfn)
++	if (start_pfn < end_pfn)
+ 		isolate_freepages_block(cc, &start_pfn, end_pfn, &cc->freepages, 1, false);
+ 
+ 	/* Skip this pageblock in the future as it's full or nearly full */
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 165ea46bf149..4310c6e9e5a3 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -793,11 +793,13 @@ out_unlock:
+ 		pte_free(mm, pgtable);
+ }
+ 
+-vm_fault_t vmf_insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
+-			pmd_t *pmd, pfn_t pfn, bool write)
++vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
+ {
++	unsigned long addr = vmf->address & PMD_MASK;
++	struct vm_area_struct *vma = vmf->vma;
+ 	pgprot_t pgprot = vma->vm_page_prot;
+ 	pgtable_t pgtable = NULL;
++
+ 	/*
+ 	 * If we had pmd_special, we could avoid all these restrictions,
+ 	 * but we need to be consistent with PTEs and architectures that
+@@ -820,7 +822,7 @@ vm_fault_t vmf_insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
+ 
+ 	track_pfn_insert(vma, &pgprot, pfn);
+ 
+-	insert_pfn_pmd(vma, addr, pmd, pfn, pgprot, write, pgtable);
++	insert_pfn_pmd(vma, addr, vmf->pmd, pfn, pgprot, write, pgtable);
+ 	return VM_FAULT_NOPAGE;
+ }
+ EXPORT_SYMBOL_GPL(vmf_insert_pfn_pmd);
+@@ -869,10 +871,12 @@ out_unlock:
+ 	spin_unlock(ptl);
+ }
+ 
+-vm_fault_t vmf_insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
+-			pud_t *pud, pfn_t pfn, bool write)
++vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
+ {
++	unsigned long addr = vmf->address & PUD_MASK;
++	struct vm_area_struct *vma = vmf->vma;
+ 	pgprot_t pgprot = vma->vm_page_prot;
++
+ 	/*
+ 	 * If we had pud_special, we could avoid all these restrictions,
+ 	 * but we need to be consistent with PTEs and architectures that
+@@ -889,7 +893,7 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
+ 
+ 	track_pfn_insert(vma, &pgprot, pfn);
+ 
+-	insert_pfn_pud(vma, addr, pud, pfn, pgprot, write);
++	insert_pfn_pud(vma, addr, vmf->pud, pfn, pgprot, write);
+ 	return VM_FAULT_NOPAGE;
+ }
+ EXPORT_SYMBOL_GPL(vmf_insert_pfn_pud);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 6cdc7b2d9100..5baf1f00ad42 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -1574,8 +1574,9 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask,
+ 	 */
+ 	if (h->surplus_huge_pages >= h->nr_overcommit_huge_pages) {
+ 		SetPageHugeTemporary(page);
++		spin_unlock(&hugetlb_lock);
+ 		put_page(page);
+-		page = NULL;
++		return NULL;
+ 	} else {
+ 		h->surplus_huge_pages++;
+ 		h->surplus_huge_pages_node[page_to_nid(page)]++;
+@@ -3777,8 +3778,7 @@ retry:
+ 			 * handling userfault.  Reacquire after handling
+ 			 * fault to make calling code simpler.
+ 			 */
+-			hash = hugetlb_fault_mutex_hash(h, mm, vma, mapping,
+-							idx, haddr);
++			hash = hugetlb_fault_mutex_hash(h, mapping, idx, haddr);
+ 			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
+ 			ret = handle_userfault(&vmf, VM_UFFD_MISSING);
+ 			mutex_lock(&hugetlb_fault_mutex_table[hash]);
+@@ -3886,21 +3886,14 @@ backout_unlocked:
+ }
+ 
+ #ifdef CONFIG_SMP
+-u32 hugetlb_fault_mutex_hash(struct hstate *h, struct mm_struct *mm,
+-			    struct vm_area_struct *vma,
+-			    struct address_space *mapping,
++u32 hugetlb_fault_mutex_hash(struct hstate *h, struct address_space *mapping,
+ 			    pgoff_t idx, unsigned long address)
+ {
+ 	unsigned long key[2];
+ 	u32 hash;
+ 
+-	if (vma->vm_flags & VM_SHARED) {
+-		key[0] = (unsigned long) mapping;
+-		key[1] = idx;
+-	} else {
+-		key[0] = (unsigned long) mm;
+-		key[1] = address >> huge_page_shift(h);
+-	}
++	key[0] = (unsigned long) mapping;
++	key[1] = idx;
+ 
+ 	hash = jhash2((u32 *)&key, sizeof(key)/sizeof(u32), 0);
+ 
+@@ -3911,9 +3904,7 @@ u32 hugetlb_fault_mutex_hash(struct hstate *h, struct mm_struct *mm,
+  * For uniprocessor systems we always use a single mutex, so just
+  * return 0 and avoid the hashing overhead.
+  */
+-u32 hugetlb_fault_mutex_hash(struct hstate *h, struct mm_struct *mm,
+-			    struct vm_area_struct *vma,
+-			    struct address_space *mapping,
++u32 hugetlb_fault_mutex_hash(struct hstate *h, struct address_space *mapping,
+ 			    pgoff_t idx, unsigned long address)
+ {
+ 	return 0;
+@@ -3958,7 +3949,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+ 	 * get spurious allocation failures if two CPUs race to instantiate
+ 	 * the same page in the page cache.
+ 	 */
+-	hash = hugetlb_fault_mutex_hash(h, mm, vma, mapping, idx, haddr);
++	hash = hugetlb_fault_mutex_hash(h, mapping, idx, haddr);
+ 	mutex_lock(&hugetlb_fault_mutex_table[hash]);
+ 
+ 	entry = huge_ptep_get(ptep);
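[ After the hunks above the fault-mutex hash key is always the pair
(mapping, idx), for private and shared mappings alike. A plain-C
stand-in for the key layout; the mixer below is an illustrative
multiplicative hash, not the kernel's jhash2: ]

#include <stdint.h>

static uint32_t mix(const uint32_t *k, unsigned int n)
{
	uint32_t h = 0;

	while (n--)
		h = (h ^ *k++) * 2654435761u;	/* Knuth multiplicative step */
	return h;
}

uint32_t fault_mutex_hash(const void *mapping, unsigned long idx,
			  unsigned long table_size)
{
	unsigned long key[2] = { (unsigned long)mapping, idx };

	/* table_size is assumed to be a power of two */
	return mix((const uint32_t *)key, sizeof(key) / sizeof(uint32_t)) &
	       (table_size - 1);
}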
+diff --git a/mm/mincore.c b/mm/mincore.c
+index 218099b5ed31..c3f058bd0faf 100644
+--- a/mm/mincore.c
++++ b/mm/mincore.c
+@@ -169,6 +169,22 @@ out:
+ 	return 0;
+ }
+ 
++static inline bool can_do_mincore(struct vm_area_struct *vma)
++{
++	if (vma_is_anonymous(vma))
++		return true;
++	if (!vma->vm_file)
++		return false;
++	/*
++	 * Reveal pagecache information only for non-anonymous mappings that
++	 * correspond to the files the calling process could (if tried) open
++	 * correspond to the files the calling process could (if it tried) open
++	 * mappings, which opens a side channel.
++	 */
++	return inode_owner_or_capable(file_inode(vma->vm_file)) ||
++		inode_permission(file_inode(vma->vm_file), MAY_WRITE) == 0;
++}
++
+ /*
+  * Do a chunk of "sys_mincore()". We've already checked
+  * all the arguments, we hold the mmap semaphore: we should
+@@ -189,8 +205,13 @@ static long do_mincore(unsigned long addr, unsigned long pages, unsigned char *v
+ 	vma = find_vma(current->mm, addr);
+ 	if (!vma || addr < vma->vm_start)
+ 		return -ENOMEM;
+-	mincore_walk.mm = vma->vm_mm;
+ 	end = min(vma->vm_end, addr + (pages << PAGE_SHIFT));
++	if (!can_do_mincore(vma)) {
++		unsigned long pages = DIV_ROUND_UP(end - addr, PAGE_SIZE);
++		memset(vec, 1, pages);
++		return pages;
++	}
++	mincore_walk.mm = vma->vm_mm;
+ 	err = walk_page_range(addr, end, &mincore_walk);
+ 	if (err < 0)
+ 		return err;
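[ From userspace the change is observable through mincore(2): pages of
mappings the caller could not open for writing are now reported as
resident across the board instead of leaking pagecache state. A small
usage sketch; the anonymous mapping here passes the new check: ]

#define _DEFAULT_SOURCE		/* for mincore() */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	size_t len = 4 * (size_t)page;
	unsigned char vec[4];
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	memset(p, 0, len);	/* fault the pages in */
	if (mincore(p, len, vec) == 0) {
		for (int i = 0; i < 4; i++)
			printf("page %d: %s\n", i,
			       (vec[i] & 1) ? "resident" : "not resident");
	}
	munmap(p, len);
	return 0;
}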
+diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
+index d59b5a73dfb3..9932d5755e4c 100644
+--- a/mm/userfaultfd.c
++++ b/mm/userfaultfd.c
+@@ -271,8 +271,7 @@ retry:
+ 		 */
+ 		idx = linear_page_index(dst_vma, dst_addr);
+ 		mapping = dst_vma->vm_file->f_mapping;
+-		hash = hugetlb_fault_mutex_hash(h, dst_mm, dst_vma, mapping,
+-								idx, dst_addr);
++		hash = hugetlb_fault_mutex_hash(h, mapping, idx, dst_addr);
+ 		mutex_lock(&hugetlb_fault_mutex_table[hash]);
+ 
+ 		err = -ENOMEM;
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 8b3ac690efa3..0c61c05503f5 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1551,9 +1551,11 @@ static bool hdmi_present_sense_via_verbs(struct hdmi_spec_per_pin *per_pin,
+ 	ret = !repoll || !eld->monitor_present || eld->eld_valid;
+ 
+ 	jack = snd_hda_jack_tbl_get(codec, pin_nid);
+-	if (jack)
++	if (jack) {
+ 		jack->block_report = !ret;
+-
++		jack->pin_sense = (eld->monitor_present && eld->eld_valid) ?
++			AC_PINSENSE_PRESENCE : 0;
++	}
+ 	mutex_unlock(&per_pin->lock);
+ 	return ret;
+ }
+@@ -1663,6 +1665,11 @@ static void hdmi_repoll_eld(struct work_struct *work)
+ 	container_of(to_delayed_work(work), struct hdmi_spec_per_pin, work);
+ 	struct hda_codec *codec = per_pin->codec;
+ 	struct hdmi_spec *spec = codec->spec;
++	struct hda_jack_tbl *jack;
++
++	jack = snd_hda_jack_tbl_get(codec, per_pin->pin_nid);
++	if (jack)
++		jack->jack_dirty = 1;
+ 
+ 	if (per_pin->repoll_count++ > 6)
+ 		per_pin->repoll_count = 0;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 42cd3945e0de..fc99a5a6f7e1 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -477,12 +477,45 @@ static void alc_auto_setup_eapd(struct hda_codec *codec, bool on)
+ 		set_eapd(codec, *p, on);
+ }
+ 
++static int find_ext_mic_pin(struct hda_codec *codec);
++
++static void alc_headset_mic_no_shutup(struct hda_codec *codec)
++{
++	const struct hda_pincfg *pin;
++	int mic_pin = find_ext_mic_pin(codec);
++	int i;
++
++	/* don't shut up pins when unloading the driver; otherwise it breaks
++	 * the default pin setup at the next load of the driver
++	 */
++	if (codec->bus->shutdown)
++		return;
++
++	snd_array_for_each(&codec->init_pins, i, pin) {
++		/* use read here for syncing after issuing each verb */
++		if (pin->nid != mic_pin)
++			snd_hda_codec_read(codec, pin->nid, 0,
++					AC_VERB_SET_PIN_WIDGET_CONTROL, 0);
++	}
++
++	codec->pins_shutup = 1;
++}
++
+ static void alc_shutup_pins(struct hda_codec *codec)
+ {
+ 	struct alc_spec *spec = codec->spec;
+ 
+-	if (!spec->no_shutup_pins)
+-		snd_hda_shutup_pins(codec);
++	switch (codec->core.vendor_id) {
++	case 0x10ec0286:
++	case 0x10ec0288:
++	case 0x10ec0298:
++		alc_headset_mic_no_shutup(codec);
++		break;
++	default:
++		if (!spec->no_shutup_pins)
++			snd_hda_shutup_pins(codec);
++		break;
++	}
+ }
+ 
+ /* generic shutup callback;
+@@ -803,11 +836,10 @@ static int alc_init(struct hda_codec *codec)
+ 	if (spec->init_hook)
+ 		spec->init_hook(codec);
+ 
++	snd_hda_gen_init(codec);
+ 	alc_fix_pll(codec);
+ 	alc_auto_init_amp(codec, spec->init_amp);
+ 
+-	snd_hda_gen_init(codec);
+-
+ 	snd_hda_apply_fixup(codec, HDA_FIXUP_ACT_INIT);
+ 
+ 	return 0;
+@@ -2924,27 +2956,6 @@ static int alc269_parse_auto_config(struct hda_codec *codec)
+ 	return alc_parse_auto_config(codec, alc269_ignore, ssids);
+ }
+ 
+-static int find_ext_mic_pin(struct hda_codec *codec);
+-
+-static void alc286_shutup(struct hda_codec *codec)
+-{
+-	const struct hda_pincfg *pin;
+-	int i;
+-	int mic_pin = find_ext_mic_pin(codec);
+-	/* don't shut up pins when unloading the driver; otherwise it breaks
+-	 * the default pin setup at the next load of the driver
+-	 */
+-	if (codec->bus->shutdown)
+-		return;
+-	snd_array_for_each(&codec->init_pins, i, pin) {
+-		/* use read here for syncing after issuing each verb */
+-		if (pin->nid != mic_pin)
+-			snd_hda_codec_read(codec, pin->nid, 0,
+-					AC_VERB_SET_PIN_WIDGET_CONTROL, 0);
+-	}
+-	codec->pins_shutup = 1;
+-}
+-
+ static void alc269vb_toggle_power_output(struct hda_codec *codec, int power_up)
+ {
+ 	alc_update_coef_idx(codec, 0x04, 1 << 11, power_up ? (1 << 11) : 0);
+@@ -6933,6 +6944,10 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1462, 0xb171, "Cubi N 8GL (MS-B171)", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1558, 0x1325, "System76 Darter Pro (darp5)", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x8550, "System76 Gazelle (gaze14)", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x8551, "System76 Gazelle (gaze14)", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x8560, "System76 Gazelle (gaze14)", ALC269_FIXUP_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1558, 0x8561, "System76 Gazelle (gaze14)", ALC269_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC233_FIXUP_LENOVO_MULTI_CODECS),
+ 	SND_PCI_QUIRK(0x17aa, 0x20f2, "Thinkpad SL410/510", ALC269_FIXUP_SKU_IGNORE),
+ 	SND_PCI_QUIRK(0x17aa, 0x215e, "Thinkpad L512", ALC269_FIXUP_SKU_IGNORE),
+@@ -6975,7 +6990,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x313c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+ 	SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+-	SND_PCI_QUIRK(0x17aa, 0x3978, "IdeaPad Y410P", ALC269_FIXUP_NO_SHUTUP),
++	SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ 	SND_PCI_QUIRK(0x17aa, 0x5013, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x17aa, 0x501a, "Thinkpad", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x501e, "Thinkpad L440", ALC292_FIXUP_TPT440_DOCK),
+@@ -7704,7 +7719,6 @@ static int patch_alc269(struct hda_codec *codec)
+ 	case 0x10ec0286:
+ 	case 0x10ec0288:
+ 		spec->codec_variant = ALC269_TYPE_ALC286;
+-		spec->shutup = alc286_shutup;
+ 		break;
+ 	case 0x10ec0298:
+ 		spec->codec_variant = ALC269_TYPE_ALC298;
+diff --git a/sound/soc/codecs/hdac_hdmi.c b/sound/soc/codecs/hdac_hdmi.c
+index 5eeb0fe836a9..4de1fbfa8827 100644
+--- a/sound/soc/codecs/hdac_hdmi.c
++++ b/sound/soc/codecs/hdac_hdmi.c
+@@ -1854,6 +1854,17 @@ static int hdmi_codec_probe(struct snd_soc_component *component)
+ 	/* Imp: Store the card pointer in hda_codec */
+ 	hdmi->card = dapm->card->snd_card;
+ 
++	/*
++	 * Setup a device_link between card device and HDMI codec device.
++	 * The card device is the consumer and the HDMI codec device is
++	 * the supplier. With this setting, we can make sure that the audio
++	 * domain in display power will always be turned on before operating
++	 * on the HDMI audio codec registers.
++	 * Let's use the flag DL_FLAG_AUTOREMOVE_CONSUMER. This makes
++	 * sure the device link is freed when the machine driver is removed.
++	 */
++	device_link_add(component->card->dev, &hdev->dev, DL_FLAG_RPM_ACTIVE |
++			DL_FLAG_AUTOREMOVE_CONSUMER);
+ 	/*
+ 	 * hdac_device core already sets the state to active and calls
+ 	 * get_noresume. So enable runtime and set the device to suspend.
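[ device_link_add(), used above, is the generic driver-core API for
ordering power state between two devices. A reduced sketch of the
consumer/supplier setup; kernel context, hypothetical function name,
not buildable stand-alone: ]

#include <linux/device.h>

static int bind_card_to_codec(struct device *card_dev,
			      struct device *codec_dev)
{
	struct device_link *link;

	/*
	 * card_dev (consumer) must not operate while codec_dev (supplier)
	 * is suspended: DL_FLAG_RPM_ACTIVE resumes the supplier now, and
	 * DL_FLAG_AUTOREMOVE_CONSUMER drops the link automatically when
	 * the consumer unbinds.
	 */
	link = device_link_add(card_dev, codec_dev,
			       DL_FLAG_RPM_ACTIVE | DL_FLAG_AUTOREMOVE_CONSUMER);
	return link ? 0 : -EINVAL;
}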
+diff --git a/sound/soc/codecs/max98090.c b/sound/soc/codecs/max98090.c
+index 30c242c38d99..7619ea31ab50 100644
+--- a/sound/soc/codecs/max98090.c
++++ b/sound/soc/codecs/max98090.c
+@@ -1194,14 +1194,14 @@ static const struct snd_soc_dapm_widget max98090_dapm_widgets[] = {
+ 		&max98090_right_rcv_mixer_controls[0],
+ 		ARRAY_SIZE(max98090_right_rcv_mixer_controls)),
+ 
+-	SND_SOC_DAPM_MUX("LINMOD Mux", M98090_REG_LOUTR_MIXER,
+-		M98090_LINMOD_SHIFT, 0, &max98090_linmod_mux),
++	SND_SOC_DAPM_MUX("LINMOD Mux", SND_SOC_NOPM, 0, 0,
++		&max98090_linmod_mux),
+ 
+-	SND_SOC_DAPM_MUX("MIXHPLSEL Mux", M98090_REG_HP_CONTROL,
+-		M98090_MIXHPLSEL_SHIFT, 0, &max98090_mixhplsel_mux),
++	SND_SOC_DAPM_MUX("MIXHPLSEL Mux", SND_SOC_NOPM, 0, 0,
++		&max98090_mixhplsel_mux),
+ 
+-	SND_SOC_DAPM_MUX("MIXHPRSEL Mux", M98090_REG_HP_CONTROL,
+-		M98090_MIXHPRSEL_SHIFT, 0, &max98090_mixhprsel_mux),
++	SND_SOC_DAPM_MUX("MIXHPRSEL Mux", SND_SOC_NOPM, 0, 0,
++		&max98090_mixhprsel_mux),
+ 
+ 	SND_SOC_DAPM_PGA("HP Left Out", M98090_REG_OUTPUT_ENABLE,
+ 		M98090_HPLEN_SHIFT, 0, NULL, 0),
+diff --git a/sound/soc/codecs/rt5677-spi.c b/sound/soc/codecs/rt5677-spi.c
+index 84501c2020c7..a2c7ffa5f400 100644
+--- a/sound/soc/codecs/rt5677-spi.c
++++ b/sound/soc/codecs/rt5677-spi.c
+@@ -57,13 +57,15 @@ static DEFINE_MUTEX(spi_mutex);
+  * RT5677_SPI_READ/WRITE_32:	Transfer 4 bytes
+  * RT5677_SPI_READ/WRITE_BURST:	Transfer any multiples of 8 bytes
+  *
+- * For example, reading 260 bytes at 0x60030002 uses the following commands:
+- * 0x60030002 RT5677_SPI_READ_16	2 bytes
++ * Note:
++ * 16-bit writes and reads are restricted to the address range
++ * 0x18020000 ~ 0x18021000
++ *
++ * For example, reading 256 bytes at 0x60030004 uses the following commands:
+  * 0x60030004 RT5677_SPI_READ_32	4 bytes
+  * 0x60030008 RT5677_SPI_READ_BURST	240 bytes
+  * 0x600300F8 RT5677_SPI_READ_BURST	8 bytes
+  * 0x60030100 RT5677_SPI_READ_32	4 bytes
+- * 0x60030104 RT5677_SPI_READ_16	2 bytes
+  *
+  * Input:
+  * @read: true for read commands; false for write commands
+@@ -78,15 +80,13 @@ static u8 rt5677_spi_select_cmd(bool read, u32 align, u32 remain, u32 *len)
+ {
+ 	u8 cmd;
+ 
+-	if (align == 2 || align == 6 || remain == 2) {
+-		cmd = RT5677_SPI_READ_16;
+-		*len = 2;
+-	} else if (align == 4 || remain <= 6) {
++	if (align == 4 || remain <= 4) {
+ 		cmd = RT5677_SPI_READ_32;
+ 		*len = 4;
+ 	} else {
+ 		cmd = RT5677_SPI_READ_BURST;
+-		*len = min_t(u32, remain & ~7, RT5677_SPI_BURST_LEN);
++		*len = (((remain - 1) >> 3) + 1) << 3;
++		*len = min_t(u32, *len, RT5677_SPI_BURST_LEN);
+ 	}
+ 	return read ? cmd : cmd + 1;
+ }
+@@ -107,7 +107,7 @@ static void rt5677_spi_reverse(u8 *dst, u32 dstlen, const u8 *src, u32 srclen)
+ 	}
+ }
+ 
+-/* Read DSP address space using SPI. addr and len have to be 2-byte aligned. */
++/* Read DSP address space using SPI. addr and len have to be 4-byte aligned. */
+ int rt5677_spi_read(u32 addr, void *rxbuf, size_t len)
+ {
+ 	u32 offset;
+@@ -123,7 +123,7 @@ int rt5677_spi_read(u32 addr, void *rxbuf, size_t len)
+ 	if (!g_spi)
+ 		return -ENODEV;
+ 
+-	if ((addr & 1) || (len & 1)) {
++	if ((addr & 3) || (len & 3)) {
+ 		dev_err(&g_spi->dev, "Bad read align 0x%x(%zu)\n", addr, len);
+ 		return -EACCES;
+ 	}
+@@ -158,13 +158,13 @@ int rt5677_spi_read(u32 addr, void *rxbuf, size_t len)
+ }
+ EXPORT_SYMBOL_GPL(rt5677_spi_read);
+ 
+-/* Write DSP address space using SPI. addr has to be 2-byte aligned.
+- * If len is not 2-byte aligned, an extra byte of zero is written at the end
++/* Write DSP address space using SPI. addr has to be 4-byte aligned.
++ * If len is not 4-byte aligned, then extra zeros are written at the end
+  * as padding.
+  */
+ int rt5677_spi_write(u32 addr, const void *txbuf, size_t len)
+ {
+-	u32 offset, len_with_pad = len;
++	u32 offset;
+ 	int status = 0;
+ 	struct spi_transfer t;
+ 	struct spi_message m;
+@@ -177,22 +177,19 @@ int rt5677_spi_write(u32 addr, const void *txbuf, size_t len)
+ 	if (!g_spi)
+ 		return -ENODEV;
+ 
+-	if (addr & 1) {
++	if (addr & 3) {
+ 		dev_err(&g_spi->dev, "Bad write align 0x%x(%zu)\n", addr, len);
+ 		return -EACCES;
+ 	}
+ 
+-	if (len & 1)
+-		len_with_pad = len + 1;
+-
+ 	memset(&t, 0, sizeof(t));
+ 	t.tx_buf = buf;
+ 	t.speed_hz = RT5677_SPI_FREQ;
+ 	spi_message_init_with_transfers(&m, &t, 1);
+ 
+-	for (offset = 0; offset < len_with_pad;) {
++	for (offset = 0; offset < len;) {
+ 		spi_cmd = rt5677_spi_select_cmd(false, (addr + offset) & 7,
+-				len_with_pad - offset, &t.len);
++				len - offset, &t.len);
+ 
+ 		/* Construct SPI message header */
+ 		buf[0] = spi_cmd;
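[ The new burst-length computation above rounds the remaining byte
count up to a multiple of 8 and caps it, so a trailing partial word is
padded instead of falling back to 16-bit transfers. A plain-C check of
the arithmetic; the 240-byte cap mirrors the burst size quoted in the
surrounding comments: ]

#include <stdio.h>

#define BURST_LEN 240	/* illustrative cap, standing in for RT5677_SPI_BURST_LEN */

static unsigned int burst_len(unsigned int remain)
{
	unsigned int len = (((remain - 1) >> 3) + 1) << 3;	/* round up to 8 */

	return len < BURST_LEN ? len : BURST_LEN;
}

int main(void)
{
	printf("%u\n", burst_len(5));	/* -> 8, the tail gets zero padding */
	printf("%u\n", burst_len(240));	/* -> 240, exactly one burst */
	printf("%u\n", burst_len(500));	/* -> 240, capped; the loop continues */
	return 0;
}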
+diff --git a/sound/soc/fsl/fsl_esai.c b/sound/soc/fsl/fsl_esai.c
+index 3623aa9a6f2e..15202a637197 100644
+--- a/sound/soc/fsl/fsl_esai.c
++++ b/sound/soc/fsl/fsl_esai.c
+@@ -251,7 +251,7 @@ static int fsl_esai_set_dai_sysclk(struct snd_soc_dai *dai, int clk_id,
+ 		break;
+ 	case ESAI_HCKT_EXTAL:
+ 		ecr |= ESAI_ECR_ETI;
+-		/* fall through */
++		break;
+ 	case ESAI_HCKR_EXTAL:
+ 		ecr |= ESAI_ECR_ERI;
+ 		break;
+diff --git a/sound/usb/line6/toneport.c b/sound/usb/line6/toneport.c
+index 19bee725de00..325b07b98b3c 100644
+--- a/sound/usb/line6/toneport.c
++++ b/sound/usb/line6/toneport.c
+@@ -54,8 +54,8 @@ struct usb_line6_toneport {
+ 	/* Firmware version (x 100) */
+ 	u8 firmware_version;
+ 
+-	/* Timer for delayed PCM startup */
+-	struct timer_list timer;
++	/* Work for delayed PCM startup */
++	struct delayed_work pcm_work;
+ 
+ 	/* Device type */
+ 	enum line6_device_type type;
+@@ -241,9 +241,10 @@ static int snd_toneport_source_put(struct snd_kcontrol *kcontrol,
+ 	return 1;
+ }
+ 
+-static void toneport_start_pcm(struct timer_list *t)
++static void toneport_start_pcm(struct work_struct *work)
+ {
+-	struct usb_line6_toneport *toneport = from_timer(toneport, t, timer);
++	struct usb_line6_toneport *toneport =
++		container_of(work, struct usb_line6_toneport, pcm_work.work);
+ 	struct usb_line6 *line6 = &toneport->line6;
+ 
+ 	line6_pcm_acquire(line6->line6pcm, LINE6_STREAM_MONITOR, true);
+@@ -393,7 +394,8 @@ static int toneport_setup(struct usb_line6_toneport *toneport)
+ 	if (toneport_has_led(toneport))
+ 		toneport_update_led(toneport);
+ 
+-	mod_timer(&toneport->timer, jiffies + TONEPORT_PCM_DELAY * HZ);
++	schedule_delayed_work(&toneport->pcm_work,
++			      msecs_to_jiffies(TONEPORT_PCM_DELAY * 1000));
+ 	return 0;
+ }
+ 
+@@ -405,7 +407,7 @@ static void line6_toneport_disconnect(struct usb_line6 *line6)
+ 	struct usb_line6_toneport *toneport =
+ 		(struct usb_line6_toneport *)line6;
+ 
+-	del_timer_sync(&toneport->timer);
++	cancel_delayed_work_sync(&toneport->pcm_work);
+ 
+ 	if (toneport_has_led(toneport))
+ 		toneport_remove_leds(toneport);
+@@ -422,7 +424,7 @@ static int toneport_init(struct usb_line6 *line6,
+ 	struct usb_line6_toneport *toneport =  (struct usb_line6_toneport *) line6;
+ 
+ 	toneport->type = id->driver_info;
+-	timer_setup(&toneport->timer, toneport_start_pcm, 0);
++	INIT_DELAYED_WORK(&toneport->pcm_work, toneport_start_pcm);
+ 
+ 	line6->disconnect = line6_toneport_disconnect;
+ 
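[ The conversion above replaces a timer with delayed work because the
PCM startup path may sleep, which timer callbacks (softirq context)
must not do. A reduced sketch of the delayed_work lifecycle; kernel
context, hypothetical struct and delay value, not buildable
stand-alone: ]

#include <linux/workqueue.h>

struct toneport_like {
	struct delayed_work pcm_work;
};

static void pcm_work_fn(struct work_struct *work)
{
	struct toneport_like *tp =
		container_of(work, struct toneport_like, pcm_work.work);

	(void)tp;	/* process context: sleeping is allowed here */
}

static void setup(struct toneport_like *tp)
{
	INIT_DELAYED_WORK(&tp->pcm_work, pcm_work_fn);
	schedule_delayed_work(&tp->pcm_work, msecs_to_jiffies(1000));
}

static void teardown(struct toneport_like *tp)
{
	cancel_delayed_work_sync(&tp->pcm_work);	/* waits if running */
}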
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 73d7dff425c1..53dccbfe392b 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -2675,6 +2675,8 @@ static int parse_audio_selector_unit(struct mixer_build *state, int unitid,
+ 	kctl = snd_ctl_new1(&mixer_selectunit_ctl, cval);
+ 	if (! kctl) {
+ 		usb_audio_err(state->chip, "cannot malloc kcontrol\n");
++		for (i = 0; i < desc->bNrInPins; i++)
++			kfree(namelist[i]);
+ 		kfree(namelist);
+ 		kfree(cval);
+ 		return -ENOMEM;
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 479196aeb409..2cd57730381b 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -1832,7 +1832,8 @@ static int validate_branch(struct objtool_file *file, struct instruction *first,
+ 			return 1;
+ 		}
+ 
+-		func = insn->func ? insn->func->pfunc : NULL;
++		if (insn->func)
++			func = insn->func->pfunc;
+ 
+ 		if (func && insn->ignore) {
+ 			WARN_FUNC("BUG: why am I validating an ignored function?",
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index a704d1f9bd96..2b2a460c6252 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -1250,7 +1250,7 @@ int kvm_clear_dirty_log_protect(struct kvm *kvm,
+ 	if (!dirty_bitmap)
+ 		return -ENOENT;
+ 
+-	n = kvm_dirty_bitmap_bytes(memslot);
++	n = ALIGN(log->num_pages, BITS_PER_LONG) / 8;
+ 
+ 	if (log->first_page > memslot->npages ||
+ 	    log->num_pages > memslot->npages - log->first_page ||



* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-05-26 17:07 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2019-05-26 17:07 UTC (permalink / raw
  To: gentoo-commits

commit:     9d3a433e9965ee8c3dd94f446ea6598439d3a362
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun May 26 17:06:28 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun May 26 17:06:28 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9d3a433e

Linux patch 5.1.5

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1004_linux-5.1.5.patch | 4948 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4952 insertions(+)

diff --git a/0000_README b/0000_README
index 7dd0866..2431699 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch:  1003_linux-5.1.4.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.1.4
 
+Patch:  1004_linux-5.1.5.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.1.5
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1004_linux-5.1.5.patch b/1004_linux-5.1.5.patch
new file mode 100644
index 0000000..db0c77c
--- /dev/null
+++ b/1004_linux-5.1.5.patch
@@ -0,0 +1,4948 @@
+diff --git a/Documentation/filesystems/porting b/Documentation/filesystems/porting
+index cf43bc4dbf31..a60fa516d4cb 100644
+--- a/Documentation/filesystems/porting
++++ b/Documentation/filesystems/porting
+@@ -638,3 +638,8 @@ in your dentry operations instead.
+ 	inode to d_splice_alias() will also do the right thing (equivalent of
+ 	d_add(dentry, NULL); return NULL;), so that kind of special cases
+ 	also doesn't need a separate treatment.
++--
++[mandatory]
++	DCACHE_RCUACCESS is gone; having an RCU delay on dentry freeing is the
++	default.  DCACHE_NORCU opts out, and only d_alloc_pseudo() has any
++	business doing so.
+diff --git a/Makefile b/Makefile
+index acab93537f63..24a16a544ffd 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/Kconfig b/arch/Kconfig
+index 33687dddd86a..9092e0ffe4d3 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -764,7 +764,7 @@ config COMPAT_OLD_SIGACTION
+ 	bool
+ 
+ config 64BIT_TIME
+-	def_bool ARCH_HAS_64BIT_TIME
++	def_bool y
+ 	help
+ 	  This should be selected by all architectures that need to support
+ 	  new system calls with a 64-bit time_t. This is relevant on all 32-bit
+diff --git a/arch/arm/boot/dts/imx6-logicpd-baseboard.dtsi b/arch/arm/boot/dts/imx6-logicpd-baseboard.dtsi
+index 3cae139e6396..c40a7af6ebee 100644
+--- a/arch/arm/boot/dts/imx6-logicpd-baseboard.dtsi
++++ b/arch/arm/boot/dts/imx6-logicpd-baseboard.dtsi
+@@ -88,6 +88,7 @@
+ 		regulator-min-microvolt = <5000000>;
+ 		regulator-max-microvolt = <5000000>;
+ 		gpio = <&gpio7 12 GPIO_ACTIVE_HIGH>;
++		startup-delay-us = <70000>;
+ 		enable-active-high;
+ 	};
+ 
+@@ -99,6 +100,7 @@
+ 		regulator-min-microvolt = <3300000>;
+ 		regulator-max-microvolt = <3300000>;
+ 		gpio = <&gpio1 26 GPIO_ACTIVE_HIGH>;
++		startup-delay-us = <70000>;
+ 		enable-active-high;
+ 		regulator-always-on;
+ 	};
+diff --git a/arch/mips/kernel/perf_event_mipsxx.c b/arch/mips/kernel/perf_event_mipsxx.c
+index 413863508f6f..d67fb64e908c 100644
+--- a/arch/mips/kernel/perf_event_mipsxx.c
++++ b/arch/mips/kernel/perf_event_mipsxx.c
+@@ -64,17 +64,11 @@ struct mips_perf_event {
+ 	#define CNTR_EVEN	0x55555555
+ 	#define CNTR_ODD	0xaaaaaaaa
+ 	#define CNTR_ALL	0xffffffff
+-#ifdef CONFIG_MIPS_MT_SMP
+ 	enum {
+ 		T  = 0,
+ 		V  = 1,
+ 		P  = 2,
+ 	} range;
+-#else
+-	#define T
+-	#define V
+-	#define P
+-#endif
+ };
+ 
+ static struct mips_perf_event raw_event;
+@@ -325,9 +319,7 @@ static void mipsxx_pmu_enable_event(struct hw_perf_event *evt, int idx)
+ {
+ 	struct perf_event *event = container_of(evt, struct perf_event, hw);
+ 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+-#ifdef CONFIG_MIPS_MT_SMP
+ 	unsigned int range = evt->event_base >> 24;
+-#endif /* CONFIG_MIPS_MT_SMP */
+ 
+ 	WARN_ON(idx < 0 || idx >= mipspmu.num_counters);
+ 
+@@ -336,21 +328,15 @@ static void mipsxx_pmu_enable_event(struct hw_perf_event *evt, int idx)
+ 		/* Make sure interrupt enabled. */
+ 		MIPS_PERFCTRL_IE;
+ 
+-#ifdef CONFIG_CPU_BMIPS5000
+-	{
++	if (IS_ENABLED(CONFIG_CPU_BMIPS5000)) {
+ 		/* enable the counter for the calling thread */
+ 		cpuc->saved_ctrl[idx] |=
+ 			(1 << (12 + vpe_id())) | BRCM_PERFCTRL_TC;
+-	}
+-#else
+-#ifdef CONFIG_MIPS_MT_SMP
+-	if (range > V) {
++	} else if (IS_ENABLED(CONFIG_MIPS_MT_SMP) && range > V) {
+ 		/* The counter is processor wide. Set it up to count all TCs. */
+ 		pr_debug("Enabling perf counter for all TCs\n");
+ 		cpuc->saved_ctrl[idx] |= M_TC_EN_ALL;
+-	} else
+-#endif /* CONFIG_MIPS_MT_SMP */
+-	{
++	} else {
+ 		unsigned int cpu, ctrl;
+ 
+ 		/*
+@@ -365,7 +351,6 @@ static void mipsxx_pmu_enable_event(struct hw_perf_event *evt, int idx)
+ 		cpuc->saved_ctrl[idx] |= ctrl;
+ 		pr_debug("Enabling perf counter for CPU%d\n", cpu);
+ 	}
+-#endif /* CONFIG_CPU_BMIPS5000 */
+ 	/*
+ 	 * We do not actually let the counter run. Leave it until start().
+ 	 */
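[ The hunks above replace nested #ifdef blocks with IS_ENABLED()
checks: every branch is still parsed and type-checked, but branches
whose config constant is 0 are discarded as dead code. A minimal
sketch of the pattern; kernel context, placeholder branch bodies: ]

#include <linux/kconfig.h>

static void enable_counter(unsigned int range)
{
	if (IS_ENABLED(CONFIG_CPU_BMIPS5000)) {
		/* per-thread enable */
	} else if (IS_ENABLED(CONFIG_MIPS_MT_SMP) && range > 1) {
		/* processor-wide counter, all TCs */
	} else {
		/* plain per-CPU counter */
	}
}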
+diff --git a/arch/parisc/boot/compressed/head.S b/arch/parisc/boot/compressed/head.S
+index 5aba20fa48aa..e8b798fd0cf0 100644
+--- a/arch/parisc/boot/compressed/head.S
++++ b/arch/parisc/boot/compressed/head.S
+@@ -22,7 +22,7 @@
+ 	__HEAD
+ 
+ ENTRY(startup)
+-	 .level LEVEL
++	 .level PA_ASM_LEVEL
+ 
+ #define PSW_W_SM	0x200
+ #define PSW_W_BIT       36
+@@ -63,7 +63,7 @@ $bss_loop:
+ 	load32	BOOTADDR(decompress_kernel),%r3
+ 
+ #ifdef CONFIG_64BIT
+-	.level LEVEL
++	.level PA_ASM_LEVEL
+ 	ssm	PSW_W_SM, %r0		/* set W-bit */
+ 	depdi	0, 31, 32, %r3
+ #endif
+@@ -72,7 +72,7 @@ $bss_loop:
+ 
+ startup_continue:
+ #ifdef CONFIG_64BIT
+-	.level LEVEL
++	.level PA_ASM_LEVEL
+ 	rsm	PSW_W_SM, %r0		/* clear W-bit */
+ #endif
+ 
+diff --git a/arch/parisc/include/asm/assembly.h b/arch/parisc/include/asm/assembly.h
+index c17ec0ee6e7c..d85738a7bbe6 100644
+--- a/arch/parisc/include/asm/assembly.h
++++ b/arch/parisc/include/asm/assembly.h
+@@ -61,14 +61,14 @@
+ #define LDCW		ldcw,co
+ #define BL		b,l
+ # ifdef CONFIG_64BIT
+-#  define LEVEL		2.0w
++#  define PA_ASM_LEVEL	2.0w
+ # else
+-#  define LEVEL		2.0
++#  define PA_ASM_LEVEL	2.0
+ # endif
+ #else
+ #define LDCW		ldcw
+ #define BL		bl
+-#define LEVEL		1.1
++#define PA_ASM_LEVEL	1.1
+ #endif
+ 
+ #ifdef __ASSEMBLY__
+diff --git a/arch/parisc/include/asm/cache.h b/arch/parisc/include/asm/cache.h
+index 006fb939cac8..4016fe1c65a9 100644
+--- a/arch/parisc/include/asm/cache.h
++++ b/arch/parisc/include/asm/cache.h
+@@ -44,22 +44,22 @@ void parisc_setup_cache_timing(void);
+ 
+ #define pdtlb(addr)	asm volatile("pdtlb 0(%%sr1,%0)" \
+ 			ALTERNATIVE(ALT_COND_NO_SMP, INSN_PxTLB) \
+-			: : "r" (addr))
++			: : "r" (addr) : "memory")
+ #define pitlb(addr)	asm volatile("pitlb 0(%%sr1,%0)" \
+ 			ALTERNATIVE(ALT_COND_NO_SMP, INSN_PxTLB) \
+ 			ALTERNATIVE(ALT_COND_NO_SPLIT_TLB, INSN_NOP) \
+-			: : "r" (addr))
++			: : "r" (addr) : "memory")
+ #define pdtlb_kernel(addr)  asm volatile("pdtlb 0(%0)"   \
+ 			ALTERNATIVE(ALT_COND_NO_SMP, INSN_PxTLB) \
+-			: : "r" (addr))
++			: : "r" (addr) : "memory")
+ 
+ #define asm_io_fdc(addr) asm volatile("fdc %%r0(%0)" \
+ 			ALTERNATIVE(ALT_COND_NO_DCACHE, INSN_NOP) \
+ 			ALTERNATIVE(ALT_COND_NO_IOC_FDC, INSN_NOP) \
+-			: : "r" (addr))
++			: : "r" (addr) : "memory")
+ #define asm_io_sync()	asm volatile("sync" \
+ 			ALTERNATIVE(ALT_COND_NO_DCACHE, INSN_NOP) \
+-			ALTERNATIVE(ALT_COND_NO_IOC_FDC, INSN_NOP) :: )
++			ALTERNATIVE(ALT_COND_NO_IOC_FDC, INSN_NOP) :::"memory")
+ 
+ #endif /* ! __ASSEMBLY__ */
+ 
+diff --git a/arch/parisc/kernel/head.S b/arch/parisc/kernel/head.S
+index fbb4e43fda05..f56cbab64ac1 100644
+--- a/arch/parisc/kernel/head.S
++++ b/arch/parisc/kernel/head.S
+@@ -22,7 +22,7 @@
+ #include <linux/linkage.h>
+ #include <linux/init.h>
+ 
+-	.level	LEVEL
++	.level	PA_ASM_LEVEL
+ 
+ 	__INITDATA
+ ENTRY(boot_args)
+@@ -258,7 +258,7 @@ stext_pdc_ret:
+ 	ldo		R%PA(fault_vector_11)(%r10),%r10
+ 
+ $is_pa20:
+-	.level		LEVEL /* restore 1.1 || 2.0w */
++	.level		PA_ASM_LEVEL /* restore 1.1 || 2.0w */
+ #endif /*!CONFIG_64BIT*/
+ 	load32		PA(fault_vector_20),%r10
+ 
+diff --git a/arch/parisc/kernel/process.c b/arch/parisc/kernel/process.c
+index 841db71958cd..97c206734e24 100644
+--- a/arch/parisc/kernel/process.c
++++ b/arch/parisc/kernel/process.c
+@@ -193,6 +193,7 @@ int dump_task_fpu (struct task_struct *tsk, elf_fpregset_t *r)
+  */
+ 
+ int running_on_qemu __read_mostly;
++EXPORT_SYMBOL(running_on_qemu);
+ 
+ void __cpuidle arch_cpu_idle_dead(void)
+ {
+diff --git a/arch/parisc/kernel/syscall.S b/arch/parisc/kernel/syscall.S
+index 4f77bd9be66b..93cc36d98875 100644
+--- a/arch/parisc/kernel/syscall.S
++++ b/arch/parisc/kernel/syscall.S
+@@ -48,7 +48,7 @@ registers).
+ 	 */
+ #define KILL_INSN	break	0,0
+ 
+-	.level          LEVEL
++	.level          PA_ASM_LEVEL
+ 
+ 	.text
+ 
+diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
+index d0b166256f1a..14147eb7a142 100644
+--- a/arch/parisc/mm/init.c
++++ b/arch/parisc/mm/init.c
+@@ -495,7 +495,7 @@ static void __init map_pages(unsigned long start_vaddr,
+ 
+ void __init set_kernel_text_rw(int enable_read_write)
+ {
+-	unsigned long start = (unsigned long) _text;
++	unsigned long start = (unsigned long) __init_begin;
+ 	unsigned long end   = (unsigned long) &data_start;
+ 
+ 	map_pages(start, __pa(start), end-start,
+diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
+index 6ee8195a2ffb..4a6dd3ba0b0b 100644
+--- a/arch/powerpc/include/asm/mmu_context.h
++++ b/arch/powerpc/include/asm/mmu_context.h
+@@ -237,7 +237,6 @@ extern void arch_exit_mmap(struct mm_struct *mm);
+ #endif
+ 
+ static inline void arch_unmap(struct mm_struct *mm,
+-			      struct vm_area_struct *vma,
+ 			      unsigned long start, unsigned long end)
+ {
+ 	if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
+diff --git a/arch/um/include/asm/mmu_context.h b/arch/um/include/asm/mmu_context.h
+index fca34b2177e2..9f4b4bb78120 100644
+--- a/arch/um/include/asm/mmu_context.h
++++ b/arch/um/include/asm/mmu_context.h
+@@ -22,7 +22,6 @@ static inline int arch_dup_mmap(struct mm_struct *oldmm, struct mm_struct *mm)
+ }
+ extern void arch_exit_mmap(struct mm_struct *mm);
+ static inline void arch_unmap(struct mm_struct *mm,
+-			struct vm_area_struct *vma,
+ 			unsigned long start, unsigned long end)
+ {
+ }
+diff --git a/arch/unicore32/include/asm/mmu_context.h b/arch/unicore32/include/asm/mmu_context.h
+index 5c205a9cb5a6..9f06ea5466dd 100644
+--- a/arch/unicore32/include/asm/mmu_context.h
++++ b/arch/unicore32/include/asm/mmu_context.h
+@@ -88,7 +88,6 @@ static inline int arch_dup_mmap(struct mm_struct *oldmm,
+ }
+ 
+ static inline void arch_unmap(struct mm_struct *mm,
+-			struct vm_area_struct *vma,
+ 			unsigned long start, unsigned long end)
+ {
+ }
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index 4fe27b67d7e2..b1d59a7c556e 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -881,7 +881,7 @@ apicinterrupt IRQ_WORK_VECTOR			irq_work_interrupt		smp_irq_work_interrupt
+  * @paranoid == 2 is special: the stub will never switch stacks.  This is for
+  * #DF: if the thread stack is somehow unusable, we'll still get a useful OOPS.
+  */
+-.macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1
++.macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1 create_gap=0
+ ENTRY(\sym)
+ 	UNWIND_HINT_IRET_REGS offset=\has_error_code*8
+ 
+@@ -901,6 +901,20 @@ ENTRY(\sym)
+ 	jnz	.Lfrom_usermode_switch_stack_\@
+ 	.endif
+ 
++	.if \create_gap == 1
++	/*
++	 * If coming from kernel space, create a 6-word gap to allow the
++	 * int3 handler to emulate a call instruction.
++	 */
++	testb	$3, CS-ORIG_RAX(%rsp)
++	jnz	.Lfrom_usermode_no_gap_\@
++	.rept	6
++	pushq	5*8(%rsp)
++	.endr
++	UNWIND_HINT_IRET_REGS offset=8
++.Lfrom_usermode_no_gap_\@:
++	.endif
++
+ 	.if \paranoid
+ 	call	paranoid_entry
+ 	.else
+@@ -1132,7 +1146,7 @@ apicinterrupt3 HYPERV_STIMER0_VECTOR \
+ #endif /* CONFIG_HYPERV */
+ 
+ idtentry debug			do_debug		has_error_code=0	paranoid=1 shift_ist=DEBUG_STACK
+-idtentry int3			do_int3			has_error_code=0
++idtentry int3			do_int3			has_error_code=0	create_gap=1
+ idtentry stack_segment		do_stack_segment	has_error_code=1
+ 
+ #ifdef CONFIG_XEN_PV
+diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
+index 19d18fae6ec6..41019af68adf 100644
+--- a/arch/x86/include/asm/mmu_context.h
++++ b/arch/x86/include/asm/mmu_context.h
+@@ -277,8 +277,8 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm,
+ 	mpx_mm_init(mm);
+ }
+ 
+-static inline void arch_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
+-			      unsigned long start, unsigned long end)
++static inline void arch_unmap(struct mm_struct *mm, unsigned long start,
++			      unsigned long end)
+ {
+ 	/*
+ 	 * mpx_notify_unmap() goes and reads a rarely-hot
+@@ -298,7 +298,7 @@ static inline void arch_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
+ 	 * consistently wrong.
+ 	 */
+ 	if (unlikely(cpu_feature_enabled(X86_FEATURE_MPX)))
+-		mpx_notify_unmap(mm, vma, start, end);
++		mpx_notify_unmap(mm, start, end);
+ }
+ 
+ /*
+diff --git a/arch/x86/include/asm/mpx.h b/arch/x86/include/asm/mpx.h
+index d0b1434fb0b6..143a5c193ed3 100644
+--- a/arch/x86/include/asm/mpx.h
++++ b/arch/x86/include/asm/mpx.h
+@@ -64,12 +64,15 @@ struct mpx_fault_info {
+ };
+ 
+ #ifdef CONFIG_X86_INTEL_MPX
+-int mpx_fault_info(struct mpx_fault_info *info, struct pt_regs *regs);
+-int mpx_handle_bd_fault(void);
++
++extern int mpx_fault_info(struct mpx_fault_info *info, struct pt_regs *regs);
++extern int mpx_handle_bd_fault(void);
++
+ static inline int kernel_managing_mpx_tables(struct mm_struct *mm)
+ {
+ 	return (mm->context.bd_addr != MPX_INVALID_BOUNDS_DIR);
+ }
++
+ static inline void mpx_mm_init(struct mm_struct *mm)
+ {
+ 	/*
+@@ -78,11 +81,10 @@ static inline void mpx_mm_init(struct mm_struct *mm)
+ 	 */
+ 	mm->context.bd_addr = MPX_INVALID_BOUNDS_DIR;
+ }
+-void mpx_notify_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
+-		      unsigned long start, unsigned long end);
+ 
+-unsigned long mpx_unmapped_area_check(unsigned long addr, unsigned long len,
+-		unsigned long flags);
++extern void mpx_notify_unmap(struct mm_struct *mm, unsigned long start, unsigned long end);
++extern unsigned long mpx_unmapped_area_check(unsigned long addr, unsigned long len, unsigned long flags);
++
+ #else
+ static inline int mpx_fault_info(struct mpx_fault_info *info, struct pt_regs *regs)
+ {
+@@ -100,7 +102,6 @@ static inline void mpx_mm_init(struct mm_struct *mm)
+ {
+ }
+ static inline void mpx_notify_unmap(struct mm_struct *mm,
+-				    struct vm_area_struct *vma,
+ 				    unsigned long start, unsigned long end)
+ {
+ }
+diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
+index e85ff65c43c3..05861cc08787 100644
+--- a/arch/x86/include/asm/text-patching.h
++++ b/arch/x86/include/asm/text-patching.h
+@@ -39,4 +39,32 @@ extern int poke_int3_handler(struct pt_regs *regs);
+ extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
+ extern int after_bootmem;
+ 
++static inline void int3_emulate_jmp(struct pt_regs *regs, unsigned long ip)
++{
++	regs->ip = ip;
++}
++
++#define INT3_INSN_SIZE 1
++#define CALL_INSN_SIZE 5
++
++#ifdef CONFIG_X86_64
++static inline void int3_emulate_push(struct pt_regs *regs, unsigned long val)
++{
++	/*
++	 * The int3 handler in entry_64.S adds a gap between the
++	 * stack where the breakpoint happened and the saving of
++	 * pt_regs. We can extend the original stack because of
++	 * this gap. See the idtentry macro's create_gap option.
++	 */
++	regs->sp -= sizeof(unsigned long);
++	*(unsigned long *)regs->sp = val;
++}
++
++static inline void int3_emulate_call(struct pt_regs *regs, unsigned long func)
++{
++	int3_emulate_push(regs, regs->ip - INT3_INSN_SIZE + CALL_INSN_SIZE);
++	int3_emulate_jmp(regs, func);
++}
++#endif
++
+ #endif /* _ASM_X86_TEXT_PATCHING_H */
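[ The helpers added above let the int3 handler rewrite the saved
register state so that returning from the trap behaves as if the
patched-out instruction had run. A plain-C model of the state change;
toy register/stack structure, not the kernel's pt_regs: ]

#include <stdint.h>

struct toy_regs {
	uint64_t ip;
	uint64_t *sp;	/* grows down, points at the last pushed word */
};

static void emulate_push(struct toy_regs *r, uint64_t val)
{
	*--r->sp = val;	/* the 6-word entry gap makes this extension safe */
}

static void emulate_jmp(struct toy_regs *r, uint64_t target)
{
	r->ip = target;
}

static void emulate_call(struct toy_regs *r, uint64_t func)
{
	/* ip sits one byte past the int3; a call instruction is five bytes */
	emulate_push(r, r->ip - 1 + 5);
	emulate_jmp(r, func);
}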
+diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
+index ef49517f6bb2..bd553b3af22e 100644
+--- a/arch/x86/kernel/ftrace.c
++++ b/arch/x86/kernel/ftrace.c
+@@ -29,6 +29,7 @@
+ #include <asm/kprobes.h>
+ #include <asm/ftrace.h>
+ #include <asm/nops.h>
++#include <asm/text-patching.h>
+ 
+ #ifdef CONFIG_DYNAMIC_FTRACE
+ 
+@@ -231,6 +232,7 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+ }
+ 
+ static unsigned long ftrace_update_func;
++static unsigned long ftrace_update_func_call;
+ 
+ static int update_ftrace_func(unsigned long ip, void *new)
+ {
+@@ -259,6 +261,8 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
+ 	unsigned char *new;
+ 	int ret;
+ 
++	ftrace_update_func_call = (unsigned long)func;
++
+ 	new = ftrace_call_replace(ip, (unsigned long)func);
+ 	ret = update_ftrace_func(ip, new);
+ 
+@@ -294,13 +298,28 @@ int ftrace_int3_handler(struct pt_regs *regs)
+ 	if (WARN_ON_ONCE(!regs))
+ 		return 0;
+ 
+-	ip = regs->ip - 1;
+-	if (!ftrace_location(ip) && !is_ftrace_caller(ip))
+-		return 0;
++	ip = regs->ip - INT3_INSN_SIZE;
+ 
+-	regs->ip += MCOUNT_INSN_SIZE - 1;
++#ifdef CONFIG_X86_64
++	if (ftrace_location(ip)) {
++		int3_emulate_call(regs, (unsigned long)ftrace_regs_caller);
++		return 1;
++	} else if (is_ftrace_caller(ip)) {
++		if (!ftrace_update_func_call) {
++			int3_emulate_jmp(regs, ip + CALL_INSN_SIZE);
++			return 1;
++		}
++		int3_emulate_call(regs, ftrace_update_func_call);
++		return 1;
++	}
++#else
++	if (ftrace_location(ip) || is_ftrace_caller(ip)) {
++		int3_emulate_jmp(regs, ip + CALL_INSN_SIZE);
++		return 1;
++	}
++#endif
+ 
+-	return 1;
++	return 0;
+ }
+ NOKPROBE_SYMBOL(ftrace_int3_handler);
+ 
+@@ -859,6 +878,8 @@ void arch_ftrace_update_trampoline(struct ftrace_ops *ops)
+ 
+ 	func = ftrace_ops_get_func(ops);
+ 
++	ftrace_update_func_call = (unsigned long)func;
++
+ 	/* Do a safe modify in case the trampoline is executing */
+ 	new = ftrace_call_replace(ip, (unsigned long)func);
+ 	ret = update_ftrace_func(ip, new);
+@@ -960,6 +981,7 @@ static int ftrace_mod_jmp(unsigned long ip, void *func)
+ {
+ 	unsigned char *new;
+ 
++	ftrace_update_func_call = 0UL;
+ 	new = ftrace_jmp_replace(ip, (unsigned long)func);
+ 
+ 	return update_ftrace_func(ip, new);
+diff --git a/arch/x86/mm/mpx.c b/arch/x86/mm/mpx.c
+index c805db6236b4..7aeb9fe2955f 100644
+--- a/arch/x86/mm/mpx.c
++++ b/arch/x86/mm/mpx.c
+@@ -881,9 +881,10 @@ static int mpx_unmap_tables(struct mm_struct *mm,
+  * the virtual address region start...end have already been split if
+  * necessary, and the 'vma' is the first vma in this range (start -> end).
+  */
+-void mpx_notify_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
+-		unsigned long start, unsigned long end)
++void mpx_notify_unmap(struct mm_struct *mm, unsigned long start,
++		      unsigned long end)
+ {
++	struct vm_area_struct *vma;
+ 	int ret;
+ 
+ 	/*
+@@ -902,11 +903,12 @@ void mpx_notify_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
+ 	 * which should not occur normally. Being strict about it here
+ 	 * helps ensure that we do not have an exploitable stack overflow.
+ 	 */
+-	do {
++	vma = find_vma(mm, start);
++	while (vma && vma->vm_start < end) {
+ 		if (vma->vm_flags & VM_MPX)
+ 			return;
+ 		vma = vma->vm_next;
+-	} while (vma && vma->vm_start < end);
++	}
+ 
+ 	ret = mpx_unmap_tables(mm, start, end);
+ 	if (ret)
+diff --git a/block/blk-core.c b/block/blk-core.c
+index a55389ba8779..b375cfea024c 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -375,7 +375,7 @@ void blk_cleanup_queue(struct request_queue *q)
+ 	blk_exit_queue(q);
+ 
+ 	if (queue_is_mq(q))
+-		blk_mq_free_queue(q);
++		blk_mq_exit_queue(q);
+ 
+ 	percpu_ref_exit(&q->q_usage_counter);
+ 
+diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
+index 3f9c3f4ac44c..4040e62c3737 100644
+--- a/block/blk-mq-sysfs.c
++++ b/block/blk-mq-sysfs.c
+@@ -10,6 +10,7 @@
+ #include <linux/smp.h>
+ 
+ #include <linux/blk-mq.h>
++#include "blk.h"
+ #include "blk-mq.h"
+ #include "blk-mq-tag.h"
+ 
+@@ -33,6 +34,11 @@ static void blk_mq_hw_sysfs_release(struct kobject *kobj)
+ {
+ 	struct blk_mq_hw_ctx *hctx = container_of(kobj, struct blk_mq_hw_ctx,
+ 						  kobj);
++
++	if (hctx->flags & BLK_MQ_F_BLOCKING)
++		cleanup_srcu_struct(hctx->srcu);
++	blk_free_flush_queue(hctx->fq);
++	sbitmap_free(&hctx->ctx_map);
+ 	free_cpumask_var(hctx->cpumask);
+ 	kfree(hctx->ctxs);
+ 	kfree(hctx);
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index fc60ed7e940e..b0e5e67e20a2 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2267,12 +2267,7 @@ static void blk_mq_exit_hctx(struct request_queue *q,
+ 	if (set->ops->exit_hctx)
+ 		set->ops->exit_hctx(hctx, hctx_idx);
+ 
+-	if (hctx->flags & BLK_MQ_F_BLOCKING)
+-		cleanup_srcu_struct(hctx->srcu);
+-
+ 	blk_mq_remove_cpuhp(hctx);
+-	blk_free_flush_queue(hctx->fq);
+-	sbitmap_free(&hctx->ctx_map);
+ }
+ 
+ static void blk_mq_exit_hw_queues(struct request_queue *q,
+@@ -2905,7 +2900,8 @@ err_exit:
+ }
+ EXPORT_SYMBOL(blk_mq_init_allocated_queue);
+ 
+-void blk_mq_free_queue(struct request_queue *q)
++/* tags can _not_ be used after returning from blk_mq_exit_queue */
++void blk_mq_exit_queue(struct request_queue *q)
+ {
+ 	struct blk_mq_tag_set	*set = q->tag_set;
+ 
+diff --git a/block/blk-mq.h b/block/blk-mq.h
+index 423ea88ab6fb..633a5a77ee8b 100644
+--- a/block/blk-mq.h
++++ b/block/blk-mq.h
+@@ -37,7 +37,7 @@ struct blk_mq_ctx {
+ 	struct kobject		kobj;
+ } ____cacheline_aligned_in_smp;
+ 
+-void blk_mq_free_queue(struct request_queue *q);
++void blk_mq_exit_queue(struct request_queue *q);
+ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr);
+ void blk_mq_wake_waiters(struct request_queue *q);
+ bool blk_mq_dispatch_rq_list(struct request_queue *, struct list_head *, bool);
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index a823f469e53f..0df9b4461766 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -490,7 +490,7 @@ re_probe:
+ 	if (dev->bus->dma_configure) {
+ 		ret = dev->bus->dma_configure(dev);
+ 		if (ret)
+-			goto dma_failed;
++			goto probe_failed;
+ 	}
+ 
+ 	if (driver_sysfs_add(dev)) {
+@@ -546,14 +546,13 @@ re_probe:
+ 	goto done;
+ 
+ probe_failed:
+-	arch_teardown_dma_ops(dev);
+-dma_failed:
+ 	if (dev->bus)
+ 		blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
+ 					     BUS_NOTIFY_DRIVER_NOT_BOUND, dev);
+ pinctrl_bind_failed:
+ 	device_links_no_driver(dev);
+ 	devres_release_all(dev);
++	arch_teardown_dma_ops(dev);
+ 	driver_sysfs_remove(dev);
+ 	dev->driver = NULL;
+ 	dev_set_drvdata(dev, NULL);
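The driver-core hunk folds the separate dma_failed label into probe_failed and calls arch_teardown_dma_ops() only after devres_release_all(), so the error path unwinds in strict reverse order of setup (devres-managed allocations may still need the DMA ops while they are released). The goto-unwind idiom the fix restores, as a standalone sketch with stand-in resources:

#include <stdio.h>
#include <stdlib.h>

static int probe_demo(void)
{
	char *dma = NULL, *sysfs = NULL;

	dma = malloc(16);		/* step 1: dma_configure */
	if (!dma)
		goto fail;
	sysfs = malloc(16);		/* step 2: driver_sysfs_add */
	if (!sysfs)
		goto fail;

	free(sysfs);
	free(dma);
	return 0;

fail:
	/* one label, reverse order; free(NULL) is a harmless no-op */
	free(sysfs);
	free(dma);
	return -1;
}

int main(void)
{
	return probe_demo() ? 1 : 0;
}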
+diff --git a/drivers/block/brd.c b/drivers/block/brd.c
+index c18586fccb6f..17defbf4f332 100644
+--- a/drivers/block/brd.c
++++ b/drivers/block/brd.c
+@@ -96,13 +96,8 @@ static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
+ 	/*
+ 	 * Must use NOIO because we don't want to recurse back into the
+ 	 * block or filesystem layers from page reclaim.
+-	 *
+-	 * Cannot support DAX and highmem, because our ->direct_access
+-	 * routine for DAX must return memory that is always addressable.
+-	 * If DAX was reworked to use pfns and kmap throughout, this
+-	 * restriction might be able to be lifted.
+ 	 */
+-	gfp_flags = GFP_NOIO | __GFP_ZERO;
++	gfp_flags = GFP_NOIO | __GFP_ZERO | __GFP_HIGHMEM;
+ 	page = alloc_page(gfp_flags);
+ 	if (!page)
+ 		return NULL;
+diff --git a/drivers/clk/hisilicon/clk-hi3660.c b/drivers/clk/hisilicon/clk-hi3660.c
+index f40419959656..794eeff0d5d2 100644
+--- a/drivers/clk/hisilicon/clk-hi3660.c
++++ b/drivers/clk/hisilicon/clk-hi3660.c
+@@ -163,8 +163,12 @@ static const struct hisi_gate_clock hi3660_crgctrl_gate_sep_clks[] = {
+ 	  "clk_isp_snclk_mux", CLK_SET_RATE_PARENT, 0x50, 17, 0, },
+ 	{ HI3660_CLK_GATE_ISP_SNCLK2, "clk_gate_isp_snclk2",
+ 	  "clk_isp_snclk_mux", CLK_SET_RATE_PARENT, 0x50, 18, 0, },
++	/*
++	 * clk_gate_ufs_subsys is a system bus clock, mark it as critical
++	 * clock and keep it on for system suspend and resume.
++	 */
+ 	{ HI3660_CLK_GATE_UFS_SUBSYS, "clk_gate_ufs_subsys", "clk_div_sysbus",
+-	  CLK_SET_RATE_PARENT, 0x50, 21, 0, },
++	  CLK_SET_RATE_PARENT | CLK_IS_CRITICAL, 0x50, 21, 0, },
+ 	{ HI3660_PCLK_GATE_DSI0, "pclk_gate_dsi0", "clk_div_cfgbus",
+ 	  CLK_SET_RATE_PARENT, 0x50, 28, 0, },
+ 	{ HI3660_PCLK_GATE_DSI1, "pclk_gate_dsi1", "clk_div_cfgbus",
+diff --git a/drivers/clk/mediatek/clk-pll.c b/drivers/clk/mediatek/clk-pll.c
+index f54e4015b0b1..18842d660317 100644
+--- a/drivers/clk/mediatek/clk-pll.c
++++ b/drivers/clk/mediatek/clk-pll.c
+@@ -88,6 +88,32 @@ static unsigned long __mtk_pll_recalc_rate(struct mtk_clk_pll *pll, u32 fin,
+ 	return ((unsigned long)vco + postdiv - 1) / postdiv;
+ }
+ 
++static void __mtk_pll_tuner_enable(struct mtk_clk_pll *pll)
++{
++	u32 r;
++
++	if (pll->tuner_en_addr) {
++		r = readl(pll->tuner_en_addr) | BIT(pll->data->tuner_en_bit);
++		writel(r, pll->tuner_en_addr);
++	} else if (pll->tuner_addr) {
++		r = readl(pll->tuner_addr) | AUDPLL_TUNER_EN;
++		writel(r, pll->tuner_addr);
++	}
++}
++
++static void __mtk_pll_tuner_disable(struct mtk_clk_pll *pll)
++{
++	u32 r;
++
++	if (pll->tuner_en_addr) {
++		r = readl(pll->tuner_en_addr) & ~BIT(pll->data->tuner_en_bit);
++		writel(r, pll->tuner_en_addr);
++	} else if (pll->tuner_addr) {
++		r = readl(pll->tuner_addr) & ~AUDPLL_TUNER_EN;
++		writel(r, pll->tuner_addr);
++	}
++}
++
+ static void mtk_pll_set_rate_regs(struct mtk_clk_pll *pll, u32 pcw,
+ 		int postdiv)
+ {
+@@ -96,6 +122,9 @@ static void mtk_pll_set_rate_regs(struct mtk_clk_pll *pll, u32 pcw,
+ 
+ 	pll_en = readl(pll->base_addr + REG_CON0) & CON0_BASE_EN;
+ 
++	/* disable tuner */
++	__mtk_pll_tuner_disable(pll);
++
+ 	/* set postdiv */
+ 	val = readl(pll->pd_addr);
+ 	val &= ~(POSTDIV_MASK << pll->data->pd_shift);
+@@ -122,6 +151,9 @@ static void mtk_pll_set_rate_regs(struct mtk_clk_pll *pll, u32 pcw,
+ 	if (pll->tuner_addr)
+ 		writel(con1 + 1, pll->tuner_addr);
+ 
++	/* restore tuner_en */
++	__mtk_pll_tuner_enable(pll);
++
+ 	if (pll_en)
+ 		udelay(20);
+ }
+@@ -228,13 +260,7 @@ static int mtk_pll_prepare(struct clk_hw *hw)
+ 	r |= pll->data->en_mask;
+ 	writel(r, pll->base_addr + REG_CON0);
+ 
+-	if (pll->tuner_en_addr) {
+-		r = readl(pll->tuner_en_addr) | BIT(pll->data->tuner_en_bit);
+-		writel(r, pll->tuner_en_addr);
+-	} else if (pll->tuner_addr) {
+-		r = readl(pll->tuner_addr) | AUDPLL_TUNER_EN;
+-		writel(r, pll->tuner_addr);
+-	}
++	__mtk_pll_tuner_enable(pll);
+ 
+ 	udelay(20);
+ 
+@@ -258,13 +284,7 @@ static void mtk_pll_unprepare(struct clk_hw *hw)
+ 		writel(r, pll->base_addr + REG_CON0);
+ 	}
+ 
+-	if (pll->tuner_en_addr) {
+-		r = readl(pll->tuner_en_addr) & ~BIT(pll->data->tuner_en_bit);
+-		writel(r, pll->tuner_en_addr);
+-	} else if (pll->tuner_addr) {
+-		r = readl(pll->tuner_addr) & ~AUDPLL_TUNER_EN;
+-		writel(r, pll->tuner_addr);
+-	}
++	__mtk_pll_tuner_disable(pll);
+ 
+ 	r = readl(pll->base_addr + REG_CON0);
+ 	r &= ~CON0_BASE_EN;
+diff --git a/drivers/clk/rockchip/clk-rk3328.c b/drivers/clk/rockchip/clk-rk3328.c
+index 65ab5c2f48b0..f12142d9cea2 100644
+--- a/drivers/clk/rockchip/clk-rk3328.c
++++ b/drivers/clk/rockchip/clk-rk3328.c
+@@ -458,7 +458,7 @@ static struct rockchip_clk_branch rk3328_clk_branches[] __initdata = {
+ 			RK3328_CLKSEL_CON(35), 15, 1, MFLAGS, 8, 7, DFLAGS,
+ 			RK3328_CLKGATE_CON(2), 12, GFLAGS),
+ 	COMPOSITE(SCLK_CRYPTO, "clk_crypto", mux_2plls_p, 0,
+-			RK3328_CLKSEL_CON(20), 7, 1, MFLAGS, 0, 7, DFLAGS,
++			RK3328_CLKSEL_CON(20), 7, 1, MFLAGS, 0, 5, DFLAGS,
+ 			RK3328_CLKGATE_CON(2), 4, GFLAGS),
+ 	COMPOSITE_NOMUX(SCLK_TSADC, "clk_tsadc", "clk_24m", 0,
+ 			RK3328_CLKSEL_CON(22), 0, 10, DFLAGS,
+@@ -550,15 +550,15 @@ static struct rockchip_clk_branch rk3328_clk_branches[] __initdata = {
+ 	GATE(0, "hclk_rkvenc_niu", "hclk_rkvenc", 0,
+ 			RK3328_CLKGATE_CON(25), 1, GFLAGS),
+ 	GATE(ACLK_H265, "aclk_h265", "aclk_rkvenc", 0,
+-			RK3328_CLKGATE_CON(25), 0, GFLAGS),
++			RK3328_CLKGATE_CON(25), 2, GFLAGS),
+ 	GATE(PCLK_H265, "pclk_h265", "hclk_rkvenc", 0,
+-			RK3328_CLKGATE_CON(25), 1, GFLAGS),
++			RK3328_CLKGATE_CON(25), 3, GFLAGS),
+ 	GATE(ACLK_H264, "aclk_h264", "aclk_rkvenc", 0,
+-			RK3328_CLKGATE_CON(25), 0, GFLAGS),
++			RK3328_CLKGATE_CON(25), 4, GFLAGS),
+ 	GATE(HCLK_H264, "hclk_h264", "hclk_rkvenc", 0,
+-			RK3328_CLKGATE_CON(25), 1, GFLAGS),
++			RK3328_CLKGATE_CON(25), 5, GFLAGS),
+ 	GATE(ACLK_AXISRAM, "aclk_axisram", "aclk_rkvenc", CLK_IGNORE_UNUSED,
+-			RK3328_CLKGATE_CON(25), 0, GFLAGS),
++			RK3328_CLKGATE_CON(25), 6, GFLAGS),
+ 
+ 	COMPOSITE(SCLK_VENC_CORE, "sclk_venc_core", mux_4plls_p, 0,
+ 			RK3328_CLKSEL_CON(51), 14, 2, MFLAGS, 8, 5, DFLAGS,
+@@ -663,7 +663,7 @@ static struct rockchip_clk_branch rk3328_clk_branches[] __initdata = {
+ 
+ 	/* PD_GMAC */
+ 	COMPOSITE(ACLK_GMAC, "aclk_gmac", mux_2plls_hdmiphy_p, 0,
+-			RK3328_CLKSEL_CON(35), 6, 2, MFLAGS, 0, 5, DFLAGS,
++			RK3328_CLKSEL_CON(25), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ 			RK3328_CLKGATE_CON(3), 2, GFLAGS),
+ 	COMPOSITE_NOMUX(PCLK_GMAC, "pclk_gmac", "aclk_gmac", 0,
+ 			RK3328_CLKSEL_CON(25), 8, 3, DFLAGS,
+@@ -733,7 +733,7 @@ static struct rockchip_clk_branch rk3328_clk_branches[] __initdata = {
+ 
+ 	/* PD_PERI */
+ 	GATE(0, "aclk_peri_noc", "aclk_peri", CLK_IGNORE_UNUSED, RK3328_CLKGATE_CON(19), 11, GFLAGS),
+-	GATE(ACLK_USB3OTG, "aclk_usb3otg", "aclk_peri", 0, RK3328_CLKGATE_CON(19), 4, GFLAGS),
++	GATE(ACLK_USB3OTG, "aclk_usb3otg", "aclk_peri", 0, RK3328_CLKGATE_CON(19), 14, GFLAGS),
+ 
+ 	GATE(HCLK_SDMMC, "hclk_sdmmc", "hclk_peri", 0, RK3328_CLKGATE_CON(19), 0, GFLAGS),
+ 	GATE(HCLK_SDIO, "hclk_sdio", "hclk_peri", 0, RK3328_CLKGATE_CON(19), 1, GFLAGS),
+@@ -913,7 +913,7 @@ static void __init rk3328_clk_init(struct device_node *np)
+ 				     &rk3328_cpuclk_data, rk3328_cpuclk_rates,
+ 				     ARRAY_SIZE(rk3328_cpuclk_rates));
+ 
+-	rockchip_register_softrst(np, 11, reg_base + RK3328_SOFTRST_CON(0),
++	rockchip_register_softrst(np, 12, reg_base + RK3328_SOFTRST_CON(0),
+ 				  ROCKCHIP_SOFTRST_HIWORD_MASK);
+ 
+ 	rockchip_register_restart_notifier(ctx, RK3328_GLB_SRST_FST, NULL);
+diff --git a/drivers/clk/tegra/clk-pll.c b/drivers/clk/tegra/clk-pll.c
+index b50b7460014b..3e67cbcd80da 100644
+--- a/drivers/clk/tegra/clk-pll.c
++++ b/drivers/clk/tegra/clk-pll.c
+@@ -663,8 +663,8 @@ static void _update_pll_mnp(struct tegra_clk_pll *pll,
+ 		pll_override_writel(val, params->pmc_divp_reg, pll);
+ 
+ 		val = pll_override_readl(params->pmc_divnm_reg, pll);
+-		val &= ~(divm_mask(pll) << div_nmp->override_divm_shift) |
+-			~(divn_mask(pll) << div_nmp->override_divn_shift);
++		val &= ~((divm_mask(pll) << div_nmp->override_divm_shift) |
++			(divn_mask(pll) << div_nmp->override_divn_shift));
+ 		val |= (cfg->m << div_nmp->override_divm_shift) |
+ 			(cfg->n << div_nmp->override_divn_shift);
+ 		pll_override_writel(val, params->pmc_divnm_reg, pll);
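The Tegra PLL hunk is a pure operator-precedence fix: by De Morgan, ~m | ~n equals ~(m & n), which for disjoint field masks is all ones, so the old expression cleared neither field before the new divider values were OR-ed in. A standalone demonstration with illustrative masks:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t val  = 0xffffffff;
	uint32_t divm = 0x1fu << 0;	/* illustrative field masks */
	uint32_t divn = 0xffu << 8;

	/* buggy: ~divm | ~divn == ~(divm & divn) == ~0, clears nothing */
	printf("buggy: 0x%08x\n", val & (~divm | ~divn));

	/* fixed: both fields cleared before new values are written */
	printf("fixed: 0x%08x\n", val & ~(divm | divn));
	return 0;
}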
+diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
+index 5f3c1378b90e..99d9f431ae2c 100644
+--- a/drivers/dma/imx-sdma.c
++++ b/drivers/dma/imx-sdma.c
+@@ -419,6 +419,7 @@ struct sdma_driver_data {
+ 	int chnenbl0;
+ 	int num_events;
+ 	struct sdma_script_start_addrs	*script_addrs;
++	bool check_ratio;
+ };
+ 
+ struct sdma_engine {
+@@ -557,6 +558,13 @@ static struct sdma_driver_data sdma_imx7d = {
+ 	.script_addrs = &sdma_script_imx7d,
+ };
+ 
++static struct sdma_driver_data sdma_imx8mq = {
++	.chnenbl0 = SDMA_CHNENBL0_IMX35,
++	.num_events = 48,
++	.script_addrs = &sdma_script_imx7d,
++	.check_ratio = 1,
++};
++
+ static const struct platform_device_id sdma_devtypes[] = {
+ 	{
+ 		.name = "imx25-sdma",
+@@ -579,6 +587,9 @@ static const struct platform_device_id sdma_devtypes[] = {
+ 	}, {
+ 		.name = "imx7d-sdma",
+ 		.driver_data = (unsigned long)&sdma_imx7d,
++	}, {
++		.name = "imx8mq-sdma",
++		.driver_data = (unsigned long)&sdma_imx8mq,
+ 	}, {
+ 		/* sentinel */
+ 	}
+@@ -593,6 +604,7 @@ static const struct of_device_id sdma_dt_ids[] = {
+ 	{ .compatible = "fsl,imx31-sdma", .data = &sdma_imx31, },
+ 	{ .compatible = "fsl,imx25-sdma", .data = &sdma_imx25, },
+ 	{ .compatible = "fsl,imx7d-sdma", .data = &sdma_imx7d, },
++	{ .compatible = "fsl,imx8mq-sdma", .data = &sdma_imx8mq, },
+ 	{ /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, sdma_dt_ids);
+@@ -1852,7 +1864,8 @@ static int sdma_init(struct sdma_engine *sdma)
+ 	if (ret)
+ 		goto disable_clk_ipg;
+ 
+-	if (clk_get_rate(sdma->clk_ahb) == clk_get_rate(sdma->clk_ipg))
++	if (sdma->drvdata->check_ratio &&
++	    (clk_get_rate(sdma->clk_ahb) == clk_get_rate(sdma->clk_ipg)))
+ 		sdma->clk_ratio = 1;
+ 
+ 	/* Be sure SDMA has not started yet */
+diff --git a/drivers/hwtracing/intel_th/msu.c b/drivers/hwtracing/intel_th/msu.c
+index ba7aaf421f36..8ff326c0c406 100644
+--- a/drivers/hwtracing/intel_th/msu.c
++++ b/drivers/hwtracing/intel_th/msu.c
+@@ -84,6 +84,7 @@ struct msc_iter {
+  * @reg_base:		register window base address
+  * @thdev:		intel_th_device pointer
+  * @win_list:		list of windows in multiblock mode
++ * @single_sgt:		single mode buffer
+  * @nr_pages:		total number of pages allocated for this buffer
+  * @single_sz:		amount of data in single mode
+  * @single_wrap:	single mode wrap occurred
+@@ -104,6 +105,7 @@ struct msc {
+ 	struct intel_th_device	*thdev;
+ 
+ 	struct list_head	win_list;
++	struct sg_table		single_sgt;
+ 	unsigned long		nr_pages;
+ 	unsigned long		single_sz;
+ 	unsigned int		single_wrap : 1;
+@@ -617,22 +619,45 @@ static void intel_th_msc_deactivate(struct intel_th_device *thdev)
+  */
+ static int msc_buffer_contig_alloc(struct msc *msc, unsigned long size)
+ {
++	unsigned long nr_pages = size >> PAGE_SHIFT;
+ 	unsigned int order = get_order(size);
+ 	struct page *page;
++	int ret;
+ 
+ 	if (!size)
+ 		return 0;
+ 
++	ret = sg_alloc_table(&msc->single_sgt, 1, GFP_KERNEL);
++	if (ret)
++		goto err_out;
++
++	ret = -ENOMEM;
+ 	page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
+ 	if (!page)
+-		return -ENOMEM;
++		goto err_free_sgt;
+ 
+ 	split_page(page, order);
+-	msc->nr_pages = size >> PAGE_SHIFT;
++	sg_set_buf(msc->single_sgt.sgl, page_address(page), size);
++
++	ret = dma_map_sg(msc_dev(msc)->parent->parent, msc->single_sgt.sgl, 1,
++			 DMA_FROM_DEVICE);
++	if (ret < 0)
++		goto err_free_pages;
++
++	msc->nr_pages = nr_pages;
+ 	msc->base = page_address(page);
+-	msc->base_addr = page_to_phys(page);
++	msc->base_addr = sg_dma_address(msc->single_sgt.sgl);
+ 
+ 	return 0;
++
++err_free_pages:
++	__free_pages(page, order);
++
++err_free_sgt:
++	sg_free_table(&msc->single_sgt);
++
++err_out:
++	return ret;
+ }
+ 
+ /**
+@@ -643,6 +668,10 @@ static void msc_buffer_contig_free(struct msc *msc)
+ {
+ 	unsigned long off;
+ 
++	dma_unmap_sg(msc_dev(msc)->parent->parent, msc->single_sgt.sgl,
++		     1, DMA_FROM_DEVICE);
++	sg_free_table(&msc->single_sgt);
++
+ 	for (off = 0; off < msc->nr_pages << PAGE_SHIFT; off += PAGE_SIZE) {
+ 		struct page *page = virt_to_page(msc->base + off);
+ 
+diff --git a/drivers/hwtracing/stm/core.c b/drivers/hwtracing/stm/core.c
+index c7ba8acfd4d5..e55b902560de 100644
+--- a/drivers/hwtracing/stm/core.c
++++ b/drivers/hwtracing/stm/core.c
+@@ -166,11 +166,10 @@ stm_master(struct stm_device *stm, unsigned int idx)
+ static int stp_master_alloc(struct stm_device *stm, unsigned int idx)
+ {
+ 	struct stp_master *master;
+-	size_t size;
+ 
+-	size = ALIGN(stm->data->sw_nchannels, 8) / 8;
+-	size += sizeof(struct stp_master);
+-	master = kzalloc(size, GFP_ATOMIC);
++	master = kzalloc(struct_size(master, chan_map,
++				     BITS_TO_LONGS(stm->data->sw_nchannels)),
++			 GFP_ATOMIC);
+ 	if (!master)
+ 		return -ENOMEM;
+ 
+@@ -218,8 +217,8 @@ stm_output_disclaim(struct stm_device *stm, struct stm_output *output)
+ 	bitmap_release_region(&master->chan_map[0], output->channel,
+ 			      ilog2(output->nr_chans));
+ 
+-	output->nr_chans = 0;
+ 	master->nr_free += output->nr_chans;
++	output->nr_chans = 0;
+ }
+ 
+ /*
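Two fixes sit in the stm hunks: the master allocation now sizes its trailing channel bitmap with struct_size()/BITS_TO_LONGS(), and stm_output_disclaim() swaps two statements that were in the wrong order, since zeroing output->nr_chans first meant master->nr_free += output->nr_chans always added zero and leaked channels. The ordering bug, reduced to a runnable minimum:

#include <stdio.h>

struct output { int nr_chans; };
struct master { int nr_free; };

int main(void)
{
	struct output o = { .nr_chans = 4 };
	struct master m = { .nr_free = 0 };

	/* buggy order: the value is destroyed before it is used */
	o.nr_chans = 0;
	m.nr_free += o.nr_chans;	/* adds 0: channels leak */
	printf("buggy: nr_free = %d\n", m.nr_free);

	/* fixed order: use, then clear */
	o.nr_chans = 4;
	m.nr_free += o.nr_chans;
	o.nr_chans = 0;
	printf("fixed: nr_free = %d\n", m.nr_free);
	return 0;
}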
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index d3dd290ae1b1..da81402992bc 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -2070,11 +2070,12 @@ static int mlx5_ib_mmap_clock_info_page(struct mlx5_ib_dev *dev,
+ 		return -EPERM;
+ 	vma->vm_flags &= ~VM_MAYWRITE;
+ 
+-	if (!dev->mdev->clock_info_page)
++	if (!dev->mdev->clock_info)
+ 		return -EOPNOTSUPP;
+ 
+ 	return rdma_user_mmap_page(&context->ibucontext, vma,
+-				   dev->mdev->clock_info_page, PAGE_SIZE);
++				   virt_to_page(dev->mdev->clock_info),
++				   PAGE_SIZE);
+ }
+ 
+ static int uar_mmap(struct mlx5_ib_dev *dev, enum mlx5_ib_mmap_cmd cmd,
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+index 48eda16db1a7..9b5e11d3fb85 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+@@ -2402,7 +2402,18 @@ static ssize_t dev_id_show(struct device *dev,
+ {
+ 	struct net_device *ndev = to_net_dev(dev);
+ 
+-	if (ndev->dev_id == ndev->dev_port)
++	/*
++	 * ndev->dev_port will be equal to 0 in old kernel prior to commit
++	 * 9b8b2a323008 ("IB/ipoib: Use dev_port to expose network interface
++	 * port numbers") Zero was chosen as special case for user space
++	 * applications to fallback and query dev_id to check if it has
++	 * different value or not.
++	 *
++	 * Don't print warning in such scenario.
++	 *
++	 * https://github.com/systemd/systemd/blob/master/src/udev/udev-builtin-net_id.c#L358
++	 */
++	if (ndev->dev_port && ndev->dev_id == ndev->dev_port)
+ 		netdev_info_once(ndev,
+ 			"\"%s\" wants to know my dev_id. Should it look at dev_port instead? See Documentation/ABI/testing/sysfs-class-net for more info.\n",
+ 			current->comm);
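As the new comment in the ipoib hunk explains, dev_port reads as 0 on kernels that predate its use, and userspace treats 0 as "unset" and falls back to dev_id, so a warning is only warranted when a real, nonzero port number collides with dev_id. The guard, in isolation:

#include <stdio.h>

/* 0 means "dev_port not set"; only a genuine collision should warn. */
static int should_warn(unsigned int dev_id, unsigned int dev_port)
{
	return dev_port && dev_id == dev_port;
}

int main(void)
{
	printf("old kernel (0,0): %d\n", should_warn(0, 0));	/* 0 */
	printf("collision (1,1):  %d\n", should_warn(1, 1));	/* 1 */
	return 0;
}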
+diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
+index 5182c7d6171e..8d30653cd13a 100644
+--- a/drivers/iommu/tegra-smmu.c
++++ b/drivers/iommu/tegra-smmu.c
+@@ -102,7 +102,6 @@ static inline u32 smmu_readl(struct tegra_smmu *smmu, unsigned long offset)
+ #define  SMMU_TLB_FLUSH_VA_MATCH_ALL     (0 << 0)
+ #define  SMMU_TLB_FLUSH_VA_MATCH_SECTION (2 << 0)
+ #define  SMMU_TLB_FLUSH_VA_MATCH_GROUP   (3 << 0)
+-#define  SMMU_TLB_FLUSH_ASID(x)          (((x) & 0x7f) << 24)
+ #define  SMMU_TLB_FLUSH_VA_SECTION(addr) ((((addr) & 0xffc00000) >> 12) | \
+ 					  SMMU_TLB_FLUSH_VA_MATCH_SECTION)
+ #define  SMMU_TLB_FLUSH_VA_GROUP(addr)   ((((addr) & 0xffffc000) >> 12) | \
+@@ -205,8 +204,12 @@ static inline void smmu_flush_tlb_asid(struct tegra_smmu *smmu,
+ {
+ 	u32 value;
+ 
+-	value = SMMU_TLB_FLUSH_ASID_MATCH | SMMU_TLB_FLUSH_ASID(asid) |
+-		SMMU_TLB_FLUSH_VA_MATCH_ALL;
++	if (smmu->soc->num_asids == 4)
++		value = (asid & 0x3) << 29;
++	else
++		value = (asid & 0x7f) << 24;
++
++	value |= SMMU_TLB_FLUSH_ASID_MATCH | SMMU_TLB_FLUSH_VA_MATCH_ALL;
+ 	smmu_writel(smmu, value, SMMU_TLB_FLUSH);
+ }
+ 
+@@ -216,8 +219,12 @@ static inline void smmu_flush_tlb_section(struct tegra_smmu *smmu,
+ {
+ 	u32 value;
+ 
+-	value = SMMU_TLB_FLUSH_ASID_MATCH | SMMU_TLB_FLUSH_ASID(asid) |
+-		SMMU_TLB_FLUSH_VA_SECTION(iova);
++	if (smmu->soc->num_asids == 4)
++		value = (asid & 0x3) << 29;
++	else
++		value = (asid & 0x7f) << 24;
++
++	value |= SMMU_TLB_FLUSH_ASID_MATCH | SMMU_TLB_FLUSH_VA_SECTION(iova);
+ 	smmu_writel(smmu, value, SMMU_TLB_FLUSH);
+ }
+ 
+@@ -227,8 +234,12 @@ static inline void smmu_flush_tlb_group(struct tegra_smmu *smmu,
+ {
+ 	u32 value;
+ 
+-	value = SMMU_TLB_FLUSH_ASID_MATCH | SMMU_TLB_FLUSH_ASID(asid) |
+-		SMMU_TLB_FLUSH_VA_GROUP(iova);
++	if (smmu->soc->num_asids == 4)
++		value = (asid & 0x3) << 29;
++	else
++		value = (asid & 0x7f) << 24;
++
++	value |= SMMU_TLB_FLUSH_ASID_MATCH | SMMU_TLB_FLUSH_VA_GROUP(iova);
+ 	smmu_writel(smmu, value, SMMU_TLB_FLUSH);
+ }
+ 
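The tegra-smmu hunks drop the fixed SMMU_TLB_FLUSH_ASID() macro because the ASID field's position and width depend on the SoC: two bits at bit 29 on parts with four ASIDs, seven bits at bit 24 otherwise. The encoding the three flush helpers now share, as a small testable function (field positions taken from the hunks above):

#include <stdio.h>
#include <stdint.h>

static uint32_t asid_field(unsigned int num_asids, uint32_t asid)
{
	if (num_asids == 4)
		return (asid & 0x3) << 29;
	return (asid & 0x7f) << 24;
}

int main(void)
{
	printf("4 ASIDs,   asid 3: 0x%08x\n", asid_field(4, 3));
	printf("128 ASIDs, asid 3: 0x%08x\n", asid_field(128, 3));
	return 0;
}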
+diff --git a/drivers/md/dm-cache-metadata.c b/drivers/md/dm-cache-metadata.c
+index 6fc93834da44..151aa95775be 100644
+--- a/drivers/md/dm-cache-metadata.c
++++ b/drivers/md/dm-cache-metadata.c
+@@ -1167,11 +1167,18 @@ static int __load_discards(struct dm_cache_metadata *cmd,
+ 		if (r)
+ 			return r;
+ 
+-		for (b = 0; b < from_dblock(cmd->discard_nr_blocks); b++) {
++		for (b = 0; ; b++) {
+ 			r = fn(context, cmd->discard_block_size, to_dblock(b),
+ 			       dm_bitset_cursor_get_value(&c));
+ 			if (r)
+ 				break;
++
++			if (b >= (from_dblock(cmd->discard_nr_blocks) - 1))
++				break;
++
++			r = dm_bitset_cursor_next(&c);
++			if (r)
++				break;
+ 		}
+ 
+ 		dm_bitset_cursor_end(&c);
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index dd6565798778..86fd2d0fa975 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -949,6 +949,7 @@ static int crypt_integrity_ctr(struct crypt_config *cc, struct dm_target *ti)
+ {
+ #ifdef CONFIG_BLK_DEV_INTEGRITY
+ 	struct blk_integrity *bi = blk_get_integrity(cc->dev->bdev->bd_disk);
++	struct mapped_device *md = dm_table_get_md(ti->table);
+ 
+ 	/* From now we require underlying device with our integrity profile */
+ 	if (!bi || strcasecmp(bi->profile->name, "DM-DIF-EXT-TAG")) {
+@@ -968,7 +969,7 @@ static int crypt_integrity_ctr(struct crypt_config *cc, struct dm_target *ti)
+ 
+ 	if (crypt_integrity_aead(cc)) {
+ 		cc->integrity_tag_size = cc->on_disk_tag_size - cc->integrity_iv_size;
+-		DMINFO("Integrity AEAD, tag size %u, IV size %u.",
++		DMDEBUG("%s: Integrity AEAD, tag size %u, IV size %u.", dm_device_name(md),
+ 		       cc->integrity_tag_size, cc->integrity_iv_size);
+ 
+ 		if (crypto_aead_setauthsize(any_tfm_aead(cc), cc->integrity_tag_size)) {
+@@ -976,7 +977,7 @@ static int crypt_integrity_ctr(struct crypt_config *cc, struct dm_target *ti)
+ 			return -EINVAL;
+ 		}
+ 	} else if (cc->integrity_iv_size)
+-		DMINFO("Additional per-sector space %u bytes for IV.",
++		DMDEBUG("%s: Additional per-sector space %u bytes for IV.", dm_device_name(md),
+ 		       cc->integrity_iv_size);
+ 
+ 	if ((cc->integrity_tag_size + cc->integrity_iv_size) != bi->tag_size) {
+@@ -1891,7 +1892,7 @@ static int crypt_alloc_tfms_skcipher(struct crypt_config *cc, char *ciphermode)
+ 	 * algorithm implementation is used.  Help people debug performance
+ 	 * problems by logging the ->cra_driver_name.
+ 	 */
+-	DMINFO("%s using implementation \"%s\"", ciphermode,
++	DMDEBUG_LIMIT("%s using implementation \"%s\"", ciphermode,
+ 	       crypto_skcipher_alg(any_tfm(cc))->base.cra_driver_name);
+ 	return 0;
+ }
+@@ -1911,7 +1912,7 @@ static int crypt_alloc_tfms_aead(struct crypt_config *cc, char *ciphermode)
+ 		return err;
+ 	}
+ 
+-	DMINFO("%s using implementation \"%s\"", ciphermode,
++	DMDEBUG_LIMIT("%s using implementation \"%s\"", ciphermode,
+ 	       crypto_aead_alg(any_tfm_aead(cc))->base.cra_driver_name);
+ 	return 0;
+ }
+diff --git a/drivers/md/dm-delay.c b/drivers/md/dm-delay.c
+index fddffe251bf6..f496213f8b67 100644
+--- a/drivers/md/dm-delay.c
++++ b/drivers/md/dm-delay.c
+@@ -121,7 +121,8 @@ static void delay_dtr(struct dm_target *ti)
+ {
+ 	struct delay_c *dc = ti->private;
+ 
+-	destroy_workqueue(dc->kdelayd_wq);
++	if (dc->kdelayd_wq)
++		destroy_workqueue(dc->kdelayd_wq);
+ 
+ 	if (dc->read.dev)
+ 		dm_put_device(ti, dc->read.dev);
+diff --git a/drivers/md/dm-init.c b/drivers/md/dm-init.c
+index 4b76f84424c3..352e803f566e 100644
+--- a/drivers/md/dm-init.c
++++ b/drivers/md/dm-init.c
+@@ -160,7 +160,7 @@ static int __init dm_parse_table(struct dm_device *dev, char *str)
+ 
+ 	while (table_entry) {
+ 		DMDEBUG("parsing table \"%s\"", str);
+-		if (++dev->dmi.target_count >= DM_MAX_TARGETS) {
++		if (++dev->dmi.target_count > DM_MAX_TARGETS) {
+ 			DMERR("too many targets %u > %d",
+ 			      dev->dmi.target_count, DM_MAX_TARGETS);
+ 			return -EINVAL;
+@@ -242,9 +242,9 @@ static int __init dm_parse_devices(struct list_head *devices, char *str)
+ 			return -ENOMEM;
+ 		list_add_tail(&dev->list, devices);
+ 
+-		if (++ndev >= DM_MAX_DEVICES) {
+-			DMERR("too many targets %u > %d",
+-			      dev->dmi.target_count, DM_MAX_TARGETS);
++		if (++ndev > DM_MAX_DEVICES) {
++			DMERR("too many devices %lu > %d",
++			      ndev, DM_MAX_DEVICES);
+ 			return -EINVAL;
+ 		}
+ 
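Both dm-init checks had the same off-by-one: with a pre-incremented counter, `>=` rejects the last permitted entry, while `>` admits exactly the maximum (the second message also now reports devices against DM_MAX_DEVICES instead of repeating the targets text). The arithmetic, checked in isolation:

#include <stdio.h>

#define MAX 3

static int count_accepted(int strict)
{
	int count = 0, accepted = 0;

	while (accepted < 10) {
		++count;
		if (strict ? count >= MAX : count > MAX)
			break;
		accepted++;
	}
	return accepted;
}

int main(void)
{
	printf("with >=: accepted %d of %d\n", count_accepted(1), MAX); /* 2 */
	printf("with >:  accepted %d of %d\n", count_accepted(0), MAX); /* 3 */
	return 0;
}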
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 7c678f50aaa3..7848ef019880 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -2568,7 +2568,7 @@ static int calculate_device_limits(struct dm_integrity_c *ic)
+ 		if (last_sector < ic->start || last_sector >= ic->meta_device_sectors)
+ 			return -EINVAL;
+ 	} else {
+-		__u64 meta_size = ic->provided_data_sectors * ic->tag_size;
++		__u64 meta_size = (ic->provided_data_sectors >> ic->sb->log2_sectors_per_block) * ic->tag_size;
+ 		meta_size = (meta_size + ((1U << (ic->log2_buffer_sectors + SECTOR_SHIFT)) - 1))
+ 				>> (ic->log2_buffer_sectors + SECTOR_SHIFT);
+ 		meta_size <<= ic->log2_buffer_sectors;
+@@ -3439,7 +3439,7 @@ try_smaller_buffer:
+ 	DEBUG_print("	journal_sections %u\n", (unsigned)le32_to_cpu(ic->sb->journal_sections));
+ 	DEBUG_print("	journal_entries %u\n", ic->journal_entries);
+ 	DEBUG_print("	log2_interleave_sectors %d\n", ic->sb->log2_interleave_sectors);
+-	DEBUG_print("	device_sectors 0x%llx\n", (unsigned long long)ic->device_sectors);
++	DEBUG_print("	data_device_sectors 0x%llx\n", (unsigned long long)ic->data_device_sectors);
+ 	DEBUG_print("	initial_sectors 0x%x\n", ic->initial_sectors);
+ 	DEBUG_print("	metadata_run 0x%x\n", ic->metadata_run);
+ 	DEBUG_print("	log2_metadata_run %d\n", ic->log2_metadata_run);
+diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
+index c740153b4e52..1e03bc89e20f 100644
+--- a/drivers/md/dm-ioctl.c
++++ b/drivers/md/dm-ioctl.c
+@@ -2069,7 +2069,7 @@ int __init dm_early_create(struct dm_ioctl *dmi,
+ 	/* alloc table */
+ 	r = dm_table_create(&t, get_mode(dmi), dmi->target_count, md);
+ 	if (r)
+-		goto err_destroy_dm;
++		goto err_hash_remove;
+ 
+ 	/* add targets */
+ 	for (i = 0; i < dmi->target_count; i++) {
+@@ -2116,6 +2116,10 @@ int __init dm_early_create(struct dm_ioctl *dmi,
+ 
+ err_destroy_table:
+ 	dm_table_destroy(t);
++err_hash_remove:
++	(void) __hash_remove(__get_name_cell(dmi->name));
++	/* release reference from __get_name_cell */
++	dm_put(md);
+ err_destroy_dm:
+ 	dm_put(md);
+ 	dm_destroy(md);
+diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
+index 2ee5e357a0a7..cc5173dfd466 100644
+--- a/drivers/md/dm-mpath.c
++++ b/drivers/md/dm-mpath.c
+@@ -882,6 +882,7 @@ static struct pgpath *parse_path(struct dm_arg_set *as, struct path_selector *ps
+ 	if (attached_handler_name || m->hw_handler_name) {
+ 		INIT_DELAYED_WORK(&p->activate_path, activate_path_work);
+ 		r = setup_scsi_dh(p->path.dev->bdev, m, &attached_handler_name, &ti->error);
++		kfree(attached_handler_name);
+ 		if (r) {
+ 			dm_put_device(ti, p->path.dev);
+ 			goto bad;
+@@ -896,7 +897,6 @@ static struct pgpath *parse_path(struct dm_arg_set *as, struct path_selector *ps
+ 
+ 	return p;
+  bad:
+-	kfree(attached_handler_name);
+ 	free_pgpath(p);
+ 	return ERR_PTR(r);
+ }
+diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c
+index fa68336560c3..d8334cd45d7c 100644
+--- a/drivers/md/dm-zoned-metadata.c
++++ b/drivers/md/dm-zoned-metadata.c
+@@ -1169,6 +1169,9 @@ static int dmz_init_zones(struct dmz_metadata *zmd)
+ 			goto out;
+ 		}
+ 
++		if (!nr_blkz)
++			break;
++
+ 		/* Process report */
+ 		for (i = 0; i < nr_blkz; i++) {
+ 			ret = dmz_init_zone(zmd, zone, &blkz[i]);
+@@ -1204,6 +1207,8 @@ static int dmz_update_zone(struct dmz_metadata *zmd, struct dm_zone *zone)
+ 	/* Get zone information from disk */
+ 	ret = blkdev_report_zones(zmd->dev->bdev, dmz_start_sect(zmd, zone),
+ 				  &blkz, &nr_blkz, GFP_NOIO);
++	if (!nr_blkz)
++		ret = -EIO;
+ 	if (ret) {
+ 		dmz_dev_err(zmd->dev, "Get zone %u report failed",
+ 			    dmz_id(zmd, zone));
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 043f0761e4a0..08e7d412af95 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1467,7 +1467,7 @@ static unsigned get_num_write_zeroes_bios(struct dm_target *ti)
+ static int __send_changing_extent_only(struct clone_info *ci, struct dm_target *ti,
+ 				       unsigned num_bios)
+ {
+-	unsigned len = ci->sector_count;
++	unsigned len;
+ 
+ 	/*
+ 	 * Even though the device advertised support for this type of
+@@ -1478,6 +1478,8 @@ static int __send_changing_extent_only(struct clone_info *ci, struct dm_target *
+ 	if (!num_bios)
+ 		return -EOPNOTSUPP;
+ 
++	len = min((sector_t)ci->sector_count, max_io_len_target_boundary(ci->sector, ti));
++
+ 	__send_duplicate_bios(ci, ti, num_bios, &len);
+ 
+ 	ci->sector += len;
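The dm hunk clamps the length of the duplicated bios to the target's boundary instead of blindly using the whole remaining sector count, so a discard or write-zeroes bio never crosses into the next target. The clamp, sketched with a hypothetical helper standing in for max_io_len_target_boundary():

#include <stdio.h>

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

/* Hypothetical: sectors remaining before this target ends. */
static unsigned long target_boundary(unsigned long sector,
				     unsigned long target_end)
{
	return target_end - sector;
}

int main(void)
{
	unsigned long sector = 900, remaining = 400, target_end = 1024;
	unsigned long len = min_ul(remaining,
				   target_boundary(sector, target_end));

	printf("issue %lu sectors (clamped from %lu)\n", len, remaining);
	return 0;
}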
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 05ffffb8b769..295ff09cff4c 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -132,24 +132,6 @@ static inline int speed_max(struct mddev *mddev)
+ 		mddev->sync_speed_max : sysctl_speed_limit_max;
+ }
+ 
+-static void * flush_info_alloc(gfp_t gfp_flags, void *data)
+-{
+-        return kzalloc(sizeof(struct flush_info), gfp_flags);
+-}
+-static void flush_info_free(void *flush_info, void *data)
+-{
+-        kfree(flush_info);
+-}
+-
+-static void * flush_bio_alloc(gfp_t gfp_flags, void *data)
+-{
+-	return kzalloc(sizeof(struct flush_bio), gfp_flags);
+-}
+-static void flush_bio_free(void *flush_bio, void *data)
+-{
+-	kfree(flush_bio);
+-}
+-
+ static struct ctl_table_header *raid_table_header;
+ 
+ static struct ctl_table raid_table[] = {
+@@ -423,54 +405,31 @@ static int md_congested(void *data, int bits)
+ /*
+  * Generic flush handling for md
+  */
+-static void submit_flushes(struct work_struct *ws)
+-{
+-	struct flush_info *fi = container_of(ws, struct flush_info, flush_work);
+-	struct mddev *mddev = fi->mddev;
+-	struct bio *bio = fi->bio;
+-
+-	bio->bi_opf &= ~REQ_PREFLUSH;
+-	md_handle_request(mddev, bio);
+-
+-	mempool_free(fi, mddev->flush_pool);
+-}
+ 
+-static void md_end_flush(struct bio *fbio)
++static void md_end_flush(struct bio *bio)
+ {
+-	struct flush_bio *fb = fbio->bi_private;
+-	struct md_rdev *rdev = fb->rdev;
+-	struct flush_info *fi = fb->fi;
+-	struct bio *bio = fi->bio;
+-	struct mddev *mddev = fi->mddev;
++	struct md_rdev *rdev = bio->bi_private;
++	struct mddev *mddev = rdev->mddev;
+ 
+ 	rdev_dec_pending(rdev, mddev);
+ 
+-	if (atomic_dec_and_test(&fi->flush_pending)) {
+-		if (bio->bi_iter.bi_size == 0) {
+-			/* an empty barrier - all done */
+-			bio_endio(bio);
+-			mempool_free(fi, mddev->flush_pool);
+-		} else {
+-			INIT_WORK(&fi->flush_work, submit_flushes);
+-			queue_work(md_wq, &fi->flush_work);
+-		}
++	if (atomic_dec_and_test(&mddev->flush_pending)) {
++		/* The pre-request flush has finished */
++		queue_work(md_wq, &mddev->flush_work);
+ 	}
+-
+-	mempool_free(fb, mddev->flush_bio_pool);
+-	bio_put(fbio);
++	bio_put(bio);
+ }
+ 
+-void md_flush_request(struct mddev *mddev, struct bio *bio)
++static void md_submit_flush_data(struct work_struct *ws);
++
++static void submit_flushes(struct work_struct *ws)
+ {
++	struct mddev *mddev = container_of(ws, struct mddev, flush_work);
+ 	struct md_rdev *rdev;
+-	struct flush_info *fi;
+-
+-	fi = mempool_alloc(mddev->flush_pool, GFP_NOIO);
+-
+-	fi->bio = bio;
+-	fi->mddev = mddev;
+-	atomic_set(&fi->flush_pending, 1);
+ 
++	mddev->start_flush = ktime_get_boottime();
++	INIT_WORK(&mddev->flush_work, md_submit_flush_data);
++	atomic_set(&mddev->flush_pending, 1);
+ 	rcu_read_lock();
+ 	rdev_for_each_rcu(rdev, mddev)
+ 		if (rdev->raid_disk >= 0 &&
+@@ -480,37 +439,74 @@ void md_flush_request(struct mddev *mddev, struct bio *bio)
+ 			 * we reclaim rcu_read_lock
+ 			 */
+ 			struct bio *bi;
+-			struct flush_bio *fb;
+ 			atomic_inc(&rdev->nr_pending);
+ 			atomic_inc(&rdev->nr_pending);
+ 			rcu_read_unlock();
+-
+-			fb = mempool_alloc(mddev->flush_bio_pool, GFP_NOIO);
+-			fb->fi = fi;
+-			fb->rdev = rdev;
+-
+ 			bi = bio_alloc_mddev(GFP_NOIO, 0, mddev);
+-			bio_set_dev(bi, rdev->bdev);
+ 			bi->bi_end_io = md_end_flush;
+-			bi->bi_private = fb;
++			bi->bi_private = rdev;
++			bio_set_dev(bi, rdev->bdev);
+ 			bi->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
+-
+-			atomic_inc(&fi->flush_pending);
++			atomic_inc(&mddev->flush_pending);
+ 			submit_bio(bi);
+-
+ 			rcu_read_lock();
+ 			rdev_dec_pending(rdev, mddev);
+ 		}
+ 	rcu_read_unlock();
++	if (atomic_dec_and_test(&mddev->flush_pending))
++		queue_work(md_wq, &mddev->flush_work);
++}
++
++static void md_submit_flush_data(struct work_struct *ws)
++{
++	struct mddev *mddev = container_of(ws, struct mddev, flush_work);
++	struct bio *bio = mddev->flush_bio;
++
++	/*
++	 * must reset flush_bio before calling into md_handle_request to avoid a
++	 * deadlock, because other bios passed md_handle_request suspend check
++	 * could wait for this and below md_handle_request could wait for those
++	 * bios because of suspend check
++	 */
++	mddev->last_flush = mddev->start_flush;
++	mddev->flush_bio = NULL;
++	wake_up(&mddev->sb_wait);
++
++	if (bio->bi_iter.bi_size == 0) {
++		/* an empty barrier - all done */
++		bio_endio(bio);
++	} else {
++		bio->bi_opf &= ~REQ_PREFLUSH;
++		md_handle_request(mddev, bio);
++	}
++}
+ 
+-	if (atomic_dec_and_test(&fi->flush_pending)) {
+-		if (bio->bi_iter.bi_size == 0) {
++void md_flush_request(struct mddev *mddev, struct bio *bio)
++{
++	ktime_t start = ktime_get_boottime();
++	spin_lock_irq(&mddev->lock);
++	wait_event_lock_irq(mddev->sb_wait,
++			    !mddev->flush_bio ||
++			    ktime_after(mddev->last_flush, start),
++			    mddev->lock);
++	if (!ktime_after(mddev->last_flush, start)) {
++		WARN_ON(mddev->flush_bio);
++		mddev->flush_bio = bio;
++		bio = NULL;
++	}
++	spin_unlock_irq(&mddev->lock);
++
++	if (!bio) {
++		INIT_WORK(&mddev->flush_work, submit_flushes);
++		queue_work(md_wq, &mddev->flush_work);
++	} else {
++		/* flush was performed for some other bio while we waited. */
++		if (bio->bi_iter.bi_size == 0)
+ 			/* an empty barrier - all done */
+ 			bio_endio(bio);
+-			mempool_free(fi, mddev->flush_pool);
+-		} else {
+-			INIT_WORK(&fi->flush_work, submit_flushes);
+-			queue_work(md_wq, &fi->flush_work);
++		else {
++			bio->bi_opf &= ~REQ_PREFLUSH;
++			mddev->pers->make_request(mddev, bio);
+ 		}
+ 	}
+ }
+@@ -560,6 +556,7 @@ void mddev_init(struct mddev *mddev)
+ 	atomic_set(&mddev->openers, 0);
+ 	atomic_set(&mddev->active_io, 0);
+ 	spin_lock_init(&mddev->lock);
++	atomic_set(&mddev->flush_pending, 0);
+ 	init_waitqueue_head(&mddev->sb_wait);
+ 	init_waitqueue_head(&mddev->recovery_wait);
+ 	mddev->reshape_position = MaxSector;
+@@ -2855,8 +2852,10 @@ state_store(struct md_rdev *rdev, const char *buf, size_t len)
+ 			err = 0;
+ 		}
+ 	} else if (cmd_match(buf, "re-add")) {
+-		if (test_bit(Faulty, &rdev->flags) && (rdev->raid_disk == -1) &&
+-			rdev->saved_raid_disk >= 0) {
++		if (!rdev->mddev->pers)
++			err = -EINVAL;
++		else if (test_bit(Faulty, &rdev->flags) && (rdev->raid_disk == -1) &&
++				rdev->saved_raid_disk >= 0) {
+ 			/* clear_bit is performed _after_ all the devices
+ 			 * have their local Faulty bit cleared. If any writes
+ 			 * happen in the meantime in the local node, they
+@@ -5511,22 +5510,6 @@ int md_run(struct mddev *mddev)
+ 		if (err)
+ 			return err;
+ 	}
+-	if (mddev->flush_pool == NULL) {
+-		mddev->flush_pool = mempool_create(NR_FLUSH_INFOS, flush_info_alloc,
+-						flush_info_free, mddev);
+-		if (!mddev->flush_pool) {
+-			err = -ENOMEM;
+-			goto abort;
+-		}
+-	}
+-	if (mddev->flush_bio_pool == NULL) {
+-		mddev->flush_bio_pool = mempool_create(NR_FLUSH_BIOS, flush_bio_alloc,
+-						flush_bio_free, mddev);
+-		if (!mddev->flush_bio_pool) {
+-			err = -ENOMEM;
+-			goto abort;
+-		}
+-	}
+ 
+ 	spin_lock(&pers_lock);
+ 	pers = find_pers(mddev->level, mddev->clevel);
+@@ -5686,11 +5669,8 @@ int md_run(struct mddev *mddev)
+ 	return 0;
+ 
+ abort:
+-	mempool_destroy(mddev->flush_bio_pool);
+-	mddev->flush_bio_pool = NULL;
+-	mempool_destroy(mddev->flush_pool);
+-	mddev->flush_pool = NULL;
+-
++	bioset_exit(&mddev->bio_set);
++	bioset_exit(&mddev->sync_set);
+ 	return err;
+ }
+ EXPORT_SYMBOL_GPL(md_run);
+@@ -5894,14 +5874,6 @@ static void __md_stop(struct mddev *mddev)
+ 		mddev->to_remove = &md_redundancy_group;
+ 	module_put(pers->owner);
+ 	clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+-	if (mddev->flush_bio_pool) {
+-		mempool_destroy(mddev->flush_bio_pool);
+-		mddev->flush_bio_pool = NULL;
+-	}
+-	if (mddev->flush_pool) {
+-		mempool_destroy(mddev->flush_pool);
+-		mddev->flush_pool = NULL;
+-	}
+ }
+ 
+ void md_stop(struct mddev *mddev)
+@@ -9257,7 +9229,7 @@ static void check_sb_changes(struct mddev *mddev, struct md_rdev *rdev)
+ 		 * reshape is happening in the remote node, we need to
+ 		 * update reshape_position and call start_reshape.
+ 		 */
+-		mddev->reshape_position = sb->reshape_position;
++		mddev->reshape_position = le64_to_cpu(sb->reshape_position);
+ 		if (mddev->pers->update_reshape_pos)
+ 			mddev->pers->update_reshape_pos(mddev);
+ 		if (mddev->pers->start_reshape)
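The md rework above retires the flush_info/flush_bio mempools in favour of a single flush_bio per mddev plus two timestamps: a request entering md_flush_request() only submits its own preflush when no flush that started after it arrived has already completed; otherwise it reuses the result of the flush that covered it. The ktime comparison at the heart of it, as a simplified single-threaded sketch (the kernel, of course, does this under mddev->lock with a waitqueue):

#include <stdio.h>

static long last_flush;	/* start time of the last completed flush */

static const char *flush_request(long arrival)
{
	if (last_flush > arrival)
		return "skip: a flush started after us already finished";
	return "submit our own preflush";
}

int main(void)
{
	last_flush = 100;
	printf("request from t=150: %s\n", flush_request(150));
	printf("request from t=50:  %s\n", flush_request(50));
	return 0;
}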
+diff --git a/drivers/md/md.h b/drivers/md/md.h
+index c52afb52c776..257cb4c9e22b 100644
+--- a/drivers/md/md.h
++++ b/drivers/md/md.h
+@@ -252,19 +252,6 @@ enum mddev_sb_flags {
+ 	MD_SB_NEED_REWRITE,	/* metadata write needs to be repeated */
+ };
+ 
+-#define NR_FLUSH_INFOS 8
+-#define NR_FLUSH_BIOS 64
+-struct flush_info {
+-	struct bio			*bio;
+-	struct mddev			*mddev;
+-	struct work_struct		flush_work;
+-	atomic_t			flush_pending;
+-};
+-struct flush_bio {
+-	struct flush_info *fi;
+-	struct md_rdev *rdev;
+-};
+-
+ struct mddev {
+ 	void				*private;
+ 	struct md_personality		*pers;
+@@ -470,8 +457,16 @@ struct mddev {
+ 						   * metadata and bitmap writes
+ 						   */
+ 
+-	mempool_t			*flush_pool;
+-	mempool_t			*flush_bio_pool;
++	/* Generic flush handling.
++	 * The last to finish preflush schedules a worker to submit
++	 * the rest of the request (without the REQ_PREFLUSH flag).
++	 */
++	struct bio *flush_bio;
++	atomic_t flush_pending;
++	ktime_t start_flush, last_flush; /* last_flush is when the last completed
++					  * flush was started.
++					  */
++	struct work_struct flush_work;
+ 	struct work_struct event_work;	/* used by dm to report failure event */
+ 	void (*sync_super)(struct mddev *mddev, struct md_rdev *rdev);
+ 	struct md_cluster_info		*cluster_info;
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 364dd2f6fa1b..a4d2f552c8ab 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -4187,7 +4187,7 @@ static void handle_parity_checks6(struct r5conf *conf, struct stripe_head *sh,
+ 		/* now write out any block on a failed drive,
+ 		 * or P or Q if they were recomputed
+ 		 */
+-		BUG_ON(s->uptodate < disks - 1); /* We don't need Q to recover */
++		dev = NULL;
+ 		if (s->failed == 2) {
+ 			dev = &sh->dev[s->failed_num[1]];
+ 			s->locked++;
+@@ -4212,6 +4212,14 @@ static void handle_parity_checks6(struct r5conf *conf, struct stripe_head *sh,
+ 			set_bit(R5_LOCKED, &dev->flags);
+ 			set_bit(R5_Wantwrite, &dev->flags);
+ 		}
++		if (WARN_ONCE(dev && !test_bit(R5_UPTODATE, &dev->flags),
++			      "%s: disk%td not up to date\n",
++			      mdname(conf->mddev),
++			      dev - (struct r5dev *) &sh->dev)) {
++			clear_bit(R5_LOCKED, &dev->flags);
++			clear_bit(R5_Wantwrite, &dev->flags);
++			s->locked--;
++		}
+ 		clear_bit(STRIPE_DEGRADED, &sh->state);
+ 
+ 		set_bit(STRIPE_INSYNC, &sh->state);
+@@ -4223,15 +4231,26 @@ static void handle_parity_checks6(struct r5conf *conf, struct stripe_head *sh,
+ 	case check_state_check_result:
+ 		sh->check_state = check_state_idle;
+ 
+-		if (s->failed > 1)
+-			break;
+ 		/* handle a successful check operation, if parity is correct
+ 		 * we are done.  Otherwise update the mismatch count and repair
+ 		 * parity if !MD_RECOVERY_CHECK
+ 		 */
+ 		if (sh->ops.zero_sum_result == 0) {
+-			/* Any parity checked was correct */
+-			set_bit(STRIPE_INSYNC, &sh->state);
++			/* both parities are correct */
++			if (!s->failed)
++				set_bit(STRIPE_INSYNC, &sh->state);
++			else {
++				/* in contrast to the raid5 case we can validate
++				 * parity, but still have a failure to write
++				 * back
++				 */
++				sh->check_state = check_state_compute_result;
++				/* Returning at this point means that we may go
++				 * off and bring p and/or q uptodate again so
++				 * we make sure to check zero_sum_result again
++				 * to verify if p or q need writeback
++				 */
++			}
+ 		} else {
+ 			atomic64_add(STRIPE_SECTORS, &conf->mddev->resync_mismatches);
+ 			if (test_bit(MD_RECOVERY_CHECK, &conf->mddev->recovery)) {
+diff --git a/drivers/media/i2c/ov6650.c b/drivers/media/i2c/ov6650.c
+index c33fd584cb44..f9359b11fa5c 100644
+--- a/drivers/media/i2c/ov6650.c
++++ b/drivers/media/i2c/ov6650.c
+@@ -814,6 +814,8 @@ static int ov6650_video_probe(struct i2c_client *client)
+ 	if (ret < 0)
+ 		return ret;
+ 
++	msleep(20);
++
+ 	/*
+ 	 * check and show product ID and manufacturer ID
+ 	 */
+diff --git a/drivers/media/platform/Kconfig b/drivers/media/platform/Kconfig
+index 4acbed189644..67e48ff10532 100644
+--- a/drivers/media/platform/Kconfig
++++ b/drivers/media/platform/Kconfig
+@@ -649,7 +649,7 @@ config VIDEO_SECO_CEC
+ config VIDEO_SECO_RC
+ 	bool "SECO Boards IR RC5 support"
+ 	depends on VIDEO_SECO_CEC
+-	depends on RC_CORE
++	depends on RC_CORE=y || RC_CORE = VIDEO_SECO_CEC
+ 	help
+ 	  If you say yes here you will get support for the
+ 	  SECO Boards Consumer-IR in seco-cec driver.
+diff --git a/drivers/memory/tegra/mc.c b/drivers/memory/tegra/mc.c
+index 0a53598d982f..5bd8df926052 100644
+--- a/drivers/memory/tegra/mc.c
++++ b/drivers/memory/tegra/mc.c
+@@ -282,7 +282,7 @@ static int tegra_mc_setup_latency_allowance(struct tegra_mc *mc)
+ 	u32 value;
+ 
+ 	/* compute the number of MC clock cycles per tick */
+-	tick = mc->tick * clk_get_rate(mc->clk);
++	tick = (unsigned long long)mc->tick * clk_get_rate(mc->clk);
+ 	do_div(tick, NSEC_PER_SEC);
+ 
+ 	value = readl(mc->regs + MC_EMEM_ARB_CFG);
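The memory-controller hunk widens one operand before the multiply: without the cast, mc->tick * clk_get_rate() is evaluated at 32-bit width on 32-bit builds and wraps before it is ever assigned to the 64-bit tick. A standalone reproduction with illustrative numbers:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t tick_ns = 30;			/* illustrative values */
	uint32_t rate_hz = 400000000;		/* 400 MHz */

	uint64_t wrong = tick_ns * rate_hz;		/* wraps, then widens */
	uint64_t right = (uint64_t)tick_ns * rate_hz;	/* widens first */

	printf("wrong: %llu\n", (unsigned long long)wrong);
	printf("right: %llu\n", (unsigned long long)right);
	return 0;
}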
+diff --git a/drivers/net/Makefile b/drivers/net/Makefile
+index 21cde7e78621..0d3ba056cda3 100644
+--- a/drivers/net/Makefile
++++ b/drivers/net/Makefile
+@@ -40,7 +40,7 @@ obj-$(CONFIG_ARCNET) += arcnet/
+ obj-$(CONFIG_DEV_APPLETALK) += appletalk/
+ obj-$(CONFIG_CAIF) += caif/
+ obj-$(CONFIG_CAN) += can/
+-obj-$(CONFIG_NET_DSA) += dsa/
++obj-y += dsa/
+ obj-$(CONFIG_ETHERNET) += ethernet/
+ obj-$(CONFIG_FDDI) += fddi/
+ obj-$(CONFIG_HIPPI) += hippi/
+diff --git a/drivers/net/ethernet/mellanox/mlx4/mcg.c b/drivers/net/ethernet/mellanox/mlx4/mcg.c
+index ffed2d4c9403..9c481823b3e8 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/mcg.c
++++ b/drivers/net/ethernet/mellanox/mlx4/mcg.c
+@@ -1492,7 +1492,7 @@ int mlx4_flow_steer_promisc_add(struct mlx4_dev *dev, u8 port,
+ 	rule.port = port;
+ 	rule.qpn = qpn;
+ 	INIT_LIST_HEAD(&rule.list);
+-	mlx4_err(dev, "going promisc on %x\n", port);
++	mlx4_info(dev, "going promisc on %x\n", port);
+ 
+ 	return  mlx4_flow_attach(dev, &rule, regid_p);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
+index 6debffb8336b..430c2eab6fc3 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
++++ b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
+@@ -7,6 +7,7 @@ config MLX5_CORE
+ 	depends on PCI
+ 	imply PTP_1588_CLOCK
+ 	imply VXLAN
++	imply MLXFW
+ 	default n
+ 	---help---
+ 	  Core driver for low level functionality of the ConnectX-4 and
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c b/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c
+index 4746f2d28fb6..0ccd6d40baf7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c
+@@ -26,7 +26,7 @@ static int mlx5_peer_pf_disable_hca(struct mlx5_core_dev *dev)
+ 
+ 	MLX5_SET(disable_hca_in, in, opcode, MLX5_CMD_OP_DISABLE_HCA);
+ 	MLX5_SET(disable_hca_in, in, function_id, 0);
+-	MLX5_SET(enable_hca_in, in, embedded_cpu_function, 0);
++	MLX5_SET(disable_hca_in, in, embedded_cpu_function, 0);
+ 	return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index 78dc8fe2a83c..2821208119c0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -1901,6 +1901,22 @@ static int mlx5e_flash_device(struct net_device *dev,
+ 	return mlx5e_ethtool_flash_device(priv, flash);
+ }
+ 
++#ifndef CONFIG_MLX5_EN_RXNFC
++/* When CONFIG_MLX5_EN_RXNFC=n we only support ETHTOOL_GRXRINGS
++ * otherwise this function will be defined from en_fs_ethtool.c
++ */
++static int mlx5e_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *info, u32 *rule_locs)
++{
++	struct mlx5e_priv *priv = netdev_priv(dev);
++
++	if (info->cmd != ETHTOOL_GRXRINGS)
++		return -EOPNOTSUPP;
++	/* ring_count is needed by ethtool -x */
++	info->data = priv->channels.params.num_channels;
++	return 0;
++}
++#endif
++
+ const struct ethtool_ops mlx5e_ethtool_ops = {
+ 	.get_drvinfo       = mlx5e_get_drvinfo,
+ 	.get_link          = ethtool_op_get_link,
+@@ -1919,8 +1935,8 @@ const struct ethtool_ops mlx5e_ethtool_ops = {
+ 	.get_rxfh_indir_size = mlx5e_get_rxfh_indir_size,
+ 	.get_rxfh          = mlx5e_get_rxfh,
+ 	.set_rxfh          = mlx5e_set_rxfh,
+-#ifdef CONFIG_MLX5_EN_RXNFC
+ 	.get_rxnfc         = mlx5e_get_rxnfc,
++#ifdef CONFIG_MLX5_EN_RXNFC
+ 	.set_rxnfc         = mlx5e_set_rxnfc,
+ #endif
+ 	.flash_device      = mlx5e_flash_device,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index a66b6ed80b30..0b09fa91019d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -65,9 +65,26 @@ static void mlx5e_rep_indr_unregister_block(struct mlx5e_rep_priv *rpriv,
+ static void mlx5e_rep_get_drvinfo(struct net_device *dev,
+ 				  struct ethtool_drvinfo *drvinfo)
+ {
++	struct mlx5e_priv *priv = netdev_priv(dev);
++	struct mlx5_core_dev *mdev = priv->mdev;
++
+ 	strlcpy(drvinfo->driver, mlx5e_rep_driver_name,
+ 		sizeof(drvinfo->driver));
+ 	strlcpy(drvinfo->version, UTS_RELEASE, sizeof(drvinfo->version));
++	snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version),
++		 "%d.%d.%04d (%.16s)",
++		 fw_rev_maj(mdev), fw_rev_min(mdev),
++		 fw_rev_sub(mdev), mdev->board_id);
++}
++
++static void mlx5e_uplink_rep_get_drvinfo(struct net_device *dev,
++					 struct ethtool_drvinfo *drvinfo)
++{
++	struct mlx5e_priv *priv = netdev_priv(dev);
++
++	mlx5e_rep_get_drvinfo(dev, drvinfo);
++	strlcpy(drvinfo->bus_info, pci_name(priv->mdev->pdev),
++		sizeof(drvinfo->bus_info));
+ }
+ 
+ static const struct counter_desc sw_rep_stats_desc[] = {
+@@ -363,7 +380,7 @@ static const struct ethtool_ops mlx5e_vf_rep_ethtool_ops = {
+ };
+ 
+ static const struct ethtool_ops mlx5e_uplink_rep_ethtool_ops = {
+-	.get_drvinfo	   = mlx5e_rep_get_drvinfo,
++	.get_drvinfo	   = mlx5e_uplink_rep_get_drvinfo,
+ 	.get_link	   = ethtool_op_get_link,
+ 	.get_strings       = mlx5e_rep_get_strings,
+ 	.get_sset_count    = mlx5e_rep_get_sset_count,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index d75dc44eb2ff..4cb23631616b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -1561,7 +1561,7 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
+ 	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_CVLAN)) {
+ 		struct flow_match_vlan match;
+ 
+-		flow_rule_match_vlan(rule, &match);
++		flow_rule_match_cvlan(rule, &match);
+ 		if (match.mask->vlan_id ||
+ 		    match.mask->vlan_priority ||
+ 		    match.mask->vlan_tpid) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 0be3eb86dd84..581cc145795d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -1386,6 +1386,8 @@ static bool mlx5_flow_dests_cmp(struct mlx5_flow_destination *d1,
+ 		if ((d1->type == MLX5_FLOW_DESTINATION_TYPE_VPORT &&
+ 		     d1->vport.num == d2->vport.num &&
+ 		     d1->vport.flags == d2->vport.flags &&
++		     ((d1->vport.flags & MLX5_FLOW_DEST_VPORT_VHCA_ID) ?
++		      (d1->vport.vhca_id == d2->vport.vhca_id) : true) &&
+ 		     ((d1->vport.flags & MLX5_FLOW_DEST_VPORT_REFORMAT_ID) ?
+ 		      (d1->vport.reformat_id == d2->vport.reformat_id) : true)) ||
+ 		    (d1->type == MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE &&
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+index ca0ee9916e9e..0059b290e095 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+@@ -535,23 +535,16 @@ void mlx5_init_clock(struct mlx5_core_dev *mdev)
+ 	do_div(ns, NSEC_PER_SEC / HZ);
+ 	clock->overflow_period = ns;
+ 
+-	mdev->clock_info_page = alloc_page(GFP_KERNEL);
+-	if (mdev->clock_info_page) {
+-		mdev->clock_info = kmap(mdev->clock_info_page);
+-		if (!mdev->clock_info) {
+-			__free_page(mdev->clock_info_page);
+-			mlx5_core_warn(mdev, "failed to map clock page\n");
+-		} else {
+-			mdev->clock_info->sign   = 0;
+-			mdev->clock_info->nsec   = clock->tc.nsec;
+-			mdev->clock_info->cycles = clock->tc.cycle_last;
+-			mdev->clock_info->mask   = clock->cycles.mask;
+-			mdev->clock_info->mult   = clock->nominal_c_mult;
+-			mdev->clock_info->shift  = clock->cycles.shift;
+-			mdev->clock_info->frac   = clock->tc.frac;
+-			mdev->clock_info->overflow_period =
+-						clock->overflow_period;
+-		}
++	mdev->clock_info =
++		(struct mlx5_ib_clock_info *)get_zeroed_page(GFP_KERNEL);
++	if (mdev->clock_info) {
++		mdev->clock_info->nsec = clock->tc.nsec;
++		mdev->clock_info->cycles = clock->tc.cycle_last;
++		mdev->clock_info->mask = clock->cycles.mask;
++		mdev->clock_info->mult = clock->nominal_c_mult;
++		mdev->clock_info->shift = clock->cycles.shift;
++		mdev->clock_info->frac = clock->tc.frac;
++		mdev->clock_info->overflow_period = clock->overflow_period;
+ 	}
+ 
+ 	INIT_WORK(&clock->pps_info.out_work, mlx5_pps_out);
+@@ -599,8 +592,7 @@ void mlx5_cleanup_clock(struct mlx5_core_dev *mdev)
+ 	cancel_delayed_work_sync(&clock->overflow_work);
+ 
+ 	if (mdev->clock_info) {
+-		kunmap(mdev->clock_info_page);
+-		__free_page(mdev->clock_info_page);
++		free_page((unsigned long)mdev->clock_info);
+ 		mdev->clock_info = NULL;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c
+index f26a4ca29363..0b56291d22c6 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
+@@ -122,6 +122,12 @@ void *mlxsw_core_driver_priv(struct mlxsw_core *mlxsw_core)
+ }
+ EXPORT_SYMBOL(mlxsw_core_driver_priv);
+ 
++bool mlxsw_core_res_query_enabled(const struct mlxsw_core *mlxsw_core)
++{
++	return mlxsw_core->driver->res_query_enabled;
++}
++EXPORT_SYMBOL(mlxsw_core_res_query_enabled);
++
+ struct mlxsw_rx_listener_item {
+ 	struct list_head list;
+ 	struct mlxsw_rx_listener rxl;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.h b/drivers/net/ethernet/mellanox/mlxsw/core.h
+index 8ec53f027575..62b8de9305af 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/core.h
+@@ -28,6 +28,8 @@ unsigned int mlxsw_core_max_ports(const struct mlxsw_core *mlxsw_core);
+ 
+ void *mlxsw_core_driver_priv(struct mlxsw_core *mlxsw_core);
+ 
++bool mlxsw_core_res_query_enabled(const struct mlxsw_core *mlxsw_core);
++
+ int mlxsw_core_driver_register(struct mlxsw_driver *mlxsw_driver);
+ void mlxsw_core_driver_unregister(struct mlxsw_driver *mlxsw_driver);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_env.c b/drivers/net/ethernet/mellanox/mlxsw/core_env.c
+index c1c1965d7acc..72539a9a3847 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core_env.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core_env.c
+@@ -3,6 +3,7 @@
+ 
+ #include <linux/kernel.h>
+ #include <linux/err.h>
++#include <linux/sfp.h>
+ 
+ #include "core.h"
+ #include "core_env.h"
+@@ -162,7 +163,7 @@ int mlxsw_env_get_module_info(struct mlxsw_core *mlxsw_core, int module,
+ {
+ 	u8 module_info[MLXSW_REG_MCIA_EEPROM_MODULE_INFO_SIZE];
+ 	u16 offset = MLXSW_REG_MCIA_EEPROM_MODULE_INFO_SIZE;
+-	u8 module_rev_id, module_id;
++	u8 module_rev_id, module_id, diag_mon;
+ 	unsigned int read_size;
+ 	int err;
+ 
+@@ -195,8 +196,21 @@ int mlxsw_env_get_module_info(struct mlxsw_core *mlxsw_core, int module,
+ 		}
+ 		break;
+ 	case MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID_SFP:
++		/* Verify if transceiver provides diagnostic monitoring page */
++		err = mlxsw_env_query_module_eeprom(mlxsw_core, module,
++						    SFP_DIAGMON, 1, &diag_mon,
++						    &read_size);
++		if (err)
++			return err;
++
++		if (read_size < 1)
++			return -EIO;
++
+ 		modinfo->type       = ETH_MODULE_SFF_8472;
+-		modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
++		if (diag_mon)
++			modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
++		else
++			modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN / 2;
+ 		break;
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c b/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c
+index 6956bbebe2f1..496dc904c5ed 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c
+@@ -518,6 +518,9 @@ static int mlxsw_hwmon_module_init(struct mlxsw_hwmon *mlxsw_hwmon)
+ 	u8 width;
+ 	int err;
+ 
++	if (!mlxsw_core_res_query_enabled(mlxsw_hwmon->core))
++		return 0;
++
+ 	/* Add extra attributes for module temperature. Sensor index is
+ 	 * assigned to sensor_count value, while all indexed before
+ 	 * sensor_count are already utilized by the sensors connected through
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
+index 472f63f9fac5..d3e851e7ca72 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
+@@ -740,6 +740,9 @@ mlxsw_thermal_modules_init(struct device *dev, struct mlxsw_core *core,
+ 	struct mlxsw_thermal_module *module_tz;
+ 	int i, err;
+ 
++	if (!mlxsw_core_res_query_enabled(core))
++		return 0;
++
+ 	thermal->tz_module_arr = kcalloc(module_count,
+ 					 sizeof(*thermal->tz_module_arr),
+ 					 GFP_KERNEL);
+@@ -776,6 +779,9 @@ mlxsw_thermal_modules_fini(struct mlxsw_thermal *thermal)
+ 	unsigned int module_count = mlxsw_core_max_ports(thermal->core);
+ 	int i;
+ 
++	if (!mlxsw_core_res_query_enabled(thermal->core))
++		return;
++
+ 	for (i = module_count - 1; i >= 0; i--)
+ 		mlxsw_thermal_module_fini(&thermal->tz_module_arr[i]);
+ 	kfree(thermal->tz_module_arr);
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+index 4d78be4ec4e9..843ddf548f26 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+@@ -168,6 +168,7 @@ void nfp_tunnel_keep_alive(struct nfp_app *app, struct sk_buff *skb)
+ 		return;
+ 	}
+ 
++	rcu_read_lock();
+ 	for (i = 0; i < count; i++) {
+ 		ipv4_addr = payload->tun_info[i].ipv4;
+ 		port = be32_to_cpu(payload->tun_info[i].egress_port);
+@@ -183,6 +184,7 @@ void nfp_tunnel_keep_alive(struct nfp_app *app, struct sk_buff *skb)
+ 		neigh_event_send(n, NULL);
+ 		neigh_release(n);
+ 	}
++	rcu_read_unlock();
+ }
+ 
+ static int
+@@ -366,9 +368,10 @@ void nfp_tunnel_request_route(struct nfp_app *app, struct sk_buff *skb)
+ 
+ 	payload = nfp_flower_cmsg_get_data(skb);
+ 
++	rcu_read_lock();
+ 	netdev = nfp_app_repr_get(app, be32_to_cpu(payload->ingress_port));
+ 	if (!netdev)
+-		goto route_fail_warning;
++		goto fail_rcu_unlock;
+ 
+ 	flow.daddr = payload->ipv4_addr;
+ 	flow.flowi4_proto = IPPROTO_UDP;
+@@ -378,21 +381,23 @@ void nfp_tunnel_request_route(struct nfp_app *app, struct sk_buff *skb)
+ 	rt = ip_route_output_key(dev_net(netdev), &flow);
+ 	err = PTR_ERR_OR_ZERO(rt);
+ 	if (err)
+-		goto route_fail_warning;
++		goto fail_rcu_unlock;
+ #else
+-	goto route_fail_warning;
++	goto fail_rcu_unlock;
+ #endif
+ 
+ 	/* Get the neighbour entry for the lookup */
+ 	n = dst_neigh_lookup(&rt->dst, &flow.daddr);
+ 	ip_rt_put(rt);
+ 	if (!n)
+-		goto route_fail_warning;
+-	nfp_tun_write_neigh(n->dev, app, &flow, n, GFP_KERNEL);
++		goto fail_rcu_unlock;
++	nfp_tun_write_neigh(n->dev, app, &flow, n, GFP_ATOMIC);
+ 	neigh_release(n);
++	rcu_read_unlock();
+ 	return;
+ 
+-route_fail_warning:
++fail_rcu_unlock:
++	rcu_read_unlock();
+ 	nfp_flower_cmsg_warn(app, "Requested route not found.\n");
+ }
+ 
+diff --git a/drivers/net/ppp/ppp_deflate.c b/drivers/net/ppp/ppp_deflate.c
+index b5edc7f96a39..685e875f5164 100644
+--- a/drivers/net/ppp/ppp_deflate.c
++++ b/drivers/net/ppp/ppp_deflate.c
+@@ -610,12 +610,20 @@ static struct compressor ppp_deflate_draft = {
+ 
+ static int __init deflate_init(void)
+ {
+-        int answer = ppp_register_compressor(&ppp_deflate);
+-        if (answer == 0)
+-                printk(KERN_INFO
+-		       "PPP Deflate Compression module registered\n");
+-	ppp_register_compressor(&ppp_deflate_draft);
+-        return answer;
++	int rc;
++
++	rc = ppp_register_compressor(&ppp_deflate);
++	if (rc)
++		return rc;
++
++	rc = ppp_register_compressor(&ppp_deflate_draft);
++	if (rc) {
++		ppp_unregister_compressor(&ppp_deflate);
++		return rc;
++	}
++
++	pr_info("PPP Deflate Compression module registered\n");
++	return 0;
+ }
+ 
+ static void __exit deflate_cleanup(void)
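The rewritten deflate_init() above makes module registration all-or-nothing: it no longer ignores the second ppp_register_compressor() result, and it unregisters the first compressor when the second fails. The shape of the fix, with stand-in register/unregister functions:

#include <stdio.h>

static int register_a(void)    { puts("A registered"); return 0; }
static int register_b(void)    { puts("B failed");     return -1; }
static void unregister_a(void) { puts("A unregistered"); }

static int init_demo(void)
{
	int rc;

	rc = register_a();
	if (rc)
		return rc;

	rc = register_b();
	if (rc) {
		unregister_a();	/* undo step one: all-or-nothing */
		return rc;
	}

	puts("module registered");
	return 0;
}

int main(void)
{
	return init_demo() ? 1 : 0;
}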
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 679e404a5224..366217263d70 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1250,6 +1250,8 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x1bc7, 0x1101, 3)},	/* Telit ME910 dual modem */
+ 	{QMI_FIXED_INTF(0x1bc7, 0x1200, 5)},	/* Telit LE920 */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1201, 2)},	/* Telit LE920, LE920A4 */
++	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1260, 2)},	/* Telit LE910Cx */
++	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1261, 2)},	/* Telit LE910Cx */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1900, 1)},	/* Telit LN940 series */
+ 	{QMI_FIXED_INTF(0x1c9e, 0x9801, 3)},	/* Telewell TW-3G HSPA+ */
+ 	{QMI_FIXED_INTF(0x1c9e, 0x9803, 4)},	/* Telewell TW-3G HSPA+ */
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c
+index 7535cb0d4ac0..9f1417e00073 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c
+@@ -31,6 +31,10 @@ struct brcmf_dmi_data {
+ 
+ /* NOTE: Please keep all entries sorted alphabetically */
+ 
++static const struct brcmf_dmi_data acepc_t8_data = {
++	BRCM_CC_4345_CHIP_ID, 6, "acepc-t8"
++};
++
+ static const struct brcmf_dmi_data gpd_win_pocket_data = {
+ 	BRCM_CC_4356_CHIP_ID, 2, "gpd-win-pocket"
+ };
+@@ -48,6 +52,28 @@ static const struct brcmf_dmi_data pov_tab_p1006w_data = {
+ };
+ 
+ static const struct dmi_system_id dmi_platform_data[] = {
++	{
++		/* ACEPC T8 Cherry Trail Z8350 mini PC */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "To be filled by O.E.M."),
++			DMI_EXACT_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "T8"),
++			/* also match on somewhat unique bios-version */
++			DMI_EXACT_MATCH(DMI_BIOS_VERSION, "1.000"),
++		},
++		.driver_data = (void *)&acepc_t8_data,
++	},
++	{
++		/* ACEPC T11 Cherry Trail Z8350 mini PC, same wifi as the T8 */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "To be filled by O.E.M."),
++			DMI_EXACT_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "T11"),
++			/* also match on somewhat unique bios-version */
++			DMI_EXACT_MATCH(DMI_BIOS_VERSION, "1.000"),
++		},
++		.driver_data = (void *)&acepc_t8_data,
++	},
+ 	{
+ 		/* Match for the GPDwin which unfortunately uses somewhat
+ 		 * generic dmi strings, which is why we test for 4 strings.
+diff --git a/drivers/net/wireless/intersil/p54/p54pci.c b/drivers/net/wireless/intersil/p54/p54pci.c
+index 27a49068d32d..57ad56435dda 100644
+--- a/drivers/net/wireless/intersil/p54/p54pci.c
++++ b/drivers/net/wireless/intersil/p54/p54pci.c
+@@ -554,7 +554,7 @@ static int p54p_probe(struct pci_dev *pdev,
+ 	err = pci_enable_device(pdev);
+ 	if (err) {
+ 		dev_err(&pdev->dev, "Cannot enable new PCI device\n");
+-		return err;
++		goto err_put;
+ 	}
+ 
+ 	mem_addr = pci_resource_start(pdev, 0);
+@@ -639,6 +639,7 @@ static int p54p_probe(struct pci_dev *pdev,
+ 	pci_release_regions(pdev);
+  err_disable_dev:
+ 	pci_disable_device(pdev);
++err_put:
+ 	pci_dev_put(pdev);
+ 	return err;
+ }
+diff --git a/drivers/parisc/led.c b/drivers/parisc/led.c
+index 0c6e8b44b4ed..c60b465f6fe4 100644
+--- a/drivers/parisc/led.c
++++ b/drivers/parisc/led.c
+@@ -568,6 +568,9 @@ int __init register_led_driver(int model, unsigned long cmd_reg, unsigned long d
+ 		break;
+ 
+ 	case DISPLAY_MODEL_LASI:
++		/* Skip LED registration when running in QEMU */
++		if (running_on_qemu)
++			return 1;
+ 		LED_DATA_REG = data_reg;
+ 		led_func_ptr = led_LASI_driver;
+ 		printk(KERN_INFO "LED display at %lx registered\n", LED_DATA_REG);
+diff --git a/drivers/pci/controller/pcie-rcar.c b/drivers/pci/controller/pcie-rcar.c
+index c8febb009454..6a4e435bd35f 100644
+--- a/drivers/pci/controller/pcie-rcar.c
++++ b/drivers/pci/controller/pcie-rcar.c
+@@ -46,6 +46,7 @@
+ 
+ /* Transfer control */
+ #define PCIETCTLR		0x02000
++#define  DL_DOWN		BIT(3)
+ #define  CFINIT			1
+ #define PCIETSTR		0x02004
+ #define  DATA_LINK_ACTIVE	1
+@@ -94,6 +95,7 @@
+ #define MACCTLR			0x011058
+ #define  SPEED_CHANGE		BIT(24)
+ #define  SCRAMBLE_DISABLE	BIT(27)
++#define PMSR			0x01105c
+ #define MACS2R			0x011078
+ #define MACCGSPSETR		0x011084
+ #define  SPCNGRSN		BIT(31)
+@@ -1130,6 +1132,7 @@ static int rcar_pcie_probe(struct platform_device *pdev)
+ 	pcie = pci_host_bridge_priv(bridge);
+ 
+ 	pcie->dev = dev;
++	platform_set_drvdata(pdev, pcie);
+ 
+ 	err = pci_parse_request_of_pci_ranges(dev, &pcie->resources, NULL);
+ 	if (err)
+@@ -1221,10 +1224,28 @@ err_free_bridge:
+ 	return err;
+ }
+ 
++static int rcar_pcie_resume_noirq(struct device *dev)
++{
++	struct rcar_pcie *pcie = dev_get_drvdata(dev);
++
++	if (rcar_pci_read_reg(pcie, PMSR) &&
++	    !(rcar_pci_read_reg(pcie, PCIETCTLR) & DL_DOWN))
++		return 0;
++
++	/* Re-establish the PCIe link */
++	rcar_pci_write_reg(pcie, CFINIT, PCIETCTLR);
++	return rcar_pcie_wait_for_dl(pcie);
++}
++
++static const struct dev_pm_ops rcar_pcie_pm_ops = {
++	.resume_noirq = rcar_pcie_resume_noirq,
++};
++
+ static struct platform_driver rcar_pcie_driver = {
+ 	.driver = {
+ 		.name = "rcar-pcie",
+ 		.of_match_table = rcar_pcie_of_match,
++		.pm = &rcar_pcie_pm_ops,
+ 		.suppress_bind_attrs = true,
+ 	},
+ 	.probe = rcar_pcie_probe,
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index d994839a3e24..9cb99380c61e 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -597,7 +597,7 @@ void pci_aer_clear_fatal_status(struct pci_dev *dev);
+ void pci_aer_clear_device_status(struct pci_dev *dev);
+ #else
+ static inline void pci_no_aer(void) { }
+-static inline int pci_aer_init(struct pci_dev *d) { return -ENODEV; }
++static inline void pci_aer_init(struct pci_dev *d) { }
+ static inline void pci_aer_exit(struct pci_dev *d) { }
+ static inline void pci_aer_clear_fatal_status(struct pci_dev *dev) { }
+ static inline void pci_aer_clear_device_status(struct pci_dev *dev) { }
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index 727e3c1ef9a4..38e7017478b5 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -196,6 +196,38 @@ static void pcie_clkpm_cap_init(struct pcie_link_state *link, int blacklist)
+ 	link->clkpm_capable = (blacklist) ? 0 : capable;
+ }
+ 
++static bool pcie_retrain_link(struct pcie_link_state *link)
++{
++	struct pci_dev *parent = link->pdev;
++	unsigned long start_jiffies;
++	u16 reg16;
++
++	pcie_capability_read_word(parent, PCI_EXP_LNKCTL, &reg16);
++	reg16 |= PCI_EXP_LNKCTL_RL;
++	pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
++	if (parent->clear_retrain_link) {
++		/*
++		 * Due to an erratum in some devices, the Retrain Link bit
++		 * needs to be cleared again manually to allow the link
++		 * training to succeed.
++		 */
++		reg16 &= ~PCI_EXP_LNKCTL_RL;
++		pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
++	}
++
++	/* Wait for link training to end. Break out after the timeout expires. */
++	start_jiffies = jiffies;
++	for (;;) {
++		pcie_capability_read_word(parent, PCI_EXP_LNKSTA, &reg16);
++		if (!(reg16 & PCI_EXP_LNKSTA_LT))
++			break;
++		if (time_after(jiffies, start_jiffies + LINK_RETRAIN_TIMEOUT))
++			break;
++		msleep(1);
++	}
++	return !(reg16 & PCI_EXP_LNKSTA_LT);
++}
++
+ /*
+  * pcie_aspm_configure_common_clock: check if the 2 ends of a link
+  *   could use common clock. If they are, configure them to use the
+@@ -205,7 +237,6 @@ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link)
+ {
+ 	int same_clock = 1;
+ 	u16 reg16, parent_reg, child_reg[8];
+-	unsigned long start_jiffies;
+ 	struct pci_dev *child, *parent = link->pdev;
+ 	struct pci_bus *linkbus = parent->subordinate;
+ 	/*
+@@ -263,21 +294,7 @@ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link)
+ 		reg16 &= ~PCI_EXP_LNKCTL_CCC;
+ 	pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
+ 
+-	/* Retrain link */
+-	reg16 |= PCI_EXP_LNKCTL_RL;
+-	pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
+-
+-	/* Wait for link training end. Break out after waiting for timeout */
+-	start_jiffies = jiffies;
+-	for (;;) {
+-		pcie_capability_read_word(parent, PCI_EXP_LNKSTA, &reg16);
+-		if (!(reg16 & PCI_EXP_LNKSTA_LT))
+-			break;
+-		if (time_after(jiffies, start_jiffies + LINK_RETRAIN_TIMEOUT))
+-			break;
+-		msleep(1);
+-	}
+-	if (!(reg16 & PCI_EXP_LNKSTA_LT))
++	if (pcie_retrain_link(link))
+ 		return;
+ 
+ 	/* Training failed. Restore common clock configurations */
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 7e12d0163863..eea78477d311 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -586,16 +586,9 @@ static void pci_release_host_bridge_dev(struct device *dev)
+ 	kfree(to_pci_host_bridge(dev));
+ }
+ 
+-struct pci_host_bridge *pci_alloc_host_bridge(size_t priv)
++static void pci_init_host_bridge(struct pci_host_bridge *bridge)
+ {
+-	struct pci_host_bridge *bridge;
+-
+-	bridge = kzalloc(sizeof(*bridge) + priv, GFP_KERNEL);
+-	if (!bridge)
+-		return NULL;
+-
+ 	INIT_LIST_HEAD(&bridge->windows);
+-	bridge->dev.release = pci_release_host_bridge_dev;
+ 
+ 	/*
+ 	 * We assume we can manage these PCIe features.  Some systems may
+@@ -608,6 +601,18 @@ struct pci_host_bridge *pci_alloc_host_bridge(size_t priv)
+ 	bridge->native_shpc_hotplug = 1;
+ 	bridge->native_pme = 1;
+ 	bridge->native_ltr = 1;
++}
++
++struct pci_host_bridge *pci_alloc_host_bridge(size_t priv)
++{
++	struct pci_host_bridge *bridge;
++
++	bridge = kzalloc(sizeof(*bridge) + priv, GFP_KERNEL);
++	if (!bridge)
++		return NULL;
++
++	pci_init_host_bridge(bridge);
++	bridge->dev.release = pci_release_host_bridge_dev;
+ 
+ 	return bridge;
+ }
+@@ -622,7 +627,7 @@ struct pci_host_bridge *devm_pci_alloc_host_bridge(struct device *dev,
+ 	if (!bridge)
+ 		return NULL;
+ 
+-	INIT_LIST_HEAD(&bridge->windows);
++	pci_init_host_bridge(bridge);
+ 	bridge->dev.release = devm_pci_release_host_bridge_dev;
+ 
+ 	return bridge;
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index a077f67fe1da..cc616a5f6a8f 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -2245,6 +2245,23 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x10f1, quirk_disable_aspm_l0s);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x10f4, quirk_disable_aspm_l0s);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1508, quirk_disable_aspm_l0s);
+ 
++/*
++ * Some Pericom PCIe-to-PCI bridges in reverse mode need the PCIe Retrain
++ * Link bit cleared after starting the link retrain process to allow this
++ * process to finish.
++ *
++ * Affected devices: PI7C9X110, PI7C9X111SL, PI7C9X130.  See also the
++ * Pericom Errata Sheet PI7C9X111SLB_errata_rev1.2_102711.pdf.
++ */
++static void quirk_enable_clear_retrain_link(struct pci_dev *dev)
++{
++	dev->clear_retrain_link = 1;
++	pci_info(dev, "Enable PCIe Retrain Link quirk\n");
++}
++DECLARE_PCI_FIXUP_HEADER(0x12d8, 0xe110, quirk_enable_clear_retrain_link);
++DECLARE_PCI_FIXUP_HEADER(0x12d8, 0xe111, quirk_enable_clear_retrain_link);
++DECLARE_PCI_FIXUP_HEADER(0x12d8, 0xe130, quirk_enable_clear_retrain_link);
++
+ static void fixup_rev1_53c810(struct pci_dev *dev)
+ {
+ 	u32 class = dev->class;
+@@ -3408,6 +3425,7 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0030, quirk_no_bus_reset);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0032, quirk_no_bus_reset);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x003c, quirk_no_bus_reset);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0033, quirk_no_bus_reset);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0034, quirk_no_bus_reset);
+ 
+ /*
+  * Root ports on some Cavium CN8xxx chips do not successfully complete a bus
+@@ -4905,6 +4923,7 @@ static void quirk_no_ats(struct pci_dev *pdev)
+ 
+ /* AMD Stoney platform GPU */
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x98e4, quirk_no_ats);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x6900, quirk_no_ats);
+ #endif /* CONFIG_PCI_ATS */
+ 
+ /* Freescale PCIe doesn't support MSI in RC mode */
+@@ -5122,3 +5141,61 @@ SWITCHTEC_QUIRK(0x8573);  /* PFXI 48XG3 */
+ SWITCHTEC_QUIRK(0x8574);  /* PFXI 64XG3 */
+ SWITCHTEC_QUIRK(0x8575);  /* PFXI 80XG3 */
+ SWITCHTEC_QUIRK(0x8576);  /* PFXI 96XG3 */
++
++/*
++ * On Lenovo Thinkpad P50 SKUs with a Nvidia Quadro M1000M, the BIOS does
++ * not always reset the secondary Nvidia GPU between reboots if the system
++ * is configured to use Hybrid Graphics mode.  This results in the GPU
++ * being left in whatever state it was in during the *previous* boot, which
++ * causes spurious interrupts from the GPU, which in turn causes us to
++ * disable the wrong IRQ and end up breaking the touchpad.  Unsurprisingly,
++ * this also completely breaks nouveau.
++ *
++ * Luckily, it seems a simple reset of the Nvidia GPU brings it back to a
++ * clean state and fixes all these issues.
++ *
++ * When the machine is configured in Dedicated display mode, the issue
++ * doesn't occur.  Fortunately the GPU advertises NoReset+ when in this
++ * mode, so we can detect that and avoid resetting it.
++ */
++static void quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev)
++{
++	void __iomem *map;
++	int ret;
++
++	if (pdev->subsystem_vendor != PCI_VENDOR_ID_LENOVO ||
++	    pdev->subsystem_device != 0x222e ||
++	    !pdev->reset_fn)
++		return;
++
++	if (pci_enable_device_mem(pdev))
++		return;
++
++	/*
++	 * Based on nvkm_device_ctor() in
++	 * drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
++	 */
++	map = pci_iomap(pdev, 0, 0x23000);
++	if (!map) {
++		pci_err(pdev, "Can't map MMIO space\n");
++		goto out_disable;
++	}
++
++	/*
++	 * Make sure the GPU looks like it's been POSTed before resetting
++	 * it.
++	 */
++	if (ioread32(map + 0x2240c) & 0x2) {
++		pci_info(pdev, FW_BUG "GPU left initialized by EFI, resetting\n");
++		ret = pci_reset_function(pdev);
++		if (ret < 0)
++			pci_err(pdev, "Failed to reset GPU: %d\n", ret);
++	}
++
++	iounmap(map);
++out_disable:
++	pci_disable_device(pdev);
++}
++DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1,
++			      PCI_CLASS_DISPLAY_VGA, 8,
++			      quirk_reset_lenovo_thinkpad_p50_nvgpu);
+diff --git a/drivers/phy/ti/phy-ti-pipe3.c b/drivers/phy/ti/phy-ti-pipe3.c
+index 68ce4a082b9b..693acc167351 100644
+--- a/drivers/phy/ti/phy-ti-pipe3.c
++++ b/drivers/phy/ti/phy-ti-pipe3.c
+@@ -303,7 +303,7 @@ static void ti_pipe3_calibrate(struct ti_pipe3 *phy)
+ 
+ 	val = ti_pipe3_readl(phy->phy_rx, PCIEPHYRX_ANA_PROGRAMMABILITY);
+ 	val &= ~(INTERFACE_MASK | LOSD_MASK | MEM_PLLDIV);
+-	val = (0x1 << INTERFACE_SHIFT | 0xA << LOSD_SHIFT);
++	val |= (0x1 << INTERFACE_SHIFT | 0xA << LOSD_SHIFT);
+ 	ti_pipe3_writel(phy->phy_rx, PCIEPHYRX_ANA_PROGRAMMABILITY, val);
+ 
+ 	val = ti_pipe3_readl(phy->phy_rx, PCIEPHYRX_DIGITAL_MODES);
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 68473d0cc57e..968dcd9d7a07 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -3322,15 +3322,12 @@ static int regulator_set_voltage_unlocked(struct regulator *regulator,
+ 
+ 	/* for not coupled regulators this will just set the voltage */
+ 	ret = regulator_balance_voltage(rdev, state);
+-	if (ret < 0)
+-		goto out2;
++	if (ret < 0) {
++		voltage->min_uV = old_min_uV;
++		voltage->max_uV = old_max_uV;
++	}
+ 
+ out:
+-	return 0;
+-out2:
+-	voltage->min_uV = old_min_uV;
+-	voltage->max_uV = old_max_uV;
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/staging/media/imx/imx-ic-common.c b/drivers/staging/media/imx/imx-ic-common.c
+index 765919487a73..90a926891eb9 100644
+--- a/drivers/staging/media/imx/imx-ic-common.c
++++ b/drivers/staging/media/imx/imx-ic-common.c
+@@ -26,7 +26,7 @@ static struct imx_ic_ops *ic_ops[IC_NUM_OPS] = {
+ 
+ static int imx_ic_probe(struct platform_device *pdev)
+ {
+-	struct imx_media_internal_sd_platformdata *pdata;
++	struct imx_media_ipu_internal_sd_pdata *pdata;
+ 	struct imx_ic_priv *priv;
+ 	int ret;
+ 
+diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
+index 3b7517348666..41965d8b56c4 100644
+--- a/drivers/staging/media/imx/imx-media-csi.c
++++ b/drivers/staging/media/imx/imx-media-csi.c
+@@ -154,9 +154,10 @@ static inline bool requires_passthrough(struct v4l2_fwnode_endpoint *ep,
+ /*
+  * Parses the fwnode endpoint from the source pad of the entity
+  * connected to this CSI. This will either be the entity directly
+- * upstream from the CSI-2 receiver, or directly upstream from the
+- * video mux. The endpoint is needed to determine the bus type and
+- * bus config coming into the CSI.
++ * upstream from the CSI-2 receiver, directly upstream from the
++ * video mux, or directly upstream from the CSI itself. The endpoint
++ * is needed to determine the bus type and bus config coming into
++ * the CSI.
+  */
+ static int csi_get_upstream_endpoint(struct csi_priv *priv,
+ 				     struct v4l2_fwnode_endpoint *ep)
+@@ -172,7 +173,8 @@ static int csi_get_upstream_endpoint(struct csi_priv *priv,
+ 	if (!priv->src_sd)
+ 		return -EPIPE;
+ 
+-	src = &priv->src_sd->entity;
++	sd = priv->src_sd;
++	src = &sd->entity;
+ 
+ 	if (src->function == MEDIA_ENT_F_VID_MUX) {
+ 		/*
+@@ -186,6 +188,14 @@ static int csi_get_upstream_endpoint(struct csi_priv *priv,
+ 			src = &sd->entity;
+ 	}
+ 
++	/*
++	 * If the source is neither the video mux nor the CSI-2 receiver,
++	 * get the source pad directly upstream from the CSI itself.
++	 */
++	if (src->function != MEDIA_ENT_F_VID_MUX &&
++	    sd->grp_id != IMX_MEDIA_GRP_ID_CSI2)
++		src = &priv->sd.entity;
++
+ 	/* get source pad of entity directly upstream from src */
+ 	pad = imx_media_find_upstream_pad(priv->md, src, 0);
+ 	if (IS_ERR(pad))
+diff --git a/drivers/staging/media/imx/imx-media-dev.c b/drivers/staging/media/imx/imx-media-dev.c
+index 28a3d23aad5b..10a63a4fa90b 100644
+--- a/drivers/staging/media/imx/imx-media-dev.c
++++ b/drivers/staging/media/imx/imx-media-dev.c
+@@ -477,13 +477,6 @@ static int imx_media_probe(struct platform_device *pdev)
+ 		goto cleanup;
+ 	}
+ 
+-	ret = imx_media_add_internal_subdevs(imxmd);
+-	if (ret) {
+-		v4l2_err(&imxmd->v4l2_dev,
+-			 "add_internal_subdevs failed with %d\n", ret);
+-		goto cleanup;
+-	}
+-
+ 	ret = imx_media_dev_notifier_register(imxmd);
+ 	if (ret)
+ 		goto del_int;
+@@ -491,7 +484,7 @@ static int imx_media_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ del_int:
+-	imx_media_remove_internal_subdevs(imxmd);
++	imx_media_remove_ipu_internal_subdevs(imxmd);
+ cleanup:
+ 	v4l2_async_notifier_cleanup(&imxmd->notifier);
+ 	v4l2_device_unregister(&imxmd->v4l2_dev);
+@@ -508,7 +501,7 @@ static int imx_media_remove(struct platform_device *pdev)
+ 	v4l2_info(&imxmd->v4l2_dev, "Removing imx-media\n");
+ 
+ 	v4l2_async_notifier_unregister(&imxmd->notifier);
+-	imx_media_remove_internal_subdevs(imxmd);
++	imx_media_remove_ipu_internal_subdevs(imxmd);
+ 	v4l2_async_notifier_cleanup(&imxmd->notifier);
+ 	media_device_unregister(&imxmd->md);
+ 	v4l2_device_unregister(&imxmd->v4l2_dev);
+diff --git a/drivers/staging/media/imx/imx-media-internal-sd.c b/drivers/staging/media/imx/imx-media-internal-sd.c
+index 5e10d95e5529..dc510dcfe160 100644
+--- a/drivers/staging/media/imx/imx-media-internal-sd.c
++++ b/drivers/staging/media/imx/imx-media-internal-sd.c
+@@ -1,7 +1,7 @@
+ /*
+  * Media driver for Freescale i.MX5/6 SOC
+  *
+- * Adds the internal subdevices and the media links between them.
++ * Adds the IPU internal subdevices and the media links between them.
+  *
+  * Copyright (c) 2016 Mentor Graphics Inc.
+  *
+@@ -192,7 +192,7 @@ static struct v4l2_subdev *find_sink(struct imx_media_dev *imxmd,
+ 
+ 	/*
+ 	 * retrieve IPU id from subdev name, note: can't get this from
+-	 * struct imx_media_internal_sd_platformdata because if src is
++	 * struct imx_media_ipu_internal_sd_pdata because if src is
+ 	 * a CSI, it has different struct ipu_client_platformdata which
+ 	 * does not contain IPU id.
+ 	 */
+@@ -270,7 +270,7 @@ static int add_internal_subdev(struct imx_media_dev *imxmd,
+ 			       const struct internal_subdev *isd,
+ 			       int ipu_id)
+ {
+-	struct imx_media_internal_sd_platformdata pdata;
++	struct imx_media_ipu_internal_sd_pdata pdata;
+ 	struct platform_device_info pdevinfo = {};
+ 	struct platform_device *pdev;
+ 
+@@ -298,13 +298,14 @@ static int add_internal_subdev(struct imx_media_dev *imxmd,
+ }
+ 
+ /* adds the internal subdevs in one ipu */
+-static int add_ipu_internal_subdevs(struct imx_media_dev *imxmd, int ipu_id)
++int imx_media_add_ipu_internal_subdevs(struct imx_media_dev *imxmd,
++				       int ipu_id)
+ {
+ 	enum isd_enum i;
++	int ret;
+ 
+ 	for (i = 0; i < num_isd; i++) {
+ 		const struct internal_subdev *isd = &int_subdev[i];
+-		int ret;
+ 
+ 		/*
+ 		 * the CSIs are represented in the device-tree, so those
+@@ -322,32 +323,17 @@ static int add_ipu_internal_subdevs(struct imx_media_dev *imxmd, int ipu_id)
+ 		}
+ 
+ 		if (ret)
+-			return ret;
++			goto remove;
+ 	}
+ 
+ 	return 0;
+-}
+-
+-int imx_media_add_internal_subdevs(struct imx_media_dev *imxmd)
+-{
+-	int ret;
+-
+-	ret = add_ipu_internal_subdevs(imxmd, 0);
+-	if (ret)
+-		goto remove;
+-
+-	ret = add_ipu_internal_subdevs(imxmd, 1);
+-	if (ret)
+-		goto remove;
+-
+-	return 0;
+ 
+ remove:
+-	imx_media_remove_internal_subdevs(imxmd);
++	imx_media_remove_ipu_internal_subdevs(imxmd);
+ 	return ret;
+ }
+ 
+-void imx_media_remove_internal_subdevs(struct imx_media_dev *imxmd)
++void imx_media_remove_ipu_internal_subdevs(struct imx_media_dev *imxmd)
+ {
+ 	struct imx_media_async_subdev *imxasd;
+ 	struct v4l2_async_subdev *asd;
+diff --git a/drivers/staging/media/imx/imx-media-of.c b/drivers/staging/media/imx/imx-media-of.c
+index 03446335ac03..12383f4785ad 100644
+--- a/drivers/staging/media/imx/imx-media-of.c
++++ b/drivers/staging/media/imx/imx-media-of.c
+@@ -23,36 +23,25 @@
+ int imx_media_of_add_csi(struct imx_media_dev *imxmd,
+ 			 struct device_node *csi_np)
+ {
+-	int ret;
+-
+ 	if (!of_device_is_available(csi_np)) {
+ 		dev_dbg(imxmd->md.dev, "%s: %pOFn not enabled\n", __func__,
+ 			csi_np);
+-		/* unavailable is not an error */
+-		return 0;
++		return -ENODEV;
+ 	}
+ 
+ 	/* add CSI fwnode to async notifier */
+-	ret = imx_media_add_async_subdev(imxmd, of_fwnode_handle(csi_np), NULL);
+-	if (ret) {
+-		if (ret == -EEXIST) {
+-			/* already added, everything is fine */
+-			return 0;
+-		}
+-
+-		/* other error, can't continue */
+-		return ret;
+-	}
+-
+-	return 0;
++	return imx_media_add_async_subdev(imxmd, of_fwnode_handle(csi_np),
++					  NULL);
+ }
+ EXPORT_SYMBOL_GPL(imx_media_of_add_csi);
+ 
+ int imx_media_add_of_subdevs(struct imx_media_dev *imxmd,
+ 			     struct device_node *np)
+ {
++	bool ipu_found[2] = {false, false};
+ 	struct device_node *csi_np;
+ 	int i, ret;
++	u32 ipu_id;
+ 
+ 	for (i = 0; ; i++) {
+ 		csi_np = of_parse_phandle(np, "ports", i);
+@@ -60,12 +49,43 @@ int imx_media_add_of_subdevs(struct imx_media_dev *imxmd,
+ 			break;
+ 
+ 		ret = imx_media_of_add_csi(imxmd, csi_np);
+-		of_node_put(csi_np);
+-		if (ret)
+-			return ret;
++		if (ret) {
++			/* unavailable or already added is not an error */
++			if (ret == -ENODEV || ret == -EEXIST) {
++				of_node_put(csi_np);
++				continue;
++			}
++
++			/* other error, can't continue */
++			goto err_out;
++		}
++
++		ret = of_alias_get_id(csi_np->parent, "ipu");
++		if (ret < 0)
++			goto err_out;
++		if (ret > 1) {
++			ret = -EINVAL;
++			goto err_out;
++		}
++
++		ipu_id = ret;
++
++		if (!ipu_found[ipu_id]) {
++			ret = imx_media_add_ipu_internal_subdevs(imxmd,
++								 ipu_id);
++			if (ret)
++				goto err_out;
++		}
++
++		ipu_found[ipu_id] = true;
+ 	}
+ 
+ 	return 0;
++
++err_out:
++	imx_media_remove_ipu_internal_subdevs(imxmd);
++	of_node_put(csi_np);
++	return ret;
+ }
+ 
+ /*
+@@ -145,15 +165,18 @@ int imx_media_create_csi_of_links(struct imx_media_dev *imxmd,
+ 				  struct v4l2_subdev *csi)
+ {
+ 	struct device_node *csi_np = csi->dev->of_node;
+-	struct fwnode_handle *fwnode, *csi_ep;
+-	struct v4l2_fwnode_link link;
+ 	struct device_node *ep;
+-	int ret;
+-
+-	link.local_node = of_fwnode_handle(csi_np);
+-	link.local_port = CSI_SINK_PAD;
+ 
+ 	for_each_child_of_node(csi_np, ep) {
++		struct fwnode_handle *fwnode, *csi_ep;
++		struct v4l2_fwnode_link link;
++		int ret;
++
++		memset(&link, 0, sizeof(link));
++
++		link.local_node = of_fwnode_handle(csi_np);
++		link.local_port = CSI_SINK_PAD;
++
+ 		csi_ep = of_fwnode_handle(ep);
+ 
+ 		fwnode = fwnode_graph_get_remote_endpoint(csi_ep);
+diff --git a/drivers/staging/media/imx/imx-media-vdic.c b/drivers/staging/media/imx/imx-media-vdic.c
+index 2808662e2597..8a9af4688fd4 100644
+--- a/drivers/staging/media/imx/imx-media-vdic.c
++++ b/drivers/staging/media/imx/imx-media-vdic.c
+@@ -934,7 +934,7 @@ static const struct v4l2_subdev_internal_ops vdic_internal_ops = {
+ 
+ static int imx_vdic_probe(struct platform_device *pdev)
+ {
+-	struct imx_media_internal_sd_platformdata *pdata;
++	struct imx_media_ipu_internal_sd_pdata *pdata;
+ 	struct vdic_priv *priv;
+ 	int ret;
+ 
+diff --git a/drivers/staging/media/imx/imx-media.h b/drivers/staging/media/imx/imx-media.h
+index ae964c8d5be1..dd603a6b3a70 100644
+--- a/drivers/staging/media/imx/imx-media.h
++++ b/drivers/staging/media/imx/imx-media.h
+@@ -115,7 +115,7 @@ struct imx_media_pad_vdev {
+ 	struct list_head list;
+ };
+ 
+-struct imx_media_internal_sd_platformdata {
++struct imx_media_ipu_internal_sd_pdata {
+ 	char sd_name[V4L2_SUBDEV_NAME_SIZE];
+ 	u32 grp_id;
+ 	int ipu_id;
+@@ -252,10 +252,11 @@ struct imx_media_fim *imx_media_fim_init(struct v4l2_subdev *sd);
+ void imx_media_fim_free(struct imx_media_fim *fim);
+ 
+ /* imx-media-internal-sd.c */
+-int imx_media_add_internal_subdevs(struct imx_media_dev *imxmd);
++int imx_media_add_ipu_internal_subdevs(struct imx_media_dev *imxmd,
++				       int ipu_id);
+ int imx_media_create_ipu_internal_links(struct imx_media_dev *imxmd,
+ 					struct v4l2_subdev *sd);
+-void imx_media_remove_internal_subdevs(struct imx_media_dev *imxmd);
++void imx_media_remove_ipu_internal_subdevs(struct imx_media_dev *imxmd);
+ 
+ /* imx-media-of.c */
+ int imx_media_add_of_subdevs(struct imx_media_dev *dev,
+diff --git a/drivers/staging/media/imx/imx7-media-csi.c b/drivers/staging/media/imx/imx7-media-csi.c
+index 3fba7c27c0ec..1ba62fcdcae8 100644
+--- a/drivers/staging/media/imx/imx7-media-csi.c
++++ b/drivers/staging/media/imx/imx7-media-csi.c
+@@ -1271,7 +1271,7 @@ static int imx7_csi_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, &csi->sd);
+ 
+ 	ret = imx_media_of_add_csi(imxmd, node);
+-	if (ret < 0)
++	if (ret < 0 && ret != -ENODEV && ret != -EEXIST)
+ 		goto cleanup;
+ 
+ 	ret = imx_media_dev_notifier_register(imxmd);
+diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
+index ba906876cc45..fd02e8a4841d 100644
+--- a/drivers/video/fbdev/efifb.c
++++ b/drivers/video/fbdev/efifb.c
+@@ -476,8 +476,12 @@ static int efifb_probe(struct platform_device *dev)
+ 		 * If the UEFI memory map covers the efifb region, we may only
+ 		 * remap it using the attributes the memory map prescribes.
+ 		 */
+-		mem_flags |= EFI_MEMORY_WT | EFI_MEMORY_WB;
+-		mem_flags &= md.attribute;
++		md.attribute &= EFI_MEMORY_UC | EFI_MEMORY_WC |
++				EFI_MEMORY_WT | EFI_MEMORY_WB;
++		if (md.attribute) {
++			mem_flags |= EFI_MEMORY_WT | EFI_MEMORY_WB;
++			mem_flags &= md.attribute;
++		}
+ 	}
+ 	if (mem_flags & EFI_MEMORY_WC)
+ 		info->screen_base = ioremap_wc(efifb_fix.smem_start,
+diff --git a/drivers/video/fbdev/sm712.h b/drivers/video/fbdev/sm712.h
+index aad1cc4be34a..c7ebf03b8d53 100644
+--- a/drivers/video/fbdev/sm712.h
++++ b/drivers/video/fbdev/sm712.h
+@@ -15,14 +15,10 @@
+ 
+ #define FB_ACCEL_SMI_LYNX 88
+ 
+-#define SCREEN_X_RES      1024
+-#define SCREEN_Y_RES      600
+-#define SCREEN_BPP        16
+-
+-/*Assume SM712 graphics chip has 4MB VRAM */
+-#define SM712_VIDEOMEMORYSIZE	  0x00400000
+-/*Assume SM722 graphics chip has 8MB VRAM */
+-#define SM722_VIDEOMEMORYSIZE	  0x00800000
++#define SCREEN_X_RES          1024
++#define SCREEN_Y_RES_PC       768
++#define SCREEN_Y_RES_NETBOOK  600
++#define SCREEN_BPP            16
+ 
+ #define dac_reg	(0x3c8)
+ #define dac_val	(0x3c9)
+diff --git a/drivers/video/fbdev/sm712fb.c b/drivers/video/fbdev/sm712fb.c
+index 502d0de2feec..f1dcc6766d1e 100644
+--- a/drivers/video/fbdev/sm712fb.c
++++ b/drivers/video/fbdev/sm712fb.c
+@@ -530,6 +530,65 @@ static const struct modeinit vgamode[] = {
+ 			0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x15, 0x03,
+ 		},
+ 	},
++	{	/*  1024 x 768  16Bpp  60Hz */
++		1024, 768, 16, 60,
++		/*  Init_MISC */
++		0xEB,
++		{	/*  Init_SR0_SR4 */
++			0x03, 0x01, 0x0F, 0x03, 0x0E,
++		},
++		{	/*  Init_SR10_SR24 */
++			0xF3, 0xB6, 0xC0, 0xDD, 0x00, 0x0E, 0x17, 0x2C,
++			0x99, 0x02, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
++			0xC4, 0x30, 0x02, 0x01, 0x01,
++		},
++		{	/*  Init_SR30_SR75 */
++			0x38, 0x03, 0x20, 0x09, 0xC0, 0x3A, 0x3A, 0x3A,
++			0x3A, 0x3A, 0x3A, 0x3A, 0x00, 0x00, 0x03, 0xFF,
++			0x00, 0xFC, 0x00, 0x00, 0x20, 0x18, 0x00, 0xFC,
++			0x20, 0x0C, 0x44, 0x20, 0x00, 0x00, 0x00, 0x3A,
++			0x06, 0x68, 0xA7, 0x7F, 0x83, 0x24, 0xFF, 0x03,
++			0x0F, 0x60, 0x59, 0x3A, 0x3A, 0x00, 0x00, 0x3A,
++			0x01, 0x80, 0x7E, 0x1A, 0x1A, 0x00, 0x00, 0x00,
++			0x50, 0x03, 0x74, 0x14, 0x3B, 0x0D, 0x09, 0x02,
++			0x04, 0x45, 0x30, 0x30, 0x40, 0x20,
++		},
++		{	/*  Init_SR80_SR93 */
++			0xFF, 0x07, 0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0x3A,
++			0xF7, 0x00, 0x00, 0x00, 0xFF, 0xFF, 0x3A, 0x3A,
++			0x00, 0x00, 0x00, 0x00,
++		},
++		{	/*  Init_SRA0_SRAF */
++			0x00, 0xFB, 0x9F, 0x01, 0x00, 0xED, 0xED, 0xED,
++			0x7B, 0xFB, 0xFF, 0xFF, 0x97, 0xEF, 0xBF, 0xDF,
++		},
++		{	/*  Init_GR00_GR08 */
++			0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x05, 0x0F,
++			0xFF,
++		},
++		{	/*  Init_AR00_AR14 */
++			0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
++			0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F,
++			0x41, 0x00, 0x0F, 0x00, 0x00,
++		},
++		{	/*  Init_CR00_CR18 */
++			0xA3, 0x7F, 0x7F, 0x00, 0x85, 0x16, 0x24, 0xF5,
++			0x00, 0x60, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++			0x03, 0x09, 0xFF, 0x80, 0x40, 0xFF, 0x00, 0xE3,
++			0xFF,
++		},
++		{	/*  Init_CR30_CR4D */
++			0x00, 0x00, 0x00, 0x00, 0x00, 0x80, 0x02, 0x20,
++			0x00, 0x00, 0x00, 0x40, 0x00, 0xFF, 0xBF, 0xFF,
++			0xA3, 0x7F, 0x00, 0x86, 0x15, 0x24, 0xFF, 0x00,
++			0x01, 0x07, 0xE5, 0x20, 0x7F, 0xFF,
++		},
++		{	/*  Init_CR90_CRA7 */
++			0x55, 0xD9, 0x5D, 0xE1, 0x86, 0x1B, 0x8E, 0x26,
++			0xDA, 0x8D, 0xDE, 0x94, 0x00, 0x00, 0x18, 0x00,
++			0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x15, 0x03,
++		},
++	},
+ 	{	/*  mode#5: 1024 x 768  24Bpp  60Hz */
+ 		1024, 768, 24, 60,
+ 		/*  Init_MISC */
+@@ -827,67 +886,80 @@ static inline unsigned int chan_to_field(unsigned int chan,
+ 
+ static int smtc_blank(int blank_mode, struct fb_info *info)
+ {
++	struct smtcfb_info *sfb = info->par;
++
+ 	/* clear DPMS setting */
+ 	switch (blank_mode) {
+ 	case FB_BLANK_UNBLANK:
+ 		/* Screen On: HSync: On, VSync : On */
++
++		switch (sfb->chip_id) {
++		case 0x710:
++		case 0x712:
++			smtc_seqw(0x6a, 0x16);
++			smtc_seqw(0x6b, 0x02);
++			break;
++		case 0x720:
++			smtc_seqw(0x6a, 0x0d);
++			smtc_seqw(0x6b, 0x02);
++			break;
++		}
++
++		smtc_seqw(0x23, (smtc_seqr(0x23) & (~0xc0)));
+ 		smtc_seqw(0x01, (smtc_seqr(0x01) & (~0x20)));
+-		smtc_seqw(0x6a, 0x16);
+-		smtc_seqw(0x6b, 0x02);
+ 		smtc_seqw(0x21, (smtc_seqr(0x21) & 0x77));
+ 		smtc_seqw(0x22, (smtc_seqr(0x22) & (~0x30)));
+-		smtc_seqw(0x23, (smtc_seqr(0x23) & (~0xc0)));
+-		smtc_seqw(0x24, (smtc_seqr(0x24) | 0x01));
+ 		smtc_seqw(0x31, (smtc_seqr(0x31) | 0x03));
++		smtc_seqw(0x24, (smtc_seqr(0x24) | 0x01));
+ 		break;
+ 	case FB_BLANK_NORMAL:
+ 		/* Screen Off: HSync: On, VSync : On   Soft blank */
++		smtc_seqw(0x24, (smtc_seqr(0x24) | 0x01));
++		smtc_seqw(0x31, ((smtc_seqr(0x31) & (~0x07)) | 0x00));
++		smtc_seqw(0x23, (smtc_seqr(0x23) & (~0xc0)));
+ 		smtc_seqw(0x01, (smtc_seqr(0x01) & (~0x20)));
++		smtc_seqw(0x22, (smtc_seqr(0x22) & (~0x30)));
+ 		smtc_seqw(0x6a, 0x16);
+ 		smtc_seqw(0x6b, 0x02);
+-		smtc_seqw(0x22, (smtc_seqr(0x22) & (~0x30)));
+-		smtc_seqw(0x23, (smtc_seqr(0x23) & (~0xc0)));
+-		smtc_seqw(0x24, (smtc_seqr(0x24) | 0x01));
+-		smtc_seqw(0x31, ((smtc_seqr(0x31) & (~0x07)) | 0x00));
+ 		break;
+ 	case FB_BLANK_VSYNC_SUSPEND:
+ 		/* Screen On: HSync: On, VSync : Off */
++		smtc_seqw(0x24, (smtc_seqr(0x24) & (~0x01)));
++		smtc_seqw(0x31, ((smtc_seqr(0x31) & (~0x07)) | 0x00));
++		smtc_seqw(0x23, ((smtc_seqr(0x23) & (~0xc0)) | 0x20));
+ 		smtc_seqw(0x01, (smtc_seqr(0x01) | 0x20));
+-		smtc_seqw(0x20, (smtc_seqr(0x20) & (~0xB0)));
+-		smtc_seqw(0x6a, 0x0c);
+-		smtc_seqw(0x6b, 0x02);
+ 		smtc_seqw(0x21, (smtc_seqr(0x21) | 0x88));
++		smtc_seqw(0x20, (smtc_seqr(0x20) & (~0xB0)));
+ 		smtc_seqw(0x22, ((smtc_seqr(0x22) & (~0x30)) | 0x20));
+-		smtc_seqw(0x23, ((smtc_seqr(0x23) & (~0xc0)) | 0x20));
+-		smtc_seqw(0x24, (smtc_seqr(0x24) & (~0x01)));
+-		smtc_seqw(0x31, ((smtc_seqr(0x31) & (~0x07)) | 0x00));
+ 		smtc_seqw(0x34, (smtc_seqr(0x34) | 0x80));
++		smtc_seqw(0x6a, 0x0c);
++		smtc_seqw(0x6b, 0x02);
+ 		break;
+ 	case FB_BLANK_HSYNC_SUSPEND:
+ 		/* Screen On: HSync: Off, VSync : On */
++		smtc_seqw(0x24, (smtc_seqr(0x24) & (~0x01)));
++		smtc_seqw(0x31, ((smtc_seqr(0x31) & (~0x07)) | 0x00));
++		smtc_seqw(0x23, ((smtc_seqr(0x23) & (~0xc0)) | 0xD8));
+ 		smtc_seqw(0x01, (smtc_seqr(0x01) | 0x20));
+-		smtc_seqw(0x20, (smtc_seqr(0x20) & (~0xB0)));
+-		smtc_seqw(0x6a, 0x0c);
+-		smtc_seqw(0x6b, 0x02);
+ 		smtc_seqw(0x21, (smtc_seqr(0x21) | 0x88));
++		smtc_seqw(0x20, (smtc_seqr(0x20) & (~0xB0)));
+ 		smtc_seqw(0x22, ((smtc_seqr(0x22) & (~0x30)) | 0x10));
+-		smtc_seqw(0x23, ((smtc_seqr(0x23) & (~0xc0)) | 0xD8));
+-		smtc_seqw(0x24, (smtc_seqr(0x24) & (~0x01)));
+-		smtc_seqw(0x31, ((smtc_seqr(0x31) & (~0x07)) | 0x00));
+ 		smtc_seqw(0x34, (smtc_seqr(0x34) | 0x80));
++		smtc_seqw(0x6a, 0x0c);
++		smtc_seqw(0x6b, 0x02);
+ 		break;
+ 	case FB_BLANK_POWERDOWN:
+ 		/* Screen On: HSync: Off, VSync : Off */
++		smtc_seqw(0x24, (smtc_seqr(0x24) & (~0x01)));
++		smtc_seqw(0x31, ((smtc_seqr(0x31) & (~0x07)) | 0x00));
++		smtc_seqw(0x23, ((smtc_seqr(0x23) & (~0xc0)) | 0xD8));
+ 		smtc_seqw(0x01, (smtc_seqr(0x01) | 0x20));
+-		smtc_seqw(0x20, (smtc_seqr(0x20) & (~0xB0)));
+-		smtc_seqw(0x6a, 0x0c);
+-		smtc_seqw(0x6b, 0x02);
+ 		smtc_seqw(0x21, (smtc_seqr(0x21) | 0x88));
++		smtc_seqw(0x20, (smtc_seqr(0x20) & (~0xB0)));
+ 		smtc_seqw(0x22, ((smtc_seqr(0x22) & (~0x30)) | 0x30));
+-		smtc_seqw(0x23, ((smtc_seqr(0x23) & (~0xc0)) | 0xD8));
+-		smtc_seqw(0x24, (smtc_seqr(0x24) & (~0x01)));
+-		smtc_seqw(0x31, ((smtc_seqr(0x31) & (~0x07)) | 0x00));
+ 		smtc_seqw(0x34, (smtc_seqr(0x34) | 0x80));
++		smtc_seqw(0x6a, 0x0c);
++		smtc_seqw(0x6b, 0x02);
+ 		break;
+ 	default:
+ 		return -EINVAL;
+@@ -1145,8 +1217,10 @@ static void sm7xx_set_timing(struct smtcfb_info *sfb)
+ 
+ 		/* init SEQ register SR30 - SR75 */
+ 		for (i = 0; i < SIZE_SR30_SR75; i++)
+-			if ((i + 0x30) != 0x62 && (i + 0x30) != 0x6a &&
+-			    (i + 0x30) != 0x6b)
++			if ((i + 0x30) != 0x30 && (i + 0x30) != 0x62 &&
++			    (i + 0x30) != 0x6a && (i + 0x30) != 0x6b &&
++			    (i + 0x30) != 0x70 && (i + 0x30) != 0x71 &&
++			    (i + 0x30) != 0x74 && (i + 0x30) != 0x75)
+ 				smtc_seqw(i + 0x30,
+ 					  vgamode[j].init_sr30_sr75[i]);
+ 
+@@ -1171,8 +1245,12 @@ static void sm7xx_set_timing(struct smtcfb_info *sfb)
+ 			smtc_crtcw(i, vgamode[j].init_cr00_cr18[i]);
+ 
+ 		/* init CRTC register CR30 - CR4D */
+-		for (i = 0; i < SIZE_CR30_CR4D; i++)
++		for (i = 0; i < SIZE_CR30_CR4D; i++) {
++			if ((i + 0x30) >= 0x3B && (i + 0x30) <= 0x3F)
++				/* side-effect, don't write to CR3B-CR3F */
++				continue;
+ 			smtc_crtcw(i + 0x30, vgamode[j].init_cr30_cr4d[i]);
++		}
+ 
+ 		/* init CRTC register CR90 - CRA7 */
+ 		for (i = 0; i < SIZE_CR90_CRA7; i++)
+@@ -1323,6 +1401,11 @@ static int smtc_map_smem(struct smtcfb_info *sfb,
+ {
+ 	sfb->fb->fix.smem_start = pci_resource_start(pdev, 0);
+ 
++	if (sfb->chip_id == 0x720)
++		/* on SM720, the framebuffer starts at the 1 MB offset */
++		sfb->fb->fix.smem_start += 0x00200000;
++
++	/* XXX: is it safe for SM720 on Big-Endian? */
+ 	if (sfb->fb->var.bits_per_pixel == 32)
+ 		sfb->fb->fix.smem_start += big_addr;
+ 
+@@ -1360,12 +1443,82 @@ static inline void sm7xx_init_hw(void)
+ 	outb_p(0x11, 0x3c5);
+ }
+ 
++static u_long sm7xx_vram_probe(struct smtcfb_info *sfb)
++{
++	u8 vram;
++
++	switch (sfb->chip_id) {
++	case 0x710:
++	case 0x712:
++		/*
++		 * Assume SM712 graphics chip has 4MB VRAM.
++		 *
++		 * FIXME: SM712 can have 2MB VRAM, which is used on earlier
++		 * laptops, such as IBM Thinkpad 240X. This driver would
++		 * probably crash on those machines. If anyone gets one of
++		 * those and is willing to help, run "git blame" and send me
++		 * an E-mail.
++		 */
++		return 0x00400000;
++	case 0x720:
++		outb_p(0x76, 0x3c4);
++		vram = inb_p(0x3c5) >> 6;
++
++		if (vram == 0x00)
++			return 0x00800000;  /* 8 MB */
++		else if (vram == 0x01)
++			return 0x01000000;  /* 16 MB */
++		else if (vram == 0x02)
++			return 0x00400000;  /* illegal value, fall back to 4 MB */
++		else if (vram == 0x03)
++			return 0x00400000;  /* 4 MB */
++	}
++	return 0;  /* unknown hardware */
++}
++
++static void sm7xx_resolution_probe(struct smtcfb_info *sfb)
++{
++	/* get mode parameter from smtc_scr_info */
++	if (smtc_scr_info.lfb_width != 0) {
++		sfb->fb->var.xres = smtc_scr_info.lfb_width;
++		sfb->fb->var.yres = smtc_scr_info.lfb_height;
++		sfb->fb->var.bits_per_pixel = smtc_scr_info.lfb_depth;
++		goto final;
++	}
++
++	/*
++	 * No parameter, default resolution is 1024x768-16.
++	 *
++	 * FIXME: earlier laptops, such as IBM Thinkpad 240X, has a 800x600
++	 * panel, also see the comments about Thinkpad 240X above.
++	 */
++	sfb->fb->var.xres = SCREEN_X_RES;
++	sfb->fb->var.yres = SCREEN_Y_RES_PC;
++	sfb->fb->var.bits_per_pixel = SCREEN_BPP;
++
++#ifdef CONFIG_MIPS
++	/*
++	 * Loongson MIPS netbooks, the original target platform of this
++	 * driver, use 1024x600 LCD panels, but nearly all old x86 laptops
++	 * have 1024x768 panels. Driving a 768-line panel with 600-line
++	 * timings would partially garble the display, so we don't want
++	 * that. But it's not possible to distinguish the two reliably.
++	 *
++	 * So we change the default to 768, but keep 600 as-is on MIPS.
++	 */
++	sfb->fb->var.yres = SCREEN_Y_RES_NETBOOK;
++#endif
++
++final:
++	big_pixel_depth(sfb->fb->var.bits_per_pixel, smtc_scr_info.lfb_depth);
++}
++
+ static int smtcfb_pci_probe(struct pci_dev *pdev,
+ 			    const struct pci_device_id *ent)
+ {
+ 	struct smtcfb_info *sfb;
+ 	struct fb_info *info;
+-	u_long smem_size = 0x00800000;	/* default 8MB */
++	u_long smem_size;
+ 	int err;
+ 	unsigned long mmio_base;
+ 
+@@ -1405,29 +1558,19 @@ static int smtcfb_pci_probe(struct pci_dev *pdev,
+ 
+ 	sm7xx_init_hw();
+ 
+-	/* get mode parameter from smtc_scr_info */
+-	if (smtc_scr_info.lfb_width != 0) {
+-		sfb->fb->var.xres = smtc_scr_info.lfb_width;
+-		sfb->fb->var.yres = smtc_scr_info.lfb_height;
+-		sfb->fb->var.bits_per_pixel = smtc_scr_info.lfb_depth;
+-	} else {
+-		/* default resolution 1024x600 16bit mode */
+-		sfb->fb->var.xres = SCREEN_X_RES;
+-		sfb->fb->var.yres = SCREEN_Y_RES;
+-		sfb->fb->var.bits_per_pixel = SCREEN_BPP;
+-	}
+-
+-	big_pixel_depth(sfb->fb->var.bits_per_pixel, smtc_scr_info.lfb_depth);
+ 	/* Map address and memory detection */
+ 	mmio_base = pci_resource_start(pdev, 0);
+ 	pci_read_config_byte(pdev, PCI_REVISION_ID, &sfb->chip_rev_id);
+ 
++	smem_size = sm7xx_vram_probe(sfb);
++	dev_info(&pdev->dev, "%lu MiB of VRAM detected.\n",
++					smem_size / 1048576);
++
+ 	switch (sfb->chip_id) {
+ 	case 0x710:
+ 	case 0x712:
+ 		sfb->fb->fix.mmio_start = mmio_base + 0x00400000;
+ 		sfb->fb->fix.mmio_len = 0x00400000;
+-		smem_size = SM712_VIDEOMEMORYSIZE;
+ 		sfb->lfb = ioremap(mmio_base, mmio_addr);
+ 		if (!sfb->lfb) {
+ 			dev_err(&pdev->dev,
+@@ -1459,8 +1602,7 @@ static int smtcfb_pci_probe(struct pci_dev *pdev,
+ 	case 0x720:
+ 		sfb->fb->fix.mmio_start = mmio_base;
+ 		sfb->fb->fix.mmio_len = 0x00200000;
+-		smem_size = SM722_VIDEOMEMORYSIZE;
+-		sfb->dp_regs = ioremap(mmio_base, 0x00a00000);
++		sfb->dp_regs = ioremap(mmio_base, 0x00200000 + smem_size);
+ 		sfb->lfb = sfb->dp_regs + 0x00200000;
+ 		sfb->mmio = (smtc_regbaseaddress =
+ 		    sfb->dp_regs + 0x000c0000);
+@@ -1477,6 +1619,9 @@ static int smtcfb_pci_probe(struct pci_dev *pdev,
+ 		goto failed_fb;
+ 	}
+ 
++	/* probe and decide resolution */
++	sm7xx_resolution_probe(sfb);
++
+ 	/* can support 32 bpp */
+ 	if (sfb->fb->var.bits_per_pixel == 15)
+ 		sfb->fb->var.bits_per_pixel = 16;
+@@ -1487,7 +1632,11 @@ static int smtcfb_pci_probe(struct pci_dev *pdev,
+ 	if (err)
+ 		goto failed;
+ 
+-	smtcfb_setmode(sfb);
++	/*
++	 * The screen would be temporarily garbled when sm712fb takes over
++	 * vesafb or VGA text mode. Zero the framebuffer.
++	 */
++	memset_io(sfb->lfb, 0, sfb->fb->fix.smem_len);
+ 
+ 	err = register_framebuffer(info);
+ 	if (err < 0)
+diff --git a/drivers/video/fbdev/udlfb.c b/drivers/video/fbdev/udlfb.c
+index 1d034dddc556..5a0d6fb02bbc 100644
+--- a/drivers/video/fbdev/udlfb.c
++++ b/drivers/video/fbdev/udlfb.c
+@@ -594,8 +594,7 @@ static int dlfb_render_hline(struct dlfb_data *dlfb, struct urb **urb_ptr,
+ 	return 0;
+ }
+ 
+-static int dlfb_handle_damage(struct dlfb_data *dlfb, int x, int y,
+-	       int width, int height, char *data)
++static int dlfb_handle_damage(struct dlfb_data *dlfb, int x, int y, int width, int height)
+ {
+ 	int i, ret;
+ 	char *cmd;
+@@ -607,21 +606,29 @@ static int dlfb_handle_damage(struct dlfb_data *dlfb, int x, int y,
+ 
+ 	start_cycles = get_cycles();
+ 
++	mutex_lock(&dlfb->render_mutex);
++
+ 	aligned_x = DL_ALIGN_DOWN(x, sizeof(unsigned long));
+ 	width = DL_ALIGN_UP(width + (x-aligned_x), sizeof(unsigned long));
+ 	x = aligned_x;
+ 
+ 	if ((width <= 0) ||
+ 	    (x + width > dlfb->info->var.xres) ||
+-	    (y + height > dlfb->info->var.yres))
+-		return -EINVAL;
++	    (y + height > dlfb->info->var.yres)) {
++		ret = -EINVAL;
++		goto unlock_ret;
++	}
+ 
+-	if (!atomic_read(&dlfb->usb_active))
+-		return 0;
++	if (!atomic_read(&dlfb->usb_active)) {
++		ret = 0;
++		goto unlock_ret;
++	}
+ 
+ 	urb = dlfb_get_urb(dlfb);
+-	if (!urb)
+-		return 0;
++	if (!urb) {
++		ret = 0;
++		goto unlock_ret;
++	}
+ 	cmd = urb->transfer_buffer;
+ 
+ 	for (i = y; i < y + height ; i++) {
+@@ -641,7 +648,7 @@ static int dlfb_handle_damage(struct dlfb_data *dlfb, int x, int y,
+ 			*cmd++ = 0xAF;
+ 		/* Send partial buffer remaining before exiting */
+ 		len = cmd - (char *) urb->transfer_buffer;
+-		ret = dlfb_submit_urb(dlfb, urb, len);
++		dlfb_submit_urb(dlfb, urb, len);
+ 		bytes_sent += len;
+ 	} else
+ 		dlfb_urb_completion(urb);
+@@ -655,7 +662,55 @@ error:
+ 		    >> 10)), /* Kcycles */
+ 		   &dlfb->cpu_kcycles_used);
+ 
+-	return 0;
++	ret = 0;
++
++unlock_ret:
++	mutex_unlock(&dlfb->render_mutex);
++	return ret;
++}
++
++static void dlfb_init_damage(struct dlfb_data *dlfb)
++{
++	dlfb->damage_x = INT_MAX;
++	dlfb->damage_x2 = 0;
++	dlfb->damage_y = INT_MAX;
++	dlfb->damage_y2 = 0;
++}
++
++static void dlfb_damage_work(struct work_struct *w)
++{
++	struct dlfb_data *dlfb = container_of(w, struct dlfb_data, damage_work);
++	int x, x2, y, y2;
++
++	spin_lock_irq(&dlfb->damage_lock);
++	x = dlfb->damage_x;
++	x2 = dlfb->damage_x2;
++	y = dlfb->damage_y;
++	y2 = dlfb->damage_y2;
++	dlfb_init_damage(dlfb);
++	spin_unlock_irq(&dlfb->damage_lock);
++
++	if (x < x2 && y < y2)
++		dlfb_handle_damage(dlfb, x, y, x2 - x, y2 - y);
++}
++
++static void dlfb_offload_damage(struct dlfb_data *dlfb, int x, int y, int width, int height)
++{
++	unsigned long flags;
++	int x2 = x + width;
++	int y2 = y + height;
++
++	if (x >= x2 || y >= y2)
++		return;
++
++	spin_lock_irqsave(&dlfb->damage_lock, flags);
++	dlfb->damage_x = min(x, dlfb->damage_x);
++	dlfb->damage_x2 = max(x2, dlfb->damage_x2);
++	dlfb->damage_y = min(y, dlfb->damage_y);
++	dlfb->damage_y2 = max(y2, dlfb->damage_y2);
++	spin_unlock_irqrestore(&dlfb->damage_lock, flags);
++
++	schedule_work(&dlfb->damage_work);
+ }
+ 
+ /*
+@@ -679,7 +734,7 @@ static ssize_t dlfb_ops_write(struct fb_info *info, const char __user *buf,
+ 				(u32)info->var.yres);
+ 
+ 		dlfb_handle_damage(dlfb, 0, start, info->var.xres,
+-			lines, info->screen_base);
++			lines);
+ 	}
+ 
+ 	return result;
+@@ -694,8 +749,8 @@ static void dlfb_ops_copyarea(struct fb_info *info,
+ 
+ 	sys_copyarea(info, area);
+ 
+-	dlfb_handle_damage(dlfb, area->dx, area->dy,
+-			area->width, area->height, info->screen_base);
++	dlfb_offload_damage(dlfb, area->dx, area->dy,
++			area->width, area->height);
+ }
+ 
+ static void dlfb_ops_imageblit(struct fb_info *info,
+@@ -705,8 +760,8 @@ static void dlfb_ops_imageblit(struct fb_info *info,
+ 
+ 	sys_imageblit(info, image);
+ 
+-	dlfb_handle_damage(dlfb, image->dx, image->dy,
+-			image->width, image->height, info->screen_base);
++	dlfb_offload_damage(dlfb, image->dx, image->dy,
++			image->width, image->height);
+ }
+ 
+ static void dlfb_ops_fillrect(struct fb_info *info,
+@@ -716,8 +771,8 @@ static void dlfb_ops_fillrect(struct fb_info *info,
+ 
+ 	sys_fillrect(info, rect);
+ 
+-	dlfb_handle_damage(dlfb, rect->dx, rect->dy, rect->width,
+-			      rect->height, info->screen_base);
++	dlfb_offload_damage(dlfb, rect->dx, rect->dy, rect->width,
++			      rect->height);
+ }
+ 
+ /*
+@@ -739,17 +794,19 @@ static void dlfb_dpy_deferred_io(struct fb_info *info,
+ 	int bytes_identical = 0;
+ 	int bytes_rendered = 0;
+ 
++	mutex_lock(&dlfb->render_mutex);
++
+ 	if (!fb_defio)
+-		return;
++		goto unlock_ret;
+ 
+ 	if (!atomic_read(&dlfb->usb_active))
+-		return;
++		goto unlock_ret;
+ 
+ 	start_cycles = get_cycles();
+ 
+ 	urb = dlfb_get_urb(dlfb);
+ 	if (!urb)
+-		return;
++		goto unlock_ret;
+ 
+ 	cmd = urb->transfer_buffer;
+ 
+@@ -782,6 +839,8 @@ error:
+ 	atomic_add(((unsigned int) ((end_cycles - start_cycles)
+ 		    >> 10)), /* Kcycles */
+ 		   &dlfb->cpu_kcycles_used);
++unlock_ret:
++	mutex_unlock(&dlfb->render_mutex);
+ }
+ 
+ static int dlfb_get_edid(struct dlfb_data *dlfb, char *edid, int len)
+@@ -859,8 +918,7 @@ static int dlfb_ops_ioctl(struct fb_info *info, unsigned int cmd,
+ 		if (area.y > info->var.yres)
+ 			area.y = info->var.yres;
+ 
+-		dlfb_handle_damage(dlfb, area.x, area.y, area.w, area.h,
+-			   info->screen_base);
++		dlfb_handle_damage(dlfb, area.x, area.y, area.w, area.h);
+ 	}
+ 
+ 	return 0;
+@@ -942,6 +1000,10 @@ static void dlfb_ops_destroy(struct fb_info *info)
+ {
+ 	struct dlfb_data *dlfb = info->par;
+ 
++	cancel_work_sync(&dlfb->damage_work);
++
++	mutex_destroy(&dlfb->render_mutex);
++
+ 	if (info->cmap.len != 0)
+ 		fb_dealloc_cmap(&info->cmap);
+ 	if (info->monspecs.modedb)
+@@ -1065,8 +1127,7 @@ static int dlfb_ops_set_par(struct fb_info *info)
+ 			pix_framebuffer[i] = 0x37e6;
+ 	}
+ 
+-	dlfb_handle_damage(dlfb, 0, 0, info->var.xres, info->var.yres,
+-			   info->screen_base);
++	dlfb_handle_damage(dlfb, 0, 0, info->var.xres, info->var.yres);
+ 
+ 	return 0;
+ }
+@@ -1639,6 +1700,11 @@ static int dlfb_usb_probe(struct usb_interface *intf,
+ 	dlfb->ops = dlfb_ops;
+ 	info->fbops = &dlfb->ops;
+ 
++	mutex_init(&dlfb->render_mutex);
++	dlfb_init_damage(dlfb);
++	spin_lock_init(&dlfb->damage_lock);
++	INIT_WORK(&dlfb->damage_work, dlfb_damage_work);
++
+ 	INIT_LIST_HEAD(&info->modelist);
+ 
+ 	if (!dlfb_alloc_urb_list(dlfb, WRITES_IN_FLIGHT, MAX_TRANSFER)) {
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index ddf028509931..351fa506dc9b 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -4667,14 +4667,12 @@ int btrfs_reloc_cow_block(struct btrfs_trans_handle *trans,
+ void btrfs_reloc_pre_snapshot(struct btrfs_pending_snapshot *pending,
+ 			      u64 *bytes_to_reserve)
+ {
+-	struct btrfs_root *root;
+-	struct reloc_control *rc;
++	struct btrfs_root *root = pending->root;
++	struct reloc_control *rc = root->fs_info->reloc_ctl;
+ 
+-	root = pending->root;
+-	if (!root->reloc_root)
++	if (!root->reloc_root || !rc)
+ 		return;
+ 
+-	rc = root->fs_info->reloc_ctl;
+ 	if (!rc->merge_reloc_tree)
+ 		return;
+ 
+@@ -4703,10 +4701,10 @@ int btrfs_reloc_post_snapshot(struct btrfs_trans_handle *trans,
+ 	struct btrfs_root *root = pending->root;
+ 	struct btrfs_root *reloc_root;
+ 	struct btrfs_root *new_root;
+-	struct reloc_control *rc;
++	struct reloc_control *rc = root->fs_info->reloc_ctl;
+ 	int ret;
+ 
+-	if (!root->reloc_root)
++	if (!root->reloc_root || !rc)
+ 		return 0;
+ 
+ 	rc = root->fs_info->reloc_ctl;
+diff --git a/fs/ceph/super.c b/fs/ceph/super.c
+index 6d5bb2f74612..01113c86e469 100644
+--- a/fs/ceph/super.c
++++ b/fs/ceph/super.c
+@@ -845,6 +845,12 @@ static void ceph_umount_begin(struct super_block *sb)
+ 	return;
+ }
+ 
++static int ceph_remount(struct super_block *sb, int *flags, char *data)
++{
++	sync_filesystem(sb);
++	return 0;
++}
++
+ static const struct super_operations ceph_super_ops = {
+ 	.alloc_inode	= ceph_alloc_inode,
+ 	.destroy_inode	= ceph_destroy_inode,
+@@ -852,6 +858,7 @@ static const struct super_operations ceph_super_ops = {
+ 	.drop_inode	= ceph_drop_inode,
+ 	.sync_fs        = ceph_sync_fs,
+ 	.put_super	= ceph_put_super,
++	.remount_fs	= ceph_remount,
+ 	.show_options   = ceph_show_options,
+ 	.statfs		= ceph_statfs,
+ 	.umount_begin   = ceph_umount_begin,
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 585ad3207cb1..607468948f72 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -1687,6 +1687,7 @@ static inline bool is_retryable_error(int error)
+ 
+ #define   CIFS_HAS_CREDITS 0x0400    /* already has credits */
+ #define   CIFS_TRANSFORM_REQ 0x0800    /* transform request before sending */
++#define   CIFS_NO_SRV_RSP    0x1000    /* there is no server response */
+ 
+ /* Security Flags: indicate type of session setup needed */
+ #define   CIFSSEC_MAY_SIGN	0x00001
+diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
+index f43747c062a7..6050851edcb8 100644
+--- a/fs/cifs/cifssmb.c
++++ b/fs/cifs/cifssmb.c
+@@ -2540,7 +2540,7 @@ CIFSSMBLock(const unsigned int xid, struct cifs_tcon *tcon,
+ 
+ 	if (lockType == LOCKING_ANDX_OPLOCK_RELEASE) {
+ 		/* no response expected */
+-		flags = CIFS_ASYNC_OP | CIFS_OBREAK_OP;
++		flags = CIFS_NO_SRV_RSP | CIFS_ASYNC_OP | CIFS_OBREAK_OP;
+ 		pSMB->Timeout = 0;
+ 	} else if (waitFlag) {
+ 		flags = CIFS_BLOCKING_OP; /* blocking operation, no timeout */
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index c36ff0d1fe2a..aa61dcf471b3 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -2917,26 +2917,28 @@ smb21_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
+ 		       unsigned int epoch, bool *purge_cache)
+ {
+ 	char message[5] = {0};
++	unsigned int new_oplock = 0;
+ 
+ 	oplock &= 0xFF;
+ 	if (oplock == SMB2_OPLOCK_LEVEL_NOCHANGE)
+ 		return;
+ 
+-	cinode->oplock = 0;
+ 	if (oplock & SMB2_LEASE_READ_CACHING_HE) {
+-		cinode->oplock |= CIFS_CACHE_READ_FLG;
++		new_oplock |= CIFS_CACHE_READ_FLG;
+ 		strcat(message, "R");
+ 	}
+ 	if (oplock & SMB2_LEASE_HANDLE_CACHING_HE) {
+-		cinode->oplock |= CIFS_CACHE_HANDLE_FLG;
++		new_oplock |= CIFS_CACHE_HANDLE_FLG;
+ 		strcat(message, "H");
+ 	}
+ 	if (oplock & SMB2_LEASE_WRITE_CACHING_HE) {
+-		cinode->oplock |= CIFS_CACHE_WRITE_FLG;
++		new_oplock |= CIFS_CACHE_WRITE_FLG;
+ 		strcat(message, "W");
+ 	}
+-	if (!cinode->oplock)
+-		strcat(message, "None");
++	if (!new_oplock)
++		strncpy(message, "None", sizeof(message));
++
++	cinode->oplock = new_oplock;
+ 	cifs_dbg(FYI, "%s Lease granted on inode %p\n", message,
+ 		 &cinode->vfs_inode);
+ }
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index 1de8e996e566..72e242c49ca1 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -1054,8 +1054,11 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
+ 
+ 	mutex_unlock(&ses->server->srv_mutex);
+ 
+-	if (rc < 0) {
+-		/* Sending failed for some reason - return credits back */
++	/*
++	 * If sending failed for some reason, or it is an oplock break that we
++	 * will not receive a response to, return the credits back.
++	 */
++	if (rc < 0 || (flags & CIFS_NO_SRV_RSP)) {
+ 		for (i = 0; i < num_rqst; i++)
+ 			add_credits(ses->server, &credits[i], optype);
+ 		goto out;
+@@ -1076,9 +1079,6 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
+ 		smb311_update_preauth_hash(ses, rqst[0].rq_iov,
+ 					   rqst[0].rq_nvec);
+ 
+-	if ((flags & CIFS_TIMEOUT_MASK) == CIFS_ASYNC_OP)
+-		goto out;
+-
+ 	for (i = 0; i < num_rqst; i++) {
+ 		rc = wait_for_response(ses->server, midQ[i]);
+ 		if (rc != 0)
+diff --git a/fs/dcache.c b/fs/dcache.c
+index aac41adf4743..c663c602f9ef 100644
+--- a/fs/dcache.c
++++ b/fs/dcache.c
+@@ -344,7 +344,7 @@ static void dentry_free(struct dentry *dentry)
+ 		}
+ 	}
+ 	/* if dentry was never visible to RCU, immediate free is OK */
+-	if (!(dentry->d_flags & DCACHE_RCUACCESS))
++	if (dentry->d_flags & DCACHE_NORCU)
+ 		__d_free(&dentry->d_u.d_rcu);
+ 	else
+ 		call_rcu(&dentry->d_u.d_rcu, __d_free);
+@@ -1701,7 +1701,6 @@ struct dentry *d_alloc(struct dentry * parent, const struct qstr *name)
+ 	struct dentry *dentry = __d_alloc(parent->d_sb, name);
+ 	if (!dentry)
+ 		return NULL;
+-	dentry->d_flags |= DCACHE_RCUACCESS;
+ 	spin_lock(&parent->d_lock);
+ 	/*
+ 	 * don't need child lock because it is not subject
+@@ -1726,7 +1725,7 @@ struct dentry *d_alloc_cursor(struct dentry * parent)
+ {
+ 	struct dentry *dentry = d_alloc_anon(parent->d_sb);
+ 	if (dentry) {
+-		dentry->d_flags |= DCACHE_RCUACCESS | DCACHE_DENTRY_CURSOR;
++		dentry->d_flags |= DCACHE_DENTRY_CURSOR;
+ 		dentry->d_parent = dget(parent);
+ 	}
+ 	return dentry;
+@@ -1739,10 +1738,17 @@ struct dentry *d_alloc_cursor(struct dentry * parent)
+  *
+  * For a filesystem that just pins its dentries in memory and never
+  * performs lookups at all, return an unhashed IS_ROOT dentry.
++ * This is used for pipes, sockets, and the like - the stuff that should
++ * never be anyone's children or parents.  Unlike all other
++ * dentries, these will not have RCU delay between dropping the
++ * last reference and freeing them.
+  */
+ struct dentry *d_alloc_pseudo(struct super_block *sb, const struct qstr *name)
+ {
+-	return __d_alloc(sb, name);
++	struct dentry *dentry = __d_alloc(sb, name);
++	if (likely(dentry))
++		dentry->d_flags |= DCACHE_NORCU;
++	return dentry;
+ }
+ EXPORT_SYMBOL(d_alloc_pseudo);
+ 
+@@ -1911,12 +1917,10 @@ struct dentry *d_make_root(struct inode *root_inode)
+ 
+ 	if (root_inode) {
+ 		res = d_alloc_anon(root_inode->i_sb);
+-		if (res) {
+-			res->d_flags |= DCACHE_RCUACCESS;
++		if (res)
+ 			d_instantiate(res, root_inode);
+-		} else {
++		else
+ 			iput(root_inode);
+-		}
+ 	}
+ 	return res;
+ }
+@@ -2781,9 +2785,7 @@ static void __d_move(struct dentry *dentry, struct dentry *target,
+ 		copy_name(dentry, target);
+ 		target->d_hash.pprev = NULL;
+ 		dentry->d_parent->d_lockref.count++;
+-		if (dentry == old_parent)
+-			dentry->d_flags |= DCACHE_RCUACCESS;
+-		else
++		if (dentry != old_parent) /* wasn't IS_ROOT */
+ 			WARN_ON(!--old_parent->d_lockref.count);
+ 	} else {
+ 		target->d_parent = old_parent;
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 06096b60f1df..92ee15dda4c7 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -178,7 +178,9 @@ void fuse_finish_open(struct inode *inode, struct file *file)
+ 
+ 	if (!(ff->open_flags & FOPEN_KEEP_CACHE))
+ 		invalidate_inode_pages2(inode->i_mapping);
+-	if (ff->open_flags & FOPEN_NONSEEKABLE)
++	if (ff->open_flags & FOPEN_STREAM)
++		stream_open(inode, file);
++	else if (ff->open_flags & FOPEN_NONSEEKABLE)
+ 		nonseekable_open(inode, file);
+ 	if (fc->atomic_o_trunc && (file->f_flags & O_TRUNC)) {
+ 		struct fuse_inode *fi = get_fuse_inode(inode);
+@@ -1586,7 +1588,7 @@ __acquires(fi->lock)
+ {
+ 	struct fuse_conn *fc = get_fuse_conn(inode);
+ 	struct fuse_inode *fi = get_fuse_inode(inode);
+-	size_t crop = i_size_read(inode);
++	loff_t crop = i_size_read(inode);
+ 	struct fuse_req *req;
+ 
+ 	while (fi->writectr >= 0 && !list_empty(&fi->queued_writes)) {
+@@ -3044,6 +3046,13 @@ static long fuse_file_fallocate(struct file *file, int mode, loff_t offset,
+ 		}
+ 	}
+ 
++	if (!(mode & FALLOC_FL_KEEP_SIZE) &&
++	    offset + length > i_size_read(inode)) {
++		err = inode_newsize_ok(inode, offset + length);
++		if (err)
++			return err;
++	}
++
+ 	if (!(mode & FALLOC_FL_KEEP_SIZE))
+ 		set_bit(FUSE_I_SIZE_UNSTABLE, &fi->state);
+ 
+diff --git a/fs/nfs/filelayout/filelayout.c b/fs/nfs/filelayout/filelayout.c
+index 61f46facb39c..b3e8ba3bd654 100644
+--- a/fs/nfs/filelayout/filelayout.c
++++ b/fs/nfs/filelayout/filelayout.c
+@@ -904,7 +904,7 @@ fl_pnfs_update_layout(struct inode *ino,
+ 	status = filelayout_check_deviceid(lo, fl, gfp_flags);
+ 	if (status) {
+ 		pnfs_put_lseg(lseg);
+-		lseg = ERR_PTR(status);
++		lseg = NULL;
+ 	}
+ out:
+ 	return lseg;
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index 3de36479ed7a..f502f1c054cf 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -159,6 +159,10 @@ int nfs40_discover_server_trunking(struct nfs_client *clp,
+ 		/* Sustain the lease, even if it's empty.  If the clientid4
+ 		 * goes stale it's of no use for trunking discovery. */
+ 		nfs4_schedule_state_renewal(*result);
++
++		/* If the client state needs to recover, do it. */
++		if (clp->cl_state)
++			nfs4_schedule_state_manager(clp);
+ 	}
+ out:
+ 	return status;
+diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c
+index df06f3da166c..e8d3f349b7f2 100644
+--- a/fs/notify/fsnotify.c
++++ b/fs/notify/fsnotify.c
+@@ -107,6 +107,47 @@ void fsnotify_sb_delete(struct super_block *sb)
+ 	fsnotify_clear_marks_by_sb(sb);
+ }
+ 
++/*
++ * fsnotify_nameremove - a filename was removed from a directory
++ *
++ * This is mostly called under parent vfs inode lock so name and
++ * dentry->d_parent should be stable. However, there are some corner cases where
++ * inode lock is not held. So to be on the safe side and be resilient to future
++ * callers and out-of-tree users of d_delete(), we do not assume that d_parent
++ * and d_name are stable and we use dget_parent() and
++ * take_dentry_name_snapshot() to grab stable references.
++ */
++void fsnotify_nameremove(struct dentry *dentry, int isdir)
++{
++	struct dentry *parent;
++	struct name_snapshot name;
++	__u32 mask = FS_DELETE;
++
++	/* d_delete() of pseudo inode? (e.g. __ns_get_path() playing tricks) */
++	if (IS_ROOT(dentry))
++		return;
++
++	if (isdir)
++		mask |= FS_ISDIR;
++
++	parent = dget_parent(dentry);
++	/* Avoid unneeded take_dentry_name_snapshot() */
++	if (!(d_inode(parent)->i_fsnotify_mask & FS_DELETE) &&
++	    !(dentry->d_sb->s_fsnotify_mask & FS_DELETE))
++		goto out_dput;
++
++	take_dentry_name_snapshot(&name, dentry);
++
++	fsnotify(d_inode(parent), mask, d_inode(dentry), FSNOTIFY_EVENT_INODE,
++		 name.name, 0);
++
++	release_dentry_name_snapshot(&name);
++
++out_dput:
++	dput(parent);
++}
++EXPORT_SYMBOL(fsnotify_nameremove);
++
+ /*
+  * Given an inode, first check if we care what happens to our children.  Inotify
+  * and dnotify both tell their parents about events.  If we care about any event
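
Moving fsnotify_nameremove() out of line lets it skip take_dentry_name_snapshot()
when nobody watches FS_DELETE on the parent or the superblock. FS_DELETE is what
inotify reports as IN_DELETE; as a rough illustration (not part of the patch), a
minimal user-space watcher that exercises this path:

	#include <stdio.h>
	#include <unistd.h>
	#include <sys/inotify.h>

	int main(void)
	{
		char buf[4096] __attribute__((aligned(8)));
		int fd = inotify_init1(0);

		if (fd < 0 || inotify_add_watch(fd, "/tmp", IN_DELETE) < 0) {
			perror("inotify");
			return 1;
		}
		/* Each unlink()/rmdir() under /tmp reaches
		 * fsnotify_nameremove() above and surfaces here as an
		 * IN_DELETE event. */
		ssize_t n = read(fd, buf, sizeof(buf));
		if (n > 0) {
			struct inotify_event *ev = (struct inotify_event *)buf;
			printf("deleted: %s%s\n", ev->len ? ev->name : "(?)",
			       (ev->mask & IN_ISDIR) ? "/" : "");
		}
		close(fd);
		return 0;
	}
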
+diff --git a/fs/nsfs.c b/fs/nsfs.c
+index 60702d677bd4..30d150a4f0c6 100644
+--- a/fs/nsfs.c
++++ b/fs/nsfs.c
+@@ -85,13 +85,12 @@ slow:
+ 	inode->i_fop = &ns_file_operations;
+ 	inode->i_private = ns;
+ 
+-	dentry = d_alloc_pseudo(mnt->mnt_sb, &empty_name);
++	dentry = d_alloc_anon(mnt->mnt_sb);
+ 	if (!dentry) {
+ 		iput(inode);
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 	d_instantiate(dentry, inode);
+-	dentry->d_flags |= DCACHE_RCUACCESS;
+ 	dentry->d_fsdata = (void *)ns->ops;
+ 	d = atomic_long_cmpxchg(&ns->stashed, 0, (unsigned long)dentry);
+ 	if (d) {
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index 68b3303e4b46..56feaa739979 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -909,14 +909,14 @@ static bool ovl_open_need_copy_up(struct dentry *dentry, int flags)
+ 	return true;
+ }
+ 
+-int ovl_open_maybe_copy_up(struct dentry *dentry, unsigned int file_flags)
++int ovl_maybe_copy_up(struct dentry *dentry, int flags)
+ {
+ 	int err = 0;
+ 
+-	if (ovl_open_need_copy_up(dentry, file_flags)) {
++	if (ovl_open_need_copy_up(dentry, flags)) {
+ 		err = ovl_want_write(dentry);
+ 		if (!err) {
+-			err = ovl_copy_up_flags(dentry, file_flags);
++			err = ovl_copy_up_flags(dentry, flags);
+ 			ovl_drop_write(dentry);
+ 		}
+ 	}
+diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
+index 84dd957efa24..50e4407398d8 100644
+--- a/fs/overlayfs/file.c
++++ b/fs/overlayfs/file.c
+@@ -116,11 +116,10 @@ static int ovl_real_fdget(const struct file *file, struct fd *real)
+ 
+ static int ovl_open(struct inode *inode, struct file *file)
+ {
+-	struct dentry *dentry = file_dentry(file);
+ 	struct file *realfile;
+ 	int err;
+ 
+-	err = ovl_open_maybe_copy_up(dentry, file->f_flags);
++	err = ovl_maybe_copy_up(file_dentry(file), file->f_flags);
+ 	if (err)
+ 		return err;
+ 
+@@ -390,7 +389,7 @@ static long ovl_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 		if (ret)
+ 			return ret;
+ 
+-		ret = ovl_copy_up_with_data(file_dentry(file));
++		ret = ovl_maybe_copy_up(file_dentry(file), O_WRONLY);
+ 		if (!ret) {
+ 			ret = ovl_real_ioctl(file, cmd, arg);
+ 
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index 9c6018287d57..d26efed9f80a 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -421,7 +421,7 @@ extern const struct file_operations ovl_file_operations;
+ int ovl_copy_up(struct dentry *dentry);
+ int ovl_copy_up_with_data(struct dentry *dentry);
+ int ovl_copy_up_flags(struct dentry *dentry, int flags);
+-int ovl_open_maybe_copy_up(struct dentry *dentry, unsigned int file_flags);
++int ovl_maybe_copy_up(struct dentry *dentry, int flags);
+ int ovl_copy_xattr(struct dentry *old, struct dentry *new);
+ int ovl_set_attr(struct dentry *upper, struct kstat *stat);
+ struct ovl_fh *ovl_encode_real_fh(struct dentry *real, bool is_upper);
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index 6a803a0b75df..0c9bef89ac43 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -2540,6 +2540,11 @@ static ssize_t proc_pid_attr_write(struct file * file, const char __user * buf,
+ 		rcu_read_unlock();
+ 		return -EACCES;
+ 	}
++	/* Prevent changes to overridden credentials. */
++	if (current_cred() != current_real_cred()) {
++		rcu_read_unlock();
++		return -EBUSY;
++	}
+ 	rcu_read_unlock();
+ 
+ 	if (count > PAGE_SIZE)
+diff --git a/include/asm-generic/mm_hooks.h b/include/asm-generic/mm_hooks.h
+index 8ac4e68a12f0..6736ed2f632b 100644
+--- a/include/asm-generic/mm_hooks.h
++++ b/include/asm-generic/mm_hooks.h
+@@ -18,7 +18,6 @@ static inline void arch_exit_mmap(struct mm_struct *mm)
+ }
+ 
+ static inline void arch_unmap(struct mm_struct *mm,
+-			struct vm_area_struct *vma,
+ 			unsigned long start, unsigned long end)
+ {
+ }
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 944ccc310201..ac721fc5f95e 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -36,6 +36,7 @@ struct bpf_map_ops {
+ 	void (*map_free)(struct bpf_map *map);
+ 	int (*map_get_next_key)(struct bpf_map *map, void *key, void *next_key);
+ 	void (*map_release_uref)(struct bpf_map *map);
++	void *(*map_lookup_elem_sys_only)(struct bpf_map *map, void *key);
+ 
+ 	/* funcs callable from userspace and from eBPF programs */
+ 	void *(*map_lookup_elem)(struct bpf_map *map, void *key);
+diff --git a/include/linux/dcache.h b/include/linux/dcache.h
+index 60996e64c579..6e1e8e6602c6 100644
+--- a/include/linux/dcache.h
++++ b/include/linux/dcache.h
+@@ -176,7 +176,6 @@ struct dentry_operations {
+       * typically using d_splice_alias. */
+ 
+ #define DCACHE_REFERENCED		0x00000040 /* Recently used, don't discard. */
+-#define DCACHE_RCUACCESS		0x00000080 /* Entry has ever been RCU-visible */
+ 
+ #define DCACHE_CANT_MOUNT		0x00000100
+ #define DCACHE_GENOCIDE			0x00000200
+@@ -217,6 +216,7 @@ struct dentry_operations {
+ 
+ #define DCACHE_PAR_LOOKUP		0x10000000 /* being looked up (with parent locked shared) */
+ #define DCACHE_DENTRY_CURSOR		0x20000000
++#define DCACHE_NORCU			0x40000000 /* No RCU delay for freeing */
+ 
+ extern seqlock_t rename_lock;
+ 
+diff --git a/include/linux/fsnotify.h b/include/linux/fsnotify.h
+index 09587e2860b5..e30d6132c633 100644
+--- a/include/linux/fsnotify.h
++++ b/include/linux/fsnotify.h
+@@ -151,39 +151,6 @@ static inline void fsnotify_vfsmount_delete(struct vfsmount *mnt)
+ 	__fsnotify_vfsmount_delete(mnt);
+ }
+ 
+-/*
+- * fsnotify_nameremove - a filename was removed from a directory
+- *
+- * This is mostly called under parent vfs inode lock so name and
+- * dentry->d_parent should be stable. However, there are some corner cases where
+- * inode lock is not held. So to be on the safe side and be resilient to future
+- * callers and out-of-tree users of d_delete(), we do not assume that d_parent
+- * and d_name are stable and we use dget_parent() and
+- * take_dentry_name_snapshot() to grab stable references.
+- */
+-static inline void fsnotify_nameremove(struct dentry *dentry, int isdir)
+-{
+-	struct dentry *parent;
+-	struct name_snapshot name;
+-	__u32 mask = FS_DELETE;
+-
+-	/* d_delete() of pseudo inode? (e.g. __ns_get_path() playing tricks) */
+-	if (IS_ROOT(dentry))
+-		return;
+-
+-	if (isdir)
+-		mask |= FS_ISDIR;
+-
+-	parent = dget_parent(dentry);
+-	take_dentry_name_snapshot(&name, dentry);
+-
+-	fsnotify(d_inode(parent), mask, d_inode(dentry), FSNOTIFY_EVENT_INODE,
+-		 name.name, 0);
+-
+-	release_dentry_name_snapshot(&name);
+-	dput(parent);
+-}
+-
+ /*
+  * fsnotify_inoderemove - an inode is going away
+  */
+diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
+index dfc28fcb4de8..094b38f2d9a1 100644
+--- a/include/linux/fsnotify_backend.h
++++ b/include/linux/fsnotify_backend.h
+@@ -355,6 +355,7 @@ extern int __fsnotify_parent(const struct path *path, struct dentry *dentry, __u
+ extern void __fsnotify_inode_delete(struct inode *inode);
+ extern void __fsnotify_vfsmount_delete(struct vfsmount *mnt);
+ extern void fsnotify_sb_delete(struct super_block *sb);
++extern void fsnotify_nameremove(struct dentry *dentry, int isdir);
+ extern u32 fsnotify_get_cookie(void);
+ 
+ static inline int fsnotify_inode_watches_children(struct inode *inode)
+@@ -524,6 +525,9 @@ static inline void __fsnotify_vfsmount_delete(struct vfsmount *mnt)
+ static inline void fsnotify_sb_delete(struct super_block *sb)
+ {}
+ 
++static inline void fsnotify_nameremove(struct dentry *dentry, int isdir)
++{}
++
+ static inline void fsnotify_update_flags(struct dentry *dentry)
+ {}
+ 
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 0d0729648844..9ffc53acaec1 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -681,7 +681,6 @@ struct mlx5_core_dev {
+ #endif
+ 	struct mlx5_clock        clock;
+ 	struct mlx5_ib_clock_info  *clock_info;
+-	struct page             *clock_info_page;
+ 	struct mlx5_fw_tracer   *tracer;
+ };
+ 
+diff --git a/include/linux/of.h b/include/linux/of.h
+index e240992e5cb6..074913002e39 100644
+--- a/include/linux/of.h
++++ b/include/linux/of.h
+@@ -234,8 +234,8 @@ extern struct device_node *of_find_all_nodes(struct device_node *prev);
+ static inline u64 of_read_number(const __be32 *cell, int size)
+ {
+ 	u64 r = 0;
+-	while (size--)
+-		r = (r << 32) | be32_to_cpu(*(cell++));
++	for (; size--; cell++)
++		r = (r << 32) | be32_to_cpu(*cell);
+ 	return r;
+ }
+ 
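The rewrite above is behavior-preserving; its point is that be32_to_cpu() may be
implemented as a macro that expands its argument more than once, so the
side-effecting *(cell++) was fragile (clang flags exactly this with
-Wunsequenced). As an illustration only, the same cell-combining logic in
portable user-space C, with ntohl() standing in for be32_to_cpu():

	#include <arpa/inet.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Combine big-endian 32-bit cells into one 64-bit value,
	 * mirroring of_read_number() above. */
	static uint64_t read_number(const uint32_t *cell, int size)
	{
		uint64_t r = 0;

		for (; size--; cell++)
			r = (r << 32) | ntohl(*cell);
		return r;
	}

	int main(void)
	{
		/* A two-cell (#address-cells = 2) value. */
		uint32_t cells[2] = { htonl(0x1), htonl(0x80000000) };

		/* Prints 0x180000000. */
		printf("0x%llx\n", (unsigned long long)read_number(cells, 2));
		return 0;
	}
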
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 77448215ef5b..2c056a7a728a 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -348,6 +348,8 @@ struct pci_dev {
+ 	unsigned int	hotplug_user_indicators:1; /* SlotCtl indicators
+ 						      controlled exclusively by
+ 						      user sysfs */
++	unsigned int	clear_retrain_link:1;	/* Need to clear Retrain Link
++						   bit manually */
+ 	unsigned int	d3_delay;	/* D3->D0 transition time in ms */
+ 	unsigned int	d3cold_delay;	/* D3cold->D0 transition time in ms */
+ 
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 9027a8c4219f..20a4c2280308 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -1425,10 +1425,12 @@ static inline void skb_zcopy_clear(struct sk_buff *skb, bool zerocopy)
+ 	struct ubuf_info *uarg = skb_zcopy(skb);
+ 
+ 	if (uarg) {
+-		if (uarg->callback == sock_zerocopy_callback) {
++		if (skb_zcopy_is_nouarg(skb)) {
++			/* no notification callback */
++		} else if (uarg->callback == sock_zerocopy_callback) {
+ 			uarg->zerocopy = uarg->zerocopy && zerocopy;
+ 			sock_zerocopy_put(uarg);
+-		} else if (!skb_zcopy_is_nouarg(skb)) {
++		} else {
+ 			uarg->callback(uarg, zerocopy);
+ 		}
+ 
+@@ -2683,7 +2685,8 @@ static inline int skb_orphan_frags(struct sk_buff *skb, gfp_t gfp_mask)
+ {
+ 	if (likely(!skb_zcopy(skb)))
+ 		return 0;
+-	if (skb_uarg(skb)->callback == sock_zerocopy_callback)
++	if (!skb_zcopy_is_nouarg(skb) &&
++	    skb_uarg(skb)->callback == sock_zerocopy_callback)
+ 		return 0;
+ 	return skb_copy_ubufs(skb, gfp_mask);
+ }
+diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
+index d035183c8d03..cc32b9d9ecec 100644
+--- a/include/net/flow_offload.h
++++ b/include/net/flow_offload.h
+@@ -71,6 +71,8 @@ void flow_rule_match_eth_addrs(const struct flow_rule *rule,
+ 			       struct flow_match_eth_addrs *out);
+ void flow_rule_match_vlan(const struct flow_rule *rule,
+ 			  struct flow_match_vlan *out);
++void flow_rule_match_cvlan(const struct flow_rule *rule,
++			   struct flow_match_vlan *out);
+ void flow_rule_match_ipv4_addrs(const struct flow_rule *rule,
+ 				struct flow_match_ipv4_addrs *out);
+ void flow_rule_match_ipv6_addrs(const struct flow_rule *rule,
+diff --git a/include/net/ip6_fib.h b/include/net/ip6_fib.h
+index 84097010237c..b5e3add90e99 100644
+--- a/include/net/ip6_fib.h
++++ b/include/net/ip6_fib.h
+@@ -171,7 +171,8 @@ struct fib6_info {
+ 					dst_nocount:1,
+ 					dst_nopolicy:1,
+ 					dst_host:1,
+-					unused:3;
++					fib6_destroying:1,
++					unused:2;
+ 
+ 	struct fib6_nh			fib6_nh;
+ 	struct rcu_head			rcu;
+diff --git a/include/uapi/linux/fuse.h b/include/uapi/linux/fuse.h
+index 2ac598614a8f..56a8fb4e1222 100644
+--- a/include/uapi/linux/fuse.h
++++ b/include/uapi/linux/fuse.h
+@@ -229,11 +229,13 @@ struct fuse_file_lock {
+  * FOPEN_KEEP_CACHE: don't invalidate the data cache on open
+  * FOPEN_NONSEEKABLE: the file is not seekable
+  * FOPEN_CACHE_DIR: allow caching this directory
++ * FOPEN_STREAM: the file is stream-like (no file position at all)
+  */
+ #define FOPEN_DIRECT_IO		(1 << 0)
+ #define FOPEN_KEEP_CACHE	(1 << 1)
+ #define FOPEN_NONSEEKABLE	(1 << 2)
+ #define FOPEN_CACHE_DIR		(1 << 3)
++#define FOPEN_STREAM		(1 << 4)
+ 
+ /**
+  * INIT request/reply flags
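
FOPEN_STREAM is requested by the FUSE server in its reply to the OPEN request;
the fs/fuse/file.c hunk above then routes such files through stream_open()
instead of nonseekable_open(). A rough user-space sketch of a server setting it
(raw wire structures rather than libfuse, and assuming headers new enough to
define FOPEN_STREAM):

	#include <stdint.h>
	#include <string.h>
	#include <linux/fuse.h>

	/* Fill a FUSE OPEN reply so the kernel treats the file as
	 * stream-like (no file position at all). */
	static void fill_open_reply(struct fuse_open_out *out, uint64_t fh)
	{
		memset(out, 0, sizeof(*out));
		out->fh = fh;                   /* server-chosen file handle */
		out->open_flags = FOPEN_STREAM;
	}
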
+diff --git a/include/video/udlfb.h b/include/video/udlfb.h
+index 7d09e54ae54e..58fb5732831a 100644
+--- a/include/video/udlfb.h
++++ b/include/video/udlfb.h
+@@ -48,6 +48,13 @@ struct dlfb_data {
+ 	int base8;
+ 	u32 pseudo_palette[256];
+ 	int blank_mode; /* one of FB_BLANK_ */
++	struct mutex render_mutex;
++	int damage_x;
++	int damage_y;
++	int damage_x2;
++	int damage_y2;
++	spinlock_t damage_lock;
++	struct work_struct damage_work;
+ 	struct fb_ops ops;
+ 	/* blit-only rendering path metrics, exposed through sysfs */
+ 	atomic_t bytes_rendered; /* raw pixel-bytes driver asked to render */
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index fed15cf94dca..f79b4aa0a4af 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -527,18 +527,30 @@ static u32 htab_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf)
+ 	return insn - insn_buf;
+ }
+ 
+-static void *htab_lru_map_lookup_elem(struct bpf_map *map, void *key)
++static __always_inline void *__htab_lru_map_lookup_elem(struct bpf_map *map,
++							void *key, const bool mark)
+ {
+ 	struct htab_elem *l = __htab_map_lookup_elem(map, key);
+ 
+ 	if (l) {
+-		bpf_lru_node_set_ref(&l->lru_node);
++		if (mark)
++			bpf_lru_node_set_ref(&l->lru_node);
+ 		return l->key + round_up(map->key_size, 8);
+ 	}
+ 
+ 	return NULL;
+ }
+ 
++static void *htab_lru_map_lookup_elem(struct bpf_map *map, void *key)
++{
++	return __htab_lru_map_lookup_elem(map, key, true);
++}
++
++static void *htab_lru_map_lookup_elem_sys(struct bpf_map *map, void *key)
++{
++	return __htab_lru_map_lookup_elem(map, key, false);
++}
++
+ static u32 htab_lru_map_gen_lookup(struct bpf_map *map,
+ 				   struct bpf_insn *insn_buf)
+ {
+@@ -1250,6 +1262,7 @@ const struct bpf_map_ops htab_lru_map_ops = {
+ 	.map_free = htab_map_free,
+ 	.map_get_next_key = htab_map_get_next_key,
+ 	.map_lookup_elem = htab_lru_map_lookup_elem,
++	.map_lookup_elem_sys_only = htab_lru_map_lookup_elem_sys,
+ 	.map_update_elem = htab_lru_map_update_elem,
+ 	.map_delete_elem = htab_lru_map_delete_elem,
+ 	.map_gen_lookup = htab_lru_map_gen_lookup,
+@@ -1281,7 +1294,6 @@ static void *htab_lru_percpu_map_lookup_elem(struct bpf_map *map, void *key)
+ 
+ int bpf_percpu_hash_copy(struct bpf_map *map, void *key, void *value)
+ {
+-	struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
+ 	struct htab_elem *l;
+ 	void __percpu *pptr;
+ 	int ret = -ENOENT;
+@@ -1297,8 +1309,9 @@ int bpf_percpu_hash_copy(struct bpf_map *map, void *key, void *value)
+ 	l = __htab_map_lookup_elem(map, key);
+ 	if (!l)
+ 		goto out;
+-	if (htab_is_lru(htab))
+-		bpf_lru_node_set_ref(&l->lru_node);
++	/* We do not mark the LRU map element here, so as not to mess up
++	 * eviction heuristics when user space does a map walk.
++	 */
+ 	pptr = htab_elem_get_ptr(l, map->key_size);
+ 	for_each_possible_cpu(cpu) {
+ 		bpf_long_memcpy(value + off,
+diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
+index 4a8f390a2b82..dc9d7ac8228d 100644
+--- a/kernel/bpf/inode.c
++++ b/kernel/bpf/inode.c
+@@ -518,7 +518,7 @@ out:
+ static struct bpf_prog *__get_prog_inode(struct inode *inode, enum bpf_prog_type type)
+ {
+ 	struct bpf_prog *prog;
+-	int ret = inode_permission(inode, MAY_READ | MAY_WRITE);
++	int ret = inode_permission(inode, MAY_READ);
+ 	if (ret)
+ 		return ERR_PTR(ret);
+ 
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index afca36f53c49..db6e825e2958 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -773,7 +773,10 @@ static int map_lookup_elem(union bpf_attr *attr)
+ 		err = map->ops->map_peek_elem(map, value);
+ 	} else {
+ 		rcu_read_lock();
+-		ptr = map->ops->map_lookup_elem(map, key);
++		if (map->ops->map_lookup_elem_sys_only)
++			ptr = map->ops->map_lookup_elem_sys_only(map, key);
++		else
++			ptr = map->ops->map_lookup_elem(map, key);
+ 		if (IS_ERR(ptr)) {
+ 			err = PTR_ERR(ptr);
+ 		} else if (!ptr) {
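
The new map_lookup_elem_sys_only hook, wired up for LRU hash maps in the
kernel/bpf/hashtab.c hunk above, means syscall-path lookups no longer mark
elements referenced, while lookups from eBPF programs still do. An illustrative
libbpf-based walker that benefits, assuming an existing LRU hash map fd with
8-byte keys and a value buffer of the map's value size:

	#include <stdio.h>
	#include <bpf/bpf.h>

	/* Walk every key of a BPF_MAP_TYPE_LRU_HASH map from userspace.
	 * With map_lookup_elem_sys_only in place, these lookups do not
	 * set LRU references, so a periodic walk no longer distorts
	 * eviction order. */
	static void walk_lru_map(int map_fd, void *value)
	{
		__u64 key, next_key;
		void *prev = NULL;	/* NULL fetches the first key */

		while (bpf_map_get_next_key(map_fd, prev, &next_key) == 0) {
			if (bpf_map_lookup_elem(map_fd, &next_key, value) == 0)
				printf("key %llu present\n",
				       (unsigned long long)next_key);
			key = next_key;
			prev = &key;
		}
	}
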
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index 5b3b0c3c8a47..d910e36c34b5 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -1318,9 +1318,6 @@ event_id_read(struct file *filp, char __user *ubuf, size_t cnt, loff_t *ppos)
+ 	char buf[32];
+ 	int len;
+ 
+-	if (*ppos)
+-		return 0;
+-
+ 	if (unlikely(!id))
+ 		return -ENODEV;
+ 
+diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
+index 8f8411e7835f..e41d389b7f49 100644
+--- a/kernel/trace/trace_probe.c
++++ b/kernel/trace/trace_probe.c
+@@ -420,13 +420,14 @@ static int traceprobe_parse_probe_arg_body(char *arg, ssize_t *size,
+ 				return -E2BIG;
+ 		}
+ 	}
+-	/*
+-	 * The default type of $comm should be "string", and it can't be
+-	 * dereferenced.
+-	 */
+-	if (!t && strcmp(arg, "$comm") == 0)
++
++	/* Since $comm can not be dereferred, we can find $comm by strcmp */
++	if (strcmp(arg, "$comm") == 0) {
++		/* The type of $comm must be "string", and not an array. */
++		if (parg->count || (t && strcmp(t, "string")))
++			return -EINVAL;
+ 		parg->type = find_fetch_type("string");
+-	else
++	} else
+ 		parg->type = find_fetch_type(t);
+ 	if (!parg->type) {
+ 		pr_info("Unsupported type: %s\n", t);
+diff --git a/mm/mmap.c b/mm/mmap.c
+index bd7b9f293b39..2d6a6662edb9 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -2735,9 +2735,17 @@ int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
+ 		return -EINVAL;
+ 
+ 	len = PAGE_ALIGN(len);
++	end = start + len;
+ 	if (len == 0)
+ 		return -EINVAL;
+ 
++	/*
++	 * arch_unmap() might do unmaps itself.  It must be called,
++	 * and must finish any rbtree manipulation, before this code
++	 * runs and starts to manipulate the rbtree itself.
++	 */
++	arch_unmap(mm, start, end);
++
+ 	/* Find the first overlapping VMA */
+ 	vma = find_vma(mm, start);
+ 	if (!vma)
+@@ -2746,7 +2754,6 @@ int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
+ 	/* we have  start < vma->vm_end  */
+ 
+ 	/* if it doesn't overlap, we have nothing.. */
+-	end = start + len;
+ 	if (vma->vm_start >= end)
+ 		return 0;
+ 
+@@ -2816,12 +2823,6 @@ int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
+ 	/* Detach vmas from rbtree */
+ 	detach_vmas_to_be_unmapped(mm, vma, prev, end);
+ 
+-	/*
+-	 * mpx unmap needs to be called with mmap_sem held for write.
+-	 * It is safe to call it before unmap_region().
+-	 */
+-	arch_unmap(mm, vma, start, end);
+-
+ 	if (downgrade)
+ 		downgrade_write(&mm->mmap_sem);
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index f409406254dd..255f99cb7c48 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -8911,7 +8911,7 @@ static void netdev_wait_allrefs(struct net_device *dev)
+ 
+ 		refcnt = netdev_refcnt_read(dev);
+ 
+-		if (time_after(jiffies, warning_time + 10 * HZ)) {
++		if (refcnt && time_after(jiffies, warning_time + 10 * HZ)) {
+ 			pr_emerg("unregister_netdevice: waiting for %s to become free. Usage count = %d\n",
+ 				 dev->name, refcnt);
+ 			warning_time = jiffies;
+diff --git a/net/core/flow_offload.c b/net/core/flow_offload.c
+index c3a00eac4804..5ce7d47a960e 100644
+--- a/net/core/flow_offload.c
++++ b/net/core/flow_offload.c
+@@ -54,6 +54,13 @@ void flow_rule_match_vlan(const struct flow_rule *rule,
+ }
+ EXPORT_SYMBOL(flow_rule_match_vlan);
+ 
++void flow_rule_match_cvlan(const struct flow_rule *rule,
++			   struct flow_match_vlan *out)
++{
++	FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_CVLAN, out);
++}
++EXPORT_SYMBOL(flow_rule_match_cvlan);
++
+ void flow_rule_match_ipv4_addrs(const struct flow_rule *rule,
+ 				struct flow_match_ipv4_addrs *out)
+ {
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 220c56e93659..467d771ac6ba 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -1496,14 +1496,15 @@ static int put_master_ifindex(struct sk_buff *skb, struct net_device *dev)
+ 	return ret;
+ }
+ 
+-static int nla_put_iflink(struct sk_buff *skb, const struct net_device *dev)
++static int nla_put_iflink(struct sk_buff *skb, const struct net_device *dev,
++			  bool force)
+ {
+ 	int ifindex = dev_get_iflink(dev);
+ 
+-	if (dev->ifindex == ifindex)
+-		return 0;
++	if (force || dev->ifindex != ifindex)
++		return nla_put_u32(skb, IFLA_LINK, ifindex);
+ 
+-	return nla_put_u32(skb, IFLA_LINK, ifindex);
++	return 0;
+ }
+ 
+ static noinline_for_stack int nla_put_ifalias(struct sk_buff *skb,
+@@ -1520,6 +1521,8 @@ static int rtnl_fill_link_netnsid(struct sk_buff *skb,
+ 				  const struct net_device *dev,
+ 				  struct net *src_net)
+ {
++	bool put_iflink = false;
++
+ 	if (dev->rtnl_link_ops && dev->rtnl_link_ops->get_link_net) {
+ 		struct net *link_net = dev->rtnl_link_ops->get_link_net(dev);
+ 
+@@ -1528,10 +1531,12 @@ static int rtnl_fill_link_netnsid(struct sk_buff *skb,
+ 
+ 			if (nla_put_s32(skb, IFLA_LINK_NETNSID, id))
+ 				return -EMSGSIZE;
++
++			put_iflink = true;
+ 		}
+ 	}
+ 
+-	return 0;
++	return nla_put_iflink(skb, dev, put_iflink);
+ }
+ 
+ static int rtnl_fill_link_af(struct sk_buff *skb,
+@@ -1617,7 +1622,6 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb,
+ #ifdef CONFIG_RPS
+ 	    nla_put_u32(skb, IFLA_NUM_RX_QUEUES, dev->num_rx_queues) ||
+ #endif
+-	    nla_put_iflink(skb, dev) ||
+ 	    put_master_ifindex(skb, dev) ||
+ 	    nla_put_u8(skb, IFLA_CARRIER, netif_carrier_ok(dev)) ||
+ 	    (dev->qdisc &&
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index 91247a6fc67f..9915f64b38a0 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -909,6 +909,12 @@ static void fib6_drop_pcpu_from(struct fib6_info *f6i,
+ {
+ 	int cpu;
+ 
++	/* Make sure rt6_make_pcpu_route() won't add other percpu routes
++	 * while we are cleaning them here.
++	 */
++	f6i->fib6_destroying = 1;
++	mb(); /* paired with the cmpxchg() in rt6_make_pcpu_route() */
++
+ 	/* release the reference to this fib entry from
+ 	 * all of its cached pcpu routes
+ 	 */
+@@ -932,6 +938,9 @@ static void fib6_purge_rt(struct fib6_info *rt, struct fib6_node *fn,
+ {
+ 	struct fib6_table *table = rt->fib6_table;
+ 
++	if (rt->rt6i_pcpu)
++		fib6_drop_pcpu_from(rt, table);
++
+ 	if (atomic_read(&rt->fib6_ref) != 1) {
+ 		/* This route is used as dummy address holder in some split
+ 		 * nodes. It is not leaked, but it still holds other resources,
+@@ -953,9 +962,6 @@ static void fib6_purge_rt(struct fib6_info *rt, struct fib6_node *fn,
+ 			fn = rcu_dereference_protected(fn->parent,
+ 				    lockdep_is_held(&table->tb6_lock));
+ 		}
+-
+-		if (rt->rt6i_pcpu)
+-			fib6_drop_pcpu_from(rt, table);
+ 	}
+ }
+ 
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 0520aca3354b..e470589fb93b 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -110,8 +110,8 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 			 int iif, int type, u32 portid, u32 seq,
+ 			 unsigned int flags);
+ static struct rt6_info *rt6_find_cached_rt(struct fib6_info *rt,
+-					   struct in6_addr *daddr,
+-					   struct in6_addr *saddr);
++					   const struct in6_addr *daddr,
++					   const struct in6_addr *saddr);
+ 
+ #ifdef CONFIG_IPV6_ROUTE_INFO
+ static struct fib6_info *rt6_add_route_info(struct net *net,
+@@ -1260,6 +1260,13 @@ static struct rt6_info *rt6_make_pcpu_route(struct net *net,
+ 	prev = cmpxchg(p, NULL, pcpu_rt);
+ 	BUG_ON(prev);
+ 
++	if (rt->fib6_destroying) {
++		struct fib6_info *from;
++
++		from = xchg((__force struct fib6_info **)&pcpu_rt->from, NULL);
++		fib6_info_release(from);
++	}
++
+ 	return pcpu_rt;
+ }
+ 
+@@ -1529,31 +1536,44 @@ out:
+  * Caller has to hold rcu_read_lock()
+  */
+ static struct rt6_info *rt6_find_cached_rt(struct fib6_info *rt,
+-					   struct in6_addr *daddr,
+-					   struct in6_addr *saddr)
++					   const struct in6_addr *daddr,
++					   const struct in6_addr *saddr)
+ {
++	const struct in6_addr *src_key = NULL;
+ 	struct rt6_exception_bucket *bucket;
+-	struct in6_addr *src_key = NULL;
+ 	struct rt6_exception *rt6_ex;
+ 	struct rt6_info *res = NULL;
+ 
+-	bucket = rcu_dereference(rt->rt6i_exception_bucket);
+-
+ #ifdef CONFIG_IPV6_SUBTREES
+ 	/* rt6i_src.plen != 0 indicates rt is in subtree
+ 	 * and exception table is indexed by a hash of
+ 	 * both rt6i_dst and rt6i_src.
+-	 * Otherwise, the exception table is indexed by
+-	 * a hash of only rt6i_dst.
++	 * However, the src addr used to create the hash
++	 * might not be exactly the passed-in saddr, which
++	 * is a /128 addr from the flow.
++	 * So we need to use f6i->fib6_src to redo lookup
++	 * if the passed in saddr does not find anything.
++	 * (See the logic in ip6_rt_cache_alloc() on how
++	 * rt->rt6i_src is updated.)
+ 	 */
+ 	if (rt->fib6_src.plen)
+ 		src_key = saddr;
++find_ex:
+ #endif
++	bucket = rcu_dereference(rt->rt6i_exception_bucket);
+ 	rt6_ex = __rt6_find_exception_rcu(&bucket, daddr, src_key);
+ 
+ 	if (rt6_ex && !rt6_check_expired(rt6_ex->rt6i))
+ 		res = rt6_ex->rt6i;
+ 
++#ifdef CONFIG_IPV6_SUBTREES
++	/* Use fib6_src as src_key and redo lookup */
++	if (!res && src_key && src_key != &rt->fib6_src.addr) {
++		src_key = &rt->fib6_src.addr;
++		goto find_ex;
++	}
++#endif
++
+ 	return res;
+ }
+ 
+@@ -2608,10 +2628,8 @@ out:
+ u32 ip6_mtu_from_fib6(struct fib6_info *f6i, struct in6_addr *daddr,
+ 		      struct in6_addr *saddr)
+ {
+-	struct rt6_exception_bucket *bucket;
+-	struct rt6_exception *rt6_ex;
+-	struct in6_addr *src_key;
+ 	struct inet6_dev *idev;
++	struct rt6_info *rt;
+ 	u32 mtu = 0;
+ 
+ 	if (unlikely(fib6_metric_locked(f6i, RTAX_MTU))) {
+@@ -2620,18 +2638,10 @@ u32 ip6_mtu_from_fib6(struct fib6_info *f6i, struct in6_addr *daddr,
+ 			goto out;
+ 	}
+ 
+-	src_key = NULL;
+-#ifdef CONFIG_IPV6_SUBTREES
+-	if (f6i->fib6_src.plen)
+-		src_key = saddr;
+-#endif
+-
+-	bucket = rcu_dereference(f6i->rt6i_exception_bucket);
+-	rt6_ex = __rt6_find_exception_rcu(&bucket, daddr, src_key);
+-	if (rt6_ex && !rt6_check_expired(rt6_ex->rt6i))
+-		mtu = dst_metric_raw(&rt6_ex->rt6i->dst, RTAX_MTU);
+-
+-	if (likely(!mtu)) {
++	rt = rt6_find_cached_rt(f6i, daddr, saddr);
++	if (unlikely(rt)) {
++		mtu = dst_metric_raw(&rt->dst, RTAX_MTU);
++	} else {
+ 		struct net_device *dev = fib6_info_nh_dev(f6i);
+ 
+ 		mtu = IPV6_MIN_MTU;
+diff --git a/net/tipc/core.c b/net/tipc/core.c
+index 5b38f5164281..d7b0688c98dd 100644
+--- a/net/tipc/core.c
++++ b/net/tipc/core.c
+@@ -66,6 +66,10 @@ static int __net_init tipc_init_net(struct net *net)
+ 	INIT_LIST_HEAD(&tn->node_list);
+ 	spin_lock_init(&tn->node_list_lock);
+ 
++	err = tipc_socket_init();
++	if (err)
++		goto out_socket;
++
+ 	err = tipc_sk_rht_init(net);
+ 	if (err)
+ 		goto out_sk_rht;
+@@ -92,6 +96,8 @@ out_subscr:
+ out_nametbl:
+ 	tipc_sk_rht_destroy(net);
+ out_sk_rht:
++	tipc_socket_stop();
++out_socket:
+ 	return err;
+ }
+ 
+@@ -102,6 +108,7 @@ static void __net_exit tipc_exit_net(struct net *net)
+ 	tipc_bcast_stop(net);
+ 	tipc_nametbl_stop(net);
+ 	tipc_sk_rht_destroy(net);
++	tipc_socket_stop();
+ }
+ 
+ static struct pernet_operations tipc_net_ops = {
+@@ -129,10 +136,6 @@ static int __init tipc_init(void)
+ 	if (err)
+ 		goto out_netlink_compat;
+ 
+-	err = tipc_socket_init();
+-	if (err)
+-		goto out_socket;
+-
+ 	err = tipc_register_sysctl();
+ 	if (err)
+ 		goto out_sysctl;
+@@ -152,8 +155,6 @@ out_bearer:
+ out_pernet:
+ 	tipc_unregister_sysctl();
+ out_sysctl:
+-	tipc_socket_stop();
+-out_socket:
+ 	tipc_netlink_compat_stop();
+ out_netlink_compat:
+ 	tipc_netlink_stop();
+@@ -168,7 +169,6 @@ static void __exit tipc_exit(void)
+ 	unregister_pernet_subsys(&tipc_net_ops);
+ 	tipc_netlink_stop();
+ 	tipc_netlink_compat_stop();
+-	tipc_socket_stop();
+ 	tipc_unregister_sysctl();
+ 
+ 	pr_info("Deactivated\n");
+diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
+index 15eb5d3d4750..96ab344f17bb 100644
+--- a/net/vmw_vsock/virtio_transport.c
++++ b/net/vmw_vsock/virtio_transport.c
+@@ -702,28 +702,27 @@ static int __init virtio_vsock_init(void)
+ 	if (!virtio_vsock_workqueue)
+ 		return -ENOMEM;
+ 
+-	ret = register_virtio_driver(&virtio_vsock_driver);
++	ret = vsock_core_init(&virtio_transport.transport);
+ 	if (ret)
+ 		goto out_wq;
+ 
+-	ret = vsock_core_init(&virtio_transport.transport);
++	ret = register_virtio_driver(&virtio_vsock_driver);
+ 	if (ret)
+-		goto out_vdr;
++		goto out_vci;
+ 
+ 	return 0;
+ 
+-out_vdr:
+-	unregister_virtio_driver(&virtio_vsock_driver);
++out_vci:
++	vsock_core_exit();
+ out_wq:
+ 	destroy_workqueue(virtio_vsock_workqueue);
+ 	return ret;
+-
+ }
+ 
+ static void __exit virtio_vsock_exit(void)
+ {
+-	vsock_core_exit();
+ 	unregister_virtio_driver(&virtio_vsock_driver);
++	vsock_core_exit();
+ 	destroy_workqueue(virtio_vsock_workqueue);
+ }
+ 
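Both this vsock fix and the net/tipc/core.c fix above apply the same
discipline: initialize in dependency order, tear down in exact reverse, and
make every error path unwind only what already succeeded, so the externally
visible piece never outlives the state it relies on. A generic, self-contained
sketch of that goto-unwind shape (core_init()/driver_register() are placeholder
names for the pattern, not kernel APIs):

	#include <stdio.h>

	/* Stand-ins for the real steps (vsock_core_init() /
	 * register_virtio_driver() in the patch above). */
	static int  core_init(void)          { puts("core up");     return 0; }
	static void core_exit(void)          { puts("core down"); }
	static int  driver_register(void)    { puts("driver up");   return 0; }
	static void driver_unregister(void)  { puts("driver down"); }

	static int example_init(void)
	{
		int err = core_init();          /* 1: state others depend on */

		if (err)
			return err;
		err = driver_register();        /* 2: become externally visible */
		if (err)
			goto out_core;          /* unwind only step 1 */
		return 0;

	out_core:
		core_exit();
		return err;
	}

	static void example_exit(void)
	{
		driver_unregister();            /* reverse: visibility first */
		core_exit();
	}

	int main(void)
	{
		if (example_init() == 0)
			example_exit();
		return 0;
	}
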
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index 602715fc9a75..f3f3d06cb6d8 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -786,12 +786,19 @@ static bool virtio_transport_close(struct vsock_sock *vsk)
+ 
+ void virtio_transport_release(struct vsock_sock *vsk)
+ {
++	struct virtio_vsock_sock *vvs = vsk->trans;
++	struct virtio_vsock_pkt *pkt, *tmp;
+ 	struct sock *sk = &vsk->sk;
+ 	bool remove_sock = true;
+ 
+ 	lock_sock(sk);
+ 	if (sk->sk_type == SOCK_STREAM)
+ 		remove_sock = virtio_transport_close(vsk);
++
++	list_for_each_entry_safe(pkt, tmp, &vvs->rx_queue, list) {
++		list_del(&pkt->list);
++		virtio_transport_free_pkt(pkt);
++	}
+ 	release_sock(sk);
+ 
+ 	if (remove_sock)
+diff --git a/scripts/gcc-plugins/arm_ssp_per_task_plugin.c b/scripts/gcc-plugins/arm_ssp_per_task_plugin.c
+index 89c47f57d1ce..8c1af9bdcb1b 100644
+--- a/scripts/gcc-plugins/arm_ssp_per_task_plugin.c
++++ b/scripts/gcc-plugins/arm_ssp_per_task_plugin.c
+@@ -36,7 +36,7 @@ static unsigned int arm_pertask_ssp_rtl_execute(void)
+ 		mask = GEN_INT(sext_hwi(sp_mask, GET_MODE_PRECISION(Pmode)));
+ 		masked_sp = gen_reg_rtx(Pmode);
+ 
+-		emit_insn_before(gen_rtx_SET(masked_sp,
++		emit_insn_before(gen_rtx_set(masked_sp,
+ 					     gen_rtx_AND(Pmode,
+ 							 stack_pointer_rtx,
+ 							 mask)),
+diff --git a/tools/objtool/Makefile b/tools/objtool/Makefile
+index 53f8be0f4a1f..88158239622b 100644
+--- a/tools/objtool/Makefile
++++ b/tools/objtool/Makefile
+@@ -7,11 +7,12 @@ ARCH := x86
+ endif
+ 
+ # always use the host compiler
++HOSTAR	?= ar
+ HOSTCC	?= gcc
+ HOSTLD	?= ld
++AR	 = $(HOSTAR)
+ CC	 = $(HOSTCC)
+ LD	 = $(HOSTLD)
+-AR	 = ar
+ 
+ ifeq ($(srctree),)
+ srctree := $(patsubst %/,%,$(dir $(CURDIR)))
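
With AR now defaulting to $(HOSTAR), builds that use non-default host binutils
can keep objtool's archiver consistent with its compiler and linker; an
illustrative invocation from the kernel top level:

	make HOSTCC=clang HOSTLD=ld.lld HOSTAR=llvm-ar tools/objtool
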
+diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+index 872fab163585..f4c3c84b090f 100644
+--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+@@ -58,6 +58,7 @@ enum intel_pt_pkt_state {
+ 	INTEL_PT_STATE_NO_IP,
+ 	INTEL_PT_STATE_ERR_RESYNC,
+ 	INTEL_PT_STATE_IN_SYNC,
++	INTEL_PT_STATE_TNT_CONT,
+ 	INTEL_PT_STATE_TNT,
+ 	INTEL_PT_STATE_TIP,
+ 	INTEL_PT_STATE_TIP_PGD,
+@@ -72,8 +73,9 @@ static inline bool intel_pt_sample_time(enum intel_pt_pkt_state pkt_state)
+ 	case INTEL_PT_STATE_NO_IP:
+ 	case INTEL_PT_STATE_ERR_RESYNC:
+ 	case INTEL_PT_STATE_IN_SYNC:
+-	case INTEL_PT_STATE_TNT:
++	case INTEL_PT_STATE_TNT_CONT:
+ 		return true;
++	case INTEL_PT_STATE_TNT:
+ 	case INTEL_PT_STATE_TIP:
+ 	case INTEL_PT_STATE_TIP_PGD:
+ 	case INTEL_PT_STATE_FUP:
+@@ -888,16 +890,20 @@ static uint64_t intel_pt_next_period(struct intel_pt_decoder *decoder)
+ 	timestamp = decoder->timestamp + decoder->timestamp_insn_cnt;
+ 	masked_timestamp = timestamp & decoder->period_mask;
+ 	if (decoder->continuous_period) {
+-		if (masked_timestamp != decoder->last_masked_timestamp)
++		if (masked_timestamp > decoder->last_masked_timestamp)
+ 			return 1;
+ 	} else {
+ 		timestamp += 1;
+ 		masked_timestamp = timestamp & decoder->period_mask;
+-		if (masked_timestamp != decoder->last_masked_timestamp) {
++		if (masked_timestamp > decoder->last_masked_timestamp) {
+ 			decoder->last_masked_timestamp = masked_timestamp;
+ 			decoder->continuous_period = true;
+ 		}
+ 	}
++
++	if (masked_timestamp < decoder->last_masked_timestamp)
++		return decoder->period_ticks;
++
+ 	return decoder->period_ticks - (timestamp - masked_timestamp);
+ }
+ 
+@@ -926,7 +932,10 @@ static void intel_pt_sample_insn(struct intel_pt_decoder *decoder)
+ 	case INTEL_PT_PERIOD_TICKS:
+ 		timestamp = decoder->timestamp + decoder->timestamp_insn_cnt;
+ 		masked_timestamp = timestamp & decoder->period_mask;
+-		decoder->last_masked_timestamp = masked_timestamp;
++		if (masked_timestamp > decoder->last_masked_timestamp)
++			decoder->last_masked_timestamp = masked_timestamp;
++		else
++			decoder->last_masked_timestamp += decoder->period_ticks;
+ 		break;
+ 	case INTEL_PT_PERIOD_NONE:
+ 	case INTEL_PT_PERIOD_MTC:
+@@ -1254,7 +1263,9 @@ static int intel_pt_walk_tnt(struct intel_pt_decoder *decoder)
+ 				return -ENOENT;
+ 			}
+ 			decoder->tnt.count -= 1;
+-			if (!decoder->tnt.count)
++			if (decoder->tnt.count)
++				decoder->pkt_state = INTEL_PT_STATE_TNT_CONT;
++			else
+ 				decoder->pkt_state = INTEL_PT_STATE_IN_SYNC;
+ 			decoder->tnt.payload <<= 1;
+ 			decoder->state.from_ip = decoder->ip;
+@@ -1285,7 +1296,9 @@ static int intel_pt_walk_tnt(struct intel_pt_decoder *decoder)
+ 
+ 		if (intel_pt_insn.branch == INTEL_PT_BR_CONDITIONAL) {
+ 			decoder->tnt.count -= 1;
+-			if (!decoder->tnt.count)
++			if (decoder->tnt.count)
++				decoder->pkt_state = INTEL_PT_STATE_TNT_CONT;
++			else
+ 				decoder->pkt_state = INTEL_PT_STATE_IN_SYNC;
+ 			if (decoder->tnt.payload & BIT63) {
+ 				decoder->tnt.payload <<= 1;
+@@ -1305,8 +1318,11 @@ static int intel_pt_walk_tnt(struct intel_pt_decoder *decoder)
+ 				return 0;
+ 			}
+ 			decoder->ip += intel_pt_insn.length;
+-			if (!decoder->tnt.count)
++			if (!decoder->tnt.count) {
++				decoder->sample_timestamp = decoder->timestamp;
++				decoder->sample_insn_cnt = decoder->timestamp_insn_cnt;
+ 				return -EAGAIN;
++			}
+ 			decoder->tnt.payload <<= 1;
+ 			continue;
+ 		}
+@@ -2365,6 +2381,7 @@ const struct intel_pt_state *intel_pt_decode(struct intel_pt_decoder *decoder)
+ 			err = intel_pt_walk_trace(decoder);
+ 			break;
+ 		case INTEL_PT_STATE_TNT:
++		case INTEL_PT_STATE_TNT_CONT:
+ 			err = intel_pt_walk_tnt(decoder);
+ 			if (err == -EAGAIN)
+ 				err = intel_pt_walk_trace(decoder);



* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-05-31 14:04 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2019-05-31 14:04 UTC (permalink / raw
  To: gentoo-commits

commit:     7b28b2e87d40d965b55c189c5dceb90e6b9d31d2
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri May 31 14:04:02 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri May 31 14:04:02 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7b28b2e8

Linux patch 5.1.6

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |     4 +
 1005_linux-5.1.6.patch | 14203 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 14207 insertions(+)

diff --git a/0000_README b/0000_README
index 2431699..7713f53 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch:  1004_linux-5.1.5.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.1.5
 
+Patch:  1005_linux-5.1.6.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.1.6
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1005_linux-5.1.6.patch b/1005_linux-5.1.6.patch
new file mode 100644
index 0000000..897ab6d
--- /dev/null
+++ b/1005_linux-5.1.6.patch
@@ -0,0 +1,14203 @@
+diff --git a/Documentation/arm64/silicon-errata.txt b/Documentation/arm64/silicon-errata.txt
+index d1e2bb801e1b..6e97a3f771ef 100644
+--- a/Documentation/arm64/silicon-errata.txt
++++ b/Documentation/arm64/silicon-errata.txt
+@@ -61,6 +61,7 @@ stable kernels.
+ | ARM            | Cortex-A76      | #1188873        | ARM64_ERRATUM_1188873       |
+ | ARM            | Cortex-A76      | #1165522        | ARM64_ERRATUM_1165522       |
+ | ARM            | Cortex-A76      | #1286807        | ARM64_ERRATUM_1286807       |
++| ARM            | Cortex-A76      | #1463225        | ARM64_ERRATUM_1463225       |
+ | ARM            | MMU-500         | #841119,#826419 | N/A                         |
+ |                |                 |                 |                             |
+ | Cavium         | ThunderX ITS    | #22375, #24313  | CAVIUM_ERRATUM_22375        |
+diff --git a/Documentation/devicetree/bindings/phy/qcom-qmp-phy.txt b/Documentation/devicetree/bindings/phy/qcom-qmp-phy.txt
+index 5d181fc3cc18..4a78ba8b85bc 100644
+--- a/Documentation/devicetree/bindings/phy/qcom-qmp-phy.txt
++++ b/Documentation/devicetree/bindings/phy/qcom-qmp-phy.txt
+@@ -59,7 +59,8 @@ Required properties:
+ 	   one for each entry in reset-names.
+  - reset-names: "phy" for reset of phy block,
+ 		"common" for phy common block reset,
+-		"cfg" for phy's ahb cfg block reset.
++		"cfg" for phy's ahb cfg block reset,
++		"ufsphy" for the PHY reset in the UFS controller.
+ 
+ 		For "qcom,ipq8074-qmp-pcie-phy" must contain:
+ 			"phy", "common".
+@@ -74,7 +75,8 @@ Required properties:
+ 			"phy", "common".
+ 		For "qcom,sdm845-qmp-usb3-uni-phy" must contain:
+ 			"phy", "common".
+-		For "qcom,sdm845-qmp-ufs-phy": no resets are listed.
++		For "qcom,sdm845-qmp-ufs-phy": must contain:
++			"ufsphy".
+ 
+  - vdda-phy-supply: Phandle to a regulator supply to PHY core block.
+  - vdda-pll-supply: Phandle to 1.8V regulator supply to PHY refclk pll block.
+diff --git a/Makefile b/Makefile
+index 24a16a544ffd..d8bdd2bb55dc 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
+index 07e27f212dc7..d2453e2d3f1f 100644
+--- a/arch/arm/include/asm/cp15.h
++++ b/arch/arm/include/asm/cp15.h
+@@ -68,6 +68,8 @@
+ #define BPIALL				__ACCESS_CP15(c7, 0, c5, 6)
+ #define ICIALLU				__ACCESS_CP15(c7, 0, c5, 0)
+ 
++#define CNTVCT				__ACCESS_CP15_64(1, c14)
++
+ extern unsigned long cr_alignment;	/* defined in entry-armv.S */
+ 
+ static inline unsigned long get_cr(void)
+diff --git a/arch/arm/vdso/vgettimeofday.c b/arch/arm/vdso/vgettimeofday.c
+index a9dd619c6c29..7bdbf5d5c47d 100644
+--- a/arch/arm/vdso/vgettimeofday.c
++++ b/arch/arm/vdso/vgettimeofday.c
+@@ -18,9 +18,9 @@
+ #include <linux/compiler.h>
+ #include <linux/hrtimer.h>
+ #include <linux/time.h>
+-#include <asm/arch_timer.h>
+ #include <asm/barrier.h>
+ #include <asm/bug.h>
++#include <asm/cp15.h>
+ #include <asm/page.h>
+ #include <asm/unistd.h>
+ #include <asm/vdso_datapage.h>
+@@ -123,7 +123,8 @@ static notrace u64 get_ns(struct vdso_data *vdata)
+ 	u64 cycle_now;
+ 	u64 nsec;
+ 
+-	cycle_now = arch_counter_get_cntvct();
++	isb();
++	cycle_now = read_sysreg(CNTVCT);
+ 
+ 	cycle_delta = (cycle_now - vdata->cs_cycle_last) & vdata->cs_mask;
+ 
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 7e34b9eba5de..d218729ec852 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -517,6 +517,24 @@ config ARM64_ERRATUM_1286807
+ 
+ 	  If unsure, say Y.
+ 
++config ARM64_ERRATUM_1463225
++	bool "Cortex-A76: Software Step might prevent interrupt recognition"
++	default y
++	help
++	  This option adds a workaround for Arm Cortex-A76 erratum 1463225.
++
++	  On the affected Cortex-A76 cores (r0p0 to r3p1), software stepping
++	  of a system call instruction (SVC) can prevent recognition of
++	  subsequent interrupts when software stepping is disabled in the
++	  exception handler of the system call and either kernel debugging
++	  is enabled or VHE is in use.
++
++	  Work around the erratum by triggering a dummy step exception
++	  when handling a system call from a task that is being stepped
++	  in a VHE configuration of the kernel.
++
++	  If unsure, say Y.
++
+ config CAVIUM_ERRATUM_22375
+ 	bool "Cavium erratum 22375, 24313"
+ 	default y
+@@ -1347,6 +1365,7 @@ config ARM64_MODULE_PLTS
+ 
+ config ARM64_PSEUDO_NMI
+ 	bool "Support for NMI-like interrupts"
++	depends on BROKEN # 1556553607-46531-1-git-send-email-julien.thierry@arm.com
+ 	select CONFIG_ARM_GIC_V3
+ 	help
+ 	  Adds support for mimicking Non-Maskable Interrupts through the use of
+diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
+index f6a76e43f39e..4389d5d0ca0f 100644
+--- a/arch/arm64/include/asm/cpucaps.h
++++ b/arch/arm64/include/asm/cpucaps.h
+@@ -61,7 +61,8 @@
+ #define ARM64_HAS_GENERIC_AUTH_ARCH		40
+ #define ARM64_HAS_GENERIC_AUTH_IMP_DEF		41
+ #define ARM64_HAS_IRQ_PRIO_MASKING		42
++#define ARM64_WORKAROUND_1463225		43
+ 
+-#define ARM64_NCAPS				43
++#define ARM64_NCAPS				44
+ 
+ #endif /* __ASM_CPUCAPS_H */
+diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
+index 6fb2214333a2..2d78ea6932b7 100644
+--- a/arch/arm64/include/asm/futex.h
++++ b/arch/arm64/include/asm/futex.h
+@@ -58,7 +58,7 @@ do {									\
+ static inline int
+ arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *_uaddr)
+ {
+-	int oldval = 0, ret, tmp;
++	int oldval, ret, tmp;
+ 	u32 __user *uaddr = __uaccess_mask_ptr(_uaddr);
+ 
+ 	pagefault_disable();
+diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
+index de70c1eabf33..74ebe9693714 100644
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -478,6 +478,8 @@ static inline phys_addr_t pmd_page_paddr(pmd_t pmd)
+ 	return __pmd_to_phys(pmd);
+ }
+ 
++static inline void pte_unmap(pte_t *pte) { }
++
+ /* Find an entry in the third-level page table. */
+ #define pte_index(addr)		(((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
+ 
+@@ -486,7 +488,6 @@ static inline phys_addr_t pmd_page_paddr(pmd_t pmd)
+ 
+ #define pte_offset_map(dir,addr)	pte_offset_kernel((dir), (addr))
+ #define pte_offset_map_nested(dir,addr)	pte_offset_kernel((dir), (addr))
+-#define pte_unmap(pte)			do { } while (0)
+ #define pte_unmap_nested(pte)		do { } while (0)
+ 
+ #define pte_set_fixmap(addr)		((pte_t *)set_fixmap_offset(FIX_PTE, addr))
+diff --git a/arch/arm64/include/asm/vdso_datapage.h b/arch/arm64/include/asm/vdso_datapage.h
+index 2b9a63771eda..f89263c8e11a 100644
+--- a/arch/arm64/include/asm/vdso_datapage.h
++++ b/arch/arm64/include/asm/vdso_datapage.h
+@@ -38,6 +38,7 @@ struct vdso_data {
+ 	__u32 tz_minuteswest;	/* Whacky timezone stuff */
+ 	__u32 tz_dsttime;
+ 	__u32 use_syscall;
++	__u32 hrtimer_res;
+ };
+ 
+ #endif /* !__ASSEMBLY__ */
+diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
+index 7f40dcbdd51d..e10e2a5d9ddc 100644
+--- a/arch/arm64/kernel/asm-offsets.c
++++ b/arch/arm64/kernel/asm-offsets.c
+@@ -94,7 +94,7 @@ int main(void)
+   DEFINE(CLOCK_REALTIME,	CLOCK_REALTIME);
+   DEFINE(CLOCK_MONOTONIC,	CLOCK_MONOTONIC);
+   DEFINE(CLOCK_MONOTONIC_RAW,	CLOCK_MONOTONIC_RAW);
+-  DEFINE(CLOCK_REALTIME_RES,	MONOTONIC_RES_NSEC);
++  DEFINE(CLOCK_REALTIME_RES,	offsetof(struct vdso_data, hrtimer_res));
+   DEFINE(CLOCK_REALTIME_COARSE,	CLOCK_REALTIME_COARSE);
+   DEFINE(CLOCK_MONOTONIC_COARSE,CLOCK_MONOTONIC_COARSE);
+   DEFINE(CLOCK_COARSE_RES,	LOW_RES_NSEC);
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 9950bb0cbd52..87019cd73f22 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -464,6 +464,22 @@ out_printmsg:
+ }
+ #endif	/* CONFIG_ARM64_SSBD */
+ 
++#ifdef CONFIG_ARM64_ERRATUM_1463225
++DEFINE_PER_CPU(int, __in_cortex_a76_erratum_1463225_wa);
++
++static bool
++has_cortex_a76_erratum_1463225(const struct arm64_cpu_capabilities *entry,
++			       int scope)
++{
++	u32 midr = read_cpuid_id();
++	/* Cortex-A76 r0p0 - r3p1 */
++	struct midr_range range = MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 3, 1);
++
++	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
++	return is_midr_in_range(midr, &range) && is_kernel_in_hyp_mode();
++}
++#endif
++
+ static void __maybe_unused
+ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
+ {
+@@ -738,6 +754,14 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ 		.capability = ARM64_WORKAROUND_1165522,
+ 		ERRATA_MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 2, 0),
+ 	},
++#endif
++#ifdef CONFIG_ARM64_ERRATUM_1463225
++	{
++		.desc = "ARM erratum 1463225",
++		.capability = ARM64_WORKAROUND_1463225,
++		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
++		.matches = has_cortex_a76_erratum_1463225,
++	},
+ #endif
+ 	{
+ 	}
+diff --git a/arch/arm64/kernel/cpu_ops.c b/arch/arm64/kernel/cpu_ops.c
+index ea001241bdd4..00f8b8612b69 100644
+--- a/arch/arm64/kernel/cpu_ops.c
++++ b/arch/arm64/kernel/cpu_ops.c
+@@ -85,6 +85,7 @@ static const char *__init cpu_read_enable_method(int cpu)
+ 				pr_err("%pOF: missing enable-method property\n",
+ 					dn);
+ 		}
++		of_node_put(dn);
+ 	} else {
+ 		enable_method = acpi_get_enable_method(cpu);
+ 		if (!enable_method) {
+diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
+index b09b6f75f759..06941c1fe418 100644
+--- a/arch/arm64/kernel/kaslr.c
++++ b/arch/arm64/kernel/kaslr.c
+@@ -145,15 +145,15 @@ u64 __init kaslr_early_init(u64 dt_phys)
+ 
+ 	if (IS_ENABLED(CONFIG_RANDOMIZE_MODULE_REGION_FULL)) {
+ 		/*
+-		 * Randomize the module region over a 4 GB window covering the
++		 * Randomize the module region over a 2 GB window covering the
+ 		 * kernel. This reduces the risk of modules leaking information
+ 		 * about the address of the kernel itself, but results in
+ 		 * branches between modules and the core kernel that are
+ 		 * resolved via PLTs. (Branches between modules will be
+ 		 * resolved normally.)
+ 		 */
+-		module_range = SZ_4G - (u64)(_end - _stext);
+-		module_alloc_base = max((u64)_end + offset - SZ_4G,
++		module_range = SZ_2G - (u64)(_end - _stext);
++		module_alloc_base = max((u64)_end + offset - SZ_2G,
+ 					(u64)MODULES_VADDR);
+ 	} else {
+ 		/*
+diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
+index f713e2fc4d75..1e418e69b58c 100644
+--- a/arch/arm64/kernel/module.c
++++ b/arch/arm64/kernel/module.c
+@@ -56,7 +56,7 @@ void *module_alloc(unsigned long size)
+ 		 * can simply omit this fallback in that case.
+ 		 */
+ 		p = __vmalloc_node_range(size, MODULE_ALIGN, module_alloc_base,
+-				module_alloc_base + SZ_4G, GFP_KERNEL,
++				module_alloc_base + SZ_2G, GFP_KERNEL,
+ 				PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
+ 				__builtin_return_address(0));
+ 
+diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
+index 5610ac01c1ec..871c739f060a 100644
+--- a/arch/arm64/kernel/syscall.c
++++ b/arch/arm64/kernel/syscall.c
+@@ -8,6 +8,7 @@
+ #include <linux/syscalls.h>
+ 
+ #include <asm/daifflags.h>
++#include <asm/debug-monitors.h>
+ #include <asm/fpsimd.h>
+ #include <asm/syscall.h>
+ #include <asm/thread_info.h>
+@@ -60,6 +61,35 @@ static inline bool has_syscall_work(unsigned long flags)
+ int syscall_trace_enter(struct pt_regs *regs);
+ void syscall_trace_exit(struct pt_regs *regs);
+ 
++#ifdef CONFIG_ARM64_ERRATUM_1463225
++DECLARE_PER_CPU(int, __in_cortex_a76_erratum_1463225_wa);
++
++static void cortex_a76_erratum_1463225_svc_handler(void)
++{
++	u32 reg, val;
++
++	if (!unlikely(test_thread_flag(TIF_SINGLESTEP)))
++		return;
++
++	if (!unlikely(this_cpu_has_cap(ARM64_WORKAROUND_1463225)))
++		return;
++
++	__this_cpu_write(__in_cortex_a76_erratum_1463225_wa, 1);
++	reg = read_sysreg(mdscr_el1);
++	val = reg | DBG_MDSCR_SS | DBG_MDSCR_KDE;
++	write_sysreg(val, mdscr_el1);
++	asm volatile("msr daifclr, #8");
++	isb();
++
++	/* We will have taken a single-step exception by this point */
++
++	write_sysreg(reg, mdscr_el1);
++	__this_cpu_write(__in_cortex_a76_erratum_1463225_wa, 0);
++}
++#else
++static void cortex_a76_erratum_1463225_svc_handler(void) { }
++#endif /* CONFIG_ARM64_ERRATUM_1463225 */
++
+ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
+ 			   const syscall_fn_t syscall_table[])
+ {
+@@ -68,6 +98,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
+ 	regs->orig_x0 = regs->regs[0];
+ 	regs->syscallno = scno;
+ 
++	cortex_a76_erratum_1463225_svc_handler();
+ 	local_daif_restore(DAIF_PROCCTX);
+ 	user_exit();
+ 
+diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
+index 2d419006ad43..ec0bb588d755 100644
+--- a/arch/arm64/kernel/vdso.c
++++ b/arch/arm64/kernel/vdso.c
+@@ -232,6 +232,9 @@ void update_vsyscall(struct timekeeper *tk)
+ 	vdso_data->wtm_clock_sec		= tk->wall_to_monotonic.tv_sec;
+ 	vdso_data->wtm_clock_nsec		= tk->wall_to_monotonic.tv_nsec;
+ 
++	/* Read without the seqlock held by clock_getres() */
++	WRITE_ONCE(vdso_data->hrtimer_res, hrtimer_resolution);
++
+ 	if (!use_syscall) {
+ 		/* tkr_mono.cycle_last == tkr_raw.cycle_last */
+ 		vdso_data->cs_cycle_last	= tk->tkr_mono.cycle_last;
+diff --git a/arch/arm64/kernel/vdso/gettimeofday.S b/arch/arm64/kernel/vdso/gettimeofday.S
+index e8f60112818f..856fee6d3512 100644
+--- a/arch/arm64/kernel/vdso/gettimeofday.S
++++ b/arch/arm64/kernel/vdso/gettimeofday.S
+@@ -308,13 +308,14 @@ ENTRY(__kernel_clock_getres)
+ 	ccmp	w0, #CLOCK_MONOTONIC_RAW, #0x4, ne
+ 	b.ne	1f
+ 
+-	ldr	x2, 5f
++	adr	vdso_data, _vdso_data
++	ldr	w2, [vdso_data, #CLOCK_REALTIME_RES]
+ 	b	2f
+ 1:
+ 	cmp	w0, #CLOCK_REALTIME_COARSE
+ 	ccmp	w0, #CLOCK_MONOTONIC_COARSE, #0x4, ne
+ 	b.ne	4f
+-	ldr	x2, 6f
++	ldr	x2, 5f
+ 2:
+ 	cbz	x1, 3f
+ 	stp	xzr, x2, [x1]
+@@ -328,8 +329,6 @@ ENTRY(__kernel_clock_getres)
+ 	svc	#0
+ 	ret
+ 5:
+-	.quad	CLOCK_REALTIME_RES
+-6:
+ 	.quad	CLOCK_COARSE_RES
+ 	.cfi_endproc
+ ENDPROC(__kernel_clock_getres)
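
The effect of the two hunks above is user-visible: clock_getres() on arm64 is
answered in the vDSO, and it previously reported the compile-time
MONOTONIC_RES_NSEC rather than the kernel's runtime hrtimer_resolution. A
minimal C caller whose answer this fixes:

	#include <stdio.h>
	#include <time.h>

	int main(void)
	{
		struct timespec res;

		/* Served from the vDSO on arm64; with this patch the
		 * result tracks the kernel's runtime hrtimer_resolution
		 * instead of a constant baked in at build time. */
		if (clock_getres(CLOCK_MONOTONIC, &res) == 0)
			printf("resolution: %ld ns\n", res.tv_nsec);
		return 0;
	}
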
+diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
+index 78c0a72f822c..674860e3e478 100644
+--- a/arch/arm64/mm/dma-mapping.c
++++ b/arch/arm64/mm/dma-mapping.c
+@@ -249,6 +249,11 @@ static int __iommu_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
+ 	if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret))
+ 		return ret;
+ 
++	if (!is_vmalloc_addr(cpu_addr)) {
++		unsigned long pfn = page_to_pfn(virt_to_page(cpu_addr));
++		return __swiotlb_mmap_pfn(vma, pfn, size);
++	}
++
+ 	if (attrs & DMA_ATTR_FORCE_CONTIGUOUS) {
+ 		/*
+ 		 * DMA_ATTR_FORCE_CONTIGUOUS allocations are always remapped,
+@@ -272,6 +277,11 @@ static int __iommu_get_sgtable(struct device *dev, struct sg_table *sgt,
+ 	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+ 	struct vm_struct *area = find_vm_area(cpu_addr);
+ 
++	if (!is_vmalloc_addr(cpu_addr)) {
++		struct page *page = virt_to_page(cpu_addr);
++		return __swiotlb_get_sgtable_page(sgt, page, size);
++	}
++
+ 	if (attrs & DMA_ATTR_FORCE_CONTIGUOUS) {
+ 		/*
+ 		 * DMA_ATTR_FORCE_CONTIGUOUS allocations are always remapped,
+diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
+index 1a7e92ab69eb..9a6099a2c633 100644
+--- a/arch/arm64/mm/fault.c
++++ b/arch/arm64/mm/fault.c
+@@ -810,14 +810,47 @@ void __init hook_debug_fault_code(int nr,
+ 	debug_fault_info[nr].name	= name;
+ }
+ 
++#ifdef CONFIG_ARM64_ERRATUM_1463225
++DECLARE_PER_CPU(int, __in_cortex_a76_erratum_1463225_wa);
++
++static int __exception
++cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
++{
++	if (user_mode(regs))
++		return 0;
++
++	if (!__this_cpu_read(__in_cortex_a76_erratum_1463225_wa))
++		return 0;
++
++	/*
++	 * We've taken a dummy step exception from the kernel to ensure
++	 * that interrupts are re-enabled on the syscall path. Return
++	 * to cortex_a76_erratum_1463225_svc_handler() with debug exceptions
++	 * masked so that we can safely restore the mdscr and get on with
++	 * handling the syscall.
++	 */
++	regs->pstate |= PSR_D_BIT;
++	return 1;
++}
++#else
++static int __exception
++cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
++{
++	return 0;
++}
++#endif /* CONFIG_ARM64_ERRATUM_1463225 */
++
+ asmlinkage int __exception do_debug_exception(unsigned long addr_if_watchpoint,
+-					      unsigned int esr,
+-					      struct pt_regs *regs)
++					       unsigned int esr,
++					       struct pt_regs *regs)
+ {
+ 	const struct fault_info *inf = esr_to_debug_fault_info(esr);
+ 	unsigned long pc = instruction_pointer(regs);
+ 	int rv;
+ 
++	if (cortex_a76_erratum_1463225_debug_handler(regs))
++		return 0;
++
+ 	/*
+ 	 * Tell lockdep we disabled irqs in entry.S. Do nothing if they were
+ 	 * already disabled to preserve the last enabled/disabled addresses.
+diff --git a/arch/powerpc/boot/addnote.c b/arch/powerpc/boot/addnote.c
+index 9d9f6f334d3c..3da3e2b1b51b 100644
+--- a/arch/powerpc/boot/addnote.c
++++ b/arch/powerpc/boot/addnote.c
+@@ -223,7 +223,11 @@ main(int ac, char **av)
+ 	PUT_16(E_PHNUM, np + 2);
+ 
+ 	/* write back */
+-	lseek(fd, (long) 0, SEEK_SET);
++	i = lseek(fd, (long) 0, SEEK_SET);
++	if (i < 0) {
++		perror("lseek");
++		exit(1);
++	}
+ 	i = write(fd, buf, n);
+ 	if (i < 0) {
+ 		perror("write");
+diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
+index 3fad8d499767..5321a11c2835 100644
+--- a/arch/powerpc/kernel/head_64.S
++++ b/arch/powerpc/kernel/head_64.S
+@@ -968,7 +968,9 @@ start_here_multiplatform:
+ 
+ 	/* Restore parameters passed from prom_init/kexec */
+ 	mr	r3,r31
+-	bl	early_setup		/* also sets r13 and SPRG_PACA */
++	LOAD_REG_ADDR(r12, DOTSYM(early_setup))
++	mtctr	r12
++	bctrl		/* also sets r13 and SPRG_PACA */
+ 
+ 	LOAD_REG_ADDR(r3, start_here_common)
+ 	ld	r4,PACAKMSR(r13)
+diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
+index 3c6ab22a0c4e..af3c15a1d41e 100644
+--- a/arch/powerpc/kernel/watchdog.c
++++ b/arch/powerpc/kernel/watchdog.c
+@@ -77,7 +77,7 @@ static u64 wd_smp_panic_timeout_tb __read_mostly; /* panic other CPUs */
+ 
+ static u64 wd_timer_period_ms __read_mostly;  /* interval between heartbeat */
+ 
+-static DEFINE_PER_CPU(struct timer_list, wd_timer);
++static DEFINE_PER_CPU(struct hrtimer, wd_hrtimer);
+ static DEFINE_PER_CPU(u64, wd_timer_tb);
+ 
+ /* SMP checker bits */
+@@ -293,21 +293,21 @@ out:
+ 	nmi_exit();
+ }
+ 
+-static void wd_timer_reset(unsigned int cpu, struct timer_list *t)
+-{
+-	t->expires = jiffies + msecs_to_jiffies(wd_timer_period_ms);
+-	if (wd_timer_period_ms > 1000)
+-		t->expires = __round_jiffies_up(t->expires, cpu);
+-	add_timer_on(t, cpu);
+-}
+-
+-static void wd_timer_fn(struct timer_list *t)
++static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
+ {
+ 	int cpu = smp_processor_id();
+ 
++	if (!(watchdog_enabled & NMI_WATCHDOG_ENABLED))
++		return HRTIMER_NORESTART;
++
++	if (!cpumask_test_cpu(cpu, &watchdog_cpumask))
++		return HRTIMER_NORESTART;
++
+ 	watchdog_timer_interrupt(cpu);
+ 
+-	wd_timer_reset(cpu, t);
++	hrtimer_forward_now(hrtimer, ms_to_ktime(wd_timer_period_ms));
++
++	return HRTIMER_RESTART;
+ }
+ 
+ void arch_touch_nmi_watchdog(void)
+@@ -323,37 +323,22 @@ void arch_touch_nmi_watchdog(void)
+ }
+ EXPORT_SYMBOL(arch_touch_nmi_watchdog);
+ 
+-static void start_watchdog_timer_on(unsigned int cpu)
+-{
+-	struct timer_list *t = per_cpu_ptr(&wd_timer, cpu);
+-
+-	per_cpu(wd_timer_tb, cpu) = get_tb();
+-
+-	timer_setup(t, wd_timer_fn, TIMER_PINNED);
+-	wd_timer_reset(cpu, t);
+-}
+-
+-static void stop_watchdog_timer_on(unsigned int cpu)
+-{
+-	struct timer_list *t = per_cpu_ptr(&wd_timer, cpu);
+-
+-	del_timer_sync(t);
+-}
+-
+-static int start_wd_on_cpu(unsigned int cpu)
++static void start_watchdog(void *arg)
+ {
++	struct hrtimer *hrtimer = this_cpu_ptr(&wd_hrtimer);
++	int cpu = smp_processor_id();
+ 	unsigned long flags;
+ 
+ 	if (cpumask_test_cpu(cpu, &wd_cpus_enabled)) {
+ 		WARN_ON(1);
+-		return 0;
++		return;
+ 	}
+ 
+ 	if (!(watchdog_enabled & NMI_WATCHDOG_ENABLED))
+-		return 0;
++		return;
+ 
+ 	if (!cpumask_test_cpu(cpu, &watchdog_cpumask))
+-		return 0;
++		return;
+ 
+ 	wd_smp_lock(&flags);
+ 	cpumask_set_cpu(cpu, &wd_cpus_enabled);
+@@ -363,27 +348,40 @@ static int start_wd_on_cpu(unsigned int cpu)
+ 	}
+ 	wd_smp_unlock(&flags);
+ 
+-	start_watchdog_timer_on(cpu);
++	*this_cpu_ptr(&wd_timer_tb) = get_tb();
+ 
+-	return 0;
++	hrtimer_init(hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++	hrtimer->function = watchdog_timer_fn;
++	hrtimer_start(hrtimer, ms_to_ktime(wd_timer_period_ms),
++		      HRTIMER_MODE_REL_PINNED);
+ }
+ 
+-static int stop_wd_on_cpu(unsigned int cpu)
++static int start_watchdog_on_cpu(unsigned int cpu)
+ {
++	return smp_call_function_single(cpu, start_watchdog, NULL, true);
++}
++
++static void stop_watchdog(void *arg)
++{
++	struct hrtimer *hrtimer = this_cpu_ptr(&wd_hrtimer);
++	int cpu = smp_processor_id();
+ 	unsigned long flags;
+ 
+ 	if (!cpumask_test_cpu(cpu, &wd_cpus_enabled))
+-		return 0; /* Can happen in CPU unplug case */
++		return; /* Can happen in CPU unplug case */
+ 
+-	stop_watchdog_timer_on(cpu);
++	hrtimer_cancel(hrtimer);
+ 
+ 	wd_smp_lock(&flags);
+ 	cpumask_clear_cpu(cpu, &wd_cpus_enabled);
+ 	wd_smp_unlock(&flags);
+ 
+ 	wd_smp_clear_cpu_pending(cpu, get_tb());
++}
+ 
+-	return 0;
++static int stop_watchdog_on_cpu(unsigned int cpu)
++{
++	return smp_call_function_single(cpu, stop_watchdog, NULL, true);
+ }
+ 
+ static void watchdog_calc_timeouts(void)
+@@ -402,7 +400,7 @@ void watchdog_nmi_stop(void)
+ 	int cpu;
+ 
+ 	for_each_cpu(cpu, &wd_cpus_enabled)
+-		stop_wd_on_cpu(cpu);
++		stop_watchdog_on_cpu(cpu);
+ }
+ 
+ void watchdog_nmi_start(void)
+@@ -411,7 +409,7 @@ void watchdog_nmi_start(void)
+ 
+ 	watchdog_calc_timeouts();
+ 	for_each_cpu_and(cpu, cpu_online_mask, &watchdog_cpumask)
+-		start_wd_on_cpu(cpu);
++		start_watchdog_on_cpu(cpu);
+ }
+ 
+ /*
+@@ -423,7 +421,8 @@ int __init watchdog_nmi_probe(void)
+ 
+ 	err = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
+ 					"powerpc/watchdog:online",
+-					start_wd_on_cpu, stop_wd_on_cpu);
++					start_watchdog_on_cpu,
++					stop_watchdog_on_cpu);
+ 	if (err < 0) {
+ 		pr_warn("could not be initialized");
+ 		return err;
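
The watchdog rework above replaces a jiffies-based timer_list with a pinned hrtimer, and starts/stops it via smp_call_function_single() so the timer is always armed on the CPU it monitors. Unlike the old wd_timer_reset(), the callback re-arms itself by forwarding its own expiry and returning HRTIMER_RESTART. A minimal sketch of that self-rearming pattern (demo_* names are illustrative; kernel context assumed):

    #include <linux/hrtimer.h>
    #include <linux/ktime.h>

    static struct hrtimer demo_timer;
    static unsigned long demo_period_ms = 100;

    /* Callback: push the expiry forward by one period, then ask the
     * core to restart the timer, as watchdog_timer_fn() does above. */
    static enum hrtimer_restart demo_fn(struct hrtimer *t)
    {
            hrtimer_forward_now(t, ms_to_ktime(demo_period_ms));
            return HRTIMER_RESTART;
    }

    static void demo_start(void)
    {
            hrtimer_init(&demo_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
            demo_timer.function = demo_fn;
            hrtimer_start(&demo_timer, ms_to_ktime(demo_period_ms),
                          HRTIMER_MODE_REL_PINNED);
    }
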
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index f976676004ad..48c9a97eb2c3 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -1498,6 +1498,9 @@ int start_topology_update(void)
+ {
+ 	int rc = 0;
+ 
++	if (!topology_updates_enabled)
++		return 0;
++
+ 	if (firmware_has_feature(FW_FEATURE_PRRN)) {
+ 		if (!prrn_enabled) {
+ 			prrn_enabled = 1;
+@@ -1531,6 +1534,9 @@ int stop_topology_update(void)
+ {
+ 	int rc = 0;
+ 
++	if (!topology_updates_enabled)
++		return 0;
++
+ 	if (prrn_enabled) {
+ 		prrn_enabled = 0;
+ #ifdef CONFIG_SMP
+@@ -1588,11 +1594,13 @@ static ssize_t topology_write(struct file *file, const char __user *buf,
+ 
+ 	kbuf[read_len] = '\0';
+ 
+-	if (!strncmp(kbuf, "on", 2))
++	if (!strncmp(kbuf, "on", 2)) {
++		topology_updates_enabled = true;
+ 		start_topology_update();
+-	else if (!strncmp(kbuf, "off", 3))
++	} else if (!strncmp(kbuf, "off", 3)) {
+ 		stop_topology_update();
+-	else
++		topology_updates_enabled = false;
++	} else
+ 		return -EINVAL;
+ 
+ 	return count;
+@@ -1607,9 +1615,7 @@ static const struct file_operations topology_ops = {
+ 
+ static int topology_update_init(void)
+ {
+-	/* Do not poll for changes if disabled at boot */
+-	if (topology_updates_enabled)
+-		start_topology_update();
++	start_topology_update();
+ 
+ 	if (vphn_enabled)
+ 		topology_schedule_update();
+diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
+index b1c37cc3fa98..2d12f0037e3a 100644
+--- a/arch/powerpc/perf/imc-pmu.c
++++ b/arch/powerpc/perf/imc-pmu.c
+@@ -487,6 +487,11 @@ static int nest_imc_event_init(struct perf_event *event)
+ 	 * Get the base memory addresss for this cpu.
+ 	 */
+ 	chip_id = cpu_to_chip_id(event->cpu);
++
++	/* Return if chip_id is not valid */
++	if (chip_id < 0)
++		return -ENODEV;
++
+ 	pcni = pmu->mem_info;
+ 	do {
+ 		if (pcni->id == chip_id) {
+@@ -494,7 +499,7 @@ static int nest_imc_event_init(struct perf_event *event)
+ 			break;
+ 		}
+ 		pcni++;
+-	} while (pcni);
++	} while (pcni->vbase != 0);
+ 
+ 	if (!flag)
+ 		return -ENODEV;
+diff --git a/arch/powerpc/platforms/powernv/opal-imc.c b/arch/powerpc/platforms/powernv/opal-imc.c
+index 58a07948c76e..3d27f02695e4 100644
+--- a/arch/powerpc/platforms/powernv/opal-imc.c
++++ b/arch/powerpc/platforms/powernv/opal-imc.c
+@@ -127,7 +127,7 @@ static int imc_get_mem_addr_nest(struct device_node *node,
+ 								nr_chips))
+ 		goto error;
+ 
+-	pmu_ptr->mem_info = kcalloc(nr_chips, sizeof(*pmu_ptr->mem_info),
++	pmu_ptr->mem_info = kcalloc(nr_chips + 1, sizeof(*pmu_ptr->mem_info),
+ 				    GFP_KERNEL);
+ 	if (!pmu_ptr->mem_info)
+ 		goto error;
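
These two hunks fix a loop that could never terminate on its own condition: in nest_imc_event_init() above, pcni++ advances through the mem_info array, but the pointer itself never becomes NULL, so `while (pcni)` walked past the end when no chip id matched. Allocating nr_chips + 1 zeroed entries gives the array a sentinel, and the loop now stops at `pcni->vbase != 0`. The idiom in isolation (a sketch with made-up names):

    /* The array is allocated with one extra zeroed element, so a
     * zero vbase acts as the end-of-list sentinel. */
    struct mem_info { int id; unsigned long vbase; };

    static unsigned long find_vbase(const struct mem_info *info, int id)
    {
            const struct mem_info *p;

            for (p = info; p->vbase != 0; p++)
                    if (p->id == id)
                            return p->vbase;
            return 0;       /* not found */
    }
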
+diff --git a/arch/s390/kernel/kexec_elf.c b/arch/s390/kernel/kexec_elf.c
+index 5a286b012043..602e7cc26d11 100644
+--- a/arch/s390/kernel/kexec_elf.c
++++ b/arch/s390/kernel/kexec_elf.c
+@@ -19,10 +19,15 @@ static int kexec_file_add_elf_kernel(struct kimage *image,
+ 	struct kexec_buf buf;
+ 	const Elf_Ehdr *ehdr;
+ 	const Elf_Phdr *phdr;
++	Elf_Addr entry;
+ 	int i, ret;
+ 
+ 	ehdr = (Elf_Ehdr *)kernel;
+ 	buf.image = image;
++	if (image->type == KEXEC_TYPE_CRASH)
++		entry = STARTUP_KDUMP_OFFSET;
++	else
++		entry = ehdr->e_entry;
+ 
+ 	phdr = (void *)ehdr + ehdr->e_phoff;
+ 	for (i = 0; i < ehdr->e_phnum; i++, phdr++) {
+@@ -35,7 +40,7 @@ static int kexec_file_add_elf_kernel(struct kimage *image,
+ 		buf.mem = ALIGN(phdr->p_paddr, phdr->p_align);
+ 		buf.memsz = phdr->p_memsz;
+ 
+-		if (phdr->p_paddr == 0) {
++		if (entry - phdr->p_paddr < phdr->p_memsz) {
+ 			data->kernel_buf = buf.buffer;
+ 			data->memsz += STARTUP_NORMAL_OFFSET;
+ 
+diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
+index 8485d6dc2754..9ebd01219812 100644
+--- a/arch/s390/mm/pgtable.c
++++ b/arch/s390/mm/pgtable.c
+@@ -410,6 +410,7 @@ static inline pmd_t pmdp_flush_lazy(struct mm_struct *mm,
+ 	return old;
+ }
+ 
++#ifdef CONFIG_PGSTE
+ static pmd_t *pmd_alloc_map(struct mm_struct *mm, unsigned long addr)
+ {
+ 	pgd_t *pgd;
+@@ -427,6 +428,7 @@ static pmd_t *pmd_alloc_map(struct mm_struct *mm, unsigned long addr)
+ 	pmd = pmd_alloc(mm, pud, addr);
+ 	return pmd;
+ }
++#endif
+ 
+ pmd_t pmdp_xchg_direct(struct mm_struct *mm, unsigned long addr,
+ 		       pmd_t *pmdp, pmd_t new)
+diff --git a/arch/sh/include/cpu-sh4/cpu/sh7786.h b/arch/sh/include/cpu-sh4/cpu/sh7786.h
+index 8f9bfbf3cdb1..d6cce65b4871 100644
+--- a/arch/sh/include/cpu-sh4/cpu/sh7786.h
++++ b/arch/sh/include/cpu-sh4/cpu/sh7786.h
+@@ -132,7 +132,7 @@ enum {
+ 
+ static inline u32 sh7786_mm_sel(void)
+ {
+-	return __raw_readl(0xFC400020) & 0x7;
++	return __raw_readl((const volatile void __iomem *)0xFC400020) & 0x7;
+ }
+ 
+ #endif /* __CPU_SH7786_H__ */
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index a587805c6687..56e748a7679f 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -47,7 +47,7 @@ export REALMODE_CFLAGS
+ export BITS
+ 
+ ifdef CONFIG_X86_NEED_RELOCS
+-        LDFLAGS_vmlinux := --emit-relocs
++        LDFLAGS_vmlinux := --emit-relocs --discard-none
+ endif
+ 
+ #
+diff --git a/arch/x86/events/intel/cstate.c b/arch/x86/events/intel/cstate.c
+index d41de9af7a39..6072f92cb8ea 100644
+--- a/arch/x86/events/intel/cstate.c
++++ b/arch/x86/events/intel/cstate.c
+@@ -578,6 +578,8 @@ static const struct x86_cpu_id intel_cstates_match[] __initconst = {
+ 	X86_CSTATES_MODEL(INTEL_FAM6_ATOM_GOLDMONT_X, glm_cstates),
+ 
+ 	X86_CSTATES_MODEL(INTEL_FAM6_ATOM_GOLDMONT_PLUS, glm_cstates),
++
++	X86_CSTATES_MODEL(INTEL_FAM6_ICELAKE_MOBILE, snb_cstates),
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(x86cpu, intel_cstates_match);
+diff --git a/arch/x86/events/intel/rapl.c b/arch/x86/events/intel/rapl.c
+index 94dc564146ca..37ebf6fc5415 100644
+--- a/arch/x86/events/intel/rapl.c
++++ b/arch/x86/events/intel/rapl.c
+@@ -775,6 +775,8 @@ static const struct x86_cpu_id rapl_cpu_match[] __initconst = {
+ 	X86_RAPL_MODEL_MATCH(INTEL_FAM6_ATOM_GOLDMONT_X, hsw_rapl_init),
+ 
+ 	X86_RAPL_MODEL_MATCH(INTEL_FAM6_ATOM_GOLDMONT_PLUS, hsw_rapl_init),
++
++	X86_RAPL_MODEL_MATCH(INTEL_FAM6_ICELAKE_MOBILE,  skl_rapl_init),
+ 	{},
+ };
+ 
+diff --git a/arch/x86/events/msr.c b/arch/x86/events/msr.c
+index a878e6286e4a..f3f4c2263501 100644
+--- a/arch/x86/events/msr.c
++++ b/arch/x86/events/msr.c
+@@ -89,6 +89,7 @@ static bool test_intel(int idx)
+ 	case INTEL_FAM6_SKYLAKE_X:
+ 	case INTEL_FAM6_KABYLAKE_MOBILE:
+ 	case INTEL_FAM6_KABYLAKE_DESKTOP:
++	case INTEL_FAM6_ICELAKE_MOBILE:
+ 		if (idx == PERF_MSR_SMI || idx == PERF_MSR_PPERF)
+ 			return true;
+ 		break;
+diff --git a/arch/x86/ia32/ia32_signal.c b/arch/x86/ia32/ia32_signal.c
+index 321fe5f5d0e9..4d5fcd47ab75 100644
+--- a/arch/x86/ia32/ia32_signal.c
++++ b/arch/x86/ia32/ia32_signal.c
+@@ -61,9 +61,8 @@
+ } while (0)
+ 
+ #define RELOAD_SEG(seg)		{		\
+-	unsigned int pre = GET_SEG(seg);	\
++	unsigned int pre = (seg) | 3;		\
+ 	unsigned int cur = get_user_seg(seg);	\
+-	pre |= 3;				\
+ 	if (pre != cur)				\
+ 		set_user_seg(seg, pre);		\
+ }
+@@ -72,6 +71,7 @@ static int ia32_restore_sigcontext(struct pt_regs *regs,
+ 				   struct sigcontext_32 __user *sc)
+ {
+ 	unsigned int tmpflags, err = 0;
++	u16 gs, fs, es, ds;
+ 	void __user *buf;
+ 	u32 tmp;
+ 
+@@ -79,16 +79,10 @@ static int ia32_restore_sigcontext(struct pt_regs *regs,
+ 	current->restart_block.fn = do_no_restart_syscall;
+ 
+ 	get_user_try {
+-		/*
+-		 * Reload fs and gs if they have changed in the signal
+-		 * handler.  This does not handle long fs/gs base changes in
+-		 * the handler, but does not clobber them at least in the
+-		 * normal case.
+-		 */
+-		RELOAD_SEG(gs);
+-		RELOAD_SEG(fs);
+-		RELOAD_SEG(ds);
+-		RELOAD_SEG(es);
++		gs = GET_SEG(gs);
++		fs = GET_SEG(fs);
++		ds = GET_SEG(ds);
++		es = GET_SEG(es);
+ 
+ 		COPY(di); COPY(si); COPY(bp); COPY(sp); COPY(bx);
+ 		COPY(dx); COPY(cx); COPY(ip); COPY(ax);
+@@ -106,6 +100,17 @@ static int ia32_restore_sigcontext(struct pt_regs *regs,
+ 		buf = compat_ptr(tmp);
+ 	} get_user_catch(err);
+ 
++	/*
++	 * Reload fs and gs if they have changed in the signal
++	 * handler.  This does not handle long fs/gs base changes in
++	 * the handler, but does not clobber them at least in the
++	 * normal case.
++	 */
++	RELOAD_SEG(gs);
++	RELOAD_SEG(fs);
++	RELOAD_SEG(ds);
++	RELOAD_SEG(es);
++
+ 	err |= fpu__restore_sig(buf, 1);
+ 
+ 	force_iret();
+diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
+index 05861cc08787..0bbb07eaed6b 100644
+--- a/arch/x86/include/asm/text-patching.h
++++ b/arch/x86/include/asm/text-patching.h
+@@ -39,6 +39,7 @@ extern int poke_int3_handler(struct pt_regs *regs);
+ extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
+ extern int after_bootmem;
+ 
++#ifndef CONFIG_UML_X86
+ static inline void int3_emulate_jmp(struct pt_regs *regs, unsigned long ip)
+ {
+ 	regs->ip = ip;
+@@ -65,6 +66,7 @@ static inline void int3_emulate_call(struct pt_regs *regs, unsigned long func)
+ 	int3_emulate_push(regs, regs->ip - INT3_INSN_SIZE + CALL_INSN_SIZE);
+ 	int3_emulate_jmp(regs, func);
+ }
+-#endif
++#endif /* CONFIG_X86_64 */
++#endif /* !CONFIG_UML_X86 */
+ 
+ #endif /* _ASM_X86_TEXT_PATCHING_H */
+diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
+index 1954dd5552a2..3822cc8ac9d6 100644
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -427,10 +427,11 @@ do {									\
+ ({								\
+ 	__label__ __pu_label;					\
+ 	int __pu_err = -EFAULT;					\
+-	__typeof__(*(ptr)) __pu_val;				\
+-	__pu_val = x;						\
++	__typeof__(*(ptr)) __pu_val = (x);			\
++	__typeof__(ptr) __pu_ptr = (ptr);			\
++	__typeof__(size) __pu_size = (size);			\
+ 	__uaccess_begin();					\
+-	__put_user_size(__pu_val, (ptr), (size), __pu_label);	\
++	__put_user_size(__pu_val, __pu_ptr, __pu_size, __pu_label);	\
+ 	__pu_err = 0;						\
+ __pu_label:							\
+ 	__uaccess_end();					\
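
The __put_user change above is macro hygiene: each argument (x, ptr, size) is captured once into a correctly typed local, so argument expressions with side effects are evaluated exactly once no matter how often the expansion uses them. The general GNU C pattern (a standalone sketch; BAD_MAX/GOOD_MAX are illustrative):

    #include <stdio.h>

    /* Unsafe: each argument can be evaluated twice. */
    #define BAD_MAX(a, b)   ((a) > (b) ? (a) : (b))

    /* Hygienic: capture into typed locals inside a GNU C statement
     * expression, as the uaccess macro above now does. */
    #define GOOD_MAX(a, b) ({               \
            __typeof__(a) _a = (a);         \
            __typeof__(b) _b = (b);         \
            _a > _b ? _a : _b;              \
    })

    int main(void)
    {
            int i = 0;
            printf("%d\n", GOOD_MAX(i++, 5));   /* i is incremented once */
            printf("i = %d\n", i);              /* prints i = 1 */
            return 0;
    }
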
+diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
+index 9a79c7808f9c..d7df79fc448c 100644
+--- a/arch/x86/kernel/alternative.c
++++ b/arch/x86/kernel/alternative.c
+@@ -667,15 +667,29 @@ void __init alternative_instructions(void)
+  * handlers seeing an inconsistent instruction while you patch.
+  */
+ void *__init_or_module text_poke_early(void *addr, const void *opcode,
+-					      size_t len)
++				       size_t len)
+ {
+ 	unsigned long flags;
+-	local_irq_save(flags);
+-	memcpy(addr, opcode, len);
+-	local_irq_restore(flags);
+-	sync_core();
+-	/* Could also do a CLFLUSH here to speed up CPU recovery; but
+-	   that causes hangs on some VIA CPUs. */
++
++	if (boot_cpu_has(X86_FEATURE_NX) &&
++	    is_module_text_address((unsigned long)addr)) {
++		/*
++		 * Module text is initially marked non-executable, so the
++		 * code cannot be running and speculative code-fetches are
++		 * prevented. Just change the code.
++		 */
++		memcpy(addr, opcode, len);
++	} else {
++		local_irq_save(flags);
++		memcpy(addr, opcode, len);
++		local_irq_restore(flags);
++		sync_core();
++
++		/*
++		 * Could also do a CLFLUSH here to speed up CPU recovery; but
++		 * that causes hangs on some VIA CPUs.
++		 */
++	}
+ 	return addr;
+ }
+ 
+diff --git a/arch/x86/kernel/cpu/hygon.c b/arch/x86/kernel/cpu/hygon.c
+index cf25405444ab..415621ddb8a2 100644
+--- a/arch/x86/kernel/cpu/hygon.c
++++ b/arch/x86/kernel/cpu/hygon.c
+@@ -19,6 +19,8 @@
+ 
+ #include "cpu.h"
+ 
++#define APICID_SOCKET_ID_BIT 6
++
+ /*
+  * nodes_per_socket: Stores the number of nodes per socket.
+  * Refer to CPUID Fn8000_001E_ECX Node Identifiers[10:8]
+@@ -87,6 +89,9 @@ static void hygon_get_topology(struct cpuinfo_x86 *c)
+ 		if (!err)
+ 			c->x86_coreid_bits = get_count_order(c->x86_max_cores);
+ 
++		/* Socket ID is ApicId[6] for these processors. */
++		c->phys_proc_id = c->apicid >> APICID_SOCKET_ID_BIT;
++
+ 		cacheinfo_hygon_init_llc_id(c, cpu, node_id);
+ 	} else if (cpu_has(c, X86_FEATURE_NODEID_MSR)) {
+ 		u64 value;
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 1a7084ba9a3b..9e6a94c208e0 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -712,19 +712,49 @@ bool machine_check_poll(enum mcp_flags flags, mce_banks_t *b)
+ 
+ 		barrier();
+ 		m.status = mce_rdmsrl(msr_ops.status(i));
++
++		/* If this entry is not valid, ignore it */
+ 		if (!(m.status & MCI_STATUS_VAL))
+ 			continue;
+ 
+ 		/*
+-		 * Uncorrected or signalled events are handled by the exception
+-		 * handler when it is enabled, so don't process those here.
+-		 *
+-		 * TBD do the same check for MCI_STATUS_EN here?
++		 * If we are logging everything (at CPU online) or this
++		 * is a corrected error, then we must log it.
+ 		 */
+-		if (!(flags & MCP_UC) &&
+-		    (m.status & (mca_cfg.ser ? MCI_STATUS_S : MCI_STATUS_UC)))
+-			continue;
++		if ((flags & MCP_UC) || !(m.status & MCI_STATUS_UC))
++			goto log_it;
++
++		/*
++		 * Newer Intel systems that support software error
++		 * recovery need to make additional checks. Other
++		 * CPUs should skip over uncorrected errors, but log
++		 * everything else.
++		 */
++		if (!mca_cfg.ser) {
++			if (m.status & MCI_STATUS_UC)
++				continue;
++			goto log_it;
++		}
++
++		/* Log "not enabled" (speculative) errors */
++		if (!(m.status & MCI_STATUS_EN))
++			goto log_it;
++
++		/*
++		 * Log UCNA (SDM: 15.6.3 "UCR Error Classification")
++		 * UC == 1 && PCC == 0 && S == 0
++		 */
++		if (!(m.status & MCI_STATUS_PCC) && !(m.status & MCI_STATUS_S))
++			goto log_it;
++
++		/*
++		 * Skip anything else. Presumption is that our read of this
++		 * bank is racing with a machine check. Leave the log alone
++		 * for do_machine_check() to deal with it.
++		 */
++		continue;
+ 
++log_it:
+ 		error_seen = true;
+ 
+ 		mce_read_aux(&m, i);
+@@ -1451,13 +1481,12 @@ EXPORT_SYMBOL_GPL(mce_notify_irq);
+ static int __mcheck_cpu_mce_banks_init(void)
+ {
+ 	int i;
+-	u8 num_banks = mca_cfg.banks;
+ 
+-	mce_banks = kcalloc(num_banks, sizeof(struct mce_bank), GFP_KERNEL);
++	mce_banks = kcalloc(MAX_NR_BANKS, sizeof(struct mce_bank), GFP_KERNEL);
+ 	if (!mce_banks)
+ 		return -ENOMEM;
+ 
+-	for (i = 0; i < num_banks; i++) {
++	for (i = 0; i < MAX_NR_BANKS; i++) {
+ 		struct mce_bank *b = &mce_banks[i];
+ 
+ 		b->ctl = -1ULL;
+@@ -1471,28 +1500,19 @@ static int __mcheck_cpu_mce_banks_init(void)
+  */
+ static int __mcheck_cpu_cap_init(void)
+ {
+-	unsigned b;
+ 	u64 cap;
++	u8 b;
+ 
+ 	rdmsrl(MSR_IA32_MCG_CAP, cap);
+ 
+ 	b = cap & MCG_BANKCNT_MASK;
+-	if (!mca_cfg.banks)
+-		pr_info("CPU supports %d MCE banks\n", b);
+-
+-	if (b > MAX_NR_BANKS) {
+-		pr_warn("Using only %u machine check banks out of %u\n",
+-			MAX_NR_BANKS, b);
++	if (WARN_ON_ONCE(b > MAX_NR_BANKS))
+ 		b = MAX_NR_BANKS;
+-	}
+ 
+-	/* Don't support asymmetric configurations today */
+-	WARN_ON(mca_cfg.banks != 0 && b != mca_cfg.banks);
+-	mca_cfg.banks = b;
++	mca_cfg.banks = max(mca_cfg.banks, b);
+ 
+ 	if (!mce_banks) {
+ 		int err = __mcheck_cpu_mce_banks_init();
+-
+ 		if (err)
+ 			return err;
+ 	}
+@@ -2459,6 +2479,8 @@ EXPORT_SYMBOL_GPL(mcsafe_key);
+ 
+ static int __init mcheck_late_init(void)
+ {
++	pr_info("Using %d MCE banks\n", mca_cfg.banks);
++
+ 	if (mca_cfg.recovery)
+ 		static_branch_inc(&mcsafe_key);
+ 
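
The machine_check_poll() rewrite above replaces one hard-to-read condition with an explicit decision ladder: log everything when MCP_UC is set or the error is corrected; without SER support, skip only uncorrected errors; with SER, additionally log "not enabled" errors and UCNA events, and leave everything else for do_machine_check(). Restated as a condensed sketch (parameter names abbreviate MCi_STATUS bits; not the kernel's actual interface):

    #include <stdbool.h>

    static bool should_log(bool log_uc, bool ser,
                           bool uc, bool en, bool pcc, bool s)
    {
            if (log_uc || !uc)
                    return true;    /* logging everything, or corrected */
            if (!ser)
                    return false;   /* uncorrected, no SER support: skip */
            if (!en)
                    return true;    /* "not enabled" (speculative) errors */
            if (!pcc && !s)
                    return true;    /* UCNA: UC == 1, PCC == 0, S == 0 */
            return false;           /* likely racing with #MC; skip */
    }
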
+diff --git a/arch/x86/kernel/cpu/mce/inject.c b/arch/x86/kernel/cpu/mce/inject.c
+index 8492ef7d9015..3f82afd0f46f 100644
+--- a/arch/x86/kernel/cpu/mce/inject.c
++++ b/arch/x86/kernel/cpu/mce/inject.c
+@@ -46,8 +46,6 @@
+ static struct mce i_mce;
+ static struct dentry *dfs_inj;
+ 
+-static u8 n_banks;
+-
+ #define MAX_FLAG_OPT_SIZE	4
+ #define NBCFG			0x44
+ 
+@@ -570,9 +568,15 @@ err:
+ static int inj_bank_set(void *data, u64 val)
+ {
+ 	struct mce *m = (struct mce *)data;
++	u8 n_banks;
++	u64 cap;
++
++	/* Get bank count on target CPU so we can handle non-uniform values. */
++	rdmsrl_on_cpu(m->extcpu, MSR_IA32_MCG_CAP, &cap);
++	n_banks = cap & MCG_BANKCNT_MASK;
+ 
+ 	if (val >= n_banks) {
+-		pr_err("Non-existent MCE bank: %llu\n", val);
++		pr_err("MCA bank %llu non-existent on CPU%d\n", val, m->extcpu);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -665,10 +669,6 @@ static struct dfs_node {
+ static int __init debugfs_init(void)
+ {
+ 	unsigned int i;
+-	u64 cap;
+-
+-	rdmsrl(MSR_IA32_MCG_CAP, cap);
+-	n_banks = cap & MCG_BANKCNT_MASK;
+ 
+ 	dfs_inj = debugfs_create_dir("mce-inject", NULL);
+ 	if (!dfs_inj)
+diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
+index 5260185cbf7b..8a4a7823451a 100644
+--- a/arch/x86/kernel/cpu/microcode/core.c
++++ b/arch/x86/kernel/cpu/microcode/core.c
+@@ -418,8 +418,9 @@ static int do_microcode_update(const void __user *buf, size_t size)
+ 		if (ustate == UCODE_ERROR) {
+ 			error = -1;
+ 			break;
+-		} else if (ustate == UCODE_OK)
++		} else if (ustate == UCODE_NEW) {
+ 			apply_microcode_on_target(cpu);
++		}
+ 	}
+ 
+ 	return error;
+diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
+index bd553b3af22e..6e0c0ed8e4bf 100644
+--- a/arch/x86/kernel/ftrace.c
++++ b/arch/x86/kernel/ftrace.c
+@@ -749,6 +749,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
+ 	unsigned long end_offset;
+ 	unsigned long op_offset;
+ 	unsigned long offset;
++	unsigned long npages;
+ 	unsigned long size;
+ 	unsigned long retq;
+ 	unsigned long *ptr;
+@@ -781,6 +782,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
+ 		return 0;
+ 
+ 	*tramp_size = size + RET_SIZE + sizeof(void *);
++	npages = DIV_ROUND_UP(*tramp_size, PAGE_SIZE);
+ 
+ 	/* Copy ftrace_caller onto the trampoline memory */
+ 	ret = probe_kernel_read(trampoline, (void *)start_offset, size);
+@@ -825,6 +827,12 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
+ 	/* ALLOC_TRAMP flags lets us know we created it */
+ 	ops->flags |= FTRACE_OPS_FL_ALLOC_TRAMP;
+ 
++	/*
++	 * Module allocation needs to be completed by making the page
++	 * executable. The page is still writable, which is a security hazard,
++	 * but anyhow ftrace breaks W^X completely.
++	 */
++	set_memory_x((unsigned long)trampoline, npages);
+ 	return (unsigned long)trampoline;
+ fail:
+ 	tramp_free(trampoline, *tramp_size);
+diff --git a/arch/x86/kernel/irq_64.c b/arch/x86/kernel/irq_64.c
+index 0469cd078db1..b50ac9c7397b 100644
+--- a/arch/x86/kernel/irq_64.c
++++ b/arch/x86/kernel/irq_64.c
+@@ -26,9 +26,18 @@ int sysctl_panic_on_stackoverflow;
+ /*
+  * Probabilistic stack overflow check:
+  *
+- * Only check the stack in process context, because everything else
+- * runs on the big interrupt stacks. Checking reliably is too expensive,
+- * so we just check from interrupts.
++ * Regular device interrupts can enter on the following stacks:
++ *
++ * - User stack
++ *
++ * - Kernel task stack
++ *
++ * - Interrupt stack if a device driver reenables interrupts
++ *   which should only happen in really old drivers.
++ *
++ * - Debug IST stack
++ *
++ * All other contexts are invalid.
+  */
+ static inline void stack_overflow_check(struct pt_regs *regs)
+ {
+@@ -53,8 +62,8 @@ static inline void stack_overflow_check(struct pt_regs *regs)
+ 		return;
+ 
+ 	oist = this_cpu_ptr(&orig_ist);
+-	estack_top = (u64)oist->ist[0] - EXCEPTION_STKSZ + STACK_TOP_MARGIN;
+-	estack_bottom = (u64)oist->ist[N_EXCEPTION_STACKS - 1];
++	estack_bottom = (u64)oist->ist[DEBUG_STACK];
++	estack_top = estack_bottom - DEBUG_STKSZ + STACK_TOP_MARGIN;
+ 	if (regs->sp >= estack_top && regs->sp <= estack_bottom)
+ 		return;
+ 
+diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
+index b052e883dd8c..cfa3106faee4 100644
+--- a/arch/x86/kernel/module.c
++++ b/arch/x86/kernel/module.c
+@@ -87,7 +87,7 @@ void *module_alloc(unsigned long size)
+ 	p = __vmalloc_node_range(size, MODULE_ALIGN,
+ 				    MODULES_VADDR + get_module_load_offset(),
+ 				    MODULES_END, GFP_KERNEL,
+-				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
++				    PAGE_KERNEL, 0, NUMA_NO_NODE,
+ 				    __builtin_return_address(0));
+ 	if (p && (kasan_module_alloc(p, size) < 0)) {
+ 		vfree(p);
+diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c
+index 08dfd4c1a4f9..c8aa58a2bab9 100644
+--- a/arch/x86/kernel/signal.c
++++ b/arch/x86/kernel/signal.c
+@@ -132,16 +132,6 @@ static int restore_sigcontext(struct pt_regs *regs,
+ 		COPY_SEG_CPL3(cs);
+ 		COPY_SEG_CPL3(ss);
+ 
+-#ifdef CONFIG_X86_64
+-		/*
+-		 * Fix up SS if needed for the benefit of old DOSEMU and
+-		 * CRIU.
+-		 */
+-		if (unlikely(!(uc_flags & UC_STRICT_RESTORE_SS) &&
+-			     user_64bit_mode(regs)))
+-			force_valid_ss(regs);
+-#endif
+-
+ 		get_user_ex(tmpflags, &sc->flags);
+ 		regs->flags = (regs->flags & ~FIX_EFLAGS) | (tmpflags & FIX_EFLAGS);
+ 		regs->orig_ax = -1;		/* disable syscall checks */
+@@ -150,6 +140,15 @@ static int restore_sigcontext(struct pt_regs *regs,
+ 		buf = (void __user *)buf_val;
+ 	} get_user_catch(err);
+ 
++#ifdef CONFIG_X86_64
++	/*
++	 * Fix up SS if needed for the benefit of old DOSEMU and
++	 * CRIU.
++	 */
++	if (unlikely(!(uc_flags & UC_STRICT_RESTORE_SS) && user_64bit_mode(regs)))
++		force_valid_ss(regs);
++#endif
++
+ 	err |= fpu__restore_sig(buf, IS_ENABLED(CONFIG_X86_32));
+ 
+ 	force_iret();
+@@ -461,6 +460,7 @@ static int __setup_rt_frame(int sig, struct ksignal *ksig,
+ {
+ 	struct rt_sigframe __user *frame;
+ 	void __user *fp = NULL;
++	unsigned long uc_flags;
+ 	int err = 0;
+ 
+ 	frame = get_sigframe(&ksig->ka, regs, sizeof(struct rt_sigframe), &fp);
+@@ -473,9 +473,11 @@ static int __setup_rt_frame(int sig, struct ksignal *ksig,
+ 			return -EFAULT;
+ 	}
+ 
++	uc_flags = frame_uc_flags(regs);
++
+ 	put_user_try {
+ 		/* Create the ucontext.  */
+-		put_user_ex(frame_uc_flags(regs), &frame->uc.uc_flags);
++		put_user_ex(uc_flags, &frame->uc.uc_flags);
+ 		put_user_ex(0, &frame->uc.uc_link);
+ 		save_altstack_ex(&frame->uc.uc_stack, regs->sp);
+ 
+@@ -541,6 +543,7 @@ static int x32_setup_rt_frame(struct ksignal *ksig,
+ {
+ #ifdef CONFIG_X86_X32_ABI
+ 	struct rt_sigframe_x32 __user *frame;
++	unsigned long uc_flags;
+ 	void __user *restorer;
+ 	int err = 0;
+ 	void __user *fpstate = NULL;
+@@ -555,9 +558,11 @@ static int x32_setup_rt_frame(struct ksignal *ksig,
+ 			return -EFAULT;
+ 	}
+ 
++	uc_flags = frame_uc_flags(regs);
++
+ 	put_user_try {
+ 		/* Create the ucontext.  */
+-		put_user_ex(frame_uc_flags(regs), &frame->uc.uc_flags);
++		put_user_ex(uc_flags, &frame->uc.uc_flags);
+ 		put_user_ex(0, &frame->uc.uc_link);
+ 		compat_save_altstack_ex(&frame->uc.uc_stack, regs->sp);
+ 		put_user_ex(0, &frame->uc.uc__pad0);
+diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
+index a5127b2c195f..834659288ba9 100644
+--- a/arch/x86/kernel/vmlinux.lds.S
++++ b/arch/x86/kernel/vmlinux.lds.S
+@@ -141,11 +141,11 @@ SECTIONS
+ 		*(.text.__x86.indirect_thunk)
+ 		__indirect_thunk_end = .;
+ #endif
+-
+-		/* End of text section */
+-		_etext = .;
+ 	} :text = 0x9090
+ 
++	/* End of text section */
++	_etext = .;
++
+ 	NOTES :text :note
+ 
+ 	EXCEPTION_TABLE(16) :text = 0x9090
+diff --git a/arch/x86/kvm/irq.c b/arch/x86/kvm/irq.c
+index faa264822cee..007bc654f928 100644
+--- a/arch/x86/kvm/irq.c
++++ b/arch/x86/kvm/irq.c
+@@ -172,3 +172,10 @@ void __kvm_migrate_timers(struct kvm_vcpu *vcpu)
+ 	__kvm_migrate_apic_timer(vcpu);
+ 	__kvm_migrate_pit_timer(vcpu);
+ }
++
++bool kvm_arch_irqfd_allowed(struct kvm *kvm, struct kvm_irqfd *args)
++{
++	bool resample = args->flags & KVM_IRQFD_FLAG_RESAMPLE;
++
++	return resample ? irqchip_kernel(kvm) : irqchip_in_kernel(kvm);
++}
+diff --git a/arch/x86/kvm/irq.h b/arch/x86/kvm/irq.h
+index d5005cc26521..fd210cdd4983 100644
+--- a/arch/x86/kvm/irq.h
++++ b/arch/x86/kvm/irq.h
+@@ -114,6 +114,7 @@ static inline int irqchip_in_kernel(struct kvm *kvm)
+ 	return mode != KVM_IRQCHIP_NONE;
+ }
+ 
++bool kvm_arch_irqfd_allowed(struct kvm *kvm, struct kvm_irqfd *args);
+ void kvm_inject_pending_timer_irqs(struct kvm_vcpu *vcpu);
+ void kvm_inject_apic_timer_irqs(struct kvm_vcpu *vcpu);
+ void kvm_apic_nmi_wd_deliver(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/kvm/pmu_amd.c b/arch/x86/kvm/pmu_amd.c
+index 1495a735b38e..50fa9450fcf1 100644
+--- a/arch/x86/kvm/pmu_amd.c
++++ b/arch/x86/kvm/pmu_amd.c
+@@ -269,10 +269,10 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
+ 
+ 	pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << 48) - 1;
+ 	pmu->reserved_bits = 0xffffffff00200000ull;
++	pmu->version = 1;
+ 	/* not applicable to AMD; but clean them to prevent any fall out */
+ 	pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
+ 	pmu->nr_arch_fixed_counters = 0;
+-	pmu->version = 0;
+ 	pmu->global_status = 0;
+ }
+ 
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index 406b558abfef..ae6e51828a54 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -2024,7 +2024,11 @@ static void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ 	if (!kvm_vcpu_apicv_active(vcpu))
+ 		return;
+ 
+-	if (WARN_ON(h_physical_id >= AVIC_MAX_PHYSICAL_ID_COUNT))
++	/*
++	 * Since the host physical APIC ID is 8 bits,
++	 * we can support host APIC IDs up to 255.
++	 */
++	if (WARN_ON(h_physical_id > AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK))
+ 		return;
+ 
+ 	entry = READ_ONCE(*(svm->avic_physical_id_cache));
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 0c601d079cd2..8f6f69c26c35 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -2792,14 +2792,13 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
+ 	      : "cc", "memory"
+ 	);
+ 
+-	preempt_enable();
+-
+ 	if (vmx->msr_autoload.host.nr)
+ 		vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
+ 	if (vmx->msr_autoload.guest.nr)
+ 		vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr);
+ 
+ 	if (vm_fail) {
++		preempt_enable();
+ 		WARN_ON_ONCE(vmcs_read32(VM_INSTRUCTION_ERROR) !=
+ 			     VMXERR_ENTRY_INVALID_CONTROL_FIELD);
+ 		return 1;
+@@ -2811,6 +2810,7 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
+ 	local_irq_enable();
+ 	if (hw_breakpoint_active())
+ 		set_debugreg(__this_cpu_read(cpu_dr7), 7);
++	preempt_enable();
+ 
+ 	/*
+ 	 * A non-failing VMEntry means we somehow entered guest mode with
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index fed1ab6a825c..6b8575c547ee 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1288,7 +1288,7 @@ static int set_efer(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	u64 efer = msr_info->data;
+ 
+ 	if (efer & efer_reserved_bits)
+-		return false;
++		return 1;
+ 
+ 	if (!msr_info->host_initiated) {
+ 		if (!__kvm_valid_efer(vcpu, efer))
+diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
+index 3b24dc05251c..9d05572370ed 100644
+--- a/arch/x86/lib/memcpy_64.S
++++ b/arch/x86/lib/memcpy_64.S
+@@ -257,6 +257,7 @@ ENTRY(__memcpy_mcsafe)
+ 	/* Copy successful. Return zero */
+ .L_done_memcpy_trap:
+ 	xorl %eax, %eax
++.L_done:
+ 	ret
+ ENDPROC(__memcpy_mcsafe)
+ EXPORT_SYMBOL_GPL(__memcpy_mcsafe)
+@@ -273,7 +274,7 @@ EXPORT_SYMBOL_GPL(__memcpy_mcsafe)
+ 	addl	%edx, %ecx
+ .E_trailing_bytes:
+ 	mov	%ecx, %eax
+-	ret
++	jmp	.L_done
+ 
+ 	/*
+ 	 * For write fault handling, given the destination is unaligned,
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index 667f1da36208..5eaf67e8314f 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -359,8 +359,6 @@ static noinline int vmalloc_fault(unsigned long address)
+ 	if (!(address >= VMALLOC_START && address < VMALLOC_END))
+ 		return -1;
+ 
+-	WARN_ON_ONCE(in_nmi());
+-
+ 	/*
+ 	 * Copy kernel mappings over when needed. This can also
+ 	 * happen within a race in page table update. In the later
+diff --git a/arch/x86/platform/uv/tlb_uv.c b/arch/x86/platform/uv/tlb_uv.c
+index 2c53b0f19329..1297e185b8c8 100644
+--- a/arch/x86/platform/uv/tlb_uv.c
++++ b/arch/x86/platform/uv/tlb_uv.c
+@@ -2133,14 +2133,19 @@ static int __init summarize_uvhub_sockets(int nuvhubs,
+  */
+ static int __init init_per_cpu(int nuvhubs, int base_part_pnode)
+ {
+-	unsigned char *uvhub_mask;
+ 	struct uvhub_desc *uvhub_descs;
++	unsigned char *uvhub_mask = NULL;
+ 
+ 	if (is_uv3_hub() || is_uv2_hub() || is_uv1_hub())
+ 		timeout_us = calculate_destination_timeout();
+ 
+ 	uvhub_descs = kcalloc(nuvhubs, sizeof(struct uvhub_desc), GFP_KERNEL);
++	if (!uvhub_descs)
++		goto fail;
++
+ 	uvhub_mask = kzalloc((nuvhubs+7)/8, GFP_KERNEL);
++	if (!uvhub_mask)
++		goto fail;
+ 
+ 	if (get_cpu_topology(base_part_pnode, uvhub_descs, uvhub_mask))
+ 		goto fail;
+diff --git a/block/bio.c b/block/bio.c
+index 716510ecd7ff..a3c80a6c1fe5 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -776,6 +776,8 @@ bool __bio_try_merge_page(struct bio *bio, struct page *page,
+ 
+ 		if (vec_end_addr + 1 != page_addr + off)
+ 			return false;
++		if (xen_domain() && !xen_biovec_phys_mergeable(bv, page))
++			return false;
+ 		if (same_page && (vec_end_addr & PAGE_MASK) != page_addr)
+ 			return false;
+ 
+diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
+index aa6bc5c02643..c59babca6857 100644
+--- a/block/blk-mq-sched.c
++++ b/block/blk-mq-sched.c
+@@ -413,6 +413,14 @@ void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
+ 				  struct list_head *list, bool run_queue_async)
+ {
+ 	struct elevator_queue *e;
++	struct request_queue *q = hctx->queue;
++
++	/*
++	 * blk_mq_sched_insert_requests() is called from flush plug
++	 * context only, and holds one usage counter to prevent the
++	 * queue from being released.
++	 */
++	percpu_ref_get(&q->q_usage_counter);
+ 
+ 	e = hctx->queue->elevator;
+ 	if (e && e->type->ops.insert_requests)
+@@ -426,12 +434,14 @@ void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
+ 		if (!hctx->dispatch_busy && !e && !run_queue_async) {
+ 			blk_mq_try_issue_list_directly(hctx, list);
+ 			if (list_empty(list))
+-				return;
++				goto out;
+ 		}
+ 		blk_mq_insert_requests(hctx, ctx, list);
+ 	}
+ 
+ 	blk_mq_run_hw_queue(hctx, run_queue_async);
++ out:
++	percpu_ref_put(&q->q_usage_counter);
+ }
+ 
+ static void blk_mq_sched_free_tags(struct blk_mq_tag_set *set,
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index b0e5e67e20a2..8a41cc5974fe 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2284,15 +2284,65 @@ static void blk_mq_exit_hw_queues(struct request_queue *q,
+ 	}
+ }
+ 
++static int blk_mq_hw_ctx_size(struct blk_mq_tag_set *tag_set)
++{
++	int hw_ctx_size = sizeof(struct blk_mq_hw_ctx);
++
++	BUILD_BUG_ON(ALIGN(offsetof(struct blk_mq_hw_ctx, srcu),
++			   __alignof__(struct blk_mq_hw_ctx)) !=
++		     sizeof(struct blk_mq_hw_ctx));
++
++	if (tag_set->flags & BLK_MQ_F_BLOCKING)
++		hw_ctx_size += sizeof(struct srcu_struct);
++
++	return hw_ctx_size;
++}
++
+ static int blk_mq_init_hctx(struct request_queue *q,
+ 		struct blk_mq_tag_set *set,
+ 		struct blk_mq_hw_ctx *hctx, unsigned hctx_idx)
+ {
+-	int node;
++	hctx->queue_num = hctx_idx;
++
++	cpuhp_state_add_instance_nocalls(CPUHP_BLK_MQ_DEAD, &hctx->cpuhp_dead);
++
++	hctx->tags = set->tags[hctx_idx];
++
++	if (set->ops->init_hctx &&
++	    set->ops->init_hctx(hctx, set->driver_data, hctx_idx))
++		goto unregister_cpu_notifier;
+ 
+-	node = hctx->numa_node;
++	if (blk_mq_init_request(set, hctx->fq->flush_rq, hctx_idx,
++				hctx->numa_node))
++		goto exit_hctx;
++	return 0;
++
++ exit_hctx:
++	if (set->ops->exit_hctx)
++		set->ops->exit_hctx(hctx, hctx_idx);
++ unregister_cpu_notifier:
++	blk_mq_remove_cpuhp(hctx);
++	return -1;
++}
++
++static struct blk_mq_hw_ctx *
++blk_mq_alloc_hctx(struct request_queue *q, struct blk_mq_tag_set *set,
++		int node)
++{
++	struct blk_mq_hw_ctx *hctx;
++	gfp_t gfp = GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY;
++
++	hctx = kzalloc_node(blk_mq_hw_ctx_size(set), gfp, node);
++	if (!hctx)
++		goto fail_alloc_hctx;
++
++	if (!zalloc_cpumask_var_node(&hctx->cpumask, gfp, node))
++		goto free_hctx;
++
++	atomic_set(&hctx->nr_active, 0);
+ 	if (node == NUMA_NO_NODE)
+-		node = hctx->numa_node = set->numa_node;
++		node = set->numa_node;
++	hctx->numa_node = node;
+ 
+ 	INIT_DELAYED_WORK(&hctx->run_work, blk_mq_run_work_fn);
+ 	spin_lock_init(&hctx->lock);
+@@ -2300,58 +2350,45 @@ static int blk_mq_init_hctx(struct request_queue *q,
+ 	hctx->queue = q;
+ 	hctx->flags = set->flags & ~BLK_MQ_F_TAG_SHARED;
+ 
+-	cpuhp_state_add_instance_nocalls(CPUHP_BLK_MQ_DEAD, &hctx->cpuhp_dead);
+-
+-	hctx->tags = set->tags[hctx_idx];
+-
+ 	/*
+ 	 * Allocate space for all possible cpus to avoid allocation at
+ 	 * runtime
+ 	 */
+ 	hctx->ctxs = kmalloc_array_node(nr_cpu_ids, sizeof(void *),
+-			GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY, node);
++			gfp, node);
+ 	if (!hctx->ctxs)
+-		goto unregister_cpu_notifier;
++		goto free_cpumask;
+ 
+ 	if (sbitmap_init_node(&hctx->ctx_map, nr_cpu_ids, ilog2(8),
+-				GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY, node))
++				gfp, node))
+ 		goto free_ctxs;
+-
+ 	hctx->nr_ctx = 0;
+ 
+ 	spin_lock_init(&hctx->dispatch_wait_lock);
+ 	init_waitqueue_func_entry(&hctx->dispatch_wait, blk_mq_dispatch_wake);
+ 	INIT_LIST_HEAD(&hctx->dispatch_wait.entry);
+ 
+-	if (set->ops->init_hctx &&
+-	    set->ops->init_hctx(hctx, set->driver_data, hctx_idx))
+-		goto free_bitmap;
+-
+ 	hctx->fq = blk_alloc_flush_queue(q, hctx->numa_node, set->cmd_size,
+-			GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY);
++			gfp);
+ 	if (!hctx->fq)
+-		goto exit_hctx;
+-
+-	if (blk_mq_init_request(set, hctx->fq->flush_rq, hctx_idx, node))
+-		goto free_fq;
++		goto free_bitmap;
+ 
+ 	if (hctx->flags & BLK_MQ_F_BLOCKING)
+ 		init_srcu_struct(hctx->srcu);
++	blk_mq_hctx_kobj_init(hctx);
+ 
+-	return 0;
++	return hctx;
+ 
+- free_fq:
+-	blk_free_flush_queue(hctx->fq);
+- exit_hctx:
+-	if (set->ops->exit_hctx)
+-		set->ops->exit_hctx(hctx, hctx_idx);
+  free_bitmap:
+ 	sbitmap_free(&hctx->ctx_map);
+  free_ctxs:
+ 	kfree(hctx->ctxs);
+- unregister_cpu_notifier:
+-	blk_mq_remove_cpuhp(hctx);
+-	return -1;
++ free_cpumask:
++	free_cpumask_var(hctx->cpumask);
++ free_hctx:
++	kfree(hctx);
++ fail_alloc_hctx:
++	return NULL;
+ }
+ 
+ static void blk_mq_init_cpu_queues(struct request_queue *q,
+@@ -2695,51 +2732,25 @@ struct request_queue *blk_mq_init_sq_queue(struct blk_mq_tag_set *set,
+ }
+ EXPORT_SYMBOL(blk_mq_init_sq_queue);
+ 
+-static int blk_mq_hw_ctx_size(struct blk_mq_tag_set *tag_set)
+-{
+-	int hw_ctx_size = sizeof(struct blk_mq_hw_ctx);
+-
+-	BUILD_BUG_ON(ALIGN(offsetof(struct blk_mq_hw_ctx, srcu),
+-			   __alignof__(struct blk_mq_hw_ctx)) !=
+-		     sizeof(struct blk_mq_hw_ctx));
+-
+-	if (tag_set->flags & BLK_MQ_F_BLOCKING)
+-		hw_ctx_size += sizeof(struct srcu_struct);
+-
+-	return hw_ctx_size;
+-}
+-
+ static struct blk_mq_hw_ctx *blk_mq_alloc_and_init_hctx(
+ 		struct blk_mq_tag_set *set, struct request_queue *q,
+ 		int hctx_idx, int node)
+ {
+ 	struct blk_mq_hw_ctx *hctx;
+ 
+-	hctx = kzalloc_node(blk_mq_hw_ctx_size(set),
+-			GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY,
+-			node);
++	hctx = blk_mq_alloc_hctx(q, set, node);
+ 	if (!hctx)
+-		return NULL;
+-
+-	if (!zalloc_cpumask_var_node(&hctx->cpumask,
+-				GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY,
+-				node)) {
+-		kfree(hctx);
+-		return NULL;
+-	}
+-
+-	atomic_set(&hctx->nr_active, 0);
+-	hctx->numa_node = node;
+-	hctx->queue_num = hctx_idx;
++		goto fail;
+ 
+-	if (blk_mq_init_hctx(q, set, hctx, hctx_idx)) {
+-		free_cpumask_var(hctx->cpumask);
+-		kfree(hctx);
+-		return NULL;
+-	}
+-	blk_mq_hctx_kobj_init(hctx);
++	if (blk_mq_init_hctx(q, set, hctx, hctx_idx))
++		goto free_hctx;
+ 
+ 	return hctx;
++
++ free_hctx:
++	kobject_put(&hctx->kobj);
++ fail:
++	return NULL;
+ }
+ 
+ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
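
The blk-mq refactor above splits allocation (blk_mq_alloc_hctx()) from initialization (blk_mq_init_hctx()), and both keep the kernel's goto-based unwinding style: each failure label releases only what was acquired before the failing step, in reverse order. The skeleton of that pattern (a generic sketch, not blk-mq code):

    #include <stdlib.h>

    struct ctx { int *a; int *b; };

    static struct ctx *ctx_alloc(void)
    {
            struct ctx *c = malloc(sizeof(*c));
            if (!c)
                    goto fail;
            c->a = malloc(64);
            if (!c->a)
                    goto free_ctx;
            c->b = malloc(64);
            if (!c->b)
                    goto free_a;
            return c;

    free_a:         /* undo in reverse order of acquisition */
            free(c->a);
    free_ctx:
            free(c);
    fail:
            return NULL;
    }
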
+diff --git a/block/blk.h b/block/blk.h
+index 5d636ee41663..e27fd1512e4b 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -75,7 +75,7 @@ static inline bool biovec_phys_mergeable(struct request_queue *q,
+ 
+ 	if (addr1 + vec1->bv_len != addr2)
+ 		return false;
+-	if (xen_domain() && !xen_biovec_phys_mergeable(vec1, vec2))
++	if (xen_domain() && !xen_biovec_phys_mergeable(vec1, vec2->bv_page))
+ 		return false;
+ 	if ((addr1 | mask) != ((addr2 + vec2->bv_len - 1) | mask))
+ 		return false;
+diff --git a/block/genhd.c b/block/genhd.c
+index 703267865f14..d8dff0b21f7d 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -531,6 +531,18 @@ void blk_free_devt(dev_t devt)
+ 	}
+ }
+ 
++/**
++ *	We invalidate a devt by assigning a NULL pointer to its idr entry.
++ */
++void blk_invalidate_devt(dev_t devt)
++{
++	if (MAJOR(devt) == BLOCK_EXT_MAJOR) {
++		spin_lock_bh(&ext_devt_lock);
++		idr_replace(&ext_devt_idr, NULL, blk_mangle_minor(MINOR(devt)));
++		spin_unlock_bh(&ext_devt_lock);
++	}
++}
++
+ static char *bdevt_str(dev_t devt, char *buf)
+ {
+ 	if (MAJOR(devt) <= 0xff && MINOR(devt) <= 0xff) {
+@@ -793,6 +805,13 @@ void del_gendisk(struct gendisk *disk)
+ 
+ 	if (!(disk->flags & GENHD_FL_HIDDEN))
+ 		blk_unregister_region(disk_devt(disk), disk->minors);
++	/*
++	 * Remove the gendisk pointer from the idr so that it cannot be
++	 * looked up during the RCU grace period that precedes freeing
++	 * the gendisk, preventing use-after-free issues. Note that the
++	 * device number stays "in-use" until we really free the gendisk.
++	 */
++	blk_invalidate_devt(disk_devt(disk));
+ 
+ 	kobject_put(disk->part0.holder_dir);
+ 	kobject_put(disk->slave_dir);
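
blk_invalidate_devt() above relies on a property of the idr: idr_replace() swaps the stored pointer for NULL while keeping the ID allocated, so concurrent lookups fail safely but the device number cannot be handed out again until blk_free_devt() runs once the gendisk is truly gone. A sketch of the invalidate-then-release idiom (demo_* names illustrative; locking omitted):

    #include <linux/idr.h>

    static DEFINE_IDR(demo_idr);

    static void demo_invalidate(int id)
    {
            /* Lookups now return NULL, but the id stays reserved. */
            idr_replace(&demo_idr, NULL, id);
    }

    static void demo_release(int id)
    {
            /* Only now can the id be reused by a new allocation. */
            idr_remove(&demo_idr, id);
    }
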
+diff --git a/block/partition-generic.c b/block/partition-generic.c
+index 8e596a8dff32..aee643ce13d1 100644
+--- a/block/partition-generic.c
++++ b/block/partition-generic.c
+@@ -285,6 +285,13 @@ void delete_partition(struct gendisk *disk, int partno)
+ 	kobject_put(part->holder_dir);
+ 	device_del(part_to_dev(part));
+ 
++	/*
++	 * Remove the gendisk pointer from the idr so that it cannot be
++	 * looked up during the RCU grace period that precedes freeing
++	 * the gendisk, preventing use-after-free issues. Note that the
++	 * device number stays "in-use" until we really free the gendisk.
++	 */
++	blk_invalidate_devt(part_devt(part));
+ 	hd_struct_kill(part);
+ }
+ 
+diff --git a/block/sed-opal.c b/block/sed-opal.c
+index e0de4dd448b3..119640897293 100644
+--- a/block/sed-opal.c
++++ b/block/sed-opal.c
+@@ -2095,13 +2095,16 @@ static int opal_erase_locking_range(struct opal_dev *dev,
+ static int opal_enable_disable_shadow_mbr(struct opal_dev *dev,
+ 					  struct opal_mbr_data *opal_mbr)
+ {
++	u8 enable_disable = opal_mbr->enable_disable == OPAL_MBR_ENABLE ?
++		OPAL_TRUE : OPAL_FALSE;
++
+ 	const struct opal_step mbr_steps[] = {
+ 		{ opal_discovery0, },
+ 		{ start_admin1LSP_opal_session, &opal_mbr->key },
+-		{ set_mbr_done, &opal_mbr->enable_disable },
++		{ set_mbr_done, &enable_disable },
+ 		{ end_opal_session, },
+ 		{ start_admin1LSP_opal_session, &opal_mbr->key },
+-		{ set_mbr_enable_disable, &opal_mbr->enable_disable },
++		{ set_mbr_enable_disable, &enable_disable },
+ 		{ end_opal_session, },
+ 		{ NULL, }
+ 	};
+@@ -2221,7 +2224,7 @@ static int __opal_lock_unlock(struct opal_dev *dev,
+ 
+ static int __opal_set_mbr_done(struct opal_dev *dev, struct opal_key *key)
+ {
+-	u8 mbr_done_tf = 1;
++	u8 mbr_done_tf = OPAL_TRUE;
+ 	const struct opal_step mbrdone_step [] = {
+ 		{ opal_discovery0, },
+ 		{ start_admin1LSP_opal_session, key },
+diff --git a/crypto/hmac.c b/crypto/hmac.c
+index e74730224f0a..4b8c8ee8f15c 100644
+--- a/crypto/hmac.c
++++ b/crypto/hmac.c
+@@ -168,6 +168,8 @@ static int hmac_init_tfm(struct crypto_tfm *tfm)
+ 
+ 	parent->descsize = sizeof(struct shash_desc) +
+ 			   crypto_shash_descsize(hash);
++	if (WARN_ON(parent->descsize > HASH_MAX_DESCSIZE))
++		return -EINVAL;
+ 
+ 	ctx->hash = hash;
+ 	return 0;
+diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
+index e48894e002ba..a46c2c162c03 100644
+--- a/drivers/acpi/arm64/iort.c
++++ b/drivers/acpi/arm64/iort.c
+@@ -1232,18 +1232,24 @@ static bool __init arm_smmu_v3_is_coherent(struct acpi_iort_node *node)
+ /*
+  * set numa proximity domain for smmuv3 device
+  */
+-static void  __init arm_smmu_v3_set_proximity(struct device *dev,
++static int  __init arm_smmu_v3_set_proximity(struct device *dev,
+ 					      struct acpi_iort_node *node)
+ {
+ 	struct acpi_iort_smmu_v3 *smmu;
+ 
+ 	smmu = (struct acpi_iort_smmu_v3 *)node->node_data;
+ 	if (smmu->flags & ACPI_IORT_SMMU_V3_PXM_VALID) {
+-		set_dev_node(dev, acpi_map_pxm_to_node(smmu->pxm));
++		int node = acpi_map_pxm_to_node(smmu->pxm);
++
++		if (node != NUMA_NO_NODE && !node_online(node))
++			return -EINVAL;
++
++		set_dev_node(dev, node);
+ 		pr_info("SMMU-v3[%llx] Mapped to Proximity domain %d\n",
+ 			smmu->base_address,
+ 			smmu->pxm);
+ 	}
++	return 0;
+ }
+ #else
+ #define arm_smmu_v3_set_proximity NULL
+@@ -1318,7 +1324,7 @@ struct iort_dev_config {
+ 	int (*dev_count_resources)(struct acpi_iort_node *node);
+ 	void (*dev_init_resources)(struct resource *res,
+ 				     struct acpi_iort_node *node);
+-	void (*dev_set_proximity)(struct device *dev,
++	int (*dev_set_proximity)(struct device *dev,
+ 				    struct acpi_iort_node *node);
+ };
+ 
+@@ -1369,8 +1375,11 @@ static int __init iort_add_platform_device(struct acpi_iort_node *node,
+ 	if (!pdev)
+ 		return -ENOMEM;
+ 
+-	if (ops->dev_set_proximity)
+-		ops->dev_set_proximity(&pdev->dev, node);
++	if (ops->dev_set_proximity) {
++		ret = ops->dev_set_proximity(&pdev->dev, node);
++		if (ret)
++			goto dev_put;
++	}
+ 
+ 	count = ops->dev_count_resources(node);
+ 
+diff --git a/drivers/acpi/property.c b/drivers/acpi/property.c
+index 77abe0ec4043..bd533f68b1de 100644
+--- a/drivers/acpi/property.c
++++ b/drivers/acpi/property.c
+@@ -1031,6 +1031,14 @@ struct fwnode_handle *acpi_get_next_subnode(const struct fwnode_handle *fwnode,
+ 		const struct acpi_data_node *data = to_acpi_data_node(fwnode);
+ 		struct acpi_data_node *dn;
+ 
++		/*
++		 * We can have a combination of device and data nodes, e.g. with
++		 * hierarchical _DSD properties. Make sure the adev pointer is
++		 * restored before going through data nodes, otherwise we will
++		 * be looking for data_nodes below the last device found instead
++		 * of the common fwnode shared by device_nodes and data_nodes.
++		 */
++		adev = to_acpi_device_node(fwnode);
+ 		if (adev)
+ 			head = &adev->data.subnodes;
+ 		else if (data)
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index f80d298de3fa..8ad20ed0cb7c 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -1747,6 +1747,10 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
+ 	if (dev->power.syscore)
+ 		goto Complete;
+ 
++	/* Avoid direct_complete to let wakeup_path propagate. */
++	if (device_may_wakeup(dev) || dev->power.wakeup_path)
++		dev->power.direct_complete = false;
++
+ 	if (dev->power.direct_complete) {
+ 		if (pm_runtime_status_suspended(dev)) {
+ 			pm_runtime_disable(dev);
+diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c
+index d5d6e6e5da3b..62d3aa2b26f6 100644
+--- a/drivers/bluetooth/btbcm.c
++++ b/drivers/bluetooth/btbcm.c
+@@ -37,6 +37,7 @@
+ #define BDADDR_BCM43430A0 (&(bdaddr_t) {{0xac, 0x1f, 0x12, 0xa0, 0x43, 0x43}})
+ #define BDADDR_BCM4324B3 (&(bdaddr_t) {{0x00, 0x00, 0x00, 0xb3, 0x24, 0x43}})
+ #define BDADDR_BCM4330B1 (&(bdaddr_t) {{0x00, 0x00, 0x00, 0xb1, 0x30, 0x43}})
++#define BDADDR_BCM43341B (&(bdaddr_t) {{0xac, 0x1f, 0x00, 0x1b, 0x34, 0x43}})
+ 
+ int btbcm_check_bdaddr(struct hci_dev *hdev)
+ {
+@@ -82,7 +83,8 @@ int btbcm_check_bdaddr(struct hci_dev *hdev)
+ 	    !bacmp(&bda->bdaddr, BDADDR_BCM20702A1) ||
+ 	    !bacmp(&bda->bdaddr, BDADDR_BCM4324B3) ||
+ 	    !bacmp(&bda->bdaddr, BDADDR_BCM4330B1) ||
+-	    !bacmp(&bda->bdaddr, BDADDR_BCM43430A0)) {
++	    !bacmp(&bda->bdaddr, BDADDR_BCM43430A0) ||
++	    !bacmp(&bda->bdaddr, BDADDR_BCM43341B)) {
+ 		bt_dev_info(hdev, "BCM: Using default device address (%pMR)",
+ 			    &bda->bdaddr);
+ 		set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
+diff --git a/drivers/bluetooth/btmtkuart.c b/drivers/bluetooth/btmtkuart.c
+index b0b680dd69f4..f5dbeec8e274 100644
+--- a/drivers/bluetooth/btmtkuart.c
++++ b/drivers/bluetooth/btmtkuart.c
+@@ -661,7 +661,7 @@ static int btmtkuart_change_baudrate(struct hci_dev *hdev)
+ {
+ 	struct btmtkuart_dev *bdev = hci_get_drvdata(hdev);
+ 	struct btmtk_hci_wmt_params wmt_params;
+-	u32 baudrate;
++	__le32 baudrate;
+ 	u8 param;
+ 	int err;
+ 
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index 237aea34b69f..d3b467792eb3 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -508,6 +508,8 @@ static int qca_open(struct hci_uart *hu)
+ 		qcadev = serdev_device_get_drvdata(hu->serdev);
+ 		if (qcadev->btsoc_type != QCA_WCN3990) {
+ 			gpiod_set_value_cansleep(qcadev->bt_en, 1);
++			/* Controller needs time to boot up. */
++			msleep(150);
+ 		} else {
+ 			hu->init_speed = qcadev->init_speed;
+ 			hu->oper_speed = qcadev->oper_speed;
+@@ -992,7 +994,8 @@ static int qca_set_baudrate(struct hci_dev *hdev, uint8_t baudrate)
+ 	while (!skb_queue_empty(&qca->txq))
+ 		usleep_range(100, 200);
+ 
+-	serdev_device_wait_until_sent(hu->serdev,
++	if (hu->serdev)
++		serdev_device_wait_until_sent(hu->serdev,
+ 		      msecs_to_jiffies(CMD_TRANS_TIMEOUT_MS));
+ 
+ 	/* Give the controller time to process the request */
+diff --git a/drivers/char/hw_random/omap-rng.c b/drivers/char/hw_random/omap-rng.c
+index b65ff6962899..e9b6ac61fb7f 100644
+--- a/drivers/char/hw_random/omap-rng.c
++++ b/drivers/char/hw_random/omap-rng.c
+@@ -443,6 +443,7 @@ static int omap_rng_probe(struct platform_device *pdev)
+ 	priv->rng.read = omap_rng_do_read;
+ 	priv->rng.init = omap_rng_init;
+ 	priv->rng.cleanup = omap_rng_cleanup;
++	priv->rng.quality = 900;
+ 
+ 	priv->rng.priv = (unsigned long)priv;
+ 	platform_set_drvdata(pdev, priv);
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index 38c6d1af6d1c..af6e240f98ff 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -777,6 +777,7 @@ static struct crng_state **crng_node_pool __read_mostly;
+ #endif
+ 
+ static void invalidate_batched_entropy(void);
++static void numa_crng_init(void);
+ 
+ static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU);
+ static int __init parse_trust_cpu(char *arg)
+@@ -805,7 +806,9 @@ static void crng_initialize(struct crng_state *crng)
+ 		}
+ 		crng->state[i] ^= rv;
+ 	}
+-	if (trust_cpu && arch_init) {
++	if (trust_cpu && arch_init && crng == &primary_crng) {
++		invalidate_batched_entropy();
++		numa_crng_init();
+ 		crng_init = 2;
+ 		pr_notice("random: crng done (trusting CPU's manufacturer)\n");
+ 	}
+@@ -2211,8 +2214,8 @@ struct batched_entropy {
+ 		u32 entropy_u32[CHACHA_BLOCK_SIZE / sizeof(u32)];
+ 	};
+ 	unsigned int position;
++	spinlock_t batch_lock;
+ };
+-static rwlock_t batched_entropy_reset_lock = __RW_LOCK_UNLOCKED(batched_entropy_reset_lock);
+ 
+ /*
+  * Get a random word for internal kernel use only. The quality of the random
+@@ -2222,12 +2225,14 @@ static rwlock_t batched_entropy_reset_lock = __RW_LOCK_UNLOCKED(batched_entropy_
+  * wait_for_random_bytes() should be called and return 0 at least once
+  * at any point prior.
+  */
+-static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
++static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = {
++	.batch_lock	= __SPIN_LOCK_UNLOCKED(batched_entropy_u64.lock),
++};
++
+ u64 get_random_u64(void)
+ {
+ 	u64 ret;
+-	bool use_lock;
+-	unsigned long flags = 0;
++	unsigned long flags;
+ 	struct batched_entropy *batch;
+ 	static void *previous;
+ 
+@@ -2242,28 +2247,25 @@ u64 get_random_u64(void)
+ 
+ 	warn_unseeded_randomness(&previous);
+ 
+-	use_lock = READ_ONCE(crng_init) < 2;
+-	batch = &get_cpu_var(batched_entropy_u64);
+-	if (use_lock)
+-		read_lock_irqsave(&batched_entropy_reset_lock, flags);
++	batch = raw_cpu_ptr(&batched_entropy_u64);
++	spin_lock_irqsave(&batch->batch_lock, flags);
+ 	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
+ 		extract_crng((u8 *)batch->entropy_u64);
+ 		batch->position = 0;
+ 	}
+ 	ret = batch->entropy_u64[batch->position++];
+-	if (use_lock)
+-		read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
+-	put_cpu_var(batched_entropy_u64);
++	spin_unlock_irqrestore(&batch->batch_lock, flags);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(get_random_u64);
+ 
+-static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32);
++static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32) = {
++	.batch_lock	= __SPIN_LOCK_UNLOCKED(batched_entropy_u32.lock),
++};
+ u32 get_random_u32(void)
+ {
+ 	u32 ret;
+-	bool use_lock;
+-	unsigned long flags = 0;
++	unsigned long flags;
+ 	struct batched_entropy *batch;
+ 	static void *previous;
+ 
+@@ -2272,18 +2274,14 @@ u32 get_random_u32(void)
+ 
+ 	warn_unseeded_randomness(&previous);
+ 
+-	use_lock = READ_ONCE(crng_init) < 2;
+-	batch = &get_cpu_var(batched_entropy_u32);
+-	if (use_lock)
+-		read_lock_irqsave(&batched_entropy_reset_lock, flags);
++	batch = raw_cpu_ptr(&batched_entropy_u32);
++	spin_lock_irqsave(&batch->batch_lock, flags);
+ 	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
+ 		extract_crng((u8 *)batch->entropy_u32);
+ 		batch->position = 0;
+ 	}
+ 	ret = batch->entropy_u32[batch->position++];
+-	if (use_lock)
+-		read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
+-	put_cpu_var(batched_entropy_u32);
++	spin_unlock_irqrestore(&batch->batch_lock, flags);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(get_random_u32);
+@@ -2297,12 +2295,19 @@ static void invalidate_batched_entropy(void)
+ 	int cpu;
+ 	unsigned long flags;
+ 
+-	write_lock_irqsave(&batched_entropy_reset_lock, flags);
+ 	for_each_possible_cpu (cpu) {
+-		per_cpu_ptr(&batched_entropy_u32, cpu)->position = 0;
+-		per_cpu_ptr(&batched_entropy_u64, cpu)->position = 0;
++		struct batched_entropy *batched_entropy;
++
++		batched_entropy = per_cpu_ptr(&batched_entropy_u32, cpu);
++		spin_lock_irqsave(&batched_entropy->batch_lock, flags);
++		batched_entropy->position = 0;
++		spin_unlock(&batched_entropy->batch_lock);
++
++		batched_entropy = per_cpu_ptr(&batched_entropy_u64, cpu);
++		spin_lock(&batched_entropy->batch_lock);
++		batched_entropy->position = 0;
++		spin_unlock_irqrestore(&batched_entropy->batch_lock, flags);
+ 	}
+-	write_unlock_irqrestore(&batched_entropy_reset_lock, flags);
+ }
+ 
+ /**
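
The random.c change above moves batched-entropy protection from one global rwlock to a per-CPU spinlock embedded in each batch: consumers take only their own CPU's lock, and invalidate_batched_entropy() walks every CPU taking each lock in turn, so a reseed can no longer race with a half-consumed batch. The per-CPU-batch-with-lock shape, as a kernel-context sketch (demo_* names and refill() are illustrative, not the kernel's API):

    #include <linux/kernel.h>
    #include <linux/percpu.h>
    #include <linux/spinlock.h>
    #include <linux/types.h>

    struct batch {
            u64 buf[8];
            unsigned int pos;
            spinlock_t lock;
    };

    static DEFINE_PER_CPU(struct batch, demo_batch) = {
            .lock = __SPIN_LOCK_UNLOCKED(demo_batch.lock),
    };

    extern void refill(u64 *buf);   /* hypothetical reseed source */

    static u64 demo_next(void)
    {
            struct batch *b;
            unsigned long flags;
            u64 ret;

            b = raw_cpu_ptr(&demo_batch);
            spin_lock_irqsave(&b->lock, flags); /* also serializes vs. reseed */
            if (b->pos % ARRAY_SIZE(b->buf) == 0) {
                    refill(b->buf);
                    b->pos = 0;
            }
            ret = b->buf[b->pos++];
            spin_unlock_irqrestore(&b->lock, flags);
            return ret;
    }
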
+diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
+index fbeb71953526..05dbfdb9f4af 100644
+--- a/drivers/char/virtio_console.c
++++ b/drivers/char/virtio_console.c
+@@ -75,7 +75,7 @@ struct ports_driver_data {
+ 	/* All the console devices handled by this driver */
+ 	struct list_head consoles;
+ };
+-static struct ports_driver_data pdrvdata;
++static struct ports_driver_data pdrvdata = { .next_vtermno = 1};
+ 
+ static DEFINE_SPINLOCK(pdrvdata_lock);
+ static DECLARE_COMPLETION(early_console_added);
+@@ -1394,6 +1394,7 @@ static int add_port(struct ports_device *portdev, u32 id)
+ 	port->async_queue = NULL;
+ 
+ 	port->cons.ws.ws_row = port->cons.ws.ws_col = 0;
++	port->cons.vtermno = 0;
+ 
+ 	port->host_connected = port->guest_connected = false;
+ 	port->stats = (struct port_stats) { 0 };
+diff --git a/drivers/clk/renesas/r8a774a1-cpg-mssr.c b/drivers/clk/renesas/r8a774a1-cpg-mssr.c
+index 4d92b27a6153..7a4c5957939a 100644
+--- a/drivers/clk/renesas/r8a774a1-cpg-mssr.c
++++ b/drivers/clk/renesas/r8a774a1-cpg-mssr.c
+@@ -123,8 +123,8 @@ static const struct mssr_mod_clk r8a774a1_mod_clks[] __initconst = {
+ 	DEF_MOD("msiof2",		 209,	R8A774A1_CLK_MSO),
+ 	DEF_MOD("msiof1",		 210,	R8A774A1_CLK_MSO),
+ 	DEF_MOD("msiof0",		 211,	R8A774A1_CLK_MSO),
+-	DEF_MOD("sys-dmac2",		 217,	R8A774A1_CLK_S0D3),
+-	DEF_MOD("sys-dmac1",		 218,	R8A774A1_CLK_S0D3),
++	DEF_MOD("sys-dmac2",		 217,	R8A774A1_CLK_S3D1),
++	DEF_MOD("sys-dmac1",		 218,	R8A774A1_CLK_S3D1),
+ 	DEF_MOD("sys-dmac0",		 219,	R8A774A1_CLK_S0D3),
+ 	DEF_MOD("cmt3",			 300,	R8A774A1_CLK_R),
+ 	DEF_MOD("cmt2",			 301,	R8A774A1_CLK_R),
+@@ -143,8 +143,8 @@ static const struct mssr_mod_clk r8a774a1_mod_clks[] __initconst = {
+ 	DEF_MOD("rwdt",			 402,	R8A774A1_CLK_R),
+ 	DEF_MOD("intc-ex",		 407,	R8A774A1_CLK_CP),
+ 	DEF_MOD("intc-ap",		 408,	R8A774A1_CLK_S0D3),
+-	DEF_MOD("audmac1",		 501,	R8A774A1_CLK_S0D3),
+-	DEF_MOD("audmac0",		 502,	R8A774A1_CLK_S0D3),
++	DEF_MOD("audmac1",		 501,	R8A774A1_CLK_S1D2),
++	DEF_MOD("audmac0",		 502,	R8A774A1_CLK_S1D2),
+ 	DEF_MOD("hscif4",		 516,	R8A774A1_CLK_S3D1),
+ 	DEF_MOD("hscif3",		 517,	R8A774A1_CLK_S3D1),
+ 	DEF_MOD("hscif2",		 518,	R8A774A1_CLK_S3D1),
+diff --git a/drivers/clk/renesas/r8a774c0-cpg-mssr.c b/drivers/clk/renesas/r8a774c0-cpg-mssr.c
+index 34e274f2a273..93dacd826fd0 100644
+--- a/drivers/clk/renesas/r8a774c0-cpg-mssr.c
++++ b/drivers/clk/renesas/r8a774c0-cpg-mssr.c
+@@ -157,7 +157,7 @@ static const struct mssr_mod_clk r8a774c0_mod_clks[] __initconst = {
+ 	DEF_MOD("intc-ex",		 407,	R8A774C0_CLK_CP),
+ 	DEF_MOD("intc-ap",		 408,	R8A774C0_CLK_S0D3),
+ 
+-	DEF_MOD("audmac0",		 502,	R8A774C0_CLK_S3D4),
++	DEF_MOD("audmac0",		 502,	R8A774C0_CLK_S1D2),
+ 	DEF_MOD("hscif4",		 516,	R8A774C0_CLK_S3D1C),
+ 	DEF_MOD("hscif3",		 517,	R8A774C0_CLK_S3D1C),
+ 	DEF_MOD("hscif2",		 518,	R8A774C0_CLK_S3D1C),
+diff --git a/drivers/clk/renesas/r8a7795-cpg-mssr.c b/drivers/clk/renesas/r8a7795-cpg-mssr.c
+index 86842c9fd314..0825cd0ff286 100644
+--- a/drivers/clk/renesas/r8a7795-cpg-mssr.c
++++ b/drivers/clk/renesas/r8a7795-cpg-mssr.c
+@@ -129,8 +129,8 @@ static struct mssr_mod_clk r8a7795_mod_clks[] __initdata = {
+ 	DEF_MOD("msiof2",		 209,	R8A7795_CLK_MSO),
+ 	DEF_MOD("msiof1",		 210,	R8A7795_CLK_MSO),
+ 	DEF_MOD("msiof0",		 211,	R8A7795_CLK_MSO),
+-	DEF_MOD("sys-dmac2",		 217,	R8A7795_CLK_S0D3),
+-	DEF_MOD("sys-dmac1",		 218,	R8A7795_CLK_S0D3),
++	DEF_MOD("sys-dmac2",		 217,	R8A7795_CLK_S3D1),
++	DEF_MOD("sys-dmac1",		 218,	R8A7795_CLK_S3D1),
+ 	DEF_MOD("sys-dmac0",		 219,	R8A7795_CLK_S0D3),
+ 	DEF_MOD("sceg-pub",		 229,	R8A7795_CLK_CR),
+ 	DEF_MOD("cmt3",			 300,	R8A7795_CLK_R),
+@@ -153,8 +153,8 @@ static struct mssr_mod_clk r8a7795_mod_clks[] __initdata = {
+ 	DEF_MOD("rwdt",			 402,	R8A7795_CLK_R),
+ 	DEF_MOD("intc-ex",		 407,	R8A7795_CLK_CP),
+ 	DEF_MOD("intc-ap",		 408,	R8A7795_CLK_S0D3),
+-	DEF_MOD("audmac1",		 501,	R8A7795_CLK_S0D3),
+-	DEF_MOD("audmac0",		 502,	R8A7795_CLK_S0D3),
++	DEF_MOD("audmac1",		 501,	R8A7795_CLK_S1D2),
++	DEF_MOD("audmac0",		 502,	R8A7795_CLK_S1D2),
+ 	DEF_MOD("drif7",		 508,	R8A7795_CLK_S3D2),
+ 	DEF_MOD("drif6",		 509,	R8A7795_CLK_S3D2),
+ 	DEF_MOD("drif5",		 510,	R8A7795_CLK_S3D2),
+diff --git a/drivers/clk/renesas/r8a7796-cpg-mssr.c b/drivers/clk/renesas/r8a7796-cpg-mssr.c
+index 12c455859f2c..997cd956f12b 100644
+--- a/drivers/clk/renesas/r8a7796-cpg-mssr.c
++++ b/drivers/clk/renesas/r8a7796-cpg-mssr.c
+@@ -126,8 +126,8 @@ static const struct mssr_mod_clk r8a7796_mod_clks[] __initconst = {
+ 	DEF_MOD("msiof2",		 209,	R8A7796_CLK_MSO),
+ 	DEF_MOD("msiof1",		 210,	R8A7796_CLK_MSO),
+ 	DEF_MOD("msiof0",		 211,	R8A7796_CLK_MSO),
+-	DEF_MOD("sys-dmac2",		 217,	R8A7796_CLK_S0D3),
+-	DEF_MOD("sys-dmac1",		 218,	R8A7796_CLK_S0D3),
++	DEF_MOD("sys-dmac2",		 217,	R8A7796_CLK_S3D1),
++	DEF_MOD("sys-dmac1",		 218,	R8A7796_CLK_S3D1),
+ 	DEF_MOD("sys-dmac0",		 219,	R8A7796_CLK_S0D3),
+ 	DEF_MOD("cmt3",			 300,	R8A7796_CLK_R),
+ 	DEF_MOD("cmt2",			 301,	R8A7796_CLK_R),
+@@ -146,8 +146,8 @@ static const struct mssr_mod_clk r8a7796_mod_clks[] __initconst = {
+ 	DEF_MOD("rwdt",			 402,	R8A7796_CLK_R),
+ 	DEF_MOD("intc-ex",		 407,	R8A7796_CLK_CP),
+ 	DEF_MOD("intc-ap",		 408,	R8A7796_CLK_S0D3),
+-	DEF_MOD("audmac1",		 501,	R8A7796_CLK_S0D3),
+-	DEF_MOD("audmac0",		 502,	R8A7796_CLK_S0D3),
++	DEF_MOD("audmac1",		 501,	R8A7796_CLK_S1D2),
++	DEF_MOD("audmac0",		 502,	R8A7796_CLK_S1D2),
+ 	DEF_MOD("drif7",		 508,	R8A7796_CLK_S3D2),
+ 	DEF_MOD("drif6",		 509,	R8A7796_CLK_S3D2),
+ 	DEF_MOD("drif5",		 510,	R8A7796_CLK_S3D2),
+diff --git a/drivers/clk/renesas/r8a77965-cpg-mssr.c b/drivers/clk/renesas/r8a77965-cpg-mssr.c
+index eb1cca58a1e1..afc9c72fa094 100644
+--- a/drivers/clk/renesas/r8a77965-cpg-mssr.c
++++ b/drivers/clk/renesas/r8a77965-cpg-mssr.c
+@@ -123,8 +123,8 @@ static const struct mssr_mod_clk r8a77965_mod_clks[] __initconst = {
+ 	DEF_MOD("msiof2",		209,	R8A77965_CLK_MSO),
+ 	DEF_MOD("msiof1",		210,	R8A77965_CLK_MSO),
+ 	DEF_MOD("msiof0",		211,	R8A77965_CLK_MSO),
+-	DEF_MOD("sys-dmac2",		217,	R8A77965_CLK_S0D3),
+-	DEF_MOD("sys-dmac1",		218,	R8A77965_CLK_S0D3),
++	DEF_MOD("sys-dmac2",		217,	R8A77965_CLK_S3D1),
++	DEF_MOD("sys-dmac1",		218,	R8A77965_CLK_S3D1),
+ 	DEF_MOD("sys-dmac0",		219,	R8A77965_CLK_S0D3),
+ 
+ 	DEF_MOD("cmt3",			300,	R8A77965_CLK_R),
+@@ -146,8 +146,8 @@ static const struct mssr_mod_clk r8a77965_mod_clks[] __initconst = {
+ 	DEF_MOD("intc-ex",		407,	R8A77965_CLK_CP),
+ 	DEF_MOD("intc-ap",		408,	R8A77965_CLK_S0D3),
+ 
+-	DEF_MOD("audmac1",		501,	R8A77965_CLK_S0D3),
+-	DEF_MOD("audmac0",		502,	R8A77965_CLK_S0D3),
++	DEF_MOD("audmac1",		501,	R8A77965_CLK_S1D2),
++	DEF_MOD("audmac0",		502,	R8A77965_CLK_S1D2),
+ 	DEF_MOD("drif7",		508,	R8A77965_CLK_S3D2),
+ 	DEF_MOD("drif6",		509,	R8A77965_CLK_S3D2),
+ 	DEF_MOD("drif5",		510,	R8A77965_CLK_S3D2),
+diff --git a/drivers/clk/renesas/r8a77990-cpg-mssr.c b/drivers/clk/renesas/r8a77990-cpg-mssr.c
+index 9a278c75c918..03f445d47ef6 100644
+--- a/drivers/clk/renesas/r8a77990-cpg-mssr.c
++++ b/drivers/clk/renesas/r8a77990-cpg-mssr.c
+@@ -152,7 +152,7 @@ static const struct mssr_mod_clk r8a77990_mod_clks[] __initconst = {
+ 	DEF_MOD("intc-ex",		 407,	R8A77990_CLK_CP),
+ 	DEF_MOD("intc-ap",		 408,	R8A77990_CLK_S0D3),
+ 
+-	DEF_MOD("audmac0",		 502,	R8A77990_CLK_S3D4),
++	DEF_MOD("audmac0",		 502,	R8A77990_CLK_S1D2),
+ 	DEF_MOD("drif7",		 508,	R8A77990_CLK_S3D2),
+ 	DEF_MOD("drif6",		 509,	R8A77990_CLK_S3D2),
+ 	DEF_MOD("drif5",		 510,	R8A77990_CLK_S3D2),
+diff --git a/drivers/clk/renesas/r8a77995-cpg-mssr.c b/drivers/clk/renesas/r8a77995-cpg-mssr.c
+index eee3874865a9..68707277b17b 100644
+--- a/drivers/clk/renesas/r8a77995-cpg-mssr.c
++++ b/drivers/clk/renesas/r8a77995-cpg-mssr.c
+@@ -133,7 +133,7 @@ static const struct mssr_mod_clk r8a77995_mod_clks[] __initconst = {
+ 	DEF_MOD("rwdt",			 402,	R8A77995_CLK_R),
+ 	DEF_MOD("intc-ex",		 407,	R8A77995_CLK_CP),
+ 	DEF_MOD("intc-ap",		 408,	R8A77995_CLK_S1D2),
+-	DEF_MOD("audmac0",		 502,	R8A77995_CLK_S3D1),
++	DEF_MOD("audmac0",		 502,	R8A77995_CLK_S1D2),
+ 	DEF_MOD("hscif3",		 517,	R8A77995_CLK_S3D1C),
+ 	DEF_MOD("hscif0",		 520,	R8A77995_CLK_S3D1C),
+ 	DEF_MOD("thermal",		 522,	R8A77995_CLK_CP),
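
The Renesas CPG/MSSR hunks above retarget module clocks to different core-clock parents: SYS-DMAC1/2 move from S0D3 to S3D1 and the audio DMACs to S1D2 (from S3D4/S3D1 on the E3 and D3 parts), evidently to match the parents documented for these SoCs. For orientation, the table entries being edited come from a macro along these lines; this is a sketch from memory of renesas-cpg-mssr.h, not a quotation:

/* sketch of the DEF_MOD() shape used by these tables: each entry binds a
 * module clock (by its MSTP number) to the core clock it derives from */
#define DEF_MOD(_name, _mod, _parent...) \
	{ .name = _name, .id = MOD_CLK_BASE + (_mod), .parent = (_parent) }
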
+diff --git a/drivers/clk/rockchip/clk-rk3288.c b/drivers/clk/rockchip/clk-rk3288.c
+index 5a67b7869960..355d6a3611db 100644
+--- a/drivers/clk/rockchip/clk-rk3288.c
++++ b/drivers/clk/rockchip/clk-rk3288.c
+@@ -219,7 +219,7 @@ PNAME(mux_hsadcout_p)	= { "hsadc_src", "ext_hsadc" };
+ PNAME(mux_edp_24m_p)	= { "ext_edp_24m", "xin24m" };
+ PNAME(mux_tspout_p)	= { "cpll", "gpll", "npll", "xin27m" };
+ 
+-PNAME(mux_aclk_vcodec_pre_p)	= { "aclk_vepu", "aclk_vdpu" };
++PNAME(mux_aclk_vcodec_pre_p)	= { "aclk_vdpu", "aclk_vepu" };
+ PNAME(mux_usbphy480m_p)		= { "sclk_otgphy1_480m", "sclk_otgphy2_480m",
+ 				    "sclk_otgphy0_480m" };
+ PNAME(mux_hsicphy480m_p)	= { "cpll", "gpll", "usbphy480m_src" };
+@@ -313,13 +313,13 @@ static struct rockchip_clk_branch rk3288_clk_branches[] __initdata = {
+ 	COMPOSITE_NOMUX(0, "aclk_core_mp", "armclk", CLK_IGNORE_UNUSED,
+ 			RK3288_CLKSEL_CON(0), 4, 4, DFLAGS | CLK_DIVIDER_READ_ONLY,
+ 			RK3288_CLKGATE_CON(12), 6, GFLAGS),
+-	COMPOSITE_NOMUX(0, "atclk", "armclk", CLK_IGNORE_UNUSED,
++	COMPOSITE_NOMUX(0, "atclk", "armclk", 0,
+ 			RK3288_CLKSEL_CON(37), 4, 5, DFLAGS | CLK_DIVIDER_READ_ONLY,
+ 			RK3288_CLKGATE_CON(12), 7, GFLAGS),
+ 	COMPOSITE_NOMUX(0, "pclk_dbg_pre", "armclk", CLK_IGNORE_UNUSED,
+ 			RK3288_CLKSEL_CON(37), 9, 5, DFLAGS | CLK_DIVIDER_READ_ONLY,
+ 			RK3288_CLKGATE_CON(12), 8, GFLAGS),
+-	GATE(0, "pclk_dbg", "pclk_dbg_pre", CLK_IGNORE_UNUSED,
++	GATE(0, "pclk_dbg", "pclk_dbg_pre", 0,
+ 			RK3288_CLKGATE_CON(12), 9, GFLAGS),
+ 	GATE(0, "cs_dbg", "pclk_dbg_pre", CLK_IGNORE_UNUSED,
+ 			RK3288_CLKGATE_CON(12), 10, GFLAGS),
+@@ -420,7 +420,7 @@ static struct rockchip_clk_branch rk3288_clk_branches[] __initdata = {
+ 	COMPOSITE(0, "aclk_vdpu", mux_pll_src_cpll_gpll_usb480m_p, 0,
+ 			RK3288_CLKSEL_CON(32), 14, 2, MFLAGS, 8, 5, DFLAGS,
+ 			RK3288_CLKGATE_CON(3), 11, GFLAGS),
+-	MUXGRF(0, "aclk_vcodec_pre", mux_aclk_vcodec_pre_p, 0,
++	MUXGRF(0, "aclk_vcodec_pre", mux_aclk_vcodec_pre_p, CLK_SET_RATE_PARENT,
+ 			RK3288_GRF_SOC_CON(0), 7, 1, MFLAGS),
+ 	GATE(ACLK_VCODEC, "aclk_vcodec", "aclk_vcodec_pre", 0,
+ 		RK3288_CLKGATE_CON(9), 0, GFLAGS),
+@@ -647,7 +647,7 @@ static struct rockchip_clk_branch rk3288_clk_branches[] __initdata = {
+ 	INVERTER(SCLK_HSADC, "sclk_hsadc", "sclk_hsadc_out",
+ 			RK3288_CLKSEL_CON(22), 7, IFLAGS),
+ 
+-	GATE(0, "jtag", "ext_jtag", CLK_IGNORE_UNUSED,
++	GATE(0, "jtag", "ext_jtag", 0,
+ 			RK3288_CLKGATE_CON(4), 14, GFLAGS),
+ 
+ 	COMPOSITE_NODIV(SCLK_USBPHY480M_SRC, "usbphy480m_src", mux_usbphy480m_p, 0,
+@@ -656,7 +656,7 @@ static struct rockchip_clk_branch rk3288_clk_branches[] __initdata = {
+ 	COMPOSITE_NODIV(SCLK_HSICPHY480M, "sclk_hsicphy480m", mux_hsicphy480m_p, 0,
+ 			RK3288_CLKSEL_CON(29), 0, 2, MFLAGS,
+ 			RK3288_CLKGATE_CON(3), 6, GFLAGS),
+-	GATE(0, "hsicphy12m_xin12m", "xin12m", CLK_IGNORE_UNUSED,
++	GATE(0, "hsicphy12m_xin12m", "xin12m", 0,
+ 			RK3288_CLKGATE_CON(13), 9, GFLAGS),
+ 	DIV(0, "hsicphy12m_usbphy", "sclk_hsicphy480m", 0,
+ 			RK3288_CLKSEL_CON(11), 8, 6, DFLAGS),
+@@ -697,7 +697,7 @@ static struct rockchip_clk_branch rk3288_clk_branches[] __initdata = {
+ 	GATE(PCLK_TZPC, "pclk_tzpc", "pclk_cpu", 0, RK3288_CLKGATE_CON(11), 3, GFLAGS),
+ 	GATE(PCLK_UART2, "pclk_uart2", "pclk_cpu", 0, RK3288_CLKGATE_CON(11), 9, GFLAGS),
+ 	GATE(PCLK_EFUSE256, "pclk_efuse_256", "pclk_cpu", 0, RK3288_CLKGATE_CON(11), 10, GFLAGS),
+-	GATE(PCLK_RKPWM, "pclk_rkpwm", "pclk_cpu", CLK_IGNORE_UNUSED, RK3288_CLKGATE_CON(11), 11, GFLAGS),
++	GATE(PCLK_RKPWM, "pclk_rkpwm", "pclk_cpu", 0, RK3288_CLKGATE_CON(11), 11, GFLAGS),
+ 
+ 	/* ddrctrl [DDR Controller PHY clock] gates */
+ 	GATE(0, "nclk_ddrupctl0", "ddrphy", CLK_IGNORE_UNUSED, RK3288_CLKGATE_CON(11), 4, GFLAGS),
+@@ -837,12 +837,9 @@ static const char *const rk3288_critical_clocks[] __initconst = {
+ 	"pclk_alive_niu",
+ 	"pclk_pd_pmu",
+ 	"pclk_pmu_niu",
+-	"pclk_core_niu",
+-	"pclk_ddrupctl0",
+-	"pclk_publ0",
+-	"pclk_ddrupctl1",
+-	"pclk_publ1",
+ 	"pmu_hclk_otg0",
++	/* pwm-regulators on some boards, so handoff-critical later */
++	"pclk_rkpwm",
+ };
+ 
+ static void __iomem *rk3288_cru_base;
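
The rk3288 hunk drops CLK_IGNORE_UNUSED from several debug, JTAG, HSIC and PWM gates, reorders the aclk_vcodec_pre mux parents and lets that mux propagate rate requests, trims the core-NoC and DDR-control entries from the critical-clock list, and instead marks "pclk_rkpwm" critical because PWM-driven regulators on some boards are supplied through it. Critical clocks are simply enabled once at init so the framework's unused-clock sweep can never gate them; a minimal sketch of the idiom, close to the driver's rockchip_clk_protect_critical():

#include <linux/clk.h>
#include <linux/clk-provider.h>

/* enable each named clock once so clk_disable_unused() leaves it alone */
static void protect_critical(const char *const clocks[], int nclocks)
{
	int i;

	for (i = 0; i < nclocks; i++) {
		struct clk *clk = __clk_lookup(clocks[i]);

		if (clk)
			clk_prepare_enable(clk);
	}
}
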
+diff --git a/drivers/clk/zynqmp/divider.c b/drivers/clk/zynqmp/divider.c
+index a371c66e72ef..bd9b5fbc443b 100644
+--- a/drivers/clk/zynqmp/divider.c
++++ b/drivers/clk/zynqmp/divider.c
+@@ -31,12 +31,14 @@
+  * struct zynqmp_clk_divider - adjustable divider clock
+  * @hw:		handle between common and hardware-specific interfaces
+  * @flags:	Hardware specific flags
++ * @is_frac:	The divider is a fractional divider
+  * @clk_id:	Id of clock
+  * @div_type:	divisor type (TYPE_DIV1 or TYPE_DIV2)
+  */
+ struct zynqmp_clk_divider {
+ 	struct clk_hw hw;
+ 	u8 flags;
++	bool is_frac;
+ 	u32 clk_id;
+ 	u32 div_type;
+ };
+@@ -116,8 +118,7 @@ static long zynqmp_clk_divider_round_rate(struct clk_hw *hw,
+ 
+ 	bestdiv = zynqmp_divider_get_val(*prate, rate);
+ 
+-	if ((clk_hw_get_flags(hw) & CLK_SET_RATE_PARENT) &&
+-	    (divider->flags & CLK_FRAC))
++	if ((clk_hw_get_flags(hw) & CLK_SET_RATE_PARENT) && divider->is_frac)
+ 		bestdiv = rate % *prate ? 1 : bestdiv;
+ 	*prate = rate * bestdiv;
+ 
+@@ -195,11 +196,13 @@ struct clk_hw *zynqmp_clk_register_divider(const char *name,
+ 
+ 	init.name = name;
+ 	init.ops = &zynqmp_clk_divider_ops;
+-	init.flags = nodes->flag;
++	/* CLK_FRAC is not defined in the common clk framework */
++	init.flags = nodes->flag & ~CLK_FRAC;
+ 	init.parent_names = parents;
+ 	init.num_parents = 1;
+ 
+ 	/* struct clk_divider assignments */
++	div->is_frac = !!(nodes->flag & CLK_FRAC);
+ 	div->flags = nodes->type_flag;
+ 	div->hw.init = &init;
+ 	div->clk_id = clk_id;
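
As the new comment in the zynqmp hunk says, CLK_FRAC is a firmware-interface flag with no meaning to the common clock framework, so leaving it in init.flags risked colliding with a real framework flag bit. The fix remembers the property in a private is_frac field and masks the bit out before registration. The general pattern, sketched with a hypothetical bit value and structure:

#include <linux/bits.h>
#include <linux/types.h>

#define MY_PRIV_FRAC	BIT(13)	/* hypothetical driver-private flag bit */

struct my_div {
	bool is_frac;
	unsigned long ccf_flags;
};

/* split a combined flag word into the framework's share and ours */
static void split_flags(struct my_div *div, unsigned long raw)
{
	div->is_frac = !!(raw & MY_PRIV_FRAC);	/* remember it privately */
	div->ccf_flags = raw & ~MY_PRIV_FRAC;	/* hand CCF only what it owns */
}
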
+diff --git a/drivers/cpufreq/armada-8k-cpufreq.c b/drivers/cpufreq/armada-8k-cpufreq.c
+index b3f4bd647e9b..988ebc326bdb 100644
+--- a/drivers/cpufreq/armada-8k-cpufreq.c
++++ b/drivers/cpufreq/armada-8k-cpufreq.c
+@@ -132,6 +132,7 @@ static int __init armada_8k_cpufreq_init(void)
+ 		of_node_put(node);
+ 		return -ENODEV;
+ 	}
++	of_node_put(node);
+ 
+ 	nb_cpus = num_possible_cpus();
+ 	freq_tables = kcalloc(nb_cpus, sizeof(*freq_tables), GFP_KERNEL);
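
This armada-8k change, and the kirkwood, pasemi, pmac32, ppc_cbe and msm/a5xx hunks further down, all apply the same device-tree reference rule: every of_*() lookup that hands back a node takes a reference that must be dropped with of_node_put() on every exit path, success and failure alike. A minimal sketch of the idiom (the compatible string is made up):

#include <linux/of.h>

static int probe_ctrl(void)
{
	struct device_node *np;

	np = of_find_compatible_node(NULL, NULL, "vendor,ctrl"); /* hypothetical */
	if (!np)
		return -ENODEV;

	if (!of_device_is_available(np)) {
		of_node_put(np);	/* error path drops the reference too */
		return -ENODEV;
	}

	/* ... use np ... */
	of_node_put(np);		/* balanced on the success path */
	return 0;
}
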
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index e10922709d13..bbf79544d0ad 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -1098,6 +1098,7 @@ static struct cpufreq_policy *cpufreq_policy_alloc(unsigned int cpu)
+ 				   cpufreq_global_kobject, "policy%u", cpu);
+ 	if (ret) {
+ 		pr_err("%s: failed to init policy->kobj: %d\n", __func__, ret);
++		kobject_put(&policy->kobj);
+ 		goto err_free_real_cpus;
+ 	}
+ 
+diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
+index ffa9adeaba31..9d1d9bf02710 100644
+--- a/drivers/cpufreq/cpufreq_governor.c
++++ b/drivers/cpufreq/cpufreq_governor.c
+@@ -459,6 +459,8 @@ int cpufreq_dbs_governor_init(struct cpufreq_policy *policy)
+ 	/* Failure, so roll back. */
+ 	pr_err("initialization failed (dbs_data kobject init error %d)\n", ret);
+ 
++	kobject_put(&dbs_data->attr_set.kobj);
++
+ 	policy->governor_data = NULL;
+ 
+ 	if (!have_governor_per_policy())
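
Both cpufreq fixes above follow the same kobject rule: once kobject_init_and_add() has run, the object is live and may only be torn down with kobject_put(), which drops the refcount and runs the ktype's release handler; freeing it any other way leaks the allocated name and the reference. The shape of the corrected error path, as a self-contained sketch:

#include <linux/kobject.h>
#include <linux/slab.h>

struct thing { struct kobject kobj; };

static void thing_release(struct kobject *kobj)
{
	kfree(container_of(kobj, struct thing, kobj));
}

static struct kobj_type thing_ktype = { .release = thing_release };

static int add_thing(struct kobject *parent, unsigned int id)
{
	struct thing *t = kzalloc(sizeof(*t), GFP_KERNEL);
	int ret;

	if (!t)
		return -ENOMEM;
	ret = kobject_init_and_add(&t->kobj, &thing_ktype, parent, "thing%u", id);
	if (ret)
		kobject_put(&t->kobj);	/* runs thing_release(); never kfree() here */
	return ret;
}
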
+diff --git a/drivers/cpufreq/imx6q-cpufreq.c b/drivers/cpufreq/imx6q-cpufreq.c
+index a4ff09f91c8f..3e17560b1efe 100644
+--- a/drivers/cpufreq/imx6q-cpufreq.c
++++ b/drivers/cpufreq/imx6q-cpufreq.c
+@@ -388,11 +388,11 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
+ 		ret = imx6ul_opp_check_speed_grading(cpu_dev);
+ 		if (ret) {
+ 			if (ret == -EPROBE_DEFER)
+-				return ret;
++				goto put_node;
+ 
+ 			dev_err(cpu_dev, "failed to read ocotp: %d\n",
+ 				ret);
+-			return ret;
++			goto put_node;
+ 		}
+ 	} else {
+ 		imx6q_opp_check_speed_grading(cpu_dev);
+diff --git a/drivers/cpufreq/kirkwood-cpufreq.c b/drivers/cpufreq/kirkwood-cpufreq.c
+index c2dd43f3f5d8..8d63a6dc8383 100644
+--- a/drivers/cpufreq/kirkwood-cpufreq.c
++++ b/drivers/cpufreq/kirkwood-cpufreq.c
+@@ -124,13 +124,14 @@ static int kirkwood_cpufreq_probe(struct platform_device *pdev)
+ 	priv.cpu_clk = of_clk_get_by_name(np, "cpu_clk");
+ 	if (IS_ERR(priv.cpu_clk)) {
+ 		dev_err(priv.dev, "Unable to get cpuclk\n");
+-		return PTR_ERR(priv.cpu_clk);
++		err = PTR_ERR(priv.cpu_clk);
++		goto out_node;
+ 	}
+ 
+ 	err = clk_prepare_enable(priv.cpu_clk);
+ 	if (err) {
+ 		dev_err(priv.dev, "Unable to prepare cpuclk\n");
+-		return err;
++		goto out_node;
+ 	}
+ 
+ 	kirkwood_freq_table[0].frequency = clk_get_rate(priv.cpu_clk) / 1000;
+@@ -161,20 +162,22 @@ static int kirkwood_cpufreq_probe(struct platform_device *pdev)
+ 		goto out_ddr;
+ 	}
+ 
+-	of_node_put(np);
+-	np = NULL;
+-
+ 	err = cpufreq_register_driver(&kirkwood_cpufreq_driver);
+-	if (!err)
+-		return 0;
++	if (err) {
++		dev_err(priv.dev, "Failed to register cpufreq driver\n");
++		goto out_powersave;
++	}
+ 
+-	dev_err(priv.dev, "Failed to register cpufreq driver\n");
++	of_node_put(np);
++	return 0;
+ 
++out_powersave:
+ 	clk_disable_unprepare(priv.powersave_clk);
+ out_ddr:
+ 	clk_disable_unprepare(priv.ddr_clk);
+ out_cpu:
+ 	clk_disable_unprepare(priv.cpu_clk);
++out_node:
+ 	of_node_put(np);
+ 
+ 	return err;
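
The kirkwood rework is the standard unwind-ladder refactor: each acquisition gets a matching label, failures jump to the label that undoes exactly what succeeded so far, and the device-tree node is now released on all paths, including registration failure. The skeleton of the shape, with hypothetical steps standing in for the clk and OF calls:

static int step_a(void)  { return 0; }	/* e.g. clk_prepare_enable() */
static int step_b(void)  { return 0; }	/* e.g. cpufreq_register_driver() */
static void undo_a(void) { }		/* e.g. clk_disable_unprepare() */

static int probe_like(void)
{
	int err;

	err = step_a();
	if (err)
		goto out;	/* nothing to undo yet */
	err = step_b();
	if (err)
		goto out_a;	/* unwind only what succeeded */
	return 0;

out_a:
	undo_a();
out:
	return err;
}
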
+diff --git a/drivers/cpufreq/pasemi-cpufreq.c b/drivers/cpufreq/pasemi-cpufreq.c
+index 75dfbd2a58ea..c7710c149de8 100644
+--- a/drivers/cpufreq/pasemi-cpufreq.c
++++ b/drivers/cpufreq/pasemi-cpufreq.c
+@@ -146,6 +146,7 @@ static int pas_cpufreq_cpu_init(struct cpufreq_policy *policy)
+ 
+ 	cpu = of_get_cpu_node(policy->cpu, NULL);
+ 
++	of_node_put(cpu);
+ 	if (!cpu)
+ 		goto out;
+ 
+diff --git a/drivers/cpufreq/pmac32-cpufreq.c b/drivers/cpufreq/pmac32-cpufreq.c
+index 52f0d91d30c1..9b4ce2eb8222 100644
+--- a/drivers/cpufreq/pmac32-cpufreq.c
++++ b/drivers/cpufreq/pmac32-cpufreq.c
+@@ -552,6 +552,7 @@ static int pmac_cpufreq_init_7447A(struct device_node *cpunode)
+ 	volt_gpio_np = of_find_node_by_name(NULL, "cpu-vcore-select");
+ 	if (volt_gpio_np)
+ 		voltage_gpio = read_gpio(volt_gpio_np);
++	of_node_put(volt_gpio_np);
+ 	if (!voltage_gpio){
+ 		pr_err("missing cpu-vcore-select gpio\n");
+ 		return 1;
+@@ -588,6 +589,7 @@ static int pmac_cpufreq_init_750FX(struct device_node *cpunode)
+ 	if (volt_gpio_np)
+ 		voltage_gpio = read_gpio(volt_gpio_np);
+ 
++	of_node_put(volt_gpio_np);
+ 	pvr = mfspr(SPRN_PVR);
+ 	has_cpu_l2lve = !((pvr & 0xf00) == 0x100);
+ 
+diff --git a/drivers/cpufreq/ppc_cbe_cpufreq.c b/drivers/cpufreq/ppc_cbe_cpufreq.c
+index 41a0f0be3f9f..8414c3a4ea08 100644
+--- a/drivers/cpufreq/ppc_cbe_cpufreq.c
++++ b/drivers/cpufreq/ppc_cbe_cpufreq.c
+@@ -86,6 +86,7 @@ static int cbe_cpufreq_cpu_init(struct cpufreq_policy *policy)
+ 	if (!cbe_get_cpu_pmd_regs(policy->cpu) ||
+ 	    !cbe_get_cpu_mic_tm_regs(policy->cpu)) {
+ 		pr_info("invalid CBE regs pointers for cpufreq\n");
++		of_node_put(cpu);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/crypto/sunxi-ss/sun4i-ss-hash.c b/drivers/crypto/sunxi-ss/sun4i-ss-hash.c
+index a4b5ff2b72f8..f6936bb3b7be 100644
+--- a/drivers/crypto/sunxi-ss/sun4i-ss-hash.c
++++ b/drivers/crypto/sunxi-ss/sun4i-ss-hash.c
+@@ -240,7 +240,10 @@ static int sun4i_hash(struct ahash_request *areq)
+ 		}
+ 	} else {
+ 		/* Since we have the flag final, we can go up to modulo 4 */
+-		end = ((areq->nbytes + op->len) / 4) * 4 - op->len;
++		if (areq->nbytes < 4)
++			end = 0;
++		else
++			end = ((areq->nbytes + op->len) / 4) * 4 - op->len;
+ 	}
+ 
+ 	/* TODO if SGlen % 4 and !op->len then DMA */
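
The guard added here matters because the rounding expression misbehaves for tiny requests: with, say, areq->nbytes = 2 and op->len = 1, ((2 + 1) / 4) * 4 - 1 evaluates to 0 - 1, which wraps to a huge value in unsigned arithmetic (or goes negative if signed) and would then be used as a byte count. A worked check of the guard in plain C:

#include <stdio.h>

int main(void)
{
	unsigned int nbytes = 2, len = 1;
	unsigned int end;

	if (nbytes < 4)			/* the added guard */
		end = 0;
	else
		end = ((nbytes + len) / 4) * 4 - len;

	/* without the guard: ((2 + 1) / 4) * 4 - 1 == 0u - 1u == UINT_MAX */
	printf("end = %u\n", end);	/* prints 0 */
	return 0;
}
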
+diff --git a/drivers/crypto/vmx/aesp8-ppc.pl b/drivers/crypto/vmx/aesp8-ppc.pl
+index de78282b8f44..9c6b5c1d6a1a 100644
+--- a/drivers/crypto/vmx/aesp8-ppc.pl
++++ b/drivers/crypto/vmx/aesp8-ppc.pl
+@@ -1357,7 +1357,7 @@ Loop_ctr32_enc:
+ 	addi		$idx,$idx,16
+ 	bdnz		Loop_ctr32_enc
+ 
+-	vadduwm		$ivec,$ivec,$one
++	vadduqm		$ivec,$ivec,$one
+ 	 vmr		$dat,$inptail
+ 	 lvx		$inptail,0,$inp
+ 	 addi		$inp,$inp,16
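
The one-instruction vmx change above is the classic CTR-mode carry bug: vadduwm adds the IV as four independent 32-bit words, so when the low word wraps past 2^32 the carry is lost and the counter repeats; vadduqm performs a single 128-bit add. The required behaviour, sketched as a byte-wise big-endian increment:

#include <stdint.h>

/* increment a 128-bit big-endian counter with full carry propagation */
static void ctr128_inc(uint8_t ctr[16])
{
	int i;

	for (i = 15; i >= 0; i--)
		if (++ctr[i] != 0)	/* stop once a byte doesn't wrap */
			break;
}
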
+diff --git a/drivers/dax/super.c b/drivers/dax/super.c
+index 0a339b85133e..d7f2257f2568 100644
+--- a/drivers/dax/super.c
++++ b/drivers/dax/super.c
+@@ -73,22 +73,12 @@ struct dax_device *fs_dax_get_by_bdev(struct block_device *bdev)
+ EXPORT_SYMBOL_GPL(fs_dax_get_by_bdev);
+ #endif
+ 
+-/**
+- * __bdev_dax_supported() - Check if the device supports dax for filesystem
+- * @bdev: block device to check
+- * @blocksize: The block size of the device
+- *
+- * This is a library function for filesystems to check if the block device
+- * can be mounted with dax option.
+- *
+- * Return: true if supported, false if unsupported
+- */
+-bool __bdev_dax_supported(struct block_device *bdev, int blocksize)
++bool __generic_fsdax_supported(struct dax_device *dax_dev,
++		struct block_device *bdev, int blocksize, sector_t start,
++		sector_t sectors)
+ {
+-	struct dax_device *dax_dev;
+ 	bool dax_enabled = false;
+ 	pgoff_t pgoff, pgoff_end;
+-	struct request_queue *q;
+ 	char buf[BDEVNAME_SIZE];
+ 	void *kaddr, *end_kaddr;
+ 	pfn_t pfn, end_pfn;
+@@ -102,21 +92,14 @@ bool __bdev_dax_supported(struct block_device *bdev, int blocksize)
+ 		return false;
+ 	}
+ 
+-	q = bdev_get_queue(bdev);
+-	if (!q || !blk_queue_dax(q)) {
+-		pr_debug("%s: error: request queue doesn't support dax\n",
+-				bdevname(bdev, buf));
+-		return false;
+-	}
+-
+-	err = bdev_dax_pgoff(bdev, 0, PAGE_SIZE, &pgoff);
++	err = bdev_dax_pgoff(bdev, start, PAGE_SIZE, &pgoff);
+ 	if (err) {
+ 		pr_debug("%s: error: unaligned partition for dax\n",
+ 				bdevname(bdev, buf));
+ 		return false;
+ 	}
+ 
+-	last_page = PFN_DOWN(i_size_read(bdev->bd_inode) - 1) * 8;
++	last_page = PFN_DOWN((start + sectors - 1) * 512) * PAGE_SIZE / 512;
+ 	err = bdev_dax_pgoff(bdev, last_page, PAGE_SIZE, &pgoff_end);
+ 	if (err) {
+ 		pr_debug("%s: error: unaligned partition for dax\n",
+@@ -124,20 +107,11 @@ bool __bdev_dax_supported(struct block_device *bdev, int blocksize)
+ 		return false;
+ 	}
+ 
+-	dax_dev = dax_get_by_host(bdev->bd_disk->disk_name);
+-	if (!dax_dev) {
+-		pr_debug("%s: error: device does not support dax\n",
+-				bdevname(bdev, buf));
+-		return false;
+-	}
+-
+ 	id = dax_read_lock();
+ 	len = dax_direct_access(dax_dev, pgoff, 1, &kaddr, &pfn);
+ 	len2 = dax_direct_access(dax_dev, pgoff_end, 1, &end_kaddr, &end_pfn);
+ 	dax_read_unlock(id);
+ 
+-	put_dax(dax_dev);
+-
+ 	if (len < 1 || len2 < 1) {
+ 		pr_debug("%s: error: dax access failed (%ld)\n",
+ 				bdevname(bdev, buf), len < 1 ? len : len2);
+@@ -178,6 +152,49 @@ bool __bdev_dax_supported(struct block_device *bdev, int blocksize)
+ 	}
+ 	return true;
+ }
++EXPORT_SYMBOL_GPL(__generic_fsdax_supported);
++
++/**
++ * __bdev_dax_supported() - Check if the device supports dax for filesystem
++ * @bdev: block device to check
++ * @blocksize: The block size of the device
++ *
++ * This is a library function for filesystems to check if the block device
++ * can be mounted with dax option.
++ *
++ * Return: true if supported, false if unsupported
++ */
++bool __bdev_dax_supported(struct block_device *bdev, int blocksize)
++{
++	struct dax_device *dax_dev;
++	struct request_queue *q;
++	char buf[BDEVNAME_SIZE];
++	bool ret;
++	int id;
++
++	q = bdev_get_queue(bdev);
++	if (!q || !blk_queue_dax(q)) {
++		pr_debug("%s: error: request queue doesn't support dax\n",
++				bdevname(bdev, buf));
++		return false;
++	}
++
++	dax_dev = dax_get_by_host(bdev->bd_disk->disk_name);
++	if (!dax_dev) {
++		pr_debug("%s: error: device does not support dax\n",
++				bdevname(bdev, buf));
++		return false;
++	}
++
++	id = dax_read_lock();
++	ret = dax_supported(dax_dev, bdev, blocksize, 0,
++			i_size_read(bdev->bd_inode) / 512);
++	dax_read_unlock(id);
++
++	put_dax(dax_dev);
++
++	return ret;
++}
+ EXPORT_SYMBOL_GPL(__bdev_dax_supported);
+ #endif
+ 
+@@ -303,6 +320,15 @@ long dax_direct_access(struct dax_device *dax_dev, pgoff_t pgoff, long nr_pages,
+ }
+ EXPORT_SYMBOL_GPL(dax_direct_access);
+ 
++bool dax_supported(struct dax_device *dax_dev, struct block_device *bdev,
++		int blocksize, sector_t start, sector_t len)
++{
++	if (!dax_alive(dax_dev))
++		return false;
++
++	return dax_dev->ops->dax_supported(dax_dev, bdev, blocksize, start, len);
++}
++
+ size_t dax_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff, void *addr,
+ 		size_t bytes, struct iov_iter *i)
+ {
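
The dax/super.c rework splits the old check in two: a generic helper that validates an explicit (start, sectors) range, so partitions can be checked rather than whole disks, and a thin __bdev_dax_supported() that resolves the queue and dax_device and defers to the new dax_supported() operation. The reworked last_page line finds the first 512-byte sector of the page frame holding the range's final byte; a worked instance assuming 4 KiB pages:

#include <stdio.h>

int main(void)
{
	unsigned long long start = 0, sectors = 16384;	/* example range */
	unsigned long long pfn = (start + sectors - 1) * 512 / 4096;	/* PFN_DOWN */
	unsigned long long last_page = pfn * 4096 / 512;

	printf("%llu\n", last_page);	/* 16376: first sector of the final page */
	return 0;
}
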
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index 0ae3de76833b..839621b044f4 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -228,7 +228,7 @@ static struct devfreq_governor *find_devfreq_governor(const char *name)
+  * if is not found. This can happen when both drivers (the governor driver
+  * and the driver that call devfreq_add_device) are built as modules.
+  * devfreq_list_lock should be held by the caller. Returns the matched
+- * governor's pointer.
++ * governor's pointer or an error pointer.
+  */
+ static struct devfreq_governor *try_then_request_governor(const char *name)
+ {
+@@ -254,7 +254,7 @@ static struct devfreq_governor *try_then_request_governor(const char *name)
+ 		/* Restore previous state before return */
+ 		mutex_lock(&devfreq_list_lock);
+ 		if (err)
+-			return NULL;
++			return ERR_PTR(err);
+ 
+ 		governor = find_devfreq_governor(name);
+ 	}
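
The devfreq change fixes a mixed error contract: the function's other failure paths already return ERR_PTR() values, so returning NULL after a failed module request left callers that test with IS_ERR() believing they had a valid governor. A sketch of the contract from a hypothetical caller (the real function is file-local to devfreq.c):

#include <linux/err.h>

static int pick_governor(const char *name)
{
	struct devfreq_governor *gov = try_then_request_governor(name);

	if (IS_ERR(gov))		/* callers test IS_ERR(), not NULL */
		return PTR_ERR(gov);
	return 0;
}
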
+diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
+index fe69dccfa0c0..37a269420435 100644
+--- a/drivers/dma/at_xdmac.c
++++ b/drivers/dma/at_xdmac.c
+@@ -1606,7 +1606,11 @@ static void at_xdmac_tasklet(unsigned long data)
+ 					struct at_xdmac_desc,
+ 					xfer_node);
+ 		dev_vdbg(chan2dev(&atchan->chan), "%s: desc 0x%p\n", __func__, desc);
+-		BUG_ON(!desc->active_xfer);
++		if (!desc->active_xfer) {
++			dev_err(chan2dev(&atchan->chan), "Xfer not active: exiting");
++			spin_unlock_bh(&atchan->lock);
++			return;
++		}
+ 
+ 		txd = &desc->tx_dma_desc;
+ 
+diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
+index eec79fdf27a5..56695ffb5d37 100644
+--- a/drivers/dma/pl330.c
++++ b/drivers/dma/pl330.c
+@@ -966,6 +966,7 @@ static void _stop(struct pl330_thread *thrd)
+ {
+ 	void __iomem *regs = thrd->dmac->base;
+ 	u8 insn[6] = {0, 0, 0, 0, 0, 0};
++	u32 inten = readl(regs + INTEN);
+ 
+ 	if (_state(thrd) == PL330_STATE_FAULT_COMPLETING)
+ 		UNTIL(thrd, PL330_STATE_FAULTING | PL330_STATE_KILLING);
+@@ -978,10 +979,13 @@ static void _stop(struct pl330_thread *thrd)
+ 
+ 	_emit_KILL(0, insn);
+ 
+-	/* Stop generating interrupts for SEV */
+-	writel(readl(regs + INTEN) & ~(1 << thrd->ev), regs + INTEN);
+-
+ 	_execute_DBGINSN(thrd, insn, is_manager(thrd));
++
++	/* clear the event */
++	if (inten & (1 << thrd->ev))
++		writel(1 << thrd->ev, regs + INTCLR);
++	/* Stop generating interrupts for SEV */
++	writel(inten & ~(1 << thrd->ev), regs + INTEN);
+ }
+ 
+ /* Start doing req 'idx' of thread 'thrd' */
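
The reordered _stop() above closes a window where an event could remain latched after the channel is gone: the enable mask is snapshotted first, the thread is killed, any interrupt already latched for the event is acknowledged through INTCLR, and only then is the source masked in INTEN, so no stale IRQ survives the stop. Sketched as a stand-alone helper with the register offsets left as parameters:

#include <linux/io.h>
#include <linux/types.h>

/* inten_off / intclr_off stand for the controller's enable and clear regs */
static void stop_event(void __iomem *regs, unsigned int ev,
		       unsigned int inten_off, unsigned int intclr_off)
{
	u32 inten = readl(regs + inten_off);	/* snapshot before the kill */

	/* ... issue DMAKILL to the thread here ... */

	if (inten & (1u << ev))
		writel(1u << ev, regs + intclr_off);	/* ack a latched event */
	writel(inten & ~(1u << ev), regs + inten_off);	/* then mask the source */
}
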
+diff --git a/drivers/dma/tegra210-adma.c b/drivers/dma/tegra210-adma.c
+index 5ec0dd97b397..1477cce33dbe 100644
+--- a/drivers/dma/tegra210-adma.c
++++ b/drivers/dma/tegra210-adma.c
+@@ -22,7 +22,6 @@
+ #include <linux/of_device.h>
+ #include <linux/of_dma.h>
+ #include <linux/of_irq.h>
+-#include <linux/pm_clock.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/slab.h>
+ 
+@@ -141,6 +140,7 @@ struct tegra_adma {
+ 	struct dma_device		dma_dev;
+ 	struct device			*dev;
+ 	void __iomem			*base_addr;
++	struct clk			*ahub_clk;
+ 	unsigned int			nr_channels;
+ 	unsigned long			rx_requests_reserved;
+ 	unsigned long			tx_requests_reserved;
+@@ -637,8 +637,9 @@ static int tegra_adma_runtime_suspend(struct device *dev)
+ 	struct tegra_adma *tdma = dev_get_drvdata(dev);
+ 
+ 	tdma->global_cmd = tdma_read(tdma, ADMA_GLOBAL_CMD);
++	clk_disable_unprepare(tdma->ahub_clk);
+ 
+-	return pm_clk_suspend(dev);
++	return 0;
+ }
+ 
+ static int tegra_adma_runtime_resume(struct device *dev)
+@@ -646,10 +647,11 @@ static int tegra_adma_runtime_resume(struct device *dev)
+ 	struct tegra_adma *tdma = dev_get_drvdata(dev);
+ 	int ret;
+ 
+-	ret = pm_clk_resume(dev);
+-	if (ret)
++	ret = clk_prepare_enable(tdma->ahub_clk);
++	if (ret) {
++		dev_err(dev, "ahub clk_enable failed: %d\n", ret);
+ 		return ret;
+-
++	}
+ 	tdma_write(tdma, ADMA_GLOBAL_CMD, tdma->global_cmd);
+ 
+ 	return 0;
+@@ -693,13 +695,11 @@ static int tegra_adma_probe(struct platform_device *pdev)
+ 	if (IS_ERR(tdma->base_addr))
+ 		return PTR_ERR(tdma->base_addr);
+ 
+-	ret = pm_clk_create(&pdev->dev);
+-	if (ret)
+-		return ret;
+-
+-	ret = of_pm_clk_add_clk(&pdev->dev, "d_audio");
+-	if (ret)
+-		goto clk_destroy;
++	tdma->ahub_clk = devm_clk_get(&pdev->dev, "d_audio");
++	if (IS_ERR(tdma->ahub_clk)) {
++		dev_err(&pdev->dev, "Error: Missing ahub controller clock\n");
++		return PTR_ERR(tdma->ahub_clk);
++	}
+ 
+ 	pm_runtime_enable(&pdev->dev);
+ 
+@@ -776,8 +776,6 @@ rpm_put:
+ 	pm_runtime_put_sync(&pdev->dev);
+ rpm_disable:
+ 	pm_runtime_disable(&pdev->dev);
+-clk_destroy:
+-	pm_clk_destroy(&pdev->dev);
+ 
+ 	return ret;
+ }
+@@ -787,6 +785,7 @@ static int tegra_adma_remove(struct platform_device *pdev)
+ 	struct tegra_adma *tdma = platform_get_drvdata(pdev);
+ 	int i;
+ 
++	of_dma_controller_free(pdev->dev.of_node);
+ 	dma_async_device_unregister(&tdma->dma_dev);
+ 
+ 	for (i = 0; i < tdma->nr_channels; ++i)
+@@ -794,7 +793,6 @@ static int tegra_adma_remove(struct platform_device *pdev)
+ 
+ 	pm_runtime_put_sync(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+-	pm_clk_destroy(&pdev->dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/extcon/Kconfig b/drivers/extcon/Kconfig
+index 540e8cd16ee6..db3bcf96b98f 100644
+--- a/drivers/extcon/Kconfig
++++ b/drivers/extcon/Kconfig
+@@ -30,7 +30,7 @@ config EXTCON_ARIZONA
+ 
+ config EXTCON_AXP288
+ 	tristate "X-Power AXP288 EXTCON support"
+-	depends on MFD_AXP20X && USB_SUPPORT && X86
++	depends on MFD_AXP20X && USB_SUPPORT && X86 && ACPI
+ 	select USB_ROLE_SWITCH
+ 	help
+ 	  Say Y here to enable support for USB peripheral detection
+diff --git a/drivers/extcon/extcon-arizona.c b/drivers/extcon/extcon-arizona.c
+index da0e9bc4262f..9327479c719c 100644
+--- a/drivers/extcon/extcon-arizona.c
++++ b/drivers/extcon/extcon-arizona.c
+@@ -1726,6 +1726,16 @@ static int arizona_extcon_remove(struct platform_device *pdev)
+ 	struct arizona_extcon_info *info = platform_get_drvdata(pdev);
+ 	struct arizona *arizona = info->arizona;
+ 	int jack_irq_rise, jack_irq_fall;
++	bool change;
++
++	regmap_update_bits_check(arizona->regmap, ARIZONA_MIC_DETECT_1,
++				 ARIZONA_MICD_ENA, 0,
++				 &change);
++
++	if (change) {
++		regulator_disable(info->micvdd);
++		pm_runtime_put(info->dev);
++	}
+ 
+ 	gpiod_put(info->micd_pol_gpio);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
+index 466da5954a68..62bf9da25e4b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/Makefile
++++ b/drivers/gpu/drm/amd/amdgpu/Makefile
+@@ -23,7 +23,7 @@
+ # Makefile for the drm device driver.  This driver provides support for the
+ # Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
+ 
+-FULL_AMD_PATH=$(src)/..
++FULL_AMD_PATH=$(srctree)/$(src)/..
+ DISPLAY_FOLDER_NAME=display
+ FULL_AMD_DISPLAY_PATH = $(FULL_AMD_PATH)/$(DISPLAY_FOLDER_NAME)
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+index ee47c11e92ce..4dee2326b29c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+@@ -136,8 +136,9 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
+ {
+ 	struct amdgpu_device *adev = ring->adev;
+ 	struct amdgpu_fence *fence;
+-	struct dma_fence *old, **ptr;
++	struct dma_fence __rcu **ptr;
+ 	uint32_t seq;
++	int r;
+ 
+ 	fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_KERNEL);
+ 	if (fence == NULL)
+@@ -153,15 +154,24 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
+ 			       seq, flags | AMDGPU_FENCE_FLAG_INT);
+ 
+ 	ptr = &ring->fence_drv.fences[seq & ring->fence_drv.num_fences_mask];
++	if (unlikely(rcu_dereference_protected(*ptr, 1))) {
++		struct dma_fence *old;
++
++		rcu_read_lock();
++		old = dma_fence_get_rcu_safe(ptr);
++		rcu_read_unlock();
++
++		if (old) {
++			r = dma_fence_wait(old, false);
++			dma_fence_put(old);
++			if (r)
++				return r;
++		}
++	}
++
+ 	/* This function can't be called concurrently anyway, otherwise
+ 	 * emitting the fence would mess up the hardware ring buffer.
+ 	 */
+-	old = rcu_dereference_protected(*ptr, 1);
+-	if (old && !dma_fence_is_signaled(old)) {
+-		DRM_INFO("rcu slot is busy\n");
+-		dma_fence_wait(old, false);
+-	}
+-
+ 	rcu_assign_pointer(*ptr, dma_fence_get(&fence->base));
+ 
+ 	*f = &fence->base;
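
The amdgpu_fence_emit() rework handles the case where the ring's fence slot (indexed by seq masked with the ring's fence count) still holds an unsignaled fence from a previous wrap: instead of logging and waiting unconditionally, it takes the old fence through dma_fence_get_rcu_safe(), which is safe against concurrent replacement, waits for it, and propagates a wait failure to the caller rather than overwriting a live fence. The access pattern, sketched:

#include <linux/dma-fence.h>

/* safely drain an RCU-protected fence slot before reusing it */
static int drain_slot(struct dma_fence __rcu **ptr)
{
	struct dma_fence *old;
	int r = 0;

	rcu_read_lock();
	old = dma_fence_get_rcu_safe(ptr);	/* kref-safe read of the slot */
	rcu_read_unlock();

	if (old) {
		r = dma_fence_wait(old, false);	/* uninterruptible wait */
		dma_fence_put(old);
	}
	return r;
}
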
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 3082b55b1e77..0886b36c2344 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3587,6 +3587,8 @@ static void dm_drm_plane_reset(struct drm_plane *plane)
+ 		plane->state = &amdgpu_state->base;
+ 		plane->state->plane = plane;
+ 		plane->state->rotation = DRM_MODE_ROTATE_0;
++		plane->state->alpha = DRM_BLEND_ALPHA_OPAQUE;
++		plane->state->pixel_blend_mode = DRM_MODE_BLEND_PREMULTI;
+ 	}
+ }
+ 
+@@ -4953,8 +4955,7 @@ cleanup:
+ static void amdgpu_dm_crtc_copy_transient_flags(struct drm_crtc_state *crtc_state,
+ 						struct dc_stream_state *stream_state)
+ {
+-	stream_state->mode_changed =
+-		crtc_state->mode_changed || crtc_state->active_changed;
++	stream_state->mode_changed = drm_atomic_crtc_needs_modeset(crtc_state);
+ }
+ 
+ static int amdgpu_dm_atomic_commit(struct drm_device *dev,
+@@ -5661,6 +5662,9 @@ skip_modeset:
+ 		update_stream_scaling_settings(
+ 			&new_crtc_state->mode, dm_new_conn_state, dm_new_crtc_state->stream);
+ 
++	/* ABM settings */
++	dm_new_crtc_state->abm_level = dm_new_conn_state->abm_level;
++
+ 	/*
+ 	 * Color management settings. We also update color properties
+ 	 * when a modeset is needed, to ensure it gets reprogrammed.
+@@ -5858,7 +5862,9 @@ dm_determine_update_type_for_commit(struct dc *dc,
+ 	}
+ 
+ 	for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
+-		struct dc_stream_update stream_update = { 0 };
++		struct dc_stream_update stream_update;
++
++		memset(&stream_update, 0, sizeof(stream_update));
+ 
+ 		new_dm_crtc_state = to_dm_crtc_state(new_crtc_state);
+ 		old_dm_crtc_state = to_dm_crtc_state(old_crtc_state);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index a6cda201c964..88fe4fb43bfd 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -524,6 +524,14 @@ void dc_link_set_preferred_link_settings(struct dc *dc,
+ 	struct dc_stream_state *link_stream;
+ 	struct dc_link_settings store_settings = *link_setting;
+ 
++	link->preferred_link_setting = store_settings;
++
++	/* Retrain with preferred link settings only relevant for
++	 * DP signal type
++	 */
++	if (!dc_is_dp_signal(link->connector_signal))
++		return;
++
+ 	for (i = 0; i < MAX_PIPES; i++) {
+ 		pipe = &dc->current_state->res_ctx.pipe_ctx[i];
+ 		if (pipe->stream && pipe->stream->link) {
+@@ -538,7 +546,10 @@ void dc_link_set_preferred_link_settings(struct dc *dc,
+ 
+ 	link_stream = link->dc->current_state->res_ctx.pipe_ctx[i].stream;
+ 
+-	link->preferred_link_setting = store_settings;
++	/* Cannot retrain link if backend is off */
++	if (link_stream->dpms_off)
++		return;
++
+ 	if (link_stream)
+ 		decide_link_settings(link_stream, &store_settings);
+ 
+@@ -1666,6 +1677,7 @@ static void commit_planes_do_stream_update(struct dc *dc,
+ 				continue;
+ 
+ 			if (stream_update->dpms_off) {
++				dc->hwss.pipe_control_lock(dc, pipe_ctx, true);
+ 				if (*stream_update->dpms_off) {
+ 					core_link_disable_stream(pipe_ctx, KEEP_ACQUIRED_RESOURCE);
+ 					dc->hwss.optimize_bandwidth(dc, dc->current_state);
+@@ -1673,6 +1685,7 @@ static void commit_planes_do_stream_update(struct dc *dc,
+ 					dc->hwss.prepare_bandwidth(dc, dc->current_state);
+ 					core_link_enable_stream(dc->current_state, pipe_ctx);
+ 				}
++				dc->hwss.pipe_control_lock(dc, pipe_ctx, false);
+ 			}
+ 
+ 			if (stream_update->abm_level && pipe_ctx->stream_res.abm) {
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index ea18e9c2d8ce..419e8de8c0f4 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -2074,11 +2074,28 @@ static void disable_link(struct dc_link *link, enum signal_type signal)
+ 	}
+ }
+ 
++static uint32_t get_timing_pixel_clock_100hz(const struct dc_crtc_timing *timing)
++{
++
++	uint32_t pxl_clk = timing->pix_clk_100hz;
++
++	if (timing->pixel_encoding == PIXEL_ENCODING_YCBCR420)
++		pxl_clk /= 2;
++	else if (timing->pixel_encoding == PIXEL_ENCODING_YCBCR422)
++		pxl_clk = pxl_clk * 2 / 3;
++
++	if (timing->display_color_depth == COLOR_DEPTH_101010)
++		pxl_clk = pxl_clk * 10 / 8;
++	else if (timing->display_color_depth == COLOR_DEPTH_121212)
++		pxl_clk = pxl_clk * 12 / 8;
++
++	return pxl_clk;
++}
++
+ static bool dp_active_dongle_validate_timing(
+ 		const struct dc_crtc_timing *timing,
+ 		const struct dpcd_caps *dpcd_caps)
+ {
+-	unsigned int required_pix_clk_100hz = timing->pix_clk_100hz;
+ 	const struct dc_dongle_caps *dongle_caps = &dpcd_caps->dongle_caps;
+ 
+ 	switch (dpcd_caps->dongle_type) {
+@@ -2115,13 +2132,6 @@ static bool dp_active_dongle_validate_timing(
+ 		return false;
+ 	}
+ 
+-
+-	/* Check Color Depth and Pixel Clock */
+-	if (timing->pixel_encoding == PIXEL_ENCODING_YCBCR420)
+-		required_pix_clk_100hz /= 2;
+-	else if (timing->pixel_encoding == PIXEL_ENCODING_YCBCR422)
+-		required_pix_clk_100hz = required_pix_clk_100hz * 2 / 3;
+-
+ 	switch (timing->display_color_depth) {
+ 	case COLOR_DEPTH_666:
+ 	case COLOR_DEPTH_888:
+@@ -2130,14 +2140,11 @@ static bool dp_active_dongle_validate_timing(
+ 	case COLOR_DEPTH_101010:
+ 		if (dongle_caps->dp_hdmi_max_bpc < 10)
+ 			return false;
+-		required_pix_clk_100hz = required_pix_clk_100hz * 10 / 8;
+ 		break;
+ 	case COLOR_DEPTH_121212:
+ 		if (dongle_caps->dp_hdmi_max_bpc < 12)
+ 			return false;
+-		required_pix_clk_100hz = required_pix_clk_100hz * 12 / 8;
+ 		break;
+-
+ 	case COLOR_DEPTH_141414:
+ 	case COLOR_DEPTH_161616:
+ 	default:
+@@ -2145,7 +2152,7 @@ static bool dp_active_dongle_validate_timing(
+ 		return false;
+ 	}
+ 
+-	if (required_pix_clk_100hz > (dongle_caps->dp_hdmi_max_pixel_clk * 10))
++	if (get_timing_pixel_clock_100hz(timing) > (dongle_caps->dp_hdmi_max_pixel_clk * 10))
+ 		return false;
+ 
+ 	return true;
+@@ -2166,7 +2173,7 @@ enum dc_status dc_link_validate_mode_timing(
+ 		return DC_OK;
+ 
+ 	/* Passive Dongle */
+-	if (0 != max_pix_clk && timing->pix_clk_100hz > max_pix_clk)
++	if (max_pix_clk != 0 && get_timing_pixel_clock_100hz(timing) > max_pix_clk)
+ 		return DC_EXCEED_DONGLE_CAP;
+ 
+ 	/* Active Dongle*/
+@@ -2316,7 +2323,7 @@ static struct fixed31_32 get_pbn_from_timing(struct pipe_ctx *pipe_ctx)
+ 	uint32_t denominator;
+ 
+ 	bpc = get_color_depth(pipe_ctx->stream_res.pix_clk_params.color_depth);
+-	kbps = pipe_ctx->stream_res.pix_clk_params.requested_pix_clk_100hz / 10 * bpc * 3;
++	kbps = dc_bandwidth_in_kbps_from_timing(&pipe_ctx->stream->timing);
+ 
+ 	/*
+ 	 * margin 5300ppm + 300ppm ~ 0.6% as per spec, factor is 1.006
+@@ -2736,3 +2743,49 @@ void dc_link_enable_hpd_filter(struct dc_link *link, bool enable)
+ 	}
+ }
+ 
++uint32_t dc_bandwidth_in_kbps_from_timing(
++	const struct dc_crtc_timing *timing)
++{
++	uint32_t bits_per_channel = 0;
++	uint32_t kbps;
++
++	switch (timing->display_color_depth) {
++	case COLOR_DEPTH_666:
++		bits_per_channel = 6;
++		break;
++	case COLOR_DEPTH_888:
++		bits_per_channel = 8;
++		break;
++	case COLOR_DEPTH_101010:
++		bits_per_channel = 10;
++		break;
++	case COLOR_DEPTH_121212:
++		bits_per_channel = 12;
++		break;
++	case COLOR_DEPTH_141414:
++		bits_per_channel = 14;
++		break;
++	case COLOR_DEPTH_161616:
++		bits_per_channel = 16;
++		break;
++	default:
++		break;
++	}
++
++	ASSERT(bits_per_channel != 0);
++
++	kbps = timing->pix_clk_100hz / 10;
++	kbps *= bits_per_channel;
++
++	if (timing->flags.Y_ONLY != 1) {
++		/*Only YOnly make reduce bandwidth by 1/3 compares to RGB*/
++		kbps *= 3;
++		if (timing->pixel_encoding == PIXEL_ENCODING_YCBCR420)
++			kbps /= 2;
++		else if (timing->pixel_encoding == PIXEL_ENCODING_YCBCR422)
++			kbps = kbps * 2 / 3;
++	}
++
++	return kbps;
++
++}
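
This bandwidth helper is moved here from dc_link_dp.c and exported (see the dc_link.h hunk below) so the DP mode validation and MST PBN paths share one computation, while the dongle checks gain a separate get_timing_pixel_clock_100hz() helper that folds pixel encoding and color depth into an effective pixel clock. The kbps arithmetic for a common mode, worked through:

#include <stdio.h>

int main(void)
{
	/* 1080p60 RGB, 8 bpc: 148.5 MHz pixel clock -> 1485000 in 100 Hz units */
	unsigned int pix_clk_100hz = 1485000, bpc = 8;
	unsigned int kbps = pix_clk_100hz / 10 * bpc;	/* 1188000 per channel */

	kbps *= 3;					/* three channels for RGB */
	printf("%u kbps\n", kbps);			/* 3564000 ~ 3.56 Gbit/s */
	return 0;
}
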
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index 09d301216076..6809932e80be 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -1520,53 +1520,6 @@ static bool decide_fallback_link_setting(
+ 	return true;
+ }
+ 
+-static uint32_t bandwidth_in_kbps_from_timing(
+-	const struct dc_crtc_timing *timing)
+-{
+-	uint32_t bits_per_channel = 0;
+-	uint32_t kbps;
+-
+-	switch (timing->display_color_depth) {
+-	case COLOR_DEPTH_666:
+-		bits_per_channel = 6;
+-		break;
+-	case COLOR_DEPTH_888:
+-		bits_per_channel = 8;
+-		break;
+-	case COLOR_DEPTH_101010:
+-		bits_per_channel = 10;
+-		break;
+-	case COLOR_DEPTH_121212:
+-		bits_per_channel = 12;
+-		break;
+-	case COLOR_DEPTH_141414:
+-		bits_per_channel = 14;
+-		break;
+-	case COLOR_DEPTH_161616:
+-		bits_per_channel = 16;
+-		break;
+-	default:
+-		break;
+-	}
+-
+-	ASSERT(bits_per_channel != 0);
+-
+-	kbps = timing->pix_clk_100hz / 10;
+-	kbps *= bits_per_channel;
+-
+-	if (timing->flags.Y_ONLY != 1) {
+-		/*Only YOnly make reduce bandwidth by 1/3 compares to RGB*/
+-		kbps *= 3;
+-		if (timing->pixel_encoding == PIXEL_ENCODING_YCBCR420)
+-			kbps /= 2;
+-		else if (timing->pixel_encoding == PIXEL_ENCODING_YCBCR422)
+-			kbps = kbps * 2 / 3;
+-	}
+-
+-	return kbps;
+-
+-}
+-
+ static uint32_t bandwidth_in_kbps_from_link_settings(
+ 	const struct dc_link_settings *link_setting)
+ {
+@@ -1607,7 +1560,7 @@ bool dp_validate_mode_timing(
+ 		link_setting = &link->verified_link_cap;
+ 	*/
+ 
+-	req_bw = bandwidth_in_kbps_from_timing(timing);
++	req_bw = dc_bandwidth_in_kbps_from_timing(timing);
+ 	max_bw = bandwidth_in_kbps_from_link_settings(link_setting);
+ 
+ 	if (req_bw <= max_bw) {
+@@ -1641,7 +1594,7 @@ void decide_link_settings(struct dc_stream_state *stream,
+ 	uint32_t req_bw;
+ 	uint32_t link_bw;
+ 
+-	req_bw = bandwidth_in_kbps_from_timing(&stream->timing);
++	req_bw = dc_bandwidth_in_kbps_from_timing(&stream->timing);
+ 
+ 	link = stream->link;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index 349ab8017776..4c06eb52ab73 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -1266,10 +1266,12 @@ bool dc_remove_plane_from_context(
+ 			 * For head pipe detach surfaces from pipe for tail
+ 			 * pipe just zero it out
+ 			 */
+-			if (!pipe_ctx->top_pipe) {
++			if (!pipe_ctx->top_pipe ||
++				(!pipe_ctx->top_pipe->top_pipe &&
++					pipe_ctx->top_pipe->stream_res.opp != pipe_ctx->stream_res.opp)) {
+ 				pipe_ctx->plane_state = NULL;
+ 				pipe_ctx->bottom_pipe = NULL;
+-			} else  {
++			} else {
+ 				memset(pipe_ctx, 0, sizeof(*pipe_ctx));
+ 			}
+ 		}
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_link.h b/drivers/gpu/drm/amd/display/dc/dc_link.h
+index 8fc223defed4..a83e1c60f9db 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_link.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_link.h
+@@ -252,4 +252,6 @@ bool dc_submit_i2c(
+ 		uint32_t link_index,
+ 		struct i2c_command *cmd);
+ 
++uint32_t dc_bandwidth_in_kbps_from_timing(
++	const struct dc_crtc_timing *timing);
+ #endif /* DC_LINK_H_ */
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_aux.c b/drivers/gpu/drm/amd/display/dc/dce/dce_aux.c
+index 4fe3664fb495..5ecfcb9ee8a0 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_aux.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_aux.c
+@@ -377,7 +377,6 @@ static bool acquire(
+ 	struct dce_aux *engine,
+ 	struct ddc *ddc)
+ {
+-
+ 	enum gpio_result result;
+ 
+ 	if (!is_engine_available(engine))
+@@ -458,7 +457,8 @@ int dce_aux_transfer(struct ddc_service *ddc,
+ 	memset(&aux_rep, 0, sizeof(aux_rep));
+ 
+ 	aux_engine = ddc->ctx->dc->res_pool->engines[ddc_pin->pin_data->en];
+-	acquire(aux_engine, ddc_pin);
++	if (!acquire(aux_engine, ddc_pin))
++		return -1;
+ 
+ 	if (payload->i2c_over_aux)
+ 		aux_req.type = AUX_TRANSACTION_TYPE_I2C;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c
+index c7642e748297..ce21a290bf3e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c
+@@ -406,15 +406,25 @@ void dpp1_dscl_calc_lb_num_partitions(
+ 		int *num_part_y,
+ 		int *num_part_c)
+ {
++	int lb_memory_size, lb_memory_size_c, lb_memory_size_a, num_partitions_a,
++	lb_bpc, memory_line_size_y, memory_line_size_c, memory_line_size_a;
++
+ 	int line_size = scl_data->viewport.width < scl_data->recout.width ?
+ 			scl_data->viewport.width : scl_data->recout.width;
+ 	int line_size_c = scl_data->viewport_c.width < scl_data->recout.width ?
+ 			scl_data->viewport_c.width : scl_data->recout.width;
+-	int lb_bpc = dpp1_dscl_get_lb_depth_bpc(scl_data->lb_params.depth);
+-	int memory_line_size_y = (line_size * lb_bpc + 71) / 72; /* +71 to ceil */
+-	int memory_line_size_c = (line_size_c * lb_bpc + 71) / 72; /* +71 to ceil */
+-	int memory_line_size_a = (line_size + 5) / 6; /* +5 to ceil */
+-	int lb_memory_size, lb_memory_size_c, lb_memory_size_a, num_partitions_a;
++
++	if (line_size == 0)
++		line_size = 1;
++
++	if (line_size_c == 0)
++		line_size_c = 1;
++
++
++	lb_bpc = dpp1_dscl_get_lb_depth_bpc(scl_data->lb_params.depth);
++	memory_line_size_y = (line_size * lb_bpc + 71) / 72; /* +71 to ceil */
++	memory_line_size_c = (line_size_c * lb_bpc + 71) / 72; /* +71 to ceil */
++	memory_line_size_a = (line_size + 5) / 6; /* +5 to ceil */
+ 
+ 	if (lb_config == LB_MEMORY_CONFIG_1) {
+ 		lb_memory_size = 816;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index d1a8f1c302a9..5b551a544e82 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -1008,9 +1008,14 @@ static void dcn10_init_pipes(struct dc *dc, struct dc_state *context)
+ 		 * to non-preferred front end. If pipe_ctx->stream is not NULL,
+ 		 * we will use the pipe, so don't disable
+ 		 */
+-		if (pipe_ctx->stream != NULL)
++		if (pipe_ctx->stream != NULL &&
++		    pipe_ctx->stream_res.tg->funcs->is_tg_enabled(
++			    pipe_ctx->stream_res.tg))
+ 			continue;
+ 
++		/* Disable on the current state so the new one isn't cleared. */
++		pipe_ctx = &dc->current_state->res_ctx.pipe_ctx[i];
++
+ 		dpp->funcs->dpp_reset(dpp);
+ 
+ 		pipe_ctx->stream_res.tg = tg;
+@@ -2692,9 +2697,15 @@ static void dcn10_set_cursor_position(struct pipe_ctx *pipe_ctx)
+ 		.rotation = pipe_ctx->plane_state->rotation,
+ 		.mirror = pipe_ctx->plane_state->horizontal_mirror
+ 	};
+-
+-	pos_cpy.x_hotspot += pipe_ctx->plane_state->dst_rect.x;
+-	pos_cpy.y_hotspot += pipe_ctx->plane_state->dst_rect.y;
++	uint32_t x_plane = pipe_ctx->plane_state->dst_rect.x;
++	uint32_t y_plane = pipe_ctx->plane_state->dst_rect.y;
++	uint32_t x_offset = min(x_plane, pos_cpy.x);
++	uint32_t y_offset = min(y_plane, pos_cpy.y);
++
++	pos_cpy.x -= x_offset;
++	pos_cpy.y -= y_offset;
++	pos_cpy.x_hotspot += (x_plane - x_offset);
++	pos_cpy.y_hotspot += (y_plane - y_offset);
+ 
+ 	if (pipe_ctx->plane_state->address.type
+ 			== PLN_ADDR_TYPE_VIDEO_PROGRESSIVE)
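
The dcn10 cursor hunk rewrites the position as plane-relative and keeps it from going negative: the plane origin is subtracted from the cursor coordinate, and when the cursor sits left of or above the plane origin, the shortfall is folded into the hotspot instead. Worked with example numbers:

#include <stdio.h>

int main(void)
{
	unsigned int x_plane = 100, x = 40;		/* plane origin, cursor x */
	unsigned int off = x < x_plane ? x : x_plane;	/* min(x_plane, x) */

	/* position clamps to 0, hotspot grows by 60: the cursor is drawn as
	 * if it straddled the plane's left edge */
	printf("pos %u, hotspot +%u\n", x - off, x_plane - off);
	return 0;
}
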
+diff --git a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
+index 0fbc8fbc3541..a1055413bade 100644
+--- a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
++++ b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
+@@ -1854,6 +1854,8 @@ bool mod_color_calculate_degamma_params(struct dc_transfer_func *input_tf,
+ 			coordinates_x, axis_x, curve,
+ 			MAX_HW_POINTS, tf_pts,
+ 			mapUserRamp && ramp && ramp->type == GAMMA_RGB_256);
++	if (ramp->type == GAMMA_CUSTOM)
++		apply_lut_1d(ramp, MAX_HW_POINTS, tf_pts);
+ 
+ 	ret = true;
+ 
+diff --git a/drivers/gpu/drm/arm/display/komeda/Makefile b/drivers/gpu/drm/arm/display/komeda/Makefile
+index 1b875e5dc0f6..a72e30c0e03d 100644
+--- a/drivers/gpu/drm/arm/display/komeda/Makefile
++++ b/drivers/gpu/drm/arm/display/komeda/Makefile
+@@ -1,8 +1,8 @@
+ # SPDX-License-Identifier: GPL-2.0
+ 
+ ccflags-y := \
+-	-I$(src)/../include \
+-	-I$(src)
++	-I $(srctree)/$(src)/../include \
++	-I $(srctree)/$(src)
+ 
+ komeda-y := \
+ 	komeda_drv.o \
+diff --git a/drivers/gpu/drm/drm_atomic_state_helper.c b/drivers/gpu/drm/drm_atomic_state_helper.c
+index 4985384e51f6..59ffb6b9c745 100644
+--- a/drivers/gpu/drm/drm_atomic_state_helper.c
++++ b/drivers/gpu/drm/drm_atomic_state_helper.c
+@@ -30,6 +30,7 @@
+ #include <drm/drm_connector.h>
+ #include <drm/drm_atomic.h>
+ #include <drm/drm_device.h>
++#include <drm/drm_writeback.h>
+ 
+ #include <linux/slab.h>
+ #include <linux/dma-fence.h>
+@@ -412,6 +413,9 @@ __drm_atomic_helper_connector_destroy_state(struct drm_connector_state *state)
+ 
+ 	if (state->commit)
+ 		drm_crtc_commit_put(state->commit);
++
++	if (state->writeback_job)
++		drm_writeback_cleanup_job(state->writeback_job);
+ }
+ EXPORT_SYMBOL(__drm_atomic_helper_connector_destroy_state);
+ 
+diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
+index 05bbc2b622fc..04aa6ccdfb24 100644
+--- a/drivers/gpu/drm/drm_drv.c
++++ b/drivers/gpu/drm/drm_drv.c
+@@ -497,7 +497,7 @@ int drm_dev_init(struct drm_device *dev,
+ 	BUG_ON(!parent);
+ 
+ 	kref_init(&dev->ref);
+-	dev->dev = parent;
++	dev->dev = get_device(parent);
+ 	dev->driver = driver;
+ 
+ 	/* no per-device feature limits by default */
+@@ -567,6 +567,7 @@ err_minors:
+ 	drm_minor_free(dev, DRM_MINOR_RENDER);
+ 	drm_fs_inode_free(dev->anon_inode);
+ err_free:
++	put_device(dev->dev);
+ 	mutex_destroy(&dev->master_mutex);
+ 	mutex_destroy(&dev->ctxlist_mutex);
+ 	mutex_destroy(&dev->clientlist_mutex);
+@@ -602,6 +603,8 @@ void drm_dev_fini(struct drm_device *dev)
+ 	drm_minor_free(dev, DRM_MINOR_PRIMARY);
+ 	drm_minor_free(dev, DRM_MINOR_RENDER);
+ 
++	put_device(dev->dev);
++
+ 	mutex_destroy(&dev->master_mutex);
+ 	mutex_destroy(&dev->ctxlist_mutex);
+ 	mutex_destroy(&dev->clientlist_mutex);
+diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
+index 7caa3c7ed978..9701469a6e93 100644
+--- a/drivers/gpu/drm/drm_file.c
++++ b/drivers/gpu/drm/drm_file.c
+@@ -577,6 +577,7 @@ put_back_event:
+ 				file_priv->event_space -= length;
+ 				list_add(&e->link, &file_priv->event_list);
+ 				spin_unlock_irq(&dev->event_lock);
++				wake_up_interruptible(&file_priv->event_wait);
+ 				break;
+ 			}
+ 
+diff --git a/drivers/gpu/drm/drm_writeback.c b/drivers/gpu/drm/drm_writeback.c
+index c20e6fe00cb3..2d75032f8159 100644
+--- a/drivers/gpu/drm/drm_writeback.c
++++ b/drivers/gpu/drm/drm_writeback.c
+@@ -268,6 +268,15 @@ void drm_writeback_queue_job(struct drm_writeback_connector *wb_connector,
+ }
+ EXPORT_SYMBOL(drm_writeback_queue_job);
+ 
++void drm_writeback_cleanup_job(struct drm_writeback_job *job)
++{
++	if (job->fb)
++		drm_framebuffer_put(job->fb);
++
++	kfree(job);
++}
++EXPORT_SYMBOL(drm_writeback_cleanup_job);
++
+ /*
+  * @cleanup_work: deferred cleanup of a writeback job
+  *
+@@ -280,10 +289,9 @@ static void cleanup_work(struct work_struct *work)
+ 	struct drm_writeback_job *job = container_of(work,
+ 						     struct drm_writeback_job,
+ 						     cleanup_work);
+-	drm_framebuffer_put(job->fb);
+-	kfree(job);
+-}
+ 
++	drm_writeback_cleanup_job(job);
++}
+ 
+ /**
+  * drm_writeback_signal_completion - Signal the completion of a writeback job
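
The writeback pair above fixes a leak for jobs queued on a connector state that never reach a commit: the framebuffer put and kfree are factored into drm_writeback_cleanup_job(), and the atomic helper's connector-state destructor now calls it, mirroring what the deferred cleanup_work already did. The destroy-path usage, restating the helper hunk as a two-line sketch:

	if (state->writeback_job)	/* queued but never committed */
		drm_writeback_cleanup_job(state->writeback_job); /* puts fb, frees job */
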
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.c b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
+index 18c27f795cf6..3156450723ba 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
+@@ -515,6 +515,9 @@ static int etnaviv_bind(struct device *dev)
+ 	}
+ 	drm->dev_private = priv;
+ 
++	dev->dma_parms = &priv->dma_parms;
++	dma_set_max_seg_size(dev, SZ_2G);
++
+ 	mutex_init(&priv->gem_lock);
+ 	INIT_LIST_HEAD(&priv->gem_list);
+ 	priv->num_gpus = 0;
+@@ -552,6 +555,8 @@ static void etnaviv_unbind(struct device *dev)
+ 
+ 	component_unbind_all(dev, drm);
+ 
++	dev->dma_parms = NULL;
++
+ 	drm->dev_private = NULL;
+ 	kfree(priv);
+ 
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+index a6a7ded37ef1..6a4ea127c4f1 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
++++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+@@ -42,6 +42,7 @@ struct etnaviv_file_private {
+ 
+ struct etnaviv_drm_private {
+ 	int num_gpus;
++	struct device_dma_parameters dma_parms;
+ 	struct etnaviv_gpu *gpu[ETNA_MAX_PIPES];
+ 
+ 	/* list of GEM objects: */
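
The etnaviv change addresses the DMA segment-size limit: without device dma_parms, the DMA API assumes a 64 KiB maximum segment and warns when scatterlist entries exceed it, so the driver now supplies storage for the parameters (in its private struct, per the header hunk) and raises the cap to 2 GiB. The pairing that makes this work, sketched:

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/sizes.h>

/* dma_set_max_seg_size() stores into dev->dma_parms, so the storage must be
 * provided by the driver and stay valid for as long as the device is used */
static int enable_large_segments(struct device *dev,
				 struct device_dma_parameters *parms)
{
	dev->dma_parms = parms;
	return dma_set_max_seg_size(dev, SZ_2G);
}
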
+diff --git a/drivers/gpu/drm/i915/gvt/Makefile b/drivers/gpu/drm/i915/gvt/Makefile
+index 271fb46d4dd0..ea8324abc784 100644
+--- a/drivers/gpu/drm/i915/gvt/Makefile
++++ b/drivers/gpu/drm/i915/gvt/Makefile
+@@ -5,5 +5,5 @@ GVT_SOURCE := gvt.o aperture_gm.o handlers.o vgpu.o trace_points.o firmware.o \
+ 	execlist.o scheduler.o sched_policy.o mmio_context.o cmd_parser.o debugfs.o \
+ 	fb_decoder.o dmabuf.o page_track.o
+ 
+-ccflags-y				+= -I$(src) -I$(src)/$(GVT_DIR)
++ccflags-y				+= -I $(srctree)/$(src) -I $(srctree)/$(src)/$(GVT_DIR)/
+ i915-y					+= $(addprefix $(GVT_DIR)/, $(GVT_SOURCE))
+diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile
+index 56a70c74af4e..b7b1ebdc8190 100644
+--- a/drivers/gpu/drm/msm/Makefile
++++ b/drivers/gpu/drm/msm/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+-ccflags-y := -Idrivers/gpu/drm/msm
+-ccflags-y += -Idrivers/gpu/drm/msm/disp/dpu1
+-ccflags-$(CONFIG_DRM_MSM_DSI) += -Idrivers/gpu/drm/msm/dsi
++ccflags-y := -I $(srctree)/$(src)
++ccflags-y += -I $(srctree)/$(src)/disp/dpu1
++ccflags-$(CONFIG_DRM_MSM_DSI) += -I $(srctree)/$(src)/dsi
+ 
+ msm-y := \
+ 	adreno/adreno_device.o \
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+index d5f5e56422f5..270da14cba67 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+@@ -34,7 +34,7 @@ static int zap_shader_load_mdt(struct msm_gpu *gpu, const char *fwname)
+ {
+ 	struct device *dev = &gpu->pdev->dev;
+ 	const struct firmware *fw;
+-	struct device_node *np;
++	struct device_node *np, *mem_np;
+ 	struct resource r;
+ 	phys_addr_t mem_phys;
+ 	ssize_t mem_size;
+@@ -48,11 +48,13 @@ static int zap_shader_load_mdt(struct msm_gpu *gpu, const char *fwname)
+ 	if (!np)
+ 		return -ENODEV;
+ 
+-	np = of_parse_phandle(np, "memory-region", 0);
+-	if (!np)
++	mem_np = of_parse_phandle(np, "memory-region", 0);
++	of_node_put(np);
++	if (!mem_np)
+ 		return -EINVAL;
+ 
+-	ret = of_address_to_resource(np, 0, &r);
++	ret = of_address_to_resource(mem_np, 0, &r);
++	of_node_put(mem_np);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index 5aa3307f3f0c..dd2c4d11d0e1 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -1023,13 +1023,13 @@ static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc,
+ 			if (!dpu_enc->hw_pp[i]) {
+ 				DPU_ERROR_ENC(dpu_enc, "no pp block assigned"
+ 					     "at idx: %d\n", i);
+-				return;
++				goto error;
+ 			}
+ 
+ 			if (!hw_ctl[i]) {
+ 				DPU_ERROR_ENC(dpu_enc, "no ctl block assigned"
+ 					     "at idx: %d\n", i);
+-				return;
++				goto error;
+ 			}
+ 
+ 			phys->hw_pp = dpu_enc->hw_pp[i];
+@@ -1042,6 +1042,9 @@ static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc,
+ 	}
+ 
+ 	dpu_enc->mode_set_complete = true;
++
++error:
++	dpu_rm_release(&dpu_kms->rm, drm_enc);
+ }
+ 
+ static void _dpu_encoder_virt_enable_helper(struct drm_encoder *drm_enc)
+@@ -1547,8 +1550,14 @@ static void _dpu_encoder_kickoff_phys(struct dpu_encoder_virt *dpu_enc,
+ 		if (!ctl)
+ 			continue;
+ 
+-		if (phys->split_role != ENC_ROLE_SLAVE)
++		/*
++		 * This is cleared in frame_done worker, which isn't invoked
++		 * for async commits. So don't set this for async, since it'll
++		 * roll over to the next commit.
++		 */
++		if (!async && phys->split_role != ENC_ROLE_SLAVE)
+ 			set_bit(i, dpu_enc->frame_busy_mask);
++
+ 		if (!phys->ops.needs_single_flush ||
+ 				!phys->ops.needs_single_flush(phys))
+ 			_dpu_encoder_trigger_flush(&dpu_enc->base, phys, 0x0,
+diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
+index 49c04829cf34..fcf7a83f0e6f 100644
+--- a/drivers/gpu/drm/msm/msm_gem_vma.c
++++ b/drivers/gpu/drm/msm/msm_gem_vma.c
+@@ -85,7 +85,7 @@ msm_gem_map_vma(struct msm_gem_address_space *aspace,
+ 
+ 	vma->mapped = true;
+ 
+-	if (aspace->mmu)
++	if (aspace && aspace->mmu)
+ 		ret = aspace->mmu->funcs->map(aspace->mmu, vma->iova, sgt,
+ 				size, prot);
+ 
+diff --git a/drivers/gpu/drm/nouveau/Kbuild b/drivers/gpu/drm/nouveau/Kbuild
+index 581404e6544d..378c5dd692b0 100644
+--- a/drivers/gpu/drm/nouveau/Kbuild
++++ b/drivers/gpu/drm/nouveau/Kbuild
+@@ -1,7 +1,7 @@
+-ccflags-y += -I$(src)/include
+-ccflags-y += -I$(src)/include/nvkm
+-ccflags-y += -I$(src)/nvkm
+-ccflags-y += -I$(src)
++ccflags-y += -I $(srctree)/$(src)/include
++ccflags-y += -I $(srctree)/$(src)/include/nvkm
++ccflags-y += -I $(srctree)/$(src)/nvkm
++ccflags-y += -I $(srctree)/$(src)
+ 
+ # NVKM - HW resource manager
+ #- code also used by various userspace tools/tests
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bar/nv50.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bar/nv50.c
+index 157b076a1272..38c9c086754b 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bar/nv50.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bar/nv50.c
+@@ -109,7 +109,7 @@ nv50_bar_oneinit(struct nvkm_bar *base)
+ 	struct nvkm_device *device = bar->base.subdev.device;
+ 	static struct lock_class_key bar1_lock;
+ 	static struct lock_class_key bar2_lock;
+-	u64 start, limit;
++	u64 start, limit, size;
+ 	int ret;
+ 
+ 	ret = nvkm_gpuobj_new(device, 0x20000, 0, false, NULL, &bar->mem);
+@@ -127,7 +127,10 @@ nv50_bar_oneinit(struct nvkm_bar *base)
+ 
+ 	/* BAR2 */
+ 	start = 0x0100000000ULL;
+-	limit = start + device->func->resource_size(device, 3);
++	size = device->func->resource_size(device, 3);
++	if (!size)
++		return -ENOMEM;
++	limit = start + size;
+ 
+ 	ret = nvkm_vmm_new(device, start, limit-- - start, NULL, 0,
+ 			   &bar2_lock, "bar2", &bar->bar2_vmm);
+@@ -164,7 +167,10 @@ nv50_bar_oneinit(struct nvkm_bar *base)
+ 
+ 	/* BAR1 */
+ 	start = 0x0000000000ULL;
+-	limit = start + device->func->resource_size(device, 1);
++	size = device->func->resource_size(device, 1);
++	if (!size)
++		return -ENOMEM;
++	limit = start + size;
+ 
+ 	ret = nvkm_vmm_new(device, start, limit-- - start, NULL, 0,
+ 			   &bar1_lock, "bar1", &bar->bar1_vmm);
+diff --git a/drivers/gpu/drm/omapdrm/dss/dsi.c b/drivers/gpu/drm/omapdrm/dss/dsi.c
+index 64fb788b6647..f0fe975ed46c 100644
+--- a/drivers/gpu/drm/omapdrm/dss/dsi.c
++++ b/drivers/gpu/drm/omapdrm/dss/dsi.c
+@@ -1342,12 +1342,9 @@ static int dsi_pll_enable(struct dss_pll *pll)
+ 	 */
+ 	dsi_enable_scp_clk(dsi);
+ 
+-	if (!dsi->vdds_dsi_enabled) {
+-		r = regulator_enable(dsi->vdds_dsi_reg);
+-		if (r)
+-			goto err0;
+-		dsi->vdds_dsi_enabled = true;
+-	}
++	r = regulator_enable(dsi->vdds_dsi_reg);
++	if (r)
++		goto err0;
+ 
+ 	/* XXX PLL does not come out of reset without this... */
+ 	dispc_pck_free_enable(dsi->dss->dispc, 1);
+@@ -1372,36 +1369,25 @@ static int dsi_pll_enable(struct dss_pll *pll)
+ 
+ 	return 0;
+ err1:
+-	if (dsi->vdds_dsi_enabled) {
+-		regulator_disable(dsi->vdds_dsi_reg);
+-		dsi->vdds_dsi_enabled = false;
+-	}
++	regulator_disable(dsi->vdds_dsi_reg);
+ err0:
+ 	dsi_disable_scp_clk(dsi);
+ 	dsi_runtime_put(dsi);
+ 	return r;
+ }
+ 
+-static void dsi_pll_uninit(struct dsi_data *dsi, bool disconnect_lanes)
++static void dsi_pll_disable(struct dss_pll *pll)
+ {
++	struct dsi_data *dsi = container_of(pll, struct dsi_data, pll);
++
+ 	dsi_pll_power(dsi, DSI_PLL_POWER_OFF);
+-	if (disconnect_lanes) {
+-		WARN_ON(!dsi->vdds_dsi_enabled);
+-		regulator_disable(dsi->vdds_dsi_reg);
+-		dsi->vdds_dsi_enabled = false;
+-	}
++
++	regulator_disable(dsi->vdds_dsi_reg);
+ 
+ 	dsi_disable_scp_clk(dsi);
+ 	dsi_runtime_put(dsi);
+ 
+-	DSSDBG("PLL uninit done\n");
+-}
+-
+-static void dsi_pll_disable(struct dss_pll *pll)
+-{
+-	struct dsi_data *dsi = container_of(pll, struct dsi_data, pll);
+-
+-	dsi_pll_uninit(dsi, true);
++	DSSDBG("PLL disable done\n");
+ }
+ 
+ static int dsi_dump_dsi_clocks(struct seq_file *s, void *p)
+@@ -4096,11 +4082,11 @@ static int dsi_display_init_dsi(struct dsi_data *dsi)
+ 
+ 	r = dss_pll_enable(&dsi->pll);
+ 	if (r)
+-		goto err0;
++		return r;
+ 
+ 	r = dsi_configure_dsi_clocks(dsi);
+ 	if (r)
+-		goto err1;
++		goto err0;
+ 
+ 	dss_select_dsi_clk_source(dsi->dss, dsi->module_id,
+ 				  dsi->module_id == 0 ?
+@@ -4108,6 +4094,14 @@ static int dsi_display_init_dsi(struct dsi_data *dsi)
+ 
+ 	DSSDBG("PLL OK\n");
+ 
++	if (!dsi->vdds_dsi_enabled) {
++		r = regulator_enable(dsi->vdds_dsi_reg);
++		if (r)
++			goto err1;
++
++		dsi->vdds_dsi_enabled = true;
++	}
++
+ 	r = dsi_cio_init(dsi);
+ 	if (r)
+ 		goto err2;
+@@ -4136,10 +4130,13 @@ static int dsi_display_init_dsi(struct dsi_data *dsi)
+ err3:
+ 	dsi_cio_uninit(dsi);
+ err2:
+-	dss_select_dsi_clk_source(dsi->dss, dsi->module_id, DSS_CLK_SRC_FCK);
++	regulator_disable(dsi->vdds_dsi_reg);
++	dsi->vdds_dsi_enabled = false;
+ err1:
+-	dss_pll_disable(&dsi->pll);
++	dss_select_dsi_clk_source(dsi->dss, dsi->module_id, DSS_CLK_SRC_FCK);
+ err0:
++	dss_pll_disable(&dsi->pll);
++
+ 	return r;
+ }
+ 
+@@ -4158,7 +4155,12 @@ static void dsi_display_uninit_dsi(struct dsi_data *dsi, bool disconnect_lanes,
+ 
+ 	dss_select_dsi_clk_source(dsi->dss, dsi->module_id, DSS_CLK_SRC_FCK);
+ 	dsi_cio_uninit(dsi);
+-	dsi_pll_uninit(dsi, disconnect_lanes);
++	dss_pll_disable(&dsi->pll);
++
++	if (disconnect_lanes) {
++		regulator_disable(dsi->vdds_dsi_reg);
++		dsi->vdds_dsi_enabled = false;
++	}
+ }
+ 
+ static int dsi_display_enable(struct omap_dss_device *dssdev)
+diff --git a/drivers/gpu/drm/omapdrm/omap_connector.c b/drivers/gpu/drm/omapdrm/omap_connector.c
+index 9da94d10782a..d37e3c001e24 100644
+--- a/drivers/gpu/drm/omapdrm/omap_connector.c
++++ b/drivers/gpu/drm/omapdrm/omap_connector.c
+@@ -36,18 +36,22 @@ struct omap_connector {
+ };
+ 
+ static void omap_connector_hpd_notify(struct drm_connector *connector,
+-				      struct omap_dss_device *src,
+ 				      enum drm_connector_status status)
+ {
+-	if (status == connector_status_disconnected) {
+-		/*
+-		 * If the source is an HDMI encoder, notify it of disconnection.
+-		 * This is required to let the HDMI encoder reset any internal
+-		 * state related to connection status, such as the CEC address.
+-		 */
+-		if (src && src->type == OMAP_DISPLAY_TYPE_HDMI &&
+-		    src->ops->hdmi.lost_hotplug)
+-			src->ops->hdmi.lost_hotplug(src);
++	struct omap_connector *omap_connector = to_omap_connector(connector);
++	struct omap_dss_device *dssdev;
++
++	if (status != connector_status_disconnected)
++		return;
++
++	/*
++	 * Notify all devices in the pipeline of disconnection. This is required
++	 * to let the HDMI encoders reset their internal state related to
++	 * connection status, such as the CEC address.
++	 */
++	for (dssdev = omap_connector->output; dssdev; dssdev = dssdev->next) {
++		if (dssdev->ops && dssdev->ops->hdmi.lost_hotplug)
++			dssdev->ops->hdmi.lost_hotplug(dssdev);
+ 	}
+ }
+ 
+@@ -67,7 +71,7 @@ static void omap_connector_hpd_cb(void *cb_data,
+ 	if (old_status == status)
+ 		return;
+ 
+-	omap_connector_hpd_notify(connector, omap_connector->hpd, status);
++	omap_connector_hpd_notify(connector, status);
+ 
+ 	drm_kms_helper_hotplug_event(dev);
+ }
+@@ -128,7 +132,7 @@ static enum drm_connector_status omap_connector_detect(
+ 		       ? connector_status_connected
+ 		       : connector_status_disconnected;
+ 
+-		omap_connector_hpd_notify(connector, dssdev->src, status);
++		omap_connector_hpd_notify(connector, status);
+ 	} else {
+ 		switch (omap_connector->display->type) {
+ 		case OMAP_DISPLAY_TYPE_DPI:
+diff --git a/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c b/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c
+index 87fa316e1d7b..58ccf648b70f 100644
+--- a/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c
++++ b/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c
+@@ -248,6 +248,9 @@ static int otm8009a_init_sequence(struct otm8009a *ctx)
+ 	/* Send Command GRAM memory write (no parameters) */
+ 	dcs_write_seq(ctx, MIPI_DCS_WRITE_MEMORY_START);
+ 
++	/* Wait a short while to let the panel be ready before the 1st frame */
++	mdelay(10);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/pl111/pl111_versatile.c b/drivers/gpu/drm/pl111/pl111_versatile.c
+index b9baefdba38a..1c318ad32a8c 100644
+--- a/drivers/gpu/drm/pl111/pl111_versatile.c
++++ b/drivers/gpu/drm/pl111/pl111_versatile.c
+@@ -330,6 +330,7 @@ int pl111_versatile_init(struct device *dev, struct pl111_drm_dev_private *priv)
+ 		ret = vexpress_muxfpga_init();
+ 		if (ret) {
+ 			dev_err(dev, "unable to initialize muxfpga driver\n");
++			of_node_put(np);
+ 			return ret;
+ 		}
+ 
+@@ -337,17 +338,20 @@ int pl111_versatile_init(struct device *dev, struct pl111_drm_dev_private *priv)
+ 		pdev = of_find_device_by_node(np);
+ 		if (!pdev) {
+ 			dev_err(dev, "can't find the sysreg device, deferring\n");
++			of_node_put(np);
+ 			return -EPROBE_DEFER;
+ 		}
+ 		map = dev_get_drvdata(&pdev->dev);
+ 		if (!map) {
+ 			dev_err(dev, "sysreg has not yet probed\n");
+ 			platform_device_put(pdev);
++			of_node_put(np);
+ 			return -EPROBE_DEFER;
+ 		}
+ 	} else {
+ 		map = syscon_node_to_regmap(np);
+ 	}
++	of_node_put(np);
+ 
+ 	if (IS_ERR(map)) {
+ 		dev_err(dev, "no Versatile syscon regmap\n");
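
[Editor's note] All three added of_node_put() calls fix the same leak class: of_find_matching_node() (used earlier in this function) hands back a device node with its refcount raised, and every return path has to drop it. A minimal sketch of the rule, assuming a hypothetical lookup:

#include <linux/of.h>
#include <linux/errno.h>

static int demo_lookup(const struct of_device_id *match)
{
	struct device_node *np;

	np = of_find_matching_node(NULL, match);
	if (!np)
		return -ENODEV;

	/* ... use np; any early return must also put the node ... */

	of_node_put(np);	/* balances the find's implicit get */
	return 0;
}
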
+diff --git a/drivers/gpu/drm/rcar-du/rcar_lvds.c b/drivers/gpu/drm/rcar-du/rcar_lvds.c
+index 7ef97b2a6eda..033f44e46daf 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_lvds.c
++++ b/drivers/gpu/drm/rcar-du/rcar_lvds.c
+@@ -283,7 +283,7 @@ static void rcar_lvds_d3_e3_pll_calc(struct rcar_lvds *lvds, struct clk *clk,
+ 				 * divider.
+ 				 */
+ 				fout = fvco / (1 << e) / div7;
+-				div = DIV_ROUND_CLOSEST(fout, target);
++				div = max(1UL, DIV_ROUND_CLOSEST(fout, target));
+ 				diff = abs(fout / div - target);
+ 
+ 				if (diff < pll->diff) {
+@@ -485,9 +485,13 @@ static void rcar_lvds_enable(struct drm_bridge *bridge)
+ 	}
+ 
+ 	if (lvds->info->quirks & RCAR_LVDS_QUIRK_GEN3_LVEN) {
+-		/* Turn on the LVDS PHY. */
++		/*
++		 * Turn on the LVDS PHY. On D3, the LVEN and LVRES bit must be
++		 * set at the same time, so don't write the register yet.
++		 */
+ 		lvdcr0 |= LVDCR0_LVEN;
+-		rcar_lvds_write(lvds, LVDCR0, lvdcr0);
++		if (!(lvds->info->quirks & RCAR_LVDS_QUIRK_PWD))
++			rcar_lvds_write(lvds, LVDCR0, lvdcr0);
+ 	}
+ 
+ 	if (!(lvds->info->quirks & RCAR_LVDS_QUIRK_EXT_PLL)) {
+diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.c b/drivers/gpu/drm/sun4i/sun4i_tcon.c
+index 7136fc91c603..e75f77ff8e0f 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_tcon.c
++++ b/drivers/gpu/drm/sun4i/sun4i_tcon.c
+@@ -341,8 +341,8 @@ static void sun4i_tcon0_mode_set_cpu(struct sun4i_tcon *tcon,
+ 	u32 block_space, start_delay;
+ 	u32 tcon_div;
+ 
+-	tcon->dclk_min_div = 4;
+-	tcon->dclk_max_div = 127;
++	tcon->dclk_min_div = SUN6I_DSI_TCON_DIV;
++	tcon->dclk_max_div = SUN6I_DSI_TCON_DIV;
+ 
+ 	sun4i_tcon0_mode_set_common(tcon, mode);
+ 
+diff --git a/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c b/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
+index 318994cd1b85..869e0aedf343 100644
+--- a/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
++++ b/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
+@@ -358,7 +358,13 @@ static void sun6i_dsi_inst_init(struct sun6i_dsi *dsi,
+ static u16 sun6i_dsi_get_video_start_delay(struct sun6i_dsi *dsi,
+ 					   struct drm_display_mode *mode)
+ {
+-	return mode->vtotal - (mode->vsync_end - mode->vdisplay) + 1;
++	u16 start = clamp(mode->vtotal - mode->vdisplay - 10, 8, 100);
++	u16 delay = mode->vtotal - (mode->vsync_end - mode->vdisplay) + start;
++
++	if (delay > mode->vtotal)
++		delay = delay % mode->vtotal;
++
++	return max_t(u16, delay, 1);
+ }
+ 
+ static void sun6i_dsi_setup_burst(struct sun6i_dsi *dsi,
+diff --git a/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.h b/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.h
+index a07090579f84..5c3ad5be0690 100644
+--- a/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.h
++++ b/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.h
+@@ -13,6 +13,8 @@
+ #include <drm/drm_encoder.h>
+ #include <drm/drm_mipi_dsi.h>
+ 
++#define SUN6I_DSI_TCON_DIV	4
++
+ struct sun6i_dsi {
+ 	struct drm_connector	connector;
+ 	struct drm_encoder	encoder;
+diff --git a/drivers/gpu/drm/tinydrm/ili9225.c b/drivers/gpu/drm/tinydrm/ili9225.c
+index 43a3b68d90a2..998d75be9e16 100644
+--- a/drivers/gpu/drm/tinydrm/ili9225.c
++++ b/drivers/gpu/drm/tinydrm/ili9225.c
+@@ -301,7 +301,7 @@ static void ili9225_pipe_disable(struct drm_simple_display_pipe *pipe)
+ 	mipi->enabled = false;
+ }
+ 
+-static int ili9225_dbi_command(struct mipi_dbi *mipi, u8 cmd, u8 *par,
++static int ili9225_dbi_command(struct mipi_dbi *mipi, u8 *cmd, u8 *par,
+ 			       size_t num)
+ {
+ 	struct spi_device *spi = mipi->spi;
+@@ -311,11 +311,11 @@ static int ili9225_dbi_command(struct mipi_dbi *mipi, u8 cmd, u8 *par,
+ 
+ 	gpiod_set_value_cansleep(mipi->dc, 0);
+ 	speed_hz = mipi_dbi_spi_cmd_max_speed(spi, 1);
+-	ret = tinydrm_spi_transfer(spi, speed_hz, NULL, 8, &cmd, 1);
++	ret = tinydrm_spi_transfer(spi, speed_hz, NULL, 8, cmd, 1);
+ 	if (ret || !num)
+ 		return ret;
+ 
+-	if (cmd == ILI9225_WRITE_DATA_TO_GRAM && !mipi->swap_bytes)
++	if (*cmd == ILI9225_WRITE_DATA_TO_GRAM && !mipi->swap_bytes)
+ 		bpw = 16;
+ 
+ 	gpiod_set_value_cansleep(mipi->dc, 1);
+diff --git a/drivers/gpu/drm/tinydrm/mipi-dbi.c b/drivers/gpu/drm/tinydrm/mipi-dbi.c
+index 918f77c7de34..295cbcbc2bb6 100644
+--- a/drivers/gpu/drm/tinydrm/mipi-dbi.c
++++ b/drivers/gpu/drm/tinydrm/mipi-dbi.c
+@@ -153,16 +153,42 @@ EXPORT_SYMBOL(mipi_dbi_command_read);
+  */
+ int mipi_dbi_command_buf(struct mipi_dbi *mipi, u8 cmd, u8 *data, size_t len)
+ {
++	u8 *cmdbuf;
+ 	int ret;
+ 
++	/* SPI requires dma-safe buffers */
++	cmdbuf = kmemdup(&cmd, 1, GFP_KERNEL);
++	if (!cmdbuf)
++		return -ENOMEM;
++
+ 	mutex_lock(&mipi->cmdlock);
+-	ret = mipi->command(mipi, cmd, data, len);
++	ret = mipi->command(mipi, cmdbuf, data, len);
+ 	mutex_unlock(&mipi->cmdlock);
+ 
++	kfree(cmdbuf);
++
+ 	return ret;
+ }
+ EXPORT_SYMBOL(mipi_dbi_command_buf);
+ 
++/* This should only be used by mipi_dbi_command() */
++int mipi_dbi_command_stackbuf(struct mipi_dbi *mipi, u8 cmd, u8 *data, size_t len)
++{
++	u8 *buf;
++	int ret;
++
++	buf = kmemdup(data, len, GFP_KERNEL);
++	if (!buf)
++		return -ENOMEM;
++
++	ret = mipi_dbi_command_buf(mipi, cmd, buf, len);
++
++	kfree(buf);
++
++	return ret;
++}
++EXPORT_SYMBOL(mipi_dbi_command_stackbuf);
++
+ /**
+  * mipi_dbi_buf_copy - Copy a framebuffer, transforming it if necessary
+  * @dst: The destination buffer
+@@ -774,18 +800,18 @@ static int mipi_dbi_spi1_transfer(struct mipi_dbi *mipi, int dc,
+ 	return 0;
+ }
+ 
+-static int mipi_dbi_typec1_command(struct mipi_dbi *mipi, u8 cmd,
++static int mipi_dbi_typec1_command(struct mipi_dbi *mipi, u8 *cmd,
+ 				   u8 *parameters, size_t num)
+ {
+-	unsigned int bpw = (cmd == MIPI_DCS_WRITE_MEMORY_START) ? 16 : 8;
++	unsigned int bpw = (*cmd == MIPI_DCS_WRITE_MEMORY_START) ? 16 : 8;
+ 	int ret;
+ 
+-	if (mipi_dbi_command_is_read(mipi, cmd))
++	if (mipi_dbi_command_is_read(mipi, *cmd))
+ 		return -ENOTSUPP;
+ 
+-	MIPI_DBI_DEBUG_COMMAND(cmd, parameters, num);
++	MIPI_DBI_DEBUG_COMMAND(*cmd, parameters, num);
+ 
+-	ret = mipi_dbi_spi1_transfer(mipi, 0, &cmd, 1, 8);
++	ret = mipi_dbi_spi1_transfer(mipi, 0, cmd, 1, 8);
+ 	if (ret || !num)
+ 		return ret;
+ 
+@@ -794,7 +820,7 @@ static int mipi_dbi_typec1_command(struct mipi_dbi *mipi, u8 cmd,
+ 
+ /* MIPI DBI Type C Option 3 */
+ 
+-static int mipi_dbi_typec3_command_read(struct mipi_dbi *mipi, u8 cmd,
++static int mipi_dbi_typec3_command_read(struct mipi_dbi *mipi, u8 *cmd,
+ 					u8 *data, size_t len)
+ {
+ 	struct spi_device *spi = mipi->spi;
+@@ -803,7 +829,7 @@ static int mipi_dbi_typec3_command_read(struct mipi_dbi *mipi, u8 cmd,
+ 	struct spi_transfer tr[2] = {
+ 		{
+ 			.speed_hz = speed_hz,
+-			.tx_buf = &cmd,
++			.tx_buf = cmd,
+ 			.len = 1,
+ 		}, {
+ 			.speed_hz = speed_hz,
+@@ -821,8 +847,8 @@ static int mipi_dbi_typec3_command_read(struct mipi_dbi *mipi, u8 cmd,
+ 	 * Support non-standard 24-bit and 32-bit Nokia read commands which
+ 	 * start with a dummy clock, so we need to read an extra byte.
+ 	 */
+-	if (cmd == MIPI_DCS_GET_DISPLAY_ID ||
+-	    cmd == MIPI_DCS_GET_DISPLAY_STATUS) {
++	if (*cmd == MIPI_DCS_GET_DISPLAY_ID ||
++	    *cmd == MIPI_DCS_GET_DISPLAY_STATUS) {
+ 		if (!(len == 3 || len == 4))
+ 			return -EINVAL;
+ 
+@@ -852,7 +878,7 @@ static int mipi_dbi_typec3_command_read(struct mipi_dbi *mipi, u8 cmd,
+ 			data[i] = (buf[i] << 1) | !!(buf[i + 1] & BIT(7));
+ 	}
+ 
+-	MIPI_DBI_DEBUG_COMMAND(cmd, data, len);
++	MIPI_DBI_DEBUG_COMMAND(*cmd, data, len);
+ 
+ err_free:
+ 	kfree(buf);
+@@ -860,7 +886,7 @@ err_free:
+ 	return ret;
+ }
+ 
+-static int mipi_dbi_typec3_command(struct mipi_dbi *mipi, u8 cmd,
++static int mipi_dbi_typec3_command(struct mipi_dbi *mipi, u8 *cmd,
+ 				   u8 *par, size_t num)
+ {
+ 	struct spi_device *spi = mipi->spi;
+@@ -868,18 +894,18 @@ static int mipi_dbi_typec3_command(struct mipi_dbi *mipi, u8 cmd,
+ 	u32 speed_hz;
+ 	int ret;
+ 
+-	if (mipi_dbi_command_is_read(mipi, cmd))
++	if (mipi_dbi_command_is_read(mipi, *cmd))
+ 		return mipi_dbi_typec3_command_read(mipi, cmd, par, num);
+ 
+-	MIPI_DBI_DEBUG_COMMAND(cmd, par, num);
++	MIPI_DBI_DEBUG_COMMAND(*cmd, par, num);
+ 
+ 	gpiod_set_value_cansleep(mipi->dc, 0);
+ 	speed_hz = mipi_dbi_spi_cmd_max_speed(spi, 1);
+-	ret = tinydrm_spi_transfer(spi, speed_hz, NULL, 8, &cmd, 1);
++	ret = tinydrm_spi_transfer(spi, speed_hz, NULL, 8, cmd, 1);
+ 	if (ret || !num)
+ 		return ret;
+ 
+-	if (cmd == MIPI_DCS_WRITE_MEMORY_START && !mipi->swap_bytes)
++	if (*cmd == MIPI_DCS_WRITE_MEMORY_START && !mipi->swap_bytes)
+ 		bpw = 16;
+ 
+ 	gpiod_set_value_cansleep(mipi->dc, 1);
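
[Editor's note] The whole mipi-dbi/ili9225 series exists because spi_sync() buffers may be handed to DMA, and a command byte passed by value lives on the stack, which is not DMA-safe. The fix threads a heap pointer through instead; kmemdup() is the usual idiom. A reduced sketch, with a made-up wrapper name:

#include <linux/slab.h>
#include <linux/spi/spi.h>

static int demo_send_cmd(struct spi_device *spi, u8 cmd)
{
	u8 *buf;
	int ret;

	buf = kmemdup(&cmd, 1, GFP_KERNEL);	/* heap copy is DMA-safe */
	if (!buf)
		return -ENOMEM;

	ret = spi_write(spi, buf, 1);
	kfree(buf);
	return ret;
}
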
+diff --git a/drivers/gpu/drm/v3d/v3d_drv.c b/drivers/gpu/drm/v3d/v3d_drv.c
+index f0afcec72c34..30ae1c74edaa 100644
+--- a/drivers/gpu/drm/v3d/v3d_drv.c
++++ b/drivers/gpu/drm/v3d/v3d_drv.c
+@@ -312,14 +312,18 @@ static int v3d_platform_drm_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto dev_destroy;
+ 
+-	v3d_irq_init(v3d);
++	ret = v3d_irq_init(v3d);
++	if (ret)
++		goto gem_destroy;
+ 
+ 	ret = drm_dev_register(drm, 0);
+ 	if (ret)
+-		goto gem_destroy;
++		goto irq_disable;
+ 
+ 	return 0;
+ 
++irq_disable:
++	v3d_irq_disable(v3d);
+ gem_destroy:
+ 	v3d_gem_destroy(drm);
+ dev_destroy:
+diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
+index fdda3037f7af..2fdb456b72d3 100644
+--- a/drivers/gpu/drm/v3d/v3d_drv.h
++++ b/drivers/gpu/drm/v3d/v3d_drv.h
+@@ -310,7 +310,7 @@ void v3d_reset(struct v3d_dev *v3d);
+ void v3d_invalidate_caches(struct v3d_dev *v3d);
+ 
+ /* v3d_irq.c */
+-void v3d_irq_init(struct v3d_dev *v3d);
++int v3d_irq_init(struct v3d_dev *v3d);
+ void v3d_irq_enable(struct v3d_dev *v3d);
+ void v3d_irq_disable(struct v3d_dev *v3d);
+ void v3d_irq_reset(struct v3d_dev *v3d);
+diff --git a/drivers/gpu/drm/v3d/v3d_irq.c b/drivers/gpu/drm/v3d/v3d_irq.c
+index 69338da70ddc..29d746cfce57 100644
+--- a/drivers/gpu/drm/v3d/v3d_irq.c
++++ b/drivers/gpu/drm/v3d/v3d_irq.c
+@@ -156,7 +156,7 @@ v3d_hub_irq(int irq, void *arg)
+ 	return status;
+ }
+ 
+-void
++int
+ v3d_irq_init(struct v3d_dev *v3d)
+ {
+ 	int ret, core;
+@@ -173,13 +173,22 @@ v3d_irq_init(struct v3d_dev *v3d)
+ 	ret = devm_request_irq(v3d->dev, platform_get_irq(v3d->pdev, 0),
+ 			       v3d_hub_irq, IRQF_SHARED,
+ 			       "v3d_hub", v3d);
++	if (ret)
++		goto fail;
++
+ 	ret = devm_request_irq(v3d->dev, platform_get_irq(v3d->pdev, 1),
+ 			       v3d_irq, IRQF_SHARED,
+ 			       "v3d_core0", v3d);
+ 	if (ret)
+-		dev_err(v3d->dev, "IRQ setup failed: %d\n", ret);
++		goto fail;
+ 
+ 	v3d_irq_enable(v3d);
++	return 0;
++
++fail:
++	if (ret != -EPROBE_DEFER)
++		dev_err(v3d->dev, "IRQ setup failed: %d\n", ret);
++	return ret;
+ }
+ 
+ void
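
[Editor's note] The v3d change is the standard probe unwind ladder: once v3d_irq_init() can fail, every stage initialised before it needs a label that tears it down in reverse order, and -EPROBE_DEFER is kept quiet because probe will be retried. Generic shape, with placeholder stage functions:

static int init_gem(void);
static int init_irq(void);
static int register_device(void);
static void disable_irq_stage(void);
static void destroy_gem(void);

static int demo_probe(void)
{
	int ret;

	ret = init_gem();
	if (ret)
		return ret;

	ret = init_irq();
	if (ret)
		goto err_gem;

	ret = register_device();
	if (ret)
		goto err_irq;

	return 0;

err_irq:
	disable_irq_stage();	/* undo stages in reverse order */
err_gem:
	destroy_gem();
	return ret;
}
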
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 860e21ec6a49..63a43726cce0 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -218,13 +218,14 @@ static unsigned hid_lookup_collection(struct hid_parser *parser, unsigned type)
+  * Add a usage to the temporary parser table.
+  */
+ 
+-static int hid_add_usage(struct hid_parser *parser, unsigned usage)
++static int hid_add_usage(struct hid_parser *parser, unsigned usage, u8 size)
+ {
+ 	if (parser->local.usage_index >= HID_MAX_USAGES) {
+ 		hid_err(parser->device, "usage index exceeded\n");
+ 		return -1;
+ 	}
+ 	parser->local.usage[parser->local.usage_index] = usage;
++	parser->local.usage_size[parser->local.usage_index] = size;
+ 	parser->local.collection_index[parser->local.usage_index] =
+ 		parser->collection_stack_ptr ?
+ 		parser->collection_stack[parser->collection_stack_ptr - 1] : 0;
+@@ -486,10 +487,7 @@ static int hid_parser_local(struct hid_parser *parser, struct hid_item *item)
+ 			return 0;
+ 		}
+ 
+-		if (item->size <= 2)
+-			data = (parser->global.usage_page << 16) + data;
+-
+-		return hid_add_usage(parser, data);
++		return hid_add_usage(parser, data, item->size);
+ 
+ 	case HID_LOCAL_ITEM_TAG_USAGE_MINIMUM:
+ 
+@@ -498,9 +496,6 @@ static int hid_parser_local(struct hid_parser *parser, struct hid_item *item)
+ 			return 0;
+ 		}
+ 
+-		if (item->size <= 2)
+-			data = (parser->global.usage_page << 16) + data;
+-
+ 		parser->local.usage_minimum = data;
+ 		return 0;
+ 
+@@ -511,9 +506,6 @@ static int hid_parser_local(struct hid_parser *parser, struct hid_item *item)
+ 			return 0;
+ 		}
+ 
+-		if (item->size <= 2)
+-			data = (parser->global.usage_page << 16) + data;
+-
+ 		count = data - parser->local.usage_minimum;
+ 		if (count + parser->local.usage_index >= HID_MAX_USAGES) {
+ 			/*
+@@ -533,7 +525,7 @@ static int hid_parser_local(struct hid_parser *parser, struct hid_item *item)
+ 		}
+ 
+ 		for (n = parser->local.usage_minimum; n <= data; n++)
+-			if (hid_add_usage(parser, n)) {
++			if (hid_add_usage(parser, n, item->size)) {
+ 				dbg_hid("hid_add_usage failed\n");
+ 				return -1;
+ 			}
+@@ -547,6 +539,22 @@ static int hid_parser_local(struct hid_parser *parser, struct hid_item *item)
+ 	return 0;
+ }
+ 
++/*
++ * Concatenate Usage Pages into Usages where relevant:
++ * As per specification, 6.2.2.8: "When the parser encounters a main item it
++ * concatenates the last declared Usage Page with a Usage to form a complete
++ * usage value."
++ */
++
++static void hid_concatenate_usage_page(struct hid_parser *parser)
++{
++	int i;
++
++	for (i = 0; i < parser->local.usage_index; i++)
++		if (parser->local.usage_size[i] <= 2)
++			parser->local.usage[i] += parser->global.usage_page << 16;
++}
++
+ /*
+  * Process a main item.
+  */
+@@ -556,6 +564,8 @@ static int hid_parser_main(struct hid_parser *parser, struct hid_item *item)
+ 	__u32 data;
+ 	int ret;
+ 
++	hid_concatenate_usage_page(parser);
++
+ 	data = item_udata(item);
+ 
+ 	switch (item->tag) {
+@@ -765,6 +775,8 @@ static int hid_scan_main(struct hid_parser *parser, struct hid_item *item)
+ 	__u32 data;
+ 	int i;
+ 
++	hid_concatenate_usage_page(parser);
++
+ 	data = item_udata(item);
+ 
+ 	switch (item->tag) {
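
[Editor's note] The hid-core change defers the Usage Page concatenation from each Local item to the moment a Main item closes the scope, which is what HID 1.11 section 6.2.2.8 actually specifies; that is why the item size is now recorded per usage. A compact model of the deferred pass:

#include <linux/types.h>

/* usage[i] holds the bare usage, size[i] the byte size of the item it
 * came from; only 8/16-bit usages get the page OR-ed into the top half.
 */
static void demo_concat_usage_pages(u32 *usage, const u8 *size,
				    unsigned int count, u32 usage_page)
{
	unsigned int i;

	for (i = 0; i < count; i++)
		if (size[i] <= 2)
			usage[i] += usage_page << 16;
}
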
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index 199cc256e9d9..e74fa990ba13 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -836,13 +836,16 @@ static int hidpp_root_get_feature(struct hidpp_device *hidpp, u16 feature,
+ 
+ static int hidpp_root_get_protocol_version(struct hidpp_device *hidpp)
+ {
++	const u8 ping_byte = 0x5a;
++	u8 ping_data[3] = { 0, 0, ping_byte };
+ 	struct hidpp_report response;
+ 	int ret;
+ 
+-	ret = hidpp_send_fap_command_sync(hidpp,
++	ret = hidpp_send_rap_command_sync(hidpp,
++			REPORT_ID_HIDPP_SHORT,
+ 			HIDPP_PAGE_ROOT_IDX,
+ 			CMD_ROOT_GET_PROTOCOL_VERSION,
+-			NULL, 0, &response);
++			ping_data, sizeof(ping_data), &response);
+ 
+ 	if (ret == HIDPP_ERROR_INVALID_SUBID) {
+ 		hidpp->protocol_major = 1;
+@@ -862,8 +865,14 @@ static int hidpp_root_get_protocol_version(struct hidpp_device *hidpp)
+ 	if (ret)
+ 		return ret;
+ 
+-	hidpp->protocol_major = response.fap.params[0];
+-	hidpp->protocol_minor = response.fap.params[1];
++	if (response.rap.params[2] != ping_byte) {
++		hid_err(hidpp->hid_dev, "%s: ping mismatch 0x%02x != 0x%02x\n",
++			__func__, response.rap.params[2], ping_byte);
++		return -EPROTO;
++	}
++
++	hidpp->protocol_major = response.rap.params[0];
++	hidpp->protocol_minor = response.rap.params[1];
+ 
+ 	return ret;
+ }
+@@ -1012,7 +1021,11 @@ static int hidpp_map_battery_level(int capacity)
+ {
+ 	if (capacity < 11)
+ 		return POWER_SUPPLY_CAPACITY_LEVEL_CRITICAL;
+-	else if (capacity < 31)
++	/*
++	 * The spec says this should be < 31 but some devices report 30
++	 * with brand new batteries and Windows reports 30 as "Good".
++	 */
++	else if (capacity < 30)
+ 		return POWER_SUPPLY_CAPACITY_LEVEL_LOW;
+ 	else if (capacity < 81)
+ 		return POWER_SUPPLY_CAPACITY_LEVEL_NORMAL;
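
[Editor's note] The protocol probe above now uses a short RAP ping carrying a marker byte (0x5a) that a HID++ 2.0 device must echo back in params[2]; anything else means the reply belongs to some other exchange. The check reduces to this sketch:

#include <linux/errno.h>
#include <linux/types.h>

static int demo_check_ping(const u8 *params, u8 marker)
{
	if (params[2] != marker)
		return -EPROTO;	/* stale or foreign reply */

	/* params[0]/params[1] now safely hold major/minor */
	return 0;
}
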
+diff --git a/drivers/hwmon/f71805f.c b/drivers/hwmon/f71805f.c
+index 73c681162653..623736d2a7c1 100644
+--- a/drivers/hwmon/f71805f.c
++++ b/drivers/hwmon/f71805f.c
+@@ -96,17 +96,23 @@ superio_select(int base, int ld)
+ 	outb(ld, base + 1);
+ }
+ 
+-static inline void
++static inline int
+ superio_enter(int base)
+ {
++	if (!request_muxed_region(base, 2, DRVNAME))
++		return -EBUSY;
++
+ 	outb(0x87, base);
+ 	outb(0x87, base);
++
++	return 0;
+ }
+ 
+ static inline void
+ superio_exit(int base)
+ {
+ 	outb(0xaa, base);
++	release_region(base, 2);
+ }
+ 
+ /*
+@@ -1561,7 +1567,7 @@ exit:
+ static int __init f71805f_find(int sioaddr, unsigned short *address,
+ 			       struct f71805f_sio_data *sio_data)
+ {
+-	int err = -ENODEV;
++	int err;
+ 	u16 devid;
+ 
+ 	static const char * const names[] = {
+@@ -1569,8 +1575,11 @@ static int __init f71805f_find(int sioaddr, unsigned short *address,
+ 		"F71872F/FG or F71806F/FG",
+ 	};
+ 
+-	superio_enter(sioaddr);
++	err = superio_enter(sioaddr);
++	if (err)
++		return err;
+ 
++	err = -ENODEV;
+ 	devid = superio_inw(sioaddr, SIO_REG_MANID);
+ 	if (devid != SIO_FINTEK_ID)
+ 		goto exit;
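
[Editor's note] The five hwmon hunks in this series all apply one fix: the Super-I/O configuration port pair is shared by several drivers, so it has to be claimed with request_muxed_region() - which may sleep until the region is free - and superio_enter() becomes fallible. The common shape, using the f71805f magic bytes:

#include <linux/ioport.h>
#include <linux/io.h>
#include <linux/errno.h>

static int demo_superio_enter(int base, const char *drvname)
{
	if (!request_muxed_region(base, 2, drvname))
		return -EBUSY;

	outb(0x87, base);	/* enter key, written twice */
	outb(0x87, base);
	return 0;
}

static void demo_superio_exit(int base)
{
	outb(0xaa, base);	/* exit key */
	release_region(base, 2);
}
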
+diff --git a/drivers/hwmon/pc87427.c b/drivers/hwmon/pc87427.c
+index d1a3f2040c00..58eee8fa3e6d 100644
+--- a/drivers/hwmon/pc87427.c
++++ b/drivers/hwmon/pc87427.c
+@@ -106,6 +106,13 @@ static const char *logdev_str[2] = { DRVNAME " FMC", DRVNAME " HMC" };
+ #define LD_IN		1
+ #define LD_TEMP		1
+ 
++static inline int superio_enter(int sioaddr)
++{
++	if (!request_muxed_region(sioaddr, 2, DRVNAME))
++		return -EBUSY;
++	return 0;
++}
++
+ static inline void superio_outb(int sioaddr, int reg, int val)
+ {
+ 	outb(reg, sioaddr);
+@@ -122,6 +129,7 @@ static inline void superio_exit(int sioaddr)
+ {
+ 	outb(0x02, sioaddr);
+ 	outb(0x02, sioaddr + 1);
++	release_region(sioaddr, 2);
+ }
+ 
+ /*
+@@ -1195,7 +1203,11 @@ static int __init pc87427_find(int sioaddr, struct pc87427_sio_data *sio_data)
+ {
+ 	u16 val;
+ 	u8 cfg, cfg_b;
+-	int i, err = 0;
++	int i, err;
++
++	err = superio_enter(sioaddr);
++	if (err)
++		return err;
+ 
+ 	/* Identify device */
+ 	val = force_id ? force_id : superio_inb(sioaddr, SIOREG_DEVID);
+diff --git a/drivers/hwmon/smsc47b397.c b/drivers/hwmon/smsc47b397.c
+index c0775084dde0..60e193f2e970 100644
+--- a/drivers/hwmon/smsc47b397.c
++++ b/drivers/hwmon/smsc47b397.c
+@@ -72,14 +72,19 @@ static inline void superio_select(int ld)
+ 	superio_outb(0x07, ld);
+ }
+ 
+-static inline void superio_enter(void)
++static inline int superio_enter(void)
+ {
++	if (!request_muxed_region(REG, 2, DRVNAME))
++		return -EBUSY;
++
+ 	outb(0x55, REG);
++	return 0;
+ }
+ 
+ static inline void superio_exit(void)
+ {
+ 	outb(0xAA, REG);
++	release_region(REG, 2);
+ }
+ 
+ #define SUPERIO_REG_DEVID	0x20
+@@ -300,8 +305,12 @@ static int __init smsc47b397_find(void)
+ 	u8 id, rev;
+ 	char *name;
+ 	unsigned short addr;
++	int err;
++
++	err = superio_enter();
++	if (err)
++		return err;
+ 
+-	superio_enter();
+ 	id = force_id ? force_id : superio_inb(SUPERIO_REG_DEVID);
+ 
+ 	switch (id) {
+diff --git a/drivers/hwmon/smsc47m1.c b/drivers/hwmon/smsc47m1.c
+index c7b6a425e2c0..5eeac9853d0a 100644
+--- a/drivers/hwmon/smsc47m1.c
++++ b/drivers/hwmon/smsc47m1.c
+@@ -73,16 +73,21 @@ superio_inb(int reg)
+ /* logical device for fans is 0x0A */
+ #define superio_select() superio_outb(0x07, 0x0A)
+ 
+-static inline void
++static inline int
+ superio_enter(void)
+ {
++	if (!request_muxed_region(REG, 2, DRVNAME))
++		return -EBUSY;
++
+ 	outb(0x55, REG);
++	return 0;
+ }
+ 
+ static inline void
+ superio_exit(void)
+ {
+ 	outb(0xAA, REG);
++	release_region(REG, 2);
+ }
+ 
+ #define SUPERIO_REG_ACT		0x30
+@@ -531,8 +536,12 @@ static int __init smsc47m1_find(struct smsc47m1_sio_data *sio_data)
+ {
+ 	u8 val;
+ 	unsigned short addr;
++	int err;
++
++	err = superio_enter();
++	if (err)
++		return err;
+ 
+-	superio_enter();
+ 	val = force_id ? force_id : superio_inb(SUPERIO_REG_DEVID);
+ 
+ 	/*
+@@ -608,13 +617,14 @@ static int __init smsc47m1_find(struct smsc47m1_sio_data *sio_data)
+ static void smsc47m1_restore(const struct smsc47m1_sio_data *sio_data)
+ {
+ 	if ((sio_data->activate & 0x01) == 0) {
+-		superio_enter();
+-		superio_select();
+-
+-		pr_info("Disabling device\n");
+-		superio_outb(SUPERIO_REG_ACT, sio_data->activate);
+-
+-		superio_exit();
++		if (!superio_enter()) {
++			superio_select();
++			pr_info("Disabling device\n");
++			superio_outb(SUPERIO_REG_ACT, sio_data->activate);
++			superio_exit();
++		} else {
++			pr_warn("Failed to disable device\n");
++		}
+ 	}
+ }
+ 
+diff --git a/drivers/hwmon/vt1211.c b/drivers/hwmon/vt1211.c
+index 3a6bfa51cb94..95d5e8ec8b7f 100644
+--- a/drivers/hwmon/vt1211.c
++++ b/drivers/hwmon/vt1211.c
+@@ -226,15 +226,21 @@ static inline void superio_select(int sio_cip, int ldn)
+ 	outb(ldn, sio_cip + 1);
+ }
+ 
+-static inline void superio_enter(int sio_cip)
++static inline int superio_enter(int sio_cip)
+ {
++	if (!request_muxed_region(sio_cip, 2, DRVNAME))
++		return -EBUSY;
++
+ 	outb(0x87, sio_cip);
+ 	outb(0x87, sio_cip);
++
++	return 0;
+ }
+ 
+ static inline void superio_exit(int sio_cip)
+ {
+ 	outb(0xaa, sio_cip);
++	release_region(sio_cip, 2);
+ }
+ 
+ /* ---------------------------------------------------------------------
+@@ -1282,11 +1288,14 @@ EXIT:
+ 
+ static int __init vt1211_find(int sio_cip, unsigned short *address)
+ {
+-	int err = -ENODEV;
++	int err;
+ 	int devid;
+ 
+-	superio_enter(sio_cip);
++	err = superio_enter(sio_cip);
++	if (err)
++		return err;
+ 
++	err = -ENODEV;
+ 	devid = force_id ? force_id : superio_inb(sio_cip, SIO_VT1211_DEVID);
+ 	if (devid != SIO_VT1211_ID)
+ 		goto EXIT;
+diff --git a/drivers/iio/adc/Kconfig b/drivers/iio/adc/Kconfig
+index 76db6e5cc296..9ca21a8dfcd7 100644
+--- a/drivers/iio/adc/Kconfig
++++ b/drivers/iio/adc/Kconfig
+@@ -809,6 +809,7 @@ config STM32_DFSDM_ADC
+ 	depends on (ARCH_STM32 && OF) || COMPILE_TEST
+ 	select STM32_DFSDM_CORE
+ 	select REGMAP_MMIO
++	select IIO_BUFFER
+ 	select IIO_BUFFER_HW_CONSUMER
+ 	help
+ 	  Select this option to support ADCSigma delta modulator for
+diff --git a/drivers/iio/adc/ad_sigma_delta.c b/drivers/iio/adc/ad_sigma_delta.c
+index 54d9978b2740..a4310600a853 100644
+--- a/drivers/iio/adc/ad_sigma_delta.c
++++ b/drivers/iio/adc/ad_sigma_delta.c
+@@ -62,7 +62,7 @@ int ad_sd_write_reg(struct ad_sigma_delta *sigma_delta, unsigned int reg,
+ 	struct spi_transfer t = {
+ 		.tx_buf		= data,
+ 		.len		= size + 1,
+-		.cs_change	= sigma_delta->bus_locked,
++		.cs_change	= sigma_delta->keep_cs_asserted,
+ 	};
+ 	struct spi_message m;
+ 	int ret;
+@@ -218,6 +218,7 @@ static int ad_sd_calibrate(struct ad_sigma_delta *sigma_delta,
+ 
+ 	spi_bus_lock(sigma_delta->spi->master);
+ 	sigma_delta->bus_locked = true;
++	sigma_delta->keep_cs_asserted = true;
+ 	reinit_completion(&sigma_delta->completion);
+ 
+ 	ret = ad_sigma_delta_set_mode(sigma_delta, mode);
+@@ -235,9 +236,10 @@ static int ad_sd_calibrate(struct ad_sigma_delta *sigma_delta,
+ 		ret = 0;
+ 	}
+ out:
++	sigma_delta->keep_cs_asserted = false;
++	ad_sigma_delta_set_mode(sigma_delta, AD_SD_MODE_IDLE);
+ 	sigma_delta->bus_locked = false;
+ 	spi_bus_unlock(sigma_delta->spi->master);
+-	ad_sigma_delta_set_mode(sigma_delta, AD_SD_MODE_IDLE);
+ 
+ 	return ret;
+ }
+@@ -290,6 +292,7 @@ int ad_sigma_delta_single_conversion(struct iio_dev *indio_dev,
+ 
+ 	spi_bus_lock(sigma_delta->spi->master);
+ 	sigma_delta->bus_locked = true;
++	sigma_delta->keep_cs_asserted = true;
+ 	reinit_completion(&sigma_delta->completion);
+ 
+ 	ad_sigma_delta_set_mode(sigma_delta, AD_SD_MODE_SINGLE);
+@@ -299,9 +302,6 @@ int ad_sigma_delta_single_conversion(struct iio_dev *indio_dev,
+ 	ret = wait_for_completion_interruptible_timeout(
+ 			&sigma_delta->completion, HZ);
+ 
+-	sigma_delta->bus_locked = false;
+-	spi_bus_unlock(sigma_delta->spi->master);
+-
+ 	if (ret == 0)
+ 		ret = -EIO;
+ 	if (ret < 0)
+@@ -322,7 +322,10 @@ out:
+ 		sigma_delta->irq_dis = true;
+ 	}
+ 
++	sigma_delta->keep_cs_asserted = false;
+ 	ad_sigma_delta_set_mode(sigma_delta, AD_SD_MODE_IDLE);
++	sigma_delta->bus_locked = false;
++	spi_bus_unlock(sigma_delta->spi->master);
+ 	mutex_unlock(&indio_dev->mlock);
+ 
+ 	if (ret)
+@@ -359,6 +362,8 @@ static int ad_sd_buffer_postenable(struct iio_dev *indio_dev)
+ 
+ 	spi_bus_lock(sigma_delta->spi->master);
+ 	sigma_delta->bus_locked = true;
++	sigma_delta->keep_cs_asserted = true;
++
+ 	ret = ad_sigma_delta_set_mode(sigma_delta, AD_SD_MODE_CONTINUOUS);
+ 	if (ret)
+ 		goto err_unlock;
+@@ -387,6 +392,7 @@ static int ad_sd_buffer_postdisable(struct iio_dev *indio_dev)
+ 		sigma_delta->irq_dis = true;
+ 	}
+ 
++	sigma_delta->keep_cs_asserted = false;
+ 	ad_sigma_delta_set_mode(sigma_delta, AD_SD_MODE_IDLE);
+ 
+ 	sigma_delta->bus_locked = false;
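
[Editor's note] The new keep_cs_asserted flag untangles two things the old code conflated: holding the SPI bus lock (no other client may interleave messages) and keeping chip select asserted between messages via .cs_change. A conversion needs CS held for its whole duration, and the idle-mode write must now happen before the bus lock is dropped. Teardown ordering, sketched with hypothetical names:

#include <linux/spi/spi.h>

struct demo_sd {
	struct spi_device *spi;
	bool keep_cs_asserted;	/* copied into each xfer's .cs_change */
};

/* hypothetical; stands in for ad_sigma_delta_set_mode(..., IDLE) */
static void demo_write_idle(struct demo_sd *sd);

static void demo_end_conversion(struct demo_sd *sd)
{
	sd->keep_cs_asserted = false;	/* last message drops CS */
	demo_write_idle(sd);		/* still under the bus lock */
	spi_bus_unlock(sd->spi->master);
}
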
+diff --git a/drivers/iio/adc/ti-ads7950.c b/drivers/iio/adc/ti-ads7950.c
+index 0ad63592cc3c..1e47bef72bb7 100644
+--- a/drivers/iio/adc/ti-ads7950.c
++++ b/drivers/iio/adc/ti-ads7950.c
+@@ -56,6 +56,9 @@ struct ti_ads7950_state {
+ 	struct spi_message	ring_msg;
+ 	struct spi_message	scan_single_msg;
+ 
++	/* Lock to protect the spi xfer buffers */
++	struct mutex		slock;
++
+ 	struct regulator	*reg;
+ 	unsigned int		vref_mv;
+ 
+@@ -268,6 +271,7 @@ static irqreturn_t ti_ads7950_trigger_handler(int irq, void *p)
+ 	struct ti_ads7950_state *st = iio_priv(indio_dev);
+ 	int ret;
+ 
++	mutex_lock(&st->slock);
+ 	ret = spi_sync(st->spi, &st->ring_msg);
+ 	if (ret < 0)
+ 		goto out;
+@@ -276,6 +280,7 @@ static irqreturn_t ti_ads7950_trigger_handler(int irq, void *p)
+ 					   iio_get_time_ns(indio_dev));
+ 
+ out:
++	mutex_unlock(&st->slock);
+ 	iio_trigger_notify_done(indio_dev->trig);
+ 
+ 	return IRQ_HANDLED;
+@@ -286,7 +291,7 @@ static int ti_ads7950_scan_direct(struct iio_dev *indio_dev, unsigned int ch)
+ 	struct ti_ads7950_state *st = iio_priv(indio_dev);
+ 	int ret, cmd;
+ 
+-	mutex_lock(&indio_dev->mlock);
++	mutex_lock(&st->slock);
+ 
+ 	cmd = TI_ADS7950_CR_WRITE | TI_ADS7950_CR_CHAN(ch) | st->settings;
+ 	st->single_tx = cmd;
+@@ -298,7 +303,7 @@ static int ti_ads7950_scan_direct(struct iio_dev *indio_dev, unsigned int ch)
+ 	ret = st->single_rx;
+ 
+ out:
+-	mutex_unlock(&indio_dev->mlock);
++	mutex_unlock(&st->slock);
+ 
+ 	return ret;
+ }
+@@ -432,16 +437,19 @@ static int ti_ads7950_probe(struct spi_device *spi)
+ 	if (ACPI_COMPANION(&spi->dev))
+ 		st->vref_mv = TI_ADS7950_VA_MV_ACPI_DEFAULT;
+ 
++	mutex_init(&st->slock);
++
+ 	st->reg = devm_regulator_get(&spi->dev, "vref");
+ 	if (IS_ERR(st->reg)) {
+ 		dev_err(&spi->dev, "Failed get get regulator \"vref\"\n");
+-		return PTR_ERR(st->reg);
++		ret = PTR_ERR(st->reg);
++		goto error_destroy_mutex;
+ 	}
+ 
+ 	ret = regulator_enable(st->reg);
+ 	if (ret) {
+ 		dev_err(&spi->dev, "Failed to enable regulator \"vref\"\n");
+-		return ret;
++		goto error_destroy_mutex;
+ 	}
+ 
+ 	ret = iio_triggered_buffer_setup(indio_dev, NULL,
+@@ -463,6 +471,8 @@ error_cleanup_ring:
+ 	iio_triggered_buffer_cleanup(indio_dev);
+ error_disable_reg:
+ 	regulator_disable(st->reg);
++error_destroy_mutex:
++	mutex_destroy(&st->slock);
+ 
+ 	return ret;
+ }
+@@ -475,6 +485,7 @@ static int ti_ads7950_remove(struct spi_device *spi)
+ 	iio_device_unregister(indio_dev);
+ 	iio_triggered_buffer_cleanup(indio_dev);
+ 	regulator_disable(st->reg);
++	mutex_destroy(&st->slock);
+ 
+ 	return 0;
+ }
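
[Editor's note] The ads7950 fix swaps indio_dev->mlock (an IIO-core lock drivers are not meant to take for their own data) for a driver-owned mutex guarding the SPI transfer buffers, and - the part that is easy to miss - destroys it on every probe error path and in remove. Minimal pattern:

#include <linux/mutex.h>
#include <linux/types.h>

struct demo_state {
	struct mutex slock;	/* protects tx/rx below across spi_sync */
	__be16 tx;
	__be16 rx;
};

static int demo_xfer(struct demo_state *st);	/* hypothetical spi_sync wrapper */

static int demo_read(struct demo_state *st)
{
	int ret;

	mutex_lock(&st->slock);
	ret = demo_xfer(st);
	mutex_unlock(&st->slock);
	return ret;
}
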
+diff --git a/drivers/iio/common/ssp_sensors/ssp_iio.c b/drivers/iio/common/ssp_sensors/ssp_iio.c
+index 645f2e3975db..e38f704d88b7 100644
+--- a/drivers/iio/common/ssp_sensors/ssp_iio.c
++++ b/drivers/iio/common/ssp_sensors/ssp_iio.c
+@@ -81,7 +81,7 @@ int ssp_common_process_data(struct iio_dev *indio_dev, void *buf,
+ 			    unsigned int len, int64_t timestamp)
+ {
+ 	__le32 time;
+-	int64_t calculated_time;
++	int64_t calculated_time = 0;
+ 	struct ssp_sensor_data *spd = iio_priv(indio_dev);
+ 
+ 	if (indio_dev->scan_bytes == 0)
+diff --git a/drivers/iio/magnetometer/hmc5843_i2c.c b/drivers/iio/magnetometer/hmc5843_i2c.c
+index 3de7f4426ac4..86abba5827a2 100644
+--- a/drivers/iio/magnetometer/hmc5843_i2c.c
++++ b/drivers/iio/magnetometer/hmc5843_i2c.c
+@@ -58,8 +58,13 @@ static const struct regmap_config hmc5843_i2c_regmap_config = {
+ static int hmc5843_i2c_probe(struct i2c_client *cli,
+ 			     const struct i2c_device_id *id)
+ {
++	struct regmap *regmap = devm_regmap_init_i2c(cli,
++			&hmc5843_i2c_regmap_config);
++	if (IS_ERR(regmap))
++		return PTR_ERR(regmap);
++
+ 	return hmc5843_common_probe(&cli->dev,
+-			devm_regmap_init_i2c(cli, &hmc5843_i2c_regmap_config),
++			regmap,
+ 			id->driver_data, id->name);
+ }
+ 
+diff --git a/drivers/iio/magnetometer/hmc5843_spi.c b/drivers/iio/magnetometer/hmc5843_spi.c
+index 535f03a70d63..79b2b707f90e 100644
+--- a/drivers/iio/magnetometer/hmc5843_spi.c
++++ b/drivers/iio/magnetometer/hmc5843_spi.c
+@@ -58,6 +58,7 @@ static const struct regmap_config hmc5843_spi_regmap_config = {
+ static int hmc5843_spi_probe(struct spi_device *spi)
+ {
+ 	int ret;
++	struct regmap *regmap;
+ 	const struct spi_device_id *id = spi_get_device_id(spi);
+ 
+ 	spi->mode = SPI_MODE_3;
+@@ -67,8 +68,12 @@ static int hmc5843_spi_probe(struct spi_device *spi)
+ 	if (ret)
+ 		return ret;
+ 
++	regmap = devm_regmap_init_spi(spi, &hmc5843_spi_regmap_config);
++	if (IS_ERR(regmap))
++		return PTR_ERR(regmap);
++
+ 	return hmc5843_common_probe(&spi->dev,
+-			devm_regmap_init_spi(spi, &hmc5843_spi_regmap_config),
++			regmap,
+ 			id->driver_data, id->name);
+ }
+ 
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 68c997be2429..c54da16df0be 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -1173,18 +1173,31 @@ static inline bool cma_any_addr(const struct sockaddr *addr)
+ 	return cma_zero_addr(addr) || cma_loopback_addr(addr);
+ }
+ 
+-static int cma_addr_cmp(struct sockaddr *src, struct sockaddr *dst)
++static int cma_addr_cmp(const struct sockaddr *src, const struct sockaddr *dst)
+ {
+ 	if (src->sa_family != dst->sa_family)
+ 		return -1;
+ 
+ 	switch (src->sa_family) {
+ 	case AF_INET:
+-		return ((struct sockaddr_in *) src)->sin_addr.s_addr !=
+-		       ((struct sockaddr_in *) dst)->sin_addr.s_addr;
+-	case AF_INET6:
+-		return ipv6_addr_cmp(&((struct sockaddr_in6 *) src)->sin6_addr,
+-				     &((struct sockaddr_in6 *) dst)->sin6_addr);
++		return ((struct sockaddr_in *)src)->sin_addr.s_addr !=
++		       ((struct sockaddr_in *)dst)->sin_addr.s_addr;
++	case AF_INET6: {
++		struct sockaddr_in6 *src_addr6 = (struct sockaddr_in6 *)src;
++		struct sockaddr_in6 *dst_addr6 = (struct sockaddr_in6 *)dst;
++		bool link_local;
++
++		if (ipv6_addr_cmp(&src_addr6->sin6_addr,
++					  &dst_addr6->sin6_addr))
++			return 1;
++		link_local = ipv6_addr_type(&dst_addr6->sin6_addr) &
++			     IPV6_ADDR_LINKLOCAL;
++		/* Link local must match their scope_ids */
++		return link_local ? (src_addr6->sin6_scope_id !=
++				     dst_addr6->sin6_scope_id) :
++				    0;
++	}
++
+ 	default:
+ 		return ib_addr_cmp(&((struct sockaddr_ib *) src)->sib_addr,
+ 				   &((struct sockaddr_ib *) dst)->sib_addr);
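
[Editor's note] The cma hunk encodes an IPv6 subtlety: fe80::/10 link-local addresses are only unique per interface, so two sockaddrs with identical address bytes can still name different peers unless their sin6_scope_id values match. Standalone form of the comparison:

#include <net/ipv6.h>

/* 0 = same peer, nonzero = different, mirroring cma_addr_cmp() */
static int demo_cmp_in6(const struct sockaddr_in6 *src,
			const struct sockaddr_in6 *dst)
{
	if (ipv6_addr_cmp(&src->sin6_addr, &dst->sin6_addr))
		return 1;

	if (ipv6_addr_type(&dst->sin6_addr) & IPV6_ADDR_LINKLOCAL)
		return src->sin6_scope_id != dst->sin6_scope_id;

	return 0;
}
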
+diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
+index 4d232bdf9e97..689ba6bc2ca9 100644
+--- a/drivers/infiniband/hw/cxgb4/cm.c
++++ b/drivers/infiniband/hw/cxgb4/cm.c
+@@ -457,6 +457,8 @@ static struct sk_buff *get_skb(struct sk_buff *skb, int len, gfp_t gfp)
+ 		skb_reset_transport_header(skb);
+ 	} else {
+ 		skb = alloc_skb(len, gfp);
++		if (!skb)
++			return NULL;
+ 	}
+ 	t4_set_arp_err_handler(skb, NULL, NULL);
+ 	return skb;
+diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
+index faaaac8fbc55..3af5eb10a5ff 100644
+--- a/drivers/infiniband/hw/hfi1/init.c
++++ b/drivers/infiniband/hw/hfi1/init.c
+@@ -805,7 +805,8 @@ static int create_workqueues(struct hfi1_devdata *dd)
+ 			ppd->hfi1_wq =
+ 				alloc_workqueue(
+ 				    "hfi%d_%d",
+-				    WQ_SYSFS | WQ_HIGHPRI | WQ_CPU_INTENSIVE,
++				    WQ_SYSFS | WQ_HIGHPRI | WQ_CPU_INTENSIVE |
++				    WQ_MEM_RECLAIM,
+ 				    HFI1_MAX_ACTIVE_WORKQUEUE_ENTRIES,
+ 				    dd->unit, pidx);
+ 			if (!ppd->hfi1_wq)
+diff --git a/drivers/infiniband/hw/hns/hns_roce_ah.c b/drivers/infiniband/hw/hns/hns_roce_ah.c
+index b3c8c45ec1e3..64e0c69b69c5 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_ah.c
++++ b/drivers/infiniband/hw/hns/hns_roce_ah.c
+@@ -70,7 +70,7 @@ struct ib_ah *hns_roce_create_ah(struct ib_pd *ibpd,
+ 			     HNS_ROCE_VLAN_SL_BIT_MASK) <<
+ 			     HNS_ROCE_VLAN_SL_SHIFT;
+ 
+-	ah->av.port_pd = cpu_to_be32(to_hr_pd(ibpd)->pdn |
++	ah->av.port_pd = cpu_to_le32(to_hr_pd(ibpd)->pdn |
+ 				     (rdma_ah_get_port_num(ah_attr) <<
+ 				     HNS_ROCE_PORT_NUM_SHIFT));
+ 	ah->av.gid_index = grh->sgid_index;
+diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
+index 0aa10ebda5d9..91669e35c6ca 100644
+--- a/drivers/infiniband/hw/mlx5/odp.c
++++ b/drivers/infiniband/hw/mlx5/odp.c
+@@ -711,6 +711,15 @@ struct pf_frame {
+ 	int depth;
+ };
+ 
++static bool mkey_is_eq(struct mlx5_core_mkey *mmkey, u32 key)
++{
++	if (!mmkey)
++		return false;
++	if (mmkey->type == MLX5_MKEY_MW)
++		return mlx5_base_mkey(mmkey->key) == mlx5_base_mkey(key);
++	return mmkey->key == key;
++}
++
+ static int get_indirect_num_descs(struct mlx5_core_mkey *mmkey)
+ {
+ 	struct mlx5_ib_mw *mw;
+@@ -760,7 +769,7 @@ static int pagefault_single_data_segment(struct mlx5_ib_dev *dev,
+ 
+ next_mr:
+ 	mmkey = __mlx5_mr_lookup(dev->mdev, mlx5_base_mkey(key));
+-	if (!mmkey || mmkey->key != key) {
++	if (!mkey_is_eq(mmkey, key)) {
+ 		mlx5_ib_dbg(dev, "failed to find mkey %x\n", key);
+ 		ret = -EFAULT;
+ 		goto srcu_unlock;
+diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
+index 42f0f25e396c..ec89fbd06c53 100644
+--- a/drivers/infiniband/sw/rxe/rxe_mr.c
++++ b/drivers/infiniband/sw/rxe/rxe_mr.c
+@@ -199,6 +199,12 @@ int rxe_mem_init_user(struct rxe_pd *pd, u64 start,
+ 		buf = map[0]->buf;
+ 
+ 		for_each_sg_page(umem->sg_head.sgl, &sg_iter, umem->nmap, 0) {
++			if (num_buf >= RXE_BUF_PER_MAP) {
++				map++;
++				buf = map[0]->buf;
++				num_buf = 0;
++			}
++
+ 			vaddr = page_address(sg_page_iter_page(&sg_iter));
+ 			if (!vaddr) {
+ 				pr_warn("null vaddr\n");
+@@ -211,11 +217,6 @@ int rxe_mem_init_user(struct rxe_pd *pd, u64 start,
+ 			num_buf++;
+ 			buf++;
+ 
+-			if (num_buf >= RXE_BUF_PER_MAP) {
+-				map++;
+-				buf = map[0]->buf;
+-				num_buf = 0;
+-			}
+ 		}
+ 	}
+ 
+diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
+index 5002838ea476..f8986effcb50 100644
+--- a/drivers/md/bcache/alloc.c
++++ b/drivers/md/bcache/alloc.c
+@@ -327,10 +327,11 @@ static int bch_allocator_thread(void *arg)
+ 		 * possibly issue discards to them, then we add the bucket to
+ 		 * the free list:
+ 		 */
+-		while (!fifo_empty(&ca->free_inc)) {
++		while (1) {
+ 			long bucket;
+ 
+-			fifo_pop(&ca->free_inc, bucket);
++			if (!fifo_pop(&ca->free_inc, bucket))
++				break;
+ 
+ 			if (ca->discard) {
+ 				mutex_unlock(&ca->set->bucket_lock);
+diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
+index d3725c17ce3a..6c94fa007796 100644
+--- a/drivers/md/bcache/journal.c
++++ b/drivers/md/bcache/journal.c
+@@ -317,6 +317,18 @@ void bch_journal_mark(struct cache_set *c, struct list_head *list)
+ 	}
+ }
+ 
++static bool is_discard_enabled(struct cache_set *s)
++{
++	struct cache *ca;
++	unsigned int i;
++
++	for_each_cache(ca, s, i)
++		if (ca->discard)
++			return true;
++
++	return false;
++}
++
+ int bch_journal_replay(struct cache_set *s, struct list_head *list)
+ {
+ 	int ret = 0, keys = 0, entries = 0;
+@@ -330,9 +342,17 @@ int bch_journal_replay(struct cache_set *s, struct list_head *list)
+ 	list_for_each_entry(i, list, list) {
+ 		BUG_ON(i->pin && atomic_read(i->pin) != 1);
+ 
+-		cache_set_err_on(n != i->j.seq, s,
+-"bcache: journal entries %llu-%llu missing! (replaying %llu-%llu)",
+-				 n, i->j.seq - 1, start, end);
++		if (n != i->j.seq) {
++			if (n == start && is_discard_enabled(s))
++				pr_info("bcache: journal entries %llu-%llu may be discarded! (replaying %llu-%llu)",
++					n, i->j.seq - 1, start, end);
++			else {
++				pr_err("bcache: journal entries %llu-%llu missing! (replaying %llu-%llu)",
++					n, i->j.seq - 1, start, end);
++				ret = -EIO;
++				goto err;
++			}
++		}
+ 
+ 		for (k = i->j.start;
+ 		     k < bset_bkey_last(&i->j);
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index 171d5e0f698b..e489d2459569 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -1775,13 +1775,15 @@ err:
+ 	return NULL;
+ }
+ 
+-static void run_cache_set(struct cache_set *c)
++static int run_cache_set(struct cache_set *c)
+ {
+ 	const char *err = "cannot allocate memory";
+ 	struct cached_dev *dc, *t;
+ 	struct cache *ca;
+ 	struct closure cl;
+ 	unsigned int i;
++	LIST_HEAD(journal);
++	struct journal_replay *l;
+ 
+ 	closure_init_stack(&cl);
+ 
+@@ -1869,7 +1871,9 @@ static void run_cache_set(struct cache_set *c)
+ 		if (j->version < BCACHE_JSET_VERSION_UUID)
+ 			__uuid_write(c);
+ 
+-		bch_journal_replay(c, &journal);
++		err = "bcache: replay journal failed";
++		if (bch_journal_replay(c, &journal))
++			goto err;
+ 	} else {
+ 		pr_notice("invalidating existing data");
+ 
+@@ -1937,11 +1941,19 @@ static void run_cache_set(struct cache_set *c)
+ 	flash_devs_run(c);
+ 
+ 	set_bit(CACHE_SET_RUNNING, &c->flags);
+-	return;
++	return 0;
+ err:
++	while (!list_empty(&journal)) {
++		l = list_first_entry(&journal, struct journal_replay, list);
++		list_del(&l->list);
++		kfree(l);
++	}
++
+ 	closure_sync(&cl);
+ 	/* XXX: test this, it's broken */
+ 	bch_cache_set_error(c, "%s", err);
++
++	return -EIO;
+ }
+ 
+ static bool can_attach_cache(struct cache *ca, struct cache_set *c)
+@@ -2005,8 +2017,11 @@ found:
+ 	ca->set->cache[ca->sb.nr_this_dev] = ca;
+ 	c->cache_by_alloc[c->caches_loaded++] = ca;
+ 
+-	if (c->caches_loaded == c->sb.nr_in_set)
+-		run_cache_set(c);
++	if (c->caches_loaded == c->sb.nr_in_set) {
++		err = "failed to run cache set";
++		if (run_cache_set(c) < 0)
++			goto err;
++	}
+ 
+ 	return NULL;
+ err:
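
[Editor's note] Turning run_cache_set() into a fallible function creates a new obligation: on failure, journal_replay entries still parked on the local list would otherwise leak, since the normal consumer never runs. The added drain loop, in generic form:

#include <linux/list.h>
#include <linux/slab.h>

struct demo_entry {
	struct list_head list;
	/* payload elided */
};

static void demo_drain(struct list_head *journal)
{
	struct demo_entry *l;

	while (!list_empty(journal)) {
		l = list_first_entry(journal, struct demo_entry, list);
		list_del(&l->list);
		kfree(l);
	}
}
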
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index cde3b49b2a91..350cf0451456 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -880,13 +880,17 @@ void dm_table_set_type(struct dm_table *t, enum dm_queue_mode type)
+ }
+ EXPORT_SYMBOL_GPL(dm_table_set_type);
+ 
++/* validate the dax capability of the target device span */
+ static int device_supports_dax(struct dm_target *ti, struct dm_dev *dev,
+-			       sector_t start, sector_t len, void *data)
++				       sector_t start, sector_t len, void *data)
+ {
+-	return bdev_dax_supported(dev->bdev, PAGE_SIZE);
++	int blocksize = *(int *) data;
++
++	return generic_fsdax_supported(dev->dax_dev, dev->bdev, blocksize,
++			start, len);
+ }
+ 
+-static bool dm_table_supports_dax(struct dm_table *t)
++bool dm_table_supports_dax(struct dm_table *t, int blocksize)
+ {
+ 	struct dm_target *ti;
+ 	unsigned i;
+@@ -899,7 +903,8 @@ static bool dm_table_supports_dax(struct dm_table *t)
+ 			return false;
+ 
+ 		if (!ti->type->iterate_devices ||
+-		    !ti->type->iterate_devices(ti, device_supports_dax, NULL))
++		    !ti->type->iterate_devices(ti, device_supports_dax,
++			    &blocksize))
+ 			return false;
+ 	}
+ 
+@@ -979,7 +984,7 @@ static int dm_table_determine_type(struct dm_table *t)
+ verify_bio_based:
+ 		/* We must use this table as bio-based */
+ 		t->type = DM_TYPE_BIO_BASED;
+-		if (dm_table_supports_dax(t) ||
++		if (dm_table_supports_dax(t, PAGE_SIZE) ||
+ 		    (list_empty(devices) && live_md_type == DM_TYPE_DAX_BIO_BASED)) {
+ 			t->type = DM_TYPE_DAX_BIO_BASED;
+ 		} else {
+@@ -1905,7 +1910,7 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
+ 	}
+ 	blk_queue_write_cache(q, wc, fua);
+ 
+-	if (dm_table_supports_dax(t))
++	if (dm_table_supports_dax(t, PAGE_SIZE))
+ 		blk_queue_flag_set(QUEUE_FLAG_DAX, q);
+ 	else
+ 		blk_queue_flag_clear(QUEUE_FLAG_DAX, q);
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 08e7d412af95..1cacf02633ec 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1105,6 +1105,25 @@ static long dm_dax_direct_access(struct dax_device *dax_dev, pgoff_t pgoff,
+ 	return ret;
+ }
+ 
++static bool dm_dax_supported(struct dax_device *dax_dev, struct block_device *bdev,
++		int blocksize, sector_t start, sector_t len)
++{
++	struct mapped_device *md = dax_get_private(dax_dev);
++	struct dm_table *map;
++	int srcu_idx;
++	bool ret;
++
++	map = dm_get_live_table(md, &srcu_idx);
++	if (!map)
++		return false;
++
++	ret = dm_table_supports_dax(map, blocksize);
++
++	dm_put_live_table(md, srcu_idx);
++
++	return ret;
++}
++
+ static size_t dm_dax_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff,
+ 				    void *addr, size_t bytes, struct iov_iter *i)
+ {
+@@ -3194,6 +3213,7 @@ static const struct block_device_operations dm_blk_dops = {
+ 
+ static const struct dax_operations dm_dax_ops = {
+ 	.direct_access = dm_dax_direct_access,
++	.dax_supported = dm_dax_supported,
+ 	.copy_from_iter = dm_dax_copy_from_iter,
+ 	.copy_to_iter = dm_dax_copy_to_iter,
+ };
+diff --git a/drivers/md/dm.h b/drivers/md/dm.h
+index 2d539b82ec08..17e3db54404c 100644
+--- a/drivers/md/dm.h
++++ b/drivers/md/dm.h
+@@ -72,6 +72,7 @@ bool dm_table_bio_based(struct dm_table *t);
+ bool dm_table_request_based(struct dm_table *t);
+ void dm_table_free_md_mempools(struct dm_table *t);
+ struct dm_md_mempools *dm_table_get_md_mempools(struct dm_table *t);
++bool dm_table_supports_dax(struct dm_table *t, int blocksize);
+ 
+ void dm_lock_md_type(struct mapped_device *md);
+ void dm_unlock_md_type(struct mapped_device *md);
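
[Editor's note] dm_table_supports_dax() now threads the block size through iterate_devices() to a per-span predicate, so the DAX decision is made against every underlying device rather than a bare bdev check. The quantifier it implements - every target can iterate, and every device passes - looks like this with invented types:

#include <linux/types.h>

struct demo_dev;
typedef bool (*demo_pred_t)(struct demo_dev *dev, void *data);

static bool demo_all_devices_ok(struct demo_dev **devs, unsigned int n,
				demo_pred_t pred, void *data)
{
	unsigned int i;

	for (i = 0; i < n; i++)
		if (!pred(devs[i], data))
			return false;	/* one failure vetoes the table */
	return true;
}
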
+diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c
+index 15b6b9c0a2e4..9c163f658aaf 100644
+--- a/drivers/media/common/videobuf2/videobuf2-core.c
++++ b/drivers/media/common/videobuf2/videobuf2-core.c
+@@ -672,6 +672,11 @@ int vb2_core_reqbufs(struct vb2_queue *q, enum vb2_memory memory,
+ 		return -EBUSY;
+ 	}
+ 
++	if (q->waiting_in_dqbuf && *count) {
++		dprintk(1, "another dup()ped fd is waiting for a buffer\n");
++		return -EBUSY;
++	}
++
+ 	if (*count == 0 || q->num_buffers != 0 ||
+ 	    (q->memory != VB2_MEMORY_UNKNOWN && q->memory != memory)) {
+ 		/*
+@@ -807,6 +812,10 @@ int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory,
+ 	}
+ 
+ 	if (!q->num_buffers) {
++		if (q->waiting_in_dqbuf && *count) {
++			dprintk(1, "another dup()ped fd is waiting for a buffer\n");
++			return -EBUSY;
++		}
+ 		memset(q->alloc_devs, 0, sizeof(q->alloc_devs));
+ 		q->memory = memory;
+ 		q->waiting_for_buffers = !q->is_output;
+@@ -1659,6 +1668,11 @@ static int __vb2_wait_for_done_vb(struct vb2_queue *q, int nonblocking)
+ 	for (;;) {
+ 		int ret;
+ 
++		if (q->waiting_in_dqbuf) {
++			dprintk(1, "another dup()ped fd is waiting for a buffer\n");
++			return -EBUSY;
++		}
++
+ 		if (!q->streaming) {
+ 			dprintk(1, "streaming off, will not wait for buffers\n");
+ 			return -EINVAL;
+@@ -1686,6 +1700,7 @@ static int __vb2_wait_for_done_vb(struct vb2_queue *q, int nonblocking)
+ 			return -EAGAIN;
+ 		}
+ 
++		q->waiting_in_dqbuf = 1;
+ 		/*
+ 		 * We are streaming and blocking, wait for another buffer to
+ 		 * become ready or for streamoff. Driver's lock is released to
+@@ -1706,6 +1721,7 @@ static int __vb2_wait_for_done_vb(struct vb2_queue *q, int nonblocking)
+ 		 * the locks or return an error if one occurred.
+ 		 */
+ 		call_void_qop(q, wait_finish, q);
++		q->waiting_in_dqbuf = 0;
+ 		if (ret) {
+ 			dprintk(1, "sleep was interrupted\n");
+ 			return ret;
+@@ -2585,6 +2601,12 @@ static size_t __vb2_perform_fileio(struct vb2_queue *q, char __user *data, size_
+ 	if (!data)
+ 		return -EINVAL;
+ 
++	if (q->waiting_in_dqbuf) {
++		dprintk(3, "another dup()ped fd is %s\n",
++			read ? "reading" : "writing");
++		return -EBUSY;
++	}
++
+ 	/*
+ 	 * Initialize emulator on first call.
+ 	 */
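
[Editor's note] All four vb2 hunks install one guard: waiting_in_dqbuf is raised before the queue lock is dropped for the blocking wait and cleared after it is re-taken, so a second dup()ped file handle sees -EBUSY under the lock instead of racing on the same buffer list. Generic reduction:

#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/wait.h>

struct demo_queue {
	struct mutex lock;		/* held by the caller */
	wait_queue_head_t done_wq;
	bool waiting;			/* mirrors waiting_in_dqbuf */
	bool ready;
};

static int demo_wait_for_buffer(struct demo_queue *q)
{
	int ret;

	if (q->waiting)
		return -EBUSY;		/* another fd already sleeps here */

	q->waiting = true;
	mutex_unlock(&q->lock);		/* as wait_prepare() does */
	ret = wait_event_interruptible(q->done_wq, q->ready);
	mutex_lock(&q->lock);		/* as wait_finish() does */
	q->waiting = false;

	return ret;
}
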
+diff --git a/drivers/media/dvb-frontends/m88ds3103.c b/drivers/media/dvb-frontends/m88ds3103.c
+index 123f2a33738b..403f42806455 100644
+--- a/drivers/media/dvb-frontends/m88ds3103.c
++++ b/drivers/media/dvb-frontends/m88ds3103.c
+@@ -309,6 +309,9 @@ static int m88ds3103_set_frontend(struct dvb_frontend *fe)
+ 	u16 u16tmp;
+ 	u32 tuner_frequency_khz, target_mclk;
+ 	s32 s32tmp;
++	static const struct reg_sequence reset_buf[] = {
++		{0x07, 0x80}, {0x07, 0x00}
++	};
+ 
+ 	dev_dbg(&client->dev,
+ 		"delivery_system=%d modulation=%d frequency=%u symbol_rate=%d inversion=%d pilot=%d rolloff=%d\n",
+@@ -321,11 +324,7 @@ static int m88ds3103_set_frontend(struct dvb_frontend *fe)
+ 	}
+ 
+ 	/* reset */
+-	ret = regmap_write(dev->regmap, 0x07, 0x80);
+-	if (ret)
+-		goto err;
+-
+-	ret = regmap_write(dev->regmap, 0x07, 0x00);
++	ret = regmap_multi_reg_write(dev->regmap, reset_buf, 2);
+ 	if (ret)
+ 		goto err;
+ 
+diff --git a/drivers/media/dvb-frontends/si2165.c b/drivers/media/dvb-frontends/si2165.c
+index feacd8da421d..d55d8f169dca 100644
+--- a/drivers/media/dvb-frontends/si2165.c
++++ b/drivers/media/dvb-frontends/si2165.c
+@@ -275,18 +275,20 @@ static u32 si2165_get_fe_clk(struct si2165_state *state)
+ 
+ static int si2165_wait_init_done(struct si2165_state *state)
+ {
+-	int ret = -EINVAL;
++	int ret;
+ 	u8 val = 0;
+ 	int i;
+ 
+ 	for (i = 0; i < 3; ++i) {
+-		si2165_readreg8(state, REG_INIT_DONE, &val);
++		ret = si2165_readreg8(state, REG_INIT_DONE, &val);
++		if (ret < 0)
++			return ret;
+ 		if (val == 0x01)
+ 			return 0;
+ 		usleep_range(1000, 50000);
+ 	}
+ 	dev_err(&state->client->dev, "init_done was not set\n");
+-	return ret;
++	return -EINVAL;
+ }
+ 
+ static int si2165_upload_firmware_block(struct si2165_state *state,
+diff --git a/drivers/media/i2c/ov2659.c b/drivers/media/i2c/ov2659.c
+index 799acce803fe..a1e9a980a445 100644
+--- a/drivers/media/i2c/ov2659.c
++++ b/drivers/media/i2c/ov2659.c
+@@ -1117,8 +1117,10 @@ static int ov2659_set_fmt(struct v4l2_subdev *sd,
+ 		if (ov2659_formats[index].code == mf->code)
+ 			break;
+ 
+-	if (index < 0)
+-		return -EINVAL;
++	if (index < 0) {
++		index = 0;
++		mf->code = ov2659_formats[index].code;
++	}
+ 
+ 	mf->colorspace = V4L2_COLORSPACE_SRGB;
+ 	mf->field = V4L2_FIELD_NONE;
+diff --git a/drivers/media/i2c/ov6650.c b/drivers/media/i2c/ov6650.c
+index f9359b11fa5c..de7d9790f054 100644
+--- a/drivers/media/i2c/ov6650.c
++++ b/drivers/media/i2c/ov6650.c
+@@ -810,9 +810,16 @@ static int ov6650_video_probe(struct i2c_client *client)
+ 	u8		pidh, pidl, midh, midl;
+ 	int		ret;
+ 
++	priv->clk = v4l2_clk_get(&client->dev, NULL);
++	if (IS_ERR(priv->clk)) {
++		ret = PTR_ERR(priv->clk);
++		dev_err(&client->dev, "v4l2_clk request err: %d\n", ret);
++		return ret;
++	}
++
+ 	ret = ov6650_s_power(&priv->subdev, 1);
+ 	if (ret < 0)
+-		return ret;
++		goto eclkput;
+ 
+ 	msleep(20);
+ 
+@@ -849,6 +856,11 @@ static int ov6650_video_probe(struct i2c_client *client)
+ 
+ done:
+ 	ov6650_s_power(&priv->subdev, 0);
++	if (!ret)
++		return 0;
++eclkput:
++	v4l2_clk_put(priv->clk);
++
+ 	return ret;
+ }
+ 
+@@ -991,18 +1003,9 @@ static int ov6650_probe(struct i2c_client *client,
+ 	priv->code	  = MEDIA_BUS_FMT_YUYV8_2X8;
+ 	priv->colorspace  = V4L2_COLORSPACE_JPEG;
+ 
+-	priv->clk = v4l2_clk_get(&client->dev, NULL);
+-	if (IS_ERR(priv->clk)) {
+-		ret = PTR_ERR(priv->clk);
+-		goto eclkget;
+-	}
+-
+ 	ret = ov6650_video_probe(client);
+-	if (ret) {
+-		v4l2_clk_put(priv->clk);
+-eclkget:
++	if (ret)
+ 		v4l2_ctrl_handler_free(&priv->hdl);
+-	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/media/i2c/ov7670.c b/drivers/media/i2c/ov7670.c
+index a7d26b294eb5..e65693c2aad5 100644
+--- a/drivers/media/i2c/ov7670.c
++++ b/drivers/media/i2c/ov7670.c
+@@ -1664,6 +1664,7 @@ static int ov7670_s_power(struct v4l2_subdev *sd, int on)
+ 
+ 	if (on) {
+ 		ov7670_power_on (sd);
++		ov7670_init(sd, 0);
+ 		ov7670_apply_fmt(sd);
+ 		ov7675_apply_framerate(sd);
+ 		v4l2_ctrl_handler_setup(&info->hdl);
+diff --git a/drivers/media/pci/saa7146/hexium_gemini.c b/drivers/media/pci/saa7146/hexium_gemini.c
+index 5817d9cde4d0..6d8e4afe9673 100644
+--- a/drivers/media/pci/saa7146/hexium_gemini.c
++++ b/drivers/media/pci/saa7146/hexium_gemini.c
+@@ -270,9 +270,8 @@ static int hexium_attach(struct saa7146_dev *dev, struct saa7146_pci_extension_d
+ 	/* enable i2c-port pins */
+ 	saa7146_write(dev, MC1, (MASK_08 | MASK_24 | MASK_10 | MASK_26));
+ 
+-	hexium->i2c_adapter = (struct i2c_adapter) {
+-		.name = "hexium gemini",
+-	};
++	strscpy(hexium->i2c_adapter.name, "hexium gemini",
++		sizeof(hexium->i2c_adapter.name));
+ 	saa7146_i2c_adapter_prepare(dev, &hexium->i2c_adapter, SAA7146_I2C_BUS_BIT_RATE_480);
+ 	if (i2c_add_adapter(&hexium->i2c_adapter) < 0) {
+ 		DEB_S("cannot register i2c-device. skipping.\n");
+diff --git a/drivers/media/pci/saa7146/hexium_orion.c b/drivers/media/pci/saa7146/hexium_orion.c
+index 0a05176c18ab..a794f9e5f990 100644
+--- a/drivers/media/pci/saa7146/hexium_orion.c
++++ b/drivers/media/pci/saa7146/hexium_orion.c
+@@ -231,9 +231,8 @@ static int hexium_probe(struct saa7146_dev *dev)
+ 	saa7146_write(dev, DD1_STREAM_B, 0x00000000);
+ 	saa7146_write(dev, MC2, (MASK_09 | MASK_25 | MASK_10 | MASK_26));
+ 
+-	hexium->i2c_adapter = (struct i2c_adapter) {
+-		.name = "hexium orion",
+-	};
++	strscpy(hexium->i2c_adapter.name, "hexium orion",
++		sizeof(hexium->i2c_adapter.name));
+ 	saa7146_i2c_adapter_prepare(dev, &hexium->i2c_adapter, SAA7146_I2C_BUS_BIT_RATE_480);
+ 	if (i2c_add_adapter(&hexium->i2c_adapter) < 0) {
+ 		DEB_S("cannot register i2c-device. skipping.\n");
+diff --git a/drivers/media/platform/coda/coda-bit.c b/drivers/media/platform/coda/coda-bit.c
+index b4f396c2e72c..eaa86737fa04 100644
+--- a/drivers/media/platform/coda/coda-bit.c
++++ b/drivers/media/platform/coda/coda-bit.c
+@@ -2010,6 +2010,9 @@ static int coda_prepare_decode(struct coda_ctx *ctx)
+ 	/* Clear decode success flag */
+ 	coda_write(dev, 0, CODA_RET_DEC_PIC_SUCCESS);
+ 
++	/* Clear error return value */
++	coda_write(dev, 0, CODA_RET_DEC_PIC_ERR_MB);
++
+ 	trace_coda_dec_pic_run(ctx, meta);
+ 
+ 	coda_command_async(ctx, CODA_COMMAND_PIC_RUN);
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec.c
+index d022c65bb34c..e20b340855e7 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec.c
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec.c
+@@ -388,7 +388,7 @@ static void mtk_vdec_worker(struct work_struct *work)
+ 	}
+ 	buf.va = vb2_plane_vaddr(&src_buf->vb2_buf, 0);
+ 	buf.dma_addr = vb2_dma_contig_plane_dma_addr(&src_buf->vb2_buf, 0);
+-	buf.size = (size_t)src_buf->planes[0].bytesused;
++	buf.size = (size_t)src_buf->vb2_buf.planes[0].bytesused;
+ 	if (!buf.va) {
+ 		v4l2_m2m_job_finish(dev->m2m_dev_dec, ctx->m2m_ctx);
+ 		mtk_v4l2_err("[%d] id=%d src_addr is NULL!!",
+@@ -1155,10 +1155,10 @@ static void vb2ops_vdec_buf_queue(struct vb2_buffer *vb)
+ 
+ 	src_mem.va = vb2_plane_vaddr(&src_buf->vb2_buf, 0);
+ 	src_mem.dma_addr = vb2_dma_contig_plane_dma_addr(&src_buf->vb2_buf, 0);
+-	src_mem.size = (size_t)src_buf->planes[0].bytesused;
++	src_mem.size = (size_t)src_buf->vb2_buf.planes[0].bytesused;
+ 	mtk_v4l2_debug(2,
+ 			"[%d] buf id=%d va=%p dma=%pad size=%zx",
+-			ctx->id, src_buf->index,
++			ctx->id, src_buf->vb2_buf.index,
+ 			src_mem.va, &src_mem.dma_addr,
+ 			src_mem.size);
+ 
+@@ -1182,7 +1182,7 @@ static void vb2ops_vdec_buf_queue(struct vb2_buffer *vb)
+ 		}
+ 		mtk_v4l2_debug(ret ? 0 : 1,
+ 			       "[%d] vdec_if_decode() src_buf=%d, size=%zu, fail=%d, res_chg=%d",
+-			       ctx->id, src_buf->index,
++			       ctx->id, src_buf->vb2_buf.index,
+ 			       src_mem.size, ret, res_chg);
+ 		return;
+ 	}
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc.c
+index c6b48b5925fb..50351adafc47 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc.c
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc.c
+@@ -894,7 +894,7 @@ static void vb2ops_venc_stop_streaming(struct vb2_queue *q)
+ 
+ 	if (q->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE) {
+ 		while ((dst_buf = v4l2_m2m_dst_buf_remove(ctx->m2m_ctx))) {
+-			dst_buf->planes[0].bytesused = 0;
++			dst_buf->vb2_buf.planes[0].bytesused = 0;
+ 			v4l2_m2m_buf_done(dst_buf, VB2_BUF_STATE_ERROR);
+ 		}
+ 	} else {
+@@ -947,7 +947,7 @@ static int mtk_venc_encode_header(void *priv)
+ 
+ 	bs_buf.va = vb2_plane_vaddr(&dst_buf->vb2_buf, 0);
+ 	bs_buf.dma_addr = vb2_dma_contig_plane_dma_addr(&dst_buf->vb2_buf, 0);
+-	bs_buf.size = (size_t)dst_buf->planes[0].length;
++	bs_buf.size = (size_t)dst_buf->vb2_buf.planes[0].length;
+ 
+ 	mtk_v4l2_debug(1,
+ 			"[%d] buf id=%d va=0x%p dma_addr=0x%llx size=%zu",
+@@ -976,7 +976,7 @@ static int mtk_venc_encode_header(void *priv)
+ 	}
+ 
+ 	ctx->state = MTK_STATE_HEADER;
+-	dst_buf->planes[0].bytesused = enc_result.bs_size;
++	dst_buf->vb2_buf.planes[0].bytesused = enc_result.bs_size;
+ 	v4l2_m2m_buf_done(dst_buf, VB2_BUF_STATE_DONE);
+ 
+ 	return 0;
+@@ -1107,12 +1107,12 @@ static void mtk_venc_worker(struct work_struct *work)
+ 
+ 	if (ret) {
+ 		v4l2_m2m_buf_done(src_buf, VB2_BUF_STATE_ERROR);
+-		dst_buf->planes[0].bytesused = 0;
++		dst_buf->vb2_buf.planes[0].bytesused = 0;
+ 		v4l2_m2m_buf_done(dst_buf, VB2_BUF_STATE_ERROR);
+ 		mtk_v4l2_err("venc_if_encode failed=%d", ret);
+ 	} else {
+ 		v4l2_m2m_buf_done(src_buf, VB2_BUF_STATE_DONE);
+-		dst_buf->planes[0].bytesused = enc_result.bs_size;
++		dst_buf->vb2_buf.planes[0].bytesused = enc_result.bs_size;
+ 		v4l2_m2m_buf_done(dst_buf, VB2_BUF_STATE_DONE);
+ 		mtk_v4l2_debug(2, "venc_if_encode bs size=%d",
+ 				 enc_result.bs_size);
+diff --git a/drivers/media/platform/stm32/stm32-dcmi.c b/drivers/media/platform/stm32/stm32-dcmi.c
+index 5fe5b38fa901..922855b6025c 100644
+--- a/drivers/media/platform/stm32/stm32-dcmi.c
++++ b/drivers/media/platform/stm32/stm32-dcmi.c
+@@ -811,6 +811,9 @@ static int dcmi_try_fmt(struct stm32_dcmi *dcmi, struct v4l2_format *f,
+ 
+ 	sd_fmt = find_format_by_fourcc(dcmi, pix->pixelformat);
+ 	if (!sd_fmt) {
++		if (!dcmi->num_of_sd_formats)
++			return -ENODATA;
++
+ 		sd_fmt = dcmi->sd_formats[dcmi->num_of_sd_formats - 1];
+ 		pix->pixelformat = sd_fmt->fourcc;
+ 	}
+@@ -989,6 +992,9 @@ static int dcmi_set_sensor_format(struct stm32_dcmi *dcmi,
+ 
+ 	sd_fmt = find_format_by_fourcc(dcmi, pix->pixelformat);
+ 	if (!sd_fmt) {
++		if (!dcmi->num_of_sd_formats)
++			return -ENODATA;
++
+ 		sd_fmt = dcmi->sd_formats[dcmi->num_of_sd_formats - 1];
+ 		pix->pixelformat = sd_fmt->fourcc;
+ 	}
+@@ -1645,7 +1651,7 @@ static int dcmi_probe(struct platform_device *pdev)
+ 	dcmi->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
+ 	if (IS_ERR(dcmi->rstc)) {
+ 		dev_err(&pdev->dev, "Could not get reset control\n");
+-		return -ENODEV;
++		return PTR_ERR(dcmi->rstc);
+ 	}
+ 
+ 	/* Get bus characteristics from devicetree */
+@@ -1660,7 +1666,7 @@ static int dcmi_probe(struct platform_device *pdev)
+ 	of_node_put(np);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Could not parse the endpoint\n");
+-		return -ENODEV;
++		return ret;
+ 	}
+ 
+ 	if (ep.bus_type == V4L2_MBUS_CSI2_DPHY) {
+@@ -1673,8 +1679,9 @@ static int dcmi_probe(struct platform_device *pdev)
+ 
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq <= 0) {
+-		dev_err(&pdev->dev, "Could not get irq\n");
+-		return -ENODEV;
++		if (irq != -EPROBE_DEFER)
++			dev_err(&pdev->dev, "Could not get irq\n");
++		return irq;
+ 	}
+ 
+ 	dcmi->res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+@@ -1694,12 +1701,13 @@ static int dcmi_probe(struct platform_device *pdev)
+ 					dev_name(&pdev->dev), dcmi);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Unable to request irq %d\n", irq);
+-		return -ENODEV;
++		return ret;
+ 	}
+ 
+ 	mclk = devm_clk_get(&pdev->dev, "mclk");
+ 	if (IS_ERR(mclk)) {
+-		dev_err(&pdev->dev, "Unable to get mclk\n");
++		if (PTR_ERR(mclk) != -EPROBE_DEFER)
++			dev_err(&pdev->dev, "Unable to get mclk\n");
+ 		return PTR_ERR(mclk);
+ 	}
+ 
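+
+All of the dcmi_probe() changes above apply one rule: return the provider's
+real error code instead of a blanket -ENODEV, and stay quiet when that code
+is -EPROBE_DEFER, since a deferred probe is simply retried later and is not
+worth an error in the log. A minimal sketch of the idiom, as a hypothetical
+helper:
+
+	static struct clk *dcmi_get_clk(struct device *dev, const char *id)
+	{
+		struct clk *clk = devm_clk_get(dev, id);
+
+		/* deferral is routine, not a failure worth logging */
+		if (IS_ERR(clk) && PTR_ERR(clk) != -EPROBE_DEFER)
+			dev_err(dev, "Unable to get %s clock\n", id);
+
+		return clk;	/* caller propagates PTR_ERR(clk) unchanged */
+	}
+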
+diff --git a/drivers/media/platform/vicodec/codec-fwht.c b/drivers/media/platform/vicodec/codec-fwht.c
+index d1d6085da9f1..cf469a1191aa 100644
+--- a/drivers/media/platform/vicodec/codec-fwht.c
++++ b/drivers/media/platform/vicodec/codec-fwht.c
+@@ -46,8 +46,12 @@ static const uint8_t zigzag[64] = {
+ 	63,
+ };
+ 
+-
+-static int rlc(const s16 *in, __be16 *output, int blocktype)
++/*
++ * noinline_for_stack to work around
++ * https://bugs.llvm.org/show_bug.cgi?id=38809
++ */
++static int noinline_for_stack
++rlc(const s16 *in, __be16 *output, int blocktype)
+ {
+ 	s16 block[8 * 8];
+ 	s16 *wp = block;
+@@ -106,8 +110,8 @@ static int rlc(const s16 *in, __be16 *output, int blocktype)
+  * This function will worst-case increase rlc_in by 65*2 bytes:
+  * one s16 value for the header and 8 * 8 coefficients of type s16.
+  */
+-static u16 derlc(const __be16 **rlc_in, s16 *dwht_out,
+-		 const __be16 *end_of_input)
++static noinline_for_stack u16
++derlc(const __be16 **rlc_in, s16 *dwht_out, const __be16 *end_of_input)
+ {
+ 	/* header */
+ 	const __be16 *input = *rlc_in;
+@@ -240,8 +244,9 @@ static void dequantize_inter(s16 *coeff)
+ 			*coeff <<= *quant;
+ }
+ 
+-static void fwht(const u8 *block, s16 *output_block, unsigned int stride,
+-		 unsigned int input_step, bool intra)
++static void noinline_for_stack fwht(const u8 *block, s16 *output_block,
++				    unsigned int stride,
++				    unsigned int input_step, bool intra)
+ {
+ 	/* we'll need more than 8 bits for the transformed coefficients */
+ 	s32 workspace1[8], workspace2[8];
+@@ -373,7 +378,8 @@ static void fwht(const u8 *block, s16 *output_block, unsigned int stride,
+  * Furthermore values can be negative... This is just a version that
+  * works with 16 signed data
+  */
+-static void fwht16(const s16 *block, s16 *output_block, int stride, int intra)
++static void noinline_for_stack
++fwht16(const s16 *block, s16 *output_block, int stride, int intra)
+ {
+ 	/* we'll need more than 8 bits for the transformed coefficients */
+ 	s32 workspace1[8], workspace2[8];
+@@ -456,7 +462,8 @@ static void fwht16(const s16 *block, s16 *output_block, int stride, int intra)
+ 	}
+ }
+ 
+-static void ifwht(const s16 *block, s16 *output_block, int intra)
++static noinline_for_stack void
++ifwht(const s16 *block, s16 *output_block, int intra)
+ {
+ 	/*
+ 	 * we'll need more than 8 bits for the transformed coefficients
+@@ -604,9 +611,9 @@ static int var_inter(const s16 *old, const s16 *new)
+ 	return ret;
+ }
+ 
+-static int decide_blocktype(const u8 *cur, const u8 *reference,
+-			    s16 *deltablock, unsigned int stride,
+-			    unsigned int input_step)
++static noinline_for_stack int
++decide_blocktype(const u8 *cur, const u8 *reference, s16 *deltablock,
++		 unsigned int stride, unsigned int input_step)
+ {
+ 	s16 tmp[64];
+ 	s16 old[64];
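+
+Context for the codec-fwht changes: each of these helpers keeps a 64-entry
+s16 scratch block (~128 bytes) on its own frame, and the cited llvm.org bug
+is clang inlining them all into one caller, merging those blocks into a
+single oversized stack frame. noinline_for_stack expands to plain noinline;
+the spelling just documents why inlining is suppressed. Sketch (hypothetical
+helper):
+
+	static noinline_for_stack void stage(const s16 *in, s16 *out)
+	{
+		s16 block[8 * 8];	/* ~128 bytes, confined to this frame */
+
+		memcpy(block, in, sizeof(block));
+		memcpy(out, block, sizeof(block));
+	}
+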
+diff --git a/drivers/media/platform/vicodec/vicodec-core.c b/drivers/media/platform/vicodec/vicodec-core.c
+index d7636fe9e174..8788369e59a0 100644
+--- a/drivers/media/platform/vicodec/vicodec-core.c
++++ b/drivers/media/platform/vicodec/vicodec-core.c
+@@ -159,12 +159,10 @@ static int device_process(struct vicodec_ctx *ctx,
+ 			  struct vb2_v4l2_buffer *dst_vb)
+ {
+ 	struct vicodec_dev *dev = ctx->dev;
+-	struct vicodec_q_data *q_dst;
+ 	struct v4l2_fwht_state *state = &ctx->state;
+ 	u8 *p_src, *p_dst;
+ 	int ret;
+ 
+-	q_dst = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
+ 	if (ctx->is_enc)
+ 		p_src = vb2_plane_vaddr(&src_vb->vb2_buf, 0);
+ 	else
+@@ -186,8 +184,10 @@ static int device_process(struct vicodec_ctx *ctx,
+ 			return ret;
+ 		vb2_set_plane_payload(&dst_vb->vb2_buf, 0, ret);
+ 	} else {
++		struct vicodec_q_data *q_dst;
+ 		unsigned int comp_frame_size = ntohl(ctx->state.header.size);
+ 
++		q_dst = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
+ 		if (comp_frame_size > ctx->comp_max_size)
+ 			return -EINVAL;
+ 		state->info = q_dst->info;
+@@ -196,11 +196,6 @@ static int device_process(struct vicodec_ctx *ctx,
+ 			return ret;
+ 		vb2_set_plane_payload(&dst_vb->vb2_buf, 0, q_dst->sizeimage);
+ 	}
+-
+-	dst_vb->sequence = q_dst->sequence++;
+-	dst_vb->flags &= ~V4L2_BUF_FLAG_LAST;
+-	v4l2_m2m_buf_copy_metadata(src_vb, dst_vb, !ctx->is_enc);
+-
+ 	return 0;
+ }
+ 
+@@ -274,16 +269,22 @@ static void device_run(void *priv)
+ 	struct vicodec_ctx *ctx = priv;
+ 	struct vicodec_dev *dev = ctx->dev;
+ 	struct vb2_v4l2_buffer *src_buf, *dst_buf;
+-	struct vicodec_q_data *q_src;
++	struct vicodec_q_data *q_src, *q_dst;
+ 	u32 state;
+ 
+ 	src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 	dst_buf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+ 	q_src = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
++	q_dst = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
+ 
+ 	state = VB2_BUF_STATE_DONE;
+ 	if (device_process(ctx, src_buf, dst_buf))
+ 		state = VB2_BUF_STATE_ERROR;
++	else
++		dst_buf->sequence = q_dst->sequence++;
++	dst_buf->flags &= ~V4L2_BUF_FLAG_LAST;
++	v4l2_m2m_buf_copy_metadata(src_buf, dst_buf, !ctx->is_enc);
++
+ 	ctx->last_dst_buf = dst_buf;
+ 
+ 	spin_lock(ctx->lock);
+@@ -1338,8 +1339,11 @@ static int vicodec_start_streaming(struct vb2_queue *q,
+ 	chroma_div = info->width_div * info->height_div;
+ 	q_data->sequence = 0;
+ 
+-	ctx->last_src_buf = NULL;
+-	ctx->last_dst_buf = NULL;
++	if (V4L2_TYPE_IS_OUTPUT(q->type))
++		ctx->last_src_buf = NULL;
++	else
++		ctx->last_dst_buf = NULL;
++
+ 	state->gop_cnt = 0;
+ 
+ 	if ((V4L2_TYPE_IS_OUTPUT(q->type) && !ctx->is_enc) ||
+diff --git a/drivers/media/platform/video-mux.c b/drivers/media/platform/video-mux.c
+index 0ba30756e1e4..d8cd5f5cb10d 100644
+--- a/drivers/media/platform/video-mux.c
++++ b/drivers/media/platform/video-mux.c
+@@ -419,9 +419,14 @@ static int video_mux_probe(struct platform_device *pdev)
+ 	vmux->active = -1;
+ 	vmux->pads = devm_kcalloc(dev, num_pads, sizeof(*vmux->pads),
+ 				  GFP_KERNEL);
++	if (!vmux->pads)
++		return -ENOMEM;
++
+ 	vmux->format_mbus = devm_kcalloc(dev, num_pads,
+ 					 sizeof(*vmux->format_mbus),
+ 					 GFP_KERNEL);
++	if (!vmux->format_mbus)
++		return -ENOMEM;
+ 
+ 	for (i = 0; i < num_pads; i++) {
+ 		vmux->pads[i].flags = (i < num_pads - 1) ? MEDIA_PAD_FL_SINK
+diff --git a/drivers/media/platform/vim2m.c b/drivers/media/platform/vim2m.c
+index 34dcaca45d8b..dd47821fc661 100644
+--- a/drivers/media/platform/vim2m.c
++++ b/drivers/media/platform/vim2m.c
+@@ -1262,6 +1262,15 @@ static int vim2m_release(struct file *file)
+ 	return 0;
+ }
+ 
++static void vim2m_device_release(struct video_device *vdev)
++{
++	struct vim2m_dev *dev = container_of(vdev, struct vim2m_dev, vfd);
++
++	v4l2_device_unregister(&dev->v4l2_dev);
++	v4l2_m2m_release(dev->m2m_dev);
++	kfree(dev);
++}
++
+ static const struct v4l2_file_operations vim2m_fops = {
+ 	.owner		= THIS_MODULE,
+ 	.open		= vim2m_open,
+@@ -1277,7 +1286,7 @@ static const struct video_device vim2m_videodev = {
+ 	.fops		= &vim2m_fops,
+ 	.ioctl_ops	= &vim2m_ioctl_ops,
+ 	.minor		= -1,
+-	.release	= video_device_release_empty,
++	.release	= vim2m_device_release,
+ 	.device_caps	= V4L2_CAP_VIDEO_M2M | V4L2_CAP_STREAMING,
+ };
+ 
+@@ -1298,13 +1307,13 @@ static int vim2m_probe(struct platform_device *pdev)
+ 	struct video_device *vfd;
+ 	int ret;
+ 
+-	dev = devm_kzalloc(&pdev->dev, sizeof(*dev), GFP_KERNEL);
++	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+ 	if (!dev)
+ 		return -ENOMEM;
+ 
+ 	ret = v4l2_device_register(&pdev->dev, &dev->v4l2_dev);
+ 	if (ret)
+-		return ret;
++		goto error_free;
+ 
+ 	atomic_set(&dev->num_inst, 0);
+ 	mutex_init(&dev->dev_mutex);
+@@ -1317,7 +1326,7 @@ static int vim2m_probe(struct platform_device *pdev)
+ 	ret = video_register_device(vfd, VFL_TYPE_GRABBER, 0);
+ 	if (ret) {
+ 		v4l2_err(&dev->v4l2_dev, "Failed to register video device\n");
+-		goto unreg_v4l2;
++		goto error_v4l2;
+ 	}
+ 
+ 	video_set_drvdata(vfd, dev);
+@@ -1330,7 +1339,7 @@ static int vim2m_probe(struct platform_device *pdev)
+ 	if (IS_ERR(dev->m2m_dev)) {
+ 		v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem device\n");
+ 		ret = PTR_ERR(dev->m2m_dev);
+-		goto unreg_dev;
++		goto error_dev;
+ 	}
+ 
+ #ifdef CONFIG_MEDIA_CONTROLLER
+@@ -1346,27 +1355,29 @@ static int vim2m_probe(struct platform_device *pdev)
+ 						 MEDIA_ENT_F_PROC_VIDEO_SCALER);
+ 	if (ret) {
+ 		v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem media controller\n");
+-		goto unreg_m2m;
++		goto error_m2m;
+ 	}
+ 
+ 	ret = media_device_register(&dev->mdev);
+ 	if (ret) {
+ 		v4l2_err(&dev->v4l2_dev, "Failed to register mem2mem media device\n");
+-		goto unreg_m2m_mc;
++		goto error_m2m_mc;
+ 	}
+ #endif
+ 	return 0;
+ 
+ #ifdef CONFIG_MEDIA_CONTROLLER
+-unreg_m2m_mc:
++error_m2m_mc:
+ 	v4l2_m2m_unregister_media_controller(dev->m2m_dev);
+-unreg_m2m:
++error_m2m:
+ 	v4l2_m2m_release(dev->m2m_dev);
+ #endif
+-unreg_dev:
++error_dev:
+ 	video_unregister_device(&dev->vfd);
+-unreg_v4l2:
++error_v4l2:
+ 	v4l2_device_unregister(&dev->v4l2_dev);
++error_free:
++	kfree(dev);
+ 
+ 	return ret;
+ }
+@@ -1382,9 +1393,7 @@ static int vim2m_remove(struct platform_device *pdev)
+ 	v4l2_m2m_unregister_media_controller(dev->m2m_dev);
+ 	media_device_cleanup(&dev->mdev);
+ #endif
+-	v4l2_m2m_release(dev->m2m_dev);
+ 	video_unregister_device(&dev->vfd);
+-	v4l2_device_unregister(&dev->v4l2_dev);
+ 
+ 	return 0;
+ }
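+
+The devm_kzalloc() -> kzalloc() switch is the heart of the vim2m fix: devm
+memory is freed as soon as the platform device unbinds, but userspace can
+still hold an open file handle at that point, so the structure embedding
+the video_device must survive until the last reference is dropped. That is
+what the .release callback is for; sketch with a hypothetical driver struct:
+
+	static void foo_release(struct video_device *vdev)
+	{
+		struct foo_dev *foo = container_of(vdev, struct foo_dev, vfd);
+
+		kfree(foo);	/* last reference gone: now safe to free */
+	}
+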
+diff --git a/drivers/media/platform/vimc/vimc-core.c b/drivers/media/platform/vimc/vimc-core.c
+index 0fbb7914098f..3aa62d7e3d0e 100644
+--- a/drivers/media/platform/vimc/vimc-core.c
++++ b/drivers/media/platform/vimc/vimc-core.c
+@@ -304,6 +304,8 @@ static int vimc_probe(struct platform_device *pdev)
+ 
+ 	dev_dbg(&pdev->dev, "probe");
+ 
++	memset(&vimc->mdev, 0, sizeof(vimc->mdev));
++
+ 	/* Create platform_device for each entity in the topology*/
+ 	vimc->subdevs = devm_kcalloc(&vimc->pdev.dev, vimc->pipe_cfg->num_ents,
+ 				     sizeof(*vimc->subdevs), GFP_KERNEL);
+diff --git a/drivers/media/platform/vimc/vimc-streamer.c b/drivers/media/platform/vimc/vimc-streamer.c
+index fcc897fb247b..392754c18046 100644
+--- a/drivers/media/platform/vimc/vimc-streamer.c
++++ b/drivers/media/platform/vimc/vimc-streamer.c
+@@ -120,7 +120,6 @@ static int vimc_streamer_thread(void *data)
+ 	int i;
+ 
+ 	set_freezable();
+-	set_current_state(TASK_UNINTERRUPTIBLE);
+ 
+ 	for (;;) {
+ 		try_to_freeze();
+@@ -137,6 +136,7 @@ static int vimc_streamer_thread(void *data)
+ 				break;
+ 		}
+ 		//wait for 60hz
++		set_current_state(TASK_UNINTERRUPTIBLE);
+ 		schedule_timeout(HZ / 60);
+ 	}
+ 
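+
+The vimc thread fix hinges on how schedule_timeout() works: it only sleeps
+if the task state was set to a non-running state immediately beforehand,
+and anything that runs in between (or any wakeup) puts the task back to
+TASK_RUNNING. Setting the state once before the loop therefore slept only
+the first pass; afterwards schedule_timeout(HZ / 60) returned immediately
+and the thread spun. The canonical loop shape:
+
+	for (;;) {
+		if (kthread_should_stop())
+			break;
+		/* re-arm the sleep state on every pass, just before sleeping */
+		set_current_state(TASK_UNINTERRUPTIBLE);
+		schedule_timeout(HZ / 60);	/* ~16 ms */
+	}
+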
+diff --git a/drivers/media/platform/vivid/vivid-vid-cap.c b/drivers/media/platform/vivid/vivid-vid-cap.c
+index 52eeda624d7e..530ac8decb25 100644
+--- a/drivers/media/platform/vivid/vivid-vid-cap.c
++++ b/drivers/media/platform/vivid/vivid-vid-cap.c
+@@ -1007,7 +1007,7 @@ int vivid_vid_cap_s_selection(struct file *file, void *fh, struct v4l2_selection
+ 		v4l2_rect_map_inside(&s->r, &dev->fmt_cap_rect);
+ 		if (dev->bitmap_cap && (compose->width != s->r.width ||
+ 					compose->height != s->r.height)) {
+-			kfree(dev->bitmap_cap);
++			vfree(dev->bitmap_cap);
+ 			dev->bitmap_cap = NULL;
+ 		}
+ 		*compose = s->r;
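+
+One-liner, but a real bug: bitmap_cap is allocated with vzalloc() elsewhere
+in vivid, and handing a vmalloc address to kfree() corrupts the slab
+allocator. Allocation and free routines must always pair:
+
+	u8 *bitmap = vzalloc(size);	/* vmalloc-family allocation...       */
+
+	vfree(bitmap);			/* ...pairs with vfree(), not kfree() */
+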
+diff --git a/drivers/media/radio/wl128x/fmdrv_common.c b/drivers/media/radio/wl128x/fmdrv_common.c
+index 3c8987af3772..ac5706b4cab8 100644
+--- a/drivers/media/radio/wl128x/fmdrv_common.c
++++ b/drivers/media/radio/wl128x/fmdrv_common.c
+@@ -489,7 +489,8 @@ int fmc_send_cmd(struct fmdev *fmdev, u8 fm_op, u16 type, void *payload,
+ 		return -EIO;
+ 	}
+ 	/* Send response data to caller */
+-	if (response != NULL && response_len != NULL && evt_hdr->dlen) {
++	if (response != NULL && response_len != NULL && evt_hdr->dlen &&
++	    evt_hdr->dlen <= payload_len) {
+ 		/* Skip header info and copy only response data */
+ 		skb_pull(skb, sizeof(struct fm_event_msg_hdr));
+ 		memcpy(response, skb->data, evt_hdr->dlen);
+@@ -583,6 +584,8 @@ static void fm_irq_handle_flag_getcmd_resp(struct fmdev *fmdev)
+ 		return;
+ 
+ 	fm_evt_hdr = (void *)skb->data;
++	if (fm_evt_hdr->dlen > sizeof(fmdev->irq_info.flag))
++		return;
+ 
+ 	/* Skip header info and copy only response data */
+ 	skb_pull(skb, sizeof(struct fm_event_msg_hdr));
+@@ -1308,7 +1311,7 @@ static int load_default_rx_configuration(struct fmdev *fmdev)
+ static int fm_power_up(struct fmdev *fmdev, u8 mode)
+ {
+ 	u16 payload;
+-	__be16 asic_id, asic_ver;
++	__be16 asic_id = 0, asic_ver = 0;
+ 	int resp_len, ret;
+ 	u8 fw_name[50];
+ 
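+
+Both wl128x hunks stop trusting the length field the firmware reports:
+evt_hdr->dlen is clamped against the caller's buffer before the memcpy(),
+and asic_id/asic_ver are zero-initialized for the path where the copy is
+now skipped. The general shape of the check (sketch, hypothetical names):
+
+	/* never copy more than the destination holds, whatever the
+	 * device claims */
+	if (!dlen || dlen > dst_len)
+		return -EIO;
+	memcpy(dst, skb->data, dlen);
+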
+diff --git a/drivers/media/rc/serial_ir.c b/drivers/media/rc/serial_ir.c
+index ffe2c672d105..3998ba29beb6 100644
+--- a/drivers/media/rc/serial_ir.c
++++ b/drivers/media/rc/serial_ir.c
+@@ -773,8 +773,6 @@ static void serial_ir_exit(void)
+ 
+ static int __init serial_ir_init_module(void)
+ {
+-	int result;
+-
+ 	switch (type) {
+ 	case IR_HOMEBREW:
+ 	case IR_IRDEO:
+@@ -802,12 +800,7 @@ static int __init serial_ir_init_module(void)
+ 	if (sense != -1)
+ 		sense = !!sense;
+ 
+-	result = serial_ir_init();
+-	if (!result)
+-		return 0;
+-
+-	serial_ir_exit();
+-	return result;
++	return serial_ir_init();
+ }
+ 
+ static void __exit serial_ir_exit_module(void)
+diff --git a/drivers/media/usb/au0828/au0828-video.c b/drivers/media/usb/au0828/au0828-video.c
+index 7876c897cc1d..222723d946e4 100644
+--- a/drivers/media/usb/au0828/au0828-video.c
++++ b/drivers/media/usb/au0828/au0828-video.c
+@@ -758,6 +758,9 @@ static int au0828_analog_stream_enable(struct au0828_dev *d)
+ 
+ 	dprintk(1, "au0828_analog_stream_enable called\n");
+ 
++	if (test_bit(DEV_DISCONNECTED, &d->dev_state))
++		return -ENODEV;
++
+ 	iface = usb_ifnum_to_if(d->usbdev, 0);
+ 	if (iface && iface->cur_altsetting->desc.bAlternateSetting != 5) {
+ 		dprintk(1, "Changing intf#0 to alt 5\n");
+@@ -839,9 +842,9 @@ int au0828_start_analog_streaming(struct vb2_queue *vq, unsigned int count)
+ 			return rc;
+ 		}
+ 
++		v4l2_device_call_all(&dev->v4l2_dev, 0, video, s_stream, 1);
++
+ 		if (vq->type == V4L2_BUF_TYPE_VIDEO_CAPTURE) {
+-			v4l2_device_call_all(&dev->v4l2_dev, 0, video,
+-						s_stream, 1);
+ 			dev->vid_timeout_running = 1;
+ 			mod_timer(&dev->vid_timeout, jiffies + (HZ / 10));
+ 		} else if (vq->type == V4L2_BUF_TYPE_VBI_CAPTURE) {
+@@ -861,10 +864,11 @@ static void au0828_stop_streaming(struct vb2_queue *vq)
+ 
+ 	dprintk(1, "au0828_stop_streaming called %d\n", dev->streaming_users);
+ 
+-	if (dev->streaming_users-- == 1)
++	if (dev->streaming_users-- == 1) {
+ 		au0828_uninit_isoc(dev);
++		v4l2_device_call_all(&dev->v4l2_dev, 0, video, s_stream, 0);
++	}
+ 
+-	v4l2_device_call_all(&dev->v4l2_dev, 0, video, s_stream, 0);
+ 	dev->vid_timeout_running = 0;
+ 	del_timer_sync(&dev->vid_timeout);
+ 
+@@ -893,8 +897,10 @@ void au0828_stop_vbi_streaming(struct vb2_queue *vq)
+ 	dprintk(1, "au0828_stop_vbi_streaming called %d\n",
+ 		dev->streaming_users);
+ 
+-	if (dev->streaming_users-- == 1)
++	if (dev->streaming_users-- == 1) {
+ 		au0828_uninit_isoc(dev);
++		v4l2_device_call_all(&dev->v4l2_dev, 0, video, s_stream, 0);
++	}
+ 
+ 	spin_lock_irqsave(&dev->slock, flags);
+ 	if (dev->isoc_ctl.vbi_buf != NULL) {
+diff --git a/drivers/media/usb/cpia2/cpia2_v4l.c b/drivers/media/usb/cpia2/cpia2_v4l.c
+index 95c0bd4a19dc..45caf78119c4 100644
+--- a/drivers/media/usb/cpia2/cpia2_v4l.c
++++ b/drivers/media/usb/cpia2/cpia2_v4l.c
+@@ -1240,8 +1240,7 @@ static int __init cpia2_init(void)
+ 	LOG("%s v%s\n",
+ 	    ABOUT, CPIA_VERSION);
+ 	check_parameters();
+-	cpia2_usb_init();
+-	return 0;
++	return cpia2_usb_init();
+ }
+ 
+ 
+diff --git a/drivers/media/usb/dvb-usb-v2/dvbsky.c b/drivers/media/usb/dvb-usb-v2/dvbsky.c
+index e28bd8836751..ae0814dd202a 100644
+--- a/drivers/media/usb/dvb-usb-v2/dvbsky.c
++++ b/drivers/media/usb/dvb-usb-v2/dvbsky.c
+@@ -615,16 +615,18 @@ static int dvbsky_init(struct dvb_usb_device *d)
+ 	return 0;
+ }
+ 
+-static void dvbsky_exit(struct dvb_usb_device *d)
++static int dvbsky_frontend_detach(struct dvb_usb_adapter *adap)
+ {
++	struct dvb_usb_device *d = adap_to_d(adap);
+ 	struct dvbsky_state *state = d_to_priv(d);
+-	struct dvb_usb_adapter *adap = &d->adapter[0];
++
++	dev_dbg(&d->udev->dev, "%s: adap=%d\n", __func__, adap->id);
+ 
+ 	dvb_module_release(state->i2c_client_tuner);
+ 	dvb_module_release(state->i2c_client_demod);
+ 	dvb_module_release(state->i2c_client_ci);
+ 
+-	adap->fe[0] = NULL;
++	return 0;
+ }
+ 
+ /* DVB USB Driver stuff */
+@@ -640,11 +642,11 @@ static struct dvb_usb_device_properties dvbsky_s960_props = {
+ 
+ 	.i2c_algo         = &dvbsky_i2c_algo,
+ 	.frontend_attach  = dvbsky_s960_attach,
++	.frontend_detach  = dvbsky_frontend_detach,
+ 	.init             = dvbsky_init,
+ 	.get_rc_config    = dvbsky_get_rc_config,
+ 	.streaming_ctrl   = dvbsky_streaming_ctrl,
+ 	.identify_state	  = dvbsky_identify_state,
+-	.exit             = dvbsky_exit,
+ 	.read_mac_address = dvbsky_read_mac_addr,
+ 
+ 	.num_adapters = 1,
+@@ -667,11 +669,11 @@ static struct dvb_usb_device_properties dvbsky_s960c_props = {
+ 
+ 	.i2c_algo         = &dvbsky_i2c_algo,
+ 	.frontend_attach  = dvbsky_s960c_attach,
++	.frontend_detach  = dvbsky_frontend_detach,
+ 	.init             = dvbsky_init,
+ 	.get_rc_config    = dvbsky_get_rc_config,
+ 	.streaming_ctrl   = dvbsky_streaming_ctrl,
+ 	.identify_state	  = dvbsky_identify_state,
+-	.exit             = dvbsky_exit,
+ 	.read_mac_address = dvbsky_read_mac_addr,
+ 
+ 	.num_adapters = 1,
+@@ -694,11 +696,11 @@ static struct dvb_usb_device_properties dvbsky_t680c_props = {
+ 
+ 	.i2c_algo         = &dvbsky_i2c_algo,
+ 	.frontend_attach  = dvbsky_t680c_attach,
++	.frontend_detach  = dvbsky_frontend_detach,
+ 	.init             = dvbsky_init,
+ 	.get_rc_config    = dvbsky_get_rc_config,
+ 	.streaming_ctrl   = dvbsky_streaming_ctrl,
+ 	.identify_state	  = dvbsky_identify_state,
+-	.exit             = dvbsky_exit,
+ 	.read_mac_address = dvbsky_read_mac_addr,
+ 
+ 	.num_adapters = 1,
+@@ -721,11 +723,11 @@ static struct dvb_usb_device_properties dvbsky_t330_props = {
+ 
+ 	.i2c_algo         = &dvbsky_i2c_algo,
+ 	.frontend_attach  = dvbsky_t330_attach,
++	.frontend_detach  = dvbsky_frontend_detach,
+ 	.init             = dvbsky_init,
+ 	.get_rc_config    = dvbsky_get_rc_config,
+ 	.streaming_ctrl   = dvbsky_streaming_ctrl,
+ 	.identify_state	  = dvbsky_identify_state,
+-	.exit             = dvbsky_exit,
+ 	.read_mac_address = dvbsky_read_mac_addr,
+ 
+ 	.num_adapters = 1,
+@@ -748,11 +750,11 @@ static struct dvb_usb_device_properties mygica_t230c_props = {
+ 
+ 	.i2c_algo         = &dvbsky_i2c_algo,
+ 	.frontend_attach  = dvbsky_mygica_t230c_attach,
++	.frontend_detach  = dvbsky_frontend_detach,
+ 	.init             = dvbsky_init,
+ 	.get_rc_config    = dvbsky_get_rc_config,
+ 	.streaming_ctrl   = dvbsky_streaming_ctrl,
+ 	.identify_state	  = dvbsky_identify_state,
+-	.exit             = dvbsky_exit,
+ 
+ 	.num_adapters = 1,
+ 	.adapter = {
+diff --git a/drivers/media/usb/go7007/go7007-fw.c b/drivers/media/usb/go7007/go7007-fw.c
+index 24f5b615dc7a..dfa9f899d0c2 100644
+--- a/drivers/media/usb/go7007/go7007-fw.c
++++ b/drivers/media/usb/go7007/go7007-fw.c
+@@ -1499,8 +1499,8 @@ static int modet_to_package(struct go7007 *go, __le16 *code, int space)
+ 	return cnt;
+ }
+ 
+-static int do_special(struct go7007 *go, u16 type, __le16 *code, int space,
+-			int *framelen)
++static noinline_for_stack int do_special(struct go7007 *go, u16 type,
++					 __le16 *code, int space, int *framelen)
+ {
+ 	switch (type) {
+ 	case SPECIAL_FRM_HEAD:
+diff --git a/drivers/media/usb/gspca/gspca.c b/drivers/media/usb/gspca/gspca.c
+index ac70b36d67b7..4d7517411cc2 100644
+--- a/drivers/media/usb/gspca/gspca.c
++++ b/drivers/media/usb/gspca/gspca.c
+@@ -294,7 +294,7 @@ static void fill_frame(struct gspca_dev *gspca_dev,
+ 		/* check the packet status and length */
+ 		st = urb->iso_frame_desc[i].status;
+ 		if (st) {
+-			pr_err("ISOC data error: [%d] len=%d, status=%d\n",
++			gspca_dbg(gspca_dev, D_PACK, "ISOC data error: [%d] len=%d, status=%d\n",
+ 			       i, len, st);
+ 			gspca_dev->last_packet_type = DISCARD_PACKET;
+ 			continue;
+@@ -314,6 +314,8 @@ static void fill_frame(struct gspca_dev *gspca_dev,
+ 	}
+ 
+ resubmit:
++	if (!gspca_dev->streaming)
++		return;
+ 	/* resubmit the URB */
+ 	st = usb_submit_urb(urb, GFP_ATOMIC);
+ 	if (st < 0)
+@@ -330,7 +332,7 @@ static void isoc_irq(struct urb *urb)
+ 	struct gspca_dev *gspca_dev = (struct gspca_dev *) urb->context;
+ 
+ 	gspca_dbg(gspca_dev, D_PACK, "isoc irq\n");
+-	if (!vb2_start_streaming_called(&gspca_dev->queue))
++	if (!gspca_dev->streaming)
+ 		return;
+ 	fill_frame(gspca_dev, urb);
+ }
+@@ -344,7 +346,7 @@ static void bulk_irq(struct urb *urb)
+ 	int st;
+ 
+ 	gspca_dbg(gspca_dev, D_PACK, "bulk irq\n");
+-	if (!vb2_start_streaming_called(&gspca_dev->queue))
++	if (!gspca_dev->streaming)
+ 		return;
+ 	switch (urb->status) {
+ 	case 0:
+@@ -367,6 +369,8 @@ static void bulk_irq(struct urb *urb)
+ 				urb->actual_length);
+ 
+ resubmit:
++	if (!gspca_dev->streaming)
++		return;
+ 	/* resubmit the URB */
+ 	if (gspca_dev->cam.bulk_nurbs != 0) {
+ 		st = usb_submit_urb(urb, GFP_ATOMIC);
+@@ -1638,6 +1642,8 @@ void gspca_disconnect(struct usb_interface *intf)
+ 
+ 	mutex_lock(&gspca_dev->usb_lock);
+ 	gspca_dev->present = false;
++	destroy_urbs(gspca_dev);
++	gspca_input_destroy_urb(gspca_dev);
+ 
+ 	vb2_queue_error(&gspca_dev->queue);
+ 
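+
+The gspca changes close a disconnect race: the URB completion handler
+checked for streaming on entry, but by the time it reached the resubmit
+label the device could already be gone, handing a dead URB back to the USB
+core. Re-checking immediately before usb_submit_urb() (plus destroying the
+URBs under usb_lock in gspca_disconnect()) closes the window. Sketch:
+
+	resubmit:
+		/* may have stopped while we were processing the data */
+		if (!gspca_dev->streaming)
+			return;
+		usb_submit_urb(urb, GFP_ATOMIC);
+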
+diff --git a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+index 446a999dd2ce..2bab4713bc5b 100644
+--- a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
++++ b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+@@ -666,6 +666,8 @@ static int ctrl_get_input(struct pvr2_ctrl *cptr,int *vp)
+ 
+ static int ctrl_check_input(struct pvr2_ctrl *cptr,int v)
+ {
++	if (v < 0 || v > PVR2_CVAL_INPUT_MAX)
++		return 0;
+ 	return ((1 << v) & cptr->hdw->input_allowed_mask) != 0;
+ }
+ 
+diff --git a/drivers/media/usb/pvrusb2/pvrusb2-hdw.h b/drivers/media/usb/pvrusb2/pvrusb2-hdw.h
+index 25648add77e5..bd2b7a67b732 100644
+--- a/drivers/media/usb/pvrusb2/pvrusb2-hdw.h
++++ b/drivers/media/usb/pvrusb2/pvrusb2-hdw.h
+@@ -50,6 +50,7 @@
+ #define PVR2_CVAL_INPUT_COMPOSITE 2
+ #define PVR2_CVAL_INPUT_SVIDEO 3
+ #define PVR2_CVAL_INPUT_RADIO 4
++#define PVR2_CVAL_INPUT_MAX PVR2_CVAL_INPUT_RADIO
+ 
+ enum pvr2_config {
+ 	pvr2_config_empty,    /* No configuration */
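+
+ctrl_check_input() computed 1 << v with a caller-supplied v; for negative
+or oversized values that shift is undefined behaviour and would at best
+test an unrelated mask bit. Bounding v against the new PVR2_CVAL_INPUT_MAX
+(the highest defined input, 4) keeps the shift well-defined:
+
+	if (v < 0 || v > PVR2_CVAL_INPUT_MAX)
+		return 0;			/* out of range: reject */
+	return ((1 << v) & allowed_mask) != 0;	/* shift count now 0..4 */
+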
+diff --git a/drivers/media/v4l2-core/v4l2-fwnode.c b/drivers/media/v4l2-core/v4l2-fwnode.c
+index 20571846e636..7495f8323147 100644
+--- a/drivers/media/v4l2-core/v4l2-fwnode.c
++++ b/drivers/media/v4l2-core/v4l2-fwnode.c
+@@ -225,6 +225,10 @@ static int v4l2_fwnode_endpoint_parse_csi2_bus(struct fwnode_handle *fwnode,
+ 	if (bus_type == V4L2_MBUS_CSI2_DPHY ||
+ 	    bus_type == V4L2_MBUS_CSI2_CPHY || lanes_used ||
+ 	    have_clk_lane || (flags & ~V4L2_MBUS_CSI2_CONTINUOUS_CLOCK)) {
++		/* Only D-PHY has a clock lane. */
++		unsigned int dfl_data_lane_index =
++			bus_type == V4L2_MBUS_CSI2_DPHY;
++
+ 		bus->flags = flags;
+ 		if (bus_type == V4L2_MBUS_UNKNOWN)
+ 			vep->bus_type = V4L2_MBUS_CSI2_DPHY;
+@@ -233,7 +237,7 @@ static int v4l2_fwnode_endpoint_parse_csi2_bus(struct fwnode_handle *fwnode,
+ 		if (use_default_lane_mapping) {
+ 			bus->clock_lane = 0;
+ 			for (i = 0; i < num_data_lanes; i++)
+-				bus->data_lanes[i] = 1 + i;
++				bus->data_lanes[i] = dfl_data_lane_index + i;
+ 		} else {
+ 			bus->clock_lane = clock_lane;
+ 			for (i = 0; i < num_data_lanes; i++)
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 36d0d5c9cfba..35be1cc11dd8 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -667,8 +667,16 @@ static int fastrpc_get_args(u32 kernel, struct fastrpc_invoke_ctx *ctx)
+ 		pages[i].size = roundup(len, PAGE_SIZE);
+ 
+ 		if (ctx->maps[i]) {
++			struct vm_area_struct *vma = NULL;
++
+ 			rpra[i].pv = (u64) ctx->args[i].ptr;
+ 			pages[i].addr = ctx->maps[i]->phys;
++
++			vma = find_vma(current->mm, ctx->args[i].ptr);
++			if (vma)
++				pages[i].addr += ctx->args[i].ptr -
++						 vma->vm_start;
++
+ 		} else {
+ 			rlen -= ALIGN(args, FASTRPC_ALIGN) - args;
+ 			args = ALIGN(args, FASTRPC_ALIGN);
+@@ -782,6 +790,9 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl,  u32 kernel,
+ 		if (err)
+ 			goto bail;
+ 	}
++
++	/* make sure that all CPU memory writes are seen by DSP */
++	dma_wmb();
+ 	/* Send invoke buffer to remote dsp */
+ 	err = fastrpc_invoke_send(fl->sctx, ctx, kernel, handle);
+ 	if (err)
+@@ -798,6 +809,8 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl,  u32 kernel,
+ 		goto bail;
+ 
+ 	if (ctx->nscalars) {
++		/* make sure that all memory writes by DSP are seen by CPU */
++		dma_rmb();
+ 		/* populate all the output buffers with results */
+ 		err = fastrpc_put_args(ctx, kernel);
+ 		if (err)
+@@ -843,12 +856,12 @@ static int fastrpc_init_create_process(struct fastrpc_user *fl,
+ 
+ 	if (copy_from_user(&init, argp, sizeof(init))) {
+ 		err = -EFAULT;
+-		goto bail;
++		goto err;
+ 	}
+ 
+ 	if (init.filelen > INIT_FILELEN_MAX) {
+ 		err = -EINVAL;
+-		goto bail;
++		goto err;
+ 	}
+ 
+ 	inbuf.pgid = fl->tgid;
+@@ -862,17 +875,15 @@ static int fastrpc_init_create_process(struct fastrpc_user *fl,
+ 	if (init.filelen && init.filefd) {
+ 		err = fastrpc_map_create(fl, init.filefd, init.filelen, &map);
+ 		if (err)
+-			goto bail;
++			goto err;
+ 	}
+ 
+ 	memlen = ALIGN(max(INIT_FILELEN_MAX, (int)init.filelen * 4),
+ 		       1024 * 1024);
+ 	err = fastrpc_buf_alloc(fl, fl->sctx->dev, memlen,
+ 				&imem);
+-	if (err) {
+-		fastrpc_map_put(map);
+-		goto bail;
+-	}
++	if (err)
++		goto err_alloc;
+ 
+ 	fl->init_mem = imem;
+ 	args[0].ptr = (u64)(uintptr_t)&inbuf;
+@@ -908,13 +919,24 @@ static int fastrpc_init_create_process(struct fastrpc_user *fl,
+ 
+ 	err = fastrpc_internal_invoke(fl, true, FASTRPC_INIT_HANDLE,
+ 				      sc, args);
++	if (err)
++		goto err_invoke;
+ 
+-	if (err) {
++	kfree(args);
++
++	return 0;
++
++err_invoke:
++	fl->init_mem = NULL;
++	fastrpc_buf_free(imem);
++err_alloc:
++	if (map) {
++		spin_lock(&fl->lock);
++		list_del(&map->node);
++		spin_unlock(&fl->lock);
+ 		fastrpc_map_put(map);
+-		fastrpc_buf_free(imem);
+ 	}
+-
+-bail:
++err:
+ 	kfree(args);
+ 
+ 	return err;
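+
+The two fastrpc barriers pair with each other across the CPU/DSP boundary:
+dma_wmb() orders the argument-buffer writes before the message that tells
+the DSP to read them, and dma_rmb() keeps the CPU from reading result
+buffers before it has observed the completion. Schematic only; the helper
+names below are hypothetical:
+
+	fill_args(ctx);		/* CPU writes the shared buffers     */
+	dma_wmb();		/* order those writes before publish */
+	send_invoke_msg(ctx);	/* DSP may now read them             */
+
+	wait_for_dsp(ctx);	/* completion arrives                */
+	dma_rmb();		/* order completion read...          */
+	read_results(ctx);	/* ...before reading the results     */
+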
+diff --git a/drivers/misc/habanalabs/device.c b/drivers/misc/habanalabs/device.c
+index 77d51be66c7e..652c8edb2164 100644
+--- a/drivers/misc/habanalabs/device.c
++++ b/drivers/misc/habanalabs/device.c
+@@ -498,11 +498,8 @@ disable_device:
+ 	return rc;
+ }
+ 
+-static void hl_device_hard_reset_pending(struct work_struct *work)
++static void device_kill_open_processes(struct hl_device *hdev)
+ {
+-	struct hl_device_reset_work *device_reset_work =
+-		container_of(work, struct hl_device_reset_work, reset_work);
+-	struct hl_device *hdev = device_reset_work->hdev;
+ 	u16 pending_total, pending_cnt;
+ 	struct task_struct *task = NULL;
+ 
+@@ -537,6 +534,12 @@ static void hl_device_hard_reset_pending(struct work_struct *work)
+ 		}
+ 	}
+ 
++	/* We killed the open users, but because the driver cleans up after the
++	 * user contexts are closed (e.g. mmu mappings), we need to wait again
++	 * to make sure the cleaning phase is finished before continuing with
++	 * the reset
++	 */
++
+ 	pending_cnt = pending_total;
+ 
+ 	while ((atomic_read(&hdev->fd_open_cnt)) && (pending_cnt)) {
+@@ -552,6 +555,16 @@ static void hl_device_hard_reset_pending(struct work_struct *work)
+ 
+ 	mutex_unlock(&hdev->fd_open_cnt_lock);
+ 
++}
++
++static void device_hard_reset_pending(struct work_struct *work)
++{
++	struct hl_device_reset_work *device_reset_work =
++		container_of(work, struct hl_device_reset_work, reset_work);
++	struct hl_device *hdev = device_reset_work->hdev;
++
++	device_kill_open_processes(hdev);
++
+ 	hl_device_reset(hdev, true, true);
+ 
+ 	kfree(device_reset_work);
+@@ -635,7 +648,7 @@ again:
+ 		 * from a dedicated work
+ 		 */
+ 		INIT_WORK(&device_reset_work->reset_work,
+-				hl_device_hard_reset_pending);
++				device_hard_reset_pending);
+ 		device_reset_work->hdev = hdev;
+ 		schedule_work(&device_reset_work->reset_work);
+ 
+@@ -1035,6 +1048,15 @@ void hl_device_fini(struct hl_device *hdev)
+ 	/* Mark device as disabled */
+ 	hdev->disabled = true;
+ 
++	/*
++	 * Flush anyone that is inside the critical section of enqueue
++	 * jobs to the H/W
++	 */
++	hdev->asic_funcs->hw_queues_lock(hdev);
++	hdev->asic_funcs->hw_queues_unlock(hdev);
++
++	device_kill_open_processes(hdev);
++
+ 	hl_hwmon_fini(hdev);
+ 
+ 	device_late_fini(hdev);
+diff --git a/drivers/misc/habanalabs/goya/goya.c b/drivers/misc/habanalabs/goya/goya.c
+index 3c509e19d69d..1533cb320540 100644
+--- a/drivers/misc/habanalabs/goya/goya.c
++++ b/drivers/misc/habanalabs/goya/goya.c
+@@ -4407,6 +4407,9 @@ static u64 goya_read_pte(struct hl_device *hdev, u64 addr)
+ {
+ 	struct goya_device *goya = hdev->asic_specific;
+ 
++	if (hdev->hard_reset_pending)
++		return U64_MAX;
++
+ 	return readq(hdev->pcie_bar[DDR_BAR_ID] +
+ 			(addr - goya->ddr_bar_cur_addr));
+ }
+@@ -4415,6 +4418,9 @@ static void goya_write_pte(struct hl_device *hdev, u64 addr, u64 val)
+ {
+ 	struct goya_device *goya = hdev->asic_specific;
+ 
++	if (hdev->hard_reset_pending)
++		return;
++
+ 	writeq(val, hdev->pcie_bar[DDR_BAR_ID] +
+ 			(addr - goya->ddr_bar_cur_addr));
+ }
+diff --git a/drivers/misc/habanalabs/memory.c b/drivers/misc/habanalabs/memory.c
+index ce1fda40a8b8..fadaf557603f 100644
+--- a/drivers/misc/habanalabs/memory.c
++++ b/drivers/misc/habanalabs/memory.c
+@@ -1046,10 +1046,17 @@ static int unmap_device_va(struct hl_ctx *ctx, u64 vaddr)
+ 
+ 	mutex_lock(&ctx->mmu_lock);
+ 
+-	for (i = 0 ; i < phys_pg_pack->npages ; i++, next_vaddr += page_size)
++	for (i = 0 ; i < phys_pg_pack->npages ; i++, next_vaddr += page_size) {
+ 		if (hl_mmu_unmap(ctx, next_vaddr, page_size))
+ 			dev_warn_ratelimited(hdev->dev,
+-				"unmap failed for vaddr: 0x%llx\n", next_vaddr);
++			"unmap failed for vaddr: 0x%llx\n", next_vaddr);
++
++		/* unmapping on Palladium can be really long, so avoid a CPU
++		 * soft lockup bug by sleeping a little between unmapping pages
++		 */
++		if (hdev->pldm)
++			usleep_range(500, 1000);
++	}
+ 
+ 	hdev->asic_funcs->mmu_invalidate_cache(hdev, true);
+ 
+diff --git a/drivers/mmc/core/pwrseq_emmc.c b/drivers/mmc/core/pwrseq_emmc.c
+index efb8a7965dd4..154f4204d58c 100644
+--- a/drivers/mmc/core/pwrseq_emmc.c
++++ b/drivers/mmc/core/pwrseq_emmc.c
+@@ -30,19 +30,14 @@ struct mmc_pwrseq_emmc {
+ 
+ #define to_pwrseq_emmc(p) container_of(p, struct mmc_pwrseq_emmc, pwrseq)
+ 
+-static void __mmc_pwrseq_emmc_reset(struct mmc_pwrseq_emmc *pwrseq)
+-{
+-	gpiod_set_value(pwrseq->reset_gpio, 1);
+-	udelay(1);
+-	gpiod_set_value(pwrseq->reset_gpio, 0);
+-	udelay(200);
+-}
+-
+ static void mmc_pwrseq_emmc_reset(struct mmc_host *host)
+ {
+ 	struct mmc_pwrseq_emmc *pwrseq =  to_pwrseq_emmc(host->pwrseq);
+ 
+-	__mmc_pwrseq_emmc_reset(pwrseq);
++	gpiod_set_value_cansleep(pwrseq->reset_gpio, 1);
++	udelay(1);
++	gpiod_set_value_cansleep(pwrseq->reset_gpio, 0);
++	udelay(200);
+ }
+ 
+ static int mmc_pwrseq_emmc_reset_nb(struct notifier_block *this,
+@@ -50,8 +45,11 @@ static int mmc_pwrseq_emmc_reset_nb(struct notifier_block *this,
+ {
+ 	struct mmc_pwrseq_emmc *pwrseq = container_of(this,
+ 					struct mmc_pwrseq_emmc, reset_nb);
++	gpiod_set_value(pwrseq->reset_gpio, 1);
++	udelay(1);
++	gpiod_set_value(pwrseq->reset_gpio, 0);
++	udelay(200);
+ 
+-	__mmc_pwrseq_emmc_reset(pwrseq);
+ 	return NOTIFY_DONE;
+ }
+ 
+@@ -72,14 +70,18 @@ static int mmc_pwrseq_emmc_probe(struct platform_device *pdev)
+ 	if (IS_ERR(pwrseq->reset_gpio))
+ 		return PTR_ERR(pwrseq->reset_gpio);
+ 
+-	/*
+-	 * register reset handler to ensure emmc reset also from
+-	 * emergency_reboot(), priority 255 is the highest priority
+-	 * so it will be executed before any system reboot handler.
+-	 */
+-	pwrseq->reset_nb.notifier_call = mmc_pwrseq_emmc_reset_nb;
+-	pwrseq->reset_nb.priority = 255;
+-	register_restart_handler(&pwrseq->reset_nb);
++	if (!gpiod_cansleep(pwrseq->reset_gpio)) {
++		/*
++		 * register reset handler to ensure emmc reset also from
++		 * emergency_reboot(), priority 255 is the highest priority
++		 * so it will be executed before any system reboot handler.
++		 */
++		pwrseq->reset_nb.notifier_call = mmc_pwrseq_emmc_reset_nb;
++		pwrseq->reset_nb.priority = 255;
++		register_restart_handler(&pwrseq->reset_nb);
++	} else {
++		dev_notice(dev, "EMMC reset pin tied to a sleepy GPIO driver; reset on emergency-reboot disabled\n");
++	}
+ 
+ 	pwrseq->pwrseq.ops = &mmc_pwrseq_emmc_ops;
+ 	pwrseq->pwrseq.dev = dev;
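+
+The pwrseq_emmc split exists because the two call sites run in different
+contexts: the host-ops reset runs in process context, so it may use
+gpiod_set_value_cansleep() and thus works with reset lines behind I2C/SPI
+GPIO expanders, while the restart notifier can fire with interrupts off,
+where only the atomic gpiod_set_value() is legal. Hence the notifier is
+registered only when the line can be driven atomically:
+
+	if (!gpiod_cansleep(pwrseq->reset_gpio)) {
+		/* safe to toggle from atomic context */
+		pwrseq->reset_nb.notifier_call = mmc_pwrseq_emmc_reset_nb;
+		register_restart_handler(&pwrseq->reset_nb);
+	}
+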
+diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
+index 265e1aeeb9d8..d3d32f9a2cb1 100644
+--- a/drivers/mmc/core/sd.c
++++ b/drivers/mmc/core/sd.c
+@@ -221,6 +221,14 @@ static int mmc_decode_scr(struct mmc_card *card)
+ 
+ 	if (scr->sda_spec3)
+ 		scr->cmds = UNSTUFF_BITS(resp, 32, 2);
++
++	/* SD Spec says: any SD Card shall set at least bits 0 and 2 */
++	if (!(scr->bus_widths & SD_SCR_BUS_WIDTH_1) ||
++	    !(scr->bus_widths & SD_SCR_BUS_WIDTH_4)) {
++		pr_err("%s: invalid bus width\n", mmc_hostname(card->host));
++		return -EINVAL;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/mmc/host/mmc_spi.c b/drivers/mmc/host/mmc_spi.c
+index 1b1498805972..a3533935e282 100644
+--- a/drivers/mmc/host/mmc_spi.c
++++ b/drivers/mmc/host/mmc_spi.c
+@@ -819,6 +819,10 @@ mmc_spi_readblock(struct mmc_spi_host *host, struct spi_transfer *t,
+ 	}
+ 
+ 	status = spi_sync_locked(spi, &host->m);
++	if (status < 0) {
++		dev_dbg(&spi->dev, "read error %d\n", status);
++		return status;
++	}
+ 
+ 	if (host->dma_dev) {
+ 		dma_sync_single_for_cpu(host->dma_dev,
+diff --git a/drivers/mmc/host/sdhci-iproc.c b/drivers/mmc/host/sdhci-iproc.c
+index 9d12c06c7fd6..2feb4ef32035 100644
+--- a/drivers/mmc/host/sdhci-iproc.c
++++ b/drivers/mmc/host/sdhci-iproc.c
+@@ -196,7 +196,8 @@ static const struct sdhci_ops sdhci_iproc_32only_ops = {
+ };
+ 
+ static const struct sdhci_pltfm_data sdhci_iproc_cygnus_pltfm_data = {
+-	.quirks = SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK,
++	.quirks = SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK |
++		  SDHCI_QUIRK_NO_HISPD_BIT,
+ 	.quirks2 = SDHCI_QUIRK2_ACMD23_BROKEN | SDHCI_QUIRK2_HOST_OFF_CARD_ON,
+ 	.ops = &sdhci_iproc_32only_ops,
+ };
+@@ -219,7 +220,8 @@ static const struct sdhci_iproc_data iproc_cygnus_data = {
+ 
+ static const struct sdhci_pltfm_data sdhci_iproc_pltfm_data = {
+ 	.quirks = SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK |
+-		  SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
++		  SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12 |
++		  SDHCI_QUIRK_NO_HISPD_BIT,
+ 	.quirks2 = SDHCI_QUIRK2_ACMD23_BROKEN,
+ 	.ops = &sdhci_iproc_ops,
+ };
+diff --git a/drivers/mmc/host/sdhci-of-esdhc.c b/drivers/mmc/host/sdhci-of-esdhc.c
+index 4e669b4edfc1..7e0eae8dafae 100644
+--- a/drivers/mmc/host/sdhci-of-esdhc.c
++++ b/drivers/mmc/host/sdhci-of-esdhc.c
+@@ -694,6 +694,9 @@ static void esdhc_reset(struct sdhci_host *host, u8 mask)
+ 	sdhci_writel(host, host->ier, SDHCI_INT_ENABLE);
+ 	sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE);
+ 
++	if (of_find_compatible_node(NULL, NULL, "fsl,p2020-esdhc"))
++		mdelay(5);
++
+ 	if (mask & SDHCI_RESET_ALL) {
+ 		val = sdhci_readl(host, ESDHC_TBCTL);
+ 		val &= ~ESDHC_TB_EN;
+@@ -1074,6 +1077,11 @@ static int sdhci_esdhc_probe(struct platform_device *pdev)
+ 	if (esdhc->vendor_ver > VENDOR_V_22)
+ 		host->quirks &= ~SDHCI_QUIRK_NO_BUSY_IRQ;
+ 
++	if (of_find_compatible_node(NULL, NULL, "fsl,p2020-esdhc")) {
++		host->quirks2 |= SDHCI_QUIRK_RESET_AFTER_REQUEST;
++		host->quirks2 |= SDHCI_QUIRK_BROKEN_TIMEOUT_VAL;
++	}
++
+ 	if (of_device_is_compatible(np, "fsl,p5040-esdhc") ||
+ 	    of_device_is_compatible(np, "fsl,p5020-esdhc") ||
+ 	    of_device_is_compatible(np, "fsl,p4080-esdhc") ||
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+index a6eacf2099c3..9b03d7e404f8 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+@@ -224,28 +224,23 @@ static int ena_setup_tx_resources(struct ena_adapter *adapter, int qid)
+ 	if (!tx_ring->tx_buffer_info) {
+ 		tx_ring->tx_buffer_info = vzalloc(size);
+ 		if (!tx_ring->tx_buffer_info)
+-			return -ENOMEM;
++			goto err_tx_buffer_info;
+ 	}
+ 
+ 	size = sizeof(u16) * tx_ring->ring_size;
+ 	tx_ring->free_tx_ids = vzalloc_node(size, node);
+ 	if (!tx_ring->free_tx_ids) {
+ 		tx_ring->free_tx_ids = vzalloc(size);
+-		if (!tx_ring->free_tx_ids) {
+-			vfree(tx_ring->tx_buffer_info);
+-			return -ENOMEM;
+-		}
++		if (!tx_ring->free_tx_ids)
++			goto err_free_tx_ids;
+ 	}
+ 
+ 	size = tx_ring->tx_max_header_size;
+ 	tx_ring->push_buf_intermediate_buf = vzalloc_node(size, node);
+ 	if (!tx_ring->push_buf_intermediate_buf) {
+ 		tx_ring->push_buf_intermediate_buf = vzalloc(size);
+-		if (!tx_ring->push_buf_intermediate_buf) {
+-			vfree(tx_ring->tx_buffer_info);
+-			vfree(tx_ring->free_tx_ids);
+-			return -ENOMEM;
+-		}
++		if (!tx_ring->push_buf_intermediate_buf)
++			goto err_push_buf_intermediate_buf;
+ 	}
+ 
+ 	/* Req id ring for TX out of order completions */
+@@ -259,6 +254,15 @@ static int ena_setup_tx_resources(struct ena_adapter *adapter, int qid)
+ 	tx_ring->next_to_clean = 0;
+ 	tx_ring->cpu = ena_irq->cpu;
+ 	return 0;
++
++err_push_buf_intermediate_buf:
++	vfree(tx_ring->free_tx_ids);
++	tx_ring->free_tx_ids = NULL;
++err_free_tx_ids:
++	vfree(tx_ring->tx_buffer_info);
++	tx_ring->tx_buffer_info = NULL;
++err_tx_buffer_info:
++	return -ENOMEM;
+ }
+ 
+ /* ena_free_tx_resources - Free I/O Tx Resources per Queue
+@@ -378,6 +382,7 @@ static int ena_setup_rx_resources(struct ena_adapter *adapter,
+ 		rx_ring->free_rx_ids = vzalloc(size);
+ 		if (!rx_ring->free_rx_ids) {
+ 			vfree(rx_ring->rx_buffer_info);
++			rx_ring->rx_buffer_info = NULL;
+ 			return -ENOMEM;
+ 		}
+ 	}
+@@ -2292,7 +2297,7 @@ static void ena_config_host_info(struct ena_com_dev *ena_dev,
+ 	host_info->bdf = (pdev->bus->number << 8) | pdev->devfn;
+ 	host_info->os_type = ENA_ADMIN_OS_LINUX;
+ 	host_info->kernel_ver = LINUX_VERSION_CODE;
+-	strncpy(host_info->kernel_ver_str, utsname()->version,
++	strlcpy(host_info->kernel_ver_str, utsname()->version,
+ 		sizeof(host_info->kernel_ver_str) - 1);
+ 	host_info->os_dist = 0;
+ 	strncpy(host_info->os_dist_str, utsname()->release,
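+
+The ena allocation rework above is the classic goto-unwind ladder: each
+failing allocation jumps to a label that frees everything allocated before
+it, in reverse order, and freed pointers are reset to NULL so later
+teardown paths cannot double-free them. (The strncpy() -> strlcpy() change
+in the same file additionally guarantees kernel_ver_str is NUL-terminated.)
+Skeleton of the ladder:
+
+	a = vzalloc(sz_a);
+	if (!a)
+		goto err_a;
+	b = vzalloc(sz_b);
+	if (!b)
+		goto err_b;
+	return 0;
+
+	err_b:
+		vfree(a);
+		a = NULL;	/* defuse any later free of the same pointer */
+	err_a:
+		return -ENOMEM;
+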
+diff --git a/drivers/net/ethernet/chelsio/cxgb3/l2t.h b/drivers/net/ethernet/chelsio/cxgb3/l2t.h
+index c2fd323c4078..ea75f275023f 100644
+--- a/drivers/net/ethernet/chelsio/cxgb3/l2t.h
++++ b/drivers/net/ethernet/chelsio/cxgb3/l2t.h
+@@ -75,8 +75,8 @@ struct l2t_data {
+ 	struct l2t_entry *rover;	/* starting point for next allocation */
+ 	atomic_t nfree;		/* number of free entries */
+ 	rwlock_t lock;
+-	struct l2t_entry l2tab[0];
+ 	struct rcu_head rcu_head;	/* to handle rcu cleanup */
++	struct l2t_entry l2tab[];
+ };
+ 
+ typedef void (*arp_failure_handler_func)(struct t3cdev * dev,
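+
+The l2t.h reordering is subtle but real: a zero-length l2tab[0] placed
+*before* rcu_head means &l2tab[0] and &rcu_head share the same offset, so
+storing table entries trampled the RCU callback state. A flexible array
+member must be the last member of its struct:
+
+	struct l2t_data {
+		/* ... fixed-size members ... */
+		struct rcu_head rcu_head;	/* fixed members first  */
+		struct l2t_entry l2tab[];	/* flexible array: last */
+	};
+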
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index 89179e316687..4bc0c357cb8e 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -6161,15 +6161,24 @@ static int __init cxgb4_init_module(void)
+ 
+ 	ret = pci_register_driver(&cxgb4_driver);
+ 	if (ret < 0)
+-		debugfs_remove(cxgb4_debugfs_root);
++		goto err_pci;
+ 
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	if (!inet6addr_registered) {
+-		register_inet6addr_notifier(&cxgb4_inet6addr_notifier);
+-		inet6addr_registered = true;
++		ret = register_inet6addr_notifier(&cxgb4_inet6addr_notifier);
++		if (ret)
++			pci_unregister_driver(&cxgb4_driver);
++		else
++			inet6addr_registered = true;
+ 	}
+ #endif
+ 
++	if (ret == 0)
++		return ret;
++
++err_pci:
++	debugfs_remove(cxgb4_debugfs_root);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index dc339dc1adb2..57cbaa38d247 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -2796,6 +2796,7 @@ int dpaa2_eth_set_hash(struct net_device *net_dev, u64 flags)
+ static int dpaa2_eth_set_cls(struct dpaa2_eth_priv *priv)
+ {
+ 	struct device *dev = priv->net_dev->dev.parent;
++	int err;
+ 
+ 	/* Check if we actually support Rx flow classification */
+ 	if (dpaa2_eth_has_legacy_dist(priv)) {
+@@ -2814,9 +2815,13 @@ static int dpaa2_eth_set_cls(struct dpaa2_eth_priv *priv)
+ 		return -EOPNOTSUPP;
+ 	}
+ 
++	err = dpaa2_eth_set_dist_key(priv->net_dev, DPAA2_ETH_RX_DIST_CLS, 0);
++	if (err)
++		return err;
++
+ 	priv->rx_cls_enabled = 1;
+ 
+-	return dpaa2_eth_set_dist_key(priv->net_dev, DPAA2_ETH_RX_DIST_CLS, 0);
++	return 0;
+ }
+ 
+ /* Bind the DPNI to its needed objects and resources: buffer pool, DPIOs,
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
+index 299b277bc7ae..589b7ee32bff 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
+@@ -107,7 +107,7 @@ struct hclgevf_mbx_arq_ring {
+ 	struct hclgevf_dev *hdev;
+ 	u32 head;
+ 	u32 tail;
+-	u32 count;
++	atomic_t count;
+ 	u16 msg_q[HCLGE_MBX_MAX_ARQ_MSG_NUM][HCLGE_MBX_MAX_ARQ_MSG_SIZE];
+ };
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 162cb9afa0e7..c7d310903319 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -2705,7 +2705,7 @@ int hns3_clean_rx_ring(
+ #define RCB_NOF_ALLOC_RX_BUFF_ONCE 16
+ 	struct net_device *netdev = ring->tqp->handle->kinfo.netdev;
+ 	int recv_pkts, recv_bds, clean_count, err;
+-	int unused_count = hns3_desc_unused(ring) - ring->pending_buf;
++	int unused_count = hns3_desc_unused(ring);
+ 	struct sk_buff *skb = ring->skb;
+ 	int num;
+ 
+@@ -2714,6 +2714,7 @@ int hns3_clean_rx_ring(
+ 
+ 	recv_pkts = 0, recv_bds = 0, clean_count = 0;
+ 	num -= unused_count;
++	unused_count -= ring->pending_buf;
+ 
+ 	while (recv_pkts < budget && recv_bds < num) {
+ 		/* Reuse or realloc buffers */
+@@ -3773,12 +3774,13 @@ static int hns3_recover_hw_addr(struct net_device *ndev)
+ 	struct netdev_hw_addr *ha, *tmp;
+ 	int ret = 0;
+ 
++	netif_addr_lock_bh(ndev);
+ 	/* go through and sync uc_addr entries to the device */
+ 	list = &ndev->uc;
+ 	list_for_each_entry_safe(ha, tmp, &list->list, list) {
+ 		ret = hns3_nic_uc_sync(ndev, ha->addr);
+ 		if (ret)
+-			return ret;
++			goto out;
+ 	}
+ 
+ 	/* go through and sync mc_addr entries to the device */
+@@ -3786,9 +3788,11 @@ static int hns3_recover_hw_addr(struct net_device *ndev)
+ 	list_for_each_entry_safe(ha, tmp, &list->list, list) {
+ 		ret = hns3_nic_mc_sync(ndev, ha->addr);
+ 		if (ret)
+-			return ret;
++			goto out;
+ 	}
+ 
++out:
++	netif_addr_unlock_bh(ndev);
+ 	return ret;
+ }
+ 
+@@ -3799,6 +3803,7 @@ static void hns3_remove_hw_addr(struct net_device *netdev)
+ 
+ 	hns3_nic_uc_unsync(netdev, netdev->dev_addr);
+ 
++	netif_addr_lock_bh(netdev);
+ 	/* go through and unsync uc_addr entries to the device */
+ 	list = &netdev->uc;
+ 	list_for_each_entry_safe(ha, tmp, &list->list, list)
+@@ -3809,6 +3814,8 @@ static void hns3_remove_hw_addr(struct net_device *netdev)
+ 	list_for_each_entry_safe(ha, tmp, &list->list, list)
+ 		if (ha->refcount > 1)
+ 			hns3_nic_mc_unsync(netdev, ha->addr);
++
++	netif_addr_unlock_bh(netdev);
+ }
+ 
+ static void hns3_clear_tx_ring(struct hns3_enet_ring *ring)
+@@ -3850,6 +3857,13 @@ static int hns3_clear_rx_ring(struct hns3_enet_ring *ring)
+ 		ring_ptr_move_fw(ring, next_to_use);
+ 	}
+ 
++	/* Free the pending skb in rx ring */
++	if (ring->skb) {
++		dev_kfree_skb_any(ring->skb);
++		ring->skb = NULL;
++		ring->pending_buf = 0;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+index 359d4731fb2d..ea94b5152963 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+@@ -483,6 +483,11 @@ static void hns3_get_stats(struct net_device *netdev,
+ 	struct hnae3_handle *h = hns3_get_handle(netdev);
+ 	u64 *p = data;
+ 
++	if (hns3_nic_resetting(netdev)) {
++		netdev_err(netdev, "dev resetting, could not get stats\n");
++		return;
++	}
++
+ 	if (!h->ae_algo->ops->get_stats || !h->ae_algo->ops->update_stats) {
+ 		netdev_err(netdev, "could not get any statistics\n");
+ 		return;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
+index 3a093a92eac5..d92e4af11b1f 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
+@@ -373,21 +373,26 @@ int hclge_cmd_init(struct hclge_dev *hdev)
+ 	 * reset may happen when lower level reset is being processed.
+ 	 */
+ 	if ((hclge_is_reset_pending(hdev))) {
+-		set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state);
+-		return -EBUSY;
++		ret = -EBUSY;
++		goto err_cmd_init;
+ 	}
+ 
+ 	ret = hclge_cmd_query_firmware_version(&hdev->hw, &version);
+ 	if (ret) {
+ 		dev_err(&hdev->pdev->dev,
+ 			"firmware version query failed %d\n", ret);
+-		return ret;
++		goto err_cmd_init;
+ 	}
+ 	hdev->fw_version = version;
+ 
+ 	dev_info(&hdev->pdev->dev, "The firmware version is %08x\n", version);
+ 
+ 	return 0;
++
++err_cmd_init:
++	set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state);
++
++	return ret;
+ }
+ 
+ static void hclge_cmd_uninit_regs(struct hclge_hw *hw)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+index aafc69f4bfdd..a7bbb6d3091a 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+@@ -1331,8 +1331,11 @@ int hclge_pause_setup_hw(struct hclge_dev *hdev, bool init)
+ 	ret = hclge_pfc_setup_hw(hdev);
+ 	if (init && ret == -EOPNOTSUPP)
+ 		dev_warn(&hdev->pdev->dev, "GE MAC does not support pfc\n");
+-	else
++	else if (ret) {
++		dev_err(&hdev->pdev->dev, "config pfc failed! ret = %d\n",
++			ret);
+ 		return ret;
++	}
+ 
+ 	return hclge_tm_bp_setup(hdev);
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
+index 9441b453d38d..382ecb15e743 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
+@@ -327,7 +327,7 @@ int hclgevf_cmd_init(struct hclgevf_dev *hdev)
+ 	hdev->arq.hdev = hdev;
+ 	hdev->arq.head = 0;
+ 	hdev->arq.tail = 0;
+-	hdev->arq.count = 0;
++	atomic_set(&hdev->arq.count, 0);
+ 	hdev->hw.cmq.csq.next_to_clean = 0;
+ 	hdev->hw.cmq.csq.next_to_use = 0;
+ 	hdev->hw.cmq.crq.next_to_clean = 0;
+@@ -344,8 +344,8 @@ int hclgevf_cmd_init(struct hclgevf_dev *hdev)
+ 	 * reset may happen when lower level reset is being processed.
+ 	 */
+ 	if (hclgevf_is_reset_pending(hdev)) {
+-		set_bit(HCLGEVF_STATE_CMD_DISABLE, &hdev->state);
+-		return -EBUSY;
++		ret = -EBUSY;
++		goto err_cmd_init;
+ 	}
+ 
+ 	/* get firmware version */
+@@ -353,13 +353,18 @@ int hclgevf_cmd_init(struct hclgevf_dev *hdev)
+ 	if (ret) {
+ 		dev_err(&hdev->pdev->dev,
+ 			"failed(%d) to query firmware version\n", ret);
+-		return ret;
++		goto err_cmd_init;
+ 	}
+ 	hdev->fw_version = version;
+ 
+ 	dev_info(&hdev->pdev->dev, "The firmware version is %08x\n", version);
+ 
+ 	return 0;
++
++err_cmd_init:
++	set_bit(HCLGEVF_STATE_CMD_DISABLE, &hdev->state);
++
++	return ret;
+ }
+ 
+ static void hclgevf_cmd_uninit_regs(struct hclgevf_hw *hw)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 8bc28e6f465f..8dd7fef863f6 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -2007,9 +2007,15 @@ static int hclgevf_set_alive(struct hnae3_handle *handle, bool alive)
+ static int hclgevf_client_start(struct hnae3_handle *handle)
+ {
+ 	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
++	int ret;
++
++	ret = hclgevf_set_alive(handle, true);
++	if (ret)
++		return ret;
+ 
+ 	mod_timer(&hdev->keep_alive_timer, jiffies + 2 * HZ);
+-	return hclgevf_set_alive(handle, true);
++
++	return 0;
+ }
+ 
+ static void hclgevf_client_stop(struct hnae3_handle *handle)
+@@ -2051,6 +2057,10 @@ static void hclgevf_state_uninit(struct hclgevf_dev *hdev)
+ {
+ 	set_bit(HCLGEVF_STATE_DOWN, &hdev->state);
+ 
++	if (hdev->keep_alive_timer.function)
++		del_timer_sync(&hdev->keep_alive_timer);
++	if (hdev->keep_alive_task.func)
++		cancel_work_sync(&hdev->keep_alive_task);
+ 	if (hdev->service_timer.function)
+ 		del_timer_sync(&hdev->service_timer);
+ 	if (hdev->service_task.func)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
+index 7dc3c9f79169..4f2c77283cb4 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
+@@ -208,7 +208,8 @@ void hclgevf_mbx_handler(struct hclgevf_dev *hdev)
+ 			/* we will drop the async msg if we find ARQ as full
+ 			 * and continue with next message
+ 			 */
+-			if (hdev->arq.count >= HCLGE_MBX_MAX_ARQ_MSG_NUM) {
++			if (atomic_read(&hdev->arq.count) >=
++			    HCLGE_MBX_MAX_ARQ_MSG_NUM) {
+ 				dev_warn(&hdev->pdev->dev,
+ 					 "Async Q full, dropping msg(%d)\n",
+ 					 req->msg[1]);
+@@ -220,7 +221,7 @@ void hclgevf_mbx_handler(struct hclgevf_dev *hdev)
+ 			memcpy(&msg_q[0], req->msg,
+ 			       HCLGE_MBX_MAX_ARQ_MSG_SIZE * sizeof(u16));
+ 			hclge_mbx_tail_ptr_move_arq(hdev->arq);
+-			hdev->arq.count++;
++			atomic_inc(&hdev->arq.count);
+ 
+ 			hclgevf_mbx_task_schedule(hdev);
+ 
+@@ -308,7 +309,7 @@ void hclgevf_mbx_async_handler(struct hclgevf_dev *hdev)
+ 		}
+ 
+ 		hclge_mbx_head_ptr_move_arq(hdev->arq);
+-		hdev->arq.count--;
++		atomic_dec(&hdev->arq.count);
+ 		msg_q = hdev->arq.msg_q[hdev->arq.head];
+ 	}
+ }
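+
+The arq.count conversion matters because the field is incremented from the
+mailbox interrupt path and decremented from the async task: a plain u32
+"count++" is a non-atomic load/modify/store, so concurrent updates could be
+lost and the "Async Q full" test could be wrong in either direction.
+atomic_t makes each update one indivisible RMW (drop_msg() below is a
+hypothetical stand-in):
+
+	if (atomic_read(&hdev->arq.count) >= HCLGE_MBX_MAX_ARQ_MSG_NUM)
+		drop_msg();			/* ring full           */
+	else
+		atomic_inc(&hdev->arq.count);	/* producer (IRQ side) */
+
+	atomic_dec(&hdev->arq.count);		/* consumer (task side) */
+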
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index 7acc61e4f645..c10c9d7eadaa 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -7350,7 +7350,7 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP);
+ 
+-	if (pci_dev_run_wake(pdev))
++	if (pci_dev_run_wake(pdev) && hw->mac.type < e1000_pch_cnp)
+ 		pm_runtime_put_noidle(&pdev->dev);
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index b1c265012c8a..ac9fcb097689 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -2654,6 +2654,10 @@ void i40e_vlan_stripping_enable(struct i40e_vsi *vsi)
+ 	struct i40e_vsi_context ctxt;
+ 	i40e_status ret;
+ 
++	/* Don't modify stripping options if a port VLAN is active */
++	if (vsi->info.pvid)
++		return;
++
+ 	if ((vsi->info.valid_sections &
+ 	     cpu_to_le16(I40E_AQ_VSI_PROP_VLAN_VALID)) &&
+ 	    ((vsi->info.port_vlan_flags & I40E_AQ_VSI_PVLAN_MODE_MASK) == 0))
+@@ -2684,6 +2688,10 @@ void i40e_vlan_stripping_disable(struct i40e_vsi *vsi)
+ 	struct i40e_vsi_context ctxt;
+ 	i40e_status ret;
+ 
++	/* Don't modify stripping options if a port VLAN is active */
++	if (vsi->info.pvid)
++		return;
++
+ 	if ((vsi->info.valid_sections &
+ 	     cpu_to_le16(I40E_AQ_VSI_PROP_VLAN_VALID)) &&
+ 	    ((vsi->info.port_vlan_flags & I40E_AQ_VSI_PVLAN_EMOD_MASK) ==
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 831d52bc3c9a..2b0362c827e9 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -181,7 +181,7 @@ static inline bool i40e_vc_isvalid_vsi_id(struct i40e_vf *vf, u16 vsi_id)
+  * check for the valid queue id
+  **/
+ static inline bool i40e_vc_isvalid_queue_id(struct i40e_vf *vf, u16 vsi_id,
+-					    u8 qid)
++					    u16 qid)
+ {
+ 	struct i40e_pf *pf = vf->pf;
+ 	struct i40e_vsi *vsi = i40e_find_vsi_from_id(pf, vsi_id);
+@@ -2454,8 +2454,10 @@ error_param:
+ 				      (u8 *)&stats, sizeof(stats));
+ }
+ 
+-/* If the VF is not trusted restrict the number of MAC/VLAN it can program */
+-#define I40E_VC_MAX_MAC_ADDR_PER_VF 12
++/* If the VF is not trusted restrict the number of MAC/VLAN it can program
++ * MAC filters: 16 for multicast, 1 for MAC, 1 for broadcast
++ */
++#define I40E_VC_MAX_MAC_ADDR_PER_VF (16 + 1 + 1)
+ #define I40E_VC_MAX_VLAN_PER_VF 8
+ 
+ /**
+@@ -3374,7 +3376,7 @@ static int i40e_vc_add_cloud_filter(struct i40e_vf *vf, u8 *msg)
+ 
+ 	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+ 		aq_ret = I40E_ERR_PARAM;
+-		goto err;
++		goto err_out;
+ 	}
+ 
+ 	if (!vf->adq_enabled) {
+@@ -3382,7 +3384,7 @@ static int i40e_vc_add_cloud_filter(struct i40e_vf *vf, u8 *msg)
+ 			 "VF %d: ADq is not enabled, can't apply cloud filter\n",
+ 			 vf->vf_id);
+ 		aq_ret = I40E_ERR_PARAM;
+-		goto err;
++		goto err_out;
+ 	}
+ 
+ 	if (i40e_validate_cloud_filter(vf, vcf)) {
+@@ -3390,7 +3392,7 @@ static int i40e_vc_add_cloud_filter(struct i40e_vf *vf, u8 *msg)
+ 			 "VF %d: Invalid input/s, can't apply cloud filter\n",
+ 			 vf->vf_id);
+ 		aq_ret = I40E_ERR_PARAM;
+-		goto err;
++		goto err_out;
+ 	}
+ 
+ 	cfilter = kzalloc(sizeof(*cfilter), GFP_KERNEL);
+@@ -3451,13 +3453,17 @@ static int i40e_vc_add_cloud_filter(struct i40e_vf *vf, u8 *msg)
+ 			"VF %d: Failed to add cloud filter, err %s aq_err %s\n",
+ 			vf->vf_id, i40e_stat_str(&pf->hw, ret),
+ 			i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status));
+-		goto err;
++		goto err_free;
+ 	}
+ 
+ 	INIT_HLIST_NODE(&cfilter->cloud_node);
+ 	hlist_add_head(&cfilter->cloud_node, &vf->cloud_filter_list);
++	/* release the local pointer; the collection now owns the filter */
++	cfilter = NULL;
+ 	vf->num_cloud_filters++;
+-err:
++err_free:
++	kfree(cfilter);
++err_out:
+ 	return i40e_vc_send_resp_to_vf(vf, VIRTCHNL_OP_ADD_CLOUD_FILTER,
+ 				       aq_ret);
+ }
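
The error-path rework in the i40e cloud-filter hunk is a common ownership idiom: once the filter has been linked into the VF's list, the local pointer is NULLed so the shared err_free label's kfree() degenerates to a no-op, while earlier failures still free the allocation. A hedged userspace sketch of the same unwind structure (list handling and names simplified; not the driver's code):

#include <stdlib.h>

struct filter { struct filter *next; int id; };

int add_filter(struct filter **list, int id)
{
	struct filter *f;
	int ret = 0;

	if (id < 0) {
		ret = -1;
		goto err_out;		/* nothing allocated yet */
	}

	f = calloc(1, sizeof(*f));
	if (!f) {
		ret = -1;
		goto err_out;
	}

	if (id > 1024) {		/* illustrative post-allocation failure */
		ret = -1;
		goto err_free;
	}

	f->id = id;
	f->next = *list;
	*list = f;			/* the list now owns the allocation */
	f = NULL;			/* drop the local reference... */
err_free:
	free(f);			/* ...so this is free(NULL), a no-op */
err_out:
	return ret;
}
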
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index 89440775aea1..6af5bd5883ca 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -277,6 +277,7 @@ struct ice_q_vector {
+ 	 * value to the device
+ 	 */
+ 	u8 intrl;
++	u8 itr_countdown;	/* when 0 should adjust adaptive ITR */
+ } ____cacheline_internodealigned_in_smp;
+ 
+ enum ice_pf_flags {
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index fa61203bee26..b710545cf7d1 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -1848,6 +1848,10 @@ int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi)
+ 	 */
+ 	ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL;
+ 
++	/* Preserve existing VLAN strip setting */
++	ctxt->info.vlan_flags |= (vsi->info.vlan_flags &
++				  ICE_AQ_VSI_VLAN_EMOD_M);
++
+ 	ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID);
+ 
+ 	status = ice_update_vsi(hw, vsi->idx, ctxt, NULL);
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 47cc3f905b7f..6ec73864019c 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -342,6 +342,10 @@ ice_prepare_for_reset(struct ice_pf *pf)
+ {
+ 	struct ice_hw *hw = &pf->hw;
+ 
++	/* already prepared for reset */
++	if (test_bit(__ICE_PREPARED_FOR_RESET, pf->state))
++		return;
++
+ 	/* Notify VFs of impending reset */
+ 	if (ice_check_sq_alive(hw, &hw->mailboxq))
+ 		ice_vc_notify_reset(pf);
+@@ -416,10 +420,15 @@ static void ice_reset_subtask(struct ice_pf *pf)
+ 	 * for the reset now), poll for reset done, rebuild and return.
+ 	 */
+ 	if (test_bit(__ICE_RESET_OICR_RECV, pf->state)) {
+-		clear_bit(__ICE_GLOBR_RECV, pf->state);
+-		clear_bit(__ICE_CORER_RECV, pf->state);
+-		if (!test_bit(__ICE_PREPARED_FOR_RESET, pf->state))
+-			ice_prepare_for_reset(pf);
++		/* Perform the largest reset requested */
++		if (test_and_clear_bit(__ICE_CORER_RECV, pf->state))
++			reset_type = ICE_RESET_CORER;
++		if (test_and_clear_bit(__ICE_GLOBR_RECV, pf->state))
++			reset_type = ICE_RESET_GLOBR;
++		/* return if no valid reset type requested */
++		if (reset_type == ICE_RESET_INVAL)
++			return;
++		ice_prepare_for_reset(pf);
+ 
+ 		/* make sure we are ready to rebuild */
+ 		if (ice_check_reset(&pf->hw)) {
+@@ -2545,6 +2554,9 @@ static int ice_set_features(struct net_device *netdev,
+ 	struct ice_vsi *vsi = np->vsi;
+ 	int ret = 0;
+ 
++	/* Multiple features can be changed in one call so keep features in
++	 * separate if/else statements to guarantee each feature is checked
++	 */
+ 	if (features & NETIF_F_RXHASH && !(netdev->features & NETIF_F_RXHASH))
+ 		ret = ice_vsi_manage_rss_lut(vsi, true);
+ 	else if (!(features & NETIF_F_RXHASH) &&
+@@ -2557,8 +2569,9 @@ static int ice_set_features(struct net_device *netdev,
+ 	else if (!(features & NETIF_F_HW_VLAN_CTAG_RX) &&
+ 		 (netdev->features & NETIF_F_HW_VLAN_CTAG_RX))
+ 		ret = ice_vsi_manage_vlan_stripping(vsi, false);
+-	else if ((features & NETIF_F_HW_VLAN_CTAG_TX) &&
+-		 !(netdev->features & NETIF_F_HW_VLAN_CTAG_TX))
++
++	if ((features & NETIF_F_HW_VLAN_CTAG_TX) &&
++	    !(netdev->features & NETIF_F_HW_VLAN_CTAG_TX))
+ 		ret = ice_vsi_manage_vlan_insertion(vsi);
+ 	else if (!(features & NETIF_F_HW_VLAN_CTAG_TX) &&
+ 		 (netdev->features & NETIF_F_HW_VLAN_CTAG_TX))
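
The reset-subtask hunk above replaces unconditional clear_bit() calls with test_and_clear_bit() so the handler consumes each pending request exactly once and lets a GLOBR request, checked last, override a CORER one. A sketch of test-and-clear semantics with C11 atomics (bit layout and names are illustrative; the kernel helper operates on its own bitmap type):

#include <stdatomic.h>
#include <stdbool.h>

enum reset_type { RESET_INVAL, RESET_CORER, RESET_GLOBR };

#define CORER_RECV_BIT 0
#define GLOBR_RECV_BIT 1

/* Userspace analogue of test_and_clear_bit(): atomically clear the bit
 * and report whether it was set.
 */
bool test_and_clear_bit_u32(atomic_uint *word, unsigned int bit)
{
	unsigned int old = atomic_fetch_and(word, ~(1u << bit));

	return old & (1u << bit);
}

/* Consume pending requests; the widest reset wins. */
enum reset_type pick_reset(atomic_uint *state)
{
	enum reset_type reset_type = RESET_INVAL;

	if (test_and_clear_bit_u32(state, CORER_RECV_BIT))
		reset_type = RESET_CORER;
	if (test_and_clear_bit_u32(state, GLOBR_RECV_BIT))
		reset_type = RESET_GLOBR;
	return reset_type;	/* RESET_INVAL if nothing was requested */
}
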
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
+index c289d97f477d..851030ad5016 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
+@@ -1048,18 +1048,257 @@ static int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
+ 	return failure ? budget : (int)total_rx_pkts;
+ }
+ 
++static unsigned int ice_itr_divisor(struct ice_port_info *pi)
++{
++	switch (pi->phy.link_info.link_speed) {
++	case ICE_AQ_LINK_SPEED_40GB:
++		return ICE_ITR_ADAPTIVE_MIN_INC * 1024;
++	case ICE_AQ_LINK_SPEED_25GB:
++	case ICE_AQ_LINK_SPEED_20GB:
++		return ICE_ITR_ADAPTIVE_MIN_INC * 512;
++	case ICE_AQ_LINK_SPEED_100MB:
++		return ICE_ITR_ADAPTIVE_MIN_INC * 32;
++	default:
++		return ICE_ITR_ADAPTIVE_MIN_INC * 256;
++	}
++}
++
++/**
++ * ice_update_itr - update the adaptive ITR value based on statistics
++ * @q_vector: structure containing interrupt and ring information
++ * @rc: structure containing ring performance data
++ *
++ * Stores a new ITR value based on packets and byte
++ * counts during the last interrupt.  The advantage of per interrupt
++ * computation is faster updates and more accurate ITR for the current
++ * traffic pattern.  Constants in this function were computed
++ * based on theoretical maximum wire speed and thresholds were set based
++ * on testing data as well as attempting to minimize response time
++ * while increasing bulk throughput.
++ */
++static void
++ice_update_itr(struct ice_q_vector *q_vector, struct ice_ring_container *rc)
++{
++	unsigned int avg_wire_size, packets, bytes, itr;
++	unsigned long next_update = jiffies;
++	bool container_is_rx;
++
++	if (!rc->ring || !ITR_IS_DYNAMIC(rc->itr_setting))
++		return;
++
++	/* If itr_countdown is set it means we programmed an ITR within
++	 * the last 4 interrupt cycles. This has a side effect of us
++	 * potentially firing an early interrupt. In order to work around
++	 * this we need to throw out any data received for a few
++	 * interrupts following the update.
++	 */
++	if (q_vector->itr_countdown) {
++		itr = rc->target_itr;
++		goto clear_counts;
++	}
++
++	container_is_rx = (&q_vector->rx == rc);
++	/* For Rx we want to push the delay up and default to low latency.
++	 * for Tx we want to pull the delay down and default to high latency.
++	 */
++	itr = container_is_rx ?
++		ICE_ITR_ADAPTIVE_MIN_USECS | ICE_ITR_ADAPTIVE_LATENCY :
++		ICE_ITR_ADAPTIVE_MAX_USECS | ICE_ITR_ADAPTIVE_LATENCY;
++
++	/* If we haven't updated within the last 1 - 2 jiffies we can assume
++	 * that either packets are coming in so slow there hasn't been
++	 * any work, or that there is so much work that NAPI is dealing
++	 * with interrupt moderation and we don't need to do anything.
++	 */
++	if (time_after(next_update, rc->next_update))
++		goto clear_counts;
++
++	packets = rc->total_pkts;
++	bytes = rc->total_bytes;
++
++	if (container_is_rx) {
++		/* For Rx, if there are 1 to 4 packets and bytes are less than
++		 * 9000 assume insufficient data to use bulk rate limiting
++		 * approach unless Tx is already in bulk rate limiting. We
++		 * are likely latency driven.
++		 */
++		if (packets && packets < 4 && bytes < 9000 &&
++		    (q_vector->tx.target_itr & ICE_ITR_ADAPTIVE_LATENCY)) {
++			itr = ICE_ITR_ADAPTIVE_LATENCY;
++			goto adjust_by_size;
++		}
++	} else if (packets < 4) {
++		/* If we have Tx and Rx ITR maxed and Tx ITR is running in
++		 * bulk mode and we are receiving 4 or fewer packets just
++		 * reset the ITR_ADAPTIVE_LATENCY bit for latency mode so
++		 * that the Rx can relax.
++		 */
++		if (rc->target_itr == ICE_ITR_ADAPTIVE_MAX_USECS &&
++		    (q_vector->rx.target_itr & ICE_ITR_MASK) ==
++		    ICE_ITR_ADAPTIVE_MAX_USECS)
++			goto clear_counts;
++	} else if (packets > 32) {
++		/* If we have processed over 32 packets in a single interrupt
++		 * for Tx assume we need to switch over to "bulk" mode.
++		 */
++		rc->target_itr &= ~ICE_ITR_ADAPTIVE_LATENCY;
++	}
++
++	/* We have no packets to actually measure against. This means
++	 * either one of the other queues on this vector is active or
++	 * we are a Tx queue doing TSO with too high of an interrupt rate.
++	 *
++	 * Between 4 and 56 we can assume that our current interrupt delay
++	 * is only slightly too low. As such we should increase it by a small
++	 * fixed amount.
++	 */
++	if (packets < 56) {
++		itr = rc->target_itr + ICE_ITR_ADAPTIVE_MIN_INC;
++		if ((itr & ICE_ITR_MASK) > ICE_ITR_ADAPTIVE_MAX_USECS) {
++			itr &= ICE_ITR_ADAPTIVE_LATENCY;
++			itr += ICE_ITR_ADAPTIVE_MAX_USECS;
++		}
++		goto clear_counts;
++	}
++
++	if (packets <= 256) {
++		itr = min(q_vector->tx.current_itr, q_vector->rx.current_itr);
++		itr &= ICE_ITR_MASK;
++
++		/* Between 56 and 112 is our "goldilocks" zone where we are
++		 * working out "just right". Just report that our current
++		 * ITR is good for us.
++		 */
++		if (packets <= 112)
++			goto clear_counts;
++
++		/* If packet count is 128 or greater we are likely looking
++		 * at a slight overrun of the delay we want. Try halving
++		 * our delay to see if that will cut the number of packets
++		 * in half per interrupt.
++		 */
++		itr >>= 1;
++		itr &= ICE_ITR_MASK;
++		if (itr < ICE_ITR_ADAPTIVE_MIN_USECS)
++			itr = ICE_ITR_ADAPTIVE_MIN_USECS;
++
++		goto clear_counts;
++	}
++
++	/* The paths below assume we are dealing with a bulk ITR since
++	 * number of packets is greater than 256. We are just going to have
++	 * to compute a value and try to bring the count under control,
++	 * though for smaller packet sizes there isn't much we can do as
++	 * NAPI polling will likely be kicking in sooner rather than later.
++	 */
++	itr = ICE_ITR_ADAPTIVE_BULK;
++
++adjust_by_size:
++	/* If packet counts are 256 or greater we can assume we have a gross
++	 * overestimation of what the rate should be. Instead of trying to fine
++	 * tune it just use the formula below to try and dial in an exact value
++	 * given the current packet size of the frame.
++	 */
++	avg_wire_size = bytes / packets;
++
++	/* The following is a crude approximation of:
++	 *  wmem_default / (size + overhead) = desired_pkts_per_int
++	 *  rate / bits_per_byte / (size + ethernet overhead) = pkt_rate
++	 *  (desired_pkt_rate / pkt_rate) * usecs_per_sec = ITR value
++	 *
++	 * Assuming wmem_default is 212992 and overhead is 640 bytes per
++	 * packet, (256 skb, 64 headroom, 320 shared info), we can reduce the
++	 * formula down to
++	 *
++	 *  (170 * (size + 24)) / (size + 640) = ITR
++	 *
++	 * We first do some math on the packet size and then finally bitshift
++	 * by 8 after rounding up. We also have to account for PCIe link speed
++	 * difference as ITR scales based on this.
++	 */
++	if (avg_wire_size <= 60) {
++		/* Start at 250k ints/sec */
++		avg_wire_size = 4096;
++	} else if (avg_wire_size <= 380) {
++		/* 250K ints/sec to 60K ints/sec */
++		avg_wire_size *= 40;
++		avg_wire_size += 1696;
++	} else if (avg_wire_size <= 1084) {
++		/* 60K ints/sec to 36K ints/sec */
++		avg_wire_size *= 15;
++		avg_wire_size += 11452;
++	} else if (avg_wire_size <= 1980) {
++		/* 36K ints/sec to 30K ints/sec */
++		avg_wire_size *= 5;
++		avg_wire_size += 22420;
++	} else {
++		/* plateau at a limit of 30K ints/sec */
++		avg_wire_size = 32256;
++	}
++
++	/* If we are in low latency mode, halve our delay, which doubles the
++	 * rate to somewhere between 100K and 16K ints/sec
++	 */
++	if (itr & ICE_ITR_ADAPTIVE_LATENCY)
++		avg_wire_size >>= 1;
++
++	/* Resultant value is 256 times larger than it needs to be. This
++	 * gives us room to adjust the value as needed to either increase
++	 * or decrease the value based on link speeds of 10G, 2.5G, 1G, etc.
++	 *
++	 * Use addition as we have already recorded the new latency flag
++	 * for the ITR value.
++	 */
++	itr += DIV_ROUND_UP(avg_wire_size,
++			    ice_itr_divisor(q_vector->vsi->port_info)) *
++	       ICE_ITR_ADAPTIVE_MIN_INC;
++
++	if ((itr & ICE_ITR_MASK) > ICE_ITR_ADAPTIVE_MAX_USECS) {
++		itr &= ICE_ITR_ADAPTIVE_LATENCY;
++		itr += ICE_ITR_ADAPTIVE_MAX_USECS;
++	}
++
++clear_counts:
++	/* write back value */
++	rc->target_itr = itr;
++
++	/* next update should occur within next jiffy */
++	rc->next_update = next_update + 1;
++
++	rc->total_bytes = 0;
++	rc->total_pkts = 0;
++}
++
+ /**
+  * ice_buildreg_itr - build value for writing to the GLINT_DYN_CTL register
+  * @itr_idx: interrupt throttling index
+- * @reg_itr: interrupt throttling value adjusted based on ITR granularity
++ * @itr: interrupt throttling value in usecs
+  */
+-static u32 ice_buildreg_itr(int itr_idx, u16 reg_itr)
++static u32 ice_buildreg_itr(int itr_idx, u16 itr)
+ {
++	/* The itr value is reported in microseconds, and the register value is
++	 * recorded in 2 microsecond units. For this reason we only need to
++	 * shift by the GLINT_DYN_CTL_INTERVAL_S - ICE_ITR_GRAN_S to apply this
++	 * granularity as a shift instead of division. The mask makes sure the
++	 * ITR value is never odd so we don't accidentally write into the field
++	 * prior to the ITR field.
++	 */
++	itr &= ICE_ITR_MASK;
++
+ 	return GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
+ 		(itr_idx << GLINT_DYN_CTL_ITR_INDX_S) |
+-		(reg_itr << GLINT_DYN_CTL_INTERVAL_S);
++		(itr << (GLINT_DYN_CTL_INTERVAL_S - ICE_ITR_GRAN_S));
+ }
+ 
++/* The act of updating the ITR will cause it to immediately trigger. In order
++ * to prevent this from throwing off adaptive update statistics we defer the
++ * update so that it can only happen so often. So after either Tx or Rx are
++ * updated we make the adaptive scheme wait until either the ITR completely
++ * expires via the next_update expiration or we have been through at least
++ * 3 interrupts.
++ */
++#define ITR_COUNTDOWN_START 3
++
+ /**
+  * ice_update_ena_itr - Update ITR and re-enable MSIX interrupt
+  * @vsi: the VSI associated with the q_vector
+@@ -1068,10 +1307,14 @@ static u32 ice_buildreg_itr(int itr_idx, u16 reg_itr)
+ static void
+ ice_update_ena_itr(struct ice_vsi *vsi, struct ice_q_vector *q_vector)
+ {
+-	struct ice_hw *hw = &vsi->back->hw;
+-	struct ice_ring_container *rc;
++	struct ice_ring_container *tx = &q_vector->tx;
++	struct ice_ring_container *rx = &q_vector->rx;
+ 	u32 itr_val;
+ 
++	/* This will do nothing if dynamic updates are not enabled */
++	ice_update_itr(q_vector, tx);
++	ice_update_itr(q_vector, rx);
++
+ 	/* This block of logic allows us to get away with only updating
+ 	 * one ITR value with each interrupt. The idea is to perform a
+ 	 * pseudo-lazy update with the following criteria.
+@@ -1080,35 +1323,36 @@ ice_update_ena_itr(struct ice_vsi *vsi, struct ice_q_vector *q_vector)
+ 	 * 2. If we must reduce an ITR that is given highest priority.
+ 	 * 3. We then give priority to increasing ITR based on amount.
+ 	 */
+-	if (q_vector->rx.target_itr < q_vector->rx.current_itr) {
+-		rc = &q_vector->rx;
++	if (rx->target_itr < rx->current_itr) {
+ 		/* Rx ITR needs to be reduced, this is highest priority */
+-		itr_val = ice_buildreg_itr(rc->itr_idx, rc->target_itr);
+-		rc->current_itr = rc->target_itr;
+-	} else if ((q_vector->tx.target_itr < q_vector->tx.current_itr) ||
+-		   ((q_vector->rx.target_itr - q_vector->rx.current_itr) <
+-		    (q_vector->tx.target_itr - q_vector->tx.current_itr))) {
+-		rc = &q_vector->tx;
++		itr_val = ice_buildreg_itr(rx->itr_idx, rx->target_itr);
++		rx->current_itr = rx->target_itr;
++		q_vector->itr_countdown = ITR_COUNTDOWN_START;
++	} else if ((tx->target_itr < tx->current_itr) ||
++		   ((rx->target_itr - rx->current_itr) <
++		    (tx->target_itr - tx->current_itr))) {
+ 		/* Tx ITR needs to be reduced, this is second priority
+ 		 * Tx ITR needs to be increased more than Rx, fourth priority
+ 		 */
+-		itr_val = ice_buildreg_itr(rc->itr_idx, rc->target_itr);
+-		rc->current_itr = rc->target_itr;
+-	} else if (q_vector->rx.current_itr != q_vector->rx.target_itr) {
+-		rc = &q_vector->rx;
++		itr_val = ice_buildreg_itr(tx->itr_idx, tx->target_itr);
++		tx->current_itr = tx->target_itr;
++		q_vector->itr_countdown = ITR_COUNTDOWN_START;
++	} else if (rx->current_itr != rx->target_itr) {
+ 		/* Rx ITR needs to be increased, third priority */
+-		itr_val = ice_buildreg_itr(rc->itr_idx, rc->target_itr);
+-		rc->current_itr = rc->target_itr;
++		itr_val = ice_buildreg_itr(rx->itr_idx, rx->target_itr);
++		rx->current_itr = rx->target_itr;
++		q_vector->itr_countdown = ITR_COUNTDOWN_START;
+ 	} else {
+ 		/* Still have to re-enable the interrupts */
+ 		itr_val = ice_buildreg_itr(ICE_ITR_NONE, 0);
++		if (q_vector->itr_countdown)
++			q_vector->itr_countdown--;
+ 	}
+ 
+-	if (!test_bit(__ICE_DOWN, vsi->state)) {
+-		int vector = vsi->hw_base_vector + q_vector->v_idx;
+-
+-		wr32(hw, GLINT_DYN_CTL(vector), itr_val);
+-	}
++	if (!test_bit(__ICE_DOWN, vsi->state))
++		wr32(&vsi->back->hw,
++		     GLINT_DYN_CTL(vsi->hw_base_vector + q_vector->v_idx),
++		     itr_val);
+ }
+ 
+ /**
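
The adjust_by_size comment in the new ice_update_itr() collapses a desired-packets-per-interrupt calculation into the closed form (170 * (size + 24)) / (size + 640) and then approximates it with piecewise linear segments, keeping the intermediate value scaled by 256 for the later link-speed division. A small arithmetic sketch comparing the two forms (constants copied from the comment above; illustration only, not driver code):

#include <stdio.h>

/* Closed form from the comment, kept at the same 256x scale as the
 * driver's intermediate value.
 */
unsigned int itr_closed_form_x256(unsigned int size)
{
	return (256u * 170u * (size + 24)) / (size + 640);
}

/* The piecewise approximation as coded in ice_update_itr() */
unsigned int itr_piecewise_x256(unsigned int size)
{
	if (size <= 60)
		return 4096;			/* ~250K ints/sec floor */
	if (size <= 380)
		return size * 40 + 1696;
	if (size <= 1084)
		return size * 15 + 11452;
	if (size <= 1980)
		return size * 5 + 22420;
	return 32256;				/* ~30K ints/sec plateau */
}

int main(void)
{
	unsigned int sizes[] = { 64, 380, 1084, 1500 };

	for (unsigned int i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("size %4u: closed %5u  piecewise %5u\n", sizes[i],
		       itr_closed_form_x256(sizes[i]),
		       itr_piecewise_x256(sizes[i]));
	return 0;
}

The segments track the closed form closely enough for interrupt throttling (the comment itself calls it a crude approximation), which is why the hot path can use multiply-add steps instead of a division.
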
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
+index fc358ea81816..74a031fbd732 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
+@@ -128,6 +128,12 @@ enum ice_rx_dtype {
+ #define ICE_ITR_MASK		0x1FFE	/* ITR register value alignment mask */
+ #define ITR_REG_ALIGN(setting)	__ALIGN_MASK(setting, ~ICE_ITR_MASK)
+ 
++#define ICE_ITR_ADAPTIVE_MIN_INC	0x0002
++#define ICE_ITR_ADAPTIVE_MIN_USECS	0x0002
++#define ICE_ITR_ADAPTIVE_MAX_USECS	0x00FA
++#define ICE_ITR_ADAPTIVE_LATENCY	0x8000
++#define ICE_ITR_ADAPTIVE_BULK		0x0000
++
+ #define ICE_DFLT_INTRL	0
+ 
+ /* Legacy or Advanced Mode Queue */
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+index 57155b4a59dc..8b1ee9f3a39d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+@@ -764,6 +764,7 @@ static void ice_cleanup_and_realloc_vf(struct ice_vf *vf)
+ bool ice_reset_all_vfs(struct ice_pf *pf, bool is_vflr)
+ {
+ 	struct ice_hw *hw = &pf->hw;
++	struct ice_vf *vf;
+ 	int v, i;
+ 
+ 	/* If we don't have any VFs, then there is nothing to reset */
+@@ -778,12 +779,17 @@ bool ice_reset_all_vfs(struct ice_pf *pf, bool is_vflr)
+ 	for (v = 0; v < pf->num_alloc_vfs; v++)
+ 		ice_trigger_vf_reset(&pf->vf[v], is_vflr);
+ 
+-	/* Call Disable LAN Tx queue AQ call with VFR bit set and 0
+-	 * queues to inform Firmware about VF reset.
+-	 */
+-	for (v = 0; v < pf->num_alloc_vfs; v++)
+-		ice_dis_vsi_txq(pf->vsi[0]->port_info, 0, NULL, NULL,
+-				ICE_VF_RESET, v, NULL);
++	for (v = 0; v < pf->num_alloc_vfs; v++) {
++		struct ice_vsi *vsi;
++
++		vf = &pf->vf[v];
++		vsi = pf->vsi[vf->lan_vsi_idx];
++		if (test_bit(ICE_VF_STATE_ENA, vf->vf_states)) {
++			ice_vsi_stop_lan_tx_rings(vsi, ICE_VF_RESET, vf->vf_id);
++			ice_vsi_stop_rx_rings(vsi);
++			clear_bit(ICE_VF_STATE_ENA, vf->vf_states);
++		}
++	}
+ 
+ 	/* HW requires some time to make sure it can flush the FIFO for a VF
+ 	 * when it resets it. Poll the VPGEN_VFRSTAT register for each VF in
+@@ -796,9 +802,9 @@ bool ice_reset_all_vfs(struct ice_pf *pf, bool is_vflr)
+ 
+ 		/* Check each VF in sequence */
+ 		while (v < pf->num_alloc_vfs) {
+-			struct ice_vf *vf = &pf->vf[v];
+ 			u32 reg;
+ 
++			vf = &pf->vf[v];
+ 			reg = rd32(hw, VPGEN_VFRSTAT(vf->vf_id));
+ 			if (!(reg & VPGEN_VFRSTAT_VFRD_M))
+ 				break;
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 3269d8e94744..580d14b49fda 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -3452,6 +3452,9 @@ static int igb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 			break;
+ 		}
+ 	}
++
++	dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP);
++
+ 	pm_runtime_put_noidle(&pdev->dev);
+ 	return 0;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+index 3f3cd32ae60a..e0ba59b5296f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+@@ -431,6 +431,9 @@ static inline int mlx5_eswitch_index_to_vport_num(struct mlx5_eswitch *esw,
+ 	return index;
+ }
+ 
++/* TODO: This mlx5e_tc function shouldn't be called by eswitch */
++void mlx5e_tc_clean_fdb_peer_flows(struct mlx5_eswitch *esw);
++
+ #else  /* CONFIG_MLX5_ESWITCH */
+ /* eswitch API stubs */
+ static inline int  mlx5_eswitch_init(struct mlx5_core_dev *dev) { return 0; }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index 9b2d78ee22b8..a97ffd0dbf01 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -363,7 +363,7 @@ static int esw_set_global_vlan_pop(struct mlx5_eswitch *esw, u8 val)
+ 	esw_debug(esw->dev, "%s applying global %s policy\n", __func__, val ? "pop" : "none");
+ 	for (vf_vport = 1; vf_vport < esw->enabled_vports; vf_vport++) {
+ 		rep = &esw->offloads.vport_reps[vf_vport];
+-		if (rep->rep_if[REP_ETH].state != REP_LOADED)
++		if (atomic_read(&rep->rep_if[REP_ETH].state) != REP_LOADED)
+ 			continue;
+ 
+ 		err = __mlx5_eswitch_set_vport_vlan(esw, rep->vport, 0, 0, val);
+@@ -1306,7 +1306,8 @@ int esw_offloads_init_reps(struct mlx5_eswitch *esw)
+ 		ether_addr_copy(rep->hw_id, hw_id);
+ 
+ 		for (rep_type = 0; rep_type < NUM_REP_TYPES; rep_type++)
+-			rep->rep_if[rep_type].state = REP_UNREGISTERED;
++			atomic_set(&rep->rep_if[rep_type].state,
++				   REP_UNREGISTERED);
+ 	}
+ 
+ 	return 0;
+@@ -1315,11 +1316,9 @@ int esw_offloads_init_reps(struct mlx5_eswitch *esw)
+ static void __esw_offloads_unload_rep(struct mlx5_eswitch *esw,
+ 				      struct mlx5_eswitch_rep *rep, u8 rep_type)
+ {
+-	if (rep->rep_if[rep_type].state != REP_LOADED)
+-		return;
+-
+-	rep->rep_if[rep_type].unload(rep);
+-	rep->rep_if[rep_type].state = REP_REGISTERED;
++	if (atomic_cmpxchg(&rep->rep_if[rep_type].state,
++			   REP_LOADED, REP_REGISTERED) == REP_LOADED)
++		rep->rep_if[rep_type].unload(rep);
+ }
+ 
+ static void __unload_reps_special_vport(struct mlx5_eswitch *esw, u8 rep_type)
+@@ -1380,16 +1379,15 @@ static int __esw_offloads_load_rep(struct mlx5_eswitch *esw,
+ {
+ 	int err = 0;
+ 
+-	if (rep->rep_if[rep_type].state != REP_REGISTERED)
+-		return 0;
+-
+-	err = rep->rep_if[rep_type].load(esw->dev, rep);
+-	if (err)
+-		return err;
+-
+-	rep->rep_if[rep_type].state = REP_LOADED;
++	if (atomic_cmpxchg(&rep->rep_if[rep_type].state,
++			   REP_REGISTERED, REP_LOADED) == REP_REGISTERED) {
++		err = rep->rep_if[rep_type].load(esw->dev, rep);
++		if (err)
++			atomic_set(&rep->rep_if[rep_type].state,
++				   REP_REGISTERED);
++	}
+ 
+-	return 0;
++	return err;
+ }
+ 
+ static int __load_reps_special_vport(struct mlx5_eswitch *esw, u8 rep_type)
+@@ -1523,8 +1521,6 @@ static int mlx5_esw_offloads_pair(struct mlx5_eswitch *esw,
+ 	return 0;
+ }
+ 
+-void mlx5e_tc_clean_fdb_peer_flows(struct mlx5_eswitch *esw);
+-
+ static void mlx5_esw_offloads_unpair(struct mlx5_eswitch *esw)
+ {
+ 	mlx5e_tc_clean_fdb_peer_flows(esw);
+@@ -2076,7 +2072,7 @@ void mlx5_eswitch_register_vport_reps(struct mlx5_eswitch *esw,
+ 		rep_if->get_proto_dev = __rep_if->get_proto_dev;
+ 		rep_if->priv = __rep_if->priv;
+ 
+-		rep_if->state = REP_REGISTERED;
++		atomic_set(&rep_if->state, REP_REGISTERED);
+ 	}
+ }
+ EXPORT_SYMBOL(mlx5_eswitch_register_vport_reps);
+@@ -2091,7 +2087,7 @@ void mlx5_eswitch_unregister_vport_reps(struct mlx5_eswitch *esw, u8 rep_type)
+ 		__unload_reps_all_vport(esw, max_vf, rep_type);
+ 
+ 	mlx5_esw_for_all_reps(esw, i, rep)
+-		rep->rep_if[rep_type].state = REP_UNREGISTERED;
++		atomic_set(&rep->rep_if[rep_type].state, REP_UNREGISTERED);
+ }
+ EXPORT_SYMBOL(mlx5_eswitch_unregister_vport_reps);
+ 
+@@ -2111,7 +2107,7 @@ void *mlx5_eswitch_get_proto_dev(struct mlx5_eswitch *esw,
+ 
+ 	rep = mlx5_eswitch_get_rep(esw, vport);
+ 
+-	if (rep->rep_if[rep_type].state == REP_LOADED &&
++	if (atomic_read(&rep->rep_if[rep_type].state) == REP_LOADED &&
+ 	    rep->rep_if[rep_type].get_proto_dev)
+ 		return rep->rep_if[rep_type].get_proto_dev(rep);
+ 	return NULL;
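
The mlx5 representor hunks above make the load/unload state machine lock-free: the state field becomes an atomic, and atomic_cmpxchg() gates each transition so only the caller that wins the REP_REGISTERED -> REP_LOADED swap (or the reverse) runs the callback. A hedged C11 sketch of the pattern (struct reduced to the one field that matters):

#include <stdatomic.h>

enum rep_state { REP_UNREGISTERED, REP_REGISTERED, REP_LOADED };

struct rep { atomic_int state; };

/* Only the thread that observes REP_REGISTERED and installs REP_LOADED
 * performs the load work; concurrent callers see the exchange fail and
 * return without touching the rep.
 */
int rep_load(struct rep *rep)
{
	int expected = REP_REGISTERED;
	int err = 0;

	if (atomic_compare_exchange_strong(&rep->state, &expected,
					   REP_LOADED)) {
		/* ...actual load work would go here... */
		if (err)	/* on failure, roll the state back */
			atomic_store(&rep->state, REP_REGISTERED);
	}
	return err;
}
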
+diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
+index a591583d120e..dd12b73a8853 100644
+--- a/drivers/net/ethernet/ti/cpsw.c
++++ b/drivers/net/ethernet/ti/cpsw.c
+@@ -800,12 +800,17 @@ static int cpsw_purge_all_mc(struct net_device *ndev, const u8 *addr, int num)
+ 
+ static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
+ {
+-	struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
++	struct cpsw_priv *priv = netdev_priv(ndev);
++	struct cpsw_common *cpsw = priv->cpsw;
++	int slave_port = -1;
++
++	if (cpsw->data.dual_emac)
++		slave_port = priv->emac_port + 1;
+ 
+ 	if (ndev->flags & IFF_PROMISC) {
+ 		/* Enable promiscuous mode */
+ 		cpsw_set_promiscious(ndev, true);
+-		cpsw_ale_set_allmulti(cpsw->ale, IFF_ALLMULTI);
++		cpsw_ale_set_allmulti(cpsw->ale, IFF_ALLMULTI, slave_port);
+ 		return;
+ 	} else {
+ 		/* Disable promiscuous mode */
+@@ -813,7 +818,8 @@ static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
+ 	}
+ 
+ 	/* Restore allmulti on vlans if necessary */
+-	cpsw_ale_set_allmulti(cpsw->ale, ndev->flags & IFF_ALLMULTI);
++	cpsw_ale_set_allmulti(cpsw->ale,
++			      ndev->flags & IFF_ALLMULTI, slave_port);
+ 
+ 	/* add/remove mcast address either for real netdev or for vlan */
+ 	__hw_addr_ref_sync_dev(&ndev->mc, ndev, cpsw_add_mc_addr,
+diff --git a/drivers/net/ethernet/ti/cpsw_ale.c b/drivers/net/ethernet/ti/cpsw_ale.c
+index 798c989d5d93..b3d9591b4824 100644
+--- a/drivers/net/ethernet/ti/cpsw_ale.c
++++ b/drivers/net/ethernet/ti/cpsw_ale.c
+@@ -482,24 +482,25 @@ int cpsw_ale_del_vlan(struct cpsw_ale *ale, u16 vid, int port_mask)
+ }
+ EXPORT_SYMBOL_GPL(cpsw_ale_del_vlan);
+ 
+-void cpsw_ale_set_allmulti(struct cpsw_ale *ale, int allmulti)
++void cpsw_ale_set_allmulti(struct cpsw_ale *ale, int allmulti, int port)
+ {
+ 	u32 ale_entry[ALE_ENTRY_WORDS];
+-	int type, idx;
+ 	int unreg_mcast = 0;
+-
+-	/* Only bother doing the work if the setting is actually changing */
+-	if (ale->allmulti == allmulti)
+-		return;
+-
+-	/* Remember the new setting to check against next time */
+-	ale->allmulti = allmulti;
++	int type, idx;
+ 
+ 	for (idx = 0; idx < ale->params.ale_entries; idx++) {
++		int vlan_members;
++
+ 		cpsw_ale_read(ale, idx, ale_entry);
+ 		type = cpsw_ale_get_entry_type(ale_entry);
+ 		if (type != ALE_TYPE_VLAN)
+ 			continue;
++		vlan_members =
++			cpsw_ale_get_vlan_member_list(ale_entry,
++						      ale->vlan_field_bits);
++
++		if (port != -1 && !(vlan_members & BIT(port)))
++			continue;
+ 
+ 		unreg_mcast =
+ 			cpsw_ale_get_vlan_unreg_mcast(ale_entry,
+diff --git a/drivers/net/ethernet/ti/cpsw_ale.h b/drivers/net/ethernet/ti/cpsw_ale.h
+index cd07a3e96d57..1fe196d8a5e4 100644
+--- a/drivers/net/ethernet/ti/cpsw_ale.h
++++ b/drivers/net/ethernet/ti/cpsw_ale.h
+@@ -37,7 +37,6 @@ struct cpsw_ale {
+ 	struct cpsw_ale_params	params;
+ 	struct timer_list	timer;
+ 	unsigned long		ageout;
+-	int			allmulti;
+ 	u32			version;
+ 	/* These bits are different on NetCP NU Switch ALE */
+ 	u32			port_mask_bits;
+@@ -116,7 +115,7 @@ int cpsw_ale_del_mcast(struct cpsw_ale *ale, const u8 *addr, int port_mask,
+ int cpsw_ale_add_vlan(struct cpsw_ale *ale, u16 vid, int port, int untag,
+ 			int reg_mcast, int unreg_mcast);
+ int cpsw_ale_del_vlan(struct cpsw_ale *ale, u16 vid, int port);
+-void cpsw_ale_set_allmulti(struct cpsw_ale *ale, int allmulti);
++void cpsw_ale_set_allmulti(struct cpsw_ale *ale, int allmulti, int port);
+ 
+ int cpsw_ale_control_get(struct cpsw_ale *ale, int port, int control);
+ int cpsw_ale_control_set(struct cpsw_ale *ale, int port,
+diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
+index e0dce373cdd9..3d4a166a49d5 100644
+--- a/drivers/net/hyperv/netvsc.c
++++ b/drivers/net/hyperv/netvsc.c
+@@ -875,12 +875,6 @@ static inline int netvsc_send_pkt(
+ 	} else if (ret == -EAGAIN) {
+ 		netif_tx_stop_queue(txq);
+ 		ndev_ctx->eth_stats.stop_queue++;
+-		if (atomic_read(&nvchan->queue_sends) < 1 &&
+-		    !net_device->tx_disable) {
+-			netif_tx_wake_queue(txq);
+-			ndev_ctx->eth_stats.wake_queue++;
+-			ret = -ENOSPC;
+-		}
+ 	} else {
+ 		netdev_err(ndev,
+ 			   "Unable to send packet pages %u len %u, ret %d\n",
+@@ -888,6 +882,15 @@ static inline int netvsc_send_pkt(
+ 			   ret);
+ 	}
+ 
++	if (netif_tx_queue_stopped(txq) &&
++	    atomic_read(&nvchan->queue_sends) < 1 &&
++	    !net_device->tx_disable) {
++		netif_tx_wake_queue(txq);
++		ndev_ctx->eth_stats.wake_queue++;
++		if (ret == -EAGAIN)
++			ret = -ENOSPC;
++	}
++
+ 	return ret;
+ }
+ 
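
The netvsc change hoists the wake-up test out of the -EAGAIN branch so it runs after every send attempt: a queue stopped by this call or by an earlier one gets woken as soon as the channel drains, instead of only when the current call happens to hit -EAGAIN again. A condensed sketch of the fixed control flow (names and the stop flag are hypothetical simplifications of the driver's txq state):

#include <errno.h>
#include <stdbool.h>

struct chan { int queue_sends; };	/* in-flight sends on the channel */
struct ndev { bool tx_disable; };

bool queue_stopped;			/* stand-in for netif_tx_queue_stopped() */

int send_tail(int ret, struct chan *nvchan, struct ndev *net_device)
{
	if (ret == -EAGAIN)
		queue_stopped = true;	/* ring full: stop the queue */

	/* Runs unconditionally: the wake condition no longer depends on
	 * which branch the current call took.
	 */
	if (queue_stopped && nvchan->queue_sends < 1 &&
	    !net_device->tx_disable) {
		queue_stopped = false;	/* netif_tx_wake_queue() */
		if (ret == -EAGAIN)
			ret = -ENOSPC;	/* mirrors the driver's conversion */
	}
	return ret;
}
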
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index cd5966b0db57..f6a6cc5bf118 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -1829,13 +1829,25 @@ EXPORT_SYMBOL(genphy_read_status);
+  */
+ int genphy_soft_reset(struct phy_device *phydev)
+ {
++	u16 res = BMCR_RESET;
+ 	int ret;
+ 
+-	ret = phy_set_bits(phydev, MII_BMCR, BMCR_RESET);
++	if (phydev->autoneg == AUTONEG_ENABLE)
++		res |= BMCR_ANRESTART;
++
++	ret = phy_modify(phydev, MII_BMCR, BMCR_ISOLATE, res);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	return phy_poll_reset(phydev);
++	ret = phy_poll_reset(phydev);
++	if (ret)
++		return ret;
++
++	/* BMCR may be reset to defaults */
++	if (phydev->autoneg == AUTONEG_DISABLE)
++		ret = genphy_setup_forced(phydev);
++
++	return ret;
+ }
+ EXPORT_SYMBOL(genphy_soft_reset);
+ 
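
The genphy_soft_reset() fix relies on phy_modify(), a read-modify-write helper: bits in the mask argument are cleared and bits in the value argument are set in one register update. That is how the patch requests BMCR_RESET (plus BMCR_ANRESTART when autonegotiation is enabled) while guaranteeing BMCR_ISOLATE ends up cleared. A sketch of the semantics on a cached register value (the real helper wraps this in an MDIO read and write):

#include <stdint.h>
#include <stdio.h>

#define BMCR_ANRESTART	0x0200	/* values as in linux/mii.h */
#define BMCR_ISOLATE	0x0400
#define BMCR_RESET	0x8000

/* phy_modify(phydev, reg, mask, set) applies this to the register */
uint16_t reg_modify(uint16_t reg, uint16_t mask, uint16_t set)
{
	return (reg & ~mask) | set;
}

int main(void)
{
	uint16_t bmcr = BMCR_ISOLATE;	/* pathological starting state */

	bmcr = reg_modify(bmcr, BMCR_ISOLATE, BMCR_RESET | BMCR_ANRESTART);
	printf("BMCR = 0x%04x\n", bmcr);	/* 0x8200: isolate cleared */
	return 0;
}
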
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 366217263d70..d9a6699abe59 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -63,6 +63,7 @@ enum qmi_wwan_flags {
+ 
+ enum qmi_wwan_quirks {
+ 	QMI_WWAN_QUIRK_DTR = 1 << 0,	/* needs "set DTR" request */
++	QMI_WWAN_QUIRK_QUECTEL_DYNCFG = 1 << 1,	/* check num. endpoints */
+ };
+ 
+ struct qmimux_hdr {
+@@ -845,6 +846,16 @@ static const struct driver_info	qmi_wwan_info_quirk_dtr = {
+ 	.data           = QMI_WWAN_QUIRK_DTR,
+ };
+ 
++static const struct driver_info	qmi_wwan_info_quirk_quectel_dyncfg = {
++	.description	= "WWAN/QMI device",
++	.flags		= FLAG_WWAN | FLAG_SEND_ZLP,
++	.bind		= qmi_wwan_bind,
++	.unbind		= qmi_wwan_unbind,
++	.manage_power	= qmi_wwan_manage_power,
++	.rx_fixup       = qmi_wwan_rx_fixup,
++	.data           = QMI_WWAN_QUIRK_DTR | QMI_WWAN_QUIRK_QUECTEL_DYNCFG,
++};
++
+ #define HUAWEI_VENDOR_ID	0x12D1
+ 
+ /* map QMI/wwan function by a fixed interface number */
+@@ -865,6 +876,15 @@ static const struct driver_info	qmi_wwan_info_quirk_dtr = {
+ #define QMI_GOBI_DEVICE(vend, prod) \
+ 	QMI_FIXED_INTF(vend, prod, 0)
+ 
++/* Quectel does not use fixed interface numbers on at least some of their
++ * devices. We need to check the number of endpoints to ensure that we bind to
++ * the correct interface.
++ */
++#define QMI_QUIRK_QUECTEL_DYNCFG(vend, prod) \
++	USB_DEVICE_AND_INTERFACE_INFO(vend, prod, USB_CLASS_VENDOR_SPEC, \
++				      USB_SUBCLASS_VENDOR_SPEC, 0xff), \
++	.driver_info = (unsigned long)&qmi_wwan_info_quirk_quectel_dyncfg
++
+ static const struct usb_device_id products[] = {
+ 	/* 1. CDC ECM like devices match on the control interface */
+ 	{	/* Huawei E392, E398 and possibly others sharing both device id and more... */
+@@ -969,20 +989,9 @@ static const struct usb_device_id products[] = {
+ 		USB_DEVICE_AND_INTERFACE_INFO(0x03f0, 0x581d, USB_CLASS_VENDOR_SPEC, 1, 7),
+ 		.driver_info = (unsigned long)&qmi_wwan_info,
+ 	},
+-	{	/* Quectel EP06/EG06/EM06 */
+-		USB_DEVICE_AND_INTERFACE_INFO(0x2c7c, 0x0306,
+-					      USB_CLASS_VENDOR_SPEC,
+-					      USB_SUBCLASS_VENDOR_SPEC,
+-					      0xff),
+-		.driver_info	    = (unsigned long)&qmi_wwan_info_quirk_dtr,
+-	},
+-	{	/* Quectel EG12/EM12 */
+-		USB_DEVICE_AND_INTERFACE_INFO(0x2c7c, 0x0512,
+-					      USB_CLASS_VENDOR_SPEC,
+-					      USB_SUBCLASS_VENDOR_SPEC,
+-					      0xff),
+-		.driver_info	    = (unsigned long)&qmi_wwan_info_quirk_dtr,
+-	},
++	{QMI_QUIRK_QUECTEL_DYNCFG(0x2c7c, 0x0125)},	/* Quectel EC25, EC20 R2.0  Mini PCIe */
++	{QMI_QUIRK_QUECTEL_DYNCFG(0x2c7c, 0x0306)},	/* Quectel EP06/EG06/EM06 */
++	{QMI_QUIRK_QUECTEL_DYNCFG(0x2c7c, 0x0512)},	/* Quectel EG12/EM12 */
+ 
+ 	/* 3. Combined interface devices matching on interface number */
+ 	{QMI_FIXED_INTF(0x0408, 0xea42, 4)},	/* Yota / Megafon M100-1 */
+@@ -1283,7 +1292,6 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x03f0, 0x9d1d, 1)},	/* HP lt4120 Snapdragon X5 LTE */
+ 	{QMI_FIXED_INTF(0x22de, 0x9061, 3)},	/* WeTelecom WPD-600N */
+ 	{QMI_QUIRK_SET_DTR(0x1e0e, 0x9001, 5)},	/* SIMCom 7100E, 7230E, 7600E ++ */
+-	{QMI_QUIRK_SET_DTR(0x2c7c, 0x0125, 4)},	/* Quectel EC25, EC20 R2.0  Mini PCIe */
+ 	{QMI_QUIRK_SET_DTR(0x2c7c, 0x0121, 4)},	/* Quectel EC21 Mini PCIe */
+ 	{QMI_QUIRK_SET_DTR(0x2c7c, 0x0191, 4)},	/* Quectel EG91 */
+ 	{QMI_FIXED_INTF(0x2c7c, 0x0296, 4)},	/* Quectel BG96 */
+@@ -1363,27 +1371,12 @@ static bool quectel_ec20_detected(struct usb_interface *intf)
+ 	return false;
+ }
+ 
+-static bool quectel_diag_detected(struct usb_interface *intf)
+-{
+-	struct usb_device *dev = interface_to_usbdev(intf);
+-	struct usb_interface_descriptor intf_desc = intf->cur_altsetting->desc;
+-	u16 id_vendor = le16_to_cpu(dev->descriptor.idVendor);
+-	u16 id_product = le16_to_cpu(dev->descriptor.idProduct);
+-
+-	if (id_vendor != 0x2c7c || intf_desc.bNumEndpoints != 2)
+-		return false;
+-
+-	if (id_product == 0x0306 || id_product == 0x0512)
+-		return true;
+-	else
+-		return false;
+-}
+-
+ static int qmi_wwan_probe(struct usb_interface *intf,
+ 			  const struct usb_device_id *prod)
+ {
+ 	struct usb_device_id *id = (struct usb_device_id *)prod;
+ 	struct usb_interface_descriptor *desc = &intf->cur_altsetting->desc;
++	const struct driver_info *info;
+ 
+ 	/* Workaround to enable dynamic IDs.  This disables usbnet
+ 	 * blacklisting functionality.  Which, if required, can be
+@@ -1417,10 +1410,14 @@ static int qmi_wwan_probe(struct usb_interface *intf,
+ 	 * we need to match on class/subclass/protocol. These values are
+ 	 * identical for the diagnostic- and QMI-interface, but bNumEndpoints is
+ 	 * different. Ignore the current interface if the number of endpoints
+-	 * the number for the diag interface (two).
++	 * equals the number for the diag interface (two).
+ 	 */
+-	if (quectel_diag_detected(intf))
+-		return -ENODEV;
++	info = (void *)&id->driver_info;
++
++	if (info->data & QMI_WWAN_QUIRK_QUECTEL_DYNCFG) {
++		if (desc->bNumEndpoints == 2)
++			return -ENODEV;
++	}
+ 
+ 	return usbnet_probe(intf, id);
+ }
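
With the new DYNCFG quirk the qmi_wwan match is on class/subclass/protocol, which, as the updated comment notes, is identical for the QMI and diagnostic functions on these Quectel devices; the only reliable discriminator left is the endpoint count. A minimal sketch of that probe-time filter (descriptor reduced to the single relevant field; names are illustrative):

#include <stdbool.h>

#define QUIRK_QUECTEL_DYNCFG	(1 << 1)

struct intf_desc { unsigned char bNumEndpoints; };

/* The diag function exposes exactly two endpoints, the QMI function
 * more, so on a quirky device two endpoints means "not our interface".
 */
bool reject_interface(unsigned long quirks, const struct intf_desc *desc)
{
	return (quirks & QUIRK_QUECTEL_DYNCFG) && desc->bNumEndpoints == 2;
}
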
+diff --git a/drivers/net/wireless/ath/wil6210/cfg80211.c b/drivers/net/wireless/ath/wil6210/cfg80211.c
+index a1e226652b4a..692730415d78 100644
+--- a/drivers/net/wireless/ath/wil6210/cfg80211.c
++++ b/drivers/net/wireless/ath/wil6210/cfg80211.c
+@@ -1274,7 +1274,12 @@ int wil_cfg80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
+ 			     params->wait);
+ 
+ out:
++	/* When the sent packet was not acked by the receiver (ACK=0), rc will
++	 * be -EAGAIN. In this case this function needs to return success,
++	 * the ACK=0 will be reflected in tx_status.
++	 */
+ 	tx_status = (rc == 0);
++	rc = (rc == -EAGAIN) ? 0 : rc;
+ 	cfg80211_mgmt_tx_status(wdev, cookie ? *cookie : 0, buf, len,
+ 				tx_status, GFP_KERNEL);
+ 
+diff --git a/drivers/net/wireless/ath/wil6210/wmi.c b/drivers/net/wireless/ath/wil6210/wmi.c
+index bda4a9712f91..63116f4b62c7 100644
+--- a/drivers/net/wireless/ath/wil6210/wmi.c
++++ b/drivers/net/wireless/ath/wil6210/wmi.c
+@@ -3502,8 +3502,9 @@ int wmi_mgmt_tx(struct wil6210_vif *vif, const u8 *buf, size_t len)
+ 	rc = wmi_call(wil, WMI_SW_TX_REQ_CMDID, vif->mid, cmd, total,
+ 		      WMI_SW_TX_COMPLETE_EVENTID, &evt, sizeof(evt), 2000);
+ 	if (!rc && evt.evt.status != WMI_FW_STATUS_SUCCESS) {
+-		wil_err(wil, "mgmt_tx failed with status %d\n", evt.evt.status);
+-		rc = -EINVAL;
++		wil_dbg_wmi(wil, "mgmt_tx failed with status %d\n",
++			    evt.evt.status);
++		rc = -EAGAIN;
+ 	}
+ 
+ 	kfree(cmd);
+@@ -3555,9 +3556,9 @@ int wmi_mgmt_tx_ext(struct wil6210_vif *vif, const u8 *buf, size_t len,
+ 	rc = wmi_call(wil, WMI_SW_TX_REQ_EXT_CMDID, vif->mid, cmd, total,
+ 		      WMI_SW_TX_COMPLETE_EVENTID, &evt, sizeof(evt), 2000);
+ 	if (!rc && evt.evt.status != WMI_FW_STATUS_SUCCESS) {
+-		wil_err(wil, "mgmt_tx_ext failed with status %d\n",
+-			evt.evt.status);
+-		rc = -EINVAL;
++		wil_dbg_wmi(wil, "mgmt_tx_ext failed with status %d\n",
++			    evt.evt.status);
++		rc = -EAGAIN;
+ 	}
+ 
+ 	kfree(cmd);
+diff --git a/drivers/net/wireless/atmel/at76c50x-usb.c b/drivers/net/wireless/atmel/at76c50x-usb.c
+index e99e766a3028..1cabae424839 100644
+--- a/drivers/net/wireless/atmel/at76c50x-usb.c
++++ b/drivers/net/wireless/atmel/at76c50x-usb.c
+@@ -2585,8 +2585,8 @@ static int __init at76_mod_init(void)
+ 	if (result < 0)
+ 		printk(KERN_ERR DRIVER_NAME
+ 		       ": usb_register failed (status %d)\n", result);
+-
+-	led_trigger_register_simple("at76_usb-tx", &ledtrig_tx);
++	else
++		led_trigger_register_simple("at76_usb-tx", &ledtrig_tx);
+ 	return result;
+ }
+ 
+diff --git a/drivers/net/wireless/broadcom/b43/phy_lp.c b/drivers/net/wireless/broadcom/b43/phy_lp.c
+index 46408a560814..aedee026c5e2 100644
+--- a/drivers/net/wireless/broadcom/b43/phy_lp.c
++++ b/drivers/net/wireless/broadcom/b43/phy_lp.c
+@@ -1835,7 +1835,7 @@ static void lpphy_papd_cal(struct b43_wldev *dev, struct lpphy_tx_gains gains,
+ static void lpphy_papd_cal_txpwr(struct b43_wldev *dev)
+ {
+ 	struct b43_phy_lp *lpphy = dev->phy.lp;
+-	struct lpphy_tx_gains gains, oldgains;
++	struct lpphy_tx_gains oldgains;
+ 	int old_txpctl, old_afe_ovr, old_rf, old_bbmult;
+ 
+ 	lpphy_read_tx_pctl_mode_from_hardware(dev);
+@@ -1849,9 +1849,9 @@ static void lpphy_papd_cal_txpwr(struct b43_wldev *dev)
+ 	lpphy_set_tx_power_control(dev, B43_LPPHY_TXPCTL_OFF);
+ 
+ 	if (dev->dev->chip_id == 0x4325 && dev->dev->chip_rev == 0)
+-		lpphy_papd_cal(dev, gains, 0, 1, 30);
++		lpphy_papd_cal(dev, oldgains, 0, 1, 30);
+ 	else
+-		lpphy_papd_cal(dev, gains, 0, 1, 65);
++		lpphy_papd_cal(dev, oldgains, 0, 1, 65);
+ 
+ 	if (old_afe_ovr)
+ 		lpphy_set_tx_gains(dev, oldgains);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index e92f6351bd22..8ee8af4e7ec4 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -5464,6 +5464,8 @@ static s32 brcmf_get_assoc_ies(struct brcmf_cfg80211_info *cfg,
+ 		conn_info->req_ie =
+ 		    kmemdup(cfg->extra_buf, conn_info->req_ie_len,
+ 			    GFP_KERNEL);
++		if (!conn_info->req_ie)
++			conn_info->req_ie_len = 0;
+ 	} else {
+ 		conn_info->req_ie_len = 0;
+ 		conn_info->req_ie = NULL;
+@@ -5480,6 +5482,8 @@ static s32 brcmf_get_assoc_ies(struct brcmf_cfg80211_info *cfg,
+ 		conn_info->resp_ie =
+ 		    kmemdup(cfg->extra_buf, conn_info->resp_ie_len,
+ 			    GFP_KERNEL);
++		if (!conn_info->resp_ie)
++			conn_info->resp_ie_len = 0;
+ 	} else {
+ 		conn_info->resp_ie_len = 0;
+ 		conn_info->resp_ie = NULL;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+index 4fbe8791f674..24ed19ed116e 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+@@ -841,17 +841,17 @@ static void brcmf_del_if(struct brcmf_pub *drvr, s32 bsscfgidx,
+ 			 bool rtnl_locked)
+ {
+ 	struct brcmf_if *ifp;
++	int ifidx;
+ 
+ 	ifp = drvr->iflist[bsscfgidx];
+-	drvr->iflist[bsscfgidx] = NULL;
+ 	if (!ifp) {
+ 		bphy_err(drvr, "Null interface, bsscfgidx=%d\n", bsscfgidx);
+ 		return;
+ 	}
+ 	brcmf_dbg(TRACE, "Enter, bsscfgidx=%d, ifidx=%d\n", bsscfgidx,
+ 		  ifp->ifidx);
+-	if (drvr->if2bss[ifp->ifidx] == bsscfgidx)
+-		drvr->if2bss[ifp->ifidx] = BRCMF_BSSIDX_INVALID;
++	ifidx = ifp->ifidx;
++
+ 	if (ifp->ndev) {
+ 		if (bsscfgidx == 0) {
+ 			if (ifp->ndev->netdev_ops == &brcmf_netdev_ops_pri) {
+@@ -879,6 +879,10 @@ static void brcmf_del_if(struct brcmf_pub *drvr, s32 bsscfgidx,
+ 		brcmf_p2p_ifp_removed(ifp, rtnl_locked);
+ 		kfree(ifp);
+ 	}
++
++	drvr->iflist[bsscfgidx] = NULL;
++	if (drvr->if2bss[ifidx] == bsscfgidx)
++		drvr->if2bss[ifidx] = BRCMF_BSSIDX_INVALID;
+ }
+ 
+ void brcmf_remove_interface(struct brcmf_if *ifp, bool rtnl_locked)
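
The brcmf_del_if() hunk is an ordering fix built on the save-before-free idiom: `ifp->ifidx` is copied to a local before the interface can be freed, and the iflist/if2bss slots are cleared only after teardown finishes, so the tail of the function never dereferences freed memory and lookups don't race against a half-removed interface. The idiom in miniature (structures simplified, not the driver's):

#include <stdlib.h>

#define BSSIDX_INVALID (-1)

struct iface { int ifidx; };

int if2bss[16];

void del_if(struct iface **slot)
{
	struct iface *ifp = *slot;
	int ifidx;

	if (!ifp)
		return;

	ifidx = ifp->ifidx;	/* save: still needed after the free */
	/* ...per-interface teardown that may use *ifp... */
	free(ifp);

	*slot = NULL;			/* unpublish once teardown is done */
	if2bss[ifidx] = BSSIDX_INVALID;	/* uses the saved copy, not *ifp */
}
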
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c
+index abeb305492e0..d48b8b2d946f 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c
+@@ -580,24 +580,6 @@ static bool brcmf_fws_ifidx_match(struct sk_buff *skb, void *arg)
+ 	return ifidx == *(int *)arg;
+ }
+ 
+-static void brcmf_fws_psq_flush(struct brcmf_fws_info *fws, struct pktq *q,
+-				int ifidx)
+-{
+-	bool (*matchfn)(struct sk_buff *, void *) = NULL;
+-	struct sk_buff *skb;
+-	int prec;
+-
+-	if (ifidx != -1)
+-		matchfn = brcmf_fws_ifidx_match;
+-	for (prec = 0; prec < q->num_prec; prec++) {
+-		skb = brcmu_pktq_pdeq_match(q, prec, matchfn, &ifidx);
+-		while (skb) {
+-			brcmu_pkt_buf_free_skb(skb);
+-			skb = brcmu_pktq_pdeq_match(q, prec, matchfn, &ifidx);
+-		}
+-	}
+-}
+-
+ static void brcmf_fws_hanger_init(struct brcmf_fws_hanger *hanger)
+ {
+ 	int i;
+@@ -669,6 +651,28 @@ static inline int brcmf_fws_hanger_poppkt(struct brcmf_fws_hanger *h,
+ 	return 0;
+ }
+ 
++static void brcmf_fws_psq_flush(struct brcmf_fws_info *fws, struct pktq *q,
++				int ifidx)
++{
++	bool (*matchfn)(struct sk_buff *, void *) = NULL;
++	struct sk_buff *skb;
++	int prec;
++	u32 hslot;
++
++	if (ifidx != -1)
++		matchfn = brcmf_fws_ifidx_match;
++	for (prec = 0; prec < q->num_prec; prec++) {
++		skb = brcmu_pktq_pdeq_match(q, prec, matchfn, &ifidx);
++		while (skb) {
++			hslot = brcmf_skb_htod_tag_get_field(skb, HSLOT);
++			brcmf_fws_hanger_poppkt(&fws->hanger, hslot, &skb,
++						true);
++			brcmu_pkt_buf_free_skb(skb);
++			skb = brcmu_pktq_pdeq_match(q, prec, matchfn, &ifidx);
++		}
++	}
++}
++
+ static int brcmf_fws_hanger_mark_suppressed(struct brcmf_fws_hanger *h,
+ 					    u32 slot_id)
+ {
+@@ -2200,6 +2204,8 @@ void brcmf_fws_del_interface(struct brcmf_if *ifp)
+ 	brcmf_fws_lock(fws);
+ 	ifp->fws_desc = NULL;
+ 	brcmf_dbg(TRACE, "deleting %s\n", entry->name);
++	brcmf_fws_macdesc_cleanup(fws, &fws->desc.iface[ifp->ifidx],
++				  ifp->ifidx);
+ 	brcmf_fws_macdesc_deinit(entry);
+ 	brcmf_fws_cleanup(fws, ifp->ifidx);
+ 	brcmf_fws_unlock(fws);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
+index e9cbfd077710..81e1842f1d8c 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
+@@ -160,7 +160,7 @@ struct brcmf_usbdev_info {
+ 
+ 	struct usb_device *usbdev;
+ 	struct device *dev;
+-	struct mutex dev_init_lock;
++	struct completion dev_init_done;
+ 
+ 	int ctl_in_pipe, ctl_out_pipe;
+ 	struct urb *ctl_urb; /* URB for control endpoint */
+@@ -682,12 +682,18 @@ static int brcmf_usb_up(struct device *dev)
+ 
+ static void brcmf_cancel_all_urbs(struct brcmf_usbdev_info *devinfo)
+ {
++	int i;
++
+ 	if (devinfo->ctl_urb)
+ 		usb_kill_urb(devinfo->ctl_urb);
+ 	if (devinfo->bulk_urb)
+ 		usb_kill_urb(devinfo->bulk_urb);
+-	brcmf_usb_free_q(&devinfo->tx_postq, true);
+-	brcmf_usb_free_q(&devinfo->rx_postq, true);
++	if (devinfo->tx_reqs)
++		for (i = 0; i < devinfo->bus_pub.ntxq; i++)
++			usb_kill_urb(devinfo->tx_reqs[i].urb);
++	if (devinfo->rx_reqs)
++		for (i = 0; i < devinfo->bus_pub.nrxq; i++)
++			usb_kill_urb(devinfo->rx_reqs[i].urb);
+ }
+ 
+ static void brcmf_usb_down(struct device *dev)
+@@ -1193,11 +1199,11 @@ static void brcmf_usb_probe_phase2(struct device *dev, int ret,
+ 	if (ret)
+ 		goto error;
+ 
+-	mutex_unlock(&devinfo->dev_init_lock);
++	complete(&devinfo->dev_init_done);
+ 	return;
+ error:
+ 	brcmf_dbg(TRACE, "failed: dev=%s, err=%d\n", dev_name(dev), ret);
+-	mutex_unlock(&devinfo->dev_init_lock);
++	complete(&devinfo->dev_init_done);
+ 	device_release_driver(dev);
+ }
+ 
+@@ -1265,7 +1271,7 @@ static int brcmf_usb_probe_cb(struct brcmf_usbdev_info *devinfo)
+ 		if (ret)
+ 			goto fail;
+ 		/* we are done */
+-		mutex_unlock(&devinfo->dev_init_lock);
++		complete(&devinfo->dev_init_done);
+ 		return 0;
+ 	}
+ 	bus->chip = bus_pub->devid;
+@@ -1325,11 +1331,10 @@ brcmf_usb_probe(struct usb_interface *intf, const struct usb_device_id *id)
+ 
+ 	devinfo->usbdev = usb;
+ 	devinfo->dev = &usb->dev;
+-	/* Take an init lock, to protect for disconnect while still loading.
++	/* Init completion, to protect against disconnect while still loading.
+ 	 * Necessary because of the asynchronous firmware load construction
+ 	 */
+-	mutex_init(&devinfo->dev_init_lock);
+-	mutex_lock(&devinfo->dev_init_lock);
++	init_completion(&devinfo->dev_init_done);
+ 
+ 	usb_set_intfdata(intf, devinfo);
+ 
+@@ -1407,7 +1412,7 @@ brcmf_usb_probe(struct usb_interface *intf, const struct usb_device_id *id)
+ 	return 0;
+ 
+ fail:
+-	mutex_unlock(&devinfo->dev_init_lock);
++	complete(&devinfo->dev_init_done);
+ 	kfree(devinfo);
+ 	usb_set_intfdata(intf, NULL);
+ 	return ret;
+@@ -1422,7 +1427,7 @@ brcmf_usb_disconnect(struct usb_interface *intf)
+ 	devinfo = (struct brcmf_usbdev_info *)usb_get_intfdata(intf);
+ 
+ 	if (devinfo) {
+-		mutex_lock(&devinfo->dev_init_lock);
++		wait_for_completion(&devinfo->dev_init_done);
+ 		/* Make sure that devinfo still exists. Firmware probe routines
+ 		 * may have released the device and cleared the intfdata.
+ 		 */
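
The usb.c hunks swap the dev_init_lock mutex for a completion because the "unlock" side runs from the asynchronous firmware-load callback, generally a different thread than the probe() that took the lock, and unlocking a mutex from another thread is not a valid mutex use. A completion has no owner: any context may signal it and any context may wait. A pthread sketch of the equivalent one-shot primitive (a simplified analogue, not the kernel implementation):

#include <pthread.h>
#include <stdbool.h>

/* One-shot completion: complete() may be called from any thread. */
struct completion {
	pthread_mutex_t lock;
	pthread_cond_t cond;
	bool done;
};

void init_completion(struct completion *c)
{
	pthread_mutex_init(&c->lock, NULL);
	pthread_cond_init(&c->cond, NULL);
	c->done = false;
}

void complete(struct completion *c)
{
	pthread_mutex_lock(&c->lock);
	c->done = true;
	pthread_cond_broadcast(&c->cond);
	pthread_mutex_unlock(&c->lock);
}

void wait_for_completion(struct completion *c)
{
	pthread_mutex_lock(&c->lock);
	while (!c->done)
		pthread_cond_wait(&c->cond, &c->lock);
	pthread_mutex_unlock(&c->lock);
}

A completion also stays signalled once completed, so the disconnect path's wait_for_completion() returns immediately when probe already finished, which the old lock/unlock pairing only approximated.
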
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/vendor.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/vendor.c
+index 8eff2753abad..d493021f6031 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/vendor.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/vendor.c
+@@ -35,9 +35,10 @@ static int brcmf_cfg80211_vndr_cmds_dcmd_handler(struct wiphy *wiphy,
+ 	struct brcmf_if *ifp;
+ 	const struct brcmf_vndr_dcmd_hdr *cmdhdr = data;
+ 	struct sk_buff *reply;
+-	int ret, payload, ret_len;
++	unsigned int payload, ret_len;
+ 	void *dcmd_buf = NULL, *wr_pointer;
+ 	u16 msglen, maxmsglen = PAGE_SIZE - 0x100;
++	int ret;
+ 
+ 	if (len < sizeof(*cmdhdr)) {
+ 		brcmf_err("vendor command too short: %d\n", len);
+@@ -65,7 +66,7 @@ static int brcmf_cfg80211_vndr_cmds_dcmd_handler(struct wiphy *wiphy,
+ 			brcmf_err("oversize return buffer %d\n", ret_len);
+ 			ret_len = BRCMF_DCMD_MAXLEN;
+ 		}
+-		payload = max(ret_len, len) + 1;
++		payload = max_t(unsigned int, ret_len, len) + 1;
+ 		dcmd_buf = vzalloc(payload);
+ 		if (NULL == dcmd_buf)
+ 			return -ENOMEM;
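
The vendor.c hunk is a signedness cleanup: payload and ret_len become unsigned, and the comparison goes through max_t(unsigned int, ...) so both operands are pinned to one explicit type (the kernel's plain max() refuses mixed types at compile time). The conversion trap it sidesteps, in a few lines of standard C:

#include <stdio.h>

int main(void)
{
	int a = -1;
	unsigned int b = 1;

	/* Usual arithmetic conversions turn a into UINT_MAX, so the
	 * comparison is true; gcc's -Wsign-compare flags exactly this.
	 */
	printf("a > b: %s\n", a > b ? "true" : "false");	/* true */
	return 0;
}
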
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+index 98d123dd7177..eb452e9dce05 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+@@ -2277,7 +2277,8 @@ int iwl_mvm_add_mcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+ 	static const u8 _maddr[] = {0x03, 0x00, 0x00, 0x00, 0x00, 0x00};
+ 	const u8 *maddr = _maddr;
+ 	struct iwl_trans_txq_scd_cfg cfg = {
+-		.fifo = IWL_MVM_TX_FIFO_MCAST,
++		.fifo = vif->type == NL80211_IFTYPE_AP ?
++			IWL_MVM_TX_FIFO_MCAST : IWL_MVM_TX_FIFO_BE,
+ 		.sta_id = msta->sta_id,
+ 		.tid = 0,
+ 		.aggregate = false,
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+index 8d4f0628622b..12f02aaf923e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+@@ -1434,10 +1434,15 @@ out_err:
+ static void iwl_pcie_rx_handle(struct iwl_trans *trans, int queue)
+ {
+ 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+-	struct iwl_rxq *rxq = &trans_pcie->rxq[queue];
++	struct iwl_rxq *rxq;
+ 	u32 r, i, count = 0;
+ 	bool emergency = false;
+ 
++	if (WARN_ON_ONCE(!trans_pcie->rxq || !trans_pcie->rxq[queue].bd))
++		return;
++
++	rxq = &trans_pcie->rxq[queue];
++
+ restart:
+ 	spin_lock(&rxq->lock);
+ 	/* uCode's read index (stored in shared DRAM) indicates the last Rx
+diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+index c46f0a54a0c7..e582d9b3e50c 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
++++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+@@ -4082,16 +4082,20 @@ static int mwifiex_tm_cmd(struct wiphy *wiphy, struct wireless_dev *wdev,
+ 
+ 		if (mwifiex_send_cmd(priv, 0, 0, 0, hostcmd, true)) {
+ 			dev_err(priv->adapter->dev, "Failed to process hostcmd\n");
++			kfree(hostcmd);
+ 			return -EFAULT;
+ 		}
+ 
+ 		/* process hostcmd response*/
+ 		skb = cfg80211_testmode_alloc_reply_skb(wiphy, hostcmd->len);
+-		if (!skb)
++		if (!skb) {
++			kfree(hostcmd);
+ 			return -ENOMEM;
++		}
+ 		err = nla_put(skb, MWIFIEX_TM_ATTR_DATA,
+ 			      hostcmd->len, hostcmd->cmd);
+ 		if (err) {
++			kfree(hostcmd);
+ 			kfree_skb(skb);
+ 			return -EMSGSIZE;
+ 		}
+diff --git a/drivers/net/wireless/marvell/mwifiex/cfp.c b/drivers/net/wireless/marvell/mwifiex/cfp.c
+index bfe84e55df77..f1522fb1c1e8 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cfp.c
++++ b/drivers/net/wireless/marvell/mwifiex/cfp.c
+@@ -531,5 +531,8 @@ u8 mwifiex_adjust_data_rate(struct mwifiex_private *priv,
+ 		rate_index = (rx_rate > MWIFIEX_RATE_INDEX_OFDM0) ?
+ 			      rx_rate - 1 : rx_rate;
+ 
++	if (rate_index >= MWIFIEX_MAX_AC_RX_RATES)
++		rate_index = MWIFIEX_MAX_AC_RX_RATES - 1;
++
+ 	return rate_index;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
+index 76629b98c78d..8c7ee8302fb8 100644
+--- a/drivers/net/wireless/mediatek/mt76/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/dma.c
+@@ -271,10 +271,11 @@ mt76_dma_tx_queue_skb_raw(struct mt76_dev *dev, enum mt76_txq_id qid,
+ 	return 0;
+ }
+ 
+-int mt76_dma_tx_queue_skb(struct mt76_dev *dev, struct mt76_queue *q,
++int mt76_dma_tx_queue_skb(struct mt76_dev *dev, enum mt76_txq_id qid,
+ 			  struct sk_buff *skb, struct mt76_wcid *wcid,
+ 			  struct ieee80211_sta *sta)
+ {
++	struct mt76_queue *q = &dev->q_tx[qid];
+ 	struct mt76_queue_entry e;
+ 	struct mt76_txwi_cache *t;
+ 	struct mt76_queue_buf buf[32];
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index bcbfd3c4a44b..eb882b2cbc0e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -156,7 +156,7 @@ struct mt76_queue_ops {
+ 		       struct mt76_queue_buf *buf, int nbufs, u32 info,
+ 		       struct sk_buff *skb, void *txwi);
+ 
+-	int (*tx_queue_skb)(struct mt76_dev *dev, struct mt76_queue *q,
++	int (*tx_queue_skb)(struct mt76_dev *dev, enum mt76_txq_id qid,
+ 			    struct sk_buff *skb, struct mt76_wcid *wcid,
+ 			    struct ieee80211_sta *sta);
+ 
+@@ -645,7 +645,7 @@ static inline struct mt76_tx_cb *mt76_tx_skb_cb(struct sk_buff *skb)
+ 	return ((void *) IEEE80211_SKB_CB(skb)->status.status_driver_data);
+ }
+ 
+-int mt76_dma_tx_queue_skb(struct mt76_dev *dev, struct mt76_queue *q,
++int mt76_dma_tx_queue_skb(struct mt76_dev *dev, enum mt76_txq_id qid,
+ 			  struct sk_buff *skb, struct mt76_wcid *wcid,
+ 			  struct ieee80211_sta *sta);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/beacon.c b/drivers/net/wireless/mediatek/mt76/mt7603/beacon.c
+index 4dcb465095d1..99c0a3ba37cb 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7603/beacon.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7603/beacon.c
+@@ -23,7 +23,7 @@ mt7603_update_beacon_iter(void *priv, u8 *mac, struct ieee80211_vif *vif)
+ 	if (!skb)
+ 		return;
+ 
+-	mt76_dma_tx_queue_skb(&dev->mt76, &dev->mt76.q_tx[MT_TXQ_BEACON], skb,
++	mt76_dma_tx_queue_skb(&dev->mt76, MT_TXQ_BEACON, skb,
+ 			      &mvif->sta.wcid, NULL);
+ 
+ 	spin_lock_bh(&dev->ps_lock);
+@@ -118,8 +118,8 @@ void mt7603_pre_tbtt_tasklet(unsigned long arg)
+ 		struct ieee80211_vif *vif = info->control.vif;
+ 		struct mt7603_vif *mvif = (struct mt7603_vif *)vif->drv_priv;
+ 
+-		mt76_dma_tx_queue_skb(&dev->mt76, q, skb, &mvif->sta.wcid,
+-				      NULL);
++		mt76_dma_tx_queue_skb(&dev->mt76, MT_TXQ_CAB, skb,
++				      &mvif->sta.wcid, NULL);
+ 	}
+ 	mt76_queue_kick(dev, q);
+ 	spin_unlock_bh(&q->lock);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c b/drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c
+index daaed1220147..952fe19cba9b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c
+@@ -146,8 +146,8 @@ static void mt76x02_pre_tbtt_tasklet(unsigned long arg)
+ 		struct ieee80211_vif *vif = info->control.vif;
+ 		struct mt76x02_vif *mvif = (struct mt76x02_vif *)vif->drv_priv;
+ 
+-		mt76_dma_tx_queue_skb(&dev->mt76, q, skb, &mvif->group_wcid,
+-				      NULL);
++		mt76_dma_tx_queue_skb(&dev->mt76, MT_TXQ_PSD, skb,
++				      &mvif->group_wcid, NULL);
+ 	}
+ 	spin_unlock_bh(&q->lock);
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/tx.c b/drivers/net/wireless/mediatek/mt76/tx.c
+index 2585df512335..0c1036da9a92 100644
+--- a/drivers/net/wireless/mediatek/mt76/tx.c
++++ b/drivers/net/wireless/mediatek/mt76/tx.c
+@@ -286,7 +286,7 @@ mt76_tx(struct mt76_dev *dev, struct ieee80211_sta *sta,
+ 	q = &dev->q_tx[qid];
+ 
+ 	spin_lock_bh(&q->lock);
+-	dev->queue_ops->tx_queue_skb(dev, q, skb, wcid, sta);
++	dev->queue_ops->tx_queue_skb(dev, qid, skb, wcid, sta);
+ 	dev->queue_ops->kick(dev, q);
+ 
+ 	if (q->queued > q->ndesc - 8 && !q->stopped) {
+@@ -327,7 +327,6 @@ mt76_queue_ps_skb(struct mt76_dev *dev, struct ieee80211_sta *sta,
+ {
+ 	struct mt76_wcid *wcid = (struct mt76_wcid *) sta->drv_priv;
+ 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+-	struct mt76_queue *hwq = &dev->q_tx[MT_TXQ_PSD];
+ 
+ 	info->control.flags |= IEEE80211_TX_CTRL_PS_RESPONSE;
+ 	if (last)
+@@ -335,7 +334,7 @@ mt76_queue_ps_skb(struct mt76_dev *dev, struct ieee80211_sta *sta,
+ 			       IEEE80211_TX_CTL_REQ_TX_STATUS;
+ 
+ 	mt76_skb_set_moredata(skb, !last);
+-	dev->queue_ops->tx_queue_skb(dev, hwq, skb, wcid, sta);
++	dev->queue_ops->tx_queue_skb(dev, MT_TXQ_PSD, skb, wcid, sta);
+ }
+ 
+ void
+@@ -390,6 +389,7 @@ mt76_txq_send_burst(struct mt76_dev *dev, struct mt76_queue *hwq,
+ 		    struct mt76_txq *mtxq, bool *empty)
+ {
+ 	struct ieee80211_txq *txq = mtxq_to_txq(mtxq);
++	enum mt76_txq_id qid = mt76_txq_get_qid(txq);
+ 	struct ieee80211_tx_info *info;
+ 	struct mt76_wcid *wcid = mtxq->wcid;
+ 	struct sk_buff *skb;
+@@ -423,7 +423,7 @@ mt76_txq_send_burst(struct mt76_dev *dev, struct mt76_queue *hwq,
+ 	if (ampdu)
+ 		mt76_check_agg_ssn(mtxq, skb);
+ 
+-	idx = dev->queue_ops->tx_queue_skb(dev, hwq, skb, wcid, txq->sta);
++	idx = dev->queue_ops->tx_queue_skb(dev, qid, skb, wcid, txq->sta);
+ 
+ 	if (idx < 0)
+ 		return idx;
+@@ -458,7 +458,7 @@ mt76_txq_send_burst(struct mt76_dev *dev, struct mt76_queue *hwq,
+ 		if (cur_ampdu)
+ 			mt76_check_agg_ssn(mtxq, skb);
+ 
+-		idx = dev->queue_ops->tx_queue_skb(dev, hwq, skb, wcid,
++		idx = dev->queue_ops->tx_queue_skb(dev, qid, skb, wcid,
+ 						   txq->sta);
+ 		if (idx < 0)
+ 			return idx;
+diff --git a/drivers/net/wireless/mediatek/mt76/usb.c b/drivers/net/wireless/mediatek/mt76/usb.c
+index 4c1abd492405..b1551419338f 100644
+--- a/drivers/net/wireless/mediatek/mt76/usb.c
++++ b/drivers/net/wireless/mediatek/mt76/usb.c
+@@ -726,10 +726,11 @@ mt76u_tx_build_sg(struct mt76_dev *dev, struct sk_buff *skb,
+ }
+ 
+ static int
+-mt76u_tx_queue_skb(struct mt76_dev *dev, struct mt76_queue *q,
++mt76u_tx_queue_skb(struct mt76_dev *dev, enum mt76_txq_id qid,
+ 		   struct sk_buff *skb, struct mt76_wcid *wcid,
+ 		   struct ieee80211_sta *sta)
+ {
++	struct mt76_queue *q = &dev->q_tx[qid];
+ 	struct mt76u_buf *buf;
+ 	u16 idx = q->tail;
+ 	int err;
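
The mt76 change is mechanical but the design choice is worth a note: tx_queue_skb() now takes the queue id rather than a raw `struct mt76_queue *`, and each backend derives the pointer itself via `&dev->q_tx[qid]`. One plausible reading is that passing the id keeps the queue's identity available to the callee for per-queue-type decisions, instead of an anonymous pointer. Sketched in isolation (names hypothetical):

enum txq_id { TXQ_VO, TXQ_VI, TXQ_BE, TXQ_BK, TXQ_PSD, __TXQ_MAX };

struct queue { int queued; };
struct dev { struct queue q_tx[__TXQ_MAX]; };

/* an id into dev-owned storage instead of a raw element pointer */
int tx_queue_skb(struct dev *dev, enum txq_id qid)
{
	struct queue *q = &dev->q_tx[qid];	/* derived locally */

	return q->queued++;
}
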
+diff --git a/drivers/net/wireless/realtek/rtlwifi/base.c b/drivers/net/wireless/realtek/rtlwifi/base.c
+index 217d2a7a43c7..ac746c322554 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/base.c
++++ b/drivers/net/wireless/realtek/rtlwifi/base.c
+@@ -448,6 +448,11 @@ static void _rtl_init_deferred_work(struct ieee80211_hw *hw)
+ 	/* <2> work queue */
+ 	rtlpriv->works.hw = hw;
+ 	rtlpriv->works.rtl_wq = alloc_workqueue("%s", 0, 0, rtlpriv->cfg->name);
++	if (unlikely(!rtlpriv->works.rtl_wq)) {
++		pr_err("Failed to allocate work queue\n");
++		return;
++	}
++
+ 	INIT_DELAYED_WORK(&rtlpriv->works.watchdog_wq,
+ 			  (void *)rtl_watchdog_wq_callback);
+ 	INIT_DELAYED_WORK(&rtlpriv->works.ips_nic_off_wq,
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/fw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/fw.c
+index 203e7b574e84..e2e0bfbc24fe 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/fw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/fw.c
+@@ -600,6 +600,8 @@ void rtl88e_set_fw_rsvdpagepkt(struct ieee80211_hw *hw, bool b_dl_finished)
+ 		      u1rsvdpageloc, 3);
+ 
+ 	skb = dev_alloc_skb(totalpacketlen);
++	if (!skb)
++		return;
+ 	skb_put_data(skb, &reserved_page_packet, totalpacketlen);
+ 
+ 	rtstatus = rtl_cmd_send_packet(hw, skb);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192c/fw_common.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192c/fw_common.c
+index 18c76990a089..86b1b88cc4ed 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192c/fw_common.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192c/fw_common.c
+@@ -623,6 +623,8 @@ void rtl92c_set_fw_rsvdpagepkt(struct ieee80211_hw *hw,
+ 		      u1rsvdpageloc, 3);
+ 
+ 	skb = dev_alloc_skb(totalpacketlen);
++	if (!skb)
++		return;
+ 	skb_put_data(skb, &reserved_page_packet, totalpacketlen);
+ 
+ 	if (cmd_send_packet)
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/fw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/fw.c
+index 7c5b54b71a92..67305ce915ec 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/fw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/fw.c
+@@ -744,6 +744,8 @@ void rtl92ee_set_fw_rsvdpagepkt(struct ieee80211_hw *hw, bool b_dl_finished)
+ 		      u1rsvdpageloc, 3);
+ 
+ 	skb = dev_alloc_skb(totalpacketlen);
++	if (!skb)
++		return;
+ 	skb_put_data(skb, &reserved_page_packet, totalpacketlen);
+ 
+ 	rtstatus = rtl_cmd_send_packet(hw, skb);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/fw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/fw.c
+index be451a6f7dbe..33481232fad0 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/fw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/fw.c
+@@ -448,6 +448,8 @@ void rtl8723e_set_fw_rsvdpagepkt(struct ieee80211_hw *hw, bool b_dl_finished)
+ 		      u1rsvdpageloc, 3);
+ 
+ 	skb = dev_alloc_skb(totalpacketlen);
++	if (!skb)
++		return;
+ 	skb_put_data(skb, &reserved_page_packet, totalpacketlen);
+ 
+ 	rtstatus = rtl_cmd_send_packet(hw, skb);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/fw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/fw.c
+index 4d7fa27f55ca..aa56058af56e 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/fw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/fw.c
+@@ -562,6 +562,8 @@ void rtl8723be_set_fw_rsvdpagepkt(struct ieee80211_hw *hw,
+ 		      u1rsvdpageloc, sizeof(u1rsvdpageloc));
+ 
+ 	skb = dev_alloc_skb(totalpacketlen);
++	if (!skb)
++		return;
+ 	skb_put_data(skb, &reserved_page_packet, totalpacketlen);
+ 
+ 	rtstatus = rtl_cmd_send_packet(hw, skb);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/fw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/fw.c
+index dc0eb692088f..fe32d397d287 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/fw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/fw.c
+@@ -1623,6 +1623,8 @@ out:
+ 		      &reserved_page_packet_8812[0], totalpacketlen);
+ 
+ 	skb = dev_alloc_skb(totalpacketlen);
++	if (!skb)
++		return;
+ 	skb_put_data(skb, &reserved_page_packet_8812, totalpacketlen);
+ 
+ 	rtstatus = rtl_cmd_send_packet(hw, skb);
+@@ -1759,6 +1761,8 @@ out:
+ 		      &reserved_page_packet_8821[0], totalpacketlen);
+ 
+ 	skb = dev_alloc_skb(totalpacketlen);
++	if (!skb)
++		return;
+ 	skb_put_data(skb, &reserved_page_packet_8821, totalpacketlen);
+ 
+ 	rtstatus = rtl_cmd_send_packet(hw, skb);
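
Each rtlwifi hunk above adds the same two-line guard: dev_alloc_skb() can return NULL under memory pressure, and the skb_put_data() that follows would dereference it. A hedged userspace sketch of the guard; the allocator and the send path are stand-ins, not the rtlwifi API:

#include <stdlib.h>
#include <string.h>

struct sk_buff { unsigned char *data; size_t len; };

static struct sk_buff *alloc_skb_sim(size_t len)	/* stand-in allocator */
{
	struct sk_buff *skb = calloc(1, sizeof(*skb));

	if (skb && !(skb->data = malloc(len))) {
		free(skb);
		skb = NULL;
	}
	return skb;
}

static void set_fw_rsvdpagepkt(const void *pkt, size_t totalpacketlen)
{
	struct sk_buff *skb = alloc_skb_sim(totalpacketlen);

	if (!skb)			/* the added guard */
		return;
	memcpy(skb->data, pkt, totalpacketlen);	/* skb_put_data() analogue */
	skb->len = totalpacketlen;
	/* ... hand the skb to the command-send path ... */
	free(skb->data);
	free(skb);
}
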
+diff --git a/drivers/net/wireless/rsi/rsi_91x_mac80211.c b/drivers/net/wireless/rsi/rsi_91x_mac80211.c
+index 831046e760f8..49df3bb08d41 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_mac80211.c
++++ b/drivers/net/wireless/rsi/rsi_91x_mac80211.c
+@@ -188,27 +188,27 @@ bool rsi_is_cipher_wep(struct rsi_common *common)
+  * @adapter: Pointer to the adapter structure.
+  * @band: Operating band to be set.
+  *
+- * Return: None.
++ * Return: int - 0 on success, negative error on failure.
+  */
+-static void rsi_register_rates_channels(struct rsi_hw *adapter, int band)
++static int rsi_register_rates_channels(struct rsi_hw *adapter, int band)
+ {
+ 	struct ieee80211_supported_band *sbands = &adapter->sbands[band];
+ 	void *channels = NULL;
+ 
+ 	if (band == NL80211_BAND_2GHZ) {
+-		channels = kmalloc(sizeof(rsi_2ghz_channels), GFP_KERNEL);
+-		memcpy(channels,
+-		       rsi_2ghz_channels,
+-		       sizeof(rsi_2ghz_channels));
++		channels = kmemdup(rsi_2ghz_channels, sizeof(rsi_2ghz_channels),
++				   GFP_KERNEL);
++		if (!channels)
++			return -ENOMEM;
+ 		sbands->band = NL80211_BAND_2GHZ;
+ 		sbands->n_channels = ARRAY_SIZE(rsi_2ghz_channels);
+ 		sbands->bitrates = rsi_rates;
+ 		sbands->n_bitrates = ARRAY_SIZE(rsi_rates);
+ 	} else {
+-		channels = kmalloc(sizeof(rsi_5ghz_channels), GFP_KERNEL);
+-		memcpy(channels,
+-		       rsi_5ghz_channels,
+-		       sizeof(rsi_5ghz_channels));
++		channels = kmemdup(rsi_5ghz_channels, sizeof(rsi_5ghz_channels),
++				   GFP_KERNEL);
++		if (!channels)
++			return -ENOMEM;
+ 		sbands->band = NL80211_BAND_5GHZ;
+ 		sbands->n_channels = ARRAY_SIZE(rsi_5ghz_channels);
+ 		sbands->bitrates = &rsi_rates[4];
+@@ -227,6 +227,7 @@ static void rsi_register_rates_channels(struct rsi_hw *adapter, int band)
+ 	sbands->ht_cap.mcs.rx_mask[0] = 0xff;
+ 	sbands->ht_cap.mcs.tx_params = IEEE80211_HT_MCS_TX_DEFINED;
+ 	/* sbands->ht_cap.mcs.rx_highest = 0x82; */
++	return 0;
+ }
+ 
+ static int rsi_mac80211_hw_scan_start(struct ieee80211_hw *hw,
+@@ -2064,11 +2065,16 @@ int rsi_mac80211_attach(struct rsi_common *common)
+ 	wiphy->available_antennas_rx = 1;
+ 	wiphy->available_antennas_tx = 1;
+ 
+-	rsi_register_rates_channels(adapter, NL80211_BAND_2GHZ);
++	status = rsi_register_rates_channels(adapter, NL80211_BAND_2GHZ);
++	if (status)
++		return status;
+ 	wiphy->bands[NL80211_BAND_2GHZ] =
+ 		&adapter->sbands[NL80211_BAND_2GHZ];
+ 	if (common->num_supp_bands > 1) {
+-		rsi_register_rates_channels(adapter, NL80211_BAND_5GHZ);
++		status = rsi_register_rates_channels(adapter,
++						     NL80211_BAND_5GHZ);
++		if (status)
++			return status;
+ 		wiphy->bands[NL80211_BAND_5GHZ] =
+ 			&adapter->sbands[NL80211_BAND_5GHZ];
+ 	}
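
The rsi hunk above does two things: it replaces an unchecked kmalloc() plus memcpy() with kmemdup(), and it converts the void helper into one that returns -ENOMEM so both call sites in rsi_mac80211_attach() can propagate the failure. A small sketch of the same conversion, where memdup_sim() stands in for kmemdup() and the ENOMEM define is illustrative:

#include <stdlib.h>
#include <string.h>

#define ENOMEM 12	/* illustrative; the kernel provides this */

/* kmemdup() analogue: allocate-and-copy in one checked step */
static void *memdup_sim(const void *src, size_t len)
{
	void *p = malloc(len);

	return p ? memcpy(p, src, len) : NULL;
}

static int register_rates_channels(void **out, const void *tbl, size_t len)
{
	*out = memdup_sim(tbl, len);
	return *out ? 0 : -ENOMEM;	/* callers must check and propagate */
}
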
+diff --git a/drivers/net/wireless/st/cw1200/main.c b/drivers/net/wireless/st/cw1200/main.c
+index 90dc979f260b..c1608f0bf6d0 100644
+--- a/drivers/net/wireless/st/cw1200/main.c
++++ b/drivers/net/wireless/st/cw1200/main.c
+@@ -345,6 +345,11 @@ static struct ieee80211_hw *cw1200_init_common(const u8 *macaddr,
+ 	mutex_init(&priv->wsm_cmd_mux);
+ 	mutex_init(&priv->conf_mutex);
+ 	priv->workqueue = create_singlethread_workqueue("cw1200_wq");
++	if (!priv->workqueue) {
++		ieee80211_free_hw(hw);
++		return NULL;
++	}
++
+ 	sema_init(&priv->scan.lock, 1);
+ 	INIT_WORK(&priv->scan.work, cw1200_scan_work);
+ 	INIT_DELAYED_WORK(&priv->scan.probe_work, cw1200_probe_work);
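
The two workqueue fixes above (rtlwifi base.c and cw1200 main.c) check the pointer returned by alloc_workqueue()/create_singlethread_workqueue(); cw1200 additionally frees the already-allocated ieee80211_hw before bailing out. A sketch of that unwind-on-failure shape, with stand-in types rather than the mac80211 API:

#include <stdlib.h>

struct hw { int dummy; };
struct wq { int dummy; };
struct priv { struct wq *workqueue; };

static struct hw *init_common(struct priv *priv)
{
	struct hw *hw = malloc(sizeof(*hw));	/* ieee80211_alloc_hw() analogue */

	if (!hw)
		return NULL;
	priv->workqueue = malloc(sizeof(*priv->workqueue));
	if (!priv->workqueue) {		/* the added check ... */
		free(hw);		/* ... must undo earlier allocations */
		return NULL;
	}
	return hw;
}
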
+diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
+index 0279eb1da3ef..d9d845077b8b 100644
+--- a/drivers/nvdimm/pmem.c
++++ b/drivers/nvdimm/pmem.c
+@@ -281,20 +281,27 @@ static long pmem_dax_direct_access(struct dax_device *dax_dev,
+ 	return __pmem_direct_access(pmem, pgoff, nr_pages, kaddr, pfn);
+ }
+ 
++/*
++ * Use the 'no check' versions of copy_from_iter_flushcache() and
++ * copy_to_iter_mcsafe() to bypass HARDENED_USERCOPY overhead. Bounds
++ * checking, both file offset and device offset, is handled by
++ * dax_iomap_actor()
++ */
+ static size_t pmem_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff,
+ 		void *addr, size_t bytes, struct iov_iter *i)
+ {
+-	return copy_from_iter_flushcache(addr, bytes, i);
++	return _copy_from_iter_flushcache(addr, bytes, i);
+ }
+ 
+ static size_t pmem_copy_to_iter(struct dax_device *dax_dev, pgoff_t pgoff,
+ 		void *addr, size_t bytes, struct iov_iter *i)
+ {
+-	return copy_to_iter_mcsafe(addr, bytes, i);
++	return _copy_to_iter_mcsafe(addr, bytes, i);
+ }
+ 
+ static const struct dax_operations pmem_dax_ops = {
+ 	.direct_access = pmem_dax_direct_access,
++	.dax_supported = generic_fsdax_supported,
+ 	.copy_from_iter = pmem_copy_from_iter,
+ 	.copy_to_iter = pmem_copy_to_iter,
+ };
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 2c43e12b70af..8782d86a8ca3 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1591,6 +1591,10 @@ static void nvme_update_disk_info(struct gendisk *disk,
+ 	sector_t capacity = le64_to_cpup(&id->nsze) << (ns->lba_shift - 9);
+ 	unsigned short bs = 1 << ns->lba_shift;
+ 
++	if (ns->lba_shift > PAGE_SHIFT) {
++		/* unsupported block size, set capacity to 0 later */
++		bs = (1 << 9);
++	}
+ 	blk_mq_freeze_queue(disk->queue);
+ 	blk_integrity_unregister(disk);
+ 
+@@ -1601,7 +1605,8 @@ static void nvme_update_disk_info(struct gendisk *disk,
+ 	if (ns->ms && !ns->ext &&
+ 	    (ns->ctrl->ops->flags & NVME_F_METADATA_SUPPORTED))
+ 		nvme_init_integrity(disk, ns->ms, ns->pi_type);
+-	if (ns->ms && !nvme_ns_has_pi(ns) && !blk_get_integrity(disk))
++	if ((ns->ms && !nvme_ns_has_pi(ns) && !blk_get_integrity(disk)) ||
++	    ns->lba_shift > PAGE_SHIFT)
+ 		capacity = 0;
+ 
+ 	set_capacity(disk, capacity);
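
The nvme hunk above guards against namespaces whose LBA size exceeds the page size: the block size falls back to 512 bytes so the queue limits stay sane, and the capacity check later forces the disk to zero sectors so it cannot be used. A sketch of the logic, assuming 4 KiB pages for illustration:

#include <stdio.h>

#define PAGE_SHIFT 12	/* 4 KiB pages assumed for the sketch */

static void update_disk_info(unsigned int lba_shift, unsigned long long nsze)
{
	unsigned long long capacity = nsze << (lba_shift - 9);	/* 512B sectors */
	unsigned int bs = 1u << lba_shift;

	if (lba_shift > PAGE_SHIFT) {
		bs = 1u << 9;	/* keep the queue limits sane ... */
		capacity = 0;	/* ... and leave the disk visibly unusable */
	}
	printf("bs=%u capacity=%llu\n", bs, capacity);
}
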
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 11a5ecae78c8..e1824c2e0a1c 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -914,8 +914,9 @@ static void nvme_rdma_teardown_admin_queue(struct nvme_rdma_ctrl *ctrl,
+ {
+ 	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
+ 	nvme_rdma_stop_queue(&ctrl->queues[0]);
+-	blk_mq_tagset_busy_iter(&ctrl->admin_tag_set, nvme_cancel_request,
+-			&ctrl->ctrl);
++	if (ctrl->ctrl.admin_tagset)
++		blk_mq_tagset_busy_iter(ctrl->ctrl.admin_tagset,
++			nvme_cancel_request, &ctrl->ctrl);
+ 	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
+ 	nvme_rdma_destroy_admin_queue(ctrl, remove);
+ }
+@@ -926,8 +927,9 @@ static void nvme_rdma_teardown_io_queues(struct nvme_rdma_ctrl *ctrl,
+ 	if (ctrl->ctrl.queue_count > 1) {
+ 		nvme_stop_queues(&ctrl->ctrl);
+ 		nvme_rdma_stop_io_queues(ctrl);
+-		blk_mq_tagset_busy_iter(&ctrl->tag_set, nvme_cancel_request,
+-				&ctrl->ctrl);
++		if (ctrl->ctrl.tagset)
++			blk_mq_tagset_busy_iter(ctrl->ctrl.tagset,
++				nvme_cancel_request, &ctrl->ctrl);
+ 		if (remove)
+ 			nvme_start_queues(&ctrl->ctrl);
+ 		nvme_rdma_destroy_io_queues(ctrl, remove);
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 68c49dd67210..aae5374d2b93 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1710,7 +1710,9 @@ static void nvme_tcp_teardown_admin_queue(struct nvme_ctrl *ctrl,
+ {
+ 	blk_mq_quiesce_queue(ctrl->admin_q);
+ 	nvme_tcp_stop_queue(ctrl, 0);
+-	blk_mq_tagset_busy_iter(ctrl->admin_tagset, nvme_cancel_request, ctrl);
++	if (ctrl->admin_tagset)
++		blk_mq_tagset_busy_iter(ctrl->admin_tagset,
++			nvme_cancel_request, ctrl);
+ 	blk_mq_unquiesce_queue(ctrl->admin_q);
+ 	nvme_tcp_destroy_admin_queue(ctrl, remove);
+ }
+@@ -1722,7 +1724,9 @@ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
+ 		return;
+ 	nvme_stop_queues(ctrl);
+ 	nvme_tcp_stop_io_queues(ctrl);
+-	blk_mq_tagset_busy_iter(ctrl->tagset, nvme_cancel_request, ctrl);
++	if (ctrl->tagset)
++		blk_mq_tagset_busy_iter(ctrl->tagset,
++			nvme_cancel_request, ctrl);
+ 	if (remove)
+ 		nvme_start_queues(ctrl);
+ 	nvme_tcp_destroy_io_queues(ctrl, remove);
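
The rdma and tcp teardown hunks above only iterate the tagset when it exists, because teardown can run after a failed or partial initialization that never allocated it. A minimal sketch of the guard, with stand-in types:

struct tagset { int dummy; };
struct ctrl { struct tagset *tagset; };

static void cancel_all_requests(struct tagset *ts)	/* busy-iter stand-in */
{
	(void)ts;
}

static void teardown_io_queues(struct ctrl *ctrl)
{
	/* teardown may run before init ever allocated the tagset */
	if (ctrl->tagset)
		cancel_all_requests(ctrl->tagset);
}
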
+diff --git a/drivers/perf/arm-cci.c b/drivers/perf/arm-cci.c
+index bfd03e023308..8f8606b9bc9e 100644
+--- a/drivers/perf/arm-cci.c
++++ b/drivers/perf/arm-cci.c
+@@ -1684,21 +1684,24 @@ static int cci_pmu_probe(struct platform_device *pdev)
+ 	raw_spin_lock_init(&cci_pmu->hw_events.pmu_lock);
+ 	mutex_init(&cci_pmu->reserve_mutex);
+ 	atomic_set(&cci_pmu->active_events, 0);
+-	cci_pmu->cpu = get_cpu();
+-
+-	ret = cci_pmu_init(cci_pmu, pdev);
+-	if (ret) {
+-		put_cpu();
+-		return ret;
+-	}
+ 
++	cci_pmu->cpu = raw_smp_processor_id();
++	g_cci_pmu = cci_pmu;
+ 	cpuhp_setup_state_nocalls(CPUHP_AP_PERF_ARM_CCI_ONLINE,
+ 				  "perf/arm/cci:online", NULL,
+ 				  cci_pmu_offline_cpu);
+-	put_cpu();
+-	g_cci_pmu = cci_pmu;
++
++	ret = cci_pmu_init(cci_pmu, pdev);
++	if (ret)
++		goto error_pmu_init;
++
+ 	pr_info("ARM %s PMU driver probed", cci_pmu->model->name);
+ 	return 0;
++
++error_pmu_init:
++	cpuhp_remove_state(CPUHP_AP_PERF_ARM_CCI_ONLINE);
++	g_cci_pmu = NULL;
++	return ret;
+ }
+ 
+ static int cci_pmu_remove(struct platform_device *pdev)
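
The arm-cci hunk above reorders probe: the CPU is recorded with raw_smp_processor_id() instead of a get_cpu()/put_cpu() pair, the hotplug callback and g_cci_pmu are set up first, and a new error label unwinds them if cci_pmu_init() fails. A sketch of the register-then-unwind ordering, with stand-in helpers:

static int setup_hotplug(void) { return 0; }	/* cpuhp_setup_state_nocalls() stand-in */
static void remove_hotplug(void) { }		/* cpuhp_remove_state() stand-in */
static int pmu_init(void) { return -1; }	/* pretend init fails */

static int probe(void)
{
	int ret;

	setup_hotplug();	/* register the callback before init ... */
	ret = pmu_init();
	if (ret)
		goto error_pmu_init;
	return 0;

error_pmu_init:
	remove_hotplug();	/* ... so the failure path can undo it */
	return ret;
}
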
+diff --git a/drivers/phy/allwinner/phy-sun4i-usb.c b/drivers/phy/allwinner/phy-sun4i-usb.c
+index 4bbd9ede38c8..cc5af961778d 100644
+--- a/drivers/phy/allwinner/phy-sun4i-usb.c
++++ b/drivers/phy/allwinner/phy-sun4i-usb.c
+@@ -554,6 +554,7 @@ static void sun4i_usb_phy0_id_vbus_det_scan(struct work_struct *work)
+ 	struct sun4i_usb_phy_data *data =
+ 		container_of(work, struct sun4i_usb_phy_data, detect.work);
+ 	struct phy *phy0 = data->phys[0].phy;
++	struct sun4i_usb_phy *phy = phy_get_drvdata(phy0);
+ 	bool force_session_end, id_notify = false, vbus_notify = false;
+ 	int id_det, vbus_det;
+ 
+@@ -610,6 +611,9 @@ static void sun4i_usb_phy0_id_vbus_det_scan(struct work_struct *work)
+ 			mutex_unlock(&phy0->mutex);
+ 		}
+ 
++		/* Enable PHY0 passby for host mode only. */
++		sun4i_usb_phy_passby(phy, !id_det);
++
+ 		/* Re-route PHY0 if necessary */
+ 		if (data->cfg->phy0_dual_route)
+ 			sun4i_usb_phy0_reroute(data, id_det);
+diff --git a/drivers/phy/motorola/Kconfig b/drivers/phy/motorola/Kconfig
+index 82651524ffb9..718f8729701d 100644
+--- a/drivers/phy/motorola/Kconfig
++++ b/drivers/phy/motorola/Kconfig
+@@ -13,7 +13,7 @@ config PHY_CPCAP_USB
+ 
+ config PHY_MAPPHONE_MDM6600
+ 	tristate "Motorola Mapphone MDM6600 modem USB PHY driver"
+-	depends on OF && USB_SUPPORT
++	depends on OF && USB_SUPPORT && GPIOLIB
+ 	select GENERIC_PHY
+ 	help
+ 	  Enable this for MDM6600 USB modem to work on Motorola phones
+diff --git a/drivers/phy/ti/Kconfig b/drivers/phy/ti/Kconfig
+index 103efc456a12..022ac16f626c 100644
+--- a/drivers/phy/ti/Kconfig
++++ b/drivers/phy/ti/Kconfig
+@@ -37,7 +37,7 @@ config OMAP_USB2
+ 	depends on USB_SUPPORT
+ 	select GENERIC_PHY
+ 	select USB_PHY
+-	select OMAP_CONTROL_PHY if ARCH_OMAP2PLUS
++	select OMAP_CONTROL_PHY if ARCH_OMAP2PLUS || COMPILE_TEST
+ 	help
+ 	  Enable this to support the transceiver that is part of SOC. This
+ 	  driver takes care of all the PHY functionality apart from comparator.
+diff --git a/drivers/pinctrl/pinctrl-pistachio.c b/drivers/pinctrl/pinctrl-pistachio.c
+index aa5f949ef219..5b0678f310e5 100644
+--- a/drivers/pinctrl/pinctrl-pistachio.c
++++ b/drivers/pinctrl/pinctrl-pistachio.c
+@@ -1367,6 +1367,7 @@ static int pistachio_gpio_register(struct pistachio_pinctrl *pctl)
+ 		if (!of_find_property(child, "gpio-controller", NULL)) {
+ 			dev_err(pctl->dev,
+ 				"No gpio-controller property for bank %u\n", i);
++			of_node_put(child);
+ 			ret = -ENODEV;
+ 			goto err;
+ 		}
+@@ -1374,6 +1375,7 @@ static int pistachio_gpio_register(struct pistachio_pinctrl *pctl)
+ 		irq = irq_of_parse_and_map(child, 0);
+ 		if (irq < 0) {
+ 			dev_err(pctl->dev, "No IRQ for bank %u: %d\n", i, irq);
++			of_node_put(child);
+ 			ret = irq;
+ 			goto err;
+ 		}
+diff --git a/drivers/pinctrl/pinctrl-st.c b/drivers/pinctrl/pinctrl-st.c
+index e66af93f2cbf..195b442a2343 100644
+--- a/drivers/pinctrl/pinctrl-st.c
++++ b/drivers/pinctrl/pinctrl-st.c
+@@ -1170,7 +1170,7 @@ static int st_pctl_dt_parse_groups(struct device_node *np,
+ 	struct property *pp;
+ 	struct st_pinconf *conf;
+ 	struct device_node *pins;
+-	int i = 0, npins = 0, nr_props;
++	int i = 0, npins = 0, nr_props, ret = 0;
+ 
+ 	pins = of_get_child_by_name(np, "st,pins");
+ 	if (!pins)
+@@ -1185,7 +1185,8 @@ static int st_pctl_dt_parse_groups(struct device_node *np,
+ 			npins++;
+ 		} else {
+ 			pr_warn("Invalid st,pins in %pOFn node\n", np);
+-			return -EINVAL;
++			ret = -EINVAL;
++			goto out_put_node;
+ 		}
+ 	}
+ 
+@@ -1195,8 +1196,10 @@ static int st_pctl_dt_parse_groups(struct device_node *np,
+ 	grp->pin_conf = devm_kcalloc(info->dev,
+ 					npins, sizeof(*conf), GFP_KERNEL);
+ 
+-	if (!grp->pins || !grp->pin_conf)
+-		return -ENOMEM;
++	if (!grp->pins || !grp->pin_conf) {
++		ret = -ENOMEM;
++		goto out_put_node;
++	}
+ 
+ 	/* <bank offset mux direction rt_type rt_delay rt_clk> */
+ 	for_each_property_of_node(pins, pp) {
+@@ -1229,9 +1232,11 @@ static int st_pctl_dt_parse_groups(struct device_node *np,
+ 		}
+ 		i++;
+ 	}
++
++out_put_node:
+ 	of_node_put(pins);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int st_pctl_parse_functions(struct device_node *np,
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos-arm.c b/drivers/pinctrl/samsung/pinctrl-exynos-arm.c
+index 44c6b753f692..85ddf49a5188 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos-arm.c
++++ b/drivers/pinctrl/samsung/pinctrl-exynos-arm.c
+@@ -71,6 +71,7 @@ s5pv210_retention_init(struct samsung_pinctrl_drv_data *drvdata,
+ 	}
+ 
+ 	clk_base = of_iomap(np, 0);
++	of_node_put(np);
+ 	if (!clk_base) {
+ 		pr_err("%s: failed to map clock registers\n", __func__);
+ 		return ERR_PTR(-EINVAL);
+diff --git a/drivers/pinctrl/zte/pinctrl-zx.c b/drivers/pinctrl/zte/pinctrl-zx.c
+index caa44dd2880a..3cb69309912b 100644
+--- a/drivers/pinctrl/zte/pinctrl-zx.c
++++ b/drivers/pinctrl/zte/pinctrl-zx.c
+@@ -411,6 +411,7 @@ int zx_pinctrl_init(struct platform_device *pdev,
+ 	}
+ 
+ 	zpctl->aux_base = of_iomap(np, 0);
++	of_node_put(np);
+ 	if (!zpctl->aux_base)
+ 		return -ENOMEM;
+ 
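
The four pinctrl hunks above plug device-node refcount leaks: helpers such as of_get_child_by_name() and for_each_child_of_node() hand back nodes with an elevated refcount, so every exit path, including early errors, must drop it with of_node_put(); the of_iomap() lookups get the same treatment. A sketch of the goto-funnel used in pinctrl-st, with a stand-in refcount:

struct node { int refcount; };

static void node_put(struct node *n) { n->refcount--; }	/* of_node_put() analogue */
static struct node *get_child(struct node *n) { n->refcount++; return n; }

static int parse_groups(struct node *np, int valid)
{
	struct node *pins = get_child(np);	/* of_get_child_by_name() analogue */
	int ret = 0;

	if (!valid) {
		ret = -1;
		goto out_put_node;	/* every exit funnels through the put */
	}
	/* ... parse the per-pin properties ... */
out_put_node:
	node_put(pins);
	return ret;
}
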
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 968dcd9d7a07..35a7d020afec 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -2256,6 +2256,7 @@ static void regulator_ena_gpio_free(struct regulator_dev *rdev)
+ 		if (pin->gpiod == rdev->ena_pin->gpiod) {
+ 			if (pin->request_count <= 1) {
+ 				pin->request_count = 0;
++				gpiod_put(pin->gpiod);
+ 				list_del(&pin->list);
+ 				kfree(pin);
+ 				rdev->ena_pin = NULL;
+@@ -5061,10 +5062,11 @@ void regulator_unregister(struct regulator_dev *rdev)
+ 		regulator_put(rdev->supply);
+ 	}
+ 
++	flush_work(&rdev->disable_work.work);
++
+ 	mutex_lock(&regulator_list_mutex);
+ 
+ 	debugfs_remove_recursive(rdev->debugfs);
+-	flush_work(&rdev->disable_work.work);
+ 	WARN_ON(rdev->open_count);
+ 	regulator_remove_coupling(rdev);
+ 	unset_regulator_supplies(rdev);
+diff --git a/drivers/regulator/da9055-regulator.c b/drivers/regulator/da9055-regulator.c
+index 3c6fac793658..3ade4b8d204e 100644
+--- a/drivers/regulator/da9055-regulator.c
++++ b/drivers/regulator/da9055-regulator.c
+@@ -487,8 +487,10 @@ static irqreturn_t da9055_ldo5_6_oc_irq(int irq, void *data)
+ {
+ 	struct da9055_regulator *regulator = data;
+ 
++	regulator_lock(regulator->rdev);
+ 	regulator_notifier_call_chain(regulator->rdev,
+ 				      REGULATOR_EVENT_OVER_CURRENT, NULL);
++	regulator_unlock(regulator->rdev);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/regulator/da9062-regulator.c b/drivers/regulator/da9062-regulator.c
+index b064d8a19d4c..bab88ddfc509 100644
+--- a/drivers/regulator/da9062-regulator.c
++++ b/drivers/regulator/da9062-regulator.c
+@@ -974,8 +974,10 @@ static irqreturn_t da9062_ldo_lim_event(int irq, void *data)
+ 			continue;
+ 
+ 		if (BIT(regl->info->oc_event.lsb) & bits) {
++			regulator_lock(regl->rdev);
+ 			regulator_notifier_call_chain(regl->rdev,
+ 					REGULATOR_EVENT_OVER_CURRENT, NULL);
++			regulator_unlock(regl->rdev);
+ 			handled = IRQ_HANDLED;
+ 		}
+ 	}
+diff --git a/drivers/regulator/da9063-regulator.c b/drivers/regulator/da9063-regulator.c
+index 2b0c7a85306a..d7bdb95b7602 100644
+--- a/drivers/regulator/da9063-regulator.c
++++ b/drivers/regulator/da9063-regulator.c
+@@ -615,9 +615,12 @@ static irqreturn_t da9063_ldo_lim_event(int irq, void *data)
+ 		if (regl->info->oc_event.reg != DA9063_REG_STATUS_D)
+ 			continue;
+ 
+-		if (BIT(regl->info->oc_event.lsb) & bits)
++		if (BIT(regl->info->oc_event.lsb) & bits) {
++		        regulator_lock(regl->rdev);
+ 			regulator_notifier_call_chain(regl->rdev,
+ 					REGULATOR_EVENT_OVER_CURRENT, NULL);
++		        regulator_unlock(regl->rdev);
++		}
+ 	}
+ 
+ 	return IRQ_HANDLED;
+diff --git a/drivers/regulator/da9211-regulator.c b/drivers/regulator/da9211-regulator.c
+index 109ee12d4362..4d7fe4819c1c 100644
+--- a/drivers/regulator/da9211-regulator.c
++++ b/drivers/regulator/da9211-regulator.c
+@@ -322,8 +322,10 @@ static irqreturn_t da9211_irq_handler(int irq, void *data)
+ 		goto error_i2c;
+ 
+ 	if (reg_val & DA9211_E_OV_CURR_A) {
++	        regulator_lock(chip->rdev[0]);
+ 		regulator_notifier_call_chain(chip->rdev[0],
+ 			REGULATOR_EVENT_OVER_CURRENT, NULL);
++	        regulator_unlock(chip->rdev[0]);
+ 
+ 		err = regmap_write(chip->regmap, DA9211_REG_EVENT_B,
+ 			DA9211_E_OV_CURR_A);
+@@ -334,8 +336,10 @@ static irqreturn_t da9211_irq_handler(int irq, void *data)
+ 	}
+ 
+ 	if (reg_val & DA9211_E_OV_CURR_B) {
++	        regulator_lock(chip->rdev[1]);
+ 		regulator_notifier_call_chain(chip->rdev[1],
+ 			REGULATOR_EVENT_OVER_CURRENT, NULL);
++	        regulator_unlock(chip->rdev[1]);
+ 
+ 		err = regmap_write(chip->regmap, DA9211_REG_EVENT_B,
+ 			DA9211_E_OV_CURR_B);
+diff --git a/drivers/regulator/lp8755.c b/drivers/regulator/lp8755.c
+index 14fd38807134..2e16a6ab491d 100644
+--- a/drivers/regulator/lp8755.c
++++ b/drivers/regulator/lp8755.c
+@@ -372,10 +372,13 @@ static irqreturn_t lp8755_irq_handler(int irq, void *data)
+ 	for (icnt = 0; icnt < LP8755_BUCK_MAX; icnt++)
+ 		if ((flag0 & (0x4 << icnt))
+ 		    && (pchip->irqmask & (0x04 << icnt))
+-		    && (pchip->rdev[icnt] != NULL))
++		    && (pchip->rdev[icnt] != NULL)) {
++			regulator_lock(pchip->rdev[icnt]);
+ 			regulator_notifier_call_chain(pchip->rdev[icnt],
+ 						      LP8755_EVENT_PWR_FAULT,
+ 						      NULL);
++			regulator_unlock(pchip->rdev[icnt]);
++		}
+ 
+ 	/* read flag1 register */
+ 	ret = lp8755_read(pchip, 0x0E, &flag1);
+@@ -389,18 +392,24 @@ static irqreturn_t lp8755_irq_handler(int irq, void *data)
+ 	/* send OCP event to all regulator devices */
+ 	if ((flag1 & 0x01) && (pchip->irqmask & 0x01))
+ 		for (icnt = 0; icnt < LP8755_BUCK_MAX; icnt++)
+-			if (pchip->rdev[icnt] != NULL)
++			if (pchip->rdev[icnt] != NULL) {
++				regulator_lock(pchip->rdev[icnt]);
+ 				regulator_notifier_call_chain(pchip->rdev[icnt],
+ 							      LP8755_EVENT_OCP,
+ 							      NULL);
++				regulator_unlock(pchip->rdev[icnt]);
++			}
+ 
+ 	/* send OVP event to all regulator devices */
+ 	if ((flag1 & 0x02) && (pchip->irqmask & 0x02))
+ 		for (icnt = 0; icnt < LP8755_BUCK_MAX; icnt++)
+-			if (pchip->rdev[icnt] != NULL)
++			if (pchip->rdev[icnt] != NULL) {
++				regulator_lock(pchip->rdev[icnt]);
+ 				regulator_notifier_call_chain(pchip->rdev[icnt],
+ 							      LP8755_EVENT_OVP,
+ 							      NULL);
++				regulator_unlock(pchip->rdev[icnt]);
++			}
+ 	return IRQ_HANDLED;
+ 
+ err_i2c:
+diff --git a/drivers/regulator/ltc3589.c b/drivers/regulator/ltc3589.c
+index 63f724f260ef..75089b037b72 100644
+--- a/drivers/regulator/ltc3589.c
++++ b/drivers/regulator/ltc3589.c
+@@ -419,16 +419,22 @@ static irqreturn_t ltc3589_isr(int irq, void *dev_id)
+ 
+ 	if (irqstat & LTC3589_IRQSTAT_THERMAL_WARN) {
+ 		event = REGULATOR_EVENT_OVER_TEMP;
+-		for (i = 0; i < LTC3589_NUM_REGULATORS; i++)
++		for (i = 0; i < LTC3589_NUM_REGULATORS; i++) {
++		        regulator_lock(ltc3589->regulators[i]);
+ 			regulator_notifier_call_chain(ltc3589->regulators[i],
+ 						      event, NULL);
++		        regulator_unlock(ltc3589->regulators[i]);
++		}
+ 	}
+ 
+ 	if (irqstat & LTC3589_IRQSTAT_UNDERVOLT_WARN) {
+ 		event = REGULATOR_EVENT_UNDER_VOLTAGE;
+-		for (i = 0; i < LTC3589_NUM_REGULATORS; i++)
++		for (i = 0; i < LTC3589_NUM_REGULATORS; i++) {
++		        regulator_lock(ltc3589->regulators[i]);
+ 			regulator_notifier_call_chain(ltc3589->regulators[i],
+ 						      event, NULL);
++		        regulator_unlock(ltc3589->regulators[i]);
++		}
+ 	}
+ 
+ 	/* Clear warning condition */
+diff --git a/drivers/regulator/ltc3676.c b/drivers/regulator/ltc3676.c
+index e6d66e492b85..4be90c78c720 100644
+--- a/drivers/regulator/ltc3676.c
++++ b/drivers/regulator/ltc3676.c
+@@ -285,17 +285,23 @@ static irqreturn_t ltc3676_isr(int irq, void *dev_id)
+ 	if (irqstat & LTC3676_IRQSTAT_THERMAL_WARN) {
+ 		dev_warn(dev, "Over-temperature Warning\n");
+ 		event = REGULATOR_EVENT_OVER_TEMP;
+-		for (i = 0; i < LTC3676_NUM_REGULATORS; i++)
++		for (i = 0; i < LTC3676_NUM_REGULATORS; i++) {
++			regulator_lock(ltc3676->regulators[i]);
+ 			regulator_notifier_call_chain(ltc3676->regulators[i],
+ 						      event, NULL);
++			regulator_unlock(ltc3676->regulators[i]);
++		}
+ 	}
+ 
+ 	if (irqstat & LTC3676_IRQSTAT_UNDERVOLT_WARN) {
+ 		dev_info(dev, "Undervoltage Warning\n");
+ 		event = REGULATOR_EVENT_UNDER_VOLTAGE;
+-		for (i = 0; i < LTC3676_NUM_REGULATORS; i++)
++		for (i = 0; i < LTC3676_NUM_REGULATORS; i++) {
++			regulator_lock(ltc3676->regulators[i]);
+ 			regulator_notifier_call_chain(ltc3676->regulators[i],
+ 						      event, NULL);
++			regulator_unlock(ltc3676->regulators[i]);
++		}
+ 	}
+ 
+ 	/* Clear warning condition */
+diff --git a/drivers/regulator/pv88060-regulator.c b/drivers/regulator/pv88060-regulator.c
+index 1600f9821891..810816e9df5d 100644
+--- a/drivers/regulator/pv88060-regulator.c
++++ b/drivers/regulator/pv88060-regulator.c
+@@ -244,9 +244,11 @@ static irqreturn_t pv88060_irq_handler(int irq, void *data)
+ 	if (reg_val & PV88060_E_VDD_FLT) {
+ 		for (i = 0; i < PV88060_MAX_REGULATORS; i++) {
+ 			if (chip->rdev[i] != NULL) {
++				regulator_lock(chip->rdev[i]);
+ 				regulator_notifier_call_chain(chip->rdev[i],
+ 					REGULATOR_EVENT_UNDER_VOLTAGE,
+ 					NULL);
++				regulator_unlock(chip->rdev[i]);
+ 			}
+ 		}
+ 
+@@ -261,9 +263,11 @@ static irqreturn_t pv88060_irq_handler(int irq, void *data)
+ 	if (reg_val & PV88060_E_OVER_TEMP) {
+ 		for (i = 0; i < PV88060_MAX_REGULATORS; i++) {
+ 			if (chip->rdev[i] != NULL) {
++				regulator_lock(chip->rdev[i]);
+ 				regulator_notifier_call_chain(chip->rdev[i],
+ 					REGULATOR_EVENT_OVER_TEMP,
+ 					NULL);
++				regulator_unlock(chip->rdev[i]);
+ 			}
+ 		}
+ 
+diff --git a/drivers/regulator/pv88080-regulator.c b/drivers/regulator/pv88080-regulator.c
+index bdddacdbeb99..6279216fb254 100644
+--- a/drivers/regulator/pv88080-regulator.c
++++ b/drivers/regulator/pv88080-regulator.c
+@@ -345,9 +345,11 @@ static irqreturn_t pv88080_irq_handler(int irq, void *data)
+ 	if (reg_val & PV88080_E_VDD_FLT) {
+ 		for (i = 0; i < PV88080_MAX_REGULATORS; i++) {
+ 			if (chip->rdev[i] != NULL) {
++			        regulator_lock(chip->rdev[i]);
+ 				regulator_notifier_call_chain(chip->rdev[i],
+ 					REGULATOR_EVENT_UNDER_VOLTAGE,
+ 					NULL);
++			        regulator_unlock(chip->rdev[i]);
+ 			}
+ 		}
+ 
+@@ -362,9 +364,11 @@ static irqreturn_t pv88080_irq_handler(int irq, void *data)
+ 	if (reg_val & PV88080_E_OVER_TEMP) {
+ 		for (i = 0; i < PV88080_MAX_REGULATORS; i++) {
+ 			if (chip->rdev[i] != NULL) {
++			        regulator_lock(chip->rdev[i]);
+ 				regulator_notifier_call_chain(chip->rdev[i],
+ 					REGULATOR_EVENT_OVER_TEMP,
+ 					NULL);
++			        regulator_unlock(chip->rdev[i]);
+ 			}
+ 		}
+ 
+diff --git a/drivers/regulator/pv88090-regulator.c b/drivers/regulator/pv88090-regulator.c
+index 6e97cc6df2ee..90f4f907fb3f 100644
+--- a/drivers/regulator/pv88090-regulator.c
++++ b/drivers/regulator/pv88090-regulator.c
+@@ -237,9 +237,11 @@ static irqreturn_t pv88090_irq_handler(int irq, void *data)
+ 	if (reg_val & PV88090_E_VDD_FLT) {
+ 		for (i = 0; i < PV88090_MAX_REGULATORS; i++) {
+ 			if (chip->rdev[i] != NULL) {
++			        regulator_lock(chip->rdev[i]);
+ 				regulator_notifier_call_chain(chip->rdev[i],
+ 					REGULATOR_EVENT_UNDER_VOLTAGE,
+ 					NULL);
++			        regulator_unlock(chip->rdev[i]);
+ 			}
+ 		}
+ 
+@@ -254,9 +256,11 @@ static irqreturn_t pv88090_irq_handler(int irq, void *data)
+ 	if (reg_val & PV88090_E_OVER_TEMP) {
+ 		for (i = 0; i < PV88090_MAX_REGULATORS; i++) {
+ 			if (chip->rdev[i] != NULL) {
++			        regulator_lock(chip->rdev[i]);
+ 				regulator_notifier_call_chain(chip->rdev[i],
+ 					REGULATOR_EVENT_OVER_TEMP,
+ 					NULL);
++			        regulator_unlock(chip->rdev[i]);
+ 			}
+ 		}
+ 
+diff --git a/drivers/regulator/wm831x-dcdc.c b/drivers/regulator/wm831x-dcdc.c
+index 12b422373580..d1873f94bca7 100644
+--- a/drivers/regulator/wm831x-dcdc.c
++++ b/drivers/regulator/wm831x-dcdc.c
+@@ -183,9 +183,11 @@ static irqreturn_t wm831x_dcdc_uv_irq(int irq, void *data)
+ {
+ 	struct wm831x_dcdc *dcdc = data;
+ 
++	regulator_lock(dcdc->regulator);
+ 	regulator_notifier_call_chain(dcdc->regulator,
+ 				      REGULATOR_EVENT_UNDER_VOLTAGE,
+ 				      NULL);
++	regulator_unlock(dcdc->regulator);
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -194,9 +196,11 @@ static irqreturn_t wm831x_dcdc_oc_irq(int irq, void *data)
+ {
+ 	struct wm831x_dcdc *dcdc = data;
+ 
++	regulator_lock(dcdc->regulator);
+ 	regulator_notifier_call_chain(dcdc->regulator,
+ 				      REGULATOR_EVENT_OVER_CURRENT,
+ 				      NULL);
++	regulator_unlock(dcdc->regulator);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/regulator/wm831x-isink.c b/drivers/regulator/wm831x-isink.c
+index 6dd891d7eee3..11f351191dba 100644
+--- a/drivers/regulator/wm831x-isink.c
++++ b/drivers/regulator/wm831x-isink.c
+@@ -140,9 +140,11 @@ static irqreturn_t wm831x_isink_irq(int irq, void *data)
+ {
+ 	struct wm831x_isink *isink = data;
+ 
++	regulator_lock(isink->regulator);
+ 	regulator_notifier_call_chain(isink->regulator,
+ 				      REGULATOR_EVENT_OVER_CURRENT,
+ 				      NULL);
++	regulator_unlock(isink->regulator);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/regulator/wm831x-ldo.c b/drivers/regulator/wm831x-ldo.c
+index e4a6f888484e..fcd038e7cd80 100644
+--- a/drivers/regulator/wm831x-ldo.c
++++ b/drivers/regulator/wm831x-ldo.c
+@@ -51,9 +51,11 @@ static irqreturn_t wm831x_ldo_uv_irq(int irq, void *data)
+ {
+ 	struct wm831x_ldo *ldo = data;
+ 
++	regulator_lock(ldo->regulator);
+ 	regulator_notifier_call_chain(ldo->regulator,
+ 				      REGULATOR_EVENT_UNDER_VOLTAGE,
+ 				      NULL);
++	regulator_unlock(ldo->regulator);
+ 
+ 	return IRQ_HANDLED;
+ }
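
Every regulator hunk above wraps regulator_notifier_call_chain() in regulator_lock()/regulator_unlock(): as of this series the notifier chain expects the caller to hold the rdev lock, and these IRQ threads were calling it unlocked. Sketched below with a plain pthread mutex standing in for the regulator lock (the real lock is more involved):

#include <pthread.h>

struct rdev { pthread_mutex_t mutex; };

static struct rdev rdev0 = { PTHREAD_MUTEX_INITIALIZER };	/* ready-to-use instance */

static void notifier_call_chain(struct rdev *r, unsigned long event)
{
	(void)r; (void)event;	/* consumers may assume the rdev is locked */
}

static void oc_irq_handler(struct rdev *r)
{
	pthread_mutex_lock(&r->mutex);	/* regulator_lock() analogue */
	notifier_call_chain(r, 2 /* OVER_CURRENT, illustrative */);
	pthread_mutex_unlock(&r->mutex);	/* regulator_unlock() analogue */
}
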
+diff --git a/drivers/rtc/rtc-88pm860x.c b/drivers/rtc/rtc-88pm860x.c
+index d25282b4a7dd..73697e4b18a9 100644
+--- a/drivers/rtc/rtc-88pm860x.c
++++ b/drivers/rtc/rtc-88pm860x.c
+@@ -421,7 +421,7 @@ static int pm860x_rtc_remove(struct platform_device *pdev)
+ 	struct pm860x_rtc_info *info = platform_get_drvdata(pdev);
+ 
+ #ifdef VRTC_CALIBRATION
+-	flush_scheduled_work();
++	cancel_delayed_work_sync(&info->calib_work);
+ 	/* disable measurement */
+ 	pm860x_set_bits(info->i2c, PM8607_MEAS_EN2, MEAS2_VRTC, 0);
+ #endif	/* VRTC_CALIBRATION */
+diff --git a/drivers/rtc/rtc-stm32.c b/drivers/rtc/rtc-stm32.c
+index c5908cfea234..8e6c9b3bcc29 100644
+--- a/drivers/rtc/rtc-stm32.c
++++ b/drivers/rtc/rtc-stm32.c
+@@ -788,11 +788,14 @@ static int stm32_rtc_probe(struct platform_device *pdev)
+ 	ret = device_init_wakeup(&pdev->dev, true);
+ 	if (rtc->data->has_wakeirq) {
+ 		rtc->wakeirq_alarm = platform_get_irq(pdev, 1);
+-		if (rtc->wakeirq_alarm <= 0)
+-			ret = rtc->wakeirq_alarm;
+-		else
++		if (rtc->wakeirq_alarm > 0) {
+ 			ret = dev_pm_set_dedicated_wake_irq(&pdev->dev,
+ 							    rtc->wakeirq_alarm);
++		} else {
++			ret = rtc->wakeirq_alarm;
++			if (rtc->wakeirq_alarm == -EPROBE_DEFER)
++				goto err;
++		}
+ 	}
+ 	if (ret)
+ 		dev_warn(&pdev->dev, "alarm can't wake up the system: %d", ret);
+diff --git a/drivers/rtc/rtc-xgene.c b/drivers/rtc/rtc-xgene.c
+index 153820876a82..2f741f455c30 100644
+--- a/drivers/rtc/rtc-xgene.c
++++ b/drivers/rtc/rtc-xgene.c
+@@ -168,6 +168,10 @@ static int xgene_rtc_probe(struct platform_device *pdev)
+ 	if (IS_ERR(pdata->csr_base))
+ 		return PTR_ERR(pdata->csr_base);
+ 
++	pdata->rtc = devm_rtc_allocate_device(&pdev->dev);
++	if (IS_ERR(pdata->rtc))
++		return PTR_ERR(pdata->rtc);
++
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq < 0) {
+ 		dev_err(&pdev->dev, "No IRQ resource\n");
+@@ -198,15 +202,15 @@ static int xgene_rtc_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
+-	pdata->rtc = devm_rtc_device_register(&pdev->dev, pdev->name,
+-					 &xgene_rtc_ops, THIS_MODULE);
+-	if (IS_ERR(pdata->rtc)) {
+-		clk_disable_unprepare(pdata->clk);
+-		return PTR_ERR(pdata->rtc);
+-	}
+-
+ 	/* HW does not support updates faster than once per second */

+ 	pdata->rtc->uie_unsupported = 1;
++	pdata->rtc->ops = &xgene_rtc_ops;
++
++	ret = rtc_register_device(pdata->rtc);
++	if (ret) {
++		clk_disable_unprepare(pdata->clk);
++		return ret;
++	}
+ 
+ 	return 0;
+ }
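
The rtc-xgene hunk above switches to two-step registration: devm_rtc_allocate_device() now runs before the IRQ is requested (the interrupt handler dereferences pdata->rtc, so allocating late leaves a window for a NULL dereference), fields such as uie_unsupported and ops are set on the allocated device, and rtc_register_device() only runs once everything is ready. A sketch with stand-in types:

#include <stdlib.h>

struct rtc_ops { int dummy; };
struct rtc_device {
	const struct rtc_ops *ops;
	int uie_unsupported;
};

static const struct rtc_ops xgene_ops = { 0 };	/* illustrative ops table */

static struct rtc_device *rtc_allocate(void)	/* devm_rtc_allocate_device() analogue */
{
	return calloc(1, sizeof(struct rtc_device));
}

static int rtc_register(struct rtc_device *rtc)	/* rtc_register_device() analogue */
{
	return (rtc && rtc->ops) ? 0 : -1;
}

static int probe(struct rtc_device **out)
{
	struct rtc_device *rtc = rtc_allocate();	/* step 1: allocate early */

	if (!rtc)
		return -1;
	/* configure while the device is not yet visible to the core */
	rtc->uie_unsupported = 1;
	rtc->ops = &xgene_ops;
	*out = rtc;
	return rtc_register(rtc);	/* step 2: publish when ready */
}
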
+diff --git a/drivers/s390/block/dcssblk.c b/drivers/s390/block/dcssblk.c
+index 4e8aedd50cb0..d04d4378ca50 100644
+--- a/drivers/s390/block/dcssblk.c
++++ b/drivers/s390/block/dcssblk.c
+@@ -59,6 +59,7 @@ static size_t dcssblk_dax_copy_to_iter(struct dax_device *dax_dev,
+ 
+ static const struct dax_operations dcssblk_dax_ops = {
+ 	.direct_access = dcssblk_dax_direct_access,
++	.dax_supported = generic_fsdax_supported,
+ 	.copy_from_iter = dcssblk_dax_copy_from_iter,
+ 	.copy_to_iter = dcssblk_dax_copy_to_iter,
+ };
+diff --git a/drivers/s390/cio/cio.h b/drivers/s390/cio/cio.h
+index 9811fd8a0c73..92eabbb5f18d 100644
+--- a/drivers/s390/cio/cio.h
++++ b/drivers/s390/cio/cio.h
+@@ -115,7 +115,7 @@ struct subchannel {
+ 	struct schib_config config;
+ } __attribute__ ((aligned(8)));
+ 
+-DECLARE_PER_CPU(struct irb, cio_irb);
++DECLARE_PER_CPU_ALIGNED(struct irb, cio_irb);
+ 
+ #define to_subchannel(n) container_of(n, struct subchannel, dev)
+ 
+diff --git a/drivers/s390/cio/vfio_ccw_drv.c b/drivers/s390/cio/vfio_ccw_drv.c
+index 0b3b9de45c60..9e84d8a971ad 100644
+--- a/drivers/s390/cio/vfio_ccw_drv.c
++++ b/drivers/s390/cio/vfio_ccw_drv.c
+@@ -40,26 +40,30 @@ int vfio_ccw_sch_quiesce(struct subchannel *sch)
+ 	if (ret != -EBUSY)
+ 		goto out_unlock;
+ 
++	iretry = 255;
+ 	do {
+-		iretry = 255;
+ 
+ 		ret = cio_cancel_halt_clear(sch, &iretry);
+-		while (ret == -EBUSY) {
+-			/*
+-			 * Flush all I/O and wait for
+-			 * cancel/halt/clear completion.
+-			 */
+-			private->completion = &completion;
+-			spin_unlock_irq(sch->lock);
+ 
+-			wait_for_completion_timeout(&completion, 3*HZ);
++		if (ret == -EIO) {
++			pr_err("vfio_ccw: could not quiesce subchannel 0.%x.%04x!\n",
++			       sch->schid.ssid, sch->schid.sch_no);
++			break;
++		}
++
++		/*
++		 * Flush all I/O and wait for
++		 * cancel/halt/clear completion.
++		 */
++		private->completion = &completion;
++		spin_unlock_irq(sch->lock);
+ 
+-			spin_lock_irq(sch->lock);
+-			private->completion = NULL;
+-			flush_workqueue(vfio_ccw_work_q);
+-			ret = cio_cancel_halt_clear(sch, &iretry);
+-		};
++		if (ret == -EBUSY)
++			wait_for_completion_timeout(&completion, 3*HZ);
+ 
++		private->completion = NULL;
++		flush_workqueue(vfio_ccw_work_q);
++		spin_lock_irq(sch->lock);
+ 		ret = cio_disable_subchannel(sch);
+ 	} while (ret == -EBUSY);
+ out_unlock:
+diff --git a/drivers/s390/cio/vfio_ccw_ops.c b/drivers/s390/cio/vfio_ccw_ops.c
+index f673e106c041..dc5ff47de3fe 100644
+--- a/drivers/s390/cio/vfio_ccw_ops.c
++++ b/drivers/s390/cio/vfio_ccw_ops.c
+@@ -130,11 +130,12 @@ static int vfio_ccw_mdev_remove(struct mdev_device *mdev)
+ 
+ 	if ((private->state != VFIO_CCW_STATE_NOT_OPER) &&
+ 	    (private->state != VFIO_CCW_STATE_STANDBY)) {
+-		if (!vfio_ccw_mdev_reset(mdev))
++		if (!vfio_ccw_sch_quiesce(private->sch))
+ 			private->state = VFIO_CCW_STATE_STANDBY;
+ 		/* The state will be NOT_OPER on error. */
+ 	}
+ 
++	cp_free(&private->cp);
+ 	private->mdev = NULL;
+ 	atomic_inc(&private->avail);
+ 
+@@ -158,6 +159,14 @@ static void vfio_ccw_mdev_release(struct mdev_device *mdev)
+ 	struct vfio_ccw_private *private =
+ 		dev_get_drvdata(mdev_parent_dev(mdev));
+ 
++	if ((private->state != VFIO_CCW_STATE_NOT_OPER) &&
++	    (private->state != VFIO_CCW_STATE_STANDBY)) {
++		if (!vfio_ccw_mdev_reset(mdev))
++			private->state = VFIO_CCW_STATE_STANDBY;
++		/* The state will be NOT_OPER on error. */
++	}
++
++	cp_free(&private->cp);
+ 	vfio_unregister_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY,
+ 				 &private->nb);
+ }
+diff --git a/drivers/s390/crypto/zcrypt_api.c b/drivers/s390/crypto/zcrypt_api.c
+index 689c2af7026a..c31b2d31cd83 100644
+--- a/drivers/s390/crypto/zcrypt_api.c
++++ b/drivers/s390/crypto/zcrypt_api.c
+@@ -659,6 +659,7 @@ static long zcrypt_rsa_modexpo(struct ap_perms *perms,
+ 	trace_s390_zcrypt_req(mex, TP_ICARSAMODEXPO);
+ 
+ 	if (mex->outputdatalength < mex->inputdatalength) {
++		func_code = 0;
+ 		rc = -EINVAL;
+ 		goto out;
+ 	}
+@@ -742,6 +743,7 @@ static long zcrypt_rsa_crt(struct ap_perms *perms,
+ 	trace_s390_zcrypt_req(crt, TP_ICARSACRT);
+ 
+ 	if (crt->outputdatalength < crt->inputdatalength) {
++		func_code = 0;
+ 		rc = -EINVAL;
+ 		goto out;
+ 	}
+@@ -951,6 +953,7 @@ static long zcrypt_send_ep11_cprb(struct ap_perms *perms,
+ 
+ 		targets = kcalloc(target_num, sizeof(*targets), GFP_KERNEL);
+ 		if (!targets) {
++			func_code = 0;
+ 			rc = -ENOMEM;
+ 			goto out;
+ 		}
+@@ -958,6 +961,7 @@ static long zcrypt_send_ep11_cprb(struct ap_perms *perms,
+ 		uptr = (struct ep11_target_dev __force __user *) xcrb->targets;
+ 		if (copy_from_user(targets, uptr,
+ 				   target_num * sizeof(*targets))) {
++			func_code = 0;
+ 			rc = -EFAULT;
+ 			goto out_free;
+ 		}
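
The zcrypt hunks above zero func_code on each early bailout; the shared exit path hands func_code to the reply tracepoint, so leaving it uninitialized on the -EINVAL/-ENOMEM/-EFAULT paths would report stack garbage. A compact sketch of the pattern, with a stand-in tracepoint:

static void trace_rep(unsigned int func_code, long rc)	/* tracepoint stand-in */
{
	(void)func_code; (void)rc;
}

static long rsa_modexpo(unsigned int in_len, unsigned int out_len)
{
	unsigned int func_code;
	long rc;

	if (out_len < in_len) {
		func_code = 0;	/* added: defined value for the exit path */
		rc = -22;	/* -EINVAL */
		goto out;
	}
	func_code = 0x41;	/* normally taken from the request */
	rc = 0;
out:
	trace_rep(func_code, rc);	/* consumes func_code on every path */
	return rc;
}
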
+diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h
+index c851cf6e01c4..d603dfea97ab 100644
+--- a/drivers/s390/net/qeth_core.h
++++ b/drivers/s390/net/qeth_core.h
+@@ -163,6 +163,12 @@ struct qeth_vnicc_info {
+ 	bool rx_bcast_enabled;
+ };
+ 
++static inline int qeth_is_adp_supported(struct qeth_ipa_info *ipa,
++		enum qeth_ipa_setadp_cmd func)
++{
++	return (ipa->supported_funcs & func);
++}
++
+ static inline int qeth_is_ipa_supported(struct qeth_ipa_info *ipa,
+ 		enum qeth_ipa_funcs func)
+ {
+@@ -176,9 +182,7 @@ static inline int qeth_is_ipa_enabled(struct qeth_ipa_info *ipa,
+ }
+ 
+ #define qeth_adp_supported(c, f) \
+-	qeth_is_ipa_supported(&c->options.adp, f)
+-#define qeth_adp_enabled(c, f) \
+-	qeth_is_ipa_enabled(&c->options.adp, f)
++	qeth_is_adp_supported(&c->options.adp, f)
+ #define qeth_is_supported(c, f) \
+ 	qeth_is_ipa_supported(&c->options.ipa4, f)
+ #define qeth_is_enabled(c, f) \
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index 44bd6f04c145..8c73a99daff3 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -1308,7 +1308,7 @@ static void qeth_set_multiple_write_queues(struct qeth_card *card)
+ 	card->qdio.no_out_queues = 4;
+ }
+ 
+-static void qeth_update_from_chp_desc(struct qeth_card *card)
++static int qeth_update_from_chp_desc(struct qeth_card *card)
+ {
+ 	struct ccw_device *ccwdev;
+ 	struct channel_path_desc_fmt0 *chp_dsc;
+@@ -1318,7 +1318,7 @@ static void qeth_update_from_chp_desc(struct qeth_card *card)
+ 	ccwdev = card->data.ccwdev;
+ 	chp_dsc = ccw_device_get_chp_desc(ccwdev, 0);
+ 	if (!chp_dsc)
+-		goto out;
++		return -ENOMEM;
+ 
+ 	card->info.func_level = 0x4100 + chp_dsc->desc;
+ 	if (card->info.type == QETH_CARD_TYPE_IQD)
+@@ -1333,6 +1333,7 @@ out:
+ 	kfree(chp_dsc);
+ 	QETH_DBF_TEXT_(SETUP, 2, "nr:%x", card->qdio.no_out_queues);
+ 	QETH_DBF_TEXT_(SETUP, 2, "lvl:%02x", card->info.func_level);
++	return 0;
+ }
+ 
+ static void qeth_init_qdio_info(struct qeth_card *card)
+@@ -4986,7 +4987,9 @@ int qeth_core_hardsetup_card(struct qeth_card *card, bool *carrier_ok)
+ 
+ 	QETH_DBF_TEXT(SETUP, 2, "hrdsetup");
+ 	atomic_set(&card->force_alloc_skb, 0);
+-	qeth_update_from_chp_desc(card);
++	rc = qeth_update_from_chp_desc(card);
++	if (rc)
++		return rc;
+ retry:
+ 	if (retries < 3)
+ 		QETH_DBF_MESSAGE(2, "Retrying to do IDX activates on device %x.\n",
+@@ -5641,7 +5644,9 @@ static int qeth_core_probe_device(struct ccwgroup_device *gdev)
+ 	}
+ 
+ 	qeth_setup_card(card);
+-	qeth_update_from_chp_desc(card);
++	rc = qeth_update_from_chp_desc(card);
++	if (rc)
++		goto err_chp_desc;
+ 
+ 	card->dev = qeth_alloc_netdev(card);
+ 	if (!card->dev) {
+@@ -5676,6 +5681,7 @@ err_disc:
+ 	qeth_core_free_discipline(card);
+ err_load:
+ 	free_netdev(card->dev);
++err_chp_desc:
+ err_card:
+ 	qeth_core_free_card(card);
+ err_dev:
+diff --git a/drivers/scsi/libsas/sas_expander.c b/drivers/scsi/libsas/sas_expander.c
+index 17b45a0c7bc3..3611a4ef0d15 100644
+--- a/drivers/scsi/libsas/sas_expander.c
++++ b/drivers/scsi/libsas/sas_expander.c
+@@ -2052,6 +2052,11 @@ static int sas_rediscover_dev(struct domain_device *dev, int phy_id, bool last)
+ 	if ((SAS_ADDR(sas_addr) == 0) || (res == -ECOMM)) {
+ 		phy->phy_state = PHY_EMPTY;
+ 		sas_unregister_devs_sas_addr(dev, phy_id, last);
++		/*
++		 * Even though the PHY is empty, for convenience we discover
++		 * the PHY to update the PHY info, like negotiated linkrate.
++		 */
++		sas_ex_phy_discover(dev, phy_id);
+ 		return res;
+ 	} else if (SAS_ADDR(sas_addr) == SAS_ADDR(phy->attached_sas_addr) &&
+ 		   dev_type_flutter(type, phy->attached_dev_type)) {
+diff --git a/drivers/scsi/lpfc/lpfc_ct.c b/drivers/scsi/lpfc/lpfc_ct.c
+index 2e3949c6cd07..25553e7ba85c 100644
+--- a/drivers/scsi/lpfc/lpfc_ct.c
++++ b/drivers/scsi/lpfc/lpfc_ct.c
+@@ -2005,8 +2005,11 @@ lpfc_fdmi_hba_attr_manufacturer(struct lpfc_vport *vport,
+ 	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
+ 	memset(ae, 0, 256);
+ 
++	/* This string MUST be consistent with other FC platforms
++	 * supported by Broadcom.
++	 */
+ 	strncpy(ae->un.AttrString,
+-		"Broadcom Inc.",
++		"Emulex Corporation",
+ 		       sizeof(ae->un.AttrString));
+ 	len = strnlen(ae->un.AttrString,
+ 			  sizeof(ae->un.AttrString));
+@@ -2360,10 +2363,11 @@ lpfc_fdmi_port_attr_fc4type(struct lpfc_vport *vport,
+ 	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
+ 	memset(ae, 0, 32);
+ 
+-	ae->un.AttrTypes[3] = 0x02; /* Type 1 - ELS */
+-	ae->un.AttrTypes[2] = 0x01; /* Type 8 - FCP */
+-	ae->un.AttrTypes[6] = 0x01; /* Type 40 - NVME */
+-	ae->un.AttrTypes[7] = 0x01; /* Type 32 - CT */
++	ae->un.AttrTypes[3] = 0x02; /* Type 0x1 - ELS */
++	ae->un.AttrTypes[2] = 0x01; /* Type 0x8 - FCP */
++	if (vport->nvmei_support || vport->phba->nvmet_support)
++		ae->un.AttrTypes[6] = 0x01; /* Type 0x28 - NVME */
++	ae->un.AttrTypes[7] = 0x01; /* Type 0x20 - CT */
+ 	size = FOURBYTES + 32;
+ 	ad->AttrLen = cpu_to_be16(size);
+ 	ad->AttrType = cpu_to_be16(RPRT_SUPPORTED_FC4_TYPES);
+@@ -2673,9 +2677,11 @@ lpfc_fdmi_port_attr_active_fc4type(struct lpfc_vport *vport,
+ 	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
+ 	memset(ae, 0, 32);
+ 
+-	ae->un.AttrTypes[3] = 0x02; /* Type 1 - ELS */
+-	ae->un.AttrTypes[2] = 0x01; /* Type 8 - FCP */
+-	ae->un.AttrTypes[7] = 0x01; /* Type 32 - CT */
++	ae->un.AttrTypes[3] = 0x02; /* Type 0x1 - ELS */
++	ae->un.AttrTypes[2] = 0x01; /* Type 0x8 - FCP */
++	if (vport->phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME)
++		ae->un.AttrTypes[6] = 0x1; /* Type 0x28 - NVME */
++	ae->un.AttrTypes[7] = 0x01; /* Type 0x20 - CT */
+ 	size = FOURBYTES + 32;
+ 	ad->AttrLen = cpu_to_be16(size);
+ 	ad->AttrType = cpu_to_be16(RPRT_ACTIVE_FC4_TYPES);
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index aa4961a2caf8..75e9d46d44d4 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -932,7 +932,11 @@ lpfc_linkdown(struct lpfc_hba *phba)
+ 		}
+ 	}
+ 	lpfc_destroy_vport_work_array(phba, vports);
+-	/* Clean up any firmware default rpi's */
++
++	/* Clean up any SLI3 firmware default rpi's */
++	if (phba->sli_rev > LPFC_SLI_REV3)
++		goto skip_unreg_did;
++
+ 	mb = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+ 	if (mb) {
+ 		lpfc_unreg_did(phba, 0xffff, LPFC_UNREG_ALL_DFLT_RPIS, mb);
+@@ -944,6 +948,7 @@ lpfc_linkdown(struct lpfc_hba *phba)
+ 		}
+ 	}
+ 
++ skip_unreg_did:
+ 	/* Setup myDID for link up if we are in pt2pt mode */
+ 	if (phba->pport->fc_flag & FC_PT2PT) {
+ 		mb = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+@@ -4868,6 +4873,10 @@ lpfc_unreg_rpi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 					 * accept PLOGIs after unreg_rpi_cmpl
+ 					 */
+ 					acc_plogi = 0;
++				} else if (vport->load_flag & FC_UNLOADING) {
++					mbox->ctx_ndlp = NULL;
++					mbox->mbox_cmpl =
++						lpfc_sli_def_mbox_cmpl;
+ 				} else {
+ 					mbox->ctx_ndlp = ndlp;
+ 					mbox->mbox_cmpl =
+@@ -4979,6 +4988,10 @@ lpfc_unreg_default_rpis(struct lpfc_vport *vport)
+ 	LPFC_MBOXQ_t     *mbox;
+ 	int rc;
+ 
++	/* Unreg DID is an SLI3 operation. */
++	if (phba->sli_rev > LPFC_SLI_REV3)
++		return;
++
+ 	mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+ 	if (mbox) {
+ 		lpfc_unreg_did(phba, vport->vpi, LPFC_UNREG_ALL_DFLT_RPIS,
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 7fcdaed3fa94..46e155d1fa15 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -3245,6 +3245,13 @@ void lpfc_destroy_multixri_pools(struct lpfc_hba *phba)
+ 	if (phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME)
+ 		lpfc_destroy_expedite_pool(phba);
+ 
++	if (!(phba->pport->load_flag & FC_UNLOADING)) {
++		lpfc_sli_flush_fcp_rings(phba);
++
++		if (phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME)
++			lpfc_sli_flush_nvme_rings(phba);
++	}
++
+ 	hwq_count = phba->cfg_hdw_queue;
+ 
+ 	for (i = 0; i < hwq_count; i++) {
+@@ -3611,8 +3618,6 @@ lpfc_io_free(struct lpfc_hba *phba)
+ 	struct lpfc_sli4_hdw_queue *qp;
+ 	int idx;
+ 
+-	spin_lock_irq(&phba->hbalock);
+-
+ 	for (idx = 0; idx < phba->cfg_hdw_queue; idx++) {
+ 		qp = &phba->sli4_hba.hdwq[idx];
+ 		/* Release all the lpfc_nvme_bufs maintained by this host. */
+@@ -3642,8 +3647,6 @@ lpfc_io_free(struct lpfc_hba *phba)
+ 		}
+ 		spin_unlock(&qp->io_buf_list_get_lock);
+ 	}
+-
+-	spin_unlock_irq(&phba->hbalock);
+ }
+ 
+ /**
+diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
+index 1aa00d2c3f74..9defff711884 100644
+--- a/drivers/scsi/lpfc/lpfc_nvme.c
++++ b/drivers/scsi/lpfc/lpfc_nvme.c
+@@ -2080,15 +2080,15 @@ lpfc_nvme_create_localport(struct lpfc_vport *vport)
+ 		lpfc_nvme_template.max_hw_queues =
+ 			phba->sli4_hba.num_present_cpu;
+ 
++	if (!IS_ENABLED(CONFIG_NVME_FC))
++		return ret;
++
+ 	/* localport is allocated from the stack, but the registration
+ 	 * call allocates heap memory as well as the private area.
+ 	 */
+-#if (IS_ENABLED(CONFIG_NVME_FC))
++
+ 	ret = nvme_fc_register_localport(&nfcp_info, &lpfc_nvme_template,
+ 					 &vport->phba->pcidev->dev, &localport);
+-#else
+-	ret = -ENOMEM;
+-#endif
+ 	if (!ret) {
+ 		lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME | LOG_NVME_DISC,
+ 				 "6005 Successfully registered local "
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index a497b2c0cb79..25501d4605ff 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -3670,7 +3670,7 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
+ #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
+ 	if (phba->cpucheck_on & LPFC_CHECK_SCSI_IO) {
+ 		cpu = smp_processor_id();
+-		if (cpu < LPFC_CHECK_CPU_CNT)
++		if (cpu < LPFC_CHECK_CPU_CNT && phba->sli4_hba.hdwq)
+ 			phba->sli4_hba.hdwq[idx].cpucheck_cmpl_io[cpu]++;
+ 	}
+ #endif
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 57b4a463b589..dc933b6d7800 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -2502,8 +2502,8 @@ lpfc_sli_def_mbox_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ 			} else {
+ 				ndlp->nlp_flag &= ~NLP_UNREG_INP;
+ 			}
++			pmb->ctx_ndlp = NULL;
+ 		}
+-		pmb->ctx_ndlp = NULL;
+ 	}
+ 
+ 	/* Check security permission status on INIT_LINK mailbox command */
+@@ -7652,12 +7652,6 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
+ 		phba->cfg_xri_rebalancing = 0;
+ 	}
+ 
+-	/* Arm the CQs and then EQs on device */
+-	lpfc_sli4_arm_cqeq_intr(phba);
+-
+-	/* Indicate device interrupt mode */
+-	phba->sli4_hba.intr_enable = 1;
+-
+ 	/* Allow asynchronous mailbox command to go through */
+ 	spin_lock_irq(&phba->hbalock);
+ 	phba->sli.sli_flag &= ~LPFC_SLI_ASYNC_MBX_BLK;
+@@ -7726,6 +7720,12 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
+ 		phba->trunk_link.link3.state = LPFC_LINK_DOWN;
+ 	spin_unlock_irq(&phba->hbalock);
+ 
++	/* Arm the CQs and then EQs on device */
++	lpfc_sli4_arm_cqeq_intr(phba);
++
++	/* Indicate device interrupt mode */
++	phba->sli4_hba.intr_enable = 1;
++
+ 	if (!(phba->hba_flag & HBA_FCOE_MODE) &&
+ 	    (phba->hba_flag & LINK_DISABLED)) {
+ 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT | LOG_SLI,
+diff --git a/drivers/scsi/qedf/qedf_io.c b/drivers/scsi/qedf/qedf_io.c
+index 6ca583bdde23..29b51c466721 100644
+--- a/drivers/scsi/qedf/qedf_io.c
++++ b/drivers/scsi/qedf/qedf_io.c
+@@ -902,6 +902,7 @@ int qedf_post_io_req(struct qedf_rport *fcport, struct qedf_ioreq *io_req)
+ 	if (!test_bit(QEDF_RPORT_SESSION_READY, &fcport->flags)) {
+ 		QEDF_ERR(&(qedf->dbg_ctx), "Session not offloaded yet.\n");
+ 		kref_put(&io_req->refcount, qedf_release_cmd);
++		return -EINVAL;
+ 	}
+ 
+ 	/* Obtain free SQE */
+diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c
+index 6d6d6013e35b..bf371e7b957d 100644
+--- a/drivers/scsi/qedi/qedi_iscsi.c
++++ b/drivers/scsi/qedi/qedi_iscsi.c
+@@ -1000,6 +1000,9 @@ static void qedi_ep_disconnect(struct iscsi_endpoint *ep)
+ 	qedi_ep = ep->dd_data;
+ 	qedi = qedi_ep->qedi;
+ 
++	if (qedi_ep->state == EP_STATE_OFLDCONN_START)
++		goto ep_exit_recover;
++
+ 	flush_work(&qedi_ep->offload_work);
+ 
+ 	if (qedi_ep->conn) {
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index 69bbea9239cc..add17843148d 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -3475,7 +3475,7 @@ qla24xx_enable_msix(struct qla_hw_data *ha, struct rsp_que *rsp)
+ 		ql_log(ql_log_fatal, vha, 0x00c8,
+ 		    "Failed to allocate memory for ha->msix_entries.\n");
+ 		ret = -ENOMEM;
+-		goto msix_out;
++		goto free_irqs;
+ 	}
+ 	ha->flags.msix_enabled = 1;
+ 
+@@ -3558,6 +3558,10 @@ msix_register_fail:
+ 
+ msix_out:
+ 	return ret;
++
++free_irqs:
++	pci_free_irq_vectors(ha->pdev);
++	goto msix_out;
+ }
+ 
+ int
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index 697eee1d8847..b210a8296c27 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -680,7 +680,6 @@ done:
+ void qla24xx_do_nack_work(struct scsi_qla_host *vha, struct qla_work_evt *e)
+ {
+ 	fc_port_t *t;
+-	unsigned long flags;
+ 
+ 	switch (e->u.nack.type) {
+ 	case SRB_NACK_PRLI:
+@@ -693,10 +692,8 @@ void qla24xx_do_nack_work(struct scsi_qla_host *vha, struct qla_work_evt *e)
+ 		if (t) {
+ 			ql_log(ql_log_info, vha, 0xd034,
+ 			    "%s create sess success %p", __func__, t);
+-			spin_lock_irqsave(&vha->hw->tgt.sess_lock, flags);
+ 			/* create sess has an extra kref */
+ 			vha->hw->tgt.tgt_ops->put_sess(e->u.nack.fcport);
+-			spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
+ 		}
+ 		break;
+ 	}
+@@ -708,9 +705,6 @@ void qla24xx_delete_sess_fn(struct work_struct *work)
+ {
+ 	fc_port_t *fcport = container_of(work, struct fc_port, del_work);
+ 	struct qla_hw_data *ha = fcport->vha->hw;
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&ha->tgt.sess_lock, flags);
+ 
+ 	if (fcport->se_sess) {
+ 		ha->tgt.tgt_ops->shutdown_sess(fcport);
+@@ -718,7 +712,6 @@ void qla24xx_delete_sess_fn(struct work_struct *work)
+ 	} else {
+ 		qlt_unreg_sess(fcport);
+ 	}
+-	spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
+ }
+ 
+ /*
+@@ -787,8 +780,9 @@ void qlt_fc_port_added(struct scsi_qla_host *vha, fc_port_t *fcport)
+ 		    fcport->port_name, sess->loop_id);
+ 		sess->local = 0;
+ 	}
+-	ha->tgt.tgt_ops->put_sess(sess);
+ 	spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
++
++	ha->tgt.tgt_ops->put_sess(sess);
+ }
+ 
+ /*
+@@ -4242,9 +4236,7 @@ static void __qlt_do_work(struct qla_tgt_cmd *cmd)
+ 	/*
+ 	 * Drop extra session reference from qla_tgt_handle_cmd_for_atio*(
+ 	 */
+-	spin_lock_irqsave(&ha->tgt.sess_lock, flags);
+ 	ha->tgt.tgt_ops->put_sess(sess);
+-	spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
+ 	return;
+ 
+ out_term:
+@@ -4261,9 +4253,7 @@ out_term:
+ 	target_free_tag(sess->se_sess, &cmd->se_cmd);
+ 	spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
+ 
+-	spin_lock_irqsave(&ha->tgt.sess_lock, flags);
+ 	ha->tgt.tgt_ops->put_sess(sess);
+-	spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
+ }
+ 
+ static void qlt_do_work(struct work_struct *work)
+@@ -4472,9 +4462,7 @@ static int qlt_handle_cmd_for_atio(struct scsi_qla_host *vha,
+ 	if (!cmd) {
+ 		ql_dbg(ql_dbg_io, vha, 0x3062,
+ 		    "qla_target(%d): Allocation of cmd failed\n", vha->vp_idx);
+-		spin_lock_irqsave(&ha->tgt.sess_lock, flags);
+ 		ha->tgt.tgt_ops->put_sess(sess);
+-		spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
+ 		return -EBUSY;
+ 	}
+ 
+@@ -6318,17 +6306,19 @@ static void qlt_abort_work(struct qla_tgt *tgt,
+ 	}
+ 
+ 	rc = __qlt_24xx_handle_abts(vha, &prm->abts, sess);
+-	ha->tgt.tgt_ops->put_sess(sess);
+ 	spin_unlock_irqrestore(&ha->tgt.sess_lock, flags2);
+ 
++	ha->tgt.tgt_ops->put_sess(sess);
++
+ 	if (rc != 0)
+ 		goto out_term;
+ 	return;
+ 
+ out_term2:
++	spin_unlock_irqrestore(&ha->tgt.sess_lock, flags2);
++
+ 	if (sess)
+ 		ha->tgt.tgt_ops->put_sess(sess);
+-	spin_unlock_irqrestore(&ha->tgt.sess_lock, flags2);
+ 
+ out_term:
+ 	spin_lock_irqsave(&ha->hardware_lock, flags);
+@@ -6386,9 +6376,10 @@ static void qlt_tmr_work(struct qla_tgt *tgt,
+ 	    scsilun_to_int((struct scsi_lun *)&a->u.isp24.fcp_cmnd.lun);
+ 
+ 	rc = qlt_issue_task_mgmt(sess, unpacked_lun, fn, iocb, 0);
+-	ha->tgt.tgt_ops->put_sess(sess);
+ 	spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
+ 
++	ha->tgt.tgt_ops->put_sess(sess);
++
+ 	if (rc != 0)
+ 		goto out_term;
+ 	return;
+diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
+index 8a3075d17c63..e58becb790fa 100644
+--- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c
++++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
+@@ -359,7 +359,6 @@ static void tcm_qla2xxx_put_sess(struct fc_port *sess)
+ 	if (!sess)
+ 		return;
+ 
+-	assert_spin_locked(&sess->vha->hw->tgt.sess_lock);
+ 	kref_put(&sess->sess_kref, tcm_qla2xxx_release_session);
+ }
+ 
+@@ -374,8 +373,9 @@ static void tcm_qla2xxx_close_session(struct se_session *se_sess)
+ 
+ 	spin_lock_irqsave(&vha->hw->tgt.sess_lock, flags);
+ 	target_sess_cmd_list_set_waiting(se_sess);
+-	tcm_qla2xxx_put_sess(sess);
+ 	spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
++
++	tcm_qla2xxx_put_sess(sess);
+ }
+ 
+ static u32 tcm_qla2xxx_sess_get_index(struct se_session *se_sess)
+@@ -399,6 +399,8 @@ static int tcm_qla2xxx_write_pending(struct se_cmd *se_cmd)
+ 			cmd->se_cmd.transport_state,
+ 			cmd->se_cmd.t_state,
+ 			cmd->se_cmd.se_cmd_flags);
++		transport_generic_request_failure(&cmd->se_cmd,
++			TCM_CHECK_CONDITION_ABORT_CMD);
+ 		return 0;
+ 	}
+ 	cmd->trc_flags |= TRC_XFR_RDY;
+@@ -829,7 +831,6 @@ static void tcm_qla2xxx_clear_nacl_from_fcport_map(struct fc_port *sess)
+ 
+ static void tcm_qla2xxx_shutdown_sess(struct fc_port *sess)
+ {
+-	assert_spin_locked(&sess->vha->hw->tgt.sess_lock);
+ 	target_sess_cmd_list_set_waiting(sess->se_sess);
+ }
+ 
+diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c
+index 6e4f4931ae17..8c674eca09f1 100644
+--- a/drivers/scsi/qla4xxx/ql4_os.c
++++ b/drivers/scsi/qla4xxx/ql4_os.c
+@@ -5930,7 +5930,7 @@ static int get_fw_boot_info(struct scsi_qla_host *ha, uint16_t ddb_index[])
+ 		val = rd_nvram_byte(ha, sec_addr);
+ 		if (val & BIT_7)
+ 			ddb_index[1] = (val & 0x7f);
+-
++		goto exit_boot_info;
+ 	} else if (is_qla80XX(ha)) {
+ 		buf = dma_alloc_coherent(&ha->pdev->dev, size,
+ 					 &buf_dma, GFP_KERNEL);
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 2b2bc4b49d78..b894786df6c2 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -2603,7 +2603,6 @@ sd_read_write_protect_flag(struct scsi_disk *sdkp, unsigned char *buffer)
+ 	int res;
+ 	struct scsi_device *sdp = sdkp->device;
+ 	struct scsi_mode_data data;
+-	int disk_ro = get_disk_ro(sdkp->disk);
+ 	int old_wp = sdkp->write_prot;
+ 
+ 	set_disk_ro(sdkp->disk, 0);
+@@ -2644,7 +2643,7 @@ sd_read_write_protect_flag(struct scsi_disk *sdkp, unsigned char *buffer)
+ 			  "Test WP failed, assume Write Enabled\n");
+ 	} else {
+ 		sdkp->write_prot = ((data.device_specific & 0x80) != 0);
+-		set_disk_ro(sdkp->disk, sdkp->write_prot || disk_ro);
++		set_disk_ro(sdkp->disk, sdkp->write_prot);
+ 		if (sdkp->first_scan || old_wp != sdkp->write_prot) {
+ 			sd_printk(KERN_NOTICE, sdkp, "Write Protect is %s\n",
+ 				  sdkp->write_prot ? "on" : "off");
+diff --git a/drivers/scsi/ufs/ufs-hisi.c b/drivers/scsi/ufs/ufs-hisi.c
+index 0e855b5afe82..2f592df921d9 100644
+--- a/drivers/scsi/ufs/ufs-hisi.c
++++ b/drivers/scsi/ufs/ufs-hisi.c
+@@ -587,6 +587,10 @@ static int ufs_hisi_init_common(struct ufs_hba *hba)
+ 	ufshcd_set_variant(hba, host);
+ 
+ 	host->rst  = devm_reset_control_get(dev, "rst");
++	if (IS_ERR(host->rst)) {
++		dev_err(dev, "%s: failed to get reset control\n", __func__);
++		return PTR_ERR(host->rst);
++	}
+ 
+ 	ufs_hisi_set_pm_lvl(hba);
+ 
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index e040f9dd9ff3..5ba49c8cd2a3 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -6294,19 +6294,19 @@ static u32 ufshcd_find_max_sup_active_icc_level(struct ufs_hba *hba,
+ 		goto out;
+ 	}
+ 
+-	if (hba->vreg_info.vcc)
++	if (hba->vreg_info.vcc && hba->vreg_info.vcc->max_uA)
+ 		icc_level = ufshcd_get_max_icc_level(
+ 				hba->vreg_info.vcc->max_uA,
+ 				POWER_DESC_MAX_ACTV_ICC_LVLS - 1,
+ 				&desc_buf[PWR_DESC_ACTIVE_LVLS_VCC_0]);
+ 
+-	if (hba->vreg_info.vccq)
++	if (hba->vreg_info.vccq && hba->vreg_info.vccq->max_uA)
+ 		icc_level = ufshcd_get_max_icc_level(
+ 				hba->vreg_info.vccq->max_uA,
+ 				icc_level,
+ 				&desc_buf[PWR_DESC_ACTIVE_LVLS_VCCQ_0]);
+ 
+-	if (hba->vreg_info.vccq2)
++	if (hba->vreg_info.vccq2 && hba->vreg_info.vccq2->max_uA)
+ 		icc_level = ufshcd_get_max_icc_level(
+ 				hba->vreg_info.vccq2->max_uA,
+ 				icc_level,
+@@ -7004,6 +7004,15 @@ static int ufshcd_config_vreg_load(struct device *dev, struct ufs_vreg *vreg,
+ 	if (!vreg)
+ 		return 0;
+ 
++	/*
++	 * "set_load" operation shall be required on those regulators
++	 * which specifically configured current limitation. Otherwise
++	 * zero max_uA may cause unexpected behavior when regulator is
++	 * enabled or set as high power mode.
++	 */
++	if (!vreg->max_uA)
++		return 0;
++
+ 	ret = regulator_set_load(vreg->reg, ua);
+ 	if (ret < 0) {
+ 		dev_err(dev, "%s: %s set load (ua=%d) failed, err=%d\n",
+@@ -7039,12 +7048,15 @@ static int ufshcd_config_vreg(struct device *dev,
+ 	name = vreg->name;
+ 
+ 	if (regulator_count_voltages(reg) > 0) {
+-		min_uV = on ? vreg->min_uV : 0;
+-		ret = regulator_set_voltage(reg, min_uV, vreg->max_uV);
+-		if (ret) {
+-			dev_err(dev, "%s: %s set voltage failed, err=%d\n",
++		if (vreg->min_uV && vreg->max_uV) {
++			min_uV = on ? vreg->min_uV : 0;
++			ret = regulator_set_voltage(reg, min_uV, vreg->max_uV);
++			if (ret) {
++				dev_err(dev,
++					"%s: %s set voltage failed, err=%d\n",
+ 					__func__, name, ret);
+-			goto out;
++				goto out;
++			}
+ 		}
+ 
+ 		uA_load = on ? vreg->max_uA : 0;
+diff --git a/drivers/slimbus/qcom-ngd-ctrl.c b/drivers/slimbus/qcom-ngd-ctrl.c
+index 71f094c9ec68..f3585777324c 100644
+--- a/drivers/slimbus/qcom-ngd-ctrl.c
++++ b/drivers/slimbus/qcom-ngd-ctrl.c
+@@ -1342,6 +1342,10 @@ static int of_qcom_slim_ngd_register(struct device *parent,
+ 			return -ENOMEM;
+ 
+ 		ngd->pdev = platform_device_alloc(QCOM_SLIM_NGD_DRV_NAME, id);
++		if (!ngd->pdev) {
++			kfree(ngd);
++			return -ENOMEM;
++		}
+ 		ngd->id = id;
+ 		ngd->pdev->dev.parent = parent;
+ 		ngd->pdev->driver_override = QCOM_SLIM_NGD_DRV_NAME;
+diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c
+index fffc21cd5f79..b3173ebddade 100644
+--- a/drivers/spi/atmel-quadspi.c
++++ b/drivers/spi/atmel-quadspi.c
+@@ -570,7 +570,8 @@ static int atmel_qspi_remove(struct platform_device *pdev)
+ 
+ static int __maybe_unused atmel_qspi_suspend(struct device *dev)
+ {
+-	struct atmel_qspi *aq = dev_get_drvdata(dev);
++	struct spi_controller *ctrl = dev_get_drvdata(dev);
++	struct atmel_qspi *aq = spi_controller_get_devdata(ctrl);
+ 
+ 	clk_disable_unprepare(aq->qspick);
+ 	clk_disable_unprepare(aq->pclk);
+@@ -580,7 +581,8 @@ static int __maybe_unused atmel_qspi_suspend(struct device *dev)
+ 
+ static int __maybe_unused atmel_qspi_resume(struct device *dev)
+ {
+-	struct atmel_qspi *aq = dev_get_drvdata(dev);
++	struct spi_controller *ctrl = dev_get_drvdata(dev);
++	struct atmel_qspi *aq = spi_controller_get_devdata(ctrl);
+ 
+ 	clk_prepare_enable(aq->pclk);
+ 	clk_prepare_enable(aq->qspick);
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index 6ec647bbba77..a81ae29aa68a 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -1494,7 +1494,7 @@ static int spi_imx_transfer(struct spi_device *spi,
+ 
+ 	/* flush rxfifo before transfer */
+ 	while (spi_imx->devtype_data->rx_available(spi_imx))
+-		spi_imx->rx(spi_imx);
++		readl(spi_imx->base + MXC_CSPIRXDATA);
+ 
+ 	if (spi_imx->slave_mode)
+ 		return spi_imx_pio_transfer_slave(spi, transfer);
+diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c
+index b6ddba833d02..d2076f2f468f 100644
+--- a/drivers/spi/spi-pxa2xx.c
++++ b/drivers/spi/spi-pxa2xx.c
+@@ -884,10 +884,14 @@ static unsigned int ssp_get_clk_div(struct driver_data *drv_data, int rate)
+ 
+ 	rate = min_t(int, ssp_clk, rate);
+ 
++	/*
++	 * Calculate the divisor for the SCR (Serial Clock Rate), rounding
++	 * up so that the SSP transmission rate never exceeds the device rate
++	 */
+ 	if (ssp->type == PXA25x_SSP || ssp->type == CE4100_SSP)
+-		return (ssp_clk / (2 * rate) - 1) & 0xff;
++		return (DIV_ROUND_UP(ssp_clk, 2 * rate) - 1) & 0xff;
+ 	else
+-		return (ssp_clk / rate - 1) & 0xfff;
++		return (DIV_ROUND_UP(ssp_clk, rate) - 1) & 0xfff;
+ }
+ 
+ static unsigned int pxa2xx_ssp_get_clk_div(struct driver_data *drv_data,
+diff --git a/drivers/spi/spi-rspi.c b/drivers/spi/spi-rspi.c
+index 556870dcdf79..5d35a82945cd 100644
+--- a/drivers/spi/spi-rspi.c
++++ b/drivers/spi/spi-rspi.c
+@@ -271,7 +271,8 @@ static int rspi_set_config_register(struct rspi_data *rspi, int access_size)
+ 	/* Sets parity, interrupt mask */
+ 	rspi_write8(rspi, 0x00, RSPI_SPCR2);
+ 
+-	/* Sets SPCMD */
++	/* Resets sequencer */
++	rspi_write8(rspi, 0, RSPI_SPSCR);
+ 	rspi->spcmd |= SPCMD_SPB_8_TO_16(access_size);
+ 	rspi_write16(rspi, rspi->spcmd, RSPI_SPCMD0);
+ 
+@@ -315,7 +316,8 @@ static int rspi_rz_set_config_register(struct rspi_data *rspi, int access_size)
+ 	rspi_write8(rspi, 0x00, RSPI_SSLND);
+ 	rspi_write8(rspi, 0x00, RSPI_SPND);
+ 
+-	/* Sets SPCMD */
++	/* Resets sequencer */
++	rspi_write8(rspi, 0, RSPI_SPSCR);
+ 	rspi->spcmd |= SPCMD_SPB_8_TO_16(access_size);
+ 	rspi_write16(rspi, rspi->spcmd, RSPI_SPCMD0);
+ 
+@@ -366,7 +368,8 @@ static int qspi_set_config_register(struct rspi_data *rspi, int access_size)
+ 	/* Sets buffer to allow normal operation */
+ 	rspi_write8(rspi, 0x00, QSPI_SPBFCR);
+ 
+-	/* Sets SPCMD */
++	/* Resets sequencer */
++	rspi_write8(rspi, 0, RSPI_SPSCR);
+ 	rspi_write16(rspi, rspi->spcmd, RSPI_SPCMD0);
+ 
+ 	/* Sets RSPI mode */
+diff --git a/drivers/spi/spi-stm32-qspi.c b/drivers/spi/spi-stm32-qspi.c
+index 3b2a9a6b990d..0b9a8bddb939 100644
+--- a/drivers/spi/spi-stm32-qspi.c
++++ b/drivers/spi/spi-stm32-qspi.c
+@@ -93,6 +93,7 @@ struct stm32_qspi_flash {
+ 
+ struct stm32_qspi {
+ 	struct device *dev;
++	struct spi_controller *ctrl;
+ 	void __iomem *io_base;
+ 	void __iomem *mm_base;
+ 	resource_size_t mm_size;
+@@ -397,6 +398,7 @@ static void stm32_qspi_release(struct stm32_qspi *qspi)
+ 	writel_relaxed(0, qspi->io_base + QSPI_CR);
+ 	mutex_destroy(&qspi->lock);
+ 	clk_disable_unprepare(qspi->clk);
++	spi_master_put(qspi->ctrl);
+ }
+ 
+ static int stm32_qspi_probe(struct platform_device *pdev)
+@@ -413,43 +415,54 @@ static int stm32_qspi_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	qspi = spi_controller_get_devdata(ctrl);
++	qspi->ctrl = ctrl;
+ 
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "qspi");
+ 	qspi->io_base = devm_ioremap_resource(dev, res);
+-	if (IS_ERR(qspi->io_base))
+-		return PTR_ERR(qspi->io_base);
++	if (IS_ERR(qspi->io_base)) {
++		ret = PTR_ERR(qspi->io_base);
++		goto err;
++	}
+ 
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "qspi_mm");
+ 	qspi->mm_base = devm_ioremap_resource(dev, res);
+-	if (IS_ERR(qspi->mm_base))
+-		return PTR_ERR(qspi->mm_base);
++	if (IS_ERR(qspi->mm_base)) {
++		ret = PTR_ERR(qspi->mm_base);
++		goto err;
++	}
+ 
+ 	qspi->mm_size = resource_size(res);
+-	if (qspi->mm_size > STM32_QSPI_MAX_MMAP_SZ)
+-		return -EINVAL;
++	if (qspi->mm_size > STM32_QSPI_MAX_MMAP_SZ) {
++		ret = -EINVAL;
++		goto err;
++	}
+ 
+ 	irq = platform_get_irq(pdev, 0);
+ 	ret = devm_request_irq(dev, irq, stm32_qspi_irq, 0,
+ 			       dev_name(dev), qspi);
+ 	if (ret) {
+ 		dev_err(dev, "failed to request irq\n");
+-		return ret;
++		goto err;
+ 	}
+ 
+ 	init_completion(&qspi->data_completion);
+ 
+ 	qspi->clk = devm_clk_get(dev, NULL);
+-	if (IS_ERR(qspi->clk))
+-		return PTR_ERR(qspi->clk);
++	if (IS_ERR(qspi->clk)) {
++		ret = PTR_ERR(qspi->clk);
++		goto err;
++	}
+ 
+ 	qspi->clk_rate = clk_get_rate(qspi->clk);
+-	if (!qspi->clk_rate)
+-		return -EINVAL;
++	if (!qspi->clk_rate) {
++		ret = -EINVAL;
++		goto err;
++	}
+ 
+ 	ret = clk_prepare_enable(qspi->clk);
+ 	if (ret) {
+ 		dev_err(dev, "can not enable the clock\n");
+-		return ret;
++		goto err;
+ 	}
+ 
+ 	rstc = devm_reset_control_get_exclusive(dev, NULL);
+@@ -472,14 +485,11 @@ static int stm32_qspi_probe(struct platform_device *pdev)
+ 	ctrl->dev.of_node = dev->of_node;
+ 
+ 	ret = devm_spi_register_master(dev, ctrl);
+-	if (ret)
+-		goto err_spi_register;
+-
+-	return 0;
++	if (!ret)
++		return 0;
+ 
+-err_spi_register:
++err:
+ 	stm32_qspi_release(qspi);
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/spi/spi-tegra114.c b/drivers/spi/spi-tegra114.c
+index a76acedd7e2f..a1888dc6a938 100644
+--- a/drivers/spi/spi-tegra114.c
++++ b/drivers/spi/spi-tegra114.c
+@@ -1067,27 +1067,19 @@ static int tegra_spi_probe(struct platform_device *pdev)
+ 
+ 	spi_irq = platform_get_irq(pdev, 0);
+ 	tspi->irq = spi_irq;
+-	ret = request_threaded_irq(tspi->irq, tegra_spi_isr,
+-			tegra_spi_isr_thread, IRQF_ONESHOT,
+-			dev_name(&pdev->dev), tspi);
+-	if (ret < 0) {
+-		dev_err(&pdev->dev, "Failed to register ISR for IRQ %d\n",
+-					tspi->irq);
+-		goto exit_free_master;
+-	}
+ 
+ 	tspi->clk = devm_clk_get(&pdev->dev, "spi");
+ 	if (IS_ERR(tspi->clk)) {
+ 		dev_err(&pdev->dev, "can not get clock\n");
+ 		ret = PTR_ERR(tspi->clk);
+-		goto exit_free_irq;
++		goto exit_free_master;
+ 	}
+ 
+ 	tspi->rst = devm_reset_control_get_exclusive(&pdev->dev, "spi");
+ 	if (IS_ERR(tspi->rst)) {
+ 		dev_err(&pdev->dev, "can not get reset\n");
+ 		ret = PTR_ERR(tspi->rst);
+-		goto exit_free_irq;
++		goto exit_free_master;
+ 	}
+ 
+ 	tspi->max_buf_size = SPI_FIFO_DEPTH << 2;
+@@ -1095,7 +1087,7 @@ static int tegra_spi_probe(struct platform_device *pdev)
+ 
+ 	ret = tegra_spi_init_dma_param(tspi, true);
+ 	if (ret < 0)
+-		goto exit_free_irq;
++		goto exit_free_master;
+ 	ret = tegra_spi_init_dma_param(tspi, false);
+ 	if (ret < 0)
+ 		goto exit_rx_dma_free;
+@@ -1117,18 +1109,32 @@ static int tegra_spi_probe(struct platform_device *pdev)
+ 		dev_err(&pdev->dev, "pm runtime get failed, e = %d\n", ret);
+ 		goto exit_pm_disable;
+ 	}
++
++	reset_control_assert(tspi->rst);
++	udelay(2);
++	reset_control_deassert(tspi->rst);
+ 	tspi->def_command1_reg  = SPI_M_S;
+ 	tegra_spi_writel(tspi, tspi->def_command1_reg, SPI_COMMAND1);
+ 	pm_runtime_put(&pdev->dev);
++	ret = request_threaded_irq(tspi->irq, tegra_spi_isr,
++				   tegra_spi_isr_thread, IRQF_ONESHOT,
++				   dev_name(&pdev->dev), tspi);
++	if (ret < 0) {
++		dev_err(&pdev->dev, "Failed to register ISR for IRQ %d\n",
++			tspi->irq);
++		goto exit_pm_disable;
++	}
+ 
+ 	master->dev.of_node = pdev->dev.of_node;
+ 	ret = devm_spi_register_master(&pdev->dev, master);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "can not register to master err %d\n", ret);
+-		goto exit_pm_disable;
++		goto exit_free_irq;
+ 	}
+ 	return ret;
+ 
++exit_free_irq:
++	free_irq(spi_irq, tspi);
+ exit_pm_disable:
+ 	pm_runtime_disable(&pdev->dev);
+ 	if (!pm_runtime_status_suspended(&pdev->dev))
+@@ -1136,8 +1142,6 @@ exit_pm_disable:
+ 	tegra_spi_deinit_dma_param(tspi, false);
+ exit_rx_dma_free:
+ 	tegra_spi_deinit_dma_param(tspi, true);
+-exit_free_irq:
+-	free_irq(spi_irq, tspi);
+ exit_free_master:
+ 	spi_master_put(master);
+ 	return ret;
+diff --git a/drivers/spi/spi-topcliff-pch.c b/drivers/spi/spi-topcliff-pch.c
+index fba3f180f233..8a5966963834 100644
+--- a/drivers/spi/spi-topcliff-pch.c
++++ b/drivers/spi/spi-topcliff-pch.c
+@@ -1299,18 +1299,27 @@ static void pch_free_dma_buf(struct pch_spi_board_data *board_dat,
+ 				  dma->rx_buf_virt, dma->rx_buf_dma);
+ }
+ 
+-static void pch_alloc_dma_buf(struct pch_spi_board_data *board_dat,
++static int pch_alloc_dma_buf(struct pch_spi_board_data *board_dat,
+ 			      struct pch_spi_data *data)
+ {
+ 	struct pch_spi_dma_ctrl *dma;
++	int ret;
+ 
+ 	dma = &data->dma;
++	ret = 0;
+ 	/* Get Consistent memory for Tx DMA */
+ 	dma->tx_buf_virt = dma_alloc_coherent(&board_dat->pdev->dev,
+ 				PCH_BUF_SIZE, &dma->tx_buf_dma, GFP_KERNEL);
++	if (!dma->tx_buf_virt)
++		ret = -ENOMEM;
++
+ 	/* Get Consistent memory for Rx DMA */
+ 	dma->rx_buf_virt = dma_alloc_coherent(&board_dat->pdev->dev,
+ 				PCH_BUF_SIZE, &dma->rx_buf_dma, GFP_KERNEL);
++	if (!dma->rx_buf_virt)
++		ret = -ENOMEM;
++
++	return ret;
+ }
+ 
+ static int pch_spi_pd_probe(struct platform_device *plat_dev)
+@@ -1387,7 +1396,9 @@ static int pch_spi_pd_probe(struct platform_device *plat_dev)
+ 
+ 	if (use_dma) {
+ 		dev_info(&plat_dev->dev, "Use DMA for data transfers\n");
+-		pch_alloc_dma_buf(board_dat, data);
++		ret = pch_alloc_dma_buf(board_dat, data);
++		if (ret)
++			goto err_spi_register_master;
+ 	}
+ 
+ 	ret = spi_register_master(master);
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 93986f879b09..a83fcddf1dad 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -36,6 +36,8 @@
+ 
+ #define CREATE_TRACE_POINTS
+ #include <trace/events/spi.h>
++EXPORT_TRACEPOINT_SYMBOL(spi_transfer_start);
++EXPORT_TRACEPOINT_SYMBOL(spi_transfer_stop);
+ 
+ #include "internals.h"
+ 
+@@ -1039,6 +1041,8 @@ static int spi_map_msg(struct spi_controller *ctlr, struct spi_message *msg)
+ 		if (max_tx || max_rx) {
+ 			list_for_each_entry(xfer, &msg->transfers,
+ 					    transfer_list) {
++				if (!xfer->len)
++					continue;
+ 				if (!xfer->tx_buf)
+ 					xfer->tx_buf = ctlr->dummy_tx;
+ 				if (!xfer->rx_buf)
+@@ -2195,6 +2199,8 @@ static int spi_get_gpio_descs(struct spi_controller *ctlr)
+ 		 */
+ 		cs[i] = devm_gpiod_get_index_optional(dev, "cs", i,
+ 						      GPIOD_OUT_LOW);
++		if (IS_ERR(cs[i]))
++			return PTR_ERR(cs[i]);
+ 
+ 		if (cs[i]) {
+ 			/*
+@@ -2275,24 +2281,6 @@ int spi_register_controller(struct spi_controller *ctlr)
+ 	if (status)
+ 		return status;
+ 
+-	if (!spi_controller_is_slave(ctlr)) {
+-		if (ctlr->use_gpio_descriptors) {
+-			status = spi_get_gpio_descs(ctlr);
+-			if (status)
+-				return status;
+-			/*
+-			 * A controller using GPIO descriptors always
+-			 * supports SPI_CS_HIGH if need be.
+-			 */
+-			ctlr->mode_bits |= SPI_CS_HIGH;
+-		} else {
+-			/* Legacy code path for GPIOs from DT */
+-			status = of_spi_register_master(ctlr);
+-			if (status)
+-				return status;
+-		}
+-	}
+-
+ 	/* even if it's just one always-selected device, there must
+ 	 * be at least one chipselect
+ 	 */
+@@ -2349,6 +2337,25 @@ int spi_register_controller(struct spi_controller *ctlr)
+ 	 * registration fails if the bus ID is in use.
+ 	 */
+ 	dev_set_name(&ctlr->dev, "spi%u", ctlr->bus_num);
++
++	if (!spi_controller_is_slave(ctlr)) {
++		if (ctlr->use_gpio_descriptors) {
++			status = spi_get_gpio_descs(ctlr);
++			if (status)
++				return status;
++			/*
++			 * A controller using GPIO descriptors always
++			 * supports SPI_CS_HIGH if need be.
++			 */
++			ctlr->mode_bits |= SPI_CS_HIGH;
++		} else {
++			/* Legacy code path for GPIOs from DT */
++			status = of_spi_register_master(ctlr);
++			if (status)
++				return status;
++		}
++	}
++
+ 	status = device_add(&ctlr->dev);
+ 	if (status < 0) {
+ 		/* free bus id */
+diff --git a/drivers/ssb/bridge_pcmcia_80211.c b/drivers/ssb/bridge_pcmcia_80211.c
+index f51f150307df..ffa379efff83 100644
+--- a/drivers/ssb/bridge_pcmcia_80211.c
++++ b/drivers/ssb/bridge_pcmcia_80211.c
+@@ -113,16 +113,21 @@ static struct pcmcia_driver ssb_host_pcmcia_driver = {
+ 	.resume		= ssb_host_pcmcia_resume,
+ };
+ 
++static int pcmcia_init_failed;
++
+ /*
+  * These are not module init/exit functions!
+  * The module_pcmcia_driver() helper cannot be used here.
+  */
+ int ssb_host_pcmcia_init(void)
+ {
+-	return pcmcia_register_driver(&ssb_host_pcmcia_driver);
++	pcmcia_init_failed = pcmcia_register_driver(&ssb_host_pcmcia_driver);
++
++	return pcmcia_init_failed;
+ }
+ 
+ void ssb_host_pcmcia_exit(void)
+ {
+-	pcmcia_unregister_driver(&ssb_host_pcmcia_driver);
++	if (!pcmcia_init_failed)
++		pcmcia_unregister_driver(&ssb_host_pcmcia_driver);
+ }
+diff --git a/drivers/staging/media/davinci_vpfe/Kconfig b/drivers/staging/media/davinci_vpfe/Kconfig
+index aea449a8dbf8..76818cc48ddc 100644
+--- a/drivers/staging/media/davinci_vpfe/Kconfig
++++ b/drivers/staging/media/davinci_vpfe/Kconfig
+@@ -1,7 +1,7 @@
+ config VIDEO_DM365_VPFE
+ 	tristate "DM365 VPFE Media Controller Capture Driver"
+ 	depends on VIDEO_V4L2
+-	depends on (ARCH_DAVINCI_DM365 && !VIDEO_DM365_ISIF) || COMPILE_TEST
++	depends on (ARCH_DAVINCI_DM365 && !VIDEO_DM365_ISIF) || (COMPILE_TEST && !ARCH_OMAP1)
+ 	depends on VIDEO_V4L2_SUBDEV_API
+ 	depends on VIDEO_DAVINCI_VPBE_DISPLAY
+ 	select VIDEOBUF2_DMA_CONTIG
+diff --git a/drivers/staging/media/imx/imx-media-vdic.c b/drivers/staging/media/imx/imx-media-vdic.c
+index 8a9af4688fd4..8cdd3daa6c5f 100644
+--- a/drivers/staging/media/imx/imx-media-vdic.c
++++ b/drivers/staging/media/imx/imx-media-vdic.c
+@@ -231,6 +231,12 @@ static void __maybe_unused prepare_vdi_in_buffers(struct vdic_priv *priv,
+ 		curr_phys = vb2_dma_contig_plane_dma_addr(curr_vb, 0);
+ 		next_phys = vb2_dma_contig_plane_dma_addr(curr_vb, 0) + is;
+ 		break;
++	default:
++		/*
++		 * can't get here; priv->fieldtype can only be one of
++		 * the above. This is to quiet smatch errors.
++		 */
++		return;
+ 	}
+ 
+ 	ipu_cpmem_set_buffer(priv->vdi_in_ch_p, 0, prev_phys);
+diff --git a/drivers/staging/media/ipu3/ipu3.c b/drivers/staging/media/ipu3/ipu3.c
+index d575ac78c8f0..d00d26264c37 100644
+--- a/drivers/staging/media/ipu3/ipu3.c
++++ b/drivers/staging/media/ipu3/ipu3.c
+@@ -791,7 +791,7 @@ out:
+  * PCI rpm framework checks the existence of driver rpm callbacks.
+  * Place a dummy callback here to avoid rpm going into error state.
+  */
+-static int imgu_rpm_dummy_cb(struct device *dev)
++static __maybe_unused int imgu_rpm_dummy_cb(struct device *dev)
+ {
+ 	return 0;
+ }
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus.h b/drivers/staging/media/sunxi/cedrus/cedrus.h
+index 4aedd24a9848..c57c04b41d2e 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus.h
++++ b/drivers/staging/media/sunxi/cedrus/cedrus.h
+@@ -28,6 +28,8 @@
+ 
+ #define CEDRUS_CAPABILITY_UNTILED	BIT(0)
+ 
++#define CEDRUS_QUIRK_NO_DMA_OFFSET	BIT(0)
++
+ enum cedrus_codec {
+ 	CEDRUS_CODEC_MPEG2,
+ 
+@@ -91,6 +93,7 @@ struct cedrus_dec_ops {
+ 
+ struct cedrus_variant {
+ 	unsigned int	capabilities;
++	unsigned int	quirks;
+ };
+ 
+ struct cedrus_dev {
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_hw.c b/drivers/staging/media/sunxi/cedrus/cedrus_hw.c
+index 0acf219a8c91..fbfff7c1c771 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_hw.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_hw.c
+@@ -177,7 +177,8 @@ int cedrus_hw_probe(struct cedrus_dev *dev)
+ 	 */
+ 
+ #ifdef PHYS_PFN_OFFSET
+-	dev->dev->dma_pfn_offset = PHYS_PFN_OFFSET;
++	if (!(variant->quirks & CEDRUS_QUIRK_NO_DMA_OFFSET))
++		dev->dev->dma_pfn_offset = PHYS_PFN_OFFSET;
+ #endif
+ 
+ 	ret = of_reserved_mem_device_init(dev->dev);
+diff --git a/drivers/staging/mt7621-mmc/sd.c b/drivers/staging/mt7621-mmc/sd.c
+index 4b26ec896a96..38f9ea02ee3a 100644
+--- a/drivers/staging/mt7621-mmc/sd.c
++++ b/drivers/staging/mt7621-mmc/sd.c
+@@ -468,7 +468,11 @@ static unsigned int msdc_command_start(struct msdc_host   *host,
+ 	host->cmd     = cmd;
+ 	host->cmd_rsp = resp;
+ 
+-	init_completion(&host->cmd_done);
++	// The completion should have been consumed by the previous command
++	// response handler, because the mmc requests should be serialized
++	if (completion_done(&host->cmd_done))
++		dev_err(mmc_dev(host->mmc),
++			"previous command was not handled\n");
+ 
+ 	sdr_set_bits(host->base + MSDC_INTEN, wints);
+ 	sdc_send_cmd(rawcmd, cmd->arg);
+@@ -490,7 +494,6 @@ static unsigned int msdc_command_resp(struct msdc_host   *host,
+ 		    MSDC_INT_ACMD19_DONE;
+ 
+ 	BUG_ON(in_interrupt());
+-	//init_completion(&host->cmd_done);
+ 	//sdr_set_bits(host->base + MSDC_INTEN, wints);
+ 
+ 	spin_unlock(&host->lock);
+@@ -593,8 +596,6 @@ static void msdc_dma_setup(struct msdc_host *host, struct msdc_dma *dma,
+ 	struct bd *bd;
+ 	u32 j;
+ 
+-	BUG_ON(sglen > MAX_BD_NUM); /* not support currently */
+-
+ 	gpd = dma->gpd;
+ 	bd  = dma->bd;
+ 
+@@ -674,7 +675,13 @@ static int msdc_do_request(struct mmc_host *mmc, struct mmc_request *mrq)
+ 		//msdc_clr_fifo(host);  /* no need */
+ 
+ 		msdc_dma_on();  /* enable DMA mode first!! */
+-		init_completion(&host->xfer_done);
++
++		// The completion should have been consumed by the previous
++		// xfer response handler, because the mmc requests should be
++		// serialized
++		if (completion_done(&host->xfer_done))
++			dev_err(mmc_dev(host->mmc),
++				"previous transfer was not handled\n");
+ 
+ 		/* start the command first*/
+ 		if (msdc_command_start(host, cmd, CMD_TIMEOUT) != 0)
+@@ -683,6 +690,13 @@ static int msdc_do_request(struct mmc_host *mmc, struct mmc_request *mrq)
+ 		data->sg_count = dma_map_sg(mmc_dev(mmc), data->sg,
+ 					    data->sg_len,
+ 					    mmc_get_dma_dir(data));
++
++		if (data->sg_count == 0) {
++			dev_err(mmc_dev(host->mmc), "failed to map DMA for transfer\n");
++			data->error = -ENOMEM;
++			goto done;
++		}
++
+ 		msdc_dma_setup(host, &host->dma, data->sg,
+ 			       data->sg_count);
+ 
+@@ -693,7 +707,6 @@ static int msdc_do_request(struct mmc_host *mmc, struct mmc_request *mrq)
+ 		/* for read, the data coming too fast, then CRC error
+ 		 *  start DMA no business with CRC.
+ 		 */
+-		//init_completion(&host->xfer_done);
+ 		msdc_dma_start(host);
+ 
+ 		spin_unlock(&host->lock);
+@@ -1688,6 +1701,8 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ 	}
+ 	msdc_init_gpd_bd(host, &host->dma);
+ 
++	init_completion(&host->cmd_done);
++	init_completion(&host->xfer_done);
+ 	INIT_DELAYED_WORK(&host->card_delaywork, msdc_tasklet_card);
+ 	spin_lock_init(&host->lock);
+ 	msdc_init_hw(host);
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
+index dd4898861b83..eb1e5dcb0d52 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
+@@ -209,6 +209,9 @@ vchiq_platform_init_state(struct vchiq_state *state)
+ 	struct vchiq_2835_state *platform_state;
+ 
+ 	state->platform_state = kzalloc(sizeof(*platform_state), GFP_KERNEL);
++	if (!state->platform_state)
++		return VCHIQ_ERROR;
++
+ 	platform_state = (struct vchiq_2835_state *)state->platform_state;
+ 
+ 	platform_state->inited = 1;
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+index 53f5a1cb4636..819813e742d8 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+@@ -2239,6 +2239,8 @@ vchiq_init_state(struct vchiq_state *state, struct vchiq_slot_zero *slot_zero)
+ 	local->debug[DEBUG_ENTRIES] = DEBUG_MAX;
+ 
+ 	status = vchiq_platform_init_state(state);
++	if (status != VCHIQ_SUCCESS)
++		return VCHIQ_ERROR;
+ 
+ 	/*
+ 		bring up slot handler thread
+diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c
+index e3fc920af682..8b7f9131e9d1 100644
+--- a/drivers/thunderbolt/icm.c
++++ b/drivers/thunderbolt/icm.c
+@@ -473,6 +473,11 @@ static void add_switch(struct tb_switch *parent_sw, u64 route,
+ 		goto out;
+ 
+ 	sw->uuid = kmemdup(uuid, sizeof(*uuid), GFP_KERNEL);
++	if (!sw->uuid) {
++		tb_sw_warn(sw, "cannot allocate memory for switch\n");
++		tb_switch_put(sw);
++		goto out;
++	}
+ 	sw->connection_id = connection_id;
+ 	sw->connection_key = connection_key;
+ 	sw->link = link;
+diff --git a/drivers/thunderbolt/property.c b/drivers/thunderbolt/property.c
+index b2f0d6386cee..8c077c4f3b5b 100644
+--- a/drivers/thunderbolt/property.c
++++ b/drivers/thunderbolt/property.c
+@@ -548,6 +548,11 @@ int tb_property_add_data(struct tb_property_dir *parent, const char *key,
+ 
+ 	property->length = size / 4;
+ 	property->value.data = kzalloc(size, GFP_KERNEL);
++	if (!property->value.data) {
++		kfree(property);
++		return -ENOMEM;
++	}
++
+ 	memcpy(property->value.data, buf, buflen);
+ 
+ 	list_add_tail(&property->list, &parent->properties);
+@@ -578,7 +583,12 @@ int tb_property_add_text(struct tb_property_dir *parent, const char *key,
+ 		return -ENOMEM;
+ 
+ 	property->length = size / 4;
+-	property->value.data = kzalloc(size, GFP_KERNEL);
++	property->value.text = kzalloc(size, GFP_KERNEL);
++	if (!property->value.text) {
++		kfree(property);
++		return -ENOMEM;
++	}
++
+ 	strcpy(property->value.text, text);
+ 
+ 	list_add_tail(&property->list, &parent->properties);
+diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
+index cd96994dc094..f569a2673742 100644
+--- a/drivers/thunderbolt/switch.c
++++ b/drivers/thunderbolt/switch.c
+@@ -10,15 +10,13 @@
+ #include <linux/idr.h>
+ #include <linux/nvmem-provider.h>
+ #include <linux/pm_runtime.h>
++#include <linux/sched/signal.h>
+ #include <linux/sizes.h>
+ #include <linux/slab.h>
+ #include <linux/vmalloc.h>
+ 
+ #include "tb.h"
+ 
+-/* Switch authorization from userspace is serialized by this lock */
+-static DEFINE_MUTEX(switch_lock);
+-
+ /* Switch NVM support */
+ 
+ #define NVM_DEVID		0x05
+@@ -254,8 +252,8 @@ static int tb_switch_nvm_write(void *priv, unsigned int offset, void *val,
+ 	struct tb_switch *sw = priv;
+ 	int ret = 0;
+ 
+-	if (mutex_lock_interruptible(&switch_lock))
+-		return -ERESTARTSYS;
++	if (!mutex_trylock(&sw->tb->lock))
++		return restart_syscall();
+ 
+ 	/*
+ 	 * Since writing the NVM image might require some special steps,
+@@ -275,7 +273,7 @@ static int tb_switch_nvm_write(void *priv, unsigned int offset, void *val,
+ 	memcpy(sw->nvm->buf + offset, val, bytes);
+ 
+ unlock:
+-	mutex_unlock(&switch_lock);
++	mutex_unlock(&sw->tb->lock);
+ 
+ 	return ret;
+ }
+@@ -364,10 +362,7 @@ static int tb_switch_nvm_add(struct tb_switch *sw)
+ 	}
+ 	nvm->non_active = nvm_dev;
+ 
+-	mutex_lock(&switch_lock);
+ 	sw->nvm = nvm;
+-	mutex_unlock(&switch_lock);
+-
+ 	return 0;
+ 
+ err_nvm_active:
+@@ -384,10 +379,8 @@ static void tb_switch_nvm_remove(struct tb_switch *sw)
+ {
+ 	struct tb_switch_nvm *nvm;
+ 
+-	mutex_lock(&switch_lock);
+ 	nvm = sw->nvm;
+ 	sw->nvm = NULL;
+-	mutex_unlock(&switch_lock);
+ 
+ 	if (!nvm)
+ 		return;
+@@ -716,8 +709,8 @@ static int tb_switch_set_authorized(struct tb_switch *sw, unsigned int val)
+ {
+ 	int ret = -EINVAL;
+ 
+-	if (mutex_lock_interruptible(&switch_lock))
+-		return -ERESTARTSYS;
++	if (!mutex_trylock(&sw->tb->lock))
++		return restart_syscall();
+ 
+ 	if (sw->authorized)
+ 		goto unlock;
+@@ -760,7 +753,7 @@ static int tb_switch_set_authorized(struct tb_switch *sw, unsigned int val)
+ 	}
+ 
+ unlock:
+-	mutex_unlock(&switch_lock);
++	mutex_unlock(&sw->tb->lock);
+ 	return ret;
+ }
+ 
+@@ -817,15 +810,15 @@ static ssize_t key_show(struct device *dev, struct device_attribute *attr,
+ 	struct tb_switch *sw = tb_to_switch(dev);
+ 	ssize_t ret;
+ 
+-	if (mutex_lock_interruptible(&switch_lock))
+-		return -ERESTARTSYS;
++	if (!mutex_trylock(&sw->tb->lock))
++		return restart_syscall();
+ 
+ 	if (sw->key)
+ 		ret = sprintf(buf, "%*phN\n", TB_SWITCH_KEY_SIZE, sw->key);
+ 	else
+ 		ret = sprintf(buf, "\n");
+ 
+-	mutex_unlock(&switch_lock);
++	mutex_unlock(&sw->tb->lock);
+ 	return ret;
+ }
+ 
+@@ -842,8 +835,8 @@ static ssize_t key_store(struct device *dev, struct device_attribute *attr,
+ 	else if (hex2bin(key, buf, sizeof(key)))
+ 		return -EINVAL;
+ 
+-	if (mutex_lock_interruptible(&switch_lock))
+-		return -ERESTARTSYS;
++	if (!mutex_trylock(&sw->tb->lock))
++		return restart_syscall();
+ 
+ 	if (sw->authorized) {
+ 		ret = -EBUSY;
+@@ -858,7 +851,7 @@ static ssize_t key_store(struct device *dev, struct device_attribute *attr,
+ 		}
+ 	}
+ 
+-	mutex_unlock(&switch_lock);
++	mutex_unlock(&sw->tb->lock);
+ 	return ret;
+ }
+ static DEVICE_ATTR(key, 0600, key_show, key_store);
+@@ -904,8 +897,8 @@ static ssize_t nvm_authenticate_store(struct device *dev,
+ 	bool val;
+ 	int ret;
+ 
+-	if (mutex_lock_interruptible(&switch_lock))
+-		return -ERESTARTSYS;
++	if (!mutex_trylock(&sw->tb->lock))
++		return restart_syscall();
+ 
+ 	/* If NVMem devices are not yet added */
+ 	if (!sw->nvm) {
+@@ -953,7 +946,7 @@ static ssize_t nvm_authenticate_store(struct device *dev,
+ 	}
+ 
+ exit_unlock:
+-	mutex_unlock(&switch_lock);
++	mutex_unlock(&sw->tb->lock);
+ 
+ 	if (ret)
+ 		return ret;
+@@ -967,8 +960,8 @@ static ssize_t nvm_version_show(struct device *dev,
+ 	struct tb_switch *sw = tb_to_switch(dev);
+ 	int ret;
+ 
+-	if (mutex_lock_interruptible(&switch_lock))
+-		return -ERESTARTSYS;
++	if (!mutex_trylock(&sw->tb->lock))
++		return restart_syscall();
+ 
+ 	if (sw->safe_mode)
+ 		ret = -ENODATA;
+@@ -977,7 +970,7 @@ static ssize_t nvm_version_show(struct device *dev,
+ 	else
+ 		ret = sprintf(buf, "%x.%x\n", sw->nvm->major, sw->nvm->minor);
+ 
+-	mutex_unlock(&switch_lock);
++	mutex_unlock(&sw->tb->lock);
+ 
+ 	return ret;
+ }
+@@ -1294,13 +1287,14 @@ int tb_switch_configure(struct tb_switch *sw)
+ 	return tb_plug_events_active(sw, true);
+ }
+ 
+-static void tb_switch_set_uuid(struct tb_switch *sw)
++static int tb_switch_set_uuid(struct tb_switch *sw)
+ {
+ 	u32 uuid[4];
+-	int cap;
++	int cap, ret;
+ 
++	ret = 0;
+ 	if (sw->uuid)
+-		return;
++		return ret;
+ 
+ 	/*
+ 	 * The newer controllers include fused UUID as part of link
+@@ -1308,7 +1302,9 @@ static void tb_switch_set_uuid(struct tb_switch *sw)
+ 	 */
+ 	cap = tb_switch_find_vse_cap(sw, TB_VSE_CAP_LINK_CONTROLLER);
+ 	if (cap > 0) {
+-		tb_sw_read(sw, uuid, TB_CFG_SWITCH, cap + 3, 4);
++		ret = tb_sw_read(sw, uuid, TB_CFG_SWITCH, cap + 3, 4);
++		if (ret)
++			return ret;
+ 	} else {
+ 		/*
+ 		 * ICM generates UUID based on UID and fills the upper
+@@ -1323,6 +1319,9 @@ static void tb_switch_set_uuid(struct tb_switch *sw)
+ 	}
+ 
+ 	sw->uuid = kmemdup(uuid, sizeof(uuid), GFP_KERNEL);
++	if (!sw->uuid)
++		ret = -ENOMEM;
++	return ret;
+ }
+ 
+ static int tb_switch_add_dma_port(struct tb_switch *sw)
+@@ -1372,7 +1371,9 @@ static int tb_switch_add_dma_port(struct tb_switch *sw)
+ 
+ 	if (status) {
+ 		tb_sw_info(sw, "switch flash authentication failed\n");
+-		tb_switch_set_uuid(sw);
++		ret = tb_switch_set_uuid(sw);
++		if (ret)
++			return ret;
+ 		nvm_set_auth_status(sw, status);
+ 	}
+ 
+@@ -1422,7 +1423,9 @@ int tb_switch_add(struct tb_switch *sw)
+ 		}
+ 		tb_sw_dbg(sw, "uid: %#llx\n", sw->uid);
+ 
+-		tb_switch_set_uuid(sw);
++		ret = tb_switch_set_uuid(sw);
++		if (ret)
++			return ret;
+ 
+ 		for (i = 0; i <= sw->config.max_port_number; i++) {
+ 			if (sw->ports[i].disabled) {
+diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
+index 52584c4003e3..f5e0282225d1 100644
+--- a/drivers/thunderbolt/tb.h
++++ b/drivers/thunderbolt/tb.h
+@@ -80,8 +80,7 @@ struct tb_switch_nvm {
+  * @depth: Depth in the chain this switch is connected (ICM only)
+  *
+  * When the switch is being added or removed to the domain (other
+- * switches) you need to have domain lock held. For switch authorization
+- * internal switch_lock is enough.
++ * switches) you need to have domain lock held.
+  */
+ struct tb_switch {
+ 	struct device dev;
+diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
+index e27dd8beb94b..e0642dcb8b9b 100644
+--- a/drivers/thunderbolt/xdomain.c
++++ b/drivers/thunderbolt/xdomain.c
+@@ -740,6 +740,7 @@ static void enumerate_services(struct tb_xdomain *xd)
+ 	struct tb_service *svc;
+ 	struct tb_property *p;
+ 	struct device *dev;
++	int id;
+ 
+ 	/*
+ 	 * First remove all services that are not available anymore in
+@@ -768,7 +769,12 @@ static void enumerate_services(struct tb_xdomain *xd)
+ 			break;
+ 		}
+ 
+-		svc->id = ida_simple_get(&xd->service_ids, 0, 0, GFP_KERNEL);
++		id = ida_simple_get(&xd->service_ids, 0, 0, GFP_KERNEL);
++		if (id < 0) {
++			kfree(svc);
++			break;
++		}
++		svc->id = id;
+ 		svc->dev.bus = &tb_bus_type;
+ 		svc->dev.type = &tb_service_type;
+ 		svc->dev.parent = &xd->dev;
+diff --git a/drivers/tty/ipwireless/main.c b/drivers/tty/ipwireless/main.c
+index 3475e841ef5c..4c18bbfe1a92 100644
+--- a/drivers/tty/ipwireless/main.c
++++ b/drivers/tty/ipwireless/main.c
+@@ -114,6 +114,10 @@ static int ipwireless_probe(struct pcmcia_device *p_dev, void *priv_data)
+ 
+ 	ipw->common_memory = ioremap(p_dev->resource[2]->start,
+ 				resource_size(p_dev->resource[2]));
++	if (!ipw->common_memory) {
++		ret = -ENOMEM;
++		goto exit1;
++	}
+ 	if (!request_mem_region(p_dev->resource[2]->start,
+ 				resource_size(p_dev->resource[2]),
+ 				IPWIRELESS_PCCARD_NAME)) {
+@@ -134,6 +138,10 @@ static int ipwireless_probe(struct pcmcia_device *p_dev, void *priv_data)
+ 
+ 	ipw->attr_memory = ioremap(p_dev->resource[3]->start,
+ 				resource_size(p_dev->resource[3]));
++	if (!ipw->attr_memory) {
++		ret = -ENOMEM;
++		goto exit3;
++	}
+ 	if (!request_mem_region(p_dev->resource[3]->start,
+ 				resource_size(p_dev->resource[3]),
+ 				IPWIRELESS_PCCARD_NAME)) {
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index 975d7c1288e3..e9f740484001 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -3020,6 +3020,9 @@ usb_hcd_platform_shutdown(struct platform_device *dev)
+ {
+ 	struct usb_hcd *hcd = platform_get_drvdata(dev);
+ 
++	/* No need for pm_runtime_put(), we're shutting down */
++	pm_runtime_get_sync(&dev->dev);
++
+ 	if (hcd->driver->shutdown)
+ 		hcd->driver->shutdown(hcd);
+ }
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 8d4631c81b9f..310eef451db8 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -5902,7 +5902,10 @@ int usb_reset_device(struct usb_device *udev)
+ 					cintf->needs_binding = 1;
+ 			}
+ 		}
+-		usb_unbind_and_rebind_marked_interfaces(udev);
++
++		/* If the reset failed, hub_wq will unbind drivers later */
++		if (ret == 0)
++			usb_unbind_and_rebind_marked_interfaces(udev);
+ 	}
+ 
+ 	usb_autosuspend_device(udev);
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index 6812a8a3a98b..a749de7604c6 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -714,13 +714,11 @@ static unsigned int dwc2_gadget_get_chain_limit(struct dwc2_hsotg_ep *hs_ep)
+ 	unsigned int maxsize;
+ 
+ 	if (is_isoc)
+-		maxsize = hs_ep->dir_in ? DEV_DMA_ISOC_TX_NBYTES_LIMIT :
+-					   DEV_DMA_ISOC_RX_NBYTES_LIMIT;
++		maxsize = (hs_ep->dir_in ? DEV_DMA_ISOC_TX_NBYTES_LIMIT :
++					   DEV_DMA_ISOC_RX_NBYTES_LIMIT) *
++					   MAX_DMA_DESC_NUM_HS_ISOC;
+ 	else
+-		maxsize = DEV_DMA_NBYTES_LIMIT;
+-
+-	/* Above size of one descriptor was chosen, multiple it */
+-	maxsize *= MAX_DMA_DESC_NUM_GENERIC;
++		maxsize = DEV_DMA_NBYTES_LIMIT * MAX_DMA_DESC_NUM_GENERIC;
+ 
+ 	return maxsize;
+ }
+@@ -932,7 +930,7 @@ static int dwc2_gadget_fill_isoc_desc(struct dwc2_hsotg_ep *hs_ep,
+ 
+ 	/* Update index of last configured entry in the chain */
+ 	hs_ep->next_desc++;
+-	if (hs_ep->next_desc >= MAX_DMA_DESC_NUM_GENERIC)
++	if (hs_ep->next_desc >= MAX_DMA_DESC_NUM_HS_ISOC)
+ 		hs_ep->next_desc = 0;
+ 
+ 	return 0;
+@@ -964,7 +962,7 @@ static void dwc2_gadget_start_isoc_ddma(struct dwc2_hsotg_ep *hs_ep)
+ 	}
+ 
+ 	/* Initialize descriptor chain by Host Busy status */
+-	for (i = 0; i < MAX_DMA_DESC_NUM_GENERIC; i++) {
++	for (i = 0; i < MAX_DMA_DESC_NUM_HS_ISOC; i++) {
+ 		desc = &hs_ep->desc_list[i];
+ 		desc->status = 0;
+ 		desc->status |= (DEV_DMA_BUFF_STS_HBUSY
+@@ -2162,7 +2160,7 @@ static void dwc2_gadget_complete_isoc_request_ddma(struct dwc2_hsotg_ep *hs_ep)
+ 		dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req, 0);
+ 
+ 		hs_ep->compl_desc++;
+-		if (hs_ep->compl_desc > (MAX_DMA_DESC_NUM_GENERIC - 1))
++		if (hs_ep->compl_desc > (MAX_DMA_DESC_NUM_HS_ISOC - 1))
+ 			hs_ep->compl_desc = 0;
+ 		desc_sts = hs_ep->desc_list[hs_ep->compl_desc].status;
+ 	}
+@@ -3899,6 +3897,7 @@ static int dwc2_hsotg_ep_enable(struct usb_ep *ep,
+ 	unsigned int i, val, size;
+ 	int ret = 0;
+ 	unsigned char ep_type;
++	int desc_num;
+ 
+ 	dev_dbg(hsotg->dev,
+ 		"%s: ep %s: a 0x%02x, attr 0x%02x, mps 0x%04x, intr %d\n",
+@@ -3945,11 +3944,15 @@ static int dwc2_hsotg_ep_enable(struct usb_ep *ep,
+ 	dev_dbg(hsotg->dev, "%s: read DxEPCTL=0x%08x from 0x%08x\n",
+ 		__func__, epctrl, epctrl_reg);
+ 
++	if (using_desc_dma(hsotg) && ep_type == USB_ENDPOINT_XFER_ISOC)
++		desc_num = MAX_DMA_DESC_NUM_HS_ISOC;
++	else
++		desc_num = MAX_DMA_DESC_NUM_GENERIC;
++
+ 	/* Allocate DMA descriptor chain for non-ctrl endpoints */
+ 	if (using_desc_dma(hsotg) && !hs_ep->desc_list) {
+ 		hs_ep->desc_list = dmam_alloc_coherent(hsotg->dev,
+-			MAX_DMA_DESC_NUM_GENERIC *
+-			sizeof(struct dwc2_dma_desc),
++			desc_num * sizeof(struct dwc2_dma_desc),
+ 			&hs_ep->desc_list_dma, GFP_ATOMIC);
+ 		if (!hs_ep->desc_list) {
+ 			ret = -ENOMEM;
+@@ -4092,7 +4095,7 @@ error1:
+ 
+ error2:
+ 	if (ret && using_desc_dma(hsotg) && hs_ep->desc_list) {
+-		dmam_free_coherent(hsotg->dev, MAX_DMA_DESC_NUM_GENERIC *
++		dmam_free_coherent(hsotg->dev, desc_num *
+ 			sizeof(struct dwc2_dma_desc),
+ 			hs_ep->desc_list, hs_ep->desc_list_dma);
+ 		hs_ep->desc_list = NULL;
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index f944cea4056b..72110a8c49d6 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -1600,6 +1600,7 @@ static int dwc3_suspend_common(struct dwc3 *dwc, pm_message_t msg)
+ 		spin_lock_irqsave(&dwc->lock, flags);
+ 		dwc3_gadget_suspend(dwc);
+ 		spin_unlock_irqrestore(&dwc->lock, flags);
++		synchronize_irq(dwc->irq_gadget);
+ 		dwc3_core_exit(dwc);
+ 		break;
+ 	case DWC3_GCTL_PRTCAP_HOST:
+@@ -1632,6 +1633,7 @@ static int dwc3_suspend_common(struct dwc3 *dwc, pm_message_t msg)
+ 			spin_lock_irqsave(&dwc->lock, flags);
+ 			dwc3_gadget_suspend(dwc);
+ 			spin_unlock_irqrestore(&dwc->lock, flags);
++			synchronize_irq(dwc->irq_gadget);
+ 		}
+ 
+ 		dwc3_otg_exit(dwc);
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index e293400cc6e9..2bb0ff9608d3 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -3384,8 +3384,6 @@ int dwc3_gadget_suspend(struct dwc3 *dwc)
+ 	dwc3_disconnect_gadget(dwc);
+ 	__dwc3_gadget_stop(dwc);
+ 
+-	synchronize_irq(dwc->irq_gadget);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 20413c276c61..47be961f1bf3 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -1133,7 +1133,8 @@ error_lock:
+ error_mutex:
+ 	mutex_unlock(&epfile->mutex);
+ error:
+-	ffs_free_buffer(io_data);
++	if (ret != -EIOCBQUEUED) /* don't free if an iocb is queued */
++		ffs_free_buffer(io_data);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/video/fbdev/core/fbcmap.c b/drivers/video/fbdev/core/fbcmap.c
+index 68a113594808..2811c4afde01 100644
+--- a/drivers/video/fbdev/core/fbcmap.c
++++ b/drivers/video/fbdev/core/fbcmap.c
+@@ -94,6 +94,8 @@ int fb_alloc_cmap_gfp(struct fb_cmap *cmap, int len, int transp, gfp_t flags)
+ 	int size = len * sizeof(u16);
+ 	int ret = -ENOMEM;
+ 
++	flags |= __GFP_NOWARN;
++
+ 	if (cmap->len != len) {
+ 		fb_dealloc_cmap(cmap);
+ 		if (!len)
+diff --git a/drivers/video/fbdev/core/modedb.c b/drivers/video/fbdev/core/modedb.c
+index 283d9307df21..ac049871704d 100644
+--- a/drivers/video/fbdev/core/modedb.c
++++ b/drivers/video/fbdev/core/modedb.c
+@@ -935,6 +935,9 @@ void fb_var_to_videomode(struct fb_videomode *mode,
+ 	if (var->vmode & FB_VMODE_DOUBLE)
+ 		vtotal *= 2;
+ 
++	if (!htotal || !vtotal)
++		return;
++
+ 	hfreq = pixclock/htotal;
+ 	mode->refresh = hfreq/vtotal;
+ }
+diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
+index fd02e8a4841d..9f39f0c360e0 100644
+--- a/drivers/video/fbdev/efifb.c
++++ b/drivers/video/fbdev/efifb.c
+@@ -464,7 +464,8 @@ static int efifb_probe(struct platform_device *dev)
+ 	info->apertures->ranges[0].base = efifb_fix.smem_start;
+ 	info->apertures->ranges[0].size = size_remap;
+ 
+-	if (!efi_mem_desc_lookup(efifb_fix.smem_start, &md)) {
++	if (efi_enabled(EFI_BOOT) &&
++	    !efi_mem_desc_lookup(efifb_fix.smem_start, &md)) {
+ 		if ((efifb_fix.smem_start + efifb_fix.smem_len) >
+ 		    (md.phys_addr + (md.num_pages << EFI_PAGE_SHIFT))) {
+ 			pr_err("efifb: video memory @ 0x%lx spans multiple EFI memory regions\n",
+diff --git a/drivers/w1/w1_io.c b/drivers/w1/w1_io.c
+index 0364d3329c52..3516ce6718d9 100644
+--- a/drivers/w1/w1_io.c
++++ b/drivers/w1/w1_io.c
+@@ -432,8 +432,7 @@ int w1_reset_resume_command(struct w1_master *dev)
+ 	if (w1_reset_bus(dev))
+ 		return -1;
+ 
+-	/* This will make only the last matched slave perform a skip ROM. */
+-	w1_write_8(dev, W1_RESUME_CMD);
++	w1_write_8(dev, dev->slave_count > 1 ? W1_RESUME_CMD : W1_SKIP_ROM);
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(w1_reset_resume_command);
+diff --git a/drivers/xen/biomerge.c b/drivers/xen/biomerge.c
+index f3fbb700f569..05a286d24f14 100644
+--- a/drivers/xen/biomerge.c
++++ b/drivers/xen/biomerge.c
+@@ -4,12 +4,13 @@
+ #include <xen/xen.h>
+ #include <xen/page.h>
+ 
++/* check if @page can be merged with 'vec1' */
+ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
+-			       const struct bio_vec *vec2)
++			       const struct page *page)
+ {
+ #if XEN_PAGE_SIZE == PAGE_SIZE
+ 	unsigned long bfn1 = pfn_to_bfn(page_to_pfn(vec1->bv_page));
+-	unsigned long bfn2 = pfn_to_bfn(page_to_pfn(vec2->bv_page));
++	unsigned long bfn2 = pfn_to_bfn(page_to_pfn(page));
+ 
+ 	return bfn1 + PFN_DOWN(vec1->bv_offset + vec1->bv_len) == bfn2;
+ #else
+diff --git a/fs/afs/xattr.c b/fs/afs/xattr.c
+index a2cdf25573e2..706801c6c4c4 100644
+--- a/fs/afs/xattr.c
++++ b/fs/afs/xattr.c
+@@ -69,11 +69,20 @@ static int afs_xattr_get_fid(const struct xattr_handler *handler,
+ 			     void *buffer, size_t size)
+ {
+ 	struct afs_vnode *vnode = AFS_FS_I(inode);
+-	char text[8 + 1 + 8 + 1 + 8 + 1];
++	char text[16 + 1 + 24 + 1 + 8 + 1];
+ 	size_t len;
+ 
+-	len = sprintf(text, "%llx:%llx:%x",
+-		      vnode->fid.vid, vnode->fid.vnode, vnode->fid.unique);
++	/* The volume ID is 64-bit, the vnode ID is 96-bit and the
++	 * uniquifier is 32-bit.
++	 */
++	len = sprintf(text, "%llx:", vnode->fid.vid);
++	if (vnode->fid.vnode_hi)
++		len += sprintf(text + len, "%x%016llx",
++			       vnode->fid.vnode_hi, vnode->fid.vnode);
++	else
++		len += sprintf(text + len, "%llx", vnode->fid.vnode);
++	len += sprintf(text + len, ":%x", vnode->fid.unique);
++
+ 	if (size == 0)
+ 		return len;
+ 	if (len > size)
+diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
+index 4f2a8ae0aa42..716656d502a9 100644
+--- a/fs/btrfs/compression.c
++++ b/fs/btrfs/compression.c
+@@ -1009,6 +1009,7 @@ int btrfs_compress_pages(unsigned int type_level, struct address_space *mapping,
+ 	struct list_head *workspace;
+ 	int ret;
+ 
++	level = btrfs_compress_op[type]->set_level(level);
+ 	workspace = get_workspace(type, level);
+ 	ret = btrfs_compress_op[type]->compress_pages(workspace, mapping,
+ 						      start, pages,
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index d789542edc5a..5e40c8f1e97a 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -3981,8 +3981,7 @@ static int create_space_info(struct btrfs_fs_info *info, u64 flags)
+ 				    info->space_info_kobj, "%s",
+ 				    alloc_name(space_info->flags));
+ 	if (ret) {
+-		percpu_counter_destroy(&space_info->total_bytes_pinned);
+-		kfree(space_info);
++		kobject_put(&space_info->kobj);
+ 		return ret;
+ 	}
+ 
+@@ -11315,9 +11314,9 @@ int btrfs_error_unpin_extent_range(struct btrfs_fs_info *fs_info,
+  * held back allocations.
+  */
+ static int btrfs_trim_free_extents(struct btrfs_device *device,
+-				   struct fstrim_range *range, u64 *trimmed)
++				   u64 minlen, u64 *trimmed)
+ {
+-	u64 start = range->start, len = 0;
++	u64 start = 0, len = 0;
+ 	int ret;
+ 
+ 	*trimmed = 0;
+@@ -11360,8 +11359,8 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 		if (!trans)
+ 			up_read(&fs_info->commit_root_sem);
+ 
+-		ret = find_free_dev_extent_start(trans, device, range->minlen,
+-						 start, &start, &len);
++		ret = find_free_dev_extent_start(trans, device, minlen, start,
++						 &start, &len);
+ 		if (trans) {
+ 			up_read(&fs_info->commit_root_sem);
+ 			btrfs_put_transaction(trans);
+@@ -11374,16 +11373,6 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 			break;
+ 		}
+ 
+-		/* If we are out of the passed range break */
+-		if (start > range->start + range->len - 1) {
+-			mutex_unlock(&fs_info->chunk_mutex);
+-			ret = 0;
+-			break;
+-		}
+-
+-		start = max(range->start, start);
+-		len = min(range->len, len);
+-
+ 		ret = btrfs_issue_discard(device->bdev, start, len, &bytes);
+ 		mutex_unlock(&fs_info->chunk_mutex);
+ 
+@@ -11393,10 +11382,6 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 		start += len;
+ 		*trimmed += bytes;
+ 
+-		/* We've trimmed enough */
+-		if (*trimmed >= range->len)
+-			break;
+-
+ 		if (fatal_signal_pending(current)) {
+ 			ret = -ERESTARTSYS;
+ 			break;
+@@ -11480,7 +11465,8 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ 	mutex_lock(&fs_info->fs_devices->device_list_mutex);
+ 	devices = &fs_info->fs_devices->devices;
+ 	list_for_each_entry(device, devices, dev_list) {
+-		ret = btrfs_trim_free_extents(device, range, &group_trimmed);
++		ret = btrfs_trim_free_extents(device, range->minlen,
++					      &group_trimmed);
+ 		if (ret) {
+ 			dev_failed++;
+ 			dev_ret = ret;
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 34fe8a58b0e9..ef11808b592b 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -2058,6 +2058,18 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 	int ret = 0, err;
+ 	u64 len;
+ 
++	/*
++	 * If the inode needs a full sync, make sure we use a full range to
++	 * avoid log tree corruption, due to hole detection racing with ordered
++	 * extent completion for adjacent ranges, and assertion failures during
++	 * hole detection.
++	 */
++	if (test_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
++		     &BTRFS_I(inode)->runtime_flags)) {
++		start = 0;
++		end = LLONG_MAX;
++	}
++
+ 	/*
+ 	 * The range length can be represented by u64, we have to do the typecasts
+ 	 * to avoid signed overflow if it's [0, LLONG_MAX] eg. from fsync()
+@@ -2546,10 +2558,8 @@ static int btrfs_punch_hole(struct inode *inode, loff_t offset, loff_t len)
+ 
+ 	ret = btrfs_punch_hole_lock_range(inode, lockstart, lockend,
+ 					  &cached_state);
+-	if (ret) {
+-		inode_unlock(inode);
++	if (ret)
+ 		goto out_only_mutex;
+-	}
+ 
+ 	path = btrfs_alloc_path();
+ 	if (!path) {
+@@ -3132,6 +3142,7 @@ static long btrfs_fallocate(struct file *file, int mode,
+ 			ret = btrfs_qgroup_reserve_data(inode, &data_reserved,
+ 					cur_offset, last_byte - cur_offset);
+ 			if (ret < 0) {
++				cur_offset = last_byte;
+ 				free_extent_map(em);
+ 				break;
+ 			}
+@@ -3181,7 +3192,7 @@ out:
+ 	/* Let go of our reservation. */
+ 	if (ret != 0 && !(mode & FALLOC_FL_ZERO_RANGE))
+ 		btrfs_free_reserved_data_space(inode, data_reserved,
+-				alloc_start, alloc_end - cur_offset);
++				cur_offset, alloc_end - cur_offset);
+ 	extent_changeset_free(data_reserved);
+ 	return ret;
+ }
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 351fa506dc9b..1d82ee4883eb 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -4330,27 +4330,36 @@ int btrfs_relocate_block_group(struct btrfs_fs_info *fs_info, u64 group_start)
+ 		mutex_lock(&fs_info->cleaner_mutex);
+ 		ret = relocate_block_group(rc);
+ 		mutex_unlock(&fs_info->cleaner_mutex);
+-		if (ret < 0) {
++		if (ret < 0)
+ 			err = ret;
+-			goto out;
+-		}
+-
+-		if (rc->extents_found == 0)
+-			break;
+-
+-		btrfs_info(fs_info, "found %llu extents", rc->extents_found);
+ 
++		/*
++		 * We may have gotten ENOSPC after we already dirtied some
++		 * extents.  If writeout happens while we're relocating a
++		 * different block group we could end up hitting the
++		 * BUG_ON(rc->stage == UPDATE_DATA_PTRS) in
++		 * btrfs_reloc_cow_block.  Make sure we write everything out
++		 * properly so we don't trip over this problem, and then break
++		 * out of the loop if we hit an error.
++		 */
+ 		if (rc->stage == MOVE_DATA_EXTENTS && rc->found_file_extent) {
+ 			ret = btrfs_wait_ordered_range(rc->data_inode, 0,
+ 						       (u64)-1);
+-			if (ret) {
++			if (ret)
+ 				err = ret;
+-				goto out;
+-			}
+ 			invalidate_mapping_pages(rc->data_inode->i_mapping,
+ 						 0, -1);
+ 			rc->stage = UPDATE_DATA_PTRS;
+ 		}
++
++		if (err < 0)
++			goto out;
++
++		if (rc->extents_found == 0)
++			break;
++
++		btrfs_info(fs_info, "found %llu extents", rc->extents_found);
++
+ 	}
+ 
+ 	WARN_ON(rc->block_group->pinned > 0);
+diff --git a/fs/btrfs/root-tree.c b/fs/btrfs/root-tree.c
+index 893d12fbfda0..22124122728c 100644
+--- a/fs/btrfs/root-tree.c
++++ b/fs/btrfs/root-tree.c
+@@ -132,16 +132,17 @@ int btrfs_update_root(struct btrfs_trans_handle *trans, struct btrfs_root
+ 		return -ENOMEM;
+ 
+ 	ret = btrfs_search_slot(trans, root, key, path, 0, 1);
+-	if (ret < 0) {
+-		btrfs_abort_transaction(trans, ret);
++	if (ret < 0)
+ 		goto out;
+-	}
+ 
+-	if (ret != 0) {
+-		btrfs_print_leaf(path->nodes[0]);
+-		btrfs_crit(fs_info, "unable to update root key %llu %u %llu",
+-			   key->objectid, key->type, key->offset);
+-		BUG_ON(1);
++	if (ret > 0) {
++		btrfs_crit(fs_info,
++			"unable to find root key (%llu %u %llu) in tree %llu",
++			key->objectid, key->type, key->offset,
++			root->root_key.objectid);
++		ret = -EUCLEAN;
++		btrfs_abort_transaction(trans, ret);
++		goto out;
+ 	}
+ 
+ 	l = path->nodes[0];
+diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
+index 5a5930e3d32b..2f078b77fe14 100644
+--- a/fs/btrfs/sysfs.c
++++ b/fs/btrfs/sysfs.c
+@@ -825,7 +825,12 @@ int btrfs_sysfs_add_fsid(struct btrfs_fs_devices *fs_devs,
+ 	fs_devs->fsid_kobj.kset = btrfs_kset;
+ 	error = kobject_init_and_add(&fs_devs->fsid_kobj,
+ 				&btrfs_ktype, parent, "%pU", fs_devs->fsid);
+-	return error;
++	if (error) {
++		kobject_put(&fs_devs->fsid_kobj);
++		return error;
++	}
++
++	return 0;
+ }
+ 
+ int btrfs_sysfs_add_mounted(struct btrfs_fs_info *fs_info)
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 561884f60d35..60aac95be54b 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -4169,6 +4169,7 @@ fill_holes:
+ 							       *last_extent, 0,
+ 							       0, len, 0, len,
+ 							       0, 0, 0);
++				*last_extent += len;
+ 			}
+ 		}
+ 	}
+diff --git a/fs/char_dev.c b/fs/char_dev.c
+index a279c58fe360..8a63cfa29005 100644
+--- a/fs/char_dev.c
++++ b/fs/char_dev.c
+@@ -159,6 +159,12 @@ __register_chrdev_region(unsigned int major, unsigned int baseminor,
+ 			ret = -EBUSY;
+ 			goto out;
+ 		}
++
++		if (new_min < old_min && new_max > old_max) {
++			ret = -EBUSY;
++			goto out;
++		}
++
+ 	}
+ 
+ 	cd->next = *cp;
+diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c
+index 4dc788e3bc96..fe38b5306045 100644
+--- a/fs/crypto/crypto.c
++++ b/fs/crypto/crypto.c
+@@ -334,7 +334,7 @@ static int fscrypt_d_revalidate(struct dentry *dentry, unsigned int flags)
+ 	spin_lock(&dentry->d_lock);
+ 	cached_with_key = dentry->d_flags & DCACHE_ENCRYPTED_WITH_KEY;
+ 	spin_unlock(&dentry->d_lock);
+-	dir_has_key = (d_inode(dir)->i_crypt_info != NULL);
++	dir_has_key = fscrypt_has_encryption_key(d_inode(dir));
+ 	dput(dir);
+ 
+ 	/*
+diff --git a/fs/crypto/fname.c b/fs/crypto/fname.c
+index 7ff40a73dbec..050384c79f40 100644
+--- a/fs/crypto/fname.c
++++ b/fs/crypto/fname.c
+@@ -269,7 +269,7 @@ int fscrypt_fname_disk_to_usr(struct inode *inode,
+ 	if (iname->len < FS_CRYPTO_BLOCK_SIZE)
+ 		return -EUCLEAN;
+ 
+-	if (inode->i_crypt_info)
++	if (fscrypt_has_encryption_key(inode))
+ 		return fname_decrypt(inode, iname, oname);
+ 
+ 	if (iname->len <= FSCRYPT_FNAME_MAX_UNDIGESTED_SIZE) {
+@@ -336,7 +336,7 @@ int fscrypt_setup_filename(struct inode *dir, const struct qstr *iname,
+ 	if (ret)
+ 		return ret;
+ 
+-	if (dir->i_crypt_info) {
++	if (fscrypt_has_encryption_key(dir)) {
+ 		if (!fscrypt_fname_encrypted_size(dir, iname->len,
+ 						  dir->i_sb->s_cop->max_namelen,
+ 						  &fname->crypto_buf.len))
+diff --git a/fs/crypto/keyinfo.c b/fs/crypto/keyinfo.c
+index 322ce9686bdb..bf291c10c682 100644
+--- a/fs/crypto/keyinfo.c
++++ b/fs/crypto/keyinfo.c
+@@ -509,7 +509,7 @@ int fscrypt_get_encryption_info(struct inode *inode)
+ 	u8 *raw_key = NULL;
+ 	int res;
+ 
+-	if (inode->i_crypt_info)
++	if (fscrypt_has_encryption_key(inode))
+ 		return 0;
+ 
+ 	res = fscrypt_initialize(inode->i_sb->s_cop->flags);
+@@ -573,7 +573,7 @@ int fscrypt_get_encryption_info(struct inode *inode)
+ 	if (res)
+ 		goto out;
+ 
+-	if (cmpxchg(&inode->i_crypt_info, NULL, crypt_info) == NULL)
++	if (cmpxchg_release(&inode->i_crypt_info, NULL, crypt_info) == NULL)
+ 		crypt_info = NULL;
+ out:
+ 	if (res == -ENOKEY)
+diff --git a/fs/crypto/policy.c b/fs/crypto/policy.c
+index bd7eaf9b3f00..d536889ac31b 100644
+--- a/fs/crypto/policy.c
++++ b/fs/crypto/policy.c
+@@ -194,8 +194,8 @@ int fscrypt_has_permitted_context(struct inode *parent, struct inode *child)
+ 	res = fscrypt_get_encryption_info(child);
+ 	if (res)
+ 		return 0;
+-	parent_ci = parent->i_crypt_info;
+-	child_ci = child->i_crypt_info;
++	parent_ci = READ_ONCE(parent->i_crypt_info);
++	child_ci = READ_ONCE(child->i_crypt_info);
+ 
+ 	if (parent_ci && child_ci) {
+ 		return memcmp(parent_ci->ci_master_key_descriptor,
+@@ -246,7 +246,7 @@ int fscrypt_inherit_context(struct inode *parent, struct inode *child,
+ 	if (res < 0)
+ 		return res;
+ 
+-	ci = parent->i_crypt_info;
++	ci = READ_ONCE(parent->i_crypt_info);
+ 	if (ci == NULL)
+ 		return -ENOKEY;
+ 
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index b32a57bc5d5d..7fd2d14dc27c 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -5619,25 +5619,22 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr)
+ 			up_write(&EXT4_I(inode)->i_data_sem);
+ 			ext4_journal_stop(handle);
+ 			if (error) {
+-				if (orphan)
++				if (orphan && inode->i_nlink)
+ 					ext4_orphan_del(NULL, inode);
+ 				goto err_out;
+ 			}
+ 		}
+-		if (!shrink)
++		if (!shrink) {
+ 			pagecache_isize_extended(inode, oldsize, inode->i_size);
+-
+-		/*
+-		 * Blocks are going to be removed from the inode. Wait
+-		 * for dio in flight.  Temporarily disable
+-		 * dioread_nolock to prevent livelock.
+-		 */
+-		if (orphan) {
+-			if (!ext4_should_journal_data(inode)) {
+-				inode_dio_wait(inode);
+-			} else
+-				ext4_wait_for_tail_page_commit(inode);
++		} else {
++			/*
++			 * Blocks are going to be removed from the inode. Wait
++			 * for dio in flight.
++			 */
++			inode_dio_wait(inode);
+ 		}
++		if (orphan && ext4_should_journal_data(inode))
++			ext4_wait_for_tail_page_commit(inode);
+ 		down_write(&EXT4_I(inode)->i_mmap_sem);
+ 
+ 		rc = ext4_break_layouts(inode);
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index d32964cd1117..71c28ff98b56 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -140,6 +140,7 @@ void gfs2_glock_free(struct gfs2_glock *gl)
+ {
+ 	struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
+ 
++	BUG_ON(atomic_read(&gl->gl_revokes));
+ 	rhashtable_remove_fast(&gl_hash_table, &gl->gl_node, ht_parms);
+ 	smp_mb();
+ 	wake_up_glock(gl);
+@@ -183,15 +184,19 @@ static int demote_ok(const struct gfs2_glock *gl)
+ 
+ void gfs2_glock_add_to_lru(struct gfs2_glock *gl)
+ {
++	if (!(gl->gl_ops->go_flags & GLOF_LRU))
++		return;
++
+ 	spin_lock(&lru_lock);
+ 
+-	if (!list_empty(&gl->gl_lru))
+-		list_del_init(&gl->gl_lru);
+-	else
++	list_del(&gl->gl_lru);
++	list_add_tail(&gl->gl_lru, &lru_list);
++
++	if (!test_bit(GLF_LRU, &gl->gl_flags)) {
++		set_bit(GLF_LRU, &gl->gl_flags);
+ 		atomic_inc(&lru_count);
++	}
+ 
+-	list_add_tail(&gl->gl_lru, &lru_list);
+-	set_bit(GLF_LRU, &gl->gl_flags);
+ 	spin_unlock(&lru_lock);
+ }
+ 
+@@ -201,7 +206,7 @@ static void gfs2_glock_remove_from_lru(struct gfs2_glock *gl)
+ 		return;
+ 
+ 	spin_lock(&lru_lock);
+-	if (!list_empty(&gl->gl_lru)) {
++	if (test_bit(GLF_LRU, &gl->gl_flags)) {
+ 		list_del_init(&gl->gl_lru);
+ 		atomic_dec(&lru_count);
+ 		clear_bit(GLF_LRU, &gl->gl_flags);
+@@ -1159,8 +1164,7 @@ void gfs2_glock_dq(struct gfs2_holder *gh)
+ 		    !test_bit(GLF_DEMOTE, &gl->gl_flags))
+ 			fast_path = 1;
+ 	}
+-	if (!test_bit(GLF_LFLUSH, &gl->gl_flags) && demote_ok(gl) &&
+-	    (glops->go_flags & GLOF_LRU))
++	if (!test_bit(GLF_LFLUSH, &gl->gl_flags) && demote_ok(gl))
+ 		gfs2_glock_add_to_lru(gl);
+ 
+ 	trace_gfs2_glock_queue(gh, 0);
+@@ -1456,6 +1460,7 @@ __acquires(&lru_lock)
+ 		if (!spin_trylock(&gl->gl_lockref.lock)) {
+ add_back_to_lru:
+ 			list_add(&gl->gl_lru, &lru_list);
++			set_bit(GLF_LRU, &gl->gl_flags);
+ 			atomic_inc(&lru_count);
+ 			continue;
+ 		}
+@@ -1463,7 +1468,6 @@ add_back_to_lru:
+ 			spin_unlock(&gl->gl_lockref.lock);
+ 			goto add_back_to_lru;
+ 		}
+-		clear_bit(GLF_LRU, &gl->gl_flags);
+ 		gl->gl_lockref.count++;
+ 		if (demote_ok(gl))
+ 			handle_callback(gl, LM_ST_UNLOCKED, 0, false);
+@@ -1498,6 +1502,7 @@ static long gfs2_scan_glock_lru(int nr)
+ 		if (!test_bit(GLF_LOCK, &gl->gl_flags)) {
+ 			list_move(&gl->gl_lru, &dispose);
+ 			atomic_dec(&lru_count);
++			clear_bit(GLF_LRU, &gl->gl_flags);
+ 			freed++;
+ 			continue;
+ 		}
+diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h
+index cdf07b408f54..539e8dc5a3f6 100644
+--- a/fs/gfs2/incore.h
++++ b/fs/gfs2/incore.h
+@@ -621,6 +621,7 @@ enum {
+ 	SDF_SKIP_DLM_UNLOCK	= 8,
+ 	SDF_FORCE_AIL_FLUSH     = 9,
+ 	SDF_AIL1_IO_ERROR	= 10,
++	SDF_FS_FROZEN           = 11,
+ };
+ 
+ enum gfs2_freeze_state {
+diff --git a/fs/gfs2/lock_dlm.c b/fs/gfs2/lock_dlm.c
+index 31df26ed7854..69bd1597bacf 100644
+--- a/fs/gfs2/lock_dlm.c
++++ b/fs/gfs2/lock_dlm.c
+@@ -31,9 +31,10 @@
+  * @delta is the difference between the current rtt sample and the
+  * running average srtt. We add 1/8 of that to the srtt in order to
+  * update the current srtt estimate. The variance estimate is a bit
+- * more complicated. We subtract the abs value of the @delta from
+- * the current variance estimate and add 1/4 of that to the running
+- * total.
++ * more complicated. We subtract the current variance estimate from
++ * the abs value of the @delta and add 1/4 of that to the running
++ * total.  That's equivalent to 3/4 of the current variance
++ * estimate plus 1/4 of the abs of @delta.
+  *
+  * Note that the index points at the array entry containing the smoothed
+  * mean value, and the variance is always in the following entry
+@@ -49,7 +50,7 @@ static inline void gfs2_update_stats(struct gfs2_lkstats *s, unsigned index,
+ 	s64 delta = sample - s->stats[index];
+ 	s->stats[index] += (delta >> 3);
+ 	index++;
+-	s->stats[index] += ((abs(delta) - s->stats[index]) >> 2);
++	s->stats[index] += (s64)(abs(delta) - s->stats[index]) >> 2;
+ }
+ 
+ /**
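Illustrative standalone check (not part of the patch; values are made up) of the identity the rewritten comment describes: v += (|d| - v) >> 2 is, up to integer truncation, v = 3/4 * v + 1/4 * |d|. The (s64) cast in the hunk matters because the shift must be an arithmetic (signed) shift:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	long long v = 800, d = -200;

	/* arithmetic shift of a negative value, as the kernel relies on */
	v += (llabs(d) - v) >> 2;	/* 800 + (200 - 800)/4 = 650 */
	printf("%lld\n", v);		/* 3/4 * 800 + 1/4 * 200 = 650 */
	return 0;
}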
+diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
+index b8830fda51e8..0e04f87a7ddd 100644
+--- a/fs/gfs2/log.c
++++ b/fs/gfs2/log.c
+@@ -606,7 +606,8 @@ void gfs2_add_revoke(struct gfs2_sbd *sdp, struct gfs2_bufdata *bd)
+ 	gfs2_remove_from_ail(bd); /* drops ref on bh */
+ 	bd->bd_bh = NULL;
+ 	sdp->sd_log_num_revoke++;
+-	atomic_inc(&gl->gl_revokes);
++	if (atomic_inc_return(&gl->gl_revokes) == 1)
++		gfs2_glock_hold(gl);
+ 	set_bit(GLF_LFLUSH, &gl->gl_flags);
+ 	list_add(&bd->bd_list, &sdp->sd_log_le_revoke);
+ }
+diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c
+index 8722c60b11fe..4b280611246d 100644
+--- a/fs/gfs2/lops.c
++++ b/fs/gfs2/lops.c
+@@ -669,8 +669,10 @@ static void revoke_lo_after_commit(struct gfs2_sbd *sdp, struct gfs2_trans *tr)
+ 		bd = list_entry(head->next, struct gfs2_bufdata, bd_list);
+ 		list_del_init(&bd->bd_list);
+ 		gl = bd->bd_gl;
+-		atomic_dec(&gl->gl_revokes);
+-		clear_bit(GLF_LFLUSH, &gl->gl_flags);
++		if (atomic_dec_return(&gl->gl_revokes) == 0) {
++			clear_bit(GLF_LFLUSH, &gl->gl_flags);
++			gfs2_glock_queue_put(gl);
++		}
+ 		kmem_cache_free(gfs2_bufdata_cachep, bd);
+ 	}
+ }
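Illustrative sketch (not part of the patch; names are invented): the two gfs2 hunks above make the first queued revoke pin the glock and the last completed revoke unpin it, which is what makes the new BUG_ON(atomic_read(&gl->gl_revokes)) in gfs2_glock_free() safe. The 0 -> 1 / 1 -> 0 pinning shape:

#include <linux/atomic.h>
#include <linux/kref.h>

struct obj {
	atomic_t pending;
	struct kref ref;
};

static void obj_release(struct kref *ref) { /* free the object */ }

static void add_item(struct obj *o)
{
	if (atomic_inc_return(&o->pending) == 1)
		kref_get(&o->ref);		/* 0 -> 1: pin */
}

static void complete_item(struct obj *o)
{
	if (atomic_dec_return(&o->pending) == 0)
		kref_put(&o->ref, obj_release);	/* 1 -> 0: unpin */
}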
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index ca71163ff7cf..360206704a14 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -973,8 +973,7 @@ void gfs2_freeze_func(struct work_struct *work)
+ 	if (error) {
+ 		printk(KERN_INFO "GFS2: couldn't get freeze lock : %d\n", error);
+ 		gfs2_assert_withdraw(sdp, 0);
+-	}
+-	else {
++	} else {
+ 		atomic_set(&sdp->sd_freeze_state, SFS_UNFROZEN);
+ 		error = thaw_super(sb);
+ 		if (error) {
+@@ -987,6 +986,8 @@ void gfs2_freeze_func(struct work_struct *work)
+ 		gfs2_glock_dq_uninit(&freeze_gh);
+ 	}
+ 	deactivate_super(sb);
++	clear_bit_unlock(SDF_FS_FROZEN, &sdp->sd_flags);
++	wake_up_bit(&sdp->sd_flags, SDF_FS_FROZEN);
+ 	return;
+ }
+ 
+@@ -1029,6 +1030,7 @@ static int gfs2_freeze(struct super_block *sb)
+ 		msleep(1000);
+ 	}
+ 	error = 0;
++	set_bit(SDF_FS_FROZEN, &sdp->sd_flags);
+ out:
+ 	mutex_unlock(&sdp->sd_freeze_mutex);
+ 	return error;
+@@ -1053,7 +1055,7 @@ static int gfs2_unfreeze(struct super_block *sb)
+ 
+ 	gfs2_glock_dq_uninit(&sdp->sd_freeze_gh);
+ 	mutex_unlock(&sdp->sd_freeze_mutex);
+-	return 0;
++	return wait_on_bit(&sdp->sd_flags, SDF_FS_FROZEN, TASK_INTERRUPTIBLE);
+ }
+ 
+ /**
+diff --git a/fs/internal.h b/fs/internal.h
+index 6a8b71643af4..2e7362837a6e 100644
+--- a/fs/internal.h
++++ b/fs/internal.h
+@@ -89,9 +89,7 @@ extern int sb_prepare_remount_readonly(struct super_block *);
+ 
+ extern void __init mnt_init(void);
+ 
+-extern int __mnt_want_write(struct vfsmount *);
+ extern int __mnt_want_write_file(struct file *);
+-extern void __mnt_drop_write(struct vfsmount *);
+ extern void __mnt_drop_write_file(struct file *);
+ 
+ /*
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 84efb8956734..30a5687a17b6 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -2334,7 +2334,7 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx,
+ 							nr_cpu_ids);
+ 
+ 			ret = -EINVAL;
+-			if (!cpu_possible(cpu))
++			if (!cpu_online(cpu))
+ 				goto err;
+ 
+ 			ctx->sqo_thread = kthread_create_on_cpu(io_sq_thread,
+diff --git a/fs/nfs/client.c b/fs/nfs/client.c
+index 90d71fda65ce..dfb796eab912 100644
+--- a/fs/nfs/client.c
++++ b/fs/nfs/client.c
+@@ -284,6 +284,7 @@ static struct nfs_client *nfs_match_client(const struct nfs_client_initdata *dat
+ 	struct nfs_client *clp;
+ 	const struct sockaddr *sap = data->addr;
+ 	struct nfs_net *nn = net_generic(data->net, nfs_net_id);
++	int error;
+ 
+ again:
+ 	list_for_each_entry(clp, &nn->nfs_client_list, cl_share_link) {
+@@ -296,9 +297,11 @@ again:
+ 		if (clp->cl_cons_state > NFS_CS_READY) {
+ 			refcount_inc(&clp->cl_count);
+ 			spin_unlock(&nn->nfs_client_lock);
+-			nfs_wait_client_init_complete(clp);
++			error = nfs_wait_client_init_complete(clp);
+ 			nfs_put_client(clp);
+ 			spin_lock(&nn->nfs_client_lock);
++			if (error < 0)
++				return ERR_PTR(error);
+ 			goto again;
+ 		}
+ 
+@@ -407,6 +410,8 @@ struct nfs_client *nfs_get_client(const struct nfs_client_initdata *cl_init)
+ 		clp = nfs_match_client(cl_init);
+ 		if (clp) {
+ 			spin_unlock(&nn->nfs_client_lock);
++			if (IS_ERR(clp))
++				return clp;
+ 			if (new)
+ 				new->rpc_ops->free_client(new);
+ 			return nfs_found_client(cl_init, clp);
+diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c
+index 00d17198ee12..f10b660805fc 100644
+--- a/fs/nfs/nfs4file.c
++++ b/fs/nfs/nfs4file.c
+@@ -187,7 +187,7 @@ static loff_t nfs42_remap_file_range(struct file *src_file, loff_t src_off,
+ 	bool same_inode = false;
+ 	int ret;
+ 
+-	if (remap_flags & ~REMAP_FILE_ADVISORY)
++	if (remap_flags & ~(REMAP_FILE_DEDUP | REMAP_FILE_ADVISORY))
+ 		return -EINVAL;
+ 
+ 	/* check alignment w.r.t. clone_blksize */
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index 82c129bfe58d..93872bb50230 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -260,7 +260,7 @@ static int ovl_instantiate(struct dentry *dentry, struct inode *inode,
+ 		 * hashed directory inode aliases.
+ 		 */
+ 		inode = ovl_get_inode(dentry->d_sb, &oip);
+-		if (WARN_ON(IS_ERR(inode)))
++		if (IS_ERR(inode))
+ 			return PTR_ERR(inode);
+ 	} else {
+ 		WARN_ON(ovl_inode_real(inode) != d_inode(newdentry));
+diff --git a/fs/overlayfs/inode.c b/fs/overlayfs/inode.c
+index 3b7ed5d2279c..b48273e846ad 100644
+--- a/fs/overlayfs/inode.c
++++ b/fs/overlayfs/inode.c
+@@ -832,7 +832,7 @@ struct inode *ovl_get_inode(struct super_block *sb,
+ 	int fsid = bylower ? oip->lowerpath->layer->fsid : 0;
+ 	bool is_dir, metacopy = false;
+ 	unsigned long ino = 0;
+-	int err = -ENOMEM;
++	int err = oip->newinode ? -EEXIST : -ENOMEM;
+ 
+ 	if (!realinode)
+ 		realinode = d_inode(lowerdentry);
+@@ -917,6 +917,7 @@ out:
+ 	return inode;
+ 
+ out_err:
++	pr_warn_ratelimited("overlayfs: failed to get inode (%i)\n", err);
+ 	inode = ERR_PTR(err);
+ 	goto out;
+ }
+diff --git a/include/crypto/hash.h b/include/crypto/hash.h
+index 3b31c1b349ae..bc143b410359 100644
+--- a/include/crypto/hash.h
++++ b/include/crypto/hash.h
+@@ -152,7 +152,13 @@ struct shash_desc {
+ };
+ 
+ #define HASH_MAX_DIGESTSIZE	 64
+-#define HASH_MAX_DESCSIZE	360
++
++/*
++ * Worst case is hmac(sha3-224-generic).  Its context is a nested 'shash_desc'
++ * containing a 'struct sha3_state'.
++ */
++#define HASH_MAX_DESCSIZE	(sizeof(struct shash_desc) + 360)
++
+ #define HASH_MAX_STATESIZE	512
+ 
+ #define SHASH_DESC_ON_STACK(shash, ctx)				  \
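Illustrative usage sketch (not part of the patch): HASH_MAX_DESCSIZE sizes the buffer behind SHASH_DESC_ON_STACK, so once the worst case nests a second descriptor the outer struct shash_desc header has to be counted too. Assumptions: the caller already allocated tfm, and the 5.1-era desc->flags field (removed in later kernels) is set explicitly:

#include <crypto/hash.h>

static int example_digest(struct crypto_shash *tfm,
			  const u8 *data, unsigned int len, u8 *out)
{
	SHASH_DESC_ON_STACK(desc, tfm);	/* sized via HASH_MAX_DESCSIZE */
	int err;

	desc->tfm = tfm;
	desc->flags = 0;	/* 5.1-era field; no CRYPTO_TFM_REQ_MAY_SLEEP */
	err = crypto_shash_digest(desc, data, len, out);
	shash_desc_zero(desc);	/* wipe key-dependent state off the stack */
	return err;
}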
+diff --git a/include/drm/tinydrm/mipi-dbi.h b/include/drm/tinydrm/mipi-dbi.h
+index f4ec2834bc22..7dfa67a15a04 100644
+--- a/include/drm/tinydrm/mipi-dbi.h
++++ b/include/drm/tinydrm/mipi-dbi.h
+@@ -43,7 +43,7 @@ struct mipi_dbi {
+ 	struct spi_device *spi;
+ 	bool enabled;
+ 	struct mutex cmdlock;
+-	int (*command)(struct mipi_dbi *mipi, u8 cmd, u8 *param, size_t num);
++	int (*command)(struct mipi_dbi *mipi, u8 *cmd, u8 *param, size_t num);
+ 	const u8 *read_commands;
+ 	struct gpio_desc *dc;
+ 	u16 *tx_buf;
+@@ -82,6 +82,7 @@ u32 mipi_dbi_spi_cmd_max_speed(struct spi_device *spi, size_t len);
+ 
+ int mipi_dbi_command_read(struct mipi_dbi *mipi, u8 cmd, u8 *val);
+ int mipi_dbi_command_buf(struct mipi_dbi *mipi, u8 cmd, u8 *data, size_t len);
++int mipi_dbi_command_stackbuf(struct mipi_dbi *mipi, u8 cmd, u8 *data, size_t len);
+ int mipi_dbi_buf_copy(void *dst, struct drm_framebuffer *fb,
+ 		      struct drm_rect *clip, bool swap);
+ /**
+@@ -99,7 +100,7 @@ int mipi_dbi_buf_copy(void *dst, struct drm_framebuffer *fb,
+ #define mipi_dbi_command(mipi, cmd, seq...) \
+ ({ \
+ 	u8 d[] = { seq }; \
+-	mipi_dbi_command_buf(mipi, cmd, d, ARRAY_SIZE(d)); \
++	mipi_dbi_command_stackbuf(mipi, cmd, d, ARRAY_SIZE(d)); \
+ })
+ 
+ #ifdef CONFIG_DEBUG_FS
+diff --git a/include/linux/bio.h b/include/linux/bio.h
+index e584673c1881..5becbafb84e8 100644
+--- a/include/linux/bio.h
++++ b/include/linux/bio.h
+@@ -224,7 +224,7 @@ static inline void bio_cnt_set(struct bio *bio, unsigned int count)
+ {
+ 	if (count != 1) {
+ 		bio->bi_flags |= (1 << BIO_REFFED);
+-		smp_mb__before_atomic();
++		smp_mb();
+ 	}
+ 	atomic_set(&bio->__bi_cnt, count);
+ }
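Illustrative sketch (not part of the patch; names are invented) of the rule the bio.h hunk applies: smp_mb__before_atomic() only orders against the read-modify-write atomics that follow it, and atomic_set() is a plain store, so a full smp_mb() is required:

#include <linux/atomic.h>

static unsigned long obj_flags;
static atomic_t obj_refs;

static void publish_flag_then_count(int count)
{
	obj_flags |= 1UL;	/* store that must be visible first */

	if (count != 1)
		smp_mb();	/* full barrier: atomic_set() is not an
				 * RMW op, so smp_mb__before_atomic()
				 * is not guaranteed to order it */
	atomic_set(&obj_refs, count);
}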
+diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
+index 1c70803e9f77..7d57890cec67 100644
+--- a/include/linux/cgroup-defs.h
++++ b/include/linux/cgroup-defs.h
+@@ -349,6 +349,11 @@ struct cgroup {
+ 	 * Dying cgroups are cgroups which were deleted by a user,
+ 	 * but are still existing because someone else is holding a reference.
+ 	 * max_descendants is a maximum allowed number of descent cgroups.
++	 *
++	 * nr_descendants and nr_dying_descendants are protected
++	 * by cgroup_mutex and css_set_lock. It's fine to read them holding
++	 * any of cgroup_mutex and css_set_lock; for writing both locks
++	 * should be held.
+ 	 */
+ 	int nr_descendants;
+ 	int nr_dying_descendants;
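Illustrative sketch (not part of the patch; locks and names are invented) of the discipline the comment above states and the cgroup.c hunks later in this patch enforce: writers take both locks, so holding either one alone is enough to read:

#include <linux/mutex.h>
#include <linux/spinlock.h>

static DEFINE_MUTEX(big_lock);
static DEFINE_SPINLOCK(small_lock);
static int nr_things;

static void writer_inc(void)
{
	mutex_lock(&big_lock);
	spin_lock_irq(&small_lock);
	nr_things++;			/* both locks held for writes */
	spin_unlock_irq(&small_lock);
	mutex_unlock(&big_lock);
}

static int reader(void)
{
	int v;

	spin_lock_irq(&small_lock);	/* either lock alone suffices */
	v = nr_things;
	spin_unlock_irq(&small_lock);
	return v;
}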
+diff --git a/include/linux/dax.h b/include/linux/dax.h
+index 0dd316a74a29..becaea5f4488 100644
+--- a/include/linux/dax.h
++++ b/include/linux/dax.h
+@@ -19,6 +19,12 @@ struct dax_operations {
+ 	 */
+ 	long (*direct_access)(struct dax_device *, pgoff_t, long,
+ 			void **, pfn_t *);
++	/*
++	 * Validate whether this device is usable as an fsdax backing
++	 * device.
++	 */
++	bool (*dax_supported)(struct dax_device *, struct block_device *, int,
++			sector_t, sector_t);
+ 	/* copy_from_iter: required operation for fs-dax direct-i/o */
+ 	size_t (*copy_from_iter)(struct dax_device *, pgoff_t, void *, size_t,
+ 			struct iov_iter *);
+@@ -75,6 +81,17 @@ static inline bool bdev_dax_supported(struct block_device *bdev, int blocksize)
+ 	return __bdev_dax_supported(bdev, blocksize);
+ }
+ 
++bool __generic_fsdax_supported(struct dax_device *dax_dev,
++		struct block_device *bdev, int blocksize, sector_t start,
++		sector_t sectors);
++static inline bool generic_fsdax_supported(struct dax_device *dax_dev,
++		struct block_device *bdev, int blocksize, sector_t start,
++		sector_t sectors)
++{
++	return __generic_fsdax_supported(dax_dev, bdev, blocksize, start,
++			sectors);
++}
++
+ static inline struct dax_device *fs_dax_get_by_host(const char *host)
+ {
+ 	return dax_get_by_host(host);
+@@ -99,6 +116,13 @@ static inline bool bdev_dax_supported(struct block_device *bdev,
+ 	return false;
+ }
+ 
++static inline bool generic_fsdax_supported(struct dax_device *dax_dev,
++		struct block_device *bdev, int blocksize, sector_t start,
++		sector_t sectors)
++{
++	return false;
++}
++
+ static inline struct dax_device *fs_dax_get_by_host(const char *host)
+ {
+ 	return NULL;
+@@ -142,6 +166,8 @@ bool dax_alive(struct dax_device *dax_dev);
+ void *dax_get_private(struct dax_device *dax_dev);
+ long dax_direct_access(struct dax_device *dax_dev, pgoff_t pgoff, long nr_pages,
+ 		void **kaddr, pfn_t *pfn);
++bool dax_supported(struct dax_device *dax_dev, struct block_device *bdev,
++		int blocksize, sector_t start, sector_t len);
+ size_t dax_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff, void *addr,
+ 		size_t bytes, struct iov_iter *i);
+ size_t dax_copy_to_iter(struct dax_device *dax_dev, pgoff_t pgoff, void *addr,
+diff --git a/include/linux/filter.h b/include/linux/filter.h
+index 6074aa064b54..14ec3bdad9a9 100644
+--- a/include/linux/filter.h
++++ b/include/linux/filter.h
+@@ -746,6 +746,7 @@ static inline void bpf_prog_unlock_ro(struct bpf_prog *fp)
+ static inline void bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr)
+ {
+ 	set_memory_ro((unsigned long)hdr, hdr->pages);
++	set_memory_x((unsigned long)hdr, hdr->pages);
+ }
+ 
+ static inline void bpf_jit_binary_unlock_ro(struct bpf_binary_header *hdr)
+diff --git a/include/linux/fscrypt.h b/include/linux/fscrypt.h
+index e5194fc3983e..08246f068fd8 100644
+--- a/include/linux/fscrypt.h
++++ b/include/linux/fscrypt.h
+@@ -79,7 +79,8 @@ struct fscrypt_ctx {
+ 
+ static inline bool fscrypt_has_encryption_key(const struct inode *inode)
+ {
+-	return (inode->i_crypt_info != NULL);
++	/* pairs with cmpxchg_release() in fscrypt_get_encryption_info() */
++	return READ_ONCE(inode->i_crypt_info) != NULL;
+ }
+ 
+ static inline bool fscrypt_dummy_context_enabled(struct inode *inode)
+diff --git a/include/linux/genhd.h b/include/linux/genhd.h
+index 06c0fd594097..69db1affedb0 100644
+--- a/include/linux/genhd.h
++++ b/include/linux/genhd.h
+@@ -610,6 +610,7 @@ struct unixware_disklabel {
+ 
+ extern int blk_alloc_devt(struct hd_struct *part, dev_t *devt);
+ extern void blk_free_devt(dev_t devt);
++extern void blk_invalidate_devt(dev_t devt);
+ extern dev_t blk_lookup_devt(const char *name, int partno);
+ extern char *disk_name (struct gendisk *hd, int partno, char *buf);
+ 
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index f9707d1dcb58..ac0c70b4ce10 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -417,6 +417,7 @@ struct hid_global {
+ 
+ struct hid_local {
+ 	unsigned usage[HID_MAX_USAGES]; /* usage array */
++	u8 usage_size[HID_MAX_USAGES]; /* usage size array */
+ 	unsigned collection_index[HID_MAX_USAGES]; /* collection index array */
+ 	unsigned usage_index;
+ 	unsigned usage_minimum;
+diff --git a/include/linux/iio/adc/ad_sigma_delta.h b/include/linux/iio/adc/ad_sigma_delta.h
+index 7e84351fa2c0..6e9fb1932dde 100644
+--- a/include/linux/iio/adc/ad_sigma_delta.h
++++ b/include/linux/iio/adc/ad_sigma_delta.h
+@@ -69,6 +69,7 @@ struct ad_sigma_delta {
+ 	bool			irq_dis;
+ 
+ 	bool			bus_locked;
++	bool			keep_cs_asserted;
+ 
+ 	uint8_t			comm;
+ 
+diff --git a/include/linux/mlx5/eswitch.h b/include/linux/mlx5/eswitch.h
+index 96d8435421de..0ca77dd1429c 100644
+--- a/include/linux/mlx5/eswitch.h
++++ b/include/linux/mlx5/eswitch.h
+@@ -35,7 +35,7 @@ struct mlx5_eswitch_rep_if {
+ 	void		       (*unload)(struct mlx5_eswitch_rep *rep);
+ 	void		       *(*get_proto_dev)(struct mlx5_eswitch_rep *rep);
+ 	void			*priv;
+-	u8			state;
++	atomic_t		state;
+ };
+ 
+ struct mlx5_eswitch_rep {
+diff --git a/include/linux/mount.h b/include/linux/mount.h
+index 9197ddbf35fb..bf8cc4108b8f 100644
+--- a/include/linux/mount.h
++++ b/include/linux/mount.h
+@@ -87,6 +87,8 @@ extern bool mnt_may_suid(struct vfsmount *mnt);
+ 
+ struct path;
+ extern struct vfsmount *clone_private_mount(const struct path *path);
++extern int __mnt_want_write(struct vfsmount *);
++extern void __mnt_drop_write(struct vfsmount *);
+ 
+ struct file_system_type;
+ extern struct vfsmount *fc_mount(struct fs_context *fc);
+diff --git a/include/linux/overflow.h b/include/linux/overflow.h
+index 40b48e2133cb..15eb85de9226 100644
+--- a/include/linux/overflow.h
++++ b/include/linux/overflow.h
+@@ -36,6 +36,12 @@
+ #define type_max(T) ((T)((__type_half_max(T) - 1) + __type_half_max(T)))
+ #define type_min(T) ((T)((T)-type_max(T)-(T)1))
+ 
++/*
++ * Avoids triggering -Wtype-limits compilation warning,
++ * while using unsigned data types to check a < 0.
++ */
++#define is_non_negative(a) ((a) > 0 || (a) == 0)
++#define is_negative(a) (!(is_non_negative(a)))
+ 
+ #ifdef COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW
+ /*
+@@ -227,10 +233,10 @@
+ 	typeof(d) _d = d;						\
+ 	u64 _a_full = _a;						\
+ 	unsigned int _to_shift =					\
+-		_s >= 0 && _s < 8 * sizeof(*d) ? _s : 0;		\
++		is_non_negative(_s) && _s < 8 * sizeof(*d) ? _s : 0;	\
+ 	*_d = (_a_full << _to_shift);					\
+-	(_to_shift != _s || *_d < 0 || _a < 0 ||			\
+-		(*_d >> _to_shift) != _a);				\
++	(_to_shift != _s || is_negative(*_d) || is_negative(_a) ||	\
++	(*_d >> _to_shift) != _a);					\
+ })
+ 
+ /**
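Illustrative standalone demo (not part of the patch) of why the two helpers exist: comparing an unsigned value with "< 0" is always false and triggers -Wtype-limits, while !(a > 0 || a == 0) keeps the macro type-generic without the warning. Compile with -Wtype-limits to see the difference:

#include <stdio.h>

#define is_non_negative(a) ((a) > 0 || (a) == 0)
#define is_negative(a) (!(is_non_negative(a)))

int main(void)
{
	unsigned int u = 5;
	int s = -5;

	/* "if (u < 0)" would warn: comparison is always false */
	printf("%d %d\n", is_negative(u), is_negative(s));	/* 0 1 */
	return 0;
}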
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index 6cdb1db776cf..922bb6848813 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -878,9 +878,11 @@ static inline void rcu_head_init(struct rcu_head *rhp)
+ static inline bool
+ rcu_head_after_call_rcu(struct rcu_head *rhp, rcu_callback_t f)
+ {
+-	if (READ_ONCE(rhp->func) == f)
++	rcu_callback_t func = READ_ONCE(rhp->func);
++
++	if (func == f)
+ 		return true;
+-	WARN_ON_ONCE(READ_ONCE(rhp->func) != (rcu_callback_t)~0L);
++	WARN_ON_ONCE(func != (rcu_callback_t)~0L);
+ 	return false;
+ }
+ 
+diff --git a/include/linux/regulator/consumer.h b/include/linux/regulator/consumer.h
+index f3f76051e8b0..aaf3cee70439 100644
+--- a/include/linux/regulator/consumer.h
++++ b/include/linux/regulator/consumer.h
+@@ -478,6 +478,11 @@ static inline int regulator_is_supported_voltage(struct regulator *regulator,
+ 	return 0;
+ }
+ 
++static inline unsigned int regulator_get_linear_step(struct regulator *regulator)
++{
++	return 0;
++}
++
+ static inline int regulator_set_current_limit(struct regulator *regulator,
+ 					     int min_uA, int max_uA)
+ {
+diff --git a/include/linux/smpboot.h b/include/linux/smpboot.h
+index d0884b525001..9d1bc65d226c 100644
+--- a/include/linux/smpboot.h
++++ b/include/linux/smpboot.h
+@@ -29,7 +29,7 @@ struct smpboot_thread_data;
+  * @thread_comm:	The base name of the thread
+  */
+ struct smp_hotplug_thread {
+-	struct task_struct __percpu	**store;
++	struct task_struct		* __percpu *store;
+ 	struct list_head		list;
+ 	int				(*thread_should_run)(unsigned int cpu);
+ 	void				(*thread_fn)(unsigned int cpu);
+diff --git a/include/linux/time64.h b/include/linux/time64.h
+index f38d382ffec1..a620ee610b9f 100644
+--- a/include/linux/time64.h
++++ b/include/linux/time64.h
+@@ -33,6 +33,17 @@ struct itimerspec64 {
+ #define KTIME_MAX			((s64)~((u64)1 << 63))
+ #define KTIME_SEC_MAX			(KTIME_MAX / NSEC_PER_SEC)
+ 
++/*
++ * Limits for settimeofday():
++ *
++ * To prevent setting the time close to the wraparound point, time setting
++ * is limited so a reasonable uptime can be accommodated. Uptime of 30 years
++ * should really be sufficient, which means the cutoff is 2232. At that
++ * point the cutoff is just a small part of the larger problem.
++ */
++#define TIME_UPTIME_SEC_MAX		(30LL * 365 * 24 * 3600)
++#define TIME_SETTOD_SEC_MAX		(KTIME_SEC_MAX - TIME_UPTIME_SEC_MAX)
++
+ static inline int timespec64_equal(const struct timespec64 *a,
+ 				   const struct timespec64 *b)
+ {
+@@ -100,6 +111,16 @@ static inline bool timespec64_valid_strict(const struct timespec64 *ts)
+ 	return true;
+ }
+ 
++static inline bool timespec64_valid_settod(const struct timespec64 *ts)
++{
++	if (!timespec64_valid(ts))
++		return false;
++	/* Disallow values which cause overflow issues vs. CLOCK_REALTIME */
++	if ((unsigned long long)ts->tv_sec >= TIME_SETTOD_SEC_MAX)
++		return false;
++	return true;
++}
++
+ /**
+  * timespec64_to_ns - Convert timespec64 to nanoseconds
+  * @ts:		pointer to the timespec64 variable to be converted
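Illustrative back-of-envelope check (not part of the patch) of the 2232 figure in the comment above: KTIME_SEC_MAX is 2^63 - 1 nanoseconds expressed in seconds, and subtracting 30 years of uptime headroom lands roughly 262 years after 1970:

#include <stdio.h>

int main(void)
{
	long long ktime_sec_max = (long long)(~0ULL >> 1) / 1000000000LL;
	long long uptime_max = 30LL * 365 * 24 * 3600;	/* ~9.46e8 s */
	long long settod_max = ktime_sec_max - uptime_max;

	/* 31557600 s per Julian year; prints "... -> year ~2232" */
	printf("%lld -> year ~%lld\n", settod_max,
	       1970 + settod_max / 31557600LL);
	return 0;
}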
+diff --git a/include/media/videobuf2-core.h b/include/media/videobuf2-core.h
+index 910f3d469005..65108819de5a 100644
+--- a/include/media/videobuf2-core.h
++++ b/include/media/videobuf2-core.h
+@@ -595,6 +595,7 @@ struct vb2_queue {
+ 	unsigned int			start_streaming_called:1;
+ 	unsigned int			error:1;
+ 	unsigned int			waiting_for_buffers:1;
++	unsigned int			waiting_in_dqbuf:1;
+ 	unsigned int			is_multiplanar:1;
+ 	unsigned int			is_output:1;
+ 	unsigned int			copy_timestamp:1;
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index fbba43e9bef5..9a5330eed794 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -282,6 +282,7 @@ enum {
+ 	HCI_FORCE_BREDR_SMP,
+ 	HCI_FORCE_STATIC_ADDR,
+ 	HCI_LL_RPA_RESOLUTION,
++	HCI_CMD_PENDING,
+ 
+ 	__HCI_NUM_FLAGS,
+ };
+diff --git a/include/xen/xen.h b/include/xen/xen.h
+index 19d032373de5..19a72f591e2b 100644
+--- a/include/xen/xen.h
++++ b/include/xen/xen.h
+@@ -43,8 +43,10 @@ extern struct hvm_start_info pvh_start_info;
+ #endif	/* CONFIG_XEN_DOM0 */
+ 
+ struct bio_vec;
++struct page;
++
+ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
+-		const struct bio_vec *vec2);
++		const struct page *page);
+ 
+ #if defined(CONFIG_MEMORY_HOTPLUG) && defined(CONFIG_XEN_BALLOON)
+ extern u64 xen_saved_max_mem_size;
+diff --git a/kernel/acct.c b/kernel/acct.c
+index addf7732fb56..81f9831a7859 100644
+--- a/kernel/acct.c
++++ b/kernel/acct.c
+@@ -227,7 +227,7 @@ static int acct_on(struct filename *pathname)
+ 		filp_close(file, NULL);
+ 		return PTR_ERR(internal);
+ 	}
+-	err = mnt_want_write(internal);
++	err = __mnt_want_write(internal);
+ 	if (err) {
+ 		mntput(internal);
+ 		kfree(acct);
+@@ -252,7 +252,7 @@ static int acct_on(struct filename *pathname)
+ 	old = xchg(&ns->bacct, &acct->pin);
+ 	mutex_unlock(&acct->lock);
+ 	pin_kill(old);
+-	mnt_drop_write(mnt);
++	__mnt_drop_write(mnt);
+ 	mntput(mnt);
+ 	return 0;
+ }
+diff --git a/kernel/auditfilter.c b/kernel/auditfilter.c
+index 63f8b3f26fab..3ac71c4fda49 100644
+--- a/kernel/auditfilter.c
++++ b/kernel/auditfilter.c
+@@ -1114,22 +1114,24 @@ int audit_rule_change(int type, int seq, void *data, size_t datasz)
+ 	int err = 0;
+ 	struct audit_entry *entry;
+ 
+-	entry = audit_data_to_entry(data, datasz);
+-	if (IS_ERR(entry))
+-		return PTR_ERR(entry);
+-
+ 	switch (type) {
+ 	case AUDIT_ADD_RULE:
++		entry = audit_data_to_entry(data, datasz);
++		if (IS_ERR(entry))
++			return PTR_ERR(entry);
+ 		err = audit_add_rule(entry);
+ 		audit_log_rule_change("add_rule", &entry->rule, !err);
+ 		break;
+ 	case AUDIT_DEL_RULE:
++		entry = audit_data_to_entry(data, datasz);
++		if (IS_ERR(entry))
++			return PTR_ERR(entry);
+ 		err = audit_del_rule(entry);
+ 		audit_log_rule_change("remove_rule", &entry->rule, !err);
+ 		break;
+ 	default:
+-		err = -EINVAL;
+ 		WARN_ON(1);
++		return -EINVAL;
+ 	}
+ 
+ 	if (err || type == AUDIT_DEL_RULE) {
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index d1eab1d4a930..fa7b8047aab8 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -840,6 +840,13 @@ static inline void audit_proctitle_free(struct audit_context *context)
+ 	context->proctitle.len = 0;
+ }
+ 
++static inline void audit_free_module(struct audit_context *context)
++{
++	if (context->type == AUDIT_KERN_MODULE) {
++		kfree(context->module.name);
++		context->module.name = NULL;
++	}
++}
+ static inline void audit_free_names(struct audit_context *context)
+ {
+ 	struct audit_names *n, *next;
+@@ -923,6 +930,7 @@ int audit_alloc(struct task_struct *tsk)
+ 
+ static inline void audit_free_context(struct audit_context *context)
+ {
++	audit_free_module(context);
+ 	audit_free_names(context);
+ 	unroll_tree_refs(context, NULL, 0);
+ 	free_tree_refs(context);
+@@ -1266,7 +1274,6 @@ static void show_special(struct audit_context *context, int *call_panic)
+ 		audit_log_format(ab, "name=");
+ 		if (context->module.name) {
+ 			audit_log_untrustedstring(ab, context->module.name);
+-			kfree(context->module.name);
+ 		} else
+ 			audit_log_format(ab, "(null)");
+ 
+@@ -1697,6 +1704,7 @@ void __audit_syscall_exit(int success, long return_code)
+ 	context->in_syscall = 0;
+ 	context->prio = context->state == AUDIT_RECORD_CONTEXT ? ~0ULL : 0;
+ 
++	audit_free_module(context);
+ 	audit_free_names(context);
+ 	unroll_tree_refs(context, NULL, 0);
+ 	audit_free_aux(context);
+diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
+index 191b79948424..1e525d70f833 100644
+--- a/kernel/bpf/devmap.c
++++ b/kernel/bpf/devmap.c
+@@ -164,6 +164,9 @@ static void dev_map_free(struct bpf_map *map)
+ 	bpf_clear_redirect_map(map);
+ 	synchronize_rcu();
+ 
++	/* Make sure prior __dev_map_entry_free() have completed. */
++	rcu_barrier();
++
+ 	/* To ensure all pending flush operations have completed wait for flush
+ 	 * bitmap to indicate all flush_needed bits to be zero on _all_ cpus.
+ 	 * Because the above synchronize_rcu() ensures the map is disconnected
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 3f2b4bde0f9c..9fcf6338ea5f 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -4781,9 +4781,11 @@ static void css_release_work_fn(struct work_struct *work)
+ 		if (cgroup_on_dfl(cgrp))
+ 			cgroup_rstat_flush(cgrp);
+ 
++		spin_lock_irq(&css_set_lock);
+ 		for (tcgrp = cgroup_parent(cgrp); tcgrp;
+ 		     tcgrp = cgroup_parent(tcgrp))
+ 			tcgrp->nr_dying_descendants--;
++		spin_unlock_irq(&css_set_lock);
+ 
+ 		cgroup_idr_remove(&cgrp->root->cgroup_idr, cgrp->id);
+ 		cgrp->id = -1;
+@@ -5001,12 +5003,14 @@ static struct cgroup *cgroup_create(struct cgroup *parent)
+ 	if (ret)
+ 		goto out_psi_free;
+ 
++	spin_lock_irq(&css_set_lock);
+ 	for (tcgrp = cgrp; tcgrp; tcgrp = cgroup_parent(tcgrp)) {
+ 		cgrp->ancestor_ids[tcgrp->level] = tcgrp->id;
+ 
+ 		if (tcgrp != cgrp)
+ 			tcgrp->nr_descendants++;
+ 	}
++	spin_unlock_irq(&css_set_lock);
+ 
+ 	if (notify_on_release(parent))
+ 		set_bit(CGRP_NOTIFY_ON_RELEASE, &cgrp->flags);
+@@ -5291,10 +5295,12 @@ static int cgroup_destroy_locked(struct cgroup *cgrp)
+ 	if (parent && cgroup_is_threaded(cgrp))
+ 		parent->nr_threaded_children--;
+ 
++	spin_lock_irq(&css_set_lock);
+ 	for (tcgrp = cgroup_parent(cgrp); tcgrp; tcgrp = cgroup_parent(tcgrp)) {
+ 		tcgrp->nr_descendants--;
+ 		tcgrp->nr_dying_descendants++;
+ 	}
++	spin_unlock_irq(&css_set_lock);
+ 
+ 	cgroup1_check_for_release(parent);
+ 
+diff --git a/kernel/irq_work.c b/kernel/irq_work.c
+index 6b7cdf17ccf8..73288914ed5e 100644
+--- a/kernel/irq_work.c
++++ b/kernel/irq_work.c
+@@ -56,61 +56,70 @@ void __weak arch_irq_work_raise(void)
+ 	 */
+ }
+ 
+-/*
+- * Enqueue the irq_work @work on @cpu unless it's already pending
+- * somewhere.
+- *
+- * Can be re-enqueued while the callback is still in progress.
+- */
+-bool irq_work_queue_on(struct irq_work *work, int cpu)
++/* Enqueue on current CPU, work must already be claimed and preempt disabled */
++static void __irq_work_queue_local(struct irq_work *work)
+ {
+-	/* All work should have been flushed before going offline */
+-	WARN_ON_ONCE(cpu_is_offline(cpu));
+-
+-#ifdef CONFIG_SMP
+-
+-	/* Arch remote IPI send/receive backend aren't NMI safe */
+-	WARN_ON_ONCE(in_nmi());
++	/* If the work is "lazy", handle it from next tick if any */
++	if (work->flags & IRQ_WORK_LAZY) {
++		if (llist_add(&work->llnode, this_cpu_ptr(&lazy_list)) &&
++		    tick_nohz_tick_stopped())
++			arch_irq_work_raise();
++	} else {
++		if (llist_add(&work->llnode, this_cpu_ptr(&raised_list)))
++			arch_irq_work_raise();
++	}
++}
+ 
++/* Enqueue the irq work @work on the current CPU */
++bool irq_work_queue(struct irq_work *work)
++{
+ 	/* Only queue if not already pending */
+ 	if (!irq_work_claim(work))
+ 		return false;
+ 
+-	if (llist_add(&work->llnode, &per_cpu(raised_list, cpu)))
+-		arch_send_call_function_single_ipi(cpu);
+-
+-#else /* #ifdef CONFIG_SMP */
+-	irq_work_queue(work);
+-#endif /* #else #ifdef CONFIG_SMP */
++	/* Queue the entry and raise the IPI if needed. */
++	preempt_disable();
++	__irq_work_queue_local(work);
++	preempt_enable();
+ 
+ 	return true;
+ }
++EXPORT_SYMBOL_GPL(irq_work_queue);
+ 
+-/* Enqueue the irq work @work on the current CPU */
+-bool irq_work_queue(struct irq_work *work)
++/*
++ * Enqueue the irq_work @work on @cpu unless it's already pending
++ * somewhere.
++ *
++ * Can be re-enqueued while the callback is still in progress.
++ */
++bool irq_work_queue_on(struct irq_work *work, int cpu)
+ {
++#ifndef CONFIG_SMP
++	return irq_work_queue(work);
++
++#else /* CONFIG_SMP: */
++	/* All work should have been flushed before going offline */
++	WARN_ON_ONCE(cpu_is_offline(cpu));
++
+ 	/* Only queue if not already pending */
+ 	if (!irq_work_claim(work))
+ 		return false;
+ 
+-	/* Queue the entry and raise the IPI if needed. */
+ 	preempt_disable();
+-
+-	/* If the work is "lazy", handle it from next tick if any */
+-	if (work->flags & IRQ_WORK_LAZY) {
+-		if (llist_add(&work->llnode, this_cpu_ptr(&lazy_list)) &&
+-		    tick_nohz_tick_stopped())
+-			arch_irq_work_raise();
++	if (cpu != smp_processor_id()) {
++		/* Arch remote IPI send/receive backend aren't NMI safe */
++		WARN_ON_ONCE(in_nmi());
++		if (llist_add(&work->llnode, &per_cpu(raised_list, cpu)))
++			arch_send_call_function_single_ipi(cpu);
+ 	} else {
+-		if (llist_add(&work->llnode, this_cpu_ptr(&raised_list)))
+-			arch_irq_work_raise();
++		__irq_work_queue_local(work);
+ 	}
+-
+ 	preempt_enable();
+ 
+ 	return true;
++#endif /* CONFIG_SMP */
+ }
+-EXPORT_SYMBOL_GPL(irq_work_queue);
++
+ 
+ bool irq_work_needs_cpu(void)
+ {
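Illustrative usage sketch (not part of the patch; the callback is invented) of the two entry points the refactor above reorganises: irq_work_queue_on() now routes the cpu == smp_processor_id() case through the same local helper as irq_work_queue():

#include <linux/irq_work.h>
#include <linux/printk.h>

static struct irq_work my_work;

static void my_irq_work_fn(struct irq_work *work)
{
	pr_info("irq_work ran on CPU %d\n", smp_processor_id());
}

static int __init example_init(void)
{
	init_irq_work(&my_work, my_irq_work_fn);

	irq_work_queue(&my_work);	/* current CPU, NMI-safe */
	irq_work_queue_on(&my_work, 1);	/* CPU 1; WARNs if offline */
	return 0;
}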
+diff --git a/kernel/jump_label.c b/kernel/jump_label.c
+index bad96b476eb6..a799b1ac6b2f 100644
+--- a/kernel/jump_label.c
++++ b/kernel/jump_label.c
+@@ -206,6 +206,8 @@ static void __static_key_slow_dec_cpuslocked(struct static_key *key,
+ 					   unsigned long rate_limit,
+ 					   struct delayed_work *work)
+ {
++	int val;
++
+ 	lockdep_assert_cpus_held();
+ 
+ 	/*
+@@ -215,17 +217,20 @@ static void __static_key_slow_dec_cpuslocked(struct static_key *key,
+ 	 * returns is unbalanced, because all other static_key_slow_inc()
+ 	 * instances block while the update is in progress.
+ 	 */
+-	if (!atomic_dec_and_mutex_lock(&key->enabled, &jump_label_mutex)) {
+-		WARN(atomic_read(&key->enabled) < 0,
+-		     "jump label: negative count!\n");
++	val = atomic_fetch_add_unless(&key->enabled, -1, 1);
++	if (val != 1) {
++		WARN(val < 0, "jump label: negative count!\n");
+ 		return;
+ 	}
+ 
+-	if (rate_limit) {
+-		atomic_inc(&key->enabled);
+-		schedule_delayed_work(work, rate_limit);
+-	} else {
+-		jump_label_update(key);
++	jump_label_lock();
++	if (atomic_dec_and_test(&key->enabled)) {
++		if (rate_limit) {
++			atomic_inc(&key->enabled);
++			schedule_delayed_work(work, rate_limit);
++		} else {
++			jump_label_update(key);
++		}
+ 	}
+ 	jump_label_unlock();
+ }
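Illustrative sketch (not part of the patch; names are invented) of the shape the jump_label hunk adopts: atomic_fetch_add_unless() decrements without the lock unless the count is about to hit the contended value, so only the final 1 -> 0 transition takes the mutex:

#include <linux/atomic.h>
#include <linux/mutex.h>

static atomic_t uses = ATOMIC_INIT(2);
static DEFINE_MUTEX(teardown_lock);

static void put_use(void)
{
	/* Decrements unless the count is 1; returns the old value. */
	int val = atomic_fetch_add_unless(&uses, -1, 1);

	if (val != 1) {
		WARN(val < 0, "negative count!\n");
		return;			/* fast path, no lock taken */
	}

	mutex_lock(&teardown_lock);
	if (atomic_dec_and_test(&uses)) {
		/* last user: expensive teardown runs under the lock */
	}
	mutex_unlock(&teardown_lock);
}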
+diff --git a/kernel/module.c b/kernel/module.c
+index 0b9aa8ab89f0..2b2845ae983e 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -1950,8 +1950,13 @@ void module_enable_ro(const struct module *mod, bool after_init)
+ 		return;
+ 
+ 	frob_text(&mod->core_layout, set_memory_ro);
++	frob_text(&mod->core_layout, set_memory_x);
++
+ 	frob_rodata(&mod->core_layout, set_memory_ro);
++
+ 	frob_text(&mod->init_layout, set_memory_ro);
++	frob_text(&mod->init_layout, set_memory_x);
++
+ 	frob_rodata(&mod->init_layout, set_memory_ro);
+ 
+ 	if (after_init)
+diff --git a/kernel/rcu/rcuperf.c b/kernel/rcu/rcuperf.c
+index c29761152874..7a6890b23c5f 100644
+--- a/kernel/rcu/rcuperf.c
++++ b/kernel/rcu/rcuperf.c
+@@ -494,6 +494,10 @@ rcu_perf_cleanup(void)
+ 
+ 	if (torture_cleanup_begin())
+ 		return;
++	if (!cur_ops) {
++		torture_cleanup_end();
++		return;
++	}
+ 
+ 	if (reader_tasks) {
+ 		for (i = 0; i < nrealreaders; i++)
+@@ -614,6 +618,7 @@ rcu_perf_init(void)
+ 		pr_cont("\n");
+ 		WARN_ON(!IS_MODULE(CONFIG_RCU_PERF_TEST));
+ 		firsterr = -EINVAL;
++		cur_ops = NULL;
+ 		goto unwind;
+ 	}
+ 	if (cur_ops->init)
+diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
+index f14d1b18a74f..a2efe27317be 100644
+--- a/kernel/rcu/rcutorture.c
++++ b/kernel/rcu/rcutorture.c
+@@ -2094,6 +2094,10 @@ rcu_torture_cleanup(void)
+ 			cur_ops->cb_barrier();
+ 		return;
+ 	}
++	if (!cur_ops) {
++		torture_cleanup_end();
++		return;
++	}
+ 
+ 	rcu_torture_barrier_cleanup();
+ 	torture_stop_kthread(rcu_torture_fwd_prog, fwd_prog_task);
+@@ -2267,6 +2271,7 @@ rcu_torture_init(void)
+ 		pr_cont("\n");
+ 		WARN_ON(!IS_MODULE(CONFIG_RCU_TORTURE_TEST));
+ 		firsterr = -EINVAL;
++		cur_ops = NULL;
+ 		goto unwind;
+ 	}
+ 	if (cur_ops->fqs == NULL && fqs_duration != 0) {
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 4778c48a7fda..a75ad50b5e2f 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -6559,6 +6559,8 @@ static void cpu_cgroup_attach(struct cgroup_taskset *tset)
+ static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
+ 				struct cftype *cftype, u64 shareval)
+ {
++	if (shareval > scale_load_down(ULONG_MAX))
++		shareval = MAX_SHARES;
+ 	return sched_group_set_shares(css_tg(css), scale_load(shareval));
+ }
+ 
+@@ -6661,8 +6663,10 @@ int tg_set_cfs_quota(struct task_group *tg, long cfs_quota_us)
+ 	period = ktime_to_ns(tg->cfs_bandwidth.period);
+ 	if (cfs_quota_us < 0)
+ 		quota = RUNTIME_INF;
+-	else
++	else if ((u64)cfs_quota_us <= U64_MAX / NSEC_PER_USEC)
+ 		quota = (u64)cfs_quota_us * NSEC_PER_USEC;
++	else
++		return -EINVAL;
+ 
+ 	return tg_set_cfs_bandwidth(tg, period, quota);
+ }
+@@ -6684,6 +6688,9 @@ int tg_set_cfs_period(struct task_group *tg, long cfs_period_us)
+ {
+ 	u64 quota, period;
+ 
++	if ((u64)cfs_period_us > U64_MAX / NSEC_PER_USEC)
++		return -EINVAL;
++
+ 	period = (u64)cfs_period_us * NSEC_PER_USEC;
+ 	quota = tg->cfs_bandwidth.quota;
+ 
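Illustrative standalone demo (not part of the patch) of the guard the sched/core.c hunks add: reject any microsecond input whose conversion to nanoseconds would wrap a u64:

#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_USEC 1000ULL

static int us_to_ns_checked(uint64_t us, uint64_t *ns)
{
	if (us > UINT64_MAX / NSEC_PER_USEC)
		return -1;		/* multiplication would overflow */
	*ns = us * NSEC_PER_USEC;
	return 0;
}

int main(void)
{
	uint64_t ns;

	printf("%d\n", us_to_ns_checked(1000000, &ns));		/* 0 */
	printf("%d\n", us_to_ns_checked(UINT64_MAX / 10, &ns));	/* -1 */
	return 0;
}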
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 35f3ea375084..232491e3ed0d 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -9551,22 +9551,26 @@ static inline int on_null_domain(struct rq *rq)
+  * - When one of the busy CPUs notice that there may be an idle rebalancing
+  *   needed, they will kick the idle load balancer, which then does idle
+  *   load balancing for all the idle CPUs.
++ * - HK_FLAG_MISC CPUs are used for this task, because HK_FLAG_SCHED is not
++ *   set anywhere yet.
+  */
+ 
+ static inline int find_new_ilb(void)
+ {
+-	int ilb = cpumask_first(nohz.idle_cpus_mask);
++	int ilb;
+ 
+-	if (ilb < nr_cpu_ids && idle_cpu(ilb))
+-		return ilb;
++	for_each_cpu_and(ilb, nohz.idle_cpus_mask,
++			      housekeeping_cpumask(HK_FLAG_MISC)) {
++		if (idle_cpu(ilb))
++			return ilb;
++	}
+ 
+ 	return nr_cpu_ids;
+ }
+ 
+ /*
+- * Kick a CPU to do the nohz balancing, if it is time for it. We pick the
+- * nohz_load_balancer CPU (if there is one) otherwise fallback to any idle
+- * CPU (if there is one).
++ * Kick a CPU to do the nohz balancing, if it is time for it. We pick any
++ * idle CPU in the HK_FLAG_MISC housekeeping set (if there is one).
+  */
+ static void kick_ilb(unsigned int flags)
+ {
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index 90fa23d36565..1e6b909dca36 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -2555,6 +2555,8 @@ int sched_group_set_rt_runtime(struct task_group *tg, long rt_runtime_us)
+ 	rt_runtime = (u64)rt_runtime_us * NSEC_PER_USEC;
+ 	if (rt_runtime_us < 0)
+ 		rt_runtime = RUNTIME_INF;
++	else if ((u64)rt_runtime_us > U64_MAX / NSEC_PER_USEC)
++		return -EINVAL;
+ 
+ 	return tg_set_rt_bandwidth(tg, rt_period, rt_runtime);
+ }
+@@ -2575,6 +2577,9 @@ int sched_group_set_rt_period(struct task_group *tg, u64 rt_period_us)
+ {
+ 	u64 rt_runtime, rt_period;
+ 
++	if (rt_period_us > U64_MAX / NSEC_PER_USEC)
++		return -EINVAL;
++
+ 	rt_period = rt_period_us * NSEC_PER_USEC;
+ 	rt_runtime = tg->rt_bandwidth.rt_runtime;
+ 
+diff --git a/kernel/time/time.c b/kernel/time/time.c
+index c3f756f8534b..86656bbac232 100644
+--- a/kernel/time/time.c
++++ b/kernel/time/time.c
+@@ -171,7 +171,7 @@ int do_sys_settimeofday64(const struct timespec64 *tv, const struct timezone *tz
+ 	static int firsttime = 1;
+ 	int error = 0;
+ 
+-	if (tv && !timespec64_valid(tv))
++	if (tv && !timespec64_valid_settod(tv))
+ 		return -EINVAL;
+ 
+ 	error = security_settime64(tv, tz);
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index f986e1918d12..f136c56c2805 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -1221,7 +1221,7 @@ int do_settimeofday64(const struct timespec64 *ts)
+ 	unsigned long flags;
+ 	int ret = 0;
+ 
+-	if (!timespec64_valid_strict(ts))
++	if (!timespec64_valid_settod(ts))
+ 		return -EINVAL;
+ 
+ 	raw_spin_lock_irqsave(&timekeeper_lock, flags);
+@@ -1278,7 +1278,7 @@ static int timekeeping_inject_offset(const struct timespec64 *ts)
+ 	/* Make sure the proposed value is valid */
+ 	tmp = timespec64_add(tk_xtime(tk), *ts);
+ 	if (timespec64_compare(&tk->wall_to_monotonic, ts) > 0 ||
+-	    !timespec64_valid_strict(&tmp)) {
++	    !timespec64_valid_settod(&tmp)) {
+ 		ret = -EINVAL;
+ 		goto error;
+ 	}
+@@ -1527,7 +1527,7 @@ void __init timekeeping_init(void)
+ 	unsigned long flags;
+ 
+ 	read_persistent_wall_and_boot_offset(&wall_time, &boot_offset);
+-	if (timespec64_valid_strict(&wall_time) &&
++	if (timespec64_valid_settod(&wall_time) &&
+ 	    timespec64_to_ns(&wall_time) > 0) {
+ 		persistent_clock_exists = true;
+ 	} else if (timespec64_to_ns(&wall_time) != 0) {
+diff --git a/kernel/trace/trace_branch.c b/kernel/trace/trace_branch.c
+index 4ad967453b6f..3ea65cdff30d 100644
+--- a/kernel/trace/trace_branch.c
++++ b/kernel/trace/trace_branch.c
+@@ -205,6 +205,8 @@ void trace_likely_condition(struct ftrace_likely_data *f, int val, int expect)
+ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
+ 			  int expect, int is_constant)
+ {
++	unsigned long flags = user_access_save();
++
+ 	/* A constant is always correct */
+ 	if (is_constant) {
+ 		f->constant++;
+@@ -223,6 +225,8 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
+ 		f->data.correct++;
+ 	else
+ 		f->data.incorrect++;
++
++	user_access_restore(flags);
+ }
+ EXPORT_SYMBOL(ftrace_likely_update);
+ 
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 795aa2038377..0a200d42fa96 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -3543,14 +3543,20 @@ static bool cond_snapshot_update(struct trace_array *tr, void *cond_data)
+ 	struct track_data *track_data = tr->cond_snapshot->cond_data;
+ 	struct hist_elt_data *elt_data, *track_elt_data;
+ 	struct snapshot_context *context = cond_data;
++	struct action_data *action;
+ 	u64 track_val;
+ 
+ 	if (!track_data)
+ 		return false;
+ 
++	action = track_data->action_data;
++
+ 	track_val = get_track_val(track_data->hist_data, context->elt,
+ 				  track_data->action_data);
+ 
++	if (!action->track_data.check_val(track_data->track_val, track_val))
++		return false;
++
+ 	track_data->track_val = track_val;
+ 	memcpy(track_data->key, context->key, track_data->key_len);
+ 
+diff --git a/lib/kobject_uevent.c b/lib/kobject_uevent.c
+index f05802687ba4..7998affa45d4 100644
+--- a/lib/kobject_uevent.c
++++ b/lib/kobject_uevent.c
+@@ -466,6 +466,13 @@ int kobject_uevent_env(struct kobject *kobj, enum kobject_action action,
+ 	int i = 0;
+ 	int retval = 0;
+ 
++	/*
++	 * Mark the "remove" event done regardless of result, because some
++	 * subsystems do not want to re-trigger "remove" via automatic cleanup.
++	 */
++	if (action == KOBJ_REMOVE)
++		kobj->state_remove_uevent_sent = 1;
++
+ 	pr_debug("kobject: '%s' (%p): %s\n",
+ 		 kobject_name(kobj), kobj, __func__);
+ 
+@@ -567,10 +574,6 @@ int kobject_uevent_env(struct kobject *kobj, enum kobject_action action,
+ 		kobj->state_add_uevent_sent = 1;
+ 		break;
+ 
+-	case KOBJ_REMOVE:
+-		kobj->state_remove_uevent_sent = 1;
+-		break;
+-
+ 	case KOBJ_UNBIND:
+ 		zap_modalias_env(env);
+ 		break;
+diff --git a/lib/sbitmap.c b/lib/sbitmap.c
+index 155fe38756ec..4a7fc4915dfc 100644
+--- a/lib/sbitmap.c
++++ b/lib/sbitmap.c
+@@ -435,7 +435,7 @@ static void sbitmap_queue_update_wake_batch(struct sbitmap_queue *sbq,
+ 		 * to ensure that the batch size is updated before the wait
+ 		 * counts.
+ 		 */
+-		smp_mb__before_atomic();
++		smp_mb();
+ 		for (i = 0; i < SBQ_WAIT_QUEUES; i++)
+ 			atomic_set(&sbq->ws[i].wait_cnt, 1);
+ 	}
+diff --git a/lib/strncpy_from_user.c b/lib/strncpy_from_user.c
+index 58eacd41526c..023ba9f3b99f 100644
+--- a/lib/strncpy_from_user.c
++++ b/lib/strncpy_from_user.c
+@@ -23,10 +23,11 @@
+  * hit it), 'max' is the address space maximum (and we return
+  * -EFAULT if we hit it).
+  */
+-static inline long do_strncpy_from_user(char *dst, const char __user *src, long count, unsigned long max)
++static inline long do_strncpy_from_user(char *dst, const char __user *src,
++					unsigned long count, unsigned long max)
+ {
+ 	const struct word_at_a_time constants = WORD_AT_A_TIME_CONSTANTS;
+-	long res = 0;
++	unsigned long res = 0;
+ 
+ 	/*
+ 	 * Truncate 'max' to the user-specified limit, so that
+diff --git a/lib/strnlen_user.c b/lib/strnlen_user.c
+index 1c1a1b0e38a5..7f2db3fe311f 100644
+--- a/lib/strnlen_user.c
++++ b/lib/strnlen_user.c
+@@ -28,7 +28,7 @@
+ static inline long do_strnlen_user(const char __user *src, unsigned long count, unsigned long max)
+ {
+ 	const struct word_at_a_time constants = WORD_AT_A_TIME_CONSTANTS;
+-	long align, res = 0;
++	unsigned long align, res = 0;
+ 	unsigned long c;
+ 
+ 	/*
+@@ -42,7 +42,7 @@ static inline long do_strnlen_user(const char __user *src, unsigned long count,
+ 	 * Do everything aligned. But that means that we
+ 	 * need to also expand the maximum..
+ 	 */
+-	align = (sizeof(long) - 1) & (unsigned long)src;
++	align = (sizeof(unsigned long) - 1) & (unsigned long)src;
+ 	src -= align;
+ 	max += align;
+ 
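Illustrative standalone demo (not part of the patch) of the bug class the two lib/ hunks close: with a signed "long count", a huge user-supplied size becomes negative and range comparisons misbehave, whereas unsigned arithmetic keeps the bounds checks monotonic. The (long) conversion below is implementation-defined but yields -1 on real platforms:

#include <stdio.h>
#include <limits.h>

int main(void)
{
	unsigned long huge = ULONG_MAX;	/* e.g. a userspace-passed size */
	long s = (long)huge;		/* -1 on two's-complement targets */
	unsigned long u = huge;

	printf("signed:   %d\n", s < 16);	/* 1: looks "small" */
	printf("unsigned: %d\n", u < 16);	/* 0: correctly huge */
	return 0;
}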
+diff --git a/net/batman-adv/distributed-arp-table.c b/net/batman-adv/distributed-arp-table.c
+index 310a4f353008..8d290da0d596 100644
+--- a/net/batman-adv/distributed-arp-table.c
++++ b/net/batman-adv/distributed-arp-table.c
+@@ -1444,7 +1444,6 @@ bool batadv_dat_snoop_incoming_arp_reply(struct batadv_priv *bat_priv,
+ 			   hw_src, &ip_src, hw_dst, &ip_dst,
+ 			   dat_entry->mac_addr,	&dat_entry->ip);
+ 		dropped = true;
+-		goto out;
+ 	}
+ 
+ 	/* Update our internal cache with both the IP addresses the node got
+@@ -1453,6 +1452,9 @@ bool batadv_dat_snoop_incoming_arp_reply(struct batadv_priv *bat_priv,
+ 	batadv_dat_entry_add(bat_priv, ip_src, hw_src, vid);
+ 	batadv_dat_entry_add(bat_priv, ip_dst, hw_dst, vid);
+ 
++	if (dropped)
++		goto out;
++
+ 	/* If BLA is enabled, only forward ARP replies if we have claimed the
+ 	 * source of the ARP reply or if no one else of the same backbone has
+ 	 * already claimed that client. This prevents that different gateways
+diff --git a/net/batman-adv/main.c b/net/batman-adv/main.c
+index 75750870cf04..f8725786b596 100644
+--- a/net/batman-adv/main.c
++++ b/net/batman-adv/main.c
+@@ -161,6 +161,7 @@ int batadv_mesh_init(struct net_device *soft_iface)
+ 	spin_lock_init(&bat_priv->tt.commit_lock);
+ 	spin_lock_init(&bat_priv->gw.list_lock);
+ #ifdef CONFIG_BATMAN_ADV_MCAST
++	spin_lock_init(&bat_priv->mcast.mla_lock);
+ 	spin_lock_init(&bat_priv->mcast.want_lists_lock);
+ #endif
+ 	spin_lock_init(&bat_priv->tvlv.container_list_lock);
+diff --git a/net/batman-adv/multicast.c b/net/batman-adv/multicast.c
+index f91b1b6265cf..1b985ab89c08 100644
+--- a/net/batman-adv/multicast.c
++++ b/net/batman-adv/multicast.c
+@@ -325,8 +325,6 @@ static void batadv_mcast_mla_list_free(struct hlist_head *mcast_list)
+  * translation table except the ones listed in the given mcast_list.
+  *
+  * If mcast_list is NULL then all are retracted.
+- *
+- * Do not call outside of the mcast worker! (or cancel mcast worker first)
+  */
+ static void batadv_mcast_mla_tt_retract(struct batadv_priv *bat_priv,
+ 					struct hlist_head *mcast_list)
+@@ -334,8 +332,6 @@ static void batadv_mcast_mla_tt_retract(struct batadv_priv *bat_priv,
+ 	struct batadv_hw_addr *mcast_entry;
+ 	struct hlist_node *tmp;
+ 
+-	WARN_ON(delayed_work_pending(&bat_priv->mcast.work));
+-
+ 	hlist_for_each_entry_safe(mcast_entry, tmp, &bat_priv->mcast.mla_list,
+ 				  list) {
+ 		if (mcast_list &&
+@@ -359,8 +355,6 @@ static void batadv_mcast_mla_tt_retract(struct batadv_priv *bat_priv,
+  *
+  * Adds multicast listener announcements from the given mcast_list to the
+  * translation table if they have not been added yet.
+- *
+- * Do not call outside of the mcast worker! (or cancel mcast worker first)
+  */
+ static void batadv_mcast_mla_tt_add(struct batadv_priv *bat_priv,
+ 				    struct hlist_head *mcast_list)
+@@ -368,8 +362,6 @@ static void batadv_mcast_mla_tt_add(struct batadv_priv *bat_priv,
+ 	struct batadv_hw_addr *mcast_entry;
+ 	struct hlist_node *tmp;
+ 
+-	WARN_ON(delayed_work_pending(&bat_priv->mcast.work));
+-
+ 	if (!mcast_list)
+ 		return;
+ 
+@@ -658,7 +650,10 @@ static void batadv_mcast_mla_update(struct work_struct *work)
+ 	priv_mcast = container_of(delayed_work, struct batadv_priv_mcast, work);
+ 	bat_priv = container_of(priv_mcast, struct batadv_priv, mcast);
+ 
++	spin_lock(&bat_priv->mcast.mla_lock);
+ 	__batadv_mcast_mla_update(bat_priv);
++	spin_unlock(&bat_priv->mcast.mla_lock);
++
+ 	batadv_mcast_start_timer(bat_priv);
+ }
+ 
+diff --git a/net/batman-adv/types.h b/net/batman-adv/types.h
+index a21b34ed6548..ed0f6a519de5 100644
+--- a/net/batman-adv/types.h
++++ b/net/batman-adv/types.h
+@@ -1223,6 +1223,11 @@ struct batadv_priv_mcast {
+ 	/** @bridged: whether the soft interface has a bridge on top */
+ 	unsigned char bridged:1;
+ 
++	/**
++	 * @mla_lock: a lock protecting mla_list and mla_flags
++	 */
++	spinlock_t mla_lock;
++
+ 	/**
+ 	 * @num_want_all_unsnoopables: number of nodes wanting unsnoopable IP
+ 	 *  traffic
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index d6b2540ba7f8..f275c9905650 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -4383,6 +4383,9 @@ void hci_req_cmd_complete(struct hci_dev *hdev, u16 opcode, u8 status,
+ 		return;
+ 	}
+ 
++	/* If we reach this point this event matches the last command sent */
++	hci_dev_clear_flag(hdev, HCI_CMD_PENDING);
++
+ 	/* If the command succeeded and there's still more commands in
+ 	 * this request the request is not yet complete.
+ 	 */
+@@ -4493,6 +4496,8 @@ static void hci_cmd_work(struct work_struct *work)
+ 
+ 		hdev->sent_cmd = skb_clone(skb, GFP_KERNEL);
+ 		if (hdev->sent_cmd) {
++			if (hci_req_status_pend(hdev))
++				hci_dev_set_flag(hdev, HCI_CMD_PENDING);
+ 			atomic_dec(&hdev->cmd_cnt);
+ 			hci_send_frame(hdev, skb);
+ 			if (test_bit(HCI_RESET, &hdev->flags))
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 609fd6871c5a..8b893baf9bbe 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -3404,6 +3404,12 @@ static void hci_cmd_complete_evt(struct hci_dev *hdev, struct sk_buff *skb,
+ 	hci_req_cmd_complete(hdev, *opcode, *status, req_complete,
+ 			     req_complete_skb);
+ 
++	if (hci_dev_test_flag(hdev, HCI_CMD_PENDING)) {
++		bt_dev_err(hdev,
++			   "unexpected event for opcode 0x%4.4x", *opcode);
++		return;
++	}
++
+ 	if (atomic_read(&hdev->cmd_cnt) && !skb_queue_empty(&hdev->cmd_q))
+ 		queue_work(hdev->workqueue, &hdev->cmd_work);
+ }
+@@ -3511,6 +3517,12 @@ static void hci_cmd_status_evt(struct hci_dev *hdev, struct sk_buff *skb,
+ 		hci_req_cmd_complete(hdev, *opcode, ev->status, req_complete,
+ 				     req_complete_skb);
+ 
++	if (hci_dev_test_flag(hdev, HCI_CMD_PENDING)) {
++		bt_dev_err(hdev,
++			   "unexpected event for opcode 0x%4.4x", *opcode);
++		return;
++	}
++
+ 	if (atomic_read(&hdev->cmd_cnt) && !skb_queue_empty(&hdev->cmd_q))
+ 		queue_work(hdev->workqueue, &hdev->cmd_work);
+ }
+diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
+index ca73d36cc149..e9a95ed65491 100644
+--- a/net/bluetooth/hci_request.c
++++ b/net/bluetooth/hci_request.c
+@@ -46,6 +46,11 @@ void hci_req_purge(struct hci_request *req)
+ 	skb_queue_purge(&req->cmd_q);
+ }
+ 
++bool hci_req_status_pend(struct hci_dev *hdev)
++{
++	return hdev->req_status == HCI_REQ_PEND;
++}
++
+ static int req_run(struct hci_request *req, hci_req_complete_t complete,
+ 		   hci_req_complete_skb_t complete_skb)
+ {
+diff --git a/net/bluetooth/hci_request.h b/net/bluetooth/hci_request.h
+index 692cc8b13368..55b2050cc9ff 100644
+--- a/net/bluetooth/hci_request.h
++++ b/net/bluetooth/hci_request.h
+@@ -37,6 +37,7 @@ struct hci_request {
+ 
+ void hci_req_init(struct hci_request *req, struct hci_dev *hdev);
+ void hci_req_purge(struct hci_request *req);
++bool hci_req_status_pend(struct hci_dev *hdev);
+ int hci_req_run(struct hci_request *req, hci_req_complete_t complete);
+ int hci_req_run_skb(struct hci_request *req, hci_req_complete_skb_t complete);
+ void hci_req_add(struct hci_request *req, u16 opcode, u32 plen,
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 2dbcf5d5512e..b7a9fe3d5fcb 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -1188,9 +1188,6 @@ static void ieee80211_chswitch_work(struct work_struct *work)
+ 		goto out;
+ 	}
+ 
+-	/* XXX: shouldn't really modify cfg80211-owned data! */
+-	ifmgd->associated->channel = sdata->csa_chandef.chan;
+-
+ 	ifmgd->csa_waiting_bcn = true;
+ 
+ 	ieee80211_sta_reset_beacon_monitor(sdata);
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index d7f61b0547c6..d2715b4d2e72 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -1254,7 +1254,7 @@ static int ctnetlink_del_conntrack(struct net *net, struct sock *ctnl,
+ 	struct nf_conntrack_tuple tuple;
+ 	struct nf_conn *ct;
+ 	struct nfgenmsg *nfmsg = nlmsg_data(nlh);
+-	u_int8_t u3 = nfmsg->nfgen_family;
++	u_int8_t u3 = nfmsg->version ? nfmsg->nfgen_family : AF_UNSPEC;
+ 	struct nf_conntrack_zone zone;
+ 	int err;
+ 
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 47e30a58566c..d2a7459a5da4 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -15727,6 +15727,11 @@ void cfg80211_ch_switch_notify(struct net_device *dev,
+ 
+ 	wdev->chandef = *chandef;
+ 	wdev->preset_chandef = *chandef;
++
++	if (wdev->iftype == NL80211_IFTYPE_STATION &&
++	    !WARN_ON(!wdev->current_bss))
++		wdev->current_bss->pub.channel = chandef->chan;
++
+ 	nl80211_ch_switch_notify(rdev, dev, chandef, GFP_KERNEL,
+ 				 NL80211_CMD_CH_SWITCH_NOTIFY, 0);
+ }
+diff --git a/samples/bpf/asm_goto_workaround.h b/samples/bpf/asm_goto_workaround.h
+index 5cd7c1d1a5d5..7409722727ca 100644
+--- a/samples/bpf/asm_goto_workaround.h
++++ b/samples/bpf/asm_goto_workaround.h
+@@ -13,4 +13,5 @@
+ #define asm_volatile_goto(x...) asm volatile("invalid use of asm_volatile_goto")
+ #endif
+ 
++#define volatile(x...) volatile("")
+ #endif
+diff --git a/security/selinux/netlabel.c b/security/selinux/netlabel.c
+index 186e727b737b..6fd9954e1c08 100644
+--- a/security/selinux/netlabel.c
++++ b/security/selinux/netlabel.c
+@@ -288,11 +288,8 @@ int selinux_netlbl_sctp_assoc_request(struct sctp_endpoint *ep,
+ 	int rc;
+ 	struct netlbl_lsm_secattr secattr;
+ 	struct sk_security_struct *sksec = ep->base.sk->sk_security;
+-	struct sockaddr *addr;
+ 	struct sockaddr_in addr4;
+-#if IS_ENABLED(CONFIG_IPV6)
+ 	struct sockaddr_in6 addr6;
+-#endif
+ 
+ 	if (ep->base.sk->sk_family != PF_INET &&
+ 				ep->base.sk->sk_family != PF_INET6)
+@@ -310,16 +307,15 @@ int selinux_netlbl_sctp_assoc_request(struct sctp_endpoint *ep,
+ 	if (ip_hdr(skb)->version == 4) {
+ 		addr4.sin_family = AF_INET;
+ 		addr4.sin_addr.s_addr = ip_hdr(skb)->saddr;
+-		addr = (struct sockaddr *)&addr4;
+-#if IS_ENABLED(CONFIG_IPV6)
+-	} else {
++		rc = netlbl_conn_setattr(ep->base.sk, (void *)&addr4, &secattr);
++	} else if (IS_ENABLED(CONFIG_IPV6) && ip_hdr(skb)->version == 6) {
+ 		addr6.sin6_family = AF_INET6;
+ 		addr6.sin6_addr = ipv6_hdr(skb)->saddr;
+-		addr = (struct sockaddr *)&addr6;
+-#endif
++		rc = netlbl_conn_setattr(ep->base.sk, (void *)&addr6, &secattr);
++	} else {
++		rc = -EAFNOSUPPORT;
+ 	}
+ 
+-	rc = netlbl_conn_setattr(ep->base.sk, addr, &secattr);
+ 	if (rc == 0)
+ 		sksec->nlbl_state = NLBL_LABELED;
+ 
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 701a69d856f5..b20eb7fc83eb 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -832,7 +832,13 @@ static int snd_hda_codec_dev_free(struct snd_device *device)
+ 	struct hda_codec *codec = device->device_data;
+ 
+ 	codec->in_freeing = 1;
+-	snd_hdac_device_unregister(&codec->core);
++	/*
++	 * snd_hda_codec_device_new() is used by legacy HDA and ASoC driver.
++	 * We can't unregister ASoC device since it will be unregistered in
++	 * snd_hdac_ext_bus_device_remove().
++	 */
++	if (codec->core.type == HDA_DEV_LEGACY)
++		snd_hdac_device_unregister(&codec->core);
+ 	codec_display_power(codec, false);
+ 	put_device(hda_codec_dev(codec));
+ 	return 0;
+diff --git a/sound/soc/codecs/hdmi-codec.c b/sound/soc/codecs/hdmi-codec.c
+index 35df73e42cbc..fb2f0ac1f16f 100644
+--- a/sound/soc/codecs/hdmi-codec.c
++++ b/sound/soc/codecs/hdmi-codec.c
+@@ -439,8 +439,12 @@ static int hdmi_codec_startup(struct snd_pcm_substream *substream,
+ 		if (!ret) {
+ 			ret = snd_pcm_hw_constraint_eld(substream->runtime,
+ 							hcp->eld);
+-			if (ret)
++			if (ret) {
++				mutex_lock(&hcp->current_stream_lock);
++				hcp->current_stream = NULL;
++				mutex_unlock(&hcp->current_stream_lock);
+ 				return ret;
++			}
+ 		}
+ 		/* Select chmap supported */
+ 		hdmi_codec_eld_chmap(hcp);
+diff --git a/sound/soc/codecs/wcd9335.c b/sound/soc/codecs/wcd9335.c
+index 981f88a5f615..a04a7cedd99d 100644
+--- a/sound/soc/codecs/wcd9335.c
++++ b/sound/soc/codecs/wcd9335.c
+@@ -5188,6 +5188,7 @@ static int wcd9335_slim_status(struct slim_device *sdev,
+ 
+ 	wcd->slim = sdev;
+ 	wcd->slim_ifc_dev = of_slim_get_device(sdev->ctrl, ifc_dev_np);
++	of_node_put(ifc_dev_np);
+ 	if (!wcd->slim_ifc_dev) {
+ 		dev_err(dev, "Unable to get SLIM Interface device\n");
+ 		return -EINVAL;
+diff --git a/sound/soc/fsl/Kconfig b/sound/soc/fsl/Kconfig
+index 7b1d9970be8b..1f65cf555ebe 100644
+--- a/sound/soc/fsl/Kconfig
++++ b/sound/soc/fsl/Kconfig
+@@ -182,16 +182,17 @@ config SND_MPC52xx_SOC_EFIKA
+ 
+ endif # SND_POWERPC_SOC
+ 
++config SND_SOC_IMX_PCM_FIQ
++	tristate
++	default y if SND_SOC_IMX_SSI=y && (SND_SOC_FSL_SSI=m || SND_SOC_FSL_SPDIF=m) && (MXC_TZIC || MXC_AVIC)
++	select FIQ
++
+ if SND_IMX_SOC
+ 
+ config SND_SOC_IMX_SSI
+ 	tristate
+ 	select SND_SOC_FSL_UTILS
+ 
+-config SND_SOC_IMX_PCM_FIQ
+-	tristate
+-	select FIQ
+-
+ comment "SoC Audio support for Freescale i.MX boards:"
+ 
+ config SND_MXC_SOC_WM1133_EV1
+diff --git a/sound/soc/fsl/eukrea-tlv320.c b/sound/soc/fsl/eukrea-tlv320.c
+index 191426a6d9ad..30a3d68b5c03 100644
+--- a/sound/soc/fsl/eukrea-tlv320.c
++++ b/sound/soc/fsl/eukrea-tlv320.c
+@@ -118,13 +118,13 @@ static int eukrea_tlv320_probe(struct platform_device *pdev)
+ 		if (ret) {
+ 			dev_err(&pdev->dev,
+ 				"fsl,mux-int-port node missing or invalid.\n");
+-			return ret;
++			goto err;
+ 		}
+ 		ret = of_property_read_u32(np, "fsl,mux-ext-port", &ext_port);
+ 		if (ret) {
+ 			dev_err(&pdev->dev,
+ 				"fsl,mux-ext-port node missing or invalid.\n");
+-			return ret;
++			goto err;
+ 		}
+ 
+ 		/*
+diff --git a/sound/soc/fsl/fsl_sai.c b/sound/soc/fsl/fsl_sai.c
+index db9e0872f73d..7549b74e464e 100644
+--- a/sound/soc/fsl/fsl_sai.c
++++ b/sound/soc/fsl/fsl_sai.c
+@@ -268,12 +268,14 @@ static int fsl_sai_set_dai_fmt_tr(struct snd_soc_dai *cpu_dai,
+ 	case SND_SOC_DAIFMT_CBS_CFS:
+ 		val_cr2 |= FSL_SAI_CR2_BCD_MSTR;
+ 		val_cr4 |= FSL_SAI_CR4_FSD_MSTR;
++		sai->is_slave_mode = false;
+ 		break;
+ 	case SND_SOC_DAIFMT_CBM_CFM:
+ 		sai->is_slave_mode = true;
+ 		break;
+ 	case SND_SOC_DAIFMT_CBS_CFM:
+ 		val_cr2 |= FSL_SAI_CR2_BCD_MSTR;
++		sai->is_slave_mode = false;
+ 		break;
+ 	case SND_SOC_DAIFMT_CBM_CFS:
+ 		val_cr4 |= FSL_SAI_CR4_FSD_MSTR;
+diff --git a/sound/soc/fsl/fsl_utils.c b/sound/soc/fsl/fsl_utils.c
+index 9981668ab590..040d06b89f00 100644
+--- a/sound/soc/fsl/fsl_utils.c
++++ b/sound/soc/fsl/fsl_utils.c
+@@ -71,6 +71,7 @@ int fsl_asoc_get_dma_channel(struct device_node *ssi_np,
+ 	iprop = of_get_property(dma_np, "cell-index", NULL);
+ 	if (!iprop) {
+ 		of_node_put(dma_np);
++		of_node_put(dma_channel_np);
+ 		return -EINVAL;
+ 	}
+ 	*dma_id = be32_to_cpup(iprop);
+diff --git a/sound/soc/intel/boards/kbl_da7219_max98357a.c b/sound/soc/intel/boards/kbl_da7219_max98357a.c
+index 38f6ab74709d..07491a0f8fb8 100644
+--- a/sound/soc/intel/boards/kbl_da7219_max98357a.c
++++ b/sound/soc/intel/boards/kbl_da7219_max98357a.c
+@@ -188,7 +188,7 @@ static int kabylake_da7219_codec_init(struct snd_soc_pcm_runtime *rtd)
+ 
+ 	jack = &ctx->kabylake_headset;
+ 
+-	snd_jack_set_key(jack->jack, SND_JACK_BTN_0, KEY_MEDIA);
++	snd_jack_set_key(jack->jack, SND_JACK_BTN_0, KEY_PLAYPAUSE);
+ 	snd_jack_set_key(jack->jack, SND_JACK_BTN_1, KEY_VOLUMEUP);
+ 	snd_jack_set_key(jack->jack, SND_JACK_BTN_2, KEY_VOLUMEDOWN);
+ 	snd_jack_set_key(jack->jack, SND_JACK_BTN_3, KEY_VOICECOMMAND);
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index 46e3ab0fced4..fe99b02bbf17 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -2828,10 +2828,21 @@ EXPORT_SYMBOL_GPL(snd_soc_register_card);
+ 
+ static void snd_soc_unbind_card(struct snd_soc_card *card, bool unregister)
+ {
++	struct snd_soc_pcm_runtime *rtd;
++	int order;
++
+ 	if (card->instantiated) {
+ 		card->instantiated = false;
+ 		snd_soc_dapm_shutdown(card);
+ 		snd_soc_flush_all_delayed_work(card);
++
++		/* remove all components used by DAI links on this card */
++		for_each_comp_order(order) {
++			for_each_card_rtds(card, rtd) {
++				soc_remove_link_components(card, rtd, order);
++			}
++		}
++
+ 		soc_cleanup_card_resources(card);
+ 		if (!unregister)
+ 			list_add(&card->list, &unbind_card_list);
+diff --git a/sound/soc/ti/Kconfig b/sound/soc/ti/Kconfig
+index 4bf3c15d4e51..ee7c202c69b7 100644
+--- a/sound/soc/ti/Kconfig
++++ b/sound/soc/ti/Kconfig
+@@ -21,8 +21,8 @@ config SND_SOC_DAVINCI_ASP
+ 
+ config SND_SOC_DAVINCI_MCASP
+ 	tristate "Multichannel Audio Serial Port (McASP) support"
+-	select SND_SOC_TI_EDMA_PCM if TI_EDMA
+-	select SND_SOC_TI_SDMA_PCM if DMA_OMAP
++	select SND_SOC_TI_EDMA_PCM
++	select SND_SOC_TI_SDMA_PCM
+ 	help
+ 	  Say Y or M here if you want to have support for McASP IP found in
+ 	  various Texas Instruments SoCs like:
+diff --git a/sound/soc/ti/davinci-mcasp.c b/sound/soc/ti/davinci-mcasp.c
+index a3a67a8f0f54..9fbc759fdefe 100644
+--- a/sound/soc/ti/davinci-mcasp.c
++++ b/sound/soc/ti/davinci-mcasp.c
+@@ -45,6 +45,7 @@
+ 
+ #define MCASP_MAX_AFIFO_DEPTH	64
+ 
++#ifdef CONFIG_PM
+ static u32 context_regs[] = {
+ 	DAVINCI_MCASP_TXFMCTL_REG,
+ 	DAVINCI_MCASP_RXFMCTL_REG,
+@@ -68,6 +69,7 @@ struct davinci_mcasp_context {
+ 	u32	*xrsr_regs; /* for serializer configuration */
+ 	bool	pm_state;
+ };
++#endif
+ 
+ struct davinci_mcasp_ruledata {
+ 	struct davinci_mcasp *mcasp;
+diff --git a/tools/bpf/bpftool/.gitignore b/tools/bpf/bpftool/.gitignore
+index 67167e44b726..8248b8dd89d4 100644
+--- a/tools/bpf/bpftool/.gitignore
++++ b/tools/bpf/bpftool/.gitignore
+@@ -1,5 +1,5 @@
+ *.d
+-bpftool
++/bpftool
+ bpftool*.8
+ bpf-helpers.*
+ FEATURE-DUMP.bpftool
+diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
+index 9cd015574e83..d82edadf7589 100644
+--- a/tools/lib/bpf/bpf.c
++++ b/tools/lib/bpf/bpf.c
+@@ -46,6 +46,8 @@
+ #  define __NR_bpf 349
+ # elif defined(__s390__)
+ #  define __NR_bpf 351
++# elif defined(__arc__)
++#  define __NR_bpf 280
+ # else
+ #  error __NR_bpf not defined. libbpf does not support your arch.
+ # endif
+diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
+index 6ffdd79bea89..6dc1f418034f 100644
+--- a/tools/lib/bpf/bpf.h
++++ b/tools/lib/bpf/bpf.h
+@@ -26,6 +26,7 @@
+ #include <linux/bpf.h>
+ #include <stdbool.h>
+ #include <stddef.h>
++#include <stdint.h>
+ 
+ #ifdef __cplusplus
+ extern "C" {
+diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
+index 8d0078b65486..af5f310ecca1 100644
+--- a/tools/lib/bpf/xsk.c
++++ b/tools/lib/bpf/xsk.c
+@@ -248,8 +248,7 @@ int xsk_umem__create(struct xsk_umem **umem_ptr, void *umem_area, __u64 size,
+ 	return 0;
+ 
+ out_mmap:
+-	munmap(umem->fill,
+-	       off.fr.desc + umem->config.fill_size * sizeof(__u64));
++	munmap(map, off.fr.desc + umem->config.fill_size * sizeof(__u64));
+ out_socket:
+ 	close(umem->fd);
+ out_umem_alloc:
+@@ -523,11 +522,11 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
+ 		       struct xsk_ring_cons *rx, struct xsk_ring_prod *tx,
+ 		       const struct xsk_socket_config *usr_config)
+ {
++	void *rx_map = NULL, *tx_map = NULL;
+ 	struct sockaddr_xdp sxdp = {};
+ 	struct xdp_mmap_offsets off;
+ 	struct xsk_socket *xsk;
+ 	socklen_t optlen;
+-	void *map;
+ 	int err;
+ 
+ 	if (!umem || !xsk_ptr || !rx || !tx)
+@@ -593,40 +592,40 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
+ 	}
+ 
+ 	if (rx) {
+-		map = xsk_mmap(NULL, off.rx.desc +
+-			       xsk->config.rx_size * sizeof(struct xdp_desc),
+-			       PROT_READ | PROT_WRITE,
+-			       MAP_SHARED | MAP_POPULATE,
+-			       xsk->fd, XDP_PGOFF_RX_RING);
+-		if (map == MAP_FAILED) {
++		rx_map = xsk_mmap(NULL, off.rx.desc +
++				  xsk->config.rx_size * sizeof(struct xdp_desc),
++				  PROT_READ | PROT_WRITE,
++				  MAP_SHARED | MAP_POPULATE,
++				  xsk->fd, XDP_PGOFF_RX_RING);
++		if (rx_map == MAP_FAILED) {
+ 			err = -errno;
+ 			goto out_socket;
+ 		}
+ 
+ 		rx->mask = xsk->config.rx_size - 1;
+ 		rx->size = xsk->config.rx_size;
+-		rx->producer = map + off.rx.producer;
+-		rx->consumer = map + off.rx.consumer;
+-		rx->ring = map + off.rx.desc;
++		rx->producer = rx_map + off.rx.producer;
++		rx->consumer = rx_map + off.rx.consumer;
++		rx->ring = rx_map + off.rx.desc;
+ 	}
+ 	xsk->rx = rx;
+ 
+ 	if (tx) {
+-		map = xsk_mmap(NULL, off.tx.desc +
+-			       xsk->config.tx_size * sizeof(struct xdp_desc),
+-			       PROT_READ | PROT_WRITE,
+-			       MAP_SHARED | MAP_POPULATE,
+-			       xsk->fd, XDP_PGOFF_TX_RING);
+-		if (map == MAP_FAILED) {
++		tx_map = xsk_mmap(NULL, off.tx.desc +
++				  xsk->config.tx_size * sizeof(struct xdp_desc),
++				  PROT_READ | PROT_WRITE,
++				  MAP_SHARED | MAP_POPULATE,
++				  xsk->fd, XDP_PGOFF_TX_RING);
++		if (tx_map == MAP_FAILED) {
+ 			err = -errno;
+ 			goto out_mmap_rx;
+ 		}
+ 
+ 		tx->mask = xsk->config.tx_size - 1;
+ 		tx->size = xsk->config.tx_size;
+-		tx->producer = map + off.tx.producer;
+-		tx->consumer = map + off.tx.consumer;
+-		tx->ring = map + off.tx.desc;
++		tx->producer = tx_map + off.tx.producer;
++		tx->consumer = tx_map + off.tx.consumer;
++		tx->ring = tx_map + off.tx.desc;
+ 		tx->cached_cons = xsk->config.tx_size;
+ 	}
+ 	xsk->tx = tx;
+@@ -653,13 +652,11 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
+ 
+ out_mmap_tx:
+ 	if (tx)
+-		munmap(xsk->tx,
+-		       off.tx.desc +
++		munmap(tx_map, off.tx.desc +
+ 		       xsk->config.tx_size * sizeof(struct xdp_desc));
+ out_mmap_rx:
+ 	if (rx)
+-		munmap(xsk->rx,
+-		       off.rx.desc +
++		munmap(rx_map, off.rx.desc +
+ 		       xsk->config.rx_size * sizeof(struct xdp_desc));
+ out_socket:
+ 	if (--umem->refcount)
+@@ -684,10 +681,12 @@ int xsk_umem__delete(struct xsk_umem *umem)
+ 	optlen = sizeof(off);
+ 	err = getsockopt(umem->fd, SOL_XDP, XDP_MMAP_OFFSETS, &off, &optlen);
+ 	if (!err) {
+-		munmap(umem->fill->ring,
+-		       off.fr.desc + umem->config.fill_size * sizeof(__u64));
+-		munmap(umem->comp->ring,
+-		       off.cr.desc + umem->config.comp_size * sizeof(__u64));
++		(void)munmap(umem->fill->ring - off.fr.desc,
++			     off.fr.desc +
++			     umem->config.fill_size * sizeof(__u64));
++		(void)munmap(umem->comp->ring - off.cr.desc,
++			     off.cr.desc +
++			     umem->config.comp_size * sizeof(__u64));
+ 	}
+ 
+ 	close(umem->fd);
+@@ -698,6 +697,7 @@ int xsk_umem__delete(struct xsk_umem *umem)
+ 
+ void xsk_socket__delete(struct xsk_socket *xsk)
+ {
++	size_t desc_sz = sizeof(struct xdp_desc);
+ 	struct xdp_mmap_offsets off;
+ 	socklen_t optlen;
+ 	int err;
+@@ -710,14 +710,17 @@ void xsk_socket__delete(struct xsk_socket *xsk)
+ 	optlen = sizeof(off);
+ 	err = getsockopt(xsk->fd, SOL_XDP, XDP_MMAP_OFFSETS, &off, &optlen);
+ 	if (!err) {
+-		if (xsk->rx)
+-			munmap(xsk->rx->ring,
+-			       off.rx.desc +
+-			       xsk->config.rx_size * sizeof(struct xdp_desc));
+-		if (xsk->tx)
+-			munmap(xsk->tx->ring,
+-			       off.tx.desc +
+-			       xsk->config.tx_size * sizeof(struct xdp_desc));
++		if (xsk->rx) {
++			(void)munmap(xsk->rx->ring - off.rx.desc,
++				     off.rx.desc +
++				     xsk->config.rx_size * desc_sz);
++		}
++		if (xsk->tx) {
++			(void)munmap(xsk->tx->ring - off.tx.desc,
++				     off.tx.desc +
++				     xsk->config.tx_size * desc_sz);
++		}
++
+ 	}
+ 
+ 	xsk->umem->refcount--;
+diff --git a/tools/testing/selftests/bpf/test_libbpf_open.c b/tools/testing/selftests/bpf/test_libbpf_open.c
+index 65cbd30704b5..9e9db202d218 100644
+--- a/tools/testing/selftests/bpf/test_libbpf_open.c
++++ b/tools/testing/selftests/bpf/test_libbpf_open.c
+@@ -11,6 +11,8 @@ static const char *__doc__ =
+ #include <bpf/libbpf.h>
+ #include <getopt.h>
+ 
++#include "bpf_rlimit.h"
++
+ static const struct option long_options[] = {
+ 	{"help",	no_argument,		NULL, 'h' },
+ 	{"debug",	no_argument,		NULL, 'D' },
+diff --git a/tools/testing/selftests/bpf/trace_helpers.c b/tools/testing/selftests/bpf/trace_helpers.c
+index 4cdb63bf0521..9a9fc6c9b70b 100644
+--- a/tools/testing/selftests/bpf/trace_helpers.c
++++ b/tools/testing/selftests/bpf/trace_helpers.c
+@@ -52,6 +52,10 @@ struct ksym *ksym_search(long key)
+ 	int start = 0, end = sym_cnt;
+ 	int result;
+ 
++	/* kallsyms not loaded. return NULL */
++	if (sym_cnt <= 0)
++		return NULL;
++
+ 	while (start < end) {
+ 		size_t mid = start + (end - start) / 2;
+ 
+diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
+index 28d321ba311b..6f339882a6ca 100644
+--- a/tools/testing/selftests/cgroup/test_memcontrol.c
++++ b/tools/testing/selftests/cgroup/test_memcontrol.c
+@@ -26,7 +26,7 @@
+  */
+ static int test_memcg_subtree_control(const char *root)
+ {
+-	char *parent, *child, *parent2, *child2;
++	char *parent, *child, *parent2 = NULL, *child2 = NULL;
+ 	int ret = KSFT_FAIL;
+ 	char buf[PAGE_SIZE];
+ 
+@@ -34,50 +34,54 @@ static int test_memcg_subtree_control(const char *root)
+ 	parent = cg_name(root, "memcg_test_0");
+ 	child = cg_name(root, "memcg_test_0/memcg_test_1");
+ 	if (!parent || !child)
+-		goto cleanup;
++		goto cleanup_free;
+ 
+ 	if (cg_create(parent))
+-		goto cleanup;
++		goto cleanup_free;
+ 
+ 	if (cg_write(parent, "cgroup.subtree_control", "+memory"))
+-		goto cleanup;
++		goto cleanup_parent;
+ 
+ 	if (cg_create(child))
+-		goto cleanup;
++		goto cleanup_parent;
+ 
+ 	if (cg_read_strstr(child, "cgroup.controllers", "memory"))
+-		goto cleanup;
++		goto cleanup_child;
+ 
+ 	/* Create two nested cgroups without enabling memory controller */
+ 	parent2 = cg_name(root, "memcg_test_1");
+ 	child2 = cg_name(root, "memcg_test_1/memcg_test_1");
+ 	if (!parent2 || !child2)
+-		goto cleanup;
++		goto cleanup_free2;
+ 
+ 	if (cg_create(parent2))
+-		goto cleanup;
++		goto cleanup_free2;
+ 
+ 	if (cg_create(child2))
+-		goto cleanup;
++		goto cleanup_parent2;
+ 
+ 	if (cg_read(child2, "cgroup.controllers", buf, sizeof(buf)))
+-		goto cleanup;
++		goto cleanup_all;
+ 
+ 	if (!cg_read_strstr(child2, "cgroup.controllers", "memory"))
+-		goto cleanup;
++		goto cleanup_all;
+ 
+ 	ret = KSFT_PASS;
+ 
+-cleanup:
+-	cg_destroy(child);
+-	cg_destroy(parent);
+-	free(parent);
+-	free(child);
+-
++cleanup_all:
+ 	cg_destroy(child2);
++cleanup_parent2:
+ 	cg_destroy(parent2);
++cleanup_free2:
+ 	free(parent2);
+ 	free(child2);
++cleanup_child:
++	cg_destroy(child);
++cleanup_parent:
++	cg_destroy(parent);
++cleanup_free:
++	free(parent);
++	free(child);
+ 
+ 	return ret;
+ }
+diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
+index 001aeda4c154..3972a9564c76 100644
+--- a/virt/kvm/eventfd.c
++++ b/virt/kvm/eventfd.c
+@@ -44,6 +44,12 @@
+ 
+ static struct workqueue_struct *irqfd_cleanup_wq;
+ 
++bool __attribute__((weak))
++kvm_arch_irqfd_allowed(struct kvm *kvm, struct kvm_irqfd *args)
++{
++	return true;
++}
++
+ static void
+ irqfd_inject(struct work_struct *work)
+ {
+@@ -297,6 +303,9 @@ kvm_irqfd_assign(struct kvm *kvm, struct kvm_irqfd *args)
+ 	if (!kvm_arch_intc_initialized(kvm))
+ 		return -EAGAIN;
+ 
++	if (!kvm_arch_irqfd_allowed(kvm, args))
++		return -EINVAL;
++
+ 	irqfd = kzalloc(sizeof(*irqfd), GFP_KERNEL_ACCOUNT);
+ 	if (!irqfd)
+ 		return -ENOMEM;


* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-06-04 11:09 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2019-06-04 11:09 UTC (permalink / raw
  To: gentoo-commits

commit:     1e9d12596e3b32b5f8db872785dba97e0f54d942
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jun  4 11:08:46 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jun  4 11:08:46 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1e9d1259

Linux patch 5.1.7

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1006_linux-5.1.7.patch | 1551 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1555 insertions(+)

diff --git a/0000_README b/0000_README
index 7713f53..7c0827d 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch:  1005_linux-5.1.6.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.1.6
 
+Patch:  1006_linux-5.1.7.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.1.7
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1006_linux-5.1.7.patch b/1006_linux-5.1.7.patch
new file mode 100644
index 0000000..6a91998
--- /dev/null
+++ b/1006_linux-5.1.7.patch
@@ -0,0 +1,1551 @@
+diff --git a/Makefile b/Makefile
+index d8bdd2bb55dc..299578ce385a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/drivers/crypto/vmx/ghash.c b/drivers/crypto/vmx/ghash.c
+index dd8b8716467a..2d1a8cd35509 100644
+--- a/drivers/crypto/vmx/ghash.c
++++ b/drivers/crypto/vmx/ghash.c
+@@ -1,22 +1,14 @@
++// SPDX-License-Identifier: GPL-2.0
+ /**
+  * GHASH routines supporting VMX instructions on the Power 8
+  *
+- * Copyright (C) 2015 International Business Machines Inc.
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License as published by
+- * the Free Software Foundation; version 2 only.
+- *
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+- * GNU General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, write to the Free Software
+- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
++ * Copyright (C) 2015, 2019 International Business Machines Inc.
+  *
+  * Author: Marcelo Henrique Cerri <mhcerri@br.ibm.com>
++ *
++ * Extended by Daniel Axtens <dja@axtens.net> to replace the fallback
++ * mechanism. The new approach is based on arm64 code, which is:
++ *   Copyright (C) 2014 - 2018 Linaro Ltd. <ard.biesheuvel@linaro.org>
+  */
+ 
+ #include <linux/types.h>
+@@ -39,71 +31,25 @@ void gcm_ghash_p8(u64 Xi[2], const u128 htable[16],
+ 		  const u8 *in, size_t len);
+ 
+ struct p8_ghash_ctx {
++	/* key used by vector asm */
+ 	u128 htable[16];
+-	struct crypto_shash *fallback;
++	/* key used by software fallback */
++	be128 key;
+ };
+ 
+ struct p8_ghash_desc_ctx {
+ 	u64 shash[2];
+ 	u8 buffer[GHASH_DIGEST_SIZE];
+ 	int bytes;
+-	struct shash_desc fallback_desc;
+ };
+ 
+-static int p8_ghash_init_tfm(struct crypto_tfm *tfm)
+-{
+-	const char *alg = "ghash-generic";
+-	struct crypto_shash *fallback;
+-	struct crypto_shash *shash_tfm = __crypto_shash_cast(tfm);
+-	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(tfm);
+-
+-	fallback = crypto_alloc_shash(alg, 0, CRYPTO_ALG_NEED_FALLBACK);
+-	if (IS_ERR(fallback)) {
+-		printk(KERN_ERR
+-		       "Failed to allocate transformation for '%s': %ld\n",
+-		       alg, PTR_ERR(fallback));
+-		return PTR_ERR(fallback);
+-	}
+-
+-	crypto_shash_set_flags(fallback,
+-			       crypto_shash_get_flags((struct crypto_shash
+-						       *) tfm));
+-
+-	/* Check if the descsize defined in the algorithm is still enough. */
+-	if (shash_tfm->descsize < sizeof(struct p8_ghash_desc_ctx)
+-	    + crypto_shash_descsize(fallback)) {
+-		printk(KERN_ERR
+-		       "Desc size of the fallback implementation (%s) does not match the expected value: %lu vs %u\n",
+-		       alg,
+-		       shash_tfm->descsize - sizeof(struct p8_ghash_desc_ctx),
+-		       crypto_shash_descsize(fallback));
+-		return -EINVAL;
+-	}
+-	ctx->fallback = fallback;
+-
+-	return 0;
+-}
+-
+-static void p8_ghash_exit_tfm(struct crypto_tfm *tfm)
+-{
+-	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(tfm);
+-
+-	if (ctx->fallback) {
+-		crypto_free_shash(ctx->fallback);
+-		ctx->fallback = NULL;
+-	}
+-}
+-
+ static int p8_ghash_init(struct shash_desc *desc)
+ {
+-	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(crypto_shash_tfm(desc->tfm));
+ 	struct p8_ghash_desc_ctx *dctx = shash_desc_ctx(desc);
+ 
+ 	dctx->bytes = 0;
+ 	memset(dctx->shash, 0, GHASH_DIGEST_SIZE);
+-	dctx->fallback_desc.tfm = ctx->fallback;
+-	dctx->fallback_desc.flags = desc->flags;
+-	return crypto_shash_init(&dctx->fallback_desc);
++	return 0;
+ }
+ 
+ static int p8_ghash_setkey(struct crypto_shash *tfm, const u8 *key,
+@@ -121,7 +67,51 @@ static int p8_ghash_setkey(struct crypto_shash *tfm, const u8 *key,
+ 	disable_kernel_vsx();
+ 	pagefault_enable();
+ 	preempt_enable();
+-	return crypto_shash_setkey(ctx->fallback, key, keylen);
++
++	memcpy(&ctx->key, key, GHASH_BLOCK_SIZE);
++
++	return 0;
++}
++
++static inline void __ghash_block(struct p8_ghash_ctx *ctx,
++				 struct p8_ghash_desc_ctx *dctx)
++{
++	if (!IN_INTERRUPT) {
++		preempt_disable();
++		pagefault_disable();
++		enable_kernel_vsx();
++		gcm_ghash_p8(dctx->shash, ctx->htable,
++				dctx->buffer, GHASH_DIGEST_SIZE);
++		disable_kernel_vsx();
++		pagefault_enable();
++		preempt_enable();
++	} else {
++		crypto_xor((u8 *)dctx->shash, dctx->buffer, GHASH_BLOCK_SIZE);
++		gf128mul_lle((be128 *)dctx->shash, &ctx->key);
++	}
++}
++
++static inline void __ghash_blocks(struct p8_ghash_ctx *ctx,
++				  struct p8_ghash_desc_ctx *dctx,
++				  const u8 *src, unsigned int srclen)
++{
++	if (!IN_INTERRUPT) {
++		preempt_disable();
++		pagefault_disable();
++		enable_kernel_vsx();
++		gcm_ghash_p8(dctx->shash, ctx->htable,
++				src, srclen);
++		disable_kernel_vsx();
++		pagefault_enable();
++		preempt_enable();
++	} else {
++		while (srclen >= GHASH_BLOCK_SIZE) {
++			crypto_xor((u8 *)dctx->shash, src, GHASH_BLOCK_SIZE);
++			gf128mul_lle((be128 *)dctx->shash, &ctx->key);
++			srclen -= GHASH_BLOCK_SIZE;
++			src += GHASH_BLOCK_SIZE;
++		}
++	}
+ }
+ 
+ static int p8_ghash_update(struct shash_desc *desc,
+@@ -131,49 +121,33 @@ static int p8_ghash_update(struct shash_desc *desc,
+ 	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(crypto_shash_tfm(desc->tfm));
+ 	struct p8_ghash_desc_ctx *dctx = shash_desc_ctx(desc);
+ 
+-	if (IN_INTERRUPT) {
+-		return crypto_shash_update(&dctx->fallback_desc, src,
+-					   srclen);
+-	} else {
+-		if (dctx->bytes) {
+-			if (dctx->bytes + srclen < GHASH_DIGEST_SIZE) {
+-				memcpy(dctx->buffer + dctx->bytes, src,
+-				       srclen);
+-				dctx->bytes += srclen;
+-				return 0;
+-			}
++	if (dctx->bytes) {
++		if (dctx->bytes + srclen < GHASH_DIGEST_SIZE) {
+ 			memcpy(dctx->buffer + dctx->bytes, src,
+-			       GHASH_DIGEST_SIZE - dctx->bytes);
+-			preempt_disable();
+-			pagefault_disable();
+-			enable_kernel_vsx();
+-			gcm_ghash_p8(dctx->shash, ctx->htable,
+-				     dctx->buffer, GHASH_DIGEST_SIZE);
+-			disable_kernel_vsx();
+-			pagefault_enable();
+-			preempt_enable();
+-			src += GHASH_DIGEST_SIZE - dctx->bytes;
+-			srclen -= GHASH_DIGEST_SIZE - dctx->bytes;
+-			dctx->bytes = 0;
+-		}
+-		len = srclen & ~(GHASH_DIGEST_SIZE - 1);
+-		if (len) {
+-			preempt_disable();
+-			pagefault_disable();
+-			enable_kernel_vsx();
+-			gcm_ghash_p8(dctx->shash, ctx->htable, src, len);
+-			disable_kernel_vsx();
+-			pagefault_enable();
+-			preempt_enable();
+-			src += len;
+-			srclen -= len;
+-		}
+-		if (srclen) {
+-			memcpy(dctx->buffer, src, srclen);
+-			dctx->bytes = srclen;
++				srclen);
++			dctx->bytes += srclen;
++			return 0;
+ 		}
+-		return 0;
++		memcpy(dctx->buffer + dctx->bytes, src,
++			GHASH_DIGEST_SIZE - dctx->bytes);
++
++		__ghash_block(ctx, dctx);
++
++		src += GHASH_DIGEST_SIZE - dctx->bytes;
++		srclen -= GHASH_DIGEST_SIZE - dctx->bytes;
++		dctx->bytes = 0;
++	}
++	len = srclen & ~(GHASH_DIGEST_SIZE - 1);
++	if (len) {
++		__ghash_blocks(ctx, dctx, src, len);
++		src += len;
++		srclen -= len;
+ 	}
++	if (srclen) {
++		memcpy(dctx->buffer, src, srclen);
++		dctx->bytes = srclen;
++	}
++	return 0;
+ }
+ 
+ static int p8_ghash_final(struct shash_desc *desc, u8 *out)
+@@ -182,25 +156,14 @@ static int p8_ghash_final(struct shash_desc *desc, u8 *out)
+ 	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(crypto_shash_tfm(desc->tfm));
+ 	struct p8_ghash_desc_ctx *dctx = shash_desc_ctx(desc);
+ 
+-	if (IN_INTERRUPT) {
+-		return crypto_shash_final(&dctx->fallback_desc, out);
+-	} else {
+-		if (dctx->bytes) {
+-			for (i = dctx->bytes; i < GHASH_DIGEST_SIZE; i++)
+-				dctx->buffer[i] = 0;
+-			preempt_disable();
+-			pagefault_disable();
+-			enable_kernel_vsx();
+-			gcm_ghash_p8(dctx->shash, ctx->htable,
+-				     dctx->buffer, GHASH_DIGEST_SIZE);
+-			disable_kernel_vsx();
+-			pagefault_enable();
+-			preempt_enable();
+-			dctx->bytes = 0;
+-		}
+-		memcpy(out, dctx->shash, GHASH_DIGEST_SIZE);
+-		return 0;
++	if (dctx->bytes) {
++		for (i = dctx->bytes; i < GHASH_DIGEST_SIZE; i++)
++			dctx->buffer[i] = 0;
++		__ghash_block(ctx, dctx);
++		dctx->bytes = 0;
+ 	}
++	memcpy(out, dctx->shash, GHASH_DIGEST_SIZE);
++	return 0;
+ }
+ 
+ struct shash_alg p8_ghash_alg = {
+@@ -215,11 +178,8 @@ struct shash_alg p8_ghash_alg = {
+ 		 .cra_name = "ghash",
+ 		 .cra_driver_name = "p8_ghash",
+ 		 .cra_priority = 1000,
+-		 .cra_flags = CRYPTO_ALG_NEED_FALLBACK,
+ 		 .cra_blocksize = GHASH_BLOCK_SIZE,
+ 		 .cra_ctxsize = sizeof(struct p8_ghash_ctx),
+ 		 .cra_module = THIS_MODULE,
+-		 .cra_init = p8_ghash_init_tfm,
+-		 .cra_exit = p8_ghash_exit_tfm,
+ 	},
+ };
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index ee610721098e..f96efa363d34 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -3122,13 +3122,18 @@ static int bond_slave_netdev_event(unsigned long event,
+ 	case NETDEV_CHANGE:
+ 		/* For 802.3ad mode only:
+ 		 * Getting invalid Speed/Duplex values here will put slave
+-		 * in weird state. So mark it as link-fail for the time
+-		 * being and let link-monitoring (miimon) set it right when
+-		 * correct speeds/duplex are available.
++		 * in weird state. Mark it as link-fail if the link was
++		 * previously up or link-down if it hasn't yet come up, and
++		 * let link-monitoring (miimon) set it right when correct
++		 * speeds/duplex are available.
+ 		 */
+ 		if (bond_update_speed_duplex(slave) &&
+-		    BOND_MODE(bond) == BOND_MODE_8023AD)
+-			slave->link = BOND_LINK_FAIL;
++		    BOND_MODE(bond) == BOND_MODE_8023AD) {
++			if (slave->last_link_up)
++				slave->link = BOND_LINK_FAIL;
++			else
++				slave->link = BOND_LINK_DOWN;
++		}
+ 
+ 		if (BOND_MODE(bond) == BOND_MODE_8023AD)
+ 			bond_3ad_adapter_speed_duplex_changed(slave);
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index f4e2db44ad91..720f1dde2c2d 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -910,7 +910,7 @@ static uint64_t _mv88e6xxx_get_ethtool_stat(struct mv88e6xxx_chip *chip,
+ 			err = mv88e6xxx_port_read(chip, port, s->reg + 1, &reg);
+ 			if (err)
+ 				return U64_MAX;
+-			high = reg;
++			low |= ((u32)reg) << 16;
+ 		}
+ 		break;
+ 	case STATS_TYPE_BANK1:
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 52ade133b57c..30cafe4cdb6e 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -1640,6 +1640,8 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ 		skb = bnxt_copy_skb(bnapi, data_ptr, len, dma_addr);
+ 		bnxt_reuse_rx_data(rxr, cons, data);
+ 		if (!skb) {
++			if (agg_bufs)
++				bnxt_reuse_rx_agg_bufs(cpr, cp_cons, agg_bufs);
+ 			rc = -ENOMEM;
+ 			goto next_rx;
+ 		}
+@@ -6340,7 +6342,7 @@ static int bnxt_alloc_ctx_mem(struct bnxt *bp)
+ 	if (!ctx || (ctx->flags & BNXT_CTX_FLAG_INITED))
+ 		return 0;
+ 
+-	if (bp->flags & BNXT_FLAG_ROCE_CAP) {
++	if ((bp->flags & BNXT_FLAG_ROCE_CAP) && !is_kdump_kernel()) {
+ 		pg_lvl = 2;
+ 		extra_qps = 65536;
+ 		extra_srqs = 8192;
+@@ -7512,22 +7514,23 @@ static void bnxt_clear_int_mode(struct bnxt *bp)
+ 	bp->flags &= ~BNXT_FLAG_USING_MSIX;
+ }
+ 
+-int bnxt_reserve_rings(struct bnxt *bp)
++int bnxt_reserve_rings(struct bnxt *bp, bool irq_re_init)
+ {
+ 	int tcs = netdev_get_num_tc(bp->dev);
+-	bool reinit_irq = false;
++	bool irq_cleared = false;
+ 	int rc;
+ 
+ 	if (!bnxt_need_reserve_rings(bp))
+ 		return 0;
+ 
+-	if (BNXT_NEW_RM(bp) && (bnxt_get_num_msix(bp) != bp->total_irqs)) {
++	if (irq_re_init && BNXT_NEW_RM(bp) &&
++	    bnxt_get_num_msix(bp) != bp->total_irqs) {
+ 		bnxt_ulp_irq_stop(bp);
+ 		bnxt_clear_int_mode(bp);
+-		reinit_irq = true;
++		irq_cleared = true;
+ 	}
+ 	rc = __bnxt_reserve_rings(bp);
+-	if (reinit_irq) {
++	if (irq_cleared) {
+ 		if (!rc)
+ 			rc = bnxt_init_int_mode(bp);
+ 		bnxt_ulp_irq_restart(bp, rc);
+@@ -8426,7 +8429,7 @@ static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
+ 			return rc;
+ 		}
+ 	}
+-	rc = bnxt_reserve_rings(bp);
++	rc = bnxt_reserve_rings(bp, irq_re_init);
+ 	if (rc)
+ 		return rc;
+ 	if ((bp->flags & BNXT_FLAG_RFS) &&
+@@ -10337,7 +10340,7 @@ static int bnxt_set_dflt_rings(struct bnxt *bp, bool sh)
+ 
+ 	if (sh)
+ 		bp->flags |= BNXT_FLAG_SHARED_RINGS;
+-	dflt_rings = netif_get_num_default_rss_queues();
++	dflt_rings = is_kdump_kernel() ? 1 : netif_get_num_default_rss_queues();
+ 	/* Reduce default rings on multi-port cards so that total default
+ 	 * rings do not exceed CPU count.
+ 	 */
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index cf81ace7a6e6..0fb93280ad4e 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -20,6 +20,7 @@
+ 
+ #include <linux/interrupt.h>
+ #include <linux/rhashtable.h>
++#include <linux/crash_dump.h>
+ #include <net/devlink.h>
+ #include <net/dst_metadata.h>
+ #include <net/xdp.h>
+@@ -1367,7 +1368,8 @@ struct bnxt {
+ #define BNXT_CHIP_TYPE_NITRO_A0(bp) ((bp)->flags & BNXT_FLAG_CHIP_NITRO_A0)
+ #define BNXT_RX_PAGE_MODE(bp)	((bp)->flags & BNXT_FLAG_RX_PAGE_MODE)
+ #define BNXT_SUPPORTS_TPA(bp)	(!BNXT_CHIP_TYPE_NITRO_A0(bp) &&	\
+-				 !(bp->flags & BNXT_FLAG_CHIP_P5))
++				 !(bp->flags & BNXT_FLAG_CHIP_P5) &&	\
++				 !is_kdump_kernel())
+ 
+ /* Chip class phase 5 */
+ #define BNXT_CHIP_P5(bp)			\
+@@ -1778,7 +1780,7 @@ unsigned int bnxt_get_avail_stat_ctxs_for_en(struct bnxt *bp);
+ unsigned int bnxt_get_max_func_cp_rings(struct bnxt *bp);
+ unsigned int bnxt_get_avail_cp_rings_for_en(struct bnxt *bp);
+ int bnxt_get_avail_msix(struct bnxt *bp, int num);
+-int bnxt_reserve_rings(struct bnxt *bp);
++int bnxt_reserve_rings(struct bnxt *bp, bool irq_re_init);
+ void bnxt_tx_disable(struct bnxt *bp);
+ void bnxt_tx_enable(struct bnxt *bp);
+ int bnxt_hwrm_set_pause(struct bnxt *);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index adabbe94a259..e1460e391952 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -788,7 +788,7 @@ static int bnxt_set_channels(struct net_device *dev,
+ 			 */
+ 		}
+ 	} else {
+-		rc = bnxt_reserve_rings(bp);
++		rc = bnxt_reserve_rings(bp, true);
+ 	}
+ 
+ 	return rc;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+index cf475873ce81..bfa342a98d08 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+@@ -147,7 +147,7 @@ static int bnxt_req_msix_vecs(struct bnxt_en_dev *edev, int ulp_id,
+ 			bnxt_close_nic(bp, true, false);
+ 			rc = bnxt_open_nic(bp, true, false);
+ 		} else {
+-			rc = bnxt_reserve_rings(bp);
++			rc = bnxt_reserve_rings(bp, true);
+ 		}
+ 	}
+ 	if (rc) {
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
+index 82a8d1970060..35462bccd91a 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
+@@ -197,6 +197,9 @@ static void cxgb4_process_flow_match(struct net_device *dev,
+ 		fs->val.ivlan = vlan_tci;
+ 		fs->mask.ivlan = vlan_tci_mask;
+ 
++		fs->val.ivlan_vld = 1;
++		fs->mask.ivlan_vld = 1;
++
+ 		/* Chelsio adapters use ivlan_vld bit to match vlan packets
+ 		 * as 802.1Q. Also, when vlan tag is present in packets,
+ 		 * ethtype match is used then to match on ethtype of inner
+@@ -207,8 +210,6 @@ static void cxgb4_process_flow_match(struct net_device *dev,
+ 		 * ethtype value with ethtype of inner header.
+ 		 */
+ 		if (fs->val.ethtype == ETH_P_8021Q) {
+-			fs->val.ivlan_vld = 1;
+-			fs->mask.ivlan_vld = 1;
+ 			fs->val.ethtype = 0;
+ 			fs->mask.ethtype = 0;
+ 		}
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+index a3544041ad32..8d63eed628d7 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+@@ -7206,10 +7206,21 @@ int t4_fixup_host_params(struct adapter *adap, unsigned int page_size,
+ 			 unsigned int cache_line_size)
+ {
+ 	unsigned int page_shift = fls(page_size) - 1;
++	unsigned int sge_hps = page_shift - 10;
+ 	unsigned int stat_len = cache_line_size > 64 ? 128 : 64;
+ 	unsigned int fl_align = cache_line_size < 32 ? 32 : cache_line_size;
+ 	unsigned int fl_align_log = fls(fl_align) - 1;
+ 
++	t4_write_reg(adap, SGE_HOST_PAGE_SIZE_A,
++		     HOSTPAGESIZEPF0_V(sge_hps) |
++		     HOSTPAGESIZEPF1_V(sge_hps) |
++		     HOSTPAGESIZEPF2_V(sge_hps) |
++		     HOSTPAGESIZEPF3_V(sge_hps) |
++		     HOSTPAGESIZEPF4_V(sge_hps) |
++		     HOSTPAGESIZEPF5_V(sge_hps) |
++		     HOSTPAGESIZEPF6_V(sge_hps) |
++		     HOSTPAGESIZEPF7_V(sge_hps));
++
+ 	if (is_t4(adap->params.chip)) {
+ 		t4_set_reg_field(adap, SGE_CONTROL_A,
+ 				 INGPADBOUNDARY_V(INGPADBOUNDARY_M) |
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index a96ad20ee484..878ccce1dfcd 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -3556,7 +3556,7 @@ failed_init:
+ 	if (fep->reg_phy)
+ 		regulator_disable(fep->reg_phy);
+ failed_reset:
+-	pm_runtime_put(&pdev->dev);
++	pm_runtime_put_noidle(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ failed_regulator:
+ 	clk_disable_unprepare(fep->clk_ahb);
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index c0a3718b2e2a..c7f4b72b3c07 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -4674,7 +4674,7 @@ static int mvneta_probe(struct platform_device *pdev)
+ 	err = register_netdev(dev);
+ 	if (err < 0) {
+ 		dev_err(&pdev->dev, "failed to register\n");
+-		goto err_free_stats;
++		goto err_netdev;
+ 	}
+ 
+ 	netdev_info(dev, "Using %s mac address %pM\n", mac_from,
+@@ -4685,14 +4685,12 @@ static int mvneta_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ err_netdev:
+-	unregister_netdev(dev);
+ 	if (pp->bm_priv) {
+ 		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_long, 1 << pp->id);
+ 		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_short,
+ 				       1 << pp->id);
+ 		mvneta_bm_put(pp->bm_priv);
+ 	}
+-err_free_stats:
+ 	free_percpu(pp->stats);
+ err_free_ports:
+ 	free_percpu(pp->ports);
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 25fbed2b8d94..f4f076d7090e 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -1455,7 +1455,7 @@ static inline void mvpp2_xlg_max_rx_size_set(struct mvpp2_port *port)
+ /* Set defaults to the MVPP2 port */
+ static void mvpp2_defaults_set(struct mvpp2_port *port)
+ {
+-	int tx_port_num, val, queue, ptxq, lrxq;
++	int tx_port_num, val, queue, lrxq;
+ 
+ 	if (port->priv->hw_version == MVPP21) {
+ 		/* Update TX FIFO MIN Threshold */
+@@ -1476,11 +1476,9 @@ static void mvpp2_defaults_set(struct mvpp2_port *port)
+ 	mvpp2_write(port->priv, MVPP2_TXP_SCHED_FIXED_PRIO_REG, 0);
+ 
+ 	/* Close bandwidth for all queues */
+-	for (queue = 0; queue < MVPP2_MAX_TXQ; queue++) {
+-		ptxq = mvpp2_txq_phys(port->id, queue);
++	for (queue = 0; queue < MVPP2_MAX_TXQ; queue++)
+ 		mvpp2_write(port->priv,
+-			    MVPP2_TXQ_SCHED_TOKEN_CNTR_REG(ptxq), 0);
+-	}
++			    MVPP2_TXQ_SCHED_TOKEN_CNTR_REG(queue), 0);
+ 
+ 	/* Set refill period to 1 usec, refill tokens
+ 	 * and bucket size to maximum
+@@ -2336,7 +2334,7 @@ static void mvpp2_txq_deinit(struct mvpp2_port *port,
+ 	txq->descs_dma         = 0;
+ 
+ 	/* Set minimum bandwidth for disabled TXQs */
+-	mvpp2_write(port->priv, MVPP2_TXQ_SCHED_TOKEN_CNTR_REG(txq->id), 0);
++	mvpp2_write(port->priv, MVPP2_TXQ_SCHED_TOKEN_CNTR_REG(txq->log_id), 0);
+ 
+ 	/* Set Tx descriptors queue starting address and size */
+ 	thread = mvpp2_cpu_to_thread(port->priv, get_cpu());
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 46157e2a1e5a..1e2688e2ed47 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -3750,6 +3750,12 @@ static netdev_features_t mlx5e_fix_features(struct net_device *netdev,
+ 			netdev_warn(netdev, "Disabling LRO, not supported in legacy RQ\n");
+ 	}
+ 
++	if (MLX5E_GET_PFLAG(params, MLX5E_PFLAG_RX_CQE_COMPRESS)) {
++		features &= ~NETIF_F_RXHASH;
++		if (netdev->features & NETIF_F_RXHASH)
++			netdev_warn(netdev, "Disabling rxhash, not supported when CQE compress is active\n");
++	}
++
+ 	mutex_unlock(&priv->state_lock);
+ 
+ 	return features;
+@@ -3875,6 +3881,9 @@ int mlx5e_hwstamp_set(struct mlx5e_priv *priv, struct ifreq *ifr)
+ 	memcpy(&priv->tstamp, &config, sizeof(config));
+ 	mutex_unlock(&priv->state_lock);
+ 
++	/* might need to fix some features */
++	netdev_update_features(priv->netdev);
++
+ 	return copy_to_user(ifr->ifr_data, &config,
+ 			    sizeof(config)) ? -EFAULT : 0;
+ }
+@@ -4734,6 +4743,10 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
+ 	if (!priv->channels.params.scatter_fcs_en)
+ 		netdev->features  &= ~NETIF_F_RXFCS;
+ 
++	/* prefere CQE compression over rxhash */
++	if (MLX5E_GET_PFLAG(&priv->channels.params, MLX5E_PFLAG_RX_CQE_COMPRESS))
++		netdev->features &= ~NETIF_F_RXHASH;
++
+ #define FT_CAP(f) MLX5_CAP_FLOWTABLE(mdev, flow_table_properties_nic_receive.f)
+ 	if (FT_CAP(flow_modify_en) &&
+ 	    FT_CAP(modify_root) &&
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 581cc145795d..e29e5beb239d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -2286,7 +2286,7 @@ static struct mlx5_flow_root_namespace
+ 		cmds = mlx5_fs_cmd_get_default_ipsec_fpga_cmds(table_type);
+ 
+ 	/* Create the root namespace */
+-	root_ns = kvzalloc(sizeof(*root_ns), GFP_KERNEL);
++	root_ns = kzalloc(sizeof(*root_ns), GFP_KERNEL);
+ 	if (!root_ns)
+ 		return NULL;
+ 
+@@ -2429,6 +2429,7 @@ static void cleanup_egress_acls_root_ns(struct mlx5_core_dev *dev)
+ 		cleanup_root_ns(steering->esw_egress_root_ns[i]);
+ 
+ 	kfree(steering->esw_egress_root_ns);
++	steering->esw_egress_root_ns = NULL;
+ }
+ 
+ static void cleanup_ingress_acls_root_ns(struct mlx5_core_dev *dev)
+@@ -2443,6 +2444,7 @@ static void cleanup_ingress_acls_root_ns(struct mlx5_core_dev *dev)
+ 		cleanup_root_ns(steering->esw_ingress_root_ns[i]);
+ 
+ 	kfree(steering->esw_ingress_root_ns);
++	steering->esw_ingress_root_ns = NULL;
+ }
+ 
+ void mlx5_cleanup_fs(struct mlx5_core_dev *dev)
+@@ -2611,6 +2613,7 @@ cleanup_root_ns:
+ 	for (i--; i >= 0; i--)
+ 		cleanup_root_ns(steering->esw_egress_root_ns[i]);
+ 	kfree(steering->esw_egress_root_ns);
++	steering->esw_egress_root_ns = NULL;
+ 	return err;
+ }
+ 
+@@ -2638,6 +2641,7 @@ cleanup_root_ns:
+ 	for (i--; i >= 0; i--)
+ 		cleanup_root_ns(steering->esw_ingress_root_ns[i]);
+ 	kfree(steering->esw_ingress_root_ns);
++	steering->esw_ingress_root_ns = NULL;
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c
+index c1a9cc9a3292..4c98950380d5 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c
+@@ -1171,13 +1171,12 @@ mlxsw_sp_acl_erp_delta_fill(const struct mlxsw_sp_acl_erp_key *parent_key,
+ 			return -EINVAL;
+ 	}
+ 	if (si == -1) {
+-		/* The masks are the same, this cannot happen.
+-		 * That means the caller is broken.
++		/* The masks are the same, this can happen in case eRPs with
++		 * the same mask were created in both A-TCAM and C-TCAM.
++		 * The only possible condition under which this can happen
++		 * is identical rule insertion. Delta is not possible here.
+ 		 */
+-		WARN_ON(1);
+-		*delta_start = 0;
+-		*delta_mask = 0;
+-		return 0;
++		return -EINVAL;
+ 	}
+ 	pmask = (unsigned char) parent_key->mask[__MASK_IDX(si)];
+ 	mask = (unsigned char) key->mask[__MASK_IDX(si)];
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index ed651dde6ef9..6d176be51a6b 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -6914,6 +6914,8 @@ static int rtl8169_resume(struct device *device)
+ 	struct net_device *dev = dev_get_drvdata(device);
+ 	struct rtl8169_private *tp = netdev_priv(dev);
+ 
++	rtl_rar_set(tp, dev->dev_addr);
++
+ 	clk_prepare_enable(tp->clk);
+ 
+ 	if (netif_running(dev))
+@@ -6947,6 +6949,7 @@ static int rtl8169_runtime_resume(struct device *device)
+ {
+ 	struct net_device *dev = dev_get_drvdata(device);
+ 	struct rtl8169_private *tp = netdev_priv(dev);
++
+ 	rtl_rar_set(tp, dev->dev_addr);
+ 
+ 	if (!tp->TxDescArray)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+index 3c749c327cbd..e09522c5509a 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+@@ -460,7 +460,7 @@ stmmac_get_pauseparam(struct net_device *netdev,
+ 	} else {
+ 		if (!linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT,
+ 				       netdev->phydev->supported) ||
+-		    linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
++		    !linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
+ 				      netdev->phydev->supported))
+ 			return;
+ 	}
+@@ -491,7 +491,7 @@ stmmac_set_pauseparam(struct net_device *netdev,
+ 	} else {
+ 		if (!linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT,
+ 				       phy->supported) ||
+-		    linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
++		    !linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
+ 				      phy->supported))
+ 			return -EOPNOTSUPP;
+ 	}
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 48712437d0da..3c409862c52e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -2208,6 +2208,10 @@ static int stmmac_init_dma_engine(struct stmmac_priv *priv)
+ 	if (priv->plat->axi)
+ 		stmmac_axi(priv, priv->ioaddr, priv->plat->axi);
+ 
++	/* DMA CSR Channel configuration */
++	for (chan = 0; chan < dma_csr_ch; chan++)
++		stmmac_init_chan(priv, priv->ioaddr, priv->plat->dma_cfg, chan);
++
+ 	/* DMA RX Channel Configuration */
+ 	for (chan = 0; chan < rx_channels_count; chan++) {
+ 		rx_q = &priv->rx_queue[chan];
+@@ -2233,10 +2237,6 @@ static int stmmac_init_dma_engine(struct stmmac_priv *priv)
+ 				       tx_q->tx_tail_addr, chan);
+ 	}
+ 
+-	/* DMA CSR Channel configuration */
+-	for (chan = 0; chan < dma_csr_ch; chan++)
+-		stmmac_init_chan(priv, priv->ioaddr, priv->plat->dma_cfg, chan);
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+index bdd351597b55..093a223fe408 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+@@ -267,7 +267,8 @@ int stmmac_mdio_reset(struct mii_bus *bus)
+ 			of_property_read_u32_array(np,
+ 				"snps,reset-delays-us", data->delays, 3);
+ 
+-			if (gpio_request(data->reset_gpio, "mdio-reset"))
++			if (devm_gpio_request(priv->device, data->reset_gpio,
++					      "mdio-reset"))
+ 				return 0;
+ 		}
+ 
+diff --git a/drivers/net/phy/marvell10g.c b/drivers/net/phy/marvell10g.c
+index 100b401b1f4a..754cde873dde 100644
+--- a/drivers/net/phy/marvell10g.c
++++ b/drivers/net/phy/marvell10g.c
+@@ -31,6 +31,9 @@
+ #define MV_PHY_ALASKA_NBT_QUIRK_REV	(MARVELL_PHY_ID_88X3310 | 0xa)
+ 
+ enum {
++	MV_PMA_BOOT		= 0xc050,
++	MV_PMA_BOOT_FATAL	= BIT(0),
++
+ 	MV_PCS_BASE_T		= 0x0000,
+ 	MV_PCS_BASE_R		= 0x1000,
+ 	MV_PCS_1000BASEX	= 0x2000,
+@@ -211,6 +214,16 @@ static int mv3310_probe(struct phy_device *phydev)
+ 	    (phydev->c45_ids.devices_in_package & mmd_mask) != mmd_mask)
+ 		return -ENODEV;
+ 
++	ret = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, MV_PMA_BOOT);
++	if (ret < 0)
++		return ret;
++
++	if (ret & MV_PMA_BOOT_FATAL) {
++		dev_warn(&phydev->mdio.dev,
++			 "PHY failed to boot firmware, status=%04x\n", ret);
++		return -ENODEV;
++	}
++
+ 	priv = devm_kzalloc(&phydev->mdio.dev, sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+ 		return -ENOMEM;
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index 504282af27e5..921cc0571bd0 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -506,6 +506,7 @@ static int rx_submit (struct usbnet *dev, struct urb *urb, gfp_t flags)
+ 
+ 	if (netif_running (dev->net) &&
+ 	    netif_device_present (dev->net) &&
++	    test_bit(EVENT_DEV_OPEN, &dev->flags) &&
+ 	    !test_bit (EVENT_RX_HALT, &dev->flags) &&
+ 	    !test_bit (EVENT_DEV_ASLEEP, &dev->flags)) {
+ 		switch (retval = usb_submit_urb (urb, GFP_ATOMIC)) {
+@@ -1431,6 +1432,11 @@ netdev_tx_t usbnet_start_xmit (struct sk_buff *skb,
+ 		spin_unlock_irqrestore(&dev->txq.lock, flags);
+ 		goto drop;
+ 	}
++	if (netif_queue_stopped(net)) {
++		usb_autopm_put_interface_async(dev->intf);
++		spin_unlock_irqrestore(&dev->txq.lock, flags);
++		goto drop;
++	}
+ 
+ #ifdef CONFIG_PM
+ 	/* if this triggers the device is still a sleep */
+diff --git a/include/linux/siphash.h b/include/linux/siphash.h
+index fa7a6b9cedbf..bf21591a9e5e 100644
+--- a/include/linux/siphash.h
++++ b/include/linux/siphash.h
+@@ -21,6 +21,11 @@ typedef struct {
+ 	u64 key[2];
+ } siphash_key_t;
+ 
++static inline bool siphash_key_is_zero(const siphash_key_t *key)
++{
++	return !(key->key[0] | key->key[1]);
++}
++
+ u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key);
+ #ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+ u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key);
+diff --git a/include/net/netns/ipv4.h b/include/net/netns/ipv4.h
+index 104a6669e344..7698460a3dd1 100644
+--- a/include/net/netns/ipv4.h
++++ b/include/net/netns/ipv4.h
+@@ -9,6 +9,7 @@
+ #include <linux/uidgid.h>
+ #include <net/inet_frag.h>
+ #include <linux/rcupdate.h>
++#include <linux/siphash.h>
+ 
+ struct tcpm_hash_bucket;
+ struct ctl_table_header;
+@@ -217,5 +218,6 @@ struct netns_ipv4 {
+ 	unsigned int	ipmr_seq;	/* protected by rtnl_mutex */
+ 
+ 	atomic_t	rt_genid;
++	siphash_key_t	ip_id_key;
+ };
+ #endif
+diff --git a/include/uapi/linux/tipc_config.h b/include/uapi/linux/tipc_config.h
+index 4b2c93b1934c..4955e1a9f1bc 100644
+--- a/include/uapi/linux/tipc_config.h
++++ b/include/uapi/linux/tipc_config.h
+@@ -307,8 +307,10 @@ static inline int TLV_SET(void *tlv, __u16 type, void *data, __u16 len)
+ 	tlv_ptr = (struct tlv_desc *)tlv;
+ 	tlv_ptr->tlv_type = htons(type);
+ 	tlv_ptr->tlv_len  = htons(tlv_len);
+-	if (len && data)
+-		memcpy(TLV_DATA(tlv_ptr), data, tlv_len);
++	if (len && data) {
++		memcpy(TLV_DATA(tlv_ptr), data, len);
++		memset(TLV_DATA(tlv_ptr) + len, 0, TLV_SPACE(len) - tlv_len);
++	}
+ 	return TLV_SPACE(len);
+ }
+ 
+@@ -405,8 +407,10 @@ static inline int TCM_SET(void *msg, __u16 cmd, __u16 flags,
+ 	tcm_hdr->tcm_len   = htonl(msg_len);
+ 	tcm_hdr->tcm_type  = htons(cmd);
+ 	tcm_hdr->tcm_flags = htons(flags);
+-	if (data_len && data)
++	if (data_len && data) {
+ 		memcpy(TCM_DATA(msg), data, data_len);
++		memset(TCM_DATA(msg) + data_len, 0, TCM_SPACE(data_len) - msg_len);
++	}
+ 	return TCM_SPACE(data_len);
+ }
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 255f99cb7c48..c6b2f6db0a9b 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -5804,7 +5804,6 @@ static struct sk_buff *napi_frags_skb(struct napi_struct *napi)
+ 	skb_reset_mac_header(skb);
+ 	skb_gro_reset_offset(skb);
+ 
+-	eth = skb_gro_header_fast(skb, 0);
+ 	if (unlikely(skb_gro_header_hard(skb, hlen))) {
+ 		eth = skb_gro_header_slow(skb, hlen, 0);
+ 		if (unlikely(!eth)) {
+@@ -5814,6 +5813,7 @@ static struct sk_buff *napi_frags_skb(struct napi_struct *napi)
+ 			return NULL;
+ 		}
+ 	} else {
++		eth = (const struct ethhdr *)skb->data;
+ 		gro_pull_from_frag0(skb, hlen);
+ 		NAPI_GRO_CB(skb)->frag0 += hlen;
+ 		NAPI_GRO_CB(skb)->frag0_len -= hlen;
+diff --git a/net/core/ethtool.c b/net/core/ethtool.c
+index 36ed619faf36..014dcd63b451 100644
+--- a/net/core/ethtool.c
++++ b/net/core/ethtool.c
+@@ -3008,11 +3008,12 @@ ethtool_rx_flow_rule_create(const struct ethtool_rx_flow_spec_input *input)
+ 		const struct ethtool_flow_ext *ext_h_spec = &fs->h_ext;
+ 		const struct ethtool_flow_ext *ext_m_spec = &fs->m_ext;
+ 
+-		if (ext_m_spec->vlan_etype &&
+-		    ext_m_spec->vlan_tci) {
++		if (ext_m_spec->vlan_etype) {
+ 			match->key.vlan.vlan_tpid = ext_h_spec->vlan_etype;
+ 			match->mask.vlan.vlan_tpid = ext_m_spec->vlan_etype;
++		}
+ 
++		if (ext_m_spec->vlan_tci) {
+ 			match->key.vlan.vlan_id =
+ 				ntohs(ext_h_spec->vlan_tci) & 0x0fff;
+ 			match->mask.vlan.vlan_id =
+@@ -3022,7 +3023,10 @@ ethtool_rx_flow_rule_create(const struct ethtool_rx_flow_spec_input *input)
+ 				(ntohs(ext_h_spec->vlan_tci) & 0xe000) >> 13;
+ 			match->mask.vlan.vlan_priority =
+ 				(ntohs(ext_m_spec->vlan_tci) & 0xe000) >> 13;
++		}
+ 
++		if (ext_m_spec->vlan_etype ||
++		    ext_m_spec->vlan_tci) {
+ 			match->dissector.used_keys |=
+ 				BIT(FLOW_DISSECTOR_KEY_VLAN);
+ 			match->dissector.offset[FLOW_DISSECTOR_KEY_VLAN] =
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 40796b8bf820..e5bfd42fd083 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -1001,7 +1001,11 @@ struct ubuf_info *sock_zerocopy_realloc(struct sock *sk, size_t size,
+ 			uarg->len++;
+ 			uarg->bytelen = bytelen;
+ 			atomic_set(&sk->sk_zckey, ++next);
+-			sock_zerocopy_get(uarg);
++
++			/* no extra ref when appending to datagram (MSG_MORE) */
++			if (sk->sk_type == SOCK_STREAM)
++				sock_zerocopy_get(uarg);
++
+ 			return uarg;
+ 		}
+ 	}
+diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
+index 6c2febc39dca..eb03153dfe12 100644
+--- a/net/ipv4/igmp.c
++++ b/net/ipv4/igmp.c
+@@ -188,6 +188,17 @@ static void ip_ma_put(struct ip_mc_list *im)
+ 	     pmc != NULL;					\
+ 	     pmc = rtnl_dereference(pmc->next_rcu))
+ 
++static void ip_sf_list_clear_all(struct ip_sf_list *psf)
++{
++	struct ip_sf_list *next;
++
++	while (psf) {
++		next = psf->sf_next;
++		kfree(psf);
++		psf = next;
++	}
++}
++
+ #ifdef CONFIG_IP_MULTICAST
+ 
+ /*
+@@ -633,6 +644,13 @@ static void igmpv3_clear_zeros(struct ip_sf_list **ppsf)
+ 	}
+ }
+ 
++static void kfree_pmc(struct ip_mc_list *pmc)
++{
++	ip_sf_list_clear_all(pmc->sources);
++	ip_sf_list_clear_all(pmc->tomb);
++	kfree(pmc);
++}
++
+ static void igmpv3_send_cr(struct in_device *in_dev)
+ {
+ 	struct ip_mc_list *pmc, *pmc_prev, *pmc_next;
+@@ -669,7 +687,7 @@ static void igmpv3_send_cr(struct in_device *in_dev)
+ 			else
+ 				in_dev->mc_tomb = pmc_next;
+ 			in_dev_put(pmc->interface);
+-			kfree(pmc);
++			kfree_pmc(pmc);
+ 		} else
+ 			pmc_prev = pmc;
+ 	}
+@@ -1215,14 +1233,18 @@ static void igmpv3_del_delrec(struct in_device *in_dev, struct ip_mc_list *im)
+ 		im->interface = pmc->interface;
+ 		if (im->sfmode == MCAST_INCLUDE) {
+ 			im->tomb = pmc->tomb;
++			pmc->tomb = NULL;
++
+ 			im->sources = pmc->sources;
++			pmc->sources = NULL;
++
+ 			for (psf = im->sources; psf; psf = psf->sf_next)
+ 				psf->sf_crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv;
+ 		} else {
+ 			im->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv;
+ 		}
+ 		in_dev_put(pmc->interface);
+-		kfree(pmc);
++		kfree_pmc(pmc);
+ 	}
+ 	spin_unlock_bh(&im->lock);
+ }
+@@ -1243,21 +1265,18 @@ static void igmpv3_clear_delrec(struct in_device *in_dev)
+ 		nextpmc = pmc->next;
+ 		ip_mc_clear_src(pmc);
+ 		in_dev_put(pmc->interface);
+-		kfree(pmc);
++		kfree_pmc(pmc);
+ 	}
+ 	/* clear dead sources, too */
+ 	rcu_read_lock();
+ 	for_each_pmc_rcu(in_dev, pmc) {
+-		struct ip_sf_list *psf, *psf_next;
++		struct ip_sf_list *psf;
+ 
+ 		spin_lock_bh(&pmc->lock);
+ 		psf = pmc->tomb;
+ 		pmc->tomb = NULL;
+ 		spin_unlock_bh(&pmc->lock);
+-		for (; psf; psf = psf_next) {
+-			psf_next = psf->sf_next;
+-			kfree(psf);
+-		}
++		ip_sf_list_clear_all(psf);
+ 	}
+ 	rcu_read_unlock();
+ }
+@@ -2123,7 +2142,7 @@ static int ip_mc_add_src(struct in_device *in_dev, __be32 *pmca, int sfmode,
+ 
+ static void ip_mc_clear_src(struct ip_mc_list *pmc)
+ {
+-	struct ip_sf_list *psf, *nextpsf, *tomb, *sources;
++	struct ip_sf_list *tomb, *sources;
+ 
+ 	spin_lock_bh(&pmc->lock);
+ 	tomb = pmc->tomb;
+@@ -2135,14 +2154,8 @@ static void ip_mc_clear_src(struct ip_mc_list *pmc)
+ 	pmc->sfcount[MCAST_EXCLUDE] = 1;
+ 	spin_unlock_bh(&pmc->lock);
+ 
+-	for (psf = tomb; psf; psf = nextpsf) {
+-		nextpsf = psf->sf_next;
+-		kfree(psf);
+-	}
+-	for (psf = sources; psf; psf = nextpsf) {
+-		nextpsf = psf->sf_next;
+-		kfree(psf);
+-	}
++	ip_sf_list_clear_all(tomb);
++	ip_sf_list_clear_all(sources);
+ }
+ 
+ /* Join a multicast group
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index e8bb2e85c5a4..ac770940adb9 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -883,7 +883,7 @@ static int __ip_append_data(struct sock *sk,
+ 	int csummode = CHECKSUM_NONE;
+ 	struct rtable *rt = (struct rtable *)cork->dst;
+ 	unsigned int wmem_alloc_delta = 0;
+-	bool paged, extra_uref;
++	bool paged, extra_uref = false;
+ 	u32 tskey = 0;
+ 
+ 	skb = skb_peek_tail(queue);
+@@ -923,7 +923,7 @@ static int __ip_append_data(struct sock *sk,
+ 		uarg = sock_zerocopy_realloc(sk, length, skb_zcopy(skb));
+ 		if (!uarg)
+ 			return -ENOBUFS;
+-		extra_uref = true;
++		extra_uref = !skb;	/* only extra ref if !MSG_MORE */
+ 		if (rt->dst.dev->features & NETIF_F_SG &&
+ 		    csummode == CHECKSUM_PARTIAL) {
+ 			paged = true;
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 6fdf1c195d8e..df6afb092936 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -500,15 +500,17 @@ EXPORT_SYMBOL(ip_idents_reserve);
+ 
+ void __ip_select_ident(struct net *net, struct iphdr *iph, int segs)
+ {
+-	static u32 ip_idents_hashrnd __read_mostly;
+ 	u32 hash, id;
+ 
+-	net_get_random_once(&ip_idents_hashrnd, sizeof(ip_idents_hashrnd));
++	/* Note the following code is not safe, but this is okay. */
++	if (unlikely(siphash_key_is_zero(&net->ipv4.ip_id_key)))
++		get_random_bytes(&net->ipv4.ip_id_key,
++				 sizeof(net->ipv4.ip_id_key));
+ 
+-	hash = jhash_3words((__force u32)iph->daddr,
++	hash = siphash_3u32((__force u32)iph->daddr,
+ 			    (__force u32)iph->saddr,
+-			    iph->protocol ^ net_hash_mix(net),
+-			    ip_idents_hashrnd);
++			    iph->protocol,
++			    &net->ipv4.ip_id_key);
+ 	id = ip_idents_reserve(hash, segs);
+ 	iph->id = htons(id);
+ }
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index e51f3c648b09..b5e0c85bcd57 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1275,7 +1275,7 @@ static int __ip6_append_data(struct sock *sk,
+ 	int csummode = CHECKSUM_NONE;
+ 	unsigned int maxnonfragsize, headersize;
+ 	unsigned int wmem_alloc_delta = 0;
+-	bool paged, extra_uref;
++	bool paged, extra_uref = false;
+ 
+ 	skb = skb_peek_tail(queue);
+ 	if (!skb) {
+@@ -1344,7 +1344,7 @@ emsgsize:
+ 		uarg = sock_zerocopy_realloc(sk, length, skb_zcopy(skb));
+ 		if (!uarg)
+ 			return -ENOBUFS;
+-		extra_uref = true;
++		extra_uref = !skb;	/* only extra ref if !MSG_MORE */
+ 		if (rt->dst.dev->features & NETIF_F_SG &&
+ 		    csummode == CHECKSUM_PARTIAL) {
+ 			paged = true;
+diff --git a/net/ipv6/output_core.c b/net/ipv6/output_core.c
+index 4fe7c90962dd..868ae23dbae1 100644
+--- a/net/ipv6/output_core.c
++++ b/net/ipv6/output_core.c
+@@ -10,15 +10,25 @@
+ #include <net/secure_seq.h>
+ #include <linux/netfilter.h>
+ 
+-static u32 __ipv6_select_ident(struct net *net, u32 hashrnd,
++static u32 __ipv6_select_ident(struct net *net,
+ 			       const struct in6_addr *dst,
+ 			       const struct in6_addr *src)
+ {
++	const struct {
++		struct in6_addr dst;
++		struct in6_addr src;
++	} __aligned(SIPHASH_ALIGNMENT) combined = {
++		.dst = *dst,
++		.src = *src,
++	};
+ 	u32 hash, id;
+ 
+-	hash = __ipv6_addr_jhash(dst, hashrnd);
+-	hash = __ipv6_addr_jhash(src, hash);
+-	hash ^= net_hash_mix(net);
++	/* Note the following code is not safe, but this is okay. */
++	if (unlikely(siphash_key_is_zero(&net->ipv4.ip_id_key)))
++		get_random_bytes(&net->ipv4.ip_id_key,
++				 sizeof(net->ipv4.ip_id_key));
++
++	hash = siphash(&combined, sizeof(combined), &net->ipv4.ip_id_key);
+ 
+ 	/* Treat id of 0 as unset and if we get 0 back from ip_idents_reserve,
+ 	 * set the hight order instead thus minimizing possible future
+@@ -41,7 +51,6 @@ static u32 __ipv6_select_ident(struct net *net, u32 hashrnd,
+  */
+ __be32 ipv6_proxy_select_ident(struct net *net, struct sk_buff *skb)
+ {
+-	static u32 ip6_proxy_idents_hashrnd __read_mostly;
+ 	struct in6_addr buf[2];
+ 	struct in6_addr *addrs;
+ 	u32 id;
+@@ -53,11 +62,7 @@ __be32 ipv6_proxy_select_ident(struct net *net, struct sk_buff *skb)
+ 	if (!addrs)
+ 		return 0;
+ 
+-	net_get_random_once(&ip6_proxy_idents_hashrnd,
+-			    sizeof(ip6_proxy_idents_hashrnd));
+-
+-	id = __ipv6_select_ident(net, ip6_proxy_idents_hashrnd,
+-				 &addrs[1], &addrs[0]);
++	id = __ipv6_select_ident(net, &addrs[1], &addrs[0]);
+ 	return htonl(id);
+ }
+ EXPORT_SYMBOL_GPL(ipv6_proxy_select_ident);
+@@ -66,12 +71,9 @@ __be32 ipv6_select_ident(struct net *net,
+ 			 const struct in6_addr *daddr,
+ 			 const struct in6_addr *saddr)
+ {
+-	static u32 ip6_idents_hashrnd __read_mostly;
+ 	u32 id;
+ 
+-	net_get_random_once(&ip6_idents_hashrnd, sizeof(ip6_idents_hashrnd));
+-
+-	id = __ipv6_select_ident(net, ip6_idents_hashrnd, daddr, saddr);
++	id = __ipv6_select_ident(net, daddr, saddr);
+ 	return htonl(id);
+ }
+ EXPORT_SYMBOL(ipv6_select_ident);
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index 5a426226c762..5cb14eabfc65 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -287,7 +287,9 @@ static int rawv6_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ 			/* Binding to link-local address requires an interface */
+ 			if (!sk->sk_bound_dev_if)
+ 				goto out_unlock;
++		}
+ 
++		if (sk->sk_bound_dev_if) {
+ 			err = -ENODEV;
+ 			dev = dev_get_by_index_rcu(sock_net(sk),
+ 						   sk->sk_bound_dev_if);
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index e470589fb93b..ab348489bd8a 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -2442,6 +2442,12 @@ static struct rt6_info *__ip6_route_redirect(struct net *net,
+ 	struct fib6_info *rt;
+ 	struct fib6_node *fn;
+ 
++	/* l3mdev_update_flow overrides oif if the device is enslaved; in
++	 * this case we must match on the real ingress device, so reset it
++	 */
++	if (fl6->flowi6_flags & FLOWI_FLAG_SKIP_NH_OIF)
++		fl6->flowi6_oif = skb->dev->ifindex;
++
+ 	/* Get the "current" route for this destination and
+ 	 * check if the redirect has come from appropriate router.
+ 	 *
+diff --git a/net/llc/llc_output.c b/net/llc/llc_output.c
+index 94425e421213..9e4b6bcf6920 100644
+--- a/net/llc/llc_output.c
++++ b/net/llc/llc_output.c
+@@ -72,6 +72,8 @@ int llc_build_and_send_ui_pkt(struct llc_sap *sap, struct sk_buff *skb,
+ 	rc = llc_mac_hdr_init(skb, skb->dev->dev_addr, dmac);
+ 	if (likely(!rc))
+ 		rc = dev_queue_xmit(skb);
++	else
++		kfree_skb(skb);
+ 	return rc;
+ }
+ 
+diff --git a/net/sched/act_api.c b/net/sched/act_api.c
+index 5a87e271d35a..5b56b1cb2417 100644
+--- a/net/sched/act_api.c
++++ b/net/sched/act_api.c
+@@ -800,7 +800,7 @@ int tcf_action_dump(struct sk_buff *skb, struct tc_action *actions[],
+ 
+ 	for (i = 0; i < TCA_ACT_MAX_PRIO && actions[i]; i++) {
+ 		a = actions[i];
+-		nest = nla_nest_start(skb, a->order);
++		nest = nla_nest_start(skb, i + 1);
+ 		if (nest == NULL)
+ 			goto nla_put_failure;
+ 		err = tcf_action_dump_1(skb, a, bind, ref);
+@@ -1300,7 +1300,6 @@ tca_action_gd(struct net *net, struct nlattr *nla, struct nlmsghdr *n,
+ 			ret = PTR_ERR(act);
+ 			goto err;
+ 		}
+-		act->order = i;
+ 		attr_size += tcf_action_fill_size(act);
+ 		actions[i - 1] = act;
+ 	}
+diff --git a/net/tipc/core.c b/net/tipc/core.c
+index d7b0688c98dd..3ecca3b88bf8 100644
+--- a/net/tipc/core.c
++++ b/net/tipc/core.c
+@@ -66,10 +66,6 @@ static int __net_init tipc_init_net(struct net *net)
+ 	INIT_LIST_HEAD(&tn->node_list);
+ 	spin_lock_init(&tn->node_list_lock);
+ 
+-	err = tipc_socket_init();
+-	if (err)
+-		goto out_socket;
+-
+ 	err = tipc_sk_rht_init(net);
+ 	if (err)
+ 		goto out_sk_rht;
+@@ -79,9 +75,6 @@ static int __net_init tipc_init_net(struct net *net)
+ 		goto out_nametbl;
+ 
+ 	INIT_LIST_HEAD(&tn->dist_queue);
+-	err = tipc_topsrv_start(net);
+-	if (err)
+-		goto out_subscr;
+ 
+ 	err = tipc_bcast_init(net);
+ 	if (err)
+@@ -90,25 +83,19 @@ static int __net_init tipc_init_net(struct net *net)
+ 	return 0;
+ 
+ out_bclink:
+-	tipc_bcast_stop(net);
+-out_subscr:
+ 	tipc_nametbl_stop(net);
+ out_nametbl:
+ 	tipc_sk_rht_destroy(net);
+ out_sk_rht:
+-	tipc_socket_stop();
+-out_socket:
+ 	return err;
+ }
+ 
+ static void __net_exit tipc_exit_net(struct net *net)
+ {
+-	tipc_topsrv_stop(net);
+ 	tipc_net_stop(net);
+ 	tipc_bcast_stop(net);
+ 	tipc_nametbl_stop(net);
+ 	tipc_sk_rht_destroy(net);
+-	tipc_socket_stop();
+ }
+ 
+ static struct pernet_operations tipc_net_ops = {
+@@ -118,6 +105,11 @@ static struct pernet_operations tipc_net_ops = {
+ 	.size = sizeof(struct tipc_net),
+ };
+ 
++static struct pernet_operations tipc_topsrv_net_ops = {
++	.init = tipc_topsrv_init_net,
++	.exit = tipc_topsrv_exit_net,
++};
++
+ static int __init tipc_init(void)
+ {
+ 	int err;
+@@ -144,6 +136,14 @@ static int __init tipc_init(void)
+ 	if (err)
+ 		goto out_pernet;
+ 
++	err = tipc_socket_init();
++	if (err)
++		goto out_socket;
++
++	err = register_pernet_subsys(&tipc_topsrv_net_ops);
++	if (err)
++		goto out_pernet_topsrv;
++
+ 	err = tipc_bearer_setup();
+ 	if (err)
+ 		goto out_bearer;
+@@ -151,6 +151,10 @@ static int __init tipc_init(void)
+ 	pr_info("Started in single node mode\n");
+ 	return 0;
+ out_bearer:
++	unregister_pernet_subsys(&tipc_topsrv_net_ops);
++out_pernet_topsrv:
++	tipc_socket_stop();
++out_socket:
+ 	unregister_pernet_subsys(&tipc_net_ops);
+ out_pernet:
+ 	tipc_unregister_sysctl();
+@@ -166,6 +170,8 @@ out_netlink:
+ static void __exit tipc_exit(void)
+ {
+ 	tipc_bearer_cleanup();
++	unregister_pernet_subsys(&tipc_topsrv_net_ops);
++	tipc_socket_stop();
+ 	unregister_pernet_subsys(&tipc_net_ops);
+ 	tipc_netlink_stop();
+ 	tipc_netlink_compat_stop();
+diff --git a/net/tipc/subscr.h b/net/tipc/subscr.h
+index d793b4343885..aa015c233898 100644
+--- a/net/tipc/subscr.h
++++ b/net/tipc/subscr.h
+@@ -77,8 +77,9 @@ void tipc_sub_report_overlap(struct tipc_subscription *sub,
+ 			     u32 found_lower, u32 found_upper,
+ 			     u32 event, u32 port, u32 node,
+ 			     u32 scope, int must);
+-int tipc_topsrv_start(struct net *net);
+-void tipc_topsrv_stop(struct net *net);
++
++int __net_init tipc_topsrv_init_net(struct net *net);
++void __net_exit tipc_topsrv_exit_net(struct net *net);
+ 
+ void tipc_sub_put(struct tipc_subscription *subscription);
+ void tipc_sub_get(struct tipc_subscription *subscription);
+diff --git a/net/tipc/topsrv.c b/net/tipc/topsrv.c
+index b45932d78004..f345662890a6 100644
+--- a/net/tipc/topsrv.c
++++ b/net/tipc/topsrv.c
+@@ -635,7 +635,7 @@ static void tipc_topsrv_work_stop(struct tipc_topsrv *s)
+ 	destroy_workqueue(s->send_wq);
+ }
+ 
+-int tipc_topsrv_start(struct net *net)
++static int tipc_topsrv_start(struct net *net)
+ {
+ 	struct tipc_net *tn = tipc_net(net);
+ 	const char name[] = "topology_server";
+@@ -668,7 +668,7 @@ int tipc_topsrv_start(struct net *net)
+ 	return ret;
+ }
+ 
+-void tipc_topsrv_stop(struct net *net)
++static void tipc_topsrv_stop(struct net *net)
+ {
+ 	struct tipc_topsrv *srv = tipc_topsrv(net);
+ 	struct socket *lsock = srv->listener;
+@@ -693,3 +693,13 @@ void tipc_topsrv_stop(struct net *net)
+ 	idr_destroy(&srv->conn_idr);
+ 	kfree(srv);
+ }
++
++int __net_init tipc_topsrv_init_net(struct net *net)
++{
++	return tipc_topsrv_start(net);
++}
++
++void __net_exit tipc_topsrv_exit_net(struct net *net)
++{
++	tipc_topsrv_stop(net);
++}
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index 14dedb24fa7b..0fd8f0997ff5 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -943,12 +943,6 @@ void tls_device_offload_cleanup_rx(struct sock *sk)
+ 	if (!netdev)
+ 		goto out;
+ 
+-	if (!(netdev->features & NETIF_F_HW_TLS_RX)) {
+-		pr_err_ratelimited("%s: device is missing NETIF_F_HW_TLS_RX cap\n",
+-				   __func__);
+-		goto out;
+-	}
+-
+ 	netdev->tlsdev_ops->tls_dev_del(netdev, tls_ctx,
+ 					TLS_OFFLOAD_CTX_DIR_RX);
+ 
+@@ -1007,7 +1001,8 @@ static int tls_dev_event(struct notifier_block *this, unsigned long event,
+ {
+ 	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+ 
+-	if (!(dev->features & (NETIF_F_HW_TLS_RX | NETIF_F_HW_TLS_TX)))
++	if (!dev->tlsdev_ops &&
++	    !(dev->features & (NETIF_F_HW_TLS_RX | NETIF_F_HW_TLS_TX)))
+ 		return NOTIFY_DONE;
+ 
+ 	switch (event) {
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 29d6af43dd24..d350ff73a391 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -1685,15 +1685,14 @@ int tls_sw_recvmsg(struct sock *sk,
+ 		copied = err;
+ 	}
+ 
+-	len = len - copied;
+-	if (len) {
+-		target = sock_rcvlowat(sk, flags & MSG_WAITALL, len);
+-		timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
+-	} else {
++	if (len <= copied)
+ 		goto recv_end;
+-	}
+ 
+-	do {
++	target = sock_rcvlowat(sk, flags & MSG_WAITALL, len);
++	len = len - copied;
++	timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
++
++	while (len && (decrypted + copied < target || ctx->recv_pkt)) {
+ 		bool retain_skb = false;
+ 		bool zc = false;
+ 		int to_decrypt;
+@@ -1824,11 +1823,7 @@ pick_next_record:
+ 		} else {
+ 			break;
+ 		}
+-
+-		/* If we have a new message from strparser, continue now. */
+-		if (decrypted >= target && !ctx->recv_pkt)
+-			break;
+-	} while (len);
++	}
+ 
+ recv_end:
+ 	if (num_async) {
+diff --git a/tools/testing/selftests/net/tls.c b/tools/testing/selftests/net/tls.c
+index 47ddfc154036..278c86134556 100644
+--- a/tools/testing/selftests/net/tls.c
++++ b/tools/testing/selftests/net/tls.c
+@@ -442,6 +442,21 @@ TEST_F(tls, multiple_send_single_recv)
+ 	EXPECT_EQ(memcmp(send_mem, recv_mem + send_len, send_len), 0);
+ }
+ 
++TEST_F(tls, single_send_multiple_recv_non_align)
++{
++	const unsigned int total_len = 15;
++	const unsigned int recv_len = 10;
++	char recv_mem[recv_len * 2];
++	char send_mem[total_len];
++
++	EXPECT_GE(send(self->fd, send_mem, total_len, 0), 0);
++	memset(recv_mem, 0, total_len);
++
++	EXPECT_EQ(recv(self->cfd, recv_mem, recv_len, 0), recv_len);
++	EXPECT_EQ(recv(self->cfd, recv_mem + recv_len, recv_len, 0), 5);
++	EXPECT_EQ(memcmp(send_mem, recv_mem, total_len), 0);
++}
++
+ TEST_F(tls, recv_partial)
+ {
+ 	char const *test_str = "test_read_partial";
+@@ -575,6 +590,25 @@ TEST_F(tls, recv_peek_large_buf_mult_recs)
+ 	EXPECT_EQ(memcmp(test_str, buf, len), 0);
+ }
+ 
++TEST_F(tls, recv_lowat)
++{
++	char send_mem[10] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
++	char recv_mem[20];
++	int lowat = 8;
++
++	EXPECT_EQ(send(self->fd, send_mem, 10, 0), 10);
++	EXPECT_EQ(send(self->fd, send_mem, 5, 0), 5);
++
++	memset(recv_mem, 0, 20);
++	EXPECT_EQ(setsockopt(self->cfd, SOL_SOCKET, SO_RCVLOWAT,
++			     &lowat, sizeof(lowat)), 0);
++	EXPECT_EQ(recv(self->cfd, recv_mem, 1, MSG_WAITALL), 1);
++	EXPECT_EQ(recv(self->cfd, recv_mem + 1, 6, MSG_WAITALL), 6);
++	EXPECT_EQ(recv(self->cfd, recv_mem + 7, 10, 0), 8);
++
++	EXPECT_EQ(memcmp(send_mem, recv_mem, 10), 0);
++	EXPECT_EQ(memcmp(send_mem, recv_mem + 10, 5), 0);
++}
+ 
+ TEST_F(tls, pollin)
+ {


^ permalink raw reply related	[flat|nested] 23+ messages in thread
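
The ipv4/ipv6 ident hunks above replace a jhash of the addresses with a
keyed siphash whose per-namespace key (net->ipv4.ip_id_key) is seeded
lazily on first use; the "not safe, but this is okay" comment refers to
the benign race where two CPUs may seed the key concurrently. Below is a
minimal userspace C sketch of that lazy-seeding pattern, not the kernel
code itself: keyed_hash_3u32() is a hypothetical stand-in for the
kernel's siphash_3u32(), and rand() stands in for get_random_bytes().

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for the kernel's 128-bit siphash_key_t. */
struct hash_key { uint64_t k0, k1; };

static int key_is_zero(const struct hash_key *k)
{
	return (k->k0 | k->k1) == 0;
}

/* Hypothetical keyed hash; the kernel uses siphash_3u32() here. */
static uint32_t keyed_hash_3u32(uint32_t a, uint32_t b, uint32_t c,
				const struct hash_key *k)
{
	uint64_t h = k->k0 ^ 0xcbf29ce484222325ULL;
	uint32_t in[3] = { a, b, c };
	int i;

	for (i = 0; i < 3; i++) {
		h ^= in[i];
		h *= 0x100000001b3ULL;	/* FNV-style mixing, demo only */
	}
	return (uint32_t)((h ^ k->k1) ^ (h >> 32));
}

static struct hash_key ip_id_key;	/* per-netns in the kernel */

static uint32_t select_ident_hash(uint32_t daddr, uint32_t saddr,
				  uint8_t protocol)
{
	/*
	 * Lazy seeding: if two threads race here, both generate a key
	 * and one value wins, which is harmless -- hence the "not safe,
	 * but this is okay" comment in the patch.
	 */
	if (key_is_zero(&ip_id_key)) {
		ip_id_key.k0 = ((uint64_t)rand() << 32) | (uint32_t)rand();
		ip_id_key.k1 = ((uint64_t)rand() << 32) | (uint32_t)rand();
	}
	return keyed_hash_3u32(daddr, saddr, protocol, &ip_id_key);
}

int main(void)
{
	srand(1234);	/* demo seed; the kernel uses get_random_bytes() */
	printf("hash=%08x\n", select_ident_hash(0x0a000001, 0x0a000002, 6));
	return 0;
}

The design point of the patch is that jhash is not a cryptographic PRF,
so observed IP IDs could leak the hashing secret to off-path attackers;
with a keyed per-namespace siphash the net_hash_mix() fold-in also
becomes unnecessary, which is why both call sites drop it.
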

* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-06-09 16:20 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2019-06-09 16:20 UTC (permalink / raw
  To: gentoo-commits

commit:     516a9e00306527e226708dafdd5012629e65d623
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jun  9 16:19:55 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jun  9 16:19:55 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=516a9e00

Linux patch 5.1.8

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1007_linux-5.1.8.patch | 3548 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3552 insertions(+)

diff --git a/0000_README b/0000_README
index 7c0827d..c561860 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  1006_linux-5.1.7.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.1.7
 
+Patch:  1007_linux-5.1.8.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.1.8
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1007_linux-5.1.8.patch b/1007_linux-5.1.8.patch
new file mode 100644
index 0000000..b0a6865
--- /dev/null
+++ b/1007_linux-5.1.8.patch
@@ -0,0 +1,3548 @@
+diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
+index 20f92c16ffbf..26b3a3e7daf6 100644
+--- a/Documentation/admin-guide/cgroup-v2.rst
++++ b/Documentation/admin-guide/cgroup-v2.rst
+@@ -177,6 +177,15 @@ cgroup v2 currently supports the following mount options.
+ 	ignored on non-init namespace mounts.  Please refer to the
+ 	Delegation section for details.
+ 
++  memory_localevents
++
++        Only populate memory.events with data for the current cgroup,
++        and not any subtrees. This is legacy behaviour; the default
++        behaviour without this option is to include subtree counts.
++        This option is system wide and can only be set on mount or
++        modified through remount from the init namespace. The mount
++        option is ignored on non-init namespace mounts.
++
+ 
+ Organizing Processes and Threads
+ --------------------------------
+diff --git a/Documentation/conf.py b/Documentation/conf.py
+index 72647a38b5c2..7ace3f8852bd 100644
+--- a/Documentation/conf.py
++++ b/Documentation/conf.py
+@@ -37,7 +37,7 @@ needs_sphinx = '1.3'
+ extensions = ['kerneldoc', 'rstFlatTable', 'kernel_include', 'cdomain', 'kfigure', 'sphinx.ext.ifconfig']
+ 
+ # The name of the math extension changed on Sphinx 1.4
+-if major == 1 and minor > 3:
++if (major == 1 and minor > 3) or (major > 1):
+     extensions.append("sphinx.ext.imgmath")
+ else:
+     extensions.append("sphinx.ext.pngmath")
+diff --git a/Documentation/sphinx/kerneldoc.py b/Documentation/sphinx/kerneldoc.py
+index 9d0a7f08f93b..1159405cb920 100644
+--- a/Documentation/sphinx/kerneldoc.py
++++ b/Documentation/sphinx/kerneldoc.py
+@@ -37,7 +37,19 @@ import glob
+ from docutils import nodes, statemachine
+ from docutils.statemachine import ViewList
+ from docutils.parsers.rst import directives, Directive
+-from sphinx.ext.autodoc import AutodocReporter
++
++#
++# AutodocReporter is only good up to Sphinx 1.7
++#
++import sphinx
++
++Use_SSI = sphinx.__version__[:3] >= '1.7'
++if Use_SSI:
++    from sphinx.util.docutils import switch_source_input
++else:
++    from sphinx.ext.autodoc import AutodocReporter
++
++import kernellog
+ 
+ __version__  = '1.0'
+ 
+@@ -90,7 +102,8 @@ class KernelDocDirective(Directive):
+         cmd += [filename]
+ 
+         try:
+-            env.app.verbose('calling kernel-doc \'%s\'' % (" ".join(cmd)))
++            kernellog.verbose(env.app,
++                              'calling kernel-doc \'%s\'' % (" ".join(cmd)))
+ 
+             p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+             out, err = p.communicate()
+@@ -100,7 +113,8 @@ class KernelDocDirective(Directive):
+             if p.returncode != 0:
+                 sys.stderr.write(err)
+ 
+-                env.app.warn('kernel-doc \'%s\' failed with return code %d' % (" ".join(cmd), p.returncode))
++                kernellog.warn(env.app,
++                               'kernel-doc \'%s\' failed with return code %d' % (" ".join(cmd), p.returncode))
+                 return [nodes.error(None, nodes.paragraph(text = "kernel-doc missing"))]
+             elif env.config.kerneldoc_verbosity > 0:
+                 sys.stderr.write(err)
+@@ -121,20 +135,28 @@ class KernelDocDirective(Directive):
+                     lineoffset += 1
+ 
+             node = nodes.section()
+-            buf = self.state.memo.title_styles, self.state.memo.section_level, self.state.memo.reporter
++            self.do_parse(result, node)
++
++            return node.children
++
++        except Exception as e:  # pylint: disable=W0703
++            kernellog.warn(env.app, 'kernel-doc \'%s\' processing failed with: %s' %
++                           (" ".join(cmd), str(e)))
++            return [nodes.error(None, nodes.paragraph(text = "kernel-doc missing"))]
++
++    def do_parse(self, result, node):
++        if Use_SSI:
++            with switch_source_input(self.state, result):
++                self.state.nested_parse(result, 0, node, match_titles=1)
++        else:
++            save = self.state.memo.title_styles, self.state.memo.section_level, self.state.memo.reporter
+             self.state.memo.reporter = AutodocReporter(result, self.state.memo.reporter)
+             self.state.memo.title_styles, self.state.memo.section_level = [], 0
+             try:
+                 self.state.nested_parse(result, 0, node, match_titles=1)
+             finally:
+-                self.state.memo.title_styles, self.state.memo.section_level, self.state.memo.reporter = buf
++                self.state.memo.title_styles, self.state.memo.section_level, self.state.memo.reporter = save
+ 
+-            return node.children
+-
+-        except Exception as e:  # pylint: disable=W0703
+-            env.app.warn('kernel-doc \'%s\' processing failed with: %s' %
+-                         (" ".join(cmd), str(e)))
+-            return [nodes.error(None, nodes.paragraph(text = "kernel-doc missing"))]
+ 
+ def setup(app):
+     app.add_config_value('kerneldoc_bin', None, 'env')
+diff --git a/Documentation/sphinx/kernellog.py b/Documentation/sphinx/kernellog.py
+new file mode 100644
+index 000000000000..af924f51a7dc
+--- /dev/null
++++ b/Documentation/sphinx/kernellog.py
+@@ -0,0 +1,28 @@
++# SPDX-License-Identifier: GPL-2.0
++#
++# Sphinx has deprecated its older logging interface, but the replacement
++# only goes back to 1.6.  So here's a wrapper layer to keep around for
++# as long as we support 1.4.
++#
++import sphinx
++
++if sphinx.__version__[:3] >= '1.6':
++    UseLogging = True
++    from sphinx.util import logging
++    logger = logging.getLogger('kerneldoc')
++else:
++    UseLogging = False
++
++def warn(app, message):
++    if UseLogging:
++        logger.warning(message)
++    else:
++        app.warn(message)
++
++def verbose(app, message):
++    if UseLogging:
++        logger.verbose(message)
++    else:
++        app.verbose(message)
++
++
+diff --git a/Documentation/sphinx/kfigure.py b/Documentation/sphinx/kfigure.py
+index b97228d2cc0e..fbfe6693bb60 100644
+--- a/Documentation/sphinx/kfigure.py
++++ b/Documentation/sphinx/kfigure.py
+@@ -60,6 +60,8 @@ import sphinx
+ from sphinx.util.nodes import clean_astext
+ from six import iteritems
+ 
++import kernellog
++
+ PY3 = sys.version_info[0] == 3
+ 
+ if PY3:
+@@ -171,20 +173,20 @@ def setupTools(app):
+     This function is called once, when the builder is initiated.
+     """
+     global dot_cmd, convert_cmd   # pylint: disable=W0603
+-    app.verbose("kfigure: check installed tools ...")
++    kernellog.verbose(app, "kfigure: check installed tools ...")
+ 
+     dot_cmd = which('dot')
+     convert_cmd = which('convert')
+ 
+     if dot_cmd:
+-        app.verbose("use dot(1) from: " + dot_cmd)
++        kernellog.verbose(app, "use dot(1) from: " + dot_cmd)
+     else:
+-        app.warn("dot(1) not found, for better output quality install "
+-                 "graphviz from http://www.graphviz.org")
++        kernellog.warn(app, "dot(1) not found, for better output quality install "
++                       "graphviz from http://www.graphviz.org")
+     if convert_cmd:
+-        app.verbose("use convert(1) from: " + convert_cmd)
++        kernellog.verbose(app, "use convert(1) from: " + convert_cmd)
+     else:
+-        app.warn(
++        kernellog.warn(app,
+             "convert(1) not found, for SVG to PDF conversion install "
+             "ImageMagick (https://www.imagemagick.org)")
+ 
+@@ -220,12 +222,13 @@ def convert_image(img_node, translator, src_fname=None):
+ 
+     # in kernel builds, use 'make SPHINXOPTS=-v' to see verbose messages
+ 
+-    app.verbose('assert best format for: ' + img_node['uri'])
++    kernellog.verbose(app, 'assert best format for: ' + img_node['uri'])
+ 
+     if in_ext == '.dot':
+ 
+         if not dot_cmd:
+-            app.verbose("dot from graphviz not available / include DOT raw.")
++            kernellog.verbose(app,
++                              "dot from graphviz not available / include DOT raw.")
+             img_node.replace_self(file2literal(src_fname))
+ 
+         elif translator.builder.format == 'latex':
+@@ -252,7 +255,8 @@ def convert_image(img_node, translator, src_fname=None):
+ 
+         if translator.builder.format == 'latex':
+             if convert_cmd is None:
+-                app.verbose("no SVG to PDF conversion available / include SVG raw.")
++                kernellog.verbose(app,
++                                  "no SVG to PDF conversion available / include SVG raw.")
+                 img_node.replace_self(file2literal(src_fname))
+             else:
+                 dst_fname = path.join(translator.builder.outdir, fname + '.pdf')
+@@ -265,18 +269,19 @@ def convert_image(img_node, translator, src_fname=None):
+         _name = dst_fname[len(translator.builder.outdir) + 1:]
+ 
+         if isNewer(dst_fname, src_fname):
+-            app.verbose("convert: {out}/%s already exists and is newer" % _name)
++            kernellog.verbose(app,
++                              "convert: {out}/%s already exists and is newer" % _name)
+ 
+         else:
+             ok = False
+             mkdir(path.dirname(dst_fname))
+ 
+             if in_ext == '.dot':
+-                app.verbose('convert DOT to: {out}/' + _name)
++                kernellog.verbose(app, 'convert DOT to: {out}/' + _name)
+                 ok = dot2format(app, src_fname, dst_fname)
+ 
+             elif in_ext == '.svg':
+-                app.verbose('convert SVG to: {out}/' + _name)
++                kernellog.verbose(app, 'convert SVG to: {out}/' + _name)
+                 ok = svg2pdf(app, src_fname, dst_fname)
+ 
+             if not ok:
+@@ -305,7 +310,8 @@ def dot2format(app, dot_fname, out_fname):
+     with open(out_fname, "w") as out:
+         exit_code = subprocess.call(cmd, stdout = out)
+         if exit_code != 0:
+-            app.warn("Error #%d when calling: %s" % (exit_code, " ".join(cmd)))
++            kernellog.warn(app,
++                          "Error #%d when calling: %s" % (exit_code, " ".join(cmd)))
+     return bool(exit_code == 0)
+ 
+ def svg2pdf(app, svg_fname, pdf_fname):
+@@ -322,7 +328,7 @@ def svg2pdf(app, svg_fname, pdf_fname):
+     # use stdout and stderr from parent
+     exit_code = subprocess.call(cmd)
+     if exit_code != 0:
+-        app.warn("Error #%d when calling: %s" % (exit_code, " ".join(cmd)))
++        kernellog.warn(app, "Error #%d when calling: %s" % (exit_code, " ".join(cmd)))
+     return bool(exit_code == 0)
+ 
+ 
+@@ -415,15 +421,15 @@ def visit_kernel_render(self, node):
+     app = self.builder.app
+     srclang = node.get('srclang')
+ 
+-    app.verbose('visit kernel-render node lang: "%s"' % (srclang))
++    kernellog.verbose(app, 'visit kernel-render node lang: "%s"' % (srclang))
+ 
+     tmp_ext = RENDER_MARKUP_EXT.get(srclang, None)
+     if tmp_ext is None:
+-        app.warn('kernel-render: "%s" unknown / include raw.' % (srclang))
++        kernellog.warn(app, 'kernel-render: "%s" unknown / include raw.' % (srclang))
+         return
+ 
+     if not dot_cmd and tmp_ext == '.dot':
+-        app.verbose("dot from graphviz not available / include raw.")
++        kernellog.verbose(app, "dot from graphviz not available / include raw.")
+         return
+ 
+     literal_block = node[0]
+diff --git a/Makefile b/Makefile
+index 299578ce385a..3027a0ce7a02 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arm64/kernel/sys.c b/arch/arm64/kernel/sys.c
+index 6f91e8116514..162a95ed0881 100644
+--- a/arch/arm64/kernel/sys.c
++++ b/arch/arm64/kernel/sys.c
+@@ -50,7 +50,7 @@ SYSCALL_DEFINE1(arm64_personality, unsigned int, personality)
+ /*
+  * Wrappers to pass the pt_regs argument.
+  */
+-#define sys_personality		sys_arm64_personality
++#define __arm64_sys_personality		__arm64_sys_arm64_personality
+ 
+ asmlinkage long sys_ni_syscall(const struct pt_regs *);
+ #define __arm64_sys_ni_syscall	sys_ni_syscall
+diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
+index 29755989f616..3102685a6cd7 100644
+--- a/arch/arm64/kernel/traps.c
++++ b/arch/arm64/kernel/traps.c
+@@ -256,7 +256,10 @@ void arm64_force_sig_fault(int signo, int code, void __user *addr,
+ 			   const char *str)
+ {
+ 	arm64_show_signal(signo, str);
+-	force_sig_fault(signo, code, addr, current);
++	if (signo == SIGKILL)
++		force_sig(SIGKILL, current);
++	else
++		force_sig_fault(signo, code, addr, current);
+ }
+ 
+ void arm64_force_sig_mceerr(int code, void __user *addr, short lsb,
+diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
+index 6d0517ac18e5..0369f26ab96d 100644
+--- a/arch/mips/kvm/mips.c
++++ b/arch/mips/kvm/mips.c
+@@ -1122,6 +1122,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+ 	case KVM_CAP_MAX_VCPUS:
+ 		r = KVM_MAX_VCPUS;
+ 		break;
++	case KVM_CAP_MAX_VCPU_ID:
++		r = KVM_MAX_VCPU_ID;
++		break;
+ 	case KVM_CAP_MIPS_FPU:
+ 		/* We don't handle systems with inconsistent cpu_has_fpu */
+ 		r = !!raw_cpu_has_fpu;
+diff --git a/arch/powerpc/kernel/kexec_elf_64.c b/arch/powerpc/kernel/kexec_elf_64.c
+index ba4f18a43ee8..52a29fc73730 100644
+--- a/arch/powerpc/kernel/kexec_elf_64.c
++++ b/arch/powerpc/kernel/kexec_elf_64.c
+@@ -547,6 +547,7 @@ static int elf_exec_load(struct kimage *image, struct elfhdr *ehdr,
+ 		kbuf.memsz = phdr->p_memsz;
+ 		kbuf.buf_align = phdr->p_align;
+ 		kbuf.buf_min = phdr->p_paddr + base;
++		kbuf.mem = KEXEC_BUF_MEM_UNKNOWN;
+ 		ret = kexec_add_buffer(&kbuf);
+ 		if (ret)
+ 			goto out;
+@@ -581,7 +582,8 @@ static void *elf64_load(struct kimage *image, char *kernel_buf,
+ 	struct kexec_buf kbuf = { .image = image, .buf_min = 0,
+ 				  .buf_max = ppc64_rma_size };
+ 	struct kexec_buf pbuf = { .image = image, .buf_min = 0,
+-				  .buf_max = ppc64_rma_size, .top_down = true };
++				  .buf_max = ppc64_rma_size, .top_down = true,
++				  .mem = KEXEC_BUF_MEM_UNKNOWN };
+ 
+ 	ret = build_elf_exec_info(kernel_buf, kernel_len, &ehdr, &elf_info);
+ 	if (ret)
+@@ -606,6 +608,7 @@ static void *elf64_load(struct kimage *image, char *kernel_buf,
+ 		kbuf.bufsz = kbuf.memsz = initrd_len;
+ 		kbuf.buf_align = PAGE_SIZE;
+ 		kbuf.top_down = false;
++		kbuf.mem = KEXEC_BUF_MEM_UNKNOWN;
+ 		ret = kexec_add_buffer(&kbuf);
+ 		if (ret)
+ 			goto out;
+@@ -638,6 +641,7 @@ static void *elf64_load(struct kimage *image, char *kernel_buf,
+ 	kbuf.bufsz = kbuf.memsz = fdt_size;
+ 	kbuf.buf_align = PAGE_SIZE;
+ 	kbuf.top_down = true;
++	kbuf.mem = KEXEC_BUF_MEM_UNKNOWN;
+ 	ret = kexec_add_buffer(&kbuf);
+ 	if (ret)
+ 		goto out;
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index b2b29d4f9842..bd68b3e59de5 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -3624,6 +3624,7 @@ int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
+ 	vc->in_guest = 0;
+ 
+ 	mtspr(SPRN_DEC, local_paca->kvm_hstate.dec_expires - mftb());
++	mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso);
+ 
+ 	kvmhv_load_host_pmu();
+ 
+@@ -4048,16 +4049,20 @@ int kvmhv_run_single_vcpu(struct kvm_run *kvm_run,
+ 	if (cpu_has_feature(CPU_FTR_HVMODE))
+ 		kvmppc_radix_check_need_tlb_flush(kvm, pcpu, nested);
+ 
+-	trace_hardirqs_on();
+ 	guest_enter_irqoff();
+ 
+ 	srcu_idx = srcu_read_lock(&kvm->srcu);
+ 
+ 	this_cpu_disable_ftrace();
+ 
++	/* Tell lockdep that we're about to enable interrupts */
++	trace_hardirqs_on();
++
+ 	trap = kvmhv_p9_guest_entry(vcpu, time_limit, lpcr);
+ 	vcpu->arch.trap = trap;
+ 
++	trace_hardirqs_off();
++
+ 	this_cpu_enable_ftrace();
+ 
+ 	srcu_read_unlock(&kvm->srcu, srcu_idx);
+@@ -4067,7 +4072,6 @@ int kvmhv_run_single_vcpu(struct kvm_run *kvm_run,
+ 		isync();
+ 	}
+ 
+-	trace_hardirqs_off();
+ 	set_irq_happened(trap);
+ 
+ 	kvmppc_set_host_core(pcpu);
+diff --git a/arch/powerpc/kvm/book3s_xive.c b/arch/powerpc/kvm/book3s_xive.c
+index f78d002f0fe0..1ad950af7b7b 100644
+--- a/arch/powerpc/kvm/book3s_xive.c
++++ b/arch/powerpc/kvm/book3s_xive.c
+@@ -1786,7 +1786,6 @@ static void kvmppc_xive_cleanup_irq(u32 hw_num, struct xive_irq_data *xd)
+ {
+ 	xive_vm_esb_load(xd, XIVE_ESB_SET_PQ_01);
+ 	xive_native_configure_irq(hw_num, 0, MASKED, 0);
+-	xive_cleanup_irq_data(xd);
+ }
+ 
+ static void kvmppc_xive_free_sources(struct kvmppc_xive_src_block *sb)
+@@ -1800,9 +1799,10 @@ static void kvmppc_xive_free_sources(struct kvmppc_xive_src_block *sb)
+ 			continue;
+ 
+ 		kvmppc_xive_cleanup_irq(state->ipi_number, &state->ipi_data);
++		xive_cleanup_irq_data(&state->ipi_data);
+ 		xive_native_free_irq(state->ipi_number);
+ 
+-		/* Pass-through, cleanup too */
++		/* Pass-through, cleanup too but keep IRQ hw data */
+ 		if (state->pt_number)
+ 			kvmppc_xive_cleanup_irq(state->pt_number, state->pt_data);
+ 
+diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
+index 8885377ec3e0..d2e800e27f03 100644
+--- a/arch/powerpc/kvm/powerpc.c
++++ b/arch/powerpc/kvm/powerpc.c
+@@ -650,6 +650,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+ 	case KVM_CAP_MAX_VCPUS:
+ 		r = KVM_MAX_VCPUS;
+ 		break;
++	case KVM_CAP_MAX_VCPU_ID:
++		r = KVM_MAX_VCPU_ID;
++		break;
+ #ifdef CONFIG_PPC_BOOK3S_64
+ 	case KVM_CAP_PPC_GET_SMMU_INFO:
+ 		r = 1;
+diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
+index b0723002a396..8eb5dc5df62b 100644
+--- a/arch/powerpc/perf/core-book3s.c
++++ b/arch/powerpc/perf/core-book3s.c
+@@ -1846,6 +1846,7 @@ static int power_pmu_event_init(struct perf_event *event)
+ 	int n;
+ 	int err;
+ 	struct cpu_hw_events *cpuhw;
++	u64 bhrb_filter;
+ 
+ 	if (!ppmu)
+ 		return -ENOENT;
+@@ -1951,13 +1952,14 @@ static int power_pmu_event_init(struct perf_event *event)
+ 	err = power_check_constraints(cpuhw, events, cflags, n + 1);
+ 
+ 	if (has_branch_stack(event)) {
+-		cpuhw->bhrb_filter = ppmu->bhrb_filter_map(
++		bhrb_filter = ppmu->bhrb_filter_map(
+ 					event->attr.branch_sample_type);
+ 
+-		if (cpuhw->bhrb_filter == -1) {
++		if (bhrb_filter == -1) {
+ 			put_cpu_var(cpu_hw_events);
+ 			return -EOPNOTSUPP;
+ 		}
++		cpuhw->bhrb_filter = bhrb_filter;
+ 	}
+ 
+ 	put_cpu_var(cpu_hw_events);
+diff --git a/arch/powerpc/perf/power8-pmu.c b/arch/powerpc/perf/power8-pmu.c
+index d12a2db26353..d10feef93b6b 100644
+--- a/arch/powerpc/perf/power8-pmu.c
++++ b/arch/powerpc/perf/power8-pmu.c
+@@ -29,6 +29,7 @@ enum {
+ #define	POWER8_MMCRA_IFM1		0x0000000040000000UL
+ #define	POWER8_MMCRA_IFM2		0x0000000080000000UL
+ #define	POWER8_MMCRA_IFM3		0x00000000C0000000UL
++#define	POWER8_MMCRA_BHRB_MASK		0x00000000C0000000UL
+ 
+ /*
+  * Raw event encoding for PowerISA v2.07 (Power8):
+@@ -243,6 +244,8 @@ static u64 power8_bhrb_filter_map(u64 branch_sample_type)
+ 
+ static void power8_config_bhrb(u64 pmu_bhrb_filter)
+ {
++	pmu_bhrb_filter &= POWER8_MMCRA_BHRB_MASK;
++
+ 	/* Enable BHRB filter in PMU */
+ 	mtspr(SPRN_MMCRA, (mfspr(SPRN_MMCRA) | pmu_bhrb_filter));
+ }
+diff --git a/arch/powerpc/perf/power9-pmu.c b/arch/powerpc/perf/power9-pmu.c
+index 030544e35959..f3987915cadc 100644
+--- a/arch/powerpc/perf/power9-pmu.c
++++ b/arch/powerpc/perf/power9-pmu.c
+@@ -92,6 +92,7 @@ enum {
+ #define POWER9_MMCRA_IFM1		0x0000000040000000UL
+ #define POWER9_MMCRA_IFM2		0x0000000080000000UL
+ #define POWER9_MMCRA_IFM3		0x00000000C0000000UL
++#define POWER9_MMCRA_BHRB_MASK		0x00000000C0000000UL
+ 
+ /* Nasty Power9 specific hack */
+ #define PVR_POWER9_CUMULUS		0x00002000
+@@ -300,6 +301,8 @@ static u64 power9_bhrb_filter_map(u64 branch_sample_type)
+ 
+ static void power9_config_bhrb(u64 pmu_bhrb_filter)
+ {
++	pmu_bhrb_filter &= POWER9_MMCRA_BHRB_MASK;
++
+ 	/* Enable BHRB filter in PMU */
+ 	mtspr(SPRN_MMCRA, (mfspr(SPRN_MMCRA) | pmu_bhrb_filter));
+ }
+diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c
+index dd456725189f..d00f84add5f4 100644
+--- a/arch/s390/crypto/aes_s390.c
++++ b/arch/s390/crypto/aes_s390.c
+@@ -27,14 +27,14 @@
+ #include <linux/module.h>
+ #include <linux/cpufeature.h>
+ #include <linux/init.h>
+-#include <linux/spinlock.h>
++#include <linux/mutex.h>
+ #include <linux/fips.h>
+ #include <linux/string.h>
+ #include <crypto/xts.h>
+ #include <asm/cpacf.h>
+ 
+ static u8 *ctrblk;
+-static DEFINE_SPINLOCK(ctrblk_lock);
++static DEFINE_MUTEX(ctrblk_lock);
+ 
+ static cpacf_mask_t km_functions, kmc_functions, kmctr_functions,
+ 		    kma_functions;
+@@ -698,7 +698,7 @@ static int ctr_aes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
+ 	unsigned int n, nbytes;
+ 	int ret, locked;
+ 
+-	locked = spin_trylock(&ctrblk_lock);
++	locked = mutex_trylock(&ctrblk_lock);
+ 
+ 	ret = blkcipher_walk_virt_block(desc, walk, AES_BLOCK_SIZE);
+ 	while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
+@@ -716,7 +716,7 @@ static int ctr_aes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
+ 		ret = blkcipher_walk_done(desc, walk, nbytes - n);
+ 	}
+ 	if (locked)
+-		spin_unlock(&ctrblk_lock);
++		mutex_unlock(&ctrblk_lock);
+ 	/*
+ 	 * final block may be < AES_BLOCK_SIZE, copy only nbytes
+ 	 */
+@@ -826,19 +826,45 @@ static int gcm_aes_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
+ 	return 0;
+ }
+ 
+-static void gcm_sg_walk_start(struct gcm_sg_walk *gw, struct scatterlist *sg,
+-			      unsigned int len)
++static void gcm_walk_start(struct gcm_sg_walk *gw, struct scatterlist *sg,
++			   unsigned int len)
+ {
+ 	memset(gw, 0, sizeof(*gw));
+ 	gw->walk_bytes_remain = len;
+ 	scatterwalk_start(&gw->walk, sg);
+ }
+ 
+-static int gcm_sg_walk_go(struct gcm_sg_walk *gw, unsigned int minbytesneeded)
++static inline unsigned int _gcm_sg_clamp_and_map(struct gcm_sg_walk *gw)
++{
++	struct scatterlist *nextsg;
++
++	gw->walk_bytes = scatterwalk_clamp(&gw->walk, gw->walk_bytes_remain);
++	while (!gw->walk_bytes) {
++		nextsg = sg_next(gw->walk.sg);
++		if (!nextsg)
++			return 0;
++		scatterwalk_start(&gw->walk, nextsg);
++		gw->walk_bytes = scatterwalk_clamp(&gw->walk,
++						   gw->walk_bytes_remain);
++	}
++	gw->walk_ptr = scatterwalk_map(&gw->walk);
++	return gw->walk_bytes;
++}
++
++static inline void _gcm_sg_unmap_and_advance(struct gcm_sg_walk *gw,
++					     unsigned int nbytes)
++{
++	gw->walk_bytes_remain -= nbytes;
++	scatterwalk_unmap(&gw->walk);
++	scatterwalk_advance(&gw->walk, nbytes);
++	scatterwalk_done(&gw->walk, 0, gw->walk_bytes_remain);
++	gw->walk_ptr = NULL;
++}
++
++static int gcm_in_walk_go(struct gcm_sg_walk *gw, unsigned int minbytesneeded)
+ {
+ 	int n;
+ 
+-	/* minbytesneeded <= AES_BLOCK_SIZE */
+ 	if (gw->buf_bytes && gw->buf_bytes >= minbytesneeded) {
+ 		gw->ptr = gw->buf;
+ 		gw->nbytes = gw->buf_bytes;
+@@ -851,13 +877,11 @@ static int gcm_sg_walk_go(struct gcm_sg_walk *gw, unsigned int minbytesneeded)
+ 		goto out;
+ 	}
+ 
+-	gw->walk_bytes = scatterwalk_clamp(&gw->walk, gw->walk_bytes_remain);
+-	if (!gw->walk_bytes) {
+-		scatterwalk_start(&gw->walk, sg_next(gw->walk.sg));
+-		gw->walk_bytes = scatterwalk_clamp(&gw->walk,
+-						   gw->walk_bytes_remain);
++	if (!_gcm_sg_clamp_and_map(gw)) {
++		gw->ptr = NULL;
++		gw->nbytes = 0;
++		goto out;
+ 	}
+-	gw->walk_ptr = scatterwalk_map(&gw->walk);
+ 
+ 	if (!gw->buf_bytes && gw->walk_bytes >= minbytesneeded) {
+ 		gw->ptr = gw->walk_ptr;
+@@ -869,51 +893,90 @@ static int gcm_sg_walk_go(struct gcm_sg_walk *gw, unsigned int minbytesneeded)
+ 		n = min(gw->walk_bytes, AES_BLOCK_SIZE - gw->buf_bytes);
+ 		memcpy(gw->buf + gw->buf_bytes, gw->walk_ptr, n);
+ 		gw->buf_bytes += n;
+-		gw->walk_bytes_remain -= n;
+-		scatterwalk_unmap(&gw->walk);
+-		scatterwalk_advance(&gw->walk, n);
+-		scatterwalk_done(&gw->walk, 0, gw->walk_bytes_remain);
+-
++		_gcm_sg_unmap_and_advance(gw, n);
+ 		if (gw->buf_bytes >= minbytesneeded) {
+ 			gw->ptr = gw->buf;
+ 			gw->nbytes = gw->buf_bytes;
+ 			goto out;
+ 		}
+-
+-		gw->walk_bytes = scatterwalk_clamp(&gw->walk,
+-						   gw->walk_bytes_remain);
+-		if (!gw->walk_bytes) {
+-			scatterwalk_start(&gw->walk, sg_next(gw->walk.sg));
+-			gw->walk_bytes = scatterwalk_clamp(&gw->walk,
+-							gw->walk_bytes_remain);
++		if (!_gcm_sg_clamp_and_map(gw)) {
++			gw->ptr = NULL;
++			gw->nbytes = 0;
++			goto out;
+ 		}
+-		gw->walk_ptr = scatterwalk_map(&gw->walk);
+ 	}
+ 
+ out:
+ 	return gw->nbytes;
+ }
+ 
+-static void gcm_sg_walk_done(struct gcm_sg_walk *gw, unsigned int bytesdone)
++static int gcm_out_walk_go(struct gcm_sg_walk *gw, unsigned int minbytesneeded)
+ {
+-	int n;
++	if (gw->walk_bytes_remain == 0) {
++		gw->ptr = NULL;
++		gw->nbytes = 0;
++		goto out;
++	}
+ 
++	if (!_gcm_sg_clamp_and_map(gw)) {
++		gw->ptr = NULL;
++		gw->nbytes = 0;
++		goto out;
++	}
++
++	if (gw->walk_bytes >= minbytesneeded) {
++		gw->ptr = gw->walk_ptr;
++		gw->nbytes = gw->walk_bytes;
++		goto out;
++	}
++
++	scatterwalk_unmap(&gw->walk);
++	gw->walk_ptr = NULL;
++
++	gw->ptr = gw->buf;
++	gw->nbytes = sizeof(gw->buf);
++
++out:
++	return gw->nbytes;
++}
++
++static int gcm_in_walk_done(struct gcm_sg_walk *gw, unsigned int bytesdone)
++{
+ 	if (gw->ptr == NULL)
+-		return;
++		return 0;
+ 
+ 	if (gw->ptr == gw->buf) {
+-		n = gw->buf_bytes - bytesdone;
++		int n = gw->buf_bytes - bytesdone;
+ 		if (n > 0) {
+ 			memmove(gw->buf, gw->buf + bytesdone, n);
+-			gw->buf_bytes -= n;
++			gw->buf_bytes = n;
+ 		} else
+ 			gw->buf_bytes = 0;
+-	} else {
+-		gw->walk_bytes_remain -= bytesdone;
+-		scatterwalk_unmap(&gw->walk);
+-		scatterwalk_advance(&gw->walk, bytesdone);
+-		scatterwalk_done(&gw->walk, 0, gw->walk_bytes_remain);
+-	}
++	} else
++		_gcm_sg_unmap_and_advance(gw, bytesdone);
++
++	return bytesdone;
++}
++
++static int gcm_out_walk_done(struct gcm_sg_walk *gw, unsigned int bytesdone)
++{
++	int i, n;
++
++	if (gw->ptr == NULL)
++		return 0;
++
++	if (gw->ptr == gw->buf) {
++		for (i = 0; i < bytesdone; i += n) {
++			if (!_gcm_sg_clamp_and_map(gw))
++				return i;
++			n = min(gw->walk_bytes, bytesdone - i);
++			memcpy(gw->walk_ptr, gw->buf + i, n);
++			_gcm_sg_unmap_and_advance(gw, n);
++		}
++	} else
++		_gcm_sg_unmap_and_advance(gw, bytesdone);
++
++	return bytesdone;
+ }
+ 
+ static int gcm_aes_crypt(struct aead_request *req, unsigned int flags)
+@@ -926,7 +989,7 @@ static int gcm_aes_crypt(struct aead_request *req, unsigned int flags)
+ 	unsigned int pclen = req->cryptlen;
+ 	int ret = 0;
+ 
+-	unsigned int len, in_bytes, out_bytes,
++	unsigned int n, len, in_bytes, out_bytes,
+ 		     min_bytes, bytes, aad_bytes, pc_bytes;
+ 	struct gcm_sg_walk gw_in, gw_out;
+ 	u8 tag[GHASH_DIGEST_SIZE];
+@@ -963,14 +1026,14 @@ static int gcm_aes_crypt(struct aead_request *req, unsigned int flags)
+ 	*(u32 *)(param.j0 + ivsize) = 1;
+ 	memcpy(param.k, ctx->key, ctx->key_len);
+ 
+-	gcm_sg_walk_start(&gw_in, req->src, len);
+-	gcm_sg_walk_start(&gw_out, req->dst, len);
++	gcm_walk_start(&gw_in, req->src, len);
++	gcm_walk_start(&gw_out, req->dst, len);
+ 
+ 	do {
+ 		min_bytes = min_t(unsigned int,
+ 				  aadlen > 0 ? aadlen : pclen, AES_BLOCK_SIZE);
+-		in_bytes = gcm_sg_walk_go(&gw_in, min_bytes);
+-		out_bytes = gcm_sg_walk_go(&gw_out, min_bytes);
++		in_bytes = gcm_in_walk_go(&gw_in, min_bytes);
++		out_bytes = gcm_out_walk_go(&gw_out, min_bytes);
+ 		bytes = min(in_bytes, out_bytes);
+ 
+ 		if (aadlen + pclen <= bytes) {
+@@ -997,8 +1060,11 @@ static int gcm_aes_crypt(struct aead_request *req, unsigned int flags)
+ 			  gw_in.ptr + aad_bytes, pc_bytes,
+ 			  gw_in.ptr, aad_bytes);
+ 
+-		gcm_sg_walk_done(&gw_in, aad_bytes + pc_bytes);
+-		gcm_sg_walk_done(&gw_out, aad_bytes + pc_bytes);
++		n = aad_bytes + pc_bytes;
++		if (gcm_in_walk_done(&gw_in, n) != n)
++			return -ENOMEM;
++		if (gcm_out_walk_done(&gw_out, n) != n)
++			return -ENOMEM;
+ 		aadlen -= aad_bytes;
+ 		pclen -= pc_bytes;
+ 	} while (aadlen + pclen > 0);
+diff --git a/arch/s390/crypto/des_s390.c b/arch/s390/crypto/des_s390.c
+index 0d15383d0ff1..d64531cc9623 100644
+--- a/arch/s390/crypto/des_s390.c
++++ b/arch/s390/crypto/des_s390.c
+@@ -14,6 +14,7 @@
+ #include <linux/cpufeature.h>
+ #include <linux/crypto.h>
+ #include <linux/fips.h>
++#include <linux/mutex.h>
+ #include <crypto/algapi.h>
+ #include <crypto/des.h>
+ #include <asm/cpacf.h>
+@@ -21,7 +22,7 @@
+ #define DES3_KEY_SIZE	(3 * DES_KEY_SIZE)
+ 
+ static u8 *ctrblk;
+-static DEFINE_SPINLOCK(ctrblk_lock);
++static DEFINE_MUTEX(ctrblk_lock);
+ 
+ static cpacf_mask_t km_functions, kmc_functions, kmctr_functions;
+ 
+@@ -387,7 +388,7 @@ static int ctr_desall_crypt(struct blkcipher_desc *desc, unsigned long fc,
+ 	unsigned int n, nbytes;
+ 	int ret, locked;
+ 
+-	locked = spin_trylock(&ctrblk_lock);
++	locked = mutex_trylock(&ctrblk_lock);
+ 
+ 	ret = blkcipher_walk_virt_block(desc, walk, DES_BLOCK_SIZE);
+ 	while ((nbytes = walk->nbytes) >= DES_BLOCK_SIZE) {
+@@ -404,7 +405,7 @@ static int ctr_desall_crypt(struct blkcipher_desc *desc, unsigned long fc,
+ 		ret = blkcipher_walk_done(desc, walk, nbytes - n);
+ 	}
+ 	if (locked)
+-		spin_unlock(&ctrblk_lock);
++		mutex_unlock(&ctrblk_lock);
+ 	/* final block may be < DES_BLOCK_SIZE, copy only nbytes */
+ 	if (nbytes) {
+ 		cpacf_kmctr(fc, ctx->key, buf, walk->src.virt.addr,
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 4638303ba6a8..c4180ecfbb2a 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -507,6 +507,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+ 		break;
+ 	case KVM_CAP_NR_VCPUS:
+ 	case KVM_CAP_MAX_VCPUS:
++	case KVM_CAP_MAX_VCPU_ID:
+ 		r = KVM_S390_BSCA_CPU_SLOTS;
+ 		if (!kvm_s390_use_sca_entries())
+ 			r = KVM_MAX_VCPUS;
+diff --git a/arch/sparc/mm/ultra.S b/arch/sparc/mm/ultra.S
+index d245f89d1395..d220b6848746 100644
+--- a/arch/sparc/mm/ultra.S
++++ b/arch/sparc/mm/ultra.S
+@@ -587,7 +587,7 @@ xcall_flush_tlb_kernel_range:	/* 44 insns */
+ 	sub		%g7, %g1, %g3
+ 	srlx		%g3, 18, %g2
+ 	brnz,pn		%g2, 2f
+-	 add		%g2, 1, %g2
++	 sethi		%hi(PAGE_SIZE), %g2
+ 	sub		%g3, %g2, %g3
+ 	or		%g1, 0x20, %g1		! Nucleus
+ 1:	stxa		%g0, [%g1 + %g3] ASI_DMMU_DEMAP
+@@ -751,7 +751,7 @@ __cheetah_xcall_flush_tlb_kernel_range:	/* 44 insns */
+ 	sub		%g7, %g1, %g3
+ 	srlx		%g3, 18, %g2
+ 	brnz,pn		%g2, 2f
+-	 add		%g2, 1, %g2
++	 sethi		%hi(PAGE_SIZE), %g2
+ 	sub		%g3, %g2, %g3
+ 	or		%g1, 0x20, %g1		! Nucleus
+ 1:	stxa		%g0, [%g1 + %g3] ASI_DMMU_DEMAP
+diff --git a/arch/x86/kernel/ima_arch.c b/arch/x86/kernel/ima_arch.c
+index e47cd9390ab4..2a2e87717bad 100644
+--- a/arch/x86/kernel/ima_arch.c
++++ b/arch/x86/kernel/ima_arch.c
+@@ -17,6 +17,11 @@ static enum efi_secureboot_mode get_sb_mode(void)
+ 
+ 	size = sizeof(secboot);
+ 
++	if (!efi_enabled(EFI_RUNTIME_SERVICES)) {
++		pr_info("ima: secureboot mode unknown, no efi\n");
++		return efi_secureboot_mode_unknown;
++	}
++
+ 	/* Get variable contents into buffer */
+ 	status = efi.get_variable(efi_SecureBoot_name, &efi_variable_guid,
+ 				  NULL, &size, &secboot);
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index fed46ddb1eef..06058c44ab57 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -431,8 +431,20 @@ void *alloc_insn_page(void)
+ 	void *page;
+ 
+ 	page = module_alloc(PAGE_SIZE);
+-	if (page)
+-		set_memory_ro((unsigned long)page & PAGE_MASK, 1);
++	if (!page)
++		return NULL;
++
++	/*
++	 * First make the page read-only, and only then make it executable to
++	 * prevent it from being W+X in between.
++	 */
++	set_memory_ro((unsigned long)page, 1);
++
++	/*
++	 * TODO: Once additional kernel code protection mechanisms are set, ensure
++	 * that the page was not maliciously altered and it is still zeroed.
++	 */
++	set_memory_x((unsigned long)page, 1);
+ 
+ 	return page;
+ }
+@@ -440,8 +452,12 @@ void *alloc_insn_page(void)
+ /* Recover page to RW mode before releasing it */
+ void free_insn_page(void *page)
+ {
+-	set_memory_nx((unsigned long)page & PAGE_MASK, 1);
+-	set_memory_rw((unsigned long)page & PAGE_MASK, 1);
++	/*
++	 * First make the page non-executable, and only then make it writable to
++	 * prevent it from being W+X in between.
++	 */
++	set_memory_nx((unsigned long)page, 1);
++	set_memory_rw((unsigned long)page, 1);
+ 	module_memfree(page);
+ }
+ 
+diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
+index 834659288ba9..a5127b2c195f 100644
+--- a/arch/x86/kernel/vmlinux.lds.S
++++ b/arch/x86/kernel/vmlinux.lds.S
+@@ -141,10 +141,10 @@ SECTIONS
+ 		*(.text.__x86.indirect_thunk)
+ 		__indirect_thunk_end = .;
+ #endif
+-	} :text = 0x9090
+ 
+-	/* End of text section */
+-	_etext = .;
++		/* End of text section */
++		_etext = .;
++	} :text = 0x9090
+ 
+ 	NOTES :text :note
+ 
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 6b8575c547ee..efc8adf7ca0e 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -3090,6 +3090,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+ 	case KVM_CAP_MAX_VCPUS:
+ 		r = KVM_MAX_VCPUS;
+ 		break;
++	case KVM_CAP_MAX_VCPU_ID:
++		r = KVM_MAX_VCPU_ID;
++		break;
+ 	case KVM_CAP_NR_MEMSLOTS:
+ 		r = KVM_USER_MEM_SLOTS;
+ 		break;
+diff --git a/drivers/clk/imx/clk-imx8mm.c b/drivers/clk/imx/clk-imx8mm.c
+index 1ef8438e3d6d..122a81ab8e48 100644
+--- a/drivers/clk/imx/clk-imx8mm.c
++++ b/drivers/clk/imx/clk-imx8mm.c
+@@ -449,12 +449,12 @@ static int __init imx8mm_clocks_init(struct device_node *ccm_node)
+ 	clks[IMX8MM_AUDIO_PLL2_OUT] = imx_clk_gate("audio_pll2_out", "audio_pll2_bypass", base + 0x14, 13);
+ 	clks[IMX8MM_VIDEO_PLL1_OUT] = imx_clk_gate("video_pll1_out", "video_pll1_bypass", base + 0x28, 13);
+ 	clks[IMX8MM_DRAM_PLL_OUT] = imx_clk_gate("dram_pll_out", "dram_pll_bypass", base + 0x50, 13);
+-	clks[IMX8MM_GPU_PLL_OUT] = imx_clk_gate("gpu_pll_out", "gpu_pll_bypass", base + 0x64, 13);
+-	clks[IMX8MM_VPU_PLL_OUT] = imx_clk_gate("vpu_pll_out", "vpu_pll_bypass", base + 0x74, 13);
+-	clks[IMX8MM_ARM_PLL_OUT] = imx_clk_gate("arm_pll_out", "arm_pll_bypass", base + 0x84, 13);
+-	clks[IMX8MM_SYS_PLL1_OUT] = imx_clk_gate("sys_pll1_out", "sys_pll1_bypass", base + 0x94, 13);
+-	clks[IMX8MM_SYS_PLL2_OUT] = imx_clk_gate("sys_pll2_out", "sys_pll2_bypass", base + 0x104, 13);
+-	clks[IMX8MM_SYS_PLL3_OUT] = imx_clk_gate("sys_pll3_out", "sys_pll3_bypass", base + 0x114, 13);
++	clks[IMX8MM_GPU_PLL_OUT] = imx_clk_gate("gpu_pll_out", "gpu_pll_bypass", base + 0x64, 11);
++	clks[IMX8MM_VPU_PLL_OUT] = imx_clk_gate("vpu_pll_out", "vpu_pll_bypass", base + 0x74, 11);
++	clks[IMX8MM_ARM_PLL_OUT] = imx_clk_gate("arm_pll_out", "arm_pll_bypass", base + 0x84, 11);
++	clks[IMX8MM_SYS_PLL1_OUT] = imx_clk_gate("sys_pll1_out", "sys_pll1_bypass", base + 0x94, 11);
++	clks[IMX8MM_SYS_PLL2_OUT] = imx_clk_gate("sys_pll2_out", "sys_pll2_bypass", base + 0x104, 11);
++	clks[IMX8MM_SYS_PLL3_OUT] = imx_clk_gate("sys_pll3_out", "sys_pll3_bypass", base + 0x114, 11);
+ 
+ 	/* SYS PLL fixed output */
+ 	clks[IMX8MM_SYS_PLL1_40M] = imx_clk_fixed_factor("sys_pll1_40m", "sys_pll1_out", 1, 20);
+diff --git a/drivers/gpu/drm/drm_atomic_uapi.c b/drivers/gpu/drm/drm_atomic_uapi.c
+index 0aabd401d3ca..0ceda7c14915 100644
+--- a/drivers/gpu/drm/drm_atomic_uapi.c
++++ b/drivers/gpu/drm/drm_atomic_uapi.c
+@@ -512,8 +512,8 @@ drm_atomic_crtc_get_property(struct drm_crtc *crtc,
+ }
+ 
+ static int drm_atomic_plane_set_property(struct drm_plane *plane,
+-		struct drm_plane_state *state, struct drm_property *property,
+-		uint64_t val)
++		struct drm_plane_state *state, struct drm_file *file_priv,
++		struct drm_property *property, uint64_t val)
+ {
+ 	struct drm_device *dev = plane->dev;
+ 	struct drm_mode_config *config = &dev->mode_config;
+@@ -521,7 +521,8 @@ static int drm_atomic_plane_set_property(struct drm_plane *plane,
+ 	int ret;
+ 
+ 	if (property == config->prop_fb_id) {
+-		struct drm_framebuffer *fb = drm_framebuffer_lookup(dev, NULL, val);
++		struct drm_framebuffer *fb;
++		fb = drm_framebuffer_lookup(dev, file_priv, val);
+ 		drm_atomic_set_fb_for_plane(state, fb);
+ 		if (fb)
+ 			drm_framebuffer_put(fb);
+@@ -537,7 +538,7 @@ static int drm_atomic_plane_set_property(struct drm_plane *plane,
+ 			return -EINVAL;
+ 
+ 	} else if (property == config->prop_crtc_id) {
+-		struct drm_crtc *crtc = drm_crtc_find(dev, NULL, val);
++		struct drm_crtc *crtc = drm_crtc_find(dev, file_priv, val);
+ 		return drm_atomic_set_crtc_for_plane(state, crtc);
+ 	} else if (property == config->prop_crtc_x) {
+ 		state->crtc_x = U642I64(val);
+@@ -681,14 +682,14 @@ static int drm_atomic_set_writeback_fb_for_connector(
+ }
+ 
+ static int drm_atomic_connector_set_property(struct drm_connector *connector,
+-		struct drm_connector_state *state, struct drm_property *property,
+-		uint64_t val)
++		struct drm_connector_state *state, struct drm_file *file_priv,
++		struct drm_property *property, uint64_t val)
+ {
+ 	struct drm_device *dev = connector->dev;
+ 	struct drm_mode_config *config = &dev->mode_config;
+ 
+ 	if (property == config->prop_crtc_id) {
+-		struct drm_crtc *crtc = drm_crtc_find(dev, NULL, val);
++		struct drm_crtc *crtc = drm_crtc_find(dev, file_priv, val);
+ 		return drm_atomic_set_crtc_for_connector(state, crtc);
+ 	} else if (property == config->dpms_property) {
+ 		/* setting DPMS property requires special handling, which
+@@ -747,8 +748,10 @@ static int drm_atomic_connector_set_property(struct drm_connector *connector,
+ 		}
+ 		state->content_protection = val;
+ 	} else if (property == config->writeback_fb_id_property) {
+-		struct drm_framebuffer *fb = drm_framebuffer_lookup(dev, NULL, val);
+-		int ret = drm_atomic_set_writeback_fb_for_connector(state, fb);
++		struct drm_framebuffer *fb;
++		int ret;
++		fb = drm_framebuffer_lookup(dev, file_priv, val);
++		ret = drm_atomic_set_writeback_fb_for_connector(state, fb);
+ 		if (fb)
+ 			drm_framebuffer_put(fb);
+ 		return ret;
+@@ -943,6 +946,7 @@ out:
+ }
+ 
+ int drm_atomic_set_property(struct drm_atomic_state *state,
++			    struct drm_file *file_priv,
+ 			    struct drm_mode_object *obj,
+ 			    struct drm_property *prop,
+ 			    uint64_t prop_value)
+@@ -965,7 +969,8 @@ int drm_atomic_set_property(struct drm_atomic_state *state,
+ 		}
+ 
+ 		ret = drm_atomic_connector_set_property(connector,
+-				connector_state, prop, prop_value);
++				connector_state, file_priv,
++				prop, prop_value);
+ 		break;
+ 	}
+ 	case DRM_MODE_OBJECT_CRTC: {
+@@ -993,7 +998,8 @@ int drm_atomic_set_property(struct drm_atomic_state *state,
+ 		}
+ 
+ 		ret = drm_atomic_plane_set_property(plane,
+-				plane_state, prop, prop_value);
++				plane_state, file_priv,
++				prop, prop_value);
+ 		break;
+ 	}
+ 	default:
+@@ -1365,8 +1371,8 @@ retry:
+ 				goto out;
+ 			}
+ 
+-			ret = drm_atomic_set_property(state, obj, prop,
+-						      prop_value);
++			ret = drm_atomic_set_property(state, file_priv,
++						      obj, prop, prop_value);
+ 			if (ret) {
+ 				drm_mode_object_put(obj);
+ 				goto out;
+diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
+index 7dabbaf033a1..790ba5941954 100644
+--- a/drivers/gpu/drm/drm_crtc.c
++++ b/drivers/gpu/drm/drm_crtc.c
+@@ -559,6 +559,10 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
+ 
+ 	plane = crtc->primary;
+ 
++	/* allow disabling with the primary plane leased */
++	if (crtc_req->mode_valid && !drm_lease_held(file_priv, plane->base.id))
++		return -EACCES;
++
+ 	mutex_lock(&crtc->dev->mode_config.mutex);
+ 	DRM_MODESET_LOCK_ALL_BEGIN(dev, ctx,
+ 				   DRM_MODESET_ACQUIRE_INTERRUPTIBLE, ret);
+diff --git a/drivers/gpu/drm/drm_crtc_internal.h b/drivers/gpu/drm/drm_crtc_internal.h
+index 216f2a9ee3d4..0719a235d6cc 100644
+--- a/drivers/gpu/drm/drm_crtc_internal.h
++++ b/drivers/gpu/drm/drm_crtc_internal.h
+@@ -214,6 +214,7 @@ int drm_atomic_connector_commit_dpms(struct drm_atomic_state *state,
+ 				     struct drm_connector *connector,
+ 				     int mode);
+ int drm_atomic_set_property(struct drm_atomic_state *state,
++			    struct drm_file *file_priv,
+ 			    struct drm_mode_object *obj,
+ 			    struct drm_property *prop,
+ 			    uint64_t prop_value);
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index af2ab640cadb..4a5b2d2bc508 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -3317,8 +3317,6 @@ int drm_fbdev_generic_setup(struct drm_device *dev, unsigned int preferred_bpp)
+ 		return ret;
+ 	}
+ 
+-	drm_client_add(&fb_helper->client);
+-
+ 	if (!preferred_bpp)
+ 		preferred_bpp = dev->mode_config.preferred_depth;
+ 	if (!preferred_bpp)
+@@ -3329,6 +3327,8 @@ int drm_fbdev_generic_setup(struct drm_device *dev, unsigned int preferred_bpp)
+ 	if (ret)
+ 		DRM_DEV_DEBUG(dev->dev, "client hotplug ret=%d\n", ret);
+ 
++	drm_client_add(&fb_helper->client);
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL(drm_fbdev_generic_setup);
+diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
+index cc26625b4b33..e01ceed09e67 100644
+--- a/drivers/gpu/drm/drm_gem_cma_helper.c
++++ b/drivers/gpu/drm/drm_gem_cma_helper.c
+@@ -186,13 +186,13 @@ void drm_gem_cma_free_object(struct drm_gem_object *gem_obj)
+ 
+ 	cma_obj = to_drm_gem_cma_obj(gem_obj);
+ 
+-	if (cma_obj->vaddr) {
+-		dma_free_wc(gem_obj->dev->dev, cma_obj->base.size,
+-			    cma_obj->vaddr, cma_obj->paddr);
+-	} else if (gem_obj->import_attach) {
++	if (gem_obj->import_attach) {
+ 		if (cma_obj->vaddr)
+ 			dma_buf_vunmap(gem_obj->import_attach->dmabuf, cma_obj->vaddr);
+ 		drm_prime_gem_destroy(gem_obj, cma_obj->sgt);
++	} else if (cma_obj->vaddr) {
++		dma_free_wc(gem_obj->dev->dev, cma_obj->base.size,
++			    cma_obj->vaddr, cma_obj->paddr);
+ 	}
+ 
+ 	drm_gem_object_release(gem_obj);
+diff --git a/drivers/gpu/drm/drm_mode_config.c b/drivers/gpu/drm/drm_mode_config.c
+index 4a1c2023ccf0..1a346ae1599d 100644
+--- a/drivers/gpu/drm/drm_mode_config.c
++++ b/drivers/gpu/drm/drm_mode_config.c
+@@ -297,8 +297,9 @@ static int drm_mode_create_standard_properties(struct drm_device *dev)
+ 		return -ENOMEM;
+ 	dev->mode_config.prop_crtc_id = prop;
+ 
+-	prop = drm_property_create(dev, DRM_MODE_PROP_BLOB, "FB_DAMAGE_CLIPS",
+-				   0);
++	prop = drm_property_create(dev,
++			DRM_MODE_PROP_ATOMIC | DRM_MODE_PROP_BLOB,
++			"FB_DAMAGE_CLIPS", 0);
+ 	if (!prop)
+ 		return -ENOMEM;
+ 	dev->mode_config.prop_fb_damage_clips = prop;
+diff --git a/drivers/gpu/drm/drm_mode_object.c b/drivers/gpu/drm/drm_mode_object.c
+index a9005c1c2384..f32507e65b79 100644
+--- a/drivers/gpu/drm/drm_mode_object.c
++++ b/drivers/gpu/drm/drm_mode_object.c
+@@ -451,6 +451,7 @@ static int set_property_legacy(struct drm_mode_object *obj,
+ }
+ 
+ static int set_property_atomic(struct drm_mode_object *obj,
++			       struct drm_file *file_priv,
+ 			       struct drm_property *prop,
+ 			       uint64_t prop_value)
+ {
+@@ -477,7 +478,7 @@ retry:
+ 						       obj_to_connector(obj),
+ 						       prop_value);
+ 	} else {
+-		ret = drm_atomic_set_property(state, obj, prop, prop_value);
++		ret = drm_atomic_set_property(state, file_priv, obj, prop, prop_value);
+ 		if (ret)
+ 			goto out;
+ 		ret = drm_atomic_commit(state);
+@@ -520,7 +521,7 @@ int drm_mode_obj_set_property_ioctl(struct drm_device *dev, void *data,
+ 		goto out_unref;
+ 
+ 	if (drm_drv_uses_atomic_modeset(property->dev))
+-		ret = set_property_atomic(arg_obj, property, arg->value);
++		ret = set_property_atomic(arg_obj, file_priv, property, arg->value);
+ 	else
+ 		ret = set_property_legacy(arg_obj, property, arg->value);
+ 
+diff --git a/drivers/gpu/drm/drm_plane.c b/drivers/gpu/drm/drm_plane.c
+index 4cfb56893b7f..d6ad60ab0d38 100644
+--- a/drivers/gpu/drm/drm_plane.c
++++ b/drivers/gpu/drm/drm_plane.c
+@@ -960,6 +960,11 @@ retry:
+ 		if (ret)
+ 			goto out;
+ 
++		if (!drm_lease_held(file_priv, crtc->cursor->base.id)) {
++			ret = -EACCES;
++			goto out;
++		}
++
+ 		ret = drm_mode_cursor_universal(crtc, req, file_priv, &ctx);
+ 		goto out;
+ 	}
+@@ -1062,6 +1067,9 @@ int drm_mode_page_flip_ioctl(struct drm_device *dev,
+ 
+ 	plane = crtc->primary;
+ 
++	if (!drm_lease_held(file_priv, plane->base.id))
++		return -EACCES;
++
+ 	if (crtc->funcs->page_flip_target) {
+ 		u32 current_vblank;
+ 		int r;
+diff --git a/drivers/gpu/drm/imx/ipuv3-plane.c b/drivers/gpu/drm/imx/ipuv3-plane.c
+index d7a727a6e3d7..91edfe2498a6 100644
+--- a/drivers/gpu/drm/imx/ipuv3-plane.c
++++ b/drivers/gpu/drm/imx/ipuv3-plane.c
+@@ -605,7 +605,6 @@ static void ipu_plane_atomic_update(struct drm_plane *plane,
+ 		active = ipu_idmac_get_current_buffer(ipu_plane->ipu_ch);
+ 		ipu_cpmem_set_buffer(ipu_plane->ipu_ch, !active, eba);
+ 		ipu_idmac_select_buffer(ipu_plane->ipu_ch, !active);
+-		ipu_plane->next_buf = !active;
+ 		if (ipu_plane_separate_alpha(ipu_plane)) {
+ 			active = ipu_idmac_get_current_buffer(ipu_plane->alpha_ch);
+ 			ipu_cpmem_set_buffer(ipu_plane->alpha_ch, !active,
+@@ -710,7 +709,6 @@ static void ipu_plane_atomic_update(struct drm_plane *plane,
+ 	ipu_cpmem_set_buffer(ipu_plane->ipu_ch, 1, eba);
+ 	ipu_idmac_lock_enable(ipu_plane->ipu_ch, num_bursts);
+ 	ipu_plane_enable(ipu_plane);
+-	ipu_plane->next_buf = -1;
+ }
+ 
+ static const struct drm_plane_helper_funcs ipu_plane_helper_funcs = {
+@@ -732,10 +730,15 @@ bool ipu_plane_atomic_update_pending(struct drm_plane *plane)
+ 
+ 	if (ipu_state->use_pre)
+ 		return ipu_prg_channel_configure_pending(ipu_plane->ipu_ch);
+-	else if (ipu_plane->next_buf >= 0)
+-		return ipu_idmac_get_current_buffer(ipu_plane->ipu_ch) !=
+-		       ipu_plane->next_buf;
+ 
++	/*
++	 * Pretend no update is pending in the non-PRE/PRG case. For this to
++	 * happen, an atomic update would have to be deferred until after the
++	 * start of the next frame and simultaneously interrupt latency would
++	 * have to be high enough to let the atomic update finish and issue an
++	 * event before the previous end of frame interrupt handler can be
++	 * executed.
++	 */
+ 	return false;
+ }
+ int ipu_planes_assign_pre(struct drm_device *dev,
+diff --git a/drivers/gpu/drm/imx/ipuv3-plane.h b/drivers/gpu/drm/imx/ipuv3-plane.h
+index 15e85e15d35c..ffacbcdd2f98 100644
+--- a/drivers/gpu/drm/imx/ipuv3-plane.h
++++ b/drivers/gpu/drm/imx/ipuv3-plane.h
+@@ -27,7 +27,6 @@ struct ipu_plane {
+ 	int			dp_flow;
+ 
+ 	bool			disabling;
+-	int			next_buf;
+ };
+ 
+ struct ipu_plane *ipu_plane_init(struct drm_device *dev, struct ipu_soc *ipu,
+diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/i2c.h b/drivers/gpu/drm/nouveau/include/nvkm/subdev/i2c.h
+index eef54e9b5d77..7957eafa5f0e 100644
+--- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/i2c.h
++++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/i2c.h
+@@ -38,6 +38,7 @@ struct nvkm_i2c_bus {
+ 	struct mutex mutex;
+ 	struct list_head head;
+ 	struct i2c_adapter i2c;
++	u8 enabled;
+ };
+ 
+ int nvkm_i2c_bus_acquire(struct nvkm_i2c_bus *);
+@@ -57,6 +58,7 @@ struct nvkm_i2c_aux {
+ 	struct mutex mutex;
+ 	struct list_head head;
+ 	struct i2c_adapter i2c;
++	u8 enabled;
+ 
+ 	u32 intr;
+ };
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/aux.c b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/aux.c
+index 4c1f547da463..b4e7404fe660 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/aux.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/aux.c
+@@ -105,9 +105,15 @@ nvkm_i2c_aux_acquire(struct nvkm_i2c_aux *aux)
+ {
+ 	struct nvkm_i2c_pad *pad = aux->pad;
+ 	int ret;
++
+ 	AUX_TRACE(aux, "acquire");
+ 	mutex_lock(&aux->mutex);
+-	ret = nvkm_i2c_pad_acquire(pad, NVKM_I2C_PAD_AUX);
++
++	if (aux->enabled)
++		ret = nvkm_i2c_pad_acquire(pad, NVKM_I2C_PAD_AUX);
++	else
++		ret = -EIO;
++
+ 	if (ret)
+ 		mutex_unlock(&aux->mutex);
+ 	return ret;
+@@ -145,6 +151,24 @@ nvkm_i2c_aux_del(struct nvkm_i2c_aux **paux)
+ 	}
+ }
+ 
++void
++nvkm_i2c_aux_init(struct nvkm_i2c_aux *aux)
++{
++	AUX_TRACE(aux, "init");
++	mutex_lock(&aux->mutex);
++	aux->enabled = true;
++	mutex_unlock(&aux->mutex);
++}
++
++void
++nvkm_i2c_aux_fini(struct nvkm_i2c_aux *aux)
++{
++	AUX_TRACE(aux, "fini");
++	mutex_lock(&aux->mutex);
++	aux->enabled = false;
++	mutex_unlock(&aux->mutex);
++}
++
+ int
+ nvkm_i2c_aux_ctor(const struct nvkm_i2c_aux_func *func,
+ 		  struct nvkm_i2c_pad *pad, int id,
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/aux.h b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/aux.h
+index 7d56c4ba693c..08f6b2ee64ab 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/aux.h
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/aux.h
+@@ -16,6 +16,8 @@ int nvkm_i2c_aux_ctor(const struct nvkm_i2c_aux_func *, struct nvkm_i2c_pad *,
+ int nvkm_i2c_aux_new_(const struct nvkm_i2c_aux_func *, struct nvkm_i2c_pad *,
+ 		      int id, struct nvkm_i2c_aux **);
+ void nvkm_i2c_aux_del(struct nvkm_i2c_aux **);
++void nvkm_i2c_aux_init(struct nvkm_i2c_aux *);
++void nvkm_i2c_aux_fini(struct nvkm_i2c_aux *);
+ int nvkm_i2c_aux_xfer(struct nvkm_i2c_aux *, bool retry, u8 type,
+ 		      u32 addr, u8 *data, u8 *size);
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c
+index 4f197b15acf6..ecacb22834d7 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c
+@@ -160,8 +160,18 @@ nvkm_i2c_fini(struct nvkm_subdev *subdev, bool suspend)
+ {
+ 	struct nvkm_i2c *i2c = nvkm_i2c(subdev);
+ 	struct nvkm_i2c_pad *pad;
++	struct nvkm_i2c_bus *bus;
++	struct nvkm_i2c_aux *aux;
+ 	u32 mask;
+ 
++	list_for_each_entry(aux, &i2c->aux, head) {
++		nvkm_i2c_aux_fini(aux);
++	}
++
++	list_for_each_entry(bus, &i2c->bus, head) {
++		nvkm_i2c_bus_fini(bus);
++	}
++
+ 	if ((mask = (1 << i2c->func->aux) - 1), i2c->func->aux_stat) {
+ 		i2c->func->aux_mask(i2c, NVKM_I2C_ANY, mask, 0);
+ 		i2c->func->aux_stat(i2c, &mask, &mask, &mask, &mask);
+@@ -180,6 +190,7 @@ nvkm_i2c_init(struct nvkm_subdev *subdev)
+ 	struct nvkm_i2c *i2c = nvkm_i2c(subdev);
+ 	struct nvkm_i2c_bus *bus;
+ 	struct nvkm_i2c_pad *pad;
++	struct nvkm_i2c_aux *aux;
+ 
+ 	list_for_each_entry(pad, &i2c->pad, head) {
+ 		nvkm_i2c_pad_init(pad);
+@@ -189,6 +200,10 @@ nvkm_i2c_init(struct nvkm_subdev *subdev)
+ 		nvkm_i2c_bus_init(bus);
+ 	}
+ 
++	list_for_each_entry(aux, &i2c->aux, head) {
++		nvkm_i2c_aux_init(aux);
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/bus.c b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/bus.c
+index 807a2b67bd64..ed50cc3736b9 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/bus.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/bus.c
+@@ -110,6 +110,19 @@ nvkm_i2c_bus_init(struct nvkm_i2c_bus *bus)
+ 	BUS_TRACE(bus, "init");
+ 	if (bus->func->init)
+ 		bus->func->init(bus);
++
++	mutex_lock(&bus->mutex);
++	bus->enabled = true;
++	mutex_unlock(&bus->mutex);
++}
++
++void
++nvkm_i2c_bus_fini(struct nvkm_i2c_bus *bus)
++{
++	BUS_TRACE(bus, "fini");
++	mutex_lock(&bus->mutex);
++	bus->enabled = false;
++	mutex_unlock(&bus->mutex);
+ }
+ 
+ void
+@@ -126,9 +139,15 @@ nvkm_i2c_bus_acquire(struct nvkm_i2c_bus *bus)
+ {
+ 	struct nvkm_i2c_pad *pad = bus->pad;
+ 	int ret;
++
+ 	BUS_TRACE(bus, "acquire");
+ 	mutex_lock(&bus->mutex);
+-	ret = nvkm_i2c_pad_acquire(pad, NVKM_I2C_PAD_I2C);
++
++	if (bus->enabled)
++		ret = nvkm_i2c_pad_acquire(pad, NVKM_I2C_PAD_I2C);
++	else
++		ret = -EIO;
++
+ 	if (ret)
+ 		mutex_unlock(&bus->mutex);
+ 	return ret;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/bus.h b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/bus.h
+index bea0dd33961e..465464bba58b 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/bus.h
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/bus.h
+@@ -18,6 +18,7 @@ int nvkm_i2c_bus_new_(const struct nvkm_i2c_bus_func *, struct nvkm_i2c_pad *,
+ 		      int id, struct nvkm_i2c_bus **);
+ void nvkm_i2c_bus_del(struct nvkm_i2c_bus **);
+ void nvkm_i2c_bus_init(struct nvkm_i2c_bus *);
++void nvkm_i2c_bus_fini(struct nvkm_i2c_bus *);
+ 
+ int nvkm_i2c_bit_xfer(struct nvkm_i2c_bus *, struct i2c_msg *, int);
+ 
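
The nouveau hunks above all apply one pattern: an `enabled` flag, written only under the bus/aux mutex, is cleared in the fini path so that any later acquire fails with -EIO instead of touching a suspended pad. Below is a minimal userspace analogue of that idea, assuming pthreads; the `chan_*` names are invented for illustration and are not the driver's API.

/* Minimal sketch of the enabled-flag pattern, assuming pthreads. */
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct chan {
	pthread_mutex_t mutex;
	bool enabled;
};

static int chan_acquire(struct chan *c)
{
	pthread_mutex_lock(&c->mutex);
	if (!c->enabled) {
		/* Refuse instead of touching suspended hardware. */
		pthread_mutex_unlock(&c->mutex);
		return -EIO;
	}
	/* On success the mutex stays held until chan_release(),
	 * just as the driver holds bus->mutex across a transfer. */
	return 0;
}

static void chan_release(struct chan *c)
{
	pthread_mutex_unlock(&c->mutex);
}

static void chan_fini(struct chan *c)
{
	pthread_mutex_lock(&c->mutex);
	c->enabled = false;	/* later acquires now fail */
	pthread_mutex_unlock(&c->mutex);
}

int main(void)
{
	struct chan c = { PTHREAD_MUTEX_INITIALIZER, true };

	if (chan_acquire(&c) == 0)
		chan_release(&c);
	chan_fini(&c);
	printf("acquire after fini: %d\n", chan_acquire(&c)); /* -EIO */
	return 0;
}
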
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_drv.c b/drivers/gpu/drm/rockchip/rockchip_drm_drv.c
+index d7fa17f12769..8ddb0c0e4bff 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_drv.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_drv.c
+@@ -448,6 +448,14 @@ static int rockchip_drm_platform_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++static void rockchip_drm_platform_shutdown(struct platform_device *pdev)
++{
++	struct drm_device *drm = platform_get_drvdata(pdev);
++
++	if (drm)
++		drm_atomic_helper_shutdown(drm);
++}
++
+ static const struct of_device_id rockchip_drm_dt_ids[] = {
+ 	{ .compatible = "rockchip,display-subsystem", },
+ 	{ /* sentinel */ },
+@@ -457,6 +465,7 @@ MODULE_DEVICE_TABLE(of, rockchip_drm_dt_ids);
+ static struct platform_driver rockchip_drm_platform_driver = {
+ 	.probe = rockchip_drm_platform_probe,
+ 	.remove = rockchip_drm_platform_remove,
++	.shutdown = rockchip_drm_platform_shutdown,
+ 	.driver = {
+ 		.name = "rockchip-drm",
+ 		.of_match_table = rockchip_drm_dt_ids,
+diff --git a/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c b/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
+index 66ea3a902e36..43643ad31730 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
++++ b/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
+@@ -293,7 +293,8 @@ static int sun8i_hdmi_phy_config_h3(struct dw_hdmi *hdmi,
+ 				 SUN8I_HDMI_PHY_ANA_CFG2_REG_BIGSW |
+ 				 SUN8I_HDMI_PHY_ANA_CFG2_REG_SLV(4);
+ 		ana_cfg3_init |= SUN8I_HDMI_PHY_ANA_CFG3_REG_AMPCK(9) |
+-				 SUN8I_HDMI_PHY_ANA_CFG3_REG_AMP(13);
++				 SUN8I_HDMI_PHY_ANA_CFG3_REG_AMP(13) |
++				 SUN8I_HDMI_PHY_ANA_CFG3_REG_EMP(3);
+ 	}
+ 
+ 	regmap_update_bits(phy->regs, SUN8I_HDMI_PHY_ANA_CFG1_REG,
+@@ -672,22 +673,13 @@ int sun8i_hdmi_phy_probe(struct sun8i_dw_hdmi *hdmi, struct device_node *node)
+ 				goto err_put_clk_pll0;
+ 			}
+ 		}
+-
+-		ret = sun8i_phy_clk_create(phy, dev,
+-					   phy->variant->has_second_pll);
+-		if (ret) {
+-			dev_err(dev, "Couldn't create the PHY clock\n");
+-			goto err_put_clk_pll1;
+-		}
+-
+-		clk_prepare_enable(phy->clk_phy);
+ 	}
+ 
+ 	phy->rst_phy = of_reset_control_get_shared(node, "phy");
+ 	if (IS_ERR(phy->rst_phy)) {
+ 		dev_err(dev, "Could not get phy reset control\n");
+ 		ret = PTR_ERR(phy->rst_phy);
+-		goto err_disable_clk_phy;
++		goto err_put_clk_pll1;
+ 	}
+ 
+ 	ret = reset_control_deassert(phy->rst_phy);
+@@ -708,18 +700,29 @@ int sun8i_hdmi_phy_probe(struct sun8i_dw_hdmi *hdmi, struct device_node *node)
+ 		goto err_disable_clk_bus;
+ 	}
+ 
++	if (phy->variant->has_phy_clk) {
++		ret = sun8i_phy_clk_create(phy, dev,
++					   phy->variant->has_second_pll);
++		if (ret) {
++			dev_err(dev, "Couldn't create the PHY clock\n");
++			goto err_disable_clk_mod;
++		}
++
++		clk_prepare_enable(phy->clk_phy);
++	}
++
+ 	hdmi->phy = phy;
+ 
+ 	return 0;
+ 
++err_disable_clk_mod:
++	clk_disable_unprepare(phy->clk_mod);
+ err_disable_clk_bus:
+ 	clk_disable_unprepare(phy->clk_bus);
+ err_deassert_rst_phy:
+ 	reset_control_assert(phy->rst_phy);
+ err_put_rst_phy:
+ 	reset_control_put(phy->rst_phy);
+-err_disable_clk_phy:
+-	clk_disable_unprepare(phy->clk_phy);
+ err_put_clk_pll1:
+ 	clk_put(phy->clk_pll1);
+ err_put_clk_pll0:
+diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
+index 4f80100ff5f3..4cce11fd8836 100644
+--- a/drivers/gpu/drm/tegra/gem.c
++++ b/drivers/gpu/drm/tegra/gem.c
+@@ -204,7 +204,7 @@ static void tegra_bo_free(struct drm_device *drm, struct tegra_bo *bo)
+ {
+ 	if (bo->pages) {
+ 		dma_unmap_sg(drm->dev, bo->sgt->sgl, bo->sgt->nents,
+-			     DMA_BIDIRECTIONAL);
++			     DMA_FROM_DEVICE);
+ 		drm_gem_put_pages(&bo->gem, bo->pages, true, true);
+ 		sg_free_table(bo->sgt);
+ 		kfree(bo->sgt);
+@@ -230,7 +230,7 @@ static int tegra_bo_get_pages(struct drm_device *drm, struct tegra_bo *bo)
+ 	}
+ 
+ 	err = dma_map_sg(drm->dev, bo->sgt->sgl, bo->sgt->nents,
+-			 DMA_BIDIRECTIONAL);
++			 DMA_FROM_DEVICE);
+ 	if (err == 0) {
+ 		err = -EFAULT;
+ 		goto free_sgt;
+diff --git a/drivers/gpu/drm/vmwgfx/ttm_object.c b/drivers/gpu/drm/vmwgfx/ttm_object.c
+index 36990b80e790..16077785ad47 100644
+--- a/drivers/gpu/drm/vmwgfx/ttm_object.c
++++ b/drivers/gpu/drm/vmwgfx/ttm_object.c
+@@ -174,7 +174,7 @@ int ttm_base_object_init(struct ttm_object_file *tfile,
+ 	kref_init(&base->refcount);
+ 	idr_preload(GFP_KERNEL);
+ 	spin_lock(&tdev->object_lock);
+-	ret = idr_alloc(&tdev->idr, base, 0, 0, GFP_NOWAIT);
++	ret = idr_alloc(&tdev->idr, base, 1, 0, GFP_NOWAIT);
+ 	spin_unlock(&tdev->object_lock);
+ 	idr_preload_end();
+ 	if (ret < 0)
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+index 1bfa353d995c..9d2e1ce5c0a6 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+@@ -1240,7 +1240,13 @@ static int vmw_master_set(struct drm_device *dev,
+ 	}
+ 
+ 	dev_priv->active_master = vmaster;
+-	drm_sysfs_hotplug_event(dev);
++
++	/*
++	 * Inform a new master that the layout may have changed while
++	 * it was gone.
++	 */
++	if (!from_open)
++		drm_sysfs_hotplug_event(dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+index 88b8178d4687..c0231a817d3b 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+@@ -2129,6 +2129,11 @@ static int vmw_cmd_set_shader(struct vmw_private *dev_priv,
+ 		return 0;
+ 
+ 	if (cmd->body.shid != SVGA3D_INVALID_ID) {
++		/*
++		 * This is the compat shader path - Per device guest-backed
++		 * shaders, but user-space thinks it's per context host-
++		 * backed shaders.
++		 */
+ 		res = vmw_shader_lookup(vmw_context_res_man(ctx),
+ 					cmd->body.shid,
+ 					cmd->body.type);
+@@ -2137,6 +2142,14 @@ static int vmw_cmd_set_shader(struct vmw_private *dev_priv,
+ 			ret = vmw_execbuf_res_noctx_val_add(sw_context, res);
+ 			if (unlikely(ret != 0))
+ 				return ret;
++
++			ret = vmw_resource_relocation_add
++				(sw_context, res,
++				 vmw_ptr_diff(sw_context->buf_start,
++					      &cmd->body.shid),
++				 vmw_res_rel_normal);
++			if (unlikely(ret != 0))
++				return ret;
+ 		}
+ 	}
+ 
+diff --git a/drivers/i2c/busses/i2c-mlxcpld.c b/drivers/i2c/busses/i2c-mlxcpld.c
+index 745ed43a22d6..2fd717d8dd30 100644
+--- a/drivers/i2c/busses/i2c-mlxcpld.c
++++ b/drivers/i2c/busses/i2c-mlxcpld.c
+@@ -503,6 +503,7 @@ static int mlxcpld_i2c_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, priv);
+ 
+ 	priv->dev = &pdev->dev;
++	priv->base_addr = MLXPLAT_CPLD_LPC_I2C_BASE_ADDR;
+ 
+ 	/* Register with i2c layer */
+ 	mlxcpld_i2c_adapter.timeout = usecs_to_jiffies(MLXCPLD_I2C_XFER_TO);
+@@ -518,7 +519,6 @@ static int mlxcpld_i2c_probe(struct platform_device *pdev)
+ 		mlxcpld_i2c_adapter.nr = pdev->id;
+ 	priv->adap = mlxcpld_i2c_adapter;
+ 	priv->adap.dev.parent = &pdev->dev;
+-	priv->base_addr = MLXPLAT_CPLD_LPC_I2C_BASE_ADDR;
+ 	i2c_set_adapdata(&priv->adap, priv);
+ 
+ 	err = i2c_add_numbered_adapter(&priv->adap);
+diff --git a/drivers/i2c/busses/i2c-synquacer.c b/drivers/i2c/busses/i2c-synquacer.c
+index f14d4b3fab44..f724c8e6b360 100644
+--- a/drivers/i2c/busses/i2c-synquacer.c
++++ b/drivers/i2c/busses/i2c-synquacer.c
+@@ -351,7 +351,7 @@ static int synquacer_i2c_doxfer(struct synquacer_i2c *i2c,
+ 	/* wait 2 clock periods to ensure the stop has been through the bus */
+ 	udelay(DIV_ROUND_UP(2 * 1000, i2c->speed_khz));
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static irqreturn_t synquacer_i2c_isr(int irq, void *dev_id)
+diff --git a/drivers/iio/adc/npcm_adc.c b/drivers/iio/adc/npcm_adc.c
+index 9e25bbec9c70..193b3b81de4d 100644
+--- a/drivers/iio/adc/npcm_adc.c
++++ b/drivers/iio/adc/npcm_adc.c
+@@ -149,7 +149,7 @@ static int npcm_adc_read_raw(struct iio_dev *indio_dev,
+ 		}
+ 		return IIO_VAL_INT;
+ 	case IIO_CHAN_INFO_SCALE:
+-		if (info->vref) {
++		if (!IS_ERR(info->vref)) {
+ 			vref_uv = regulator_get_voltage(info->vref);
+ 			*val = vref_uv / 1000;
+ 		} else {
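
The npcm_adc fix hinges on the kernel's ERR_PTR convention: regulator_get() returns an encoded error pointer rather than NULL, so `if (info->vref)` is always true and IS_ERR() is the correct test. A compact userspace rendition of the convention, mirroring include/linux/err.h, for illustration only:

/* Minimal ERR_PTR/IS_ERR sketch, mirroring include/linux/err.h. */
#include <stdio.h>

#define MAX_ERRNO	4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static void *get_regulator(int available)
{
	static int dummy;
	return available ? (void *)&dummy : ERR_PTR(-19); /* -ENODEV */
}

int main(void)
{
	void *vref = get_regulator(0);

	if (vref)	/* the buggy test: error pointers are non-NULL */
		printf("NULL check wrongly treats the regulator as usable\n");
	if (IS_ERR(vref))	/* the fixed test */
		printf("IS_ERR correctly reports error %ld\n", PTR_ERR(vref));
	return 0;
}
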
+diff --git a/drivers/iio/adc/ti-ads124s08.c b/drivers/iio/adc/ti-ads124s08.c
+index 53f17e4f2f23..552c2be8d87a 100644
+--- a/drivers/iio/adc/ti-ads124s08.c
++++ b/drivers/iio/adc/ti-ads124s08.c
+@@ -202,7 +202,7 @@ static int ads124s_read(struct iio_dev *indio_dev, unsigned int chan)
+ 	};
+ 
+ 	priv->data[0] = ADS124S08_CMD_RDATA;
+-	memset(&priv->data[1], ADS124S08_CMD_NOP, sizeof(priv->data));
++	memset(&priv->data[1], ADS124S08_CMD_NOP, sizeof(priv->data) - 1);
+ 
+ 	ret = spi_sync_transfer(priv->spi, t, ARRAY_SIZE(t));
+ 	if (ret < 0)
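
The ads124s08 change is a textbook off-by-one: a memset() that starts at `&buf[1]` but still passes `sizeof(buf)` writes one byte past the array. A standalone demonstration with hypothetical opcodes:

/* Off-by-one memset sketch: filling from buf[1] must shrink the length. */
#include <stdio.h>
#include <string.h>

#define CMD_RDATA	0x12	/* illustrative opcodes, not the chip's */
#define CMD_NOP		0xff

int main(void)
{
	unsigned char buf[5];

	buf[0] = CMD_RDATA;
	/* Buggy: memset(&buf[1], CMD_NOP, sizeof(buf)) writes 5 bytes
	 * from offset 1, i.e. one byte past the end of buf. */
	memset(&buf[1], CMD_NOP, sizeof(buf) - 1);

	for (size_t i = 0; i < sizeof(buf); i++)
		printf("%02x ", buf[i]);
	printf("\n");
	return 0;
}
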
+diff --git a/drivers/iio/adc/ti-ads8688.c b/drivers/iio/adc/ti-ads8688.c
+index 8b4568edd5cb..7f16c77b99fb 100644
+--- a/drivers/iio/adc/ti-ads8688.c
++++ b/drivers/iio/adc/ti-ads8688.c
+@@ -397,7 +397,7 @@ static irqreturn_t ads8688_trigger_handler(int irq, void *p)
+ 	}
+ 
+ 	iio_push_to_buffers_with_timestamp(indio_dev, buffer,
+-			pf->timestamp);
++			iio_get_time_ns(indio_dev));
+ 
+ 	iio_trigger_notify_done(indio_dev->trig);
+ 
+diff --git a/drivers/iio/dac/ds4424.c b/drivers/iio/dac/ds4424.c
+index 883a47562055..714a97f91319 100644
+--- a/drivers/iio/dac/ds4424.c
++++ b/drivers/iio/dac/ds4424.c
+@@ -166,7 +166,7 @@ static int ds4424_verify_chip(struct iio_dev *indio_dev)
+ {
+ 	int ret, val;
+ 
+-	ret = ds4424_get_value(indio_dev, &val, DS4424_DAC_ADDR(0));
++	ret = ds4424_get_value(indio_dev, &val, 0);
+ 	if (ret < 0)
+ 		dev_err(&indio_dev->dev,
+ 				"%s failed. ret: %d\n", __func__, ret);
+diff --git a/drivers/media/usb/siano/smsusb.c b/drivers/media/usb/siano/smsusb.c
+index 4fc03ec8a4f1..e39f3f40dfdd 100644
+--- a/drivers/media/usb/siano/smsusb.c
++++ b/drivers/media/usb/siano/smsusb.c
+@@ -400,6 +400,7 @@ static int smsusb_init_device(struct usb_interface *intf, int board_id)
+ 	struct smsusb_device_t *dev;
+ 	void *mdev;
+ 	int i, rc;
++	int align = 0;
+ 
+ 	/* create device object */
+ 	dev = kzalloc(sizeof(struct smsusb_device_t), GFP_KERNEL);
+@@ -411,6 +412,24 @@ static int smsusb_init_device(struct usb_interface *intf, int board_id)
+ 	dev->udev = interface_to_usbdev(intf);
+ 	dev->state = SMSUSB_DISCONNECTED;
+ 
++	for (i = 0; i < intf->cur_altsetting->desc.bNumEndpoints; i++) {
++		struct usb_endpoint_descriptor *desc =
++				&intf->cur_altsetting->endpoint[i].desc;
++
++		if (desc->bEndpointAddress & USB_DIR_IN) {
++			dev->in_ep = desc->bEndpointAddress;
++			align = usb_endpoint_maxp(desc) - sizeof(struct sms_msg_hdr);
++		} else {
++			dev->out_ep = desc->bEndpointAddress;
++		}
++	}
++
++	pr_debug("in_ep = %02x, out_ep = %02x\n", dev->in_ep, dev->out_ep);
++	if (!dev->in_ep || !dev->out_ep || align < 0) {  /* Missing endpoints? */
++		smsusb_term_device(intf);
++		return -ENODEV;
++	}
++
+ 	params.device_type = sms_get_board(board_id)->type;
+ 
+ 	switch (params.device_type) {
+@@ -425,24 +444,12 @@ static int smsusb_init_device(struct usb_interface *intf, int board_id)
+ 		/* fall-thru */
+ 	default:
+ 		dev->buffer_size = USB2_BUFFER_SIZE;
+-		dev->response_alignment =
+-		    le16_to_cpu(dev->udev->ep_in[1]->desc.wMaxPacketSize) -
+-		    sizeof(struct sms_msg_hdr);
++		dev->response_alignment = align;
+ 
+ 		params.flags |= SMS_DEVICE_FAMILY2;
+ 		break;
+ 	}
+ 
+-	for (i = 0; i < intf->cur_altsetting->desc.bNumEndpoints; i++) {
+-		if (intf->cur_altsetting->endpoint[i].desc. bEndpointAddress & USB_DIR_IN)
+-			dev->in_ep = intf->cur_altsetting->endpoint[i].desc.bEndpointAddress;
+-		else
+-			dev->out_ep = intf->cur_altsetting->endpoint[i].desc.bEndpointAddress;
+-	}
+-
+-	pr_debug("in_ep = %02x, out_ep = %02x\n",
+-		dev->in_ep, dev->out_ep);
+-
+ 	params.device = &dev->udev->dev;
+ 	params.usb_device = dev->udev;
+ 	params.buffer_size = dev->buffer_size;
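
The smsusb reorder moves the endpoint scan ahead of everything that uses its results, and rejects the device if either endpoint (or the derived alignment) is missing, instead of trusting `ep_in[1]` to exist. A plain-C sketch of that scan-then-validate shape, with made-up descriptor types:

/* Scan-then-validate sketch with made-up descriptor types. */
#include <stdio.h>

#define DIR_IN	0x80

struct ep_desc { unsigned char addr; unsigned short maxp; };
struct sms_dev { unsigned char in_ep, out_ep; int align; };

static int init_device(struct sms_dev *d, const struct ep_desc *eps, int n)
{
	d->in_ep = d->out_ep = 0;
	d->align = 0;

	for (int i = 0; i < n; i++) {
		if (eps[i].addr & DIR_IN) {
			d->in_ep = eps[i].addr;
			d->align = eps[i].maxp - 8; /* minus header, illustrative */
		} else {
			d->out_ep = eps[i].addr;
		}
	}
	/* Reject up front, before anything dereferences the endpoints. */
	if (!d->in_ep || !d->out_ep || d->align < 0)
		return -19;	/* -ENODEV */
	return 0;
}

int main(void)
{
	struct ep_desc only_out[] = { { 0x01, 64 } };
	struct sms_dev d;

	printf("init: %d\n", init_device(&d, only_out, 1));	/* -19 */
	return 0;
}
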
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcdc.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcdc.c
+index 73d3c1a0a7c9..98b168736df0 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcdc.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcdc.c
+@@ -490,11 +490,18 @@ fail:
+ 	return -ENOMEM;
+ }
+ 
+-void brcmf_proto_bcdc_detach(struct brcmf_pub *drvr)
++void brcmf_proto_bcdc_detach_pre_delif(struct brcmf_pub *drvr)
++{
++	struct brcmf_bcdc *bcdc = drvr->proto->pd;
++
++	brcmf_fws_detach_pre_delif(bcdc->fws);
++}
++
++void brcmf_proto_bcdc_detach_post_delif(struct brcmf_pub *drvr)
+ {
+ 	struct brcmf_bcdc *bcdc = drvr->proto->pd;
+ 
+ 	drvr->proto->pd = NULL;
+-	brcmf_fws_detach(bcdc->fws);
++	brcmf_fws_detach_post_delif(bcdc->fws);
+ 	kfree(bcdc);
+ }
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcdc.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcdc.h
+index 3b0e9eff21b5..4bc52240ccea 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcdc.h
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcdc.h
+@@ -18,14 +18,16 @@
+ 
+ #ifdef CONFIG_BRCMFMAC_PROTO_BCDC
+ int brcmf_proto_bcdc_attach(struct brcmf_pub *drvr);
+-void brcmf_proto_bcdc_detach(struct brcmf_pub *drvr);
++void brcmf_proto_bcdc_detach_pre_delif(struct brcmf_pub *drvr);
++void brcmf_proto_bcdc_detach_post_delif(struct brcmf_pub *drvr);
+ void brcmf_proto_bcdc_txflowblock(struct device *dev, bool state);
+ void brcmf_proto_bcdc_txcomplete(struct device *dev, struct sk_buff *txp,
+ 				 bool success);
+ struct brcmf_fws_info *drvr_to_fws(struct brcmf_pub *drvr);
+ #else
+ static inline int brcmf_proto_bcdc_attach(struct brcmf_pub *drvr) { return 0; }
+-static inline void brcmf_proto_bcdc_detach(struct brcmf_pub *drvr) {}
++static inline void brcmf_proto_bcdc_detach_pre_delif(struct brcmf_pub *drvr) {}
++static inline void brcmf_proto_bcdc_detach_post_delif(struct brcmf_pub *drvr) {}
+ #endif
+ 
+ #endif /* BRCMFMAC_BCDC_H */
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+index 24ed19ed116e..52da307087e1 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+@@ -1303,6 +1303,8 @@ void brcmf_detach(struct device *dev)
+ 
+ 	brcmf_bus_change_state(bus_if, BRCMF_BUS_DOWN);
+ 
++	brcmf_proto_detach_pre_delif(drvr);
++
+ 	/* make sure primary interface removed last */
+ 	for (i = BRCMF_MAX_IFS-1; i > -1; i--)
+ 		brcmf_remove_interface(drvr->iflist[i], false);
+@@ -1312,7 +1314,7 @@ void brcmf_detach(struct device *dev)
+ 
+ 	brcmf_bus_stop(drvr->bus_if);
+ 
+-	brcmf_proto_detach(drvr);
++	brcmf_proto_detach_post_delif(drvr);
+ 
+ 	bus_if->drvr = NULL;
+ 	wiphy_free(drvr->wiphy);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c
+index d48b8b2d946f..c22c49ae552e 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c
+@@ -2443,17 +2443,25 @@ struct brcmf_fws_info *brcmf_fws_attach(struct brcmf_pub *drvr)
+ 	return fws;
+ 
+ fail:
+-	brcmf_fws_detach(fws);
++	brcmf_fws_detach_pre_delif(fws);
++	brcmf_fws_detach_post_delif(fws);
+ 	return ERR_PTR(rc);
+ }
+ 
+-void brcmf_fws_detach(struct brcmf_fws_info *fws)
++void brcmf_fws_detach_pre_delif(struct brcmf_fws_info *fws)
+ {
+ 	if (!fws)
+ 		return;
+-
+-	if (fws->fws_wq)
++	if (fws->fws_wq) {
+ 		destroy_workqueue(fws->fws_wq);
++		fws->fws_wq = NULL;
++	}
++}
++
++void brcmf_fws_detach_post_delif(struct brcmf_fws_info *fws)
++{
++	if (!fws)
++		return;
+ 
+ 	/* cleanup */
+ 	brcmf_fws_lock(fws);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.h
+index 4e6835766d5d..749c06dcdc17 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.h
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.h
+@@ -19,7 +19,8 @@
+ #define FWSIGNAL_H_
+ 
+ struct brcmf_fws_info *brcmf_fws_attach(struct brcmf_pub *drvr);
+-void brcmf_fws_detach(struct brcmf_fws_info *fws);
++void brcmf_fws_detach_pre_delif(struct brcmf_fws_info *fws);
++void brcmf_fws_detach_post_delif(struct brcmf_fws_info *fws);
+ void brcmf_fws_debugfs_create(struct brcmf_pub *drvr);
+ bool brcmf_fws_queue_skbs(struct brcmf_fws_info *fws);
+ bool brcmf_fws_fc_active(struct brcmf_fws_info *fws);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/proto.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/proto.c
+index 024c643052bc..c7964ccdda69 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/proto.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/proto.c
+@@ -67,16 +67,22 @@ fail:
+ 	return -ENOMEM;
+ }
+ 
+-void brcmf_proto_detach(struct brcmf_pub *drvr)
++void brcmf_proto_detach_post_delif(struct brcmf_pub *drvr)
+ {
+ 	brcmf_dbg(TRACE, "Enter\n");
+ 
+ 	if (drvr->proto) {
+ 		if (drvr->bus_if->proto_type == BRCMF_PROTO_BCDC)
+-			brcmf_proto_bcdc_detach(drvr);
++			brcmf_proto_bcdc_detach_post_delif(drvr);
+ 		else if (drvr->bus_if->proto_type == BRCMF_PROTO_MSGBUF)
+ 			brcmf_proto_msgbuf_detach(drvr);
+ 		kfree(drvr->proto);
+ 		drvr->proto = NULL;
+ 	}
+ }
++
++void brcmf_proto_detach_pre_delif(struct brcmf_pub *drvr)
++{
++	if (drvr->proto && drvr->bus_if->proto_type == BRCMF_PROTO_BCDC)
++		brcmf_proto_bcdc_detach_pre_delif(drvr);
++}
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/proto.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/proto.h
+index d3c3b9a815ad..72355aea9028 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/proto.h
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/proto.h
+@@ -54,7 +54,8 @@ struct brcmf_proto {
+ 
+ 
+ int brcmf_proto_attach(struct brcmf_pub *drvr);
+-void brcmf_proto_detach(struct brcmf_pub *drvr);
++void brcmf_proto_detach_pre_delif(struct brcmf_pub *drvr);
++void brcmf_proto_detach_post_delif(struct brcmf_pub *drvr);
+ 
+ static inline int brcmf_proto_hdrpull(struct brcmf_pub *drvr, bool do_fws,
+ 				      struct sk_buff *skb,
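
The brcmfmac series splits one detach into two phases: `*_detach_pre_delif` destroys the firmware-signalling workqueue before the network interfaces are removed, and `*_detach_post_delif` frees the remaining state afterwards, so the worker can never run against a half-torn-down interface. A pthread sketch of the same ordering, names invented:

/* Two-phase teardown sketch: stop the worker first, free state last. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct fws {
	pthread_t worker;
	bool stop;
	int *iface_state;	/* stands in for per-interface data */
};

static void *work(void *arg)
{
	struct fws *f = arg;

	while (!__atomic_load_n(&f->stop, __ATOMIC_ACQUIRE))
		(void)f->iface_state[0];	/* worker touches iface data */
	return NULL;
}

static void detach_pre_delif(struct fws *f)
{
	/* Phase 1: nothing may use the interfaces after this returns. */
	__atomic_store_n(&f->stop, true, __ATOMIC_RELEASE);
	pthread_join(f->worker, NULL);
}

static void detach_post_delif(struct fws *f)
{
	/* Phase 2: interfaces are gone; remaining state can be freed. */
	free(f->iface_state);
}

int main(void)
{
	struct fws f = { .stop = false, .iface_state = calloc(1, sizeof(int)) };

	pthread_create(&f.worker, NULL, work, &f);
	detach_pre_delif(&f);
	/* ...interface removal would happen here... */
	detach_post_delif(&f);
	puts("worker stopped before state was freed");
	return 0;
}
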
+diff --git a/drivers/s390/scsi/zfcp_ext.h b/drivers/s390/scsi/zfcp_ext.h
+index c6acca521ffe..31e8a7240fd7 100644
+--- a/drivers/s390/scsi/zfcp_ext.h
++++ b/drivers/s390/scsi/zfcp_ext.h
+@@ -167,6 +167,7 @@ extern const struct attribute_group *zfcp_port_attr_groups[];
+ extern struct mutex zfcp_sysfs_port_units_mutex;
+ extern struct device_attribute *zfcp_sysfs_sdev_attrs[];
+ extern struct device_attribute *zfcp_sysfs_shost_attrs[];
++bool zfcp_sysfs_port_is_removing(const struct zfcp_port *const port);
+ 
+ /* zfcp_unit.c */
+ extern int zfcp_unit_add(struct zfcp_port *, u64);
+diff --git a/drivers/s390/scsi/zfcp_scsi.c b/drivers/s390/scsi/zfcp_scsi.c
+index 221d0dfb8493..e9ded2befa0d 100644
+--- a/drivers/s390/scsi/zfcp_scsi.c
++++ b/drivers/s390/scsi/zfcp_scsi.c
+@@ -129,6 +129,15 @@ static int zfcp_scsi_slave_alloc(struct scsi_device *sdev)
+ 
+ 	zfcp_sdev->erp_action.port = port;
+ 
++	mutex_lock(&zfcp_sysfs_port_units_mutex);
++	if (zfcp_sysfs_port_is_removing(port)) {
++		/* port is already gone */
++		mutex_unlock(&zfcp_sysfs_port_units_mutex);
++		put_device(&port->dev); /* undo zfcp_get_port_by_wwpn() */
++		return -ENXIO;
++	}
++	mutex_unlock(&zfcp_sysfs_port_units_mutex);
++
+ 	unit = zfcp_unit_find(port, zfcp_scsi_dev_lun(sdev));
+ 	if (unit)
+ 		put_device(&unit->dev);
+diff --git a/drivers/s390/scsi/zfcp_sysfs.c b/drivers/s390/scsi/zfcp_sysfs.c
+index b277be6f7611..af197e2b3e69 100644
+--- a/drivers/s390/scsi/zfcp_sysfs.c
++++ b/drivers/s390/scsi/zfcp_sysfs.c
+@@ -235,6 +235,53 @@ static ZFCP_DEV_ATTR(adapter, port_rescan, S_IWUSR, NULL,
+ 
+ DEFINE_MUTEX(zfcp_sysfs_port_units_mutex);
+ 
++static void zfcp_sysfs_port_set_removing(struct zfcp_port *const port)
++{
++	lockdep_assert_held(&zfcp_sysfs_port_units_mutex);
++	atomic_set(&port->units, -1);
++}
++
++bool zfcp_sysfs_port_is_removing(const struct zfcp_port *const port)
++{
++	lockdep_assert_held(&zfcp_sysfs_port_units_mutex);
++	return atomic_read(&port->units) == -1;
++}
++
++static bool zfcp_sysfs_port_in_use(struct zfcp_port *const port)
++{
++	struct zfcp_adapter *const adapter = port->adapter;
++	unsigned long flags;
++	struct scsi_device *sdev;
++	bool in_use = true;
++
++	mutex_lock(&zfcp_sysfs_port_units_mutex);
++	if (atomic_read(&port->units) > 0)
++		goto unlock_port_units_mutex; /* zfcp_unit(s) under port */
++
++	spin_lock_irqsave(adapter->scsi_host->host_lock, flags);
++	__shost_for_each_device(sdev, adapter->scsi_host) {
++		const struct zfcp_scsi_dev *zsdev = sdev_to_zfcp(sdev);
++
++		if (sdev->sdev_state == SDEV_DEL ||
++		    sdev->sdev_state == SDEV_CANCEL)
++			continue;
++		if (zsdev->port != port)
++			continue;
++		/* alive scsi_device under port of interest */
++		goto unlock_host_lock;
++	}
++
++	/* port is about to be removed, so no more unit_add or slave_alloc */
++	zfcp_sysfs_port_set_removing(port);
++	in_use = false;
++
++unlock_host_lock:
++	spin_unlock_irqrestore(adapter->scsi_host->host_lock, flags);
++unlock_port_units_mutex:
++	mutex_unlock(&zfcp_sysfs_port_units_mutex);
++	return in_use;
++}
++
+ static ssize_t zfcp_sysfs_port_remove_store(struct device *dev,
+ 					    struct device_attribute *attr,
+ 					    const char *buf, size_t count)
+@@ -257,15 +304,11 @@ static ssize_t zfcp_sysfs_port_remove_store(struct device *dev,
+ 	else
+ 		retval = 0;
+ 
+-	mutex_lock(&zfcp_sysfs_port_units_mutex);
+-	if (atomic_read(&port->units) > 0) {
++	if (zfcp_sysfs_port_in_use(port)) {
+ 		retval = -EBUSY;
+-		mutex_unlock(&zfcp_sysfs_port_units_mutex);
++		put_device(&port->dev); /* undo zfcp_get_port_by_wwpn() */
+ 		goto out;
+ 	}
+-	/* port is about to be removed, so no more unit_add */
+-	atomic_set(&port->units, -1);
+-	mutex_unlock(&zfcp_sysfs_port_units_mutex);
+ 
+ 	write_lock_irq(&adapter->port_list_lock);
+ 	list_del(&port->list);
+diff --git a/drivers/s390/scsi/zfcp_unit.c b/drivers/s390/scsi/zfcp_unit.c
+index 1bf0a0984a09..e67bf7388cae 100644
+--- a/drivers/s390/scsi/zfcp_unit.c
++++ b/drivers/s390/scsi/zfcp_unit.c
+@@ -124,7 +124,7 @@ int zfcp_unit_add(struct zfcp_port *port, u64 fcp_lun)
+ 	int retval = 0;
+ 
+ 	mutex_lock(&zfcp_sysfs_port_units_mutex);
+-	if (atomic_read(&port->units) == -1) {
++	if (zfcp_sysfs_port_is_removing(port)) {
+ 		/* port is already gone */
+ 		retval = -ENODEV;
+ 		goto out;
+@@ -168,8 +168,14 @@ int zfcp_unit_add(struct zfcp_port *port, u64 fcp_lun)
+ 	write_lock_irq(&port->unit_list_lock);
+ 	list_add_tail(&unit->list, &port->unit_list);
+ 	write_unlock_irq(&port->unit_list_lock);
++	/*
++	 * lock order: shost->scan_mutex before zfcp_sysfs_port_units_mutex
++	 * due to      zfcp_unit_scsi_scan() => zfcp_scsi_slave_alloc()
++	 */
++	mutex_unlock(&zfcp_sysfs_port_units_mutex);
+ 
+ 	zfcp_unit_scsi_scan(unit);
++	return retval;
+ 
+ out:
+ 	mutex_unlock(&zfcp_sysfs_port_units_mutex);
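
The three zfcp hunks cooperate through a single invariant: while zfcp_sysfs_port_units_mutex is held, `port->units == -1` marks a port as being removed, so both unit_add and slave_alloc bail out rather than race the removal. The sentinel-under-mutex idea in miniature, with invented names:

/* Sentinel-under-mutex sketch: -1 marks "removing" and blocks new adds. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t units_mutex = PTHREAD_MUTEX_INITIALIZER;
static int units;	/* >= 0: live unit count; -1: port being removed */

static int unit_add(void)
{
	int ret = 0;

	pthread_mutex_lock(&units_mutex);
	if (units == -1)
		ret = -19;	/* -ENODEV: port is already gone */
	else
		units++;
	pthread_mutex_unlock(&units_mutex);
	return ret;
}

static int port_remove(void)
{
	int ret = 0;

	pthread_mutex_lock(&units_mutex);
	if (units > 0)
		ret = -16;	/* -EBUSY: units still registered */
	else
		units = -1;	/* no more unit_add or slave_alloc */
	pthread_mutex_unlock(&units_mutex);
	return ret;
}

int main(void)
{
	printf("remove: %d\n", port_remove());	/* 0, port marked removing */
	printf("add:    %d\n", unit_add());	/* -19, blocked */
	return 0;
}
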
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
+index eb1e5dcb0d52..a9cc01e8e6c5 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
+@@ -398,9 +398,18 @@ create_pagelist(char __user *buf, size_t count, unsigned short type)
+ 	int dma_buffers;
+ 	dma_addr_t dma_addr;
+ 
++	if (count >= INT_MAX - PAGE_SIZE)
++		return NULL;
++
+ 	offset = ((unsigned int)(unsigned long)buf & (PAGE_SIZE - 1));
+ 	num_pages = DIV_ROUND_UP(count + offset, PAGE_SIZE);
+ 
++	if (num_pages > (SIZE_MAX - sizeof(struct pagelist) -
++			 sizeof(struct vchiq_pagelist_info)) /
++			(sizeof(u32) + sizeof(pages[0]) +
++			 sizeof(struct scatterlist)))
++		return NULL;
++
+ 	pagelist_size = sizeof(struct pagelist) +
+ 			(num_pages * sizeof(u32)) +
+ 			(num_pages * sizeof(pages[0]) +
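
The vchiq guards bound `count` and `num_pages` before the pagelist-size arithmetic can wrap. The general recipe is to compare against SIZE_MAX minus the fixed overhead, divided by the per-element cost, before multiplying; a freestanding version with stand-in sizes:

/* Overflow-checked allocation-size sketch with stand-in element sizes. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static void *alloc_pagelist(size_t num_pages)
{
	const size_t hdr = 64;			/* fixed header, illustrative */
	const size_t per_page = 4 + 8 + 32;	/* u32 + page ptr + sg entry */

	/* Reject num_pages before num_pages * per_page can wrap. */
	if (num_pages > (SIZE_MAX - hdr) / per_page)
		return NULL;
	return malloc(hdr + num_pages * per_page);
}

int main(void)
{
	void *p = alloc_pagelist(1024);

	printf("sane: %s\n", p ? "ok" : "rejected");
	printf("huge: %s\n", alloc_pagelist(SIZE_MAX / 2) ? "ok" : "rejected");
	free(p);
	return 0;
}
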
+diff --git a/drivers/staging/wlan-ng/hfa384x_usb.c b/drivers/staging/wlan-ng/hfa384x_usb.c
+index 6261881e9bcd..e359855ff2ad 100644
+--- a/drivers/staging/wlan-ng/hfa384x_usb.c
++++ b/drivers/staging/wlan-ng/hfa384x_usb.c
+@@ -3119,7 +3119,9 @@ static void hfa384x_usbin_callback(struct urb *urb)
+ 		break;
+ 	}
+ 
++	/* Save values from the RX URB before reposting overwrites them. */
+ 	urb_status = urb->status;
++	usbin = (union hfa384x_usbin *)urb->transfer_buffer;
+ 
+ 	if (action != ABORT) {
+ 		/* Repost the RX URB */
+@@ -3136,7 +3138,6 @@ static void hfa384x_usbin_callback(struct urb *urb)
+ 	/* Note: the check of the sw_support field, the type field doesn't
+ 	 *       have bit 12 set like the docs suggest.
+ 	 */
+-	usbin = (union hfa384x_usbin *)urb->transfer_buffer;
+ 	type = le16_to_cpu(usbin->type);
+ 	if (HFA384x_USB_ISRXFRM(type)) {
+ 		if (action == HANDLE) {
+diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c
+index 450ba6d7996c..e5aebbf5f302 100644
+--- a/drivers/tty/serial/max310x.c
++++ b/drivers/tty/serial/max310x.c
+@@ -581,7 +581,7 @@ static int max310x_set_ref_clk(struct device *dev, struct max310x_port *s,
+ 	}
+ 
+ 	/* Configure clock source */
+-	clksrc = xtal ? MAX310X_CLKSRC_CRYST_BIT : MAX310X_CLKSRC_EXTCLK_BIT;
++	clksrc = MAX310X_CLKSRC_EXTCLK_BIT | (xtal ? MAX310X_CLKSRC_CRYST_BIT : 0);
+ 
+ 	/* Configure PLL */
+ 	if (pllcfg) {
+diff --git a/drivers/tty/serial/msm_serial.c b/drivers/tty/serial/msm_serial.c
+index 109096033bb1..23833ad952ba 100644
+--- a/drivers/tty/serial/msm_serial.c
++++ b/drivers/tty/serial/msm_serial.c
+@@ -860,6 +860,7 @@ static void msm_handle_tx(struct uart_port *port)
+ 	struct circ_buf *xmit = &msm_port->uart.state->xmit;
+ 	struct msm_dma *dma = &msm_port->tx_dma;
+ 	unsigned int pio_count, dma_count, dma_min;
++	char buf[4] = { 0 };
+ 	void __iomem *tf;
+ 	int err = 0;
+ 
+@@ -869,10 +870,12 @@ static void msm_handle_tx(struct uart_port *port)
+ 		else
+ 			tf = port->membase + UART_TF;
+ 
++		buf[0] = port->x_char;
++
+ 		if (msm_port->is_uartdm)
+ 			msm_reset_dm_count(port, 1);
+ 
+-		iowrite8_rep(tf, &port->x_char, 1);
++		iowrite32_rep(tf, buf, 1);
+ 		port->icount.tx++;
+ 		port->x_char = 0;
+ 		return;
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index 3cd139752d3f..abc705716aa0 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -1557,6 +1557,13 @@ static void sci_request_dma(struct uart_port *port)
+ 
+ 	dev_dbg(port->dev, "%s: port %d\n", __func__, port->line);
+ 
++	/*
++	 * DMA on console may interfere with Kernel log messages which use
++	 * plain putchar(). So, simply don't use it with a console.
++	 */
++	if (uart_console(port))
++		return;
++
+ 	if (!port->dev->of_node)
+ 		return;
+ 
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index 693b3b4176f5..9d8aaf9941f9 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -1056,6 +1056,13 @@ static void visual_init(struct vc_data *vc, int num, int init)
+ 	vc->vc_screenbuf_size = vc->vc_rows * vc->vc_size_row;
+ }
+ 
++
++static void visual_deinit(struct vc_data *vc)
++{
++	vc->vc_sw->con_deinit(vc);
++	module_put(vc->vc_sw->owner);
++}
++
+ int vc_allocate(unsigned int currcons)	/* return 0 on success */
+ {
+ 	struct vt_notifier_param param;
+@@ -1103,6 +1110,7 @@ int vc_allocate(unsigned int currcons)	/* return 0 on success */
+ 
+ 	return 0;
+ err_free:
++	visual_deinit(vc);
+ 	kfree(vc);
+ 	vc_cons[currcons].d = NULL;
+ 	return -ENOMEM;
+@@ -1331,9 +1339,8 @@ struct vc_data *vc_deallocate(unsigned int currcons)
+ 		param.vc = vc = vc_cons[currcons].d;
+ 		atomic_notifier_call_chain(&vt_notifier_list, VT_DEALLOCATE, &param);
+ 		vcs_remove_sysfs(currcons);
+-		vc->vc_sw->con_deinit(vc);
++		visual_deinit(vc);
+ 		put_pid(vc->vt_pid);
+-		module_put(vc->vc_sw->owner);
+ 		vc_uniscr_set(vc, NULL);
+ 		kfree(vc->vc_screenbuf);
+ 		vc_cons[currcons].d = NULL;
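
The vt change pairs con_deinit() with the matching module_put() in a single visual_deinit() helper, so the vc_allocate() error path now drops the console-driver reference that visual_init() took instead of leaking it. A generic sketch of keeping init/deinit pairs symmetric across error paths:

/* Paired init/deinit sketch: every exit after init must call deinit. */
#include <stdio.h>
#include <stdlib.h>

static int refcount;	/* stands in for the console driver module ref */

static void visual_init(void)   { refcount++; }
static void visual_deinit(void) { refcount--; }

static int vc_allocate(int fail)
{
	visual_init();

	void *screenbuf = fail ? NULL : malloc(256);
	if (!screenbuf) {
		visual_deinit();	/* the release the patch adds */
		return -12;		/* -ENOMEM */
	}
	free(screenbuf);
	visual_deinit();
	return 0;
}

int main(void)
{
	vc_allocate(1);
	printf("refcount after failed allocate: %d\n", refcount); /* 0 */
	return 0;
}
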
+diff --git a/drivers/usb/core/config.c b/drivers/usb/core/config.c
+index 20ff036b4c22..9d6cb709ca7b 100644
+--- a/drivers/usb/core/config.c
++++ b/drivers/usb/core/config.c
+@@ -932,8 +932,8 @@ int usb_get_bos_descriptor(struct usb_device *dev)
+ 
+ 	/* Get BOS descriptor */
+ 	ret = usb_get_descriptor(dev, USB_DT_BOS, 0, bos, USB_DT_BOS_SIZE);
+-	if (ret < USB_DT_BOS_SIZE) {
+-		dev_err(ddev, "unable to get BOS descriptor\n");
++	if (ret < USB_DT_BOS_SIZE || bos->bLength < USB_DT_BOS_SIZE) {
++		dev_err(ddev, "unable to get BOS descriptor or descriptor too short\n");
+ 		if (ret >= 0)
+ 			ret = -ENOMSG;
+ 		kfree(bos);
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 8bc35d53408b..6082b008969b 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -209,6 +209,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	/* Microsoft LifeCam-VX700 v2.0 */
+ 	{ USB_DEVICE(0x045e, 0x0770), .driver_info = USB_QUIRK_RESET_RESUME },
+ 
++	/* Microsoft Surface Dock Ethernet (RTL8153 GigE) */
++	{ USB_DEVICE(0x045e, 0x07c6), .driver_info = USB_QUIRK_NO_LPM },
++
+ 	/* Cherry Stream G230 2.0 (G85-231) and 3.0 (G85-232) */
+ 	{ USB_DEVICE(0x046a, 0x0023), .driver_info = USB_QUIRK_RESET_RESUME },
+ 
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 9215a28dad40..765ef5f1ffb8 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -656,6 +656,7 @@ static void xhci_unmap_td_bounce_buffer(struct xhci_hcd *xhci,
+ 	struct device *dev = xhci_to_hcd(xhci)->self.controller;
+ 	struct xhci_segment *seg = td->bounce_seg;
+ 	struct urb *urb = td->urb;
++	size_t len;
+ 
+ 	if (!ring || !seg || !urb)
+ 		return;
+@@ -666,11 +667,14 @@ static void xhci_unmap_td_bounce_buffer(struct xhci_hcd *xhci,
+ 		return;
+ 	}
+ 
+-	/* for in tranfers we need to copy the data from bounce to sg */
+-	sg_pcopy_from_buffer(urb->sg, urb->num_mapped_sgs, seg->bounce_buf,
+-			     seg->bounce_len, seg->bounce_offs);
+ 	dma_unmap_single(dev, seg->bounce_dma, ring->bounce_buf_len,
+ 			 DMA_FROM_DEVICE);
++	/* for IN transfers we need to copy the data from bounce to sg */
++	len = sg_pcopy_from_buffer(urb->sg, urb->num_sgs, seg->bounce_buf,
++			     seg->bounce_len, seg->bounce_offs);
++	if (len != seg->bounce_len)
++		xhci_warn(xhci, "WARN Wrong bounce buffer read length: %zu != %d\n",
++				len, seg->bounce_len);
+ 	seg->bounce_len = 0;
+ 	seg->bounce_offs = 0;
+ }
+@@ -3123,6 +3127,7 @@ static int xhci_align_td(struct xhci_hcd *xhci, struct urb *urb, u32 enqd_len,
+ 	unsigned int unalign;
+ 	unsigned int max_pkt;
+ 	u32 new_buff_len;
++	size_t len;
+ 
+ 	max_pkt = usb_endpoint_maxp(&urb->ep->desc);
+ 	unalign = (enqd_len + *trb_buff_len) % max_pkt;
+@@ -3153,8 +3158,12 @@ static int xhci_align_td(struct xhci_hcd *xhci, struct urb *urb, u32 enqd_len,
+ 
+ 	/* create a max max_pkt sized bounce buffer pointed to by last trb */
+ 	if (usb_urb_dir_out(urb)) {
+-		sg_pcopy_to_buffer(urb->sg, urb->num_mapped_sgs,
++		len = sg_pcopy_to_buffer(urb->sg, urb->num_sgs,
+ 				   seg->bounce_buf, new_buff_len, enqd_len);
++		if (len != seg->bounce_len)
++			xhci_warn(xhci,
++				"WARN Wrong bounce buffer write length: %zu != %d\n",
++				len, seg->bounce_len);
+ 		seg->bounce_dma = dma_map_single(dev, seg->bounce_buf,
+ 						 max_pkt, DMA_TO_DEVICE);
+ 	} else {
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 7fa58c99f126..448e3f812833 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -9,6 +9,7 @@
+  */
+ 
+ #include <linux/pci.h>
++#include <linux/iopoll.h>
+ #include <linux/irq.h>
+ #include <linux/log2.h>
+ #include <linux/module.h>
+@@ -52,7 +53,6 @@ static bool td_on_ring(struct xhci_td *td, struct xhci_ring *ring)
+ 	return false;
+ }
+ 
+-/* TODO: copied from ehci-hcd.c - can this be refactored? */
+ /*
+  * xhci_handshake - spin reading hc until handshake completes or fails
+  * @ptr: address of hc register to be read
+@@ -69,18 +69,16 @@ static bool td_on_ring(struct xhci_td *td, struct xhci_ring *ring)
+ int xhci_handshake(void __iomem *ptr, u32 mask, u32 done, int usec)
+ {
+ 	u32	result;
++	int	ret;
+ 
+-	do {
+-		result = readl(ptr);
+-		if (result == ~(u32)0)		/* card removed */
+-			return -ENODEV;
+-		result &= mask;
+-		if (result == done)
+-			return 0;
+-		udelay(1);
+-		usec--;
+-	} while (usec > 0);
+-	return -ETIMEDOUT;
++	ret = readl_poll_timeout_atomic(ptr, result,
++					(result & mask) == done ||
++					result == U32_MAX,
++					1, usec);
++	if (result == U32_MAX)		/* card removed */
++		return -ENODEV;
++
++	return ret;
+ }
+ 
+ /*
+@@ -4289,7 +4287,6 @@ static int xhci_set_usb2_hardware_lpm(struct usb_hcd *hcd,
+ 	pm_addr = ports[port_num]->addr + PORTPMSC;
+ 	pm_val = readl(pm_addr);
+ 	hlpm_addr = ports[port_num]->addr + PORTHLPMC;
+-	field = le32_to_cpu(udev->bos->ext_cap->bmAttributes);
+ 
+ 	xhci_dbg(xhci, "%s port %d USB2 hardware LPM\n",
+ 			enable ? "enable" : "disable", port_num + 1);
+@@ -4301,6 +4298,7 @@ static int xhci_set_usb2_hardware_lpm(struct usb_hcd *hcd,
+ 			 * default one which works with mixed HIRD and BESL
+ 			 * systems. See XHCI_DEFAULT_BESL definition in xhci.h
+ 			 */
++			field = le32_to_cpu(udev->bos->ext_cap->bmAttributes);
+ 			if ((field & USB_BESL_SUPPORT) &&
+ 			    (field & USB_BESL_BASELINE_VALID))
+ 				hird = USB_GET_BESL_BASELINE(field);
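
The xhci_handshake() rewrite swaps an open-coded delay loop for readl_poll_timeout_atomic(), which centralizes the read/compare/sleep/timeout bookkeeping. A userspace equivalent of such a helper might look like the following, with the completion test passed as a callback rather than an expression macro:

/* Poll-with-timeout sketch, loosely modelled on readl_poll_timeout(). */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static int poll_timeout(volatile uint32_t *reg, uint32_t *val,
			bool (*done)(uint32_t), long sleep_us, long timeout_us)
{
	long waited = 0;

	for (;;) {
		*val = *reg;
		if (done(*val))
			return 0;
		if (waited >= timeout_us)
			return -110;	/* -ETIMEDOUT */
		struct timespec ts = { 0, sleep_us * 1000L };
		nanosleep(&ts, NULL);
		waited += sleep_us;
	}
}

static bool ready_or_gone(uint32_t v)
{
	/* Stop on handshake completion or all-ones (card removed). */
	return (v & 0x1) == 0x1 || v == UINT32_MAX;
}

int main(void)
{
	volatile uint32_t reg = 0;	/* never becomes ready */
	uint32_t val;

	printf("poll: %d\n", poll_timeout(&reg, &val, ready_or_gone, 1, 50));
	return 0;
}
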
+diff --git a/drivers/usb/misc/rio500.c b/drivers/usb/misc/rio500.c
+index 7b9adeb3e7aa..a32d61a79ab8 100644
+--- a/drivers/usb/misc/rio500.c
++++ b/drivers/usb/misc/rio500.c
+@@ -86,9 +86,22 @@ static int close_rio(struct inode *inode, struct file *file)
+ {
+ 	struct rio_usb_data *rio = &rio_instance;
+ 
+-	rio->isopen = 0;
++	/* against disconnect() */
++	mutex_lock(&rio500_mutex);
++	mutex_lock(&(rio->lock));
+ 
+-	dev_info(&rio->rio_dev->dev, "Rio closed.\n");
++	rio->isopen = 0;
++	if (!rio->present) {
++		/* cleanup has been delayed */
++		kfree(rio->ibuf);
++		kfree(rio->obuf);
++		rio->ibuf = NULL;
++		rio->obuf = NULL;
++	} else {
++		dev_info(&rio->rio_dev->dev, "Rio closed.\n");
++	}
++	mutex_unlock(&(rio->lock));
++	mutex_unlock(&rio500_mutex);
+ 	return 0;
+ }
+ 
+@@ -447,15 +460,23 @@ static int probe_rio(struct usb_interface *intf,
+ {
+ 	struct usb_device *dev = interface_to_usbdev(intf);
+ 	struct rio_usb_data *rio = &rio_instance;
+-	int retval;
++	int retval = 0;
+ 
+-	dev_info(&intf->dev, "USB Rio found at address %d\n", dev->devnum);
++	mutex_lock(&rio500_mutex);
++	if (rio->present) {
++		dev_info(&intf->dev, "Second USB Rio at address %d refused\n", dev->devnum);
++		retval = -EBUSY;
++		goto bail_out;
++	} else {
++		dev_info(&intf->dev, "USB Rio found at address %d\n", dev->devnum);
++	}
+ 
+ 	retval = usb_register_dev(intf, &usb_rio_class);
+ 	if (retval) {
+ 		dev_err(&dev->dev,
+ 			"Not able to get a minor for this device.\n");
+-		return -ENOMEM;
++		retval = -ENOMEM;
++		goto bail_out;
+ 	}
+ 
+ 	rio->rio_dev = dev;
+@@ -464,7 +485,8 @@ static int probe_rio(struct usb_interface *intf,
+ 		dev_err(&dev->dev,
+ 			"probe_rio: Not enough memory for the output buffer\n");
+ 		usb_deregister_dev(intf, &usb_rio_class);
+-		return -ENOMEM;
++		retval = -ENOMEM;
++		goto bail_out;
+ 	}
+ 	dev_dbg(&intf->dev, "obuf address:%p\n", rio->obuf);
+ 
+@@ -473,7 +495,8 @@ static int probe_rio(struct usb_interface *intf,
+ 			"probe_rio: Not enough memory for the input buffer\n");
+ 		usb_deregister_dev(intf, &usb_rio_class);
+ 		kfree(rio->obuf);
+-		return -ENOMEM;
++		retval = -ENOMEM;
++		goto bail_out;
+ 	}
+ 	dev_dbg(&intf->dev, "ibuf address:%p\n", rio->ibuf);
+ 
+@@ -481,8 +504,10 @@ static int probe_rio(struct usb_interface *intf,
+ 
+ 	usb_set_intfdata (intf, rio);
+ 	rio->present = 1;
++bail_out:
++	mutex_unlock(&rio500_mutex);
+ 
+-	return 0;
++	return retval;
+ }
+ 
+ static void disconnect_rio(struct usb_interface *intf)
+diff --git a/drivers/usb/misc/sisusbvga/sisusb.c b/drivers/usb/misc/sisusbvga/sisusb.c
+index 9560fde621ee..ea06f1fed6fa 100644
+--- a/drivers/usb/misc/sisusbvga/sisusb.c
++++ b/drivers/usb/misc/sisusbvga/sisusb.c
+@@ -3029,6 +3029,13 @@ static int sisusb_probe(struct usb_interface *intf,
+ 
+ 	mutex_init(&(sisusb->lock));
+ 
++	sisusb->sisusb_dev = dev;
++	sisusb->vrambase   = SISUSB_PCI_MEMBASE;
++	sisusb->mmiobase   = SISUSB_PCI_MMIOBASE;
++	sisusb->mmiosize   = SISUSB_PCI_MMIOSIZE;
++	sisusb->ioportbase = SISUSB_PCI_IOPORTBASE;
++	/* Everything else is zero */
++
+ 	/* Register device */
+ 	retval = usb_register_dev(intf, &usb_sisusb_class);
+ 	if (retval) {
+@@ -3039,13 +3046,7 @@ static int sisusb_probe(struct usb_interface *intf,
+ 		goto error_1;
+ 	}
+ 
+-	sisusb->sisusb_dev = dev;
+-	sisusb->minor      = intf->minor;
+-	sisusb->vrambase   = SISUSB_PCI_MEMBASE;
+-	sisusb->mmiobase   = SISUSB_PCI_MMIOBASE;
+-	sisusb->mmiosize   = SISUSB_PCI_MMIOSIZE;
+-	sisusb->ioportbase = SISUSB_PCI_IOPORTBASE;
+-	/* Everything else is zero */
++	sisusb->minor = intf->minor;
+ 
+ 	/* Allocate buffers */
+ 	sisusb->ibufsize = SISUSB_IBUF_SIZE;
+diff --git a/drivers/usb/usbip/stub_dev.c b/drivers/usb/usbip/stub_dev.c
+index c0d6ff1baa72..7931e6cecc70 100644
+--- a/drivers/usb/usbip/stub_dev.c
++++ b/drivers/usb/usbip/stub_dev.c
+@@ -301,9 +301,17 @@ static int stub_probe(struct usb_device *udev)
+ 	const char *udev_busid = dev_name(&udev->dev);
+ 	struct bus_id_priv *busid_priv;
+ 	int rc = 0;
++	char save_status;
+ 
+ 	dev_dbg(&udev->dev, "Enter probe\n");
+ 
++	/* Not sure if this is our device. Allocate here to avoid
++	 * calling alloc while holding busid_table lock.
++	 */
++	sdev = stub_device_alloc(udev);
++	if (!sdev)
++		return -ENOMEM;
++
+ 	/* check we should claim or not by busid_table */
+ 	busid_priv = get_busid_priv(udev_busid);
+ 	if (!busid_priv || (busid_priv->status == STUB_BUSID_REMOV) ||
+@@ -318,6 +326,9 @@ static int stub_probe(struct usb_device *udev)
+ 		 * See driver_probe_device() in driver/base/dd.c
+ 		 */
+ 		rc = -ENODEV;
++		if (!busid_priv)
++			goto sdev_free;
++
+ 		goto call_put_busid_priv;
+ 	}
+ 
+@@ -337,12 +348,6 @@ static int stub_probe(struct usb_device *udev)
+ 		goto call_put_busid_priv;
+ 	}
+ 
+-	/* ok, this is my device */
+-	sdev = stub_device_alloc(udev);
+-	if (!sdev) {
+-		rc = -ENOMEM;
+-		goto call_put_busid_priv;
+-	}
+ 
+ 	dev_info(&udev->dev,
+ 		"usbip-host: register new device (bus %u dev %u)\n",
+@@ -352,9 +357,16 @@ static int stub_probe(struct usb_device *udev)
+ 
+ 	/* set private data to usb_device */
+ 	dev_set_drvdata(&udev->dev, sdev);
++
+ 	busid_priv->sdev = sdev;
+ 	busid_priv->udev = udev;
+ 
++	save_status = busid_priv->status;
++	busid_priv->status = STUB_BUSID_ALLOC;
++
++	/* release the busid_lock */
++	put_busid_priv(busid_priv);
++
+ 	/*
+ 	 * Claim this hub port.
+ 	 * It doesn't matter what value we pass as owner
+@@ -372,10 +384,8 @@ static int stub_probe(struct usb_device *udev)
+ 		dev_err(&udev->dev, "stub_add_files for %s\n", udev_busid);
+ 		goto err_files;
+ 	}
+-	busid_priv->status = STUB_BUSID_ALLOC;
+ 
+-	rc = 0;
+-	goto call_put_busid_priv;
++	return 0;
+ 
+ err_files:
+ 	usb_hub_release_port(udev->parent, udev->portnum,
+@@ -384,23 +394,30 @@ err_port:
+ 	dev_set_drvdata(&udev->dev, NULL);
+ 	usb_put_dev(udev);
+ 
++	/* we already have busid_priv, just lock busid_lock */
++	spin_lock(&busid_priv->busid_lock);
+ 	busid_priv->sdev = NULL;
+-	stub_device_free(sdev);
++	busid_priv->status = save_status;
++	spin_unlock(&busid_priv->busid_lock);
++	/* lock is released - go to free */
++	goto sdev_free;
+ 
+ call_put_busid_priv:
++	/* release the busid_lock */
+ 	put_busid_priv(busid_priv);
++
++sdev_free:
++	stub_device_free(sdev);
++
+ 	return rc;
+ }
+ 
+ static void shutdown_busid(struct bus_id_priv *busid_priv)
+ {
+-	if (busid_priv->sdev && !busid_priv->shutdown_busid) {
+-		busid_priv->shutdown_busid = 1;
+-		usbip_event_add(&busid_priv->sdev->ud, SDEV_EVENT_REMOVED);
++	usbip_event_add(&busid_priv->sdev->ud, SDEV_EVENT_REMOVED);
+ 
+-		/* wait for the stop of the event handler */
+-		usbip_stop_eh(&busid_priv->sdev->ud);
+-	}
++	/* wait for the stop of the event handler */
++	usbip_stop_eh(&busid_priv->sdev->ud);
+ }
+ 
+ /*
+@@ -427,11 +444,16 @@ static void stub_disconnect(struct usb_device *udev)
+ 	/* get stub_device */
+ 	if (!sdev) {
+ 		dev_err(&udev->dev, "could not get device");
+-		goto call_put_busid_priv;
++		/* release busid_lock */
++		put_busid_priv(busid_priv);
++		return;
+ 	}
+ 
+ 	dev_set_drvdata(&udev->dev, NULL);
+ 
++	/* release busid_lock before call to remove device files */
++	put_busid_priv(busid_priv);
++
+ 	/*
+ 	 * NOTE: rx/tx threads are invoked for each usb_device.
+ 	 */
+@@ -442,27 +464,36 @@ static void stub_disconnect(struct usb_device *udev)
+ 				  (struct usb_dev_state *) udev);
+ 	if (rc) {
+ 		dev_dbg(&udev->dev, "unable to release port\n");
+-		goto call_put_busid_priv;
++		return;
+ 	}
+ 
+ 	/* If usb reset is called from event handler */
+ 	if (usbip_in_eh(current))
+-		goto call_put_busid_priv;
++		return;
++
++	/* we already have busid_priv, just lock busid_lock */
++	spin_lock(&busid_priv->busid_lock);
++	if (!busid_priv->shutdown_busid)
++		busid_priv->shutdown_busid = 1;
++	/* release busid_lock */
++	spin_unlock(&busid_priv->busid_lock);
+ 
+ 	/* shutdown the current connection */
+ 	shutdown_busid(busid_priv);
+ 
+ 	usb_put_dev(sdev->udev);
+ 
++	/* we already have busid_priv, just lock busid_lock */
++	spin_lock(&busid_priv->busid_lock);
+ 	/* free sdev */
+ 	busid_priv->sdev = NULL;
+ 	stub_device_free(sdev);
+ 
+ 	if (busid_priv->status == STUB_BUSID_ALLOC)
+ 		busid_priv->status = STUB_BUSID_ADDED;
+-
+-call_put_busid_priv:
+-	put_busid_priv(busid_priv);
++	/* release busid_lock */
++	spin_unlock(&busid_priv->busid_lock);
++	return;
+ }
+ 
+ #ifdef CONFIG_PM
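
The stub_probe() rework allocates the stub device before taking the busid table lock, because the allocation may sleep; the price is an explicit sdev_free path when the device turns out not to be ours. The allocate-outside-the-lock shape in miniature, with invented names:

/* Allocate-outside-the-lock sketch: the alloc may sleep, the lock may not. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_spinlock_t table_lock;
static int device_is_ours;	/* normally decided by the busid table */

static int probe(void)
{
	/* Not sure yet whether this is our device; allocate up front. */
	void *sdev = malloc(128);

	if (!sdev)
		return -12;	/* -ENOMEM */

	pthread_spin_lock(&table_lock);
	if (!device_is_ours) {
		pthread_spin_unlock(&table_lock);
		free(sdev);	/* the sdev_free path */
		return -19;	/* -ENODEV */
	}
	/* ...sdev would be registered in the table here and kept... */
	pthread_spin_unlock(&table_lock);
	return 0;
}

int main(void)
{
	pthread_spin_init(&table_lock, PTHREAD_PROCESS_PRIVATE);
	printf("probe (not ours): %d\n", probe());	/* -19 */
	device_is_ours = 1;
	printf("probe (ours):     %d\n", probe());	/* 0, sdev kept */
	return 0;
}
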
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index cd059a801662..c59b23f6e9ba 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -1248,7 +1248,7 @@ finished:
+ 	if (free_font)
+ 		vc->vc_font.data = NULL;
+ 
+-	if (vc->vc_hi_font_mask)
++	if (vc->vc_hi_font_mask && vc->vc_screenbuf)
+ 		set_vc_hi_font(vc, false);
+ 
+ 	if (!con_is_bound(&fb_con))
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 2973608824ec..adb660669ff5 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -6396,8 +6396,18 @@ int btrfs_add_link(struct btrfs_trans_handle *trans,
+ 	btrfs_i_size_write(parent_inode, parent_inode->vfs_inode.i_size +
+ 			   name_len * 2);
+ 	inode_inc_iversion(&parent_inode->vfs_inode);
+-	parent_inode->vfs_inode.i_mtime = parent_inode->vfs_inode.i_ctime =
+-		current_time(&parent_inode->vfs_inode);
++	/*
++	 * If we are replaying a log tree, we do not want to update the mtime
++	 * and ctime of the parent directory with the current time, since the
++	 * log replay procedure is responsible for setting them to their correct
++	 * values (the ones it had when the fsync was done).
++	 */
++	if (!test_bit(BTRFS_FS_LOG_RECOVERING, &root->fs_info->flags)) {
++		struct timespec64 now = current_time(&parent_inode->vfs_inode);
++
++		parent_inode->vfs_inode.i_mtime = now;
++		parent_inode->vfs_inode.i_ctime = now;
++	}
+ 	ret = btrfs_update_inode(trans, root, &parent_inode->vfs_inode);
+ 	if (ret)
+ 		btrfs_abort_transaction(trans, ret);
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index e659d9d61107..3ecaab8cca06 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -3831,7 +3831,13 @@ int btrfs_qgroup_add_swapped_blocks(struct btrfs_trans_handle *trans,
+ 							    subvol_slot);
+ 	block->last_snapshot = last_snapshot;
+ 	block->level = level;
+-	if (bg->flags & BTRFS_BLOCK_GROUP_DATA)
++
++	/*
++	 * If bg == NULL, we're called from btrfs_recover_relocation(), where
++	 * no one else can modify tree blocks, thus the qgroup accounting will
++	 * not change no matter the value of trace_leaf.
++	 */
++	if (bg && bg->flags & BTRFS_BLOCK_GROUP_DATA)
+ 		block->trace_leaf = true;
+ 	else
+ 		block->trace_leaf = false;
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 1d82ee4883eb..b0f9cca8e8d4 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -2161,22 +2161,30 @@ static int clean_dirty_subvols(struct reloc_control *rc)
+ 	struct btrfs_root *root;
+ 	struct btrfs_root *next;
+ 	int ret = 0;
++	int ret2;
+ 
+ 	list_for_each_entry_safe(root, next, &rc->dirty_subvol_roots,
+ 				 reloc_dirty_list) {
+-		struct btrfs_root *reloc_root = root->reloc_root;
++		if (root->root_key.objectid != BTRFS_TREE_RELOC_OBJECTID) {
++			/* Merged subvolume, cleanup its reloc root */
++			struct btrfs_root *reloc_root = root->reloc_root;
+ 
+-		clear_bit(BTRFS_ROOT_DEAD_RELOC_TREE, &root->state);
+-		list_del_init(&root->reloc_dirty_list);
+-		root->reloc_root = NULL;
+-		if (reloc_root) {
+-			int ret2;
++			clear_bit(BTRFS_ROOT_DEAD_RELOC_TREE, &root->state);
++			list_del_init(&root->reloc_dirty_list);
++			root->reloc_root = NULL;
++			if (reloc_root) {
+ 
+-			ret2 = btrfs_drop_snapshot(reloc_root, NULL, 0, 1);
++				ret2 = btrfs_drop_snapshot(reloc_root, NULL, 0, 1);
++				if (ret2 < 0 && !ret)
++					ret = ret2;
++			}
++			btrfs_put_fs_root(root);
++		} else {
++			/* Orphan reloc tree, just clean it up */
++			ret2 = btrfs_drop_snapshot(root, NULL, 0, 1);
+ 			if (ret2 < 0 && !ret)
+ 				ret = ret2;
+ 		}
+-		btrfs_put_fs_root(root);
+ 	}
+ 	return ret;
+ }
+@@ -2464,6 +2472,9 @@ again:
+ 			}
+ 		} else {
+ 			list_del_init(&reloc_root->root_list);
++			/* Don't forget to queue this reloc root for cleanup */
++			list_add_tail(&reloc_root->reloc_dirty_list,
++				      &rc->dirty_subvol_roots);
+ 		}
+ 	}
+ 
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 19b00b1668ed..67b9feb72e03 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -5017,6 +5017,12 @@ static int send_hole(struct send_ctx *sctx, u64 end)
+ 	if (offset >= sctx->cur_inode_size)
+ 		return 0;
+ 
++	/*
++	 * Don't go beyond the inode's i_size due to prealloc extents that start
++	 * after the i_size.
++	 */
++	end = min_t(u64, end, sctx->cur_inode_size);
++
+ 	if (sctx->flags & BTRFS_SEND_FLAG_NO_FILE_DATA)
+ 		return send_update_extent(sctx, offset, end - offset);
+ 
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 60aac95be54b..fc93e0d6e48d 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -3095,6 +3095,12 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
+ 	root->log_transid++;
+ 	log->log_transid = root->log_transid;
+ 	root->log_start_pid = 0;
++	/*
++	 * Update or create log root item under the root's log_mutex to prevent
++	 * races with concurrent log syncs that can lead to failure to update
++	 * log root item because it was not created yet.
++	 */
++	ret = update_log_root(trans, log);
+ 	/*
+ 	 * IO has been started, blocks of the log tree have WRITTEN flag set
+ 	 * in their headers. new modifications of the log will be written to
+@@ -3114,8 +3120,6 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
+ 
+ 	mutex_unlock(&log_root_tree->log_mutex);
+ 
+-	ret = update_log_root(trans, log);
+-
+ 	mutex_lock(&log_root_tree->log_mutex);
+ 	if (atomic_dec_and_test(&log_root_tree->log_writers)) {
+ 		/* atomic_dec_and_test implies a barrier */
+@@ -5465,7 +5469,6 @@ static noinline int check_parent_dirs_for_sync(struct btrfs_trans_handle *trans,
+ {
+ 	int ret = 0;
+ 	struct dentry *old_parent = NULL;
+-	struct btrfs_inode *orig_inode = inode;
+ 
+ 	/*
+ 	 * for regular files, if its inode is already on disk, we don't
+@@ -5485,16 +5488,6 @@ static noinline int check_parent_dirs_for_sync(struct btrfs_trans_handle *trans,
+ 	}
+ 
+ 	while (1) {
+-		/*
+-		 * If we are logging a directory then we start with our inode,
+-		 * not our parent's inode, so we need to skip setting the
+-		 * logged_trans so that further down in the log code we don't
+-		 * think this inode has already been logged.
+-		 */
+-		if (inode != orig_inode)
+-			inode->logged_trans = trans->transid;
+-		smp_mb();
+-
+ 		if (btrfs_must_commit_transaction(trans, inode)) {
+ 			ret = 1;
+ 			break;
+@@ -6223,7 +6216,6 @@ void btrfs_record_unlink_dir(struct btrfs_trans_handle *trans,
+ 	 * if this directory was already logged any new
+ 	 * names for this file/dir will get recorded
+ 	 */
+-	smp_mb();
+ 	if (dir->logged_trans == trans->transid)
+ 		return;
+ 
+diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
+index 6b9e29d050f3..f828b8870178 100644
+--- a/fs/btrfs/zstd.c
++++ b/fs/btrfs/zstd.c
+@@ -102,10 +102,10 @@ static void zstd_reclaim_timer_fn(struct timer_list *timer)
+ 	unsigned long reclaim_threshold = jiffies - ZSTD_BTRFS_RECLAIM_JIFFIES;
+ 	struct list_head *pos, *next;
+ 
+-	spin_lock(&wsm.lock);
++	spin_lock_bh(&wsm.lock);
+ 
+ 	if (list_empty(&wsm.lru_list)) {
+-		spin_unlock(&wsm.lock);
++		spin_unlock_bh(&wsm.lock);
+ 		return;
+ 	}
+ 
+@@ -134,7 +134,7 @@ static void zstd_reclaim_timer_fn(struct timer_list *timer)
+ 	if (!list_empty(&wsm.lru_list))
+ 		mod_timer(&wsm.timer, jiffies + ZSTD_BTRFS_RECLAIM_JIFFIES);
+ 
+-	spin_unlock(&wsm.lock);
++	spin_unlock_bh(&wsm.lock);
+ }
+ 
+ /*
+@@ -195,7 +195,7 @@ static void zstd_cleanup_workspace_manager(void)
+ 	struct workspace *workspace;
+ 	int i;
+ 
+-	spin_lock(&wsm.lock);
++	spin_lock_bh(&wsm.lock);
+ 	for (i = 0; i < ZSTD_BTRFS_MAX_LEVEL; i++) {
+ 		while (!list_empty(&wsm.idle_ws[i])) {
+ 			workspace = container_of(wsm.idle_ws[i].next,
+@@ -205,7 +205,7 @@ static void zstd_cleanup_workspace_manager(void)
+ 			wsm.ops->free_workspace(&workspace->list);
+ 		}
+ 	}
+-	spin_unlock(&wsm.lock);
++	spin_unlock_bh(&wsm.lock);
+ 
+ 	del_timer_sync(&wsm.timer);
+ }
+@@ -227,7 +227,7 @@ static struct list_head *zstd_find_workspace(unsigned int level)
+ 	struct workspace *workspace;
+ 	int i = level - 1;
+ 
+-	spin_lock(&wsm.lock);
++	spin_lock_bh(&wsm.lock);
+ 	for_each_set_bit_from(i, &wsm.active_map, ZSTD_BTRFS_MAX_LEVEL) {
+ 		if (!list_empty(&wsm.idle_ws[i])) {
+ 			ws = wsm.idle_ws[i].next;
+@@ -239,11 +239,11 @@ static struct list_head *zstd_find_workspace(unsigned int level)
+ 				list_del(&workspace->lru_list);
+ 			if (list_empty(&wsm.idle_ws[i]))
+ 				clear_bit(i, &wsm.active_map);
+-			spin_unlock(&wsm.lock);
++			spin_unlock_bh(&wsm.lock);
+ 			return ws;
+ 		}
+ 	}
+-	spin_unlock(&wsm.lock);
++	spin_unlock_bh(&wsm.lock);
+ 
+ 	return NULL;
+ }
+@@ -302,7 +302,7 @@ static void zstd_put_workspace(struct list_head *ws)
+ {
+ 	struct workspace *workspace = list_to_workspace(ws);
+ 
+-	spin_lock(&wsm.lock);
++	spin_lock_bh(&wsm.lock);
+ 
+ 	/* A node is only taken off the lru if we are the corresponding level */
+ 	if (workspace->req_level == workspace->level) {
+@@ -322,7 +322,7 @@ static void zstd_put_workspace(struct list_head *ws)
+ 	list_add(&workspace->list, &wsm.idle_ws[workspace->level - 1]);
+ 	workspace->req_level = 0;
+ 
+-	spin_unlock(&wsm.lock);
++	spin_unlock_bh(&wsm.lock);
+ 
+ 	if (workspace->level == ZSTD_BTRFS_MAX_LEVEL)
+ 		cond_wake_up(&wsm.wait);
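
Every wsm.lock acquisition above becomes the _bh variant because
zstd_reclaim_timer_fn() runs from a timer, i.e. in softirq context. If
process context held the same spinlock without disabling bottom halves, the
timer could fire on that CPU and spin on the lock forever. A
compile-oriented sketch of the rule (the names are illustrative, not the
driver's):

    #include <linux/spinlock.h>
    #include <linux/timer.h>

    static DEFINE_SPINLOCK(ws_lock);
    static struct timer_list reclaim_timer;

    /* Timer callbacks execute in softirq (bottom-half) context. */
    static void reclaim_fn(struct timer_list *t)
    {
        spin_lock(&ws_lock);            /* already in BH context */
        /* ... evict idle workspaces ... */
        spin_unlock(&ws_lock);
    }

    static void init_reclaim(void)
    {
        timer_setup(&reclaim_timer, reclaim_fn, 0);
    }

    /* Process context must disable bottom halves for the same lock,
     * or reclaim_fn() can interrupt it on this CPU and deadlock. */
    static void touch_workspace(void)
    {
        spin_lock_bh(&ws_lock);
        /* ... update LRU state shared with reclaim_fn() ... */
        spin_unlock_bh(&ws_lock);
    }
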
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 7037a137fa53..9a1db37b303a 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -3221,7 +3221,9 @@ cifs_read_allocate_pages(struct cifs_readdata *rdata, unsigned int nr_pages)
+ 	}
+ 
+ 	if (rc) {
+-		for (i = 0; i < nr_pages; i++) {
++		unsigned int nr_page_failed = i;
++
++		for (i = 0; i < nr_page_failed; i++) {
+ 			put_page(rdata->pages[i]);
+ 			rdata->pages[i] = NULL;
+ 		}
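
The cifs fix stops the error path from put_page()-ing entries that were
never allocated: it records how far the allocation loop got and releases
only that many. The same cleanup shape in a minimal userspace sketch, with
malloc() standing in for page allocation:

    #include <stdlib.h>

    static int alloc_page_array(void **pages, unsigned int nr)
    {
        unsigned int i;

        for (i = 0; i < nr; i++) {
            pages[i] = malloc(4096);
            if (!pages[i]) {
                unsigned int nr_failed = i;

                /* Free only the 0..i-1 entries that exist. */
                for (i = 0; i < nr_failed; i++) {
                    free(pages[i]);
                    pages[i] = NULL;
                }
                return -1;
            }
        }
        return 0;
    }
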
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index a37774a55f3a..d2d8708f5e1a 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -1013,7 +1013,8 @@ int smb3_validate_negotiate(const unsigned int xid, struct cifs_tcon *tcon)
+ 		 * not supported error. Client should accept it.
+ 		 */
+ 		cifs_dbg(VFS, "Server does not support validate negotiate\n");
+-		return 0;
++		rc = 0;
++		goto out_free_inbuf;
+ 	} else if (rc != 0) {
+ 		cifs_dbg(VFS, "validate protocol negotiate failed: %d\n", rc);
+ 		rc = -EIO;
+diff --git a/fs/lockd/xdr.c b/fs/lockd/xdr.c
+index 9846f7e95282..7147e4aebecc 100644
+--- a/fs/lockd/xdr.c
++++ b/fs/lockd/xdr.c
+@@ -127,7 +127,7 @@ nlm_decode_lock(__be32 *p, struct nlm_lock *lock)
+ 
+ 	locks_init_lock(fl);
+ 	fl->fl_owner = current->files;
+-	fl->fl_pid   = current->tgid;
++	fl->fl_pid   = (pid_t)lock->svid;
+ 	fl->fl_flags = FL_POSIX;
+ 	fl->fl_type  = F_RDLCK;		/* as good as anything else */
+ 	start = ntohl(*p++);
+@@ -269,7 +269,7 @@ nlmsvc_decode_shareargs(struct svc_rqst *rqstp, __be32 *p)
+ 	memset(lock, 0, sizeof(*lock));
+ 	locks_init_lock(&lock->fl);
+ 	lock->svid = ~(u32) 0;
+-	lock->fl.fl_pid = current->tgid;
++	lock->fl.fl_pid = (pid_t)lock->svid;
+ 
+ 	if (!(p = nlm_decode_cookie(p, &argp->cookie))
+ 	 || !(p = xdr_decode_string_inplace(p, &lock->caller,
+diff --git a/fs/lockd/xdr4.c b/fs/lockd/xdr4.c
+index 70154f376695..7ed9edf9aed4 100644
+--- a/fs/lockd/xdr4.c
++++ b/fs/lockd/xdr4.c
+@@ -119,7 +119,7 @@ nlm4_decode_lock(__be32 *p, struct nlm_lock *lock)
+ 
+ 	locks_init_lock(fl);
+ 	fl->fl_owner = current->files;
+-	fl->fl_pid   = current->tgid;
++	fl->fl_pid   = (pid_t)lock->svid;
+ 	fl->fl_flags = FL_POSIX;
+ 	fl->fl_type  = F_RDLCK;		/* as good as anything else */
+ 	p = xdr_decode_hyper(p, &start);
+@@ -266,7 +266,7 @@ nlm4svc_decode_shareargs(struct svc_rqst *rqstp, __be32 *p)
+ 	memset(lock, 0, sizeof(*lock));
+ 	locks_init_lock(&lock->fl);
+ 	lock->svid = ~(u32) 0;
+-	lock->fl.fl_pid = current->tgid;
++	lock->fl.fl_pid = (pid_t)lock->svid;
+ 
+ 	if (!(p = nlm4_decode_cookie(p, &argp->cookie))
+ 	 || !(p = xdr_decode_string_inplace(p, &lock->caller,
+diff --git a/include/linux/bitops.h b/include/linux/bitops.h
+index 602af23b98c7..cf074bce3eb3 100644
+--- a/include/linux/bitops.h
++++ b/include/linux/bitops.h
+@@ -60,7 +60,7 @@ static __always_inline unsigned long hweight_long(unsigned long w)
+  */
+ static inline __u64 rol64(__u64 word, unsigned int shift)
+ {
+-	return (word << shift) | (word >> (64 - shift));
++	return (word << (shift & 63)) | (word >> ((-shift) & 63));
+ }
+ 
+ /**
+@@ -70,7 +70,7 @@ static inline __u64 rol64(__u64 word, unsigned int shift)
+  */
+ static inline __u64 ror64(__u64 word, unsigned int shift)
+ {
+-	return (word >> shift) | (word << (64 - shift));
++	return (word >> (shift & 63)) | (word << ((-shift) & 63));
+ }
+ 
+ /**
+@@ -80,7 +80,7 @@ static inline __u64 ror64(__u64 word, unsigned int shift)
+  */
+ static inline __u32 rol32(__u32 word, unsigned int shift)
+ {
+-	return (word << shift) | (word >> ((-shift) & 31));
++	return (word << (shift & 31)) | (word >> ((-shift) & 31));
+ }
+ 
+ /**
+@@ -90,7 +90,7 @@ static inline __u32 rol32(__u32 word, unsigned int shift)
+  */
+ static inline __u32 ror32(__u32 word, unsigned int shift)
+ {
+-	return (word >> shift) | (word << (32 - shift));
++	return (word >> (shift & 31)) | (word << ((-shift) & 31));
+ }
+ 
+ /**
+@@ -100,7 +100,7 @@ static inline __u32 ror32(__u32 word, unsigned int shift)
+  */
+ static inline __u16 rol16(__u16 word, unsigned int shift)
+ {
+-	return (word << shift) | (word >> (16 - shift));
++	return (word << (shift & 15)) | (word >> ((-shift) & 15));
+ }
+ 
+ /**
+@@ -110,7 +110,7 @@ static inline __u16 rol16(__u16 word, unsigned int shift)
+  */
+ static inline __u16 ror16(__u16 word, unsigned int shift)
+ {
+-	return (word >> shift) | (word << (16 - shift));
++	return (word >> (shift & 15)) | (word << ((-shift) & 15));
+ }
+ 
+ /**
+@@ -120,7 +120,7 @@ static inline __u16 ror16(__u16 word, unsigned int shift)
+  */
+ static inline __u8 rol8(__u8 word, unsigned int shift)
+ {
+-	return (word << shift) | (word >> (8 - shift));
++	return (word << (shift & 7)) | (word >> ((-shift) & 7));
+ }
+ 
+ /**
+@@ -130,7 +130,7 @@ static inline __u8 rol8(__u8 word, unsigned int shift)
+  */
+ static inline __u8 ror8(__u8 word, unsigned int shift)
+ {
+-	return (word >> shift) | (word << (8 - shift));
++	return (word >> (shift & 7)) | (word << ((-shift) & 7));
+ }
+ 
+ /**
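
The rol/ror changes are all one fix: an expression like word << (64 - shift)
invokes undefined behavior when shift is 0, because shifting by the full
width of the type is undefined in C. Masking both counts keeps them in
range, makes rotate-by-zero a well-defined no-op, and compilers still
recognize the idiom as a single rotate instruction. A standalone
illustration:

    #include <stdint.h>
    #include <stdio.h>

    /* Both shift counts stay in [0, 63]; note (-0) & 63 == 0, so
     * my_rol64(x, 0) is (x << 0) | (x >> 0) == x, with no UB. */
    static inline uint64_t my_rol64(uint64_t word, unsigned int shift)
    {
        return (word << (shift & 63)) | (word >> ((-shift) & 63));
    }

    int main(void)
    {
        printf("%016llx\n", (unsigned long long)my_rol64(1, 0));  /* ...0001 */
        printf("%016llx\n", (unsigned long long)my_rol64(1, 63)); /* 8000... */
        return 0;
    }
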
+diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
+index 7d57890cec67..96398d8d4fd8 100644
+--- a/include/linux/cgroup-defs.h
++++ b/include/linux/cgroup-defs.h
+@@ -83,6 +83,11 @@ enum {
+ 	 * Enable cpuset controller in v1 cgroup to use v2 behavior.
+ 	 */
+ 	CGRP_ROOT_CPUSET_V2_MODE = (1 << 4),
++
++	/*
++	 * Enable legacy local memory.events.
++	 */
++	CGRP_ROOT_MEMORY_LOCAL_EVENTS = (1 << 5),
+ };
+ 
+ /* cftype->flags */
+diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
+index aa5efd9351eb..d5ceb2839a2d 100644
+--- a/include/linux/list_lru.h
++++ b/include/linux/list_lru.h
+@@ -54,6 +54,7 @@ struct list_lru {
+ #ifdef CONFIG_MEMCG_KMEM
+ 	struct list_head	list;
+ 	int			shrinker_id;
++	bool			memcg_aware;
+ #endif
+ };
+ 
+diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
+index dbb6118370c1..28525c7a8ab0 100644
+--- a/include/linux/memcontrol.h
++++ b/include/linux/memcontrol.h
+@@ -777,8 +777,14 @@ static inline void count_memcg_event_mm(struct mm_struct *mm,
+ static inline void memcg_memory_event(struct mem_cgroup *memcg,
+ 				      enum memcg_memory_event event)
+ {
+-	atomic_long_inc(&memcg->memory_events[event]);
+-	cgroup_file_notify(&memcg->events_file);
++	do {
++		atomic_long_inc(&memcg->memory_events[event]);
++		cgroup_file_notify(&memcg->events_file);
++
++		if (cgrp_dfl_root.flags & CGRP_ROOT_MEMORY_LOCAL_EVENTS)
++			break;
++	} while ((memcg = parent_mem_cgroup(memcg)) &&
++		 !mem_cgroup_is_root(memcg));
+ }
+ 
+ static inline void memcg_memory_event_mm(struct mm_struct *mm,
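
With this change a memory event is charged to the originating cgroup and to
every ancestor up to, but not including, the root — unless the new
CGRP_ROOT_MEMORY_LOCAL_EVENTS flag (wired to the memory_localevents mount
option in the cgroup hunks below) restores the old local-only counting. The
traversal, reduced to simplified types without the atomics:

    struct group {
        struct group *parent;   /* NULL for the root */
        long events;
    };

    static int local_events_only;   /* models CGRP_ROOT_MEMORY_LOCAL_EVENTS */

    static void record_event(struct group *g)
    {
        do {
            g->events++;
            if (local_events_only)
                break;
        /* walk up; stop before charging the root itself */
        } while ((g = g->parent) && g->parent);
    }
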
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 9fcf6338ea5f..4e2d82eea2cd 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -1775,11 +1775,13 @@ int cgroup_show_path(struct seq_file *sf, struct kernfs_node *kf_node,
+ 
+ enum cgroup2_param {
+ 	Opt_nsdelegate,
++	Opt_memory_localevents,
+ 	nr__cgroup2_params
+ };
+ 
+ static const struct fs_parameter_spec cgroup2_param_specs[] = {
+-	fsparam_flag  ("nsdelegate",		Opt_nsdelegate),
++	fsparam_flag("nsdelegate",		Opt_nsdelegate),
++	fsparam_flag("memory_localevents",	Opt_memory_localevents),
+ 	{}
+ };
+ 
+@@ -1802,6 +1804,9 @@ static int cgroup2_parse_param(struct fs_context *fc, struct fs_parameter *param
+ 	case Opt_nsdelegate:
+ 		ctx->flags |= CGRP_ROOT_NS_DELEGATE;
+ 		return 0;
++	case Opt_memory_localevents:
++		ctx->flags |= CGRP_ROOT_MEMORY_LOCAL_EVENTS;
++		return 0;
+ 	}
+ 	return -EINVAL;
+ }
+@@ -1813,6 +1818,11 @@ static void apply_cgroup_root_flags(unsigned int root_flags)
+ 			cgrp_dfl_root.flags |= CGRP_ROOT_NS_DELEGATE;
+ 		else
+ 			cgrp_dfl_root.flags &= ~CGRP_ROOT_NS_DELEGATE;
++
++		if (root_flags & CGRP_ROOT_MEMORY_LOCAL_EVENTS)
++			cgrp_dfl_root.flags |= CGRP_ROOT_MEMORY_LOCAL_EVENTS;
++		else
++			cgrp_dfl_root.flags &= ~CGRP_ROOT_MEMORY_LOCAL_EVENTS;
+ 	}
+ }
+ 
+@@ -1820,6 +1830,8 @@ static int cgroup_show_options(struct seq_file *seq, struct kernfs_root *kf_root
+ {
+ 	if (cgrp_dfl_root.flags & CGRP_ROOT_NS_DELEGATE)
+ 		seq_puts(seq, ",nsdelegate");
++	if (cgrp_dfl_root.flags & CGRP_ROOT_MEMORY_LOCAL_EVENTS)
++		seq_puts(seq, ",memory_localevents");
+ 	return 0;
+ }
+ 
+@@ -6122,7 +6134,7 @@ static struct kobj_attribute cgroup_delegate_attr = __ATTR_RO(delegate);
+ static ssize_t features_show(struct kobject *kobj, struct kobj_attribute *attr,
+ 			     char *buf)
+ {
+-	return snprintf(buf, PAGE_SIZE, "nsdelegate\n");
++	return snprintf(buf, PAGE_SIZE, "nsdelegate\nmemory_localevents\n");
+ }
+ static struct kobj_attribute cgroup_features_attr = __ATTR_RO(features);
+ 
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 227ba170298e..429f5663edd9 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -2441,6 +2441,8 @@ relock:
+ 	if (signal_group_exit(signal)) {
+ 		ksig->info.si_signo = signr = SIGKILL;
+ 		sigdelset(&current->pending.signal, SIGKILL);
++		trace_signal_deliver(SIGKILL, SEND_SIG_NOINFO,
++				&sighand->action[SIGKILL - 1]);
+ 		recalc_sigpending();
+ 		goto fatal;
+ 	}
+diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
+index 05a66493a164..8cea61795a7e 100644
+--- a/kernel/trace/trace_events_filter.c
++++ b/kernel/trace/trace_events_filter.c
+@@ -427,7 +427,7 @@ predicate_parse(const char *str, int nr_parens, int nr_preds,
+ 	op_stack = kmalloc_array(nr_parens, sizeof(*op_stack), GFP_KERNEL);
+ 	if (!op_stack)
+ 		return ERR_PTR(-ENOMEM);
+-	prog_stack = kmalloc_array(nr_preds, sizeof(*prog_stack), GFP_KERNEL);
++	prog_stack = kcalloc(nr_preds, sizeof(*prog_stack), GFP_KERNEL);
+ 	if (!prog_stack) {
+ 		parse_error(pe, -ENOMEM, 0);
+ 		goto out_free;
+@@ -578,7 +578,11 @@ predicate_parse(const char *str, int nr_parens, int nr_preds,
+ out_free:
+ 	kfree(op_stack);
+ 	kfree(inverts);
+-	kfree(prog_stack);
++	if (prog_stack) {
++		for (i = 0; prog_stack[i].pred; i++)
++			kfree(prog_stack[i].pred);
++		kfree(prog_stack);
++	}
+ 	return ERR_PTR(ret);
+ }
+ 
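
The kmalloc_array() -> kcalloc() switch is what makes the new error path
safe: the cleanup loop walks prog_stack until it finds a NULL pred, which is
only guaranteed to terminate if unused slots were zeroed, and the loop also
plugs the leak of the individual predicates. A userspace sketch of the
zero-fill-then-walk pattern (the explicit sentinel slot is an assumption
here; the kernel code keeps headroom in nr_preds):

    #include <stdlib.h>

    struct prog_entry {
        void *pred;     /* each entry owns a separate allocation */
    };

    static struct prog_entry *alloc_prog(size_t nr)
    {
        /* +1 zeroed slot acts as the NULL terminator. */
        return calloc(nr + 1, sizeof(struct prog_entry));
    }

    static void free_prog(struct prog_entry *prog)
    {
        size_t i;

        if (!prog)
            return;
        for (i = 0; prog[i].pred; i++)  /* safe: tail slots are NULL */
            free(prog[i].pred);
        free(prog);
    }
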
+diff --git a/mm/compaction.c b/mm/compaction.c
+index 444029da4e9d..368445cc71cf 100644
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -1397,7 +1397,7 @@ fast_isolate_freepages(struct compact_control *cc)
+ 				page = pfn_to_page(highest);
+ 				cc->free_pfn = highest;
+ 			} else {
+-				if (cc->direct_compaction) {
++				if (cc->direct_compaction && pfn_valid(min_pfn)) {
+ 					page = pfn_to_page(min_pfn);
+ 					cc->free_pfn = min_pfn;
+ 				}
+diff --git a/mm/kasan/common.c b/mm/kasan/common.c
+index 80bbe62b16cd..94d3c83c0fa8 100644
+--- a/mm/kasan/common.c
++++ b/mm/kasan/common.c
+@@ -472,7 +472,7 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
+ {
+ 	unsigned long redzone_start;
+ 	unsigned long redzone_end;
+-	u8 tag;
++	u8 tag = 0xff;
+ 
+ 	if (gfpflags_allow_blocking(flags))
+ 		quarantine_reduce();
+diff --git a/mm/list_lru.c b/mm/list_lru.c
+index 0730bf8ff39f..d3b538146efd 100644
+--- a/mm/list_lru.c
++++ b/mm/list_lru.c
+@@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
+ 
+ static inline bool list_lru_memcg_aware(struct list_lru *lru)
+ {
+-	/*
+-	 * This needs node 0 to be always present, even
+-	 * in the systems supporting sparse numa ids.
+-	 */
+-	return !!lru->node[0].memcg_lrus;
++	return lru->memcg_aware;
+ }
+ 
+ static inline struct list_lru_one *
+@@ -451,6 +447,8 @@ static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
+ {
+ 	int i;
+ 
++	lru->memcg_aware = memcg_aware;
++
+ 	if (!memcg_aware)
+ 		return 0;
+ 
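
Instead of inferring memcg-awareness from whether node 0 happens to have a
memcg_lrus array — which breaks on systems with sparse NUMA node ids, where
node 0 may not exist — the lru now records the property explicitly when it
is initialized. The general shape, with the types pared down:

    #include <stdbool.h>

    struct lru_node {
        void *memcg_lrus;
    };

    struct list_lru {
        struct lru_node *node;  /* per-node array, possibly sparse */
        bool memcg_aware;       /* recorded once at init time */
    };

    static bool list_lru_memcg_aware(const struct list_lru *lru)
    {
        /* No dependency on any particular node being present. */
        return lru->memcg_aware;
    }
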
+diff --git a/scripts/gcc-plugins/gcc-common.h b/scripts/gcc-plugins/gcc-common.h
+index 552d5efd7cb7..17f06079a712 100644
+--- a/scripts/gcc-plugins/gcc-common.h
++++ b/scripts/gcc-plugins/gcc-common.h
+@@ -150,8 +150,12 @@ void print_gimple_expr(FILE *, gimple, int, int);
+ void dump_gimple_stmt(pretty_printer *, gimple, int, int);
+ #endif
+ 
++#ifndef __unused
+ #define __unused __attribute__((__unused__))
++#endif
++#ifndef __visible
+ #define __visible __attribute__((visibility("default")))
++#endif
+ 
+ #define DECL_NAME_POINTER(node) IDENTIFIER_POINTER(DECL_NAME(node))
+ #define DECL_NAME_LENGTH(node) IDENTIFIER_LENGTH(DECL_NAME(node))
+diff --git a/security/integrity/evm/evm_crypto.c b/security/integrity/evm/evm_crypto.c
+index c37d08118af5..ab3ab654363d 100644
+--- a/security/integrity/evm/evm_crypto.c
++++ b/security/integrity/evm/evm_crypto.c
+@@ -89,6 +89,9 @@ static struct shash_desc *init_desc(char type, uint8_t hash_algo)
+ 		tfm = &hmac_tfm;
+ 		algo = evm_hmac;
+ 	} else {
++		if (hash_algo >= HASH_ALGO__LAST)
++			return ERR_PTR(-EINVAL);
++
+ 		tfm = &evm_tfm[hash_algo];
+ 		algo = hash_algo_name[hash_algo];
+ 	}
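
The added check matters because the algorithm id can originate from xattr
metadata on disk, i.e. from outside the kernel's control; without it, an
out-of-range value indexes past the ends of the evm_tfm[] and
hash_algo_name[] tables. Condensed to the rule — range-check an externally
supplied index before using it:

    enum hash_algo { HASH_SHA1, HASH_SHA256, HASH_ALGO__LAST };

    static const char *const hash_algo_name[HASH_ALGO__LAST] = {
        "sha1", "sha256",
    };

    /* hash_algo may come from disk; validate before indexing. */
    static const char *lookup_algo_name(unsigned int hash_algo)
    {
        if (hash_algo >= HASH_ALGO__LAST)
            return NULL;    /* the kernel returns ERR_PTR(-EINVAL) */
        return hash_algo_name[hash_algo];
    }
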
+diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
+index e0cc323f948f..1cc822a59054 100644
+--- a/security/integrity/ima/ima_policy.c
++++ b/security/integrity/ima/ima_policy.c
+@@ -498,10 +498,11 @@ static void add_rules(struct ima_rule_entry *entries, int count,
+ 
+ 			list_add_tail(&entry->list, &ima_policy_rules);
+ 		}
+-		if (entries[i].action == APPRAISE)
++		if (entries[i].action == APPRAISE) {
+ 			temp_ima_appraise |= ima_appraise_flag(entries[i].func);
+-		if (entries[i].func == POLICY_CHECK)
+-			temp_ima_appraise |= IMA_APPRAISE_POLICY;
++			if (entries[i].func == POLICY_CHECK)
++				temp_ima_appraise |= IMA_APPRAISE_POLICY;
++		}
+ 	}
+ }
+ 
+@@ -1146,10 +1147,10 @@ enum {
+ };
+ 
+ static const char *const mask_tokens[] = {
+-	"MAY_EXEC",
+-	"MAY_WRITE",
+-	"MAY_READ",
+-	"MAY_APPEND"
++	"^MAY_EXEC",
++	"^MAY_WRITE",
++	"^MAY_READ",
++	"^MAY_APPEND"
+ };
+ 
+ #define __ima_hook_stringify(str)	(#str),
+@@ -1209,6 +1210,7 @@ int ima_policy_show(struct seq_file *m, void *v)
+ 	struct ima_rule_entry *entry = v;
+ 	int i;
+ 	char tbuf[64] = {0,};
++	int offset = 0;
+ 
+ 	rcu_read_lock();
+ 
+@@ -1232,15 +1234,17 @@ int ima_policy_show(struct seq_file *m, void *v)
+ 	if (entry->flags & IMA_FUNC)
+ 		policy_func_show(m, entry->func);
+ 
+-	if (entry->flags & IMA_MASK) {
++	if ((entry->flags & IMA_MASK) || (entry->flags & IMA_INMASK)) {
++		if (entry->flags & IMA_MASK)
++			offset = 1;
+ 		if (entry->mask & MAY_EXEC)
+-			seq_printf(m, pt(Opt_mask), mt(mask_exec));
++			seq_printf(m, pt(Opt_mask), mt(mask_exec) + offset);
+ 		if (entry->mask & MAY_WRITE)
+-			seq_printf(m, pt(Opt_mask), mt(mask_write));
++			seq_printf(m, pt(Opt_mask), mt(mask_write) + offset);
+ 		if (entry->mask & MAY_READ)
+-			seq_printf(m, pt(Opt_mask), mt(mask_read));
++			seq_printf(m, pt(Opt_mask), mt(mask_read) + offset);
+ 		if (entry->mask & MAY_APPEND)
+-			seq_printf(m, pt(Opt_mask), mt(mask_append));
++			seq_printf(m, pt(Opt_mask), mt(mask_append) + offset);
+ 		seq_puts(m, " ");
+ 	}
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index fc99a5a6f7e1..239697421118 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6166,13 +6166,15 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chain_id = ALC269_FIXUP_THINKPAD_ACPI,
+ 	},
+ 	[ALC255_FIXUP_ACER_MIC_NO_PRESENCE] = {
+-		.type = HDA_FIXUP_PINS,
+-		.v.pins = (const struct hda_pintbl[]) {
+-			{ 0x19, 0x01a1913c }, /* use as headset mic, without its own jack detect */
+-			{ }
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			/* Enable the Mic */
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x45 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x5089 },
++			{}
+ 		},
+ 		.chained = true,
+-		.chain_id = ALC255_FIXUP_HEADSET_MODE
++		.chain_id = ALC269_FIXUP_LIFEBOOK_EXTMIC
+ 	},
+ 	[ALC255_FIXUP_ASUS_MIC_NO_PRESENCE] = {
+ 		.type = HDA_FIXUP_PINS,
+@@ -7217,6 +7219,10 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x18, 0x02a11030},
+ 		{0x19, 0x0181303F},
+ 		{0x21, 0x0221102f}),
++	SND_HDA_PIN_QUIRK(0x10ec0255, 0x1025, "Acer", ALC255_FIXUP_ACER_MIC_NO_PRESENCE,
++		{0x12, 0x90a60140},
++		{0x14, 0x90170120},
++		{0x21, 0x02211030}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0255, 0x1025, "Acer", ALC255_FIXUP_ACER_MIC_NO_PRESENCE,
+ 		{0x12, 0x90a601c0},
+ 		{0x14, 0x90171120},
+@@ -7654,7 +7660,7 @@ static int patch_alc269(struct hda_codec *codec)
+ 
+ 	spec = codec->spec;
+ 	spec->gen.shared_mic_vref_pin = 0x18;
+-	codec->power_save_node = 1;
++	codec->power_save_node = 0;
+ 
+ #ifdef CONFIG_PM
+ 	codec->patch_ops.suspend = alc269_suspend;
+diff --git a/sound/usb/line6/driver.c b/sound/usb/line6/driver.c
+index b61f65bed4e4..2b57854335b3 100644
+--- a/sound/usb/line6/driver.c
++++ b/sound/usb/line6/driver.c
+@@ -720,6 +720,15 @@ static int line6_init_cap_control(struct usb_line6 *line6)
+ 	return 0;
+ }
+ 
++static void line6_startup_work(struct work_struct *work)
++{
++	struct usb_line6 *line6 =
++		container_of(work, struct usb_line6, startup_work.work);
++
++	if (line6->startup)
++		line6->startup(line6);
++}
++
+ /*
+ 	Probe USB device.
+ */
+@@ -755,6 +764,7 @@ int line6_probe(struct usb_interface *interface,
+ 	line6->properties = properties;
+ 	line6->usbdev = usbdev;
+ 	line6->ifcdev = &interface->dev;
++	INIT_DELAYED_WORK(&line6->startup_work, line6_startup_work);
+ 
+ 	strcpy(card->id, properties->id);
+ 	strcpy(card->driver, driver_name);
+@@ -825,6 +835,8 @@ void line6_disconnect(struct usb_interface *interface)
+ 	if (WARN_ON(usbdev != line6->usbdev))
+ 		return;
+ 
++	cancel_delayed_work(&line6->startup_work);
++
+ 	if (line6->urb_listen != NULL)
+ 		line6_stop_listen(line6);
+ 
+diff --git a/sound/usb/line6/driver.h b/sound/usb/line6/driver.h
+index 61425597eb61..650d909c9c4f 100644
+--- a/sound/usb/line6/driver.h
++++ b/sound/usb/line6/driver.h
+@@ -178,11 +178,15 @@ struct usb_line6 {
+ 			fifo;
+ 	} messages;
+ 
++	/* Work for delayed PCM startup */
++	struct delayed_work startup_work;
++
+ 	/* If MIDI is supported, buffer_message contains the pre-processed data;
+ 	 * otherwise the data is only in urb_listen (buffer_incoming).
+ 	 */
+ 	void (*process_message)(struct usb_line6 *);
+ 	void (*disconnect)(struct usb_line6 *line6);
++	void (*startup)(struct usb_line6 *line6);
+ };
+ 
+ extern char *line6_alloc_sysex_buffer(struct usb_line6 *line6, int code1,
+diff --git a/sound/usb/line6/toneport.c b/sound/usb/line6/toneport.c
+index 325b07b98b3c..7e39083f8f76 100644
+--- a/sound/usb/line6/toneport.c
++++ b/sound/usb/line6/toneport.c
+@@ -54,9 +54,6 @@ struct usb_line6_toneport {
+ 	/* Firmware version (x 100) */
+ 	u8 firmware_version;
+ 
+-	/* Work for delayed PCM startup */
+-	struct delayed_work pcm_work;
+-
+ 	/* Device type */
+ 	enum line6_device_type type;
+ 
+@@ -241,12 +238,8 @@ static int snd_toneport_source_put(struct snd_kcontrol *kcontrol,
+ 	return 1;
+ }
+ 
+-static void toneport_start_pcm(struct work_struct *work)
++static void toneport_startup(struct usb_line6 *line6)
+ {
+-	struct usb_line6_toneport *toneport =
+-		container_of(work, struct usb_line6_toneport, pcm_work.work);
+-	struct usb_line6 *line6 = &toneport->line6;
+-
+ 	line6_pcm_acquire(line6->line6pcm, LINE6_STREAM_MONITOR, true);
+ }
+ 
+@@ -394,7 +387,7 @@ static int toneport_setup(struct usb_line6_toneport *toneport)
+ 	if (toneport_has_led(toneport))
+ 		toneport_update_led(toneport);
+ 
+-	schedule_delayed_work(&toneport->pcm_work,
++	schedule_delayed_work(&toneport->line6.startup_work,
+ 			      msecs_to_jiffies(TONEPORT_PCM_DELAY * 1000));
+ 	return 0;
+ }
+@@ -407,8 +400,6 @@ static void line6_toneport_disconnect(struct usb_line6 *line6)
+ 	struct usb_line6_toneport *toneport =
+ 		(struct usb_line6_toneport *)line6;
+ 
+-	cancel_delayed_work_sync(&toneport->pcm_work);
+-
+ 	if (toneport_has_led(toneport))
+ 		toneport_remove_leds(toneport);
+ }
+@@ -424,9 +415,9 @@ static int toneport_init(struct usb_line6 *line6,
+ 	struct usb_line6_toneport *toneport =  (struct usb_line6_toneport *) line6;
+ 
+ 	toneport->type = id->driver_info;
+-	INIT_DELAYED_WORK(&toneport->pcm_work, toneport_start_pcm);
+ 
+ 	line6->disconnect = line6_toneport_disconnect;
++	line6->startup = toneport_startup;
+ 
+ 	/* initialize PCM subsystem: */
+ 	err = line6_init_pcm(line6, &toneport_pcm_properties);
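
The three line6 hunks above move the delayed startup work out of the
toneport driver and into the shared usb_line6 core, so line6_disconnect()
can cancel it for every device type rather than trusting each sub-driver to
remember to. A rough sketch of the ownership pattern, with hypothetical
structure names:

    #include <linux/workqueue.h>

    /* The shared core owns the delayed work; device code supplies only
     * the callback, so the core can also cancel it on disconnect. */
    struct dev_core {
        struct delayed_work startup_work;
        void (*startup)(struct dev_core *core);
    };

    static void core_startup_work(struct work_struct *work)
    {
        struct dev_core *core =
            container_of(work, struct dev_core, startup_work.work);

        if (core->startup)
            core->startup(core);
    }

    static void core_probe(struct dev_core *core)
    {
        INIT_DELAYED_WORK(&core->startup_work, core_startup_work);
    }

    static void core_disconnect(struct dev_core *core)
    {
        cancel_delayed_work(&core->startup_work);
    }
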
+diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
+index f412ebc90610..8e2a5eb38bc0 100644
+--- a/virt/kvm/arm/arm.c
++++ b/virt/kvm/arm/arm.c
+@@ -224,6 +224,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+ 	case KVM_CAP_MAX_VCPUS:
+ 		r = KVM_MAX_VCPUS;
+ 		break;
++	case KVM_CAP_MAX_VCPU_ID:
++		r = KVM_MAX_VCPU_ID;
++		break;
+ 	case KVM_CAP_NR_MEMSLOTS:
+ 		r = KVM_USER_MEM_SLOTS;
+ 		break;
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 2b2a460c6252..48c549a88486 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -3062,8 +3062,6 @@ static long kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
+ 	case KVM_CAP_MULTI_ADDRESS_SPACE:
+ 		return KVM_ADDRESS_SPACE_NUM;
+ #endif
+-	case KVM_CAP_MAX_VCPU_ID:
+-		return KVM_MAX_VCPU_ID;
+ 	default:
+ 		break;
+ 	}



* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-06-11 12:43 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2019-06-11 12:43 UTC (permalink / raw
  To: gentoo-commits

commit:     744db6688f9ae6c096201f606dfa0a19e8eb56d7
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jun 11 12:42:49 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jun 11 12:42:49 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=744db668

Linux patch 5.1.9

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1008_linux-5.1.9.patch | 2590 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2594 insertions(+)

diff --git a/0000_README b/0000_README
index c561860..cb361e8 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch:  1007_linux-5.1.8.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.1.8
 
+Patch:  1008_linux-5.1.9.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.1.9
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1008_linux-5.1.9.patch b/1008_linux-5.1.9.patch
new file mode 100644
index 0000000..1316328
--- /dev/null
+++ b/1008_linux-5.1.9.patch
@@ -0,0 +1,2590 @@
+diff --git a/Makefile b/Makefile
+index 3027a0ce7a02..2884a8d3b6d6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
+index 8df1638259f3..6836095251ed 100644
+--- a/arch/arc/mm/fault.c
++++ b/arch/arc/mm/fault.c
+@@ -66,7 +66,7 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
+ 	struct vm_area_struct *vma = NULL;
+ 	struct task_struct *tsk = current;
+ 	struct mm_struct *mm = tsk->mm;
+-	int si_code = 0;
++	int si_code = SEGV_MAPERR;
+ 	int ret;
+ 	vm_fault_t fault;
+ 	int write = regs->ecr_cause & ECR_C_PROTV_STORE;  /* ST/EX */
+@@ -81,16 +81,14 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
+ 	 * only copy the information from the master page table,
+ 	 * nothing more.
+ 	 */
+-	if (address >= VMALLOC_START) {
++	if (address >= VMALLOC_START && !user_mode(regs)) {
+ 		ret = handle_kernel_vaddr_fault(address);
+ 		if (unlikely(ret))
+-			goto bad_area_nosemaphore;
++			goto no_context;
+ 		else
+ 			return;
+ 	}
+ 
+-	si_code = SEGV_MAPERR;
+-
+ 	/*
+ 	 * If we're in an interrupt or have no user
+ 	 * context, we must not take the fault..
+@@ -198,7 +196,6 @@ good_area:
+ bad_area:
+ 	up_read(&mm->mmap_sem);
+ 
+-bad_area_nosemaphore:
+ 	/* User mode accesses just cause a SIGSEGV */
+ 	if (user_mode(regs)) {
+ 		tsk->thread.fault_address = address;
+diff --git a/arch/mips/mm/mmap.c b/arch/mips/mm/mmap.c
+index 2f616ebeb7e0..7755a1fad05a 100644
+--- a/arch/mips/mm/mmap.c
++++ b/arch/mips/mm/mmap.c
+@@ -203,6 +203,11 @@ unsigned long arch_randomize_brk(struct mm_struct *mm)
+ 
+ int __virt_addr_valid(const volatile void *kaddr)
+ {
++	unsigned long vaddr = (unsigned long)kaddr;
++
++	if ((vaddr < PAGE_OFFSET) || (vaddr >= MAP_BASE))
++		return 0;
++
+ 	return pfn_valid(PFN_DOWN(virt_to_phys(kaddr)));
+ }
+ EXPORT_SYMBOL_GPL(__virt_addr_valid);
+diff --git a/arch/mips/pistachio/Platform b/arch/mips/pistachio/Platform
+index d80cd612df1f..c3592b374ad2 100644
+--- a/arch/mips/pistachio/Platform
++++ b/arch/mips/pistachio/Platform
+@@ -6,3 +6,4 @@ cflags-$(CONFIG_MACH_PISTACHIO)		+=				\
+ 		-I$(srctree)/arch/mips/include/asm/mach-pistachio
+ load-$(CONFIG_MACH_PISTACHIO)		+= 0xffffffff80400000
+ zload-$(CONFIG_MACH_PISTACHIO)		+= 0xffffffff81000000
++all-$(CONFIG_MACH_PISTACHIO)		:= uImage.gz
+diff --git a/arch/parisc/kernel/alternative.c b/arch/parisc/kernel/alternative.c
+index bf2274e01a96..ca1f5ca0540a 100644
+--- a/arch/parisc/kernel/alternative.c
++++ b/arch/parisc/kernel/alternative.c
+@@ -56,7 +56,8 @@ void __init_or_module apply_alternatives(struct alt_instr *start,
+ 		 * time IO-PDIR is changed in Ike/Astro.
+ 		 */
+ 		if ((cond & ALT_COND_NO_IOC_FDC) &&
+-			(boot_cpu_data.pdc.capabilities & PDC_MODEL_IOPDIR_FDC))
++			((boot_cpu_data.cpu_type <= pcxw_) ||
++			 (boot_cpu_data.pdc.capabilities & PDC_MODEL_IOPDIR_FDC)))
+ 			continue;
+ 
+ 		/* Want to replace pdtlb by a pdtlb,l instruction? */
+diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
+index 11613362c4e7..5f14d0df99bf 100644
+--- a/arch/s390/mm/fault.c
++++ b/arch/s390/mm/fault.c
+@@ -83,7 +83,6 @@ static inline int notify_page_fault(struct pt_regs *regs)
+ 
+ /*
+  * Find out which address space caused the exception.
+- * Access register mode is impossible, ignore space == 3.
+  */
+ static inline enum fault_type get_fault_type(struct pt_regs *regs)
+ {
+@@ -108,6 +107,10 @@ static inline enum fault_type get_fault_type(struct pt_regs *regs)
+ 		}
+ 		return VDSO_FAULT;
+ 	}
++	if (trans_exc_code == 1) {
++		/* access register mode, not used in the kernel */
++		return USER_FAULT;
++	}
+ 	/* home space exception -> access via kernel ASCE */
+ 	return KERNEL_FAULT;
+ }
+diff --git a/arch/x86/lib/insn-eval.c b/arch/x86/lib/insn-eval.c
+index cf00ab6c6621..306c3a0902ba 100644
+--- a/arch/x86/lib/insn-eval.c
++++ b/arch/x86/lib/insn-eval.c
+@@ -557,7 +557,8 @@ static int get_reg_offset_16(struct insn *insn, struct pt_regs *regs,
+ }
+ 
+ /**
+- * get_desc() - Obtain pointer to a segment descriptor
++ * get_desc() - Obtain contents of a segment descriptor
++ * @out:	Segment descriptor contents on success
+  * @sel:	Segment selector
+  *
+  * Given a segment selector, obtain a pointer to the segment descriptor.
+@@ -565,18 +566,18 @@ static int get_reg_offset_16(struct insn *insn, struct pt_regs *regs,
+  *
+  * Returns:
+  *
+- * Pointer to segment descriptor on success.
++ * True on success, false on failure.
+  *
+  * NULL on error.
+  */
+-static struct desc_struct *get_desc(unsigned short sel)
++static bool get_desc(struct desc_struct *out, unsigned short sel)
+ {
+ 	struct desc_ptr gdt_desc = {0, 0};
+ 	unsigned long desc_base;
+ 
+ #ifdef CONFIG_MODIFY_LDT_SYSCALL
+ 	if ((sel & SEGMENT_TI_MASK) == SEGMENT_LDT) {
+-		struct desc_struct *desc = NULL;
++		bool success = false;
+ 		struct ldt_struct *ldt;
+ 
+ 		/* Bits [15:3] contain the index of the desired entry. */
+@@ -584,12 +585,14 @@ static struct desc_struct *get_desc(unsigned short sel)
+ 
+ 		mutex_lock(&current->active_mm->context.lock);
+ 		ldt = current->active_mm->context.ldt;
+-		if (ldt && sel < ldt->nr_entries)
+-			desc = &ldt->entries[sel];
++		if (ldt && sel < ldt->nr_entries) {
++			*out = ldt->entries[sel];
++			success = true;
++		}
+ 
+ 		mutex_unlock(&current->active_mm->context.lock);
+ 
+-		return desc;
++		return success;
+ 	}
+ #endif
+ 	native_store_gdt(&gdt_desc);
+@@ -604,9 +607,10 @@ static struct desc_struct *get_desc(unsigned short sel)
+ 	desc_base = sel & ~(SEGMENT_RPL_MASK | SEGMENT_TI_MASK);
+ 
+ 	if (desc_base > gdt_desc.size)
+-		return NULL;
++		return false;
+ 
+-	return (struct desc_struct *)(gdt_desc.address + desc_base);
++	*out = *(struct desc_struct *)(gdt_desc.address + desc_base);
++	return true;
+ }
+ 
+ /**
+@@ -628,7 +632,7 @@ static struct desc_struct *get_desc(unsigned short sel)
+  */
+ unsigned long insn_get_seg_base(struct pt_regs *regs, int seg_reg_idx)
+ {
+-	struct desc_struct *desc;
++	struct desc_struct desc;
+ 	short sel;
+ 
+ 	sel = get_segment_selector(regs, seg_reg_idx);
+@@ -666,11 +670,10 @@ unsigned long insn_get_seg_base(struct pt_regs *regs, int seg_reg_idx)
+ 	if (!sel)
+ 		return -1L;
+ 
+-	desc = get_desc(sel);
+-	if (!desc)
++	if (!get_desc(&desc, sel))
+ 		return -1L;
+ 
+-	return get_desc_base(desc);
++	return get_desc_base(&desc);
+ }
+ 
+ /**
+@@ -692,7 +695,7 @@ unsigned long insn_get_seg_base(struct pt_regs *regs, int seg_reg_idx)
+  */
+ static unsigned long get_seg_limit(struct pt_regs *regs, int seg_reg_idx)
+ {
+-	struct desc_struct *desc;
++	struct desc_struct desc;
+ 	unsigned long limit;
+ 	short sel;
+ 
+@@ -706,8 +709,7 @@ static unsigned long get_seg_limit(struct pt_regs *regs, int seg_reg_idx)
+ 	if (!sel)
+ 		return 0;
+ 
+-	desc = get_desc(sel);
+-	if (!desc)
++	if (!get_desc(&desc, sel))
+ 		return 0;
+ 
+ 	/*
+@@ -716,8 +718,8 @@ static unsigned long get_seg_limit(struct pt_regs *regs, int seg_reg_idx)
+ 	 * not tested when checking the segment limits. In practice,
+ 	 * this means that the segment ends in (limit << 12) + 0xfff.
+ 	 */
+-	limit = get_desc_limit(desc);
+-	if (desc->g)
++	limit = get_desc_limit(&desc);
++	if (desc.g)
+ 		limit = (limit << 12) + 0xfff;
+ 
+ 	return limit;
+@@ -741,7 +743,7 @@ static unsigned long get_seg_limit(struct pt_regs *regs, int seg_reg_idx)
+  */
+ int insn_get_code_seg_params(struct pt_regs *regs)
+ {
+-	struct desc_struct *desc;
++	struct desc_struct desc;
+ 	short sel;
+ 
+ 	if (v8086_mode(regs))
+@@ -752,8 +754,7 @@ int insn_get_code_seg_params(struct pt_regs *regs)
+ 	if (sel < 0)
+ 		return sel;
+ 
+-	desc = get_desc(sel);
+-	if (!desc)
++	if (!get_desc(&desc, sel))
+ 		return -EINVAL;
+ 
+ 	/*
+@@ -761,10 +762,10 @@ int insn_get_code_seg_params(struct pt_regs *regs)
+ 	 * determines whether a segment contains data or code. If this is a data
+ 	 * segment, return error.
+ 	 */
+-	if (!(desc->type & BIT(3)))
++	if (!(desc.type & BIT(3)))
+ 		return -EINVAL;
+ 
+-	switch ((desc->l << 1) | desc->d) {
++	switch ((desc.l << 1) | desc.d) {
+ 	case 0: /*
+ 		 * Legacy mode. CS.L=0, CS.D=0. Address and operand size are
+ 		 * both 16-bit.
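
Returning a pointer into the LDT was a use-after-free in waiting: once
context.lock is dropped, another thread can free or replace the LDT, leaving
the caller dereferencing dead memory. Every caller is therefore converted to
the copy-out-under-lock pattern, sketched here in userspace terms with a
pthread mutex in place of the mm context lock:

    #include <pthread.h>
    #include <stdbool.h>

    struct entry {
        char data[8];
    };

    static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct entry *table;     /* other threads may free/replace it */
    static unsigned int table_len;

    /* Snapshot the entry while the lock is held. Returning &table[idx]
     * would hand out a pointer valid only until the next table swap. */
    static bool get_entry(struct entry *out, unsigned int idx)
    {
        bool success = false;

        pthread_mutex_lock(&table_lock);
        if (table && idx < table_len) {
            *out = table[idx];
            success = true;
        }
        pthread_mutex_unlock(&table_lock);
        return success;
    }
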
+diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
+index a7d966964c6f..513ce09e9950 100644
+--- a/arch/x86/power/cpu.c
++++ b/arch/x86/power/cpu.c
+@@ -299,7 +299,17 @@ int hibernate_resume_nonboot_cpu_disable(void)
+ 	 * address in its instruction pointer may not be possible to resolve
+ 	 * any more at that point (the page tables used by it previously may
+ 	 * have been overwritten by hibernate image data).
++	 *
++	 * First, make sure that we wake up all the potentially disabled SMT
++	 * threads which have been initially brought up and then put into
++	 * mwait/cpuidle sleep.
++	 * Afterwards they will be put into a proper sleep state (one that
++	 * does not interfere with hibernation resume), and the resumed
++	 * kernel will decide for itself what to do with them.
+ 	 */
++	ret = cpuhp_smt_enable();
++	if (ret)
++		return ret;
+ 	smp_ops.play_dead = resume_play_dead;
+ 	ret = disable_nonboot_cpus();
+ 	smp_ops.play_dead = play_dead;
+diff --git a/arch/x86/power/hibernate.c b/arch/x86/power/hibernate.c
+index bcddf09b5aa3..b924a81ca0df 100644
+--- a/arch/x86/power/hibernate.c
++++ b/arch/x86/power/hibernate.c
+@@ -11,6 +11,7 @@
+ #include <linux/suspend.h>
+ #include <linux/scatterlist.h>
+ #include <linux/kdebug.h>
++#include <linux/cpu.h>
+ 
+ #include <crypto/hash.h>
+ 
+@@ -246,3 +247,35 @@ out:
+ 	__flush_tlb_all();
+ 	return 0;
+ }
++
++int arch_resume_nosmt(void)
++{
++	int ret = 0;
++	/*
++	 * We reached this while coming out of hibernation. This means
++	 * that SMT siblings are sleeping in hlt, as mwait is not safe
++	 * against control transition during resume (see comment in
++	 * hibernate_resume_nonboot_cpu_disable()).
++	 *
++	 * If the resumed kernel has SMT disabled, we have to take all the
++	 * SMT siblings out of hlt, and offline them again so that they
++	 * end up in mwait proper.
++	 *
++	 * Called with hotplug disabled.
++	 */
++	cpu_hotplug_enable();
++	if (cpu_smt_control == CPU_SMT_DISABLED ||
++			cpu_smt_control == CPU_SMT_FORCE_DISABLED) {
++		enum cpuhp_smt_control old = cpu_smt_control;
++
++		ret = cpuhp_smt_enable();
++		if (ret)
++			goto out;
++		ret = cpuhp_smt_disable(old);
++		if (ret)
++			goto out;
++	}
++out:
++	cpu_hotplug_disable();
++	return ret;
++}
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index d43a5677ccbc..a74d03913822 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -1310,11 +1310,11 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo)
+ 		}
+ 
+ free_shadow:
+-		kfree(rinfo->shadow[i].grants_used);
++		kvfree(rinfo->shadow[i].grants_used);
+ 		rinfo->shadow[i].grants_used = NULL;
+-		kfree(rinfo->shadow[i].indirect_grants);
++		kvfree(rinfo->shadow[i].indirect_grants);
+ 		rinfo->shadow[i].indirect_grants = NULL;
+-		kfree(rinfo->shadow[i].sg);
++		kvfree(rinfo->shadow[i].sg);
+ 		rinfo->shadow[i].sg = NULL;
+ 	}
+ 
+@@ -1353,7 +1353,7 @@ static void blkif_free(struct blkfront_info *info, int suspend)
+ 	for (i = 0; i < info->nr_rings; i++)
+ 		blkif_free_ring(&info->rinfo[i]);
+ 
+-	kfree(info->rinfo);
++	kvfree(info->rinfo);
+ 	info->rinfo = NULL;
+ 	info->nr_rings = 0;
+ }
+@@ -1914,9 +1914,9 @@ static int negotiate_mq(struct blkfront_info *info)
+ 	if (!info->nr_rings)
+ 		info->nr_rings = 1;
+ 
+-	info->rinfo = kcalloc(info->nr_rings,
+-			      sizeof(struct blkfront_ring_info),
+-			      GFP_KERNEL);
++	info->rinfo = kvcalloc(info->nr_rings,
++			       sizeof(struct blkfront_ring_info),
++			       GFP_KERNEL);
+ 	if (!info->rinfo) {
+ 		xenbus_dev_fatal(info->xbdev, -ENOMEM, "allocating ring_info structure");
+ 		info->nr_rings = 0;
+@@ -2232,17 +2232,17 @@ static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo)
+ 
+ 	for (i = 0; i < BLK_RING_SIZE(info); i++) {
+ 		rinfo->shadow[i].grants_used =
+-			kcalloc(grants,
+-				sizeof(rinfo->shadow[i].grants_used[0]),
+-				GFP_NOIO);
+-		rinfo->shadow[i].sg = kcalloc(psegs,
+-					      sizeof(rinfo->shadow[i].sg[0]),
+-					      GFP_NOIO);
++			kvcalloc(grants,
++				 sizeof(rinfo->shadow[i].grants_used[0]),
++				 GFP_NOIO);
++		rinfo->shadow[i].sg = kvcalloc(psegs,
++					       sizeof(rinfo->shadow[i].sg[0]),
++					       GFP_NOIO);
+ 		if (info->max_indirect_segments)
+ 			rinfo->shadow[i].indirect_grants =
+-				kcalloc(INDIRECT_GREFS(grants),
+-					sizeof(rinfo->shadow[i].indirect_grants[0]),
+-					GFP_NOIO);
++				kvcalloc(INDIRECT_GREFS(grants),
++					 sizeof(rinfo->shadow[i].indirect_grants[0]),
++					 GFP_NOIO);
+ 		if ((rinfo->shadow[i].grants_used == NULL) ||
+ 			(rinfo->shadow[i].sg == NULL) ||
+ 		     (info->max_indirect_segments &&
+@@ -2256,11 +2256,11 @@ static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo)
+ 
+ out_of_memory:
+ 	for (i = 0; i < BLK_RING_SIZE(info); i++) {
+-		kfree(rinfo->shadow[i].grants_used);
++		kvfree(rinfo->shadow[i].grants_used);
+ 		rinfo->shadow[i].grants_used = NULL;
+-		kfree(rinfo->shadow[i].sg);
++		kvfree(rinfo->shadow[i].sg);
+ 		rinfo->shadow[i].sg = NULL;
+-		kfree(rinfo->shadow[i].indirect_grants);
++		kvfree(rinfo->shadow[i].indirect_grants);
+ 		rinfo->shadow[i].indirect_grants = NULL;
+ 	}
+ 	if (!list_empty(&rinfo->indirect_pages)) {
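
nr_rings and the per-ring shadow arrays scale with configuration, so plain
kcalloc() can start failing once the multi-queue and indirect-segment sizes
get large enough to need high-order contiguous pages. kvcalloc() falls back
to vmalloc under memory pressure, and kvfree() releases either kind — which
is why each matching kfree() in the teardown and error paths changes in the
same hunk. In outline (ring_info is a stand-in struct):

    #include <linux/mm.h>
    #include <linux/slab.h>

    struct ring_info {
        /* ... per-ring bookkeeping ... */
        int id;
    };

    static struct ring_info *alloc_rings(unsigned int nr_rings)
    {
        /* Physically contiguous if cheap, vmalloc-backed if not. */
        return kvcalloc(nr_rings, sizeof(struct ring_info), GFP_KERNEL);
    }

    static void free_rings(struct ring_info *rinfo)
    {
        kvfree(rinfo);  /* handles both kmalloc and vmalloc memory */
    }
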
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+index 4376b17ca594..56f8ca2a3bb4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+@@ -464,8 +464,7 @@ static int amdgpu_atif_handler(struct amdgpu_device *adev,
+ 			}
+ 		}
+ 		if (req.pending & ATIF_DGPU_DISPLAY_EVENT) {
+-			if ((adev->flags & AMD_IS_PX) &&
+-			    amdgpu_atpx_dgpu_req_power_for_displays()) {
++			if (adev->flags & AMD_IS_PX) {
+ 				pm_runtime_get_sync(adev->ddev->dev);
+ 				/* Just fire off a uevent and let userspace tell us what to do */
+ 				drm_helper_hpd_irq_event(adev->ddev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index 3091488cd8cc..e0877fd0c051 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -38,18 +38,10 @@ static void psp_set_funcs(struct amdgpu_device *adev);
+ static int psp_early_init(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
++	struct psp_context *psp = &adev->psp;
+ 
+ 	psp_set_funcs(adev);
+ 
+-	return 0;
+-}
+-
+-static int psp_sw_init(void *handle)
+-{
+-	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+-	struct psp_context *psp = &adev->psp;
+-	int ret;
+-
+ 	switch (adev->asic_type) {
+ 	case CHIP_VEGA10:
+ 	case CHIP_VEGA12:
+@@ -67,6 +59,15 @@ static int psp_sw_init(void *handle)
+ 
+ 	psp->adev = adev;
+ 
++	return 0;
++}
++
++static int psp_sw_init(void *handle)
++{
++	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
++	struct psp_context *psp = &adev->psp;
++	int ret;
++
+ 	ret = psp_init_microcode(psp);
+ 	if (ret) {
+ 		DRM_ERROR("Failed to load psp firmware!\n");
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+index c021b114c8a4..f7189e22f6b7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+@@ -1072,7 +1072,7 @@ void amdgpu_vce_ring_emit_fence(struct amdgpu_ring *ring, u64 addr, u64 seq,
+ int amdgpu_vce_ring_test_ring(struct amdgpu_ring *ring)
+ {
+ 	struct amdgpu_device *adev = ring->adev;
+-	uint32_t rptr = amdgpu_ring_get_rptr(ring);
++	uint32_t rptr;
+ 	unsigned i;
+ 	int r, timeout = adev->usec_timeout;
+ 
+@@ -1084,6 +1084,8 @@ int amdgpu_vce_ring_test_ring(struct amdgpu_ring *ring)
+ 	if (r)
+ 		return r;
+ 
++	rptr = amdgpu_ring_get_rptr(ring);
++
+ 	amdgpu_ring_write(ring, VCE_CMD_END);
+ 	amdgpu_ring_commit(ring);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
+index ed89a101f73f..3f38fae08ff8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
+@@ -713,6 +713,11 @@ static bool soc15_need_reset_on_init(struct amdgpu_device *adev)
+ {
+ 	u32 sol_reg;
+ 
++	/* Just return false for soc15 GPUs.  Reset does not seem to
++	 * be necessary.
++	 */
++	return false;
++
+ 	if (adev->flags & AMD_IS_APU)
+ 		return false;
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 0886b36c2344..bfb65e8c728f 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3789,8 +3789,7 @@ static void dm_plane_atomic_async_update(struct drm_plane *plane,
+ 	struct drm_plane_state *old_state =
+ 		drm_atomic_get_old_plane_state(new_state->state, plane);
+ 
+-	if (plane->state->fb != new_state->fb)
+-		drm_atomic_set_fb_for_plane(plane->state, new_state->fb);
++	swap(plane->state->fb, new_state->fb);
+ 
+ 	plane->state->src_x = new_state->src_x;
+ 	plane->state->src_y = new_state->src_y;
+diff --git a/drivers/gpu/drm/amd/display/include/dal_asic_id.h b/drivers/gpu/drm/amd/display/include/dal_asic_id.h
+index 34d6fdcb32e2..4c8ce7938f01 100644
+--- a/drivers/gpu/drm/amd/display/include/dal_asic_id.h
++++ b/drivers/gpu/drm/amd/display/include/dal_asic_id.h
+@@ -138,13 +138,14 @@
+ #endif
+ #define RAVEN_UNKNOWN 0xFF
+ 
+-#if defined(CONFIG_DRM_AMD_DC_DCN1_01)
+-#define ASICREV_IS_RAVEN2(eChipRev) ((eChipRev >= RAVEN2_A0) && (eChipRev < 0xF0))
+-#endif /* DCN1_01 */
+ #define ASIC_REV_IS_RAVEN(eChipRev) ((eChipRev >= RAVEN_A0) && eChipRev < RAVEN_UNKNOWN)
+ #define RAVEN1_F0 0xF0
+ #define ASICREV_IS_RV1_F0(eChipRev) ((eChipRev >= RAVEN1_F0) && (eChipRev < RAVEN_UNKNOWN))
+ 
++#if defined(CONFIG_DRM_AMD_DC_DCN1_01)
++#define ASICREV_IS_PICASSO(eChipRev) ((eChipRev >= PICASSO_A0) && (eChipRev < RAVEN2_A0))
++#define ASICREV_IS_RAVEN2(eChipRev) ((eChipRev >= RAVEN2_A0) && (eChipRev < 0xF0))
++#endif /* DCN1_01 */
+ 
+ #define FAMILY_RV 142 /* DCN 1*/
+ 
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index fbb76332cc9f..9a80ed005d1f 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -1607,15 +1607,6 @@ int drm_atomic_helper_async_check(struct drm_device *dev,
+ 	    old_plane_state->crtc != new_plane_state->crtc)
+ 		return -EINVAL;
+ 
+-	/*
+-	 * FIXME: Since prepare_fb and cleanup_fb are always called on
+-	 * the new_plane_state for async updates we need to block framebuffer
+-	 * changes. This prevents use of a fb that's been cleaned up and
+-	 * double cleanups from occuring.
+-	 */
+-	if (old_plane_state->fb != new_plane_state->fb)
+-		return -EINVAL;
+-
+ 	funcs = plane->helper_private;
+ 	if (!funcs->atomic_async_update)
+ 		return -EINVAL;
+@@ -1646,6 +1637,8 @@ EXPORT_SYMBOL(drm_atomic_helper_async_check);
+  * drm_atomic_async_check() succeeds. Async commits are not supposed to swap
+  * the states like normal sync commits, but just do in-place changes on the
+  * current state.
++ *
++ * TODO: Implement full swap instead of doing in-place changes.
+  */
+ void drm_atomic_helper_async_commit(struct drm_device *dev,
+ 				    struct drm_atomic_state *state)
+@@ -1656,6 +1649,9 @@ void drm_atomic_helper_async_commit(struct drm_device *dev,
+ 	int i;
+ 
+ 	for_each_new_plane_in_state(state, plane, plane_state, i) {
++		struct drm_framebuffer *new_fb = plane_state->fb;
++		struct drm_framebuffer *old_fb = plane->state->fb;
++
+ 		funcs = plane->helper_private;
+ 		funcs->atomic_async_update(plane, plane_state);
+ 
+@@ -1664,11 +1660,17 @@ void drm_atomic_helper_async_commit(struct drm_device *dev,
+ 		 * plane->state in-place, make sure at least common
+ 		 * properties have been properly updated.
+ 		 */
+-		WARN_ON_ONCE(plane->state->fb != plane_state->fb);
++		WARN_ON_ONCE(plane->state->fb != new_fb);
+ 		WARN_ON_ONCE(plane->state->crtc_x != plane_state->crtc_x);
+ 		WARN_ON_ONCE(plane->state->crtc_y != plane_state->crtc_y);
+ 		WARN_ON_ONCE(plane->state->src_x != plane_state->src_x);
+ 		WARN_ON_ONCE(plane->state->src_y != plane_state->src_y);
++
++		/*
++		 * Make sure the FBs have been swapped so that cleanups in the
++		 * new_state performs a cleanup in the old FB.
++		 */
++		WARN_ON_ONCE(plane_state->fb != old_fb);
+ 	}
+ }
+ EXPORT_SYMBOL(drm_atomic_helper_async_commit);
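
Async plane updates modify plane->state in place, yet the helper's
prepare_fb/cleanup_fb pair runs against the transient new_state. Swapping
the fb pointers — which this helper now asserts, and which the amdgpu hunk
above and the mdp5, rockchip and vc4 hunks below implement — parks the old
framebuffer in new_state, so the later cleanup_fb() releases the buffer that
just stopped scanning out instead of the live one. Reduced to the pointer
dance:

    struct fb;

    struct plane_state {
        struct fb *fb;
        /* ... src/crtc coordinates ... */
    };

    static void async_update(struct plane_state *live,
                             struct plane_state *new_state)
    {
        /* After the swap, new_state holds the outgoing buffer and
         * the live state owns the incoming one. */
        struct fb *tmp = live->fb;

        live->fb = new_state->fb;
        new_state->fb = tmp;
        /* ... copy coordinates from new_state into live ... */
    }
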
+diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
+index dd40eff0911c..160edeafa6d6 100644
+--- a/drivers/gpu/drm/drm_connector.c
++++ b/drivers/gpu/drm/drm_connector.c
+@@ -1385,12 +1385,6 @@ EXPORT_SYMBOL(drm_mode_create_scaling_mode_property);
+  *
+  *	The driver may place further restrictions within these minimum
+  *	and maximum bounds.
+- *
+- *	The semantics for the vertical blank timestamp differ when
+- *	variable refresh rate is active. The vertical blank timestamp
+- *	is defined to be an estimate using the current mode's fixed
+- *	refresh rate timings. The semantics for the page-flip event
+- *	timestamp remain the same.
+  */
+ 
+ /**
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index 990b1909f9d7..8db099b8077e 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -172,6 +172,25 @@ static const struct edid_quirk {
+ 	/* Rotel RSX-1058 forwards sink's EDID but only does HDMI 1.1*/
+ 	{ "ETR", 13896, EDID_QUIRK_FORCE_8BPC },
+ 
++	/* Valve Index Headset */
++	{ "VLV", 0x91a8, EDID_QUIRK_NON_DESKTOP },
++	{ "VLV", 0x91b0, EDID_QUIRK_NON_DESKTOP },
++	{ "VLV", 0x91b1, EDID_QUIRK_NON_DESKTOP },
++	{ "VLV", 0x91b2, EDID_QUIRK_NON_DESKTOP },
++	{ "VLV", 0x91b3, EDID_QUIRK_NON_DESKTOP },
++	{ "VLV", 0x91b4, EDID_QUIRK_NON_DESKTOP },
++	{ "VLV", 0x91b5, EDID_QUIRK_NON_DESKTOP },
++	{ "VLV", 0x91b6, EDID_QUIRK_NON_DESKTOP },
++	{ "VLV", 0x91b7, EDID_QUIRK_NON_DESKTOP },
++	{ "VLV", 0x91b8, EDID_QUIRK_NON_DESKTOP },
++	{ "VLV", 0x91b9, EDID_QUIRK_NON_DESKTOP },
++	{ "VLV", 0x91ba, EDID_QUIRK_NON_DESKTOP },
++	{ "VLV", 0x91bb, EDID_QUIRK_NON_DESKTOP },
++	{ "VLV", 0x91bc, EDID_QUIRK_NON_DESKTOP },
++	{ "VLV", 0x91bd, EDID_QUIRK_NON_DESKTOP },
++	{ "VLV", 0x91be, EDID_QUIRK_NON_DESKTOP },
++	{ "VLV", 0x91bf, EDID_QUIRK_NON_DESKTOP },
++
+ 	/* HTC Vive and Vive Pro VR Headsets */
+ 	{ "HVR", 0xaa01, EDID_QUIRK_NON_DESKTOP },
+ 	{ "HVR", 0xaa02, EDID_QUIRK_NON_DESKTOP },
+@@ -193,6 +212,12 @@ static const struct edid_quirk {
+ 
+ 	/* Sony PlayStation VR Headset */
+ 	{ "SNY", 0x0704, EDID_QUIRK_NON_DESKTOP },
++
++	/* Sensics VR Headsets */
++	{ "SEN", 0x1019, EDID_QUIRK_NON_DESKTOP },
++
++	/* OSVR HDK and HDK2 VR Headsets */
++	{ "SVR", 0x1019, EDID_QUIRK_NON_DESKTOP },
+ };
+ 
+ /*
+diff --git a/drivers/gpu/drm/gma500/cdv_intel_lvds.c b/drivers/gpu/drm/gma500/cdv_intel_lvds.c
+index de9531caaca0..9c8446184b17 100644
+--- a/drivers/gpu/drm/gma500/cdv_intel_lvds.c
++++ b/drivers/gpu/drm/gma500/cdv_intel_lvds.c
+@@ -594,6 +594,9 @@ void cdv_intel_lvds_init(struct drm_device *dev,
+ 	int pipe;
+ 	u8 pin;
+ 
++	if (!dev_priv->lvds_enabled_in_vbt)
++		return;
++
+ 	pin = GMBUS_PORT_PANEL;
+ 	if (!lvds_is_present_in_vbt(dev, &pin)) {
+ 		DRM_DEBUG_KMS("LVDS is not present in VBT\n");
+diff --git a/drivers/gpu/drm/gma500/intel_bios.c b/drivers/gpu/drm/gma500/intel_bios.c
+index 63bde4e86c6a..e019ea271ffc 100644
+--- a/drivers/gpu/drm/gma500/intel_bios.c
++++ b/drivers/gpu/drm/gma500/intel_bios.c
+@@ -436,6 +436,9 @@ parse_driver_features(struct drm_psb_private *dev_priv,
+ 	if (driver->lvds_config == BDB_DRIVER_FEATURE_EDP)
+ 		dev_priv->edp.support = 1;
+ 
++	dev_priv->lvds_enabled_in_vbt = driver->lvds_config != 0;
++	DRM_DEBUG_KMS("LVDS VBT config bits: 0x%x\n", driver->lvds_config);
++
+ 	/* This bit means to use 96Mhz for DPLL_A or not */
+ 	if (driver->primary_lfp_id)
+ 		dev_priv->dplla_96mhz = true;
+diff --git a/drivers/gpu/drm/gma500/psb_drv.h b/drivers/gpu/drm/gma500/psb_drv.h
+index 941b238bdcc9..bc608ddc3bd1 100644
+--- a/drivers/gpu/drm/gma500/psb_drv.h
++++ b/drivers/gpu/drm/gma500/psb_drv.h
+@@ -537,6 +537,7 @@ struct drm_psb_private {
+ 	int lvds_ssc_freq;
+ 	bool is_lvds_on;
+ 	bool is_mipi_on;
++	bool lvds_enabled_in_vbt;
+ 	u32 mipi_ctrl_display;
+ 
+ 	unsigned int core_freq;
+diff --git a/drivers/gpu/drm/i915/gvt/gtt.c b/drivers/gpu/drm/i915/gvt/gtt.c
+index 9814773882ec..3090a3862668 100644
+--- a/drivers/gpu/drm/i915/gvt/gtt.c
++++ b/drivers/gpu/drm/i915/gvt/gtt.c
+@@ -2178,7 +2178,8 @@ static int emulate_ggtt_mmio_write(struct intel_vgpu *vgpu, unsigned int off,
+ 	struct intel_gvt_gtt_pte_ops *ops = gvt->gtt.pte_ops;
+ 	unsigned long g_gtt_index = off >> info->gtt_entry_size_shift;
+ 	unsigned long gma, gfn;
+-	struct intel_gvt_gtt_entry e, m;
++	struct intel_gvt_gtt_entry e = {.val64 = 0, .type = GTT_TYPE_GGTT_PTE};
++	struct intel_gvt_gtt_entry m = {.val64 = 0, .type = GTT_TYPE_GGTT_PTE};
+ 	dma_addr_t dma_addr;
+ 	int ret;
+ 	struct intel_gvt_partial_pte *partial_pte, *pos, *n;
+@@ -2245,7 +2246,8 @@ static int emulate_ggtt_mmio_write(struct intel_vgpu *vgpu, unsigned int off,
+ 
+ 	if (!partial_update && (ops->test_present(&e))) {
+ 		gfn = ops->get_pfn(&e);
+-		m = e;
++		m.val64 = e.val64;
++		m.type = e.type;
+ 
+ 		/* one PTE update may be issued in multiple writes and the
+ 		 * first write may not construct a valid gfn
+diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
+index 05b953793316..5df868eb3dc6 100644
+--- a/drivers/gpu/drm/i915/gvt/scheduler.c
++++ b/drivers/gpu/drm/i915/gvt/scheduler.c
+@@ -298,12 +298,31 @@ static int copy_workload_to_ring_buffer(struct intel_vgpu_workload *workload)
+ 	struct i915_request *req = workload->req;
+ 	void *shadow_ring_buffer_va;
+ 	u32 *cs;
++	int err;
+ 
+ 	if ((IS_KABYLAKE(req->i915) || IS_BROXTON(req->i915)
+ 		|| IS_COFFEELAKE(req->i915))
+ 		&& is_inhibit_context(req->hw_context))
+ 		intel_vgpu_restore_inhibit_context(vgpu, req);
+ 
++	/*
++	 * To track whether a request has started on HW, we can emit a
++	 * breadcrumb at the beginning of the request and check its
++	 * timeline's HWSP to see if the breadcrumb has advanced past the
++	 * start of this request. A request must have init_breadcrumb
++	 * emitted if its timeline sets has_init_breadcrumb, or the
++	 * scheduler might read a wrong state for it during reset. Since
++	 * requests from GVT always set the has_init_breadcrumb flag, emit
++	 * the init breadcrumb here for all requests.
++	 */
++	if (req->engine->emit_init_breadcrumb) {
++		err = req->engine->emit_init_breadcrumb(req);
++		if (err) {
++			gvt_vgpu_err("fail to emit init breadcrumb\n");
++			return err;
++		}
++	}
++
+ 	/* allocate shadow ring buffer */
+ 	cs = intel_ring_begin(workload->req, workload->rb_len / sizeof(u32));
+ 	if (IS_ERR(cs)) {
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 047855dd8c6b..328823ab6f60 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -32,7 +32,7 @@
+  * macros. Do **not** mass change existing definitions just to update the style.
+  *
+  * Layout
+- * ''''''
++ * ~~~~~~
+  *
+  * Keep helper macros near the top. For example, _PIPE() and friends.
+  *
+@@ -78,7 +78,7 @@
+  * style. Use lower case in hexadecimal values.
+  *
+  * Naming
+- * ''''''
++ * ~~~~~~
+  *
+  * Try to name registers according to the specs. If the register name changes in
+  * the specs from platform to another, stick to the original name.
+@@ -96,7 +96,7 @@
+  * suffix to the name. For example, ``_SKL`` or ``_GEN8``.
+  *
+  * Examples
+- * ''''''''
++ * ~~~~~~~~
+  *
+  * (Note that the values in the example are indented using spaces instead of
+  * TABs to avoid misalignment in generated documentation. Use TABs in the
+diff --git a/drivers/gpu/drm/i915/intel_fbc.c b/drivers/gpu/drm/i915/intel_fbc.c
+index 656e684e7c9a..fc018f3f53a1 100644
+--- a/drivers/gpu/drm/i915/intel_fbc.c
++++ b/drivers/gpu/drm/i915/intel_fbc.c
+@@ -1278,6 +1278,10 @@ static int intel_sanitize_fbc_option(struct drm_i915_private *dev_priv)
+ 	if (!HAS_FBC(dev_priv))
+ 		return 0;
+ 
++	/* https://bugs.freedesktop.org/show_bug.cgi?id=108085 */
++	if (IS_GEMINILAKE(dev_priv))
++		return 0;
++
+ 	if (IS_BROADWELL(dev_priv) || INTEL_GEN(dev_priv) >= 9)
+ 		return 1;
+ 
+diff --git a/drivers/gpu/drm/i915/intel_workarounds.c b/drivers/gpu/drm/i915/intel_workarounds.c
+index 15f4a6dee5aa..3899e1f67ebc 100644
+--- a/drivers/gpu/drm/i915/intel_workarounds.c
++++ b/drivers/gpu/drm/i915/intel_workarounds.c
+@@ -37,7 +37,7 @@
+  *    costly and simplifies things. We can revisit this in the future.
+  *
+  * Layout
+- * ''''''
++ * ~~~~~~
+  *
+  * Keep things in this file ordered by WA type, as per the above (context, GT,
+  * display, register whitelist, batchbuffer). Then, inside each type, keep the
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
+index be13140967b4..b854f471e9e5 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
+@@ -502,6 +502,8 @@ static int mdp5_plane_atomic_async_check(struct drm_plane *plane,
+ static void mdp5_plane_atomic_async_update(struct drm_plane *plane,
+ 					   struct drm_plane_state *new_state)
+ {
++	struct drm_framebuffer *old_fb = plane->state->fb;
++
+ 	plane->state->src_x = new_state->src_x;
+ 	plane->state->src_y = new_state->src_y;
+ 	plane->state->crtc_x = new_state->crtc_x;
+@@ -524,6 +526,8 @@ static void mdp5_plane_atomic_async_update(struct drm_plane *plane,
+ 
+ 	*to_mdp5_plane_state(plane->state) =
+ 		*to_mdp5_plane_state(new_state);
++
++	new_state->fb = old_fb;
+ }
+ 
+ static const struct drm_plane_helper_funcs mdp5_plane_helper_funcs = {
+diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
+index 00cd9ab8948d..db28012dbf54 100644
+--- a/drivers/gpu/drm/nouveau/Kconfig
++++ b/drivers/gpu/drm/nouveau/Kconfig
+@@ -17,10 +17,21 @@ config DRM_NOUVEAU
+ 	select INPUT if ACPI && X86
+ 	select THERMAL if ACPI && X86
+ 	select ACPI_VIDEO if ACPI && X86
+-	select DRM_VM
+ 	help
+ 	  Choose this option for open-source NVIDIA support.
+ 
++config NOUVEAU_LEGACY_CTX_SUPPORT
++	bool "Nouveau legacy context support"
++	depends on DRM_NOUVEAU
++	select DRM_VM
++	default y
++	help
++	  There was a version of the nouveau DDX that relied on legacy
++	  ctx ioctls not erroring out. But that was a long time ago, so
++	  offer a way to disable it now. For uapi compatibility with the
++	  old nouveau DDX this should be on by default, but modern distros
++	  should consider turning it off.
++
+ config NOUVEAU_PLATFORM_DRIVER
+ 	bool "Nouveau (NVIDIA) SoC GPUs"
+ 	depends on DRM_NOUVEAU && ARCH_TEGRA
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index 5020265bfbd9..6ab9033f49da 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -1094,8 +1094,11 @@ nouveau_driver_fops = {
+ static struct drm_driver
+ driver_stub = {
+ 	.driver_features =
+-		DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME | DRIVER_RENDER |
+-		DRIVER_KMS_LEGACY_CONTEXT,
++		DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME | DRIVER_RENDER
++#if defined(CONFIG_NOUVEAU_LEGACY_CTX_SUPPORT)
++		| DRIVER_KMS_LEGACY_CONTEXT
++#endif
++		,
+ 
+ 	.open = nouveau_drm_open,
+ 	.postclose = nouveau_drm_postclose,
+diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c
+index aa898c699101..433df7036f96 100644
+--- a/drivers/gpu/drm/radeon/radeon_display.c
++++ b/drivers/gpu/drm/radeon/radeon_display.c
+@@ -922,12 +922,12 @@ static void avivo_get_fb_ref_div(unsigned nom, unsigned den, unsigned post_div,
+ 	ref_div_max = max(min(100 / post_div, ref_div_max), 1u);
+ 
+ 	/* get matching reference and feedback divider */
+-	*ref_div = min(max(DIV_ROUND_CLOSEST(den, post_div), 1u), ref_div_max);
++	*ref_div = min(max(den/post_div, 1u), ref_div_max);
+ 	*fb_div = DIV_ROUND_CLOSEST(nom * *ref_div * post_div, den);
+ 
+ 	/* limit fb divider to its maximum */
+ 	if (*fb_div > fb_div_max) {
+-		*ref_div = DIV_ROUND_CLOSEST(*ref_div * fb_div_max, *fb_div);
++		*ref_div = (*ref_div * fb_div_max)/(*fb_div);
+ 		*fb_div = fb_div_max;
+ 	}
+ }
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+index 0d4ade9d4722..cd58dc81ccf3 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -924,29 +924,17 @@ static void vop_plane_atomic_async_update(struct drm_plane *plane,
+ 					  struct drm_plane_state *new_state)
+ {
+ 	struct vop *vop = to_vop(plane->state->crtc);
+-	struct drm_plane_state *plane_state;
+-
+-	plane_state = plane->funcs->atomic_duplicate_state(plane);
+-	plane_state->crtc_x = new_state->crtc_x;
+-	plane_state->crtc_y = new_state->crtc_y;
+-	plane_state->crtc_h = new_state->crtc_h;
+-	plane_state->crtc_w = new_state->crtc_w;
+-	plane_state->src_x = new_state->src_x;
+-	plane_state->src_y = new_state->src_y;
+-	plane_state->src_h = new_state->src_h;
+-	plane_state->src_w = new_state->src_w;
+-
+-	if (plane_state->fb != new_state->fb)
+-		drm_atomic_set_fb_for_plane(plane_state, new_state->fb);
+-
+-	swap(plane_state, plane->state);
+-
+-	if (plane->state->fb && plane->state->fb != new_state->fb) {
+-		drm_framebuffer_get(plane->state->fb);
+-		WARN_ON(drm_crtc_vblank_get(plane->state->crtc) != 0);
+-		drm_flip_work_queue(&vop->fb_unref_work, plane->state->fb);
+-		set_bit(VOP_PENDING_FB_UNREF, &vop->pending);
+-	}
++	struct drm_framebuffer *old_fb = plane->state->fb;
++
++	plane->state->crtc_x = new_state->crtc_x;
++	plane->state->crtc_y = new_state->crtc_y;
++	plane->state->crtc_h = new_state->crtc_h;
++	plane->state->crtc_w = new_state->crtc_w;
++	plane->state->src_x = new_state->src_x;
++	plane->state->src_y = new_state->src_y;
++	plane->state->src_h = new_state->src_h;
++	plane->state->src_w = new_state->src_w;
++	swap(plane->state->fb, new_state->fb);
+ 
+ 	if (vop->is_enabled) {
+ 		rockchip_drm_psr_inhibit_get_state(new_state->state);
+@@ -955,9 +943,22 @@ static void vop_plane_atomic_async_update(struct drm_plane *plane,
+ 		vop_cfg_done(vop);
+ 		spin_unlock(&vop->reg_lock);
+ 		rockchip_drm_psr_inhibit_put_state(new_state->state);
+-	}
+ 
+-	plane->funcs->atomic_destroy_state(plane, plane_state);
++		/*
++		 * A scanout can still be occurring, so we can't drop the
++		 * reference to the old framebuffer. To solve this we get a
++		 * reference to old_fb and set a worker to release it later.
++		 * FIXME: if we perform 500 async_update calls before the
++		 * vblank, then we can have 500 different framebuffers waiting
++		 * to be released.
++		 */
++		if (old_fb && plane->state->fb != old_fb) {
++			drm_framebuffer_get(old_fb);
++			WARN_ON(drm_crtc_vblank_get(plane->state->crtc) != 0);
++			drm_flip_work_queue(&vop->fb_unref_work, old_fb);
++			set_bit(VOP_PENDING_FB_UNREF, &vop->pending);
++		}
++	}
+ }
+ 
+ static const struct drm_plane_helper_funcs plane_helper_funcs = {
+diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
+index d098337c10e9..7dad38e554a5 100644
+--- a/drivers/gpu/drm/vc4/vc4_plane.c
++++ b/drivers/gpu/drm/vc4/vc4_plane.c
+@@ -968,7 +968,7 @@ static void vc4_plane_atomic_async_update(struct drm_plane *plane,
+ {
+ 	struct vc4_plane_state *vc4_state, *new_vc4_state;
+ 
+-	drm_atomic_set_fb_for_plane(plane->state, state->fb);
++	swap(plane->state->fb, state->fb);
+ 	plane->state->crtc_x = state->crtc_x;
+ 	plane->state->crtc_y = state->crtc_y;
+ 	plane->state->crtc_w = state->crtc_w;
+diff --git a/drivers/i2c/busses/i2c-xiic.c b/drivers/i2c/busses/i2c-xiic.c
+index 0c51c0ffdda9..8d6b6eeef71c 100644
+--- a/drivers/i2c/busses/i2c-xiic.c
++++ b/drivers/i2c/busses/i2c-xiic.c
+@@ -718,11 +718,16 @@ static const struct i2c_algorithm xiic_algorithm = {
+ 	.functionality = xiic_func,
+ };
+ 
++static const struct i2c_adapter_quirks xiic_quirks = {
++	.max_read_len = 255,
++};
++
+ static const struct i2c_adapter xiic_adapter = {
+ 	.owner = THIS_MODULE,
+ 	.name = DRIVER_NAME,
+ 	.class = I2C_CLASS_DEPRECATED,
+ 	.algo = &xiic_algorithm,
++	.quirks = &xiic_quirks,
+ };
+ 
+ 
+diff --git a/drivers/memstick/core/mspro_block.c b/drivers/memstick/core/mspro_block.c
+index aba50ec98b4d..9545e87b6085 100644
+--- a/drivers/memstick/core/mspro_block.c
++++ b/drivers/memstick/core/mspro_block.c
+@@ -694,13 +694,13 @@ static void h_mspro_block_setup_cmd(struct memstick_dev *card, u64 offset,
+ 
+ /*** Data transfer ***/
+ 
+-static int mspro_block_issue_req(struct memstick_dev *card, bool chunk)
++static int mspro_block_issue_req(struct memstick_dev *card)
+ {
+ 	struct mspro_block_data *msb = memstick_get_drvdata(card);
+ 	u64 t_off;
+ 	unsigned int count;
+ 
+-	while (chunk) {
++	while (true) {
+ 		msb->current_page = 0;
+ 		msb->current_seg = 0;
+ 		msb->seg_count = blk_rq_map_sg(msb->block_req->q,
+@@ -709,6 +709,7 @@ static int mspro_block_issue_req(struct memstick_dev *card, bool chunk)
+ 
+ 		if (!msb->seg_count) {
+ 			unsigned int bytes = blk_rq_cur_bytes(msb->block_req);
++			bool chunk;
+ 
+ 			chunk = blk_update_request(msb->block_req,
+ 							BLK_STS_RESOURCE,
+@@ -718,7 +719,7 @@ static int mspro_block_issue_req(struct memstick_dev *card, bool chunk)
+ 			__blk_mq_end_request(msb->block_req,
+ 						BLK_STS_RESOURCE);
+ 			msb->block_req = NULL;
+-			break;
++			return -EAGAIN;
+ 		}
+ 
+ 		t_off = blk_rq_pos(msb->block_req);
+@@ -735,8 +736,6 @@ static int mspro_block_issue_req(struct memstick_dev *card, bool chunk)
+ 		memstick_new_req(card->host);
+ 		return 0;
+ 	}
+-
+-	return 1;
+ }
+ 
+ static int mspro_block_complete_req(struct memstick_dev *card, int error)
+@@ -779,7 +778,7 @@ static int mspro_block_complete_req(struct memstick_dev *card, int error)
+ 		chunk = blk_update_request(msb->block_req,
+ 				errno_to_blk_status(error), t_len);
+ 		if (chunk) {
+-			error = mspro_block_issue_req(card, chunk);
++			error = mspro_block_issue_req(card);
+ 			if (!error)
+ 				goto out;
+ 		} else {
+@@ -849,7 +848,7 @@ static blk_status_t mspro_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 	msb->block_req = bd->rq;
+ 	blk_mq_start_request(bd->rq);
+ 
+-	if (mspro_block_issue_req(card, true))
++	if (mspro_block_issue_req(card))
+ 		msb->block_req = NULL;
+ 
+ 	spin_unlock_irq(&msb->q_lock);
+diff --git a/drivers/misc/genwqe/card_dev.c b/drivers/misc/genwqe/card_dev.c
+index 8c1b63a4337b..d2098b4d2945 100644
+--- a/drivers/misc/genwqe/card_dev.c
++++ b/drivers/misc/genwqe/card_dev.c
+@@ -780,6 +780,8 @@ static int genwqe_pin_mem(struct genwqe_file *cfile, struct genwqe_mem *m)
+ 
+ 	if ((m->addr == 0x0) || (m->size == 0))
+ 		return -EINVAL;
++	if (m->size > ULONG_MAX - PAGE_SIZE - (m->addr & ~PAGE_MASK))
++		return -EINVAL;
+ 
+ 	map_addr = (m->addr & PAGE_MASK);
+ 	map_size = round_up(m->size + (m->addr & ~PAGE_MASK), PAGE_SIZE);
+diff --git a/drivers/misc/genwqe/card_utils.c b/drivers/misc/genwqe/card_utils.c
+index 25265fd0fd6e..459cdbd94302 100644
+--- a/drivers/misc/genwqe/card_utils.c
++++ b/drivers/misc/genwqe/card_utils.c
+@@ -586,6 +586,10 @@ int genwqe_user_vmap(struct genwqe_dev *cd, struct dma_mapping *m, void *uaddr,
+ 	/* determine space needed for page_list. */
+ 	data = (unsigned long)uaddr;
+ 	offs = offset_in_page(data);
++	if (size > ULONG_MAX - PAGE_SIZE - offs) {
++		m->size = 0;	/* mark unused and not added */
++		return -EINVAL;
++	}
+ 	m->nr_pages = DIV_ROUND_UP(offs + size, PAGE_SIZE);
+ 
+ 	m->page_list = kcalloc(m->nr_pages,
+diff --git a/drivers/misc/habanalabs/debugfs.c b/drivers/misc/habanalabs/debugfs.c
+index 974a87789bd8..17ba26422b29 100644
+--- a/drivers/misc/habanalabs/debugfs.c
++++ b/drivers/misc/habanalabs/debugfs.c
+@@ -459,41 +459,31 @@ static ssize_t mmu_write(struct file *file, const char __user *buf,
+ 	struct hl_debugfs_entry *entry = s->private;
+ 	struct hl_dbg_device_entry *dev_entry = entry->dev_entry;
+ 	struct hl_device *hdev = dev_entry->hdev;
+-	char kbuf[MMU_KBUF_SIZE], asid_kbuf[MMU_ASID_BUF_SIZE],
+-		addr_kbuf[MMU_ADDR_BUF_SIZE];
++	char kbuf[MMU_KBUF_SIZE];
+ 	char *c;
+ 	ssize_t rc;
+ 
+ 	if (!hdev->mmu_enable)
+ 		return count;
+ 
+-	memset(kbuf, 0, sizeof(kbuf));
+-	memset(asid_kbuf, 0, sizeof(asid_kbuf));
+-	memset(addr_kbuf, 0, sizeof(addr_kbuf));
+-
++	if (count > sizeof(kbuf) - 1)
++		goto err;
+ 	if (copy_from_user(kbuf, buf, count))
+ 		goto err;
+-
+-	kbuf[MMU_KBUF_SIZE - 1] = 0;
++	kbuf[count] = 0;
+ 
+ 	c = strchr(kbuf, ' ');
+ 	if (!c)
+ 		goto err;
++	*c = '\0';
+ 
+-	memcpy(asid_kbuf, kbuf, c - kbuf);
+-
+-	rc = kstrtouint(asid_kbuf, 10, &dev_entry->mmu_asid);
++	rc = kstrtouint(kbuf, 10, &dev_entry->mmu_asid);
+ 	if (rc)
+ 		goto err;
+ 
+-	c = strstr(kbuf, " 0x");
+-	if (!c)
++	if (strncmp(c+1, "0x", 2))
+ 		goto err;
+-
+-	c += 3;
+-	memcpy(addr_kbuf, c, (kbuf + count) - c);
+-
+-	rc = kstrtoull(addr_kbuf, 16, &dev_entry->mmu_addr);
++	rc = kstrtoull(c+3, 16, &dev_entry->mmu_addr);
+ 	if (rc)
+ 		goto err;
+ 
+@@ -525,10 +515,8 @@ static ssize_t hl_data_read32(struct file *f, char __user *buf,
+ 	}
+ 
+ 	sprintf(tmp_buf, "0x%08x\n", val);
+-	rc = simple_read_from_buffer(buf, strlen(tmp_buf) + 1, ppos, tmp_buf,
+-			strlen(tmp_buf) + 1);
+-
+-	return rc;
++	return simple_read_from_buffer(buf, count, ppos, tmp_buf,
++			strlen(tmp_buf));
+ }
+ 
+ static ssize_t hl_data_write32(struct file *f, const char __user *buf,
+@@ -559,7 +547,6 @@ static ssize_t hl_get_power_state(struct file *f, char __user *buf,
+ 	struct hl_dbg_device_entry *entry = file_inode(f)->i_private;
+ 	struct hl_device *hdev = entry->hdev;
+ 	char tmp_buf[200];
+-	ssize_t rc;
+ 	int i;
+ 
+ 	if (*ppos)
+@@ -574,10 +561,8 @@ static ssize_t hl_get_power_state(struct file *f, char __user *buf,
+ 
+ 	sprintf(tmp_buf,
+ 		"current power state: %d\n1 - D0\n2 - D3hot\n3 - Unknown\n", i);
+-	rc = simple_read_from_buffer(buf, strlen(tmp_buf) + 1, ppos, tmp_buf,
+-			strlen(tmp_buf) + 1);
+-
+-	return rc;
++	return simple_read_from_buffer(buf, count, ppos, tmp_buf,
++			strlen(tmp_buf));
+ }
+ 
+ static ssize_t hl_set_power_state(struct file *f, const char __user *buf,
+@@ -630,8 +615,8 @@ static ssize_t hl_i2c_data_read(struct file *f, char __user *buf,
+ 	}
+ 
+ 	sprintf(tmp_buf, "0x%02x\n", val);
+-	rc = simple_read_from_buffer(buf, strlen(tmp_buf) + 1, ppos, tmp_buf,
+-			strlen(tmp_buf) + 1);
++	rc = simple_read_from_buffer(buf, count, ppos, tmp_buf,
++			strlen(tmp_buf));
+ 
+ 	return rc;
+ }
+@@ -720,18 +705,9 @@ static ssize_t hl_led2_write(struct file *f, const char __user *buf,
+ static ssize_t hl_device_read(struct file *f, char __user *buf,
+ 					size_t count, loff_t *ppos)
+ {
+-	char tmp_buf[200];
+-	ssize_t rc;
+-
+-	if (*ppos)
+-		return 0;
+-
+-	sprintf(tmp_buf,
+-		"Valid values: disable, enable, suspend, resume, cpu_timeout\n");
+-	rc = simple_read_from_buffer(buf, strlen(tmp_buf) + 1, ppos, tmp_buf,
+-			strlen(tmp_buf) + 1);
+-
+-	return rc;
++	static const char *help =
++		"Valid values: disable, enable, suspend, resume, cpu_timeout\n";
++	return simple_read_from_buffer(buf, count, ppos, help, strlen(help));
+ }
+ 
+ static ssize_t hl_device_write(struct file *f, const char __user *buf,
+@@ -739,7 +715,7 @@ static ssize_t hl_device_write(struct file *f, const char __user *buf,
+ {
+ 	struct hl_dbg_device_entry *entry = file_inode(f)->i_private;
+ 	struct hl_device *hdev = entry->hdev;
+-	char data[30];
++	char data[30] = {0};
+ 
+ 	/* don't allow partial writes */
+ 	if (*ppos != 0)
+diff --git a/drivers/mmc/host/sdhci_am654.c b/drivers/mmc/host/sdhci_am654.c
+index eea183e90f1b..f427f04991a6 100644
+--- a/drivers/mmc/host/sdhci_am654.c
++++ b/drivers/mmc/host/sdhci_am654.c
+@@ -209,7 +209,7 @@ static int sdhci_am654_init(struct sdhci_host *host)
+ 		ctl_cfg_2 = SLOTTYPE_EMBEDDED;
+ 
+ 	regmap_update_bits(sdhci_am654->base, CTL_CFG_2,
+-			   ctl_cfg_2, SLOTTYPE_MASK);
++			   SLOTTYPE_MASK, ctl_cfg_2);
+ 
+ 	return sdhci_add_host(host);
+ }
+diff --git a/drivers/mmc/host/tmio_mmc_core.c b/drivers/mmc/host/tmio_mmc_core.c
+index 595949f1f001..78cc2a928efe 100644
+--- a/drivers/mmc/host/tmio_mmc_core.c
++++ b/drivers/mmc/host/tmio_mmc_core.c
+@@ -842,8 +842,9 @@ static void tmio_mmc_finish_request(struct tmio_mmc_host *host)
+ 	if (mrq->cmd->error || (mrq->data && mrq->data->error))
+ 		tmio_mmc_abort_dma(host);
+ 
++	/* SCC error means retune, but executed command was still successful */
+ 	if (host->check_scc_error && host->check_scc_error(host))
+-		mrq->cmd->error = -EILSEQ;
++		mmc_retune_needed(host->mmc);
+ 
+ 	/* If SET_BLOCK_COUNT, continue with main command */
+ 	if (host->mrq && !mrq->cmd->error) {
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils.c
+index eb4b99d56081..33d3c3789209 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils.c
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils.c
+@@ -335,13 +335,13 @@ static int hw_atl_utils_fw_upload_dwords(struct aq_hw_s *self, u32 a, u32 *p,
+ {
+ 	u32 val;
+ 	int err = 0;
+-	bool is_locked;
+ 
+-	is_locked = hw_atl_sem_ram_get(self);
+-	if (!is_locked) {
+-		err = -ETIME;
++	err = readx_poll_timeout_atomic(hw_atl_sem_ram_get, self,
++					val, val == 1U,
++					10U, 100000U);
++	if (err < 0)
+ 		goto err_exit;
+-	}
++
+ 	if (IS_CHIP_FEATURE(REVISION_B1)) {
+ 		u32 offset = 0;
+ 
+@@ -353,8 +353,8 @@ static int hw_atl_utils_fw_upload_dwords(struct aq_hw_s *self, u32 a, u32 *p,
+ 			/* 1000 times by 10us = 10ms */
+ 			err = readx_poll_timeout_atomic(hw_atl_scrpad12_get,
+ 							self, val,
+-							(val & 0xF0000000) ==
+-							 0x80000000,
++							(val & 0xF0000000) !=
++							0x80000000,
+ 							10U, 10000U);
+ 		}
+ 	} else {
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils_fw2x.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils_fw2x.c
+index fe6c5658e016..9d0292aa071d 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils_fw2x.c
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils_fw2x.c
+@@ -349,7 +349,7 @@ static int aq_fw2x_set_sleep_proxy(struct aq_hw_s *self, u8 *mac)
+ 	err = readx_poll_timeout_atomic(aq_fw2x_state2_get,
+ 					self, val,
+ 					val & HW_ATL_FW2X_CTRL_SLEEP_PROXY,
+-					1U, 10000U);
++					1U, 100000U);
+ 
+ err_exit:
+ 	return err;
+@@ -369,6 +369,8 @@ static int aq_fw2x_set_wol_params(struct aq_hw_s *self, u8 *mac)
+ 
+ 	msg = (struct fw2x_msg_wol *)rpc;
+ 
++	memset(msg, 0, sizeof(*msg));
++
+ 	msg->msg_id = HAL_ATLANTIC_UTILS_FW2X_MSG_WOL;
+ 	msg->magic_packet_enabled = true;
+ 	memcpy(msg->hw_addr, mac, ETH_ALEN);
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index f4f076d7090e..906f080d9559 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -1304,8 +1304,8 @@ static void mvpp2_ethtool_get_strings(struct net_device *netdev, u32 sset,
+ 		int i;
+ 
+ 		for (i = 0; i < ARRAY_SIZE(mvpp2_ethtool_regs); i++)
+-			memcpy(data + i * ETH_GSTRING_LEN,
+-			       &mvpp2_ethtool_regs[i].string, ETH_GSTRING_LEN);
++			strscpy(data + i * ETH_GSTRING_LEN,
++			        mvpp2_ethtool_regs[i].string, ETH_GSTRING_LEN);
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+index d290f0787dfb..94c59939a8cf 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+@@ -2010,6 +2010,8 @@ static int mlx4_en_set_tunable(struct net_device *dev,
+ 	return ret;
+ }
+ 
++#define MLX4_EEPROM_PAGE_LEN 256
++
+ static int mlx4_en_get_module_info(struct net_device *dev,
+ 				   struct ethtool_modinfo *modinfo)
+ {
+@@ -2044,7 +2046,7 @@ static int mlx4_en_get_module_info(struct net_device *dev,
+ 		break;
+ 	case MLX4_MODULE_ID_SFP:
+ 		modinfo->type = ETH_MODULE_SFF_8472;
+-		modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
++		modinfo->eeprom_len = MLX4_EEPROM_PAGE_LEN;
+ 		break;
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/net/ethernet/mellanox/mlx4/port.c b/drivers/net/ethernet/mellanox/mlx4/port.c
+index 10fcc22f4590..ba6ac31a339d 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/port.c
++++ b/drivers/net/ethernet/mellanox/mlx4/port.c
+@@ -2077,11 +2077,6 @@ int mlx4_get_module_info(struct mlx4_dev *dev, u8 port,
+ 		size -= offset + size - I2C_PAGE_SIZE;
+ 
+ 	i2c_addr = I2C_ADDR_LOW;
+-	if (offset >= I2C_PAGE_SIZE) {
+-		/* Reset offset to high page */
+-		i2c_addr = I2C_ADDR_HIGH;
+-		offset -= I2C_PAGE_SIZE;
+-	}
+ 
+ 	cable_info = (struct mlx4_cable_info *)inmad->data;
+ 	cable_info->dev_mem_address = cpu_to_be16(offset);
+diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
+index dd12b73a8853..1285f282d3ac 100644
+--- a/drivers/net/ethernet/ti/cpsw.c
++++ b/drivers/net/ethernet/ti/cpsw.c
+@@ -3130,6 +3130,7 @@ static void cpsw_get_ringparam(struct net_device *ndev,
+ 	struct cpsw_common *cpsw = priv->cpsw;
+ 
+ 	/* not supported */
++	ering->tx_max_pending = descs_pool_size - CPSW_MAX_QUEUES;
+ 	ering->tx_max_pending = 0;
+ 	ering->tx_pending = cpdma_get_num_tx_descs(cpsw->dma);
+ 	ering->rx_max_pending = descs_pool_size - CPSW_MAX_QUEUES;
+diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
+index d4635c2178d1..71812be0ac64 100644
+--- a/drivers/net/phy/sfp.c
++++ b/drivers/net/phy/sfp.c
+@@ -281,6 +281,7 @@ static int sfp_i2c_read(struct sfp *sfp, bool a2, u8 dev_addr, void *buf,
+ {
+ 	struct i2c_msg msgs[2];
+ 	u8 bus_addr = a2 ? 0x51 : 0x50;
++	size_t this_len;
+ 	int ret;
+ 
+ 	msgs[0].addr = bus_addr;
+@@ -292,11 +293,26 @@ static int sfp_i2c_read(struct sfp *sfp, bool a2, u8 dev_addr, void *buf,
+ 	msgs[1].len = len;
+ 	msgs[1].buf = buf;
+ 
+-	ret = i2c_transfer(sfp->i2c, msgs, ARRAY_SIZE(msgs));
+-	if (ret < 0)
+-		return ret;
++	while (len) {
++		this_len = len;
++		if (this_len > 16)
++			this_len = 16;
+ 
+-	return ret == ARRAY_SIZE(msgs) ? len : 0;
++		msgs[1].len = this_len;
++
++		ret = i2c_transfer(sfp->i2c, msgs, ARRAY_SIZE(msgs));
++		if (ret < 0)
++			return ret;
++
++		if (ret != ARRAY_SIZE(msgs))
++			break;
++
++		msgs[1].buf += this_len;
++		dev_addr += this_len;
++		len -= this_len;
++	}
++
++	return msgs[1].buf - (u8 *)buf;
+ }
+ 
+ static int sfp_i2c_write(struct sfp *sfp, bool a2, u8 dev_addr, void *buf,
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index e1824c2e0a1c..8aff3a0ab609 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -641,34 +641,16 @@ static int nvme_rdma_alloc_io_queues(struct nvme_rdma_ctrl *ctrl)
+ {
+ 	struct nvmf_ctrl_options *opts = ctrl->ctrl.opts;
+ 	struct ib_device *ibdev = ctrl->device->dev;
+-	unsigned int nr_io_queues;
++	unsigned int nr_io_queues, nr_default_queues;
++	unsigned int nr_read_queues, nr_poll_queues;
+ 	int i, ret;
+ 
+-	nr_io_queues = min(opts->nr_io_queues, num_online_cpus());
+-
+-	/*
+-	 * we map queues according to the device irq vectors for
+-	 * optimal locality so we don't need more queues than
+-	 * completion vectors.
+-	 */
+-	nr_io_queues = min_t(unsigned int, nr_io_queues,
+-				ibdev->num_comp_vectors);
+-
+-	if (opts->nr_write_queues) {
+-		ctrl->io_queues[HCTX_TYPE_DEFAULT] =
+-				min(opts->nr_write_queues, nr_io_queues);
+-		nr_io_queues += ctrl->io_queues[HCTX_TYPE_DEFAULT];
+-	} else {
+-		ctrl->io_queues[HCTX_TYPE_DEFAULT] = nr_io_queues;
+-	}
+-
+-	ctrl->io_queues[HCTX_TYPE_READ] = nr_io_queues;
+-
+-	if (opts->nr_poll_queues) {
+-		ctrl->io_queues[HCTX_TYPE_POLL] =
+-			min(opts->nr_poll_queues, num_online_cpus());
+-		nr_io_queues += ctrl->io_queues[HCTX_TYPE_POLL];
+-	}
++	nr_read_queues = min_t(unsigned int, ibdev->num_comp_vectors,
++				min(opts->nr_io_queues, num_online_cpus()));
++	nr_default_queues =  min_t(unsigned int, ibdev->num_comp_vectors,
++				min(opts->nr_write_queues, num_online_cpus()));
++	nr_poll_queues = min(opts->nr_poll_queues, num_online_cpus());
++	nr_io_queues = nr_read_queues + nr_default_queues + nr_poll_queues;
+ 
+ 	ret = nvme_set_queue_count(&ctrl->ctrl, &nr_io_queues);
+ 	if (ret)
+@@ -681,6 +663,34 @@ static int nvme_rdma_alloc_io_queues(struct nvme_rdma_ctrl *ctrl)
+ 	dev_info(ctrl->ctrl.device,
+ 		"creating %d I/O queues.\n", nr_io_queues);
+ 
++	if (opts->nr_write_queues && nr_read_queues < nr_io_queues) {
++		/*
++		 * separate read/write queues
++		 * hand out dedicated default queues only after we have
++		 * sufficient read queues.
++		 */
++		ctrl->io_queues[HCTX_TYPE_READ] = nr_read_queues;
++		nr_io_queues -= ctrl->io_queues[HCTX_TYPE_READ];
++		ctrl->io_queues[HCTX_TYPE_DEFAULT] =
++			min(nr_default_queues, nr_io_queues);
++		nr_io_queues -= ctrl->io_queues[HCTX_TYPE_DEFAULT];
++	} else {
++		/*
++		 * shared read/write queues
++		 * either no write queues were requested, or we don't have
++		 * sufficient queue count to have dedicated default queues.
++		 */
++		ctrl->io_queues[HCTX_TYPE_DEFAULT] =
++			min(nr_read_queues, nr_io_queues);
++		nr_io_queues -= ctrl->io_queues[HCTX_TYPE_DEFAULT];
++	}
++
++	if (opts->nr_poll_queues && nr_io_queues) {
++		/* map dedicated poll queues only if we have queues left */
++		ctrl->io_queues[HCTX_TYPE_POLL] =
++			min(nr_poll_queues, nr_io_queues);
++	}
++
+ 	for (i = 1; i < ctrl->ctrl.queue_count; i++) {
+ 		ret = nvme_rdma_alloc_queue(ctrl, i,
+ 				ctrl->ctrl.sqsize + 1);
+@@ -1787,17 +1797,24 @@ static void nvme_rdma_complete_rq(struct request *rq)
+ static int nvme_rdma_map_queues(struct blk_mq_tag_set *set)
+ {
+ 	struct nvme_rdma_ctrl *ctrl = set->driver_data;
++	struct nvmf_ctrl_options *opts = ctrl->ctrl.opts;
+ 
+-	set->map[HCTX_TYPE_DEFAULT].queue_offset = 0;
+-	set->map[HCTX_TYPE_DEFAULT].nr_queues =
+-			ctrl->io_queues[HCTX_TYPE_DEFAULT];
+-	set->map[HCTX_TYPE_READ].nr_queues = ctrl->io_queues[HCTX_TYPE_READ];
+-	if (ctrl->ctrl.opts->nr_write_queues) {
++	if (opts->nr_write_queues && ctrl->io_queues[HCTX_TYPE_READ]) {
+ 		/* separate read/write queues */
++		set->map[HCTX_TYPE_DEFAULT].nr_queues =
++			ctrl->io_queues[HCTX_TYPE_DEFAULT];
++		set->map[HCTX_TYPE_DEFAULT].queue_offset = 0;
++		set->map[HCTX_TYPE_READ].nr_queues =
++			ctrl->io_queues[HCTX_TYPE_READ];
+ 		set->map[HCTX_TYPE_READ].queue_offset =
+-				ctrl->io_queues[HCTX_TYPE_DEFAULT];
++			ctrl->io_queues[HCTX_TYPE_DEFAULT];
+ 	} else {
+-		/* mixed read/write queues */
++		/* shared read/write queues */
++		set->map[HCTX_TYPE_DEFAULT].nr_queues =
++			ctrl->io_queues[HCTX_TYPE_DEFAULT];
++		set->map[HCTX_TYPE_DEFAULT].queue_offset = 0;
++		set->map[HCTX_TYPE_READ].nr_queues =
++			ctrl->io_queues[HCTX_TYPE_DEFAULT];
+ 		set->map[HCTX_TYPE_READ].queue_offset = 0;
+ 	}
+ 	blk_mq_rdma_map_queues(&set->map[HCTX_TYPE_DEFAULT],
+@@ -1805,16 +1822,22 @@ static int nvme_rdma_map_queues(struct blk_mq_tag_set *set)
+ 	blk_mq_rdma_map_queues(&set->map[HCTX_TYPE_READ],
+ 			ctrl->device->dev, 0);
+ 
+-	if (ctrl->ctrl.opts->nr_poll_queues) {
++	if (opts->nr_poll_queues && ctrl->io_queues[HCTX_TYPE_POLL]) {
++		/* map dedicated poll queues only if we have queues left */
+ 		set->map[HCTX_TYPE_POLL].nr_queues =
+ 				ctrl->io_queues[HCTX_TYPE_POLL];
+ 		set->map[HCTX_TYPE_POLL].queue_offset =
+-				ctrl->io_queues[HCTX_TYPE_DEFAULT];
+-		if (ctrl->ctrl.opts->nr_write_queues)
+-			set->map[HCTX_TYPE_POLL].queue_offset +=
+-				ctrl->io_queues[HCTX_TYPE_READ];
++			ctrl->io_queues[HCTX_TYPE_DEFAULT] +
++			ctrl->io_queues[HCTX_TYPE_READ];
+ 		blk_mq_map_queues(&set->map[HCTX_TYPE_POLL]);
+ 	}
++
++	dev_info(ctrl->ctrl.device,
++		"mapped %d/%d/%d default/read/poll queues.\n",
++		ctrl->io_queues[HCTX_TYPE_DEFAULT],
++		ctrl->io_queues[HCTX_TYPE_READ],
++		ctrl->io_queues[HCTX_TYPE_POLL]);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/parisc/ccio-dma.c b/drivers/parisc/ccio-dma.c
+index 8937ba70d817..9b434644524c 100644
+--- a/drivers/parisc/ccio-dma.c
++++ b/drivers/parisc/ccio-dma.c
+@@ -565,8 +565,6 @@ ccio_io_pdir_entry(u64 *pdir_ptr, space_t sid, unsigned long vba,
+ 	/* We currently only support kernel addresses */
+ 	BUG_ON(sid != KERNEL_SPACE);
+ 
+-	mtsp(sid,1);
+-
+ 	/*
+ 	** WORD 1 - low order word
+ 	** "hints" parm includes the VALID bit!
+@@ -597,7 +595,7 @@ ccio_io_pdir_entry(u64 *pdir_ptr, space_t sid, unsigned long vba,
+ 	** Grab virtual index [0:11]
+ 	** Deposit virt_idx bits into I/O PDIR word
+ 	*/
+-	asm volatile ("lci %%r0(%%sr1, %1), %0" : "=r" (ci) : "r" (vba));
++	asm volatile ("lci %%r0(%1), %0" : "=r" (ci) : "r" (vba));
+ 	asm volatile ("extru %1,19,12,%0" : "+r" (ci) : "r" (ci));
+ 	asm volatile ("depw  %1,15,12,%0" : "+r" (pa) : "r" (ci));
+ 
+diff --git a/drivers/parisc/sba_iommu.c b/drivers/parisc/sba_iommu.c
+index afaf8e6aefe6..78df92600203 100644
+--- a/drivers/parisc/sba_iommu.c
++++ b/drivers/parisc/sba_iommu.c
+@@ -575,8 +575,7 @@ sba_io_pdir_entry(u64 *pdir_ptr, space_t sid, unsigned long vba,
+ 	pa = virt_to_phys(vba);
+ 	pa &= IOVP_MASK;
+ 
+-	mtsp(sid,1);
+-	asm("lci 0(%%sr1, %1), %0" : "=r" (ci) : "r" (vba));
++	asm("lci 0(%1), %0" : "=r" (ci) : "r" (vba));
+ 	pa |= (ci >> PAGE_SHIFT) & 0xff;  /* move CI (8 bits) into lowest byte */
+ 
+ 	pa |= SBA_PDIR_VALID_BIT;	/* set "valid" bit */
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 351843f847c0..46e3a2337f06 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -130,9 +130,6 @@ static void uart_start(struct tty_struct *tty)
+ 	struct uart_port *port;
+ 	unsigned long flags;
+ 
+-	if (!state)
+-		return;
+-
+ 	port = uart_port_lock(state, flags);
+ 	__uart_start(tty);
+ 	uart_port_unlock(port, flags);
+@@ -730,9 +727,6 @@ static void uart_unthrottle(struct tty_struct *tty)
+ 	upstat_t mask = UPSTAT_SYNC_FIFO;
+ 	struct uart_port *port;
+ 
+-	if (!state)
+-		return;
+-
+ 	port = uart_port_ref(state);
+ 	if (!port)
+ 		return;
+@@ -1747,6 +1741,16 @@ static void uart_dtr_rts(struct tty_port *port, int raise)
+ 	uart_port_deref(uport);
+ }
+ 
++static int uart_install(struct tty_driver *driver, struct tty_struct *tty)
++{
++	struct uart_driver *drv = driver->driver_state;
++	struct uart_state *state = drv->state + tty->index;
++
++	tty->driver_data = state;
++
++	return tty_standard_install(driver, tty);
++}
++
+ /*
+  * Calls to uart_open are serialised by the tty_lock in
+  *   drivers/tty/tty_io.c:tty_open()
+@@ -1759,11 +1763,8 @@ static void uart_dtr_rts(struct tty_port *port, int raise)
+  */
+ static int uart_open(struct tty_struct *tty, struct file *filp)
+ {
+-	struct uart_driver *drv = tty->driver->driver_state;
+-	int retval, line = tty->index;
+-	struct uart_state *state = drv->state + line;
+-
+-	tty->driver_data = state;
++	struct uart_state *state = tty->driver_data;
++	int retval;
+ 
+ 	retval = tty_port_open(&state->port, tty, filp);
+ 	if (retval > 0)
+@@ -2448,6 +2449,7 @@ static void uart_poll_put_char(struct tty_driver *driver, int line, char ch)
+ #endif
+ 
+ static const struct tty_operations uart_ops = {
++	.install	= uart_install,
+ 	.open		= uart_open,
+ 	.close		= uart_close,
+ 	.write		= uart_write,
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 92ee15dda4c7..c547f4589ff4 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -3050,7 +3050,7 @@ static long fuse_file_fallocate(struct file *file, int mode, loff_t offset,
+ 	    offset + length > i_size_read(inode)) {
+ 		err = inode_newsize_ok(inode, offset + length);
+ 		if (err)
+-			return err;
++			goto out;
+ 	}
+ 
+ 	if (!(mode & FALLOC_FL_KEEP_SIZE))
+@@ -3098,6 +3098,7 @@ static ssize_t fuse_copy_file_range(struct file *file_in, loff_t pos_in,
+ {
+ 	struct fuse_file *ff_in = file_in->private_data;
+ 	struct fuse_file *ff_out = file_out->private_data;
++	struct inode *inode_in = file_inode(file_in);
+ 	struct inode *inode_out = file_inode(file_out);
+ 	struct fuse_inode *fi_out = get_fuse_inode(inode_out);
+ 	struct fuse_conn *fc = ff_in->fc;
+@@ -3121,6 +3122,17 @@ static ssize_t fuse_copy_file_range(struct file *file_in, loff_t pos_in,
+ 	if (fc->no_copy_file_range)
+ 		return -EOPNOTSUPP;
+ 
++	if (fc->writeback_cache) {
++		inode_lock(inode_in);
++		err = filemap_write_and_wait_range(inode_in->i_mapping,
++						   pos_in, pos_in + len);
++		if (!err)
++			fuse_sync_writes(inode_in);
++		inode_unlock(inode_in);
++		if (err)
++			return err;
++	}
++
+ 	inode_lock(inode_out);
+ 
+ 	if (fc->writeback_cache) {
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 741ff8c9c6ed..eeee100785a5 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -6867,7 +6867,6 @@ struct nfs4_lock_waiter {
+ 	struct task_struct	*task;
+ 	struct inode		*inode;
+ 	struct nfs_lowner	*owner;
+-	bool			notified;
+ };
+ 
+ static int
+@@ -6889,13 +6888,13 @@ nfs4_wake_lock_waiter(wait_queue_entry_t *wait, unsigned int mode, int flags, vo
+ 		/* Make sure it's for the right inode */
+ 		if (nfs_compare_fh(NFS_FH(waiter->inode), &cbnl->cbnl_fh))
+ 			return 0;
+-
+-		waiter->notified = true;
+ 	}
+ 
+ 	/* override "private" so we can use default_wake_function */
+ 	wait->private = waiter->task;
+-	ret = autoremove_wake_function(wait, mode, flags, key);
++	ret = woken_wake_function(wait, mode, flags, key);
++	if (ret)
++		list_del_init(&wait->entry);
+ 	wait->private = waiter;
+ 	return ret;
+ }
+@@ -6904,7 +6903,6 @@ static int
+ nfs4_retry_setlk(struct nfs4_state *state, int cmd, struct file_lock *request)
+ {
+ 	int status = -ERESTARTSYS;
+-	unsigned long flags;
+ 	struct nfs4_lock_state *lsp = request->fl_u.nfs4_fl.owner;
+ 	struct nfs_server *server = NFS_SERVER(state->inode);
+ 	struct nfs_client *clp = server->nfs_client;
+@@ -6914,8 +6912,7 @@ nfs4_retry_setlk(struct nfs4_state *state, int cmd, struct file_lock *request)
+ 				    .s_dev = server->s_dev };
+ 	struct nfs4_lock_waiter waiter = { .task  = current,
+ 					   .inode = state->inode,
+-					   .owner = &owner,
+-					   .notified = false };
++					   .owner = &owner};
+ 	wait_queue_entry_t wait;
+ 
+ 	/* Don't bother with waitqueue if we don't expect a callback */
+@@ -6925,27 +6922,22 @@ nfs4_retry_setlk(struct nfs4_state *state, int cmd, struct file_lock *request)
+ 	init_wait(&wait);
+ 	wait.private = &waiter;
+ 	wait.func = nfs4_wake_lock_waiter;
+-	add_wait_queue(q, &wait);
+ 
+ 	while(!signalled()) {
+-		waiter.notified = false;
++		add_wait_queue(q, &wait);
+ 		status = nfs4_proc_setlk(state, cmd, request);
+-		if ((status != -EAGAIN) || IS_SETLK(cmd))
++		if ((status != -EAGAIN) || IS_SETLK(cmd)) {
++			finish_wait(q, &wait);
+ 			break;
+-
+-		status = -ERESTARTSYS;
+-		spin_lock_irqsave(&q->lock, flags);
+-		if (waiter.notified) {
+-			spin_unlock_irqrestore(&q->lock, flags);
+-			continue;
+ 		}
+-		set_current_state(TASK_INTERRUPTIBLE);
+-		spin_unlock_irqrestore(&q->lock, flags);
+ 
+-		freezable_schedule_timeout(NFS4_LOCK_MAXTIMEOUT);
++		status = -ERESTARTSYS;
++		freezer_do_not_count();
++		wait_woken(&wait, TASK_INTERRUPTIBLE, NFS4_LOCK_MAXTIMEOUT);
++		freezer_count();
++		finish_wait(q, &wait);
+ 	}
+ 
+-	finish_wait(q, &wait);
+ 	return status;
+ }
+ #else /* !CONFIG_NFS_V4_1 */
+diff --git a/fs/pstore/platform.c b/fs/pstore/platform.c
+index 75887a269b64..dca07f239bd1 100644
+--- a/fs/pstore/platform.c
++++ b/fs/pstore/platform.c
+@@ -347,8 +347,10 @@ static void allocate_buf_for_compression(void)
+ 
+ static void free_buf_for_compression(void)
+ {
+-	if (IS_ENABLED(CONFIG_PSTORE_COMPRESS) && tfm)
++	if (IS_ENABLED(CONFIG_PSTORE_COMPRESS) && tfm) {
+ 		crypto_free_comp(tfm);
++		tfm = NULL;
++	}
+ 	kfree(big_oops_buf);
+ 	big_oops_buf = NULL;
+ 	big_oops_buf_sz = 0;
+@@ -606,7 +608,8 @@ int pstore_register(struct pstore_info *psi)
+ 		return -EINVAL;
+ 	}
+ 
+-	allocate_buf_for_compression();
++	if (psi->flags & PSTORE_FLAGS_DMESG)
++		allocate_buf_for_compression();
+ 
+ 	if (pstore_is_mounted())
+ 		pstore_get_records(0);
+diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c
+index c5c685589e36..4310d547c3b2 100644
+--- a/fs/pstore/ram.c
++++ b/fs/pstore/ram.c
+@@ -800,26 +800,36 @@ static int ramoops_probe(struct platform_device *pdev)
+ 
+ 	cxt->pstore.data = cxt;
+ 	/*
+-	 * Since bufsize is only used for dmesg crash dumps, it
+-	 * must match the size of the dprz record (after PRZ header
+-	 * and ECC bytes have been accounted for).
++	 * Prepare frontend flags based on which areas are initialized.
++	 * For ramoops_init_przs() cases, the "max count" variable tells
++	 * if there are regions present. For ramoops_init_prz() cases,
++	 * the single region size is how to check.
+ 	 */
+-	cxt->pstore.bufsize = cxt->dprzs[0]->buffer_size;
+-	cxt->pstore.buf = kzalloc(cxt->pstore.bufsize, GFP_KERNEL);
+-	if (!cxt->pstore.buf) {
+-		pr_err("cannot allocate pstore crash dump buffer\n");
+-		err = -ENOMEM;
+-		goto fail_clear;
+-	}
+-
+-	cxt->pstore.flags = PSTORE_FLAGS_DMESG;
++	cxt->pstore.flags = 0;
++	if (cxt->max_dump_cnt)
++		cxt->pstore.flags |= PSTORE_FLAGS_DMESG;
+ 	if (cxt->console_size)
+ 		cxt->pstore.flags |= PSTORE_FLAGS_CONSOLE;
+-	if (cxt->ftrace_size)
++	if (cxt->max_ftrace_cnt)
+ 		cxt->pstore.flags |= PSTORE_FLAGS_FTRACE;
+ 	if (cxt->pmsg_size)
+ 		cxt->pstore.flags |= PSTORE_FLAGS_PMSG;
+ 
++	/*
++	 * Since bufsize is only used for dmesg crash dumps, it
++	 * must match the size of the dprz record (after PRZ header
++	 * and ECC bytes have been accounted for).
++	 */
++	if (cxt->pstore.flags & PSTORE_FLAGS_DMESG) {
++		cxt->pstore.bufsize = cxt->dprzs[0]->buffer_size;
++		cxt->pstore.buf = kzalloc(cxt->pstore.bufsize, GFP_KERNEL);
++		if (!cxt->pstore.buf) {
++			pr_err("cannot allocate pstore crash dump buffer\n");
++			err = -ENOMEM;
++			goto fail_clear;
++		}
++	}
++
+ 	err = pstore_register(&cxt->pstore);
+ 	if (err) {
+ 		pr_err("registering with pstore failed\n");
+diff --git a/include/drm/drm_modeset_helper_vtables.h b/include/drm/drm_modeset_helper_vtables.h
+index ce4de6b1e444..cbe29579aac9 100644
+--- a/include/drm/drm_modeset_helper_vtables.h
++++ b/include/drm/drm_modeset_helper_vtables.h
+@@ -1178,6 +1178,14 @@ struct drm_plane_helper_funcs {
+ 	 * current one with the new plane configurations in the new
+ 	 * plane_state.
+ 	 *
++	 * Drivers should also swap the framebuffers between current plane
++	 * state (&drm_plane.state) and new_state.
++	 * This is required since cleanup for async commits is performed on
++	 * the new state, rather than old state like for traditional commits.
++	 * Since we want to give up the reference on the current (old) fb
++	 * instead of our brand new one, swap them in the driver during the
++	 * async commit.
++	 *
+ 	 * FIXME:
+ 	 *  - It only works for single plane updates
+ 	 *  - Async Pageflips are not supported yet
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index 57ae83c4d5f4..006f69f9277b 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -183,10 +183,14 @@ enum cpuhp_smt_control {
+ extern enum cpuhp_smt_control cpu_smt_control;
+ extern void cpu_smt_disable(bool force);
+ extern void cpu_smt_check_topology(void);
++extern int cpuhp_smt_enable(void);
++extern int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval);
+ #else
+ # define cpu_smt_control		(CPU_SMT_ENABLED)
+ static inline void cpu_smt_disable(bool force) { }
+ static inline void cpu_smt_check_topology(void) { }
++static inline int cpuhp_smt_enable(void) { return 0; }
++static inline int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval) { return 0; }
+ #endif
+ 
+ /*
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index 922bb6848813..b25d20822e75 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -56,14 +56,12 @@ void __rcu_read_unlock(void);
+ 
+ static inline void __rcu_read_lock(void)
+ {
+-	if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
+-		preempt_disable();
++	preempt_disable();
+ }
+ 
+ static inline void __rcu_read_unlock(void)
+ {
+-	if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
+-		preempt_enable();
++	preempt_enable();
+ }
+ 
+ static inline int rcu_preempt_depth(void)
+diff --git a/include/net/arp.h b/include/net/arp.h
+index 977aabfcdc03..c8f580a0e6b1 100644
+--- a/include/net/arp.h
++++ b/include/net/arp.h
+@@ -18,6 +18,7 @@ static inline u32 arp_hashfn(const void *pkey, const struct net_device *dev, u32
+ 	return val * hash_rnd[0];
+ }
+ 
++#ifdef CONFIG_INET
+ static inline struct neighbour *__ipv4_neigh_lookup_noref(struct net_device *dev, u32 key)
+ {
+ 	if (dev->flags & (IFF_LOOPBACK | IFF_POINTOPOINT))
+@@ -25,6 +26,13 @@ static inline struct neighbour *__ipv4_neigh_lookup_noref(struct net_device *dev
+ 
+ 	return ___neigh_lookup_noref(&arp_tbl, neigh_key_eq32, arp_hashfn, &key, dev);
+ }
++#else
++static inline
++struct neighbour *__ipv4_neigh_lookup_noref(struct net_device *dev, u32 key)
++{
++	return NULL;
++}
++#endif
+ 
+ static inline struct neighbour *__ipv4_neigh_lookup(struct net_device *dev, u32 key)
+ {
+diff --git a/include/net/ip6_fib.h b/include/net/ip6_fib.h
+index b5e3add90e99..4c59fff718c1 100644
+--- a/include/net/ip6_fib.h
++++ b/include/net/ip6_fib.h
+@@ -259,8 +259,7 @@ static inline u32 rt6_get_cookie(const struct rt6_info *rt)
+ 	rcu_read_lock();
+ 
+ 	from = rcu_dereference(rt->from);
+-	if (from && (rt->rt6i_flags & RTF_PCPU ||
+-	    unlikely(!list_empty(&rt->rt6i_uncached))))
++	if (from)
+ 		fib6_get_cookie_safe(from, &cookie);
+ 
+ 	rcu_read_unlock();
+diff --git a/include/net/tls.h b/include/net/tls.h
+index 5934246b2c6f..053082d98906 100644
+--- a/include/net/tls.h
++++ b/include/net/tls.h
+@@ -199,6 +199,10 @@ struct tls_offload_context_tx {
+ 	(ALIGN(sizeof(struct tls_offload_context_tx), sizeof(void *)) +        \
+ 	 TLS_DRIVER_STATE_SIZE)
+ 
++enum tls_context_flags {
++	TLS_RX_SYNC_RUNNING = 0,
++};
++
+ struct cipher_context {
+ 	char *iv;
+ 	char *rec_seq;
+diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
+index 397810fa2d33..7ebef5c5473d 100644
+--- a/include/uapi/drm/i915_drm.h
++++ b/include/uapi/drm/i915_drm.h
+@@ -972,7 +972,7 @@ struct drm_i915_gem_execbuffer2 {
+ 	 * struct drm_i915_gem_exec_fence *fences.
+ 	 */
+ 	__u64 cliprects_ptr;
+-#define I915_EXEC_RING_MASK              (7<<0)
++#define I915_EXEC_RING_MASK              (0x3f)
+ #define I915_EXEC_DEFAULT                (0<<0)
+ #define I915_EXEC_RENDER                 (1<<0)
+ #define I915_EXEC_BSD                    (2<<0)
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 43e741e88691..9cc8b6fdb2dc 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -2064,7 +2064,7 @@ static void cpuhp_online_cpu_device(unsigned int cpu)
+ 	kobject_uevent(&dev->kobj, KOBJ_ONLINE);
+ }
+ 
+-static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
++int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
+ {
+ 	int cpu, ret = 0;
+ 
+@@ -2096,7 +2096,7 @@ static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
+ 	return ret;
+ }
+ 
+-static int cpuhp_smt_enable(void)
++int cpuhp_smt_enable(void)
+ {
+ 	int cpu, ret = 0;
+ 
+diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
+index abef759de7c8..f5ce9f7ec132 100644
+--- a/kernel/power/hibernate.c
++++ b/kernel/power/hibernate.c
+@@ -258,6 +258,11 @@ void swsusp_show_speed(ktime_t start, ktime_t stop,
+ 		(kps % 1000) / 10);
+ }
+ 
++__weak int arch_resume_nosmt(void)
++{
++	return 0;
++}
++
+ /**
+  * create_image - Create a hibernation image.
+  * @platform_mode: Whether or not to use the platform driver.
+@@ -325,6 +330,10 @@ static int create_image(int platform_mode)
+  Enable_cpus:
+ 	enable_nonboot_cpus();
+ 
++	/* Allow architectures to do nosmt-specific post-resume dances */
++	if (!in_suspend)
++		error = arch_resume_nosmt();
++
+  Platform_finish:
+ 	platform_finish(platform_mode);
+ 
+diff --git a/lib/test_firmware.c b/lib/test_firmware.c
+index 7222093ee00b..b5487ed829d7 100644
+--- a/lib/test_firmware.c
++++ b/lib/test_firmware.c
+@@ -223,30 +223,30 @@ static ssize_t config_show(struct device *dev,
+ 
+ 	mutex_lock(&test_fw_mutex);
+ 
+-	len += snprintf(buf, PAGE_SIZE,
++	len += scnprintf(buf, PAGE_SIZE - len,
+ 			"Custom trigger configuration for: %s\n",
+ 			dev_name(dev));
+ 
+ 	if (test_fw_config->name)
+-		len += snprintf(buf+len, PAGE_SIZE,
++		len += scnprintf(buf+len, PAGE_SIZE - len,
+ 				"name:\t%s\n",
+ 				test_fw_config->name);
+ 	else
+-		len += snprintf(buf+len, PAGE_SIZE,
++		len += scnprintf(buf+len, PAGE_SIZE - len,
+ 				"name:\tEMTPY\n");
+ 
+-	len += snprintf(buf+len, PAGE_SIZE,
++	len += scnprintf(buf+len, PAGE_SIZE - len,
+ 			"num_requests:\t%u\n", test_fw_config->num_requests);
+ 
+-	len += snprintf(buf+len, PAGE_SIZE,
++	len += scnprintf(buf+len, PAGE_SIZE - len,
+ 			"send_uevent:\t\t%s\n",
+ 			test_fw_config->send_uevent ?
+ 			"FW_ACTION_HOTPLUG" :
+ 			"FW_ACTION_NOHOTPLUG");
+-	len += snprintf(buf+len, PAGE_SIZE,
++	len += scnprintf(buf+len, PAGE_SIZE - len,
+ 			"sync_direct:\t\t%s\n",
+ 			test_fw_config->sync_direct ? "true" : "false");
+-	len += snprintf(buf+len, PAGE_SIZE,
++	len += scnprintf(buf+len, PAGE_SIZE - len,
+ 			"read_fw_idx:\t%u\n", test_fw_config->read_fw_idx);
+ 
+ 	mutex_unlock(&test_fw_mutex);
+diff --git a/net/core/ethtool.c b/net/core/ethtool.c
+index 014dcd63b451..7285a19bb135 100644
+--- a/net/core/ethtool.c
++++ b/net/core/ethtool.c
+@@ -1358,13 +1358,16 @@ static int ethtool_get_regs(struct net_device *dev, char __user *useraddr)
+ 	if (!regbuf)
+ 		return -ENOMEM;
+ 
++	if (regs.len < reglen)
++		reglen = regs.len;
++
+ 	ops->get_regs(dev, &regs, regbuf);
+ 
+ 	ret = -EFAULT;
+ 	if (copy_to_user(useraddr, &regs, sizeof(regs)))
+ 		goto out;
+ 	useraddr += offsetof(struct ethtool_regs, data);
+-	if (regbuf && copy_to_user(useraddr, regbuf, regs.len))
++	if (copy_to_user(useraddr, regbuf, reglen))
+ 		goto out;
+ 	ret = 0;
+ 
+diff --git a/net/core/fib_rules.c b/net/core/fib_rules.c
+index c49b752ea7eb..ffbb827723a2 100644
+--- a/net/core/fib_rules.c
++++ b/net/core/fib_rules.c
+@@ -756,9 +756,9 @@ int fib_nl_newrule(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 	if (err)
+ 		goto errout;
+ 
+-	if (rule_exists(ops, frh, tb, rule)) {
+-		if (nlh->nlmsg_flags & NLM_F_EXCL)
+-			err = -EEXIST;
++	if ((nlh->nlmsg_flags & NLM_F_EXCL) &&
++	    rule_exists(ops, frh, tb, rule)) {
++		err = -EEXIST;
+ 		goto errout_free;
+ 	}
+ 
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 30f6fd8f68e0..9b9da5142613 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -31,6 +31,7 @@
+ #include <linux/times.h>
+ #include <net/net_namespace.h>
+ #include <net/neighbour.h>
++#include <net/arp.h>
+ #include <net/dst.h>
+ #include <net/sock.h>
+ #include <net/netevent.h>
+@@ -663,6 +664,8 @@ out:
+ out_tbl_unlock:
+ 	write_unlock_bh(&tbl->lock);
+ out_neigh_release:
++	if (!exempt_from_gc)
++		atomic_dec(&tbl->gc_entries);
+ 	neigh_release(n);
+ 	goto out;
+ }
+@@ -2982,7 +2985,13 @@ int neigh_xmit(int index, struct net_device *dev,
+ 		if (!tbl)
+ 			goto out;
+ 		rcu_read_lock_bh();
+-		neigh = __neigh_lookup_noref(tbl, addr, dev);
++		if (index == NEIGH_ARP_TABLE) {
++			u32 key = *((u32 *)addr);
++
++			neigh = __ipv4_neigh_lookup_noref(dev, key);
++		} else {
++			neigh = __neigh_lookup_noref(tbl, addr, dev);
++		}
+ 		if (!neigh)
+ 			neigh = __neigh_create(tbl, addr, dev, false);
+ 		err = PTR_ERR(neigh);
+diff --git a/net/core/pktgen.c b/net/core/pktgen.c
+index f3f5a78cd062..f19c498f4ecb 100644
+--- a/net/core/pktgen.c
++++ b/net/core/pktgen.c
+@@ -3066,7 +3066,13 @@ static int pktgen_wait_thread_run(struct pktgen_thread *t)
+ {
+ 	while (thread_is_running(t)) {
+ 
++		/* note: 't' will still be around even after the unlock/lock
++		 * cycle because pktgen_thread threads are only cleared at
++		 * net exit
++		 */
++		mutex_unlock(&pktgen_thread_lock);
+ 		msleep_interruptible(100);
++		mutex_lock(&pktgen_thread_lock);
+ 
+ 		if (signal_pending(current))
+ 			goto signal;
+@@ -3081,6 +3087,10 @@ static int pktgen_wait_all_threads_run(struct pktgen_net *pn)
+ 	struct pktgen_thread *t;
+ 	int sig = 1;
+ 
++	/* prevent from racing with rmmod */
++	if (!try_module_get(THIS_MODULE))
++		return sig;
++
+ 	mutex_lock(&pktgen_thread_lock);
+ 
+ 	list_for_each_entry(t, &pn->pktgen_threads, th_list) {
+@@ -3094,6 +3104,7 @@ static int pktgen_wait_all_threads_run(struct pktgen_net *pn)
+ 			t->control |= (T_STOP);
+ 
+ 	mutex_unlock(&pktgen_thread_lock);
++	module_put(THIS_MODULE);
+ 	return sig;
+ }
+ 
+diff --git a/net/ipv4/ipmr_base.c b/net/ipv4/ipmr_base.c
+index 3e614cc824f7..3a1af50bd0a5 100644
+--- a/net/ipv4/ipmr_base.c
++++ b/net/ipv4/ipmr_base.c
+@@ -335,8 +335,6 @@ next_entry2:
+ 	}
+ 	spin_unlock_bh(lock);
+ 	err = 0;
+-	e = 0;
+-
+ out:
+ 	cb->args[1] = e;
+ 	return err;
+@@ -374,6 +372,7 @@ int mr_rtm_dumproute(struct sk_buff *skb, struct netlink_callback *cb,
+ 		err = mr_table_dump(mrt, skb, cb, fill, lock, filter);
+ 		if (err < 0)
+ 			break;
++		cb->args[1] = 0;
+ next_table:
+ 		t++;
+ 	}
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index df6afb092936..1cd512ac84ba 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1954,7 +1954,7 @@ static int ip_route_input_slow(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ 	u32		itag = 0;
+ 	struct rtable	*rth;
+ 	struct flowi4	fl4;
+-	bool do_cache;
++	bool do_cache = true;
+ 
+ 	/* IP on this device is disabled. */
+ 
+@@ -2031,6 +2031,9 @@ static int ip_route_input_slow(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ 	if (res->type == RTN_BROADCAST) {
+ 		if (IN_DEV_BFORWARD(in_dev))
+ 			goto make_route;
++		/* not do cache if bc_forwarding is enabled */
++		if (IPV4_DEVCONF_ALL(net, BC_FORWARDING))
++			do_cache = false;
+ 		goto brd_input;
+ 	}
+ 
+@@ -2068,16 +2071,13 @@ brd_input:
+ 	RT_CACHE_STAT_INC(in_brd);
+ 
+ local_input:
+-	do_cache = false;
+-	if (res->fi) {
+-		if (!itag) {
+-			rth = rcu_dereference(FIB_RES_NH(*res).nh_rth_input);
+-			if (rt_cache_valid(rth)) {
+-				skb_dst_set_noref(skb, &rth->dst);
+-				err = 0;
+-				goto out;
+-			}
+-			do_cache = true;
++	do_cache &= res->fi && !itag;
++	if (do_cache) {
++		rth = rcu_dereference(FIB_RES_NH(*res).nh_rth_input);
++		if (rt_cache_valid(rth)) {
++			skb_dst_set_noref(skb, &rth->dst);
++			err = 0;
++			goto out;
+ 		}
+ 	}
+ 
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 372fdc5381a9..3b179ce6170f 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -538,8 +538,7 @@ static inline bool __udp_is_mcast_sock(struct net *net, struct sock *sk,
+ 	    (inet->inet_dport != rmt_port && inet->inet_dport) ||
+ 	    (inet->inet_rcv_saddr && inet->inet_rcv_saddr != loc_addr) ||
+ 	    ipv6_only_sock(sk) ||
+-	    (sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif &&
+-	     sk->sk_bound_dev_if != sdif))
++	    !udp_sk_bound_dev_eq(net, sk->sk_bound_dev_if, dif, sdif))
+ 		return false;
+ 	if (!ip_mc_sf_allow(sk, loc_addr, rmt_addr, dif, sdif))
+ 		return false;
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index 5cb14eabfc65..5f8fe98b435b 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -783,6 +783,7 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	struct flowi6 fl6;
+ 	struct ipcm6_cookie ipc6;
+ 	int addr_len = msg->msg_namelen;
++	int hdrincl;
+ 	u16 proto;
+ 	int err;
+ 
+@@ -796,6 +797,13 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	if (msg->msg_flags & MSG_OOB)
+ 		return -EOPNOTSUPP;
+ 
++	/* hdrincl should be READ_ONCE(inet->hdrincl)
++	 * but READ_ONCE() doesn't work with bit fields.
++	 * Doing this indirectly yields the same result.
++	 */
++	hdrincl = inet->hdrincl;
++	hdrincl = READ_ONCE(hdrincl);
++
+ 	/*
+ 	 *	Get and verify the address.
+ 	 */
+@@ -887,11 +895,14 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	opt = ipv6_fixup_options(&opt_space, opt);
+ 
+ 	fl6.flowi6_proto = proto;
+-	rfv.msg = msg;
+-	rfv.hlen = 0;
+-	err = rawv6_probe_proto_opt(&rfv, &fl6);
+-	if (err)
+-		goto out;
++
++	if (!hdrincl) {
++		rfv.msg = msg;
++		rfv.hlen = 0;
++		err = rawv6_probe_proto_opt(&rfv, &fl6);
++		if (err)
++			goto out;
++	}
+ 
+ 	if (!ipv6_addr_any(daddr))
+ 		fl6.daddr = *daddr;
+@@ -908,7 +919,7 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 		fl6.flowi6_oif = np->ucast_oif;
+ 	security_sk_classify_flow(sk, flowi6_to_flowi(&fl6));
+ 
+-	if (inet->hdrincl)
++	if (hdrincl)
+ 		fl6.flowi6_flags |= FLOWI_FLAG_KNOWN_NH;
+ 
+ 	if (ipc6.tclass < 0)
+@@ -931,7 +942,7 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 		goto do_confirm;
+ 
+ back_from_confirm:
+-	if (inet->hdrincl)
++	if (hdrincl)
+ 		err = rawv6_send_hdrinc(sk, msg, len, &fl6, &dst,
+ 					msg->msg_flags, &ipc6.sockc);
+ 	else {
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 59da6f5b717d..71d5544243d2 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -3016,8 +3016,8 @@ static int packet_release(struct socket *sock)
+ 
+ 	synchronize_net();
+ 
++	kfree(po->rollover);
+ 	if (f) {
+-		kfree(po->rollover);
+ 		fanout_release_data(f);
+ 		kfree(f);
+ 	}
+diff --git a/net/rds/ib_rdma.c b/net/rds/ib_rdma.c
+index d664e9ade74d..0b347f46b2f4 100644
+--- a/net/rds/ib_rdma.c
++++ b/net/rds/ib_rdma.c
+@@ -428,12 +428,14 @@ int rds_ib_flush_mr_pool(struct rds_ib_mr_pool *pool,
+ 		wait_clean_list_grace();
+ 
+ 		list_to_llist_nodes(pool, &unmap_list, &clean_nodes, &clean_tail);
+-		if (ibmr_ret)
++		if (ibmr_ret) {
+ 			*ibmr_ret = llist_entry(clean_nodes, struct rds_ib_mr, llnode);
+-
++			clean_nodes = clean_nodes->next;
++		}
+ 		/* more than one entry in llist nodes */
+-		if (clean_nodes->next)
+-			llist_add_batch(clean_nodes->next, clean_tail, &pool->clean_list);
++		if (clean_nodes)
++			llist_add_batch(clean_nodes, clean_tail,
++					&pool->clean_list);
+ 
+ 	}
+ 
+diff --git a/net/sched/cls_matchall.c b/net/sched/cls_matchall.c
+index a13bc351a414..3d021f2aad1c 100644
+--- a/net/sched/cls_matchall.c
++++ b/net/sched/cls_matchall.c
+@@ -32,6 +32,9 @@ static int mall_classify(struct sk_buff *skb, const struct tcf_proto *tp,
+ {
+ 	struct cls_mall_head *head = rcu_dereference_bh(tp->root);
+ 
++	if (unlikely(!head))
++		return -1;
++
+ 	if (tc_skip_sw(head->flags))
+ 		return -1;
+ 
+diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c
+index d05c57664e36..ae65a1cfa596 100644
+--- a/net/sctp/sm_make_chunk.c
++++ b/net/sctp/sm_make_chunk.c
+@@ -2329,7 +2329,6 @@ int sctp_process_init(struct sctp_association *asoc, struct sctp_chunk *chunk,
+ 	union sctp_addr addr;
+ 	struct sctp_af *af;
+ 	int src_match = 0;
+-	char *cookie;
+ 
+ 	/* We must include the address that the INIT packet came from.
+ 	 * This is the only address that matters for an INIT packet.
+@@ -2433,14 +2432,6 @@ int sctp_process_init(struct sctp_association *asoc, struct sctp_chunk *chunk,
+ 	/* Peer Rwnd   : Current calculated value of the peer's rwnd.  */
+ 	asoc->peer.rwnd = asoc->peer.i.a_rwnd;
+ 
+-	/* Copy cookie in case we need to resend COOKIE-ECHO. */
+-	cookie = asoc->peer.cookie;
+-	if (cookie) {
+-		asoc->peer.cookie = kmemdup(cookie, asoc->peer.cookie_len, gfp);
+-		if (!asoc->peer.cookie)
+-			goto clean_up;
+-	}
+-
+ 	/* RFC 2960 7.2.1 The initial value of ssthresh MAY be arbitrarily
+ 	 * high (for example, implementations MAY use the size of the receiver
+ 	 * advertised window).
+@@ -2609,7 +2600,9 @@ do_addr_param:
+ 	case SCTP_PARAM_STATE_COOKIE:
+ 		asoc->peer.cookie_len =
+ 			ntohs(param.p->length) - sizeof(struct sctp_paramhdr);
+-		asoc->peer.cookie = param.cookie->body;
++		asoc->peer.cookie = kmemdup(param.cookie->body, asoc->peer.cookie_len, gfp);
++		if (!asoc->peer.cookie)
++			retval = 0;
+ 		break;
+ 
+ 	case SCTP_PARAM_HEARTBEAT_INFO:
+diff --git a/net/sctp/sm_sideeffect.c b/net/sctp/sm_sideeffect.c
+index 4aa03588f87b..27ddf2d8f001 100644
+--- a/net/sctp/sm_sideeffect.c
++++ b/net/sctp/sm_sideeffect.c
+@@ -898,6 +898,11 @@ static void sctp_cmd_new_state(struct sctp_cmd_seq *cmds,
+ 						asoc->rto_initial;
+ 	}
+ 
++	if (sctp_state(asoc, ESTABLISHED)) {
++		kfree(asoc->peer.cookie);
++		asoc->peer.cookie = NULL;
++	}
++
+ 	if (sctp_state(asoc, ESTABLISHED) ||
+ 	    sctp_state(asoc, CLOSED) ||
+ 	    sctp_state(asoc, SHUTDOWN_RECEIVED)) {
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index 8ff11dc98d7f..ea8d5aed1e2c 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -2260,13 +2260,13 @@ call_status(struct rpc_task *task)
+ 	case -ECONNREFUSED:
+ 	case -ECONNRESET:
+ 	case -ECONNABORTED:
++	case -ENOTCONN:
+ 		rpc_force_rebind(clnt);
+ 		/* fall through */
+ 	case -EADDRINUSE:
+ 		rpc_delay(task, 3*HZ);
+ 		/* fall through */
+ 	case -EPIPE:
+-	case -ENOTCONN:
+ 	case -EAGAIN:
+ 		break;
+ 	case -EIO:
+@@ -2387,17 +2387,21 @@ call_decode(struct rpc_task *task)
+ 		return;
+ 	case -EAGAIN:
+ 		task->tk_status = 0;
+-		/* Note: rpc_decode_header() may have freed the RPC slot */
+-		if (task->tk_rqstp == req) {
+-			xdr_free_bvec(&req->rq_rcv_buf);
+-			req->rq_reply_bytes_recvd = 0;
+-			req->rq_rcv_buf.len = 0;
+-			if (task->tk_client->cl_discrtry)
+-				xprt_conditional_disconnect(req->rq_xprt,
+-							    req->rq_connect_cookie);
+-		}
++		xdr_free_bvec(&req->rq_rcv_buf);
++		req->rq_reply_bytes_recvd = 0;
++		req->rq_rcv_buf.len = 0;
++		if (task->tk_client->cl_discrtry)
++			xprt_conditional_disconnect(req->rq_xprt,
++						    req->rq_connect_cookie);
+ 		task->tk_action = call_encode;
+ 		rpc_check_timeout(task);
++		break;
++	case -EKEYREJECTED:
++		task->tk_action = call_reserve;
++		rpc_check_timeout(task);
++		rpcauth_invalcred(task);
++		/* Ensure we obtain a new XID if we retry! */
++		xprt_release(task);
+ 	}
+ }
+ 
+@@ -2533,11 +2537,7 @@ out_msg_denied:
+ 			break;
+ 		task->tk_cred_retry--;
+ 		trace_rpc__stale_creds(task);
+-		rpcauth_invalcred(task);
+-		/* Ensure we obtain a new XID! */
+-		xprt_release(task);
+-		task->tk_action = call_reserve;
+-		return -EAGAIN;
++		return -EKEYREJECTED;
+ 	case rpc_autherr_badcred:
+ 	case rpc_autherr_badverf:
+ 		/* possibly garbled cred/verf? */
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index 0fd8f0997ff5..12454f0d5a63 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -570,10 +570,22 @@ void tls_device_write_space(struct sock *sk, struct tls_context *ctx)
+ 	}
+ }
+ 
++static void tls_device_resync_rx(struct tls_context *tls_ctx,
++				 struct sock *sk, u32 seq, u64 rcd_sn)
++{
++	struct net_device *netdev;
++
++	if (WARN_ON(test_and_set_bit(TLS_RX_SYNC_RUNNING, &tls_ctx->flags)))
++		return;
++	netdev = READ_ONCE(tls_ctx->netdev);
++	if (netdev)
++		netdev->tlsdev_ops->tls_dev_resync_rx(netdev, sk, seq, rcd_sn);
++	clear_bit_unlock(TLS_RX_SYNC_RUNNING, &tls_ctx->flags);
++}
++
+ void handle_device_resync(struct sock *sk, u32 seq, u64 rcd_sn)
+ {
+ 	struct tls_context *tls_ctx = tls_get_ctx(sk);
+-	struct net_device *netdev = tls_ctx->netdev;
+ 	struct tls_offload_context_rx *rx_ctx;
+ 	u32 is_req_pending;
+ 	s64 resync_req;
+@@ -588,10 +600,10 @@ void handle_device_resync(struct sock *sk, u32 seq, u64 rcd_sn)
+ 	is_req_pending = resync_req;
+ 
+ 	if (unlikely(is_req_pending) && req_seq == seq &&
+-	    atomic64_try_cmpxchg(&rx_ctx->resync_req, &resync_req, 0))
+-		netdev->tlsdev_ops->tls_dev_resync_rx(netdev, sk,
+-						      seq + TLS_HEADER_SIZE - 1,
+-						      rcd_sn);
++	    atomic64_try_cmpxchg(&rx_ctx->resync_req, &resync_req, 0)) {
++		seq += TLS_HEADER_SIZE - 1;
++		tls_device_resync_rx(tls_ctx, sk, seq, rcd_sn);
++	}
+ }
+ 
+ static int tls_device_reencrypt(struct sock *sk, struct sk_buff *skb)
+@@ -981,7 +993,10 @@ static int tls_device_down(struct net_device *netdev)
+ 		if (ctx->rx_conf == TLS_HW)
+ 			netdev->tlsdev_ops->tls_dev_del(netdev, ctx,
+ 							TLS_OFFLOAD_CTX_DIR_RX);
+-		ctx->netdev = NULL;
++		WRITE_ONCE(ctx->netdev, NULL);
++		smp_mb__before_atomic(); /* pairs with test_and_set_bit() */
++		while (test_bit(TLS_RX_SYNC_RUNNING, &ctx->flags))
++			usleep_range(10, 200);
+ 		dev_put(netdev);
+ 		list_del_init(&ctx->list);
+ 
+diff --git a/scripts/Kbuild.include b/scripts/Kbuild.include
+index 7484b9d8272f..addbbb7d6e68 100644
+--- a/scripts/Kbuild.include
++++ b/scripts/Kbuild.include
+@@ -73,8 +73,13 @@ endef
+ # Usage: CROSS_COMPILE := $(call cc-cross-prefix, m68k-linux-gnu- m68k-linux-)
+ # Return first <prefix> where a <prefix>gcc is found in PATH.
+ # If no gcc found in PATH with listed prefixes return nothing
++#
++# Note: '2>/dev/null' is here to force Make to invoke a shell. Otherwise, it
++# would try to directly execute the shell builtin 'command'. This workaround
++# should be kept for a long time since this issue was fixed only after the
++# GNU Make 4.2.1 release.
+ cc-cross-prefix = $(firstword $(foreach c, $(filter-out -%, $(1)), \
+-					$(if $(shell which $(c)gcc), $(c))))
++			$(if $(shell command -v $(c)gcc 2>/dev/null), $(c))))
+ 
+ # output directory for tests below
+ TMPOUT := $(if $(KBUILD_EXTMOD),$(firstword $(KBUILD_EXTMOD))/)



* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-06-11 18:01 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2019-06-11 18:01 UTC (permalink / raw
  To: gentoo-commits

commit:     b058673f875a63a18c094e7034f225eda945408d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jun 11 18:01:00 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jun 11 18:01:00 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b058673f

Bluetooth: Check key sizes only when Secure Simple Pairing is enabled.

See bug #686758

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |  4 +++
 ...zes-only-if-Secure-Simple-Pairing-enabled.patch | 37 ++++++++++++++++++++++
 2 files changed, 41 insertions(+)

diff --git a/0000_README b/0000_README
index cb361e8..c7d01be 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
+Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
+From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
+Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
+
 Patch:  2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch
 From:   https://bugzilla.redhat.com/show_bug.cgi?id=1260207#c5
 Desc:   Add UAS disable quirk. See bug #640082.

diff --git a/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch b/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
new file mode 100644
index 0000000..394ad48
--- /dev/null
+++ b/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
@@ -0,0 +1,37 @@
+Enforcing encryption is only mandatory when both sides are using
+Secure Simple Pairing, which means the key size check only makes sense
+in that case.
+
+On legacy Bluetooth 2.0 and earlier devices, such as mice, encryption
+was optional, so the key size check causes problems unless it is bound
+to the use of Secure Simple Pairing.
+
+Fixes: d5bb334a8e17 ("Bluetooth: Align minimum encryption key size for LE and BR/EDR connections")
+Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
+Cc: stable@vger.kernel.org
+---
+ net/bluetooth/hci_conn.c | 9 +++++++--
+ 1 file changed, 7 insertions(+), 2 deletions(-)
+
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 3cf0764d5793..7516cdde3373 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -1272,8 +1272,13 @@ int hci_conn_check_link_mode(struct hci_conn *conn)
+ 			return 0;
+ 	}
+ 
+-	if (hci_conn_ssp_enabled(conn) &&
+-	    !test_bit(HCI_CONN_ENCRYPT, &conn->flags))
++	/* If Secure Simple Pairing is not enabled, then legacy connection
++	 * setup is used and no encryption or key sizes can be enforced.
++	 */
++	if (!hci_conn_ssp_enabled(conn))
++		return 1;
++
++	if (!test_bit(HCI_CONN_ENCRYPT, &conn->flags))
+ 		return 0;
+ 
+ 	/* The minimum encryption key size needs to be enforced by the
+-- 
+2.20.1


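The restructured hci_conn_check_link_mode() above is easier to read as an
early return for legacy links followed by the usual encryption checks.
Below is a minimal standalone sketch of that control flow; struct conn
and MIN_ENC_KEY_SIZE are simplified stand-ins for the kernel's
struct hci_conn and its key size threshold, not the real definitions.

/*
 * Standalone sketch of the reordered hci_conn_check_link_mode() logic:
 * legacy (non-SSP) links return early, so encryption and key size are
 * only enforced when Secure Simple Pairing is in use.  Types and the
 * threshold below are simplified stand-ins, not the kernel's.
 */
#include <stdbool.h>
#include <stdio.h>

struct conn {
	bool ssp_enabled;	/* both sides support Secure Simple Pairing */
	bool encrypted;		/* the HCI_CONN_ENCRYPT state */
	int  enc_key_size;	/* negotiated encryption key size in bytes */
};

#define MIN_ENC_KEY_SIZE 7	/* illustrative minimum, in bytes */

static int check_link_mode(const struct conn *c)
{
	/* Legacy setup: no encryption or key size can be enforced. */
	if (!c->ssp_enabled)
		return 1;

	if (!c->encrypted)
		return 0;

	/* Only now does the minimum key size requirement apply. */
	return c->enc_key_size >= MIN_ENC_KEY_SIZE;
}

int main(void)
{
	struct conn legacy_mouse = { .ssp_enabled = false };
	struct conn weak_ssp_link = { .ssp_enabled = true,
				      .encrypted = true,
				      .enc_key_size = 1 };

	printf("legacy mouse allowed:  %d\n", check_link_mode(&legacy_mouse));
	printf("weak SSP link allowed: %d\n", check_link_mode(&weak_ssp_link));
	return 0;
}

Run against the two sample connections, the legacy mouse is admitted
without any encryption check while the SSP link carrying a 1-byte key is
rejected, which is exactly the split the patch description above calls
for.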
^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-06-15 15:10 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2019-06-15 15:10 UTC (permalink / raw
  To: gentoo-commits

commit:     c5ae9b74d246be7052cf5a2302e5fc0c935ac8a8
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jun 15 15:09:55 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jun 15 15:09:55 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c5ae9b74

Linux patch 5.1.10

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1009_linux-5.1.10.patch | 4366 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4370 insertions(+)

diff --git a/0000_README b/0000_README
index c7d01be..6a502ad 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch:  1008_linux-5.1.9.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.1.9
 
+Patch:  1009_linux-5.1.10.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.1.10
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1009_linux-5.1.10.patch b/1009_linux-5.1.10.patch
new file mode 100644
index 0000000..90072d1
--- /dev/null
+++ b/1009_linux-5.1.10.patch
@@ -0,0 +1,4366 @@
+diff --git a/Documentation/devicetree/bindings/input/touchscreen/goodix.txt b/Documentation/devicetree/bindings/input/touchscreen/goodix.txt
+index 8cf0b4d38a7e..109cc0cebaa2 100644
+--- a/Documentation/devicetree/bindings/input/touchscreen/goodix.txt
++++ b/Documentation/devicetree/bindings/input/touchscreen/goodix.txt
+@@ -3,6 +3,7 @@ Device tree bindings for Goodix GT9xx series touchscreen controller
+ Required properties:
+ 
+  - compatible		: Should be "goodix,gt1151"
++				 or "goodix,gt5663"
+ 				 or "goodix,gt5688"
+ 				 or "goodix,gt911"
+ 				 or "goodix,gt9110"
+diff --git a/Makefile b/Makefile
+index 2884a8d3b6d6..e7d1973d9c26 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arm/boot/dts/exynos5420-arndale-octa.dts b/arch/arm/boot/dts/exynos5420-arndale-octa.dts
+index 3447160e1fbf..a0e27e1c0feb 100644
+--- a/arch/arm/boot/dts/exynos5420-arndale-octa.dts
++++ b/arch/arm/boot/dts/exynos5420-arndale-octa.dts
+@@ -107,6 +107,7 @@
+ 				regulator-name = "PVDD_APIO_1V8";
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <1800000>;
++				regulator-always-on;
+ 			};
+ 
+ 			ldo3_reg: LDO3 {
+@@ -145,6 +146,7 @@
+ 				regulator-name = "PVDD_ABB_1V8";
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <1800000>;
++				regulator-always-on;
+ 			};
+ 
+ 			ldo9_reg: LDO9 {
+diff --git a/arch/arm/boot/dts/imx50.dtsi b/arch/arm/boot/dts/imx50.dtsi
+index ee1e3e8bf4ec..4a68e30cc668 100644
+--- a/arch/arm/boot/dts/imx50.dtsi
++++ b/arch/arm/boot/dts/imx50.dtsi
+@@ -411,7 +411,7 @@
+ 				reg = <0x63fb0000 0x4000>;
+ 				interrupts = <6>;
+ 				clocks = <&clks IMX5_CLK_SDMA_GATE>,
+-					 <&clks IMX5_CLK_SDMA_GATE>;
++					 <&clks IMX5_CLK_AHB>;
+ 				clock-names = "ipg", "ahb";
+ 				#dma-cells = <3>;
+ 				fsl,sdma-ram-script-name = "imx/sdma/sdma-imx50.bin";
+diff --git a/arch/arm/boot/dts/imx51.dtsi b/arch/arm/boot/dts/imx51.dtsi
+index a5ee25cedc10..0a4b9a5d9a9c 100644
+--- a/arch/arm/boot/dts/imx51.dtsi
++++ b/arch/arm/boot/dts/imx51.dtsi
+@@ -489,7 +489,7 @@
+ 				reg = <0x83fb0000 0x4000>;
+ 				interrupts = <6>;
+ 				clocks = <&clks IMX5_CLK_SDMA_GATE>,
+-					 <&clks IMX5_CLK_SDMA_GATE>;
++					 <&clks IMX5_CLK_AHB>;
+ 				clock-names = "ipg", "ahb";
+ 				#dma-cells = <3>;
+ 				fsl,sdma-ram-script-name = "imx/sdma/sdma-imx51.bin";
+diff --git a/arch/arm/boot/dts/imx53.dtsi b/arch/arm/boot/dts/imx53.dtsi
+index b3300300aabe..9b672ed2486d 100644
+--- a/arch/arm/boot/dts/imx53.dtsi
++++ b/arch/arm/boot/dts/imx53.dtsi
+@@ -702,7 +702,7 @@
+ 				reg = <0x63fb0000 0x4000>;
+ 				interrupts = <6>;
+ 				clocks = <&clks IMX5_CLK_SDMA_GATE>,
+-					 <&clks IMX5_CLK_SDMA_GATE>;
++					 <&clks IMX5_CLK_AHB>;
+ 				clock-names = "ipg", "ahb";
+ 				#dma-cells = <3>;
+ 				fsl,sdma-ram-script-name = "imx/sdma/sdma-imx53.bin";
+diff --git a/arch/arm/boot/dts/imx6qdl.dtsi b/arch/arm/boot/dts/imx6qdl.dtsi
+index fe17a3405edc..2df39c308c83 100644
+--- a/arch/arm/boot/dts/imx6qdl.dtsi
++++ b/arch/arm/boot/dts/imx6qdl.dtsi
+@@ -918,7 +918,7 @@
+ 				compatible = "fsl,imx6q-sdma", "fsl,imx35-sdma";
+ 				reg = <0x020ec000 0x4000>;
+ 				interrupts = <0 2 IRQ_TYPE_LEVEL_HIGH>;
+-				clocks = <&clks IMX6QDL_CLK_SDMA>,
++				clocks = <&clks IMX6QDL_CLK_IPG>,
+ 					 <&clks IMX6QDL_CLK_SDMA>;
+ 				clock-names = "ipg", "ahb";
+ 				#dma-cells = <3>;
+diff --git a/arch/arm/boot/dts/imx6sl.dtsi b/arch/arm/boot/dts/imx6sl.dtsi
+index 4b4813f176cd..1f2a4ed99ed3 100644
+--- a/arch/arm/boot/dts/imx6sl.dtsi
++++ b/arch/arm/boot/dts/imx6sl.dtsi
+@@ -741,7 +741,7 @@
+ 				reg = <0x020ec000 0x4000>;
+ 				interrupts = <0 2 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clks IMX6SL_CLK_SDMA>,
+-					 <&clks IMX6SL_CLK_SDMA>;
++					 <&clks IMX6SL_CLK_AHB>;
+ 				clock-names = "ipg", "ahb";
+ 				#dma-cells = <3>;
+ 				/* imx6sl reuses imx6q sdma firmware */
+diff --git a/arch/arm/boot/dts/imx6sll.dtsi b/arch/arm/boot/dts/imx6sll.dtsi
+index 62847c68330b..ed598d72038c 100644
+--- a/arch/arm/boot/dts/imx6sll.dtsi
++++ b/arch/arm/boot/dts/imx6sll.dtsi
+@@ -621,7 +621,7 @@
+ 				compatible = "fsl,imx6sll-sdma", "fsl,imx35-sdma";
+ 				reg = <0x020ec000 0x4000>;
+ 				interrupts = <GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>;
+-				clocks = <&clks IMX6SLL_CLK_SDMA>,
++				clocks = <&clks IMX6SLL_CLK_IPG>,
+ 					 <&clks IMX6SLL_CLK_SDMA>;
+ 				clock-names = "ipg", "ahb";
+ 				#dma-cells = <3>;
+diff --git a/arch/arm/boot/dts/imx6sx.dtsi b/arch/arm/boot/dts/imx6sx.dtsi
+index 5b16e65f7696..fc5a8fc74091 100644
+--- a/arch/arm/boot/dts/imx6sx.dtsi
++++ b/arch/arm/boot/dts/imx6sx.dtsi
+@@ -820,7 +820,7 @@
+ 				compatible = "fsl,imx6sx-sdma", "fsl,imx6q-sdma";
+ 				reg = <0x020ec000 0x4000>;
+ 				interrupts = <GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>;
+-				clocks = <&clks IMX6SX_CLK_SDMA>,
++				clocks = <&clks IMX6SX_CLK_IPG>,
+ 					 <&clks IMX6SX_CLK_SDMA>;
+ 				clock-names = "ipg", "ahb";
+ 				#dma-cells = <3>;
+diff --git a/arch/arm/boot/dts/imx6ul.dtsi b/arch/arm/boot/dts/imx6ul.dtsi
+index 62ed30c781ed..facd65602c2d 100644
+--- a/arch/arm/boot/dts/imx6ul.dtsi
++++ b/arch/arm/boot/dts/imx6ul.dtsi
+@@ -708,7 +708,7 @@
+ 					     "fsl,imx35-sdma";
+ 				reg = <0x020ec000 0x4000>;
+ 				interrupts = <GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>;
+-				clocks = <&clks IMX6UL_CLK_SDMA>,
++				clocks = <&clks IMX6UL_CLK_IPG>,
+ 					 <&clks IMX6UL_CLK_SDMA>;
+ 				clock-names = "ipg", "ahb";
+ 				#dma-cells = <3>;
+diff --git a/arch/arm/boot/dts/imx7s.dtsi b/arch/arm/boot/dts/imx7s.dtsi
+index e88f53a4c7f4..2f45ef527e6c 100644
+--- a/arch/arm/boot/dts/imx7s.dtsi
++++ b/arch/arm/boot/dts/imx7s.dtsi
+@@ -1067,8 +1067,8 @@
+ 				compatible = "fsl,imx7d-sdma", "fsl,imx35-sdma";
+ 				reg = <0x30bd0000 0x10000>;
+ 				interrupts = <GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>;
+-				clocks = <&clks IMX7D_SDMA_CORE_CLK>,
+-					 <&clks IMX7D_AHB_CHANNEL_ROOT_CLK>;
++				clocks = <&clks IMX7D_IPG_ROOT_CLK>,
++					 <&clks IMX7D_SDMA_CORE_CLK>;
+ 				clock-names = "ipg", "ahb";
+ 				#dma-cells = <3>;
+ 				fsl,sdma-ram-script-name = "imx/sdma/sdma-imx7d.bin";
+diff --git a/arch/arm/include/asm/hardirq.h b/arch/arm/include/asm/hardirq.h
+index cba23eaa6072..7a88f160b1fb 100644
+--- a/arch/arm/include/asm/hardirq.h
++++ b/arch/arm/include/asm/hardirq.h
+@@ -6,6 +6,7 @@
+ #include <linux/threads.h>
+ #include <asm/irq.h>
+ 
++/* number of IPIs _not_ including IPI_CPU_BACKTRACE */
+ #define NR_IPI	7
+ 
+ typedef struct {
+diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
+index facd4240ca02..c93fe0f256de 100644
+--- a/arch/arm/kernel/smp.c
++++ b/arch/arm/kernel/smp.c
+@@ -70,6 +70,10 @@ enum ipi_msg_type {
+ 	IPI_CPU_STOP,
+ 	IPI_IRQ_WORK,
+ 	IPI_COMPLETION,
++	/*
++	 * CPU_BACKTRACE is special and not included in NR_IPI
++	 * or traceable with trace_ipi_*
++	 */
+ 	IPI_CPU_BACKTRACE,
+ 	/*
+ 	 * SGI8-15 can be reserved by secure firmware, and thus may
+@@ -797,7 +801,7 @@ core_initcall(register_cpufreq_notifier);
+ 
+ static void raise_nmi(cpumask_t *mask)
+ {
+-	smp_cross_call(mask, IPI_CPU_BACKTRACE);
++	__smp_cross_call(mask, IPI_CPU_BACKTRACE);
+ }
+ 
+ void arch_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self)
+diff --git a/arch/arm/mach-exynos/suspend.c b/arch/arm/mach-exynos/suspend.c
+index 9afb0c69db34..5bc2d76deccb 100644
+--- a/arch/arm/mach-exynos/suspend.c
++++ b/arch/arm/mach-exynos/suspend.c
+@@ -444,8 +444,27 @@ early_wakeup:
+ 
+ static void exynos5420_prepare_pm_resume(void)
+ {
++	unsigned int mpidr, cluster;
++
++	mpidr = read_cpuid_mpidr();
++	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
++
+ 	if (IS_ENABLED(CONFIG_EXYNOS5420_MCPM))
+ 		WARN_ON(mcpm_cpu_powered_up());
++
++	if (IS_ENABLED(CONFIG_HW_PERF_EVENTS) && cluster != 0) {
++		/*
++		 * When system is resumed on the LITTLE/KFC core (cluster 1),
++		 * the DSCR is not properly updated until the power is turned
++		 * on also for the cluster 0. Enable it for a while to
++		 * propagate the SPNIDEN and SPIDEN signals from Secure JTAG
++		 * block and avoid undefined instruction issue on CP14 reset.
++		 */
++		pmu_raw_writel(S5P_CORE_LOCAL_PWR_EN,
++				EXYNOS_COMMON_CONFIGURATION(0));
++		pmu_raw_writel(0,
++				EXYNOS_COMMON_CONFIGURATION(0));
++	}
+ }
+ 
+ static void exynos5420_pm_resume(void)
+diff --git a/arch/arm/mach-omap2/pm33xx-core.c b/arch/arm/mach-omap2/pm33xx-core.c
+index 724cf5774a6c..c93b6efd565f 100644
+--- a/arch/arm/mach-omap2/pm33xx-core.c
++++ b/arch/arm/mach-omap2/pm33xx-core.c
+@@ -51,10 +51,12 @@ static int amx3_common_init(void)
+ 
+ 	/* CEFUSE domain can be turned off post bootup */
+ 	cefuse_pwrdm = pwrdm_lookup("cefuse_pwrdm");
+-	if (cefuse_pwrdm)
+-		omap_set_pwrdm_state(cefuse_pwrdm, PWRDM_POWER_OFF);
+-	else
++	if (!cefuse_pwrdm)
+ 		pr_err("PM: Failed to get cefuse_pwrdm\n");
++	else if (omap_type() != OMAP2_DEVICE_TYPE_GP)
++		pr_info("PM: Leaving EFUSE power domain active\n");
++	else
++		omap_set_pwrdm_state(cefuse_pwrdm, PWRDM_POWER_OFF);
+ 
+ 	return 0;
+ }
+diff --git a/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c b/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
+index dc526ef2e9b3..ee949255ced3 100644
+--- a/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
++++ b/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /*
+- * R-Car Generation 2 da9063/da9210 regulator quirk
++ * R-Car Generation 2 da9063(L)/da9210 regulator quirk
+  *
+  * Certain Gen2 development boards have an da9063 and one or more da9210
+  * regulators. All of these regulators have their interrupt request lines
+@@ -65,6 +65,7 @@ static struct i2c_msg da9210_msg = {
+ 
+ static const struct of_device_id rcar_gen2_quirk_match[] = {
+ 	{ .compatible = "dlg,da9063", .data = &da9063_msg },
++	{ .compatible = "dlg,da9063l", .data = &da9063_msg },
+ 	{ .compatible = "dlg,da9210", .data = &da9210_msg },
+ 	{},
+ };
+@@ -147,6 +148,7 @@ static int __init rcar_gen2_regulator_quirk(void)
+ 
+ 	if (!of_machine_is_compatible("renesas,koelsch") &&
+ 	    !of_machine_is_compatible("renesas,lager") &&
++	    !of_machine_is_compatible("renesas,porter") &&
+ 	    !of_machine_is_compatible("renesas,stout") &&
+ 	    !of_machine_is_compatible("renesas,gose"))
+ 		return -ENODEV;
+@@ -210,7 +212,7 @@ static int __init rcar_gen2_regulator_quirk(void)
+ 		goto err_free;
+ 	}
+ 
+-	pr_info("IRQ2 is asserted, installing da9063/da9210 regulator quirk\n");
++	pr_info("IRQ2 is asserted, installing regulator quirk\n");
+ 
+ 	bus_register_notifier(&i2c_bus_type, &regulator_quirk_nb);
+ 	return 0;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq.dtsi b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+index 9155bd4784eb..aa051af23bd0 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+@@ -240,7 +240,7 @@
+ 			};
+ 
+ 			iomuxc_gpr: syscon@30340000 {
+-				compatible = "fsl,imx8mq-iomuxc-gpr", "syscon";
++				compatible = "fsl,imx8mq-iomuxc-gpr", "fsl,imx6q-iomuxc-gpr", "syscon";
+ 				reg = <0x30340000 0x10000>;
+ 			};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/qcs404-evb.dtsi b/arch/arm64/boot/dts/qcom/qcs404-evb.dtsi
+index 50b3589c7f15..536f735243d2 100644
+--- a/arch/arm64/boot/dts/qcom/qcs404-evb.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcs404-evb.dtsi
+@@ -37,18 +37,18 @@
+ 	pms405-regulators {
+ 		compatible = "qcom,rpm-pms405-regulators";
+ 
+-		vdd-s1-supply = <&vph_pwr>;
+-		vdd-s2-supply = <&vph_pwr>;
+-		vdd-s3-supply = <&vph_pwr>;
+-		vdd-s4-supply = <&vph_pwr>;
+-		vdd-s5-supply = <&vph_pwr>;
+-		vdd-l1-l2-supply = <&vreg_s5_1p35>;
+-		vdd-l3-l8-supply = <&vreg_s5_1p35>;
+-		vdd-l4-supply = <&vreg_s5_1p35>;
+-		vdd-l5-l6-supply = <&vreg_s4_1p8>;
+-		vdd-l7-supply = <&vph_pwr>;
+-		vdd-l9-supply = <&vreg_s5_1p35>;
+-		vdd-l10-l11-l12-l13-supply = <&vph_pwr>;
++		vdd_s1-supply = <&vph_pwr>;
++		vdd_s2-supply = <&vph_pwr>;
++		vdd_s3-supply = <&vph_pwr>;
++		vdd_s4-supply = <&vph_pwr>;
++		vdd_s5-supply = <&vph_pwr>;
++		vdd_l1_l2-supply = <&vreg_s5_1p35>;
++		vdd_l3_l8-supply = <&vreg_s5_1p35>;
++		vdd_l4-supply = <&vreg_s5_1p35>;
++		vdd_l5_l6-supply = <&vreg_s4_1p8>;
++		vdd_l7-supply = <&vph_pwr>;
++		vdd_l9-supply = <&vreg_s5_1p35>;
++		vdd_l10_l11_l12_l13-supply = <&vph_pwr>;
+ 
+ 		vreg_s4_1p8: s4 {
+ 			regulator-min-microvolt = <1728000>;
+@@ -56,8 +56,8 @@
+ 		};
+ 
+ 		vreg_s5_1p35: s5 {
+-			regulator-min-microvolt = <>;
+-			regulator-max-microvolt = <>;
++			regulator-min-microvolt = <1352000>;
++			regulator-max-microvolt = <1352000>;
+ 		};
+ 
+ 		vreg_l1_1p3: l1 {
+diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
+index 2d9c39033c1a..32fb03503b0b 100644
+--- a/arch/arm64/configs/defconfig
++++ b/arch/arm64/configs/defconfig
+@@ -222,10 +222,10 @@ CONFIG_BLK_DEV_SD=y
+ CONFIG_SCSI_SAS_ATA=y
+ CONFIG_SCSI_HISI_SAS=y
+ CONFIG_SCSI_HISI_SAS_PCI=y
+-CONFIG_SCSI_UFSHCD=m
+-CONFIG_SCSI_UFSHCD_PLATFORM=m
++CONFIG_SCSI_UFSHCD=y
++CONFIG_SCSI_UFSHCD_PLATFORM=y
+ CONFIG_SCSI_UFS_QCOM=m
+-CONFIG_SCSI_UFS_HISI=m
++CONFIG_SCSI_UFS_HISI=y
+ CONFIG_ATA=y
+ CONFIG_SATA_AHCI=y
+ CONFIG_SATA_AHCI_PLATFORM=y
+diff --git a/arch/mips/kernel/prom.c b/arch/mips/kernel/prom.c
+index 93b8e0b4332f..b9d6c6ec4177 100644
+--- a/arch/mips/kernel/prom.c
++++ b/arch/mips/kernel/prom.c
+@@ -41,7 +41,19 @@ char *mips_get_machine_name(void)
+ #ifdef CONFIG_USE_OF
+ void __init early_init_dt_add_memory_arch(u64 base, u64 size)
+ {
+-	return add_memory_region(base, size, BOOT_MEM_RAM);
++	if (base >= PHYS_ADDR_MAX) {
++		pr_warn("Trying to add an invalid memory region, skipped\n");
++		return;
++	}
++
++	/* Truncate the passed memory region instead of type casting */
++	if (base + size - 1 >= PHYS_ADDR_MAX || base + size < base) {
++		pr_warn("Truncate memory region %llx @ %llx to size %llx\n",
++			size, base, PHYS_ADDR_MAX - base);
++		size = PHYS_ADDR_MAX - base;
++	}
++
++	add_memory_region(base, size, BOOT_MEM_RAM);
+ }
+ 
+ int __init early_init_dt_reserve_memory_arch(phys_addr_t base,
+diff --git a/arch/powerpc/include/asm/drmem.h b/arch/powerpc/include/asm/drmem.h
+index 7c1d8e74b25d..7f3279b014db 100644
+--- a/arch/powerpc/include/asm/drmem.h
++++ b/arch/powerpc/include/asm/drmem.h
+@@ -17,6 +17,9 @@ struct drmem_lmb {
+ 	u32     drc_index;
+ 	u32     aa_index;
+ 	u32     flags;
++#ifdef CONFIG_MEMORY_HOTPLUG
++	int	nid;
++#endif
+ };
+ 
+ struct drmem_lmb_info {
+@@ -104,4 +107,22 @@ static inline void invalidate_lmb_associativity_index(struct drmem_lmb *lmb)
+ 	lmb->aa_index = 0xffffffff;
+ }
+ 
++#ifdef CONFIG_MEMORY_HOTPLUG
++static inline void lmb_set_nid(struct drmem_lmb *lmb)
++{
++	lmb->nid = memory_add_physaddr_to_nid(lmb->base_addr);
++}
++static inline void lmb_clear_nid(struct drmem_lmb *lmb)
++{
++	lmb->nid = -1;
++}
++#else
++static inline void lmb_set_nid(struct drmem_lmb *lmb)
++{
++}
++static inline void lmb_clear_nid(struct drmem_lmb *lmb)
++{
++}
++#endif
++
+ #endif /* _ASM_POWERPC_LMB_H */
+diff --git a/arch/powerpc/mm/drmem.c b/arch/powerpc/mm/drmem.c
+index 3f1803672c9b..641891df2046 100644
+--- a/arch/powerpc/mm/drmem.c
++++ b/arch/powerpc/mm/drmem.c
+@@ -366,8 +366,10 @@ static void __init init_drmem_v1_lmbs(const __be32 *prop)
+ 	if (!drmem_info->lmbs)
+ 		return;
+ 
+-	for_each_drmem_lmb(lmb)
++	for_each_drmem_lmb(lmb) {
+ 		read_drconf_v1_cell(lmb, &prop);
++		lmb_set_nid(lmb);
++	}
+ }
+ 
+ static void __init init_drmem_v2_lmbs(const __be32 *prop)
+@@ -412,6 +414,8 @@ static void __init init_drmem_v2_lmbs(const __be32 *prop)
+ 
+ 			lmb->aa_index = dr_cell.aa_index;
+ 			lmb->flags = dr_cell.flags;
++
++			lmb_set_nid(lmb);
+ 		}
+ 	}
+ }
+diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
+index d291b618a559..47087832f8b2 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
++++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
+@@ -379,7 +379,7 @@ static int dlpar_add_lmb(struct drmem_lmb *);
+ static int dlpar_remove_lmb(struct drmem_lmb *lmb)
+ {
+ 	unsigned long block_sz;
+-	int nid, rc;
++	int rc;
+ 
+ 	if (!lmb_is_removable(lmb))
+ 		return -EINVAL;
+@@ -389,14 +389,14 @@ static int dlpar_remove_lmb(struct drmem_lmb *lmb)
+ 		return rc;
+ 
+ 	block_sz = pseries_memory_block_size();
+-	nid = memory_add_physaddr_to_nid(lmb->base_addr);
+ 
+-	__remove_memory(nid, lmb->base_addr, block_sz);
++	__remove_memory(lmb->nid, lmb->base_addr, block_sz);
+ 
+ 	/* Update memory regions for memory remove */
+ 	memblock_remove(lmb->base_addr, block_sz);
+ 
+ 	invalidate_lmb_associativity_index(lmb);
++	lmb_clear_nid(lmb);
+ 	lmb->flags &= ~DRCONF_MEM_ASSIGNED;
+ 
+ 	return 0;
+@@ -653,7 +653,7 @@ static int dlpar_memory_remove_by_ic(u32 lmbs_to_remove, u32 drc_index)
+ static int dlpar_add_lmb(struct drmem_lmb *lmb)
+ {
+ 	unsigned long block_sz;
+-	int nid, rc;
++	int rc;
+ 
+ 	if (lmb->flags & DRCONF_MEM_ASSIGNED)
+ 		return -EINVAL;
+@@ -664,13 +664,11 @@ static int dlpar_add_lmb(struct drmem_lmb *lmb)
+ 		return rc;
+ 	}
+ 
++	lmb_set_nid(lmb);
+ 	block_sz = memory_block_size_bytes();
+ 
+-	/* Find the node id for this address */
+-	nid = memory_add_physaddr_to_nid(lmb->base_addr);
+-
+ 	/* Add the memory */
+-	rc = __add_memory(nid, lmb->base_addr, block_sz);
++	rc = __add_memory(lmb->nid, lmb->base_addr, block_sz);
+ 	if (rc) {
+ 		invalidate_lmb_associativity_index(lmb);
+ 		return rc;
+@@ -678,8 +676,9 @@ static int dlpar_add_lmb(struct drmem_lmb *lmb)
+ 
+ 	rc = dlpar_online_lmb(lmb);
+ 	if (rc) {
+-		__remove_memory(nid, lmb->base_addr, block_sz);
++		__remove_memory(lmb->nid, lmb->base_addr, block_sz);
+ 		invalidate_lmb_associativity_index(lmb);
++		lmb_clear_nid(lmb);
+ 	} else {
+ 		lmb->flags |= DRCONF_MEM_ASSIGNED;
+ 	}
+diff --git a/arch/um/kernel/time.c b/arch/um/kernel/time.c
+index 052de4c8acb2..0c572a48158e 100644
+--- a/arch/um/kernel/time.c
++++ b/arch/um/kernel/time.c
+@@ -56,7 +56,7 @@ static int itimer_one_shot(struct clock_event_device *evt)
+ static struct clock_event_device timer_clockevent = {
+ 	.name			= "posix-timer",
+ 	.rating			= 250,
+-	.cpumask		= cpu_all_mask,
++	.cpumask		= cpu_possible_mask,
+ 	.features		= CLOCK_EVT_FEAT_PERIODIC |
+ 				  CLOCK_EVT_FEAT_ONESHOT,
+ 	.set_state_shutdown	= itimer_shutdown,
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index d35f4775d5f1..82dad001d1ea 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -3189,7 +3189,7 @@ static int intel_pmu_hw_config(struct perf_event *event)
+ 		return ret;
+ 
+ 	if (event->attr.precise_ip) {
+-		if (!(event->attr.freq || event->attr.wakeup_events)) {
++		if (!(event->attr.freq || (event->attr.wakeup_events && !event->attr.watermark))) {
+ 			event->hw.flags |= PERF_X86_EVENT_AUTO_RELOAD;
+ 			if (!(event->attr.sample_type &
+ 			      ~intel_pmu_large_pebs_flags(event)))
+diff --git a/arch/x86/pci/irq.c b/arch/x86/pci/irq.c
+index 52e55108404e..d3a73f9335e1 100644
+--- a/arch/x86/pci/irq.c
++++ b/arch/x86/pci/irq.c
+@@ -1119,6 +1119,8 @@ static const struct dmi_system_id pciirq_dmi_table[] __initconst = {
+ 
+ void __init pcibios_irq_init(void)
+ {
++	struct irq_routing_table *rtable = NULL;
++
+ 	DBG(KERN_DEBUG "PCI: IRQ init\n");
+ 
+ 	if (raw_pci_ops == NULL)
+@@ -1129,8 +1131,10 @@ void __init pcibios_irq_init(void)
+ 	pirq_table = pirq_find_routing_table();
+ 
+ #ifdef CONFIG_PCI_BIOS
+-	if (!pirq_table && (pci_probe & PCI_BIOS_IRQ_SCAN))
++	if (!pirq_table && (pci_probe & PCI_BIOS_IRQ_SCAN)) {
+ 		pirq_table = pcibios_get_irq_routing_table();
++		rtable = pirq_table;
++	}
+ #endif
+ 	if (pirq_table) {
+ 		pirq_peer_trick();
+@@ -1145,8 +1149,10 @@ void __init pcibios_irq_init(void)
+ 		 * If we're using the I/O APIC, avoid using the PCI IRQ
+ 		 * routing table
+ 		 */
+-		if (io_apic_assign_pci_irqs)
++		if (io_apic_assign_pci_irqs) {
++			kfree(rtable);
+ 			pirq_table = NULL;
++		}
+ 	}
+ 
+ 	x86_init.pci.fixup_irqs();
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 5ba1e0d841b4..679d608347ea 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2545,6 +2545,8 @@ static void bfq_arm_slice_timer(struct bfq_data *bfqd)
+ 	if (BFQQ_SEEKY(bfqq) && bfqq->wr_coeff == 1 &&
+ 	    bfq_symmetric_scenario(bfqd))
+ 		sl = min_t(u64, sl, BFQ_MIN_TT);
++	else if (bfqq->wr_coeff > 1)
++		sl = max_t(u32, sl, 20ULL * NSEC_PER_MSEC);
+ 
+ 	bfqd->last_idling_start = ktime_get();
+ 	hrtimer_start(&bfqd->idle_slice_timer, ns_to_ktime(sl),
+diff --git a/block/blk-core.c b/block/blk-core.c
+index b375cfea024c..2dd94b3e9ece 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -237,7 +237,6 @@ void blk_sync_queue(struct request_queue *q)
+ 		struct blk_mq_hw_ctx *hctx;
+ 		int i;
+ 
+-		cancel_delayed_work_sync(&q->requeue_work);
+ 		queue_for_each_hw_ctx(q, hctx, i)
+ 			cancel_delayed_work_sync(&hctx->run_work);
+ 	}
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 8a41cc5974fe..11efca3534ad 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2666,6 +2666,8 @@ void blk_mq_release(struct request_queue *q)
+ 	struct blk_mq_hw_ctx *hctx;
+ 	unsigned int i;
+ 
++	cancel_delayed_work_sync(&q->requeue_work);
++
+ 	/* hctx kobj stays in hctx */
+ 	queue_for_each_hw_ctx(q, hctx, i) {
+ 		if (!hctx)
+diff --git a/drivers/clk/rockchip/clk-rk3288.c b/drivers/clk/rockchip/clk-rk3288.c
+index 355d6a3611db..18460e0993bc 100644
+--- a/drivers/clk/rockchip/clk-rk3288.c
++++ b/drivers/clk/rockchip/clk-rk3288.c
+@@ -856,6 +856,9 @@ static const int rk3288_saved_cru_reg_ids[] = {
+ 	RK3288_CLKSEL_CON(10),
+ 	RK3288_CLKSEL_CON(33),
+ 	RK3288_CLKSEL_CON(37),
++
++	/* We turn aclk_dmac1 on for suspend; this will restore it */
++	RK3288_CLKGATE_CON(10),
+ };
+ 
+ static u32 rk3288_saved_cru_regs[ARRAY_SIZE(rk3288_saved_cru_reg_ids)];
+@@ -871,6 +874,14 @@ static int rk3288_clk_suspend(void)
+ 				readl_relaxed(rk3288_cru_base + reg_id);
+ 	}
+ 
++	/*
++	 * Going into deep sleep (specifically setting PMU_CLR_DMA in
++	 * RK3288_PMU_PWRMODE_CON1) appears to fail unless
++	 * "aclk_dmac1" is on.
++	 */
++	writel_relaxed(1 << (12 + 16),
++		       rk3288_cru_base + RK3288_CLKGATE_CON(10));
++
+ 	/*
+ 	 * Switch PLLs other than DPLL (for SDRAM) to slow mode to
+ 	 * avoid crashes on resume. The Mask ROM on the system will
+diff --git a/drivers/dma/idma64.c b/drivers/dma/idma64.c
+index 0baf9797cc09..83796a33dc16 100644
+--- a/drivers/dma/idma64.c
++++ b/drivers/dma/idma64.c
+@@ -592,7 +592,7 @@ static int idma64_probe(struct idma64_chip *chip)
+ 	idma64->dma.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+ 	idma64->dma.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
+ 
+-	idma64->dma.dev = chip->dev;
++	idma64->dma.dev = chip->sysdev;
+ 
+ 	dma_set_max_seg_size(idma64->dma.dev, IDMA64C_CTLH_BLOCK_TS_MASK);
+ 
+@@ -632,6 +632,7 @@ static int idma64_platform_probe(struct platform_device *pdev)
+ {
+ 	struct idma64_chip *chip;
+ 	struct device *dev = &pdev->dev;
++	struct device *sysdev = dev->parent;
+ 	struct resource *mem;
+ 	int ret;
+ 
+@@ -648,11 +649,12 @@ static int idma64_platform_probe(struct platform_device *pdev)
+ 	if (IS_ERR(chip->regs))
+ 		return PTR_ERR(chip->regs);
+ 
+-	ret = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
++	ret = dma_coerce_mask_and_coherent(sysdev, DMA_BIT_MASK(64));
+ 	if (ret)
+ 		return ret;
+ 
+ 	chip->dev = dev;
++	chip->sysdev = sysdev;
+ 
+ 	ret = idma64_probe(chip);
+ 	if (ret)
+diff --git a/drivers/dma/idma64.h b/drivers/dma/idma64.h
+index 6b816878e5e7..baa32e1425de 100644
+--- a/drivers/dma/idma64.h
++++ b/drivers/dma/idma64.h
+@@ -216,12 +216,14 @@ static inline void idma64_writel(struct idma64 *idma64, int offset, u32 value)
+ /**
+  * struct idma64_chip - representation of iDMA 64-bit controller hardware
+  * @dev:		struct device of the DMA controller
++ * @sysdev:		struct device of the physical device that does DMA
+  * @irq:		irq line
+  * @regs:		memory mapped I/O space
+  * @idma64:		struct idma64 that is filed by idma64_probe()
+  */
+ struct idma64_chip {
+ 	struct device	*dev;
++	struct device	*sysdev;
+ 	int		irq;
+ 	void __iomem	*regs;
+ 	struct idma64	*idma64;
+diff --git a/drivers/edac/Kconfig b/drivers/edac/Kconfig
+index 47eb4d13ed5f..5e2e0348d460 100644
+--- a/drivers/edac/Kconfig
++++ b/drivers/edac/Kconfig
+@@ -263,8 +263,8 @@ config EDAC_PND2
+ 	  micro-server but may appear on others in the future.
+ 
+ config EDAC_MPC85XX
+-	tristate "Freescale MPC83xx / MPC85xx"
+-	depends on FSL_SOC
++	bool "Freescale MPC83xx / MPC85xx"
++	depends on FSL_SOC && EDAC=y
+ 	help
+ 	  Support for error detection and correction on the Freescale
+ 	  MPC8349, MPC8560, MPC8540, MPC8548, T4240
+diff --git a/drivers/gpio/gpio-omap.c b/drivers/gpio/gpio-omap.c
+index 7f33024b6d83..fafd79438bbf 100644
+--- a/drivers/gpio/gpio-omap.c
++++ b/drivers/gpio/gpio-omap.c
+@@ -353,6 +353,22 @@ static void omap_clear_gpio_debounce(struct gpio_bank *bank, unsigned offset)
+ 	}
+ }
+ 
++/*
++ * Off mode wake-up capable GPIOs in bank(s) that are in the wakeup domain.
++ * See the TRM GPIO section on "Wake-Up Generation" for the list of GPIOs
++ * in the wakeup domain. If bank->non_wakeup_gpios is not configured, assume
++ * none are capable of waking up the system from off mode.
++ */
++static bool omap_gpio_is_off_wakeup_capable(struct gpio_bank *bank, u32 gpio_mask)
++{
++	u32 no_wake = bank->non_wakeup_gpios;
++
++	if (no_wake)
++		return !!(~no_wake & gpio_mask);
++
++	return false;
++}
++
+ static inline void omap_set_gpio_trigger(struct gpio_bank *bank, int gpio,
+ 						unsigned trigger)
+ {
+@@ -384,13 +400,7 @@ static inline void omap_set_gpio_trigger(struct gpio_bank *bank, int gpio,
+ 	}
+ 
+ 	/* This part needs to be executed always for OMAP{34xx, 44xx} */
+-	if (!bank->regs->irqctrl) {
+-		/* On omap24xx proceed only when valid GPIO bit is set */
+-		if (bank->non_wakeup_gpios) {
+-			if (!(bank->non_wakeup_gpios & gpio_bit))
+-				goto exit;
+-		}
+-
++	if (!bank->regs->irqctrl && !omap_gpio_is_off_wakeup_capable(bank, gpio)) {
+ 		/*
+ 		 * Log the edge gpio and manually trigger the IRQ
+ 		 * after resume if the input level changes
+@@ -403,7 +413,6 @@ static inline void omap_set_gpio_trigger(struct gpio_bank *bank, int gpio,
+ 			bank->enabled_non_wakeup_gpios &= ~gpio_bit;
+ 	}
+ 
+-exit:
+ 	bank->level_mask =
+ 		readl_relaxed(bank->base + bank->regs->leveldetect0) |
+ 		readl_relaxed(bank->base + bank->regs->leveldetect1);
+@@ -1444,7 +1453,10 @@ static void omap_gpio_restore_context(struct gpio_bank *bank);
+ static void omap_gpio_idle(struct gpio_bank *bank, bool may_lose_context)
+ {
+ 	struct device *dev = bank->chip.parent;
+-	u32 l1 = 0, l2 = 0;
++	void __iomem *base = bank->base;
++	u32 nowake;
++
++	bank->saved_datain = readl_relaxed(base + bank->regs->datain);
+ 
+ 	if (bank->funcs.idle_enable_level_quirk)
+ 		bank->funcs.idle_enable_level_quirk(bank);
+@@ -1456,20 +1468,15 @@ static void omap_gpio_idle(struct gpio_bank *bank, bool may_lose_context)
+ 		goto update_gpio_context_count;
+ 
+ 	/*
+-	 * If going to OFF, remove triggering for all
++	 * If going to OFF, remove triggering for all wkup domain
+ 	 * non-wakeup GPIOs.  Otherwise spurious IRQs will be
+ 	 * generated.  See OMAP2420 Errata item 1.101.
+ 	 */
+-	bank->saved_datain = readl_relaxed(bank->base +
+-						bank->regs->datain);
+-	l1 = bank->context.fallingdetect;
+-	l2 = bank->context.risingdetect;
+-
+-	l1 &= ~bank->enabled_non_wakeup_gpios;
+-	l2 &= ~bank->enabled_non_wakeup_gpios;
+-
+-	writel_relaxed(l1, bank->base + bank->regs->fallingdetect);
+-	writel_relaxed(l2, bank->base + bank->regs->risingdetect);
++	if (!bank->loses_context && bank->enabled_non_wakeup_gpios) {
++		nowake = bank->enabled_non_wakeup_gpios;
++		omap_gpio_rmw(base, bank->regs->fallingdetect, nowake, ~nowake);
++		omap_gpio_rmw(base, bank->regs->risingdetect, nowake, ~nowake);
++	}
+ 
+ 	bank->workaround_enabled = true;
+ 
+@@ -1518,6 +1525,12 @@ static void omap_gpio_unidle(struct gpio_bank *bank)
+ 				return;
+ 			}
+ 		}
++	} else {
++		/* Restore changes done for OMAP2420 errata 1.101 */
++		writel_relaxed(bank->context.fallingdetect,
++			       bank->base + bank->regs->fallingdetect);
++		writel_relaxed(bank->context.risingdetect,
++			       bank->base + bank->regs->risingdetect);
+ 	}
+ 
+ 	if (!bank->workaround_enabled)
+diff --git a/drivers/gpio/gpio-vf610.c b/drivers/gpio/gpio-vf610.c
+index 541fa6ac399d..7e9451f47efe 100644
+--- a/drivers/gpio/gpio-vf610.c
++++ b/drivers/gpio/gpio-vf610.c
+@@ -29,6 +29,7 @@ struct fsl_gpio_soc_data {
+ 
+ struct vf610_gpio_port {
+ 	struct gpio_chip gc;
++	struct irq_chip ic;
+ 	void __iomem *base;
+ 	void __iomem *gpio_base;
+ 	const struct fsl_gpio_soc_data *sdata;
+@@ -60,8 +61,6 @@ struct vf610_gpio_port {
+ #define PORT_INT_EITHER_EDGE	0xb
+ #define PORT_INT_LOGIC_ONE	0xc
+ 
+-static struct irq_chip vf610_gpio_irq_chip;
+-
+ static const struct fsl_gpio_soc_data imx_data = {
+ 	.have_paddr = true,
+ };
+@@ -237,15 +236,6 @@ static int vf610_gpio_irq_set_wake(struct irq_data *d, u32 enable)
+ 	return 0;
+ }
+ 
+-static struct irq_chip vf610_gpio_irq_chip = {
+-	.name		= "gpio-vf610",
+-	.irq_ack	= vf610_gpio_irq_ack,
+-	.irq_mask	= vf610_gpio_irq_mask,
+-	.irq_unmask	= vf610_gpio_irq_unmask,
+-	.irq_set_type	= vf610_gpio_irq_set_type,
+-	.irq_set_wake	= vf610_gpio_irq_set_wake,
+-};
+-
+ static int vf610_gpio_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+@@ -253,6 +243,7 @@ static int vf610_gpio_probe(struct platform_device *pdev)
+ 	struct vf610_gpio_port *port;
+ 	struct resource *iores;
+ 	struct gpio_chip *gc;
++	struct irq_chip *ic;
+ 	int i;
+ 	int ret;
+ 
+@@ -316,6 +307,14 @@ static int vf610_gpio_probe(struct platform_device *pdev)
+ 	gc->direction_output = vf610_gpio_direction_output;
+ 	gc->set = vf610_gpio_set;
+ 
++	ic = &port->ic;
++	ic->name = "gpio-vf610";
++	ic->irq_ack = vf610_gpio_irq_ack;
++	ic->irq_mask = vf610_gpio_irq_mask;
++	ic->irq_unmask = vf610_gpio_irq_unmask;
++	ic->irq_set_type = vf610_gpio_irq_set_type;
++	ic->irq_set_wake = vf610_gpio_irq_set_wake;
++
+ 	ret = gpiochip_add_data(gc, port);
+ 	if (ret < 0)
+ 		return ret;
+@@ -327,14 +326,13 @@ static int vf610_gpio_probe(struct platform_device *pdev)
+ 	/* Clear the interrupt status register for all GPIO's */
+ 	vf610_gpio_writel(~0, port->base + PORT_ISFR);
+ 
+-	ret = gpiochip_irqchip_add(gc, &vf610_gpio_irq_chip, 0,
+-				   handle_edge_irq, IRQ_TYPE_NONE);
++	ret = gpiochip_irqchip_add(gc, ic, 0, handle_edge_irq, IRQ_TYPE_NONE);
+ 	if (ret) {
+ 		dev_err(dev, "failed to add irqchip\n");
+ 		gpiochip_remove(gc);
+ 		return ret;
+ 	}
+-	gpiochip_set_chained_irqchip(gc, &vf610_gpio_irq_chip, port->irq,
++	gpiochip_set_chained_irqchip(gc, ic, port->irq,
+ 				     vf610_gpio_irq_handler);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index 419e8de8c0f4..6072636da388 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -1399,6 +1399,15 @@ static enum dc_status enable_link_dp(
+ 	/* get link settings for video mode timing */
+ 	decide_link_settings(stream, &link_settings);
+ 
++	/* If link settings are different than current and link already enabled
++	 * then need to disable before programming to new rate.
++	 */
++	if (link->link_status.link_active &&
++		(link->cur_link_settings.lane_count != link_settings.lane_count ||
++		 link->cur_link_settings.link_rate != link_settings.link_rate)) {
++		dp_disable_link_phy(link, pipe_ctx->stream->signal);
++	}
++
+ 	pipe_ctx->stream_res.pix_clk_params.requested_sym_clk =
+ 			link_settings.link_rate * LINK_RATE_REF_FREQ_IN_KHZ;
+ 	state->dccg->funcs->update_clocks(state->dccg, state, false);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c
+index cd1ebe57ed59..1951f9276e41 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c
+@@ -392,6 +392,10 @@ void dpp1_cnv_setup (
+ 	default:
+ 		break;
+ 	}
++
++	/* Set default color space based on format if none is given. */
++	color_space = input_color_space ? input_color_space : color_space;
++
+ 	REG_SET(CNVC_SURFACE_PIXEL_FORMAT, 0,
+ 			CNVC_SURFACE_PIXEL_FORMAT, pixel_format);
+ 	REG_UPDATE(FORMAT_CONTROL, FORMAT_CONTROL__ALPHA_EN, alpha_en);
+@@ -403,7 +407,7 @@ void dpp1_cnv_setup (
+ 		for (i = 0; i < 12; i++)
+ 			tbl_entry.regval[i] = input_csc_color_matrix.matrix[i];
+ 
+-		tbl_entry.color_space = input_color_space;
++		tbl_entry.color_space = color_space;
+ 
+ 		if (color_space >= COLOR_SPACE_YCBCR601)
+ 			select = INPUT_CSC_SELECT_ICSC;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index 5b551a544e82..1fac86d3032d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -1936,7 +1936,7 @@ static void update_dpp(struct dpp *dpp, struct dc_plane_state *plane_state)
+ 			plane_state->format,
+ 			EXPANSION_MODE_ZERO,
+ 			plane_state->input_csc_color_matrix,
+-			COLOR_SPACE_YCBCR601_LIMITED);
++			plane_state->color_space);
+ 
+ 	//set scale and bias registers
+ 	build_prescale_params(&bns_params, plane_state);
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+index ec2ca71e1323..c532e9c9e491 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+@@ -748,11 +748,11 @@ static void adv7511_mode_set(struct adv7511 *adv7511,
+ 			vsync_polarity = 1;
+ 	}
+ 
+-	if (mode->vrefresh <= 24000)
++	if (drm_mode_vrefresh(mode) <= 24)
+ 		low_refresh_rate = ADV7511_LOW_REFRESH_RATE_24HZ;
+-	else if (mode->vrefresh <= 25000)
++	else if (drm_mode_vrefresh(mode) <= 25)
+ 		low_refresh_rate = ADV7511_LOW_REFRESH_RATE_25HZ;
+-	else if (mode->vrefresh <= 30000)
++	else if (drm_mode_vrefresh(mode) <= 30)
+ 		low_refresh_rate = ADV7511_LOW_REFRESH_RATE_30HZ;
+ 	else
+ 		low_refresh_rate = ADV7511_LOW_REFRESH_RATE_NONE;
+diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
+index 687943df58e1..ab5692104ea0 100644
+--- a/drivers/gpu/drm/drm_ioctl.c
++++ b/drivers/gpu/drm/drm_ioctl.c
+@@ -508,13 +508,6 @@ int drm_version(struct drm_device *dev, void *data,
+ 	return err;
+ }
+ 
+-static inline bool
+-drm_render_driver_and_ioctl(const struct drm_device *dev, u32 flags)
+-{
+-	return drm_core_check_feature(dev, DRIVER_RENDER) &&
+-		(flags & DRM_RENDER_ALLOW);
+-}
+-
+ /**
+  * drm_ioctl_permit - Check ioctl permissions against caller
+  *
+@@ -529,19 +522,14 @@ drm_render_driver_and_ioctl(const struct drm_device *dev, u32 flags)
+  */
+ int drm_ioctl_permit(u32 flags, struct drm_file *file_priv)
+ {
+-	const struct drm_device *dev = file_priv->minor->dev;
+-
+ 	/* ROOT_ONLY is only for CAP_SYS_ADMIN */
+ 	if (unlikely((flags & DRM_ROOT_ONLY) && !capable(CAP_SYS_ADMIN)))
+ 		return -EACCES;
+ 
+-	/* AUTH is only for master ... */
+-	if (unlikely((flags & DRM_AUTH) && drm_is_primary_client(file_priv))) {
+-		/* authenticated ones, or render capable on DRM_RENDER_ALLOW. */
+-		if (!file_priv->authenticated &&
+-		    !drm_render_driver_and_ioctl(dev, flags))
+-			return -EACCES;
+-	}
++	/* AUTH is only for authenticated or render client */
++	if (unlikely((flags & DRM_AUTH) && !drm_is_render_client(file_priv) &&
++		     !file_priv->authenticated))
++		return -EACCES;
+ 
+ 	/* MASTER is only for master or control clients */
+ 	if (unlikely((flags & DRM_MASTER) &&
+diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
+index 18ca651ab942..23de4d1b7b1a 100644
+--- a/drivers/gpu/drm/msm/msm_gem.c
++++ b/drivers/gpu/drm/msm/msm_gem.c
+@@ -805,7 +805,8 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m)
+ 		seq_puts(m, "      vmas:");
+ 
+ 		list_for_each_entry(vma, &msm_obj->vmas, list)
+-			seq_printf(m, " [%s: %08llx,%s,inuse=%d]", vma->aspace->name,
++			seq_printf(m, " [%s: %08llx,%s,inuse=%d]",
++				vma->aspace != NULL ? vma->aspace->name : NULL,
+ 				vma->iova, vma->mapped ? "mapped" : "unmapped",
+ 				vma->inuse);
+ 
+diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
+index db28012dbf54..00cd9ab8948d 100644
+--- a/drivers/gpu/drm/nouveau/Kconfig
++++ b/drivers/gpu/drm/nouveau/Kconfig
+@@ -17,20 +17,9 @@ config DRM_NOUVEAU
+ 	select INPUT if ACPI && X86
+ 	select THERMAL if ACPI && X86
+ 	select ACPI_VIDEO if ACPI && X86
+-	help
+-	  Choose this option for open-source NVIDIA support.
+-
+-config NOUVEAU_LEGACY_CTX_SUPPORT
+-	bool "Nouveau legacy context support"
+-	depends on DRM_NOUVEAU
+ 	select DRM_VM
+-	default y
+ 	help
+-	  There was a version of the nouveau DDX that relied on legacy
+-	  ctx ioctls not erroring out. But that was back in time a long
+-	  ways, so offer a way to disable it now. For uapi compat with
+-	  old nouveau ddx this should be on by default, but modern distros
+-	  should consider turning it off.
++	  Choose this option for open-source NVIDIA support.
+ 
+ config NOUVEAU_PLATFORM_DRIVER
+ 	bool "Nouveau (NVIDIA) SoC GPUs"
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.h b/drivers/gpu/drm/nouveau/dispnv50/disp.h
+index 2216c58620c2..7c41b0599d1a 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.h
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.h
+@@ -41,6 +41,7 @@ struct nv50_disp_interlock {
+ 		NV50_DISP_INTERLOCK__SIZE
+ 	} type;
+ 	u32 data;
++	u32 wimm;
+ };
+ 
+ void corec37d_ntfy_init(struct nouveau_bo *, u32);
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/head.c b/drivers/gpu/drm/nouveau/dispnv50/head.c
+index 2e7a0c347ddb..06ee23823a68 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/head.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/head.c
+@@ -306,7 +306,7 @@ nv50_head_atomic_check(struct drm_crtc *crtc, struct drm_crtc_state *state)
+ 			asyh->set.or = head->func->or != NULL;
+ 		}
+ 
+-		if (asyh->state.mode_changed)
++		if (asyh->state.mode_changed || asyh->state.connectors_changed)
+ 			nv50_head_atomic_check_mode(head, asyh);
+ 
+ 		if (asyh->state.color_mgmt_changed ||
+@@ -413,6 +413,7 @@ nv50_head_atomic_duplicate_state(struct drm_crtc *crtc)
+ 	asyh->ovly = armh->ovly;
+ 	asyh->dither = armh->dither;
+ 	asyh->procamp = armh->procamp;
++	asyh->or = armh->or;
+ 	asyh->dp = armh->dp;
+ 	asyh->clr.mask = 0;
+ 	asyh->set.mask = 0;
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c b/drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c
+index 9103b8494279..f7dbd965e4e7 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c
+@@ -75,6 +75,7 @@ wimmc37b_init_(const struct nv50_wimm_func *func, struct nouveau_drm *drm,
+ 		return ret;
+ 	}
+ 
++	wndw->interlock.wimm = wndw->interlock.data;
+ 	wndw->immd = func;
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
+index b95181027b31..471a39a077e5 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
+@@ -149,7 +149,7 @@ nv50_wndw_flush_set(struct nv50_wndw *wndw, u32 *interlock,
+ 	if (asyw->set.point) {
+ 		if (asyw->set.point = false, asyw->set.mask)
+ 			interlock[wndw->interlock.type] |= wndw->interlock.data;
+-		interlock[NV50_DISP_INTERLOCK_WIMM] |= wndw->interlock.data;
++		interlock[NV50_DISP_INTERLOCK_WIMM] |= wndw->interlock.wimm;
+ 
+ 		wndw->immd->point(wndw, asyw);
+ 		wndw->immd->update(wndw, interlock);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index 6ab9033f49da..5020265bfbd9 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -1094,11 +1094,8 @@ nouveau_driver_fops = {
+ static struct drm_driver
+ driver_stub = {
+ 	.driver_features =
+-		DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME | DRIVER_RENDER
+-#if defined(CONFIG_NOUVEAU_LEGACY_CTX_SUPPORT)
+-		| DRIVER_KMS_LEGACY_CONTEXT
+-#endif
+-		,
++		DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME | DRIVER_RENDER |
++		DRIVER_KMS_LEGACY_CONTEXT,
+ 
+ 	.open = nouveau_drm_open,
+ 	.postclose = nouveau_drm_postclose,
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c
+index 5f301e632599..818d21bd28d3 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c
+@@ -365,8 +365,15 @@ nvkm_dp_train(struct nvkm_dp *dp, u32 dataKBps)
+ 	 * and it's better to have a failed modeset than that.
+ 	 */
+ 	for (cfg = nvkm_dp_rates; cfg->rate; cfg++) {
+-		if (cfg->nr <= outp_nr && cfg->nr <= outp_bw)
+-			failsafe = cfg;
++		if (cfg->nr <= outp_nr && cfg->nr <= outp_bw) {
++			/* Try to respect sink limits too when selecting
++			 * lowest link configuration.
++			 */
++			if (!failsafe ||
++			    (cfg->nr <= sink_nr && cfg->bw <= sink_bw))
++				failsafe = cfg;
++		}
++
+ 		if (failsafe && cfg[1].rate < dataKBps)
+ 			break;
+ 	}
+diff --git a/drivers/gpu/drm/pl111/pl111_display.c b/drivers/gpu/drm/pl111/pl111_display.c
+index 754f6b25f265..6d9f78612dee 100644
+--- a/drivers/gpu/drm/pl111/pl111_display.c
++++ b/drivers/gpu/drm/pl111/pl111_display.c
+@@ -531,14 +531,15 @@ pl111_init_clock_divider(struct drm_device *drm)
+ 		dev_err(drm->dev, "CLCD: unable to get clcdclk.\n");
+ 		return PTR_ERR(parent);
+ 	}
++
++	spin_lock_init(&priv->tim2_lock);
++
+ 	/* If the clock divider is broken, use the parent directly */
+ 	if (priv->variant->broken_clockdivider) {
+ 		priv->clk = parent;
+ 		return 0;
+ 	}
+ 	parent_name = __clk_get_name(parent);
+-
+-	spin_lock_init(&priv->tim2_lock);
+ 	div->init = &init;
+ 
+ 	ret = devm_clk_hw_register(drm->dev, div);
+diff --git a/drivers/input/touchscreen/goodix.c b/drivers/input/touchscreen/goodix.c
+index f57d82220a88..dd029e689903 100644
+--- a/drivers/input/touchscreen/goodix.c
++++ b/drivers/input/touchscreen/goodix.c
+@@ -216,6 +216,7 @@ static const struct goodix_chip_data *goodix_get_chip_data(u16 id)
+ {
+ 	switch (id) {
+ 	case 1151:
++	case 5663:
+ 	case 5688:
+ 		return &gt1x_chip_data;
+ 
+@@ -945,6 +946,7 @@ MODULE_DEVICE_TABLE(acpi, goodix_acpi_match);
+ #ifdef CONFIG_OF
+ static const struct of_device_id goodix_of_match[] = {
+ 	{ .compatible = "goodix,gt1151" },
++	{ .compatible = "goodix,gt5663" },
+ 	{ .compatible = "goodix,gt5688" },
+ 	{ .compatible = "goodix,gt911" },
+ 	{ .compatible = "goodix,gt9110" },
+diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
+index d3880010c6cf..d8b73da6447d 100644
+--- a/drivers/iommu/arm-smmu-v3.c
++++ b/drivers/iommu/arm-smmu-v3.c
+@@ -2454,13 +2454,9 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
+ 	/* Clear CR0 and sync (disables SMMU and queue processing) */
+ 	reg = readl_relaxed(smmu->base + ARM_SMMU_CR0);
+ 	if (reg & CR0_SMMUEN) {
+-		if (is_kdump_kernel()) {
+-			arm_smmu_update_gbpa(smmu, GBPA_ABORT, 0);
+-			arm_smmu_device_disable(smmu);
+-			return -EBUSY;
+-		}
+-
+ 		dev_warn(smmu->dev, "SMMU currently enabled! Resetting...\n");
++		WARN_ON(is_kdump_kernel() && !disable_bypass);
++		arm_smmu_update_gbpa(smmu, GBPA_ABORT, 0);
+ 	}
+ 
+ 	ret = arm_smmu_device_disable(smmu);
+@@ -2553,6 +2549,8 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
+ 		return ret;
+ 	}
+ 
++	if (is_kdump_kernel())
++		enables &= ~(CR0_EVTQEN | CR0_PRIQEN);
+ 
+ 	/* Enable the SMMU interface, or ensure bypass */
+ 	if (!bypass || disable_bypass) {
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index 28cb713d728c..0feb3f70da16 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -3496,7 +3496,13 @@ domains_done:
+ 
+ #ifdef CONFIG_INTEL_IOMMU_SVM
+ 		if (pasid_supported(iommu) && ecap_prs(iommu->ecap)) {
++			/*
++			 * Calling dmar_alloc_hwirq() with dmar_global_lock
++			 * held could cause a lock race, so drop it here.
++			 */
++			up_write(&dmar_global_lock);
+ 			ret = intel_svm_enable_prq(iommu);
++			down_write(&dmar_global_lock);
+ 			if (ret)
+ 				goto free_iommu;
+ 		}
+@@ -3730,6 +3736,7 @@ static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size)
+ 	unsigned long iova_pfn;
+ 	struct intel_iommu *iommu;
+ 	struct page *freelist;
++	struct pci_dev *pdev = NULL;
+ 
+ 	if (iommu_no_mapping(dev))
+ 		return;
+@@ -3745,11 +3752,14 @@ static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size)
+ 	start_pfn = mm_to_dma_pfn(iova_pfn);
+ 	last_pfn = start_pfn + nrpages - 1;
+ 
++	if (dev_is_pci(dev))
++		pdev = to_pci_dev(dev);
++
+ 	dev_dbg(dev, "Device unmapping: pfn %lx-%lx\n", start_pfn, last_pfn);
+ 
+ 	freelist = domain_unmap(domain, start_pfn, last_pfn);
+ 
+-	if (intel_iommu_strict) {
++	if (intel_iommu_strict || (pdev && pdev->untrusted)) {
+ 		iommu_flush_iotlb_psi(iommu, domain, start_pfn,
+ 				      nrpages, !freelist, 0);
+ 		/* free iova */
+@@ -4055,9 +4065,7 @@ static void __init init_no_remapping_devices(void)
+ 
+ 		/* This IOMMU has *only* gfx devices. Either bypass it or
+ 		   set the gfx_mapped flag, as appropriate */
+-		if (dmar_map_gfx) {
+-			intel_iommu_gfx_mapped = 1;
+-		} else {
++		if (!dmar_map_gfx) {
+ 			drhd->ignored = 1;
+ 			for_each_active_dev_scope(drhd->devices,
+ 						  drhd->devices_cnt, i, dev)
+@@ -4896,6 +4904,9 @@ int __init intel_iommu_init(void)
+ 		goto out_free_reserved_range;
+ 	}
+ 
++	if (dmar_map_gfx)
++		intel_iommu_gfx_mapped = 1;
++
+ 	init_no_remapping_devices();
+ 
+ 	ret = init_dmars();
+diff --git a/drivers/mailbox/stm32-ipcc.c b/drivers/mailbox/stm32-ipcc.c
+index 210fe504f5ae..f91dfb1327c7 100644
+--- a/drivers/mailbox/stm32-ipcc.c
++++ b/drivers/mailbox/stm32-ipcc.c
+@@ -8,9 +8,9 @@
+ #include <linux/bitfield.h>
+ #include <linux/clk.h>
+ #include <linux/interrupt.h>
++#include <linux/io.h>
+ #include <linux/mailbox_controller.h>
+ #include <linux/module.h>
+-#include <linux/of_irq.h>
+ #include <linux/platform_device.h>
+ #include <linux/pm_wakeirq.h>
+ 
+@@ -240,9 +240,11 @@ static int stm32_ipcc_probe(struct platform_device *pdev)
+ 
+ 	/* irq */
+ 	for (i = 0; i < IPCC_IRQ_NUM; i++) {
+-		ipcc->irqs[i] = of_irq_get_byname(dev->of_node, irq_name[i]);
++		ipcc->irqs[i] = platform_get_irq_byname(pdev, irq_name[i]);
+ 		if (ipcc->irqs[i] < 0) {
+-			dev_err(dev, "no IRQ specified %s\n", irq_name[i]);
++			if (ipcc->irqs[i] != -EPROBE_DEFER)
++				dev_err(dev, "no IRQ specified %s\n",
++					irq_name[i]);
+ 			ret = ipcc->irqs[i];
+ 			goto err_clk;
+ 		}
+@@ -263,9 +265,10 @@ static int stm32_ipcc_probe(struct platform_device *pdev)
+ 
+ 	/* wakeup */
+ 	if (of_property_read_bool(np, "wakeup-source")) {
+-		ipcc->wkp = of_irq_get_byname(dev->of_node, "wakeup");
++		ipcc->wkp = platform_get_irq_byname(pdev, "wakeup");
+ 		if (ipcc->wkp < 0) {
+-			dev_err(dev, "could not get wakeup IRQ\n");
++			if (ipcc->wkp != -EPROBE_DEFER)
++				dev_err(dev, "could not get wakeup IRQ\n");
+ 			ret = ipcc->wkp;
+ 			goto err_clk;
+ 		}
+diff --git a/drivers/media/platform/atmel/atmel-isc.c b/drivers/media/platform/atmel/atmel-isc.c
+index 50178968b8a6..5b0e96bd8c7e 100644
+--- a/drivers/media/platform/atmel/atmel-isc.c
++++ b/drivers/media/platform/atmel/atmel-isc.c
+@@ -2065,8 +2065,11 @@ static int isc_parse_dt(struct device *dev, struct isc_device *isc)
+ 			break;
+ 		}
+ 
+-		subdev_entity->asd = devm_kzalloc(dev,
+-				     sizeof(*subdev_entity->asd), GFP_KERNEL);
++		/* asd will be freed by the subsystem once it's added to the
++		 * notifier list
++		 */
++		subdev_entity->asd = kzalloc(sizeof(*subdev_entity->asd),
++					     GFP_KERNEL);
+ 		if (!subdev_entity->asd) {
+ 			of_node_put(rem);
+ 			ret = -ENOMEM;
+@@ -2210,6 +2213,7 @@ static int atmel_isc_probe(struct platform_device *pdev)
+ 						     subdev_entity->asd);
+ 		if (ret) {
+ 			fwnode_handle_put(subdev_entity->asd->match.fwnode);
++			kfree(subdev_entity->asd);
+ 			goto cleanup_subdev;
+ 		}
+ 
+diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
+index b79d3bbd8350..54d66dbc2a31 100644
+--- a/drivers/media/v4l2-core/v4l2-ctrls.c
++++ b/drivers/media/v4l2-core/v4l2-ctrls.c
+@@ -3899,18 +3899,19 @@ void v4l2_ctrl_request_complete(struct media_request *req,
+ }
+ EXPORT_SYMBOL(v4l2_ctrl_request_complete);
+ 
+-void v4l2_ctrl_request_setup(struct media_request *req,
++int v4l2_ctrl_request_setup(struct media_request *req,
+ 			     struct v4l2_ctrl_handler *main_hdl)
+ {
+ 	struct media_request_object *obj;
+ 	struct v4l2_ctrl_handler *hdl;
+ 	struct v4l2_ctrl_ref *ref;
++	int ret = 0;
+ 
+ 	if (!req || !main_hdl)
+-		return;
++		return 0;
+ 
+ 	if (WARN_ON(req->state != MEDIA_REQUEST_STATE_QUEUED))
+-		return;
++		return -EBUSY;
+ 
+ 	/*
+ 	 * Note that it is valid if nothing was found. It means
+@@ -3919,10 +3920,10 @@ void v4l2_ctrl_request_setup(struct media_request *req,
+ 	 */
+ 	obj = media_request_object_find(req, &req_ops, main_hdl);
+ 	if (!obj)
+-		return;
++		return 0;
+ 	if (obj->completed) {
+ 		media_request_object_put(obj);
+-		return;
++		return -EBUSY;
+ 	}
+ 	hdl = container_of(obj, struct v4l2_ctrl_handler, req_obj);
+ 
+@@ -3990,12 +3991,15 @@ void v4l2_ctrl_request_setup(struct media_request *req,
+ 				update_from_auto_cluster(master);
+ 		}
+ 
+-		try_or_set_cluster(NULL, master, true, 0);
+-
++		ret = try_or_set_cluster(NULL, master, true, 0);
+ 		v4l2_ctrl_unlock(master);
++
++		if (ret)
++			break;
+ 	}
+ 
+ 	media_request_object_put(obj);
++	return ret;
+ }
+ EXPORT_SYMBOL(v4l2_ctrl_request_setup);
+ 
+diff --git a/drivers/media/v4l2-core/v4l2-fwnode.c b/drivers/media/v4l2-core/v4l2-fwnode.c
+index 7495f8323147..ccefa55813ad 100644
+--- a/drivers/media/v4l2-core/v4l2-fwnode.c
++++ b/drivers/media/v4l2-core/v4l2-fwnode.c
+@@ -163,7 +163,7 @@ static int v4l2_fwnode_endpoint_parse_csi2_bus(struct fwnode_handle *fwnode,
+ 		}
+ 
+ 		if (use_default_lane_mapping)
+-			pr_debug("using default lane mapping\n");
++			pr_debug("no lane mapping given, using defaults\n");
+ 	}
+ 
+ 	rval = fwnode_property_read_u32_array(fwnode, "data-lanes", NULL, 0);
+@@ -175,6 +175,10 @@ static int v4l2_fwnode_endpoint_parse_csi2_bus(struct fwnode_handle *fwnode,
+ 					       num_data_lanes);
+ 
+ 		have_data_lanes = true;
++		if (use_default_lane_mapping) {
++			pr_debug("data-lanes property exists; disabling default mapping\n");
++			use_default_lane_mapping = false;
++		}
+ 	}
+ 
+ 	for (i = 0; i < num_data_lanes; i++) {
+diff --git a/drivers/mfd/intel-lpss.c b/drivers/mfd/intel-lpss.c
+index 50bffc3382d7..ff3fba16e735 100644
+--- a/drivers/mfd/intel-lpss.c
++++ b/drivers/mfd/intel-lpss.c
+@@ -273,6 +273,9 @@ static void intel_lpss_init_dev(const struct intel_lpss *lpss)
+ {
+ 	u32 value = LPSS_PRIV_SSP_REG_DIS_DMA_FIN;
+ 
++	/* Set the device in reset state */
++	writel(0, lpss->priv + LPSS_PRIV_RESETS);
++
+ 	intel_lpss_deassert_reset(lpss);
+ 
+ 	intel_lpss_set_remap_addr(lpss);
+diff --git a/drivers/mfd/tps65912-spi.c b/drivers/mfd/tps65912-spi.c
+index 3bd75061f777..f78be039e463 100644
+--- a/drivers/mfd/tps65912-spi.c
++++ b/drivers/mfd/tps65912-spi.c
+@@ -27,6 +27,7 @@ static const struct of_device_id tps65912_spi_of_match_table[] = {
+ 	{ .compatible = "ti,tps65912", },
+ 	{ /* sentinel */ }
+ };
++MODULE_DEVICE_TABLE(of, tps65912_spi_of_match_table);
+ 
+ static int tps65912_spi_probe(struct spi_device *spi)
+ {
+diff --git a/drivers/mfd/twl6040.c b/drivers/mfd/twl6040.c
+index 7c3c5fd5fcd0..86052c5c6069 100644
+--- a/drivers/mfd/twl6040.c
++++ b/drivers/mfd/twl6040.c
+@@ -322,8 +322,19 @@ int twl6040_power(struct twl6040 *twl6040, int on)
+ 			}
+ 		}
+ 
++		/*
++		 * Register access can produce errors after power-up unless we
++		 * wait at least 8ms based on measurements on duovero.
++		 */
++		usleep_range(10000, 12000);
++
+ 		/* Sync with the HW */
+-		regcache_sync(twl6040->regmap);
++		ret = regcache_sync(twl6040->regmap);
++		if (ret) {
++			dev_err(twl6040->dev, "Failed to sync with the HW: %i\n",
++				ret);
++			goto out;
++		}
+ 
+ 		/* Default PLL configuration after power up */
+ 		twl6040->pll = TWL6040_SYSCLK_SEL_LPPLL;
+diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c
+index 29582fe57151..733274a061dc 100644
+--- a/drivers/misc/pci_endpoint_test.c
++++ b/drivers/misc/pci_endpoint_test.c
+@@ -662,6 +662,7 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
+ 	data = (struct pci_endpoint_test_data *)ent->driver_data;
+ 	if (data) {
+ 		test_reg_bar = data->test_reg_bar;
++		test->test_reg_bar = test_reg_bar;
+ 		test->alignment = data->alignment;
+ 		irq_type = data->irq_type;
+ 	}
+diff --git a/drivers/mmc/host/mmci.c b/drivers/mmc/host/mmci.c
+index 387ff14587b8..e27978c47db7 100644
+--- a/drivers/mmc/host/mmci.c
++++ b/drivers/mmc/host/mmci.c
+@@ -1550,9 +1550,10 @@ static irqreturn_t mmci_irq(int irq, void *dev_id)
+ 		}
+ 
+ 		/*
+-		 * Don't poll for busy completion in irq context.
++		 * Busy detection has been handled by mmci_cmd_irq() above.
++		 * Clear the status bit to prevent polling in IRQ context.
+ 		 */
+-		if (host->variant->busy_detect && host->busy_status)
++		if (host->variant->busy_detect_flag)
+ 			status &= ~host->variant->busy_detect_flag;
+ 
+ 		ret = 1;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index deda606c51e7..6d4d5a470163 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -5942,8 +5942,11 @@ int hclge_add_uc_addr_common(struct hclge_vport *vport,
+ 	}
+ 
+ 	/* check if we just hit the duplicate */
+-	if (!ret)
+-		ret = -EINVAL;
++	if (!ret) {
++		dev_warn(&hdev->pdev->dev, "VF %d mac(%pM) exists\n",
++			 vport->vport_id, addr);
++		return 0;
++	}
+ 
+ 	dev_err(&hdev->pdev->dev,
+ 		"PF failed to add unicast entry(%pM) in the MAC table\n",
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index ac9fcb097689..133f5e008822 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -6854,10 +6854,12 @@ static int i40e_setup_tc(struct net_device *netdev, void *type_data)
+ 	struct i40e_pf *pf = vsi->back;
+ 	u8 enabled_tc = 0, num_tc, hw;
+ 	bool need_reset = false;
++	int old_queue_pairs;
+ 	int ret = -EINVAL;
+ 	u16 mode;
+ 	int i;
+ 
++	old_queue_pairs = vsi->num_queue_pairs;
+ 	num_tc = mqprio_qopt->qopt.num_tc;
+ 	hw = mqprio_qopt->qopt.hw;
+ 	mode = mqprio_qopt->mode;
+@@ -6958,6 +6960,7 @@ config_tc:
+ 		}
+ 		ret = i40e_configure_queue_channels(vsi);
+ 		if (ret) {
++			vsi->num_queue_pairs = old_queue_pairs;
+ 			netdev_info(netdev,
+ 				    "Failed configuring queue channels\n");
+ 			need_reset = true;
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 6ec73864019c..b562476b1251 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -528,6 +528,9 @@ void ice_print_link_msg(struct ice_vsi *vsi, bool isup)
+ 	case ICE_FC_RX_PAUSE:
+ 		fc = "RX";
+ 		break;
++	case ICE_FC_NONE:
++		fc = "None";
++		break;
+ 	default:
+ 		fc = "Unknown";
+ 		break;
+diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
+index 09d1c314b68f..c5f6cfecc042 100644
+--- a/drivers/net/ethernet/intel/ice/ice_switch.c
++++ b/drivers/net/ethernet/intel/ice/ice_switch.c
+@@ -643,21 +643,37 @@ static void ice_fill_sw_info(struct ice_hw *hw, struct ice_fltr_info *fi)
+ 	     fi->fltr_act == ICE_FWD_TO_VSI_LIST ||
+ 	     fi->fltr_act == ICE_FWD_TO_Q ||
+ 	     fi->fltr_act == ICE_FWD_TO_QGRP)) {
+-		fi->lb_en = true;
+-		/* Do not set lan_en to TRUE if
++		/* Setting LB for prune actions will result in replicated
++		 * packets to the internal switch that will be dropped.
++		 */
++		if (fi->lkup_type != ICE_SW_LKUP_VLAN)
++			fi->lb_en = true;
++
++		/* Set lan_en to TRUE if
+ 		 * 1. The switch is a VEB AND
+ 		 * 2. Any one of the following is true:
+-		 * 2.1 The lookup is MAC with unicast addr for MAC, OR
+-		 * 2.2 The lookup is MAC_VLAN with unicast addr for MAC
++		 * 2.1 The lookup is VLAN, OR
++		 * 2.2 The lookup is default port mode, OR
++		 * 2.3 The lookup is MAC with mcast or bcast addr for MAC, OR
++		 * 2.4 The lookup is MAC_VLAN with mcast or bcast addr for MAC.
++		 *
++		 * OR
+ 		 *
+-		 * In all other cases, the LAN enable has to be set to true.
++		 * The switch is a VEPA.
++		 *
++		 * In all other cases, the LAN enable has to be set to false.
+ 		 */
+-		if (!(hw->evb_veb &&
+-		      ((fi->lkup_type == ICE_SW_LKUP_MAC &&
+-			is_unicast_ether_addr(fi->l_data.mac.mac_addr)) ||
+-		       (fi->lkup_type == ICE_SW_LKUP_MAC_VLAN &&
+-			is_unicast_ether_addr(fi->l_data.mac_vlan.mac_addr)))))
++		if (hw->evb_veb) {
++			if (fi->lkup_type == ICE_SW_LKUP_VLAN ||
++			    fi->lkup_type == ICE_SW_LKUP_DFLT ||
++			    (fi->lkup_type == ICE_SW_LKUP_MAC &&
++			     !is_unicast_ether_addr(fi->l_data.mac.mac_addr)) ||
++			    (fi->lkup_type == ICE_SW_LKUP_MAC_VLAN &&
++			     !is_unicast_ether_addr(fi->l_data.mac_vlan.mac_addr)))
++				fi->lan_en = true;
++		} else {
+ 			fi->lan_en = true;
++		}
+ 	}
+ }
+ 
+diff --git a/drivers/net/thunderbolt.c b/drivers/net/thunderbolt.c
+index c48c3a1eb1f8..fcf31335a8b6 100644
+--- a/drivers/net/thunderbolt.c
++++ b/drivers/net/thunderbolt.c
+@@ -1282,6 +1282,7 @@ static int __maybe_unused tbnet_suspend(struct device *dev)
+ 		tbnet_tear_down(net, true);
+ 	}
+ 
++	tb_unregister_protocol_handler(&net->handler);
+ 	return 0;
+ }
+ 
+@@ -1290,6 +1291,8 @@ static int __maybe_unused tbnet_resume(struct device *dev)
+ 	struct tb_service *svc = tb_to_service(dev);
+ 	struct tbnet *net = tb_service_get_drvdata(svc);
+ 
++	tb_register_protocol_handler(&net->handler);
++
+ 	netif_carrier_off(net->dev);
+ 	if (netif_running(net->dev)) {
+ 		netif_device_attach(net->dev);
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index a90cf5d63aac..372d3f4a106a 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -1271,6 +1271,7 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
+ 	struct nvme_dev *dev = nvmeq->dev;
+ 	struct request *abort_req;
+ 	struct nvme_command cmd;
++	bool shutdown = false;
+ 	u32 csts = readl(dev->bar + NVME_REG_CSTS);
+ 
+ 	/* If PCI error recovery process is happening, we cannot reset or
+@@ -1307,12 +1308,14 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
+ 	 * shutdown, so we return BLK_EH_DONE.
+ 	 */
+ 	switch (dev->ctrl.state) {
++	case NVME_CTRL_DELETING:
++		shutdown = true;
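++		/* fall through */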
+ 	case NVME_CTRL_CONNECTING:
+ 	case NVME_CTRL_RESETTING:
+ 		dev_warn_ratelimited(dev->ctrl.device,
+ 			 "I/O %d QID %d timeout, disable controller\n",
+ 			 req->tag, nvmeq->qid);
+-		nvme_dev_disable(dev, false);
++		nvme_dev_disable(dev, shutdown);
+ 		nvme_req(req)->flags |= NVME_REQ_CANCELLED;
+ 		return BLK_EH_DONE;
+ 	default:
+@@ -2438,8 +2441,11 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
+ 	 * must flush all entered requests to their failed completion to avoid
+ 	 * deadlocking blk-mq hot-cpu notifier.
+ 	 */
+-	if (shutdown)
++	if (shutdown) {
+ 		nvme_start_queues(&dev->ctrl);
++		if (dev->ctrl.admin_q && !blk_queue_dying(dev->ctrl.admin_q))
++			blk_mq_unquiesce_queue(dev->ctrl.admin_q);
++	}
+ 	mutex_unlock(&dev->shutdown_lock);
+ }
+ 
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index f24008b66826..53dc37574b5d 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -1166,7 +1166,7 @@ EXPORT_SYMBOL_GPL(nvmem_cell_put);
+ static void nvmem_shift_read_buffer_in_place(struct nvmem_cell *cell, void *buf)
+ {
+ 	u8 *p, *b;
+-	int i, bit_offset = cell->bit_offset;
++	int i, extra, bit_offset = cell->bit_offset;
+ 
+ 	p = b = buf;
+ 	if (bit_offset) {
+@@ -1181,11 +1181,16 @@ static void nvmem_shift_read_buffer_in_place(struct nvmem_cell *cell, void *buf)
+ 			p = b;
+ 			*b++ >>= bit_offset;
+ 		}
+-
+-		/* result fits in less bytes */
+-		if (cell->bytes != DIV_ROUND_UP(cell->nbits, BITS_PER_BYTE))
+-			*p-- = 0;
++	} else {
++		/* point to the msb */
++		p += cell->bytes - 1;
+ 	}
++
++	/* result fits in less bytes */
++	extra = cell->bytes - DIV_ROUND_UP(cell->nbits, BITS_PER_BYTE);
++	while (--extra >= 0)
++		*p-- = 0;
++
+ 	/* clear msb bits if any leftover in the last byte */
+ 	*p &= GENMASK((cell->nbits%BITS_PER_BYTE) - 1, 0);
+ }
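The hunk above hoists the "result fits in less bytes" zeroing out of the
bit_offset branch so that the shifted and unshifted paths trim trailing bytes
the same way. For reference, the in-place little-endian right shift it builds
on looks roughly like this (a standalone sketch, not the kernel helper
itself):

#include <stdint.h>

/* Shift a little-endian byte buffer right by off bits (0 < off < 8),
 * in place: buf[0] is the least significant byte. */
static void le_shift_right(uint8_t *buf, int len, int off)
{
	for (int i = 0; i < len; i++) {
		buf[i] >>= off;
		if (i + 1 < len)
			buf[i] |= (uint8_t)(buf[i + 1] << (8 - off));
	}
}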
+diff --git a/drivers/nvmem/sunxi_sid.c b/drivers/nvmem/sunxi_sid.c
+index 570a2e354f30..ef3d776bb16e 100644
+--- a/drivers/nvmem/sunxi_sid.c
++++ b/drivers/nvmem/sunxi_sid.c
+@@ -222,8 +222,10 @@ static const struct sunxi_sid_cfg sun50i_a64_cfg = {
+ static const struct of_device_id sunxi_sid_of_match[] = {
+ 	{ .compatible = "allwinner,sun4i-a10-sid", .data = &sun4i_a10_cfg },
+ 	{ .compatible = "allwinner,sun7i-a20-sid", .data = &sun7i_a20_cfg },
++	{ .compatible = "allwinner,sun8i-a83t-sid", .data = &sun50i_a64_cfg },
+ 	{ .compatible = "allwinner,sun8i-h3-sid", .data = &sun8i_h3_cfg },
+ 	{ .compatible = "allwinner,sun50i-a64-sid", .data = &sun50i_a64_cfg },
++	{ .compatible = "allwinner,sun50i-h5-sid", .data = &sun50i_a64_cfg },
+ 	{/* sentinel */},
+ };
+ MODULE_DEVICE_TABLE(of, sunxi_sid_of_match);
+diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c
+index 14f2b0b4ed5e..ba6907af9dcb 100644
+--- a/drivers/pci/controller/dwc/pci-keystone.c
++++ b/drivers/pci/controller/dwc/pci-keystone.c
+@@ -705,6 +705,7 @@ static void ks_pcie_setup_interrupts(struct keystone_pcie *ks_pcie)
+ 		ks_pcie_enable_error_irq(ks_pcie);
+ }
+ 
++#ifdef CONFIG_ARM
+ /*
+  * When a PCI device does not exist during config cycles, keystone host gets a
+  * bus error instead of returning 0xffffffff. This handler always returns 0
+@@ -724,6 +725,7 @@ static int ks_pcie_fault(unsigned long addr, unsigned int fsr,
+ 
+ 	return 0;
+ }
++#endif
+ 
+ static int __init ks_pcie_init_id(struct keystone_pcie *ks_pcie)
+ {
+@@ -766,12 +768,14 @@ static int __init ks_pcie_host_init(struct pcie_port *pp)
+ 	if (ret < 0)
+ 		return ret;
+ 
++#ifdef CONFIG_ARM
+ 	/*
+ 	 * PCIe access errors that result into OCP errors are caught by ARM as
+ 	 * "External aborts"
+ 	 */
+ 	hook_fault_code(17, ks_pcie_fault, SIGBUS, 0,
+ 			"Asynchronous external abort");
++#endif
+ 
+ 	return 0;
+ }
+@@ -873,6 +877,10 @@ static int ks_pcie_enable_phy(struct keystone_pcie *ks_pcie)
+ 	int num_lanes = ks_pcie->num_lanes;
+ 
+ 	for (i = 0; i < num_lanes; i++) {
++		ret = phy_reset(ks_pcie->phy[i]);
++		if (ret < 0)
++			goto err_phy;
++
+ 		ret = phy_init(ks_pcie->phy[i]);
+ 		if (ret < 0)
+ 			goto err_phy;
+diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
+index 24f5a775ad34..e3cce5d203f3 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
++++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
+@@ -397,6 +397,7 @@ int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
+ {
+ 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+ 	struct pci_epc *epc = ep->epc;
++	unsigned int aligned_offset;
+ 	u16 msg_ctrl, msg_data;
+ 	u32 msg_addr_lower, msg_addr_upper, reg;
+ 	u64 msg_addr;
+@@ -422,13 +423,15 @@ int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
+ 		reg = ep->msi_cap + PCI_MSI_DATA_32;
+ 		msg_data = dw_pcie_readw_dbi(pci, reg);
+ 	}
+-	msg_addr = ((u64) msg_addr_upper) << 32 | msg_addr_lower;
++	aligned_offset = msg_addr_lower & (epc->mem->page_size - 1);
++	msg_addr = ((u64)msg_addr_upper) << 32 |
++			(msg_addr_lower & ~aligned_offset);
+ 	ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr,
+ 				  epc->mem->page_size);
+ 	if (ret)
+ 		return ret;
+ 
+-	writel(msg_data | (interrupt_num - 1), ep->msi_mem);
++	writel(msg_data | (interrupt_num - 1), ep->msi_mem + aligned_offset);
+ 
+ 	dw_pcie_ep_unmap_addr(epc, func_no, ep->msi_mem_phys);
+ 
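The aligned_offset arithmetic above keeps the doorbell mapping page-aligned
and performs the MSI write at the offset within that page. A worked example
of the masking, assuming page_size is a power of two:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t page_size = 0x1000;		/* assumed power of two */
	uint32_t msg_addr_lower = 0x12345abc;
	uint32_t aligned_offset = msg_addr_lower & (page_size - 1);
	uint64_t map_base = msg_addr_lower & ~(uint64_t)aligned_offset;

	/* map_base = 0x12345000, aligned_offset = 0xabc: the window is
	 * mapped page-aligned and the write lands at base + offset. */
	printf("base=%#llx off=%#x\n", (unsigned long long)map_base,
	       aligned_offset);
	return 0;
}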
+diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
+index 25087d3c9a82..214e883aa0a1 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-host.c
++++ b/drivers/pci/controller/dwc/pcie-designware-host.c
+@@ -303,20 +303,24 @@ void dw_pcie_free_msi(struct pcie_port *pp)
+ 
+ 	irq_domain_remove(pp->msi_domain);
+ 	irq_domain_remove(pp->irq_domain);
++
++	if (pp->msi_page)
++		__free_page(pp->msi_page);
+ }
+ 
+ void dw_pcie_msi_init(struct pcie_port *pp)
+ {
+ 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+ 	struct device *dev = pci->dev;
+-	struct page *page;
+ 	u64 msi_target;
+ 
+-	page = alloc_page(GFP_KERNEL);
+-	pp->msi_data = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
++	pp->msi_page = alloc_page(GFP_KERNEL);
++	pp->msi_data = dma_map_page(dev, pp->msi_page, 0, PAGE_SIZE,
++				    DMA_FROM_DEVICE);
+ 	if (dma_mapping_error(dev, pp->msi_data)) {
+ 		dev_err(dev, "Failed to map MSI data\n");
+-		__free_page(page);
++		__free_page(pp->msi_page);
++		pp->msi_page = NULL;
+ 		return;
+ 	}
+ 	msi_target = (u64)pp->msi_data;
+@@ -439,7 +443,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ 	if (ret)
+ 		pci->num_viewport = 2;
+ 
+-	if (IS_ENABLED(CONFIG_PCI_MSI) && pci_msi_enabled()) {
++	if (pci_msi_enabled()) {
+ 		/*
+ 		 * If a specific SoC driver needs to change the
+ 		 * default number of vectors, it needs to implement
+@@ -477,7 +481,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ 	if (pp->ops->host_init) {
+ 		ret = pp->ops->host_init(pp);
+ 		if (ret)
+-			goto error;
++			goto err_free_msi;
+ 	}
+ 
+ 	pp->root_bus_nr = pp->busn->start;
+@@ -491,7 +495,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ 
+ 	ret = pci_scan_root_bus_bridge(bridge);
+ 	if (ret)
+-		goto error;
++		goto err_free_msi;
+ 
+ 	bus = bridge->bus;
+ 
+@@ -507,6 +511,9 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ 	pci_bus_add_devices(bus);
+ 	return 0;
+ 
++err_free_msi:
++	if (pci_msi_enabled() && !pp->ops->msi_host_init)
++		dw_pcie_free_msi(pp);
+ error:
+ 	pci_free_host_bridge(bridge);
+ 	return ret;
+@@ -646,17 +653,19 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
+ 
+ 	dw_pcie_setup(pci);
+ 
+-	num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
+-
+-	/* Initialize IRQ Status array */
+-	for (ctrl = 0; ctrl < num_ctrls; ctrl++) {
+-		pp->irq_mask[ctrl] = ~0;
+-		dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK +
+-					(ctrl * MSI_REG_CTRL_BLOCK_SIZE),
+-				    4, pp->irq_mask[ctrl]);
+-		dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE +
+-					(ctrl * MSI_REG_CTRL_BLOCK_SIZE),
+-				    4, ~0);
++	if (!pp->ops->msi_host_init) {
++		num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
++
++		/* Initialize IRQ Status array */
++		for (ctrl = 0; ctrl < num_ctrls; ctrl++) {
++			pp->irq_mask[ctrl] = ~0;
++			dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK +
++					    (ctrl * MSI_REG_CTRL_BLOCK_SIZE),
++					    4, pp->irq_mask[ctrl]);
++			dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE +
++					    (ctrl * MSI_REG_CTRL_BLOCK_SIZE),
++					    4, ~0);
++		}
+ 	}
+ 
+ 	/* Setup RC BARs */
+diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
+index 377f4c0b52da..6fb0a1879932 100644
+--- a/drivers/pci/controller/dwc/pcie-designware.h
++++ b/drivers/pci/controller/dwc/pcie-designware.h
+@@ -179,6 +179,7 @@ struct pcie_port {
+ 	struct irq_domain	*irq_domain;
+ 	struct irq_domain	*msi_domain;
+ 	dma_addr_t		msi_data;
++	struct page		*msi_page;
+ 	u32			num_vectors;
+ 	u32			irq_mask[MAX_MSI_CTRLS];
+ 	raw_spinlock_t		lock;
+diff --git a/drivers/pci/controller/pcie-rcar.c b/drivers/pci/controller/pcie-rcar.c
+index 6a4e435bd35f..9b9c677ad3a0 100644
+--- a/drivers/pci/controller/pcie-rcar.c
++++ b/drivers/pci/controller/pcie-rcar.c
+@@ -892,7 +892,7 @@ static int rcar_pcie_enable_msi(struct rcar_pcie *pcie)
+ {
+ 	struct device *dev = pcie->dev;
+ 	struct rcar_msi *msi = &pcie->msi;
+-	unsigned long base;
++	phys_addr_t base;
+ 	int err, i;
+ 
+ 	mutex_init(&msi->lock);
+@@ -931,10 +931,14 @@ static int rcar_pcie_enable_msi(struct rcar_pcie *pcie)
+ 
+ 	/* setup MSI data target */
+ 	msi->pages = __get_free_pages(GFP_KERNEL, 0);
++	if (!msi->pages) {
++		err = -ENOMEM;
++		goto err;
++	}
+ 	base = virt_to_phys((void *)msi->pages);
+ 
+-	rcar_pci_write_reg(pcie, base | MSIFE, PCIEMSIALR);
+-	rcar_pci_write_reg(pcie, 0, PCIEMSIAUR);
++	rcar_pci_write_reg(pcie, lower_32_bits(base) | MSIFE, PCIEMSIALR);
++	rcar_pci_write_reg(pcie, upper_32_bits(base), PCIEMSIAUR);
+ 
+ 	/* enable all MSI interrupts */
+ 	rcar_pci_write_reg(pcie, 0xffffffff, PCIEMSIIER);
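Switching base from unsigned long to phys_addr_t matters on 32-bit R-Car
parts with LPAE-style addressing: the physical address can exceed 32 bits
even when unsigned long cannot hold it. A sketch of the split write, with
hypothetical register pointers standing in for the driver's MMIO accessors:

#include <stdint.h>

typedef uint64_t phys_addr_t;

#define lower_32_bits(n)	((uint32_t)(n))
#define upper_32_bits(n)	((uint32_t)((n) >> 32))

/* Hypothetical MMIO writer for illustration only. */
static void write_reg(uint32_t val, volatile uint32_t *reg) { *reg = val; }

static void program_msi_target(phys_addr_t base, volatile uint32_t *alr,
			       volatile uint32_t *aur, uint32_t msife)
{
	/* An 'unsigned long' would truncate base on 32-bit builds; the
	 * explicit helpers program both halves of the address. */
	write_reg(lower_32_bits(base) | msife, alr);
	write_reg(upper_32_bits(base), aur);
}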
+diff --git a/drivers/pci/controller/pcie-xilinx.c b/drivers/pci/controller/pcie-xilinx.c
+index 9bd1a35cd5d8..5bf3af3b28e6 100644
+--- a/drivers/pci/controller/pcie-xilinx.c
++++ b/drivers/pci/controller/pcie-xilinx.c
+@@ -336,14 +336,19 @@ static const struct irq_domain_ops msi_domain_ops = {
+  * xilinx_pcie_enable_msi - Enable MSI support
+  * @port: PCIe port information
+  */
+-static void xilinx_pcie_enable_msi(struct xilinx_pcie_port *port)
++static int xilinx_pcie_enable_msi(struct xilinx_pcie_port *port)
+ {
+ 	phys_addr_t msg_addr;
+ 
+ 	port->msi_pages = __get_free_pages(GFP_KERNEL, 0);
++	if (!port->msi_pages)
++		return -ENOMEM;
++
+ 	msg_addr = virt_to_phys((void *)port->msi_pages);
+ 	pcie_write(port, 0x0, XILINX_PCIE_REG_MSIBASE1);
+ 	pcie_write(port, msg_addr, XILINX_PCIE_REG_MSIBASE2);
++
++	return 0;
+ }
+ 
+ /* INTx Functions */
+@@ -498,6 +503,7 @@ static int xilinx_pcie_init_irq_domain(struct xilinx_pcie_port *port)
+ 	struct device *dev = port->dev;
+ 	struct device_node *node = dev->of_node;
+ 	struct device_node *pcie_intc_node;
++	int ret;
+ 
+ 	/* Setup INTx */
+ 	pcie_intc_node = of_get_next_child(node, NULL);
+@@ -526,7 +532,9 @@ static int xilinx_pcie_init_irq_domain(struct xilinx_pcie_port *port)
+ 			return -ENODEV;
+ 		}
+ 
+-		xilinx_pcie_enable_msi(port);
++		ret = xilinx_pcie_enable_msi(port);
++		if (ret)
++			return ret;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/pci/hotplug/rpadlpar_core.c b/drivers/pci/hotplug/rpadlpar_core.c
+index e2356a9c7088..182f9e3443ee 100644
+--- a/drivers/pci/hotplug/rpadlpar_core.c
++++ b/drivers/pci/hotplug/rpadlpar_core.c
+@@ -51,6 +51,7 @@ static struct device_node *find_vio_slot_node(char *drc_name)
+ 		if (rc == 0)
+ 			break;
+ 	}
++	of_node_put(parent);
+ 
+ 	return dn;
+ }
+@@ -71,6 +72,7 @@ static struct device_node *find_php_slot_pci_node(char *drc_name,
+ 	return np;
+ }
+ 
++/* Returns a device_node with its reference count incremented */
+ static struct device_node *find_dlpar_node(char *drc_name, int *node_type)
+ {
+ 	struct device_node *dn;
+@@ -306,6 +308,7 @@ int dlpar_add_slot(char *drc_name)
+ 			rc = dlpar_add_phb(drc_name, dn);
+ 			break;
+ 	}
++	of_node_put(dn);
+ 
+ 	printk(KERN_INFO "%s: slot %s added\n", DLPAR_MODULE_NAME, drc_name);
+ exit:
+@@ -439,6 +442,7 @@ int dlpar_remove_slot(char *drc_name)
+ 			rc = dlpar_remove_pci_slot(drc_name, dn);
+ 			break;
+ 	}
++	of_node_put(dn);
+ 	vm_unmap_aliases();
+ 
+ 	printk(KERN_INFO "%s: slot %s removed\n", DLPAR_MODULE_NAME, drc_name);
+diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c
+index e22766c79fe9..c2fa830b8ef5 100644
+--- a/drivers/pci/switch/switchtec.c
++++ b/drivers/pci/switch/switchtec.c
+@@ -1162,7 +1162,8 @@ static int mask_event(struct switchtec_dev *stdev, int eid, int idx)
+ 	if (!(hdr & SWITCHTEC_EVENT_OCCURRED && hdr & SWITCHTEC_EVENT_EN_IRQ))
+ 		return 0;
+ 
+-	if (eid == SWITCHTEC_IOCTL_EVENT_LINK_STATE)
++	if (eid == SWITCHTEC_IOCTL_EVENT_LINK_STATE ||
++	    eid == SWITCHTEC_IOCTL_EVENT_MRPC_COMP)
+ 		return 0;
+ 
+ 	dev_dbg(&stdev->dev, "%s: %d %d %x\n", __func__, eid, idx, hdr);
+diff --git a/drivers/pinctrl/intel/pinctrl-intel.c b/drivers/pinctrl/intel/pinctrl-intel.c
+index 3b1818184207..70638b74f9d6 100644
+--- a/drivers/pinctrl/intel/pinctrl-intel.c
++++ b/drivers/pinctrl/intel/pinctrl-intel.c
+@@ -1466,7 +1466,7 @@ static bool intel_pinctrl_should_save(struct intel_pinctrl *pctrl, unsigned int
+ 	return false;
+ }
+ 
+-int intel_pinctrl_suspend(struct device *dev)
++int intel_pinctrl_suspend_noirq(struct device *dev)
+ {
+ 	struct intel_pinctrl *pctrl = dev_get_drvdata(dev);
+ 	struct intel_community_context *communities;
+@@ -1505,7 +1505,7 @@ int intel_pinctrl_suspend(struct device *dev)
+ 
+ 	return 0;
+ }
+-EXPORT_SYMBOL_GPL(intel_pinctrl_suspend);
++EXPORT_SYMBOL_GPL(intel_pinctrl_suspend_noirq);
+ 
+ static void intel_gpio_irq_init(struct intel_pinctrl *pctrl)
+ {
+@@ -1527,7 +1527,7 @@ static void intel_gpio_irq_init(struct intel_pinctrl *pctrl)
+ 	}
+ }
+ 
+-int intel_pinctrl_resume(struct device *dev)
++int intel_pinctrl_resume_noirq(struct device *dev)
+ {
+ 	struct intel_pinctrl *pctrl = dev_get_drvdata(dev);
+ 	const struct intel_community_context *communities;
+@@ -1589,7 +1589,7 @@ int intel_pinctrl_resume(struct device *dev)
+ 
+ 	return 0;
+ }
+-EXPORT_SYMBOL_GPL(intel_pinctrl_resume);
++EXPORT_SYMBOL_GPL(intel_pinctrl_resume_noirq);
+ #endif
+ 
+ MODULE_AUTHOR("Mathias Nyman <mathias.nyman@linux.intel.com>");
+diff --git a/drivers/pinctrl/intel/pinctrl-intel.h b/drivers/pinctrl/intel/pinctrl-intel.h
+index b8a07d37d18f..a8e958f1dcf5 100644
+--- a/drivers/pinctrl/intel/pinctrl-intel.h
++++ b/drivers/pinctrl/intel/pinctrl-intel.h
+@@ -177,13 +177,14 @@ int intel_pinctrl_probe_by_hid(struct platform_device *pdev);
+ int intel_pinctrl_probe_by_uid(struct platform_device *pdev);
+ 
+ #ifdef CONFIG_PM_SLEEP
+-int intel_pinctrl_suspend(struct device *dev);
+-int intel_pinctrl_resume(struct device *dev);
++int intel_pinctrl_suspend_noirq(struct device *dev);
++int intel_pinctrl_resume_noirq(struct device *dev);
+ #endif
+ 
+-#define INTEL_PINCTRL_PM_OPS(_name)						  \
+-const struct dev_pm_ops _name = {						  \
+-	SET_LATE_SYSTEM_SLEEP_PM_OPS(intel_pinctrl_suspend, intel_pinctrl_resume) \
++#define INTEL_PINCTRL_PM_OPS(_name)					\
++const struct dev_pm_ops _name = {					\
++	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(intel_pinctrl_suspend_noirq,	\
++				      intel_pinctrl_resume_noirq)	\
+ }
+ 
+ #endif /* PINCTRL_INTEL_H */
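After this rename, a SoC driver consumes the macro exactly as before; only
the callbacks now run in the noirq phase of system sleep. A hypothetical
user, sketched with an invented driver name (the shape matches how the
in-tree Intel pinctrl drivers use the macro):

/* Hypothetical consumer of INTEL_PINCTRL_PM_OPS(). */
static INTEL_PINCTRL_PM_OPS(example_pinctrl_pm_ops);

static struct platform_driver example_pinctrl_driver = {
	.probe = intel_pinctrl_probe_by_uid,
	.driver = {
		.name = "example-pinctrl",
		.pm = &example_pinctrl_pm_ops,
	},
};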
+diff --git a/drivers/platform/chrome/cros_ec_proto.c b/drivers/platform/chrome/cros_ec_proto.c
+index 97a068dff192..3bb954997ebc 100644
+--- a/drivers/platform/chrome/cros_ec_proto.c
++++ b/drivers/platform/chrome/cros_ec_proto.c
+@@ -56,6 +56,17 @@ static int send_command(struct cros_ec_device *ec_dev,
+ 	else
+ 		xfer_fxn = ec_dev->cmd_xfer;
+ 
++	if (!xfer_fxn) {
++		/*
++		 * This can happen if a communication error occurred and
++		 * the EC is trying to use protocol v2 on an underlying
++		 * communication mechanism that does not support it.
++		 */
++		dev_err_once(ec_dev->dev,
++			     "missing EC transfer API, cannot send command\n");
++		return -EIO;
++	}
++
+ 	ret = (*xfer_fxn)(ec_dev, msg);
+ 	if (msg->result == EC_RES_IN_PROGRESS) {
+ 		int i;
+diff --git a/drivers/platform/x86/intel_pmc_ipc.c b/drivers/platform/x86/intel_pmc_ipc.c
+index 7964ba22ef8d..d37cbd1cf58c 100644
+--- a/drivers/platform/x86/intel_pmc_ipc.c
++++ b/drivers/platform/x86/intel_pmc_ipc.c
+@@ -771,13 +771,17 @@ static int ipc_create_pmc_devices(void)
+ 	if (ret) {
+ 		dev_err(ipcdev.dev, "Failed to add punit platform device\n");
+ 		platform_device_unregister(ipcdev.tco_dev);
++		return ret;
+ 	}
+ 
+ 	if (!ipcdev.telem_res_inval) {
+ 		ret = ipc_create_telemetry_device();
+-		if (ret)
++		if (ret) {
+ 			dev_warn(ipcdev.dev,
+ 				"Failed to add telemetry platform device\n");
++			platform_device_unregister(ipcdev.punit_dev);
++			platform_device_unregister(ipcdev.tco_dev);
++		}
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/power/supply/cpcap-battery.c b/drivers/power/supply/cpcap-battery.c
+index 6887870ba32c..453baa8f7d73 100644
+--- a/drivers/power/supply/cpcap-battery.c
++++ b/drivers/power/supply/cpcap-battery.c
+@@ -82,7 +82,7 @@ struct cpcap_battery_config {
+ };
+ 
+ struct cpcap_coulomb_counter_data {
+-	s32 sample;		/* 24-bits */
++	s32 sample;		/* 24 or 32 bits */
+ 	s32 accumulator;
+ 	s16 offset;		/* 10-bits */
+ };
+@@ -213,7 +213,7 @@ static int cpcap_battery_get_current(struct cpcap_battery_ddata *ddata)
+  * TI or ST coulomb counter in the PMIC.
+  */
+ static int cpcap_battery_cc_raw_div(struct cpcap_battery_ddata *ddata,
+-				    u32 sample, s32 accumulator,
++				    s32 sample, s32 accumulator,
+ 				    s16 offset, u32 divider)
+ {
+ 	s64 acc;
+@@ -224,7 +224,6 @@ static int cpcap_battery_cc_raw_div(struct cpcap_battery_ddata *ddata,
+ 	if (!divider)
+ 		return 0;
+ 
+-	sample &= 0xffffff;		/* 24-bits, unsigned */
+ 	offset &= 0x7ff;		/* 10-bits, signed */
+ 
+ 	switch (ddata->vendor) {
+@@ -259,7 +258,7 @@ static int cpcap_battery_cc_raw_div(struct cpcap_battery_ddata *ddata,
+ 
+ /* 3600000μAms = 1μAh */
+ static int cpcap_battery_cc_to_uah(struct cpcap_battery_ddata *ddata,
+-				   u32 sample, s32 accumulator,
++				   s32 sample, s32 accumulator,
+ 				   s16 offset)
+ {
+ 	return cpcap_battery_cc_raw_div(ddata, sample,
+@@ -268,7 +267,7 @@ static int cpcap_battery_cc_to_uah(struct cpcap_battery_ddata *ddata,
+ }
+ 
+ static int cpcap_battery_cc_to_ua(struct cpcap_battery_ddata *ddata,
+-				  u32 sample, s32 accumulator,
++				  s32 sample, s32 accumulator,
+ 				  s16 offset)
+ {
+ 	return cpcap_battery_cc_raw_div(ddata, sample,
+@@ -312,6 +311,8 @@ cpcap_battery_read_accumulated(struct cpcap_battery_ddata *ddata,
+ 	/* Sample value CPCAP_REG_CCS1 & 2 */
+ 	ccd->sample = (buf[1] & 0x0fff) << 16;
+ 	ccd->sample |= buf[0];
++	if (ddata->vendor == CPCAP_VENDOR_TI)
++		ccd->sample = sign_extend32(ccd->sample, 23);
+ 
+ 	/* Accumulator value CPCAP_REG_CCA1 & 2 */
+ 	ccd->accumulator = ((s16)buf[3]) << 16;
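Note that sign_extend32() takes the value first and the zero-based index of
the sign bit second, so a 24-bit sample uses index 23 (the call above is
written accordingly). A portable equivalent, as a standalone sketch:

#include <stdint.h>

/* Sign-extend a 24-bit two's-complement sample to 32 bits, the
 * portable equivalent of the kernel's sign_extend32(raw, 23). */
static int32_t sign_extend_24(uint32_t raw)
{
	const uint32_t sign = 1u << 23;

	raw &= (1u << 24) - 1;		/* keep the low 24 bits */
	return (int32_t)((raw ^ sign) - sign);
}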
+diff --git a/drivers/power/supply/max14656_charger_detector.c b/drivers/power/supply/max14656_charger_detector.c
+index b91b1d2999dc..d19307f791c6 100644
+--- a/drivers/power/supply/max14656_charger_detector.c
++++ b/drivers/power/supply/max14656_charger_detector.c
+@@ -280,6 +280,13 @@ static int max14656_probe(struct i2c_client *client,
+ 
+ 	INIT_DELAYED_WORK(&chip->irq_work, max14656_irq_worker);
+ 
++	chip->detect_psy = devm_power_supply_register(dev,
++		       &chip->psy_desc, &psy_cfg);
++	if (IS_ERR(chip->detect_psy)) {
++		dev_err(dev, "power_supply_register failed\n");
++		return -EINVAL;
++	}
++
+ 	ret = devm_request_irq(dev, chip->irq, max14656_irq,
+ 			       IRQF_TRIGGER_FALLING,
+ 			       MAX14656_NAME, chip);
+@@ -289,13 +296,6 @@ static int max14656_probe(struct i2c_client *client,
+ 	}
+ 	enable_irq_wake(chip->irq);
+ 
+-	chip->detect_psy = devm_power_supply_register(dev,
+-		       &chip->psy_desc, &psy_cfg);
+-	if (IS_ERR(chip->detect_psy)) {
+-		dev_err(dev, "power_supply_register failed\n");
+-		return -EINVAL;
+-	}
+-
+ 	schedule_delayed_work(&chip->irq_work, msecs_to_jiffies(2000));
+ 
+ 	return 0;
+diff --git a/drivers/pwm/core.c b/drivers/pwm/core.c
+index 3149204567f3..8c9200a0df5e 100644
+--- a/drivers/pwm/core.c
++++ b/drivers/pwm/core.c
+@@ -311,10 +311,12 @@ int pwmchip_add_with_polarity(struct pwm_chip *chip,
+ 	if (IS_ENABLED(CONFIG_OF))
+ 		of_pwmchip_add(chip);
+ 
+-	pwmchip_sysfs_export(chip);
+-
+ out:
+ 	mutex_unlock(&pwm_lock);
++
++	if (!ret)
++		pwmchip_sysfs_export(chip);
++
+ 	return ret;
+ }
+ EXPORT_SYMBOL_GPL(pwmchip_add_with_polarity);
+@@ -348,7 +350,7 @@ int pwmchip_remove(struct pwm_chip *chip)
+ 	unsigned int i;
+ 	int ret = 0;
+ 
+-	pwmchip_sysfs_unexport_children(chip);
++	pwmchip_sysfs_unexport(chip);
+ 
+ 	mutex_lock(&pwm_lock);
+ 
+@@ -368,8 +370,6 @@ int pwmchip_remove(struct pwm_chip *chip)
+ 
+ 	free_pwms(chip);
+ 
+-	pwmchip_sysfs_unexport(chip);
+-
+ out:
+ 	mutex_unlock(&pwm_lock);
+ 	return ret;
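Both core.c hunks move the sysfs export and unexport outside of pwm_lock:
registering sysfs nodes can call back into the PWM core, so doing it under
the lock risks deadlock. The resulting shape, sketched with an illustrative
helper name for the locked part:

	mutex_lock(&pwm_lock);
	ret = add_chip_locked(chip);	/* illustrative name */
	mutex_unlock(&pwm_lock);

	/* sysfs registration may re-enter the PWM core, so it runs
	 * outside pwm_lock and only after the chip was added. */
	if (!ret)
		pwmchip_sysfs_export(chip);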
+diff --git a/drivers/pwm/pwm-meson.c b/drivers/pwm/pwm-meson.c
+index c1ed641b3e26..f6e738ad7bd9 100644
+--- a/drivers/pwm/pwm-meson.c
++++ b/drivers/pwm/pwm-meson.c
+@@ -111,6 +111,10 @@ struct meson_pwm {
+ 	const struct meson_pwm_data *data;
+ 	void __iomem *base;
+ 	u8 inverter_mask;
++	/*
++	 * Protects write access to the REG_MISC_AB register, which is
++	 * shared between the two PWMs.
++	 */
+ 	spinlock_t lock;
+ };
+ 
+@@ -235,6 +239,7 @@ static void meson_pwm_enable(struct meson_pwm *meson,
+ {
+ 	u32 value, clk_shift, clk_enable, enable;
+ 	unsigned int offset;
++	unsigned long flags;
+ 
+ 	switch (id) {
+ 	case 0:
+@@ -255,6 +260,8 @@ static void meson_pwm_enable(struct meson_pwm *meson,
+ 		return;
+ 	}
+ 
++	spin_lock_irqsave(&meson->lock, flags);
++
+ 	value = readl(meson->base + REG_MISC_AB);
+ 	value &= ~(MISC_CLK_DIV_MASK << clk_shift);
+ 	value |= channel->pre_div << clk_shift;
+@@ -267,11 +274,14 @@ static void meson_pwm_enable(struct meson_pwm *meson,
+ 	value = readl(meson->base + REG_MISC_AB);
+ 	value |= enable;
+ 	writel(value, meson->base + REG_MISC_AB);
++
++	spin_unlock_irqrestore(&meson->lock, flags);
+ }
+ 
+ static void meson_pwm_disable(struct meson_pwm *meson, unsigned int id)
+ {
+ 	u32 value, enable;
++	unsigned long flags;
+ 
+ 	switch (id) {
+ 	case 0:
+@@ -286,9 +296,13 @@ static void meson_pwm_disable(struct meson_pwm *meson, unsigned int id)
+ 		return;
+ 	}
+ 
++	spin_lock_irqsave(&meson->lock, flags);
++
+ 	value = readl(meson->base + REG_MISC_AB);
+ 	value &= ~enable;
+ 	writel(value, meson->base + REG_MISC_AB);
++
++	spin_unlock_irqrestore(&meson->lock, flags);
+ }
+ 
+ static int meson_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+@@ -296,19 +310,16 @@ static int meson_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ {
+ 	struct meson_pwm_channel *channel = pwm_get_chip_data(pwm);
+ 	struct meson_pwm *meson = to_meson_pwm(chip);
+-	unsigned long flags;
+ 	int err = 0;
+ 
+ 	if (!state)
+ 		return -EINVAL;
+ 
+-	spin_lock_irqsave(&meson->lock, flags);
+-
+ 	if (!state->enabled) {
+ 		meson_pwm_disable(meson, pwm->hwpwm);
+ 		channel->state.enabled = false;
+ 
+-		goto unlock;
++		return 0;
+ 	}
+ 
+ 	if (state->period != channel->state.period ||
+@@ -329,7 +340,7 @@ static int meson_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 		err = meson_pwm_calc(meson, channel, pwm->hwpwm,
+ 				     state->duty_cycle, state->period);
+ 		if (err < 0)
+-			goto unlock;
++			return err;
+ 
+ 		channel->state.polarity = state->polarity;
+ 		channel->state.period = state->period;
+@@ -341,9 +352,7 @@ static int meson_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 		channel->state.enabled = true;
+ 	}
+ 
+-unlock:
+-	spin_unlock_irqrestore(&meson->lock, flags);
+-	return err;
++	return 0;
+ }
+ 
+ static void meson_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
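With the hunks above, each hardware touch of the shared REG_MISC_AB register
takes the spinlock itself, instead of one long critical section around the
whole apply path. The guarded read-modify-write shape, sketched against the
struct layout shown earlier (the helper name is illustrative):

static void rmw_shared_reg(struct meson_pwm *meson, u32 clear, u32 set)
{
	unsigned long flags;
	u32 value;

	/* All REG_MISC_AB updates must be atomic with respect to the
	 * sibling PWM channel, hence the irqsave lock. */
	spin_lock_irqsave(&meson->lock, flags);
	value = readl(meson->base + REG_MISC_AB);
	value &= ~clear;
	value |= set;
	writel(value, meson->base + REG_MISC_AB);
	spin_unlock_irqrestore(&meson->lock, flags);
}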
+diff --git a/drivers/pwm/pwm-tiehrpwm.c b/drivers/pwm/pwm-tiehrpwm.c
+index f7b8a86fa5c5..ad4a40c0f27c 100644
+--- a/drivers/pwm/pwm-tiehrpwm.c
++++ b/drivers/pwm/pwm-tiehrpwm.c
+@@ -382,6 +382,8 @@ static void ehrpwm_pwm_disable(struct pwm_chip *chip, struct pwm_device *pwm)
+ 	}
+ 
+ 	/* Update shadow register first before modifying active register */
++	ehrpwm_modify(pc->mmio_base, AQSFRC, AQSFRC_RLDCSF_MASK,
++		      AQSFRC_RLDCSF_ZRO);
+ 	ehrpwm_modify(pc->mmio_base, AQCSFRC, aqcsfrc_mask, aqcsfrc_val);
+ 	/*
+ 	 * Changes to immediate action on Action Qualifier. This puts
+diff --git a/drivers/pwm/sysfs.c b/drivers/pwm/sysfs.c
+index ceb233dd6048..13d9bd54dfce 100644
+--- a/drivers/pwm/sysfs.c
++++ b/drivers/pwm/sysfs.c
+@@ -409,19 +409,6 @@ void pwmchip_sysfs_export(struct pwm_chip *chip)
+ }
+ 
+ void pwmchip_sysfs_unexport(struct pwm_chip *chip)
+-{
+-	struct device *parent;
+-
+-	parent = class_find_device(&pwm_class, NULL, chip,
+-				   pwmchip_sysfs_match);
+-	if (parent) {
+-		/* for class_find_device() */
+-		put_device(parent);
+-		device_unregister(parent);
+-	}
+-}
+-
+-void pwmchip_sysfs_unexport_children(struct pwm_chip *chip)
+ {
+ 	struct device *parent;
+ 	unsigned int i;
+@@ -439,6 +426,7 @@ void pwmchip_sysfs_unexport_children(struct pwm_chip *chip)
+ 	}
+ 
+ 	put_device(parent);
++	device_unregister(parent);
+ }
+ 
+ static int __init pwm_sysfs_init(void)
+diff --git a/drivers/rapidio/rio_cm.c b/drivers/rapidio/rio_cm.c
+index cf45829585cb..b29fc258eeba 100644
+--- a/drivers/rapidio/rio_cm.c
++++ b/drivers/rapidio/rio_cm.c
+@@ -2147,6 +2147,14 @@ static int riocm_add_mport(struct device *dev,
+ 	mutex_init(&cm->rx_lock);
+ 	riocm_rx_fill(cm, RIOCM_RX_RING_SIZE);
+ 	cm->rx_wq = create_workqueue(DRV_NAME "/rxq");
++	if (!cm->rx_wq) {
++		riocm_error("failed to allocate IBMBOX_%d on %s",
++			    cmbox, mport->name);
++		rio_release_outb_mbox(mport, cmbox);
++		kfree(cm);
++		return -ENOMEM;
++	}
++
+ 	INIT_WORK(&cm->rx_work, rio_ibmsg_handler);
+ 
+ 	cm->tx_slot = 0;
+diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
+index c6fdad12428e..2e377814d7c1 100644
+--- a/drivers/scsi/qla2xxx/qla_gs.c
++++ b/drivers/scsi/qla2xxx/qla_gs.c
+@@ -3031,6 +3031,8 @@ static void qla24xx_async_gpsc_sp_done(void *s, int res)
+ 	    "Async done-%s res %x, WWPN %8phC \n",
+ 	    sp->name, res, fcport->port_name);
+ 
++	fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE);
++
+ 	if (res == QLA_FUNCTION_TIMEOUT)
+ 		return;
+ 
+@@ -4370,6 +4372,7 @@ int qla24xx_async_gnnid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ 
+ done_free_sp:
+ 	sp->free(sp);
++	fcport->flags &= ~FCF_ASYNC_SENT;
+ done:
+ 	return rval;
+ }
+diff --git a/drivers/soc/mediatek/mtk-pmic-wrap.c b/drivers/soc/mediatek/mtk-pmic-wrap.c
+index 8236a6c87e19..2f632e8790f7 100644
+--- a/drivers/soc/mediatek/mtk-pmic-wrap.c
++++ b/drivers/soc/mediatek/mtk-pmic-wrap.c
+@@ -1281,7 +1281,7 @@ static bool pwrap_is_pmic_cipher_ready(struct pmic_wrapper *wrp)
+ static int pwrap_init_cipher(struct pmic_wrapper *wrp)
+ {
+ 	int ret;
+-	u32 rdata;
++	u32 rdata = 0;
+ 
+ 	pwrap_writel(wrp, 0x1, PWRAP_CIPHER_SWRST);
+ 	pwrap_writel(wrp, 0x0, PWRAP_CIPHER_SWRST);
+diff --git a/drivers/soc/renesas/renesas-soc.c b/drivers/soc/renesas/renesas-soc.c
+index 4af96e668a2f..3299cf5365f3 100644
+--- a/drivers/soc/renesas/renesas-soc.c
++++ b/drivers/soc/renesas/renesas-soc.c
+@@ -335,6 +335,9 @@ static int __init renesas_soc_init(void)
+ 		/* R-Car M3-W ES1.1 incorrectly identifies as ES2.0 */
+ 		if ((product & 0x7fff) == 0x5210)
+ 			product ^= 0x11;
++		/* R-Car M3-W ES1.3 incorrectly identifies as ES2.1 */
++		if ((product & 0x7fff) == 0x5211)
++			product ^= 0x12;
+ 		if (soc->id && ((product >> 8) & 0xff) != soc->id) {
+ 			pr_warn("SoC mismatch (product = 0x%x)\n", product);
+ 			return -ENODEV;
+diff --git a/drivers/soc/rockchip/grf.c b/drivers/soc/rockchip/grf.c
+index 96882ffde67e..3b81e1d75a97 100644
+--- a/drivers/soc/rockchip/grf.c
++++ b/drivers/soc/rockchip/grf.c
+@@ -66,9 +66,11 @@ static const struct rockchip_grf_info rk3228_grf __initconst = {
+ };
+ 
+ #define RK3288_GRF_SOC_CON0		0x244
++#define RK3288_GRF_SOC_CON2		0x24c
+ 
+ static const struct rockchip_grf_value rk3288_defaults[] __initconst = {
+ 	{ "jtag switching", RK3288_GRF_SOC_CON0, HIWORD_UPDATE(0, 1, 12) },
++	{ "pwm select", RK3288_GRF_SOC_CON2, HIWORD_UPDATE(1, 1, 0) },
+ };
+ 
+ static const struct rockchip_grf_info rk3288_grf __initconst = {
+diff --git a/drivers/soc/tegra/pmc.c b/drivers/soc/tegra/pmc.c
+index 0df258518693..2fba8cdbe3bd 100644
+--- a/drivers/soc/tegra/pmc.c
++++ b/drivers/soc/tegra/pmc.c
+@@ -1999,7 +1999,7 @@ static int tegra_pmc_probe(struct platform_device *pdev)
+ 	if (IS_ENABLED(CONFIG_DEBUG_FS)) {
+ 		err = tegra_powergate_debugfs_init();
+ 		if (err < 0)
+-			return err;
++			goto cleanup_sysfs;
+ 	}
+ 
+ 	err = register_restart_handler(&tegra_pmc_restart_handler);
+@@ -2030,6 +2030,9 @@ cleanup_restart_handler:
+ 	unregister_restart_handler(&tegra_pmc_restart_handler);
+ cleanup_debugfs:
+ 	debugfs_remove(pmc->debugfs);
++cleanup_sysfs:
++	device_remove_file(&pdev->dev, &dev_attr_reset_reason);
++	device_remove_file(&pdev->dev, &dev_attr_reset_level);
+ 	return err;
+ }
+ 
+diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c
+index d2076f2f468f..a1a63b617ae1 100644
+--- a/drivers/spi/spi-pxa2xx.c
++++ b/drivers/spi/spi-pxa2xx.c
+@@ -1491,12 +1491,7 @@ static int pxa2xx_spi_get_port_id(struct acpi_device *adev)
+ 
+ static bool pxa2xx_spi_idma_filter(struct dma_chan *chan, void *param)
+ {
+-	struct device *dev = param;
+-
+-	if (dev != chan->device->dev->parent)
+-		return false;
+-
+-	return true;
++	return param == chan->device->dev;
+ }
+ 
+ #endif /* CONFIG_PCI */
+diff --git a/drivers/staging/media/rockchip/vpu/rockchip_vpu_drv.c b/drivers/staging/media/rockchip/vpu/rockchip_vpu_drv.c
+index 962412c79b91..d489b5dd54d7 100644
+--- a/drivers/staging/media/rockchip/vpu/rockchip_vpu_drv.c
++++ b/drivers/staging/media/rockchip/vpu/rockchip_vpu_drv.c
+@@ -481,15 +481,18 @@ static int rockchip_vpu_probe(struct platform_device *pdev)
+ 	return 0;
+ err_video_dev_unreg:
+ 	if (vpu->vfd_enc) {
++		v4l2_m2m_unregister_media_controller(vpu->m2m_dev);
+ 		video_unregister_device(vpu->vfd_enc);
+ 		video_device_release(vpu->vfd_enc);
+ 	}
+ err_m2m_rel:
++	media_device_cleanup(&vpu->mdev);
+ 	v4l2_m2m_release(vpu->m2m_dev);
+ err_v4l2_unreg:
+ 	v4l2_device_unregister(&vpu->v4l2_dev);
+ err_clk_unprepare:
+ 	clk_bulk_unprepare(vpu->variant->num_clocks, vpu->clocks);
++	pm_runtime_dont_use_autosuspend(vpu->dev);
+ 	pm_runtime_disable(vpu->dev);
+ 	return ret;
+ }
+@@ -501,15 +504,16 @@ static int rockchip_vpu_remove(struct platform_device *pdev)
+ 	v4l2_info(&vpu->v4l2_dev, "Removing %s\n", pdev->name);
+ 
+ 	media_device_unregister(&vpu->mdev);
+-	v4l2_m2m_unregister_media_controller(vpu->m2m_dev);
+-	v4l2_m2m_release(vpu->m2m_dev);
+-	media_device_cleanup(&vpu->mdev);
+ 	if (vpu->vfd_enc) {
++		v4l2_m2m_unregister_media_controller(vpu->m2m_dev);
+ 		video_unregister_device(vpu->vfd_enc);
+ 		video_device_release(vpu->vfd_enc);
+ 	}
++	media_device_cleanup(&vpu->mdev);
++	v4l2_m2m_release(vpu->m2m_dev);
+ 	v4l2_device_unregister(&vpu->v4l2_dev);
+ 	clk_bulk_unprepare(vpu->variant->num_clocks, vpu->clocks);
++	pm_runtime_dont_use_autosuspend(vpu->dev);
+ 	pm_runtime_disable(vpu->dev);
+ 	return 0;
+ }
+diff --git a/drivers/thermal/qcom/tsens.c b/drivers/thermal/qcom/tsens.c
+index f1ec9bbe4717..54b2c0e3c4f4 100644
+--- a/drivers/thermal/qcom/tsens.c
++++ b/drivers/thermal/qcom/tsens.c
+@@ -160,7 +160,8 @@ static int tsens_probe(struct platform_device *pdev)
+ 	if (tmdev->ops->calibrate) {
+ 		ret = tmdev->ops->calibrate(tmdev);
+ 		if (ret < 0) {
+-			dev_err(dev, "tsens calibration failed\n");
++			if (ret != -EPROBE_DEFER)
++				dev_err(dev, "tsens calibration failed\n");
+ 			return ret;
+ 		}
+ 	}
+diff --git a/drivers/thermal/rcar_gen3_thermal.c b/drivers/thermal/rcar_gen3_thermal.c
+index 88fa41cf16e8..0c9e7a5bacdf 100644
+--- a/drivers/thermal/rcar_gen3_thermal.c
++++ b/drivers/thermal/rcar_gen3_thermal.c
+@@ -331,6 +331,9 @@ MODULE_DEVICE_TABLE(of, rcar_gen3_thermal_dt_ids);
+ static int rcar_gen3_thermal_remove(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
++	struct rcar_gen3_thermal_priv *priv = dev_get_drvdata(dev);
++
++	rcar_thermal_irq_set(priv, false);
+ 
+ 	pm_runtime_put(dev);
+ 	pm_runtime_disable(dev);
+diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
+index d31b975dd3fd..284e8d052fc3 100644
+--- a/drivers/tty/serial/8250/8250_dw.c
++++ b/drivers/tty/serial/8250/8250_dw.c
+@@ -365,7 +365,7 @@ static bool dw8250_fallback_dma_filter(struct dma_chan *chan, void *param)
+ 
+ static bool dw8250_idma_filter(struct dma_chan *chan, void *param)
+ {
+-	return param == chan->device->dev->parent;
++	return param == chan->device->dev;
+ }
+ 
+ /*
+@@ -434,7 +434,7 @@ static void dw8250_quirks(struct uart_port *p, struct dw8250_data *data)
+ 		data->uart_16550_compatible = true;
+ 	}
+ 
+-	/* Platforms with iDMA */
++	/* Platforms with iDMA 64-bit */
+ 	if (platform_get_resource_byname(to_platform_device(p->dev),
+ 					 IORESOURCE_MEM, "lpss_priv")) {
+ 		data->dma.rx_param = p->dev->parent;
+diff --git a/drivers/usb/host/ohci-da8xx.c b/drivers/usb/host/ohci-da8xx.c
+index ca8a94f15ac0..113401b7d70d 100644
+--- a/drivers/usb/host/ohci-da8xx.c
++++ b/drivers/usb/host/ohci-da8xx.c
+@@ -206,12 +206,23 @@ static int ohci_da8xx_regulator_event(struct notifier_block *nb,
+ 	return 0;
+ }
+ 
+-static irqreturn_t ohci_da8xx_oc_handler(int irq, void *data)
++static irqreturn_t ohci_da8xx_oc_thread(int irq, void *data)
+ {
+ 	struct da8xx_ohci_hcd *da8xx_ohci = data;
++	struct device *dev = da8xx_ohci->hcd->self.controller;
++	int ret;
+ 
+-	if (gpiod_get_value(da8xx_ohci->oc_gpio))
+-		gpiod_set_value(da8xx_ohci->vbus_gpio, 0);
++	if (gpiod_get_value_cansleep(da8xx_ohci->oc_gpio)) {
++		if (da8xx_ohci->vbus_gpio) {
++			gpiod_set_value_cansleep(da8xx_ohci->vbus_gpio, 0);
++		} else if (da8xx_ohci->vbus_reg) {
++			ret = regulator_disable(da8xx_ohci->vbus_reg);
++			if (ret)
++				dev_err(dev,
++					"Failed to disable regulator: %d\n",
++					ret);
++		}
++	}
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -438,8 +449,9 @@ static int ohci_da8xx_probe(struct platform_device *pdev)
+ 		if (oc_irq < 0)
+ 			goto err;
+ 
+-		error = devm_request_irq(dev, oc_irq, ohci_da8xx_oc_handler,
+-				IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
++		error = devm_request_threaded_irq(dev, oc_irq, NULL,
++				ohci_da8xx_oc_thread, IRQF_TRIGGER_RISING |
++				IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ 				"OHCI over-current indicator", da8xx_ohci);
+ 		if (error)
+ 			goto err;
+diff --git a/drivers/usb/typec/tcpm/fusb302.c b/drivers/usb/typec/tcpm/fusb302.c
+index e9344997329c..d8b128ed08af 100644
+--- a/drivers/usb/typec/tcpm/fusb302.c
++++ b/drivers/usb/typec/tcpm/fusb302.c
+@@ -634,6 +634,8 @@ static int fusb302_set_toggling(struct fusb302_chip *chip,
+ 			return ret;
+ 		chip->intr_togdone = false;
+ 	} else {
++		/* Datasheet says vconn MUST be off when toggling */
++		WARN(chip->vconn_on, "Vconn is on during toggle start");
+ 		/* unmask TOGDONE interrupt */
+ 		ret = fusb302_i2c_clear_bits(chip, FUSB_REG_MASKA,
+ 					     FUSB_REG_MASKA_TOGDONE);
+diff --git a/drivers/vfio/pci/vfio_pci_nvlink2.c b/drivers/vfio/pci/vfio_pci_nvlink2.c
+index 32f695ffe128..50fe3c4f7feb 100644
+--- a/drivers/vfio/pci/vfio_pci_nvlink2.c
++++ b/drivers/vfio/pci/vfio_pci_nvlink2.c
+@@ -472,6 +472,8 @@ int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev)
+ 	return 0;
+ 
+ free_exit:
++	if (data->base)
++		memunmap(data->base);
+ 	kfree(data);
+ 
+ 	return ret;
+diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
+index a3030cdf3c18..aa0e14d25ac3 100644
+--- a/drivers/vfio/vfio.c
++++ b/drivers/vfio/vfio.c
+@@ -34,6 +34,7 @@
+ #include <linux/uaccess.h>
+ #include <linux/vfio.h>
+ #include <linux/wait.h>
++#include <linux/sched/signal.h>
+ 
+ #define DRIVER_VERSION	"0.3"
+ #define DRIVER_AUTHOR	"Alex Williamson <alex.williamson@redhat.com>"
+@@ -904,30 +905,17 @@ void *vfio_device_data(struct vfio_device *device)
+ }
+ EXPORT_SYMBOL_GPL(vfio_device_data);
+ 
+-/* Given a referenced group, check if it contains the device */
+-static bool vfio_dev_present(struct vfio_group *group, struct device *dev)
+-{
+-	struct vfio_device *device;
+-
+-	device = vfio_group_get_device(group, dev);
+-	if (!device)
+-		return false;
+-
+-	vfio_device_put(device);
+-	return true;
+-}
+-
+ /*
+  * Decrement the device reference count and wait for the device to be
+  * removed.  Open file descriptors for the device... */
+ void *vfio_del_group_dev(struct device *dev)
+ {
++	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+ 	struct vfio_device *device = dev_get_drvdata(dev);
+ 	struct vfio_group *group = device->group;
+ 	void *device_data = device->device_data;
+ 	struct vfio_unbound_dev *unbound;
+ 	unsigned int i = 0;
+-	long ret;
+ 	bool interrupted = false;
+ 
+ 	/*
+@@ -964,6 +952,8 @@ void *vfio_del_group_dev(struct device *dev)
+ 	 * interval with counter to allow the driver to take escalating
+ 	 * measures to release the device if it has the ability to do so.
+ 	 */
++	add_wait_queue(&vfio.release_q, &wait);
++
+ 	do {
+ 		device = vfio_group_get_device(group, dev);
+ 		if (!device)
+@@ -975,12 +965,10 @@ void *vfio_del_group_dev(struct device *dev)
+ 		vfio_device_put(device);
+ 
+ 		if (interrupted) {
+-			ret = wait_event_timeout(vfio.release_q,
+-					!vfio_dev_present(group, dev), HZ * 10);
++			wait_woken(&wait, TASK_UNINTERRUPTIBLE, HZ * 10);
+ 		} else {
+-			ret = wait_event_interruptible_timeout(vfio.release_q,
+-					!vfio_dev_present(group, dev), HZ * 10);
+-			if (ret == -ERESTARTSYS) {
++			wait_woken(&wait, TASK_INTERRUPTIBLE, HZ * 10);
++			if (signal_pending(current)) {
+ 				interrupted = true;
+ 				dev_warn(dev,
+ 					 "Device is currently in use, task"
+@@ -989,8 +977,10 @@ void *vfio_del_group_dev(struct device *dev)
+ 					 current->comm, task_pid_nr(current));
+ 			}
+ 		}
+-	} while (ret <= 0);
+ 
++	} while (1);
++
++	remove_wait_queue(&vfio.release_q, &wait);
+ 	/*
+ 	 * In order to support multiple devices per group, devices can be
+ 	 * plucked from the group while other devices in the group are still
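The rewritten loop above replaces wait_event_interruptible_timeout() with
the wait_woken() pattern, which closes the missed-wakeup race between
testing the condition and going to sleep. A minimal kernel-C sketch of the
pattern, with wq and device_present() as illustrative names:

	DEFINE_WAIT_FUNC(wait, woken_wake_function);

	add_wait_queue(&wq, &wait);
	while (device_present(dev)) {
		/* Re-check after each wakeup or 10s timeout; the woken
		 * flag inside 'wait' makes a wake_up(&wq) issued between
		 * the check and the sleep still terminate the wait. */
		wait_woken(&wait, TASK_UNINTERRUPTIBLE, 10 * HZ);
	}
	remove_wait_queue(&wq, &wait);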
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index c59b23f6e9ba..a9c69ae30878 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -1069,7 +1069,7 @@ static void fbcon_init(struct vc_data *vc, int init)
+ 
+ 	cap = info->flags;
+ 
+-	if (console_loglevel <= CONSOLE_LOGLEVEL_QUIET)
++	if (logo_shown < 0 && console_loglevel <= CONSOLE_LOGLEVEL_QUIET)
+ 		logo_shown = FBCON_LOGO_DONTSHOW;
+ 
+ 	if (vc != svc || logo_shown == FBCON_LOGO_DONTSHOW ||
+diff --git a/drivers/video/fbdev/hgafb.c b/drivers/video/fbdev/hgafb.c
+index 463028543173..59e1cae57948 100644
+--- a/drivers/video/fbdev/hgafb.c
++++ b/drivers/video/fbdev/hgafb.c
+@@ -285,6 +285,8 @@ static int hga_card_detect(void)
+ 	hga_vram_len  = 0x08000;
+ 
+ 	hga_vram = ioremap(0xb0000, hga_vram_len);
++	if (!hga_vram)
++		goto error;
+ 
+ 	if (request_region(0x3b0, 12, "hgafb"))
+ 		release_io_ports = 1;
+diff --git a/drivers/video/fbdev/imsttfb.c b/drivers/video/fbdev/imsttfb.c
+index 4b9615e4ce74..35bba3c2036d 100644
+--- a/drivers/video/fbdev/imsttfb.c
++++ b/drivers/video/fbdev/imsttfb.c
+@@ -1515,6 +1515,11 @@ static int imsttfb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	info->fix.smem_start = addr;
+ 	info->screen_base = (__u8 *)ioremap(addr, par->ramdac == IBM ?
+ 					    0x400000 : 0x800000);
++	if (!info->screen_base) {
++		release_mem_region(addr, size);
++		framebuffer_release(info);
++		return -ENOMEM;
++	}
+ 	info->fix.mmio_start = addr + 0x800000;
+ 	par->dc_regs = ioremap(addr + 0x800000, 0x1000);
+ 	par->cmap_regs_phys = addr + 0x840000;
+diff --git a/drivers/watchdog/Kconfig b/drivers/watchdog/Kconfig
+index 242eea859637..fa325b49672e 100644
+--- a/drivers/watchdog/Kconfig
++++ b/drivers/watchdog/Kconfig
+@@ -2028,6 +2028,7 @@ comment "Watchdog Pretimeout Governors"
+ 
+ config WATCHDOG_PRETIMEOUT_GOV
+ 	bool "Enable watchdog pretimeout governors"
++	depends on WATCHDOG_CORE
+ 	help
+ 	  This option allows you to select watchdog pretimeout governors.
+ 
+diff --git a/drivers/watchdog/imx2_wdt.c b/drivers/watchdog/imx2_wdt.c
+index 2b52514eaa86..7e7bdcbbc741 100644
+--- a/drivers/watchdog/imx2_wdt.c
++++ b/drivers/watchdog/imx2_wdt.c
+@@ -178,8 +178,10 @@ static void __imx2_wdt_set_timeout(struct watchdog_device *wdog,
+ static int imx2_wdt_set_timeout(struct watchdog_device *wdog,
+ 				unsigned int new_timeout)
+ {
+-	__imx2_wdt_set_timeout(wdog, new_timeout);
++	unsigned int actual;
+ 
++	actual = min(new_timeout, wdog->max_hw_heartbeat_ms / 1000);
++	__imx2_wdt_set_timeout(wdog, actual);
+ 	wdog->timeout = new_timeout;
+ 	return 0;
+ }
+diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c
+index 39843fa7e11b..920d350df37b 100644
+--- a/fs/configfs/dir.c
++++ b/fs/configfs/dir.c
+@@ -1755,12 +1755,19 @@ int configfs_register_group(struct config_group *parent_group,
+ 
+ 	inode_lock_nested(d_inode(parent), I_MUTEX_PARENT);
+ 	ret = create_default_group(parent_group, group);
+-	if (!ret) {
+-		spin_lock(&configfs_dirent_lock);
+-		configfs_dir_set_ready(group->cg_item.ci_dentry->d_fsdata);
+-		spin_unlock(&configfs_dirent_lock);
+-	}
++	if (ret)
++		goto err_out;
++
++	spin_lock(&configfs_dirent_lock);
++	configfs_dir_set_ready(group->cg_item.ci_dentry->d_fsdata);
++	spin_unlock(&configfs_dirent_lock);
++	inode_unlock(d_inode(parent));
++	return 0;
++err_out:
+ 	inode_unlock(d_inode(parent));
++	mutex_lock(&subsys->su_mutex);
++	unlink_group(group);
++	mutex_unlock(&subsys->su_mutex);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(configfs_register_group);
+diff --git a/fs/dax.c b/fs/dax.c
+index 83009875308c..f74386293632 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -814,7 +814,7 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
+ 				goto unlock_pmd;
+ 
+ 			flush_cache_page(vma, address, pfn);
+-			pmd = pmdp_huge_clear_flush(vma, address, pmdp);
++			pmd = pmdp_invalidate(vma, address, pmdp);
+ 			pmd = pmd_wrprotect(pmd);
+ 			pmd = pmd_mkclean(pmd);
+ 			set_pmd_at(vma->vm_mm, address, pmdp, pmd);
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index a98e1b02279e..935ebdb9cf47 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -1009,13 +1009,11 @@ retry:
+ 	if (inode) {
+ 		unsigned long cur_ino = inode->i_ino;
+ 
+-		if (is_dir)
+-			F2FS_I(inode)->cp_task = current;
++		F2FS_I(inode)->cp_task = current;
+ 
+ 		filemap_fdatawrite(inode->i_mapping);
+ 
+-		if (is_dir)
+-			F2FS_I(inode)->cp_task = NULL;
++		F2FS_I(inode)->cp_task = NULL;
+ 
+ 		iput(inode);
+ 		/* We need to give cpu to another writers. */
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index d87dfa5aa112..9d3c11e09a03 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -2038,7 +2038,8 @@ out:
+ 	}
+ 
+ 	unlock_page(page);
+-	if (!S_ISDIR(inode->i_mode) && !IS_NOQUOTA(inode))
++	if (!S_ISDIR(inode->i_mode) && !IS_NOQUOTA(inode) &&
++					!F2FS_I(inode)->cp_task)
+ 		f2fs_balance_fs(sbi, need_balance_fs);
+ 
+ 	if (unlikely(f2fs_cp_error(sbi))) {
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 7bea1bc6589f..e2cf567dcbd7 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1789,6 +1789,7 @@ enospc:
+ 	return -ENOSPC;
+ }
+ 
++void f2fs_msg(struct super_block *sb, const char *level, const char *fmt, ...);
+ static inline void dec_valid_block_count(struct f2fs_sb_info *sbi,
+ 						struct inode *inode,
+ 						block_t count)
+@@ -1797,13 +1798,21 @@ static inline void dec_valid_block_count(struct f2fs_sb_info *sbi,
+ 
+ 	spin_lock(&sbi->stat_lock);
+ 	f2fs_bug_on(sbi, sbi->total_valid_block_count < (block_t) count);
+-	f2fs_bug_on(sbi, inode->i_blocks < sectors);
+ 	sbi->total_valid_block_count -= (block_t)count;
+ 	if (sbi->reserved_blocks &&
+ 		sbi->current_reserved_blocks < sbi->reserved_blocks)
+ 		sbi->current_reserved_blocks = min(sbi->reserved_blocks,
+ 					sbi->current_reserved_blocks + count);
+ 	spin_unlock(&sbi->stat_lock);
++	if (unlikely(inode->i_blocks < sectors)) {
++		f2fs_msg(sbi->sb, KERN_WARNING,
++			"Inconsistent i_blocks, ino:%lu, iblocks:%llu, sectors:%llu",
++			inode->i_ino,
++			(unsigned long long)inode->i_blocks,
++			(unsigned long long)sectors);
++		set_sbi_flag(sbi, SBI_NEED_FSCK);
++		return;
++	}
+ 	f2fs_i_blocks_write(inode, count, false, true);
+ }
+ 
+@@ -2020,7 +2029,6 @@ static inline void dec_valid_node_count(struct f2fs_sb_info *sbi,
+ 
+ 	f2fs_bug_on(sbi, !sbi->total_valid_block_count);
+ 	f2fs_bug_on(sbi, !sbi->total_valid_node_count);
+-	f2fs_bug_on(sbi, !is_inode && !inode->i_blocks);
+ 
+ 	sbi->total_valid_node_count--;
+ 	sbi->total_valid_block_count--;
+@@ -2030,10 +2038,19 @@ static inline void dec_valid_node_count(struct f2fs_sb_info *sbi,
+ 
+ 	spin_unlock(&sbi->stat_lock);
+ 
+-	if (is_inode)
++	if (is_inode) {
+ 		dquot_free_inode(inode);
+-	else
++	} else {
++		if (unlikely(inode->i_blocks == 0)) {
++			f2fs_msg(sbi->sb, KERN_WARNING,
++				"Inconsistent i_blocks, ino:%lu, iblocks:%llu",
++				inode->i_ino,
++				(unsigned long long)inode->i_blocks);
++			set_sbi_flag(sbi, SBI_NEED_FSCK);
++			return;
++		}
+ 		f2fs_i_blocks_write(inode, 1, false, true);
++	}
+ }
+ 
+ static inline unsigned int valid_node_count(struct f2fs_sb_info *sbi)
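Several of these f2fs hunks share one theme: a fatal f2fs_bug_on() assertion
is downgraded to a logged warning plus SBI_NEED_FSCK, so a corrupted image
degrades gracefully instead of crashing the kernel. The recurring shape, as
a sketch using the helpers visible in the hunks:

	if (unlikely(inode->i_blocks == 0)) {
		f2fs_msg(sbi->sb, KERN_WARNING,
			"Inconsistent i_blocks, ino:%lu",
			inode->i_ino);
		set_sbi_flag(sbi, SBI_NEED_FSCK);	/* repair offline */
		return;					/* skip unsafe update */
	}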
+@@ -2570,7 +2587,9 @@ static inline void *inline_xattr_addr(struct inode *inode, struct page *page)
+ 
+ static inline int inline_xattr_size(struct inode *inode)
+ {
+-	return get_inline_xattr_addrs(inode) * sizeof(__le32);
++	if (f2fs_has_inline_xattr(inode))
++		return get_inline_xattr_addrs(inode) * sizeof(__le32);
++	return 0;
+ }
+ 
+ static inline int f2fs_has_inline_data(struct inode *inode)
+@@ -2817,7 +2836,6 @@ static inline void f2fs_update_iostat(struct f2fs_sb_info *sbi,
+ 
+ bool f2fs_is_valid_blkaddr(struct f2fs_sb_info *sbi,
+ 					block_t blkaddr, int type);
+-void f2fs_msg(struct super_block *sb, const char *level, const char *fmt, ...);
+ static inline void verify_blkaddr(struct f2fs_sb_info *sbi,
+ 					block_t blkaddr, int type)
+ {
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index ab764bd106de..a66a8752e5f6 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1175,6 +1175,7 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi,
+ 				"type [%d, %d] in SSA and SIT",
+ 				segno, type, GET_SUM_TYPE((&sum->footer)));
+ 			set_sbi_flag(sbi, SBI_NEED_FSCK);
++			f2fs_stop_checkpoint(sbi, false);
+ 			goto skip;
+ 		}
+ 
+diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
+index bb6a152310ef..404d2462a0fe 100644
+--- a/fs/f2fs/inline.c
++++ b/fs/f2fs/inline.c
+@@ -420,6 +420,14 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage,
+ 	stat_dec_inline_dir(dir);
+ 	clear_inode_flag(dir, FI_INLINE_DENTRY);
+ 
++	/*
++	 * should retrieve reserved space which was used to keep
++	 * inline_dentry's structure for backward compatibility.
++	 */
++	if (!f2fs_sb_has_flexible_inline_xattr(F2FS_I_SB(dir)) &&
++			!f2fs_has_inline_xattr(dir))
++		F2FS_I(dir)->i_inline_xattr_size = 0;
++
+ 	f2fs_i_depth_write(dir, 1);
+ 	if (i_size_read(dir) < PAGE_SIZE)
+ 		f2fs_i_size_write(dir, PAGE_SIZE);
+@@ -501,6 +509,15 @@ static int f2fs_move_rehashed_dirents(struct inode *dir, struct page *ipage,
+ 
+ 	stat_dec_inline_dir(dir);
+ 	clear_inode_flag(dir, FI_INLINE_DENTRY);
++
++	/*
++	 * should retrieve reserved space which was used to keep
++	 * inline_dentry's structure for backward compatibility.
++	 */
++	if (!f2fs_sb_has_flexible_inline_xattr(F2FS_I_SB(dir)) &&
++			!f2fs_has_inline_xattr(dir))
++		F2FS_I(dir)->i_inline_xattr_size = 0;
++
+ 	kvfree(backup_dentry);
+ 	return 0;
+ recover:
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index e7f2e8759315..b53952a15ffa 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -177,8 +177,8 @@ bool f2fs_inode_chksum_verify(struct f2fs_sb_info *sbi, struct page *page)
+ 
+ 	if (provided != calculated)
+ 		f2fs_msg(sbi->sb, KERN_WARNING,
+-			"checksum invalid, ino = %x, %x vs. %x",
+-			ino_of_node(page), provided, calculated);
++			"checksum invalid, nid = %lu, ino_of_node = %x, %x vs. %x",
++			page->index, ino_of_node(page), provided, calculated);
+ 
+ 	return provided == calculated;
+ }
+@@ -488,6 +488,7 @@ make_now:
+ 	return inode;
+ 
+ bad_inode:
++	f2fs_inode_synced(inode);
+ 	iget_failed(inode);
+ 	trace_f2fs_iget_exit(inode, ret);
+ 	return ERR_PTR(ret);
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 3f99ab288695..e29d5f6735ae 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -1179,8 +1179,14 @@ int f2fs_remove_inode_page(struct inode *inode)
+ 		f2fs_put_dnode(&dn);
+ 		return -EIO;
+ 	}
+-	f2fs_bug_on(F2FS_I_SB(inode),
+-			inode->i_blocks != 0 && inode->i_blocks != 8);
++
++	if (unlikely(inode->i_blocks != 0 && inode->i_blocks != 8)) {
++		f2fs_msg(F2FS_I_SB(inode)->sb, KERN_WARNING,
++			"Inconsistent i_blocks, ino:%lu, iblocks:%llu",
++			inode->i_ino,
++			(unsigned long long)inode->i_blocks);
++		set_sbi_flag(F2FS_I_SB(inode), SBI_NEED_FSCK);
++	}
+ 
+ 	/* will put inode & node pages */
+ 	err = truncate_node(&dn);
+@@ -1275,9 +1281,10 @@ static int read_node_page(struct page *page, int op_flags)
+ 	int err;
+ 
+ 	if (PageUptodate(page)) {
+-#ifdef CONFIG_F2FS_CHECK_FS
+-		f2fs_bug_on(sbi, !f2fs_inode_chksum_verify(sbi, page));
+-#endif
++		if (!f2fs_inode_chksum_verify(sbi, page)) {
++			ClearPageUptodate(page);
++			return -EBADMSG;
++		}
+ 		return LOCKED_PAGE;
+ 	}
+ 
+@@ -2076,6 +2083,9 @@ static bool add_free_nid(struct f2fs_sb_info *sbi,
+ 	if (unlikely(nid == 0))
+ 		return false;
+ 
++	if (unlikely(f2fs_check_nid_range(sbi, nid)))
++		return false;
++
+ 	i = f2fs_kmem_cache_alloc(free_nid_slab, GFP_NOFS);
+ 	i->nid = nid;
+ 	i->state = FREE_NID;
+diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
+index e3883db868d8..b14c718139a9 100644
+--- a/fs/f2fs/recovery.c
++++ b/fs/f2fs/recovery.c
+@@ -325,8 +325,10 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head,
+ 			break;
+ 		}
+ 
+-		if (!is_recoverable_dnode(page))
++		if (!is_recoverable_dnode(page)) {
++			f2fs_put_page(page, 1);
+ 			break;
++		}
+ 
+ 		if (!is_fsync_dnode(page))
+ 			goto next;
+@@ -338,8 +340,10 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head,
+ 			if (!check_only &&
+ 					IS_INODE(page) && is_dent_dnode(page)) {
+ 				err = f2fs_recover_inode_page(sbi, page);
+-				if (err)
++				if (err) {
++					f2fs_put_page(page, 1);
+ 					break;
++				}
+ 				quota_inode = true;
+ 			}
+ 
+@@ -355,6 +359,7 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head,
+ 					err = 0;
+ 					goto next;
+ 				}
++				f2fs_put_page(page, 1);
+ 				break;
+ 			}
+ 		}
+@@ -370,6 +375,7 @@ next:
+ 				"%s: detect looped node chain, "
+ 				"blkaddr:%u, next:%u",
+ 				__func__, blkaddr, next_blkaddr_of_node(page));
++			f2fs_put_page(page, 1);
+ 			err = -EINVAL;
+ 			break;
+ 		}
+@@ -380,7 +386,6 @@ next:
+ 
+ 		f2fs_ra_meta_pages_cond(sbi, blkaddr);
+ 	}
+-	f2fs_put_page(page, 1);
+ 	return err;
+ }
+ 
+@@ -546,7 +551,15 @@ retry_dn:
+ 		goto err;
+ 
+ 	f2fs_bug_on(sbi, ni.ino != ino_of_node(page));
+-	f2fs_bug_on(sbi, ofs_of_node(dn.node_page) != ofs_of_node(page));
++
++	if (ofs_of_node(dn.node_page) != ofs_of_node(page)) {
++		f2fs_msg(sbi->sb, KERN_WARNING,
++			"Inconsistent ofs_of_node, ino:%lu, ofs:%u, %u",
++			inode->i_ino, ofs_of_node(dn.node_page),
++			ofs_of_node(page));
++		err = -EFAULT;
++		goto err;
++	}
+ 
+ 	for (; start < end; start++, dn.ofs_in_node++) {
+ 		block_t src, dest;
+@@ -666,8 +679,10 @@ static int recover_data(struct f2fs_sb_info *sbi, struct list_head *inode_list,
+ 		 */
+ 		if (IS_INODE(page)) {
+ 			err = recover_inode(entry->inode, page);
+-			if (err)
++			if (err) {
++				f2fs_put_page(page, 1);
+ 				break;
++			}
+ 		}
+ 		if (entry->last_dentry == blkaddr) {
+ 			err = recover_dentry(entry->inode, page, dir_list);
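The recovery.c hunks above all make the same change: each early exit from the node-chain walk now drops its own page reference instead of relying on the single f2fs_put_page() that used to sit after the loop, where it could operate on a page already released along the normal path. A minimal userspace sketch of the pattern, with invented get_page()/put_page() stand-ins rather than the real f2fs helpers:

#include <stdlib.h>

struct page { int refs; };

static struct page *get_page(void)
{
	struct page *p = malloc(sizeof(*p));

	p->refs = 1;
	return p;
}

static void put_page(struct page *p)
{
	if (--p->refs == 0)
		free(p);
}

static int walk_chain(int nblocks)
{
	int err = 0;

	for (int blk = 0; blk < nblocks; blk++) {
		struct page *page = get_page();

		if (blk == 3) {			/* some early-exit condition */
			put_page(page);		/* drop this iteration's ref */
			err = -1;
			break;
		}
		put_page(page);			/* normal path drops it too */
	}
	return err;				/* nothing left to put here */
}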
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index ddfa2eb7ec58..3d6efbfe180f 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -3188,13 +3188,18 @@ int f2fs_inplace_write_data(struct f2fs_io_info *fio)
+ {
+ 	int err;
+ 	struct f2fs_sb_info *sbi = fio->sbi;
++	unsigned int segno;
+ 
+ 	fio->new_blkaddr = fio->old_blkaddr;
+ 	/* i/o temperature is needed for passing down write hints */
+ 	__get_segment_type(fio);
+ 
+-	f2fs_bug_on(sbi, !IS_DATASEG(get_seg_entry(sbi,
+-			GET_SEGNO(sbi, fio->new_blkaddr))->type));
++	segno = GET_SEGNO(sbi, fio->new_blkaddr);
++
++	if (!IS_DATASEG(get_seg_entry(sbi, segno)->type)) {
++		set_sbi_flag(sbi, SBI_NEED_FSCK);
++		return -EFAULT;
++	}
+ 
+ 	stat_inc_inplace_blocks(fio->sbi);
+ 
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index 5c7ed0442d6e..6f48e0763279 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -672,7 +672,6 @@ static inline void verify_block_addr(struct f2fs_io_info *fio, block_t blk_addr)
+ static inline int check_block_count(struct f2fs_sb_info *sbi,
+ 		int segno, struct f2fs_sit_entry *raw_sit)
+ {
+-#ifdef CONFIG_F2FS_CHECK_FS
+ 	bool is_valid  = test_bit_le(0, raw_sit->valid_map) ? true : false;
+ 	int valid_blocks = 0;
+ 	int cur_pos = 0, next_pos;
+@@ -699,7 +698,7 @@ static inline int check_block_count(struct f2fs_sb_info *sbi,
+ 		set_sbi_flag(sbi, SBI_NEED_FSCK);
+ 		return -EINVAL;
+ 	}
+-#endif
++
+ 	/* check segment usage, and check boundary of a given segment number */
+ 	if (unlikely(GET_SIT_VBLOCKS(raw_sit) > sbi->blocks_per_seg
+ 					|| segno > TOTAL_SEGS(sbi) - 1)) {
+diff --git a/fs/fat/file.c b/fs/fat/file.c
+index b3bed32946b1..0e3ed79fcc3f 100644
+--- a/fs/fat/file.c
++++ b/fs/fat/file.c
+@@ -193,12 +193,17 @@ static int fat_file_release(struct inode *inode, struct file *filp)
+ int fat_file_fsync(struct file *filp, loff_t start, loff_t end, int datasync)
+ {
+ 	struct inode *inode = filp->f_mapping->host;
+-	int res, err;
++	int err;
++
++	err = __generic_file_fsync(filp, start, end, datasync);
++	if (err)
++		return err;
+ 
+-	res = generic_file_fsync(filp, start, end, datasync);
+ 	err = sync_mapping_buffers(MSDOS_SB(inode->i_sb)->fat_inode->i_mapping);
++	if (err)
++		return err;
+ 
+-	return res ? res : err;
++	return blkdev_issue_flush(inode->i_sb->s_bdev, GFP_KERNEL, NULL);
+ }
+ 
+ 
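The rewritten fat_file_fsync() stops at the first failure and, once both data and metadata are synced, explicitly flushes the device's volatile write cache, which the old res-then-err combination never did. A sketch of that shape, using stand-in helpers rather than the real VFS calls:

static int sync_file_data(void)     { return 0; } /* stands in for __generic_file_fsync() */
static int sync_fat_metadata(void)  { return 0; } /* stands in for sync_mapping_buffers() */
static int flush_device_cache(void) { return 0; } /* stands in for blkdev_issue_flush()   */

static int my_fsync(void)
{
	int err;

	err = sync_file_data();
	if (err)
		return err;

	err = sync_fat_metadata();
	if (err)
		return err;

	/* Only after data and metadata reach the device does a cache
	 * flush make the fsync durable. */
	return flush_device_cache();
}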
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index 9971a35cf1ef..1e2efd873201 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -1749,7 +1749,7 @@ static int fuse_retrieve(struct fuse_conn *fc, struct inode *inode,
+ 	offset = outarg->offset & ~PAGE_MASK;
+ 	file_size = i_size_read(inode);
+ 
+-	num = outarg->size;
++	num = min(outarg->size, fc->max_write);
+ 	if (outarg->offset > file_size)
+ 		num = 0;
+ 	else if (outarg->offset + num > file_size)
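The one-line fuse change is a classic clamp of a peer-supplied size: the retrieve reply's size field must never exceed the connection's negotiated max_write. The same idea in isolation, with an assumed MAX_WRITE limit:

#include <stdint.h>

#define MAX_WRITE 4096u		/* assumed negotiated limit */

static uint32_t clamp_request(uint32_t requested)
{
	/* Never trust a size coming from the peer; cap it first. */
	return requested < MAX_WRITE ? requested : MAX_WRITE;
}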
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 30a5687a17b6..28269a0c5037 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -2330,10 +2330,11 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx,
+ 			ctx->sq_thread_idle = HZ;
+ 
+ 		if (p->flags & IORING_SETUP_SQ_AFF) {
+-			int cpu = array_index_nospec(p->sq_thread_cpu,
+-							nr_cpu_ids);
++			int cpu = p->sq_thread_cpu;
+ 
+ 			ret = -EINVAL;
++			if (cpu >= nr_cpu_ids)
++				goto err;
+ 			if (!cpu_online(cpu))
+ 				goto err;
+ 
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index 3de42a729093..a3a3455826aa 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -2420,8 +2420,10 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
+ 	__be32 status;
+ 	int err;
+ 	struct nfs4_acl *acl = NULL;
++#ifdef CONFIG_NFSD_V4_SECURITY_LABEL
+ 	void *context = NULL;
+ 	int contextlen;
++#endif
+ 	bool contextsupport = false;
+ 	struct nfsd4_compoundres *resp = rqstp->rq_resp;
+ 	u32 minorversion = resp->cstate.minorversion;
+@@ -2906,12 +2908,14 @@ out_acl:
+ 			*p++ = cpu_to_be32(NFS4_CHANGE_TYPE_IS_TIME_METADATA);
+ 	}
+ 
++#ifdef CONFIG_NFSD_V4_SECURITY_LABEL
+ 	if (bmval2 & FATTR4_WORD2_SECURITY_LABEL) {
+ 		status = nfsd4_encode_security_label(xdr, rqstp, context,
+ 								contextlen);
+ 		if (status)
+ 			goto out;
+ 	}
++#endif
+ 
+ 	attrlen = htonl(xdr->buf->len - attrlen_offset - 4);
+ 	write_bytes_to_xdr_buf(xdr->buf, attrlen_offset, &attrlen, 4);
+diff --git a/fs/nfsd/vfs.h b/fs/nfsd/vfs.h
+index a7e107309f76..db351247892d 100644
+--- a/fs/nfsd/vfs.h
++++ b/fs/nfsd/vfs.h
+@@ -120,8 +120,11 @@ void		nfsd_put_raparams(struct file *file, struct raparms *ra);
+ 
+ static inline int fh_want_write(struct svc_fh *fh)
+ {
+-	int ret = mnt_want_write(fh->fh_export->ex_path.mnt);
++	int ret;
+ 
++	if (fh->fh_want_write)
++		return 0;
++	ret = mnt_want_write(fh->fh_export->ex_path.mnt);
+ 	if (!ret)
+ 		fh->fh_want_write = true;
+ 	return ret;
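fh_want_write() becomes idempotent here: a second call while the write reference is already held is a no-op, so paired want/put callers can no longer double-acquire. The idiom, reduced to a stand-alone sketch (mnt_want_write_stub() is invented):

#include <stdbool.h>

struct handle { bool want_write; };

static int mnt_want_write_stub(void)	/* stands in for mnt_want_write() */
{
	return 0;
}

static int handle_want_write(struct handle *h)
{
	int ret;

	if (h->want_write)		/* already held: no-op */
		return 0;
	ret = mnt_want_write_stub();
	if (!ret)
		h->want_write = true;
	return ret;
}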
+diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
+index 50e4407398d8..540a8b845145 100644
+--- a/fs/overlayfs/file.c
++++ b/fs/overlayfs/file.c
+@@ -11,6 +11,7 @@
+ #include <linux/mount.h>
+ #include <linux/xattr.h>
+ #include <linux/uio.h>
++#include <linux/uaccess.h>
+ #include "overlayfs.h"
+ 
+ static char ovl_whatisit(struct inode *inode, struct inode *realinode)
+@@ -29,10 +30,11 @@ static struct file *ovl_open_realfile(const struct file *file,
+ 	struct inode *inode = file_inode(file);
+ 	struct file *realfile;
+ 	const struct cred *old_cred;
++	int flags = file->f_flags | O_NOATIME | FMODE_NONOTIFY;
+ 
+ 	old_cred = ovl_override_creds(inode->i_sb);
+-	realfile = open_with_fake_path(&file->f_path, file->f_flags | O_NOATIME,
+-				       realinode, current_cred());
++	realfile = open_with_fake_path(&file->f_path, flags, realinode,
++				       current_cred());
+ 	revert_creds(old_cred);
+ 
+ 	pr_debug("open(%p[%pD2/%c], 0%o) -> (%p, 0%o)\n",
+@@ -50,7 +52,7 @@ static int ovl_change_flags(struct file *file, unsigned int flags)
+ 	int err;
+ 
+ 	/* No atime modification on underlying */
+-	flags |= O_NOATIME;
++	flags |= O_NOATIME | FMODE_NONOTIFY;
+ 
+ 	/* If some flag changed that cannot be changed then something's amiss */
+ 	if (WARN_ON((file->f_flags ^ flags) & ~OVL_SETFL_MASK))
+@@ -144,11 +146,47 @@ static int ovl_release(struct inode *inode, struct file *file)
+ 
+ static loff_t ovl_llseek(struct file *file, loff_t offset, int whence)
+ {
+-	struct inode *realinode = ovl_inode_real(file_inode(file));
++	struct inode *inode = file_inode(file);
++	struct fd real;
++	const struct cred *old_cred;
++	ssize_t ret;
++
++	/*
++	 * The two special cases below do not need to involve real fs,
++	 * so we can optimize for concurrent callers.
++	 */
++	if (offset == 0) {
++		if (whence == SEEK_CUR)
++			return file->f_pos;
++
++		if (whence == SEEK_SET)
++			return vfs_setpos(file, 0, 0);
++	}
++
++	ret = ovl_real_fdget(file, &real);
++	if (ret)
++		return ret;
++
++	/*
++	 * Overlay file f_pos is the master copy that is preserved
++	 * through copy up and modified on read/write, but only real
++	 * fs knows how to SEEK_HOLE/SEEK_DATA and real fs may impose
++	 * limitations that are more strict than ->s_maxbytes for specific
++	 * files, so we use the real file to perform seeks.
++	 */
++	inode_lock(inode);
++	real.file->f_pos = file->f_pos;
++
++	old_cred = ovl_override_creds(inode->i_sb);
++	ret = vfs_llseek(real.file, offset, whence);
++	revert_creds(old_cred);
++
++	file->f_pos = real.file->f_pos;
++	inode_unlock(inode);
++
++	fdput(real);
+ 
+-	return generic_file_llseek_size(file, offset, whence,
+-					realinode->i_sb->s_maxbytes,
+-					i_size_read(realinode));
++	return ret;
+ }
+ 
+ static void ovl_file_accessed(struct file *file)
+@@ -371,10 +409,68 @@ static long ovl_real_ioctl(struct file *file, unsigned int cmd,
+ 	return ret;
+ }
+ 
+-static long ovl_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
++static unsigned int ovl_get_inode_flags(struct inode *inode)
++{
++	unsigned int flags = READ_ONCE(inode->i_flags);
++	unsigned int ovl_iflags = 0;
++
++	if (flags & S_SYNC)
++		ovl_iflags |= FS_SYNC_FL;
++	if (flags & S_APPEND)
++		ovl_iflags |= FS_APPEND_FL;
++	if (flags & S_IMMUTABLE)
++		ovl_iflags |= FS_IMMUTABLE_FL;
++	if (flags & S_NOATIME)
++		ovl_iflags |= FS_NOATIME_FL;
++
++	return ovl_iflags;
++}
++
++static long ovl_ioctl_set_flags(struct file *file, unsigned long arg)
+ {
+ 	long ret;
+ 	struct inode *inode = file_inode(file);
++	unsigned int flags;
++	unsigned int old_flags;
++
++	if (!inode_owner_or_capable(inode))
++		return -EACCES;
++
++	if (get_user(flags, (int __user *) arg))
++		return -EFAULT;
++
++	ret = mnt_want_write_file(file);
++	if (ret)
++		return ret;
++
++	inode_lock(inode);
++
++	/* Check the capability before cred override */
++	ret = -EPERM;
++	old_flags = ovl_get_inode_flags(inode);
++	if (((flags ^ old_flags) & (FS_APPEND_FL | FS_IMMUTABLE_FL)) &&
++	    !capable(CAP_LINUX_IMMUTABLE))
++		goto unlock;
++
++	ret = ovl_maybe_copy_up(file_dentry(file), O_WRONLY);
++	if (ret)
++		goto unlock;
++
++	ret = ovl_real_ioctl(file, FS_IOC_SETFLAGS, arg);
++
++	ovl_copyflags(ovl_inode_real(inode), inode);
++unlock:
++	inode_unlock(inode);
++
++	mnt_drop_write_file(file);
++
++	return ret;
++}
++
++static long ovl_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
++{
++	long ret;
+ 
+ 	switch (cmd) {
+ 	case FS_IOC_GETFLAGS:
+@@ -382,23 +478,7 @@ static long ovl_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 		break;
+ 
+ 	case FS_IOC_SETFLAGS:
+-		if (!inode_owner_or_capable(inode))
+-			return -EACCES;
+-
+-		ret = mnt_want_write_file(file);
+-		if (ret)
+-			return ret;
+-
+-		ret = ovl_maybe_copy_up(file_dentry(file), O_WRONLY);
+-		if (!ret) {
+-			ret = ovl_real_ioctl(file, cmd, arg);
+-
+-			inode_lock(inode);
+-			ovl_copyflags(ovl_inode_real(inode), inode);
+-			inode_unlock(inode);
+-		}
+-
+-		mnt_drop_write_file(file);
++		ret = ovl_ioctl_set_flags(file, arg);
+ 		break;
+ 
+ 	default:
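Two ideas carry the overlayfs refactor above: ovl_get_inode_flags() translates inode flags into the FS_IOC_GETFLAGS namespace bit by bit, and the capability check fires only when a protected bit actually changes, i.e. when (new ^ old) intersects the protected mask. A toy version with invented constants, not the real S_* or FS_*_FL values:

#include <stdbool.h>

#define IN_SYNC        0x1u	/* invented stand-ins for S_SYNC etc. */
#define IN_APPEND      0x2u
#define OUT_SYNC_FL    0x10u
#define OUT_APPEND_FL  0x20u

static unsigned int translate_flags(unsigned int in)
{
	unsigned int out = 0;

	if (in & IN_SYNC)
		out |= OUT_SYNC_FL;
	if (in & IN_APPEND)
		out |= OUT_APPEND_FL;
	return out;
}

static bool needs_privilege(unsigned int oldf, unsigned int newf,
			    unsigned int protected_mask)
{
	/* Privilege is needed only if a protected bit flips. */
	return ((oldf ^ newf) & protected_mask) != 0;
}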
+diff --git a/include/linux/pwm.h b/include/linux/pwm.h
+index b628abfffacc..eaa5c6e3fc9f 100644
+--- a/include/linux/pwm.h
++++ b/include/linux/pwm.h
+@@ -596,7 +596,6 @@ static inline void pwm_remove_table(struct pwm_lookup *table, size_t num)
+ #ifdef CONFIG_PWM_SYSFS
+ void pwmchip_sysfs_export(struct pwm_chip *chip);
+ void pwmchip_sysfs_unexport(struct pwm_chip *chip);
+-void pwmchip_sysfs_unexport_children(struct pwm_chip *chip);
+ #else
+ static inline void pwmchip_sysfs_export(struct pwm_chip *chip)
+ {
+@@ -605,10 +604,6 @@ static inline void pwmchip_sysfs_export(struct pwm_chip *chip)
+ static inline void pwmchip_sysfs_unexport(struct pwm_chip *chip)
+ {
+ }
+-
+-static inline void pwmchip_sysfs_unexport_children(struct pwm_chip *chip)
+-{
+-}
+ #endif /* CONFIG_PWM_SYSFS */
+ 
+ #endif /* __LINUX_PWM_H */
+diff --git a/include/media/v4l2-ctrls.h b/include/media/v4l2-ctrls.h
+index e5cae37ced2d..200f8a66ecaa 100644
+--- a/include/media/v4l2-ctrls.h
++++ b/include/media/v4l2-ctrls.h
+@@ -1127,7 +1127,7 @@ __poll_t v4l2_ctrl_poll(struct file *file, struct poll_table_struct *wait);
+  * applying control values in a request is only applicable to memory-to-memory
+  * devices.
+  */
+-void v4l2_ctrl_request_setup(struct media_request *req,
++int v4l2_ctrl_request_setup(struct media_request *req,
+ 			     struct v4l2_ctrl_handler *parent);
+ 
+ /**
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 05b1b96f4d9e..094e61e07030 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -190,9 +190,6 @@ struct adv_info {
+ 
+ #define HCI_MAX_SHORT_NAME_LENGTH	10
+ 
+-/* Min encryption key size to match with SMP */
+-#define HCI_MIN_ENC_KEY_SIZE		7
+-
+ /* Default LE RPA expiry time, 15 minutes */
+ #define HCI_DEFAULT_RPA_TIMEOUT		(15 * 60)
+ 
+diff --git a/init/initramfs.c b/init/initramfs.c
+index 4749e1115eef..c322e1099f43 100644
+--- a/init/initramfs.c
++++ b/init/initramfs.c
+@@ -612,13 +612,12 @@ static int __init populate_rootfs(void)
+ 		printk(KERN_INFO "Trying to unpack rootfs image as initramfs...\n");
+ 		err = unpack_to_rootfs((char *)initrd_start,
+ 			initrd_end - initrd_start);
+-		if (!err) {
+-			free_initrd();
++		if (!err)
+ 			goto done;
+-		} else {
+-			clean_rootfs();
+-			unpack_to_rootfs(__initramfs_start, __initramfs_size);
+-		}
++
++		clean_rootfs();
++		unpack_to_rootfs(__initramfs_start, __initramfs_size);
++
+ 		printk(KERN_INFO "rootfs image is not initramfs (%s)"
+ 				"; looks like an initrd\n", err);
+ 		fd = ksys_open("/initrd.image",
+@@ -632,7 +631,6 @@ static int __init populate_rootfs(void)
+ 				       written, initrd_end - initrd_start);
+ 
+ 			ksys_close(fd);
+-			free_initrd();
+ 		}
+ 	done:
+ 		/* empty statement */;
+@@ -642,9 +640,9 @@ static int __init populate_rootfs(void)
+ 			initrd_end - initrd_start);
+ 		if (err)
+ 			printk(KERN_EMERG "Initramfs unpacking failed: %s\n", err);
+-		free_initrd();
+ #endif
+ 	}
++	free_initrd();
+ 	flush_delayed_fput();
+ 	return 0;
+ }
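The initramfs hunks collapse three scattered free_initrd() calls into a single one at the end of populate_rootfs(), so every path, success or fallback, releases the initrd exactly once. The control-flow shape, reduced to a stub:

static void free_initrd_stub(void) { }	/* stands in for free_initrd() */

static int populate(int unpacked_ok, int have_initrd)
{
	if (have_initrd) {
		if (unpacked_ok)
			goto done;
		/* ...fall back to treating it as an old-style initrd... */
	}
done:
	free_initrd_stub();		/* exactly one release point */
	return 0;
}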
+diff --git a/ipc/mqueue.c b/ipc/mqueue.c
+index aea30530c472..127ba1e8950b 100644
+--- a/ipc/mqueue.c
++++ b/ipc/mqueue.c
+@@ -436,7 +436,8 @@ static void mqueue_evict_inode(struct inode *inode)
+ 	struct user_struct *user;
+ 	unsigned long mq_bytes, mq_treesize;
+ 	struct ipc_namespace *ipc_ns;
+-	struct msg_msg *msg;
++	struct msg_msg *msg, *nmsg;
++	LIST_HEAD(tmp_msg);
+ 
+ 	clear_inode(inode);
+ 
+@@ -447,10 +448,15 @@ static void mqueue_evict_inode(struct inode *inode)
+ 	info = MQUEUE_I(inode);
+ 	spin_lock(&info->lock);
+ 	while ((msg = msg_get(info)) != NULL)
+-		free_msg(msg);
++		list_add_tail(&msg->m_list, &tmp_msg);
+ 	kfree(info->node_cache);
+ 	spin_unlock(&info->lock);
+ 
++	list_for_each_entry_safe(msg, nmsg, &tmp_msg, m_list) {
++		list_del(&msg->m_list);
++		free_msg(msg);
++	}
++
+ 	/* Total amount of bytes accounted for the mqueue */
+ 	mq_treesize = info->attr.mq_maxmsg * sizeof(struct msg_msg) +
+ 		min_t(unsigned int, info->attr.mq_maxmsg, MQ_PRIO_MAX) *
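The mqueue fix detaches every queued message onto a private list while holding the spinlock, then frees them only after the unlock, which matters because free_msg() now calls cond_resched() (see the msgutil.c hunk below) and must not run with a spinlock held. The same detach-then-free split, sketched with a pthread mutex in place of the spinlock:

#include <pthread.h>
#include <stdlib.h>

struct msg { struct msg *next; };

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static struct msg *queue;

static void evict_queue(void)
{
	struct msg *batch, *m;

	pthread_mutex_lock(&lock);
	batch = queue;			/* detach whole list under the lock */
	queue = NULL;
	pthread_mutex_unlock(&lock);

	while ((m = batch) != NULL) {	/* free with the lock dropped */
		batch = m->next;
		free(m);
	}
}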
+diff --git a/ipc/msgutil.c b/ipc/msgutil.c
+index 84598025a6ad..e65593742e2b 100644
+--- a/ipc/msgutil.c
++++ b/ipc/msgutil.c
+@@ -18,6 +18,7 @@
+ #include <linux/utsname.h>
+ #include <linux/proc_ns.h>
+ #include <linux/uaccess.h>
++#include <linux/sched.h>
+ 
+ #include "util.h"
+ 
+@@ -64,6 +65,9 @@ static struct msg_msg *alloc_msg(size_t len)
+ 	pseg = &msg->next;
+ 	while (len > 0) {
+ 		struct msg_msgseg *seg;
++
++		cond_resched();
++
+ 		alen = min(len, DATALEN_SEG);
+ 		seg = kmalloc(sizeof(*seg) + alen, GFP_KERNEL_ACCOUNT);
+ 		if (seg == NULL)
+@@ -176,6 +180,8 @@ void free_msg(struct msg_msg *msg)
+ 	kfree(msg);
+ 	while (seg != NULL) {
+ 		struct msg_msgseg *tmp = seg->next;
++
++		cond_resched();
+ 		kfree(seg);
+ 		seg = tmp;
+ 	}
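alloc_msg() and free_msg() iterate once per DATALEN_SEG-sized segment, so a maximum-size message means a very long loop; the added cond_resched() calls give the scheduler a chance on non-preemptible kernels. Userspace has no exact equivalent, but sched_yield() plays a loosely analogous role:

#include <sched.h>

static void long_loop(long nsegs)
{
	for (long i = 0; i < nsegs; i++) {
		/* ...allocate or free one segment... */
		sched_yield();	/* loose analogue of cond_resched() */
	}
}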
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 09d5d972c9ff..950fac024fbb 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -7296,7 +7296,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
+ 									insn->dst_reg,
+ 									shift);
+ 				insn_buf[cnt++] = BPF_ALU64_IMM(BPF_AND, insn->dst_reg,
+-								(1 << size * 8) - 1);
++								(1ULL << size * 8) - 1);
+ 			}
+ 		}
+ 
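The verifier fix widens the shift's left operand: with a 32-bit int literal, (1 << size * 8) is undefined for a 4-byte field (a shift count of 32 equals the type width), so the narrow-load mask came out wrong. A small demonstration; the size == 8 case is handled separately because shifting a 64-bit value by 64 would be just as undefined:

#include <stdio.h>

int main(void)
{
	for (int size = 1; size <= 8; size <<= 1) {
		unsigned long long mask = (size < 8)
			? (1ULL << size * 8) - 1	/* defined: count < 64 */
			: ~0ULL;			/* 1ULL << 64 would be UB */

		printf("size %d -> mask %#llx\n", size, mask);
	}
	return 0;
}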
+diff --git a/kernel/sys.c b/kernel/sys.c
+index 12df0e5434b8..bdbfe8d37418 100644
+--- a/kernel/sys.c
++++ b/kernel/sys.c
+@@ -1924,7 +1924,7 @@ static int validate_prctl_map(struct prctl_mm_map *prctl_map)
+ 	((unsigned long)prctl_map->__m1 __op				\
+ 	 (unsigned long)prctl_map->__m2) ? 0 : -EINVAL
+ 	error  = __prctl_check_order(start_code, <, end_code);
+-	error |= __prctl_check_order(start_data, <, end_data);
++	error |= __prctl_check_order(start_data, <=, end_data);
+ 	error |= __prctl_check_order(start_brk, <=, brk);
+ 	error |= __prctl_check_order(arg_start, <=, arg_end);
+ 	error |= __prctl_check_order(env_start, <=, env_end);
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index c9ec050bcf46..387efbaf464a 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -2874,8 +2874,10 @@ static int __do_proc_doulongvec_minmax(void *data, struct ctl_table *table, int
+ 			if (neg)
+ 				continue;
+ 			val = convmul * val / convdiv;
+-			if ((min && val < *min) || (max && val > *max))
+-				continue;
++			if ((min && val < *min) || (max && val > *max)) {
++				err = -EINVAL;
++				break;
++			}
+ 			*i = val;
+ 		} else {
+ 			val = convdiv * (*i) / convmul;
+diff --git a/kernel/time/ntp.c b/kernel/time/ntp.c
+index 92a90014a925..f43d47c8c3b6 100644
+--- a/kernel/time/ntp.c
++++ b/kernel/time/ntp.c
+@@ -690,7 +690,7 @@ static inline void process_adjtimex_modes(const struct __kernel_timex *txc,
+ 		time_constant = max(time_constant, 0l);
+ 	}
+ 
+-	if (txc->modes & ADJ_TAI && txc->constant > 0)
++	if (txc->modes & ADJ_TAI && txc->constant >= 0)
+ 		*time_tai = txc->constant;
+ 
+ 	if (txc->modes & ADJ_OFFSET)
+diff --git a/mm/Kconfig b/mm/Kconfig
+index 25c71eb8a7db..2e6d24d783f7 100644
+--- a/mm/Kconfig
++++ b/mm/Kconfig
+@@ -694,12 +694,12 @@ config DEV_PAGEMAP_OPS
+ 
+ config HMM
+ 	bool
++	select MMU_NOTIFIER
+ 	select MIGRATE_VMA_HELPER
+ 
+ config HMM_MIRROR
+ 	bool "HMM mirror CPU page table into a device page table"
+ 	depends on ARCH_HAS_HMM
+-	select MMU_NOTIFIER
+ 	select HMM
+ 	help
+ 	  Select HMM_MIRROR if you want to mirror range of the CPU page table of a
+diff --git a/mm/cma.c b/mm/cma.c
+index bb2d333ffcb3..5e36d7418031 100644
+--- a/mm/cma.c
++++ b/mm/cma.c
+@@ -106,8 +106,10 @@ static int __init cma_activate_area(struct cma *cma)
+ 
+ 	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
+ 
+-	if (!cma->bitmap)
++	if (!cma->bitmap) {
++		cma->count = 0;
+ 		return -ENOMEM;
++	}
+ 
+ 	WARN_ON_ONCE(!pfn_valid(pfn));
+ 	zone = page_zone(pfn_to_page(pfn));
+@@ -367,23 +369,26 @@ err:
+ #ifdef CONFIG_CMA_DEBUG
+ static void cma_debug_show_areas(struct cma *cma)
+ {
+-	unsigned long next_zero_bit, next_set_bit;
++	unsigned long next_zero_bit, next_set_bit, nr_zero;
+ 	unsigned long start = 0;
+-	unsigned int nr_zero, nr_total = 0;
++	unsigned long nr_part, nr_total = 0;
++	unsigned long nbits = cma_bitmap_maxno(cma);
+ 
+ 	mutex_lock(&cma->lock);
+ 	pr_info("number of available pages: ");
+ 	for (;;) {
+-		next_zero_bit = find_next_zero_bit(cma->bitmap, cma->count, start);
+-		if (next_zero_bit >= cma->count)
++		next_zero_bit = find_next_zero_bit(cma->bitmap, nbits, start);
++		if (next_zero_bit >= nbits)
+ 			break;
+-		next_set_bit = find_next_bit(cma->bitmap, cma->count, next_zero_bit);
++		next_set_bit = find_next_bit(cma->bitmap, nbits, next_zero_bit);
+ 		nr_zero = next_set_bit - next_zero_bit;
+-		pr_cont("%s%u@%lu", nr_total ? "+" : "", nr_zero, next_zero_bit);
+-		nr_total += nr_zero;
++		nr_part = nr_zero << cma->order_per_bit;
++		pr_cont("%s%lu@%lu", nr_total ? "+" : "", nr_part,
++			next_zero_bit);
++		nr_total += nr_part;
+ 		start = next_zero_bit + nr_zero;
+ 	}
+-	pr_cont("=> %u free of %lu total pages\n", nr_total, cma->count);
++	pr_cont("=> %lu free of %lu total pages\n", nr_total, cma->count);
+ 	mutex_unlock(&cma->lock);
+ }
+ #else
+diff --git a/mm/cma_debug.c b/mm/cma_debug.c
+index 8d7b2fd52225..a7dd9e8e10d5 100644
+--- a/mm/cma_debug.c
++++ b/mm/cma_debug.c
+@@ -56,7 +56,7 @@ static int cma_maxchunk_get(void *data, u64 *val)
+ 	mutex_lock(&cma->lock);
+ 	for (;;) {
+ 		start = find_next_zero_bit(cma->bitmap, bitmap_maxno, end);
+-		if (start >= cma->count)
++		if (start >= bitmap_maxno)
+ 			break;
+ 		end = find_next_bit(cma->bitmap, bitmap_maxno, start);
+ 		maxchunk = max(end - start, maxchunk);
+diff --git a/mm/compaction.c b/mm/compaction.c
+index 368445cc71cf..2d7bb9eb07cd 100644
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -1164,7 +1164,9 @@ static bool suitable_migration_target(struct compact_control *cc,
+ static inline unsigned int
+ freelist_scan_limit(struct compact_control *cc)
+ {
+-	return (COMPACT_CLUSTER_MAX >> cc->fast_search_fail) + 1;
++	unsigned short shift = BITS_PER_LONG - 1;
++
++	return (COMPACT_CLUSTER_MAX >> min(shift, cc->fast_search_fail)) + 1;
+ }
+ 
+ /*
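freelist_scan_limit() right-shifts by fast_search_fail, a counter that can grow without bound, and shifting by BITS_PER_LONG or more is undefined behaviour; the fix clamps the count first. The clamp on its own:

#include <limits.h>

#define WORD_BITS (CHAR_BIT * sizeof(unsigned long))

static unsigned long scan_limit(unsigned long base, unsigned long fails)
{
	unsigned long shift = fails < WORD_BITS - 1 ? fails : WORD_BITS - 1;

	return (base >> shift) + 1;	/* shift count is now always valid */
}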
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 5baf1f00ad42..5b4f00be325d 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -1258,12 +1258,23 @@ void free_huge_page(struct page *page)
+ 	ClearPagePrivate(page);
+ 
+ 	/*
+-	 * A return code of zero implies that the subpool will be under its
+-	 * minimum size if the reservation is not restored after page is free.
+-	 * Therefore, force restore_reserve operation.
++	 * If PagePrivate() was set on page, page allocation consumed a
++	 * reservation.  If the page was associated with a subpool, there
++	 * would have been a page reserved in the subpool before allocation
++	 * via hugepage_subpool_get_pages().  Since we are 'restoring' the
++	 * reservation, do not call hugepage_subpool_put_pages() as this will
++	 * remove the reserved page from the subpool.
+ 	 */
+-	if (hugepage_subpool_put_pages(spool, 1) == 0)
+-		restore_reserve = true;
++	if (!restore_reserve) {
++		/*
++		 * A return code of zero implies that the subpool will be
++		 * under its minimum size if the reservation is not restored
++		 * after page is free.  Therefore, force restore_reserve
++		 * operation.
++		 */
++		if (hugepage_subpool_put_pages(spool, 1) == 0)
++			restore_reserve = true;
++	}
+ 
+ 	spin_lock(&hugetlb_lock);
+ 	clear_page_huge_active(page);
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index b236069ff0d8..547e48addced 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -561,20 +561,6 @@ int __remove_pages(struct zone *zone, unsigned long phys_start_pfn,
+ 	if (is_dev_zone(zone)) {
+ 		if (altmap)
+ 			map_offset = vmem_altmap_offset(altmap);
+-	} else {
+-		resource_size_t start, size;
+-
+-		start = phys_start_pfn << PAGE_SHIFT;
+-		size = nr_pages * PAGE_SIZE;
+-
+-		ret = release_mem_region_adjustable(&iomem_resource, start,
+-					size);
+-		if (ret) {
+-			resource_size_t endres = start + size - 1;
+-
+-			pr_warn("Unable to release resource <%pa-%pa> (%d)\n",
+-					&start, &endres, ret);
+-		}
+ 	}
+ 
+ 	clear_zone_contiguous(zone);
+@@ -714,7 +700,7 @@ static void node_states_check_changes_online(unsigned long nr_pages,
+ 	if (zone_idx(zone) <= ZONE_NORMAL && !node_state(nid, N_NORMAL_MEMORY))
+ 		arg->status_change_nid_normal = nid;
+ #ifdef CONFIG_HIGHMEM
+-	if (zone_idx(zone) <= N_HIGH_MEMORY && !node_state(nid, N_HIGH_MEMORY))
++	if (zone_idx(zone) <= ZONE_HIGHMEM && !node_state(nid, N_HIGH_MEMORY))
+ 		arg->status_change_nid_high = nid;
+ #endif
+ }
+@@ -1843,6 +1829,26 @@ void try_offline_node(int nid)
+ }
+ EXPORT_SYMBOL(try_offline_node);
+ 
++static void __release_memory_resource(resource_size_t start,
++				      resource_size_t size)
++{
++	int ret;
++
++	/*
++	 * When removing memory in the same granularity as it was added,
++	 * this function never fails. It might only fail if resources
++	 * have to be adjusted or split. We'll ignore the error, as
++	 * removal of memory cannot fail.
++	 */
++	ret = release_mem_region_adjustable(&iomem_resource, start, size);
++	if (ret) {
++		resource_size_t endres = start + size - 1;
++
++		pr_warn("Unable to release resource <%pa-%pa> (%d)\n",
++			&start, &endres, ret);
++	}
++}
++
+ /**
+  * remove_memory
+  * @nid: the node ID
+@@ -1877,6 +1883,7 @@ void __ref __remove_memory(int nid, u64 start, u64 size)
+ 	memblock_remove(start, size);
+ 
+ 	arch_remove_memory(nid, start, size, NULL);
++	__release_memory_resource(start, size);
+ 
+ 	try_offline_node(nid);
+ 
+diff --git a/mm/mprotect.c b/mm/mprotect.c
+index 028c724dcb1a..ab40f3d04aa3 100644
+--- a/mm/mprotect.c
++++ b/mm/mprotect.c
+@@ -39,7 +39,6 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
+ 		unsigned long addr, unsigned long end, pgprot_t newprot,
+ 		int dirty_accountable, int prot_numa)
+ {
+-	struct mm_struct *mm = vma->vm_mm;
+ 	pte_t *pte, oldpte;
+ 	spinlock_t *ptl;
+ 	unsigned long pages = 0;
+@@ -136,7 +135,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
+ 				newpte = swp_entry_to_pte(entry);
+ 				if (pte_swp_soft_dirty(oldpte))
+ 					newpte = pte_swp_mksoft_dirty(newpte);
+-				set_pte_at(mm, addr, pte, newpte);
++				set_pte_at(vma->vm_mm, addr, pte, newpte);
+ 
+ 				pages++;
+ 			}
+@@ -150,7 +149,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
+ 				 */
+ 				make_device_private_entry_read(&entry);
+ 				newpte = swp_entry_to_pte(entry);
+-				set_pte_at(mm, addr, pte, newpte);
++				set_pte_at(vma->vm_mm, addr, pte, newpte);
+ 
+ 				pages++;
+ 			}
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index c02cff1ed56e..475ca5b1a824 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -6244,13 +6244,15 @@ static unsigned long __init zone_spanned_pages_in_node(int nid,
+ 					unsigned long *zone_end_pfn,
+ 					unsigned long *ignored)
+ {
++	unsigned long zone_low = arch_zone_lowest_possible_pfn[zone_type];
++	unsigned long zone_high = arch_zone_highest_possible_pfn[zone_type];
+ 	/* When hotadd a new node from cpu_up(), the node should be empty */
+ 	if (!node_start_pfn && !node_end_pfn)
+ 		return 0;
+ 
+ 	/* Get the start and end of the zone */
+-	*zone_start_pfn = arch_zone_lowest_possible_pfn[zone_type];
+-	*zone_end_pfn = arch_zone_highest_possible_pfn[zone_type];
++	*zone_start_pfn = clamp(node_start_pfn, zone_low, zone_high);
++	*zone_end_pfn = clamp(node_end_pfn, zone_low, zone_high);
+ 	adjust_zone_range_for_zone_movable(nid, zone_type,
+ 				node_start_pfn, node_end_pfn,
+ 				zone_start_pfn, zone_end_pfn);
+diff --git a/mm/percpu.c b/mm/percpu.c
+index 68dd2e7e73b5..d38bd83fbe96 100644
+--- a/mm/percpu.c
++++ b/mm/percpu.c
+@@ -988,7 +988,8 @@ static int pcpu_alloc_area(struct pcpu_chunk *chunk, int alloc_bits,
+ 	/*
+ 	 * Search to find a fit.
+ 	 */
+-	end = start + alloc_bits + PCPU_BITMAP_BLOCK_BITS;
++	end = min_t(int, start + alloc_bits + PCPU_BITMAP_BLOCK_BITS,
++		    pcpu_chunk_map_bits(chunk));
+ 	bit_off = bitmap_find_next_zero_area(chunk->alloc_map, end, start,
+ 					     alloc_bits, align_mask);
+ 	if (bit_off >= end)
+@@ -1738,6 +1739,7 @@ void free_percpu(void __percpu *ptr)
+ 	struct pcpu_chunk *chunk;
+ 	unsigned long flags;
+ 	int off;
++	bool need_balance = false;
+ 
+ 	if (!ptr)
+ 		return;
+@@ -1759,7 +1761,7 @@ void free_percpu(void __percpu *ptr)
+ 
+ 		list_for_each_entry(pos, &pcpu_slot[pcpu_nr_slots - 1], list)
+ 			if (pos != chunk) {
+-				pcpu_schedule_balance_work();
++				need_balance = true;
+ 				break;
+ 			}
+ 	}
+@@ -1767,6 +1769,9 @@ void free_percpu(void __percpu *ptr)
+ 	trace_percpu_free_percpu(chunk->base_addr, off, ptr);
+ 
+ 	spin_unlock_irqrestore(&pcpu_lock, flags);
++
++	if (need_balance)
++		pcpu_schedule_balance_work();
+ }
+ EXPORT_SYMBOL_GPL(free_percpu);
+ 
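free_percpu() used to schedule the balance worker while still holding pcpu_lock; the fix records the decision under the lock and acts on it after the unlock. The same decide-then-act split, sketched with a pthread mutex and an invented kick_worker():

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void kick_worker(void) { /* would schedule deferred work */ }

static void release_chunk(bool chunk_now_empty)
{
	bool need_balance = false;

	pthread_mutex_lock(&lock);
	if (chunk_now_empty)
		need_balance = true;	/* only record the decision */
	pthread_mutex_unlock(&lock);

	if (need_balance)
		kick_worker();		/* act with the lock dropped */
}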
+diff --git a/mm/rmap.c b/mm/rmap.c
+index b30c7c71d1d9..76c8dfd3ae1c 100644
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -928,7 +928,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
+ 				continue;
+ 
+ 			flush_cache_page(vma, address, page_to_pfn(page));
+-			entry = pmdp_huge_clear_flush(vma, address, pmd);
++			entry = pmdp_invalidate(vma, address, pmd);
+ 			entry = pmd_wrprotect(entry);
+ 			entry = pmd_mkclean(entry);
+ 			set_pmd_at(vma->vm_mm, address, pmd, entry);
+diff --git a/mm/slab.c b/mm/slab.c
+index 9142ee992493..fbbef79e1ad5 100644
+--- a/mm/slab.c
++++ b/mm/slab.c
+@@ -4328,8 +4328,12 @@ static int leaks_show(struct seq_file *m, void *p)
+ 	 * whole processing.
+ 	 */
+ 	do {
+-		set_store_user_clean(cachep);
+ 		drain_cpu_caches(cachep);
++		/*
++		 * drain_cpu_caches() could make kmemleak_object and
++		 * debug_objects_cache dirty, so reset afterwards.
++		 */
++		set_store_user_clean(cachep);
+ 
+ 		x[1] = 0;
+ 
+diff --git a/net/batman-adv/distributed-arp-table.c b/net/batman-adv/distributed-arp-table.c
+index 8d290da0d596..9ba7b9bb198a 100644
+--- a/net/batman-adv/distributed-arp-table.c
++++ b/net/batman-adv/distributed-arp-table.c
+@@ -667,7 +667,7 @@ batadv_dat_select_candidates(struct batadv_priv *bat_priv, __be32 ip_dst,
+ }
+ 
+ /**
+- * batadv_dat_send_data() - send a payload to the selected candidates
++ * batadv_dat_forward_data() - copy and send payload to the selected candidates
+  * @bat_priv: the bat priv with all the soft interface information
+  * @skb: payload to send
+  * @ip: the DHT key
+@@ -680,9 +680,9 @@ batadv_dat_select_candidates(struct batadv_priv *bat_priv, __be32 ip_dst,
+  * Return: true if the packet is sent to at least one candidate, false
+  * otherwise.
+  */
+-static bool batadv_dat_send_data(struct batadv_priv *bat_priv,
+-				 struct sk_buff *skb, __be32 ip,
+-				 unsigned short vid, int packet_subtype)
++static bool batadv_dat_forward_data(struct batadv_priv *bat_priv,
++				    struct sk_buff *skb, __be32 ip,
++				    unsigned short vid, int packet_subtype)
+ {
+ 	int i;
+ 	bool ret = false;
+@@ -1277,8 +1277,8 @@ bool batadv_dat_snoop_outgoing_arp_request(struct batadv_priv *bat_priv,
+ 		ret = true;
+ 	} else {
+ 		/* Send the request to the DHT */
+-		ret = batadv_dat_send_data(bat_priv, skb, ip_dst, vid,
+-					   BATADV_P_DAT_DHT_GET);
++		ret = batadv_dat_forward_data(bat_priv, skb, ip_dst, vid,
++					      BATADV_P_DAT_DHT_GET);
+ 	}
+ out:
+ 	if (dat_entry)
+@@ -1392,8 +1392,10 @@ void batadv_dat_snoop_outgoing_arp_reply(struct batadv_priv *bat_priv,
+ 	/* Send the ARP reply to the candidates for both the IP addresses that
+ 	 * the node obtained from the ARP reply
+ 	 */
+-	batadv_dat_send_data(bat_priv, skb, ip_src, vid, BATADV_P_DAT_DHT_PUT);
+-	batadv_dat_send_data(bat_priv, skb, ip_dst, vid, BATADV_P_DAT_DHT_PUT);
++	batadv_dat_forward_data(bat_priv, skb, ip_src, vid,
++				BATADV_P_DAT_DHT_PUT);
++	batadv_dat_forward_data(bat_priv, skb, ip_dst, vid,
++				BATADV_P_DAT_DHT_PUT);
+ }
+ 
+ /**
+@@ -1710,8 +1712,10 @@ static void batadv_dat_put_dhcp(struct batadv_priv *bat_priv, u8 *chaddr,
+ 	batadv_dat_entry_add(bat_priv, yiaddr, chaddr, vid);
+ 	batadv_dat_entry_add(bat_priv, ip_dst, hw_dst, vid);
+ 
+-	batadv_dat_send_data(bat_priv, skb, yiaddr, vid, BATADV_P_DAT_DHT_PUT);
+-	batadv_dat_send_data(bat_priv, skb, ip_dst, vid, BATADV_P_DAT_DHT_PUT);
++	batadv_dat_forward_data(bat_priv, skb, yiaddr, vid,
++				BATADV_P_DAT_DHT_PUT);
++	batadv_dat_forward_data(bat_priv, skb, ip_dst, vid,
++				BATADV_P_DAT_DHT_PUT);
+ 
+ 	consume_skb(skb);
+ 
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 3cf0764d5793..bd4978ce8c45 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -1276,14 +1276,6 @@ int hci_conn_check_link_mode(struct hci_conn *conn)
+ 	    !test_bit(HCI_CONN_ENCRYPT, &conn->flags))
+ 		return 0;
+ 
+-	/* The minimum encryption key size needs to be enforced by the
+-	 * host stack before establishing any L2CAP connections. The
+-	 * specification in theory allows a minimum of 1, but to align
+-	 * BR/EDR and LE transports, a minimum of 7 is chosen.
+-	 */
+-	if (conn->enc_key_size < HCI_MIN_ENC_KEY_SIZE)
+-		return 0;
+-
+ 	return 1;
+ }
+ 
+diff --git a/net/netfilter/nf_conntrack_h323_asn1.c b/net/netfilter/nf_conntrack_h323_asn1.c
+index 1601275efe2d..4c2ef42e189c 100644
+--- a/net/netfilter/nf_conntrack_h323_asn1.c
++++ b/net/netfilter/nf_conntrack_h323_asn1.c
+@@ -172,7 +172,7 @@ static int nf_h323_error_boundary(struct bitstr *bs, size_t bytes, size_t bits)
+ 	if (bits % BITS_PER_BYTE > 0)
+ 		bytes++;
+ 
+-	if (*bs->cur + bytes > *bs->end)
++	if (bs->cur + bytes > bs->end)
+ 		return 1;
+ 
+ 	return 0;
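The h323 fix drops two stray dereferences: the intent is to compare the cursor pointer plus the needed byte count against the end pointer, not the byte values they point at. An equivalently safe userspace form subtracts instead of adding, which also avoids forming an out-of-bounds pointer:

#include <stdbool.h>
#include <stddef.h>

static bool would_overrun(const unsigned char *cur,
			  const unsigned char *end, size_t bytes)
{
	if (cur > end)
		return true;
	/* Compare positions, not the bytes at those positions. */
	return bytes > (size_t)(end - cur);
}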
+diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
+index 7aabfd4b1e50..a9e4f74b1ff6 100644
+--- a/net/netfilter/nf_flow_table_core.c
++++ b/net/netfilter/nf_flow_table_core.c
+@@ -185,14 +185,25 @@ static const struct rhashtable_params nf_flow_offload_rhash_params = {
+ 
+ int flow_offload_add(struct nf_flowtable *flow_table, struct flow_offload *flow)
+ {
+-	flow->timeout = (u32)jiffies;
++	int err;
+ 
+-	rhashtable_insert_fast(&flow_table->rhashtable,
+-			       &flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].node,
+-			       nf_flow_offload_rhash_params);
+-	rhashtable_insert_fast(&flow_table->rhashtable,
+-			       &flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].node,
+-			       nf_flow_offload_rhash_params);
++	err = rhashtable_insert_fast(&flow_table->rhashtable,
++				     &flow->tuplehash[0].node,
++				     nf_flow_offload_rhash_params);
++	if (err < 0)
++		return err;
++
++	err = rhashtable_insert_fast(&flow_table->rhashtable,
++				     &flow->tuplehash[1].node,
++				     nf_flow_offload_rhash_params);
++	if (err < 0) {
++		rhashtable_remove_fast(&flow_table->rhashtable,
++				       &flow->tuplehash[0].node,
++				       nf_flow_offload_rhash_params);
++		return err;
++	}
++
++	flow->timeout = (u32)jiffies;
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(flow_offload_add);
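flow_offload_add() inserts the ORIGINAL and REPLY tuples as a pair; if the second rhashtable insert fails, the first must be removed or the table is left half-populated. The rollback shape, with invented table_insert()/table_remove() stand-ins for the rhashtable calls:

static int table_insert(int key)	/* stands in for rhashtable_insert_fast() */
{
	return key >= 0 ? 0 : -1;
}

static void table_remove(int key)	/* stands in for rhashtable_remove_fast() */
{
	(void)key;
}

static int add_pair(int original, int reply)
{
	int err;

	err = table_insert(original);
	if (err)
		return err;

	err = table_insert(reply);
	if (err) {
		table_remove(original);	/* undo, keep the table consistent */
		return err;
	}
	return 0;
}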
+diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c
+index 1d291a51cd45..46022a2867d7 100644
+--- a/net/netfilter/nf_flow_table_ip.c
++++ b/net/netfilter/nf_flow_table_ip.c
+@@ -181,6 +181,9 @@ static int nf_flow_tuple_ip(struct sk_buff *skb, const struct net_device *dev,
+ 	    iph->protocol != IPPROTO_UDP)
+ 		return -1;
+ 
++	if (iph->ttl <= 1)
++		return -1;
++
+ 	thoff = iph->ihl * 4;
+ 	if (!pskb_may_pull(skb, thoff + sizeof(*ports)))
+ 		return -1;
+@@ -411,6 +414,9 @@ static int nf_flow_tuple_ipv6(struct sk_buff *skb, const struct net_device *dev,
+ 	    ip6h->nexthdr != IPPROTO_UDP)
+ 		return -1;
+ 
++	if (ip6h->hop_limit <= 1)
++		return -1;
++
+ 	thoff = sizeof(*ip6h);
+ 	if (!pskb_may_pull(skb, thoff + sizeof(*ports)))
+ 		return -1;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 1606eaa5ae0d..aa5e7b00a581 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1190,6 +1190,9 @@ static int nft_dump_stats(struct sk_buff *skb, struct nft_stats __percpu *stats)
+ 	u64 pkts, bytes;
+ 	int cpu;
+ 
++	if (!stats)
++		return 0;
++
+ 	memset(&total, 0, sizeof(total));
+ 	for_each_possible_cpu(cpu) {
+ 		cpu_stats = per_cpu_ptr(stats, cpu);
+@@ -1247,6 +1250,7 @@ static int nf_tables_fill_chain_info(struct sk_buff *skb, struct net *net,
+ 	if (nft_is_base_chain(chain)) {
+ 		const struct nft_base_chain *basechain = nft_base_chain(chain);
+ 		const struct nf_hook_ops *ops = &basechain->ops;
++		struct nft_stats __percpu *stats;
+ 		struct nlattr *nest;
+ 
+ 		nest = nla_nest_start(skb, NFTA_CHAIN_HOOK);
+@@ -1268,8 +1272,9 @@ static int nf_tables_fill_chain_info(struct sk_buff *skb, struct net *net,
+ 		if (nla_put_string(skb, NFTA_CHAIN_TYPE, basechain->type->name))
+ 			goto nla_put_failure;
+ 
+-		if (rcu_access_pointer(basechain->stats) &&
+-		    nft_dump_stats(skb, rcu_dereference(basechain->stats)))
++		stats = rcu_dereference_check(basechain->stats,
++					      lockdep_commit_lock_is_held(net));
++		if (nft_dump_stats(skb, stats))
+ 			goto nla_put_failure;
+ 	}
+ 
+diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
+index 6e6b9adf7d38..ff50bc1b144f 100644
+--- a/net/netfilter/nft_flow_offload.c
++++ b/net/netfilter/nft_flow_offload.c
+@@ -113,6 +113,7 @@ static void nft_flow_offload_eval(const struct nft_expr *expr,
+ 	if (ret < 0)
+ 		goto err_flow_add;
+ 
++	dst_release(route.tuple[!dir].dst);
+ 	return;
+ 
+ err_flow_add:
+diff --git a/sound/core/seq/seq_ports.c b/sound/core/seq/seq_ports.c
+index 24d90abfc64d..da31aa8e216e 100644
+--- a/sound/core/seq/seq_ports.c
++++ b/sound/core/seq/seq_ports.c
+@@ -550,10 +550,10 @@ static void delete_and_unsubscribe_port(struct snd_seq_client *client,
+ 		list_del_init(list);
+ 	grp->exclusive = 0;
+ 	write_unlock_irq(&grp->list_lock);
+-	up_write(&grp->list_mutex);
+ 
+ 	if (!empty)
+ 		unsubscribe_port(client, port, grp, &subs->info, ack);
++	up_write(&grp->list_mutex);
+ }
+ 
+ /* connect two ports */
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 2ec91085fa3e..789308f54785 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -1788,9 +1788,6 @@ static int azx_first_init(struct azx *chip)
+ 			chip->msi = 0;
+ 	}
+ 
+-	if (azx_acquire_irq(chip, 0) < 0)
+-		return -EBUSY;
+-
+ 	pci_set_master(pci);
+ 	synchronize_irq(bus->irq);
+ 
+@@ -1904,6 +1901,9 @@ static int azx_first_init(struct azx *chip)
+ 		return -ENODEV;
+ 	}
+ 
++	if (azx_acquire_irq(chip, 0) < 0)
++		return -EBUSY;
++
+ 	strcpy(card->driver, "HDA-Intel");
+ 	strlcpy(card->shortname, driver_short_names[chip->driver_type],
+ 		sizeof(card->shortname));
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 2cd57730381b..ecf5fc77f50b 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -28,6 +28,8 @@
+ #include <linux/hashtable.h>
+ #include <linux/kernel.h>
+ 
++#define FAKE_JUMP_OFFSET -1
++
+ struct alternative {
+ 	struct list_head list;
+ 	struct instruction *insn;
+@@ -501,7 +503,7 @@ static int add_jump_destinations(struct objtool_file *file)
+ 		    insn->type != INSN_JUMP_UNCONDITIONAL)
+ 			continue;
+ 
+-		if (insn->ignore)
++		if (insn->ignore || insn->offset == FAKE_JUMP_OFFSET)
+ 			continue;
+ 
+ 		rela = find_rela_by_dest_range(insn->sec, insn->offset,
+@@ -670,10 +672,10 @@ static int handle_group_alt(struct objtool_file *file,
+ 		clear_insn_state(&fake_jump->state);
+ 
+ 		fake_jump->sec = special_alt->new_sec;
+-		fake_jump->offset = -1;
++		fake_jump->offset = FAKE_JUMP_OFFSET;
+ 		fake_jump->type = INSN_JUMP_UNCONDITIONAL;
+ 		fake_jump->jump_dest = list_next_entry(last_orig_insn, list);
+-		fake_jump->ignore = true;
++		fake_jump->func = orig_insn->func;
+ 	}
+ 
+ 	if (!special_alt->new_len) {



* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-06-17 19:22 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2019-06-17 19:22 UTC (permalink / raw
  To: gentoo-commits

commit:     c64234fb787336f6395f2c3d420b37daa5ace651
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jun 17 19:22:46 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jun 17 19:22:46 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c64234fb

Linux patch 5.1.11

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |   4 +
 1010_linux-5.1.11.patch | 253 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 257 insertions(+)

diff --git a/0000_README b/0000_README
index 6a502ad..33e1406 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch:  1009_linux-5.1.10.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.1.10
 
+Patch:  1010_linux-5.1.11.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.1.11
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1010_linux-5.1.11.patch b/1010_linux-5.1.11.patch
new file mode 100644
index 0000000..3515914
--- /dev/null
+++ b/1010_linux-5.1.11.patch
@@ -0,0 +1,253 @@
+diff --git a/Documentation/networking/ip-sysctl.txt b/Documentation/networking/ip-sysctl.txt
+index c4ac35234f05..f0d09162c7a3 100644
+--- a/Documentation/networking/ip-sysctl.txt
++++ b/Documentation/networking/ip-sysctl.txt
+@@ -250,6 +250,14 @@ tcp_base_mss - INTEGER
+ 	Path MTU discovery (MTU probing).  If MTU probing is enabled,
+ 	this is the initial MSS used by the connection.
+ 
++tcp_min_snd_mss - INTEGER
++	TCP SYN and SYNACK messages usually advertise an ADVMSS option,
++	as described in RFC 1122 and RFC 6691.
++	If this ADVMSS option is smaller than tcp_min_snd_mss,
++	it is silently capped to tcp_min_snd_mss.
++
++	Default: 48 (at least 8 bytes of payload per segment)
++
+ tcp_congestion_control - STRING
+ 	Set the congestion control algorithm to be used for new
+ 	connections. The algorithm "reno" is always available, but
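On a kernel carrying this patch (5.1.11 and later), the new knob shows up under /proc/sys/net/ipv4/. A minimal reader, assuming the file exists on the running kernel:

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/net/ipv4/tcp_min_snd_mss", "r");
	int val;

	if (!f) {
		perror("tcp_min_snd_mss");
		return 1;
	}
	if (fscanf(f, "%d", &val) != 1) {
		fclose(f);
		return 1;
	}
	printf("tcp_min_snd_mss = %d\n", val);	/* 48 by default */
	fclose(f);
	return 0;
}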
+diff --git a/Makefile b/Makefile
+index e7d1973d9c26..5171900e5c93 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/include/linux/tcp.h b/include/linux/tcp.h
+index a9b0280687d5..2ba676469f98 100644
+--- a/include/linux/tcp.h
++++ b/include/linux/tcp.h
+@@ -488,4 +488,8 @@ static inline u16 tcp_mss_clamp(const struct tcp_sock *tp, u16 mss)
+ 
+ 	return (user_mss && user_mss < mss) ? user_mss : mss;
+ }
++
++int tcp_skb_shift(struct sk_buff *to, struct sk_buff *from, int pcount,
++		  int shiftlen);
++
+ #endif	/* _LINUX_TCP_H */
+diff --git a/include/net/netns/ipv4.h b/include/net/netns/ipv4.h
+index 7698460a3dd1..623cfbb7b8dc 100644
+--- a/include/net/netns/ipv4.h
++++ b/include/net/netns/ipv4.h
+@@ -117,6 +117,7 @@ struct netns_ipv4 {
+ #endif
+ 	int sysctl_tcp_mtu_probing;
+ 	int sysctl_tcp_base_mss;
++	int sysctl_tcp_min_snd_mss;
+ 	int sysctl_tcp_probe_threshold;
+ 	u32 sysctl_tcp_probe_interval;
+ 
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 68ee02523b87..36fcd0ad0515 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -55,6 +55,8 @@ void tcp_time_wait(struct sock *sk, int state, int timeo);
+ 
+ #define MAX_TCP_HEADER	(128 + MAX_HEADER)
+ #define MAX_TCP_OPTION_SPACE 40
++#define TCP_MIN_SND_MSS		48
++#define TCP_MIN_GSO_SIZE	(TCP_MIN_SND_MSS - MAX_TCP_OPTION_SPACE)
+ 
+ /*
+  * Never offer a window over 32767 without using window scaling. Some
+diff --git a/include/uapi/linux/snmp.h b/include/uapi/linux/snmp.h
+index 86dc24a96c90..fd42c1316d3d 100644
+--- a/include/uapi/linux/snmp.h
++++ b/include/uapi/linux/snmp.h
+@@ -283,6 +283,7 @@ enum
+ 	LINUX_MIB_TCPACKCOMPRESSED,		/* TCPAckCompressed */
+ 	LINUX_MIB_TCPZEROWINDOWDROP,		/* TCPZeroWindowDrop */
+ 	LINUX_MIB_TCPRCVQDROP,			/* TCPRcvQDrop */
++	LINUX_MIB_TCPWQUEUETOOBIG,		/* TCPWqueueTooBig */
+ 	__LINUX_MIB_MAX
+ };
+ 
+diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
+index c3610b37bb4c..dff6755dc1a7 100644
+--- a/net/ipv4/proc.c
++++ b/net/ipv4/proc.c
+@@ -291,6 +291,7 @@ static const struct snmp_mib snmp4_net_list[] = {
+ 	SNMP_MIB_ITEM("TCPAckCompressed", LINUX_MIB_TCPACKCOMPRESSED),
+ 	SNMP_MIB_ITEM("TCPZeroWindowDrop", LINUX_MIB_TCPZEROWINDOWDROP),
+ 	SNMP_MIB_ITEM("TCPRcvQDrop", LINUX_MIB_TCPRCVQDROP),
++	SNMP_MIB_ITEM("TCPWqueueTooBig", LINUX_MIB_TCPWQUEUETOOBIG),
+ 	SNMP_MIB_SENTINEL
+ };
+ 
+diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
+index eeb4041fa5f9..4f1fa744d3c8 100644
+--- a/net/ipv4/sysctl_net_ipv4.c
++++ b/net/ipv4/sysctl_net_ipv4.c
+@@ -39,6 +39,8 @@ static int ip_local_port_range_min[] = { 1, 1 };
+ static int ip_local_port_range_max[] = { 65535, 65535 };
+ static int tcp_adv_win_scale_min = -31;
+ static int tcp_adv_win_scale_max = 31;
++static int tcp_min_snd_mss_min = TCP_MIN_SND_MSS;
++static int tcp_min_snd_mss_max = 65535;
+ static int ip_privileged_port_min;
+ static int ip_privileged_port_max = 65535;
+ static int ip_ttl_min = 1;
+@@ -748,6 +750,15 @@ static struct ctl_table ipv4_net_table[] = {
+ 		.mode		= 0644,
+ 		.proc_handler	= proc_dointvec,
+ 	},
++	{
++		.procname	= "tcp_min_snd_mss",
++		.data		= &init_net.ipv4.sysctl_tcp_min_snd_mss,
++		.maxlen		= sizeof(int),
++		.mode		= 0644,
++		.proc_handler	= proc_dointvec_minmax,
++		.extra1		= &tcp_min_snd_mss_min,
++		.extra2		= &tcp_min_snd_mss_max,
++	},
+ 	{
+ 		.procname	= "tcp_probe_threshold",
+ 		.data		= &init_net.ipv4.sysctl_tcp_probe_threshold,
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 6baa6dc1b13b..365c8490b34b 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -3889,6 +3889,7 @@ void __init tcp_init(void)
+ 	unsigned long limit;
+ 	unsigned int i;
+ 
++	BUILD_BUG_ON(TCP_MIN_SND_MSS <= MAX_TCP_OPTION_SPACE);
+ 	BUILD_BUG_ON(sizeof(struct tcp_skb_cb) >
+ 		     FIELD_SIZEOF(struct sk_buff, cb));
+ 
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 731d3045b50a..d48f935c8e28 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -1296,7 +1296,7 @@ static bool tcp_shifted_skb(struct sock *sk, struct sk_buff *prev,
+ 	TCP_SKB_CB(skb)->seq += shifted;
+ 
+ 	tcp_skb_pcount_add(prev, pcount);
+-	BUG_ON(tcp_skb_pcount(skb) < pcount);
++	WARN_ON_ONCE(tcp_skb_pcount(skb) < pcount);
+ 	tcp_skb_pcount_add(skb, -pcount);
+ 
+ 	/* When we're adding to gso_segs == 1, gso_size will be zero,
+@@ -1362,6 +1362,21 @@ static int skb_can_shift(const struct sk_buff *skb)
+ 	return !skb_headlen(skb) && skb_is_nonlinear(skb);
+ }
+ 
++int tcp_skb_shift(struct sk_buff *to, struct sk_buff *from,
++		  int pcount, int shiftlen)
++{
++	/* TCP min gso_size is 8 bytes (TCP_MIN_GSO_SIZE)
++	 * Since TCP_SKB_CB(skb)->tcp_gso_segs is 16 bits, we need
++	 * to make sure we do not store more than 65535 * 8 bytes per skb,
++	 * even if current MSS is bigger.
++	 */
++	if (unlikely(to->len + shiftlen >= 65535 * TCP_MIN_GSO_SIZE))
++		return 0;
++	if (unlikely(tcp_skb_pcount(to) + pcount > 65535))
++		return 0;
++	return skb_shift(to, from, shiftlen);
++}
++
+ /* Try collapsing SACK blocks spanning across multiple skbs to a single
+  * skb.
+  */
+@@ -1467,7 +1482,7 @@ static struct sk_buff *tcp_shift_skb_data(struct sock *sk, struct sk_buff *skb,
+ 	if (!after(TCP_SKB_CB(skb)->seq + len, tp->snd_una))
+ 		goto fallback;
+ 
+-	if (!skb_shift(prev, skb, len))
++	if (!tcp_skb_shift(prev, skb, pcount, len))
+ 		goto fallback;
+ 	if (!tcp_shifted_skb(sk, prev, skb, state, pcount, len, mss, dup_sack))
+ 		goto out;
+@@ -1485,11 +1500,10 @@ static struct sk_buff *tcp_shift_skb_data(struct sock *sk, struct sk_buff *skb,
+ 		goto out;
+ 
+ 	len = skb->len;
+-	if (skb_shift(prev, skb, len)) {
+-		pcount += tcp_skb_pcount(skb);
+-		tcp_shifted_skb(sk, prev, skb, state, tcp_skb_pcount(skb),
++	pcount = tcp_skb_pcount(skb);
++	if (tcp_skb_shift(prev, skb, pcount, len))
++		tcp_shifted_skb(sk, prev, skb, state, pcount,
+ 				len, mss, 0);
+-	}
+ 
+ out:
+ 	return prev;
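Both guards in tcp_skb_shift() exist because tcp_gso_segs is a 16-bit field: a merged skb may hold at most 65535 segments of at least TCP_MIN_GSO_SIZE bytes each, and exceeding either bound previously tripped the BUG_ON in tcp_shifted_skb() (downgraded to WARN_ON_ONCE above), which crafted SACKs could trigger remotely. The checks in isolation, with the limits mirroring the patch:

#include <stdbool.h>
#include <stdint.h>

#define MAX_SEGS  65535u	/* tcp_gso_segs is 16 bits */
#define SEG_BYTES 8u		/* mirrors TCP_MIN_GSO_SIZE here */

static bool can_shift(uint32_t to_len, uint32_t shiftlen,
		      uint32_t to_segs, uint32_t add_segs)
{
	if ((uint64_t)to_len + shiftlen >= (uint64_t)MAX_SEGS * SEG_BYTES)
		return false;	/* byte total would overflow the field */
	if (to_segs + add_segs > MAX_SEGS)
		return false;	/* segment count would overflow */
	return true;
}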
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index a2896944aa37..72cb13cf41e7 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -2626,6 +2626,7 @@ static int __net_init tcp_sk_init(struct net *net)
+ 	net->ipv4.sysctl_tcp_ecn_fallback = 1;
+ 
+ 	net->ipv4.sysctl_tcp_base_mss = TCP_BASE_MSS;
++	net->ipv4.sysctl_tcp_min_snd_mss = TCP_MIN_SND_MSS;
+ 	net->ipv4.sysctl_tcp_probe_threshold = TCP_PROBE_THRESHOLD;
+ 	net->ipv4.sysctl_tcp_probe_interval = TCP_PROBE_INTERVAL;
+ 
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 4522579aaca2..2d86e1bc483c 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -1299,6 +1299,11 @@ int tcp_fragment(struct sock *sk, enum tcp_queue tcp_queue,
+ 	if (nsize < 0)
+ 		nsize = 0;
+ 
++	if (unlikely((sk->sk_wmem_queued >> 1) > sk->sk_sndbuf)) {
++		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPWQUEUETOOBIG);
++		return -ENOMEM;
++	}
++
+ 	if (skb_unclone(skb, gfp))
+ 		return -ENOMEM;
+ 
+@@ -1457,8 +1462,7 @@ static inline int __tcp_mtu_to_mss(struct sock *sk, int pmtu)
+ 	mss_now -= icsk->icsk_ext_hdr_len;
+ 
+ 	/* Then reserve room for full set of TCP options and 8 bytes of data */
+-	if (mss_now < 48)
+-		mss_now = 48;
++	mss_now = max(mss_now, sock_net(sk)->ipv4.sysctl_tcp_min_snd_mss);
+ 	return mss_now;
+ }
+ 
+@@ -2750,7 +2754,7 @@ static bool tcp_collapse_retrans(struct sock *sk, struct sk_buff *skb)
+ 		if (next_skb_size <= skb_availroom(skb))
+ 			skb_copy_bits(next_skb, 0, skb_put(skb, next_skb_size),
+ 				      next_skb_size);
+-		else if (!skb_shift(skb, next_skb, next_skb_size))
++		else if (!tcp_skb_shift(skb, next_skb, 1, next_skb_size))
+ 			return false;
+ 	}
+ 	tcp_highest_sack_replace(sk, next_skb, skb);
+diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
+index f0c86398e6a7..cec6c542ca39 100644
+--- a/net/ipv4/tcp_timer.c
++++ b/net/ipv4/tcp_timer.c
+@@ -154,6 +154,7 @@ static void tcp_mtu_probing(struct inet_connection_sock *icsk, struct sock *sk)
+ 		mss = tcp_mtu_to_mss(sk, icsk->icsk_mtup.search_low) >> 1;
+ 		mss = min(net->ipv4.sysctl_tcp_base_mss, mss);
+ 		mss = max(mss, 68 - tcp_sk(sk)->tcp_header_len);
++		mss = max(mss, net->ipv4.sysctl_tcp_min_snd_mss);
+ 		icsk->icsk_mtup.search_low = tcp_mss_to_mtu(sk, mss);
+ 	}
+ 	tcp_sync_mss(sk, icsk->icsk_pmtu_cookie);



* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-06-19 16:36 Thomas Deutschmann
  0 siblings, 0 replies; 23+ messages in thread
From: Thomas Deutschmann @ 2019-06-19 16:36 UTC (permalink / raw
  To: gentoo-commits

commit:     f4da0db1ef8167ef3aed194a342c455943843feb
Author:     Thomas Deutschmann <whissi <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 19 16:36:08 2019 +0000
Commit:     Thomas Deutschmann <whissi <AT> gentoo <DOT> org>
CommitDate: Wed Jun 19 16:36:08 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f4da0db1

Linux patch 5.1.12

Signed-off-by: Thomas Deutschmann <whissi <AT> gentoo.org>

 0000_README             |   26 +-
 1011_linux-5.1.12.patch | 4727 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4742 insertions(+), 11 deletions(-)

diff --git a/0000_README b/0000_README
index 33e1406..540b4c1 100644
--- a/0000_README
+++ b/0000_README
@@ -44,49 +44,53 @@ Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
 Patch:  1000_linux-5.1.1.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.1.1
 
 Patch:  1001_linux-5.1.2.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.1.2
 
 Patch:  1002_linux-5.1.3.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.1.3
 
 Patch:  1003_linux-5.1.4.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.1.4
 
 Patch:  1004_linux-5.1.5.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.1.5
 
 Patch:  1005_linux-5.1.6.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.1.6
 
 Patch:  1006_linux-5.1.7.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.1.7
 
 Patch:  1007_linux-5.1.8.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.1.8
 
 Patch:  1008_linux-5.1.9.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.1.9
 
 Patch:  1009_linux-5.1.10.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.1.10
 
 Patch:  1010_linux-5.1.11.patch
-From:   http://www.kernel.org
+From:   https://www.kernel.org
 Desc:   Linux 5.1.11
 
+Patch:  1011_linux-5.1.12.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.1.12
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1011_linux-5.1.12.patch b/1011_linux-5.1.12.patch
new file mode 100644
index 0000000..8f59cfe
--- /dev/null
+++ b/1011_linux-5.1.12.patch
@@ -0,0 +1,4727 @@
+diff --git a/Makefile b/Makefile
+index 5171900e5c93..6d7bfe9fcd7d 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
+index d2b5ec9c4b92..ba88b1eca93c 100644
+--- a/arch/arm/kvm/hyp/Makefile
++++ b/arch/arm/kvm/hyp/Makefile
+@@ -11,6 +11,7 @@ CFLAGS_ARMV7VE		   :=$(call cc-option, -march=armv7ve)
+ 
+ obj-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/hyp/vgic-v3-sr.o
+ obj-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/hyp/timer-sr.o
++obj-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/hyp/aarch32.o
+ 
+ obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
+ obj-$(CONFIG_KVM_ARM_HOST) += cp15-sr.o
+diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
+index 82d1904328ad..ea710f674cb6 100644
+--- a/arch/arm64/kvm/hyp/Makefile
++++ b/arch/arm64/kvm/hyp/Makefile
+@@ -10,6 +10,7 @@ KVM=../../../../virt/kvm
+ 
+ obj-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/hyp/vgic-v3-sr.o
+ obj-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/hyp/timer-sr.o
++obj-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/hyp/aarch32.o
+ 
+ obj-$(CONFIG_KVM_ARM_HOST) += vgic-v2-cpuif-proxy.o
+ obj-$(CONFIG_KVM_ARM_HOST) += sysreg-sr.o
+diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
+index 9a6099a2c633..f637447e96b0 100644
+--- a/arch/arm64/mm/fault.c
++++ b/arch/arm64/mm/fault.c
+@@ -171,9 +171,10 @@ void show_pte(unsigned long addr)
+ 		return;
+ 	}
+ 
+-	pr_alert("%s pgtable: %luk pages, %u-bit VAs, pgdp = %p\n",
++	pr_alert("%s pgtable: %luk pages, %u-bit VAs, pgdp=%016lx\n",
+ 		 mm == &init_mm ? "swapper" : "user", PAGE_SIZE / SZ_1K,
+-		 mm == &init_mm ? VA_BITS : (int) vabits_user, mm->pgd);
++		 mm == &init_mm ? VA_BITS : (int)vabits_user,
++		 (unsigned long)virt_to_phys(mm->pgd));
+ 	pgdp = pgd_offset(mm, addr);
+ 	pgd = READ_ONCE(*pgdp);
+ 	pr_alert("[%016lx] pgd=%016llx", addr, pgd_val(pgd));
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index e97f018ff740..ece9490e3018 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -936,13 +936,18 @@ void *__init fixmap_remap_fdt(phys_addr_t dt_phys)
+ 
+ int __init arch_ioremap_pud_supported(void)
+ {
+-	/* only 4k granule supports level 1 block mappings */
+-	return IS_ENABLED(CONFIG_ARM64_4K_PAGES);
++	/*
++	 * Only 4k granule supports level 1 block mappings.
++	 * SW table walks can't handle removal of intermediate entries.
++	 */
++	return IS_ENABLED(CONFIG_ARM64_4K_PAGES) &&
++	       !IS_ENABLED(CONFIG_ARM64_PTDUMP_DEBUGFS);
+ }
+ 
+ int __init arch_ioremap_pmd_supported(void)
+ {
+-	return 1;
++	/* See arch_ioremap_pud_supported() */
++	return !IS_ENABLED(CONFIG_ARM64_PTDUMP_DEBUGFS);
+ }
+ 
+ int pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot)
+diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
+index 581f91be9dd4..ebc6c03d4afe 100644
+--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
++++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
+@@ -875,6 +875,23 @@ static inline int pmd_present(pmd_t pmd)
+ 	return false;
+ }
+ 
++static inline int pmd_is_serializing(pmd_t pmd)
++{
++	/*
++	 * If the pmd is undergoing a split, the _PAGE_PRESENT bit is clear
++	 * and _PAGE_INVALID is set (see pmd_present, pmdp_invalidate).
++	 *
++	 * This condition may also occur when flushing a pmd while flushing
++	 * it (see ptep_modify_prot_start), so callers must ensure this
++	 * case is fine as well.
++	 */
++	if ((pmd_raw(pmd) & cpu_to_be64(_PAGE_PRESENT | _PAGE_INVALID)) ==
++						cpu_to_be64(_PAGE_INVALID))
++		return true;
++
++	return false;
++}
++
+ static inline int pmd_bad(pmd_t pmd)
+ {
+ 	if (radix_enabled())
+@@ -1090,6 +1107,19 @@ static inline int pmd_protnone(pmd_t pmd)
+ #define pmd_access_permitted pmd_access_permitted
+ static inline bool pmd_access_permitted(pmd_t pmd, bool write)
+ {
++	/*
++	 * pmdp_invalidate sets this combination (which is not caught by
++	 * !pte_present() check in pte_access_permitted), to prevent
++	 * lock-free lookups, as part of the serialize_against_pte_lookup()
++	 * synchronisation.
++	 *
++	 * This also catches the case where the PTE's hardware PRESENT bit is
++	 * cleared while TLB is flushed, which is suboptimal but should not
++	 * be frequent.
++	 */
++	if (pmd_is_serializing(pmd))
++		return false;
++
+ 	return pte_access_permitted(pmd_pte(pmd), write);
+ }
+ 
+diff --git a/arch/powerpc/include/asm/kexec.h b/arch/powerpc/include/asm/kexec.h
+index 4a585cba1787..c68476818753 100644
+--- a/arch/powerpc/include/asm/kexec.h
++++ b/arch/powerpc/include/asm/kexec.h
+@@ -94,6 +94,9 @@ static inline bool kdump_in_progress(void)
+ 	return crashing_cpu >= 0;
+ }
+ 
++void relocate_new_kernel(unsigned long indirection_page, unsigned long reboot_code_buffer,
++			 unsigned long start_address) __noreturn;
++
+ #ifdef CONFIG_KEXEC_FILE
+ extern const struct kexec_file_ops kexec_elf64_ops;
+ 
+diff --git a/arch/powerpc/kernel/machine_kexec_32.c b/arch/powerpc/kernel/machine_kexec_32.c
+index affe5dcce7f4..2b160d68db49 100644
+--- a/arch/powerpc/kernel/machine_kexec_32.c
++++ b/arch/powerpc/kernel/machine_kexec_32.c
+@@ -30,7 +30,6 @@ typedef void (*relocate_new_kernel_t)(
+  */
+ void default_machine_kexec(struct kimage *image)
+ {
+-	extern const unsigned char relocate_new_kernel[];
+ 	extern const unsigned int relocate_new_kernel_size;
+ 	unsigned long page_list;
+ 	unsigned long reboot_code_buffer, reboot_code_buffer_phys;
+@@ -58,6 +57,9 @@ void default_machine_kexec(struct kimage *image)
+ 				reboot_code_buffer + KEXEC_CONTROL_PAGE_SIZE);
+ 	printk(KERN_INFO "Bye!\n");
+ 
++	if (!IS_ENABLED(CONFIG_FSL_BOOKE) && !IS_ENABLED(CONFIG_44x))
++		relocate_new_kernel(page_list, reboot_code_buffer_phys, image->start);
++
+ 	/* now call it */
+ 	rnk = (relocate_new_kernel_t) reboot_code_buffer;
+ 	(*rnk)(page_list, reboot_code_buffer_phys, image->start);
+diff --git a/arch/powerpc/mm/pgtable-book3s64.c b/arch/powerpc/mm/pgtable-book3s64.c
+index a4341aba0af4..e538804d3f4a 100644
+--- a/arch/powerpc/mm/pgtable-book3s64.c
++++ b/arch/powerpc/mm/pgtable-book3s64.c
+@@ -116,6 +116,9 @@ pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
+ 	/*
+ 	 * This ensures that generic code that relies on IRQ disabling
+ 	 * to prevent a parallel THP split works as expected.
++	 *
++	 * Marking the entry with _PAGE_INVALID && ~_PAGE_PRESENT requires
++	 * a special case check in pmd_access_permitted.
+ 	 */
+ 	serialize_against_pte_lookup(vma->vm_mm);
+ 	return __pmd(old_pmd);
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index c4180ecfbb2a..ee35f1112db9 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -4413,21 +4413,28 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
+ 				const struct kvm_memory_slot *new,
+ 				enum kvm_mr_change change)
+ {
+-	int rc;
+-
+-	/* If the basics of the memslot do not change, we do not want
+-	 * to update the gmap. Every update causes several unnecessary
+-	 * segment translation exceptions. This is usually handled just
+-	 * fine by the normal fault handler + gmap, but it will also
+-	 * cause faults on the prefix page of running guest CPUs.
+-	 */
+-	if (old->userspace_addr == mem->userspace_addr &&
+-	    old->base_gfn * PAGE_SIZE == mem->guest_phys_addr &&
+-	    old->npages * PAGE_SIZE == mem->memory_size)
+-		return;
++	int rc = 0;
+ 
+-	rc = gmap_map_segment(kvm->arch.gmap, mem->userspace_addr,
+-		mem->guest_phys_addr, mem->memory_size);
++	switch (change) {
++	case KVM_MR_DELETE:
++		rc = gmap_unmap_segment(kvm->arch.gmap, old->base_gfn * PAGE_SIZE,
++					old->npages * PAGE_SIZE);
++		break;
++	case KVM_MR_MOVE:
++		rc = gmap_unmap_segment(kvm->arch.gmap, old->base_gfn * PAGE_SIZE,
++					old->npages * PAGE_SIZE);
++		if (rc)
++			break;
++		/* FALLTHROUGH */
++	case KVM_MR_CREATE:
++		rc = gmap_map_segment(kvm->arch.gmap, mem->userspace_addr,
++				      mem->guest_phys_addr, mem->memory_size);
++		break;
++	case KVM_MR_FLAGS_ONLY:
++		break;
++	default:
++		WARN(1, "Unknown KVM MR CHANGE: %d\n", change);
++	}
+ 	if (rc)
+ 		pr_warn("failed to commit memory region\n");
+ 	return;
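
The s390 hunk above replaces the old address-comparison heuristic with an explicit switch on the memslot change type, composing KVM_MR_MOVE as unmap-then-map via fallthrough. The control flow, reduced to a standalone sketch (the enum values and helpers are stand-ins, not the KVM/gmap API):

#include <stdio.h>

enum mr_change { MR_CREATE, MR_DELETE, MR_MOVE, MR_FLAGS_ONLY };

static int unmap_segment(void) { puts("unmap"); return 0; }
static int map_segment(void)   { puts("map");   return 0; }

/* MR_MOVE falls through to MR_CREATE after a successful unmap, so a
 * move is literally delete + create, as in the s390 hunk above. */
static int commit_region(enum mr_change change)
{
	int rc = 0;

	switch (change) {
	case MR_DELETE:
		rc = unmap_segment();
		break;
	case MR_MOVE:
		rc = unmap_segment();
		if (rc)
			break;
		/* fall through */
	case MR_CREATE:
		rc = map_segment();
		break;
	case MR_FLAGS_ONLY:
		break;
	}
	return rc;
}

int main(void)
{
	return commit_region(MR_MOVE);	/* prints "unmap" then "map" */
}
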
+diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
+index 8a4a7823451a..f53658dde639 100644
+--- a/arch/x86/kernel/cpu/microcode/core.c
++++ b/arch/x86/kernel/cpu/microcode/core.c
+@@ -876,7 +876,7 @@ int __init microcode_init(void)
+ 		goto out_ucode_group;
+ 
+ 	register_syscore_ops(&mc_syscore_ops);
+-	cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "x86/microcode:online",
++	cpuhp_setup_state_nocalls(CPUHP_AP_MICROCODE_LOADER, "x86/microcode:online",
+ 				  mc_cpu_online, mc_cpu_down_prep);
+ 
+ 	pr_info("Microcode Update Driver: v%s.", DRIVER_VERSION);
+diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
+index 1573a0a6b525..ff6e8e561405 100644
+--- a/arch/x86/kernel/cpu/resctrl/monitor.c
++++ b/arch/x86/kernel/cpu/resctrl/monitor.c
+@@ -368,6 +368,9 @@ static void update_mba_bw(struct rdtgroup *rgrp, struct rdt_domain *dom_mbm)
+ 	struct list_head *head;
+ 	struct rdtgroup *entry;
+ 
++	if (!is_mbm_local_enabled())
++		return;
++
+ 	r_mba = &rdt_resources_all[RDT_RESOURCE_MBA];
+ 	closid = rgrp->closid;
+ 	rmid = rgrp->mon.rmid;
+diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
+index e39741997893..dd745b58ffd8 100644
+--- a/arch/x86/kvm/pmu.c
++++ b/arch/x86/kvm/pmu.c
+@@ -283,7 +283,7 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
+ 	bool fast_mode = idx & (1u << 31);
+ 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+ 	struct kvm_pmc *pmc;
+-	u64 ctr_val;
++	u64 mask = fast_mode ? ~0u : ~0ull;
+ 
+ 	if (!pmu->version)
+ 		return 1;
+@@ -291,15 +291,11 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
+ 	if (is_vmware_backdoor_pmc(idx))
+ 		return kvm_pmu_rdpmc_vmware(vcpu, idx, data);
+ 
+-	pmc = kvm_x86_ops->pmu_ops->msr_idx_to_pmc(vcpu, idx);
++	pmc = kvm_x86_ops->pmu_ops->msr_idx_to_pmc(vcpu, idx, &mask);
+ 	if (!pmc)
+ 		return 1;
+ 
+-	ctr_val = pmc_read_counter(pmc);
+-	if (fast_mode)
+-		ctr_val = (u32)ctr_val;
+-
+-	*data = ctr_val;
++	*data = pmc_read_counter(pmc) & mask;
+ 	return 0;
+ }
+ 
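
The rdpmc path now folds the fast-mode truncation into an initial mask: ~0u is a 32-bit unsigned int of all ones that zero-extends when widened to u64, while ~0ull keeps all 64 bits, so a single `counter & mask` replaces the old branch. This is easy to verify in isolation:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int fast_mode = 1;
	/* ~0u is a 32-bit unsigned int of all ones; widening it to a
	 * 64-bit type zero-extends, so the mask keeps only bits 0..31. */
	uint64_t mask = fast_mode ? ~0u : ~0ull;
	uint64_t counter = 0x1234567890abcdefull;

	printf("%" PRIx64 "\n", counter & mask);	/* 90abcdef */
	return 0;
}
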
+diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
+index ba8898e1a854..22dff661145a 100644
+--- a/arch/x86/kvm/pmu.h
++++ b/arch/x86/kvm/pmu.h
+@@ -25,7 +25,8 @@ struct kvm_pmu_ops {
+ 	unsigned (*find_fixed_event)(int idx);
+ 	bool (*pmc_is_enabled)(struct kvm_pmc *pmc);
+ 	struct kvm_pmc *(*pmc_idx_to_pmc)(struct kvm_pmu *pmu, int pmc_idx);
+-	struct kvm_pmc *(*msr_idx_to_pmc)(struct kvm_vcpu *vcpu, unsigned idx);
++	struct kvm_pmc *(*msr_idx_to_pmc)(struct kvm_vcpu *vcpu, unsigned idx,
++					  u64 *mask);
+ 	int (*is_valid_msr_idx)(struct kvm_vcpu *vcpu, unsigned idx);
+ 	bool (*is_valid_msr)(struct kvm_vcpu *vcpu, u32 msr);
+ 	int (*get_msr)(struct kvm_vcpu *vcpu, u32 msr, u64 *data);
+diff --git a/arch/x86/kvm/pmu_amd.c b/arch/x86/kvm/pmu_amd.c
+index 50fa9450fcf1..d3118088f1cd 100644
+--- a/arch/x86/kvm/pmu_amd.c
++++ b/arch/x86/kvm/pmu_amd.c
+@@ -186,7 +186,7 @@ static int amd_is_valid_msr_idx(struct kvm_vcpu *vcpu, unsigned idx)
+ }
+ 
+ /* idx is the ECX register of RDPMC instruction */
+-static struct kvm_pmc *amd_msr_idx_to_pmc(struct kvm_vcpu *vcpu, unsigned idx)
++static struct kvm_pmc *amd_msr_idx_to_pmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *mask)
+ {
+ 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+ 	struct kvm_pmc *counters;
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index ae6e51828a54..aa3b77acfbf9 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -379,6 +379,9 @@ module_param(vgif, int, 0444);
+ static int sev = IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT);
+ module_param(sev, int, 0444);
+ 
++static bool __read_mostly dump_invalid_vmcb = 0;
++module_param(dump_invalid_vmcb, bool, 0644);
++
+ static u8 rsm_ins_bytes[] = "\x0f\xaa";
+ 
+ static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
+@@ -4834,6 +4837,11 @@ static void dump_vmcb(struct kvm_vcpu *vcpu)
+ 	struct vmcb_control_area *control = &svm->vmcb->control;
+ 	struct vmcb_save_area *save = &svm->vmcb->save;
+ 
++	if (!dump_invalid_vmcb) {
++		pr_warn_ratelimited("set kvm_amd.dump_invalid_vmcb=1 to dump internal KVM state.\n");
++		return;
++	}
++
+ 	pr_err("VMCB Control Area:\n");
+ 	pr_err("%-20s%04x\n", "cr_read:", control->intercept_cr & 0xffff);
+ 	pr_err("%-20s%04x\n", "cr_write:", control->intercept_cr >> 16);
+@@ -4992,7 +5000,6 @@ static int handle_exit(struct kvm_vcpu *vcpu)
+ 		kvm_run->exit_reason = KVM_EXIT_FAIL_ENTRY;
+ 		kvm_run->fail_entry.hardware_entry_failure_reason
+ 			= svm->vmcb->control.exit_code;
+-		pr_err("KVM: FAILED VMRUN WITH VMCB:\n");
+ 		dump_vmcb(vcpu);
+ 		return 0;
+ 	}
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 8f6f69c26c35..5fa0c17d0b41 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -5467,7 +5467,7 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
+ 	    vmcs12->vmcs_link_pointer != -1ull) {
+ 		struct vmcs12 *shadow_vmcs12 = get_shadow_vmcs12(vcpu);
+ 
+-		if (kvm_state->size < sizeof(*kvm_state) + 2 * sizeof(*vmcs12))
++		if (kvm_state->size < sizeof(*kvm_state) + VMCS12_SIZE + sizeof(*vmcs12))
+ 			return -EINVAL;
+ 
+ 		if (copy_from_user(shadow_vmcs12,
+diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
+index 5ab4a364348e..c3f103e2b08e 100644
+--- a/arch/x86/kvm/vmx/pmu_intel.c
++++ b/arch/x86/kvm/vmx/pmu_intel.c
+@@ -126,7 +126,7 @@ static int intel_is_valid_msr_idx(struct kvm_vcpu *vcpu, unsigned idx)
+ }
+ 
+ static struct kvm_pmc *intel_msr_idx_to_pmc(struct kvm_vcpu *vcpu,
+-					    unsigned idx)
++					    unsigned idx, u64 *mask)
+ {
+ 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+ 	bool fixed = idx & (1u << 30);
+@@ -138,6 +138,7 @@ static struct kvm_pmc *intel_msr_idx_to_pmc(struct kvm_vcpu *vcpu,
+ 	if (fixed && idx >= pmu->nr_arch_fixed_counters)
+ 		return NULL;
+ 	counters = fixed ? pmu->fixed_counters : pmu->gp_counters;
++	*mask &= pmu->counter_bitmask[fixed ? KVM_PMC_FIXED : KVM_PMC_GP];
+ 
+ 	return &counters[idx];
+ }
+@@ -183,9 +184,13 @@ static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *data)
+ 		*data = pmu->global_ovf_ctrl;
+ 		return 0;
+ 	default:
+-		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
+-		    (pmc = get_fixed_pmc(pmu, msr))) {
+-			*data = pmc_read_counter(pmc);
++		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0))) {
++			u64 val = pmc_read_counter(pmc);
++			*data = val & pmu->counter_bitmask[KVM_PMC_GP];
++			return 0;
++		} else if ((pmc = get_fixed_pmc(pmu, msr))) {
++			u64 val = pmc_read_counter(pmc);
++			*data = val & pmu->counter_bitmask[KVM_PMC_FIXED];
+ 			return 0;
+ 		} else if ((pmc = get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0))) {
+ 			*data = pmc->eventsel;
+@@ -235,11 +240,14 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		}
+ 		break;
+ 	default:
+-		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
+-		    (pmc = get_fixed_pmc(pmu, msr))) {
+-			if (!msr_info->host_initiated)
+-				data = (s64)(s32)data;
+-			pmc->counter += data - pmc_read_counter(pmc);
++		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0))) {
++			if (msr_info->host_initiated)
++				pmc->counter = data;
++			else
++				pmc->counter = (s32)data;
++			return 0;
++		} else if ((pmc = get_fixed_pmc(pmu, msr))) {
++			pmc->counter = data;
+ 			return 0;
+ 		} else if ((pmc = get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0))) {
+ 			if (data == pmc->eventsel)
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 2b4a3d32c511..cfb8f1ec9a0a 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -114,6 +114,9 @@ static u64 __read_mostly host_xss;
+ bool __read_mostly enable_pml = 1;
+ module_param_named(pml, enable_pml, bool, S_IRUGO);
+ 
++static bool __read_mostly dump_invalid_vmcs = 0;
++module_param(dump_invalid_vmcs, bool, 0644);
++
+ #define MSR_BITMAP_MODE_X2APIC		1
+ #define MSR_BITMAP_MODE_X2APIC_APICV	2
+ 
+@@ -5605,15 +5608,24 @@ static void vmx_dump_dtsel(char *name, uint32_t limit)
+ 
+ void dump_vmcs(void)
+ {
+-	u32 vmentry_ctl = vmcs_read32(VM_ENTRY_CONTROLS);
+-	u32 vmexit_ctl = vmcs_read32(VM_EXIT_CONTROLS);
+-	u32 cpu_based_exec_ctrl = vmcs_read32(CPU_BASED_VM_EXEC_CONTROL);
+-	u32 pin_based_exec_ctrl = vmcs_read32(PIN_BASED_VM_EXEC_CONTROL);
+-	u32 secondary_exec_control = 0;
+-	unsigned long cr4 = vmcs_readl(GUEST_CR4);
+-	u64 efer = vmcs_read64(GUEST_IA32_EFER);
++	u32 vmentry_ctl, vmexit_ctl;
++	u32 cpu_based_exec_ctrl, pin_based_exec_ctrl, secondary_exec_control;
++	unsigned long cr4;
++	u64 efer;
+ 	int i, n;
+ 
++	if (!dump_invalid_vmcs) {
++		pr_warn_ratelimited("set kvm_intel.dump_invalid_vmcs=1 to dump internal KVM state.\n");
++		return;
++	}
++
++	vmentry_ctl = vmcs_read32(VM_ENTRY_CONTROLS);
++	vmexit_ctl = vmcs_read32(VM_EXIT_CONTROLS);
++	cpu_based_exec_ctrl = vmcs_read32(CPU_BASED_VM_EXEC_CONTROL);
++	pin_based_exec_ctrl = vmcs_read32(PIN_BASED_VM_EXEC_CONTROL);
++	cr4 = vmcs_readl(GUEST_CR4);
++	efer = vmcs_read64(GUEST_IA32_EFER);
++	secondary_exec_control = 0;
+ 	if (cpu_has_secondary_exec_ctrls())
+ 		secondary_exec_control = vmcs_read32(SECONDARY_VM_EXEC_CONTROL);
+ 
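
As with kvm_amd earlier, the vmx dump is gated behind a writable module parameter, with only a ratelimited hint when disabled, since the dump is otherwise guest-triggerable log spam. The general shape of such a gate, as a kernel-module fragment (generic names; this is not standalone userspace code):

#include <linux/module.h>	/* fragment: builds only in a kernel module */

static bool __read_mostly dump_invalid_state;
module_param(dump_invalid_state, bool, 0644);	/* toggle at runtime via sysfs */

static void demo_dump_state(void)
{
	if (!dump_invalid_state) {
		/* point the admin at the knob without flooding the log */
		pr_warn_ratelimited("set <module>.dump_invalid_state=1 to dump state\n");
		return;
	}
	/* ... verbose, guest-triggerable register dump goes here ... */
}
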
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index f879529906b4..9cd72decdfe4 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -314,6 +314,7 @@ void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked);
+ void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
+ struct shared_msr_entry *find_msr_entry(struct vcpu_vmx *vmx, u32 msr);
+ void pt_update_intercept_for_msr(struct vcpu_vmx *vmx);
++void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp);
+ 
+ #define POSTED_INTR_ON  0
+ #define POSTED_INTR_SN  1
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index efc8adf7ca0e..b07868eb1656 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -143,7 +143,7 @@ module_param(tsc_tolerance_ppm, uint, S_IRUGO | S_IWUSR);
+  * tuning, i.e. allows privileged userspace to set an exact advancement time.
+  */
+ static int __read_mostly lapic_timer_advance_ns = -1;
+-module_param(lapic_timer_advance_ns, uint, S_IRUGO | S_IWUSR);
++module_param(lapic_timer_advance_ns, int, S_IRUGO | S_IWUSR);
+ 
+ static bool __read_mostly vector_hashing = true;
+ module_param(vector_hashing, bool, S_IRUGO);
+diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
+index 8dc0fc0b1382..296da58f3013 100644
+--- a/arch/x86/mm/kasan_init_64.c
++++ b/arch/x86/mm/kasan_init_64.c
+@@ -199,7 +199,7 @@ static inline p4d_t *early_p4d_offset(pgd_t *pgd, unsigned long addr)
+ 	if (!pgtable_l5_enabled())
+ 		return (p4d_t *)pgd;
+ 
+-	p4d = __pa_nodebug(pgd_val(*pgd)) & PTE_PFN_MASK;
++	p4d = pgd_val(*pgd) & PTE_PFN_MASK;
+ 	p4d += __START_KERNEL_map - phys_base;
+ 	return (p4d_t *)p4d + p4d_index(addr);
+ }
+diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
+index d669c5e797e0..5aef69b7ff52 100644
+--- a/arch/x86/mm/kaslr.c
++++ b/arch/x86/mm/kaslr.c
+@@ -52,7 +52,7 @@ static __initdata struct kaslr_memory_region {
+ } kaslr_regions[] = {
+ 	{ &page_offset_base, 0 },
+ 	{ &vmalloc_base, 0 },
+-	{ &vmemmap_base, 1 },
++	{ &vmemmap_base, 0 },
+ };
+ 
+ /* Get size in bytes used by the memory region */
+@@ -78,6 +78,7 @@ void __init kernel_randomize_memory(void)
+ 	unsigned long rand, memory_tb;
+ 	struct rnd_state rand_state;
+ 	unsigned long remain_entropy;
++	unsigned long vmemmap_size;
+ 
+ 	vaddr_start = pgtable_l5_enabled() ? __PAGE_OFFSET_BASE_L5 : __PAGE_OFFSET_BASE_L4;
+ 	vaddr = vaddr_start;
+@@ -109,6 +110,14 @@ void __init kernel_randomize_memory(void)
+ 	if (memory_tb < kaslr_regions[0].size_tb)
+ 		kaslr_regions[0].size_tb = memory_tb;
+ 
++	/*
++	 * Calculate the vmemmap region size in TBs, aligned to a TB
++	 * boundary.
++	 */
++	vmemmap_size = (kaslr_regions[0].size_tb << (TB_SHIFT - PAGE_SHIFT)) *
++			sizeof(struct page);
++	kaslr_regions[2].size_tb = DIV_ROUND_UP(vmemmap_size, 1UL << TB_SHIFT);
++
+ 	/* Calculate entropy available between regions */
+ 	remain_entropy = vaddr_end - vaddr_start;
+ 	for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++)
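
The new hunk sizes the vmemmap region from the randomized physical-memory region instead of assuming a fixed 1 TB. With conventional x86-64 constants the arithmetic can be checked standalone (PAGE_SHIFT, TB_SHIFT and sizeof(struct page) below are assumed typical values, and 64-bit longs are assumed):

#include <stdio.h>

#define PAGE_SHIFT 12			/* 4 KiB pages, conventional x86-64 */
#define TB_SHIFT   40
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	unsigned long size_tb = 64;	/* direct-map region: 64 TB */
	unsigned long page_struct = 64;	/* typical sizeof(struct page) */

	/* struct pages needed: size_tb << (TB_SHIFT - PAGE_SHIFT) = 2^34 */
	unsigned long vmemmap_size =
		(size_tb << (TB_SHIFT - PAGE_SHIFT)) * page_struct;

	/* 2^34 pages * 64 bytes = 2^40 bytes -> exactly 1 TB of vmemmap */
	printf("%lu TB\n", DIV_ROUND_UP(vmemmap_size, 1UL << TB_SHIFT));
	return 0;
}
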
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index adf28788cab5..133fed8e4a8b 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -4476,9 +4476,12 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ 	{ "ST3320[68]13AS",	"SD1[5-9]",	ATA_HORKAGE_NONCQ |
+ 						ATA_HORKAGE_FIRMWARE_WARN },
+ 
+-	/* drives which fail FPDMA_AA activation (some may freeze afterwards) */
+-	{ "ST1000LM024 HN-M101MBB", "2AR10001",	ATA_HORKAGE_BROKEN_FPDMA_AA },
+-	{ "ST1000LM024 HN-M101MBB", "2BA30001",	ATA_HORKAGE_BROKEN_FPDMA_AA },
++	/* drives which fail FPDMA_AA activation (some may freeze afterwards);
++	   the ST disks also have LPM issues */
++	{ "ST1000LM024 HN-M101MBB", "2AR10001",	ATA_HORKAGE_BROKEN_FPDMA_AA |
++						ATA_HORKAGE_NOLPM, },
++	{ "ST1000LM024 HN-M101MBB", "2BA30001",	ATA_HORKAGE_BROKEN_FPDMA_AA |
++						ATA_HORKAGE_NOLPM, },
+ 	{ "VB0250EAVER",	"HPG7",		ATA_HORKAGE_BROKEN_FPDMA_AA },
+ 
+ 	/* Blacklist entries taken from Silicon Image 3124/3132
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+index ecf6f96df2ad..e6b07ece3910 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+@@ -594,7 +594,7 @@ error:
+ int amdgpu_vcn_enc_ring_test_ring(struct amdgpu_ring *ring)
+ {
+ 	struct amdgpu_device *adev = ring->adev;
+-	uint32_t rptr = amdgpu_ring_get_rptr(ring);
++	uint32_t rptr;
+ 	unsigned i;
+ 	int r;
+ 
+@@ -602,6 +602,8 @@ int amdgpu_vcn_enc_ring_test_ring(struct amdgpu_ring *ring)
+ 	if (r)
+ 		return r;
+ 
++	rptr = amdgpu_ring_get_rptr(ring);
++
+ 	amdgpu_ring_write(ring, VCN_ENC_CMD_END);
+ 	amdgpu_ring_commit(ring);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+index 2fe8397241ea..1611bef19a2c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+@@ -715,6 +715,7 @@ static bool gmc_v9_0_keep_stolen_memory(struct amdgpu_device *adev)
+ 	case CHIP_VEGA10:
+ 		return true;
+ 	case CHIP_RAVEN:
++		return (adev->pdev->device == 0x15d8);
+ 	case CHIP_VEGA12:
+ 	case CHIP_VEGA20:
+ 	default:
+diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
+index c9edddf9f88a..be70e6e5f9df 100644
+--- a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
+@@ -170,13 +170,16 @@ static void uvd_v6_0_enc_ring_set_wptr(struct amdgpu_ring *ring)
+ static int uvd_v6_0_enc_ring_test_ring(struct amdgpu_ring *ring)
+ {
+ 	struct amdgpu_device *adev = ring->adev;
+-	uint32_t rptr = amdgpu_ring_get_rptr(ring);
++	uint32_t rptr;
+ 	unsigned i;
+ 	int r;
+ 
+ 	r = amdgpu_ring_alloc(ring, 16);
+ 	if (r)
+ 		return r;
++
++	rptr = amdgpu_ring_get_rptr(ring);
++
+ 	amdgpu_ring_write(ring, HEVC_ENC_CMD_END);
+ 	amdgpu_ring_commit(ring);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
+index dc461df48da0..9682f39d57c6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
+@@ -175,7 +175,7 @@ static void uvd_v7_0_enc_ring_set_wptr(struct amdgpu_ring *ring)
+ static int uvd_v7_0_enc_ring_test_ring(struct amdgpu_ring *ring)
+ {
+ 	struct amdgpu_device *adev = ring->adev;
+-	uint32_t rptr = amdgpu_ring_get_rptr(ring);
++	uint32_t rptr;
+ 	unsigned i;
+ 	int r;
+ 
+@@ -185,6 +185,9 @@ static int uvd_v7_0_enc_ring_test_ring(struct amdgpu_ring *ring)
+ 	r = amdgpu_ring_alloc(ring, 16);
+ 	if (r)
+ 		return r;
++
++	rptr = amdgpu_ring_get_rptr(ring);
++
+ 	amdgpu_ring_write(ring, HEVC_ENC_CMD_END);
+ 	amdgpu_ring_commit(ring);
+ 
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index 8db099b8077e..d94778915312 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -1580,6 +1580,50 @@ static void connector_bad_edid(struct drm_connector *connector,
+ 	}
+ }
+ 
++/* Get override or firmware EDID */
++static struct edid *drm_get_override_edid(struct drm_connector *connector)
++{
++	struct edid *override = NULL;
++
++	if (connector->override_edid)
++		override = drm_edid_duplicate(connector->edid_blob_ptr->data);
++
++	if (!override)
++		override = drm_load_edid_firmware(connector);
++
++	return IS_ERR(override) ? NULL : override;
++}
++
++/**
++ * drm_add_override_edid_modes - add modes from override/firmware EDID
++ * @connector: connector we're probing
++ *
++ * Add modes from the override/firmware EDID, if available. Only to be used from
++ * drm_helper_probe_single_connector_modes() as a fallback for when DDC probe
++ * failed during drm_get_edid() and caused the override/firmware EDID to be
++ * skipped.
++ *
++ * Return: The number of modes added or 0 if we couldn't find any.
++ */
++int drm_add_override_edid_modes(struct drm_connector *connector)
++{
++	struct edid *override;
++	int num_modes = 0;
++
++	override = drm_get_override_edid(connector);
++	if (override) {
++		drm_connector_update_edid_property(connector, override);
++		num_modes = drm_add_edid_modes(connector, override);
++		kfree(override);
++
++		DRM_DEBUG_KMS("[CONNECTOR:%d:%s] adding %d modes via fallback override/firmware EDID\n",
++			      connector->base.id, connector->name, num_modes);
++	}
++
++	return num_modes;
++}
++EXPORT_SYMBOL(drm_add_override_edid_modes);
++
+ /**
+  * drm_do_get_edid - get EDID data using a custom EDID block read function
+  * @connector: connector we're probing
+@@ -1607,15 +1651,10 @@ struct edid *drm_do_get_edid(struct drm_connector *connector,
+ {
+ 	int i, j = 0, valid_extensions = 0;
+ 	u8 *edid, *new;
+-	struct edid *override = NULL;
+-
+-	if (connector->override_edid)
+-		override = drm_edid_duplicate(connector->edid_blob_ptr->data);
+-
+-	if (!override)
+-		override = drm_load_edid_firmware(connector);
++	struct edid *override;
+ 
+-	if (!IS_ERR_OR_NULL(override))
++	override = drm_get_override_edid(connector);
++	if (override)
+ 		return override;
+ 
+ 	if ((edid = kmalloc(EDID_LENGTH, GFP_KERNEL)) == NULL)
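
drm_get_override_edid() also normalizes the ERR_PTR return of drm_load_edid_firmware() to NULL at the helper boundary, so both call sites need only a NULL test. The idiom, with minimal stand-ins for the kernel's error-pointer helpers:

#include <stdio.h>

/* Minimal stand-ins for the kernel's ERR_PTR()/IS_ERR() helpers. */
#define MAX_ERRNO 4095
#define ERR_PTR(err) ((void *)(long)(err))
#define IS_ERR(p)    ((unsigned long)(p) >= (unsigned long)-MAX_ERRNO)

static void *load_firmware_blob(void)
{
	return ERR_PTR(-2);	/* pretend the firmware lookup failed */
}

/* Convert "error pointer" to NULL once, here, so every caller can use
 * a plain NULL test - the same boundary drm_get_override_edid() draws. */
static void *get_override_blob(void)
{
	void *blob = load_firmware_blob();

	return IS_ERR(blob) ? NULL : blob;
}

int main(void)
{
	printf("override: %p\n", get_override_blob());	/* NULL */
	return 0;
}
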
+diff --git a/drivers/gpu/drm/drm_probe_helper.c b/drivers/gpu/drm/drm_probe_helper.c
+index 6fd08e04b323..dd427c7ff967 100644
+--- a/drivers/gpu/drm/drm_probe_helper.c
++++ b/drivers/gpu/drm/drm_probe_helper.c
+@@ -479,6 +479,13 @@ retry:
+ 
+ 	count = (*connector_funcs->get_modes)(connector);
+ 
++	/*
++	 * Fallback for when DDC probe failed in drm_get_edid() and thus skipped
++	 * override/firmware EDID.
++	 */
++	if (count == 0 && connector->status == connector_status_connected)
++		count = drm_add_override_edid_modes(connector);
++
+ 	if (count == 0 && connector->status == connector_status_connected)
+ 		count = drm_add_modes_noedid(connector, 1024, 768);
+ 	count += drm_helper_probe_add_cmdline_mode(connector);
+diff --git a/drivers/gpu/drm/i915/intel_csr.c b/drivers/gpu/drm/i915/intel_csr.c
+index e8ac04c33e29..b552d9d28127 100644
+--- a/drivers/gpu/drm/i915/intel_csr.c
++++ b/drivers/gpu/drm/i915/intel_csr.c
+@@ -300,10 +300,17 @@ static u32 *parse_csr_fw(struct drm_i915_private *dev_priv,
+ 	u32 dmc_offset = CSR_DEFAULT_FW_OFFSET, readcount = 0, nbytes;
+ 	u32 i;
+ 	u32 *dmc_payload;
++	size_t fsize;
+ 
+ 	if (!fw)
+ 		return NULL;
+ 
++	fsize = sizeof(struct intel_css_header) +
++		sizeof(struct intel_package_header) +
++		sizeof(struct intel_dmc_header);
++	if (fsize > fw->size)
++		goto error_truncated;
++
+ 	/* Extract CSS Header information*/
+ 	css_header = (struct intel_css_header *)fw->data;
+ 	if (sizeof(struct intel_css_header) !=
+@@ -363,6 +370,9 @@ static u32 *parse_csr_fw(struct drm_i915_private *dev_priv,
+ 	/* Convert dmc_offset into number of bytes. By default it is in dwords*/
+ 	dmc_offset *= 4;
+ 	readcount += dmc_offset;
++	fsize += dmc_offset;
++	if (fsize > fw->size)
++		goto error_truncated;
+ 
+ 	/* Extract dmc_header information. */
+ 	dmc_header = (struct intel_dmc_header *)&fw->data[readcount];
+@@ -394,6 +404,10 @@ static u32 *parse_csr_fw(struct drm_i915_private *dev_priv,
+ 
+ 	/* fw_size is in dwords, so multiplied by 4 to convert into bytes. */
+ 	nbytes = dmc_header->fw_size * 4;
++	fsize += nbytes;
++	if (fsize > fw->size)
++		goto error_truncated;
++
+ 	if (nbytes > csr->max_fw_size) {
+ 		DRM_ERROR("DMC FW too big (%u bytes)\n", nbytes);
+ 		return NULL;
+@@ -407,6 +421,10 @@ static u32 *parse_csr_fw(struct drm_i915_private *dev_priv,
+ 	}
+ 
+ 	return memcpy(dmc_payload, &fw->data[readcount], nbytes);
++
++error_truncated:
++	DRM_ERROR("Truncated DMC firmware, rejecting.\n");
++	return NULL;
+ }
+ 
+ static void intel_csr_runtime_pm_get(struct drm_i915_private *dev_priv)
+diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
+index 421aac80a838..cd8a22d6370e 100644
+--- a/drivers/gpu/drm/i915/intel_display.c
++++ b/drivers/gpu/drm/i915/intel_display.c
+@@ -2444,10 +2444,14 @@ static unsigned int intel_fb_modifier_to_tiling(u64 fb_modifier)
+  * main surface.
+  */
+ static const struct drm_format_info ccs_formats[] = {
+-	{ .format = DRM_FORMAT_XRGB8888, .depth = 24, .num_planes = 2, .cpp = { 4, 1, }, .hsub = 8, .vsub = 16, },
+-	{ .format = DRM_FORMAT_XBGR8888, .depth = 24, .num_planes = 2, .cpp = { 4, 1, }, .hsub = 8, .vsub = 16, },
+-	{ .format = DRM_FORMAT_ARGB8888, .depth = 32, .num_planes = 2, .cpp = { 4, 1, }, .hsub = 8, .vsub = 16, },
+-	{ .format = DRM_FORMAT_ABGR8888, .depth = 32, .num_planes = 2, .cpp = { 4, 1, }, .hsub = 8, .vsub = 16, },
++	{ .format = DRM_FORMAT_XRGB8888, .depth = 24, .num_planes = 2,
++	  .cpp = { 4, 1, }, .hsub = 8, .vsub = 16, },
++	{ .format = DRM_FORMAT_XBGR8888, .depth = 24, .num_planes = 2,
++	  .cpp = { 4, 1, }, .hsub = 8, .vsub = 16, },
++	{ .format = DRM_FORMAT_ARGB8888, .depth = 32, .num_planes = 2,
++	  .cpp = { 4, 1, }, .hsub = 8, .vsub = 16, .has_alpha = true, },
++	{ .format = DRM_FORMAT_ABGR8888, .depth = 32, .num_planes = 2,
++	  .cpp = { 4, 1, }, .hsub = 8, .vsub = 16, .has_alpha = true, },
+ };
+ 
+ static const struct drm_format_info *
+@@ -11757,7 +11761,7 @@ encoder_retry:
+ 	return 0;
+ }
+ 
+-static bool intel_fuzzy_clock_check(int clock1, int clock2)
++bool intel_fuzzy_clock_check(int clock1, int clock2)
+ {
+ 	int diff;
+ 
+diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
+index d5660ac1b0d6..18e17b422701 100644
+--- a/drivers/gpu/drm/i915/intel_drv.h
++++ b/drivers/gpu/drm/i915/intel_drv.h
+@@ -1707,6 +1707,7 @@ int vlv_force_pll_on(struct drm_i915_private *dev_priv, enum pipe pipe,
+ 		     const struct dpll *dpll);
+ void vlv_force_pll_off(struct drm_i915_private *dev_priv, enum pipe pipe);
+ int lpt_get_iclkip(struct drm_i915_private *dev_priv);
++bool intel_fuzzy_clock_check(int clock1, int clock2);
+ 
+ /* modesetting asserts */
+ void assert_panel_unlocked(struct drm_i915_private *dev_priv,
+diff --git a/drivers/gpu/drm/i915/intel_dsi_vbt.c b/drivers/gpu/drm/i915/intel_dsi_vbt.c
+index 06a11c35a784..2ec792ce49a5 100644
+--- a/drivers/gpu/drm/i915/intel_dsi_vbt.c
++++ b/drivers/gpu/drm/i915/intel_dsi_vbt.c
+@@ -871,6 +871,17 @@ bool intel_dsi_vbt_init(struct intel_dsi *intel_dsi, u16 panel_id)
+ 		if (mipi_config->target_burst_mode_freq) {
+ 			u32 bitrate = intel_dsi_bitrate(intel_dsi);
+ 
++			/*
++			 * Sometimes the VBT contains a slightly lower clock
++			 * than the bitrate we have calculated; in this case,
++			 * just replace it with the calculated bitrate.
++			 */
++			if (mipi_config->target_burst_mode_freq < bitrate &&
++			    intel_fuzzy_clock_check(
++					mipi_config->target_burst_mode_freq,
++					bitrate))
++				mipi_config->target_burst_mode_freq = bitrate;
++
+ 			if (mipi_config->target_burst_mode_freq < bitrate) {
+ 				DRM_ERROR("Burst mode freq is less than computed\n");
+ 				return false;
+diff --git a/drivers/gpu/drm/i915/intel_sdvo.c b/drivers/gpu/drm/i915/intel_sdvo.c
+index e7b0884ba5a5..366d5554e8e9 100644
+--- a/drivers/gpu/drm/i915/intel_sdvo.c
++++ b/drivers/gpu/drm/i915/intel_sdvo.c
+@@ -909,6 +909,13 @@ static bool intel_sdvo_set_colorimetry(struct intel_sdvo *intel_sdvo,
+ 	return intel_sdvo_set_value(intel_sdvo, SDVO_CMD_SET_COLORIMETRY, &mode, 1);
+ }
+ 
++static bool intel_sdvo_set_audio_state(struct intel_sdvo *intel_sdvo,
++				       u8 audio_state)
++{
++	return intel_sdvo_set_value(intel_sdvo, SDVO_CMD_SET_AUDIO_STAT,
++				    &audio_state, 1);
++}
++
+ #if 0
+ static void intel_sdvo_dump_hdmi_buf(struct intel_sdvo *intel_sdvo)
+ {
+@@ -1366,11 +1373,6 @@ static void intel_sdvo_pre_enable(struct intel_encoder *intel_encoder,
+ 	else
+ 		sdvox |= SDVO_PIPE_SEL(crtc->pipe);
+ 
+-	if (crtc_state->has_audio) {
+-		WARN_ON_ONCE(INTEL_GEN(dev_priv) < 4);
+-		sdvox |= SDVO_AUDIO_ENABLE;
+-	}
+-
+ 	if (INTEL_GEN(dev_priv) >= 4) {
+ 		/* done in crtc_mode_set as the dpll_md reg must be written early */
+ 	} else if (IS_I945G(dev_priv) || IS_I945GM(dev_priv) ||
+@@ -1510,8 +1512,13 @@ static void intel_sdvo_get_config(struct intel_encoder *encoder,
+ 	if (sdvox & HDMI_COLOR_RANGE_16_235)
+ 		pipe_config->limited_color_range = true;
+ 
+-	if (sdvox & SDVO_AUDIO_ENABLE)
+-		pipe_config->has_audio = true;
++	if (intel_sdvo_get_value(intel_sdvo, SDVO_CMD_GET_AUDIO_STAT,
++				 &val, 1)) {
++		u8 mask = SDVO_AUDIO_ELD_VALID | SDVO_AUDIO_PRESENCE_DETECT;
++
++		if ((val & mask) == mask)
++			pipe_config->has_audio = true;
++	}
+ 
+ 	if (intel_sdvo_get_value(intel_sdvo, SDVO_CMD_GET_ENCODE,
+ 				 &val, 1)) {
+@@ -1524,6 +1531,32 @@ static void intel_sdvo_get_config(struct intel_encoder *encoder,
+ 	     pipe_config->pixel_multiplier, encoder_pixel_multiplier);
+ }
+ 
++static void intel_sdvo_disable_audio(struct intel_sdvo *intel_sdvo)
++{
++	intel_sdvo_set_audio_state(intel_sdvo, 0);
++}
++
++static void intel_sdvo_enable_audio(struct intel_sdvo *intel_sdvo,
++				    const struct intel_crtc_state *crtc_state,
++				    const struct drm_connector_state *conn_state)
++{
++	const struct drm_display_mode *adjusted_mode =
++		&crtc_state->base.adjusted_mode;
++	struct drm_connector *connector = conn_state->connector;
++	u8 *eld = connector->eld;
++
++	eld[6] = drm_av_sync_delay(connector, adjusted_mode) / 2;
++
++	intel_sdvo_set_audio_state(intel_sdvo, 0);
++
++	intel_sdvo_write_infoframe(intel_sdvo, SDVO_HBUF_INDEX_ELD,
++				   SDVO_HBUF_TX_DISABLED,
++				   eld, drm_eld_size(eld));
++
++	intel_sdvo_set_audio_state(intel_sdvo, SDVO_AUDIO_ELD_VALID |
++				   SDVO_AUDIO_PRESENCE_DETECT);
++}
++
+ static void intel_disable_sdvo(struct intel_encoder *encoder,
+ 			       const struct intel_crtc_state *old_crtc_state,
+ 			       const struct drm_connector_state *conn_state)
+@@ -1533,6 +1566,9 @@ static void intel_disable_sdvo(struct intel_encoder *encoder,
+ 	struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->base.crtc);
+ 	u32 temp;
+ 
++	if (old_crtc_state->has_audio)
++		intel_sdvo_disable_audio(intel_sdvo);
++
+ 	intel_sdvo_set_active_outputs(intel_sdvo, 0);
+ 	if (0)
+ 		intel_sdvo_set_encoder_power_state(intel_sdvo,
+@@ -1618,6 +1654,9 @@ static void intel_enable_sdvo(struct intel_encoder *encoder,
+ 		intel_sdvo_set_encoder_power_state(intel_sdvo,
+ 						   DRM_MODE_DPMS_ON);
+ 	intel_sdvo_set_active_outputs(intel_sdvo, intel_sdvo->attached_output);
++
++	if (pipe_config->has_audio)
++		intel_sdvo_enable_audio(intel_sdvo, pipe_config, conn_state);
+ }
+ 
+ static enum drm_mode_status
+@@ -2480,7 +2519,6 @@ static bool
+ intel_sdvo_dvi_init(struct intel_sdvo *intel_sdvo, int device)
+ {
+ 	struct drm_encoder *encoder = &intel_sdvo->base.base;
+-	struct drm_i915_private *dev_priv = to_i915(encoder->dev);
+ 	struct drm_connector *connector;
+ 	struct intel_encoder *intel_encoder = to_intel_encoder(encoder);
+ 	struct intel_connector *intel_connector;
+@@ -2517,9 +2555,7 @@ intel_sdvo_dvi_init(struct intel_sdvo *intel_sdvo, int device)
+ 	encoder->encoder_type = DRM_MODE_ENCODER_TMDS;
+ 	connector->connector_type = DRM_MODE_CONNECTOR_DVID;
+ 
+-	/* gen3 doesn't do the hdmi bits in the SDVO register */
+-	if (INTEL_GEN(dev_priv) >= 4 &&
+-	    intel_sdvo_is_hdmi_connector(intel_sdvo, device)) {
++	if (intel_sdvo_is_hdmi_connector(intel_sdvo, device)) {
+ 		connector->connector_type = DRM_MODE_CONNECTOR_HDMIA;
+ 		intel_sdvo_connector->is_hdmi = true;
+ 	}
+diff --git a/drivers/gpu/drm/i915/intel_sdvo_regs.h b/drivers/gpu/drm/i915/intel_sdvo_regs.h
+index db0ed499268a..e9ba3b047f93 100644
+--- a/drivers/gpu/drm/i915/intel_sdvo_regs.h
++++ b/drivers/gpu/drm/i915/intel_sdvo_regs.h
+@@ -707,6 +707,9 @@ struct intel_sdvo_enhancements_arg {
+ #define SDVO_CMD_GET_AUDIO_ENCRYPT_PREFER 0x90
+ #define SDVO_CMD_SET_AUDIO_STAT		0x91
+ #define SDVO_CMD_GET_AUDIO_STAT		0x92
++  #define SDVO_AUDIO_ELD_VALID		(1 << 0)
++  #define SDVO_AUDIO_PRESENCE_DETECT	(1 << 1)
++  #define SDVO_AUDIO_CP_READY		(1 << 2)
+ #define SDVO_CMD_SET_HBUF_INDEX		0x93
+   #define SDVO_HBUF_INDEX_ELD		0
+   #define SDVO_HBUF_INDEX_AVI_IF	1
+diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
+index 00cd9ab8948d..db28012dbf54 100644
+--- a/drivers/gpu/drm/nouveau/Kconfig
++++ b/drivers/gpu/drm/nouveau/Kconfig
+@@ -17,10 +17,21 @@ config DRM_NOUVEAU
+ 	select INPUT if ACPI && X86
+ 	select THERMAL if ACPI && X86
+ 	select ACPI_VIDEO if ACPI && X86
+-	select DRM_VM
+ 	help
+ 	  Choose this option for open-source NVIDIA support.
+ 
++config NOUVEAU_LEGACY_CTX_SUPPORT
++	bool "Nouveau legacy context support"
++	depends on DRM_NOUVEAU
++	select DRM_VM
++	default y
++	help
++	  There was a version of the nouveau DDX that relied on legacy
++	  ctx ioctls not erroring out. But that was a long time ago, so
++	  offer a way to disable it now. For uapi compatibility with the
++	  old nouveau DDX this should be on by default, but modern distros
++	  should consider turning it off.
++
+ config NOUVEAU_PLATFORM_DRIVER
+ 	bool "Nouveau (NVIDIA) SoC GPUs"
+ 	depends on DRM_NOUVEAU && ARCH_TEGRA
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index 5020265bfbd9..6ab9033f49da 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -1094,8 +1094,11 @@ nouveau_driver_fops = {
+ static struct drm_driver
+ driver_stub = {
+ 	.driver_features =
+-		DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME | DRIVER_RENDER |
+-		DRIVER_KMS_LEGACY_CONTEXT,
++		DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME | DRIVER_RENDER
++#if defined(CONFIG_NOUVEAU_LEGACY_CTX_SUPPORT)
++		| DRIVER_KMS_LEGACY_CONTEXT
++#endif
++		,
+ 
+ 	.open = nouveau_drm_open,
+ 	.postclose = nouveau_drm_postclose,
+diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
+index 1543c2f8d3d3..05d513d54555 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
+@@ -169,7 +169,11 @@ nouveau_ttm_mmap(struct file *filp, struct vm_area_struct *vma)
+ 	struct nouveau_drm *drm = nouveau_drm(file_priv->minor->dev);
+ 
+ 	if (unlikely(vma->vm_pgoff < DRM_FILE_PAGE_OFFSET))
++#if defined(CONFIG_NOUVEAU_LEGACY_CTX_SUPPORT)
+ 		return drm_legacy_mmap(filp, vma);
++#else
++		return -EINVAL;
++#endif
+ 
+ 	return ttm_bo_mmap(filp, vma, &drm->ttm.bdev);
+ }
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+index c0231a817d3b..6dc7d90a12b4 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+@@ -2351,7 +2351,8 @@ static int vmw_cmd_dx_set_shader(struct vmw_private *dev_priv,
+ 
+ 	cmd = container_of(header, typeof(*cmd), header);
+ 
+-	if (cmd->body.type >= SVGA3D_SHADERTYPE_DX10_MAX) {
++	if (cmd->body.type >= SVGA3D_SHADERTYPE_DX10_MAX ||
++	    cmd->body.type < SVGA3D_SHADERTYPE_MIN) {
+ 		DRM_ERROR("Illegal shader type %u.\n",
+ 			  (unsigned) cmd->body.type);
+ 		return -EINVAL;
+@@ -2587,6 +2588,10 @@ static int vmw_cmd_dx_view_define(struct vmw_private *dev_priv,
+ 	if (view_type == vmw_view_max)
+ 		return -EINVAL;
+ 	cmd = container_of(header, typeof(*cmd), header);
++	if (unlikely(cmd->sid == SVGA3D_INVALID_ID)) {
++		DRM_ERROR("Invalid surface id.\n");
++		return -EINVAL;
++	}
+ 	ret = vmw_cmd_res_check(dev_priv, sw_context, vmw_res_surface,
+ 				user_surface_converter,
+ 				&cmd->sid, &srf);
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 63a43726cce0..39eba8106d40 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -1313,10 +1313,10 @@ static u32 __extract(u8 *report, unsigned offset, int n)
+ u32 hid_field_extract(const struct hid_device *hid, u8 *report,
+ 			unsigned offset, unsigned n)
+ {
+-	if (n > 256) {
+-		hid_warn(hid, "hid_field_extract() called with n (%d) > 256! (%s)\n",
++	if (n > 32) {
++		hid_warn(hid, "hid_field_extract() called with n (%d) > 32! (%s)\n",
+ 			 n, current->comm);
+-		n = 256;
++		n = 32;
+ 	}
+ 
+ 	return __extract(report, offset, n);
+@@ -1636,7 +1636,7 @@ static struct hid_report *hid_get_report(struct hid_report_enum *report_enum,
+  * Implement a generic .request() callback, using .raw_request()
+  * DO NOT USE in hid drivers directly, but through hid_hw_request instead.
+  */
+-void __hid_request(struct hid_device *hid, struct hid_report *report,
++int __hid_request(struct hid_device *hid, struct hid_report *report,
+ 		int reqtype)
+ {
+ 	char *buf;
+@@ -1645,7 +1645,7 @@ void __hid_request(struct hid_device *hid, struct hid_report *report,
+ 
+ 	buf = hid_alloc_report_buf(report, GFP_KERNEL);
+ 	if (!buf)
+-		return;
++		return -ENOMEM;
+ 
+ 	len = hid_report_len(report);
+ 
+@@ -1662,8 +1662,11 @@ void __hid_request(struct hid_device *hid, struct hid_report *report,
+ 	if (reqtype == HID_REQ_GET_REPORT)
+ 		hid_input_report(hid, report->type, buf, ret, 0);
+ 
++	ret = 0;
++
+ out:
+ 	kfree(buf);
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(__hid_request);
+ 
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index b607286a0bc8..46c6efea1404 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -1557,52 +1557,71 @@ static void hidinput_close(struct input_dev *dev)
+ 	hid_hw_close(hid);
+ }
+ 
+-static void hidinput_change_resolution_multipliers(struct hid_device *hid)
++static bool __hidinput_change_resolution_multipliers(struct hid_device *hid,
++		struct hid_report *report, bool use_logical_max)
+ {
+-	struct hid_report_enum *rep_enum;
+-	struct hid_report *rep;
+ 	struct hid_usage *usage;
++	bool update_needed = false;
+ 	int i, j;
+ 
+-	rep_enum = &hid->report_enum[HID_FEATURE_REPORT];
+-	list_for_each_entry(rep, &rep_enum->report_list, list) {
+-		bool update_needed = false;
++	if (report->maxfield == 0)
++		return false;
+ 
+-		if (rep->maxfield == 0)
+-			continue;
++	/*
++	 * If we have more than one feature within this report we
++	 * need to fill in the bits from the others before we can
++	 * overwrite the ones for the Resolution Multiplier.
++	 */
++	if (report->maxfield > 1) {
++		hid_hw_request(hid, report, HID_REQ_GET_REPORT);
++		hid_hw_wait(hid);
++	}
+ 
+-		/*
+-		 * If we have more than one feature within this report we
+-		 * need to fill in the bits from the others before we can
+-		 * overwrite the ones for the Resolution Multiplier.
++	for (i = 0; i < report->maxfield; i++) {
++		__s32 value = use_logical_max ?
++			      report->field[i]->logical_maximum :
++			      report->field[i]->logical_minimum;
++
++		/* There is no good reason for a Resolution
++		 * Multiplier to have a count other than 1.
++		 * Ignore that case.
+ 		 */
+-		if (rep->maxfield > 1) {
+-			hid_hw_request(hid, rep, HID_REQ_GET_REPORT);
+-			hid_hw_wait(hid);
+-		}
++		if (report->field[i]->report_count != 1)
++			continue;
+ 
+-		for (i = 0; i < rep->maxfield; i++) {
+-			__s32 logical_max = rep->field[i]->logical_maximum;
++		for (j = 0; j < report->field[i]->maxusage; j++) {
++			usage = &report->field[i]->usage[j];
+ 
+-			/* There is no good reason for a Resolution
+-			 * Multiplier to have a count other than 1.
+-			 * Ignore that case.
+-			 */
+-			if (rep->field[i]->report_count != 1)
++			if (usage->hid != HID_GD_RESOLUTION_MULTIPLIER)
+ 				continue;
+ 
+-			for (j = 0; j < rep->field[i]->maxusage; j++) {
+-				usage = &rep->field[i]->usage[j];
++			report->field[i]->value[j] = value;
++			update_needed = true;
++		}
++	}
++
++	return update_needed;
++}
+ 
+-				if (usage->hid != HID_GD_RESOLUTION_MULTIPLIER)
+-					continue;
++static void hidinput_change_resolution_multipliers(struct hid_device *hid)
++{
++	struct hid_report_enum *rep_enum;
++	struct hid_report *rep;
++	int ret;
+ 
+-				*rep->field[i]->value = logical_max;
+-				update_needed = true;
++	rep_enum = &hid->report_enum[HID_FEATURE_REPORT];
++	list_for_each_entry(rep, &rep_enum->report_list, list) {
++		bool update_needed = __hidinput_change_resolution_multipliers(hid,
++								     rep, true);
++
++		if (update_needed) {
++			ret = __hid_request(hid, rep, HID_REQ_SET_REPORT);
++			if (ret) {
++				__hidinput_change_resolution_multipliers(hid,
++								    rep, false);
++				return;
+ 			}
+ 		}
+-		if (update_needed)
+-			hid_hw_request(hid, rep, HID_REQ_SET_REPORT);
+ 	}
+ 
+ 	/* refresh our structs */
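
With __hid_request() now returning an error, the multiplier update becomes try-then-roll-back: write logical max, and if the SET_REPORT fails, rewrite the fields to logical min so the driver state matches what the device accepted. The pattern in isolation (apply() is a placeholder, not the HID API):

#include <stdbool.h>
#include <stdio.h>

static int apply(int value)
{
	printf("apply %d\n", value);
	return value > 1 ? -1 : 0;	/* pretend large values fail */
}

/* Try the preferred setting; on failure, restore the conservative one
 * so cached state never disagrees with what the device accepted. */
static bool set_with_rollback(int preferred, int fallback)
{
	if (apply(preferred) == 0)
		return true;
	apply(fallback);		/* best-effort rollback */
	return false;
}

int main(void)
{
	printf("ok=%d\n", set_with_rollback(2, 0));	/* rolls back, ok=0 */
	return 0;
}
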
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index c02d4cad1893..1565a307170a 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -641,6 +641,13 @@ static void mt_store_field(struct hid_device *hdev,
+ 	if (*target != DEFAULT_TRUE &&
+ 	    *target != DEFAULT_FALSE &&
+ 	    *target != DEFAULT_ZERO) {
++		if (usage->contactid == DEFAULT_ZERO ||
++		    usage->x == DEFAULT_ZERO ||
++		    usage->y == DEFAULT_ZERO) {
++			hid_dbg(hdev,
++				"ignoring duplicate usage on incomplete");
++			return;
++		}
+ 		usage = mt_allocate_usage(hdev, application);
+ 		if (!usage)
+ 			return;
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 747730d32ab6..09b8e4aac82f 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -1236,13 +1236,13 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
+ 		/* Add back in missing bits of ID for non-USI pens */
+ 		wacom->id[0] |= (wacom->serial[0] >> 32) & 0xFFFFF;
+ 	}
+-	wacom->tool[0]   = wacom_intuos_get_tool_type(wacom_intuos_id_mangle(wacom->id[0]));
+ 
+ 	for (i = 0; i < pen_frames; i++) {
+ 		unsigned char *frame = &data[i*pen_frame_len + 1];
+ 		bool valid = frame[0] & 0x80;
+ 		bool prox = frame[0] & 0x40;
+ 		bool range = frame[0] & 0x20;
++		bool invert = frame[0] & 0x10;
+ 
+ 		if (!valid)
+ 			continue;
+@@ -1251,9 +1251,24 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
+ 			wacom->shared->stylus_in_proximity = false;
+ 			wacom_exit_report(wacom);
+ 			input_sync(pen_input);
++
++			wacom->tool[0] = 0;
++			wacom->id[0] = 0;
++			wacom->serial[0] = 0;
+ 			return;
+ 		}
++
+ 		if (range) {
++			if (!wacom->tool[0]) { /* first in range */
++				/* Going into range select tool */
++				/* Going into range: select the tool */
++					wacom->tool[0] = BTN_TOOL_RUBBER;
++				else if (wacom->id[0])
++					wacom->tool[0] = wacom_intuos_get_tool_type(wacom->id[0]);
++				else
++					wacom->tool[0] = BTN_TOOL_PEN;
++			}
++
+ 			input_report_abs(pen_input, ABS_X, get_unaligned_le16(&frame[1]));
+ 			input_report_abs(pen_input, ABS_Y, get_unaligned_le16(&frame[3]));
+ 
+@@ -1275,23 +1290,26 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
+ 						 get_unaligned_le16(&frame[11]));
+ 			}
+ 		}
+-		input_report_abs(pen_input, ABS_PRESSURE, get_unaligned_le16(&frame[5]));
+-		if (wacom->features.type == INTUOSP2_BT) {
+-			input_report_abs(pen_input, ABS_DISTANCE,
+-					 range ? frame[13] : wacom->features.distance_max);
+-		} else {
+-			input_report_abs(pen_input, ABS_DISTANCE,
+-					 range ? frame[7] : wacom->features.distance_max);
+-		}
+ 
+-		input_report_key(pen_input, BTN_TOUCH, frame[0] & 0x01);
+-		input_report_key(pen_input, BTN_STYLUS, frame[0] & 0x02);
+-		input_report_key(pen_input, BTN_STYLUS2, frame[0] & 0x04);
++		if (wacom->tool[0]) {
++			input_report_abs(pen_input, ABS_PRESSURE, get_unaligned_le16(&frame[5]));
++			if (wacom->features.type == INTUOSP2_BT) {
++				input_report_abs(pen_input, ABS_DISTANCE,
++						 range ? frame[13] : wacom->features.distance_max);
++			} else {
++				input_report_abs(pen_input, ABS_DISTANCE,
++						 range ? frame[7] : wacom->features.distance_max);
++			}
+ 
+-		input_report_key(pen_input, wacom->tool[0], prox);
+-		input_event(pen_input, EV_MSC, MSC_SERIAL, wacom->serial[0]);
+-		input_report_abs(pen_input, ABS_MISC,
+-				 wacom_intuos_id_mangle(wacom->id[0])); /* report tool id */
++			input_report_key(pen_input, BTN_TOUCH, frame[0] & 0x09);
++			input_report_key(pen_input, BTN_STYLUS, frame[0] & 0x02);
++			input_report_key(pen_input, BTN_STYLUS2, frame[0] & 0x04);
++
++			input_report_key(pen_input, wacom->tool[0], prox);
++			input_event(pen_input, EV_MSC, MSC_SERIAL, wacom->serial[0]);
++			input_report_abs(pen_input, ABS_MISC,
++					 wacom_intuos_id_mangle(wacom->id[0])); /* report tool id */
++		}
+ 
+ 		wacom->shared->stylus_in_proximity = prox;
+ 
+@@ -1353,11 +1371,17 @@ static void wacom_intuos_pro2_bt_touch(struct wacom_wac *wacom)
+ 		if (wacom->num_contacts_left <= 0) {
+ 			wacom->num_contacts_left = 0;
+ 			wacom->shared->touch_down = wacom_wac_finger_count_touches(wacom);
++			input_sync(touch_input);
+ 		}
+ 	}
+ 
+-	input_report_switch(touch_input, SW_MUTE_DEVICE, !(data[281] >> 7));
+-	input_sync(touch_input);
++	if (wacom->num_contacts_left == 0) {
++		// Be careful that we don't accidentally call input_sync with
++		// only a partial set of fingers processed
++		input_report_switch(touch_input, SW_MUTE_DEVICE, !(data[281] >> 7));
++		input_sync(touch_input);
++	}
++
+ }
+ 
+ static void wacom_intuos_pro2_bt_pad(struct wacom_wac *wacom)
+@@ -1365,7 +1389,7 @@ static void wacom_intuos_pro2_bt_pad(struct wacom_wac *wacom)
+ 	struct input_dev *pad_input = wacom->pad_input;
+ 	unsigned char *data = wacom->data;
+ 
+-	int buttons = (data[282] << 1) | ((data[281] >> 6) & 0x01);
++	int buttons = data[282] | ((data[281] & 0x40) << 2);
+ 	int ring = data[285] & 0x7F;
+ 	bool ringstatus = data[285] & 0x80;
+ 	bool prox = buttons || ringstatus;
+@@ -3814,7 +3838,7 @@ static void wacom_24hd_update_leds(struct wacom *wacom, int mask, int group)
+ static bool wacom_is_led_toggled(struct wacom *wacom, int button_count,
+ 				 int mask, int group)
+ {
+-	int button_per_group;
++	int group_button;
+ 
+ 	/*
+ 	 * 21UX2 has LED group 1 to the left and LED group 0
+@@ -3824,9 +3848,12 @@ static bool wacom_is_led_toggled(struct wacom *wacom, int button_count,
+ 	if (wacom->wacom_wac.features.type == WACOM_21UX2)
+ 		group = 1 - group;
+ 
+-	button_per_group = button_count/wacom->led.count;
++	group_button = group * (button_count/wacom->led.count);
++
++	if (wacom->wacom_wac.features.type == INTUOSP2_BT)
++		group_button = 8;
+ 
+-	return mask & (1 << (group * button_per_group));
++	return mask & (1 << group_button);
+ }
+ 
+ static void wacom_update_led(struct wacom *wacom, int button_count, int mask,
+diff --git a/drivers/i2c/busses/i2c-acorn.c b/drivers/i2c/busses/i2c-acorn.c
+index f4a5ae69bf6a..fa3763e4b3ee 100644
+--- a/drivers/i2c/busses/i2c-acorn.c
++++ b/drivers/i2c/busses/i2c-acorn.c
+@@ -81,6 +81,7 @@ static struct i2c_algo_bit_data ioc_data = {
+ 
+ static struct i2c_adapter ioc_ops = {
+ 	.nr			= 0,
++	.name			= "ioc",
+ 	.algo_data		= &ioc_data,
+ };
+ 
+diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
+index 045d93884164..7e63a15fa724 100644
+--- a/drivers/iommu/arm-smmu.c
++++ b/drivers/iommu/arm-smmu.c
+@@ -59,6 +59,15 @@
+ 
+ #include "arm-smmu-regs.h"
+ 
++/*
++ * Apparently, some Qualcomm arm64 platforms which appear to expose their SMMU
++ * global register space are still, in fact, using a hypervisor to mediate it
++ * by trapping and emulating register accesses. Sadly, some deployed versions
++ * of said trapping code have bugs wherein they go horribly wrong for stores
++ * using r31 (i.e. XZR/WZR) as the source register.
++ */
++#define QCOM_DUMMY_VAL -1
++
+ #define ARM_MMU500_ACTLR_CPRE		(1 << 1)
+ 
+ #define ARM_MMU500_ACR_CACHE_LOCK	(1 << 26)
+@@ -422,7 +431,7 @@ static void __arm_smmu_tlb_sync(struct arm_smmu_device *smmu,
+ {
+ 	unsigned int spin_cnt, delay;
+ 
+-	writel_relaxed(0, sync);
++	writel_relaxed(QCOM_DUMMY_VAL, sync);
+ 	for (delay = 1; delay < TLB_LOOP_TIMEOUT; delay *= 2) {
+ 		for (spin_cnt = TLB_SPIN_COUNT; spin_cnt > 0; spin_cnt--) {
+ 			if (!(readl_relaxed(status) & sTLBGSTATUS_GSACTIVE))
+@@ -1760,8 +1769,8 @@ static void arm_smmu_device_reset(struct arm_smmu_device *smmu)
+ 	}
+ 
+ 	/* Invalidate the TLB, just in case */
+-	writel_relaxed(0, gr0_base + ARM_SMMU_GR0_TLBIALLH);
+-	writel_relaxed(0, gr0_base + ARM_SMMU_GR0_TLBIALLNSNH);
++	writel_relaxed(QCOM_DUMMY_VAL, gr0_base + ARM_SMMU_GR0_TLBIALLH);
++	writel_relaxed(QCOM_DUMMY_VAL, gr0_base + ARM_SMMU_GR0_TLBIALLNSNH);
+ 
+ 	reg = readl_relaxed(ARM_SMMU_GR0_NS(smmu) + ARM_SMMU_GR0_sCR0);
+ 
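
The fix itself is only a sentinel: writel_relaxed(0, ...) lets arm64 codegen pick the zero register (XZR/WZR) as the store source, which the broken trap code corrupts, so every ignored-value write now uses a non-zero constant. As a kernel-side fragment (writel_relaxed and __iomem are the real kernel interfaces; the function name is a placeholder):

#include <linux/io.h>	/* fragment: kernel-only */

#define QCOM_DUMMY_VAL -1	/* any non-zero value will do */

static void demo_tlb_sync_trigger(void __iomem *sync_reg)
{
	/* The device ignores the written value, but a literal 0 can be
	 * compiled to "str wzr, [xN]" on arm64, which the buggy
	 * trap-and-emulate code mishandles; -1 forces a real GPR. */
	writel_relaxed(QCOM_DUMMY_VAL, sync_reg);
}
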
+diff --git a/drivers/md/bcache/bset.c b/drivers/md/bcache/bset.c
+index 8f07fa6e1739..268f1b685084 100644
+--- a/drivers/md/bcache/bset.c
++++ b/drivers/md/bcache/bset.c
+@@ -887,12 +887,22 @@ unsigned int bch_btree_insert_key(struct btree_keys *b, struct bkey *k,
+ 	struct bset *i = bset_tree_last(b)->data;
+ 	struct bkey *m, *prev = NULL;
+ 	struct btree_iter iter;
++	struct bkey preceding_key_on_stack = ZERO_KEY;
++	struct bkey *preceding_key_p = &preceding_key_on_stack;
+ 
+ 	BUG_ON(b->ops->is_extents && !KEY_SIZE(k));
+ 
+-	m = bch_btree_iter_init(b, &iter, b->ops->is_extents
+-				? PRECEDING_KEY(&START_KEY(k))
+-				: PRECEDING_KEY(k));
++	/*
++	 * If k has a preceding key, preceding_key_p will be set to the
++	 * address of k's preceding key; otherwise preceding_key_p will
++	 * be set to NULL inside preceding_key().
++	 */
++	if (b->ops->is_extents)
++		preceding_key(&START_KEY(k), &preceding_key_p);
++	else
++		preceding_key(k, &preceding_key_p);
++
++	m = bch_btree_iter_init(b, &iter, preceding_key_p);
+ 
+ 	if (b->ops->insert_fixup(b, k, &iter, replace_key))
+ 		return status;
+diff --git a/drivers/md/bcache/bset.h b/drivers/md/bcache/bset.h
+index bac76aabca6d..c71365e7c1fa 100644
+--- a/drivers/md/bcache/bset.h
++++ b/drivers/md/bcache/bset.h
+@@ -434,20 +434,26 @@ static inline bool bch_cut_back(const struct bkey *where, struct bkey *k)
+ 	return __bch_cut_back(where, k);
+ }
+ 
+-#define PRECEDING_KEY(_k)					\
+-({								\
+-	struct bkey *_ret = NULL;				\
+-								\
+-	if (KEY_INODE(_k) || KEY_OFFSET(_k)) {			\
+-		_ret = &KEY(KEY_INODE(_k), KEY_OFFSET(_k), 0);	\
+-								\
+-		if (!_ret->low)					\
+-			_ret->high--;				\
+-		_ret->low--;					\
+-	}							\
+-								\
+-	_ret;							\
+-})
++/*
++ * Pointer '*preceding_key_p' points to a memory object that stores the
++ * preceding key of k. If the preceding key does not exist, '*preceding_key_p'
++ * is set to NULL. The caller of preceding_key() therefore needs to take care
++ * of the memory that '*preceding_key_p' points to before calling
++ * preceding_key(). Currently the only caller is bch_btree_insert_key(),
++ * which points it at an on-stack variable, so the memory release is handled
++ * by the stack frame itself.
++ */
++static inline void preceding_key(struct bkey *k, struct bkey **preceding_key_p)
++{
++	if (KEY_INODE(k) || KEY_OFFSET(k)) {
++		(**preceding_key_p) = KEY(KEY_INODE(k), KEY_OFFSET(k), 0);
++		if (!(*preceding_key_p)->low)
++			(*preceding_key_p)->high--;
++		(*preceding_key_p)->low--;
++	} else {
++		(*preceding_key_p) = NULL;
++	}
++}
+ 
+ static inline bool bch_ptr_invalid(struct btree_keys *b, const struct bkey *k)
+ {
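
PRECEDING_KEY was a GNU statement expression returning the address of a compound literal created inside it; that object's lifetime ends with the statement, so the returned pointer dangled (stack corruption in practice). The bug and the fix, distilled (the struct and field names are illustrative):

#include <stdio.h>

struct key { long hi, lo; };

/* BROKEN (the old PRECEDING_KEY shape): a GNU statement expression
 * returning the address of a compound literal created inside it; the
 * object dies when the statement ends, so the pointer dangles. */
#define PRECEDING_KEY_BAD(inode)				\
({								\
	struct key *_ret = NULL;				\
	if (inode)						\
		_ret = &(struct key){ .hi = (inode), .lo = -1 };\
	_ret;							\
})

/* FIXED shape: the caller owns the storage; the helper fills it in or
 * sets the out-pointer to NULL - no hidden temporary to outlive. */
static void preceding_key(long inode, struct key *buf, struct key **out)
{
	if (inode) {
		buf->hi = inode;
		buf->lo = -1;
		*out = buf;
	} else {
		*out = NULL;
	}
}

int main(void)
{
	struct key on_stack;
	struct key *p;

	preceding_key(42, &on_stack, &p);
	if (p)
		printf("%ld %ld\n", p->hi, p->lo);	/* 42 -1 */
	return 0;
}
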
+diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
+index 17bae9c14ca0..eb42dcf52277 100644
+--- a/drivers/md/bcache/sysfs.c
++++ b/drivers/md/bcache/sysfs.c
+@@ -431,8 +431,13 @@ STORE(bch_cached_dev)
+ 			bch_writeback_queue(dc);
+ 	}
+ 
++	/*
++	 * Only set BCACHE_DEV_WB_RUNNING when the cached device is
++	 * attached to a cache set; otherwise it doesn't make sense.
++	 */
+ 	if (attr == &sysfs_writeback_percent)
+-		if (!test_and_set_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags))
++		if ((dc->disk.c != NULL) &&
++		    (!test_and_set_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags)))
+ 			schedule_delayed_work(&dc->writeback_rate_update,
+ 				      dc->writeback_rate_update_seconds * HZ);
+ 
+diff --git a/drivers/media/dvb-core/dvb_frontend.c b/drivers/media/dvb-core/dvb_frontend.c
+index fbdb4ecc7c50..7402c9834189 100644
+--- a/drivers/media/dvb-core/dvb_frontend.c
++++ b/drivers/media/dvb-core/dvb_frontend.c
+@@ -917,7 +917,7 @@ static void dvb_frontend_get_frequency_limits(struct dvb_frontend *fe,
+ 			 "DVB: adapter %i frontend %u frequency limits undefined - fix the driver\n",
+ 			 fe->dvb->num, fe->id);
+ 
+-	dprintk("frequency interval: tuner: %u...%u, frontend: %u...%u",
++	dev_dbg(fe->dvb->device, "frequency interval: tuner: %u...%u, frontend: %u...%u",
+ 		tuner_min, tuner_max, frontend_min, frontend_max);
+ 
+ 	/* If the standard is for satellite, convert frequencies to kHz */
+diff --git a/drivers/misc/kgdbts.c b/drivers/misc/kgdbts.c
+index de20bdaa148d..8b01257783dd 100644
+--- a/drivers/misc/kgdbts.c
++++ b/drivers/misc/kgdbts.c
+@@ -1135,7 +1135,7 @@ static void kgdbts_put_char(u8 chr)
+ static int param_set_kgdbts_var(const char *kmessage,
+ 				const struct kernel_param *kp)
+ {
+-	int len = strlen(kmessage);
++	size_t len = strlen(kmessage);
+ 
+ 	if (len >= MAX_CONFIG_LEN) {
+ 		printk(KERN_ERR "kgdbts: config string too long\n");
+@@ -1155,7 +1155,7 @@ static int param_set_kgdbts_var(const char *kmessage,
+ 
+ 	strcpy(config, kmessage);
+ 	/* Chop out \n char as a result of echo */
+-	if (config[len - 1] == '\n')
++	if (len && config[len - 1] == '\n')
+ 		config[len - 1] = '\0';
+ 
+ 	/* Go and configure with the new params. */
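
Two things change in param_set_kgdbts_var(): len becomes size_t, matching strlen(), and the newline chop is guarded so an empty string cannot make len - 1 wrap around to SIZE_MAX and index out of bounds. Demonstrable standalone:

#include <stdio.h>
#include <string.h>

static void chop_newline(char *s)
{
	size_t len = strlen(s);	/* strlen() returns size_t, not int */

	/* Without the "len &&" guard, an empty string makes len - 1 wrap
	 * around (size_t is unsigned) and index far out of bounds. */
	if (len && s[len - 1] == '\n')
		s[len - 1] = '\0';
}

int main(void)
{
	char a[] = "hello\n", b[] = "";

	chop_newline(a);
	chop_newline(b);		/* safe: the guard short-circuits */
	printf("[%s] [%s]\n", a, b);	/* [hello] [] */
	return 0;
}
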
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index a6535e226d84..d005ed12b4d1 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -3377,7 +3377,7 @@ static int macb_clk_init(struct platform_device *pdev, struct clk **pclk,
+ 		if (!err)
+ 			err = -ENODEV;
+ 
+-		dev_err(&pdev->dev, "failed to get macb_clk (%u)\n", err);
++		dev_err(&pdev->dev, "failed to get macb_clk (%d)\n", err);
+ 		return err;
+ 	}
+ 
+@@ -3386,7 +3386,7 @@ static int macb_clk_init(struct platform_device *pdev, struct clk **pclk,
+ 		if (!err)
+ 			err = -ENODEV;
+ 
+-		dev_err(&pdev->dev, "failed to get hclk (%u)\n", err);
++		dev_err(&pdev->dev, "failed to get hclk (%d)\n", err);
+ 		return err;
+ 	}
+ 
+@@ -3404,31 +3404,31 @@ static int macb_clk_init(struct platform_device *pdev, struct clk **pclk,
+ 
+ 	err = clk_prepare_enable(*pclk);
+ 	if (err) {
+-		dev_err(&pdev->dev, "failed to enable pclk (%u)\n", err);
++		dev_err(&pdev->dev, "failed to enable pclk (%d)\n", err);
+ 		return err;
+ 	}
+ 
+ 	err = clk_prepare_enable(*hclk);
+ 	if (err) {
+-		dev_err(&pdev->dev, "failed to enable hclk (%u)\n", err);
++		dev_err(&pdev->dev, "failed to enable hclk (%d)\n", err);
+ 		goto err_disable_pclk;
+ 	}
+ 
+ 	err = clk_prepare_enable(*tx_clk);
+ 	if (err) {
+-		dev_err(&pdev->dev, "failed to enable tx_clk (%u)\n", err);
++		dev_err(&pdev->dev, "failed to enable tx_clk (%d)\n", err);
+ 		goto err_disable_hclk;
+ 	}
+ 
+ 	err = clk_prepare_enable(*rx_clk);
+ 	if (err) {
+-		dev_err(&pdev->dev, "failed to enable rx_clk (%u)\n", err);
++		dev_err(&pdev->dev, "failed to enable rx_clk (%d)\n", err);
+ 		goto err_disable_txclk;
+ 	}
+ 
+ 	err = clk_prepare_enable(*tsu_clk);
+ 	if (err) {
+-		dev_err(&pdev->dev, "failed to enable tsu_clk (%u)\n", err);
++		dev_err(&pdev->dev, "failed to enable tsu_clk (%d)\n", err);
+ 		goto err_disable_rxclk;
+ 	}
+ 
+@@ -3902,7 +3902,7 @@ static int at91ether_clk_init(struct platform_device *pdev, struct clk **pclk,
+ 
+ 	err = clk_prepare_enable(*pclk);
+ 	if (err) {
+-		dev_err(&pdev->dev, "failed to enable pclk (%u)\n", err);
++		dev_err(&pdev->dev, "failed to enable pclk (%d)\n", err);
+ 		return err;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
+index 5bb9eb35d76d..491475d87736 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc.c
+@@ -313,7 +313,9 @@ static bool enetc_clean_tx_ring(struct enetc_bdr *tx_ring, int napi_budget)
+ 	while (bds_to_clean && tx_frm_cnt < ENETC_DEFAULT_TX_WORK) {
+ 		bool is_eof = !!tx_swbd->skb;
+ 
+-		enetc_unmap_tx_buff(tx_ring, tx_swbd);
++		if (likely(tx_swbd->dma))
++			enetc_unmap_tx_buff(tx_ring, tx_swbd);
++
+ 		if (is_eof) {
+ 			napi_consume_skb(tx_swbd->skb, napi_budget);
+ 			tx_swbd->skb = NULL;
+diff --git a/drivers/net/usb/ipheth.c b/drivers/net/usb/ipheth.c
+index 3d8a70d3ea9b..3d71f1716390 100644
+--- a/drivers/net/usb/ipheth.c
++++ b/drivers/net/usb/ipheth.c
+@@ -437,17 +437,18 @@ static int ipheth_tx(struct sk_buff *skb, struct net_device *net)
+ 			  dev);
+ 	dev->tx_urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
+ 
++	netif_stop_queue(net);
+ 	retval = usb_submit_urb(dev->tx_urb, GFP_ATOMIC);
+ 	if (retval) {
+ 		dev_err(&dev->intf->dev, "%s: usb_submit_urb: %d\n",
+ 			__func__, retval);
+ 		dev->net->stats.tx_errors++;
+ 		dev_kfree_skb_any(skb);
++		netif_wake_queue(net);
+ 	} else {
+ 		dev->net->stats.tx_packets++;
+ 		dev->net->stats.tx_bytes += skb->len;
+ 		dev_consume_skb_any(skb);
+-		netif_stop_queue(net);
+ 	}
+ 
+ 	return NETDEV_TX_OK;
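
The ipheth reorder closes a wake/stop race: the URB completion handler wakes the queue, so stopping the queue only after a successful submit could be overtaken by that completion, leaving the queue stopped with nothing left to wake it. In outline (netif_* are the real net-stack calls; the rest is schematic):

/*
 * Before:                            After:
 *   usb_submit_urb(...)                netif_stop_queue(net);
 *   if (ok)                            usb_submit_urb(...);
 *       netif_stop_queue(net);         if (failed)
 *                                          netif_wake_queue(net);
 *
 * With the old order, the URB could complete (and the completion
 * handler call netif_wake_queue()) before the submit path reached
 * netif_stop_queue(), wedging the queue permanently.
 */
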
+diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
+index 7bbff0af29b2..99d5892ea98a 100644
+--- a/drivers/nvdimm/bus.c
++++ b/drivers/nvdimm/bus.c
+@@ -642,7 +642,7 @@ static struct attribute *nd_device_attributes[] = {
+ 	NULL,
+ };
+ 
+-/**
++/*
+  * nd_device_attribute_group - generic attributes for all devices on an nd bus
+  */
+ struct attribute_group nd_device_attribute_group = {
+@@ -671,7 +671,7 @@ static umode_t nd_numa_attr_visible(struct kobject *kobj, struct attribute *a,
+ 	return a->mode;
+ }
+ 
+-/**
++/*
+  * nd_numa_attribute_group - NUMA attributes for all devices on an nd bus
+  */
+ struct attribute_group nd_numa_attribute_group = {
+diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
+index 2030805aa216..edf278067e72 100644
+--- a/drivers/nvdimm/label.c
++++ b/drivers/nvdimm/label.c
+@@ -25,6 +25,8 @@ static guid_t nvdimm_btt2_guid;
+ static guid_t nvdimm_pfn_guid;
+ static guid_t nvdimm_dax_guid;
+ 
++static const char NSINDEX_SIGNATURE[] = "NAMESPACE_INDEX\0";
++
+ static u32 best_seq(u32 a, u32 b)
+ {
+ 	a &= NSINDEX_SEQ_MASK;
+diff --git a/drivers/nvdimm/label.h b/drivers/nvdimm/label.h
+index e9a2ad3c2150..4bb7add39580 100644
+--- a/drivers/nvdimm/label.h
++++ b/drivers/nvdimm/label.h
+@@ -38,8 +38,6 @@ enum {
+ 	ND_NSINDEX_INIT = 0x1,
+ };
+ 
+-static const char NSINDEX_SIGNATURE[] = "NAMESPACE_INDEX\0";
+-
+ /**
+  * struct nd_namespace_index - label set superblock
+  * @sig: NAMESPACE_INDEX\0
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 8782d86a8ca3..35d2202ee2fd 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1362,9 +1362,14 @@ static struct nvme_ns *nvme_get_ns_from_disk(struct gendisk *disk,
+ {
+ #ifdef CONFIG_NVME_MULTIPATH
+ 	if (disk->fops == &nvme_ns_head_ops) {
++		struct nvme_ns *ns;
++
+ 		*head = disk->private_data;
+ 		*srcu_idx = srcu_read_lock(&(*head)->srcu);
+-		return nvme_find_path(*head);
++		ns = nvme_find_path(*head);
++		if (!ns)
++			srcu_read_unlock(&(*head)->srcu, *srcu_idx);
++		return ns;
+ 	}
+ #endif
+ 	*head = NULL;
+@@ -1378,42 +1383,56 @@ static void nvme_put_ns_from_disk(struct nvme_ns_head *head, int idx)
+ 		srcu_read_unlock(&head->srcu, idx);
+ }
+ 
+-static int nvme_ns_ioctl(struct nvme_ns *ns, unsigned cmd, unsigned long arg)
++static int nvme_ioctl(struct block_device *bdev, fmode_t mode,
++		unsigned int cmd, unsigned long arg)
+ {
++	struct nvme_ns_head *head = NULL;
++	void __user *argp = (void __user *)arg;
++	struct nvme_ns *ns;
++	int srcu_idx, ret;
++
++	ns = nvme_get_ns_from_disk(bdev->bd_disk, &head, &srcu_idx);
++	if (unlikely(!ns))
++		return -EWOULDBLOCK;
++
++	/*
++	 * Handle ioctls that apply to the controller instead of the namespace
++	 * separately and drop the ns SRCU reference early.  This avoids a
++	 * deadlock when deleting namespaces using the passthrough interface.
++	 */
++	if (cmd == NVME_IOCTL_ADMIN_CMD || is_sed_ioctl(cmd)) {
++		struct nvme_ctrl *ctrl = ns->ctrl;
++
++		nvme_get_ctrl(ns->ctrl);
++		nvme_put_ns_from_disk(head, srcu_idx);
++
++		if (cmd == NVME_IOCTL_ADMIN_CMD)
++			ret = nvme_user_cmd(ctrl, NULL, argp);
++		else
++			ret = sed_ioctl(ctrl->opal_dev, cmd, argp);
++
++		nvme_put_ctrl(ctrl);
++		return ret;
++	}
++
+ 	switch (cmd) {
+ 	case NVME_IOCTL_ID:
+ 		force_successful_syscall_return();
+-		return ns->head->ns_id;
+-	case NVME_IOCTL_ADMIN_CMD:
+-		return nvme_user_cmd(ns->ctrl, NULL, (void __user *)arg);
++		ret = ns->head->ns_id;
++		break;
+ 	case NVME_IOCTL_IO_CMD:
+-		return nvme_user_cmd(ns->ctrl, ns, (void __user *)arg);
++		ret = nvme_user_cmd(ns->ctrl, ns, argp);
++		break;
+ 	case NVME_IOCTL_SUBMIT_IO:
+-		return nvme_submit_io(ns, (void __user *)arg);
++		ret = nvme_submit_io(ns, argp);
++		break;
+ 	default:
+-#ifdef CONFIG_NVM
+ 		if (ns->ndev)
+-			return nvme_nvm_ioctl(ns, cmd, arg);
+-#endif
+-		if (is_sed_ioctl(cmd))
+-			return sed_ioctl(ns->ctrl->opal_dev, cmd,
+-					 (void __user *) arg);
+-		return -ENOTTY;
++			ret = nvme_nvm_ioctl(ns, cmd, arg);
++		else
++			ret = -ENOTTY;
+ 	}
+-}
+ 
+-static int nvme_ioctl(struct block_device *bdev, fmode_t mode,
+-		unsigned int cmd, unsigned long arg)
+-{
+-	struct nvme_ns_head *head = NULL;
+-	struct nvme_ns *ns;
+-	int srcu_idx, ret;
+-
+-	ns = nvme_get_ns_from_disk(bdev->bd_disk, &head, &srcu_idx);
+-	if (unlikely(!ns))
+-		ret = -EWOULDBLOCK;
+-	else
+-		ret = nvme_ns_ioctl(ns, cmd, arg);
+ 	nvme_put_ns_from_disk(head, srcu_idx);
+ 	return ret;
+ }
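The refactored nvme_ioctl() above relies on nvme_get_ns_from_disk() dropping the SRCU read lock itself when no path is found, so every return path is balanced; controller-scoped commands additionally take a controller reference and release the namespace early. A userspace analogue of the "failed lookup releases the lock" contract, with a pthread rwlock standing in for SRCU (names invented for the sketch):

```c
/* Lookup takes the read lock; on failure it drops the lock itself, so
 * the caller never has to unlock after a NULL return. */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
static int *current_obj;	/* may be NULL */

/* Success: returns the object with the read lock held.
 * Failure: returns NULL with the lock already released. */
static int *get_obj(void)
{
	pthread_rwlock_rdlock(&lock);
	if (!current_obj) {
		pthread_rwlock_unlock(&lock);	/* balance on failure */
		return NULL;
	}
	return current_obj;
}

static void put_obj(void) { pthread_rwlock_unlock(&lock); }

int main(void)
{
	int value = 42;

	current_obj = &value;
	int *o = get_obj();
	if (o) {
		printf("got %d\n", *o);
		put_obj();
	}
	return 0;
}
```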
+@@ -3680,6 +3699,7 @@ EXPORT_SYMBOL_GPL(nvme_start_ctrl);
+ 
+ void nvme_uninit_ctrl(struct nvme_ctrl *ctrl)
+ {
++	dev_pm_qos_hide_latency_tolerance(ctrl->device);
+ 	cdev_device_del(&ctrl->cdev, ctrl->device);
+ }
+ EXPORT_SYMBOL_GPL(nvme_uninit_ctrl);
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 372d3f4a106a..693f2a856200 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -500,7 +500,7 @@ static int nvme_pci_map_queues(struct blk_mq_tag_set *set)
+ 		 * affinity), so use the regular blk-mq cpu mapping
+ 		 */
+ 		map->queue_offset = qoff;
+-		if (i != HCTX_TYPE_POLL)
++		if (i != HCTX_TYPE_POLL && offset)
+ 			blk_mq_pci_map_queues(map, to_pci_dev(dev->dev), offset);
+ 		else
+ 			blk_mq_map_queues(map);
+@@ -2400,7 +2400,7 @@ static void nvme_pci_disable(struct nvme_dev *dev)
+ 
+ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
+ {
+-	bool dead = true;
++	bool dead = true, freeze = false;
+ 	struct pci_dev *pdev = to_pci_dev(dev->dev);
+ 
+ 	mutex_lock(&dev->shutdown_lock);
+@@ -2408,8 +2408,10 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
+ 		u32 csts = readl(dev->bar + NVME_REG_CSTS);
+ 
+ 		if (dev->ctrl.state == NVME_CTRL_LIVE ||
+-		    dev->ctrl.state == NVME_CTRL_RESETTING)
++		    dev->ctrl.state == NVME_CTRL_RESETTING) {
++			freeze = true;
+ 			nvme_start_freeze(&dev->ctrl);
++		}
+ 		dead = !!((csts & NVME_CSTS_CFS) || !(csts & NVME_CSTS_RDY) ||
+ 			pdev->error_state  != pci_channel_io_normal);
+ 	}
+@@ -2418,10 +2420,8 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
+ 	 * Give the controller a chance to complete all entered requests if
+ 	 * doing a safe shutdown.
+ 	 */
+-	if (!dead) {
+-		if (shutdown)
+-			nvme_wait_freeze_timeout(&dev->ctrl, NVME_IO_TIMEOUT);
+-	}
++	if (!dead && shutdown && freeze)
++		nvme_wait_freeze_timeout(&dev->ctrl, NVME_IO_TIMEOUT);
+ 
+ 	nvme_stop_queues(&dev->ctrl);
+ 
+diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
+index 7cb766dafe85..e120f933412a 100644
+--- a/drivers/perf/arm_spe_pmu.c
++++ b/drivers/perf/arm_spe_pmu.c
+@@ -855,16 +855,8 @@ static void *arm_spe_pmu_setup_aux(struct perf_event *event, void **pages,
+ 	if (!pglist)
+ 		goto out_free_buf;
+ 
+-	for (i = 0; i < nr_pages; ++i) {
+-		struct page *page = virt_to_page(pages[i]);
+-
+-		if (PagePrivate(page)) {
+-			pr_warn("unexpected high-order page for auxbuf!");
+-			goto out_free_pglist;
+-		}
+-
++	for (i = 0; i < nr_pages; ++i)
+ 		pglist[i] = virt_to_page(pages[i]);
+-	}
+ 
+ 	buf->base = vmap(pglist, nr_pages, VM_MAP, PAGE_KERNEL);
+ 	if (!buf->base)
+diff --git a/drivers/platform/x86/pmc_atom.c b/drivers/platform/x86/pmc_atom.c
+index c7039f52ad51..b1d804376237 100644
+--- a/drivers/platform/x86/pmc_atom.c
++++ b/drivers/platform/x86/pmc_atom.c
+@@ -398,12 +398,45 @@ static int pmc_dbgfs_register(struct pmc_dev *pmc)
+  */
+ static const struct dmi_system_id critclk_systems[] = {
+ 	{
++		/* pmc_plt_clk0 is used for an external HSIC USB HUB */
+ 		.ident = "MPL CEC1x",
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "MPL AG"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "CEC10 Family"),
+ 		},
+ 	},
++	{
++		/* pmc_plt_clk0 - 3 are used for the 4 ethernet controllers */
++		.ident = "Lex 3I380D",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Lex BayTrail"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "3I380D"),
++		},
++	},
++	{
++		/* pmc_plt_clk* are used for ethernet controllers */
++		.ident = "Beckhoff CB3163",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Beckhoff Automation"),
++			DMI_MATCH(DMI_BOARD_NAME, "CB3163"),
++		},
++	},
++	{
++		/* pmc_plt_clk* are used for ethernet controllers */
++		.ident = "Beckhoff CB6263",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Beckhoff Automation"),
++			DMI_MATCH(DMI_BOARD_NAME, "CB6263"),
++		},
++	},
++	{
++		/* pmc_plt_clk* are used for ethernet controllers */
++		.ident = "Beckhoff CB6363",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Beckhoff Automation"),
++			DMI_MATCH(DMI_BOARD_NAME, "CB6363"),
++		},
++	},
+ 	{ /*sentinel*/ }
+ };
+ 
+diff --git a/drivers/ras/cec.c b/drivers/ras/cec.c
+index 2d9ec378a8bc..f85d6b7a1984 100644
+--- a/drivers/ras/cec.c
++++ b/drivers/ras/cec.c
+@@ -2,6 +2,7 @@
+ #include <linux/mm.h>
+ #include <linux/gfp.h>
+ #include <linux/kernel.h>
++#include <linux/workqueue.h>
+ 
+ #include <asm/mce.h>
+ 
+@@ -123,16 +124,12 @@ static u64 dfs_pfn;
+ /* Amount of errors after which we offline */
+ static unsigned int count_threshold = COUNT_MASK;
+ 
+-/*
+- * The timer "decays" element count each timer_interval which is 24hrs by
+- * default.
+- */
+-
+-#define CEC_TIMER_DEFAULT_INTERVAL	24 * 60 * 60	/* 24 hrs */
+-#define CEC_TIMER_MIN_INTERVAL		 1 * 60 * 60	/* 1h */
+-#define CEC_TIMER_MAX_INTERVAL	   30 *	24 * 60 * 60	/* one month */
+-static struct timer_list cec_timer;
+-static u64 timer_interval = CEC_TIMER_DEFAULT_INTERVAL;
++/* Each element "decays" each decay_interval which is 24hrs by default. */
++#define CEC_DECAY_DEFAULT_INTERVAL	24 * 60 * 60	/* 24 hrs */
++#define CEC_DECAY_MIN_INTERVAL		 1 * 60 * 60	/* 1h */
++#define CEC_DECAY_MAX_INTERVAL	   30 *	24 * 60 * 60	/* one month */
++static struct delayed_work cec_work;
++static u64 decay_interval = CEC_DECAY_DEFAULT_INTERVAL;
+ 
+ /*
+  * Decrement decay value. We're using DECAY_BITS bits to denote decay of an
+@@ -160,20 +157,21 @@ static void do_spring_cleaning(struct ce_array *ca)
+ /*
+  * @interval in seconds
+  */
+-static void cec_mod_timer(struct timer_list *t, unsigned long interval)
++static void cec_mod_work(unsigned long interval)
+ {
+ 	unsigned long iv;
+ 
+-	iv = interval * HZ + jiffies;
+-
+-	mod_timer(t, round_jiffies(iv));
++	iv = interval * HZ;
++	mod_delayed_work(system_wq, &cec_work, round_jiffies(iv));
+ }
+ 
+-static void cec_timer_fn(struct timer_list *unused)
++static void cec_work_fn(struct work_struct *work)
+ {
++	mutex_lock(&ce_mutex);
+ 	do_spring_cleaning(&ce_arr);
++	mutex_unlock(&ce_mutex);
+ 
+-	cec_mod_timer(&cec_timer, timer_interval);
++	cec_mod_work(decay_interval);
+ }
+ 
+ /*
+@@ -183,32 +181,38 @@ static void cec_timer_fn(struct timer_list *unused)
+  */
+ static int __find_elem(struct ce_array *ca, u64 pfn, unsigned int *to)
+ {
++	int min = 0, max = ca->n - 1;
+ 	u64 this_pfn;
+-	int min = 0, max = ca->n;
+ 
+-	while (min < max) {
+-		int tmp = (max + min) >> 1;
++	while (min <= max) {
++		int i = (min + max) >> 1;
+ 
+-		this_pfn = PFN(ca->array[tmp]);
++		this_pfn = PFN(ca->array[i]);
+ 
+ 		if (this_pfn < pfn)
+-			min = tmp + 1;
++			min = i + 1;
+ 		else if (this_pfn > pfn)
+-			max = tmp;
+-		else {
+-			min = tmp;
+-			break;
++			max = i - 1;
++		else if (this_pfn == pfn) {
++			if (to)
++				*to = i;
++
++			return i;
+ 		}
+ 	}
+ 
++	/*
++	 * When the loop terminates without finding @pfn, min has the index of
++	 * the element slot where the new @pfn should be inserted. The loop
++	 * terminates when min > max, which means the min index points to the
++	 * bigger element and the max index to the smaller element, with the
++	 * new @pfn belonging in between.
++	 *
++	 * For more details, see exercise 1, Section 6.2.1 in TAOCP, vol. 3.
++	 */
+ 	if (to)
+ 		*to = min;
+ 
+-	this_pfn = PFN(ca->array[min]);
+-
+-	if (this_pfn == pfn)
+-		return min;
+-
+ 	return -ENOKEY;
+ }
+ 
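The rewritten __find_elem() above is the classic binary search that, on a miss, leaves min pointing at the insertion slot (the TAOCP exercise the comment cites). The same loop as a standalone program:

```c
/* Standalone version of the corrected search: returns the index of key,
 * or -1 with *ins set to where key should be inserted to keep the array
 * sorted. Same shape as the patched __find_elem(). */
#include <stdio.h>

static int find(const unsigned long *a, int n, unsigned long key, int *ins)
{
	int min = 0, max = n - 1;

	while (min <= max) {
		int i = (min + max) >> 1;

		if (a[i] < key)
			min = i + 1;
		else if (a[i] > key)
			max = i - 1;
		else {
			*ins = i;
			return i;
		}
	}
	*ins = min;	/* index of the first element greater than key */
	return -1;
}

int main(void)
{
	unsigned long a[] = { 2, 4, 8, 16 };
	int ins;

	find(a, 4, 5, &ins);
	printf("5 would be inserted at index %d\n", ins);	/* 2 */
	return 0;
}
```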
+@@ -374,15 +378,15 @@ static int decay_interval_set(void *data, u64 val)
+ {
+ 	*(u64 *)data = val;
+ 
+-	if (val < CEC_TIMER_MIN_INTERVAL)
++	if (val < CEC_DECAY_MIN_INTERVAL)
+ 		return -EINVAL;
+ 
+-	if (val > CEC_TIMER_MAX_INTERVAL)
++	if (val > CEC_DECAY_MAX_INTERVAL)
+ 		return -EINVAL;
+ 
+-	timer_interval = val;
++	decay_interval = val;
+ 
+-	cec_mod_timer(&cec_timer, timer_interval);
++	cec_mod_work(decay_interval);
+ 	return 0;
+ }
+ DEFINE_DEBUGFS_ATTRIBUTE(decay_interval_ops, u64_get, decay_interval_set, "%lld\n");
+@@ -426,7 +430,7 @@ static int array_dump(struct seq_file *m, void *v)
+ 
+ 	seq_printf(m, "Flags: 0x%x\n", ca->flags);
+ 
+-	seq_printf(m, "Timer interval: %lld seconds\n", timer_interval);
++	seq_printf(m, "Decay interval: %lld seconds\n", decay_interval);
+ 	seq_printf(m, "Decays: %lld\n", ca->decays_done);
+ 
+ 	seq_printf(m, "Action threshold: %d\n", count_threshold);
+@@ -472,7 +476,7 @@ static int __init create_debugfs_nodes(void)
+ 	}
+ 
+ 	decay = debugfs_create_file("decay_interval", S_IRUSR | S_IWUSR, d,
+-				    &timer_interval, &decay_interval_ops);
++				    &decay_interval, &decay_interval_ops);
+ 	if (!decay) {
+ 		pr_warn("Error creating decay_interval debugfs node!\n");
+ 		goto err;
+@@ -508,8 +512,8 @@ void __init cec_init(void)
+ 	if (create_debugfs_nodes())
+ 		return;
+ 
+-	timer_setup(&cec_timer, cec_timer_fn, 0);
+-	cec_mod_timer(&cec_timer, CEC_TIMER_DEFAULT_INTERVAL);
++	INIT_DELAYED_WORK(&cec_work, cec_work_fn);
++	schedule_delayed_work(&cec_work, CEC_DECAY_DEFAULT_INTERVAL);
+ 
+ 	pr_info("Correctable Errors collector initialized.\n");
+ }
+diff --git a/drivers/scsi/bnx2fc/bnx2fc_hwi.c b/drivers/scsi/bnx2fc/bnx2fc_hwi.c
+index 039328d9ef13..30e6d78e82f0 100644
+--- a/drivers/scsi/bnx2fc/bnx2fc_hwi.c
++++ b/drivers/scsi/bnx2fc/bnx2fc_hwi.c
+@@ -830,7 +830,7 @@ ret_err_rqe:
+ 			((u64)err_entry->data.err_warn_bitmap_hi << 32) |
+ 			(u64)err_entry->data.err_warn_bitmap_lo;
+ 		for (i = 0; i < BNX2FC_NUM_ERR_BITS; i++) {
+-			if (err_warn_bit_map & (u64) (1 << i)) {
++			if (err_warn_bit_map & ((u64)1 << i)) {
+ 				err_warn = i;
+ 				break;
+ 			}
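The small-looking bnx2fc change above matters because "1 << i" is evaluated in 32-bit int: for bit indices of 31 and above the value is truncated or undefined before the cast to u64 ever happens, whereas "(u64)1 << i" widens first. Illustrated standalone:

```c
/* The cast must happen before the shift: "(uint64_t)(1 << i)" shifts a
 * 32-bit int first, losing bits 31..63 (or invoking UB), while
 * "(uint64_t)1 << i" is well-defined for any i < 64. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	int i = 40;

	/* (uint64_t)(1 << i) would be undefined behaviour here */
	uint64_t bit = (uint64_t)1 << i;

	printf("bit %d: %#llx\n", i, (unsigned long long)bit);
	return 0;
}
```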
+diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
+index a09a742d7ec1..26a22e41204e 100644
+--- a/drivers/scsi/lpfc/lpfc_attr.c
++++ b/drivers/scsi/lpfc/lpfc_attr.c
+@@ -159,6 +159,7 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
+ 	int i;
+ 	int len = 0;
+ 	char tmp[LPFC_MAX_NVME_INFO_TMP_LEN] = {0};
++	unsigned long iflags = 0;
+ 
+ 	if (!(vport->cfg_enable_fc4_type & LPFC_ENABLE_NVME)) {
+ 		len = scnprintf(buf, PAGE_SIZE, "NVME Disabled\n");
+@@ -337,7 +338,7 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
+ 		  phba->sli4_hba.io_xri_max,
+ 		  lpfc_sli4_get_els_iocb_cnt(phba));
+ 	if (strlcat(buf, tmp, PAGE_SIZE) >= PAGE_SIZE)
+-		goto buffer_done;
++		goto rcu_unlock_buf_done;
+ 
+ 	/* Port state is only one of two values for now. */
+ 	if (localport->port_id)
+@@ -353,15 +354,15 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
+ 		  wwn_to_u64(vport->fc_nodename.u.wwn),
+ 		  localport->port_id, statep);
+ 	if (strlcat(buf, tmp, PAGE_SIZE) >= PAGE_SIZE)
+-		goto buffer_done;
++		goto rcu_unlock_buf_done;
+ 
+ 	list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
+ 		nrport = NULL;
+-		spin_lock(&vport->phba->hbalock);
++		spin_lock_irqsave(&vport->phba->hbalock, iflags);
+ 		rport = lpfc_ndlp_get_nrport(ndlp);
+ 		if (rport)
+ 			nrport = rport->remoteport;
+-		spin_unlock(&vport->phba->hbalock);
++		spin_unlock_irqrestore(&vport->phba->hbalock, iflags);
+ 		if (!nrport)
+ 			continue;
+ 
+@@ -380,39 +381,39 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
+ 
+ 		/* Tab in to show lport ownership. */
+ 		if (strlcat(buf, "NVME RPORT       ", PAGE_SIZE) >= PAGE_SIZE)
+-			goto buffer_done;
++			goto rcu_unlock_buf_done;
+ 		if (phba->brd_no >= 10) {
+ 			if (strlcat(buf, " ", PAGE_SIZE) >= PAGE_SIZE)
+-				goto buffer_done;
++				goto rcu_unlock_buf_done;
+ 		}
+ 
+ 		scnprintf(tmp, sizeof(tmp), "WWPN x%llx ",
+ 			  nrport->port_name);
+ 		if (strlcat(buf, tmp, PAGE_SIZE) >= PAGE_SIZE)
+-			goto buffer_done;
++			goto rcu_unlock_buf_done;
+ 
+ 		scnprintf(tmp, sizeof(tmp), "WWNN x%llx ",
+ 			  nrport->node_name);
+ 		if (strlcat(buf, tmp, PAGE_SIZE) >= PAGE_SIZE)
+-			goto buffer_done;
++			goto rcu_unlock_buf_done;
+ 
+ 		scnprintf(tmp, sizeof(tmp), "DID x%06x ",
+ 			  nrport->port_id);
+ 		if (strlcat(buf, tmp, PAGE_SIZE) >= PAGE_SIZE)
+-			goto buffer_done;
++			goto rcu_unlock_buf_done;
+ 
+ 		/* An NVME rport can have multiple roles. */
+ 		if (nrport->port_role & FC_PORT_ROLE_NVME_INITIATOR) {
+ 			if (strlcat(buf, "INITIATOR ", PAGE_SIZE) >= PAGE_SIZE)
+-				goto buffer_done;
++				goto rcu_unlock_buf_done;
+ 		}
+ 		if (nrport->port_role & FC_PORT_ROLE_NVME_TARGET) {
+ 			if (strlcat(buf, "TARGET ", PAGE_SIZE) >= PAGE_SIZE)
+-				goto buffer_done;
++				goto rcu_unlock_buf_done;
+ 		}
+ 		if (nrport->port_role & FC_PORT_ROLE_NVME_DISCOVERY) {
+ 			if (strlcat(buf, "DISCSRVC ", PAGE_SIZE) >= PAGE_SIZE)
+-				goto buffer_done;
++				goto rcu_unlock_buf_done;
+ 		}
+ 		if (nrport->port_role & ~(FC_PORT_ROLE_NVME_INITIATOR |
+ 					  FC_PORT_ROLE_NVME_TARGET |
+@@ -420,12 +421,12 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
+ 			scnprintf(tmp, sizeof(tmp), "UNKNOWN ROLE x%x",
+ 				  nrport->port_role);
+ 			if (strlcat(buf, tmp, PAGE_SIZE) >= PAGE_SIZE)
+-				goto buffer_done;
++				goto rcu_unlock_buf_done;
+ 		}
+ 
+ 		scnprintf(tmp, sizeof(tmp), "%s\n", statep);
+ 		if (strlcat(buf, tmp, PAGE_SIZE) >= PAGE_SIZE)
+-			goto buffer_done;
++			goto rcu_unlock_buf_done;
+ 	}
+ 	rcu_read_unlock();
+ 
+@@ -487,7 +488,13 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
+ 		  atomic_read(&lport->cmpl_fcp_err));
+ 	strlcat(buf, tmp, PAGE_SIZE);
+ 
+-buffer_done:
++	/* RCU is already unlocked. */
++	goto buffer_done;
++
++ rcu_unlock_buf_done:
++	rcu_read_unlock();
++
++ buffer_done:
+ 	len = strnlen(buf, PAGE_SIZE);
+ 
+ 	if (unlikely(len >= (PAGE_SIZE - 1))) {
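The new rcu_unlock_buf_done label above gives every early exit taken inside the rcu_read_lock() section a path that drops the read lock, while code running after the existing unlock jumps past it. A minimal goto-unwind sketch, with a pthread rwlock standing in for RCU (names invented):

```c
/* Goto-unwind in miniature: early exits taken while the read lock is
 * held jump to a label that drops it; later exits skip that label. */
#include <pthread.h>
#include <string.h>
#include <stdio.h>

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;

static int build_report(char *buf, size_t len)
{
	int ret = 0;

	if (snprintf(buf, len, "header\n") >= (int)len)
		goto done;		/* lock not taken yet */

	pthread_rwlock_rdlock(&lock);
	if (strlen(buf) + 6 >= len) {
		ret = -1;
		goto unlock_done;	/* must drop the lock first */
	}
	strcat(buf, "body\n");
	pthread_rwlock_unlock(&lock);
	goto done;			/* exits after the unlock skip the label */

unlock_done:
	pthread_rwlock_unlock(&lock);
done:
	return ret;
}

int main(void)
{
	char buf[64];

	printf("ret=%d\n%s", build_report(buf, sizeof(buf)), buf);
	return 0;
}
```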
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index fc077cb87900..965f8a1a8f67 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -7338,7 +7338,10 @@ int
+ lpfc_send_rrq(struct lpfc_hba *phba, struct lpfc_node_rrq *rrq)
+ {
+ 	struct lpfc_nodelist *ndlp = lpfc_findnode_did(rrq->vport,
+-							rrq->nlp_DID);
++						       rrq->nlp_DID);
++	if (!ndlp)
++		return 1;
++
+ 	if (lpfc_test_rrq_active(phba, ndlp, rrq->xritag))
+ 		return lpfc_issue_els_rrq(rrq->vport, ndlp,
+ 					 rrq->nlp_DID, rrq);
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index dc933b6d7800..363b21c4255e 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -994,15 +994,14 @@ lpfc_cleanup_vports_rrqs(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+  * @ndlp: Targets nodelist pointer for this exchange.
+ * @xritag: the xri in the bitmap to test.
+  *
+- * This function is called with hbalock held. This function
+- * returns 0 = rrq not active for this xri
+- *         1 = rrq is valid for this xri.
++ * This function returns:
++ * 0 = rrq not active for this xri
++ * 1 = rrq is valid for this xri.
+  **/
+ int
+ lpfc_test_rrq_active(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp,
+ 			uint16_t  xritag)
+ {
+-	lockdep_assert_held(&phba->hbalock);
+ 	if (!ndlp)
+ 		return 0;
+ 	if (!ndlp->active_rrqs_xri_bitmap)
+@@ -1105,10 +1104,11 @@ out:
+  * @phba: Pointer to HBA context object.
+  * @piocb: Pointer to the iocbq.
+  *
+- * This function is called with the ring lock held. This function
+- * gets a new driver sglq object from the sglq list. If the
+- * list is not empty then it is successful, it returns pointer to the newly
+- * allocated sglq object else it returns NULL.
++ * The driver calls this function with either the nvme ls ring lock
++ * or the fc els ring lock held depending on the iocb usage.  This function
++ * gets a new driver sglq object from the sglq list. If the list is not
++ * empty, it succeeds and returns a pointer to the newly allocated sglq
++ * object; otherwise it returns NULL.
+  **/
+ static struct lpfc_sglq *
+ __lpfc_sli_get_els_sglq(struct lpfc_hba *phba, struct lpfc_iocbq *piocbq)
+@@ -1118,9 +1118,15 @@ __lpfc_sli_get_els_sglq(struct lpfc_hba *phba, struct lpfc_iocbq *piocbq)
+ 	struct lpfc_sglq *start_sglq = NULL;
+ 	struct lpfc_io_buf *lpfc_cmd;
+ 	struct lpfc_nodelist *ndlp;
++	struct lpfc_sli_ring *pring = NULL;
+ 	int found = 0;
+ 
+-	lockdep_assert_held(&phba->hbalock);
++	if (piocbq->iocb_flag & LPFC_IO_NVME_LS)
++		pring =  phba->sli4_hba.nvmels_wq->pring;
++	else
++		pring = lpfc_phba_elsring(phba);
++
++	lockdep_assert_held(&pring->ring_lock);
+ 
+ 	if (piocbq->iocb_flag &  LPFC_IO_FCP) {
+ 		lpfc_cmd = (struct lpfc_io_buf *) piocbq->context1;
+@@ -1563,7 +1569,8 @@ lpfc_sli_ring_map(struct lpfc_hba *phba)
+  * @pring: Pointer to driver SLI ring object.
+  * @piocb: Pointer to the driver iocb object.
+  *
+- * This function is called with hbalock held. The function adds the
++ * The driver calls this function with the hbalock held for SLI3 ports or
++ * the ring lock held for SLI4 ports. The function adds the
+  * new iocb to txcmplq of the given ring. This function always returns
+  * 0. If this function is called for ELS ring, this function checks if
+  * there is a vport associated with the ELS command. This function also
+@@ -1573,7 +1580,10 @@ static int
+ lpfc_sli_ringtxcmpl_put(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ 			struct lpfc_iocbq *piocb)
+ {
+-	lockdep_assert_held(&phba->hbalock);
++	if (phba->sli_rev == LPFC_SLI_REV4)
++		lockdep_assert_held(&pring->ring_lock);
++	else
++		lockdep_assert_held(&phba->hbalock);
+ 
+ 	BUG_ON(!piocb);
+ 
+@@ -2970,8 +2980,8 @@ lpfc_sli_process_unsol_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+  *
+  * This function looks up the iocb_lookup table to get the command iocb
+  * corresponding to the given response iocb using the iotag of the
+- * response iocb. This function is called with the hbalock held
+- * for sli3 devices or the ring_lock for sli4 devices.
++ * response iocb. The driver calls this function with the hbalock held
++ * for SLI3 ports or the ring lock held for SLI4 ports.
+  * This function returns the command iocb object if it finds the command
+  * iocb else returns NULL.
+  **/
+@@ -2982,8 +2992,15 @@ lpfc_sli_iocbq_lookup(struct lpfc_hba *phba,
+ {
+ 	struct lpfc_iocbq *cmd_iocb = NULL;
+ 	uint16_t iotag;
+-	lockdep_assert_held(&phba->hbalock);
++	spinlock_t *temp_lock = NULL;
++	unsigned long iflag = 0;
+ 
++	if (phba->sli_rev == LPFC_SLI_REV4)
++		temp_lock = &pring->ring_lock;
++	else
++		temp_lock = &phba->hbalock;
++
++	spin_lock_irqsave(temp_lock, iflag);
+ 	iotag = prspiocb->iocb.ulpIoTag;
+ 
+ 	if (iotag != 0 && iotag <= phba->sli.last_iotag) {
+@@ -2993,10 +3010,12 @@ lpfc_sli_iocbq_lookup(struct lpfc_hba *phba,
+ 			list_del_init(&cmd_iocb->list);
+ 			cmd_iocb->iocb_flag &= ~LPFC_IO_ON_TXCMPLQ;
+ 			pring->txcmplq_cnt--;
++			spin_unlock_irqrestore(temp_lock, iflag);
+ 			return cmd_iocb;
+ 		}
+ 	}
+ 
++	spin_unlock_irqrestore(temp_lock, iflag);
+ 	lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
+ 			"0317 iotag x%x is out of "
+ 			"range: max iotag x%x wd0 x%x\n",
+@@ -3012,8 +3031,8 @@ lpfc_sli_iocbq_lookup(struct lpfc_hba *phba,
+  * @iotag: IOCB tag.
+  *
+  * This function looks up the iocb_lookup table to get the command iocb
+- * corresponding to the given iotag. This function is called with the
+- * hbalock held.
++ * corresponding to the given iotag. The driver calls this function with
++ * the ring lock held because this function is an SLI4 port only helper.
+  * This function returns the command iocb object if it finds the command
+  * iocb else returns NULL.
+  **/
+@@ -3022,8 +3041,15 @@ lpfc_sli_iocbq_lookup_by_tag(struct lpfc_hba *phba,
+ 			     struct lpfc_sli_ring *pring, uint16_t iotag)
+ {
+ 	struct lpfc_iocbq *cmd_iocb = NULL;
++	spinlock_t *temp_lock = NULL;
++	unsigned long iflag = 0;
+ 
+-	lockdep_assert_held(&phba->hbalock);
++	if (phba->sli_rev == LPFC_SLI_REV4)
++		temp_lock = &pring->ring_lock;
++	else
++		temp_lock = &phba->hbalock;
++
++	spin_lock_irqsave(temp_lock, iflag);
+ 	if (iotag != 0 && iotag <= phba->sli.last_iotag) {
+ 		cmd_iocb = phba->sli.iocbq_lookup[iotag];
+ 		if (cmd_iocb->iocb_flag & LPFC_IO_ON_TXCMPLQ) {
+@@ -3031,10 +3057,12 @@ lpfc_sli_iocbq_lookup_by_tag(struct lpfc_hba *phba,
+ 			list_del_init(&cmd_iocb->list);
+ 			cmd_iocb->iocb_flag &= ~LPFC_IO_ON_TXCMPLQ;
+ 			pring->txcmplq_cnt--;
++			spin_unlock_irqrestore(temp_lock, iflag);
+ 			return cmd_iocb;
+ 		}
+ 	}
+ 
++	spin_unlock_irqrestore(temp_lock, iflag);
+ 	lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
+ 			"0372 iotag x%x lookup error: max iotag (x%x) "
+ 			"iocb_flag x%x\n",
+@@ -3068,17 +3096,7 @@ lpfc_sli_process_sol_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ 	int rc = 1;
+ 	unsigned long iflag;
+ 
+-	/* Based on the iotag field, get the cmd IOCB from the txcmplq */
+-	if (phba->sli_rev == LPFC_SLI_REV4)
+-		spin_lock_irqsave(&pring->ring_lock, iflag);
+-	else
+-		spin_lock_irqsave(&phba->hbalock, iflag);
+ 	cmdiocbp = lpfc_sli_iocbq_lookup(phba, pring, saveq);
+-	if (phba->sli_rev == LPFC_SLI_REV4)
+-		spin_unlock_irqrestore(&pring->ring_lock, iflag);
+-	else
+-		spin_unlock_irqrestore(&phba->hbalock, iflag);
+-
+ 	if (cmdiocbp) {
+ 		if (cmdiocbp->iocb_cmpl) {
+ 			/*
+@@ -3409,8 +3427,10 @@ lpfc_sli_handle_fast_ring_event(struct lpfc_hba *phba,
+ 				break;
+ 			}
+ 
++			spin_unlock_irqrestore(&phba->hbalock, iflag);
+ 			cmdiocbq = lpfc_sli_iocbq_lookup(phba, pring,
+ 							 &rspiocbq);
++			spin_lock_irqsave(&phba->hbalock, iflag);
+ 			if (unlikely(!cmdiocbq))
+ 				break;
+ 			if (cmdiocbq->iocb_flag & LPFC_DRIVER_ABORTED)
+@@ -3604,9 +3624,12 @@ lpfc_sli_sp_handle_rspiocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ 
+ 		case LPFC_ABORT_IOCB:
+ 			cmdiocbp = NULL;
+-			if (irsp->ulpCommand != CMD_XRI_ABORTED_CX)
++			if (irsp->ulpCommand != CMD_XRI_ABORTED_CX) {
++				spin_unlock_irqrestore(&phba->hbalock, iflag);
+ 				cmdiocbp = lpfc_sli_iocbq_lookup(phba, pring,
+ 								 saveq);
++				spin_lock_irqsave(&phba->hbalock, iflag);
++			}
+ 			if (cmdiocbp) {
+ 				/* Call the specified completion routine */
+ 				if (cmdiocbp->iocb_cmpl) {
+@@ -13070,13 +13093,11 @@ lpfc_sli4_els_wcqe_to_rspiocbq(struct lpfc_hba *phba,
+ 		return NULL;
+ 
+ 	wcqe = &irspiocbq->cq_event.cqe.wcqe_cmpl;
+-	spin_lock_irqsave(&pring->ring_lock, iflags);
+ 	pring->stats.iocb_event++;
+ 	/* Look up the ELS command IOCB and create pseudo response IOCB */
+ 	cmdiocbq = lpfc_sli_iocbq_lookup_by_tag(phba, pring,
+ 				bf_get(lpfc_wcqe_c_request_tag, wcqe));
+ 	if (unlikely(!cmdiocbq)) {
+-		spin_unlock_irqrestore(&pring->ring_lock, iflags);
+ 		lpfc_printf_log(phba, KERN_WARNING, LOG_SLI,
+ 				"0386 ELS complete with no corresponding "
+ 				"cmdiocb: 0x%x 0x%x 0x%x 0x%x\n",
+@@ -13086,6 +13107,7 @@ lpfc_sli4_els_wcqe_to_rspiocbq(struct lpfc_hba *phba,
+ 		return NULL;
+ 	}
+ 
++	spin_lock_irqsave(&pring->ring_lock, iflags);
+ 	/* Put the iocb back on the txcmplq */
+ 	lpfc_sli_ringtxcmpl_put(phba, pring, cmdiocbq);
+ 	spin_unlock_irqrestore(&pring->ring_lock, iflags);
+@@ -13856,9 +13878,9 @@ lpfc_sli4_fp_handle_fcp_wcqe(struct lpfc_hba *phba, struct lpfc_queue *cq,
+ 	/* Look up the FCP command IOCB and create pseudo response IOCB */
+ 	spin_lock_irqsave(&pring->ring_lock, iflags);
+ 	pring->stats.iocb_event++;
++	spin_unlock_irqrestore(&pring->ring_lock, iflags);
+ 	cmdiocbq = lpfc_sli_iocbq_lookup_by_tag(phba, pring,
+ 				bf_get(lpfc_wcqe_c_request_tag, wcqe));
+-	spin_unlock_irqrestore(&pring->ring_lock, iflags);
+ 	if (unlikely(!cmdiocbq)) {
+ 		lpfc_printf_log(phba, KERN_WARNING, LOG_SLI,
+ 				"0374 FCP complete with no corresponding "
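With the lpfc changes above, the lookup helpers take and release the appropriate lock themselves (hbalock on SLI3 ports, the per-ring lock on SLI4) instead of asserting that the caller holds hbalock, which is why the callers' lock/unlock boilerplate around every call site disappears. The "helper picks its own lock" shape in miniature (the mutexes and rev flag are stand-ins, not the driver's objects):

```c
/* The lookup selects the right lock for the hardware revision and
 * releases it on every return path. Illustrative names only. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t ring_lock = PTHREAD_MUTEX_INITIALIZER;
static int hw_rev = 4;
static int table[8] = { 0, 11, 22, 33 };

static int lookup(int tag)
{
	pthread_mutex_t *l = (hw_rev == 4) ? &ring_lock : &global_lock;
	int val = -1;

	pthread_mutex_lock(l);
	if (tag >= 0 && tag < 8)
		val = table[tag];
	pthread_mutex_unlock(l);	/* released on every path */
	return val;
}

int main(void)
{
	printf("tag 2 -> %d\n", lookup(2));
	return 0;
}
```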
+diff --git a/drivers/scsi/myrs.c b/drivers/scsi/myrs.c
+index b8d54ef8cf6d..eb0dd566330a 100644
+--- a/drivers/scsi/myrs.c
++++ b/drivers/scsi/myrs.c
+@@ -818,7 +818,7 @@ static void myrs_log_event(struct myrs_hba *cs, struct myrs_event *ev)
+ 	unsigned char ev_type, *ev_msg;
+ 	struct Scsi_Host *shost = cs->host;
+ 	struct scsi_device *sdev;
+-	struct scsi_sense_hdr sshdr;
++	struct scsi_sense_hdr sshdr = {0};
+ 	unsigned char sense_info[4];
+ 	unsigned char cmd_specific[4];
+ 
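The myrs change above zero-initializes sshdr with "= {0}", so fields read afterwards are defined even if the sense parser fills nothing in. The general hazard, reduced to a standalone example (types here are illustrative, not the SCSI ones):

```c
/* Why "= {0}" matters: a callee may fill the struct only partially (or
 * not at all), and the caller still reads its fields afterwards. */
#include <stdio.h>

struct hdr {
	unsigned char key;
	unsigned char asc;
};

static int parse(struct hdr *h, int have_data)
{
	if (!have_data)
		return 0;	/* leaves *h untouched */
	h->key = 5;
	h->asc = 0x24;
	return 1;
}

int main(void)
{
	struct hdr h = {0};	/* without this, h.key below is garbage */

	parse(&h, 0);
	printf("key=%u asc=%#x\n", h.key, h.asc);
	return 0;
}
```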
+diff --git a/drivers/scsi/qedi/qedi_dbg.c b/drivers/scsi/qedi/qedi_dbg.c
+index 8fd28b056f73..3383314a3882 100644
+--- a/drivers/scsi/qedi/qedi_dbg.c
++++ b/drivers/scsi/qedi/qedi_dbg.c
+@@ -16,10 +16,6 @@ qedi_dbg_err(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
+ {
+ 	va_list va;
+ 	struct va_format vaf;
+-	char nfunc[32];
+-
+-	memset(nfunc, 0, sizeof(nfunc));
+-	memcpy(nfunc, func, sizeof(nfunc) - 1);
+ 
+ 	va_start(va, fmt);
+ 
+@@ -28,9 +24,9 @@ qedi_dbg_err(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
+ 
+ 	if (likely(qedi) && likely(qedi->pdev))
+ 		pr_err("[%s]:[%s:%d]:%d: %pV", dev_name(&qedi->pdev->dev),
+-		       nfunc, line, qedi->host_no, &vaf);
++		       func, line, qedi->host_no, &vaf);
+ 	else
+-		pr_err("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
++		pr_err("[0000:00:00.0]:[%s:%d]: %pV", func, line, &vaf);
+ 
+ 	va_end(va);
+ }
+@@ -41,10 +37,6 @@ qedi_dbg_warn(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
+ {
+ 	va_list va;
+ 	struct va_format vaf;
+-	char nfunc[32];
+-
+-	memset(nfunc, 0, sizeof(nfunc));
+-	memcpy(nfunc, func, sizeof(nfunc) - 1);
+ 
+ 	va_start(va, fmt);
+ 
+@@ -56,9 +48,9 @@ qedi_dbg_warn(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
+ 
+ 	if (likely(qedi) && likely(qedi->pdev))
+ 		pr_warn("[%s]:[%s:%d]:%d: %pV", dev_name(&qedi->pdev->dev),
+-			nfunc, line, qedi->host_no, &vaf);
++			func, line, qedi->host_no, &vaf);
+ 	else
+-		pr_warn("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
++		pr_warn("[0000:00:00.0]:[%s:%d]: %pV", func, line, &vaf);
+ 
+ ret:
+ 	va_end(va);
+@@ -70,10 +62,6 @@ qedi_dbg_notice(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
+ {
+ 	va_list va;
+ 	struct va_format vaf;
+-	char nfunc[32];
+-
+-	memset(nfunc, 0, sizeof(nfunc));
+-	memcpy(nfunc, func, sizeof(nfunc) - 1);
+ 
+ 	va_start(va, fmt);
+ 
+@@ -85,10 +73,10 @@ qedi_dbg_notice(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
+ 
+ 	if (likely(qedi) && likely(qedi->pdev))
+ 		pr_notice("[%s]:[%s:%d]:%d: %pV",
+-			  dev_name(&qedi->pdev->dev), nfunc, line,
++			  dev_name(&qedi->pdev->dev), func, line,
+ 			  qedi->host_no, &vaf);
+ 	else
+-		pr_notice("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
++		pr_notice("[0000:00:00.0]:[%s:%d]: %pV", func, line, &vaf);
+ 
+ ret:
+ 	va_end(va);
+@@ -100,10 +88,6 @@ qedi_dbg_info(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
+ {
+ 	va_list va;
+ 	struct va_format vaf;
+-	char nfunc[32];
+-
+-	memset(nfunc, 0, sizeof(nfunc));
+-	memcpy(nfunc, func, sizeof(nfunc) - 1);
+ 
+ 	va_start(va, fmt);
+ 
+@@ -115,9 +99,9 @@ qedi_dbg_info(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
+ 
+ 	if (likely(qedi) && likely(qedi->pdev))
+ 		pr_info("[%s]:[%s:%d]:%d: %pV", dev_name(&qedi->pdev->dev),
+-			nfunc, line, qedi->host_no, &vaf);
++			func, line, qedi->host_no, &vaf);
+ 	else
+-		pr_info("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
++		pr_info("[0000:00:00.0]:[%s:%d]: %pV", func, line, &vaf);
+ 
+ ret:
+ 	va_end(va);
+diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c
+index bf371e7b957d..c3d0d246df14 100644
+--- a/drivers/scsi/qedi/qedi_iscsi.c
++++ b/drivers/scsi/qedi/qedi_iscsi.c
+@@ -809,8 +809,6 @@ qedi_ep_connect(struct Scsi_Host *shost, struct sockaddr *dst_addr,
+ 	struct qedi_endpoint *qedi_ep;
+ 	struct sockaddr_in *addr;
+ 	struct sockaddr_in6 *addr6;
+-	struct qed_dev *cdev  =  NULL;
+-	struct qedi_uio_dev *udev = NULL;
+ 	struct iscsi_path path_req;
+ 	u32 msg_type = ISCSI_KEVENT_IF_DOWN;
+ 	u32 iscsi_cid = QEDI_CID_RESERVED;
+@@ -830,8 +828,6 @@ qedi_ep_connect(struct Scsi_Host *shost, struct sockaddr *dst_addr,
+ 	}
+ 
+ 	qedi = iscsi_host_priv(shost);
+-	cdev = qedi->cdev;
+-	udev = qedi->udev;
+ 
+ 	if (test_bit(QEDI_IN_OFFLINE, &qedi->flags) ||
+ 	    test_bit(QEDI_IN_RECOVERY, &qedi->flags)) {
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 91f576d743fe..d377e50a6c19 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -6838,6 +6838,78 @@ qla2x00_release_firmware(void)
+ 	mutex_unlock(&qla_fw_lock);
+ }
+ 
++static void qla_pci_error_cleanup(scsi_qla_host_t *vha)
++{
++	struct qla_hw_data *ha = vha->hw;
++	scsi_qla_host_t *base_vha = pci_get_drvdata(ha->pdev);
++	struct qla_qpair *qpair = NULL;
++	struct scsi_qla_host *vp;
++	fc_port_t *fcport;
++	int i;
++	unsigned long flags;
++
++	ha->chip_reset++;
++
++	ha->base_qpair->chip_reset = ha->chip_reset;
++	for (i = 0; i < ha->max_qpairs; i++) {
++		if (ha->queue_pair_map[i])
++			ha->queue_pair_map[i]->chip_reset =
++			    ha->base_qpair->chip_reset;
++	}
++
++	/* purge MBox commands */
++	if (atomic_read(&ha->num_pend_mbx_stage3)) {
++		clear_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags);
++		complete(&ha->mbx_intr_comp);
++	}
++
++	i = 0;
++
++	while (atomic_read(&ha->num_pend_mbx_stage3) ||
++	    atomic_read(&ha->num_pend_mbx_stage2) ||
++	    atomic_read(&ha->num_pend_mbx_stage1)) {
++		msleep(20);
++		i++;
++		if (i > 50)
++			break;
++	}
++
++	ha->flags.purge_mbox = 0;
++
++	mutex_lock(&ha->mq_lock);
++	list_for_each_entry(qpair, &base_vha->qp_list, qp_list_elem)
++		qpair->online = 0;
++	mutex_unlock(&ha->mq_lock);
++
++	qla2x00_mark_all_devices_lost(vha, 0);
++
++	spin_lock_irqsave(&ha->vport_slock, flags);
++	list_for_each_entry(vp, &ha->vp_list, list) {
++		atomic_inc(&vp->vref_count);
++		spin_unlock_irqrestore(&ha->vport_slock, flags);
++		qla2x00_mark_all_devices_lost(vp, 0);
++		spin_lock_irqsave(&ha->vport_slock, flags);
++		atomic_dec(&vp->vref_count);
++	}
++	spin_unlock_irqrestore(&ha->vport_slock, flags);
++
++	/* Clear all async request states across all VPs. */
++	list_for_each_entry(fcport, &vha->vp_fcports, list)
++		fcport->flags &= ~(FCF_LOGIN_NEEDED | FCF_ASYNC_SENT);
++
++	spin_lock_irqsave(&ha->vport_slock, flags);
++	list_for_each_entry(vp, &ha->vp_list, list) {
++		atomic_inc(&vp->vref_count);
++		spin_unlock_irqrestore(&ha->vport_slock, flags);
++		list_for_each_entry(fcport, &vp->vp_fcports, list)
++			fcport->flags &= ~(FCF_LOGIN_NEEDED | FCF_ASYNC_SENT);
++		spin_lock_irqsave(&ha->vport_slock, flags);
++		atomic_dec(&vp->vref_count);
++	}
++	spin_unlock_irqrestore(&ha->vport_slock, flags);
++}
++
++
+ static pci_ers_result_t
+ qla2xxx_pci_error_detected(struct pci_dev *pdev, pci_channel_state_t state)
+ {
+@@ -6863,20 +6935,7 @@ qla2xxx_pci_error_detected(struct pci_dev *pdev, pci_channel_state_t state)
+ 		return PCI_ERS_RESULT_CAN_RECOVER;
+ 	case pci_channel_io_frozen:
+ 		ha->flags.eeh_busy = 1;
+-		/* For ISP82XX complete any pending mailbox cmd */
+-		if (IS_QLA82XX(ha)) {
+-			ha->flags.isp82xx_fw_hung = 1;
+-			ql_dbg(ql_dbg_aer, vha, 0x9001, "Pci channel io frozen\n");
+-			qla82xx_clear_pending_mbx(vha);
+-		}
+-		qla2x00_free_irqs(vha);
+-		pci_disable_device(pdev);
+-		/* Return back all IOs */
+-		qla2x00_abort_all_cmds(vha, DID_RESET << 16);
+-		if (ql2xmqsupport || ql2xnvmeenable) {
+-			set_bit(QPAIR_ONLINE_CHECK_NEEDED, &vha->dpc_flags);
+-			qla2xxx_wake_dpc(vha);
+-		}
++		qla_pci_error_cleanup(vha);
+ 		return PCI_ERS_RESULT_NEED_RESET;
+ 	case pci_channel_io_perm_failure:
+ 		ha->flags.pci_channel_io_perm_failure = 1;
+@@ -6930,122 +6989,14 @@ qla2xxx_pci_mmio_enabled(struct pci_dev *pdev)
+ 		return PCI_ERS_RESULT_RECOVERED;
+ }
+ 
+-static uint32_t
+-qla82xx_error_recovery(scsi_qla_host_t *base_vha)
+-{
+-	uint32_t rval = QLA_FUNCTION_FAILED;
+-	uint32_t drv_active = 0;
+-	struct qla_hw_data *ha = base_vha->hw;
+-	int fn;
+-	struct pci_dev *other_pdev = NULL;
+-
+-	ql_dbg(ql_dbg_aer, base_vha, 0x9006,
+-	    "Entered %s.\n", __func__);
+-
+-	set_bit(ABORT_ISP_ACTIVE, &base_vha->dpc_flags);
+-
+-	if (base_vha->flags.online) {
+-		/* Abort all outstanding commands,
+-		 * so as to be requeued later */
+-		qla2x00_abort_isp_cleanup(base_vha);
+-	}
+-
+-
+-	fn = PCI_FUNC(ha->pdev->devfn);
+-	while (fn > 0) {
+-		fn--;
+-		ql_dbg(ql_dbg_aer, base_vha, 0x9007,
+-		    "Finding pci device at function = 0x%x.\n", fn);
+-		other_pdev =
+-		    pci_get_domain_bus_and_slot(pci_domain_nr(ha->pdev->bus),
+-		    ha->pdev->bus->number, PCI_DEVFN(PCI_SLOT(ha->pdev->devfn),
+-		    fn));
+-
+-		if (!other_pdev)
+-			continue;
+-		if (atomic_read(&other_pdev->enable_cnt)) {
+-			ql_dbg(ql_dbg_aer, base_vha, 0x9008,
+-			    "Found PCI func available and enable at 0x%x.\n",
+-			    fn);
+-			pci_dev_put(other_pdev);
+-			break;
+-		}
+-		pci_dev_put(other_pdev);
+-	}
+-
+-	if (!fn) {
+-		/* Reset owner */
+-		ql_dbg(ql_dbg_aer, base_vha, 0x9009,
+-		    "This devfn is reset owner = 0x%x.\n",
+-		    ha->pdev->devfn);
+-		qla82xx_idc_lock(ha);
+-
+-		qla82xx_wr_32(ha, QLA82XX_CRB_DEV_STATE,
+-		    QLA8XXX_DEV_INITIALIZING);
+-
+-		qla82xx_wr_32(ha, QLA82XX_CRB_DRV_IDC_VERSION,
+-		    QLA82XX_IDC_VERSION);
+-
+-		drv_active = qla82xx_rd_32(ha, QLA82XX_CRB_DRV_ACTIVE);
+-		ql_dbg(ql_dbg_aer, base_vha, 0x900a,
+-		    "drv_active = 0x%x.\n", drv_active);
+-
+-		qla82xx_idc_unlock(ha);
+-		/* Reset if device is not already reset
+-		 * drv_active would be 0 if a reset has already been done
+-		 */
+-		if (drv_active)
+-			rval = qla82xx_start_firmware(base_vha);
+-		else
+-			rval = QLA_SUCCESS;
+-		qla82xx_idc_lock(ha);
+-
+-		if (rval != QLA_SUCCESS) {
+-			ql_log(ql_log_info, base_vha, 0x900b,
+-			    "HW State: FAILED.\n");
+-			qla82xx_clear_drv_active(ha);
+-			qla82xx_wr_32(ha, QLA82XX_CRB_DEV_STATE,
+-			    QLA8XXX_DEV_FAILED);
+-		} else {
+-			ql_log(ql_log_info, base_vha, 0x900c,
+-			    "HW State: READY.\n");
+-			qla82xx_wr_32(ha, QLA82XX_CRB_DEV_STATE,
+-			    QLA8XXX_DEV_READY);
+-			qla82xx_idc_unlock(ha);
+-			ha->flags.isp82xx_fw_hung = 0;
+-			rval = qla82xx_restart_isp(base_vha);
+-			qla82xx_idc_lock(ha);
+-			/* Clear driver state register */
+-			qla82xx_wr_32(ha, QLA82XX_CRB_DRV_STATE, 0);
+-			qla82xx_set_drv_active(base_vha);
+-		}
+-		qla82xx_idc_unlock(ha);
+-	} else {
+-		ql_dbg(ql_dbg_aer, base_vha, 0x900d,
+-		    "This devfn is not reset owner = 0x%x.\n",
+-		    ha->pdev->devfn);
+-		if ((qla82xx_rd_32(ha, QLA82XX_CRB_DEV_STATE) ==
+-		    QLA8XXX_DEV_READY)) {
+-			ha->flags.isp82xx_fw_hung = 0;
+-			rval = qla82xx_restart_isp(base_vha);
+-			qla82xx_idc_lock(ha);
+-			qla82xx_set_drv_active(base_vha);
+-			qla82xx_idc_unlock(ha);
+-		}
+-	}
+-	clear_bit(ABORT_ISP_ACTIVE, &base_vha->dpc_flags);
+-
+-	return rval;
+-}
+-
+ static pci_ers_result_t
+ qla2xxx_pci_slot_reset(struct pci_dev *pdev)
+ {
+ 	pci_ers_result_t ret = PCI_ERS_RESULT_DISCONNECT;
+ 	scsi_qla_host_t *base_vha = pci_get_drvdata(pdev);
+ 	struct qla_hw_data *ha = base_vha->hw;
+-	struct rsp_que *rsp;
+-	int rc, retries = 10;
++	int rc;
++	struct qla_qpair *qpair = NULL;
+ 
+ 	ql_dbg(ql_dbg_aer, base_vha, 0x9004,
+ 	    "Slot Reset.\n");
+@@ -7074,24 +7025,16 @@ qla2xxx_pci_slot_reset(struct pci_dev *pdev)
+ 		goto exit_slot_reset;
+ 	}
+ 
+-	rsp = ha->rsp_q_map[0];
+-	if (qla2x00_request_irqs(ha, rsp))
+-		goto exit_slot_reset;
+ 
+ 	if (ha->isp_ops->pci_config(base_vha))
+ 		goto exit_slot_reset;
+ 
+-	if (IS_QLA82XX(ha)) {
+-		if (qla82xx_error_recovery(base_vha) == QLA_SUCCESS) {
+-			ret = PCI_ERS_RESULT_RECOVERED;
+-			goto exit_slot_reset;
+-		} else
+-			goto exit_slot_reset;
+-	}
+-
+-	while (ha->flags.mbox_busy && retries--)
+-		msleep(1000);
++	mutex_lock(&ha->mq_lock);
++	list_for_each_entry(qpair, &base_vha->qp_list, qp_list_elem)
++		qpair->online = 1;
++	mutex_unlock(&ha->mq_lock);
+ 
++	base_vha->flags.online = 1;
+ 	set_bit(ABORT_ISP_ACTIVE, &base_vha->dpc_flags);
+ 	if (ha->isp_ops->abort_isp(base_vha) == QLA_SUCCESS)
+ 		ret =  PCI_ERS_RESULT_RECOVERED;
+@@ -7115,13 +7058,13 @@ qla2xxx_pci_resume(struct pci_dev *pdev)
+ 	ql_dbg(ql_dbg_aer, base_vha, 0x900f,
+ 	    "pci_resume.\n");
+ 
++	ha->flags.eeh_busy = 0;
++
+ 	ret = qla2x00_wait_for_hba_online(base_vha);
+ 	if (ret != QLA_SUCCESS) {
+ 		ql_log(ql_log_fatal, base_vha, 0x9002,
+ 		    "The device failed to resume I/O from slot/link_reset.\n");
+ 	}
+-
+-	ha->flags.eeh_busy = 0;
+ }
+ 
+ static void
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 6082b008969b..6b6413073584 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -215,6 +215,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	/* Cherry Stream G230 2.0 (G85-231) and 3.0 (G85-232) */
+ 	{ USB_DEVICE(0x046a, 0x0023), .driver_info = USB_QUIRK_RESET_RESUME },
+ 
++	/* Logitech HD Webcam C270 */
++	{ USB_DEVICE(0x046d, 0x0825), .driver_info = USB_QUIRK_RESET_RESUME },
++
+ 	/* Logitech HD Pro Webcams C920, C920-C, C925e and C930e */
+ 	{ USB_DEVICE(0x046d, 0x082d), .driver_info = USB_QUIRK_DELAY_INIT },
+ 	{ USB_DEVICE(0x046d, 0x0841), .driver_info = USB_QUIRK_DELAY_INIT },
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index 3f087962f498..445213378a6a 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -2664,8 +2664,10 @@ static void dwc2_free_dma_aligned_buffer(struct urb *urb)
+ 		return;
+ 
+ 	/* Restore urb->transfer_buffer from the end of the allocated area */
+-	memcpy(&stored_xfer_buffer, urb->transfer_buffer +
+-	       urb->transfer_buffer_length, sizeof(urb->transfer_buffer));
++	memcpy(&stored_xfer_buffer,
++	       PTR_ALIGN(urb->transfer_buffer + urb->transfer_buffer_length,
++			 dma_get_cache_alignment()),
++	       sizeof(urb->transfer_buffer));
+ 
+ 	if (usb_urb_dir_in(urb)) {
+ 		if (usb_pipeisoc(urb->pipe))
+@@ -2697,6 +2699,7 @@ static int dwc2_alloc_dma_aligned_buffer(struct urb *urb, gfp_t mem_flags)
+ 	 * DMA
+ 	 */
+ 	kmalloc_size = urb->transfer_buffer_length +
++		(dma_get_cache_alignment() - 1) +
+ 		sizeof(urb->transfer_buffer);
+ 
+ 	kmalloc_ptr = kmalloc(kmalloc_size, mem_flags);
+@@ -2707,7 +2710,8 @@ static int dwc2_alloc_dma_aligned_buffer(struct urb *urb, gfp_t mem_flags)
+ 	 * Position value of original urb->transfer_buffer pointer to the end
+ 	 * of allocation for later referencing
+ 	 */
+-	memcpy(kmalloc_ptr + urb->transfer_buffer_length,
++	memcpy(PTR_ALIGN(kmalloc_ptr + urb->transfer_buffer_length,
++			 dma_get_cache_alignment()),
+ 	       &urb->transfer_buffer, sizeof(urb->transfer_buffer));
+ 
+ 	if (usb_urb_dir_out(urb))
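The dwc2 hunks above over-allocate by dma_get_cache_alignment() - 1 bytes and use PTR_ALIGN so the stashed copy of urb->transfer_buffer lands on an aligned boundary past the payload; the free path recomputes the same aligned address to recover it. A userspace sketch of the trick, with a local ALIGN_UP macro and 64 standing in for the cache alignment (both are assumptions of this sketch, not the driver's values):

```c
/* Stash a pointer past the payload at an aligned offset, then recover
 * it the same way. ALIGN_UP stands in for the kernel's PTR_ALIGN. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

#define ALIGNMENT 64
#define ALIGN_UP(p) ((void *)(((uintptr_t)(p) + ALIGNMENT - 1) & \
			      ~(uintptr_t)(ALIGNMENT - 1)))

int main(void)
{
	char payload[100] = "data";
	char *orig = payload;
	size_t len = sizeof(payload);

	/* allocate: payload + worst-case padding + the stashed pointer */
	char *buf = malloc(len + (ALIGNMENT - 1) + sizeof(char *));

	memcpy(buf, payload, len);
	memcpy(ALIGN_UP(buf + len), &orig, sizeof(orig));	/* stash */

	/* later, on free: recover the original pointer the same way */
	char *recovered;
	memcpy(&recovered, ALIGN_UP(buf + len), sizeof(recovered));
	printf("recovered %p (orig %p)\n", (void *)recovered, (void *)orig);
	free(buf);
	return 0;
}
```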
+@@ -2792,7 +2796,7 @@ static int dwc2_assign_and_init_hc(struct dwc2_hsotg *hsotg, struct dwc2_qh *qh)
+ 	chan->dev_addr = dwc2_hcd_get_dev_addr(&urb->pipe_info);
+ 	chan->ep_num = dwc2_hcd_get_ep_num(&urb->pipe_info);
+ 	chan->speed = qh->dev_speed;
+-	chan->max_packet = dwc2_max_packet(qh->maxp);
++	chan->max_packet = qh->maxp;
+ 
+ 	chan->xfer_started = 0;
+ 	chan->halt_status = DWC2_HC_XFER_NO_HALT_STATUS;
+@@ -2870,7 +2874,7 @@ static int dwc2_assign_and_init_hc(struct dwc2_hsotg *hsotg, struct dwc2_qh *qh)
+ 		 * This value may be modified when the transfer is started
+ 		 * to reflect the actual transfer length
+ 		 */
+-		chan->multi_count = dwc2_hb_mult(qh->maxp);
++		chan->multi_count = qh->maxp_mult;
+ 
+ 	if (hsotg->params.dma_desc_enable) {
+ 		chan->desc_list_addr = qh->desc_list_dma;
+@@ -3990,19 +3994,21 @@ static struct dwc2_hcd_urb *dwc2_hcd_urb_alloc(struct dwc2_hsotg *hsotg,
+ 
+ static void dwc2_hcd_urb_set_pipeinfo(struct dwc2_hsotg *hsotg,
+ 				      struct dwc2_hcd_urb *urb, u8 dev_addr,
+-				      u8 ep_num, u8 ep_type, u8 ep_dir, u16 mps)
++				      u8 ep_num, u8 ep_type, u8 ep_dir,
++				      u16 maxp, u16 maxp_mult)
+ {
+ 	if (dbg_perio() ||
+ 	    ep_type == USB_ENDPOINT_XFER_BULK ||
+ 	    ep_type == USB_ENDPOINT_XFER_CONTROL)
+ 		dev_vdbg(hsotg->dev,
+-			 "addr=%d, ep_num=%d, ep_dir=%1x, ep_type=%1x, mps=%d\n",
+-			 dev_addr, ep_num, ep_dir, ep_type, mps);
++			 "addr=%d, ep_num=%d, ep_dir=%1x, ep_type=%1x, maxp=%d (%d mult)\n",
++			 dev_addr, ep_num, ep_dir, ep_type, maxp, maxp_mult);
+ 	urb->pipe_info.dev_addr = dev_addr;
+ 	urb->pipe_info.ep_num = ep_num;
+ 	urb->pipe_info.pipe_type = ep_type;
+ 	urb->pipe_info.pipe_dir = ep_dir;
+-	urb->pipe_info.mps = mps;
++	urb->pipe_info.maxp = maxp;
++	urb->pipe_info.maxp_mult = maxp_mult;
+ }
+ 
+ /*
+@@ -4093,8 +4099,9 @@ void dwc2_hcd_dump_state(struct dwc2_hsotg *hsotg)
+ 					dwc2_hcd_is_pipe_in(&urb->pipe_info) ?
+ 					"IN" : "OUT");
+ 				dev_dbg(hsotg->dev,
+-					"      Max packet size: %d\n",
+-					dwc2_hcd_get_mps(&urb->pipe_info));
++					"      Max packet size: %d (%d mult)\n",
++					dwc2_hcd_get_maxp(&urb->pipe_info),
++					dwc2_hcd_get_maxp_mult(&urb->pipe_info));
+ 				dev_dbg(hsotg->dev,
+ 					"      transfer_buffer: %p\n",
+ 					urb->buf);
+@@ -4661,8 +4668,10 @@ static void dwc2_dump_urb_info(struct usb_hcd *hcd, struct urb *urb,
+ 	}
+ 
+ 	dev_vdbg(hsotg->dev, "  Speed: %s\n", speed);
+-	dev_vdbg(hsotg->dev, "  Max packet size: %d\n",
+-		 usb_maxpacket(urb->dev, urb->pipe, usb_pipeout(urb->pipe)));
++	dev_vdbg(hsotg->dev, "  Max packet size: %d (%d mult)\n",
++		 usb_endpoint_maxp(&urb->ep->desc),
++		 usb_endpoint_maxp_mult(&urb->ep->desc));
++
+ 	dev_vdbg(hsotg->dev, "  Data buffer length: %d\n",
+ 		 urb->transfer_buffer_length);
+ 	dev_vdbg(hsotg->dev, "  Transfer buffer: %p, Transfer DMA: %08lx\n",
+@@ -4745,8 +4754,8 @@ static int _dwc2_hcd_urb_enqueue(struct usb_hcd *hcd, struct urb *urb,
+ 	dwc2_hcd_urb_set_pipeinfo(hsotg, dwc2_urb, usb_pipedevice(urb->pipe),
+ 				  usb_pipeendpoint(urb->pipe), ep_type,
+ 				  usb_pipein(urb->pipe),
+-				  usb_maxpacket(urb->dev, urb->pipe,
+-						!(usb_pipein(urb->pipe))));
++				  usb_endpoint_maxp(&ep->desc),
++				  usb_endpoint_maxp_mult(&ep->desc));
+ 
+ 	buf = urb->transfer_buffer;
+ 
+diff --git a/drivers/usb/dwc2/hcd.h b/drivers/usb/dwc2/hcd.h
+index c089ffa1f0a8..ce6445a06588 100644
+--- a/drivers/usb/dwc2/hcd.h
++++ b/drivers/usb/dwc2/hcd.h
+@@ -171,7 +171,8 @@ struct dwc2_hcd_pipe_info {
+ 	u8 ep_num;
+ 	u8 pipe_type;
+ 	u8 pipe_dir;
+-	u16 mps;
++	u16 maxp;
++	u16 maxp_mult;
+ };
+ 
+ struct dwc2_hcd_iso_packet_desc {
+@@ -264,6 +265,7 @@ struct dwc2_hs_transfer_time {
+  *                       - USB_ENDPOINT_XFER_ISOC
+  * @ep_is_in:           Endpoint direction
+  * @maxp:               Value from wMaxPacketSize field of Endpoint Descriptor
++ * @maxp_mult:          Multiplier for maxp
+  * @dev_speed:          Device speed. One of the following values:
+  *                       - USB_SPEED_LOW
+  *                       - USB_SPEED_FULL
+@@ -340,6 +342,7 @@ struct dwc2_qh {
+ 	u8 ep_type;
+ 	u8 ep_is_in;
+ 	u16 maxp;
++	u16 maxp_mult;
+ 	u8 dev_speed;
+ 	u8 data_toggle;
+ 	u8 ping_state;
+@@ -503,9 +506,14 @@ static inline u8 dwc2_hcd_get_pipe_type(struct dwc2_hcd_pipe_info *pipe)
+ 	return pipe->pipe_type;
+ }
+ 
+-static inline u16 dwc2_hcd_get_mps(struct dwc2_hcd_pipe_info *pipe)
++static inline u16 dwc2_hcd_get_maxp(struct dwc2_hcd_pipe_info *pipe)
++{
++	return pipe->maxp;
++}
++
++static inline u16 dwc2_hcd_get_maxp_mult(struct dwc2_hcd_pipe_info *pipe)
+ {
+-	return pipe->mps;
++	return pipe->maxp_mult;
+ }
+ 
+ static inline u8 dwc2_hcd_get_dev_addr(struct dwc2_hcd_pipe_info *pipe)
+@@ -620,12 +628,6 @@ static inline bool dbg_urb(struct urb *urb)
+ static inline bool dbg_perio(void) { return false; }
+ #endif
+ 
+-/* High bandwidth multiplier as encoded in highspeed endpoint descriptors */
+-#define dwc2_hb_mult(wmaxpacketsize) (1 + (((wmaxpacketsize) >> 11) & 0x03))
+-
+-/* Packet size for any kind of endpoint descriptor */
+-#define dwc2_max_packet(wmaxpacketsize) ((wmaxpacketsize) & 0x07ff)
+-
+ /*
+  * Returns true if frame1 index is greater than frame2 index. The comparison
+  * is done modulo FRLISTEN_64_SIZE. This accounts for the rollover of the
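The removed dwc2_hb_mult()/dwc2_max_packet() macros above document the wMaxPacketSize encoding the driver now decodes once into qh->maxp and qh->maxp_mult: bits 10:0 carry the packet size, bits 12:11 the multiplier minus one. The same decode as a standalone program:

```c
/* The split the removed macros used to do, as a standalone decoder:
 * bits 10:0 = max packet size, bits 12:11 = (multiplier - 1). */
#include <stdio.h>
#include <stdint.h>

static void decode(uint16_t wmaxpacketsize, uint16_t *maxp, uint16_t *mult)
{
	*maxp = wmaxpacketsize & 0x07ff;
	*mult = 1 + ((wmaxpacketsize >> 11) & 0x03);
}

int main(void)
{
	uint16_t maxp, mult;

	decode(0x1400, &maxp, &mult);	/* 1024 bytes, 3 per microframe */
	printf("maxp=%u mult=%u\n", maxp, mult);
	return 0;
}
```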
+diff --git a/drivers/usb/dwc2/hcd_intr.c b/drivers/usb/dwc2/hcd_intr.c
+index 88b5dcf3aefc..a052d39b4375 100644
+--- a/drivers/usb/dwc2/hcd_intr.c
++++ b/drivers/usb/dwc2/hcd_intr.c
+@@ -1617,8 +1617,9 @@ static void dwc2_hc_ahberr_intr(struct dwc2_hsotg *hsotg,
+ 
+ 	dev_err(hsotg->dev, "  Speed: %s\n", speed);
+ 
+-	dev_err(hsotg->dev, "  Max packet size: %d\n",
+-		dwc2_hcd_get_mps(&urb->pipe_info));
++	dev_err(hsotg->dev, "  Max packet size: %d (mult %d)\n",
++		dwc2_hcd_get_maxp(&urb->pipe_info),
++		dwc2_hcd_get_maxp_mult(&urb->pipe_info));
+ 	dev_err(hsotg->dev, "  Data buffer length: %d\n", urb->length);
+ 	dev_err(hsotg->dev, "  Transfer buffer: %p, Transfer DMA: %08lx\n",
+ 		urb->buf, (unsigned long)urb->dma);
+diff --git a/drivers/usb/dwc2/hcd_queue.c b/drivers/usb/dwc2/hcd_queue.c
+index ea3aa640c15c..68bbac64b753 100644
+--- a/drivers/usb/dwc2/hcd_queue.c
++++ b/drivers/usb/dwc2/hcd_queue.c
+@@ -708,7 +708,7 @@ static void dwc2_hs_pmap_unschedule(struct dwc2_hsotg *hsotg,
+ static int dwc2_uframe_schedule_split(struct dwc2_hsotg *hsotg,
+ 				      struct dwc2_qh *qh)
+ {
+-	int bytecount = dwc2_hb_mult(qh->maxp) * dwc2_max_packet(qh->maxp);
++	int bytecount = qh->maxp_mult * qh->maxp;
+ 	int ls_search_slice;
+ 	int err = 0;
+ 	int host_interval_in_sched;
+@@ -1332,7 +1332,7 @@ static int dwc2_check_max_xfer_size(struct dwc2_hsotg *hsotg,
+ 	u32 max_channel_xfer_size;
+ 	int status = 0;
+ 
+-	max_xfer_size = dwc2_max_packet(qh->maxp) * dwc2_hb_mult(qh->maxp);
++	max_xfer_size = qh->maxp * qh->maxp_mult;
+ 	max_channel_xfer_size = hsotg->params.max_transfer_size;
+ 
+ 	if (max_xfer_size > max_channel_xfer_size) {
+@@ -1517,8 +1517,9 @@ static void dwc2_qh_init(struct dwc2_hsotg *hsotg, struct dwc2_qh *qh,
+ 	u32 prtspd = (hprt & HPRT0_SPD_MASK) >> HPRT0_SPD_SHIFT;
+ 	bool do_split = (prtspd == HPRT0_SPD_HIGH_SPEED &&
+ 			 dev_speed != USB_SPEED_HIGH);
+-	int maxp = dwc2_hcd_get_mps(&urb->pipe_info);
+-	int bytecount = dwc2_hb_mult(maxp) * dwc2_max_packet(maxp);
++	int maxp = dwc2_hcd_get_maxp(&urb->pipe_info);
++	int maxp_mult = dwc2_hcd_get_maxp_mult(&urb->pipe_info);
++	int bytecount = maxp_mult * maxp;
+ 	char *speed, *type;
+ 
+ 	/* Initialize QH */
+@@ -1531,6 +1532,7 @@ static void dwc2_qh_init(struct dwc2_hsotg *hsotg, struct dwc2_qh *qh,
+ 
+ 	qh->data_toggle = DWC2_HC_PID_DATA0;
+ 	qh->maxp = maxp;
++	qh->maxp_mult = maxp_mult;
+ 	INIT_LIST_HEAD(&qh->qtd_list);
+ 	INIT_LIST_HEAD(&qh->qh_list_entry);
+ 
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 83869065b802..a0aaf0635359 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1171,6 +1171,10 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920A4_1213, 0xff) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920A4_1214),
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) | RSVD(3) },
++	{ USB_DEVICE(TELIT_VENDOR_ID, 0x1260),
++	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
++	{ USB_DEVICE(TELIT_VENDOR_ID, 0x1261),
++	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, 0x1900),				/* Telit LN940 (QMI) */
+ 	  .driver_info = NCTRL(0) | RSVD(1) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1901, 0xff),	/* Telit LN940 (MBIM) */
+@@ -1772,6 +1776,8 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE(ALINK_VENDOR_ID, SIMCOM_PRODUCT_SIM7100E),
+ 	  .driver_info = RSVD(5) | RSVD(6) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x1e0e, 0x9003, 0xff) },	/* Simcom SIM7500/SIM7600 MBIM mode */
++	{ USB_DEVICE_INTERFACE_CLASS(0x1e0e, 0x9011, 0xff),	/* Simcom SIM7500/SIM7600 RNDIS mode */
++	  .driver_info = RSVD(7) },
+ 	{ USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_X060S_X200),
+ 	  .driver_info = NCTRL(0) | NCTRL(1) | RSVD(4) },
+ 	{ USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_X220_X500D),
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index bb3f9aa4a909..781e7949c45d 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -106,6 +106,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(SANWA_VENDOR_ID, SANWA_PRODUCT_ID) },
+ 	{ USB_DEVICE(ADLINK_VENDOR_ID, ADLINK_ND6530_PRODUCT_ID) },
+ 	{ USB_DEVICE(SMART_VENDOR_ID, SMART_PRODUCT_ID) },
++	{ USB_DEVICE(AT_VENDOR_ID, AT_VTKIT3_PRODUCT_ID) },
+ 	{ }					/* Terminating entry */
+ };
+ 
+diff --git a/drivers/usb/serial/pl2303.h b/drivers/usb/serial/pl2303.h
+index 559941ca884d..b0175f17d1a2 100644
+--- a/drivers/usb/serial/pl2303.h
++++ b/drivers/usb/serial/pl2303.h
+@@ -155,3 +155,6 @@
+ #define SMART_VENDOR_ID	0x0b8c
+ #define SMART_PRODUCT_ID	0x2303
+ 
++/* Allied Telesis VT-Kit3 */
++#define AT_VENDOR_ID		0x0caa
++#define AT_VTKIT3_PRODUCT_ID	0x3001
+diff --git a/drivers/usb/storage/unusual_realtek.h b/drivers/usb/storage/unusual_realtek.h
+index 6b2140f966ef..7e14c2d7cf73 100644
+--- a/drivers/usb/storage/unusual_realtek.h
++++ b/drivers/usb/storage/unusual_realtek.h
+@@ -17,6 +17,11 @@ UNUSUAL_DEV(0x0bda, 0x0138, 0x0000, 0x9999,
+ 		"USB Card Reader",
+ 		USB_SC_DEVICE, USB_PR_DEVICE, init_realtek_cr, 0),
+ 
++UNUSUAL_DEV(0x0bda, 0x0153, 0x0000, 0x9999,
++		"Realtek",
++		"USB Card Reader",
++		USB_SC_DEVICE, USB_PR_DEVICE, init_realtek_cr, 0),
++
+ UNUSUAL_DEV(0x0bda, 0x0158, 0x0000, 0x9999,
+ 		"Realtek",
+ 		"USB Card Reader",
+diff --git a/fs/f2fs/xattr.c b/fs/f2fs/xattr.c
+index 848a785abe25..e791741d193b 100644
+--- a/fs/f2fs/xattr.c
++++ b/fs/f2fs/xattr.c
+@@ -202,12 +202,17 @@ static inline const struct xattr_handler *f2fs_xattr_handler(int index)
+ 	return handler;
+ }
+ 
+-static struct f2fs_xattr_entry *__find_xattr(void *base_addr, int index,
+-					size_t len, const char *name)
++static struct f2fs_xattr_entry *__find_xattr(void *base_addr,
++				void *last_base_addr, int index,
++				size_t len, const char *name)
+ {
+ 	struct f2fs_xattr_entry *entry;
+ 
+ 	list_for_each_xattr(entry, base_addr) {
++		if ((void *)(entry) + sizeof(__u32) > last_base_addr ||
++			(void *)XATTR_NEXT_ENTRY(entry) > last_base_addr)
++			return NULL;
++
+ 		if (entry->e_name_index != index)
+ 			continue;
+ 		if (entry->e_name_len != len)
+@@ -297,20 +302,22 @@ static int lookup_all_xattrs(struct inode *inode, struct page *ipage,
+ 				const char *name, struct f2fs_xattr_entry **xe,
+ 				void **base_addr, int *base_size)
+ {
+-	void *cur_addr, *txattr_addr, *last_addr = NULL;
++	void *cur_addr, *txattr_addr, *last_txattr_addr;
++	void *last_addr = NULL;
+ 	nid_t xnid = F2FS_I(inode)->i_xattr_nid;
+-	unsigned int size = xnid ? VALID_XATTR_BLOCK_SIZE : 0;
+ 	unsigned int inline_size = inline_xattr_size(inode);
+ 	int err = 0;
+ 
+-	if (!size && !inline_size)
++	if (!xnid && !inline_size)
+ 		return -ENODATA;
+ 
+-	*base_size = inline_size + size + XATTR_PADDING_SIZE;
++	*base_size = XATTR_SIZE(xnid, inode) + XATTR_PADDING_SIZE;
+ 	txattr_addr = f2fs_kzalloc(F2FS_I_SB(inode), *base_size, GFP_NOFS);
+ 	if (!txattr_addr)
+ 		return -ENOMEM;
+ 
++	last_txattr_addr = (void *)txattr_addr + XATTR_SIZE(xnid, inode);
++
+ 	/* read from inline xattr */
+ 	if (inline_size) {
+ 		err = read_inline_xattr(inode, ipage, txattr_addr);
+@@ -337,7 +344,11 @@ static int lookup_all_xattrs(struct inode *inode, struct page *ipage,
+ 	else
+ 		cur_addr = txattr_addr;
+ 
+-	*xe = __find_xattr(cur_addr, index, len, name);
++	*xe = __find_xattr(cur_addr, last_txattr_addr, index, len, name);
++	if (!*xe) {
++		err = -EFAULT;
++		goto out;
++	}
+ check:
+ 	if (IS_XATTR_LAST_ENTRY(*xe)) {
+ 		err = -ENODATA;
+@@ -581,7 +592,8 @@ static int __f2fs_setxattr(struct inode *inode, int index,
+ 			struct page *ipage, int flags)
+ {
+ 	struct f2fs_xattr_entry *here, *last;
+-	void *base_addr;
++	void *base_addr, *last_base_addr;
++	nid_t xnid = F2FS_I(inode)->i_xattr_nid;
+ 	int found, newsize;
+ 	size_t len;
+ 	__u32 new_hsize;
+@@ -605,8 +617,14 @@ static int __f2fs_setxattr(struct inode *inode, int index,
+ 	if (error)
+ 		return error;
+ 
++	last_base_addr = (void *)base_addr + XATTR_SIZE(xnid, inode);
++
+ 	/* find entry with wanted name. */
+-	here = __find_xattr(base_addr, index, len, name);
++	here = __find_xattr(base_addr, last_base_addr, index, len, name);
++	if (!here) {
++		error = -EFAULT;
++		goto exit;
++	}
+ 
+ 	found = IS_XATTR_LAST_ENTRY(here) ? 0 : 1;
+ 
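The f2fs changes above pass the end of the valid xattr region into __find_xattr() so a corrupted on-disk entry can no longer carry the scan past the buffer; a NULL return is mapped to -EFAULT by the callers. The same bounds-checked walk over variable-length records, generically (the record layout is invented for this sketch):

```c
/* Bounds-checked walk in the spirit of the patched __find_xattr():
 * reject the list as corrupt if an entry header or the step to the
 * next entry would leave the buffer. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

struct rec { uint8_t len; char name[]; };	/* len = name bytes */

static const struct rec *find(const void *base, const void *end,
			      const char *name)
{
	const uint8_t *p = base;

	while (p + sizeof(struct rec) <= (const uint8_t *)end) {
		const struct rec *r = (const struct rec *)p;

		if (r->len == 0)
			return NULL;		/* terminator */
		if (p + sizeof(*r) + r->len > (const uint8_t *)end)
			return NULL;		/* corrupt: runs past end */
		if (r->len == strlen(name) && !memcmp(r->name, name, r->len))
			return r;
		p += sizeof(*r) + r->len;
	}
	return NULL;				/* corrupt: no terminator */
}

int main(void)
{
	uint8_t buf[] = { 3, 'f', 'o', 'o', 2, 'h', 'i', 0 };

	printf("found: %s\n",
	       find(buf, buf + sizeof(buf), "hi") ? "yes" : "no");
	return 0;
}
```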
+diff --git a/fs/f2fs/xattr.h b/fs/f2fs/xattr.h
+index 9172ee082ca8..a90920e2f949 100644
+--- a/fs/f2fs/xattr.h
++++ b/fs/f2fs/xattr.h
+@@ -71,6 +71,8 @@ struct f2fs_xattr_entry {
+ 				entry = XATTR_NEXT_ENTRY(entry))
+ #define VALID_XATTR_BLOCK_SIZE	(PAGE_SIZE - sizeof(struct node_footer))
+ #define XATTR_PADDING_SIZE	(sizeof(__u32))
++#define XATTR_SIZE(x,i)		(((x) ? VALID_XATTR_BLOCK_SIZE : 0) +	\
++						(inline_xattr_size(i)))
+ #define MIN_OFFSET(i)		XATTR_ALIGN(inline_xattr_size(i) +	\
+ 						VALID_XATTR_BLOCK_SIZE)
+ 
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 28269a0c5037..4e32a033394c 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -2633,8 +2633,10 @@ static void io_ring_ctx_free(struct io_ring_ctx *ctx)
+ 	io_sqe_files_unregister(ctx);
+ 
+ #if defined(CONFIG_UNIX)
+-	if (ctx->ring_sock)
++	if (ctx->ring_sock) {
++		ctx->ring_sock->file = NULL; /* so that iput() is called */
+ 		sock_release(ctx->ring_sock);
++	}
+ #endif
+ 
+ 	io_mem_free(ctx->sq_ring);
+diff --git a/fs/ocfs2/dcache.c b/fs/ocfs2/dcache.c
+index 290373024d9d..e8ace3b54e9c 100644
+--- a/fs/ocfs2/dcache.c
++++ b/fs/ocfs2/dcache.c
+@@ -310,6 +310,18 @@ int ocfs2_dentry_attach_lock(struct dentry *dentry,
+ 
+ out_attach:
+ 	spin_lock(&dentry_attach_lock);
++	if (unlikely(dentry->d_fsdata && !alias)) {
++		/* d_fsdata is set by a racing thread which is doing
++		 * the same thing as this thread is doing. Leave the racing
++		 * thread going ahead and we return here.
++		 */
++		spin_unlock(&dentry_attach_lock);
++		iput(dl->dl_inode);
++		ocfs2_lock_res_free(&dl->dl_lockres);
++		kfree(dl);
++		return 0;
++	}
++
+ 	dentry->d_fsdata = dl;
+ 	dl->dl_count++;
+ 	spin_unlock(&dentry_attach_lock);
+diff --git a/include/drm/drm_edid.h b/include/drm/drm_edid.h
+index 8dc1a081fb36..19b15fd28854 100644
+--- a/include/drm/drm_edid.h
++++ b/include/drm/drm_edid.h
+@@ -465,6 +465,7 @@ struct edid *drm_get_edid_switcheroo(struct drm_connector *connector,
+ 				     struct i2c_adapter *adapter);
+ struct edid *drm_edid_duplicate(const struct edid *edid);
+ int drm_add_edid_modes(struct drm_connector *connector, struct edid *edid);
++int drm_add_override_edid_modes(struct drm_connector *connector);
+ 
+ u8 drm_match_cea_mode(const struct drm_display_mode *to_match);
+ enum hdmi_picture_aspect drm_get_cea_aspect_ratio(const u8 video_code);
+diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
+index 81f58b4a5418..b6a8df3b7e96 100644
+--- a/include/linux/cgroup.h
++++ b/include/linux/cgroup.h
+@@ -487,7 +487,7 @@ static inline struct cgroup_subsys_state *task_css(struct task_struct *task,
+  *
+  * Find the css for the (@task, @subsys_id) combination, increment a
+  * reference on and return it.  This function is guaranteed to return a
+- * valid css.
++ * valid css.  The returned css may already have been offlined.
+  */
+ static inline struct cgroup_subsys_state *
+ task_get_css(struct task_struct *task, int subsys_id)
+@@ -497,7 +497,13 @@ task_get_css(struct task_struct *task, int subsys_id)
+ 	rcu_read_lock();
+ 	while (true) {
+ 		css = task_css(task, subsys_id);
+-		if (likely(css_tryget_online(css)))
++		/*
++		 * Can't use css_tryget_online() here.  A task which has
++		 * PF_EXITING set may stay associated with an offline css.
++		 * If such task calls this function, css_tryget_online()
++		 * will keep failing.
++		 */
++		if (likely(css_tryget(css)))
+ 			break;
+ 		cpu_relax();
+ 	}
+diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
+index e78281d07b70..63fedd85c6c5 100644
+--- a/include/linux/cpuhotplug.h
++++ b/include/linux/cpuhotplug.h
+@@ -101,6 +101,7 @@ enum cpuhp_state {
+ 	CPUHP_AP_IRQ_BCM2836_STARTING,
+ 	CPUHP_AP_IRQ_MIPS_GIC_STARTING,
+ 	CPUHP_AP_ARM_MVEBU_COHERENCY,
++	CPUHP_AP_MICROCODE_LOADER,
+ 	CPUHP_AP_PERF_X86_AMD_UNCORE_STARTING,
+ 	CPUHP_AP_PERF_X86_STARTING,
+ 	CPUHP_AP_PERF_X86_AMD_IBS_STARTING,
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index ac0c70b4ce10..5a24ebfb5b47 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -894,7 +894,7 @@ struct hid_field *hidinput_get_led_field(struct hid_device *hid);
+ unsigned int hidinput_count_leds(struct hid_device *hid);
+ __s32 hidinput_calc_abs_res(const struct hid_field *field, __u16 code);
+ void hid_output_report(struct hid_report *report, __u8 *data);
+-void __hid_request(struct hid_device *hid, struct hid_report *rep, int reqtype);
++int __hid_request(struct hid_device *hid, struct hid_report *rep, int reqtype);
+ u8 *hid_alloc_report_buf(struct hid_report *report, gfp_t flags);
+ struct hid_device *hid_allocate_device(void);
+ struct hid_report *hid_register_report(struct hid_device *device,
+diff --git a/kernel/Makefile b/kernel/Makefile
+index 6c57e78817da..62471e75a2b0 100644
+--- a/kernel/Makefile
++++ b/kernel/Makefile
+@@ -30,6 +30,7 @@ KCOV_INSTRUMENT_extable.o := n
+ # Don't self-instrument.
+ KCOV_INSTRUMENT_kcov.o := n
+ KASAN_SANITIZE_kcov.o := n
++CFLAGS_kcov.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
+ 
+ # cond_syscall is currently not LTO compatible
+ CFLAGS_sys_ni.o = $(DISABLE_LTO)
+diff --git a/kernel/cred.c b/kernel/cred.c
+index 45d77284aed0..07e069d00696 100644
+--- a/kernel/cred.c
++++ b/kernel/cred.c
+@@ -450,6 +450,15 @@ int commit_creds(struct cred *new)
+ 		if (task->mm)
+ 			set_dumpable(task->mm, suid_dumpable);
+ 		task->pdeath_signal = 0;
++		/*
++		 * If a task drops privileges and becomes nondumpable,
++		 * the dumpability change must become visible before
++		 * the credential change; otherwise, a __ptrace_may_access()
++		 * racing with this change may be able to attach to a task it
++		 * shouldn't be able to attach to (as if the task had dropped
++		 * privileges without becoming nondumpable).
++		 * Pairs with a read barrier in __ptrace_may_access().
++		 */
+ 		smp_wmb();
+ 	}
+ 
+diff --git a/kernel/ptrace.c b/kernel/ptrace.c
+index 6f357f4fc859..c9b4646ad375 100644
+--- a/kernel/ptrace.c
++++ b/kernel/ptrace.c
+@@ -323,6 +323,16 @@ static int __ptrace_may_access(struct task_struct *task, unsigned int mode)
+ 	return -EPERM;
+ ok:
+ 	rcu_read_unlock();
++	/*
++	 * If a task drops privileges and becomes nondumpable (through a syscall
++	 * like setresuid()) while we are trying to access it, we must ensure
++	 * that the dumpability is read after the credentials; otherwise,
++	 * we may be able to attach to a task that we shouldn't be able to
++	 * attach to (as if the task had dropped privileges without becoming
++	 * nondumpable).
++	 * Pairs with a write barrier in commit_creds().
++	 */
++	smp_rmb();
+ 	mm = task->mm;
+ 	if (mm &&
+ 	    ((get_dumpable(mm) != SUID_DUMP_USER) &&
+@@ -704,6 +714,10 @@ static int ptrace_peek_siginfo(struct task_struct *child,
+ 	if (arg.nr < 0)
+ 		return -EINVAL;
+ 
++	/* Ensure arg.off fits in an unsigned long */
++	if (arg.off > ULONG_MAX)
++		return 0;
++
+ 	if (arg.flags & PTRACE_PEEKSIGINFO_SHARED)
+ 		pending = &child->signal->shared_pending;
+ 	else
+@@ -711,18 +725,20 @@ static int ptrace_peek_siginfo(struct task_struct *child,
+ 
+ 	for (i = 0; i < arg.nr; ) {
+ 		kernel_siginfo_t info;
+-		s32 off = arg.off + i;
++		unsigned long off = arg.off + i;
++		bool found = false;
+ 
+ 		spin_lock_irq(&child->sighand->siglock);
+ 		list_for_each_entry(q, &pending->list, list) {
+ 			if (!off--) {
++				found = true;
+ 				copy_siginfo(&info, &q->info);
+ 				break;
+ 			}
+ 		}
+ 		spin_unlock_irq(&child->sighand->siglock);
+ 
+-		if (off >= 0) /* beyond the end of the list */
++		if (!found) /* beyond the end of the list */
+ 			break;
+ 
+ #ifdef CONFIG_COMPAT
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index f136c56c2805..7b2b0a3c720f 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -807,17 +807,18 @@ ktime_t ktime_get_coarse_with_offset(enum tk_offsets offs)
+ 	struct timekeeper *tk = &tk_core.timekeeper;
+ 	unsigned int seq;
+ 	ktime_t base, *offset = offsets[offs];
++	u64 nsecs;
+ 
+ 	WARN_ON(timekeeping_suspended);
+ 
+ 	do {
+ 		seq = read_seqcount_begin(&tk_core.seq);
+ 		base = ktime_add(tk->tkr_mono.base, *offset);
++		nsecs = tk->tkr_mono.xtime_nsec >> tk->tkr_mono.shift;
+ 
+ 	} while (read_seqcount_retry(&tk_core.seq, seq));
+ 
+-	return base;
+-
++	return base + nsecs;
+ }
+ EXPORT_SYMBOL_GPL(ktime_get_coarse_with_offset);
+ 
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 0a200d42fa96..f50d57eac875 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -1815,6 +1815,9 @@ static u64 hist_field_var_ref(struct hist_field *hist_field,
+ 	struct hist_elt_data *elt_data;
+ 	u64 var_val = 0;
+ 
++	if (WARN_ON_ONCE(!elt))
++		return var_val;
++
+ 	elt_data = elt->private_data;
+ 	var_val = elt_data->var_ref_vals[hist_field->var_ref_idx];
+ 
+diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
+index be78d99ee6bc..9cd4d0d22a9e 100644
+--- a/kernel/trace/trace_uprobe.c
++++ b/kernel/trace/trace_uprobe.c
+@@ -434,10 +434,17 @@ static int trace_uprobe_create(int argc, const char **argv)
+ 	ret = 0;
+ 	ref_ctr_offset = 0;
+ 
+-	/* argc must be >= 1 */
+-	if (argv[0][0] == 'r')
++	switch (argv[0][0]) {
++	case 'r':
+ 		is_return = true;
+-	else if (argv[0][0] != 'p' || argc < 2)
++		break;
++	case 'p':
++		break;
++	default:
++		return -ECANCELED;
++	}
++
++	if (argc < 2)
+ 		return -ECANCELED;
+ 
+ 	if (argv[0][1] == ':')
+diff --git a/mm/list_lru.c b/mm/list_lru.c
+index d3b538146efd..6468eb8d4d4d 100644
+--- a/mm/list_lru.c
++++ b/mm/list_lru.c
+@@ -353,7 +353,7 @@ static int __memcg_init_list_lru_node(struct list_lru_memcg *memcg_lrus,
+ 	}
+ 	return 0;
+ fail:
+-	__memcg_destroy_list_lru_node(memcg_lrus, begin, i - 1);
++	__memcg_destroy_list_lru_node(memcg_lrus, begin, i);
+ 	return -ENOMEM;
+ }
+ 
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index a815f73ee4d5..3fb1d75804de 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -1502,7 +1502,7 @@ unsigned long reclaim_clean_pages_from_list(struct zone *zone,
+ 
+ 	list_for_each_entry_safe(page, next, page_list, lru) {
+ 		if (page_is_file_cache(page) && !PageDirty(page) &&
+-		    !__PageMovable(page)) {
++		    !__PageMovable(page) && !PageUnevictable(page)) {
+ 			ClearPageActive(page);
+ 			list_move(&page->lru, &clean_pages);
+ 		}
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index cc94d921476c..93bffaad2135 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -411,6 +411,7 @@ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb)
+ 	sk_mem_charge(sk, skb->len);
+ 	copied = skb->len;
+ 	msg->sg.start = 0;
++	msg->sg.size = copied;
+ 	msg->sg.end = num_sge == MAX_MSG_FRAGS ? 0 : num_sge;
+ 	msg->skb = skb;
+ 
+@@ -554,8 +555,10 @@ static void sk_psock_destroy_deferred(struct work_struct *gc)
+ 	struct sk_psock *psock = container_of(gc, struct sk_psock, gc);
+ 
+ 	/* No sk_callback_lock since already detached. */
+-	strp_stop(&psock->parser.strp);
+-	strp_done(&psock->parser.strp);
++
++	/* Parser has been stopped */
++	if (psock->progs.skb_parser)
++		strp_done(&psock->parser.strp);
+ 
+ 	cancel_work_sync(&psock->work);
+ 
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index 1bb7321a256d..3d1e15401384 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -27,7 +27,10 @@ static int tcp_bpf_wait_data(struct sock *sk, struct sk_psock *psock,
+ 			     int flags, long timeo, int *err)
+ {
+ 	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+-	int ret;
++	int ret = 0;
++
++	if (!timeo)
++		return ret;
+ 
+ 	add_wait_queue(sk_sleep(sk), &wait);
+ 	sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+@@ -528,8 +531,6 @@ static void tcp_bpf_remove(struct sock *sk, struct sk_psock *psock)
+ {
+ 	struct sk_psock_link *link;
+ 
+-	sk_psock_cork_free(psock);
+-	__sk_psock_purge_ingress_msg(psock);
+ 	while ((link = sk_psock_link_pop(psock))) {
+ 		sk_psock_unlink(sk, link);
+ 		sk_psock_free_link(link);
+diff --git a/security/selinux/avc.c b/security/selinux/avc.c
+index 8346a4f7c5d7..a99be508f93d 100644
+--- a/security/selinux/avc.c
++++ b/security/selinux/avc.c
+@@ -739,14 +739,20 @@ static void avc_audit_post_callback(struct audit_buffer *ab, void *a)
+ 	rc = security_sid_to_context_inval(sad->state, sad->ssid, &scontext,
+ 					   &scontext_len);
+ 	if (!rc && scontext) {
+-		audit_log_format(ab, " srawcon=%s", scontext);
++		if (scontext_len && scontext[scontext_len - 1] == '\0')
++			scontext_len--;
++		audit_log_format(ab, " srawcon=");
++		audit_log_n_untrustedstring(ab, scontext, scontext_len);
+ 		kfree(scontext);
+ 	}
+ 
+ 	rc = security_sid_to_context_inval(sad->state, sad->tsid, &scontext,
+ 					   &scontext_len);
+ 	if (!rc && scontext) {
+-		audit_log_format(ab, " trawcon=%s", scontext);
++		if (scontext_len && scontext[scontext_len - 1] == '\0')
++			scontext_len--;
++		audit_log_format(ab, " trawcon=");
++		audit_log_n_untrustedstring(ab, scontext, scontext_len);
+ 		kfree(scontext);
+ 	}
+ }
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 28bff30c2f15..614bc753822c 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -1048,15 +1048,24 @@ static int selinux_add_mnt_opt(const char *option, const char *val, int len,
+ 	if (token == Opt_error)
+ 		return -EINVAL;
+ 
+-	if (token != Opt_seclabel)
++	if (token != Opt_seclabel) {
+ 		val = kmemdup_nul(val, len, GFP_KERNEL);
++		if (!val) {
++			rc = -ENOMEM;
++			goto free_opt;
++		}
++	}
+ 	rc = selinux_add_opt(token, val, mnt_opts);
+ 	if (unlikely(rc)) {
+ 		kfree(val);
+-		if (*mnt_opts) {
+-			selinux_free_mnt_opts(*mnt_opts);
+-			*mnt_opts = NULL;
+-		}
++		goto free_opt;
++	}
++	return rc;
++
++free_opt:
++	if (*mnt_opts) {
++		selinux_free_mnt_opts(*mnt_opts);
++		*mnt_opts = NULL;
+ 	}
+ 	return rc;
+ }
+@@ -2603,10 +2612,11 @@ static int selinux_sb_eat_lsm_opts(char *options, void **mnt_opts)
+ 	char *from = options;
+ 	char *to = options;
+ 	bool first = true;
++	int rc;
+ 
+ 	while (1) {
+ 		int len = opt_len(from);
+-		int token, rc;
++		int token;
+ 		char *arg = NULL;
+ 
+ 		token = match_opt_prefix(from, len, &arg);
+@@ -2622,15 +2632,15 @@ static int selinux_sb_eat_lsm_opts(char *options, void **mnt_opts)
+ 						*q++ = c;
+ 				}
+ 				arg = kmemdup_nul(arg, q - arg, GFP_KERNEL);
++				if (!arg) {
++					rc = -ENOMEM;
++					goto free_opt;
++				}
+ 			}
+ 			rc = selinux_add_opt(token, arg, mnt_opts);
+ 			if (unlikely(rc)) {
+ 				kfree(arg);
+-				if (*mnt_opts) {
+-					selinux_free_mnt_opts(*mnt_opts);
+-					*mnt_opts = NULL;
+-				}
+-				return rc;
++				goto free_opt;
+ 			}
+ 		} else {
+ 			if (!first) {	// copy with preceding comma
+@@ -2648,6 +2658,13 @@ static int selinux_sb_eat_lsm_opts(char *options, void **mnt_opts)
+ 	}
+ 	*to = '\0';
+ 	return 0;
++
++free_opt:
++	if (*mnt_opts) {
++		selinux_free_mnt_opts(*mnt_opts);
++		*mnt_opts = NULL;
++	}
++	return rc;
+ }
+ 
+ static int selinux_sb_remount(struct super_block *sb, void *mnt_opts)
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 5c1613519d5a..c51f4bb6c1e7 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -67,6 +67,7 @@ static struct {
+ 	int len;
+ 	int opt;
+ } smk_mount_opts[] = {
++	{"smackfsdef", sizeof("smackfsdef") - 1, Opt_fsdefault},
+ 	A(fsdefault), A(fsfloor), A(fshat), A(fsroot), A(fstransmute)
+ };
+ #undef A
+@@ -681,11 +682,12 @@ static int smack_fs_context_dup(struct fs_context *fc,
+ }
+ 
+ static const struct fs_parameter_spec smack_param_specs[] = {
+-	fsparam_string("fsdefault",	Opt_fsdefault),
+-	fsparam_string("fsfloor",	Opt_fsfloor),
+-	fsparam_string("fshat",		Opt_fshat),
+-	fsparam_string("fsroot",	Opt_fsroot),
+-	fsparam_string("fstransmute",	Opt_fstransmute),
++	fsparam_string("smackfsdef",		Opt_fsdefault),
++	fsparam_string("smackfsdefault",	Opt_fsdefault),
++	fsparam_string("smackfsfloor",		Opt_fsfloor),
++	fsparam_string("smackfshat",		Opt_fshat),
++	fsparam_string("smackfsroot",		Opt_fsroot),
++	fsparam_string("smackfstransmute",	Opt_fstransmute),
+ 	{}
+ };
+ 
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index 38e7deab6384..c99e1b77a45b 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -1900,20 +1900,14 @@ static int snd_seq_ioctl_get_subscription(struct snd_seq_client *client,
+ 	int result;
+ 	struct snd_seq_client *sender = NULL;
+ 	struct snd_seq_client_port *sport = NULL;
+-	struct snd_seq_subscribers *p;
+ 
+ 	result = -EINVAL;
+ 	if ((sender = snd_seq_client_use_ptr(subs->sender.client)) == NULL)
+ 		goto __end;
+ 	if ((sport = snd_seq_port_use_ptr(sender, subs->sender.port)) == NULL)
+ 		goto __end;
+-	p = snd_seq_port_get_subscription(&sport->c_src, &subs->dest);
+-	if (p) {
+-		result = 0;
+-		*subs = p->info;
+-	} else
+-		result = -ENOENT;
+-
++	result = snd_seq_port_get_subscription(&sport->c_src, &subs->dest,
++					       subs);
+       __end:
+       	if (sport)
+ 		snd_seq_port_unlock(sport);
+diff --git a/sound/core/seq/seq_ports.c b/sound/core/seq/seq_ports.c
+index da31aa8e216e..16289aefb443 100644
+--- a/sound/core/seq/seq_ports.c
++++ b/sound/core/seq/seq_ports.c
+@@ -635,20 +635,23 @@ int snd_seq_port_disconnect(struct snd_seq_client *connector,
+ 
+ 
+ /* get matched subscriber */
+-struct snd_seq_subscribers *snd_seq_port_get_subscription(struct snd_seq_port_subs_info *src_grp,
+-							  struct snd_seq_addr *dest_addr)
++int snd_seq_port_get_subscription(struct snd_seq_port_subs_info *src_grp,
++				  struct snd_seq_addr *dest_addr,
++				  struct snd_seq_port_subscribe *subs)
+ {
+-	struct snd_seq_subscribers *s, *found = NULL;
++	struct snd_seq_subscribers *s;
++	int err = -ENOENT;
+ 
+ 	down_read(&src_grp->list_mutex);
+ 	list_for_each_entry(s, &src_grp->list_head, src_list) {
+ 		if (addr_match(dest_addr, &s->info.dest)) {
+-			found = s;
++			*subs = s->info;
++			err = 0;
+ 			break;
+ 		}
+ 	}
+ 	up_read(&src_grp->list_mutex);
+-	return found;
++	return err;
+ }
+ 
+ /*
+diff --git a/sound/core/seq/seq_ports.h b/sound/core/seq/seq_ports.h
+index 26bd71f36c41..06003b36652e 100644
+--- a/sound/core/seq/seq_ports.h
++++ b/sound/core/seq/seq_ports.h
+@@ -135,7 +135,8 @@ int snd_seq_port_subscribe(struct snd_seq_client_port *port,
+ 			   struct snd_seq_port_subscribe *info);
+ 
+ /* get matched subscriber */
+-struct snd_seq_subscribers *snd_seq_port_get_subscription(struct snd_seq_port_subs_info *src_grp,
+-							  struct snd_seq_addr *dest_addr);
++int snd_seq_port_get_subscription(struct snd_seq_port_subs_info *src_grp,
++				  struct snd_seq_addr *dest_addr,
++				  struct snd_seq_port_subscribe *subs);
+ 
+ #endif
+diff --git a/sound/firewire/motu/motu-stream.c b/sound/firewire/motu/motu-stream.c
+index 73e7a5e527fc..483a8771d502 100644
+--- a/sound/firewire/motu/motu-stream.c
++++ b/sound/firewire/motu/motu-stream.c
+@@ -345,7 +345,7 @@ static void destroy_stream(struct snd_motu *motu,
+ 	}
+ 
+ 	amdtp_stream_destroy(stream);
+-	fw_iso_resources_free(resources);
++	fw_iso_resources_destroy(resources);
+ }
+ 
+ int snd_motu_stream_init_duplex(struct snd_motu *motu)
+diff --git a/sound/firewire/oxfw/oxfw.c b/sound/firewire/oxfw/oxfw.c
+index 3d27f3378d5d..b4bef574929d 100644
+--- a/sound/firewire/oxfw/oxfw.c
++++ b/sound/firewire/oxfw/oxfw.c
+@@ -148,9 +148,6 @@ static int detect_quirks(struct snd_oxfw *oxfw)
+ 		oxfw->midi_input_ports = 0;
+ 		oxfw->midi_output_ports = 0;
+ 
+-		/* Output stream exists but no data channels are useful. */
+-		oxfw->has_output = false;
+-
+ 		return snd_oxfw_scs1x_add(oxfw);
+ 	}
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 239697421118..d0e543ff6b64 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -4084,18 +4084,19 @@ static struct coef_fw alc225_pre_hsmode[] = {
+ static void alc_headset_mode_unplugged(struct hda_codec *codec)
+ {
+ 	static struct coef_fw coef0255[] = {
++		WRITE_COEF(0x1b, 0x0c0b), /* LDO and MISC control */
+ 		WRITE_COEF(0x45, 0xd089), /* UAJ function set to menual mode */
+ 		UPDATE_COEFEX(0x57, 0x05, 1<<14, 0), /* Direct Drive HP Amp control(Set to verb control)*/
+ 		WRITE_COEF(0x06, 0x6104), /* Set MIC2 Vref gate with HP */
+ 		WRITE_COEFEX(0x57, 0x03, 0x8aa6), /* Direct Drive HP Amp control */
+ 		{}
+ 	};
+-	static struct coef_fw coef0255_1[] = {
+-		WRITE_COEF(0x1b, 0x0c0b), /* LDO and MISC control */
+-		{}
+-	};
+ 	static struct coef_fw coef0256[] = {
+ 		WRITE_COEF(0x1b, 0x0c4b), /* LDO and MISC control */
++		WRITE_COEF(0x45, 0xd089), /* UAJ function set to menual mode */
++		WRITE_COEF(0x06, 0x6104), /* Set MIC2 Vref gate with HP */
++		WRITE_COEFEX(0x57, 0x03, 0x09a3), /* Direct Drive HP Amp control */
++		UPDATE_COEFEX(0x57, 0x05, 1<<14, 0), /* Direct Drive HP Amp control(Set to verb control)*/
+ 		{}
+ 	};
+ 	static struct coef_fw coef0233[] = {
+@@ -4158,13 +4159,11 @@ static void alc_headset_mode_unplugged(struct hda_codec *codec)
+ 
+ 	switch (codec->core.vendor_id) {
+ 	case 0x10ec0255:
+-		alc_process_coef_fw(codec, coef0255_1);
+ 		alc_process_coef_fw(codec, coef0255);
+ 		break;
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+ 		alc_process_coef_fw(codec, coef0256);
+-		alc_process_coef_fw(codec, coef0255);
+ 		break;
+ 	case 0x10ec0234:
+ 	case 0x10ec0274:
+@@ -4217,6 +4216,12 @@ static void alc_headset_mode_mic_in(struct hda_codec *codec, hda_nid_t hp_pin,
+ 		WRITE_COEF(0x06, 0x6100), /* Set MIC2 Vref gate to normal */
+ 		{}
+ 	};
++	static struct coef_fw coef0256[] = {
++		UPDATE_COEFEX(0x57, 0x05, 1<<14, 1<<14), /* Direct Drive HP Amp control(Set to verb control)*/
++		WRITE_COEFEX(0x57, 0x03, 0x09a3),
++		WRITE_COEF(0x06, 0x6100), /* Set MIC2 Vref gate to normal */
++		{}
++	};
+ 	static struct coef_fw coef0233[] = {
+ 		UPDATE_COEF(0x35, 0, 1<<14),
+ 		WRITE_COEF(0x06, 0x2100),
+@@ -4264,14 +4269,19 @@ static void alc_headset_mode_mic_in(struct hda_codec *codec, hda_nid_t hp_pin,
+ 	};
+ 
+ 	switch (codec->core.vendor_id) {
+-	case 0x10ec0236:
+ 	case 0x10ec0255:
+-	case 0x10ec0256:
+ 		alc_write_coef_idx(codec, 0x45, 0xc489);
+ 		snd_hda_set_pin_ctl_cache(codec, hp_pin, 0);
+ 		alc_process_coef_fw(codec, coef0255);
+ 		snd_hda_set_pin_ctl_cache(codec, mic_pin, PIN_VREF50);
+ 		break;
++	case 0x10ec0236:
++	case 0x10ec0256:
++		alc_write_coef_idx(codec, 0x45, 0xc489);
++		snd_hda_set_pin_ctl_cache(codec, hp_pin, 0);
++		alc_process_coef_fw(codec, coef0256);
++		snd_hda_set_pin_ctl_cache(codec, mic_pin, PIN_VREF50);
++		break;
+ 	case 0x10ec0234:
+ 	case 0x10ec0274:
+ 	case 0x10ec0294:
+@@ -4353,6 +4363,14 @@ static void alc_headset_mode_default(struct hda_codec *codec)
+ 		WRITE_COEF(0x49, 0x0049),
+ 		{}
+ 	};
++	static struct coef_fw coef0256[] = {
++		WRITE_COEF(0x45, 0xc489),
++		WRITE_COEFEX(0x57, 0x03, 0x0da3),
++		WRITE_COEF(0x49, 0x0049),
++		UPDATE_COEFEX(0x57, 0x05, 1<<14, 0), /* Direct Drive HP Amp control(Set to verb control)*/
++		WRITE_COEF(0x06, 0x6100),
++		{}
++	};
+ 	static struct coef_fw coef0233[] = {
+ 		WRITE_COEF(0x06, 0x2100),
+ 		WRITE_COEF(0x32, 0x4ea3),
+@@ -4403,11 +4421,16 @@ static void alc_headset_mode_default(struct hda_codec *codec)
+ 		alc_process_coef_fw(codec, alc225_pre_hsmode);
+ 		alc_process_coef_fw(codec, coef0225);
+ 		break;
+-	case 0x10ec0236:
+ 	case 0x10ec0255:
+-	case 0x10ec0256:
+ 		alc_process_coef_fw(codec, coef0255);
+ 		break;
++	case 0x10ec0236:
++	case 0x10ec0256:
++		alc_write_coef_idx(codec, 0x1b, 0x0e4b);
++		alc_write_coef_idx(codec, 0x45, 0xc089);
++		msleep(50);
++		alc_process_coef_fw(codec, coef0256);
++		break;
+ 	case 0x10ec0234:
+ 	case 0x10ec0274:
+ 	case 0x10ec0294:
+@@ -4451,8 +4474,7 @@ static void alc_headset_mode_ctia(struct hda_codec *codec)
+ 	};
+ 	static struct coef_fw coef0256[] = {
+ 		WRITE_COEF(0x45, 0xd489), /* Set to CTIA type */
+-		WRITE_COEF(0x1b, 0x0c6b),
+-		WRITE_COEFEX(0x57, 0x03, 0x8ea6),
++		WRITE_COEF(0x1b, 0x0e6b),
+ 		{}
+ 	};
+ 	static struct coef_fw coef0233[] = {
+@@ -4570,8 +4592,7 @@ static void alc_headset_mode_omtp(struct hda_codec *codec)
+ 	};
+ 	static struct coef_fw coef0256[] = {
+ 		WRITE_COEF(0x45, 0xe489), /* Set to OMTP Type */
+-		WRITE_COEF(0x1b, 0x0c6b),
+-		WRITE_COEFEX(0x57, 0x03, 0x8ea6),
++		WRITE_COEF(0x1b, 0x0e6b),
+ 		{}
+ 	};
+ 	static struct coef_fw coef0233[] = {
+@@ -4703,13 +4724,37 @@ static void alc_determine_headset_type(struct hda_codec *codec)
+ 	};
+ 
+ 	switch (codec->core.vendor_id) {
+-	case 0x10ec0236:
+ 	case 0x10ec0255:
++		alc_process_coef_fw(codec, coef0255);
++		msleep(300);
++		val = alc_read_coef_idx(codec, 0x46);
++		is_ctia = (val & 0x0070) == 0x0070;
++		break;
++	case 0x10ec0236:
+ 	case 0x10ec0256:
++		alc_write_coef_idx(codec, 0x1b, 0x0e4b);
++		alc_write_coef_idx(codec, 0x06, 0x6104);
++		alc_write_coefex_idx(codec, 0x57, 0x3, 0x09a3);
++
++		snd_hda_codec_write(codec, 0x21, 0,
++			    AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
++		msleep(80);
++		snd_hda_codec_write(codec, 0x21, 0,
++			    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++
+ 		alc_process_coef_fw(codec, coef0255);
+ 		msleep(300);
+ 		val = alc_read_coef_idx(codec, 0x46);
+ 		is_ctia = (val & 0x0070) == 0x0070;
++
++		alc_write_coefex_idx(codec, 0x57, 0x3, 0x0da3);
++		alc_update_coefex_idx(codec, 0x57, 0x5, 1<<14, 0);
++
++		snd_hda_codec_write(codec, 0x21, 0,
++			    AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
++		msleep(80);
++		snd_hda_codec_write(codec, 0x21, 0,
++			    AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
+ 		break;
+ 	case 0x10ec0234:
+ 	case 0x10ec0274:
+@@ -6166,15 +6211,13 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chain_id = ALC269_FIXUP_THINKPAD_ACPI,
+ 	},
+ 	[ALC255_FIXUP_ACER_MIC_NO_PRESENCE] = {
+-		.type = HDA_FIXUP_VERBS,
+-		.v.verbs = (const struct hda_verb[]) {
+-			/* Enable the Mic */
+-			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x45 },
+-			{ 0x20, AC_VERB_SET_PROC_COEF, 0x5089 },
+-			{}
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x19, 0x01a1913c }, /* use as headset mic, without its own jack detect */
++			{ }
+ 		},
+ 		.chained = true,
+-		.chain_id = ALC269_FIXUP_LIFEBOOK_EXTMIC
++		.chain_id = ALC255_FIXUP_HEADSET_MODE
+ 	},
+ 	[ALC255_FIXUP_ASUS_MIC_NO_PRESENCE] = {
+ 		.type = HDA_FIXUP_PINS,
+@@ -7219,10 +7262,6 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x18, 0x02a11030},
+ 		{0x19, 0x0181303F},
+ 		{0x21, 0x0221102f}),
+-	SND_HDA_PIN_QUIRK(0x10ec0255, 0x1025, "Acer", ALC255_FIXUP_ACER_MIC_NO_PRESENCE,
+-		{0x12, 0x90a60140},
+-		{0x14, 0x90170120},
+-		{0x21, 0x02211030}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0255, 0x1025, "Acer", ALC255_FIXUP_ACER_MIC_NO_PRESENCE,
+ 		{0x12, 0x90a601c0},
+ 		{0x14, 0x90171120},
+diff --git a/sound/pci/ice1712/ews.c b/sound/pci/ice1712/ews.c
+index 7646c93e8268..d492dde88e16 100644
+--- a/sound/pci/ice1712/ews.c
++++ b/sound/pci/ice1712/ews.c
+@@ -826,7 +826,7 @@ static int snd_ice1712_6fire_read_pca(struct snd_ice1712 *ice, unsigned char reg
+ 
+ 	snd_i2c_lock(ice->i2c);
+ 	byte = reg;
+-	if (snd_i2c_sendbytes(spec->i2cdevs[EWS_I2C_6FIRE], &byte, 1)) {
++	if (snd_i2c_sendbytes(spec->i2cdevs[EWS_I2C_6FIRE], &byte, 1) != 1) {
+ 		snd_i2c_unlock(ice->i2c);
+ 		dev_err(ice->card->dev, "cannot send pca\n");
+ 		return -EIO;
+diff --git a/sound/soc/codecs/cs42xx8.c b/sound/soc/codecs/cs42xx8.c
+index ebb9e0cf8364..28a4ac36c4f8 100644
+--- a/sound/soc/codecs/cs42xx8.c
++++ b/sound/soc/codecs/cs42xx8.c
+@@ -558,6 +558,7 @@ static int cs42xx8_runtime_resume(struct device *dev)
+ 	msleep(5);
+ 
+ 	regcache_cache_only(cs42xx8->regmap, false);
++	regcache_mark_dirty(cs42xx8->regmap);
+ 
+ 	ret = regcache_sync(cs42xx8->regmap);
+ 	if (ret) {
+diff --git a/sound/soc/fsl/fsl_asrc.c b/sound/soc/fsl/fsl_asrc.c
+index 0b937924d2e4..ea035c12a325 100644
+--- a/sound/soc/fsl/fsl_asrc.c
++++ b/sound/soc/fsl/fsl_asrc.c
+@@ -282,8 +282,8 @@ static int fsl_asrc_config_pair(struct fsl_asrc_pair *pair)
+ 		return -EINVAL;
+ 	}
+ 
+-	if ((outrate > 8000 && outrate < 30000) &&
+-	    (outrate/inrate > 24 || inrate/outrate > 8)) {
++	if ((outrate >= 8000 && outrate <= 30000) &&
++	    (outrate > 24 * inrate || inrate > 8 * outrate)) {
+ 		pair_err("exceed supported ratio range [1/24, 8] for \
+ 				inrate/outrate: %d/%d\n", inrate, outrate);
+ 		return -EINVAL;
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index fe99b02bbf17..a7b4fab92f26 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -228,7 +228,10 @@ static void soc_init_card_debugfs(struct snd_soc_card *card)
+ 
+ static void soc_cleanup_card_debugfs(struct snd_soc_card *card)
+ {
++	if (!card->debugfs_card_root)
++		return;
+ 	debugfs_remove_recursive(card->debugfs_card_root);
++	card->debugfs_card_root = NULL;
+ }
+ 
+ static void snd_soc_debugfs_init(void)
+@@ -2034,8 +2037,10 @@ static void soc_check_tplg_fes(struct snd_soc_card *card)
+ static int soc_cleanup_card_resources(struct snd_soc_card *card)
+ {
+ 	/* free the ALSA card at first; this syncs with pending operations */
+-	if (card->snd_card)
++	if (card->snd_card) {
+ 		snd_card_free(card->snd_card);
++		card->snd_card = NULL;
++	}
+ 
+ 	/* remove and free each DAI */
+ 	soc_remove_dai_links(card);
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index 0382a47b30bd..5d9d7678e4fa 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -2192,7 +2192,10 @@ static void dapm_debugfs_add_widget(struct snd_soc_dapm_widget *w)
+ 
+ static void dapm_debugfs_cleanup(struct snd_soc_dapm_context *dapm)
+ {
++	if (!dapm->debugfs_dapm)
++		return;
+ 	debugfs_remove_recursive(dapm->debugfs_dapm);
++	dapm->debugfs_dapm = NULL;
+ }
+ 
+ #else
+diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
+index d2be5a06c339..ed8ef5c82256 100644
+--- a/tools/bpf/bpftool/prog.c
++++ b/tools/bpf/bpftool/prog.c
+@@ -873,6 +873,8 @@ static int load_with_options(int argc, char **argv, bool first_prog_only)
+ 		}
+ 	}
+ 
++	set_max_rlimit();
++
+ 	obj = __bpf_object__open_xattr(&attr, bpf_flags);
+ 	if (IS_ERR_OR_NULL(obj)) {
+ 		p_err("failed to open object file");
+@@ -952,8 +954,6 @@ static int load_with_options(int argc, char **argv, bool first_prog_only)
+ 		goto err_close_obj;
+ 	}
+ 
+-	set_max_rlimit();
+-
+ 	err = bpf_object__load(obj);
+ 	if (err) {
+ 		p_err("failed to load object file");
+diff --git a/tools/io_uring/Makefile b/tools/io_uring/Makefile
+index f79522fc37b5..00f146c54c53 100644
+--- a/tools/io_uring/Makefile
++++ b/tools/io_uring/Makefile
+@@ -8,7 +8,7 @@ all: io_uring-cp io_uring-bench
+ 	$(CC) $(CFLAGS) -o $@ $^
+ 
+ io_uring-bench: syscall.o io_uring-bench.o
+-	$(CC) $(CFLAGS) $(LDLIBS) -o $@ $^
++	$(CC) $(CFLAGS) -o $@ $^ $(LDLIBS)
+ 
+ io_uring-cp: setup.o syscall.o queue.o
+ 
+diff --git a/tools/kvm/kvm_stat/kvm_stat b/tools/kvm/kvm_stat/kvm_stat
+index 2ed395b817cb..bc508dae286c 100755
+--- a/tools/kvm/kvm_stat/kvm_stat
++++ b/tools/kvm/kvm_stat/kvm_stat
+@@ -575,8 +575,12 @@ class TracepointProvider(Provider):
+     def update_fields(self, fields_filter):
+         """Refresh fields, applying fields_filter"""
+         self.fields = [field for field in self._get_available_fields()
+-                       if self.is_field_wanted(fields_filter, field) or
+-                       ARCH.tracepoint_is_child(field)]
++                       if self.is_field_wanted(fields_filter, field)]
++        # add parents for child fields - otherwise we won't see any output!
++        for field in self._fields:
++            parent = ARCH.tracepoint_is_child(field)
++            if (parent and parent not in self._fields):
++                self.fields.append(parent)
+ 
+     @staticmethod
+     def _get_online_cpus():
+@@ -735,8 +739,12 @@ class DebugfsProvider(Provider):
+     def update_fields(self, fields_filter):
+         """Refresh fields, applying fields_filter"""
+         self._fields = [field for field in self._get_available_fields()
+-                        if self.is_field_wanted(fields_filter, field) or
+-                        ARCH.debugfs_is_child(field)]
++                        if self.is_field_wanted(fields_filter, field)]
++        # add parents for child fields - otherwise we won't see any output!
++        for field in self._fields:
++            parent = ARCH.debugfs_is_child(field)
++            if (parent and parent not in self._fields):
++                self.fields.append(parent)
+ 
+     @property
+     def fields(self):
+diff --git a/tools/kvm/kvm_stat/kvm_stat.txt b/tools/kvm/kvm_stat/kvm_stat.txt
+index 0811d860fe75..c057ba52364e 100644
+--- a/tools/kvm/kvm_stat/kvm_stat.txt
++++ b/tools/kvm/kvm_stat/kvm_stat.txt
+@@ -34,6 +34,8 @@ INTERACTIVE COMMANDS
+ *c*::	clear filter
+ 
+ *f*::	filter by regular expression
++ ::     *Note*: Child events pull in their parents, and parents' stats summarize
++                all child events, not just the filtered ones
+ 
+ *g*::	filter by guest name/PID
+ 
+diff --git a/tools/testing/selftests/bpf/bpf_helpers.h b/tools/testing/selftests/bpf/bpf_helpers.h
+index c81fc350f7ad..a43a52cdd3f0 100644
+--- a/tools/testing/selftests/bpf/bpf_helpers.h
++++ b/tools/testing/selftests/bpf/bpf_helpers.h
+@@ -246,7 +246,7 @@ static int (*bpf_skb_change_type)(void *ctx, __u32 type) =
+ 	(void *) BPF_FUNC_skb_change_type;
+ static unsigned int (*bpf_get_hash_recalc)(void *ctx) =
+ 	(void *) BPF_FUNC_get_hash_recalc;
+-static unsigned long long (*bpf_get_current_task)(void *ctx) =
++static unsigned long long (*bpf_get_current_task)(void) =
+ 	(void *) BPF_FUNC_get_current_task;
+ static int (*bpf_skb_change_tail)(void *ctx, __u32 len, __u64 flags) =
+ 	(void *) BPF_FUNC_skb_change_tail;
+diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
+index 93f99c6b7d79..a2542ed42819 100644
+--- a/tools/testing/selftests/kvm/dirty_log_test.c
++++ b/tools/testing/selftests/kvm/dirty_log_test.c
+@@ -292,7 +292,7 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
+ 	 * A little more than 1G of guest page sized pages.  Cover the
+ 	 * case where the size is not aligned to 64 pages.
+ 	 */
+-	guest_num_pages = (1ul << (30 - guest_page_shift)) + 3;
++	guest_num_pages = (1ul << (30 - guest_page_shift)) + 16;
+ 	host_page_size = getpagesize();
+ 	host_num_pages = (guest_num_pages * guest_page_size) / host_page_size +
+ 			 !!((guest_num_pages * guest_page_size) % host_page_size);
+diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
+index e8c42506a09d..fa6cd340137c 100644
+--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
++++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
+@@ -226,7 +226,7 @@ struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
+ 	uint64_t extra_pg_pages = (extra_mem_pages / ptrs_per_4k_pte) * 2;
+ 	struct kvm_vm *vm;
+ 
+-	vm = vm_create(VM_MODE_P52V48_4K, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
++	vm = vm_create(VM_MODE_P40V48_4K, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
+ 
+ 	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
+ 	vm_vcpu_add_default(vm, vcpuid, guest_code);
+diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_cpuid.c b/tools/testing/selftests/kvm/x86_64/hyperv_cpuid.c
+index 9a21e912097c..63b9fc3fdfbe 100644
+--- a/tools/testing/selftests/kvm/x86_64/hyperv_cpuid.c
++++ b/tools/testing/selftests/kvm/x86_64/hyperv_cpuid.c
+@@ -58,9 +58,8 @@ static void test_hv_cpuid(struct kvm_cpuid2 *hv_cpuid_entries,
+ 		TEST_ASSERT(entry->flags == 0,
+ 			    ".flags field should be zero");
+ 
+-		TEST_ASSERT(entry->padding[0] == entry->padding[1]
+-			    == entry->padding[2] == 0,
+-			    ".index field should be zero");
++		TEST_ASSERT(!entry->padding[0] && !entry->padding[1] &&
++			    !entry->padding[2], "padding should be zero");
+ 
+ 		/*
+ 		 * If needed for debug:
+diff --git a/tools/testing/selftests/net/fib_rule_tests.sh b/tools/testing/selftests/net/fib_rule_tests.sh
+index 4b7e107865bf..1ba069967fa2 100755
+--- a/tools/testing/selftests/net/fib_rule_tests.sh
++++ b/tools/testing/selftests/net/fib_rule_tests.sh
+@@ -55,7 +55,7 @@ setup()
+ 
+ 	$IP link add dummy0 type dummy
+ 	$IP link set dev dummy0 up
+-	$IP address add 198.51.100.1/24 dev dummy0
++	$IP address add 192.51.100.1/24 dev dummy0
+ 	$IP -6 address add 2001:db8:1::1/64 dev dummy0
+ 
+ 	set +e
+diff --git a/tools/testing/selftests/timers/adjtick.c b/tools/testing/selftests/timers/adjtick.c
+index 0caca3a06bd2..54d8d87f36b3 100644
+--- a/tools/testing/selftests/timers/adjtick.c
++++ b/tools/testing/selftests/timers/adjtick.c
+@@ -136,6 +136,7 @@ int check_tick_adj(long tickval)
+ 
+ 	eppm = get_ppm_drift();
+ 	printf("%lld usec, %lld ppm", systick + (systick * eppm / MILLION), eppm);
++	fflush(stdout);
+ 
+ 	tx1.modes = 0;
+ 	adjtimex(&tx1);
+diff --git a/tools/testing/selftests/timers/leapcrash.c b/tools/testing/selftests/timers/leapcrash.c
+index 830c462f605d..dc80728ed191 100644
+--- a/tools/testing/selftests/timers/leapcrash.c
++++ b/tools/testing/selftests/timers/leapcrash.c
+@@ -101,6 +101,7 @@ int main(void)
+ 		}
+ 		clear_time_state();
+ 		printf(".");
++		fflush(stdout);
+ 	}
+ 	printf("[OK]\n");
+ 	return ksft_exit_pass();
+diff --git a/tools/testing/selftests/timers/mqueue-lat.c b/tools/testing/selftests/timers/mqueue-lat.c
+index 1867db5d6f5e..7916cf5cc6ff 100644
+--- a/tools/testing/selftests/timers/mqueue-lat.c
++++ b/tools/testing/selftests/timers/mqueue-lat.c
+@@ -102,6 +102,7 @@ int main(int argc, char **argv)
+ 	int ret;
+ 
+ 	printf("Mqueue latency :                          ");
++	fflush(stdout);
+ 
+ 	ret = mqueue_lat_test();
+ 	if (ret < 0) {
+diff --git a/tools/testing/selftests/timers/nanosleep.c b/tools/testing/selftests/timers/nanosleep.c
+index 8adb0bb51d4d..71b5441c2fd9 100644
+--- a/tools/testing/selftests/timers/nanosleep.c
++++ b/tools/testing/selftests/timers/nanosleep.c
+@@ -142,6 +142,7 @@ int main(int argc, char **argv)
+ 			continue;
+ 
+ 		printf("Nanosleep %-31s ", clockstring(clockid));
++		fflush(stdout);
+ 
+ 		length = 10;
+ 		while (length <= (NSEC_PER_SEC * 10)) {
+diff --git a/tools/testing/selftests/timers/nsleep-lat.c b/tools/testing/selftests/timers/nsleep-lat.c
+index c3c3dc10db17..eb3e79ed7b4a 100644
+--- a/tools/testing/selftests/timers/nsleep-lat.c
++++ b/tools/testing/selftests/timers/nsleep-lat.c
+@@ -155,6 +155,7 @@ int main(int argc, char **argv)
+ 			continue;
+ 
+ 		printf("nsleep latency %-26s ", clockstring(clockid));
++		fflush(stdout);
+ 
+ 		length = 10;
+ 		while (length <= (NSEC_PER_SEC * 10)) {
+diff --git a/tools/testing/selftests/timers/raw_skew.c b/tools/testing/selftests/timers/raw_skew.c
+index dcf73c5dab6e..b41d8dd0c40c 100644
+--- a/tools/testing/selftests/timers/raw_skew.c
++++ b/tools/testing/selftests/timers/raw_skew.c
+@@ -112,6 +112,7 @@ int main(int argv, char **argc)
+ 		printf("WARNING: ADJ_OFFSET in progress, this will cause inaccurate results\n");
+ 
+ 	printf("Estimating clock drift: ");
++	fflush(stdout);
+ 	sleep(120);
+ 
+ 	get_monotonic_and_raw(&mon, &raw);
+diff --git a/tools/testing/selftests/timers/set-tai.c b/tools/testing/selftests/timers/set-tai.c
+index 70fed27d8fd3..8c4179ee2ca2 100644
+--- a/tools/testing/selftests/timers/set-tai.c
++++ b/tools/testing/selftests/timers/set-tai.c
+@@ -55,6 +55,7 @@ int main(int argc, char **argv)
+ 	printf("tai offset started at %i\n", ret);
+ 
+ 	printf("Checking tai offsets can be properly set: ");
++	fflush(stdout);
+ 	for (i = 1; i <= 60; i++) {
+ 		ret = set_tai(i);
+ 		ret = get_tai();
+diff --git a/tools/testing/selftests/timers/set-tz.c b/tools/testing/selftests/timers/set-tz.c
+index 877fd5532fee..62bd33eb16f0 100644
+--- a/tools/testing/selftests/timers/set-tz.c
++++ b/tools/testing/selftests/timers/set-tz.c
+@@ -65,6 +65,7 @@ int main(int argc, char **argv)
+ 	printf("tz_minuteswest started at %i, dst at %i\n", min, dst);
+ 
+ 	printf("Checking tz_minuteswest can be properly set: ");
++	fflush(stdout);
+ 	for (i = -15*60; i < 15*60; i += 30) {
+ 		ret = set_tz(i, dst);
+ 		ret = get_tz_min();
+@@ -76,6 +77,7 @@ int main(int argc, char **argv)
+ 	printf("[OK]\n");
+ 
+ 	printf("Checking invalid tz_minuteswest values are caught: ");
++	fflush(stdout);
+ 
+ 	if (!set_tz(-15*60-1, dst)) {
+ 		printf("[FAILED] %i didn't return failure!\n", -15*60-1);
+diff --git a/tools/testing/selftests/timers/threadtest.c b/tools/testing/selftests/timers/threadtest.c
+index 759c9c06f1a0..cf3e48919874 100644
+--- a/tools/testing/selftests/timers/threadtest.c
++++ b/tools/testing/selftests/timers/threadtest.c
+@@ -163,6 +163,7 @@ int main(int argc, char **argv)
+ 	strftime(buf, 255, "%a, %d %b %Y %T %z", localtime(&start));
+ 	printf("%s\n", buf);
+ 	printf("Testing consistency with %i threads for %ld seconds: ", thread_count, runtime);
++	fflush(stdout);
+ 
+ 	/* spawn */
+ 	for (i = 0; i < thread_count; i++)
+diff --git a/tools/testing/selftests/timers/valid-adjtimex.c b/tools/testing/selftests/timers/valid-adjtimex.c
+index d9d3ab93b31a..5397de708d3c 100644
+--- a/tools/testing/selftests/timers/valid-adjtimex.c
++++ b/tools/testing/selftests/timers/valid-adjtimex.c
+@@ -123,6 +123,7 @@ int validate_freq(void)
+ 	/* Set the leap second insert flag */
+ 
+ 	printf("Testing ADJ_FREQ... ");
++	fflush(stdout);
+ 	for (i = 0; i < NUM_FREQ_VALID; i++) {
+ 		tx.modes = ADJ_FREQUENCY;
+ 		tx.freq = valid_freq[i];
+@@ -250,6 +251,7 @@ int set_bad_offset(long sec, long usec, int use_nano)
+ int validate_set_offset(void)
+ {
+ 	printf("Testing ADJ_SETOFFSET... ");
++	fflush(stdout);
+ 
+ 	/* Test valid values */
+ 	if (set_offset(NSEC_PER_SEC - 1, 1))
+diff --git a/virt/kvm/arm/aarch32.c b/virt/kvm/arm/aarch32.c
+index 5abbe9b3c652..6880236974b8 100644
+--- a/virt/kvm/arm/aarch32.c
++++ b/virt/kvm/arm/aarch32.c
+@@ -25,127 +25,6 @@
+ #include <asm/kvm_emulate.h>
+ #include <asm/kvm_hyp.h>
+ 
+-/*
+- * stolen from arch/arm/kernel/opcodes.c
+- *
+- * condition code lookup table
+- * index into the table is test code: EQ, NE, ... LT, GT, AL, NV
+- *
+- * bit position in short is condition code: NZCV
+- */
+-static const unsigned short cc_map[16] = {
+-	0xF0F0,			/* EQ == Z set            */
+-	0x0F0F,			/* NE                     */
+-	0xCCCC,			/* CS == C set            */
+-	0x3333,			/* CC                     */
+-	0xFF00,			/* MI == N set            */
+-	0x00FF,			/* PL                     */
+-	0xAAAA,			/* VS == V set            */
+-	0x5555,			/* VC                     */
+-	0x0C0C,			/* HI == C set && Z clear */
+-	0xF3F3,			/* LS == C clear || Z set */
+-	0xAA55,			/* GE == (N==V)           */
+-	0x55AA,			/* LT == (N!=V)           */
+-	0x0A05,			/* GT == (!Z && (N==V))   */
+-	0xF5FA,			/* LE == (Z || (N!=V))    */
+-	0xFFFF,			/* AL always              */
+-	0			/* NV                     */
+-};
+-
+-/*
+- * Check if a trapped instruction should have been executed or not.
+- */
+-bool __hyp_text kvm_condition_valid32(const struct kvm_vcpu *vcpu)
+-{
+-	unsigned long cpsr;
+-	u32 cpsr_cond;
+-	int cond;
+-
+-	/* Top two bits non-zero?  Unconditional. */
+-	if (kvm_vcpu_get_hsr(vcpu) >> 30)
+-		return true;
+-
+-	/* Is condition field valid? */
+-	cond = kvm_vcpu_get_condition(vcpu);
+-	if (cond == 0xE)
+-		return true;
+-
+-	cpsr = *vcpu_cpsr(vcpu);
+-
+-	if (cond < 0) {
+-		/* This can happen in Thumb mode: examine IT state. */
+-		unsigned long it;
+-
+-		it = ((cpsr >> 8) & 0xFC) | ((cpsr >> 25) & 0x3);
+-
+-		/* it == 0 => unconditional. */
+-		if (it == 0)
+-			return true;
+-
+-		/* The cond for this insn works out as the top 4 bits. */
+-		cond = (it >> 4);
+-	}
+-
+-	cpsr_cond = cpsr >> 28;
+-
+-	if (!((cc_map[cond] >> cpsr_cond) & 1))
+-		return false;
+-
+-	return true;
+-}
+-
+-/**
+- * adjust_itstate - adjust ITSTATE when emulating instructions in IT-block
+- * @vcpu:	The VCPU pointer
+- *
+- * When exceptions occur while instructions are executed in Thumb IF-THEN
+- * blocks, the ITSTATE field of the CPSR is not advanced (updated), so we have
+- * to do this little bit of work manually. The fields map like this:
+- *
+- * IT[7:0] -> CPSR[26:25],CPSR[15:10]
+- */
+-static void __hyp_text kvm_adjust_itstate(struct kvm_vcpu *vcpu)
+-{
+-	unsigned long itbits, cond;
+-	unsigned long cpsr = *vcpu_cpsr(vcpu);
+-	bool is_arm = !(cpsr & PSR_AA32_T_BIT);
+-
+-	if (is_arm || !(cpsr & PSR_AA32_IT_MASK))
+-		return;
+-
+-	cond = (cpsr & 0xe000) >> 13;
+-	itbits = (cpsr & 0x1c00) >> (10 - 2);
+-	itbits |= (cpsr & (0x3 << 25)) >> 25;
+-
+-	/* Perform ITAdvance (see page A2-52 in ARM DDI 0406C) */
+-	if ((itbits & 0x7) == 0)
+-		itbits = cond = 0;
+-	else
+-		itbits = (itbits << 1) & 0x1f;
+-
+-	cpsr &= ~PSR_AA32_IT_MASK;
+-	cpsr |= cond << 13;
+-	cpsr |= (itbits & 0x1c) << (10 - 2);
+-	cpsr |= (itbits & 0x3) << 25;
+-	*vcpu_cpsr(vcpu) = cpsr;
+-}
+-
+-/**
+- * kvm_skip_instr - skip a trapped instruction and proceed to the next
+- * @vcpu: The vcpu pointer
+- */
+-void __hyp_text kvm_skip_instr32(struct kvm_vcpu *vcpu, bool is_wide_instr)
+-{
+-	bool is_thumb;
+-
+-	is_thumb = !!(*vcpu_cpsr(vcpu) & PSR_AA32_T_BIT);
+-	if (is_thumb && !is_wide_instr)
+-		*vcpu_pc(vcpu) += 2;
+-	else
+-		*vcpu_pc(vcpu) += 4;
+-	kvm_adjust_itstate(vcpu);
+-}
+-
+ /*
+  * Table taken from ARMv8 ARM DDI0487B-B, table G1-10.
+  */
+diff --git a/virt/kvm/arm/hyp/aarch32.c b/virt/kvm/arm/hyp/aarch32.c
+new file mode 100644
+index 000000000000..d31f267961e7
+--- /dev/null
++++ b/virt/kvm/arm/hyp/aarch32.c
+@@ -0,0 +1,136 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Hyp portion of the (not much of an) Emulation layer for 32bit guests.
++ *
++ * Copyright (C) 2012,2013 - ARM Ltd
++ * Author: Marc Zyngier <marc.zyngier@arm.com>
++ *
++ * based on arch/arm/kvm/emulate.c
++ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
++ * Author: Christoffer Dall <c.dall@virtualopensystems.com>
++ */
++
++#include <linux/kvm_host.h>
++#include <asm/kvm_emulate.h>
++#include <asm/kvm_hyp.h>
++
++/*
++ * stolen from arch/arm/kernel/opcodes.c
++ *
++ * condition code lookup table
++ * index into the table is test code: EQ, NE, ... LT, GT, AL, NV
++ *
++ * bit position in short is condition code: NZCV
++ */
++static const unsigned short cc_map[16] = {
++	0xF0F0,			/* EQ == Z set            */
++	0x0F0F,			/* NE                     */
++	0xCCCC,			/* CS == C set            */
++	0x3333,			/* CC                     */
++	0xFF00,			/* MI == N set            */
++	0x00FF,			/* PL                     */
++	0xAAAA,			/* VS == V set            */
++	0x5555,			/* VC                     */
++	0x0C0C,			/* HI == C set && Z clear */
++	0xF3F3,			/* LS == C clear || Z set */
++	0xAA55,			/* GE == (N==V)           */
++	0x55AA,			/* LT == (N!=V)           */
++	0x0A05,			/* GT == (!Z && (N==V))   */
++	0xF5FA,			/* LE == (Z || (N!=V))    */
++	0xFFFF,			/* AL always              */
++	0			/* NV                     */
++};
++
++/*
++ * Check if a trapped instruction should have been executed or not.
++ */
++bool __hyp_text kvm_condition_valid32(const struct kvm_vcpu *vcpu)
++{
++	unsigned long cpsr;
++	u32 cpsr_cond;
++	int cond;
++
++	/* Top two bits non-zero?  Unconditional. */
++	if (kvm_vcpu_get_hsr(vcpu) >> 30)
++		return true;
++
++	/* Is condition field valid? */
++	cond = kvm_vcpu_get_condition(vcpu);
++	if (cond == 0xE)
++		return true;
++
++	cpsr = *vcpu_cpsr(vcpu);
++
++	if (cond < 0) {
++		/* This can happen in Thumb mode: examine IT state. */
++		unsigned long it;
++
++		it = ((cpsr >> 8) & 0xFC) | ((cpsr >> 25) & 0x3);
++
++		/* it == 0 => unconditional. */
++		if (it == 0)
++			return true;
++
++		/* The cond for this insn works out as the top 4 bits. */
++		cond = (it >> 4);
++	}
++
++	cpsr_cond = cpsr >> 28;
++
++	if (!((cc_map[cond] >> cpsr_cond) & 1))
++		return false;
++
++	return true;
++}
++
++/**
++ * adjust_itstate - adjust ITSTATE when emulating instructions in IT-block
++ * @vcpu:	The VCPU pointer
++ *
++ * When exceptions occur while instructions are executed in Thumb IF-THEN
++ * blocks, the ITSTATE field of the CPSR is not advanced (updated), so we have
++ * to do this little bit of work manually. The fields map like this:
++ *
++ * IT[7:0] -> CPSR[26:25],CPSR[15:10]
++ */
++static void __hyp_text kvm_adjust_itstate(struct kvm_vcpu *vcpu)
++{
++	unsigned long itbits, cond;
++	unsigned long cpsr = *vcpu_cpsr(vcpu);
++	bool is_arm = !(cpsr & PSR_AA32_T_BIT);
++
++	if (is_arm || !(cpsr & PSR_AA32_IT_MASK))
++		return;
++
++	cond = (cpsr & 0xe000) >> 13;
++	itbits = (cpsr & 0x1c00) >> (10 - 2);
++	itbits |= (cpsr & (0x3 << 25)) >> 25;
++
++	/* Perform ITAdvance (see page A2-52 in ARM DDI 0406C) */
++	if ((itbits & 0x7) == 0)
++		itbits = cond = 0;
++	else
++		itbits = (itbits << 1) & 0x1f;
++
++	cpsr &= ~PSR_AA32_IT_MASK;
++	cpsr |= cond << 13;
++	cpsr |= (itbits & 0x1c) << (10 - 2);
++	cpsr |= (itbits & 0x3) << 25;
++	*vcpu_cpsr(vcpu) = cpsr;
++}
++
++/**
++ * kvm_skip_instr - skip a trapped instruction and proceed to the next
++ * @vcpu: The vcpu pointer
++ */
++void __hyp_text kvm_skip_instr32(struct kvm_vcpu *vcpu, bool is_wide_instr)
++{
++	bool is_thumb;
++
++	is_thumb = !!(*vcpu_cpsr(vcpu) & PSR_AA32_T_BIT);
++	if (is_thumb && !is_wide_instr)
++		*vcpu_pc(vcpu) += 2;
++	else
++		*vcpu_pc(vcpu) += 4;
++	kvm_adjust_itstate(vcpu);
++}



* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-06-22 19:16 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2019-06-22 19:16 UTC (permalink / raw
  To: gentoo-commits

commit:     6937a558748348df369ee6cf6dbe2918e736ccd4
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jun 22 19:16:36 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jun 22 19:16:36 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6937a558

Linux patches 5.1.13 and 5.1.14

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    8 +
 1012_linux-5.1.13.patch | 3413 +++++++++++++++++++++++++++++++++++++++++++++++
 1013_linux-5.1.14.patch |   27 +
 3 files changed, 3448 insertions(+)

diff --git a/0000_README b/0000_README
index 540b4c1..3443ce1 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,14 @@ Patch:  1011_linux-5.1.12.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.1.12
 
+Patch:  1012_linux-5.1.13.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.1.13
+
+Patch:  1013_linux-5.1.14.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.1.14
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1012_linux-5.1.13.patch b/1012_linux-5.1.13.patch
new file mode 100644
index 0000000..069e6f6
--- /dev/null
+++ b/1012_linux-5.1.13.patch
@@ -0,0 +1,3413 @@
+diff --git a/Makefile b/Makefile
+index 6d7bfe9fcd7d..dfcd51a35824 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h
+index a179df3674a1..6206ab9bfcfc 100644
+--- a/arch/arm64/include/asm/syscall.h
++++ b/arch/arm64/include/asm/syscall.h
+@@ -20,7 +20,7 @@
+ #include <linux/compat.h>
+ #include <linux/err.h>
+ 
+-typedef long (*syscall_fn_t)(struct pt_regs *regs);
++typedef long (*syscall_fn_t)(const struct pt_regs *regs);
+ 
+ extern const syscall_fn_t sys_call_table[];
+ 
+diff --git a/arch/arm64/include/asm/syscall_wrapper.h b/arch/arm64/include/asm/syscall_wrapper.h
+index a4477e515b79..507d0ee6bc69 100644
+--- a/arch/arm64/include/asm/syscall_wrapper.h
++++ b/arch/arm64/include/asm/syscall_wrapper.h
+@@ -30,10 +30,10 @@
+ 	}										\
+ 	static inline long __do_compat_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__))
+ 
+-#define COMPAT_SYSCALL_DEFINE0(sname)					\
+-	asmlinkage long __arm64_compat_sys_##sname(void);		\
+-	ALLOW_ERROR_INJECTION(__arm64_compat_sys_##sname, ERRNO);	\
+-	asmlinkage long __arm64_compat_sys_##sname(void)
++#define COMPAT_SYSCALL_DEFINE0(sname)							\
++	asmlinkage long __arm64_compat_sys_##sname(const struct pt_regs *__unused);	\
++	ALLOW_ERROR_INJECTION(__arm64_compat_sys_##sname, ERRNO);			\
++	asmlinkage long __arm64_compat_sys_##sname(const struct pt_regs *__unused)
+ 
+ #define COND_SYSCALL_COMPAT(name) \
+ 	cond_syscall(__arm64_compat_sys_##name);
+@@ -62,11 +62,11 @@
+ 	static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__))
+ 
+ #ifndef SYSCALL_DEFINE0
+-#define SYSCALL_DEFINE0(sname)					\
+-	SYSCALL_METADATA(_##sname, 0);				\
+-	asmlinkage long __arm64_sys_##sname(void);		\
+-	ALLOW_ERROR_INJECTION(__arm64_sys_##sname, ERRNO);	\
+-	asmlinkage long __arm64_sys_##sname(void)
++#define SYSCALL_DEFINE0(sname)							\
++	SYSCALL_METADATA(_##sname, 0);						\
++	asmlinkage long __arm64_sys_##sname(const struct pt_regs *__unused);	\
++	ALLOW_ERROR_INJECTION(__arm64_sys_##sname, ERRNO);			\
++	asmlinkage long __arm64_sys_##sname(const struct pt_regs *__unused)
+ #endif
+ 
+ #ifndef COND_SYSCALL
+diff --git a/arch/arm64/kernel/sys.c b/arch/arm64/kernel/sys.c
+index 162a95ed0881..fe20c461582a 100644
+--- a/arch/arm64/kernel/sys.c
++++ b/arch/arm64/kernel/sys.c
+@@ -47,22 +47,26 @@ SYSCALL_DEFINE1(arm64_personality, unsigned int, personality)
+ 	return ksys_personality(personality);
+ }
+ 
++asmlinkage long sys_ni_syscall(void);
++
++asmlinkage long __arm64_sys_ni_syscall(const struct pt_regs *__unused)
++{
++	return sys_ni_syscall();
++}
++
+ /*
+  * Wrappers to pass the pt_regs argument.
+  */
+ #define __arm64_sys_personality		__arm64_sys_arm64_personality
+ 
+-asmlinkage long sys_ni_syscall(const struct pt_regs *);
+-#define __arm64_sys_ni_syscall	sys_ni_syscall
+-
+ #undef __SYSCALL
+ #define __SYSCALL(nr, sym)	asmlinkage long __arm64_##sym(const struct pt_regs *);
+ #include <asm/unistd.h>
+ 
+ #undef __SYSCALL
+-#define __SYSCALL(nr, sym)	[nr] = (syscall_fn_t)__arm64_##sym,
++#define __SYSCALL(nr, sym)	[nr] = __arm64_##sym,
+ 
+ const syscall_fn_t sys_call_table[__NR_syscalls] = {
+-	[0 ... __NR_syscalls - 1] = (syscall_fn_t)sys_ni_syscall,
++	[0 ... __NR_syscalls - 1] = __arm64_sys_ni_syscall,
+ #include <asm/unistd.h>
+ };
+diff --git a/arch/arm64/kernel/sys32.c b/arch/arm64/kernel/sys32.c
+index 0f8bcb7de700..3c80a40c1c9d 100644
+--- a/arch/arm64/kernel/sys32.c
++++ b/arch/arm64/kernel/sys32.c
+@@ -133,17 +133,14 @@ COMPAT_SYSCALL_DEFINE6(aarch32_fallocate, int, fd, int, mode,
+ 	return ksys_fallocate(fd, mode, arg_u64(offset), arg_u64(len));
+ }
+ 
+-asmlinkage long sys_ni_syscall(const struct pt_regs *);
+-#define __arm64_sys_ni_syscall	sys_ni_syscall
+-
+ #undef __SYSCALL
+ #define __SYSCALL(nr, sym)	asmlinkage long __arm64_##sym(const struct pt_regs *);
+ #include <asm/unistd32.h>
+ 
+ #undef __SYSCALL
+-#define __SYSCALL(nr, sym)	[nr] = (syscall_fn_t)__arm64_##sym,
++#define __SYSCALL(nr, sym)	[nr] = __arm64_##sym,
+ 
+ const syscall_fn_t compat_sys_call_table[__NR_compat_syscalls] = {
+-	[0 ... __NR_compat_syscalls - 1] = (syscall_fn_t)sys_ni_syscall,
++	[0 ... __NR_compat_syscalls - 1] = __arm64_sys_ni_syscall,
+ #include <asm/unistd32.h>
+ };
+diff --git a/arch/ia64/mm/numa.c b/arch/ia64/mm/numa.c
+index a03803506b0c..5e1015eb6d0d 100644
+--- a/arch/ia64/mm/numa.c
++++ b/arch/ia64/mm/numa.c
+@@ -55,6 +55,7 @@ paddr_to_nid(unsigned long paddr)
+ 
+ 	return (i < num_node_memblks) ? node_memblk[i].nid : (num_node_memblks ? -1 : 0);
+ }
++EXPORT_SYMBOL(paddr_to_nid);
+ 
+ #if defined(CONFIG_SPARSEMEM) && defined(CONFIG_NUMA)
+ /*
+diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
+index e6b5bb012ccb..1f9eb75ce95a 100644
+--- a/arch/powerpc/include/asm/kvm_host.h
++++ b/arch/powerpc/include/asm/kvm_host.h
+@@ -305,6 +305,7 @@ struct kvm_arch {
+ #ifdef CONFIG_PPC_BOOK3S_64
+ 	struct list_head spapr_tce_tables;
+ 	struct list_head rtas_tokens;
++	struct mutex rtas_token_lock;
+ 	DECLARE_BITMAP(enabled_hcalls, MAX_HCALL_OPCODE/4 + 1);
+ #endif
+ #ifdef CONFIG_KVM_MPIC
+@@ -317,6 +318,7 @@ struct kvm_arch {
+ #endif
+ 	struct kvmppc_ops *kvm_ops;
+ #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
++	struct mutex mmu_setup_lock;	/* nests inside vcpu mutexes */
+ 	u64 l1_ptcr;
+ 	int max_nested_lpid;
+ 	struct kvm_nested_guest *nested_guests[KVM_MAX_NESTED_GUESTS];
+diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
+index 10c5579d20ce..020304403bae 100644
+--- a/arch/powerpc/kvm/book3s.c
++++ b/arch/powerpc/kvm/book3s.c
+@@ -878,6 +878,7 @@ int kvmppc_core_init_vm(struct kvm *kvm)
+ #ifdef CONFIG_PPC64
+ 	INIT_LIST_HEAD_RCU(&kvm->arch.spapr_tce_tables);
+ 	INIT_LIST_HEAD(&kvm->arch.rtas_tokens);
++	mutex_init(&kvm->arch.rtas_token_lock);
+ #endif
+ 
+ 	return kvm->arch.kvm_ops->init_vm(kvm);
+diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
+index be7bc070eae5..c1ced22455f9 100644
+--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
++++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
+@@ -63,7 +63,7 @@ struct kvm_resize_hpt {
+ 	struct work_struct work;
+ 	u32 order;
+ 
+-	/* These fields protected by kvm->lock */
++	/* These fields protected by kvm->arch.mmu_setup_lock */
+ 
+ 	/* Possible values and their usage:
+ 	 *  <0     an error occurred during allocation,
+@@ -73,7 +73,7 @@ struct kvm_resize_hpt {
+ 	int error;
+ 
+ 	/* Private to the work thread, until error != -EBUSY,
+-	 * then protected by kvm->lock.
++	 * then protected by kvm->arch.mmu_setup_lock.
+ 	 */
+ 	struct kvm_hpt_info hpt;
+ };
+@@ -139,7 +139,7 @@ long kvmppc_alloc_reset_hpt(struct kvm *kvm, int order)
+ 	long err = -EBUSY;
+ 	struct kvm_hpt_info info;
+ 
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.mmu_setup_lock);
+ 	if (kvm->arch.mmu_ready) {
+ 		kvm->arch.mmu_ready = 0;
+ 		/* order mmu_ready vs. vcpus_running */
+@@ -183,7 +183,7 @@ out:
+ 		/* Ensure that each vcpu will flush its TLB on next entry. */
+ 		cpumask_setall(&kvm->arch.need_tlb_flush);
+ 
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.mmu_setup_lock);
+ 	return err;
+ }
+ 
+@@ -1447,7 +1447,7 @@ static void resize_hpt_pivot(struct kvm_resize_hpt *resize)
+ 
+ static void resize_hpt_release(struct kvm *kvm, struct kvm_resize_hpt *resize)
+ {
+-	if (WARN_ON(!mutex_is_locked(&kvm->lock)))
++	if (WARN_ON(!mutex_is_locked(&kvm->arch.mmu_setup_lock)))
+ 		return;
+ 
+ 	if (!resize)
+@@ -1474,14 +1474,14 @@ static void resize_hpt_prepare_work(struct work_struct *work)
+ 	if (WARN_ON(resize->error != -EBUSY))
+ 		return;
+ 
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.mmu_setup_lock);
+ 
+ 	/* Request is still current? */
+ 	if (kvm->arch.resize_hpt == resize) {
+ 		/* We may request large allocations here:
+-		 * do not sleep with kvm->lock held for a while.
++		 * do not sleep with kvm->arch.mmu_setup_lock held for a while.
+ 		 */
+-		mutex_unlock(&kvm->lock);
++		mutex_unlock(&kvm->arch.mmu_setup_lock);
+ 
+ 		resize_hpt_debug(resize, "resize_hpt_prepare_work(): order = %d\n",
+ 				 resize->order);
+@@ -1494,9 +1494,9 @@ static void resize_hpt_prepare_work(struct work_struct *work)
+ 		if (WARN_ON(err == -EBUSY))
+ 			err = -EINPROGRESS;
+ 
+-		mutex_lock(&kvm->lock);
++		mutex_lock(&kvm->arch.mmu_setup_lock);
+ 		/* It is possible that kvm->arch.resize_hpt != resize
+-		 * after we grab kvm->lock again.
++		 * after we grab kvm->arch.mmu_setup_lock again.
+ 		 */
+ 	}
+ 
+@@ -1505,7 +1505,7 @@ static void resize_hpt_prepare_work(struct work_struct *work)
+ 	if (kvm->arch.resize_hpt != resize)
+ 		resize_hpt_release(kvm, resize);
+ 
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.mmu_setup_lock);
+ }
+ 
+ long kvm_vm_ioctl_resize_hpt_prepare(struct kvm *kvm,
+@@ -1522,7 +1522,7 @@ long kvm_vm_ioctl_resize_hpt_prepare(struct kvm *kvm,
+ 	if (shift && ((shift < 18) || (shift > 46)))
+ 		return -EINVAL;
+ 
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.mmu_setup_lock);
+ 
+ 	resize = kvm->arch.resize_hpt;
+ 
+@@ -1565,7 +1565,7 @@ long kvm_vm_ioctl_resize_hpt_prepare(struct kvm *kvm,
+ 	ret = 100; /* estimated time in ms */
+ 
+ out:
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.mmu_setup_lock);
+ 	return ret;
+ }
+ 
+@@ -1588,7 +1588,7 @@ long kvm_vm_ioctl_resize_hpt_commit(struct kvm *kvm,
+ 	if (shift && ((shift < 18) || (shift > 46)))
+ 		return -EINVAL;
+ 
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.mmu_setup_lock);
+ 
+ 	resize = kvm->arch.resize_hpt;
+ 
+@@ -1625,7 +1625,7 @@ out:
+ 	smp_mb();
+ out_no_hpt:
+ 	resize_hpt_release(kvm, resize);
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.mmu_setup_lock);
+ 	return ret;
+ }
+ 
+@@ -1868,7 +1868,7 @@ static ssize_t kvm_htab_write(struct file *file, const char __user *buf,
+ 		return -EINVAL;
+ 
+ 	/* lock out vcpus from running while we're doing this */
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.mmu_setup_lock);
+ 	mmu_ready = kvm->arch.mmu_ready;
+ 	if (mmu_ready) {
+ 		kvm->arch.mmu_ready = 0;	/* temporarily */
+@@ -1876,7 +1876,7 @@ static ssize_t kvm_htab_write(struct file *file, const char __user *buf,
+ 		smp_mb();
+ 		if (atomic_read(&kvm->arch.vcpus_running)) {
+ 			kvm->arch.mmu_ready = 1;
+-			mutex_unlock(&kvm->lock);
++			mutex_unlock(&kvm->arch.mmu_setup_lock);
+ 			return -EBUSY;
+ 		}
+ 	}
+@@ -1963,7 +1963,7 @@ static ssize_t kvm_htab_write(struct file *file, const char __user *buf,
+ 	/* Order HPTE updates vs. mmu_ready */
+ 	smp_wmb();
+ 	kvm->arch.mmu_ready = mmu_ready;
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.mmu_setup_lock);
+ 
+ 	if (err)
+ 		return err;
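
One pattern recurs through the locking rework above: mmu_setup_lock is dropped around the large HPT allocation and the guarded state is revalidated once the lock is retaken, since another thread may have superseded the request in the meantime. A hedged kernel-style sketch of that shape, with hypothetical struct and field names:

#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/slab.h>

struct my_dev {
	struct mutex setup_lock;
	void *pending;		/* current request, protected by setup_lock */
	void *buf;		/* result, protected by setup_lock */
};

static int my_prepare(struct my_dev *dev, void *req, size_t size)
{
	void *buf;

	mutex_lock(&dev->setup_lock);
	if (dev->pending != req) {	/* superseded before we started? */
		mutex_unlock(&dev->setup_lock);
		return -ESTALE;
	}
	/* A large allocation may sleep for a long time: drop the lock. */
	mutex_unlock(&dev->setup_lock);

	buf = kzalloc(size, GFP_KERNEL);

	mutex_lock(&dev->setup_lock);
	/* The world may have moved on while the lock was dropped. */
	if (!buf || dev->pending != req) {
		kfree(buf);
		mutex_unlock(&dev->setup_lock);
		return buf ? -ESTALE : -ENOMEM;
	}
	dev->buf = buf;
	mutex_unlock(&dev->setup_lock);
	return 0;
}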
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index bd68b3e59de5..6d4f0f72231f 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -445,12 +445,7 @@ static void kvmppc_dump_regs(struct kvm_vcpu *vcpu)
+ 
+ static struct kvm_vcpu *kvmppc_find_vcpu(struct kvm *kvm, int id)
+ {
+-	struct kvm_vcpu *ret;
+-
+-	mutex_lock(&kvm->lock);
+-	ret = kvm_get_vcpu_by_id(kvm, id);
+-	mutex_unlock(&kvm->lock);
+-	return ret;
++	return kvm_get_vcpu_by_id(kvm, id);
+ }
+ 
+ static void init_vpa(struct kvm_vcpu *vcpu, struct lppaca *vpa)
+@@ -1502,7 +1497,6 @@ static void kvmppc_set_lpcr(struct kvm_vcpu *vcpu, u64 new_lpcr,
+ 	struct kvmppc_vcore *vc = vcpu->arch.vcore;
+ 	u64 mask;
+ 
+-	mutex_lock(&kvm->lock);
+ 	spin_lock(&vc->lock);
+ 	/*
+ 	 * If ILE (interrupt little-endian) has changed, update the
+@@ -1542,7 +1536,6 @@ static void kvmppc_set_lpcr(struct kvm_vcpu *vcpu, u64 new_lpcr,
+ 		mask &= 0xFFFFFFFF;
+ 	vc->lpcr = (vc->lpcr & ~mask) | (new_lpcr & mask);
+ 	spin_unlock(&vc->lock);
+-	mutex_unlock(&kvm->lock);
+ }
+ 
+ static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
+@@ -2257,11 +2250,17 @@ static struct kvm_vcpu *kvmppc_core_vcpu_create_hv(struct kvm *kvm,
+ 			pr_devel("KVM: collision on id %u", id);
+ 			vcore = NULL;
+ 		} else if (!vcore) {
++			/*
++			 * Take mmu_setup_lock for mutual exclusion
++			 * with kvmppc_update_lpcr().
++			 */
+ 			err = -ENOMEM;
+ 			vcore = kvmppc_vcore_create(kvm,
+ 					id & ~(kvm->arch.smt_mode - 1));
++			mutex_lock(&kvm->arch.mmu_setup_lock);
+ 			kvm->arch.vcores[core] = vcore;
+ 			kvm->arch.online_vcores++;
++			mutex_unlock(&kvm->arch.mmu_setup_lock);
+ 		}
+ 	}
+ 	mutex_unlock(&kvm->lock);
+@@ -3821,7 +3820,7 @@ static int kvmhv_setup_mmu(struct kvm_vcpu *vcpu)
+ 	int r = 0;
+ 	struct kvm *kvm = vcpu->kvm;
+ 
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.mmu_setup_lock);
+ 	if (!kvm->arch.mmu_ready) {
+ 		if (!kvm_is_radix(kvm))
+ 			r = kvmppc_hv_setup_htab_rma(vcpu);
+@@ -3831,7 +3830,7 @@ static int kvmhv_setup_mmu(struct kvm_vcpu *vcpu)
+ 			kvm->arch.mmu_ready = 1;
+ 		}
+ 	}
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.mmu_setup_lock);
+ 	return r;
+ }
+ 
+@@ -4439,7 +4438,8 @@ static void kvmppc_core_commit_memory_region_hv(struct kvm *kvm,
+ 
+ /*
+  * Update LPCR values in kvm->arch and in vcores.
+- * Caller must hold kvm->lock.
++ * Caller must hold kvm->arch.mmu_setup_lock (for mutual exclusion
++ * of kvm->arch.lpcr update).
+  */
+ void kvmppc_update_lpcr(struct kvm *kvm, unsigned long lpcr, unsigned long mask)
+ {
+@@ -4491,7 +4491,7 @@ void kvmppc_setup_partition_table(struct kvm *kvm)
+ 
+ /*
+  * Set up HPT (hashed page table) and RMA (real-mode area).
+- * Must be called with kvm->lock held.
++ * Must be called with kvm->arch.mmu_setup_lock held.
+  */
+ static int kvmppc_hv_setup_htab_rma(struct kvm_vcpu *vcpu)
+ {
+@@ -4579,7 +4579,10 @@ static int kvmppc_hv_setup_htab_rma(struct kvm_vcpu *vcpu)
+ 	goto out_srcu;
+ }
+ 
+-/* Must be called with kvm->lock held and mmu_ready = 0 and no vcpus running */
++/*
++ * Must be called with kvm->arch.mmu_setup_lock held and
++ * mmu_ready = 0 and no vcpus running.
++ */
+ int kvmppc_switch_mmu_to_hpt(struct kvm *kvm)
+ {
+ 	if (nesting_enabled(kvm))
+@@ -4596,7 +4599,10 @@ int kvmppc_switch_mmu_to_hpt(struct kvm *kvm)
+ 	return 0;
+ }
+ 
+-/* Must be called with kvm->lock held and mmu_ready = 0 and no vcpus running */
++/*
++ * Must be called with kvm->arch.mmu_setup_lock held and
++ * mmu_ready = 0 and no vcpus running.
++ */
+ int kvmppc_switch_mmu_to_radix(struct kvm *kvm)
+ {
+ 	int err;
+@@ -4701,6 +4707,8 @@ static int kvmppc_core_init_vm_hv(struct kvm *kvm)
+ 	char buf[32];
+ 	int ret;
+ 
++	mutex_init(&kvm->arch.mmu_setup_lock);
++
+ 	/* Allocate the guest's logical partition ID */
+ 
+ 	lpid = kvmppc_alloc_lpid();
+@@ -5226,7 +5234,7 @@ static int kvmhv_configure_mmu(struct kvm *kvm, struct kvm_ppc_mmuv3_cfg *cfg)
+ 	if (kvmhv_on_pseries() && !radix)
+ 		return -EINVAL;
+ 
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.mmu_setup_lock);
+ 	if (radix != kvm_is_radix(kvm)) {
+ 		if (kvm->arch.mmu_ready) {
+ 			kvm->arch.mmu_ready = 0;
+@@ -5254,7 +5262,7 @@ static int kvmhv_configure_mmu(struct kvm *kvm, struct kvm_ppc_mmuv3_cfg *cfg)
+ 	err = 0;
+ 
+  out_unlock:
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.mmu_setup_lock);
+ 	return err;
+ }
+ 
+diff --git a/arch/powerpc/kvm/book3s_rtas.c b/arch/powerpc/kvm/book3s_rtas.c
+index 4e178c4c1ea5..b7ae3dfbf00e 100644
+--- a/arch/powerpc/kvm/book3s_rtas.c
++++ b/arch/powerpc/kvm/book3s_rtas.c
+@@ -146,7 +146,7 @@ static int rtas_token_undefine(struct kvm *kvm, char *name)
+ {
+ 	struct rtas_token_definition *d, *tmp;
+ 
+-	lockdep_assert_held(&kvm->lock);
++	lockdep_assert_held(&kvm->arch.rtas_token_lock);
+ 
+ 	list_for_each_entry_safe(d, tmp, &kvm->arch.rtas_tokens, list) {
+ 		if (rtas_name_matches(d->handler->name, name)) {
+@@ -167,7 +167,7 @@ static int rtas_token_define(struct kvm *kvm, char *name, u64 token)
+ 	bool found;
+ 	int i;
+ 
+-	lockdep_assert_held(&kvm->lock);
++	lockdep_assert_held(&kvm->arch.rtas_token_lock);
+ 
+ 	list_for_each_entry(d, &kvm->arch.rtas_tokens, list) {
+ 		if (d->token == token)
+@@ -206,14 +206,14 @@ int kvm_vm_ioctl_rtas_define_token(struct kvm *kvm, void __user *argp)
+ 	if (copy_from_user(&args, argp, sizeof(args)))
+ 		return -EFAULT;
+ 
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.rtas_token_lock);
+ 
+ 	if (args.token)
+ 		rc = rtas_token_define(kvm, args.name, args.token);
+ 	else
+ 		rc = rtas_token_undefine(kvm, args.name);
+ 
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.rtas_token_lock);
+ 
+ 	return rc;
+ }
+@@ -245,7 +245,7 @@ int kvmppc_rtas_hcall(struct kvm_vcpu *vcpu)
+ 	orig_rets = args.rets;
+ 	args.rets = &args.args[be32_to_cpu(args.nargs)];
+ 
+-	mutex_lock(&vcpu->kvm->lock);
++	mutex_lock(&vcpu->kvm->arch.rtas_token_lock);
+ 
+ 	rc = -ENOENT;
+ 	list_for_each_entry(d, &vcpu->kvm->arch.rtas_tokens, list) {
+@@ -256,7 +256,7 @@ int kvmppc_rtas_hcall(struct kvm_vcpu *vcpu)
+ 		}
+ 	}
+ 
+-	mutex_unlock(&vcpu->kvm->lock);
++	mutex_unlock(&vcpu->kvm->arch.rtas_token_lock);
+ 
+ 	if (rc == 0) {
+ 		args.rets = orig_rets;
+@@ -282,8 +282,6 @@ void kvmppc_rtas_tokens_free(struct kvm *kvm)
+ {
+ 	struct rtas_token_definition *d, *tmp;
+ 
+-	lockdep_assert_held(&kvm->lock);
+-
+ 	list_for_each_entry_safe(d, tmp, &kvm->arch.rtas_tokens, list) {
+ 		list_del(&d->list);
+ 		kfree(d);
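
The RTAS hunks trade the coarse kvm->lock for a mutex dedicated to the token list, with lockdep_assert_held() documenting which internal helpers expect the caller to already hold it. A minimal sketch of the same fine-grained-lock shape (the token table here is invented):

#include <linux/list.h>
#include <linux/lockdep.h>
#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/types.h>

struct token_table {
	struct mutex lock;		/* protects tokens, and nothing else */
	struct list_head tokens;
};

struct token {
	struct list_head list;
	u64 value;
};

/* Internal helper: callers must hold table->lock. */
static struct token *token_find(struct token_table *t, u64 value)
{
	struct token *tok;

	lockdep_assert_held(&t->lock);
	list_for_each_entry(tok, &t->tokens, list)
		if (tok->value == value)
			return tok;
	return NULL;
}

static int token_define(struct token_table *t, u64 value)
{
	struct token *tok = kzalloc(sizeof(*tok), GFP_KERNEL);

	if (!tok)
		return -ENOMEM;
	tok->value = value;

	mutex_lock(&t->lock);
	if (!token_find(t, value))
		list_add_tail(&tok->list, &t->tokens);
	mutex_unlock(&t->lock);
	return 0;
}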
+diff --git a/arch/powerpc/platforms/powernv/opal-imc.c b/arch/powerpc/platforms/powernv/opal-imc.c
+index 3d27f02695e4..828f6656f8f7 100644
+--- a/arch/powerpc/platforms/powernv/opal-imc.c
++++ b/arch/powerpc/platforms/powernv/opal-imc.c
+@@ -161,6 +161,10 @@ static int imc_pmu_create(struct device_node *parent, int pmu_index, int domain)
+ 	struct imc_pmu *pmu_ptr;
+ 	u32 offset;
+ 
++	/* Return for unknown domain */
++	if (domain < 0)
++		return -EINVAL;
++
+ 	/* memory for pmu */
+ 	pmu_ptr = kzalloc(sizeof(*pmu_ptr), GFP_KERNEL);
+ 	if (!pmu_ptr)
+diff --git a/arch/s390/include/asm/ap.h b/arch/s390/include/asm/ap.h
+index e94a0a28b5eb..aea32dda3d14 100644
+--- a/arch/s390/include/asm/ap.h
++++ b/arch/s390/include/asm/ap.h
+@@ -160,8 +160,8 @@ struct ap_config_info {
+ 	unsigned char Nd;		/* max # of Domains - 1 */
+ 	unsigned char _reserved3[10];
+ 	unsigned int apm[8];		/* AP ID mask */
+-	unsigned int aqm[8];		/* AP queue mask */
+-	unsigned int adm[8];		/* AP domain mask */
++	unsigned int aqm[8];		/* AP (usage) queue mask */
++	unsigned int adm[8];		/* AP (control) domain mask */
+ 	unsigned char _reserved4[16];
+ } __aligned(8);
+ 
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index 10c99ce1fead..b71adf603b86 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -684,7 +684,7 @@ struct event_constraint intel_core2_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x1fc7, 0x1), /* SIMD_INST_RETURED.ANY */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0x1),    /* MEM_LOAD_RETIRED.* */
+ 	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x01),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108000c0, 0x01),
+ 	EVENT_CONSTRAINT_END
+ };
+ 
+@@ -693,7 +693,7 @@ struct event_constraint intel_atom_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x00c5, 0x1), /* MISPREDICTED_BRANCH_RETIRED */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0x1),    /* MEM_LOAD_RETIRED.* */
+ 	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x01),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108000c0, 0x01),
+ 	/* Allow all events as PEBS with no flags */
+ 	INTEL_ALL_EVENT_CONSTRAINT(0, 0x1),
+ 	EVENT_CONSTRAINT_END
+@@ -701,7 +701,7 @@ struct event_constraint intel_atom_pebs_event_constraints[] = {
+ 
+ struct event_constraint intel_slm_pebs_event_constraints[] = {
+ 	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x1),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108000c0, 0x1),
+ 	/* Allow all events as PEBS with no flags */
+ 	INTEL_ALL_EVENT_CONSTRAINT(0, 0x1),
+ 	EVENT_CONSTRAINT_END
+@@ -726,7 +726,7 @@ struct event_constraint intel_nehalem_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0xf),    /* MEM_LOAD_RETIRED.* */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xf7, 0xf),    /* FP_ASSIST.* */
+ 	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x0f),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108000c0, 0x0f),
+ 	EVENT_CONSTRAINT_END
+ };
+ 
+@@ -743,7 +743,7 @@ struct event_constraint intel_westmere_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0xf),    /* MEM_LOAD_RETIRED.* */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xf7, 0xf),    /* FP_ASSIST.* */
+ 	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x0f),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108000c0, 0x0f),
+ 	EVENT_CONSTRAINT_END
+ };
+ 
+@@ -752,7 +752,7 @@ struct event_constraint intel_snb_pebs_event_constraints[] = {
+ 	INTEL_PLD_CONSTRAINT(0x01cd, 0x8),    /* MEM_TRANS_RETIRED.LAT_ABOVE_THR */
+ 	INTEL_PST_CONSTRAINT(0x02cd, 0x8),    /* MEM_TRANS_RETIRED.PRECISE_STORES */
+ 	/* UOPS_RETIRED.ALL, inv=1, cmask=16 (cycles:p). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c2, 0xf),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c2, 0xf),
+         INTEL_EXCLEVT_CONSTRAINT(0xd0, 0xf),    /* MEM_UOP_RETIRED.* */
+         INTEL_EXCLEVT_CONSTRAINT(0xd1, 0xf),    /* MEM_LOAD_UOPS_RETIRED.* */
+         INTEL_EXCLEVT_CONSTRAINT(0xd2, 0xf),    /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.* */
+@@ -767,9 +767,9 @@ struct event_constraint intel_ivb_pebs_event_constraints[] = {
+         INTEL_PLD_CONSTRAINT(0x01cd, 0x8),    /* MEM_TRANS_RETIRED.LAT_ABOVE_THR */
+ 	INTEL_PST_CONSTRAINT(0x02cd, 0x8),    /* MEM_TRANS_RETIRED.PRECISE_STORES */
+ 	/* UOPS_RETIRED.ALL, inv=1, cmask=16 (cycles:p). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c2, 0xf),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c2, 0xf),
+ 	/* INST_RETIRED.PREC_DIST, inv=1, cmask=16 (cycles:ppp). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c0, 0x2),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c0, 0x2),
+ 	INTEL_EXCLEVT_CONSTRAINT(0xd0, 0xf),    /* MEM_UOP_RETIRED.* */
+ 	INTEL_EXCLEVT_CONSTRAINT(0xd1, 0xf),    /* MEM_LOAD_UOPS_RETIRED.* */
+ 	INTEL_EXCLEVT_CONSTRAINT(0xd2, 0xf),    /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.* */
+@@ -783,9 +783,9 @@ struct event_constraint intel_hsw_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x01c0, 0x2), /* INST_RETIRED.PRECDIST */
+ 	INTEL_PLD_CONSTRAINT(0x01cd, 0xf),    /* MEM_TRANS_RETIRED.* */
+ 	/* UOPS_RETIRED.ALL, inv=1, cmask=16 (cycles:p). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c2, 0xf),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c2, 0xf),
+ 	/* INST_RETIRED.PREC_DIST, inv=1, cmask=16 (cycles:ppp). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c0, 0x2),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c0, 0x2),
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_NA(0x01c2, 0xf), /* UOPS_RETIRED.ALL */
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_XLD(0x11d0, 0xf), /* MEM_UOPS_RETIRED.STLB_MISS_LOADS */
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_XLD(0x21d0, 0xf), /* MEM_UOPS_RETIRED.LOCK_LOADS */
+@@ -806,9 +806,9 @@ struct event_constraint intel_bdw_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x01c0, 0x2), /* INST_RETIRED.PRECDIST */
+ 	INTEL_PLD_CONSTRAINT(0x01cd, 0xf),    /* MEM_TRANS_RETIRED.* */
+ 	/* UOPS_RETIRED.ALL, inv=1, cmask=16 (cycles:p). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c2, 0xf),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c2, 0xf),
+ 	/* INST_RETIRED.PREC_DIST, inv=1, cmask=16 (cycles:ppp). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c0, 0x2),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c0, 0x2),
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_NA(0x01c2, 0xf), /* UOPS_RETIRED.ALL */
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x11d0, 0xf), /* MEM_UOPS_RETIRED.STLB_MISS_LOADS */
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x21d0, 0xf), /* MEM_UOPS_RETIRED.LOCK_LOADS */
+@@ -829,9 +829,9 @@ struct event_constraint intel_bdw_pebs_event_constraints[] = {
+ struct event_constraint intel_skl_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x1c0, 0x2),	/* INST_RETIRED.PREC_DIST */
+ 	/* INST_RETIRED.PREC_DIST, inv=1, cmask=16 (cycles:ppp). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c0, 0x2),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c0, 0x2),
+ 	/* INST_RETIRED.TOTAL_CYCLES_PS (inv=1, cmask=16) (cycles:p). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x0f),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108000c0, 0x0f),
+ 	INTEL_PLD_CONSTRAINT(0x1cd, 0xf),		      /* MEM_TRANS_RETIRED.* */
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x11d0, 0xf), /* MEM_INST_RETIRED.STLB_MISS_LOADS */
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x12d0, 0xf), /* MEM_INST_RETIRED.STLB_MISS_STORES */
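
The ds.c hunks switch the inv/cmask cycle constraints from INTEL_FLAGS_EVENT_CONSTRAINT to INTEL_FLAGS_UEVENT_CONSTRAINT, i.e. from matching on the event select alone to matching on event select plus unit mask, so a constraint written for one umask no longer captures sibling events. A standalone sketch of mask-based matching (the field layout is real perf eventsel layout, but the names are not the kernel's macros):

#include <stdint.h>
#include <stdio.h>

#define EVENTSEL_EVENT  0x000000ffULL	/* event select, bits 0-7 */
#define EVENTSEL_UMASK  0x0000ff00ULL	/* unit mask, bits 8-15 */
#define EVENTSEL_INV    (1ULL << 23)
#define EVENTSEL_CMASK  0xff000000ULL	/* counter mask, bits 24-31 */

struct constraint {
	uint64_t code;	/* required field values */
	uint64_t mask;	/* which fields participate in the match */
};

static int matches(const struct constraint *c, uint64_t config)
{
	return (config & c->mask) == c->code;
}

int main(void)
{
	/* INST_RETIRED.ANY_P with inv=1, cmask=16 (cycles:p) */
	uint64_t cycles_p = 0x108000c0ULL;
	/* Same event select, different umask (as in 0x108001c0 above). */
	uint64_t sibling  = 0x108001c0ULL;

	struct constraint event_only = { 0x108000c0ULL,
		EVENTSEL_EVENT | EVENTSEL_INV | EVENTSEL_CMASK };
	struct constraint with_umask = { 0x108000c0ULL,
		EVENTSEL_EVENT | EVENTSEL_UMASK | EVENTSEL_INV | EVENTSEL_CMASK };

	printf("event-only: %d %d\n", matches(&event_only, cycles_p),
	       matches(&event_only, sibling));	/* 1 1: overbroad */
	printf("with umask: %d %d\n", matches(&with_umask, cycles_p),
	       matches(&with_umask, sibling));	/* 1 0: precise */
	return 0;
}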
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 01004bfb1a1b..524709dcf749 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -820,8 +820,11 @@ static void init_amd_zn(struct cpuinfo_x86 *c)
+ {
+ 	set_cpu_cap(c, X86_FEATURE_ZEN);
+ 
+-	/* Fix erratum 1076: CPB feature bit not being set in CPUID. */
+-	if (!cpu_has(c, X86_FEATURE_CPB))
++	/*
++	 * Fix erratum 1076: CPB feature bit not being set in CPUID.
++	 * Always set it, except when running under a hypervisor.
++	 */
++	if (!cpu_has(c, X86_FEATURE_HYPERVISOR) && !cpu_has(c, X86_FEATURE_CPB))
+ 		set_cpu_cap(c, X86_FEATURE_CPB);
+ }
+ 
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 11efca3534ad..00b826399228 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2846,7 +2846,7 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
+ 		goto err_exit;
+ 
+ 	if (blk_mq_alloc_ctxs(q))
+-		goto err_exit;
++		goto err_poll;
+ 
+ 	/* init q->mq_kobj and sw queues' kobjects */
+ 	blk_mq_sysfs_init(q);
+@@ -2907,6 +2907,9 @@ err_hctxs:
+ 	kfree(q->queue_hw_ctx);
+ err_sys_init:
+ 	blk_mq_sysfs_deinit(q);
++err_poll:
++	blk_stat_free_callback(q->poll_cb);
++	q->poll_cb = NULL;
+ err_exit:
+ 	q->mq_ops = NULL;
+ 	return ERR_PTR(-ENOMEM);
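
The blk-mq fix is a textbook goto-ladder repair: the failing path jumped to a label that skipped freeing the poll-stats callback allocated earlier, so a new err_poll rung is inserted at the right height. A minimal sketch of the unwind ladder, with labels in reverse order of acquisition (the resources are placeholders):

#include <linux/errno.h>
#include <linux/slab.h>

struct thing {
	void *a, *b, *c;
};

static int thing_init(struct thing *t)
{
	t->a = kzalloc(64, GFP_KERNEL);
	if (!t->a)
		goto err_exit;

	t->b = kzalloc(64, GFP_KERNEL);
	if (!t->b)
		goto err_free_a;	/* unwind everything set up so far */

	t->c = kzalloc(64, GFP_KERNEL);
	if (!t->c)
		goto err_free_b;	/* jumping to err_exit here would leak a and b */

	return 0;

err_free_b:
	kfree(t->b);
	t->b = NULL;
err_free_a:
	kfree(t->a);
	t->a = NULL;
err_exit:
	return -ENOMEM;
}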
+diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
+index 824ae985ad93..ccb59768b1f3 100644
+--- a/drivers/acpi/device_pm.c
++++ b/drivers/acpi/device_pm.c
+@@ -949,8 +949,8 @@ static bool acpi_dev_needs_resume(struct device *dev, struct acpi_device *adev)
+ 	u32 sys_target = acpi_target_system_state();
+ 	int ret, state;
+ 
+-	if (!pm_runtime_suspended(dev) || !adev ||
+-	    device_may_wakeup(dev) != !!adev->wakeup.prepare_count)
++	if (!pm_runtime_suspended(dev) || !adev || (adev->wakeup.flags.valid &&
++	    device_may_wakeup(dev) != !!adev->wakeup.prepare_count))
+ 		return true;
+ 
+ 	if (sys_target == ACPI_STATE_S0)
+diff --git a/drivers/clk/ti/clkctrl.c b/drivers/clk/ti/clkctrl.c
+index 639f515e08f0..3325ee43bcc1 100644
+--- a/drivers/clk/ti/clkctrl.c
++++ b/drivers/clk/ti/clkctrl.c
+@@ -137,9 +137,6 @@ static int _omap4_clkctrl_clk_enable(struct clk_hw *hw)
+ 	int ret;
+ 	union omap4_timeout timeout = { 0 };
+ 
+-	if (!clk->enable_bit)
+-		return 0;
+-
+ 	if (clk->clkdm) {
+ 		ret = ti_clk_ll_ops->clkdm_clk_enable(clk->clkdm, hw->clk);
+ 		if (ret) {
+@@ -151,6 +148,9 @@ static int _omap4_clkctrl_clk_enable(struct clk_hw *hw)
+ 		}
+ 	}
+ 
++	if (!clk->enable_bit)
++		return 0;
++
+ 	val = ti_clk_ll_ops->clk_readl(&clk->enable_reg);
+ 
+ 	val &= ~OMAP4_MODULEMODE_MASK;
+@@ -179,7 +179,7 @@ static void _omap4_clkctrl_clk_disable(struct clk_hw *hw)
+ 	union omap4_timeout timeout = { 0 };
+ 
+ 	if (!clk->enable_bit)
+-		return;
++		goto exit;
+ 
+ 	val = ti_clk_ll_ops->clk_readl(&clk->enable_reg);
+ 
+diff --git a/drivers/gpio/Kconfig b/drivers/gpio/Kconfig
+index 3f50526a771f..864a1ba7aa3a 100644
+--- a/drivers/gpio/Kconfig
++++ b/drivers/gpio/Kconfig
+@@ -824,6 +824,7 @@ config GPIO_ADP5588
+ config GPIO_ADP5588_IRQ
+ 	bool "Interrupt controller support for ADP5588"
+ 	depends on GPIO_ADP5588=y
++	select GPIOLIB_IRQCHIP
+ 	help
+ 	  Say yes here to enable the adp5588 to be used as an interrupt
+ 	  controller. It requires the driver to be built in the kernel.
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_dump.c b/drivers/gpu/drm/etnaviv/etnaviv_dump.c
+index 33854c94cb85..515515ef24f9 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_dump.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_dump.c
+@@ -125,6 +125,8 @@ void etnaviv_core_dump(struct etnaviv_gpu *gpu)
+ 		return;
+ 	etnaviv_dump_core = false;
+ 
++	mutex_lock(&gpu->mmu->lock);
++
+ 	mmu_size = etnaviv_iommu_dump_size(gpu->mmu);
+ 
+ 	/* We always dump registers, mmu, ring and end marker */
+@@ -167,6 +169,7 @@ void etnaviv_core_dump(struct etnaviv_gpu *gpu)
+ 	iter.start = __vmalloc(file_size, GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY,
+ 			       PAGE_KERNEL);
+ 	if (!iter.start) {
++		mutex_unlock(&gpu->mmu->lock);
+ 		dev_warn(gpu->dev, "failed to allocate devcoredump file\n");
+ 		return;
+ 	}
+@@ -234,6 +237,8 @@ void etnaviv_core_dump(struct etnaviv_gpu *gpu)
+ 					 obj->base.size);
+ 	}
+ 
++	mutex_unlock(&gpu->mmu->lock);
++
+ 	etnaviv_core_dump_header(&iter, ETDUMP_BUF_END, iter.data);
+ 
+ 	dev_coredumpv(gpu->dev, iter.start, iter.data - iter.start, GFP_KERNEL);
+diff --git a/drivers/i2c/i2c-dev.c b/drivers/i2c/i2c-dev.c
+index 3f7b9af11137..776f36690448 100644
+--- a/drivers/i2c/i2c-dev.c
++++ b/drivers/i2c/i2c-dev.c
+@@ -283,6 +283,7 @@ static noinline int i2cdev_ioctl_rdwr(struct i2c_client *client,
+ 			    msgs[i].len < 1 || msgs[i].buf[0] < 1 ||
+ 			    msgs[i].len < msgs[i].buf[0] +
+ 					     I2C_SMBUS_BLOCK_MAX) {
++				i++;
+ 				res = -EINVAL;
+ 				break;
+ 			}
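
The i2c-dev fix bumps i before breaking so that the buffer already duplicated for the message that failed validation is included in the later while (i-- > 0) cleanup. A hedged sketch of that idiom (the struct and the validation check are stand-ins):

#include <linux/err.h>
#include <linux/slab.h>
#include <linux/string.h>

struct msg {
	void __user *ubuf;
	void *buf;
	u16 len;
};

static int copy_msgs(struct msg *msgs, int n)
{
	int i, res = 0;

	for (i = 0; i < n; i++) {
		msgs[i].buf = memdup_user(msgs[i].ubuf, msgs[i].len);
		if (IS_ERR(msgs[i].buf)) {
			res = PTR_ERR(msgs[i].buf);
			break;		/* nothing was allocated for slot i */
		}
		if (msgs[i].len < 1) {	/* stand-in validation check */
			i++;		/* count the buffer just duplicated... */
			res = -EINVAL;
			break;		/* ...so the unwind below frees it too */
		}
	}
	if (res)
		while (i-- > 0)		/* frees exactly the slots that own memory */
			kfree(msgs[i].buf);
	return res;			/* on success the caller owns the buffers */
}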
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
+index 650de0fefb7b..385f14a4d5a7 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
+@@ -471,7 +471,10 @@ inv_mpu6050_read_raw(struct iio_dev *indio_dev,
+ 			return IIO_VAL_INT_PLUS_MICRO;
+ 		case IIO_TEMP:
+ 			*val = 0;
+-			*val2 = INV_MPU6050_TEMP_SCALE;
++			if (st->chip_type == INV_ICM20602)
++				*val2 = INV_ICM20602_TEMP_SCALE;
++			else
++				*val2 = INV_MPU6050_TEMP_SCALE;
+ 
+ 			return IIO_VAL_INT_PLUS_MICRO;
+ 		default:
+@@ -480,7 +483,10 @@ inv_mpu6050_read_raw(struct iio_dev *indio_dev,
+ 	case IIO_CHAN_INFO_OFFSET:
+ 		switch (chan->type) {
+ 		case IIO_TEMP:
+-			*val = INV_MPU6050_TEMP_OFFSET;
++			if (st->chip_type == INV_ICM20602)
++				*val = INV_ICM20602_TEMP_OFFSET;
++			else
++				*val = INV_MPU6050_TEMP_OFFSET;
+ 
+ 			return IIO_VAL_INT;
+ 		default:
+@@ -845,6 +851,32 @@ static const struct iio_chan_spec inv_mpu_channels[] = {
+ 	INV_MPU6050_CHAN(IIO_ACCEL, IIO_MOD_Z, INV_MPU6050_SCAN_ACCL_Z),
+ };
+ 
++static const struct iio_chan_spec inv_icm20602_channels[] = {
++	IIO_CHAN_SOFT_TIMESTAMP(INV_ICM20602_SCAN_TIMESTAMP),
++	{
++		.type = IIO_TEMP,
++		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW)
++				| BIT(IIO_CHAN_INFO_OFFSET)
++				| BIT(IIO_CHAN_INFO_SCALE),
++		.scan_index = INV_ICM20602_SCAN_TEMP,
++		.scan_type = {
++				.sign = 's',
++				.realbits = 16,
++				.storagebits = 16,
++				.shift = 0,
++				.endianness = IIO_BE,
++			     },
++	},
++
++	INV_MPU6050_CHAN(IIO_ANGL_VEL, IIO_MOD_X, INV_ICM20602_SCAN_GYRO_X),
++	INV_MPU6050_CHAN(IIO_ANGL_VEL, IIO_MOD_Y, INV_ICM20602_SCAN_GYRO_Y),
++	INV_MPU6050_CHAN(IIO_ANGL_VEL, IIO_MOD_Z, INV_ICM20602_SCAN_GYRO_Z),
++
++	INV_MPU6050_CHAN(IIO_ACCEL, IIO_MOD_Y, INV_ICM20602_SCAN_ACCL_Y),
++	INV_MPU6050_CHAN(IIO_ACCEL, IIO_MOD_X, INV_ICM20602_SCAN_ACCL_X),
++	INV_MPU6050_CHAN(IIO_ACCEL, IIO_MOD_Z, INV_ICM20602_SCAN_ACCL_Z),
++};
++
+ /*
+  * The user can choose any frequency between INV_MPU6050_MIN_FIFO_RATE and
+  * INV_MPU6050_MAX_FIFO_RATE, but only these frequencies are matched by the
+@@ -1100,8 +1132,14 @@ int inv_mpu_core_probe(struct regmap *regmap, int irq, const char *name,
+ 		indio_dev->name = name;
+ 	else
+ 		indio_dev->name = dev_name(dev);
+-	indio_dev->channels = inv_mpu_channels;
+-	indio_dev->num_channels = ARRAY_SIZE(inv_mpu_channels);
++
++	if (chip_type == INV_ICM20602) {
++		indio_dev->channels = inv_icm20602_channels;
++		indio_dev->num_channels = ARRAY_SIZE(inv_icm20602_channels);
++	} else {
++		indio_dev->channels = inv_mpu_channels;
++		indio_dev->num_channels = ARRAY_SIZE(inv_mpu_channels);
++	}
+ 
+ 	indio_dev->info = &mpu_info;
+ 	indio_dev->modes = INDIO_BUFFER_TRIGGERED;
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h b/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
+index 325afd9f5f61..3d5fe4474378 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
+@@ -208,6 +208,9 @@ struct inv_mpu6050_state {
+ #define INV_MPU6050_BYTES_PER_3AXIS_SENSOR   6
+ #define INV_MPU6050_FIFO_COUNT_BYTE          2
+ 
++/* ICM20602 FIFO samples include temperature readings */
++#define INV_ICM20602_BYTES_PER_TEMP_SENSOR   2
++
+ /* mpu6500 registers */
+ #define INV_MPU6500_REG_ACCEL_CONFIG_2      0x1D
+ #define INV_MPU6500_REG_ACCEL_OFFSET        0x77
+@@ -229,6 +232,9 @@ struct inv_mpu6050_state {
+ #define INV_MPU6050_GYRO_CONFIG_FSR_SHIFT    3
+ #define INV_MPU6050_ACCL_CONFIG_FSR_SHIFT    3
+ 
++#define INV_ICM20602_TEMP_OFFSET	     8170
++#define INV_ICM20602_TEMP_SCALE		     3060
++
+ /* 6 + 6 round up and plus 8 */
+ #define INV_MPU6050_OUTPUT_DATA_SIZE         24
+ 
+@@ -270,7 +276,7 @@ struct inv_mpu6050_state {
+ #define INV_ICM20608_WHOAMI_VALUE		0xAF
+ #define INV_ICM20602_WHOAMI_VALUE		0x12
+ 
+-/* scan element definition */
++/* scan element definition for generic MPU6xxx devices */
+ enum inv_mpu6050_scan {
+ 	INV_MPU6050_SCAN_ACCL_X,
+ 	INV_MPU6050_SCAN_ACCL_Y,
+@@ -281,6 +287,18 @@ enum inv_mpu6050_scan {
+ 	INV_MPU6050_SCAN_TIMESTAMP,
+ };
+ 
++/* scan element definition for ICM20602, which includes temperature */
++enum inv_icm20602_scan {
++	INV_ICM20602_SCAN_ACCL_X,
++	INV_ICM20602_SCAN_ACCL_Y,
++	INV_ICM20602_SCAN_ACCL_Z,
++	INV_ICM20602_SCAN_TEMP,
++	INV_ICM20602_SCAN_GYRO_X,
++	INV_ICM20602_SCAN_GYRO_Y,
++	INV_ICM20602_SCAN_GYRO_Z,
++	INV_ICM20602_SCAN_TIMESTAMP,
++};
++
+ enum inv_mpu6050_filter_e {
+ 	INV_MPU6050_FILTER_256HZ_NOLPF2 = 0,
+ 	INV_MPU6050_FILTER_188HZ,
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
+index 548e042f7b5b..57bd11bde56b 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
+@@ -207,6 +207,9 @@ irqreturn_t inv_mpu6050_read_fifo(int irq, void *p)
+ 	if (st->chip_config.gyro_fifo_enable)
+ 		bytes_per_datum += INV_MPU6050_BYTES_PER_3AXIS_SENSOR;
+ 
++	if (st->chip_type == INV_ICM20602)
++		bytes_per_datum += INV_ICM20602_BYTES_PER_TEMP_SENSOR;
++
+ 	/*
+ 	 * read fifo_count register to know how many bytes are inside the FIFO
+ 	 * right now
+diff --git a/drivers/isdn/mISDN/socket.c b/drivers/isdn/mISDN/socket.c
+index a14e35d40538..84e1d4c2db66 100644
+--- a/drivers/isdn/mISDN/socket.c
++++ b/drivers/isdn/mISDN/socket.c
+@@ -393,7 +393,7 @@ data_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ 			memcpy(di.channelmap, dev->channelmap,
+ 			       sizeof(di.channelmap));
+ 			di.nrbchan = dev->nrbchan;
+-			strcpy(di.name, dev_name(&dev->dev));
++			strscpy(di.name, dev_name(&dev->dev), sizeof(di.name));
+ 			if (copy_to_user((void __user *)arg, &di, sizeof(di)))
+ 				err = -EFAULT;
+ 		} else
+@@ -676,7 +676,7 @@ base_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ 			memcpy(di.channelmap, dev->channelmap,
+ 			       sizeof(di.channelmap));
+ 			di.nrbchan = dev->nrbchan;
+-			strcpy(di.name, dev_name(&dev->dev));
++			strscpy(di.name, dev_name(&dev->dev), sizeof(di.name));
+ 			if (copy_to_user((void __user *)arg, &di, sizeof(di)))
+ 				err = -EFAULT;
+ 		} else
+@@ -690,6 +690,7 @@ base_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ 			err = -EFAULT;
+ 			break;
+ 		}
++		dn.name[sizeof(dn.name) - 1] = '\0';
+ 		dev = get_mdevice(dn.id);
+ 		if (dev)
+ 			err = device_rename(&dev->dev, dn.name);
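
The mISDN hunks pair two string-safety rules: strscpy() bounds and terminates copies into the fixed-size di.name (strcpy() would overrun on a long source), and a name received from userspace is explicitly NUL-terminated before anything treats it as a C string. A small kernel-style sketch (the struct is hypothetical):

#include <linux/string.h>
#include <linux/uaccess.h>

struct dev_info {
	char name[16];
};

static ssize_t fill_name(struct dev_info *di, const char *src)
{
	/* strscpy() truncates to the destination size and always
	 * NUL-terminates; it returns -E2BIG if src did not fit.
	 */
	return strscpy(di->name, src, sizeof(di->name));
}

static int take_name_from_user(struct dev_info *di, const void __user *arg)
{
	if (copy_from_user(di, arg, sizeof(*di)))
		return -EFAULT;
	/* Userspace may not have terminated the string: do it ourselves. */
	di->name[sizeof(di->name) - 1] = '\0';
	return 0;
}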
+diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
+index 39dace8e3512..f46086fa9064 100644
+--- a/drivers/net/dsa/microchip/ksz_common.c
++++ b/drivers/net/dsa/microchip/ksz_common.c
+@@ -83,6 +83,9 @@ static void ksz_mib_read_work(struct work_struct *work)
+ 	int i;
+ 
+ 	for (i = 0; i < dev->mib_port_cnt; i++) {
++		if (dsa_is_unused_port(dev->ds, i))
++			continue;
++
+ 		p = &dev->ports[i];
+ 		mib = &p->mib;
+ 		mutex_lock(&mib->cnt_mutex);
+diff --git a/drivers/net/dsa/rtl8366.c b/drivers/net/dsa/rtl8366.c
+index 6dedd43442cc..35b767baf21f 100644
+--- a/drivers/net/dsa/rtl8366.c
++++ b/drivers/net/dsa/rtl8366.c
+@@ -307,7 +307,8 @@ int rtl8366_vlan_filtering(struct dsa_switch *ds, int port, bool vlan_filtering)
+ 	struct rtl8366_vlan_4k vlan4k;
+ 	int ret;
+ 
+-	if (!smi->ops->is_vlan_valid(smi, port))
++	/* Use VLAN nr port + 1 since VLAN0 is not valid */
++	if (!smi->ops->is_vlan_valid(smi, port + 1))
+ 		return -EINVAL;
+ 
+ 	dev_info(smi->dev, "%s filtering on port %d\n",
+@@ -318,12 +319,12 @@ int rtl8366_vlan_filtering(struct dsa_switch *ds, int port, bool vlan_filtering)
+ 	 * The hardware support filter ID (FID) 0..7, I have no clue how to
+ 	 * support this in the driver when the callback only says on/off.
+ 	 */
+-	ret = smi->ops->get_vlan_4k(smi, port, &vlan4k);
++	ret = smi->ops->get_vlan_4k(smi, port + 1, &vlan4k);
+ 	if (ret)
+ 		return ret;
+ 
+ 	/* Just set the filter to FID 1 for now then */
+-	ret = rtl8366_set_vlan(smi, port,
++	ret = rtl8366_set_vlan(smi, port + 1,
+ 			       vlan4k.member,
+ 			       vlan4k.untag,
+ 			       1);
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+index e2ffb159cbe2..bf4aa7060f1a 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+@@ -139,10 +139,10 @@ void aq_ring_queue_stop(struct aq_ring_s *ring)
+ bool aq_ring_tx_clean(struct aq_ring_s *self)
+ {
+ 	struct device *dev = aq_nic_get_dev(self->aq_nic);
+-	unsigned int budget = AQ_CFG_TX_CLEAN_BUDGET;
++	unsigned int budget;
+ 
+-	for (; self->sw_head != self->hw_head && budget--;
+-		self->sw_head = aq_ring_next_dx(self, self->sw_head)) {
++	for (budget = AQ_CFG_TX_CLEAN_BUDGET;
++	     budget && self->sw_head != self->hw_head; budget--) {
+ 		struct aq_ring_buff_s *buff = &self->buff_ring[self->sw_head];
+ 
+ 		if (likely(buff->is_mapped)) {
+@@ -167,6 +167,7 @@ bool aq_ring_tx_clean(struct aq_ring_s *self)
+ 
+ 		buff->pa = 0U;
+ 		buff->eop_index = 0xffffU;
++		self->sw_head = aq_ring_next_dx(self, self->sw_head);
+ 	}
+ 
+ 	return !!budget;
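
The aq_ring rework moves the sw_head update out of the for-increment and into the loop body, so the ring pointer only advances once an entry has actually been cleaned, and the budget test runs before each iteration rather than decrementing past the work done. A standalone sketch of the resulting loop shape (the ring layout is invented):

struct ring_buff {
	int is_mapped;
};

struct ring {
	unsigned int sw_head, hw_head, size;
	struct ring_buff buff[64];
};

static unsigned int ring_next(const struct ring *r, unsigned int i)
{
	return (i + 1 < r->size) ? i + 1 : 0;
}

static int ring_clean(struct ring *r, unsigned int budget)
{
	for (; budget && r->sw_head != r->hw_head; budget--) {
		struct ring_buff *b = &r->buff[r->sw_head];

		/* ...unmap, complete, recycle b... */
		b->is_mapped = 0;

		/* Advance only after the entry is fully processed. */
		r->sw_head = ring_next(r, r->sw_head);
	}
	return !!budget;	/* nonzero: the ring was fully drained */
}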
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
+index b31dba1b1a55..ec302fdfec63 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
+@@ -702,38 +702,41 @@ static int hw_atl_b0_hw_ring_rx_receive(struct aq_hw_s *self,
+ 		if ((rx_stat & BIT(0)) || rxd_wb->type & 0x1000U) {
+ 			/* MAC error or DMA error */
+ 			buff->is_error = 1U;
+-		} else {
+-			if (self->aq_nic_cfg->is_rss) {
+-				/* last 4 byte */
+-				u16 rss_type = rxd_wb->type & 0xFU;
+-
+-				if (rss_type && rss_type < 0x8U) {
+-					buff->is_hash_l4 = (rss_type == 0x4 ||
+-					rss_type == 0x5);
+-					buff->rss_hash = rxd_wb->rss_hash;
+-				}
++		}
++		if (self->aq_nic_cfg->is_rss) {
++			/* last 4 byte */
++			u16 rss_type = rxd_wb->type & 0xFU;
++
++			if (rss_type && rss_type < 0x8U) {
++				buff->is_hash_l4 = (rss_type == 0x4 ||
++				rss_type == 0x5);
++				buff->rss_hash = rxd_wb->rss_hash;
+ 			}
++		}
+ 
+-			if (HW_ATL_B0_RXD_WB_STAT2_EOP & rxd_wb->status) {
+-				buff->len = rxd_wb->pkt_len %
+-					AQ_CFG_RX_FRAME_MAX;
+-				buff->len = buff->len ?
+-					buff->len : AQ_CFG_RX_FRAME_MAX;
+-				buff->next = 0U;
+-				buff->is_eop = 1U;
++		if (HW_ATL_B0_RXD_WB_STAT2_EOP & rxd_wb->status) {
++			buff->len = rxd_wb->pkt_len %
++				AQ_CFG_RX_FRAME_MAX;
++			buff->len = buff->len ?
++				buff->len : AQ_CFG_RX_FRAME_MAX;
++			buff->next = 0U;
++			buff->is_eop = 1U;
++		} else {
++			buff->len =
++				rxd_wb->pkt_len > AQ_CFG_RX_FRAME_MAX ?
++				AQ_CFG_RX_FRAME_MAX : rxd_wb->pkt_len;
++
++			if (HW_ATL_B0_RXD_WB_STAT2_RSCCNT &
++				rxd_wb->status) {
++				/* LRO */
++				buff->next = rxd_wb->next_desc_ptr;
++				++ring->stats.rx.lro_packets;
+ 			} else {
+-				if (HW_ATL_B0_RXD_WB_STAT2_RSCCNT &
+-					rxd_wb->status) {
+-					/* LRO */
+-					buff->next = rxd_wb->next_desc_ptr;
+-					++ring->stats.rx.lro_packets;
+-				} else {
+-					/* jumbo */
+-					buff->next =
+-						aq_ring_next_dx(ring,
+-								ring->hw_head);
+-					++ring->stats.rx.jumbo_packets;
+-				}
++				/* jumbo */
++				buff->next =
++					aq_ring_next_dx(ring,
++							ring->hw_head);
++				++ring->stats.rx.jumbo_packets;
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/net/ethernet/dec/tulip/de4x5.c b/drivers/net/ethernet/dec/tulip/de4x5.c
+index 66535d1653f6..f16853c3c851 100644
+--- a/drivers/net/ethernet/dec/tulip/de4x5.c
++++ b/drivers/net/ethernet/dec/tulip/de4x5.c
+@@ -2107,7 +2107,6 @@ static struct eisa_driver de4x5_eisa_driver = {
+ 		.remove  = de4x5_eisa_remove,
+         }
+ };
+-MODULE_DEVICE_TABLE(eisa, de4x5_eisa_ids);
+ #endif
+ 
+ #ifdef CONFIG_PCI
+diff --git a/drivers/net/ethernet/emulex/benet/be_ethtool.c b/drivers/net/ethernet/emulex/benet/be_ethtool.c
+index 4c218341c51b..6e635debc7fd 100644
+--- a/drivers/net/ethernet/emulex/benet/be_ethtool.c
++++ b/drivers/net/ethernet/emulex/benet/be_ethtool.c
+@@ -1105,7 +1105,7 @@ static int be_get_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd,
+ 		cmd->data = be_get_rss_hash_opts(adapter, cmd->flow_type);
+ 		break;
+ 	case ETHTOOL_GRXRINGS:
+-		cmd->data = adapter->num_rx_qs - 1;
++		cmd->data = adapter->num_rx_qs;
+ 		break;
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+index d3f2408dc9e8..f38c3fa7d705 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+@@ -780,7 +780,7 @@ static void dpaa_eth_add_channel(u16 channel)
+ 	struct qman_portal *portal;
+ 	int cpu;
+ 
+-	for_each_cpu(cpu, cpus) {
++	for_each_cpu_and(cpu, cpus, cpu_online_mask) {
+ 		portal = qman_get_affine_portal(cpu);
+ 		qman_p_static_dequeue_add(portal, pool);
+ 	}
+@@ -896,7 +896,7 @@ static void dpaa_fq_setup(struct dpaa_priv *priv,
+ 	u16 channels[NR_CPUS];
+ 	struct dpaa_fq *fq;
+ 
+-	for_each_cpu(cpu, affine_cpus)
++	for_each_cpu_and(cpu, affine_cpus, cpu_online_mask)
+ 		channels[num_portals++] = qman_affine_channel(cpu);
+ 
+ 	if (num_portals == 0)
+@@ -2174,7 +2174,6 @@ static int dpaa_eth_poll(struct napi_struct *napi, int budget)
+ 	if (cleaned < budget) {
+ 		napi_complete_done(napi, cleaned);
+ 		qman_p_irqsource_add(np->p, QM_PIRQ_DQRI);
+-
+ 	} else if (np->down) {
+ 		qman_p_irqsource_add(np->p, QM_PIRQ_DQRI);
+ 	}
+@@ -2448,7 +2447,7 @@ static void dpaa_eth_napi_enable(struct dpaa_priv *priv)
+ 	struct dpaa_percpu_priv *percpu_priv;
+ 	int i;
+ 
+-	for_each_possible_cpu(i) {
++	for_each_online_cpu(i) {
+ 		percpu_priv = per_cpu_ptr(priv->percpu_priv, i);
+ 
+ 		percpu_priv->np.down = 0;
+@@ -2461,7 +2460,7 @@ static void dpaa_eth_napi_disable(struct dpaa_priv *priv)
+ 	struct dpaa_percpu_priv *percpu_priv;
+ 	int i;
+ 
+-	for_each_possible_cpu(i) {
++	for_each_online_cpu(i) {
+ 		percpu_priv = per_cpu_ptr(priv->percpu_priv, i);
+ 
+ 		percpu_priv->np.down = 1;
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
+index bdee441bc3b7..7ce2e99b594d 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
+@@ -569,7 +569,7 @@ static int dpaa_set_coalesce(struct net_device *dev,
+ 	qman_dqrr_get_ithresh(portal, &prev_thresh);
+ 
+ 	/* set new values */
+-	for_each_cpu(cpu, cpus) {
++	for_each_cpu_and(cpu, cpus, cpu_online_mask) {
+ 		portal = qman_get_affine_portal(cpu);
+ 		res = qman_portal_set_iperiod(portal, period);
+ 		if (res)
+@@ -586,7 +586,7 @@ static int dpaa_set_coalesce(struct net_device *dev,
+ 
+ revert_values:
+ 	/* restore previous values */
+-	for_each_cpu(cpu, cpus) {
++	for_each_cpu_and(cpu, cpus, cpu_online_mask) {
+ 		if (!needs_revert[cpu])
+ 			continue;
+ 		portal = qman_get_affine_portal(cpu);
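
Both dpaa files switch from for_each_cpu() to for_each_cpu_and() with cpu_online_mask, walking only CPUs that are in the affine set and currently online, since an offline CPU may have no usable portal behind it. A minimal kernel-style sketch (the portal work is elided):

#include <linux/cpumask.h>
#include <linux/printk.h>

static void configure_online_portals(const struct cpumask *affine)
{
	int cpu;

	/* Visits only CPUs set in both masks, without allocating a
	 * temporary cpumask for the intersection.
	 */
	for_each_cpu_and(cpu, affine, cpu_online_mask)
		pr_info("configuring portal on cpu %d\n", cpu);
}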
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index 57cbaa38d247..df371c81a706 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -1966,7 +1966,7 @@ alloc_channel(struct dpaa2_eth_priv *priv)
+ 
+ 	channel->dpcon = setup_dpcon(priv);
+ 	if (IS_ERR_OR_NULL(channel->dpcon)) {
+-		err = PTR_ERR(channel->dpcon);
++		err = PTR_ERR_OR_ZERO(channel->dpcon);
+ 		goto err_setup;
+ 	}
+ 
+@@ -2022,7 +2022,7 @@ static int setup_dpio(struct dpaa2_eth_priv *priv)
+ 		/* Try to allocate a channel */
+ 		channel = alloc_channel(priv);
+ 		if (IS_ERR_OR_NULL(channel)) {
+-			err = PTR_ERR(channel);
++			err = PTR_ERR_OR_ZERO(channel);
+ 			if (err != -EPROBE_DEFER)
+ 				dev_info(dev,
+ 					 "No affine channel for cpu %d and above\n", i);
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c
+index 591dfcf76adb..0610fc0bebc2 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c
+@@ -4,6 +4,7 @@
+  */
+ 
+ #include <linux/net_tstamp.h>
++#include <linux/nospec.h>
+ 
+ #include "dpni.h"	/* DPNI_LINK_OPT_* */
+ #include "dpaa2-eth.h"
+@@ -589,6 +590,8 @@ static int dpaa2_eth_get_rxnfc(struct net_device *net_dev,
+ 	case ETHTOOL_GRXCLSRULE:
+ 		if (rxnfc->fs.location >= max_rules)
+ 			return -EINVAL;
++		rxnfc->fs.location = array_index_nospec(rxnfc->fs.location,
++							max_rules);
+ 		if (!priv->cls_rules[rxnfc->fs.location].in_use)
+ 			return -EINVAL;
+ 		rxnfc->fs = priv->cls_rules[rxnfc->fs.location].fs;
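
The dpaa2 ethtool hunk adds the standard Spectre-v1 mitigation: after the bounds check, array_index_nospec() clamps the user-controlled index so that even a mispredicted branch cannot speculatively read past the array. A short sketch (the rules table is hypothetical):

#include <linux/nospec.h>

#define MAX_RULES 64

struct rule {
	int in_use;
};

static struct rule rules[MAX_RULES];

static struct rule *lookup_rule(unsigned int location)
{
	if (location >= MAX_RULES)
		return NULL;
	/* Under speculation the branch above may be predicted not-taken;
	 * clamping the index keeps the speculative load in bounds too.
	 */
	location = array_index_nospec(location, MAX_RULES);
	return rules[location].in_use ? &rules[location] : NULL;
}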
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
+index 392fd895f278..ae2240074d8e 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
+@@ -1905,8 +1905,7 @@ static int mvpp2_prs_ip6_init(struct mvpp2 *priv)
+ }
+ 
+ /* Find tcam entry with matched pair <vid,port> */
+-static int mvpp2_prs_vid_range_find(struct mvpp2 *priv, int pmap, u16 vid,
+-				    u16 mask)
++static int mvpp2_prs_vid_range_find(struct mvpp2_port *port, u16 vid, u16 mask)
+ {
+ 	unsigned char byte[2], enable[2];
+ 	struct mvpp2_prs_entry pe;
+@@ -1914,13 +1913,13 @@ static int mvpp2_prs_vid_range_find(struct mvpp2 *priv, int pmap, u16 vid,
+ 	int tid;
+ 
+ 	/* Go through the all entries with MVPP2_PRS_LU_VID */
+-	for (tid = MVPP2_PE_VID_FILT_RANGE_START;
+-	     tid <= MVPP2_PE_VID_FILT_RANGE_END; tid++) {
+-		if (!priv->prs_shadow[tid].valid ||
+-		    priv->prs_shadow[tid].lu != MVPP2_PRS_LU_VID)
++	for (tid = MVPP2_PRS_VID_PORT_FIRST(port->id);
++	     tid <= MVPP2_PRS_VID_PORT_LAST(port->id); tid++) {
++		if (!port->priv->prs_shadow[tid].valid ||
++		    port->priv->prs_shadow[tid].lu != MVPP2_PRS_LU_VID)
+ 			continue;
+ 
+-		mvpp2_prs_init_from_hw(priv, &pe, tid);
++		mvpp2_prs_init_from_hw(port->priv, &pe, tid);
+ 
+ 		mvpp2_prs_tcam_data_byte_get(&pe, 2, &byte[0], &enable[0]);
+ 		mvpp2_prs_tcam_data_byte_get(&pe, 3, &byte[1], &enable[1]);
+@@ -1950,7 +1949,7 @@ int mvpp2_prs_vid_entry_add(struct mvpp2_port *port, u16 vid)
+ 	memset(&pe, 0, sizeof(pe));
+ 
+ 	/* Scan TCAM and see if entry with this <vid,port> already exist */
+-	tid = mvpp2_prs_vid_range_find(priv, (1 << port->id), vid, mask);
++	tid = mvpp2_prs_vid_range_find(port, vid, mask);
+ 
+ 	reg_val = mvpp2_read(priv, MVPP2_MH_REG(port->id));
+ 	if (reg_val & MVPP2_DSA_EXTENDED)
+@@ -2008,7 +2007,7 @@ void mvpp2_prs_vid_entry_remove(struct mvpp2_port *port, u16 vid)
+ 	int tid;
+ 
+ 	/* Scan TCAM and see if entry with this <vid,port> already exist */
+-	tid = mvpp2_prs_vid_range_find(priv, (1 << port->id), vid, 0xfff);
++	tid = mvpp2_prs_vid_range_find(port, vid, 0xfff);
+ 
+ 	/* No such entry */
+ 	if (tid < 0)
+@@ -2026,8 +2025,10 @@ void mvpp2_prs_vid_remove_all(struct mvpp2_port *port)
+ 
+ 	for (tid = MVPP2_PRS_VID_PORT_FIRST(port->id);
+ 	     tid <= MVPP2_PRS_VID_PORT_LAST(port->id); tid++) {
+-		if (priv->prs_shadow[tid].valid)
+-			mvpp2_prs_vid_entry_remove(port, tid);
++		if (priv->prs_shadow[tid].valid) {
++			mvpp2_prs_hw_inv(priv, tid);
++			priv->prs_shadow[tid].valid = false;
++		}
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index be48c6440251..c205a80abdec 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -441,6 +441,10 @@ static int mlx5_internal_err_ret_value(struct mlx5_core_dev *dev, u16 op,
+ 	case MLX5_CMD_OP_CREATE_GENERAL_OBJECT:
+ 	case MLX5_CMD_OP_MODIFY_GENERAL_OBJECT:
+ 	case MLX5_CMD_OP_QUERY_GENERAL_OBJECT:
++	case MLX5_CMD_OP_CREATE_UCTX:
++	case MLX5_CMD_OP_DESTROY_UCTX:
++	case MLX5_CMD_OP_CREATE_UMEM:
++	case MLX5_CMD_OP_DESTROY_UMEM:
+ 	case MLX5_CMD_OP_ALLOC_MEMIC:
+ 		*status = MLX5_DRIVER_STATUS_ABORTED;
+ 		*synd = MLX5_DRIVER_SYND;
+@@ -629,6 +633,10 @@ const char *mlx5_command_str(int command)
+ 	MLX5_COMMAND_STR_CASE(ALLOC_MEMIC);
+ 	MLX5_COMMAND_STR_CASE(DEALLOC_MEMIC);
+ 	MLX5_COMMAND_STR_CASE(QUERY_HOST_PARAMS);
++	MLX5_COMMAND_STR_CASE(CREATE_UCTX);
++	MLX5_COMMAND_STR_CASE(DESTROY_UCTX);
++	MLX5_COMMAND_STR_CASE(CREATE_UMEM);
++	MLX5_COMMAND_STR_CASE(DESTROY_UMEM);
+ 	default: return "unknown command opcode";
+ 	}
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dev.c b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+index ebc046fa97d3..f6b1da99e6c2 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/dev.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+@@ -248,11 +248,32 @@ void mlx5_unregister_interface(struct mlx5_interface *intf)
+ }
+ EXPORT_SYMBOL(mlx5_unregister_interface);
+ 
++/* Must be called with intf_mutex held */
++static bool mlx5_has_added_dev_by_protocol(struct mlx5_core_dev *mdev, int protocol)
++{
++	struct mlx5_device_context *dev_ctx;
++	struct mlx5_interface *intf;
++	bool found = false;
++
++	list_for_each_entry(intf, &intf_list, list) {
++		if (intf->protocol == protocol) {
++			dev_ctx = mlx5_get_device(intf, &mdev->priv);
++			if (dev_ctx && test_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state))
++				found = true;
++			break;
++		}
++	}
++
++	return found;
++}
++
+ void mlx5_reload_interface(struct mlx5_core_dev *mdev, int protocol)
+ {
+ 	mutex_lock(&mlx5_intf_mutex);
+-	mlx5_remove_dev_by_protocol(mdev, protocol);
+-	mlx5_add_dev_by_protocol(mdev, protocol);
++	if (mlx5_has_added_dev_by_protocol(mdev, protocol)) {
++		mlx5_remove_dev_by_protocol(mdev, protocol);
++		mlx5_add_dev_by_protocol(mdev, protocol);
++	}
+ 	mutex_unlock(&mlx5_intf_mutex);
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index d3eaf2ceaa39..a80031b2cfaf 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -1059,6 +1059,7 @@ void mlx5e_del_vxlan_port(struct net_device *netdev, struct udp_tunnel_info *ti)
+ netdev_features_t mlx5e_features_check(struct sk_buff *skb,
+ 				       struct net_device *netdev,
+ 				       netdev_features_t features);
++int mlx5e_set_features(struct net_device *netdev, netdev_features_t features);
+ #ifdef CONFIG_MLX5_ESWITCH
+ int mlx5e_set_vf_mac(struct net_device *dev, int vf, u8 *mac);
+ int mlx5e_set_vf_rate(struct net_device *dev, int vf, int min_tx_rate, int max_tx_rate);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+index eec07b34b4ad..5efe9b5d9086 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+@@ -11,24 +11,25 @@ static int get_route_and_out_devs(struct mlx5e_priv *priv,
+ 				  struct net_device **route_dev,
+ 				  struct net_device **out_dev)
+ {
++	struct net_device *uplink_dev, *uplink_upper, *real_dev;
+ 	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+-	struct net_device *uplink_dev, *uplink_upper;
+ 	bool dst_is_lag_dev;
+ 
++	real_dev = is_vlan_dev(dev) ? vlan_dev_real_dev(dev) : dev;
+ 	uplink_dev = mlx5_eswitch_uplink_get_proto_dev(esw, REP_ETH);
+ 	uplink_upper = netdev_master_upper_dev_get(uplink_dev);
+ 	dst_is_lag_dev = (uplink_upper &&
+ 			  netif_is_lag_master(uplink_upper) &&
+-			  dev == uplink_upper &&
++			  real_dev == uplink_upper &&
+ 			  mlx5_lag_is_sriov(priv->mdev));
+ 
+ 	/* if the egress device isn't on the same HW e-switch or
+ 	 * it's a LAG device, use the uplink
+ 	 */
+-	if (!netdev_port_same_parent_id(priv->netdev, dev) ||
++	if (!netdev_port_same_parent_id(priv->netdev, real_dev) ||
+ 	    dst_is_lag_dev) {
+-		*route_dev = uplink_dev;
+-		*out_dev = *route_dev;
++		*route_dev = dev;
++		*out_dev = uplink_dev;
+ 	} else {
+ 		*route_dev = dev;
+ 		if (is_vlan_dev(*route_dev))
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 1e2688e2ed47..6a8dc73855c9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -3698,8 +3698,7 @@ static int mlx5e_handle_feature(struct net_device *netdev,
+ 	return 0;
+ }
+ 
+-static int mlx5e_set_features(struct net_device *netdev,
+-			      netdev_features_t features)
++int mlx5e_set_features(struct net_device *netdev, netdev_features_t features)
+ {
+ 	netdev_features_t oper_features = netdev->features;
+ 	int err = 0;
+@@ -5166,6 +5165,11 @@ static void mlx5e_detach(struct mlx5_core_dev *mdev, void *vpriv)
+ 	struct mlx5e_priv *priv = vpriv;
+ 	struct net_device *netdev = priv->netdev;
+ 
++#ifdef CONFIG_MLX5_ESWITCH
++	if (MLX5_ESWITCH_MANAGER(mdev) && vpriv == mdev)
++		return;
++#endif
++
+ 	if (!netif_device_present(netdev))
+ 		return;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index 0b09fa91019d..fd8cede040b8 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -1350,6 +1350,7 @@ static const struct net_device_ops mlx5e_netdev_ops_uplink_rep = {
+ 	.ndo_get_vf_stats        = mlx5e_get_vf_stats,
+ 	.ndo_set_vf_vlan         = mlx5e_uplink_rep_set_vf_vlan,
+ 	.ndo_get_port_parent_id	 = mlx5e_rep_get_port_parent_id,
++	.ndo_set_features        = mlx5e_set_features,
+ };
+ 
+ bool mlx5e_eswitch_rep(struct net_device *netdev)
+@@ -1423,10 +1424,9 @@ static void mlx5e_build_rep_netdev(struct net_device *netdev)
+ 
+ 	netdev->watchdog_timeo    = 15 * HZ;
+ 
++	netdev->features       |= NETIF_F_NETNS_LOCAL;
+ 
+-	netdev->features	 |= NETIF_F_HW_TC | NETIF_F_NETNS_LOCAL;
+-	netdev->hw_features      |= NETIF_F_HW_TC;
+-
++	netdev->hw_features    |= NETIF_F_HW_TC;
+ 	netdev->hw_features    |= NETIF_F_SG;
+ 	netdev->hw_features    |= NETIF_F_IP_CSUM;
+ 	netdev->hw_features    |= NETIF_F_IPV6_CSUM;
+@@ -1435,7 +1435,9 @@ static void mlx5e_build_rep_netdev(struct net_device *netdev)
+ 	netdev->hw_features    |= NETIF_F_TSO6;
+ 	netdev->hw_features    |= NETIF_F_RXCSUM;
+ 
+-	if (rep->vport != MLX5_VPORT_UPLINK)
++	if (rep->vport == MLX5_VPORT_UPLINK)
++		netdev->hw_features |= NETIF_F_HW_VLAN_CTAG_RX;
++	else
+ 		netdev->features |= NETIF_F_VLAN_CHALLENGED;
+ 
+ 	netdev->features |= netdev->hw_features;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 4cb23631616b..a43ddfc0ff0b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -2572,9 +2572,6 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv,
+ 	if (!flow_action_has_entries(flow_action))
+ 		return -EINVAL;
+ 
+-	attr->in_rep = rpriv->rep;
+-	attr->in_mdev = priv->mdev;
+-
+ 	flow_action_for_each(i, act, flow_action) {
+ 		switch (act->id) {
+ 		case FLOW_ACTION_DROP:
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+index 6b8aa3761899..f4acb38569e1 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+@@ -3110,6 +3110,10 @@ mlxsw_sp_port_set_link_ksettings(struct net_device *dev,
+ 	ops->reg_ptys_eth_unpack(mlxsw_sp, ptys_pl, &eth_proto_cap, NULL, NULL);
+ 
+ 	autoneg = cmd->base.autoneg == AUTONEG_ENABLE;
++	if (!autoneg && cmd->base.speed == SPEED_56000) {
++		netdev_err(dev, "56G not supported with autoneg off\n");
++		return -EINVAL;
++	}
+ 	eth_proto_new = autoneg ?
+ 		ops->to_ptys_advert_link(mlxsw_sp, cmd) :
+ 		ops->to_ptys_speed(mlxsw_sp, cmd->base.speed);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+index d633bef5f105..77fe3ed38d1b 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+@@ -411,9 +411,9 @@ static const struct mlxsw_sp_sb_pr mlxsw_sp1_sb_prs[] = {
+ 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_STATIC, MLXSW_SP_SB_INFI),
+ };
+ 
+-#define MLXSW_SP2_SB_PR_INGRESS_SIZE	40960000
++#define MLXSW_SP2_SB_PR_INGRESS_SIZE	38128752
++#define MLXSW_SP2_SB_PR_EGRESS_SIZE	38128752
+ #define MLXSW_SP2_SB_PR_INGRESS_MNG_SIZE (200 * 1000)
+-#define MLXSW_SP2_SB_PR_EGRESS_SIZE	40960000
+ 
+ static const struct mlxsw_sp_sb_pr mlxsw_sp2_sb_prs[] = {
+ 	/* Ingress pools. */
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
+index 15f804453cd6..96b23c856f4d 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
+@@ -247,8 +247,8 @@ static int mlxsw_sp_flower_parse_ip(struct mlxsw_sp *mlxsw_sp,
+ 				       match.mask->tos & 0x3);
+ 
+ 	mlxsw_sp_acl_rulei_keymask_u32(rulei, MLXSW_AFK_ELEMENT_IP_DSCP,
+-				       match.key->tos >> 6,
+-				       match.mask->tos >> 6);
++				       match.key->tos >> 2,
++				       match.mask->tos >> 2);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+index 902e766a8ed3..18d29b8f763f 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+@@ -2363,7 +2363,7 @@ static void mlxsw_sp_router_probe_unresolved_nexthops(struct work_struct *work)
+ static void
+ mlxsw_sp_nexthop_neigh_update(struct mlxsw_sp *mlxsw_sp,
+ 			      struct mlxsw_sp_neigh_entry *neigh_entry,
+-			      bool removing);
++			      bool removing, bool dead);
+ 
+ static enum mlxsw_reg_rauht_op mlxsw_sp_rauht_op(bool adding)
+ {
+@@ -2494,7 +2494,8 @@ static void mlxsw_sp_router_neigh_event_work(struct work_struct *work)
+ 
+ 	memcpy(neigh_entry->ha, ha, ETH_ALEN);
+ 	mlxsw_sp_neigh_entry_update(mlxsw_sp, neigh_entry, entry_connected);
+-	mlxsw_sp_nexthop_neigh_update(mlxsw_sp, neigh_entry, !entry_connected);
++	mlxsw_sp_nexthop_neigh_update(mlxsw_sp, neigh_entry, !entry_connected,
++				      dead);
+ 
+ 	if (!neigh_entry->connected && list_empty(&neigh_entry->nexthop_list))
+ 		mlxsw_sp_neigh_entry_destroy(mlxsw_sp, neigh_entry);
+@@ -3458,13 +3459,79 @@ static void __mlxsw_sp_nexthop_neigh_update(struct mlxsw_sp_nexthop *nh,
+ 	nh->update = 1;
+ }
+ 
++static int
++mlxsw_sp_nexthop_dead_neigh_replace(struct mlxsw_sp *mlxsw_sp,
++				    struct mlxsw_sp_neigh_entry *neigh_entry)
++{
++	struct neighbour *n, *old_n = neigh_entry->key.n;
++	struct mlxsw_sp_nexthop *nh;
++	bool entry_connected;
++	u8 nud_state, dead;
++	int err;
++
++	nh = list_first_entry(&neigh_entry->nexthop_list,
++			      struct mlxsw_sp_nexthop, neigh_list_node);
++
++	n = neigh_lookup(nh->nh_grp->neigh_tbl, &nh->gw_addr, nh->rif->dev);
++	if (!n) {
++		n = neigh_create(nh->nh_grp->neigh_tbl, &nh->gw_addr,
++				 nh->rif->dev);
++		if (IS_ERR(n))
++			return PTR_ERR(n);
++		neigh_event_send(n, NULL);
++	}
++
++	mlxsw_sp_neigh_entry_remove(mlxsw_sp, neigh_entry);
++	neigh_entry->key.n = n;
++	err = mlxsw_sp_neigh_entry_insert(mlxsw_sp, neigh_entry);
++	if (err)
++		goto err_neigh_entry_insert;
++
++	read_lock_bh(&n->lock);
++	nud_state = n->nud_state;
++	dead = n->dead;
++	read_unlock_bh(&n->lock);
++	entry_connected = nud_state & NUD_VALID && !dead;
++
++	list_for_each_entry(nh, &neigh_entry->nexthop_list,
++			    neigh_list_node) {
++		neigh_release(old_n);
++		neigh_clone(n);
++		__mlxsw_sp_nexthop_neigh_update(nh, !entry_connected);
++		mlxsw_sp_nexthop_group_refresh(mlxsw_sp, nh->nh_grp);
++	}
++
++	neigh_release(n);
++
++	return 0;
++
++err_neigh_entry_insert:
++	neigh_entry->key.n = old_n;
++	mlxsw_sp_neigh_entry_insert(mlxsw_sp, neigh_entry);
++	neigh_release(n);
++	return err;
++}
++
+ static void
+ mlxsw_sp_nexthop_neigh_update(struct mlxsw_sp *mlxsw_sp,
+ 			      struct mlxsw_sp_neigh_entry *neigh_entry,
+-			      bool removing)
++			      bool removing, bool dead)
+ {
+ 	struct mlxsw_sp_nexthop *nh;
+ 
++	if (list_empty(&neigh_entry->nexthop_list))
++		return;
++
++	if (dead) {
++		int err;
++
++		err = mlxsw_sp_nexthop_dead_neigh_replace(mlxsw_sp,
++							  neigh_entry);
++		if (err)
++			dev_err(mlxsw_sp->bus_info->dev, "Failed to replace dead neigh\n");
++		return;
++	}
++
+ 	list_for_each_entry(nh, &neigh_entry->nexthop_list,
+ 			    neigh_list_node) {
+ 		__mlxsw_sp_nexthop_neigh_update(nh, removing);
+diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
+index e33af371b169..48967dd27bbf 100644
+--- a/drivers/net/ethernet/renesas/sh_eth.c
++++ b/drivers/net/ethernet/renesas/sh_eth.c
+@@ -1594,6 +1594,10 @@ static void sh_eth_dev_exit(struct net_device *ndev)
+ 	sh_eth_get_stats(ndev);
+ 	mdp->cd->soft_reset(ndev);
+ 
++	/* Set the RMII mode again if required */
++	if (mdp->cd->rmiimode)
++		sh_eth_write(ndev, 0x1, RMIIMODE);
++
+ 	/* Set MAC address again */
+ 	update_mac_address(ndev);
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c
+index bf2562995fc8..126b66bb73a6 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c
+@@ -346,8 +346,6 @@ static int mediatek_dwmac_probe(struct platform_device *pdev)
+ 		return PTR_ERR(plat_dat);
+ 
+ 	plat_dat->interface = priv_plat->phy_mode;
+-	/* clk_csr_i = 250-300MHz & MDC = clk_csr_i/124 */
+-	plat_dat->clk_csr = 5;
+ 	plat_dat->has_gmac4 = 1;
+ 	plat_dat->has_gmac = 0;
+ 	plat_dat->pmt = 0;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 3c409862c52e..635d88d82610 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -3338,6 +3338,7 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
+ 		entry = STMMAC_GET_ENTRY(entry, DMA_RX_SIZE);
+ 	}
+ 	rx_q->dirty_rx = entry;
++	stmmac_set_rx_tail_ptr(priv, priv->ioaddr, rx_q->rx_tail_addr, queue);
+ }
+ 
+ /**
+@@ -4379,10 +4380,10 @@ int stmmac_dvr_probe(struct device *device,
+ 	 * set the MDC clock dynamically according to the csr actual
+ 	 * clock input.
+ 	 */
+-	if (!priv->plat->clk_csr)
+-		stmmac_clk_csr_set(priv);
+-	else
++	if (priv->plat->clk_csr >= 0)
+ 		priv->clk_csr = priv->plat->clk_csr;
++	else
++		stmmac_clk_csr_set(priv);
+ 
+ 	stmmac_check_pcs_mode(priv);
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index 3031f2bf15d6..f45bfbef97d0 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -408,7 +408,10 @@ stmmac_probe_config_dt(struct platform_device *pdev, const char **mac)
+ 	/* Default to phy auto-detection */
+ 	plat->phy_addr = -1;
+ 
+-	/* Get clk_csr from device tree */
++	/* Default to getting clk_csr from stmmac_clk_csr_set(),
++	 * unless it is overridden via the device tree.
++	 */
++	plat->clk_csr = -1;
+ 	of_property_read_u32(np, "clk_csr", &plat->clk_csr);
+ 
+ 	/* "snps,phy-addr" is not a standard property. Mark it as deprecated
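
The stmmac changes above work because 0 is a valid clk_csr value, so
"unset" has to be encoded as a negative sentinel rather than zero. A
standalone model of the sentinel handling (hypothetical, not the driver
code itself):

#include <stdio.h>

int main(void)
{
	int clk_csr = -1;	/* default: no device-tree override */

	/* of_property_read_u32() would overwrite clk_csr only if the
	 * "clk_csr" property exists; here it stays unset.
	 */
	if (clk_csr >= 0)
		printf("using DT-provided clk_csr %d\n", clk_csr);
	else
		printf("deriving clk_csr from the CSR clock rate\n");
	return 0;
}
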
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 5583d993480d..ffe421944429 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -396,7 +396,7 @@ static int geneve_udp_encap_err_lookup(struct sock *sk, struct sk_buff *skb)
+ 	u8 zero_vni[3] = { 0 };
+ 	u8 *vni = zero_vni;
+ 
+-	if (skb->len < GENEVE_BASE_HLEN)
++	if (!pskb_may_pull(skb, skb_transport_offset(skb) + GENEVE_BASE_HLEN))
+ 		return -EINVAL;
+ 
+ 	geneveh = geneve_hdr(skb);
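
Both this geneve hunk and the vxlan hunk later in this patch replace a
plain skb->len check with pskb_may_pull(): skb->len also counts paged
(non-linear) data, so a length comparison alone does not guarantee the
encapsulation header can be read through the linear buffer. A sketch of
the idiom (kernel context; validate_encap_header is a hypothetical
name):

static int validate_encap_header(struct sk_buff *skb, unsigned int hlen)
{
	/* Fails if the packet is too short or if the header bytes
	 * cannot be brought into the linear data area.
	 */
	if (!pskb_may_pull(skb, skb_transport_offset(skb) + hlen))
		return -EINVAL;
	return 0;
}
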
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index b20fb0fb595b..e7d8884b1a10 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -2414,7 +2414,7 @@ static struct  hv_driver netvsc_drv = {
+ 	.probe = netvsc_probe,
+ 	.remove = netvsc_remove,
+ 	.driver = {
+-		.probe_type = PROBE_PREFER_ASYNCHRONOUS,
++		.probe_type = PROBE_FORCE_SYNCHRONOUS,
+ 	},
+ };
+ 
+diff --git a/drivers/net/phy/dp83867.c b/drivers/net/phy/dp83867.c
+index 8448d01819ef..2995a1788ceb 100644
+--- a/drivers/net/phy/dp83867.c
++++ b/drivers/net/phy/dp83867.c
+@@ -26,10 +26,18 @@
+ 
+ /* Extended Registers */
+ #define DP83867_CFG4            0x0031
++#define DP83867_CFG4_SGMII_ANEG_MASK (BIT(5) | BIT(6))
++#define DP83867_CFG4_SGMII_ANEG_TIMER_11MS   (3 << 5)
++#define DP83867_CFG4_SGMII_ANEG_TIMER_800US  (2 << 5)
++#define DP83867_CFG4_SGMII_ANEG_TIMER_2US    (1 << 5)
++#define DP83867_CFG4_SGMII_ANEG_TIMER_16MS   (0 << 5)
++
+ #define DP83867_RGMIICTL	0x0032
+ #define DP83867_STRAP_STS1	0x006E
+ #define DP83867_RGMIIDCTL	0x0086
+ #define DP83867_IO_MUX_CFG	0x0170
++#define DP83867_10M_SGMII_CFG   0x016F
++#define DP83867_10M_SGMII_RATE_ADAPT_MASK BIT(7)
+ 
+ #define DP83867_SW_RESET	BIT(15)
+ #define DP83867_SW_RESTART	BIT(14)
+@@ -247,10 +255,8 @@ static int dp83867_config_init(struct phy_device *phydev)
+ 		ret = phy_write(phydev, MII_DP83867_PHYCTRL, val);
+ 		if (ret)
+ 			return ret;
+-	}
+ 
+-	if ((phydev->interface >= PHY_INTERFACE_MODE_RGMII_ID) &&
+-	    (phydev->interface <= PHY_INTERFACE_MODE_RGMII_RXID)) {
++		/* Set up RGMII delays */
+ 		val = phy_read_mmd(phydev, DP83867_DEVADDR, DP83867_RGMIICTL);
+ 
+ 		if (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID)
+@@ -277,6 +283,33 @@ static int dp83867_config_init(struct phy_device *phydev)
+ 				       DP83867_IO_MUX_CFG_IO_IMPEDANCE_CTRL);
+ 	}
+ 
++	if (phydev->interface == PHY_INTERFACE_MODE_SGMII) {
++		/* To support SPEED_10 in SGMII mode,
++		 * the DP83867_10M_SGMII_RATE_ADAPT bit
++		 * has to be cleared by software. This
++		 * does not affect SPEED_100 or
++		 * SPEED_1000.
++		 */
++		ret = phy_modify_mmd(phydev, DP83867_DEVADDR,
++				     DP83867_10M_SGMII_CFG,
++				     DP83867_10M_SGMII_RATE_ADAPT_MASK,
++				     0);
++		if (ret)
++			return ret;
++
++		/* After reset SGMII Autoneg timer is set to 2us (bits 6 and 5
++		 * are 01). That is not enough to finalize autoneg on some
++		 * devices. Increase this timer duration to maximum 16ms.
++		 */
++		ret = phy_modify_mmd(phydev, DP83867_DEVADDR,
++				     DP83867_CFG4,
++				     DP83867_CFG4_SGMII_ANEG_MASK,
++				     DP83867_CFG4_SGMII_ANEG_TIMER_16MS);
++
++		if (ret)
++			return ret;
++	}
++
+ 	/* Enable Interrupt output INT_OE in CFG3 register */
+ 	if (phy_interrupt_is_valid(phydev)) {
+ 		val = phy_read(phydev, DP83867_CFG3);
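
The dp83867 hunk above relies on phy_modify_mmd() for its
read-modify-write cycles. Roughly what that helper does, as a sketch
(hypothetical reimplementation; the real helper also serializes against
other MDIO users):

static int phy_modify_mmd_sketch(struct phy_device *phydev, int devad,
				 u32 regnum, u16 mask, u16 set)
{
	int val;

	val = phy_read_mmd(phydev, devad, regnum);	/* read */
	if (val < 0)
		return val;
	val = (val & ~mask) | set;			/* modify */
	return phy_write_mmd(phydev, devad, regnum, val); /* write */
}
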
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index 89750c7dfd6f..efa31fcda505 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -51,6 +51,10 @@ struct phylink {
+ 
+ 	/* The link configuration settings */
+ 	struct phylink_link_state link_config;
++
++	/* The current settings */
++	phy_interface_t cur_interface;
++
+ 	struct gpio_desc *link_gpio;
+ 	struct timer_list link_poll;
+ 	void (*get_fixed_state)(struct net_device *dev,
+@@ -453,12 +457,12 @@ static void phylink_resolve(struct work_struct *w)
+ 		if (!link_state.link) {
+ 			netif_carrier_off(ndev);
+ 			pl->ops->mac_link_down(ndev, pl->link_an_mode,
+-					       pl->phy_state.interface);
++					       pl->cur_interface);
+ 			netdev_info(ndev, "Link is Down\n");
+ 		} else {
++			pl->cur_interface = link_state.interface;
+ 			pl->ops->mac_link_up(ndev, pl->link_an_mode,
+-					     pl->phy_state.interface,
+-					     pl->phydev);
++					     pl->cur_interface, pl->phydev);
+ 
+ 			netif_carrier_on(ndev);
+ 
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index d76dfed8d9bb..38ecb66fb3e9 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -1765,7 +1765,7 @@ static int vxlan_err_lookup(struct sock *sk, struct sk_buff *skb)
+ 	struct vxlanhdr *hdr;
+ 	__be32 vni;
+ 
+-	if (skb->len < VXLAN_HLEN)
++	if (!pskb_may_pull(skb, skb_transport_offset(skb) + VXLAN_HLEN))
+ 		return -EINVAL;
+ 
+ 	hdr = vxlan_hdr(skb);
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index aae5374d2b93..08a2501b9357 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -111,6 +111,7 @@ struct nvme_tcp_ctrl {
+ 	struct work_struct	err_work;
+ 	struct delayed_work	connect_work;
+ 	struct nvme_tcp_request async_req;
++	u32			io_queues[HCTX_MAX_TYPES];
+ };
+ 
+ static LIST_HEAD(nvme_tcp_ctrl_list);
+@@ -473,7 +474,6 @@ static int nvme_tcp_handle_c2h_data(struct nvme_tcp_queue *queue,
+ 	}
+ 
+ 	return 0;
+-
+ }
+ 
+ static int nvme_tcp_handle_comp(struct nvme_tcp_queue *queue,
+@@ -634,7 +634,6 @@ static inline void nvme_tcp_end_request(struct request *rq, u16 status)
+ 	nvme_end_request(rq, cpu_to_le16(status << 1), res);
+ }
+ 
+-
+ static int nvme_tcp_recv_data(struct nvme_tcp_queue *queue, struct sk_buff *skb,
+ 			      unsigned int *offset, size_t *len)
+ {
+@@ -1425,7 +1424,8 @@ static int nvme_tcp_start_queue(struct nvme_ctrl *nctrl, int idx)
+ 	if (!ret) {
+ 		set_bit(NVME_TCP_Q_LIVE, &ctrl->queues[idx].flags);
+ 	} else {
+-		__nvme_tcp_stop_queue(&ctrl->queues[idx]);
++		if (test_bit(NVME_TCP_Q_ALLOCATED, &ctrl->queues[idx].flags))
++			__nvme_tcp_stop_queue(&ctrl->queues[idx]);
+ 		dev_err(nctrl->device,
+ 			"failed to connect queue: %d ret=%d\n", idx, ret);
+ 	}
+@@ -1535,7 +1535,7 @@ out_free_queue:
+ 	return ret;
+ }
+ 
+-static int nvme_tcp_alloc_io_queues(struct nvme_ctrl *ctrl)
++static int __nvme_tcp_alloc_io_queues(struct nvme_ctrl *ctrl)
+ {
+ 	int i, ret;
+ 
+@@ -1565,7 +1565,36 @@ static unsigned int nvme_tcp_nr_io_queues(struct nvme_ctrl *ctrl)
+ 	return nr_io_queues;
+ }
+ 
+-static int nvme_alloc_io_queues(struct nvme_ctrl *ctrl)
++static void nvme_tcp_set_io_queues(struct nvme_ctrl *nctrl,
++		unsigned int nr_io_queues)
++{
++	struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
++	struct nvmf_ctrl_options *opts = nctrl->opts;
++
++	if (opts->nr_write_queues && opts->nr_io_queues < nr_io_queues) {
++		/*
++		 * separate read/write queues
++		 * hand out dedicated default queues only after we have
++		 * sufficient read queues.
++		 */
++		ctrl->io_queues[HCTX_TYPE_READ] = opts->nr_io_queues;
++		nr_io_queues -= ctrl->io_queues[HCTX_TYPE_READ];
++		ctrl->io_queues[HCTX_TYPE_DEFAULT] =
++			min(opts->nr_write_queues, nr_io_queues);
++		nr_io_queues -= ctrl->io_queues[HCTX_TYPE_DEFAULT];
++	} else {
++		/*
++		 * shared read/write queues
++		 * either no write queues were requested, or we don't have
++		 * sufficient queue count to have dedicated default queues.
++		 */
++		ctrl->io_queues[HCTX_TYPE_DEFAULT] =
++			min(opts->nr_io_queues, nr_io_queues);
++		nr_io_queues -= ctrl->io_queues[HCTX_TYPE_DEFAULT];
++	}
++}
++
++static int nvme_tcp_alloc_io_queues(struct nvme_ctrl *ctrl)
+ {
+ 	unsigned int nr_io_queues;
+ 	int ret;
+@@ -1582,7 +1611,9 @@ static int nvme_alloc_io_queues(struct nvme_ctrl *ctrl)
+ 	dev_info(ctrl->device,
+ 		"creating %d I/O queues.\n", nr_io_queues);
+ 
+-	return nvme_tcp_alloc_io_queues(ctrl);
++	nvme_tcp_set_io_queues(ctrl, nr_io_queues);
++
++	return __nvme_tcp_alloc_io_queues(ctrl);
+ }
+ 
+ static void nvme_tcp_destroy_io_queues(struct nvme_ctrl *ctrl, bool remove)
+@@ -1599,7 +1630,7 @@ static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
+ {
+ 	int ret;
+ 
+-	ret = nvme_alloc_io_queues(ctrl);
++	ret = nvme_tcp_alloc_io_queues(ctrl);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -2090,23 +2121,34 @@ static blk_status_t nvme_tcp_queue_rq(struct blk_mq_hw_ctx *hctx,
+ static int nvme_tcp_map_queues(struct blk_mq_tag_set *set)
+ {
+ 	struct nvme_tcp_ctrl *ctrl = set->driver_data;
++	struct nvmf_ctrl_options *opts = ctrl->ctrl.opts;
+ 
+-	set->map[HCTX_TYPE_DEFAULT].queue_offset = 0;
+-	set->map[HCTX_TYPE_READ].nr_queues = ctrl->ctrl.opts->nr_io_queues;
+-	if (ctrl->ctrl.opts->nr_write_queues) {
++	if (opts->nr_write_queues && ctrl->io_queues[HCTX_TYPE_READ]) {
+ 		/* separate read/write queues */
+ 		set->map[HCTX_TYPE_DEFAULT].nr_queues =
+-				ctrl->ctrl.opts->nr_write_queues;
++			ctrl->io_queues[HCTX_TYPE_DEFAULT];
++		set->map[HCTX_TYPE_DEFAULT].queue_offset = 0;
++		set->map[HCTX_TYPE_READ].nr_queues =
++			ctrl->io_queues[HCTX_TYPE_READ];
+ 		set->map[HCTX_TYPE_READ].queue_offset =
+-				ctrl->ctrl.opts->nr_write_queues;
++			ctrl->io_queues[HCTX_TYPE_DEFAULT];
+ 	} else {
+-		/* mixed read/write queues */
++		/* shared read/write queues */
+ 		set->map[HCTX_TYPE_DEFAULT].nr_queues =
+-				ctrl->ctrl.opts->nr_io_queues;
++			ctrl->io_queues[HCTX_TYPE_DEFAULT];
++		set->map[HCTX_TYPE_DEFAULT].queue_offset = 0;
++		set->map[HCTX_TYPE_READ].nr_queues =
++			ctrl->io_queues[HCTX_TYPE_DEFAULT];
+ 		set->map[HCTX_TYPE_READ].queue_offset = 0;
+ 	}
+ 	blk_mq_map_queues(&set->map[HCTX_TYPE_DEFAULT]);
+ 	blk_mq_map_queues(&set->map[HCTX_TYPE_READ]);
++
++	dev_info(ctrl->ctrl.device,
++		"mapped %d/%d default/read queues.\n",
++		ctrl->io_queues[HCTX_TYPE_DEFAULT],
++		ctrl->io_queues[HCTX_TYPE_READ]);
++
+ 	return 0;
+ }
+ 
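
The nvme-tcp queue accounting above can be modelled outside the kernel.
In this version of the driver, opts->nr_io_queues funds the read set
once write queues are requested. A standalone sketch with hypothetical
numbers (min() written out; not the driver code itself):

#include <stdio.h>

static unsigned int min_u(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

static void split_queues(unsigned int total, unsigned int nr_io_opt,
			 unsigned int nr_write_opt)
{
	unsigned int read_q = 0, def_q;

	if (nr_write_opt && nr_io_opt < total) {
		/* dedicated read queues, writes take the remainder */
		read_q = nr_io_opt;
		total -= read_q;
		def_q = min_u(nr_write_opt, total);
	} else {
		/* not enough queues: reads and writes share */
		def_q = min_u(nr_io_opt, total);
	}
	printf("default=%u read=%u\n", def_q, read_q);
}

int main(void)
{
	split_queues(8, 4, 4);	/* prints "default=4 read=4" */
	split_queues(4, 8, 2);	/* shared: prints "default=4 read=0" */
	return 0;
}
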
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index e1949f7efd9c..bf32fde328c2 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -666,7 +666,8 @@ static bool acpi_pci_need_resume(struct pci_dev *dev)
+ 	if (!adev || !acpi_device_power_manageable(adev))
+ 		return false;
+ 
+-	if (device_may_wakeup(&dev->dev) != !!adev->wakeup.prepare_count)
++	if (adev->wakeup.flags.valid &&
++	    device_may_wakeup(&dev->dev) != !!adev->wakeup.prepare_count)
+ 		return true;
+ 
+ 	if (acpi_target_system_state() == ACPI_STATE_S0)
+diff --git a/drivers/pinctrl/intel/pinctrl-intel.c b/drivers/pinctrl/intel/pinctrl-intel.c
+index 70638b74f9d6..95d224404c7c 100644
+--- a/drivers/pinctrl/intel/pinctrl-intel.c
++++ b/drivers/pinctrl/intel/pinctrl-intel.c
+@@ -913,35 +913,6 @@ static void intel_gpio_irq_ack(struct irq_data *d)
+ 	}
+ }
+ 
+-static void intel_gpio_irq_enable(struct irq_data *d)
+-{
+-	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+-	struct intel_pinctrl *pctrl = gpiochip_get_data(gc);
+-	const struct intel_community *community;
+-	const struct intel_padgroup *padgrp;
+-	int pin;
+-
+-	pin = intel_gpio_to_pin(pctrl, irqd_to_hwirq(d), &community, &padgrp);
+-	if (pin >= 0) {
+-		unsigned int gpp, gpp_offset, is_offset;
+-		unsigned long flags;
+-		u32 value;
+-
+-		gpp = padgrp->reg_num;
+-		gpp_offset = padgroup_offset(padgrp, pin);
+-		is_offset = community->is_offset + gpp * 4;
+-
+-		raw_spin_lock_irqsave(&pctrl->lock, flags);
+-		/* Clear interrupt status first to avoid unexpected interrupt */
+-		writel(BIT(gpp_offset), community->regs + is_offset);
+-
+-		value = readl(community->regs + community->ie_offset + gpp * 4);
+-		value |= BIT(gpp_offset);
+-		writel(value, community->regs + community->ie_offset + gpp * 4);
+-		raw_spin_unlock_irqrestore(&pctrl->lock, flags);
+-	}
+-}
+-
+ static void intel_gpio_irq_mask_unmask(struct irq_data *d, bool mask)
+ {
+ 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+@@ -954,15 +925,20 @@ static void intel_gpio_irq_mask_unmask(struct irq_data *d, bool mask)
+ 	if (pin >= 0) {
+ 		unsigned int gpp, gpp_offset;
+ 		unsigned long flags;
+-		void __iomem *reg;
++		void __iomem *reg, *is;
+ 		u32 value;
+ 
+ 		gpp = padgrp->reg_num;
+ 		gpp_offset = padgroup_offset(padgrp, pin);
+ 
+ 		reg = community->regs + community->ie_offset + gpp * 4;
++		is = community->regs + community->is_offset + gpp * 4;
+ 
+ 		raw_spin_lock_irqsave(&pctrl->lock, flags);
++
++		/* Clear interrupt status first to avoid unexpected interrupt */
++		writel(BIT(gpp_offset), is);
++
+ 		value = readl(reg);
+ 		if (mask)
+ 			value &= ~BIT(gpp_offset);
+@@ -1106,7 +1082,6 @@ static irqreturn_t intel_gpio_irq(int irq, void *data)
+ 
+ static struct irq_chip intel_gpio_irqchip = {
+ 	.name = "intel-gpio",
+-	.irq_enable = intel_gpio_irq_enable,
+ 	.irq_ack = intel_gpio_irq_ack,
+ 	.irq_mask = intel_gpio_irq_mask,
+ 	.irq_unmask = intel_gpio_irq_unmask,
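
The pinctrl-intel change above folds the status clear into the
mask/unmask path so that a stale pending status bit cannot fire the
moment the line is enabled. The ordering, sketched with the register
pointers pulled out (illustrative only, not a drop-in):

static void unmask_pin(void __iomem *ie, void __iomem *is,
		       unsigned int bit, raw_spinlock_t *lock)
{
	unsigned long flags;
	u32 value;

	raw_spin_lock_irqsave(lock, flags);
	writel(BIT(bit), is);		/* 1. ack any stale status */
	value = readl(ie);
	value |= BIT(bit);		/* 2. then enable the line */
	writel(value, ie);
	raw_spin_unlock_irqrestore(lock, flags);
}
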
+diff --git a/drivers/s390/crypto/ap_bus.c b/drivers/s390/crypto/ap_bus.c
+index 1546389d71db..6717536a633c 100644
+--- a/drivers/s390/crypto/ap_bus.c
++++ b/drivers/s390/crypto/ap_bus.c
+@@ -254,19 +254,37 @@ static inline int ap_test_config_card_id(unsigned int id)
+ }
+ 
+ /*
+- * ap_test_config_domain(): Test, whether an AP usage domain is configured.
++ * ap_test_config_usage_domain(): Test, whether an AP usage domain
++ * is configured.
+  * @domain AP usage domain ID
+  *
+  * Returns 0 if the usage domain is not configured
+  *	   1 if the usage domain is configured or
+  *	     if the configuration information is not available
+  */
+-static inline int ap_test_config_domain(unsigned int domain)
++int ap_test_config_usage_domain(unsigned int domain)
+ {
+ 	if (!ap_configuration)	/* QCI not supported */
+ 		return domain < 16;
+ 	return ap_test_config(ap_configuration->aqm, domain);
+ }
++EXPORT_SYMBOL(ap_test_config_usage_domain);
++
++/*
++ * ap_test_config_ctrl_domain(): Test whether an AP control domain
++ * is configured.
++ * @domain AP control domain ID
++ *
++ * Returns 1 if the control domain is configured
++ *	   0 in all other cases
++ */
++int ap_test_config_ctrl_domain(unsigned int domain)
++{
++	if (!ap_configuration)	/* QCI not supported */
++		return 0;
++	return ap_test_config(ap_configuration->adm, domain);
++}
++EXPORT_SYMBOL(ap_test_config_ctrl_domain);
+ 
+ /**
+  * ap_query_queue(): Check if an AP queue is available.
+@@ -1267,7 +1285,7 @@ static void ap_select_domain(void)
+ 	best_domain = -1;
+ 	max_count = 0;
+ 	for (i = 0; i < AP_DOMAINS; i++) {
+-		if (!ap_test_config_domain(i) ||
++		if (!ap_test_config_usage_domain(i) ||
+ 		    !test_bit_inv(i, ap_perms.aqm))
+ 			continue;
+ 		count = 0;
+@@ -1442,7 +1460,7 @@ static void _ap_scan_bus_adapter(int id)
+ 				      (void *)(long) qid,
+ 				      __match_queue_device_with_qid);
+ 		aq = dev ? to_ap_queue(dev) : NULL;
+-		if (!ap_test_config_domain(dom)) {
++		if (!ap_test_config_usage_domain(dom)) {
+ 			if (dev) {
+ 				/* Queue device exists but has been
+ 				 * removed from configuration.
+diff --git a/drivers/s390/crypto/ap_bus.h b/drivers/s390/crypto/ap_bus.h
+index 15a98a673c5c..6f3cf37776ca 100644
+--- a/drivers/s390/crypto/ap_bus.h
++++ b/drivers/s390/crypto/ap_bus.h
+@@ -251,6 +251,9 @@ void ap_wait(enum ap_wait wait);
+ void ap_request_timeout(struct timer_list *t);
+ void ap_bus_force_rescan(void);
+ 
++int ap_test_config_usage_domain(unsigned int domain);
++int ap_test_config_ctrl_domain(unsigned int domain);
++
+ void ap_queue_init_reply(struct ap_queue *aq, struct ap_message *ap_msg);
+ struct ap_queue *ap_queue_create(ap_qid_t qid, int device_type);
+ void ap_queue_prepare_remove(struct ap_queue *aq);
+diff --git a/drivers/s390/crypto/zcrypt_api.c b/drivers/s390/crypto/zcrypt_api.c
+index c31b2d31cd83..03b1853464db 100644
+--- a/drivers/s390/crypto/zcrypt_api.c
++++ b/drivers/s390/crypto/zcrypt_api.c
+@@ -822,7 +822,7 @@ static long _zcrypt_send_cprb(struct ap_perms *perms,
+ 	struct ap_message ap_msg;
+ 	unsigned int weight, pref_weight;
+ 	unsigned int func_code;
+-	unsigned short *domain;
++	unsigned short *domain, tdom;
+ 	int qid = 0, rc = -ENODEV;
+ 	struct module *mod;
+ 
+@@ -834,6 +834,17 @@ static long _zcrypt_send_cprb(struct ap_perms *perms,
+ 	if (rc)
+ 		goto out;
+ 
++	/*
++	 * If a valid target domain is set and this domain is NOT a usage
++	 * domain but a control only domain, use the default domain as target.
++	 */
++	tdom = *domain;
++	if (tdom >= 0 && tdom < AP_DOMAINS &&
++	    !ap_test_config_usage_domain(tdom) &&
++	    ap_test_config_ctrl_domain(tdom) &&
++	    ap_domain_index >= 0)
++		tdom = ap_domain_index;
++
+ 	pref_zc = NULL;
+ 	pref_zq = NULL;
+ 	spin_lock(&zcrypt_list_lock);
+@@ -856,8 +867,8 @@ static long _zcrypt_send_cprb(struct ap_perms *perms,
+ 			/* check if device is online and eligible */
+ 			if (!zq->online ||
+ 			    !zq->ops->send_cprb ||
+-			    ((*domain != (unsigned short) AUTOSELECT) &&
+-			     (*domain != AP_QID_QUEUE(zq->queue->qid))))
++			    (tdom != (unsigned short) AUTOSELECT &&
++			     tdom != AP_QID_QUEUE(zq->queue->qid)))
+ 				continue;
+ 			/* check if device node has admission for this queue */
+ 			if (!zcrypt_check_queue(perms,
+diff --git a/drivers/scsi/cxgbi/libcxgbi.c b/drivers/scsi/cxgbi/libcxgbi.c
+index 006372b3fba2..a50734f3c486 100644
+--- a/drivers/scsi/cxgbi/libcxgbi.c
++++ b/drivers/scsi/cxgbi/libcxgbi.c
+@@ -641,6 +641,10 @@ cxgbi_check_route(struct sockaddr *dst_addr, int ifindex)
+ 
+ 	if (ndev->flags & IFF_LOOPBACK) {
+ 		ndev = ip_dev_find(&init_net, daddr->sin_addr.s_addr);
++		if (!ndev) {
++			err = -ENETUNREACH;
++			goto rel_neigh;
++		}
+ 		mtu = ndev->mtu;
+ 		pr_info("rt dev %s, loopback -> %s, mtu %u.\n",
+ 			n->dev->name, ndev->name, mtu);
+diff --git a/drivers/scsi/device_handler/scsi_dh_alua.c b/drivers/scsi/device_handler/scsi_dh_alua.c
+index d7ac498ba35a..2a9dcb8973b7 100644
+--- a/drivers/scsi/device_handler/scsi_dh_alua.c
++++ b/drivers/scsi/device_handler/scsi_dh_alua.c
+@@ -1174,10 +1174,8 @@ static int __init alua_init(void)
+ 	int r;
+ 
+ 	kaluad_wq = alloc_workqueue("kaluad", WQ_MEM_RECLAIM, 0);
+-	if (!kaluad_wq) {
+-		/* Temporary failure, bypass */
+-		return SCSI_DH_DEV_TEMP_BUSY;
+-	}
++	if (!kaluad_wq)
++		return -ENOMEM;
+ 
+ 	r = scsi_register_device_handler(&alua_dh);
+ 	if (r != 0) {
+diff --git a/drivers/scsi/libsas/sas_expander.c b/drivers/scsi/libsas/sas_expander.c
+index 3611a4ef0d15..7c2d78d189e4 100644
+--- a/drivers/scsi/libsas/sas_expander.c
++++ b/drivers/scsi/libsas/sas_expander.c
+@@ -1014,6 +1014,8 @@ static struct domain_device *sas_ex_discover_expander(
+ 		list_del(&child->dev_list_node);
+ 		spin_unlock_irq(&parent->port->dev_list_lock);
+ 		sas_put_device(child);
++		sas_port_delete(phy->port);
++		phy->port = NULL;
+ 		return NULL;
+ 	}
+ 	list_add_tail(&child->siblings, &parent->ex_dev.children);
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index 75ec43aa8df3..531824afba5f 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -7285,7 +7285,7 @@ static int pqi_pci_init(struct pqi_ctrl_info *ctrl_info)
+ 	else
+ 		mask = DMA_BIT_MASK(32);
+ 
+-	rc = dma_set_mask(&ctrl_info->pci_dev->dev, mask);
++	rc = dma_set_mask_and_coherent(&ctrl_info->pci_dev->dev, mask);
+ 	if (rc) {
+ 		dev_err(&ctrl_info->pci_dev->dev, "failed to set DMA mask\n");
+ 		goto disable_device;
+diff --git a/drivers/staging/erofs/super.c b/drivers/staging/erofs/super.c
+index 15c784fba879..c8981662a49b 100644
+--- a/drivers/staging/erofs/super.c
++++ b/drivers/staging/erofs/super.c
+@@ -459,6 +459,7 @@ static int erofs_read_super(struct super_block *sb,
+ 	 */
+ err_devname:
+ 	dput(sb->s_root);
++	sb->s_root = NULL;
+ err_iget:
+ #ifdef EROFS_FS_HAS_MANAGED_CACHE
+ 	iput(sbi->managed_cache);
+diff --git a/drivers/staging/vc04_services/bcm2835-camera/controls.c b/drivers/staging/vc04_services/bcm2835-camera/controls.c
+index a2c55cb2192a..52f3c4be5ff8 100644
+--- a/drivers/staging/vc04_services/bcm2835-camera/controls.c
++++ b/drivers/staging/vc04_services/bcm2835-camera/controls.c
+@@ -576,7 +576,7 @@ exit:
+ 				dev->colourfx.enable ? "true" : "false",
+ 				dev->colourfx.u, dev->colourfx.v,
+ 				ret, (ret == 0 ? 0 : -EINVAL));
+-	return (ret == 0 ? 0 : EINVAL);
++	return (ret == 0 ? 0 : -EINVAL);
+ }
+ 
+ static int ctrl_set_colfx(struct bm2835_mmal_dev *dev,
+@@ -600,7 +600,7 @@ static int ctrl_set_colfx(struct bm2835_mmal_dev *dev,
+ 		 "%s: After: mmal_ctrl:%p ctrl id:0x%x ctrl val:%d ret %d(%d)\n",
+ 			__func__, mmal_ctrl, ctrl->id, ctrl->val, ret,
+ 			(ret == 0 ? 0 : -EINVAL));
+-	return (ret == 0 ? 0 : EINVAL);
++	return (ret == 0 ? 0 : -EINVAL);
+ }
+ 
+ static int ctrl_set_bitrate(struct bm2835_mmal_dev *dev,
+diff --git a/drivers/staging/wilc1000/wilc_wlan.c b/drivers/staging/wilc1000/wilc_wlan.c
+index c2389695fe20..70b1ab21f8a3 100644
+--- a/drivers/staging/wilc1000/wilc_wlan.c
++++ b/drivers/staging/wilc1000/wilc_wlan.c
+@@ -1076,13 +1076,17 @@ void wilc_wlan_cleanup(struct net_device *dev)
+ 	acquire_bus(wilc, WILC_BUS_ACQUIRE_AND_WAKEUP);
+ 
+ 	ret = wilc->hif_func->hif_read_reg(wilc, WILC_GP_REG_0, &reg);
+-	if (!ret)
++	if (!ret) {
+ 		release_bus(wilc, WILC_BUS_RELEASE_ALLOW_SLEEP);
++		return;
++	}
+ 
+ 	ret = wilc->hif_func->hif_write_reg(wilc, WILC_GP_REG_0,
+ 					(reg | ABORT_INT));
+-	if (!ret)
++	if (!ret) {
+ 		release_bus(wilc, WILC_BUS_RELEASE_ALLOW_SLEEP);
++		return;
++	}
+ 
+ 	release_bus(wilc, WILC_BUS_RELEASE_ALLOW_SLEEP);
+ 	wilc->hif_func->hif_deinit(NULL);
+diff --git a/drivers/tty/serial/sunhv.c b/drivers/tty/serial/sunhv.c
+index 63e34d868de8..f8503f8fc44e 100644
+--- a/drivers/tty/serial/sunhv.c
++++ b/drivers/tty/serial/sunhv.c
+@@ -397,7 +397,7 @@ static const struct uart_ops sunhv_pops = {
+ static struct uart_driver sunhv_reg = {
+ 	.owner			= THIS_MODULE,
+ 	.driver_name		= "sunhv",
+-	.dev_name		= "ttyS",
++	.dev_name		= "ttyHV",
+ 	.major			= TTY_MAJOR,
+ };
+ 
+diff --git a/drivers/usb/host/xhci-debugfs.c b/drivers/usb/host/xhci-debugfs.c
+index cadc01336bf8..7ba6afc7ef23 100644
+--- a/drivers/usb/host/xhci-debugfs.c
++++ b/drivers/usb/host/xhci-debugfs.c
+@@ -440,6 +440,9 @@ void xhci_debugfs_create_endpoint(struct xhci_hcd *xhci,
+ 	struct xhci_ep_priv	*epriv;
+ 	struct xhci_slot_priv	*spriv = dev->debugfs_private;
+ 
++	if (!spriv)
++		return;
++
+ 	if (spriv->eps[ep_index])
+ 		return;
+ 
+diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
+index 8a249c95c193..d7438fdc5706 100644
+--- a/drivers/xen/pvcalls-front.c
++++ b/drivers/xen/pvcalls-front.c
+@@ -540,7 +540,6 @@ out:
+ int pvcalls_front_sendmsg(struct socket *sock, struct msghdr *msg,
+ 			  size_t len)
+ {
+-	struct pvcalls_bedata *bedata;
+ 	struct sock_mapping *map;
+ 	int sent, tot_sent = 0;
+ 	int count = 0, flags;
+@@ -552,7 +551,6 @@ int pvcalls_front_sendmsg(struct socket *sock, struct msghdr *msg,
+ 	map = pvcalls_enter_sock(sock);
+ 	if (IS_ERR(map))
+ 		return PTR_ERR(map);
+-	bedata = dev_get_drvdata(&pvcalls_front_dev->dev);
+ 
+ 	mutex_lock(&map->active.out_mutex);
+ 	if ((flags & MSG_DONTWAIT) && !pvcalls_front_write_todo(map)) {
+@@ -635,7 +633,6 @@ out:
+ int pvcalls_front_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 		     int flags)
+ {
+-	struct pvcalls_bedata *bedata;
+ 	int ret;
+ 	struct sock_mapping *map;
+ 
+@@ -645,7 +642,6 @@ int pvcalls_front_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 	map = pvcalls_enter_sock(sock);
+ 	if (IS_ERR(map))
+ 		return PTR_ERR(map);
+-	bedata = dev_get_drvdata(&pvcalls_front_dev->dev);
+ 
+ 	mutex_lock(&map->active.in_mutex);
+ 	if (len > XEN_FLEX_RING_SIZE(PVCALLS_RING_ORDER))
+diff --git a/drivers/xen/xenbus/xenbus.h b/drivers/xen/xenbus/xenbus.h
+index 092981171df1..d75a2385b37c 100644
+--- a/drivers/xen/xenbus/xenbus.h
++++ b/drivers/xen/xenbus/xenbus.h
+@@ -83,6 +83,7 @@ struct xb_req_data {
+ 	int num_vecs;
+ 	int err;
+ 	enum xb_req_state state;
++	bool user_req;
+ 	void (*cb)(struct xb_req_data *);
+ 	void *par;
+ };
+@@ -133,4 +134,6 @@ void xenbus_ring_ops_init(void);
+ int xenbus_dev_request_and_reply(struct xsd_sockmsg *msg, void *par);
+ void xenbus_dev_queue_reply(struct xb_req_data *req);
+ 
++extern unsigned int xb_dev_generation_id;
++
+ #endif
+diff --git a/drivers/xen/xenbus/xenbus_dev_frontend.c b/drivers/xen/xenbus/xenbus_dev_frontend.c
+index 0782ff3c2273..39c63152a358 100644
+--- a/drivers/xen/xenbus/xenbus_dev_frontend.c
++++ b/drivers/xen/xenbus/xenbus_dev_frontend.c
+@@ -62,6 +62,8 @@
+ 
+ #include "xenbus.h"
+ 
++unsigned int xb_dev_generation_id;
++
+ /*
+  * An element of a list of outstanding transactions, for which we're
+  * still waiting a reply.
+@@ -69,6 +71,7 @@
+ struct xenbus_transaction_holder {
+ 	struct list_head list;
+ 	struct xenbus_transaction handle;
++	unsigned int generation_id;
+ };
+ 
+ /*
+@@ -441,6 +444,7 @@ static int xenbus_write_transaction(unsigned msg_type,
+ 			rc = -ENOMEM;
+ 			goto out;
+ 		}
++		trans->generation_id = xb_dev_generation_id;
+ 		list_add(&trans->list, &u->transactions);
+ 	} else if (msg->hdr.tx_id != 0 &&
+ 		   !xenbus_get_transaction(u, msg->hdr.tx_id))
+@@ -449,6 +453,20 @@ static int xenbus_write_transaction(unsigned msg_type,
+ 		 !(msg->hdr.len == 2 &&
+ 		   (!strcmp(msg->body, "T") || !strcmp(msg->body, "F"))))
+ 		return xenbus_command_reply(u, XS_ERROR, "EINVAL");
++	else if (msg_type == XS_TRANSACTION_END) {
++		trans = xenbus_get_transaction(u, msg->hdr.tx_id);
++		if (trans && trans->generation_id != xb_dev_generation_id) {
++			list_del(&trans->list);
++			kfree(trans);
++			if (!strcmp(msg->body, "T"))
++				return xenbus_command_reply(u, XS_ERROR,
++							    "EAGAIN");
++			else
++				return xenbus_command_reply(u,
++							    XS_TRANSACTION_END,
++							    "OK");
++		}
++	}
+ 
+ 	rc = xenbus_dev_request_and_reply(&msg->hdr, u);
+ 	if (rc && trans) {
+diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbus_xs.c
+index 49a3874ae6bb..ddc18da61834 100644
+--- a/drivers/xen/xenbus/xenbus_xs.c
++++ b/drivers/xen/xenbus/xenbus_xs.c
+@@ -105,6 +105,7 @@ static void xs_suspend_enter(void)
+ 
+ static void xs_suspend_exit(void)
+ {
++	xb_dev_generation_id++;
+ 	spin_lock(&xs_state_lock);
+ 	xs_suspend_active--;
+ 	spin_unlock(&xs_state_lock);
+@@ -125,7 +126,7 @@ static uint32_t xs_request_enter(struct xb_req_data *req)
+ 		spin_lock(&xs_state_lock);
+ 	}
+ 
+-	if (req->type == XS_TRANSACTION_START)
++	if (req->type == XS_TRANSACTION_START && !req->user_req)
+ 		xs_state_users++;
+ 	xs_state_users++;
+ 	rq_id = xs_request_id++;
+@@ -140,7 +141,7 @@ void xs_request_exit(struct xb_req_data *req)
+ 	spin_lock(&xs_state_lock);
+ 	xs_state_users--;
+ 	if ((req->type == XS_TRANSACTION_START && req->msg.type == XS_ERROR) ||
+-	    (req->type == XS_TRANSACTION_END &&
++	    (req->type == XS_TRANSACTION_END && !req->user_req &&
+ 	     !WARN_ON_ONCE(req->msg.type == XS_ERROR &&
+ 			   !strcmp(req->body, "ENOENT"))))
+ 		xs_state_users--;
+@@ -286,6 +287,7 @@ int xenbus_dev_request_and_reply(struct xsd_sockmsg *msg, void *par)
+ 	req->num_vecs = 1;
+ 	req->cb = xenbus_dev_queue_reply;
+ 	req->par = par;
++	req->user_req = true;
+ 
+ 	xs_send(req, msg);
+ 
+@@ -313,6 +315,7 @@ static void *xs_talkv(struct xenbus_transaction t,
+ 	req->vec = iovec;
+ 	req->num_vecs = num_vecs;
+ 	req->cb = xs_wake_up;
++	req->user_req = false;
+ 
+ 	msg.req_id = 0;
+ 	msg.tx_id = t.id;
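
The xenbus changes above stamp each user-space transaction with a
generation counter that xs_suspend_exit() bumps, so a transaction that
straddles a suspend/resume is detected as stale and answered with
EAGAIN instead of being forwarded. The core check, as a standalone
model (hypothetical struct):

#include <stdbool.h>

static unsigned int xb_dev_generation_id;	/* bumped on resume */

struct transaction {
	unsigned int generation_id;		/* stamped at start */
};

static bool transaction_is_stale(const struct transaction *t)
{
	return t->generation_id != xb_dev_generation_id;
}
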
+diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
+index 09b7d0d4f6e4..007cfa39be5f 100644
+--- a/fs/cifs/dfs_cache.c
++++ b/fs/cifs/dfs_cache.c
+@@ -131,7 +131,7 @@ static inline void flush_cache_ent(struct dfs_cache_entry *ce)
+ 		return;
+ 
+ 	hlist_del_init_rcu(&ce->ce_hlist);
+-	kfree(ce->ce_path);
++	kfree_const(ce->ce_path);
+ 	free_tgts(ce);
+ 	dfs_cache_count--;
+ 	call_rcu(&ce->ce_rcu, free_cache_entry);
+@@ -421,7 +421,7 @@ alloc_cache_entry(const char *path, const struct dfs_info3_param *refs,
+ 
+ 	rc = copy_ref_data(refs, numrefs, ce, NULL);
+ 	if (rc) {
+-		kfree(ce->ce_path);
++		kfree_const(ce->ce_path);
+ 		kmem_cache_free(dfs_cache_slab, ce);
+ 		ce = ERR_PTR(rc);
+ 	}
+diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c
+index 920d350df37b..809c1edffbaf 100644
+--- a/fs/configfs/dir.c
++++ b/fs/configfs/dir.c
+@@ -58,15 +58,13 @@ static void configfs_d_iput(struct dentry * dentry,
+ 	if (sd) {
+ 		/* Coordinate with configfs_readdir */
+ 		spin_lock(&configfs_dirent_lock);
+-		/* Coordinate with configfs_attach_attr where will increase
+-		 * sd->s_count and update sd->s_dentry to new allocated one.
+-		 * Only set sd->dentry to null when this dentry is the only
+-		 * sd owner.
+-		 * If not do so, configfs_d_iput may run just after
+-		 * configfs_attach_attr and set sd->s_dentry to null
+-		 * even it's still in use.
++		/*
++		 * Set sd->s_dentry to null only when this dentry is the one
++		 * that is going to be killed.  Otherwise configfs_d_iput may
++		 * run just after configfs_attach_attr and set sd->s_dentry to
++		 * NULL even though it's still in use.
+ 		 */
+-		if (atomic_read(&sd->s_count) <= 2)
++		if (sd->s_dentry == dentry)
+ 			sd->s_dentry = NULL;
+ 
+ 		spin_unlock(&configfs_dirent_lock);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 4e32a033394c..e82adbf8adc1 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -2506,7 +2506,7 @@ static int io_sqe_buffer_register(struct io_ring_ctx *ctx, void __user *arg,
+ 
+ 		ret = io_copy_iov(ctx, &iov, arg, i);
+ 		if (ret)
+-			break;
++			goto err;
+ 
+ 		/*
+ 		 * Don't impose further limits on the size and buffer
+diff --git a/fs/ocfs2/filecheck.c b/fs/ocfs2/filecheck.c
+index f65f2b2f594d..1906cc962c4d 100644
+--- a/fs/ocfs2/filecheck.c
++++ b/fs/ocfs2/filecheck.c
+@@ -193,6 +193,7 @@ int ocfs2_filecheck_create_sysfs(struct ocfs2_super *osb)
+ 	ret = kobject_init_and_add(&entry->fs_kobj, &ocfs2_ktype_filecheck,
+ 					NULL, "filecheck");
+ 	if (ret) {
++		kobject_put(&entry->fs_kobj);
+ 		kfree(fcheck);
+ 		return ret;
+ 	}
+diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
+index a3fda9f024c3..4a7944078cc3 100644
+--- a/include/linux/sched/mm.h
++++ b/include/linux/sched/mm.h
+@@ -54,6 +54,10 @@ static inline void mmdrop(struct mm_struct *mm)
+  * followed by taking the mmap_sem for writing before modifying the
+  * vmas or anything the coredump pretends not to change from under it.
+  *
++ * It also has to be called when mmgrab() is used in the context of
++ * the process, but then the mm_count refcount is transferred outside
++ * the context of the process to run down_write() on that pinned mm.
++ *
+  * NOTE: find_extend_vma() called from GUP context is the only place
+  * that can modify the "mm" (notably the vm_start/end) under mmap_sem
+  * for reading and outside the context of the process, so it is also
+diff --git a/include/net/flow_dissector.h b/include/net/flow_dissector.h
+index 2b26979efb48..fc0d471af4b9 100644
+--- a/include/net/flow_dissector.h
++++ b/include/net/flow_dissector.h
+@@ -46,6 +46,7 @@ struct flow_dissector_key_tags {
+ 
+ struct flow_dissector_key_vlan {
+ 	u16	vlan_id:12,
++		vlan_dei:1,
+ 		vlan_priority:3;
+ 	__be16	vlan_tpid;
+ };
+diff --git a/include/net/netfilter/nft_fib.h b/include/net/netfilter/nft_fib.h
+index a88f92737308..e4c4d8eaca8c 100644
+--- a/include/net/netfilter/nft_fib.h
++++ b/include/net/netfilter/nft_fib.h
+@@ -34,5 +34,5 @@ void nft_fib6_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 		   const struct nft_pktinfo *pkt);
+ 
+ void nft_fib_store_result(void *reg, const struct nft_fib *priv,
+-			  const struct nft_pktinfo *pkt, int index);
++			  const struct net_device *dev);
+ #endif
+diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
+index 674b35383491..7a0c73e4b3eb 100644
+--- a/kernel/events/ring_buffer.c
++++ b/kernel/events/ring_buffer.c
+@@ -48,14 +48,30 @@ static void perf_output_put_handle(struct perf_output_handle *handle)
+ 	unsigned long head;
+ 
+ again:
++	/*
++	 * In order to avoid publishing a head value that goes backwards,
++	 * we must ensure the load of @rb->head happens after we've
++	 * incremented @rb->nest.
++	 *
++	 * Otherwise we can observe a @rb->head value before one published
++	 * by an IRQ/NMI happening between the load and the increment.
++	 */
++	barrier();
+ 	head = local_read(&rb->head);
+ 
+ 	/*
+-	 * IRQ/NMI can happen here, which means we can miss a head update.
++	 * IRQ/NMI can happen here and advance @rb->head, causing our
++	 * load above to be stale.
+ 	 */
+ 
+-	if (!local_dec_and_test(&rb->nest))
++	/*
++	 * If this isn't the outermost nesting, we don't have to update
++	 * @rb->user_page->data_head.
++	 */
++	if (local_read(&rb->nest) > 1) {
++		local_dec(&rb->nest);
+ 		goto out;
++	}
+ 
+ 	/*
+ 	 * Since the mmap() consumer (userspace) can run on a different CPU:
+@@ -84,12 +100,21 @@ again:
+ 	 * See perf_output_begin().
+ 	 */
+ 	smp_wmb(); /* B, matches C */
+-	rb->user_page->data_head = head;
++	WRITE_ONCE(rb->user_page->data_head, head);
++
++	/*
++	 * We must publish the head before decrementing the nest count,
++	 * otherwise an IRQ/NMI can publish a more recent head value and our
++	 * write will (temporarily) publish a stale value.
++	 */
++	barrier();
++	local_set(&rb->nest, 0);
+ 
+ 	/*
+-	 * Now check if we missed an update -- rely on previous implied
+-	 * compiler barriers to force a re-read.
++	 * Ensure we decrement @rb->nest before we validate the @rb->head.
++	 * Otherwise we cannot be sure we caught the 'last' nested update.
+ 	 */
++	barrier();
+ 	if (unlikely(head != local_read(&rb->head))) {
+ 		local_inc(&rb->nest);
+ 		goto again;
+@@ -471,7 +496,7 @@ void perf_aux_output_end(struct perf_output_handle *handle, unsigned long size)
+ 		perf_event_aux_event(handle->event, aux_head, size,
+ 				     handle->aux_flags);
+ 
+-	rb->user_page->aux_head = rb->aux_head;
++	WRITE_ONCE(rb->user_page->aux_head, rb->aux_head);
+ 	if (rb_need_aux_wakeup(rb))
+ 		wakeup = true;
+ 
+@@ -503,7 +528,7 @@ int perf_aux_output_skip(struct perf_output_handle *handle, unsigned long size)
+ 
+ 	rb->aux_head += size;
+ 
+-	rb->user_page->aux_head = rb->aux_head;
++	WRITE_ONCE(rb->user_page->aux_head, rb->aux_head);
+ 	if (rb_need_aux_wakeup(rb)) {
+ 		perf_output_wakeup(handle);
+ 		handle->wakeup = rb->aux_wakeup + rb->aux_watermark;
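
The ring-buffer fix above is entirely about ordering on one CPU:
nesting comes from IRQ/NMI on the same CPU, so compiler barriers (not
smp barriers) suffice, except for the pre-existing smp_wmb() that pairs
with the user-space reader. Condensed restatement of the patched path
(rb and head as in the hunk; not standalone code):

	barrier();			/* A: nest++ before loading head */
	head = local_read(&rb->head);
	/* ... emit data ... */
	smp_wmb();			/* B: data before head, for the
					 *    user-space reader */
	WRITE_ONCE(rb->user_page->data_head, head);
	barrier();			/* C: head published before nest=0 */
	local_set(&rb->nest, 0);
	barrier();			/* D: nest=0 before the re-check */
	if (head != local_read(&rb->head))
		;			/* nested writer moved head: retry */
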
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index 449044378782..79bcfe252d4d 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -1004,6 +1004,9 @@ static void collapse_huge_page(struct mm_struct *mm,
+ 	 * handled by the anon_vma lock + PG_lock.
+ 	 */
+ 	down_write(&mm->mmap_sem);
++	result = SCAN_ANY_PROCESS;
++	if (!mmget_still_valid(mm))
++		goto out;
+ 	result = hugepage_vma_revalidate(mm, address, &vma);
+ 	if (result)
+ 		goto out;
+diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
+index f2f03c655807..a58bd0db2155 100644
+--- a/mm/mmu_gather.c
++++ b/mm/mmu_gather.c
+@@ -93,8 +93,17 @@ void arch_tlb_finish_mmu(struct mmu_gather *tlb,
+ 	struct mmu_gather_batch *batch, *next;
+ 
+ 	if (force) {
++		/*
++		 * aarch64 yields better performance with fullmm by
++		 * avoiding multiple CPUs spamming TLBI messages at
++		 * the same time.
++		 *
++		 * On x86, non-fullmm does not make a significant
++		 * difference compared with fullmm.
++		 */
++		tlb->fullmm = 1;
+ 		__tlb_reset_range(tlb);
+-		__tlb_adjust_range(tlb, start, end - start);
++		tlb->freed_tables = 1;
+ 	}
+ 
+ 	tlb_flush_mmu(tlb);
+@@ -249,10 +258,15 @@ void tlb_finish_mmu(struct mmu_gather *tlb,
+ {
+ 	/*
+ 	 * If there are parallel threads are doing PTE changes on same range
+-	 * under non-exclusive lock(e.g., mmap_sem read-side) but defer TLB
+-	 * flush by batching, a thread has stable TLB entry can fail to flush
+-	 * the TLB by observing pte_none|!pte_dirty, for example so flush TLB
+-	 * forcefully if we detect parallel PTE batching threads.
++	 * under non-exclusive lock (e.g., mmap_sem read-side) but defer TLB
++	 * flush by batching, one thread may end up seeing inconsistent PTEs
++	 * and result in having stale TLB entries.  So flush TLB forcefully
++	 * if we detect parallel PTE batching threads.
++	 *
++	 * However, some syscalls, e.g. munmap(), may free page tables; this
++	 * requires a forced flush of everything in the given range.
++	 * Otherwise stale TLB entries may remain on architectures,
++	 * e.g. aarch64, that can specify which TLB level to flush.
+ 	 */
+ 	bool force = mm_tlb_flush_nested(tlb->mm);
+ 
+diff --git a/net/ax25/ax25_route.c b/net/ax25/ax25_route.c
+index 66f74c85cf6b..66d54fc11831 100644
+--- a/net/ax25/ax25_route.c
++++ b/net/ax25/ax25_route.c
+@@ -429,9 +429,11 @@ int ax25_rt_autobind(ax25_cb *ax25, ax25_address *addr)
+ 	}
+ 
+ 	if (ax25->sk != NULL) {
++		local_bh_disable();
+ 		bh_lock_sock(ax25->sk);
+ 		sock_reset_flag(ax25->sk, SOCK_ZAPPED);
+ 		bh_unlock_sock(ax25->sk);
++		local_bh_enable();
+ 	}
+ 
+ put:
+diff --git a/net/core/ethtool.c b/net/core/ethtool.c
+index 7285a19bb135..7b84e014633a 100644
+--- a/net/core/ethtool.c
++++ b/net/core/ethtool.c
+@@ -3022,6 +3022,11 @@ ethtool_rx_flow_rule_create(const struct ethtool_rx_flow_spec_input *input)
+ 			match->mask.vlan.vlan_id =
+ 				ntohs(ext_m_spec->vlan_tci) & 0x0fff;
+ 
++			match->key.vlan.vlan_dei =
++				!!(ext_h_spec->vlan_tci & htons(0x1000));
++			match->mask.vlan.vlan_dei =
++				!!(ext_m_spec->vlan_tci & htons(0x1000));
++
+ 			match->key.vlan.vlan_priority =
+ 				(ntohs(ext_h_spec->vlan_tci) & 0xe000) >> 13;
+ 			match->mask.vlan.vlan_priority =
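
The ethtool hunk above fills in the VLAN DEI bit that was previously
dropped. For reference, 802.1Q packs the 16-bit TCI as
PCP[15:13] | DEI[12] | VID[11:0]; a standalone demonstration of the
same masks in host byte order (the driver works on __be16 with
htons/ntohs):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t tci = 0x3064;	/* PCP=1, DEI=1, VID=0x064 */

	printf("vid=0x%03x dei=%d pcp=%d\n",
	       tci & 0x0fff, !!(tci & 0x1000), (tci & 0xe000) >> 13);
	return 0;	/* prints "vid=0x064 dei=1 pcp=1" */
}
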
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 9b9da5142613..cce4fbcd7dcb 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -3199,6 +3199,7 @@ static void *neigh_get_idx_any(struct seq_file *seq, loff_t *pos)
+ }
+ 
+ void *neigh_seq_start(struct seq_file *seq, loff_t *pos, struct neigh_table *tbl, unsigned int neigh_seq_flags)
++	__acquires(tbl->lock)
+ 	__acquires(rcu_bh)
+ {
+ 	struct neigh_seq_state *state = seq->private;
+@@ -3209,6 +3210,7 @@ void *neigh_seq_start(struct seq_file *seq, loff_t *pos, struct neigh_table *tbl
+ 
+ 	rcu_read_lock_bh();
+ 	state->nht = rcu_dereference_bh(tbl->nht);
++	read_lock(&tbl->lock);
+ 
+ 	return *pos ? neigh_get_idx_any(seq, pos) : SEQ_START_TOKEN;
+ }
+@@ -3242,8 +3244,13 @@ out:
+ EXPORT_SYMBOL(neigh_seq_next);
+ 
+ void neigh_seq_stop(struct seq_file *seq, void *v)
++	__releases(tbl->lock)
+ 	__releases(rcu_bh)
+ {
++	struct neigh_seq_state *state = seq->private;
++	struct neigh_table *tbl = state->tbl;
++
++	read_unlock(&tbl->lock);
+ 	rcu_read_unlock_bh();
+ }
+ EXPORT_SYMBOL(neigh_seq_stop);
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index ac770940adb9..1086c3ccb601 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -923,7 +923,7 @@ static int __ip_append_data(struct sock *sk,
+ 		uarg = sock_zerocopy_realloc(sk, length, skb_zcopy(skb));
+ 		if (!uarg)
+ 			return -ENOBUFS;
+-		extra_uref = !skb;	/* only extra ref if !MSG_MORE */
++		extra_uref = !skb_zcopy(skb);	/* only ref on new uarg */
+ 		if (rt->dst.dev->features & NETIF_F_SG &&
+ 		    csummode == CHECKSUM_PARTIAL) {
+ 			paged = true;
+diff --git a/net/ipv4/netfilter/nft_fib_ipv4.c b/net/ipv4/netfilter/nft_fib_ipv4.c
+index 94eb25bc8d7e..c8888e52591f 100644
+--- a/net/ipv4/netfilter/nft_fib_ipv4.c
++++ b/net/ipv4/netfilter/nft_fib_ipv4.c
+@@ -58,11 +58,6 @@ void nft_fib4_eval_type(const struct nft_expr *expr, struct nft_regs *regs,
+ }
+ EXPORT_SYMBOL_GPL(nft_fib4_eval_type);
+ 
+-static int get_ifindex(const struct net_device *dev)
+-{
+-	return dev ? dev->ifindex : 0;
+-}
+-
+ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 		   const struct nft_pktinfo *pkt)
+ {
+@@ -94,8 +89,7 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 
+ 	if (nft_hook(pkt) == NF_INET_PRE_ROUTING &&
+ 	    nft_fib_is_loopback(pkt->skb, nft_in(pkt))) {
+-		nft_fib_store_result(dest, priv, pkt,
+-				     nft_in(pkt)->ifindex);
++		nft_fib_store_result(dest, priv, nft_in(pkt));
+ 		return;
+ 	}
+ 
+@@ -108,8 +102,7 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 	if (ipv4_is_zeronet(iph->saddr)) {
+ 		if (ipv4_is_lbcast(iph->daddr) ||
+ 		    ipv4_is_local_multicast(iph->daddr)) {
+-			nft_fib_store_result(dest, priv, pkt,
+-					     get_ifindex(pkt->skb->dev));
++			nft_fib_store_result(dest, priv, pkt->skb->dev);
+ 			return;
+ 		}
+ 	}
+@@ -150,17 +143,7 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 		found = oif;
+ 	}
+ 
+-	switch (priv->result) {
+-	case NFT_FIB_RESULT_OIF:
+-		*dest = found->ifindex;
+-		break;
+-	case NFT_FIB_RESULT_OIFNAME:
+-		strncpy((char *)dest, found->name, IFNAMSIZ);
+-		break;
+-	default:
+-		WARN_ON_ONCE(1);
+-		break;
+-	}
++	nft_fib_store_result(dest, priv, found);
+ }
+ EXPORT_SYMBOL_GPL(nft_fib4_eval);
+ 
+diff --git a/net/ipv6/ip6_flowlabel.c b/net/ipv6/ip6_flowlabel.c
+index be5f3d7ceb96..f994f50e1516 100644
+--- a/net/ipv6/ip6_flowlabel.c
++++ b/net/ipv6/ip6_flowlabel.c
+@@ -254,9 +254,9 @@ struct ip6_flowlabel *fl6_sock_lookup(struct sock *sk, __be32 label)
+ 	rcu_read_lock_bh();
+ 	for_each_sk_fl_rcu(np, sfl) {
+ 		struct ip6_flowlabel *fl = sfl->fl;
+-		if (fl->label == label) {
++
++		if (fl->label == label && atomic_inc_not_zero(&fl->users)) {
+ 			fl->lastuse = jiffies;
+-			atomic_inc(&fl->users);
+ 			rcu_read_unlock_bh();
+ 			return fl;
+ 		}
+@@ -622,7 +622,8 @@ int ipv6_flowlabel_opt(struct sock *sk, char __user *optval, int optlen)
+ 						goto done;
+ 					}
+ 					fl1 = sfl->fl;
+-					atomic_inc(&fl1->users);
++					if (!atomic_inc_not_zero(&fl1->users))
++						fl1 = NULL;
+ 					break;
+ 				}
+ 			}
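
Both flowlabel hunks above switch from atomic_inc() to
atomic_inc_not_zero(): under RCU a lookup can race with the final put,
and taking a reference on a zero count would resurrect an object that
is already being freed. The lookup idiom, sketched against a
hypothetical struct item (kernel context):

struct item {
	struct list_head node;
	u32 key;
	atomic_t users;
};

static struct item *item_lookup(struct list_head *head, u32 key)
{
	struct item *it;

	rcu_read_lock_bh();
	list_for_each_entry_rcu(it, head, node) {
		if (it->key != key)
			continue;
		/* only take a reference if the object is still live */
		if (!atomic_inc_not_zero(&it->users))
			continue;	/* lost the race with the final put */
		rcu_read_unlock_bh();
		return it;		/* reference held by the caller */
	}
	rcu_read_unlock_bh();
	return NULL;
}
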
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index b5e0c85bcd57..ed9f6a7d224b 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1344,7 +1344,7 @@ emsgsize:
+ 		uarg = sock_zerocopy_realloc(sk, length, skb_zcopy(skb));
+ 		if (!uarg)
+ 			return -ENOBUFS;
+-		extra_uref = !skb;	/* only extra ref if !MSG_MORE */
++		extra_uref = !skb_zcopy(skb);	/* only ref on new uarg */
+ 		if (rt->dst.dev->features & NETIF_F_SG &&
+ 		    csummode == CHECKSUM_PARTIAL) {
+ 			paged = true;
+diff --git a/net/ipv6/netfilter/nft_fib_ipv6.c b/net/ipv6/netfilter/nft_fib_ipv6.c
+index 73cdc0bc63f7..ec068b0cffca 100644
+--- a/net/ipv6/netfilter/nft_fib_ipv6.c
++++ b/net/ipv6/netfilter/nft_fib_ipv6.c
+@@ -169,8 +169,7 @@ void nft_fib6_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 
+ 	if (nft_hook(pkt) == NF_INET_PRE_ROUTING &&
+ 	    nft_fib_is_loopback(pkt->skb, nft_in(pkt))) {
+-		nft_fib_store_result(dest, priv, pkt,
+-				     nft_in(pkt)->ifindex);
++		nft_fib_store_result(dest, priv, nft_in(pkt));
+ 		return;
+ 	}
+ 
+@@ -187,18 +186,7 @@ void nft_fib6_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 	if (oif && oif != rt->rt6i_idev->dev)
+ 		goto put_rt_err;
+ 
+-	switch (priv->result) {
+-	case NFT_FIB_RESULT_OIF:
+-		*dest = rt->rt6i_idev->dev->ifindex;
+-		break;
+-	case NFT_FIB_RESULT_OIFNAME:
+-		strncpy((char *)dest, rt->rt6i_idev->dev->name, IFNAMSIZ);
+-		break;
+-	default:
+-		WARN_ON_ONCE(1);
+-		break;
+-	}
+-
++	nft_fib_store_result(dest, priv, rt->rt6i_idev->dev);
+  put_rt_err:
+ 	ip6_rt_put(rt);
+ }
+diff --git a/net/lapb/lapb_iface.c b/net/lapb/lapb_iface.c
+index db6e0afe3a20..1740f852002e 100644
+--- a/net/lapb/lapb_iface.c
++++ b/net/lapb/lapb_iface.c
+@@ -182,6 +182,7 @@ int lapb_unregister(struct net_device *dev)
+ 	lapb = __lapb_devtostruct(dev);
+ 	if (!lapb)
+ 		goto out;
++	lapb_put(lapb);
+ 
+ 	lapb_stop_t1timer(lapb);
+ 	lapb_stop_t2timer(lapb);
+diff --git a/net/netfilter/ipvs/ip_vs_core.c b/net/netfilter/ipvs/ip_vs_core.c
+index 14457551bcb4..8ebf21149ec3 100644
+--- a/net/netfilter/ipvs/ip_vs_core.c
++++ b/net/netfilter/ipvs/ip_vs_core.c
+@@ -2312,7 +2312,6 @@ static void __net_exit __ip_vs_cleanup(struct net *net)
+ {
+ 	struct netns_ipvs *ipvs = net_ipvs(net);
+ 
+-	nf_unregister_net_hooks(net, ip_vs_ops, ARRAY_SIZE(ip_vs_ops));
+ 	ip_vs_service_net_cleanup(ipvs);	/* ip_vs_flush() with locks */
+ 	ip_vs_conn_net_cleanup(ipvs);
+ 	ip_vs_app_net_cleanup(ipvs);
+@@ -2327,6 +2326,7 @@ static void __net_exit __ip_vs_dev_cleanup(struct net *net)
+ {
+ 	struct netns_ipvs *ipvs = net_ipvs(net);
+ 	EnterFunction(2);
++	nf_unregister_net_hooks(net, ip_vs_ops, ARRAY_SIZE(ip_vs_ops));
+ 	ipvs->enable = 0;	/* Disable packet reception */
+ 	smp_wmb();
+ 	ip_vs_sync_net_cleanup(ipvs);
+diff --git a/net/netfilter/nf_nat_helper.c b/net/netfilter/nf_nat_helper.c
+index ccc06f7539d7..53aeb12b70fb 100644
+--- a/net/netfilter/nf_nat_helper.c
++++ b/net/netfilter/nf_nat_helper.c
+@@ -170,7 +170,7 @@ nf_nat_mangle_udp_packet(struct sk_buff *skb,
+ 	if (!udph->check && skb->ip_summed != CHECKSUM_PARTIAL)
+ 		return true;
+ 
+-	nf_nat_csum_recalc(skb, nf_ct_l3num(ct), IPPROTO_TCP,
++	nf_nat_csum_recalc(skb, nf_ct_l3num(ct), IPPROTO_UDP,
+ 			   udph, &udph->check, datalen, oldlen);
+ 
+ 	return true;
+diff --git a/net/netfilter/nf_queue.c b/net/netfilter/nf_queue.c
+index a36a77bae1d6..5b86574e7b89 100644
+--- a/net/netfilter/nf_queue.c
++++ b/net/netfilter/nf_queue.c
+@@ -254,6 +254,7 @@ static unsigned int nf_iterate(struct sk_buff *skb,
+ repeat:
+ 		verdict = nf_hook_entry_hookfn(hook, skb, state);
+ 		if (verdict != NF_ACCEPT) {
++			*index = i;
+ 			if (verdict != NF_REPEAT)
+ 				return verdict;
+ 			goto repeat;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index aa5e7b00a581..101975386547 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -2261,13 +2261,13 @@ static int nf_tables_fill_rule_info(struct sk_buff *skb, struct net *net,
+ 				    u32 flags, int family,
+ 				    const struct nft_table *table,
+ 				    const struct nft_chain *chain,
+-				    const struct nft_rule *rule)
++				    const struct nft_rule *rule,
++				    const struct nft_rule *prule)
+ {
+ 	struct nlmsghdr *nlh;
+ 	struct nfgenmsg *nfmsg;
+ 	const struct nft_expr *expr, *next;
+ 	struct nlattr *list;
+-	const struct nft_rule *prule;
+ 	u16 type = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+ 
+ 	nlh = nlmsg_put(skb, portid, seq, type, sizeof(struct nfgenmsg), flags);
+@@ -2287,8 +2287,7 @@ static int nf_tables_fill_rule_info(struct sk_buff *skb, struct net *net,
+ 			 NFTA_RULE_PAD))
+ 		goto nla_put_failure;
+ 
+-	if ((event != NFT_MSG_DELRULE) && (rule->list.prev != &chain->rules)) {
+-		prule = list_prev_entry(rule, list);
++	if (event != NFT_MSG_DELRULE && prule) {
+ 		if (nla_put_be64(skb, NFTA_RULE_POSITION,
+ 				 cpu_to_be64(prule->handle),
+ 				 NFTA_RULE_PAD))
+@@ -2335,7 +2334,7 @@ static void nf_tables_rule_notify(const struct nft_ctx *ctx,
+ 
+ 	err = nf_tables_fill_rule_info(skb, ctx->net, ctx->portid, ctx->seq,
+ 				       event, 0, ctx->family, ctx->table,
+-				       ctx->chain, rule);
++				       ctx->chain, rule, NULL);
+ 	if (err < 0) {
+ 		kfree_skb(skb);
+ 		goto err;
+@@ -2360,12 +2359,13 @@ static int __nf_tables_dump_rules(struct sk_buff *skb,
+ 				  const struct nft_chain *chain)
+ {
+ 	struct net *net = sock_net(skb->sk);
++	const struct nft_rule *rule, *prule;
+ 	unsigned int s_idx = cb->args[0];
+-	const struct nft_rule *rule;
+ 
++	prule = NULL;
+ 	list_for_each_entry_rcu(rule, &chain->rules, list) {
+ 		if (!nft_is_active(net, rule))
+-			goto cont;
++			goto cont_skip;
+ 		if (*idx < s_idx)
+ 			goto cont;
+ 		if (*idx > s_idx) {
+@@ -2377,11 +2377,13 @@ static int __nf_tables_dump_rules(struct sk_buff *skb,
+ 					NFT_MSG_NEWRULE,
+ 					NLM_F_MULTI | NLM_F_APPEND,
+ 					table->family,
+-					table, chain, rule) < 0)
++					table, chain, rule, prule) < 0)
+ 			return 1;
+ 
+ 		nl_dump_check_consistent(cb, nlmsg_hdr(skb));
+ cont:
++		prule = rule;
++cont_skip:
+ 		(*idx)++;
+ 	}
+ 	return 0;
+@@ -2537,7 +2539,7 @@ static int nf_tables_getrule(struct net *net, struct sock *nlsk,
+ 
+ 	err = nf_tables_fill_rule_info(skb2, net, NETLINK_CB(skb).portid,
+ 				       nlh->nlmsg_seq, NFT_MSG_NEWRULE, 0,
+-				       family, table, chain, rule);
++				       family, table, chain, rule, NULL);
+ 	if (err < 0)
+ 		goto err;
+ 
+diff --git a/net/netfilter/nft_fib.c b/net/netfilter/nft_fib.c
+index 21df8cccea65..77f00a99dfab 100644
+--- a/net/netfilter/nft_fib.c
++++ b/net/netfilter/nft_fib.c
+@@ -135,17 +135,17 @@ int nft_fib_dump(struct sk_buff *skb, const struct nft_expr *expr)
+ EXPORT_SYMBOL_GPL(nft_fib_dump);
+ 
+ void nft_fib_store_result(void *reg, const struct nft_fib *priv,
+-			  const struct nft_pktinfo *pkt, int index)
++			  const struct net_device *dev)
+ {
+-	struct net_device *dev;
+ 	u32 *dreg = reg;
++	int index;
+ 
+ 	switch (priv->result) {
+ 	case NFT_FIB_RESULT_OIF:
++		index = dev ? dev->ifindex : 0;
+ 		*dreg = (priv->flags & NFTA_FIB_F_PRESENT) ? !!index : index;
+ 		break;
+ 	case NFT_FIB_RESULT_OIFNAME:
+-		dev = dev_get_by_index_rcu(nft_net(pkt), index);
+ 		if (priv->flags & NFTA_FIB_F_PRESENT)
+ 			*dreg = !!dev;
+ 		else
+diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c
+index 376181cc1def..9f2875efb4ac 100644
+--- a/net/nfc/netlink.c
++++ b/net/nfc/netlink.c
+@@ -922,7 +922,8 @@ static int nfc_genl_deactivate_target(struct sk_buff *skb,
+ 	u32 device_idx, target_idx;
+ 	int rc;
+ 
+-	if (!info->attrs[NFC_ATTR_DEVICE_INDEX])
++	if (!info->attrs[NFC_ATTR_DEVICE_INDEX] ||
++	    !info->attrs[NFC_ATTR_TARGET_INDEX])
+ 		return -EINVAL;
+ 
+ 	device_idx = nla_get_u32(info->attrs[NFC_ATTR_DEVICE_INDEX]);
+diff --git a/net/openvswitch/vport-internal_dev.c b/net/openvswitch/vport-internal_dev.c
+index 26f71cbf7527..5993405c25c1 100644
+--- a/net/openvswitch/vport-internal_dev.c
++++ b/net/openvswitch/vport-internal_dev.c
+@@ -170,7 +170,9 @@ static struct vport *internal_dev_create(const struct vport_parms *parms)
+ {
+ 	struct vport *vport;
+ 	struct internal_dev *internal_dev;
++	struct net_device *dev;
+ 	int err;
++	bool free_vport = true;
+ 
+ 	vport = ovs_vport_alloc(0, &ovs_internal_vport_ops, parms);
+ 	if (IS_ERR(vport)) {
+@@ -178,8 +180,9 @@ static struct vport *internal_dev_create(const struct vport_parms *parms)
+ 		goto error;
+ 	}
+ 
+-	vport->dev = alloc_netdev(sizeof(struct internal_dev),
+-				  parms->name, NET_NAME_USER, do_setup);
++	dev = alloc_netdev(sizeof(struct internal_dev),
++			   parms->name, NET_NAME_USER, do_setup);
++	vport->dev = dev;
+ 	if (!vport->dev) {
+ 		err = -ENOMEM;
+ 		goto error_free_vport;
+@@ -200,8 +203,10 @@ static struct vport *internal_dev_create(const struct vport_parms *parms)
+ 
+ 	rtnl_lock();
+ 	err = register_netdevice(vport->dev);
+-	if (err)
++	if (err) {
++		free_vport = false;
+ 		goto error_unlock;
++	}
+ 
+ 	dev_set_promiscuity(vport->dev, 1);
+ 	rtnl_unlock();
+@@ -211,11 +216,12 @@ static struct vport *internal_dev_create(const struct vport_parms *parms)
+ 
+ error_unlock:
+ 	rtnl_unlock();
+-	free_percpu(vport->dev->tstats);
++	free_percpu(dev->tstats);
+ error_free_netdev:
+-	free_netdev(vport->dev);
++	free_netdev(dev);
+ error_free_vport:
+-	ovs_vport_free(vport);
++	if (free_vport)
++		ovs_vport_free(vport);
+ error:
+ 	return ERR_PTR(err);
+ }
+diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c
+index ae65a1cfa596..fb546b2d67ca 100644
+--- a/net/sctp/sm_make_chunk.c
++++ b/net/sctp/sm_make_chunk.c
+@@ -2600,6 +2600,8 @@ do_addr_param:
+ 	case SCTP_PARAM_STATE_COOKIE:
+ 		asoc->peer.cookie_len =
+ 			ntohs(param.p->length) - sizeof(struct sctp_paramhdr);
++		if (asoc->peer.cookie)
++			kfree(asoc->peer.cookie);
+ 		asoc->peer.cookie = kmemdup(param.cookie->body, asoc->peer.cookie_len, gfp);
+ 		if (!asoc->peer.cookie)
+ 			retval = 0;
+@@ -2664,6 +2666,8 @@ do_addr_param:
+ 			goto fall_through;
+ 
+ 		/* Save peer's random parameter */
++		if (asoc->peer.peer_random)
++			kfree(asoc->peer.peer_random);
+ 		asoc->peer.peer_random = kmemdup(param.p,
+ 					    ntohs(param.p->length), gfp);
+ 		if (!asoc->peer.peer_random) {
+@@ -2677,6 +2681,8 @@ do_addr_param:
+ 			goto fall_through;
+ 
+ 		/* Save peer's HMAC list */
++		if (asoc->peer.peer_hmacs)
++			kfree(asoc->peer.peer_hmacs);
+ 		asoc->peer.peer_hmacs = kmemdup(param.p,
+ 					    ntohs(param.p->length), gfp);
+ 		if (!asoc->peer.peer_hmacs) {
+@@ -2692,6 +2698,8 @@ do_addr_param:
+ 		if (!ep->auth_enable)
+ 			goto fall_through;
+ 
++		if (asoc->peer.peer_chunks)
++			kfree(asoc->peer.peer_chunks);
+ 		asoc->peer.peer_chunks = kmemdup(param.p,
+ 					    ntohs(param.p->length), gfp);
+ 		if (!asoc->peer.peer_chunks)
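
The sctp hunks above free the previous copy before each kmemdup() so
that a peer repeating the same parameter in an INIT cannot leak the
earlier allocation. Since kfree(NULL) is a no-op, the explicit NULL
checks are not strictly needed, and the idiom reduces to a sketch like
(hypothetical helper name, kernel context):

static int replace_param(void **slot, const void *src, size_t len,
			 gfp_t gfp)
{
	kfree(*slot);			/* drop any earlier duplicate */
	*slot = kmemdup(src, len, gfp);
	return *slot ? 0 : -ENOMEM;
}
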
+diff --git a/net/tipc/group.c b/net/tipc/group.c
+index 63f39201e41e..df0c0c4b38d5 100644
+--- a/net/tipc/group.c
++++ b/net/tipc/group.c
+@@ -218,6 +218,7 @@ void tipc_group_delete(struct net *net, struct tipc_group *grp)
+ 
+ 	rbtree_postorder_for_each_entry_safe(m, tmp, tree, tree_node) {
+ 		tipc_group_proto_xmit(grp, m, GRP_LEAVE_MSG, &xmitq);
++		__skb_queue_purge(&m->deferredq);
+ 		list_del(&m->list);
+ 		kfree(m);
+ 	}
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index d350ff73a391..41e17ed0c94e 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -1128,7 +1128,6 @@ static int tls_sw_do_sendpage(struct sock *sk, struct page *page,
+ 
+ 		full_record = false;
+ 		record_room = TLS_MAX_PAYLOAD_SIZE - msg_pl->sg.size;
+-		copied = 0;
+ 		copy = size;
+ 		if (copy >= record_room) {
+ 			copy = record_room;
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index f3f3d06cb6d8..e30f53728725 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -871,8 +871,10 @@ virtio_transport_recv_connected(struct sock *sk,
+ 		if (le32_to_cpu(pkt->hdr.flags) & VIRTIO_VSOCK_SHUTDOWN_SEND)
+ 			vsk->peer_shutdown |= SEND_SHUTDOWN;
+ 		if (vsk->peer_shutdown == SHUTDOWN_MASK &&
+-		    vsock_stream_has_data(vsk) <= 0)
++		    vsock_stream_has_data(vsk) <= 0) {
++			sock_set_flag(sk, SOCK_DONE);
+ 			sk->sk_state = TCP_CLOSING;
++		}
+ 		if (le32_to_cpu(pkt->hdr.flags))
+ 			sk->sk_state_change(sk);
+ 		break;
+diff --git a/sound/firewire/fireface/ff-protocol-latter.c b/sound/firewire/fireface/ff-protocol-latter.c
+index c8236ff89b7f..b30d02d359b1 100644
+--- a/sound/firewire/fireface/ff-protocol-latter.c
++++ b/sound/firewire/fireface/ff-protocol-latter.c
+@@ -9,11 +9,11 @@
+ 
+ #include "ff.h"
+ 
+-#define LATTER_STF		0xffff00000004
+-#define LATTER_ISOC_CHANNELS	0xffff00000008
+-#define LATTER_ISOC_START	0xffff0000000c
+-#define LATTER_FETCH_MODE	0xffff00000010
+-#define LATTER_SYNC_STATUS	0x0000801c0000
++#define LATTER_STF		0xffff00000004ULL
++#define LATTER_ISOC_CHANNELS	0xffff00000008ULL
++#define LATTER_ISOC_START	0xffff0000000cULL
++#define LATTER_FETCH_MODE	0xffff00000010ULL
++#define LATTER_SYNC_STATUS	0x0000801c0000ULL
+ 
+ static int parse_clock_bits(u32 data, unsigned int *rate,
+ 			    enum snd_ff_clock_src *src)
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 789308f54785..5c29d6490a18 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -375,6 +375,7 @@ enum {
+ 
+ #define IS_BXT(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0x5a98)
+ #define IS_CFL(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0xa348)
++#define IS_CNL(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0x9dc8)
+ 
+ static char *driver_short_names[] = {
+ 	[AZX_DRIVER_ICH] = "HDA Intel",
+@@ -1700,8 +1701,8 @@ static int azx_create(struct snd_card *card, struct pci_dev *pci,
+ 	else
+ 		chip->bdl_pos_adj = bdl_pos_adj[dev];
+ 
+-	/* Workaround for a communication error on CFL (bko#199007) */
+-	if (IS_CFL(pci))
++	/* Workaround for a communication error on CFL (bko#199007) and CNL */
++	if (IS_CFL(pci) || IS_CNL(pci))
+ 		chip->polling_mode = 1;
+ 
+ 	err = azx_bus_init(chip, model[dev], &pci_hda_io_ops);
+diff --git a/tools/perf/arch/s390/util/machine.c b/tools/perf/arch/s390/util/machine.c
+index 0b2054007314..a19690a17291 100644
+--- a/tools/perf/arch/s390/util/machine.c
++++ b/tools/perf/arch/s390/util/machine.c
+@@ -5,16 +5,19 @@
+ #include "util.h"
+ #include "machine.h"
+ #include "api/fs/fs.h"
++#include "debug.h"
+ 
+ int arch__fix_module_text_start(u64 *start, const char *name)
+ {
++	u64 m_start = *start;
+ 	char path[PATH_MAX];
+ 
+ 	snprintf(path, PATH_MAX, "module/%.*s/sections/.text",
+ 				(int)strlen(name) - 2, name + 1);
+-
+-	if (sysfs__read_ull(path, (unsigned long long *)start) < 0)
+-		return -1;
++	if (sysfs__read_ull(path, (unsigned long long *)start) < 0) {
++		pr_debug2("Using module %s start:%#lx\n", path, m_start);
++		*start = m_start;
++	}
+ 
+ 	return 0;
+ }
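
The perf fix above converts a hard error into a graceful fallback: when the sysfs read of a module's .text start fails, the saved input value is restored and the function reports success. The shape of that pattern, with a hypothetical reader:

    extern int read_from_sysfs(unsigned long long *v);   /* hypothetical */

    static int fixup_start(unsigned long long *start)
    {
        unsigned long long saved = *start;    /* remember the fallback */

        if (read_from_sysfs(start) < 0)
            *start = saved;                   /* keep the caller's value */

        return 0;                             /* no longer a fatal error */
    }
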
+diff --git a/tools/perf/util/data-convert-bt.c b/tools/perf/util/data-convert-bt.c
+index 26af43ad9ddd..53d49fd8b8ae 100644
+--- a/tools/perf/util/data-convert-bt.c
++++ b/tools/perf/util/data-convert-bt.c
+@@ -271,7 +271,7 @@ static int string_set_value(struct bt_ctf_field *field, const char *string)
+ 				if (i > 0)
+ 					strncpy(buffer, string, i);
+ 			}
+-			strncat(buffer + p, numstr, 4);
++			memcpy(buffer + p, numstr, 4);
+ 			p += 3;
+ 		}
+ 	}
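
The strncat replaced above was the wrong tool: strncat appends at the NUL terminator it finds in the destination, not at the computed offset, and its bound limits source bytes rather than destination space (gcc 8 also warns when the bound equals the source length). memcpy states the real intent, exactly four bytes at this position:

    #include <string.h>

    static void put4(char *buffer, size_t p, const char numstr[4])
    {
        /* strncat(buffer + p, numstr, 4) would seek a NUL from buffer + p
           and append there; memcpy writes the 4 bytes where intended */
        memcpy(buffer + p, numstr, 4);
    }
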
+diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
+index 50678d318185..b800752745af 100644
+--- a/tools/perf/util/thread.c
++++ b/tools/perf/util/thread.c
+@@ -132,7 +132,7 @@ void thread__put(struct thread *thread)
+ 	}
+ }
+ 
+-struct namespaces *thread__namespaces(const struct thread *thread)
++static struct namespaces *__thread__namespaces(const struct thread *thread)
+ {
+ 	if (list_empty(&thread->namespaces_list))
+ 		return NULL;
+@@ -140,10 +140,21 @@ struct namespaces *thread__namespaces(const struct thread *thread)
+ 	return list_first_entry(&thread->namespaces_list, struct namespaces, list);
+ }
+ 
++struct namespaces *thread__namespaces(const struct thread *thread)
++{
++	struct namespaces *ns;
++
++	down_read((struct rw_semaphore *)&thread->namespaces_lock);
++	ns = __thread__namespaces(thread);
++	up_read((struct rw_semaphore *)&thread->namespaces_lock);
++
++	return ns;
++}
++
+ static int __thread__set_namespaces(struct thread *thread, u64 timestamp,
+ 				    struct namespaces_event *event)
+ {
+-	struct namespaces *new, *curr = thread__namespaces(thread);
++	struct namespaces *new, *curr = __thread__namespaces(thread);
+ 
+ 	new = namespaces__new(event);
+ 	if (!new)
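
The thread.c change above is the standard locked-accessor split: a static __thread__namespaces() that assumes namespaces_lock is held, plus a public wrapper that takes the lock around it, so the internal caller that already holds the lock uses the bare helper. The same structure in portable C, names hypothetical:

    #include <pthread.h>

    struct thing {
        pthread_rwlock_t lock;
        int value;
    };

    /* caller must hold t->lock */
    static int __thing_value(const struct thing *t)
    {
        return t->value;
    }

    /* public accessor: wrap the bare helper in the read lock */
    int thing_value(struct thing *t)
    {
        int v;

        pthread_rwlock_rdlock(&t->lock);
        v = __thing_value(t);
        pthread_rwlock_unlock(&t->lock);
        return v;
    }

The double-underscore prefix is the convention doing the documentation here: __foo() means "lock already held", foo() means "takes the lock for you".
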
+diff --git a/tools/testing/selftests/netfilter/nft_nat.sh b/tools/testing/selftests/netfilter/nft_nat.sh
+index 3194007cf8d1..a59c5fd4e987 100755
+--- a/tools/testing/selftests/netfilter/nft_nat.sh
++++ b/tools/testing/selftests/netfilter/nft_nat.sh
+@@ -23,7 +23,11 @@ ip netns add ns0
+ ip netns add ns1
+ ip netns add ns2
+ 
+-ip link add veth0 netns ns0 type veth peer name eth0 netns ns1
++ip link add veth0 netns ns0 type veth peer name eth0 netns ns1 > /dev/null 2>&1
++if [ $? -ne 0 ];then
++    echo "SKIP: No virtual ethernet pair device support in kernel"
++    exit $ksft_skip
++fi
+ ip link add veth1 netns ns0 type veth peer name eth0 netns ns2
+ 
+ ip -net ns0 link set lo up

diff --git a/1013_linux-5.1.14.patch b/1013_linux-5.1.14.patch
new file mode 100644
index 0000000..a5fab59
--- /dev/null
+++ b/1013_linux-5.1.14.patch
@@ -0,0 +1,27 @@
+diff --git a/Makefile b/Makefile
+index dfcd51a35824..c4b1a345d3f0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 2d86e1bc483c..b8b4ae555e34 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -1299,7 +1299,8 @@ int tcp_fragment(struct sock *sk, enum tcp_queue tcp_queue,
+ 	if (nsize < 0)
+ 		nsize = 0;
+ 
+-	if (unlikely((sk->sk_wmem_queued >> 1) > sk->sk_sndbuf)) {
++	if (unlikely((sk->sk_wmem_queued >> 1) > sk->sk_sndbuf &&
++		     tcp_queue != TCP_FRAG_IN_WRITE_QUEUE)) {
+ 		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPWQUEUETOOBIG);
+ 		return -ENOMEM;
+ 	}
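
The tcp_fragment change above scopes the send-buffer overshoot check to the retransmit queue only: refusing to split a write-queue skb could keep a socket from ever making progress, while the retransmit queue is where the pathological memory growth the check targets can occur. The guard as a standalone predicate:

    enum tcp_queue { TCP_FRAG_IN_WRITE_QUEUE, TCP_FRAG_IN_RTX_QUEUE };

    /* reject only when over the limit and not splitting the write queue */
    static int should_reject(unsigned long wmem_queued, unsigned long sndbuf,
                             enum tcp_queue q)
    {
        return (wmem_queued >> 1) > sndbuf && q != TCP_FRAG_IN_WRITE_QUEUE;
    }
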



* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-06-25 10:54 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2019-06-25 10:54 UTC (permalink / raw
  To: gentoo-commits

commit:     43bec38f38323b1bb6703d492e753525586dd530
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jun 25 10:54:03 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jun 25 10:54:03 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=43bec38f

Linux patch 5.1.15

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1014_linux-5.1.15.patch | 4242 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4246 insertions(+)

diff --git a/0000_README b/0000_README
index 3443ce1..db80d60 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,10 @@ Patch:  1013_linux-5.1.14.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.1.14
 
+Patch:  1014_linux-5.1.15.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.1.15
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1014_linux-5.1.15.patch b/1014_linux-5.1.15.patch
new file mode 100644
index 0000000..3ba035b
--- /dev/null
+++ b/1014_linux-5.1.15.patch
@@ -0,0 +1,4242 @@
+diff --git a/Makefile b/Makefile
+index c4b1a345d3f0..d7b3c8e3ff3e 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 14
++SUBLEVEL = 15
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arc/boot/dts/hsdk.dts b/arch/arc/boot/dts/hsdk.dts
+index 7425bb0f2d1b..6219b372e9c1 100644
+--- a/arch/arc/boot/dts/hsdk.dts
++++ b/arch/arc/boot/dts/hsdk.dts
+@@ -187,6 +187,7 @@
+ 			interrupt-names = "macirq";
+ 			phy-mode = "rgmii";
+ 			snps,pbl = <32>;
++			snps,multicast-filter-bins = <256>;
+ 			clocks = <&gmacclk>;
+ 			clock-names = "stmmaceth";
+ 			phy-handle = <&phy0>;
+@@ -195,6 +196,9 @@
+ 			mac-address = [00 00 00 00 00 00]; /* Filled in by U-Boot */
+ 			dma-coherent;
+ 
++			tx-fifo-depth = <4096>;
++			rx-fifo-depth = <4096>;
++
+ 			mdio {
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+diff --git a/arch/arc/include/asm/cmpxchg.h b/arch/arc/include/asm/cmpxchg.h
+index d819de1c5d10..3ea4112c8302 100644
+--- a/arch/arc/include/asm/cmpxchg.h
++++ b/arch/arc/include/asm/cmpxchg.h
+@@ -92,8 +92,11 @@ __cmpxchg(volatile void *ptr, unsigned long expected, unsigned long new)
+ 
+ #endif /* CONFIG_ARC_HAS_LLSC */
+ 
+-#define cmpxchg(ptr, o, n) ((typeof(*(ptr)))__cmpxchg((ptr), \
+-				(unsigned long)(o), (unsigned long)(n)))
++#define cmpxchg(ptr, o, n) ({				\
++	(typeof(*(ptr)))__cmpxchg((ptr),		\
++				  (unsigned long)(o),	\
++				  (unsigned long)(n));	\
++})
+ 
+ /*
+  * atomic_cmpxchg is same as cmpxchg
+@@ -198,8 +201,11 @@ static inline unsigned long __xchg(unsigned long val, volatile void *ptr,
+ 	return __xchg_bad_pointer();
+ }
+ 
+-#define xchg(ptr, with) ((typeof(*(ptr)))__xchg((unsigned long)(with), (ptr), \
+-						 sizeof(*(ptr))))
++#define xchg(ptr, with) ({				\
++	(typeof(*(ptr)))__xchg((unsigned long)(with),	\
++			       (ptr),			\
++			       sizeof(*(ptr)));		\
++})
+ 
+ #endif /* CONFIG_ARC_PLAT_EZNPS */
+ 
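
The cmpxchg()/xchg() rewrite above swaps a bare cast expression for a GNU statement expression, ({ ... }). The value is unchanged, but a cast whose result is discarded triggers gcc's "value computed is not used" warning at every call site that ignores the return, whereas a statement expression does not. In miniature:

    #define AS_INT(x)     ((int)(x))         /* warns when the result is dropped */
    #define AS_INT_SE(x)  ({ (int)(x); })    /* statement expression: quiet */

    void demo(long v)
    {
        int r = AS_INT_SE(v);   /* still usable as an expression */
        (void)r;
        AS_INT_SE(v);           /* and no -Wunused-value noise here */
    }
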
+diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
+index 4097764fea23..fa18c00b0cfd 100644
+--- a/arch/arc/mm/tlb.c
++++ b/arch/arc/mm/tlb.c
+@@ -911,9 +911,11 @@ void do_tlb_overlap_fault(unsigned long cause, unsigned long address,
+ 			  struct pt_regs *regs)
+ {
+ 	struct cpuinfo_arc_mmu *mmu = &cpuinfo_arc700[smp_processor_id()].mmu;
+-	unsigned int pd0[mmu->ways];
+ 	unsigned long flags;
+-	int set;
++	int set, n_ways = mmu->ways;
++
++	n_ways = min(n_ways, 4);
++	BUG_ON(mmu->ways > 4);
+ 
+ 	local_irq_save(flags);
+ 
+@@ -921,9 +923,10 @@ void do_tlb_overlap_fault(unsigned long cause, unsigned long address,
+ 	for (set = 0; set < mmu->sets; set++) {
+ 
+ 		int is_valid, way;
++		unsigned int pd0[4];
+ 
+ 		/* read out all the ways of current set */
+-		for (way = 0, is_valid = 0; way < mmu->ways; way++) {
++		for (way = 0, is_valid = 0; way < n_ways; way++) {
+ 			write_aux_reg(ARC_REG_TLBINDEX,
+ 					  SET_WAY_TO_IDX(mmu, set, way));
+ 			write_aux_reg(ARC_REG_TLBCOMMAND, TLBRead);
+@@ -937,14 +940,14 @@ void do_tlb_overlap_fault(unsigned long cause, unsigned long address,
+ 			continue;
+ 
+ 		/* Scan the set for duplicate ways: needs a nested loop */
+-		for (way = 0; way < mmu->ways - 1; way++) {
++		for (way = 0; way < n_ways - 1; way++) {
+ 
+ 			int n;
+ 
+ 			if (!pd0[way])
+ 				continue;
+ 
+-			for (n = way + 1; n < mmu->ways; n++) {
++			for (n = way + 1; n < n_ways; n++) {
+ 				if (pd0[way] != pd0[n])
+ 					continue;
+ 
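
The tlb.c change above eliminates a variable-length array: pd0[mmu->ways] becomes a fixed pd0[4], with the loop bound clamped and a BUG_ON for anything larger. VLAs make stack usage unknowable at compile time, which is why the kernel moved to building with -Wvla. The general shape, as a userspace sketch:

    #include <assert.h>

    #define MAX_WAYS 4

    void scan_ways(unsigned int ways)
    {
        unsigned int pd0[MAX_WAYS];     /* fixed: stack cost known up front */
        unsigned int n = ways < MAX_WAYS ? ways : MAX_WAYS;

        assert(ways <= MAX_WAYS);       /* analogous to the BUG_ON */

        for (unsigned int w = 0; w < n; w++)
            pd0[w] = 0;                 /* ...per-way work... */
    }
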
+diff --git a/arch/arm/boot/dts/am57xx-idk-common.dtsi b/arch/arm/boot/dts/am57xx-idk-common.dtsi
+index f7bd26458915..42e433da79ec 100644
+--- a/arch/arm/boot/dts/am57xx-idk-common.dtsi
++++ b/arch/arm/boot/dts/am57xx-idk-common.dtsi
+@@ -420,6 +420,7 @@
+ 	vqmmc-supply = <&ldo1_reg>;
+ 	bus-width = <4>;
+ 	cd-gpios = <&gpio6 27 GPIO_ACTIVE_LOW>; /* gpio 219 */
++	no-1-8-v;
+ };
+ 
+ &mmc2 {
+diff --git a/arch/arm/boot/dts/dra76x-mmc-iodelay.dtsi b/arch/arm/boot/dts/dra76x-mmc-iodelay.dtsi
+index baba7b00eca7..fdca48186916 100644
+--- a/arch/arm/boot/dts/dra76x-mmc-iodelay.dtsi
++++ b/arch/arm/boot/dts/dra76x-mmc-iodelay.dtsi
+@@ -22,7 +22,7 @@
+  *
+  * Datamanual Revisions:
+  *
+- * DRA76x Silicon Revision 1.0: SPRS993A, Revised July 2017
++ * DRA76x Silicon Revision 1.0: SPRS993E, Revised December 2018
+  *
+  */
+ 
+@@ -169,25 +169,25 @@
+ 	/* Corresponds to MMC2_HS200_MANUAL1 in datamanual */
+ 	mmc2_iodelay_hs200_conf: mmc2_iodelay_hs200_conf {
+ 		pinctrl-pin-array = <
+-			0x190 A_DELAY_PS(384) G_DELAY_PS(0)       /* CFG_GPMC_A19_OEN */
+-			0x194 A_DELAY_PS(0) G_DELAY_PS(174)       /* CFG_GPMC_A19_OUT */
+-			0x1a8 A_DELAY_PS(410) G_DELAY_PS(0)       /* CFG_GPMC_A20_OEN */
+-			0x1ac A_DELAY_PS(85) G_DELAY_PS(0)        /* CFG_GPMC_A20_OUT */
+-			0x1b4 A_DELAY_PS(468) G_DELAY_PS(0)       /* CFG_GPMC_A21_OEN */
+-			0x1b8 A_DELAY_PS(139) G_DELAY_PS(0)       /* CFG_GPMC_A21_OUT */
+-			0x1c0 A_DELAY_PS(676) G_DELAY_PS(0)       /* CFG_GPMC_A22_OEN */
+-			0x1c4 A_DELAY_PS(69) G_DELAY_PS(0)        /* CFG_GPMC_A22_OUT */
+-			0x1d0 A_DELAY_PS(1062) G_DELAY_PS(154)	  /* CFG_GPMC_A23_OUT */
+-			0x1d8 A_DELAY_PS(640) G_DELAY_PS(0)       /* CFG_GPMC_A24_OEN */
+-			0x1dc A_DELAY_PS(0) G_DELAY_PS(0)         /* CFG_GPMC_A24_OUT */
+-			0x1e4 A_DELAY_PS(356) G_DELAY_PS(0)       /* CFG_GPMC_A25_OEN */
+-			0x1e8 A_DELAY_PS(0) G_DELAY_PS(0)         /* CFG_GPMC_A25_OUT */
+-			0x1f0 A_DELAY_PS(579) G_DELAY_PS(0)       /* CFG_GPMC_A26_OEN */
+-			0x1f4 A_DELAY_PS(0) G_DELAY_PS(0)         /* CFG_GPMC_A26_OUT */
+-			0x1fc A_DELAY_PS(435) G_DELAY_PS(0)       /* CFG_GPMC_A27_OEN */
+-			0x200 A_DELAY_PS(36) G_DELAY_PS(0)        /* CFG_GPMC_A27_OUT */
+-			0x364 A_DELAY_PS(759) G_DELAY_PS(0)       /* CFG_GPMC_CS1_OEN */
+-			0x368 A_DELAY_PS(72) G_DELAY_PS(0)        /* CFG_GPMC_CS1_OUT */
++			0x190 A_DELAY_PS(384) G_DELAY_PS(0)	/* CFG_GPMC_A19_OEN */
++			0x194 A_DELAY_PS(350) G_DELAY_PS(174)	/* CFG_GPMC_A19_OUT */
++			0x1a8 A_DELAY_PS(410) G_DELAY_PS(0)	/* CFG_GPMC_A20_OEN */
++			0x1ac A_DELAY_PS(335) G_DELAY_PS(0)	/* CFG_GPMC_A20_OUT */
++			0x1b4 A_DELAY_PS(468) G_DELAY_PS(0)	/* CFG_GPMC_A21_OEN */
++			0x1b8 A_DELAY_PS(339) G_DELAY_PS(0)	/* CFG_GPMC_A21_OUT */
++			0x1c0 A_DELAY_PS(676) G_DELAY_PS(0)	/* CFG_GPMC_A22_OEN */
++			0x1c4 A_DELAY_PS(219) G_DELAY_PS(0)	/* CFG_GPMC_A22_OUT */
++			0x1d0 A_DELAY_PS(1062) G_DELAY_PS(154)	/* CFG_GPMC_A23_OUT */
++			0x1d8 A_DELAY_PS(640) G_DELAY_PS(0)	/* CFG_GPMC_A24_OEN */
++			0x1dc A_DELAY_PS(150) G_DELAY_PS(0)	/* CFG_GPMC_A24_OUT */
++			0x1e4 A_DELAY_PS(356) G_DELAY_PS(0)	/* CFG_GPMC_A25_OEN */
++			0x1e8 A_DELAY_PS(150) G_DELAY_PS(0)	/* CFG_GPMC_A25_OUT */
++			0x1f0 A_DELAY_PS(579) G_DELAY_PS(0)	/* CFG_GPMC_A26_OEN */
++			0x1f4 A_DELAY_PS(200) G_DELAY_PS(0)	/* CFG_GPMC_A26_OUT */
++			0x1fc A_DELAY_PS(435) G_DELAY_PS(0)	/* CFG_GPMC_A27_OEN */
++			0x200 A_DELAY_PS(236) G_DELAY_PS(0)	/* CFG_GPMC_A27_OUT */
++			0x364 A_DELAY_PS(759) G_DELAY_PS(0)	/* CFG_GPMC_CS1_OEN */
++			0x368 A_DELAY_PS(372) G_DELAY_PS(0)	/* CFG_GPMC_CS1_OUT */
+ 	      >;
+ 	};
+ 
+diff --git a/arch/arm/configs/mvebu_v7_defconfig b/arch/arm/configs/mvebu_v7_defconfig
+index 55140219ab11..001460ee519e 100644
+--- a/arch/arm/configs/mvebu_v7_defconfig
++++ b/arch/arm/configs/mvebu_v7_defconfig
+@@ -131,6 +131,7 @@ CONFIG_MV_XOR=y
+ # CONFIG_IOMMU_SUPPORT is not set
+ CONFIG_MEMORY=y
+ CONFIG_PWM=y
++CONFIG_PHY_MVEBU_A38X_COMPHY=y
+ CONFIG_EXT4_FS=y
+ CONFIG_ISO9660_FS=y
+ CONFIG_JOLIET=y
+diff --git a/arch/arm/mach-imx/cpuidle-imx6sx.c b/arch/arm/mach-imx/cpuidle-imx6sx.c
+index fd0053e47a15..3708a71f30e6 100644
+--- a/arch/arm/mach-imx/cpuidle-imx6sx.c
++++ b/arch/arm/mach-imx/cpuidle-imx6sx.c
+@@ -15,6 +15,7 @@
+ 
+ #include "common.h"
+ #include "cpuidle.h"
++#include "hardware.h"
+ 
+ static int imx6sx_idle_finish(unsigned long val)
+ {
+@@ -110,7 +111,7 @@ int __init imx6sx_cpuidle_init(void)
+ 	 * except for power up sw2iso which need to be
+ 	 * larger than LDO ramp up time.
+ 	 */
+-	imx_gpc_set_arm_power_up_timing(0xf, 1);
++	imx_gpc_set_arm_power_up_timing(cpu_is_imx6sx() ? 0xf : 0x2, 1);
+ 	imx_gpc_set_arm_power_down_timing(1, 1);
+ 
+ 	return cpuidle_register(&imx6sx_cpuidle_driver, NULL);
+diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
+index b025304bde46..8fbd583b18e1 100644
+--- a/arch/arm64/Makefile
++++ b/arch/arm64/Makefile
+@@ -51,6 +51,7 @@ endif
+ 
+ KBUILD_CFLAGS	+= -mgeneral-regs-only $(lseinstr) $(brokengasinst)
+ KBUILD_CFLAGS	+= -fno-asynchronous-unwind-tables
++KBUILD_CFLAGS	+= -Wno-psabi
+ KBUILD_AFLAGS	+= $(lseinstr) $(brokengasinst)
+ 
+ KBUILD_CFLAGS	+= $(call cc-option,-mabi=lp64)
+diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
+index d78623acb649..438759e7e8a7 100644
+--- a/arch/arm64/include/uapi/asm/ptrace.h
++++ b/arch/arm64/include/uapi/asm/ptrace.h
+@@ -65,8 +65,6 @@
+ 
+ #ifndef __ASSEMBLY__
+ 
+-#include <linux/prctl.h>
+-
+ /*
+  * User structures for general purpose, floating point and debug registers.
+  */
+@@ -113,10 +111,10 @@ struct user_sve_header {
+ 
+ /*
+  * Common SVE_PT_* flags:
+- * These must be kept in sync with prctl interface in <linux/ptrace.h>
++ * These must be kept in sync with prctl interface in <linux/prctl.h>
+  */
+-#define SVE_PT_VL_INHERIT		(PR_SVE_VL_INHERIT >> 16)
+-#define SVE_PT_VL_ONEXEC		(PR_SVE_SET_VL_ONEXEC >> 16)
++#define SVE_PT_VL_INHERIT		((1 << 17) /* PR_SVE_VL_INHERIT */ >> 16)
++#define SVE_PT_VL_ONEXEC		((1 << 18) /* PR_SVE_SET_VL_ONEXEC */ >> 16)
+ 
+ 
+ /*
+diff --git a/arch/arm64/kernel/ssbd.c b/arch/arm64/kernel/ssbd.c
+index 885f13e58708..52cfc6148355 100644
+--- a/arch/arm64/kernel/ssbd.c
++++ b/arch/arm64/kernel/ssbd.c
+@@ -5,6 +5,7 @@
+ 
+ #include <linux/compat.h>
+ #include <linux/errno.h>
++#include <linux/prctl.h>
+ #include <linux/sched.h>
+ #include <linux/sched/task_stack.h>
+ #include <linux/thread_info.h>
+diff --git a/arch/mips/include/asm/ginvt.h b/arch/mips/include/asm/ginvt.h
+index 49c6dbe37338..6eb7c2b94dc7 100644
+--- a/arch/mips/include/asm/ginvt.h
++++ b/arch/mips/include/asm/ginvt.h
+@@ -19,7 +19,7 @@ _ASM_MACRO_1R1I(ginvt, rs, type,
+ # define _ASM_SET_GINV
+ #endif
+ 
+-static inline void ginvt(unsigned long addr, enum ginvt_type type)
++static __always_inline void ginvt(unsigned long addr, enum ginvt_type type)
+ {
+ 	asm volatile(
+ 		".set	push\n"
+diff --git a/arch/mips/kernel/uprobes.c b/arch/mips/kernel/uprobes.c
+index 4aaff3b3175c..6dbe4eab0a0e 100644
+--- a/arch/mips/kernel/uprobes.c
++++ b/arch/mips/kernel/uprobes.c
+@@ -112,9 +112,6 @@ int arch_uprobe_pre_xol(struct arch_uprobe *aup, struct pt_regs *regs)
+ 	 */
+ 	aup->resume_epc = regs->cp0_epc + 4;
+ 	if (insn_has_delay_slot((union mips_instruction) aup->insn[0])) {
+-		unsigned long epc;
+-
+-		epc = regs->cp0_epc;
+ 		__compute_return_epc_for_insn(regs,
+ 			(union mips_instruction) aup->insn[0]);
+ 		aup->resume_epc = regs->cp0_epc;
+diff --git a/arch/parisc/math-emu/cnv_float.h b/arch/parisc/math-emu/cnv_float.h
+index 933423fa5144..b0db61188a61 100644
+--- a/arch/parisc/math-emu/cnv_float.h
++++ b/arch/parisc/math-emu/cnv_float.h
+@@ -60,19 +60,19 @@
+     ((exponent < (SGL_P - 1)) ?				\
+      (Sall(sgl_value) << (SGL_EXP_LENGTH + 1 + exponent)) : FALSE)
+ 
+-#define Int_isinexact_to_sgl(int_value)	(int_value << 33 - SGL_EXP_LENGTH)
++#define Int_isinexact_to_sgl(int_value)	((int_value << 33 - SGL_EXP_LENGTH) != 0)
+ 
+ #define Sgl_roundnearest_from_int(int_value,sgl_value)			\
+     if (int_value & 1<<(SGL_EXP_LENGTH - 2))   /* round bit */		\
+-    	if ((int_value << 34 - SGL_EXP_LENGTH) || Slow(sgl_value))	\
++	if (((int_value << 34 - SGL_EXP_LENGTH) != 0) || Slow(sgl_value)) \
+ 		Sall(sgl_value)++
+ 
+ #define Dint_isinexact_to_sgl(dint_valueA,dint_valueB)		\
+-    ((Dintp1(dint_valueA) << 33 - SGL_EXP_LENGTH) || Dintp2(dint_valueB))
++    (((Dintp1(dint_valueA) << 33 - SGL_EXP_LENGTH) != 0) || Dintp2(dint_valueB))
+ 
+ #define Sgl_roundnearest_from_dint(dint_valueA,dint_valueB,sgl_value)	\
+     if (Dintp1(dint_valueA) & 1<<(SGL_EXP_LENGTH - 2)) 			\
+-    	if ((Dintp1(dint_valueA) << 34 - SGL_EXP_LENGTH) ||		\
++	if (((Dintp1(dint_valueA) << 34 - SGL_EXP_LENGTH) != 0) ||	\
+     	Dintp2(dint_valueB) || Slow(sgl_value)) Sall(sgl_value)++
+ 
+ #define Dint_isinexact_to_dbl(dint_value) 	\
+diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
+index 23f7ed796f38..49d65cd08ee0 100644
+--- a/arch/powerpc/include/asm/ppc-opcode.h
++++ b/arch/powerpc/include/asm/ppc-opcode.h
+@@ -342,6 +342,7 @@
+ #define PPC_INST_MADDLD			0x10000033
+ #define PPC_INST_DIVWU			0x7c000396
+ #define PPC_INST_DIVD			0x7c0003d2
++#define PPC_INST_DIVDU			0x7c000392
+ #define PPC_INST_RLWINM			0x54000000
+ #define PPC_INST_RLWINM_DOT		0x54000001
+ #define PPC_INST_RLWIMI			0x50000000
+diff --git a/arch/powerpc/mm/mmu_context_book3s64.c b/arch/powerpc/mm/mmu_context_book3s64.c
+index f720c5cc0b5e..8751ae2e2d04 100644
+--- a/arch/powerpc/mm/mmu_context_book3s64.c
++++ b/arch/powerpc/mm/mmu_context_book3s64.c
+@@ -55,14 +55,48 @@ EXPORT_SYMBOL_GPL(hash__alloc_context_id);
+ 
+ void slb_setup_new_exec(void);
+ 
++static int realloc_context_ids(mm_context_t *ctx)
++{
++	int i, id;
++
++	/*
++	 * id 0 (aka. ctx->id) is special, we always allocate a new one, even if
++	 * there wasn't one allocated previously (which happens in the exec
++	 * case where ctx is newly allocated).
++	 *
++	 * We have to be a bit careful here. We must keep the existing ids in
++	 * the array, so that we can test if they're non-zero to decide if we
++	 * need to allocate a new one. However in case of error we must free the
++	 * ids we've allocated but *not* any of the existing ones (or risk a
++	 * UAF). That's why we decrement i at the start of the error handling
++	 * loop, to skip the id that we just tested but couldn't reallocate.
++	 */
++	for (i = 0; i < ARRAY_SIZE(ctx->extended_id); i++) {
++		if (i == 0 || ctx->extended_id[i]) {
++			id = hash__alloc_context_id();
++			if (id < 0)
++				goto error;
++
++			ctx->extended_id[i] = id;
++		}
++	}
++
++	/* The caller expects us to return id */
++	return ctx->id;
++
++error:
++	for (i--; i >= 0; i--) {
++		if (ctx->extended_id[i])
++			ida_free(&mmu_context_ida, ctx->extended_id[i]);
++	}
++
++	return id;
++}
++
+ static int hash__init_new_context(struct mm_struct *mm)
+ {
+ 	int index;
+ 
+-	index = hash__alloc_context_id();
+-	if (index < 0)
+-		return index;
+-
+ 	/*
+ 	 * The old code would re-promote on fork, we don't do that when using
+ 	 * slices as it could cause problem promoting slices that have been
+@@ -80,6 +114,10 @@ static int hash__init_new_context(struct mm_struct *mm)
+ 	if (mm->context.id == 0)
+ 		slice_init_new_context_exec(mm);
+ 
++	index = realloc_context_ids(&mm->context);
++	if (index < 0)
++		return index;
++
+ 	subpage_prot_init_new_context(mm);
+ 
+ 	pkey_mm_init(mm);
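
realloc_context_ids() above is a careful allocate-or-unwind loop, and its comment spells out the subtle part: on failure, i is decremented before the cleanup pass, so only slots filled earlier in this call are walked and the slot that just failed is skipped. The idiom in isolation, with a hypothetical allocator:

    extern int  alloc_id(void);      /* returns < 0 on failure */
    extern void free_id(int id);

    static int alloc_ids(int *ids, int n)
    {
        int i, id = 0;

        for (i = 0; i < n; i++) {
            id = alloc_id();
            if (id < 0)
                goto error;
            ids[i] = id;
        }
        return 0;

    error:
        for (i--; i >= 0; i--)       /* i-- first: slot i was never filled */
            free_id(ids[i]);
        return id;
    }
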
+diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
+index dcac37745b05..1e932898d430 100644
+--- a/arch/powerpc/net/bpf_jit.h
++++ b/arch/powerpc/net/bpf_jit.h
+@@ -116,7 +116,7 @@
+ 				     ___PPC_RA(a) | IMM_L(i))
+ #define PPC_DIVWU(d, a, b)	EMIT(PPC_INST_DIVWU | ___PPC_RT(d) |	      \
+ 				     ___PPC_RA(a) | ___PPC_RB(b))
+-#define PPC_DIVD(d, a, b)	EMIT(PPC_INST_DIVD | ___PPC_RT(d) |	      \
++#define PPC_DIVDU(d, a, b)	EMIT(PPC_INST_DIVDU | ___PPC_RT(d) |	      \
+ 				     ___PPC_RA(a) | ___PPC_RB(b))
+ #define PPC_AND(d, a, b)	EMIT(PPC_INST_AND | ___PPC_RA(d) |	      \
+ 				     ___PPC_RS(a) | ___PPC_RB(b))
+diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
+index 21a1dcd4b156..e3fedeffe40f 100644
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -399,12 +399,12 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
+ 		case BPF_ALU64 | BPF_DIV | BPF_X: /* dst /= src */
+ 		case BPF_ALU64 | BPF_MOD | BPF_X: /* dst %= src */
+ 			if (BPF_OP(code) == BPF_MOD) {
+-				PPC_DIVD(b2p[TMP_REG_1], dst_reg, src_reg);
++				PPC_DIVDU(b2p[TMP_REG_1], dst_reg, src_reg);
+ 				PPC_MULD(b2p[TMP_REG_1], src_reg,
+ 						b2p[TMP_REG_1]);
+ 				PPC_SUB(dst_reg, dst_reg, b2p[TMP_REG_1]);
+ 			} else
+-				PPC_DIVD(dst_reg, dst_reg, src_reg);
++				PPC_DIVDU(dst_reg, dst_reg, src_reg);
+ 			break;
+ 		case BPF_ALU | BPF_MOD | BPF_K: /* (u32) dst %= (u32) imm */
+ 		case BPF_ALU | BPF_DIV | BPF_K: /* (u32) dst /= (u32) imm */
+@@ -432,7 +432,7 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
+ 				break;
+ 			case BPF_ALU64:
+ 				if (BPF_OP(code) == BPF_MOD) {
+-					PPC_DIVD(b2p[TMP_REG_2], dst_reg,
++					PPC_DIVDU(b2p[TMP_REG_2], dst_reg,
+ 							b2p[TMP_REG_1]);
+ 					PPC_MULD(b2p[TMP_REG_1],
+ 							b2p[TMP_REG_1],
+@@ -440,7 +440,7 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
+ 					PPC_SUB(dst_reg, dst_reg,
+ 							b2p[TMP_REG_1]);
+ 				} else
+-					PPC_DIVD(dst_reg, dst_reg,
++					PPC_DIVDU(dst_reg, dst_reg,
+ 							b2p[TMP_REG_1]);
+ 				break;
+ 			}
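
The DIVD-to-DIVDU swap above follows from eBPF semantics: BPF_DIV and BPF_MOD are unsigned operations, so the 64-bit JIT must emit the unsigned divide (divdu) rather than the signed divd. The difference is stark at the boundaries:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t a = UINT64_MAX;    /* all bits set */

        /* what BPF requires (divdu): a huge unsigned quotient */
        printf("unsigned: %llu\n", (unsigned long long)(a / 2));

        /* what signed divd computes: -1 / 2 == 0 */
        printf("signed:   %lld\n", (long long)((int64_t)a / 2));
        return 0;
    }
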
+diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
+index 88401d5125bc..523dbfbac03d 100644
+--- a/arch/riscv/mm/fault.c
++++ b/arch/riscv/mm/fault.c
+@@ -29,6 +29,7 @@
+ 
+ #include <asm/pgalloc.h>
+ #include <asm/ptrace.h>
++#include <asm/tlbflush.h>
+ 
+ /*
+  * This routine handles page faults.  It determines the address and the
+@@ -281,6 +282,18 @@ vmalloc_fault:
+ 		pte_k = pte_offset_kernel(pmd_k, addr);
+ 		if (!pte_present(*pte_k))
+ 			goto no_context;
++
++		/*
++		 * The kernel assumes that TLBs don't cache invalid
++		 * entries, but in RISC-V, SFENCE.VMA specifies an
++		 * ordering constraint, not a cache flush; it is
++		 * necessary even after writing invalid entries.
++		 * Relying on flush_tlb_fix_spurious_fault would
++		 * suffice, but the extra traps reduce
++		 * performance. So, eagerly SFENCE.VMA.
++		 */
++		local_flush_tlb_page(addr);
++
+ 		return;
+ 	}
+ }
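
The comment added above explains the RISC-V subtlety; the local_flush_tlb_page() helper it calls is, in kernels of this vintage, a thin wrapper around the fence instruction itself. Roughly as commonly defined (a sketch, not verified against this exact tree):

    /* one-address form: order/flush translations for this virtual address */
    static inline void local_flush_tlb_page(unsigned long addr)
    {
        __asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory");
    }
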
+diff --git a/arch/sparc/kernel/mdesc.c b/arch/sparc/kernel/mdesc.c
+index 9a26b442f820..8e645ddac58e 100644
+--- a/arch/sparc/kernel/mdesc.c
++++ b/arch/sparc/kernel/mdesc.c
+@@ -356,6 +356,8 @@ static int get_vdev_port_node_info(struct mdesc_handle *md, u64 node,
+ 
+ 	node_info->vdev_port.id = *idp;
+ 	node_info->vdev_port.name = kstrdup_const(name, GFP_KERNEL);
++	if (!node_info->vdev_port.name)
++		return -1;
+ 	node_info->vdev_port.parent_cfg_hdl = *parent_cfg_hdlp;
+ 
+ 	return 0;
+diff --git a/arch/sparc/kernel/perf_event.c b/arch/sparc/kernel/perf_event.c
+index 6de7c684c29f..a58ae9c42803 100644
+--- a/arch/sparc/kernel/perf_event.c
++++ b/arch/sparc/kernel/perf_event.c
+@@ -891,6 +891,10 @@ static int sparc_perf_event_set_period(struct perf_event *event,
+ 	s64 period = hwc->sample_period;
+ 	int ret = 0;
+ 
++	/* The period may have been changed by PERF_EVENT_IOC_PERIOD */
++	if (unlikely(period != hwc->last_period))
++		left = period - (hwc->last_period - left);
++
+ 	if (unlikely(left <= -period)) {
+ 		left = period;
+ 		local64_set(&hwc->period_left, left);
+diff --git a/arch/x86/entry/vdso/vclock_gettime.c b/arch/x86/entry/vdso/vclock_gettime.c
+index 98c7d12b945c..ed7ddee1ae69 100644
+--- a/arch/x86/entry/vdso/vclock_gettime.c
++++ b/arch/x86/entry/vdso/vclock_gettime.c
+@@ -128,13 +128,24 @@ notrace static inline u64 vgetcyc(int mode)
+ {
+ 	if (mode == VCLOCK_TSC)
+ 		return (u64)rdtsc_ordered();
++
++	/*
++	 * For any memory-mapped vclock type, we need to make sure that gcc
++	 * doesn't cleverly hoist a load before the mode check.  Otherwise we
++	 * might end up touching the memory-mapped page even if the vclock in
++	 * question isn't enabled, which will segfault.  Hence the barriers.
++	 */
+ #ifdef CONFIG_PARAVIRT_CLOCK
+-	else if (mode == VCLOCK_PVCLOCK)
++	if (mode == VCLOCK_PVCLOCK) {
++		barrier();
+ 		return vread_pvclock();
++	}
+ #endif
+ #ifdef CONFIG_HYPERV_TSCPAGE
+-	else if (mode == VCLOCK_HVCLOCK)
++	if (mode == VCLOCK_HVCLOCK) {
++		barrier();
+ 		return vread_hvclock();
++	}
+ #endif
+ 	return U64_MAX;
+ }
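
The barriers added above are purely compiler fences: without them gcc may hoist the load from a memory-mapped clock page above the mode check, touching a page that is unmapped whenever that vclock is disabled. In the kernel, barrier() is an empty asm with a memory clobber; the pattern in isolation, with a hypothetical mapped page:

    #include <stdint.h>

    #define barrier() __asm__ __volatile__("" ::: "memory")

    extern uint64_t *clock_page;    /* hypothetical mapped page */

    uint64_t read_clock(int mode)
    {
        if (mode == 2) {
            barrier();              /* no hoisting the load above the check */
            return *clock_page;
        }
        return UINT64_MAX;
    }
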
+diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+index 85212a32b54d..c51b56e29948 100644
+--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
++++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+@@ -2556,7 +2556,7 @@ static int rdtgroup_init_alloc(struct rdtgroup *rdtgrp)
+ 				if (closid_allocated(i) && i != closid) {
+ 					mode = rdtgroup_mode_by_closid(i);
+ 					if (mode == RDT_MODE_PSEUDO_LOCKSETUP)
+-						break;
++						continue;
+ 					/*
+ 					 * If CDP is active include peer
+ 					 * domain's usage to ensure there
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index d9c7b45d231f..85438a624930 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -5591,14 +5591,18 @@ static int alloc_mmu_pages(struct kvm_vcpu *vcpu)
+ 	struct page *page;
+ 	int i;
+ 
+-	if (tdp_enabled)
+-		return 0;
+-
+ 	/*
+-	 * When emulating 32-bit mode, cr3 is only 32 bits even on x86_64.
+-	 * Therefore we need to allocate shadow page tables in the first
+-	 * 4GB of memory, which happens to fit the DMA32 zone.
++	 * When using PAE paging, the four PDPTEs are treated as 'root' pages,
++	 * while the PDP table is a per-vCPU construct that's allocated at MMU
++	 * creation.  When emulating 32-bit mode, cr3 is only 32 bits even on
++	 * x86_64.  Therefore we need to allocate the PDP table in the first
++	 * 4GB of memory, which happens to fit the DMA32 zone.  Except for
++	 * SVM's 32-bit NPT support, TDP paging doesn't use PAE paging and can
++	 * skip allocating the PDP table.
+ 	 */
++	if (tdp_enabled && kvm_x86_ops->get_tdp_level(vcpu) > PT32E_ROOT_LEVEL)
++		return 0;
++
+ 	page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_DMA32);
+ 	if (!page)
+ 		return -ENOMEM;
+diff --git a/arch/xtensa/kernel/setup.c b/arch/xtensa/kernel/setup.c
+index 4ec6fbb696bf..a5139f1d9220 100644
+--- a/arch/xtensa/kernel/setup.c
++++ b/arch/xtensa/kernel/setup.c
+@@ -310,7 +310,8 @@ extern char _SecondaryResetVector_text_start;
+ extern char _SecondaryResetVector_text_end;
+ #endif
+ 
+-static inline int mem_reserve(unsigned long start, unsigned long end)
++static inline int __init_memblock mem_reserve(unsigned long start,
++					      unsigned long end)
+ {
+ 	return memblock_reserve(start, end - start);
+ }
+diff --git a/crypto/hmac.c b/crypto/hmac.c
+index 4b8c8ee8f15c..c623778b36ba 100644
+--- a/crypto/hmac.c
++++ b/crypto/hmac.c
+@@ -168,8 +168,10 @@ static int hmac_init_tfm(struct crypto_tfm *tfm)
+ 
+ 	parent->descsize = sizeof(struct shash_desc) +
+ 			   crypto_shash_descsize(hash);
+-	if (WARN_ON(parent->descsize > HASH_MAX_DESCSIZE))
++	if (WARN_ON(parent->descsize > HASH_MAX_DESCSIZE)) {
++		crypto_free_shash(hash);
+ 		return -EINVAL;
++	}
+ 
+ 	ctx->hash = hash;
+ 	return 0;
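
The hmac fix above closes a leak: the shash was allocated a few lines earlier, so when the descsize sanity check fails, the error path must free it before returning. The acquire/check/release-on-error flow in general:

    extern void *acquire(void);
    extern void  release(void *h);
    extern int   sane(void *h);

    int init(void **out)
    {
        void *h = acquire();

        if (!h)
            return -1;
        if (!sane(h)) {
            release(h);     /* the error path owns h too */
            return -1;
        }
        *out = h;
        return 0;
    }
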
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 4b9c7ca492e6..cc19d91c1688 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -1950,8 +1950,18 @@ static void binder_free_txn_fixups(struct binder_transaction *t)
+ 
+ static void binder_free_transaction(struct binder_transaction *t)
+ {
+-	if (t->buffer)
+-		t->buffer->transaction = NULL;
++	struct binder_proc *target_proc = t->to_proc;
++
++	if (target_proc) {
++		binder_inner_proc_lock(target_proc);
++		if (t->buffer)
++			t->buffer->transaction = NULL;
++		binder_inner_proc_unlock(target_proc);
++	}
++	/*
++	 * If the transaction has no target_proc, then
++	 * t->buffer->transaction has already been cleared.
++	 */
+ 	binder_free_txn_fixups(t);
+ 	kfree(t);
+ 	binder_stats_deleted(BINDER_STAT_TRANSACTION);
+@@ -3550,10 +3560,12 @@ err_invalid_target_handle:
+ static void
+ binder_free_buf(struct binder_proc *proc, struct binder_buffer *buffer)
+ {
++	binder_inner_proc_lock(proc);
+ 	if (buffer->transaction) {
+ 		buffer->transaction->buffer = NULL;
+ 		buffer->transaction = NULL;
+ 	}
++	binder_inner_proc_unlock(proc);
+ 	if (buffer->async_transaction && buffer->target_node) {
+ 		struct binder_node *buf_node;
+ 		struct binder_work *w;
+diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
+index cd57747286f2..9635897458a0 100644
+--- a/drivers/dma-buf/udmabuf.c
++++ b/drivers/dma-buf/udmabuf.c
+@@ -77,6 +77,7 @@ static void unmap_udmabuf(struct dma_buf_attachment *at,
+ 			  struct sg_table *sg,
+ 			  enum dma_data_direction direction)
+ {
++	dma_unmap_sg(at->dev, sg->sgl, sg->nents, direction);
+ 	sg_free_table(sg);
+ 	kfree(sg);
+ }
+diff --git a/drivers/dma/dma-jz4780.c b/drivers/dma/dma-jz4780.c
+index 9ce0a386225b..f49534019d37 100644
+--- a/drivers/dma/dma-jz4780.c
++++ b/drivers/dma/dma-jz4780.c
+@@ -666,10 +666,11 @@ static enum dma_status jz4780_dma_tx_status(struct dma_chan *chan,
+ 	return status;
+ }
+ 
+-static void jz4780_dma_chan_irq(struct jz4780_dma_dev *jzdma,
+-	struct jz4780_dma_chan *jzchan)
++static bool jz4780_dma_chan_irq(struct jz4780_dma_dev *jzdma,
++				struct jz4780_dma_chan *jzchan)
+ {
+ 	uint32_t dcs;
++	bool ack = true;
+ 
+ 	spin_lock(&jzchan->vchan.lock);
+ 
+@@ -692,12 +693,20 @@ static void jz4780_dma_chan_irq(struct jz4780_dma_dev *jzdma,
+ 		if ((dcs & (JZ_DMA_DCS_AR | JZ_DMA_DCS_HLT)) == 0) {
+ 			if (jzchan->desc->type == DMA_CYCLIC) {
+ 				vchan_cyclic_callback(&jzchan->desc->vdesc);
+-			} else {
++
++				jz4780_dma_begin(jzchan);
++			} else if (dcs & JZ_DMA_DCS_TT) {
+ 				vchan_cookie_complete(&jzchan->desc->vdesc);
+ 				jzchan->desc = NULL;
+-			}
+ 
+-			jz4780_dma_begin(jzchan);
++				jz4780_dma_begin(jzchan);
++			} else {
++				/* False positive - continue the transfer */
++				ack = false;
++				jz4780_dma_chn_writel(jzdma, jzchan->id,
++						      JZ_DMA_REG_DCS,
++						      JZ_DMA_DCS_CTE);
++			}
+ 		}
+ 	} else {
+ 		dev_err(&jzchan->vchan.chan.dev->device,
+@@ -705,21 +714,22 @@ static void jz4780_dma_chan_irq(struct jz4780_dma_dev *jzdma,
+ 	}
+ 
+ 	spin_unlock(&jzchan->vchan.lock);
++
++	return ack;
+ }
+ 
+ static irqreturn_t jz4780_dma_irq_handler(int irq, void *data)
+ {
+ 	struct jz4780_dma_dev *jzdma = data;
++	unsigned int nb_channels = jzdma->soc_data->nb_channels;
+ 	uint32_t pending, dmac;
+ 	int i;
+ 
+ 	pending = jz4780_dma_ctrl_readl(jzdma, JZ_DMA_REG_DIRQP);
+ 
+-	for (i = 0; i < jzdma->soc_data->nb_channels; i++) {
+-		if (!(pending & (1<<i)))
+-			continue;
+-
+-		jz4780_dma_chan_irq(jzdma, &jzdma->chan[i]);
++	for_each_set_bit(i, (unsigned long *)&pending, nb_channels) {
++		if (jz4780_dma_chan_irq(jzdma, &jzdma->chan[i]))
++			pending &= ~BIT(i);
+ 	}
+ 
+ 	/* Clear halt and address error status of all channels. */
+@@ -728,7 +738,7 @@ static irqreturn_t jz4780_dma_irq_handler(int irq, void *data)
+ 	jz4780_dma_ctrl_writel(jzdma, JZ_DMA_REG_DMAC, dmac);
+ 
+ 	/* Clear interrupt pending status. */
+-	jz4780_dma_ctrl_writel(jzdma, JZ_DMA_REG_DIRQP, 0);
++	jz4780_dma_ctrl_writel(jzdma, JZ_DMA_REG_DIRQP, pending);
+ 
+ 	return IRQ_HANDLED;
+ }
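
The interrupt-handler rework above does two things: it walks only the set bits of the pending mask, and it clears a channel's bit only when that channel reports the interrupt as genuinely handled, so a false positive stays pending rather than being wiped by writing 0 to the register. The set-bit walk, sketched portably:

    #include <stdbool.h>
    #include <stdint.h>

    extern bool handle_channel(unsigned int i);    /* true if really handled */

    uint32_t service(uint32_t pending)
    {
        for (uint32_t p = pending; p; p &= p - 1) {
            unsigned int i = (unsigned int)__builtin_ctz(p);  /* lowest set bit */

            if (handle_channel(i))
                pending &= ~(UINT32_C(1) << i);    /* ack only this channel */
        }
        return pending;    /* leftover bits are written back, still pending */
    }
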
+diff --git a/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c b/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
+index b2ac1d2c5b86..a1ce307c502f 100644
+--- a/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
++++ b/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
+@@ -512,7 +512,8 @@ dma_chan_prep_dma_memcpy(struct dma_chan *dchan, dma_addr_t dst_adr,
+ 	return vchan_tx_prep(&chan->vc, &first->vd, flags);
+ 
+ err_desc_get:
+-	axi_desc_put(first);
++	if (first)
++		axi_desc_put(first);
+ 	return NULL;
+ }
+ 
+diff --git a/drivers/dma/mediatek/mtk-cqdma.c b/drivers/dma/mediatek/mtk-cqdma.c
+index 814853842e29..723b11c190b3 100644
+--- a/drivers/dma/mediatek/mtk-cqdma.c
++++ b/drivers/dma/mediatek/mtk-cqdma.c
+@@ -225,7 +225,7 @@ static int mtk_cqdma_hard_reset(struct mtk_cqdma_pchan *pc)
+ 	mtk_dma_set(pc, MTK_CQDMA_RESET, MTK_CQDMA_HARD_RST_BIT);
+ 	mtk_dma_clr(pc, MTK_CQDMA_RESET, MTK_CQDMA_HARD_RST_BIT);
+ 
+-	return mtk_cqdma_poll_engine_done(pc, false);
++	return mtk_cqdma_poll_engine_done(pc, true);
+ }
+ 
+ static void mtk_cqdma_start(struct mtk_cqdma_pchan *pc,
+@@ -671,7 +671,7 @@ static void mtk_cqdma_free_chan_resources(struct dma_chan *c)
+ 		mtk_dma_set(cvc->pc, MTK_CQDMA_FLUSH, MTK_CQDMA_FLUSH_BIT);
+ 
+ 		/* wait for the completion of flush operation */
+-		if (mtk_cqdma_poll_engine_done(cvc->pc, false) < 0)
++		if (mtk_cqdma_poll_engine_done(cvc->pc, true) < 0)
+ 			dev_err(cqdma2dev(to_cqdma_dev(c)), "cqdma flush timeout\n");
+ 
+ 		/* clear the flush bit and interrupt flag */
+diff --git a/drivers/dma/sprd-dma.c b/drivers/dma/sprd-dma.c
+index 48431e2da987..01abed5cde49 100644
+--- a/drivers/dma/sprd-dma.c
++++ b/drivers/dma/sprd-dma.c
+@@ -510,7 +510,9 @@ static void sprd_dma_start(struct sprd_dma_chn *schan)
+ 	sprd_dma_set_uid(schan);
+ 	sprd_dma_enable_chn(schan);
+ 
+-	if (schan->dev_id == SPRD_DMA_SOFTWARE_UID)
++	if (schan->dev_id == SPRD_DMA_SOFTWARE_UID &&
++	    schan->chn_mode != SPRD_DMA_DST_CHN0 &&
++	    schan->chn_mode != SPRD_DMA_DST_CHN1)
+ 		sprd_dma_soft_request(schan);
+ }
+ 
+@@ -552,12 +554,17 @@ static irqreturn_t dma_irq_handle(int irq, void *dev_id)
+ 		schan = &sdev->channels[i];
+ 
+ 		spin_lock(&schan->vc.lock);
++
++		sdesc = schan->cur_desc;
++		if (!sdesc) {
++			spin_unlock(&schan->vc.lock);
++			return IRQ_HANDLED;
++		}
++
+ 		int_type = sprd_dma_get_int_type(schan);
+ 		req_type = sprd_dma_get_req_type(schan);
+ 		sprd_dma_clear_int(schan);
+ 
+-		sdesc = schan->cur_desc;
+-
+ 		/* cyclic mode schedule callback */
+ 		cyclic = schan->linklist.phy_addr ? true : false;
+ 		if (cyclic == true) {
+@@ -625,7 +632,7 @@ static enum dma_status sprd_dma_tx_status(struct dma_chan *chan,
+ 		else
+ 			pos = 0;
+ 	} else if (schan->cur_desc && schan->cur_desc->vd.tx.cookie == cookie) {
+-		struct sprd_dma_desc *sdesc = to_sprd_dma_desc(vd);
++		struct sprd_dma_desc *sdesc = schan->cur_desc;
+ 
+ 		if (sdesc->dir == DMA_DEV_TO_MEM)
+ 			pos = sprd_dma_get_dst_addr(schan);
+@@ -771,7 +778,7 @@ static int sprd_dma_fill_desc(struct dma_chan *chan,
+ 	temp |= slave_cfg->src_maxburst & SPRD_DMA_FRG_LEN_MASK;
+ 	hw->frg_len = temp;
+ 
+-	hw->blk_len = len & SPRD_DMA_BLK_LEN_MASK;
++	hw->blk_len = slave_cfg->src_maxburst & SPRD_DMA_BLK_LEN_MASK;
+ 	hw->trsc_len = len & SPRD_DMA_TRSC_LEN_MASK;
+ 
+ 	temp = (dst_step & SPRD_DMA_TRSF_STEP_MASK) << SPRD_DMA_DEST_TRSF_STEP_OFFSET;
+@@ -904,6 +911,12 @@ sprd_dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+ 		schan->linklist.virt_addr = 0;
+ 	}
+ 
++	/* Set channel mode and trigger mode for 2-stage transfer */
++	schan->chn_mode =
++		(flags >> SPRD_DMA_CHN_MODE_SHIFT) & SPRD_DMA_CHN_MODE_MASK;
++	schan->trg_mode =
++		(flags >> SPRD_DMA_TRG_MODE_SHIFT) & SPRD_DMA_TRG_MODE_MASK;
++
+ 	sdesc = kzalloc(sizeof(*sdesc), GFP_NOWAIT);
+ 	if (!sdesc)
+ 		return NULL;
+@@ -937,12 +950,6 @@ sprd_dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+ 		}
+ 	}
+ 
+-	/* Set channel mode and trigger mode for 2-stage transfer */
+-	schan->chn_mode =
+-		(flags >> SPRD_DMA_CHN_MODE_SHIFT) & SPRD_DMA_CHN_MODE_MASK;
+-	schan->trg_mode =
+-		(flags >> SPRD_DMA_TRG_MODE_SHIFT) & SPRD_DMA_TRG_MODE_MASK;
+-
+ 	ret = sprd_dma_fill_desc(chan, &sdesc->chn_hw, 0, 0, src, dst, len,
+ 				 dir, flags, slave_cfg);
+ 	if (ret) {
+diff --git a/drivers/fpga/dfl-afu-dma-region.c b/drivers/fpga/dfl-afu-dma-region.c
+index e18a786fc943..cd68002ac097 100644
+--- a/drivers/fpga/dfl-afu-dma-region.c
++++ b/drivers/fpga/dfl-afu-dma-region.c
+@@ -399,7 +399,7 @@ int afu_dma_map_region(struct dfl_feature_platform_data *pdata,
+ 				    region->pages[0], 0,
+ 				    region->length,
+ 				    DMA_BIDIRECTIONAL);
+-	if (dma_mapping_error(&pdata->dev->dev, region->iova)) {
++	if (dma_mapping_error(dfl_fpga_pdata_to_parent(pdata), region->iova)) {
+ 		dev_err(&pdata->dev->dev, "failed to map for dma\n");
+ 		ret = -EFAULT;
+ 		goto unpin_pages;
+diff --git a/drivers/fpga/dfl.c b/drivers/fpga/dfl.c
+index 2c09e502e721..c25217cde5ca 100644
+--- a/drivers/fpga/dfl.c
++++ b/drivers/fpga/dfl.c
+@@ -40,6 +40,13 @@ enum dfl_fpga_devt_type {
+ 	DFL_FPGA_DEVT_MAX,
+ };
+ 
++static struct lock_class_key dfl_pdata_keys[DFL_ID_MAX];
++
++static const char *dfl_pdata_key_strings[DFL_ID_MAX] = {
++	"dfl-fme-pdata",
++	"dfl-port-pdata",
++};
++
+ /**
+  * dfl_dev_info - dfl feature device information.
+  * @name: name string of the feature platform device.
+@@ -443,11 +450,16 @@ static int build_info_commit_dev(struct build_feature_devs_info *binfo)
+ 	struct platform_device *fdev = binfo->feature_dev;
+ 	struct dfl_feature_platform_data *pdata;
+ 	struct dfl_feature_info *finfo, *p;
++	enum dfl_id_type type;
+ 	int ret, index = 0;
+ 
+ 	if (!fdev)
+ 		return 0;
+ 
++	type = feature_dev_id_type(fdev);
++	if (WARN_ON_ONCE(type >= DFL_ID_MAX))
++		return -EINVAL;
++
+ 	/*
+ 	 * we do not need to care for the memory which is associated with
+ 	 * the platform device. After calling platform_device_unregister(),
+@@ -463,6 +475,8 @@ static int build_info_commit_dev(struct build_feature_devs_info *binfo)
+ 	pdata->num = binfo->feature_num;
+ 	pdata->dfl_cdev = binfo->cdev;
+ 	mutex_init(&pdata->lock);
++	lockdep_set_class_and_name(&pdata->lock, &dfl_pdata_keys[type],
++				   dfl_pdata_key_strings[type]);
+ 
+ 	/*
+ 	 * the count should be initialized to 0 to make sure
+@@ -497,7 +511,7 @@ static int build_info_commit_dev(struct build_feature_devs_info *binfo)
+ 
+ 	ret = platform_device_add(binfo->feature_dev);
+ 	if (!ret) {
+-		if (feature_dev_id_type(binfo->feature_dev) == PORT_ID)
++		if (type == PORT_ID)
+ 			dfl_fpga_cdev_add_port_dev(binfo->cdev,
+ 						   binfo->feature_dev);
+ 		else
+diff --git a/drivers/fpga/stratix10-soc.c b/drivers/fpga/stratix10-soc.c
+index 13851b3d1c56..215d33789c74 100644
+--- a/drivers/fpga/stratix10-soc.c
++++ b/drivers/fpga/stratix10-soc.c
+@@ -507,12 +507,16 @@ static int __init s10_init(void)
+ 	if (!fw_np)
+ 		return -ENODEV;
+ 
++	of_node_get(fw_np);
+ 	np = of_find_matching_node(fw_np, s10_of_match);
+-	if (!np)
++	if (!np) {
++		of_node_put(fw_np);
+ 		return -ENODEV;
++	}
+ 
+ 	of_node_put(np);
+ 	ret = of_platform_populate(fw_np, s10_of_match, NULL, NULL);
++	of_node_put(fw_np);
+ 	if (ret)
+ 		return ret;
+ 
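
The stratix10 fix above is device-tree refcount hygiene: of_find_matching_node() drops a reference on the node it starts from, so an extra of_node_get(fw_np) up front keeps fw_np alive for the later of_platform_populate() call, and every exit path then puts that reference exactly once. An annotated restatement of the fixed flow, assuming that documented drop-the-from-reference behavior:

    /* ownership sketch: of_find_matching_node(from, ...) puts 'from' */
    of_node_get(fw_np);                         /* extra ref: fw_np used later */
    np = of_find_matching_node(fw_np, match);   /* consumes one ref on fw_np */
    if (!np) {
        of_node_put(fw_np);                     /* balance our get */
        return -ENODEV;
    }
    of_node_put(np);                            /* only existence mattered */
    ret = of_platform_populate(fw_np, match, NULL, NULL);
    of_node_put(fw_np);                         /* done with fw_np */
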
+diff --git a/drivers/gpu/drm/arm/hdlcd_crtc.c b/drivers/gpu/drm/arm/hdlcd_crtc.c
+index 0b2b62f8fa3c..a3efa28436ea 100644
+--- a/drivers/gpu/drm/arm/hdlcd_crtc.c
++++ b/drivers/gpu/drm/arm/hdlcd_crtc.c
+@@ -186,20 +186,20 @@ static void hdlcd_crtc_atomic_disable(struct drm_crtc *crtc,
+ 	clk_disable_unprepare(hdlcd->clk);
+ }
+ 
+-static int hdlcd_crtc_atomic_check(struct drm_crtc *crtc,
+-				   struct drm_crtc_state *state)
++static enum drm_mode_status hdlcd_crtc_mode_valid(struct drm_crtc *crtc,
++		const struct drm_display_mode *mode)
+ {
+ 	struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc);
+-	struct drm_display_mode *mode = &state->adjusted_mode;
+ 	long rate, clk_rate = mode->clock * 1000;
+ 
+ 	rate = clk_round_rate(hdlcd->clk, clk_rate);
+-	if (rate != clk_rate) {
++	/* 0.1% seems a close enough tolerance for the TDA19988 on Juno */
++	if (abs(rate - clk_rate) * 1000 > clk_rate) {
+ 		/* clock required by mode not supported by hardware */
+-		return -EINVAL;
++		return MODE_NOCLOCK;
+ 	}
+ 
+-	return 0;
++	return MODE_OK;
+ }
+ 
+ static void hdlcd_crtc_atomic_begin(struct drm_crtc *crtc,
+@@ -220,7 +220,7 @@ static void hdlcd_crtc_atomic_begin(struct drm_crtc *crtc,
+ }
+ 
+ static const struct drm_crtc_helper_funcs hdlcd_crtc_helper_funcs = {
+-	.atomic_check	= hdlcd_crtc_atomic_check,
++	.mode_valid	= hdlcd_crtc_mode_valid,
+ 	.atomic_begin	= hdlcd_crtc_atomic_begin,
+ 	.atomic_enable	= hdlcd_crtc_atomic_enable,
+ 	.atomic_disable	= hdlcd_crtc_atomic_disable,
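
Besides moving the check from atomic_check to mode_valid, the hdlcd change above loosens the clock comparison to a 0.1% tolerance, written without a division: |rate - target| * 1000 > target is just |rate - target| / target > 1/1000 rearranged. As a standalone predicate:

    #include <stdbool.h>
    #include <stdlib.h>

    /* true when rate misses target by more than 0.1% */
    static bool out_of_tolerance(long rate, long target)
    {
        return labs(rate - target) * 1000 > target;
    }
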
+diff --git a/drivers/gpu/drm/arm/malidp_drv.c b/drivers/gpu/drm/arm/malidp_drv.c
+index ab50ad06e271..64da56f4b0cf 100644
+--- a/drivers/gpu/drm/arm/malidp_drv.c
++++ b/drivers/gpu/drm/arm/malidp_drv.c
+@@ -192,6 +192,7 @@ static void malidp_atomic_commit_hw_done(struct drm_atomic_state *state)
+ {
+ 	struct drm_device *drm = state->dev;
+ 	struct malidp_drm *malidp = drm->dev_private;
++	int loop = 5;
+ 
+ 	malidp->event = malidp->crtc.state->event;
+ 	malidp->crtc.state->event = NULL;
+@@ -206,8 +207,18 @@ static void malidp_atomic_commit_hw_done(struct drm_atomic_state *state)
+ 			drm_crtc_vblank_get(&malidp->crtc);
+ 
+ 		/* only set config_valid if the CRTC is enabled */
+-		if (malidp_set_and_wait_config_valid(drm) < 0)
++		if (malidp_set_and_wait_config_valid(drm) < 0) {
++			/*
++			 * make a loop around the second CVAL setting and
++			 * try 5 times before giving up.
++			 */
++			while (loop--) {
++				if (!malidp_set_and_wait_config_valid(drm))
++					break;
++			}
+ 			DRM_DEBUG_DRIVER("timed out waiting for updated configuration\n");
++		}
++
+ 	} else if (malidp->event) {
+ 		/* CRTC inactive means vblank IRQ is disabled, send event directly */
+ 		spin_lock_irq(&drm->event_lock);
+diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
+index cd8a22d6370e..be4024f0e3a8 100644
+--- a/drivers/gpu/drm/i915/intel_display.c
++++ b/drivers/gpu/drm/i915/intel_display.c
+@@ -11820,9 +11820,6 @@ intel_compare_link_m_n(const struct intel_link_m_n *m_n,
+ 			      m2_n2->gmch_m, m2_n2->gmch_n, !adjust) &&
+ 	    intel_compare_m_n(m_n->link_m, m_n->link_n,
+ 			      m2_n2->link_m, m2_n2->link_n, !adjust)) {
+-		if (adjust)
+-			*m2_n2 = *m_n;
+-
+ 		return true;
+ 	}
+ 
+@@ -12855,6 +12852,33 @@ static int calc_watermark_data(struct intel_atomic_state *state)
+ 	return 0;
+ }
+ 
++static void intel_crtc_check_fastset(struct intel_crtc_state *old_crtc_state,
++				     struct intel_crtc_state *new_crtc_state)
++{
++	struct drm_i915_private *dev_priv =
++		to_i915(new_crtc_state->base.crtc->dev);
++
++	if (!intel_pipe_config_compare(dev_priv, old_crtc_state,
++				       new_crtc_state, true))
++		return;
++
++	new_crtc_state->base.mode_changed = false;
++	new_crtc_state->update_pipe = true;
++
++	/*
++	 * If we're not doing the full modeset we want to
++	 * keep the current M/N values as they may be
++	 * sufficiently different to the computed values
++	 * to cause problems.
++	 *
++	 * FIXME: should really copy more fuzzy state here
++	 */
++	new_crtc_state->fdi_m_n = old_crtc_state->fdi_m_n;
++	new_crtc_state->dp_m_n = old_crtc_state->dp_m_n;
++	new_crtc_state->dp_m2_n2 = old_crtc_state->dp_m2_n2;
++	new_crtc_state->has_drrs = old_crtc_state->has_drrs;
++}
++
+ /**
+  * intel_atomic_check - validate state object
+  * @dev: drm device
+@@ -12903,12 +12927,8 @@ static int intel_atomic_check(struct drm_device *dev,
+ 			return ret;
+ 		}
+ 
+-		if (intel_pipe_config_compare(dev_priv,
+-					to_intel_crtc_state(old_crtc_state),
+-					pipe_config, true)) {
+-			crtc_state->mode_changed = false;
+-			pipe_config->update_pipe = true;
+-		}
++		intel_crtc_check_fastset(to_intel_crtc_state(old_crtc_state),
++					 pipe_config);
+ 
+ 		if (needs_modeset(crtc_state))
+ 			any_ms = true;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
+index 8b9270f31409..e4e09d47c5c0 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
+@@ -136,6 +136,114 @@ static int vmw_close_channel(struct rpc_channel *channel)
+ 	return 0;
+ }
+ 
++/**
++ * vmw_port_hb_out - Send the message payload either through the
++ * high-bandwidth port if available, or through the backdoor otherwise.
++ * @channel: The rpc channel.
++ * @msg: NULL-terminated message.
++ * @hb: Whether the high-bandwidth port is available.
++ *
++ * Return: The port status.
++ */
++static unsigned long vmw_port_hb_out(struct rpc_channel *channel,
++				     const char *msg, bool hb)
++{
++	unsigned long si, di, eax, ebx, ecx, edx;
++	unsigned long msg_len = strlen(msg);
++
++	if (hb) {
++		unsigned long bp = channel->cookie_high;
++
++		si = (uintptr_t) msg;
++		di = channel->cookie_low;
++
++		VMW_PORT_HB_OUT(
++			(MESSAGE_STATUS_SUCCESS << 16) | VMW_PORT_CMD_HB_MSG,
++			msg_len, si, di,
++			VMW_HYPERVISOR_HB_PORT | (channel->channel_id << 16),
++			VMW_HYPERVISOR_MAGIC, bp,
++			eax, ebx, ecx, edx, si, di);
++
++		return ebx;
++	}
++
++	/* HB port not available. Send the message 4 bytes at a time. */
++	ecx = MESSAGE_STATUS_SUCCESS << 16;
++	while (msg_len && (HIGH_WORD(ecx) & MESSAGE_STATUS_SUCCESS)) {
++		unsigned int bytes = min_t(size_t, msg_len, 4);
++		unsigned long word = 0;
++
++		memcpy(&word, msg, bytes);
++		msg_len -= bytes;
++		msg += bytes;
++		si = channel->cookie_high;
++		di = channel->cookie_low;
++
++		VMW_PORT(VMW_PORT_CMD_MSG | (MSG_TYPE_SENDPAYLOAD << 16),
++			 word, si, di,
++			 VMW_HYPERVISOR_PORT | (channel->channel_id << 16),
++			 VMW_HYPERVISOR_MAGIC,
++			 eax, ebx, ecx, edx, si, di);
++	}
++
++	return ecx;
++}
++
++/**
++ * vmw_port_hb_in - Receive the message payload either through the
++ * high-bandwidth port if available, or through the backdoor otherwise.
++ * @channel: The rpc channel.
++ * @reply: Pointer to buffer holding reply.
++ * @reply_len: Length of the reply.
++ * @hb: Whether the high-bandwidth port is available.
++ *
++ * Return: The port status.
++ */
++static unsigned long vmw_port_hb_in(struct rpc_channel *channel, char *reply,
++				    unsigned long reply_len, bool hb)
++{
++	unsigned long si, di, eax, ebx, ecx, edx;
++
++	if (hb) {
++		unsigned long bp = channel->cookie_low;
++
++		si = channel->cookie_high;
++		di = (uintptr_t) reply;
++
++		VMW_PORT_HB_IN(
++			(MESSAGE_STATUS_SUCCESS << 16) | VMW_PORT_CMD_HB_MSG,
++			reply_len, si, di,
++			VMW_HYPERVISOR_HB_PORT | (channel->channel_id << 16),
++			VMW_HYPERVISOR_MAGIC, bp,
++			eax, ebx, ecx, edx, si, di);
++
++		return ebx;
++	}
++
++	/* HB port not available. Retrieve the message 4 bytes at a time. */
++	ecx = MESSAGE_STATUS_SUCCESS << 16;
++	while (reply_len) {
++		unsigned int bytes = min_t(unsigned long, reply_len, 4);
++
++		si = channel->cookie_high;
++		di = channel->cookie_low;
++
++		VMW_PORT(VMW_PORT_CMD_MSG | (MSG_TYPE_RECVPAYLOAD << 16),
++			 MESSAGE_STATUS_SUCCESS, si, di,
++			 VMW_HYPERVISOR_PORT | (channel->channel_id << 16),
++			 VMW_HYPERVISOR_MAGIC,
++			 eax, ebx, ecx, edx, si, di);
++
++		if ((HIGH_WORD(ecx) & MESSAGE_STATUS_SUCCESS) == 0)
++			break;
++
++		memcpy(reply, &ebx, bytes);
++		reply_len -= bytes;
++		reply += bytes;
++	}
++
++	return ecx;
++}
+ 
+ 
+ /**
+@@ -148,11 +256,10 @@ static int vmw_close_channel(struct rpc_channel *channel)
+  */
+ static int vmw_send_msg(struct rpc_channel *channel, const char *msg)
+ {
+-	unsigned long eax, ebx, ecx, edx, si, di, bp;
++	unsigned long eax, ebx, ecx, edx, si, di;
+ 	size_t msg_len = strlen(msg);
+ 	int retries = 0;
+ 
+-
+ 	while (retries < RETRIES) {
+ 		retries++;
+ 
+@@ -166,23 +273,14 @@ static int vmw_send_msg(struct rpc_channel *channel, const char *msg)
+ 			VMW_HYPERVISOR_MAGIC,
+ 			eax, ebx, ecx, edx, si, di);
+ 
+-		if ((HIGH_WORD(ecx) & MESSAGE_STATUS_SUCCESS) == 0 ||
+-		    (HIGH_WORD(ecx) & MESSAGE_STATUS_HB) == 0) {
+-			/* Expected success + high-bandwidth. Give up. */
++		if ((HIGH_WORD(ecx) & MESSAGE_STATUS_SUCCESS) == 0) {
++			/* Expected success. Give up. */
+ 			return -EINVAL;
+ 		}
+ 
+ 		/* Send msg */
+-		si  = (uintptr_t) msg;
+-		di  = channel->cookie_low;
+-		bp  = channel->cookie_high;
+-
+-		VMW_PORT_HB_OUT(
+-			(MESSAGE_STATUS_SUCCESS << 16) | VMW_PORT_CMD_HB_MSG,
+-			msg_len, si, di,
+-			VMW_HYPERVISOR_HB_PORT | (channel->channel_id << 16),
+-			VMW_HYPERVISOR_MAGIC, bp,
+-			eax, ebx, ecx, edx, si, di);
++		ebx = vmw_port_hb_out(channel, msg,
++				      !!(HIGH_WORD(ecx) & MESSAGE_STATUS_HB));
+ 
+ 		if ((HIGH_WORD(ebx) & MESSAGE_STATUS_SUCCESS) != 0) {
+ 			return 0;
+@@ -211,7 +309,7 @@ STACK_FRAME_NON_STANDARD(vmw_send_msg);
+ static int vmw_recv_msg(struct rpc_channel *channel, void **msg,
+ 			size_t *msg_len)
+ {
+-	unsigned long eax, ebx, ecx, edx, si, di, bp;
++	unsigned long eax, ebx, ecx, edx, si, di;
+ 	char *reply;
+ 	size_t reply_len;
+ 	int retries = 0;
+@@ -233,8 +331,7 @@ static int vmw_recv_msg(struct rpc_channel *channel, void **msg,
+ 			VMW_HYPERVISOR_MAGIC,
+ 			eax, ebx, ecx, edx, si, di);
+ 
+-		if ((HIGH_WORD(ecx) & MESSAGE_STATUS_SUCCESS) == 0 ||
+-		    (HIGH_WORD(ecx) & MESSAGE_STATUS_HB) == 0) {
++		if ((HIGH_WORD(ecx) & MESSAGE_STATUS_SUCCESS) == 0) {
+ 			DRM_ERROR("Failed to get reply size for host message.\n");
+ 			return -EINVAL;
+ 		}
+@@ -252,17 +349,8 @@ static int vmw_recv_msg(struct rpc_channel *channel, void **msg,
+ 
+ 
+ 		/* Receive buffer */
+-		si  = channel->cookie_high;
+-		di  = (uintptr_t) reply;
+-		bp  = channel->cookie_low;
+-
+-		VMW_PORT_HB_IN(
+-			(MESSAGE_STATUS_SUCCESS << 16) | VMW_PORT_CMD_HB_MSG,
+-			reply_len, si, di,
+-			VMW_HYPERVISOR_HB_PORT | (channel->channel_id << 16),
+-			VMW_HYPERVISOR_MAGIC, bp,
+-			eax, ebx, ecx, edx, si, di);
+-
++		ebx = vmw_port_hb_in(channel, reply, reply_len,
++				     !!(HIGH_WORD(ecx) & MESSAGE_STATUS_HB));
+ 		if ((HIGH_WORD(ebx) & MESSAGE_STATUS_SUCCESS) == 0) {
+ 			kfree(reply);
+ 
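
The vmwgfx refactor above moves the send and receive legs into helpers that use the high-bandwidth port when the host advertises it and otherwise fall back to pushing the payload through a register-sized port four bytes at a time. The chunking loop on its own, with a hypothetical word-sized transport:

    #include <string.h>

    extern void send_word(unsigned long w);    /* hypothetical 4-byte port */

    void send_payload(const char *msg, size_t len)
    {
        while (len) {
            size_t n = len < 4 ? len : 4;
            unsigned long word = 0;

            memcpy(&word, msg, n);    /* short tail is zero-padded */
            send_word(word);
            msg += n;
            len -= n;
        }
    }
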
+diff --git a/drivers/hwmon/hwmon.c b/drivers/hwmon/hwmon.c
+index c22dc1e07911..c38883f748a1 100644
+--- a/drivers/hwmon/hwmon.c
++++ b/drivers/hwmon/hwmon.c
+@@ -633,7 +633,7 @@ __hwmon_device_register(struct device *dev, const char *name, void *drvdata,
+ 	if (err)
+ 		goto free_hwmon;
+ 
+-	if (dev && chip && chip->ops->read &&
++	if (dev && dev->of_node && chip && chip->ops->read &&
+ 	    chip->info[0]->type == hwmon_chip &&
+ 	    (chip->info[0]->config[0] & HWMON_C_REGISTER_TZ)) {
+ 		const struct hwmon_channel_info **info = chip->info;
+diff --git a/drivers/hwmon/pmbus/pmbus_core.c b/drivers/hwmon/pmbus/pmbus_core.c
+index 2e2b5851139c..cd24b375df1e 100644
+--- a/drivers/hwmon/pmbus/pmbus_core.c
++++ b/drivers/hwmon/pmbus/pmbus_core.c
+@@ -1230,7 +1230,8 @@ static int pmbus_add_sensor_attrs_one(struct i2c_client *client,
+ 				      const struct pmbus_driver_info *info,
+ 				      const char *name,
+ 				      int index, int page,
+-				      const struct pmbus_sensor_attr *attr)
++				      const struct pmbus_sensor_attr *attr,
++				      bool paged)
+ {
+ 	struct pmbus_sensor *base;
+ 	bool upper = !!(attr->gbit & 0xff00);	/* need to check STATUS_WORD */
+@@ -1238,7 +1239,7 @@ static int pmbus_add_sensor_attrs_one(struct i2c_client *client,
+ 
+ 	if (attr->label) {
+ 		ret = pmbus_add_label(data, name, index, attr->label,
+-				      attr->paged ? page + 1 : 0);
++				      paged ? page + 1 : 0);
+ 		if (ret)
+ 			return ret;
+ 	}
+@@ -1271,6 +1272,30 @@ static int pmbus_add_sensor_attrs_one(struct i2c_client *client,
+ 	return 0;
+ }
+ 
++static bool pmbus_sensor_is_paged(const struct pmbus_driver_info *info,
++				  const struct pmbus_sensor_attr *attr)
++{
++	int p;
++
++	if (attr->paged)
++		return true;
++
++	/*
++	 * Some attributes may be present on more than one page despite
++	 * not being marked with the paged attribute. If that is the case,
++	 * then treat the sensor as being paged and add the page suffix to the
++	 * attribute name.
++	 * We don't just add the paged attribute to all such attributes, in
++	 * order to maintain the un-suffixed labels in the case where the
++	 * attribute is only on page 0.
++	 */
++	for (p = 1; p < info->pages; p++) {
++		if (info->func[p] & attr->func)
++			return true;
++	}
++	return false;
++}
++
+ static int pmbus_add_sensor_attrs(struct i2c_client *client,
+ 				  struct pmbus_data *data,
+ 				  const char *name,
+@@ -1284,14 +1309,15 @@ static int pmbus_add_sensor_attrs(struct i2c_client *client,
+ 	index = 1;
+ 	for (i = 0; i < nattrs; i++) {
+ 		int page, pages;
++		bool paged = pmbus_sensor_is_paged(info, attrs);
+ 
+-		pages = attrs->paged ? info->pages : 1;
++		pages = paged ? info->pages : 1;
+ 		for (page = 0; page < pages; page++) {
+ 			if (!(info->func[page] & attrs->func))
+ 				continue;
+ 			ret = pmbus_add_sensor_attrs_one(client, data, info,
+ 							 name, index, page,
+-							 attrs);
++							 attrs, paged);
+ 			if (ret)
+ 				return ret;
+ 			index++;
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
+index d1d8d07a0714..83425b7b580c 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
+@@ -265,6 +265,7 @@ struct st_lsm6dsx_sensor {
+  * @conf_lock: Mutex to prevent concurrent FIFO configuration update.
+  * @page_lock: Mutex to prevent concurrent memory page configuration.
+  * @fifo_mode: FIFO operating mode supported by the device.
++ * @suspend_mask: Suspended sensor bitmask.
+  * @enable_mask: Enabled sensor bitmask.
+  * @ts_sip: Total number of timestamp samples in a given pattern.
+  * @sip: Total number of samples (acc/gyro/ts) in a given pattern.
+@@ -282,6 +283,7 @@ struct st_lsm6dsx_hw {
+ 	struct mutex page_lock;
+ 
+ 	enum st_lsm6dsx_fifo_mode fifo_mode;
++	u8 suspend_mask;
+ 	u8 enable_mask;
+ 	u8 ts_sip;
+ 	u8 sip;
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+index 12e29dda9b98..96986d84e418 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+@@ -1023,8 +1023,6 @@ static int __maybe_unused st_lsm6dsx_suspend(struct device *dev)
+ {
+ 	struct st_lsm6dsx_hw *hw = dev_get_drvdata(dev);
+ 	struct st_lsm6dsx_sensor *sensor;
+-	const struct st_lsm6dsx_reg *reg;
+-	unsigned int data;
+ 	int i, err = 0;
+ 
+ 	for (i = 0; i < ST_LSM6DSX_ID_MAX; i++) {
+@@ -1035,12 +1033,16 @@ static int __maybe_unused st_lsm6dsx_suspend(struct device *dev)
+ 		if (!(hw->enable_mask & BIT(sensor->id)))
+ 			continue;
+ 
+-		reg = &st_lsm6dsx_odr_table[sensor->id].reg;
+-		data = ST_LSM6DSX_SHIFT_VAL(0, reg->mask);
+-		err = st_lsm6dsx_update_bits_locked(hw, reg->addr, reg->mask,
+-						    data);
++		if (sensor->id == ST_LSM6DSX_ID_EXT0 ||
++		    sensor->id == ST_LSM6DSX_ID_EXT1 ||
++		    sensor->id == ST_LSM6DSX_ID_EXT2)
++			err = st_lsm6dsx_shub_set_enable(sensor, false);
++		else
++			err = st_lsm6dsx_sensor_set_enable(sensor, false);
+ 		if (err < 0)
+ 			return err;
++
++		hw->suspend_mask |= BIT(sensor->id);
+ 	}
+ 
+ 	if (hw->fifo_mode != ST_LSM6DSX_FIFO_BYPASS)
+@@ -1060,12 +1062,19 @@ static int __maybe_unused st_lsm6dsx_resume(struct device *dev)
+ 			continue;
+ 
+ 		sensor = iio_priv(hw->iio_devs[i]);
+-		if (!(hw->enable_mask & BIT(sensor->id)))
++		if (!(hw->suspend_mask & BIT(sensor->id)))
+ 			continue;
+ 
+-		err = st_lsm6dsx_set_odr(sensor, sensor->odr);
++		if (sensor->id == ST_LSM6DSX_ID_EXT0 ||
++		    sensor->id == ST_LSM6DSX_ID_EXT1 ||
++		    sensor->id == ST_LSM6DSX_ID_EXT2)
++			err = st_lsm6dsx_shub_set_enable(sensor, true);
++		else
++			err = st_lsm6dsx_sensor_set_enable(sensor, true);
+ 		if (err < 0)
+ 			return err;
++
++		hw->suspend_mask &= ~BIT(sensor->id);
+ 	}
+ 
+ 	if (hw->enable_mask)
+diff --git a/drivers/iio/temperature/mlx90632.c b/drivers/iio/temperature/mlx90632.c
+index be03be719efe..f71918430f95 100644
+--- a/drivers/iio/temperature/mlx90632.c
++++ b/drivers/iio/temperature/mlx90632.c
+@@ -81,6 +81,8 @@
+ /* Magic constants */
+ #define MLX90632_ID_MEDICAL	0x0105 /* EEPROM DSPv5 Medical device id */
+ #define MLX90632_ID_CONSUMER	0x0205 /* EEPROM DSPv5 Consumer device id */
++#define MLX90632_DSP_VERSION	5 /* DSP version */
++#define MLX90632_DSP_MASK	GENMASK(7, 0) /* DSP version in EE_VERSION */
+ #define MLX90632_RESET_CMD	0x0006 /* Reset sensor (address or global) */
+ #define MLX90632_REF_12		12LL /**< ResCtrlRef value of Ch 1 or Ch 2 */
+ #define MLX90632_REF_3		12LL /**< ResCtrlRef value of Channel 3 */
+@@ -667,10 +669,13 @@ static int mlx90632_probe(struct i2c_client *client,
+ 	} else if (read == MLX90632_ID_CONSUMER) {
+ 		dev_dbg(&client->dev,
+ 			"Detected Consumer EEPROM calibration %x\n", read);
++	} else if ((read & MLX90632_DSP_MASK) == MLX90632_DSP_VERSION) {
++		dev_dbg(&client->dev,
++			"Detected Unknown EEPROM calibration %x\n", read);
+ 	} else {
+ 		dev_err(&client->dev,
+-			"EEPROM version mismatch %x (expected %x or %x)\n",
+-			read, MLX90632_ID_CONSUMER, MLX90632_ID_MEDICAL);
++			"Wrong DSP version %x (expected %x)\n",
++			read, MLX90632_DSP_VERSION);
+ 		return -EPROTONOSUPPORT;
+ 	}
+ 
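
The probe change above adds a fallback: if the full EEPROM id is unknown but its low byte (the DSP version, GENMASK(7, 0)) still reads 5, the device is accepted as an unknown DSPv5 calibration. A small standalone demo of the classification; 0x0305 is a made-up future id:

#include <stdint.h>
#include <stdio.h>

/* Constants mirrored from the hunk above */
#define MLX90632_ID_MEDICAL	0x0105
#define MLX90632_ID_CONSUMER	0x0205
#define MLX90632_DSP_VERSION	5
#define MLX90632_DSP_MASK	0xff	/* GENMASK(7, 0) */

static const char *classify(uint16_t read)
{
	if (read == MLX90632_ID_MEDICAL)
		return "medical";
	if (read == MLX90632_ID_CONSUMER)
		return "consumer";
	if ((read & MLX90632_DSP_MASK) == MLX90632_DSP_VERSION)
		return "unknown calibration, DSPv5";
	return "wrong DSP version";
}

int main(void)
{
	uint16_t samples[] = { 0x0105, 0x0205, 0x0305, 0x0106 };
	unsigned int i;

	for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		printf("%#06x -> %s\n", samples[i], classify(samples[i]));
	return 0;
}
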
+diff --git a/drivers/infiniband/hw/hfi1/chip.c b/drivers/infiniband/hw/hfi1/chip.c
+index 9784c6c0d2ec..597f2f02f3a8 100644
+--- a/drivers/infiniband/hw/hfi1/chip.c
++++ b/drivers/infiniband/hw/hfi1/chip.c
+@@ -9848,6 +9848,7 @@ void hfi1_quiet_serdes(struct hfi1_pportdata *ppd)
+ 
+ 	/* disable the port */
+ 	clear_rcvctrl(dd, RCV_CTRL_RCV_PORT_ENABLE_SMASK);
++	cancel_work_sync(&ppd->freeze_work);
+ }
+ 
+ static inline int init_cpu_counters(struct hfi1_devdata *dd)
+@@ -14027,6 +14028,19 @@ static void init_kdeth_qp(struct hfi1_devdata *dd)
+ 		  RCV_BTH_QP_KDETH_QP_SHIFT);
+ }
+ 
++/**
++ * hfi1_get_qp_map
++ * @dd: device data
++ * @idx: index to read
++ */
++u8 hfi1_get_qp_map(struct hfi1_devdata *dd, u8 idx)
++{
++	u64 reg = read_csr(dd, RCV_QP_MAP_TABLE + (idx / 8) * 8);
++
++	reg >>= (idx % 8) * 8;
++	return reg;
++}
++
+ /**
+  * init_qpmap_table
+  * @dd - device data
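
hfi1_get_qp_map() treats each 64-bit RCV_QP_MAP_TABLE register as eight packed 8-bit entries: idx / 8 picks the register, (idx % 8) * 8 the byte lane. A self-contained model of the indexing, with an in-memory array standing in for read_csr():

#include <stdint.h>
#include <stdio.h>

/* In-memory stand-in for the RCV_QP_MAP_TABLE CSRs: eight u8 per u64 */
static uint64_t qp_map_table[32];

static uint8_t get_qp_map(uint8_t idx)
{
	uint64_t reg = qp_map_table[idx / 8]; /* read_csr(base + (idx / 8) * 8) */

	reg >>= (idx % 8) * 8;	/* select the byte lane */
	return (uint8_t)reg;
}

int main(void)
{
	/* entry 10 lives in register 1, byte lane 2 */
	qp_map_table[1] = (uint64_t)0xab << (2 * 8);
	printf("map[10] = %#x\n", get_qp_map(10)); /* 0xab */
	return 0;
}
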
+diff --git a/drivers/infiniband/hw/hfi1/chip.h b/drivers/infiniband/hw/hfi1/chip.h
+index 6c27c1c6a868..a5c61400b295 100644
+--- a/drivers/infiniband/hw/hfi1/chip.h
++++ b/drivers/infiniband/hw/hfi1/chip.h
+@@ -1442,6 +1442,7 @@ void clear_all_interrupts(struct hfi1_devdata *dd);
+ void remap_intr(struct hfi1_devdata *dd, int isrc, int msix_intr);
+ void remap_sdma_interrupts(struct hfi1_devdata *dd, int engine, int msix_intr);
+ void reset_interrupts(struct hfi1_devdata *dd);
++u8 hfi1_get_qp_map(struct hfi1_devdata *dd, u8 idx);
+ 
+ /*
+  * Interrupt source table.
+diff --git a/drivers/infiniband/hw/hfi1/fault.c b/drivers/infiniband/hw/hfi1/fault.c
+index 3fd3315d0fb0..93613e5def9b 100644
+--- a/drivers/infiniband/hw/hfi1/fault.c
++++ b/drivers/infiniband/hw/hfi1/fault.c
+@@ -153,6 +153,7 @@ static ssize_t fault_opcodes_write(struct file *file, const char __user *buf,
+ 		char *dash;
+ 		unsigned long range_start, range_end, i;
+ 		bool remove = false;
++		unsigned long bound = 1U << BITS_PER_BYTE;
+ 
+ 		end = strchr(ptr, ',');
+ 		if (end)
+@@ -178,6 +179,10 @@ static ssize_t fault_opcodes_write(struct file *file, const char __user *buf,
+ 				    BITS_PER_BYTE);
+ 			break;
+ 		}
++		/* Check the inputs */
++		if (range_start >= bound || range_end >= bound)
++			break;
++
+ 		for (i = range_start; i <= range_end; i++) {
+ 			if (remove)
+ 				clear_bit(i, fault->opcodes);
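
The added check bounds the user-supplied range before it indexes the fault bitmap: opcodes are 8-bit, so anything at or above 1 << BITS_PER_BYTE (256) would walk past the end of fault->opcodes. A standalone sketch of that validation:

#include <stdio.h>

#define BITS_PER_BYTE 8

/* Opcodes are 8-bit, so only 0..255 may index the opcode bitmap */
static int range_ok(unsigned long start, unsigned long end)
{
	unsigned long bound = 1UL << BITS_PER_BYTE;	/* 256 */

	return start < bound && end < bound;
}

int main(void)
{
	printf("0..255   -> %s\n", range_ok(0, 255) ? "ok" : "rejected");
	printf("0..256   -> %s\n", range_ok(0, 256) ? "ok" : "rejected");
	printf("300..301 -> %s\n", range_ok(300, 301) ? "ok" : "rejected");
	return 0;
}
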
+diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
+index b0110728f541..70828de7436b 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.c
++++ b/drivers/infiniband/hw/hfi1/sdma.c
+@@ -410,10 +410,7 @@ static void sdma_flush(struct sdma_engine *sde)
+ 	sdma_flush_descq(sde);
+ 	spin_lock_irqsave(&sde->flushlist_lock, flags);
+ 	/* copy flush list */
+-	list_for_each_entry_safe(txp, txp_next, &sde->flushlist, list) {
+-		list_del_init(&txp->list);
+-		list_add_tail(&txp->list, &flushlist);
+-	}
++	list_splice_init(&sde->flushlist, &flushlist);
+ 	spin_unlock_irqrestore(&sde->flushlist_lock, flags);
+ 	/* flush from flush list */
+ 	list_for_each_entry_safe(txp, txp_next, &flushlist, list)
+@@ -2413,7 +2410,7 @@ unlock_noconn:
+ 	list_add_tail(&tx->list, &sde->flushlist);
+ 	spin_unlock(&sde->flushlist_lock);
+ 	iowait_inc_wait_count(wait, tx->num_desc);
+-	schedule_work(&sde->flush_worker);
++	queue_work_on(sde->cpu, system_highpri_wq, &sde->flush_worker);
+ 	ret = -ECOMM;
+ 	goto unlock;
+ nodesc:
+@@ -2511,7 +2508,7 @@ unlock_noconn:
+ 		iowait_inc_wait_count(wait, tx->num_desc);
+ 	}
+ 	spin_unlock(&sde->flushlist_lock);
+-	schedule_work(&sde->flush_worker);
++	queue_work_on(sde->cpu, system_highpri_wq, &sde->flush_worker);
+ 	ret = -ECOMM;
+ 	goto update_tail;
+ nodesc:
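
The first hunk replaces an element-by-element move with list_splice_init(), which detaches the whole flush list in O(1) while the lock is held and leaves the source list empty and reinitialized. A self-contained model of the splice idiom (a simplified re-implementation, not the kernel's list.h):

#include <stdio.h>

/* Minimal doubly linked list in the style of the kernel's list_head */
struct list_head { struct list_head *next, *prev; };

static void init_list(struct list_head *h) { h->next = h->prev = h; }

static int list_empty(const struct list_head *h) { return h->next == h; }

/* Move every entry of @from onto the head of @to with O(1) pointer work */
static void splice_init(struct list_head *from, struct list_head *to)
{
	if (list_empty(from))
		return;
	from->next->prev = to;
	from->prev->next = to->next;
	to->next->prev = from->prev;
	to->next = from->next;
	init_list(from);
}

int main(void)
{
	struct list_head flushlist, local, a, b;

	init_list(&flushlist);
	init_list(&local);
	/* hand-link two nodes onto the shared flushlist */
	flushlist.next = &a; a.prev = &flushlist;
	a.next = &b;         b.prev = &a;
	b.next = &flushlist; flushlist.prev = &b;

	/* under the lock: take the whole chain at once, then drain unlocked */
	splice_init(&flushlist, &local);
	printf("flushlist empty: %d, local empty: %d\n",
	       list_empty(&flushlist), list_empty(&local));
	return 0;
}
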
+diff --git a/drivers/infiniband/hw/hfi1/tid_rdma.c b/drivers/infiniband/hw/hfi1/tid_rdma.c
+index 43cbce7a19ea..e0851f01a804 100644
+--- a/drivers/infiniband/hw/hfi1/tid_rdma.c
++++ b/drivers/infiniband/hw/hfi1/tid_rdma.c
+@@ -305,9 +305,7 @@ static struct hfi1_ctxtdata *qp_to_rcd(struct rvt_dev_info *rdi,
+ 	if (qp->ibqp.qp_num == 0)
+ 		ctxt = 0;
+ 	else
+-		ctxt = ((qp->ibqp.qp_num >> dd->qos_shift) %
+-			(dd->n_krcv_queues - 1)) + 1;
+-
++		ctxt = hfi1_get_qp_map(dd, qp->ibqp.qp_num >> dd->qos_shift);
+ 	return dd->rcd[ctxt];
+ }
+ 
+diff --git a/drivers/infiniband/hw/hfi1/user_exp_rcv.c b/drivers/infiniband/hw/hfi1/user_exp_rcv.c
+index 0cd71ce7cc71..3592a9ec155e 100644
+--- a/drivers/infiniband/hw/hfi1/user_exp_rcv.c
++++ b/drivers/infiniband/hw/hfi1/user_exp_rcv.c
+@@ -324,6 +324,9 @@ int hfi1_user_exp_rcv_setup(struct hfi1_filedata *fd,
+ 	u32 *tidlist = NULL;
+ 	struct tid_user_buf *tidbuf;
+ 
++	if (!PAGE_ALIGNED(tinfo->vaddr))
++		return -EINVAL;
++
+ 	tidbuf = kzalloc(sizeof(*tidbuf), GFP_KERNEL);
+ 	if (!tidbuf)
+ 		return -ENOMEM;
+diff --git a/drivers/infiniband/hw/hfi1/user_sdma.c b/drivers/infiniband/hw/hfi1/user_sdma.c
+index 8bfbc6d7ea34..fd754a16475a 100644
+--- a/drivers/infiniband/hw/hfi1/user_sdma.c
++++ b/drivers/infiniband/hw/hfi1/user_sdma.c
+@@ -130,20 +130,16 @@ static int defer_packet_queue(
+ {
+ 	struct hfi1_user_sdma_pkt_q *pq =
+ 		container_of(wait->iow, struct hfi1_user_sdma_pkt_q, busy);
+-	struct user_sdma_txreq *tx =
+-		container_of(txreq, struct user_sdma_txreq, txreq);
+ 
+-	if (sdma_progress(sde, seq, txreq)) {
+-		if (tx->busycount++ < MAX_DEFER_RETRY_COUNT)
+-			goto eagain;
+-	}
++	write_seqlock(&sde->waitlock);
++	if (sdma_progress(sde, seq, txreq))
++		goto eagain;
+ 	/*
+ 	 * We are assuming that if the list is enqueued somewhere, it
+ 	 * is to the dmawait list since that is the only place where
+ 	 * it is supposed to be enqueued.
+ 	 */
+ 	xchg(&pq->state, SDMA_PKT_Q_DEFERRED);
+-	write_seqlock(&sde->waitlock);
+ 	if (list_empty(&pq->busy.list)) {
+ 		iowait_get_priority(&pq->busy);
+ 		iowait_queue(pkts_sent, &pq->busy, &sde->dmawait);
+@@ -151,6 +147,7 @@ static int defer_packet_queue(
+ 	write_sequnlock(&sde->waitlock);
+ 	return -EBUSY;
+ eagain:
++	write_sequnlock(&sde->waitlock);
+ 	return -EAGAIN;
+ }
+ 
+@@ -804,7 +801,6 @@ static int user_sdma_send_pkts(struct user_sdma_request *req, u16 maxpkts)
+ 
+ 		tx->flags = 0;
+ 		tx->req = req;
+-		tx->busycount = 0;
+ 		INIT_LIST_HEAD(&tx->list);
+ 
+ 		/*
+diff --git a/drivers/infiniband/hw/hfi1/user_sdma.h b/drivers/infiniband/hw/hfi1/user_sdma.h
+index 14dfd757dafd..4d8510b0fc38 100644
+--- a/drivers/infiniband/hw/hfi1/user_sdma.h
++++ b/drivers/infiniband/hw/hfi1/user_sdma.h
+@@ -245,7 +245,6 @@ struct user_sdma_txreq {
+ 	struct list_head list;
+ 	struct user_sdma_request *req;
+ 	u16 flags;
+-	unsigned int busycount;
+ 	u16 seqnum;
+ };
+ 
+diff --git a/drivers/infiniband/hw/hfi1/verbs.c b/drivers/infiniband/hw/hfi1/verbs.c
+index 55a56b3d7f83..ea68eeba3f22 100644
+--- a/drivers/infiniband/hw/hfi1/verbs.c
++++ b/drivers/infiniband/hw/hfi1/verbs.c
+@@ -1355,8 +1355,6 @@ static void hfi1_fill_device_attr(struct hfi1_devdata *dd)
+ 	rdi->dparms.props.max_cq = hfi1_max_cqs;
+ 	rdi->dparms.props.max_ah = hfi1_max_ahs;
+ 	rdi->dparms.props.max_cqe = hfi1_max_cqes;
+-	rdi->dparms.props.max_mr = rdi->lkey_table.max;
+-	rdi->dparms.props.max_fmr = rdi->lkey_table.max;
+ 	rdi->dparms.props.max_map_per_fmr = 32767;
+ 	rdi->dparms.props.max_pd = hfi1_max_pds;
+ 	rdi->dparms.props.max_qp_rd_atom = HFI1_MAX_RDMA_ATOMIC;
+diff --git a/drivers/infiniband/hw/hfi1/verbs_txreq.c b/drivers/infiniband/hw/hfi1/verbs_txreq.c
+index c4ab2d5b4502..8f766dd3f61c 100644
+--- a/drivers/infiniband/hw/hfi1/verbs_txreq.c
++++ b/drivers/infiniband/hw/hfi1/verbs_txreq.c
+@@ -100,7 +100,7 @@ struct verbs_txreq *__get_txreq(struct hfi1_ibdev *dev,
+ 	if (ib_rvt_state_ops[qp->state] & RVT_PROCESS_RECV_OK) {
+ 		struct hfi1_qp_priv *priv;
+ 
+-		tx = kmem_cache_alloc(dev->verbs_txreq_cache, GFP_ATOMIC);
++		tx = kmem_cache_alloc(dev->verbs_txreq_cache, VERBS_TXREQ_GFP);
+ 		if (tx)
+ 			goto out;
+ 		priv = qp->priv;
+diff --git a/drivers/infiniband/hw/hfi1/verbs_txreq.h b/drivers/infiniband/hw/hfi1/verbs_txreq.h
+index b002e96eb335..bfa6e081cb56 100644
+--- a/drivers/infiniband/hw/hfi1/verbs_txreq.h
++++ b/drivers/infiniband/hw/hfi1/verbs_txreq.h
+@@ -72,6 +72,7 @@ struct hfi1_ibdev;
+ struct verbs_txreq *__get_txreq(struct hfi1_ibdev *dev,
+ 				struct rvt_qp *qp);
+ 
++#define VERBS_TXREQ_GFP (GFP_ATOMIC | __GFP_NOWARN)
+ static inline struct verbs_txreq *get_txreq(struct hfi1_ibdev *dev,
+ 					    struct rvt_qp *qp)
+ 	__must_hold(&qp->slock)
+@@ -79,7 +80,7 @@ static inline struct verbs_txreq *get_txreq(struct hfi1_ibdev *dev,
+ 	struct verbs_txreq *tx;
+ 	struct hfi1_qp_priv *priv = qp->priv;
+ 
+-	tx = kmem_cache_alloc(dev->verbs_txreq_cache, GFP_ATOMIC);
++	tx = kmem_cache_alloc(dev->verbs_txreq_cache, VERBS_TXREQ_GFP);
+ 	if (unlikely(!tx)) {
+ 		/* call slow path to get the lock */
+ 		tx = __get_txreq(dev, qp);
+diff --git a/drivers/infiniband/hw/qib/qib_verbs.c b/drivers/infiniband/hw/qib/qib_verbs.c
+index 5ff32d32c61c..2c4e569ce438 100644
+--- a/drivers/infiniband/hw/qib/qib_verbs.c
++++ b/drivers/infiniband/hw/qib/qib_verbs.c
+@@ -1459,8 +1459,6 @@ static void qib_fill_device_attr(struct qib_devdata *dd)
+ 	rdi->dparms.props.max_cq = ib_qib_max_cqs;
+ 	rdi->dparms.props.max_cqe = ib_qib_max_cqes;
+ 	rdi->dparms.props.max_ah = ib_qib_max_ahs;
+-	rdi->dparms.props.max_mr = rdi->lkey_table.max;
+-	rdi->dparms.props.max_fmr = rdi->lkey_table.max;
+ 	rdi->dparms.props.max_map_per_fmr = 32767;
+ 	rdi->dparms.props.max_qp_rd_atom = QIB_MAX_RDMA_ATOMIC;
+ 	rdi->dparms.props.max_qp_init_rd_atom = 255;
+diff --git a/drivers/infiniband/sw/rdmavt/mr.c b/drivers/infiniband/sw/rdmavt/mr.c
+index 0bb6e39dd03a..b04d2173e3f4 100644
+--- a/drivers/infiniband/sw/rdmavt/mr.c
++++ b/drivers/infiniband/sw/rdmavt/mr.c
+@@ -96,6 +96,8 @@ int rvt_driver_mr_init(struct rvt_dev_info *rdi)
+ 	for (i = 0; i < rdi->lkey_table.max; i++)
+ 		RCU_INIT_POINTER(rdi->lkey_table.table[i], NULL);
+ 
++	rdi->dparms.props.max_mr = rdi->lkey_table.max;
++	rdi->dparms.props.max_fmr = rdi->lkey_table.max;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
+index a34b9a2a32b6..a77436ee5ff7 100644
+--- a/drivers/infiniband/sw/rdmavt/qp.c
++++ b/drivers/infiniband/sw/rdmavt/qp.c
+@@ -594,7 +594,8 @@ static int alloc_qpn(struct rvt_dev_info *rdi, struct rvt_qpn_table *qpt,
+ 			offset = qpt->incr | ((offset & 1) ^ 1);
+ 		}
+ 		/* there can be no set bits in low-order QoS bits */
+-		WARN_ON(offset & (BIT(rdi->dparms.qos_shift) - 1));
++		WARN_ON(rdi->dparms.qos_shift > 1 &&
++			offset & ((BIT(rdi->dparms.qos_shift - 1) - 1) << 1));
+ 		qpn = mk_qpn(qpt, map, offset);
+ 	}
+ 
+diff --git a/drivers/input/misc/uinput.c b/drivers/input/misc/uinput.c
+index 26ec603fe220..83d1499fe021 100644
+--- a/drivers/input/misc/uinput.c
++++ b/drivers/input/misc/uinput.c
+@@ -1051,13 +1051,31 @@ static long uinput_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 
+ #ifdef CONFIG_COMPAT
+ 
+-#define UI_SET_PHYS_COMPAT	_IOW(UINPUT_IOCTL_BASE, 108, compat_uptr_t)
++/*
++ * These IOCTLs change their size and thus their numbers between
++ * 32 and 64 bits.
++ */
++#define UI_SET_PHYS_COMPAT		\
++	_IOW(UINPUT_IOCTL_BASE, 108, compat_uptr_t)
++#define UI_BEGIN_FF_UPLOAD_COMPAT	\
++	_IOWR(UINPUT_IOCTL_BASE, 200, struct uinput_ff_upload_compat)
++#define UI_END_FF_UPLOAD_COMPAT		\
++	_IOW(UINPUT_IOCTL_BASE, 201, struct uinput_ff_upload_compat)
+ 
+ static long uinput_compat_ioctl(struct file *file,
+ 				unsigned int cmd, unsigned long arg)
+ {
+-	if (cmd == UI_SET_PHYS_COMPAT)
++	switch (cmd) {
++	case UI_SET_PHYS_COMPAT:
+ 		cmd = UI_SET_PHYS;
++		break;
++	case UI_BEGIN_FF_UPLOAD_COMPAT:
++		cmd = UI_BEGIN_FF_UPLOAD;
++		break;
++	case UI_END_FF_UPLOAD_COMPAT:
++		cmd = UI_END_FF_UPLOAD;
++		break;
++	}
+ 
+ 	return uinput_ioctl_handler(file, cmd, arg, compat_ptr(arg));
+ }
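
This works because _IOW()/_IOWR() fold sizeof(type) into the command word: struct uinput_ff_upload carries pointers, so its size, and therefore its ioctl number, differs between the 32- and 64-bit ABIs, and the compat handler has to translate. A standalone demo of the size-dependent encoding (Linux headers assumed); the two structs are hypothetical stand-ins where only the sizes matter:

#include <stdio.h>
#include <stdint.h>
#include <linux/ioctl.h>	/* _IOWR() command encoding */

#define UINPUT_IOCTL_BASE 'U'

/* Hypothetical stand-ins for uinput_ff_upload{,_compat} */
struct ff_upload_64 { uint32_t request_id; uint64_t effect_ptr, old_ptr; };
struct ff_upload_32 { uint32_t request_id; uint32_t effect_ptr, old_ptr; };

int main(void)
{
	unsigned int cmd64 = _IOWR(UINPUT_IOCTL_BASE, 200, struct ff_upload_64);
	unsigned int cmd32 = _IOWR(UINPUT_IOCTL_BASE, 200, struct ff_upload_32);

	/* same direction, base and number; only the size field differs */
	printf("64-bit cmd %#x, 32-bit cmd %#x\n", cmd64, cmd32);
	return 0;
}
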
+diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
+index b6da0c1267e3..8e6077d8e434 100644
+--- a/drivers/input/mouse/synaptics.c
++++ b/drivers/input/mouse/synaptics.c
+@@ -179,6 +179,8 @@ static const char * const smbus_pnp_ids[] = {
+ 	"LEN0096", /* X280 */
+ 	"LEN0097", /* X280 -> ALPS trackpoint */
+ 	"LEN200f", /* T450s */
++	"LEN2054", /* E480 */
++	"LEN2055", /* E580 */
+ 	"SYN3052", /* HP EliteBook 840 G4 */
+ 	"SYN3221", /* HP 15-ay000 */
+ 	NULL
+diff --git a/drivers/input/touchscreen/silead.c b/drivers/input/touchscreen/silead.c
+index 09241d4cdebc..06f0eb04a8fd 100644
+--- a/drivers/input/touchscreen/silead.c
++++ b/drivers/input/touchscreen/silead.c
+@@ -617,6 +617,7 @@ static const struct acpi_device_id silead_ts_acpi_match[] = {
+ 	{ "MSSL1680", 0 },
+ 	{ "MSSL0001", 0 },
+ 	{ "MSSL0002", 0 },
++	{ "MSSL0017", 0 },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(acpi, silead_ts_acpi_match);
+diff --git a/drivers/misc/habanalabs/memory.c b/drivers/misc/habanalabs/memory.c
+index fadaf557603f..425442819d31 100644
+--- a/drivers/misc/habanalabs/memory.c
++++ b/drivers/misc/habanalabs/memory.c
+@@ -675,11 +675,6 @@ static int init_phys_pg_pack_from_userptr(struct hl_ctx *ctx,
+ 
+ 		total_npages += npages;
+ 
+-		if (first) {
+-			first = false;
+-			dma_addr &= PAGE_MASK_2MB;
+-		}
+-
+ 		if ((npages % PGS_IN_2MB_PAGE) ||
+ 					(dma_addr & (PAGE_SIZE_2MB - 1)))
+ 			is_huge_page_opt = false;
+@@ -704,7 +699,6 @@ static int init_phys_pg_pack_from_userptr(struct hl_ctx *ctx,
+ 	phys_pg_pack->total_size = total_npages * page_size;
+ 
+ 	j = 0;
+-	first = true;
+ 	for_each_sg(userptr->sgt->sgl, sg, userptr->sgt->nents, i) {
+ 		npages = get_sg_info(sg, &dma_addr);
+ 
+diff --git a/drivers/misc/lkdtm/usercopy.c b/drivers/misc/lkdtm/usercopy.c
+index d5a0e7f1813b..e172719dd86d 100644
+--- a/drivers/misc/lkdtm/usercopy.c
++++ b/drivers/misc/lkdtm/usercopy.c
+@@ -324,14 +324,16 @@ free_user:
+ 
+ void lkdtm_USERCOPY_KERNEL_DS(void)
+ {
+-	char __user *user_ptr = (char __user *)ERR_PTR(-EINVAL);
++	char __user *user_ptr =
++		(char __user *)(0xFUL << (sizeof(unsigned long) * 8 - 4));
+ 	mm_segment_t old_fs = get_fs();
+ 	char buf[10] = {0};
+ 
+-	pr_info("attempting copy_to_user on unmapped kernel address\n");
++	pr_info("attempting copy_to_user() to noncanonical address: %px\n",
++		user_ptr);
+ 	set_fs(KERNEL_DS);
+-	if (copy_to_user(user_ptr, buf, sizeof(buf)))
+-		pr_info("copy_to_user un unmapped kernel address failed\n");
++	if (copy_to_user(user_ptr, buf, sizeof(buf)) == 0)
++		pr_err("copy_to_user() to noncanonical address succeeded!?\n");
+ 	set_fs(old_fs);
+ }
+ 
+diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
+index 6db36dc870b5..9020cb2490f7 100644
+--- a/drivers/mmc/core/core.c
++++ b/drivers/mmc/core/core.c
+@@ -144,8 +144,9 @@ void mmc_request_done(struct mmc_host *host, struct mmc_request *mrq)
+ 	int err = cmd->error;
+ 
+ 	/* Flag re-tuning needed on CRC errors */
+-	if ((cmd->opcode != MMC_SEND_TUNING_BLOCK &&
+-	    cmd->opcode != MMC_SEND_TUNING_BLOCK_HS200) &&
++	if (cmd->opcode != MMC_SEND_TUNING_BLOCK &&
++	    cmd->opcode != MMC_SEND_TUNING_BLOCK_HS200 &&
++	    !host->retune_crc_disable &&
+ 	    (err == -EILSEQ || (mrq->sbc && mrq->sbc->error == -EILSEQ) ||
+ 	    (mrq->data && mrq->data->error == -EILSEQ) ||
+ 	    (mrq->stop && mrq->stop->error == -EILSEQ)))
+diff --git a/drivers/mmc/core/sdio.c b/drivers/mmc/core/sdio.c
+index 6718fc8bb40f..0f51e774183e 100644
+--- a/drivers/mmc/core/sdio.c
++++ b/drivers/mmc/core/sdio.c
+@@ -941,6 +941,10 @@ static int mmc_sdio_pre_suspend(struct mmc_host *host)
+  */
+ static int mmc_sdio_suspend(struct mmc_host *host)
+ {
++	/* Prevent processing of SDIO IRQs in suspended state. */
++	mmc_card_set_suspended(host->card);
++	cancel_delayed_work_sync(&host->sdio_irq_work);
++
+ 	mmc_claim_host(host);
+ 
+ 	if (mmc_card_keep_power(host) && mmc_card_wake_sdio_irq(host))
+@@ -989,13 +993,20 @@ static int mmc_sdio_resume(struct mmc_host *host)
+ 		err = sdio_enable_4bit_bus(host->card);
+ 	}
+ 
+-	if (!err && host->sdio_irqs) {
++	if (err)
++		goto out;
++
++	/* Allow SDIO IRQs to be processed again. */
++	mmc_card_clr_suspended(host->card);
++
++	if (host->sdio_irqs) {
+ 		if (!(host->caps2 & MMC_CAP2_SDIO_IRQ_NOTHREAD))
+ 			wake_up_process(host->sdio_irq_thread);
+ 		else if (host->caps & MMC_CAP_SDIO_IRQ)
+ 			host->ops->enable_sdio_irq(host, 1);
+ 	}
+ 
++out:
+ 	mmc_release_host(host);
+ 
+ 	host->pm_flags &= ~MMC_PM_KEEP_POWER;
+diff --git a/drivers/mmc/core/sdio_io.c b/drivers/mmc/core/sdio_io.c
+index 3f67fbbe0d75..df887ced9666 100644
+--- a/drivers/mmc/core/sdio_io.c
++++ b/drivers/mmc/core/sdio_io.c
+@@ -19,6 +19,7 @@
+ #include "sdio_ops.h"
+ #include "core.h"
+ #include "card.h"
++#include "host.h"
+ 
+ /**
+  *	sdio_claim_host - exclusively claim a bus for a certain SDIO function
+@@ -738,3 +739,79 @@ int sdio_set_host_pm_flags(struct sdio_func *func, mmc_pm_flag_t flags)
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(sdio_set_host_pm_flags);
++
++/**
++ *	sdio_retune_crc_disable - temporarily disable retuning on CRC errors
++ *	@func: SDIO function attached to host
++ *
++ *	If the SDIO card is known to be in a state where it might produce
++ *	CRC errors on the bus in response to commands (like if we know it is
++ *	transitioning between power states), an SDIO function driver can
++ *	call this function to temporarily disable the SD/MMC core behavior of
++ *	triggering an automatic retuning.
++ *
++ *	This function should be called while the host is claimed and the host
++ *	should remain claimed until sdio_retune_crc_enable() is called.
++ *	Specifically, the expected sequence of calls is:
++ *	- sdio_claim_host()
++ *	- sdio_retune_crc_disable()
++ *	- some number of calls like sdio_writeb() and sdio_readb()
++ *	- sdio_retune_crc_enable()
++ *	- sdio_release_host()
++ */
++void sdio_retune_crc_disable(struct sdio_func *func)
++{
++	func->card->host->retune_crc_disable = true;
++}
++EXPORT_SYMBOL_GPL(sdio_retune_crc_disable);
++
++/**
++ *	sdio_retune_crc_enable - re-enable retuning on CRC errors
++ *	@func: SDIO function attached to host
++ *
++ *	This is the complement to sdio_retune_crc_disable().
++ */
++void sdio_retune_crc_enable(struct sdio_func *func)
++{
++	func->card->host->retune_crc_disable = false;
++}
++EXPORT_SYMBOL_GPL(sdio_retune_crc_enable);
++
++/**
++ *	sdio_retune_hold_now - start deferring retuning requests till release
++ *	@func: SDIO function attached to host
++ *
++ *	This function can be called if it's currently a bad time to do
++ *	a retune of the SDIO card.  Retune requests made during this time
++ *	will be held and we'll actually do the retune sometime after the
++ *	release.
++ *
++ *	This function could be useful if an SDIO card is in a power state
++ *	where it can respond to a small subset of commands that doesn't
++ *	include the retuning command.  Care should be taken when using
++ *	this function since (presumably) the retuning request we might be
++ *	deferring was made for a good reason.
++ *
++ *	This function should be called while the host is claimed.
++ */
++void sdio_retune_hold_now(struct sdio_func *func)
++{
++	mmc_retune_hold_now(func->card->host);
++}
++EXPORT_SYMBOL_GPL(sdio_retune_hold_now);
++
++/**
++ *	sdio_retune_release - signal that it's OK to retune now
++ *	@func: SDIO function attached to host
++ *
++ *	This is the complement to sdio_retune_hold_now().  Calling this
++ *	function won't make a retune happen right away but will allow
++ *	retunes to be scheduled normally.
++ *
++ *	This function should be called while the host is claimed.
++ */
++void sdio_retune_release(struct sdio_func *func)
++{
++	mmc_retune_release(func->card->host);
++}
++EXPORT_SYMBOL_GPL(sdio_retune_release);
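
The kernel-doc for sdio_retune_crc_disable() above spells out the intended call order; brcmfmac, further down in this patch, becomes the first user. A kernel-context sketch of that sequence follows; it is not a standalone program, and SOME_REG is a made-up register address:

/* Kernel-context sketch of the documented call order (not standalone) */
static void quiet_io_example(struct sdio_func *func)
{
	int err;
	u8 val;

	sdio_claim_host(func);
	sdio_retune_crc_disable(func);	/* CRC errors expected; don't retune */

	val = sdio_readb(func, SOME_REG, &err);		/* SOME_REG: made up */
	sdio_writeb(func, val | 0x1, SOME_REG, &err);

	sdio_retune_crc_enable(func);
	sdio_release_host(func);
}
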
+diff --git a/drivers/mmc/core/sdio_irq.c b/drivers/mmc/core/sdio_irq.c
+index 7ca7b99413f0..b299a24d33f9 100644
+--- a/drivers/mmc/core/sdio_irq.c
++++ b/drivers/mmc/core/sdio_irq.c
+@@ -38,6 +38,10 @@ static int process_sdio_pending_irqs(struct mmc_host *host)
+ 	unsigned char pending;
+ 	struct sdio_func *func;
+ 
++	/* Don't process SDIO IRQs if the card is suspended. */
++	if (mmc_card_suspended(card))
++		return 0;
++
+ 	/*
+ 	 * Optimization, if there is only 1 function interrupt registered
+ 	 * and we know an IRQ was signaled then call irq handler directly.
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index 833ef0590af8..b33f2c90d8d8 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -1003,6 +1003,8 @@ static void msdc_request_done(struct msdc_host *host, struct mmc_request *mrq)
+ 	msdc_track_cmd_data(host, mrq->cmd, mrq->data);
+ 	if (mrq->data)
+ 		msdc_unprepare_data(host, mrq);
++	if (host->error)
++		msdc_reset_hw(host);
+ 	mmc_request_done(host->mmc, mrq);
+ }
+ 
+@@ -1355,24 +1357,25 @@ static void msdc_request_timeout(struct work_struct *work)
+ 	}
+ }
+ 
+-static void __msdc_enable_sdio_irq(struct mmc_host *mmc, int enb)
++static void __msdc_enable_sdio_irq(struct msdc_host *host, int enb)
+ {
+-	unsigned long flags;
+-	struct msdc_host *host = mmc_priv(mmc);
+-
+-	spin_lock_irqsave(&host->lock, flags);
+-	if (enb)
++	if (enb) {
+ 		sdr_set_bits(host->base + MSDC_INTEN, MSDC_INTEN_SDIOIRQ);
+-	else
++		sdr_set_bits(host->base + SDC_CFG, SDC_CFG_SDIOIDE);
++	} else {
+ 		sdr_clr_bits(host->base + MSDC_INTEN, MSDC_INTEN_SDIOIRQ);
+-	spin_unlock_irqrestore(&host->lock, flags);
++		sdr_clr_bits(host->base + SDC_CFG, SDC_CFG_SDIOIDE);
++	}
+ }
+ 
+ static void msdc_enable_sdio_irq(struct mmc_host *mmc, int enb)
+ {
++	unsigned long flags;
+ 	struct msdc_host *host = mmc_priv(mmc);
+ 
+-	__msdc_enable_sdio_irq(mmc, enb);
++	spin_lock_irqsave(&host->lock, flags);
++	__msdc_enable_sdio_irq(host, enb);
++	spin_unlock_irqrestore(&host->lock, flags);
+ 
+ 	if (enb)
+ 		pm_runtime_get_noresume(host->dev);
+@@ -1394,6 +1397,8 @@ static irqreturn_t msdc_irq(int irq, void *dev_id)
+ 		spin_lock_irqsave(&host->lock, flags);
+ 		events = readl(host->base + MSDC_INT);
+ 		event_mask = readl(host->base + MSDC_INTEN);
++		if ((events & event_mask) & MSDC_INT_SDIOIRQ)
++			__msdc_enable_sdio_irq(host, 0);
+ 		/* clear interrupts */
+ 		writel(events & event_mask, host->base + MSDC_INT);
+ 
+@@ -1402,10 +1407,8 @@ static irqreturn_t msdc_irq(int irq, void *dev_id)
+ 		data = host->data;
+ 		spin_unlock_irqrestore(&host->lock, flags);
+ 
+-		if ((events & event_mask) & MSDC_INT_SDIOIRQ) {
+-			__msdc_enable_sdio_irq(host->mmc, 0);
++		if ((events & event_mask) & MSDC_INT_SDIOIRQ)
+ 			sdio_signal_irq(host->mmc);
+-		}
+ 
+ 		if (!(events & (event_mask & ~MSDC_INT_SDIOIRQ)))
+ 			break;
+@@ -1528,10 +1531,7 @@ static void msdc_init_hw(struct msdc_host *host)
+ 	sdr_set_bits(host->base + SDC_CFG, SDC_CFG_SDIO);
+ 
+ 	/* Config SDIO device detect interrupt function */
+-	if (host->mmc->caps & MMC_CAP_SDIO_IRQ)
+-		sdr_set_bits(host->base + SDC_CFG, SDC_CFG_SDIOIDE);
+-	else
+-		sdr_clr_bits(host->base + SDC_CFG, SDC_CFG_SDIOIDE);
++	sdr_clr_bits(host->base + SDC_CFG, SDC_CFG_SDIOIDE);
+ 
+ 	/* Configure to default data timeout */
+ 	sdr_set_field(host->base + SDC_CFG, SDC_CFG_DTOC, 3);
+@@ -2052,7 +2052,12 @@ static void msdc_hw_reset(struct mmc_host *mmc)
+ 
+ static void msdc_ack_sdio_irq(struct mmc_host *mmc)
+ {
+-	__msdc_enable_sdio_irq(mmc, 1);
++	unsigned long flags;
++	struct msdc_host *host = mmc_priv(mmc);
++
++	spin_lock_irqsave(&host->lock, flags);
++	__msdc_enable_sdio_irq(host, 1);
++	spin_unlock_irqrestore(&host->lock, flags);
+ }
+ 
+ static const struct mmc_host_ops mt_msdc_ops = {
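
The mtk-sd refactor applies the usual kernel locking convention: the double-underscore helper now assumes the caller holds host->lock, while the wrapper, the IRQ handler and msdc_ack_sdio_irq() each take the lock themselves, letting a context that already holds it call the helper without self-deadlock. A minimal pthread model of that split:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int irq_enabled;

/* __-prefixed helper: caller must already hold @lock (kernel convention) */
static void __set_irq(int enb)
{
	irq_enabled = enb;
}

/* public wrapper: takes the lock, then calls the locked helper */
static void set_irq(int enb)
{
	pthread_mutex_lock(&lock);
	__set_irq(enb);
	pthread_mutex_unlock(&lock);
}

/*
 * An "interrupt handler" that holds the lock for other work can call
 * __set_irq() in the same critical section instead of self-deadlocking
 * on the wrapper.
 */
static void irq_handler(void)
{
	pthread_mutex_lock(&lock);
	__set_irq(0);		/* mask the source under the same lock */
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	set_irq(1);
	irq_handler();
	printf("irq_enabled = %d\n", irq_enabled);
	return 0;
}
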
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index 8742e27e4e8b..c42e442ba7ab 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -620,11 +620,16 @@ static const struct renesas_sdhi_quirks sdhi_quirks_h3_es2 = {
+ 	.hs400_4taps = true,
+ };
+ 
++static const struct renesas_sdhi_quirks sdhi_quirks_nohs400 = {
++	.hs400_disabled = true,
++};
++
+ static const struct soc_device_attribute sdhi_quirks_match[]  = {
+ 	{ .soc_id = "r8a7795", .revision = "ES1.*", .data = &sdhi_quirks_h3_m3w_es1 },
+ 	{ .soc_id = "r8a7795", .revision = "ES2.0", .data = &sdhi_quirks_h3_es2 },
+-	{ .soc_id = "r8a7796", .revision = "ES1.0", .data = &sdhi_quirks_h3_m3w_es1 },
+-	{ .soc_id = "r8a7796", .revision = "ES1.1", .data = &sdhi_quirks_h3_m3w_es1 },
++	{ .soc_id = "r8a7796", .revision = "ES1.[012]", .data = &sdhi_quirks_h3_m3w_es1 },
++	{ .soc_id = "r8a774a1", .revision = "ES1.[012]", .data = &sdhi_quirks_h3_m3w_es1 },
++	{ .soc_id = "r8a77980", .data = &sdhi_quirks_nohs400 },
+ 	{ /* Sentinel. */ },
+ };
+ 
+diff --git a/drivers/mmc/host/sdhci-pci-o2micro.c b/drivers/mmc/host/sdhci-pci-o2micro.c
+index 05a012a694b2..423c3339c03b 100644
+--- a/drivers/mmc/host/sdhci-pci-o2micro.c
++++ b/drivers/mmc/host/sdhci-pci-o2micro.c
+@@ -124,6 +124,7 @@ static int sdhci_o2_execute_tuning(struct mmc_host *mmc, u32 opcode)
+ 	 */
+ 	if (mmc->ios.bus_width == MMC_BUS_WIDTH_8) {
+ 		current_bus_width = mmc->ios.bus_width;
++		mmc->ios.bus_width = MMC_BUS_WIDTH_4;
+ 		sdhci_set_bus_width(host, MMC_BUS_WIDTH_4);
+ 	}
+ 
+@@ -135,8 +136,10 @@ static int sdhci_o2_execute_tuning(struct mmc_host *mmc, u32 opcode)
+ 
+ 	sdhci_end_tuning(host);
+ 
+-	if (current_bus_width == MMC_BUS_WIDTH_8)
++	if (current_bus_width == MMC_BUS_WIDTH_8) {
++		mmc->ios.bus_width = MMC_BUS_WIDTH_8;
+ 		sdhci_set_bus_width(host, current_bus_width);
++	}
+ 
+ 	host->flags &= ~SDHCI_HS400_TUNING;
+ 	return 0;
+diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c
+index 1c66fb2ad76b..f97c628eb2ad 100644
+--- a/drivers/net/can/flexcan.c
++++ b/drivers/net/can/flexcan.c
+@@ -166,7 +166,7 @@
+ #define FLEXCAN_MB_CNT_LENGTH(x)	(((x) & 0xf) << 16)
+ #define FLEXCAN_MB_CNT_TIMESTAMP(x)	((x) & 0xffff)
+ 
+-#define FLEXCAN_TIMEOUT_US		(50)
++#define FLEXCAN_TIMEOUT_US		(250)
+ 
+ /* FLEXCAN hardware feature flags
+  *
+diff --git a/drivers/net/can/xilinx_can.c b/drivers/net/can/xilinx_can.c
+index 97d0933d9bd9..e4a5038eba9a 100644
+--- a/drivers/net/can/xilinx_can.c
++++ b/drivers/net/can/xilinx_can.c
+@@ -1443,7 +1443,7 @@ static const struct xcan_devtype_data xcan_canfd_data = {
+ 		 XCAN_FLAG_RXMNF |
+ 		 XCAN_FLAG_TX_MAILBOXES |
+ 		 XCAN_FLAG_RX_FIFO_MULTI,
+-	.bittiming_const = &xcan_bittiming_const,
++	.bittiming_const = &xcan_bittiming_const_canfd,
+ 	.btr_ts2_shift = XCAN_BTR_TS2_SHIFT_CANFD,
+ 	.btr_sjw_shift = XCAN_BTR_SJW_SHIFT_CANFD,
+ 	.bus_clk_name = "s_axi_aclk",
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 720f1dde2c2d..ae750ab9a4d7 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -1517,7 +1517,7 @@ static int mv88e6xxx_vtu_get(struct mv88e6xxx_chip *chip, u16 vid,
+ 	int err;
+ 
+ 	if (!vid)
+-		return -EINVAL;
++		return -EOPNOTSUPP;
+ 
+ 	entry->vid = vid - 1;
+ 	entry->valid = false;
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c b/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
+index ce15d2350db9..188c3f6791b5 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
+@@ -339,6 +339,7 @@ static int __lb_setup(struct net_device *ndev,
+ static int __lb_up(struct net_device *ndev,
+ 		   enum hnae_loop loop_mode)
+ {
++#define NIC_LB_TEST_WAIT_PHY_LINK_TIME 300
+ 	struct hns_nic_priv *priv = netdev_priv(ndev);
+ 	struct hnae_handle *h = priv->ae_handle;
+ 	int speed, duplex;
+@@ -365,6 +366,9 @@ static int __lb_up(struct net_device *ndev,
+ 
+ 	h->dev->ops->adjust_link(h, speed, duplex);
+ 
++	/* wait adjust link done and phy ready */
++	/* wait for link adjustment to finish and the PHY to be ready */
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index 549d36497b8c..f3f7551162a9 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -1777,6 +1777,7 @@ static void mtk_poll_controller(struct net_device *dev)
+ 
+ static int mtk_start_dma(struct mtk_eth *eth)
+ {
++	u32 rx_2b_offset = (NET_IP_ALIGN == 2) ? MTK_RX_2B_OFFSET : 0;
+ 	int err;
+ 
+ 	err = mtk_dma_init(eth);
+@@ -1793,7 +1794,7 @@ static int mtk_start_dma(struct mtk_eth *eth)
+ 		MTK_QDMA_GLO_CFG);
+ 
+ 	mtk_w32(eth,
+-		MTK_RX_DMA_EN | MTK_RX_2B_OFFSET |
++		MTK_RX_DMA_EN | rx_2b_offset |
+ 		MTK_RX_BT_32DWORDS | MTK_MULTI_EN,
+ 		MTK_PDMA_GLO_CFG);
+ 
+@@ -2297,13 +2298,13 @@ static int mtk_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd,
+ 
+ 	switch (cmd->cmd) {
+ 	case ETHTOOL_GRXRINGS:
+-		if (dev->features & NETIF_F_LRO) {
++		if (dev->hw_features & NETIF_F_LRO) {
+ 			cmd->data = MTK_MAX_RX_RING_NUM;
+ 			ret = 0;
+ 		}
+ 		break;
+ 	case ETHTOOL_GRXCLSRLCNT:
+-		if (dev->features & NETIF_F_LRO) {
++		if (dev->hw_features & NETIF_F_LRO) {
+ 			struct mtk_mac *mac = netdev_priv(dev);
+ 
+ 			cmd->rule_cnt = mac->hwlro_ip_cnt;
+@@ -2311,11 +2312,11 @@ static int mtk_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd,
+ 		}
+ 		break;
+ 	case ETHTOOL_GRXCLSRULE:
+-		if (dev->features & NETIF_F_LRO)
++		if (dev->hw_features & NETIF_F_LRO)
+ 			ret = mtk_hwlro_get_fdir_entry(dev, cmd);
+ 		break;
+ 	case ETHTOOL_GRXCLSRLALL:
+-		if (dev->features & NETIF_F_LRO)
++		if (dev->hw_features & NETIF_F_LRO)
+ 			ret = mtk_hwlro_get_fdir_all(dev, cmd,
+ 						     rule_locs);
+ 		break;
+@@ -2332,11 +2333,11 @@ static int mtk_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd)
+ 
+ 	switch (cmd->cmd) {
+ 	case ETHTOOL_SRXCLSRLINS:
+-		if (dev->features & NETIF_F_LRO)
++		if (dev->hw_features & NETIF_F_LRO)
+ 			ret = mtk_hwlro_add_ipaddr(dev, cmd);
+ 		break;
+ 	case ETHTOOL_SRXCLSRLDEL:
+-		if (dev->features & NETIF_F_LRO)
++		if (dev->hw_features & NETIF_F_LRO)
+ 			ret = mtk_hwlro_del_ipaddr(dev, cmd);
+ 		break;
+ 	default:
+diff --git a/drivers/net/ipvlan/ipvlan_main.c b/drivers/net/ipvlan/ipvlan_main.c
+index bbeb1623e2d5..717fce6edeb7 100644
+--- a/drivers/net/ipvlan/ipvlan_main.c
++++ b/drivers/net/ipvlan/ipvlan_main.c
+@@ -112,7 +112,7 @@ static void ipvlan_port_destroy(struct net_device *dev)
+ }
+ 
+ #define IPVLAN_FEATURES \
+-	(NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST | \
++	(NETIF_F_SG | NETIF_F_CSUM_MASK | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST | \
+ 	 NETIF_F_GSO | NETIF_F_TSO | NETIF_F_GSO_ROBUST | \
+ 	 NETIF_F_TSO_ECN | NETIF_F_TSO6 | NETIF_F_GRO | NETIF_F_RXCSUM | \
+ 	 NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_HW_VLAN_STAG_FILTER)
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index efa31fcda505..611dfc3d89a0 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -1080,6 +1080,7 @@ EXPORT_SYMBOL_GPL(phylink_ethtool_ksettings_get);
+ int phylink_ethtool_ksettings_set(struct phylink *pl,
+ 				  const struct ethtool_link_ksettings *kset)
+ {
++	__ETHTOOL_DECLARE_LINK_MODE_MASK(support);
+ 	struct ethtool_link_ksettings our_kset;
+ 	struct phylink_link_state config;
+ 	int ret;
+@@ -1090,11 +1091,12 @@ int phylink_ethtool_ksettings_set(struct phylink *pl,
+ 	    kset->base.autoneg != AUTONEG_ENABLE)
+ 		return -EINVAL;
+ 
++	linkmode_copy(support, pl->supported);
+ 	config = pl->link_config;
+ 
+ 	/* Mask out unsupported advertisements */
+ 	linkmode_and(config.advertising, kset->link_modes.advertising,
+-		     pl->supported);
++		     support);
+ 
+ 	/* FIXME: should we reject autoneg if phy/mac does not support it? */
+ 	if (kset->base.autoneg == AUTONEG_DISABLE) {
+@@ -1104,7 +1106,7 @@ int phylink_ethtool_ksettings_set(struct phylink *pl,
+ 		 * duplex.
+ 		 */
+ 		s = phy_lookup_setting(kset->base.speed, kset->base.duplex,
+-				       pl->supported, false);
++				       support, false);
+ 		if (!s)
+ 			return -EINVAL;
+ 
+@@ -1133,7 +1135,7 @@ int phylink_ethtool_ksettings_set(struct phylink *pl,
+ 		__set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, config.advertising);
+ 	}
+ 
+-	if (phylink_validate(pl, pl->supported, &config))
++	if (phylink_validate(pl, support, &config))
+ 		return -EINVAL;
+ 
+ 	/* If autonegotiation is enabled, we must have an advertisement */
+@@ -1583,6 +1585,7 @@ static int phylink_sfp_module_insert(void *upstream,
+ {
+ 	struct phylink *pl = upstream;
+ 	__ETHTOOL_DECLARE_LINK_MODE_MASK(support) = { 0, };
++	__ETHTOOL_DECLARE_LINK_MODE_MASK(support1);
+ 	struct phylink_link_state config;
+ 	phy_interface_t iface;
+ 	int ret = 0;
+@@ -1610,6 +1613,8 @@ static int phylink_sfp_module_insert(void *upstream,
+ 		return ret;
+ 	}
+ 
++	linkmode_copy(support1, support);
++
+ 	iface = sfp_select_interface(pl->sfp_bus, id, config.advertising);
+ 	if (iface == PHY_INTERFACE_MODE_NA) {
+ 		netdev_err(pl->netdev,
+@@ -1619,7 +1624,7 @@ static int phylink_sfp_module_insert(void *upstream,
+ 	}
+ 
+ 	config.interface = iface;
+-	ret = phylink_validate(pl, support, &config);
++	ret = phylink_validate(pl, support1, &config);
+ 	if (ret) {
+ 		netdev_err(pl->netdev, "validation of %s/%s with support %*pb failed: %d\n",
+ 			   phylink_an_mode_str(MLO_AN_INBAND),
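
The phylink fix copies pl->supported into a local mask because phylink_validate() prunes the mask it is handed as a side effect; validating the copy keeps the device's published capabilities intact when a request is rejected. A simplified standalone model, with the link-mode bitmap reduced to a single u32:

#include <stdio.h>
#include <stdint.h>

/* validate() narrows @mask as a side effect, like phylink_validate() */
static int validate(uint32_t *mask, uint32_t advertising)
{
	*mask &= advertising;
	return *mask ? 0 : -1;	/* reject when nothing is left */
}

int main(void)
{
	uint32_t supported = 0x0f;	/* the published capabilities */
	uint32_t support = supported;	/* scratch copy (linkmode_copy) */

	if (validate(&support, 0x0))
		printf("request rejected; supported still %#x\n", supported);
	return 0;
}
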
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+index 4d104ab80fd8..f757c9e72e71 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+@@ -676,6 +676,12 @@ brcmf_sdio_kso_control(struct brcmf_sdio *bus, bool on)
+ 
+ 	brcmf_dbg(TRACE, "Enter: on=%d\n", on);
+ 
++	sdio_retune_crc_disable(bus->sdiodev->func1);
++
++	/* Cannot re-tune if device is asleep; defer till we're awake */
++	if (on)
++		sdio_retune_hold_now(bus->sdiodev->func1);
++
+ 	wr_val = (on << SBSDIO_FUNC1_SLEEPCSR_KSO_SHIFT);
+ 	/* 1st KSO write goes to AOS wake up core if device is asleep  */
+ 	brcmf_sdiod_writeb(bus->sdiodev, SBSDIO_FUNC1_SLEEPCSR, wr_val, &err);
+@@ -736,6 +742,11 @@ brcmf_sdio_kso_control(struct brcmf_sdio *bus, bool on)
+ 	if (try_cnt > MAX_KSO_ATTEMPTS)
+ 		brcmf_err("max tries: rd_val=0x%x err=%d\n", rd_val, err);
+ 
++	if (on)
++		sdio_retune_release(bus->sdiodev->func1);
++
++	sdio_retune_crc_enable(bus->sdiodev->func1);
++
+ 	return err;
+ }
+ 
+@@ -3373,11 +3384,7 @@ err:
+ 
+ static bool brcmf_sdio_aos_no_decode(struct brcmf_sdio *bus)
+ {
+-	if (bus->ci->chip == CY_CC_43012_CHIP_ID ||
+-	    bus->ci->chip == CY_CC_4373_CHIP_ID ||
+-	    bus->ci->chip == BRCM_CC_4339_CHIP_ID ||
+-	    bus->ci->chip == BRCM_CC_4345_CHIP_ID ||
+-	    bus->ci->chip == BRCM_CC_4354_CHIP_ID)
++	if (bus->ci->chip == CY_CC_43012_CHIP_ID)
+ 		return true;
+ 	else
+ 		return false;
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 35d2202ee2fd..3a390b2c7540 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -3397,7 +3397,8 @@ static int nvme_scan_ns_list(struct nvme_ctrl *ctrl, unsigned nn)
+ {
+ 	struct nvme_ns *ns;
+ 	__le32 *ns_list;
+-	unsigned i, j, nsid, prev = 0, num_lists = DIV_ROUND_UP(nn, 1024);
++	unsigned i, j, nsid, prev = 0;
++	unsigned num_lists = DIV_ROUND_UP_ULL((u64)nn, 1024);
+ 	int ret = 0;
+ 
+ 	ns_list = kzalloc(NVME_IDENTIFY_DATA_SIZE, GFP_KERNEL);
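
The nvme change avoids a 32-bit overflow: DIV_ROUND_UP() computes (nn + 1023) / 1024, and for nn near UINT_MAX the addition wraps, yielding zero scan lists. Widening before the addition fixes it. A standalone demonstration; the 64-bit macro below mirrors the kernel's DIV_ROUND_UP_ULL in spirit:

#include <stdio.h>
#include <stdint.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
/* widen before the addition, as DIV_ROUND_UP_ULL does */
#define DIV_ROUND_UP_ULL(n, d)	(((unsigned long long)(n) + (d) - 1) / (d))

int main(void)
{
	uint32_t nn = 0xffffffffu;	/* maximum reported namespace count */

	/* 32-bit arithmetic: nn + 1023 wraps, so we get 0 lists */
	printf("32-bit: %u lists\n", DIV_ROUND_UP(nn, 1024u));
	/* 64-bit arithmetic: the correct 4194304 lists */
	printf("64-bit: %llu lists\n", DIV_ROUND_UP_ULL(nn, 1024u));
	return 0;
}
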
+diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
+index a065dbfc43b1..a77fd8674ecf 100644
+--- a/drivers/nvme/target/io-cmd-bdev.c
++++ b/drivers/nvme/target/io-cmd-bdev.c
+@@ -295,6 +295,7 @@ u16 nvmet_bdev_parse_io_cmd(struct nvmet_req *req)
+ 		return 0;
+ 	case nvme_cmd_write_zeroes:
+ 		req->execute = nvmet_bdev_execute_write_zeroes;
++		req->data_len = 0;
+ 		return 0;
+ 	default:
+ 		pr_err("unhandled cmd %d on qid %d\n", cmd->common.opcode,
+diff --git a/drivers/parport/share.c b/drivers/parport/share.c
+index 5dc53d420ca8..7b4ee33c1935 100644
+--- a/drivers/parport/share.c
++++ b/drivers/parport/share.c
+@@ -895,6 +895,7 @@ parport_register_dev_model(struct parport *port, const char *name,
+ 	par_dev->devmodel = true;
+ 	ret = device_register(&par_dev->dev);
+ 	if (ret) {
++		kfree(par_dev->state);
+ 		put_device(&par_dev->dev);
+ 		goto err_put_port;
+ 	}
+@@ -912,6 +913,7 @@ parport_register_dev_model(struct parport *port, const char *name,
+ 			spin_unlock(&port->physport->pardevice_lock);
+ 			pr_debug("%s: cannot grant exclusive access for device %s\n",
+ 				 port->name, name);
++			kfree(par_dev->state);
+ 			device_unregister(&par_dev->dev);
+ 			goto err_put_port;
+ 		}
+diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c
+index c3067fd3bd9e..fece768efcb1 100644
+--- a/drivers/s390/net/qeth_l2_main.c
++++ b/drivers/s390/net/qeth_l2_main.c
+@@ -1679,7 +1679,7 @@ static void qeth_bridgeport_an_set_cb(void *priv,
+ 
+ 	l2entry = (struct qdio_brinfo_entry_l2 *)entry;
+ 	code = IPA_ADDR_CHANGE_CODE_MACADDR;
+-	if (l2entry->addr_lnid.lnid)
++	if (l2entry->addr_lnid.lnid < VLAN_N_VID)
+ 		code |= IPA_ADDR_CHANGE_CODE_VLANID;
+ 	qeth_bridge_emit_host_event(card, anev_reg_unreg, code,
+ 		(struct net_if_token *)&l2entry->nit,
+diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c
+index 53712cf26406..93a5748036de 100644
+--- a/drivers/s390/net/qeth_l3_main.c
++++ b/drivers/s390/net/qeth_l3_main.c
+@@ -1883,13 +1883,20 @@ static int qeth_l3_do_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+ 
+ static int qeth_l3_get_cast_type(struct sk_buff *skb)
+ {
++	int ipv = qeth_get_ip_version(skb);
+ 	struct neighbour *n = NULL;
+ 	struct dst_entry *dst;
+ 
+ 	rcu_read_lock();
+ 	dst = skb_dst(skb);
+-	if (dst)
+-		n = dst_neigh_lookup_skb(dst, skb);
++	if (dst) {
++		struct rt6_info *rt = (struct rt6_info *) dst;
++
++		dst = dst_check(dst, (ipv == 6) ? rt6_get_cookie(rt) : 0);
++		if (dst)
++			n = dst_neigh_lookup_skb(dst, skb);
++	}
++
+ 	if (n) {
+ 		int cast_type = n->type;
+ 
+@@ -1904,8 +1911,10 @@ static int qeth_l3_get_cast_type(struct sk_buff *skb)
+ 	rcu_read_unlock();
+ 
+ 	/* no neighbour (eg AF_PACKET), fall back to target's IP address ... */
+-	switch (qeth_get_ip_version(skb)) {
++	switch (ipv) {
+ 	case 4:
++		if (ipv4_is_lbcast(ip_hdr(skb)->daddr))
++			return RTN_BROADCAST;
+ 		return ipv4_is_multicast(ip_hdr(skb)->daddr) ?
+ 				RTN_MULTICAST : RTN_UNICAST;
+ 	case 6:
+@@ -1941,6 +1950,7 @@ static void qeth_l3_fill_header(struct qeth_qdio_out_q *queue,
+ 	struct qeth_hdr_layer3 *l3_hdr = &hdr->hdr.l3;
+ 	struct vlan_ethhdr *veth = vlan_eth_hdr(skb);
+ 	struct qeth_card *card = queue->card;
++	struct dst_entry *dst;
+ 
+ 	hdr->hdr.l3.length = data_len;
+ 
+@@ -1991,15 +2001,27 @@ static void qeth_l3_fill_header(struct qeth_qdio_out_q *queue,
+ 
+ 	hdr->hdr.l3.flags = qeth_l3_cast_type_to_flag(cast_type);
+ 	rcu_read_lock();
++	dst = skb_dst(skb);
++
+ 	if (ipv == 4) {
+-		struct rtable *rt = skb_rtable(skb);
++		struct rtable *rt;
++
++		if (dst)
++			dst = dst_check(dst, 0);
++		rt = (struct rtable *) dst;
+ 
+ 		*((__be32 *) &hdr->hdr.l3.next_hop.ipv4.addr) = (rt) ?
+ 				rt_nexthop(rt, ip_hdr(skb)->daddr) :
+ 				ip_hdr(skb)->daddr;
+ 	} else {
+ 		/* IPv6 */
+-		const struct rt6_info *rt = skb_rt6_info(skb);
++		struct rt6_info *rt;
++
++		if (dst) {
++			rt = (struct rt6_info *) dst;
++			dst = dst_check(dst, rt6_get_cookie(rt));
++		}
++		rt = (struct rt6_info *) dst;
+ 
+ 		if (rt && !ipv6_addr_any(&rt->rt6i_gateway))
+ 			l3_hdr->next_hop.ipv6_addr = rt->rt6i_gateway;
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index 531824afba5f..392695b4691a 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -4044,8 +4044,10 @@ static int pqi_submit_raid_request_synchronous(struct pqi_ctrl_info *ctrl_info,
+ 				return -ETIMEDOUT;
+ 			msecs_blocked =
+ 				jiffies_to_msecs(jiffies - start_jiffies);
+-			if (msecs_blocked >= timeout_msecs)
+-				return -ETIMEDOUT;
++			if (msecs_blocked >= timeout_msecs) {
++				rc = -ETIMEDOUT;
++				goto out;
++			}
+ 			timeout_msecs -= msecs_blocked;
+ 		}
+ 	}
+diff --git a/drivers/scsi/ufs/ufshcd-pltfrm.c b/drivers/scsi/ufs/ufshcd-pltfrm.c
+index 27213676329c..848c7478efd6 100644
+--- a/drivers/scsi/ufs/ufshcd-pltfrm.c
++++ b/drivers/scsi/ufs/ufshcd-pltfrm.c
+@@ -340,24 +340,21 @@ int ufshcd_pltfrm_init(struct platform_device *pdev,
+ 		goto dealloc_host;
+ 	}
+ 
+-	pm_runtime_set_active(&pdev->dev);
+-	pm_runtime_enable(&pdev->dev);
+-
+ 	ufshcd_init_lanes_per_dir(hba);
+ 
+ 	err = ufshcd_init(hba, mmio_base, irq);
+ 	if (err) {
+ 		dev_err(dev, "Initialization failed\n");
+-		goto out_disable_rpm;
++		goto dealloc_host;
+ 	}
+ 
+ 	platform_set_drvdata(pdev, hba);
+ 
++	pm_runtime_set_active(&pdev->dev);
++	pm_runtime_enable(&pdev->dev);
++
+ 	return 0;
+ 
+-out_disable_rpm:
+-	pm_runtime_disable(&pdev->dev);
+-	pm_runtime_set_suspended(&pdev->dev);
+ dealloc_host:
+ 	ufshcd_dealloc_host(hba);
+ out:
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 5ba49c8cd2a3..dbd1f8c253bf 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -1917,7 +1917,8 @@ int ufshcd_copy_query_response(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
+ 	memcpy(&query_res->upiu_res, &lrbp->ucd_rsp_ptr->qr, QUERY_OSF_SIZE);
+ 
+ 	/* Get the descriptor */
+-	if (lrbp->ucd_rsp_ptr->qr.opcode == UPIU_QUERY_OPCODE_READ_DESC) {
++	if (hba->dev_cmd.query.descriptor &&
++	    lrbp->ucd_rsp_ptr->qr.opcode == UPIU_QUERY_OPCODE_READ_DESC) {
+ 		u8 *descp = (u8 *)lrbp->ucd_rsp_ptr +
+ 				GENERAL_UPIU_REQUEST_SIZE;
+ 		u16 resp_len;
+diff --git a/drivers/staging/erofs/erofs_fs.h b/drivers/staging/erofs/erofs_fs.h
+index fa52898df006..8ddb2b3e7d39 100644
+--- a/drivers/staging/erofs/erofs_fs.h
++++ b/drivers/staging/erofs/erofs_fs.h
+@@ -17,10 +17,16 @@
+ #define EROFS_SUPER_MAGIC_V1    0xE0F5E1E2
+ #define EROFS_SUPER_OFFSET      1024
+ 
++/*
++ * Any bits that aren't in EROFS_ALL_REQUIREMENTS should be
++ * incompatible with this kernel version.
++ */
++#define EROFS_ALL_REQUIREMENTS  0
++
+ struct erofs_super_block {
+ /*  0 */__le32 magic;           /* in the little endian */
+ /*  4 */__le32 checksum;        /* crc32c(super_block) */
+-/*  8 */__le32 features;
++/*  8 */__le32 features;        /* (aka. feature_compat) */
+ /* 12 */__u8 blkszbits;         /* support block_size == PAGE_SIZE only */
+ /* 13 */__u8 reserved;
+ 
+@@ -34,9 +40,10 @@ struct erofs_super_block {
+ /* 44 */__le32 xattr_blkaddr;
+ /* 48 */__u8 uuid[16];          /* 128-bit uuid for volume */
+ /* 64 */__u8 volume_name[16];   /* volume name */
++/* 80 */__le32 requirements;    /* (aka. feature_incompat) */
+ 
+-/* 80 */__u8 reserved2[48];     /* 128 bytes */
+-} __packed;
++/* 84 */__u8 reserved2[44];
++} __packed;                     /* 128 bytes */
+ 
+ /*
+  * erofs inode data mapping:
+diff --git a/drivers/staging/erofs/internal.h b/drivers/staging/erofs/internal.h
+index e3bfde00c7d2..20cf6e7e170f 100644
+--- a/drivers/staging/erofs/internal.h
++++ b/drivers/staging/erofs/internal.h
+@@ -114,6 +114,8 @@ struct erofs_sb_info {
+ 
+ 	u8 uuid[16];                    /* 128-bit uuid for volume */
+ 	u8 volume_name[16];             /* volume name */
++	u32 requirements;
++
+ 	char *dev_name;
+ 
+ 	unsigned int mount_opt;
+diff --git a/drivers/staging/erofs/super.c b/drivers/staging/erofs/super.c
+index c8981662a49b..2ed53dd7f50c 100644
+--- a/drivers/staging/erofs/super.c
++++ b/drivers/staging/erofs/super.c
+@@ -76,6 +76,22 @@ static void destroy_inode(struct inode *inode)
+ 	call_rcu(&inode->i_rcu, i_callback);
+ }
+ 
++static bool check_layout_compatibility(struct super_block *sb,
++				       struct erofs_super_block *layout)
++{
++	const unsigned int requirements = le32_to_cpu(layout->requirements);
++
++	EROFS_SB(sb)->requirements = requirements;
++
++	/* check if current kernel meets all mandatory requirements */
++	if (requirements & (~EROFS_ALL_REQUIREMENTS)) {
++		errln("unidentified requirements %x, please upgrade kernel version",
++		      requirements & ~EROFS_ALL_REQUIREMENTS);
++		return false;
++	}
++	return true;
++}
++
+ static int superblock_read(struct super_block *sb)
+ {
+ 	struct erofs_sb_info *sbi;
+@@ -109,6 +125,9 @@ static int superblock_read(struct super_block *sb)
+ 		goto out;
+ 	}
+ 
++	if (!check_layout_compatibility(sb, layout))
++		goto out;
++
+ 	sbi->blocks = le32_to_cpu(layout->blocks);
+ 	sbi->meta_blkaddr = le32_to_cpu(layout->meta_blkaddr);
+ #ifdef CONFIG_EROFS_FS_XATTR
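
check_layout_compatibility() implements the usual feature_incompat contract: any requirements bit the running kernel does not recognize makes the image unmountable, while unknown compat bits remain harmless. A standalone sketch with EROFS_ALL_REQUIREMENTS still 0, as in this kernel:

#include <stdio.h>
#include <stdint.h>

#define EROFS_ALL_REQUIREMENTS	0u	/* bits this "kernel" understands */

static int mountable(uint32_t requirements)
{
	uint32_t unknown = requirements & ~EROFS_ALL_REQUIREMENTS;

	if (unknown) {
		fprintf(stderr, "unidentified requirements %x\n", unknown);
		return 0;	/* refuse the mount */
	}
	return 1;
}

int main(void)
{
	printf("0x0 -> %s\n", mountable(0x0) ? "mount" : "refuse");
	printf("0x1 -> %s\n", mountable(0x1) ? "mount" : "refuse");
	return 0;
}
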
+diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c
+index 829e947cabf5..6a5ee8e6da10 100644
+--- a/drivers/usb/chipidea/udc.c
++++ b/drivers/usb/chipidea/udc.c
+@@ -1622,6 +1622,25 @@ static int ci_udc_pullup(struct usb_gadget *_gadget, int is_on)
+ static int ci_udc_start(struct usb_gadget *gadget,
+ 			 struct usb_gadget_driver *driver);
+ static int ci_udc_stop(struct usb_gadget *gadget);
++
++/* Match ISOC IN from the highest endpoint */
++static struct usb_ep *ci_udc_match_ep(struct usb_gadget *gadget,
++			      struct usb_endpoint_descriptor *desc,
++			      struct usb_ss_ep_comp_descriptor *comp_desc)
++{
++	struct ci_hdrc *ci = container_of(gadget, struct ci_hdrc, gadget);
++	struct usb_ep *ep;
++
++	if (usb_endpoint_xfer_isoc(desc) && usb_endpoint_dir_in(desc)) {
++		list_for_each_entry_reverse(ep, &ci->gadget.ep_list, ep_list) {
++			if (ep->caps.dir_in && !ep->claimed)
++				return ep;
++		}
++	}
++
++	return NULL;
++}
++
+ /**
+  * Device operations part of the API to the USB controller hardware,
+  * which don't involve endpoints (or i/o)
+@@ -1635,6 +1654,7 @@ static const struct usb_gadget_ops usb_gadget_ops = {
+ 	.vbus_draw	= ci_udc_vbus_draw,
+ 	.udc_start	= ci_udc_start,
+ 	.udc_stop	= ci_udc_stop,
++	.match_ep	= ci_udc_match_ep,
+ };
+ 
+ static int init_eps(struct ci_hdrc *ci)
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 765ef5f1ffb8..3c8e65900dcb 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1608,8 +1608,13 @@ static void handle_port_status(struct xhci_hcd *xhci,
+ 		usb_hcd_resume_root_hub(hcd);
+ 	}
+ 
+-	if (hcd->speed >= HCD_USB3 && (portsc & PORT_PLS_MASK) == XDEV_INACTIVE)
++	if (hcd->speed >= HCD_USB3 &&
++	    (portsc & PORT_PLS_MASK) == XDEV_INACTIVE) {
++		slot_id = xhci_find_slot_id_by_port(hcd, xhci, hcd_portnum + 1);
++		if (slot_id && xhci->devs[slot_id])
++			xhci->devs[slot_id]->flags |= VDEV_PORT_ERROR;
+ 		bus_state->port_remote_wakeup &= ~(1 << hcd_portnum);
++	}
+ 
+ 	if ((portsc & PORT_PLC) && (portsc & PORT_PLS_MASK) == XDEV_RESUME) {
+ 		xhci_dbg(xhci, "port resume event for port %d\n", port_id);
+@@ -1797,6 +1802,14 @@ static void xhci_cleanup_halted_endpoint(struct xhci_hcd *xhci,
+ {
+ 	struct xhci_virt_ep *ep = &xhci->devs[slot_id]->eps[ep_index];
+ 	struct xhci_command *command;
++
++	/*
++	 * Avoid resetting the endpoint if the link is inactive; doing so can
++	 * hang the host. The device will be reset soon to recover the link,
++	 * so don't do anything here.
++	 */
++	if (xhci->devs[slot_id]->flags & VDEV_PORT_ERROR)
++		return;
++
+ 	command = xhci_alloc_command(xhci, false, GFP_ATOMIC);
+ 	if (!command)
+ 		return;
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 448e3f812833..f39ca3980e48 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -1442,6 +1442,10 @@ static int xhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flag
+ 			xhci_dbg(xhci, "urb submitted during PCI suspend\n");
+ 		return -ESHUTDOWN;
+ 	}
++	if (xhci->devs[slot_id]->flags & VDEV_PORT_ERROR) {
++		xhci_dbg(xhci, "Can't queue urb, port error, link inactive\n");
++		return -ENODEV;
++	}
+ 
+ 	if (usb_endpoint_xfer_isoc(&urb->ep->desc))
+ 		num_tds = urb->number_of_packets;
+@@ -3724,6 +3728,7 @@ static int xhci_discover_or_reset_device(struct usb_hcd *hcd,
+ 	}
+ 	/* If necessary, update the number of active TTs on this root port */
+ 	xhci_update_tt_active_eps(xhci, virt_dev, old_active_eps);
++	virt_dev->flags = 0;
+ 	ret = 0;
+ 
+ command_cleanup:
+@@ -5029,16 +5034,26 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks)
+ 	} else {
+ 		/*
+ 		 * Some 3.1 hosts return sbrn 0x30, use xhci supported protocol
+-		 * minor revision instead of sbrn
++		 * minor revision instead of sbrn. Minor revision is a two digit
++		 * BCD containing minor and sub-minor numbers, only show minor.
+ 		 */
+-		minor_rev = xhci->usb3_rhub.min_rev;
+-		if (minor_rev) {
++		minor_rev = xhci->usb3_rhub.min_rev / 0x10;
++
++		switch (minor_rev) {
++		case 2:
++			hcd->speed = HCD_USB32;
++			hcd->self.root_hub->speed = USB_SPEED_SUPER_PLUS;
++			hcd->self.root_hub->rx_lanes = 2;
++			hcd->self.root_hub->tx_lanes = 2;
++			break;
++		case 1:
+ 			hcd->speed = HCD_USB31;
+ 			hcd->self.root_hub->speed = USB_SPEED_SUPER_PLUS;
++			break;
+ 		}
+-		xhci_info(xhci, "Host supports USB 3.%x %s SuperSpeed\n",
++		xhci_info(xhci, "Host supports USB 3.%x %sSuperSpeed\n",
+ 			  minor_rev,
+-			  minor_rev ? "Enhanced" : "");
++			  minor_rev ? "Enhanced " : "");
+ 
+ 		xhci->usb3_rhub.hcd = hcd;
+ 		/* xHCI private pointer was set in xhci_pci_probe for the second
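
The sbrn fallback decodes the supported-protocol minor revision as two-digit BCD, high nibble minor and low nibble sub-minor, so dividing by 0x10 keeps just the minor digit: 0x10 is USB 3.1, 0x20 is USB 3.2. A standalone decode of a few sample values:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* two-digit BCD: high nibble = minor, low nibble = sub-minor */
	uint8_t samples[] = { 0x00, 0x10, 0x20, 0x21 };
	unsigned int i;

	for (i = 0; i < sizeof(samples); i++) {
		uint8_t minor = samples[i] / 0x10;	/* keep minor only */

		printf("min_rev 0x%02x -> USB 3.%u %s\n", samples[i], minor,
		       minor ? "Enhanced SuperSpeed" : "SuperSpeed");
	}
	return 0;
}
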
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 9334cdee382a..a0035e7b62d8 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1010,6 +1010,15 @@ struct xhci_virt_device {
+ 	u8				real_port;
+ 	struct xhci_interval_bw_table	*bw_table;
+ 	struct xhci_tt_bw_info		*tt_info;
++	/*
++	 * flags for state tracking based on events and issued commands.
++	 * Software cannot rely on states from output contexts because of
++	 * latency between events and xHC updating output context values.
++	 * See xhci 1.1 section 4.8.3 for more details
++	 */
++	unsigned long			flags;
++#define VDEV_PORT_ERROR			BIT(0) /* Port error, link inactive */
++
+ 	/* The current max exit latency for the enabled USB3 link states. */
+ 	u16				current_mel;
+ 	/* Used for the debugfs interfaces. */
+diff --git a/fs/btrfs/reada.c b/fs/btrfs/reada.c
+index 10d9589001a9..bb5bd49573b4 100644
+--- a/fs/btrfs/reada.c
++++ b/fs/btrfs/reada.c
+@@ -747,6 +747,7 @@ static void __reada_start_machine(struct btrfs_fs_info *fs_info)
+ 	u64 total = 0;
+ 	int i;
+ 
++again:
+ 	do {
+ 		enqueued = 0;
+ 		mutex_lock(&fs_devices->device_list_mutex);
+@@ -758,6 +759,10 @@ static void __reada_start_machine(struct btrfs_fs_info *fs_info)
+ 		mutex_unlock(&fs_devices->device_list_mutex);
+ 		total += enqueued;
+ 	} while (enqueued && total < 10000);
++	if (fs_devices->seed) {
++		fs_devices = fs_devices->seed;
++		goto again;
++	}
+ 
+ 	if (enqueued == 0)
+ 		return;
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index a05bf1d6e1d0..2ab3de440927 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -303,6 +303,7 @@ cifs_alloc_inode(struct super_block *sb)
+ 	cifs_inode->uniqueid = 0;
+ 	cifs_inode->createtime = 0;
+ 	cifs_inode->epoch = 0;
++	spin_lock_init(&cifs_inode->open_file_lock);
+ 	generate_random_uuid(cifs_inode->lease_key);
+ 
+ 	/*
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 607468948f72..a588fbc54968 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -1357,6 +1357,7 @@ struct cifsInodeInfo {
+ 	struct rw_semaphore lock_sem;	/* protect the fields above */
+ 	/* BB add in lists for dirty pages i.e. write caching info for oplock */
+ 	struct list_head openFileList;
++	spinlock_t	open_file_lock;	/* protects openFileList */
+ 	__u32 cifsAttrs; /* e.g. DOS archive bit, sparse, compressed, system */
+ 	unsigned int oplock;		/* oplock/lease level we have */
+ 	unsigned int epoch;		/* used to track lease state changes */
+@@ -1760,10 +1761,14 @@ require use of the stronger protocol */
+  *  tcp_ses_lock protects:
+  *	list operations on tcp and SMB session lists
+  *  tcon->open_file_lock protects the list of open files hanging off the tcon
++ *  inode->open_file_lock protects the openFileList hanging off the inode
+  *  cfile->file_info_lock protects counters and fields in cifs file struct
+  *  f_owner.lock protects certain per file struct operations
+  *  mapping->page_lock protects certain per page operations
+  *
++ *  Note that the cifs_tcon.open_file_lock should be taken before,
++ *  not after, the cifsInodeInfo.open_file_lock
++ *
+  *  Semaphores
+  *  ----------
+  *  sesSem     operations on smb session
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 4c0e44489f21..e9507fba0b36 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -478,6 +478,7 @@ cifs_reconnect(struct TCP_Server_Info *server)
+ 	spin_lock(&GlobalMid_Lock);
+ 	server->nr_targets = 1;
+ #ifdef CONFIG_CIFS_DFS_UPCALL
++	spin_unlock(&GlobalMid_Lock);
+ 	cifs_sb = find_super_by_tcp(server);
+ 	if (IS_ERR(cifs_sb)) {
+ 		rc = PTR_ERR(cifs_sb);
+@@ -495,6 +496,7 @@ cifs_reconnect(struct TCP_Server_Info *server)
+ 	}
+ 	cifs_dbg(FYI, "%s: will retry %d target(s)\n", __func__,
+ 		 server->nr_targets);
++	spin_lock(&GlobalMid_Lock);
+ #endif
+ 	if (server->tcpStatus == CifsExiting) {
+ 		/* the demux thread will exit normally
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 9a1db37b303a..736a61843e73 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -338,10 +338,12 @@ cifs_new_fileinfo(struct cifs_fid *fid, struct file *file,
+ 	atomic_inc(&tcon->num_local_opens);
+ 
+ 	/* if readable file instance put first in list*/
++	spin_lock(&cinode->open_file_lock);
+ 	if (file->f_mode & FMODE_READ)
+ 		list_add(&cfile->flist, &cinode->openFileList);
+ 	else
+ 		list_add_tail(&cfile->flist, &cinode->openFileList);
++	spin_unlock(&cinode->open_file_lock);
+ 	spin_unlock(&tcon->open_file_lock);
+ 
+ 	if (fid->purge_cache)
+@@ -413,7 +415,9 @@ void _cifsFileInfo_put(struct cifsFileInfo *cifs_file, bool wait_oplock_handler)
+ 	cifs_add_pending_open_locked(&fid, cifs_file->tlink, &open);
+ 
+ 	/* remove it from the lists */
++	spin_lock(&cifsi->open_file_lock);
+ 	list_del(&cifs_file->flist);
++	spin_unlock(&cifsi->open_file_lock);
+ 	list_del(&cifs_file->tlist);
+ 	atomic_dec(&tcon->num_local_opens);
+ 
+@@ -1950,9 +1954,9 @@ refind_writable:
+ 			return 0;
+ 		}
+ 
+-		spin_lock(&tcon->open_file_lock);
++		spin_lock(&cifs_inode->open_file_lock);
+ 		list_move_tail(&inv_file->flist, &cifs_inode->openFileList);
+-		spin_unlock(&tcon->open_file_lock);
++		spin_unlock(&cifs_inode->open_file_lock);
+ 		cifsFileInfo_put(inv_file);
+ 		++refind;
+ 		inv_file = NULL;
+diff --git a/fs/cifs/smb2maperror.c b/fs/cifs/smb2maperror.c
+index e32c264e3adb..82ade16c9501 100644
+--- a/fs/cifs/smb2maperror.c
++++ b/fs/cifs/smb2maperror.c
+@@ -457,7 +457,7 @@ static const struct status_to_posix_error smb2_error_map_table[] = {
+ 	{STATUS_FILE_INVALID, -EIO, "STATUS_FILE_INVALID"},
+ 	{STATUS_ALLOTTED_SPACE_EXCEEDED, -EIO,
+ 	"STATUS_ALLOTTED_SPACE_EXCEEDED"},
+-	{STATUS_INSUFFICIENT_RESOURCES, -EREMOTEIO,
++	{STATUS_INSUFFICIENT_RESOURCES, -EAGAIN,
+ 				"STATUS_INSUFFICIENT_RESOURCES"},
+ 	{STATUS_DFS_EXIT_PATH_FOUND, -EIO, "STATUS_DFS_EXIT_PATH_FOUND"},
+ 	{STATUS_DEVICE_DATA_ERROR, -EIO, "STATUS_DEVICE_DATA_ERROR"},
+diff --git a/fs/namespace.c b/fs/namespace.c
+index c9cab307fa77..061f247a3cdb 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -2079,6 +2079,7 @@ static int attach_recursive_mnt(struct mount *source_mnt,
+ 		/* Notice when we are propagating across user namespaces */
+ 		if (child->mnt_parent->mnt_ns->user_ns != user_ns)
+ 			lock_mnt_tree(child);
++		child->mnt.mnt_flags &= ~MNT_LOCKED;
+ 		commit_tree(child);
+ 	}
+ 	put_mountpoint(smp);
+diff --git a/fs/overlayfs/inode.c b/fs/overlayfs/inode.c
+index b48273e846ad..f0389849fd80 100644
+--- a/fs/overlayfs/inode.c
++++ b/fs/overlayfs/inode.c
+@@ -553,15 +553,15 @@ static void ovl_fill_inode(struct inode *inode, umode_t mode, dev_t rdev,
+ 	int xinobits = ovl_xino_bits(inode->i_sb);
+ 
+ 	/*
+-	 * When NFS export is enabled and d_ino is consistent with st_ino
+-	 * (samefs or i_ino has enough bits to encode layer), set the same
+-	 * value used for d_ino to i_ino, because nfsd readdirplus compares
+-	 * d_ino values to i_ino values of child entries. When called from
++	 * When d_ino is consistent with st_ino (samefs or i_ino has enough
++	 * bits to encode layer), set the same value used for st_ino to i_ino,
++	 * so inode numbers exposed via /proc/locks and the like will be
++	 * consistent with d_ino and st_ino values. An i_ino value inconsistent
++	 * with d_ino also causes nfsd readdirplus to fail.  When called from
+ 	 * ovl_new_inode(), ino arg is 0, so i_ino will be updated to real
+ 	 * upper inode i_ino on ovl_inode_init() or ovl_inode_update().
+ 	 */
+-	if (inode->i_sb->s_export_op &&
+-	    (ovl_same_sb(inode->i_sb) || xinobits)) {
++	if (ovl_same_sb(inode->i_sb) || xinobits) {
+ 		inode->i_ino = ino;
+ 		if (xinobits && fsid && !(ino >> (64 - xinobits)))
+ 			inode->i_ino |= (unsigned long)fsid << (64 - xinobits);
+@@ -777,6 +777,54 @@ struct inode *ovl_lookup_inode(struct super_block *sb, struct dentry *real,
+ 	return inode;
+ }
+ 
++bool ovl_lookup_trap_inode(struct super_block *sb, struct dentry *dir)
++{
++	struct inode *key = d_inode(dir);
++	struct inode *trap;
++	bool res;
++
++	trap = ilookup5(sb, (unsigned long) key, ovl_inode_test, key);
++	if (!trap)
++		return false;
++
++	res = IS_DEADDIR(trap) && !ovl_inode_upper(trap) &&
++				  !ovl_inode_lower(trap);
++
++	iput(trap);
++	return res;
++}
++
++/*
++ * Create an inode cache entry for layer root dir, that will intentionally
++ * fail ovl_verify_inode(), so any lookup that will find some layer root
++ * will fail.
++ */
++struct inode *ovl_get_trap_inode(struct super_block *sb, struct dentry *dir)
++{
++	struct inode *key = d_inode(dir);
++	struct inode *trap;
++
++	if (!d_is_dir(dir))
++		return ERR_PTR(-ENOTDIR);
++
++	trap = iget5_locked(sb, (unsigned long) key, ovl_inode_test,
++			    ovl_inode_set, key);
++	if (!trap)
++		return ERR_PTR(-ENOMEM);
++
++	if (!(trap->i_state & I_NEW)) {
++		/* Conflicting layer roots? */
++		iput(trap);
++		return ERR_PTR(-ELOOP);
++	}
++
++	trap->i_mode = S_IFDIR;
++	trap->i_flags = S_DEAD;
++	unlock_new_inode(trap);
++
++	return trap;
++}
++
+ /*
+  * Does overlay inode need to be hashed by lower inode?
+  */
+diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c
+index efd372312ef1..badf039267a2 100644
+--- a/fs/overlayfs/namei.c
++++ b/fs/overlayfs/namei.c
+@@ -18,6 +18,7 @@
+ #include "overlayfs.h"
+ 
+ struct ovl_lookup_data {
++	struct super_block *sb;
+ 	struct qstr name;
+ 	bool is_dir;
+ 	bool opaque;
+@@ -244,6 +245,12 @@ static int ovl_lookup_single(struct dentry *base, struct ovl_lookup_data *d,
+ 		if (!d->metacopy || d->last)
+ 			goto out;
+ 	} else {
++		if (ovl_lookup_trap_inode(d->sb, this)) {
++			/* Caught in a trap of overlapping layers */
++			err = -ELOOP;
++			goto out_err;
++		}
++
+ 		if (last_element)
+ 			d->is_dir = true;
+ 		if (d->last)
+@@ -819,6 +826,7 @@ struct dentry *ovl_lookup(struct inode *dir, struct dentry *dentry,
+ 	int err;
+ 	bool metacopy = false;
+ 	struct ovl_lookup_data d = {
++		.sb = dentry->d_sb,
+ 		.name = dentry->d_name,
+ 		.is_dir = false,
+ 		.opaque = false,
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index d26efed9f80a..cec40077b522 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -270,6 +270,7 @@ void ovl_clear_flag(unsigned long flag, struct inode *inode);
+ bool ovl_test_flag(unsigned long flag, struct inode *inode);
+ bool ovl_inuse_trylock(struct dentry *dentry);
+ void ovl_inuse_unlock(struct dentry *dentry);
++bool ovl_is_inuse(struct dentry *dentry);
+ bool ovl_need_index(struct dentry *dentry);
+ int ovl_nlink_start(struct dentry *dentry);
+ void ovl_nlink_end(struct dentry *dentry);
+@@ -376,6 +377,8 @@ struct ovl_inode_params {
+ struct inode *ovl_new_inode(struct super_block *sb, umode_t mode, dev_t rdev);
+ struct inode *ovl_lookup_inode(struct super_block *sb, struct dentry *real,
+ 			       bool is_upper);
++bool ovl_lookup_trap_inode(struct super_block *sb, struct dentry *dir);
++struct inode *ovl_get_trap_inode(struct super_block *sb, struct dentry *dir);
+ struct inode *ovl_get_inode(struct super_block *sb,
+ 			    struct ovl_inode_params *oip);
+ static inline void ovl_copyattr(struct inode *from, struct inode *to)
+diff --git a/fs/overlayfs/ovl_entry.h b/fs/overlayfs/ovl_entry.h
+index ec237035333a..6ed1ace8f8b3 100644
+--- a/fs/overlayfs/ovl_entry.h
++++ b/fs/overlayfs/ovl_entry.h
+@@ -29,6 +29,8 @@ struct ovl_sb {
+ 
+ struct ovl_layer {
+ 	struct vfsmount *mnt;
++	/* Trap in ovl inode cache */
++	struct inode *trap;
+ 	struct ovl_sb *fs;
+ 	/* Index of this layer in fs root (upper idx == 0) */
+ 	int idx;
+@@ -65,6 +67,10 @@ struct ovl_fs {
+ 	/* Did we take the inuse lock? */
+ 	bool upperdir_locked;
+ 	bool workdir_locked;
++	/* Traps in ovl inode cache */
++	struct inode *upperdir_trap;
++	struct inode *workdir_trap;
++	struct inode *indexdir_trap;
+ 	/* Inode numbers in all layers do not use the high xino_bits */
+ 	unsigned int xino_bits;
+ };
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index 0116735cc321..9780617c69ee 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -217,6 +217,9 @@ static void ovl_free_fs(struct ovl_fs *ofs)
+ {
+ 	unsigned i;
+ 
++	iput(ofs->indexdir_trap);
++	iput(ofs->workdir_trap);
++	iput(ofs->upperdir_trap);
+ 	dput(ofs->indexdir);
+ 	dput(ofs->workdir);
+ 	if (ofs->workdir_locked)
+@@ -225,8 +228,10 @@ static void ovl_free_fs(struct ovl_fs *ofs)
+ 	if (ofs->upperdir_locked)
+ 		ovl_inuse_unlock(ofs->upper_mnt->mnt_root);
+ 	mntput(ofs->upper_mnt);
+-	for (i = 0; i < ofs->numlower; i++)
++	for (i = 0; i < ofs->numlower; i++) {
++		iput(ofs->lower_layers[i].trap);
+ 		mntput(ofs->lower_layers[i].mnt);
++	}
+ 	for (i = 0; i < ofs->numlowerfs; i++)
+ 		free_anon_bdev(ofs->lower_fs[i].pseudo_dev);
+ 	kfree(ofs->lower_layers);
+@@ -984,7 +989,26 @@ static const struct xattr_handler *ovl_xattr_handlers[] = {
+ 	NULL
+ };
+ 
+-static int ovl_get_upper(struct ovl_fs *ofs, struct path *upperpath)
++static int ovl_setup_trap(struct super_block *sb, struct dentry *dir,
++			  struct inode **ptrap, const char *name)
++{
++	struct inode *trap;
++	int err;
++
++	trap = ovl_get_trap_inode(sb, dir);
++	err = PTR_ERR_OR_ZERO(trap);
++	if (err) {
++		if (err == -ELOOP)
++			pr_err("overlayfs: conflicting %s path\n", name);
++		return err;
++	}
++
++	*ptrap = trap;
++	return 0;
++}
++
++static int ovl_get_upper(struct super_block *sb, struct ovl_fs *ofs,
++			 struct path *upperpath)
+ {
+ 	struct vfsmount *upper_mnt;
+ 	int err;
+@@ -1004,6 +1028,11 @@ static int ovl_get_upper(struct ovl_fs *ofs, struct path *upperpath)
+ 	if (err)
+ 		goto out;
+ 
++	err = ovl_setup_trap(sb, upperpath->dentry, &ofs->upperdir_trap,
++			     "upperdir");
++	if (err)
++		goto out;
++
+ 	upper_mnt = clone_private_mount(upperpath);
+ 	err = PTR_ERR(upper_mnt);
+ 	if (IS_ERR(upper_mnt)) {
+@@ -1030,7 +1059,8 @@ out:
+ 	return err;
+ }
+ 
+-static int ovl_make_workdir(struct ovl_fs *ofs, struct path *workpath)
++static int ovl_make_workdir(struct super_block *sb, struct ovl_fs *ofs,
++			    struct path *workpath)
+ {
+ 	struct vfsmount *mnt = ofs->upper_mnt;
+ 	struct dentry *temp;
+@@ -1045,6 +1075,10 @@ static int ovl_make_workdir(struct ovl_fs *ofs, struct path *workpath)
+ 	if (!ofs->workdir)
+ 		goto out;
+ 
++	err = ovl_setup_trap(sb, ofs->workdir, &ofs->workdir_trap, "workdir");
++	if (err)
++		goto out;
++
+ 	/*
+ 	 * Upper should support d_type, else whiteouts are visible.  Given
+ 	 * workdir and upper are on same fs, we can do iterate_dir() on
+@@ -1105,7 +1139,8 @@ out:
+ 	return err;
+ }
+ 
+-static int ovl_get_workdir(struct ovl_fs *ofs, struct path *upperpath)
++static int ovl_get_workdir(struct super_block *sb, struct ovl_fs *ofs,
++			   struct path *upperpath)
+ {
+ 	int err;
+ 	struct path workpath = { };
+@@ -1136,19 +1171,16 @@ static int ovl_get_workdir(struct ovl_fs *ofs, struct path *upperpath)
+ 		pr_warn("overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.\n");
+ 	}
+ 
+-	err = ovl_make_workdir(ofs, &workpath);
+-	if (err)
+-		goto out;
++	err = ovl_make_workdir(sb, ofs, &workpath);
+ 
+-	err = 0;
+ out:
+ 	path_put(&workpath);
+ 
+ 	return err;
+ }
+ 
+-static int ovl_get_indexdir(struct ovl_fs *ofs, struct ovl_entry *oe,
+-			    struct path *upperpath)
++static int ovl_get_indexdir(struct super_block *sb, struct ovl_fs *ofs,
++			    struct ovl_entry *oe, struct path *upperpath)
+ {
+ 	struct vfsmount *mnt = ofs->upper_mnt;
+ 	int err;
+@@ -1167,6 +1199,11 @@ static int ovl_get_indexdir(struct ovl_fs *ofs, struct ovl_entry *oe,
+ 
+ 	ofs->indexdir = ovl_workdir_create(ofs, OVL_INDEXDIR_NAME, true);
+ 	if (ofs->indexdir) {
++		err = ovl_setup_trap(sb, ofs->indexdir, &ofs->indexdir_trap,
++				     "indexdir");
++		if (err)
++			goto out;
++
+ 		/*
+ 		 * Verify upper root is exclusively associated with index dir.
+ 		 * Older kernels stored upper fh in "trusted.overlay.origin"
+@@ -1254,8 +1291,8 @@ static int ovl_get_fsid(struct ovl_fs *ofs, const struct path *path)
+ 	return ofs->numlowerfs;
+ }
+ 
+-static int ovl_get_lower_layers(struct ovl_fs *ofs, struct path *stack,
+-				unsigned int numlower)
++static int ovl_get_lower_layers(struct super_block *sb, struct ovl_fs *ofs,
++				struct path *stack, unsigned int numlower)
+ {
+ 	int err;
+ 	unsigned int i;
+@@ -1273,16 +1310,28 @@ static int ovl_get_lower_layers(struct ovl_fs *ofs, struct path *stack,
+ 
+ 	for (i = 0; i < numlower; i++) {
+ 		struct vfsmount *mnt;
++		struct inode *trap;
+ 		int fsid;
+ 
+ 		err = fsid = ovl_get_fsid(ofs, &stack[i]);
+ 		if (err < 0)
+ 			goto out;
+ 
++		err = -EBUSY;
++		if (ovl_is_inuse(stack[i].dentry)) {
++			pr_err("overlayfs: lowerdir is in-use as upperdir/workdir\n");
++			goto out;
++		}
++
++		err = ovl_setup_trap(sb, stack[i].dentry, &trap, "lowerdir");
++		if (err)
++			goto out;
++
+ 		mnt = clone_private_mount(&stack[i]);
+ 		err = PTR_ERR(mnt);
+ 		if (IS_ERR(mnt)) {
+ 			pr_err("overlayfs: failed to clone lowerpath\n");
++			iput(trap);
+ 			goto out;
+ 		}
+ 
+@@ -1292,6 +1341,7 @@ static int ovl_get_lower_layers(struct ovl_fs *ofs, struct path *stack,
+ 		 */
+ 		mnt->mnt_flags |= MNT_READONLY | MNT_NOATIME;
+ 
++		ofs->lower_layers[ofs->numlower].trap = trap;
+ 		ofs->lower_layers[ofs->numlower].mnt = mnt;
+ 		ofs->lower_layers[ofs->numlower].idx = i + 1;
+ 		ofs->lower_layers[ofs->numlower].fsid = fsid;
+@@ -1386,7 +1436,7 @@ static struct ovl_entry *ovl_get_lowerstack(struct super_block *sb,
+ 		goto out_err;
+ 	}
+ 
+-	err = ovl_get_lower_layers(ofs, stack, numlower);
++	err = ovl_get_lower_layers(sb, ofs, stack, numlower);
+ 	if (err)
+ 		goto out_err;
+ 
+@@ -1418,6 +1468,77 @@ out_err:
+ 	goto out;
+ }
+ 
++/*
++ * Check if this layer root is a descendant of:
++ * - another layer of this overlayfs instance
++ * - upper/work dir of any overlayfs instance
++ */
++static int ovl_check_layer(struct super_block *sb, struct dentry *dentry,
++			   const char *name)
++{
++	struct dentry *next = dentry, *parent;
++	int err = 0;
++
++	if (!dentry)
++		return 0;
++
++	parent = dget_parent(next);
++
++	/* Walk back ancestors to root (inclusive) looking for traps */
++	while (!err && parent != next) {
++		if (ovl_is_inuse(parent)) {
++			err = -EBUSY;
++			pr_err("overlayfs: %s path overlapping in-use upperdir/workdir\n",
++			       name);
++		} else if (ovl_lookup_trap_inode(sb, parent)) {
++			err = -ELOOP;
++			pr_err("overlayfs: overlapping %s path\n", name);
++		}
++		next = parent;
++		parent = dget_parent(next);
++		dput(next);
++	}
++
++	dput(parent);
++
++	return err;
++}
++
++/*
++ * Check if any of the layers or work dirs overlap.
++ */
++static int ovl_check_overlapping_layers(struct super_block *sb,
++					struct ovl_fs *ofs)
++{
++	int i, err;
++
++	if (ofs->upper_mnt) {
++		err = ovl_check_layer(sb, ofs->upper_mnt->mnt_root, "upperdir");
++		if (err)
++			return err;
++
++		/*
++		 * Checking workbasedir avoids hitting ovl_is_inuse(parent) of
++		 * this instance and covers overlapping work and index dirs,
++		 * unless work or index dir have been moved since created inside
++		 * workbasedir.  In that case, we already have their traps in
++		 * inode cache and we will catch that case on lookup.
++		 */
++		err = ovl_check_layer(sb, ofs->workbasedir, "workdir");
++		if (err)
++			return err;
++	}
++
++	for (i = 0; i < ofs->numlower; i++) {
++		err = ovl_check_layer(sb, ofs->lower_layers[i].mnt->mnt_root,
++				      "lowerdir");
++		if (err)
++			return err;
++	}
++
++	return 0;
++}
++
+ static int ovl_fill_super(struct super_block *sb, void *data, int silent)
+ {
+ 	struct path upperpath = { };
+@@ -1457,17 +1578,20 @@ static int ovl_fill_super(struct super_block *sb, void *data, int silent)
+ 	if (ofs->config.xino != OVL_XINO_OFF)
+ 		ofs->xino_bits = BITS_PER_LONG - 32;
+ 
++	/* alloc/destroy_inode needed for setting up traps in inode cache */
++	sb->s_op = &ovl_super_operations;
++
+ 	if (ofs->config.upperdir) {
+ 		if (!ofs->config.workdir) {
+ 			pr_err("overlayfs: missing 'workdir'\n");
+ 			goto out_err;
+ 		}
+ 
+-		err = ovl_get_upper(ofs, &upperpath);
++		err = ovl_get_upper(sb, ofs, &upperpath);
+ 		if (err)
+ 			goto out_err;
+ 
+-		err = ovl_get_workdir(ofs, &upperpath);
++		err = ovl_get_workdir(sb, ofs, &upperpath);
+ 		if (err)
+ 			goto out_err;
+ 
+@@ -1488,7 +1612,7 @@ static int ovl_fill_super(struct super_block *sb, void *data, int silent)
+ 		sb->s_flags |= SB_RDONLY;
+ 
+ 	if (!(ovl_force_readonly(ofs)) && ofs->config.index) {
+-		err = ovl_get_indexdir(ofs, oe, &upperpath);
++		err = ovl_get_indexdir(sb, ofs, oe, &upperpath);
+ 		if (err)
+ 			goto out_free_oe;
+ 
+@@ -1501,6 +1625,10 @@ static int ovl_fill_super(struct super_block *sb, void *data, int silent)
+ 
+ 	}
+ 
++	err = ovl_check_overlapping_layers(sb, ofs);
++	if (err)
++		goto out_free_oe;
++
+ 	/* Show index=off in /proc/mounts for forced r/o mount */
+ 	if (!ofs->indexdir) {
+ 		ofs->config.index = false;
+@@ -1522,7 +1650,6 @@ static int ovl_fill_super(struct super_block *sb, void *data, int silent)
+ 	cap_lower(cred->cap_effective, CAP_SYS_RESOURCE);
+ 
+ 	sb->s_magic = OVERLAYFS_SUPER_MAGIC;
+-	sb->s_op = &ovl_super_operations;
+ 	sb->s_xattr = ovl_xattr_handlers;
+ 	sb->s_fs_info = ofs;
+ 	sb->s_flags |= SB_POSIXACL;
+diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
+index 4035e640f402..e135064e87ad 100644
+--- a/fs/overlayfs/util.c
++++ b/fs/overlayfs/util.c
+@@ -652,6 +652,18 @@ void ovl_inuse_unlock(struct dentry *dentry)
+ 	}
+ }
+ 
++bool ovl_is_inuse(struct dentry *dentry)
++{
++	struct inode *inode = d_inode(dentry);
++	bool inuse;
++
++	spin_lock(&inode->i_lock);
++	inuse = (inode->i_state & I_OVL_INUSE);
++	spin_unlock(&inode->i_lock);
++
++	return inuse;
++}
++
+ /*
+  * Does this overlay dentry need to be indexed on copy up?
+  */
+diff --git a/fs/pnode.c b/fs/pnode.c
+index 7ea6cfb65077..012be405fec0 100644
+--- a/fs/pnode.c
++++ b/fs/pnode.c
+@@ -262,7 +262,6 @@ static int propagate_one(struct mount *m)
+ 	child = copy_tree(last_source, last_source->mnt.mnt_root, type);
+ 	if (IS_ERR(child))
+ 		return PTR_ERR(child);
+-	child->mnt.mnt_flags &= ~MNT_LOCKED;
+ 	mnt_set_mountpoint(m, mp, child);
+ 	last_dest = m;
+ 	last_source = child;
+diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
+index 43d0f0c496f6..ecb7972e2423 100644
+--- a/include/linux/mmc/host.h
++++ b/include/linux/mmc/host.h
+@@ -398,6 +398,7 @@ struct mmc_host {
+ 	unsigned int		retune_now:1;	/* do re-tuning at next req */
+ 	unsigned int		retune_paused:1; /* re-tuning is temporarily disabled */
+ 	unsigned int		use_blk_mq:1;	/* use blk-mq */
++	unsigned int		retune_crc_disable:1; /* don't trigger retune upon crc */
+ 
+ 	int			rescan_disable;	/* disable card detection */
+ 	int			rescan_entered;	/* used with nonremovable devices */
+diff --git a/include/linux/mmc/sdio_func.h b/include/linux/mmc/sdio_func.h
+index 97ca105347a6..6905f3f641cc 100644
+--- a/include/linux/mmc/sdio_func.h
++++ b/include/linux/mmc/sdio_func.h
+@@ -159,4 +159,10 @@ extern void sdio_f0_writeb(struct sdio_func *func, unsigned char b,
+ extern mmc_pm_flag_t sdio_get_host_pm_caps(struct sdio_func *func);
+ extern int sdio_set_host_pm_flags(struct sdio_func *func, mmc_pm_flag_t flags);
+ 
++extern void sdio_retune_crc_disable(struct sdio_func *func);
++extern void sdio_retune_crc_enable(struct sdio_func *func);
++
++extern void sdio_retune_hold_now(struct sdio_func *func);
++extern void sdio_retune_release(struct sdio_func *func);
++
+ #endif /* LINUX_MMC_SDIO_FUNC_H */
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 094e61e07030..05b1b96f4d9e 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -190,6 +190,9 @@ struct adv_info {
+ 
+ #define HCI_MAX_SHORT_NAME_LENGTH	10
+ 
++/* Min encryption key size to match with SMP */
++#define HCI_MIN_ENC_KEY_SIZE		7
++
+ /* Default LE RPA expiry time, 15 minutes */
+ #define HCI_DEFAULT_RPA_TIMEOUT		(15 * 60)
+ 
+diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
+index 13bfeb712d36..ab00733087ac 100644
+--- a/include/net/cfg80211.h
++++ b/include/net/cfg80211.h
+@@ -3767,7 +3767,8 @@ struct cfg80211_ops {
+  *	on wiphy_new(), but can be changed by the driver if it has a good
+  *	reason to override the default
+  * @WIPHY_FLAG_4ADDR_AP: supports 4addr mode even on AP (with a single station
+- *	on a VLAN interface)
++ *	on a VLAN interface). This flag also serves an extra purpose of
++ *	supporting 4ADDR AP mode on devices which do not support AP/VLAN iftype.
+  * @WIPHY_FLAG_4ADDR_STATION: supports 4addr mode even as a station
+  * @WIPHY_FLAG_CONTROL_PORT_PROTOCOL: This device supports setting the
+  *	control port protocol ethertype. The device also honours the
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index ca1ee656d6d8..5880c993002b 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -8627,12 +8627,8 @@ void ftrace_dump(enum ftrace_dump_mode oops_dump_mode)
+ 
+ 		cnt++;
+ 
+-		/* reset all but tr, trace, and overruns */
+-		memset(&iter.seq, 0,
+-		       sizeof(struct trace_iterator) -
+-		       offsetof(struct trace_iterator, seq));
++		trace_iterator_reset(&iter);
+ 		iter.iter_flags |= TRACE_FILE_LAT_FMT;
+-		iter.pos = -1;
+ 
+ 		if (trace_find_next_entry_inc(&iter) != NULL) {
+ 			int ret;
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index d80cee49e0eb..8ddf36e5eb42 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -1964,4 +1964,22 @@ static inline void tracer_hardirqs_off(unsigned long a0, unsigned long a1) { }
+ 
+ extern struct trace_iterator *tracepoint_print_iter;
+ 
++/*
++ * Reset the state of the trace_iterator so that it can read consumed data.
++ * Normally, the trace_iterator is used for reading the data when it is not
++ * consumed, and must retain state.
++ */
++static __always_inline void trace_iterator_reset(struct trace_iterator *iter)
++{
++	const size_t offset = offsetof(struct trace_iterator, seq);
++
++	/*
++	 * Keep gcc from complaining about overwriting more than just one
++	 * member in the structure.
++	 */
++	memset((char *)iter + offset, 0, sizeof(struct trace_iterator) - offset);
++
++	iter->pos = -1;
++}
++
+ #endif /* _LINUX_KERNEL_TRACE_H */
+diff --git a/kernel/trace/trace_kdb.c b/kernel/trace/trace_kdb.c
+index 810d78a8d14c..2905a3dd94c1 100644
+--- a/kernel/trace/trace_kdb.c
++++ b/kernel/trace/trace_kdb.c
+@@ -41,12 +41,8 @@ static void ftrace_dump_buf(int skip_lines, long cpu_file)
+ 
+ 	kdb_printf("Dumping ftrace buffer:\n");
+ 
+-	/* reset all but tr, trace, and overruns */
+-	memset(&iter.seq, 0,
+-		   sizeof(struct trace_iterator) -
+-		   offsetof(struct trace_iterator, seq));
++	trace_iterator_reset(&iter);
+ 	iter.iter_flags |= TRACE_FILE_LAT_FMT;
+-	iter.pos = -1;
+ 
+ 	if (cpu_file == RING_BUFFER_ALL_CPUS) {
+ 		for_each_tracing_cpu(cpu) {
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index bd4978ce8c45..15d1cb5aee18 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -1392,8 +1392,16 @@ auth:
+ 		return 0;
+ 
+ encrypt:
+-	if (test_bit(HCI_CONN_ENCRYPT, &conn->flags))
++	if (test_bit(HCI_CONN_ENCRYPT, &conn->flags)) {
++		/* Ensure that the encryption key size has been read,
++		 * otherwise stall the upper layer responses.
++		 */
++		if (!conn->enc_key_size)
++			return 0;
++
++		/* Nothing else needed, all requirements are met */
+ 		return 1;
++	}
+ 
+ 	hci_conn_encrypt(conn);
+ 	return 0;
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index b53acd6c9a3d..9f77432dbe38 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -1341,6 +1341,21 @@ static void l2cap_request_info(struct l2cap_conn *conn)
+ 		       sizeof(req), &req);
+ }
+ 
++static bool l2cap_check_enc_key_size(struct hci_conn *hcon)
++{
++	/* The minimum encryption key size needs to be enforced by the
++	 * host stack before establishing any L2CAP connections. The
++	 * specification in theory allows a minimum of 1, but to align
++	 * BR/EDR and LE transports, a minimum of 7 is chosen.
++	 *
++	 * This check might also be called for unencrypted connections
++	 * that have no key size requirements. Ensure that the link is
++	 * actually encrypted before enforcing a key size.
++	 */
++	return (!test_bit(HCI_CONN_ENCRYPT, &hcon->flags) ||
++		hcon->enc_key_size > HCI_MIN_ENC_KEY_SIZE);
++}
++
+ static void l2cap_do_start(struct l2cap_chan *chan)
+ {
+ 	struct l2cap_conn *conn = chan->conn;
+@@ -1358,9 +1373,14 @@ static void l2cap_do_start(struct l2cap_chan *chan)
+ 	if (!(conn->info_state & L2CAP_INFO_FEAT_MASK_REQ_DONE))
+ 		return;
+ 
+-	if (l2cap_chan_check_security(chan, true) &&
+-	    __l2cap_no_conn_pending(chan))
++	if (!l2cap_chan_check_security(chan, true) ||
++	    !__l2cap_no_conn_pending(chan))
++		return;
++
++	if (l2cap_check_enc_key_size(conn->hcon))
+ 		l2cap_start_connection(chan);
++	else
++		__set_chan_timer(chan, L2CAP_DISC_TIMEOUT);
+ }
+ 
+ static inline int l2cap_mode_supported(__u8 mode, __u32 feat_mask)
+@@ -1439,7 +1459,10 @@ static void l2cap_conn_start(struct l2cap_conn *conn)
+ 				continue;
+ 			}
+ 
+-			l2cap_start_connection(chan);
++			if (l2cap_check_enc_key_size(conn->hcon))
++				l2cap_start_connection(chan);
++			else
++				l2cap_chan_close(chan, ECONNREFUSED);
+ 
+ 		} else if (chan->state == BT_CONNECT2) {
+ 			struct l2cap_conn_rsp rsp;
+@@ -7490,7 +7513,7 @@ static void l2cap_security_cfm(struct hci_conn *hcon, u8 status, u8 encrypt)
+ 		}
+ 
+ 		if (chan->state == BT_CONNECT) {
+-			if (!status)
++			if (!status && l2cap_check_enc_key_size(hcon))
+ 				l2cap_start_connection(chan);
+ 			else
+ 				__set_chan_timer(chan, L2CAP_DISC_TIMEOUT);
+@@ -7499,7 +7522,7 @@ static void l2cap_security_cfm(struct hci_conn *hcon, u8 status, u8 encrypt)
+ 			struct l2cap_conn_rsp rsp;
+ 			__u16 res, stat;
+ 
+-			if (!status) {
++			if (!status && l2cap_check_enc_key_size(hcon)) {
+ 				if (test_bit(FLAG_DEFER_SETUP, &chan->flags)) {
+ 					res = L2CAP_CR_PEND;
+ 					stat = L2CAP_CS_AUTHOR_PEND;
+diff --git a/net/can/af_can.c b/net/can/af_can.c
+index 1684ba5b51eb..e386d654116d 100644
+--- a/net/can/af_can.c
++++ b/net/can/af_can.c
+@@ -105,6 +105,7 @@ EXPORT_SYMBOL(can_ioctl);
+ static void can_sock_destruct(struct sock *sk)
+ {
+ 	skb_queue_purge(&sk->sk_receive_queue);
++	skb_queue_purge(&sk->sk_error_queue);
+ }
+ 
+ static const struct can_proto *can_get_proto(int protocol)
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index e170f986d226..c875d45f1e1d 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -2222,6 +2222,9 @@ void ieee80211_tdls_cancel_channel_switch(struct wiphy *wiphy,
+ 					  const u8 *addr);
+ void ieee80211_teardown_tdls_peers(struct ieee80211_sub_if_data *sdata);
+ void ieee80211_tdls_chsw_work(struct work_struct *wk);
++void ieee80211_tdls_handle_disconnect(struct ieee80211_sub_if_data *sdata,
++				      const u8 *peer, u16 reason);
++const char *ieee80211_get_reason_code_string(u16 reason_code);
+ 
+ extern const struct ethtool_ops ieee80211_ethtool_ops;
+ 
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index b7a9fe3d5fcb..383b0df100e4 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -2963,7 +2963,7 @@ static void ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
+ #define case_WLAN(type) \
+ 	case WLAN_REASON_##type: return #type
+ 
+-static const char *ieee80211_get_reason_code_string(u16 reason_code)
++const char *ieee80211_get_reason_code_string(u16 reason_code)
+ {
+ 	switch (reason_code) {
+ 	case_WLAN(UNSPECIFIED);
+@@ -3028,6 +3028,11 @@ static void ieee80211_rx_mgmt_deauth(struct ieee80211_sub_if_data *sdata,
+ 	if (len < 24 + 2)
+ 		return;
+ 
++	if (!ether_addr_equal(mgmt->bssid, mgmt->sa)) {
++		ieee80211_tdls_handle_disconnect(sdata, mgmt->sa, reason_code);
++		return;
++	}
++
+ 	if (ifmgd->associated &&
+ 	    ether_addr_equal(mgmt->bssid, ifmgd->associated->bssid)) {
+ 		const u8 *bssid = ifmgd->associated->bssid;
+@@ -3077,6 +3082,11 @@ static void ieee80211_rx_mgmt_disassoc(struct ieee80211_sub_if_data *sdata,
+ 
+ 	reason_code = le16_to_cpu(mgmt->u.disassoc.reason_code);
+ 
++	if (!ether_addr_equal(mgmt->bssid, mgmt->sa)) {
++		ieee80211_tdls_handle_disconnect(sdata, mgmt->sa, reason_code);
++		return;
++	}
++
+ 	sdata_info(sdata, "disassociated from %pM (Reason: %u=%s)\n",
+ 		   mgmt->sa, reason_code,
+ 		   ieee80211_get_reason_code_string(reason_code));
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index bf0b187f994e..1a1f850b76fd 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -3823,6 +3823,8 @@ static bool ieee80211_accept_frame(struct ieee80211_rx_data *rx)
+ 	case NL80211_IFTYPE_STATION:
+ 		if (!bssid && !sdata->u.mgd.use_4addr)
+ 			return false;
++		if (ieee80211_is_robust_mgmt_frame(skb) && !rx->sta)
++			return false;
+ 		if (multicast)
+ 			return true;
+ 		return ether_addr_equal(sdata->vif.addr, hdr->addr1);
+diff --git a/net/mac80211/tdls.c b/net/mac80211/tdls.c
+index d30690d79a58..fcc5cd49c3ac 100644
+--- a/net/mac80211/tdls.c
++++ b/net/mac80211/tdls.c
+@@ -1994,3 +1994,26 @@ void ieee80211_tdls_chsw_work(struct work_struct *wk)
+ 	}
+ 	rtnl_unlock();
+ }
++
++void ieee80211_tdls_handle_disconnect(struct ieee80211_sub_if_data *sdata,
++				      const u8 *peer, u16 reason)
++{
++	struct ieee80211_sta *sta;
++
++	rcu_read_lock();
++	sta = ieee80211_find_sta(&sdata->vif, peer);
++	if (!sta || !sta->tdls) {
++		rcu_read_unlock();
++		return;
++	}
++	rcu_read_unlock();
++
++	tdls_dbg(sdata, "disconnected from TDLS peer %pM (Reason: %u=%s)\n",
++		 peer, reason,
++		 ieee80211_get_reason_code_string(reason));
++
++	ieee80211_tdls_oper_request(&sdata->vif, peer,
++				    NL80211_TDLS_TEARDOWN,
++				    WLAN_REASON_TDLS_TEARDOWN_UNREACHABLE,
++				    GFP_ATOMIC);
++}
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index 4c1655972565..447a55ae9df1 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -3757,7 +3757,9 @@ int ieee80211_check_combinations(struct ieee80211_sub_if_data *sdata,
+ 	}
+ 
+ 	/* Always allow software iftypes */
+-	if (local->hw.wiphy->software_iftypes & BIT(iftype)) {
++	if (local->hw.wiphy->software_iftypes & BIT(iftype) ||
++	    (iftype == NL80211_IFTYPE_AP_VLAN &&
++	     local->hw.wiphy->flags & WIPHY_FLAG_4ADDR_AP)) {
+ 		if (radar_detect)
+ 			return -EINVAL;
+ 		return 0;
+diff --git a/net/mac80211/wpa.c b/net/mac80211/wpa.c
+index 58d0b258b684..5dd48f0a4b1b 100644
+--- a/net/mac80211/wpa.c
++++ b/net/mac80211/wpa.c
+@@ -1175,7 +1175,7 @@ ieee80211_crypto_aes_gmac_decrypt(struct ieee80211_rx_data *rx)
+ 	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
+ 	struct ieee80211_key *key = rx->key;
+ 	struct ieee80211_mmie_16 *mmie;
+-	u8 aad[GMAC_AAD_LEN], mic[GMAC_MIC_LEN], ipn[6], nonce[GMAC_NONCE_LEN];
++	u8 aad[GMAC_AAD_LEN], *mic, ipn[6], nonce[GMAC_NONCE_LEN];
+ 	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ 
+ 	if (!ieee80211_is_mgmt(hdr->frame_control))
+@@ -1206,13 +1206,18 @@ ieee80211_crypto_aes_gmac_decrypt(struct ieee80211_rx_data *rx)
+ 		memcpy(nonce, hdr->addr2, ETH_ALEN);
+ 		memcpy(nonce + ETH_ALEN, ipn, 6);
+ 
++		mic = kmalloc(GMAC_MIC_LEN, GFP_ATOMIC);
++		if (!mic)
++			return RX_DROP_UNUSABLE;
+ 		if (ieee80211_aes_gmac(key->u.aes_gmac.tfm, aad, nonce,
+ 				       skb->data + 24, skb->len - 24,
+ 				       mic) < 0 ||
+ 		    crypto_memneq(mic, mmie->mic, sizeof(mmie->mic))) {
+ 			key->u.aes_gmac.icverrors++;
++			kfree(mic);
+ 			return RX_DROP_UNUSABLE;
+ 		}
++		kfree(mic);
+ 	}
+ 
+ 	memcpy(key->u.aes_gmac.rx_pn, ipn, 6);
+diff --git a/net/wireless/core.c b/net/wireless/core.c
+index b36ad8efb5e5..c58acca09301 100644
+--- a/net/wireless/core.c
++++ b/net/wireless/core.c
+@@ -513,7 +513,7 @@ use_default_name:
+ 				   &rdev->rfkill_ops, rdev);
+ 
+ 	if (!rdev->rfkill) {
+-		kfree(rdev);
++		wiphy_free(&rdev->wiphy);
+ 		return NULL;
+ 	}
+ 
+@@ -1396,8 +1396,12 @@ static int cfg80211_netdev_notifier_call(struct notifier_block *nb,
+ 		}
+ 		break;
+ 	case NETDEV_PRE_UP:
+-		if (!(wdev->wiphy->interface_modes & BIT(wdev->iftype)))
++		if (!(wdev->wiphy->interface_modes & BIT(wdev->iftype)) &&
++		    !(wdev->iftype == NL80211_IFTYPE_AP_VLAN &&
++		      rdev->wiphy.flags & WIPHY_FLAG_4ADDR_AP &&
++		      wdev->use_4addr))
+ 			return notifier_from_errno(-EOPNOTSUPP);
++
+ 		if (rfkill_blocked(rdev->rfkill))
+ 			return notifier_from_errno(-ERFKILL);
+ 		break;
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index d2a7459a5da4..d1553a661336 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -3385,8 +3385,7 @@ static int nl80211_new_interface(struct sk_buff *skb, struct genl_info *info)
+ 	if (info->attrs[NL80211_ATTR_IFTYPE])
+ 		type = nla_get_u32(info->attrs[NL80211_ATTR_IFTYPE]);
+ 
+-	if (!rdev->ops->add_virtual_intf ||
+-	    !(rdev->wiphy.interface_modes & (1 << type)))
++	if (!rdev->ops->add_virtual_intf)
+ 		return -EOPNOTSUPP;
+ 
+ 	if ((type == NL80211_IFTYPE_P2P_DEVICE || type == NL80211_IFTYPE_NAN ||
+@@ -3405,6 +3404,11 @@ static int nl80211_new_interface(struct sk_buff *skb, struct genl_info *info)
+ 			return err;
+ 	}
+ 
++	if (!(rdev->wiphy.interface_modes & (1 << type)) &&
++	    !(type == NL80211_IFTYPE_AP_VLAN && params.use_4addr &&
++	      rdev->wiphy.flags & WIPHY_FLAG_4ADDR_AP))
++		return -EOPNOTSUPP;
++
+ 	err = nl80211_parse_mon_options(rdev, type, info, &params);
+ 	if (err < 0)
+ 		return err;
+@@ -4800,8 +4804,10 @@ static int nl80211_send_station(struct sk_buff *msg, u32 cmd, u32 portid,
+ 	struct nlattr *sinfoattr, *bss_param;
+ 
+ 	hdr = nl80211hdr_put(msg, portid, seq, flags, cmd);
+-	if (!hdr)
++	if (!hdr) {
++		cfg80211_sinfo_release_content(sinfo);
+ 		return -1;
++	}
+ 
+ 	if (nla_put_u32(msg, NL80211_ATTR_IFINDEX, dev->ifindex) ||
+ 	    nla_put(msg, NL80211_ATTR_MAC, ETH_ALEN, mac_addr) ||
+diff --git a/scripts/checkstack.pl b/scripts/checkstack.pl
+index 122aef5e4e14..371bd17a4983 100755
+--- a/scripts/checkstack.pl
++++ b/scripts/checkstack.pl
+@@ -46,7 +46,7 @@ my (@stack, $re, $dre, $x, $xs, $funcre);
+ 	$x	= "[0-9a-f]";	# hex character
+ 	$xs	= "[0-9a-f ]";	# hex character or space
+ 	$funcre = qr/^$x* <(.*)>:$/;
+-	if ($arch eq 'aarch64') {
++	if ($arch =~ '^(aarch|arm)64$') {
+ 		#ffffffc0006325cc:       a9bb7bfd        stp     x29, x30, [sp, #-80]!
+ 		#a110:       d11643ff        sub     sp, sp, #0x590
+ 		$re = qr/^.*stp.*sp, \#-([0-9]{1,8})\]\!/o;
+diff --git a/scripts/package/Makefile b/scripts/package/Makefile
+index 2c6de21e5152..fd854439de0f 100644
+--- a/scripts/package/Makefile
++++ b/scripts/package/Makefile
+@@ -103,7 +103,7 @@ clean-dirs += $(objtree)/snap/
+ # ---------------------------------------------------------------------------
+ tar%pkg: FORCE
+ 	$(MAKE) -f $(srctree)/Makefile
+-	$(CONFIG_SHELL) $(srctree)/scripts/package/buildtar $@
++	+$(CONFIG_SHELL) $(srctree)/scripts/package/buildtar $@
+ 
+ clean-dirs += $(objtree)/tar-install/
+ 
+diff --git a/security/apparmor/include/policy.h b/security/apparmor/include/policy.h
+index 8e6707c837be..06ed62f00b4b 100644
+--- a/security/apparmor/include/policy.h
++++ b/security/apparmor/include/policy.h
+@@ -217,7 +217,16 @@ static inline struct aa_profile *aa_get_newest_profile(struct aa_profile *p)
+ 	return labels_profile(aa_get_newest_label(&p->label));
+ }
+ 
+-#define PROFILE_MEDIATES(P, T)  ((P)->policy.start[(unsigned char) (T)])
++static inline unsigned int PROFILE_MEDIATES(struct aa_profile *profile,
++					    unsigned char class)
++{
++	if (class <= AA_CLASS_LAST)
++		return profile->policy.start[class];
++	else
++		return aa_dfa_match_len(profile->policy.dfa,
++					profile->policy.start[0], &class, 1);
++}
++
+ static inline unsigned int PROFILE_MEDIATES_AF(struct aa_profile *profile,
+ 					       u16 AF) {
+ 	unsigned int state = PROFILE_MEDIATES(profile, AA_CLASS_NET);
+diff --git a/security/apparmor/policy_unpack.c b/security/apparmor/policy_unpack.c
+index f6c2bcb2ab14..f1b2202f725e 100644
+--- a/security/apparmor/policy_unpack.c
++++ b/security/apparmor/policy_unpack.c
+@@ -223,16 +223,21 @@ static void *kvmemdup(const void *src, size_t len)
+ static size_t unpack_u16_chunk(struct aa_ext *e, char **chunk)
+ {
+ 	size_t size = 0;
++	void *pos = e->pos;
+ 
+ 	if (!inbounds(e, sizeof(u16)))
+-		return 0;
++		goto fail;
+ 	size = le16_to_cpu(get_unaligned((__le16 *) e->pos));
+ 	e->pos += sizeof(__le16);
+ 	if (!inbounds(e, size))
+-		return 0;
++		goto fail;
+ 	*chunk = e->pos;
+ 	e->pos += size;
+ 	return size;
++
++fail:
++	e->pos = pos;
++	return 0;
+ }
+ 
+ /* unpack control byte */
+@@ -276,7 +281,7 @@ static bool unpack_nameX(struct aa_ext *e, enum aa_code code, const char *name)
+ 		char *tag = NULL;
+ 		size_t size = unpack_u16_chunk(e, &tag);
+ 		/* if a name is specified it must match. otherwise skip tag */
+-		if (name && (!size || strcmp(name, tag)))
++		if (name && (!size || tag[size-1] != '\0' || strcmp(name, tag)))
+ 			goto fail;
+ 	} else if (name) {
+ 		/* if a name is specified and there is no name tag fail */
+@@ -294,62 +299,84 @@ fail:
+ 
+ static bool unpack_u8(struct aa_ext *e, u8 *data, const char *name)
+ {
++	void *pos = e->pos;
++
+ 	if (unpack_nameX(e, AA_U8, name)) {
+ 		if (!inbounds(e, sizeof(u8)))
+-			return 0;
++			goto fail;
+ 		if (data)
+ 			*data = get_unaligned((u8 *)e->pos);
+ 		e->pos += sizeof(u8);
+ 		return 1;
+ 	}
++
++fail:
++	e->pos = pos;
+ 	return 0;
+ }
+ 
+ static bool unpack_u32(struct aa_ext *e, u32 *data, const char *name)
+ {
++	void *pos = e->pos;
++
+ 	if (unpack_nameX(e, AA_U32, name)) {
+ 		if (!inbounds(e, sizeof(u32)))
+-			return 0;
++			goto fail;
+ 		if (data)
+ 			*data = le32_to_cpu(get_unaligned((__le32 *) e->pos));
+ 		e->pos += sizeof(u32);
+ 		return 1;
+ 	}
++
++fail:
++	e->pos = pos;
+ 	return 0;
+ }
+ 
+ static bool unpack_u64(struct aa_ext *e, u64 *data, const char *name)
+ {
++	void *pos = e->pos;
++
+ 	if (unpack_nameX(e, AA_U64, name)) {
+ 		if (!inbounds(e, sizeof(u64)))
+-			return 0;
++			goto fail;
+ 		if (data)
+ 			*data = le64_to_cpu(get_unaligned((__le64 *) e->pos));
+ 		e->pos += sizeof(u64);
+ 		return 1;
+ 	}
++
++fail:
++	e->pos = pos;
+ 	return 0;
+ }
+ 
+ static size_t unpack_array(struct aa_ext *e, const char *name)
+ {
++	void *pos = e->pos;
++
+ 	if (unpack_nameX(e, AA_ARRAY, name)) {
+ 		int size;
+ 		if (!inbounds(e, sizeof(u16)))
+-			return 0;
++			goto fail;
+ 		size = (int)le16_to_cpu(get_unaligned((__le16 *) e->pos));
+ 		e->pos += sizeof(u16);
+ 		return size;
+ 	}
++
++fail:
++	e->pos = pos;
+ 	return 0;
+ }
+ 
+ static size_t unpack_blob(struct aa_ext *e, char **blob, const char *name)
+ {
++	void *pos = e->pos;
++
+ 	if (unpack_nameX(e, AA_BLOB, name)) {
+ 		u32 size;
+ 		if (!inbounds(e, sizeof(u32)))
+-			return 0;
++			goto fail;
+ 		size = le32_to_cpu(get_unaligned((__le32 *) e->pos));
+ 		e->pos += sizeof(u32);
+ 		if (inbounds(e, (size_t) size)) {
+@@ -358,6 +385,9 @@ static size_t unpack_blob(struct aa_ext *e, char **blob, const char *name)
+ 			return size;
+ 		}
+ 	}
++
++fail:
++	e->pos = pos;
+ 	return 0;
+ }
+ 
+@@ -374,9 +404,10 @@ static int unpack_str(struct aa_ext *e, const char **string, const char *name)
+ 			if (src_str[size - 1] != 0)
+ 				goto fail;
+ 			*string = src_str;
++
++			return size;
+ 		}
+ 	}
+-	return size;
+ 
+ fail:
+ 	e->pos = pos;
+diff --git a/tools/testing/selftests/cgroup/test_core.c b/tools/testing/selftests/cgroup/test_core.c
+index be59f9c34ea2..79053a4f4783 100644
+--- a/tools/testing/selftests/cgroup/test_core.c
++++ b/tools/testing/selftests/cgroup/test_core.c
+@@ -198,7 +198,7 @@ static int test_cgcore_no_internal_process_constraint_on_threads(const char *roo
+ 	char *parent = NULL, *child = NULL;
+ 
+ 	if (cg_read_strstr(root, "cgroup.controllers", "cpu") ||
+-	    cg_read_strstr(root, "cgroup.subtree_control", "cpu")) {
++	    cg_write(root, "cgroup.subtree_control", "+cpu")) {
+ 		ret = KSFT_SKIP;
+ 		goto cleanup;
+ 	}
+@@ -376,6 +376,11 @@ int main(int argc, char *argv[])
+ 
+ 	if (cg_find_unified_root(root, sizeof(root)))
+ 		ksft_exit_skip("cgroup v2 isn't mounted\n");
++
++	if (cg_read_strstr(root, "cgroup.subtree_control", "memory"))
++		if (cg_write(root, "cgroup.subtree_control", "+memory"))
++			ksft_exit_skip("Failed to set memory controller\n");
++
+ 	for (i = 0; i < ARRAY_SIZE(tests); i++) {
+ 		switch (tests[i].fn(root)) {
+ 		case KSFT_PASS:
+diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
+index 6f339882a6ca..c19a97dd02d4 100644
+--- a/tools/testing/selftests/cgroup/test_memcontrol.c
++++ b/tools/testing/selftests/cgroup/test_memcontrol.c
+@@ -1205,6 +1205,10 @@ int main(int argc, char **argv)
+ 	if (cg_read_strstr(root, "cgroup.controllers", "memory"))
+ 		ksft_exit_skip("memory controller isn't available\n");
+ 
++	if (cg_read_strstr(root, "cgroup.subtree_control", "memory"))
++		if (cg_write(root, "cgroup.subtree_control", "+memory"))
++			ksft_exit_skip("Failed to set memory controller\n");
++
+ 	for (i = 0; i < ARRAY_SIZE(tests); i++) {
+ 		switch (tests[i].fn(root)) {
+ 		case KSFT_PASS:
+diff --git a/tools/testing/selftests/net/forwarding/router_broadcast.sh b/tools/testing/selftests/net/forwarding/router_broadcast.sh
+index 9a678ece32b4..4eac0a06f451 100755
+--- a/tools/testing/selftests/net/forwarding/router_broadcast.sh
++++ b/tools/testing/selftests/net/forwarding/router_broadcast.sh
+@@ -145,16 +145,19 @@ bc_forwarding_disable()
+ {
+ 	sysctl_set net.ipv4.conf.all.bc_forwarding 0
+ 	sysctl_set net.ipv4.conf.$rp1.bc_forwarding 0
++	sysctl_set net.ipv4.conf.$rp2.bc_forwarding 0
+ }
+ 
+ bc_forwarding_enable()
+ {
+ 	sysctl_set net.ipv4.conf.all.bc_forwarding 1
+ 	sysctl_set net.ipv4.conf.$rp1.bc_forwarding 1
++	sysctl_set net.ipv4.conf.$rp2.bc_forwarding 1
+ }
+ 
+ bc_forwarding_restore()
+ {
++	sysctl_restore net.ipv4.conf.$rp2.bc_forwarding
+ 	sysctl_restore net.ipv4.conf.$rp1.bc_forwarding
+ 	sysctl_restore net.ipv4.conf.all.bc_forwarding
+ }
+@@ -171,7 +174,7 @@ ping_test_from()
+ 	log_info "ping $dip, expected reply from $from"
+ 	ip vrf exec $(master_name_get $oif) \
+ 		$PING -I $oif $dip -c 10 -i 0.1 -w $PING_TIMEOUT -b 2>&1 \
+-		| grep $from &> /dev/null
++		| grep "bytes from $from" > /dev/null
+ 	check_err_fail $fail $?
+ }
+ 
+diff --git a/tools/testing/selftests/pidfd/pidfd_test.c b/tools/testing/selftests/pidfd/pidfd_test.c
+index d59378a93782..20323f55613a 100644
+--- a/tools/testing/selftests/pidfd/pidfd_test.c
++++ b/tools/testing/selftests/pidfd/pidfd_test.c
+@@ -16,6 +16,10 @@
+ 
+ #include "../kselftest.h"
+ 
++#ifndef __NR_pidfd_send_signal
++#define __NR_pidfd_send_signal -1
++#endif
++
+ static inline int sys_pidfd_send_signal(int pidfd, int sig, siginfo_t *info,
+ 					unsigned int flags)
+ {
+diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
+index e13eb6cc8901..05306c58ff9f 100644
+--- a/tools/testing/selftests/vm/Makefile
++++ b/tools/testing/selftests/vm/Makefile
+@@ -25,6 +25,8 @@ TEST_GEN_FILES += virtual_address_range
+ 
+ TEST_PROGS := run_vmtests
+ 
++TEST_FILES := test_vmalloc.sh
++
+ KSFT_KHDR_INSTALL := 1
+ include ../lib.mk
+ 
+diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
+index 5d1db824f73a..b3e6497b080c 100644
+--- a/tools/testing/selftests/vm/userfaultfd.c
++++ b/tools/testing/selftests/vm/userfaultfd.c
+@@ -123,7 +123,7 @@ static void usage(void)
+ 	fprintf(stderr, "Supported <test type>: anon, hugetlb, "
+ 		"hugetlb_shared, shmem\n\n");
+ 	fprintf(stderr, "Examples:\n\n");
+-	fprintf(stderr, examples);
++	fprintf(stderr, "%s", examples);
+ 	exit(1);
+ }
+ 


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-07-03 11:35 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2019-07-03 11:35 UTC (permalink / raw
  To: gentoo-commits

commit:     ec31621f64416a0fd7a9d9db43542b98820cd285
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul  3 11:35:36 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul  3 11:35:36 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ec31621f

Linux patch 5.1.16

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1015_linux-5.1.16.patch | 2889 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2893 insertions(+)

diff --git a/0000_README b/0000_README
index db80d60..941f7f1 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch:  1014_linux-5.1.15.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.1.15
 
+Patch:  1015_linux-5.1.16.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.1.16
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1015_linux-5.1.16.patch b/1015_linux-5.1.16.patch
new file mode 100644
index 0000000..23d3be5
--- /dev/null
+++ b/1015_linux-5.1.16.patch
@@ -0,0 +1,2889 @@
+diff --git a/Documentation/robust-futexes.txt b/Documentation/robust-futexes.txt
+index 6c42c75103eb..6361fb01c9c1 100644
+--- a/Documentation/robust-futexes.txt
++++ b/Documentation/robust-futexes.txt
+@@ -218,5 +218,4 @@ All other architectures should build just fine too - but they won't have
+ the new syscalls yet.
+ 
+ Architectures need to implement the new futex_atomic_cmpxchg_inatomic()
+-inline function before writing up the syscalls (that function returns
+--ENOSYS right now).
++inline function before writing up the syscalls.
+diff --git a/Makefile b/Makefile
+index d7b3c8e3ff3e..46a0ae537182 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 15
++SUBLEVEL = 16
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
+index 8fbd583b18e1..e9d2e578cbe6 100644
+--- a/arch/arm64/Makefile
++++ b/arch/arm64/Makefile
+@@ -51,7 +51,7 @@ endif
+ 
+ KBUILD_CFLAGS	+= -mgeneral-regs-only $(lseinstr) $(brokengasinst)
+ KBUILD_CFLAGS	+= -fno-asynchronous-unwind-tables
+-KBUILD_CFLAGS	+= -Wno-psabi
++KBUILD_CFLAGS	+= $(call cc-disable-warning, psabi)
+ KBUILD_AFLAGS	+= $(lseinstr) $(brokengasinst)
+ 
+ KBUILD_CFLAGS	+= $(call cc-option,-mabi=lp64)
+diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
+index 2d78ea6932b7..bdb3c05070a2 100644
+--- a/arch/arm64/include/asm/futex.h
++++ b/arch/arm64/include/asm/futex.h
+@@ -134,7 +134,9 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *_uaddr,
+ 	: "memory");
+ 	uaccess_disable();
+ 
+-	*uval = val;
++	if (!ret)
++		*uval = val;
++
+ 	return ret;
+ }
+ 
+diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
+index 9c01f04db64d..f71b84d9f294 100644
+--- a/arch/arm64/include/asm/insn.h
++++ b/arch/arm64/include/asm/insn.h
+@@ -277,6 +277,7 @@ __AARCH64_INSN_FUNCS(adrp,	0x9F000000, 0x90000000)
+ __AARCH64_INSN_FUNCS(prfm,	0x3FC00000, 0x39800000)
+ __AARCH64_INSN_FUNCS(prfm_lit,	0xFF000000, 0xD8000000)
+ __AARCH64_INSN_FUNCS(str_reg,	0x3FE0EC00, 0x38206800)
++__AARCH64_INSN_FUNCS(ldadd,	0x3F20FC00, 0x38200000)
+ __AARCH64_INSN_FUNCS(ldr_reg,	0x3FE0EC00, 0x38606800)
+ __AARCH64_INSN_FUNCS(ldr_lit,	0xBF000000, 0x18000000)
+ __AARCH64_INSN_FUNCS(ldrsw_lit,	0xFF000000, 0x98000000)
+@@ -394,6 +395,13 @@ u32 aarch64_insn_gen_load_store_ex(enum aarch64_insn_register reg,
+ 				   enum aarch64_insn_register state,
+ 				   enum aarch64_insn_size_type size,
+ 				   enum aarch64_insn_ldst_type type);
++u32 aarch64_insn_gen_ldadd(enum aarch64_insn_register result,
++			   enum aarch64_insn_register address,
++			   enum aarch64_insn_register value,
++			   enum aarch64_insn_size_type size);
++u32 aarch64_insn_gen_stadd(enum aarch64_insn_register address,
++			   enum aarch64_insn_register value,
++			   enum aarch64_insn_size_type size);
+ u32 aarch64_insn_gen_add_sub_imm(enum aarch64_insn_register dst,
+ 				 enum aarch64_insn_register src,
+ 				 int imm, enum aarch64_insn_variant variant,
+diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
+index 7820a4a688fa..9e2b5882cdeb 100644
+--- a/arch/arm64/kernel/insn.c
++++ b/arch/arm64/kernel/insn.c
+@@ -734,6 +734,46 @@ u32 aarch64_insn_gen_load_store_ex(enum aarch64_insn_register reg,
+ 					    state);
+ }
+ 
++u32 aarch64_insn_gen_ldadd(enum aarch64_insn_register result,
++			   enum aarch64_insn_register address,
++			   enum aarch64_insn_register value,
++			   enum aarch64_insn_size_type size)
++{
++	u32 insn = aarch64_insn_get_ldadd_value();
++
++	switch (size) {
++	case AARCH64_INSN_SIZE_32:
++	case AARCH64_INSN_SIZE_64:
++		break;
++	default:
++		pr_err("%s: unimplemented size encoding %d\n", __func__, size);
++		return AARCH64_BREAK_FAULT;
++	}
++
++	insn = aarch64_insn_encode_ldst_size(size, insn);
++
++	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RT, insn,
++					    result);
++
++	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn,
++					    address);
++
++	return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RS, insn,
++					    value);
++}
++
++u32 aarch64_insn_gen_stadd(enum aarch64_insn_register address,
++			   enum aarch64_insn_register value,
++			   enum aarch64_insn_size_type size)
++{
++	/*
++	 * STADD is simply encoded as an alias for LDADD with XZR as
++	 * the destination register.
++	 */
++	return aarch64_insn_gen_ldadd(AARCH64_INSN_REG_ZR, address,
++				      value, size);
++}
++
+ static u32 aarch64_insn_encode_prfm_imm(enum aarch64_insn_prfm_type type,
+ 					enum aarch64_insn_prfm_target target,
+ 					enum aarch64_insn_prfm_policy policy,
+diff --git a/arch/arm64/net/bpf_jit.h b/arch/arm64/net/bpf_jit.h
+index 6c881659ee8a..76606e87233f 100644
+--- a/arch/arm64/net/bpf_jit.h
++++ b/arch/arm64/net/bpf_jit.h
+@@ -100,6 +100,10 @@
+ #define A64_STXR(sf, Rt, Rn, Rs) \
+ 	A64_LSX(sf, Rt, Rn, Rs, STORE_EX)
+ 
++/* LSE atomics */
++#define A64_STADD(sf, Rn, Rs) \
++	aarch64_insn_gen_stadd(Rn, Rs, A64_SIZE(sf))
++
+ /* Add/subtract (immediate) */
+ #define A64_ADDSUB_IMM(sf, Rd, Rn, imm12, type) \
+ 	aarch64_insn_gen_add_sub_imm(Rd, Rn, imm12, \
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index a1420626fca2..df845cee438e 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -365,7 +365,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
+ 	const bool is64 = BPF_CLASS(code) == BPF_ALU64 ||
+ 			  BPF_CLASS(code) == BPF_JMP;
+ 	const bool isdw = BPF_SIZE(code) == BPF_DW;
+-	u8 jmp_cond;
++	u8 jmp_cond, reg;
+ 	s32 jmp_offset;
+ 
+ #define check_imm(bits, imm) do {				\
+@@ -756,18 +756,28 @@ emit_cond_jmp:
+ 			break;
+ 		}
+ 		break;
++
+ 	/* STX XADD: lock *(u32 *)(dst + off) += src */
+ 	case BPF_STX | BPF_XADD | BPF_W:
+ 	/* STX XADD: lock *(u64 *)(dst + off) += src */
+ 	case BPF_STX | BPF_XADD | BPF_DW:
+-		emit_a64_mov_i(1, tmp, off, ctx);
+-		emit(A64_ADD(1, tmp, tmp, dst), ctx);
+-		emit(A64_LDXR(isdw, tmp2, tmp), ctx);
+-		emit(A64_ADD(isdw, tmp2, tmp2, src), ctx);
+-		emit(A64_STXR(isdw, tmp2, tmp, tmp3), ctx);
+-		jmp_offset = -3;
+-		check_imm19(jmp_offset);
+-		emit(A64_CBNZ(0, tmp3, jmp_offset), ctx);
++		if (!off) {
++			reg = dst;
++		} else {
++			emit_a64_mov_i(1, tmp, off, ctx);
++			emit(A64_ADD(1, tmp, tmp, dst), ctx);
++			reg = tmp;
++		}
++		if (cpus_have_cap(ARM64_HAS_LSE_ATOMICS)) {
++			emit(A64_STADD(isdw, reg, src), ctx);
++		} else {
++			emit(A64_LDXR(isdw, tmp2, reg), ctx);
++			emit(A64_ADD(isdw, tmp2, tmp2, src), ctx);
++			emit(A64_STXR(isdw, tmp2, reg, tmp3), ctx);
++			jmp_offset = -3;
++			check_imm19(jmp_offset);
++			emit(A64_CBNZ(0, tmp3, jmp_offset), ctx);
++		}
+ 		break;
+ 
+ 	default:
+diff --git a/arch/mips/include/asm/mips-gic.h b/arch/mips/include/asm/mips-gic.h
+index 558059a8f218..0277b56157af 100644
+--- a/arch/mips/include/asm/mips-gic.h
++++ b/arch/mips/include/asm/mips-gic.h
+@@ -314,6 +314,36 @@ static inline bool mips_gic_present(void)
+ 	return IS_ENABLED(CONFIG_MIPS_GIC) && mips_gic_base;
+ }
+ 
++/**
++ * mips_gic_vx_map_reg() - Return GIC_Vx_<intr>_MAP register offset
++ * @intr: A GIC local interrupt
++ *
++ * Determine the index of the GIC_VL_<intr>_MAP or GIC_VO_<intr>_MAP register
++ * within the block of GIC map registers. This is almost the same as the order
++ * of interrupts in the pending & mask registers, as used by enum
++ * mips_gic_local_interrupt, but moves the FDC interrupt & thus offsets the
++ * interrupts after it...
++ *
++ * Return: The map register index corresponding to @intr.
++ *
++ * The return value is suitable for use with the (read|write)_gic_v[lo]_map
++ * accessor functions.
++ */
++static inline unsigned int
++mips_gic_vx_map_reg(enum mips_gic_local_interrupt intr)
++{
++	/* WD, Compare & Timer are 1:1 */
++	if (intr <= GIC_LOCAL_INT_TIMER)
++		return intr;
++
++	/* FDC moves to after Timer... */
++	if (intr == GIC_LOCAL_INT_FDC)
++		return GIC_LOCAL_INT_TIMER + 1;
++
++	/* As a result everything else is offset by 1 */
++	return intr + 1;
++}
++
+ /**
+  * gic_get_c0_compare_int() - Return cp0 count/compare interrupt virq
+  *
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 03b4cc0ec3a7..66ca906aa790 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -835,6 +835,16 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
+ 		break;
+ 	}
+ 
++	/*
++	 * If SSBD is controlled by the SPEC_CTRL MSR, then set the proper
++	 * bit in the mask to allow guests to use the mitigation even in the
++	 * case where the host does not enable it.
++	 */
++	if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
++	    static_cpu_has(X86_FEATURE_AMD_SSBD)) {
++		x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
++	}
++
+ 	/*
+ 	 * We have three CPU feature flags that are in play here:
+ 	 *  - X86_BUG_SPEC_STORE_BYPASS - CPU is susceptible.
+@@ -852,7 +862,6 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
+ 			x86_amd_ssb_disable();
+ 		} else {
+ 			x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
+-			x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
+ 			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+ 		}
+ 	}
+diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
+index f53658dde639..dfc36cd56676 100644
+--- a/arch/x86/kernel/cpu/microcode/core.c
++++ b/arch/x86/kernel/cpu/microcode/core.c
+@@ -793,13 +793,16 @@ static struct syscore_ops mc_syscore_ops = {
+ 	.resume			= mc_bp_resume,
+ };
+ 
+-static int mc_cpu_online(unsigned int cpu)
++static int mc_cpu_starting(unsigned int cpu)
+ {
+-	struct device *dev;
+-
+-	dev = get_cpu_device(cpu);
+ 	microcode_update_cpu(cpu);
+ 	pr_debug("CPU%d added\n", cpu);
++	return 0;
++}
++
++static int mc_cpu_online(unsigned int cpu)
++{
++	struct device *dev = get_cpu_device(cpu);
+ 
+ 	if (sysfs_create_group(&dev->kobj, &mc_attr_group))
+ 		pr_err("Failed to create group for CPU%d\n", cpu);
+@@ -876,7 +879,9 @@ int __init microcode_init(void)
+ 		goto out_ucode_group;
+ 
+ 	register_syscore_ops(&mc_syscore_ops);
+-	cpuhp_setup_state_nocalls(CPUHP_AP_MICROCODE_LOADER, "x86/microcode:online",
++	cpuhp_setup_state_nocalls(CPUHP_AP_MICROCODE_LOADER, "x86/microcode:starting",
++				  mc_cpu_starting, NULL);
++	cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "x86/microcode:online",
+ 				  mc_cpu_online, mc_cpu_down_prep);
+ 
+ 	pr_info("Microcode Update Driver: v%s.", DRIVER_VERSION);
+diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+index c51b56e29948..f70a617b31b0 100644
+--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
++++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+@@ -804,8 +804,12 @@ static int rdt_bit_usage_show(struct kernfs_open_file *of,
+ 			      struct seq_file *seq, void *v)
+ {
+ 	struct rdt_resource *r = of->kn->parent->priv;
+-	u32 sw_shareable = 0, hw_shareable = 0;
+-	u32 exclusive = 0, pseudo_locked = 0;
++	/*
++	 * Use unsigned long even though only 32 bits are used to ensure
++	 * test_bit() is used safely.
++	 */
++	unsigned long sw_shareable = 0, hw_shareable = 0;
++	unsigned long exclusive = 0, pseudo_locked = 0;
+ 	struct rdt_domain *dom;
+ 	int i, hwb, swb, excl, psl;
+ 	enum rdtgrp_mode mode;
+@@ -850,10 +854,10 @@ static int rdt_bit_usage_show(struct kernfs_open_file *of,
+ 		}
+ 		for (i = r->cache.cbm_len - 1; i >= 0; i--) {
+ 			pseudo_locked = dom->plr ? dom->plr->cbm : 0;
+-			hwb = test_bit(i, (unsigned long *)&hw_shareable);
+-			swb = test_bit(i, (unsigned long *)&sw_shareable);
+-			excl = test_bit(i, (unsigned long *)&exclusive);
+-			psl = test_bit(i, (unsigned long *)&pseudo_locked);
++			hwb = test_bit(i, &hw_shareable);
++			swb = test_bit(i, &sw_shareable);
++			excl = test_bit(i, &exclusive);
++			psl = test_bit(i, &pseudo_locked);
+ 			if (hwb && swb)
+ 				seq_putc(seq, 'X');
+ 			else if (hwb && !swb)
+@@ -2494,26 +2498,19 @@ out_destroy:
+  */
+ static void cbm_ensure_valid(u32 *_val, struct rdt_resource *r)
+ {
+-	/*
+-	 * Convert the u32 _val to an unsigned long required by all the bit
+-	 * operations within this function. No more than 32 bits of this
+-	 * converted value can be accessed because all bit operations are
+-	 * additionally provided with cbm_len that is initialized during
+-	 * hardware enumeration using five bits from the EAX register and
+-	 * thus never can exceed 32 bits.
+-	 */
+-	unsigned long *val = (unsigned long *)_val;
++	unsigned long val = *_val;
+ 	unsigned int cbm_len = r->cache.cbm_len;
+ 	unsigned long first_bit, zero_bit;
+ 
+-	if (*val == 0)
++	if (val == 0)
+ 		return;
+ 
+-	first_bit = find_first_bit(val, cbm_len);
+-	zero_bit = find_next_zero_bit(val, cbm_len, first_bit);
++	first_bit = find_first_bit(&val, cbm_len);
++	zero_bit = find_next_zero_bit(&val, cbm_len, first_bit);
+ 
+ 	/* Clear any remaining bits to ensure contiguous region */
+-	bitmap_clear(val, zero_bit, cbm_len - zero_bit);
++	bitmap_clear(&val, zero_bit, cbm_len - zero_bit);
++	*_val = (u32)val;
+ }
+ 
+ /**
+diff --git a/drivers/clk/socfpga/clk-s10.c b/drivers/clk/socfpga/clk-s10.c
+index 8281dfbf38c2..5bed36e12951 100644
+--- a/drivers/clk/socfpga/clk-s10.c
++++ b/drivers/clk/socfpga/clk-s10.c
+@@ -103,9 +103,9 @@ static const struct stratix10_perip_cnt_clock s10_main_perip_cnt_clks[] = {
+ 	{ STRATIX10_NOC_CLK, "noc_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux),
+ 	  0, 0, 0, 0x3C, 1},
+ 	{ STRATIX10_EMAC_A_FREE_CLK, "emaca_free_clk", NULL, emaca_free_mux, ARRAY_SIZE(emaca_free_mux),
+-	  0, 0, 4, 0xB0, 0},
++	  0, 0, 2, 0xB0, 0},
+ 	{ STRATIX10_EMAC_B_FREE_CLK, "emacb_free_clk", NULL, emacb_free_mux, ARRAY_SIZE(emacb_free_mux),
+-	  0, 0, 4, 0xB0, 1},
++	  0, 0, 2, 0xB0, 1},
+ 	{ STRATIX10_EMAC_PTP_FREE_CLK, "emac_ptp_free_clk", NULL, emac_ptp_free_mux,
+ 	  ARRAY_SIZE(emac_ptp_free_mux), 0, 0, 4, 0xB0, 2},
+ 	{ STRATIX10_GPIO_DB_FREE_CLK, "gpio_db_free_clk", NULL, gpio_db_free_mux,
+diff --git a/drivers/clk/tegra/clk-tegra210.c b/drivers/clk/tegra/clk-tegra210.c
+index 7545af763d7a..af4ace8f7369 100644
+--- a/drivers/clk/tegra/clk-tegra210.c
++++ b/drivers/clk/tegra/clk-tegra210.c
+@@ -3377,6 +3377,8 @@ static struct tegra_clk_init_table init_table[] __initdata = {
+ 	{ TEGRA210_CLK_I2S3_SYNC, TEGRA210_CLK_CLK_MAX, 24576000, 0 },
+ 	{ TEGRA210_CLK_I2S4_SYNC, TEGRA210_CLK_CLK_MAX, 24576000, 0 },
+ 	{ TEGRA210_CLK_VIMCLK_SYNC, TEGRA210_CLK_CLK_MAX, 24576000, 0 },
++	{ TEGRA210_CLK_HDA, TEGRA210_CLK_PLL_P, 51000000, 0 },
++	{ TEGRA210_CLK_HDA2CODEC_2X, TEGRA210_CLK_PLL_P, 48000000, 0 },
+ 	/* This MUST be the last entry. */
+ 	{ TEGRA210_CLK_CLK_MAX, TEGRA210_CLK_CLK_MAX, 0, 0 },
+ };
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index 55b77c576c42..6d9995cbf651 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -1007,14 +1007,16 @@ int __ref efi_mem_reserve_persistent(phys_addr_t addr, u64 size)
+ 
+ 	/* first try to find a slot in an existing linked list entry */
+ 	for (prsv = efi_memreserve_root->next; prsv; prsv = rsv->next) {
+-		rsv = __va(prsv);
++		rsv = memremap(prsv, sizeof(*rsv), MEMREMAP_WB);
+ 		index = atomic_fetch_add_unless(&rsv->count, 1, rsv->size);
+ 		if (index < rsv->size) {
+ 			rsv->entry[index].base = addr;
+ 			rsv->entry[index].size = size;
+ 
++			memunmap(rsv);
+ 			return 0;
+ 		}
++		memunmap(rsv);
+ 	}
+ 
+ 	/* no slot found - allocate a new linked list entry */
+@@ -1022,7 +1024,13 @@ int __ref efi_mem_reserve_persistent(phys_addr_t addr, u64 size)
+ 	if (!rsv)
+ 		return -ENOMEM;
+ 
+-	rsv->size = EFI_MEMRESERVE_COUNT(PAGE_SIZE);
++	/*
++	 * The memremap() call above assumes that a linux_efi_memreserve entry
++	 * never crosses a page boundary, so let's ensure that this remains true
++	 * even when kexec'ing a 4k pages kernel from a >4k pages kernel, by
++	 * using SZ_4K explicitly in the size calculation below.
++	 */
++	rsv->size = EFI_MEMRESERVE_COUNT(SZ_4K);
+ 	atomic_set(&rsv->count, 1);
+ 	rsv->entry[0].base = addr;
+ 	rsv->entry[0].size = size;
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index a67a63b5aa84..0c4a76bca5c6 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -280,7 +280,8 @@ struct drm_i915_display_funcs {
+ 	void (*get_cdclk)(struct drm_i915_private *dev_priv,
+ 			  struct intel_cdclk_state *cdclk_state);
+ 	void (*set_cdclk)(struct drm_i915_private *dev_priv,
+-			  const struct intel_cdclk_state *cdclk_state);
++			  const struct intel_cdclk_state *cdclk_state,
++			  enum pipe pipe);
+ 	int (*get_fifo_size)(struct drm_i915_private *dev_priv,
+ 			     enum i9xx_plane_id i9xx_plane);
+ 	int (*compute_pipe_wm)(struct intel_crtc_state *cstate);
+@@ -1622,6 +1623,8 @@ struct drm_i915_private {
+ 		struct intel_cdclk_state actual;
+ 		/* The current hardware cdclk state */
+ 		struct intel_cdclk_state hw;
++
++		int force_min_cdclk;
+ 	} cdclk;
+ 
+ 	/**
+@@ -1741,6 +1744,7 @@ struct drm_i915_private {
+ 	 *
+ 	 */
+ 	struct mutex av_mutex;
++	int audio_power_refcount;
+ 
+ 	struct {
+ 		struct mutex mutex;
+diff --git a/drivers/gpu/drm/i915/intel_audio.c b/drivers/gpu/drm/i915/intel_audio.c
+index 5104c6bbd66f..2f3fd5bf0f11 100644
+--- a/drivers/gpu/drm/i915/intel_audio.c
++++ b/drivers/gpu/drm/i915/intel_audio.c
+@@ -741,15 +741,71 @@ void intel_init_audio_hooks(struct drm_i915_private *dev_priv)
+ 	}
+ }
+ 
++static void glk_force_audio_cdclk(struct drm_i915_private *dev_priv,
++				  bool enable)
++{
++	struct drm_modeset_acquire_ctx ctx;
++	struct drm_atomic_state *state;
++	int ret;
++
++	drm_modeset_acquire_init(&ctx, 0);
++	state = drm_atomic_state_alloc(&dev_priv->drm);
++	if (WARN_ON(!state))
++		return;
++
++	state->acquire_ctx = &ctx;
++
++retry:
++	to_intel_atomic_state(state)->cdclk.force_min_cdclk_changed = true;
++	to_intel_atomic_state(state)->cdclk.force_min_cdclk =
++		enable ? 2 * 96000 : 0;
++
++	/*
++	 * Protects dev_priv->cdclk.force_min_cdclk
++	 * Need to lock this here in case we have no active pipes
++	 * and thus wouldn't lock it during the commit otherwise.
++	 */
++	ret = drm_modeset_lock(&dev_priv->drm.mode_config.connection_mutex,
++			       &ctx);
++	if (!ret)
++		ret = drm_atomic_commit(state);
++
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		drm_modeset_backoff(&ctx);
++		goto retry;
++	}
++
++	WARN_ON(ret);
++
++	drm_atomic_state_put(state);
++
++	drm_modeset_drop_locks(&ctx);
++	drm_modeset_acquire_fini(&ctx);
++}
++
+ static void i915_audio_component_get_power(struct device *kdev)
+ {
+-	intel_display_power_get(kdev_to_i915(kdev), POWER_DOMAIN_AUDIO);
++	struct drm_i915_private *dev_priv = kdev_to_i915(kdev);
++
++	intel_display_power_get(dev_priv, POWER_DOMAIN_AUDIO);
++
++	/* Force CDCLK to 2*BCLK as long as we need audio to be powered. */
++	if (dev_priv->audio_power_refcount++ == 0)
++		if (IS_CANNONLAKE(dev_priv) || IS_GEMINILAKE(dev_priv))
++			glk_force_audio_cdclk(dev_priv, true);
+ }
+ 
+ static void i915_audio_component_put_power(struct device *kdev)
+ {
+-	intel_display_power_put_unchecked(kdev_to_i915(kdev),
+-					  POWER_DOMAIN_AUDIO);
++	struct drm_i915_private *dev_priv = kdev_to_i915(kdev);
++
++	/* Stop forcing CDCLK to 2*BCLK if no need for audio to be powered. */
++	if (--dev_priv->audio_power_refcount == 0)
++		if (IS_CANNONLAKE(dev_priv) || IS_GEMINILAKE(dev_priv))
++			glk_force_audio_cdclk(dev_priv, false);
++
++	intel_display_power_put_unchecked(dev_priv, POWER_DOMAIN_AUDIO);
+ }
+ 
+ static void i915_audio_component_codec_wake_override(struct device *kdev,
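
The get/put pair above gates the expensive CDCLK reprogramming on the
refcount transitions alone: only 0 -> 1 forces the clock up and only 1 -> 0
releases it, so nested audio users cost nothing. A minimal single-threaded
sketch of that transition-gated pattern (the real code relies on the
surrounding serialization, which this sketch ignores):

#include <stdio.h>

struct dev_state {
	int audio_power_refcount;
	int cdclk_forced;
};

static void audio_get_power(struct dev_state *d)
{
	/* Only the first user pays for the clock bump */
	if (d->audio_power_refcount++ == 0)
		d->cdclk_forced = 1;	/* glk_force_audio_cdclk(dev, true) analogue */
}

static void audio_put_power(struct dev_state *d)
{
	/* Only the last user undoes it */
	if (--d->audio_power_refcount == 0)
		d->cdclk_forced = 0;	/* glk_force_audio_cdclk(dev, false) analogue */
}

int main(void)
{
	struct dev_state d = { 0, 0 };

	audio_get_power(&d);			/* 0 -> 1: clock forced */
	audio_get_power(&d);			/* nested user: no extra work */
	audio_put_power(&d);
	printf("forced=%d\n", d.cdclk_forced);	/* 1: one user remains */
	audio_put_power(&d);
	printf("forced=%d\n", d.cdclk_forced);	/* 0: last user gone */
	return 0;
}
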
+diff --git a/drivers/gpu/drm/i915/intel_cdclk.c b/drivers/gpu/drm/i915/intel_cdclk.c
+index 15ba950dee00..00f76261924c 100644
+--- a/drivers/gpu/drm/i915/intel_cdclk.c
++++ b/drivers/gpu/drm/i915/intel_cdclk.c
+@@ -516,7 +516,8 @@ static void vlv_program_pfi_credits(struct drm_i915_private *dev_priv)
+ }
+ 
+ static void vlv_set_cdclk(struct drm_i915_private *dev_priv,
+-			  const struct intel_cdclk_state *cdclk_state)
++			  const struct intel_cdclk_state *cdclk_state,
++			  enum pipe pipe)
+ {
+ 	int cdclk = cdclk_state->cdclk;
+ 	u32 val, cmd = cdclk_state->voltage_level;
+@@ -598,7 +599,8 @@ static void vlv_set_cdclk(struct drm_i915_private *dev_priv,
+ }
+ 
+ static void chv_set_cdclk(struct drm_i915_private *dev_priv,
+-			  const struct intel_cdclk_state *cdclk_state)
++			  const struct intel_cdclk_state *cdclk_state,
++			  enum pipe pipe)
+ {
+ 	int cdclk = cdclk_state->cdclk;
+ 	u32 val, cmd = cdclk_state->voltage_level;
+@@ -697,7 +699,8 @@ static void bdw_get_cdclk(struct drm_i915_private *dev_priv,
+ }
+ 
+ static void bdw_set_cdclk(struct drm_i915_private *dev_priv,
+-			  const struct intel_cdclk_state *cdclk_state)
++			  const struct intel_cdclk_state *cdclk_state,
++			  enum pipe pipe)
+ {
+ 	int cdclk = cdclk_state->cdclk;
+ 	u32 val;
+@@ -987,7 +990,8 @@ static void skl_dpll0_disable(struct drm_i915_private *dev_priv)
+ }
+ 
+ static void skl_set_cdclk(struct drm_i915_private *dev_priv,
+-			  const struct intel_cdclk_state *cdclk_state)
++			  const struct intel_cdclk_state *cdclk_state,
++			  enum pipe pipe)
+ {
+ 	int cdclk = cdclk_state->cdclk;
+ 	int vco = cdclk_state->vco;
+@@ -1158,7 +1162,7 @@ void skl_init_cdclk(struct drm_i915_private *dev_priv)
+ 	cdclk_state.cdclk = skl_calc_cdclk(0, cdclk_state.vco);
+ 	cdclk_state.voltage_level = skl_calc_voltage_level(cdclk_state.cdclk);
+ 
+-	skl_set_cdclk(dev_priv, &cdclk_state);
++	skl_set_cdclk(dev_priv, &cdclk_state, INVALID_PIPE);
+ }
+ 
+ /**
+@@ -1176,7 +1180,7 @@ void skl_uninit_cdclk(struct drm_i915_private *dev_priv)
+ 	cdclk_state.vco = 0;
+ 	cdclk_state.voltage_level = skl_calc_voltage_level(cdclk_state.cdclk);
+ 
+-	skl_set_cdclk(dev_priv, &cdclk_state);
++	skl_set_cdclk(dev_priv, &cdclk_state, INVALID_PIPE);
+ }
+ 
+ static int bxt_calc_cdclk(int min_cdclk)
+@@ -1355,7 +1359,8 @@ static void bxt_de_pll_enable(struct drm_i915_private *dev_priv, int vco)
+ }
+ 
+ static void bxt_set_cdclk(struct drm_i915_private *dev_priv,
+-			  const struct intel_cdclk_state *cdclk_state)
++			  const struct intel_cdclk_state *cdclk_state,
++			  enum pipe pipe)
+ {
+ 	int cdclk = cdclk_state->cdclk;
+ 	int vco = cdclk_state->vco;
+@@ -1408,11 +1413,10 @@ static void bxt_set_cdclk(struct drm_i915_private *dev_priv,
+ 		bxt_de_pll_enable(dev_priv, vco);
+ 
+ 	val = divider | skl_cdclk_decimal(cdclk);
+-	/*
+-	 * FIXME if only the cd2x divider needs changing, it could be done
+-	 * without shutting off the pipe (if only one pipe is active).
+-	 */
+-	val |= BXT_CDCLK_CD2X_PIPE_NONE;
++	if (pipe == INVALID_PIPE)
++		val |= BXT_CDCLK_CD2X_PIPE_NONE;
++	else
++		val |= BXT_CDCLK_CD2X_PIPE(pipe);
+ 	/*
+ 	 * Disable SSA Precharge when CD clock frequency < 500 MHz,
+ 	 * enable otherwise.
+@@ -1421,6 +1425,9 @@ static void bxt_set_cdclk(struct drm_i915_private *dev_priv,
+ 		val |= BXT_CDCLK_SSA_PRECHARGE_ENABLE;
+ 	I915_WRITE(CDCLK_CTL, val);
+ 
++	if (pipe != INVALID_PIPE)
++		intel_wait_for_vblank(dev_priv, pipe);
++
+ 	mutex_lock(&dev_priv->pcu_lock);
+ 	/*
+ 	 * The timeout isn't specified, the 2ms used here is based on
+@@ -1525,7 +1532,7 @@ void bxt_init_cdclk(struct drm_i915_private *dev_priv)
+ 	}
+ 	cdclk_state.voltage_level = bxt_calc_voltage_level(cdclk_state.cdclk);
+ 
+-	bxt_set_cdclk(dev_priv, &cdclk_state);
++	bxt_set_cdclk(dev_priv, &cdclk_state, INVALID_PIPE);
+ }
+ 
+ /**
+@@ -1543,7 +1550,7 @@ void bxt_uninit_cdclk(struct drm_i915_private *dev_priv)
+ 	cdclk_state.vco = 0;
+ 	cdclk_state.voltage_level = bxt_calc_voltage_level(cdclk_state.cdclk);
+ 
+-	bxt_set_cdclk(dev_priv, &cdclk_state);
++	bxt_set_cdclk(dev_priv, &cdclk_state, INVALID_PIPE);
+ }
+ 
+ static int cnl_calc_cdclk(int min_cdclk)
+@@ -1663,7 +1670,8 @@ static void cnl_cdclk_pll_enable(struct drm_i915_private *dev_priv, int vco)
+ }
+ 
+ static void cnl_set_cdclk(struct drm_i915_private *dev_priv,
+-			  const struct intel_cdclk_state *cdclk_state)
++			  const struct intel_cdclk_state *cdclk_state,
++			  enum pipe pipe)
+ {
+ 	int cdclk = cdclk_state->cdclk;
+ 	int vco = cdclk_state->vco;
+@@ -1704,13 +1712,15 @@ static void cnl_set_cdclk(struct drm_i915_private *dev_priv,
+ 		cnl_cdclk_pll_enable(dev_priv, vco);
+ 
+ 	val = divider | skl_cdclk_decimal(cdclk);
+-	/*
+-	 * FIXME if only the cd2x divider needs changing, it could be done
+-	 * without shutting off the pipe (if only one pipe is active).
+-	 */
+-	val |= BXT_CDCLK_CD2X_PIPE_NONE;
++	if (pipe == INVALID_PIPE)
++		val |= BXT_CDCLK_CD2X_PIPE_NONE;
++	else
++		val |= BXT_CDCLK_CD2X_PIPE(pipe);
+ 	I915_WRITE(CDCLK_CTL, val);
+ 
++	if (pipe != INVALID_PIPE)
++		intel_wait_for_vblank(dev_priv, pipe);
++
+ 	/* inform PCU of the change */
+ 	mutex_lock(&dev_priv->pcu_lock);
+ 	sandybridge_pcode_write(dev_priv, SKL_PCODE_CDCLK_CONTROL,
+@@ -1847,7 +1857,8 @@ static int icl_calc_cdclk_pll_vco(struct drm_i915_private *dev_priv, int cdclk)
+ }
+ 
+ static void icl_set_cdclk(struct drm_i915_private *dev_priv,
+-			  const struct intel_cdclk_state *cdclk_state)
++			  const struct intel_cdclk_state *cdclk_state,
++			  enum pipe pipe)
+ {
+ 	unsigned int cdclk = cdclk_state->cdclk;
+ 	unsigned int vco = cdclk_state->vco;
+@@ -1872,6 +1883,11 @@ static void icl_set_cdclk(struct drm_i915_private *dev_priv,
+ 	if (dev_priv->cdclk.hw.vco != vco)
+ 		cnl_cdclk_pll_enable(dev_priv, vco);
+ 
++	/*
++	 * On ICL CD2X_DIV can only be 1, so we'll never end up changing the
++	 * divider here synchronized to a pipe while CDCLK is on, nor will we
++	 * need the corresponding vblank wait.
++	 */
+ 	I915_WRITE(CDCLK_CTL, ICL_CDCLK_CD2X_PIPE_NONE |
+ 			      skl_cdclk_decimal(cdclk));
+ 
+@@ -2002,7 +2018,7 @@ sanitize:
+ 	sanitized_state.voltage_level =
+ 				icl_calc_voltage_level(sanitized_state.cdclk);
+ 
+-	icl_set_cdclk(dev_priv, &sanitized_state);
++	icl_set_cdclk(dev_priv, &sanitized_state, INVALID_PIPE);
+ }
+ 
+ /**
+@@ -2020,7 +2036,7 @@ void icl_uninit_cdclk(struct drm_i915_private *dev_priv)
+ 	cdclk_state.vco = 0;
+ 	cdclk_state.voltage_level = icl_calc_voltage_level(cdclk_state.cdclk);
+ 
+-	icl_set_cdclk(dev_priv, &cdclk_state);
++	icl_set_cdclk(dev_priv, &cdclk_state, INVALID_PIPE);
+ }
+ 
+ /**
+@@ -2048,7 +2064,7 @@ void cnl_init_cdclk(struct drm_i915_private *dev_priv)
+ 	cdclk_state.vco = cnl_cdclk_pll_vco(dev_priv, cdclk_state.cdclk);
+ 	cdclk_state.voltage_level = cnl_calc_voltage_level(cdclk_state.cdclk);
+ 
+-	cnl_set_cdclk(dev_priv, &cdclk_state);
++	cnl_set_cdclk(dev_priv, &cdclk_state, INVALID_PIPE);
+ }
+ 
+ /**
+@@ -2066,7 +2082,7 @@ void cnl_uninit_cdclk(struct drm_i915_private *dev_priv)
+ 	cdclk_state.vco = 0;
+ 	cdclk_state.voltage_level = cnl_calc_voltage_level(cdclk_state.cdclk);
+ 
+-	cnl_set_cdclk(dev_priv, &cdclk_state);
++	cnl_set_cdclk(dev_priv, &cdclk_state, INVALID_PIPE);
+ }
+ 
+ /**
+@@ -2085,6 +2101,27 @@ bool intel_cdclk_needs_modeset(const struct intel_cdclk_state *a,
+ 		a->ref != b->ref;
+ }
+ 
++/**
++ * intel_cdclk_needs_cd2x_update - Determine if two CDCLK states require a cd2x divider update
++ * @a: first CDCLK state
++ * @b: second CDCLK state
++ *
++ * Returns:
++ * True if the CDCLK states require just a cd2x divider update, false if not.
++ */
++bool intel_cdclk_needs_cd2x_update(struct drm_i915_private *dev_priv,
++				   const struct intel_cdclk_state *a,
++				   const struct intel_cdclk_state *b)
++{
++	/* Older hw doesn't have the capability */
++	if (INTEL_GEN(dev_priv) < 10 && !IS_GEN9_LP(dev_priv))
++		return false;
++
++	return a->cdclk != b->cdclk &&
++		a->vco == b->vco &&
++		a->ref == b->ref;
++}
++
+ /**
+  * intel_cdclk_changed - Determine if two CDCLK states are different
+  * @a: first CDCLK state
+@@ -2100,6 +2137,26 @@ bool intel_cdclk_changed(const struct intel_cdclk_state *a,
+ 		a->voltage_level != b->voltage_level;
+ }
+ 
++/**
++ * intel_cdclk_swap_state - make atomic CDCLK configuration effective
++ * @state: atomic state
++ *
++ * This is the CDCLK version of drm_atomic_helper_swap_state() since the
++ * helper does not handle driver-specific global state.
++ *
++ * Similarly to the atomic helpers this function does a complete swap,
++ * i.e. it also puts the old state into @state. This is used by the commit
++ * code to determine how CDCLK has changed (for instance did it increase or
++ * decrease).
++ */
++void intel_cdclk_swap_state(struct intel_atomic_state *state)
++{
++	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
++
++	swap(state->cdclk.logical, dev_priv->cdclk.logical);
++	swap(state->cdclk.actual, dev_priv->cdclk.actual);
++}
++
+ void intel_dump_cdclk_state(const struct intel_cdclk_state *cdclk_state,
+ 			    const char *context)
+ {
+@@ -2113,12 +2170,14 @@ void intel_dump_cdclk_state(const struct intel_cdclk_state *cdclk_state,
+  * intel_set_cdclk - Push the CDCLK state to the hardware
+  * @dev_priv: i915 device
+  * @cdclk_state: new CDCLK state
++ * @pipe: pipe with which to synchronize the update
+  *
+  * Program the hardware based on the passed in CDCLK state,
+  * if necessary.
+  */
+-void intel_set_cdclk(struct drm_i915_private *dev_priv,
+-		     const struct intel_cdclk_state *cdclk_state)
++static void intel_set_cdclk(struct drm_i915_private *dev_priv,
++			    const struct intel_cdclk_state *cdclk_state,
++			    enum pipe pipe)
+ {
+ 	if (!intel_cdclk_changed(&dev_priv->cdclk.hw, cdclk_state))
+ 		return;
+@@ -2128,7 +2187,7 @@ void intel_set_cdclk(struct drm_i915_private *dev_priv,
+ 
+ 	intel_dump_cdclk_state(cdclk_state, "Changing CDCLK to");
+ 
+-	dev_priv->display.set_cdclk(dev_priv, cdclk_state);
++	dev_priv->display.set_cdclk(dev_priv, cdclk_state, pipe);
+ 
+ 	if (WARN(intel_cdclk_changed(&dev_priv->cdclk.hw, cdclk_state),
+ 		 "cdclk state doesn't match!\n")) {
+@@ -2137,6 +2196,46 @@ void intel_set_cdclk(struct drm_i915_private *dev_priv,
+ 	}
+ }
+ 
++/**
++ * intel_set_cdclk_pre_plane_update - Push the CDCLK state to the hardware
++ * @dev_priv: i915 device
++ * @old_state: old CDCLK state
++ * @new_state: new CDCLK state
++ * @pipe: pipe with which to synchronize the update
++ *
++ * Program the hardware before updating the HW plane state based on the passed
++ * in CDCLK state, if necessary.
++ */
++void
++intel_set_cdclk_pre_plane_update(struct drm_i915_private *dev_priv,
++				 const struct intel_cdclk_state *old_state,
++				 const struct intel_cdclk_state *new_state,
++				 enum pipe pipe)
++{
++	if (pipe == INVALID_PIPE || old_state->cdclk <= new_state->cdclk)
++		intel_set_cdclk(dev_priv, new_state, pipe);
++}
++
++/**
++ * intel_set_cdclk_post_plane_update - Push the CDCLK state to the hardware
++ * @dev_priv: i915 device
++ * @old_state: old CDCLK state
++ * @new_state: new CDCLK state
++ * @pipe: pipe with which to synchronize the update
++ *
++ * Program the hardware after updating the HW plane state based on the passed
++ * in CDCLK state, if necessary.
++ */
++void
++intel_set_cdclk_post_plane_update(struct drm_i915_private *dev_priv,
++				  const struct intel_cdclk_state *old_state,
++				  const struct intel_cdclk_state *new_state,
++				  enum pipe pipe)
++{
++	if (pipe != INVALID_PIPE && old_state->cdclk > new_state->cdclk)
++		intel_set_cdclk(dev_priv, new_state, pipe);
++}
++
+ static int intel_pixel_rate_to_cdclk(struct drm_i915_private *dev_priv,
+ 				     int pixel_rate)
+ {
+@@ -2187,19 +2286,8 @@ int intel_crtc_compute_min_cdclk(const struct intel_crtc_state *crtc_state)
+ 	/*
+ 	 * According to BSpec, "The CD clock frequency must be at least twice
+ 	 * the frequency of the Azalia BCLK." and BCLK is 96 MHz by default.
+-	 *
+-	 * FIXME: Check the actual, not default, BCLK being used.
+-	 *
+-	 * FIXME: This does not depend on ->has_audio because the higher CDCLK
+-	 * is required for audio probe, also when there are no audio capable
+-	 * displays connected at probe time. This leads to unnecessarily high
+-	 * CDCLK when audio is not required.
+-	 *
+-	 * FIXME: This limit is only applied when there are displays connected
+-	 * at probe time. If we probe without displays, we'll still end up using
+-	 * the platform minimum CDCLK, failing audio probe.
+ 	 */
+-	if (INTEL_GEN(dev_priv) >= 9)
++	if (crtc_state->has_audio && INTEL_GEN(dev_priv) >= 9)
+ 		min_cdclk = max(2 * 96000, min_cdclk);
+ 
+ 	/*
+@@ -2239,7 +2327,7 @@ static int intel_compute_min_cdclk(struct drm_atomic_state *state)
+ 		intel_state->min_cdclk[i] = min_cdclk;
+ 	}
+ 
+-	min_cdclk = 0;
++	min_cdclk = intel_state->cdclk.force_min_cdclk;
+ 	for_each_pipe(dev_priv, pipe)
+ 		min_cdclk = max(intel_state->min_cdclk[pipe], min_cdclk);
+ 
+@@ -2300,7 +2388,8 @@ static int vlv_modeset_calc_cdclk(struct drm_atomic_state *state)
+ 		vlv_calc_voltage_level(dev_priv, cdclk);
+ 
+ 	if (!intel_state->active_crtcs) {
+-		cdclk = vlv_calc_cdclk(dev_priv, 0);
++		cdclk = vlv_calc_cdclk(dev_priv,
++				       intel_state->cdclk.force_min_cdclk);
+ 
+ 		intel_state->cdclk.actual.cdclk = cdclk;
+ 		intel_state->cdclk.actual.voltage_level =
+@@ -2333,7 +2422,7 @@ static int bdw_modeset_calc_cdclk(struct drm_atomic_state *state)
+ 		bdw_calc_voltage_level(cdclk);
+ 
+ 	if (!intel_state->active_crtcs) {
+-		cdclk = bdw_calc_cdclk(0);
++		cdclk = bdw_calc_cdclk(intel_state->cdclk.force_min_cdclk);
+ 
+ 		intel_state->cdclk.actual.cdclk = cdclk;
+ 		intel_state->cdclk.actual.voltage_level =
+@@ -2405,7 +2494,7 @@ static int skl_modeset_calc_cdclk(struct drm_atomic_state *state)
+ 		skl_calc_voltage_level(cdclk);
+ 
+ 	if (!intel_state->active_crtcs) {
+-		cdclk = skl_calc_cdclk(0, vco);
++		cdclk = skl_calc_cdclk(intel_state->cdclk.force_min_cdclk, vco);
+ 
+ 		intel_state->cdclk.actual.vco = vco;
+ 		intel_state->cdclk.actual.cdclk = cdclk;
+@@ -2444,10 +2533,10 @@ static int bxt_modeset_calc_cdclk(struct drm_atomic_state *state)
+ 
+ 	if (!intel_state->active_crtcs) {
+ 		if (IS_GEMINILAKE(dev_priv)) {
+-			cdclk = glk_calc_cdclk(0);
++			cdclk = glk_calc_cdclk(intel_state->cdclk.force_min_cdclk);
+ 			vco = glk_de_pll_vco(dev_priv, cdclk);
+ 		} else {
+-			cdclk = bxt_calc_cdclk(0);
++			cdclk = bxt_calc_cdclk(intel_state->cdclk.force_min_cdclk);
+ 			vco = bxt_de_pll_vco(dev_priv, cdclk);
+ 		}
+ 
+@@ -2483,7 +2572,7 @@ static int cnl_modeset_calc_cdclk(struct drm_atomic_state *state)
+ 		    cnl_compute_min_voltage_level(intel_state));
+ 
+ 	if (!intel_state->active_crtcs) {
+-		cdclk = cnl_calc_cdclk(0);
++		cdclk = cnl_calc_cdclk(intel_state->cdclk.force_min_cdclk);
+ 		vco = cnl_cdclk_pll_vco(dev_priv, cdclk);
+ 
+ 		intel_state->cdclk.actual.vco = vco;
+@@ -2519,7 +2608,7 @@ static int icl_modeset_calc_cdclk(struct drm_atomic_state *state)
+ 		    cnl_compute_min_voltage_level(intel_state));
+ 
+ 	if (!intel_state->active_crtcs) {
+-		cdclk = icl_calc_cdclk(0, ref);
++		cdclk = icl_calc_cdclk(intel_state->cdclk.force_min_cdclk, ref);
+ 		vco = icl_calc_cdclk_pll_vco(dev_priv, cdclk);
+ 
+ 		intel_state->cdclk.actual.vco = vco;
+diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
+index be4024f0e3a8..9803d713f746 100644
+--- a/drivers/gpu/drm/i915/intel_display.c
++++ b/drivers/gpu/drm/i915/intel_display.c
+@@ -12770,10 +12770,16 @@ static int intel_modeset_checks(struct drm_atomic_state *state)
+ 		return -EINVAL;
+ 	}
+ 
++	/* keep the current setting */
++	if (!intel_state->cdclk.force_min_cdclk_changed)
++		intel_state->cdclk.force_min_cdclk =
++			dev_priv->cdclk.force_min_cdclk;
++
+ 	intel_state->modeset = true;
+ 	intel_state->active_crtcs = dev_priv->active_crtcs;
+ 	intel_state->cdclk.logical = dev_priv->cdclk.logical;
+ 	intel_state->cdclk.actual = dev_priv->cdclk.actual;
++	intel_state->cdclk.pipe = INVALID_PIPE;
+ 
+ 	for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
+ 		if (new_crtc_state->active)
+@@ -12793,6 +12799,8 @@ static int intel_modeset_checks(struct drm_atomic_state *state)
+ 	 * adjusted_mode bits in the crtc directly.
+ 	 */
+ 	if (dev_priv->display.modeset_calc_cdclk) {
++		enum pipe pipe;
++
+ 		ret = dev_priv->display.modeset_calc_cdclk(state);
+ 		if (ret < 0)
+ 			return ret;
+@@ -12809,12 +12817,36 @@ static int intel_modeset_checks(struct drm_atomic_state *state)
+ 				return ret;
+ 		}
+ 
++		if (is_power_of_2(intel_state->active_crtcs)) {
++			struct drm_crtc *crtc;
++			struct drm_crtc_state *crtc_state;
++
++			pipe = ilog2(intel_state->active_crtcs);
++			crtc = &intel_get_crtc_for_pipe(dev_priv, pipe)->base;
++			crtc_state = drm_atomic_get_new_crtc_state(state, crtc);
++			if (crtc_state && needs_modeset(crtc_state))
++				pipe = INVALID_PIPE;
++		} else {
++			pipe = INVALID_PIPE;
++		}
++
+ 		/* All pipes must be switched off while we change the cdclk. */
+-		if (intel_cdclk_needs_modeset(&dev_priv->cdclk.actual,
+-					      &intel_state->cdclk.actual)) {
++		if (pipe != INVALID_PIPE &&
++		    intel_cdclk_needs_cd2x_update(dev_priv,
++						  &dev_priv->cdclk.actual,
++						  &intel_state->cdclk.actual)) {
++			ret = intel_lock_all_pipes(state);
++			if (ret < 0)
++				return ret;
++
++			intel_state->cdclk.pipe = pipe;
++		} else if (intel_cdclk_needs_modeset(&dev_priv->cdclk.actual,
++						     &intel_state->cdclk.actual)) {
+ 			ret = intel_modeset_all_pipes(state);
+ 			if (ret < 0)
+ 				return ret;
++
++			intel_state->cdclk.pipe = INVALID_PIPE;
+ 		}
+ 
+ 		DRM_DEBUG_KMS("New cdclk calculated to be logical %u kHz, actual %u kHz\n",
+@@ -12823,8 +12855,6 @@ static int intel_modeset_checks(struct drm_atomic_state *state)
+ 		DRM_DEBUG_KMS("New voltage level calculated to be logical %u, actual %u\n",
+ 			      intel_state->cdclk.logical.voltage_level,
+ 			      intel_state->cdclk.actual.voltage_level);
+-	} else {
+-		to_intel_atomic_state(state)->cdclk.logical = dev_priv->cdclk.logical;
+ 	}
+ 
+ 	intel_modeset_clear_plls(state);
+@@ -12892,7 +12922,7 @@ static int intel_atomic_check(struct drm_device *dev,
+ 	struct drm_crtc *crtc;
+ 	struct drm_crtc_state *old_crtc_state, *crtc_state;
+ 	int ret, i;
+-	bool any_ms = false;
++	bool any_ms = intel_state->cdclk.force_min_cdclk_changed;
+ 
+ 	/* Catch I915_MODE_FLAG_INHERITED */
+ 	for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state,
+@@ -13248,7 +13278,10 @@ static void intel_atomic_commit_tail(struct drm_atomic_state *state)
+ 	if (intel_state->modeset) {
+ 		drm_atomic_helper_update_legacy_modeset_state(state->dev, state);
+ 
+-		intel_set_cdclk(dev_priv, &dev_priv->cdclk.actual);
++		intel_set_cdclk_pre_plane_update(dev_priv,
++						 &intel_state->cdclk.actual,
++						 &dev_priv->cdclk.actual,
++						 intel_state->cdclk.pipe);
+ 
+ 		/*
+ 		 * SKL workaround: bspec recommends we disable the SAGV when we
+@@ -13277,6 +13310,12 @@ static void intel_atomic_commit_tail(struct drm_atomic_state *state)
+ 	/* Now enable the clocks, plane, pipe, and connectors that we set up. */
+ 	dev_priv->display.update_crtcs(state);
+ 
++	if (intel_state->modeset)
++		intel_set_cdclk_post_plane_update(dev_priv,
++						  &intel_state->cdclk.actual,
++						  &dev_priv->cdclk.actual,
++						  intel_state->cdclk.pipe);
++
+ 	/* FIXME: We should call drm_atomic_helper_commit_hw_done() here
+ 	 * already, but still need the state for the delayed optimization. To
+ 	 * fix this:
+@@ -13478,8 +13517,10 @@ static int intel_atomic_commit(struct drm_device *dev,
+ 		       intel_state->min_voltage_level,
+ 		       sizeof(intel_state->min_voltage_level));
+ 		dev_priv->active_crtcs = intel_state->active_crtcs;
+-		dev_priv->cdclk.logical = intel_state->cdclk.logical;
+-		dev_priv->cdclk.actual = intel_state->cdclk.actual;
++		dev_priv->cdclk.force_min_cdclk =
++			intel_state->cdclk.force_min_cdclk;
++
++		intel_cdclk_swap_state(intel_state);
+ 	}
+ 
+ 	drm_atomic_state_get(state);
+diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
+index 18e17b422701..ceaa05dfa0d0 100644
+--- a/drivers/gpu/drm/i915/intel_drv.h
++++ b/drivers/gpu/drm/i915/intel_drv.h
+@@ -480,6 +480,11 @@ struct intel_atomic_state {
+ 		 * state only when all crtc's are DPMS off.
+ 		 */
+ 		struct intel_cdclk_state actual;
++
++		int force_min_cdclk;
++		bool force_min_cdclk_changed;
++		/* pipe to which cd2x update is synchronized */
++		enum pipe pipe;
+ 	} cdclk;
+ 
+ 	bool dpll_set, modeset;
+@@ -1590,12 +1595,24 @@ void intel_init_cdclk_hooks(struct drm_i915_private *dev_priv);
+ void intel_update_max_cdclk(struct drm_i915_private *dev_priv);
+ void intel_update_cdclk(struct drm_i915_private *dev_priv);
+ void intel_update_rawclk(struct drm_i915_private *dev_priv);
++bool intel_cdclk_needs_cd2x_update(struct drm_i915_private *dev_priv,
++				   const struct intel_cdclk_state *a,
++				   const struct intel_cdclk_state *b);
+ bool intel_cdclk_needs_modeset(const struct intel_cdclk_state *a,
+ 			       const struct intel_cdclk_state *b);
+ bool intel_cdclk_changed(const struct intel_cdclk_state *a,
+ 			 const struct intel_cdclk_state *b);
+-void intel_set_cdclk(struct drm_i915_private *dev_priv,
+-		     const struct intel_cdclk_state *cdclk_state);
++void intel_cdclk_swap_state(struct intel_atomic_state *state);
++void
++intel_set_cdclk_pre_plane_update(struct drm_i915_private *dev_priv,
++				 const struct intel_cdclk_state *old_state,
++				 const struct intel_cdclk_state *new_state,
++				 enum pipe pipe);
++void
++intel_set_cdclk_post_plane_update(struct drm_i915_private *dev_priv,
++				  const struct intel_cdclk_state *old_state,
++				  const struct intel_cdclk_state *new_state,
++				  enum pipe pipe);
+ void intel_dump_cdclk_state(const struct intel_cdclk_state *cdclk_state,
+ 			    const char *context);
+ 
+diff --git a/drivers/infiniband/core/addr.c b/drivers/infiniband/core/addr.c
+index 0dce94e3c495..d0b04b0d309f 100644
+--- a/drivers/infiniband/core/addr.c
++++ b/drivers/infiniband/core/addr.c
+@@ -730,8 +730,8 @@ int roce_resolve_route_from_path(struct sa_path_rec *rec,
+ 	if (rec->roce.route_resolved)
+ 		return 0;
+ 
+-	rdma_gid2ip(&sgid._sockaddr, &rec->sgid);
+-	rdma_gid2ip(&dgid._sockaddr, &rec->dgid);
++	rdma_gid2ip((struct sockaddr *)&sgid, &rec->sgid);
++	rdma_gid2ip((struct sockaddr *)&dgid, &rec->dgid);
+ 
+ 	if (sgid._sockaddr.sa_family != dgid._sockaddr.sa_family)
+ 		return -EINVAL;
+@@ -742,7 +742,7 @@ int roce_resolve_route_from_path(struct sa_path_rec *rec,
+ 	dev_addr.net = &init_net;
+ 	dev_addr.sgid_attr = attr;
+ 
+-	ret = addr_resolve(&sgid._sockaddr, &dgid._sockaddr,
++	ret = addr_resolve((struct sockaddr *)&sgid, (struct sockaddr *)&dgid,
+ 			   &dev_addr, false, true, 0);
+ 	if (ret)
+ 		return ret;
+@@ -814,22 +814,22 @@ int rdma_addr_find_l2_eth_by_grh(const union ib_gid *sgid,
+ 	struct rdma_dev_addr dev_addr;
+ 	struct resolve_cb_context ctx;
+ 	union {
+-		struct sockaddr     _sockaddr;
+ 		struct sockaddr_in  _sockaddr_in;
+ 		struct sockaddr_in6 _sockaddr_in6;
+ 	} sgid_addr, dgid_addr;
+ 	int ret;
+ 
+-	rdma_gid2ip(&sgid_addr._sockaddr, sgid);
+-	rdma_gid2ip(&dgid_addr._sockaddr, dgid);
++	rdma_gid2ip((struct sockaddr *)&sgid_addr, sgid);
++	rdma_gid2ip((struct sockaddr *)&dgid_addr, dgid);
+ 
+ 	memset(&dev_addr, 0, sizeof(dev_addr));
+ 	dev_addr.net = &init_net;
+ 	dev_addr.sgid_attr = sgid_attr;
+ 
+ 	init_completion(&ctx.comp);
+-	ret = rdma_resolve_ip(&sgid_addr._sockaddr, &dgid_addr._sockaddr,
+-			      &dev_addr, 1000, resolve_cb, true, &ctx);
++	ret = rdma_resolve_ip((struct sockaddr *)&sgid_addr,
++			      (struct sockaddr *)&dgid_addr, &dev_addr, 1000,
++			      resolve_cb, true, &ctx);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_ah.c b/drivers/infiniband/hw/ocrdma/ocrdma_ah.c
+index a7295322efbc..f4500d77f314 100644
+--- a/drivers/infiniband/hw/ocrdma/ocrdma_ah.c
++++ b/drivers/infiniband/hw/ocrdma/ocrdma_ah.c
+@@ -83,7 +83,6 @@ static inline int set_av_attr(struct ocrdma_dev *dev, struct ocrdma_ah *ah,
+ 	struct iphdr ipv4;
+ 	const struct ib_global_route *ib_grh;
+ 	union {
+-		struct sockaddr     _sockaddr;
+ 		struct sockaddr_in  _sockaddr_in;
+ 		struct sockaddr_in6 _sockaddr_in6;
+ 	} sgid_addr, dgid_addr;
+@@ -133,9 +132,9 @@ static inline int set_av_attr(struct ocrdma_dev *dev, struct ocrdma_ah *ah,
+ 		ipv4.tot_len = htons(0);
+ 		ipv4.ttl = ib_grh->hop_limit;
+ 		ipv4.protocol = nxthdr;
+-		rdma_gid2ip(&sgid_addr._sockaddr, sgid);
++		rdma_gid2ip((struct sockaddr *)&sgid_addr, sgid);
+ 		ipv4.saddr = sgid_addr._sockaddr_in.sin_addr.s_addr;
+-		rdma_gid2ip(&dgid_addr._sockaddr, &ib_grh->dgid);
++		rdma_gid2ip((struct sockaddr *)&dgid_addr, &ib_grh->dgid);
+ 		ipv4.daddr = dgid_addr._sockaddr_in.sin_addr.s_addr;
+ 		memcpy((u8 *)ah->av + eth_sz, &ipv4, sizeof(struct iphdr));
+ 	} else {
+diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_hw.c b/drivers/infiniband/hw/ocrdma/ocrdma_hw.c
+index 097e5ab2a19f..94ca0f4a4073 100644
+--- a/drivers/infiniband/hw/ocrdma/ocrdma_hw.c
++++ b/drivers/infiniband/hw/ocrdma/ocrdma_hw.c
+@@ -2499,7 +2499,6 @@ static int ocrdma_set_av_params(struct ocrdma_qp *qp,
+ 	u32 vlan_id = 0xFFFF;
+ 	u8 mac_addr[6], hdr_type;
+ 	union {
+-		struct sockaddr     _sockaddr;
+ 		struct sockaddr_in  _sockaddr_in;
+ 		struct sockaddr_in6 _sockaddr_in6;
+ 	} sgid_addr, dgid_addr;
+@@ -2541,8 +2540,8 @@ static int ocrdma_set_av_params(struct ocrdma_qp *qp,
+ 
+ 	hdr_type = rdma_gid_attr_network_type(sgid_attr);
+ 	if (hdr_type == RDMA_NETWORK_IPV4) {
+-		rdma_gid2ip(&sgid_addr._sockaddr, &sgid_attr->gid);
+-		rdma_gid2ip(&dgid_addr._sockaddr, &grh->dgid);
++		rdma_gid2ip((struct sockaddr *)&sgid_addr, &sgid_attr->gid);
++		rdma_gid2ip((struct sockaddr *)&dgid_addr, &grh->dgid);
+ 		memcpy(&cmd->params.dgid[0],
+ 		       &dgid_addr._sockaddr_in.sin_addr.s_addr, 4);
+ 		memcpy(&cmd->params.sgid[0],
+diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c
+index d32268cc1174..f3985469c221 100644
+--- a/drivers/irqchip/irq-mips-gic.c
++++ b/drivers/irqchip/irq-mips-gic.c
+@@ -388,7 +388,7 @@ static void gic_all_vpes_irq_cpu_online(struct irq_data *d)
+ 	intr = GIC_HWIRQ_TO_LOCAL(d->hwirq);
+ 	cd = irq_data_get_irq_chip_data(d);
+ 
+-	write_gic_vl_map(intr, cd->map);
++	write_gic_vl_map(mips_gic_vx_map_reg(intr), cd->map);
+ 	if (cd->mask)
+ 		write_gic_vl_smask(BIT(intr));
+ }
+@@ -517,7 +517,7 @@ static int gic_irq_domain_map(struct irq_domain *d, unsigned int virq,
+ 	spin_lock_irqsave(&gic_lock, flags);
+ 	for_each_online_cpu(cpu) {
+ 		write_gic_vl_other(mips_cm_vp_id(cpu));
+-		write_gic_vo_map(intr, map);
++		write_gic_vo_map(mips_gic_vx_map_reg(intr), map);
+ 	}
+ 	spin_unlock_irqrestore(&gic_lock, flags);
+ 
+diff --git a/drivers/md/dm-init.c b/drivers/md/dm-init.c
+index 352e803f566e..64611633e77c 100644
+--- a/drivers/md/dm-init.c
++++ b/drivers/md/dm-init.c
+@@ -140,8 +140,8 @@ static char __init *dm_parse_table_entry(struct dm_device *dev, char *str)
+ 		return ERR_PTR(-EINVAL);
+ 	}
+ 	/* target_args */
+-	dev->target_args_array[n] = kstrndup(field[3], GFP_KERNEL,
+-					     DM_MAX_STR_SIZE);
++	dev->target_args_array[n] = kstrndup(field[3], DM_MAX_STR_SIZE,
++					     GFP_KERNEL);
+ 	if (!dev->target_args_array[n])
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -275,7 +275,7 @@ static int __init dm_init_init(void)
+ 		DMERR("Argument is too big. Limit is %d\n", DM_MAX_STR_SIZE);
+ 		return -EINVAL;
+ 	}
+-	str = kstrndup(create, GFP_KERNEL, DM_MAX_STR_SIZE);
++	str = kstrndup(create, DM_MAX_STR_SIZE, GFP_KERNEL);
+ 	if (!str)
+ 		return -ENOMEM;
+ 
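
Both dm-init hunks fix the same swapped-argument bug: kstrndup() takes
(source, max length, gfp flags) in that order, so the old calls handed
GFP_KERNEL's bit pattern to the allocator as the length cap and
DM_MAX_STR_SIZE as GFP flags. A userspace sketch of the corrected call shape,
using POSIX strndup() as the analogue (the 4096 limit mirrors the driver's):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define DM_MAX_STR_SIZE 4096	/* mirrors the driver's limit */

int main(void)
{
	const char *create = "dm-linear,,,rw, 0 32768 linear 8:2 0";

	/* The length limit is the second argument, exactly as in
	 * kstrndup(s, max, gfp); swapping it with the flags compiles
	 * silently because both are integer types. */
	char *str = strndup(create, DM_MAX_STR_SIZE);

	if (!str)
		return 1;
	printf("%s\n", str);
	free(str);
	return 0;
}
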
+diff --git a/drivers/md/dm-log-writes.c b/drivers/md/dm-log-writes.c
+index 9ea2b0291f20..e549392e0ea5 100644
+--- a/drivers/md/dm-log-writes.c
++++ b/drivers/md/dm-log-writes.c
+@@ -60,6 +60,7 @@
+ 
+ #define WRITE_LOG_VERSION 1ULL
+ #define WRITE_LOG_MAGIC 0x6a736677736872ULL
++#define WRITE_LOG_SUPER_SECTOR 0
+ 
+ /*
+  * The disk format for this is braindead simple.
+@@ -115,6 +116,7 @@ struct log_writes_c {
+ 	struct list_head logging_blocks;
+ 	wait_queue_head_t wait;
+ 	struct task_struct *log_kthread;
++	struct completion super_done;
+ };
+ 
+ struct pending_block {
+@@ -180,6 +182,14 @@ static void log_end_io(struct bio *bio)
+ 	bio_put(bio);
+ }
+ 
++static void log_end_super(struct bio *bio)
++{
++	struct log_writes_c *lc = bio->bi_private;
++
++	complete(&lc->super_done);
++	log_end_io(bio);
++}
++
+ /*
+  * Meant to be called if there is an error, it will free all the pages
+  * associated with the block.
+@@ -215,7 +225,8 @@ static int write_metadata(struct log_writes_c *lc, void *entry,
+ 	bio->bi_iter.bi_size = 0;
+ 	bio->bi_iter.bi_sector = sector;
+ 	bio_set_dev(bio, lc->logdev->bdev);
+-	bio->bi_end_io = log_end_io;
++	bio->bi_end_io = (sector == WRITE_LOG_SUPER_SECTOR) ?
++			  log_end_super : log_end_io;
+ 	bio->bi_private = lc;
+ 	bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
+ 
+@@ -418,11 +429,18 @@ static int log_super(struct log_writes_c *lc)
+ 	super.nr_entries = cpu_to_le64(lc->logged_entries);
+ 	super.sectorsize = cpu_to_le32(lc->sectorsize);
+ 
+-	if (write_metadata(lc, &super, sizeof(super), NULL, 0, 0)) {
++	if (write_metadata(lc, &super, sizeof(super), NULL, 0,
++			   WRITE_LOG_SUPER_SECTOR)) {
+ 		DMERR("Couldn't write super");
+ 		return -1;
+ 	}
+ 
++	/*
++	 * Super sector should be written in-order, otherwise the
++	 * nr_entries could be rewritten incorrectly by an old bio.
++	 */
++	wait_for_completion_io(&lc->super_done);
++
+ 	return 0;
+ }
+ 
+@@ -531,6 +549,7 @@ static int log_writes_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ 	INIT_LIST_HEAD(&lc->unflushed_blocks);
+ 	INIT_LIST_HEAD(&lc->logging_blocks);
+ 	init_waitqueue_head(&lc->wait);
++	init_completion(&lc->super_done);
+ 	atomic_set(&lc->io_blocks, 0);
+ 	atomic_set(&lc->pending_blocks, 0);
+ 
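
The completion added above forces super-sector writes to retire in order:
log_super() now sleeps until the previous super write's end_io has fired, so
a stale bio can no longer overwrite a newer nr_entries. A userspace sketch of
that wait/complete handshake, with a pthread condition variable standing in
for the kernel completion:

#include <pthread.h>
#include <stdio.h>

/* Minimal stand-in for struct completion */
struct completion {
	pthread_mutex_t lock;
	pthread_cond_t cond;
	int done;
};

static void complete(struct completion *c)
{
	pthread_mutex_lock(&c->lock);
	c->done = 1;
	pthread_cond_signal(&c->cond);
	pthread_mutex_unlock(&c->lock);
}

static void wait_for_completion(struct completion *c)
{
	pthread_mutex_lock(&c->lock);
	while (!c->done)
		pthread_cond_wait(&c->cond, &c->lock);
	c->done = 0;	/* consume, like the kernel's x->done-- */
	pthread_mutex_unlock(&c->lock);
}

static struct completion super_done = {
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0
};

/* end_io path: runs when the super-sector write reaches the media */
static void *super_end_io(void *arg)
{
	puts("super sector write completed");
	complete(&super_done);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, super_end_io, NULL);
	/* log_super() analogue: block until the previous write is on disk
	 * before issuing the next super write */
	wait_for_completion(&super_done);
	puts("safe to issue the next super sector write");
	pthread_join(t, NULL);
	return 0;
}
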
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index f96efa363d34..59e919b92873 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -4321,12 +4321,12 @@ void bond_setup(struct net_device *bond_dev)
+ 	bond_dev->features |= NETIF_F_NETNS_LOCAL;
+ 
+ 	bond_dev->hw_features = BOND_VLAN_FEATURES |
+-				NETIF_F_HW_VLAN_CTAG_TX |
+ 				NETIF_F_HW_VLAN_CTAG_RX |
+ 				NETIF_F_HW_VLAN_CTAG_FILTER;
+ 
+ 	bond_dev->hw_features |= NETIF_F_GSO_ENCAP_ALL | NETIF_F_GSO_UDP_L4;
+ 	bond_dev->features |= bond_dev->hw_features;
++	bond_dev->features |= NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_STAG_TX;
+ }
+ 
+ /* Destroy a bonding device.
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_filters.c b/drivers/net/ethernet/aquantia/atlantic/aq_filters.c
+index 18bc035da850..1fff462a4175 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_filters.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_filters.c
+@@ -843,9 +843,14 @@ int aq_filters_vlans_update(struct aq_nic_s *aq_nic)
+ 		return err;
+ 
+ 	if (aq_nic->ndev->features & NETIF_F_HW_VLAN_CTAG_FILTER) {
+-		if (hweight < AQ_VLAN_MAX_FILTERS)
+-			err = aq_hw_ops->hw_filter_vlan_ctrl(aq_hw, true);
++		if (hweight < AQ_VLAN_MAX_FILTERS && hweight > 0) {
++			err = aq_hw_ops->hw_filter_vlan_ctrl(aq_hw,
++				!(aq_nic->packet_filter & IFF_PROMISC));
++			aq_nic->aq_nic_cfg.is_vlan_force_promisc = false;
++		} else {
+ 		/* otherwise left in promiscuous mode */
++			aq_nic->aq_nic_cfg.is_vlan_force_promisc = true;
++		}
+ 	}
+ 
+ 	return err;
+@@ -866,6 +871,7 @@ int aq_filters_vlan_offload_off(struct aq_nic_s *aq_nic)
+ 	if (unlikely(!aq_hw_ops->hw_filter_vlan_ctrl))
+ 		return -EOPNOTSUPP;
+ 
++	aq_nic->aq_nic_cfg.is_vlan_force_promisc = true;
+ 	err = aq_hw_ops->hw_filter_vlan_ctrl(aq_hw, false);
+ 	if (err)
+ 		return err;
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+index ff83667410bd..550abfe6973d 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+@@ -117,6 +117,7 @@ void aq_nic_cfg_start(struct aq_nic_s *self)
+ 
+ 	cfg->link_speed_msk &= cfg->aq_hw_caps->link_speed_msk;
+ 	cfg->features = cfg->aq_hw_caps->hw_features;
++	cfg->is_vlan_force_promisc = true;
+ }
+ 
+ static int aq_nic_update_link_status(struct aq_nic_s *self)
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.h b/drivers/net/ethernet/aquantia/atlantic/aq_nic.h
+index 8e34c1e49bf2..65e681be9b5d 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.h
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.h
+@@ -36,6 +36,7 @@ struct aq_nic_cfg_s {
+ 	u32 flow_control;
+ 	u32 link_speed_msk;
+ 	u32 wol;
++	bool is_vlan_force_promisc;
+ 	u16 is_mc_list_enabled;
+ 	u16 mc_list_count;
+ 	bool is_autoneg;
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
+index ec302fdfec63..a4cc04741115 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
+@@ -771,8 +771,15 @@ static int hw_atl_b0_hw_packet_filter_set(struct aq_hw_s *self,
+ 					  unsigned int packet_filter)
+ {
+ 	unsigned int i = 0U;
++	struct aq_nic_cfg_s *cfg = self->aq_nic_cfg;
++
++	hw_atl_rpfl2promiscuous_mode_en_set(self,
++					    IS_FILTER_ENABLED(IFF_PROMISC));
++
++	hw_atl_rpf_vlan_prom_mode_en_set(self,
++				     IS_FILTER_ENABLED(IFF_PROMISC) ||
++				     cfg->is_vlan_force_promisc);
+ 
+-	hw_atl_rpfl2promiscuous_mode_en_set(self, IS_FILTER_ENABLED(IFF_PROMISC));
+ 	hw_atl_rpfl2multicast_flr_en_set(self,
+ 					 IS_FILTER_ENABLED(IFF_ALLMULTI), 0);
+ 
+@@ -781,13 +788,13 @@ static int hw_atl_b0_hw_packet_filter_set(struct aq_hw_s *self,
+ 
+ 	hw_atl_rpfl2broadcast_en_set(self, IS_FILTER_ENABLED(IFF_BROADCAST));
+ 
+-	self->aq_nic_cfg->is_mc_list_enabled = IS_FILTER_ENABLED(IFF_MULTICAST);
++	cfg->is_mc_list_enabled = IS_FILTER_ENABLED(IFF_MULTICAST);
+ 
+ 	for (i = HW_ATL_B0_MAC_MIN; i < HW_ATL_B0_MAC_MAX; ++i)
+ 		hw_atl_rpfl2_uc_flr_en_set(self,
+-					   (self->aq_nic_cfg->is_mc_list_enabled &&
+-				    (i <= self->aq_nic_cfg->mc_list_count)) ?
+-				    1U : 0U, i);
++					   (cfg->is_mc_list_enabled &&
++					    (i <= cfg->mc_list_count)) ?
++					   1U : 0U, i);
+ 
+ 	return aq_hw_err_from_flags(self);
+ }
+@@ -1079,7 +1086,7 @@ static int hw_atl_b0_hw_vlan_set(struct aq_hw_s *self,
+ static int hw_atl_b0_hw_vlan_ctrl(struct aq_hw_s *self, bool enable)
+ {
+ 	/* set promisc in case of disabling the VLAN filter */
+-	hw_atl_rpf_vlan_prom_mode_en_set(self, !!!enable);
++	hw_atl_rpf_vlan_prom_mode_en_set(self, !enable);
+ 
+ 	return aq_hw_err_from_flags(self);
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
+index 8d9cc2157afd..7423262ce590 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
+@@ -122,7 +122,7 @@ static int adjust_systime(void __iomem *ioaddr, u32 sec, u32 nsec,
+ 		 * programmed with (2^32 – <new_sec_value>)
+ 		 */
+ 		if (gmac4)
+-			sec = (100000000ULL - sec);
++			sec = -sec;
+ 
+ 		value = readl(ioaddr + PTP_TCR);
+ 		if (value & PTP_TCR_TSCTRLSSR)
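
The one-liner above works because sec is a u32 at this point: for an unsigned
32-bit value, unary minus computes exactly (2^32 - sec) mod 2^32, which is
the encoding the comment asks for, whereas the old constant was 10^8, not
2^32. A quick check:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t sec = 5;
	uint32_t programmed = -sec;	/* (2^32 - sec) mod 2^32 */

	printf("%" PRIu32 "\n", programmed);	/* 4294967291 */
	printf("%llu\n", (1ULL << 32) - sec);	/* same value */
	return 0;
}
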
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 635d88d82610..a634054dcb11 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -2957,12 +2957,15 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 	/* Manage tx mitigation */
+ 	tx_q->tx_count_frames += nfrags + 1;
+-	if (priv->tx_coal_frames <= tx_q->tx_count_frames) {
++	if (likely(priv->tx_coal_frames > tx_q->tx_count_frames) &&
++	    !(priv->synopsys_id >= DWMAC_CORE_4_00 &&
++	    (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
++	    priv->hwts_tx_en)) {
++		stmmac_tx_timer_arm(priv, queue);
++	} else {
++		tx_q->tx_count_frames = 0;
+ 		stmmac_set_tx_ic(priv, desc);
+ 		priv->xstats.tx_set_ic_bit++;
+-		tx_q->tx_count_frames = 0;
+-	} else {
+-		stmmac_tx_timer_arm(priv, queue);
+ 	}
+ 
+ 	skb_tx_timestamp(skb);
+@@ -3176,12 +3179,15 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	 * element in case of no SG.
+ 	 */
+ 	tx_q->tx_count_frames += nfrags + 1;
+-	if (priv->tx_coal_frames <= tx_q->tx_count_frames) {
++	if (likely(priv->tx_coal_frames > tx_q->tx_count_frames) &&
++	    !(priv->synopsys_id >= DWMAC_CORE_4_00 &&
++	    (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
++	    priv->hwts_tx_en)) {
++		stmmac_tx_timer_arm(priv, queue);
++	} else {
++		tx_q->tx_count_frames = 0;
+ 		stmmac_set_tx_ic(priv, desc);
+ 		priv->xstats.tx_set_ic_bit++;
+-		tx_q->tx_count_frames = 0;
+-	} else {
+-		stmmac_tx_timer_arm(priv, queue);
+ 	}
+ 
+ 	skb_tx_timestamp(skb);
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index 16963f7a88f7..351f25e1fc48 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -2135,12 +2135,12 @@ static void team_setup(struct net_device *dev)
+ 	dev->features |= NETIF_F_NETNS_LOCAL;
+ 
+ 	dev->hw_features = TEAM_VLAN_FEATURES |
+-			   NETIF_F_HW_VLAN_CTAG_TX |
+ 			   NETIF_F_HW_VLAN_CTAG_RX |
+ 			   NETIF_F_HW_VLAN_CTAG_FILTER;
+ 
+ 	dev->hw_features |= NETIF_F_GSO_ENCAP_ALL | NETIF_F_GSO_UDP_L4;
+ 	dev->features |= dev->hw_features;
++	dev->features |= NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_STAG_TX;
+ }
+ 
+ static int team_newlink(struct net *src_net, struct net_device *dev,
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index f4c933ac6edf..d1cafb29eca9 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1024,18 +1024,8 @@ static void tun_net_uninit(struct net_device *dev)
+ /* Net device open. */
+ static int tun_net_open(struct net_device *dev)
+ {
+-	struct tun_struct *tun = netdev_priv(dev);
+-	int i;
+-
+ 	netif_tx_start_all_queues(dev);
+ 
+-	for (i = 0; i < tun->numqueues; i++) {
+-		struct tun_file *tfile;
+-
+-		tfile = rtnl_dereference(tun->tfiles[i]);
+-		tfile->socket.sk->sk_write_space(tfile->socket.sk);
+-	}
+-
+ 	return 0;
+ }
+ 
+@@ -3636,6 +3626,7 @@ static int tun_device_event(struct notifier_block *unused,
+ {
+ 	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+ 	struct tun_struct *tun = netdev_priv(dev);
++	int i;
+ 
+ 	if (dev->rtnl_link_ops != &tun_link_ops)
+ 		return NOTIFY_DONE;
+@@ -3645,6 +3636,14 @@ static int tun_device_event(struct notifier_block *unused,
+ 		if (tun_queue_resize(tun))
+ 			return NOTIFY_BAD;
+ 		break;
++	case NETDEV_UP:
++		for (i = 0; i < tun->numqueues; i++) {
++			struct tun_file *tfile;
++
++			tfile = rtnl_dereference(tun->tfiles[i]);
++			tfile->socket.sk->sk_write_space(tfile->socket.sk);
++		}
++		break;
+ 	default:
+ 		break;
+ 	}
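
Moving the wakeups from ndo_open into a NETDEV_UP notifier means the
write-space waiters only run once the device state change has fully
committed. A compilable sketch of that notifier-chain shape (the names and
the fixed-size chain are illustrative, not the kernel's implementation):

#include <stdio.h>

enum netdev_event { NETDEV_UP, NETDEV_DOWN };

typedef void (*notifier_fn)(enum netdev_event event, void *dev);

#define MAX_NOTIFIERS 8
static notifier_fn chain[MAX_NOTIFIERS];
static int nr_notifiers;

static void register_notifier(notifier_fn fn)
{
	if (nr_notifiers < MAX_NOTIFIERS)
		chain[nr_notifiers++] = fn;
}

static void call_notifiers(enum netdev_event event, void *dev)
{
	for (int i = 0; i < nr_notifiers; i++)
		chain[i](event, dev);
}

struct tun_dev {
	const char *name;
	int up;
};

static void tun_event(enum netdev_event event, void *arg)
{
	struct tun_dev *dev = arg;

	/* Analogue of the fix: wake queue writers only after IFF_UP is
	 * set, never from the open path while state is still settling */
	if (event == NETDEV_UP)
		printf("%s: waking sk_write_space waiters\n", dev->name);
}

int main(void)
{
	struct tun_dev dev = { "tun0", 0 };

	register_notifier(tun_event);
	dev.up = 1;				/* commit the state change */
	call_notifiers(NETDEV_UP, &dev);	/* then notify */
	return 0;
}
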
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index d9a6699abe59..e657d8947125 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1412,7 +1412,7 @@ static int qmi_wwan_probe(struct usb_interface *intf,
+ 	 * different. Ignore the current interface if the number of endpoints
+ 	 * equals the number for the diag interface (two).
+ 	 */
+-	info = (void *)&id->driver_info;
++	info = (void *)id->driver_info;
+ 
+ 	if (info->data & QMI_WWAN_QUIRK_QUECTEL_DYNCFG) {
+ 		if (desc->bNumEndpoints == 2)
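
In qmi_wwan the driver_info field is an unsigned long that stores a pointer
value, so the fix casts the value itself rather than taking the field's
address; the old &id->driver_info produced a pointer to the table slot, and
dereferencing it read the pointer bits as data. A small demonstration of the
difference (struct names here are illustrative):

#include <stdio.h>

/* Mirrors the usb_device_id convention: driver_info holds a pointer */
struct device_id {
	unsigned long driver_info;
};

struct drv_info {
	unsigned long data;
};

int main(void)
{
	static const struct drv_info quirk = { .data = 0x1 };
	struct device_id id = { .driver_info = (unsigned long)&quirk };

	/* Correct: recover the stored pointer value */
	const struct drv_info *info = (void *)id.driver_info;

	/* Wrong: (void *)&id.driver_info points at the table slot itself,
	 * so info->data would read the pointer bits instead of 0x1 */
	printf("data=%lx\n", info->data);
	return 0;
}
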
+diff --git a/drivers/scsi/vmw_pvscsi.c b/drivers/scsi/vmw_pvscsi.c
+index ecee4b3ff073..377b07b2feeb 100644
+--- a/drivers/scsi/vmw_pvscsi.c
++++ b/drivers/scsi/vmw_pvscsi.c
+@@ -763,6 +763,7 @@ static int pvscsi_queue_lck(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd
+ 	struct pvscsi_adapter *adapter = shost_priv(host);
+ 	struct pvscsi_ctx *ctx;
+ 	unsigned long flags;
++	unsigned char op;
+ 
+ 	spin_lock_irqsave(&adapter->hw_lock, flags);
+ 
+@@ -775,13 +776,14 @@ static int pvscsi_queue_lck(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd
+ 	}
+ 
+ 	cmd->scsi_done = done;
++	op = cmd->cmnd[0];
+ 
+ 	dev_dbg(&cmd->device->sdev_gendev,
+-		"queued cmd %p, ctx %p, op=%x\n", cmd, ctx, cmd->cmnd[0]);
++		"queued cmd %p, ctx %p, op=%x\n", cmd, ctx, op);
+ 
+ 	spin_unlock_irqrestore(&adapter->hw_lock, flags);
+ 
+-	pvscsi_kick_io(adapter, cmd->cmnd[0]);
++	pvscsi_kick_io(adapter, op);
+ 
+ 	return 0;
+ }
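
The pvscsi change copies the opcode while the adapter lock still pins the
command: once hw_lock is dropped, the request can complete on another CPU and
be freed, so the later cmd->cmnd[0] read was a potential use-after-free. A
sketch of the save-before-unlock pattern (the lock and the I/O kick are
stubs):

#include <stdio.h>
#include <stdlib.h>

struct cmd {
	unsigned char cmnd[16];
};

static void lock(void)   { /* spin_lock_irqsave() stand-in */ }
static void unlock(void) { /* spin_unlock_irqrestore() stand-in */ }

static void kick_io(unsigned char op)
{
	printf("kick op=0x%x\n", op);
}

static void queue_cmd(struct cmd *cmd)
{
	unsigned char op;

	lock();
	op = cmd->cmnd[0];	/* copy while the command is guaranteed alive */
	unlock();

	/* After unlock() a completion may already have freed *cmd, so only
	 * the saved copy may be used from here on */
	kick_io(op);
}

int main(void)
{
	struct cmd *cmd = calloc(1, sizeof(*cmd));

	if (!cmd)
		return 1;
	cmd->cmnd[0] = 0x2a;	/* WRITE(10), an arbitrary example opcode */
	queue_cmd(cmd);
	free(cmd);
	return 0;
}
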
+diff --git a/fs/binfmt_flat.c b/fs/binfmt_flat.c
+index 82a48e830018..e4b59e76afb0 100644
+--- a/fs/binfmt_flat.c
++++ b/fs/binfmt_flat.c
+@@ -856,9 +856,14 @@ err:
+ 
+ static int load_flat_shared_library(int id, struct lib_info *libs)
+ {
++	/*
++	 * This is a fake bprm struct; only the members "buf", "file" and
++	 * "filename" are actually used.
++	 */
+ 	struct linux_binprm bprm;
+ 	int res;
+ 	char buf[16];
++	loff_t pos = 0;
+ 
+ 	memset(&bprm, 0, sizeof(bprm));
+ 
+@@ -872,25 +877,11 @@ static int load_flat_shared_library(int id, struct lib_info *libs)
+ 	if (IS_ERR(bprm.file))
+ 		return res;
+ 
+-	bprm.cred = prepare_exec_creds();
+-	res = -ENOMEM;
+-	if (!bprm.cred)
+-		goto out;
+-
+-	/* We don't really care about recalculating credentials at this point
+-	 * as we're past the point of no return and are dealing with shared
+-	 * libraries.
+-	 */
+-	bprm.called_set_creds = 1;
++	res = kernel_read(bprm.file, bprm.buf, BINPRM_BUF_SIZE, &pos);
+ 
+-	res = prepare_binprm(&bprm);
+-
+-	if (!res)
++	if (res >= 0)
+ 		res = load_flat_file(&bprm, libs, id, NULL);
+ 
+-	abort_creds(bprm.cred);
+-
+-out:
+ 	allow_write_access(bprm.file);
+ 	fput(bprm.file);
+ 
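
Since only bprm.buf, bprm.file and bprm.filename are consumed here, the fix
drops the whole exec-credential dance and reads the header directly. A
userspace analogue of the resulting kernel_read(file, buf, size, &pos) call,
using pread(); the 128-byte buffer is an illustrative size, not necessarily
the kernel's BINPRM_BUF_SIZE:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define HDR_SIZE 128	/* illustrative: header bytes the flat loader inspects */

int main(void)
{
	char buf[HDR_SIZE];
	off_t pos = 0;
	int fd = open("/bin/true", O_RDONLY);

	if (fd < 0)
		return 1;

	/* kernel_read() analogue: fetch the header; no credentials are
	 * prepared, as the caller is past the point of no return */
	ssize_t n = pread(fd, buf, sizeof(buf), pos);

	printf("read %zd header bytes\n", n);
	close(fd);
	return 0;
}
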
+diff --git a/fs/inode.c b/fs/inode.c
+index 9a453f3637f8..5bd1dd2e942f 100644
+--- a/fs/inode.c
++++ b/fs/inode.c
+@@ -349,7 +349,7 @@ EXPORT_SYMBOL(inc_nlink);
+ 
+ static void __address_space_init_once(struct address_space *mapping)
+ {
+-	xa_init_flags(&mapping->i_pages, XA_FLAGS_LOCK_IRQ);
++	xa_init_flags(&mapping->i_pages, XA_FLAGS_LOCK_IRQ | XA_FLAGS_ACCOUNT);
+ 	init_rwsem(&mapping->i_mmap_rwsem);
+ 	INIT_LIST_HEAD(&mapping->private_list);
+ 	spin_lock_init(&mapping->private_lock);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index e82adbf8adc1..b897695c91c0 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -533,6 +533,7 @@ static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx,
+ 		state->cur_req++;
+ 	}
+ 
++	req->file = NULL;
+ 	req->ctx = ctx;
+ 	req->flags = 0;
+ 	/* one is dropped after submission, the other at completion */
+@@ -1684,10 +1685,8 @@ static int io_req_set_file(struct io_ring_ctx *ctx, const struct sqe_submit *s,
+ 	flags = READ_ONCE(s->sqe->flags);
+ 	fd = READ_ONCE(s->sqe->fd);
+ 
+-	if (!io_op_needs_file(s->sqe)) {
+-		req->file = NULL;
++	if (!io_op_needs_file(s->sqe))
+ 		return 0;
+-	}
+ 
+ 	if (flags & IOSQE_FIXED_FILE) {
+ 		if (unlikely(!ctx->user_files ||
+diff --git a/fs/nfs/flexfilelayout/flexfilelayoutdev.c b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+index a809989807d6..19f856f45689 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayoutdev.c
++++ b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+@@ -18,7 +18,7 @@
+ 
+ #define NFSDBG_FACILITY		NFSDBG_PNFS_LD
+ 
+-static unsigned int dataserver_timeo = NFS_DEF_TCP_RETRANS;
++static unsigned int dataserver_timeo = NFS_DEF_TCP_TIMEO;
+ static unsigned int dataserver_retrans;
+ 
+ static bool ff_layout_has_available_ds(struct pnfs_layout_segment *lseg);
+diff --git a/fs/notify/fanotify/fanotify.c b/fs/notify/fanotify/fanotify.c
+index 63c6bb1f8c4d..8c286f8228e5 100644
+--- a/fs/notify/fanotify/fanotify.c
++++ b/fs/notify/fanotify/fanotify.c
+@@ -355,6 +355,10 @@ static __kernel_fsid_t fanotify_get_fsid(struct fsnotify_iter_info *iter_info)
+ 		/* Mark is just getting destroyed or created? */
+ 		if (!conn)
+ 			continue;
++		if (!(conn->flags & FSNOTIFY_CONN_FLAG_HAS_FSID))
++			continue;
++		/* Pairs with smp_wmb() in fsnotify_add_mark_list() */
++		smp_rmb();
+ 		fsid = conn->fsid;
+ 		if (WARN_ON_ONCE(!fsid.val[0] && !fsid.val[1]))
+ 			continue;
+diff --git a/fs/notify/mark.c b/fs/notify/mark.c
+index 22acb0a79b53..e9d49191d39e 100644
+--- a/fs/notify/mark.c
++++ b/fs/notify/mark.c
+@@ -495,10 +495,13 @@ static int fsnotify_attach_connector_to_object(fsnotify_connp_t *connp,
+ 	conn->type = type;
+ 	conn->obj = connp;
+ 	/* Cache fsid of filesystem containing the object */
+-	if (fsid)
++	if (fsid) {
+ 		conn->fsid = *fsid;
+-	else
++		conn->flags = FSNOTIFY_CONN_FLAG_HAS_FSID;
++	} else {
+ 		conn->fsid.val[0] = conn->fsid.val[1] = 0;
++		conn->flags = 0;
++	}
+ 	if (conn->type == FSNOTIFY_OBJ_TYPE_INODE)
+ 		inode = igrab(fsnotify_conn_inode(conn));
+ 	/*
+@@ -573,7 +576,12 @@ restart:
+ 		if (err)
+ 			return err;
+ 		goto restart;
+-	} else if (fsid && (conn->fsid.val[0] || conn->fsid.val[1]) &&
++	} else if (fsid && !(conn->flags & FSNOTIFY_CONN_FLAG_HAS_FSID)) {
++		conn->fsid = *fsid;
++		/* Pairs with smp_rmb() in fanotify_get_fsid() */
++		smp_wmb();
++		conn->flags |= FSNOTIFY_CONN_FLAG_HAS_FSID;
++	} else if (fsid && (conn->flags & FSNOTIFY_CONN_FLAG_HAS_FSID) &&
+ 		   (fsid->val[0] != conn->fsid.val[0] ||
+ 		    fsid->val[1] != conn->fsid.val[1])) {
+ 		/*
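
The HAS_FSID flag and the smp_wmb()/smp_rmb() pair implement a classic
publish pattern: the writer stores the fsid, issues a write barrier, then
sets the flag; the reader checks the flag, issues a read barrier, then reads
the fsid, so it can never observe the flag without the payload. A compilable
userspace sketch using C11 release/acquire atomics, which provide the same
guarantee for this pattern:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define FLAG_HAS_FSID 0x01

struct connector {
	int fsid[2];		/* payload published by the writer */
	atomic_uint flags;	/* the HAS_FSID bit doubles as "ready" */
};

static struct connector conn;

static void *writer(void *arg)
{
	conn.fsid[0] = 0x1234;
	conn.fsid[1] = 0x5678;
	/* Release pairs with the reader's acquire, like smp_wmb()/smp_rmb():
	 * the fsid stores become visible no later than the flag */
	atomic_fetch_or_explicit(&conn.flags, FLAG_HAS_FSID,
				 memory_order_release);
	return NULL;
}

static void *reader(void *arg)
{
	if (atomic_load_explicit(&conn.flags, memory_order_acquire) &
	    FLAG_HAS_FSID)
		printf("fsid=%x:%x\n", conn.fsid[0], conn.fsid[1]);
	else
		puts("fsid not published yet");
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, writer, NULL);
	pthread_create(&r, NULL, reader, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}
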
+diff --git a/fs/proc/array.c b/fs/proc/array.c
+index 2edbb657f859..55180501b915 100644
+--- a/fs/proc/array.c
++++ b/fs/proc/array.c
+@@ -462,7 +462,7 @@ static int do_task_stat(struct seq_file *m, struct pid_namespace *ns,
+ 		 * a program is not able to use ptrace(2) in that case. It is
+ 		 * safe because the task has stopped executing permanently.
+ 		 */
+-		if (permitted && (task->flags & PF_DUMPCORE)) {
++		if (permitted && (task->flags & (PF_EXITING|PF_DUMPCORE))) {
+ 			if (try_get_task_stack(task)) {
+ 				eip = KSTK_EIP(task);
+ 				esp = KSTK_ESP(task);
+diff --git a/include/asm-generic/futex.h b/include/asm-generic/futex.h
+index fcb61b4659b3..8666fe7f35d7 100644
+--- a/include/asm-generic/futex.h
++++ b/include/asm-generic/futex.h
+@@ -23,7 +23,9 @@
+  *
+  * Return:
+  * 0 - On success
+- * <0 - On error
++ * -EFAULT - User access resulted in a page fault
++ * -EAGAIN - Atomic operation was unable to complete due to contention
++ * -ENOSYS - Operation not supported
+  */
+ static inline int
+ arch_futex_atomic_op_inuser(int op, u32 oparg, int *oval, u32 __user *uaddr)
+@@ -85,7 +87,9 @@ out_pagefault_enable:
+  *
+  * Return:
+  * 0 - On success
+- * <0 - On error
++ * -EFAULT - User access resulted in a page fault
++ * -EAGAIN - Atomic operation was unable to complete due to contention
++ * -ENOSYS - Function not implemented (only if !HAVE_FUTEX_CMPXCHG)
+  */
+ static inline int
+ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
+index a4c644c1c091..ada0246d43ab 100644
+--- a/include/linux/bpf-cgroup.h
++++ b/include/linux/bpf-cgroup.h
+@@ -230,6 +230,12 @@ int bpf_percpu_cgroup_storage_update(struct bpf_map *map, void *key,
+ #define BPF_CGROUP_RUN_PROG_UDP6_SENDMSG_LOCK(sk, uaddr, t_ctx)		       \
+ 	BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, BPF_CGROUP_UDP6_SENDMSG, t_ctx)
+ 
++#define BPF_CGROUP_RUN_PROG_UDP4_RECVMSG_LOCK(sk, uaddr)			\
++	BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, BPF_CGROUP_UDP4_RECVMSG, NULL)
++
++#define BPF_CGROUP_RUN_PROG_UDP6_RECVMSG_LOCK(sk, uaddr)			\
++	BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, BPF_CGROUP_UDP6_RECVMSG, NULL)
++
+ #define BPF_CGROUP_RUN_PROG_SOCK_OPS(sock_ops)				       \
+ ({									       \
+ 	int __ret = 0;							       \
+@@ -319,6 +325,8 @@ static inline int bpf_percpu_cgroup_storage_update(struct bpf_map *map,
+ #define BPF_CGROUP_RUN_PROG_INET6_CONNECT_LOCK(sk, uaddr) ({ 0; })
+ #define BPF_CGROUP_RUN_PROG_UDP4_SENDMSG_LOCK(sk, uaddr, t_ctx) ({ 0; })
+ #define BPF_CGROUP_RUN_PROG_UDP6_SENDMSG_LOCK(sk, uaddr, t_ctx) ({ 0; })
++#define BPF_CGROUP_RUN_PROG_UDP4_RECVMSG_LOCK(sk, uaddr) ({ 0; })
++#define BPF_CGROUP_RUN_PROG_UDP6_RECVMSG_LOCK(sk, uaddr) ({ 0; })
+ #define BPF_CGROUP_RUN_PROG_SOCK_OPS(sock_ops) ({ 0; })
+ #define BPF_CGROUP_RUN_PROG_DEVICE_CGROUP(type,major,minor,access) ({ 0; })
+ 
+diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
+index 094b38f2d9a1..0f67cabbbec8 100644
+--- a/include/linux/fsnotify_backend.h
++++ b/include/linux/fsnotify_backend.h
+@@ -292,7 +292,9 @@ typedef struct fsnotify_mark_connector __rcu *fsnotify_connp_t;
+  */
+ struct fsnotify_mark_connector {
+ 	spinlock_t lock;
+-	unsigned int type;	/* Type of object [lock] */
++	unsigned short type;	/* Type of object [lock] */
++#define FSNOTIFY_CONN_FLAG_HAS_FSID	0x01
++	unsigned short flags;	/* flags [lock] */
+ 	__kernel_fsid_t fsid;	/* fsid of filesystem containing object */
+ 	union {
+ 		/* Object pointer [lock] */
+diff --git a/include/linux/xarray.h b/include/linux/xarray.h
+index 0e01e6129145..5921599b6dc4 100644
+--- a/include/linux/xarray.h
++++ b/include/linux/xarray.h
+@@ -265,6 +265,7 @@ enum xa_lock_type {
+ #define XA_FLAGS_TRACK_FREE	((__force gfp_t)4U)
+ #define XA_FLAGS_ZERO_BUSY	((__force gfp_t)8U)
+ #define XA_FLAGS_ALLOC_WRAPPED	((__force gfp_t)16U)
++#define XA_FLAGS_ACCOUNT	((__force gfp_t)32U)
+ #define XA_FLAGS_MARK(mark)	((__force gfp_t)((1U << __GFP_BITS_SHIFT) << \
+ 						(__force unsigned)(mark)))
+ 
+diff --git a/include/net/tls.h b/include/net/tls.h
+index 053082d98906..a67ad7d56ff2 100644
+--- a/include/net/tls.h
++++ b/include/net/tls.h
+@@ -347,21 +347,6 @@ static inline bool tls_is_partially_sent_record(struct tls_context *ctx)
+ 	return !!ctx->partially_sent_record;
+ }
+ 
+-static inline int tls_complete_pending_work(struct sock *sk,
+-					    struct tls_context *ctx,
+-					    int flags, long *timeo)
+-{
+-	int rc = 0;
+-
+-	if (unlikely(sk->sk_write_pending))
+-		rc = wait_on_pending_writer(sk, timeo);
+-
+-	if (!rc && tls_is_partially_sent_record(ctx))
+-		rc = tls_push_partial_record(sk, ctx, flags);
+-
+-	return rc;
+-}
+-
+ static inline bool tls_is_pending_open_record(struct tls_context *tls_ctx)
+ {
+ 	return tls_ctx->pending_open_record_frags;
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index 929c8e537a14..9d01f4788d3e 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -187,6 +187,8 @@ enum bpf_attach_type {
+ 	BPF_CGROUP_UDP6_SENDMSG,
+ 	BPF_LIRC_MODE2,
+ 	BPF_FLOW_DISSECTOR,
++	BPF_CGROUP_UDP4_RECVMSG = 19,
++	BPF_CGROUP_UDP6_RECVMSG,
+ 	__MAX_BPF_ATTACH_TYPE
+ };
+ 
+@@ -3104,8 +3106,8 @@ struct bpf_raw_tracepoint_args {
+ /* DIRECT:  Skip the FIB rules and go to FIB table associated with device
+  * OUTPUT:  Do lookup from egress perspective; default is ingress
+  */
+-#define BPF_FIB_LOOKUP_DIRECT  BIT(0)
+-#define BPF_FIB_LOOKUP_OUTPUT  BIT(1)
++#define BPF_FIB_LOOKUP_DIRECT  (1U << 0)
++#define BPF_FIB_LOOKUP_OUTPUT  (1U << 1)
+ 
+ enum {
+ 	BPF_FIB_LKUP_RET_SUCCESS,      /* lookup successful */
+diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
+index 93a5cbbde421..3b03a7342f3c 100644
+--- a/kernel/bpf/lpm_trie.c
++++ b/kernel/bpf/lpm_trie.c
+@@ -715,9 +715,14 @@ find_leftmost:
+ 	 * have exactly two children, so this function will never return NULL.
+ 	 */
+ 	for (node = search_root; node;) {
+-		if (!(node->flags & LPM_TREE_NODE_FLAG_IM))
++		if (node->flags & LPM_TREE_NODE_FLAG_IM) {
++			node = rcu_dereference(node->child[0]);
++		} else {
+ 			next_node = node;
+-		node = rcu_dereference(node->child[0]);
++			node = rcu_dereference(node->child[0]);
++			if (!node)
++				node = rcu_dereference(next_node->child[1]);
++		}
+ 	}
+ do_copy:
+ 	next_key->prefixlen = next_node->prefixlen;
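
The reworked walk fixes find_leftmost(): a non-intermediate node with no left
child may still have real nodes below its right child, and those must be
visited before the parent can be treated as the result, so the loop now falls
back to child[1] instead of terminating there. A self-contained sketch of the
corrected traversal over a three-node toy trie:

#include <stddef.h>
#include <stdio.h>

#define FLAG_IM 0x1	/* intermediate node, holds no value of its own */

struct node {
	unsigned int flags;
	struct node *child[2];
};

static struct node *leftmost(struct node *search_root)
{
	struct node *node, *next_node = NULL;

	for (node = search_root; node;) {
		if (node->flags & FLAG_IM) {
			node = node->child[0];
		} else {
			next_node = node;
			node = node->child[0];
			if (!node)	/* the fix: fall back to the right child */
				node = next_node->child[1];
		}
	}
	return next_node;
}

int main(void)
{
	struct node leaf = { 0,       { NULL, NULL  } };
	struct node mid  = { 0,       { NULL, &leaf } };	/* no left child */
	struct node root = { FLAG_IM, { &mid, NULL  } };

	/* The old walk stopped at mid; the fixed one descends to leaf */
	printf("found leaf: %s\n", leftmost(&root) == &leaf ? "yes" : "no");
	return 0;
}
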
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index db6e825e2958..29b8f20d500c 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -1520,6 +1520,8 @@ bpf_prog_load_check_attach_type(enum bpf_prog_type prog_type,
+ 		case BPF_CGROUP_INET6_CONNECT:
+ 		case BPF_CGROUP_UDP4_SENDMSG:
+ 		case BPF_CGROUP_UDP6_SENDMSG:
++		case BPF_CGROUP_UDP4_RECVMSG:
++		case BPF_CGROUP_UDP6_RECVMSG:
+ 			return 0;
+ 		default:
+ 			return -EINVAL;
+@@ -1809,6 +1811,8 @@ static int bpf_prog_attach(const union bpf_attr *attr)
+ 	case BPF_CGROUP_INET6_CONNECT:
+ 	case BPF_CGROUP_UDP4_SENDMSG:
+ 	case BPF_CGROUP_UDP6_SENDMSG:
++	case BPF_CGROUP_UDP4_RECVMSG:
++	case BPF_CGROUP_UDP6_RECVMSG:
+ 		ptype = BPF_PROG_TYPE_CGROUP_SOCK_ADDR;
+ 		break;
+ 	case BPF_CGROUP_SOCK_OPS:
+@@ -1891,6 +1895,8 @@ static int bpf_prog_detach(const union bpf_attr *attr)
+ 	case BPF_CGROUP_INET6_CONNECT:
+ 	case BPF_CGROUP_UDP4_SENDMSG:
+ 	case BPF_CGROUP_UDP6_SENDMSG:
++	case BPF_CGROUP_UDP4_RECVMSG:
++	case BPF_CGROUP_UDP6_RECVMSG:
+ 		ptype = BPF_PROG_TYPE_CGROUP_SOCK_ADDR;
+ 		break;
+ 	case BPF_CGROUP_SOCK_OPS:
+@@ -1939,6 +1945,8 @@ static int bpf_prog_query(const union bpf_attr *attr,
+ 	case BPF_CGROUP_INET6_CONNECT:
+ 	case BPF_CGROUP_UDP4_SENDMSG:
+ 	case BPF_CGROUP_UDP6_SENDMSG:
++	case BPF_CGROUP_UDP4_RECVMSG:
++	case BPF_CGROUP_UDP6_RECVMSG:
+ 	case BPF_CGROUP_SOCK_OPS:
+ 	case BPF_CGROUP_DEVICE:
+ 		break;
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 950fac024fbb..4ff130ddfbf6 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -5145,9 +5145,12 @@ static int check_return_code(struct bpf_verifier_env *env)
+ 	struct tnum range = tnum_range(0, 1);
+ 
+ 	switch (env->prog->type) {
++	case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
++		if (env->prog->expected_attach_type == BPF_CGROUP_UDP4_RECVMSG ||
++		    env->prog->expected_attach_type == BPF_CGROUP_UDP6_RECVMSG)
++			range = tnum_range(1, 1);
+ 	case BPF_PROG_TYPE_CGROUP_SKB:
+ 	case BPF_PROG_TYPE_CGROUP_SOCK:
+-	case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
+ 	case BPF_PROG_TYPE_SOCK_OPS:
+ 	case BPF_PROG_TYPE_CGROUP_DEVICE:
+ 		break;
+@@ -5163,16 +5166,17 @@ static int check_return_code(struct bpf_verifier_env *env)
+ 	}
+ 
+ 	if (!tnum_in(range, reg->var_off)) {
++		char tn_buf[48];
++
+ 		verbose(env, "At program exit the register R0 ");
+ 		if (!tnum_is_unknown(reg->var_off)) {
+-			char tn_buf[48];
+-
+ 			tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
+ 			verbose(env, "has value %s", tn_buf);
+ 		} else {
+ 			verbose(env, "has unknown scalar value");
+ 		}
+-		verbose(env, " should have been 0 or 1\n");
++		tnum_strn(tn_buf, sizeof(tn_buf), range);
++		verbose(env, " should have been in %s\n", tn_buf);
+ 		return -EINVAL;
+ 	}
+ 	return 0;
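
The verifier change narrows the permitted R0 exit value to exactly 1 for the new recvmsg hooks (a 0 return would discard the rewritten address), and the diagnostic now prints the expected range instead of a hard-coded "0 or 1". A standalone sketch of that check, using a plain [min, max] pair where the kernel tracks a tnum; the enum and function names are invented:

#include <stdio.h>

struct range { long min, max; };

enum attach_type { ATTACH_UDP4_RECVMSG, ATTACH_UDP4_SENDMSG };

/* Pick the permitted exit-code range for an attach type. */
static struct range allowed_return_range(enum attach_type at)
{
	if (at == ATTACH_UDP4_RECVMSG)
		return (struct range){ 1, 1 }; /* must return exactly 1 */
	return (struct range){ 0, 1 };
}

static int check_return_code(enum attach_type at, long r0)
{
	struct range r = allowed_return_range(at);

	if (r0 < r.min || r0 > r.max) {
		fprintf(stderr, "R0 = %ld, should have been in [%ld, %ld]\n",
			r0, r.min, r.max);
		return -1;
	}
	return 0;
}

int main(void)
{
	/* 0 is fine for sendmsg hooks but rejected for recvmsg hooks */
	check_return_code(ATTACH_UDP4_SENDMSG, 0);
	return check_return_code(ATTACH_UDP4_RECVMSG, 0) ? 0 : 1;
}
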
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 9cc8b6fdb2dc..6170034f4118 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -2315,6 +2315,9 @@ static int __init mitigations_parse_cmdline(char *arg)
+ 		cpu_mitigations = CPU_MITIGATIONS_AUTO;
+ 	else if (!strcmp(arg, "auto,nosmt"))
+ 		cpu_mitigations = CPU_MITIGATIONS_AUTO_NOSMT;
++	else
++		pr_crit("Unsupported mitigations=%s, system may still be vulnerable\n",
++			arg);
+ 
+ 	return 0;
+ }
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index d64c00afceb5..568a0e839903 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -402,8 +402,6 @@ static const struct bpf_func_proto bpf_perf_event_read_value_proto = {
+ 	.arg4_type	= ARG_CONST_SIZE,
+ };
+ 
+-static DEFINE_PER_CPU(struct perf_sample_data, bpf_trace_sd);
+-
+ static __always_inline u64
+ __bpf_perf_event_output(struct pt_regs *regs, struct bpf_map *map,
+ 			u64 flags, struct perf_sample_data *sd)
+@@ -434,24 +432,50 @@ __bpf_perf_event_output(struct pt_regs *regs, struct bpf_map *map,
+ 	return perf_event_output(event, sd, regs);
+ }
+ 
++/*
++ * Support executing tracepoints in normal, irq, and nmi context that each call
++ * bpf_perf_event_output
++ */
++struct bpf_trace_sample_data {
++	struct perf_sample_data sds[3];
++};
++
++static DEFINE_PER_CPU(struct bpf_trace_sample_data, bpf_trace_sds);
++static DEFINE_PER_CPU(int, bpf_trace_nest_level);
+ BPF_CALL_5(bpf_perf_event_output, struct pt_regs *, regs, struct bpf_map *, map,
+ 	   u64, flags, void *, data, u64, size)
+ {
+-	struct perf_sample_data *sd = this_cpu_ptr(&bpf_trace_sd);
++	struct bpf_trace_sample_data *sds = this_cpu_ptr(&bpf_trace_sds);
++	int nest_level = this_cpu_inc_return(bpf_trace_nest_level);
+ 	struct perf_raw_record raw = {
+ 		.frag = {
+ 			.size = size,
+ 			.data = data,
+ 		},
+ 	};
++	struct perf_sample_data *sd;
++	int err;
+ 
+-	if (unlikely(flags & ~(BPF_F_INDEX_MASK)))
+-		return -EINVAL;
++	if (WARN_ON_ONCE(nest_level > ARRAY_SIZE(sds->sds))) {
++		err = -EBUSY;
++		goto out;
++	}
++
++	sd = &sds->sds[nest_level - 1];
++
++	if (unlikely(flags & ~(BPF_F_INDEX_MASK))) {
++		err = -EINVAL;
++		goto out;
++	}
+ 
+ 	perf_sample_data_init(sd, 0, 0);
+ 	sd->raw = &raw;
+ 
+-	return __bpf_perf_event_output(regs, map, flags, sd);
++	err = __bpf_perf_event_output(regs, map, flags, sd);
++
++out:
++	this_cpu_dec(bpf_trace_nest_level);
++	return err;
+ }
+ 
+ static const struct bpf_func_proto bpf_perf_event_output_proto = {
+@@ -808,16 +832,48 @@ pe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
+ /*
+  * bpf_raw_tp_regs are separate from bpf_pt_regs used from skb/xdp
+  * to avoid potential recursive reuse issue when/if tracepoints are added
+- * inside bpf_*_event_output, bpf_get_stackid and/or bpf_get_stack
++ * inside bpf_*_event_output, bpf_get_stackid and/or bpf_get_stack.
++ *
++ * Since raw tracepoints run despite bpf_prog_active, support concurrent usage
++ * in normal, irq, and nmi context.
+  */
+-static DEFINE_PER_CPU(struct pt_regs, bpf_raw_tp_regs);
++struct bpf_raw_tp_regs {
++	struct pt_regs regs[3];
++};
++static DEFINE_PER_CPU(struct bpf_raw_tp_regs, bpf_raw_tp_regs);
++static DEFINE_PER_CPU(int, bpf_raw_tp_nest_level);
++static struct pt_regs *get_bpf_raw_tp_regs(void)
++{
++	struct bpf_raw_tp_regs *tp_regs = this_cpu_ptr(&bpf_raw_tp_regs);
++	int nest_level = this_cpu_inc_return(bpf_raw_tp_nest_level);
++
++	if (WARN_ON_ONCE(nest_level > ARRAY_SIZE(tp_regs->regs))) {
++		this_cpu_dec(bpf_raw_tp_nest_level);
++		return ERR_PTR(-EBUSY);
++	}
++
++	return &tp_regs->regs[nest_level - 1];
++}
++
++static void put_bpf_raw_tp_regs(void)
++{
++	this_cpu_dec(bpf_raw_tp_nest_level);
++}
++
+ BPF_CALL_5(bpf_perf_event_output_raw_tp, struct bpf_raw_tracepoint_args *, args,
+ 	   struct bpf_map *, map, u64, flags, void *, data, u64, size)
+ {
+-	struct pt_regs *regs = this_cpu_ptr(&bpf_raw_tp_regs);
++	struct pt_regs *regs = get_bpf_raw_tp_regs();
++	int ret;
++
++	if (IS_ERR(regs))
++		return PTR_ERR(regs);
+ 
+ 	perf_fetch_caller_regs(regs);
+-	return ____bpf_perf_event_output(regs, map, flags, data, size);
++	ret = ____bpf_perf_event_output(regs, map, flags, data, size);
++
++	put_bpf_raw_tp_regs();
++	return ret;
+ }
+ 
+ static const struct bpf_func_proto bpf_perf_event_output_proto_raw_tp = {
+@@ -834,12 +890,18 @@ static const struct bpf_func_proto bpf_perf_event_output_proto_raw_tp = {
+ BPF_CALL_3(bpf_get_stackid_raw_tp, struct bpf_raw_tracepoint_args *, args,
+ 	   struct bpf_map *, map, u64, flags)
+ {
+-	struct pt_regs *regs = this_cpu_ptr(&bpf_raw_tp_regs);
++	struct pt_regs *regs = get_bpf_raw_tp_regs();
++	int ret;
++
++	if (IS_ERR(regs))
++		return PTR_ERR(regs);
+ 
+ 	perf_fetch_caller_regs(regs);
+ 	/* similar to bpf_perf_event_output_tp, but pt_regs fetched differently */
+-	return bpf_get_stackid((unsigned long) regs, (unsigned long) map,
+-			       flags, 0, 0);
++	ret = bpf_get_stackid((unsigned long) regs, (unsigned long) map,
++			      flags, 0, 0);
++	put_bpf_raw_tp_regs();
++	return ret;
+ }
+ 
+ static const struct bpf_func_proto bpf_get_stackid_proto_raw_tp = {
+@@ -854,11 +916,17 @@ static const struct bpf_func_proto bpf_get_stackid_proto_raw_tp = {
+ BPF_CALL_4(bpf_get_stack_raw_tp, struct bpf_raw_tracepoint_args *, args,
+ 	   void *, buf, u32, size, u64, flags)
+ {
+-	struct pt_regs *regs = this_cpu_ptr(&bpf_raw_tp_regs);
++	struct pt_regs *regs = get_bpf_raw_tp_regs();
++	int ret;
++
++	if (IS_ERR(regs))
++		return PTR_ERR(regs);
+ 
+ 	perf_fetch_caller_regs(regs);
+-	return bpf_get_stack((unsigned long) regs, (unsigned long) buf,
+-			     (unsigned long) size, flags, 0);
++	ret = bpf_get_stack((unsigned long) regs, (unsigned long) buf,
++			    (unsigned long) size, flags, 0);
++	put_bpf_raw_tp_regs();
++	return ret;
+ }
+ 
+ static const struct bpf_func_proto bpf_get_stack_proto_raw_tp = {
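
Both bpf_trace.c changes above apply one pattern: the single per-CPU scratch object becomes a stack of three, indexed by a nesting counter, so a BPF program firing from irq or nmi context cannot clobber the buffer of the context it interrupted. A single-threaded userspace sketch of the get/put pair (the kernel uses this_cpu_inc_return()/this_cpu_dec(); everything here is illustrative):

#include <stdio.h>

#define MAX_NEST 3 /* normal, irq, nmi */

struct scratch { char buf[64]; };

static struct scratch scratch_bufs[MAX_NEST];
static int nest_level; /* per-CPU in the kernel; plain static here */

static struct scratch *get_scratch(void)
{
	int level = ++nest_level;

	if (level > MAX_NEST) {
		/* nested deeper than we have buffers: back out */
		nest_level--;
		return NULL;
	}
	return &scratch_bufs[level - 1];
}

static void put_scratch(void)
{
	nest_level--;
}

int main(void)
{
	/* simulate an "nmi" handler inside an "irq" handler inside
	 * normal context: each level gets its own buffer */
	struct scratch *a = get_scratch();
	struct scratch *b = get_scratch();
	struct scratch *c = get_scratch();
	struct scratch *d = get_scratch(); /* fourth level is refused */

	printf("distinct buffers: %d, overflow refused: %s\n",
	       (a != b) + (b != c) + (a != c), d ? "no" : "yes");
	put_scratch();
	put_scratch();
	put_scratch();
	return 0;
}
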
+diff --git a/kernel/trace/trace_branch.c b/kernel/trace/trace_branch.c
+index 3ea65cdff30d..4ad967453b6f 100644
+--- a/kernel/trace/trace_branch.c
++++ b/kernel/trace/trace_branch.c
+@@ -205,8 +205,6 @@ void trace_likely_condition(struct ftrace_likely_data *f, int val, int expect)
+ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
+ 			  int expect, int is_constant)
+ {
+-	unsigned long flags = user_access_save();
+-
+ 	/* A constant is always correct */
+ 	if (is_constant) {
+ 		f->constant++;
+@@ -225,8 +223,6 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
+ 		f->data.correct++;
+ 	else
+ 		f->data.incorrect++;
+-
+-	user_access_restore(flags);
+ }
+ EXPORT_SYMBOL(ftrace_likely_update);
+ 
+diff --git a/lib/xarray.c b/lib/xarray.c
+index 6be3acbb861f..446b956c9188 100644
+--- a/lib/xarray.c
++++ b/lib/xarray.c
+@@ -298,6 +298,8 @@ bool xas_nomem(struct xa_state *xas, gfp_t gfp)
+ 		xas_destroy(xas);
+ 		return false;
+ 	}
++	if (xas->xa->xa_flags & XA_FLAGS_ACCOUNT)
++		gfp |= __GFP_ACCOUNT;
+ 	xas->xa_alloc = kmem_cache_alloc(radix_tree_node_cachep, gfp);
+ 	if (!xas->xa_alloc)
+ 		return false;
+@@ -325,6 +327,8 @@ static bool __xas_nomem(struct xa_state *xas, gfp_t gfp)
+ 		xas_destroy(xas);
+ 		return false;
+ 	}
++	if (xas->xa->xa_flags & XA_FLAGS_ACCOUNT)
++		gfp |= __GFP_ACCOUNT;
+ 	if (gfpflags_allow_blocking(gfp)) {
+ 		xas_unlock_type(xas, lock_type);
+ 		xas->xa_alloc = kmem_cache_alloc(radix_tree_node_cachep, gfp);
+@@ -358,8 +362,12 @@ static void *xas_alloc(struct xa_state *xas, unsigned int shift)
+ 	if (node) {
+ 		xas->xa_alloc = NULL;
+ 	} else {
+-		node = kmem_cache_alloc(radix_tree_node_cachep,
+-					GFP_NOWAIT | __GFP_NOWARN);
++		gfp_t gfp = GFP_NOWAIT | __GFP_NOWARN;
++
++		if (xas->xa->xa_flags & XA_FLAGS_ACCOUNT)
++			gfp |= __GFP_ACCOUNT;
++
++		node = kmem_cache_alloc(radix_tree_node_cachep, gfp);
+ 		if (!node) {
+ 			xas_set_err(xas, -ENOMEM);
+ 			return NULL;
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 5b4f00be325d..ec81808830ae 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -1491,16 +1491,29 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
+ 
+ /*
+  * Dissolve a given free hugepage into free buddy pages. This function does
+- * nothing for in-use (including surplus) hugepages. Returns -EBUSY if the
+- * dissolution fails because a give page is not a free hugepage, or because
+- * free hugepages are fully reserved.
++ * nothing for in-use hugepages and non-hugepages.
++ * This function returns values like below:
++ *
++ *  -EBUSY: failed to dissolved free hugepages or the hugepage is in-use
++ *          (allocated or reserved.)
++ *       0: successfully dissolved free hugepages or the page is not a
++ *          hugepage (considered as already dissolved)
+  */
+ int dissolve_free_huge_page(struct page *page)
+ {
+ 	int rc = -EBUSY;
+ 
++	/* Not to disrupt normal path by vainly holding hugetlb_lock */
++	if (!PageHuge(page))
++		return 0;
++
+ 	spin_lock(&hugetlb_lock);
+-	if (PageHuge(page) && !page_count(page)) {
++	if (!PageHuge(page)) {
++		rc = 0;
++		goto out;
++	}
++
++	if (!page_count(page)) {
+ 		struct page *head = compound_head(page);
+ 		struct hstate *h = page_hstate(head);
+ 		int nid = page_to_nid(head);
+@@ -1545,11 +1558,9 @@ int dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
+ 
+ 	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order) {
+ 		page = pfn_to_page(pfn);
+-		if (PageHuge(page) && !page_count(page)) {
+-			rc = dissolve_free_huge_page(page);
+-			if (rc)
+-				break;
+-		}
++		rc = dissolve_free_huge_page(page);
++		if (rc)
++			break;
+ 	}
+ 
+ 	return rc;
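
dissolve_free_huge_page() now separates "already dissolved" (0) from "busy" (-EBUSY), and pairs an unlocked fast-path test with a re-check under hugetlb_lock, since the page may stop being a hugepage in between. A compressed sketch of that check/lock/re-check shape, with stand-in predicates and a pthread mutex in place of the spinlock:

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int page_is_huge = 1; /* stand-in for PageHuge(page) */
static int page_refcount;    /* stand-in for page_count(page) */

static int dissolve(void)
{
	int rc = -EBUSY;

	/* unlocked fast path: don't take the lock for non-hugepages */
	if (!page_is_huge)
		return 0;

	pthread_mutex_lock(&lock);
	/* state may have changed while we were not holding the lock */
	if (!page_is_huge) {
		rc = 0;
		goto out;
	}
	if (!page_refcount) {
		page_is_huge = 0; /* actually dissolve it */
		rc = 0;
	}
out:
	pthread_mutex_unlock(&lock);
	return rc;
}

int main(void)
{
	printf("free hugepage:  %d\n", dissolve()); /* 0: dissolved */
	printf("not a hugepage: %d\n", dissolve()); /* 0: nothing to do */
	page_is_huge = 1;
	page_refcount = 1;
	printf("in-use page:    %d\n", dissolve()); /* -EBUSY */
	return 0;
}
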
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index fc8b51744579..3a83e279cc98 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1733,6 +1733,8 @@ static int soft_offline_huge_page(struct page *page, int flags)
+ 		if (!ret) {
+ 			if (set_hwpoison_free_buddy_page(page))
+ 				num_poisoned_pages_inc();
++			else
++				ret = -EBUSY;
+ 		}
+ 	}
+ 	return ret;
+@@ -1857,11 +1859,8 @@ static int soft_offline_in_use_page(struct page *page, int flags)
+ 
+ static int soft_offline_free_page(struct page *page)
+ {
+-	int rc = 0;
+-	struct page *head = compound_head(page);
++	int rc = dissolve_free_huge_page(page);
+ 
+-	if (PageHuge(head))
+-		rc = dissolve_free_huge_page(page);
+ 	if (!rc) {
+ 		if (set_hwpoison_free_buddy_page(page))
+ 			num_poisoned_pages_inc();
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index 2219e747df49..5b3bf1747c19 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -306,7 +306,7 @@ static void mpol_rebind_nodemask(struct mempolicy *pol, const nodemask_t *nodes)
+ 	else {
+ 		nodes_remap(tmp, pol->v.nodes,pol->w.cpuset_mems_allowed,
+ 								*nodes);
+-		pol->w.cpuset_mems_allowed = tmp;
++		pol->w.cpuset_mems_allowed = *nodes;
+ 	}
+ 
+ 	if (nodes_empty(tmp))
+diff --git a/mm/page_idle.c b/mm/page_idle.c
+index 0b39ec0c945c..295512465065 100644
+--- a/mm/page_idle.c
++++ b/mm/page_idle.c
+@@ -136,7 +136,7 @@ static ssize_t page_idle_bitmap_read(struct file *file, struct kobject *kobj,
+ 
+ 	end_pfn = pfn + count * BITS_PER_BYTE;
+ 	if (end_pfn > max_pfn)
+-		end_pfn = ALIGN(max_pfn, BITMAP_CHUNK_BITS);
++		end_pfn = max_pfn;
+ 
+ 	for (; pfn < end_pfn; pfn++) {
+ 		bit = pfn % BITMAP_CHUNK_BITS;
+@@ -181,7 +181,7 @@ static ssize_t page_idle_bitmap_write(struct file *file, struct kobject *kobj,
+ 
+ 	end_pfn = pfn + count * BITS_PER_BYTE;
+ 	if (end_pfn > max_pfn)
+-		end_pfn = ALIGN(max_pfn, BITMAP_CHUNK_BITS);
++		end_pfn = max_pfn;
+ 
+ 	for (; pfn < end_pfn; pfn++) {
+ 		bit = pfn % BITMAP_CHUNK_BITS;
+diff --git a/mm/page_io.c b/mm/page_io.c
+index 2e8019d0e048..189415852077 100644
+--- a/mm/page_io.c
++++ b/mm/page_io.c
+@@ -29,10 +29,9 @@
+ static struct bio *get_swap_bio(gfp_t gfp_flags,
+ 				struct page *page, bio_end_io_t end_io)
+ {
+-	int i, nr = hpage_nr_pages(page);
+ 	struct bio *bio;
+ 
+-	bio = bio_alloc(gfp_flags, nr);
++	bio = bio_alloc(gfp_flags, 1);
+ 	if (bio) {
+ 		struct block_device *bdev;
+ 
+@@ -41,9 +40,7 @@ static struct bio *get_swap_bio(gfp_t gfp_flags,
+ 		bio->bi_iter.bi_sector <<= PAGE_SHIFT - 9;
+ 		bio->bi_end_io = end_io;
+ 
+-		for (i = 0; i < nr; i++)
+-			bio_add_page(bio, page + i, PAGE_SIZE, 0);
+-		VM_BUG_ON(bio->bi_iter.bi_size != PAGE_SIZE * nr);
++		bio_add_page(bio, page, PAGE_SIZE * hpage_nr_pages(page), 0);
+ 	}
+ 	return bio;
+ }
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 27e61ffd9039..b76f14197128 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -6420,6 +6420,7 @@ static bool sock_addr_is_valid_access(int off, int size,
+ 		case BPF_CGROUP_INET4_BIND:
+ 		case BPF_CGROUP_INET4_CONNECT:
+ 		case BPF_CGROUP_UDP4_SENDMSG:
++		case BPF_CGROUP_UDP4_RECVMSG:
+ 			break;
+ 		default:
+ 			return false;
+@@ -6430,6 +6431,7 @@ static bool sock_addr_is_valid_access(int off, int size,
+ 		case BPF_CGROUP_INET6_BIND:
+ 		case BPF_CGROUP_INET6_CONNECT:
+ 		case BPF_CGROUP_UDP6_SENDMSG:
++		case BPF_CGROUP_UDP6_RECVMSG:
+ 			break;
+ 		default:
+ 			return false;
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 067878a1e4c5..30afb072eecf 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -1482,9 +1482,6 @@ int sock_getsockopt(struct socket *sock, int level, int optname,
+ 	{
+ 		u32 meminfo[SK_MEMINFO_VARS];
+ 
+-		if (get_user(len, optlen))
+-			return -EFAULT;
+-
+ 		sk_get_meminfo(sk, meminfo);
+ 
+ 		len = min_t(unsigned int, len, sizeof(meminfo));
+diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
+index dc91c27bb788..3f6e95cb21b0 100644
+--- a/net/ipv4/raw.c
++++ b/net/ipv4/raw.c
+@@ -201,7 +201,7 @@ static int raw_v4_input(struct sk_buff *skb, const struct iphdr *iph, int hash)
+ 		}
+ 		sk = __raw_v4_lookup(net, sk_next(sk), iph->protocol,
+ 				     iph->saddr, iph->daddr,
+-				     skb->dev->ifindex, sdif);
++				     dif, sdif);
+ 	}
+ out:
+ 	read_unlock(&raw_v4_hashinfo.lock);
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 3b179ce6170f..ee999cb5e670 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -503,7 +503,11 @@ static inline struct sock *__udp4_lib_lookup_skb(struct sk_buff *skb,
+ struct sock *udp4_lib_lookup_skb(struct sk_buff *skb,
+ 				 __be16 sport, __be16 dport)
+ {
+-	return __udp4_lib_lookup_skb(skb, sport, dport, &udp_table);
++	const struct iphdr *iph = ip_hdr(skb);
++
++	return __udp4_lib_lookup(dev_net(skb->dev), iph->saddr, sport,
++				 iph->daddr, dport, inet_iif(skb),
++				 inet_sdif(skb), &udp_table, NULL);
+ }
+ EXPORT_SYMBOL_GPL(udp4_lib_lookup_skb);
+ 
+@@ -1783,6 +1787,10 @@ try_again:
+ 		sin->sin_addr.s_addr = ip_hdr(skb)->saddr;
+ 		memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
+ 		*addr_len = sizeof(*sin);
++
++		if (cgroup_bpf_enabled)
++			BPF_CGROUP_RUN_PROG_UDP4_RECVMSG_LOCK(sk,
++							(struct sockaddr *)sin);
+ 	}
+ 
+ 	if (udp_sk(sk)->gro_enabled)
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 622eeaf5732b..3a01d3d2f105 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -242,7 +242,7 @@ struct sock *udp6_lib_lookup_skb(struct sk_buff *skb,
+ 
+ 	return __udp6_lib_lookup(dev_net(skb->dev), &iph->saddr, sport,
+ 				 &iph->daddr, dport, inet6_iif(skb),
+-				 inet6_sdif(skb), &udp_table, skb);
++				 inet6_sdif(skb), &udp_table, NULL);
+ }
+ EXPORT_SYMBOL_GPL(udp6_lib_lookup_skb);
+ 
+@@ -370,6 +370,10 @@ try_again:
+ 						    inet6_iif(skb));
+ 		}
+ 		*addr_len = sizeof(*sin6);
++
++		if (cgroup_bpf_enabled)
++			BPF_CGROUP_RUN_PROG_UDP6_RECVMSG_LOCK(sk,
++						(struct sockaddr *)sin6);
+ 	}
+ 
+ 	if (udp_sk(sk)->gro_enabled)
+@@ -516,7 +520,7 @@ int __udp6_lib_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ 	struct net *net = dev_net(skb->dev);
+ 
+ 	sk = __udp6_lib_lookup(net, daddr, uh->dest, saddr, uh->source,
+-			       inet6_iif(skb), inet6_sdif(skb), udptable, skb);
++			       inet6_iif(skb), inet6_sdif(skb), udptable, NULL);
+ 	if (!sk) {
+ 		/* No socket for error: try tunnels before discarding */
+ 		sk = ERR_PTR(-ENOENT);
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 71d5544243d2..21814acb862d 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -2409,6 +2409,9 @@ static void tpacket_destruct_skb(struct sk_buff *skb)
+ 
+ 		ts = __packet_set_timestamp(po, ph, skb);
+ 		__packet_set_status(po, ph, TP_STATUS_AVAILABLE | ts);
++
++		if (!packet_read_pending(&po->tx_ring))
++			complete(&po->skb_completion);
+ 	}
+ 
+ 	sock_wfree(skb);
+@@ -2593,7 +2596,7 @@ static int tpacket_parse_header(struct packet_sock *po, void *frame,
+ 
+ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
+ {
+-	struct sk_buff *skb;
++	struct sk_buff *skb = NULL;
+ 	struct net_device *dev;
+ 	struct virtio_net_hdr *vnet_hdr = NULL;
+ 	struct sockcm_cookie sockc;
+@@ -2608,6 +2611,7 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
+ 	int len_sum = 0;
+ 	int status = TP_STATUS_AVAILABLE;
+ 	int hlen, tlen, copylen = 0;
++	long timeo = 0;
+ 
+ 	mutex_lock(&po->pg_vec_lock);
+ 
+@@ -2654,12 +2658,21 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
+ 	if ((size_max > dev->mtu + reserve + VLAN_HLEN) && !po->has_vnet_hdr)
+ 		size_max = dev->mtu + reserve + VLAN_HLEN;
+ 
++	reinit_completion(&po->skb_completion);
++
+ 	do {
+ 		ph = packet_current_frame(po, &po->tx_ring,
+ 					  TP_STATUS_SEND_REQUEST);
+ 		if (unlikely(ph == NULL)) {
+-			if (need_wait && need_resched())
+-				schedule();
++			if (need_wait && skb) {
++				timeo = sock_sndtimeo(&po->sk, msg->msg_flags & MSG_DONTWAIT);
++				timeo = wait_for_completion_interruptible_timeout(&po->skb_completion, timeo);
++				if (timeo <= 0) {
++					err = !timeo ? -ETIMEDOUT : -ERESTARTSYS;
++					goto out_put;
++				}
++			}
++			/* check for additional frames */
+ 			continue;
+ 		}
+ 
+@@ -3215,6 +3228,7 @@ static int packet_create(struct net *net, struct socket *sock, int protocol,
+ 	sock_init_data(sock, sk);
+ 
+ 	po = pkt_sk(sk);
++	init_completion(&po->skb_completion);
+ 	sk->sk_family = PF_PACKET;
+ 	po->num = proto;
+ 	po->xmit = dev_queue_xmit;
+@@ -4327,7 +4341,7 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ 				    req3->tp_sizeof_priv ||
+ 				    req3->tp_feature_req_word) {
+ 					err = -EINVAL;
+-					goto out;
++					goto out_free_pg_vec;
+ 				}
+ 			}
+ 			break;
+@@ -4391,6 +4405,7 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ 			prb_shutdown_retire_blk_timer(po, rb_queue);
+ 	}
+ 
++out_free_pg_vec:
+ 	if (pg_vec)
+ 		free_pg_vec(pg_vec, order, req->tp_block_nr);
+ out:
+diff --git a/net/packet/internal.h b/net/packet/internal.h
+index 3bb7c5fb3bff..c70a2794456f 100644
+--- a/net/packet/internal.h
++++ b/net/packet/internal.h
+@@ -128,6 +128,7 @@ struct packet_sock {
+ 	unsigned int		tp_hdrlen;
+ 	unsigned int		tp_reserve;
+ 	unsigned int		tp_tstamp;
++	struct completion	skb_completion;
+ 	struct net_device __rcu	*cached_dev;
+ 	int			(*xmit)(struct sk_buff *skb);
+ 	struct packet_type	prot_hook ____cacheline_aligned_in_smp;
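
Rather than busy-looping with schedule() while a TX ring frame is outstanding, tpacket_snd() now sleeps on a completion that the skb destructor signals once no transmits are pending, bounded by the socket send timeout. A userspace analogue of that wait/complete pairing using a condition variable; the timing and names are invented (build with -lpthread):

#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t done = PTHREAD_COND_INITIALIZER;
static int tx_pending = 1;

/* stand-in for the skb destructor: the last pending frame completes */
static void *completer(void *arg)
{
	(void)arg;
	usleep(100 * 1000);
	pthread_mutex_lock(&mtx);
	tx_pending = 0;
	pthread_cond_signal(&done); /* like complete(&po->skb_completion) */
	pthread_mutex_unlock(&mtx);
	return NULL;
}

int main(void)
{
	pthread_t t;
	struct timespec deadline;
	int rc = 0;

	pthread_create(&t, NULL, completer, NULL);

	clock_gettime(CLOCK_REALTIME, &deadline);
	deadline.tv_sec += 2; /* the socket sndtimeo in the real code */

	pthread_mutex_lock(&mtx);
	while (tx_pending && rc == 0)
		rc = pthread_cond_timedwait(&done, &mtx, &deadline);
	pthread_mutex_unlock(&mtx);

	printf(rc ? "timed out\n" : "tx completed\n");
	pthread_join(t, NULL);
	return 0;
}
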
+diff --git a/net/sctp/endpointola.c b/net/sctp/endpointola.c
+index 0448b68fce74..bcfc81ee153d 100644
+--- a/net/sctp/endpointola.c
++++ b/net/sctp/endpointola.c
+@@ -133,10 +133,6 @@ static struct sctp_endpoint *sctp_endpoint_init(struct sctp_endpoint *ep,
+ 	/* Initialize the bind addr area */
+ 	sctp_bind_addr_init(&ep->base.bind_addr, 0);
+ 
+-	/* Remember who we are attached to.  */
+-	ep->base.sk = sk;
+-	sock_hold(ep->base.sk);
+-
+ 	/* Create the lists of associations.  */
+ 	INIT_LIST_HEAD(&ep->asocs);
+ 
+@@ -169,6 +165,10 @@ static struct sctp_endpoint *sctp_endpoint_init(struct sctp_endpoint *ep,
+ 	ep->prsctp_enable = net->sctp.prsctp_enable;
+ 	ep->reconf_enable = net->sctp.reconf_enable;
+ 
++	/* Remember who we are attached to.  */
++	ep->base.sk = sk;
++	sock_hold(ep->base.sk);
++
+ 	return ep;
+ 
+ nomem_shkey:
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 732d4b57411a..a437ee8ae482 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -950,6 +950,8 @@ static int xs_local_send_request(struct rpc_rqst *req)
+ 	struct sock_xprt *transport =
+ 				container_of(xprt, struct sock_xprt, xprt);
+ 	struct xdr_buf *xdr = &req->rq_snd_buf;
++	rpc_fraghdr rm = xs_stream_record_marker(xdr);
++	unsigned int msglen = rm ? req->rq_slen + sizeof(rm) : req->rq_slen;
+ 	int status;
+ 	int sent = 0;
+ 
+@@ -964,9 +966,7 @@ static int xs_local_send_request(struct rpc_rqst *req)
+ 
+ 	req->rq_xtime = ktime_get();
+ 	status = xs_sendpages(transport->sock, NULL, 0, xdr,
+-			      transport->xmit.offset,
+-			      xs_stream_record_marker(xdr),
+-			      &sent);
++			      transport->xmit.offset, rm, &sent);
+ 	dprintk("RPC:       %s(%u) = %d\n",
+ 			__func__, xdr->len - transport->xmit.offset, status);
+ 
+@@ -976,7 +976,7 @@ static int xs_local_send_request(struct rpc_rqst *req)
+ 	if (likely(sent > 0) || status == 0) {
+ 		transport->xmit.offset += sent;
+ 		req->rq_bytes_sent = transport->xmit.offset;
+-		if (likely(req->rq_bytes_sent >= req->rq_slen)) {
++		if (likely(req->rq_bytes_sent >= msglen)) {
+ 			req->rq_xmit_bytes_sent += transport->xmit.offset;
+ 			transport->xmit.offset = 0;
+ 			return 0;
+@@ -1097,6 +1097,8 @@ static int xs_tcp_send_request(struct rpc_rqst *req)
+ 	struct rpc_xprt *xprt = req->rq_xprt;
+ 	struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
+ 	struct xdr_buf *xdr = &req->rq_snd_buf;
++	rpc_fraghdr rm = xs_stream_record_marker(xdr);
++	unsigned int msglen = rm ? req->rq_slen + sizeof(rm) : req->rq_slen;
+ 	bool vm_wait = false;
+ 	int status;
+ 	int sent;
+@@ -1122,9 +1124,7 @@ static int xs_tcp_send_request(struct rpc_rqst *req)
+ 	while (1) {
+ 		sent = 0;
+ 		status = xs_sendpages(transport->sock, NULL, 0, xdr,
+-				      transport->xmit.offset,
+-				      xs_stream_record_marker(xdr),
+-				      &sent);
++				      transport->xmit.offset, rm, &sent);
+ 
+ 		dprintk("RPC:       xs_tcp_send_request(%u) = %d\n",
+ 				xdr->len - transport->xmit.offset, status);
+@@ -1133,7 +1133,7 @@ static int xs_tcp_send_request(struct rpc_rqst *req)
+ 		 * reset the count of bytes sent. */
+ 		transport->xmit.offset += sent;
+ 		req->rq_bytes_sent = transport->xmit.offset;
+-		if (likely(req->rq_bytes_sent >= req->rq_slen)) {
++		if (likely(req->rq_bytes_sent >= msglen)) {
+ 			req->rq_xmit_bytes_sent += transport->xmit.offset;
+ 			transport->xmit.offset = 0;
+ 			return 0;
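
Both sunrpc send paths compared bytes sent against req->rq_slen, but with a stream record marker prepended the wire message is sizeof(rm) bytes longer, so a request could be declared complete one fragment header short. A toy illustration of the corrected accounting (fields renamed for brevity):

#include <stdint.h>
#include <stdio.h>

struct rpc_req {
	unsigned int slen;   /* payload length */
	unsigned int offset; /* bytes pushed to the socket so far */
};

/* done only when everything on the wire went out, marker included */
static int request_done(const struct rpc_req *req, uint32_t marker)
{
	unsigned int msglen = marker ? req->slen + sizeof(marker)
				     : req->slen;
	return req->offset >= msglen;
}

int main(void)
{
	struct rpc_req req = { .slen = 100, .offset = 100 };

	/* 100 of 104 wire bytes sent: the old check (offset >= slen)
	 * would wrongly report this request as complete */
	printf("done at 100/104: %d\n", request_done(&req, 0x80000064u));
	req.offset += 4;
	printf("done at 104/104: %d\n", request_done(&req, 0x80000064u));
	return 0;
}
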
+diff --git a/net/tipc/core.c b/net/tipc/core.c
+index 3ecca3b88bf8..eb0f701f9bf1 100644
+--- a/net/tipc/core.c
++++ b/net/tipc/core.c
+@@ -132,7 +132,7 @@ static int __init tipc_init(void)
+ 	if (err)
+ 		goto out_sysctl;
+ 
+-	err = register_pernet_subsys(&tipc_net_ops);
++	err = register_pernet_device(&tipc_net_ops);
+ 	if (err)
+ 		goto out_pernet;
+ 
+@@ -140,7 +140,7 @@ static int __init tipc_init(void)
+ 	if (err)
+ 		goto out_socket;
+ 
+-	err = register_pernet_subsys(&tipc_topsrv_net_ops);
++	err = register_pernet_device(&tipc_topsrv_net_ops);
+ 	if (err)
+ 		goto out_pernet_topsrv;
+ 
+@@ -151,11 +151,11 @@ static int __init tipc_init(void)
+ 	pr_info("Started in single node mode\n");
+ 	return 0;
+ out_bearer:
+-	unregister_pernet_subsys(&tipc_topsrv_net_ops);
++	unregister_pernet_device(&tipc_topsrv_net_ops);
+ out_pernet_topsrv:
+ 	tipc_socket_stop();
+ out_socket:
+-	unregister_pernet_subsys(&tipc_net_ops);
++	unregister_pernet_device(&tipc_net_ops);
+ out_pernet:
+ 	tipc_unregister_sysctl();
+ out_sysctl:
+@@ -170,9 +170,9 @@ out_netlink:
+ static void __exit tipc_exit(void)
+ {
+ 	tipc_bearer_cleanup();
+-	unregister_pernet_subsys(&tipc_topsrv_net_ops);
++	unregister_pernet_device(&tipc_topsrv_net_ops);
+ 	tipc_socket_stop();
+-	unregister_pernet_subsys(&tipc_net_ops);
++	unregister_pernet_device(&tipc_net_ops);
+ 	tipc_netlink_stop();
+ 	tipc_netlink_compat_stop();
+ 	tipc_unregister_sysctl();
+diff --git a/net/tipc/netlink_compat.c b/net/tipc/netlink_compat.c
+index 340a6e7c43a7..8836aebd6180 100644
+--- a/net/tipc/netlink_compat.c
++++ b/net/tipc/netlink_compat.c
+@@ -445,7 +445,11 @@ static int tipc_nl_compat_bearer_disable(struct tipc_nl_compat_cmd_doit *cmd,
+ 	if (!bearer)
+ 		return -EMSGSIZE;
+ 
+-	len = min_t(int, TLV_GET_DATA_LEN(msg->req), TIPC_MAX_BEARER_NAME);
++	len = TLV_GET_DATA_LEN(msg->req);
++	if (len <= 0)
++		return -EINVAL;
++
++	len = min_t(int, len, TIPC_MAX_BEARER_NAME);
+ 	if (!string_is_valid(name, len))
+ 		return -EINVAL;
+ 
+@@ -537,7 +541,11 @@ static int tipc_nl_compat_link_stat_dump(struct tipc_nl_compat_msg *msg,
+ 
+ 	name = (char *)TLV_DATA(msg->req);
+ 
+-	len = min_t(int, TLV_GET_DATA_LEN(msg->req), TIPC_MAX_LINK_NAME);
++	len = TLV_GET_DATA_LEN(msg->req);
++	if (len <= 0)
++		return -EINVAL;
++
++	len = min_t(int, len, TIPC_MAX_BEARER_NAME);
+ 	if (!string_is_valid(name, len))
+ 		return -EINVAL;
+ 
+@@ -815,7 +823,11 @@ static int tipc_nl_compat_link_reset_stats(struct tipc_nl_compat_cmd_doit *cmd,
+ 	if (!link)
+ 		return -EMSGSIZE;
+ 
+-	len = min_t(int, TLV_GET_DATA_LEN(msg->req), TIPC_MAX_LINK_NAME);
++	len = TLV_GET_DATA_LEN(msg->req);
++	if (len <= 0)
++		return -EINVAL;
++
++	len = min_t(int, len, TIPC_MAX_BEARER_NAME);
+ 	if (!string_is_valid(name, len))
+ 		return -EINVAL;
+ 
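
TLV_GET_DATA_LEN() can go negative on a truncated message, and min_t(int, ...) would pass that negative length straight to the string check, so the compat handlers now reject non-positive lengths first. A condensed sketch of the hardened pattern with hypothetical helpers:

#include <stdio.h>
#include <string.h>

#define MAX_NAME 32
#define EINVAL 22

static int min_int(int a, int b) { return a < b ? a : b; }

/* true if buf contains a NUL within the first len bytes */
static int string_is_valid(const char *buf, int len)
{
	return memchr(buf, '\0', len) != NULL;
}

static int parse_name(const char *buf, int data_len)
{
	int len;

	/* a truncated TLV yields data_len <= 0; reject before use */
	if (data_len <= 0)
		return -EINVAL;

	len = min_int(data_len, MAX_NAME);
	if (!string_is_valid(buf, len))
		return -EINVAL;

	printf("name: %s\n", buf);
	return 0;
}

int main(void)
{
	char name[MAX_NAME] = "bearer0";

	parse_name(name, sizeof(name));                 /* accepted */
	return parse_name(name, -4) == -EINVAL ? 0 : 1; /* rejected */
}
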
+diff --git a/net/tipc/udp_media.c b/net/tipc/udp_media.c
+index 4d85d71f16e2..c86f136e5962 100644
+--- a/net/tipc/udp_media.c
++++ b/net/tipc/udp_media.c
+@@ -176,7 +176,6 @@ static int tipc_udp_xmit(struct net *net, struct sk_buff *skb,
+ 			goto tx_error;
+ 		}
+ 
+-		skb->dev = rt->dst.dev;
+ 		ttl = ip4_dst_hoplimit(&rt->dst);
+ 		udp_tunnel_xmit_skb(rt, ub->ubsock->sk, skb, src->ipv4.s_addr,
+ 				    dst->ipv4.s_addr, 0, ttl, 0, src->port,
+@@ -195,10 +194,9 @@ static int tipc_udp_xmit(struct net *net, struct sk_buff *skb,
+ 		if (err)
+ 			goto tx_error;
+ 		ttl = ip6_dst_hoplimit(ndst);
+-		err = udp_tunnel6_xmit_skb(ndst, ub->ubsock->sk, skb,
+-					   ndst->dev, &src->ipv6,
+-					   &dst->ipv6, 0, ttl, 0, src->port,
+-					   dst->port, false);
++		err = udp_tunnel6_xmit_skb(ndst, ub->ubsock->sk, skb, NULL,
++					   &src->ipv6, &dst->ipv6, 0, ttl, 0,
++					   src->port, dst->port, false);
+ #endif
+ 	}
+ 	return err;
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 478603f43964..f4f632824247 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -279,7 +279,8 @@ static void tls_sk_proto_close(struct sock *sk, long timeout)
+ 		goto skip_tx_cleanup;
+ 	}
+ 
+-	if (!tls_complete_pending_work(sk, ctx, 0, &timeo))
++	if (unlikely(sk->sk_write_pending) &&
++	    !wait_on_pending_writer(sk, &timeo))
+ 		tls_handle_open_record(sk, 0);
+ 
+ 	/* We need these for tls_sw_fallback handling of other packets */
+diff --git a/tools/testing/selftests/bpf/test_lpm_map.c b/tools/testing/selftests/bpf/test_lpm_map.c
+index 02d7c871862a..006be3963977 100644
+--- a/tools/testing/selftests/bpf/test_lpm_map.c
++++ b/tools/testing/selftests/bpf/test_lpm_map.c
+@@ -573,13 +573,13 @@ static void test_lpm_get_next_key(void)
+ 
+ 	/* add one more element (total two) */
+ 	key_p->prefixlen = 24;
+-	inet_pton(AF_INET, "192.168.0.0", key_p->data);
++	inet_pton(AF_INET, "192.168.128.0", key_p->data);
+ 	assert(bpf_map_update_elem(map_fd, key_p, &value, 0) == 0);
+ 
+ 	memset(key_p, 0, key_size);
+ 	assert(bpf_map_get_next_key(map_fd, NULL, key_p) == 0);
+ 	assert(key_p->prefixlen == 24 && key_p->data[0] == 192 &&
+-	       key_p->data[1] == 168 && key_p->data[2] == 0);
++	       key_p->data[1] == 168 && key_p->data[2] == 128);
+ 
+ 	memset(next_key_p, 0, key_size);
+ 	assert(bpf_map_get_next_key(map_fd, key_p, next_key_p) == 0);
+@@ -592,7 +592,7 @@ static void test_lpm_get_next_key(void)
+ 
+ 	/* Add one more element (total three) */
+ 	key_p->prefixlen = 24;
+-	inet_pton(AF_INET, "192.168.128.0", key_p->data);
++	inet_pton(AF_INET, "192.168.0.0", key_p->data);
+ 	assert(bpf_map_update_elem(map_fd, key_p, &value, 0) == 0);
+ 
+ 	memset(key_p, 0, key_size);
+@@ -643,6 +643,41 @@ static void test_lpm_get_next_key(void)
+ 	assert(bpf_map_get_next_key(map_fd, key_p, next_key_p) == -1 &&
+ 	       errno == ENOENT);
+ 
++	/* Add one more element (total five) */
++	key_p->prefixlen = 28;
++	inet_pton(AF_INET, "192.168.1.128", key_p->data);
++	assert(bpf_map_update_elem(map_fd, key_p, &value, 0) == 0);
++
++	memset(key_p, 0, key_size);
++	assert(bpf_map_get_next_key(map_fd, NULL, key_p) == 0);
++	assert(key_p->prefixlen == 24 && key_p->data[0] == 192 &&
++	       key_p->data[1] == 168 && key_p->data[2] == 0);
++
++	memset(next_key_p, 0, key_size);
++	assert(bpf_map_get_next_key(map_fd, key_p, next_key_p) == 0);
++	assert(next_key_p->prefixlen == 28 && next_key_p->data[0] == 192 &&
++	       next_key_p->data[1] == 168 && next_key_p->data[2] == 1 &&
++	       next_key_p->data[3] == 128);
++
++	memcpy(key_p, next_key_p, key_size);
++	assert(bpf_map_get_next_key(map_fd, key_p, next_key_p) == 0);
++	assert(next_key_p->prefixlen == 24 && next_key_p->data[0] == 192 &&
++	       next_key_p->data[1] == 168 && next_key_p->data[2] == 1);
++
++	memcpy(key_p, next_key_p, key_size);
++	assert(bpf_map_get_next_key(map_fd, key_p, next_key_p) == 0);
++	assert(next_key_p->prefixlen == 24 && next_key_p->data[0] == 192 &&
++	       next_key_p->data[1] == 168 && next_key_p->data[2] == 128);
++
++	memcpy(key_p, next_key_p, key_size);
++	assert(bpf_map_get_next_key(map_fd, key_p, next_key_p) == 0);
++	assert(next_key_p->prefixlen == 16 && next_key_p->data[0] == 192 &&
++	       next_key_p->data[1] == 168);
++
++	memcpy(key_p, next_key_p, key_size);
++	assert(bpf_map_get_next_key(map_fd, key_p, next_key_p) == -1 &&
++	       errno == ENOENT);
++
+ 	/* no exact matching key should return the first one in post order */
+ 	key_p->prefixlen = 22;
+ 	inet_pton(AF_INET, "192.168.1.0", key_p->data);


^ permalink raw reply related	[flat|nested] 23+ messages in thread


* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-07-10 11:07 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2019-07-10 11:07 UTC (permalink / raw
  To: gentoo-commits

commit:     06901cce331844d85af281217797f793eabdf9a3
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul 10 11:07:15 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul 10 11:07:15 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=06901cce

Linux patch 5.1.17

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1016_linux-5.1.17.patch | 2743 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2747 insertions(+)

diff --git a/0000_README b/0000_README
index 941f7f1..f6eec27 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch:  1015_linux-5.1.16.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.1.16
 
+Patch:  1016_linux-5.1.17.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.1.17
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1016_linux-5.1.17.patch b/1016_linux-5.1.17.patch
new file mode 100644
index 0000000..acd6a52
--- /dev/null
+++ b/1016_linux-5.1.17.patch
@@ -0,0 +1,2743 @@
+diff --git a/Makefile b/Makefile
+index 46a0ae537182..14c91d46583f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 16
++SUBLEVEL = 17
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arm/boot/dts/armada-xp-98dx3236.dtsi b/arch/arm/boot/dts/armada-xp-98dx3236.dtsi
+index 59753470cd34..267d0c178e55 100644
+--- a/arch/arm/boot/dts/armada-xp-98dx3236.dtsi
++++ b/arch/arm/boot/dts/armada-xp-98dx3236.dtsi
+@@ -336,3 +336,11 @@
+ 	status = "disabled";
+ };
+ 
++&uart0 {
++	compatible = "marvell,armada-38x-uart";
++};
++
++&uart1 {
++	compatible = "marvell,armada-38x-uart";
++};
++
+diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
+index 3a1870228946..dff8f9ea5754 100644
+--- a/arch/arm64/include/asm/tlbflush.h
++++ b/arch/arm64/include/asm/tlbflush.h
+@@ -195,6 +195,9 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
+ 	unsigned long asid = ASID(vma->vm_mm);
+ 	unsigned long addr;
+ 
++	start = round_down(start, stride);
++	end = round_up(end, stride);
++
+ 	if ((end - start) >= (MAX_TLBI_OPS * stride)) {
+ 		flush_tlb_mm(vma->vm_mm);
+ 		return;
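
Rounding start down and end up to the invalidation stride makes the loop cover partially overlapped huge-page blocks instead of stopping one short. A minimal sketch of the same alignment arithmetic, assuming a power-of-two stride as the kernel's round_down()/round_up() do:

#include <stdio.h>

/* power-of-two rounding, as the kernel round_down/round_up macros do */
#define round_down(x, a) ((x) & ~((a) - 1))
#define round_up(x, a)   round_down((x) + (a) - 1, (a))

int main(void)
{
	unsigned long stride = 0x200000; /* 2 MiB block size */
	unsigned long start = 0x200100;  /* inside a block */
	unsigned long end = 0x600100;
	unsigned long addr;
	int ops = 0;

	start = round_down(start, stride);
	end = round_up(end, stride);

	/* every block-sized chunk touching [start, end) gets one op */
	for (addr = start; addr < end; addr += stride)
		ops++;

	printf("range %#lx-%#lx -> %d invalidations\n", start, end, ops);
	return 0;
}
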
+diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
+index 1e418e69b58c..9b67304fba89 100644
+--- a/arch/arm64/kernel/module.c
++++ b/arch/arm64/kernel/module.c
+@@ -32,6 +32,7 @@
+ 
+ void *module_alloc(unsigned long size)
+ {
++	u64 module_alloc_end = module_alloc_base + MODULES_VSIZE;
+ 	gfp_t gfp_mask = GFP_KERNEL;
+ 	void *p;
+ 
+@@ -39,9 +40,12 @@ void *module_alloc(unsigned long size)
+ 	if (IS_ENABLED(CONFIG_ARM64_MODULE_PLTS))
+ 		gfp_mask |= __GFP_NOWARN;
+ 
++	if (IS_ENABLED(CONFIG_KASAN))
++		/* don't exceed the static module region - see below */
++		module_alloc_end = MODULES_END;
++
+ 	p = __vmalloc_node_range(size, MODULE_ALIGN, module_alloc_base,
+-				module_alloc_base + MODULES_VSIZE,
+-				gfp_mask, PAGE_KERNEL_EXEC, 0,
++				module_alloc_end, gfp_mask, PAGE_KERNEL_EXEC, 0,
+ 				NUMA_NO_NODE, __builtin_return_address(0));
+ 
+ 	if (!p && IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) &&
+diff --git a/arch/mips/Makefile b/arch/mips/Makefile
+index 8f4486c4415b..eceff9b75b22 100644
+--- a/arch/mips/Makefile
++++ b/arch/mips/Makefile
+@@ -17,6 +17,7 @@ archscripts: scripts_basic
+ 	$(Q)$(MAKE) $(build)=arch/mips/boot/tools relocs
+ 
+ KBUILD_DEFCONFIG := 32r2el_defconfig
++KBUILD_DTBS      := dtbs
+ 
+ #
+ # Select the object file format to substitute into the linker script.
+@@ -384,7 +385,7 @@ quiet_cmd_64 = OBJCOPY $@
+ vmlinux.64: vmlinux
+ 	$(call cmd,64)
+ 
+-all:	$(all-y)
++all:	$(all-y) $(KBUILD_DTBS)
+ 
+ # boot
+ $(boot-y): $(vmlinux-32) FORCE
+diff --git a/arch/mips/mm/mmap.c b/arch/mips/mm/mmap.c
+index 7755a1fad05a..1b705fb2f10c 100644
+--- a/arch/mips/mm/mmap.c
++++ b/arch/mips/mm/mmap.c
+@@ -203,7 +203,7 @@ unsigned long arch_randomize_brk(struct mm_struct *mm)
+ 
+ int __virt_addr_valid(const volatile void *kaddr)
+ {
+-	unsigned long vaddr = (unsigned long)vaddr;
++	unsigned long vaddr = (unsigned long)kaddr;
+ 
+ 	if ((vaddr < PAGE_OFFSET) || (vaddr >= MAP_BASE))
+ 		return 0;
+diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
+index 65b6e85447b1..144ceb0fba88 100644
+--- a/arch/mips/mm/tlbex.c
++++ b/arch/mips/mm/tlbex.c
+@@ -391,6 +391,7 @@ static struct work_registers build_get_work_registers(u32 **p)
+ static void build_restore_work_registers(u32 **p)
+ {
+ 	if (scratch_reg >= 0) {
++		uasm_i_ehb(p);
+ 		UASM_i_MFC0(p, 1, c0_kscratch(), scratch_reg);
+ 		return;
+ 	}
+@@ -668,10 +669,12 @@ static void build_restore_pagemask(u32 **p, struct uasm_reloc **r,
+ 			uasm_i_mtc0(p, 0, C0_PAGEMASK);
+ 			uasm_il_b(p, r, lid);
+ 		}
+-		if (scratch_reg >= 0)
++		if (scratch_reg >= 0) {
++			uasm_i_ehb(p);
+ 			UASM_i_MFC0(p, 1, c0_kscratch(), scratch_reg);
+-		else
++		} else {
+ 			UASM_i_LW(p, 1, scratchpad_offset(0), 0);
++		}
+ 	} else {
+ 		/* Reset default page size */
+ 		if (PM_DEFAULT_MASK >> 16) {
+@@ -938,10 +941,12 @@ build_get_pgd_vmalloc64(u32 **p, struct uasm_label **l, struct uasm_reloc **r,
+ 		uasm_i_jr(p, ptr);
+ 
+ 		if (mode == refill_scratch) {
+-			if (scratch_reg >= 0)
++			if (scratch_reg >= 0) {
++				uasm_i_ehb(p);
+ 				UASM_i_MFC0(p, 1, c0_kscratch(), scratch_reg);
+-			else
++			} else {
+ 				UASM_i_LW(p, 1, scratchpad_offset(0), 0);
++			}
+ 		} else {
+ 			uasm_i_nop(p);
+ 		}
+@@ -1258,6 +1263,7 @@ build_fast_tlb_refill_handler (u32 **p, struct uasm_label **l,
+ 	UASM_i_MTC0(p, odd, C0_ENTRYLO1); /* load it */
+ 
+ 	if (c0_scratch_reg >= 0) {
++		uasm_i_ehb(p);
+ 		UASM_i_MFC0(p, scratch, c0_kscratch(), c0_scratch_reg);
+ 		build_tlb_write_entry(p, l, r, tlb_random);
+ 		uasm_l_leave(l, *p);
+@@ -1603,15 +1609,17 @@ static void build_setup_pgd(void)
+ 		uasm_i_dinsm(&p, a0, 0, 29, 64 - 29);
+ 		uasm_l_tlbl_goaround1(&l, p);
+ 		UASM_i_SLL(&p, a0, a0, 11);
+-		uasm_i_jr(&p, 31);
+ 		UASM_i_MTC0(&p, a0, C0_CONTEXT);
++		uasm_i_jr(&p, 31);
++		uasm_i_ehb(&p);
+ 	} else {
+ 		/* PGD in c0_KScratch */
+-		uasm_i_jr(&p, 31);
+ 		if (cpu_has_ldpte)
+ 			UASM_i_MTC0(&p, a0, C0_PWBASE);
+ 		else
+ 			UASM_i_MTC0(&p, a0, c0_kscratch(), pgd_reg);
++		uasm_i_jr(&p, 31);
++		uasm_i_ehb(&p);
+ 	}
+ #else
+ #ifdef CONFIG_SMP
+@@ -1625,13 +1633,16 @@ static void build_setup_pgd(void)
+ 	UASM_i_LA_mostly(&p, a2, pgdc);
+ 	UASM_i_SW(&p, a0, uasm_rel_lo(pgdc), a2);
+ #endif /* SMP */
+-	uasm_i_jr(&p, 31);
+ 
+ 	/* if pgd_reg is allocated, save PGD also to scratch register */
+-	if (pgd_reg != -1)
++	if (pgd_reg != -1) {
+ 		UASM_i_MTC0(&p, a0, c0_kscratch(), pgd_reg);
+-	else
++		uasm_i_jr(&p, 31);
++		uasm_i_ehb(&p);
++	} else {
++		uasm_i_jr(&p, 31);
+ 		uasm_i_nop(&p);
++	}
+ #endif
+ 	if (p >= (u32 *)tlbmiss_handler_setup_pgd_end)
+ 		panic("tlbmiss_handler_setup_pgd space exceeded");
+diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
+index 394bec31cb97..9f0195d5fa16 100644
+--- a/arch/s390/include/asm/pgtable.h
++++ b/arch/s390/include/asm/pgtable.h
+@@ -238,7 +238,7 @@ static inline int is_module_addr(void *addr)
+ #define _REGION_ENTRY_NOEXEC	0x100	/* region no-execute bit	    */
+ #define _REGION_ENTRY_OFFSET	0xc0	/* region table offset		    */
+ #define _REGION_ENTRY_INVALID	0x20	/* invalid region table entry	    */
+-#define _REGION_ENTRY_TYPE_MASK	0x0c	/* region/segment table type mask   */
++#define _REGION_ENTRY_TYPE_MASK	0x0c	/* region table type mask	    */
+ #define _REGION_ENTRY_TYPE_R1	0x0c	/* region first table type	    */
+ #define _REGION_ENTRY_TYPE_R2	0x08	/* region second table type	    */
+ #define _REGION_ENTRY_TYPE_R3	0x04	/* region third table type	    */
+@@ -277,6 +277,7 @@ static inline int is_module_addr(void *addr)
+ #define _SEGMENT_ENTRY_PROTECT	0x200	/* segment protection bit	    */
+ #define _SEGMENT_ENTRY_NOEXEC	0x100	/* segment no-execute bit	    */
+ #define _SEGMENT_ENTRY_INVALID	0x20	/* invalid segment table entry	    */
++#define _SEGMENT_ENTRY_TYPE_MASK 0x0c	/* segment table type mask	    */
+ 
+ #define _SEGMENT_ENTRY		(0)
+ #define _SEGMENT_ENTRY_EMPTY	(_SEGMENT_ENTRY_INVALID)
+@@ -614,15 +615,9 @@ static inline int pgd_none(pgd_t pgd)
+ 
+ static inline int pgd_bad(pgd_t pgd)
+ {
+-	/*
+-	 * With dynamic page table levels the pgd can be a region table
+-	 * entry or a segment table entry. Check for the bit that are
+-	 * invalid for either table entry.
+-	 */
+-	unsigned long mask =
+-		~_SEGMENT_ENTRY_ORIGIN & ~_REGION_ENTRY_INVALID &
+-		~_REGION_ENTRY_TYPE_MASK & ~_REGION_ENTRY_LENGTH;
+-	return (pgd_val(pgd) & mask) != 0;
++	if ((pgd_val(pgd) & _REGION_ENTRY_TYPE_MASK) < _REGION_ENTRY_TYPE_R1)
++		return 0;
++	return (pgd_val(pgd) & ~_REGION_ENTRY_BITS) != 0;
+ }
+ 
+ static inline unsigned long pgd_pfn(pgd_t pgd)
+@@ -703,6 +698,8 @@ static inline int pmd_large(pmd_t pmd)
+ 
+ static inline int pmd_bad(pmd_t pmd)
+ {
++	if ((pmd_val(pmd) & _SEGMENT_ENTRY_TYPE_MASK) > 0)
++		return 1;
+ 	if (pmd_large(pmd))
+ 		return (pmd_val(pmd) & ~_SEGMENT_ENTRY_BITS_LARGE) != 0;
+ 	return (pmd_val(pmd) & ~_SEGMENT_ENTRY_BITS) != 0;
+@@ -710,8 +707,12 @@ static inline int pmd_bad(pmd_t pmd)
+ 
+ static inline int pud_bad(pud_t pud)
+ {
+-	if ((pud_val(pud) & _REGION_ENTRY_TYPE_MASK) < _REGION_ENTRY_TYPE_R3)
+-		return pmd_bad(__pmd(pud_val(pud)));
++	unsigned long type = pud_val(pud) & _REGION_ENTRY_TYPE_MASK;
++
++	if (type > _REGION_ENTRY_TYPE_R3)
++		return 1;
++	if (type < _REGION_ENTRY_TYPE_R3)
++		return 0;
+ 	if (pud_large(pud))
+ 		return (pud_val(pud) & ~_REGION_ENTRY_BITS_LARGE) != 0;
+ 	return (pud_val(pud) & ~_REGION_ENTRY_BITS) != 0;
+@@ -719,8 +720,12 @@ static inline int pud_bad(pud_t pud)
+ 
+ static inline int p4d_bad(p4d_t p4d)
+ {
+-	if ((p4d_val(p4d) & _REGION_ENTRY_TYPE_MASK) < _REGION_ENTRY_TYPE_R2)
+-		return pud_bad(__pud(p4d_val(p4d)));
++	unsigned long type = p4d_val(p4d) & _REGION_ENTRY_TYPE_MASK;
++
++	if (type > _REGION_ENTRY_TYPE_R2)
++		return 1;
++	if (type < _REGION_ENTRY_TYPE_R2)
++		return 0;
+ 	return (p4d_val(p4d) & ~_REGION_ENTRY_BITS) != 0;
+ }
+ 
+diff --git a/arch/x86/include/asm/intel-family.h b/arch/x86/include/asm/intel-family.h
+index 9f15384c504a..310118805f57 100644
+--- a/arch/x86/include/asm/intel-family.h
++++ b/arch/x86/include/asm/intel-family.h
+@@ -52,6 +52,9 @@
+ 
+ #define INTEL_FAM6_CANNONLAKE_MOBILE	0x66
+ 
++#define INTEL_FAM6_ICELAKE_X		0x6A
++#define INTEL_FAM6_ICELAKE_XEON_D	0x6C
++#define INTEL_FAM6_ICELAKE_DESKTOP	0x7D
+ #define INTEL_FAM6_ICELAKE_MOBILE	0x7E
+ 
+ /* "Small Core" Processors (Atom) */
+diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
+index 6e0c0ed8e4bf..ba3656405fcc 100644
+--- a/arch/x86/kernel/ftrace.c
++++ b/arch/x86/kernel/ftrace.c
+@@ -22,6 +22,7 @@
+ #include <linux/init.h>
+ #include <linux/list.h>
+ #include <linux/module.h>
++#include <linux/memory.h>
+ 
+ #include <trace/syscall.h>
+ 
+@@ -35,6 +36,7 @@
+ 
+ int ftrace_arch_code_modify_prepare(void)
+ {
++	mutex_lock(&text_mutex);
+ 	set_kernel_text_rw();
+ 	set_all_modules_text_rw();
+ 	return 0;
+@@ -44,6 +46,7 @@ int ftrace_arch_code_modify_post_process(void)
+ {
+ 	set_all_modules_text_ro();
+ 	set_kernel_text_ro();
++	mutex_unlock(&text_mutex);
+ 	return 0;
+ }
+ 
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index ea188545a15c..c313dbaa8792 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -2331,7 +2331,7 @@ int kvm_apic_has_interrupt(struct kvm_vcpu *vcpu)
+ 	struct kvm_lapic *apic = vcpu->arch.apic;
+ 	u32 ppr;
+ 
+-	if (!apic_enabled(apic))
++	if (!kvm_apic_hw_enabled(apic))
+ 		return -1;
+ 
+ 	__apic_update_ppr(apic, &ppr);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index b07868eb1656..37028ea85d4c 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1547,7 +1547,7 @@ static int set_tsc_khz(struct kvm_vcpu *vcpu, u32 user_tsc_khz, bool scale)
+ 			vcpu->arch.tsc_always_catchup = 1;
+ 			return 0;
+ 		} else {
+-			WARN(1, "user requested TSC rate below hardware speed\n");
++			pr_warn_ratelimited("user requested TSC rate below hardware speed\n");
+ 			return -1;
+ 		}
+ 	}
+@@ -1557,8 +1557,8 @@ static int set_tsc_khz(struct kvm_vcpu *vcpu, u32 user_tsc_khz, bool scale)
+ 				user_tsc_khz, tsc_khz);
+ 
+ 	if (ratio == 0 || ratio >= kvm_max_tsc_scaling_ratio) {
+-		WARN_ONCE(1, "Invalid TSC scaling ratio - virtual-tsc-khz=%u\n",
+-			  user_tsc_khz);
++		pr_warn_ratelimited("Invalid TSC scaling ratio - virtual-tsc-khz=%u\n",
++			            user_tsc_khz);
+ 		return -1;
+ 	}
+ 
+diff --git a/crypto/cryptd.c b/crypto/cryptd.c
+index 5640e5db7bdb..de1dc6fe4d4c 100644
+--- a/crypto/cryptd.c
++++ b/crypto/cryptd.c
+@@ -586,6 +586,7 @@ static void cryptd_skcipher_free(struct skcipher_instance *inst)
+ 	struct skcipherd_instance_ctx *ctx = skcipher_instance_ctx(inst);
+ 
+ 	crypto_drop_skcipher(&ctx->spawn);
++	kfree(inst);
+ }
+ 
+ static int cryptd_create_skcipher(struct crypto_template *tmpl,
+diff --git a/crypto/crypto_user_base.c b/crypto/crypto_user_base.c
+index f25d3f32c9c2..f93b691f8045 100644
+--- a/crypto/crypto_user_base.c
++++ b/crypto/crypto_user_base.c
+@@ -56,6 +56,9 @@ struct crypto_alg *crypto_alg_match(struct crypto_user_alg *p, int exact)
+ 	list_for_each_entry(q, &crypto_alg_list, cra_list) {
+ 		int match = 0;
+ 
++		if (crypto_is_larval(q))
++			continue;
++
+ 		if ((q->cra_flags ^ p->cru_type) & p->cru_mask)
+ 			continue;
+ 
+diff --git a/drivers/dma/dma-jz4780.c b/drivers/dma/dma-jz4780.c
+index f49534019d37..503d9f13ea97 100644
+--- a/drivers/dma/dma-jz4780.c
++++ b/drivers/dma/dma-jz4780.c
+@@ -722,12 +722,13 @@ static irqreturn_t jz4780_dma_irq_handler(int irq, void *data)
+ {
+ 	struct jz4780_dma_dev *jzdma = data;
+ 	unsigned int nb_channels = jzdma->soc_data->nb_channels;
+-	uint32_t pending, dmac;
++	unsigned long pending;
++	uint32_t dmac;
+ 	int i;
+ 
+ 	pending = jz4780_dma_ctrl_readl(jzdma, JZ_DMA_REG_DIRQP);
+ 
+-	for_each_set_bit(i, (unsigned long *)&pending, nb_channels) {
++	for_each_set_bit(i, &pending, nb_channels) {
+ 		if (jz4780_dma_chan_irq(jzdma, &jzdma->chan[i]))
+ 			pending &= ~BIT(i);
+ 	}
+diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
+index 99d9f431ae2c..248c440c10f2 100644
+--- a/drivers/dma/imx-sdma.c
++++ b/drivers/dma/imx-sdma.c
+@@ -703,7 +703,7 @@ static int sdma_load_script(struct sdma_engine *sdma, void *buf, int size,
+ 	spin_lock_irqsave(&sdma->channel_0_lock, flags);
+ 
+ 	bd0->mode.command = C0_SETPM;
+-	bd0->mode.status = BD_DONE | BD_INTR | BD_WRAP | BD_EXTD;
++	bd0->mode.status = BD_DONE | BD_WRAP | BD_EXTD;
+ 	bd0->mode.count = size / 2;
+ 	bd0->buffer_addr = buf_phys;
+ 	bd0->ext_buffer_addr = address;
+@@ -1025,7 +1025,7 @@ static int sdma_load_context(struct sdma_channel *sdmac)
+ 	context->gReg[7] = sdmac->watermark_level;
+ 
+ 	bd0->mode.command = C0_SETDM;
+-	bd0->mode.status = BD_DONE | BD_INTR | BD_WRAP | BD_EXTD;
++	bd0->mode.status = BD_DONE | BD_WRAP | BD_EXTD;
+ 	bd0->mode.count = sizeof(*context) / 4;
+ 	bd0->buffer_addr = sdma->context_phys;
+ 	bd0->ext_buffer_addr = 2048 + (sizeof(*context) / 4) * channel;
+diff --git a/drivers/dma/qcom/bam_dma.c b/drivers/dma/qcom/bam_dma.c
+index cb860cb53c27..d30f8bd434d5 100644
+--- a/drivers/dma/qcom/bam_dma.c
++++ b/drivers/dma/qcom/bam_dma.c
+@@ -808,6 +808,9 @@ static u32 process_channel_irqs(struct bam_device *bdev)
+ 		/* Number of bytes available to read */
+ 		avail = CIRC_CNT(offset, bchan->head, MAX_DESCRIPTORS + 1);
+ 
++		if (offset < bchan->head)
++			avail--;
++
+ 		list_for_each_entry_safe(async_desc, tmp,
+ 					 &bchan->desc_list, desc_node) {
+ 			/* Not enough data to read */
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index 7e76830b3368..b6f10e56dfa0 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -306,7 +306,8 @@ static const struct regmap_config pca953x_i2c_regmap = {
+ 	.volatile_reg = pca953x_volatile_register,
+ 
+ 	.cache_type = REGCACHE_RBTREE,
+-	.max_register = 0x7f,
++	/* REVISIT: should be 0x7f but some 24 bit chips use REG_ADDR_AI */
++	.max_register = 0xff,
+ };
+ 
+ static u8 pca953x_recalc_addr(struct pca953x_chip *chip, int reg, int off,
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index a11db2b1a63f..1c72903db2ba 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -1899,25 +1899,6 @@ static void gfx_v9_0_constants_init(struct amdgpu_device *adev)
+ 	mutex_unlock(&adev->srbm_mutex);
+ 
+ 	gfx_v9_0_init_compute_vmid(adev);
+-
+-	mutex_lock(&adev->grbm_idx_mutex);
+-	/*
+-	 * making sure that the following register writes will be broadcasted
+-	 * to all the shaders
+-	 */
+-	gfx_v9_0_select_se_sh(adev, 0xffffffff, 0xffffffff, 0xffffffff);
+-
+-	WREG32_SOC15(GC, 0, mmPA_SC_FIFO_SIZE,
+-		   (adev->gfx.config.sc_prim_fifo_size_frontend <<
+-			PA_SC_FIFO_SIZE__SC_FRONTEND_PRIM_FIFO_SIZE__SHIFT) |
+-		   (adev->gfx.config.sc_prim_fifo_size_backend <<
+-			PA_SC_FIFO_SIZE__SC_BACKEND_PRIM_FIFO_SIZE__SHIFT) |
+-		   (adev->gfx.config.sc_hiz_tile_fifo_size <<
+-			PA_SC_FIFO_SIZE__SC_HIZ_TILE_FIFO_SIZE__SHIFT) |
+-		   (adev->gfx.config.sc_earlyz_tile_fifo_size <<
+-			PA_SC_FIFO_SIZE__SC_EARLYZ_TILE_FIFO_SIZE__SHIFT));
+-	mutex_unlock(&adev->grbm_idx_mutex);
+-
+ }
+ 
+ static void gfx_v9_0_wait_for_rlc_serdes(struct amdgpu_device *adev)
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
+index 6cd6497c6fc2..0e1b2d930816 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
+@@ -325,7 +325,7 @@ int hwmgr_resume(struct pp_hwmgr *hwmgr)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = psm_adjust_power_state_dynamic(hwmgr, true, NULL);
++	ret = psm_adjust_power_state_dynamic(hwmgr, false, NULL);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/process_pptables_v1_0.c b/drivers/gpu/drm/amd/powerplay/hwmgr/process_pptables_v1_0.c
+index ae64ff7153d6..1cd5a8b5cdc1 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/process_pptables_v1_0.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/process_pptables_v1_0.c
+@@ -916,8 +916,10 @@ static int init_thermal_controller(
+ 			PHM_PlatformCaps_ThermalController
+ 		  );
+ 
+-	if (0 == powerplay_table->usFanTableOffset)
++	if (0 == powerplay_table->usFanTableOffset) {
++		hwmgr->thermal_controller.use_hw_fan_control = 1;
+ 		return 0;
++	}
+ 
+ 	fan_table = (const PPTable_Generic_SubTable_Header *)
+ 		(((unsigned long)powerplay_table) +
+diff --git a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
+index bac3d85e3b82..7d90583246f5 100644
+--- a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
++++ b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
+@@ -694,6 +694,7 @@ struct pp_thermal_controller_info {
+ 	uint8_t ucType;
+ 	uint8_t ucI2cLine;
+ 	uint8_t ucI2cAddress;
++	uint8_t use_hw_fan_control;
+ 	struct pp_fan_info fanInfo;
+ 	struct pp_advance_fan_control_parameters advanceFanControlParameters;
+ };
+diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smumgr.c
+index 2d4cfe14f72e..29e641c6a5db 100644
+--- a/drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smumgr.c
++++ b/drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smumgr.c
+@@ -2092,6 +2092,10 @@ static int polaris10_thermal_setup_fan_table(struct pp_hwmgr *hwmgr)
+ 		return 0;
+ 	}
+ 
++	/* use hardware fan control */
++	if (hwmgr->thermal_controller.use_hw_fan_control)
++		return 0;
++
+ 	tmp64 = hwmgr->thermal_controller.advanceFanControlParameters.
+ 			usPWMMin * duty100;
+ 	do_div(tmp64, 10000);
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index 52e445bb1aa5..dd982563304d 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -42,6 +42,14 @@ static const struct drm_dmi_panel_orientation_data asus_t100ha = {
+ 	.orientation = DRM_MODE_PANEL_ORIENTATION_LEFT_UP,
+ };
+ 
++static const struct drm_dmi_panel_orientation_data gpd_micropc = {
++	.width = 720,
++	.height = 1280,
++	.bios_dates = (const char * const []){ "04/26/2019",
++		NULL },
++	.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
++};
++
+ static const struct drm_dmi_panel_orientation_data gpd_pocket = {
+ 	.width = 1200,
+ 	.height = 1920,
+@@ -50,6 +58,14 @@ static const struct drm_dmi_panel_orientation_data gpd_pocket = {
+ 	.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
+ };
+ 
++static const struct drm_dmi_panel_orientation_data gpd_pocket2 = {
++	.width = 1200,
++	.height = 1920,
++	.bios_dates = (const char * const []){ "06/28/2018", "08/28/2018",
++		"12/07/2018", NULL },
++	.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
++};
++
+ static const struct drm_dmi_panel_orientation_data gpd_win = {
+ 	.width = 720,
+ 	.height = 1280,
+@@ -93,6 +109,14 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T100HAN"),
+ 		},
+ 		.driver_data = (void *)&asus_t100ha,
++	}, {	/* GPD MicroPC (generic strings, also match on bios date) */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Default string"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Default string"),
++		  DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "Default string"),
++		  DMI_EXACT_MATCH(DMI_BOARD_NAME, "Default string"),
++		},
++		.driver_data = (void *)&gpd_micropc,
+ 	}, {	/*
+ 		 * GPD Pocket, note that the the DMI data is less generic then
+ 		 * it seems, devices with a board-vendor of "AMI Corporation"
+@@ -106,6 +130,14 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Default string"),
+ 		},
+ 		.driver_data = (void *)&gpd_pocket,
++	}, {	/* GPD Pocket 2 (generic strings, also match on bios date) */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Default string"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Default string"),
++		  DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "Default string"),
++		  DMI_EXACT_MATCH(DMI_BOARD_NAME, "Default string"),
++		},
++		.driver_data = (void *)&gpd_pocket2,
+ 	}, {	/* GPD Win (same note on DMI match as GPD Pocket) */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+index 6904535475de..4cf44575a27b 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+@@ -762,7 +762,7 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
+ 	if (IS_ERR(gpu->cmdbuf_suballoc)) {
+ 		dev_err(gpu->dev, "Failed to create cmdbuf suballocator\n");
+ 		ret = PTR_ERR(gpu->cmdbuf_suballoc);
+-		goto fail;
++		goto destroy_iommu;
+ 	}
+ 
+ 	/* Create buffer: */
+@@ -770,7 +770,7 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
+ 				  PAGE_SIZE);
+ 	if (ret) {
+ 		dev_err(gpu->dev, "could not create command buffer\n");
+-		goto destroy_iommu;
++		goto destroy_suballoc;
+ 	}
+ 
+ 	if (gpu->mmu->version == ETNAVIV_IOMMU_V1 &&
+@@ -802,6 +802,9 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
+ free_buffer:
+ 	etnaviv_cmdbuf_free(&gpu->buffer);
+ 	gpu->buffer.suballoc = NULL;
++destroy_suballoc:
++	etnaviv_cmdbuf_suballoc_destroy(gpu->cmdbuf_suballoc);
++	gpu->cmdbuf_suballoc = NULL;
+ destroy_iommu:
+ 	etnaviv_iommu_destroy(gpu->mmu);
+ 	gpu->mmu = NULL;
+diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
+index 7f841dba87b3..bb5042919ff9 100644
+--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
++++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
+@@ -1957,12 +1957,12 @@ static int ring_request_alloc(struct i915_request *request)
+ 	 */
+ 	request->reserved_space += LEGACY_REQUEST_SIZE;
+ 
+-	ret = switch_context(request);
++	/* Unconditionally invalidate GPU caches and TLBs. */
++	ret = request->engine->emit_flush(request, EMIT_INVALIDATE);
+ 	if (ret)
+ 		return ret;
+ 
+-	/* Unconditionally invalidate GPU caches and TLBs. */
+-	ret = request->engine->emit_flush(request, EMIT_INVALIDATE);
++	ret = switch_context(request);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/gpu/drm/imx/ipuv3-crtc.c b/drivers/gpu/drm/imx/ipuv3-crtc.c
+index 54011df8c2e8..a42288b8c7a4 100644
+--- a/drivers/gpu/drm/imx/ipuv3-crtc.c
++++ b/drivers/gpu/drm/imx/ipuv3-crtc.c
+@@ -91,14 +91,14 @@ static void ipu_crtc_atomic_disable(struct drm_crtc *crtc,
+ 	ipu_dc_disable(ipu);
+ 	ipu_prg_disable(ipu);
+ 
++	drm_crtc_vblank_off(crtc);
++
+ 	spin_lock_irq(&crtc->dev->event_lock);
+-	if (crtc->state->event) {
++	if (crtc->state->event && !crtc->state->active) {
+ 		drm_crtc_send_vblank_event(crtc, crtc->state->event);
+ 		crtc->state->event = NULL;
+ 	}
+ 	spin_unlock_irq(&crtc->dev->event_lock);
+-
+-	drm_crtc_vblank_off(crtc);
+ }
+ 
+ static void imx_drm_crtc_reset(struct drm_crtc *crtc)
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+index 57ce4708ef1b..bbfe3a464aea 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+@@ -311,6 +311,7 @@ err_config_cleanup:
+ static void mtk_drm_kms_deinit(struct drm_device *drm)
+ {
+ 	drm_kms_helper_poll_fini(drm);
++	drm_atomic_helper_shutdown(drm);
+ 
+ 	component_unbind_all(drm->dev, drm);
+ 	drm_mode_config_cleanup(drm);
+@@ -397,7 +398,9 @@ static void mtk_drm_unbind(struct device *dev)
+ 	struct mtk_drm_private *private = dev_get_drvdata(dev);
+ 
+ 	drm_dev_unregister(private->drm);
++	mtk_drm_kms_deinit(private->drm);
+ 	drm_dev_put(private->drm);
++	private->num_pipes = 0;
+ 	private->drm = NULL;
+ }
+ 
+@@ -568,13 +571,8 @@ err_node:
+ static int mtk_drm_remove(struct platform_device *pdev)
+ {
+ 	struct mtk_drm_private *private = platform_get_drvdata(pdev);
+-	struct drm_device *drm = private->drm;
+ 	int i;
+ 
+-	drm_dev_unregister(drm);
+-	mtk_drm_kms_deinit(drm);
+-	drm_dev_put(drm);
+-
+ 	component_master_del(&pdev->dev, &mtk_drm_ops);
+ 	pm_runtime_disable(&pdev->dev);
+ 	of_node_put(private->mutex_node);
+diff --git a/drivers/gpu/drm/mediatek/mtk_dsi.c b/drivers/gpu/drm/mediatek/mtk_dsi.c
+index b00eb2d2e086..179f2b080342 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dsi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dsi.c
+@@ -630,6 +630,15 @@ static void mtk_dsi_poweroff(struct mtk_dsi *dsi)
+ 	if (--dsi->refcount != 0)
+ 		return;
+ 
++	/*
++	 * mtk_dsi_stop() and mtk_dsi_start() are asymmetric, since
++	 * mtk_dsi_stop() should be called after mtk_drm_crtc_atomic_disable(),
++	 * which needs irq for vblank, and mtk_dsi_stop() will disable irq.
++	 * mtk_dsi_start() needs to be called in mtk_output_dsi_enable(),
++	 * after dsi is fully set.
++	 */
++	mtk_dsi_stop(dsi);
++
+ 	if (!mtk_dsi_switch_to_cmd_mode(dsi, VM_DONE_INT_FLAG, 500)) {
+ 		if (dsi->panel) {
+ 			if (drm_panel_unprepare(dsi->panel)) {
+@@ -696,7 +705,6 @@ static void mtk_output_dsi_disable(struct mtk_dsi *dsi)
+ 		}
+ 	}
+ 
+-	mtk_dsi_stop(dsi);
+ 	mtk_dsi_poweroff(dsi);
+ 
+ 	dsi->enabled = false;
+@@ -844,6 +852,8 @@ static void mtk_dsi_destroy_conn_enc(struct mtk_dsi *dsi)
+ 	/* Skip connector cleanup if creation was delegated to the bridge */
+ 	if (dsi->conn.dev)
+ 		drm_connector_cleanup(&dsi->conn);
++	if (dsi->panel)
++		drm_panel_detach(dsi->panel);
+ }
+ 
+ static void mtk_dsi_ddp_start(struct mtk_ddp_comp *comp)
+diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
+index 6bc2008b0d0d..3ef24f89ef93 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
++++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
+@@ -620,11 +620,11 @@ static void virtio_gpu_cmd_get_edid_cb(struct virtio_gpu_device *vgdev,
+ 	output = vgdev->outputs + scanout;
+ 
+ 	new_edid = drm_do_get_edid(&output->conn, virtio_get_edid_block, resp);
++	drm_connector_update_edid_property(&output->conn, new_edid);
+ 
+ 	spin_lock(&vgdev->display_info_lock);
+ 	old_edid = output->edid;
+ 	output->edid = new_edid;
+-	drm_connector_update_edid_property(&output->conn, output->edid);
+ 	spin_unlock(&vgdev->display_info_lock);
+ 
+ 	kfree(old_edid);
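
The virtio-gpu hunk reorders the work so the spinlock only guards the pointer swap. A common way to express that pattern in plain C, using a pthread mutex as a stand-in for the kernel spinlock (names are illustrative):

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static char *current_edid;	/* shared state, protected by 'lock' */

/* Swap in the new buffer under the lock; free the old one outside it,
 * keeping the critical section minimal (no allocator work inside). */
static void publish_edid(char *new_edid)
{
	char *old;

	pthread_mutex_lock(&lock);
	old = current_edid;
	current_edid = new_edid;
	pthread_mutex_unlock(&lock);

	free(old);
}

int main(void)
{
	publish_edid(strdup("edid-v1"));
	publish_edid(strdup("edid-v2"));	/* frees "edid-v1" unlocked */
	free(current_edid);
	return 0;
}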
+diff --git a/drivers/hid/hid-a4tech.c b/drivers/hid/hid-a4tech.c
+index 9428ea7cdf8a..c3a6ce3613fe 100644
+--- a/drivers/hid/hid-a4tech.c
++++ b/drivers/hid/hid-a4tech.c
+@@ -38,8 +38,10 @@ static int a4_input_mapped(struct hid_device *hdev, struct hid_input *hi,
+ {
+ 	struct a4tech_sc *a4 = hid_get_drvdata(hdev);
+ 
+-	if (usage->type == EV_REL && usage->code == REL_WHEEL)
++	if (usage->type == EV_REL && usage->code == REL_WHEEL_HI_RES) {
+ 		set_bit(REL_HWHEEL, *bit);
++		set_bit(REL_HWHEEL_HI_RES, *bit);
++	}
+ 
+ 	if ((a4->quirks & A4_2WHEEL_MOUSE_HACK_7) && usage->hid == 0x00090007)
+ 		return -1;
+@@ -60,7 +62,7 @@ static int a4_event(struct hid_device *hdev, struct hid_field *field,
+ 	input = field->hidinput->input;
+ 
+ 	if (a4->quirks & A4_2WHEEL_MOUSE_HACK_B8) {
+-		if (usage->type == EV_REL && usage->code == REL_WHEEL) {
++		if (usage->type == EV_REL && usage->code == REL_WHEEL_HI_RES) {
+ 			a4->delayed_value = value;
+ 			return 1;
+ 		}
+@@ -68,6 +70,8 @@ static int a4_event(struct hid_device *hdev, struct hid_field *field,
+ 		if (usage->hid == 0x000100b8) {
+ 			input_event(input, EV_REL, value ? REL_HWHEEL :
+ 					REL_WHEEL, a4->delayed_value);
++			input_event(input, EV_REL, value ? REL_HWHEEL_HI_RES :
++					REL_WHEEL_HI_RES, a4->delayed_value * 120);
+ 			return 1;
+ 		}
+ 	}
+@@ -77,8 +81,9 @@ static int a4_event(struct hid_device *hdev, struct hid_field *field,
+ 		return 1;
+ 	}
+ 
+-	if (usage->code == REL_WHEEL && a4->hw_wheel) {
++	if (usage->code == REL_WHEEL_HI_RES && a4->hw_wheel) {
+ 		input_event(input, usage->type, REL_HWHEEL, value);
++		input_event(input, usage->type, REL_HWHEEL_HI_RES, value * 120);
+ 		return 1;
+ 	}
+ 
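
Context for the a4tech hunk: the high-resolution wheel axes (REL_*_HI_RES) report motion in 1/120ths of a notch, so a driver that only sees whole notches scales by 120, exactly as the added input_event() calls do. A trivial sketch of the scaling:

#include <stdio.h>

#define HI_RES_UNITS_PER_NOTCH 120	/* per the Linux input convention */

/* Matches the 'a4->delayed_value * 120' in the hunk above. */
static int notches_to_hi_res(int notches)
{
	return notches * HI_RES_UNITS_PER_NOTCH;
}

int main(void)
{
	printf("%d\n", notches_to_hi_res(2));	/* 240 */
	return 0;
}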
+diff --git a/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c b/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
+index fd1b6eea6d2f..75078c83be1a 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
++++ b/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
+@@ -354,6 +354,14 @@ static const struct dmi_system_id i2c_hid_dmi_desc_override_table[] = {
+ 		},
+ 		.driver_data = (void *)&sipodev_desc
+ 	},
++	{
++		.ident = "iBall Aer3",
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "iBall"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Aer3"),
++		},
++		.driver_data = (void *)&sipodev_desc
++	},
+ 	{ }	/* Terminate list */
+ };
+ 
+diff --git a/drivers/i2c/busses/i2c-pca-platform.c b/drivers/i2c/busses/i2c-pca-platform.c
+index de3fe6e828cb..f50afa8e3cba 100644
+--- a/drivers/i2c/busses/i2c-pca-platform.c
++++ b/drivers/i2c/busses/i2c-pca-platform.c
+@@ -21,7 +21,6 @@
+ #include <linux/platform_device.h>
+ #include <linux/i2c-algo-pca.h>
+ #include <linux/platform_data/i2c-pca-platform.h>
+-#include <linux/gpio.h>
+ #include <linux/gpio/consumer.h>
+ #include <linux/io.h>
+ #include <linux/of.h>
+@@ -173,7 +172,7 @@ static int i2c_pca_pf_probe(struct platform_device *pdev)
+ 	i2c->adap.dev.parent = &pdev->dev;
+ 	i2c->adap.dev.of_node = np;
+ 
+-	i2c->gpio = devm_gpiod_get_optional(&pdev->dev, "reset-gpios", GPIOD_OUT_LOW);
++	i2c->gpio = devm_gpiod_get_optional(&pdev->dev, "reset", GPIOD_OUT_LOW);
+ 	if (IS_ERR(i2c->gpio))
+ 		return PTR_ERR(i2c->gpio);
+ 
+diff --git a/drivers/iommu/intel-pasid.c b/drivers/iommu/intel-pasid.c
+index 03b12d2ee213..fdf05c45d516 100644
+--- a/drivers/iommu/intel-pasid.c
++++ b/drivers/iommu/intel-pasid.c
+@@ -387,7 +387,7 @@ static inline void pasid_set_present(struct pasid_entry *pe)
+  */
+ static inline void pasid_set_page_snoop(struct pasid_entry *pe, bool value)
+ {
+-	pasid_set_bits(&pe->val[1], 1 << 23, value);
++	pasid_set_bits(&pe->val[1], 1 << 23, value << 23);
+ }
+ 
+ /*
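
The intel-pasid fix is a pure bit-manipulation bug: the boolean was written into bit 0 of the word instead of being shifted into the bit-23 position selected by the mask. A small self-checking sketch of the corrected read-modify-write (the helper below is an assumption, not the kernel's pasid_set_bits()):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Assumed helper: clear the masked bits, then OR in the new value,
 * which must already be shifted into the mask's position. */
static void set_bits(uint64_t *word, uint64_t mask, uint64_t value)
{
	*word = (*word & ~mask) | (value & mask);
}

int main(void)
{
	uint64_t val = 0;
	bool snoop = true;

	/* Buggy form: set_bits(&val, 1ull << 23, snoop) writes bit 0,
	 * which the mask then discards. The fixed form shifts first: */
	set_bits(&val, 1ull << 23, (uint64_t)snoop << 23);
	assert(val & (1ull << 23));
	return 0;
}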
+diff --git a/drivers/platform/mellanox/mlxreg-hotplug.c b/drivers/platform/mellanox/mlxreg-hotplug.c
+index 687ce6817d0d..f85a1b9d129b 100644
+--- a/drivers/platform/mellanox/mlxreg-hotplug.c
++++ b/drivers/platform/mellanox/mlxreg-hotplug.c
+@@ -694,6 +694,7 @@ static int mlxreg_hotplug_remove(struct platform_device *pdev)
+ 
+ 	/* Clean interrupts setup. */
+ 	mlxreg_hotplug_unset_irq(priv);
++	devm_free_irq(&pdev->dev, priv->irq, priv);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index b6f2ff95c3ed..59f3a37a44d7 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -78,10 +78,12 @@ static bool asus_q500a_i8042_filter(unsigned char data, unsigned char str,
+ 
+ static struct quirk_entry quirk_asus_unknown = {
+ 	.wapf = 0,
++	.wmi_backlight_set_devstate = true,
+ };
+ 
+ static struct quirk_entry quirk_asus_q500a = {
+ 	.i8042_filter = asus_q500a_i8042_filter,
++	.wmi_backlight_set_devstate = true,
+ };
+ 
+ /*
+@@ -92,26 +94,32 @@ static struct quirk_entry quirk_asus_q500a = {
+ static struct quirk_entry quirk_asus_x55u = {
+ 	.wapf = 4,
+ 	.wmi_backlight_power = true,
++	.wmi_backlight_set_devstate = true,
+ 	.no_display_toggle = true,
+ };
+ 
+ static struct quirk_entry quirk_asus_wapf4 = {
+ 	.wapf = 4,
++	.wmi_backlight_set_devstate = true,
+ };
+ 
+ static struct quirk_entry quirk_asus_x200ca = {
+ 	.wapf = 2,
++	.wmi_backlight_set_devstate = true,
+ };
+ 
+ static struct quirk_entry quirk_asus_ux303ub = {
+ 	.wmi_backlight_native = true,
++	.wmi_backlight_set_devstate = true,
+ };
+ 
+ static struct quirk_entry quirk_asus_x550lb = {
++	.wmi_backlight_set_devstate = true,
+ 	.xusb2pr = 0x01D9,
+ };
+ 
+ static struct quirk_entry quirk_asus_forceals = {
++	.wmi_backlight_set_devstate = true,
+ 	.wmi_force_als_set = true,
+ };
+ 
+diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
+index ee1fa93708ec..a66e99500c12 100644
+--- a/drivers/platform/x86/asus-wmi.c
++++ b/drivers/platform/x86/asus-wmi.c
+@@ -2131,7 +2131,7 @@ static int asus_wmi_add(struct platform_device *pdev)
+ 		err = asus_wmi_backlight_init(asus);
+ 		if (err && err != -ENODEV)
+ 			goto fail_backlight;
+-	} else
++	} else if (asus->driver->quirks->wmi_backlight_set_devstate)
+ 		err = asus_wmi_set_devstate(ASUS_WMI_DEVID_BACKLIGHT, 2, NULL);
+ 
+ 	status = wmi_install_notify_handler(asus->driver->event_guid,
+diff --git a/drivers/platform/x86/asus-wmi.h b/drivers/platform/x86/asus-wmi.h
+index 6c1311f4b04d..57a79bddb286 100644
+--- a/drivers/platform/x86/asus-wmi.h
++++ b/drivers/platform/x86/asus-wmi.h
+@@ -44,6 +44,7 @@ struct quirk_entry {
+ 	bool store_backlight_power;
+ 	bool wmi_backlight_power;
+ 	bool wmi_backlight_native;
++	bool wmi_backlight_set_devstate;
+ 	bool wmi_force_als_set;
+ 	int wapf;
+ 	/*
+diff --git a/drivers/platform/x86/intel-vbtn.c b/drivers/platform/x86/intel-vbtn.c
+index 06cd7e818ed5..a0d0cecff55f 100644
+--- a/drivers/platform/x86/intel-vbtn.c
++++ b/drivers/platform/x86/intel-vbtn.c
+@@ -76,12 +76,24 @@ static void notify_handler(acpi_handle handle, u32 event, void *context)
+ 	struct platform_device *device = context;
+ 	struct intel_vbtn_priv *priv = dev_get_drvdata(&device->dev);
+ 	unsigned int val = !(event & 1); /* Even=press, Odd=release */
+-	const struct key_entry *ke_rel;
++	const struct key_entry *ke, *ke_rel;
+ 	bool autorelease;
+ 
+ 	if (priv->wakeup_mode) {
+-		if (sparse_keymap_entry_from_scancode(priv->input_dev, event)) {
++		ke = sparse_keymap_entry_from_scancode(priv->input_dev, event);
++		if (ke) {
+ 			pm_wakeup_hard_event(&device->dev);
++
++			/*
++			 * Switch events like tablet mode will wake the device
++			 * and report the new switch position to the input
++			 * subsystem.
++			 */
++			if (ke->type == KE_SW)
++				sparse_keymap_report_event(priv->input_dev,
++							   event,
++							   val,
++							   0);
+ 			return;
+ 		}
+ 		goto out_unknown;
+diff --git a/drivers/platform/x86/mlx-platform.c b/drivers/platform/x86/mlx-platform.c
+index 48fa7573e29b..0e5f073e51bc 100644
+--- a/drivers/platform/x86/mlx-platform.c
++++ b/drivers/platform/x86/mlx-platform.c
+@@ -1828,7 +1828,7 @@ static int __init mlxplat_init(void)
+ 
+ 	for (i = 0; i < ARRAY_SIZE(mlxplat_mux_data); i++) {
+ 		priv->pdev_mux[i] = platform_device_register_resndata(
+-						&mlxplat_dev->dev,
++						&priv->pdev_i2c->dev,
+ 						"i2c-mux-reg", i, NULL,
+ 						0, &mlxplat_mux_data[i],
+ 						sizeof(mlxplat_mux_data[i]));
+diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
+index f044e7d10d63..2d181e5e65ff 100644
+--- a/drivers/scsi/hpsa.c
++++ b/drivers/scsi/hpsa.c
+@@ -4925,7 +4925,7 @@ static int hpsa_scsi_ioaccel2_queue_command(struct ctlr_info *h,
+ 			curr_sg->reserved[0] = 0;
+ 			curr_sg->reserved[1] = 0;
+ 			curr_sg->reserved[2] = 0;
+-			curr_sg->chain_indicator = 0x80;
++			curr_sg->chain_indicator = IOACCEL2_CHAIN;
+ 
+ 			curr_sg = h->ioaccel2_cmd_sg_list[c->cmdindex];
+ 		}
+@@ -4942,6 +4942,11 @@ static int hpsa_scsi_ioaccel2_queue_command(struct ctlr_info *h,
+ 			curr_sg++;
+ 		}
+ 
++		/*
++		 * Set the last s/g element bit
++		 */
++		(curr_sg - 1)->chain_indicator = IOACCEL2_LAST_SG;
++
+ 		switch (cmd->sc_data_direction) {
+ 		case DMA_TO_DEVICE:
+ 			cp->direction &= ~IOACCEL2_DIRECTION_MASK;
+diff --git a/drivers/scsi/hpsa_cmd.h b/drivers/scsi/hpsa_cmd.h
+index 21a726e2eec6..f6afca4b2319 100644
+--- a/drivers/scsi/hpsa_cmd.h
++++ b/drivers/scsi/hpsa_cmd.h
+@@ -517,6 +517,7 @@ struct ioaccel2_sg_element {
+ 	u8 reserved[3];
+ 	u8 chain_indicator;
+ #define IOACCEL2_CHAIN 0x80
++#define IOACCEL2_LAST_SG 0x40
+ };
+ 
+ /*
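
The hpsa change introduces a second flag value and sets it on the final scatter/gather element, relying on the fact that curr_sg points one past the last entry it filled. A sketch using the two flag values from the hunk above:

#include <stdint.h>
#include <stdio.h>

#define IOACCEL2_CHAIN   0x80	/* values from the hunk above */
#define IOACCEL2_LAST_SG 0x40

struct sg_element {
	uint8_t chain_indicator;
};

int main(void)
{
	struct sg_element sg[4] = { { 0 } };
	struct sg_element *curr_sg = sg;

	for (int i = 0; i < 4; i++)
		curr_sg++;			/* fill entries... */

	/* curr_sg now points one past the last entry, hence -1. */
	(curr_sg - 1)->chain_indicator = IOACCEL2_LAST_SG;
	printf("0x%02x\n", sg[3].chain_indicator);	/* 0x40 */
	return 0;
}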
+diff --git a/drivers/spi/spi-bitbang.c b/drivers/spi/spi-bitbang.c
+index dd9a8c54a693..be95be4fe985 100644
+--- a/drivers/spi/spi-bitbang.c
++++ b/drivers/spi/spi-bitbang.c
+@@ -403,7 +403,7 @@ int spi_bitbang_start(struct spi_bitbang *bitbang)
+ 	if (ret)
+ 		spi_master_put(master);
+ 
+-	return 0;
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(spi_bitbang_start);
+ 
+diff --git a/drivers/target/target_core_iblock.c b/drivers/target/target_core_iblock.c
+index b5ed9c377060..efebacd36101 100644
+--- a/drivers/target/target_core_iblock.c
++++ b/drivers/target/target_core_iblock.c
+@@ -515,7 +515,7 @@ iblock_execute_write_same(struct se_cmd *cmd)
+ 
+ 		/* Always in 512 byte units for Linux/Block */
+ 		block_lba += sg->length >> SECTOR_SHIFT;
+-		sectors -= 1;
++		sectors -= sg->length >> SECTOR_SHIFT;
+ 	}
+ 
+ 	iblock_submit_bios(&list);
+diff --git a/drivers/tty/rocket.c b/drivers/tty/rocket.c
+index b121d8f8f3d7..27aeca30eeae 100644
+--- a/drivers/tty/rocket.c
++++ b/drivers/tty/rocket.c
+@@ -266,7 +266,7 @@ MODULE_PARM_DESC(pc104_3, "set interface types for ISA(PC104) board #3 (e.g. pc1
+ module_param_array(pc104_4, ulong, NULL, 0);
+ MODULE_PARM_DESC(pc104_4, "set interface types for ISA(PC104) board #4 (e.g. pc104_4=232,232,485,485,...");
+ 
+-static int rp_init(void);
++static int __init rp_init(void);
+ static void rp_cleanup_module(void);
+ 
+ module_init(rp_init);
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index a749de7604c6..c99ef9753930 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -833,19 +833,22 @@ static void dwc2_gadget_fill_nonisoc_xfer_ddma_one(struct dwc2_hsotg_ep *hs_ep,
+  * with corresponding information based on transfer data.
+  */
+ static void dwc2_gadget_config_nonisoc_xfer_ddma(struct dwc2_hsotg_ep *hs_ep,
+-						 struct usb_request *ureq,
+-						 unsigned int offset,
++						 dma_addr_t dma_buff,
+ 						 unsigned int len)
+ {
++	struct usb_request *ureq = NULL;
+ 	struct dwc2_dma_desc *desc = hs_ep->desc_list;
+ 	struct scatterlist *sg;
+ 	int i;
+ 	u8 desc_count = 0;
+ 
++	if (hs_ep->req)
++		ureq = &hs_ep->req->req;
++
+ 	/* non-DMA sg buffer */
+-	if (!ureq->num_sgs) {
++	if (!ureq || !ureq->num_sgs) {
+ 		dwc2_gadget_fill_nonisoc_xfer_ddma_one(hs_ep, &desc,
+-			ureq->dma + offset, len, true);
++			dma_buff, len, true);
+ 		return;
+ 	}
+ 
+@@ -1133,7 +1136,7 @@ static void dwc2_hsotg_start_req(struct dwc2_hsotg *hsotg,
+ 			offset = ureq->actual;
+ 
+ 		/* Fill DDMA chain entries */
+-		dwc2_gadget_config_nonisoc_xfer_ddma(hs_ep, ureq, offset,
++		dwc2_gadget_config_nonisoc_xfer_ddma(hs_ep, ureq->dma + offset,
+ 						     length);
+ 
+ 		/* write descriptor chain address to control register */
+@@ -2026,12 +2029,13 @@ static void dwc2_hsotg_program_zlp(struct dwc2_hsotg *hsotg,
+ 		dev_dbg(hsotg->dev, "Receiving zero-length packet on ep%d\n",
+ 			index);
+ 	if (using_desc_dma(hsotg)) {
++		/* No specific buffer needed for ep0 ZLP */
++		dma_addr_t dma = hs_ep->desc_list_dma;
++
+ 		if (!index)
+ 			dwc2_gadget_set_ep0_desc_chain(hsotg, hs_ep);
+ 
+-		/* Not specific buffer needed for ep0 ZLP */
+-		dwc2_gadget_fill_nonisoc_xfer_ddma_one(hs_ep, &hs_ep->desc_list,
+-			hs_ep->desc_list_dma, 0, true);
++		dwc2_gadget_config_nonisoc_xfer_ddma(hs_ep, dma, 0);
+ 	} else {
+ 		dwc2_writel(hsotg, DXEPTSIZ_MC(1) | DXEPTSIZ_PKTCNT(1) |
+ 			    DXEPTSIZ_XFERSIZE(0),
+diff --git a/drivers/usb/gadget/udc/fusb300_udc.c b/drivers/usb/gadget/udc/fusb300_udc.c
+index 263804d154a7..00e3f66836a9 100644
+--- a/drivers/usb/gadget/udc/fusb300_udc.c
++++ b/drivers/usb/gadget/udc/fusb300_udc.c
+@@ -1342,12 +1342,15 @@ static const struct usb_gadget_ops fusb300_gadget_ops = {
+ static int fusb300_remove(struct platform_device *pdev)
+ {
+ 	struct fusb300 *fusb300 = platform_get_drvdata(pdev);
++	int i;
+ 
+ 	usb_del_gadget_udc(&fusb300->gadget);
+ 	iounmap(fusb300->reg);
+ 	free_irq(platform_get_irq(pdev, 0), fusb300);
+ 
+ 	fusb300_free_request(&fusb300->ep[0]->ep, fusb300->ep0_req);
++	for (i = 0; i < FUSB300_MAX_NUM_EP; i++)
++		kfree(fusb300->ep[i]);
+ 	kfree(fusb300);
+ 
+ 	return 0;
+@@ -1491,6 +1494,8 @@ clean_up:
+ 		if (fusb300->ep0_req)
+ 			fusb300_free_request(&fusb300->ep[0]->ep,
+ 				fusb300->ep0_req);
++		for (i = 0; i < FUSB300_MAX_NUM_EP; i++)
++			kfree(fusb300->ep[i]);
+ 		kfree(fusb300);
+ 	}
+ 	if (reg)
+diff --git a/drivers/usb/gadget/udc/lpc32xx_udc.c b/drivers/usb/gadget/udc/lpc32xx_udc.c
+index b0781771704e..eafc2a00c96a 100644
+--- a/drivers/usb/gadget/udc/lpc32xx_udc.c
++++ b/drivers/usb/gadget/udc/lpc32xx_udc.c
+@@ -922,8 +922,7 @@ static struct lpc32xx_usbd_dd_gad *udc_dd_alloc(struct lpc32xx_udc *udc)
+ 	dma_addr_t			dma;
+ 	struct lpc32xx_usbd_dd_gad	*dd;
+ 
+-	dd = (struct lpc32xx_usbd_dd_gad *) dma_pool_alloc(
+-			udc->dd_cache, (GFP_KERNEL | GFP_DMA), &dma);
++	dd = dma_pool_alloc(udc->dd_cache, GFP_ATOMIC | GFP_DMA, &dma);
+ 	if (dd)
+ 		dd->this_dma = dma;
+ 
+diff --git a/fs/Kconfig b/fs/Kconfig
+index 3e6d3101f3ff..db921dc267d3 100644
+--- a/fs/Kconfig
++++ b/fs/Kconfig
+@@ -10,7 +10,6 @@ config DCACHE_WORD_ACCESS
+ 
+ config VALIDATE_FS_PARSER
+ 	bool "Validate filesystem parameter description"
+-	default y
+ 	help
+ 	  Enable this to perform validation of the parameter description for a
+ 	  filesystem when it is registered.
+diff --git a/fs/aio.c b/fs/aio.c
+index 3490d1fa0e16..c1e581dd32f5 100644
+--- a/fs/aio.c
++++ b/fs/aio.c
+@@ -2095,6 +2095,7 @@ SYSCALL_DEFINE6(io_pgetevents,
+ 	struct __aio_sigset	ksig = { NULL, };
+ 	sigset_t		ksigmask, sigsaved;
+ 	struct timespec64	ts;
++	bool interrupted;
+ 	int ret;
+ 
+ 	if (timeout && unlikely(get_timespec64(&ts, timeout)))
+@@ -2108,8 +2109,10 @@ SYSCALL_DEFINE6(io_pgetevents,
+ 		return ret;
+ 
+ 	ret = do_io_getevents(ctx_id, min_nr, nr, events, timeout ? &ts : NULL);
+-	restore_user_sigmask(ksig.sigmask, &sigsaved);
+-	if (signal_pending(current) && !ret)
++
++	interrupted = signal_pending(current);
++	restore_user_sigmask(ksig.sigmask, &sigsaved, interrupted);
++	if (interrupted && !ret)
+ 		ret = -ERESTARTNOHAND;
+ 
+ 	return ret;
+@@ -2128,6 +2131,7 @@ SYSCALL_DEFINE6(io_pgetevents_time32,
+ 	struct __aio_sigset	ksig = { NULL, };
+ 	sigset_t		ksigmask, sigsaved;
+ 	struct timespec64	ts;
++	bool interrupted;
+ 	int ret;
+ 
+ 	if (timeout && unlikely(get_old_timespec32(&ts, timeout)))
+@@ -2142,8 +2146,10 @@ SYSCALL_DEFINE6(io_pgetevents_time32,
+ 		return ret;
+ 
+ 	ret = do_io_getevents(ctx_id, min_nr, nr, events, timeout ? &ts : NULL);
+-	restore_user_sigmask(ksig.sigmask, &sigsaved);
+-	if (signal_pending(current) && !ret)
++
++	interrupted = signal_pending(current);
++	restore_user_sigmask(ksig.sigmask, &sigsaved, interrupted);
++	if (interrupted && !ret)
+ 		ret = -ERESTARTNOHAND;
+ 
+ 	return ret;
+@@ -2193,6 +2199,7 @@ COMPAT_SYSCALL_DEFINE6(io_pgetevents,
+ 	struct __compat_aio_sigset ksig = { NULL, };
+ 	sigset_t ksigmask, sigsaved;
+ 	struct timespec64 t;
++	bool interrupted;
+ 	int ret;
+ 
+ 	if (timeout && get_old_timespec32(&t, timeout))
+@@ -2206,8 +2213,10 @@ COMPAT_SYSCALL_DEFINE6(io_pgetevents,
+ 		return ret;
+ 
+ 	ret = do_io_getevents(ctx_id, min_nr, nr, events, timeout ? &t : NULL);
+-	restore_user_sigmask(ksig.sigmask, &sigsaved);
+-	if (signal_pending(current) && !ret)
++
++	interrupted = signal_pending(current);
++	restore_user_sigmask(ksig.sigmask, &sigsaved, interrupted);
++	if (interrupted && !ret)
+ 		ret = -ERESTARTNOHAND;
+ 
+ 	return ret;
+@@ -2226,6 +2235,7 @@ COMPAT_SYSCALL_DEFINE6(io_pgetevents_time64,
+ 	struct __compat_aio_sigset ksig = { NULL, };
+ 	sigset_t ksigmask, sigsaved;
+ 	struct timespec64 t;
++	bool interrupted;
+ 	int ret;
+ 
+ 	if (timeout && get_timespec64(&t, timeout))
+@@ -2239,8 +2249,10 @@ COMPAT_SYSCALL_DEFINE6(io_pgetevents_time64,
+ 		return ret;
+ 
+ 	ret = do_io_getevents(ctx_id, min_nr, nr, events, timeout ? &t : NULL);
+-	restore_user_sigmask(ksig.sigmask, &sigsaved);
+-	if (signal_pending(current) && !ret)
++
++	interrupted = signal_pending(current);
++	restore_user_sigmask(ksig.sigmask, &sigsaved, interrupted);
++	if (interrupted && !ret)
+ 		ret = -ERESTARTNOHAND;
+ 
+ 	return ret;
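
All four io_pgetevents variants get the same treatment: sample signal_pending() once, hand that snapshot to restore_user_sigmask(), and base the -ERESTARTNOHAND decision on the same snapshot, so a signal landing between two separate checks can no longer make them disagree. A userspace sketch of the control flow, with do-nothing stubs in place of the kernel helpers:

#include <stdbool.h>

/* Do-nothing stand-ins; the real kernel helpers differ. */
static int  do_wait(void)              { return 0; }
static bool signal_pending_now(void)   { return false; }
static void restore_sigmask(bool intr) { (void)intr; }

static int pgetevents_like(void)
{
	int ret = do_wait();

	/* One snapshot feeds both the restore helper and the restart
	 * decision, so they can never see different answers. */
	bool interrupted = signal_pending_now();

	restore_sigmask(interrupted);
	if (interrupted && !ret)
		ret = -1;	/* stand-in for -ERESTARTNOHAND */
	return ret;
}

int main(void)
{
	return pgetevents_like() ? 1 : 0;
}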
+diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
+index ee193c5222b2..a69c3b14f2b1 100644
+--- a/fs/btrfs/dev-replace.c
++++ b/fs/btrfs/dev-replace.c
+@@ -603,17 +603,25 @@ static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info,
+ 	}
+ 	btrfs_wait_ordered_roots(fs_info, U64_MAX, 0, (u64)-1);
+ 
+-	trans = btrfs_start_transaction(root, 0);
+-	if (IS_ERR(trans)) {
+-		mutex_unlock(&dev_replace->lock_finishing_cancel_unmount);
+-		return PTR_ERR(trans);
++	while (1) {
++		trans = btrfs_start_transaction(root, 0);
++		if (IS_ERR(trans)) {
++			mutex_unlock(&dev_replace->lock_finishing_cancel_unmount);
++			return PTR_ERR(trans);
++		}
++		ret = btrfs_commit_transaction(trans);
++		WARN_ON(ret);
++		/* keep away write_all_supers() during the finishing procedure */
++		mutex_lock(&fs_info->fs_devices->device_list_mutex);
++		mutex_lock(&fs_info->chunk_mutex);
++		if (src_device->has_pending_chunks) {
++			mutex_unlock(&root->fs_info->chunk_mutex);
++			mutex_unlock(&root->fs_info->fs_devices->device_list_mutex);
++		} else {
++			break;
++		}
+ 	}
+-	ret = btrfs_commit_transaction(trans);
+-	WARN_ON(ret);
+ 
+-	/* keep away write_all_supers() during the finishing procedure */
+-	mutex_lock(&fs_info->fs_devices->device_list_mutex);
+-	mutex_lock(&fs_info->chunk_mutex);
+ 	down_write(&dev_replace->rwsem);
+ 	dev_replace->replace_state =
+ 		scrub_ret ? BTRFS_IOCTL_DEV_REPLACE_STATE_CANCELED
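
The dev-replace fix turns a single commit into a commit-and-recheck loop: after committing, take the locks and test for pending chunk allocations; if a racing allocation slipped in, drop the locks and commit again. A minimal sketch of that loop shape, with stubs that fake two rounds of pending work (all names here are illustrative):

#include <stdbool.h>

static int  commits;
static bool has_pending_chunks(void) { return commits < 2; }
static int  commit_transaction(void) { commits++; return 0; }
static void lock_all(void)   { /* device_list_mutex + chunk_mutex */ }
static void unlock_all(void) { }

static int finish_replace(void)
{
	for (;;) {
		int ret = commit_transaction();

		if (ret)
			return ret;
		lock_all();
		if (!has_pending_chunks())
			break;		/* leave with the locks held */
		unlock_all();		/* raced: commit once more */
	}
	/* ... finish the replace under the locks ... */
	unlock_all();
	return 0;
}

int main(void)
{
	return finish_replace();
}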
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index db934ceae9c1..62c32779bdea 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -5222,9 +5222,11 @@ static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
+ 	if (ret)
+ 		goto error_del_extent;
+ 
+-	for (i = 0; i < map->num_stripes; i++)
++	for (i = 0; i < map->num_stripes; i++) {
+ 		btrfs_device_set_bytes_used(map->stripes[i].dev,
+ 				map->stripes[i].dev->bytes_used + stripe_size);
++		map->stripes[i].dev->has_pending_chunks = true;
++	}
+ 
+ 	atomic64_sub(stripe_size * map->num_stripes, &info->free_chunk_space);
+ 
+@@ -7716,6 +7718,7 @@ void btrfs_update_commit_device_bytes_used(struct btrfs_transaction *trans)
+ 		for (i = 0; i < map->num_stripes; i++) {
+ 			dev = map->stripes[i].dev;
+ 			dev->commit_bytes_used = dev->bytes_used;
++			dev->has_pending_chunks = false;
+ 		}
+ 	}
+ 	mutex_unlock(&fs_info->chunk_mutex);
+diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
+index 3ad9d58d1b66..fb51ec810cf9 100644
+--- a/fs/btrfs/volumes.h
++++ b/fs/btrfs/volumes.h
+@@ -54,6 +54,11 @@ struct btrfs_device {
+ 
+ 	spinlock_t io_lock ____cacheline_aligned;
+ 	int running_pending;
++	/* When true, this device has a pending chunk allocation in the
++	 * current transaction. Protected by chunk_mutex.
++	 */
++	bool has_pending_chunks;
++
+ 	/* regular prio bios */
+ 	struct btrfs_pending_bios pending_bios;
+ 	/* sync bios */
+diff --git a/fs/dax.c b/fs/dax.c
+index f74386293632..9fd908f3df32 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -728,12 +728,11 @@ static void *dax_insert_entry(struct xa_state *xas,
+ 
+ 	xas_reset(xas);
+ 	xas_lock_irq(xas);
+-	if (dax_entry_size(entry) != dax_entry_size(new_entry)) {
++	if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
++		void *old;
++
+ 		dax_disassociate_entry(entry, mapping, false);
+ 		dax_associate_entry(new_entry, mapping, vmf->vma, vmf->address);
+-	}
+-
+-	if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
+ 		/*
+ 		 * Only swap our new entry into the page cache if the current
+ 		 * entry is a zero page or an empty entry.  If a normal PTE or
+@@ -742,7 +741,7 @@ static void *dax_insert_entry(struct xa_state *xas,
+ 		 * existing entry is a PMD, we will just leave the PMD in the
+ 		 * tree and dirty it if necessary.
+ 		 */
+-		void *old = dax_lock_entry(xas, new_entry);
++		old = dax_lock_entry(xas, new_entry);
+ 		WARN_ON_ONCE(old != xa_mk_value(xa_to_value(entry) |
+ 					DAX_LOCKED));
+ 		entry = new_entry;
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index 4a0e98d87fcc..55c0e1c75ad1 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -2330,7 +2330,7 @@ SYSCALL_DEFINE6(epoll_pwait, int, epfd, struct epoll_event __user *, events,
+ 
+ 	error = do_epoll_wait(epfd, events, maxevents, timeout);
+ 
+-	restore_user_sigmask(sigmask, &sigsaved);
++	restore_user_sigmask(sigmask, &sigsaved, error == -EINTR);
+ 
+ 	return error;
+ }
+@@ -2355,7 +2355,7 @@ COMPAT_SYSCALL_DEFINE6(epoll_pwait, int, epfd,
+ 
+ 	err = do_epoll_wait(epfd, events, maxevents, timeout);
+ 
+-	restore_user_sigmask(sigmask, &sigsaved);
++	restore_user_sigmask(sigmask, &sigsaved, err == -EINTR);
+ 
+ 	return err;
+ }
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index b897695c91c0..7d8e83458278 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -2096,7 +2096,7 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+ 	finish_wait(&ctx->wait, &wait);
+ 
+ 	if (sig)
+-		restore_user_sigmask(sig, &sigsaved);
++		restore_user_sigmask(sig, &sigsaved, ret == -EINTR);
+ 
+ 	return READ_ONCE(ring->r.head) == READ_ONCE(ring->r.tail) ? ret : 0;
+ }
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index f056b1d3fecd..d1a8edd49d53 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -1562,7 +1562,7 @@ static u32 nfsd4_get_drc_mem(struct nfsd4_channel_attrs *ca)
+ 	 * Never use more than a third of the remaining memory,
+ 	 * unless it's the only way to give this client a slot:
+ 	 */
+-	avail = clamp_t(int, avail, slotsize, total_avail/3);
++	avail = clamp_t(unsigned long, avail, slotsize, total_avail/3);
+ 	num = min_t(int, num, avail / slotsize);
+ 	nfsd_drc_mem_used += num * slotsize;
+ 	spin_unlock(&nfsd_drc_lock);
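
The nfsd one-liner is worth a second look because clamp_t() evaluates all three operands in the named type. With int, a total_avail/3 above INT_MAX converts to a negative bound (implementation-defined, but that is what happens on common ABIs) and the clamp returns garbage; unsigned long keeps the arithmetic in range. A standalone demonstration, assuming a 64-bit unsigned long and a simplified clamp macro taken as equivalent for this purpose:

#include <stdio.h>

/* Simplified stand-in for the kernel macro: everything is evaluated
 * in 'type'. */
#define clamp_t(type, v, lo, hi)				\
	((type)(v) < (type)(lo) ? (type)(lo) :			\
	 (type)(v) > (type)(hi) ? (type)(hi) : (type)(v))

int main(void)
{
	unsigned long total_avail = 8UL * 1024 * 1024 * 1024;	/* 8 GiB */
	unsigned long slotsize = 1024;

	/* (int)(total_avail / 3) exceeds INT_MAX and turns negative,
	 * so the "clamped" value is nonsense: */
	printf("int:           %d\n",
	       clamp_t(int, 2048, slotsize, total_avail / 3));
	printf("unsigned long: %lu\n",
	       clamp_t(unsigned long, 2048UL, slotsize, total_avail / 3));
	return 0;
}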
+diff --git a/fs/select.c b/fs/select.c
+index 6cbc9ff56ba0..a4d8f6e8b63c 100644
+--- a/fs/select.c
++++ b/fs/select.c
+@@ -758,10 +758,9 @@ static long do_pselect(int n, fd_set __user *inp, fd_set __user *outp,
+ 		return ret;
+ 
+ 	ret = core_sys_select(n, inp, outp, exp, to);
++	restore_user_sigmask(sigmask, &sigsaved, ret == -ERESTARTNOHAND);
+ 	ret = poll_select_copy_remaining(&end_time, tsp, type, ret);
+ 
+-	restore_user_sigmask(sigmask, &sigsaved);
+-
+ 	return ret;
+ }
+ 
+@@ -1106,8 +1105,7 @@ SYSCALL_DEFINE5(ppoll, struct pollfd __user *, ufds, unsigned int, nfds,
+ 
+ 	ret = do_sys_poll(ufds, nfds, to);
+ 
+-	restore_user_sigmask(sigmask, &sigsaved);
+-
++	restore_user_sigmask(sigmask, &sigsaved, ret == -EINTR);
+ 	/* We can restart this syscall, usually */
+ 	if (ret == -EINTR)
+ 		ret = -ERESTARTNOHAND;
+@@ -1142,8 +1140,7 @@ SYSCALL_DEFINE5(ppoll_time32, struct pollfd __user *, ufds, unsigned int, nfds,
+ 
+ 	ret = do_sys_poll(ufds, nfds, to);
+ 
+-	restore_user_sigmask(sigmask, &sigsaved);
+-
++	restore_user_sigmask(sigmask, &sigsaved, ret == -EINTR);
+ 	/* We can restart this syscall, usually */
+ 	if (ret == -EINTR)
+ 		ret = -ERESTARTNOHAND;
+@@ -1350,10 +1347,9 @@ static long do_compat_pselect(int n, compat_ulong_t __user *inp,
+ 		return ret;
+ 
+ 	ret = compat_core_sys_select(n, inp, outp, exp, to);
++	restore_user_sigmask(sigmask, &sigsaved, ret == -ERESTARTNOHAND);
+ 	ret = poll_select_copy_remaining(&end_time, tsp, type, ret);
+ 
+-	restore_user_sigmask(sigmask, &sigsaved);
+-
+ 	return ret;
+ }
+ 
+@@ -1425,8 +1421,7 @@ COMPAT_SYSCALL_DEFINE5(ppoll_time32, struct pollfd __user *, ufds,
+ 
+ 	ret = do_sys_poll(ufds, nfds, to);
+ 
+-	restore_user_sigmask(sigmask, &sigsaved);
+-
++	restore_user_sigmask(sigmask, &sigsaved, ret == -EINTR);
+ 	/* We can restart this syscall, usually */
+ 	if (ret == -EINTR)
+ 		ret = -ERESTARTNOHAND;
+@@ -1461,8 +1456,7 @@ COMPAT_SYSCALL_DEFINE5(ppoll_time64, struct pollfd __user *, ufds,
+ 
+ 	ret = do_sys_poll(ufds, nfds, to);
+ 
+-	restore_user_sigmask(sigmask, &sigsaved);
+-
++	restore_user_sigmask(sigmask, &sigsaved, ret == -EINTR);
+ 	/* We can restart this syscall, usually */
+ 	if (ret == -EINTR)
+ 		ret = -ERESTARTNOHAND;
+diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
+index f5de1e726356..f30f824b0728 100644
+--- a/fs/userfaultfd.c
++++ b/fs/userfaultfd.c
+@@ -40,6 +40,16 @@ enum userfaultfd_state {
+ /*
+  * Start with fault_pending_wqh and fault_wqh so they're more likely
+  * to be in the same cacheline.
++ *
++ * Locking order:
++ *	fd_wqh.lock
++ *		fault_pending_wqh.lock
++ *			fault_wqh.lock
++ *		event_wqh.lock
++ *
++ * To avoid deadlocks, IRQs must be disabled when taking any of the above locks,
++ * since fd_wqh.lock is taken by aio_poll() while it's holding a lock that's
++ * also taken in IRQ context.
+  */
+ struct userfaultfd_ctx {
+ 	/* waitqueue head for the pending (i.e. not read) userfaults */
+@@ -458,7 +468,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
+ 	blocking_state = return_to_userland ? TASK_INTERRUPTIBLE :
+ 			 TASK_KILLABLE;
+ 
+-	spin_lock(&ctx->fault_pending_wqh.lock);
++	spin_lock_irq(&ctx->fault_pending_wqh.lock);
+ 	/*
+ 	 * After the __add_wait_queue the uwq is visible to userland
+ 	 * through poll/read().
+@@ -470,7 +480,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
+ 	 * __add_wait_queue.
+ 	 */
+ 	set_current_state(blocking_state);
+-	spin_unlock(&ctx->fault_pending_wqh.lock);
++	spin_unlock_irq(&ctx->fault_pending_wqh.lock);
+ 
+ 	if (!is_vm_hugetlb_page(vmf->vma))
+ 		must_wait = userfaultfd_must_wait(ctx, vmf->address, vmf->flags,
+@@ -552,13 +562,13 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
+ 	 * kernel stack can be released after the list_del_init.
+ 	 */
+ 	if (!list_empty_careful(&uwq.wq.entry)) {
+-		spin_lock(&ctx->fault_pending_wqh.lock);
++		spin_lock_irq(&ctx->fault_pending_wqh.lock);
+ 		/*
+ 		 * No need of list_del_init(), the uwq on the stack
+ 		 * will be freed shortly anyway.
+ 		 */
+ 		list_del(&uwq.wq.entry);
+-		spin_unlock(&ctx->fault_pending_wqh.lock);
++		spin_unlock_irq(&ctx->fault_pending_wqh.lock);
+ 	}
+ 
+ 	/*
+@@ -583,7 +593,7 @@ static void userfaultfd_event_wait_completion(struct userfaultfd_ctx *ctx,
+ 	init_waitqueue_entry(&ewq->wq, current);
+ 	release_new_ctx = NULL;
+ 
+-	spin_lock(&ctx->event_wqh.lock);
++	spin_lock_irq(&ctx->event_wqh.lock);
+ 	/*
+ 	 * After the __add_wait_queue the uwq is visible to userland
+ 	 * through poll/read().
+@@ -613,15 +623,15 @@ static void userfaultfd_event_wait_completion(struct userfaultfd_ctx *ctx,
+ 			break;
+ 		}
+ 
+-		spin_unlock(&ctx->event_wqh.lock);
++		spin_unlock_irq(&ctx->event_wqh.lock);
+ 
+ 		wake_up_poll(&ctx->fd_wqh, EPOLLIN);
+ 		schedule();
+ 
+-		spin_lock(&ctx->event_wqh.lock);
++		spin_lock_irq(&ctx->event_wqh.lock);
+ 	}
+ 	__set_current_state(TASK_RUNNING);
+-	spin_unlock(&ctx->event_wqh.lock);
++	spin_unlock_irq(&ctx->event_wqh.lock);
+ 
+ 	if (release_new_ctx) {
+ 		struct vm_area_struct *vma;
+@@ -918,10 +928,10 @@ wakeup:
+ 	 * the last page faults that may have been already waiting on
+ 	 * the fault_*wqh.
+ 	 */
+-	spin_lock(&ctx->fault_pending_wqh.lock);
++	spin_lock_irq(&ctx->fault_pending_wqh.lock);
+ 	__wake_up_locked_key(&ctx->fault_pending_wqh, TASK_NORMAL, &range);
+ 	__wake_up(&ctx->fault_wqh, TASK_NORMAL, 1, &range);
+-	spin_unlock(&ctx->fault_pending_wqh.lock);
++	spin_unlock_irq(&ctx->fault_pending_wqh.lock);
+ 
+ 	/* Flush pending events that may still wait on event_wqh */
+ 	wake_up_all(&ctx->event_wqh);
+@@ -1134,7 +1144,7 @@ static ssize_t userfaultfd_ctx_read(struct userfaultfd_ctx *ctx, int no_wait,
+ 
+ 	if (!ret && msg->event == UFFD_EVENT_FORK) {
+ 		ret = resolve_userfault_fork(ctx, fork_nctx, msg);
+-		spin_lock(&ctx->event_wqh.lock);
++		spin_lock_irq(&ctx->event_wqh.lock);
+ 		if (!list_empty(&fork_event)) {
+ 			/*
+ 			 * The fork thread didn't abort, so we can
+@@ -1180,7 +1190,7 @@ static ssize_t userfaultfd_ctx_read(struct userfaultfd_ctx *ctx, int no_wait,
+ 			if (ret)
+ 				userfaultfd_ctx_put(fork_nctx);
+ 		}
+-		spin_unlock(&ctx->event_wqh.lock);
++		spin_unlock_irq(&ctx->event_wqh.lock);
+ 	}
+ 
+ 	return ret;
+@@ -1219,14 +1229,14 @@ static ssize_t userfaultfd_read(struct file *file, char __user *buf,
+ static void __wake_userfault(struct userfaultfd_ctx *ctx,
+ 			     struct userfaultfd_wake_range *range)
+ {
+-	spin_lock(&ctx->fault_pending_wqh.lock);
++	spin_lock_irq(&ctx->fault_pending_wqh.lock);
+ 	/* wake all in the range and autoremove */
+ 	if (waitqueue_active(&ctx->fault_pending_wqh))
+ 		__wake_up_locked_key(&ctx->fault_pending_wqh, TASK_NORMAL,
+ 				     range);
+ 	if (waitqueue_active(&ctx->fault_wqh))
+ 		__wake_up(&ctx->fault_wqh, TASK_NORMAL, 1, range);
+-	spin_unlock(&ctx->fault_pending_wqh.lock);
++	spin_unlock_irq(&ctx->fault_pending_wqh.lock);
+ }
+ 
+ static __always_inline void wake_userfault(struct userfaultfd_ctx *ctx,
+@@ -1881,7 +1891,7 @@ static void userfaultfd_show_fdinfo(struct seq_file *m, struct file *f)
+ 	wait_queue_entry_t *wq;
+ 	unsigned long pending = 0, total = 0;
+ 
+-	spin_lock(&ctx->fault_pending_wqh.lock);
++	spin_lock_irq(&ctx->fault_pending_wqh.lock);
+ 	list_for_each_entry(wq, &ctx->fault_pending_wqh.head, entry) {
+ 		pending++;
+ 		total++;
+@@ -1889,7 +1899,7 @@ static void userfaultfd_show_fdinfo(struct seq_file *m, struct file *f)
+ 	list_for_each_entry(wq, &ctx->fault_wqh.head, entry) {
+ 		total++;
+ 	}
+-	spin_unlock(&ctx->fault_pending_wqh.lock);
++	spin_unlock_irq(&ctx->fault_pending_wqh.lock);
+ 
+ 	/*
+ 	 * If more protocols will be added, there will be all shown
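
The userfaultfd conversion follows directly from the new locking-order comment: once a waitqueue lock can be reached from IRQ context (via aio_poll()), every process-context acquisition must disable IRQs, or an interrupt on the same CPU can try to re-take the held lock and self-deadlock. A shape-only sketch with do-nothing userspace stand-ins for the primitives:

#include <stdio.h>

/* Do-nothing userspace stand-ins, only to show the shape. */
static void spin_lock_irq(int *l)   { (void)l; puts("locked, IRQs off"); }
static void spin_unlock_irq(int *l) { (void)l; puts("unlocked, IRQs on"); }

static int fault_pending_lock;

/* Process context: because an interrupt handler may also take this
 * lock, IRQs must be off for the whole critical section. */
static void add_waiter(void)
{
	spin_lock_irq(&fault_pending_lock);
	/* ... __add_wait_queue(), set_current_state() ... */
	spin_unlock_irq(&fault_pending_lock);
}

int main(void)
{
	add_waiter();
	return 0;
}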
+diff --git a/include/linux/signal.h b/include/linux/signal.h
+index 9702016734b1..78c2bb376954 100644
+--- a/include/linux/signal.h
++++ b/include/linux/signal.h
+@@ -276,7 +276,7 @@ extern int sigprocmask(int, sigset_t *, sigset_t *);
+ extern int set_user_sigmask(const sigset_t __user *usigmask, sigset_t *set,
+ 	sigset_t *oldset, size_t sigsetsize);
+ extern void restore_user_sigmask(const void __user *usigmask,
+-				 sigset_t *sigsaved);
++				 sigset_t *sigsaved, bool interrupted);
+ extern void set_current_blocked(sigset_t *);
+ extern void __set_current_blocked(const sigset_t *);
+ extern int show_unhandled_signals;
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 4834c4214e9c..6c9deb2cc687 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -3255,10 +3255,23 @@ void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask)
+ 	spin_unlock_irqrestore(&callback_lock, flags);
+ }
+ 
++/**
++ * cpuset_cpus_allowed_fallback - final fallback before complete catastrophe.
++ * @tsk: pointer to task_struct with which the scheduler is struggling
++ *
++ * Description: In the case that the scheduler cannot find an allowed cpu in
++ * tsk->cpus_allowed, we fall back to task_cs(tsk)->cpus_allowed. In legacy
++ * mode however, this value is the same as task_cs(tsk)->effective_cpus,
++ * which will not contain a sane cpumask during cases such as cpu hotplugging.
++ * This is the absolute last resort for the scheduler and it is only used if
++ * _every_ other avenue has been traveled.
++ **/
++
+ void cpuset_cpus_allowed_fallback(struct task_struct *tsk)
+ {
+ 	rcu_read_lock();
+-	do_set_cpus_allowed(tsk, task_cs(tsk)->effective_cpus);
++	do_set_cpus_allowed(tsk, is_in_v2_mode() ?
++		task_cs(tsk)->cpus_allowed : cpu_possible_mask);
+ 	rcu_read_unlock();
+ 
+ 	/*
+diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
+index eb0ee10a1981..05d5b0afc864 100644
+--- a/kernel/livepatch/core.c
++++ b/kernel/livepatch/core.c
+@@ -30,6 +30,7 @@
+ #include <linux/elf.h>
+ #include <linux/moduleloader.h>
+ #include <linux/completion.h>
++#include <linux/memory.h>
+ #include <asm/cacheflush.h>
+ #include "core.h"
+ #include "patch.h"
+@@ -746,16 +747,21 @@ static int klp_init_object_loaded(struct klp_patch *patch,
+ 	struct klp_func *func;
+ 	int ret;
+ 
++	mutex_lock(&text_mutex);
++
+ 	module_disable_ro(patch->mod);
+ 	ret = klp_write_object_relocations(patch->mod, obj);
+ 	if (ret) {
+ 		module_enable_ro(patch->mod, true);
++		mutex_unlock(&text_mutex);
+ 		return ret;
+ 	}
+ 
+ 	arch_klp_init_object_loaded(patch, obj);
+ 	module_enable_ro(patch->mod, true);
+ 
++	mutex_unlock(&text_mutex);
++
+ 	klp_for_each_func(obj, func) {
+ 		ret = klp_find_object_symbol(obj->name, func->old_name,
+ 					     func->old_sympos,
+diff --git a/kernel/ptrace.c b/kernel/ptrace.c
+index c9b4646ad375..d31506318454 100644
+--- a/kernel/ptrace.c
++++ b/kernel/ptrace.c
+@@ -78,9 +78,7 @@ void __ptrace_link(struct task_struct *child, struct task_struct *new_parent,
+  */
+ static void ptrace_link(struct task_struct *child, struct task_struct *new_parent)
+ {
+-	rcu_read_lock();
+-	__ptrace_link(child, new_parent, __task_cred(new_parent));
+-	rcu_read_unlock();
++	__ptrace_link(child, new_parent, current_cred());
+ }
+ 
+ /**
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 429f5663edd9..5f3dd69b50e2 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -2851,7 +2851,8 @@ EXPORT_SYMBOL(set_compat_user_sigmask);
+  * This is useful for syscalls such as ppoll, pselect, io_pgetevents and
+  * epoll_pwait where a new sigmask is passed in from userland for the syscalls.
+  */
+-void restore_user_sigmask(const void __user *usigmask, sigset_t *sigsaved)
++void restore_user_sigmask(const void __user *usigmask, sigset_t *sigsaved,
++				bool interrupted)
+ {
+ 
+ 	if (!usigmask)
+@@ -2861,7 +2862,7 @@ void restore_user_sigmask(const void __user *usigmask, sigset_t *sigsaved)
+ 	 * Restoring sigmask here can lead to delivering signals that the above
+ 	 * syscalls are intended to block because of the sigmask passed in.
+ 	 */
+-	if (signal_pending(current)) {
++	if (interrupted) {
+ 		current->saved_sigmask = *sigsaved;
+ 		set_restore_sigmask();
+ 		return;
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index b920358dd8f7..6b6fa18f0a02 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -2939,14 +2939,13 @@ static int ftrace_update_code(struct module *mod, struct ftrace_page *new_pgs)
+ 			p = &pg->records[i];
+ 			p->flags = rec_flags;
+ 
+-#ifndef CC_USING_NOP_MCOUNT
+ 			/*
+ 			 * Do the initial record conversion from mcount jump
+ 			 * to the NOP instructions.
+ 			 */
+-			if (!ftrace_code_disable(mod, p))
++			if (!__is_defined(CC_USING_NOP_MCOUNT) &&
++			    !ftrace_code_disable(mod, p))
+ 				break;
+-#endif
+ 
+ 			update_cnt++;
+ 		}
+@@ -4225,10 +4224,13 @@ void free_ftrace_func_mapper(struct ftrace_func_mapper *mapper,
+ 	struct ftrace_func_entry *entry;
+ 	struct ftrace_func_map *map;
+ 	struct hlist_head *hhd;
+-	int size = 1 << mapper->hash.size_bits;
+-	int i;
++	int size, i;
++
++	if (!mapper)
++		return;
+ 
+ 	if (free_func && mapper->hash.count) {
++		size = 1 << mapper->hash.size_bits;
+ 		for (i = 0; i < size; i++) {
+ 			hhd = &mapper->hash.buckets[i];
+ 			hlist_for_each_entry(entry, hhd, hlist) {
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 5880c993002b..411e3a819e42 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -6696,11 +6696,13 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
+ 			break;
+ 		}
+ #endif
+-		if (!tr->allocated_snapshot) {
++		if (tr->allocated_snapshot)
++			ret = resize_buffer_duplicate_size(&tr->max_buffer,
++					&tr->trace_buffer, iter->cpu_file);
++		else
+ 			ret = tracing_alloc_snapshot_instance(tr);
+-			if (ret < 0)
+-				break;
+-		}
++		if (ret < 0)
++			break;
+ 		local_irq_disable();
+ 		/* Now, we're going to swap */
+ 		if (iter->cpu_file == RING_BUFFER_ALL_CPUS)
+diff --git a/lib/idr.c b/lib/idr.c
+index cb1db9b8d3f6..da3021e7c2b5 100644
+--- a/lib/idr.c
++++ b/lib/idr.c
+@@ -227,11 +227,21 @@ void *idr_get_next(struct idr *idr, int *nextid)
+ {
+ 	struct radix_tree_iter iter;
+ 	void __rcu **slot;
++	void *entry = NULL;
+ 	unsigned long base = idr->idr_base;
+ 	unsigned long id = *nextid;
+ 
+ 	id = (id < base) ? 0 : id - base;
+-	slot = radix_tree_iter_find(&idr->idr_rt, &iter, id);
++	radix_tree_for_each_slot(slot, &idr->idr_rt, &iter, id) {
++		entry = rcu_dereference_raw(*slot);
++		if (!entry)
++			continue;
++		if (!xa_is_internal(entry))
++			break;
++		if (slot != &idr->idr_rt.xa_head && !xa_is_retry(entry))
++			break;
++		slot = radix_tree_iter_retry(&iter);
++	}
+ 	if (!slot)
+ 		return NULL;
+ 	id = iter.index + base;
+@@ -240,7 +250,7 @@ void *idr_get_next(struct idr *idr, int *nextid)
+ 		return NULL;
+ 
+ 	*nextid = id;
+-	return rcu_dereference_raw(*slot);
++	return entry;
+ }
+ EXPORT_SYMBOL(idr_get_next);
+ 
+diff --git a/lib/mpi/mpi-pow.c b/lib/mpi/mpi-pow.c
+index a5c921e6d667..d3ca55093fa5 100644
+--- a/lib/mpi/mpi-pow.c
++++ b/lib/mpi/mpi-pow.c
+@@ -37,6 +37,7 @@
+ int mpi_powm(MPI res, MPI base, MPI exp, MPI mod)
+ {
+ 	mpi_ptr_t mp_marker = NULL, bp_marker = NULL, ep_marker = NULL;
++	struct karatsuba_ctx karactx = {};
+ 	mpi_ptr_t xp_marker = NULL;
+ 	mpi_ptr_t tspace = NULL;
+ 	mpi_ptr_t rp, ep, mp, bp;
+@@ -163,13 +164,11 @@ int mpi_powm(MPI res, MPI base, MPI exp, MPI mod)
+ 		int c;
+ 		mpi_limb_t e;
+ 		mpi_limb_t carry_limb;
+-		struct karatsuba_ctx karactx;
+ 
+ 		xp = xp_marker = mpi_alloc_limb_space(2 * (msize + 1));
+ 		if (!xp)
+ 			goto enomem;
+ 
+-		memset(&karactx, 0, sizeof karactx);
+ 		negative_result = (ep[0] & 1) && base->sign;
+ 
+ 		i = esize - 1;
+@@ -294,8 +293,6 @@ int mpi_powm(MPI res, MPI base, MPI exp, MPI mod)
+ 		if (mod_shift_cnt)
+ 			mpihelp_rshift(rp, rp, rsize, mod_shift_cnt);
+ 		MPN_NORMALIZE(rp, rsize);
+-
+-		mpihelp_release_karatsuba_ctx(&karactx);
+ 	}
+ 
+ 	if (negative_result && rsize) {
+@@ -312,6 +309,7 @@ int mpi_powm(MPI res, MPI base, MPI exp, MPI mod)
+ leave:
+ 	rc = 0;
+ enomem:
++	mpihelp_release_karatsuba_ctx(&karactx);
+ 	if (assign_rp)
+ 		mpi_assign_limb_space(res, rp, size);
+ 	if (mp_marker)
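
The mpi_powm() fix is a general leak-avoidance pattern: zero-initialize the context at its declaration and release it once on the common exit path, so every early goto, including ones taken before the context was populated, cleans up safely. A compilable sketch (struct and helpers are hypothetical):

#include <stdlib.h>

struct ctx { void *buf; };	/* hypothetical context */

static void ctx_release(struct ctx *c)
{
	free(c->buf);
	c->buf = NULL;
}

static int powm_like(int fail_early)
{
	struct ctx karactx = { 0 };	/* releasable even if never used */
	int rc = -1;

	if (fail_early)
		goto out;		/* error before karactx is populated */

	karactx.buf = malloc(64);
	if (!karactx.buf)
		goto out;

	/* ... long computation that may populate karactx further ... */
	rc = 0;
out:
	ctx_release(&karactx);		/* one cleanup path for all exits */
	return rc;
}

int main(void)
{
	powm_like(1);			/* early-error path still frees */
	return powm_like(0);
}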
+diff --git a/mm/mlock.c b/mm/mlock.c
+index 080f3b36415b..d614163f569b 100644
+--- a/mm/mlock.c
++++ b/mm/mlock.c
+@@ -636,11 +636,11 @@ static int apply_vma_lock_flags(unsigned long start, size_t len,
+  * is also counted.
+  * Return value: previously mlocked page counts
+  */
+-static int count_mm_mlocked_page_nr(struct mm_struct *mm,
++static unsigned long count_mm_mlocked_page_nr(struct mm_struct *mm,
+ 		unsigned long start, size_t len)
+ {
+ 	struct vm_area_struct *vma;
+-	int count = 0;
++	unsigned long count = 0;
+ 
+ 	if (mm == NULL)
+ 		mm = current->mm;
+diff --git a/mm/page_io.c b/mm/page_io.c
+index 189415852077..a39aac2f8c8d 100644
+--- a/mm/page_io.c
++++ b/mm/page_io.c
+@@ -137,8 +137,10 @@ out:
+ 	unlock_page(page);
+ 	WRITE_ONCE(bio->bi_private, NULL);
+ 	bio_put(bio);
+-	blk_wake_io_task(waiter);
+-	put_task_struct(waiter);
++	if (waiter) {
++		blk_wake_io_task(waiter);
++		put_task_struct(waiter);
++	}
+ }
+ 
+ int generic_swapfile_activate(struct swap_info_struct *sis,
+@@ -395,11 +397,12 @@ int swap_readpage(struct page *page, bool synchronous)
+ 	 * Keep this task valid during swap readpage because the oom killer may
+ 	 * attempt to access it in the page fault retry time check.
+ 	 */
+-	get_task_struct(current);
+-	bio->bi_private = current;
+ 	bio_set_op_attrs(bio, REQ_OP_READ, 0);
+-	if (synchronous)
++	if (synchronous) {
+ 		bio->bi_opf |= REQ_HIPRI;
++		get_task_struct(current);
++		bio->bi_private = current;
++	}
+ 	count_vm_event(PSWPIN);
+ 	bio_get(bio);
+ 	qc = submit_bio(bio);
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 3fb1d75804de..dbcf2cd5e7e9 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -3703,19 +3703,18 @@ out:
+ }
+ 
+ /*
+- * pgdat->kswapd_classzone_idx is the highest zone index that a recent
+- * allocation request woke kswapd for. When kswapd has not woken recently,
+- * the value is MAX_NR_ZONES which is not a valid index. This compares a
+- * given classzone and returns it or the highest classzone index kswapd
+- * was recently woke for.
++ * The waker uses pgdat->kswapd_classzone_idx to pass kswapd the highest zone
++ * index to be reclaimed. If the value is MAX_NR_ZONES, which is not a valid
++ * index, then either kswapd is running for the first time or it couldn't sleep
++ * after the previous reclaim attempt (the node is still unbalanced). In that
++ * case, return the zone index of the previous kswapd reclaim cycle.
+  */
+ static enum zone_type kswapd_classzone_idx(pg_data_t *pgdat,
+-					   enum zone_type classzone_idx)
++					   enum zone_type prev_classzone_idx)
+ {
+ 	if (pgdat->kswapd_classzone_idx == MAX_NR_ZONES)
+-		return classzone_idx;
+-
+-	return max(pgdat->kswapd_classzone_idx, classzone_idx);
++		return prev_classzone_idx;
++	return pgdat->kswapd_classzone_idx;
+ }
+ 
+ static void kswapd_try_to_sleep(pg_data_t *pgdat, int alloc_order, int reclaim_order,
+@@ -3856,7 +3855,7 @@ kswapd_try_sleep:
+ 
+ 		/* Read the new order and classzone_idx */
+ 		alloc_order = reclaim_order = pgdat->kswapd_order;
+-		classzone_idx = kswapd_classzone_idx(pgdat, 0);
++		classzone_idx = kswapd_classzone_idx(pgdat, classzone_idx);
+ 		pgdat->kswapd_order = 0;
+ 		pgdat->kswapd_classzone_idx = MAX_NR_ZONES;
+ 
+@@ -3910,8 +3909,12 @@ void wakeup_kswapd(struct zone *zone, gfp_t gfp_flags, int order,
+ 	if (!cpuset_zone_allowed(zone, gfp_flags))
+ 		return;
+ 	pgdat = zone->zone_pgdat;
+-	pgdat->kswapd_classzone_idx = kswapd_classzone_idx(pgdat,
+-							   classzone_idx);
++
++	if (pgdat->kswapd_classzone_idx == MAX_NR_ZONES)
++		pgdat->kswapd_classzone_idx = classzone_idx;
++	else
++		pgdat->kswapd_classzone_idx = max(pgdat->kswapd_classzone_idx,
++						  classzone_idx);
+ 	pgdat->kswapd_order = max(pgdat->kswapd_order, order);
+ 	if (!waitqueue_active(&pgdat->kswapd_wait))
+ 		return;
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 9f77432dbe38..5406d7cd46ad 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -1353,7 +1353,7 @@ static bool l2cap_check_enc_key_size(struct hci_conn *hcon)
+ 	 * actually encrypted before enforcing a key size.
+ 	 */
+ 	return (!test_bit(HCI_CONN_ENCRYPT, &hcon->flags) ||
+-		hcon->enc_key_size > HCI_MIN_ENC_KEY_SIZE);
++		hcon->enc_key_size >= HCI_MIN_ENC_KEY_SIZE);
+ }
+ 
+ static void l2cap_do_start(struct l2cap_chan *chan)
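
The l2cap one-character change fixes a boundary condition: a link whose encryption key is exactly the minimum size must be accepted, so the comparison is >=, not >. A self-checking sketch (the constant's value is assumed for illustration):

#include <assert.h>

#define HCI_MIN_ENC_KEY_SIZE 7	/* value assumed for illustration */

static int key_size_ok(int enc_key_size)
{
	return enc_key_size >= HCI_MIN_ENC_KEY_SIZE;	/* was '>' */
}

int main(void)
{
	assert(key_size_ok(HCI_MIN_ENC_KEY_SIZE));	/* boundary passes */
	assert(!key_size_ok(HCI_MIN_ENC_KEY_SIZE - 1));
	return 0;
}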
+diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c
+index 46022a2867d7..e7c3daddeffc 100644
+--- a/net/netfilter/nf_flow_table_ip.c
++++ b/net/netfilter/nf_flow_table_ip.c
+@@ -246,8 +246,7 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
+ 	flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]);
+ 	rt = (struct rtable *)flow->tuplehash[dir].tuple.dst_cache;
+ 
+-	if (unlikely(nf_flow_exceeds_mtu(skb, flow->tuplehash[dir].tuple.mtu)) &&
+-	    (ip_hdr(skb)->frag_off & htons(IP_DF)) != 0)
++	if (unlikely(nf_flow_exceeds_mtu(skb, flow->tuplehash[dir].tuple.mtu)))
+ 		return NF_ACCEPT;
+ 
+ 	if (skb_try_make_writable(skb, sizeof(*iph)))
+diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
+index ff50bc1b144f..ae57efb31d83 100644
+--- a/net/netfilter/nft_flow_offload.c
++++ b/net/netfilter/nft_flow_offload.c
+@@ -12,7 +12,6 @@
+ #include <net/netfilter/nf_conntrack_core.h>
+ #include <linux/netfilter/nf_conntrack_common.h>
+ #include <net/netfilter/nf_flow_table.h>
+-#include <net/netfilter/nf_conntrack_helper.h>
+ 
+ struct nft_flow_offload {
+ 	struct nft_flowtable	*flowtable;
+@@ -49,15 +48,20 @@ static int nft_flow_route(const struct nft_pktinfo *pkt,
+ 	return 0;
+ }
+ 
+-static bool nft_flow_offload_skip(struct sk_buff *skb)
++static bool nft_flow_offload_skip(struct sk_buff *skb, int family)
+ {
+-	struct ip_options *opt  = &(IPCB(skb)->opt);
+-
+-	if (unlikely(opt->optlen))
+-		return true;
+ 	if (skb_sec_path(skb))
+ 		return true;
+ 
++	if (family == NFPROTO_IPV4) {
++		const struct ip_options *opt;
++
++		opt = &(IPCB(skb)->opt);
++
++		if (unlikely(opt->optlen))
++			return true;
++	}
++
+ 	return false;
+ }
+ 
+@@ -67,15 +71,15 @@ static void nft_flow_offload_eval(const struct nft_expr *expr,
+ {
+ 	struct nft_flow_offload *priv = nft_expr_priv(expr);
+ 	struct nf_flowtable *flowtable = &priv->flowtable->data;
+-	const struct nf_conn_help *help;
+ 	enum ip_conntrack_info ctinfo;
+ 	struct nf_flow_route route;
+ 	struct flow_offload *flow;
+ 	enum ip_conntrack_dir dir;
++	bool is_tcp = false;
+ 	struct nf_conn *ct;
+ 	int ret;
+ 
+-	if (nft_flow_offload_skip(pkt->skb))
++	if (nft_flow_offload_skip(pkt->skb, nft_pf(pkt)))
+ 		goto out;
+ 
+ 	ct = nf_ct_get(pkt->skb, &ctinfo);
+@@ -84,14 +88,16 @@ static void nft_flow_offload_eval(const struct nft_expr *expr,
+ 
+ 	switch (ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.protonum) {
+ 	case IPPROTO_TCP:
++		is_tcp = true;
++		break;
+ 	case IPPROTO_UDP:
+ 		break;
+ 	default:
+ 		goto out;
+ 	}
+ 
+-	help = nfct_help(ct);
+-	if (help)
++	if (nf_ct_ext_exist(ct, NF_CT_EXT_HELPER) ||
++	    ct->status & IPS_SEQ_ADJUST)
+ 		goto out;
+ 
+ 	if (ctinfo == IP_CT_NEW ||
+@@ -109,6 +115,11 @@ static void nft_flow_offload_eval(const struct nft_expr *expr,
+ 	if (!flow)
+ 		goto err_flow_alloc;
+ 
++	if (is_tcp) {
++		ct->proto.tcp.seen[0].flags |= IP_CT_TCP_FLAG_BE_LIBERAL;
++		ct->proto.tcp.seen[1].flags |= IP_CT_TCP_FLAG_BE_LIBERAL;
++	}
++
+ 	ret = flow_offload_add(flowtable, flow);
+ 	if (ret < 0)
+ 		goto err_flow_add;
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+index 027a3b07d329..0004535c0188 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+@@ -211,9 +211,14 @@ static void handle_connect_req(struct rdma_cm_id *new_cma_id,
+ 	/* Save client advertised inbound read limit for use later in accept. */
+ 	newxprt->sc_ord = param->initiator_depth;
+ 
+-	/* Set the local and remote addresses in the transport */
+ 	sa = (struct sockaddr *)&newxprt->sc_cm_id->route.addr.dst_addr;
+ 	svc_xprt_set_remote(&newxprt->sc_xprt, sa, svc_addr_len(sa));
++	/* The remote port is arbitrary and not under the control of the
++	 * client ULP. Set it to a fixed value so that the DRC continues
++	 * to be effective after a reconnect.
++	 */
++	rpc_set_port((struct sockaddr *)&newxprt->sc_xprt.xpt_remote, 0);
++
+ 	sa = (struct sockaddr *)&newxprt->sc_cm_id->route.addr.src_addr;
+ 	svc_xprt_set_local(&newxprt->sc_xprt, sa, svc_addr_len(sa));
+ 
+diff --git a/scripts/decode_stacktrace.sh b/scripts/decode_stacktrace.sh
+index bcdd45df3f51..a7a36209a193 100755
+--- a/scripts/decode_stacktrace.sh
++++ b/scripts/decode_stacktrace.sh
+@@ -73,7 +73,7 @@ parse_symbol() {
+ 	if [[ "${cache[$module,$address]+isset}" == "isset" ]]; then
+ 		local code=${cache[$module,$address]}
+ 	else
+-		local code=$(addr2line -i -e "$objfile" "$address")
++		local code=$(${CROSS_COMPILE}addr2line -i -e "$objfile" "$address")
+ 		cache[$module,$address]=$code
+ 	fi
+ 
+diff --git a/sound/core/seq/oss/seq_oss_ioctl.c b/sound/core/seq/oss/seq_oss_ioctl.c
+index 5b8520177b0e..7d72e3d48ad5 100644
+--- a/sound/core/seq/oss/seq_oss_ioctl.c
++++ b/sound/core/seq/oss/seq_oss_ioctl.c
+@@ -62,7 +62,7 @@ static int snd_seq_oss_oob_user(struct seq_oss_devinfo *dp, void __user *arg)
+ 	if (copy_from_user(ev, arg, 8))
+ 		return -EFAULT;
+ 	memset(&tmpev, 0, sizeof(tmpev));
+-	snd_seq_oss_fill_addr(dp, &tmpev, dp->addr.port, dp->addr.client);
++	snd_seq_oss_fill_addr(dp, &tmpev, dp->addr.client, dp->addr.port);
+ 	tmpev.time.tick = 0;
+ 	if (! snd_seq_oss_process_event(dp, (union evrec *)ev, &tmpev)) {
+ 		snd_seq_oss_dispatch(dp, &tmpev, 0, 0);
+diff --git a/sound/core/seq/oss/seq_oss_rw.c b/sound/core/seq/oss/seq_oss_rw.c
+index 30886f5fb100..05fbb564beb3 100644
+--- a/sound/core/seq/oss/seq_oss_rw.c
++++ b/sound/core/seq/oss/seq_oss_rw.c
+@@ -174,7 +174,7 @@ insert_queue(struct seq_oss_devinfo *dp, union evrec *rec, struct file *opt)
+ 	memset(&event, 0, sizeof(event));
+ 	/* set dummy -- to be sure */
+ 	event.type = SNDRV_SEQ_EVENT_NOTEOFF;
+-	snd_seq_oss_fill_addr(dp, &event, dp->addr.port, dp->addr.client);
++	snd_seq_oss_fill_addr(dp, &event, dp->addr.client, dp->addr.port);
+ 
+ 	if (snd_seq_oss_process_event(dp, rec, &event))
+ 		return 0; /* invalid event - no need to insert queue */
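
Both seq_oss call sites passed (port, client) to a helper declared (client, port), a swap the compiler cannot catch when both parameters share a type. One defensive shape is to pass a small struct with designated initializers so the call site names each field; a sketch:

#include <assert.h>

struct addr { int client, port; };

/* The field names make the intent visible at the call site. */
static void fill_addr(struct addr *dst, struct addr src)
{
	dst->client = src.client;
	dst->port = src.port;
}

int main(void)
{
	struct addr dp = { .client = 17, .port = 3 }, ev;

	fill_addr(&ev, (struct addr){ .client = dp.client,
				      .port = dp.port });
	assert(ev.client == 17 && ev.port == 3);
	return 0;
}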
+diff --git a/sound/firewire/amdtp-am824.c b/sound/firewire/amdtp-am824.c
+index 4210e5c6262e..d09da9dbf235 100644
+--- a/sound/firewire/amdtp-am824.c
++++ b/sound/firewire/amdtp-am824.c
+@@ -321,7 +321,7 @@ static void read_midi_messages(struct amdtp_stream *s,
+ 	u8 *b;
+ 
+ 	for (f = 0; f < frames; f++) {
+-		port = (s->data_block_counter + f) % 8;
++		port = (8 - s->tx_first_dbc + s->data_block_counter + f) % 8;
+ 		b = (u8 *)&buffer[p->midi_position];
+ 
+ 		len = b[0] - 0x80;
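
In the amdtp hunk, adding (8 - tx_first_dbc) before the modulo rotates the MIDI port sequence so that the stream's first data block lands on port 0 regardless of where the data-block counter started. A tiny sketch of the arithmetic (the inputs are made-up values):

#include <stdio.h>

static unsigned int midi_port(unsigned int tx_first_dbc,
			      unsigned int data_block_counter,
			      unsigned int frame)
{
	/* The leading 8 keeps the sum non-negative before the modulo. */
	return (8 - tx_first_dbc + data_block_counter + frame) % 8;
}

int main(void)
{
	/* With tx_first_dbc == data_block_counter, frame 0 -> port 0. */
	for (unsigned int f = 0; f < 4; f++)
		printf("frame %u -> port %u\n", f, midi_port(1, 1, f));
	return 0;
}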
+diff --git a/sound/hda/ext/hdac_ext_bus.c b/sound/hda/ext/hdac_ext_bus.c
+index ec7715c6b0c0..c147ebe542da 100644
+--- a/sound/hda/ext/hdac_ext_bus.c
++++ b/sound/hda/ext/hdac_ext_bus.c
+@@ -172,7 +172,6 @@ EXPORT_SYMBOL_GPL(snd_hdac_ext_bus_device_init);
+ void snd_hdac_ext_bus_device_exit(struct hdac_device *hdev)
+ {
+ 	snd_hdac_device_exit(hdev);
+-	kfree(hdev);
+ }
+ EXPORT_SYMBOL_GPL(snd_hdac_ext_bus_device_exit);
+ 
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index b20eb7fc83eb..fcdf2cd3783b 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -840,7 +840,14 @@ static int snd_hda_codec_dev_free(struct snd_device *device)
+ 	if (codec->core.type == HDA_DEV_LEGACY)
+ 		snd_hdac_device_unregister(&codec->core);
+ 	codec_display_power(codec, false);
+-	put_device(hda_codec_dev(codec));
++
++	/*
++	 * In the case of ASoC HD-audio bus, the device refcount is released in
++	 * snd_hdac_ext_bus_device_remove() explicitly.
++	 */
++	if (codec->core.type == HDA_DEV_LEGACY)
++		put_device(hda_codec_dev(codec));
++
+ 	return 0;
+ }
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index d0e543ff6b64..ee620f39dbe3 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2443,9 +2443,10 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x9501, "Clevo P950HR", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1558, 0x95e1, "Clevo P95xER", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1558, 0x95e2, "Clevo P950ER", ALC1220_FIXUP_CLEVO_P950),
+-	SND_PCI_QUIRK(0x1558, 0x96e1, "System76 Oryx Pro (oryp5)", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x97e1, "System76 Oryx Pro (oryp5)", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x65d1, "Tuxedo Book XC1509", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x96e1, "Clevo P960[ER][CDFN]-K", ALC1220_FIXUP_CLEVO_P950),
++	SND_PCI_QUIRK(0x1558, 0x97e1, "Clevo P970[ER][CDFN]", ALC1220_FIXUP_CLEVO_P950),
++	SND_PCI_QUIRK(0x1558, 0x65d1, "Clevo PB51[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x67d1, "Clevo PB71[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK_VENDOR(0x1558, "Clevo laptop", ALC882_FIXUP_EAPD),
+ 	SND_PCI_QUIRK(0x161f, 0x2054, "Medion laptop", ALC883_FIXUP_EAPD),
+ 	SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Y530", ALC882_FIXUP_LENOVO_Y530),
+@@ -7030,6 +7031,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
+ 	SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
+ 	SND_PCI_QUIRK(0x17aa, 0x310c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
++	SND_PCI_QUIRK(0x17aa, 0x3111, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+ 	SND_PCI_QUIRK(0x17aa, 0x312a, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+ 	SND_PCI_QUIRK(0x17aa, 0x312f, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+ 	SND_PCI_QUIRK(0x17aa, 0x313c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+diff --git a/sound/soc/codecs/ak4458.c b/sound/soc/codecs/ak4458.c
+index eab7c76cfcd9..71562154c0b1 100644
+--- a/sound/soc/codecs/ak4458.c
++++ b/sound/soc/codecs/ak4458.c
+@@ -304,7 +304,10 @@ static int ak4458_rstn_control(struct snd_soc_component *component, int bit)
+ 					  AK4458_00_CONTROL1,
+ 					  AK4458_RSTN_MASK,
+ 					  0x0);
+-	return ret;
++	if (ret < 0)
++		return ret;
++
++	return 0;
+ }
+ 
+ static int ak4458_hw_params(struct snd_pcm_substream *substream,
+@@ -536,9 +539,10 @@ static void ak4458_power_on(struct ak4458_priv *ak4458)
+ 	}
+ }
+ 
+-static void ak4458_init(struct snd_soc_component *component)
++static int ak4458_init(struct snd_soc_component *component)
+ {
+ 	struct ak4458_priv *ak4458 = snd_soc_component_get_drvdata(component);
++	int ret;
+ 
+ 	/* External Mute ON */
+ 	if (ak4458->mute_gpiod)
+@@ -546,21 +550,21 @@ static void ak4458_init(struct snd_soc_component *component)
+ 
+ 	ak4458_power_on(ak4458);
+ 
+-	snd_soc_component_update_bits(component, AK4458_00_CONTROL1,
++	ret = snd_soc_component_update_bits(component, AK4458_00_CONTROL1,
+ 			    0x80, 0x80);   /* ACKS bit = 1; 10000000 */
++	if (ret < 0)
++		return ret;
+ 
+-	ak4458_rstn_control(component, 1);
++	return ak4458_rstn_control(component, 1);
+ }
+ 
+ static int ak4458_probe(struct snd_soc_component *component)
+ {
+ 	struct ak4458_priv *ak4458 = snd_soc_component_get_drvdata(component);
+ 
+-	ak4458_init(component);
+-
+ 	ak4458->fs = 48000;
+ 
+-	return 0;
++	return ak4458_init(component);
+ }
+ 
+ static void ak4458_remove(struct snd_soc_component *component)
+diff --git a/sound/soc/codecs/cs4265.c b/sound/soc/codecs/cs4265.c
+index ab27d2b94d02..c0190ec59e74 100644
+--- a/sound/soc/codecs/cs4265.c
++++ b/sound/soc/codecs/cs4265.c
+@@ -60,7 +60,7 @@ static const struct reg_default cs4265_reg_defaults[] = {
+ static bool cs4265_readable_register(struct device *dev, unsigned int reg)
+ {
+ 	switch (reg) {
+-	case CS4265_CHIP_ID ... CS4265_SPDIF_CTL2:
++	case CS4265_CHIP_ID ... CS4265_MAX_REGISTER:
+ 		return true;
+ 	default:
+ 		return false;
+diff --git a/sound/soc/codecs/max98090.c b/sound/soc/codecs/max98090.c
+index 7619ea31ab50..ada8c25e643d 100644
+--- a/sound/soc/codecs/max98090.c
++++ b/sound/soc/codecs/max98090.c
+@@ -1909,6 +1909,21 @@ static int max98090_configure_dmic(struct max98090_priv *max98090,
+ 	return 0;
+ }
+ 
++static int max98090_dai_startup(struct snd_pcm_substream *substream,
++				struct snd_soc_dai *dai)
++{
++	struct snd_soc_component *component = dai->component;
++	struct max98090_priv *max98090 = snd_soc_component_get_drvdata(component);
++	unsigned int fmt = max98090->dai_fmt;
++
++	/* Remove 24-bit format support if it is not in right justified mode. */
++	if ((fmt & SND_SOC_DAIFMT_FORMAT_MASK) != SND_SOC_DAIFMT_RIGHT_J) {
++		substream->runtime->hw.formats = SNDRV_PCM_FMTBIT_S16_LE;
++		snd_pcm_hw_constraint_msbits(substream->runtime, 0, 16, 16);
++	}
++	return 0;
++}
++
+ static int max98090_dai_hw_params(struct snd_pcm_substream *substream,
+ 				   struct snd_pcm_hw_params *params,
+ 				   struct snd_soc_dai *dai)
+@@ -2316,6 +2331,7 @@ EXPORT_SYMBOL_GPL(max98090_mic_detect);
+ #define MAX98090_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE)
+ 
+ static const struct snd_soc_dai_ops max98090_dai_ops = {
++	.startup = max98090_dai_startup,
+ 	.set_sysclk = max98090_dai_set_sysclk,
+ 	.set_fmt = max98090_dai_set_fmt,
+ 	.set_tdm_slot = max98090_set_tdm_slot,
+diff --git a/sound/soc/codecs/rt274.c b/sound/soc/codecs/rt274.c
+index adf59039a3b6..cdd312db3e78 100644
+--- a/sound/soc/codecs/rt274.c
++++ b/sound/soc/codecs/rt274.c
+@@ -405,6 +405,8 @@ static int rt274_mic_detect(struct snd_soc_component *component,
+ {
+ 	struct rt274_priv *rt274 = snd_soc_component_get_drvdata(component);
+ 
++	rt274->jack = jack;
++
+ 	if (jack == NULL) {
+ 		/* Disable jack detection */
+ 		regmap_update_bits(rt274->regmap, RT274_EAPD_GPIO_IRQ_CTRL,
+@@ -412,7 +414,6 @@ static int rt274_mic_detect(struct snd_soc_component *component,
+ 
+ 		return 0;
+ 	}
+-	rt274->jack = jack;
+ 
+ 	regmap_update_bits(rt274->regmap, RT274_EAPD_GPIO_IRQ_CTRL,
+ 				RT274_IRQ_EN, RT274_IRQ_EN);
+diff --git a/sound/soc/codecs/rt5670.c b/sound/soc/codecs/rt5670.c
+index 9a037108b1ae..a746e11ccfe3 100644
+--- a/sound/soc/codecs/rt5670.c
++++ b/sound/soc/codecs/rt5670.c
+@@ -2882,6 +2882,18 @@ static const struct dmi_system_id dmi_platform_intel_quirks[] = {
+ 						 RT5670_DEV_GPIO |
+ 						 RT5670_JD_MODE3),
+ 	},
++	{
++		.callback = rt5670_quirk_cb,
++		.ident = "Aegex 10 tablet (RU2)",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "AEGEX"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "RU2"),
++		},
++		.driver_data = (unsigned long *)(RT5670_DMIC_EN |
++						 RT5670_DMIC2_INR |
++						 RT5670_DEV_GPIO |
++						 RT5670_JD_MODE3),
++	},
+ 	{}
+ };
+ 
+diff --git a/sound/soc/intel/atom/sst/sst_pvt.c b/sound/soc/intel/atom/sst/sst_pvt.c
+index 00a37a09dc9b..dba0ca07ebf9 100644
+--- a/sound/soc/intel/atom/sst/sst_pvt.c
++++ b/sound/soc/intel/atom/sst/sst_pvt.c
+@@ -166,11 +166,11 @@ int sst_create_ipc_msg(struct ipc_post **arg, bool large)
+ {
+ 	struct ipc_post *msg;
+ 
+-	msg = kzalloc(sizeof(*msg), GFP_KERNEL);
++	msg = kzalloc(sizeof(*msg), GFP_ATOMIC);
+ 	if (!msg)
+ 		return -ENOMEM;
+ 	if (large) {
+-		msg->mailbox_data = kzalloc(SST_MAILBOX_SIZE, GFP_KERNEL);
++		msg->mailbox_data = kzalloc(SST_MAILBOX_SIZE, GFP_ATOMIC);
+ 		if (!msg->mailbox_data) {
+ 			kfree(msg);
+ 			return -ENOMEM;
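
The switch from GFP_KERNEL to GFP_ATOMIC above is motivated by context: sst_create_ipc_msg() can be reached from atomic context, where a sleeping allocation is a bug. A generic sketch of the rule, with hypothetical names not taken from this driver:

    #include <linux/slab.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(demo_lock);

    static void *demo_alloc_under_lock(size_t size)
    {
            unsigned long flags;
            void *p;

            spin_lock_irqsave(&demo_lock, flags);
            /* GFP_KERNEL may sleep and must not be used here;
             * GFP_ATOMIC never sleeps (but may fail under pressure). */
            p = kzalloc(size, GFP_ATOMIC);
            spin_unlock_irqrestore(&demo_lock, flags);
            return p;
    }
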
+diff --git a/sound/soc/intel/boards/bytcht_es8316.c b/sound/soc/intel/boards/bytcht_es8316.c
+index d2a7e6ba11ae..1c686f83220a 100644
+--- a/sound/soc/intel/boards/bytcht_es8316.c
++++ b/sound/soc/intel/boards/bytcht_es8316.c
+@@ -471,6 +471,7 @@ static int snd_byt_cht_es8316_mc_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	/* override platform name, if required */
++	byt_cht_es8316_card.dev = dev;
+ 	platform_name = mach->mach_params.platform;
+ 
+ 	ret = snd_soc_fixup_dai_links_platform_name(&byt_cht_es8316_card,
+@@ -538,7 +539,6 @@ static int snd_byt_cht_es8316_mc_probe(struct platform_device *pdev)
+ 		 (quirk & BYT_CHT_ES8316_MONO_SPEAKER) ? "mono" : "stereo",
+ 		 mic_name[BYT_CHT_ES8316_MAP(quirk)]);
+ 	byt_cht_es8316_card.long_name = long_name;
+-	byt_cht_es8316_card.dev = dev;
+ 	snd_soc_card_set_drvdata(&byt_cht_es8316_card, priv);
+ 
+ 	ret = devm_snd_soc_register_card(dev, &byt_cht_es8316_card);
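
The point of moving the dev assignment above the fixup call, here and in the three cht_bsw_* boards below, is ordering: snd_soc_fixup_dai_links_platform_name() duplicates the platform name with a devm allocation tied to card->dev, so the device must be assigned before the helper runs. A hedged sketch of the pattern, with a hypothetical card and platform name:

    #include <linux/platform_device.h>
    #include <sound/soc.h>

    static struct snd_soc_card demo_card;   /* hypothetical machine driver card */

    static int demo_probe(struct platform_device *pdev)
    {
            int ret;

            demo_card.dev = &pdev->dev;     /* must precede the fixup below */
            ret = snd_soc_fixup_dai_links_platform_name(&demo_card,
                                                        "demo-platform");
            if (ret)
                    return ret;

            return devm_snd_soc_register_card(&pdev->dev, &demo_card);
    }
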
+diff --git a/sound/soc/intel/boards/cht_bsw_max98090_ti.c b/sound/soc/intel/boards/cht_bsw_max98090_ti.c
+index c0e0844f75b9..572e336ae0f9 100644
+--- a/sound/soc/intel/boards/cht_bsw_max98090_ti.c
++++ b/sound/soc/intel/boards/cht_bsw_max98090_ti.c
+@@ -454,6 +454,7 @@ static int snd_cht_mc_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	/* override platform name, if required */
++	snd_soc_card_cht.dev = &pdev->dev;
+ 	mach = (&pdev->dev)->platform_data;
+ 	platform_name = mach->mach_params.platform;
+ 
+@@ -463,7 +464,6 @@ static int snd_cht_mc_probe(struct platform_device *pdev)
+ 		return ret_val;
+ 
+ 	/* register the soc card */
+-	snd_soc_card_cht.dev = &pdev->dev;
+ 	snd_soc_card_set_drvdata(&snd_soc_card_cht, drv);
+ 
+ 	if (drv->quirks & QUIRK_PMC_PLT_CLK_0)
+diff --git a/sound/soc/intel/boards/cht_bsw_nau8824.c b/sound/soc/intel/boards/cht_bsw_nau8824.c
+index 02c2fa239331..20fae391c75a 100644
+--- a/sound/soc/intel/boards/cht_bsw_nau8824.c
++++ b/sound/soc/intel/boards/cht_bsw_nau8824.c
+@@ -257,6 +257,7 @@ static int snd_cht_mc_probe(struct platform_device *pdev)
+ 	snd_soc_card_set_drvdata(&snd_soc_card_cht, drv);
+ 
+ 	/* override platform name, if required */
++	snd_soc_card_cht.dev = &pdev->dev;
+ 	mach = (&pdev->dev)->platform_data;
+ 	platform_name = mach->mach_params.platform;
+ 
+@@ -266,7 +267,6 @@ static int snd_cht_mc_probe(struct platform_device *pdev)
+ 		return ret_val;
+ 
+ 	/* register the soc card */
+-	snd_soc_card_cht.dev = &pdev->dev;
+ 	ret_val = devm_snd_soc_register_card(&pdev->dev, &snd_soc_card_cht);
+ 	if (ret_val) {
+ 		dev_err(&pdev->dev,
+diff --git a/sound/soc/intel/boards/cht_bsw_rt5672.c b/sound/soc/intel/boards/cht_bsw_rt5672.c
+index 3d5a2b3a06f0..87ce3857376d 100644
+--- a/sound/soc/intel/boards/cht_bsw_rt5672.c
++++ b/sound/soc/intel/boards/cht_bsw_rt5672.c
+@@ -425,6 +425,7 @@ static int snd_cht_mc_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	/* override platform name, if required */
++	snd_soc_card_cht.dev = &pdev->dev;
+ 	platform_name = mach->mach_params.platform;
+ 
+ 	ret_val = snd_soc_fixup_dai_links_platform_name(&snd_soc_card_cht,
+@@ -442,7 +443,6 @@ static int snd_cht_mc_probe(struct platform_device *pdev)
+ 	snd_soc_card_set_drvdata(&snd_soc_card_cht, drv);
+ 
+ 	/* register the soc card */
+-	snd_soc_card_cht.dev = &pdev->dev;
+ 	ret_val = devm_snd_soc_register_card(&pdev->dev, &snd_soc_card_cht);
+ 	if (ret_val) {
+ 		dev_err(&pdev->dev,
+diff --git a/sound/soc/intel/common/soc-acpi-intel-byt-match.c b/sound/soc/intel/common/soc-acpi-intel-byt-match.c
+index fe812a909db4..3a37f4eca437 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-byt-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-byt-match.c
+@@ -22,6 +22,7 @@ static unsigned long byt_machine_id;
+ 
+ #define BYT_THINKPAD_10  1
+ #define BYT_POV_P1006W   2
++#define BYT_AEGEX_10     3
+ 
+ static int byt_thinkpad10_quirk_cb(const struct dmi_system_id *id)
+ {
+@@ -35,6 +36,12 @@ static int byt_pov_p1006w_quirk_cb(const struct dmi_system_id *id)
+ 	return 1;
+ }
+ 
++static int byt_aegex10_quirk_cb(const struct dmi_system_id *id)
++{
++	byt_machine_id = BYT_AEGEX_10;
++	return 1;
++}
++
+ static const struct dmi_system_id byt_table[] = {
+ 	{
+ 		.callback = byt_thinkpad10_quirk_cb,
+@@ -75,9 +82,18 @@ static const struct dmi_system_id byt_table[] = {
+ 			DMI_EXACT_MATCH(DMI_BOARD_NAME, "0E57"),
+ 		},
+ 	},
++	{
++		/* Aegex 10 tablet (RU2) */
++		.callback = byt_aegex10_quirk_cb,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "AEGEX"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "RU2"),
++		},
++	},
+ 	{ }
+ };
+ 
++/* The Thinkpad 10 and Aegex 10 tablets have the same ID problem */
+ static struct snd_soc_acpi_mach byt_thinkpad_10 = {
+ 	.id = "10EC5640",
+ 	.drv_name = "cht-bsw-rt5672",
+@@ -104,6 +120,7 @@ static struct snd_soc_acpi_mach *byt_quirk(void *arg)
+ 
+ 	switch (byt_machine_id) {
+ 	case BYT_THINKPAD_10:
++	case BYT_AEGEX_10:
+ 		return &byt_thinkpad_10;
+ 	case BYT_POV_P1006W:
+ 		return &byt_pov_p1006w;
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index a7b4fab92f26..c010cc864cf3 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -2067,6 +2067,16 @@ static int snd_soc_instantiate_card(struct snd_soc_card *card)
+ 	int ret, i, order;
+ 
+ 	mutex_lock(&client_mutex);
++	for_each_card_prelinks(card, i, dai_link) {
++		ret = soc_init_dai_link(card, dai_link);
++		if (ret) {
++			soc_cleanup_platform(card);
++			dev_err(card->dev, "ASoC: failed to init link %s: %d\n",
++				dai_link->name, ret);
++			mutex_unlock(&client_mutex);
++			return ret;
++		}
++	}
+ 	mutex_lock_nested(&card->mutex, SND_SOC_CARD_CLASS_INIT);
+ 
+ 	card->dapm.bias_level = SND_SOC_BIAS_OFF;
+@@ -2791,26 +2801,9 @@ static int snd_soc_bind_card(struct snd_soc_card *card)
+  */
+ int snd_soc_register_card(struct snd_soc_card *card)
+ {
+-	int i, ret;
+-	struct snd_soc_dai_link *link;
+-
+ 	if (!card->name || !card->dev)
+ 		return -EINVAL;
+ 
+-	mutex_lock(&client_mutex);
+-	for_each_card_prelinks(card, i, link) {
+-
+-		ret = soc_init_dai_link(card, link);
+-		if (ret) {
+-			soc_cleanup_platform(card);
+-			dev_err(card->dev, "ASoC: failed to init link %s\n",
+-				link->name);
+-			mutex_unlock(&client_mutex);
+-			return ret;
+-		}
+-	}
+-	mutex_unlock(&client_mutex);
+-
+ 	dev_set_drvdata(card->dev, card);
+ 
+ 	snd_soc_initialize_card_lists(card);
+@@ -2841,12 +2834,14 @@ static void snd_soc_unbind_card(struct snd_soc_card *card, bool unregister)
+ 		snd_soc_dapm_shutdown(card);
+ 		snd_soc_flush_all_delayed_work(card);
+ 
++		mutex_lock(&client_mutex);
+ 		/* remove all components used by DAI links on this card */
+ 		for_each_comp_order(order) {
+ 			for_each_card_rtds(card, rtd) {
+ 				soc_remove_link_components(card, rtd, order);
+ 			}
+ 		}
++		mutex_unlock(&client_mutex);
+ 
+ 		soc_cleanup_card_resources(card);
+ 		if (!unregister)
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index be80a12fba27..2a3aacec8057 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -2469,7 +2469,8 @@ int dpcm_be_dai_prepare(struct snd_soc_pcm_runtime *fe, int stream)
+ 
+ 		if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_HW_PARAMS) &&
+ 		    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP) &&
+-		    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_SUSPEND))
++		    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_SUSPEND) &&
++		    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED))
+ 			continue;
+ 
+ 		dev_dbg(be->dev, "ASoC: prepare BE %s\n",
+diff --git a/sound/soc/sunxi/sun4i-i2s.c b/sound/soc/sunxi/sun4i-i2s.c
+index d5ec1a20499d..bc128e2a6096 100644
+--- a/sound/soc/sunxi/sun4i-i2s.c
++++ b/sound/soc/sunxi/sun4i-i2s.c
+@@ -110,7 +110,7 @@
+ 
+ #define SUN8I_I2S_TX_CHAN_MAP_REG	0x44
+ #define SUN8I_I2S_TX_CHAN_SEL_REG	0x34
+-#define SUN8I_I2S_TX_CHAN_OFFSET_MASK		GENMASK(13, 11)
++#define SUN8I_I2S_TX_CHAN_OFFSET_MASK		GENMASK(13, 12)
+ #define SUN8I_I2S_TX_CHAN_OFFSET(offset)	(offset << 12)
+ #define SUN8I_I2S_TX_CHAN_EN_MASK		GENMASK(11, 4)
+ #define SUN8I_I2S_TX_CHAN_EN(num_chan)		(((1 << num_chan) - 1) << 4)
+@@ -460,6 +460,10 @@ static int sun4i_i2s_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ 		regmap_update_bits(i2s->regmap, SUN8I_I2S_TX_CHAN_SEL_REG,
+ 				   SUN8I_I2S_TX_CHAN_OFFSET_MASK,
+ 				   SUN8I_I2S_TX_CHAN_OFFSET(offset));
++
++		regmap_update_bits(i2s->regmap, SUN8I_I2S_RX_CHAN_SEL_REG,
++				   SUN8I_I2S_TX_CHAN_OFFSET_MASK,
++				   SUN8I_I2S_TX_CHAN_OFFSET(offset));
+ 	}
+ 
+ 	regmap_field_write(i2s->field_fmt_mode, val);
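
The mask shrink from GENMASK(13, 11) to GENMASK(13, 12) matters because bit 11 belongs to the adjacent channel-enable field; with the old mask, every offset update clobbered one enable bit. Spelled out against the register layout defined above:

    /* SUN8I_I2S_TX_CHAN_SEL_REG field layout (per the #defines above):
     *   bits 13:12  channel offset -> GENMASK(13, 12) = 0x3000
     *   bits 11:4   channel enable -> GENMASK(11, 4)  = 0x0ff0
     * The old GENMASK(13, 11) = 0x3800 overlapped enable bit 11, and
     * SUN8I_I2S_TX_CHAN_OFFSET() only ever writes at bit 12 anyway.
     */
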
+diff --git a/sound/usb/line6/pcm.c b/sound/usb/line6/pcm.c
+index 72c6f8e82a7e..78c2d6cab3b5 100644
+--- a/sound/usb/line6/pcm.c
++++ b/sound/usb/line6/pcm.c
+@@ -560,6 +560,11 @@ int line6_init_pcm(struct usb_line6 *line6,
+ 	line6pcm->max_packet_size_out =
+ 		usb_maxpacket(line6->usbdev,
+ 			usb_sndisocpipe(line6->usbdev, ep_write), 1);
++	if (!line6pcm->max_packet_size_in || !line6pcm->max_packet_size_out) {
++		dev_err(line6pcm->line6->ifcdev,
++			"cannot get proper max packet size\n");
++		return -EINVAL;
++	}
+ 
+ 	spin_lock_init(&line6pcm->out.lock);
+ 	spin_lock_init(&line6pcm->in.lock);
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index a751a18ca4c2..5783329a3237 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -754,7 +754,7 @@ static int snd_ni_control_init_val(struct usb_mixer_interface *mixer,
+ 		return err;
+ 	}
+ 
+-	kctl->private_value |= (value << 24);
++	kctl->private_value |= ((unsigned int)value << 24);
+ 	return 0;
+ }
+ 
+@@ -915,7 +915,7 @@ static int snd_ftu_eff_switch_init(struct usb_mixer_interface *mixer,
+ 	if (err < 0)
+ 		return err;
+ 
+-	kctl->private_value |= value[0] << 24;
++	kctl->private_value |= (unsigned int)value[0] << 24;
+ 	return 0;
+ }
+ 
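Both hunks above fix the same C pitfall: the control value is a signed narrow type, so it is promoted to a signed int before the shift, and widening the shifted result into the unsigned long private_value sign-extends into the high bits. A minimal user-space demonstration on an LP64 machine (hypothetical value, not from the driver):

    #include <stdio.h>

    int main(void)
    {
            unsigned long private_value = 0;
            signed char value = (signed char)0x80;  /* e.g. a raw value of 128 */

            /* Without the cast: value << 24 yields a negative int (formally
             * even undefined behavior), and widening it sign-extends. */
            private_value |= value << 24;
            printf("no cast:   0x%lx\n", private_value); /* 0xffffffff80000000 */

            /* With the cast, only bits 31..24 are set, as intended. */
            private_value = 0;
            private_value |= (unsigned int)value << 24;
            printf("with cast: 0x%lx\n", private_value); /* 0x80000000 */
            return 0;
    }
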
+diff --git a/tools/testing/radix-tree/idr-test.c b/tools/testing/radix-tree/idr-test.c
+index 1b63bdb7688f..fe33be4c2475 100644
+--- a/tools/testing/radix-tree/idr-test.c
++++ b/tools/testing/radix-tree/idr-test.c
+@@ -287,6 +287,51 @@ static void idr_align_test(struct idr *idr)
+ 	}
+ }
+ 
++DEFINE_IDR(find_idr);
++
++static void *idr_throbber(void *arg)
++{
++	time_t start = time(NULL);
++	int id = *(int *)arg;
++
++	rcu_register_thread();
++	do {
++		idr_alloc(&find_idr, xa_mk_value(id), id, id + 1, GFP_KERNEL);
++		idr_remove(&find_idr, id);
++	} while (time(NULL) < start + 10);
++	rcu_unregister_thread();
++
++	return NULL;
++}
++
++void idr_find_test_1(int anchor_id, int throbber_id)
++{
++	pthread_t throbber;
++	time_t start = time(NULL);
++
++	pthread_create(&throbber, NULL, idr_throbber, &throbber_id);
++
++	BUG_ON(idr_alloc(&find_idr, xa_mk_value(anchor_id), anchor_id,
++				anchor_id + 1, GFP_KERNEL) != anchor_id);
++
++	do {
++		int id = 0;
++		void *entry = idr_get_next(&find_idr, &id);
++		BUG_ON(entry != xa_mk_value(id));
++	} while (time(NULL) < start + 11);
++
++	pthread_join(throbber, NULL);
++
++	idr_remove(&find_idr, anchor_id);
++	BUG_ON(!idr_is_empty(&find_idr));
++}
++
++void idr_find_test(void)
++{
++	idr_find_test_1(100000, 0);
++	idr_find_test_1(0, 100000);
++}
++
+ void idr_checks(void)
+ {
+ 	unsigned long i;
+@@ -368,6 +413,7 @@ void idr_checks(void)
+ 	idr_u32_test(1);
+ 	idr_u32_test(0);
+ 	idr_align_test(&idr);
++	idr_find_test();
+ }
+ 
+ #define module_init(x)



* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-07-14 15:48 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2019-07-14 15:48 UTC (permalink / raw
  To: gentoo-commits

commit:     7ce0954a9d6752e81c5252af7a7aa7c2e5e09723
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jul 14 15:48:24 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jul 14 15:48:24 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7ce0954a

Linux patch 5.1.18

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1017_linux-5.1.18.patch | 6282 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6286 insertions(+)

diff --git a/0000_README b/0000_README
index f6eec27..83bea0b 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,10 @@ Patch:  1016_linux-5.1.17.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.1.17
 
+Patch:  1017_linux-5.1.18.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.1.18
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1017_linux-5.1.18.patch b/1017_linux-5.1.18.patch
new file mode 100644
index 0000000..99403c2
--- /dev/null
+++ b/1017_linux-5.1.18.patch
@@ -0,0 +1,6282 @@
+diff --git a/Documentation/ABI/testing/sysfs-class-net-qmi b/Documentation/ABI/testing/sysfs-class-net-qmi
+index 7122d6264c49..c310db4ccbc2 100644
+--- a/Documentation/ABI/testing/sysfs-class-net-qmi
++++ b/Documentation/ABI/testing/sysfs-class-net-qmi
+@@ -29,7 +29,7 @@ Contact:	Bjørn Mork <bjorn@mork.no>
+ Description:
+ 		Unsigned integer.
+ 
+-		Write a number ranging from 1 to 127 to add a qmap mux
++		Write a number ranging from 1 to 254 to add a qmap mux
+ 		based network device, supported by recent Qualcomm based
+ 		modems.
+ 
+@@ -46,5 +46,5 @@ Contact:	Bjørn Mork <bjorn@mork.no>
+ Description:
+ 		Unsigned integer.
+ 
+-		Write a number ranging from 1 to 127 to delete a previously
++		Write a number ranging from 1 to 254 to delete a previously
+ 		created qmap mux based network device.
+diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
+index ffc064c1ec68..49311f3da6f2 100644
+--- a/Documentation/admin-guide/hw-vuln/index.rst
++++ b/Documentation/admin-guide/hw-vuln/index.rst
+@@ -9,5 +9,6 @@ are configurable at compile, boot or run time.
+ .. toctree::
+    :maxdepth: 1
+ 
++   spectre
+    l1tf
+    mds
+diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
+new file mode 100644
+index 000000000000..25f3b2532198
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/spectre.rst
+@@ -0,0 +1,697 @@
++.. SPDX-License-Identifier: GPL-2.0
++
++Spectre Side Channels
++=====================
++
++Spectre is a class of side channel attacks that exploit branch prediction
++and speculative execution on modern CPUs to read memory, possibly
++bypassing access controls. Speculative execution side channel exploits
++do not modify memory but attempt to infer privileged data in memory.
++
++This document covers Spectre variant 1 and Spectre variant 2.
++
++Affected processors
++-------------------
++
++Speculative execution side channel methods affect a wide range of modern
++high performance processors, since most modern high speed processors
++use branch prediction and speculative execution.
++
++The following CPUs are vulnerable:
++
++    - Intel Core, Atom, Pentium, and Xeon processors
++
++    - AMD Phenom, EPYC, and Zen processors
++
++    - IBM POWER and zSeries processors
++
++    - Higher end ARM processors
++
++    - Apple CPUs
++
++    - Higher end MIPS CPUs
++
++    - Likely most other high performance CPUs. Contact your CPU vendor for details.
++
++Whether a processor is affected or not can be read out from the Spectre
++vulnerability files in sysfs. See :ref:`spectre_sys_info`.
++
++Related CVEs
++------------
++
++The following CVE entries describe Spectre variants:
++
++   =============   =======================  =================
++   CVE-2017-5753   Bounds check bypass      Spectre variant 1
++   CVE-2017-5715   Branch target injection  Spectre variant 2
++   =============   =======================  =================
++
++Problem
++-------
++
++CPUs use speculative operations to improve performance. That may leave
++traces of memory accesses or computations in the processor's caches,
++buffers, and branch predictors. Malicious software may be able to
++influence the speculative execution paths, and then use the side effects
++of the speculative execution in the CPUs' caches and buffers to infer
++privileged data touched during the speculative execution.
++
++Spectre variant 1 attacks take advantage of speculative execution of
++conditional branches, while Spectre variant 2 attacks use speculative
++execution of indirect branches to leak privileged memory.
++See :ref:`[1] <spec_ref1>` :ref:`[5] <spec_ref5>` :ref:`[7] <spec_ref7>`
++:ref:`[10] <spec_ref10>` :ref:`[11] <spec_ref11>`.
++
++Spectre variant 1 (Bounds Check Bypass)
++---------------------------------------
++
++The bounds check bypass attack :ref:`[2] <spec_ref2>` takes advantage
++of speculative execution that bypasses conditional branch instructions
++used for memory access bounds check (e.g. checking if the index of an
++array results in memory access within a valid range). This results in
++memory accesses to invalid memory (with out-of-bound index) that are
++done speculatively before validation checks resolve. Such speculative
++memory accesses can leave side effects, creating side channels which
++leak information to the attacker.
++
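The canonical gadget shape, paraphrased from the Spectre paper (ref. [11] below): the bounds check can be predicted as taken even for an out-of-range x, and the dependent load then leaves a secret-indexed footprint in the cache:

    if (x < array1_size)                  /* may be speculated past */
            y = array2[array1[x] * 4096]; /* secret selects the cache line */
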
++There are some extensions of Spectre variant 1 attacks for reading data
++over the network, see :ref:`[12] <spec_ref12>`. However such attacks
++are difficult, low bandwidth, fragile, and are considered low risk.
++
++Spectre variant 2 (Branch Target Injection)
++-------------------------------------------
++
++The branch target injection attack takes advantage of speculative
++execution of indirect branches :ref:`[3] <spec_ref3>`.  The indirect
++branch predictors inside the processor used to guess the target of
++indirect branches can be influenced by an attacker, causing gadget code
++to be speculatively executed, thus exposing sensitive data touched by
++the victim. The side effects left in the CPU's caches during speculative
++execution can be measured to infer data values.
++
++.. _poison_btb:
++
++In Spectre variant 2 attacks, the attacker can steer speculative indirect
++branches in the victim to gadget code by poisoning the branch target
++buffer of a CPU used for predicting indirect branch addresses. Such
++poisoning could be done by indirect branching into existing code,
++with the address offset of the indirect branch under the attacker's
++control. Since the branch prediction on impacted hardware does not
++fully disambiguate branch address and uses the offset for prediction,
++this could cause privileged code's indirect branch to jump to a gadget
++code with the same offset.
++
++The most useful gadgets take an attacker-controlled input parameter (such
++as a register value) so that the memory read can be controlled. Gadgets
++without input parameters might be possible, but the attacker would have
++very little control over what memory can be read, reducing the risk of
++the attack revealing useful data.
++
++One other variant 2 attack vector is for the attacker to poison the
++return stack buffer (RSB) :ref:`[13] <spec_ref13>` to cause speculative
++subroutine return instruction execution to go to a gadget.  An attacker's
++imbalanced subroutine call instructions might "poison" entries in the
++return stack buffer which are later consumed by a victim's subroutine
++return instructions.  This attack can be mitigated by flushing the return
++stack buffer on context switch, or virtual machine (VM) exit.
++
++On systems with simultaneous multi-threading (SMT), attacks are possible
++from the sibling thread, as level 1 cache and branch target buffer
++(BTB) may be shared between hardware threads in a CPU core.  A malicious
++program running on the sibling thread may influence its peer's BTB to
++steer its indirect branch speculations to gadget code, and measure the
++speculative execution's side effects left in level 1 cache to infer the
++victim's data.
++
++Attack scenarios
++----------------
++
++The following list of attack scenarios has been anticipated, but may
++not cover all possible attack vectors.
++
++1. A user process attacking the kernel
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   The attacker passes a parameter to the kernel via a register or
++   via a known address in memory during a syscall. Such parameter may
++   be used later by the kernel as an index to an array or to derive
++   a pointer for a Spectre variant 1 attack.  The index or pointer
++   is invalid, but bound checks are bypassed in the code branch taken
++   for speculative execution. This could cause privileged memory to be
++   accessed and leaked.
++
++   For kernel code that has been identified where data pointers could
++   potentially be influenced for Spectre attacks, new "nospec" accessor
++   macros are used to prevent speculative loading of data.
++
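Such an annotation typically uses the array_index_nospec() helper from <linux/nospec.h>, which clamps the index to the valid range even on the speculative path. A minimal sketch:

    #include <linux/nospec.h>

    if (index < size) {
            index = array_index_nospec(index, size);
            value = array[index];   /* safe even if the branch mispredicts */
    }
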
++   A Spectre variant 2 attacker can :ref:`poison <poison_btb>` the branch
++   target buffer (BTB) before issuing syscall to launch an attack.
++   After entering the kernel, the kernel could use the poisoned branch
++   target buffer on indirect jump and jump to gadget code in speculative
++   execution.
++
++   If an attacker tries to control the memory addresses leaked during
++   speculative execution, he would also need to pass a parameter to the
++   gadget, either through a register or a known address in memory. After
++   the gadget has executed, he can measure the side effect.
++
++   The kernel can protect itself against consuming poisoned branch
++   target buffer entries by using return trampolines (also known as
++   "retpoline") :ref:`[3] <spec_ref3>` :ref:`[9] <spec_ref9>` for all
++   indirect branches. Return trampolines trap speculative execution paths
++   to prevent jumping to gadget code during speculative execution.
++   x86 CPUs with Enhanced Indirect Branch Restricted Speculation
++   (Enhanced IBRS) available in hardware should use the feature to
++   mitigate Spectre variant 2 instead of retpoline. Enhanced IBRS is
++   more efficient than retpoline.
++
++   There may be gadget code in firmware which could be exploited with
++   Spectre variant 2 attacks by a rogue user process. To mitigate such
++   attacks on x86, the Indirect Branch Restricted Speculation (IBRS)
++   feature is turned on before the kernel invokes any firmware code.
++
++2. A user process attacking another user process
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   A malicious user process can try to attack another user process,
++   either via a context switch on the same hardware thread, or from the
++   sibling hyperthread sharing a physical processor core on simultaneous
++   multi-threading (SMT) system.
++
++   Spectre variant 1 attacks generally require passing parameters
++   between the processes, which needs a data passing relationship, such
++   as remote procedure calls (RPC).  Those parameters are used in gadget
++   code to derive invalid data pointers accessing privileged memory in
++   the attacked process.
++
++   Spectre variant 2 attacks can be launched from a rogue process by
++   :ref:`poisoning <poison_btb>` the branch target buffer.  This can
++   influence the indirect branch targets for a victim process that either
++   runs later on the same hardware thread, or runs concurrently on
++   a sibling hardware thread sharing the same physical core.
++
++   A user process can protect itself against Spectre variant 2 attacks
++   by using the prctl() syscall to disable indirect branch speculation
++   for itself.  An administrator can also cordon off an unsafe process
++   from polluting the branch target buffer by disabling the process's
++   indirect branch speculation. This comes with a performance cost
++   from not using indirect branch speculation and clearing the branch
++   target buffer.  When SMT is enabled on x86, for a process that has
++   indirect branch speculation disabled, Single Threaded Indirect Branch
++   Predictors (STIBP) :ref:`[4] <spec_ref4>` are turned on to prevent the
++   sibling thread from controlling branch target buffer.  In addition,
++   the Indirect Branch Prediction Barrier (IBPB) is issued to clear the
++   branch target buffer when context switching to and from such process.
++
++   On x86, the return stack buffer is stuffed on context switch.
++   This prevents the branch target buffer from being used for branch
++   prediction when the return stack buffer underflows while switching to
++   a deeper call stack. Any poisoned entries in the return stack buffer
++   left by the previous process will also be cleared.
++
++   User programs should use address space randomization to make attacks
++   more difficult (Set /proc/sys/kernel/randomize_va_space = 1 or 2).
++
++3. A virtualized guest attacking the host
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   The attack mechanism is similar to how user processes attack the
++   kernel.  The kernel is entered via hyper-calls or other virtualization
++   exit paths.
++
++   For Spectre variant 1 attacks, rogue guests can pass parameters
++   (e.g. in registers) via hyper-calls to derive invalid pointers to
++   speculate into privileged memory after entering the kernel.  For places
++   where such kernel code has been identified, nospec accessor macros
++   are used to stop speculative memory access.
++
++   For Spectre variant 2 attacks, rogue guests can :ref:`poison
++   <poison_btb>` the branch target buffer or return stack buffer, causing
++   the kernel to jump to gadget code in the speculative execution paths.
++
++   To mitigate variant 2, the host kernel can use return trampolines
++   for indirect branches to bypass the poisoned branch target buffer,
++   and flushing the return stack buffer on VM exit.  This prevents rogue
++   guests from affecting indirect branching in the host kernel.
++
++   To protect host processes from rogue guests, host processes can have
++   indirect branch speculation disabled via prctl().  The branch target
++   buffer is cleared before context switching to such processes.
++
++4. A virtualized guest attacking another guest
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   A rogue guest may attack another guest to get data accessible by the
++   other guest.
++
++   Spectre variant 1 attacks are possible if parameters can be passed
++   between guests.  This may be done via mechanisms such as shared memory
++   or message passing.  Such parameters could be used to derive data
++   pointers to privileged data in guest.  The privileged data could be
++   accessed by gadget code in the victim's speculation paths.
++
++   Spectre variant 2 attacks can be launched from a rogue guest by
++   :ref:`poisoning <poison_btb>` the branch target buffer or the return
++   stack buffer. Such poisoned entries could be used to influence
++   speculation execution paths in the victim guest.
++
++   The Linux kernel mitigates attacks on other guests running in the
++   same CPU hardware thread by flushing the return stack buffer on VM
++   exit, and clearing the branch target buffer before switching to a
++   new guest.
++
++   If SMT is used, Spectre variant 2 attacks from an untrusted guest
++   in the sibling hyperthread can be mitigated by the administrator,
++   by turning off the unsafe guest's indirect branch speculation via
++   prctl().  A guest can also protect itself by turning on microcode
++   based mitigations (such as IBPB or STIBP on x86) within the guest.
++
++.. _spectre_sys_info:
++
++Spectre system information
++--------------------------
++
++The Linux kernel provides a sysfs interface to enumerate the current
++mitigation status of the system for Spectre: whether the system is
++vulnerable, and which mitigations are active.
++
++The sysfs file showing Spectre variant 1 mitigation status is:
++
++   /sys/devices/system/cpu/vulnerabilities/spectre_v1
++
++The possible values in this file are:
++
++  =========================================  ================================
++  'Mitigation: __user pointer sanitization'  Protection in kernel on a case
++                                             by case basis with explicit
++                                             pointer sanitization.
++  =========================================  ================================
++
++However, the protections are put in place on a case by case basis,
++and there is no guarantee that all possible attack vectors for Spectre
++variant 1 are covered.
++
++The spectre_v2 kernel file reports if the kernel has been compiled with
++retpoline mitigation or if the CPU has hardware mitigation, and if the
++CPU has support for additional process-specific mitigation.
++
++This file also reports CPU features enabled by microcode to mitigate
++attack between user processes:
++
++1. Indirect Branch Prediction Barrier (IBPB) to add additional
++   isolation between processes of different users.
++2. Single Thread Indirect Branch Predictors (STIBP) to add additional
++   isolation between CPU threads running on the same core.
++
++These CPU features may impact performance when used and can be enabled
++per process on a case-by-case basis.
++
++The sysfs file showing Spectre variant 2 mitigation status is:
++
++   /sys/devices/system/cpu/vulnerabilities/spectre_v2
++
++The possible values in this file are:
++
++  - Kernel status:
++
++  ====================================  =================================
++  'Not affected'                        The processor is not vulnerable
++  'Vulnerable'                          Vulnerable, no mitigation
++  'Mitigation: Full generic retpoline'  Software-focused mitigation
++  'Mitigation: Full AMD retpoline'      AMD-specific software mitigation
++  'Mitigation: Enhanced IBRS'           Hardware-focused mitigation
++  ====================================  =================================
++
++  - Firmware status: Show if Indirect Branch Restricted Speculation (IBRS) is
++    used to protect against Spectre variant 2 attacks when calling firmware (x86 only).
++
++  ========== =============================================================
++  'IBRS_FW'  Protection against user program attacks when calling firmware
++  ========== =============================================================
++
++  - Indirect branch prediction barrier (IBPB) status for protection between
++    processes of different users. This feature can be controlled through
++    prctl() per process, or through kernel command line options. This is
++    an x86-only feature. For more details see below.
++
++  ===================   ========================================================
++  'IBPB: disabled'      IBPB unused
++  'IBPB: always-on'     Use IBPB on all tasks
++  'IBPB: conditional'   Use IBPB on SECCOMP or indirect branch restricted tasks
++  ===================   ========================================================
++
++  - Single threaded indirect branch prediction (STIBP) status for protection
++    between different hyper threads. This feature can be controlled through
++    prctl() per process, or through kernel command line options. This is
++    an x86-only feature. For more details see below.
++
++  ====================  ========================================================
++  'STIBP: disabled'     STIBP unused
++  'STIBP: forced'       Use STIBP on all tasks
++  'STIBP: conditional'  Use STIBP on SECCOMP or indirect branch restricted tasks
++  ====================  ========================================================
++
++  - Return stack buffer (RSB) protection status:
++
++  =============   ===========================================
++  'RSB filling'   Protection of RSB on context switch enabled
++  =============   ===========================================
++
++Full mitigation might require a microcode update from the CPU
++vendor. When the necessary microcode is not available, the kernel will
++report the system as vulnerable.
++
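A quick way to check a given machine is simply to read those files; a minimal user-space sketch:

    #include <stdio.h>

    int main(void)
    {
            const char *files[] = {
                    "/sys/devices/system/cpu/vulnerabilities/spectre_v1",
                    "/sys/devices/system/cpu/vulnerabilities/spectre_v2",
            };
            char line[256];

            for (int i = 0; i < 2; i++) {
                    FILE *f = fopen(files[i], "r");

                    if (f && fgets(line, sizeof(line), f))
                            printf("%s: %s", files[i], line);
                    if (f)
                            fclose(f);
            }
            return 0;
    }
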
++Turning on mitigation for Spectre variant 1 and Spectre variant 2
++-----------------------------------------------------------------
++
++1. Kernel mitigation
++^^^^^^^^^^^^^^^^^^^^
++
++   For the Spectre variant 1, vulnerable kernel code (as determined
++   by code audit or scanning tools) is annotated on a case by case
++   basis to use nospec accessor macros for bounds clipping :ref:`[2]
++   <spec_ref2>` to avoid any usable disclosure gadgets. However, it may
++   not cover all attack vectors for Spectre variant 1.
++
++   For Spectre variant 2 mitigation, the compiler turns indirect calls or
++   jumps in the kernel into equivalent return trampolines (retpolines)
++   :ref:`[3] <spec_ref3>` :ref:`[9] <spec_ref9>` to go to the target
++   addresses.  Speculative execution paths under retpolines are trapped
++   in an infinite loop to prevent any speculative execution jumping to
++   a gadget.
++
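The construct itself is small. An illustrative x86-64 thunk following the shape published in ref. [9] (a sketch, not copied from the kernel's retpoline.S): the call pushes a return address, the real target is written over it, and the ret then dispatches through the return predictor rather than the indirect branch predictor, while any speculation is parked in the pause/lfence loop:

    __x86_indirect_thunk_rax:
            call    1f
    2:      pause                   /* speculative execution spins here */
            lfence
            jmp     2b
    1:      mov     %rax, (%rsp)    /* overwrite return address with target */
            ret                     /* architecturally jumps to *%rax */
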
++   To turn on retpoline mitigation on a vulnerable CPU, the kernel
++   needs to be compiled with a gcc compiler that supports the
++   -mindirect-branch=thunk-extern -mindirect-branch-register options.
++   If the kernel is compiled with a Clang compiler, the compiler needs
++   to support -mretpoline-external-thunk option.  The kernel config
++   CONFIG_RETPOLINE needs to be turned on, and the CPU needs to run with
++   the latest updated microcode.
++
++   On Intel Skylake-era systems the mitigation covers most, but not all,
++   cases. See :ref:`[3] <spec_ref3>` for more details.
++
++   On CPUs with hardware mitigation for Spectre variant 2 (e.g. Enhanced
++   IBRS on x86), retpoline is automatically disabled at run time.
++
++   The retpoline mitigation is turned on by default on vulnerable
++   CPUs. It can be forced on or off by the administrator
++   via the kernel command line and sysfs control files. See
++   :ref:`spectre_mitigation_control_command_line`.
++
++   On x86, indirect branch restricted speculation is turned on by default
++   before invoking any firmware code to prevent Spectre variant 2 exploits
++   using the firmware.
++
++   Using kernel address space randomization (CONFIG_RANDOMIZE_SLAB=y
++   and CONFIG_SLAB_FREELIST_RANDOM=y in the kernel configuration) makes
++   attacks on the kernel generally more difficult.
++
++2. User program mitigation
++^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   User programs can mitigate Spectre variant 1 using LFENCE or "bounds
++   clipping". For more details see :ref:`[2] <spec_ref2>`.
++
++   For Spectre variant 2 mitigation, individual user programs
++   can be compiled with return trampolines for indirect branches.
++   This protects them from consuming poisoned entries in the branch
++   target buffer left by malicious software.  Alternatively, the
++   programs can disable their indirect branch speculation via prctl()
++   (See :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
++   On x86, this will turn on STIBP to guard against attacks from the
++   sibling thread when the user program is running, and use IBPB to
++   flush the branch target buffer when switching to/from the program.
++
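A minimal sketch of that opt-out, using the constants documented for the prctl() interface (error handling abbreviated):

    #include <stdio.h>
    #include <sys/prctl.h>
    #include <linux/prctl.h>

    int main(void)
    {
            /* Disable indirect branch speculation for this task. */
            if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
                      PR_SPEC_DISABLE, 0, 0))
                    perror("PR_SET_SPECULATION_CTRL");

            /* Read back the per-task state. */
            printf("state: %d\n",
                   prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
                         0, 0, 0));
            return 0;
    }
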
++   Restricting indirect branch speculation on a user program will
++   also prevent the program from launching a variant 2 attack
++   on x86.  All sandboxed SECCOMP programs have indirect branch
++   speculation restricted by default.  Administrators can change
++   that behavior via the kernel command line and sysfs control files.
++   See :ref:`spectre_mitigation_control_command_line`.
++
++   Programs that disable their indirect branch speculation will have
++   more overhead and run slower.
++
++   User programs should use address space randomization
++   (/proc/sys/kernel/randomize_va_space = 1 or 2) to make attacks more
++   difficult.
++
++3. VM mitigation
++^^^^^^^^^^^^^^^^
++
++   Within the kernel, Spectre variant 1 attacks from rogue guests are
++   mitigated on a case by case basis in VM exit paths. Vulnerable code
++   uses nospec accessor macros for "bounds clipping", to avoid any
++   usable disclosure gadgets.  However, this may not cover all variant
++   1 attack vectors.
++
++   For Spectre variant 2 attacks from rogue guests to the kernel, the
++   Linux kernel uses retpoline or Enhanced IBRS to prevent consumption of
++   poisoned entries in branch target buffer left by rogue guests.  It also
++   flushes the return stack buffer on every VM exit to prevent a return
++   stack buffer underflow from falling back to the poisoned branch target
++   buffer, and to clear any poisoned entries an attacker guest may have
++   left in the return stack buffer.
++
++   To mitigate guest-to-guest attacks in the same CPU hardware thread,
++   the branch target buffer is sanitized by flushing before switching
++   to a new guest on a CPU.
++
++   The above mitigations are turned on by default on vulnerable CPUs.
++
++   To mitigate guest-to-guest attacks from sibling thread when SMT is
++   in use, an untrusted guest running in the sibling thread can have
++   its indirect branch speculation disabled by administrator via prctl().
++
++   The kernel also allows guests to use any microcode based mitigation
++   they choose to use (such as IBPB or STIBP on x86) to protect themselves.
++
++.. _spectre_mitigation_control_command_line:
++
++Mitigation control on the kernel command line
++---------------------------------------------
++
++Spectre variant 2 mitigation can be disabled or force enabled at the
++kernel command line.
++
++	nospectre_v2
++
++		[X86] Disable all mitigations for the Spectre variant 2
++		(indirect branch prediction) vulnerability. System may
++		allow data leaks with this option, which is equivalent
++		to spectre_v2=off.
++
++
++        spectre_v2=
++
++		[X86] Control mitigation of Spectre variant 2
++		(indirect branch speculation) vulnerability.
++		The default operation protects the kernel from
++		user space attacks.
++
++		on
++			unconditionally enable, implies
++			spectre_v2_user=on
++		off
++			unconditionally disable, implies
++		        spectre_v2_user=off
++		auto
++			kernel detects whether your CPU model is
++		        vulnerable
++
++		Selecting 'on' will, and 'auto' may, choose a
++		mitigation method at run time according to the
++		CPU, the available microcode, the setting of the
++		CONFIG_RETPOLINE configuration option, and the
++		compiler with which the kernel was built.
++
++		Selecting 'on' will also enable the mitigation
++		against user space to user space task attacks.
++
++		Selecting 'off' will disable both the kernel and
++		the user space protections.
++
++		Specific mitigations can also be selected manually:
++
++		retpoline
++					replace indirect branches
++		retpoline,generic
++					Google's original retpoline
++		retpoline,amd
++					AMD-specific minimal thunk
++
++		Not specifying this option is equivalent to
++		spectre_v2=auto.
++
++For user space mitigation:
++
++        spectre_v2_user=
++
++		[X86] Control mitigation of Spectre variant 2
++		(indirect branch speculation) vulnerability between
++		user space tasks
++
++		on
++			Unconditionally enable mitigations. Is
++			enforced by spectre_v2=on
++
++		off
++			Unconditionally disable mitigations. Is
++			enforced by spectre_v2=off
++
++		prctl
++			Indirect branch speculation is enabled,
++			but mitigation can be enabled via prctl
++			per thread. The mitigation control state
++			is inherited on fork.
++
++		prctl,ibpb
++			Like "prctl" above, but only STIBP is
++			controlled per thread. IBPB is issued
++			always when switching between different user
++			space processes.
++
++		seccomp
++			Same as "prctl" above, but all seccomp
++			threads will enable the mitigation unless
++			they explicitly opt out.
++
++		seccomp,ibpb
++			Like "seccomp" above, but only STIBP is
++			controlled per thread. IBPB is issued
++			always when switching between different
++			user space processes.
++
++		auto
++			Kernel selects the mitigation depending on
++			the available CPU features and vulnerability.
++
++		Default mitigation:
++		If CONFIG_SECCOMP=y then "seccomp", otherwise "prctl"
++
++		Not specifying this option is equivalent to
++		spectre_v2_user=auto.
++
++		In general the kernel by default selects
++		reasonable mitigations for the current CPU. To
++		disable Spectre variant 2 mitigations, boot with
++		spectre_v2=off. Spectre variant 1 mitigations
++		cannot be disabled.
++
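As an illustrative example (values chosen arbitrarily), a boot line combining both options might look like:

    linux /vmlinuz-5.1.18 root=/dev/sda2 ro spectre_v2=retpoline,generic spectre_v2_user=seccomp
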
++Mitigation selection guide
++--------------------------
++
++1. Trusted userspace
++^^^^^^^^^^^^^^^^^^^^
++
++   If all userspace applications are from trusted sources and do not
++   execute externally supplied untrusted code, then the mitigations can
++   be disabled.
++
++2. Protect sensitive programs
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   For security-sensitive programs that have secrets (e.g. crypto
++   keys), protection against Spectre variant 2 can be put in place by
++   disabling indirect branch speculation when the program is running
++   (See :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
++
++3. Sandbox untrusted programs
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   Untrusted programs that could be a source of attacks can be cordoned
++   off by disabling their indirect branch speculation when they are run
++   (See :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
++   This prevents untrusted programs from polluting the branch target
++   buffer.  All programs running in SECCOMP sandboxes have indirect
++   branch speculation restricted by default. This behavior can be
++   changed via the kernel command line and sysfs control files. See
++   :ref:`spectre_mitigation_control_command_line`.
++
++4. High security mode
++^^^^^^^^^^^^^^^^^^^^^
++
++   All Spectre variant 2 mitigations can be forced on
++   at boot time for all programs (See the "on" option in
++   :ref:`spectre_mitigation_control_command_line`).  This will add
++   overhead as indirect branch speculations for all programs will be
++   restricted.
++
++   On x86, branch target buffer will be flushed with IBPB when switching
++   to a new program. STIBP is left on all the time to protect programs
++   against variant 2 attacks originating from programs running on
++   sibling threads.
++
++   Alternatively, STIBP can be used only when running programs
++   whose indirect branch speculation is explicitly disabled,
++   while IBPB is still used all the time when switching to a new
++   program to clear the branch target buffer (See "ibpb" option in
++   :ref:`spectre_mitigation_control_command_line`).  This "ibpb" option
++   has less performance cost than the "on" option, which leaves STIBP
++   on all the time.
++
++References on Spectre
++---------------------
++
++Intel white papers:
++
++.. _spec_ref1:
++
++[1] `Intel analysis of speculative execution side channels <https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/Intel-Analysis-of-Speculative-Execution-Side-Channels.pdf>`_.
++
++.. _spec_ref2:
++
++[2] `Bounds check bypass <https://software.intel.com/security-software-guidance/software-guidance/bounds-check-bypass>`_.
++
++.. _spec_ref3:
++
++[3] `Deep dive: Retpoline: A branch target injection mitigation <https://software.intel.com/security-software-guidance/insights/deep-dive-retpoline-branch-target-injection-mitigation>`_.
++
++.. _spec_ref4:
++
++[4] `Deep Dive: Single Thread Indirect Branch Predictors <https://software.intel.com/security-software-guidance/insights/deep-dive-single-thread-indirect-branch-predictors>`_.
++
++AMD white papers:
++
++.. _spec_ref5:
++
++[5] `AMD64 technology indirect branch control extension <https://developer.amd.com/wp-content/resources/Architecture_Guidelines_Update_Indirect_Branch_Control.pdf>`_.
++
++.. _spec_ref6:
++
++[6] `Software techniques for managing speculation on AMD processors <https://developer.amd.com/wp-content/resources/90343-B_SoftwareTechniquesforManagingSpeculation_WP_7-18Update_FNL.pdf>`_.
++
++ARM white papers:
++
++.. _spec_ref7:
++
++[7] `Cache speculation side-channels <https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/download-the-whitepaper>`_.
++
++.. _spec_ref8:
++
++[8] `Cache speculation issues update <https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/latest-updates/cache-speculation-issues-update>`_.
++
++Google white paper:
++
++.. _spec_ref9:
++
++[9] `Retpoline: a software construct for preventing branch-target-injection <https://support.google.com/faqs/answer/7625886>`_.
++
++MIPS white paper:
++
++.. _spec_ref10:
++
++[10] `MIPS: response on speculative execution and side channel vulnerabilities <https://www.mips.com/blog/mips-response-on-speculative-execution-and-side-channel-vulnerabilities/>`_.
++
++Academic papers:
++
++.. _spec_ref11:
++
++[11] `Spectre Attacks: Exploiting Speculative Execution <https://spectreattack.com/spectre.pdf>`_.
++
++.. _spec_ref12:
++
++[12] `NetSpectre: Read Arbitrary Memory over Network <https://arxiv.org/abs/1807.10535>`_.
++
++.. _spec_ref13:
++
++[13] `Spectre Returns! Speculation Attacks using the Return Stack Buffer <https://www.usenix.org/system/files/conference/woot18/woot18-paper-koruyeh.pdf>`_.
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index c7937f379d22..135438b0fd42 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -5074,12 +5074,6 @@
+ 			emulate     [default] Vsyscalls turn into traps and are
+ 			            emulated reasonably safely.
+ 
+-			native      Vsyscalls are native syscall instructions.
+-			            This is a little bit faster than trapping
+-			            and makes a few dynamic recompilers work
+-			            better than they would in emulation mode.
+-			            It also makes exploits much easier to write.
+-
+ 			none        Vsyscalls don't work at all.  This makes
+ 			            them quite hard to use for exploits but
+ 			            might break your system.
+diff --git a/Documentation/devicetree/bindings/net/can/microchip,mcp251x.txt b/Documentation/devicetree/bindings/net/can/microchip,mcp251x.txt
+index 188c8bd4eb67..5a0111d4de58 100644
+--- a/Documentation/devicetree/bindings/net/can/microchip,mcp251x.txt
++++ b/Documentation/devicetree/bindings/net/can/microchip,mcp251x.txt
+@@ -4,6 +4,7 @@ Required properties:
+  - compatible: Should be one of the following:
+    - "microchip,mcp2510" for MCP2510.
+    - "microchip,mcp2515" for MCP2515.
++   - "microchip,mcp25625" for MCP25625.
+  - reg: SPI chip select.
+  - clocks: The clock feeding the CAN controller.
+  - interrupts: Should contain IRQ line for the CAN controller.
+diff --git a/Documentation/userspace-api/spec_ctrl.rst b/Documentation/userspace-api/spec_ctrl.rst
+index 1129c7550a48..7ddd8f667459 100644
+--- a/Documentation/userspace-api/spec_ctrl.rst
++++ b/Documentation/userspace-api/spec_ctrl.rst
+@@ -49,6 +49,8 @@ If PR_SPEC_PRCTL is set, then the per-task control of the mitigation is
+ available. If not set, prctl(PR_SET_SPECULATION_CTRL) for the speculation
+ misfeature will fail.
+ 
++.. _set_spec_ctrl:
++
+ PR_SET_SPECULATION_CTRL
+ -----------------------
+ 
+diff --git a/Makefile b/Makefile
+index 14c91d46583f..01a0a61f86e7 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 17
++SUBLEVEL = 18
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arm/boot/dts/am335x-pcm-953.dtsi b/arch/arm/boot/dts/am335x-pcm-953.dtsi
+index 1ec8e0d80191..572fbd254690 100644
+--- a/arch/arm/boot/dts/am335x-pcm-953.dtsi
++++ b/arch/arm/boot/dts/am335x-pcm-953.dtsi
+@@ -197,7 +197,7 @@
+ 	bus-width = <4>;
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&mmc1_pins>;
+-	cd-gpios = <&gpio0 6 GPIO_ACTIVE_HIGH>;
++	cd-gpios = <&gpio0 6 GPIO_ACTIVE_LOW>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm/boot/dts/am335x-wega.dtsi b/arch/arm/boot/dts/am335x-wega.dtsi
+index 8ce541739b24..83e4fe595e37 100644
+--- a/arch/arm/boot/dts/am335x-wega.dtsi
++++ b/arch/arm/boot/dts/am335x-wega.dtsi
+@@ -157,7 +157,7 @@
+ 	bus-width = <4>;
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&mmc1_pins>;
+-	cd-gpios = <&gpio0 6 GPIO_ACTIVE_HIGH>;
++	cd-gpios = <&gpio0 6 GPIO_ACTIVE_LOW>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm/boot/dts/dra7-l4.dtsi b/arch/arm/boot/dts/dra7-l4.dtsi
+index 414f1cd68733..03282d9edf7d 100644
+--- a/arch/arm/boot/dts/dra7-l4.dtsi
++++ b/arch/arm/boot/dts/dra7-l4.dtsi
+@@ -4450,8 +4450,6 @@
+ 			timer12: timer@0 {
+ 				compatible = "ti,omap5430-timer";
+ 				reg = <0x0 0x80>;
+-				clocks = <&wkupaon_clkctrl DRA7_WKUPAON_TIMER12_CLKCTRL 24>;
+-				clock-names = "fck";
+ 				interrupts = <GIC_SPI 90 IRQ_TYPE_LEVEL_HIGH>;
+ 				ti,timer-alwon;
+ 				ti,timer-secure;
+diff --git a/arch/arm/mach-davinci/board-da850-evm.c b/arch/arm/mach-davinci/board-da850-evm.c
+index 1fdc9283a8c5..2450936b91d0 100644
+--- a/arch/arm/mach-davinci/board-da850-evm.c
++++ b/arch/arm/mach-davinci/board-da850-evm.c
+@@ -1479,6 +1479,8 @@ static __init void da850_evm_init(void)
+ 	if (ret)
+ 		pr_warn("%s: dsp/rproc registration failed: %d\n",
+ 			__func__, ret);
++
++	regulator_has_full_constraints();
+ }
+ 
+ #ifdef CONFIG_SERIAL_8250_CONSOLE
+diff --git a/arch/arm/mach-davinci/devices-da8xx.c b/arch/arm/mach-davinci/devices-da8xx.c
+index b8dc674e06bc..261240913b45 100644
+--- a/arch/arm/mach-davinci/devices-da8xx.c
++++ b/arch/arm/mach-davinci/devices-da8xx.c
+@@ -686,6 +686,9 @@ static struct platform_device da8xx_lcdc_device = {
+ 	.id		= 0,
+ 	.num_resources	= ARRAY_SIZE(da8xx_lcdc_resources),
+ 	.resource	= da8xx_lcdc_resources,
++	.dev		= {
++		.coherent_dma_mask	= DMA_BIT_MASK(32),
++	}
+ };
+ 
+ int __init da8xx_register_lcdc(struct da8xx_lcdc_platform_data *pdata)
+diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
+index ed870468ef6f..d408711d09fb 100644
+--- a/arch/powerpc/include/asm/page.h
++++ b/arch/powerpc/include/asm/page.h
+@@ -330,6 +330,13 @@ struct vm_area_struct;
+ #endif /* __ASSEMBLY__ */
+ #include <asm/slice.h>
+ 
++/*
++ * Allow 30-bit DMA for very limited Broadcom wifi chips on many powerbooks.
++ */
++#ifdef CONFIG_PPC32
++#define ARCH_ZONE_DMA_BITS 30
++#else
+ #define ARCH_ZONE_DMA_BITS 31
++#endif
+ 
+ #endif /* _ASM_POWERPC_PAGE_H */
+diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
+index f6787f90e158..b98ce400a889 100644
+--- a/arch/powerpc/mm/mem.c
++++ b/arch/powerpc/mm/mem.c
+@@ -255,7 +255,8 @@ void __init paging_init(void)
+ 	       (long int)((top_of_ram - total_ram) >> 20));
+ 
+ #ifdef CONFIG_ZONE_DMA
+-	max_zone_pfns[ZONE_DMA]	= min(max_low_pfn, 0x7fffffffUL >> PAGE_SHIFT);
++	max_zone_pfns[ZONE_DMA]	= min(max_low_pfn,
++			((1UL << ARCH_ZONE_DMA_BITS) - 1) >> PAGE_SHIFT);
+ #endif
+ 	max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
+ #ifdef CONFIG_HIGHMEM
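
For reference, with the usual 4 KiB pages the new bound works out to the first gigabyte on PPC32:

    /* ((1UL << 30) - 1) >> 12 == 0x3ffff, so ZONE_DMA spans pfns
     * 0..0x3ffff, i.e. 262144 pages = 1 GiB of address space. */
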
+diff --git a/arch/powerpc/platforms/powermac/Kconfig b/arch/powerpc/platforms/powermac/Kconfig
+index f834a19ed772..c02d8c503b29 100644
+--- a/arch/powerpc/platforms/powermac/Kconfig
++++ b/arch/powerpc/platforms/powermac/Kconfig
+@@ -7,6 +7,7 @@ config PPC_PMAC
+ 	select PPC_INDIRECT_PCI if PPC32
+ 	select PPC_MPC106 if PPC32
+ 	select PPC_NATIVE
++	select ZONE_DMA if PPC32
+ 	default y
+ 
+ config PPC_PMAC64
+diff --git a/arch/riscv/configs/defconfig b/arch/riscv/configs/defconfig
+index 2fd3461e50ab..4f02967e55de 100644
+--- a/arch/riscv/configs/defconfig
++++ b/arch/riscv/configs/defconfig
+@@ -49,6 +49,8 @@ CONFIG_SERIAL_8250=y
+ CONFIG_SERIAL_8250_CONSOLE=y
+ CONFIG_SERIAL_OF_PLATFORM=y
+ CONFIG_SERIAL_EARLYCON_RISCV_SBI=y
++CONFIG_SERIAL_SIFIVE=y
++CONFIG_SERIAL_SIFIVE_CONSOLE=y
+ CONFIG_HVC_RISCV_SBI=y
+ # CONFIG_PTP_1588_CLOCK is not set
+ CONFIG_DRM=y
+@@ -64,6 +66,8 @@ CONFIG_USB_OHCI_HCD_PLATFORM=y
+ CONFIG_USB_STORAGE=y
+ CONFIG_USB_UAS=y
+ CONFIG_VIRTIO_MMIO=y
++CONFIG_CLK_SIFIVE=y
++CONFIG_CLK_SIFIVE_FU540_PRCI=y
+ CONFIG_SIFIVE_PLIC=y
+ CONFIG_EXT4_FS=y
+ CONFIG_EXT4_FS_POSIX_ACL=y
+diff --git a/arch/riscv/lib/delay.c b/arch/riscv/lib/delay.c
+index dce8ae24c6d3..ee6853c1e341 100644
+--- a/arch/riscv/lib/delay.c
++++ b/arch/riscv/lib/delay.c
+@@ -88,7 +88,7 @@ EXPORT_SYMBOL(__delay);
+ 
+ void udelay(unsigned long usecs)
+ {
+-	unsigned long ucycles = usecs * lpj_fine * UDELAY_MULT;
++	u64 ucycles = (u64)usecs * lpj_fine * UDELAY_MULT;
+ 
+ 	if (unlikely(usecs > MAX_UDELAY_US)) {
+ 		__delay((u64)usecs * riscv_timebase / 1000000ULL);
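+
+The udelay() change is a plain integer-width fix: the product
+usecs * lpj_fine * UDELAY_MULT is evaluated in unsigned long and can
+wrap for large arguments, while the u64 product cannot.  The effect,
+reproduced in userspace with 32-bit arithmetic and stand-in constants
+(the values below are assumptions for illustration, not the real
+lpj_fine or UDELAY_MULT):
+
+    #include <stdio.h>
+    #include <stdint.h>
+
+    int main(void)
+    {
+            uint32_t usecs = 3000, lpj_fine = 1000000, mult = 4295;
+
+            uint32_t narrow = usecs * lpj_fine * mult;          /* wraps */
+            uint64_t wide = (uint64_t)usecs * lpj_fine * mult;  /* exact */
+
+            printf("32-bit product: %u\n", narrow);
+            printf("64-bit product: %llu\n", (unsigned long long)wide);
+            return 0;
+    }
+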
+diff --git a/arch/riscv/net/bpf_jit_comp.c b/arch/riscv/net/bpf_jit_comp.c
+index 80b12aa5e10d..426d5c33ea90 100644
+--- a/arch/riscv/net/bpf_jit_comp.c
++++ b/arch/riscv/net/bpf_jit_comp.c
+@@ -751,22 +751,32 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
+ 	case BPF_ALU | BPF_ADD | BPF_X:
+ 	case BPF_ALU64 | BPF_ADD | BPF_X:
+ 		emit(is64 ? rv_add(rd, rd, rs) : rv_addw(rd, rd, rs), ctx);
++		if (!is64)
++			emit_zext_32(rd, ctx);
+ 		break;
+ 	case BPF_ALU | BPF_SUB | BPF_X:
+ 	case BPF_ALU64 | BPF_SUB | BPF_X:
+ 		emit(is64 ? rv_sub(rd, rd, rs) : rv_subw(rd, rd, rs), ctx);
++		if (!is64)
++			emit_zext_32(rd, ctx);
+ 		break;
+ 	case BPF_ALU | BPF_AND | BPF_X:
+ 	case BPF_ALU64 | BPF_AND | BPF_X:
+ 		emit(rv_and(rd, rd, rs), ctx);
++		if (!is64)
++			emit_zext_32(rd, ctx);
+ 		break;
+ 	case BPF_ALU | BPF_OR | BPF_X:
+ 	case BPF_ALU64 | BPF_OR | BPF_X:
+ 		emit(rv_or(rd, rd, rs), ctx);
++		if (!is64)
++			emit_zext_32(rd, ctx);
+ 		break;
+ 	case BPF_ALU | BPF_XOR | BPF_X:
+ 	case BPF_ALU64 | BPF_XOR | BPF_X:
+ 		emit(rv_xor(rd, rd, rs), ctx);
++		if (!is64)
++			emit_zext_32(rd, ctx);
+ 		break;
+ 	case BPF_ALU | BPF_MUL | BPF_X:
+ 	case BPF_ALU64 | BPF_MUL | BPF_X:
+@@ -789,14 +799,20 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
+ 	case BPF_ALU | BPF_LSH | BPF_X:
+ 	case BPF_ALU64 | BPF_LSH | BPF_X:
+ 		emit(is64 ? rv_sll(rd, rd, rs) : rv_sllw(rd, rd, rs), ctx);
++		if (!is64)
++			emit_zext_32(rd, ctx);
+ 		break;
+ 	case BPF_ALU | BPF_RSH | BPF_X:
+ 	case BPF_ALU64 | BPF_RSH | BPF_X:
+ 		emit(is64 ? rv_srl(rd, rd, rs) : rv_srlw(rd, rd, rs), ctx);
++		if (!is64)
++			emit_zext_32(rd, ctx);
+ 		break;
+ 	case BPF_ALU | BPF_ARSH | BPF_X:
+ 	case BPF_ALU64 | BPF_ARSH | BPF_X:
+ 		emit(is64 ? rv_sra(rd, rd, rs) : rv_sraw(rd, rd, rs), ctx);
++		if (!is64)
++			emit_zext_32(rd, ctx);
+ 		break;
+ 
+ 	/* dst = -dst */
+@@ -804,6 +820,8 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
+ 	case BPF_ALU64 | BPF_NEG:
+ 		emit(is64 ? rv_sub(rd, RV_REG_ZERO, rd) :
+ 		     rv_subw(rd, RV_REG_ZERO, rd), ctx);
++		if (!is64)
++			emit_zext_32(rd, ctx);
+ 		break;
+ 
+ 	/* dst = BSWAP##imm(dst) */
+@@ -958,14 +976,20 @@ out_be:
+ 	case BPF_ALU | BPF_LSH | BPF_K:
+ 	case BPF_ALU64 | BPF_LSH | BPF_K:
+ 		emit(is64 ? rv_slli(rd, rd, imm) : rv_slliw(rd, rd, imm), ctx);
++		if (!is64)
++			emit_zext_32(rd, ctx);
+ 		break;
+ 	case BPF_ALU | BPF_RSH | BPF_K:
+ 	case BPF_ALU64 | BPF_RSH | BPF_K:
+ 		emit(is64 ? rv_srli(rd, rd, imm) : rv_srliw(rd, rd, imm), ctx);
++		if (!is64)
++			emit_zext_32(rd, ctx);
+ 		break;
+ 	case BPF_ALU | BPF_ARSH | BPF_K:
+ 	case BPF_ALU64 | BPF_ARSH | BPF_K:
+ 		emit(is64 ? rv_srai(rd, rd, imm) : rv_sraiw(rd, rd, imm), ctx);
++		if (!is64)
++			emit_zext_32(rd, ctx);
+ 		break;
+ 
+ 	/* JUMP off */
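+
+All of the emit_zext_32() calls above exist because RV64's 32-bit ALU
+instructions (addw, subw, sllw, ...) sign-extend their result into the
+full 64-bit register, while BPF defines 32-bit ALU ops as leaving the
+upper half zero.  A userspace model of one case (illustrative C, not
+JIT output):
+
+    #include <stdio.h>
+    #include <stdint.h>
+
+    /* What RV64 addw does: 32-bit add, result sign-extended to 64 bits. */
+    static int64_t emulate_addw(int64_t a, int64_t b)
+    {
+            return (int64_t)(int32_t)((uint32_t)a + (uint32_t)b);
+    }
+
+    /* What emit_zext_32() achieves: clear the upper 32 bits. */
+    static uint64_t zext_32(int64_t r)
+    {
+            return (uint64_t)r & 0xffffffffULL;
+    }
+
+    int main(void)
+    {
+            int64_t r = emulate_addw(0x7fffffff, 1);
+
+            printf("after addw: %016llx\n", (unsigned long long)r);
+            printf("after zext: %016llx\n", (unsigned long long)zext_32(r));
+            return 0;  /* ffffffff80000000 then 0000000080000000 */
+    }
+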
+diff --git a/arch/s390/Makefile b/arch/s390/Makefile
+index e21053e5e0da..bbd2dab6730e 100644
+--- a/arch/s390/Makefile
++++ b/arch/s390/Makefile
+@@ -24,6 +24,7 @@ KBUILD_CFLAGS_DECOMPRESSOR += -DDISABLE_BRANCH_PROFILING -D__NO_FORTIFY
+ KBUILD_CFLAGS_DECOMPRESSOR += -fno-delete-null-pointer-checks -msoft-float
+ KBUILD_CFLAGS_DECOMPRESSOR += -fno-asynchronous-unwind-tables
+ KBUILD_CFLAGS_DECOMPRESSOR += $(call cc-option,-ffreestanding)
++KBUILD_CFLAGS_DECOMPRESSOR += $(call cc-disable-warning, address-of-packed-member)
+ KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO),-g)
+ KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO_DWARF4), $(call cc-option, -gdwarf-4,))
+ UTS_MACHINE	:= s390x
+diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
+index 4b8ee05dd6ad..6d8f108dd563 100644
+--- a/arch/x86/kernel/ptrace.c
++++ b/arch/x86/kernel/ptrace.c
+@@ -24,6 +24,7 @@
+ #include <linux/rcupdate.h>
+ #include <linux/export.h>
+ #include <linux/context_tracking.h>
++#include <linux/nospec.h>
+ 
+ #include <linux/uaccess.h>
+ #include <asm/pgtable.h>
+@@ -642,9 +643,11 @@ static unsigned long ptrace_get_debugreg(struct task_struct *tsk, int n)
+ {
+ 	struct thread_struct *thread = &tsk->thread;
+ 	unsigned long val = 0;
++	int index = n;
+ 
+ 	if (n < HBP_NUM) {
+-		struct perf_event *bp = thread->ptrace_bps[n];
++		struct perf_event *bp =
++			thread->ptrace_bps[array_index_nospec(index, HBP_NUM)];
+ 
+ 		if (bp)
+ 			val = bp->hw.info.address;
+diff --git a/arch/x86/kernel/tls.c b/arch/x86/kernel/tls.c
+index a5b802a12212..71d3fef1edc9 100644
+--- a/arch/x86/kernel/tls.c
++++ b/arch/x86/kernel/tls.c
+@@ -5,6 +5,7 @@
+ #include <linux/user.h>
+ #include <linux/regset.h>
+ #include <linux/syscalls.h>
++#include <linux/nospec.h>
+ 
+ #include <linux/uaccess.h>
+ #include <asm/desc.h>
+@@ -220,6 +221,7 @@ int do_get_thread_area(struct task_struct *p, int idx,
+ 		       struct user_desc __user *u_info)
+ {
+ 	struct user_desc info;
++	int index;
+ 
+ 	if (idx == -1 && get_user(idx, &u_info->entry_number))
+ 		return -EFAULT;
+@@ -227,8 +229,11 @@ int do_get_thread_area(struct task_struct *p, int idx,
+ 	if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX)
+ 		return -EINVAL;
+ 
+-	fill_user_desc(&info, idx,
+-		       &p->thread.tls_array[idx - GDT_ENTRY_TLS_MIN]);
++	index = idx - GDT_ENTRY_TLS_MIN;
++	index = array_index_nospec(index,
++			GDT_ENTRY_TLS_MAX - GDT_ENTRY_TLS_MIN + 1);
++
++	fill_user_desc(&info, idx, &p->thread.tls_array[index]);
+ 
+ 	if (copy_to_user(u_info, &info, sizeof(info)))
+ 		return -EFAULT;
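+
+Both x86 hunks apply the standard Spectre-v1 hardening: ptrace clamps n
+against HBP_NUM and do_get_thread_area() clamps the TLS slot, in each
+case with array_index_nospec() before the array access, so a
+mispredicted bounds check cannot speculatively index out of the array.
+A userspace sketch of the generic masking fallback (x86 itself uses a
+cmp/sbb asm sequence; like the kernel, this assumes arithmetic right
+shift of a negative long):
+
+    #include <stdio.h>
+
+    /* ~0UL when 0 <= index < size, else 0, computed without a branch. */
+    static unsigned long index_mask(unsigned long index, unsigned long size)
+    {
+            return ~(long)(index | (size - 1UL - index))
+                   >> (sizeof(long) * 8 - 1);
+    }
+
+    int main(void)
+    {
+            unsigned long size = 8, idx;
+
+            for (idx = 6; idx <= 9; idx++)
+                    printf("idx %lu -> clamped %lu\n",
+                           idx, idx & index_mask(idx, size));
+            return 0;  /* 6 and 7 pass through, 8 and 9 clamp to 0 */
+    }
+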
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 5fa0c17d0b41..4ca834d22169 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -1404,7 +1404,7 @@ static int copy_enlightened_to_vmcs12(struct vcpu_vmx *vmx)
+ 	}
+ 
+ 	if (unlikely(!(evmcs->hv_clean_fields &
+-		       HV_VMX_ENLIGHTENED_CLEAN_FIELD_CONTROL_PROC))) {
++		       HV_VMX_ENLIGHTENED_CLEAN_FIELD_CONTROL_EXCPN))) {
+ 		vmcs12->exception_bitmap = evmcs->exception_bitmap;
+ 	}
+ 
+@@ -1444,7 +1444,7 @@ static int copy_enlightened_to_vmcs12(struct vcpu_vmx *vmx)
+ 	}
+ 
+ 	if (unlikely(!(evmcs->hv_clean_fields &
+-		       HV_VMX_ENLIGHTENED_CLEAN_FIELD_HOST_GRP1))) {
++		       HV_VMX_ENLIGHTENED_CLEAN_FIELD_CONTROL_GRP1))) {
+ 		vmcs12->pin_based_vm_exec_control =
+ 			evmcs->pin_based_vm_exec_control;
+ 		vmcs12->vm_exit_controls = evmcs->vm_exit_controls;
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index afabf597c855..d88bc0935886 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -190,9 +190,7 @@ struct jit_context {
+ #define BPF_MAX_INSN_SIZE	128
+ #define BPF_INSN_SAFETY		64
+ 
+-#define AUX_STACK_SPACE		40 /* Space for RBX, R13, R14, R15, tailcnt */
+-
+-#define PROLOGUE_SIZE		37
++#define PROLOGUE_SIZE		20
+ 
+ /*
+  * Emit x86-64 prologue code for BPF program and check its size.
+@@ -203,44 +201,19 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf)
+ 	u8 *prog = *pprog;
+ 	int cnt = 0;
+ 
+-	/* push rbp */
+-	EMIT1(0x55);
+-
+-	/* mov rbp,rsp */
+-	EMIT3(0x48, 0x89, 0xE5);
+-
+-	/* sub rsp, rounded_stack_depth + AUX_STACK_SPACE */
+-	EMIT3_off32(0x48, 0x81, 0xEC,
+-		    round_up(stack_depth, 8) + AUX_STACK_SPACE);
+-
+-	/* sub rbp, AUX_STACK_SPACE */
+-	EMIT4(0x48, 0x83, 0xED, AUX_STACK_SPACE);
+-
+-	/* mov qword ptr [rbp+0],rbx */
+-	EMIT4(0x48, 0x89, 0x5D, 0);
+-	/* mov qword ptr [rbp+8],r13 */
+-	EMIT4(0x4C, 0x89, 0x6D, 8);
+-	/* mov qword ptr [rbp+16],r14 */
+-	EMIT4(0x4C, 0x89, 0x75, 16);
+-	/* mov qword ptr [rbp+24],r15 */
+-	EMIT4(0x4C, 0x89, 0x7D, 24);
+-
++	EMIT1(0x55);             /* push rbp */
++	EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
++	/* sub rsp, rounded_stack_depth */
++	EMIT3_off32(0x48, 0x81, 0xEC, round_up(stack_depth, 8));
++	EMIT1(0x53);             /* push rbx */
++	EMIT2(0x41, 0x55);       /* push r13 */
++	EMIT2(0x41, 0x56);       /* push r14 */
++	EMIT2(0x41, 0x57);       /* push r15 */
+ 	if (!ebpf_from_cbpf) {
+-		/*
+-		 * Clear the tail call counter (tail_call_cnt): for eBPF tail
+-		 * calls we need to reset the counter to 0. It's done in two
+-		 * instructions, resetting RAX register to 0, and moving it
+-		 * to the counter location.
+-		 */
+-
+-		/* xor eax, eax */
+-		EMIT2(0x31, 0xc0);
+-		/* mov qword ptr [rbp+32], rax */
+-		EMIT4(0x48, 0x89, 0x45, 32);
+-
++		/* zero init tail_call_cnt */
++		EMIT2(0x6a, 0x00);
+ 		BUILD_BUG_ON(cnt != PROLOGUE_SIZE);
+ 	}
+-
+ 	*pprog = prog;
+ }
+ 
+@@ -285,13 +258,13 @@ static void emit_bpf_tail_call(u8 **pprog)
+ 	 * if (tail_call_cnt > MAX_TAIL_CALL_CNT)
+ 	 *	goto out;
+ 	 */
+-	EMIT2_off32(0x8B, 0x85, 36);              /* mov eax, dword ptr [rbp + 36] */
++	EMIT2_off32(0x8B, 0x85, -36 - MAX_BPF_STACK); /* mov eax, dword ptr [rbp - 548] */
+ 	EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT);     /* cmp eax, MAX_TAIL_CALL_CNT */
+ #define OFFSET2 (30 + RETPOLINE_RAX_BPF_JIT_SIZE)
+ 	EMIT2(X86_JA, OFFSET2);                   /* ja out */
+ 	label2 = cnt;
+ 	EMIT3(0x83, 0xC0, 0x01);                  /* add eax, 1 */
+-	EMIT2_off32(0x89, 0x85, 36);              /* mov dword ptr [rbp + 36], eax */
++	EMIT2_off32(0x89, 0x85, -36 - MAX_BPF_STACK); /* mov dword ptr [rbp -548], eax */
+ 
+ 	/* prog = array->ptrs[index]; */
+ 	EMIT4_off32(0x48, 0x8B, 0x84, 0xD6,       /* mov rax, [rsi + rdx * 8 + offsetof(...)] */
+@@ -1040,19 +1013,14 @@ emit_jmp:
+ 			seen_exit = true;
+ 			/* Update cleanup_addr */
+ 			ctx->cleanup_addr = proglen;
+-			/* mov rbx, qword ptr [rbp+0] */
+-			EMIT4(0x48, 0x8B, 0x5D, 0);
+-			/* mov r13, qword ptr [rbp+8] */
+-			EMIT4(0x4C, 0x8B, 0x6D, 8);
+-			/* mov r14, qword ptr [rbp+16] */
+-			EMIT4(0x4C, 0x8B, 0x75, 16);
+-			/* mov r15, qword ptr [rbp+24] */
+-			EMIT4(0x4C, 0x8B, 0x7D, 24);
+-
+-			/* add rbp, AUX_STACK_SPACE */
+-			EMIT4(0x48, 0x83, 0xC5, AUX_STACK_SPACE);
+-			EMIT1(0xC9); /* leave */
+-			EMIT1(0xC3); /* ret */
++			if (!bpf_prog_was_classic(bpf_prog))
++				EMIT1(0x5B); /* get rid of tail_call_cnt */
++			EMIT2(0x41, 0x5F);   /* pop r15 */
++			EMIT2(0x41, 0x5E);   /* pop r14 */
++			EMIT2(0x41, 0x5D);   /* pop r13 */
++			EMIT1(0x5B);         /* pop rbx */
++			EMIT1(0xC9);         /* leave */
++			EMIT1(0xC3);         /* ret */
+ 			break;
+ 
+ 		default:
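+
+The x86 JIT rewrite swaps four 4-byte reg-to-stack moves plus the
+two-instruction counter reset for plain pushes, which is exactly the
+PROLOGUE_SIZE drop from 37 to 20 seen above (old: 1+3+7+4+16+2+4).  The
+new byte accounting, spelled out:
+
+    #include <stdio.h>
+
+    int main(void)
+    {
+            int push_rbp = 1;              /* 55 */
+            int mov_rbp_rsp = 3;           /* 48 89 e5 */
+            int sub_rsp_imm32 = 7;         /* 48 81 ec + imm32 */
+            int push_rbx = 1;              /* 53 */
+            int push_r13_r14_r15 = 3 * 2;  /* 41 55 / 41 56 / 41 57 */
+            int push_imm8 = 2;             /* 6a 00: zeroed tail_call_cnt */
+
+            printf("PROLOGUE_SIZE = %d\n",  /* prints 20 */
+                   push_rbp + mov_rbp_rsp + sub_rsp_imm32 +
+                   push_rbx + push_r13_r14_r15 + push_imm8);
+            return 0;
+    }
+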
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 679d608347ea..7d4d47177940 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -4265,6 +4265,7 @@ static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic, bool is_sync)
+ 		unsigned long flags;
+ 
+ 		spin_lock_irqsave(&bfqd->lock, flags);
++		bfqq->bic = NULL;
+ 		bfq_exit_bfqq(bfqd, bfqq);
+ 		bic_set_bfqq(bic, NULL, is_sync);
+ 		spin_unlock_irqrestore(&bfqd->lock, flags);
+diff --git a/crypto/lrw.c b/crypto/lrw.c
+index cc5c89246193..ac2a09ede90f 100644
+--- a/crypto/lrw.c
++++ b/crypto/lrw.c
+@@ -388,7 +388,7 @@ static int create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	inst->alg.base.cra_priority = alg->base.cra_priority;
+ 	inst->alg.base.cra_blocksize = LRW_BLOCK_SIZE;
+ 	inst->alg.base.cra_alignmask = alg->base.cra_alignmask |
+-				       (__alignof__(__be32) - 1);
++				       (__alignof__(be128) - 1);
+ 
+ 	inst->alg.ivsize = LRW_BLOCK_SIZE;
+ 	inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg) +
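+
+The LRW fix widens the alignmask because the template's tweak
+arithmetic operates on 128-bit be128 blocks, not 32-bit words; be128 is
+a pair of 64-bit big-endian values, so it typically requires 8-byte
+alignment where __be32 only required 4.  Checking the difference with
+C11 _Alignof (the struct below mimics the kernel's be128 layout; exact
+alignments are target-dependent):
+
+    #include <stdio.h>
+    #include <stdint.h>
+
+    typedef struct { uint64_t a, b; } be128;  /* same shape as the kernel's */
+
+    int main(void)
+    {
+            printf("__be32 alignmask: %zu\n", _Alignof(uint32_t) - 1);
+            printf("be128 alignmask:  %zu\n", _Alignof(be128) - 1);
+            return 0;  /* typically 3 and 7 */
+    }
+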
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index cc19d91c1688..e64db514f1c4 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -2068,10 +2068,9 @@ static size_t binder_get_object(struct binder_proc *proc,
+ 
+ 	read_size = min_t(size_t, sizeof(*object), buffer->data_size - offset);
+ 	if (offset > buffer->data_size || read_size < sizeof(*hdr) ||
+-	    !IS_ALIGNED(offset, sizeof(u32)))
++	    binder_alloc_copy_from_buffer(&proc->alloc, object, buffer,
++					  offset, read_size))
+ 		return 0;
+-	binder_alloc_copy_from_buffer(&proc->alloc, object, buffer,
+-				      offset, read_size);
+ 
+ 	/* Ok, now see if we read a complete object. */
+ 	hdr = &object->hdr;
+@@ -2140,8 +2139,10 @@ static struct binder_buffer_object *binder_validate_ptr(
+ 		return NULL;
+ 
+ 	buffer_offset = start_offset + sizeof(binder_size_t) * index;
+-	binder_alloc_copy_from_buffer(&proc->alloc, &object_offset,
+-				      b, buffer_offset, sizeof(object_offset));
++	if (binder_alloc_copy_from_buffer(&proc->alloc, &object_offset,
++					  b, buffer_offset,
++					  sizeof(object_offset)))
++		return NULL;
+ 	object_size = binder_get_object(proc, b, object_offset, object);
+ 	if (!object_size || object->hdr.type != BINDER_TYPE_PTR)
+ 		return NULL;
+@@ -2221,10 +2222,12 @@ static bool binder_validate_fixup(struct binder_proc *proc,
+ 			return false;
+ 		last_min_offset = last_bbo->parent_offset + sizeof(uintptr_t);
+ 		buffer_offset = objects_start_offset +
+-			sizeof(binder_size_t) * last_bbo->parent,
+-		binder_alloc_copy_from_buffer(&proc->alloc, &last_obj_offset,
+-					      b, buffer_offset,
+-					      sizeof(last_obj_offset));
++			sizeof(binder_size_t) * last_bbo->parent;
++		if (binder_alloc_copy_from_buffer(&proc->alloc,
++						  &last_obj_offset,
++						  b, buffer_offset,
++						  sizeof(last_obj_offset)))
++			return false;
+ 	}
+ 	return (fixup_offset >= last_min_offset);
+ }
+@@ -2310,15 +2313,15 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
+ 	for (buffer_offset = off_start_offset; buffer_offset < off_end_offset;
+ 	     buffer_offset += sizeof(binder_size_t)) {
+ 		struct binder_object_header *hdr;
+-		size_t object_size;
++		size_t object_size = 0;
+ 		struct binder_object object;
+ 		binder_size_t object_offset;
+ 
+-		binder_alloc_copy_from_buffer(&proc->alloc, &object_offset,
+-					      buffer, buffer_offset,
+-					      sizeof(object_offset));
+-		object_size = binder_get_object(proc, buffer,
+-						object_offset, &object);
++		if (!binder_alloc_copy_from_buffer(&proc->alloc, &object_offset,
++						   buffer, buffer_offset,
++						   sizeof(object_offset)))
++			object_size = binder_get_object(proc, buffer,
++							object_offset, &object);
+ 		if (object_size == 0) {
+ 			pr_err("transaction release %d bad object at offset %lld, size %zd\n",
+ 			       debug_id, (u64)object_offset, buffer->data_size);
+@@ -2441,15 +2444,16 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
+ 			for (fd_index = 0; fd_index < fda->num_fds;
+ 			     fd_index++) {
+ 				u32 fd;
++				int err;
+ 				binder_size_t offset = fda_offset +
+ 					fd_index * sizeof(fd);
+ 
+-				binder_alloc_copy_from_buffer(&proc->alloc,
+-							      &fd,
+-							      buffer,
+-							      offset,
+-							      sizeof(fd));
+-				binder_deferred_fd_close(fd);
++				err = binder_alloc_copy_from_buffer(
++						&proc->alloc, &fd, buffer,
++						offset, sizeof(fd));
++				WARN_ON(err);
++				if (!err)
++					binder_deferred_fd_close(fd);
+ 			}
+ 		} break;
+ 		default:
+@@ -2692,11 +2696,12 @@ static int binder_translate_fd_array(struct binder_fd_array_object *fda,
+ 		int ret;
+ 		binder_size_t offset = fda_offset + fdi * sizeof(fd);
+ 
+-		binder_alloc_copy_from_buffer(&target_proc->alloc,
+-					      &fd, t->buffer,
+-					      offset, sizeof(fd));
+-		ret = binder_translate_fd(fd, offset, t, thread,
+-					  in_reply_to);
++		ret = binder_alloc_copy_from_buffer(&target_proc->alloc,
++						    &fd, t->buffer,
++						    offset, sizeof(fd));
++		if (!ret)
++			ret = binder_translate_fd(fd, offset, t, thread,
++						  in_reply_to);
+ 		if (ret < 0)
+ 			return ret;
+ 	}
+@@ -2749,8 +2754,12 @@ static int binder_fixup_parent(struct binder_transaction *t,
+ 	}
+ 	buffer_offset = bp->parent_offset +
+ 			(uintptr_t)parent->buffer - (uintptr_t)b->user_data;
+-	binder_alloc_copy_to_buffer(&target_proc->alloc, b, buffer_offset,
+-				    &bp->buffer, sizeof(bp->buffer));
++	if (binder_alloc_copy_to_buffer(&target_proc->alloc, b, buffer_offset,
++					&bp->buffer, sizeof(bp->buffer))) {
++		binder_user_error("%d:%d got transaction with invalid parent offset\n",
++				  proc->pid, thread->pid);
++		return -EINVAL;
++	}
+ 
+ 	return 0;
+ }
+@@ -3160,15 +3169,20 @@ static void binder_transaction(struct binder_proc *proc,
+ 		goto err_binder_alloc_buf_failed;
+ 	}
+ 	if (secctx) {
++		int err;
+ 		size_t buf_offset = ALIGN(tr->data_size, sizeof(void *)) +
+ 				    ALIGN(tr->offsets_size, sizeof(void *)) +
+ 				    ALIGN(extra_buffers_size, sizeof(void *)) -
+ 				    ALIGN(secctx_sz, sizeof(u64));
+ 
+ 		t->security_ctx = (uintptr_t)t->buffer->user_data + buf_offset;
+-		binder_alloc_copy_to_buffer(&target_proc->alloc,
+-					    t->buffer, buf_offset,
+-					    secctx, secctx_sz);
++		err = binder_alloc_copy_to_buffer(&target_proc->alloc,
++						  t->buffer, buf_offset,
++						  secctx, secctx_sz);
++		if (err) {
++			t->security_ctx = 0;
++			WARN_ON(1);
++		}
+ 		security_release_secctx(secctx, secctx_sz);
+ 		secctx = NULL;
+ 	}
+@@ -3234,11 +3248,16 @@ static void binder_transaction(struct binder_proc *proc,
+ 		struct binder_object object;
+ 		binder_size_t object_offset;
+ 
+-		binder_alloc_copy_from_buffer(&target_proc->alloc,
+-					      &object_offset,
+-					      t->buffer,
+-					      buffer_offset,
+-					      sizeof(object_offset));
++		if (binder_alloc_copy_from_buffer(&target_proc->alloc,
++						  &object_offset,
++						  t->buffer,
++						  buffer_offset,
++						  sizeof(object_offset))) {
++			return_error = BR_FAILED_REPLY;
++			return_error_param = -EINVAL;
++			return_error_line = __LINE__;
++			goto err_bad_offset;
++		}
+ 		object_size = binder_get_object(target_proc, t->buffer,
+ 						object_offset, &object);
+ 		if (object_size == 0 || object_offset < off_min) {
+@@ -3262,15 +3281,17 @@ static void binder_transaction(struct binder_proc *proc,
+ 
+ 			fp = to_flat_binder_object(hdr);
+ 			ret = binder_translate_binder(fp, t, thread);
+-			if (ret < 0) {
++
++			if (ret < 0 ||
++			    binder_alloc_copy_to_buffer(&target_proc->alloc,
++							t->buffer,
++							object_offset,
++							fp, sizeof(*fp))) {
+ 				return_error = BR_FAILED_REPLY;
+ 				return_error_param = ret;
+ 				return_error_line = __LINE__;
+ 				goto err_translate_failed;
+ 			}
+-			binder_alloc_copy_to_buffer(&target_proc->alloc,
+-						    t->buffer, object_offset,
+-						    fp, sizeof(*fp));
+ 		} break;
+ 		case BINDER_TYPE_HANDLE:
+ 		case BINDER_TYPE_WEAK_HANDLE: {
+@@ -3278,15 +3299,16 @@ static void binder_transaction(struct binder_proc *proc,
+ 
+ 			fp = to_flat_binder_object(hdr);
+ 			ret = binder_translate_handle(fp, t, thread);
+-			if (ret < 0) {
++			if (ret < 0 ||
++			    binder_alloc_copy_to_buffer(&target_proc->alloc,
++							t->buffer,
++							object_offset,
++							fp, sizeof(*fp))) {
+ 				return_error = BR_FAILED_REPLY;
+ 				return_error_param = ret;
+ 				return_error_line = __LINE__;
+ 				goto err_translate_failed;
+ 			}
+-			binder_alloc_copy_to_buffer(&target_proc->alloc,
+-						    t->buffer, object_offset,
+-						    fp, sizeof(*fp));
+ 		} break;
+ 
+ 		case BINDER_TYPE_FD: {
+@@ -3296,16 +3318,17 @@ static void binder_transaction(struct binder_proc *proc,
+ 			int ret = binder_translate_fd(fp->fd, fd_offset, t,
+ 						      thread, in_reply_to);
+ 
+-			if (ret < 0) {
++			fp->pad_binder = 0;
++			if (ret < 0 ||
++			    binder_alloc_copy_to_buffer(&target_proc->alloc,
++							t->buffer,
++							object_offset,
++							fp, sizeof(*fp))) {
+ 				return_error = BR_FAILED_REPLY;
+ 				return_error_param = ret;
+ 				return_error_line = __LINE__;
+ 				goto err_translate_failed;
+ 			}
+-			fp->pad_binder = 0;
+-			binder_alloc_copy_to_buffer(&target_proc->alloc,
+-						    t->buffer, object_offset,
+-						    fp, sizeof(*fp));
+ 		} break;
+ 		case BINDER_TYPE_FDA: {
+ 			struct binder_object ptr_object;
+@@ -3393,15 +3416,16 @@ static void binder_transaction(struct binder_proc *proc,
+ 						  num_valid,
+ 						  last_fixup_obj_off,
+ 						  last_fixup_min_off);
+-			if (ret < 0) {
++			if (ret < 0 ||
++			    binder_alloc_copy_to_buffer(&target_proc->alloc,
++							t->buffer,
++							object_offset,
++							bp, sizeof(*bp))) {
+ 				return_error = BR_FAILED_REPLY;
+ 				return_error_param = ret;
+ 				return_error_line = __LINE__;
+ 				goto err_translate_failed;
+ 			}
+-			binder_alloc_copy_to_buffer(&target_proc->alloc,
+-						    t->buffer, object_offset,
+-						    bp, sizeof(*bp));
+ 			last_fixup_obj_off = object_offset;
+ 			last_fixup_min_off = 0;
+ 		} break;
+@@ -4139,20 +4163,27 @@ static int binder_apply_fd_fixups(struct binder_proc *proc,
+ 		trace_binder_transaction_fd_recv(t, fd, fixup->offset);
+ 		fd_install(fd, fixup->file);
+ 		fixup->file = NULL;
+-		binder_alloc_copy_to_buffer(&proc->alloc, t->buffer,
+-					    fixup->offset, &fd,
+-					    sizeof(u32));
++		if (binder_alloc_copy_to_buffer(&proc->alloc, t->buffer,
++						fixup->offset, &fd,
++						sizeof(u32))) {
++			ret = -EINVAL;
++			break;
++		}
+ 	}
+ 	list_for_each_entry_safe(fixup, tmp, &t->fd_fixups, fixup_entry) {
+ 		if (fixup->file) {
+ 			fput(fixup->file);
+ 		} else if (ret) {
+ 			u32 fd;
+-
+-			binder_alloc_copy_from_buffer(&proc->alloc, &fd,
+-						      t->buffer, fixup->offset,
+-						      sizeof(fd));
+-			binder_deferred_fd_close(fd);
++			int err;
++
++			err = binder_alloc_copy_from_buffer(&proc->alloc, &fd,
++							    t->buffer,
++							    fixup->offset,
++							    sizeof(fd));
++			WARN_ON(err);
++			if (!err)
++				binder_deferred_fd_close(fd);
+ 		}
+ 		list_del(&fixup->fixup_entry);
+ 		kfree(fixup);
+@@ -4267,6 +4298,8 @@ retry:
+ 		case BINDER_WORK_TRANSACTION_COMPLETE: {
+ 			binder_inner_proc_unlock(proc);
+ 			cmd = BR_TRANSACTION_COMPLETE;
++			kfree(w);
++			binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
+ 			if (put_user(cmd, (uint32_t __user *)ptr))
+ 				return -EFAULT;
+ 			ptr += sizeof(uint32_t);
+@@ -4275,8 +4308,6 @@ retry:
+ 			binder_debug(BINDER_DEBUG_TRANSACTION_COMPLETE,
+ 				     "%d:%d BR_TRANSACTION_COMPLETE\n",
+ 				     proc->pid, thread->pid);
+-			kfree(w);
+-			binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
+ 		} break;
+ 		case BINDER_WORK_NODE: {
+ 			struct binder_node *node = container_of(w, struct binder_node, work);
+diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
+index 195f120c4e8c..80eaf8ab7dc0 100644
+--- a/drivers/android/binder_alloc.c
++++ b/drivers/android/binder_alloc.c
+@@ -1128,15 +1128,16 @@ binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc,
+ 	return 0;
+ }
+ 
+-static void binder_alloc_do_buffer_copy(struct binder_alloc *alloc,
+-					bool to_buffer,
+-					struct binder_buffer *buffer,
+-					binder_size_t buffer_offset,
+-					void *ptr,
+-					size_t bytes)
++static int binder_alloc_do_buffer_copy(struct binder_alloc *alloc,
++				       bool to_buffer,
++				       struct binder_buffer *buffer,
++				       binder_size_t buffer_offset,
++				       void *ptr,
++				       size_t bytes)
+ {
+ 	/* All copies must be 32-bit aligned and 32-bit size */
+-	BUG_ON(!check_buffer(alloc, buffer, buffer_offset, bytes));
++	if (!check_buffer(alloc, buffer, buffer_offset, bytes))
++		return -EINVAL;
+ 
+ 	while (bytes) {
+ 		unsigned long size;
+@@ -1164,25 +1165,26 @@ static void binder_alloc_do_buffer_copy(struct binder_alloc *alloc,
+ 		ptr = ptr + size;
+ 		buffer_offset += size;
+ 	}
++	return 0;
+ }
+ 
+-void binder_alloc_copy_to_buffer(struct binder_alloc *alloc,
+-				 struct binder_buffer *buffer,
+-				 binder_size_t buffer_offset,
+-				 void *src,
+-				 size_t bytes)
++int binder_alloc_copy_to_buffer(struct binder_alloc *alloc,
++				struct binder_buffer *buffer,
++				binder_size_t buffer_offset,
++				void *src,
++				size_t bytes)
+ {
+-	binder_alloc_do_buffer_copy(alloc, true, buffer, buffer_offset,
+-				    src, bytes);
++	return binder_alloc_do_buffer_copy(alloc, true, buffer, buffer_offset,
++					   src, bytes);
+ }
+ 
+-void binder_alloc_copy_from_buffer(struct binder_alloc *alloc,
+-				   void *dest,
+-				   struct binder_buffer *buffer,
+-				   binder_size_t buffer_offset,
+-				   size_t bytes)
++int binder_alloc_copy_from_buffer(struct binder_alloc *alloc,
++				  void *dest,
++				  struct binder_buffer *buffer,
++				  binder_size_t buffer_offset,
++				  size_t bytes)
+ {
+-	binder_alloc_do_buffer_copy(alloc, false, buffer, buffer_offset,
+-				    dest, bytes);
++	return binder_alloc_do_buffer_copy(alloc, false, buffer, buffer_offset,
++					   dest, bytes);
+ }
+ 
+diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
+index b60d161b7a7a..9a51b72624ae 100644
+--- a/drivers/android/binder_alloc.h
++++ b/drivers/android/binder_alloc.h
+@@ -168,17 +168,17 @@ binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc,
+ 				 const void __user *from,
+ 				 size_t bytes);
+ 
+-void binder_alloc_copy_to_buffer(struct binder_alloc *alloc,
+-				 struct binder_buffer *buffer,
+-				 binder_size_t buffer_offset,
+-				 void *src,
+-				 size_t bytes);
+-
+-void binder_alloc_copy_from_buffer(struct binder_alloc *alloc,
+-				   void *dest,
+-				   struct binder_buffer *buffer,
+-				   binder_size_t buffer_offset,
+-				   size_t bytes);
++int binder_alloc_copy_to_buffer(struct binder_alloc *alloc,
++				struct binder_buffer *buffer,
++				binder_size_t buffer_offset,
++				void *src,
++				size_t bytes);
++
++int binder_alloc_copy_from_buffer(struct binder_alloc *alloc,
++				  void *dest,
++				  struct binder_buffer *buffer,
++				  binder_size_t buffer_offset,
++				  size_t bytes);
+ 
+ #endif /* _LINUX_BINDER_ALLOC_H */
+ 
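+
+The theme running through the whole binder series above: the alloc copy
+helpers stop BUG_ON()-ing a bad range and instead return -EINVAL, and
+every caller is audited to propagate, WARN on, or unwind from the new
+error.  The shape of such a conversion, reduced to a sketch with
+stand-in types (not the binder structures):
+
+    #include <errno.h>
+    #include <stddef.h>
+    #include <string.h>
+
+    struct buf { char data[64]; size_t size; };
+
+    /* Before: a void helper with BUG_ON(!range_ok).  After: */
+    static int copy_from_buf(struct buf *b, size_t off, void *dst, size_t len)
+    {
+            if (off > b->size || len > b->size - off)
+                    return -EINVAL;  /* reject instead of crashing */
+            memcpy(dst, b->data + off, len);
+            return 0;
+    }
+
+    /* Caller pattern used throughout the hunks above: */
+    static int read_word(struct buf *b, size_t off, unsigned int *out)
+    {
+            return copy_from_buf(b, off, out, sizeof(*out));
+    }
+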
+diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
+index 8804c9e916fd..f566fa8bf704 100644
+--- a/drivers/char/tpm/tpm-chip.c
++++ b/drivers/char/tpm/tpm-chip.c
+@@ -294,15 +294,15 @@ static int tpm_class_shutdown(struct device *dev)
+ {
+ 	struct tpm_chip *chip = container_of(dev, struct tpm_chip, dev);
+ 
++	down_write(&chip->ops_sem);
+ 	if (chip->flags & TPM_CHIP_FLAG_TPM2) {
+-		down_write(&chip->ops_sem);
+ 		if (!tpm_chip_start(chip)) {
+ 			tpm2_shutdown(chip, TPM2_SU_CLEAR);
+ 			tpm_chip_stop(chip);
+ 		}
+-		chip->ops = NULL;
+-		up_write(&chip->ops_sem);
+ 	}
++	chip->ops = NULL;
++	up_write(&chip->ops_sem);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/char/tpm/tpm1-cmd.c b/drivers/char/tpm/tpm1-cmd.c
+index 85dcf2654d11..faacbe1ffa1a 100644
+--- a/drivers/char/tpm/tpm1-cmd.c
++++ b/drivers/char/tpm/tpm1-cmd.c
+@@ -510,7 +510,7 @@ struct tpm1_get_random_out {
+  *
+  * Return:
+  * *  number of bytes read
+- * * -errno or a TPM return code otherwise
++ * * -errno (positive TPM return codes are masked to -EIO)
+  */
+ int tpm1_get_random(struct tpm_chip *chip, u8 *dest, size_t max)
+ {
+@@ -531,8 +531,11 @@ int tpm1_get_random(struct tpm_chip *chip, u8 *dest, size_t max)
+ 
+ 		rc = tpm_transmit_cmd(chip, &buf, sizeof(out->rng_data_len),
+ 				      "attempting get random");
+-		if (rc)
++		if (rc) {
++			if (rc > 0)
++				rc = -EIO;
+ 			goto out;
++		}
+ 
+ 		out = (struct tpm1_get_random_out *)&buf.data[TPM_HEADER_SIZE];
+ 
+diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c
+index e74c5b7b64bf..f57e25ab8f39 100644
+--- a/drivers/char/tpm/tpm2-cmd.c
++++ b/drivers/char/tpm/tpm2-cmd.c
+@@ -301,7 +301,7 @@ struct tpm2_get_random_out {
+  *
+  * Return:
+  *   size of the buffer on success,
+- *   -errno otherwise
++ *   -errno otherwise (positive TPM return codes are masked to -EIO)
+  */
+ int tpm2_get_random(struct tpm_chip *chip, u8 *dest, size_t max)
+ {
+@@ -328,8 +328,11 @@ int tpm2_get_random(struct tpm_chip *chip, u8 *dest, size_t max)
+ 				       offsetof(struct tpm2_get_random_out,
+ 						buffer),
+ 				       "attempting get random");
+-		if (err)
++		if (err) {
++			if (err > 0)
++				err = -EIO;
+ 			goto out;
++		}
+ 
+ 		out = (struct tpm2_get_random_out *)
+ 			&buf.data[TPM_HEADER_SIZE];
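+
+Both tpm1_get_random() and tpm2_get_random() now enforce the same
+contract: tpm_transmit_cmd() may return a negative errno (transport
+failure) or a positive TPM return code (command failure), and RNG
+callers should only ever see -errno.  A userspace model of the
+convention:
+
+    #include <stdio.h>
+
+    #define EIO 5  /* matches <asm-generic/errno-base.h> */
+
+    /* rc < 0: -errno, rc == 0: success, rc > 0: TPM_RC_* code. */
+    static int get_random(int transmit_rc)
+    {
+            int rc = transmit_rc;
+
+            if (rc) {
+                    if (rc > 0)
+                            rc = -EIO;  /* mask TPM codes, as above */
+                    return rc;
+            }
+            return 0;  /* ... then read out the random bytes ... */
+    }
+
+    int main(void)
+    {
+            printf("%d %d %d\n", get_random(0), get_random(-14),
+                   get_random(0x921));  /* 0 -14 -5 */
+            return 0;
+    }
+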
+diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
+index de78b54bcfb1..0fee83b2eb91 100644
+--- a/drivers/crypto/talitos.c
++++ b/drivers/crypto/talitos.c
+@@ -2286,7 +2286,7 @@ static struct talitos_alg_template driver_algs[] = {
+ 			.base = {
+ 				.cra_name = "authenc(hmac(sha1),cbc(aes))",
+ 				.cra_driver_name = "authenc-hmac-sha1-"
+-						   "cbc-aes-talitos",
++						   "cbc-aes-talitos-hsna",
+ 				.cra_blocksize = AES_BLOCK_SIZE,
+ 				.cra_flags = CRYPTO_ALG_ASYNC,
+ 			},
+@@ -2330,7 +2330,7 @@ static struct talitos_alg_template driver_algs[] = {
+ 				.cra_name = "authenc(hmac(sha1),"
+ 					    "cbc(des3_ede))",
+ 				.cra_driver_name = "authenc-hmac-sha1-"
+-						   "cbc-3des-talitos",
++						   "cbc-3des-talitos-hsna",
+ 				.cra_blocksize = DES3_EDE_BLOCK_SIZE,
+ 				.cra_flags = CRYPTO_ALG_ASYNC,
+ 			},
+@@ -2372,7 +2372,7 @@ static struct talitos_alg_template driver_algs[] = {
+ 			.base = {
+ 				.cra_name = "authenc(hmac(sha224),cbc(aes))",
+ 				.cra_driver_name = "authenc-hmac-sha224-"
+-						   "cbc-aes-talitos",
++						   "cbc-aes-talitos-hsna",
+ 				.cra_blocksize = AES_BLOCK_SIZE,
+ 				.cra_flags = CRYPTO_ALG_ASYNC,
+ 			},
+@@ -2416,7 +2416,7 @@ static struct talitos_alg_template driver_algs[] = {
+ 				.cra_name = "authenc(hmac(sha224),"
+ 					    "cbc(des3_ede))",
+ 				.cra_driver_name = "authenc-hmac-sha224-"
+-						   "cbc-3des-talitos",
++						   "cbc-3des-talitos-hsna",
+ 				.cra_blocksize = DES3_EDE_BLOCK_SIZE,
+ 				.cra_flags = CRYPTO_ALG_ASYNC,
+ 			},
+@@ -2458,7 +2458,7 @@ static struct talitos_alg_template driver_algs[] = {
+ 			.base = {
+ 				.cra_name = "authenc(hmac(sha256),cbc(aes))",
+ 				.cra_driver_name = "authenc-hmac-sha256-"
+-						   "cbc-aes-talitos",
++						   "cbc-aes-talitos-hsna",
+ 				.cra_blocksize = AES_BLOCK_SIZE,
+ 				.cra_flags = CRYPTO_ALG_ASYNC,
+ 			},
+@@ -2502,7 +2502,7 @@ static struct talitos_alg_template driver_algs[] = {
+ 				.cra_name = "authenc(hmac(sha256),"
+ 					    "cbc(des3_ede))",
+ 				.cra_driver_name = "authenc-hmac-sha256-"
+-						   "cbc-3des-talitos",
++						   "cbc-3des-talitos-hsna",
+ 				.cra_blocksize = DES3_EDE_BLOCK_SIZE,
+ 				.cra_flags = CRYPTO_ALG_ASYNC,
+ 			},
+@@ -2628,7 +2628,7 @@ static struct talitos_alg_template driver_algs[] = {
+ 			.base = {
+ 				.cra_name = "authenc(hmac(md5),cbc(aes))",
+ 				.cra_driver_name = "authenc-hmac-md5-"
+-						   "cbc-aes-talitos",
++						   "cbc-aes-talitos-hsna",
+ 				.cra_blocksize = AES_BLOCK_SIZE,
+ 				.cra_flags = CRYPTO_ALG_ASYNC,
+ 			},
+@@ -2670,7 +2670,7 @@ static struct talitos_alg_template driver_algs[] = {
+ 			.base = {
+ 				.cra_name = "authenc(hmac(md5),cbc(des3_ede))",
+ 				.cra_driver_name = "authenc-hmac-md5-"
+-						   "cbc-3des-talitos",
++						   "cbc-3des-talitos-hsna",
+ 				.cra_blocksize = DES3_EDE_BLOCK_SIZE,
+ 				.cra_flags = CRYPTO_ALG_ASYNC,
+ 			},
+diff --git a/drivers/gpu/drm/drm_bufs.c b/drivers/gpu/drm/drm_bufs.c
+index e407adb033e7..4fbc34425570 100644
+--- a/drivers/gpu/drm/drm_bufs.c
++++ b/drivers/gpu/drm/drm_bufs.c
+@@ -1332,7 +1332,10 @@ static int copy_one_buf(void *data, int count, struct drm_buf_entry *from)
+ 				 .size = from->buf_size,
+ 				 .low_mark = from->low_mark,
+ 				 .high_mark = from->high_mark};
+-	return copy_to_user(to, &v, offsetof(struct drm_buf_desc, flags));
++
++	if (copy_to_user(to, &v, offsetof(struct drm_buf_desc, flags)))
++		return -EFAULT;
++	return 0;
+ }
+ 
+ int drm_legacy_infobufs(struct drm_device *dev, void *data,
+diff --git a/drivers/gpu/drm/drm_ioc32.c b/drivers/gpu/drm/drm_ioc32.c
+index 0e3043e08c69..f8672238d444 100644
+--- a/drivers/gpu/drm/drm_ioc32.c
++++ b/drivers/gpu/drm/drm_ioc32.c
+@@ -372,7 +372,10 @@ static int copy_one_buf32(void *data, int count, struct drm_buf_entry *from)
+ 			      .size = from->buf_size,
+ 			      .low_mark = from->low_mark,
+ 			      .high_mark = from->high_mark};
+-	return copy_to_user(to + count, &v, offsetof(drm_buf_desc32_t, flags));
++
++	if (copy_to_user(to + count, &v, offsetof(drm_buf_desc32_t, flags)))
++		return -EFAULT;
++	return 0;
+ }
+ 
+ static int drm_legacy_infobufs32(struct drm_device *dev, void *data,
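+
+The two DRM hunks fix one subtle convention: copy_to_user() returns the
+number of bytes it could not copy, not an errno, so returning its result
+straight from an ioctl path hands userspace a positive byte count
+instead of -EFAULT.  A userspace model of the bug and the fix:
+
+    #include <errno.h>
+    #include <stdio.h>
+    #include <string.h>
+
+    /* Models copy_to_user(): returns bytes NOT copied (0 on success). */
+    static unsigned long fake_copy_to_user(void *to, const void *from,
+                                           unsigned long n, unsigned long fail)
+    {
+            unsigned long done = n > fail ? n - fail : 0;
+
+            memcpy(to, from, done);
+            return n - done;
+    }
+
+    static int copy_one_buf(void *to, const void *v, unsigned long n,
+                            unsigned long fail)
+    {
+            if (fake_copy_to_user(to, v, n, fail))
+                    return -EFAULT;  /* was: return copy_to_user(...) */
+            return 0;
+    }
+
+    int main(void)
+    {
+            char dst[16], src[16] = "drm buffer desc";
+
+            printf("ok=%d fault=%d\n",
+                   copy_one_buf(dst, src, sizeof(src), 0),
+                   copy_one_buf(dst, src, sizeof(src), 4));
+            return 0;  /* ok=0 fault=-14 */
+    }
+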
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+index 9d2e1ce5c0a6..b77374ea3825 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+@@ -747,6 +747,9 @@ static int vmw_driver_load(struct drm_device *dev, unsigned long chipset)
+ 	if (unlikely(ret != 0))
+ 		goto out_err0;
+ 
++	dma_set_max_seg_size(dev->dev, min_t(unsigned int, U32_MAX & PAGE_MASK,
++					     SCATTERLIST_MAX_SEGMENT));
++
+ 	if (dev_priv->capabilities & SVGA_CAP_GMR2) {
+ 		DRM_INFO("Max GMR ids is %u\n",
+ 			 (unsigned)dev_priv->max_gmr_ids);
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
+index a3357ff7540d..97adee1f0575 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
+@@ -454,11 +454,11 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt)
+ 		if (unlikely(ret != 0))
+ 			return ret;
+ 
+-		ret = sg_alloc_table_from_pages(&vmw_tt->sgt, vsgt->pages,
+-						vsgt->num_pages, 0,
+-						(unsigned long)
+-						vsgt->num_pages << PAGE_SHIFT,
+-						GFP_KERNEL);
++		ret = __sg_alloc_table_from_pages
++			(&vmw_tt->sgt, vsgt->pages, vsgt->num_pages, 0,
++			 (unsigned long) vsgt->num_pages << PAGE_SHIFT,
++			 dma_get_max_seg_size(dev_priv->dev->dev),
++			 GFP_KERNEL);
+ 		if (unlikely(ret != 0))
+ 			goto out_sg_alloc_fail;
+ 
+diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c b/drivers/gpu/ipu-v3/ipu-image-convert.c
+index 13103ab86050..e9803e2151f9 100644
+--- a/drivers/gpu/ipu-v3/ipu-image-convert.c
++++ b/drivers/gpu/ipu-v3/ipu-image-convert.c
+@@ -409,12 +409,14 @@ static int calc_image_resize_coefficients(struct ipu_image_convert_ctx *ctx,
+ 	if (WARN_ON(resized_width == 0 || resized_height == 0))
+ 		return -EINVAL;
+ 
+-	while (downsized_width >= resized_width * 2) {
++	while (downsized_width > 1024 ||
++	       downsized_width >= resized_width * 2) {
+ 		downsized_width >>= 1;
+ 		downsize_coeff_h++;
+ 	}
+ 
+-	while (downsized_height >= resized_height * 2) {
++	while (downsized_height > 1024 ||
++	       downsized_height >= resized_height * 2) {
+ 		downsized_height >>= 1;
+ 		downsize_coeff_v++;
+ 	}
+@@ -1885,7 +1887,8 @@ void ipu_image_convert_adjust(struct ipu_image *in, struct ipu_image *out,
+ 			      enum ipu_rotate_mode rot_mode)
+ {
+ 	const struct ipu_image_pixfmt *infmt, *outfmt;
+-	u32 w_align, h_align;
++	u32 w_align_out, h_align_out;
++	u32 w_align_in, h_align_in;
+ 
+ 	infmt = get_format(in->pix.pixelformat);
+ 	outfmt = get_format(out->pix.pixelformat);
+@@ -1917,22 +1920,33 @@ void ipu_image_convert_adjust(struct ipu_image *in, struct ipu_image *out,
+ 	}
+ 
+ 	/* align input width/height */
+-	w_align = ilog2(tile_width_align(IMAGE_CONVERT_IN, infmt, rot_mode));
+-	h_align = ilog2(tile_height_align(IMAGE_CONVERT_IN, infmt, rot_mode));
+-	in->pix.width = clamp_align(in->pix.width, MIN_W, MAX_W, w_align);
+-	in->pix.height = clamp_align(in->pix.height, MIN_H, MAX_H, h_align);
++	w_align_in = ilog2(tile_width_align(IMAGE_CONVERT_IN, infmt,
++					    rot_mode));
++	h_align_in = ilog2(tile_height_align(IMAGE_CONVERT_IN, infmt,
++					     rot_mode));
++	in->pix.width = clamp_align(in->pix.width, MIN_W, MAX_W,
++				    w_align_in);
++	in->pix.height = clamp_align(in->pix.height, MIN_H, MAX_H,
++				     h_align_in);
+ 
+ 	/* align output width/height */
+-	w_align = ilog2(tile_width_align(IMAGE_CONVERT_OUT, outfmt, rot_mode));
+-	h_align = ilog2(tile_height_align(IMAGE_CONVERT_OUT, outfmt, rot_mode));
+-	out->pix.width = clamp_align(out->pix.width, MIN_W, MAX_W, w_align);
+-	out->pix.height = clamp_align(out->pix.height, MIN_H, MAX_H, h_align);
++	w_align_out = ilog2(tile_width_align(IMAGE_CONVERT_OUT, outfmt,
++					     rot_mode));
++	h_align_out = ilog2(tile_height_align(IMAGE_CONVERT_OUT, outfmt,
++					      rot_mode));
++	out->pix.width = clamp_align(out->pix.width, MIN_W, MAX_W,
++				     w_align_out);
++	out->pix.height = clamp_align(out->pix.height, MIN_H, MAX_H,
++				      h_align_out);
+ 
+ 	/* set input/output strides and image sizes */
+ 	in->pix.bytesperline = infmt->planar ?
+-		clamp_align(in->pix.width, 2 << w_align, MAX_W, w_align) :
++		clamp_align(in->pix.width, 2 << w_align_in, MAX_W,
++			    w_align_in) :
+ 		clamp_align((in->pix.width * infmt->bpp) >> 3,
+-			    2 << w_align, MAX_W, w_align);
++			    ((2 << w_align_in) * infmt->bpp) >> 3,
++			    (MAX_W * infmt->bpp) >> 3,
++			    w_align_in);
+ 	in->pix.sizeimage = infmt->planar ?
+ 		(in->pix.height * in->pix.bytesperline * infmt->bpp) >> 3 :
+ 		in->pix.height * in->pix.bytesperline;
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index adce58f24f76..6537086fb145 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -1235,6 +1235,7 @@
+ #define USB_DEVICE_ID_PRIMAX_KEYBOARD	0x4e05
+ #define USB_DEVICE_ID_PRIMAX_REZEL	0x4e72
+ #define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F	0x4d0f
++#define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D65	0x4d65
+ #define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4E22	0x4e22
+ 
+ 
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index 77ffba48cc73..189bf68eb35c 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -132,6 +132,7 @@ static const struct hid_device_id hid_quirks[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_USB_OPTICAL_MOUSE), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_MOUSE_4D22), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F), HID_QUIRK_ALWAYS_POLL },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D65), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4E22), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_PRODIGE, USB_DEVICE_ID_PRODIGE_CORDLESS), HID_QUIRK_NOGET },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3001), HID_QUIRK_NOGET },
+diff --git a/drivers/iio/adc/stm32-adc-core.c b/drivers/iio/adc/stm32-adc-core.c
+index 2327ec18b40c..1f7ce5186dfc 100644
+--- a/drivers/iio/adc/stm32-adc-core.c
++++ b/drivers/iio/adc/stm32-adc-core.c
+@@ -87,6 +87,7 @@ struct stm32_adc_priv_cfg {
+  * @domain:		irq domain reference
+  * @aclk:		clock reference for the analog circuitry
+  * @bclk:		bus clock common for all ADCs, depends on part used
++ * @vdda:		vdda analog supply reference
+  * @vref:		regulator reference
+  * @cfg:		compatible configuration data
+  * @common:		common data for all ADC instances
+@@ -97,6 +98,7 @@ struct stm32_adc_priv {
+ 	struct irq_domain		*domain;
+ 	struct clk			*aclk;
+ 	struct clk			*bclk;
++	struct regulator		*vdda;
+ 	struct regulator		*vref;
+ 	const struct stm32_adc_priv_cfg	*cfg;
+ 	struct stm32_adc_common		common;
+@@ -394,10 +396,16 @@ static int stm32_adc_core_hw_start(struct device *dev)
+ 	struct stm32_adc_priv *priv = to_stm32_adc_priv(common);
+ 	int ret;
+ 
++	ret = regulator_enable(priv->vdda);
++	if (ret < 0) {
++		dev_err(dev, "vdda enable failed %d\n", ret);
++		return ret;
++	}
++
+ 	ret = regulator_enable(priv->vref);
+ 	if (ret < 0) {
+ 		dev_err(dev, "vref enable failed\n");
+-		return ret;
++		goto err_vdda_disable;
+ 	}
+ 
+ 	if (priv->bclk) {
+@@ -425,6 +433,8 @@ err_bclk_disable:
+ 		clk_disable_unprepare(priv->bclk);
+ err_regulator_disable:
+ 	regulator_disable(priv->vref);
++err_vdda_disable:
++	regulator_disable(priv->vdda);
+ 
+ 	return ret;
+ }
+@@ -441,6 +451,7 @@ static void stm32_adc_core_hw_stop(struct device *dev)
+ 	if (priv->bclk)
+ 		clk_disable_unprepare(priv->bclk);
+ 	regulator_disable(priv->vref);
++	regulator_disable(priv->vdda);
+ }
+ 
+ static int stm32_adc_probe(struct platform_device *pdev)
+@@ -468,6 +479,14 @@ static int stm32_adc_probe(struct platform_device *pdev)
+ 		return PTR_ERR(priv->common.base);
+ 	priv->common.phys_base = res->start;
+ 
++	priv->vdda = devm_regulator_get(&pdev->dev, "vdda");
++	if (IS_ERR(priv->vdda)) {
++		ret = PTR_ERR(priv->vdda);
++		if (ret != -EPROBE_DEFER)
++			dev_err(&pdev->dev, "vdda get failed, %d\n", ret);
++		return ret;
++	}
++
+ 	priv->vref = devm_regulator_get(&pdev->dev, "vref");
+ 	if (IS_ERR(priv->vref)) {
+ 		ret = PTR_ERR(priv->vref);
+diff --git a/drivers/infiniband/hw/hfi1/hfi.h b/drivers/infiniband/hw/hfi1/hfi.h
+index 048b5d73ba39..d85b16a3aaaf 100644
+--- a/drivers/infiniband/hw/hfi1/hfi.h
++++ b/drivers/infiniband/hw/hfi1/hfi.h
+@@ -539,6 +539,37 @@ static inline void hfi1_16B_set_qpn(struct opa_16b_mgmt *mgmt,
+ 	mgmt->src_qpn = cpu_to_be32(src_qp & OPA_16B_MGMT_QPN_MASK);
+ }
+ 
++/**
++ * hfi1_get_rc_ohdr - get extended header
++ * @opah: the opa header
++ */
++static inline struct ib_other_headers *
++hfi1_get_rc_ohdr(struct hfi1_opa_header *opah)
++{
++	struct ib_other_headers *ohdr;
++	struct ib_header *hdr = NULL;
++	struct hfi1_16b_header *hdr_16b = NULL;
++
++	/* Find out where the BTH is */
++	if (opah->hdr_type == HFI1_PKT_TYPE_9B) {
++		hdr = &opah->ibh;
++		if (ib_get_lnh(hdr) == HFI1_LRH_BTH)
++			ohdr = &hdr->u.oth;
++		else
++			ohdr = &hdr->u.l.oth;
++	} else {
++		u8 l4;
++
++		hdr_16b = &opah->opah;
++		l4  = hfi1_16B_get_l4(hdr_16b);
++		if (l4 == OPA_16B_L4_IB_LOCAL)
++			ohdr = &hdr_16b->u.oth;
++		else
++			ohdr = &hdr_16b->u.l.oth;
++	}
++	return ohdr;
++}
++
+ struct rvt_sge_state;
+ 
+ /*
+diff --git a/drivers/infiniband/hw/hfi1/pio.c b/drivers/infiniband/hw/hfi1/pio.c
+index a1de566fe95e..17ea224fbecb 100644
+--- a/drivers/infiniband/hw/hfi1/pio.c
++++ b/drivers/infiniband/hw/hfi1/pio.c
+@@ -952,6 +952,22 @@ void sc_disable(struct send_context *sc)
+ 		}
+ 	}
+ 	spin_unlock(&sc->release_lock);
++
++	write_seqlock(&sc->waitlock);
++	while (!list_empty(&sc->piowait)) {
++		struct iowait *wait;
++		struct rvt_qp *qp;
++		struct hfi1_qp_priv *priv;
++
++		wait = list_first_entry(&sc->piowait, struct iowait, list);
++		qp = iowait_to_qp(wait);
++		priv = qp->priv;
++		list_del_init(&priv->s_iowait.list);
++		priv->s_iowait.lock = NULL;
++		hfi1_qp_wakeup(qp, RVT_S_WAIT_PIO | HFI1_S_WAIT_PIO_DRAIN);
++	}
++	write_sequnlock(&sc->waitlock);
++
+ 	spin_unlock_irq(&sc->alloc_lock);
+ }
+ 
+@@ -1427,7 +1443,8 @@ void sc_stop(struct send_context *sc, int flag)
+  * @cb: optional callback to call when the buffer is finished sending
+  * @arg: argument for cb
+  *
+- * Return a pointer to a PIO buffer if successful, NULL if not enough room.
++ * Return a pointer to a PIO buffer, NULL if not enough room, -ECOMM
++ * when link is down.
+  */
+ struct pio_buf *sc_buffer_alloc(struct send_context *sc, u32 dw_len,
+ 				pio_release_cb cb, void *arg)
+@@ -1443,7 +1460,7 @@ struct pio_buf *sc_buffer_alloc(struct send_context *sc, u32 dw_len,
+ 	spin_lock_irqsave(&sc->alloc_lock, flags);
+ 	if (!(sc->flags & SCF_ENABLED)) {
+ 		spin_unlock_irqrestore(&sc->alloc_lock, flags);
+-		goto done;
++		return ERR_PTR(-ECOMM);
+ 	}
+ 
+ retry:
+diff --git a/drivers/infiniband/hw/hfi1/rc.c b/drivers/infiniband/hw/hfi1/rc.c
+index 5991211d72bd..b7b74222eaf0 100644
+--- a/drivers/infiniband/hw/hfi1/rc.c
++++ b/drivers/infiniband/hw/hfi1/rc.c
+@@ -1434,7 +1434,7 @@ void hfi1_send_rc_ack(struct hfi1_packet *packet, bool is_fecn)
+ 	pbc = create_pbc(ppd, pbc_flags, qp->srate_mbps,
+ 			 sc_to_vlt(ppd->dd, sc5), plen);
+ 	pbuf = sc_buffer_alloc(rcd->sc, plen, NULL, NULL);
+-	if (!pbuf) {
++	if (IS_ERR_OR_NULL(pbuf)) {
+ 		/*
+ 		 * We have no room to send at the moment.  Pass
+ 		 * responsibility for sending the ACK to the send engine
+@@ -1703,6 +1703,36 @@ static void reset_sending_psn(struct rvt_qp *qp, u32 psn)
+ 	}
+ }
+ 
++/**
++ * hfi1_rc_verbs_aborted - handle abort status
++ * @qp: the QP
++ * @opah: the opa header
++ *
++ * This code modifies both ACK bit in BTH[2]
++ * and the s_flags to go into send one mode.
++ *
++ * This serves to throttle the send engine to only
++ * send a single packet in the likely case the
++ * link has gone down.
++ */
++void hfi1_rc_verbs_aborted(struct rvt_qp *qp, struct hfi1_opa_header *opah)
++{
++	struct ib_other_headers *ohdr = hfi1_get_rc_ohdr(opah);
++	u8 opcode = ib_bth_get_opcode(ohdr);
++	u32 psn;
++
++	/* ignore responses */
++	if ((opcode >= OP(RDMA_READ_RESPONSE_FIRST) &&
++	     opcode <= OP(ATOMIC_ACKNOWLEDGE)) ||
++	    opcode == TID_OP(READ_RESP) ||
++	    opcode == TID_OP(WRITE_RESP))
++		return;
++
++	psn = ib_bth_get_psn(ohdr) | IB_BTH_REQ_ACK;
++	ohdr->bth[2] = cpu_to_be32(psn);
++	qp->s_flags |= RVT_S_SEND_ONE;
++}
++
+ /*
+  * This should be called with the QP s_lock held and interrupts disabled.
+  */
+@@ -1711,8 +1741,6 @@ void hfi1_rc_send_complete(struct rvt_qp *qp, struct hfi1_opa_header *opah)
+ 	struct ib_other_headers *ohdr;
+ 	struct hfi1_qp_priv *priv = qp->priv;
+ 	struct rvt_swqe *wqe;
+-	struct ib_header *hdr = NULL;
+-	struct hfi1_16b_header *hdr_16b = NULL;
+ 	u32 opcode, head, tail;
+ 	u32 psn;
+ 	struct tid_rdma_request *req;
+@@ -1721,24 +1749,7 @@ void hfi1_rc_send_complete(struct rvt_qp *qp, struct hfi1_opa_header *opah)
+ 	if (!(ib_rvt_state_ops[qp->state] & RVT_SEND_OR_FLUSH_OR_RECV_OK))
+ 		return;
+ 
+-	/* Find out where the BTH is */
+-	if (priv->hdr_type == HFI1_PKT_TYPE_9B) {
+-		hdr = &opah->ibh;
+-		if (ib_get_lnh(hdr) == HFI1_LRH_BTH)
+-			ohdr = &hdr->u.oth;
+-		else
+-			ohdr = &hdr->u.l.oth;
+-	} else {
+-		u8 l4;
+-
+-		hdr_16b = &opah->opah;
+-		l4  = hfi1_16B_get_l4(hdr_16b);
+-		if (l4 == OPA_16B_L4_IB_LOCAL)
+-			ohdr = &hdr_16b->u.oth;
+-		else
+-			ohdr = &hdr_16b->u.l.oth;
+-	}
+-
++	ohdr = hfi1_get_rc_ohdr(opah);
+ 	opcode = ib_bth_get_opcode(ohdr);
+ 	if ((opcode >= OP(RDMA_READ_RESPONSE_FIRST) &&
+ 	     opcode <= OP(ATOMIC_ACKNOWLEDGE)) ||
+diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
+index 70828de7436b..28b66bd70b74 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.c
++++ b/drivers/infiniband/hw/hfi1/sdma.c
+@@ -405,6 +405,7 @@ static void sdma_flush(struct sdma_engine *sde)
+ 	struct sdma_txreq *txp, *txp_next;
+ 	LIST_HEAD(flushlist);
+ 	unsigned long flags;
++	uint seq;
+ 
+ 	/* flush from head to tail */
+ 	sdma_flush_descq(sde);
+@@ -415,6 +416,22 @@ static void sdma_flush(struct sdma_engine *sde)
+ 	/* flush from flush list */
+ 	list_for_each_entry_safe(txp, txp_next, &flushlist, list)
+ 		complete_tx(sde, txp, SDMA_TXREQ_S_ABORTED);
++	/* wakeup QPs orphaned on the dmawait list */
++	do {
++		struct iowait *w, *nw;
++
++		seq = read_seqbegin(&sde->waitlock);
++		if (!list_empty(&sde->dmawait)) {
++			write_seqlock(&sde->waitlock);
++			list_for_each_entry_safe(w, nw, &sde->dmawait, list) {
++				if (w->wakeup) {
++					w->wakeup(w, SDMA_AVAIL_REASON);
++					list_del_init(&w->list);
++				}
++			}
++			write_sequnlock(&sde->waitlock);
++		}
++	} while (read_seqretry(&sde->waitlock, seq));
+ }
+ 
+ /*
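+
+The sdma_flush() hunk drains orphaned waiters with the classic seqlock
+read-then-upgrade idiom: sample the sequence, take the write side only
+if there is work, and loop if a concurrent writer raced in between.  A
+minimal userspace seqcount with that reader shape (a sketch; the kernel
+versions add memory barriers, lockdep and writer serialization):
+
+    #include <stdatomic.h>
+
+    typedef struct { atomic_uint seq; } seqcount_t;
+
+    static unsigned int read_begin(seqcount_t *s)
+    {
+            unsigned int v;
+
+            while ((v = atomic_load(&s->seq)) & 1)
+                    ;  /* odd: a writer is mid-update, spin */
+            return v;
+    }
+
+    static int read_retry(seqcount_t *s, unsigned int v)
+    {
+            return atomic_load(&s->seq) != v;  /* changed: retry */
+    }
+
+    static void write_begin(seqcount_t *s) { atomic_fetch_add(&s->seq, 1); }
+    static void write_end(seqcount_t *s)   { atomic_fetch_add(&s->seq, 1); }
+
+A reader wraps its critical section in
+do { v = read_begin(&s); ... } while (read_retry(&s, v));, the same
+shape as the loop added above.
+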
+diff --git a/drivers/infiniband/hw/hfi1/ud.c b/drivers/infiniband/hw/hfi1/ud.c
+index f88ad425664a..4cb0fce5c096 100644
+--- a/drivers/infiniband/hw/hfi1/ud.c
++++ b/drivers/infiniband/hw/hfi1/ud.c
+@@ -683,7 +683,7 @@ void return_cnp_16B(struct hfi1_ibport *ibp, struct rvt_qp *qp,
+ 	pbc = create_pbc(ppd, pbc_flags, qp->srate_mbps, vl, plen);
+ 	if (ctxt) {
+ 		pbuf = sc_buffer_alloc(ctxt, plen, NULL, NULL);
+-		if (pbuf) {
++		if (!IS_ERR_OR_NULL(pbuf)) {
+ 			trace_pio_output_ibhdr(ppd->dd, &hdr, sc5);
+ 			ppd->dd->pio_inline_send(ppd->dd, pbuf, pbc,
+ 						 &hdr, hwords);
+@@ -738,7 +738,7 @@ void return_cnp(struct hfi1_ibport *ibp, struct rvt_qp *qp, u32 remote_qpn,
+ 	pbc = create_pbc(ppd, pbc_flags, qp->srate_mbps, vl, plen);
+ 	if (ctxt) {
+ 		pbuf = sc_buffer_alloc(ctxt, plen, NULL, NULL);
+-		if (pbuf) {
++		if (!IS_ERR_OR_NULL(pbuf)) {
+ 			trace_pio_output_ibhdr(ppd->dd, &hdr, sc5);
+ 			ppd->dd->pio_inline_send(ppd->dd, pbuf, pbc,
+ 						 &hdr, hwords);
+diff --git a/drivers/infiniband/hw/hfi1/verbs.c b/drivers/infiniband/hw/hfi1/verbs.c
+index ea68eeba3f22..117e73cd69d7 100644
+--- a/drivers/infiniband/hw/hfi1/verbs.c
++++ b/drivers/infiniband/hw/hfi1/verbs.c
+@@ -638,6 +638,8 @@ static void verbs_sdma_complete(
+ 		struct hfi1_opa_header *hdr;
+ 
+ 		hdr = &tx->phdr.hdr;
++		if (unlikely(status == SDMA_TXREQ_S_ABORTED))
++			hfi1_rc_verbs_aborted(qp, hdr);
+ 		hfi1_rc_send_complete(qp, hdr);
+ 	}
+ 	spin_unlock(&qp->s_lock);
+@@ -1037,10 +1039,10 @@ int hfi1_verbs_send_pio(struct rvt_qp *qp, struct hfi1_pkt_state *ps,
+ 	if (cb)
+ 		iowait_pio_inc(&priv->s_iowait);
+ 	pbuf = sc_buffer_alloc(sc, plen, cb, qp);
+-	if (unlikely(!pbuf)) {
++	if (unlikely(IS_ERR_OR_NULL(pbuf))) {
+ 		if (cb)
+ 			verbs_pio_complete(qp, 0);
+-		if (ppd->host_link_state != HLS_UP_ACTIVE) {
++		if (IS_ERR(pbuf)) {
+ 			/*
+ 			 * If we have filled the PIO buffers to capacity and are
+ 			 * not in an active state this request is not going to
+@@ -1095,15 +1097,15 @@ int hfi1_verbs_send_pio(struct rvt_qp *qp, struct hfi1_pkt_state *ps,
+ 			       &ps->s_txreq->phdr.hdr, ib_is_sc5(sc5));
+ 
+ pio_bail:
++	spin_lock_irqsave(&qp->s_lock, flags);
+ 	if (qp->s_wqe) {
+-		spin_lock_irqsave(&qp->s_lock, flags);
+ 		rvt_send_complete(qp, qp->s_wqe, wc_status);
+-		spin_unlock_irqrestore(&qp->s_lock, flags);
+ 	} else if (qp->ibqp.qp_type == IB_QPT_RC) {
+-		spin_lock_irqsave(&qp->s_lock, flags);
++		if (unlikely(wc_status == IB_WC_GENERAL_ERR))
++			hfi1_rc_verbs_aborted(qp, &ps->s_txreq->phdr.hdr);
+ 		hfi1_rc_send_complete(qp, &ps->s_txreq->phdr.hdr);
+-		spin_unlock_irqrestore(&qp->s_lock, flags);
+ 	}
++	spin_unlock_irqrestore(&qp->s_lock, flags);
+ 
+ 	ret = 0;
+ 
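+
+The pio/verbs hunks move sc_buffer_alloc() to the kernel's usual
+three-state pointer convention: a valid buffer, NULL for "no room, try
+again later", or ERR_PTR(-ECOMM) for "link down, abort", which is why
+every caller switches to IS_ERR_OR_NULL().  The convention itself,
+modeled in userspace:
+
+    #include <stdio.h>
+
+    #define MAX_ERRNO 4095
+    #define ECOMM 70  /* matches the Linux errno value */
+
+    static inline void *ERR_PTR(long err) { return (void *)err; }
+    static inline long PTR_ERR(const void *p) { return (long)p; }
+    static inline int IS_ERR(const void *p)
+    {
+            return (unsigned long)p >= (unsigned long)-MAX_ERRNO;
+    }
+    static inline int IS_ERR_OR_NULL(const void *p) { return !p || IS_ERR(p); }
+
+    int main(void)
+    {
+            void *pbuf = ERR_PTR(-ECOMM);  /* what a dead link now returns */
+
+            if (IS_ERR_OR_NULL(pbuf)) {
+                    if (IS_ERR(pbuf))
+                            printf("link down: %ld\n", PTR_ERR(pbuf));
+                    else
+                            printf("no room, retry later\n");
+            }
+            return 0;
+    }
+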
+diff --git a/drivers/infiniband/hw/hfi1/verbs.h b/drivers/infiniband/hw/hfi1/verbs.h
+index 62ace0b2d17a..1714c0f6475d 100644
+--- a/drivers/infiniband/hw/hfi1/verbs.h
++++ b/drivers/infiniband/hw/hfi1/verbs.h
+@@ -415,6 +415,7 @@ void hfi1_rc_hdrerr(
+ 
+ u8 ah_to_sc(struct ib_device *ibdev, struct rdma_ah_attr *ah_attr);
+ 
++void hfi1_rc_verbs_aborted(struct rvt_qp *qp, struct hfi1_opa_header *opah);
+ void hfi1_rc_send_complete(struct rvt_qp *qp, struct hfi1_opa_header *opah);
+ 
+ void hfi1_ud_rcv(struct hfi1_packet *packet);
+diff --git a/drivers/input/keyboard/imx_keypad.c b/drivers/input/keyboard/imx_keypad.c
+index 539cb670de41..ae9c51cc85f9 100644
+--- a/drivers/input/keyboard/imx_keypad.c
++++ b/drivers/input/keyboard/imx_keypad.c
+@@ -526,11 +526,12 @@ static int imx_keypad_probe(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-static int __maybe_unused imx_kbd_suspend(struct device *dev)
++static int __maybe_unused imx_kbd_noirq_suspend(struct device *dev)
+ {
+ 	struct platform_device *pdev = to_platform_device(dev);
+ 	struct imx_keypad *kbd = platform_get_drvdata(pdev);
+ 	struct input_dev *input_dev = kbd->input_dev;
++	unsigned short reg_val = readw(kbd->mmio_base + KPSR);
+ 
+ 	/* imx kbd can wake up system even clock is disabled */
+ 	mutex_lock(&input_dev->mutex);
+@@ -540,13 +541,20 @@ static int __maybe_unused imx_kbd_suspend(struct device *dev)
+ 
+ 	mutex_unlock(&input_dev->mutex);
+ 
+-	if (device_may_wakeup(&pdev->dev))
++	if (device_may_wakeup(&pdev->dev)) {
++		if (reg_val & KBD_STAT_KPKD)
++			reg_val |= KBD_STAT_KRIE;
++		if (reg_val & KBD_STAT_KPKR)
++			reg_val |= KBD_STAT_KDIE;
++		writew(reg_val, kbd->mmio_base + KPSR);
++
+ 		enable_irq_wake(kbd->irq);
++	}
+ 
+ 	return 0;
+ }
+ 
+-static int __maybe_unused imx_kbd_resume(struct device *dev)
++static int __maybe_unused imx_kbd_noirq_resume(struct device *dev)
+ {
+ 	struct platform_device *pdev = to_platform_device(dev);
+ 	struct imx_keypad *kbd = platform_get_drvdata(pdev);
+@@ -570,7 +578,9 @@ err_clk:
+ 	return ret;
+ }
+ 
+-static SIMPLE_DEV_PM_OPS(imx_kbd_pm_ops, imx_kbd_suspend, imx_kbd_resume);
++static const struct dev_pm_ops imx_kbd_pm_ops = {
++	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(imx_kbd_noirq_suspend, imx_kbd_noirq_resume)
++};
+ 
+ static struct platform_driver imx_keypad_driver = {
+ 	.driver		= {
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index a7f8b1614559..530142b5a115 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -1189,6 +1189,8 @@ static const char * const middle_button_pnp_ids[] = {
+ 	"LEN2132", /* ThinkPad P52 */
+ 	"LEN2133", /* ThinkPad P72 w/ NFC */
+ 	"LEN2134", /* ThinkPad P72 */
++	"LEN0407",
++	"LEN0408",
+ 	NULL
+ };
+ 
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 295ff09cff4c..84aec3647994 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -7617,9 +7617,9 @@ static void status_unused(struct seq_file *seq)
+ static int status_resync(struct seq_file *seq, struct mddev *mddev)
+ {
+ 	sector_t max_sectors, resync, res;
+-	unsigned long dt, db;
+-	sector_t rt;
+-	int scale;
++	unsigned long dt, db = 0;
++	sector_t rt, curr_mark_cnt, resync_mark_cnt;
++	int scale, recovery_active;
+ 	unsigned int per_milli;
+ 
+ 	if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery) ||
+@@ -7708,22 +7708,30 @@ static int status_resync(struct seq_file *seq, struct mddev *mddev)
+ 	 * db: blocks written from mark until now
+ 	 * rt: remaining time
+ 	 *
+-	 * rt is a sector_t, so could be 32bit or 64bit.
+-	 * So we divide before multiply in case it is 32bit and close
+-	 * to the limit.
+-	 * We scale the divisor (db) by 32 to avoid losing precision
+-	 * near the end of resync when the number of remaining sectors
+-	 * is close to 'db'.
+-	 * We then divide rt by 32 after multiplying by db to compensate.
+-	 * The '+1' avoids division by zero if db is very small.
++	 * rt is a sector_t, which is always 64bit now. We are keeping
++	 * the original algorithm, but it is not really necessary.
++	 *
++	 * Original algorithm:
++	 *   So we divide before multiply in case it is 32bit and close
++	 *   to the limit.
++	 *   We scale the divisor (db) by 32 to avoid losing precision
++	 *   near the end of resync when the number of remaining sectors
++	 *   is close to 'db'.
++	 *   We then divide rt by 32 after multiplying by db to compensate.
++	 *   The '+1' avoids division by zero if db is very small.
+ 	 */
+ 	dt = ((jiffies - mddev->resync_mark) / HZ);
+ 	if (!dt) dt++;
+-	db = (mddev->curr_mark_cnt - atomic_read(&mddev->recovery_active))
+-		- mddev->resync_mark_cnt;
++
++	curr_mark_cnt = mddev->curr_mark_cnt;
++	recovery_active = atomic_read(&mddev->recovery_active);
++	resync_mark_cnt = mddev->resync_mark_cnt;
++
++	if (curr_mark_cnt >= (recovery_active + resync_mark_cnt))
++		db = curr_mark_cnt - (recovery_active + resync_mark_cnt);
+ 
+ 	rt = max_sectors - resync;    /* number of remaining sectors */
+-	sector_div(rt, db/32+1);
++	rt = div64_u64(rt, db/32+1);
+ 	rt *= dt;
+ 	rt >>= 5;
+ 
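+
+The md hunk closes two holes in one place: the unsigned subtraction can
+wrap when curr_mark_cnt briefly lags recovery_active + resync_mark_cnt
+(e.g. right after a resync restart), and the resulting huge db/32+1 no
+longer fits the 32-bit divisor sector_div() expects, hence the guard
+plus the switch to div64_u64().  The wrap, demonstrated with stand-in
+counter values (assumptions, not real mddev state):
+
+    #include <stdio.h>
+    #include <stdint.h>
+
+    int main(void)
+    {
+            uint64_t curr_mark_cnt = 1000, resync_mark_cnt = 900;
+            uint64_t recovery_active = 200, db = 0;
+
+            /* Old: unconditional subtraction wraps around zero. */
+            uint64_t wrapped =
+                    curr_mark_cnt - (recovery_active + resync_mark_cnt);
+
+            /* New: subtract only when it cannot underflow. */
+            if (curr_mark_cnt >= recovery_active + resync_mark_cnt)
+                    db = curr_mark_cnt - (recovery_active + resync_mark_cnt);
+
+            printf("wrapped db = %llu\nguarded db = %llu\n",
+                   (unsigned long long)wrapped, (unsigned long long)db);
+            return 0;
+    }
+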
+diff --git a/drivers/media/dvb-frontends/stv0297.c b/drivers/media/dvb-frontends/stv0297.c
+index 9a9915f71483..3ef31a3a27ff 100644
+--- a/drivers/media/dvb-frontends/stv0297.c
++++ b/drivers/media/dvb-frontends/stv0297.c
+@@ -694,7 +694,7 @@ static const struct dvb_frontend_ops stv0297_ops = {
+ 	.delsys = { SYS_DVBC_ANNEX_A },
+ 	.info = {
+ 		 .name = "ST STV0297 DVB-C",
+-		 .frequency_min_hz = 470 * MHz,
++		 .frequency_min_hz = 47 * MHz,
+ 		 .frequency_max_hz = 862 * MHz,
+ 		 .frequency_stepsize_hz = 62500,
+ 		 .symbol_rate_min = 870000,
+diff --git a/drivers/misc/lkdtm/Makefile b/drivers/misc/lkdtm/Makefile
+index 951c984de61a..fb10eafe9bde 100644
+--- a/drivers/misc/lkdtm/Makefile
++++ b/drivers/misc/lkdtm/Makefile
+@@ -15,8 +15,7 @@ KCOV_INSTRUMENT_rodata.o	:= n
+ 
+ OBJCOPYFLAGS :=
+ OBJCOPYFLAGS_rodata_objcopy.o	:= \
+-			--set-section-flags .text=alloc,readonly \
+-			--rename-section .text=.rodata
++			--rename-section .text=.rodata,alloc,readonly,load
+ targets += rodata.o rodata_objcopy.o
+ $(obj)/rodata_objcopy.o: $(obj)/rodata.o FORCE
+ 	$(call if_changed,objcopy)
+diff --git a/drivers/misc/vmw_vmci/vmci_context.c b/drivers/misc/vmw_vmci/vmci_context.c
+index 21d0fa592145..bc089e634a75 100644
+--- a/drivers/misc/vmw_vmci/vmci_context.c
++++ b/drivers/misc/vmw_vmci/vmci_context.c
+@@ -29,6 +29,9 @@
+ #include "vmci_driver.h"
+ #include "vmci_event.h"
+ 
++/* Use a wide upper bound for the maximum contexts. */
++#define VMCI_MAX_CONTEXTS 2000
++
+ /*
+  * List of current VMCI contexts.  Contexts can be added by
+  * vmci_ctx_create() and removed via vmci_ctx_destroy().
+@@ -125,19 +128,22 @@ struct vmci_ctx *vmci_ctx_create(u32 cid, u32 priv_flags,
+ 	/* Initialize host-specific VMCI context. */
+ 	init_waitqueue_head(&context->host_context.wait_queue);
+ 
+-	context->queue_pair_array = vmci_handle_arr_create(0);
++	context->queue_pair_array =
++		vmci_handle_arr_create(0, VMCI_MAX_GUEST_QP_COUNT);
+ 	if (!context->queue_pair_array) {
+ 		error = -ENOMEM;
+ 		goto err_free_ctx;
+ 	}
+ 
+-	context->doorbell_array = vmci_handle_arr_create(0);
++	context->doorbell_array =
++		vmci_handle_arr_create(0, VMCI_MAX_GUEST_DOORBELL_COUNT);
+ 	if (!context->doorbell_array) {
+ 		error = -ENOMEM;
+ 		goto err_free_qp_array;
+ 	}
+ 
+-	context->pending_doorbell_array = vmci_handle_arr_create(0);
++	context->pending_doorbell_array =
++		vmci_handle_arr_create(0, VMCI_MAX_GUEST_DOORBELL_COUNT);
+ 	if (!context->pending_doorbell_array) {
+ 		error = -ENOMEM;
+ 		goto err_free_db_array;
+@@ -212,7 +218,7 @@ static int ctx_fire_notification(u32 context_id, u32 priv_flags)
+ 	 * We create an array to hold the subscribers we find when
+ 	 * scanning through all contexts.
+ 	 */
+-	subscriber_array = vmci_handle_arr_create(0);
++	subscriber_array = vmci_handle_arr_create(0, VMCI_MAX_CONTEXTS);
+ 	if (subscriber_array == NULL)
+ 		return VMCI_ERROR_NO_MEM;
+ 
+@@ -631,20 +637,26 @@ int vmci_ctx_add_notification(u32 context_id, u32 remote_cid)
+ 
+ 	spin_lock(&context->lock);
+ 
+-	list_for_each_entry(n, &context->notifier_list, node) {
+-		if (vmci_handle_is_equal(n->handle, notifier->handle)) {
+-			exists = true;
+-			break;
++	if (context->n_notifiers < VMCI_MAX_CONTEXTS) {
++		list_for_each_entry(n, &context->notifier_list, node) {
++			if (vmci_handle_is_equal(n->handle, notifier->handle)) {
++				exists = true;
++				break;
++			}
+ 		}
+-	}
+ 
+-	if (exists) {
+-		kfree(notifier);
+-		result = VMCI_ERROR_ALREADY_EXISTS;
++		if (exists) {
++			kfree(notifier);
++			result = VMCI_ERROR_ALREADY_EXISTS;
++		} else {
++			list_add_tail_rcu(&notifier->node,
++					  &context->notifier_list);
++			context->n_notifiers++;
++			result = VMCI_SUCCESS;
++		}
+ 	} else {
+-		list_add_tail_rcu(&notifier->node, &context->notifier_list);
+-		context->n_notifiers++;
+-		result = VMCI_SUCCESS;
++		kfree(notifier);
++		result = VMCI_ERROR_NO_MEM;
+ 	}
+ 
+ 	spin_unlock(&context->lock);
+@@ -729,8 +741,7 @@ static int vmci_ctx_get_chkpt_doorbells(struct vmci_ctx *context,
+ 					u32 *buf_size, void **pbuf)
+ {
+ 	struct dbell_cpt_state *dbells;
+-	size_t n_doorbells;
+-	int i;
++	u32 i, n_doorbells;
+ 
+ 	n_doorbells = vmci_handle_arr_get_size(context->doorbell_array);
+ 	if (n_doorbells > 0) {
+@@ -868,7 +879,8 @@ int vmci_ctx_rcv_notifications_get(u32 context_id,
+ 	spin_lock(&context->lock);
+ 
+ 	*db_handle_array = context->pending_doorbell_array;
+-	context->pending_doorbell_array = vmci_handle_arr_create(0);
++	context->pending_doorbell_array =
++		vmci_handle_arr_create(0, VMCI_MAX_GUEST_DOORBELL_COUNT);
+ 	if (!context->pending_doorbell_array) {
+ 		context->pending_doorbell_array = *db_handle_array;
+ 		*db_handle_array = NULL;
+@@ -950,12 +962,11 @@ int vmci_ctx_dbell_create(u32 context_id, struct vmci_handle handle)
+ 		return VMCI_ERROR_NOT_FOUND;
+ 
+ 	spin_lock(&context->lock);
+-	if (!vmci_handle_arr_has_entry(context->doorbell_array, handle)) {
+-		vmci_handle_arr_append_entry(&context->doorbell_array, handle);
+-		result = VMCI_SUCCESS;
+-	} else {
++	if (!vmci_handle_arr_has_entry(context->doorbell_array, handle))
++		result = vmci_handle_arr_append_entry(&context->doorbell_array,
++						      handle);
++	else
+ 		result = VMCI_ERROR_DUPLICATE_ENTRY;
+-	}
+ 
+ 	spin_unlock(&context->lock);
+ 	vmci_ctx_put(context);
+@@ -1091,15 +1102,16 @@ int vmci_ctx_notify_dbell(u32 src_cid,
+ 			if (!vmci_handle_arr_has_entry(
+ 					dst_context->pending_doorbell_array,
+ 					handle)) {
+-				vmci_handle_arr_append_entry(
++				result = vmci_handle_arr_append_entry(
+ 					&dst_context->pending_doorbell_array,
+ 					handle);
+-
+-				ctx_signal_notify(dst_context);
+-				wake_up(&dst_context->host_context.wait_queue);
+-
++				if (result == VMCI_SUCCESS) {
++					ctx_signal_notify(dst_context);
++					wake_up(&dst_context->host_context.wait_queue);
++				}
++			} else {
++				result = VMCI_SUCCESS;
+ 			}
+-			result = VMCI_SUCCESS;
+ 		}
+ 		spin_unlock(&dst_context->lock);
+ 	}
+@@ -1126,13 +1138,11 @@ int vmci_ctx_qp_create(struct vmci_ctx *context, struct vmci_handle handle)
+ 	if (context == NULL || vmci_handle_is_invalid(handle))
+ 		return VMCI_ERROR_INVALID_ARGS;
+ 
+-	if (!vmci_handle_arr_has_entry(context->queue_pair_array, handle)) {
+-		vmci_handle_arr_append_entry(&context->queue_pair_array,
+-					     handle);
+-		result = VMCI_SUCCESS;
+-	} else {
++	if (!vmci_handle_arr_has_entry(context->queue_pair_array, handle))
++		result = vmci_handle_arr_append_entry(
++			&context->queue_pair_array, handle);
++	else
+ 		result = VMCI_ERROR_DUPLICATE_ENTRY;
+-	}
+ 
+ 	return result;
+ }
+diff --git a/drivers/misc/vmw_vmci/vmci_handle_array.c b/drivers/misc/vmw_vmci/vmci_handle_array.c
+index 344973a0fb0a..917e18a8af95 100644
+--- a/drivers/misc/vmw_vmci/vmci_handle_array.c
++++ b/drivers/misc/vmw_vmci/vmci_handle_array.c
+@@ -16,24 +16,29 @@
+ #include <linux/slab.h>
+ #include "vmci_handle_array.h"
+ 
+-static size_t handle_arr_calc_size(size_t capacity)
++static size_t handle_arr_calc_size(u32 capacity)
+ {
+-	return sizeof(struct vmci_handle_arr) +
++	return VMCI_HANDLE_ARRAY_HEADER_SIZE +
+ 	    capacity * sizeof(struct vmci_handle);
+ }
+ 
+-struct vmci_handle_arr *vmci_handle_arr_create(size_t capacity)
++struct vmci_handle_arr *vmci_handle_arr_create(u32 capacity, u32 max_capacity)
+ {
+ 	struct vmci_handle_arr *array;
+ 
++	if (max_capacity == 0 || capacity > max_capacity)
++		return NULL;
++
+ 	if (capacity == 0)
+-		capacity = VMCI_HANDLE_ARRAY_DEFAULT_SIZE;
++		capacity = min((u32)VMCI_HANDLE_ARRAY_DEFAULT_CAPACITY,
++			       max_capacity);
+ 
+ 	array = kmalloc(handle_arr_calc_size(capacity), GFP_ATOMIC);
+ 	if (!array)
+ 		return NULL;
+ 
+ 	array->capacity = capacity;
++	array->max_capacity = max_capacity;
+ 	array->size = 0;
+ 
+ 	return array;
+@@ -44,27 +49,34 @@ void vmci_handle_arr_destroy(struct vmci_handle_arr *array)
+ 	kfree(array);
+ }
+ 
+-void vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr,
+-				  struct vmci_handle handle)
++int vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr,
++				 struct vmci_handle handle)
+ {
+ 	struct vmci_handle_arr *array = *array_ptr;
+ 
+ 	if (unlikely(array->size >= array->capacity)) {
+ 		/* reallocate. */
+ 		struct vmci_handle_arr *new_array;
+-		size_t new_capacity = array->capacity * VMCI_ARR_CAP_MULT;
+-		size_t new_size = handle_arr_calc_size(new_capacity);
++		u32 capacity_bump = min(array->max_capacity - array->capacity,
++					array->capacity);
++		size_t new_size = handle_arr_calc_size(array->capacity +
++						       capacity_bump);
++
++		if (array->size >= array->max_capacity)
++			return VMCI_ERROR_NO_MEM;
+ 
+ 		new_array = krealloc(array, new_size, GFP_ATOMIC);
+ 		if (!new_array)
+-			return;
++			return VMCI_ERROR_NO_MEM;
+ 
+-		new_array->capacity = new_capacity;
++		new_array->capacity += capacity_bump;
+ 		*array_ptr = array = new_array;
+ 	}
+ 
+ 	array->entries[array->size] = handle;
+ 	array->size++;
++
++	return VMCI_SUCCESS;
+ }
+ 
+ /*
+@@ -74,7 +86,7 @@ struct vmci_handle vmci_handle_arr_remove_entry(struct vmci_handle_arr *array,
+ 						struct vmci_handle entry_handle)
+ {
+ 	struct vmci_handle handle = VMCI_INVALID_HANDLE;
+-	size_t i;
++	u32 i;
+ 
+ 	for (i = 0; i < array->size; i++) {
+ 		if (vmci_handle_is_equal(array->entries[i], entry_handle)) {
+@@ -109,7 +121,7 @@ struct vmci_handle vmci_handle_arr_remove_tail(struct vmci_handle_arr *array)
+  * Handle at given index, VMCI_INVALID_HANDLE if invalid index.
+  */
+ struct vmci_handle
+-vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, size_t index)
++vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, u32 index)
+ {
+ 	if (unlikely(index >= array->size))
+ 		return VMCI_INVALID_HANDLE;
+@@ -120,7 +132,7 @@ vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, size_t index)
+ bool vmci_handle_arr_has_entry(const struct vmci_handle_arr *array,
+ 			       struct vmci_handle entry_handle)
+ {
+-	size_t i;
++	u32 i;
+ 
+ 	for (i = 0; i < array->size; i++)
+ 		if (vmci_handle_is_equal(array->entries[i], entry_handle))
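
What the reworked append path above implements is capacity growth that
doubles until it hits a hard ceiling, then takes whatever headroom
remains. A minimal userspace sketch of just that policy (hypothetical
helper name; the kernel code folds this into the krealloc() call):

    #include <stddef.h>

    /* Returns the next capacity, or 0 once the ceiling is reached
     * (the hunk returns VMCI_ERROR_NO_MEM in that case). */
    static size_t bounded_grow(size_t capacity, size_t max_capacity)
    {
            size_t headroom = max_capacity - capacity;
            size_t bump = headroom < capacity ? headroom : capacity;

            return bump ? capacity + bump : 0;
    }

Starting from the new default of 6 with a ceiling of 1000, this yields
12, 24, 48, ..., 768, then clamps the final step to exactly 1000 rather
than overshooting.
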
+diff --git a/drivers/misc/vmw_vmci/vmci_handle_array.h b/drivers/misc/vmw_vmci/vmci_handle_array.h
+index b5f3a7f98cf1..0fc58597820e 100644
+--- a/drivers/misc/vmw_vmci/vmci_handle_array.h
++++ b/drivers/misc/vmw_vmci/vmci_handle_array.h
+@@ -17,32 +17,41 @@
+ #define _VMCI_HANDLE_ARRAY_H_
+ 
+ #include <linux/vmw_vmci_defs.h>
++#include <linux/limits.h>
+ #include <linux/types.h>
+ 
+-#define VMCI_HANDLE_ARRAY_DEFAULT_SIZE 4
+-#define VMCI_ARR_CAP_MULT 2	/* Array capacity multiplier */
+-
+ struct vmci_handle_arr {
+-	size_t capacity;
+-	size_t size;
++	u32 capacity;
++	u32 max_capacity;
++	u32 size;
++	u32 pad;
+ 	struct vmci_handle entries[];
+ };
+ 
+-struct vmci_handle_arr *vmci_handle_arr_create(size_t capacity);
++#define VMCI_HANDLE_ARRAY_HEADER_SIZE				\
++	offsetof(struct vmci_handle_arr, entries)
++/* Select a default capacity that results in a 64 byte sized array */
++#define VMCI_HANDLE_ARRAY_DEFAULT_CAPACITY			6
++/* Make sure that the max array size can be expressed by a u32 */
++#define VMCI_HANDLE_ARRAY_MAX_CAPACITY				\
++	((U32_MAX - VMCI_HANDLE_ARRAY_HEADER_SIZE - 1) /	\
++	sizeof(struct vmci_handle))
++
++struct vmci_handle_arr *vmci_handle_arr_create(u32 capacity, u32 max_capacity);
+ void vmci_handle_arr_destroy(struct vmci_handle_arr *array);
+-void vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr,
+-				  struct vmci_handle handle);
++int vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr,
++				 struct vmci_handle handle);
+ struct vmci_handle vmci_handle_arr_remove_entry(struct vmci_handle_arr *array,
+ 						struct vmci_handle
+ 						entry_handle);
+ struct vmci_handle vmci_handle_arr_remove_tail(struct vmci_handle_arr *array);
+ struct vmci_handle
+-vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, size_t index);
++vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, u32 index);
+ bool vmci_handle_arr_has_entry(const struct vmci_handle_arr *array,
+ 			       struct vmci_handle entry_handle);
+ struct vmci_handle *vmci_handle_arr_get_handles(struct vmci_handle_arr *array);
+ 
+-static inline size_t vmci_handle_arr_get_size(
++static inline u32 vmci_handle_arr_get_size(
+ 	const struct vmci_handle_arr *array)
+ {
+ 	return array->size;
+diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
+index 3e786ba204c3..671bfcceea6a 100644
+--- a/drivers/mmc/core/mmc.c
++++ b/drivers/mmc/core/mmc.c
+@@ -1212,13 +1212,13 @@ static int mmc_select_hs400(struct mmc_card *card)
+ 	mmc_set_timing(host, MMC_TIMING_MMC_HS400);
+ 	mmc_set_bus_speed(card);
+ 
++	if (host->ops->hs400_complete)
++		host->ops->hs400_complete(host);
++
+ 	err = mmc_switch_status(card);
+ 	if (err)
+ 		goto out_err;
+ 
+-	if (host->ops->hs400_complete)
+-		host->ops->hs400_complete(host);
+-
+ 	return 0;
+ 
+ out_err:
+diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c
+index f97c628eb2ad..f2fe344593d5 100644
+--- a/drivers/net/can/flexcan.c
++++ b/drivers/net/can/flexcan.c
+@@ -1583,9 +1583,6 @@ static int flexcan_probe(struct platform_device *pdev)
+ 			dev_dbg(&pdev->dev, "failed to setup stop-mode\n");
+ 	}
+ 
+-	dev_info(&pdev->dev, "device registered (reg_base=%p, irq=%d)\n",
+-		 priv->regs, dev->irq);
+-
+ 	return 0;
+ 
+  failed_register:
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index 9b449400376b..deb274a19ba0 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -822,6 +822,27 @@ static int m_can_poll(struct napi_struct *napi, int quota)
+ 	if (!irqstatus)
+ 		goto end;
+ 
++	/* Errata workaround for issue "Needless activation of MRAF irq"
++	 * During frame reception while the MCAN is in Error Passive state
++	 * and the Receive Error Counter has the value MCAN_ECR.REC = 127,
++	 * it may happen that MCAN_IR.MRAF is set although there was no
++	 * Message RAM access failure.
++	 * If MCAN_IR.MRAF is enabled, an interrupt to the Host CPU is generated.
++	 * The Message RAM Access Failure interrupt routine needs to check
++	 * whether MCAN_ECR.RP = '1' and MCAN_ECR.REC = 127.
++	 * In this case, reset MCAN_IR.MRAF. No further action is required.
++	 */
++	if ((priv->version <= 31) && (irqstatus & IR_MRAF) &&
++	    (m_can_read(priv, M_CAN_ECR) & ECR_RP)) {
++		struct can_berr_counter bec;
++
++		__m_can_get_berr_counter(dev, &bec);
++		if (bec.rxerr == 127) {
++			m_can_write(priv, M_CAN_IR, IR_MRAF);
++			irqstatus &= ~IR_MRAF;
++		}
++	}
++
+ 	psr = m_can_read(priv, M_CAN_PSR);
+ 	if (irqstatus & IR_ERR_STATE)
+ 		work_done += m_can_handle_state_errors(dev, psr);
+diff --git a/drivers/net/can/spi/Kconfig b/drivers/net/can/spi/Kconfig
+index 8f2e0dd7b756..792e9c6c4a2f 100644
+--- a/drivers/net/can/spi/Kconfig
++++ b/drivers/net/can/spi/Kconfig
+@@ -8,9 +8,10 @@ config CAN_HI311X
+ 	  Driver for the Holt HI311x SPI CAN controllers.
+ 
+ config CAN_MCP251X
+-	tristate "Microchip MCP251x SPI CAN controllers"
++	tristate "Microchip MCP251x and MCP25625 SPI CAN controllers"
+ 	depends on HAS_DMA
+ 	---help---
+-	  Driver for the Microchip MCP251x SPI CAN controllers.
++	  Driver for the Microchip MCP251x and MCP25625 SPI CAN
++	  controllers.
+ 
+ endmenu
+diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c
+index e90817608645..da64e71a62ee 100644
+--- a/drivers/net/can/spi/mcp251x.c
++++ b/drivers/net/can/spi/mcp251x.c
+@@ -1,5 +1,5 @@
+ /*
+- * CAN bus driver for Microchip 251x CAN Controller with SPI Interface
++ * CAN bus driver for Microchip 251x/25625 CAN Controller with SPI Interface
+  *
+  * MCP2510 support and bug fixes by Christian Pellegrin
+  * <chripell@evolware.org>
+@@ -41,7 +41,7 @@
+  * static struct spi_board_info spi_board_info[] = {
+  *         {
+  *                 .modalias = "mcp2510",
+- *			// or "mcp2515" depending on your controller
++ *			// "mcp2515" or "mcp25625" depending on your controller
+  *                 .platform_data = &mcp251x_info,
+  *                 .irq = IRQ_EINT13,
+  *                 .max_speed_hz = 2*1000*1000,
+@@ -238,6 +238,7 @@ static const struct can_bittiming_const mcp251x_bittiming_const = {
+ enum mcp251x_model {
+ 	CAN_MCP251X_MCP2510	= 0x2510,
+ 	CAN_MCP251X_MCP2515	= 0x2515,
++	CAN_MCP251X_MCP25625	= 0x25625,
+ };
+ 
+ struct mcp251x_priv {
+@@ -280,7 +281,6 @@ static inline int mcp251x_is_##_model(struct spi_device *spi) \
+ }
+ 
+ MCP251X_IS(2510);
+-MCP251X_IS(2515);
+ 
+ static void mcp251x_clean(struct net_device *net)
+ {
+@@ -639,7 +639,7 @@ static int mcp251x_hw_reset(struct spi_device *spi)
+ 
+ 	/* Wait for oscillator startup timer after reset */
+ 	mdelay(MCP251X_OST_DELAY_MS);
+-	
++
+ 	reg = mcp251x_read_reg(spi, CANSTAT);
+ 	if ((reg & CANCTRL_REQOP_MASK) != CANCTRL_REQOP_CONF)
+ 		return -ENODEV;
+@@ -820,9 +820,8 @@ static irqreturn_t mcp251x_can_ist(int irq, void *dev_id)
+ 		/* receive buffer 0 */
+ 		if (intf & CANINTF_RX0IF) {
+ 			mcp251x_hw_rx(spi, 0);
+-			/*
+-			 * Free one buffer ASAP
+-			 * (The MCP2515 does this automatically.)
++			/* Free one buffer ASAP
++			 * (The MCP2515/25625 does this automatically.)
+ 			 */
+ 			if (mcp251x_is_2510(spi))
+ 				mcp251x_write_bits(spi, CANINTF, CANINTF_RX0IF, 0x00);
+@@ -831,7 +830,7 @@ static irqreturn_t mcp251x_can_ist(int irq, void *dev_id)
+ 		/* receive buffer 1 */
+ 		if (intf & CANINTF_RX1IF) {
+ 			mcp251x_hw_rx(spi, 1);
+-			/* the MCP2515 does this automatically */
++			/* The MCP2515/25625 does this automatically. */
+ 			if (mcp251x_is_2510(spi))
+ 				clear_intf |= CANINTF_RX1IF;
+ 		}
+@@ -1006,6 +1005,10 @@ static const struct of_device_id mcp251x_of_match[] = {
+ 		.compatible	= "microchip,mcp2515",
+ 		.data		= (void *)CAN_MCP251X_MCP2515,
+ 	},
++	{
++		.compatible	= "microchip,mcp25625",
++		.data		= (void *)CAN_MCP251X_MCP25625,
++	},
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(of, mcp251x_of_match);
+@@ -1019,6 +1022,10 @@ static const struct spi_device_id mcp251x_id_table[] = {
+ 		.name		= "mcp2515",
+ 		.driver_data	= (kernel_ulong_t)CAN_MCP251X_MCP2515,
+ 	},
++	{
++		.name		= "mcp25625",
++		.driver_data	= (kernel_ulong_t)CAN_MCP251X_MCP25625,
++	},
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(spi, mcp251x_id_table);
+@@ -1259,5 +1266,5 @@ module_spi_driver(mcp251x_can_driver);
+ 
+ MODULE_AUTHOR("Chris Elston <celston@katalix.com>, "
+ 	      "Christian Pellegrin <chripell@evolware.org>");
+-MODULE_DESCRIPTION("Microchip 251x CAN driver");
++MODULE_DESCRIPTION("Microchip 251x/25625 CAN driver");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/net/dsa/mv88e6xxx/global1_vtu.c b/drivers/net/dsa/mv88e6xxx/global1_vtu.c
+index 058326924f3e..7a6667e0b9f9 100644
+--- a/drivers/net/dsa/mv88e6xxx/global1_vtu.c
++++ b/drivers/net/dsa/mv88e6xxx/global1_vtu.c
+@@ -419,7 +419,7 @@ int mv88e6185_g1_vtu_loadpurge(struct mv88e6xxx_chip *chip,
+ 		 * VTU DBNum[7:4] are located in VTU Operation 11:8
+ 		 */
+ 		op |= entry->fid & 0x000f;
+-		op |= (entry->fid & 0x00f0) << 8;
++		op |= (entry->fid & 0x00f0) << 4;
+ 	}
+ 
+ 	return mv88e6xxx_g1_vtu_op(chip, op);
+diff --git a/drivers/net/ethernet/8390/Kconfig b/drivers/net/ethernet/8390/Kconfig
+index f2f0264c58ba..443b34e2725f 100644
+--- a/drivers/net/ethernet/8390/Kconfig
++++ b/drivers/net/ethernet/8390/Kconfig
+@@ -49,7 +49,7 @@ config XSURF100
+ 	tristate "Amiga XSurf 100 AX88796/NE2000 clone support"
+ 	depends on ZORRO
+ 	select AX88796
+-	select ASIX_PHY
++	select AX88796B_PHY
+ 	help
+ 	  This driver is for the Individual Computers X-Surf 100 Ethernet
+ 	  card (based on the Asix AX88796 chip). If you have such a card,
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
+index 749d0ef44371..59f227fcc68b 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
+@@ -1609,7 +1609,8 @@ static int bnx2x_get_module_info(struct net_device *dev,
+ 	}
+ 
+ 	if (!sff8472_comp ||
+-	    (diag_type & SFP_EEPROM_DIAG_ADDR_CHANGE_REQ)) {
++	    (diag_type & SFP_EEPROM_DIAG_ADDR_CHANGE_REQ) ||
++	    !(diag_type & SFP_EEPROM_DDM_IMPLEMENTED)) {
+ 		modinfo->type = ETH_MODULE_SFF_8079;
+ 		modinfo->eeprom_len = ETH_MODULE_SFF_8079_LEN;
+ 	} else {
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h
+index b7d251108c19..7115f5025664 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h
+@@ -62,6 +62,7 @@
+ #define SFP_EEPROM_DIAG_TYPE_ADDR		0x5c
+ #define SFP_EEPROM_DIAG_TYPE_SIZE		1
+ #define SFP_EEPROM_DIAG_ADDR_CHANGE_REQ		(1<<2)
++#define SFP_EEPROM_DDM_IMPLEMENTED		(1<<6)
+ #define SFP_EEPROM_SFF_8472_COMP_ADDR		0x5e
+ #define SFP_EEPROM_SFF_8472_COMP_SIZE		1
+ 
+diff --git a/drivers/net/ethernet/cavium/liquidio/lio_core.c b/drivers/net/ethernet/cavium/liquidio/lio_core.c
+index 1c50c10b5a16..d7e805749a5b 100644
+--- a/drivers/net/ethernet/cavium/liquidio/lio_core.c
++++ b/drivers/net/ethernet/cavium/liquidio/lio_core.c
+@@ -964,7 +964,7 @@ static void liquidio_schedule_droq_pkt_handlers(struct octeon_device *oct)
+ 
+ 			if (droq->ops.poll_mode) {
+ 				droq->ops.napi_fn(droq);
+-				oct_priv->napi_mask |= (1 << oq_no);
++				oct_priv->napi_mask |= BIT_ULL(oq_no);
+ 			} else {
+ 				tasklet_schedule(&oct_priv->droq_tasklet);
+ 			}
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 3dfb2d131eb7..0e4029c54241 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -438,9 +438,10 @@ static int reset_rx_pools(struct ibmvnic_adapter *adapter)
+ 		if (rx_pool->buff_size != be64_to_cpu(size_array[i])) {
+ 			free_long_term_buff(adapter, &rx_pool->long_term_buff);
+ 			rx_pool->buff_size = be64_to_cpu(size_array[i]);
+-			alloc_long_term_buff(adapter, &rx_pool->long_term_buff,
+-					     rx_pool->size *
+-					     rx_pool->buff_size);
++			rc = alloc_long_term_buff(adapter,
++						  &rx_pool->long_term_buff,
++						  rx_pool->size *
++						  rx_pool->buff_size);
+ 		} else {
+ 			rc = reset_long_term_buff(adapter,
+ 						  &rx_pool->long_term_buff);
+@@ -706,9 +707,9 @@ static int init_tx_pools(struct net_device *netdev)
+ 			return rc;
+ 		}
+ 
+-		init_one_tx_pool(netdev, &adapter->tso_pool[i],
+-				 IBMVNIC_TSO_BUFS,
+-				 IBMVNIC_TSO_BUF_SZ);
++		rc = init_one_tx_pool(netdev, &adapter->tso_pool[i],
++				      IBMVNIC_TSO_BUFS,
++				      IBMVNIC_TSO_BUF_SZ);
+ 		if (rc) {
+ 			release_tx_pools(adapter);
+ 			return rc;
+@@ -1751,7 +1752,8 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+ 
+ 	ibmvnic_cleanup(netdev);
+ 
+-	if (adapter->reset_reason != VNIC_RESET_MOBILITY &&
++	if (reset_state == VNIC_OPEN &&
++	    adapter->reset_reason != VNIC_RESET_MOBILITY &&
+ 	    adapter->reset_reason != VNIC_RESET_FAILOVER) {
+ 		rc = __ibmvnic_close(netdev);
+ 		if (rc)
+@@ -1850,6 +1852,9 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+ 		return 0;
+ 	}
+ 
++	/* refresh device's multicast list */
++	ibmvnic_set_multi(netdev);
++
+ 	/* kick napi */
+ 	for (i = 0; i < adapter->req_rx_queues; i++)
+ 		napi_schedule(&adapter->napi[i]);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h
+index eb4c5e8964cd..5865597577d6 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/reg.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h
+@@ -997,7 +997,7 @@ static inline void mlxsw_reg_spaft_pack(char *payload, u8 local_port,
+ 	MLXSW_REG_ZERO(spaft, payload);
+ 	mlxsw_reg_spaft_local_port_set(payload, local_port);
+ 	mlxsw_reg_spaft_allow_untagged_set(payload, allow_untagged);
+-	mlxsw_reg_spaft_allow_prio_tagged_set(payload, true);
++	mlxsw_reg_spaft_allow_prio_tagged_set(payload, allow_untagged);
+ 	mlxsw_reg_spaft_allow_tagged_set(payload, true);
+ }
+ 
+diff --git a/drivers/net/phy/Kconfig b/drivers/net/phy/Kconfig
+index 520657945b82..b0c13f8c2b62 100644
+--- a/drivers/net/phy/Kconfig
++++ b/drivers/net/phy/Kconfig
+@@ -242,7 +242,7 @@ config AQUANTIA_PHY
+ 	---help---
+ 	  Currently supports the Aquantia AQ1202, AQ2104, AQR105, AQR405
+ 
+-config ASIX_PHY
++config AX88796B_PHY
+ 	tristate "Asix PHYs"
+ 	help
+ 	  Currently supports the Asix Electronics PHY found in the X-Surf 100
+diff --git a/drivers/net/phy/Makefile b/drivers/net/phy/Makefile
+index ece5dae67174..6d44ab91fbf6 100644
+--- a/drivers/net/phy/Makefile
++++ b/drivers/net/phy/Makefile
+@@ -51,7 +51,7 @@ ifdef CONFIG_HWMON
+ aquantia-objs			+= aquantia_hwmon.o
+ endif
+ obj-$(CONFIG_AQUANTIA_PHY)	+= aquantia.o
+-obj-$(CONFIG_ASIX_PHY)		+= asix.o
++obj-$(CONFIG_AX88796B_PHY)	+= ax88796b.o
+ obj-$(CONFIG_AT803X_PHY)	+= at803x.o
+ obj-$(CONFIG_BCM63XX_PHY)	+= bcm63xx.o
+ obj-$(CONFIG_BCM7XXX_PHY)	+= bcm7xxx.o
+diff --git a/drivers/net/phy/asix.c b/drivers/net/phy/asix.c
+deleted file mode 100644
+index f14ba5366b91..000000000000
+--- a/drivers/net/phy/asix.c
++++ /dev/null
+@@ -1,57 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0+
+-/* Driver for Asix PHYs
+- *
+- * Author: Michael Schmitz <schmitzmic@gmail.com>
+- */
+-#include <linux/kernel.h>
+-#include <linux/errno.h>
+-#include <linux/init.h>
+-#include <linux/module.h>
+-#include <linux/mii.h>
+-#include <linux/phy.h>
+-
+-#define PHY_ID_ASIX_AX88796B		0x003b1841
+-
+-MODULE_DESCRIPTION("Asix PHY driver");
+-MODULE_AUTHOR("Michael Schmitz <schmitzmic@gmail.com>");
+-MODULE_LICENSE("GPL");
+-
+-/**
+- * asix_soft_reset - software reset the PHY via BMCR_RESET bit
+- * @phydev: target phy_device struct
+- *
+- * Description: Perform a software PHY reset using the standard
+- * BMCR_RESET bit and poll for the reset bit to be cleared.
+- * Toggle BMCR_RESET bit off to accommodate broken AX8796B PHY implementation
+- * such as used on the Individual Computers' X-Surf 100 Zorro card.
+- *
+- * Returns: 0 on success, < 0 on failure
+- */
+-static int asix_soft_reset(struct phy_device *phydev)
+-{
+-	int ret;
+-
+-	/* Asix PHY won't reset unless reset bit toggles */
+-	ret = phy_write(phydev, MII_BMCR, 0);
+-	if (ret < 0)
+-		return ret;
+-
+-	return genphy_soft_reset(phydev);
+-}
+-
+-static struct phy_driver asix_driver[] = { {
+-	.phy_id		= PHY_ID_ASIX_AX88796B,
+-	.name		= "Asix Electronics AX88796B",
+-	.phy_id_mask	= 0xfffffff0,
+-	.features	= PHY_BASIC_FEATURES,
+-	.soft_reset	= asix_soft_reset,
+-} };
+-
+-module_phy_driver(asix_driver);
+-
+-static struct mdio_device_id __maybe_unused asix_tbl[] = {
+-	{ PHY_ID_ASIX_AX88796B, 0xfffffff0 },
+-	{ }
+-};
+-
+-MODULE_DEVICE_TABLE(mdio, asix_tbl);
+diff --git a/drivers/net/phy/ax88796b.c b/drivers/net/phy/ax88796b.c
+new file mode 100644
+index 000000000000..f14ba5366b91
+--- /dev/null
++++ b/drivers/net/phy/ax88796b.c
+@@ -0,0 +1,57 @@
++// SPDX-License-Identifier: GPL-2.0+
++/* Driver for Asix PHYs
++ *
++ * Author: Michael Schmitz <schmitzmic@gmail.com>
++ */
++#include <linux/kernel.h>
++#include <linux/errno.h>
++#include <linux/init.h>
++#include <linux/module.h>
++#include <linux/mii.h>
++#include <linux/phy.h>
++
++#define PHY_ID_ASIX_AX88796B		0x003b1841
++
++MODULE_DESCRIPTION("Asix PHY driver");
++MODULE_AUTHOR("Michael Schmitz <schmitzmic@gmail.com>");
++MODULE_LICENSE("GPL");
++
++/**
++ * asix_soft_reset - software reset the PHY via BMCR_RESET bit
++ * @phydev: target phy_device struct
++ *
++ * Description: Perform a software PHY reset using the standard
++ * BMCR_RESET bit and poll for the reset bit to be cleared.
++ * Toggle BMCR_RESET bit off to accommodate broken AX88796B PHY implementation
++ * such as used on the Individual Computers' X-Surf 100 Zorro card.
++ *
++ * Returns: 0 on success, < 0 on failure
++ */
++static int asix_soft_reset(struct phy_device *phydev)
++{
++	int ret;
++
++	/* Asix PHY won't reset unless reset bit toggles */
++	ret = phy_write(phydev, MII_BMCR, 0);
++	if (ret < 0)
++		return ret;
++
++	return genphy_soft_reset(phydev);
++}
++
++static struct phy_driver asix_driver[] = { {
++	.phy_id		= PHY_ID_ASIX_AX88796B,
++	.name		= "Asix Electronics AX88796B",
++	.phy_id_mask	= 0xfffffff0,
++	.features	= PHY_BASIC_FEATURES,
++	.soft_reset	= asix_soft_reset,
++} };
++
++module_phy_driver(asix_driver);
++
++static struct mdio_device_id __maybe_unused asix_tbl[] = {
++	{ PHY_ID_ASIX_AX88796B, 0xfffffff0 },
++	{ }
++};
++
++MODULE_DEVICE_TABLE(mdio, asix_tbl);
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index e657d8947125..128c8a327d8e 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -153,7 +153,7 @@ static bool qmimux_has_slaves(struct usbnet *dev)
+ 
+ static int qmimux_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ {
+-	unsigned int len, offset = 0;
++	unsigned int len, offset = 0, pad_len, pkt_len;
+ 	struct qmimux_hdr *hdr;
+ 	struct net_device *net;
+ 	struct sk_buff *skbn;
+@@ -171,10 +171,16 @@ static int qmimux_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ 		if (hdr->pad & 0x80)
+ 			goto skip;
+ 
++		/* extract padding length and check for valid length info */
++		pad_len = hdr->pad & 0x3f;
++		if (len == 0 || pad_len >= len)
++			goto skip;
++		pkt_len = len - pad_len;
++
+ 		net = qmimux_find_dev(dev, hdr->mux_id);
+ 		if (!net)
+ 			goto skip;
+-		skbn = netdev_alloc_skb(net, len);
++		skbn = netdev_alloc_skb(net, pkt_len);
+ 		if (!skbn)
+ 			return 0;
+ 		skbn->dev = net;
+@@ -191,7 +197,7 @@ static int qmimux_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ 			goto skip;
+ 		}
+ 
+-		skb_put_data(skbn, skb->data + offset + qmimux_hdr_sz, len);
++		skb_put_data(skbn, skb->data + offset + qmimux_hdr_sz, pkt_len);
+ 		if (netif_rx(skbn) != NET_RX_SUCCESS)
+ 			return 0;
+ 
+@@ -241,13 +247,14 @@ out_free_newdev:
+ 	return err;
+ }
+ 
+-static void qmimux_unregister_device(struct net_device *dev)
++static void qmimux_unregister_device(struct net_device *dev,
++				     struct list_head *head)
+ {
+ 	struct qmimux_priv *priv = netdev_priv(dev);
+ 	struct net_device *real_dev = priv->real_dev;
+ 
+ 	netdev_upper_dev_unlink(real_dev, dev);
+-	unregister_netdevice(dev);
++	unregister_netdevice_queue(dev, head);
+ 
+ 	/* Get rid of the reference to real_dev */
+ 	dev_put(real_dev);
+@@ -356,8 +363,8 @@ static ssize_t add_mux_store(struct device *d,  struct device_attribute *attr, c
+ 	if (kstrtou8(buf, 0, &mux_id))
+ 		return -EINVAL;
+ 
+-	/* mux_id [1 - 0x7f] range empirically found */
+-	if (mux_id < 1 || mux_id > 0x7f)
++	/* mux_id [1 - 254] for compatibility with ip(8) and the rmnet driver */
++	if (mux_id < 1 || mux_id > 254)
+ 		return -EINVAL;
+ 
+ 	if (!rtnl_trylock())
+@@ -418,7 +425,7 @@ static ssize_t del_mux_store(struct device *d,  struct device_attribute *attr, c
+ 		ret = -EINVAL;
+ 		goto err;
+ 	}
+-	qmimux_unregister_device(del_dev);
++	qmimux_unregister_device(del_dev, NULL);
+ 
+ 	if (!qmimux_has_slaves(dev))
+ 		info->flags &= ~QMI_WWAN_FLAG_MUX;
+@@ -1428,6 +1435,7 @@ static void qmi_wwan_disconnect(struct usb_interface *intf)
+ 	struct qmi_wwan_state *info;
+ 	struct list_head *iter;
+ 	struct net_device *ldev;
++	LIST_HEAD(list);
+ 
+ 	/* called twice if separate control and data intf */
+ 	if (!dev)
+@@ -1440,8 +1448,9 @@ static void qmi_wwan_disconnect(struct usb_interface *intf)
+ 		}
+ 		rcu_read_lock();
+ 		netdev_for_each_upper_dev_rcu(dev->net, ldev, iter)
+-			qmimux_unregister_device(ldev);
++			qmimux_unregister_device(ldev, &list);
+ 		rcu_read_unlock();
++		unregister_netdevice_many(&list);
+ 		rtnl_unlock();
+ 		info->flags &= ~QMI_WWAN_FLAG_MUX;
+ 	}
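
The disconnect rework above is the standard batched-teardown pattern:
queue every slave netdev on a local list, then let
unregister_netdevice_many() pay the RCU synchronization cost once for
the whole batch instead of once per device. Reduced to its skeleton
(kernel context, not a standalone program; the list and netdev helpers
are the real net core APIs, the surrounding names come from the hunk):

    LIST_HEAD(list);

    rtnl_lock();
    rcu_read_lock();
    /* queue each mux slave; nothing is torn down yet */
    netdev_for_each_upper_dev_rcu(dev->net, ldev, iter)
            qmimux_unregister_device(ldev, &list);
    rcu_read_unlock();
    /* one grace period for the entire batch */
    unregister_netdevice_many(&list);
    rtnl_unlock();
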
+diff --git a/drivers/net/wireless/ath/carl9170/usb.c b/drivers/net/wireless/ath/carl9170/usb.c
+index e7c3f3b8457d..99f1897a775d 100644
+--- a/drivers/net/wireless/ath/carl9170/usb.c
++++ b/drivers/net/wireless/ath/carl9170/usb.c
+@@ -128,6 +128,8 @@ static const struct usb_device_id carl9170_usb_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(usb, carl9170_usb_ids);
+ 
++static struct usb_driver carl9170_driver;
++
+ static void carl9170_usb_submit_data_urb(struct ar9170 *ar)
+ {
+ 	struct urb *urb;
+@@ -966,32 +968,28 @@ err_out:
+ 
+ static void carl9170_usb_firmware_failed(struct ar9170 *ar)
+ {
+-	struct device *parent = ar->udev->dev.parent;
+-	struct usb_device *udev;
+-
+-	/*
+-	 * Store a copy of the usb_device pointer locally.
+-	 * This is because device_release_driver initiates
+-	 * carl9170_usb_disconnect, which in turn frees our
+-	 * driver context (ar).
++	/* Store copies of the usb_interface and usb_device pointers locally.
++	 * This is because release_driver initiates carl9170_usb_disconnect,
++	 * which in turn frees our driver context (ar).
+ 	 */
+-	udev = ar->udev;
++	struct usb_interface *intf = ar->intf;
++	struct usb_device *udev = ar->udev;
+ 
+ 	complete(&ar->fw_load_wait);
++	/* at this point 'ar' could already be freed. Don't use it anymore */
++	ar = NULL;
+ 
+ 	/* unbind anything failed */
+-	if (parent)
+-		device_lock(parent);
+-
+-	device_release_driver(&udev->dev);
+-	if (parent)
+-		device_unlock(parent);
++	usb_lock_device(udev);
++	usb_driver_release_interface(&carl9170_driver, intf);
++	usb_unlock_device(udev);
+ 
+-	usb_put_dev(udev);
++	usb_put_intf(intf);
+ }
+ 
+ static void carl9170_usb_firmware_finish(struct ar9170 *ar)
+ {
++	struct usb_interface *intf = ar->intf;
+ 	int err;
+ 
+ 	err = carl9170_parse_firmware(ar);
+@@ -1009,7 +1007,7 @@ static void carl9170_usb_firmware_finish(struct ar9170 *ar)
+ 		goto err_unrx;
+ 
+ 	complete(&ar->fw_load_wait);
+-	usb_put_dev(ar->udev);
++	usb_put_intf(intf);
+ 	return;
+ 
+ err_unrx:
+@@ -1052,7 +1050,6 @@ static int carl9170_usb_probe(struct usb_interface *intf,
+ 		return PTR_ERR(ar);
+ 
+ 	udev = interface_to_usbdev(intf);
+-	usb_get_dev(udev);
+ 	ar->udev = udev;
+ 	ar->intf = intf;
+ 	ar->features = id->driver_info;
+@@ -1094,15 +1091,14 @@ static int carl9170_usb_probe(struct usb_interface *intf,
+ 	atomic_set(&ar->rx_anch_urbs, 0);
+ 	atomic_set(&ar->rx_pool_urbs, 0);
+ 
+-	usb_get_dev(ar->udev);
++	usb_get_intf(intf);
+ 
+ 	carl9170_set_state(ar, CARL9170_STOPPED);
+ 
+ 	err = request_firmware_nowait(THIS_MODULE, 1, CARL9170FW_NAME,
+ 		&ar->udev->dev, GFP_KERNEL, ar, carl9170_usb_firmware_step2);
+ 	if (err) {
+-		usb_put_dev(udev);
+-		usb_put_dev(udev);
++		usb_put_intf(intf);
+ 		carl9170_free(ar);
+ 	}
+ 	return err;
+@@ -1131,7 +1127,6 @@ static void carl9170_usb_disconnect(struct usb_interface *intf)
+ 
+ 	carl9170_release_firmware(ar);
+ 	carl9170_free(ar);
+-	usb_put_dev(udev);
+ }
+ 
+ #ifdef CONFIG_PM
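
Both this carl9170 change and the p54usb change further down make the
same correction: the firmware-load callback pins the usb_interface
(usb_get_intf()/usb_put_intf()) rather than the usb_device, and on
failure releases its own binding with usb_driver_release_interface()
instead of calling device_release_driver() on the device. The shape of
the failure path, condensed from the hunk (kernel context, not
standalone; local copies are taken first because completing
fw_load_wait lets disconnect free 'ar'):

    struct usb_interface *intf = ar->intf;
    struct usb_device *udev = ar->udev;

    complete(&ar->fw_load_wait);
    ar = NULL;                      /* may already be freed */

    usb_lock_device(udev);
    usb_driver_release_interface(&carl9170_driver, intf);
    usb_unlock_device(udev);

    usb_put_intf(intf);             /* pairs with usb_get_intf() in probe */
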
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+index 689a65b11cc3..4fd1737d768b 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+@@ -1579,7 +1579,6 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
+ 	goto free;
+ 
+  out_free_fw:
+-	iwl_dealloc_ucode(drv);
+ 	release_firmware(ucode_raw);
+  out_unbind:
+ 	complete(&drv->request_firmware_complete);
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-prph.h b/drivers/net/wireless/intel/iwlwifi/iwl-prph.h
+index 1af9f9e1ecd4..0c0799d13e15 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-prph.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-prph.h
+@@ -402,7 +402,12 @@ enum aux_misc_master1_en {
+ #define AUX_MISC_MASTER1_SMPHR_STATUS	0xA20800
+ #define RSA_ENABLE			0xA24B08
+ #define PREG_AUX_BUS_WPROT_0		0xA04CC0
+-#define PREG_PRPH_WPROT_0		0xA04CE0
++
++/* device family 9000 WPROT register */
++#define PREG_PRPH_WPROT_9000		0xA04CE0
++/* device family 22000 WPROT register */
++#define PREG_PRPH_WPROT_22000		0xA04D00
++
+ #define SB_CPU_1_STATUS			0xA01E30
+ #define SB_CPU_2_STATUS			0xA01E34
+ #define UMAG_SB_CPU_1_STATUS		0xA038C0
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index ab68b5d53ec9..153717587aeb 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -311,6 +311,8 @@ static int iwl_mvm_load_ucode_wait_alive(struct iwl_mvm *mvm,
+ 	int ret;
+ 	enum iwl_ucode_type old_type = mvm->fwrt.cur_fw_img;
+ 	static const u16 alive_cmd[] = { MVM_ALIVE };
++	bool run_in_rfkill =
++		ucode_type == IWL_UCODE_INIT || iwl_mvm_has_unified_ucode(mvm);
+ 
+ 	if (ucode_type == IWL_UCODE_REGULAR &&
+ 	    iwl_fw_dbg_conf_usniffer(mvm->fw, FW_DBG_START_FROM_ALIVE) &&
+@@ -328,7 +330,12 @@ static int iwl_mvm_load_ucode_wait_alive(struct iwl_mvm *mvm,
+ 				   alive_cmd, ARRAY_SIZE(alive_cmd),
+ 				   iwl_alive_fn, &alive_data);
+ 
+-	ret = iwl_trans_start_fw(mvm->trans, fw, ucode_type == IWL_UCODE_INIT);
++	/*
++	 * We want to load the INIT firmware even in RFKILL.
++	 * For the unified firmware case, the ucode_type is not
++	 * INIT, but we still need to run it.
++	 */
++	ret = iwl_trans_start_fw(mvm->trans, fw, run_in_rfkill);
+ 	if (ret) {
+ 		iwl_fw_set_current_image(&mvm->fwrt, old_type);
+ 		iwl_remove_notification(&mvm->notif_wait, &alive_wait);
+@@ -433,7 +440,8 @@ static int iwl_run_unified_mvm_ucode(struct iwl_mvm *mvm, bool read_nvm)
+ 	 * commands
+ 	 */
+ 	ret = iwl_mvm_send_cmd_pdu(mvm, WIDE_ID(SYSTEM_GROUP,
+-						INIT_EXTENDED_CFG_CMD), 0,
++						INIT_EXTENDED_CFG_CMD),
++				   CMD_SEND_IN_RFKILL,
+ 				   sizeof(init_cfg), &init_cfg);
+ 	if (ret) {
+ 		IWL_ERR(mvm, "Failed to run init config command: %d\n",
+@@ -457,7 +465,8 @@ static int iwl_run_unified_mvm_ucode(struct iwl_mvm *mvm, bool read_nvm)
+ 	}
+ 
+ 	ret = iwl_mvm_send_cmd_pdu(mvm, WIDE_ID(REGULATORY_AND_NVM_GROUP,
+-						NVM_ACCESS_COMPLETE), 0,
++						NVM_ACCESS_COMPLETE),
++				   CMD_SEND_IN_RFKILL,
+ 				   sizeof(nvm_complete), &nvm_complete);
+ 	if (ret) {
+ 		IWL_ERR(mvm, "Failed to run complete NVM access: %d\n",
+@@ -482,6 +491,8 @@ static int iwl_run_unified_mvm_ucode(struct iwl_mvm *mvm, bool read_nvm)
+ 		}
+ 	}
+ 
++	mvm->rfkill_safe_init_done = true;
++
+ 	return 0;
+ 
+ error:
+@@ -526,7 +537,7 @@ int iwl_run_init_mvm_ucode(struct iwl_mvm *mvm, bool read_nvm)
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+ 
+-	if (WARN_ON_ONCE(mvm->calibrating))
++	if (WARN_ON_ONCE(mvm->rfkill_safe_init_done))
+ 		return 0;
+ 
+ 	iwl_init_notification_wait(&mvm->notif_wait,
+@@ -576,7 +587,7 @@ int iwl_run_init_mvm_ucode(struct iwl_mvm *mvm, bool read_nvm)
+ 		goto remove_notif;
+ 	}
+ 
+-	mvm->calibrating = true;
++	mvm->rfkill_safe_init_done = true;
+ 
+ 	/* Send TX valid antennas before triggering calibrations */
+ 	ret = iwl_send_tx_ant_cfg(mvm, iwl_mvm_get_valid_tx_ant(mvm));
+@@ -612,7 +623,7 @@ int iwl_run_init_mvm_ucode(struct iwl_mvm *mvm, bool read_nvm)
+ remove_notif:
+ 	iwl_remove_notification(&mvm->notif_wait, &calib_wait);
+ out:
+-	mvm->calibrating = false;
++	mvm->rfkill_safe_init_done = false;
+ 	if (iwlmvm_mod_params.init_dbg && !mvm->nvm_data) {
+ 		/* we want to debug INIT and we have no NVM - fake */
+ 		mvm->nvm_data = kzalloc(sizeof(struct iwl_nvm_data) +
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 6a3b11dd2edf..4ddf620c267d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -1211,7 +1211,7 @@ static void iwl_mvm_restart_cleanup(struct iwl_mvm *mvm)
+ 
+ 	mvm->scan_status = 0;
+ 	mvm->ps_disabled = false;
+-	mvm->calibrating = false;
++	mvm->rfkill_safe_init_done = false;
+ 
+ 	/* just in case one was running */
+ 	iwl_mvm_cleanup_roc_te(mvm);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+index a50dc53df086..b698d55ace1b 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+@@ -877,7 +877,7 @@ struct iwl_mvm {
+ 	struct iwl_mvm_vif *bf_allowed_vif;
+ 
+ 	bool hw_registered;
+-	bool calibrating;
++	bool rfkill_safe_init_done;
+ 	bool support_umac_log;
+ 
+ 	u32 ampdu_ref;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index 13681b03c10e..20115770e75a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -1222,7 +1222,8 @@ void iwl_mvm_set_hw_ctkill_state(struct iwl_mvm *mvm, bool state)
+ static bool iwl_mvm_set_hw_rfkill_state(struct iwl_op_mode *op_mode, bool state)
+ {
+ 	struct iwl_mvm *mvm = IWL_OP_MODE_GET_MVM(op_mode);
+-	bool calibrating = READ_ONCE(mvm->calibrating);
++	bool rfkill_safe_init_done = READ_ONCE(mvm->rfkill_safe_init_done);
++	bool unified = iwl_mvm_has_unified_ucode(mvm);
+ 
+ 	if (state)
+ 		set_bit(IWL_MVM_STATUS_HW_RFKILL, &mvm->status);
+@@ -1231,15 +1232,23 @@ static bool iwl_mvm_set_hw_rfkill_state(struct iwl_op_mode *op_mode, bool state)
+ 
+ 	iwl_mvm_set_rfkill_state(mvm);
+ 
+-	/* iwl_run_init_mvm_ucode is waiting for results, abort it */
+-	if (calibrating)
++	/* iwl_run_init_mvm_ucode is waiting for results, abort it. */
++	if (rfkill_safe_init_done)
+ 		iwl_abort_notification_waits(&mvm->notif_wait);
+ 
++	/*
++	 * Don't ask the transport to stop the firmware. We'll do it
++	 * after cfg80211 takes us down.
++	 */
++	if (unified)
++		return false;
++
+ 	/*
+ 	 * Stop the device if we run OPERATIONAL firmware or if we are in the
+ 	 * middle of the calibrations.
+ 	 */
+-	return state && (mvm->fwrt.cur_fw_img != IWL_UCODE_INIT || calibrating);
++	return state && (mvm->fwrt.cur_fw_img != IWL_UCODE_INIT ||
++			 rfkill_safe_init_done);
+ }
+ 
+ static void iwl_mvm_free_skb(struct iwl_op_mode *op_mode, struct sk_buff *skb)
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+index 59213164f35e..2afce5c41322 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+@@ -948,7 +948,7 @@ static inline void iwl_enable_rfkill_int(struct iwl_trans *trans)
+ 					   MSIX_HW_INT_CAUSES_REG_RF_KILL);
+ 	}
+ 
+-	if (trans->cfg->device_family == IWL_DEVICE_FAMILY_9000) {
++	if (trans->cfg->device_family >= IWL_DEVICE_FAMILY_9000) {
+ 		/*
+ 		 * On 9000-series devices this bit isn't enabled by default, so
+ 		 * when we power down the device we need set the bit to allow it
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index c4375b868901..80695584e406 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -1696,26 +1696,26 @@ static int iwl_pcie_init_msix_handler(struct pci_dev *pdev,
+ 	return 0;
+ }
+ 
+-static int _iwl_trans_pcie_start_hw(struct iwl_trans *trans, bool low_power)
++static int iwl_trans_pcie_clear_persistence_bit(struct iwl_trans *trans)
+ {
+-	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+-	u32 hpm;
+-	int err;
+-
+-	lockdep_assert_held(&trans_pcie->mutex);
++	u32 hpm, wprot;
+ 
+-	err = iwl_pcie_prepare_card_hw(trans);
+-	if (err) {
+-		IWL_ERR(trans, "Error while preparing HW: %d\n", err);
+-		return err;
++	switch (trans->cfg->device_family) {
++	case IWL_DEVICE_FAMILY_9000:
++		wprot = PREG_PRPH_WPROT_9000;
++		break;
++	case IWL_DEVICE_FAMILY_22000:
++		wprot = PREG_PRPH_WPROT_22000;
++		break;
++	default:
++		return 0;
+ 	}
+ 
+ 	hpm = iwl_read_umac_prph_no_grab(trans, HPM_DEBUG);
+ 	if (hpm != 0xa5a5a5a0 && (hpm & PERSISTENCE_BIT)) {
+-		int wfpm_val = iwl_read_umac_prph_no_grab(trans,
+-							  PREG_PRPH_WPROT_0);
++		u32 wprot_val = iwl_read_umac_prph_no_grab(trans, wprot);
+ 
+-		if (wfpm_val & PREG_WFPM_ACCESS) {
++		if (wprot_val & PREG_WFPM_ACCESS) {
+ 			IWL_ERR(trans,
+ 				"Error, can not clear persistence bit\n");
+ 			return -EPERM;
+@@ -1724,6 +1724,26 @@ static int _iwl_trans_pcie_start_hw(struct iwl_trans *trans, bool low_power)
+ 					    hpm & ~PERSISTENCE_BIT);
+ 	}
+ 
++	return 0;
++}
++
++static int _iwl_trans_pcie_start_hw(struct iwl_trans *trans, bool low_power)
++{
++	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
++	int err;
++
++	lockdep_assert_held(&trans_pcie->mutex);
++
++	err = iwl_pcie_prepare_card_hw(trans);
++	if (err) {
++		IWL_ERR(trans, "Error while preparing HW: %d\n", err);
++		return err;
++	}
++
++	err = iwl_trans_pcie_clear_persistence_bit(trans);
++	if (err)
++		return err;
++
+ 	iwl_trans_pcie_sw_reset(trans);
+ 
+ 	err = iwl_pcie_apm_init(trans);
+@@ -3565,7 +3585,9 @@ struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev,
+ 		}
+ 	} else if (CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id) ==
+ 		   CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_HR) &&
+-		   (trans->cfg != &iwl_ax200_cfg_cc ||
++		   ((trans->cfg != &iwl_ax200_cfg_cc &&
++		    trans->cfg != &killer1650x_2ax_cfg &&
++		    trans->cfg != &killer1650w_2ax_cfg) ||
+ 		    trans->hw_rev == CSR_HW_REV_TYPE_QNJ_B0)) {
+ 		u32 hw_status;
+ 
+diff --git a/drivers/net/wireless/intersil/p54/p54usb.c b/drivers/net/wireless/intersil/p54/p54usb.c
+index b0b86f701061..15661da6eedc 100644
+--- a/drivers/net/wireless/intersil/p54/p54usb.c
++++ b/drivers/net/wireless/intersil/p54/p54usb.c
+@@ -33,6 +33,8 @@ MODULE_ALIAS("prism54usb");
+ MODULE_FIRMWARE("isl3886usb");
+ MODULE_FIRMWARE("isl3887usb");
+ 
++static struct usb_driver p54u_driver;
++
+ /*
+  * Note:
+  *
+@@ -921,9 +923,9 @@ static void p54u_load_firmware_cb(const struct firmware *firmware,
+ {
+ 	struct p54u_priv *priv = context;
+ 	struct usb_device *udev = priv->udev;
++	struct usb_interface *intf = priv->intf;
+ 	int err;
+ 
+-	complete(&priv->fw_wait_load);
+ 	if (firmware) {
+ 		priv->fw = firmware;
+ 		err = p54u_start_ops(priv);
+@@ -932,26 +934,22 @@ static void p54u_load_firmware_cb(const struct firmware *firmware,
+ 		dev_err(&udev->dev, "Firmware not found.\n");
+ 	}
+ 
+-	if (err) {
+-		struct device *parent = priv->udev->dev.parent;
+-
+-		dev_err(&udev->dev, "failed to initialize device (%d)\n", err);
+-
+-		if (parent)
+-			device_lock(parent);
++	complete(&priv->fw_wait_load);
++	/*
++	 * At this point p54u_disconnect may have already freed
++	 * the "priv" context. Do not use it anymore!
++	 */
++	priv = NULL;
+ 
+-		device_release_driver(&udev->dev);
+-		/*
+-		 * At this point p54u_disconnect has already freed
+-		 * the "priv" context. Do not use it anymore!
+-		 */
+-		priv = NULL;
++	if (err) {
++		dev_err(&intf->dev, "failed to initialize device (%d)\n", err);
+ 
+-		if (parent)
+-			device_unlock(parent);
++		usb_lock_device(udev);
++		usb_driver_release_interface(&p54u_driver, intf);
++		usb_unlock_device(udev);
+ 	}
+ 
+-	usb_put_dev(udev);
++	usb_put_intf(intf);
+ }
+ 
+ static int p54u_load_firmware(struct ieee80211_hw *dev,
+@@ -972,14 +970,14 @@ static int p54u_load_firmware(struct ieee80211_hw *dev,
+ 	dev_info(&priv->udev->dev, "Loading firmware file %s\n",
+ 	       p54u_fwlist[i].fw);
+ 
+-	usb_get_dev(udev);
++	usb_get_intf(intf);
+ 	err = request_firmware_nowait(THIS_MODULE, 1, p54u_fwlist[i].fw,
+ 				      device, GFP_KERNEL, priv,
+ 				      p54u_load_firmware_cb);
+ 	if (err) {
+ 		dev_err(&priv->udev->dev, "(p54usb) cannot load firmware %s "
+ 					  "(%d)!\n", p54u_fwlist[i].fw, err);
+-		usb_put_dev(udev);
++		usb_put_intf(intf);
+ 	}
+ 
+ 	return err;
+@@ -1011,8 +1009,6 @@ static int p54u_probe(struct usb_interface *intf,
+ 	skb_queue_head_init(&priv->rx_queue);
+ 	init_usb_anchor(&priv->submitted);
+ 
+-	usb_get_dev(udev);
+-
+ 	/* really lazy and simple way of figuring out if we're a 3887 */
+ 	/* TODO: should just stick the identification in the device table */
+ 	i = intf->altsetting->desc.bNumEndpoints;
+@@ -1053,10 +1049,8 @@ static int p54u_probe(struct usb_interface *intf,
+ 		priv->upload_fw = p54u_upload_firmware_net2280;
+ 	}
+ 	err = p54u_load_firmware(dev, intf);
+-	if (err) {
+-		usb_put_dev(udev);
++	if (err)
+ 		p54_free_common(dev);
+-	}
+ 	return err;
+ }
+ 
+@@ -1072,7 +1066,6 @@ static void p54u_disconnect(struct usb_interface *intf)
+ 	wait_for_completion(&priv->fw_wait_load);
+ 	p54_unregister_common(dev);
+ 
+-	usb_put_dev(interface_to_usbdev(intf));
+ 	release_firmware(priv->fw);
+ 	p54_free_common(dev);
+ }
+diff --git a/drivers/net/wireless/intersil/p54/txrx.c b/drivers/net/wireless/intersil/p54/txrx.c
+index 790784568ad2..5bf1c19ecced 100644
+--- a/drivers/net/wireless/intersil/p54/txrx.c
++++ b/drivers/net/wireless/intersil/p54/txrx.c
+@@ -142,7 +142,10 @@ static int p54_assign_address(struct p54_common *priv, struct sk_buff *skb)
+ 	    unlikely(GET_HW_QUEUE(skb) == P54_QUEUE_BEACON))
+ 		priv->beacon_req_id = data->req_id;
+ 
+-	__skb_queue_after(&priv->tx_queue, target_skb, skb);
++	if (target_skb)
++		__skb_queue_after(&priv->tx_queue, target_skb, skb);
++	else
++		__skb_queue_head(&priv->tx_queue, skb);
+ 	spin_unlock_irqrestore(&priv->tx_queue.lock, flags);
+ 	return 0;
+ }
+diff --git a/drivers/net/wireless/marvell/mwifiex/fw.h b/drivers/net/wireless/marvell/mwifiex/fw.h
+index b73f99dc5a72..1fb76d2f5d3f 100644
+--- a/drivers/net/wireless/marvell/mwifiex/fw.h
++++ b/drivers/net/wireless/marvell/mwifiex/fw.h
+@@ -1759,9 +1759,10 @@ struct mwifiex_ie_types_wmm_queue_status {
+ struct ieee_types_vendor_header {
+ 	u8 element_id;
+ 	u8 len;
+-	u8 oui[4];	/* 0~2: oui, 3: oui_type */
+-	u8 oui_subtype;
+-	u8 version;
++	struct {
++		u8 oui[3];
++		u8 oui_type;
++	} __packed oui;
+ } __packed;
+ 
+ struct ieee_types_wmm_parameter {
+@@ -1775,6 +1776,9 @@ struct ieee_types_wmm_parameter {
+ 	 *   Version     [1]
+ 	 */
+ 	struct ieee_types_vendor_header vend_hdr;
++	u8 oui_subtype;
++	u8 version;
++
+ 	u8 qos_info_bitmap;
+ 	u8 reserved;
+ 	struct ieee_types_wmm_ac_parameters ac_params[IEEE80211_NUM_ACS];
+@@ -1792,6 +1796,8 @@ struct ieee_types_wmm_info {
+ 	 *   Version     [1]
+ 	 */
+ 	struct ieee_types_vendor_header vend_hdr;
++	u8 oui_subtype;
++	u8 version;
+ 
+ 	u8 qos_info_bitmap;
+ } __packed;
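
The header restructuring above is layout-neutral on the wire: the old
8-byte vend_hdr (element_id, len, oui[4], oui_subtype, version) and the
new 6-byte header plus relocated oui_subtype/version fields put every
byte at the same offset. A standalone check of that claim, with the
structs re-declared locally under GCC/clang packing syntax:

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    struct vendor_header {                  /* the new header */
            uint8_t element_id;
            uint8_t len;
            struct {
                    uint8_t oui[3];
                    uint8_t oui_type;
            } __attribute__((packed)) oui;
    } __attribute__((packed));

    struct wmm_info {                       /* header + moved fields */
            struct vendor_header vend_hdr;
            uint8_t oui_subtype;
            uint8_t version;
            uint8_t qos_info_bitmap;
    } __attribute__((packed));

    int main(void)
    {
            assert(sizeof(struct vendor_header) == 6);
            /* same offsets the old oui_subtype/version had: 6 and 7 */
            assert(offsetof(struct wmm_info, oui_subtype) == 6);
            assert(offsetof(struct wmm_info, version) == 7);
            assert(offsetof(struct wmm_info, qos_info_bitmap) == 8);
            return 0;
    }
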
+diff --git a/drivers/net/wireless/marvell/mwifiex/ie.c b/drivers/net/wireless/marvell/mwifiex/ie.c
+index 6845eb57b39a..653d347a9a19 100644
+--- a/drivers/net/wireless/marvell/mwifiex/ie.c
++++ b/drivers/net/wireless/marvell/mwifiex/ie.c
+@@ -329,6 +329,8 @@ static int mwifiex_uap_parse_tail_ies(struct mwifiex_private *priv,
+ 	struct ieee80211_vendor_ie *vendorhdr;
+ 	u16 gen_idx = MWIFIEX_AUTO_IDX_MASK, ie_len = 0;
+ 	int left_len, parsed_len = 0;
++	unsigned int token_len;
++	int err = 0;
+ 
+ 	if (!info->tail || !info->tail_len)
+ 		return 0;
+@@ -344,6 +346,12 @@ static int mwifiex_uap_parse_tail_ies(struct mwifiex_private *priv,
+ 	 */
+ 	while (left_len > sizeof(struct ieee_types_header)) {
+ 		hdr = (void *)(info->tail + parsed_len);
++		token_len = hdr->len + sizeof(struct ieee_types_header);
++		if (token_len > left_len) {
++			err = -EINVAL;
++			goto out;
++		}
++
+ 		switch (hdr->element_id) {
+ 		case WLAN_EID_SSID:
+ 		case WLAN_EID_SUPP_RATES:
+@@ -361,17 +369,20 @@ static int mwifiex_uap_parse_tail_ies(struct mwifiex_private *priv,
+ 			if (cfg80211_find_vendor_ie(WLAN_OUI_MICROSOFT,
+ 						    WLAN_OUI_TYPE_MICROSOFT_WMM,
+ 						    (const u8 *)hdr,
+-						    hdr->len + sizeof(struct ieee_types_header)))
++						    token_len))
+ 				break;
+ 			/* fall through */
+ 		default:
+-			memcpy(gen_ie->ie_buffer + ie_len, hdr,
+-			       hdr->len + sizeof(struct ieee_types_header));
+-			ie_len += hdr->len + sizeof(struct ieee_types_header);
++			if (ie_len + token_len > IEEE_MAX_IE_SIZE) {
++				err = -EINVAL;
++				goto out;
++			}
++			memcpy(gen_ie->ie_buffer + ie_len, hdr, token_len);
++			ie_len += token_len;
+ 			break;
+ 		}
+-		left_len -= hdr->len + sizeof(struct ieee_types_header);
+-		parsed_len += hdr->len + sizeof(struct ieee_types_header);
++		left_len -= token_len;
++		parsed_len += token_len;
+ 	}
+ 
+ 	/* parse only WPA vendor IE from tail, WMM IE is configured by
+@@ -381,15 +392,17 @@ static int mwifiex_uap_parse_tail_ies(struct mwifiex_private *priv,
+ 						    WLAN_OUI_TYPE_MICROSOFT_WPA,
+ 						    info->tail, info->tail_len);
+ 	if (vendorhdr) {
+-		memcpy(gen_ie->ie_buffer + ie_len, vendorhdr,
+-		       vendorhdr->len + sizeof(struct ieee_types_header));
+-		ie_len += vendorhdr->len + sizeof(struct ieee_types_header);
++		token_len = vendorhdr->len + sizeof(struct ieee_types_header);
++		if (ie_len + token_len > IEEE_MAX_IE_SIZE) {
++			err = -EINVAL;
++			goto out;
++		}
++		memcpy(gen_ie->ie_buffer + ie_len, vendorhdr, token_len);
++		ie_len += token_len;
+ 	}
+ 
+-	if (!ie_len) {
+-		kfree(gen_ie);
+-		return 0;
+-	}
++	if (!ie_len)
++		goto out;
+ 
+ 	gen_ie->ie_index = cpu_to_le16(gen_idx);
+ 	gen_ie->mgmt_subtype_mask = cpu_to_le16(MGMT_MASK_BEACON |
+@@ -399,13 +412,15 @@ static int mwifiex_uap_parse_tail_ies(struct mwifiex_private *priv,
+ 
+ 	if (mwifiex_update_uap_custom_ie(priv, gen_ie, &gen_idx, NULL, NULL,
+ 					 NULL, NULL)) {
+-		kfree(gen_ie);
+-		return -1;
++		err = -EINVAL;
++		goto out;
+ 	}
+ 
+ 	priv->gen_idx = gen_idx;
++
++ out:
+ 	kfree(gen_ie);
+-	return 0;
++	return err;
+ }
+ 
+ /* This function parses different IEs-head & tail IEs, beacon IEs,
+diff --git a/drivers/net/wireless/marvell/mwifiex/scan.c b/drivers/net/wireless/marvell/mwifiex/scan.c
+index 935778ec9a1b..e2786ab612ca 100644
+--- a/drivers/net/wireless/marvell/mwifiex/scan.c
++++ b/drivers/net/wireless/marvell/mwifiex/scan.c
+@@ -1247,6 +1247,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
+ 		}
+ 		switch (element_id) {
+ 		case WLAN_EID_SSID:
++			if (element_len > IEEE80211_MAX_SSID_LEN)
++				return -EINVAL;
+ 			bss_entry->ssid.ssid_len = element_len;
+ 			memcpy(bss_entry->ssid.ssid, (current_ptr + 2),
+ 			       element_len);
+@@ -1256,6 +1258,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
+ 			break;
+ 
+ 		case WLAN_EID_SUPP_RATES:
++			if (element_len > MWIFIEX_SUPPORTED_RATES)
++				return -EINVAL;
+ 			memcpy(bss_entry->data_rates, current_ptr + 2,
+ 			       element_len);
+ 			memcpy(bss_entry->supported_rates, current_ptr + 2,
+@@ -1265,6 +1269,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
+ 			break;
+ 
+ 		case WLAN_EID_FH_PARAMS:
++			if (element_len + 2 < sizeof(*fh_param_set))
++				return -EINVAL;
+ 			fh_param_set =
+ 				(struct ieee_types_fh_param_set *) current_ptr;
+ 			memcpy(&bss_entry->phy_param_set.fh_param_set,
+@@ -1273,6 +1279,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
+ 			break;
+ 
+ 		case WLAN_EID_DS_PARAMS:
++			if (element_len + 2 < sizeof(*ds_param_set))
++				return -EINVAL;
+ 			ds_param_set =
+ 				(struct ieee_types_ds_param_set *) current_ptr;
+ 
+@@ -1284,6 +1292,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
+ 			break;
+ 
+ 		case WLAN_EID_CF_PARAMS:
++			if (element_len + 2 < sizeof(*cf_param_set))
++				return -EINVAL;
+ 			cf_param_set =
+ 				(struct ieee_types_cf_param_set *) current_ptr;
+ 			memcpy(&bss_entry->ss_param_set.cf_param_set,
+@@ -1292,6 +1302,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
+ 			break;
+ 
+ 		case WLAN_EID_IBSS_PARAMS:
++			if (element_len + 2 < sizeof(*ibss_param_set))
++				return -EINVAL;
+ 			ibss_param_set =
+ 				(struct ieee_types_ibss_param_set *)
+ 				current_ptr;
+@@ -1301,10 +1313,14 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
+ 			break;
+ 
+ 		case WLAN_EID_ERP_INFO:
++			if (!element_len)
++				return -EINVAL;
+ 			bss_entry->erp_flags = *(current_ptr + 2);
+ 			break;
+ 
+ 		case WLAN_EID_PWR_CONSTRAINT:
++			if (!element_len)
++				return -EINVAL;
+ 			bss_entry->local_constraint = *(current_ptr + 2);
+ 			bss_entry->sensed_11h = true;
+ 			break;
+@@ -1348,15 +1364,22 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
+ 			vendor_ie = (struct ieee_types_vendor_specific *)
+ 					current_ptr;
+ 
+-			if (!memcmp
+-			    (vendor_ie->vend_hdr.oui, wpa_oui,
+-			     sizeof(wpa_oui))) {
++			/* 802.11 requires at least 3-byte OUI. */
++			if (element_len < sizeof(vendor_ie->vend_hdr.oui.oui))
++				return -EINVAL;
++
++			/* Not long enough for a match? Skip it. */
++			if (element_len < sizeof(wpa_oui))
++				break;
++
++			if (!memcmp(&vendor_ie->vend_hdr.oui, wpa_oui,
++				    sizeof(wpa_oui))) {
+ 				bss_entry->bcn_wpa_ie =
+ 					(struct ieee_types_vendor_specific *)
+ 					current_ptr;
+ 				bss_entry->wpa_offset = (u16)
+ 					(current_ptr - bss_entry->beacon_buf);
+-			} else if (!memcmp(vendor_ie->vend_hdr.oui, wmm_oui,
++			} else if (!memcmp(&vendor_ie->vend_hdr.oui, wmm_oui,
+ 				    sizeof(wmm_oui))) {
+ 				if (total_ie_len ==
+ 				    sizeof(struct ieee_types_wmm_parameter) ||
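
The ie.c and scan.c hunks above are all instances of one defensive
pattern: before consuming a TLV-style information element, verify that
the advertised length fits both the remaining buffer and the
destination. A minimal standalone version of that walk, mirroring the
ie.c loop (invented buffers; real parsers dispatch on element_id
exactly as the hunks do):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct ie_header {
            uint8_t element_id;
            uint8_t len;                    /* payload bytes that follow */
    } __attribute__((packed));

    static int parse_ies(const uint8_t *buf, size_t buf_len)
    {
            size_t off = 0;

            while (buf_len - off > sizeof(struct ie_header)) {
                    const struct ie_header *hdr =
                            (const void *)(buf + off);
                    size_t token_len = sizeof(*hdr) + hdr->len;

                    if (token_len > buf_len - off)
                            return -1;      /* length runs past the buffer */

                    printf("IE %u, %u payload byte(s)\n",
                           hdr->element_id, hdr->len);
                    off += token_len;
            }
            return 0;
    }

    int main(void)
    {
            const uint8_t good[] = { 0x00, 0x03, 'a', 'b', 'c' };
            const uint8_t bad[]  = { 0x00, 0x7f, 0xaa }; /* claims 127 bytes */

            printf("good: %d\n", parse_ies(good, sizeof(good)));
            printf("bad:  %d\n", parse_ies(bad, sizeof(bad)));
            return 0;
    }
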
+diff --git a/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c b/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c
+index ebc0e41e5d3b..74e50566db1f 100644
+--- a/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c
++++ b/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c
+@@ -1351,7 +1351,7 @@ mwifiex_set_gen_ie_helper(struct mwifiex_private *priv, u8 *ie_data_ptr,
+ 			/* Test to see if it is a WPA IE, if not, then
+ 			 * it is a gen IE
+ 			 */
+-			if (!memcmp(pvendor_ie->oui, wpa_oui,
++			if (!memcmp(&pvendor_ie->oui, wpa_oui,
+ 				    sizeof(wpa_oui))) {
+ 				/* IE is a WPA/WPA2 IE so call set_wpa function
+ 				 */
+@@ -1361,7 +1361,7 @@ mwifiex_set_gen_ie_helper(struct mwifiex_private *priv, u8 *ie_data_ptr,
+ 				goto next_ie;
+ 			}
+ 
+-			if (!memcmp(pvendor_ie->oui, wps_oui,
++			if (!memcmp(&pvendor_ie->oui, wps_oui,
+ 				    sizeof(wps_oui))) {
+ 				/* Test to see if it is a WPS IE,
+ 				 * if so, enable wps session flag
+diff --git a/drivers/net/wireless/marvell/mwifiex/wmm.c b/drivers/net/wireless/marvell/mwifiex/wmm.c
+index 407b9932ca4d..64916ba15df5 100644
+--- a/drivers/net/wireless/marvell/mwifiex/wmm.c
++++ b/drivers/net/wireless/marvell/mwifiex/wmm.c
+@@ -240,7 +240,7 @@ mwifiex_wmm_setup_queue_priorities(struct mwifiex_private *priv,
+ 	mwifiex_dbg(priv->adapter, INFO,
+ 		    "info: WMM Parameter IE: version=%d,\t"
+ 		    "qos_info Parameter Set Count=%d, Reserved=%#x\n",
+-		    wmm_ie->vend_hdr.version, wmm_ie->qos_info_bitmap &
++		    wmm_ie->version, wmm_ie->qos_info_bitmap &
+ 		    IEEE80211_WMM_IE_AP_QOSINFO_PARAM_SET_CNT_MASK,
+ 		    wmm_ie->reserved);
+ 
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index e5db9a9954dc..a6ff7be0210a 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -990,6 +990,9 @@ static int qedi_find_boot_info(struct qedi_ctx *qedi,
+ 		if (!iscsi_is_session_online(cls_sess))
+ 			continue;
+ 
++		if (!sess->targetname)
++			continue;
++
+ 		if (pri_ctrl_flags) {
+ 			if (!strcmp(pri_tgt->iscsi_name, sess->targetname) &&
+ 			    !strcmp(pri_tgt->ip_addr, ep_ip_addr)) {
+diff --git a/drivers/soc/bcm/brcmstb/biuctrl.c b/drivers/soc/bcm/brcmstb/biuctrl.c
+index 6d89ebf13b8a..20b63bee5b09 100644
+--- a/drivers/soc/bcm/brcmstb/biuctrl.c
++++ b/drivers/soc/bcm/brcmstb/biuctrl.c
+@@ -56,7 +56,7 @@ static inline void cbc_writel(u32 val, int reg)
+ 	if (offset == -1)
+ 		return;
+ 
+-	writel_relaxed(val,  cpubiuctrl_base + offset);
++	writel(val, cpubiuctrl_base + offset);
+ }
+ 
+ enum cpubiuctrl_regs {
+@@ -246,7 +246,9 @@ static int __init brcmstb_biuctrl_init(void)
+ 	if (!np)
+ 		return 0;
+ 
+-	setup_hifcpubiuctrl_regs(np);
++	ret = setup_hifcpubiuctrl_regs(np);
++	if (ret)
++		return ret;
+ 
+ 	ret = mcp_write_pairing_set();
+ 	if (ret) {
+diff --git a/drivers/soundwire/intel.c b/drivers/soundwire/intel.c
+index fd8d034cfec1..5ba641858e21 100644
+--- a/drivers/soundwire/intel.c
++++ b/drivers/soundwire/intel.c
+@@ -714,8 +714,8 @@ static int intel_create_dai(struct sdw_cdns *cdns,
+ 				return -ENOMEM;
+ 			}
+ 
+-			dais[i].playback.channels_min = 1;
+-			dais[i].playback.channels_max = max_ch;
++			dais[i].capture.channels_min = 1;
++			dais[i].capture.channels_max = max_ch;
+ 			dais[i].capture.rates = SNDRV_PCM_RATE_48000;
+ 			dais[i].capture.formats = SNDRV_PCM_FMTBIT_S16_LE;
+ 		}
+diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
+index bd879b1a76c8..794ced434cf2 100644
+--- a/drivers/soundwire/stream.c
++++ b/drivers/soundwire/stream.c
+@@ -805,7 +805,8 @@ static int do_bank_switch(struct sdw_stream_runtime *stream)
+ 			goto error;
+ 		}
+ 
+-		mutex_unlock(&bus->msg_lock);
++		if (bus->multi_link)
++			mutex_unlock(&bus->msg_lock);
+ 	}
+ 
+ 	return ret;
+@@ -1401,9 +1402,7 @@ struct sdw_dpn_prop *sdw_get_slave_dpn_prop(struct sdw_slave *slave,
+ 	}
+ 
+ 	for (i = 0; i < num_ports; i++) {
+-		dpn_prop = &dpn_prop[i];
+-
+-		if (dpn_prop->num == port_num)
++		if (dpn_prop[i].num == port_num)
+ 			return &dpn_prop[i];
+ 	}
+ 
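+
+The sdw_get_slave_dpn_prop() hunk above removes a classic pointer-iteration
+bug: "dpn_prop = &dpn_prop[i]" rebases the array pointer on every pass, so
+from the second iteration onward the index no longer refers to the original
+array. A small runnable illustration of why indexing must stay relative to
+the saved base:
+
+	#include <stdio.h>
+
+	struct dpn { int num; };
+
+	int main(void)
+	{
+		struct dpn props[8] = { {0}, {1}, {2}, {3},
+					{4}, {5}, {6}, {7} };
+		struct dpn *p = props;
+		int i;
+
+		/* Buggy: rebasing p walks elements 0, 1, 3, 6, ... */
+		for (i = 0; i < 4; i++) {
+			p = &p[i];
+			printf("buggy i=%d -> num=%d\n", i, p->num);
+		}
+
+		/* Fixed: index from the unchanged base, as in the patch. */
+		p = props;
+		for (i = 0; i < 4; i++)
+			printf("fixed i=%d -> num=%d\n", i, p[i].num);
+
+		return 0;
+	}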
+diff --git a/drivers/staging/comedi/drivers/amplc_pci230.c b/drivers/staging/comedi/drivers/amplc_pci230.c
+index 08ffe26c5d43..0f16e85911f2 100644
+--- a/drivers/staging/comedi/drivers/amplc_pci230.c
++++ b/drivers/staging/comedi/drivers/amplc_pci230.c
+@@ -2330,7 +2330,8 @@ static irqreturn_t pci230_interrupt(int irq, void *d)
+ 	devpriv->intr_running = false;
+ 	spin_unlock_irqrestore(&devpriv->isr_spinlock, irqflags);
+ 
+-	comedi_handle_events(dev, s_ao);
++	if (s_ao)
++		comedi_handle_events(dev, s_ao);
+ 	comedi_handle_events(dev, s_ai);
+ 
+ 	return IRQ_HANDLED;
+diff --git a/drivers/staging/comedi/drivers/dt282x.c b/drivers/staging/comedi/drivers/dt282x.c
+index 3be927f1d3a9..e15e33ed94ae 100644
+--- a/drivers/staging/comedi/drivers/dt282x.c
++++ b/drivers/staging/comedi/drivers/dt282x.c
+@@ -557,7 +557,8 @@ static irqreturn_t dt282x_interrupt(int irq, void *d)
+ 	}
+ #endif
+ 	comedi_handle_events(dev, s);
+-	comedi_handle_events(dev, s_ao);
++	if (s_ao)
++		comedi_handle_events(dev, s_ao);
+ 
+ 	return IRQ_RETVAL(handled);
+ }
+diff --git a/drivers/staging/fsl-dpaa2/ethsw/ethsw.c b/drivers/staging/fsl-dpaa2/ethsw/ethsw.c
+index ad577beeb052..3a562179468c 100644
+--- a/drivers/staging/fsl-dpaa2/ethsw/ethsw.c
++++ b/drivers/staging/fsl-dpaa2/ethsw/ethsw.c
+@@ -1086,6 +1086,7 @@ static int port_switchdev_event(struct notifier_block *unused,
+ 		dev_hold(dev);
+ 		break;
+ 	default:
++		kfree(switchdev_work);
+ 		return NOTIFY_DONE;
+ 	}
+ 
+diff --git a/drivers/staging/iio/cdc/ad7150.c b/drivers/staging/iio/cdc/ad7150.c
+index 24f74ce60f80..14596aa7eaf1 100644
+--- a/drivers/staging/iio/cdc/ad7150.c
++++ b/drivers/staging/iio/cdc/ad7150.c
+@@ -6,6 +6,7 @@
+  * Licensed under the GPL-2 or later.
+  */
+ 
++#include <linux/bitfield.h>
+ #include <linux/interrupt.h>
+ #include <linux/device.h>
+ #include <linux/kernel.h>
+@@ -131,7 +132,7 @@ static int ad7150_read_event_config(struct iio_dev *indio_dev,
+ {
+ 	int ret;
+ 	u8 threshtype;
+-	bool adaptive;
++	bool thrfixed;
+ 	struct ad7150_chip_info *chip = iio_priv(indio_dev);
+ 
+ 	ret = i2c_smbus_read_byte_data(chip->client, AD7150_CFG);
+@@ -139,21 +140,23 @@ static int ad7150_read_event_config(struct iio_dev *indio_dev,
+ 		return ret;
+ 
+ 	threshtype = (ret >> 5) & 0x03;
+-	adaptive = !!(ret & 0x80);
++
++	/* check if threshold mode is fixed or adaptive */
++	thrfixed = FIELD_GET(AD7150_CFG_FIX, ret);
+ 
+ 	switch (type) {
+ 	case IIO_EV_TYPE_MAG_ADAPTIVE:
+ 		if (dir == IIO_EV_DIR_RISING)
+-			return adaptive && (threshtype == 0x1);
+-		return adaptive && (threshtype == 0x0);
++			return !thrfixed && (threshtype == 0x1);
++		return !thrfixed && (threshtype == 0x0);
+ 	case IIO_EV_TYPE_THRESH_ADAPTIVE:
+ 		if (dir == IIO_EV_DIR_RISING)
+-			return adaptive && (threshtype == 0x3);
+-		return adaptive && (threshtype == 0x2);
++			return !thrfixed && (threshtype == 0x3);
++		return !thrfixed && (threshtype == 0x2);
+ 	case IIO_EV_TYPE_THRESH:
+ 		if (dir == IIO_EV_DIR_RISING)
+-			return !adaptive && (threshtype == 0x1);
+-		return !adaptive && (threshtype == 0x0);
++			return thrfixed && (threshtype == 0x1);
++		return thrfixed && (threshtype == 0x0);
+ 	default:
+ 		break;
+ 	}
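+
+The ad7150 change replaces an open-coded test of bit 7 with FIELD_GET() from
+<linux/bitfield.h> and renames the flag to match its polarity: the CFG bit
+means the threshold is fixed, so the adaptive cases test !thrfixed. A
+userspace approximation of what FIELD_GET() does (the macro below is a
+simplified stand-in using the GCC/Clang __builtin_ctz, not the kernel
+implementation):
+
+	#include <stdint.h>
+	#include <stdio.h>
+
+	/* Shift right by the mask's lowest set bit; the mask limits the
+	 * field width - a sketch of FIELD_GET() semantics. */
+	#define FIELD_GET_SKETCH(mask, val) \
+		(((val) & (mask)) >> __builtin_ctz(mask))
+
+	#define AD7150_CFG_FIX 0x80	/* bit 7: 1 = fixed, 0 = adaptive */
+
+	int main(void)
+	{
+		uint8_t cfg = 0xa3;	/* example register readback */
+		int thrfixed = FIELD_GET_SKETCH(AD7150_CFG_FIX, cfg);
+		int threshtype = (cfg >> 5) & 0x03;
+
+		printf("fixed=%d adaptive=%d threshtype=%#x\n",
+		       thrfixed, !thrfixed, threshtype);
+		return 0;
+	}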
+diff --git a/drivers/staging/mt7621-pci/pci-mt7621.c b/drivers/staging/mt7621-pci/pci-mt7621.c
+index 379ae780c691..c1909ff5dee9 100644
+--- a/drivers/staging/mt7621-pci/pci-mt7621.c
++++ b/drivers/staging/mt7621-pci/pci-mt7621.c
+@@ -40,7 +40,7 @@
+ /* MediaTek specific configuration registers */
+ #define PCIE_FTS_NUM			0x70c
+ #define PCIE_FTS_NUM_MASK		GENMASK(15, 8)
+-#define PCIE_FTS_NUM_L0(x)		((x) & 0xff << 8)
++#define PCIE_FTS_NUM_L0(x)		(((x) & 0xff) << 8)
+ 
+ /* rt_sysc_membase relative registers */
+ #define RALINK_PCIE_CLK_GEN		0x7c
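+
+The one-character pci-mt7621 fix is a pure operator-precedence bug: in C,
+<< binds tighter than &, so "(x) & 0xff << 8" parses as "(x) & (0xff << 8)"
+and masks against 0xff00 instead of shifting the low byte up. A runnable
+demonstration:
+
+	#include <stdio.h>
+
+	#define FTS_NUM_L0_BUGGY(x)	((x) & 0xff << 8)  /* (x) & 0xff00 */
+	#define FTS_NUM_L0_FIXED(x)	(((x) & 0xff) << 8)
+
+	int main(void)
+	{
+		unsigned int x = 0x3f;
+
+		/* prints "buggy: 0  fixed: 0x3f00" */
+		printf("buggy: %#x  fixed: %#x\n",
+		       FTS_NUM_L0_BUGGY(x), FTS_NUM_L0_FIXED(x));
+		return 0;
+	}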
+diff --git a/drivers/staging/rtl8712/rtl871x_ioctl_linux.c b/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
+index e723357ac8c0..3be305615a40 100644
+--- a/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
++++ b/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
+@@ -124,10 +124,91 @@ static inline void handle_group_key(struct ieee_param *param,
+ 	}
+ }
+ 
+-static noinline_for_stack char *translate_scan(struct _adapter *padapter,
+-				   struct iw_request_info *info,
+-				   struct wlan_network *pnetwork,
+-				   char *start, char *stop)
++static noinline_for_stack char *translate_scan_wpa(struct iw_request_info *info,
++						   struct wlan_network *pnetwork,
++						   struct iw_event *iwe,
++						   char *start, char *stop)
++{
++	/* parsing WPA/WPA2 IE */
++	u8 buf[MAX_WPA_IE_LEN];
++	u8 wpa_ie[255], rsn_ie[255];
++	u16 wpa_len = 0, rsn_len = 0;
++	int n, i;
++
++	r8712_get_sec_ie(pnetwork->network.IEs,
++			 pnetwork->network.IELength, rsn_ie, &rsn_len,
++			 wpa_ie, &wpa_len);
++	if (wpa_len > 0) {
++		memset(buf, 0, MAX_WPA_IE_LEN);
++		n = sprintf(buf, "wpa_ie=");
++		for (i = 0; i < wpa_len; i++) {
++			n += snprintf(buf + n, MAX_WPA_IE_LEN - n,
++						"%02x", wpa_ie[i]);
++			if (n >= MAX_WPA_IE_LEN)
++				break;
++		}
++		memset(iwe, 0, sizeof(*iwe));
++		iwe->cmd = IWEVCUSTOM;
++		iwe->u.data.length = (u16)strlen(buf);
++		start = iwe_stream_add_point(info, start, stop,
++			iwe, buf);
++		memset(iwe, 0, sizeof(*iwe));
++		iwe->cmd = IWEVGENIE;
++		iwe->u.data.length = (u16)wpa_len;
++		start = iwe_stream_add_point(info, start, stop,
++			iwe, wpa_ie);
++	}
++	if (rsn_len > 0) {
++		memset(buf, 0, MAX_WPA_IE_LEN);
++		n = sprintf(buf, "rsn_ie=");
++		for (i = 0; i < rsn_len; i++) {
++			n += snprintf(buf + n, MAX_WPA_IE_LEN - n,
++						"%02x", rsn_ie[i]);
++			if (n >= MAX_WPA_IE_LEN)
++				break;
++		}
++		memset(iwe, 0, sizeof(*iwe));
++		iwe->cmd = IWEVCUSTOM;
++		iwe->u.data.length = strlen(buf);
++		start = iwe_stream_add_point(info, start, stop,
++			iwe, buf);
++		memset(iwe, 0, sizeof(*iwe));
++		iwe->cmd = IWEVGENIE;
++		iwe->u.data.length = rsn_len;
++		start = iwe_stream_add_point(info, start, stop, iwe,
++			rsn_ie);
++	}
++
++	return start;
++}
++
++static noinline_for_stack char *translate_scan_wps(struct iw_request_info *info,
++						   struct wlan_network *pnetwork,
++						   struct iw_event *iwe,
++						   char *start, char *stop)
++{
++	/* parsing WPS IE */
++	u8 wps_ie[512];
++	uint wps_ielen;
++
++	if (r8712_get_wps_ie(pnetwork->network.IEs,
++	    pnetwork->network.IELength,
++	    wps_ie, &wps_ielen)) {
++		if (wps_ielen > 2) {
++			iwe->cmd = IWEVGENIE;
++			iwe->u.data.length = (u16)wps_ielen;
++			start = iwe_stream_add_point(info, start, stop,
++				iwe, wps_ie);
++		}
++	}
++
++	return start;
++}
++
++static char *translate_scan(struct _adapter *padapter,
++			    struct iw_request_info *info,
++			    struct wlan_network *pnetwork,
++			    char *start, char *stop)
+ {
+ 	struct iw_event iwe;
+ 	struct ieee80211_ht_cap *pht_capie;
+@@ -240,73 +321,11 @@ static noinline_for_stack char *translate_scan(struct _adapter *padapter,
+ 	/* Check if we added any event */
+ 	if ((current_val - start) > iwe_stream_lcp_len(info))
+ 		start = current_val;
+-	/* parsing WPA/WPA2 IE */
+-	{
+-		u8 buf[MAX_WPA_IE_LEN];
+-		u8 wpa_ie[255], rsn_ie[255];
+-		u16 wpa_len = 0, rsn_len = 0;
+-		int n;
+-
+-		r8712_get_sec_ie(pnetwork->network.IEs,
+-				 pnetwork->network.IELength, rsn_ie, &rsn_len,
+-				 wpa_ie, &wpa_len);
+-		if (wpa_len > 0) {
+-			memset(buf, 0, MAX_WPA_IE_LEN);
+-			n = sprintf(buf, "wpa_ie=");
+-			for (i = 0; i < wpa_len; i++) {
+-				n += snprintf(buf + n, MAX_WPA_IE_LEN - n,
+-							"%02x", wpa_ie[i]);
+-				if (n >= MAX_WPA_IE_LEN)
+-					break;
+-			}
+-			memset(&iwe, 0, sizeof(iwe));
+-			iwe.cmd = IWEVCUSTOM;
+-			iwe.u.data.length = (u16)strlen(buf);
+-			start = iwe_stream_add_point(info, start, stop,
+-				&iwe, buf);
+-			memset(&iwe, 0, sizeof(iwe));
+-			iwe.cmd = IWEVGENIE;
+-			iwe.u.data.length = (u16)wpa_len;
+-			start = iwe_stream_add_point(info, start, stop,
+-				&iwe, wpa_ie);
+-		}
+-		if (rsn_len > 0) {
+-			memset(buf, 0, MAX_WPA_IE_LEN);
+-			n = sprintf(buf, "rsn_ie=");
+-			for (i = 0; i < rsn_len; i++) {
+-				n += snprintf(buf + n, MAX_WPA_IE_LEN - n,
+-							"%02x", rsn_ie[i]);
+-				if (n >= MAX_WPA_IE_LEN)
+-					break;
+-			}
+-			memset(&iwe, 0, sizeof(iwe));
+-			iwe.cmd = IWEVCUSTOM;
+-			iwe.u.data.length = strlen(buf);
+-			start = iwe_stream_add_point(info, start, stop,
+-				&iwe, buf);
+-			memset(&iwe, 0, sizeof(iwe));
+-			iwe.cmd = IWEVGENIE;
+-			iwe.u.data.length = rsn_len;
+-			start = iwe_stream_add_point(info, start, stop, &iwe,
+-				rsn_ie);
+-		}
+-	}
+ 
+-	{ /* parsing WPS IE */
+-		u8 wps_ie[512];
+-		uint wps_ielen;
++	start = translate_scan_wpa(info, pnetwork, &iwe, start, stop);
++
++	start = translate_scan_wps(info, pnetwork, &iwe, start, stop);
+ 
+-		if (r8712_get_wps_ie(pnetwork->network.IEs,
+-		    pnetwork->network.IELength,
+-		    wps_ie, &wps_ielen)) {
+-			if (wps_ielen > 2) {
+-				iwe.cmd = IWEVGENIE;
+-				iwe.u.data.length = (u16)wps_ielen;
+-				start = iwe_stream_add_point(info, start, stop,
+-					&iwe, wps_ie);
+-			}
+-		}
+-	}
+ 	/* Add quality statistics */
+ 	iwe.cmd = IWEVQUAL;
+ 	rssi = r8712_signal_scale_mapping(pnetwork->network.Rssi);
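+
+The rtl8712 refactor moves the WPA/RSN and WPS parsing into
+noinline_for_stack helpers so translate_scan() no longer keeps every scratch
+buffer on a single stack frame. The hex-dump loop it preserves is a useful
+pattern in its own right: accumulate with snprintf() against the remaining
+space and stop once the buffer is full. A standalone sketch (buffer size and
+data are illustrative):
+
+	#include <stdint.h>
+	#include <stdio.h>
+
+	#define BUF_LEN 32	/* deliberately small to show the cutoff */
+
+	int main(void)
+	{
+		const uint8_t ie[] = { 0xdd, 0x16, 0x00, 0x50,
+				       0xf2, 0x01, 0x01, 0x00 };
+		char buf[BUF_LEN];
+		int n, i;
+
+		n = snprintf(buf, sizeof(buf), "wpa_ie=");
+		for (i = 0; i < (int)sizeof(ie); i++) {
+			n += snprintf(buf + n, sizeof(buf) - n, "%02x", ie[i]);
+			if (n >= (int)sizeof(buf))
+				break;	/* buffer full, stop appending */
+		}
+		puts(buf);
+		return 0;
+	}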
+diff --git a/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c b/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
+index 7c6cf41645eb..3bf61aefcb1e 100644
+--- a/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
++++ b/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
+@@ -337,16 +337,13 @@ static void buffer_cb(struct vchiq_mmal_instance *instance,
+ 		return;
+ 	} else if (length == 0) {
+ 		/* stream ended */
+-		if (buf) {
+-			/* this should only ever happen if the port is
+-			 * disabled and there are buffers still queued
++		if (dev->capture.frame_count) {
++			/* empty buffer whilst capturing - expected to be an
++			 * EOS, so grab another frame
+ 			 */
+-			vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
+-			pr_debug("Empty buffer");
+-		} else if (dev->capture.frame_count) {
+-			/* grab another frame */
+ 			if (is_capturing(dev)) {
+-				pr_debug("Grab another frame");
++				v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++					 "Grab another frame");
+ 				vchiq_mmal_port_parameter_set(
+ 					instance,
+ 					dev->capture.camera_port,
+@@ -354,8 +351,14 @@ static void buffer_cb(struct vchiq_mmal_instance *instance,
+ 					&dev->capture.frame_count,
+ 					sizeof(dev->capture.frame_count));
+ 			}
++			if (vchiq_mmal_submit_buffer(instance, port, buf))
++				v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++					 "Failed to return EOS buffer");
+ 		} else {
+-			/* signal frame completion */
++			/* stopping streaming.
++			 * return buffer, and signal frame completion
++			 */
++			vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
+ 			complete(&dev->capture.frame_cmplt);
+ 		}
+ 	} else {
+@@ -577,6 +580,7 @@ static void stop_streaming(struct vb2_queue *vq)
+ 	int ret;
+ 	unsigned long timeout;
+ 	struct bm2835_mmal_dev *dev = vb2_get_drv_priv(vq);
++	struct vchiq_mmal_port *port = dev->capture.port;
+ 
+ 	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, "%s: dev:%p\n",
+ 		 __func__, dev);
+@@ -600,12 +604,6 @@ static void stop_streaming(struct vb2_queue *vq)
+ 				      &dev->capture.frame_count,
+ 				      sizeof(dev->capture.frame_count));
+ 
+-	/* wait for last frame to complete */
+-	timeout = wait_for_completion_timeout(&dev->capture.frame_cmplt, HZ);
+-	if (timeout == 0)
+-		v4l2_err(&dev->v4l2_dev,
+-			 "timed out waiting for frame completion\n");
+-
+ 	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
+ 		 "disabling connection\n");
+ 
+@@ -620,6 +618,21 @@ static void stop_streaming(struct vb2_queue *vq)
+ 			 ret);
+ 	}
+ 
++	/* wait for all buffers to be returned */
++	while (atomic_read(&port->buffers_with_vpu)) {
++		v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++			 "%s: Waiting for buffers to be returned - %d outstanding\n",
++			 __func__, atomic_read(&port->buffers_with_vpu));
++		timeout = wait_for_completion_timeout(&dev->capture.frame_cmplt,
++						      HZ);
++		if (timeout == 0) {
++			v4l2_err(&dev->v4l2_dev, "%s: Timeout waiting for buffers to be returned - %d outstanding\n",
++				 __func__,
++				 atomic_read(&port->buffers_with_vpu));
++			break;
++		}
++	}
++
+ 	if (disable_camera(dev) < 0)
+ 		v4l2_err(&dev->v4l2_dev, "Failed to disable camera\n");
+ }
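+
+The stop_streaming() rework above replaces a single one-second wait with a
+drain loop keyed off the new buffers_with_vpu counter: every buffer handed
+to the firmware bumps the atomic, the completion callback drops it, and
+teardown waits until it reaches zero, with a timeout as an escape hatch. A
+kernel-style sketch of the pairing (illustrative fragment, not a buildable
+module):
+
+	#include <linux/atomic.h>
+	#include <linux/completion.h>
+	#include <linux/jiffies.h>
+
+	struct port {
+		atomic_t buffers_with_vpu;
+		struct completion frame_cmplt;
+	};
+
+	static void submit_buffer(struct port *p)
+	{
+		atomic_inc(&p->buffers_with_vpu);  /* VPU owns one more */
+		/* ... hand the buffer to the firmware ... */
+	}
+
+	static void buffer_returned(struct port *p)
+	{
+		atomic_dec(&p->buffers_with_vpu);
+		complete(&p->frame_cmplt);	/* wake any waiter */
+	}
+
+	static void drain(struct port *p)
+	{
+		while (atomic_read(&p->buffers_with_vpu)) {
+			if (!wait_for_completion_timeout(&p->frame_cmplt, HZ))
+				break;	/* give up after 1s of silence */
+		}
+	}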
+diff --git a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c
+index 16af735af5c3..29761f6c3b55 100644
+--- a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c
++++ b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c
+@@ -161,7 +161,8 @@ struct vchiq_mmal_instance {
+ 	void *bulk_scratch;
+ 
+ 	struct idr context_map;
+-	spinlock_t context_map_lock;
++	/* protect accesses to context_map */
++	struct mutex context_map_lock;
+ 
+ 	/* component to use next */
+ 	int component_idx;
+@@ -184,10 +185,10 @@ get_msg_context(struct vchiq_mmal_instance *instance)
+ 	 * that when we service the VCHI reply, we can look up what
+ 	 * message is being replied to.
+ 	 */
+-	spin_lock(&instance->context_map_lock);
++	mutex_lock(&instance->context_map_lock);
+ 	handle = idr_alloc(&instance->context_map, msg_context,
+ 			   0, 0, GFP_KERNEL);
+-	spin_unlock(&instance->context_map_lock);
++	mutex_unlock(&instance->context_map_lock);
+ 
+ 	if (handle < 0) {
+ 		kfree(msg_context);
+@@ -211,9 +212,9 @@ release_msg_context(struct mmal_msg_context *msg_context)
+ {
+ 	struct vchiq_mmal_instance *instance = msg_context->instance;
+ 
+-	spin_lock(&instance->context_map_lock);
++	mutex_lock(&instance->context_map_lock);
+ 	idr_remove(&instance->context_map, msg_context->handle);
+-	spin_unlock(&instance->context_map_lock);
++	mutex_unlock(&instance->context_map_lock);
+ 	kfree(msg_context);
+ }
+ 
+@@ -239,6 +240,8 @@ static void buffer_work_cb(struct work_struct *work)
+ 	struct mmal_msg_context *msg_context =
+ 		container_of(work, struct mmal_msg_context, u.bulk.work);
+ 
++	atomic_dec(&msg_context->u.bulk.port->buffers_with_vpu);
++
+ 	msg_context->u.bulk.port->buffer_cb(msg_context->u.bulk.instance,
+ 					    msg_context->u.bulk.port,
+ 					    msg_context->u.bulk.status,
+@@ -287,8 +290,6 @@ static int bulk_receive(struct vchiq_mmal_instance *instance,
+ 
+ 	/* store length */
+ 	msg_context->u.bulk.buffer_used = rd_len;
+-	msg_context->u.bulk.mmal_flags =
+-	    msg->u.buffer_from_host.buffer_header.flags;
+ 	msg_context->u.bulk.dts = msg->u.buffer_from_host.buffer_header.dts;
+ 	msg_context->u.bulk.pts = msg->u.buffer_from_host.buffer_header.pts;
+ 
+@@ -379,6 +380,8 @@ buffer_from_host(struct vchiq_mmal_instance *instance,
+ 	/* initialise work structure ready to schedule callback */
+ 	INIT_WORK(&msg_context->u.bulk.work, buffer_work_cb);
+ 
++	atomic_inc(&port->buffers_with_vpu);
++
+ 	/* prep the buffer from host message */
+ 	memset(&m, 0xbc, sizeof(m));	/* just to make debug clearer */
+ 
+@@ -447,6 +450,9 @@ static void buffer_to_host_cb(struct vchiq_mmal_instance *instance,
+ 		return;
+ 	}
+ 
++	msg_context->u.bulk.mmal_flags =
++				msg->u.buffer_from_host.buffer_header.flags;
++
+ 	if (msg->h.status != MMAL_MSG_STATUS_SUCCESS) {
+ 		/* message reception had an error */
+ 		pr_warn("error %d in reply\n", msg->h.status);
+@@ -1323,16 +1329,6 @@ static int port_enable(struct vchiq_mmal_instance *instance,
+ 	if (port->enabled)
+ 		return 0;
+ 
+-	/* ensure there are enough buffers queued to cover the buffer headers */
+-	if (port->buffer_cb) {
+-		hdr_count = 0;
+-		list_for_each(buf_head, &port->buffers) {
+-			hdr_count++;
+-		}
+-		if (hdr_count < port->current_buffer.num)
+-			return -ENOSPC;
+-	}
+-
+ 	ret = port_action_port(instance, port,
+ 			       MMAL_MSG_PORT_ACTION_TYPE_ENABLE);
+ 	if (ret)
+@@ -1849,7 +1845,7 @@ int vchiq_mmal_init(struct vchiq_mmal_instance **out_instance)
+ 
+ 	instance->bulk_scratch = vmalloc(PAGE_SIZE);
+ 
+-	spin_lock_init(&instance->context_map_lock);
++	mutex_init(&instance->context_map_lock);
+ 	idr_init_base(&instance->context_map, 1);
+ 
+ 	params.callback_param = instance;
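+
+The mmal-vchiq change swaps a spinlock for a mutex around the context-map
+IDR for a sleeping-in-atomic reason: idr_alloc(..., GFP_KERNEL) may sleep,
+and sleeping while holding a spinlock is illegal. A kernel-style sketch of
+the corrected pattern (field and function names follow the hunk above, but
+this is an illustrative fragment, not the full driver):
+
+	#include <linux/idr.h>
+	#include <linux/mutex.h>
+
+	struct instance {
+		struct idr context_map;
+		struct mutex context_map_lock;	/* protects context_map */
+	};
+
+	static void instance_init(struct instance *inst)
+	{
+		mutex_init(&inst->context_map_lock);
+		idr_init_base(&inst->context_map, 1);	/* 0 is reserved */
+	}
+
+	static int map_context(struct instance *inst, void *ctx)
+	{
+		int handle;
+
+		/* GFP_KERNEL may sleep, so a sleeping lock is required;
+		 * under a spinlock this would have to be GFP_ATOMIC. */
+		mutex_lock(&inst->context_map_lock);
+		handle = idr_alloc(&inst->context_map, ctx, 0, 0, GFP_KERNEL);
+		mutex_unlock(&inst->context_map_lock);
+
+		return handle;	/* negative errno on failure */
+	}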
+diff --git a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.h b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.h
+index 22b839ecd5f0..b0ee1716525b 100644
+--- a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.h
++++ b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.h
+@@ -71,6 +71,9 @@ struct vchiq_mmal_port {
+ 	struct list_head buffers;
+ 	/* lock to serialise adding and removing buffers from list */
+ 	spinlock_t slock;
++
++	/* Count of buffers the VPU has yet to return */
++	atomic_t buffers_with_vpu;
+ 	/* callback on buffer completion */
+ 	vchiq_mmal_buffer_cb buffer_cb;
+ 	/* callback context */
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
+index a9cc01e8e6c5..833b28e9ba4b 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
+@@ -553,7 +553,7 @@ create_pagelist(char __user *buf, size_t count, unsigned short type)
+ 		(g_cache_line_size - 1)))) {
+ 		char *fragments;
+ 
+-		if (down_killable(&g_free_fragments_sema)) {
++		if (down_interruptible(&g_free_fragments_sema) != 0) {
+ 			cleanup_pagelistinfo(pagelistinfo);
+ 			return NULL;
+ 		}
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+index 064d0db4c51e..ccfb8218b83c 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+@@ -560,7 +560,8 @@ add_completion(VCHIQ_INSTANCE_T instance, VCHIQ_REASON_T reason,
+ 		vchiq_log_trace(vchiq_arm_log_level,
+ 			"%s - completion queue full", __func__);
+ 		DEBUG_COUNT(COMPLETION_QUEUE_FULL_COUNT);
+-		if (wait_for_completion_killable( &instance->remove_event)) {
++		if (wait_for_completion_interruptible(
++					&instance->remove_event)) {
+ 			vchiq_log_info(vchiq_arm_log_level,
+ 				"service_callback interrupted");
+ 			return VCHIQ_RETRY;
+@@ -671,7 +672,7 @@ service_callback(VCHIQ_REASON_T reason, struct vchiq_header *header,
+ 			}
+ 
+ 			DEBUG_TRACE(SERVICE_CALLBACK_LINE);
+-			if (wait_for_completion_killable(
++			if (wait_for_completion_interruptible(
+ 						&user_service->remove_event)
+ 				!= 0) {
+ 				vchiq_log_info(vchiq_arm_log_level,
+@@ -1006,7 +1007,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 		   has been closed until the client library calls the
+ 		   CLOSE_DELIVERED ioctl, signalling close_event. */
+ 		if (user_service->close_pending &&
+-			wait_for_completion_killable(
++			wait_for_completion_interruptible(
+ 				&user_service->close_event))
+ 			status = VCHIQ_RETRY;
+ 		break;
+@@ -1182,7 +1183,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 
+ 			DEBUG_TRACE(AWAIT_COMPLETION_LINE);
+ 			mutex_unlock(&instance->completion_mutex);
+-			rc = wait_for_completion_killable(
++			rc = wait_for_completion_interruptible(
+ 						&instance->insert_event);
+ 			mutex_lock(&instance->completion_mutex);
+ 			if (rc != 0) {
+@@ -1352,7 +1353,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 			do {
+ 				spin_unlock(&msg_queue_spinlock);
+ 				DEBUG_TRACE(DEQUEUE_MESSAGE_LINE);
+-				if (wait_for_completion_killable(
++				if (wait_for_completion_interruptible(
+ 					&user_service->insert_event)) {
+ 					vchiq_log_info(vchiq_arm_log_level,
+ 						"DEQUEUE_MESSAGE interrupted");
+@@ -2360,7 +2361,7 @@ vchiq_keepalive_thread_func(void *v)
+ 	while (1) {
+ 		long rc = 0, uc = 0;
+ 
+-		if (wait_for_completion_killable(&arm_state->ka_evt)
++		if (wait_for_completion_interruptible(&arm_state->ka_evt)
+ 				!= 0) {
+ 			vchiq_log_error(vchiq_susp_log_level,
+ 				"%s interrupted", __func__);
+@@ -2611,7 +2612,7 @@ block_resume(struct vchiq_arm_state *arm_state)
+ 		write_unlock_bh(&arm_state->susp_res_lock);
+ 		vchiq_log_info(vchiq_susp_log_level, "%s wait for previously "
+ 			"blocked clients", __func__);
+-		if (wait_for_completion_killable_timeout(
++		if (wait_for_completion_interruptible_timeout(
+ 				&arm_state->blocked_blocker, timeout_val)
+ 					<= 0) {
+ 			vchiq_log_error(vchiq_susp_log_level, "%s wait for "
+@@ -2637,7 +2638,7 @@ block_resume(struct vchiq_arm_state *arm_state)
+ 		write_unlock_bh(&arm_state->susp_res_lock);
+ 		vchiq_log_info(vchiq_susp_log_level, "%s wait for resume",
+ 			__func__);
+-		if (wait_for_completion_killable_timeout(
++		if (wait_for_completion_interruptible_timeout(
+ 				&arm_state->vc_resume_complete, timeout_val)
+ 					<= 0) {
+ 			vchiq_log_error(vchiq_susp_log_level, "%s wait for "
+@@ -2844,7 +2845,7 @@ vchiq_arm_force_suspend(struct vchiq_state *state)
+ 	do {
+ 		write_unlock_bh(&arm_state->susp_res_lock);
+ 
+-		rc = wait_for_completion_killable_timeout(
++		rc = wait_for_completion_interruptible_timeout(
+ 				&arm_state->vc_suspend_complete,
+ 				msecs_to_jiffies(FORCE_SUSPEND_TIMEOUT_MS));
+ 
+@@ -2940,7 +2941,7 @@ vchiq_arm_allow_resume(struct vchiq_state *state)
+ 	write_unlock_bh(&arm_state->susp_res_lock);
+ 
+ 	if (resume) {
+-		if (wait_for_completion_killable(
++		if (wait_for_completion_interruptible(
+ 			&arm_state->vc_resume_complete) < 0) {
+ 			vchiq_log_error(vchiq_susp_log_level,
+ 				"%s interrupted", __func__);
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+index 819813e742d8..0958d86aebe6 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+@@ -425,13 +425,21 @@ remote_event_create(wait_queue_head_t *wq, struct remote_event *event)
+ 	init_waitqueue_head(wq);
+ }
+ 
++/*
++ * All the event waiting routines in VCHIQ used a custom semaphore
++ * implementation that filtered most signals. This achieved a behaviour similar
++ * to the "killable" family of functions. While cleaning up this code all the
++ * routines were switched to the "interruptible" family of functions, as the
++ * former was deemed unjustified and the use of "killable" left all VCHIQ's
++ * threads in D state.
++ */
+ static inline int
+ remote_event_wait(wait_queue_head_t *wq, struct remote_event *event)
+ {
+ 	if (!event->fired) {
+ 		event->armed = 1;
+ 		dsb(sy);
+-		if (wait_event_killable(*wq, event->fired)) {
++		if (wait_event_interruptible(*wq, event->fired)) {
+ 			event->armed = 0;
+ 			return 0;
+ 		}
+@@ -590,7 +598,7 @@ reserve_space(struct vchiq_state *state, size_t space, int is_blocking)
+ 			remote_event_signal(&state->remote->trigger);
+ 
+ 			if (!is_blocking ||
+-				(wait_for_completion_killable(
++				(wait_for_completion_interruptible(
+ 				&state->slot_available_event)))
+ 				return NULL; /* No space available */
+ 		}
+@@ -860,7 +868,7 @@ queue_message(struct vchiq_state *state, struct vchiq_service *service,
+ 			spin_unlock(&quota_spinlock);
+ 			mutex_unlock(&state->slot_mutex);
+ 
+-			if (wait_for_completion_killable(
++			if (wait_for_completion_interruptible(
+ 						&state->data_quota_event))
+ 				return VCHIQ_RETRY;
+ 
+@@ -891,7 +899,7 @@ queue_message(struct vchiq_state *state, struct vchiq_service *service,
+ 				service_quota->slot_use_count);
+ 			VCHIQ_SERVICE_STATS_INC(service, quota_stalls);
+ 			mutex_unlock(&state->slot_mutex);
+-			if (wait_for_completion_killable(
++			if (wait_for_completion_interruptible(
+ 						&service_quota->quota_event))
+ 				return VCHIQ_RETRY;
+ 			if (service->closing)
+@@ -1740,7 +1748,8 @@ parse_rx_slots(struct vchiq_state *state)
+ 					&service->bulk_rx : &service->bulk_tx;
+ 
+ 				DEBUG_TRACE(PARSE_LINE);
+-				if (mutex_lock_killable(&service->bulk_mutex)) {
++				if (mutex_lock_killable(
++					&service->bulk_mutex) != 0) {
+ 					DEBUG_TRACE(PARSE_LINE);
+ 					goto bail_not_ready;
+ 				}
+@@ -2458,7 +2467,7 @@ vchiq_open_service_internal(struct vchiq_service *service, int client_id)
+ 			       QMFLAGS_IS_BLOCKING);
+ 	if (status == VCHIQ_SUCCESS) {
+ 		/* Wait for the ACK/NAK */
+-		if (wait_for_completion_killable(&service->remove_event)) {
++		if (wait_for_completion_interruptible(&service->remove_event)) {
+ 			status = VCHIQ_RETRY;
+ 			vchiq_release_service_internal(service);
+ 		} else if ((service->srvstate != VCHIQ_SRVSTATE_OPEN) &&
+@@ -2825,7 +2834,7 @@ vchiq_connect_internal(struct vchiq_state *state, VCHIQ_INSTANCE_T instance)
+ 	}
+ 
+ 	if (state->conn_state == VCHIQ_CONNSTATE_CONNECTING) {
+-		if (wait_for_completion_killable(&state->connect))
++		if (wait_for_completion_interruptible(&state->connect))
+ 			return VCHIQ_RETRY;
+ 
+ 		vchiq_set_conn_state(state, VCHIQ_CONNSTATE_CONNECTED);
+@@ -2924,7 +2933,7 @@ vchiq_close_service(VCHIQ_SERVICE_HANDLE_T handle)
+ 	}
+ 
+ 	while (1) {
+-		if (wait_for_completion_killable(&service->remove_event)) {
++		if (wait_for_completion_interruptible(&service->remove_event)) {
+ 			status = VCHIQ_RETRY;
+ 			break;
+ 		}
+@@ -2985,7 +2994,7 @@ vchiq_remove_service(VCHIQ_SERVICE_HANDLE_T handle)
+ 		request_poll(service->state, service, VCHIQ_POLL_REMOVE);
+ 	}
+ 	while (1) {
+-		if (wait_for_completion_killable(&service->remove_event)) {
++		if (wait_for_completion_interruptible(&service->remove_event)) {
+ 			status = VCHIQ_RETRY;
+ 			break;
+ 		}
+@@ -3068,7 +3077,7 @@ VCHIQ_STATUS_T vchiq_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle,
+ 		VCHIQ_SERVICE_STATS_INC(service, bulk_stalls);
+ 		do {
+ 			mutex_unlock(&service->bulk_mutex);
+-			if (wait_for_completion_killable(
++			if (wait_for_completion_interruptible(
+ 						&service->bulk_remove_event)) {
+ 				status = VCHIQ_RETRY;
+ 				goto error_exit;
+@@ -3145,7 +3154,7 @@ waiting:
+ 
+ 	if (bulk_waiter) {
+ 		bulk_waiter->bulk = bulk;
+-		if (wait_for_completion_killable(&bulk_waiter->event))
++		if (wait_for_completion_interruptible(&bulk_waiter->event))
+ 			status = VCHIQ_RETRY;
+ 		else if (bulk_waiter->actual == VCHIQ_BULK_ACTUAL_ABORTED)
+ 			status = VCHIQ_ERROR;
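+
+As the new comment in vchiq_core.c explains, "killable" waits leave tasks in
+uninterruptible (D) state for anything short of a fatal signal, while
+"interruptible" waits return early on any signal and let the caller retry.
+A kernel-style sketch of the calling convention (a hypothetical helper, with
+-ERESTARTSYS standing in for the driver's VCHIQ_RETRY status):
+
+	#include <linux/completion.h>
+	#include <linux/errno.h>
+
+	/* wait_for_completion_interruptible() returns 0 on completion and
+	 * -ERESTARTSYS if a signal arrived first, so the caller can back
+	 * out and ask userspace to retry the operation. */
+	static int wait_for_remote(struct completion *event)
+	{
+		if (wait_for_completion_interruptible(event))
+			return -ERESTARTSYS;
+
+		return 0;
+	}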
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.c
+index 55c5fd82b911..30deea1b57f7 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.c
+@@ -80,7 +80,7 @@ void vchiu_queue_push(struct vchiu_queue *queue, struct vchiq_header *header)
+ 		return;
+ 
+ 	while (queue->write == queue->read + queue->size) {
+-		if (wait_for_completion_killable(&queue->pop))
++		if (wait_for_completion_interruptible(&queue->pop))
+ 			flush_signals(current);
+ 	}
+ 
+@@ -93,7 +93,7 @@ void vchiu_queue_push(struct vchiu_queue *queue, struct vchiq_header *header)
+ struct vchiq_header *vchiu_queue_peek(struct vchiu_queue *queue)
+ {
+ 	while (queue->write == queue->read) {
+-		if (wait_for_completion_killable(&queue->push))
++		if (wait_for_completion_interruptible(&queue->push))
+ 			flush_signals(current);
+ 	}
+ 
+@@ -107,7 +107,7 @@ struct vchiq_header *vchiu_queue_pop(struct vchiu_queue *queue)
+ 	struct vchiq_header *header;
+ 
+ 	while (queue->write == queue->read) {
+-		if (wait_for_completion_killable(&queue->push))
++		if (wait_for_completion_interruptible(&queue->push))
+ 			flush_signals(current);
+ 	}
+ 
+diff --git a/drivers/staging/wilc1000/wilc_netdev.c b/drivers/staging/wilc1000/wilc_netdev.c
+index ba78c08a17f1..5338d7d2b248 100644
+--- a/drivers/staging/wilc1000/wilc_netdev.c
++++ b/drivers/staging/wilc1000/wilc_netdev.c
+@@ -530,17 +530,17 @@ static int wilc_wlan_initialize(struct net_device *dev, struct wilc_vif *vif)
+ 			goto fail_locks;
+ 		}
+ 
+-		if (wl->gpio_irq && init_irq(dev)) {
+-			ret = -EIO;
+-			goto fail_locks;
+-		}
+-
+ 		ret = wlan_initialize_threads(dev);
+ 		if (ret < 0) {
+ 			ret = -EIO;
+ 			goto fail_wilc_wlan;
+ 		}
+ 
++		if (wl->gpio_irq && init_irq(dev)) {
++			ret = -EIO;
++			goto fail_threads;
++		}
++
+ 		if (!wl->dev_irq_num &&
+ 		    wl->hif_func->enable_interrupt &&
+ 		    wl->hif_func->enable_interrupt(wl)) {
+@@ -596,7 +596,7 @@ fail_irq_enable:
+ fail_irq_init:
+ 		if (wl->dev_irq_num)
+ 			deinit_irq(dev);
+-
++fail_threads:
+ 		wlan_deinitialize_threads(dev);
+ fail_wilc_wlan:
+ 		wilc_wlan_cleanup(dev);
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index d2f3310abe54..682300713be4 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -1869,8 +1869,7 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
+ 
+ 	status = serial_port_in(port, UART_LSR);
+ 
+-	if (status & (UART_LSR_DR | UART_LSR_BI) &&
+-	    iir & UART_IIR_RDI) {
++	if (status & (UART_LSR_DR | UART_LSR_BI)) {
+ 		if (!up->dma || handle_rx_dma(up, iir))
+ 			status = serial8250_rx_chars(up, status);
+ 	}
+diff --git a/drivers/usb/dwc2/core.c b/drivers/usb/dwc2/core.c
+index 55d5ae2a7ec7..51d83f77dc04 100644
+--- a/drivers/usb/dwc2/core.c
++++ b/drivers/usb/dwc2/core.c
+@@ -531,7 +531,7 @@ int dwc2_core_reset(struct dwc2_hsotg *hsotg, bool skip_wait)
+ 	}
+ 
+ 	/* Wait for AHB master IDLE state */
+-	if (dwc2_hsotg_wait_bit_set(hsotg, GRSTCTL, GRSTCTL_AHBIDLE, 50)) {
++	if (dwc2_hsotg_wait_bit_set(hsotg, GRSTCTL, GRSTCTL_AHBIDLE, 10000)) {
+ 		dev_warn(hsotg->dev, "%s: HANG! AHB Idle timeout GRSTCTL GRSTCTL_AHBIDLE\n",
+ 			 __func__);
+ 		return -EBUSY;
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 47be961f1bf3..c7ed90084d1a 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -997,7 +997,6 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
+ 		 * earlier
+ 		 */
+ 		gadget = epfile->ffs->gadget;
+-		io_data->use_sg = gadget->sg_supported && data_len > PAGE_SIZE;
+ 
+ 		spin_lock_irq(&epfile->ffs->eps_lock);
+ 		/* In the meantime, endpoint got disabled or changed. */
+@@ -1012,6 +1011,8 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
+ 		 */
+ 		if (io_data->read)
+ 			data_len = usb_ep_align_maybe(gadget, ep->ep, data_len);
++
++		io_data->use_sg = gadget->sg_supported && data_len > PAGE_SIZE;
+ 		spin_unlock_irq(&epfile->ffs->eps_lock);
+ 
+ 		data = ffs_alloc_buffer(io_data, data_len);
+diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c
+index 737bd77a575d..2929bb47a618 100644
+--- a/drivers/usb/gadget/function/u_ether.c
++++ b/drivers/usb/gadget/function/u_ether.c
+@@ -186,11 +186,12 @@ rx_submit(struct eth_dev *dev, struct usb_request *req, gfp_t gfp_flags)
+ 		out = dev->port_usb->out_ep;
+ 	else
+ 		out = NULL;
+-	spin_unlock_irqrestore(&dev->lock, flags);
+ 
+ 	if (!out)
++	{
++		spin_unlock_irqrestore(&dev->lock, flags);
+ 		return -ENOTCONN;
+-
++	}
+ 
+ 	/* Padding up to RX_EXTRA handles minor disagreements with host.
+ 	 * Normally we use the USB "terminate on short read" convention;
+@@ -214,6 +215,7 @@ rx_submit(struct eth_dev *dev, struct usb_request *req, gfp_t gfp_flags)
+ 
+ 	if (dev->port_usb->is_fixed)
+ 		size = max_t(size_t, size, dev->port_usb->fixed_out_len);
++	spin_unlock_irqrestore(&dev->lock, flags);
+ 
+ 	skb = __netdev_alloc_skb(dev->net, size + NET_IP_ALIGN, gfp_flags);
+ 	if (skb == NULL) {
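+
+The u_ether fix widens the locked region: dev->port_usb and values derived
+from it were read after dev->lock was dropped, racing with disconnect. The
+rule it applies is general: the unlock must come after the last access to
+the state the lock protects. A runnable pthread sketch of the same
+discipline (names are illustrative):
+
+	#include <pthread.h>
+	#include <stddef.h>
+	#include <stdio.h>
+
+	struct dev {
+		pthread_mutex_t lock;
+		int *port;	/* another thread may NULL this out */
+		size_t fixed_len;
+	};
+
+	static int rx_submit(struct dev *d)
+	{
+		size_t size;
+
+		pthread_mutex_lock(&d->lock);
+		if (!d->port) {
+			pthread_mutex_unlock(&d->lock);
+			return -1;	/* not connected */
+		}
+		/* Read everything the lock protects before unlocking. */
+		size = d->fixed_len;
+		pthread_mutex_unlock(&d->lock);
+
+		printf("allocating %zu bytes\n", size);
+		return 0;
+	}
+
+	int main(void)
+	{
+		int port = 1;
+		struct dev d = { PTHREAD_MUTEX_INITIALIZER, &port, 512 };
+
+		return rx_submit(&d);
+	}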
+diff --git a/drivers/usb/renesas_usbhs/fifo.c b/drivers/usb/renesas_usbhs/fifo.c
+index 39fa2fc1b8b7..6036cbae8c78 100644
+--- a/drivers/usb/renesas_usbhs/fifo.c
++++ b/drivers/usb/renesas_usbhs/fifo.c
+@@ -802,9 +802,8 @@ static int __usbhsf_dma_map_ctrl(struct usbhs_pkt *pkt, int map)
+ }
+ 
+ static void usbhsf_dma_complete(void *arg);
+-static void xfer_work(struct work_struct *work)
++static void usbhsf_dma_xfer_preparing(struct usbhs_pkt *pkt)
+ {
+-	struct usbhs_pkt *pkt = container_of(work, struct usbhs_pkt, work);
+ 	struct usbhs_pipe *pipe = pkt->pipe;
+ 	struct usbhs_fifo *fifo;
+ 	struct usbhs_priv *priv = usbhs_pipe_to_priv(pipe);
+@@ -812,12 +811,10 @@ static void xfer_work(struct work_struct *work)
+ 	struct dma_chan *chan;
+ 	struct device *dev = usbhs_priv_to_dev(priv);
+ 	enum dma_transfer_direction dir;
+-	unsigned long flags;
+ 
+-	usbhs_lock(priv, flags);
+ 	fifo = usbhs_pipe_to_fifo(pipe);
+ 	if (!fifo)
+-		goto xfer_work_end;
++		return;
+ 
+ 	chan = usbhsf_dma_chan_get(fifo, pkt);
+ 	dir = usbhs_pipe_is_dir_in(pipe) ? DMA_DEV_TO_MEM : DMA_MEM_TO_DEV;
+@@ -826,7 +823,7 @@ static void xfer_work(struct work_struct *work)
+ 					pkt->trans, dir,
+ 					DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+ 	if (!desc)
+-		goto xfer_work_end;
++		return;
+ 
+ 	desc->callback		= usbhsf_dma_complete;
+ 	desc->callback_param	= pipe;
+@@ -834,7 +831,7 @@ static void xfer_work(struct work_struct *work)
+ 	pkt->cookie = dmaengine_submit(desc);
+ 	if (pkt->cookie < 0) {
+ 		dev_err(dev, "Failed to submit dma descriptor\n");
+-		goto xfer_work_end;
++		return;
+ 	}
+ 
+ 	dev_dbg(dev, "  %s %d (%d/ %d)\n",
+@@ -845,8 +842,17 @@ static void xfer_work(struct work_struct *work)
+ 	dma_async_issue_pending(chan);
+ 	usbhsf_dma_start(pipe, fifo);
+ 	usbhs_pipe_enable(pipe);
++}
++
++static void xfer_work(struct work_struct *work)
++{
++	struct usbhs_pkt *pkt = container_of(work, struct usbhs_pkt, work);
++	struct usbhs_pipe *pipe = pkt->pipe;
++	struct usbhs_priv *priv = usbhs_pipe_to_priv(pipe);
++	unsigned long flags;
+ 
+-xfer_work_end:
++	usbhs_lock(priv, flags);
++	usbhsf_dma_xfer_preparing(pkt);
+ 	usbhs_unlock(priv, flags);
+ }
+ 
+@@ -899,8 +905,13 @@ static int usbhsf_dma_prepare_push(struct usbhs_pkt *pkt, int *is_done)
+ 	pkt->trans = len;
+ 
+ 	usbhsf_tx_irq_ctrl(pipe, 0);
+-	INIT_WORK(&pkt->work, xfer_work);
+-	schedule_work(&pkt->work);
++	/* FIXME: Workaround for usb-dmac so that the driver can be used in atomic context */
++	if (usbhs_get_dparam(priv, has_usb_dmac)) {
++		usbhsf_dma_xfer_preparing(pkt);
++	} else {
++		INIT_WORK(&pkt->work, xfer_work);
++		schedule_work(&pkt->work);
++	}
+ 
+ 	return 0;
+ 
+@@ -1006,8 +1017,7 @@ static int usbhsf_dma_prepare_pop_with_usb_dmac(struct usbhs_pkt *pkt,
+ 
+ 	pkt->trans = pkt->length;
+ 
+-	INIT_WORK(&pkt->work, xfer_work);
+-	schedule_work(&pkt->work);
++	usbhsf_dma_xfer_preparing(pkt);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 1d8461ae2c34..23669a584bae 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -1029,6 +1029,7 @@ static const struct usb_device_id id_table_combined[] = {
+ 	{ USB_DEVICE(AIRBUS_DS_VID, AIRBUS_DS_P8GR) },
+ 	/* EZPrototypes devices */
+ 	{ USB_DEVICE(EZPROTOTYPES_VID, HJELMSLUND_USB485_ISO_PID) },
++	{ USB_DEVICE_INTERFACE_NUMBER(UNJO_VID, UNJO_ISODEBUG_V1_PID, 1) },
+ 	{ }					/* Terminating entry */
+ };
+ 
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index 5755f0df0025..f12d806220b4 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -1543,3 +1543,9 @@
+ #define CHETCO_SEASMART_DISPLAY_PID	0xA5AD /* SeaSmart NMEA2000 Display */
+ #define CHETCO_SEASMART_LITE_PID	0xA5AE /* SeaSmart Lite USB Adapter */
+ #define CHETCO_SEASMART_ANALOG_PID	0xA5AF /* SeaSmart Analog Adapter */
++
++/*
++ * Unjo AB
++ */
++#define UNJO_VID			0x22B7
++#define UNJO_ISODEBUG_V1_PID		0x150D
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index a0aaf0635359..c1582fbd1150 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1343,6 +1343,7 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(4) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0414, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0417, 0xff, 0xff, 0xff) },
++	{ USB_DEVICE_INTERFACE_CLASS(ZTE_VENDOR_ID, 0x0601, 0xff) },	/* GosunCn ZTE WeLink ME3630 (RNDIS mode) */
+ 	{ USB_DEVICE_INTERFACE_CLASS(ZTE_VENDOR_ID, 0x0602, 0xff) },	/* GosunCn ZTE WeLink ME3630 (MBIM mode) */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1008, 0xff, 0xff, 0xff),
+ 	  .driver_info = RSVD(4) },
+diff --git a/drivers/usb/typec/tps6598x.c b/drivers/usb/typec/tps6598x.c
+index c674abe3cf99..a38d1409f15b 100644
+--- a/drivers/usb/typec/tps6598x.c
++++ b/drivers/usb/typec/tps6598x.c
+@@ -41,7 +41,7 @@
+ #define TPS_STATUS_VCONN(s)		(!!((s) & BIT(7)))
+ 
+ /* TPS_REG_SYSTEM_CONF bits */
+-#define TPS_SYSCONF_PORTINFO(c)		((c) & 3)
++#define TPS_SYSCONF_PORTINFO(c)		((c) & 7)
+ 
+ enum {
+ 	TPS_PORTINFO_SINK,
+@@ -127,7 +127,7 @@ tps6598x_block_read(struct tps6598x *tps, u8 reg, void *val, size_t len)
+ }
+ 
+ static int tps6598x_block_write(struct tps6598x *tps, u8 reg,
+-				void *val, size_t len)
++				const void *val, size_t len)
+ {
+ 	u8 data[TPS_MAX_LEN + 1];
+ 
+@@ -173,7 +173,7 @@ static inline int tps6598x_write64(struct tps6598x *tps, u8 reg, u64 val)
+ static inline int
+ tps6598x_write_4cc(struct tps6598x *tps, u8 reg, const char *val)
+ {
+-	return tps6598x_block_write(tps, reg, &val, sizeof(u32));
++	return tps6598x_block_write(tps, reg, val, 4);
+ }
+ 
+ static int tps6598x_read_partner_identity(struct tps6598x *tps)
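+
+The tps6598x 4CC fix is subtle: val is a "const char *", so passing &val
+handed tps6598x_block_write() the bytes of the pointer itself rather than
+the four characters it points at. A runnable illustration of the difference
+(the 4CC string is illustrative):
+
+	#include <stdio.h>
+	#include <string.h>
+
+	static void block_write(const void *val, size_t len)
+	{
+		const unsigned char *p = val;
+		size_t i;
+
+		for (i = 0; i < len; i++)
+			printf("%02x ", p[i]);
+		putchar('\n');
+	}
+
+	int main(void)
+	{
+		const char *val = "GAID";	/* a 4-character command */
+
+		block_write(&val, 4);	/* buggy: low bytes of the pointer */
+		block_write(val, 4);	/* fixed: 47 41 49 44 ("GAID") */
+		return 0;
+	}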
+diff --git a/fs/crypto/policy.c b/fs/crypto/policy.c
+index d536889ac31b..4941fe8471ce 100644
+--- a/fs/crypto/policy.c
++++ b/fs/crypto/policy.c
+@@ -81,6 +81,8 @@ int fscrypt_ioctl_set_policy(struct file *filp, const void __user *arg)
+ 	if (ret == -ENODATA) {
+ 		if (!S_ISDIR(inode->i_mode))
+ 			ret = -ENOTDIR;
++		else if (IS_DEADDIR(inode))
++			ret = -ENOENT;
+ 		else if (!inode->i_sb->s_cop->empty_dir(inode))
+ 			ret = -ENOTEMPTY;
+ 		else
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index eeee100785a5..fd2c19eea647 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -1234,10 +1234,20 @@ static struct nfs4_opendata *nfs4_opendata_alloc(struct dentry *dentry,
+ 	atomic_inc(&sp->so_count);
+ 	p->o_arg.open_flags = flags;
+ 	p->o_arg.fmode = fmode & (FMODE_READ|FMODE_WRITE);
+-	p->o_arg.umask = current_umask();
+ 	p->o_arg.claim = nfs4_map_atomic_open_claim(server, claim);
+ 	p->o_arg.share_access = nfs4_map_atomic_open_share(server,
+ 			fmode, flags);
++	if (flags & O_CREAT) {
++		p->o_arg.umask = current_umask();
++		p->o_arg.label = nfs4_label_copy(p->a_label, label);
++		if (c->sattr != NULL && c->sattr->ia_valid != 0) {
++			p->o_arg.u.attrs = &p->attrs;
++			memcpy(&p->attrs, c->sattr, sizeof(p->attrs));
++
++			memcpy(p->o_arg.u.verifier.data, c->verf,
++					sizeof(p->o_arg.u.verifier.data));
++		}
++	}
+ 	/* don't put an ACCESS op in OPEN compound if O_EXCL, because ACCESS
+ 	 * will return permission denied for all bits until close */
+ 	if (!(flags & O_EXCL)) {
+@@ -1261,7 +1271,6 @@ static struct nfs4_opendata *nfs4_opendata_alloc(struct dentry *dentry,
+ 	p->o_arg.server = server;
+ 	p->o_arg.bitmask = nfs4_bitmask(server, label);
+ 	p->o_arg.open_bitmap = &nfs4_fattr_bitmap[0];
+-	p->o_arg.label = nfs4_label_copy(p->a_label, label);
+ 	switch (p->o_arg.claim) {
+ 	case NFS4_OPEN_CLAIM_NULL:
+ 	case NFS4_OPEN_CLAIM_DELEGATE_CUR:
+@@ -1274,13 +1283,6 @@ static struct nfs4_opendata *nfs4_opendata_alloc(struct dentry *dentry,
+ 	case NFS4_OPEN_CLAIM_DELEG_PREV_FH:
+ 		p->o_arg.fh = NFS_FH(d_inode(dentry));
+ 	}
+-	if (c != NULL && c->sattr != NULL && c->sattr->ia_valid != 0) {
+-		p->o_arg.u.attrs = &p->attrs;
+-		memcpy(&p->attrs, c->sattr, sizeof(p->attrs));
+-
+-		memcpy(p->o_arg.u.verifier.data, c->verf,
+-				sizeof(p->o_arg.u.verifier.data));
+-	}
+ 	p->c_arg.fh = &p->o_res.fh;
+ 	p->c_arg.stateid = &p->o_res.stateid;
+ 	p->c_arg.seqid = p->o_arg.seqid;
+diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
+index fc20e06c56ba..dd1783ea7003 100644
+--- a/fs/quota/dquot.c
++++ b/fs/quota/dquot.c
+@@ -1993,8 +1993,8 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
+ 				       &warn_to[cnt]);
+ 		if (ret)
+ 			goto over_quota;
+-		ret = dquot_add_space(transfer_to[cnt], cur_space, rsv_space, 0,
+-				      &warn_to[cnt]);
++		ret = dquot_add_space(transfer_to[cnt], cur_space, rsv_space,
++				      DQUOT_SPACE_WARN, &warn_to[cnt]);
+ 		if (ret) {
+ 			spin_lock(&transfer_to[cnt]->dq_dqb_lock);
+ 			dquot_decr_inodes(transfer_to[cnt], inode_usage);
+diff --git a/fs/udf/inode.c b/fs/udf/inode.c
+index e7276932e433..9bb18311a22f 100644
+--- a/fs/udf/inode.c
++++ b/fs/udf/inode.c
+@@ -470,13 +470,15 @@ static struct buffer_head *udf_getblk(struct inode *inode, udf_pblk_t block,
+ 	return NULL;
+ }
+ 
+-/* Extend the file by 'blocks' blocks, return the number of extents added */
++/* Extend the file with new blocks totaling 'new_block_bytes',
++ * return the number of extents added
++ */
+ static int udf_do_extend_file(struct inode *inode,
+ 			      struct extent_position *last_pos,
+ 			      struct kernel_long_ad *last_ext,
+-			      sector_t blocks)
++			      loff_t new_block_bytes)
+ {
+-	sector_t add;
++	uint32_t add;
+ 	int count = 0, fake = !(last_ext->extLength & UDF_EXTENT_LENGTH_MASK);
+ 	struct super_block *sb = inode->i_sb;
+ 	struct kernel_lb_addr prealloc_loc = {};
+@@ -486,7 +488,7 @@ static int udf_do_extend_file(struct inode *inode,
+ 
+ 	/* The previous extent is fake and we should not extend by anything
+ 	 * - there's nothing to do... */
+-	if (!blocks && fake)
++	if (!new_block_bytes && fake)
+ 		return 0;
+ 
+ 	iinfo = UDF_I(inode);
+@@ -517,13 +519,12 @@ static int udf_do_extend_file(struct inode *inode,
+ 	/* Can we merge with the previous extent? */
+ 	if ((last_ext->extLength & UDF_EXTENT_FLAG_MASK) ==
+ 					EXT_NOT_RECORDED_NOT_ALLOCATED) {
+-		add = ((1 << 30) - sb->s_blocksize -
+-			(last_ext->extLength & UDF_EXTENT_LENGTH_MASK)) >>
+-			sb->s_blocksize_bits;
+-		if (add > blocks)
+-			add = blocks;
+-		blocks -= add;
+-		last_ext->extLength += add << sb->s_blocksize_bits;
++		add = (1 << 30) - sb->s_blocksize -
++			(last_ext->extLength & UDF_EXTENT_LENGTH_MASK);
++		if (add > new_block_bytes)
++			add = new_block_bytes;
++		new_block_bytes -= add;
++		last_ext->extLength += add;
+ 	}
+ 
+ 	if (fake) {
+@@ -544,28 +545,27 @@ static int udf_do_extend_file(struct inode *inode,
+ 	}
+ 
+ 	/* Managed to do everything necessary? */
+-	if (!blocks)
++	if (!new_block_bytes)
+ 		goto out;
+ 
+ 	/* All further extents will be NOT_RECORDED_NOT_ALLOCATED */
+ 	last_ext->extLocation.logicalBlockNum = 0;
+ 	last_ext->extLocation.partitionReferenceNum = 0;
+-	add = (1 << (30-sb->s_blocksize_bits)) - 1;
+-	last_ext->extLength = EXT_NOT_RECORDED_NOT_ALLOCATED |
+-				(add << sb->s_blocksize_bits);
++	add = (1 << 30) - sb->s_blocksize;
++	last_ext->extLength = EXT_NOT_RECORDED_NOT_ALLOCATED | add;
+ 
+ 	/* Create enough extents to cover the whole hole */
+-	while (blocks > add) {
+-		blocks -= add;
++	while (new_block_bytes > add) {
++		new_block_bytes -= add;
+ 		err = udf_add_aext(inode, last_pos, &last_ext->extLocation,
+ 				   last_ext->extLength, 1);
+ 		if (err)
+ 			return err;
+ 		count++;
+ 	}
+-	if (blocks) {
++	if (new_block_bytes) {
+ 		last_ext->extLength = EXT_NOT_RECORDED_NOT_ALLOCATED |
+-			(blocks << sb->s_blocksize_bits);
++			new_block_bytes;
+ 		err = udf_add_aext(inode, last_pos, &last_ext->extLocation,
+ 				   last_ext->extLength, 1);
+ 		if (err)
+@@ -596,6 +596,24 @@ out:
+ 	return count;
+ }
+ 
++/* Extend the final block of the file to final_block_len bytes */
++static void udf_do_extend_final_block(struct inode *inode,
++				      struct extent_position *last_pos,
++				      struct kernel_long_ad *last_ext,
++				      uint32_t final_block_len)
++{
++	struct super_block *sb = inode->i_sb;
++	uint32_t added_bytes;
++
++	added_bytes = final_block_len -
++		      (last_ext->extLength & (sb->s_blocksize - 1));
++	last_ext->extLength += added_bytes;
++	UDF_I(inode)->i_lenExtents += added_bytes;
++
++	udf_write_aext(inode, last_pos, &last_ext->extLocation,
++			last_ext->extLength, 1);
++}
++
+ static int udf_extend_file(struct inode *inode, loff_t newsize)
+ {
+ 
+@@ -605,10 +623,12 @@ static int udf_extend_file(struct inode *inode, loff_t newsize)
+ 	int8_t etype;
+ 	struct super_block *sb = inode->i_sb;
+ 	sector_t first_block = newsize >> sb->s_blocksize_bits, offset;
++	unsigned long partial_final_block;
+ 	int adsize;
+ 	struct udf_inode_info *iinfo = UDF_I(inode);
+ 	struct kernel_long_ad extent;
+-	int err;
++	int err = 0;
++	int within_final_block;
+ 
+ 	if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_SHORT)
+ 		adsize = sizeof(struct short_ad);
+@@ -618,18 +638,8 @@ static int udf_extend_file(struct inode *inode, loff_t newsize)
+ 		BUG();
+ 
+ 	etype = inode_bmap(inode, first_block, &epos, &eloc, &elen, &offset);
++	within_final_block = (etype != -1);
+ 
+-	/* File has extent covering the new size (could happen when extending
+-	 * inside a block)? */
+-	if (etype != -1)
+-		return 0;
+-	if (newsize & (sb->s_blocksize - 1))
+-		offset++;
+-	/* Extended file just to the boundary of the last file block? */
+-	if (offset == 0)
+-		return 0;
+-
+-	/* Truncate is extending the file by 'offset' blocks */
+ 	if ((!epos.bh && epos.offset == udf_file_entry_alloc_offset(inode)) ||
+ 	    (epos.bh && epos.offset == sizeof(struct allocExtDesc))) {
+ 		/* File has no extents at all or has empty last
+@@ -643,7 +653,22 @@ static int udf_extend_file(struct inode *inode, loff_t newsize)
+ 				      &extent.extLength, 0);
+ 		extent.extLength |= etype << 30;
+ 	}
+-	err = udf_do_extend_file(inode, &epos, &extent, offset);
++
++	partial_final_block = newsize & (sb->s_blocksize - 1);
++
++	/* File has extent covering the new size (could happen when extending
++	 * inside a block)?
++	 */
++	if (within_final_block) {
++		/* Extending file within the last file block */
++		udf_do_extend_final_block(inode, &epos, &extent,
++					  partial_final_block);
++	} else {
++		loff_t add = ((loff_t)offset << sb->s_blocksize_bits) |
++			     partial_final_block;
++		err = udf_do_extend_file(inode, &epos, &extent, add);
++	}
++
+ 	if (err < 0)
+ 		goto out;
+ 	err = 0;
+@@ -745,6 +770,7 @@ static sector_t inode_getblk(struct inode *inode, sector_t block,
+ 	/* Are we beyond EOF? */
+ 	if (etype == -1) {
+ 		int ret;
++		loff_t hole_len;
+ 		isBeyondEOF = true;
+ 		if (count) {
+ 			if (c)
+@@ -760,7 +786,8 @@ static sector_t inode_getblk(struct inode *inode, sector_t block,
+ 			startnum = (offset > 0);
+ 		}
+ 		/* Create extents for the hole between EOF and offset */
+-		ret = udf_do_extend_file(inode, &prev_epos, laarr, offset);
++		hole_len = (loff_t)offset << inode->i_blkbits;
++		ret = udf_do_extend_file(inode, &prev_epos, laarr, hole_len);
+ 		if (ret < 0) {
+ 			*err = ret;
+ 			newblock = 0;
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index 178a3933a71b..50ced8aba9db 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -351,6 +351,8 @@ static inline void sk_psock_update_proto(struct sock *sk,
+ static inline void sk_psock_restore_proto(struct sock *sk,
+ 					  struct sk_psock *psock)
+ {
++	sk->sk_write_space = psock->saved_write_space;
++
+ 	if (psock->sk_proto) {
+ 		sk->sk_prot = psock->sk_proto;
+ 		psock->sk_proto = NULL;
+diff --git a/include/linux/vmw_vmci_defs.h b/include/linux/vmw_vmci_defs.h
+index eaa1e762bf06..6124b4cebb42 100644
+--- a/include/linux/vmw_vmci_defs.h
++++ b/include/linux/vmw_vmci_defs.h
+@@ -69,9 +69,18 @@ enum {
+ 
+ /*
+  * A single VMCI device has an upper limit of 128MB on the amount of
+- * memory that can be used for queue pairs.
++ * memory that can be used for queue pairs. Since each queue pair
++ * consists of at least two pages, the memory limit also dictates the
++ * number of queue pairs a guest can create.
+  */
+ #define VMCI_MAX_GUEST_QP_MEMORY (128 * 1024 * 1024)
++#define VMCI_MAX_GUEST_QP_COUNT  (VMCI_MAX_GUEST_QP_MEMORY / PAGE_SIZE / 2)
++
++/*
++ * There can be at most PAGE_SIZE doorbells since there is one doorbell
++ * per byte in the doorbell bitmap page.
++ */
++#define VMCI_MAX_GUEST_DOORBELL_COUNT PAGE_SIZE
+ 
+ /*
+  * Queues with pre-mapped data pages must be small, so that we don't pin
+diff --git a/include/net/ip6_tunnel.h b/include/net/ip6_tunnel.h
+index 69b4bcf880c9..028eaea1c854 100644
+--- a/include/net/ip6_tunnel.h
++++ b/include/net/ip6_tunnel.h
+@@ -158,9 +158,12 @@ static inline void ip6tunnel_xmit(struct sock *sk, struct sk_buff *skb,
+ 	memset(skb->cb, 0, sizeof(struct inet6_skb_parm));
+ 	pkt_len = skb->len - skb_inner_network_offset(skb);
+ 	err = ip6_local_out(dev_net(skb_dst(skb)->dev), sk, skb);
+-	if (unlikely(net_xmit_eval(err)))
+-		pkt_len = -1;
+-	iptunnel_xmit_stats(dev, pkt_len);
++
++	if (dev) {
++		if (unlikely(net_xmit_eval(err)))
++			pkt_len = -1;
++		iptunnel_xmit_stats(dev, pkt_len);
++	}
+ }
+ #endif
+ #endif
+diff --git a/include/uapi/linux/usb/audio.h b/include/uapi/linux/usb/audio.h
+index ddc5396800aa..76b7c3f6cd0d 100644
+--- a/include/uapi/linux/usb/audio.h
++++ b/include/uapi/linux/usb/audio.h
+@@ -450,6 +450,43 @@ static inline __u8 *uac_processing_unit_specific(struct uac_processing_unit_desc
+ 	}
+ }
+ 
++/*
++ * An Extension Unit (XU) has an almost compatible layout with a Processing
++ * Unit, but on UAC2 it has a different bmControls size (bControlSize): 1 byte
++ * for an XU versus 2 bytes for a PU.  The last iExtension field is a one-byte
++ * index, just like the iProcessing field of a PU.
++ */
++static inline __u8 uac_extension_unit_bControlSize(struct uac_processing_unit_descriptor *desc,
++						   int protocol)
++{
++	switch (protocol) {
++	case UAC_VERSION_1:
++		return desc->baSourceID[desc->bNrInPins + 4];
++	case UAC_VERSION_2:
++		return 1; /* in UAC2, this value is constant */
++	case UAC_VERSION_3:
++		return 4; /* in UAC3, this value is constant */
++	default:
++		return 1;
++	}
++}
++
++static inline __u8 uac_extension_unit_iExtension(struct uac_processing_unit_descriptor *desc,
++						 int protocol)
++{
++	__u8 control_size = uac_extension_unit_bControlSize(desc, protocol);
++
++	switch (protocol) {
++	case UAC_VERSION_1:
++	case UAC_VERSION_2:
++	default:
++		return *(uac_processing_unit_bmControls(desc, protocol)
++			 + control_size);
++	case UAC_VERSION_3:
++		return 0; /* UAC3 does not have this field */
++	}
++}
++
+ /* 4.5.2 Class-Specific AS Interface Descriptor */
+ struct uac1_as_header_descriptor {
+ 	__u8  bLength;			/* in bytes: 7 */
+diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
+index 1e525d70f833..1defea4b2755 100644
+--- a/kernel/bpf/devmap.c
++++ b/kernel/bpf/devmap.c
+@@ -186,6 +186,7 @@ static void dev_map_free(struct bpf_map *map)
+ 		if (!dev)
+ 			continue;
+ 
++		free_percpu(dev->bulkq);
+ 		dev_put(dev->dev);
+ 		kfree(dev);
+ 	}
+@@ -281,6 +282,7 @@ void __dev_map_flush(struct bpf_map *map)
+ 	unsigned long *bitmap = this_cpu_ptr(dtab->flush_needed);
+ 	u32 bit;
+ 
++	rcu_read_lock();
+ 	for_each_set_bit(bit, bitmap, map->max_entries) {
+ 		struct bpf_dtab_netdev *dev = READ_ONCE(dtab->netdev_map[bit]);
+ 		struct xdp_bulk_queue *bq;
+@@ -291,11 +293,12 @@ void __dev_map_flush(struct bpf_map *map)
+ 		if (unlikely(!dev))
+ 			continue;
+ 
+-		__clear_bit(bit, bitmap);
+-
+ 		bq = this_cpu_ptr(dev->bulkq);
+ 		bq_xmit_all(dev, bq, XDP_XMIT_FLUSH, true);
++
++		__clear_bit(bit, bitmap);
+ 	}
++	rcu_read_unlock();
+ }
+ 
+ /* rcu_read_lock (from syscall and BPF contexts) ensures that if a delete and/or
+@@ -388,6 +391,7 @@ static void dev_map_flush_old(struct bpf_dtab_netdev *dev)
+ 
+ 		int cpu;
+ 
++		rcu_read_lock();
+ 		for_each_online_cpu(cpu) {
+ 			bitmap = per_cpu_ptr(dev->dtab->flush_needed, cpu);
+ 			__clear_bit(dev->bit, bitmap);
+@@ -395,6 +399,7 @@ static void dev_map_flush_old(struct bpf_dtab_netdev *dev)
+ 			bq = per_cpu_ptr(dev->bulkq, cpu);
+ 			bq_xmit_all(dev, bq, XDP_XMIT_FLUSH, false);
+ 		}
++		rcu_read_unlock();
+ 	}
+ }
+ 
+diff --git a/net/can/af_can.c b/net/can/af_can.c
+index e386d654116d..04132b0b5d36 100644
+--- a/net/can/af_can.c
++++ b/net/can/af_can.c
+@@ -959,6 +959,8 @@ static struct pernet_operations can_pernet_ops __read_mostly = {
+ 
+ static __init int can_init(void)
+ {
++	int err;
++
+ 	/* check for correct padding to be able to use the structs similarly */
+ 	BUILD_BUG_ON(offsetof(struct can_frame, can_dlc) !=
+ 		     offsetof(struct canfd_frame, len) ||
+@@ -972,15 +974,31 @@ static __init int can_init(void)
+ 	if (!rcv_cache)
+ 		return -ENOMEM;
+ 
+-	register_pernet_subsys(&can_pernet_ops);
++	err = register_pernet_subsys(&can_pernet_ops);
++	if (err)
++		goto out_pernet;
+ 
+ 	/* protocol register */
+-	sock_register(&can_family_ops);
+-	register_netdevice_notifier(&can_netdev_notifier);
++	err = sock_register(&can_family_ops);
++	if (err)
++		goto out_sock;
++	err = register_netdevice_notifier(&can_netdev_notifier);
++	if (err)
++		goto out_notifier;
++
+ 	dev_add_pack(&can_packet);
+ 	dev_add_pack(&canfd_packet);
+ 
+ 	return 0;
++
++out_notifier:
++	sock_unregister(PF_CAN);
++out_sock:
++	unregister_pernet_subsys(&can_pernet_ops);
++out_pernet:
++	kmem_cache_destroy(rcv_cache);
++
++	return err;
+ }
+ 
+ static __exit void can_exit(void)
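+
+The af_can fix converts fire-and-forget registrations into the kernel's
+standard goto-unwind idiom: each step that can fail jumps to a label that
+tears down, in reverse order, everything that already succeeded. A minimal
+runnable sketch of the shape (heap allocations stand in for the pernet
+subsystem, socket family and notifier registrations):
+
+	#include <stdio.h>
+	#include <stdlib.h>
+
+	static void *cache, *proto;
+
+	static int can_init_sketch(void)
+	{
+		int err = -1;
+
+		cache = malloc(16);	/* stands in for rcv_cache */
+		if (!cache)
+			goto out;
+
+		proto = malloc(16);	/* stands in for sock_register() */
+		if (!proto)
+			goto out_cache;
+
+		/* simulate a later step failing, to exercise the unwind */
+		if (getenv("FAIL"))
+			goto out_proto;
+
+		return 0;
+
+	out_proto:		/* reverse order of acquisition */
+		free(proto);
+	out_cache:
+		free(cache);
+	out:
+		return err;
+	}
+
+	int main(void)
+	{
+		return can_init_sketch() ? 1 : 0;
+	}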
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index e5bfd42fd083..4ea96fbf3b49 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -2309,6 +2309,7 @@ do_frag_list:
+ 		kv.iov_base = skb->data + offset;
+ 		kv.iov_len = slen;
+ 		memset(&msg, 0, sizeof(msg));
++		msg.msg_flags = MSG_DONTWAIT;
+ 
+ 		ret = kernel_sendmsg_locked(sk, &msg, &kv, 1, slen);
+ 		if (ret <= 0)
+diff --git a/net/ipv6/netfilter/nf_conntrack_reasm.c b/net/ipv6/netfilter/nf_conntrack_reasm.c
+index 3de0e9b0a482..8951de8b568f 100644
+--- a/net/ipv6/netfilter/nf_conntrack_reasm.c
++++ b/net/ipv6/netfilter/nf_conntrack_reasm.c
+@@ -265,8 +265,14 @@ static int nf_ct_frag6_queue(struct frag_queue *fq, struct sk_buff *skb,
+ 
+ 	prev = fq->q.fragments_tail;
+ 	err = inet_frag_queue_insert(&fq->q, skb, offset, end);
+-	if (err)
++	if (err) {
++		if (err == IPFRAG_DUP) {
++			/* No error for duplicates, pretend they got queued. */
++			kfree_skb(skb);
++			return -EINPROGRESS;
++		}
+ 		goto insert_error;
++	}
+ 
+ 	if (dev)
+ 		fq->iif = dev->ifindex;
+@@ -293,15 +299,17 @@ static int nf_ct_frag6_queue(struct frag_queue *fq, struct sk_buff *skb,
+ 		skb->_skb_refdst = 0UL;
+ 		err = nf_ct_frag6_reasm(fq, skb, prev, dev);
+ 		skb->_skb_refdst = orefdst;
+-		return err;
++
++		/* After queue has assumed skb ownership, only 0 or
++		 * -EINPROGRESS must be returned.
++		 */
++		return err ? -EINPROGRESS : 0;
+ 	}
+ 
+ 	skb_dst_drop(skb);
+ 	return -EINPROGRESS;
+ 
+ insert_error:
+-	if (err == IPFRAG_DUP)
+-		goto err;
+ 	inet_frag_kill(&fq->q);
+ err:
+ 	skb_dst_drop(skb);
+@@ -480,12 +488,6 @@ int nf_ct_frag6_gather(struct net *net, struct sk_buff *skb, u32 user)
+ 		ret = 0;
+ 	}
+ 
+-	/* after queue has assumed skb ownership, only 0 or -EINPROGRESS
+-	 * must be returned.
+-	 */
+-	if (ret)
+-		ret = -EINPROGRESS;
+-
+ 	spin_unlock_bh(&fq->q.lock);
+ 	inet_frag_put(&fq->q);
+ 	return ret;
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index c875d45f1e1d..6708c1640207 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -1434,7 +1434,7 @@ ieee80211_get_sband(struct ieee80211_sub_if_data *sdata)
+ 	rcu_read_lock();
+ 	chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
+ 
+-	if (WARN_ON(!chanctx_conf)) {
++	if (WARN_ON_ONCE(!chanctx_conf)) {
+ 		rcu_read_unlock();
+ 		return NULL;
+ 	}
+@@ -2034,6 +2034,13 @@ void __ieee80211_flush_queues(struct ieee80211_local *local,
+ 
+ static inline bool ieee80211_can_run_worker(struct ieee80211_local *local)
+ {
++	/*
++	 * It's unsafe to try to do any work during the reconfigure flow.
++	 * When the flow ends, the work will be requeued.
++	 */
++	if (local->in_reconfig)
++		return false;
++
+ 	/*
+ 	 * If quiescing is set, we are racing with __ieee80211_suspend.
+ 	 * __ieee80211_suspend flushes the workers after setting quiescing,
+diff --git a/net/mac80211/mesh.c b/net/mac80211/mesh.c
+index 766e5e5bab8a..fe44f0d98de0 100644
+--- a/net/mac80211/mesh.c
++++ b/net/mac80211/mesh.c
+@@ -929,6 +929,7 @@ void ieee80211_stop_mesh(struct ieee80211_sub_if_data *sdata)
+ 
+ 	/* flush STAs and mpaths on this iface */
+ 	sta_info_flush(sdata);
++	ieee80211_free_keys(sdata, true);
+ 	mesh_path_flush_by_iface(sdata);
+ 
+ 	/* stop the beacon */
+@@ -1220,7 +1221,8 @@ int ieee80211_mesh_finish_csa(struct ieee80211_sub_if_data *sdata)
+ 	ifmsh->chsw_ttl = 0;
+ 
+ 	/* Remove the CSA and MCSP elements from the beacon */
+-	tmp_csa_settings = rcu_dereference(ifmsh->csa);
++	tmp_csa_settings = rcu_dereference_protected(ifmsh->csa,
++					    lockdep_is_held(&sdata->wdev.mtx));
+ 	RCU_INIT_POINTER(ifmsh->csa, NULL);
+ 	if (tmp_csa_settings)
+ 		kfree_rcu(tmp_csa_settings, rcu_head);
+@@ -1242,6 +1244,8 @@ int ieee80211_mesh_csa_beacon(struct ieee80211_sub_if_data *sdata,
+ 	struct mesh_csa_settings *tmp_csa_settings;
+ 	int ret = 0;
+ 
++	lockdep_assert_held(&sdata->wdev.mtx);
++
+ 	tmp_csa_settings = kmalloc(sizeof(*tmp_csa_settings),
+ 				   GFP_ATOMIC);
+ 	if (!tmp_csa_settings)
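
The ieee80211_mesh_finish_csa() hunk above reads ifmsh->csa on the update side, where the wdev mutex rather than rcu_read_lock() provides exclusion, which is exactly what rcu_dereference_protected() plus lockdep_is_held() documents and checks. A rough userspace analogue of that update-side publish/retire pattern, with illustrative names only (a pthread mutex stands in for the wdev mutex, free() for kfree_rcu()):

	#include <pthread.h>
	#include <stdio.h>
	#include <stdlib.h>

	static pthread_mutex_t cfg_lock = PTHREAD_MUTEX_INITIALIZER;
	static int *cfg;   /* stands in for ifmsh->csa */

	static void replace_cfg(int *newp)
	{
		int *old;

		pthread_mutex_lock(&cfg_lock);
		old = cfg;      /* update-side read: lock held, no RCU read lock */
		cfg = newp;     /* RCU_INIT_POINTER() in the kernel */
		pthread_mutex_unlock(&cfg_lock);
		free(old);      /* kfree_rcu() in the kernel; free(NULL) is a no-op */
	}

	int main(void)
	{
		replace_cfg(malloc(sizeof(int)));
		replace_cfg(NULL);
		puts("ok");
		return 0;
	}
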
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index 447a55ae9df1..3400e2da7297 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -2442,6 +2442,10 @@ int ieee80211_reconfig(struct ieee80211_local *local)
+ 		mutex_lock(&local->mtx);
+ 		ieee80211_start_next_roc(local);
+ 		mutex_unlock(&local->mtx);
++
++		/* Requeue all works */
++		list_for_each_entry(sdata, &local->interfaces, list)
++			ieee80211_queue_work(&local->hw, &sdata->work);
+ 	}
+ 
+ 	ieee80211_wake_queues_by_reason(hw, IEEE80211_MAX_QUEUE_MAP,
+diff --git a/net/wireless/pmsr.c b/net/wireless/pmsr.c
+index 5e2ab01d325c..d06d514f0bba 100644
+--- a/net/wireless/pmsr.c
++++ b/net/wireless/pmsr.c
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ /*
+- * Copyright (C) 2018 Intel Corporation
++ * Copyright (C) 2018 - 2019 Intel Corporation
+  */
+ #ifndef __PMSR_H
+ #define __PMSR_H
+@@ -446,7 +446,7 @@ static int nl80211_pmsr_send_result(struct sk_buff *msg,
+ 
+ 	if (res->ap_tsf_valid &&
+ 	    nla_put_u64_64bit(msg, NL80211_PMSR_RESP_ATTR_AP_TSF,
+-			      res->host_time, NL80211_PMSR_RESP_ATTR_PAD))
++			      res->ap_tsf, NL80211_PMSR_RESP_ATTR_PAD))
+ 		goto error;
+ 
+ 	if (res->final && nla_put_flag(msg, NL80211_PMSR_RESP_ATTR_FINAL))
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index 75899b62bdc9..5ac66a571e33 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -1237,7 +1237,7 @@ static u32 cfg80211_calculate_bitrate_he(struct rate_info *rate)
+ 	if (rate->he_dcm)
+ 		result /= 2;
+ 
+-	return result;
++	return result / 10000;
+ }
+ 
+ u32 cfg80211_calculate_bitrate(struct rate_info *rate)
+@@ -1989,7 +1989,7 @@ int ieee80211_get_vht_max_nss(struct ieee80211_vht_cap *cap,
+ 			continue;
+ 
+ 		if (supp >= mcs_encoding) {
+-			max_vht_nss = i;
++			max_vht_nss = i + 1;
+ 			break;
+ 		}
+ 	}
+diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
+index 989e52386c35..2f7e2c33a812 100644
+--- a/net/xdp/xdp_umem.c
++++ b/net/xdp/xdp_umem.c
+@@ -143,6 +143,9 @@ static void xdp_umem_clear_dev(struct xdp_umem *umem)
+ 	struct netdev_bpf bpf;
+ 	int err;
+ 
++	if (!umem->dev)
++		return;
++
+ 	if (umem->zc) {
+ 		bpf.command = XDP_SETUP_XSK_UMEM;
+ 		bpf.xsk.umem = NULL;
+@@ -156,11 +159,9 @@ static void xdp_umem_clear_dev(struct xdp_umem *umem)
+ 			WARN(1, "failed to disable umem!\n");
+ 	}
+ 
+-	if (umem->dev) {
+-		rtnl_lock();
+-		xdp_clear_umem_at_qid(umem->dev, umem->queue_id);
+-		rtnl_unlock();
+-	}
++	rtnl_lock();
++	xdp_clear_umem_at_qid(umem->dev, umem->queue_id);
++	rtnl_unlock();
+ 
+ 	if (umem->zc) {
+ 		dev_put(umem->dev);
+diff --git a/samples/bpf/bpf_load.c b/samples/bpf/bpf_load.c
+index eae7b635343d..6e87cc831e84 100644
+--- a/samples/bpf/bpf_load.c
++++ b/samples/bpf/bpf_load.c
+@@ -678,7 +678,7 @@ void read_trace_pipe(void)
+ 		static char buf[4096];
+ 		ssize_t sz;
+ 
+-		sz = read(trace_fd, buf, sizeof(buf));
++		sz = read(trace_fd, buf, sizeof(buf) - 1);
+ 		if (sz > 0) {
+ 			buf[sz] = 0;
+ 			puts(buf);
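
The bpf_load change above is the textbook read()-plus-NUL off-by-one: asking for sizeof(buf) bytes leaves no room for the terminator written by buf[sz] = 0, so a full read would store one byte past the array. A standalone illustration, reading stdin in place of the trace pipe fd:

	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		static char buf[4096];
		ssize_t sz;

		sz = read(STDIN_FILENO, buf, sizeof(buf) - 1);
		if (sz > 0) {
			buf[sz] = 0;   /* safe: sz <= sizeof(buf) - 1 */
			fputs(buf, stdout);
		}
		return 0;
	}
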
+diff --git a/samples/bpf/task_fd_query_user.c b/samples/bpf/task_fd_query_user.c
+index aff2b4ae914e..e39938058223 100644
+--- a/samples/bpf/task_fd_query_user.c
++++ b/samples/bpf/task_fd_query_user.c
+@@ -216,7 +216,7 @@ static int test_debug_fs_uprobe(char *binary_path, long offset, bool is_return)
+ {
+ 	const char *event_type = "uprobe";
+ 	struct perf_event_attr attr = {};
+-	char buf[256], event_alias[256];
++	char buf[256], event_alias[sizeof("test_1234567890")];
+ 	__u64 probe_offset, probe_addr;
+ 	__u32 len, prog_id, fd_type;
+ 	int err, res, kfd, efd;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index ee620f39dbe3..dde9a49ded78 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -3236,6 +3236,7 @@ static void alc256_init(struct hda_codec *codec)
+ 	alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */
+ 	alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 1 << 15); /* Clear bit */
+ 	alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 0 << 15);
++	alc_update_coef_idx(codec, 0x36, 1 << 13, 1 << 5); /* Switch pcbeep path to Line in path */
+ }
+ 
+ static void alc256_shutup(struct hda_codec *codec)
+@@ -7782,7 +7783,6 @@ static int patch_alc269(struct hda_codec *codec)
+ 		spec->shutup = alc256_shutup;
+ 		spec->init_hook = alc256_init;
+ 		spec->gen.mixer_nid = 0; /* ALC256 does not have any loopback mixer path */
+-		alc_update_coef_idx(codec, 0x36, 1 << 13, 1 << 5); /* Switch pcbeep path to Line in path*/
+ 		break;
+ 	case 0x10ec0257:
+ 		spec->codec_variant = ALC269_TYPE_ALC257;
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 53dccbfe392b..34fbecab2210 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -2318,7 +2318,7 @@ static struct procunit_info extunits[] = {
+  */
+ static int build_audio_procunit(struct mixer_build *state, int unitid,
+ 				void *raw_desc, struct procunit_info *list,
+-				char *name)
++				bool extension_unit)
+ {
+ 	struct uac_processing_unit_descriptor *desc = raw_desc;
+ 	int num_ins;
+@@ -2335,6 +2335,8 @@ static int build_audio_procunit(struct mixer_build *state, int unitid,
+ 	static struct procunit_info default_info = {
+ 		0, NULL, default_value_info
+ 	};
++	const char *name = extension_unit ?
++		"Extension Unit" : "Processing Unit";
+ 
+ 	if (desc->bLength < 13) {
+ 		usb_audio_err(state->chip, "invalid %s descriptor (id %d)\n", name, unitid);
+@@ -2448,7 +2450,10 @@ static int build_audio_procunit(struct mixer_build *state, int unitid,
+ 		} else if (info->name) {
+ 			strlcpy(kctl->id.name, info->name, sizeof(kctl->id.name));
+ 		} else {
+-			nameid = uac_processing_unit_iProcessing(desc, state->mixer->protocol);
++			if (extension_unit)
++				nameid = uac_extension_unit_iExtension(desc, state->mixer->protocol);
++			else
++				nameid = uac_processing_unit_iProcessing(desc, state->mixer->protocol);
+ 			len = 0;
+ 			if (nameid)
+ 				len = snd_usb_copy_string_desc(state->chip,
+@@ -2481,10 +2486,10 @@ static int parse_audio_processing_unit(struct mixer_build *state, int unitid,
+ 	case UAC_VERSION_2:
+ 	default:
+ 		return build_audio_procunit(state, unitid, raw_desc,
+-				procunits, "Processing Unit");
++					    procunits, false);
+ 	case UAC_VERSION_3:
+ 		return build_audio_procunit(state, unitid, raw_desc,
+-				uac3_procunits, "Processing Unit");
++					    uac3_procunits, false);
+ 	}
+ }
+ 
+@@ -2495,8 +2500,7 @@ static int parse_audio_extension_unit(struct mixer_build *state, int unitid,
+ 	 * Note that we parse extension units with processing unit descriptors.
+ 	 * That's ok as the layout is the same.
+ 	 */
+-	return build_audio_procunit(state, unitid, raw_desc,
+-				    extunits, "Extension Unit");
++	return build_audio_procunit(state, unitid, raw_desc, extunits, true);
+ }
+ 
+ /*
+diff --git a/tools/bpf/bpftool/map.c b/tools/bpf/bpftool/map.c
+index 994a7e0d16fb..14f581b562bd 100644
+--- a/tools/bpf/bpftool/map.c
++++ b/tools/bpf/bpftool/map.c
+@@ -713,12 +713,14 @@ static int dump_map_elem(int fd, void *key, void *value,
+ 		return 0;
+ 
+ 	if (json_output) {
++		jsonw_start_object(json_wtr);
+ 		jsonw_name(json_wtr, "key");
+ 		print_hex_data_json(key, map_info->key_size);
+ 		jsonw_name(json_wtr, "value");
+ 		jsonw_start_object(json_wtr);
+ 		jsonw_string_field(json_wtr, "error", strerror(lookup_errno));
+ 		jsonw_end_object(json_wtr);
++		jsonw_end_object(json_wtr);
+ 	} else {
+ 		if (errno == ENOENT)
+ 			print_entry_plain(map_info, key, NULL);
+diff --git a/tools/perf/Documentation/intel-pt.txt b/tools/perf/Documentation/intel-pt.txt
+index 115eaacc455f..60d99e5e7921 100644
+--- a/tools/perf/Documentation/intel-pt.txt
++++ b/tools/perf/Documentation/intel-pt.txt
+ smaller.
+ 
+ To represent software control flow, "branches" samples are produced.  By default
+ a branch sample is synthesized for every single branch.  To get an idea what
+-data is available you can use the 'perf script' tool with no parameters, which
+-will list all the samples.
++data is available you can use the 'perf script' tool with all itrace sampling
++options, which will list all the samples.
+ 
+ 	perf record -e intel_pt//u ls
+-	perf script
++	perf script --itrace=ibxwpe
+ 
+ An interesting field that is not printed by default is 'flags' which can be
+ displayed as follows:
+ 
+-	perf script -Fcomm,tid,pid,time,cpu,event,trace,ip,sym,dso,addr,symoff,flags
++	perf script --itrace=ibxwpe -F+flags
+ 
+ The flags are "bcrosyiABEx" which stand for branch, call, return, conditional,
+ system, asynchronous, interrupt, transaction abort, trace begin, trace end, and
+@@ -713,7 +713,7 @@ Having no option is the same as
+ 
+ which, in turn, is the same as
+ 
+-	--itrace=ibxwpe
++	--itrace=cepwx
+ 
+ The letters are:
+ 
+diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
+index fb76b6b232d4..5dd9d1893b89 100644
+--- a/tools/perf/util/auxtrace.c
++++ b/tools/perf/util/auxtrace.c
+@@ -1010,7 +1010,8 @@ int itrace_parse_synth_opts(const struct option *opt, const char *str,
+ 	}
+ 
+ 	if (!str) {
+-		itrace_synth_opts__set_default(synth_opts, false);
++		itrace_synth_opts__set_default(synth_opts,
++					       synth_opts->default_no_sample);
+ 		return 0;
+ 	}
+ 
+diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
+index 2d2af2ac2b1e..682e3d524d3c 100644
+--- a/tools/perf/util/header.c
++++ b/tools/perf/util/header.c
+@@ -3549,6 +3549,7 @@ int perf_event__synthesize_features(struct perf_tool *tool,
+ 		return -ENOMEM;
+ 
+ 	ff.size = sz - sz_hdr;
++	ff.ph = &session->header;
+ 
+ 	for_each_set_bit(feat, header->adds_features, HEADER_FEAT_BITS) {
+ 		if (!feat_ops[feat].synthesize) {
+diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
+index 6d288237887b..03b1da6d1da4 100644
+--- a/tools/perf/util/intel-pt.c
++++ b/tools/perf/util/intel-pt.c
+@@ -2588,7 +2588,8 @@ int intel_pt_process_auxtrace_info(union perf_event *event,
+ 	} else {
+ 		itrace_synth_opts__set_default(&pt->synth_opts,
+ 				session->itrace_synth_opts->default_no_sample);
+-		if (use_browser != -1) {
++		if (!session->itrace_synth_opts->default_no_sample &&
++		    !session->itrace_synth_opts->inject) {
+ 			pt->synth_opts.branches = false;
+ 			pt->synth_opts.callchain = true;
+ 		}
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index e0429f4ef335..faa8eb231e1b 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -709,9 +709,7 @@ static void pmu_add_cpu_aliases(struct list_head *head, struct perf_pmu *pmu)
+ {
+ 	int i;
+ 	struct pmu_events_map *map;
+-	struct pmu_event *pe;
+ 	const char *name = pmu->name;
+-	const char *pname;
+ 
+ 	map = perf_pmu__find_map(pmu);
+ 	if (!map)
+@@ -722,28 +720,26 @@ static void pmu_add_cpu_aliases(struct list_head *head, struct perf_pmu *pmu)
+ 	 */
+ 	i = 0;
+ 	while (1) {
++		const char *cpu_name = is_arm_pmu_core(name) ? name : "cpu";
++		struct pmu_event *pe = &map->table[i++];
++		const char *pname = pe->pmu ? pe->pmu : cpu_name;
+ 
+-		pe = &map->table[i++];
+ 		if (!pe->name) {
+ 			if (pe->metric_group || pe->metric_name)
+ 				continue;
+ 			break;
+ 		}
+ 
+-		if (!is_arm_pmu_core(name)) {
+-			pname = pe->pmu ? pe->pmu : "cpu";
+-
+-			/*
+-			 * uncore alias may be from different PMU
+-			 * with common prefix
+-			 */
+-			if (pmu_is_uncore(name) &&
+-			    !strncmp(pname, name, strlen(pname)))
+-				goto new_alias;
++		/*
++		 * uncore alias may be from different PMU
++		 * with common prefix
++		 */
++		if (pmu_is_uncore(name) &&
++		    !strncmp(pname, name, strlen(pname)))
++			goto new_alias;
+ 
+-			if (strcmp(pname, name))
+-				continue;
+-		}
++		if (strcmp(pname, name))
++			continue;
+ 
+ new_alias:
+ 		/* need type casts to override 'const' */
+diff --git a/tools/perf/util/thread-stack.c b/tools/perf/util/thread-stack.c
+index 41942c2aaa18..402d099eddaf 100644
+--- a/tools/perf/util/thread-stack.c
++++ b/tools/perf/util/thread-stack.c
+@@ -625,6 +625,23 @@ static int thread_stack__bottom(struct thread_stack *ts,
+ 				     true, false);
+ }
+ 
++static int thread_stack__pop_ks(struct thread *thread, struct thread_stack *ts,
++				struct perf_sample *sample, u64 ref)
++{
++	u64 tm = sample->time;
++	int err;
++
++	/* Return to userspace, so pop all kernel addresses */
++	while (thread_stack__in_kernel(ts)) {
++		err = thread_stack__call_return(thread, ts, --ts->cnt,
++						tm, ref, true);
++		if (err)
++			return err;
++	}
++
++	return 0;
++}
++
+ static int thread_stack__no_call_return(struct thread *thread,
+ 					struct thread_stack *ts,
+ 					struct perf_sample *sample,
+@@ -905,7 +922,18 @@ int thread_stack__process(struct thread *thread, struct comm *comm,
+ 			ts->rstate = X86_RETPOLINE_DETECTED;
+ 
+ 	} else if (sample->flags & PERF_IP_FLAG_RETURN) {
+-		if (!sample->ip || !sample->addr)
++		if (!sample->addr) {
++			u32 return_from_kernel = PERF_IP_FLAG_SYSCALLRET |
++						 PERF_IP_FLAG_INTERRUPT;
++
++			if (!(sample->flags & return_from_kernel))
++				return 0;
++
++			/* Pop kernel stack */
++			return thread_stack__pop_ks(thread, ts, sample, ref);
++		}
++
++		if (!sample->ip)
+ 			return 0;
+ 
+ 		/* x86 retpoline 'return' doesn't match the stack */
+diff --git a/tools/testing/selftests/bpf/verifier/div_overflow.c b/tools/testing/selftests/bpf/verifier/div_overflow.c
+index bd3f38dbe796..acab4f00819f 100644
+--- a/tools/testing/selftests/bpf/verifier/div_overflow.c
++++ b/tools/testing/selftests/bpf/verifier/div_overflow.c
+@@ -29,8 +29,11 @@
+ 	"DIV64 overflow, check 1",
+ 	.insns = {
+ 	BPF_MOV64_IMM(BPF_REG_1, -1),
+-	BPF_LD_IMM64(BPF_REG_0, LLONG_MIN),
+-	BPF_ALU64_REG(BPF_DIV, BPF_REG_0, BPF_REG_1),
++	BPF_LD_IMM64(BPF_REG_2, LLONG_MIN),
++	BPF_ALU64_REG(BPF_DIV, BPF_REG_2, BPF_REG_1),
++	BPF_MOV32_IMM(BPF_REG_0, 0),
++	BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_2, 1),
++	BPF_MOV32_IMM(BPF_REG_0, 1),
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+@@ -40,8 +43,11 @@
+ {
+ 	"DIV64 overflow, check 2",
+ 	.insns = {
+-	BPF_LD_IMM64(BPF_REG_0, LLONG_MIN),
+-	BPF_ALU64_IMM(BPF_DIV, BPF_REG_0, -1),
++	BPF_LD_IMM64(BPF_REG_1, LLONG_MIN),
++	BPF_ALU64_IMM(BPF_DIV, BPF_REG_1, -1),
++	BPF_MOV32_IMM(BPF_REG_0, 0),
++	BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_1, 1),
++	BPF_MOV32_IMM(BPF_REG_0, 1),
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
+index 7fc272ecae16..1b1c449ceaf4 100644
+--- a/virt/kvm/arm/arch_timer.c
++++ b/virt/kvm/arm/arch_timer.c
+@@ -321,14 +321,15 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
+ 	}
+ }
+ 
++/* Only called for a fully emulated timer */
+ static void timer_emulate(struct arch_timer_context *ctx)
+ {
+ 	bool should_fire = kvm_timer_should_fire(ctx);
+ 
+ 	trace_kvm_timer_emulate(ctx, should_fire);
+ 
+-	if (should_fire) {
+-		kvm_timer_update_irq(ctx->vcpu, true, ctx);
++	if (should_fire != ctx->irq.level) {
++		kvm_timer_update_irq(ctx->vcpu, should_fire, ctx);
+ 		return;
+ 	}
+ 
+diff --git a/virt/kvm/arm/vgic/vgic-its.c b/virt/kvm/arm/vgic/vgic-its.c
+index 44ceaccb18cf..8c9fe831bce4 100644
+--- a/virt/kvm/arm/vgic/vgic-its.c
++++ b/virt/kvm/arm/vgic/vgic-its.c
+@@ -1734,6 +1734,7 @@ static void vgic_its_destroy(struct kvm_device *kvm_dev)
+ 
+ 	mutex_unlock(&its->its_lock);
+ 	kfree(its);
++	kfree(kvm_dev); /* allocated by kvm_ioctl_create_device, freed by .destroy */
+ }
+ 
+ static int vgic_its_has_attr_regs(struct kvm_device *dev,



* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-07-21 14:42 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2019-07-21 14:42 UTC (permalink / raw
  To: gentoo-commits

commit:     07dd1095fd4c2871c25d22d002308ad564979b94
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jul 21 14:42:21 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jul 21 14:42:21 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=07dd1095

Linux patch 5.1.19

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1018_linux-5.1.19.patch | 2234 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2238 insertions(+)

diff --git a/0000_README b/0000_README
index 83bea0b..99c9da8 100644
--- a/0000_README
+++ b/0000_README
@@ -115,6 +115,10 @@ Patch:  1017_linux-5.1.18.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.1.18
 
+Patch:  1018_linux-5.1.19.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.1.19
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1018_linux-5.1.19.patch b/1018_linux-5.1.19.patch
new file mode 100644
index 0000000..a1293bd
--- /dev/null
+++ b/1018_linux-5.1.19.patch
@@ -0,0 +1,2234 @@
+diff --git a/Makefile b/Makefile
+index 01a0a61f86e7..432a62fec680 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 18
++SUBLEVEL = 19
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arc/kernel/unwind.c b/arch/arc/kernel/unwind.c
+index 271e9fafa479..4c221f0edaae 100644
+--- a/arch/arc/kernel/unwind.c
++++ b/arch/arc/kernel/unwind.c
+@@ -184,11 +184,6 @@ static void *__init unw_hdr_alloc_early(unsigned long sz)
+ 	return memblock_alloc_from(sz, sizeof(unsigned int), MAX_DMA_ADDRESS);
+ }
+ 
+-static void *unw_hdr_alloc(unsigned long sz)
+-{
+-	return kmalloc(sz, GFP_KERNEL);
+-}
+-
+ static void init_unwind_table(struct unwind_table *table, const char *name,
+ 			      const void *core_start, unsigned long core_size,
+ 			      const void *init_start, unsigned long init_size,
+@@ -369,6 +364,10 @@ ret_err:
+ }
+ 
+ #ifdef CONFIG_MODULES
++static void *unw_hdr_alloc(unsigned long sz)
++{
++	return kmalloc(sz, GFP_KERNEL);
++}
+ 
+ static struct unwind_table *last_table;
+ 
+diff --git a/arch/arm/boot/dts/gemini-dlink-dns-313.dts b/arch/arm/boot/dts/gemini-dlink-dns-313.dts
+index b12504e10f0b..360642a02a48 100644
+--- a/arch/arm/boot/dts/gemini-dlink-dns-313.dts
++++ b/arch/arm/boot/dts/gemini-dlink-dns-313.dts
+@@ -11,7 +11,7 @@
+ 
+ / {
+ 	model = "D-Link DNS-313 1-Bay Network Storage Enclosure";
+-	compatible = "dlink,dir-313", "cortina,gemini";
++	compatible = "dlink,dns-313", "cortina,gemini";
+ 	#address-cells = <1>;
+ 	#size-cells = <1>;
+ 
+diff --git a/arch/arm/boot/dts/imx6ul.dtsi b/arch/arm/boot/dts/imx6ul.dtsi
+index facd65602c2d..572c04296fe1 100644
+--- a/arch/arm/boot/dts/imx6ul.dtsi
++++ b/arch/arm/boot/dts/imx6ul.dtsi
+@@ -358,7 +358,7 @@
+ 			pwm1: pwm@2080000 {
+ 				compatible = "fsl,imx6ul-pwm", "fsl,imx27-pwm";
+ 				reg = <0x02080000 0x4000>;
+-				interrupts = <GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>;
++				interrupts = <GIC_SPI 83 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clks IMX6UL_CLK_PWM1>,
+ 					 <&clks IMX6UL_CLK_PWM1>;
+ 				clock-names = "ipg", "per";
+@@ -369,7 +369,7 @@
+ 			pwm2: pwm@2084000 {
+ 				compatible = "fsl,imx6ul-pwm", "fsl,imx27-pwm";
+ 				reg = <0x02084000 0x4000>;
+-				interrupts = <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>;
++				interrupts = <GIC_SPI 84 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clks IMX6UL_CLK_PWM2>,
+ 					 <&clks IMX6UL_CLK_PWM2>;
+ 				clock-names = "ipg", "per";
+@@ -380,7 +380,7 @@
+ 			pwm3: pwm@2088000 {
+ 				compatible = "fsl,imx6ul-pwm", "fsl,imx27-pwm";
+ 				reg = <0x02088000 0x4000>;
+-				interrupts = <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>;
++				interrupts = <GIC_SPI 85 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clks IMX6UL_CLK_PWM3>,
+ 					 <&clks IMX6UL_CLK_PWM3>;
+ 				clock-names = "ipg", "per";
+@@ -391,7 +391,7 @@
+ 			pwm4: pwm@208c000 {
+ 				compatible = "fsl,imx6ul-pwm", "fsl,imx27-pwm";
+ 				reg = <0x0208c000 0x4000>;
+-				interrupts = <GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>;
++				interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clks IMX6UL_CLK_PWM4>,
+ 					 <&clks IMX6UL_CLK_PWM4>;
+ 				clock-names = "ipg", "per";
+diff --git a/arch/arm/boot/dts/meson8.dtsi b/arch/arm/boot/dts/meson8.dtsi
+index a9781243453e..048b55c8dc1e 100644
+--- a/arch/arm/boot/dts/meson8.dtsi
++++ b/arch/arm/boot/dts/meson8.dtsi
+@@ -248,8 +248,8 @@
+ 				     <GIC_SPI 167 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 169 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 172 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 173 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 170 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 171 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 172 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 173 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 174 IRQ_TYPE_LEVEL_HIGH>,
+@@ -264,7 +264,6 @@
+ 			clocks = <&clkc CLKID_CLK81>, <&clkc CLKID_MALI>;
+ 			clock-names = "bus", "core";
+ 			operating-points-v2 = <&gpu_opp_table>;
+-			switch-delay = <0xffff>;
+ 		};
+ 	};
+ }; /* end of / */
+diff --git a/arch/arm/boot/dts/meson8b.dtsi b/arch/arm/boot/dts/meson8b.dtsi
+index fe84a8c3ce81..6b80aff32fc2 100644
+--- a/arch/arm/boot/dts/meson8b.dtsi
++++ b/arch/arm/boot/dts/meson8b.dtsi
+@@ -163,23 +163,23 @@
+ 
+ 		opp-255000000 {
+ 			opp-hz = /bits/ 64 <255000000>;
+-			opp-microvolt = <1150000>;
++			opp-microvolt = <1100000>;
+ 		};
+ 		opp-364300000 {
+ 			opp-hz = /bits/ 64 <364300000>;
+-			opp-microvolt = <1150000>;
++			opp-microvolt = <1100000>;
+ 		};
+ 		opp-425000000 {
+ 			opp-hz = /bits/ 64 <425000000>;
+-			opp-microvolt = <1150000>;
++			opp-microvolt = <1100000>;
+ 		};
+ 		opp-510000000 {
+ 			opp-hz = /bits/ 64 <510000000>;
+-			opp-microvolt = <1150000>;
++			opp-microvolt = <1100000>;
+ 		};
+ 		opp-637500000 {
+ 			opp-hz = /bits/ 64 <637500000>;
+-			opp-microvolt = <1150000>;
++			opp-microvolt = <1100000>;
+ 			turbo-mode;
+ 		};
+ 	};
+diff --git a/arch/arm/mach-omap2/prm3xxx.c b/arch/arm/mach-omap2/prm3xxx.c
+index 05858f966f7d..dfa65fc2c82b 100644
+--- a/arch/arm/mach-omap2/prm3xxx.c
++++ b/arch/arm/mach-omap2/prm3xxx.c
+@@ -433,7 +433,7 @@ static void omap3_prm_reconfigure_io_chain(void)
+  * registers, and omap3xxx_prm_reconfigure_io_chain() must be called.
+  * No return value.
+  */
+-static void __init omap3xxx_prm_enable_io_wakeup(void)
++static void omap3xxx_prm_enable_io_wakeup(void)
+ {
+ 	if (prm_features & PRM_HAS_IO_WAKEUP)
+ 		omap2_prm_set_mod_reg_bits(OMAP3430_EN_IO_MASK, WKUP_MOD,
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
+index 2896bbcfa3bb..228872549f01 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
+@@ -28,7 +28,7 @@
+ 			enable-method = "psci";
+ 			clocks = <&clockgen 1 0>;
+ 			next-level-cache = <&l2>;
+-			cpu-idle-states = <&CPU_PH20>;
++			cpu-idle-states = <&CPU_PW20>;
+ 		};
+ 
+ 		cpu1: cpu@1 {
+@@ -38,7 +38,7 @@
+ 			enable-method = "psci";
+ 			clocks = <&clockgen 1 0>;
+ 			next-level-cache = <&l2>;
+-			cpu-idle-states = <&CPU_PH20>;
++			cpu-idle-states = <&CPU_PW20>;
+ 		};
+ 
+ 		l2: l2-cache {
+@@ -53,13 +53,13 @@
+ 		 */
+ 		entry-method = "arm,psci";
+ 
+-		CPU_PH20: cpu-ph20 {
+-			compatible = "arm,idle-state";
+-			idle-state-name = "PH20";
+-			arm,psci-suspend-param = <0x00010000>;
+-			entry-latency-us = <1000>;
+-			exit-latency-us = <1000>;
+-			min-residency-us = <3000>;
++		CPU_PW20: cpu-pw20 {
++			  compatible = "arm,idle-state";
++			  idle-state-name = "PW20";
++			  arm,psci-suspend-param = <0x0>;
++			  entry-latency-us = <2000>;
++			  exit-latency-us = <2000>;
++			  min-residency-us = <6000>;
+ 		};
+ 	};
+ 
+diff --git a/arch/s390/include/asm/facility.h b/arch/s390/include/asm/facility.h
+index e78cda94456b..68c476b20b57 100644
+--- a/arch/s390/include/asm/facility.h
++++ b/arch/s390/include/asm/facility.h
+@@ -59,6 +59,18 @@ static inline int test_facility(unsigned long nr)
+ 	return __test_facility(nr, &S390_lowcore.stfle_fac_list);
+ }
+ 
++static inline unsigned long __stfle_asm(u64 *stfle_fac_list, int size)
++{
++	register unsigned long reg0 asm("0") = size - 1;
++
++	asm volatile(
++		".insn s,0xb2b00000,0(%1)" /* stfle */
++		: "+d" (reg0)
++		: "a" (stfle_fac_list)
++		: "memory", "cc");
++	return reg0;
++}
++
+ /**
+  * stfle - Store facility list extended
+  * @stfle_fac_list: array where facility list can be stored
+@@ -75,13 +87,8 @@ static inline void __stfle(u64 *stfle_fac_list, int size)
+ 	memcpy(stfle_fac_list, &S390_lowcore.stfl_fac_list, 4);
+ 	if (S390_lowcore.stfl_fac_list & 0x01000000) {
+ 		/* More facility bits available with stfle */
+-		register unsigned long reg0 asm("0") = size - 1;
+-
+-		asm volatile(".insn s,0xb2b00000,0(%1)" /* stfle */
+-			     : "+d" (reg0)
+-			     : "a" (stfle_fac_list)
+-			     : "memory", "cc");
+-		nr = (reg0 + 1) * 8; /* # bytes stored by stfle */
++		nr = __stfle_asm(stfle_fac_list, size);
++		nr = min_t(unsigned long, (nr + 1) * 8, size * 8);
+ 	}
+ 	memset((char *) stfle_fac_list + nr, 0, size * 8 - nr);
+ }
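
The __stfle() fix above clamps the byte count reported by the stfle instruction to the caller's buffer before it is used in the tail-zeroing arithmetic, since the machine may report more facility bytes than the buffer holds and the unclamped value would drive memset() with an underflowed length. A standalone sketch of the clamp; copy_list() and the plain memcpy() standing in for the instruction are illustrative:

	#include <stdio.h>
	#include <string.h>

	static size_t min_sz(size_t a, size_t b) { return a < b ? a : b; }

	static void copy_list(unsigned char *dst, size_t size,
			      const unsigned char *hw, size_t hw_len)
	{
		size_t nr = min_sz(hw_len, size);   /* mirrors the added min_t() */

		memcpy(dst, hw, nr);
		memset(dst + nr, 0, size - nr);     /* size - nr can no longer underflow */
	}

	int main(void)
	{
		unsigned char hw[16], buf[8];

		memset(hw, 0xff, sizeof(hw));       /* "hardware" reports 16 bytes */
		copy_list(buf, sizeof(buf), hw, sizeof(hw));
		printf("%02x %02x\n", buf[0], buf[7]);
		return 0;
	}
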
+diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
+index 5fc76b755510..f4afacfd40bb 100644
+--- a/arch/x86/entry/entry_32.S
++++ b/arch/x86/entry/entry_32.S
+@@ -1105,6 +1105,30 @@ ENTRY(irq_entries_start)
+     .endr
+ END(irq_entries_start)
+ 
++#ifdef CONFIG_X86_LOCAL_APIC
++	.align 8
++ENTRY(spurious_entries_start)
++    vector=FIRST_SYSTEM_VECTOR
++    .rept (NR_VECTORS - FIRST_SYSTEM_VECTOR)
++	pushl	$(~vector+0x80)			/* Note: always in signed byte range */
++    vector=vector+1
++	jmp	common_spurious
++	.align	8
++    .endr
++END(spurious_entries_start)
++
++common_spurious:
++	ASM_CLAC
++	addl	$-0x80, (%esp)			/* Adjust vector into the [-256, -1] range */
++	SAVE_ALL switch_stacks=1
++	ENCODE_FRAME_POINTER
++	TRACE_IRQS_OFF
++	movl	%esp, %eax
++	call	smp_spurious_interrupt
++	jmp	ret_from_intr
++ENDPROC(common_spurious)
++#endif
++
+ /*
+  * the CPU automatically disables interrupts when executing an IRQ vector,
+  * so IRQ-flags tracing has to follow that:
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index b1d59a7c556e..16a472ddbfe4 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -377,6 +377,18 @@ ENTRY(irq_entries_start)
+     .endr
+ END(irq_entries_start)
+ 
++	.align 8
++ENTRY(spurious_entries_start)
++    vector=FIRST_SYSTEM_VECTOR
++    .rept (NR_VECTORS - FIRST_SYSTEM_VECTOR)
++	UNWIND_HINT_IRET_REGS
++	pushq	$(~vector+0x80)			/* Note: always in signed byte range */
++	jmp	common_spurious
++	.align	8
++	vector=vector+1
++    .endr
++END(spurious_entries_start)
++
+ .macro DEBUG_ENTRY_ASSERT_IRQS_OFF
+ #ifdef CONFIG_DEBUG_ENTRY
+ 	pushq %rax
+@@ -573,10 +585,20 @@ _ASM_NOKPROBE(interrupt_entry)
+ 
+ /* Interrupt entry/exit. */
+ 
+-	/*
+-	 * The interrupt stubs push (~vector+0x80) onto the stack and
+-	 * then jump to common_interrupt.
+-	 */
++/*
++ * The interrupt stubs push (~vector+0x80) onto the stack and
++ * then jump to common_spurious/interrupt.
++ */
++common_spurious:
++	addq	$-0x80, (%rsp)			/* Adjust vector to [-256, -1] range */
++	call	interrupt_entry
++	UNWIND_HINT_REGS indirect=1
++	call	smp_spurious_interrupt		/* rdi points to pt_regs */
++	jmp	ret_from_intr
++END(common_spurious)
++_ASM_NOKPROBE(common_spurious)
++
++/* common_interrupt is a hotpath. Align it */
+ 	.p2align CONFIG_X86_L1_CACHE_SHIFT
+ common_interrupt:
+ 	addq	$-0x80, (%rsp)			/* Adjust vector to [-256, -1] range */
+diff --git a/arch/x86/include/asm/hw_irq.h b/arch/x86/include/asm/hw_irq.h
+index 32e666e1231e..cbd97e22d2f3 100644
+--- a/arch/x86/include/asm/hw_irq.h
++++ b/arch/x86/include/asm/hw_irq.h
+@@ -150,8 +150,11 @@ extern char irq_entries_start[];
+ #define trace_irq_entries_start irq_entries_start
+ #endif
+ 
++extern char spurious_entries_start[];
++
+ #define VECTOR_UNUSED		NULL
+-#define VECTOR_RETRIGGERED	((void *)~0UL)
++#define VECTOR_SHUTDOWN		((void *)~0UL)
++#define VECTOR_RETRIGGERED	((void *)~1UL)
+ 
+ typedef struct irq_desc* vector_irq_t[NR_VECTORS];
+ DECLARE_PER_CPU(vector_irq_t, vector_irq);
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index b7bcdd781651..2b8a57ae57f6 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -1458,7 +1458,8 @@ static void apic_pending_intr_clear(void)
+ 		if (queued) {
+ 			if (boot_cpu_has(X86_FEATURE_TSC) && cpu_khz) {
+ 				ntsc = rdtsc();
+-				max_loops = (cpu_khz << 10) - (ntsc - tsc);
++				max_loops = (long long)cpu_khz << 10;
++				max_loops -= ntsc - tsc;
+ 			} else {
+ 				max_loops--;
+ 			}
+@@ -2034,21 +2035,32 @@ __visible void __irq_entry smp_spurious_interrupt(struct pt_regs *regs)
+ 	entering_irq();
+ 	trace_spurious_apic_entry(vector);
+ 
++	inc_irq_stat(irq_spurious_count);
++
++	/*
++	 * If this is a spurious interrupt then do not acknowledge
++	 */
++	if (vector == SPURIOUS_APIC_VECTOR) {
++		/* See SDM vol 3 */
++		pr_info("Spurious APIC interrupt (vector 0xFF) on CPU#%d, should never happen.\n",
++			smp_processor_id());
++		goto out;
++	}
++
+ 	/*
+-	 * Check if this really is a spurious interrupt and ACK it
+-	 * if it is a vectored one.  Just in case...
+-	 * Spurious interrupts should not be ACKed.
++	 * If it is a vectored one, verify it's set in the ISR. If set,
++	 * acknowledge it.
+ 	 */
+ 	v = apic_read(APIC_ISR + ((vector & ~0x1f) >> 1));
+-	if (v & (1 << (vector & 0x1f)))
++	if (v & (1 << (vector & 0x1f))) {
++		pr_info("Spurious interrupt (vector 0x%02x) on CPU#%d. Acked\n",
++			vector, smp_processor_id());
+ 		ack_APIC_irq();
+-
+-	inc_irq_stat(irq_spurious_count);
+-
+-	/* see sw-dev-man vol 3, chapter 7.4.13.5 */
+-	pr_info("spurious APIC interrupt through vector %02x on CPU#%d, "
+-		"should never happen.\n", vector, smp_processor_id());
+-
++	} else {
++		pr_info("Spurious interrupt (vector 0x%02x) on CPU#%d. Not pending!\n",
++			vector, smp_processor_id());
++	}
++out:
+ 	trace_spurious_apic_exit(vector);
+ 	exiting_irq();
+ }
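
The max_loops hunk above splits the expression because cpu_khz << 10 is evaluated in the width of cpu_khz (unsigned int) and wraps for clock rates above roughly 4.1 GHz; the widening cast has to happen before the shift, not after. A standalone demonstration of the difference:

	#include <stdio.h>

	int main(void)
	{
		unsigned int cpu_khz = 4200000;              /* a 4.2 GHz clock */
		long long wrong = cpu_khz << 10;             /* shifted in 32 bits, wraps */
		long long right = (long long)cpu_khz << 10;  /* widened first */

		printf("wrong: %lld\nright: %lld\n", wrong, right);
		return 0;
	}
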
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 53aa234a6803..c9fec0657eea 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -1893,6 +1893,50 @@ static int ioapic_set_affinity(struct irq_data *irq_data,
+ 	return ret;
+ }
+ 
++/*
++ * Interrupt shutdown masks the ioapic pin, but the interrupt might already
++ * be in flight, though not yet serviced by the target CPU. That means
++ * __synchronize_hardirq() would return and claim that everything is calmed
++ * down. So free_irq() would proceed and deactivate the interrupt and free
++ * resources.
++ *
++ * Once the target CPU comes around to service it, it will find a cleared
++ * vector and complain. While the spurious interrupt is harmless, the full
++ * release of resources might prevent the interrupt from being acknowledged,
++ * which keeps the hardware in a weird state.
++ *
++ * Verify that the corresponding Remote-IRR bits are clear.
++ */
++static int ioapic_irq_get_chip_state(struct irq_data *irqd,
++				   enum irqchip_irq_state which,
++				   bool *state)
++{
++	struct mp_chip_data *mcd = irqd->chip_data;
++	struct IO_APIC_route_entry rentry;
++	struct irq_pin_list *p;
++
++	if (which != IRQCHIP_STATE_ACTIVE)
++		return -EINVAL;
++
++	*state = false;
++	raw_spin_lock(&ioapic_lock);
++	for_each_irq_pin(p, mcd->irq_2_pin) {
++		rentry = __ioapic_read_entry(p->apic, p->pin);
++		/*
++		 * The remote IRR is only valid in level trigger mode. Its
++		 * meaning is undefined for edge triggered interrupts and
++		 * irrelevant because the IO-APIC treats them as fire and
++		 * forget.
++		 */
++		if (rentry.irr && rentry.trigger) {
++			*state = true;
++			break;
++		}
++	}
++	raw_spin_unlock(&ioapic_lock);
++	return 0;
++}
++
+ static struct irq_chip ioapic_chip __read_mostly = {
+ 	.name			= "IO-APIC",
+ 	.irq_startup		= startup_ioapic_irq,
+@@ -1902,6 +1946,7 @@ static struct irq_chip ioapic_chip __read_mostly = {
+ 	.irq_eoi		= ioapic_ack_level,
+ 	.irq_set_affinity	= ioapic_set_affinity,
+ 	.irq_retrigger		= irq_chip_retrigger_hierarchy,
++	.irq_get_irqchip_state	= ioapic_irq_get_chip_state,
+ 	.flags			= IRQCHIP_SKIP_SET_WAKE,
+ };
+ 
+@@ -1914,6 +1959,7 @@ static struct irq_chip ioapic_ir_chip __read_mostly = {
+ 	.irq_eoi		= ioapic_ir_ack_level,
+ 	.irq_set_affinity	= ioapic_set_affinity,
+ 	.irq_retrigger		= irq_chip_retrigger_hierarchy,
++	.irq_get_irqchip_state	= ioapic_irq_get_chip_state,
+ 	.flags			= IRQCHIP_SKIP_SET_WAKE,
+ };
+ 
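
The new irq_get_irqchip_state callback above gives teardown a way to ask the hardware itself (the Remote-IRR bit) whether a level-triggered interrupt is still in service, instead of trusting software bookkeeping alone. A rough userspace analogue of that poll-the-authoritative-state-before-freeing pattern; the atomic flag and helper names are illustrative only:

	#include <sched.h>
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	static atomic_bool in_flight;   /* stands in for the Remote-IRR bit */

	static int get_state(bool *state)
	{
		*state = atomic_load(&in_flight);
		return 0;   /* 0 on success, like the kernel callback */
	}

	static void teardown(void)
	{
		bool busy;

		/* wait until nothing is in service before releasing resources */
		while (get_state(&busy) == 0 && busy)
			sched_yield();
		puts("resources released");
	}

	int main(void)
	{
		atomic_store(&in_flight, false);
		teardown();
		return 0;
	}
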
+diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
+index 3173e07d3791..1c6d1d5f28d3 100644
+--- a/arch/x86/kernel/apic/vector.c
++++ b/arch/x86/kernel/apic/vector.c
+@@ -343,7 +343,7 @@ static void clear_irq_vector(struct irq_data *irqd)
+ 	trace_vector_clear(irqd->irq, vector, apicd->cpu, apicd->prev_vector,
+ 			   apicd->prev_cpu);
+ 
+-	per_cpu(vector_irq, apicd->cpu)[vector] = VECTOR_UNUSED;
++	per_cpu(vector_irq, apicd->cpu)[vector] = VECTOR_SHUTDOWN;
+ 	irq_matrix_free(vector_matrix, apicd->cpu, vector, managed);
+ 	apicd->vector = 0;
+ 
+@@ -352,7 +352,7 @@ static void clear_irq_vector(struct irq_data *irqd)
+ 	if (!vector)
+ 		return;
+ 
+-	per_cpu(vector_irq, apicd->prev_cpu)[vector] = VECTOR_UNUSED;
++	per_cpu(vector_irq, apicd->prev_cpu)[vector] = VECTOR_SHUTDOWN;
+ 	irq_matrix_free(vector_matrix, apicd->prev_cpu, vector, managed);
+ 	apicd->prev_vector = 0;
+ 	apicd->move_in_progress = 0;
+diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
+index 16b1cbd3a61e..29ffa495bd1c 100644
+--- a/arch/x86/kernel/head64.c
++++ b/arch/x86/kernel/head64.c
+@@ -184,24 +184,25 @@ unsigned long __head __startup_64(unsigned long physaddr,
+ 	pgtable_flags = _KERNPG_TABLE_NOENC + sme_get_me_mask();
+ 
+ 	if (la57) {
+-		p4d = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr);
++		p4d = fixup_pointer(early_dynamic_pgts[(*next_pgt_ptr)++],
++				    physaddr);
+ 
+ 		i = (physaddr >> PGDIR_SHIFT) % PTRS_PER_PGD;
+ 		pgd[i + 0] = (pgdval_t)p4d + pgtable_flags;
+ 		pgd[i + 1] = (pgdval_t)p4d + pgtable_flags;
+ 
+-		i = (physaddr >> P4D_SHIFT) % PTRS_PER_P4D;
+-		p4d[i + 0] = (pgdval_t)pud + pgtable_flags;
+-		p4d[i + 1] = (pgdval_t)pud + pgtable_flags;
++		i = physaddr >> P4D_SHIFT;
++		p4d[(i + 0) % PTRS_PER_P4D] = (pgdval_t)pud + pgtable_flags;
++		p4d[(i + 1) % PTRS_PER_P4D] = (pgdval_t)pud + pgtable_flags;
+ 	} else {
+ 		i = (physaddr >> PGDIR_SHIFT) % PTRS_PER_PGD;
+ 		pgd[i + 0] = (pgdval_t)pud + pgtable_flags;
+ 		pgd[i + 1] = (pgdval_t)pud + pgtable_flags;
+ 	}
+ 
+-	i = (physaddr >> PUD_SHIFT) % PTRS_PER_PUD;
+-	pud[i + 0] = (pudval_t)pmd + pgtable_flags;
+-	pud[i + 1] = (pudval_t)pmd + pgtable_flags;
++	i = physaddr >> PUD_SHIFT;
++	pud[(i + 0) % PTRS_PER_PUD] = (pudval_t)pmd + pgtable_flags;
++	pud[(i + 1) % PTRS_PER_PUD] = (pudval_t)pmd + pgtable_flags;
+ 
+ 	pmd_entry = __PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL;
+ 	/* Filter out unsupported __PAGE_KERNEL_* bits: */
+@@ -211,8 +212,9 @@ unsigned long __head __startup_64(unsigned long physaddr,
+ 	pmd_entry +=  physaddr;
+ 
+ 	for (i = 0; i < DIV_ROUND_UP(_end - _text, PMD_SIZE); i++) {
+-		int idx = i + (physaddr >> PMD_SHIFT) % PTRS_PER_PMD;
+-		pmd[idx] = pmd_entry + i * PMD_SIZE;
++		int idx = i + (physaddr >> PMD_SHIFT);
++
++		pmd[idx % PTRS_PER_PMD] = pmd_entry + i * PMD_SIZE;
+ 	}
+ 
+ 	/*
+diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c
+index 01adea278a71..a7e0e975043f 100644
+--- a/arch/x86/kernel/idt.c
++++ b/arch/x86/kernel/idt.c
+@@ -321,7 +321,8 @@ void __init idt_setup_apic_and_irq_gates(void)
+ #ifdef CONFIG_X86_LOCAL_APIC
+ 	for_each_clear_bit_from(i, system_vectors, NR_VECTORS) {
+ 		set_bit(i, system_vectors);
+-		set_intr_gate(i, spurious_interrupt);
++		entry = spurious_entries_start + 8 * (i - FIRST_SYSTEM_VECTOR);
++		set_intr_gate(i, entry);
+ 	}
+ #endif
+ }
+diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
+index 59b5f2ea7c2f..a975246074b5 100644
+--- a/arch/x86/kernel/irq.c
++++ b/arch/x86/kernel/irq.c
+@@ -246,7 +246,7 @@ __visible unsigned int __irq_entry do_IRQ(struct pt_regs *regs)
+ 	if (!handle_irq(desc, regs)) {
+ 		ack_APIC_irq();
+ 
+-		if (desc != VECTOR_RETRIGGERED) {
++		if (desc != VECTOR_RETRIGGERED && desc != VECTOR_SHUTDOWN) {
+ 			pr_emerg_ratelimited("%s: %d.%d No irq handler for vector\n",
+ 					     __func__, smp_processor_id(),
+ 					     vector);
+diff --git a/arch/x86/platform/efi/quirks.c b/arch/x86/platform/efi/quirks.c
+index a25a9fd987a9..529522c62d89 100644
+--- a/arch/x86/platform/efi/quirks.c
++++ b/arch/x86/platform/efi/quirks.c
+@@ -724,7 +724,7 @@ void efi_recover_from_page_fault(unsigned long phys_addr)
+ 	 * Address range 0x0000 - 0x0fff is always mapped in the efi_pgd, so
+ 	 * page faulting on these addresses isn't expected.
+ 	 */
+-	if (phys_addr >= 0x0000 && phys_addr <= 0x0fff)
++	if (phys_addr <= 0x0fff)
+ 		return;
+ 
+ 	/*
+diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
+index a7359535caf5..b444f89a2041 100644
+--- a/drivers/base/cacheinfo.c
++++ b/drivers/base/cacheinfo.c
+@@ -655,7 +655,8 @@ static int cacheinfo_cpu_pre_down(unsigned int cpu)
+ 
+ static int __init cacheinfo_sysfs_init(void)
+ {
+-	return cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "base/cacheinfo:online",
++	return cpuhp_setup_state(CPUHP_AP_BASE_CACHEINFO_ONLINE,
++				 "base/cacheinfo:online",
+ 				 cacheinfo_cpu_online, cacheinfo_cpu_pre_down);
+ }
+ device_initcall(cacheinfo_sysfs_init);
+diff --git a/drivers/base/firmware_loader/fallback.c b/drivers/base/firmware_loader/fallback.c
+index b5c865fe263b..818d8c37d70a 100644
+--- a/drivers/base/firmware_loader/fallback.c
++++ b/drivers/base/firmware_loader/fallback.c
+@@ -659,7 +659,7 @@ static bool fw_run_sysfs_fallback(enum fw_opt opt_flags)
+ 	/* Also permit LSMs and IMA to fail firmware sysfs fallback */
+ 	ret = security_kernel_load_data(LOADING_FIRMWARE);
+ 	if (ret < 0)
+-		return ret;
++		return false;
+ 
+ 	return fw_force_sysfs_fallback(opt_flags);
+ }
+diff --git a/drivers/clk/ti/clkctrl.c b/drivers/clk/ti/clkctrl.c
+index 3325ee43bcc1..626090b59cd7 100644
+--- a/drivers/clk/ti/clkctrl.c
++++ b/drivers/clk/ti/clkctrl.c
+@@ -229,6 +229,7 @@ static struct clk_hw *_ti_omap4_clkctrl_xlate(struct of_phandle_args *clkspec,
+ {
+ 	struct omap_clkctrl_provider *provider = data;
+ 	struct omap_clkctrl_clk *entry;
++	bool found = false;
+ 
+ 	if (clkspec->args_count != 2)
+ 		return ERR_PTR(-EINVAL);
+@@ -238,11 +239,13 @@ static struct clk_hw *_ti_omap4_clkctrl_xlate(struct of_phandle_args *clkspec,
+ 
+ 	list_for_each_entry(entry, &provider->clocks, node) {
+ 		if (entry->reg_offset == clkspec->args[0] &&
+-		    entry->bit_offset == clkspec->args[1])
++		    entry->bit_offset == clkspec->args[1]) {
++			found = true;
+ 			break;
++		}
+ 	}
+ 
+-	if (!entry)
++	if (!found)
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	return entry->clk;
+diff --git a/drivers/crypto/nx/nx-842-powernv.c b/drivers/crypto/nx/nx-842-powernv.c
+index c68df7e8bee1..7ce2467c771e 100644
+--- a/drivers/crypto/nx/nx-842-powernv.c
++++ b/drivers/crypto/nx/nx-842-powernv.c
+@@ -36,8 +36,6 @@ MODULE_ALIAS_CRYPTO("842-nx");
+ #define WORKMEM_ALIGN	(CRB_ALIGN)
+ #define CSB_WAIT_MAX	(5000) /* ms */
+ #define VAS_RETRIES	(10)
+-/* # of requests allowed per RxFIFO at a time. 0 for unlimited */
+-#define MAX_CREDITS_PER_RXFIFO	(1024)
+ 
+ struct nx842_workmem {
+ 	/* Below fields must be properly aligned */
+@@ -821,7 +819,11 @@ static int __init vas_cfg_coproc_info(struct device_node *dn, int chip_id,
+ 	rxattr.lnotify_lpid = lpid;
+ 	rxattr.lnotify_pid = pid;
+ 	rxattr.lnotify_tid = tid;
+-	rxattr.wcreds_max = MAX_CREDITS_PER_RXFIFO;
++	/*
++	 * Maximum RX window credits cannot exceed the number of CRBs in
++	 * the RxFIFO; otherwise we can get a checkstop if the RxFIFO overruns.
++	 */
++	rxattr.wcreds_max = fifo_size / CRB_SIZE;
+ 
+ 	/*
+ 	 * Open a VAS receice window which is used to configure RxFIFO
+diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
+index 0fee83b2eb91..becc654e0cd3 100644
+--- a/drivers/crypto/talitos.c
++++ b/drivers/crypto/talitos.c
+@@ -334,6 +334,21 @@ int talitos_submit(struct device *dev, int ch, struct talitos_desc *desc,
+ }
+ EXPORT_SYMBOL(talitos_submit);
+ 
++static __be32 get_request_hdr(struct talitos_request *request, bool is_sec1)
++{
++	struct talitos_edesc *edesc;
++
++	if (!is_sec1)
++		return request->desc->hdr;
++
++	if (!request->desc->next_desc)
++		return request->desc->hdr1;
++
++	edesc = container_of(request->desc, struct talitos_edesc, desc);
++
++	return ((struct talitos_desc *)(edesc->buf + edesc->dma_len))->hdr1;
++}
++
+ /*
+  * process what was done, notify callback of error if not
+  */
+@@ -355,12 +370,7 @@ static void flush_channel(struct device *dev, int ch, int error, int reset_ch)
+ 
+ 		/* descriptors with their done bits set don't get the error */
+ 		rmb();
+-		if (!is_sec1)
+-			hdr = request->desc->hdr;
+-		else if (request->desc->next_desc)
+-			hdr = (request->desc + 1)->hdr1;
+-		else
+-			hdr = request->desc->hdr1;
++		hdr = get_request_hdr(request, is_sec1);
+ 
+ 		if ((hdr & DESC_HDR_DONE) == DESC_HDR_DONE)
+ 			status = 0;
+@@ -490,8 +500,14 @@ static u32 current_desc_hdr(struct device *dev, int ch)
+ 		}
+ 	}
+ 
+-	if (priv->chan[ch].fifo[iter].desc->next_desc == cur_desc)
+-		return (priv->chan[ch].fifo[iter].desc + 1)->hdr;
++	if (priv->chan[ch].fifo[iter].desc->next_desc == cur_desc) {
++		struct talitos_edesc *edesc;
++
++		edesc = container_of(priv->chan[ch].fifo[iter].desc,
++				     struct talitos_edesc, desc);
++		return ((struct talitos_desc *)
++			(edesc->buf + edesc->dma_len))->hdr;
++	}
+ 
+ 	return priv->chan[ch].fifo[iter].desc->hdr;
+ }
+@@ -913,36 +929,6 @@ badkey:
+ 	return -EINVAL;
+ }
+ 
+-/*
+- * talitos_edesc - s/w-extended descriptor
+- * @src_nents: number of segments in input scatterlist
+- * @dst_nents: number of segments in output scatterlist
+- * @icv_ool: whether ICV is out-of-line
+- * @iv_dma: dma address of iv for checking continuity and link table
+- * @dma_len: length of dma mapped link_tbl space
+- * @dma_link_tbl: bus physical address of link_tbl/buf
+- * @desc: h/w descriptor
+- * @link_tbl: input and output h/w link tables (if {src,dst}_nents > 1) (SEC2)
+- * @buf: input and output buffeur (if {src,dst}_nents > 1) (SEC1)
+- *
+- * if decrypting (with authcheck), or either one of src_nents or dst_nents
+- * is greater than 1, an integrity check value is concatenated to the end
+- * of link_tbl data
+- */
+-struct talitos_edesc {
+-	int src_nents;
+-	int dst_nents;
+-	bool icv_ool;
+-	dma_addr_t iv_dma;
+-	int dma_len;
+-	dma_addr_t dma_link_tbl;
+-	struct talitos_desc desc;
+-	union {
+-		struct talitos_ptr link_tbl[0];
+-		u8 buf[0];
+-	};
+-};
+-
+ static void talitos_sg_unmap(struct device *dev,
+ 			     struct talitos_edesc *edesc,
+ 			     struct scatterlist *src,
+@@ -1431,15 +1417,11 @@ static struct talitos_edesc *talitos_edesc_alloc(struct device *dev,
+ 	edesc->dst_nents = dst_nents;
+ 	edesc->iv_dma = iv_dma;
+ 	edesc->dma_len = dma_len;
+-	if (dma_len) {
+-		void *addr = &edesc->link_tbl[0];
+-
+-		if (is_sec1 && !dst)
+-			addr += sizeof(struct talitos_desc);
+-		edesc->dma_link_tbl = dma_map_single(dev, addr,
++	if (dma_len)
++		edesc->dma_link_tbl = dma_map_single(dev, &edesc->link_tbl[0],
+ 						     edesc->dma_len,
+ 						     DMA_BIDIRECTIONAL);
+-	}
++
+ 	return edesc;
+ }
+ 
+@@ -1706,14 +1688,16 @@ static void common_nonsnoop_hash_unmap(struct device *dev,
+ 	struct talitos_private *priv = dev_get_drvdata(dev);
+ 	bool is_sec1 = has_ftr_sec1(priv);
+ 	struct talitos_desc *desc = &edesc->desc;
+-	struct talitos_desc *desc2 = desc + 1;
++	struct talitos_desc *desc2 = (struct talitos_desc *)
++				     (edesc->buf + edesc->dma_len);
+ 
+ 	unmap_single_talitos_ptr(dev, &edesc->desc.ptr[5], DMA_FROM_DEVICE);
+ 	if (desc->next_desc &&
+ 	    desc->ptr[5].ptr != desc2->ptr[5].ptr)
+ 		unmap_single_talitos_ptr(dev, &desc2->ptr[5], DMA_FROM_DEVICE);
+ 
+-	talitos_sg_unmap(dev, edesc, req_ctx->psrc, NULL, 0, 0);
++	if (req_ctx->psrc)
++		talitos_sg_unmap(dev, edesc, req_ctx->psrc, NULL, 0, 0);
+ 
+ 	/* When using hashctx-in, must unmap it. */
+ 	if (from_talitos_ptr_len(&edesc->desc.ptr[1], is_sec1))
+@@ -1780,7 +1764,6 @@ static void talitos_handle_buggy_hash(struct talitos_ctx *ctx,
+ 
+ static int common_nonsnoop_hash(struct talitos_edesc *edesc,
+ 				struct ahash_request *areq, unsigned int length,
+-				unsigned int offset,
+ 				void (*callback) (struct device *dev,
+ 						  struct talitos_desc *desc,
+ 						  void *context, int error))
+@@ -1819,9 +1802,7 @@ static int common_nonsnoop_hash(struct talitos_edesc *edesc,
+ 
+ 	sg_count = edesc->src_nents ?: 1;
+ 	if (is_sec1 && sg_count > 1)
+-		sg_pcopy_to_buffer(req_ctx->psrc, sg_count,
+-				   edesc->buf + sizeof(struct talitos_desc),
+-				   length, req_ctx->nbuf);
++		sg_copy_to_buffer(req_ctx->psrc, sg_count, edesc->buf, length);
+ 	else if (length)
+ 		sg_count = dma_map_sg(dev, req_ctx->psrc, sg_count,
+ 				      DMA_TO_DEVICE);
+@@ -1834,7 +1815,7 @@ static int common_nonsnoop_hash(struct talitos_edesc *edesc,
+ 				       DMA_TO_DEVICE);
+ 	} else {
+ 		sg_count = talitos_sg_map(dev, req_ctx->psrc, length, edesc,
+-					  &desc->ptr[3], sg_count, offset, 0);
++					  &desc->ptr[3], sg_count, 0, 0);
+ 		if (sg_count > 1)
+ 			sync_needed = true;
+ 	}
+@@ -1858,7 +1839,8 @@ static int common_nonsnoop_hash(struct talitos_edesc *edesc,
+ 		talitos_handle_buggy_hash(ctx, edesc, &desc->ptr[3]);
+ 
+ 	if (is_sec1 && req_ctx->nbuf && length) {
+-		struct talitos_desc *desc2 = desc + 1;
++		struct talitos_desc *desc2 = (struct talitos_desc *)
++					     (edesc->buf + edesc->dma_len);
+ 		dma_addr_t next_desc;
+ 
+ 		memset(desc2, 0, sizeof(*desc2));
+@@ -1879,7 +1861,7 @@ static int common_nonsnoop_hash(struct talitos_edesc *edesc,
+ 						      DMA_TO_DEVICE);
+ 		copy_talitos_ptr(&desc2->ptr[2], &desc->ptr[2], is_sec1);
+ 		sg_count = talitos_sg_map(dev, req_ctx->psrc, length, edesc,
+-					  &desc2->ptr[3], sg_count, offset, 0);
++					  &desc2->ptr[3], sg_count, 0, 0);
+ 		if (sg_count > 1)
+ 			sync_needed = true;
+ 		copy_talitos_ptr(&desc2->ptr[5], &desc->ptr[5], is_sec1);
+@@ -1990,7 +1972,6 @@ static int ahash_process_req(struct ahash_request *areq, unsigned int nbytes)
+ 	struct device *dev = ctx->dev;
+ 	struct talitos_private *priv = dev_get_drvdata(dev);
+ 	bool is_sec1 = has_ftr_sec1(priv);
+-	int offset = 0;
+ 	u8 *ctx_buf = req_ctx->buf[req_ctx->buf_idx];
+ 
+ 	if (!req_ctx->last && (nbytes + req_ctx->nbuf <= blocksize)) {
+@@ -2030,6 +2011,8 @@ static int ahash_process_req(struct ahash_request *areq, unsigned int nbytes)
+ 			sg_chain(req_ctx->bufsl, 2, areq->src);
+ 		req_ctx->psrc = req_ctx->bufsl;
+ 	} else if (is_sec1 && req_ctx->nbuf && req_ctx->nbuf < blocksize) {
++		int offset;
++
+ 		if (nbytes_to_hash > blocksize)
+ 			offset = blocksize - req_ctx->nbuf;
+ 		else
+@@ -2042,7 +2025,8 @@ static int ahash_process_req(struct ahash_request *areq, unsigned int nbytes)
+ 		sg_copy_to_buffer(areq->src, nents,
+ 				  ctx_buf + req_ctx->nbuf, offset);
+ 		req_ctx->nbuf += offset;
+-		req_ctx->psrc = areq->src;
++		req_ctx->psrc = scatterwalk_ffwd(req_ctx->bufsl, areq->src,
++						 offset);
+ 	} else
+ 		req_ctx->psrc = areq->src;
+ 
+@@ -2082,8 +2066,7 @@ static int ahash_process_req(struct ahash_request *areq, unsigned int nbytes)
+ 	if (ctx->keylen && (req_ctx->first || req_ctx->last))
+ 		edesc->desc.hdr |= DESC_HDR_MODE0_MDEU_HMAC;
+ 
+-	return common_nonsnoop_hash(edesc, areq, nbytes_to_hash, offset,
+-				    ahash_done);
++	return common_nonsnoop_hash(edesc, areq, nbytes_to_hash, ahash_done);
+ }
+ 
+ static int ahash_update(struct ahash_request *areq)
+diff --git a/drivers/crypto/talitos.h b/drivers/crypto/talitos.h
+index a65a63e0d6c1..979f6a61e545 100644
+--- a/drivers/crypto/talitos.h
++++ b/drivers/crypto/talitos.h
+@@ -65,6 +65,36 @@ struct talitos_desc {
+ 
+ #define TALITOS_DESC_SIZE	(sizeof(struct talitos_desc) - sizeof(__be32))
+ 
++/*
++ * talitos_edesc - s/w-extended descriptor
++ * @src_nents: number of segments in input scatterlist
++ * @dst_nents: number of segments in output scatterlist
++ * @icv_ool: whether ICV is out-of-line
++ * @iv_dma: dma address of iv for checking continuity and link table
++ * @dma_len: length of dma mapped link_tbl space
++ * @dma_link_tbl: bus physical address of link_tbl/buf
++ * @desc: h/w descriptor
++ * @link_tbl: input and output h/w link tables (if {src,dst}_nents > 1) (SEC2)
++ * @buf: input and output buffer (if {src,dst}_nents > 1) (SEC1)
++ *
++ * if decrypting (with authcheck), or either one of src_nents or dst_nents
++ * is greater than 1, an integrity check value is concatenated to the end
++ * of link_tbl data
++ */
++struct talitos_edesc {
++	int src_nents;
++	int dst_nents;
++	bool icv_ool;
++	dma_addr_t iv_dma;
++	int dma_len;
++	dma_addr_t dma_link_tbl;
++	struct talitos_desc desc;
++	union {
++		struct talitos_ptr link_tbl[0];
++		u8 buf[0];
++	};
++};
++
+ /**
+  * talitos_request - descriptor submission request
+  * @desc: descriptor pointer (kernel virtual)
+diff --git a/drivers/firmware/efi/efi-bgrt.c b/drivers/firmware/efi/efi-bgrt.c
+index a2384184a7de..b07c17643210 100644
+--- a/drivers/firmware/efi/efi-bgrt.c
++++ b/drivers/firmware/efi/efi-bgrt.c
+@@ -47,11 +47,6 @@ void __init efi_bgrt_init(struct acpi_table_header *table)
+ 		       bgrt->version);
+ 		goto out;
+ 	}
+-	if (bgrt->status & 0xfe) {
+-		pr_notice("Ignoring BGRT: reserved status bits are non-zero %u\n",
+-		       bgrt->status);
+-		goto out;
+-	}
+ 	if (bgrt->image_type != 0) {
+ 		pr_notice("Ignoring BGRT: invalid image type %u (expected 0)\n",
+ 		       bgrt->image_type);
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 6537086fb145..b1636ce22060 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -83,6 +83,7 @@
+ #define HID_DEVICE_ID_ALPS_U1_DUAL_3BTN_PTP	0x1220
+ #define HID_DEVICE_ID_ALPS_U1		0x1215
+ #define HID_DEVICE_ID_ALPS_T4_BTNLESS	0x120C
++#define HID_DEVICE_ID_ALPS_1222		0x1222
+ 
+ 
+ #define USB_VENDOR_ID_AMI		0x046b
+@@ -272,6 +273,7 @@
+ #define USB_DEVICE_ID_CHICONY_MULTI_TOUCH	0xb19d
+ #define USB_DEVICE_ID_CHICONY_WIRELESS	0x0618
+ #define USB_DEVICE_ID_CHICONY_PIXART_USB_OPTICAL_MOUSE	0x1053
++#define USB_DEVICE_ID_CHICONY_PIXART_USB_OPTICAL_MOUSE2	0x0939
+ #define USB_DEVICE_ID_CHICONY_WIRELESS2	0x1123
+ #define USB_DEVICE_ID_ASUS_AK1D		0x1125
+ #define USB_DEVICE_ID_CHICONY_TOSHIBA_WT10A	0x1408
+@@ -571,6 +573,7 @@
+ 
+ #define USB_VENDOR_ID_HUION		0x256c
+ #define USB_DEVICE_ID_HUION_TABLET	0x006e
++#define USB_DEVICE_ID_HUION_HS64	0x006d
+ 
+ #define USB_VENDOR_ID_IBM					0x04b3
+ #define USB_DEVICE_ID_IBM_SCROLLPOINT_III			0x3100
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 1565a307170a..42bb635895cf 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -1780,6 +1780,10 @@ static const struct hid_device_id mt_devices[] = {
+ 		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
+ 			USB_VENDOR_ID_ALPS_JP,
+ 			HID_DEVICE_ID_ALPS_U1_DUAL_3BTN_PTP) },
++	{ .driver_data = MT_CLS_WIN_8_DUAL,
++		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
++			USB_VENDOR_ID_ALPS_JP,
++			HID_DEVICE_ID_ALPS_1222) },
+ 
+ 	/* Lenovo X1 TAB Gen 2 */
+ 	{ .driver_data = MT_CLS_WIN_8_DUAL,
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index 189bf68eb35c..74c0ad21b267 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -45,6 +45,7 @@ static const struct hid_device_id hid_quirks[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_UC100KM), HID_QUIRK_NOGET },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_MULTI_TOUCH), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_PIXART_USB_OPTICAL_MOUSE), HID_QUIRK_ALWAYS_POLL },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_PIXART_USB_OPTICAL_MOUSE2), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_WIRELESS), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_CHIC, USB_DEVICE_ID_CHIC_GAMEPAD), HID_QUIRK_BADPAD },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_3AXIS_5BUTTON_STICK), HID_QUIRK_NOGET },
+diff --git a/drivers/hid/hid-uclogic-core.c b/drivers/hid/hid-uclogic-core.c
+index 8fe02d81265d..914fb527ae7a 100644
+--- a/drivers/hid/hid-uclogic-core.c
++++ b/drivers/hid/hid-uclogic-core.c
+@@ -369,6 +369,8 @@ static const struct hid_device_id uclogic_devices[] = {
+ 				USB_DEVICE_ID_UCLOGIC_TABLET_TWHA60) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_HUION,
+ 				USB_DEVICE_ID_HUION_TABLET) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_HUION,
++				USB_DEVICE_ID_HUION_HS64) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_UCLOGIC,
+ 				USB_DEVICE_ID_HUION_TABLET) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_UCLOGIC,
+diff --git a/drivers/hid/hid-uclogic-params.c b/drivers/hid/hid-uclogic-params.c
+index 0187c9f8fc22..273d784fff66 100644
+--- a/drivers/hid/hid-uclogic-params.c
++++ b/drivers/hid/hid-uclogic-params.c
+@@ -977,6 +977,8 @@ int uclogic_params_init(struct uclogic_params *params,
+ 		/* FALL THROUGH */
+ 	case VID_PID(USB_VENDOR_ID_HUION,
+ 		     USB_DEVICE_ID_HUION_TABLET):
++	case VID_PID(USB_VENDOR_ID_HUION,
++		     USB_DEVICE_ID_HUION_HS64):
+ 	case VID_PID(USB_VENDOR_ID_UCLOGIC,
+ 		     USB_DEVICE_ID_HUION_TABLET):
+ 	case VID_PID(USB_VENDOR_ID_UCLOGIC,
+diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
+index 8e6077d8e434..68fd8232d44c 100644
+--- a/drivers/input/mouse/synaptics.c
++++ b/drivers/input/mouse/synaptics.c
+@@ -176,6 +176,7 @@ static const char * const smbus_pnp_ids[] = {
+ 	"LEN0072", /* X1 Carbon Gen 5 (2017) - Elan/ALPS trackpoint */
+ 	"LEN0073", /* X1 Carbon G5 (Elantech) */
+ 	"LEN0092", /* X1 Carbon 6 */
++	"LEN0093", /* T480 */
+ 	"LEN0096", /* X280 */
+ 	"LEN0097", /* X280 -> ALPS trackpoint */
+ 	"LEN200f", /* T450s */
+diff --git a/drivers/irqchip/irq-csky-mpintc.c b/drivers/irqchip/irq-csky-mpintc.c
+index c67c961ab6cc..a4c1aacba1ff 100644
+--- a/drivers/irqchip/irq-csky-mpintc.c
++++ b/drivers/irqchip/irq-csky-mpintc.c
+@@ -89,8 +89,19 @@ static int csky_irq_set_affinity(struct irq_data *d,
+ 	if (cpu >= nr_cpu_ids)
+ 		return -EINVAL;
+ 
+-	/* Enable interrupt destination */
+-	cpu |= BIT(31);
++	/*
++	 * The csky,mpintc supports automatic irq delivery, but it can
++	 * only deliver an external irq to one cpu or to all cpus; it
++	 * cannot deliver an external irq to a group of cpus selected
++	 * with cpu_mask.
++	 * So we only use automatic delivery mode when the affinity
++	 * mask_val is equal to cpu_present_mask.
++	 *
++	 */
++	if (cpumask_equal(mask_val, cpu_present_mask))
++		cpu = 0;
++	else
++		cpu |= BIT(31);
+ 
+ 	writel_relaxed(cpu, INTCG_base + INTCG_CIDSTR + offset);
+ 
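For orientation, the decision this hunk encodes can be modelled in isolation. A minimal userspace sketch, where pick_dest(), BIT31 and the example masks are illustrative stand-ins rather than driver symbols:

	#include <stdint.h>
	#include <stdio.h>

	#define BIT31 (1u << 31)	/* stand-in for the destination-enable bit */

	/* Write 0 to let the intc auto-deliver when the requested affinity
	 * covers every present cpu; otherwise target one cpu and set the
	 * per-cpu destination-enable bit, as the hunk above does. */
	static uint32_t pick_dest(uint32_t req_mask, uint32_t present_mask,
				  uint32_t target_cpu)
	{
		if (req_mask == present_mask)
			return 0;
		return target_cpu | BIT31;
	}

	int main(void)
	{
		printf("all cpus -> %#x\n", pick_dest(0xf, 0xf, 0));	/* 0 */
		printf("cpu 2    -> %#x\n", pick_dest(0x4, 0xf, 2));	/* 0x80000002 */
		return 0;
	}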
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 7577755bdcf4..eead9def9921 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -745,32 +745,43 @@ static void its_flush_cmd(struct its_node *its, struct its_cmd_block *cmd)
+ }
+ 
+ static int its_wait_for_range_completion(struct its_node *its,
+-					 struct its_cmd_block *from,
++					 u64	prev_idx,
+ 					 struct its_cmd_block *to)
+ {
+-	u64 rd_idx, from_idx, to_idx;
++	u64 rd_idx, to_idx, linear_idx;
+ 	u32 count = 1000000;	/* 1s! */
+ 
+-	from_idx = its_cmd_ptr_to_offset(its, from);
++	/* Linearize to_idx if the command queue has wrapped around */
+ 	to_idx = its_cmd_ptr_to_offset(its, to);
++	if (to_idx < prev_idx)
++		to_idx += ITS_CMD_QUEUE_SZ;
++
++	linear_idx = prev_idx;
+ 
+ 	while (1) {
++		s64 delta;
++
+ 		rd_idx = readl_relaxed(its->base + GITS_CREADR);
+ 
+-		/* Direct case */
+-		if (from_idx < to_idx && rd_idx >= to_idx)
+-			break;
++		/*
++		 * Compute the read pointer progress, taking the
++		 * potential wrap-around into account.
++		 */
++		delta = rd_idx - prev_idx;
++		if (rd_idx < prev_idx)
++			delta += ITS_CMD_QUEUE_SZ;
+ 
+-		/* Wrapped case */
+-		if (from_idx >= to_idx && rd_idx >= to_idx && rd_idx < from_idx)
++		linear_idx += delta;
++		if (linear_idx >= to_idx)
+ 			break;
+ 
+ 		count--;
+ 		if (!count) {
+-			pr_err_ratelimited("ITS queue timeout (%llu %llu %llu)\n",
+-					   from_idx, to_idx, rd_idx);
++			pr_err_ratelimited("ITS queue timeout (%llu %llu)\n",
++					   to_idx, linear_idx);
+ 			return -1;
+ 		}
++		prev_idx = rd_idx;
+ 		cpu_relax();
+ 		udelay(1);
+ 	}
+@@ -787,6 +798,7 @@ void name(struct its_node *its,						\
+ 	struct its_cmd_block *cmd, *sync_cmd, *next_cmd;		\
+ 	synctype *sync_obj;						\
+ 	unsigned long flags;						\
++	u64 rd_idx;							\
+ 									\
+ 	raw_spin_lock_irqsave(&its->lock, flags);			\
+ 									\
+@@ -808,10 +820,11 @@ void name(struct its_node *its,						\
+ 	}								\
+ 									\
+ post:									\
++	rd_idx = readl_relaxed(its->base + GITS_CREADR);		\
+ 	next_cmd = its_post_commands(its);				\
+ 	raw_spin_unlock_irqrestore(&its->lock, flags);			\
+ 									\
+-	if (its_wait_for_range_completion(its, cmd, next_cmd))		\
++	if (its_wait_for_range_completion(its, rd_idx, next_cmd))	\
+ 		pr_err_ratelimited("ITS cmd %ps failed\n", builder);	\
+ }
+ 
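The rework above replaces a two-case range check with linearized indices. A self-contained sketch of that wrap-around bookkeeping, where QUEUE_SZ stands in for ITS_CMD_QUEUE_SZ and the sample numbers are only for illustration:

	#include <stdint.h>
	#include <stdio.h>

	#define QUEUE_SZ 65536	/* stand-in for ITS_CMD_QUEUE_SZ */

	/* Accumulate the progress of a hardware read index that wraps modulo
	 * QUEUE_SZ, mirroring the delta computation in the hunk above. */
	static void advance(uint64_t rd_idx, uint64_t *prev, uint64_t *linear)
	{
		int64_t delta = (int64_t)rd_idx - (int64_t)*prev;

		if (rd_idx < *prev)	/* the pointer wrapped past the end */
			delta += QUEUE_SZ;
		*linear += delta;
		*prev = rd_idx;
	}

	int main(void)
	{
		uint64_t prev = 65000, linear = 65000;

		advance(200, &prev, &linear);	/* wrapped: 65000 -> 200 */
		printf("linear progress: %llu\n", (unsigned long long)linear);	/* 65736 */
		return 0;
	}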
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index 350cf0451456..ec8b27e20de3 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -561,7 +561,7 @@ static char **realloc_argv(unsigned *size, char **old_argv)
+ 		gfp = GFP_NOIO;
+ 	}
+ 	argv = kmalloc_array(new_size, sizeof(*argv), gfp);
+-	if (argv) {
++	if (argv && old_argv) {
+ 		memcpy(argv, old_argv, *size * sizeof(*argv));
+ 		*size = new_size;
+ 	}
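The extra check guards the very first call, when old_argv is still NULL: memcpy() from a NULL source is undefined behaviour even when the copy length works out to 0. The same pattern in a standalone sketch (grow_argv() is an invented name):

	#include <stdlib.h>
	#include <string.h>

	static char **grow_argv(unsigned *size, char **old)
	{
		unsigned new_size = *size ? *size * 2 : 8;
		char **argv = malloc(new_size * sizeof(*argv));

		if (!argv)
			return NULL;	/* caller still owns 'old' */
		if (old)		/* nothing to copy on the first call */
			memcpy(argv, old, *size * sizeof(*argv));
		*size = new_size;
		free(old);
		return argv;
	}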
+diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
+index f4c31ffaa88e..cec1c0ff33eb 100644
+--- a/drivers/md/dm-verity-target.c
++++ b/drivers/md/dm-verity-target.c
+@@ -236,8 +236,8 @@ static int verity_handle_err(struct dm_verity *v, enum verity_block_type type,
+ 		BUG();
+ 	}
+ 
+-	DMERR("%s: %s block %llu is corrupted", v->data_dev->name, type_str,
+-		block);
++	DMERR_LIMIT("%s: %s block %llu is corrupted", v->data_dev->name,
++		    type_str, block);
+ 
+ 	if (v->corrupted_errs == DM_VERITY_MAX_CORRUPTED_ERRS)
+ 		DMERR("%s: reached maximum errors", v->data_dev->name);
+diff --git a/drivers/net/ethernet/emulex/benet/be_ethtool.c b/drivers/net/ethernet/emulex/benet/be_ethtool.c
+index 6e635debc7fd..cfa01efa5b48 100644
+--- a/drivers/net/ethernet/emulex/benet/be_ethtool.c
++++ b/drivers/net/ethernet/emulex/benet/be_ethtool.c
+@@ -895,7 +895,7 @@ static void be_self_test(struct net_device *netdev, struct ethtool_test *test,
+ 			 u64 *data)
+ {
+ 	struct be_adapter *adapter = netdev_priv(netdev);
+-	int status;
++	int status, cnt;
+ 	u8 link_status = 0;
+ 
+ 	if (adapter->function_caps & BE_FUNCTION_CAPS_SUPER_NIC) {
+@@ -906,6 +906,9 @@ static void be_self_test(struct net_device *netdev, struct ethtool_test *test,
+ 
+ 	memset(data, 0, sizeof(u64) * ETHTOOL_TESTS_NUM);
+ 
++	/* check link status before offline tests */
++	link_status = netif_carrier_ok(netdev);
++
+ 	if (test->flags & ETH_TEST_FL_OFFLINE) {
+ 		if (be_loopback_test(adapter, BE_MAC_LOOPBACK, &data[0]) != 0)
+ 			test->flags |= ETH_TEST_FL_FAILED;
+@@ -926,13 +929,26 @@ static void be_self_test(struct net_device *netdev, struct ethtool_test *test,
+ 		test->flags |= ETH_TEST_FL_FAILED;
+ 	}
+ 
+-	status = be_cmd_link_status_query(adapter, NULL, &link_status, 0);
+-	if (status) {
+-		test->flags |= ETH_TEST_FL_FAILED;
+-		data[4] = -1;
+-	} else if (!link_status) {
++	/* link status was down prior to test */
++	if (!link_status) {
+ 		test->flags |= ETH_TEST_FL_FAILED;
+ 		data[4] = 1;
++		return;
++	}
++
++	for (cnt = 10; cnt; cnt--) {
++		status = be_cmd_link_status_query(adapter, NULL, &link_status,
++						  0);
++		if (status) {
++			test->flags |= ETH_TEST_FL_FAILED;
++			data[4] = -1;
++			break;
++		}
++
++		if (link_status)
++			break;
++
++		msleep_interruptible(500);
+ 	}
+ }
+ 
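The new loop is a plain poll-with-retries: up to 10 link-status queries, 500 ms apart, before declaring the post-test link dead. Stripped of driver detail (poll_until() and its parameters are illustrative):

	#include <stdbool.h>
	#include <unistd.h>

	static bool poll_until(bool (*check)(void), int tries, unsigned int ms)
	{
		while (tries--) {
			if (check())
				return true;	/* condition met, stop early */
			usleep(ms * 1000);	/* wait before the next attempt */
		}
		return false;			/* gave up after 'tries' attempts */
	}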
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index c10c9d7eadaa..f4a00ee39834 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -4209,7 +4209,7 @@ void e1000e_up(struct e1000_adapter *adapter)
+ 		e1000_configure_msix(adapter);
+ 	e1000_irq_enable(adapter);
+ 
+-	netif_start_queue(adapter->netdev);
++	/* Tx queue started by watchdog timer when link is up */
+ 
+ 	e1000e_trigger_lsc(adapter);
+ }
+@@ -4607,6 +4607,7 @@ int e1000e_open(struct net_device *netdev)
+ 	pm_runtime_get_sync(&pdev->dev);
+ 
+ 	netif_carrier_off(netdev);
++	netif_stop_queue(netdev);
+ 
+ 	/* allocate transmit descriptors */
+ 	err = e1000e_setup_tx_resources(adapter->tx_ring);
+@@ -4667,7 +4668,6 @@ int e1000e_open(struct net_device *netdev)
+ 	e1000_irq_enable(adapter);
+ 
+ 	adapter->tx_hang_recheck = false;
+-	netif_start_queue(netdev);
+ 
+ 	hw->mac.get_link_status = true;
+ 	pm_runtime_put(&pdev->dev);
+@@ -5289,6 +5289,7 @@ static void e1000_watchdog_task(struct work_struct *work)
+ 			if (phy->ops.cfg_on_link_up)
+ 				phy->ops.cfg_on_link_up(hw);
+ 
++			netif_wake_queue(netdev);
+ 			netif_carrier_on(netdev);
+ 
+ 			if (!test_bit(__E1000_DOWN, &adapter->state))
+@@ -5302,6 +5303,7 @@ static void e1000_watchdog_task(struct work_struct *work)
+ 			/* Link status message must follow this format */
+ 			pr_info("%s NIC Link is Down\n", adapter->netdev->name);
+ 			netif_carrier_off(netdev);
++			netif_stop_queue(netdev);
+ 			if (!test_bit(__E1000_DOWN, &adapter->state))
+ 				mod_timer(&adapter->phy_info_timer,
+ 					  round_jiffies(jiffies + 2 * HZ));
+@@ -5309,13 +5311,8 @@ static void e1000_watchdog_task(struct work_struct *work)
+ 			/* 8000ES2LAN requires a Rx packet buffer work-around
+ 			 * on link down event; reset the controller to flush
+ 			 * the Rx packet buffer.
+-			 *
+-			 * If the link is lost the controller stops DMA, but
+-			 * if there is queued Tx work it cannot be done.  So
+-			 * reset the controller to flush the Tx packet buffers.
+ 			 */
+-			if ((adapter->flags & FLAG_RX_NEEDS_RESTART) ||
+-			    e1000_desc_unused(tx_ring) + 1 < tx_ring->count)
++			if (adapter->flags & FLAG_RX_NEEDS_RESTART)
+ 				adapter->flags |= FLAG_RESTART_NOW;
+ 			else
+ 				pm_schedule_suspend(netdev->dev.parent,
+@@ -5338,6 +5335,14 @@ link_up:
+ 	adapter->gotc_old = adapter->stats.gotc;
+ 	spin_unlock(&adapter->stats64_lock);
+ 
++	/* If the link is lost the controller stops DMA, but
++	 * if there is queued Tx work it cannot be done.  So
++	 * reset the controller to flush the Tx packet buffers.
++	 */
++	if (!netif_carrier_ok(netdev) &&
++	    (e1000_desc_unused(tx_ring) + 1 < tx_ring->count))
++		adapter->flags |= FLAG_RESTART_NOW;
++
+ 	/* If reset is necessary, do it outside of interrupt context. */
+ 	if (adapter->flags & FLAG_RESTART_NOW) {
+ 		schedule_work(&adapter->reset_task);
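Taken together, the e1000e hunks enforce one ordering rule: the Tx queue runs only while carrier is up, with the flush-on-lost-link reset moved to where it is evaluated on every watchdog pass. A toy model of the queue/carrier coupling, with invented names:

	#include <stdbool.h>
	#include <stdio.h>

	struct nic { bool carrier; bool tx_running; };

	static void nic_open(struct nic *n)
	{
		n->carrier = false;
		n->tx_running = false;	/* queue stays stopped until link-up */
	}

	static void watchdog(struct nic *n, bool link)
	{
		n->carrier = link;
		n->tx_running = link;	/* wake on link-up, stop on link-down */
	}

	int main(void)
	{
		struct nic n;

		nic_open(&n);
		printf("after open: tx=%d\n", n.tx_running);	/* 0 */
		watchdog(&n, true);
		printf("link up:    tx=%d\n", n.tx_running);	/* 1 */
		return 0;
	}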
+diff --git a/drivers/net/ethernet/sis/sis900.c b/drivers/net/ethernet/sis/sis900.c
+index 67f9bb6e941b..9b036c857b1d 100644
+--- a/drivers/net/ethernet/sis/sis900.c
++++ b/drivers/net/ethernet/sis/sis900.c
+@@ -1057,7 +1057,7 @@ sis900_open(struct net_device *net_dev)
+ 	sis900_set_mode(sis_priv, HW_SPEED_10_MBPS, FDX_CAPABLE_HALF_SELECTED);
+ 
+ 	/* Enable all known interrupts by setting the interrupt mask. */
+-	sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxIDLE);
++	sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxIDLE | TxDESC);
+ 	sw32(cr, RxENA | sr32(cr));
+ 	sw32(ier, IE);
+ 
+@@ -1578,7 +1578,7 @@ static void sis900_tx_timeout(struct net_device *net_dev)
+ 	sw32(txdp, sis_priv->tx_ring_dma);
+ 
+ 	/* Enable all known interrupts by setting the interrupt mask. */
+-	sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxIDLE);
++	sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxIDLE | TxDESC);
+ }
+ 
+ /**
+@@ -1618,7 +1618,7 @@ sis900_start_xmit(struct sk_buff *skb, struct net_device *net_dev)
+ 			spin_unlock_irqrestore(&sis_priv->lock, flags);
+ 			return NETDEV_TX_OK;
+ 	}
+-	sis_priv->tx_ring[entry].cmdsts = (OWN | skb->len);
++	sis_priv->tx_ring[entry].cmdsts = (OWN | INTR | skb->len);
+ 	sw32(cr, TxENA | sr32(cr));
+ 
+ 	sis_priv->cur_tx ++;
+@@ -1674,7 +1674,7 @@ static irqreturn_t sis900_interrupt(int irq, void *dev_instance)
+ 	do {
+ 		status = sr32(isr);
+ 
+-		if ((status & (HIBERR|TxURN|TxERR|TxIDLE|RxORN|RxERR|RxOK)) == 0)
++		if ((status & (HIBERR|TxURN|TxERR|TxIDLE|TxDESC|RxORN|RxERR|RxOK)) == 0)
+ 			/* nothing interesting happened */
+ 			break;
+ 		handled = 1;
+@@ -1684,7 +1684,7 @@ static irqreturn_t sis900_interrupt(int irq, void *dev_instance)
+ 			/* Rx interrupt */
+ 			sis900_rx(net_dev);
+ 
+-		if (status & (TxURN | TxERR | TxIDLE))
++		if (status & (TxURN | TxERR | TxIDLE | TxDESC))
+ 			/* Tx interrupt */
+ 			sis900_finish_xmit(net_dev);
+ 
+@@ -1896,8 +1896,8 @@ static void sis900_finish_xmit (struct net_device *net_dev)
+ 
+ 		if (tx_status & OWN) {
+ 			/* The packet is not transmitted yet (owned by hardware)!
+-			 * Note: the interrupt is generated only when Tx Machine
+-			 * is idle, so this is an almost impossible case */
++			 * Note: with TxDESC ('descriptor interrupt') this is
++			 * an almost impossible condition */
+ 			break;
+ 		}
+ 
+@@ -2473,7 +2473,7 @@ static int sis900_resume(struct pci_dev *pci_dev)
+ 	sis900_set_mode(sis_priv, HW_SPEED_10_MBPS, FDX_CAPABLE_HALF_SELECTED);
+ 
+ 	/* Enable all known interrupts by setting the interrupt mask. */
+-	sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxIDLE);
++	sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxIDLE | TxDESC);
+ 	sw32(cr, RxENA | sr32(cr));
+ 	sw32(ier, IE);
+ 
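The sis900 change asks for a completion interrupt per transmitted descriptor instead of waiting for the Tx engine to go idle. The descriptor-flag idea in isolation, with illustrative bit positions (the real OWN/INTR encodings live in the driver's headers):

	#include <stdint.h>

	#define DESC_OWN	(1u << 31)	/* descriptor owned by hardware */
	#define DESC_INTR	(1u << 30)	/* interrupt when this one completes */

	static inline uint32_t make_cmdsts(uint32_t len)
	{
		/* Hand the buffer to hardware and request a per-descriptor
		 * interrupt, so completions are reaped even if Tx never idles. */
		return DESC_OWN | DESC_INTR | (len & 0xffffu);
	}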
+diff --git a/drivers/net/ppp/ppp_mppe.c b/drivers/net/ppp/ppp_mppe.c
+index 7ccdc62c6052..06d620b10704 100644
+--- a/drivers/net/ppp/ppp_mppe.c
++++ b/drivers/net/ppp/ppp_mppe.c
+@@ -63,6 +63,7 @@ MODULE_AUTHOR("Frank Cusack <fcusack@fcusack.com>");
+ MODULE_DESCRIPTION("Point-to-Point Protocol Microsoft Point-to-Point Encryption support");
+ MODULE_LICENSE("Dual BSD/GPL");
+ MODULE_ALIAS("ppp-compress-" __stringify(CI_MPPE));
++MODULE_SOFTDEP("pre: arc4");
+ MODULE_VERSION("1.0.2");
+ 
+ static unsigned int
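MODULE_SOFTDEP() records a load-order hint for modprobe without creating a hard symbol dependency; here it ensures the arc4 cipher is available before MPPE initialises. A minimal skeleton showing the macro in context (a sketch only, it needs a kbuild Makefile to build):

	#include <linux/module.h>

	static int __init softdep_demo_init(void)
	{
		return 0;
	}

	static void __exit softdep_demo_exit(void)
	{
	}

	module_init(softdep_demo_init);
	module_exit(softdep_demo_exit);

	MODULE_SOFTDEP("pre: arc4");	/* modprobe loads arc4 first */
	MODULE_LICENSE("GPL");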
+diff --git a/drivers/pinctrl/mediatek/mtk-eint.c b/drivers/pinctrl/mediatek/mtk-eint.c
+index f464f8cd274b..7e526bcf5e0b 100644
+--- a/drivers/pinctrl/mediatek/mtk-eint.c
++++ b/drivers/pinctrl/mediatek/mtk-eint.c
+@@ -113,6 +113,8 @@ static void mtk_eint_mask(struct irq_data *d)
+ 	void __iomem *reg = mtk_eint_get_offset(eint, d->hwirq,
+ 						eint->regs->mask_set);
+ 
++	eint->cur_mask[d->hwirq >> 5] &= ~mask;
++
+ 	writel(mask, reg);
+ }
+ 
+@@ -123,6 +125,8 @@ static void mtk_eint_unmask(struct irq_data *d)
+ 	void __iomem *reg = mtk_eint_get_offset(eint, d->hwirq,
+ 						eint->regs->mask_clr);
+ 
++	eint->cur_mask[d->hwirq >> 5] |= mask;
++
+ 	writel(mask, reg);
+ 
+ 	if (eint->dual_edge[d->hwirq])
+@@ -217,19 +221,6 @@ static void mtk_eint_chip_write_mask(const struct mtk_eint *eint,
+ 	}
+ }
+ 
+-static void mtk_eint_chip_read_mask(const struct mtk_eint *eint,
+-				    void __iomem *base, u32 *buf)
+-{
+-	int port;
+-	void __iomem *reg;
+-
+-	for (port = 0; port < eint->hw->ports; port++) {
+-		reg = base + eint->regs->mask + (port << 2);
+-		buf[port] = ~readl_relaxed(reg);
+-		/* Mask is 0 when irq is enabled, and 1 when disabled. */
+-	}
+-}
+-
+ static int mtk_eint_irq_request_resources(struct irq_data *d)
+ {
+ 	struct mtk_eint *eint = irq_data_get_irq_chip_data(d);
+@@ -318,7 +309,7 @@ static void mtk_eint_irq_handler(struct irq_desc *desc)
+ 	struct irq_chip *chip = irq_desc_get_chip(desc);
+ 	struct mtk_eint *eint = irq_desc_get_handler_data(desc);
+ 	unsigned int status, eint_num;
+-	int offset, index, virq;
++	int offset, mask_offset, index, virq;
+ 	void __iomem *reg =  mtk_eint_get_offset(eint, 0, eint->regs->stat);
+ 	int dual_edge, start_level, curr_level;
+ 
+@@ -328,10 +319,24 @@ static void mtk_eint_irq_handler(struct irq_desc *desc)
+ 		status = readl(reg);
+ 		while (status) {
+ 			offset = __ffs(status);
++			mask_offset = eint_num >> 5;
+ 			index = eint_num + offset;
+ 			virq = irq_find_mapping(eint->domain, index);
+ 			status &= ~BIT(offset);
+ 
++			/*
++			 * If we get an interrupt on a pin that was only
++			 * required for wake (no real interrupt requested),
++			 * mask the interrupt (as mtk_eint_resume would do
++			 * anyway later in the resume sequence).
++			 */
++			if (eint->wake_mask[mask_offset] & BIT(offset) &&
++			    !(eint->cur_mask[mask_offset] & BIT(offset))) {
++				writel_relaxed(BIT(offset), reg -
++					eint->regs->stat +
++					eint->regs->mask_set);
++			}
++
+ 			dual_edge = eint->dual_edge[index];
+ 			if (dual_edge) {
+ 				/*
+@@ -370,7 +375,6 @@ static void mtk_eint_irq_handler(struct irq_desc *desc)
+ 
+ int mtk_eint_do_suspend(struct mtk_eint *eint)
+ {
+-	mtk_eint_chip_read_mask(eint, eint->base, eint->cur_mask);
+ 	mtk_eint_chip_write_mask(eint, eint->base, eint->wake_mask);
+ 
+ 	return 0;
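The cur_mask[] updates added to mask/unmask are a software shadow of a write-only register pair, which is why the read-back helper could be deleted. The shadowing idiom on its own, with invented names (one bit per irq, 32 irqs per word):

	#include <stdint.h>

	struct eint_shadow { uint32_t cur_mask[4]; };

	static void shadow_mask(struct eint_shadow *s, unsigned int hwirq)
	{
		s->cur_mask[hwirq >> 5] &= ~(1u << (hwirq & 31));
		/* ...then write the hardware mask-set register... */
	}

	static void shadow_unmask(struct eint_shadow *s, unsigned int hwirq)
	{
		s->cur_mask[hwirq >> 5] |= 1u << (hwirq & 31);
		/* ...then write the hardware mask-clear register... */
	}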
+diff --git a/drivers/pinctrl/pinctrl-mcp23s08.c b/drivers/pinctrl/pinctrl-mcp23s08.c
+index 5d7a8514def9..b727de5654cd 100644
+--- a/drivers/pinctrl/pinctrl-mcp23s08.c
++++ b/drivers/pinctrl/pinctrl-mcp23s08.c
+@@ -881,6 +881,10 @@ static int mcp23s08_probe_one(struct mcp23s08 *mcp, struct device *dev,
+ 	if (ret < 0)
+ 		goto fail;
+ 
++	ret = devm_gpiochip_add_data(dev, &mcp->chip, mcp);
++	if (ret < 0)
++		goto fail;
++
+ 	mcp->irq_controller =
+ 		device_property_read_bool(dev, "interrupt-controller");
+ 	if (mcp->irq && mcp->irq_controller) {
+@@ -922,10 +926,6 @@ static int mcp23s08_probe_one(struct mcp23s08 *mcp, struct device *dev,
+ 			goto fail;
+ 	}
+ 
+-	ret = devm_gpiochip_add_data(dev, &mcp->chip, mcp);
+-	if (ret < 0)
+-		goto fail;
+-
+ 	if (one_regmap_config) {
+ 		mcp->pinctrl_desc.name = devm_kasprintf(dev, GFP_KERNEL,
+ 				"mcp23xxx-pinctrl.%d", raw_chip_address);
+diff --git a/drivers/pinctrl/pinctrl-ocelot.c b/drivers/pinctrl/pinctrl-ocelot.c
+index 3b4ca52d2456..fb76fb2e9ea5 100644
+--- a/drivers/pinctrl/pinctrl-ocelot.c
++++ b/drivers/pinctrl/pinctrl-ocelot.c
+@@ -396,7 +396,7 @@ static int ocelot_pin_function_idx(struct ocelot_pinctrl *info,
+ 	return -1;
+ }
+ 
+-#define REG(r, info, p) ((r) * (info)->stride + (4 * ((p) / 32)))
++#define REG_ALT(msb, info, p) (OCELOT_GPIO_ALT0 * (info)->stride + 4 * ((msb) + ((info)->stride * ((p) / 32))))
+ 
+ static int ocelot_pinmux_set_mux(struct pinctrl_dev *pctldev,
+ 				 unsigned int selector, unsigned int group)
+@@ -412,19 +412,21 @@ static int ocelot_pinmux_set_mux(struct pinctrl_dev *pctldev,
+ 
+ 	/*
+ 	 * f is encoded in two bits.
+-	 * bit 0 of f goes in BIT(pin) of ALT0, bit 1 of f goes in BIT(pin) of
+-	 * ALT1
++	 * bit 0 of f goes in BIT(pin) of ALT[0], bit 1 of f goes in BIT(pin) of
++	 * ALT[1]
+ 	 * This is racy because both registers can't be updated at the same time
+ 	 * but it doesn't matter much for now.
+ 	 */
+-	regmap_update_bits(info->map, REG(OCELOT_GPIO_ALT0, info, pin->pin),
++	regmap_update_bits(info->map, REG_ALT(0, info, pin->pin),
+ 			   BIT(p), f << p);
+-	regmap_update_bits(info->map, REG(OCELOT_GPIO_ALT1, info, pin->pin),
++	regmap_update_bits(info->map, REG_ALT(1, info, pin->pin),
+ 			   BIT(p), f << (p - 1));
+ 
+ 	return 0;
+ }
+ 
++#define REG(r, info, p) ((r) * (info)->stride + (4 * ((p) / 32)))
++
+ static int ocelot_gpio_set_direction(struct pinctrl_dev *pctldev,
+ 				     struct pinctrl_gpio_range *range,
+ 				     unsigned int pin, bool input)
+@@ -432,7 +434,7 @@ static int ocelot_gpio_set_direction(struct pinctrl_dev *pctldev,
+ 	struct ocelot_pinctrl *info = pinctrl_dev_get_drvdata(pctldev);
+ 	unsigned int p = pin % 32;
+ 
+-	regmap_update_bits(info->map, REG(OCELOT_GPIO_OE, info, p), BIT(p),
++	regmap_update_bits(info->map, REG(OCELOT_GPIO_OE, info, pin), BIT(p),
+ 			   input ? 0 : BIT(p));
+ 
+ 	return 0;
+@@ -445,9 +447,9 @@ static int ocelot_gpio_request_enable(struct pinctrl_dev *pctldev,
+ 	struct ocelot_pinctrl *info = pinctrl_dev_get_drvdata(pctldev);
+ 	unsigned int p = offset % 32;
+ 
+-	regmap_update_bits(info->map, REG(OCELOT_GPIO_ALT0, info, offset),
++	regmap_update_bits(info->map, REG_ALT(0, info, offset),
+ 			   BIT(p), 0);
+-	regmap_update_bits(info->map, REG(OCELOT_GPIO_ALT1, info, offset),
++	regmap_update_bits(info->map, REG_ALT(1, info, offset),
+ 			   BIT(p), 0);
+ 
+ 	return 0;
+diff --git a/drivers/s390/cio/qdio_setup.c b/drivers/s390/cio/qdio_setup.c
+index a59887fad13e..06a9c7e3a63a 100644
+--- a/drivers/s390/cio/qdio_setup.c
++++ b/drivers/s390/cio/qdio_setup.c
+@@ -150,6 +150,7 @@ static int __qdio_allocate_qs(struct qdio_q **irq_ptr_qs, int nr_queues)
+ 			return -ENOMEM;
+ 		}
+ 		irq_ptr_qs[i] = q;
++		INIT_LIST_HEAD(&q->entry);
+ 	}
+ 	return 0;
+ }
+@@ -178,6 +179,7 @@ static void setup_queues_misc(struct qdio_q *q, struct qdio_irq *irq_ptr,
+ 	q->mask = 1 << (31 - i);
+ 	q->nr = i;
+ 	q->handler = handler;
++	INIT_LIST_HEAD(&q->entry);
+ }
+ 
+ static void setup_storage_lists(struct qdio_q *q, struct qdio_irq *irq_ptr,
+diff --git a/drivers/s390/cio/qdio_thinint.c b/drivers/s390/cio/qdio_thinint.c
+index 07dea602205b..6628e0c9e70e 100644
+--- a/drivers/s390/cio/qdio_thinint.c
++++ b/drivers/s390/cio/qdio_thinint.c
+@@ -79,7 +79,6 @@ void tiqdio_add_input_queues(struct qdio_irq *irq_ptr)
+ 	mutex_lock(&tiq_list_lock);
+ 	list_add_rcu(&irq_ptr->input_qs[0]->entry, &tiq_list);
+ 	mutex_unlock(&tiq_list_lock);
+-	xchg(irq_ptr->dsci, 1 << 7);
+ }
+ 
+ void tiqdio_remove_input_queues(struct qdio_irq *irq_ptr)
+@@ -87,14 +86,14 @@ void tiqdio_remove_input_queues(struct qdio_irq *irq_ptr)
+ 	struct qdio_q *q;
+ 
+ 	q = irq_ptr->input_qs[0];
+-	/* if establish triggered an error */
+-	if (!q || !q->entry.prev || !q->entry.next)
++	if (!q)
+ 		return;
+ 
+ 	mutex_lock(&tiq_list_lock);
+ 	list_del_rcu(&q->entry);
+ 	mutex_unlock(&tiq_list_lock);
+ 	synchronize_rcu();
++	INIT_LIST_HEAD(&q->entry);
+ }
+ 
+ static inline int has_multiple_inq_on_dsci(struct qdio_irq *irq_ptr)
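Re-initialising the node after list_del_rcu() is what lets the removal path drop the fragile '->prev/->next are NULL' probe. The underlying idiom, modelled with a hand-rolled circular list in plain C:

	#include <stdio.h>

	struct node { struct node *next, *prev; };

	static void node_init(struct node *n)   { n->next = n->prev = n; }
	static int  node_queued(struct node *n) { return n->next != n; }

	static void node_del(struct node *n)
	{
		n->prev->next = n->next;
		n->next->prev = n->prev;
		node_init(n);	/* leave it self-linked, not dangling */
	}

	int main(void)
	{
		struct node head, a;

		node_init(&head);
		a.next = head.next;	/* insert a right after head */
		a.prev = &head;
		head.next->prev = &a;
		head.next = &a;
		printf("queued: %d\n", node_queued(&a));	/* 1 */
		node_del(&a);
		printf("queued: %d\n", node_queued(&a));	/* 0 */
		return 0;
	}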
+diff --git a/fs/afs/callback.c b/fs/afs/callback.c
+index 128f2dbe256a..fee6fde79e6b 100644
+--- a/fs/afs/callback.c
++++ b/fs/afs/callback.c
+@@ -278,9 +278,9 @@ static void afs_break_one_callback(struct afs_server *server,
+ 			struct afs_super_info *as = AFS_FS_S(cbi->sb);
+ 			struct afs_volume *volume = as->volume;
+ 
+-			write_lock(&volume->cb_break_lock);
++			write_lock(&volume->cb_v_break_lock);
+ 			volume->cb_v_break++;
+-			write_unlock(&volume->cb_break_lock);
++			write_unlock(&volume->cb_v_break_lock);
+ 		} else {
+ 			data.volume = NULL;
+ 			data.fid = *fid;
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index 3904ab0b9563..fd0750fb96a5 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -582,7 +582,7 @@ struct afs_volume {
+ 	unsigned int		servers_seq;	/* Incremented each time ->servers changes */
+ 
+ 	unsigned		cb_v_break;	/* Break-everything counter. */
+-	rwlock_t		cb_break_lock;
++	rwlock_t		cb_v_break_lock;
+ 
+ 	afs_voltype_t		type;		/* type of volume */
+ 	short			error;
+diff --git a/fs/afs/volume.c b/fs/afs/volume.c
+index f6eba2def0a1..3e8dbee09f87 100644
+--- a/fs/afs/volume.c
++++ b/fs/afs/volume.c
+@@ -47,6 +47,7 @@ static struct afs_volume *afs_alloc_volume(struct afs_fs_context *params,
+ 	atomic_set(&volume->usage, 1);
+ 	INIT_LIST_HEAD(&volume->proc_link);
+ 	rwlock_init(&volume->servers_lock);
++	rwlock_init(&volume->cb_v_break_lock);
+ 	memcpy(volume->name, vldb->name, vldb->name_len + 1);
+ 
+ 	slist = afs_alloc_server_list(params->cell, params->key, vldb, type_mask);
+diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
+index 63fedd85c6c5..dec95654f3ae 100644
+--- a/include/linux/cpuhotplug.h
++++ b/include/linux/cpuhotplug.h
+@@ -174,6 +174,7 @@ enum cpuhp_state {
+ 	CPUHP_AP_WATCHDOG_ONLINE,
+ 	CPUHP_AP_WORKQUEUE_ONLINE,
+ 	CPUHP_AP_RCUTREE_ONLINE,
++	CPUHP_AP_BASE_CACHEINFO_ONLINE,
+ 	CPUHP_AP_ONLINE_DYN,
+ 	CPUHP_AP_ONLINE_DYN_END		= CPUHP_AP_ONLINE_DYN + 30,
+ 	CPUHP_AP_X86_HPET_ONLINE,
+diff --git a/include/linux/kernel.h b/include/linux/kernel.h
+index 2d14e21c16c0..4330cecd2237 100644
+--- a/include/linux/kernel.h
++++ b/include/linux/kernel.h
+@@ -92,7 +92,8 @@
+ #define DIV_ROUND_DOWN_ULL(ll, d) \
+ 	({ unsigned long long _tmp = (ll); do_div(_tmp, d); _tmp; })
+ 
+-#define DIV_ROUND_UP_ULL(ll, d)		DIV_ROUND_DOWN_ULL((ll) + (d) - 1, (d))
++#define DIV_ROUND_UP_ULL(ll, d) \
++	DIV_ROUND_DOWN_ULL((unsigned long long)(ll) + (d) - 1, (d))
+ 
+ #if BITS_PER_LONG == 32
+ # define DIV_ROUND_UP_SECTOR_T(ll,d) DIV_ROUND_UP_ULL(ll, d)
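The cast matters because the old macro evaluated (ll) + (d) - 1 in the type of ll, so a 32-bit value near its maximum wrapped before the division. A compilable demonstration, with plain division standing in for do_div():

	#include <stdio.h>

	#define DIV_ROUND_UP_BAD(ll, d)	(((ll) + (d) - 1) / (d))
	#define DIV_ROUND_UP_FIXED(ll, d) \
		(((unsigned long long)(ll) + (d) - 1) / (d))

	int main(void)
	{
		unsigned int x = 0xfffffffeu;

		/* The addition wraps in 32 bits: (x + 15) mod 2^32 = 13, 13/16 = 0. */
		printf("bad:   %llu\n", (unsigned long long)DIV_ROUND_UP_BAD(x, 16u));
		/* Widened first, the ceiling division is correct: 268435456. */
		printf("fixed: %llu\n", DIV_ROUND_UP_FIXED(x, 16u));
		return 0;
	}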
+diff --git a/include/uapi/linux/nilfs2_ondisk.h b/include/uapi/linux/nilfs2_ondisk.h
+index a7e66ab11d1d..c23f91ae5fe8 100644
+--- a/include/uapi/linux/nilfs2_ondisk.h
++++ b/include/uapi/linux/nilfs2_ondisk.h
+@@ -29,7 +29,7 @@
+ 
+ #include <linux/types.h>
+ #include <linux/magic.h>
+-
++#include <asm/byteorder.h>
+ 
+ #define NILFS_INODE_BMAP_SIZE	7
+ 
+@@ -533,19 +533,19 @@ enum {
+ static inline void							\
+ nilfs_checkpoint_set_##name(struct nilfs_checkpoint *cp)		\
+ {									\
+-	cp->cp_flags = cpu_to_le32(le32_to_cpu(cp->cp_flags) |		\
+-				   (1UL << NILFS_CHECKPOINT_##flag));	\
++	cp->cp_flags = __cpu_to_le32(__le32_to_cpu(cp->cp_flags) |	\
++				     (1UL << NILFS_CHECKPOINT_##flag));	\
+ }									\
+ static inline void							\
+ nilfs_checkpoint_clear_##name(struct nilfs_checkpoint *cp)		\
+ {									\
+-	cp->cp_flags = cpu_to_le32(le32_to_cpu(cp->cp_flags) &		\
++	cp->cp_flags = __cpu_to_le32(__le32_to_cpu(cp->cp_flags) &	\
+ 				   ~(1UL << NILFS_CHECKPOINT_##flag));	\
+ }									\
+ static inline int							\
+ nilfs_checkpoint_##name(const struct nilfs_checkpoint *cp)		\
+ {									\
+-	return !!(le32_to_cpu(cp->cp_flags) &				\
++	return !!(__le32_to_cpu(cp->cp_flags) &				\
+ 		  (1UL << NILFS_CHECKPOINT_##flag));			\
+ }
+ 
+@@ -595,20 +595,20 @@ enum {
+ static inline void							\
+ nilfs_segment_usage_set_##name(struct nilfs_segment_usage *su)		\
+ {									\
+-	su->su_flags = cpu_to_le32(le32_to_cpu(su->su_flags) |		\
++	su->su_flags = __cpu_to_le32(__le32_to_cpu(su->su_flags) |	\
+ 				   (1UL << NILFS_SEGMENT_USAGE_##flag));\
+ }									\
+ static inline void							\
+ nilfs_segment_usage_clear_##name(struct nilfs_segment_usage *su)	\
+ {									\
+ 	su->su_flags =							\
+-		cpu_to_le32(le32_to_cpu(su->su_flags) &			\
++		__cpu_to_le32(__le32_to_cpu(su->su_flags) &		\
+ 			    ~(1UL << NILFS_SEGMENT_USAGE_##flag));      \
+ }									\
+ static inline int							\
+ nilfs_segment_usage_##name(const struct nilfs_segment_usage *su)	\
+ {									\
+-	return !!(le32_to_cpu(su->su_flags) &				\
++	return !!(__le32_to_cpu(su->su_flags) &				\
+ 		  (1UL << NILFS_SEGMENT_USAGE_##flag));			\
+ }
+ 
+@@ -619,15 +619,15 @@ NILFS_SEGMENT_USAGE_FNS(ERROR, error)
+ static inline void
+ nilfs_segment_usage_set_clean(struct nilfs_segment_usage *su)
+ {
+-	su->su_lastmod = cpu_to_le64(0);
+-	su->su_nblocks = cpu_to_le32(0);
+-	su->su_flags = cpu_to_le32(0);
++	su->su_lastmod = __cpu_to_le64(0);
++	su->su_nblocks = __cpu_to_le32(0);
++	su->su_flags = __cpu_to_le32(0);
+ }
+ 
+ static inline int
+ nilfs_segment_usage_clean(const struct nilfs_segment_usage *su)
+ {
+-	return !le32_to_cpu(su->su_flags);
++	return !__le32_to_cpu(su->su_flags);
+ }
+ 
+ /**
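The double-underscore spellings matter because this is a uapi header: only __cpu_to_le32() and friends are exported to userspace, while the cpu_to_le32() forms exist solely inside the kernel. A quick userspace round-trip, assuming Linux kernel headers are installed:

	#include <asm/byteorder.h>
	#include <stdio.h>

	int main(void)
	{
		__le32 v = __cpu_to_le32(0x12345678u);

		/* Prints 1 on both little- and big-endian hosts. */
		printf("round-trip ok: %d\n", __le32_to_cpu(v) == 0x12345678u);
		return 0;
	}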
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 6170034f4118..e97e7224ab47 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -1954,6 +1954,9 @@ static ssize_t write_cpuhp_fail(struct device *dev,
+ 	if (ret)
+ 		return ret;
+ 
++	if (fail < CPUHP_OFFLINE || fail > CPUHP_ONLINE)
++		return -EINVAL;
++
+ 	/*
+ 	 * Cannot fail STARTING/DYING callbacks.
+ 	 */
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index dc7dead2d2cc..f33bd0a89391 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -5913,7 +5913,7 @@ static void perf_sample_regs_user(struct perf_regs *regs_user,
+ 	if (user_mode(regs)) {
+ 		regs_user->abi = perf_reg_abi(current);
+ 		regs_user->regs = regs;
+-	} else if (current->mm) {
++	} else if (!(current->flags & PF_KTHREAD)) {
+ 		perf_get_regs_user(regs_user, regs, regs_user_copy);
+ 	} else {
+ 		regs_user->abi = PERF_SAMPLE_REGS_ABI_NONE;
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 2628f3773ca8..ee24fea0eede 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -245,7 +245,11 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
+ 	struct page *page = alloc_pages_node(node, THREADINFO_GFP,
+ 					     THREAD_SIZE_ORDER);
+ 
+-	return page ? page_address(page) : NULL;
++	if (likely(page)) {
++		tsk->stack = page_address(page);
++		return tsk->stack;
++	}
++	return NULL;
+ #endif
+ }
+ 
+diff --git a/kernel/irq/autoprobe.c b/kernel/irq/autoprobe.c
+index 16cbf6beb276..ae60cae24e9a 100644
+--- a/kernel/irq/autoprobe.c
++++ b/kernel/irq/autoprobe.c
+@@ -90,7 +90,7 @@ unsigned long probe_irq_on(void)
+ 			/* It triggered already - consider it spurious. */
+ 			if (!(desc->istate & IRQS_WAITING)) {
+ 				desc->istate &= ~IRQS_AUTODETECT;
+-				irq_shutdown(desc);
++				irq_shutdown_and_deactivate(desc);
+ 			} else
+ 				if (i < 32)
+ 					mask |= 1 << i;
+@@ -127,7 +127,7 @@ unsigned int probe_irq_mask(unsigned long val)
+ 				mask |= 1 << i;
+ 
+ 			desc->istate &= ~IRQS_AUTODETECT;
+-			irq_shutdown(desc);
++			irq_shutdown_and_deactivate(desc);
+ 		}
+ 		raw_spin_unlock_irq(&desc->lock);
+ 	}
+@@ -169,7 +169,7 @@ int probe_irq_off(unsigned long val)
+ 				nr_of_irqs++;
+ 			}
+ 			desc->istate &= ~IRQS_AUTODETECT;
+-			irq_shutdown(desc);
++			irq_shutdown_and_deactivate(desc);
+ 		}
+ 		raw_spin_unlock_irq(&desc->lock);
+ 	}
+diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
+index 51128bea3846..04fe4f989bd8 100644
+--- a/kernel/irq/chip.c
++++ b/kernel/irq/chip.c
+@@ -314,6 +314,12 @@ void irq_shutdown(struct irq_desc *desc)
+ 		}
+ 		irq_state_clr_started(desc);
+ 	}
++}
++
++
++void irq_shutdown_and_deactivate(struct irq_desc *desc)
++{
++	irq_shutdown(desc);
+ 	/*
+ 	 * This must be called even if the interrupt was never started up,
+ 	 * because the activation can happen before the interrupt is
+diff --git a/kernel/irq/cpuhotplug.c b/kernel/irq/cpuhotplug.c
+index 5b1072e394b2..6c7ca2e983a5 100644
+--- a/kernel/irq/cpuhotplug.c
++++ b/kernel/irq/cpuhotplug.c
+@@ -116,7 +116,7 @@ static bool migrate_one_irq(struct irq_desc *desc)
+ 		 */
+ 		if (irqd_affinity_is_managed(d)) {
+ 			irqd_set_managed_shutdown(d);
+-			irq_shutdown(desc);
++			irq_shutdown_and_deactivate(desc);
+ 			return false;
+ 		}
+ 		affinity = cpu_online_mask;
+diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
+index 70c3053bc1f6..3a948f41ab00 100644
+--- a/kernel/irq/internals.h
++++ b/kernel/irq/internals.h
+@@ -82,6 +82,7 @@ extern int irq_activate_and_startup(struct irq_desc *desc, bool resend);
+ extern int irq_startup(struct irq_desc *desc, bool resend, bool force);
+ 
+ extern void irq_shutdown(struct irq_desc *desc);
++extern void irq_shutdown_and_deactivate(struct irq_desc *desc);
+ extern void irq_enable(struct irq_desc *desc);
+ extern void irq_disable(struct irq_desc *desc);
+ extern void irq_percpu_enable(struct irq_desc *desc, unsigned int cpu);
+@@ -96,6 +97,10 @@ static inline void irq_mark_irq(unsigned int irq) { }
+ extern void irq_mark_irq(unsigned int irq);
+ #endif
+ 
++extern int __irq_get_irqchip_state(struct irq_data *data,
++				   enum irqchip_irq_state which,
++				   bool *state);
++
+ extern void init_kstat_irqs(struct irq_desc *desc, int node, int nr);
+ 
+ irqreturn_t __handle_irq_event_percpu(struct irq_desc *desc, unsigned int *flags);
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index 53a081392115..fad61986f35c 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -13,6 +13,7 @@
+ #include <linux/module.h>
+ #include <linux/random.h>
+ #include <linux/interrupt.h>
++#include <linux/irqdomain.h>
+ #include <linux/slab.h>
+ #include <linux/sched.h>
+ #include <linux/sched/rt.h>
+@@ -34,8 +35,9 @@ static int __init setup_forced_irqthreads(char *arg)
+ early_param("threadirqs", setup_forced_irqthreads);
+ #endif
+ 
+-static void __synchronize_hardirq(struct irq_desc *desc)
++static void __synchronize_hardirq(struct irq_desc *desc, bool sync_chip)
+ {
++	struct irq_data *irqd = irq_desc_get_irq_data(desc);
+ 	bool inprogress;
+ 
+ 	do {
+@@ -51,6 +53,20 @@ static void __synchronize_hardirq(struct irq_desc *desc)
+ 		/* Ok, that indicated we're done: double-check carefully. */
+ 		raw_spin_lock_irqsave(&desc->lock, flags);
+ 		inprogress = irqd_irq_inprogress(&desc->irq_data);
++
++		/*
++		 * If requested and supported, check with the chip whether the
++		 * interrupt is in flight at the hardware level, i.e. already
++		 * pending in a CPU and waiting for service and acknowledgement.
++		 */
++		if (!inprogress && sync_chip) {
++			/*
++			 * Ignore the return code. inprogress is only updated
++			 * when the chip supports it.
++			 */
++			__irq_get_irqchip_state(irqd, IRQCHIP_STATE_ACTIVE,
++						&inprogress);
++		}
+ 		raw_spin_unlock_irqrestore(&desc->lock, flags);
+ 
+ 		/* Oops, that failed? */
+@@ -73,13 +89,18 @@ static void __synchronize_hardirq(struct irq_desc *desc)
+  *	Returns: false if a threaded handler is active.
+  *
+  *	This function may be called - with care - from IRQ context.
++ *
++ *	It does not check whether an interrupt is in flight at the
++ *	hardware level but not yet serviced, as doing so might deadlock
++ *	when called with interrupts disabled and the target CPU of the
++ *	interrupt is the current CPU.
+  */
+ bool synchronize_hardirq(unsigned int irq)
+ {
+ 	struct irq_desc *desc = irq_to_desc(irq);
+ 
+ 	if (desc) {
+-		__synchronize_hardirq(desc);
++		__synchronize_hardirq(desc, false);
+ 		return !atomic_read(&desc->threads_active);
+ 	}
+ 
+@@ -95,14 +116,19 @@ EXPORT_SYMBOL(synchronize_hardirq);
+  *	to complete before returning. If you use this function while
+  *	holding a resource the IRQ handler may need you will deadlock.
+  *
+- *	This function may be called - with care - from IRQ context.
++ *	Can only be called from preemptible code as it might sleep when
++ *	an interrupt thread is associated with @irq.
++ *
++ *	It optionally makes sure (when the irq chip supports that method)
++ *	that the interrupt is not pending in any CPU and waiting for
++ *	service.
+  */
+ void synchronize_irq(unsigned int irq)
+ {
+ 	struct irq_desc *desc = irq_to_desc(irq);
+ 
+ 	if (desc) {
+-		__synchronize_hardirq(desc);
++		__synchronize_hardirq(desc, true);
+ 		/*
+ 		 * We made sure that no hardirq handler is
+ 		 * running. Now verify that no threaded handlers are
+@@ -1699,6 +1725,7 @@ static struct irqaction *__free_irq(struct irq_desc *desc, void *dev_id)
+ 	/* If this was the last handler, shut down the IRQ line: */
+ 	if (!desc->action) {
+ 		irq_settings_clr_disable_unlazy(desc);
++		/* Only shut down. Deactivate after synchronize_hardirq() */
+ 		irq_shutdown(desc);
+ 	}
+ 
+@@ -1727,8 +1754,12 @@ static struct irqaction *__free_irq(struct irq_desc *desc, void *dev_id)
+ 
+ 	unregister_handler_proc(irq, action);
+ 
+-	/* Make sure it's not being used on another CPU: */
+-	synchronize_hardirq(irq);
++	/*
++	 * Make sure it's not being used on another CPU and, if the chip
++	 * supports it, also make sure that there is no (not yet serviced)
++	 * interrupt in flight at the hardware level.
++	 */
++	__synchronize_hardirq(desc, true);
+ 
+ #ifdef CONFIG_DEBUG_SHIRQ
+ 	/*
+@@ -1768,6 +1799,14 @@ static struct irqaction *__free_irq(struct irq_desc *desc, void *dev_id)
+ 		 * require it to deallocate resources over the slow bus.
+ 		 */
+ 		chip_bus_lock(desc);
++		/*
++		 * There is no interrupt in flight anymore. Deactivate it
++		 * completely.
++		 */
++		raw_spin_lock_irqsave(&desc->lock, flags);
++		irq_domain_deactivate_irq(&desc->irq_data);
++		raw_spin_unlock_irqrestore(&desc->lock, flags);
++
+ 		irq_release_resources(desc);
+ 		chip_bus_sync_unlock(desc);
+ 		irq_remove_timings(desc);
+@@ -1855,7 +1894,7 @@ static const void *__cleanup_nmi(unsigned int irq, struct irq_desc *desc)
+ 	}
+ 
+ 	irq_settings_clr_disable_unlazy(desc);
+-	irq_shutdown(desc);
++	irq_shutdown_and_deactivate(desc);
+ 
+ 	irq_release_resources(desc);
+ 
+@@ -2578,6 +2617,28 @@ out:
+ 	irq_put_desc_unlock(desc, flags);
+ }
+ 
++int __irq_get_irqchip_state(struct irq_data *data, enum irqchip_irq_state which,
++			    bool *state)
++{
++	struct irq_chip *chip;
++	int err = -EINVAL;
++
++	do {
++		chip = irq_data_get_irq_chip(data);
++		if (chip->irq_get_irqchip_state)
++			break;
++#ifdef CONFIG_IRQ_DOMAIN_HIERARCHY
++		data = data->parent_data;
++#else
++		data = NULL;
++#endif
++	} while (data);
++
++	if (data)
++		err = chip->irq_get_irqchip_state(data, which, state);
++	return err;
++}
++
+ /**
+  *	irq_get_irqchip_state - returns the irqchip state of an interrupt.
+  *	@irq: Interrupt line that is forwarded to a VM
+@@ -2596,7 +2657,6 @@ int irq_get_irqchip_state(unsigned int irq, enum irqchip_irq_state which,
+ {
+ 	struct irq_desc *desc;
+ 	struct irq_data *data;
+-	struct irq_chip *chip;
+ 	unsigned long flags;
+ 	int err = -EINVAL;
+ 
+@@ -2606,19 +2666,7 @@ int irq_get_irqchip_state(unsigned int irq, enum irqchip_irq_state which,
+ 
+ 	data = irq_desc_get_irq_data(desc);
+ 
+-	do {
+-		chip = irq_data_get_irq_chip(data);
+-		if (chip->irq_get_irqchip_state)
+-			break;
+-#ifdef CONFIG_IRQ_DOMAIN_HIERARCHY
+-		data = data->parent_data;
+-#else
+-		data = NULL;
+-#endif
+-	} while (data);
+-
+-	if (data)
+-		err = chip->irq_get_irqchip_state(data, which, state);
++	err = __irq_get_irqchip_state(data, which, state);
+ 
+ 	irq_put_desc_busunlock(desc, flags);
+ 	return err;
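__irq_get_irqchip_state() is the loop factored out above: walk up the irq_data parent chain until some level implements the callback, then invoke it there. Its shape with illustrative types:

	#include <stddef.h>

	struct level {
		struct level *parent;
		int (*get_state)(struct level *l, int which, int *state);
	};

	/* Returns the callback's result from the first level that has one,
	 * or -1 (standing in for -EINVAL) if no level implements it. */
	static int get_state_any_level(struct level *l, int which, int *state)
	{
		while (l && !l->get_state)
			l = l->parent;
		return l ? l->get_state(l, which, state) : -1;
	}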
+diff --git a/mm/oom_kill.c b/mm/oom_kill.c
+index 3a2484884cfd..263efad6fc7e 100644
+--- a/mm/oom_kill.c
++++ b/mm/oom_kill.c
+@@ -985,8 +985,7 @@ static void oom_kill_process(struct oom_control *oc, const char *message)
+ /*
+  * Determines whether the kernel must panic because of the panic_on_oom sysctl.
+  */
+-static void check_panic_on_oom(struct oom_control *oc,
+-			       enum oom_constraint constraint)
++static void check_panic_on_oom(struct oom_control *oc)
+ {
+ 	if (likely(!sysctl_panic_on_oom))
+ 		return;
+@@ -996,7 +995,7 @@ static void check_panic_on_oom(struct oom_control *oc,
+ 		 * does not panic for cpuset, mempolicy, or memcg allocation
+ 		 * failures.
+ 		 */
+-		if (constraint != CONSTRAINT_NONE)
++		if (oc->constraint != CONSTRAINT_NONE)
+ 			return;
+ 	}
+ 	/* Do not panic for oom kills triggered by sysrq */
+@@ -1033,7 +1032,6 @@ EXPORT_SYMBOL_GPL(unregister_oom_notifier);
+ bool out_of_memory(struct oom_control *oc)
+ {
+ 	unsigned long freed = 0;
+-	enum oom_constraint constraint = CONSTRAINT_NONE;
+ 
+ 	if (oom_killer_disabled)
+ 		return false;
+@@ -1069,10 +1067,10 @@ bool out_of_memory(struct oom_control *oc)
+ 	 * Check if there were limitations on the allocation (only relevant for
+ 	 * NUMA and memcg) that may require different handling.
+ 	 */
+-	constraint = constrained_alloc(oc);
+-	if (constraint != CONSTRAINT_MEMORY_POLICY)
++	oc->constraint = constrained_alloc(oc);
++	if (oc->constraint != CONSTRAINT_MEMORY_POLICY)
+ 		oc->nodemask = NULL;
+-	check_panic_on_oom(oc, constraint);
++	check_panic_on_oom(oc);
+ 
+ 	if (!is_memcg_oom(oc) && sysctl_oom_kill_allocating_task &&
+ 	    current->mm && !oom_unkillable_task(current, NULL, oc->nodemask) &&
+diff --git a/tools/testing/selftests/powerpc/mm/.gitignore b/tools/testing/selftests/powerpc/mm/.gitignore
+index ba919308fe30..d503b8764a8e 100644
+--- a/tools/testing/selftests/powerpc/mm/.gitignore
++++ b/tools/testing/selftests/powerpc/mm/.gitignore
+@@ -3,4 +3,5 @@ subpage_prot
+ tempfile
+ prot_sao
+ segv_errors
+-wild_bctr
+\ No newline at end of file
++wild_bctr
++large_vm_fork_separation
+\ No newline at end of file
+diff --git a/tools/testing/selftests/powerpc/mm/Makefile b/tools/testing/selftests/powerpc/mm/Makefile
+index 43d68420e363..f1fbc15800c4 100644
+--- a/tools/testing/selftests/powerpc/mm/Makefile
++++ b/tools/testing/selftests/powerpc/mm/Makefile
+@@ -2,7 +2,8 @@
+ noarg:
+ 	$(MAKE) -C ../
+ 
+-TEST_GEN_PROGS := hugetlb_vs_thp_test subpage_prot prot_sao segv_errors wild_bctr
++TEST_GEN_PROGS := hugetlb_vs_thp_test subpage_prot prot_sao segv_errors wild_bctr \
++		  large_vm_fork_separation
+ TEST_GEN_FILES := tempfile
+ 
+ top_srcdir = ../../../../..
+@@ -13,6 +14,7 @@ $(TEST_GEN_PROGS): ../harness.c
+ $(OUTPUT)/prot_sao: ../utils.c
+ 
+ $(OUTPUT)/wild_bctr: CFLAGS += -m64
++$(OUTPUT)/large_vm_fork_separation: CFLAGS += -m64
+ 
+ $(OUTPUT)/tempfile:
+ 	dd if=/dev/zero of=$@ bs=64k count=1
+diff --git a/tools/testing/selftests/powerpc/mm/large_vm_fork_separation.c b/tools/testing/selftests/powerpc/mm/large_vm_fork_separation.c
+new file mode 100644
+index 000000000000..2363a7f3ab0d
+--- /dev/null
++++ b/tools/testing/selftests/powerpc/mm/large_vm_fork_separation.c
+@@ -0,0 +1,87 @@
++// SPDX-License-Identifier: GPL-2.0+
++//
++// Copyright 2019, Michael Ellerman, IBM Corp.
++//
++// Test that allocating memory beyond the memory limit and then forking is
++// handled correctly, i.e. the child is able to access the mappings beyond the
++// memory limit and the child's writes are not visible to the parent.
++
++#include <stdio.h>
++#include <stdlib.h>
++#include <sys/mman.h>
++#include <sys/types.h>
++#include <sys/wait.h>
++#include <unistd.h>
++
++#include "utils.h"
++
++
++#ifndef MAP_FIXED_NOREPLACE
++#define MAP_FIXED_NOREPLACE	MAP_FIXED	// "Should be safe" above 512TB
++#endif
++
++
++static int test(void)
++{
++	int p2c[2], c2p[2], rc, status, c, *p;
++	unsigned long page_size;
++	pid_t pid;
++
++	page_size = sysconf(_SC_PAGESIZE);
++	SKIP_IF(page_size != 65536);
++
++	// Create a mapping at 512TB to allocate an extended_id
++	p = mmap((void *)(512ul << 40), page_size, PROT_READ | PROT_WRITE,
++		MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE, -1, 0);
++	if (p == MAP_FAILED) {
++		perror("mmap");
++		printf("Error: couldn't mmap(), confirm kernel has 4TB support?\n");
++		return 1;
++	}
++
++	printf("parent writing %p = 1\n", p);
++	*p = 1;
++
++	FAIL_IF(pipe(p2c) == -1 || pipe(c2p) == -1);
++
++	pid = fork();
++	if (pid == 0) {
++		FAIL_IF(read(p2c[0], &c, 1) != 1);
++
++		pid = getpid();
++		printf("child writing  %p = %d\n", p, pid);
++		*p = pid;
++
++		FAIL_IF(write(c2p[1], &c, 1) != 1);
++		FAIL_IF(read(p2c[0], &c, 1) != 1);
++		exit(0);
++	}
++
++	c = 0;
++	FAIL_IF(write(p2c[1], &c, 1) != 1);
++	FAIL_IF(read(c2p[0], &c, 1) != 1);
++
++	// Prevent compiler optimisation
++	barrier();
++
++	rc = 0;
++	printf("parent reading %p = %d\n", p, *p);
++	if (*p != 1) {
++		printf("Error: BUG! parent saw child's write! *p = %d\n", *p);
++		rc = 1;
++	}
++
++	FAIL_IF(write(p2c[1], &c, 1) != 1);
++	FAIL_IF(waitpid(pid, &status, 0) == -1);
++	FAIL_IF(!WIFEXITED(status) || WEXITSTATUS(status));
++
++	if (rc == 0)
++		printf("success: test completed OK\n");
++
++	return rc;
++}
++
++int main(void)
++{
++	return test_harness(test, "large_vm_fork_separation");
++}
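The test keeps parent and child in strict lockstep with two pipes instead of signals; each side blocks in read() until the other writes a byte. That handshake in isolation:

	#include <stdio.h>
	#include <sys/wait.h>
	#include <unistd.h>

	int main(void)
	{
		int p2c[2], c2p[2];
		char c = 0;

		if (pipe(p2c) || pipe(c2p))
			return 1;

		if (fork() == 0) {
			read(p2c[0], &c, 1);	/* wait for the parent's go-ahead */
			write(c2p[1], &c, 1);	/* hand the turn back */
			_exit(0);
		}
		write(p2c[1], &c, 1);		/* let the child run */
		read(c2p[0], &c, 1);		/* block until it is done */
		wait(NULL);
		puts("handshake complete");
		return 0;
	}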


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-07-26 11:37 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2019-07-26 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     b2f3521fd881ccfc4a0516fedeff41ffb2e0fe49
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jul 26 11:37:15 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jul 26 11:37:15 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b2f3521f

Linux patch 5.1.20

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |     4 +
 1019_linux-5.1.20.patch | 17687 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 17691 insertions(+)

diff --git a/0000_README b/0000_README
index 99c9da8..46f6f44 100644
--- a/0000_README
+++ b/0000_README
@@ -119,6 +119,10 @@ Patch:  1018_linux-5.1.19.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.1.19
 
+Patch:  1019_linux-5.1.20.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.1.20
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1019_linux-5.1.20.patch b/1019_linux-5.1.20.patch
new file mode 100644
index 0000000..d4cd00f
--- /dev/null
+++ b/1019_linux-5.1.20.patch
@@ -0,0 +1,17687 @@
+diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
+index 913396ac5824..ed0d814df7e0 100644
+--- a/Documentation/atomic_t.txt
++++ b/Documentation/atomic_t.txt
+@@ -177,6 +177,9 @@ These helper barriers exist because architectures have varying implicit
+ ordering on their SMP atomic primitives. For example our TSO architectures
+ provide full ordered atomics and these barriers are no-ops.
+ 
++NOTE: when the atomic RmW ops are fully ordered, they should also imply a
++compiler barrier.
++
+ Thus:
+ 
+   atomic_fetch_add();
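A C11 analogue of the note being added (an analogy only, since the kernel's atomics are their own API): a sequentially consistent RMW constrains the compiler as well as the CPU, so plain accesses may not be moved across it.

	#include <stdatomic.h>

	static _Atomic int ready;
	static int payload;

	void publish(int val)
	{
		payload = val;			/* may not sink below the RMW */
		atomic_fetch_add(&ready, 1);	/* full barrier, compiler included */
	}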
+diff --git a/Documentation/devicetree/bindings/net/marvell-orion-mdio.txt b/Documentation/devicetree/bindings/net/marvell-orion-mdio.txt
+index 42cd81090a2c..3f3cfc1d8d4d 100644
+--- a/Documentation/devicetree/bindings/net/marvell-orion-mdio.txt
++++ b/Documentation/devicetree/bindings/net/marvell-orion-mdio.txt
+@@ -16,7 +16,7 @@ Required properties:
+ 
+ Optional properties:
+ - interrupts: interrupt line number for the SMI error/done interrupt
+-- clocks: phandle for up to three required clocks for the MDIO instance
++- clocks: phandle for up to four required clocks for the MDIO instance
+ 
+ The child nodes of the MDIO driver are the individual PHY devices
+ connected to this MDIO bus. They must have a "reg" property given the
+diff --git a/Documentation/scheduler/sched-pelt.c b/Documentation/scheduler/sched-pelt.c
+index e4219139386a..7238b355919c 100644
+--- a/Documentation/scheduler/sched-pelt.c
++++ b/Documentation/scheduler/sched-pelt.c
+@@ -20,7 +20,8 @@ void calc_runnable_avg_yN_inv(void)
+ 	int i;
+ 	unsigned int x;
+ 
+-	printf("static const u32 runnable_avg_yN_inv[] = {");
++	/* To silence -Wunused-but-set-variable warnings. */
++	printf("static const u32 runnable_avg_yN_inv[] __maybe_unused = {");
+ 	for (i = 0; i < HALFLIFE; i++) {
+ 		x = ((1UL<<32)-1)*pow(y, i);
+ 
+diff --git a/Makefile b/Makefile
+index 432a62fec680..ef6daeb823b9 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 19
++SUBLEVEL = 20
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arm/boot/dts/gemini-dlink-dir-685.dts b/arch/arm/boot/dts/gemini-dlink-dir-685.dts
+index 592111c8d6fd..dfe3da77b437 100644
+--- a/arch/arm/boot/dts/gemini-dlink-dir-685.dts
++++ b/arch/arm/boot/dts/gemini-dlink-dir-685.dts
+@@ -64,7 +64,7 @@
+ 		gpio-sck = <&gpio1 5 GPIO_ACTIVE_HIGH>;
+ 		gpio-miso = <&gpio1 8 GPIO_ACTIVE_HIGH>;
+ 		gpio-mosi = <&gpio1 7 GPIO_ACTIVE_HIGH>;
+-		cs-gpios = <&gpio0 20 GPIO_ACTIVE_HIGH>;
++		cs-gpios = <&gpio0 20 GPIO_ACTIVE_LOW>;
+ 		num-chipselects = <1>;
+ 
+ 		panel: display@0 {
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index d218729ec852..dc3e62a18b62 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -258,7 +258,8 @@ config GENERIC_CALIBRATE_DELAY
+ 	def_bool y
+ 
+ config ZONE_DMA32
+-	def_bool y
++	bool "Support DMA32 zone" if EXPERT
++	default y
+ 
+ config HAVE_GENERIC_GUP
+ 	def_bool y
+diff --git a/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi b/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi
+index 053458a5db55..ede86856ec39 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi
+@@ -322,7 +322,8 @@
+ 			regulator-max-microvolt = <1320000>;
+ 			enable-gpios = <&pmic 6 GPIO_ACTIVE_HIGH>;
+ 			regulator-ramp-delay = <80>;
+-			regulator-enable-ramp-delay = <1000>;
++			regulator-enable-ramp-delay = <2000>;
++			regulator-settling-time-us = <160>;
+ 		};
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/nvidia/tegra210.dtsi b/arch/arm64/boot/dts/nvidia/tegra210.dtsi
+index 6574396d2257..d7baff79205c 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra210.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra210.dtsi
+@@ -1250,7 +1250,7 @@
+ 			compatible = "nvidia,tegra210-agic";
+ 			#interrupt-cells = <3>;
+ 			interrupt-controller;
+-			reg = <0x702f9000 0x2000>,
++			reg = <0x702f9000 0x1000>,
+ 			      <0x702fa000 0x2000>;
+ 			interrupts = <GIC_SPI 102 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_HIGH)>;
+ 			clocks = <&tegra_car TEGRA210_CLK_APE>;
+diff --git a/arch/arm64/crypto/sha1-ce-glue.c b/arch/arm64/crypto/sha1-ce-glue.c
+index 17fac2889f56..d8c521c757e8 100644
+--- a/arch/arm64/crypto/sha1-ce-glue.c
++++ b/arch/arm64/crypto/sha1-ce-glue.c
+@@ -54,7 +54,7 @@ static int sha1_ce_finup(struct shash_desc *desc, const u8 *data,
+ 			 unsigned int len, u8 *out)
+ {
+ 	struct sha1_ce_state *sctx = shash_desc_ctx(desc);
+-	bool finalize = !sctx->sst.count && !(len % SHA1_BLOCK_SIZE);
++	bool finalize = !sctx->sst.count && !(len % SHA1_BLOCK_SIZE) && len;
+ 
+ 	if (!may_use_simd())
+ 		return crypto_sha1_finup(desc, data, len, out);
+diff --git a/arch/arm64/crypto/sha2-ce-glue.c b/arch/arm64/crypto/sha2-ce-glue.c
+index 261f5195cab7..c47d1a28ff6b 100644
+--- a/arch/arm64/crypto/sha2-ce-glue.c
++++ b/arch/arm64/crypto/sha2-ce-glue.c
+@@ -59,7 +59,7 @@ static int sha256_ce_finup(struct shash_desc *desc, const u8 *data,
+ 			   unsigned int len, u8 *out)
+ {
+ 	struct sha256_ce_state *sctx = shash_desc_ctx(desc);
+-	bool finalize = !sctx->sst.count && !(len % SHA256_BLOCK_SIZE);
++	bool finalize = !sctx->sst.count && !(len % SHA256_BLOCK_SIZE) && len;
+ 
+ 	if (!may_use_simd()) {
+ 		if (len)
+diff --git a/arch/arm64/include/asm/irqflags.h b/arch/arm64/include/asm/irqflags.h
+index 43d8366c1e87..efa2976ca050 100644
+--- a/arch/arm64/include/asm/irqflags.h
++++ b/arch/arm64/include/asm/irqflags.h
+@@ -92,7 +92,7 @@ static inline unsigned long arch_local_save_flags(void)
+ 			ARM64_HAS_IRQ_PRIO_MASKING)
+ 		: "=&r" (flags), "+r" (daif_bits)
+ 		: "r" ((unsigned long) GIC_PRIO_IRQOFF)
+-		: "memory");
++		: "cc", "memory");
+ 
+ 	return flags;
+ }
+@@ -136,7 +136,7 @@ static inline int arch_irqs_disabled_flags(unsigned long flags)
+ 			ARM64_HAS_IRQ_PRIO_MASKING)
+ 		: "=&r" (res)
+ 		: "r" ((int) flags)
+-		: "memory");
++		: "cc", "memory");
+ 
+ 	return res;
+ }
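The one-line fix is the general rule that inline asm altering the condition flags must list "cc" in its clobbers, otherwise the compiler may assume an earlier comparison still holds across the asm. A generic illustration (x86 AT&T syntax for concreteness; the arm64 hunks above apply the same rule):

	static inline unsigned long add_clobbers_flags(unsigned long a,
						       unsigned long b)
	{
		unsigned long r = a;

		__asm__ volatile("add %1, %0"	/* 'add' rewrites the flags */
				 : "+r" (r)
				 : "r" (b)
				 : "cc", "memory");
		return r;
	}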
+diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c
+index 803f0494dd3e..7722e85fb69c 100644
+--- a/arch/arm64/kernel/acpi.c
++++ b/arch/arm64/kernel/acpi.c
+@@ -155,10 +155,14 @@ static int __init acpi_fadt_sanity_check(void)
+ 	 */
+ 	if (table->revision < 5 ||
+ 	   (table->revision == 5 && fadt->minor_revision < 1)) {
+-		pr_err("Unsupported FADT revision %d.%d, should be 5.1+\n",
++		pr_err(FW_BUG "Unsupported FADT revision %d.%d, should be 5.1+\n",
+ 		       table->revision, fadt->minor_revision);
+-		ret = -EINVAL;
+-		goto out;
++
++		if (!fadt->arm_boot_flags) {
++			ret = -EINVAL;
++			goto out;
++		}
++		pr_err("FADT has ARM boot flags set, assuming 5.1\n");
+ 	}
+ 
+ 	if (!(fadt->flags & ACPI_FADT_HW_REDUCED)) {
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index c50a7a75f2e0..381ff2a96e0b 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -420,6 +420,20 @@ tsk	.req	x28		// current thread_info
+ 	irq_stack_exit
+ 	.endm
+ 
++#ifdef CONFIG_ARM64_PSEUDO_NMI
++	/*
++	 * Set res to 0 if irqs were unmasked in the interrupted context.
++	 * Otherwise set res to a non-0 value.
++	 */
++	.macro	test_irqs_unmasked res:req, pmr:req
++alternative_if ARM64_HAS_IRQ_PRIO_MASKING
++	sub	\res, \pmr, #GIC_PRIO_IRQON
++alternative_else
++	mov	\res, xzr
++alternative_endif
++	.endm
++#endif
++
+ 	.text
+ 
+ /*
+@@ -616,19 +630,19 @@ ENDPROC(el1_sync)
+ el1_irq:
+ 	kernel_entry 1
+ 	enable_da_f
+-#ifdef CONFIG_TRACE_IRQFLAGS
++
+ #ifdef CONFIG_ARM64_PSEUDO_NMI
+ alternative_if ARM64_HAS_IRQ_PRIO_MASKING
+ 	ldr	x20, [sp, #S_PMR_SAVE]
+-alternative_else
+-	mov	x20, #GIC_PRIO_IRQON
+-alternative_endif
+-	cmp	x20, #GIC_PRIO_IRQOFF
+-	/* Irqs were disabled, don't trace */
+-	b.ls	1f
++alternative_else_nop_endif
++	test_irqs_unmasked	res=x0, pmr=x20
++	cbz	x0, 1f
++	bl	asm_nmi_enter
++1:
+ #endif
++
++#ifdef CONFIG_TRACE_IRQFLAGS
+ 	bl	trace_hardirqs_off
+-1:
+ #endif
+ 
+ 	irq_handler
+@@ -647,14 +661,22 @@ alternative_else_nop_endif
+ 	bl	preempt_schedule_irq		// irq en/disable is done inside
+ 1:
+ #endif
+-#ifdef CONFIG_TRACE_IRQFLAGS
++
+ #ifdef CONFIG_ARM64_PSEUDO_NMI
+ 	 * If IRQs were disabled when we received the interrupt, we have an NMI
+ 	 * and we are not re-enabling interrupts upon eret. Skip tracing.
+ 	 * and we are not re-enabling interrupt upon eret. Skip tracing.
+ 	 */
+-	cmp	x20, #GIC_PRIO_IRQOFF
+-	b.ls	1f
++	test_irqs_unmasked	res=x0, pmr=x20
++	cbz	x0, 1f
++	bl	asm_nmi_exit
++1:
++#endif
++
++#ifdef CONFIG_TRACE_IRQFLAGS
++#ifdef CONFIG_ARM64_PSEUDO_NMI
++	test_irqs_unmasked	res=x0, pmr=x20
++	cbnz	x0, 1f
+ #endif
+ 	bl	trace_hardirqs_on
+ 1:
+@@ -855,7 +877,7 @@ el0_dbg:
+ 	mov	x1, x25
+ 	mov	x2, sp
+ 	bl	do_debug_exception
+-	enable_daif
++	enable_da_f
+ 	ct_user_exit
+ 	b	ret_to_user
+ el0_inv:
+@@ -907,7 +929,7 @@ el0_error_naked:
+ 	enable_dbg
+ 	mov	x0, sp
+ 	bl	do_serror
+-	enable_daif
++	enable_da_f
+ 	ct_user_exit
+ 	b	ret_to_user
+ ENDPROC(el0_error)
+diff --git a/arch/arm64/kernel/image.h b/arch/arm64/kernel/image.h
+index 33f14e484040..b22e8ad071b1 100644
+--- a/arch/arm64/kernel/image.h
++++ b/arch/arm64/kernel/image.h
+@@ -78,7 +78,11 @@
+ 
+ #ifdef CONFIG_EFI
+ 
+-__efistub_stext_offset = stext - _text;
++/*
++ * Use ABSOLUTE() to avoid ld.lld treating this as a relative symbol:
++ * https://github.com/ClangBuiltLinux/linux/issues/561
++ */
++__efistub_stext_offset = ABSOLUTE(stext - _text);
+ 
+ /*
+  * The EFI stub has its own symbol namespace prefixed by __efistub_, to
+diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
+index 92fa81798fb9..fdd9cb27fed5 100644
+--- a/arch/arm64/kernel/irq.c
++++ b/arch/arm64/kernel/irq.c
+@@ -27,8 +27,10 @@
+ #include <linux/smp.h>
+ #include <linux/init.h>
+ #include <linux/irqchip.h>
++#include <linux/kprobes.h>
+ #include <linux/seq_file.h>
+ #include <linux/vmalloc.h>
++#include <asm/daifflags.h>
+ #include <asm/vmap_stack.h>
+ 
+ unsigned long irq_err_count;
+@@ -76,3 +78,18 @@ void __init init_IRQ(void)
+ 	if (!handle_arch_irq)
+ 		panic("No interrupt controller found.");
+ }
++
++/*
++ * Stubs to make nmi_enter()/nmi_exit() callable from ASM
++ */
++asmlinkage void notrace asm_nmi_enter(void)
++{
++	nmi_enter();
++}
++NOKPROBE_SYMBOL(asm_nmi_enter);
++
++asmlinkage void notrace asm_nmi_exit(void)
++{
++	nmi_exit();
++}
++NOKPROBE_SYMBOL(asm_nmi_exit);
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index 7cae155e81a5..fff8c61ff608 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -191,8 +191,9 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
+ {
+ 	unsigned long max_zone_pfns[MAX_NR_ZONES]  = {0};
+ 
+-	if (IS_ENABLED(CONFIG_ZONE_DMA32))
+-		max_zone_pfns[ZONE_DMA32] = PFN_DOWN(max_zone_dma_phys());
++#ifdef CONFIG_ZONE_DMA32
++	max_zone_pfns[ZONE_DMA32] = PFN_DOWN(max_zone_dma_phys());
++#endif
+ 	max_zone_pfns[ZONE_NORMAL] = max;
+ 
+ 	free_area_init_nodes(max_zone_pfns);
+diff --git a/arch/mips/boot/compressed/Makefile b/arch/mips/boot/compressed/Makefile
+index 3c453a1f1ff1..172801ed35b8 100644
+--- a/arch/mips/boot/compressed/Makefile
++++ b/arch/mips/boot/compressed/Makefile
+@@ -78,6 +78,8 @@ OBJCOPYFLAGS_piggy.o := --add-section=.image=$(obj)/vmlinux.bin.z \
+ $(obj)/piggy.o: $(obj)/dummy.o $(obj)/vmlinux.bin.z FORCE
+ 	$(call if_changed,objcopy)
+ 
++HOSTCFLAGS_calc_vmlinuz_load_addr.o += $(LINUXINCLUDE)
++
+ # Calculate the load address of the compressed kernel image
+ hostprogs-y := calc_vmlinuz_load_addr
+ 
+diff --git a/arch/mips/boot/compressed/calc_vmlinuz_load_addr.c b/arch/mips/boot/compressed/calc_vmlinuz_load_addr.c
+index 542c3ede9722..d14f75ec8273 100644
+--- a/arch/mips/boot/compressed/calc_vmlinuz_load_addr.c
++++ b/arch/mips/boot/compressed/calc_vmlinuz_load_addr.c
+@@ -13,7 +13,7 @@
+ #include <stdint.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+-#include "../../../../include/linux/sizes.h"
++#include <linux/sizes.h>
+ 
+ int main(int argc, char *argv[])
+ {
+diff --git a/arch/mips/include/asm/mach-ath79/ar933x_uart.h b/arch/mips/include/asm/mach-ath79/ar933x_uart.h
+index c2917b39966b..bba2c8837951 100644
+--- a/arch/mips/include/asm/mach-ath79/ar933x_uart.h
++++ b/arch/mips/include/asm/mach-ath79/ar933x_uart.h
+@@ -27,8 +27,8 @@
+ #define AR933X_UART_CS_PARITY_S		0
+ #define AR933X_UART_CS_PARITY_M		0x3
+ #define	  AR933X_UART_CS_PARITY_NONE	0
+-#define	  AR933X_UART_CS_PARITY_ODD	1
+-#define	  AR933X_UART_CS_PARITY_EVEN	2
++#define	  AR933X_UART_CS_PARITY_ODD	2
++#define	  AR933X_UART_CS_PARITY_EVEN	3
+ #define AR933X_UART_CS_IF_MODE_S	2
+ #define AR933X_UART_CS_IF_MODE_M	0x3
+ #define	  AR933X_UART_CS_IF_MODE_NONE	0
+diff --git a/arch/parisc/kernel/ptrace.c b/arch/parisc/kernel/ptrace.c
+index 0964c236e3e5..de2998cb189e 100644
+--- a/arch/parisc/kernel/ptrace.c
++++ b/arch/parisc/kernel/ptrace.c
+@@ -167,6 +167,9 @@ long arch_ptrace(struct task_struct *child, long request,
+ 		if ((addr & (sizeof(unsigned long)-1)) ||
+ 		     addr >= sizeof(struct pt_regs))
+ 			break;
++		if (addr == PT_IAOQ0 || addr == PT_IAOQ1) {
++			data |= 3; /* ensure userspace privilege */
++		}
+ 		if ((addr >= PT_GR1 && addr <= PT_GR31) ||
+ 				addr == PT_IAOQ0 || addr == PT_IAOQ1 ||
+ 				(addr >= PT_FR0 && addr <= PT_FR31 + 4) ||
+@@ -228,16 +231,18 @@ long arch_ptrace(struct task_struct *child, long request,
+ 
+ static compat_ulong_t translate_usr_offset(compat_ulong_t offset)
+ {
+-	if (offset < 0)
+-		return sizeof(struct pt_regs);
+-	else if (offset <= 32*4)	/* gr[0..31] */
+-		return offset * 2 + 4;
+-	else if (offset <= 32*4+32*8)	/* gr[0..31] + fr[0..31] */
+-		return offset + 32*4;
+-	else if (offset < sizeof(struct pt_regs)/2 + 32*4)
+-		return offset * 2 + 4 - 32*8;
++	compat_ulong_t pos;
++
++	if (offset < 32*4)	/* gr[0..31] */
++		pos = offset * 2 + 4;
++	else if (offset < 32*4+32*8)	/* fr[0] ... fr[31] */
++		pos = (offset - 32*4) + PT_FR0;
++	else if (offset < sizeof(struct pt_regs)/2 + 32*4) /* sr[0] ... ipsw */
++		pos = (offset - 32*4 - 32*8) * 2 + PT_SR0 + 4;
+ 	else
+-		return sizeof(struct pt_regs);
++		pos = sizeof(struct pt_regs);
++
++	return pos;
+ }
+ 
+ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
+@@ -281,9 +286,12 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
+ 			addr = translate_usr_offset(addr);
+ 			if (addr >= sizeof(struct pt_regs))
+ 				break;
++			if (addr == PT_IAOQ0+4 || addr == PT_IAOQ1+4) {
++				data |= 3; /* ensure userspace privilege */
++			}
+ 			if (addr >= PT_FR0 && addr <= PT_FR31 + 4) {
+ 				/* Special case, fp regs are 64 bits anyway */
+-				*(__u64 *) ((char *) task_regs(child) + addr) = data;
++				*(__u32 *) ((char *) task_regs(child) + addr) = data;
+ 				ret = 0;
+ 			}
+ 			else if ((addr >= PT_GR1+4 && addr <= PT_GR31+4) ||
+@@ -496,7 +504,8 @@ static void set_reg(struct pt_regs *regs, int num, unsigned long val)
+ 			return;
+ 	case RI(iaoq[0]):
+ 	case RI(iaoq[1]):
+-			regs->iaoq[num - RI(iaoq[0])] = val;
++			/* set 2 lowest bits to ensure userspace privilege: */
++			regs->iaoq[num - RI(iaoq[0])] = val | 3;
+ 			return;
+ 	case RI(sar):	regs->sar = val;
+ 			return;
+diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
+index 505550fb2935..22f53f9c4071 100644
+--- a/arch/powerpc/include/asm/pgtable.h
++++ b/arch/powerpc/include/asm/pgtable.h
+@@ -137,6 +137,20 @@ static inline void pte_frag_set(mm_context_t *ctx, void *p)
+ }
+ #endif
+ 
++#ifdef CONFIG_PPC64
++#define is_ioremap_addr is_ioremap_addr
++static inline bool is_ioremap_addr(const void *x)
++{
++#ifdef CONFIG_MMU
++	unsigned long addr = (unsigned long)x;
++
++	return addr >= IOREMAP_BASE && addr < IOREMAP_END;
++#else
++	return false;
++#endif
++}
++#endif /* CONFIG_PPC64 */
++
+ #endif /* __ASSEMBLY__ */
+ 
+ #endif /* _ASM_POWERPC_PGTABLE_H */
+diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
+index 9481a117e242..28f230a58bf5 100644
+--- a/arch/powerpc/kernel/exceptions-64s.S
++++ b/arch/powerpc/kernel/exceptions-64s.S
+@@ -1738,7 +1738,7 @@ handle_page_fault:
+ 	addi	r3,r1,STACK_FRAME_OVERHEAD
+ 	bl	do_page_fault
+ 	cmpdi	r3,0
+-	beq+	12f
++	beq+	ret_from_except_lite
+ 	bl	save_nvgprs
+ 	mr	r5,r3
+ 	addi	r3,r1,STACK_FRAME_OVERHEAD
+@@ -1753,7 +1753,12 @@ handle_dabr_fault:
+ 	ld      r5,_DSISR(r1)
+ 	addi    r3,r1,STACK_FRAME_OVERHEAD
+ 	bl      do_break
+-12:	b       ret_from_except_lite
++	/*
++	 * do_break() may have changed the NV GPRs while handling a breakpoint.
++	 * If so, we need to restore them with their updated values. Don't use
++	 * ret_from_except_lite here.
++	 */
++	b       ret_from_except
+ 
+ 
+ #ifdef CONFIG_PPC_BOOK3S_64
+diff --git a/arch/powerpc/kernel/swsusp_32.S b/arch/powerpc/kernel/swsusp_32.S
+index 7a919e9a3400..cbdf86228eaa 100644
+--- a/arch/powerpc/kernel/swsusp_32.S
++++ b/arch/powerpc/kernel/swsusp_32.S
+@@ -25,11 +25,19 @@
+ #define SL_IBAT2	0x48
+ #define SL_DBAT3	0x50
+ #define SL_IBAT3	0x58
+-#define SL_TB		0x60
+-#define SL_R2		0x68
+-#define SL_CR		0x6c
+-#define SL_LR		0x70
+-#define SL_R12		0x74	/* r12 to r31 */
++#define SL_DBAT4	0x60
++#define SL_IBAT4	0x68
++#define SL_DBAT5	0x70
++#define SL_IBAT5	0x78
++#define SL_DBAT6	0x80
++#define SL_IBAT6	0x88
++#define SL_DBAT7	0x90
++#define SL_IBAT7	0x98
++#define SL_TB		0xa0
++#define SL_R2		0xa8
++#define SL_CR		0xac
++#define SL_LR		0xb0
++#define SL_R12		0xb4	/* r12 to r31 */
+ #define SL_SIZE		(SL_R12 + 80)
+ 
+ 	.section .data
+@@ -114,6 +122,41 @@ _GLOBAL(swsusp_arch_suspend)
+ 	mfibatl	r4,3
+ 	stw	r4,SL_IBAT3+4(r11)
+ 
++BEGIN_MMU_FTR_SECTION
++	mfspr	r4,SPRN_DBAT4U
++	stw	r4,SL_DBAT4(r11)
++	mfspr	r4,SPRN_DBAT4L
++	stw	r4,SL_DBAT4+4(r11)
++	mfspr	r4,SPRN_DBAT5U
++	stw	r4,SL_DBAT5(r11)
++	mfspr	r4,SPRN_DBAT5L
++	stw	r4,SL_DBAT5+4(r11)
++	mfspr	r4,SPRN_DBAT6U
++	stw	r4,SL_DBAT6(r11)
++	mfspr	r4,SPRN_DBAT6L
++	stw	r4,SL_DBAT6+4(r11)
++	mfspr	r4,SPRN_DBAT7U
++	stw	r4,SL_DBAT7(r11)
++	mfspr	r4,SPRN_DBAT7L
++	stw	r4,SL_DBAT7+4(r11)
++	mfspr	r4,SPRN_IBAT4U
++	stw	r4,SL_IBAT4(r11)
++	mfspr	r4,SPRN_IBAT4L
++	stw	r4,SL_IBAT4+4(r11)
++	mfspr	r4,SPRN_IBAT5U
++	stw	r4,SL_IBAT5(r11)
++	mfspr	r4,SPRN_IBAT5L
++	stw	r4,SL_IBAT5+4(r11)
++	mfspr	r4,SPRN_IBAT6U
++	stw	r4,SL_IBAT6(r11)
++	mfspr	r4,SPRN_IBAT6L
++	stw	r4,SL_IBAT6+4(r11)
++	mfspr	r4,SPRN_IBAT7U
++	stw	r4,SL_IBAT7(r11)
++	mfspr	r4,SPRN_IBAT7L
++	stw	r4,SL_IBAT7+4(r11)
++END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS)
++
+ #if  0
+ 	/* Backup various CPU config stuffs */
+ 	bl	__save_cpu_setup
+@@ -279,27 +322,41 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
+ 	mtibatu	3,r4
+ 	lwz	r4,SL_IBAT3+4(r11)
+ 	mtibatl	3,r4
+-#endif
+-
+ BEGIN_MMU_FTR_SECTION
+-	li	r4,0
++	lwz	r4,SL_DBAT4(r11)
+ 	mtspr	SPRN_DBAT4U,r4
++	lwz	r4,SL_DBAT4+4(r11)
+ 	mtspr	SPRN_DBAT4L,r4
++	lwz	r4,SL_DBAT5(r11)
+ 	mtspr	SPRN_DBAT5U,r4
++	lwz	r4,SL_DBAT5+4(r11)
+ 	mtspr	SPRN_DBAT5L,r4
++	lwz	r4,SL_DBAT6(r11)
+ 	mtspr	SPRN_DBAT6U,r4
++	lwz	r4,SL_DBAT6+4(r11)
+ 	mtspr	SPRN_DBAT6L,r4
++	lwz	r4,SL_DBAT7(r11)
+ 	mtspr	SPRN_DBAT7U,r4
++	lwz	r4,SL_DBAT7+4(r11)
+ 	mtspr	SPRN_DBAT7L,r4
++	lwz	r4,SL_IBAT4(r11)
+ 	mtspr	SPRN_IBAT4U,r4
++	lwz	r4,SL_IBAT4+4(r11)
+ 	mtspr	SPRN_IBAT4L,r4
++	lwz	r4,SL_IBAT5(r11)
+ 	mtspr	SPRN_IBAT5U,r4
++	lwz	r4,SL_IBAT5+4(r11)
+ 	mtspr	SPRN_IBAT5L,r4
++	lwz	r4,SL_IBAT6(r11)
+ 	mtspr	SPRN_IBAT6U,r4
++	lwz	r4,SL_IBAT6+4(r11)
+ 	mtspr	SPRN_IBAT6L,r4
++	lwz	r4,SL_IBAT7(r11)
+ 	mtspr	SPRN_IBAT7U,r4
++	lwz	r4,SL_IBAT7+4(r11)
+ 	mtspr	SPRN_IBAT7L,r4
+ END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS)
++#endif
+ 
+ 	/* Flush all TLBs */
+ 	lis	r4,0x1000
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 6d4f0f72231f..335fac522234 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -3568,6 +3568,8 @@ int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
+ 
+ 	vcpu->arch.slb_max = 0;
+ 	dec = mfspr(SPRN_DEC);
++	if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */
++		dec = (s32) dec;
+ 	tb = mftb();
+ 	vcpu->arch.dec_expires = dec + tb;
+ 	vcpu->cpu = -1;
+@@ -4082,8 +4084,15 @@ int kvmhv_run_single_vcpu(struct kvm_run *kvm_run,
+ 
+ 	preempt_enable();
+ 
+-	/* cancel pending decrementer exception if DEC is now positive */
+-	if (get_tb() < vcpu->arch.dec_expires && kvmppc_core_pending_dec(vcpu))
++	/*
++	 * cancel pending decrementer exception if DEC is now positive, or if
++	 * entering a nested guest, in which case the decrementer is now owned
++	 * by L2 and the L1 decrementer is provided in hdec_expires
++	 */
++	if (kvmppc_core_pending_dec(vcpu) &&
++			((get_tb() < vcpu->arch.dec_expires) ||
++			 (trap == BOOK3S_INTERRUPT_SYSCALL &&
++			  kvmppc_get_gpr(vcpu, 3) == H_ENTER_NESTED)))
+ 		kvmppc_core_dequeue_dec(vcpu);
+ 
+ 	trace_kvm_guest_exit(vcpu);
+diff --git a/arch/powerpc/kvm/book3s_hv_tm.c b/arch/powerpc/kvm/book3s_hv_tm.c
+index 888e2609e3f1..31cd0f327c8a 100644
+--- a/arch/powerpc/kvm/book3s_hv_tm.c
++++ b/arch/powerpc/kvm/book3s_hv_tm.c
+@@ -131,7 +131,7 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
+ 		}
+ 		/* Set CR0 to indicate previous transactional state */
+ 		vcpu->arch.regs.ccr = (vcpu->arch.regs.ccr & 0x0fffffff) |
+-			(((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 28);
++			(((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 29);
+ 		/* L=1 => tresume, L=0 => tsuspend */
+ 		if (instr & (1 << 21)) {
+ 			if (MSR_TM_SUSPENDED(msr))
+@@ -175,7 +175,7 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
+ 
+ 		/* Set CR0 to indicate previous transactional state */
+ 		vcpu->arch.regs.ccr = (vcpu->arch.regs.ccr & 0x0fffffff) |
+-			(((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 28);
++			(((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 29);
+ 		vcpu->arch.shregs.msr &= ~MSR_TS_MASK;
+ 		return RESUME_GUEST;
+ 
+@@ -205,7 +205,7 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
+ 
+ 		/* Set CR0 to indicate previous transactional state */
+ 		vcpu->arch.regs.ccr = (vcpu->arch.regs.ccr & 0x0fffffff) |
+-			(((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 28);
++			(((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 29);
+ 		vcpu->arch.shregs.msr = msr | MSR_TS_S;
+ 		return RESUME_GUEST;
+ 	}
+diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
+index 6e56a6240bfa..7bf491bb0ca4 100644
+--- a/arch/powerpc/mm/pgtable_32.c
++++ b/arch/powerpc/mm/pgtable_32.c
+@@ -353,7 +353,7 @@ void mark_initmem_nx(void)
+ 	unsigned long numpages = PFN_UP((unsigned long)_einittext) -
+ 				 PFN_DOWN((unsigned long)_sinittext);
+ 
+-	if (v_block_mapped((unsigned long)_stext) + 1)
++	if (v_block_mapped((unsigned long)_stext + 1))
+ 		mmu_mark_initmem_nx();
+ 	else
+ 		change_page_attr(page, numpages, PAGE_KERNEL);
+diff --git a/arch/powerpc/platforms/powermac/sleep.S b/arch/powerpc/platforms/powermac/sleep.S
+index fb64b09cad9d..eb583bd9a75d 100644
+--- a/arch/powerpc/platforms/powermac/sleep.S
++++ b/arch/powerpc/platforms/powermac/sleep.S
+@@ -38,10 +38,18 @@
+ #define SL_IBAT2	0x48
+ #define SL_DBAT3	0x50
+ #define SL_IBAT3	0x58
+-#define SL_TB		0x60
+-#define SL_R2		0x68
+-#define SL_CR		0x6c
+-#define SL_R12		0x70	/* r12 to r31 */
++#define SL_DBAT4	0x60
++#define SL_IBAT4	0x68
++#define SL_DBAT5	0x70
++#define SL_IBAT5	0x78
++#define SL_DBAT6	0x80
++#define SL_IBAT6	0x88
++#define SL_DBAT7	0x90
++#define SL_IBAT7	0x98
++#define SL_TB		0xa0
++#define SL_R2		0xa8
++#define SL_CR		0xac
++#define SL_R12		0xb0	/* r12 to r31 */
+ #define SL_SIZE		(SL_R12 + 80)
+ 
+ 	.section .text
+@@ -126,6 +134,41 @@ _GLOBAL(low_sleep_handler)
+ 	mfibatl	r4,3
+ 	stw	r4,SL_IBAT3+4(r1)
+ 
++BEGIN_MMU_FTR_SECTION
++	mfspr	r4,SPRN_DBAT4U
++	stw	r4,SL_DBAT4(r1)
++	mfspr	r4,SPRN_DBAT4L
++	stw	r4,SL_DBAT4+4(r1)
++	mfspr	r4,SPRN_DBAT5U
++	stw	r4,SL_DBAT5(r1)
++	mfspr	r4,SPRN_DBAT5L
++	stw	r4,SL_DBAT5+4(r1)
++	mfspr	r4,SPRN_DBAT6U
++	stw	r4,SL_DBAT6(r1)
++	mfspr	r4,SPRN_DBAT6L
++	stw	r4,SL_DBAT6+4(r1)
++	mfspr	r4,SPRN_DBAT7U
++	stw	r4,SL_DBAT7(r1)
++	mfspr	r4,SPRN_DBAT7L
++	stw	r4,SL_DBAT7+4(r1)
++	mfspr	r4,SPRN_IBAT4U
++	stw	r4,SL_IBAT4(r1)
++	mfspr	r4,SPRN_IBAT4L
++	stw	r4,SL_IBAT4+4(r1)
++	mfspr	r4,SPRN_IBAT5U
++	stw	r4,SL_IBAT5(r1)
++	mfspr	r4,SPRN_IBAT5L
++	stw	r4,SL_IBAT5+4(r1)
++	mfspr	r4,SPRN_IBAT6U
++	stw	r4,SL_IBAT6(r1)
++	mfspr	r4,SPRN_IBAT6L
++	stw	r4,SL_IBAT6+4(r1)
++	mfspr	r4,SPRN_IBAT7U
++	stw	r4,SL_IBAT7(r1)
++	mfspr	r4,SPRN_IBAT7L
++	stw	r4,SL_IBAT7+4(r1)
++END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS)
++
+ 	/* Backup various CPU config stuffs */
+ 	bl	__save_cpu_setup
+ 
+@@ -326,22 +369,37 @@ grackle_wake_up:
+ 	mtibatl	3,r4
+ 
+ BEGIN_MMU_FTR_SECTION
+-	li	r4,0
++	lwz	r4,SL_DBAT4(r1)
+ 	mtspr	SPRN_DBAT4U,r4
++	lwz	r4,SL_DBAT4+4(r1)
+ 	mtspr	SPRN_DBAT4L,r4
++	lwz	r4,SL_DBAT5(r1)
+ 	mtspr	SPRN_DBAT5U,r4
++	lwz	r4,SL_DBAT5+4(r1)
+ 	mtspr	SPRN_DBAT5L,r4
++	lwz	r4,SL_DBAT6(r1)
+ 	mtspr	SPRN_DBAT6U,r4
++	lwz	r4,SL_DBAT6+4(r1)
+ 	mtspr	SPRN_DBAT6L,r4
++	lwz	r4,SL_DBAT7(r1)
+ 	mtspr	SPRN_DBAT7U,r4
++	lwz	r4,SL_DBAT7+4(r1)
+ 	mtspr	SPRN_DBAT7L,r4
++	lwz	r4,SL_IBAT4(r1)
+ 	mtspr	SPRN_IBAT4U,r4
++	lwz	r4,SL_IBAT4+4(r1)
+ 	mtspr	SPRN_IBAT4L,r4
++	lwz	r4,SL_IBAT5(r1)
+ 	mtspr	SPRN_IBAT5U,r4
++	lwz	r4,SL_IBAT5+4(r1)
+ 	mtspr	SPRN_IBAT5L,r4
++	lwz	r4,SL_IBAT6(r1)
+ 	mtspr	SPRN_IBAT6U,r4
++	lwz	r4,SL_IBAT6+4(r1)
+ 	mtspr	SPRN_IBAT6L,r4
++	lwz	r4,SL_IBAT7(r1)
+ 	mtspr	SPRN_IBAT7U,r4
++	lwz	r4,SL_IBAT7+4(r1)
+ 	mtspr	SPRN_IBAT7L,r4
+ END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS)
+ 
+diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
+index dc23d9d2a7d9..6309819ee2f4 100644
+--- a/arch/powerpc/platforms/powernv/npu-dma.c
++++ b/arch/powerpc/platforms/powernv/npu-dma.c
+@@ -31,9 +31,22 @@ static DEFINE_SPINLOCK(npu_context_lock);
+ static struct pci_dev *get_pci_dev(struct device_node *dn)
+ {
+ 	struct pci_dn *pdn = PCI_DN(dn);
++	struct pci_dev *pdev;
+ 
+-	return pci_get_domain_bus_and_slot(pci_domain_nr(pdn->phb->bus),
++	pdev = pci_get_domain_bus_and_slot(pci_domain_nr(pdn->phb->bus),
+ 					   pdn->busno, pdn->devfn);
++
++	/*
++	 * pci_get_domain_bus_and_slot() increased the reference count of
++	 * the PCI device, but callers don't actually need that, as the PE
++	 * already holds a reference to the device. Since callers aren't
++	 * aware of the reference count change, call pci_dev_put() now to
++	 * avoid leaks.
++	 */
++	if (pdev)
++		pci_dev_put(pdev);
++
++	return pdev;
+ }
+ 
+ /* Given a NPU device get the associated PCI device. */
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
+index 3ead4c237ed0..34427b46ec3b 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda.c
+@@ -2459,6 +2459,14 @@ static long pnv_pci_ioda2_setup_default_config(struct pnv_ioda_pe *pe)
+ 	if (!pnv_iommu_bypass_disabled)
+ 		pnv_pci_ioda2_set_bypass(pe, true);
+ 
++	/*
++	 * Set table base for the case of IOMMU DMA use. Usually this is done
++	 * from dma_dev_setup() which is not called when a device is returned
++	 * from VFIO, so do it here.
++	 */
++	if (pe->pdev)
++		set_iommu_table_base(&pe->pdev->dev, tbl);
++
+ 	return 0;
+ }
+ 
+@@ -2546,6 +2554,8 @@ static void pnv_ioda2_take_ownership(struct iommu_table_group *table_group)
+ 	pnv_pci_ioda2_unset_window(&pe->table_group, 0);
+ 	if (pe->pbus)
+ 		pnv_ioda_setup_bus_dma(pe, pe->pbus);
++	else if (pe->pdev)
++		set_iommu_table_base(&pe->pdev->dev, NULL);
+ 	iommu_tce_table_put(tbl);
+ }
+ 
+diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
+index 47087832f8b2..e6bd172bcf30 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
++++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
+@@ -980,6 +980,9 @@ static int pseries_update_drconf_memory(struct of_reconfig_data *pr)
+ 	if (!memblock_size)
+ 		return -EINVAL;
+ 
++	if (!pr->old_prop)
++		return 0;
++
+ 	p = (__be32 *) pr->old_prop->value;
+ 	if (!p)
+ 		return -EINVAL;
+diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
+index 79cfd3b30ceb..771e9b3b62eb 100644
+--- a/arch/x86/events/amd/uncore.c
++++ b/arch/x86/events/amd/uncore.c
+@@ -205,15 +205,22 @@ static int amd_uncore_event_init(struct perf_event *event)
+ 	hwc->config = event->attr.config & AMD64_RAW_EVENT_MASK_NB;
+ 	hwc->idx = -1;
+ 
++	if (event->cpu < 0)
++		return -EINVAL;
++
+ 	/*
+ 	 * SliceMask and ThreadMask need to be set for certain L3 events in
+ 	 * Family 17h. For other events, the two fields do not affect the count.
+ 	 */
+-	if (l3_mask)
+-		hwc->config |= (AMD64_L3_SLICE_MASK | AMD64_L3_THREAD_MASK);
++	if (l3_mask && is_llc_event(event)) {
++		int thread = 2 * (cpu_data(event->cpu).cpu_core_id % 4);
+ 
+-	if (event->cpu < 0)
+-		return -EINVAL;
++		if (smp_num_siblings > 1)
++			thread += cpu_data(event->cpu).apicid & 1;
++
++		hwc->config |= (1ULL << (AMD64_L3_THREAD_SHIFT + thread) &
++				AMD64_L3_THREAD_MASK) | AMD64_L3_SLICE_MASK;
++	}
+ 
+ 	uncore = event_to_amd_uncore(event);
+ 	if (!uncore)
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 82dad001d1ea..228bad687caa 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -19,6 +19,7 @@
+ #include <asm/intel-family.h>
+ #include <asm/apic.h>
+ #include <asm/cpu_device_id.h>
++#include <asm/hypervisor.h>
+ 
+ #include "../perf_event.h"
+ 
+@@ -2091,12 +2092,10 @@ static void intel_pmu_disable_event(struct perf_event *event)
+ 	cpuc->intel_ctrl_host_mask &= ~(1ull << hwc->idx);
+ 	cpuc->intel_cp_status &= ~(1ull << hwc->idx);
+ 
+-	if (unlikely(hwc->config_base == MSR_ARCH_PERFMON_FIXED_CTR_CTRL)) {
++	if (unlikely(hwc->config_base == MSR_ARCH_PERFMON_FIXED_CTR_CTRL))
+ 		intel_pmu_disable_fixed(hwc);
+-		return;
+-	}
+-
+-	x86_pmu_disable_event(event);
++	else
++		x86_pmu_disable_event(event);
+ 
+ 	/*
+ 	 * Needs to be called after x86_pmu_disable_event,
+@@ -3927,6 +3926,13 @@ static bool check_msr(unsigned long msr, u64 mask)
+ {
+ 	u64 val_old, val_new, val_tmp;
+ 
++	/*
++	 * Disable the check for real HW, so we don't
++	 * mess with potentially enabled registers:
++	 */
++	if (hypervisor_is_type(X86_HYPER_NATIVE))
++		return true;
++
+ 	/*
+ 	 * Read the current value, change it and read it back to see if it
+ 	 * matches, this is needed to detect certain hardware emulators
+diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
+index 853a49a8ccf6..b24da63459c4 100644
+--- a/arch/x86/events/intel/uncore.h
++++ b/arch/x86/events/intel/uncore.h
+@@ -419,6 +419,16 @@ static inline bool is_freerunning_event(struct perf_event *event)
+ 	       (((cfg >> 8) & 0xff) >= UNCORE_FREERUNNING_UMASK_START);
+ }
+ 
++/* Check and reject invalid config */
++static inline int uncore_freerunning_hw_config(struct intel_uncore_box *box,
++					       struct perf_event *event)
++{
++	if (is_freerunning_event(event))
++		return 0;
++
++	return -EINVAL;
++}
++
+ static inline void uncore_disable_box(struct intel_uncore_box *box)
+ {
+ 	if (box->pmu->type->ops->disable_box)
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index b10e04387f38..8e4e8e423839 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -3585,6 +3585,7 @@ static struct uncore_event_desc skx_uncore_iio_freerunning_events[] = {
+ 
+ static struct intel_uncore_ops skx_uncore_iio_freerunning_ops = {
+ 	.read_counter		= uncore_msr_read_counter,
++	.hw_config		= uncore_freerunning_hw_config,
+ };
+ 
+ static struct attribute *skx_uncore_iio_freerunning_formats_attr[] = {
+diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
+index ea3d95275b43..115127c7ad28 100644
+--- a/arch/x86/include/asm/atomic.h
++++ b/arch/x86/include/asm/atomic.h
+@@ -54,7 +54,7 @@ static __always_inline void arch_atomic_add(int i, atomic_t *v)
+ {
+ 	asm volatile(LOCK_PREFIX "addl %1,%0"
+ 		     : "+m" (v->counter)
+-		     : "ir" (i));
++		     : "ir" (i) : "memory");
+ }
+ 
+ /**
+@@ -68,7 +68,7 @@ static __always_inline void arch_atomic_sub(int i, atomic_t *v)
+ {
+ 	asm volatile(LOCK_PREFIX "subl %1,%0"
+ 		     : "+m" (v->counter)
+-		     : "ir" (i));
++		     : "ir" (i) : "memory");
+ }
+ 
+ /**
+@@ -95,7 +95,7 @@ static __always_inline bool arch_atomic_sub_and_test(int i, atomic_t *v)
+ static __always_inline void arch_atomic_inc(atomic_t *v)
+ {
+ 	asm volatile(LOCK_PREFIX "incl %0"
+-		     : "+m" (v->counter));
++		     : "+m" (v->counter) :: "memory");
+ }
+ #define arch_atomic_inc arch_atomic_inc
+ 
+@@ -108,7 +108,7 @@ static __always_inline void arch_atomic_inc(atomic_t *v)
+ static __always_inline void arch_atomic_dec(atomic_t *v)
+ {
+ 	asm volatile(LOCK_PREFIX "decl %0"
+-		     : "+m" (v->counter));
++		     : "+m" (v->counter) :: "memory");
+ }
+ #define arch_atomic_dec arch_atomic_dec
+ 
+diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h
+index dadc20adba21..5e86c0d68ac1 100644
+--- a/arch/x86/include/asm/atomic64_64.h
++++ b/arch/x86/include/asm/atomic64_64.h
+@@ -45,7 +45,7 @@ static __always_inline void arch_atomic64_add(long i, atomic64_t *v)
+ {
+ 	asm volatile(LOCK_PREFIX "addq %1,%0"
+ 		     : "=m" (v->counter)
+-		     : "er" (i), "m" (v->counter));
++		     : "er" (i), "m" (v->counter) : "memory");
+ }
+ 
+ /**
+@@ -59,7 +59,7 @@ static inline void arch_atomic64_sub(long i, atomic64_t *v)
+ {
+ 	asm volatile(LOCK_PREFIX "subq %1,%0"
+ 		     : "=m" (v->counter)
+-		     : "er" (i), "m" (v->counter));
++		     : "er" (i), "m" (v->counter) : "memory");
+ }
+ 
+ /**
+@@ -87,7 +87,7 @@ static __always_inline void arch_atomic64_inc(atomic64_t *v)
+ {
+ 	asm volatile(LOCK_PREFIX "incq %0"
+ 		     : "=m" (v->counter)
+-		     : "m" (v->counter));
++		     : "m" (v->counter) : "memory");
+ }
+ #define arch_atomic64_inc arch_atomic64_inc
+ 
+@@ -101,7 +101,7 @@ static __always_inline void arch_atomic64_dec(atomic64_t *v)
+ {
+ 	asm volatile(LOCK_PREFIX "decq %0"
+ 		     : "=m" (v->counter)
+-		     : "m" (v->counter));
++		     : "m" (v->counter) : "memory");
+ }
+ #define arch_atomic64_dec arch_atomic64_dec
+ 
+diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
+index 14de0432d288..84f848c2541a 100644
+--- a/arch/x86/include/asm/barrier.h
++++ b/arch/x86/include/asm/barrier.h
+@@ -80,8 +80,8 @@ do {									\
+ })
+ 
+ /* Atomic operations are already serializing on x86 */
+-#define __smp_mb__before_atomic()	barrier()
+-#define __smp_mb__after_atomic()	barrier()
++#define __smp_mb__before_atomic()	do { } while (0)
++#define __smp_mb__after_atomic()	do { } while (0)
+ 
+ #include <asm-generic/barrier.h>
+ 
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 75f27ee2c263..1017b9c7dfe0 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -239,12 +239,14 @@
+ #define X86_FEATURE_BMI1		( 9*32+ 3) /* 1st group bit manipulation extensions */
+ #define X86_FEATURE_HLE			( 9*32+ 4) /* Hardware Lock Elision */
+ #define X86_FEATURE_AVX2		( 9*32+ 5) /* AVX2 instructions */
++#define X86_FEATURE_FDP_EXCPTN_ONLY	( 9*32+ 6) /* "" FPU data pointer updated only on x87 exceptions */
+ #define X86_FEATURE_SMEP		( 9*32+ 7) /* Supervisor Mode Execution Protection */
+ #define X86_FEATURE_BMI2		( 9*32+ 8) /* 2nd group bit manipulation extensions */
+ #define X86_FEATURE_ERMS		( 9*32+ 9) /* Enhanced REP MOVSB/STOSB instructions */
+ #define X86_FEATURE_INVPCID		( 9*32+10) /* Invalidate Processor Context ID */
+ #define X86_FEATURE_RTM			( 9*32+11) /* Restricted Transactional Memory */
+ #define X86_FEATURE_CQM			( 9*32+12) /* Cache QoS Monitoring */
++#define X86_FEATURE_ZERO_FCS_FDS	( 9*32+13) /* "" Zero out FPU CS and FPU DS */
+ #define X86_FEATURE_MPX			( 9*32+14) /* Memory Protection Extension */
+ #define X86_FEATURE_RDT_A		( 9*32+15) /* Resource Director Technology Allocation */
+ #define X86_FEATURE_AVX512F		( 9*32+16) /* AVX-512 Foundation */
+diff --git a/arch/x86/include/asm/intel-family.h b/arch/x86/include/asm/intel-family.h
+index 310118805f57..f60ddd655c78 100644
+--- a/arch/x86/include/asm/intel-family.h
++++ b/arch/x86/include/asm/intel-family.h
+@@ -56,6 +56,7 @@
+ #define INTEL_FAM6_ICELAKE_XEON_D	0x6C
+ #define INTEL_FAM6_ICELAKE_DESKTOP	0x7D
+ #define INTEL_FAM6_ICELAKE_MOBILE	0x7E
++#define INTEL_FAM6_ICELAKE_NNPI		0x9D
+ 
+ /* "Small Core" Processors (Atom) */
+ 
+diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
+index 395d46f78582..c7503be92f35 100644
+--- a/arch/x86/kernel/cpu/cacheinfo.c
++++ b/arch/x86/kernel/cpu/cacheinfo.c
+@@ -658,8 +658,7 @@ void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu, u8 node_id)
+ 	if (c->x86 < 0x17) {
+ 		/* LLC is at the node level. */
+ 		per_cpu(cpu_llc_id, cpu) = node_id;
+-	} else if (c->x86 == 0x17 &&
+-		   c->x86_model >= 0 && c->x86_model <= 0x1F) {
++	} else if (c->x86 == 0x17 && c->x86_model <= 0x1F) {
+ 		/*
+ 		 * LLC is at the core complex level.
+ 		 * Core complex ID is ApicId[3] for these processors.
+diff --git a/arch/x86/kernel/cpu/mkcapflags.sh b/arch/x86/kernel/cpu/mkcapflags.sh
+index d0dfb892c72f..aed45b8895d5 100644
+--- a/arch/x86/kernel/cpu/mkcapflags.sh
++++ b/arch/x86/kernel/cpu/mkcapflags.sh
+@@ -4,6 +4,8 @@
+ # Generate the x86_cap/bug_flags[] arrays from include/asm/cpufeatures.h
+ #
+ 
++set -e
++
+ IN=$1
+ OUT=$2
+ 
+diff --git a/arch/x86/kernel/mpparse.c b/arch/x86/kernel/mpparse.c
+index 1bfe5c6e6cfe..afac7ccce72f 100644
+--- a/arch/x86/kernel/mpparse.c
++++ b/arch/x86/kernel/mpparse.c
+@@ -546,17 +546,15 @@ void __init default_get_smp_config(unsigned int early)
+ 			 * local APIC has default address
+ 			 */
+ 			mp_lapic_addr = APIC_DEFAULT_PHYS_BASE;
+-			return;
++			goto out;
+ 		}
+ 
+ 		pr_info("Default MP configuration #%d\n", mpf->feature1);
+ 		construct_default_ISA_mptable(mpf->feature1);
+ 
+ 	} else if (mpf->physptr) {
+-		if (check_physptr(mpf, early)) {
+-			early_memunmap(mpf, sizeof(*mpf));
+-			return;
+-		}
++		if (check_physptr(mpf, early))
++			goto out;
+ 	} else
+ 		BUG();
+ 
+@@ -565,7 +563,7 @@ void __init default_get_smp_config(unsigned int early)
+ 	/*
+ 	 * Only use the first configuration found.
+ 	 */
+-
++out:
+ 	early_memunmap(mpf, sizeof(*mpf));
+ }
+ 
+diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
+index dd745b58ffd8..ccefaf5218c7 100644
+--- a/arch/x86/kvm/pmu.c
++++ b/arch/x86/kvm/pmu.c
+@@ -131,8 +131,8 @@ static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
+ 						 intr ? kvm_perf_overflow_intr :
+ 						 kvm_perf_overflow, pmc);
+ 	if (IS_ERR(event)) {
+-		printk_once("kvm_pmu: event creation failed %ld\n",
+-			    PTR_ERR(event));
++		pr_debug_ratelimited("kvm_pmu: event creation failed %ld for pmc->idx = %d\n",
++			    PTR_ERR(event), pmc->idx);
+ 		return;
+ 	}
+ 
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 4ca834d22169..f83e79a4d0b2 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -2243,13 +2243,9 @@ static void prepare_vmcs02_full(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
+ 
+ 	set_cr4_guest_host_mask(vmx);
+ 
+-	if (kvm_mpx_supported()) {
+-		if (vmx->nested.nested_run_pending &&
+-			(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS))
+-			vmcs_write64(GUEST_BNDCFGS, vmcs12->guest_bndcfgs);
+-		else
+-			vmcs_write64(GUEST_BNDCFGS, vmx->nested.vmcs01_guest_bndcfgs);
+-	}
++	if (kvm_mpx_supported() && vmx->nested.nested_run_pending &&
++	    (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS))
++		vmcs_write64(GUEST_BNDCFGS, vmcs12->guest_bndcfgs);
+ }
+ 
+ /*
+@@ -2292,6 +2288,9 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+ 		kvm_set_dr(vcpu, 7, vcpu->arch.dr7);
+ 		vmcs_write64(GUEST_IA32_DEBUGCTL, vmx->nested.vmcs01_debugctl);
+ 	}
++	if (kvm_mpx_supported() && (!vmx->nested.nested_run_pending ||
++	    !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS)))
++		vmcs_write64(GUEST_BNDCFGS, vmx->nested.vmcs01_guest_bndcfgs);
+ 	vmx_set_rflags(vcpu, vmcs12->guest_rflags);
+ 
+ 	/* EXCEPTION_BITMAP and CR0_GUEST_HOST_MASK should basically be the
+@@ -2891,9 +2890,6 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
+ 			 */
+ 			vmcs_clear_bits(CPU_BASED_VM_EXEC_CONTROL,
+ 					CPU_BASED_TPR_SHADOW);
+-		} else {
+-			printk("bad virtual-APIC page address\n");
+-			dump_vmcs();
+ 		}
+ 	}
+ 
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index cfb8f1ec9a0a..d9ea86cc899f 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -1718,7 +1718,10 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		return vmx_get_vmx_msr(&vmx->nested.msrs, msr_info->index,
+ 				       &msr_info->data);
+ 	case MSR_IA32_XSS:
+-		if (!vmx_xsaves_supported())
++		if (!vmx_xsaves_supported() ||
++		    (!msr_info->host_initiated &&
++		     !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
++		       guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
+ 			return 1;
+ 		msr_info->data = vcpu->arch.ia32_xss;
+ 		break;
+@@ -1929,7 +1932,10 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 			return 1;
+ 		return vmx_set_vmx_msr(vcpu, msr_index, data);
+ 	case MSR_IA32_XSS:
+-		if (!vmx_xsaves_supported())
++		if (!vmx_xsaves_supported() ||
++		    (!msr_info->host_initiated &&
++		     !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
++		       guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
+ 			return 1;
+ 		/*
+ 		 * The only supported bit as of Skylake is bit 8, but
+@@ -6102,28 +6108,21 @@ static void vmx_apicv_post_state_restore(struct kvm_vcpu *vcpu)
+ 
+ static void vmx_complete_atomic_exit(struct vcpu_vmx *vmx)
+ {
+-	u32 exit_intr_info = 0;
+-	u16 basic_exit_reason = (u16)vmx->exit_reason;
+-
+-	if (!(basic_exit_reason == EXIT_REASON_MCE_DURING_VMENTRY
+-	      || basic_exit_reason == EXIT_REASON_EXCEPTION_NMI))
++	if (vmx->exit_reason != EXIT_REASON_EXCEPTION_NMI)
+ 		return;
+ 
+-	if (!(vmx->exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY))
+-		exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO);
+-	vmx->exit_intr_info = exit_intr_info;
++	vmx->exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO);
+ 
+ 	/* if exit due to PF check for async PF */
+-	if (is_page_fault(exit_intr_info))
++	if (is_page_fault(vmx->exit_intr_info))
+ 		vmx->vcpu.arch.apf.host_apf_reason = kvm_read_and_reset_pf_reason();
+ 
+ 	/* Handle machine checks before interrupts are enabled */
+-	if (basic_exit_reason == EXIT_REASON_MCE_DURING_VMENTRY ||
+-	    is_machine_check(exit_intr_info))
++	if (is_machine_check(vmx->exit_intr_info))
+ 		kvm_machine_check();
+ 
+ 	/* We need to handle NMIs before interrupts are enabled */
+-	if (is_nmi(exit_intr_info)) {
++	if (is_nmi(vmx->exit_intr_info)) {
+ 		kvm_before_interrupt(&vmx->vcpu);
+ 		asm("int $2");
+ 		kvm_after_interrupt(&vmx->vcpu);
+@@ -6526,6 +6525,9 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
+ 	vmx->idt_vectoring_info = 0;
+ 
+ 	vmx->exit_reason = vmx->fail ? 0xdead : vmcs_read32(VM_EXIT_REASON);
++	if ((u16)vmx->exit_reason == EXIT_REASON_MCE_DURING_VMENTRY)
++		kvm_machine_check();
++
+ 	if (vmx->fail || (vmx->exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY))
+ 		return;
+ 
+diff --git a/block/bio.c b/block/bio.c
+index a3c80a6c1fe5..e73a2a4c01cd 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -29,6 +29,7 @@
+ #include <linux/workqueue.h>
+ #include <linux/cgroup.h>
+ #include <linux/blk-cgroup.h>
++#include <linux/highmem.h>
+ 
+ #include <trace/events/block.h>
+ #include "blk.h"
+@@ -1475,8 +1476,22 @@ void bio_unmap_user(struct bio *bio)
+ 	bio_put(bio);
+ }
+ 
++static void bio_invalidate_vmalloc_pages(struct bio *bio)
++{
++#ifdef ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
++	if (bio->bi_private && !op_is_write(bio_op(bio))) {
++		unsigned long i, len = 0;
++
++		for (i = 0; i < bio->bi_vcnt; i++)
++			len += bio->bi_io_vec[i].bv_len;
++		invalidate_kernel_vmap_range(bio->bi_private, len);
++	}
++#endif
++}
++
+ static void bio_map_kern_endio(struct bio *bio)
+ {
++	bio_invalidate_vmalloc_pages(bio);
+ 	bio_put(bio);
+ }
+ 
+@@ -1497,6 +1512,8 @@ struct bio *bio_map_kern(struct request_queue *q, void *data, unsigned int len,
+ 	unsigned long end = (kaddr + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ 	unsigned long start = kaddr >> PAGE_SHIFT;
+ 	const int nr_pages = end - start;
++	bool is_vmalloc = is_vmalloc_addr(data);
++	struct page *page;
+ 	int offset, i;
+ 	struct bio *bio;
+ 
+@@ -1504,6 +1521,11 @@ struct bio *bio_map_kern(struct request_queue *q, void *data, unsigned int len,
+ 	if (!bio)
+ 		return ERR_PTR(-ENOMEM);
+ 
++	if (is_vmalloc) {
++		flush_kernel_vmap_range(data, len);
++		bio->bi_private = data;
++	}
++
+ 	offset = offset_in_page(kaddr);
+ 	for (i = 0; i < nr_pages; i++) {
+ 		unsigned int bytes = PAGE_SIZE - offset;
+@@ -1514,7 +1536,11 @@ struct bio *bio_map_kern(struct request_queue *q, void *data, unsigned int len,
+ 		if (bytes > len)
+ 			bytes = len;
+ 
+-		if (bio_add_pc_page(q, bio, virt_to_page(data), bytes,
++		if (!is_vmalloc)
++			page = virt_to_page(data);
++		else
++			page = vmalloc_to_page(data);
++		if (bio_add_pc_page(q, bio, page, bytes,
+ 				    offset) < bytes) {
+ 			/* we don't support partial mappings */
+ 			bio_put(bio);
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 617a2b3f7582..95593df93986 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -1005,8 +1005,12 @@ static int blkcg_print_stat(struct seq_file *sf, void *v)
+ 		}
+ next:
+ 		if (has_stats) {
+-			off += scnprintf(buf+off, size-off, "\n");
+-			seq_commit(sf, off);
++			if (off < size - 1) {
++				off += scnprintf(buf+off, size-off, "\n");
++				seq_commit(sf, off);
++			} else {
++				seq_commit(sf, -1);
++			}
+ 		}
+ 	}
+ 
+diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
+index 507212d75ee2..51070a730e3c 100644
+--- a/block/blk-iolatency.c
++++ b/block/blk-iolatency.c
+@@ -617,44 +617,26 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio)
+ 
+ 		inflight = atomic_dec_return(&rqw->inflight);
+ 		WARN_ON_ONCE(inflight < 0);
+-		if (iolat->min_lat_nsec == 0)
+-			goto next;
+-		iolatency_record_time(iolat, &bio->bi_issue, now,
+-				      issue_as_root);
+-		window_start = atomic64_read(&iolat->window_start);
+-		if (now > window_start &&
+-		    (now - window_start) >= iolat->cur_win_nsec) {
+-			if (atomic64_cmpxchg(&iolat->window_start,
+-					window_start, now) == window_start)
+-				iolatency_check_latencies(iolat, now);
++		/*
++		 * If bi_status is BLK_STS_AGAIN, the bio wasn't actually
++		 * submitted, so do not account for it.
++		 */
++		if (iolat->min_lat_nsec && bio->bi_status != BLK_STS_AGAIN) {
++			iolatency_record_time(iolat, &bio->bi_issue, now,
++					      issue_as_root);
++			window_start = atomic64_read(&iolat->window_start);
++			if (now > window_start &&
++			    (now - window_start) >= iolat->cur_win_nsec) {
++				if (atomic64_cmpxchg(&iolat->window_start,
++					     window_start, now) == window_start)
++					iolatency_check_latencies(iolat, now);
++			}
+ 		}
+-next:
+ 		wake_up(&rqw->wait);
+ 		blkg = blkg->parent;
+ 	}
+ }
+ 
+-static void blkcg_iolatency_cleanup(struct rq_qos *rqos, struct bio *bio)
+-{
+-	struct blkcg_gq *blkg;
+-
+-	blkg = bio->bi_blkg;
+-	while (blkg && blkg->parent) {
+-		struct rq_wait *rqw;
+-		struct iolatency_grp *iolat;
+-
+-		iolat = blkg_to_lat(blkg);
+-		if (!iolat)
+-			goto next;
+-
+-		rqw = &iolat->rq_wait;
+-		atomic_dec(&rqw->inflight);
+-		wake_up(&rqw->wait);
+-next:
+-		blkg = blkg->parent;
+-	}
+-}
+-
+ static void blkcg_iolatency_exit(struct rq_qos *rqos)
+ {
+ 	struct blk_iolatency *blkiolat = BLKIOLATENCY(rqos);
+@@ -666,7 +648,6 @@ static void blkcg_iolatency_exit(struct rq_qos *rqos)
+ 
+ static struct rq_qos_ops blkcg_iolatency_ops = {
+ 	.throttle = blkcg_iolatency_throttle,
+-	.cleanup = blkcg_iolatency_cleanup,
+ 	.done_bio = blkcg_iolatency_done_bio,
+ 	.exit = blkcg_iolatency_exit,
+ };
+@@ -777,8 +758,10 @@ static int iolatency_set_min_lat_nsec(struct blkcg_gq *blkg, u64 val)
+ 
+ 	if (!oldval && val)
+ 		return 1;
+-	if (oldval && !val)
++	if (oldval && !val) {
++		blkcg_clear_delay(blkg);
+ 		return -1;
++	}
+ 	return 0;
+ }
+ 
+diff --git a/block/blk-throttle.c b/block/blk-throttle.c
+index 1b97a73d2fb1..61e990867a5d 100644
+--- a/block/blk-throttle.c
++++ b/block/blk-throttle.c
+@@ -881,13 +881,10 @@ static bool tg_with_in_iops_limit(struct throtl_grp *tg, struct bio *bio,
+ 	unsigned long jiffy_elapsed, jiffy_wait, jiffy_elapsed_rnd;
+ 	u64 tmp;
+ 
+-	jiffy_elapsed = jiffy_elapsed_rnd = jiffies - tg->slice_start[rw];
+-
+-	/* Slice has just started. Consider one slice interval */
+-	if (!jiffy_elapsed)
+-		jiffy_elapsed_rnd = tg->td->throtl_slice;
++	jiffy_elapsed = jiffies - tg->slice_start[rw];
+ 
+-	jiffy_elapsed_rnd = roundup(jiffy_elapsed_rnd, tg->td->throtl_slice);
++	/* Round up to the next throttle slice, wait time must be nonzero */
++	jiffy_elapsed_rnd = roundup(jiffy_elapsed + 1, tg->td->throtl_slice);
+ 
+ 	/*
+ 	 * jiffy_elapsed_rnd should not be a big value as minimum iops can be
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index 2d98803faec2..6ea455b62cb4 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -69,7 +69,7 @@ EXPORT_SYMBOL_GPL(__blk_req_zone_write_unlock);
+ static inline unsigned int __blkdev_nr_zones(struct request_queue *q,
+ 					     sector_t nr_sectors)
+ {
+-	unsigned long zone_sectors = blk_queue_zone_sectors(q);
++	sector_t zone_sectors = blk_queue_zone_sectors(q);
+ 
+ 	return (nr_sectors + zone_sectors - 1) >> ilog2(zone_sectors);
+ }
+diff --git a/crypto/asymmetric_keys/Kconfig b/crypto/asymmetric_keys/Kconfig
+index be70ca6c85d3..1f1f004dc757 100644
+--- a/crypto/asymmetric_keys/Kconfig
++++ b/crypto/asymmetric_keys/Kconfig
+@@ -15,6 +15,7 @@ config ASYMMETRIC_PUBLIC_KEY_SUBTYPE
+ 	select MPILIB
+ 	select CRYPTO_HASH_INFO
+ 	select CRYPTO_AKCIPHER
++	select CRYPTO_HASH
+ 	help
+ 	  This option provides support for asymmetric public key type handling.
+ 	  If signature generation and/or verification are to be used,
+@@ -65,6 +66,7 @@ config TPM_KEY_PARSER
+ config PKCS7_MESSAGE_PARSER
+ 	tristate "PKCS#7 message parser"
+ 	depends on X509_CERTIFICATE_PARSER
++	select CRYPTO_HASH
+ 	select ASN1
+ 	select OID_REGISTRY
+ 	help
+@@ -87,6 +89,7 @@ config SIGNED_PE_FILE_VERIFICATION
+ 	bool "Support for PE file signature verification"
+ 	depends on PKCS7_MESSAGE_PARSER=y
+ 	depends on SYSTEM_DATA_VERIFICATION
++	select CRYPTO_HASH
+ 	select ASN1
+ 	select OID_REGISTRY
+ 	help
+diff --git a/crypto/chacha20poly1305.c b/crypto/chacha20poly1305.c
+index 279d816ab51d..e803ccf18c14 100644
+--- a/crypto/chacha20poly1305.c
++++ b/crypto/chacha20poly1305.c
+@@ -65,6 +65,8 @@ struct chachapoly_req_ctx {
+ 	unsigned int cryptlen;
+ 	/* Actual AD, excluding IV */
+ 	unsigned int assoclen;
++	/* request flags, with MAY_SLEEP cleared if needed */
++	u32 flags;
+ 	union {
+ 		struct poly_req poly;
+ 		struct chacha_req chacha;
+@@ -74,8 +76,12 @@ struct chachapoly_req_ctx {
+ static inline void async_done_continue(struct aead_request *req, int err,
+ 				       int (*cont)(struct aead_request *))
+ {
+-	if (!err)
++	if (!err) {
++		struct chachapoly_req_ctx *rctx = aead_request_ctx(req);
++
++		rctx->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+ 		err = cont(req);
++	}
+ 
+ 	if (err != -EINPROGRESS && err != -EBUSY)
+ 		aead_request_complete(req, err);
+@@ -142,7 +148,7 @@ static int chacha_decrypt(struct aead_request *req)
+ 		dst = scatterwalk_ffwd(rctx->dst, req->dst, req->assoclen);
+ 	}
+ 
+-	skcipher_request_set_callback(&creq->req, aead_request_flags(req),
++	skcipher_request_set_callback(&creq->req, rctx->flags,
+ 				      chacha_decrypt_done, req);
+ 	skcipher_request_set_tfm(&creq->req, ctx->chacha);
+ 	skcipher_request_set_crypt(&creq->req, src, dst,
+@@ -186,7 +192,7 @@ static int poly_tail(struct aead_request *req)
+ 	memcpy(&preq->tail.cryptlen, &len, sizeof(len));
+ 	sg_set_buf(preq->src, &preq->tail, sizeof(preq->tail));
+ 
+-	ahash_request_set_callback(&preq->req, aead_request_flags(req),
++	ahash_request_set_callback(&preq->req, rctx->flags,
+ 				   poly_tail_done, req);
+ 	ahash_request_set_tfm(&preq->req, ctx->poly);
+ 	ahash_request_set_crypt(&preq->req, preq->src,
+@@ -217,7 +223,7 @@ static int poly_cipherpad(struct aead_request *req)
+ 	sg_init_table(preq->src, 1);
+ 	sg_set_buf(preq->src, &preq->pad, padlen);
+ 
+-	ahash_request_set_callback(&preq->req, aead_request_flags(req),
++	ahash_request_set_callback(&preq->req, rctx->flags,
+ 				   poly_cipherpad_done, req);
+ 	ahash_request_set_tfm(&preq->req, ctx->poly);
+ 	ahash_request_set_crypt(&preq->req, preq->src, NULL, padlen);
+@@ -248,7 +254,7 @@ static int poly_cipher(struct aead_request *req)
+ 	sg_init_table(rctx->src, 2);
+ 	crypt = scatterwalk_ffwd(rctx->src, crypt, req->assoclen);
+ 
+-	ahash_request_set_callback(&preq->req, aead_request_flags(req),
++	ahash_request_set_callback(&preq->req, rctx->flags,
+ 				   poly_cipher_done, req);
+ 	ahash_request_set_tfm(&preq->req, ctx->poly);
+ 	ahash_request_set_crypt(&preq->req, crypt, NULL, rctx->cryptlen);
+@@ -278,7 +284,7 @@ static int poly_adpad(struct aead_request *req)
+ 	sg_init_table(preq->src, 1);
+ 	sg_set_buf(preq->src, preq->pad, padlen);
+ 
+-	ahash_request_set_callback(&preq->req, aead_request_flags(req),
++	ahash_request_set_callback(&preq->req, rctx->flags,
+ 				   poly_adpad_done, req);
+ 	ahash_request_set_tfm(&preq->req, ctx->poly);
+ 	ahash_request_set_crypt(&preq->req, preq->src, NULL, padlen);
+@@ -302,7 +308,7 @@ static int poly_ad(struct aead_request *req)
+ 	struct poly_req *preq = &rctx->u.poly;
+ 	int err;
+ 
+-	ahash_request_set_callback(&preq->req, aead_request_flags(req),
++	ahash_request_set_callback(&preq->req, rctx->flags,
+ 				   poly_ad_done, req);
+ 	ahash_request_set_tfm(&preq->req, ctx->poly);
+ 	ahash_request_set_crypt(&preq->req, req->src, NULL, rctx->assoclen);
+@@ -329,7 +335,7 @@ static int poly_setkey(struct aead_request *req)
+ 	sg_init_table(preq->src, 1);
+ 	sg_set_buf(preq->src, rctx->key, sizeof(rctx->key));
+ 
+-	ahash_request_set_callback(&preq->req, aead_request_flags(req),
++	ahash_request_set_callback(&preq->req, rctx->flags,
+ 				   poly_setkey_done, req);
+ 	ahash_request_set_tfm(&preq->req, ctx->poly);
+ 	ahash_request_set_crypt(&preq->req, preq->src, NULL, sizeof(rctx->key));
+@@ -353,7 +359,7 @@ static int poly_init(struct aead_request *req)
+ 	struct poly_req *preq = &rctx->u.poly;
+ 	int err;
+ 
+-	ahash_request_set_callback(&preq->req, aead_request_flags(req),
++	ahash_request_set_callback(&preq->req, rctx->flags,
+ 				   poly_init_done, req);
+ 	ahash_request_set_tfm(&preq->req, ctx->poly);
+ 
+@@ -391,7 +397,7 @@ static int poly_genkey(struct aead_request *req)
+ 
+ 	chacha_iv(creq->iv, req, 0);
+ 
+-	skcipher_request_set_callback(&creq->req, aead_request_flags(req),
++	skcipher_request_set_callback(&creq->req, rctx->flags,
+ 				      poly_genkey_done, req);
+ 	skcipher_request_set_tfm(&creq->req, ctx->chacha);
+ 	skcipher_request_set_crypt(&creq->req, creq->src, creq->src,
+@@ -431,7 +437,7 @@ static int chacha_encrypt(struct aead_request *req)
+ 		dst = scatterwalk_ffwd(rctx->dst, req->dst, req->assoclen);
+ 	}
+ 
+-	skcipher_request_set_callback(&creq->req, aead_request_flags(req),
++	skcipher_request_set_callback(&creq->req, rctx->flags,
+ 				      chacha_encrypt_done, req);
+ 	skcipher_request_set_tfm(&creq->req, ctx->chacha);
+ 	skcipher_request_set_crypt(&creq->req, src, dst,
+@@ -449,6 +455,7 @@ static int chachapoly_encrypt(struct aead_request *req)
+ 	struct chachapoly_req_ctx *rctx = aead_request_ctx(req);
+ 
+ 	rctx->cryptlen = req->cryptlen;
++	rctx->flags = aead_request_flags(req);
+ 
+ 	/* encrypt call chain:
+ 	 * - chacha_encrypt/done()
+@@ -470,6 +477,7 @@ static int chachapoly_decrypt(struct aead_request *req)
+ 	struct chachapoly_req_ctx *rctx = aead_request_ctx(req);
+ 
+ 	rctx->cryptlen = req->cryptlen - POLY1305_DIGEST_SIZE;
++	rctx->flags = aead_request_flags(req);
+ 
+ 	/* decrypt call chain:
+ 	 * - poly_genkey/done()
+diff --git a/crypto/ghash-generic.c b/crypto/ghash-generic.c
+index d9f192b953b2..591b52d3bdca 100644
+--- a/crypto/ghash-generic.c
++++ b/crypto/ghash-generic.c
+@@ -34,6 +34,7 @@ static int ghash_setkey(struct crypto_shash *tfm,
+ 			const u8 *key, unsigned int keylen)
+ {
+ 	struct ghash_ctx *ctx = crypto_shash_ctx(tfm);
++	be128 k;
+ 
+ 	if (keylen != GHASH_BLOCK_SIZE) {
+ 		crypto_shash_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+@@ -42,7 +43,12 @@ static int ghash_setkey(struct crypto_shash *tfm,
+ 
+ 	if (ctx->gf128)
+ 		gf128mul_free_4k(ctx->gf128);
+-	ctx->gf128 = gf128mul_init_4k_lle((be128 *)key);
++
++	BUILD_BUG_ON(sizeof(k) != GHASH_BLOCK_SIZE);
++	memcpy(&k, key, GHASH_BLOCK_SIZE); /* avoid violating alignment rules */
++	ctx->gf128 = gf128mul_init_4k_lle(&k);
++	memzero_explicit(&k, GHASH_BLOCK_SIZE);
++
+ 	if (!ctx->gf128)
+ 		return -ENOMEM;
+ 
+diff --git a/crypto/serpent_generic.c b/crypto/serpent_generic.c
+index 7c3382facc82..600bd288881d 100644
+--- a/crypto/serpent_generic.c
++++ b/crypto/serpent_generic.c
+@@ -229,7 +229,13 @@
+ 	x4 ^= x2;					\
+ 	})
+ 
+-static void __serpent_setkey_sbox(u32 r0, u32 r1, u32 r2, u32 r3, u32 r4, u32 *k)
++/*
++ * Both gcc and clang have misoptimized this function in the past,
++ * producing horrible object code from spilling temporary variables
++ * on the stack. Forcing this part out of line avoids that.
++ */
++static noinline void __serpent_setkey_sbox(u32 r0, u32 r1, u32 r2,
++					   u32 r3, u32 r4, u32 *k)
+ {
+ 	k += 100;
+ 	S3(r3, r4, r0, r1, r2); store_and_load_keys(r1, r2, r4, r3, 28, 24);
+diff --git a/crypto/testmgr.c b/crypto/testmgr.c
+index 8386038d67c7..51540dbee23b 100644
+--- a/crypto/testmgr.c
++++ b/crypto/testmgr.c
+@@ -1050,6 +1050,7 @@ static int test_hash_vec(const char *driver, const struct hash_testvec *vec,
+ 						req, tsgl, hashstate);
+ 			if (err)
+ 				return err;
++			cond_resched();
+ 		}
+ 	}
+ #endif
+@@ -1105,6 +1106,7 @@ static int __alg_test_hash(const struct hash_testvec *vecs,
+ 		err = test_hash_vec(driver, &vecs[i], i, req, tsgl, hashstate);
+ 		if (err)
+ 			goto out;
++		cond_resched();
+ 	}
+ 	err = 0;
+ out:
+@@ -1346,6 +1348,7 @@ static int test_aead_vec(const char *driver, int enc,
+ 						&cfg, req, tsgls);
+ 			if (err)
+ 				return err;
++			cond_resched();
+ 		}
+ 	}
+ #endif
+@@ -1365,6 +1368,7 @@ static int test_aead(const char *driver, int enc,
+ 				    tsgls);
+ 		if (err)
+ 			return err;
++		cond_resched();
+ 	}
+ 	return 0;
+ }
+@@ -1679,6 +1683,7 @@ static int test_skcipher_vec(const char *driver, int enc,
+ 						    &cfg, req, tsgls);
+ 			if (err)
+ 				return err;
++			cond_resched();
+ 		}
+ 	}
+ #endif
+@@ -1698,6 +1703,7 @@ static int test_skcipher(const char *driver, int enc,
+ 					tsgls);
+ 		if (err)
+ 			return err;
++		cond_resched();
+ 	}
+ 	return 0;
+ }
+diff --git a/drivers/acpi/acpica/acevents.h b/drivers/acpi/acpica/acevents.h
+index 831660179662..c8652f91054e 100644
+--- a/drivers/acpi/acpica/acevents.h
++++ b/drivers/acpi/acpica/acevents.h
+@@ -69,7 +69,8 @@ acpi_status
+ acpi_ev_mask_gpe(struct acpi_gpe_event_info *gpe_event_info, u8 is_masked);
+ 
+ acpi_status
+-acpi_ev_add_gpe_reference(struct acpi_gpe_event_info *gpe_event_info);
++acpi_ev_add_gpe_reference(struct acpi_gpe_event_info *gpe_event_info,
++			  u8 clear_on_enable);
+ 
+ acpi_status
+ acpi_ev_remove_gpe_reference(struct acpi_gpe_event_info *gpe_event_info);
+diff --git a/drivers/acpi/acpica/evgpe.c b/drivers/acpi/acpica/evgpe.c
+index 62d3aa74277b..344feba29063 100644
+--- a/drivers/acpi/acpica/evgpe.c
++++ b/drivers/acpi/acpica/evgpe.c
+@@ -146,6 +146,7 @@ acpi_ev_mask_gpe(struct acpi_gpe_event_info *gpe_event_info, u8 is_masked)
+  * FUNCTION:    acpi_ev_add_gpe_reference
+  *
+  * PARAMETERS:  gpe_event_info          - Add a reference to this GPE
++ *              clear_on_enable         - Clear GPE status before enabling it
+  *
+  * RETURN:      Status
+  *
+@@ -155,7 +156,8 @@ acpi_ev_mask_gpe(struct acpi_gpe_event_info *gpe_event_info, u8 is_masked)
+  ******************************************************************************/
+ 
+ acpi_status
+-acpi_ev_add_gpe_reference(struct acpi_gpe_event_info *gpe_event_info)
++acpi_ev_add_gpe_reference(struct acpi_gpe_event_info *gpe_event_info,
++			  u8 clear_on_enable)
+ {
+ 	acpi_status status = AE_OK;
+ 
+@@ -170,6 +172,10 @@ acpi_ev_add_gpe_reference(struct acpi_gpe_event_info *gpe_event_info)
+ 
+ 		/* Enable on first reference */
+ 
++		if (clear_on_enable) {
++			(void)acpi_hw_clear_gpe(gpe_event_info);
++		}
++
+ 		status = acpi_ev_update_gpe_enable_mask(gpe_event_info);
+ 		if (ACPI_SUCCESS(status)) {
+ 			status = acpi_ev_enable_gpe(gpe_event_info);
+diff --git a/drivers/acpi/acpica/evgpeblk.c b/drivers/acpi/acpica/evgpeblk.c
+index 328d1d6123ad..fb15e9e2373b 100644
+--- a/drivers/acpi/acpica/evgpeblk.c
++++ b/drivers/acpi/acpica/evgpeblk.c
+@@ -453,7 +453,7 @@ acpi_ev_initialize_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
+ 				continue;
+ 			}
+ 
+-			status = acpi_ev_add_gpe_reference(gpe_event_info);
++			status = acpi_ev_add_gpe_reference(gpe_event_info, FALSE);
+ 			if (ACPI_FAILURE(status)) {
+ 				ACPI_EXCEPTION((AE_INFO, status,
+ 					"Could not enable GPE 0x%02X",
+diff --git a/drivers/acpi/acpica/evxface.c b/drivers/acpi/acpica/evxface.c
+index 3df00eb6621b..279ef0557aa3 100644
+--- a/drivers/acpi/acpica/evxface.c
++++ b/drivers/acpi/acpica/evxface.c
+@@ -971,7 +971,7 @@ acpi_remove_gpe_handler(acpi_handle gpe_device,
+ 	      ACPI_GPE_DISPATCH_METHOD) ||
+ 	     (ACPI_GPE_DISPATCH_TYPE(handler->original_flags) ==
+ 	      ACPI_GPE_DISPATCH_NOTIFY)) && handler->originally_enabled) {
+-		(void)acpi_ev_add_gpe_reference(gpe_event_info);
++		(void)acpi_ev_add_gpe_reference(gpe_event_info, FALSE);
+ 		if (ACPI_GPE_IS_POLLING_NEEDED(gpe_event_info)) {
+ 
+ 			/* Poll edge triggered GPEs to handle existing events */
+diff --git a/drivers/acpi/acpica/evxfgpe.c b/drivers/acpi/acpica/evxfgpe.c
+index 30a083902f52..710488ec59e9 100644
+--- a/drivers/acpi/acpica/evxfgpe.c
++++ b/drivers/acpi/acpica/evxfgpe.c
+@@ -108,7 +108,7 @@ acpi_status acpi_enable_gpe(acpi_handle gpe_device, u32 gpe_number)
+ 	if (gpe_event_info) {
+ 		if (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) !=
+ 		    ACPI_GPE_DISPATCH_NONE) {
+-			status = acpi_ev_add_gpe_reference(gpe_event_info);
++			status = acpi_ev_add_gpe_reference(gpe_event_info, TRUE);
+ 			if (ACPI_SUCCESS(status) &&
+ 			    ACPI_GPE_IS_POLLING_NEEDED(gpe_event_info)) {
+ 
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index 938ed513b070..6215680418c4 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -1486,7 +1486,7 @@ static int ata_eh_read_log_10h(struct ata_device *dev,
+ 	tf->hob_lbah = buf[10];
+ 	tf->nsect = buf[12];
+ 	tf->hob_nsect = buf[13];
+-	if (ata_id_has_ncq_autosense(dev->id))
++	if (dev->class == ATA_DEV_ZAC && ata_id_has_ncq_autosense(dev->id))
+ 		tf->auxiliary = buf[14] << 16 | buf[15] << 8 | buf[16];
+ 
+ 	return 0;
+@@ -1733,7 +1733,8 @@ void ata_eh_analyze_ncq_error(struct ata_link *link)
+ 	memcpy(&qc->result_tf, &tf, sizeof(tf));
+ 	qc->result_tf.flags = ATA_TFLAG_ISADDR | ATA_TFLAG_LBA | ATA_TFLAG_LBA48;
+ 	qc->err_mask |= AC_ERR_DEV | AC_ERR_NCQ;
+-	if ((qc->result_tf.command & ATA_SENSE) || qc->result_tf.auxiliary) {
++	if (dev->class == ATA_DEV_ZAC &&
++	    ((qc->result_tf.command & ATA_SENSE) || qc->result_tf.auxiliary)) {
+ 		char sense_key, asc, ascq;
+ 
+ 		sense_key = (qc->result_tf.auxiliary >> 16) & 0xff;
+@@ -1787,10 +1788,11 @@ static unsigned int ata_eh_analyze_tf(struct ata_queued_cmd *qc,
+ 	}
+ 
+ 	switch (qc->dev->class) {
+-	case ATA_DEV_ATA:
+ 	case ATA_DEV_ZAC:
+ 		if (stat & ATA_SENSE)
+ 			ata_eh_request_sense(qc, qc->scsicmd);
++		/* fall through */
++	case ATA_DEV_ATA:
+ 		if (err & ATA_ICRC)
+ 			qc->err_mask |= AC_ERR_ATA_BUS;
+ 		if (err & (ATA_UNC | ATA_AMNF))
+diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c
+index 19eb454f26c3..df2893d4626b 100644
+--- a/drivers/base/regmap/regmap-debugfs.c
++++ b/drivers/base/regmap/regmap-debugfs.c
+@@ -565,6 +565,8 @@ void regmap_debugfs_init(struct regmap *map, const char *name)
+ 	}
+ 
+ 	if (!strcmp(name, "dummy")) {
++		kfree(map->debugfs_name);
++
+ 		map->debugfs_name = kasprintf(GFP_KERNEL, "dummy%d",
+ 						dummy_index);
+ 		name = map->debugfs_name;
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index 4f822e087def..61d1a0864dea 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -1642,6 +1642,8 @@ static int _regmap_raw_write_impl(struct regmap *map, unsigned int reg,
+ 					     map->format.reg_bytes +
+ 					     map->format.pad_bytes,
+ 					     val, val_len);
++	else
++		ret = -ENOTSUPP;
+ 
+ 	/* If that didn't work fall back on linearising by hand. */
+ 	if (ret == -ENOTSUPP) {
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index 95f608d1a098..38e5811a045e 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -2119,6 +2119,9 @@ static void setup_format_params(int track)
+ 	raw_cmd->kernel_data = floppy_track_buffer;
+ 	raw_cmd->length = 4 * F_SECT_PER_TRACK;
+ 
++	if (!F_SECT_PER_TRACK)
++		return;
++
+ 	/* allow for about 30ms for data transport per track */
+ 	head_shift = (F_SECT_PER_TRACK + 5) / 6;
+ 
+@@ -3229,8 +3232,12 @@ static int set_geometry(unsigned int cmd, struct floppy_struct *g,
+ 	int cnt;
+ 
+ 	/* sanity checking for parameters. */
+-	if (g->sect <= 0 ||
+-	    g->head <= 0 ||
++	if ((int)g->sect <= 0 ||
++	    (int)g->head <= 0 ||
++	    /* check for overflow in max_sector */
++	    (int)(g->sect * g->head) <= 0 ||
++	    /* check for zero in F_SECT_PER_TRACK */
++	    (unsigned char)((g->sect << 2) >> FD_SIZECODE(g)) == 0 ||
+ 	    g->track <= 0 || g->track > UDP->tracks >> STRETCH(g) ||
+ 	    /* check if reserved bits are set */
+ 	    (g->stretch & ~(FD_STRETCH | FD_SWAPSIDES | FD_SECTBASEMASK)) != 0)
+@@ -3374,6 +3381,24 @@ static int fd_getgeo(struct block_device *bdev, struct hd_geometry *geo)
+ 	return 0;
+ }
+ 
++static bool valid_floppy_drive_params(const short autodetect[8],
++		int native_format)
++{
++	size_t floppy_type_size = ARRAY_SIZE(floppy_type);
++	size_t i = 0;
++
++	for (i = 0; i < 8; ++i) {
++		if (autodetect[i] < 0 ||
++		    autodetect[i] >= floppy_type_size)
++			return false;
++	}
++
++	if (native_format < 0 || native_format >= floppy_type_size)
++		return false;
++
++	return true;
++}
++
+ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode, unsigned int cmd,
+ 		    unsigned long param)
+ {
+@@ -3500,6 +3525,9 @@ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode, unsigned int
+ 		SUPBOUND(size, strlen((const char *)outparam) + 1);
+ 		break;
+ 	case FDSETDRVPRM:
++		if (!valid_floppy_drive_params(inparam.dp.autodetect,
++				inparam.dp.native_format))
++			return -EINVAL;
+ 		*UDP = inparam.dp;
+ 		break;
+ 	case FDGETDRVPRM:
+@@ -3697,6 +3725,8 @@ static int compat_setdrvprm(int drive,
+ 		return -EPERM;
+ 	if (copy_from_user(&v, arg, sizeof(struct compat_floppy_drive_params)))
+ 		return -EFAULT;
++	if (!valid_floppy_drive_params(v.autodetect, v.native_format))
++		return -EINVAL;
+ 	mutex_lock(&floppy_mutex);
+ 	UDP->cmos = v.cmos;
+ 	UDP->max_dtr = v.max_dtr;
+diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c
+index d7ac09c092f2..21d0b651b335 100644
+--- a/drivers/block/null_blk_main.c
++++ b/drivers/block/null_blk_main.c
+@@ -326,11 +326,12 @@ static ssize_t nullb_device_power_store(struct config_item *item,
+ 		set_bit(NULLB_DEV_FL_CONFIGURED, &dev->flags);
+ 		dev->power = newp;
+ 	} else if (dev->power && !newp) {
+-		mutex_lock(&lock);
+-		dev->power = newp;
+-		null_del_dev(dev->nullb);
+-		mutex_unlock(&lock);
+-		clear_bit(NULLB_DEV_FL_UP, &dev->flags);
++		if (test_and_clear_bit(NULLB_DEV_FL_UP, &dev->flags)) {
++			mutex_lock(&lock);
++			dev->power = newp;
++			null_del_dev(dev->nullb);
++			mutex_unlock(&lock);
++		}
+ 		clear_bit(NULLB_DEV_FL_CONFIGURED, &dev->flags);
+ 	}
+ 
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 7db48ae65cd2..4c9f11766e82 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -279,7 +279,9 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_DEVICE(0x04ca, 0x3015), .driver_info = BTUSB_QCA_ROME },
+ 	{ USB_DEVICE(0x04ca, 0x3016), .driver_info = BTUSB_QCA_ROME },
+ 	{ USB_DEVICE(0x04ca, 0x301a), .driver_info = BTUSB_QCA_ROME },
++	{ USB_DEVICE(0x13d3, 0x3491), .driver_info = BTUSB_QCA_ROME },
+ 	{ USB_DEVICE(0x13d3, 0x3496), .driver_info = BTUSB_QCA_ROME },
++	{ USB_DEVICE(0x13d3, 0x3501), .driver_info = BTUSB_QCA_ROME },
+ 
+ 	/* Broadcom BCM2035 */
+ 	{ USB_DEVICE(0x0a5c, 0x2009), .driver_info = BTUSB_BCM92035 },
+diff --git a/drivers/bluetooth/hci_bcsp.c b/drivers/bluetooth/hci_bcsp.c
+index 1a7f0c82fb36..66fe1e6dc631 100644
+--- a/drivers/bluetooth/hci_bcsp.c
++++ b/drivers/bluetooth/hci_bcsp.c
+@@ -759,6 +759,11 @@ static int bcsp_close(struct hci_uart *hu)
+ 	skb_queue_purge(&bcsp->rel);
+ 	skb_queue_purge(&bcsp->unrel);
+ 
++	if (bcsp->rx_skb) {
++		kfree_skb(bcsp->rx_skb);
++		bcsp->rx_skb = NULL;
++	}
++
+ 	kfree(bcsp);
+ 	return 0;
+ }
+diff --git a/drivers/clk/imx/clk-imx8mm.c b/drivers/clk/imx/clk-imx8mm.c
+index 122a81ab8e48..01ef2fab5764 100644
+--- a/drivers/clk/imx/clk-imx8mm.c
++++ b/drivers/clk/imx/clk-imx8mm.c
+@@ -325,7 +325,7 @@ static const char *imx8mm_dsi_dbi_sels[] = {"osc_24m", "sys_pll1_266m", "sys_pll
+ 					    "sys_pll2_1000m", "sys_pll3_out", "audio_pll2_out", "video_pll1_out", };
+ 
+ static const char *imx8mm_usdhc3_sels[] = {"osc_24m", "sys_pll1_400m", "sys_pll1_800m", "sys_pll2_500m",
+-					   "sys_pll3_out", "sys_pll1_266m", "audio_pll2_clk", "sys_pll1_100m", };
++					   "sys_pll3_out", "sys_pll1_266m", "audio_pll2_out", "sys_pll1_100m", };
+ 
+ static const char *imx8mm_csi1_core_sels[] = {"osc_24m", "sys_pll1_266m", "sys_pll2_250m", "sys_pll1_800m",
+ 					      "sys_pll2_1000m", "sys_pll3_out", "audio_pll2_out", "video_pll1_out", };
+@@ -361,11 +361,11 @@ static const char *imx8mm_pdm_sels[] = {"osc_24m", "sys_pll2_100m", "audio_pll1_
+ 					"sys_pll2_1000m", "sys_pll3_out", "clk_ext3", "audio_pll2_out", };
+ 
+ static const char *imx8mm_vpu_h1_sels[] = {"osc_24m", "vpu_pll_out", "sys_pll1_800m", "sys_pll2_1000m",
+-					   "audio_pll2_clk", "sys_pll2_125m", "sys_pll3_clk", "audio_pll1_out", };
++					   "audio_pll2_out", "sys_pll2_125m", "sys_pll3_clk", "audio_pll1_out", };
+ 
+ static const char *imx8mm_dram_core_sels[] = {"dram_pll_out", "dram_alt_root", };
+ 
+-static const char *imx8mm_clko1_sels[] = {"osc_24m", "sys_pll1_800m", "osc_27m", "sys_pll1_200m", "audio_pll2_clk",
++static const char *imx8mm_clko1_sels[] = {"osc_24m", "sys_pll1_800m", "osc_27m", "sys_pll1_200m", "audio_pll2_out",
+ 					 "vpu_pll", "sys_pll1_80m", };
+ 
+ static struct clk *clks[IMX8MM_CLK_END];
+diff --git a/drivers/clocksource/exynos_mct.c b/drivers/clocksource/exynos_mct.c
+index 34bd250d46c6..6aa10cbc1d59 100644
+--- a/drivers/clocksource/exynos_mct.c
++++ b/drivers/clocksource/exynos_mct.c
+@@ -209,7 +209,7 @@ static void exynos4_frc_resume(struct clocksource *cs)
+ 
+ static struct clocksource mct_frc = {
+ 	.name		= "mct-frc",
+-	.rating		= 400,
++	.rating		= 450,	/* use value higher than ARM arch timer */
+ 	.read		= exynos4_frc_read,
+ 	.mask		= CLOCKSOURCE_MASK(32),
+ 	.flags		= CLOCK_SOURCE_IS_CONTINUOUS,
+@@ -464,7 +464,7 @@ static int exynos4_mct_starting_cpu(unsigned int cpu)
+ 	evt->set_state_oneshot_stopped = set_state_shutdown;
+ 	evt->tick_resume = set_state_shutdown;
+ 	evt->features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT;
+-	evt->rating = 450;
++	evt->rating = 500;	/* use value higher than ARM arch timer */
+ 
+ 	exynos4_mct_write(TICK_BASE_CNT, mevt->base + MCT_L_TCNTB_OFFSET);
+ 
+diff --git a/drivers/clocksource/timer-tegra20.c b/drivers/clocksource/timer-tegra20.c
+index fdb3d795a409..84adfff59fb0 100644
+--- a/drivers/clocksource/timer-tegra20.c
++++ b/drivers/clocksource/timer-tegra20.c
+@@ -310,7 +310,7 @@ static int __init tegra_init_timer(struct device_node *np)
+ 			pr_err("%s: can't map IRQ for CPU%d\n",
+ 			       __func__, cpu);
+ 			ret = -EINVAL;
+-			goto out;
++			goto out_irq;
+ 		}
+ 
+ 		irq_set_status_flags(cpu_to->clkevt.irq, IRQ_NOAUTOEN);
+@@ -320,7 +320,8 @@ static int __init tegra_init_timer(struct device_node *np)
+ 		if (ret) {
+ 			pr_err("%s: cannot setup irq %d for CPU%d\n",
+ 				__func__, cpu_to->clkevt.irq, cpu);
+-			ret = -EINVAL;
++			irq_dispose_mapping(cpu_to->clkevt.irq);
++			cpu_to->clkevt.irq = 0;
+ 			goto out_irq;
+ 		}
+ 	}
+@@ -340,6 +341,8 @@ out_irq:
+ 			irq_dispose_mapping(cpu_to->clkevt.irq);
+ 		}
+ 	}
++
++	to->of_base.base = timer_reg_base;
+ out:
+ 	timer_of_cleanup(to);
+ 	return ret;
+diff --git a/drivers/crypto/amcc/crypto4xx_alg.c b/drivers/crypto/amcc/crypto4xx_alg.c
+index 3458c5a085d9..fb141d9b3c43 100644
+--- a/drivers/crypto/amcc/crypto4xx_alg.c
++++ b/drivers/crypto/amcc/crypto4xx_alg.c
+@@ -76,12 +76,17 @@ static void set_dynamic_sa_command_1(struct dynamic_sa_ctl *sa, u32 cm,
+ }
+ 
+ static inline int crypto4xx_crypt(struct skcipher_request *req,
+-				  const unsigned int ivlen, bool decrypt)
++				  const unsigned int ivlen, bool decrypt,
++				  bool check_blocksize)
+ {
+ 	struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(req);
+ 	struct crypto4xx_ctx *ctx = crypto_skcipher_ctx(cipher);
+ 	__le32 iv[AES_IV_SIZE];
+ 
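++	/* Block modes require cryptlen to be a multiple of the AES block size */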
++	if (check_blocksize && !IS_ALIGNED(req->cryptlen, AES_BLOCK_SIZE))
++		return -EINVAL;
++
+ 	if (ivlen)
+ 		crypto4xx_memcpy_to_le32(iv, req->iv, ivlen);
+ 
+@@ -90,24 +94,34 @@ static inline int crypto4xx_crypt(struct skcipher_request *req,
+ 		ctx->sa_len, 0, NULL);
+ }
+ 
+-int crypto4xx_encrypt_noiv(struct skcipher_request *req)
++int crypto4xx_encrypt_noiv_block(struct skcipher_request *req)
++{
++	return crypto4xx_crypt(req, 0, false, true);
++}
++
++int crypto4xx_encrypt_iv_stream(struct skcipher_request *req)
++{
++	return crypto4xx_crypt(req, AES_IV_SIZE, false, false);
++}
++
++int crypto4xx_decrypt_noiv_block(struct skcipher_request *req)
+ {
+-	return crypto4xx_crypt(req, 0, false);
++	return crypto4xx_crypt(req, 0, true, true);
+ }
+ 
+-int crypto4xx_encrypt_iv(struct skcipher_request *req)
++int crypto4xx_decrypt_iv_stream(struct skcipher_request *req)
+ {
+-	return crypto4xx_crypt(req, AES_IV_SIZE, false);
++	return crypto4xx_crypt(req, AES_IV_SIZE, true, false);
+ }
+ 
+-int crypto4xx_decrypt_noiv(struct skcipher_request *req)
++int crypto4xx_encrypt_iv_block(struct skcipher_request *req)
+ {
+-	return crypto4xx_crypt(req, 0, true);
++	return crypto4xx_crypt(req, AES_IV_SIZE, false, true);
+ }
+ 
+-int crypto4xx_decrypt_iv(struct skcipher_request *req)
++int crypto4xx_decrypt_iv_block(struct skcipher_request *req)
+ {
+-	return crypto4xx_crypt(req, AES_IV_SIZE, true);
++	return crypto4xx_crypt(req, AES_IV_SIZE, true, true);
+ }
+ 
+ /**
+@@ -278,8 +292,8 @@ crypto4xx_ctr_crypt(struct skcipher_request *req, bool encrypt)
+ 		return ret;
+ 	}
+ 
+-	return encrypt ? crypto4xx_encrypt_iv(req)
+-		       : crypto4xx_decrypt_iv(req);
++	return encrypt ? crypto4xx_encrypt_iv_stream(req)
++		       : crypto4xx_decrypt_iv_stream(req);
+ }
+ 
+ static int crypto4xx_sk_setup_fallback(struct crypto4xx_ctx *ctx,
+diff --git a/drivers/crypto/amcc/crypto4xx_core.c b/drivers/crypto/amcc/crypto4xx_core.c
+index 920bd5e720b2..63c915f170c9 100644
+--- a/drivers/crypto/amcc/crypto4xx_core.c
++++ b/drivers/crypto/amcc/crypto4xx_core.c
+@@ -1226,8 +1226,8 @@ static struct crypto4xx_alg_common crypto4xx_alg[] = {
+ 		.max_keysize = AES_MAX_KEY_SIZE,
+ 		.ivsize	= AES_IV_SIZE,
+ 		.setkey = crypto4xx_setkey_aes_cbc,
+-		.encrypt = crypto4xx_encrypt_iv,
+-		.decrypt = crypto4xx_decrypt_iv,
++		.encrypt = crypto4xx_encrypt_iv_block,
++		.decrypt = crypto4xx_decrypt_iv_block,
+ 		.init = crypto4xx_sk_init,
+ 		.exit = crypto4xx_sk_exit,
+ 	} },
+@@ -1238,7 +1238,7 @@ static struct crypto4xx_alg_common crypto4xx_alg[] = {
+ 			.cra_priority = CRYPTO4XX_CRYPTO_PRIORITY,
+ 			.cra_flags = CRYPTO_ALG_ASYNC |
+ 				CRYPTO_ALG_KERN_DRIVER_ONLY,
+-			.cra_blocksize = AES_BLOCK_SIZE,
++			.cra_blocksize = 1,
+ 			.cra_ctxsize = sizeof(struct crypto4xx_ctx),
+ 			.cra_module = THIS_MODULE,
+ 		},
+@@ -1246,8 +1246,8 @@ static struct crypto4xx_alg_common crypto4xx_alg[] = {
+ 		.max_keysize = AES_MAX_KEY_SIZE,
+ 		.ivsize	= AES_IV_SIZE,
+ 		.setkey	= crypto4xx_setkey_aes_cfb,
+-		.encrypt = crypto4xx_encrypt_iv,
+-		.decrypt = crypto4xx_decrypt_iv,
++		.encrypt = crypto4xx_encrypt_iv_stream,
++		.decrypt = crypto4xx_decrypt_iv_stream,
+ 		.init = crypto4xx_sk_init,
+ 		.exit = crypto4xx_sk_exit,
+ 	} },
+@@ -1259,7 +1259,7 @@ static struct crypto4xx_alg_common crypto4xx_alg[] = {
+ 			.cra_flags = CRYPTO_ALG_NEED_FALLBACK |
+ 				CRYPTO_ALG_ASYNC |
+ 				CRYPTO_ALG_KERN_DRIVER_ONLY,
+-			.cra_blocksize = AES_BLOCK_SIZE,
++			.cra_blocksize = 1,
+ 			.cra_ctxsize = sizeof(struct crypto4xx_ctx),
+ 			.cra_module = THIS_MODULE,
+ 		},
+@@ -1279,7 +1279,7 @@ static struct crypto4xx_alg_common crypto4xx_alg[] = {
+ 			.cra_priority = CRYPTO4XX_CRYPTO_PRIORITY,
+ 			.cra_flags = CRYPTO_ALG_ASYNC |
+ 				CRYPTO_ALG_KERN_DRIVER_ONLY,
+-			.cra_blocksize = AES_BLOCK_SIZE,
++			.cra_blocksize = 1,
+ 			.cra_ctxsize = sizeof(struct crypto4xx_ctx),
+ 			.cra_module = THIS_MODULE,
+ 		},
+@@ -1306,8 +1306,8 @@ static struct crypto4xx_alg_common crypto4xx_alg[] = {
+ 		.min_keysize = AES_MIN_KEY_SIZE,
+ 		.max_keysize = AES_MAX_KEY_SIZE,
+ 		.setkey	= crypto4xx_setkey_aes_ecb,
+-		.encrypt = crypto4xx_encrypt_noiv,
+-		.decrypt = crypto4xx_decrypt_noiv,
++		.encrypt = crypto4xx_encrypt_noiv_block,
++		.decrypt = crypto4xx_decrypt_noiv_block,
+ 		.init = crypto4xx_sk_init,
+ 		.exit = crypto4xx_sk_exit,
+ 	} },
+@@ -1318,7 +1318,7 @@ static struct crypto4xx_alg_common crypto4xx_alg[] = {
+ 			.cra_priority = CRYPTO4XX_CRYPTO_PRIORITY,
+ 			.cra_flags = CRYPTO_ALG_ASYNC |
+ 				CRYPTO_ALG_KERN_DRIVER_ONLY,
+-			.cra_blocksize = AES_BLOCK_SIZE,
++			.cra_blocksize = 1,
+ 			.cra_ctxsize = sizeof(struct crypto4xx_ctx),
+ 			.cra_module = THIS_MODULE,
+ 		},
+@@ -1326,8 +1326,8 @@ static struct crypto4xx_alg_common crypto4xx_alg[] = {
+ 		.max_keysize = AES_MAX_KEY_SIZE,
+ 		.ivsize	= AES_IV_SIZE,
+ 		.setkey	= crypto4xx_setkey_aes_ofb,
+-		.encrypt = crypto4xx_encrypt_iv,
+-		.decrypt = crypto4xx_decrypt_iv,
++		.encrypt = crypto4xx_encrypt_iv_stream,
++		.decrypt = crypto4xx_decrypt_iv_stream,
+ 		.init = crypto4xx_sk_init,
+ 		.exit = crypto4xx_sk_exit,
+ 	} },
+diff --git a/drivers/crypto/amcc/crypto4xx_core.h b/drivers/crypto/amcc/crypto4xx_core.h
+index 18df695ca6b1..da82862e57c9 100644
+--- a/drivers/crypto/amcc/crypto4xx_core.h
++++ b/drivers/crypto/amcc/crypto4xx_core.h
+@@ -183,10 +183,12 @@ int crypto4xx_setkey_rfc3686(struct crypto_skcipher *cipher,
+ 			     const u8 *key, unsigned int keylen);
+ int crypto4xx_encrypt_ctr(struct skcipher_request *req);
+ int crypto4xx_decrypt_ctr(struct skcipher_request *req);
+-int crypto4xx_encrypt_iv(struct skcipher_request *req);
+-int crypto4xx_decrypt_iv(struct skcipher_request *req);
+-int crypto4xx_encrypt_noiv(struct skcipher_request *req);
+-int crypto4xx_decrypt_noiv(struct skcipher_request *req);
++int crypto4xx_encrypt_iv_stream(struct skcipher_request *req);
++int crypto4xx_decrypt_iv_stream(struct skcipher_request *req);
++int crypto4xx_encrypt_iv_block(struct skcipher_request *req);
++int crypto4xx_decrypt_iv_block(struct skcipher_request *req);
++int crypto4xx_encrypt_noiv_block(struct skcipher_request *req);
++int crypto4xx_decrypt_noiv_block(struct skcipher_request *req);
+ int crypto4xx_rfc3686_encrypt(struct skcipher_request *req);
+ int crypto4xx_rfc3686_decrypt(struct skcipher_request *req);
+ int crypto4xx_sha1_alg_init(struct crypto_tfm *tfm);
+diff --git a/drivers/crypto/amcc/crypto4xx_trng.c b/drivers/crypto/amcc/crypto4xx_trng.c
+index 53ab1f140a26..8a3ed4031206 100644
+--- a/drivers/crypto/amcc/crypto4xx_trng.c
++++ b/drivers/crypto/amcc/crypto4xx_trng.c
+@@ -111,7 +111,6 @@ void ppc4xx_trng_probe(struct crypto4xx_core_device *core_dev)
+ 	return;
+ 
+ err_out:
+-	of_node_put(trng);
+ 	iounmap(dev->trng_base);
+ 	kfree(rng);
+ 	dev->trng_base = NULL;
+diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
+index 579578498deb..2b9b64b01c3b 100644
+--- a/drivers/crypto/caam/caamalg.c
++++ b/drivers/crypto/caam/caamalg.c
+@@ -965,6 +965,7 @@ static void skcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
+ 	struct skcipher_request *req = context;
+ 	struct skcipher_edesc *edesc;
+ 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
++	struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
+ 	int ivsize = crypto_skcipher_ivsize(skcipher);
+ 
+ #ifdef DEBUG
+@@ -989,9 +990,9 @@ static void skcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
+ 
+ 	/*
+ 	 * The crypto API expects us to set the IV (req->iv) to the last
+-	 * ciphertext block. This is used e.g. by the CTS mode.
++	 * ciphertext block when running in CBC mode.
+ 	 */
+-	if (ivsize)
++	if ((ctx->cdata.algtype & OP_ALG_AAI_MASK) == OP_ALG_AAI_CBC)
+ 		scatterwalk_map_and_copy(req->iv, req->dst, req->cryptlen -
+ 					 ivsize, ivsize, 0);
+ 
+@@ -1072,6 +1073,7 @@ static void init_aead_job(struct aead_request *req,
+ 	if (unlikely(req->src != req->dst)) {
+ 		if (!edesc->mapped_dst_nents) {
+ 			dst_dma = 0;
++			out_options = 0;
+ 		} else if (edesc->mapped_dst_nents == 1) {
+ 			dst_dma = sg_dma_address(req->dst);
+ 			out_options = 0;
+@@ -1808,9 +1810,9 @@ static int skcipher_decrypt(struct skcipher_request *req)
+ 
+ 	/*
+ 	 * The crypto API expects us to set the IV (req->iv) to the last
+-	 * ciphertext block.
++	 * ciphertext block when running in CBC mode.
+ 	 */
+-	if (ivsize)
++	if ((ctx->cdata.algtype & OP_ALG_AAI_MASK) == OP_ALG_AAI_CBC)
+ 		scatterwalk_map_and_copy(req->iv, req->src, req->cryptlen -
+ 					 ivsize, ivsize, 0);
+ 
+diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c
+index c61921d32489..96d1a9647b01 100644
+--- a/drivers/crypto/caam/caamalg_qi.c
++++ b/drivers/crypto/caam/caamalg_qi.c
+@@ -1068,7 +1068,7 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
+ 			dma_to_qm_sg_one_ext(&fd_sgt[0], qm_sg_dma +
+ 					     (1 + !!ivsize) * sizeof(*sg_table),
+ 					     out_len, 0);
+-	} else if (mapped_dst_nents == 1) {
++	} else if (mapped_dst_nents <= 1) {
+ 		dma_to_qm_sg_one(&fd_sgt[0], sg_dma_address(req->dst), out_len,
+ 				 0);
+ 	} else {
+diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c
+index 0a72c96708c4..faf238db153c 100644
+--- a/drivers/crypto/caam/caamalg_qi2.c
++++ b/drivers/crypto/caam/caamalg_qi2.c
+@@ -525,6 +525,14 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
+ 			dpaa2_fl_set_addr(out_fle, qm_sg_dma +
+ 					  (1 + !!ivsize) * sizeof(*sg_table));
+ 		}
++	} else if (!mapped_dst_nents) {
++		/*
++		 * The crypto engine requires the output entry to be present
++		 * when the "frame list" FD format is used.
++		 * Since the engine does not support FMT=2'b11 (unused entry
++		 * type), leaving out_fle zeroized is the best option.
++		 */
++		goto skip_out_fle;
+ 	} else if (mapped_dst_nents == 1) {
+ 		dpaa2_fl_set_format(out_fle, dpaa2_fl_single);
+ 		dpaa2_fl_set_addr(out_fle, sg_dma_address(req->dst));
+@@ -536,6 +544,7 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
+ 
+ 	dpaa2_fl_set_len(out_fle, out_len);
+ 
++skip_out_fle:
+ 	return edesc;
+ }
+ 
+diff --git a/drivers/crypto/caam/qi.c b/drivers/crypto/caam/qi.c
+index 7cb8b1755e57..976aa9b3b264 100644
+--- a/drivers/crypto/caam/qi.c
++++ b/drivers/crypto/caam/qi.c
+@@ -18,6 +18,7 @@
+ #include "desc_constr.h"
+ 
+ #define PREHDR_RSLS_SHIFT	31
++#define PREHDR_ABS		BIT(25)
+ 
+ /*
+  * Use a reasonable backlog of frames (per CPU) as congestion threshold,
+@@ -346,6 +347,7 @@ int caam_drv_ctx_update(struct caam_drv_ctx *drv_ctx, u32 *sh_desc)
+ 	 */
+ 	drv_ctx->prehdr[0] = cpu_to_caam32((1 << PREHDR_RSLS_SHIFT) |
+ 					   num_words);
++	drv_ctx->prehdr[1] = cpu_to_caam32(PREHDR_ABS);
+ 	memcpy(drv_ctx->sh_desc, sh_desc, desc_bytes(sh_desc));
+ 	dma_sync_single_for_device(qidev, drv_ctx->context_a,
+ 				   sizeof(drv_ctx->sh_desc) +
+@@ -401,6 +403,7 @@ struct caam_drv_ctx *caam_drv_ctx_init(struct device *qidev,
+ 	 */
+ 	drv_ctx->prehdr[0] = cpu_to_caam32((1 << PREHDR_RSLS_SHIFT) |
+ 					   num_words);
++	drv_ctx->prehdr[1] = cpu_to_caam32(PREHDR_ABS);
+ 	memcpy(drv_ctx->sh_desc, sh_desc, desc_bytes(sh_desc));
+ 	size = sizeof(drv_ctx->prehdr) + sizeof(drv_ctx->sh_desc);
+ 	hwdesc = dma_map_single(qidev, drv_ctx->prehdr, size,
+diff --git a/drivers/crypto/ccp/ccp-dev.c b/drivers/crypto/ccp/ccp-dev.c
+index 1b5035d56288..9b6d8972a565 100644
+--- a/drivers/crypto/ccp/ccp-dev.c
++++ b/drivers/crypto/ccp/ccp-dev.c
+@@ -35,56 +35,62 @@ struct ccp_tasklet_data {
+ };
+ 
+ /* Human-readable error strings */
++#define CCP_MAX_ERROR_CODE	64
+ static char *ccp_error_codes[] = {
+ 	"",
+-	"ERR 01: ILLEGAL_ENGINE",
+-	"ERR 02: ILLEGAL_KEY_ID",
+-	"ERR 03: ILLEGAL_FUNCTION_TYPE",
+-	"ERR 04: ILLEGAL_FUNCTION_MODE",
+-	"ERR 05: ILLEGAL_FUNCTION_ENCRYPT",
+-	"ERR 06: ILLEGAL_FUNCTION_SIZE",
+-	"ERR 07: Zlib_MISSING_INIT_EOM",
+-	"ERR 08: ILLEGAL_FUNCTION_RSVD",
+-	"ERR 09: ILLEGAL_BUFFER_LENGTH",
+-	"ERR 10: VLSB_FAULT",
+-	"ERR 11: ILLEGAL_MEM_ADDR",
+-	"ERR 12: ILLEGAL_MEM_SEL",
+-	"ERR 13: ILLEGAL_CONTEXT_ID",
+-	"ERR 14: ILLEGAL_KEY_ADDR",
+-	"ERR 15: 0xF Reserved",
+-	"ERR 16: Zlib_ILLEGAL_MULTI_QUEUE",
+-	"ERR 17: Zlib_ILLEGAL_JOBID_CHANGE",
+-	"ERR 18: CMD_TIMEOUT",
+-	"ERR 19: IDMA0_AXI_SLVERR",
+-	"ERR 20: IDMA0_AXI_DECERR",
+-	"ERR 21: 0x15 Reserved",
+-	"ERR 22: IDMA1_AXI_SLAVE_FAULT",
+-	"ERR 23: IDMA1_AIXI_DECERR",
+-	"ERR 24: 0x18 Reserved",
+-	"ERR 25: ZLIBVHB_AXI_SLVERR",
+-	"ERR 26: ZLIBVHB_AXI_DECERR",
+-	"ERR 27: 0x1B Reserved",
+-	"ERR 27: ZLIB_UNEXPECTED_EOM",
+-	"ERR 27: ZLIB_EXTRA_DATA",
+-	"ERR 30: ZLIB_BTYPE",
+-	"ERR 31: ZLIB_UNDEFINED_SYMBOL",
+-	"ERR 32: ZLIB_UNDEFINED_DISTANCE_S",
+-	"ERR 33: ZLIB_CODE_LENGTH_SYMBOL",
+-	"ERR 34: ZLIB _VHB_ILLEGAL_FETCH",
+-	"ERR 35: ZLIB_UNCOMPRESSED_LEN",
+-	"ERR 36: ZLIB_LIMIT_REACHED",
+-	"ERR 37: ZLIB_CHECKSUM_MISMATCH0",
+-	"ERR 38: ODMA0_AXI_SLVERR",
+-	"ERR 39: ODMA0_AXI_DECERR",
+-	"ERR 40: 0x28 Reserved",
+-	"ERR 41: ODMA1_AXI_SLVERR",
+-	"ERR 42: ODMA1_AXI_DECERR",
+-	"ERR 43: LSB_PARITY_ERR",
++	"ILLEGAL_ENGINE",
++	"ILLEGAL_KEY_ID",
++	"ILLEGAL_FUNCTION_TYPE",
++	"ILLEGAL_FUNCTION_MODE",
++	"ILLEGAL_FUNCTION_ENCRYPT",
++	"ILLEGAL_FUNCTION_SIZE",
++	"Zlib_MISSING_INIT_EOM",
++	"ILLEGAL_FUNCTION_RSVD",
++	"ILLEGAL_BUFFER_LENGTH",
++	"VLSB_FAULT",
++	"ILLEGAL_MEM_ADDR",
++	"ILLEGAL_MEM_SEL",
++	"ILLEGAL_CONTEXT_ID",
++	"ILLEGAL_KEY_ADDR",
++	"0xF Reserved",
++	"Zlib_ILLEGAL_MULTI_QUEUE",
++	"Zlib_ILLEGAL_JOBID_CHANGE",
++	"CMD_TIMEOUT",
++	"IDMA0_AXI_SLVERR",
++	"IDMA0_AXI_DECERR",
++	"0x15 Reserved",
++	"IDMA1_AXI_SLAVE_FAULT",
++	"IDMA1_AIXI_DECERR",
++	"0x18 Reserved",
++	"ZLIBVHB_AXI_SLVERR",
++	"ZLIBVHB_AXI_DECERR",
++	"0x1B Reserved",
++	"ZLIB_UNEXPECTED_EOM",
++	"ZLIB_EXTRA_DATA",
++	"ZLIB_BTYPE",
++	"ZLIB_UNDEFINED_SYMBOL",
++	"ZLIB_UNDEFINED_DISTANCE_S",
++	"ZLIB_CODE_LENGTH_SYMBOL",
++	"ZLIB _VHB_ILLEGAL_FETCH",
++	"ZLIB_UNCOMPRESSED_LEN",
++	"ZLIB_LIMIT_REACHED",
++	"ZLIB_CHECKSUM_MISMATCH0",
++	"ODMA0_AXI_SLVERR",
++	"ODMA0_AXI_DECERR",
++	"0x28 Reserved",
++	"ODMA1_AXI_SLVERR",
++	"ODMA1_AXI_DECERR",
+ };
+ 
+-void ccp_log_error(struct ccp_device *d, int e)
++void ccp_log_error(struct ccp_device *d, unsigned int e)
+ {
+-	dev_err(d->dev, "CCP error: %s (0x%x)\n", ccp_error_codes[e], e);
++	if (WARN_ON(e >= CCP_MAX_ERROR_CODE))
++		return;
++
++	if (e < ARRAY_SIZE(ccp_error_codes))
++		dev_err(d->dev, "CCP error %d: %s\n", e, ccp_error_codes[e]);
++	else
++		dev_err(d->dev, "CCP error %d: Unknown Error\n", e);
+ }
+ 
+ /* List of CCPs, CCP count, read-write access lock, and access functions
+diff --git a/drivers/crypto/ccp/ccp-dev.h b/drivers/crypto/ccp/ccp-dev.h
+index 6810b65c1939..7442b0422f8a 100644
+--- a/drivers/crypto/ccp/ccp-dev.h
++++ b/drivers/crypto/ccp/ccp-dev.h
+@@ -632,7 +632,7 @@ struct ccp5_desc {
+ void ccp_add_device(struct ccp_device *ccp);
+ void ccp_del_device(struct ccp_device *ccp);
+ 
+-extern void ccp_log_error(struct ccp_device *, int);
++extern void ccp_log_error(struct ccp_device *, unsigned int);
+ 
+ struct ccp_device *ccp_alloc_struct(struct sp_device *sp);
+ bool ccp_queues_suspended(struct ccp_device *ccp);
+diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
+index 267a367bd076..a17de7c9841b 100644
+--- a/drivers/crypto/ccp/ccp-ops.c
++++ b/drivers/crypto/ccp/ccp-ops.c
+@@ -625,6 +625,7 @@ static int ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q,
+ 
+ 	unsigned long long *final;
+ 	unsigned int dm_offset;
++	unsigned int jobid;
+ 	unsigned int ilen;
+ 	bool in_place = true; /* Default value */
+ 	int ret;
+@@ -663,9 +664,11 @@ static int ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q,
+ 		p_tag = scatterwalk_ffwd(sg_tag, p_inp, ilen);
+ 	}
+ 
++	jobid = CCP_NEW_JOBID(cmd_q->ccp);
++
+ 	memset(&op, 0, sizeof(op));
+ 	op.cmd_q = cmd_q;
+-	op.jobid = CCP_NEW_JOBID(cmd_q->ccp);
++	op.jobid = jobid;
+ 	op.sb_key = cmd_q->sb_key; /* Pre-allocated */
+ 	op.sb_ctx = cmd_q->sb_ctx; /* Pre-allocated */
+ 	op.init = 1;
+@@ -816,6 +819,14 @@ static int ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q,
+ 	final[0] = cpu_to_be64(aes->aad_len * 8);
+ 	final[1] = cpu_to_be64(ilen * 8);
+ 
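++	/* Reinitialize the op for the final GHASH pass, reusing the same jobid */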
++	memset(&op, 0, sizeof(op));
++	op.cmd_q = cmd_q;
++	op.jobid = jobid;
++	op.sb_key = cmd_q->sb_key; /* Pre-allocated */
++	op.sb_ctx = cmd_q->sb_ctx; /* Pre-allocated */
++	op.init = 1;
++	op.u.aes.type = aes->type;
+ 	op.u.aes.mode = CCP_AES_MODE_GHASH;
+ 	op.u.aes.action = CCP_AES_GHASHFINAL;
+ 	op.src.type = CCP_MEMTYPE_SYSTEM;
+@@ -843,7 +853,8 @@ static int ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q,
+ 		if (ret)
+ 			goto e_tag;
+ 
+-		ret = memcmp(tag.address, final_wa.address, AES_BLOCK_SIZE);
++		ret = crypto_memneq(tag.address, final_wa.address,
++				    AES_BLOCK_SIZE) ? -EBADMSG : 0;
+ 		ccp_dm_free(&tag);
+ 	}
+ 
+diff --git a/drivers/crypto/inside-secure/safexcel_cipher.c b/drivers/crypto/inside-secure/safexcel_cipher.c
+index 7ef30a98cb24..23fb85f4b3cc 100644
+--- a/drivers/crypto/inside-secure/safexcel_cipher.c
++++ b/drivers/crypto/inside-secure/safexcel_cipher.c
+@@ -51,6 +51,8 @@ struct safexcel_cipher_ctx {
+ 
+ struct safexcel_cipher_req {
+ 	enum safexcel_cipher_direction direction;
++	/* Number of result descriptors associated to the request */
++	unsigned int rdescs;
+ 	bool needs_inv;
+ };
+ 
+@@ -333,7 +335,11 @@ static int safexcel_handle_req_result(struct safexcel_crypto_priv *priv, int rin
+ 
+ 	*ret = 0;
+ 
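++	/* The send path recorded how many result descriptors to reap */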
+-	do {
++	if (unlikely(!sreq->rdescs))
++		return 0;
++
++	while (sreq->rdescs--) {
+ 		rdesc = safexcel_ring_next_rptr(priv, &priv->ring[ring].rdr);
+ 		if (IS_ERR(rdesc)) {
+ 			dev_err(priv->dev,
+@@ -346,7 +351,7 @@ static int safexcel_handle_req_result(struct safexcel_crypto_priv *priv, int rin
+ 			*ret = safexcel_rdesc_check_errors(priv, rdesc);
+ 
+ 		ndesc++;
+-	} while (!rdesc->last_seg);
++	}
+ 
+ 	safexcel_complete(priv, ring);
+ 
+@@ -501,6 +506,7 @@ cdesc_rollback:
+ static int safexcel_handle_inv_result(struct safexcel_crypto_priv *priv,
+ 				      int ring,
+ 				      struct crypto_async_request *base,
++				      struct safexcel_cipher_req *sreq,
+ 				      bool *should_complete, int *ret)
+ {
+ 	struct safexcel_cipher_ctx *ctx = crypto_tfm_ctx(base->tfm);
+@@ -509,7 +515,10 @@ static int safexcel_handle_inv_result(struct safexcel_crypto_priv *priv,
+ 
+ 	*ret = 0;
+ 
+-	do {
++	if (unlikely(!sreq->rdescs))
++		return 0;
++
++	while (sreq->rdescs--) {
+ 		rdesc = safexcel_ring_next_rptr(priv, &priv->ring[ring].rdr);
+ 		if (IS_ERR(rdesc)) {
+ 			dev_err(priv->dev,
+@@ -522,7 +531,7 @@ static int safexcel_handle_inv_result(struct safexcel_crypto_priv *priv,
+ 			*ret = safexcel_rdesc_check_errors(priv, rdesc);
+ 
+ 		ndesc++;
+-	} while (!rdesc->last_seg);
++	}
+ 
+ 	safexcel_complete(priv, ring);
+ 
+@@ -564,7 +573,7 @@ static int safexcel_skcipher_handle_result(struct safexcel_crypto_priv *priv,
+ 
+ 	if (sreq->needs_inv) {
+ 		sreq->needs_inv = false;
+-		err = safexcel_handle_inv_result(priv, ring, async,
++		err = safexcel_handle_inv_result(priv, ring, async, sreq,
+ 						 should_complete, ret);
+ 	} else {
+ 		err = safexcel_handle_req_result(priv, ring, async, req->src,
+@@ -587,7 +596,7 @@ static int safexcel_aead_handle_result(struct safexcel_crypto_priv *priv,
+ 
+ 	if (sreq->needs_inv) {
+ 		sreq->needs_inv = false;
+-		err = safexcel_handle_inv_result(priv, ring, async,
++		err = safexcel_handle_inv_result(priv, ring, async, sreq,
+ 						 should_complete, ret);
+ 	} else {
+ 		err = safexcel_handle_req_result(priv, ring, async, req->src,
+@@ -633,6 +642,8 @@ static int safexcel_skcipher_send(struct crypto_async_request *async, int ring,
+ 		ret = safexcel_send_req(async, ring, sreq, req->src,
+ 					req->dst, req->cryptlen, 0, 0, req->iv,
+ 					commands, results);
++
++	sreq->rdescs = *results;
+ 	return ret;
+ }
+ 
+@@ -655,6 +666,7 @@ static int safexcel_aead_send(struct crypto_async_request *async, int ring,
+ 					req->cryptlen, req->assoclen,
+ 					crypto_aead_authsize(tfm), req->iv,
+ 					commands, results);
++	sreq->rdescs = *results;
+ 	return ret;
+ }
+ 
+diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
+index becc654e0cd3..82d3625667cd 100644
+--- a/drivers/crypto/talitos.c
++++ b/drivers/crypto/talitos.c
+@@ -1001,7 +1001,6 @@ static void ipsec_esp_encrypt_done(struct device *dev,
+ 	unsigned int authsize = crypto_aead_authsize(authenc);
+ 	unsigned int ivsize = crypto_aead_ivsize(authenc);
+ 	struct talitos_edesc *edesc;
+-	struct scatterlist *sg;
+ 	void *icvdata;
+ 
+ 	edesc = container_of(desc, struct talitos_edesc, desc);
+@@ -1015,9 +1014,8 @@ static void ipsec_esp_encrypt_done(struct device *dev,
+ 		else
+ 			icvdata = &edesc->link_tbl[edesc->src_nents +
+ 						   edesc->dst_nents + 2];
+-		sg = sg_last(areq->dst, edesc->dst_nents);
+-		memcpy((char *)sg_virt(sg) + sg->length - authsize,
+-		       icvdata, authsize);
++		sg_pcopy_from_buffer(areq->dst, edesc->dst_nents ? : 1, icvdata,
++				     authsize, areq->assoclen + areq->cryptlen);
+ 	}
+ 
+ 	dma_unmap_single(dev, edesc->iv_dma, ivsize, DMA_TO_DEVICE);
+@@ -1035,7 +1033,6 @@ static void ipsec_esp_decrypt_swauth_done(struct device *dev,
+ 	struct crypto_aead *authenc = crypto_aead_reqtfm(req);
+ 	unsigned int authsize = crypto_aead_authsize(authenc);
+ 	struct talitos_edesc *edesc;
+-	struct scatterlist *sg;
+ 	char *oicv, *icv;
+ 	struct talitos_private *priv = dev_get_drvdata(dev);
+ 	bool is_sec1 = has_ftr_sec1(priv);
+@@ -1045,9 +1042,18 @@ static void ipsec_esp_decrypt_swauth_done(struct device *dev,
+ 	ipsec_esp_unmap(dev, edesc, req);
+ 
+ 	if (!err) {
++		char icvdata[SHA512_DIGEST_SIZE];
++		int nents = edesc->dst_nents ? : 1;
++		unsigned int len = req->assoclen + req->cryptlen;
++
+ 		/* auth check */
+-		sg = sg_last(req->dst, edesc->dst_nents ? : 1);
+-		icv = (char *)sg_virt(sg) + sg->length - authsize;
++		if (nents > 1) {
++			sg_pcopy_to_buffer(req->dst, nents, icvdata, authsize,
++					   len - authsize);
++			icv = icvdata;
++		} else {
++			icv = (char *)sg_virt(req->dst) + len - authsize;
++		}
+ 
+ 		if (edesc->dma_len) {
+ 			if (is_sec1)
+@@ -1463,7 +1469,6 @@ static int aead_decrypt(struct aead_request *req)
+ 	struct talitos_ctx *ctx = crypto_aead_ctx(authenc);
+ 	struct talitos_private *priv = dev_get_drvdata(ctx->dev);
+ 	struct talitos_edesc *edesc;
+-	struct scatterlist *sg;
+ 	void *icvdata;
+ 
+ 	req->cryptlen -= authsize;
+@@ -1497,9 +1502,8 @@ static int aead_decrypt(struct aead_request *req)
+ 	else
+ 		icvdata = &edesc->link_tbl[0];
+ 
+-	sg = sg_last(req->src, edesc->src_nents ? : 1);
+-
+-	memcpy(icvdata, (char *)sg_virt(sg) + sg->length - authsize, authsize);
++	sg_pcopy_to_buffer(req->src, edesc->src_nents ? : 1, icvdata, authsize,
++			   req->assoclen + req->cryptlen - authsize);
+ 
+ 	return ipsec_esp(edesc, req, ipsec_esp_decrypt_swauth_done);
+ }
+@@ -1553,11 +1557,15 @@ static void ablkcipher_done(struct device *dev,
+ 			    int err)
+ {
+ 	struct ablkcipher_request *areq = context;
++	struct crypto_ablkcipher *cipher = crypto_ablkcipher_reqtfm(areq);
++	struct talitos_ctx *ctx = crypto_ablkcipher_ctx(cipher);
++	unsigned int ivsize = crypto_ablkcipher_ivsize(cipher);
+ 	struct talitos_edesc *edesc;
+ 
+ 	edesc = container_of(desc, struct talitos_edesc, desc);
+ 
+ 	common_nonsnoop_unmap(dev, edesc, areq);
++	memcpy(areq->info, ctx->iv, ivsize);
+ 
+ 	kfree(edesc);
+ 
+@@ -3184,7 +3192,10 @@ static struct talitos_crypto_alg *talitos_alg_alloc(struct device *dev,
+ 		alg->cra_priority = t_alg->algt.priority;
+ 	else
+ 		alg->cra_priority = TALITOS_CRA_PRIORITY;
+-	alg->cra_alignmask = 0;
++	if (has_ftr_sec1(priv))
++		alg->cra_alignmask = 3;
++	else
++		alg->cra_alignmask = 0;
+ 	alg->cra_ctxsize = sizeof(struct talitos_ctx);
+ 	alg->cra_flags |= CRYPTO_ALG_KERN_DRIVER_ONLY;
+ 
+diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
+index 248c440c10f2..4ec84a633bd3 100644
+--- a/drivers/dma/imx-sdma.c
++++ b/drivers/dma/imx-sdma.c
+@@ -2096,27 +2096,6 @@ static int sdma_probe(struct platform_device *pdev)
+ 	if (pdata && pdata->script_addrs)
+ 		sdma_add_scripts(sdma, pdata->script_addrs);
+ 
+-	if (pdata) {
+-		ret = sdma_get_firmware(sdma, pdata->fw_name);
+-		if (ret)
+-			dev_warn(&pdev->dev, "failed to get firmware from platform data\n");
+-	} else {
+-		/*
+-		 * Because that device tree does not encode ROM script address,
+-		 * the RAM script in firmware is mandatory for device tree
+-		 * probe, otherwise it fails.
+-		 */
+-		ret = of_property_read_string(np, "fsl,sdma-ram-script-name",
+-					      &fw_name);
+-		if (ret)
+-			dev_warn(&pdev->dev, "failed to get firmware name\n");
+-		else {
+-			ret = sdma_get_firmware(sdma, fw_name);
+-			if (ret)
+-				dev_warn(&pdev->dev, "failed to get firmware from device tree\n");
+-		}
+-	}
+-
+ 	sdma->dma_device.dev = &pdev->dev;
+ 
+ 	sdma->dma_device.device_alloc_chan_resources = sdma_alloc_chan_resources;
+@@ -2161,6 +2140,33 @@ static int sdma_probe(struct platform_device *pdev)
+ 		of_node_put(spba_bus);
+ 	}
+ 
++	/*
++	 * Kick off firmware loading as the very last step:
++	 * attempt to load firmware only if we're not on the error path, because
++	 * the firmware callback requires a fully functional and allocated sdma
++	 * instance.
++	 */
++	if (pdata) {
++		ret = sdma_get_firmware(sdma, pdata->fw_name);
++		if (ret)
++			dev_warn(&pdev->dev, "failed to get firmware from platform data\n");
++	} else {
++		/*
++		 * Because the device tree does not encode the ROM script
++		 * address, the RAM script in the firmware is mandatory for a
++		 * device tree probe; otherwise it fails.
++		 */
++		ret = of_property_read_string(np, "fsl,sdma-ram-script-name",
++					      &fw_name);
++		if (ret) {
++			dev_warn(&pdev->dev, "failed to get firmware name\n");
++		} else {
++			ret = sdma_get_firmware(sdma, fw_name);
++			if (ret)
++				dev_warn(&pdev->dev, "failed to get firmware from device tree\n");
++		}
++	}
++
+ 	return 0;
+ 
+ err_register:
+diff --git a/drivers/edac/edac_mc_sysfs.c b/drivers/edac/edac_mc_sysfs.c
+index 464174685589..4386ea4b9b5a 100644
+--- a/drivers/edac/edac_mc_sysfs.c
++++ b/drivers/edac/edac_mc_sysfs.c
+@@ -26,7 +26,7 @@
+ static int edac_mc_log_ue = 1;
+ static int edac_mc_log_ce = 1;
+ static int edac_mc_panic_on_ue;
+-static int edac_mc_poll_msec = 1000;
++static unsigned int edac_mc_poll_msec = 1000;
+ 
+ /* Getter functions for above */
+ int edac_mc_get_log_ue(void)
+@@ -45,30 +45,30 @@ int edac_mc_get_panic_on_ue(void)
+ }
+ 
+ /* this is temporary */
+-int edac_mc_get_poll_msec(void)
++unsigned int edac_mc_get_poll_msec(void)
+ {
+ 	return edac_mc_poll_msec;
+ }
+ 
+ static int edac_set_poll_msec(const char *val, const struct kernel_param *kp)
+ {
+-	unsigned long l;
++	unsigned int i;
+ 	int ret;
+ 
+ 	if (!val)
+ 		return -EINVAL;
+ 
+-	ret = kstrtoul(val, 0, &l);
++	ret = kstrtouint(val, 0, &i);
+ 	if (ret)
+ 		return ret;
+ 
+-	if (l < 1000)
++	if (i < 1000)
+ 		return -EINVAL;
+ 
+-	*((unsigned long *)kp->arg) = l;
++	*((unsigned int *)kp->arg) = i;
+ 
+ 	/* notify edac_mc engine to reset the poll period */
+-	edac_mc_reset_delay_period(l);
++	edac_mc_reset_delay_period(i);
+ 
+ 	return 0;
+ }
+@@ -82,7 +82,7 @@ MODULE_PARM_DESC(edac_mc_log_ue,
+ module_param(edac_mc_log_ce, int, 0644);
+ MODULE_PARM_DESC(edac_mc_log_ce,
+ 		 "Log correctable error to console: 0=off 1=on");
+-module_param_call(edac_mc_poll_msec, edac_set_poll_msec, param_get_int,
++module_param_call(edac_mc_poll_msec, edac_set_poll_msec, param_get_uint,
+ 		  &edac_mc_poll_msec, 0644);
+ MODULE_PARM_DESC(edac_mc_poll_msec, "Polling period in milliseconds");
+ 
+@@ -404,6 +404,8 @@ static inline int nr_pages_per_csrow(struct csrow_info *csrow)
+ static int edac_create_csrow_object(struct mem_ctl_info *mci,
+ 				    struct csrow_info *csrow, int index)
+ {
++	int err;
++
+ 	csrow->dev.type = &csrow_attr_type;
+ 	csrow->dev.groups = csrow_dev_groups;
+ 	device_initialize(&csrow->dev);
+@@ -415,7 +417,12 @@ static int edac_create_csrow_object(struct mem_ctl_info *mci,
+ 	edac_dbg(0, "creating (virtual) csrow node %s\n",
+ 		 dev_name(&csrow->dev));
+ 
+-	return device_add(&csrow->dev);
++	err = device_add(&csrow->dev);
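++	/* On failure, drop the reference taken by device_initialize() */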
++	if (err)
++		put_device(&csrow->dev);
++
++	return err;
+ }
+ 
+ /* Create a CSROW object under specified edac_mc_device */
+@@ -443,7 +449,8 @@ error:
+ 		csrow = mci->csrows[i];
+ 		if (!nr_pages_per_csrow(csrow))
+ 			continue;
+-		put_device(&mci->csrows[i]->dev);
++
++		device_del(&mci->csrows[i]->dev);
+ 	}
+ 
+ 	return err;
+@@ -645,9 +652,11 @@ static int edac_create_dimm_object(struct mem_ctl_info *mci,
+ 	dev_set_drvdata(&dimm->dev, dimm);
+ 	pm_runtime_forbid(&mci->dev);
+ 
+-	err =  device_add(&dimm->dev);
++	err = device_add(&dimm->dev);
++	if (err)
++		put_device(&dimm->dev);
+ 
+-	edac_dbg(0, "creating rank/dimm device %s\n", dev_name(&dimm->dev));
++	edac_dbg(0, "created rank/dimm device %s\n", dev_name(&dimm->dev));
+ 
+ 	return err;
+ }
+@@ -928,6 +937,7 @@ int edac_create_sysfs_mci_device(struct mem_ctl_info *mci,
+ 	err = device_add(&mci->dev);
+ 	if (err < 0) {
+ 		edac_dbg(1, "failure: create device %s\n", dev_name(&mci->dev));
++		put_device(&mci->dev);
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/edac/edac_module.h b/drivers/edac/edac_module.h
+index dd7d0b509aa3..75528f07abd5 100644
+--- a/drivers/edac/edac_module.h
++++ b/drivers/edac/edac_module.h
+@@ -36,7 +36,7 @@ extern int edac_mc_get_log_ue(void);
+ extern int edac_mc_get_log_ce(void);
+ extern int edac_mc_get_panic_on_ue(void);
+ extern int edac_get_poll_msec(void);
+-extern int edac_mc_get_poll_msec(void);
++extern unsigned int edac_mc_get_poll_msec(void);
+ 
+ unsigned edac_dimm_info_location(struct dimm_info *dimm, char *buf,
+ 				 unsigned len);
+diff --git a/drivers/gpio/gpio-omap.c b/drivers/gpio/gpio-omap.c
+index fafd79438bbf..1ddc872b4e4b 100644
+--- a/drivers/gpio/gpio-omap.c
++++ b/drivers/gpio/gpio-omap.c
+@@ -838,9 +838,9 @@ static void omap_gpio_irq_shutdown(struct irq_data *d)
+ 
+ 	raw_spin_lock_irqsave(&bank->lock, flags);
+ 	bank->irq_usage &= ~(BIT(offset));
+-	omap_set_gpio_irqenable(bank, offset, 0);
+-	omap_clear_gpio_irqstatus(bank, offset);
+ 	omap_set_gpio_triggering(bank, offset, IRQ_TYPE_NONE);
++	omap_clear_gpio_irqstatus(bank, offset);
++	omap_set_gpio_irqenable(bank, offset, 0);
+ 	if (!LINE_USED(bank->mod_usage, offset))
+ 		omap_clear_gpio_debounce(bank, offset);
+ 	omap_disable_gpio_module(bank, offset);
+@@ -876,8 +876,8 @@ static void omap_gpio_mask_irq(struct irq_data *d)
+ 	unsigned long flags;
+ 
+ 	raw_spin_lock_irqsave(&bank->lock, flags);
+-	omap_set_gpio_irqenable(bank, offset, 0);
+ 	omap_set_gpio_triggering(bank, offset, IRQ_TYPE_NONE);
++	omap_set_gpio_irqenable(bank, offset, 0);
+ 	raw_spin_unlock_irqrestore(&bank->lock, flags);
+ }
+ 
+@@ -889,9 +889,6 @@ static void omap_gpio_unmask_irq(struct irq_data *d)
+ 	unsigned long flags;
+ 
+ 	raw_spin_lock_irqsave(&bank->lock, flags);
+-	if (trigger)
+-		omap_set_gpio_triggering(bank, offset, trigger);
+-
+ 	omap_set_gpio_irqenable(bank, offset, 1);
+ 
+ 	/*
+@@ -899,9 +896,13 @@ static void omap_gpio_unmask_irq(struct irq_data *d)
+ 	 * is cleared, thus after the handler has run. OMAP4 needs this done
+ 	 * after enabling the interrupt to clear the wakeup status.
+ 	 */
+-	if (bank->level_mask & BIT(offset))
++	if (bank->regs->leveldetect0 && bank->regs->wkup_en &&
++	    trigger & (IRQ_TYPE_LEVEL_HIGH | IRQ_TYPE_LEVEL_LOW))
+ 		omap_clear_gpio_irqstatus(bank, offset);
+ 
++	if (trigger)
++		omap_set_gpio_triggering(bank, offset, trigger);
++
+ 	raw_spin_unlock_irqrestore(&bank->lock, flags);
+ }
+ 
+@@ -1454,7 +1455,7 @@ static void omap_gpio_idle(struct gpio_bank *bank, bool may_lose_context)
+ {
+ 	struct device *dev = bank->chip.parent;
+ 	void __iomem *base = bank->base;
+-	u32 nowake;
++	u32 mask, nowake;
+ 
+ 	bank->saved_datain = readl_relaxed(base + bank->regs->datain);
+ 
+@@ -1464,6 +1465,16 @@ static void omap_gpio_idle(struct gpio_bank *bank, bool may_lose_context)
+ 	if (!bank->enabled_non_wakeup_gpios)
+ 		goto update_gpio_context_count;
+ 
++	/* Check for pending EDGE_FALLING, ignore EDGE_BOTH */
++	mask = bank->enabled_non_wakeup_gpios & bank->context.fallingdetect;
++	mask &= ~bank->context.risingdetect;
++	bank->saved_datain |= mask;
++
++	/* Check for pending EDGE_RISING, ignore EDGE_BOTH */
++	mask = bank->enabled_non_wakeup_gpios & bank->context.risingdetect;
++	mask &= ~bank->context.fallingdetect;
++	bank->saved_datain &= ~mask;
++
+ 	if (!may_lose_context)
+ 		goto update_gpio_context_count;
+ 
+@@ -1728,6 +1739,8 @@ static struct omap_gpio_reg_offs omap4_gpio_regs = {
+ 	.clr_dataout =		OMAP4_GPIO_CLEARDATAOUT,
+ 	.irqstatus =		OMAP4_GPIO_IRQSTATUS0,
+ 	.irqstatus2 =		OMAP4_GPIO_IRQSTATUS1,
++	.irqstatus_raw0 =	OMAP4_GPIO_IRQSTATUSRAW0,
++	.irqstatus_raw1 =	OMAP4_GPIO_IRQSTATUSRAW1,
+ 	.irqenable =		OMAP4_GPIO_IRQSTATUSSET0,
+ 	.irqenable2 =		OMAP4_GPIO_IRQSTATUSSET1,
+ 	.set_irqenable =	OMAP4_GPIO_IRQSTATUSSET0,
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index bca3e7740ef6..b8a5c1e3b99d 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -3012,7 +3012,7 @@ int gpiod_get_array_value_complex(bool raw, bool can_sleep,
+ int gpiod_get_raw_value(const struct gpio_desc *desc)
+ {
+ 	VALIDATE_DESC(desc);
+-	/* Should be using gpio_get_value_cansleep() */
++	/* Should be using gpiod_get_raw_value_cansleep() */
+ 	WARN_ON(desc->gdev->chip->can_sleep);
+ 	return gpiod_get_raw_value_commit(desc);
+ }
+@@ -3033,7 +3033,7 @@ int gpiod_get_value(const struct gpio_desc *desc)
+ 	int value;
+ 
+ 	VALIDATE_DESC(desc);
+-	/* Should be using gpio_get_value_cansleep() */
++	/* Should be using gpiod_get_value_cansleep() */
+ 	WARN_ON(desc->gdev->chip->can_sleep);
+ 
+ 	value = gpiod_get_raw_value_commit(desc);
+@@ -3304,7 +3304,7 @@ int gpiod_set_array_value_complex(bool raw, bool can_sleep,
+ void gpiod_set_raw_value(struct gpio_desc *desc, int value)
+ {
+ 	VALIDATE_DESC_VOID(desc);
+-	/* Should be using gpiod_set_value_cansleep() */
++	/* Should be using gpiod_set_raw_value_cansleep() */
+ 	WARN_ON(desc->gdev->chip->can_sleep);
+ 	gpiod_set_raw_value_commit(desc, value);
+ }
+@@ -3345,6 +3345,7 @@ static void gpiod_set_value_nocheck(struct gpio_desc *desc, int value)
+ void gpiod_set_value(struct gpio_desc *desc, int value)
+ {
+ 	VALIDATE_DESC_VOID(desc);
++	/* Should be using gpiod_set_value_cansleep() */
+ 	WARN_ON(desc->gdev->chip->can_sleep);
+ 	gpiod_set_value_nocheck(desc, value);
+ }
+@@ -4232,8 +4233,7 @@ EXPORT_SYMBOL_GPL(gpiod_get_index);
+  *
+  * Returns:
+  * On successful request the GPIO pin is configured in accordance with
+- * provided @dflags. If the node does not have the requested GPIO
+- * property, NULL is returned.
++ * provided @dflags.
+  *
+  * In case of error an ERR_PTR() is returned.
+  */
+@@ -4255,9 +4255,6 @@ struct gpio_desc *gpiod_get_from_of_node(struct device_node *node,
+ 					index, &flags);
+ 
+ 	if (!desc || IS_ERR(desc)) {
+-		/* If it is not there, just return NULL */
+-		if (PTR_ERR(desc) == -ENOENT)
+-			return NULL;
+ 		return desc;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index d94778915312..785747a14765 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -1349,6 +1349,7 @@ MODULE_PARM_DESC(edid_fixup,
+ 
+ static void drm_get_displayid(struct drm_connector *connector,
+ 			      struct edid *edid);
++static int validate_displayid(u8 *displayid, int length, int idx);
+ 
+ static int drm_edid_block_checksum(const u8 *raw_edid)
+ {
+@@ -2932,16 +2933,46 @@ static u8 *drm_find_edid_extension(const struct edid *edid, int ext_id)
+ 	return edid_ext;
+ }
+ 
+-static u8 *drm_find_cea_extension(const struct edid *edid)
+-{
+-	return drm_find_edid_extension(edid, CEA_EXT);
+-}
+ 
+ static u8 *drm_find_displayid_extension(const struct edid *edid)
+ {
+ 	return drm_find_edid_extension(edid, DISPLAYID_EXT);
+ }
+ 
++static u8 *drm_find_cea_extension(const struct edid *edid)
++{
++	int ret;
++	int idx = 1;
++	int length = EDID_LENGTH;
++	struct displayid_block *block;
++	u8 *cea;
++	u8 *displayid;
++
++	/* Look for a top level CEA extension block */
++	cea = drm_find_edid_extension(edid, CEA_EXT);
++	if (cea)
++		return cea;
++
++	/* CEA blocks can also be found embedded in a DisplayID block */
++	displayid = drm_find_displayid_extension(edid);
++	if (!displayid)
++		return NULL;
++
++	ret = validate_displayid(displayid, length, idx);
++	if (ret)
++		return NULL;
++
++	idx += sizeof(struct displayid_hdr);
++	for_each_displayid_db(displayid, block, idx, length) {
++		if (block->tag == DATA_BLOCK_CTA) {
++			cea = (u8 *)block;
++			break;
++		}
++	}
++
++	return cea;
++}
++
+ /*
+  * Calculate the alternate clock for the CEA mode
+  * (60Hz vs. 59.94Hz etc.)
+@@ -3665,13 +3696,38 @@ cea_revision(const u8 *cea)
+ static int
+ cea_db_offsets(const u8 *cea, int *start, int *end)
+ {
+-	/* Data block offset in CEA extension block */
+-	*start = 4;
+-	*end = cea[2];
+-	if (*end == 0)
+-		*end = 127;
+-	if (*end < 4 || *end > 127)
+-		return -ERANGE;
++	/* DisplayID CTA extension blocks and top-level CEA EDID
++	 * block header definitions differ in the following bytes:
++	 *   1) Byte 2 of the header specifies length differently,
++	 *   2) Byte 3 is only present in the CEA top level block.
++	 *
++	 * The different definitions for byte 2 follow.
++	 *
++	 * DisplayID CTA extension block defines byte 2 as:
++	 *   Number of payload bytes
++	 *
++	 * CEA EDID block defines byte 2 as:
++	 *   Byte number (decimal) within this block where the 18-byte
++	 *   DTDs begin. If no non-DTD data is present in this extension
++	 *   block, the value should be set to 04h (the byte after next).
++	 *   If set to 00h, there are no DTDs present in this block and
++	 *   no non-DTD data.
++	 */
++	if (cea[0] == DATA_BLOCK_CTA) {
++		*start = 3;
++		*end = *start + cea[2];
++	} else if (cea[0] == CEA_EXT) {
++		/* Data block offset in CEA extension block */
++		*start = 4;
++		*end = cea[2];
++		if (*end == 0)
++			*end = 127;
++		if (*end < 4 || *end > 127)
++			return -ERANGE;
++	} else {
++		return -ENOTSUPP;
++	}
++
+ 	return 0;
+ }
+ 
+@@ -5219,6 +5275,9 @@ static int drm_parse_display_id(struct drm_connector *connector,
+ 		case DATA_BLOCK_TYPE_1_DETAILED_TIMING:
+ 			/* handled in mode gathering code. */
+ 			break;
++		case DATA_BLOCK_CTA:
++			/* handled in the cea parser code. */
++			break;
+ 		default:
+ 			DRM_DEBUG_KMS("found DisplayID tag 0x%x, unhandled\n", block->tag);
+ 			break;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c
+index ecacb22834d7..719345074711 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c
+@@ -184,6 +184,25 @@ nvkm_i2c_fini(struct nvkm_subdev *subdev, bool suspend)
+ 	return 0;
+ }
+ 
++static int
++nvkm_i2c_preinit(struct nvkm_subdev *subdev)
++{
++	struct nvkm_i2c *i2c = nvkm_i2c(subdev);
++	struct nvkm_i2c_bus *bus;
++	struct nvkm_i2c_pad *pad;
++
++	/*
++	 * We init our i2c busses as early as possible, since they may be
++	 * needed by the vbios init scripts on some cards.
++	 */
++	list_for_each_entry(pad, &i2c->pad, head)
++		nvkm_i2c_pad_init(pad);
++	list_for_each_entry(bus, &i2c->bus, head)
++		nvkm_i2c_bus_init(bus);
++
++	return 0;
++}
++
+ static int
+ nvkm_i2c_init(struct nvkm_subdev *subdev)
+ {
+@@ -238,6 +257,7 @@ nvkm_i2c_dtor(struct nvkm_subdev *subdev)
+ static const struct nvkm_subdev_func
+ nvkm_i2c = {
+ 	.dtor = nvkm_i2c_dtor,
++	.preinit = nvkm_i2c_preinit,
+ 	.init = nvkm_i2c_init,
+ 	.fini = nvkm_i2c_fini,
+ 	.intr = nvkm_i2c_intr,
+diff --git a/drivers/gpu/ipu-v3/ipu-ic.c b/drivers/gpu/ipu-v3/ipu-ic.c
+index 594c3cbc8291..18816ccf600e 100644
+--- a/drivers/gpu/ipu-v3/ipu-ic.c
++++ b/drivers/gpu/ipu-v3/ipu-ic.c
+@@ -257,7 +257,7 @@ static int init_csc(struct ipu_ic *ic,
+ 	writel(param, base++);
+ 
+ 	param = ((a[0] & 0x1fe0) >> 5) | (params->scale << 8) |
+-		(params->sat << 9);
++		(params->sat << 10);
+ 	writel(param, base++);
+ 
+ 	param = ((a[1] & 0x1f) << 27) | ((c[0][1] & 0x1ff) << 18) |
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index a8633b1437b2..2e3e03df83da 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -307,6 +307,9 @@ static void wacom_feature_mapping(struct hid_device *hdev,
+ 	wacom_hid_usage_quirk(hdev, field, usage);
+ 
+ 	switch (equivalent_usage) {
++	case WACOM_HID_WD_TOUCH_RING_SETTING:
++		wacom->generic_has_leds = true;
++		break;
+ 	case HID_DG_CONTACTMAX:
+ 		/* leave touch_max as is if predefined */
+ 		if (!features->touch_max) {
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 09b8e4aac82f..447394cc4222 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -1930,8 +1930,6 @@ static void wacom_wac_pad_usage_mapping(struct hid_device *hdev,
+ 		features->device_type |= WACOM_DEVICETYPE_PAD;
+ 		break;
+ 	case WACOM_HID_WD_BUTTONCENTER:
+-		wacom->generic_has_leds = true;
+-		/* fall through */
+ 	case WACOM_HID_WD_BUTTONHOME:
+ 	case WACOM_HID_WD_BUTTONUP:
+ 	case WACOM_HID_WD_BUTTONDOWN:
+@@ -2123,14 +2121,12 @@ static void wacom_wac_pad_report(struct hid_device *hdev,
+ 	bool active = wacom_wac->hid_data.inrange_state != 0;
+ 
+ 	/* report prox for expresskey events */
+-	if ((wacom_equivalent_usage(field->physical) == HID_DG_TABLETFUNCTIONKEY) &&
+-	    wacom_wac->hid_data.pad_input_event_flag) {
++	if (wacom_wac->hid_data.pad_input_event_flag) {
+ 		input_event(input, EV_ABS, ABS_MISC, active ? PAD_DEVICE_ID : 0);
+ 		input_sync(input);
+ 		if (!active)
+ 			wacom_wac->hid_data.pad_input_event_flag = false;
+ 	}
+-
+ }
+ 
+ static void wacom_wac_pen_usage_mapping(struct hid_device *hdev,
+@@ -2706,9 +2702,7 @@ static int wacom_wac_collection(struct hid_device *hdev, struct hid_report *repo
+ 	if (report->type != HID_INPUT_REPORT)
+ 		return -1;
+ 
+-	if (WACOM_PAD_FIELD(field) && wacom->wacom_wac.pad_input)
+-		wacom_wac_pad_report(hdev, report, field);
+-	else if (WACOM_PEN_FIELD(field) && wacom->wacom_wac.pen_input)
++	if (WACOM_PEN_FIELD(field) && wacom->wacom_wac.pen_input)
+ 		wacom_wac_pen_report(hdev, report);
+ 	else if (WACOM_FINGER_FIELD(field) && wacom->wacom_wac.touch_input)
+ 		wacom_wac_finger_report(hdev, report);
+@@ -2722,7 +2716,7 @@ void wacom_wac_report(struct hid_device *hdev, struct hid_report *report)
+ 	struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+ 	struct hid_field *field;
+ 	bool pad_in_hid_field = false, pen_in_hid_field = false,
+-		finger_in_hid_field = false;
++		finger_in_hid_field = false, true_pad = false;
+ 	int r;
+ 	int prev_collection = -1;
+ 
+@@ -2738,6 +2732,9 @@ void wacom_wac_report(struct hid_device *hdev, struct hid_report *report)
+ 			pen_in_hid_field = true;
+ 		if (WACOM_FINGER_FIELD(field))
+ 			finger_in_hid_field = true;
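++		/* Only reports with a tablet function-key field are true pad reports */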
++		if (wacom_equivalent_usage(field->physical) == HID_DG_TABLETFUNCTIONKEY)
++			true_pad = true;
+ 	}
+ 
+ 	wacom_wac_battery_pre_report(hdev, report);
+@@ -2761,6 +2757,9 @@ void wacom_wac_report(struct hid_device *hdev, struct hid_report *report)
+ 	}
+ 
+ 	wacom_wac_battery_report(hdev, report);
++
++	if (true_pad && wacom->wacom_wac.pad_input)
++		wacom_wac_pad_report(hdev, report, field);
+ }
+ 
+ static int wacom_bpt_touch(struct wacom_wac *wacom)
+@@ -3717,7 +3716,7 @@ int wacom_setup_touch_input_capabilities(struct input_dev *input_dev,
+ 					     0, 5920, 4, 0);
+ 		}
+ 		input_abs_set_res(input_dev, ABS_MT_POSITION_X, 40);
+-		input_abs_set_res(input_dev, ABS_MT_POSITION_X, 40);
++		input_abs_set_res(input_dev, ABS_MT_POSITION_Y, 40);
+ 
+ 		/* fall through */
+ 
+diff --git a/drivers/hid/wacom_wac.h b/drivers/hid/wacom_wac.h
+index 295fd3718caa..f67d871841c0 100644
+--- a/drivers/hid/wacom_wac.h
++++ b/drivers/hid/wacom_wac.h
+@@ -145,6 +145,7 @@
+ #define WACOM_HID_WD_OFFSETBOTTOM       (WACOM_HID_UP_WACOMDIGITIZER | 0x0d33)
+ #define WACOM_HID_WD_DATAMODE           (WACOM_HID_UP_WACOMDIGITIZER | 0x1002)
+ #define WACOM_HID_WD_DIGITIZERINFO      (WACOM_HID_UP_WACOMDIGITIZER | 0x1013)
++#define WACOM_HID_WD_TOUCH_RING_SETTING (WACOM_HID_UP_WACOMDIGITIZER | 0x1032)
+ #define WACOM_HID_UP_G9                 0xff090000
+ #define WACOM_HID_G9_PEN                (WACOM_HID_UP_G9 | 0x02)
+ #define WACOM_HID_G9_TOUCHSCREEN        (WACOM_HID_UP_G9 | 0x11)
+diff --git a/drivers/hwtracing/intel_th/msu.c b/drivers/hwtracing/intel_th/msu.c
+index 8ff326c0c406..3cdf85b1ce4f 100644
+--- a/drivers/hwtracing/intel_th/msu.c
++++ b/drivers/hwtracing/intel_th/msu.c
+@@ -632,7 +632,7 @@ static int msc_buffer_contig_alloc(struct msc *msc, unsigned long size)
+ 		goto err_out;
+ 
+ 	ret = -ENOMEM;
+-	page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
++	page = alloc_pages(GFP_KERNEL | __GFP_ZERO | GFP_DMA32, order);
+ 	if (!page)
+ 		goto err_free_sgt;
+ 
+diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c
+index 70f2cb90adc5..e759ac0d48be 100644
+--- a/drivers/hwtracing/intel_th/pci.c
++++ b/drivers/hwtracing/intel_th/pci.c
+@@ -170,6 +170,11 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
+ 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x02a6),
+ 		.driver_data = (kernel_ulong_t)&intel_th_2x,
+ 	},
++	{
++		/* Ice Lake NNPI */
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x45c5),
++		.driver_data = (kernel_ulong_t)&intel_th_2x,
++	},
+ 	{ 0 },
+ };
+ 
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index 5f4bd52121fe..7837ea67f1f8 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -91,6 +91,12 @@ void i3c_bus_normaluse_unlock(struct i3c_bus *bus)
+ 	up_read(&bus->lock);
+ }
+ 
++static struct i3c_master_controller *
++i3c_bus_to_i3c_master(struct i3c_bus *i3cbus)
++{
++	return container_of(i3cbus, struct i3c_master_controller, bus);
++}
++
+ static struct i3c_master_controller *dev_to_i3cmaster(struct device *dev)
+ {
+ 	return container_of(dev, struct i3c_master_controller, dev);
+@@ -565,20 +571,39 @@ static const struct device_type i3c_masterdev_type = {
+ 	.groups	= i3c_masterdev_groups,
+ };
+ 
+-int i3c_bus_set_mode(struct i3c_bus *i3cbus, enum i3c_bus_mode mode)
++int i3c_bus_set_mode(struct i3c_bus *i3cbus, enum i3c_bus_mode mode,
++		     unsigned long max_i2c_scl_rate)
+ {
+-	i3cbus->mode = mode;
++	struct i3c_master_controller *master = i3c_bus_to_i3c_master(i3cbus);
+ 
+-	if (!i3cbus->scl_rate.i3c)
+-		i3cbus->scl_rate.i3c = I3C_BUS_TYP_I3C_SCL_RATE;
++	i3cbus->mode = mode;
+ 
+-	if (!i3cbus->scl_rate.i2c) {
+-		if (i3cbus->mode == I3C_BUS_MODE_MIXED_SLOW)
+-			i3cbus->scl_rate.i2c = I3C_BUS_I2C_FM_SCL_RATE;
+-		else
+-			i3cbus->scl_rate.i2c = I3C_BUS_I2C_FM_PLUS_SCL_RATE;
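++	/* Fill in default SCL rates for anything the caller left unset */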
++	switch (i3cbus->mode) {
++	case I3C_BUS_MODE_PURE:
++		if (!i3cbus->scl_rate.i3c)
++			i3cbus->scl_rate.i3c = I3C_BUS_TYP_I3C_SCL_RATE;
++		break;
++	case I3C_BUS_MODE_MIXED_FAST:
++		if (!i3cbus->scl_rate.i3c)
++			i3cbus->scl_rate.i3c = I3C_BUS_TYP_I3C_SCL_RATE;
++		if (!i3cbus->scl_rate.i2c)
++			i3cbus->scl_rate.i2c = max_i2c_scl_rate;
++		break;
++	case I3C_BUS_MODE_MIXED_SLOW:
++		if (!i3cbus->scl_rate.i2c)
++			i3cbus->scl_rate.i2c = max_i2c_scl_rate;
++		if (!i3cbus->scl_rate.i3c ||
++		    i3cbus->scl_rate.i3c > i3cbus->scl_rate.i2c)
++			i3cbus->scl_rate.i3c = i3cbus->scl_rate.i2c;
++		break;
++	default:
++		return -EINVAL;
+ 	}
+ 
++	dev_dbg(&master->dev, "i2c-scl = %ld Hz i3c-scl = %ld Hz\n",
++		i3cbus->scl_rate.i2c, i3cbus->scl_rate.i3c);
++
+ 	/*
+ 	 * I3C/I2C frequency may have been overridden, check that user-provided
+ 	 * values are not exceeding max possible frequency.
+@@ -1966,9 +1990,6 @@ of_i3c_master_add_i2c_boardinfo(struct i3c_master_controller *master,
+ 	/* LVR is encoded in reg[2]. */
+ 	boardinfo->lvr = reg[2];
+ 
+-	if (boardinfo->lvr & I3C_LVR_I2C_FM_MODE)
+-		master->bus.scl_rate.i2c = I3C_BUS_I2C_FM_SCL_RATE;
+-
+ 	list_add_tail(&boardinfo->node, &master->boardinfo.i2c);
+ 	of_node_get(node);
+ 
+@@ -2417,6 +2438,7 @@ int i3c_master_register(struct i3c_master_controller *master,
+ 			const struct i3c_master_controller_ops *ops,
+ 			bool secondary)
+ {
++	unsigned long i2c_scl_rate = I3C_BUS_I2C_FM_PLUS_SCL_RATE;
+ 	struct i3c_bus *i3cbus = i3c_master_get_bus(master);
+ 	enum i3c_bus_mode mode = I3C_BUS_MODE_PURE;
+ 	struct i2c_dev_boardinfo *i2cbi;
+@@ -2466,9 +2488,12 @@ int i3c_master_register(struct i3c_master_controller *master,
+ 			ret = -EINVAL;
+ 			goto err_put_dev;
+ 		}
++
++		if (i2cbi->lvr & I3C_LVR_I2C_FM_MODE)
++			i2c_scl_rate = I3C_BUS_I2C_FM_SCL_RATE;
+ 	}
+ 
+-	ret = i3c_bus_set_mode(i3cbus, mode);
++	ret = i3c_bus_set_mode(i3cbus, mode, i2c_scl_rate);
+ 	if (ret)
+ 		goto err_put_dev;
+ 
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index da81402992bc..9d7303041b3e 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -1041,15 +1041,19 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
+ 	}
+ 
+ 	if (MLX5_CAP_GEN(mdev, tag_matching)) {
+-		props->tm_caps.max_rndv_hdr_size = MLX5_TM_MAX_RNDV_MSG_SIZE;
+ 		props->tm_caps.max_num_tags =
+ 			(1 << MLX5_CAP_GEN(mdev, log_tag_matching_list_sz)) - 1;
+-		props->tm_caps.flags = IB_TM_CAP_RC;
+ 		props->tm_caps.max_ops =
+ 			1 << MLX5_CAP_GEN(mdev, log_max_qp_sz);
+ 		props->tm_caps.max_sge = MLX5_TM_MAX_SGE;
+ 	}
+ 
++	if (MLX5_CAP_GEN(mdev, tag_matching) &&
++	    MLX5_CAP_GEN(mdev, rndv_offload_rc)) {
++		props->tm_caps.flags = IB_TM_CAP_RNDV_RC;
++		props->tm_caps.max_rndv_hdr_size = MLX5_TM_MAX_RNDV_MSG_SIZE;
++	}
++
+ 	if (MLX5_CAP_GEN(dev->mdev, cq_moderation)) {
+ 		props->cq_caps.max_cq_moderation_count =
+ 						MLX5_MAX_CQ_COUNT;
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+index 9b5e11d3fb85..04ea7db08e87 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+@@ -1998,6 +1998,7 @@ static int ipoib_get_vf_config(struct net_device *dev, int vf,
+ 		return err;
+ 
+ 	ivf->vf = vf;
++	memcpy(ivf->mac, dev->dev_addr, dev->addr_len);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index be9ddcad8f28..87848faa7502 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -3481,13 +3481,14 @@ static const match_table_t srp_opt_tokens = {
+  * @net:	   [in]  Network namespace.
+  * @sa:		   [out] Address family, IP address and port number.
+  * @addr_port_str: [in]  IP address and port number.
++ * @has_port:	   [out] Whether or not @addr_port_str includes a port number.
+  *
+  * Parse the following address formats:
+  * - IPv4: <ip_address>:<port>, e.g. 1.2.3.4:5.
+  * - IPv6: \[<ipv6_address>\]:<port>, e.g. [1::2:3%4]:5.
+  */
+ static int srp_parse_in(struct net *net, struct sockaddr_storage *sa,
+-			const char *addr_port_str)
++			const char *addr_port_str, bool *has_port)
+ {
+ 	char *addr_end, *addr = kstrdup(addr_port_str, GFP_KERNEL);
+ 	char *port_str;
+@@ -3496,9 +3497,13 @@ static int srp_parse_in(struct net *net, struct sockaddr_storage *sa,
+ 	if (!addr)
+ 		return -ENOMEM;
+ 	port_str = strrchr(addr, ':');
+-	if (!port_str)
+-		return -EINVAL;
+-	*port_str++ = '\0';
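++	/* A ']' after the last ':' means the colon is inside a bracketed IPv6 address */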
++	if (port_str && strchr(port_str, ']'))
++		port_str = NULL;
++	if (port_str)
++		*port_str++ = '\0';
++	if (has_port)
++		*has_port = port_str != NULL;
+ 	ret = inet_pton_with_scope(net, AF_INET, addr, port_str, sa);
+ 	if (ret && addr[0]) {
+ 		addr_end = addr + strlen(addr) - 1;
+@@ -3520,6 +3524,7 @@ static int srp_parse_options(struct net *net, const char *buf,
+ 	char *p;
+ 	substring_t args[MAX_OPT_ARGS];
+ 	unsigned long long ull;
++	bool has_port;
+ 	int opt_mask = 0;
+ 	int token;
+ 	int ret = -EINVAL;
+@@ -3618,7 +3623,8 @@ static int srp_parse_options(struct net *net, const char *buf,
+ 				ret = -ENOMEM;
+ 				goto out;
+ 			}
+-			ret = srp_parse_in(net, &target->rdma_cm.src.ss, p);
++			ret = srp_parse_in(net, &target->rdma_cm.src.ss, p,
++					   NULL);
+ 			if (ret < 0) {
+ 				pr_warn("bad source parameter '%s'\n", p);
+ 				kfree(p);
+@@ -3634,7 +3640,10 @@ static int srp_parse_options(struct net *net, const char *buf,
+ 				ret = -ENOMEM;
+ 				goto out;
+ 			}
+-			ret = srp_parse_in(net, &target->rdma_cm.dst.ss, p);
++			ret = srp_parse_in(net, &target->rdma_cm.dst.ss, p,
++					   &has_port);
++			if (!has_port)
++				ret = -EINVAL;
+ 			if (ret < 0) {
+ 				pr_warn("bad dest parameter '%s'\n", p);
+ 				kfree(p);
+diff --git a/drivers/input/mouse/alps.c b/drivers/input/mouse/alps.c
+index 0a6f7ca883e7..dd80ff6cc427 100644
+--- a/drivers/input/mouse/alps.c
++++ b/drivers/input/mouse/alps.c
+@@ -24,6 +24,7 @@
+ 
+ #include "psmouse.h"
+ #include "alps.h"
++#include "trackpoint.h"
+ 
+ /*
+  * Definitions for ALPS version 3 and 4 command mode protocol
+@@ -2864,6 +2865,23 @@ static const struct alps_protocol_info *alps_match_table(unsigned char *e7,
+ 	return NULL;
+ }
+ 
++static bool alps_is_cs19_trackpoint(struct psmouse *psmouse)
++{
++	u8 param[2] = { 0 };
++
++	if (ps2_command(&psmouse->ps2dev,
++			param, MAKE_PS2_CMD(0, 2, TP_READ_ID)))
++		return false;
++
++	/*
++	 * param[0] contains the trackpoint device variant_id while
++	 * param[1] contains the firmware_id. So far all ALPS
++	 * trackpoint-only devices have their variant_ids equal to
++	 * TP_VARIANT_ALPS and their firmware_ids in the 0x20~0x2f range.
++	 */
++	return param[0] == TP_VARIANT_ALPS && ((param[1] & 0xf0) == 0x20);
++}
++
+ static int alps_identify(struct psmouse *psmouse, struct alps_data *priv)
+ {
+ 	const struct alps_protocol_info *protocol;
+@@ -3164,6 +3182,20 @@ int alps_detect(struct psmouse *psmouse, bool set_properties)
+ 	if (error)
+ 		return error;
+ 
++	/*
++	 * ALPS cs19 is a trackpoint-only device, and uses different
++	 * protocol than DualPoint ones, so we return -EINVAL here and let
++	 * trackpoint.c drive this device. If the trackpoint driver is not
++	 * enabled, the device will fall back to a bare PS/2 mouse.
++	 * If ps2_command() fails here, we depend on the immediately
++	 * followed psmouse_reset() to reset the device to normal state.
++	 */
++	if (alps_is_cs19_trackpoint(psmouse)) {
++		psmouse_dbg(psmouse,
++			    "ALPS CS19 trackpoint-only device detected, ignoring\n");
++		return -EINVAL;
++	}
++
+ 	/*
+ 	 * Reset the device to make sure it is fully operational:
+ 	 * on some laptops, like certain Dell Latitudes, we may
+diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
+index 68fd8232d44c..af7d48431b85 100644
+--- a/drivers/input/mouse/synaptics.c
++++ b/drivers/input/mouse/synaptics.c
+@@ -179,6 +179,7 @@ static const char * const smbus_pnp_ids[] = {
+ 	"LEN0093", /* T480 */
+ 	"LEN0096", /* X280 */
+ 	"LEN0097", /* X280 -> ALPS trackpoint */
++	"LEN009b", /* T580 */
+ 	"LEN200f", /* T450s */
+ 	"LEN2054", /* E480 */
+ 	"LEN2055", /* E580 */
+diff --git a/drivers/input/tablet/gtco.c b/drivers/input/tablet/gtco.c
+index 4b8b9d7aa75e..35031228a6d0 100644
+--- a/drivers/input/tablet/gtco.c
++++ b/drivers/input/tablet/gtco.c
+@@ -78,6 +78,7 @@ Scott Hill shill@gtcocalcomp.com
+ 
+ /* Max size of a single report */
+ #define REPORT_MAX_SIZE       10
++#define MAX_COLLECTION_LEVELS  10
+ 
+ 
+ /* Bitmask whether pen is in range */
+@@ -223,8 +224,7 @@ static void parse_hid_report_descriptor(struct gtco *device, char * report,
+ 	char  maintype = 'x';
+ 	char  globtype[12];
+ 	int   indent = 0;
+-	char  indentstr[10] = "";
+-
++	char  indentstr[MAX_COLLECTION_LEVELS + 1] = { 0 };
+ 
+ 	dev_dbg(ddev, "======>>>>>>PARSE<<<<<<======\n");
+ 
+@@ -350,6 +350,13 @@ static void parse_hid_report_descriptor(struct gtco *device, char * report,
+ 			case TAG_MAIN_COL_START:
+ 				maintype = 'S';
+ 
++				if (indent == MAX_COLLECTION_LEVELS) {
++					dev_err(ddev, "Collection level %d would exceed limit of %d\n",
++						indent + 1,
++						MAX_COLLECTION_LEVELS);
++					break;
++				}
++
+ 				if (data == 0) {
+ 					dev_dbg(ddev, "======>>>>>> Physical\n");
+ 					strcpy(globtype, "Physical");
+@@ -369,8 +376,15 @@ static void parse_hid_report_descriptor(struct gtco *device, char * report,
+ 				break;
+ 
+ 			case TAG_MAIN_COL_END:
+-				dev_dbg(ddev, "<<<<<<======\n");
+ 				maintype = 'E';
++
++				if (indent == 0) {
++					dev_err(ddev, "Collection level already at zero\n");
++					break;
++				}
++
++				dev_dbg(ddev, "<<<<<<======\n");
++
+ 				indent--;
+ 				for (x = 0; x < indent; x++)
+ 					indentstr[x] = '-';
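
The gtco fix bounds HID collection nesting so indentstr[] can never be indexed past its end on a malformed descriptor, and a stray end-collection tag can no longer drive the level negative. A minimal sketch of the same bounded push/pop over a toy token stream:

#include <stdio.h>

#define MAX_LEVELS 2	/* kept tiny so both error paths trigger below */

int main(void)
{
	char indent[MAX_LEVELS + 1] = { 0 };
	int depth = 0;
	const char ops[] = "((()))())";	/* '(' open, ')' close */

	for (const char *p = ops; *p; p++) {
		if (*p == '(') {
			if (depth == MAX_LEVELS) {	/* would overflow */
				fprintf(stderr, "too deep\n");
				continue;
			}
			indent[depth++] = '-';
		} else {
			if (depth == 0) {		/* stray close */
				fprintf(stderr, "already at zero\n");
				continue;
			}
			indent[--depth] = '\0';
		}
		printf("[%s]\n", indent);
	}
	return 0;
}
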
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index 109de67d5d72..2d06c507fbed 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -241,18 +241,21 @@ static int iommu_insert_resv_region(struct iommu_resv_region *new,
+ 			pos = pos->next;
+ 		} else if ((start >= a) && (end <= b)) {
+ 			if (new->type == type)
+-				goto done;
++				return 0;
+ 			else
+ 				pos = pos->next;
+ 		} else {
+ 			if (new->type == type) {
+ 				phys_addr_t new_start = min(a, start);
+ 				phys_addr_t new_end = max(b, end);
++				int ret;
+ 
+ 				list_del(&entry->list);
+ 				entry->start = new_start;
+ 				entry->length = new_end - new_start + 1;
+-				iommu_insert_resv_region(entry, regions);
++				ret = iommu_insert_resv_region(entry, regions);
++				kfree(entry);
++				return ret;
+ 			} else {
+ 				pos = pos->next;
+ 			}
+@@ -265,7 +268,6 @@ insert:
+ 		return -ENOMEM;
+ 
+ 	list_add_tail(&region->list, pos);
+-done:
+ 	return 0;
+ }
+ 
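
The iommu_insert_resv_region() change fixes two things in the merge path: the node unlinked before re-insertion was never freed, and the recursive call's return value was dropped. A standalone sketch of the same coalescing pattern over a simple sorted interval list (the list layout and helpers are illustrative):

#include <stdio.h>
#include <stdlib.h>

struct region {
	unsigned long start, end;	/* inclusive */
	struct region *next;
};

/* Insert [start,end], coalescing overlaps; as in the fixed code, the
 * stale node is freed and the recursive status is propagated. */
static int insert_region(struct region **head, unsigned long start,
			 unsigned long end)
{
	for (struct region **pp = head; *pp; pp = &(*pp)->next) {
		struct region *r = *pp;

		if (end + 1 < r->start)
			break;			/* goes before r */
		if (start > r->end + 1)
			continue;		/* no overlap yet */
		start = start < r->start ? start : r->start;
		end = end > r->end ? end : r->end;
		*pp = r->next;			/* unlink overlapping node */
		free(r);			/* was leaked before the fix */
		return insert_region(head, start, end);
	}
	struct region *n = malloc(sizeof(*n));
	if (!n)
		return -1;
	n->start = start;
	n->end = end;
	struct region **pp = head;		/* keep list sorted */
	while (*pp && (*pp)->start < start)
		pp = &(*pp)->next;
	n->next = *pp;
	*pp = n;
	return 0;
}

int main(void)
{
	struct region *head = NULL;

	insert_region(&head, 0, 9);
	insert_region(&head, 20, 29);
	insert_region(&head, 5, 24);	/* coalesces both into [0,29] */
	for (struct region *r = head; r; r = r->next)
		printf("[%lu,%lu]\n", r->start, r->end);
	return 0;
}
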
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index 15e55d327505..1bc86032d409 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -472,8 +472,12 @@ static void gic_deactivate_unhandled(u32 irqnr)
+ 
+ static inline void gic_handle_nmi(u32 irqnr, struct pt_regs *regs)
+ {
++	bool irqs_enabled = interrupts_enabled(regs);
+ 	int err;
+ 
++	if (irqs_enabled)
++		nmi_enter();
++
+ 	if (static_branch_likely(&supports_deactivate_key))
+ 		gic_write_eoir(irqnr);
+ 	/*
+@@ -485,6 +489,9 @@ static inline void gic_handle_nmi(u32 irqnr, struct pt_regs *regs)
+ 	err = handle_domain_nmi(gic_data.domain, irqnr, regs);
+ 	if (err)
+ 		gic_deactivate_unhandled(irqnr);
++
++	if (irqs_enabled)
++		nmi_exit();
+ }
+ 
+ static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
+diff --git a/drivers/irqchip/irq-meson-gpio.c b/drivers/irqchip/irq-meson-gpio.c
+index 7b531fd075b8..7599b10ecf09 100644
+--- a/drivers/irqchip/irq-meson-gpio.c
++++ b/drivers/irqchip/irq-meson-gpio.c
+@@ -73,6 +73,7 @@ static const struct of_device_id meson_irq_gpio_matches[] = {
+ 	{ .compatible = "amlogic,meson-gxbb-gpio-intc", .data = &gxbb_params },
+ 	{ .compatible = "amlogic,meson-gxl-gpio-intc", .data = &gxl_params },
+ 	{ .compatible = "amlogic,meson-axg-gpio-intc", .data = &axg_params },
++	{ .compatible = "amlogic,meson-g12a-gpio-intc", .data = &axg_params },
+ 	{ }
+ };
+ 
+diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
+index 6ca868868fee..7393d64757a1 100644
+--- a/drivers/lightnvm/pblk-core.c
++++ b/drivers/lightnvm/pblk-core.c
+@@ -323,14 +323,16 @@ void pblk_free_rqd(struct pblk *pblk, struct nvm_rq *rqd, int type)
+ void pblk_bio_free_pages(struct pblk *pblk, struct bio *bio, int off,
+ 			 int nr_pages)
+ {
+-	struct bio_vec bv;
+-	int i;
+-
+-	WARN_ON(off + nr_pages != bio->bi_vcnt);
+-
+-	for (i = off; i < nr_pages + off; i++) {
+-		bv = bio->bi_io_vec[i];
+-		mempool_free(bv.bv_page, &pblk->page_bio_pool);
++	struct bio_vec *bv;
++	struct page *page;
++	int i, e, nbv = 0;
++
++	for (i = 0; i < bio->bi_vcnt; i++) {
++		bv = &bio->bi_io_vec[i];
++		page = bv->bv_page;
++		for (e = 0; e < bv->bv_len; e += PBLK_EXPOSED_PAGE_SIZE, nbv++)
++			if (nbv >= off)
++				mempool_free(page++, &pblk->page_bio_pool);
+ 	}
+ }
+ 
+diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
+index f8986effcb50..6f776823b9ba 100644
+--- a/drivers/md/bcache/alloc.c
++++ b/drivers/md/bcache/alloc.c
+@@ -393,6 +393,11 @@ long bch_bucket_alloc(struct cache *ca, unsigned int reserve, bool wait)
+ 	struct bucket *b;
+ 	long r;
+ 
++
++	/* No allocation if CACHE_SET_IO_DISABLE bit is set */
++	if (unlikely(test_bit(CACHE_SET_IO_DISABLE, &ca->set->flags)))
++		return -1;
++
+ 	/* fastpath */
+ 	if (fifo_pop(&ca->free[RESERVE_NONE], r) ||
+ 	    fifo_pop(&ca->free[reserve], r))
+@@ -484,6 +489,10 @@ int __bch_bucket_alloc_set(struct cache_set *c, unsigned int reserve,
+ {
+ 	int i;
+ 
++	/* No allocation if CACHE_SET_IO_DISABLE bit is set */
++	if (unlikely(test_bit(CACHE_SET_IO_DISABLE, &c->flags)))
++		return -1;
++
+ 	lockdep_assert_held(&c->bucket_lock);
+ 	BUG_ON(!n || n > c->caches_loaded || n > MAX_CACHES_PER_SET);
+ 
+diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
+index fdf75352e16a..e30a983a68cd 100644
+--- a/drivers/md/bcache/bcache.h
++++ b/drivers/md/bcache/bcache.h
+@@ -726,8 +726,6 @@ struct cache_set {
+ 
+ #define BUCKET_HASH_BITS	12
+ 	struct hlist_head	bucket_hash[1 << BUCKET_HASH_BITS];
+-
+-	DECLARE_HEAP(struct btree *, flush_btree);
+ };
+ 
+ struct bbio {
+diff --git a/drivers/md/bcache/io.c b/drivers/md/bcache/io.c
+index c25097968319..4d93f07f63e5 100644
+--- a/drivers/md/bcache/io.c
++++ b/drivers/md/bcache/io.c
+@@ -58,6 +58,18 @@ void bch_count_backing_io_errors(struct cached_dev *dc, struct bio *bio)
+ 
+ 	WARN_ONCE(!dc, "NULL pointer of struct cached_dev");
+ 
++	/*
++	 * Read-ahead requests on a degrading and recovering md raid
++	 * (e.g. raid6) device might be failed immediately by the md
++	 * raid code, which is not a real hardware media failure. So
++	 * we shouldn't count a failed REQ_RAHEAD bio toward dc->io_errors.
++	 */
++	if (bio->bi_opf & REQ_RAHEAD) {
++		pr_warn_ratelimited("%s: Read-ahead I/O failed on backing device, ignore",
++				    dc->backing_dev_name);
++		return;
++	}
++
+ 	errors = atomic_add_return(1, &dc->io_errors);
+ 	if (errors < dc->error_limit)
+ 		pr_err("%s: IO error on backing device, unrecoverable",
+diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
+index 6c94fa007796..d19dc6777a81 100644
+--- a/drivers/md/bcache/journal.c
++++ b/drivers/md/bcache/journal.c
+@@ -390,12 +390,6 @@ err:
+ }
+ 
+ /* Journalling */
+-#define journal_max_cmp(l, r) \
+-	(fifo_idx(&c->journal.pin, btree_current_write(l)->journal) < \
+-	 fifo_idx(&(c)->journal.pin, btree_current_write(r)->journal))
+-#define journal_min_cmp(l, r) \
+-	(fifo_idx(&c->journal.pin, btree_current_write(l)->journal) > \
+-	 fifo_idx(&(c)->journal.pin, btree_current_write(r)->journal))
+ 
+ static void btree_flush_write(struct cache_set *c)
+ {
+@@ -403,35 +397,25 @@ static void btree_flush_write(struct cache_set *c)
+ 	 * Try to find the btree node with that references the oldest journal
+ 	 * Try to find the btree node that references the oldest journal
+ 	 * entry; best is our current candidate and is locked if non-NULL:
+-	struct btree *b;
+-	int i;
++	struct btree *b, *best;
++	unsigned int i;
+ 
+ 	atomic_long_inc(&c->flush_write);
+-
+ retry:
+-	spin_lock(&c->journal.lock);
+-	if (heap_empty(&c->flush_btree)) {
+-		for_each_cached_btree(b, c, i)
+-			if (btree_current_write(b)->journal) {
+-				if (!heap_full(&c->flush_btree))
+-					heap_add(&c->flush_btree, b,
+-						 journal_max_cmp);
+-				else if (journal_max_cmp(b,
+-					 heap_peek(&c->flush_btree))) {
+-					c->flush_btree.data[0] = b;
+-					heap_sift(&c->flush_btree, 0,
+-						  journal_max_cmp);
+-				}
++	best = NULL;
++
++	for_each_cached_btree(b, c, i)
++		if (btree_current_write(b)->journal) {
++			if (!best)
++				best = b;
++			else if (journal_pin_cmp(c,
++					btree_current_write(best)->journal,
++					btree_current_write(b)->journal)) {
++				best = b;
+ 			}
++		}
+ 
+-		for (i = c->flush_btree.used / 2 - 1; i >= 0; --i)
+-			heap_sift(&c->flush_btree, i, journal_min_cmp);
+-	}
+-
+-	b = NULL;
+-	heap_pop(&c->flush_btree, b, journal_min_cmp);
+-	spin_unlock(&c->journal.lock);
+-
++	b = best;
+ 	if (b) {
+ 		mutex_lock(&b->write_lock);
+ 		if (!btree_current_write(b)->journal) {
+@@ -810,6 +794,10 @@ atomic_t *bch_journal(struct cache_set *c,
+ 	struct journal_write *w;
+ 	atomic_t *ret;
+ 
++	/* No journaling if CACHE_SET_IO_DISABLE set already */
++	if (unlikely(test_bit(CACHE_SET_IO_DISABLE, &c->flags)))
++		return NULL;
++
+ 	if (!CACHE_SYNC(&c->sb))
+ 		return NULL;
+ 
+@@ -854,7 +842,6 @@ void bch_journal_free(struct cache_set *c)
+ 	free_pages((unsigned long) c->journal.w[1].data, JSET_BITS);
+ 	free_pages((unsigned long) c->journal.w[0].data, JSET_BITS);
+ 	free_fifo(&c->journal.pin);
+-	free_heap(&c->flush_btree);
+ }
+ 
+ int bch_journal_alloc(struct cache_set *c)
+@@ -869,8 +856,7 @@ int bch_journal_alloc(struct cache_set *c)
+ 	j->w[0].c = c;
+ 	j->w[1].c = c;
+ 
+-	if (!(init_heap(&c->flush_btree, 128, GFP_KERNEL)) ||
+-	    !(init_fifo(&j->pin, JOURNAL_PIN, GFP_KERNEL)) ||
++	if (!(init_fifo(&j->pin, JOURNAL_PIN, GFP_KERNEL)) ||
+ 	    !(j->w[0].data = (void *) __get_free_pages(GFP_KERNEL, JSET_BITS)) ||
+ 	    !(j->w[1].data = (void *) __get_free_pages(GFP_KERNEL, JSET_BITS)))
+ 		return -ENOMEM;
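
The btree_flush_write() rewrite drops the flush_btree heap entirely: rather than maintain a heap of candidates, it makes one linear scan for the node pinning the oldest journal entry. A minimal sketch of that select-the-oldest scan (the node array and seq field stand in for the btree/journal-pin bookkeeping):

#include <stdio.h>

struct node {
	int dirty;		/* has a journal entry pinned */
	unsigned long seq;	/* age of that entry; smaller = older */
};

int main(void)
{
	struct node nodes[] = {
		{ 1, 42 }, { 0, 0 }, { 1, 7 }, { 1, 19 },
	};
	struct node *best = NULL;

	/* One pass: keep the dirty node with the oldest (smallest) seq. */
	for (unsigned i = 0; i < sizeof(nodes) / sizeof(nodes[0]); i++) {
		if (!nodes[i].dirty)
			continue;
		if (!best || nodes[i].seq < best->seq)
			best = &nodes[i];
	}
	if (best)
		printf("flush node with seq %lu\n", best->seq);
	return 0;
}
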
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index e489d2459569..e10323d50549 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -1186,18 +1186,16 @@ static void cached_dev_free(struct closure *cl)
+ {
+ 	struct cached_dev *dc = container_of(cl, struct cached_dev, disk.cl);
+ 
+-	mutex_lock(&bch_register_lock);
+-
+ 	if (test_and_clear_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags))
+ 		cancel_writeback_rate_update_dwork(dc);
+ 
+ 	if (!IS_ERR_OR_NULL(dc->writeback_thread))
+ 		kthread_stop(dc->writeback_thread);
+-	if (dc->writeback_write_wq)
+-		destroy_workqueue(dc->writeback_write_wq);
+ 	if (!IS_ERR_OR_NULL(dc->status_update_thread))
+ 		kthread_stop(dc->status_update_thread);
+ 
++	mutex_lock(&bch_register_lock);
++
+ 	if (atomic_read(&dc->running))
+ 		bd_unlink_disk_holder(dc->bdev, dc->disk.disk);
+ 	bcache_device_free(&dc->disk);
+@@ -1431,8 +1429,6 @@ int bch_flash_dev_create(struct cache_set *c, uint64_t size)
+ 
+ bool bch_cached_dev_error(struct cached_dev *dc)
+ {
+-	struct cache_set *c;
+-
+ 	if (!dc || test_bit(BCACHE_DEV_CLOSING, &dc->disk.flags))
+ 		return false;
+ 
+@@ -1443,21 +1439,6 @@ bool bch_cached_dev_error(struct cached_dev *dc)
+ 	pr_err("stop %s: too many IO errors on backing device %s\n",
+ 		dc->disk.disk->disk_name, dc->backing_dev_name);
+ 
+-	/*
+-	 * If the cached device is still attached to a cache set,
+-	 * even dc->io_disable is true and no more I/O requests
+-	 * accepted, cache device internal I/O (writeback scan or
+-	 * garbage collection) may still prevent bcache device from
+-	 * being stopped. So here CACHE_SET_IO_DISABLE should be
+-	 * set to c->flags too, to make the internal I/O to cache
+-	 * device rejected and stopped immediately.
+-	 * If c is NULL, that means the bcache device is not attached
+-	 * to any cache set, then no CACHE_SET_IO_DISABLE bit to set.
+-	 */
+-	c = dc->disk.c;
+-	if (c && test_and_set_bit(CACHE_SET_IO_DISABLE, &c->flags))
+-		pr_info("CACHE_SET_IO_DISABLE already set");
+-
+ 	bcache_device_stop(&dc->disk);
+ 	return true;
+ }
+@@ -1557,7 +1538,7 @@ static void cache_set_flush(struct closure *cl)
+ 	kobject_put(&c->internal);
+ 	kobject_del(&c->kobj);
+ 
+-	if (c->gc_thread)
++	if (!IS_ERR_OR_NULL(c->gc_thread))
+ 		kthread_stop(c->gc_thread);
+ 
+ 	if (!IS_ERR_OR_NULL(c->root))
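
The last hunk swaps a bare NULL test on c->gc_thread for IS_ERR_OR_NULL(): kthread_run() reports failure with an ERR_PTR-encoded pointer, not NULL, and handing that to kthread_stop() would dereference garbage. A self-contained sketch of the ERR_PTR convention (the encoding mirrors include/linux/err.h, re-implemented here so the example compiles on its own):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_ERRNO 4095

static void *ERR_PTR(long err) { return (void *)err; }

/* Failure is either NULL or a pointer in the top MAX_ERRNO bytes. */
static bool IS_ERR_OR_NULL(const void *p)
{
	return !p || (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}

int main(void)
{
	int obj = 0;
	void *ok = &obj, *fail = ERR_PTR(-12 /* -ENOMEM */), *none = NULL;

	/* Only 'ok' is safe to use; both failure encodings are caught. */
	printf("%d %d %d\n", IS_ERR_OR_NULL(ok),
	       IS_ERR_OR_NULL(fail), IS_ERR_OR_NULL(none));
	return 0;
}
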
+diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
+index eb42dcf52277..3ae271f95ffe 100644
+--- a/drivers/md/bcache/sysfs.c
++++ b/drivers/md/bcache/sysfs.c
+@@ -180,7 +180,7 @@ SHOW(__bch_cached_dev)
+ 	var_print(writeback_percent);
+ 	sysfs_hprint(writeback_rate,
+ 		     wb ? atomic_long_read(&dc->writeback_rate.rate) << 9 : 0);
+-	sysfs_hprint(io_errors,		atomic_read(&dc->io_errors));
++	sysfs_printf(io_errors,		"%i", atomic_read(&dc->io_errors));
+ 	sysfs_printf(io_error_limit,	"%i", dc->error_limit);
+ 	sysfs_printf(io_disable,	"%i", dc->io_disable);
+ 	var_print(writeback_rate_update_seconds);
+@@ -464,7 +464,7 @@ static struct attribute *bch_cached_dev_files[] = {
+ 	&sysfs_writeback_rate_p_term_inverse,
+ 	&sysfs_writeback_rate_minimum,
+ 	&sysfs_writeback_rate_debug,
+-	&sysfs_errors,
++	&sysfs_io_errors,
+ 	&sysfs_io_error_limit,
+ 	&sysfs_io_disable,
+ 	&sysfs_dirty_data,
+diff --git a/drivers/md/bcache/util.h b/drivers/md/bcache/util.h
+index 00aab6abcfe4..b1f5b7aea872 100644
+--- a/drivers/md/bcache/util.h
++++ b/drivers/md/bcache/util.h
+@@ -113,8 +113,6 @@ do {									\
+ 
+ #define heap_full(h)	((h)->used == (h)->size)
+ 
+-#define heap_empty(h)	((h)->used == 0)
+-
+ #define DECLARE_FIFO(type, name)					\
+ 	struct {							\
+ 		size_t front, back, size, mask;				\
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index 73f0efac2b9f..e9ffcea1ca50 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -735,6 +735,10 @@ static int bch_writeback_thread(void *arg)
+ 		}
+ 	}
+ 
++	if (dc->writeback_write_wq) {
++		flush_workqueue(dc->writeback_write_wq);
++		destroy_workqueue(dc->writeback_write_wq);
++	}
+ 	cached_dev_put(dc);
+ 	wait_for_kthread_stop();
+ 
+@@ -830,6 +834,7 @@ int bch_cached_dev_writeback_start(struct cached_dev *dc)
+ 					      "bcache_writeback");
+ 	if (IS_ERR(dc->writeback_thread)) {
+ 		cached_dev_put(dc);
++		destroy_workqueue(dc->writeback_write_wq);
+ 		return PTR_ERR(dc->writeback_thread);
+ 	}
+ 	dc->writeback_running = true;
+diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
+index 1ecef76225a1..76fb431a7712 100644
+--- a/drivers/md/dm-bufio.c
++++ b/drivers/md/dm-bufio.c
+@@ -1602,9 +1602,7 @@ dm_bufio_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
+ 	unsigned long freed;
+ 
+ 	c = container_of(shrink, struct dm_bufio_client, shrinker);
+-	if (sc->gfp_mask & __GFP_FS)
+-		dm_bufio_lock(c);
+-	else if (!dm_bufio_trylock(c))
++	if (!dm_bufio_trylock(c))
+ 		return SHRINK_STOP;
+ 
+ 	freed  = __scan(c, sc->nr_to_scan, sc->gfp_mask);
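
The dm-bufio change makes the shrinker always use trylock: sleeping on the client mutex from reclaim context can deadlock against a holder that is itself waiting on an allocation. A minimal pthreads sketch of the bail-out-instead-of-block pattern (SHRINK_STOP modeled here as a plain sentinel):

#include <pthread.h>
#include <stdio.h>

#define SHRINK_STOP (~0UL)	/* sentinel; the kernel defines it similarly */

static pthread_mutex_t client_lock = PTHREAD_MUTEX_INITIALIZER;

/* Never block in the reclaim path: give up if the lock is contended. */
static unsigned long shrink_scan(unsigned long nr_to_scan)
{
	if (pthread_mutex_trylock(&client_lock) != 0)
		return SHRINK_STOP;
	unsigned long freed = nr_to_scan;	/* pretend we freed them all */
	pthread_mutex_unlock(&client_lock);
	return freed;
}

int main(void)
{
	printf("uncontended: %lu\n", shrink_scan(16));
	pthread_mutex_lock(&client_lock);	/* simulate a busy holder */
	unsigned long r = shrink_scan(16);
	printf("contended: %s\n", r == SHRINK_STOP ? "SHRINK_STOP" : "freed");
	pthread_mutex_unlock(&client_lock);
	return 0;
}
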
+diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c
+index d8334cd45d7c..4cdde7a02e94 100644
+--- a/drivers/md/dm-zoned-metadata.c
++++ b/drivers/md/dm-zoned-metadata.c
+@@ -1593,30 +1593,6 @@ struct dm_zone *dmz_get_zone_for_reclaim(struct dmz_metadata *zmd)
+ 	return zone;
+ }
+ 
+-/*
+- * Activate a zone (increment its reference count).
+- */
+-void dmz_activate_zone(struct dm_zone *zone)
+-{
+-	set_bit(DMZ_ACTIVE, &zone->flags);
+-	atomic_inc(&zone->refcount);
+-}
+-
+-/*
+- * Deactivate a zone. This decrement the zone reference counter
+- * and clears the active state of the zone once the count reaches 0,
+- * indicating that all BIOs to the zone have completed. Returns
+- * true if the zone was deactivated.
+- */
+-void dmz_deactivate_zone(struct dm_zone *zone)
+-{
+-	if (atomic_dec_and_test(&zone->refcount)) {
+-		WARN_ON(!test_bit(DMZ_ACTIVE, &zone->flags));
+-		clear_bit_unlock(DMZ_ACTIVE, &zone->flags);
+-		smp_mb__after_atomic();
+-	}
+-}
+-
+ /*
+  * Get the zone mapping a chunk, if the chunk is mapped already.
+  * If no mapping exist and the operation is WRITE, a zone is
+diff --git a/drivers/md/dm-zoned.h b/drivers/md/dm-zoned.h
+index 12419f0bfe78..ed8de49c9a08 100644
+--- a/drivers/md/dm-zoned.h
++++ b/drivers/md/dm-zoned.h
+@@ -115,7 +115,6 @@ enum {
+ 	DMZ_BUF,
+ 
+ 	/* Zone internal state */
+-	DMZ_ACTIVE,
+ 	DMZ_RECLAIM,
+ 	DMZ_SEQ_WRITE_ERR,
+ };
+@@ -128,7 +127,6 @@ enum {
+ #define dmz_is_empty(z)		((z)->wp_block == 0)
+ #define dmz_is_offline(z)	test_bit(DMZ_OFFLINE, &(z)->flags)
+ #define dmz_is_readonly(z)	test_bit(DMZ_READ_ONLY, &(z)->flags)
+-#define dmz_is_active(z)	test_bit(DMZ_ACTIVE, &(z)->flags)
+ #define dmz_in_reclaim(z)	test_bit(DMZ_RECLAIM, &(z)->flags)
+ #define dmz_seq_write_err(z)	test_bit(DMZ_SEQ_WRITE_ERR, &(z)->flags)
+ 
+@@ -188,8 +186,30 @@ void dmz_unmap_zone(struct dmz_metadata *zmd, struct dm_zone *zone);
+ unsigned int dmz_nr_rnd_zones(struct dmz_metadata *zmd);
+ unsigned int dmz_nr_unmap_rnd_zones(struct dmz_metadata *zmd);
+ 
+-void dmz_activate_zone(struct dm_zone *zone);
+-void dmz_deactivate_zone(struct dm_zone *zone);
++/*
++ * Activate a zone (increment its reference count).
++ */
++static inline void dmz_activate_zone(struct dm_zone *zone)
++{
++	atomic_inc(&zone->refcount);
++}
++
++/*
++ * Deactivate a zone (decrement its reference count). A count of 0
++ * indicates that all BIOs to the zone have completed.
++ */
++static inline void dmz_deactivate_zone(struct dm_zone *zone)
++{
++	atomic_dec(&zone->refcount);
++}
++
++/*
++ * Test if a zone is active, that is, has a refcount > 0.
++ */
++static inline bool dmz_is_active(struct dm_zone *zone)
++{
++	return atomic_read(&zone->refcount);
++}
+ 
+ int dmz_lock_zone_reclaim(struct dm_zone *zone);
+ void dmz_unlock_zone_reclaim(struct dm_zone *zone);
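
With the DMZ_ACTIVE flag gone, a zone's activity is simply its reference count: activate/deactivate become a plain atomic inc/dec and dmz_is_active() a read. A standalone model using C11 atomics (names mirror the hunk for illustration only):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct zone {
	atomic_int refcount;
};

static void activate(struct zone *z)   { atomic_fetch_add(&z->refcount, 1); }
static void deactivate(struct zone *z) { atomic_fetch_sub(&z->refcount, 1); }
static bool is_active(struct zone *z)  { return atomic_load(&z->refcount) > 0; }

int main(void)
{
	struct zone z = { .refcount = 0 };

	activate(&z);		/* BIO submitted to the zone */
	printf("active=%d\n", is_active(&z));
	deactivate(&z);		/* BIO completed */
	printf("active=%d\n", is_active(&z));
	return 0;
}
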
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index a4d2f552c8ab..05e288cba143 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -7674,7 +7674,7 @@ abort:
+ static int raid5_add_disk(struct mddev *mddev, struct md_rdev *rdev)
+ {
+ 	struct r5conf *conf = mddev->private;
+-	int err = -EEXIST;
++	int ret, err = -EEXIST;
+ 	int disk;
+ 	struct disk_info *p;
+ 	int first = 0;
+@@ -7689,7 +7689,14 @@ static int raid5_add_disk(struct mddev *mddev, struct md_rdev *rdev)
+ 		 * The array is in readonly mode if journal is missing, so no
+ 		 * write requests running. We should be safe
+ 		 */
+-		log_init(conf, rdev, false);
++		ret = log_init(conf, rdev, false);
++		if (ret)
++			return ret;
++
++		ret = r5l_start(conf->log);
++		if (ret)
++			return ret;
++
+ 		return 0;
+ 	}
+ 	if (mddev->recovery_disabled == conf->recovery_disabled)
+diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c
+index 9c163f658aaf..fea87ec80e55 100644
+--- a/drivers/media/common/videobuf2/videobuf2-core.c
++++ b/drivers/media/common/videobuf2/videobuf2-core.c
+@@ -207,6 +207,10 @@ static int __vb2_buf_mem_alloc(struct vb2_buffer *vb)
+ 	for (plane = 0; plane < vb->num_planes; ++plane) {
+ 		unsigned long size = PAGE_ALIGN(vb->planes[plane].length);
+ 
++		/* Did it wrap around? */
++		if (size < vb->planes[plane].length)
++			goto free;
++
+ 		mem_priv = call_ptr_memop(vb, alloc,
+ 				q->alloc_devs[plane] ? : q->dev,
+ 				q->dma_attrs, size, q->dma_dir, q->gfp_flags);
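
The videobuf2 check catches unsigned wrap-around: PAGE_ALIGN() rounds up, so for a length within a page of ULONG_MAX the result wraps past zero and ends up smaller than the input, which is exactly what "size < length" detects. A small standalone demonstration (page size hard-coded to 4096 for illustration):

#include <limits.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

int main(void)
{
	unsigned long ok = 5000;
	unsigned long evil = ULONG_MAX - 100;	/* wraps when rounded up */

	printf("%lu -> %lu (wrapped: %d)\n", ok, PAGE_ALIGN(ok),
	       PAGE_ALIGN(ok) < ok);
	printf("%lu -> %lu (wrapped: %d)\n", evil, PAGE_ALIGN(evil),
	       PAGE_ALIGN(evil) < evil);
	return 0;
}
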
+diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
+index 270c3162fdcb..61670a1af713 100644
+--- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
++++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
+@@ -59,7 +59,7 @@ static int vb2_dma_sg_alloc_compacted(struct vb2_dma_sg_buf *buf,
+ 		gfp_t gfp_flags)
+ {
+ 	unsigned int last_page = 0;
+-	int size = buf->size;
++	unsigned long size = buf->size;
+ 
+ 	while (size > 0) {
+ 		struct page *pages;
+diff --git a/drivers/media/dvb-frontends/tua6100.c b/drivers/media/dvb-frontends/tua6100.c
+index b233b7be0b84..e6aaf4973aef 100644
+--- a/drivers/media/dvb-frontends/tua6100.c
++++ b/drivers/media/dvb-frontends/tua6100.c
+@@ -75,8 +75,8 @@ static int tua6100_set_params(struct dvb_frontend *fe)
+ 	struct i2c_msg msg1 = { .addr = priv->i2c_address, .flags = 0, .buf = reg1, .len = 4 };
+ 	struct i2c_msg msg2 = { .addr = priv->i2c_address, .flags = 0, .buf = reg2, .len = 3 };
+ 
+-#define _R 4
+-#define _P 32
++#define _R_VAL 4
++#define _P_VAL 32
+ #define _ri 4000000
+ 
+ 	// setup register 0
+@@ -91,14 +91,14 @@ static int tua6100_set_params(struct dvb_frontend *fe)
+ 	else
+ 		reg1[1] = 0x0c;
+ 
+-	if (_P == 64)
++	if (_P_VAL == 64)
+ 		reg1[1] |= 0x40;
+ 	if (c->frequency >= 1525000)
+ 		reg1[1] |= 0x80;
+ 
+ 	// register 2
+-	reg2[1] = (_R >> 8) & 0x03;
+-	reg2[2] = _R;
++	reg2[1] = (_R_VAL >> 8) & 0x03;
++	reg2[2] = _R_VAL;
+ 	if (c->frequency < 1455000)
+ 		reg2[1] |= 0x1c;
+ 	else if (c->frequency < 1630000)
+@@ -110,18 +110,18 @@ static int tua6100_set_params(struct dvb_frontend *fe)
+ 	 * The N divisor ratio (note: c->frequency is in kHz, but we
+ 	 * need it in Hz)
+ 	 */
+-	prediv = (c->frequency * _R) / (_ri / 1000);
+-	div = prediv / _P;
++	prediv = (c->frequency * _R_VAL) / (_ri / 1000);
++	div = prediv / _P_VAL;
+ 	reg1[1] |= (div >> 9) & 0x03;
+ 	reg1[2] = div >> 1;
+ 	reg1[3] = (div << 7);
+-	priv->frequency = ((div * _P) * (_ri / 1000)) / _R;
++	priv->frequency = ((div * _P_VAL) * (_ri / 1000)) / _R_VAL;
+ 
+ 	// Finally, calculate and store the value for A
+-	reg1[3] |= (prediv - (div*_P)) & 0x7f;
++	reg1[3] |= (prediv - (div*_P_VAL)) & 0x7f;
+ 
+-#undef _R
+-#undef _P
++#undef _R_VAL
++#undef _P_VAL
+ #undef _ri
+ 
+ 	if (fe->ops.i2c_gate_ctrl)
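
Beyond renaming _R/_P to _R_VAL/_P_VAL (single-underscore names risk colliding with like-named macros from other headers), the hunk leaves the PLL arithmetic untouched: prediv = freq*R/(f_ref/1000), div = prediv/P, and the remainder is programmed into the A counter. A worked standalone example with the driver's constants (the input frequency is just a sample):

#include <stdio.h>

int main(void)
{
	const unsigned R = 4, P = 32, f_ref = 4000000;	/* Hz, as in the driver */
	unsigned freq = 1550000;	/* sample request, kHz (illustrative) */

	unsigned prediv = (freq * R) / (f_ref / 1000);
	unsigned div = prediv / P;
	unsigned A = prediv - div * P;	/* remainder, programmed as A counter */
	unsigned coarse = ((div * P) * (f_ref / 1000)) / R;

	/* prediv=1550 div=48 A=14 coarse=1536000; A supplies the last
	 * 14*(f_ref/1000)/R = 14000 kHz to reach the requested 1550000. */
	printf("prediv=%u div=%u A=%u coarse=%u kHz\n", prediv, div, A, coarse);
	return 0;
}
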
+diff --git a/drivers/media/i2c/Makefile b/drivers/media/i2c/Makefile
+index a64fca82e0c4..55a3a2dee2de 100644
+--- a/drivers/media/i2c/Makefile
++++ b/drivers/media/i2c/Makefile
+@@ -35,7 +35,7 @@ obj-$(CONFIG_VIDEO_ADV748X) += adv748x/
+ obj-$(CONFIG_VIDEO_ADV7604) += adv7604.o
+ obj-$(CONFIG_VIDEO_ADV7842) += adv7842.o
+ obj-$(CONFIG_VIDEO_AD9389B) += ad9389b.o
+-obj-$(CONFIG_VIDEO_ADV7511) += adv7511.o
++obj-$(CONFIG_VIDEO_ADV7511) += adv7511-v4l2.o
+ obj-$(CONFIG_VIDEO_VPX3220) += vpx3220.o
+ obj-$(CONFIG_VIDEO_VS6624)  += vs6624.o
+ obj-$(CONFIG_VIDEO_BT819) += bt819.o
+diff --git a/drivers/media/i2c/adv7511-v4l2.c b/drivers/media/i2c/adv7511-v4l2.c
+new file mode 100644
+index 000000000000..2ad6bdf1a9fc
+--- /dev/null
++++ b/drivers/media/i2c/adv7511-v4l2.c
+@@ -0,0 +1,1997 @@
++// SPDX-License-Identifier: GPL-2.0-only
++/*
++ * Analog Devices ADV7511 HDMI Transmitter Device Driver
++ *
++ * Copyright 2013 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
++ */
++
++/*
++ * This file is named adv7511-v4l2.c so it doesn't conflict with the Analog
++ * Devices ADV7511 DRM driver (config fragment CONFIG_DRM_I2C_ADV7511).
++ */
++
++
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/slab.h>
++#include <linux/i2c.h>
++#include <linux/delay.h>
++#include <linux/videodev2.h>
++#include <linux/gpio.h>
++#include <linux/workqueue.h>
++#include <linux/hdmi.h>
++#include <linux/v4l2-dv-timings.h>
++#include <media/v4l2-device.h>
++#include <media/v4l2-common.h>
++#include <media/v4l2-ctrls.h>
++#include <media/v4l2-dv-timings.h>
++#include <media/i2c/adv7511.h>
++#include <media/cec.h>
++
++static int debug;
++module_param(debug, int, 0644);
++MODULE_PARM_DESC(debug, "debug level (0-2)");
++
++MODULE_DESCRIPTION("Analog Devices ADV7511 HDMI Transmitter Device Driver");
++MODULE_AUTHOR("Hans Verkuil");
++MODULE_LICENSE("GPL v2");
++
++#define MASK_ADV7511_EDID_RDY_INT   0x04
++#define MASK_ADV7511_MSEN_INT       0x40
++#define MASK_ADV7511_HPD_INT        0x80
++
++#define MASK_ADV7511_HPD_DETECT     0x40
++#define MASK_ADV7511_MSEN_DETECT    0x20
++#define MASK_ADV7511_EDID_RDY       0x10
++
++#define EDID_MAX_RETRIES (8)
++#define EDID_DELAY 250
++#define EDID_MAX_SEGM 8
++
++#define ADV7511_MAX_WIDTH 1920
++#define ADV7511_MAX_HEIGHT 1200
++#define ADV7511_MIN_PIXELCLOCK 20000000
++#define ADV7511_MAX_PIXELCLOCK 225000000
++
++#define ADV7511_MAX_ADDRS (3)
++
++/*
++**********************************************************************
++*
++*  Arrays with configuration parameters for the ADV7511
++*
++**********************************************************************
++*/
++
++struct i2c_reg_value {
++	unsigned char reg;
++	unsigned char value;
++};
++
++struct adv7511_state_edid {
++	/* total number of blocks */
++	u32 blocks;
++	/* Number of segments read */
++	u32 segments;
++	u8 data[EDID_MAX_SEGM * 256];
++	/* Number of EDID read retries left */
++	unsigned read_retries;
++	bool complete;
++};
++
++struct adv7511_state {
++	struct adv7511_platform_data pdata;
++	struct v4l2_subdev sd;
++	struct media_pad pad;
++	struct v4l2_ctrl_handler hdl;
++	int chip_revision;
++	u8 i2c_edid_addr;
++	u8 i2c_pktmem_addr;
++	u8 i2c_cec_addr;
++
++	struct i2c_client *i2c_cec;
++	struct cec_adapter *cec_adap;
++	u8   cec_addr[ADV7511_MAX_ADDRS];
++	u8   cec_valid_addrs;
++	bool cec_enabled_adap;
++
++	/* Is the adv7511 powered on? */
++	bool power_on;
++	/* Did we receive hotplug and rx-sense signals? */
++	bool have_monitor;
++	bool enabled_irq;
++	/* timings from s_dv_timings */
++	struct v4l2_dv_timings dv_timings;
++	u32 fmt_code;
++	u32 colorspace;
++	u32 ycbcr_enc;
++	u32 quantization;
++	u32 xfer_func;
++	u32 content_type;
++	/* controls */
++	struct v4l2_ctrl *hdmi_mode_ctrl;
++	struct v4l2_ctrl *hotplug_ctrl;
++	struct v4l2_ctrl *rx_sense_ctrl;
++	struct v4l2_ctrl *have_edid0_ctrl;
++	struct v4l2_ctrl *rgb_quantization_range_ctrl;
++	struct v4l2_ctrl *content_type_ctrl;
++	struct i2c_client *i2c_edid;
++	struct i2c_client *i2c_pktmem;
++	struct adv7511_state_edid edid;
++	/* Running counter of the number of detected EDIDs (for debugging) */
++	unsigned edid_detect_counter;
++	struct workqueue_struct *work_queue;
++	struct delayed_work edid_handler; /* work entry */
++};
++
++static void adv7511_check_monitor_present_status(struct v4l2_subdev *sd);
++static bool adv7511_check_edid_status(struct v4l2_subdev *sd);
++static void adv7511_setup(struct v4l2_subdev *sd);
++static int adv7511_s_i2s_clock_freq(struct v4l2_subdev *sd, u32 freq);
++static int adv7511_s_clock_freq(struct v4l2_subdev *sd, u32 freq);
++
++
++static const struct v4l2_dv_timings_cap adv7511_timings_cap = {
++	.type = V4L2_DV_BT_656_1120,
++	/* keep this initialization for compatibility with GCC < 4.4.6 */
++	.reserved = { 0 },
++	V4L2_INIT_BT_TIMINGS(640, ADV7511_MAX_WIDTH, 350, ADV7511_MAX_HEIGHT,
++		ADV7511_MIN_PIXELCLOCK, ADV7511_MAX_PIXELCLOCK,
++		V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT |
++			V4L2_DV_BT_STD_GTF | V4L2_DV_BT_STD_CVT,
++		V4L2_DV_BT_CAP_PROGRESSIVE | V4L2_DV_BT_CAP_REDUCED_BLANKING |
++			V4L2_DV_BT_CAP_CUSTOM)
++};
++
++static inline struct adv7511_state *get_adv7511_state(struct v4l2_subdev *sd)
++{
++	return container_of(sd, struct adv7511_state, sd);
++}
++
++static inline struct v4l2_subdev *to_sd(struct v4l2_ctrl *ctrl)
++{
++	return &container_of(ctrl->handler, struct adv7511_state, hdl)->sd;
++}
++
++/* ------------------------ I2C ----------------------------------------------- */
++
++static s32 adv_smbus_read_byte_data_check(struct i2c_client *client,
++					  u8 command, bool check)
++{
++	union i2c_smbus_data data;
++
++	if (!i2c_smbus_xfer(client->adapter, client->addr, client->flags,
++			    I2C_SMBUS_READ, command,
++			    I2C_SMBUS_BYTE_DATA, &data))
++		return data.byte;
++	if (check)
++		v4l_err(client, "error reading %02x, %02x\n",
++			client->addr, command);
++	return -1;
++}
++
++static s32 adv_smbus_read_byte_data(struct i2c_client *client, u8 command)
++{
++	int i;
++	for (i = 0; i < 3; i++) {
++		int ret = adv_smbus_read_byte_data_check(client, command, true);
++		if (ret >= 0) {
++			if (i)
++				v4l_err(client, "read ok after %d retries\n", i);
++			return ret;
++		}
++	}
++	v4l_err(client, "read failed\n");
++	return -1;
++}
++
++static int adv7511_rd(struct v4l2_subdev *sd, u8 reg)
++{
++	struct i2c_client *client = v4l2_get_subdevdata(sd);
++
++	return adv_smbus_read_byte_data(client, reg);
++}
++
++static int adv7511_wr(struct v4l2_subdev *sd, u8 reg, u8 val)
++{
++	struct i2c_client *client = v4l2_get_subdevdata(sd);
++	int ret;
++	int i;
++
++	for (i = 0; i < 3; i++) {
++		ret = i2c_smbus_write_byte_data(client, reg, val);
++		if (ret == 0)
++			return 0;
++	}
++	v4l2_err(sd, "%s: i2c write error\n", __func__);
++	return ret;
++}
++
++/* To set specific bits in the register, a clear-mask is given (to be AND-ed),
++   and then the value-mask (to be OR-ed). */
++static inline void adv7511_wr_and_or(struct v4l2_subdev *sd, u8 reg, u8 clr_mask, u8 val_mask)
++{
++	adv7511_wr(sd, reg, (adv7511_rd(sd, reg) & clr_mask) | val_mask);
++}
++
++static int adv_smbus_read_i2c_block_data(struct i2c_client *client,
++					 u8 command, unsigned length, u8 *values)
++{
++	union i2c_smbus_data data;
++	int ret;
++
++	if (length > I2C_SMBUS_BLOCK_MAX)
++		length = I2C_SMBUS_BLOCK_MAX;
++	data.block[0] = length;
++
++	ret = i2c_smbus_xfer(client->adapter, client->addr, client->flags,
++			     I2C_SMBUS_READ, command,
++			     I2C_SMBUS_I2C_BLOCK_DATA, &data);
++	memcpy(values, data.block + 1, length);
++	return ret;
++}
++
++static void adv7511_edid_rd(struct v4l2_subdev *sd, uint16_t len, uint8_t *buf)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++	int i;
++	int err = 0;
++
++	v4l2_dbg(1, debug, sd, "%s:\n", __func__);
++
++	for (i = 0; !err && i < len; i += I2C_SMBUS_BLOCK_MAX)
++		err = adv_smbus_read_i2c_block_data(state->i2c_edid, i,
++						    I2C_SMBUS_BLOCK_MAX, buf + i);
++	if (err)
++		v4l2_err(sd, "%s: i2c read error\n", __func__);
++}
++
++static inline int adv7511_cec_read(struct v4l2_subdev *sd, u8 reg)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++
++	return i2c_smbus_read_byte_data(state->i2c_cec, reg);
++}
++
++static int adv7511_cec_write(struct v4l2_subdev *sd, u8 reg, u8 val)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++	int ret;
++	int i;
++
++	for (i = 0; i < 3; i++) {
++		ret = i2c_smbus_write_byte_data(state->i2c_cec, reg, val);
++		if (ret == 0)
++			return 0;
++	}
++	v4l2_err(sd, "%s: I2C Write Problem\n", __func__);
++	return ret;
++}
++
++static inline int adv7511_cec_write_and_or(struct v4l2_subdev *sd, u8 reg, u8 mask,
++				   u8 val)
++{
++	return adv7511_cec_write(sd, reg, (adv7511_cec_read(sd, reg) & mask) | val);
++}
++
++static int adv7511_pktmem_rd(struct v4l2_subdev *sd, u8 reg)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++
++	return adv_smbus_read_byte_data(state->i2c_pktmem, reg);
++}
++
++static int adv7511_pktmem_wr(struct v4l2_subdev *sd, u8 reg, u8 val)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++	int ret;
++	int i;
++
++	for (i = 0; i < 3; i++) {
++		ret = i2c_smbus_write_byte_data(state->i2c_pktmem, reg, val);
++		if (ret == 0)
++			return 0;
++	}
++	v4l2_err(sd, "%s: i2c write error\n", __func__);
++	return ret;
++}
++
++/* To set specific bits in the register, a clear-mask is given (to be AND-ed),
++   and then the value-mask (to be OR-ed). */
++static inline void adv7511_pktmem_wr_and_or(struct v4l2_subdev *sd, u8 reg, u8 clr_mask, u8 val_mask)
++{
++	adv7511_pktmem_wr(sd, reg, (adv7511_pktmem_rd(sd, reg) & clr_mask) | val_mask);
++}
++
++static inline bool adv7511_have_hotplug(struct v4l2_subdev *sd)
++{
++	return adv7511_rd(sd, 0x42) & MASK_ADV7511_HPD_DETECT;
++}
++
++static inline bool adv7511_have_rx_sense(struct v4l2_subdev *sd)
++{
++	return adv7511_rd(sd, 0x42) & MASK_ADV7511_MSEN_DETECT;
++}
++
++static void adv7511_csc_conversion_mode(struct v4l2_subdev *sd, u8 mode)
++{
++	adv7511_wr_and_or(sd, 0x18, 0x9f, (mode & 0x3)<<5);
++}
++
++static void adv7511_csc_coeff(struct v4l2_subdev *sd,
++			      u16 A1, u16 A2, u16 A3, u16 A4,
++			      u16 B1, u16 B2, u16 B3, u16 B4,
++			      u16 C1, u16 C2, u16 C3, u16 C4)
++{
++	/* A */
++	adv7511_wr_and_or(sd, 0x18, 0xe0, A1>>8);
++	adv7511_wr(sd, 0x19, A1);
++	adv7511_wr_and_or(sd, 0x1A, 0xe0, A2>>8);
++	adv7511_wr(sd, 0x1B, A2);
++	adv7511_wr_and_or(sd, 0x1c, 0xe0, A3>>8);
++	adv7511_wr(sd, 0x1d, A3);
++	adv7511_wr_and_or(sd, 0x1e, 0xe0, A4>>8);
++	adv7511_wr(sd, 0x1f, A4);
++
++	/* B */
++	adv7511_wr_and_or(sd, 0x20, 0xe0, B1>>8);
++	adv7511_wr(sd, 0x21, B1);
++	adv7511_wr_and_or(sd, 0x22, 0xe0, B2>>8);
++	adv7511_wr(sd, 0x23, B2);
++	adv7511_wr_and_or(sd, 0x24, 0xe0, B3>>8);
++	adv7511_wr(sd, 0x25, B3);
++	adv7511_wr_and_or(sd, 0x26, 0xe0, B4>>8);
++	adv7511_wr(sd, 0x27, B4);
++
++	/* C */
++	adv7511_wr_and_or(sd, 0x28, 0xe0, C1>>8);
++	adv7511_wr(sd, 0x29, C1);
++	adv7511_wr_and_or(sd, 0x2A, 0xe0, C2>>8);
++	adv7511_wr(sd, 0x2B, C2);
++	adv7511_wr_and_or(sd, 0x2C, 0xe0, C3>>8);
++	adv7511_wr(sd, 0x2D, C3);
++	adv7511_wr_and_or(sd, 0x2E, 0xe0, C4>>8);
++	adv7511_wr(sd, 0x2F, C4);
++}
++
++static void adv7511_csc_rgb_full2limit(struct v4l2_subdev *sd, bool enable)
++{
++	if (enable) {
++		u8 csc_mode = 0;
++		adv7511_csc_conversion_mode(sd, csc_mode);
++		adv7511_csc_coeff(sd,
++				  4096-564, 0, 0, 256,
++				  0, 4096-564, 0, 256,
++				  0, 0, 4096-564, 256);
++		/* enable CSC */
++		adv7511_wr_and_or(sd, 0x18, 0x7f, 0x80);
++		/* AVI infoframe: Limited range RGB (16-235) */
++		adv7511_wr_and_or(sd, 0x57, 0xf3, 0x04);
++	} else {
++		/* disable CSC */
++		adv7511_wr_and_or(sd, 0x18, 0x7f, 0x0);
++		/* AVI infoframe: Full range RGB (0-255) */
++		adv7511_wr_and_or(sd, 0x57, 0xf3, 0x08);
++	}
++}
++
++static void adv7511_set_rgb_quantization_mode(struct v4l2_subdev *sd, struct v4l2_ctrl *ctrl)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++
++	/* Only makes sense for RGB formats */
++	if (state->fmt_code != MEDIA_BUS_FMT_RGB888_1X24) {
++		/* so just leave the quantization range as-is */
++		adv7511_csc_rgb_full2limit(sd, false);
++		return;
++	}
++
++	switch (ctrl->val) {
++	case V4L2_DV_RGB_RANGE_AUTO:
++		/* automatic */
++		if (state->dv_timings.bt.flags & V4L2_DV_FL_IS_CE_VIDEO) {
++			/* CE format, RGB limited range (16-235) */
++			adv7511_csc_rgb_full2limit(sd, true);
++		} else {
++			/* not CE format, RGB full range (0-255) */
++			adv7511_csc_rgb_full2limit(sd, false);
++		}
++		break;
++	case V4L2_DV_RGB_RANGE_LIMITED:
++		/* RGB limited range (16-235) */
++		adv7511_csc_rgb_full2limit(sd, true);
++		break;
++	case V4L2_DV_RGB_RANGE_FULL:
++		/* RGB full range (0-255) */
++		adv7511_csc_rgb_full2limit(sd, false);
++		break;
++	}
++}
++
++/* ------------------------------ CTRL OPS ------------------------------ */
++
++static int adv7511_s_ctrl(struct v4l2_ctrl *ctrl)
++{
++	struct v4l2_subdev *sd = to_sd(ctrl);
++	struct adv7511_state *state = get_adv7511_state(sd);
++
++	v4l2_dbg(1, debug, sd, "%s: ctrl id: %d, ctrl->val %d\n", __func__, ctrl->id, ctrl->val);
++
++	if (state->hdmi_mode_ctrl == ctrl) {
++		/* Set HDMI or DVI-D */
++		adv7511_wr_and_or(sd, 0xaf, 0xfd, ctrl->val == V4L2_DV_TX_MODE_HDMI ? 0x02 : 0x00);
++		return 0;
++	}
++	if (state->rgb_quantization_range_ctrl == ctrl) {
++		adv7511_set_rgb_quantization_mode(sd, ctrl);
++		return 0;
++	}
++	if (state->content_type_ctrl == ctrl) {
++		u8 itc, cn;
++
++		state->content_type = ctrl->val;
++		itc = state->content_type != V4L2_DV_IT_CONTENT_TYPE_NO_ITC;
++		cn = itc ? state->content_type : V4L2_DV_IT_CONTENT_TYPE_GRAPHICS;
++		adv7511_wr_and_or(sd, 0x57, 0x7f, itc << 7);
++		adv7511_wr_and_or(sd, 0x59, 0xcf, cn << 4);
++		return 0;
++	}
++
++	return -EINVAL;
++}
++
++static const struct v4l2_ctrl_ops adv7511_ctrl_ops = {
++	.s_ctrl = adv7511_s_ctrl,
++};
++
++/* ---------------------------- CORE OPS ------------------------------------------- */
++
++#ifdef CONFIG_VIDEO_ADV_DEBUG
++static void adv7511_inv_register(struct v4l2_subdev *sd)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++
++	v4l2_info(sd, "0x000-0x0ff: Main Map\n");
++	if (state->i2c_cec)
++		v4l2_info(sd, "0x100-0x1ff: CEC Map\n");
++}
++
++static int adv7511_g_register(struct v4l2_subdev *sd, struct v4l2_dbg_register *reg)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++
++	reg->size = 1;
++	switch (reg->reg >> 8) {
++	case 0:
++		reg->val = adv7511_rd(sd, reg->reg & 0xff);
++		break;
++	case 1:
++		if (state->i2c_cec) {
++			reg->val = adv7511_cec_read(sd, reg->reg & 0xff);
++			break;
++		}
++		/* fall through */
++	default:
++		v4l2_info(sd, "Register %03llx not supported\n", reg->reg);
++		adv7511_inv_register(sd);
++		break;
++	}
++	return 0;
++}
++
++static int adv7511_s_register(struct v4l2_subdev *sd, const struct v4l2_dbg_register *reg)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++
++	switch (reg->reg >> 8) {
++	case 0:
++		adv7511_wr(sd, reg->reg & 0xff, reg->val & 0xff);
++		break;
++	case 1:
++		if (state->i2c_cec) {
++			adv7511_cec_write(sd, reg->reg & 0xff, reg->val & 0xff);
++			break;
++		}
++		/* fall through */
++	default:
++		v4l2_info(sd, "Register %03llx not supported\n", reg->reg);
++		adv7511_inv_register(sd);
++		break;
++	}
++	return 0;
++}
++#endif
++
++struct adv7511_cfg_read_infoframe {
++	const char *desc;
++	u8 present_reg;
++	u8 present_mask;
++	u8 header[3];
++	u16 payload_addr;
++};
++
++static u8 hdmi_infoframe_checksum(u8 *ptr, size_t size)
++{
++	u8 csum = 0;
++	size_t i;
++
++	/* compute checksum */
++	for (i = 0; i < size; i++)
++		csum += ptr[i];
++
++	return 256 - csum;
++}
++
++static void log_infoframe(struct v4l2_subdev *sd, const struct adv7511_cfg_read_infoframe *cri)
++{
++	struct i2c_client *client = v4l2_get_subdevdata(sd);
++	struct device *dev = &client->dev;
++	union hdmi_infoframe frame;
++	u8 buffer[32];
++	u8 len;
++	int i;
++
++	if (!(adv7511_rd(sd, cri->present_reg) & cri->present_mask)) {
++		v4l2_info(sd, "%s infoframe not transmitted\n", cri->desc);
++		return;
++	}
++
++	memcpy(buffer, cri->header, sizeof(cri->header));
++
++	len = buffer[2];
++
++	if (len + 4 > sizeof(buffer)) {
++		v4l2_err(sd, "%s: invalid %s infoframe length %d\n", __func__, cri->desc, len);
++		return;
++	}
++
++	if (cri->payload_addr >= 0x100) {
++		for (i = 0; i < len; i++)
++			buffer[i + 4] = adv7511_pktmem_rd(sd, cri->payload_addr + i - 0x100);
++	} else {
++		for (i = 0; i < len; i++)
++			buffer[i + 4] = adv7511_rd(sd, cri->payload_addr + i);
++	}
++	buffer[3] = 0;
++	buffer[3] = hdmi_infoframe_checksum(buffer, len + 4);
++
++	if (hdmi_infoframe_unpack(&frame, buffer, sizeof(buffer)) < 0) {
++		v4l2_err(sd, "%s: unpack of %s infoframe failed\n", __func__, cri->desc);
++		return;
++	}
++
++	hdmi_infoframe_log(KERN_INFO, dev, &frame);
++}
++
++static void adv7511_log_infoframes(struct v4l2_subdev *sd)
++{
++	static const struct adv7511_cfg_read_infoframe cri[] = {
++		{ "AVI", 0x44, 0x10, { 0x82, 2, 13 }, 0x55 },
++		{ "Audio", 0x44, 0x08, { 0x84, 1, 10 }, 0x73 },
++		{ "SDP", 0x40, 0x40, { 0x83, 1, 25 }, 0x103 },
++	};
++	int i;
++
++	for (i = 0; i < ARRAY_SIZE(cri); i++)
++		log_infoframe(sd, &cri[i]);
++}
++
++static int adv7511_log_status(struct v4l2_subdev *sd)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++	struct adv7511_state_edid *edid = &state->edid;
++	int i;
++
++	static const char * const states[] = {
++		"in reset",
++		"reading EDID",
++		"idle",
++		"initializing HDCP",
++		"HDCP enabled",
++		"initializing HDCP repeater",
++		"6", "7", "8", "9", "A", "B", "C", "D", "E", "F"
++	};
++	static const char * const errors[] = {
++		"no error",
++		"bad receiver BKSV",
++		"Ri mismatch",
++		"Pj mismatch",
++		"i2c error",
++		"timed out",
++		"max repeater cascade exceeded",
++		"hash check failed",
++		"too many devices",
++		"9", "A", "B", "C", "D", "E", "F"
++	};
++
++	v4l2_info(sd, "power %s\n", state->power_on ? "on" : "off");
++	v4l2_info(sd, "%s hotplug, %s Rx Sense, %s EDID (%d block(s))\n",
++		  (adv7511_rd(sd, 0x42) & MASK_ADV7511_HPD_DETECT) ? "detected" : "no",
++		  (adv7511_rd(sd, 0x42) & MASK_ADV7511_MSEN_DETECT) ? "detected" : "no",
++		  edid->segments ? "found" : "no",
++		  edid->blocks);
++	v4l2_info(sd, "%s output %s\n",
++		  (adv7511_rd(sd, 0xaf) & 0x02) ?
++		  "HDMI" : "DVI-D",
++		  (adv7511_rd(sd, 0xa1) & 0x3c) ?
++		  "disabled" : "enabled");
++	v4l2_info(sd, "state: %s, error: %s, detect count: %u, msk/irq: %02x/%02x\n",
++			  states[adv7511_rd(sd, 0xc8) & 0xf],
++			  errors[adv7511_rd(sd, 0xc8) >> 4], state->edid_detect_counter,
++			  adv7511_rd(sd, 0x94), adv7511_rd(sd, 0x96));
++	v4l2_info(sd, "RGB quantization: %s range\n", adv7511_rd(sd, 0x18) & 0x80 ? "limited" : "full");
++	if (adv7511_rd(sd, 0xaf) & 0x02) {
++		/* HDMI only */
++		u8 manual_cts = adv7511_rd(sd, 0x0a) & 0x80;
++		u32 N = (adv7511_rd(sd, 0x01) & 0xf) << 16 |
++			adv7511_rd(sd, 0x02) << 8 |
++			adv7511_rd(sd, 0x03);
++		u8 vic_detect = adv7511_rd(sd, 0x3e) >> 2;
++		u8 vic_sent = adv7511_rd(sd, 0x3d) & 0x3f;
++		u32 CTS;
++
++		if (manual_cts)
++			CTS = (adv7511_rd(sd, 0x07) & 0xf) << 16 |
++			      adv7511_rd(sd, 0x08) << 8 |
++			      adv7511_rd(sd, 0x09);
++		else
++			CTS = (adv7511_rd(sd, 0x04) & 0xf) << 16 |
++			      adv7511_rd(sd, 0x05) << 8 |
++			      adv7511_rd(sd, 0x06);
++		v4l2_info(sd, "CTS %s mode: N %d, CTS %d\n",
++			  manual_cts ? "manual" : "automatic", N, CTS);
++		v4l2_info(sd, "VIC: detected %d, sent %d\n",
++			  vic_detect, vic_sent);
++		adv7511_log_infoframes(sd);
++	}
++	if (state->dv_timings.type == V4L2_DV_BT_656_1120)
++		v4l2_print_dv_timings(sd->name, "timings: ",
++				&state->dv_timings, false);
++	else
++		v4l2_info(sd, "no timings set\n");
++	v4l2_info(sd, "i2c edid addr: 0x%x\n", state->i2c_edid_addr);
++
++	if (state->i2c_cec == NULL)
++		return 0;
++
++	v4l2_info(sd, "i2c cec addr: 0x%x\n", state->i2c_cec_addr);
++
++	v4l2_info(sd, "CEC: %s\n", state->cec_enabled_adap ?
++			"enabled" : "disabled");
++	if (state->cec_enabled_adap) {
++		for (i = 0; i < ADV7511_MAX_ADDRS; i++) {
++			bool is_valid = state->cec_valid_addrs & (1 << i);
++
++			if (is_valid)
++				v4l2_info(sd, "CEC Logical Address: 0x%x\n",
++					  state->cec_addr[i]);
++		}
++	}
++	v4l2_info(sd, "i2c pktmem addr: 0x%x\n", state->i2c_pktmem_addr);
++	return 0;
++}
++
++/* Power up/down adv7511 */
++static int adv7511_s_power(struct v4l2_subdev *sd, int on)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++	const int retries = 20;
++	int i;
++
++	v4l2_dbg(1, debug, sd, "%s: power %s\n", __func__, on ? "on" : "off");
++
++	state->power_on = on;
++
++	if (!on) {
++		/* Power down */
++		adv7511_wr_and_or(sd, 0x41, 0xbf, 0x40);
++		return true;
++	}
++
++	/* Power up */
++	/* The adv7511 does not always come up immediately.
++	   Retry multiple times. */
++	for (i = 0; i < retries; i++) {
++		adv7511_wr_and_or(sd, 0x41, 0xbf, 0x0);
++		if ((adv7511_rd(sd, 0x41) & 0x40) == 0)
++			break;
++		adv7511_wr_and_or(sd, 0x41, 0xbf, 0x40);
++		msleep(10);
++	}
++	if (i == retries) {
++		v4l2_dbg(1, debug, sd, "%s: failed to powerup the adv7511!\n", __func__);
++		adv7511_s_power(sd, 0);
++		return false;
++	}
++	if (i > 1)
++		v4l2_dbg(1, debug, sd, "%s: needed %d retries to powerup the adv7511\n", __func__, i);
++
++	/* Reserved registers that must be set */
++	adv7511_wr(sd, 0x98, 0x03);
++	adv7511_wr_and_or(sd, 0x9a, 0xfe, 0x70);
++	adv7511_wr(sd, 0x9c, 0x30);
++	adv7511_wr_and_or(sd, 0x9d, 0xfc, 0x01);
++	adv7511_wr(sd, 0xa2, 0xa4);
++	adv7511_wr(sd, 0xa3, 0xa4);
++	adv7511_wr(sd, 0xe0, 0xd0);
++	adv7511_wr(sd, 0xf9, 0x00);
++
++	adv7511_wr(sd, 0x43, state->i2c_edid_addr);
++	adv7511_wr(sd, 0x45, state->i2c_pktmem_addr);
++
++	/* Set number of attempts to read the EDID */
++	adv7511_wr(sd, 0xc9, 0xf);
++	return true;
++}
++
++#if IS_ENABLED(CONFIG_VIDEO_ADV7511_CEC)
++static int adv7511_cec_adap_enable(struct cec_adapter *adap, bool enable)
++{
++	struct adv7511_state *state = cec_get_drvdata(adap);
++	struct v4l2_subdev *sd = &state->sd;
++
++	if (state->i2c_cec == NULL)
++		return -EIO;
++
++	if (!state->cec_enabled_adap && enable) {
++		/* power up cec section */
++		adv7511_cec_write_and_or(sd, 0x4e, 0xfc, 0x01);
++		/* legacy mode and clear all rx buffers */
++		adv7511_cec_write(sd, 0x4a, 0x00);
++		adv7511_cec_write(sd, 0x4a, 0x07);
++		adv7511_cec_write_and_or(sd, 0x11, 0xfe, 0); /* initially disable tx */
++		/* enabled irqs: */
++		/* tx: ready */
++		/* tx: arbitration lost */
++		/* tx: retry timeout */
++		/* rx: ready 1 */
++		if (state->enabled_irq)
++			adv7511_wr_and_or(sd, 0x95, 0xc0, 0x39);
++	} else if (state->cec_enabled_adap && !enable) {
++		if (state->enabled_irq)
++			adv7511_wr_and_or(sd, 0x95, 0xc0, 0x00);
++		/* disable address mask 1-3 */
++		adv7511_cec_write_and_or(sd, 0x4b, 0x8f, 0x00);
++		/* power down cec section */
++		adv7511_cec_write_and_or(sd, 0x4e, 0xfc, 0x00);
++		state->cec_valid_addrs = 0;
++	}
++	state->cec_enabled_adap = enable;
++	return 0;
++}
++
++static int adv7511_cec_adap_log_addr(struct cec_adapter *adap, u8 addr)
++{
++	struct adv7511_state *state = cec_get_drvdata(adap);
++	struct v4l2_subdev *sd = &state->sd;
++	unsigned int i, free_idx = ADV7511_MAX_ADDRS;
++
++	if (!state->cec_enabled_adap)
++		return addr == CEC_LOG_ADDR_INVALID ? 0 : -EIO;
++
++	if (addr == CEC_LOG_ADDR_INVALID) {
++		adv7511_cec_write_and_or(sd, 0x4b, 0x8f, 0);
++		state->cec_valid_addrs = 0;
++		return 0;
++	}
++
++	for (i = 0; i < ADV7511_MAX_ADDRS; i++) {
++		bool is_valid = state->cec_valid_addrs & (1 << i);
++
++		if (free_idx == ADV7511_MAX_ADDRS && !is_valid)
++			free_idx = i;
++		if (is_valid && state->cec_addr[i] == addr)
++			return 0;
++	}
++	if (i == ADV7511_MAX_ADDRS) {
++		i = free_idx;
++		if (i == ADV7511_MAX_ADDRS)
++			return -ENXIO;
++	}
++	state->cec_addr[i] = addr;
++	state->cec_valid_addrs |= 1 << i;
++
++	switch (i) {
++	case 0:
++		/* enable address mask 0 */
++		adv7511_cec_write_and_or(sd, 0x4b, 0xef, 0x10);
++		/* set address for mask 0 */
++		adv7511_cec_write_and_or(sd, 0x4c, 0xf0, addr);
++		break;
++	case 1:
++		/* enable address mask 1 */
++		adv7511_cec_write_and_or(sd, 0x4b, 0xdf, 0x20);
++		/* set address for mask 1 */
++		adv7511_cec_write_and_or(sd, 0x4c, 0x0f, addr << 4);
++		break;
++	case 2:
++		/* enable address mask 2 */
++		adv7511_cec_write_and_or(sd, 0x4b, 0xbf, 0x40);
++		/* set address for mask 2 */
++		adv7511_cec_write_and_or(sd, 0x4d, 0xf0, addr);
++		break;
++	}
++	return 0;
++}
++
++static int adv7511_cec_adap_transmit(struct cec_adapter *adap, u8 attempts,
++				     u32 signal_free_time, struct cec_msg *msg)
++{
++	struct adv7511_state *state = cec_get_drvdata(adap);
++	struct v4l2_subdev *sd = &state->sd;
++	u8 len = msg->len;
++	unsigned int i;
++
++	v4l2_dbg(1, debug, sd, "%s: len %d\n", __func__, len);
++
++	if (len > 16) {
++		v4l2_err(sd, "%s: len exceeded 16 (%d)\n", __func__, len);
++		return -EINVAL;
++	}
++
++	/*
++	 * The number of retries is the number of attempts - 1, but retry
++	 * at least once. It's not clear if a value of 0 is allowed, so
++	 * let's do at least one retry.
++	 */
++	adv7511_cec_write_and_or(sd, 0x12, ~0x70, max(1, attempts - 1) << 4);
++
++	/* clear cec tx irq status */
++	adv7511_wr(sd, 0x97, 0x38);
++
++	/* write data */
++	for (i = 0; i < len; i++)
++		adv7511_cec_write(sd, i, msg->msg[i]);
++
++	/* set length (data + header) */
++	adv7511_cec_write(sd, 0x10, len);
++	/* start transmit, enable tx */
++	adv7511_cec_write(sd, 0x11, 0x01);
++	return 0;
++}
++
++static void adv_cec_tx_raw_status(struct v4l2_subdev *sd, u8 tx_raw_status)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++
++	if ((adv7511_cec_read(sd, 0x11) & 0x01) == 0) {
++		v4l2_dbg(1, debug, sd, "%s: tx raw: tx disabled\n", __func__);
++		return;
++	}
++
++	if (tx_raw_status & 0x10) {
++		v4l2_dbg(1, debug, sd,
++			 "%s: tx raw: arbitration lost\n", __func__);
++		cec_transmit_done(state->cec_adap, CEC_TX_STATUS_ARB_LOST,
++				  1, 0, 0, 0);
++		return;
++	}
++	if (tx_raw_status & 0x08) {
++		u8 status;
++		u8 nack_cnt;
++		u8 low_drive_cnt;
++
++		v4l2_dbg(1, debug, sd, "%s: tx raw: retry failed\n", __func__);
++		/*
++		 * We set this status bit since this hardware performs
++		 * retransmissions.
++		 */
++		status = CEC_TX_STATUS_MAX_RETRIES;
++		nack_cnt = adv7511_cec_read(sd, 0x14) & 0xf;
++		if (nack_cnt)
++			status |= CEC_TX_STATUS_NACK;
++		low_drive_cnt = adv7511_cec_read(sd, 0x14) >> 4;
++		if (low_drive_cnt)
++			status |= CEC_TX_STATUS_LOW_DRIVE;
++		cec_transmit_done(state->cec_adap, status,
++				  0, nack_cnt, low_drive_cnt, 0);
++		return;
++	}
++	if (tx_raw_status & 0x20) {
++		v4l2_dbg(1, debug, sd, "%s: tx raw: ready ok\n", __func__);
++		cec_transmit_done(state->cec_adap, CEC_TX_STATUS_OK, 0, 0, 0, 0);
++		return;
++	}
++}
++
++static const struct cec_adap_ops adv7511_cec_adap_ops = {
++	.adap_enable = adv7511_cec_adap_enable,
++	.adap_log_addr = adv7511_cec_adap_log_addr,
++	.adap_transmit = adv7511_cec_adap_transmit,
++};
++#endif
++
++/* Enable interrupts */
++static void adv7511_set_isr(struct v4l2_subdev *sd, bool enable)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++	u8 irqs = MASK_ADV7511_HPD_INT | MASK_ADV7511_MSEN_INT;
++	u8 irqs_rd;
++	int retries = 100;
++
++	v4l2_dbg(2, debug, sd, "%s: %s\n", __func__, enable ? "enable" : "disable");
++
++	if (state->enabled_irq == enable)
++		return;
++	state->enabled_irq = enable;
++
++	/* The datasheet says that the EDID ready interrupt should be
++	   disabled if there is no hotplug. */
++	if (!enable)
++		irqs = 0;
++	else if (adv7511_have_hotplug(sd))
++		irqs |= MASK_ADV7511_EDID_RDY_INT;
++
++	/*
++	 * This i2c write can fail (approx. 1 in 1000 writes). But it
++	 * is essential that this register is correct, so retry it
++	 * multiple times.
++	 *
++	 * Note that the i2c write does not report an error, but the readback
++	 * clearly shows the wrong value.
++	 */
++	do {
++		adv7511_wr(sd, 0x94, irqs);
++		irqs_rd = adv7511_rd(sd, 0x94);
++	} while (retries-- && irqs_rd != irqs);
++
++	if (irqs_rd != irqs)
++		v4l2_err(sd, "Could not set interrupts: hw failure?\n");
++
++	adv7511_wr_and_or(sd, 0x95, 0xc0,
++			  (state->cec_enabled_adap && enable) ? 0x39 : 0x00);
++}
++
++/* Interrupt handler */
++static int adv7511_isr(struct v4l2_subdev *sd, u32 status, bool *handled)
++{
++	u8 irq_status;
++	u8 cec_irq;
++
++	/* disable interrupts to prevent a race condition */
++	adv7511_set_isr(sd, false);
++	irq_status = adv7511_rd(sd, 0x96);
++	cec_irq = adv7511_rd(sd, 0x97);
++	/* clear detected interrupts */
++	adv7511_wr(sd, 0x96, irq_status);
++	adv7511_wr(sd, 0x97, cec_irq);
++
++	v4l2_dbg(1, debug, sd, "%s: irq 0x%x, cec-irq 0x%x\n", __func__,
++		 irq_status, cec_irq);
++
++	if (irq_status & (MASK_ADV7511_HPD_INT | MASK_ADV7511_MSEN_INT))
++		adv7511_check_monitor_present_status(sd);
++	if (irq_status & MASK_ADV7511_EDID_RDY_INT)
++		adv7511_check_edid_status(sd);
++
++#if IS_ENABLED(CONFIG_VIDEO_ADV7511_CEC)
++	if (cec_irq & 0x38)
++		adv_cec_tx_raw_status(sd, cec_irq);
++
++	if (cec_irq & 1) {
++		struct adv7511_state *state = get_adv7511_state(sd);
++		struct cec_msg msg;
++
++		msg.len = adv7511_cec_read(sd, 0x25) & 0x1f;
++
++		v4l2_dbg(1, debug, sd, "%s: cec msg len %d\n", __func__,
++			 msg.len);
++
++		if (msg.len > 16)
++			msg.len = 16;
++
++		if (msg.len) {
++			u8 i;
++
++			for (i = 0; i < msg.len; i++)
++				msg.msg[i] = adv7511_cec_read(sd, i + 0x15);
++
++			adv7511_cec_write(sd, 0x4a, 0); /* toggle to re-enable rx 1 */
++			adv7511_cec_write(sd, 0x4a, 1);
++			cec_received_msg(state->cec_adap, &msg);
++		}
++	}
++#endif
++
++	/* enable interrupts */
++	adv7511_set_isr(sd, true);
++
++	if (handled)
++		*handled = true;
++	return 0;
++}
++
++static const struct v4l2_subdev_core_ops adv7511_core_ops = {
++	.log_status = adv7511_log_status,
++#ifdef CONFIG_VIDEO_ADV_DEBUG
++	.g_register = adv7511_g_register,
++	.s_register = adv7511_s_register,
++#endif
++	.s_power = adv7511_s_power,
++	.interrupt_service_routine = adv7511_isr,
++};
++
++/* ------------------------------ VIDEO OPS ------------------------------ */
++
++/* Enable/disable adv7511 output */
++static int adv7511_s_stream(struct v4l2_subdev *sd, int enable)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++
++	v4l2_dbg(1, debug, sd, "%s: %sable\n", __func__, (enable ? "en" : "dis"));
++	adv7511_wr_and_or(sd, 0xa1, ~0x3c, (enable ? 0 : 0x3c));
++	if (enable) {
++		adv7511_check_monitor_present_status(sd);
++	} else {
++		adv7511_s_power(sd, 0);
++		state->have_monitor = false;
++	}
++	return 0;
++}
++
++static int adv7511_s_dv_timings(struct v4l2_subdev *sd,
++			       struct v4l2_dv_timings *timings)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++	struct v4l2_bt_timings *bt = &timings->bt;
++	u32 fps;
++
++	v4l2_dbg(1, debug, sd, "%s:\n", __func__);
++
++	/* quick sanity check */
++	if (!v4l2_valid_dv_timings(timings, &adv7511_timings_cap, NULL, NULL))
++		return -EINVAL;
++
++	/* Fill the optional fields .standards and .flags in struct v4l2_dv_timings
++	   if the format is one of the CEA or DMT timings. */
++	v4l2_find_dv_timings_cap(timings, &adv7511_timings_cap, 0, NULL, NULL);
++
++	/* save timings */
++	state->dv_timings = *timings;
++
++	/* set h/vsync polarities */
++	adv7511_wr_and_or(sd, 0x17, 0x9f,
++		((bt->polarities & V4L2_DV_VSYNC_POS_POL) ? 0 : 0x40) |
++		((bt->polarities & V4L2_DV_HSYNC_POS_POL) ? 0 : 0x20));
++
++	fps = (u32)bt->pixelclock / (V4L2_DV_BT_FRAME_WIDTH(bt) * V4L2_DV_BT_FRAME_HEIGHT(bt));
++	switch (fps) {
++	case 24:
++		adv7511_wr_and_or(sd, 0xfb, 0xf9, 1 << 1);
++		break;
++	case 25:
++		adv7511_wr_and_or(sd, 0xfb, 0xf9, 2 << 1);
++		break;
++	case 30:
++		adv7511_wr_and_or(sd, 0xfb, 0xf9, 3 << 1);
++		break;
++	default:
++		adv7511_wr_and_or(sd, 0xfb, 0xf9, 0);
++		break;
++	}
++
++	/* update quantization range based on new dv_timings */
++	adv7511_set_rgb_quantization_mode(sd, state->rgb_quantization_range_ctrl);
++
++	return 0;
++}
++
++static int adv7511_g_dv_timings(struct v4l2_subdev *sd,
++				struct v4l2_dv_timings *timings)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++
++	v4l2_dbg(1, debug, sd, "%s:\n", __func__);
++
++	if (!timings)
++		return -EINVAL;
++
++	*timings = state->dv_timings;
++
++	return 0;
++}
++
++static int adv7511_enum_dv_timings(struct v4l2_subdev *sd,
++				   struct v4l2_enum_dv_timings *timings)
++{
++	if (timings->pad != 0)
++		return -EINVAL;
++
++	return v4l2_enum_dv_timings_cap(timings, &adv7511_timings_cap, NULL, NULL);
++}
++
++static int adv7511_dv_timings_cap(struct v4l2_subdev *sd,
++				  struct v4l2_dv_timings_cap *cap)
++{
++	if (cap->pad != 0)
++		return -EINVAL;
++
++	*cap = adv7511_timings_cap;
++	return 0;
++}
++
++static const struct v4l2_subdev_video_ops adv7511_video_ops = {
++	.s_stream = adv7511_s_stream,
++	.s_dv_timings = adv7511_s_dv_timings,
++	.g_dv_timings = adv7511_g_dv_timings,
++};
++
++/* ------------------------------ AUDIO OPS ------------------------------ */
++static int adv7511_s_audio_stream(struct v4l2_subdev *sd, int enable)
++{
++	v4l2_dbg(1, debug, sd, "%s: %sable\n", __func__, (enable ? "en" : "dis"));
++
++	if (enable)
++		adv7511_wr_and_or(sd, 0x4b, 0x3f, 0x80);
++	else
++		adv7511_wr_and_or(sd, 0x4b, 0x3f, 0x40);
++
++	return 0;
++}
++
++static int adv7511_s_clock_freq(struct v4l2_subdev *sd, u32 freq)
++{
++	u32 N;
++
++	switch (freq) {
++	case 32000:  N = 4096;  break;
++	case 44100:  N = 6272;  break;
++	case 48000:  N = 6144;  break;
++	case 88200:  N = 12544; break;
++	case 96000:  N = 12288; break;
++	case 176400: N = 25088; break;
++	case 192000: N = 24576; break;
++	default:
++		return -EINVAL;
++	}
++
++	/* Set N (used with CTS to regenerate the audio clock) */
++	adv7511_wr(sd, 0x01, (N >> 16) & 0xf);
++	adv7511_wr(sd, 0x02, (N >> 8) & 0xff);
++	adv7511_wr(sd, 0x03, N & 0xff);
++
++	return 0;
++}
++
++static int adv7511_s_i2s_clock_freq(struct v4l2_subdev *sd, u32 freq)
++{
++	u32 i2s_sf;
++
++	switch (freq) {
++	case 32000:  i2s_sf = 0x30; break;
++	case 44100:  i2s_sf = 0x00; break;
++	case 48000:  i2s_sf = 0x20; break;
++	case 88200:  i2s_sf = 0x80; break;
++	case 96000:  i2s_sf = 0xa0; break;
++	case 176400: i2s_sf = 0xc0; break;
++	case 192000: i2s_sf = 0xe0; break;
++	default:
++		return -EINVAL;
++	}
++
++	/* Set the I2S audio sampling frequency */
++	adv7511_wr_and_or(sd, 0x15, 0xf, i2s_sf);
++
++	return 0;
++}
++
++static int adv7511_s_routing(struct v4l2_subdev *sd, u32 input, u32 output, u32 config)
++{
++	/* Only 2 channels in use for application */
++	adv7511_wr_and_or(sd, 0x73, 0xf8, 0x1);
++	/* Speaker mapping */
++	adv7511_wr(sd, 0x76, 0x00);
++
++	/* 16 bit audio word length */
++	adv7511_wr_and_or(sd, 0x14, 0xf0, 0x02);
++
++	return 0;
++}
++
++static const struct v4l2_subdev_audio_ops adv7511_audio_ops = {
++	.s_stream = adv7511_s_audio_stream,
++	.s_clock_freq = adv7511_s_clock_freq,
++	.s_i2s_clock_freq = adv7511_s_i2s_clock_freq,
++	.s_routing = adv7511_s_routing,
++};
++
++/* ---------------------------- PAD OPS ------------------------------------- */
++
++static int adv7511_get_edid(struct v4l2_subdev *sd, struct v4l2_edid *edid)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++
++	memset(edid->reserved, 0, sizeof(edid->reserved));
++
++	if (edid->pad != 0)
++		return -EINVAL;
++
++	if (edid->start_block == 0 && edid->blocks == 0) {
++		edid->blocks = state->edid.segments * 2;
++		return 0;
++	}
++
++	if (state->edid.segments == 0)
++		return -ENODATA;
++
++	if (edid->start_block >= state->edid.segments * 2)
++		return -EINVAL;
++
++	if (edid->start_block + edid->blocks > state->edid.segments * 2)
++		edid->blocks = state->edid.segments * 2 - edid->start_block;
++
++	memcpy(edid->edid, &state->edid.data[edid->start_block * 128],
++			128 * edid->blocks);
++
++	return 0;
++}
++
++static int adv7511_enum_mbus_code(struct v4l2_subdev *sd,
++				  struct v4l2_subdev_pad_config *cfg,
++				  struct v4l2_subdev_mbus_code_enum *code)
++{
++	if (code->pad != 0)
++		return -EINVAL;
++
++	switch (code->index) {
++	case 0:
++		code->code = MEDIA_BUS_FMT_RGB888_1X24;
++		break;
++	case 1:
++		code->code = MEDIA_BUS_FMT_YUYV8_1X16;
++		break;
++	case 2:
++		code->code = MEDIA_BUS_FMT_UYVY8_1X16;
++		break;
++	default:
++		return -EINVAL;
++	}
++	return 0;
++}
++
++static void adv7511_fill_format(struct adv7511_state *state,
++				struct v4l2_mbus_framefmt *format)
++{
++	format->width = state->dv_timings.bt.width;
++	format->height = state->dv_timings.bt.height;
++	format->field = V4L2_FIELD_NONE;
++}
++
++static int adv7511_get_fmt(struct v4l2_subdev *sd,
++			   struct v4l2_subdev_pad_config *cfg,
++			   struct v4l2_subdev_format *format)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++
++	if (format->pad != 0)
++		return -EINVAL;
++
++	memset(&format->format, 0, sizeof(format->format));
++	adv7511_fill_format(state, &format->format);
++
++	if (format->which == V4L2_SUBDEV_FORMAT_TRY) {
++		struct v4l2_mbus_framefmt *fmt;
++
++		fmt = v4l2_subdev_get_try_format(sd, cfg, format->pad);
++		format->format.code = fmt->code;
++		format->format.colorspace = fmt->colorspace;
++		format->format.ycbcr_enc = fmt->ycbcr_enc;
++		format->format.quantization = fmt->quantization;
++		format->format.xfer_func = fmt->xfer_func;
++	} else {
++		format->format.code = state->fmt_code;
++		format->format.colorspace = state->colorspace;
++		format->format.ycbcr_enc = state->ycbcr_enc;
++		format->format.quantization = state->quantization;
++		format->format.xfer_func = state->xfer_func;
++	}
++
++	return 0;
++}
++
++static int adv7511_set_fmt(struct v4l2_subdev *sd,
++			   struct v4l2_subdev_pad_config *cfg,
++			   struct v4l2_subdev_format *format)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++	/*
++	 * Bitfield names come from the CEA-861-F standard, table 8 "Auxiliary
++	 * Video Information (AVI) InfoFrame Format"
++	 *
++	 * c = Colorimetry
++	 * ec = Extended Colorimetry
++	 * y = RGB or YCbCr
++	 * q = RGB Quantization Range
++	 * yq = YCC Quantization Range
++	 */
++	u8 c = HDMI_COLORIMETRY_NONE;
++	u8 ec = HDMI_EXTENDED_COLORIMETRY_XV_YCC_601;
++	u8 y = HDMI_COLORSPACE_RGB;
++	u8 q = HDMI_QUANTIZATION_RANGE_DEFAULT;
++	u8 yq = HDMI_YCC_QUANTIZATION_RANGE_LIMITED;
++	u8 itc = state->content_type != V4L2_DV_IT_CONTENT_TYPE_NO_ITC;
++	u8 cn = itc ? state->content_type : V4L2_DV_IT_CONTENT_TYPE_GRAPHICS;
++
++	if (format->pad != 0)
++		return -EINVAL;
++	switch (format->format.code) {
++	case MEDIA_BUS_FMT_UYVY8_1X16:
++	case MEDIA_BUS_FMT_YUYV8_1X16:
++	case MEDIA_BUS_FMT_RGB888_1X24:
++		break;
++	default:
++		return -EINVAL;
++	}
++
++	adv7511_fill_format(state, &format->format);
++	if (format->which == V4L2_SUBDEV_FORMAT_TRY) {
++		struct v4l2_mbus_framefmt *fmt;
++
++		fmt = v4l2_subdev_get_try_format(sd, cfg, format->pad);
++		fmt->code = format->format.code;
++		fmt->colorspace = format->format.colorspace;
++		fmt->ycbcr_enc = format->format.ycbcr_enc;
++		fmt->quantization = format->format.quantization;
++		fmt->xfer_func = format->format.xfer_func;
++		return 0;
++	}
++
++	switch (format->format.code) {
++	case MEDIA_BUS_FMT_UYVY8_1X16:
++		adv7511_wr_and_or(sd, 0x15, 0xf0, 0x01);
++		adv7511_wr_and_or(sd, 0x16, 0x03, 0xb8);
++		y = HDMI_COLORSPACE_YUV422;
++		break;
++	case MEDIA_BUS_FMT_YUYV8_1X16:
++		adv7511_wr_and_or(sd, 0x15, 0xf0, 0x01);
++		adv7511_wr_and_or(sd, 0x16, 0x03, 0xbc);
++		y = HDMI_COLORSPACE_YUV422;
++		break;
++	case MEDIA_BUS_FMT_RGB888_1X24:
++	default:
++		adv7511_wr_and_or(sd, 0x15, 0xf0, 0x00);
++		adv7511_wr_and_or(sd, 0x16, 0x03, 0x00);
++		break;
++	}
++	state->fmt_code = format->format.code;
++	state->colorspace = format->format.colorspace;
++	state->ycbcr_enc = format->format.ycbcr_enc;
++	state->quantization = format->format.quantization;
++	state->xfer_func = format->format.xfer_func;
++
++	switch (format->format.colorspace) {
++	case V4L2_COLORSPACE_OPRGB:
++		c = HDMI_COLORIMETRY_EXTENDED;
++		ec = y ? HDMI_EXTENDED_COLORIMETRY_OPYCC_601 :
++			 HDMI_EXTENDED_COLORIMETRY_OPRGB;
++		break;
++	case V4L2_COLORSPACE_SMPTE170M:
++		c = y ? HDMI_COLORIMETRY_ITU_601 : HDMI_COLORIMETRY_NONE;
++		if (y && format->format.ycbcr_enc == V4L2_YCBCR_ENC_XV601) {
++			c = HDMI_COLORIMETRY_EXTENDED;
++			ec = HDMI_EXTENDED_COLORIMETRY_XV_YCC_601;
++		}
++		break;
++	case V4L2_COLORSPACE_REC709:
++		c = y ? HDMI_COLORIMETRY_ITU_709 : HDMI_COLORIMETRY_NONE;
++		if (y && format->format.ycbcr_enc == V4L2_YCBCR_ENC_XV709) {
++			c = HDMI_COLORIMETRY_EXTENDED;
++			ec = HDMI_EXTENDED_COLORIMETRY_XV_YCC_709;
++		}
++		break;
++	case V4L2_COLORSPACE_SRGB:
++		c = y ? HDMI_COLORIMETRY_EXTENDED : HDMI_COLORIMETRY_NONE;
++		ec = y ? HDMI_EXTENDED_COLORIMETRY_S_YCC_601 :
++			 HDMI_EXTENDED_COLORIMETRY_XV_YCC_601;
++		break;
++	case V4L2_COLORSPACE_BT2020:
++		c = HDMI_COLORIMETRY_EXTENDED;
++		if (y && format->format.ycbcr_enc == V4L2_YCBCR_ENC_BT2020_CONST_LUM)
++			ec = 5; /* Not yet available in hdmi.h */
++		else
++			ec = 6; /* Not yet available in hdmi.h */
++		break;
++	default:
++		break;
++	}
++
++	/*
++	 * CEA-861-F says that for RGB formats the YCC range must match the
++	 * RGB range, although sources should ignore the YCC range.
++	 *
++	 * The RGB quantization range shouldn't be non-zero if the EDID doesn't
++	 * have the Q bit set in the Video Capabilities Data Block; however, this
++	 * isn't checked at the moment. The assumption is that the application
++	 * knows the EDID and can detect this.
++	 *
++	 * The same is true for the YCC quantization range: non-standard YCC
++	 * quantization ranges should only be sent if the EDID has the YQ bit
++	 * set in the Video Capabilities Data Block.
++	 */
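++	/*
++	 * Note: the "q ? q - 1 : ..." mapping below relies on the enum
++	 * layout in hdmi.h, where HDMI_QUANTIZATION_RANGE_LIMITED/FULL
++	 * are 1/2 and HDMI_YCC_QUANTIZATION_RANGE_LIMITED/FULL are 0/1,
++	 * so subtracting one converts the RGB range to the YCC range.
++	 */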
++	switch (format->format.quantization) {
++	case V4L2_QUANTIZATION_FULL_RANGE:
++		q = y ? HDMI_QUANTIZATION_RANGE_DEFAULT :
++			HDMI_QUANTIZATION_RANGE_FULL;
++		yq = q ? q - 1 : HDMI_YCC_QUANTIZATION_RANGE_FULL;
++		break;
++	case V4L2_QUANTIZATION_LIM_RANGE:
++		q = y ? HDMI_QUANTIZATION_RANGE_DEFAULT :
++			HDMI_QUANTIZATION_RANGE_LIMITED;
++		yq = q ? q - 1 : HDMI_YCC_QUANTIZATION_RANGE_LIMITED;
++		break;
++	}
++
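++	/*
++	 * Pack the AVI InfoFrame fields into the AVI registers: Y into
++	 * 0x55[6:5], C into 0x56[7:6], EC/Q/ITC into 0x57 and YQ/CN into
++	 * 0x59. The writes to 0x4a before and after appear to control
++	 * when the updated InfoFrame content is latched.
++	 */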
++	adv7511_wr_and_or(sd, 0x4a, 0xbf, 0);
++	adv7511_wr_and_or(sd, 0x55, 0x9f, y << 5);
++	adv7511_wr_and_or(sd, 0x56, 0x3f, c << 6);
++	adv7511_wr_and_or(sd, 0x57, 0x83, (ec << 4) | (q << 2) | (itc << 7));
++	adv7511_wr_and_or(sd, 0x59, 0x0f, (yq << 6) | (cn << 4));
++	adv7511_wr_and_or(sd, 0x4a, 0xff, 1);
++	adv7511_set_rgb_quantization_mode(sd, state->rgb_quantization_range_ctrl);
++
++	return 0;
++}
++
++static const struct v4l2_subdev_pad_ops adv7511_pad_ops = {
++	.get_edid = adv7511_get_edid,
++	.enum_mbus_code = adv7511_enum_mbus_code,
++	.get_fmt = adv7511_get_fmt,
++	.set_fmt = adv7511_set_fmt,
++	.enum_dv_timings = adv7511_enum_dv_timings,
++	.dv_timings_cap = adv7511_dv_timings_cap,
++};
++
++/* --------------------- SUBDEV OPS --------------------------------------- */
++
++static const struct v4l2_subdev_ops adv7511_ops = {
++	.core  = &adv7511_core_ops,
++	.pad  = &adv7511_pad_ops,
++	.video = &adv7511_video_ops,
++	.audio = &adv7511_audio_ops,
++};
++
++/* ----------------------------------------------------------------------- */
++static void adv7511_dbg_dump_edid(int lvl, int debug, struct v4l2_subdev *sd, int segment, u8 *buf)
++{
++	if (debug >= lvl) {
++		int i, j;
++		v4l2_dbg(lvl, debug, sd, "edid segment %d\n", segment);
++		for (i = 0; i < 256; i += 16) {
++			u8 b[128];
++			u8 *bp = b;
++			if (i == 128)
++				v4l2_dbg(lvl, debug, sd, "\n");
++			for (j = i; j < i + 16; j++) {
++				sprintf(bp, "0x%02x, ", buf[j]);
++				bp += 6;
++			}
++			bp[0] = '\0';
++			v4l2_dbg(lvl, debug, sd, "%s\n", b);
++		}
++	}
++}
++
++static void adv7511_notify_no_edid(struct v4l2_subdev *sd)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++	struct adv7511_edid_detect ed;
++
++	/* We failed to read the EDID, so send an event for this. */
++	ed.present = false;
++	ed.segment = adv7511_rd(sd, 0xc4);
++	ed.phys_addr = CEC_PHYS_ADDR_INVALID;
++	cec_s_phys_addr(state->cec_adap, ed.phys_addr, false);
++	v4l2_subdev_notify(sd, ADV7511_EDID_DETECT, (void *)&ed);
++	v4l2_ctrl_s_ctrl(state->have_edid0_ctrl, 0x0);
++}
++
++static void adv7511_edid_handler(struct work_struct *work)
++{
++	struct delayed_work *dwork = to_delayed_work(work);
++	struct adv7511_state *state = container_of(dwork, struct adv7511_state, edid_handler);
++	struct v4l2_subdev *sd = &state->sd;
++
++	v4l2_dbg(1, debug, sd, "%s:\n", __func__);
++
++	if (adv7511_check_edid_status(sd)) {
++		/* Return if we received the EDID. */
++		return;
++	}
++
++	if (adv7511_have_hotplug(sd)) {
++		/* We must retry reading the EDID several times; it is possible
++		 * that initially the EDID couldn't be read due to i2c errors
++		 * (DVI connectors are particularly prone to this problem). */
++		if (state->edid.read_retries) {
++			state->edid.read_retries--;
++			v4l2_dbg(1, debug, sd, "%s: edid read failed\n", __func__);
++			state->have_monitor = false;
++			adv7511_s_power(sd, false);
++			adv7511_s_power(sd, true);
++			queue_delayed_work(state->work_queue, &state->edid_handler, EDID_DELAY);
++			return;
++		}
++	}
++
++	/* We failed to read the EDID, so send an event for this. */
++	adv7511_notify_no_edid(sd);
++	v4l2_dbg(1, debug, sd, "%s: no edid found\n", __func__);
++}
++
++static void adv7511_audio_setup(struct v4l2_subdev *sd)
++{
++	v4l2_dbg(1, debug, sd, "%s\n", __func__);
++
++	adv7511_s_i2s_clock_freq(sd, 48000);
++	adv7511_s_clock_freq(sd, 48000);
++	adv7511_s_routing(sd, 0, 0, 0);
++}
++
++/* Configure hdmi transmitter. */
++static void adv7511_setup(struct v4l2_subdev *sd)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++	v4l2_dbg(1, debug, sd, "%s\n", __func__);
++
++	/* Input format: RGB 4:4:4 */
++	adv7511_wr_and_or(sd, 0x15, 0xf0, 0x0);
++	/* Output format: RGB 4:4:4 */
++	adv7511_wr_and_or(sd, 0x16, 0x7f, 0x0);
++	/* 1st order interpolation 4:2:2 -> 4:4:4 up conversion, Aspect ratio: 16:9 */
++	adv7511_wr_and_or(sd, 0x17, 0xf9, 0x06);
++	/* Disable pixel repetition */
++	adv7511_wr_and_or(sd, 0x3b, 0x9f, 0x0);
++	/* Disable CSC */
++	adv7511_wr_and_or(sd, 0x18, 0x7f, 0x0);
++	/* Output format: RGB 4:4:4, Active Format Information is valid,
++	 * underscanned */
++	adv7511_wr_and_or(sd, 0x55, 0x9c, 0x12);
++	/* AVI Info frame packet enable, Audio Info frame disable */
++	adv7511_wr_and_or(sd, 0x44, 0xe7, 0x10);
++	/* Colorimetry, Active format aspect ratio: same as picture. */
++	adv7511_wr(sd, 0x56, 0xa8);
++	/* No encryption */
++	adv7511_wr_and_or(sd, 0xaf, 0xed, 0x0);
++
++	/* Positive clk edge capture for input video clock */
++	adv7511_wr_and_or(sd, 0xba, 0x1f, 0x60);
++
++	adv7511_audio_setup(sd);
++
++	v4l2_ctrl_handler_setup(&state->hdl);
++}
++
++static void adv7511_notify_monitor_detect(struct v4l2_subdev *sd)
++{
++	struct adv7511_monitor_detect mdt;
++	struct adv7511_state *state = get_adv7511_state(sd);
++
++	mdt.present = state->have_monitor;
++	v4l2_subdev_notify(sd, ADV7511_MONITOR_DETECT, (void *)&mdt);
++}
++
++static void adv7511_check_monitor_present_status(struct v4l2_subdev *sd)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++	/* read hotplug and rx-sense state */
++	u8 status = adv7511_rd(sd, 0x42);
++
++	v4l2_dbg(1, debug, sd, "%s: status: 0x%x%s%s\n",
++			 __func__,
++			 status,
++			 status & MASK_ADV7511_HPD_DETECT ? ", hotplug" : "",
++			 status & MASK_ADV7511_MSEN_DETECT ? ", rx-sense" : "");
++
++	/* update read only ctrls */
++	v4l2_ctrl_s_ctrl(state->hotplug_ctrl, adv7511_have_hotplug(sd) ? 0x1 : 0x0);
++	v4l2_ctrl_s_ctrl(state->rx_sense_ctrl, adv7511_have_rx_sense(sd) ? 0x1 : 0x0);
++
++	if ((status & MASK_ADV7511_HPD_DETECT) && ((status & MASK_ADV7511_MSEN_DETECT) || state->edid.segments)) {
++		v4l2_dbg(1, debug, sd, "%s: hotplug and (rx-sense or edid)\n", __func__);
++		if (!state->have_monitor) {
++			v4l2_dbg(1, debug, sd, "%s: monitor detected\n", __func__);
++			state->have_monitor = true;
++			adv7511_set_isr(sd, true);
++			if (!adv7511_s_power(sd, true)) {
++				v4l2_dbg(1, debug, sd, "%s: monitor detected, powerup failed\n", __func__);
++				return;
++			}
++			adv7511_setup(sd);
++			adv7511_notify_monitor_detect(sd);
++			state->edid.read_retries = EDID_MAX_RETRIES;
++			queue_delayed_work(state->work_queue, &state->edid_handler, EDID_DELAY);
++		}
++	} else if (status & MASK_ADV7511_HPD_DETECT) {
++		v4l2_dbg(1, debug, sd, "%s: hotplug detected\n", __func__);
++		state->edid.read_retries = EDID_MAX_RETRIES;
++		queue_delayed_work(state->work_queue, &state->edid_handler, EDID_DELAY);
++	} else if (!(status & MASK_ADV7511_HPD_DETECT)) {
++		v4l2_dbg(1, debug, sd, "%s: hotplug not detected\n", __func__);
++		if (state->have_monitor) {
++			v4l2_dbg(1, debug, sd, "%s: monitor not detected\n", __func__);
++			state->have_monitor = false;
++			adv7511_notify_monitor_detect(sd);
++		}
++		adv7511_s_power(sd, false);
++		memset(&state->edid, 0, sizeof(struct adv7511_state_edid));
++		adv7511_notify_no_edid(sd);
++	}
++}
++
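++/* An EDID block is valid when its 128 bytes, the last of which is a
++   checksum, sum to zero modulo 256. */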
++static bool edid_block_verify_crc(u8 *edid_block)
++{
++	u8 sum = 0;
++	int i;
++
++	for (i = 0; i < 128; i++)
++		sum += edid_block[i];
++	return sum == 0;
++}
++
++static bool edid_verify_crc(struct v4l2_subdev *sd, u32 segment)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++	u32 blocks = state->edid.blocks;
++	u8 *data = state->edid.data;
++
++	if (!edid_block_verify_crc(&data[segment * 256]))
++		return false;
++	if ((segment + 1) * 2 <= blocks)
++		return edid_block_verify_crc(&data[segment * 256 + 128]);
++	return true;
++}
++
++static bool edid_verify_header(struct v4l2_subdev *sd, u32 segment)
++{
++	static const u8 hdmi_header[] = {
++		0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00
++	};
++	struct adv7511_state *state = get_adv7511_state(sd);
++	u8 *data = state->edid.data;
++
++	if (segment != 0)
++		return true;
++	return !memcmp(data, hdmi_header, sizeof(hdmi_header));
++}
++
++static bool adv7511_check_edid_status(struct v4l2_subdev *sd)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++	u8 edidRdy = adv7511_rd(sd, 0xc5);
++
++	v4l2_dbg(1, debug, sd, "%s: edid ready (retries: %d)\n",
++			 __func__, EDID_MAX_RETRIES - state->edid.read_retries);
++
++	if (state->edid.complete)
++		return true;
++
++	if (edidRdy & MASK_ADV7511_EDID_RDY) {
++		int segment = adv7511_rd(sd, 0xc4);
++		struct adv7511_edid_detect ed;
++
++		if (segment >= EDID_MAX_SEGM) {
++			v4l2_err(sd, "edid segment number too big\n");
++			return false;
++		}
++		v4l2_dbg(1, debug, sd, "%s: got segment %d\n", __func__, segment);
++		adv7511_edid_rd(sd, 256, &state->edid.data[segment * 256]);
++		adv7511_dbg_dump_edid(2, debug, sd, segment, &state->edid.data[segment * 256]);
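++		/*
++		 * Byte 0x7e of the base EDID block holds the number of
++		 * extension blocks, so the total is that value plus one;
++		 * each 256-byte segment carries two 128-byte blocks.
++		 */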
++		if (segment == 0) {
++			state->edid.blocks = state->edid.data[0x7e] + 1;
++			v4l2_dbg(1, debug, sd, "%s: %d blocks in total\n", __func__, state->edid.blocks);
++		}
++		if (!edid_verify_crc(sd, segment) ||
++		    !edid_verify_header(sd, segment)) {
++			/* edid crc error, force reread of edid segment */
++			v4l2_err(sd, "%s: edid crc or header error\n", __func__);
++			state->have_monitor = false;
++			adv7511_s_power(sd, false);
++			adv7511_s_power(sd, true);
++			return false;
++		}
++		/* one more segment read ok */
++		state->edid.segments = segment + 1;
++		v4l2_ctrl_s_ctrl(state->have_edid0_ctrl, 0x1);
++		if (((state->edid.data[0x7e] >> 1) + 1) > state->edid.segments) {
++			/* Request next EDID segment */
++			v4l2_dbg(1, debug, sd, "%s: request segment %d\n", __func__, state->edid.segments);
++			adv7511_wr(sd, 0xc9, 0xf);
++			adv7511_wr(sd, 0xc4, state->edid.segments);
++			state->edid.read_retries = EDID_MAX_RETRIES;
++			queue_delayed_work(state->work_queue, &state->edid_handler, EDID_DELAY);
++			return false;
++		}
++
++		v4l2_dbg(1, debug, sd, "%s: edid complete with %d segment(s)\n", __func__, state->edid.segments);
++		state->edid.complete = true;
++		ed.phys_addr = cec_get_edid_phys_addr(state->edid.data,
++						      state->edid.segments * 256,
++						      NULL);
++		/* Report when we have all segments, but only for segment 0. */
++		ed.present = true;
++		ed.segment = 0;
++		state->edid_detect_counter++;
++		cec_s_phys_addr(state->cec_adap, ed.phys_addr, false);
++		v4l2_subdev_notify(sd, ADV7511_EDID_DETECT, (void *)&ed);
++		return ed.present;
++	}
++
++	return false;
++}
++
++static int adv7511_registered(struct v4l2_subdev *sd)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++	struct i2c_client *client = v4l2_get_subdevdata(sd);
++	int err;
++
++	err = cec_register_adapter(state->cec_adap, &client->dev);
++	if (err)
++		cec_delete_adapter(state->cec_adap);
++	return err;
++}
++
++static void adv7511_unregistered(struct v4l2_subdev *sd)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++
++	cec_unregister_adapter(state->cec_adap);
++}
++
++static const struct v4l2_subdev_internal_ops adv7511_int_ops = {
++	.registered = adv7511_registered,
++	.unregistered = adv7511_unregistered,
++};
++
++/* ----------------------------------------------------------------------- */
++/* Setup ADV7511 */
++static void adv7511_init_setup(struct v4l2_subdev *sd)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++	struct adv7511_state_edid *edid = &state->edid;
++	u32 cec_clk = state->pdata.cec_clk;
++	u8 ratio;
++
++	v4l2_dbg(1, debug, sd, "%s\n", __func__);
++
++	/* clear all interrupts */
++	adv7511_wr(sd, 0x96, 0xff);
++	adv7511_wr(sd, 0x97, 0xff);
++	/*
++	 * Stop HPD from resetting a lot of registers: such a reset can
++	 * leave the chip in a partly uninitialized state, in particular
++	 * after hotplug bounces.
++	 */
++	adv7511_wr_and_or(sd, 0xd6, 0x3f, 0xc0);
++	memset(edid, 0, sizeof(struct adv7511_state_edid));
++	state->have_monitor = false;
++	adv7511_set_isr(sd, false);
++	adv7511_s_stream(sd, false);
++	adv7511_s_audio_stream(sd, false);
++
++	if (state->i2c_cec == NULL)
++		return;
++
++	v4l2_dbg(1, debug, sd, "%s: cec_clk %d\n", __func__, cec_clk);
++
++	/* cec soft reset */
++	adv7511_cec_write(sd, 0x50, 0x01);
++	adv7511_cec_write(sd, 0x50, 0x00);
++
++	/* legacy mode */
++	adv7511_cec_write(sd, 0x4a, 0x00);
++	adv7511_cec_write(sd, 0x4a, 0x07);
++
++	if (cec_clk % 750000 != 0)
++		v4l2_err(sd, "%s: cec_clk %d, not multiple of 750 Khz\n",
++			 __func__, cec_clk);
++
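++	/* The CEC clock divider in 0x4e[7:2] counts in units of 750 kHz;
++	   e.g. a 12 MHz cec_clk gives ratio = 16 - 1 = 15. */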
++	ratio = (cec_clk / 750000) - 1;
++	adv7511_cec_write(sd, 0x4e, ratio << 2);
++}
++
++static int adv7511_probe(struct i2c_client *client, const struct i2c_device_id *id)
++{
++	struct adv7511_state *state;
++	struct adv7511_platform_data *pdata = client->dev.platform_data;
++	struct v4l2_ctrl_handler *hdl;
++	struct v4l2_subdev *sd;
++	u8 chip_id[2];
++	int err = -EIO;
++
++	/* Check if the adapter supports the needed features */
++	if (!i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_BYTE_DATA))
++		return -EIO;
++
++	state = devm_kzalloc(&client->dev, sizeof(struct adv7511_state), GFP_KERNEL);
++	if (!state)
++		return -ENOMEM;
++
++	/* Platform data */
++	if (!pdata) {
++		v4l_err(client, "No platform data!\n");
++		return -ENODEV;
++	}
++	memcpy(&state->pdata, pdata, sizeof(state->pdata));
++	state->fmt_code = MEDIA_BUS_FMT_RGB888_1X24;
++	state->colorspace = V4L2_COLORSPACE_SRGB;
++
++	sd = &state->sd;
++
++	v4l2_dbg(1, debug, sd, "detecting adv7511 client on address 0x%x\n",
++			 client->addr << 1);
++
++	v4l2_i2c_subdev_init(sd, client, &adv7511_ops);
++	sd->internal_ops = &adv7511_int_ops;
++
++	hdl = &state->hdl;
++	v4l2_ctrl_handler_init(hdl, 10);
++	/* add in ascending ID order */
++	state->hdmi_mode_ctrl = v4l2_ctrl_new_std_menu(hdl, &adv7511_ctrl_ops,
++			V4L2_CID_DV_TX_MODE, V4L2_DV_TX_MODE_HDMI,
++			0, V4L2_DV_TX_MODE_DVI_D);
++	state->hotplug_ctrl = v4l2_ctrl_new_std(hdl, NULL,
++			V4L2_CID_DV_TX_HOTPLUG, 0, 1, 0, 0);
++	state->rx_sense_ctrl = v4l2_ctrl_new_std(hdl, NULL,
++			V4L2_CID_DV_TX_RXSENSE, 0, 1, 0, 0);
++	state->have_edid0_ctrl = v4l2_ctrl_new_std(hdl, NULL,
++			V4L2_CID_DV_TX_EDID_PRESENT, 0, 1, 0, 0);
++	state->rgb_quantization_range_ctrl =
++		v4l2_ctrl_new_std_menu(hdl, &adv7511_ctrl_ops,
++			V4L2_CID_DV_TX_RGB_RANGE, V4L2_DV_RGB_RANGE_FULL,
++			0, V4L2_DV_RGB_RANGE_AUTO);
++	state->content_type_ctrl =
++		v4l2_ctrl_new_std_menu(hdl, &adv7511_ctrl_ops,
++			V4L2_CID_DV_TX_IT_CONTENT_TYPE, V4L2_DV_IT_CONTENT_TYPE_NO_ITC,
++			0, V4L2_DV_IT_CONTENT_TYPE_NO_ITC);
++	sd->ctrl_handler = hdl;
++	if (hdl->error) {
++		err = hdl->error;
++		goto err_hdl;
++	}
++	state->pad.flags = MEDIA_PAD_FL_SINK;
++	sd->entity.function = MEDIA_ENT_F_DV_ENCODER;
++	err = media_entity_pads_init(&sd->entity, 1, &state->pad);
++	if (err)
++		goto err_hdl;
++
++	/* EDID, CEC and pktmem i2c addresses, stored in 8-bit (shifted) form */
++	state->i2c_edid_addr = state->pdata.i2c_edid << 1;
++	state->i2c_cec_addr = state->pdata.i2c_cec << 1;
++	state->i2c_pktmem_addr = state->pdata.i2c_pktmem << 1;
++
++	state->chip_revision = adv7511_rd(sd, 0x0);
++	chip_id[0] = adv7511_rd(sd, 0xf5);
++	chip_id[1] = adv7511_rd(sd, 0xf6);
++	if (chip_id[0] != 0x75 || chip_id[1] != 0x11) {
++		v4l2_err(sd, "chip_id != 0x7511, read 0x%02x%02x\n", chip_id[0],
++			 chip_id[1]);
++		err = -EIO;
++		goto err_entity;
++	}
++
++	state->i2c_edid = i2c_new_dummy(client->adapter,
++					state->i2c_edid_addr >> 1);
++	if (state->i2c_edid == NULL) {
++		v4l2_err(sd, "failed to register edid i2c client\n");
++		err = -ENOMEM;
++		goto err_entity;
++	}
++
++	adv7511_wr(sd, 0xe1, state->i2c_cec_addr);
++	if (state->pdata.cec_clk < 3000000 ||
++	    state->pdata.cec_clk > 100000000) {
++		v4l2_err(sd, "%s: cec_clk %u outside range, disabling cec\n",
++				__func__, state->pdata.cec_clk);
++		state->pdata.cec_clk = 0;
++	}
++
++	if (state->pdata.cec_clk) {
++		state->i2c_cec = i2c_new_dummy(client->adapter,
++					       state->i2c_cec_addr >> 1);
++		if (state->i2c_cec == NULL) {
++			v4l2_err(sd, "failed to register cec i2c client\n");
++			err = -ENOMEM;
++			goto err_unreg_edid;
++		}
++		adv7511_wr(sd, 0xe2, 0x00); /* power up cec section */
++	} else {
++		adv7511_wr(sd, 0xe2, 0x01); /* power down cec section */
++	}
++
++	state->i2c_pktmem = i2c_new_dummy(client->adapter, state->i2c_pktmem_addr >> 1);
++	if (state->i2c_pktmem == NULL) {
++		v4l2_err(sd, "failed to register pktmem i2c client\n");
++		err = -ENOMEM;
++		goto err_unreg_cec;
++	}
++
++	state->work_queue = create_singlethread_workqueue(sd->name);
++	if (state->work_queue == NULL) {
++		v4l2_err(sd, "could not create workqueue\n");
++		err = -ENOMEM;
++		goto err_unreg_pktmem;
++	}
++
++	INIT_DELAYED_WORK(&state->edid_handler, adv7511_edid_handler);
++
++	adv7511_init_setup(sd);
++
++#if IS_ENABLED(CONFIG_VIDEO_ADV7511_CEC)
++	state->cec_adap = cec_allocate_adapter(&adv7511_cec_adap_ops,
++		state, dev_name(&client->dev), CEC_CAP_DEFAULTS,
++		ADV7511_MAX_ADDRS);
++	err = PTR_ERR_OR_ZERO(state->cec_adap);
++	if (err) {
++		destroy_workqueue(state->work_queue);
++		goto err_unreg_pktmem;
++	}
++#endif
++
++	adv7511_set_isr(sd, true);
++	adv7511_check_monitor_present_status(sd);
++
++	v4l2_info(sd, "%s found @ 0x%x (%s)\n", client->name,
++			  client->addr << 1, client->adapter->name);
++	return 0;
++
++err_unreg_pktmem:
++	i2c_unregister_device(state->i2c_pktmem);
++err_unreg_cec:
++	if (state->i2c_cec)
++		i2c_unregister_device(state->i2c_cec);
++err_unreg_edid:
++	i2c_unregister_device(state->i2c_edid);
++err_entity:
++	media_entity_cleanup(&sd->entity);
++err_hdl:
++	v4l2_ctrl_handler_free(&state->hdl);
++	return err;
++}
++
++/* ----------------------------------------------------------------------- */
++
++static int adv7511_remove(struct i2c_client *client)
++{
++	struct v4l2_subdev *sd = i2c_get_clientdata(client);
++	struct adv7511_state *state = get_adv7511_state(sd);
++
++	state->chip_revision = -1;
++
++	v4l2_dbg(1, debug, sd, "%s removed @ 0x%x (%s)\n", client->name,
++		 client->addr << 1, client->adapter->name);
++
++	adv7511_set_isr(sd, false);
++	adv7511_init_setup(sd);
++	cancel_delayed_work(&state->edid_handler);
++	i2c_unregister_device(state->i2c_edid);
++	if (state->i2c_cec)
++		i2c_unregister_device(state->i2c_cec);
++	i2c_unregister_device(state->i2c_pktmem);
++	destroy_workqueue(state->work_queue);
++	v4l2_device_unregister_subdev(sd);
++	media_entity_cleanup(&sd->entity);
++	v4l2_ctrl_handler_free(sd->ctrl_handler);
++	return 0;
++}
++
++/* ----------------------------------------------------------------------- */
++
++static const struct i2c_device_id adv7511_id[] = {
++	{ "adv7511", 0 },
++	{ }
++};
++MODULE_DEVICE_TABLE(i2c, adv7511_id);
++
++static struct i2c_driver adv7511_driver = {
++	.driver = {
++		.name = "adv7511",
++	},
++	.probe = adv7511_probe,
++	.remove = adv7511_remove,
++	.id_table = adv7511_id,
++};
++
++module_i2c_driver(adv7511_driver);
+diff --git a/drivers/media/i2c/adv7511.c b/drivers/media/i2c/adv7511.c
+deleted file mode 100644
+index cec5ebb1c9e6..000000000000
+--- a/drivers/media/i2c/adv7511.c
++++ /dev/null
+@@ -1,1992 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-only
+-/*
+- * Analog Devices ADV7511 HDMI Transmitter Device Driver
+- *
+- * Copyright 2013 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+- */
+-
+-
+-#include <linux/kernel.h>
+-#include <linux/module.h>
+-#include <linux/slab.h>
+-#include <linux/i2c.h>
+-#include <linux/delay.h>
+-#include <linux/videodev2.h>
+-#include <linux/gpio.h>
+-#include <linux/workqueue.h>
+-#include <linux/hdmi.h>
+-#include <linux/v4l2-dv-timings.h>
+-#include <media/v4l2-device.h>
+-#include <media/v4l2-common.h>
+-#include <media/v4l2-ctrls.h>
+-#include <media/v4l2-dv-timings.h>
+-#include <media/i2c/adv7511.h>
+-#include <media/cec.h>
+-
+-static int debug;
+-module_param(debug, int, 0644);
+-MODULE_PARM_DESC(debug, "debug level (0-2)");
+-
+-MODULE_DESCRIPTION("Analog Devices ADV7511 HDMI Transmitter Device Driver");
+-MODULE_AUTHOR("Hans Verkuil");
+-MODULE_LICENSE("GPL v2");
+-
+-#define MASK_ADV7511_EDID_RDY_INT   0x04
+-#define MASK_ADV7511_MSEN_INT       0x40
+-#define MASK_ADV7511_HPD_INT        0x80
+-
+-#define MASK_ADV7511_HPD_DETECT     0x40
+-#define MASK_ADV7511_MSEN_DETECT    0x20
+-#define MASK_ADV7511_EDID_RDY       0x10
+-
+-#define EDID_MAX_RETRIES (8)
+-#define EDID_DELAY 250
+-#define EDID_MAX_SEGM 8
+-
+-#define ADV7511_MAX_WIDTH 1920
+-#define ADV7511_MAX_HEIGHT 1200
+-#define ADV7511_MIN_PIXELCLOCK 20000000
+-#define ADV7511_MAX_PIXELCLOCK 225000000
+-
+-#define ADV7511_MAX_ADDRS (3)
+-
+-/*
+-**********************************************************************
+-*
+-*  Arrays with configuration parameters for the ADV7511
+-*
+-**********************************************************************
+-*/
+-
+-struct i2c_reg_value {
+-	unsigned char reg;
+-	unsigned char value;
+-};
+-
+-struct adv7511_state_edid {
+-	/* total number of blocks */
+-	u32 blocks;
+-	/* Number of segments read */
+-	u32 segments;
+-	u8 data[EDID_MAX_SEGM * 256];
+-	/* Number of EDID read retries left */
+-	unsigned read_retries;
+-	bool complete;
+-};
+-
+-struct adv7511_state {
+-	struct adv7511_platform_data pdata;
+-	struct v4l2_subdev sd;
+-	struct media_pad pad;
+-	struct v4l2_ctrl_handler hdl;
+-	int chip_revision;
+-	u8 i2c_edid_addr;
+-	u8 i2c_pktmem_addr;
+-	u8 i2c_cec_addr;
+-
+-	struct i2c_client *i2c_cec;
+-	struct cec_adapter *cec_adap;
+-	u8   cec_addr[ADV7511_MAX_ADDRS];
+-	u8   cec_valid_addrs;
+-	bool cec_enabled_adap;
+-
+-	/* Is the adv7511 powered on? */
+-	bool power_on;
+-	/* Did we receive hotplug and rx-sense signals? */
+-	bool have_monitor;
+-	bool enabled_irq;
+-	/* timings from s_dv_timings */
+-	struct v4l2_dv_timings dv_timings;
+-	u32 fmt_code;
+-	u32 colorspace;
+-	u32 ycbcr_enc;
+-	u32 quantization;
+-	u32 xfer_func;
+-	u32 content_type;
+-	/* controls */
+-	struct v4l2_ctrl *hdmi_mode_ctrl;
+-	struct v4l2_ctrl *hotplug_ctrl;
+-	struct v4l2_ctrl *rx_sense_ctrl;
+-	struct v4l2_ctrl *have_edid0_ctrl;
+-	struct v4l2_ctrl *rgb_quantization_range_ctrl;
+-	struct v4l2_ctrl *content_type_ctrl;
+-	struct i2c_client *i2c_edid;
+-	struct i2c_client *i2c_pktmem;
+-	struct adv7511_state_edid edid;
+-	/* Running counter of the number of detected EDIDs (for debugging) */
+-	unsigned edid_detect_counter;
+-	struct workqueue_struct *work_queue;
+-	struct delayed_work edid_handler; /* work entry */
+-};
+-
+-static void adv7511_check_monitor_present_status(struct v4l2_subdev *sd);
+-static bool adv7511_check_edid_status(struct v4l2_subdev *sd);
+-static void adv7511_setup(struct v4l2_subdev *sd);
+-static int adv7511_s_i2s_clock_freq(struct v4l2_subdev *sd, u32 freq);
+-static int adv7511_s_clock_freq(struct v4l2_subdev *sd, u32 freq);
+-
+-
+-static const struct v4l2_dv_timings_cap adv7511_timings_cap = {
+-	.type = V4L2_DV_BT_656_1120,
+-	/* keep this initialization for compatibility with GCC < 4.4.6 */
+-	.reserved = { 0 },
+-	V4L2_INIT_BT_TIMINGS(640, ADV7511_MAX_WIDTH, 350, ADV7511_MAX_HEIGHT,
+-		ADV7511_MIN_PIXELCLOCK, ADV7511_MAX_PIXELCLOCK,
+-		V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT |
+-			V4L2_DV_BT_STD_GTF | V4L2_DV_BT_STD_CVT,
+-		V4L2_DV_BT_CAP_PROGRESSIVE | V4L2_DV_BT_CAP_REDUCED_BLANKING |
+-			V4L2_DV_BT_CAP_CUSTOM)
+-};
+-
+-static inline struct adv7511_state *get_adv7511_state(struct v4l2_subdev *sd)
+-{
+-	return container_of(sd, struct adv7511_state, sd);
+-}
+-
+-static inline struct v4l2_subdev *to_sd(struct v4l2_ctrl *ctrl)
+-{
+-	return &container_of(ctrl->handler, struct adv7511_state, hdl)->sd;
+-}
+-
+-/* ------------------------ I2C ----------------------------------------------- */
+-
+-static s32 adv_smbus_read_byte_data_check(struct i2c_client *client,
+-					  u8 command, bool check)
+-{
+-	union i2c_smbus_data data;
+-
+-	if (!i2c_smbus_xfer(client->adapter, client->addr, client->flags,
+-			    I2C_SMBUS_READ, command,
+-			    I2C_SMBUS_BYTE_DATA, &data))
+-		return data.byte;
+-	if (check)
+-		v4l_err(client, "error reading %02x, %02x\n",
+-			client->addr, command);
+-	return -1;
+-}
+-
+-static s32 adv_smbus_read_byte_data(struct i2c_client *client, u8 command)
+-{
+-	int i;
+-	for (i = 0; i < 3; i++) {
+-		int ret = adv_smbus_read_byte_data_check(client, command, true);
+-		if (ret >= 0) {
+-			if (i)
+-				v4l_err(client, "read ok after %d retries\n", i);
+-			return ret;
+-		}
+-	}
+-	v4l_err(client, "read failed\n");
+-	return -1;
+-}
+-
+-static int adv7511_rd(struct v4l2_subdev *sd, u8 reg)
+-{
+-	struct i2c_client *client = v4l2_get_subdevdata(sd);
+-
+-	return adv_smbus_read_byte_data(client, reg);
+-}
+-
+-static int adv7511_wr(struct v4l2_subdev *sd, u8 reg, u8 val)
+-{
+-	struct i2c_client *client = v4l2_get_subdevdata(sd);
+-	int ret;
+-	int i;
+-
+-	for (i = 0; i < 3; i++) {
+-		ret = i2c_smbus_write_byte_data(client, reg, val);
+-		if (ret == 0)
+-			return 0;
+-	}
+-	v4l2_err(sd, "%s: i2c write error\n", __func__);
+-	return ret;
+-}
+-
+-/* To set specific bits in the register, a clear-mask is given (to be AND-ed),
+-   and then the value-mask (to be OR-ed). */
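+-/* Example: adv7511_wr_and_or(sd, 0x15, 0xf0, 0x01) keeps the high nibble
+-   of register 0x15 and sets the low nibble to 0x01. */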
+-static inline void adv7511_wr_and_or(struct v4l2_subdev *sd, u8 reg, u8 clr_mask, u8 val_mask)
+-{
+-	adv7511_wr(sd, reg, (adv7511_rd(sd, reg) & clr_mask) | val_mask);
+-}
+-
+-static int adv_smbus_read_i2c_block_data(struct i2c_client *client,
+-					 u8 command, unsigned length, u8 *values)
+-{
+-	union i2c_smbus_data data;
+-	int ret;
+-
+-	if (length > I2C_SMBUS_BLOCK_MAX)
+-		length = I2C_SMBUS_BLOCK_MAX;
+-	data.block[0] = length;
+-
+-	ret = i2c_smbus_xfer(client->adapter, client->addr, client->flags,
+-			     I2C_SMBUS_READ, command,
+-			     I2C_SMBUS_I2C_BLOCK_DATA, &data);
+-	memcpy(values, data.block + 1, length);
+-	return ret;
+-}
+-
+-static void adv7511_edid_rd(struct v4l2_subdev *sd, uint16_t len, uint8_t *buf)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-	int i;
+-	int err = 0;
+-
+-	v4l2_dbg(1, debug, sd, "%s:\n", __func__);
+-
+-	for (i = 0; !err && i < len; i += I2C_SMBUS_BLOCK_MAX)
+-		err = adv_smbus_read_i2c_block_data(state->i2c_edid, i,
+-						    I2C_SMBUS_BLOCK_MAX, buf + i);
+-	if (err)
+-		v4l2_err(sd, "%s: i2c read error\n", __func__);
+-}
+-
+-static inline int adv7511_cec_read(struct v4l2_subdev *sd, u8 reg)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-
+-	return i2c_smbus_read_byte_data(state->i2c_cec, reg);
+-}
+-
+-static int adv7511_cec_write(struct v4l2_subdev *sd, u8 reg, u8 val)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-	int ret;
+-	int i;
+-
+-	for (i = 0; i < 3; i++) {
+-		ret = i2c_smbus_write_byte_data(state->i2c_cec, reg, val);
+-		if (ret == 0)
+-			return 0;
+-	}
+-	v4l2_err(sd, "%s: I2C Write Problem\n", __func__);
+-	return ret;
+-}
+-
+-static inline int adv7511_cec_write_and_or(struct v4l2_subdev *sd, u8 reg, u8 mask,
+-				   u8 val)
+-{
+-	return adv7511_cec_write(sd, reg, (adv7511_cec_read(sd, reg) & mask) | val);
+-}
+-
+-static int adv7511_pktmem_rd(struct v4l2_subdev *sd, u8 reg)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-
+-	return adv_smbus_read_byte_data(state->i2c_pktmem, reg);
+-}
+-
+-static int adv7511_pktmem_wr(struct v4l2_subdev *sd, u8 reg, u8 val)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-	int ret;
+-	int i;
+-
+-	for (i = 0; i < 3; i++) {
+-		ret = i2c_smbus_write_byte_data(state->i2c_pktmem, reg, val);
+-		if (ret == 0)
+-			return 0;
+-	}
+-	v4l2_err(sd, "%s: i2c write error\n", __func__);
+-	return ret;
+-}
+-
+-/* To set specific bits in the register, a clear-mask is given (to be AND-ed),
+-   and then the value-mask (to be OR-ed). */
+-static inline void adv7511_pktmem_wr_and_or(struct v4l2_subdev *sd, u8 reg, u8 clr_mask, u8 val_mask)
+-{
+-	adv7511_pktmem_wr(sd, reg, (adv7511_pktmem_rd(sd, reg) & clr_mask) | val_mask);
+-}
+-
+-static inline bool adv7511_have_hotplug(struct v4l2_subdev *sd)
+-{
+-	return adv7511_rd(sd, 0x42) & MASK_ADV7511_HPD_DETECT;
+-}
+-
+-static inline bool adv7511_have_rx_sense(struct v4l2_subdev *sd)
+-{
+-	return adv7511_rd(sd, 0x42) & MASK_ADV7511_MSEN_DETECT;
+-}
+-
+-static void adv7511_csc_conversion_mode(struct v4l2_subdev *sd, u8 mode)
+-{
+-	adv7511_wr_and_or(sd, 0x18, 0x9f, (mode & 0x3)<<5);
+-}
+-
+-static void adv7511_csc_coeff(struct v4l2_subdev *sd,
+-			      u16 A1, u16 A2, u16 A3, u16 A4,
+-			      u16 B1, u16 B2, u16 B3, u16 B4,
+-			      u16 C1, u16 C2, u16 C3, u16 C4)
+-{
+-	/* A */
+-	adv7511_wr_and_or(sd, 0x18, 0xe0, A1>>8);
+-	adv7511_wr(sd, 0x19, A1);
+-	adv7511_wr_and_or(sd, 0x1A, 0xe0, A2>>8);
+-	adv7511_wr(sd, 0x1B, A2);
+-	adv7511_wr_and_or(sd, 0x1c, 0xe0, A3>>8);
+-	adv7511_wr(sd, 0x1d, A3);
+-	adv7511_wr_and_or(sd, 0x1e, 0xe0, A4>>8);
+-	adv7511_wr(sd, 0x1f, A4);
+-
+-	/* B */
+-	adv7511_wr_and_or(sd, 0x20, 0xe0, B1>>8);
+-	adv7511_wr(sd, 0x21, B1);
+-	adv7511_wr_and_or(sd, 0x22, 0xe0, B2>>8);
+-	adv7511_wr(sd, 0x23, B2);
+-	adv7511_wr_and_or(sd, 0x24, 0xe0, B3>>8);
+-	adv7511_wr(sd, 0x25, B3);
+-	adv7511_wr_and_or(sd, 0x26, 0xe0, B4>>8);
+-	adv7511_wr(sd, 0x27, B4);
+-
+-	/* C */
+-	adv7511_wr_and_or(sd, 0x28, 0xe0, C1>>8);
+-	adv7511_wr(sd, 0x29, C1);
+-	adv7511_wr_and_or(sd, 0x2A, 0xe0, C2>>8);
+-	adv7511_wr(sd, 0x2B, C2);
+-	adv7511_wr_and_or(sd, 0x2C, 0xe0, C3>>8);
+-	adv7511_wr(sd, 0x2D, C3);
+-	adv7511_wr_and_or(sd, 0x2E, 0xe0, C4>>8);
+-	adv7511_wr(sd, 0x2F, C4);
+-}
+-
+-static void adv7511_csc_rgb_full2limit(struct v4l2_subdev *sd, bool enable)
+-{
+-	if (enable) {
+-		u8 csc_mode = 0;
+-		adv7511_csc_conversion_mode(sd, csc_mode);
+-		adv7511_csc_coeff(sd,
+-				  4096-564, 0, 0, 256,
+-				  0, 4096-564, 0, 256,
+-				  0, 0, 4096-564, 256);
+-		/* enable CSC */
+-		adv7511_wr_and_or(sd, 0x18, 0x7f, 0x80);
+-		/* AVI infoframe: Limited range RGB (16-235) */
+-		adv7511_wr_and_or(sd, 0x57, 0xf3, 0x04);
+-	} else {
+-		/* disable CSC */
+-		adv7511_wr_and_or(sd, 0x18, 0x7f, 0x0);
+-		/* AVI infoframe: Full range RGB (0-255) */
+-		adv7511_wr_and_or(sd, 0x57, 0xf3, 0x08);
+-	}
+-}
+-
+-static void adv7511_set_rgb_quantization_mode(struct v4l2_subdev *sd, struct v4l2_ctrl *ctrl)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-
+-	/* Only makes sense for RGB formats */
+-	if (state->fmt_code != MEDIA_BUS_FMT_RGB888_1X24) {
+-		/* so just keep quantization */
+-		adv7511_csc_rgb_full2limit(sd, false);
+-		return;
+-	}
+-
+-	switch (ctrl->val) {
+-	case V4L2_DV_RGB_RANGE_AUTO:
+-		/* automatic */
+-		if (state->dv_timings.bt.flags & V4L2_DV_FL_IS_CE_VIDEO) {
+-			/* CE format, RGB limited range (16-235) */
+-			adv7511_csc_rgb_full2limit(sd, true);
+-		} else {
+-			/* not CE format, RGB full range (0-255) */
+-			adv7511_csc_rgb_full2limit(sd, false);
+-		}
+-		break;
+-	case V4L2_DV_RGB_RANGE_LIMITED:
+-		/* RGB limited range (16-235) */
+-		adv7511_csc_rgb_full2limit(sd, true);
+-		break;
+-	case V4L2_DV_RGB_RANGE_FULL:
+-		/* RGB full range (0-255) */
+-		adv7511_csc_rgb_full2limit(sd, false);
+-		break;
+-	}
+-}
+-
+-/* ------------------------------ CTRL OPS ------------------------------ */
+-
+-static int adv7511_s_ctrl(struct v4l2_ctrl *ctrl)
+-{
+-	struct v4l2_subdev *sd = to_sd(ctrl);
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-
+-	v4l2_dbg(1, debug, sd, "%s: ctrl id: %d, ctrl->val %d\n", __func__, ctrl->id, ctrl->val);
+-
+-	if (state->hdmi_mode_ctrl == ctrl) {
+-		/* Set HDMI or DVI-D */
+-		adv7511_wr_and_or(sd, 0xaf, 0xfd, ctrl->val == V4L2_DV_TX_MODE_HDMI ? 0x02 : 0x00);
+-		return 0;
+-	}
+-	if (state->rgb_quantization_range_ctrl == ctrl) {
+-		adv7511_set_rgb_quantization_mode(sd, ctrl);
+-		return 0;
+-	}
+-	if (state->content_type_ctrl == ctrl) {
+-		u8 itc, cn;
+-
+-		state->content_type = ctrl->val;
+-		itc = state->content_type != V4L2_DV_IT_CONTENT_TYPE_NO_ITC;
+-		cn = itc ? state->content_type : V4L2_DV_IT_CONTENT_TYPE_GRAPHICS;
+-		adv7511_wr_and_or(sd, 0x57, 0x7f, itc << 7);
+-		adv7511_wr_and_or(sd, 0x59, 0xcf, cn << 4);
+-		return 0;
+-	}
+-
+-	return -EINVAL;
+-}
+-
+-static const struct v4l2_ctrl_ops adv7511_ctrl_ops = {
+-	.s_ctrl = adv7511_s_ctrl,
+-};
+-
+-/* ---------------------------- CORE OPS ------------------------------------------- */
+-
+-#ifdef CONFIG_VIDEO_ADV_DEBUG
+-static void adv7511_inv_register(struct v4l2_subdev *sd)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-
+-	v4l2_info(sd, "0x000-0x0ff: Main Map\n");
+-	if (state->i2c_cec)
+-		v4l2_info(sd, "0x100-0x1ff: CEC Map\n");
+-}
+-
+-static int adv7511_g_register(struct v4l2_subdev *sd, struct v4l2_dbg_register *reg)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-
+-	reg->size = 1;
+-	switch (reg->reg >> 8) {
+-	case 0:
+-		reg->val = adv7511_rd(sd, reg->reg & 0xff);
+-		break;
+-	case 1:
+-		if (state->i2c_cec) {
+-			reg->val = adv7511_cec_read(sd, reg->reg & 0xff);
+-			break;
+-		}
+-		/* fall through */
+-	default:
+-		v4l2_info(sd, "Register %03llx not supported\n", reg->reg);
+-		adv7511_inv_register(sd);
+-		break;
+-	}
+-	return 0;
+-}
+-
+-static int adv7511_s_register(struct v4l2_subdev *sd, const struct v4l2_dbg_register *reg)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-
+-	switch (reg->reg >> 8) {
+-	case 0:
+-		adv7511_wr(sd, reg->reg & 0xff, reg->val & 0xff);
+-		break;
+-	case 1:
+-		if (state->i2c_cec) {
+-			adv7511_cec_write(sd, reg->reg & 0xff, reg->val & 0xff);
+-			break;
+-		}
+-		/* fall through */
+-	default:
+-		v4l2_info(sd, "Register %03llx not supported\n", reg->reg);
+-		adv7511_inv_register(sd);
+-		break;
+-	}
+-	return 0;
+-}
+-#endif
+-
+-struct adv7511_cfg_read_infoframe {
+-	const char *desc;
+-	u8 present_reg;
+-	u8 present_mask;
+-	u8 header[3];
+-	u16 payload_addr;
+-};
+-
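+-/* The checksum byte is chosen so that all bytes of the InfoFrame,
+-   including the checksum itself, sum to zero modulo 256. */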
+-static u8 hdmi_infoframe_checksum(u8 *ptr, size_t size)
+-{
+-	u8 csum = 0;
+-	size_t i;
+-
+-	/* compute checksum */
+-	for (i = 0; i < size; i++)
+-		csum += ptr[i];
+-
+-	return 256 - csum;
+-}
+-
+-static void log_infoframe(struct v4l2_subdev *sd, const struct adv7511_cfg_read_infoframe *cri)
+-{
+-	struct i2c_client *client = v4l2_get_subdevdata(sd);
+-	struct device *dev = &client->dev;
+-	union hdmi_infoframe frame;
+-	u8 buffer[32];
+-	u8 len;
+-	int i;
+-
+-	if (!(adv7511_rd(sd, cri->present_reg) & cri->present_mask)) {
+-		v4l2_info(sd, "%s infoframe not transmitted\n", cri->desc);
+-		return;
+-	}
+-
+-	memcpy(buffer, cri->header, sizeof(cri->header));
+-
+-	len = buffer[2];
+-
+-	if (len + 4 > sizeof(buffer)) {
+-		v4l2_err(sd, "%s: invalid %s infoframe length %d\n", __func__, cri->desc, len);
+-		return;
+-	}
+-
+-	if (cri->payload_addr >= 0x100) {
+-		for (i = 0; i < len; i++)
+-			buffer[i + 4] = adv7511_pktmem_rd(sd, cri->payload_addr + i - 0x100);
+-	} else {
+-		for (i = 0; i < len; i++)
+-			buffer[i + 4] = adv7511_rd(sd, cri->payload_addr + i);
+-	}
+-	buffer[3] = 0;
+-	buffer[3] = hdmi_infoframe_checksum(buffer, len + 4);
+-
+-	if (hdmi_infoframe_unpack(&frame, buffer, sizeof(buffer)) < 0) {
+-		v4l2_err(sd, "%s: unpack of %s infoframe failed\n", __func__, cri->desc);
+-		return;
+-	}
+-
+-	hdmi_infoframe_log(KERN_INFO, dev, &frame);
+-}
+-
+-static void adv7511_log_infoframes(struct v4l2_subdev *sd)
+-{
+-	static const struct adv7511_cfg_read_infoframe cri[] = {
+-		{ "AVI", 0x44, 0x10, { 0x82, 2, 13 }, 0x55 },
+-		{ "Audio", 0x44, 0x08, { 0x84, 1, 10 }, 0x73 },
+-		{ "SDP", 0x40, 0x40, { 0x83, 1, 25 }, 0x103 },
+-	};
+-	int i;
+-
+-	for (i = 0; i < ARRAY_SIZE(cri); i++)
+-		log_infoframe(sd, &cri[i]);
+-}
+-
+-static int adv7511_log_status(struct v4l2_subdev *sd)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-	struct adv7511_state_edid *edid = &state->edid;
+-	int i;
+-
+-	static const char * const states[] = {
+-		"in reset",
+-		"reading EDID",
+-		"idle",
+-		"initializing HDCP",
+-		"HDCP enabled",
+-		"initializing HDCP repeater",
+-		"6", "7", "8", "9", "A", "B", "C", "D", "E", "F"
+-	};
+-	static const char * const errors[] = {
+-		"no error",
+-		"bad receiver BKSV",
+-		"Ri mismatch",
+-		"Pj mismatch",
+-		"i2c error",
+-		"timed out",
+-		"max repeater cascade exceeded",
+-		"hash check failed",
+-		"too many devices",
+-		"9", "A", "B", "C", "D", "E", "F"
+-	};
+-
+-	v4l2_info(sd, "power %s\n", state->power_on ? "on" : "off");
+-	v4l2_info(sd, "%s hotplug, %s Rx Sense, %s EDID (%d block(s))\n",
+-		  (adv7511_rd(sd, 0x42) & MASK_ADV7511_HPD_DETECT) ? "detected" : "no",
+-		  (adv7511_rd(sd, 0x42) & MASK_ADV7511_MSEN_DETECT) ? "detected" : "no",
+-		  edid->segments ? "found" : "no",
+-		  edid->blocks);
+-	v4l2_info(sd, "%s output %s\n",
+-		  (adv7511_rd(sd, 0xaf) & 0x02) ?
+-		  "HDMI" : "DVI-D",
+-		  (adv7511_rd(sd, 0xa1) & 0x3c) ?
+-		  "disabled" : "enabled");
+-	v4l2_info(sd, "state: %s, error: %s, detect count: %u, msk/irq: %02x/%02x\n",
+-			  states[adv7511_rd(sd, 0xc8) & 0xf],
+-			  errors[adv7511_rd(sd, 0xc8) >> 4], state->edid_detect_counter,
+-			  adv7511_rd(sd, 0x94), adv7511_rd(sd, 0x96));
+-	v4l2_info(sd, "RGB quantization: %s range\n", adv7511_rd(sd, 0x18) & 0x80 ? "limited" : "full");
+-	if (adv7511_rd(sd, 0xaf) & 0x02) {
+-		/* HDMI only */
+-		u8 manual_cts = adv7511_rd(sd, 0x0a) & 0x80;
+-		u32 N = (adv7511_rd(sd, 0x01) & 0xf) << 16 |
+-			adv7511_rd(sd, 0x02) << 8 |
+-			adv7511_rd(sd, 0x03);
+-		u8 vic_detect = adv7511_rd(sd, 0x3e) >> 2;
+-		u8 vic_sent = adv7511_rd(sd, 0x3d) & 0x3f;
+-		u32 CTS;
+-
+-		if (manual_cts)
+-			CTS = (adv7511_rd(sd, 0x07) & 0xf) << 16 |
+-			      adv7511_rd(sd, 0x08) << 8 |
+-			      adv7511_rd(sd, 0x09);
+-		else
+-			CTS = (adv7511_rd(sd, 0x04) & 0xf) << 16 |
+-			      adv7511_rd(sd, 0x05) << 8 |
+-			      adv7511_rd(sd, 0x06);
+-		v4l2_info(sd, "CTS %s mode: N %d, CTS %d\n",
+-			  manual_cts ? "manual" : "automatic", N, CTS);
+-		v4l2_info(sd, "VIC: detected %d, sent %d\n",
+-			  vic_detect, vic_sent);
+-		adv7511_log_infoframes(sd);
+-	}
+-	if (state->dv_timings.type == V4L2_DV_BT_656_1120)
+-		v4l2_print_dv_timings(sd->name, "timings: ",
+-				&state->dv_timings, false);
+-	else
+-		v4l2_info(sd, "no timings set\n");
+-	v4l2_info(sd, "i2c edid addr: 0x%x\n", state->i2c_edid_addr);
+-
+-	if (state->i2c_cec == NULL)
+-		return 0;
+-
+-	v4l2_info(sd, "i2c cec addr: 0x%x\n", state->i2c_cec_addr);
+-
+-	v4l2_info(sd, "CEC: %s\n", state->cec_enabled_adap ?
+-			"enabled" : "disabled");
+-	if (state->cec_enabled_adap) {
+-		for (i = 0; i < ADV7511_MAX_ADDRS; i++) {
+-			bool is_valid = state->cec_valid_addrs & (1 << i);
+-
+-			if (is_valid)
+-				v4l2_info(sd, "CEC Logical Address: 0x%x\n",
+-					  state->cec_addr[i]);
+-		}
+-	}
+-	v4l2_info(sd, "i2c pktmem addr: 0x%x\n", state->i2c_pktmem_addr);
+-	return 0;
+-}
+-
+-/* Power up/down adv7511 */
+-static int adv7511_s_power(struct v4l2_subdev *sd, int on)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-	const int retries = 20;
+-	int i;
+-
+-	v4l2_dbg(1, debug, sd, "%s: power %s\n", __func__, on ? "on" : "off");
+-
+-	state->power_on = on;
+-
+-	if (!on) {
+-		/* Power down */
+-		adv7511_wr_and_or(sd, 0x41, 0xbf, 0x40);
+-		return true;
+-	}
+-
+-	/* Power up */
+-	/* The adv7511 does not always come up immediately.
+-	   Retry multiple times. */
+-	for (i = 0; i < retries; i++) {
+-		adv7511_wr_and_or(sd, 0x41, 0xbf, 0x0);
+-		if ((adv7511_rd(sd, 0x41) & 0x40) == 0)
+-			break;
+-		adv7511_wr_and_or(sd, 0x41, 0xbf, 0x40);
+-		msleep(10);
+-	}
+-	if (i == retries) {
+-		v4l2_dbg(1, debug, sd, "%s: failed to powerup the adv7511!\n", __func__);
+-		adv7511_s_power(sd, 0);
+-		return false;
+-	}
+-	if (i > 1)
+-		v4l2_dbg(1, debug, sd, "%s: needed %d retries to powerup the adv7511\n", __func__, i);
+-
+-	/* Reserved registers that must be set */
+-	adv7511_wr(sd, 0x98, 0x03);
+-	adv7511_wr_and_or(sd, 0x9a, 0xfe, 0x70);
+-	adv7511_wr(sd, 0x9c, 0x30);
+-	adv7511_wr_and_or(sd, 0x9d, 0xfc, 0x01);
+-	adv7511_wr(sd, 0xa2, 0xa4);
+-	adv7511_wr(sd, 0xa3, 0xa4);
+-	adv7511_wr(sd, 0xe0, 0xd0);
+-	adv7511_wr(sd, 0xf9, 0x00);
+-
+-	adv7511_wr(sd, 0x43, state->i2c_edid_addr);
+-	adv7511_wr(sd, 0x45, state->i2c_pktmem_addr);
+-
+-	/* Set number of attempts to read the EDID */
+-	adv7511_wr(sd, 0xc9, 0xf);
+-	return true;
+-}
+-
+-#if IS_ENABLED(CONFIG_VIDEO_ADV7511_CEC)
+-static int adv7511_cec_adap_enable(struct cec_adapter *adap, bool enable)
+-{
+-	struct adv7511_state *state = cec_get_drvdata(adap);
+-	struct v4l2_subdev *sd = &state->sd;
+-
+-	if (state->i2c_cec == NULL)
+-		return -EIO;
+-
+-	if (!state->cec_enabled_adap && enable) {
+-		/* power up cec section */
+-		adv7511_cec_write_and_or(sd, 0x4e, 0xfc, 0x01);
+-		/* legacy mode and clear all rx buffers */
+-		adv7511_cec_write(sd, 0x4a, 0x00);
+-		adv7511_cec_write(sd, 0x4a, 0x07);
+-		adv7511_cec_write_and_or(sd, 0x11, 0xfe, 0); /* initially disable tx */
+-		/* enabled irqs: */
+-		/* tx: ready */
+-		/* tx: arbitration lost */
+-		/* tx: retry timeout */
+-		/* rx: ready 1 */
+-		if (state->enabled_irq)
+-			adv7511_wr_and_or(sd, 0x95, 0xc0, 0x39);
+-	} else if (state->cec_enabled_adap && !enable) {
+-		if (state->enabled_irq)
+-			adv7511_wr_and_or(sd, 0x95, 0xc0, 0x00);
+-		/* disable address mask 1-3 */
+-		adv7511_cec_write_and_or(sd, 0x4b, 0x8f, 0x00);
+-		/* power down cec section */
+-		adv7511_cec_write_and_or(sd, 0x4e, 0xfc, 0x00);
+-		state->cec_valid_addrs = 0;
+-	}
+-	state->cec_enabled_adap = enable;
+-	return 0;
+-}
+-
+-static int adv7511_cec_adap_log_addr(struct cec_adapter *adap, u8 addr)
+-{
+-	struct adv7511_state *state = cec_get_drvdata(adap);
+-	struct v4l2_subdev *sd = &state->sd;
+-	unsigned int i, free_idx = ADV7511_MAX_ADDRS;
+-
+-	if (!state->cec_enabled_adap)
+-		return addr == CEC_LOG_ADDR_INVALID ? 0 : -EIO;
+-
+-	if (addr == CEC_LOG_ADDR_INVALID) {
+-		adv7511_cec_write_and_or(sd, 0x4b, 0x8f, 0);
+-		state->cec_valid_addrs = 0;
+-		return 0;
+-	}
+-
+-	for (i = 0; i < ADV7511_MAX_ADDRS; i++) {
+-		bool is_valid = state->cec_valid_addrs & (1 << i);
+-
+-		if (free_idx == ADV7511_MAX_ADDRS && !is_valid)
+-			free_idx = i;
+-		if (is_valid && state->cec_addr[i] == addr)
+-			return 0;
+-	}
+-	if (i == ADV7511_MAX_ADDRS) {
+-		i = free_idx;
+-		if (i == ADV7511_MAX_ADDRS)
+-			return -ENXIO;
+-	}
+-	state->cec_addr[i] = addr;
+-	state->cec_valid_addrs |= 1 << i;
+-
+-	switch (i) {
+-	case 0:
+-		/* enable address mask 0 */
+-		adv7511_cec_write_and_or(sd, 0x4b, 0xef, 0x10);
+-		/* set address for mask 0 */
+-		adv7511_cec_write_and_or(sd, 0x4c, 0xf0, addr);
+-		break;
+-	case 1:
+-		/* enable address mask 1 */
+-		adv7511_cec_write_and_or(sd, 0x4b, 0xdf, 0x20);
+-		/* set address for mask 1 */
+-		adv7511_cec_write_and_or(sd, 0x4c, 0x0f, addr << 4);
+-		break;
+-	case 2:
+-		/* enable address mask 2 */
+-		adv7511_cec_write_and_or(sd, 0x4b, 0xbf, 0x40);
+-		/* set address for mask 2 */
+-		adv7511_cec_write_and_or(sd, 0x4d, 0xf0, addr);
+-		break;
+-	}
+-	return 0;
+-}
+-
+-static int adv7511_cec_adap_transmit(struct cec_adapter *adap, u8 attempts,
+-				     u32 signal_free_time, struct cec_msg *msg)
+-{
+-	struct adv7511_state *state = cec_get_drvdata(adap);
+-	struct v4l2_subdev *sd = &state->sd;
+-	u8 len = msg->len;
+-	unsigned int i;
+-
+-	v4l2_dbg(1, debug, sd, "%s: len %d\n", __func__, len);
+-
+-	if (len > 16) {
+-		v4l2_err(sd, "%s: len exceeded 16 (%d)\n", __func__, len);
+-		return -EINVAL;
+-	}
+-
+-	/*
+-	 * The number of retries is the number of attempts - 1, but retry
+-	 * at least once. It's not clear if a value of 0 is allowed, so
+-	 * let's do at least one retry.
+-	 */
+-	adv7511_cec_write_and_or(sd, 0x12, ~0x70, max(1, attempts - 1) << 4);
+-
+-	/* clear cec tx irq status */
+-	adv7511_wr(sd, 0x97, 0x38);
+-
+-	/* write data */
+-	for (i = 0; i < len; i++)
+-		adv7511_cec_write(sd, i, msg->msg[i]);
+-
+-	/* set length (data + header) */
+-	adv7511_cec_write(sd, 0x10, len);
+-	/* start transmit, enable tx */
+-	adv7511_cec_write(sd, 0x11, 0x01);
+-	return 0;
+-}
+-
+-static void adv_cec_tx_raw_status(struct v4l2_subdev *sd, u8 tx_raw_status)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-
+-	if ((adv7511_cec_read(sd, 0x11) & 0x01) == 0) {
+-		v4l2_dbg(1, debug, sd, "%s: tx raw: tx disabled\n", __func__);
+-		return;
+-	}
+-
+-	if (tx_raw_status & 0x10) {
+-		v4l2_dbg(1, debug, sd,
+-			 "%s: tx raw: arbitration lost\n", __func__);
+-		cec_transmit_done(state->cec_adap, CEC_TX_STATUS_ARB_LOST,
+-				  1, 0, 0, 0);
+-		return;
+-	}
+-	if (tx_raw_status & 0x08) {
+-		u8 status;
+-		u8 nack_cnt;
+-		u8 low_drive_cnt;
+-
+-		v4l2_dbg(1, debug, sd, "%s: tx raw: retry failed\n", __func__);
+-		/*
+-		 * We set this status bit since this hardware performs
+-		 * retransmissions.
+-		 */
+-		status = CEC_TX_STATUS_MAX_RETRIES;
+-		nack_cnt = adv7511_cec_read(sd, 0x14) & 0xf;
+-		if (nack_cnt)
+-			status |= CEC_TX_STATUS_NACK;
+-		low_drive_cnt = adv7511_cec_read(sd, 0x14) >> 4;
+-		if (low_drive_cnt)
+-			status |= CEC_TX_STATUS_LOW_DRIVE;
+-		cec_transmit_done(state->cec_adap, status,
+-				  0, nack_cnt, low_drive_cnt, 0);
+-		return;
+-	}
+-	if (tx_raw_status & 0x20) {
+-		v4l2_dbg(1, debug, sd, "%s: tx raw: ready ok\n", __func__);
+-		cec_transmit_done(state->cec_adap, CEC_TX_STATUS_OK, 0, 0, 0, 0);
+-		return;
+-	}
+-}
+-
+-static const struct cec_adap_ops adv7511_cec_adap_ops = {
+-	.adap_enable = adv7511_cec_adap_enable,
+-	.adap_log_addr = adv7511_cec_adap_log_addr,
+-	.adap_transmit = adv7511_cec_adap_transmit,
+-};
+-#endif
+-
+-/* Enable interrupts */
+-static void adv7511_set_isr(struct v4l2_subdev *sd, bool enable)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-	u8 irqs = MASK_ADV7511_HPD_INT | MASK_ADV7511_MSEN_INT;
+-	u8 irqs_rd;
+-	int retries = 100;
+-
+-	v4l2_dbg(2, debug, sd, "%s: %s\n", __func__, enable ? "enable" : "disable");
+-
+-	if (state->enabled_irq == enable)
+-		return;
+-	state->enabled_irq = enable;
+-
+-	/* The datasheet says that the EDID ready interrupt should be
+-	   disabled if there is no hotplug. */
+-	if (!enable)
+-		irqs = 0;
+-	else if (adv7511_have_hotplug(sd))
+-		irqs |= MASK_ADV7511_EDID_RDY_INT;
+-
+-	/*
+-	 * This i2c write can fail (approx. 1 in 1000 writes). But it
+-	 * is essential that this register is correct, so retry it
+-	 * multiple times.
+-	 *
+-	 * Note that the i2c write does not report an error, but the readback
+-	 * clearly shows the wrong value.
+-	 */
+-	do {
+-		adv7511_wr(sd, 0x94, irqs);
+-		irqs_rd = adv7511_rd(sd, 0x94);
+-	} while (retries-- && irqs_rd != irqs);
+-
+-	if (irqs_rd != irqs)
+-		v4l2_err(sd, "Could not set interrupts: hw failure?\n");
+-
+-	adv7511_wr_and_or(sd, 0x95, 0xc0,
+-			  (state->cec_enabled_adap && enable) ? 0x39 : 0x00);
+-}
+-
+-/* Interrupt handler */
+-static int adv7511_isr(struct v4l2_subdev *sd, u32 status, bool *handled)
+-{
+-	u8 irq_status;
+-	u8 cec_irq;
+-
+-	/* disable interrupts to prevent a race condition */
+-	adv7511_set_isr(sd, false);
+-	irq_status = adv7511_rd(sd, 0x96);
+-	cec_irq = adv7511_rd(sd, 0x97);
+-	/* clear detected interrupts */
+-	adv7511_wr(sd, 0x96, irq_status);
+-	adv7511_wr(sd, 0x97, cec_irq);
+-
+-	v4l2_dbg(1, debug, sd, "%s: irq 0x%x, cec-irq 0x%x\n", __func__,
+-		 irq_status, cec_irq);
+-
+-	if (irq_status & (MASK_ADV7511_HPD_INT | MASK_ADV7511_MSEN_INT))
+-		adv7511_check_monitor_present_status(sd);
+-	if (irq_status & MASK_ADV7511_EDID_RDY_INT)
+-		adv7511_check_edid_status(sd);
+-
+-#if IS_ENABLED(CONFIG_VIDEO_ADV7511_CEC)
+-	if (cec_irq & 0x38)
+-		adv_cec_tx_raw_status(sd, cec_irq);
+-
+-	if (cec_irq & 1) {
+-		struct adv7511_state *state = get_adv7511_state(sd);
+-		struct cec_msg msg;
+-
+-		msg.len = adv7511_cec_read(sd, 0x25) & 0x1f;
+-
+-		v4l2_dbg(1, debug, sd, "%s: cec msg len %d\n", __func__,
+-			 msg.len);
+-
+-		if (msg.len > 16)
+-			msg.len = 16;
+-
+-		if (msg.len) {
+-			u8 i;
+-
+-			for (i = 0; i < msg.len; i++)
+-				msg.msg[i] = adv7511_cec_read(sd, i + 0x15);
+-
+-			adv7511_cec_write(sd, 0x4a, 0); /* toggle to re-enable rx 1 */
+-			adv7511_cec_write(sd, 0x4a, 1);
+-			cec_received_msg(state->cec_adap, &msg);
+-		}
+-	}
+-#endif
+-
+-	/* enable interrupts */
+-	adv7511_set_isr(sd, true);
+-
+-	if (handled)
+-		*handled = true;
+-	return 0;
+-}
+-
+-static const struct v4l2_subdev_core_ops adv7511_core_ops = {
+-	.log_status = adv7511_log_status,
+-#ifdef CONFIG_VIDEO_ADV_DEBUG
+-	.g_register = adv7511_g_register,
+-	.s_register = adv7511_s_register,
+-#endif
+-	.s_power = adv7511_s_power,
+-	.interrupt_service_routine = adv7511_isr,
+-};
+-
+-/* ------------------------------ VIDEO OPS ------------------------------ */
+-
+-/* Enable/disable adv7511 output */
+-static int adv7511_s_stream(struct v4l2_subdev *sd, int enable)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-
+-	v4l2_dbg(1, debug, sd, "%s: %sable\n", __func__, (enable ? "en" : "dis"));
+-	adv7511_wr_and_or(sd, 0xa1, ~0x3c, (enable ? 0 : 0x3c));
+-	if (enable) {
+-		adv7511_check_monitor_present_status(sd);
+-	} else {
+-		adv7511_s_power(sd, 0);
+-		state->have_monitor = false;
+-	}
+-	return 0;
+-}
+-
+-static int adv7511_s_dv_timings(struct v4l2_subdev *sd,
+-			       struct v4l2_dv_timings *timings)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-	struct v4l2_bt_timings *bt = &timings->bt;
+-	u32 fps;
+-
+-	v4l2_dbg(1, debug, sd, "%s:\n", __func__);
+-
+-	/* quick sanity check */
+-	if (!v4l2_valid_dv_timings(timings, &adv7511_timings_cap, NULL, NULL))
+-		return -EINVAL;
+-
+-	/* Fill the optional fields .standards and .flags in struct v4l2_dv_timings
+-	   if the format is one of the CEA or DMT timings. */
+-	v4l2_find_dv_timings_cap(timings, &adv7511_timings_cap, 0, NULL, NULL);
+-
+-	/* save timings */
+-	state->dv_timings = *timings;
+-
+-	/* set h/vsync polarities */
+-	adv7511_wr_and_or(sd, 0x17, 0x9f,
+-		((bt->polarities & V4L2_DV_VSYNC_POS_POL) ? 0 : 0x40) |
+-		((bt->polarities & V4L2_DV_HSYNC_POS_POL) ? 0 : 0x20));
+-
+-	fps = (u32)bt->pixelclock / (V4L2_DV_BT_FRAME_WIDTH(bt) * V4L2_DV_BT_FRAME_HEIGHT(bt));
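+-	/* 0xfb[2:1] appears to be the low refresh rate field, used to flag
+-	   24/25/30 Hz timings */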
+-	switch (fps) {
+-	case 24:
+-		adv7511_wr_and_or(sd, 0xfb, 0xf9, 1 << 1);
+-		break;
+-	case 25:
+-		adv7511_wr_and_or(sd, 0xfb, 0xf9, 2 << 1);
+-		break;
+-	case 30:
+-		adv7511_wr_and_or(sd, 0xfb, 0xf9, 3 << 1);
+-		break;
+-	default:
+-		adv7511_wr_and_or(sd, 0xfb, 0xf9, 0);
+-		break;
+-	}
+-
+-	/* update quantization range based on new dv_timings */
+-	adv7511_set_rgb_quantization_mode(sd, state->rgb_quantization_range_ctrl);
+-
+-	return 0;
+-}
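
The frame-rate detection above is plain integer arithmetic: fps = pixelclock / (total frame width * total frame height), where the totals include blanking. A minimal standalone sketch, using CEA-861 totals quoted here purely for illustration (they are not part of this patch), shows which timings land in the 24/25/30 cases that program the 0xfb picture-rate field:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* CEA-861 total frame sizes (active + blanking), for illustration */
	struct { const char *name; uint64_t clk; uint32_t w, h; } t[] = {
		{ "720p50",  74250000, 1980,  750 },	/* -> 50: default case */
		{ "1080p24", 74250000, 2750, 1125 },	/* -> 24: case 24 */
	};
	unsigned int i;

	for (i = 0; i < 2; i++)
		printf("%s: fps = %u\n", t[i].name,
		       (uint32_t)(t[i].clk / (t[i].w * t[i].h)));
	return 0;
}
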
+-
+-static int adv7511_g_dv_timings(struct v4l2_subdev *sd,
+-				struct v4l2_dv_timings *timings)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-
+-	v4l2_dbg(1, debug, sd, "%s:\n", __func__);
+-
+-	if (!timings)
+-		return -EINVAL;
+-
+-	*timings = state->dv_timings;
+-
+-	return 0;
+-}
+-
+-static int adv7511_enum_dv_timings(struct v4l2_subdev *sd,
+-				   struct v4l2_enum_dv_timings *timings)
+-{
+-	if (timings->pad != 0)
+-		return -EINVAL;
+-
+-	return v4l2_enum_dv_timings_cap(timings, &adv7511_timings_cap, NULL, NULL);
+-}
+-
+-static int adv7511_dv_timings_cap(struct v4l2_subdev *sd,
+-				  struct v4l2_dv_timings_cap *cap)
+-{
+-	if (cap->pad != 0)
+-		return -EINVAL;
+-
+-	*cap = adv7511_timings_cap;
+-	return 0;
+-}
+-
+-static const struct v4l2_subdev_video_ops adv7511_video_ops = {
+-	.s_stream = adv7511_s_stream,
+-	.s_dv_timings = adv7511_s_dv_timings,
+-	.g_dv_timings = adv7511_g_dv_timings,
+-};
+-
+-/* ------------------------------ AUDIO OPS ------------------------------ */
+-static int adv7511_s_audio_stream(struct v4l2_subdev *sd, int enable)
+-{
+-	v4l2_dbg(1, debug, sd, "%s: %sable\n", __func__, (enable ? "en" : "dis"));
+-
+-	if (enable)
+-		adv7511_wr_and_or(sd, 0x4b, 0x3f, 0x80);
+-	else
+-		adv7511_wr_and_or(sd, 0x4b, 0x3f, 0x40);
+-
+-	return 0;
+-}
+-
+-static int adv7511_s_clock_freq(struct v4l2_subdev *sd, u32 freq)
+-{
+-	u32 N;
+-
+-	switch (freq) {
+-	case 32000:  N = 4096;  break;
+-	case 44100:  N = 6272;  break;
+-	case 48000:  N = 6144;  break;
+-	case 88200:  N = 12544; break;
+-	case 96000:  N = 12288; break;
+-	case 176400: N = 25088; break;
+-	case 192000: N = 24576; break;
+-	default:
+-		return -EINVAL;
+-	}
+-
+-	/* Set N (used with CTS to regenerate the audio clock) */
+-	adv7511_wr(sd, 0x01, (N >> 16) & 0xf);
+-	adv7511_wr(sd, 0x02, (N >> 8) & 0xff);
+-	adv7511_wr(sd, 0x03, N & 0xff);
+-
+-	return 0;
+-}
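
The N values in the switch above are the HDMI-recommended audio clock regeneration values: the sink recovers the audio clock as fs = f_TMDS * N / (128 * CTS), and for these rates N reduces to 128*fs/1000 for the 32/48 kHz families and 128*fs/900 for the 44.1 kHz family. A small sketch, assuming those spec formulas, reproduces the table:

#include <stdio.h>
#include <stdint.h>

/* HDMI-recommended N: 128*fs/1000 for the 32/48 kHz families,
 * 128*fs/900 for the 44.1 kHz family; reproduces the switch above.
 */
static uint32_t hdmi_audio_n(uint32_t fs)
{
	if (fs % 44100 == 0)
		return (uint32_t)(128ULL * fs / 900);
	return (uint32_t)(128ULL * fs / 1000);
}

int main(void)
{
	const uint32_t rates[] = { 32000, 44100, 48000, 88200,
				   96000, 176400, 192000 };
	unsigned int i;

	for (i = 0; i < sizeof(rates) / sizeof(rates[0]); i++)
		printf("%6u Hz -> N = %u\n", rates[i], hdmi_audio_n(rates[i]));
	return 0;
}
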
+-
+-static int adv7511_s_i2s_clock_freq(struct v4l2_subdev *sd, u32 freq)
+-{
+-	u32 i2s_sf;
+-
+-	switch (freq) {
+-	case 32000:  i2s_sf = 0x30; break;
+-	case 44100:  i2s_sf = 0x00; break;
+-	case 48000:  i2s_sf = 0x20; break;
+-	case 88200:  i2s_sf = 0x80; break;
+-	case 96000:  i2s_sf = 0xa0; break;
+-	case 176400: i2s_sf = 0xc0; break;
+-	case 192000: i2s_sf = 0xe0; break;
+-	default:
+-		return -EINVAL;
+-	}
+-
+-	/* Set sampling frequency for I2S audio */
+-	adv7511_wr_and_or(sd, 0x15, 0xf, i2s_sf);
+-
+-	return 0;
+-}
+-
+-static int adv7511_s_routing(struct v4l2_subdev *sd, u32 input, u32 output, u32 config)
+-{
+-	/* Only 2 channels in use for the application */
+-	adv7511_wr_and_or(sd, 0x73, 0xf8, 0x1);
+-	/* Speaker mapping */
+-	adv7511_wr(sd, 0x76, 0x00);
+-
+-	/* 16 bit audio word length */
+-	adv7511_wr_and_or(sd, 0x14, 0xf0, 0x02);
+-
+-	return 0;
+-}
+-
+-static const struct v4l2_subdev_audio_ops adv7511_audio_ops = {
+-	.s_stream = adv7511_s_audio_stream,
+-	.s_clock_freq = adv7511_s_clock_freq,
+-	.s_i2s_clock_freq = adv7511_s_i2s_clock_freq,
+-	.s_routing = adv7511_s_routing,
+-};
+-
+-/* ---------------------------- PAD OPS ------------------------------------- */
+-
+-static int adv7511_get_edid(struct v4l2_subdev *sd, struct v4l2_edid *edid)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-
+-	memset(edid->reserved, 0, sizeof(edid->reserved));
+-
+-	if (edid->pad != 0)
+-		return -EINVAL;
+-
+-	if (edid->start_block == 0 && edid->blocks == 0) {
+-		edid->blocks = state->edid.segments * 2;
+-		return 0;
+-	}
+-
+-	if (state->edid.segments == 0)
+-		return -ENODATA;
+-
+-	if (edid->start_block >= state->edid.segments * 2)
+-		return -EINVAL;
+-
+-	if (edid->start_block + edid->blocks > state->edid.segments * 2)
+-		edid->blocks = state->edid.segments * 2 - edid->start_block;
+-
+-	memcpy(edid->edid, &state->edid.data[edid->start_block * 128],
+-			128 * edid->blocks);
+-
+-	return 0;
+-}
+-
+-static int adv7511_enum_mbus_code(struct v4l2_subdev *sd,
+-				  struct v4l2_subdev_pad_config *cfg,
+-				  struct v4l2_subdev_mbus_code_enum *code)
+-{
+-	if (code->pad != 0)
+-		return -EINVAL;
+-
+-	switch (code->index) {
+-	case 0:
+-		code->code = MEDIA_BUS_FMT_RGB888_1X24;
+-		break;
+-	case 1:
+-		code->code = MEDIA_BUS_FMT_YUYV8_1X16;
+-		break;
+-	case 2:
+-		code->code = MEDIA_BUS_FMT_UYVY8_1X16;
+-		break;
+-	default:
+-		return -EINVAL;
+-	}
+-	return 0;
+-}
+-
+-static void adv7511_fill_format(struct adv7511_state *state,
+-				struct v4l2_mbus_framefmt *format)
+-{
+-	format->width = state->dv_timings.bt.width;
+-	format->height = state->dv_timings.bt.height;
+-	format->field = V4L2_FIELD_NONE;
+-}
+-
+-static int adv7511_get_fmt(struct v4l2_subdev *sd,
+-			   struct v4l2_subdev_pad_config *cfg,
+-			   struct v4l2_subdev_format *format)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-
+-	if (format->pad != 0)
+-		return -EINVAL;
+-
+-	memset(&format->format, 0, sizeof(format->format));
+-	adv7511_fill_format(state, &format->format);
+-
+-	if (format->which == V4L2_SUBDEV_FORMAT_TRY) {
+-		struct v4l2_mbus_framefmt *fmt;
+-
+-		fmt = v4l2_subdev_get_try_format(sd, cfg, format->pad);
+-		format->format.code = fmt->code;
+-		format->format.colorspace = fmt->colorspace;
+-		format->format.ycbcr_enc = fmt->ycbcr_enc;
+-		format->format.quantization = fmt->quantization;
+-		format->format.xfer_func = fmt->xfer_func;
+-	} else {
+-		format->format.code = state->fmt_code;
+-		format->format.colorspace = state->colorspace;
+-		format->format.ycbcr_enc = state->ycbcr_enc;
+-		format->format.quantization = state->quantization;
+-		format->format.xfer_func = state->xfer_func;
+-	}
+-
+-	return 0;
+-}
+-
+-static int adv7511_set_fmt(struct v4l2_subdev *sd,
+-			   struct v4l2_subdev_pad_config *cfg,
+-			   struct v4l2_subdev_format *format)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-	/*
+-	 * Bitfield names come from the CEA-861-F standard, table 8 "Auxiliary"
+-	 * Video Information (AVI) InfoFrame Format"
+-	 *
+-	 * c = Colorimetry
+-	 * ec = Extended Colorimetry
+-	 * y = RGB or YCbCr
+-	 * q = RGB Quantization Range
+-	 * yq = YCC Quantization Range
+-	 */
+-	u8 c = HDMI_COLORIMETRY_NONE;
+-	u8 ec = HDMI_EXTENDED_COLORIMETRY_XV_YCC_601;
+-	u8 y = HDMI_COLORSPACE_RGB;
+-	u8 q = HDMI_QUANTIZATION_RANGE_DEFAULT;
+-	u8 yq = HDMI_YCC_QUANTIZATION_RANGE_LIMITED;
+-	u8 itc = state->content_type != V4L2_DV_IT_CONTENT_TYPE_NO_ITC;
+-	u8 cn = itc ? state->content_type : V4L2_DV_IT_CONTENT_TYPE_GRAPHICS;
+-
+-	if (format->pad != 0)
+-		return -EINVAL;
+-	switch (format->format.code) {
+-	case MEDIA_BUS_FMT_UYVY8_1X16:
+-	case MEDIA_BUS_FMT_YUYV8_1X16:
+-	case MEDIA_BUS_FMT_RGB888_1X24:
+-		break;
+-	default:
+-		return -EINVAL;
+-	}
+-
+-	adv7511_fill_format(state, &format->format);
+-	if (format->which == V4L2_SUBDEV_FORMAT_TRY) {
+-		struct v4l2_mbus_framefmt *fmt;
+-
+-		fmt = v4l2_subdev_get_try_format(sd, cfg, format->pad);
+-		fmt->code = format->format.code;
+-		fmt->colorspace = format->format.colorspace;
+-		fmt->ycbcr_enc = format->format.ycbcr_enc;
+-		fmt->quantization = format->format.quantization;
+-		fmt->xfer_func = format->format.xfer_func;
+-		return 0;
+-	}
+-
+-	switch (format->format.code) {
+-	case MEDIA_BUS_FMT_UYVY8_1X16:
+-		adv7511_wr_and_or(sd, 0x15, 0xf0, 0x01);
+-		adv7511_wr_and_or(sd, 0x16, 0x03, 0xb8);
+-		y = HDMI_COLORSPACE_YUV422;
+-		break;
+-	case MEDIA_BUS_FMT_YUYV8_1X16:
+-		adv7511_wr_and_or(sd, 0x15, 0xf0, 0x01);
+-		adv7511_wr_and_or(sd, 0x16, 0x03, 0xbc);
+-		y = HDMI_COLORSPACE_YUV422;
+-		break;
+-	case MEDIA_BUS_FMT_RGB888_1X24:
+-	default:
+-		adv7511_wr_and_or(sd, 0x15, 0xf0, 0x00);
+-		adv7511_wr_and_or(sd, 0x16, 0x03, 0x00);
+-		break;
+-	}
+-	state->fmt_code = format->format.code;
+-	state->colorspace = format->format.colorspace;
+-	state->ycbcr_enc = format->format.ycbcr_enc;
+-	state->quantization = format->format.quantization;
+-	state->xfer_func = format->format.xfer_func;
+-
+-	switch (format->format.colorspace) {
+-	case V4L2_COLORSPACE_OPRGB:
+-		c = HDMI_COLORIMETRY_EXTENDED;
+-		ec = y ? HDMI_EXTENDED_COLORIMETRY_OPYCC_601 :
+-			 HDMI_EXTENDED_COLORIMETRY_OPRGB;
+-		break;
+-	case V4L2_COLORSPACE_SMPTE170M:
+-		c = y ? HDMI_COLORIMETRY_ITU_601 : HDMI_COLORIMETRY_NONE;
+-		if (y && format->format.ycbcr_enc == V4L2_YCBCR_ENC_XV601) {
+-			c = HDMI_COLORIMETRY_EXTENDED;
+-			ec = HDMI_EXTENDED_COLORIMETRY_XV_YCC_601;
+-		}
+-		break;
+-	case V4L2_COLORSPACE_REC709:
+-		c = y ? HDMI_COLORIMETRY_ITU_709 : HDMI_COLORIMETRY_NONE;
+-		if (y && format->format.ycbcr_enc == V4L2_YCBCR_ENC_XV709) {
+-			c = HDMI_COLORIMETRY_EXTENDED;
+-			ec = HDMI_EXTENDED_COLORIMETRY_XV_YCC_709;
+-		}
+-		break;
+-	case V4L2_COLORSPACE_SRGB:
+-		c = y ? HDMI_COLORIMETRY_EXTENDED : HDMI_COLORIMETRY_NONE;
+-		ec = y ? HDMI_EXTENDED_COLORIMETRY_S_YCC_601 :
+-			 HDMI_EXTENDED_COLORIMETRY_XV_YCC_601;
+-		break;
+-	case V4L2_COLORSPACE_BT2020:
+-		c = HDMI_COLORIMETRY_EXTENDED;
+-		if (y && format->format.ycbcr_enc == V4L2_YCBCR_ENC_BT2020_CONST_LUM)
+-			ec = 5; /* Not yet available in hdmi.h */
+-		else
+-			ec = 6; /* Not yet available in hdmi.h */
+-		break;
+-	default:
+-		break;
+-	}
+-
+-	/*
+-	 * CEA-861-F says that for RGB formats the YCC range must match the
+-	 * RGB range, although sources should ignore the YCC range.
+-	 *
+-	 * The RGB quantization range shouldn't be non-zero if the EDID doesn't
+-	 * have the Q bit set in the Video Capabilities Data Block, however this
+-	 * isn't checked at the moment. The assumption is that the application
+-	 * knows the EDID and can detect this.
+-	 *
+-	 * The same is true for the YCC quantization range: non-standard YCC
+-	 * quantization ranges should only be sent if the EDID has the YQ bit
+-	 * set in the Video Capabilities Data Block.
+-	 */
+-	switch (format->format.quantization) {
+-	case V4L2_QUANTIZATION_FULL_RANGE:
+-		q = y ? HDMI_QUANTIZATION_RANGE_DEFAULT :
+-			HDMI_QUANTIZATION_RANGE_FULL;
+-		yq = q ? q - 1 : HDMI_YCC_QUANTIZATION_RANGE_FULL;
+-		break;
+-	case V4L2_QUANTIZATION_LIM_RANGE:
+-		q = y ? HDMI_QUANTIZATION_RANGE_DEFAULT :
+-			HDMI_QUANTIZATION_RANGE_LIMITED;
+-		yq = q ? q - 1 : HDMI_YCC_QUANTIZATION_RANGE_LIMITED;
+-		break;
+-	}
+-
+-	adv7511_wr_and_or(sd, 0x4a, 0xbf, 0);
+-	adv7511_wr_and_or(sd, 0x55, 0x9f, y << 5);
+-	adv7511_wr_and_or(sd, 0x56, 0x3f, c << 6);
+-	adv7511_wr_and_or(sd, 0x57, 0x83, (ec << 4) | (q << 2) | (itc << 7));
+-	adv7511_wr_and_or(sd, 0x59, 0x0f, (yq << 6) | (cn << 4));
+-	adv7511_wr_and_or(sd, 0x4a, 0xff, 1);
+-	adv7511_set_rgb_quantization_mode(sd, state->rgb_quantization_range_ctrl);
+-
+-	return 0;
+-}
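
One non-obvious step above is yq = q ? q - 1 : ...: it relies on the hdmi.h enums placing each YCC quantization range value exactly one below the corresponding RGB range value (LIMITED 1 -> 0, FULL 2 -> 1). A tiny sanity sketch, with the enum values copied from include/linux/hdmi.h (an assumption worth re-checking against the tree):

#include <assert.h>

enum { HDMI_QUANTIZATION_RANGE_DEFAULT, HDMI_QUANTIZATION_RANGE_LIMITED,
       HDMI_QUANTIZATION_RANGE_FULL };
enum { HDMI_YCC_QUANTIZATION_RANGE_LIMITED, HDMI_YCC_QUANTIZATION_RANGE_FULL };

int main(void)
{
	/* q != 0 means an explicit RGB range; the YCC range is one less */
	assert(HDMI_QUANTIZATION_RANGE_LIMITED - 1 ==
	       HDMI_YCC_QUANTIZATION_RANGE_LIMITED);
	assert(HDMI_QUANTIZATION_RANGE_FULL - 1 ==
	       HDMI_YCC_QUANTIZATION_RANGE_FULL);
	return 0;
}
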
+-
+-static const struct v4l2_subdev_pad_ops adv7511_pad_ops = {
+-	.get_edid = adv7511_get_edid,
+-	.enum_mbus_code = adv7511_enum_mbus_code,
+-	.get_fmt = adv7511_get_fmt,
+-	.set_fmt = adv7511_set_fmt,
+-	.enum_dv_timings = adv7511_enum_dv_timings,
+-	.dv_timings_cap = adv7511_dv_timings_cap,
+-};
+-
+-/* --------------------- SUBDEV OPS --------------------------------------- */
+-
+-static const struct v4l2_subdev_ops adv7511_ops = {
+-	.core  = &adv7511_core_ops,
+-	.pad  = &adv7511_pad_ops,
+-	.video = &adv7511_video_ops,
+-	.audio = &adv7511_audio_ops,
+-};
+-
+-/* ----------------------------------------------------------------------- */
+-static void adv7511_dbg_dump_edid(int lvl, int debug, struct v4l2_subdev *sd, int segment, u8 *buf)
+-{
+-	if (debug >= lvl) {
+-		int i, j;
+-		v4l2_dbg(lvl, debug, sd, "edid segment %d\n", segment);
+-		for (i = 0; i < 256; i += 16) {
+-			u8 b[128];
+-			u8 *bp = b;
+-			if (i == 128)
+-				v4l2_dbg(lvl, debug, sd, "\n");
+-			for (j = i; j < i + 16; j++) {
+-				sprintf(bp, "0x%02x, ", buf[j]);
+-				bp += 6;
+-			}
+-			bp[0] = '\0';
+-			v4l2_dbg(lvl, debug, sd, "%s\n", b);
+-		}
+-	}
+-}
+-
+-static void adv7511_notify_no_edid(struct v4l2_subdev *sd)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-	struct adv7511_edid_detect ed;
+-
+-	/* We failed to read the EDID, so send an event for this. */
+-	ed.present = false;
+-	ed.segment = adv7511_rd(sd, 0xc4);
+-	ed.phys_addr = CEC_PHYS_ADDR_INVALID;
+-	cec_s_phys_addr(state->cec_adap, ed.phys_addr, false);
+-	v4l2_subdev_notify(sd, ADV7511_EDID_DETECT, (void *)&ed);
+-	v4l2_ctrl_s_ctrl(state->have_edid0_ctrl, 0x0);
+-}
+-
+-static void adv7511_edid_handler(struct work_struct *work)
+-{
+-	struct delayed_work *dwork = to_delayed_work(work);
+-	struct adv7511_state *state = container_of(dwork, struct adv7511_state, edid_handler);
+-	struct v4l2_subdev *sd = &state->sd;
+-
+-	v4l2_dbg(1, debug, sd, "%s:\n", __func__);
+-
+-	if (adv7511_check_edid_status(sd)) {
+-		/* Return if we received the EDID. */
+-		return;
+-	}
+-
+-	if (adv7511_have_hotplug(sd)) {
+-		/* We must retry reading the EDID several times, as it is
+-		 * possible that the initial reads fail due to i2c errors
+-		 * (DVI connectors are particularly prone to this problem). */
+-		if (state->edid.read_retries) {
+-			state->edid.read_retries--;
+-			v4l2_dbg(1, debug, sd, "%s: edid read failed\n", __func__);
+-			state->have_monitor = false;
+-			adv7511_s_power(sd, false);
+-			adv7511_s_power(sd, true);
+-			queue_delayed_work(state->work_queue, &state->edid_handler, EDID_DELAY);
+-			return;
+-		}
+-	}
+-
+-	/* We failed to read the EDID, so send an event for this. */
+-	adv7511_notify_no_edid(sd);
+-	v4l2_dbg(1, debug, sd, "%s: no edid found\n", __func__);
+-}
+-
+-static void adv7511_audio_setup(struct v4l2_subdev *sd)
+-{
+-	v4l2_dbg(1, debug, sd, "%s\n", __func__);
+-
+-	adv7511_s_i2s_clock_freq(sd, 48000);
+-	adv7511_s_clock_freq(sd, 48000);
+-	adv7511_s_routing(sd, 0, 0, 0);
+-}
+-
+-/* Configure hdmi transmitter. */
+-static void adv7511_setup(struct v4l2_subdev *sd)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-	v4l2_dbg(1, debug, sd, "%s\n", __func__);
+-
+-	/* Input format: RGB 4:4:4 */
+-	adv7511_wr_and_or(sd, 0x15, 0xf0, 0x0);
+-	/* Output format: RGB 4:4:4 */
+-	adv7511_wr_and_or(sd, 0x16, 0x7f, 0x0);
+-	/* 1st order interpolation 4:2:2 -> 4:4:4 up conversion, Aspect ratio: 16:9 */
+-	adv7511_wr_and_or(sd, 0x17, 0xf9, 0x06);
+-	/* Disable pixel repetition */
+-	adv7511_wr_and_or(sd, 0x3b, 0x9f, 0x0);
+-	/* Disable CSC */
+-	adv7511_wr_and_or(sd, 0x18, 0x7f, 0x0);
+-	/* Output format: RGB 4:4:4, Active Format Information is valid,
+-	 * underscanned */
+-	adv7511_wr_and_or(sd, 0x55, 0x9c, 0x12);
+-	/* AVI Info frame packet enable, Audio Info frame disable */
+-	adv7511_wr_and_or(sd, 0x44, 0xe7, 0x10);
+-	/* Colorimetry, Active format aspect ratio: same as picture. */
+-	adv7511_wr(sd, 0x56, 0xa8);
+-	/* No encryption */
+-	adv7511_wr_and_or(sd, 0xaf, 0xed, 0x0);
+-
+-	/* Positive clk edge capture for input video clock */
+-	adv7511_wr_and_or(sd, 0xba, 0x1f, 0x60);
+-
+-	adv7511_audio_setup(sd);
+-
+-	v4l2_ctrl_handler_setup(&state->hdl);
+-}
+-
+-static void adv7511_notify_monitor_detect(struct v4l2_subdev *sd)
+-{
+-	struct adv7511_monitor_detect mdt;
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-
+-	mdt.present = state->have_monitor;
+-	v4l2_subdev_notify(sd, ADV7511_MONITOR_DETECT, (void *)&mdt);
+-}
+-
+-static void adv7511_check_monitor_present_status(struct v4l2_subdev *sd)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-	/* read hotplug and rx-sense state */
+-	u8 status = adv7511_rd(sd, 0x42);
+-
+-	v4l2_dbg(1, debug, sd, "%s: status: 0x%x%s%s\n",
+-			 __func__,
+-			 status,
+-			 status & MASK_ADV7511_HPD_DETECT ? ", hotplug" : "",
+-			 status & MASK_ADV7511_MSEN_DETECT ? ", rx-sense" : "");
+-
+-	/* update read only ctrls */
+-	v4l2_ctrl_s_ctrl(state->hotplug_ctrl, adv7511_have_hotplug(sd) ? 0x1 : 0x0);
+-	v4l2_ctrl_s_ctrl(state->rx_sense_ctrl, adv7511_have_rx_sense(sd) ? 0x1 : 0x0);
+-
+-	if ((status & MASK_ADV7511_HPD_DETECT) && ((status & MASK_ADV7511_MSEN_DETECT) || state->edid.segments)) {
+-		v4l2_dbg(1, debug, sd, "%s: hotplug and (rx-sense or edid)\n", __func__);
+-		if (!state->have_monitor) {
+-			v4l2_dbg(1, debug, sd, "%s: monitor detected\n", __func__);
+-			state->have_monitor = true;
+-			adv7511_set_isr(sd, true);
+-			if (!adv7511_s_power(sd, true)) {
+-				v4l2_dbg(1, debug, sd, "%s: monitor detected, powerup failed\n", __func__);
+-				return;
+-			}
+-			adv7511_setup(sd);
+-			adv7511_notify_monitor_detect(sd);
+-			state->edid.read_retries = EDID_MAX_RETRIES;
+-			queue_delayed_work(state->work_queue, &state->edid_handler, EDID_DELAY);
+-		}
+-	} else if (status & MASK_ADV7511_HPD_DETECT) {
+-		v4l2_dbg(1, debug, sd, "%s: hotplug detected\n", __func__);
+-		state->edid.read_retries = EDID_MAX_RETRIES;
+-		queue_delayed_work(state->work_queue, &state->edid_handler, EDID_DELAY);
+-	} else if (!(status & MASK_ADV7511_HPD_DETECT)) {
+-		v4l2_dbg(1, debug, sd, "%s: hotplug not detected\n", __func__);
+-		if (state->have_monitor) {
+-			v4l2_dbg(1, debug, sd, "%s: monitor not detected\n", __func__);
+-			state->have_monitor = false;
+-			adv7511_notify_monitor_detect(sd);
+-		}
+-		adv7511_s_power(sd, false);
+-		memset(&state->edid, 0, sizeof(struct adv7511_state_edid));
+-		adv7511_notify_no_edid(sd);
+-	}
+-}
+-
+-static bool edid_block_verify_crc(u8 *edid_block)
+-{
+-	u8 sum = 0;
+-	int i;
+-
+-	for (i = 0; i < 128; i++)
+-		sum += edid_block[i];
+-	return sum == 0;
+-}
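
edid_block_verify_crc() checks the standard EDID property that every 128-byte block sums to 0 mod 256; the final byte of each block is a checksum chosen to make that true. A minimal sketch of how such a checksum byte is produced (a hypothetical helper, not from the driver):

#include <stdint.h>
#include <stdio.h>

/* Last byte of a 128-byte EDID block: makes the block sum to 0 mod 256 */
static uint8_t edid_checksum_byte(const uint8_t *block /* first 127 bytes */)
{
	uint8_t sum = 0;
	int i;

	for (i = 0; i < 127; i++)
		sum += block[i];
	return (uint8_t)(0x100 - sum);
}

int main(void)
{
	uint8_t block[128] = { 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00 };
	uint8_t sum = 0;
	int i;

	block[127] = edid_checksum_byte(block);
	for (i = 0; i < 128; i++)
		sum += block[i];
	printf("sum mod 256 = %u\n", sum);	/* prints 0 */
	return 0;
}
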
+-
+-static bool edid_verify_crc(struct v4l2_subdev *sd, u32 segment)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-	u32 blocks = state->edid.blocks;
+-	u8 *data = state->edid.data;
+-
+-	if (!edid_block_verify_crc(&data[segment * 256]))
+-		return false;
+-	if ((segment + 1) * 2 <= blocks)
+-		return edid_block_verify_crc(&data[segment * 256 + 128]);
+-	return true;
+-}
+-
+-static bool edid_verify_header(struct v4l2_subdev *sd, u32 segment)
+-{
+-	static const u8 hdmi_header[] = {
+-		0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00
+-	};
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-	u8 *data = state->edid.data;
+-
+-	if (segment != 0)
+-		return true;
+-	return !memcmp(data, hdmi_header, sizeof(hdmi_header));
+-}
+-
+-static bool adv7511_check_edid_status(struct v4l2_subdev *sd)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-	u8 edidRdy = adv7511_rd(sd, 0xc5);
+-
+-	v4l2_dbg(1, debug, sd, "%s: edid ready (retries: %d)\n",
+-			 __func__, EDID_MAX_RETRIES - state->edid.read_retries);
+-
+-	if (state->edid.complete)
+-		return true;
+-
+-	if (edidRdy & MASK_ADV7511_EDID_RDY) {
+-		int segment = adv7511_rd(sd, 0xc4);
+-		struct adv7511_edid_detect ed;
+-
+-		if (segment >= EDID_MAX_SEGM) {
+-			v4l2_err(sd, "edid segment number too big\n");
+-			return false;
+-		}
+-		v4l2_dbg(1, debug, sd, "%s: got segment %d\n", __func__, segment);
+-		adv7511_edid_rd(sd, 256, &state->edid.data[segment * 256]);
+-		adv7511_dbg_dump_edid(2, debug, sd, segment, &state->edid.data[segment * 256]);
+-		if (segment == 0) {
+-			state->edid.blocks = state->edid.data[0x7e] + 1;
+-			v4l2_dbg(1, debug, sd, "%s: %d blocks in total\n", __func__, state->edid.blocks);
+-		}
+-		if (!edid_verify_crc(sd, segment) ||
+-		    !edid_verify_header(sd, segment)) {
+-			/* edid crc error, force reread of edid segment */
+-			v4l2_err(sd, "%s: edid crc or header error\n", __func__);
+-			state->have_monitor = false;
+-			adv7511_s_power(sd, false);
+-			adv7511_s_power(sd, true);
+-			return false;
+-		}
+-		/* one more segment read ok */
+-		state->edid.segments = segment + 1;
+-		v4l2_ctrl_s_ctrl(state->have_edid0_ctrl, 0x1);
+-		if (((state->edid.data[0x7e] >> 1) + 1) > state->edid.segments) {
+-			/* Request next EDID segment */
+-			v4l2_dbg(1, debug, sd, "%s: request segment %d\n", __func__, state->edid.segments);
+-			adv7511_wr(sd, 0xc9, 0xf);
+-			adv7511_wr(sd, 0xc4, state->edid.segments);
+-			state->edid.read_retries = EDID_MAX_RETRIES;
+-			queue_delayed_work(state->work_queue, &state->edid_handler, EDID_DELAY);
+-			return false;
+-		}
+-
+-		v4l2_dbg(1, debug, sd, "%s: edid complete with %d segment(s)\n", __func__, state->edid.segments);
+-		state->edid.complete = true;
+-		ed.phys_addr = cec_get_edid_phys_addr(state->edid.data,
+-						      state->edid.segments * 256,
+-						      NULL);
+-		/* report only when we have all segments,
+-		   and always report them as segment 0
+-		 */
+-		ed.present = true;
+-		ed.segment = 0;
+-		state->edid_detect_counter++;
+-		cec_s_phys_addr(state->cec_adap, ed.phys_addr, false);
+-		v4l2_subdev_notify(sd, ADV7511_EDID_DETECT, (void *)&ed);
+-		return ed.present;
+-	}
+-
+-	return false;
+-}
+-
+-static int adv7511_registered(struct v4l2_subdev *sd)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-	struct i2c_client *client = v4l2_get_subdevdata(sd);
+-	int err;
+-
+-	err = cec_register_adapter(state->cec_adap, &client->dev);
+-	if (err)
+-		cec_delete_adapter(state->cec_adap);
+-	return err;
+-}
+-
+-static void adv7511_unregistered(struct v4l2_subdev *sd)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-
+-	cec_unregister_adapter(state->cec_adap);
+-}
+-
+-static const struct v4l2_subdev_internal_ops adv7511_int_ops = {
+-	.registered = adv7511_registered,
+-	.unregistered = adv7511_unregistered,
+-};
+-
+-/* ----------------------------------------------------------------------- */
+-/* Setup ADV7511 */
+-static void adv7511_init_setup(struct v4l2_subdev *sd)
+-{
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-	struct adv7511_state_edid *edid = &state->edid;
+-	u32 cec_clk = state->pdata.cec_clk;
+-	u8 ratio;
+-
+-	v4l2_dbg(1, debug, sd, "%s\n", __func__);
+-
+-	/* clear all interrupts */
+-	adv7511_wr(sd, 0x96, 0xff);
+-	adv7511_wr(sd, 0x97, 0xff);
+-	/*
+-	 * Stop HPD from resetting a lot of registers.
+-	 * Otherwise hotplug bounces might leave the chip in a
+-	 * partly uninitialized state.
+-	 */
+-	adv7511_wr_and_or(sd, 0xd6, 0x3f, 0xc0);
+-	memset(edid, 0, sizeof(struct adv7511_state_edid));
+-	state->have_monitor = false;
+-	adv7511_set_isr(sd, false);
+-	adv7511_s_stream(sd, false);
+-	adv7511_s_audio_stream(sd, false);
+-
+-	if (state->i2c_cec == NULL)
+-		return;
+-
+-	v4l2_dbg(1, debug, sd, "%s: cec_clk %d\n", __func__, cec_clk);
+-
+-	/* cec soft reset */
+-	adv7511_cec_write(sd, 0x50, 0x01);
+-	adv7511_cec_write(sd, 0x50, 0x00);
+-
+-	/* legacy mode */
+-	adv7511_cec_write(sd, 0x4a, 0x00);
+-	adv7511_cec_write(sd, 0x4a, 0x07);
+-
+-	if (cec_clk % 750000 != 0)
+-		v4l2_err(sd, "%s: cec_clk %d, not a multiple of 750 kHz\n",
+-			 __func__, cec_clk);
+-
+-	ratio = (cec_clk / 750000) - 1;
+-	adv7511_cec_write(sd, 0x4e, ratio << 2);
+-}
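
To make the ratio computation concrete: with a hypothetical 12 MHz CEC clock, 12000000 / 750000 = 16, so ratio = 15 and register 0x4e receives 15 << 2 = 0x3c. A clock that is not a multiple of 750 kHz would truncate in this division, which is why the warning above fires first.
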
+-
+-static int adv7511_probe(struct i2c_client *client, const struct i2c_device_id *id)
+-{
+-	struct adv7511_state *state;
+-	struct adv7511_platform_data *pdata = client->dev.platform_data;
+-	struct v4l2_ctrl_handler *hdl;
+-	struct v4l2_subdev *sd;
+-	u8 chip_id[2];
+-	int err = -EIO;
+-
+-	/* Check if the adapter supports the needed features */
+-	if (!i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_BYTE_DATA))
+-		return -EIO;
+-
+-	state = devm_kzalloc(&client->dev, sizeof(struct adv7511_state), GFP_KERNEL);
+-	if (!state)
+-		return -ENOMEM;
+-
+-	/* Platform data */
+-	if (!pdata) {
+-		v4l_err(client, "No platform data!\n");
+-		return -ENODEV;
+-	}
+-	memcpy(&state->pdata, pdata, sizeof(state->pdata));
+-	state->fmt_code = MEDIA_BUS_FMT_RGB888_1X24;
+-	state->colorspace = V4L2_COLORSPACE_SRGB;
+-
+-	sd = &state->sd;
+-
+-	v4l2_dbg(1, debug, sd, "detecting adv7511 client on address 0x%x\n",
+-			 client->addr << 1);
+-
+-	v4l2_i2c_subdev_init(sd, client, &adv7511_ops);
+-	sd->internal_ops = &adv7511_int_ops;
+-
+-	hdl = &state->hdl;
+-	v4l2_ctrl_handler_init(hdl, 10);
+-	/* add in ascending ID order */
+-	state->hdmi_mode_ctrl = v4l2_ctrl_new_std_menu(hdl, &adv7511_ctrl_ops,
+-			V4L2_CID_DV_TX_MODE, V4L2_DV_TX_MODE_HDMI,
+-			0, V4L2_DV_TX_MODE_DVI_D);
+-	state->hotplug_ctrl = v4l2_ctrl_new_std(hdl, NULL,
+-			V4L2_CID_DV_TX_HOTPLUG, 0, 1, 0, 0);
+-	state->rx_sense_ctrl = v4l2_ctrl_new_std(hdl, NULL,
+-			V4L2_CID_DV_TX_RXSENSE, 0, 1, 0, 0);
+-	state->have_edid0_ctrl = v4l2_ctrl_new_std(hdl, NULL,
+-			V4L2_CID_DV_TX_EDID_PRESENT, 0, 1, 0, 0);
+-	state->rgb_quantization_range_ctrl =
+-		v4l2_ctrl_new_std_menu(hdl, &adv7511_ctrl_ops,
+-			V4L2_CID_DV_TX_RGB_RANGE, V4L2_DV_RGB_RANGE_FULL,
+-			0, V4L2_DV_RGB_RANGE_AUTO);
+-	state->content_type_ctrl =
+-		v4l2_ctrl_new_std_menu(hdl, &adv7511_ctrl_ops,
+-			V4L2_CID_DV_TX_IT_CONTENT_TYPE, V4L2_DV_IT_CONTENT_TYPE_NO_ITC,
+-			0, V4L2_DV_IT_CONTENT_TYPE_NO_ITC);
+-	sd->ctrl_handler = hdl;
+-	if (hdl->error) {
+-		err = hdl->error;
+-		goto err_hdl;
+-	}
+-	state->pad.flags = MEDIA_PAD_FL_SINK;
+-	sd->entity.function = MEDIA_ENT_F_DV_ENCODER;
+-	err = media_entity_pads_init(&sd->entity, 1, &state->pad);
+-	if (err)
+-		goto err_hdl;
+-
+-	/* EDID and CEC i2c addr */
+-	state->i2c_edid_addr = state->pdata.i2c_edid << 1;
+-	state->i2c_cec_addr = state->pdata.i2c_cec << 1;
+-	state->i2c_pktmem_addr = state->pdata.i2c_pktmem << 1;
+-
+-	state->chip_revision = adv7511_rd(sd, 0x0);
+-	chip_id[0] = adv7511_rd(sd, 0xf5);
+-	chip_id[1] = adv7511_rd(sd, 0xf6);
+-	if (chip_id[0] != 0x75 || chip_id[1] != 0x11) {
+-		v4l2_err(sd, "chip_id != 0x7511, read 0x%02x%02x\n", chip_id[0],
+-			 chip_id[1]);
+-		err = -EIO;
+-		goto err_entity;
+-	}
+-
+-	state->i2c_edid = i2c_new_dummy(client->adapter,
+-					state->i2c_edid_addr >> 1);
+-	if (state->i2c_edid == NULL) {
+-		v4l2_err(sd, "failed to register edid i2c client\n");
+-		err = -ENOMEM;
+-		goto err_entity;
+-	}
+-
+-	adv7511_wr(sd, 0xe1, state->i2c_cec_addr);
+-	if (state->pdata.cec_clk < 3000000 ||
+-	    state->pdata.cec_clk > 100000000) {
+-		v4l2_err(sd, "%s: cec_clk %u outside range, disabling cec\n",
+-				__func__, state->pdata.cec_clk);
+-		state->pdata.cec_clk = 0;
+-	}
+-
+-	if (state->pdata.cec_clk) {
+-		state->i2c_cec = i2c_new_dummy(client->adapter,
+-					       state->i2c_cec_addr >> 1);
+-		if (state->i2c_cec == NULL) {
+-			v4l2_err(sd, "failed to register cec i2c client\n");
+-			err = -ENOMEM;
+-			goto err_unreg_edid;
+-		}
+-		adv7511_wr(sd, 0xe2, 0x00); /* power up cec section */
+-	} else {
+-		adv7511_wr(sd, 0xe2, 0x01); /* power down cec section */
+-	}
+-
+-	state->i2c_pktmem = i2c_new_dummy(client->adapter, state->i2c_pktmem_addr >> 1);
+-	if (state->i2c_pktmem == NULL) {
+-		v4l2_err(sd, "failed to register pktmem i2c client\n");
+-		err = -ENOMEM;
+-		goto err_unreg_cec;
+-	}
+-
+-	state->work_queue = create_singlethread_workqueue(sd->name);
+-	if (state->work_queue == NULL) {
+-		v4l2_err(sd, "could not create workqueue\n");
+-		err = -ENOMEM;
+-		goto err_unreg_pktmem;
+-	}
+-
+-	INIT_DELAYED_WORK(&state->edid_handler, adv7511_edid_handler);
+-
+-	adv7511_init_setup(sd);
+-
+-#if IS_ENABLED(CONFIG_VIDEO_ADV7511_CEC)
+-	state->cec_adap = cec_allocate_adapter(&adv7511_cec_adap_ops,
+-		state, dev_name(&client->dev), CEC_CAP_DEFAULTS,
+-		ADV7511_MAX_ADDRS);
+-	err = PTR_ERR_OR_ZERO(state->cec_adap);
+-	if (err) {
+-		destroy_workqueue(state->work_queue);
+-		goto err_unreg_pktmem;
+-	}
+-#endif
+-
+-	adv7511_set_isr(sd, true);
+-	adv7511_check_monitor_present_status(sd);
+-
+-	v4l2_info(sd, "%s found @ 0x%x (%s)\n", client->name,
+-			  client->addr << 1, client->adapter->name);
+-	return 0;
+-
+-err_unreg_pktmem:
+-	i2c_unregister_device(state->i2c_pktmem);
+-err_unreg_cec:
+-	if (state->i2c_cec)
+-		i2c_unregister_device(state->i2c_cec);
+-err_unreg_edid:
+-	i2c_unregister_device(state->i2c_edid);
+-err_entity:
+-	media_entity_cleanup(&sd->entity);
+-err_hdl:
+-	v4l2_ctrl_handler_free(&state->hdl);
+-	return err;
+-}
+-
+-/* ----------------------------------------------------------------------- */
+-
+-static int adv7511_remove(struct i2c_client *client)
+-{
+-	struct v4l2_subdev *sd = i2c_get_clientdata(client);
+-	struct adv7511_state *state = get_adv7511_state(sd);
+-
+-	state->chip_revision = -1;
+-
+-	v4l2_dbg(1, debug, sd, "%s removed @ 0x%x (%s)\n", client->name,
+-		 client->addr << 1, client->adapter->name);
+-
+-	adv7511_set_isr(sd, false);
+-	adv7511_init_setup(sd);
+-	cancel_delayed_work(&state->edid_handler);
+-	i2c_unregister_device(state->i2c_edid);
+-	if (state->i2c_cec)
+-		i2c_unregister_device(state->i2c_cec);
+-	i2c_unregister_device(state->i2c_pktmem);
+-	destroy_workqueue(state->work_queue);
+-	v4l2_device_unregister_subdev(sd);
+-	media_entity_cleanup(&sd->entity);
+-	v4l2_ctrl_handler_free(sd->ctrl_handler);
+-	return 0;
+-}
+-
+-/* ----------------------------------------------------------------------- */
+-
+-static const struct i2c_device_id adv7511_id[] = {
+-	{ "adv7511", 0 },
+-	{ }
+-};
+-MODULE_DEVICE_TABLE(i2c, adv7511_id);
+-
+-static struct i2c_driver adv7511_driver = {
+-	.driver = {
+-		.name = "adv7511",
+-	},
+-	.probe = adv7511_probe,
+-	.remove = adv7511_remove,
+-	.id_table = adv7511_id,
+-};
+-
+-module_i2c_driver(adv7511_driver);
+diff --git a/drivers/media/i2c/mt9m111.c b/drivers/media/i2c/mt9m111.c
+index 5168bb5880c4..3a543e435e61 100644
+--- a/drivers/media/i2c/mt9m111.c
++++ b/drivers/media/i2c/mt9m111.c
+@@ -1248,9 +1248,11 @@ static int mt9m111_probe(struct i2c_client *client,
+ 	if (!mt9m111)
+ 		return -ENOMEM;
+ 
+-	ret = mt9m111_probe_fw(client, mt9m111);
+-	if (ret)
+-		return ret;
++	if (dev_fwnode(&client->dev)) {
++		ret = mt9m111_probe_fw(client, mt9m111);
++		if (ret)
++			return ret;
++	}
+ 
+ 	mt9m111->clk = v4l2_clk_get(&client->dev, "mclk");
+ 	if (IS_ERR(mt9m111->clk))
+diff --git a/drivers/media/i2c/ov7740.c b/drivers/media/i2c/ov7740.c
+index dfece91ce96b..8207e7cf9923 100644
+--- a/drivers/media/i2c/ov7740.c
++++ b/drivers/media/i2c/ov7740.c
+@@ -761,7 +761,11 @@ static int ov7740_try_fmt_internal(struct v4l2_subdev *sd,
+ 
+ 		fsize++;
+ 	}
+-
++	if (i >= ARRAY_SIZE(ov7740_framesizes)) {
++		fsize = &ov7740_framesizes[0];
++		fmt->width = fsize->width;
++		fmt->height = fsize->height;
++	}
+ 	if (ret_frmsize != NULL)
+ 		*ret_frmsize = fsize;
+ 
+diff --git a/drivers/media/media-device.c b/drivers/media/media-device.c
+index b8ec88612df7..8e2a66493e62 100644
+--- a/drivers/media/media-device.c
++++ b/drivers/media/media-device.c
+@@ -502,6 +502,7 @@ static long media_device_enum_links32(struct media_device *mdev,
+ {
+ 	struct media_links_enum links;
+ 	compat_uptr_t pads_ptr, links_ptr;
++	int ret;
+ 
+ 	memset(&links, 0, sizeof(links));
+ 
+@@ -513,7 +514,14 @@ static long media_device_enum_links32(struct media_device *mdev,
+ 	links.pads = compat_ptr(pads_ptr);
+ 	links.links = compat_ptr(links_ptr);
+ 
+-	return media_device_enum_links(mdev, &links);
++	ret = media_device_enum_links(mdev, &links);
++	if (ret)
++		return ret;
++
++	if (copy_to_user(ulinks->reserved, links.reserved,
++			 sizeof(ulinks->reserved)))
++		return -EFAULT;
++	return 0;
+ }
+ 
+ #define MEDIA_IOC_ENUM_LINKS32		_IOWR('|', 0x02, struct media_links_enum32)
+diff --git a/drivers/media/pci/saa7164/saa7164-core.c b/drivers/media/pci/saa7164/saa7164-core.c
+index 05f25c9bb308..f5ad3cf207d3 100644
+--- a/drivers/media/pci/saa7164/saa7164-core.c
++++ b/drivers/media/pci/saa7164/saa7164-core.c
+@@ -1122,16 +1122,25 @@ static int saa7164_proc_show(struct seq_file *m, void *v)
+ 	return 0;
+ }
+ 
++static struct proc_dir_entry *saa7164_pe;
++
+ static int saa7164_proc_create(void)
+ {
+-	struct proc_dir_entry *pe;
+-
+-	pe = proc_create_single("saa7164", S_IRUGO, NULL, saa7164_proc_show);
+-	if (!pe)
++	saa7164_pe = proc_create_single("saa7164", 0444, NULL, saa7164_proc_show);
++	if (!saa7164_pe)
+ 		return -ENOMEM;
+ 
+ 	return 0;
+ }
++
++static void saa7164_proc_destroy(void)
++{
++	if (saa7164_pe)
++		remove_proc_entry("saa7164", NULL);
++}
++#else
++static int saa7164_proc_create(void) { return 0; }
++static void saa7164_proc_destroy(void) {}
+ #endif
+ 
+ static int saa7164_thread_function(void *data)
+@@ -1503,19 +1512,21 @@ static struct pci_driver saa7164_pci_driver = {
+ 
+ static int __init saa7164_init(void)
+ {
+-	printk(KERN_INFO "saa7164 driver loaded\n");
++	int ret = pci_register_driver(&saa7164_pci_driver);
++
++	if (ret)
++		return ret;
+ 
+-#ifdef CONFIG_PROC_FS
+ 	saa7164_proc_create();
+-#endif
+-	return pci_register_driver(&saa7164_pci_driver);
++
++	pr_info("saa7164 driver loaded\n");
++
++	return 0;
+ }
+ 
+ static void __exit saa7164_fini(void)
+ {
+-#ifdef CONFIG_PROC_FS
+-	remove_proc_entry("saa7164", NULL);
+-#endif
++	saa7164_proc_destroy();
+ 	pci_unregister_driver(&saa7164_pci_driver);
+ }
+ 
+diff --git a/drivers/media/platform/aspeed-video.c b/drivers/media/platform/aspeed-video.c
+index 692e08ef38c0..668d8827e281 100644
+--- a/drivers/media/platform/aspeed-video.c
++++ b/drivers/media/platform/aspeed-video.c
+@@ -1600,8 +1600,9 @@ static int aspeed_video_init(struct aspeed_video *video)
+ 		return -ENODEV;
+ 	}
+ 
+-	rc = devm_request_irq(dev, irq, aspeed_video_irq, IRQF_SHARED,
+-			      DEVICE_NAME, video);
++	rc = devm_request_threaded_irq(dev, irq, NULL, aspeed_video_irq,
++				       IRQF_ONESHOT | IRQF_SHARED, DEVICE_NAME,
++				       video);
+ 	if (rc < 0) {
+ 		dev_err(dev, "Unable to request IRQ %d\n", irq);
+ 		return rc;
+diff --git a/drivers/media/platform/coda/coda-bit.c b/drivers/media/platform/coda/coda-bit.c
+index eaa86737fa04..cc1920e68bc1 100644
+--- a/drivers/media/platform/coda/coda-bit.c
++++ b/drivers/media/platform/coda/coda-bit.c
+@@ -1743,6 +1743,7 @@ static int __coda_start_decoding(struct coda_ctx *ctx)
+ 		v4l2_err(&dev->v4l2_dev, "CODA_COMMAND_SEQ_INIT timeout\n");
+ 		return ret;
+ 	}
++	ctx->sequence_offset = ~0U;
+ 	ctx->initialized = 1;
+ 
+ 	/* Update kfifo out pointer from coda bitstream read pointer */
+@@ -2150,12 +2151,17 @@ static void coda_finish_decode(struct coda_ctx *ctx)
+ 		else if (ctx->display_idx < 0)
+ 			ctx->hold = true;
+ 	} else if (decoded_idx == -2) {
++		if (ctx->display_idx >= 0 &&
++		    ctx->display_idx < ctx->num_internal_frames)
++			ctx->sequence_offset++;
+ 		/* no frame was decoded, we still return remaining buffers */
+ 	} else if (decoded_idx < 0 || decoded_idx >= ctx->num_internal_frames) {
+ 		v4l2_err(&dev->v4l2_dev,
+ 			 "decoded frame index out of range: %d\n", decoded_idx);
+ 	} else {
+-		val = coda_read(dev, CODA_RET_DEC_PIC_FRAME_NUM) - 1;
++		val = coda_read(dev, CODA_RET_DEC_PIC_FRAME_NUM);
++		if (ctx->sequence_offset == -1)
++			ctx->sequence_offset = val;
+ 		val -= ctx->sequence_offset;
+ 		spin_lock(&ctx->buffer_meta_lock);
+ 		if (!list_empty(&ctx->buffer_meta_list)) {
+@@ -2308,7 +2314,6 @@ irqreturn_t coda_irq_handler(int irq, void *data)
+ 	if (ctx == NULL) {
+ 		v4l2_err(&dev->v4l2_dev,
+ 			 "Instance released before the end of transaction\n");
+-		mutex_unlock(&dev->coda_mutex);
+ 		return IRQ_HANDLED;
+ 	}
+ 
+diff --git a/drivers/media/platform/coda/coda-common.c b/drivers/media/platform/coda/coda-common.c
+index fa0b22fb7991..9bf2116ffc76 100644
+--- a/drivers/media/platform/coda/coda-common.c
++++ b/drivers/media/platform/coda/coda-common.c
+@@ -1007,6 +1007,8 @@ static int coda_encoder_cmd(struct file *file, void *fh,
+ 	/* Set the stream-end flag on this context */
+ 	ctx->bit_stream_param |= CODA_BIT_STREAM_END_FLAG;
+ 
++	flush_work(&ctx->pic_run_work);
++
+ 	/* If there is no buffer in flight, wake up */
+ 	if (!ctx->streamon_out || ctx->qsequence == ctx->osequence) {
+ 		dst_vq = v4l2_m2m_get_vq(ctx->fh.m2m_ctx,
+diff --git a/drivers/media/platform/davinci/vpif_capture.c b/drivers/media/platform/davinci/vpif_capture.c
+index 6216b7ac6875..a20cb6fff2ec 100644
+--- a/drivers/media/platform/davinci/vpif_capture.c
++++ b/drivers/media/platform/davinci/vpif_capture.c
+@@ -1384,6 +1384,14 @@ vpif_init_free_channel_objects:
+ 	return err;
+ }
+ 
++static inline void free_vpif_objs(void)
++{
++	int i;
++
++	for (i = 0; i < VPIF_CAPTURE_MAX_DEVICES; i++)
++		kfree(vpif_obj.dev[i]);
++}
++
+ static int vpif_async_bound(struct v4l2_async_notifier *notifier,
+ 			    struct v4l2_subdev *subdev,
+ 			    struct v4l2_async_subdev *asd)
+@@ -1653,7 +1661,7 @@ static __init int vpif_probe(struct platform_device *pdev)
+ 	err = v4l2_device_register(vpif_dev, &vpif_obj.v4l2_dev);
+ 	if (err) {
+ 		v4l2_err(vpif_dev->driver, "Error registering v4l2 device\n");
+-		goto cleanup;
++		goto vpif_free;
+ 	}
+ 
+ 	while ((res = platform_get_resource(pdev, IORESOURCE_IRQ, res_idx))) {
+@@ -1700,7 +1708,9 @@ static __init int vpif_probe(struct platform_device *pdev)
+ 				  "registered sub device %s\n",
+ 				   subdevdata->name);
+ 		}
+-		vpif_probe_complete();
++		err = vpif_probe_complete();
++		if (err)
++			goto probe_subdev_out;
+ 	} else {
+ 		vpif_obj.notifier.ops = &vpif_async_ops;
+ 		err = v4l2_async_notifier_register(&vpif_obj.v4l2_dev,
+@@ -1719,6 +1729,8 @@ probe_subdev_out:
+ 	kfree(vpif_obj.sd);
+ vpif_unregister:
+ 	v4l2_device_unregister(&vpif_obj.v4l2_dev);
++vpif_free:
++	free_vpif_objs();
+ cleanup:
+ 	v4l2_async_notifier_cleanup(&vpif_obj.notifier);
+ 
+diff --git a/drivers/media/platform/davinci/vpss.c b/drivers/media/platform/davinci/vpss.c
+index 19cf6853411e..89a86c19579b 100644
+--- a/drivers/media/platform/davinci/vpss.c
++++ b/drivers/media/platform/davinci/vpss.c
+@@ -518,6 +518,11 @@ static int __init vpss_init(void)
+ 		return -EBUSY;
+ 
+ 	oper_cfg.vpss_regs_base2 = ioremap(VPSS_CLK_CTRL, 4);
++	if (unlikely(!oper_cfg.vpss_regs_base2)) {
++		release_mem_region(VPSS_CLK_CTRL, 4);
++		return -ENOMEM;
++	}
++
+ 	writel(VPSS_CLK_CTRL_VENCCLKEN |
+ 		     VPSS_CLK_CTRL_DACCLKEN, oper_cfg.vpss_regs_base2);
+ 
+diff --git a/drivers/media/platform/marvell-ccic/mcam-core.c b/drivers/media/platform/marvell-ccic/mcam-core.c
+index f1b301810260..0a6411b877e9 100644
+--- a/drivers/media/platform/marvell-ccic/mcam-core.c
++++ b/drivers/media/platform/marvell-ccic/mcam-core.c
+@@ -200,7 +200,6 @@ struct mcam_vb_buffer {
+ 	struct list_head queue;
+ 	struct mcam_dma_desc *dma_desc;	/* Descriptor virtual address */
+ 	dma_addr_t dma_desc_pa;		/* Descriptor physical address */
+-	int dma_desc_nent;		/* Number of mapped descriptors */
+ };
+ 
+ static inline struct mcam_vb_buffer *vb_to_mvb(struct vb2_v4l2_buffer *vb)
+@@ -608,9 +607,11 @@ static void mcam_dma_contig_done(struct mcam_camera *cam, int frame)
+ static void mcam_sg_next_buffer(struct mcam_camera *cam)
+ {
+ 	struct mcam_vb_buffer *buf;
++	struct sg_table *sg_table;
+ 
+ 	buf = list_first_entry(&cam->buffers, struct mcam_vb_buffer, queue);
+ 	list_del_init(&buf->queue);
++	sg_table = vb2_dma_sg_plane_desc(&buf->vb_buf.vb2_buf, 0);
+ 	/*
+ 	 * Very Bad Not Good Things happen if you don't clear
+ 	 * C1_DESC_ENA before making any descriptor changes.
+@@ -618,7 +619,7 @@ static void mcam_sg_next_buffer(struct mcam_camera *cam)
+ 	mcam_reg_clear_bit(cam, REG_CTRL1, C1_DESC_ENA);
+ 	mcam_reg_write(cam, REG_DMA_DESC_Y, buf->dma_desc_pa);
+ 	mcam_reg_write(cam, REG_DESC_LEN_Y,
+-			buf->dma_desc_nent*sizeof(struct mcam_dma_desc));
++			sg_table->nents * sizeof(struct mcam_dma_desc));
+ 	mcam_reg_write(cam, REG_DESC_LEN_U, 0);
+ 	mcam_reg_write(cam, REG_DESC_LEN_V, 0);
+ 	mcam_reg_set_bit(cam, REG_CTRL1, C1_DESC_ENA);
+diff --git a/drivers/media/platform/qcom/venus/firmware.c b/drivers/media/platform/qcom/venus/firmware.c
+index 6cfa8021721e..f81449b400c4 100644
+--- a/drivers/media/platform/qcom/venus/firmware.c
++++ b/drivers/media/platform/qcom/venus/firmware.c
+@@ -87,11 +87,11 @@ static int venus_load_fw(struct venus_core *core, const char *fwname,
+ 
+ 	ret = of_address_to_resource(node, 0, &r);
+ 	if (ret)
+-		return ret;
++		goto err_put_node;
+ 
+ 	ret = request_firmware(&mdt, fwname, dev);
+ 	if (ret < 0)
+-		return ret;
++		goto err_put_node;
+ 
+ 	fw_size = qcom_mdt_get_size(mdt);
+ 	if (fw_size < 0) {
+@@ -125,6 +125,8 @@ static int venus_load_fw(struct venus_core *core, const char *fwname,
+ 	memunmap(mem_va);
+ err_release_fw:
+ 	release_firmware(mdt);
++err_put_node:
++	of_node_put(node);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/media/platform/rcar_fdp1.c b/drivers/media/platform/rcar_fdp1.c
+index 6bda1eee9170..4f103be215d3 100644
+--- a/drivers/media/platform/rcar_fdp1.c
++++ b/drivers/media/platform/rcar_fdp1.c
+@@ -257,6 +257,8 @@ MODULE_PARM_DESC(debug, "activate debug info");
+ #define FD1_IP_H3_ES1			0x02010101
+ #define FD1_IP_M3W			0x02010202
+ #define FD1_IP_H3			0x02010203
++#define FD1_IP_M3N			0x02010204
++#define FD1_IP_E3			0x02010205
+ 
+ /* LUTs */
+ #define FD1_LUT_DIF_ADJ			0x1000
+@@ -2365,6 +2367,12 @@ static int fdp1_probe(struct platform_device *pdev)
+ 	case FD1_IP_H3:
+ 		dprintk(fdp1, "FDP1 Version R-Car H3\n");
+ 		break;
++	case FD1_IP_M3N:
++		dprintk(fdp1, "FDP1 Version R-Car M3N\n");
++		break;
++	case FD1_IP_E3:
++		dprintk(fdp1, "FDP1 Version R-Car E3\n");
++		break;
+ 	default:
+ 		dev_err(fdp1->dev, "FDP1 Unidentifiable (0x%08x)\n",
+ 				hw_version);
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc.c b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+index 9a53d3908b52..2504fe9761bf 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc.c
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+@@ -527,7 +527,8 @@ static void s5p_mfc_handle_seq_done(struct s5p_mfc_ctx *ctx,
+ 				dev);
+ 		ctx->mv_count = s5p_mfc_hw_call(dev->mfc_ops, get_mv_count,
+ 				dev);
+-		ctx->scratch_buf_size = s5p_mfc_hw_call(dev->mfc_ops,
++		if (FW_HAS_E_MIN_SCRATCH_BUF(dev))
++			ctx->scratch_buf_size = s5p_mfc_hw_call(dev->mfc_ops,
+ 						get_min_scratch_buf_size, dev);
+ 		if (ctx->img_width == 0 || ctx->img_height == 0)
+ 			ctx->state = MFCINST_ERROR;
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc_pm.c b/drivers/media/platform/s5p-mfc/s5p_mfc_pm.c
+index eb85cedc5ef3..5e080f32b0e8 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc_pm.c
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc_pm.c
+@@ -38,6 +38,11 @@ int s5p_mfc_init_pm(struct s5p_mfc_dev *dev)
+ 	for (i = 0; i < pm->num_clocks; i++) {
+ 		pm->clocks[i] = devm_clk_get(pm->device, pm->clk_names[i]);
+ 		if (IS_ERR(pm->clocks[i])) {
++			/* additional clocks are optional */
++			if (i && PTR_ERR(pm->clocks[i]) == -ENOENT) {
++				pm->clocks[i] = NULL;
++				continue;
++			}
+ 			mfc_err("Failed to get clock: %s\n",
+ 				pm->clk_names[i]);
+ 			return PTR_ERR(pm->clocks[i]);
+diff --git a/drivers/media/platform/vim2m.c b/drivers/media/platform/vim2m.c
+index dd47821fc661..240327d2a3ad 100644
+--- a/drivers/media/platform/vim2m.c
++++ b/drivers/media/platform/vim2m.c
+@@ -1355,7 +1355,7 @@ static int vim2m_probe(struct platform_device *pdev)
+ 						 MEDIA_ENT_F_PROC_VIDEO_SCALER);
+ 	if (ret) {
+ 		v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem media controller\n");
+-		goto error_m2m;
++		goto error_dev;
+ 	}
+ 
+ 	ret = media_device_register(&dev->mdev);
+@@ -1369,11 +1369,11 @@ static int vim2m_probe(struct platform_device *pdev)
+ #ifdef CONFIG_MEDIA_CONTROLLER
+ error_m2m_mc:
+ 	v4l2_m2m_unregister_media_controller(dev->m2m_dev);
+-error_m2m:
+-	v4l2_m2m_release(dev->m2m_dev);
+ #endif
+ error_dev:
+ 	video_unregister_device(&dev->vfd);
++	/* vim2m_device_release, called by video_unregister_device, releases the remaining objects */
++	return ret;
+ error_v4l2:
+ 	v4l2_device_unregister(&dev->v4l2_dev);
+ error_free:
+diff --git a/drivers/media/platform/vimc/vimc-capture.c b/drivers/media/platform/vimc/vimc-capture.c
+index ea869631a3f6..bbc16072ec16 100644
+--- a/drivers/media/platform/vimc/vimc-capture.c
++++ b/drivers/media/platform/vimc/vimc-capture.c
+@@ -130,12 +130,15 @@ static int vimc_cap_s_fmt_vid_cap(struct file *file, void *priv,
+ 				  struct v4l2_format *f)
+ {
+ 	struct vimc_cap_device *vcap = video_drvdata(file);
++	int ret;
+ 
+ 	/* Do not change the format while stream is on */
+ 	if (vb2_is_busy(&vcap->queue))
+ 		return -EBUSY;
+ 
+-	vimc_cap_try_fmt_vid_cap(file, priv, f);
++	ret = vimc_cap_try_fmt_vid_cap(file, priv, f);
++	if (ret)
++		return ret;
+ 
+ 	dev_dbg(vcap->dev, "%s: format update: "
+ 		"old:%dx%d (0x%x, %d, %d, %d, %d) "
+diff --git a/drivers/media/radio/wl128x/fmdrv_v4l2.c b/drivers/media/radio/wl128x/fmdrv_v4l2.c
+index e25fd4d4d280..a1eaea19a81c 100644
+--- a/drivers/media/radio/wl128x/fmdrv_v4l2.c
++++ b/drivers/media/radio/wl128x/fmdrv_v4l2.c
+@@ -550,6 +550,7 @@ int fm_v4l2_init_video_device(struct fmdev *fmdev, int radio_nr)
+ 
+ 	/* Register with V4L2 subsystem as RADIO device */
+ 	if (video_register_device(&gradio_dev, VFL_TYPE_RADIO, radio_nr)) {
++		v4l2_device_unregister(&fmdev->v4l2_dev);
+ 		fmerr("Could not register video device\n");
+ 		return -ENOMEM;
+ 	}
+@@ -563,6 +564,8 @@ int fm_v4l2_init_video_device(struct fmdev *fmdev, int radio_nr)
+ 	if (ret < 0) {
+ 		fmerr("(fmdev): Can't init ctrl handler\n");
+ 		v4l2_ctrl_handler_free(&fmdev->ctrl_handler);
++		video_unregister_device(fmdev->radio_dev);
++		v4l2_device_unregister(&fmdev->v4l2_dev);
+ 		return -EBUSY;
+ 	}
+ 
+diff --git a/drivers/media/rc/ir-spi.c b/drivers/media/rc/ir-spi.c
+index 66334e8d63ba..c58f2d38a458 100644
+--- a/drivers/media/rc/ir-spi.c
++++ b/drivers/media/rc/ir-spi.c
+@@ -161,6 +161,7 @@ static const struct of_device_id ir_spi_of_match[] = {
+ 	{ .compatible = "ir-spi-led" },
+ 	{},
+ };
++MODULE_DEVICE_TABLE(of, ir_spi_of_match);
+ 
+ static struct spi_driver ir_spi_driver = {
+ 	.probe = ir_spi_probe,
+diff --git a/drivers/media/usb/dvb-usb/dvb-usb-init.c b/drivers/media/usb/dvb-usb/dvb-usb-init.c
+index 99951e02a880..dd063a736df5 100644
+--- a/drivers/media/usb/dvb-usb/dvb-usb-init.c
++++ b/drivers/media/usb/dvb-usb/dvb-usb-init.c
+@@ -287,12 +287,15 @@ EXPORT_SYMBOL(dvb_usb_device_init);
+ void dvb_usb_device_exit(struct usb_interface *intf)
+ {
+ 	struct dvb_usb_device *d = usb_get_intfdata(intf);
+-	const char *name = "generic DVB-USB module";
++	const char *default_name = "generic DVB-USB module";
++	char name[40];
+ 
+ 	usb_set_intfdata(intf, NULL);
+ 	if (d != NULL && d->desc != NULL) {
+-		name = d->desc->name;
++		strscpy(name, d->desc->name, sizeof(name));
+ 		dvb_usb_exit(d);
++	} else {
++		strscpy(name, default_name, sizeof(name));
+ 	}
+ 	info("%s successfully deinitialized and disconnected.", name);
+ 
+diff --git a/drivers/media/usb/hdpvr/hdpvr-video.c b/drivers/media/usb/hdpvr/hdpvr-video.c
+index e082086428a4..ae6609716347 100644
+--- a/drivers/media/usb/hdpvr/hdpvr-video.c
++++ b/drivers/media/usb/hdpvr/hdpvr-video.c
+@@ -439,7 +439,7 @@ static ssize_t hdpvr_read(struct file *file, char __user *buffer, size_t count,
+ 	/* wait for the first buffer */
+ 	if (!(file->f_flags & O_NONBLOCK)) {
+ 		if (wait_event_interruptible(dev->wait_data,
+-					     hdpvr_get_next_buffer(dev)))
++					     !list_empty_careful(&dev->rec_buff_list)))
+ 			return -ERESTARTSYS;
+ 	}
+ 
+@@ -465,10 +465,17 @@ static ssize_t hdpvr_read(struct file *file, char __user *buffer, size_t count,
+ 				goto err;
+ 			}
+ 			if (!err) {
+-				v4l2_dbg(MSG_INFO, hdpvr_debug, &dev->v4l2_dev,
+-					"timeout: restart streaming\n");
++				v4l2_info(&dev->v4l2_dev,
++					  "timeout: restart streaming\n");
++				mutex_lock(&dev->io_mutex);
+ 				hdpvr_stop_streaming(dev);
+-				msecs_to_jiffies(4000);
++				mutex_unlock(&dev->io_mutex);
++				/*
++				 * The FW needs about 4 seconds after streaming
++				 * stopped before it is ready to restart
++				 * streaming.
++				 */
++				msleep(4000);
+ 				err = hdpvr_start_streaming(dev);
+ 				if (err) {
+ 					ret = err;
+@@ -1133,9 +1140,7 @@ static void hdpvr_device_release(struct video_device *vdev)
+ 	struct hdpvr_device *dev = video_get_drvdata(vdev);
+ 
+ 	hdpvr_delete(dev);
+-	mutex_lock(&dev->io_mutex);
+ 	flush_work(&dev->worker);
+-	mutex_unlock(&dev->io_mutex);
+ 
+ 	v4l2_device_unregister(&dev->v4l2_dev);
+ 	v4l2_ctrl_handler_free(&dev->hdl);
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
+index 14cff91b7aea..aa021498036a 100644
+--- a/drivers/media/usb/uvc/uvc_ctrl.c
++++ b/drivers/media/usb/uvc/uvc_ctrl.c
+@@ -2350,7 +2350,9 @@ void uvc_ctrl_cleanup_device(struct uvc_device *dev)
+ 	struct uvc_entity *entity;
+ 	unsigned int i;
+ 
+-	cancel_work_sync(&dev->async_ctrl.work);
++	/* Can be uninitialized if we are aborting on probe error. */
++	if (dev->async_ctrl.work.func)
++		cancel_work_sync(&dev->async_ctrl.work);
+ 
+ 	/* Free controls and control mappings for all entities. */
+ 	list_for_each_entry(entity, &dev->entities, list) {
+diff --git a/drivers/media/usb/zr364xx/zr364xx.c b/drivers/media/usb/zr364xx/zr364xx.c
+index 96fee8d5b865..cd2bc9ed0cd9 100644
+--- a/drivers/media/usb/zr364xx/zr364xx.c
++++ b/drivers/media/usb/zr364xx/zr364xx.c
+@@ -703,7 +703,8 @@ static int zr364xx_vidioc_querycap(struct file *file, void *priv,
+ 	struct zr364xx_camera *cam = video_drvdata(file);
+ 
+ 	strscpy(cap->driver, DRIVER_DESC, sizeof(cap->driver));
+-	strscpy(cap->card, cam->udev->product, sizeof(cap->card));
++	if (cam->udev->product)
++		strscpy(cap->card, cam->udev->product, sizeof(cap->card));
+ 	strscpy(cap->bus_info, dev_name(&cam->udev->dev),
+ 		sizeof(cap->bus_info));
+ 	cap->device_caps = V4L2_CAP_VIDEO_CAPTURE |
+diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
+index 54d66dbc2a31..6b1469d11f58 100644
+--- a/drivers/media/v4l2-core/v4l2-ctrls.c
++++ b/drivers/media/v4l2-core/v4l2-ctrls.c
+@@ -2148,15 +2148,6 @@ static int handler_new_ref(struct v4l2_ctrl_handler *hdl,
+ 	if (size_extra_req)
+ 		new_ref->p_req.p = &new_ref[1];
+ 
+-	if (ctrl->handler == hdl) {
+-		/* By default each control starts in a cluster of its own.
+-		   new_ref->ctrl is basically a cluster array with one
+-		   element, so that's perfect to use as the cluster pointer.
+-		   But only do this for the handler that owns the control. */
+-		ctrl->cluster = &new_ref->ctrl;
+-		ctrl->ncontrols = 1;
+-	}
+-
+ 	INIT_LIST_HEAD(&new_ref->node);
+ 
+ 	mutex_lock(hdl->lock);
+@@ -2189,6 +2180,15 @@ insert_in_hash:
+ 	hdl->buckets[bucket] = new_ref;
+ 	if (ctrl_ref)
+ 		*ctrl_ref = new_ref;
++	if (ctrl->handler == hdl) {
++		/* By default each control starts in a cluster of its own.
++		 * new_ref->ctrl is basically a cluster array with one
++		 * element, so that's perfect to use as the cluster pointer.
++		 * But only do this for the handler that owns the control.
++		 */
++		ctrl->cluster = &new_ref->ctrl;
++		ctrl->ncontrols = 1;
++	}
+ 
+ unlock:
+ 	mutex_unlock(hdl->lock);
+@@ -2365,16 +2365,15 @@ struct v4l2_ctrl *v4l2_ctrl_new_custom(struct v4l2_ctrl_handler *hdl,
+ 		v4l2_ctrl_fill(cfg->id, &name, &type, &min, &max, &step,
+ 								&def, &flags);
+ 
+-	is_menu = (cfg->type == V4L2_CTRL_TYPE_MENU ||
+-		   cfg->type == V4L2_CTRL_TYPE_INTEGER_MENU);
++	is_menu = (type == V4L2_CTRL_TYPE_MENU ||
++		   type == V4L2_CTRL_TYPE_INTEGER_MENU);
+ 	if (is_menu)
+ 		WARN_ON(step);
+ 	else
+ 		WARN_ON(cfg->menu_skip_mask);
+-	if (cfg->type == V4L2_CTRL_TYPE_MENU && qmenu == NULL)
++	if (type == V4L2_CTRL_TYPE_MENU && !qmenu) {
+ 		qmenu = v4l2_ctrl_get_menu(cfg->id);
+-	else if (cfg->type == V4L2_CTRL_TYPE_INTEGER_MENU &&
+-		 qmenu_int == NULL) {
++	} else if (type == V4L2_CTRL_TYPE_INTEGER_MENU && !qmenu_int) {
+ 		handler_set_err(hdl, -EINVAL);
+ 		return NULL;
+ 	}
+diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
+index d6c9ebd8d263..278a4b679b8f 100644
+--- a/drivers/mmc/host/sdhci-msm.c
++++ b/drivers/mmc/host/sdhci-msm.c
+@@ -584,11 +584,14 @@ static int msm_init_cm_dll(struct sdhci_host *host)
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host);
+ 	int wait_cnt = 50;
+-	unsigned long flags;
++	unsigned long flags, xo_clk = 0;
+ 	u32 config;
+ 	const struct sdhci_msm_offset *msm_offset =
+ 					msm_host->offset;
+ 
++	if (msm_host->use_14lpp_dll_reset && !IS_ERR_OR_NULL(msm_host->xo_clk))
++		xo_clk = clk_get_rate(msm_host->xo_clk);
++
+ 	spin_lock_irqsave(&host->lock, flags);
+ 
+ 	/*
+@@ -636,10 +639,10 @@ static int msm_init_cm_dll(struct sdhci_host *host)
+ 		config &= CORE_FLL_CYCLE_CNT;
+ 		if (config)
+ 			mclk_freq = DIV_ROUND_CLOSEST_ULL((host->clock * 8),
+-					clk_get_rate(msm_host->xo_clk));
++					xo_clk);
+ 		else
+ 			mclk_freq = DIV_ROUND_CLOSEST_ULL((host->clock * 4),
+-					clk_get_rate(msm_host->xo_clk));
++					xo_clk);
+ 
+ 		config = readl_relaxed(host->ioaddr +
+ 				msm_offset->core_dll_config_2);
+diff --git a/drivers/mtd/nand/raw/mtk_nand.c b/drivers/mtd/nand/raw/mtk_nand.c
+index 2c0e09187773..d56f8bbb7556 100644
+--- a/drivers/mtd/nand/raw/mtk_nand.c
++++ b/drivers/mtd/nand/raw/mtk_nand.c
+@@ -508,7 +508,8 @@ static int mtk_nfc_setup_data_interface(struct nand_chip *chip, int csline,
+ {
+ 	struct mtk_nfc *nfc = nand_get_controller_data(chip);
+ 	const struct nand_sdr_timings *timings;
+-	u32 rate, tpoecs, tprecs, tc2r, tw2r, twh, twst, trlt;
++	u32 rate, tpoecs, tprecs, tc2r, tw2r, twh, twst = 0, trlt = 0;
++	u32 thold;
+ 
+ 	timings = nand_get_sdr_timings(conf);
+ 	if (IS_ERR(timings))
+@@ -544,11 +545,28 @@ static int mtk_nfc_setup_data_interface(struct nand_chip *chip, int csline,
+ 	twh = DIV_ROUND_UP(twh * rate, 1000000) - 1;
+ 	twh &= 0xf;
+ 
+-	twst = timings->tWP_min / 1000;
++	/* Calculate the real WE#/RE# hold time in nanoseconds */
++	thold = (twh + 1) * 1000000 / rate;
++	/* nanoseconds to picoseconds */
++	thold *= 1000;
++
++	/*
++	 * WE# low level time should be expanded to meet WE# pulse time
++	 * and WE# cycle time at the same time.
++	 */
++	if (thold < timings->tWC_min)
++		twst = timings->tWC_min - thold;
++	twst = max(timings->tWP_min, twst) / 1000;
+ 	twst = DIV_ROUND_UP(twst * rate, 1000000) - 1;
+ 	twst &= 0xf;
+ 
+-	trlt = max(timings->tREA_max, timings->tRP_min) / 1000;
++	/*
++	 * RE# low level time should be expanded to meet RE# pulse time,
++	 * RE# access time and RE# cycle time at the same time.
++	 */
++	if (thold < timings->tRC_min)
++		trlt = timings->tRC_min - thold;
++	trlt = max3(trlt, timings->tREA_max, timings->tRP_min) / 1000;
+ 	trlt = DIV_ROUND_UP(trlt * rate, 1000000) - 1;
+ 	trlt &= 0xf;
+ 
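
A worked pass through the new hold-time expansion, with illustrative numbers only (rate is the controller clock in kHz, consistent with the ns * rate / 1000000 conversions above): at rate = 100000 (a 10 ns cycle) with twh programmed to 1, the hold time is (1 + 1) * 1000000 / 100000 = 20 ns = 20000 ps. If the chip's tWC_min were 45000 ps with tWP_min = 15000 ps, then twst = 45000 - 20000 = 25000 ps, max(15000, 25000) / 1000 = 25 ns, and DIV_ROUND_UP(25 * 100000, 1000000) - 1 = 2, i.e. WE# stays low for 3 cycles. Hold (2 cycles) plus low (3 cycles) gives a 50 ns WE# cycle, satisfying the 45 ns tWC_min that the old code ignored.
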
+diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
+index fa87ae28cdfe..2fa37bbb8794 100644
+--- a/drivers/mtd/nand/spi/core.c
++++ b/drivers/mtd/nand/spi/core.c
+@@ -572,12 +572,12 @@ static int spinand_mtd_read(struct mtd_info *mtd, loff_t from,
+ 		if (ret == -EBADMSG) {
+ 			ecc_failed = true;
+ 			mtd->ecc_stats.failed++;
+-			ret = 0;
+ 		} else {
+ 			mtd->ecc_stats.corrected += ret;
+ 			max_bitflips = max_t(unsigned int, max_bitflips, ret);
+ 		}
+ 
++		ret = 0;
+ 		ops->retlen += iter.req.datalen;
+ 		ops->oobretlen += iter.req.ooblen;
+ 	}
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 59e919b92873..7b9a18e36a93 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -3866,8 +3866,8 @@ static netdev_tx_t bond_xmit_roundrobin(struct sk_buff *skb,
+ 					struct net_device *bond_dev)
+ {
+ 	struct bonding *bond = netdev_priv(bond_dev);
+-	struct iphdr *iph = ip_hdr(skb);
+ 	struct slave *slave;
++	int slave_cnt;
+ 	u32 slave_id;
+ 
+ 	/* Start with the curr_active_slave that joined the bond as the
+@@ -3876,23 +3876,32 @@ static netdev_tx_t bond_xmit_roundrobin(struct sk_buff *skb,
+ 	 * send the join/membership reports.  The curr_active_slave found
+ 	 * will send all of this type of traffic.
+ 	 */
+-	if (iph->protocol == IPPROTO_IGMP && skb->protocol == htons(ETH_P_IP)) {
+-		slave = rcu_dereference(bond->curr_active_slave);
+-		if (slave)
+-			bond_dev_queue_xmit(bond, skb, slave->dev);
+-		else
+-			bond_xmit_slave_id(bond, skb, 0);
+-	} else {
+-		int slave_cnt = READ_ONCE(bond->slave_cnt);
++	if (skb->protocol == htons(ETH_P_IP)) {
++		int noff = skb_network_offset(skb);
++		struct iphdr *iph;
+ 
+-		if (likely(slave_cnt)) {
+-			slave_id = bond_rr_gen_slave_id(bond);
+-			bond_xmit_slave_id(bond, skb, slave_id % slave_cnt);
+-		} else {
+-			bond_tx_drop(bond_dev, skb);
++		if (unlikely(!pskb_may_pull(skb, noff + sizeof(*iph))))
++			goto non_igmp;
++
++		iph = ip_hdr(skb);
++		if (iph->protocol == IPPROTO_IGMP) {
++			slave = rcu_dereference(bond->curr_active_slave);
++			if (slave)
++				bond_dev_queue_xmit(bond, skb, slave->dev);
++			else
++				bond_xmit_slave_id(bond, skb, 0);
++			return NETDEV_TX_OK;
+ 		}
+ 	}
+ 
++non_igmp:
++	slave_cnt = READ_ONCE(bond->slave_cnt);
++	if (likely(slave_cnt)) {
++		slave_id = bond_rr_gen_slave_id(bond);
++		bond_xmit_slave_id(bond, skb, slave_id % slave_cnt);
++	} else {
++		bond_tx_drop(bond_dev, skb);
++	}
+ 	return NETDEV_TX_OK;
+ }
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+index ecb1bd7eb508..78a01880931c 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+@@ -3858,9 +3858,12 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 	if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) {
+ 		if (!(bp->flags & TX_TIMESTAMPING_EN)) {
++			bp->eth_stats.ptp_skip_tx_ts++;
+ 			BNX2X_ERR("Tx timestamping was not enabled, this packet will not be timestamped\n");
+ 		} else if (bp->ptp_tx_skb) {
+-			BNX2X_ERR("The device supports only a single outstanding packet to timestamp, this packet will not be timestamped\n");
++			bp->eth_stats.ptp_skip_tx_ts++;
++			netdev_err_once(bp->dev,
++					"Device supports only a single outstanding packet to timestamp, this packet won't be timestamped\n");
+ 		} else {
+ 			skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+ 			/* schedule check for Tx timestamp */
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
+index 59f227fcc68b..0e1b884a5344 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
+@@ -182,7 +182,9 @@ static const struct {
+ 	{ STATS_OFFSET32(driver_filtered_tx_pkt),
+ 				4, false, "driver_filtered_tx_pkt" },
+ 	{ STATS_OFFSET32(eee_tx_lpi),
+-				4, true, "Tx LPI entry count"}
++				4, true, "Tx LPI entry count"},
++	{ STATS_OFFSET32(ptp_skip_tx_ts),
++				4, false, "ptp_skipped_tx_tstamp" },
+ };
+ 
+ #define BNX2X_NUM_STATS		ARRAY_SIZE(bnx2x_stats_arr)
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+index 626b491f7674..7a075f1f1242 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+@@ -15243,11 +15243,24 @@ static void bnx2x_ptp_task(struct work_struct *work)
+ 	u32 val_seq;
+ 	u64 timestamp, ns;
+ 	struct skb_shared_hwtstamps shhwtstamps;
++	bool bail = true;
++	int i;
++
++	/* FW may take a while to complete timestamping; poll a few times and,
++	 * if it is still not complete, assume an error state and bail out.
++	 */
++	for (i = 0; i < 10; i++) {
++		/* Read Tx timestamp registers */
++		val_seq = REG_RD(bp, port ? NIG_REG_P1_TLLH_PTP_BUF_SEQID :
++				 NIG_REG_P0_TLLH_PTP_BUF_SEQID);
++		if (val_seq & 0x10000) {
++			bail = false;
++			break;
++		}
++		msleep(1 << i);
++	}
+ 
+-	/* Read Tx timestamp registers */
+-	val_seq = REG_RD(bp, port ? NIG_REG_P1_TLLH_PTP_BUF_SEQID :
+-			 NIG_REG_P0_TLLH_PTP_BUF_SEQID);
+-	if (val_seq & 0x10000) {
++	if (!bail) {
+ 		/* There is a valid timestamp value */
+ 		timestamp = REG_RD(bp, port ? NIG_REG_P1_TLLH_PTP_BUF_TS_MSB :
+ 				   NIG_REG_P0_TLLH_PTP_BUF_TS_MSB);
+@@ -15262,16 +15275,18 @@ static void bnx2x_ptp_task(struct work_struct *work)
+ 		memset(&shhwtstamps, 0, sizeof(shhwtstamps));
+ 		shhwtstamps.hwtstamp = ns_to_ktime(ns);
+ 		skb_tstamp_tx(bp->ptp_tx_skb, &shhwtstamps);
+-		dev_kfree_skb_any(bp->ptp_tx_skb);
+-		bp->ptp_tx_skb = NULL;
+ 
+ 		DP(BNX2X_MSG_PTP, "Tx timestamp, timestamp cycles = %llu, ns = %llu\n",
+ 		   timestamp, ns);
+ 	} else {
+-		DP(BNX2X_MSG_PTP, "There is no valid Tx timestamp yet\n");
+-		/* Reschedule to keep checking for a valid timestamp value */
+-		schedule_work(&bp->ptp_task);
++		DP(BNX2X_MSG_PTP,
++		   "Tx timestamp is not recorded (register read=%u)\n",
++		   val_seq);
++		bp->eth_stats.ptp_skip_tx_ts++;
+ 	}
++
++	dev_kfree_skb_any(bp->ptp_tx_skb);
++	bp->ptp_tx_skb = NULL;
+ }
+ 
+ void bnx2x_set_rx_ts(struct bnx2x *bp, struct sk_buff *skb)
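
The new loop replaces an unbounded reschedule with a bounded poll: up to ten
register reads with exponentially growing sleeps (1 + 2 + ... + 512 ms,
roughly a second in total) before the skb is dropped and counted as a skipped
timestamp. A generic sketch of the pattern, assuming a caller-supplied done
predicate in place of the SEQID register test:

#include <linux/delay.h>
#include <linux/types.h>

/* Poll done(arg) with exponential backoff; total wait ~ (2^tries - 1) ms. */
static bool poll_backoff(bool (*done)(void *), void *arg, int tries)
{
	int i;

	for (i = 0; i < tries; i++) {
		if (done(arg))
			return true;
		msleep(1 << i);		/* 1, 2, 4, ... ms */
	}
	return false;
}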
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h
+index b2644ed13d06..d55e63692cf3 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h
+@@ -207,6 +207,9 @@ struct bnx2x_eth_stats {
+ 	u32 driver_filtered_tx_pkt;
+ 	/* src: Clear-on-Read register; Will not survive PMF Migration */
+ 	u32 eee_tx_lpi;
++
++	/* PTP */
++	u32 ptp_skip_tx_ts;
+ };
+ 
+ struct bnx2x_eth_q_stats {
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 30cafe4cdb6e..09557bf49bb0 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -5481,7 +5481,16 @@ static int bnxt_cp_rings_in_use(struct bnxt *bp)
+ 
+ static int bnxt_get_func_stat_ctxs(struct bnxt *bp)
+ {
+-	return bp->cp_nr_rings + bnxt_get_ulp_stat_ctxs(bp);
++	int ulp_stat = bnxt_get_ulp_stat_ctxs(bp);
++	int cp = bp->cp_nr_rings;
++
++	if (!ulp_stat)
++		return cp;
++
++	if (bnxt_nq_rings_in_use(bp) > cp + bnxt_get_ulp_msix_num(bp))
++		return bnxt_get_ulp_msix_base(bp) + ulp_stat;
++
++	return cp + ulp_stat;
+ }
+ 
+ static bool bnxt_need_reserve_rings(struct bnxt *bp)
+@@ -7373,11 +7382,7 @@ unsigned int bnxt_get_avail_cp_rings_for_en(struct bnxt *bp)
+ 
+ unsigned int bnxt_get_avail_stat_ctxs_for_en(struct bnxt *bp)
+ {
+-	unsigned int stat;
+-
+-	stat = bnxt_get_max_func_stat_ctxs(bp) - bnxt_get_ulp_stat_ctxs(bp);
+-	stat -= bp->cp_nr_rings;
+-	return stat;
++	return bnxt_get_max_func_stat_ctxs(bp) - bnxt_get_func_stat_ctxs(bp);
+ }
+ 
+ int bnxt_get_avail_msix(struct bnxt *bp, int num)
+@@ -10165,10 +10170,10 @@ static void bnxt_remove_one(struct pci_dev *pdev)
+ 	bnxt_dcb_free(bp);
+ 	kfree(bp->edev);
+ 	bp->edev = NULL;
++	bnxt_cleanup_pci(bp);
+ 	bnxt_free_ctx_mem(bp);
+ 	kfree(bp->ctx);
+ 	bp->ctx = NULL;
+-	bnxt_cleanup_pci(bp);
+ 	bnxt_free_port_stats(bp);
+ 	free_netdev(dev);
+ }
+@@ -10730,6 +10735,7 @@ static void bnxt_shutdown(struct pci_dev *pdev)
+ 
+ 	if (system_state == SYSTEM_POWER_OFF) {
+ 		bnxt_clear_int_mode(bp);
++		pci_disable_device(pdev);
+ 		pci_wake_from_d3(pdev, bp->wol);
+ 		pci_set_power_state(pdev, PCI_D3hot);
+ 	}
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 878ccce1dfcd..87a9c5716958 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -1689,10 +1689,10 @@ static void fec_get_mac(struct net_device *ndev)
+ 	 */
+ 	if (!is_valid_ether_addr(iap)) {
+ 		/* Report it and use a random ethernet address instead */
+-		netdev_err(ndev, "Invalid MAC address: %pM\n", iap);
++		dev_err(&fep->pdev->dev, "Invalid MAC address: %pM\n", iap);
+ 		eth_hw_addr_random(ndev);
+-		netdev_info(ndev, "Using random MAC address: %pM\n",
+-			    ndev->dev_addr);
++		dev_info(&fep->pdev->dev, "Using random MAC address: %pM\n",
++			 ndev->dev_addr);
+ 		return;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.c b/drivers/net/ethernet/hisilicon/hns3/hnae3.c
+index 17ab4f4af6ad..0da814618565 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.c
+@@ -247,6 +247,7 @@ void hnae3_unregister_ae_algo(struct hnae3_ae_algo *ae_algo)
+ 
+ 		ae_algo->ops->uninit_ae_dev(ae_dev);
+ 		hnae3_set_bit(ae_dev->flag, HNAE3_DEV_INITED_B, 0);
++		ae_dev->ops = NULL;
+ 	}
+ 
+ 	list_del(&ae_algo->node);
+@@ -347,6 +348,7 @@ void hnae3_unregister_ae_dev(struct hnae3_ae_dev *ae_dev)
+ 
+ 		ae_algo->ops->uninit_ae_dev(ae_dev);
+ 		hnae3_set_bit(ae_dev->flag, HNAE3_DEV_INITED_B, 0);
++		ae_dev->ops = NULL;
+ 	}
+ 
+ 	list_del(&ae_dev->node);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index c7d310903319..ecf6ad5bdc2d 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -4,6 +4,9 @@
+ #include <linux/dma-mapping.h>
+ #include <linux/etherdevice.h>
+ #include <linux/interrupt.h>
++#ifdef CONFIG_RFS_ACCEL
++#include <linux/cpu_rmap.h>
++#endif
+ #include <linux/if_vlan.h>
+ #include <linux/ip.h>
+ #include <linux/ipv6.h>
+@@ -24,8 +27,7 @@
+ #define hns3_set_field(origin, shift, val)	((origin) |= ((val) << (shift)))
+ #define hns3_tx_bd_count(S)	DIV_ROUND_UP(S, HNS3_MAX_BD_SIZE)
+ 
+-static void hns3_clear_all_ring(struct hnae3_handle *h);
+-static void hns3_force_clear_all_rx_ring(struct hnae3_handle *h);
++static void hns3_clear_all_ring(struct hnae3_handle *h, bool force);
+ static void hns3_remove_hw_addr(struct net_device *netdev);
+ 
+ static const char hns3_driver_name[] = "hns3";
+@@ -72,23 +74,6 @@ static irqreturn_t hns3_irq_handle(int irq, void *vector)
+ 	return IRQ_HANDLED;
+ }
+ 
+-/* This callback function is used to set affinity changes to the irq affinity
+- * masks when the irq_set_affinity_notifier function is used.
+- */
+-static void hns3_nic_irq_affinity_notify(struct irq_affinity_notify *notify,
+-					 const cpumask_t *mask)
+-{
+-	struct hns3_enet_tqp_vector *tqp_vectors =
+-		container_of(notify, struct hns3_enet_tqp_vector,
+-			     affinity_notify);
+-
+-	tqp_vectors->affinity_mask = *mask;
+-}
+-
+-static void hns3_nic_irq_affinity_release(struct kref *ref)
+-{
+-}
+-
+ static void hns3_nic_uninit_irq(struct hns3_nic_priv *priv)
+ {
+ 	struct hns3_enet_tqp_vector *tqp_vectors;
+@@ -100,8 +85,7 @@ static void hns3_nic_uninit_irq(struct hns3_nic_priv *priv)
+ 		if (tqp_vectors->irq_init_flag != HNS3_VECTOR_INITED)
+ 			continue;
+ 
+-		/* clear the affinity notifier and affinity mask */
+-		irq_set_affinity_notifier(tqp_vectors->vector_irq, NULL);
++		/* clear the affinity mask */
+ 		irq_set_affinity_hint(tqp_vectors->vector_irq, NULL);
+ 
+ 		/* release the irq resource */
+@@ -154,12 +138,6 @@ static int hns3_nic_init_irq(struct hns3_nic_priv *priv)
+ 			return ret;
+ 		}
+ 
+-		tqp_vectors->affinity_notify.notify =
+-					hns3_nic_irq_affinity_notify;
+-		tqp_vectors->affinity_notify.release =
+-					hns3_nic_irq_affinity_release;
+-		irq_set_affinity_notifier(tqp_vectors->vector_irq,
+-					  &tqp_vectors->affinity_notify);
+ 		irq_set_affinity_hint(tqp_vectors->vector_irq,
+ 				      &tqp_vectors->affinity_mask);
+ 
+@@ -333,6 +311,40 @@ static void hns3_tqp_disable(struct hnae3_queue *tqp)
+ 	hns3_write_dev(tqp, HNS3_RING_EN_REG, rcb_reg);
+ }
+ 
++static void hns3_free_rx_cpu_rmap(struct net_device *netdev)
++{
++#ifdef CONFIG_RFS_ACCEL
++	free_irq_cpu_rmap(netdev->rx_cpu_rmap);
++	netdev->rx_cpu_rmap = NULL;
++#endif
++}
++
++static int hns3_set_rx_cpu_rmap(struct net_device *netdev)
++{
++#ifdef CONFIG_RFS_ACCEL
++	struct hns3_nic_priv *priv = netdev_priv(netdev);
++	struct hns3_enet_tqp_vector *tqp_vector;
++	int i, ret;
++
++	if (!netdev->rx_cpu_rmap) {
++		netdev->rx_cpu_rmap = alloc_irq_cpu_rmap(priv->vector_num);
++		if (!netdev->rx_cpu_rmap)
++			return -ENOMEM;
++	}
++
++	for (i = 0; i < priv->vector_num; i++) {
++		tqp_vector = &priv->tqp_vector[i];
++		ret = irq_cpu_rmap_add(netdev->rx_cpu_rmap,
++				       tqp_vector->vector_irq);
++		if (ret) {
++			hns3_free_rx_cpu_rmap(netdev);
++			return ret;
++		}
++	}
++#endif
++	return 0;
++}
++
+ static int hns3_nic_net_up(struct net_device *netdev)
+ {
+ 	struct hns3_nic_priv *priv = netdev_priv(netdev);
+@@ -344,11 +356,16 @@ static int hns3_nic_net_up(struct net_device *netdev)
+ 	if (ret)
+ 		return ret;
+ 
++	/* the device can work without cpu rmap; only aRFS needs it */
++	ret = hns3_set_rx_cpu_rmap(netdev);
++	if (ret)
++		netdev_warn(netdev, "set rx cpu rmap fail, ret=%d!\n", ret);
++
+ 	/* get irq resource for all vectors */
+ 	ret = hns3_nic_init_irq(priv);
+ 	if (ret) {
+ 		netdev_err(netdev, "hns init irq failed! ret=%d\n", ret);
+-		return ret;
++		goto free_rmap;
+ 	}
+ 
+ 	clear_bit(HNS3_NIC_STATE_DOWN, &priv->state);
+@@ -377,7 +394,8 @@ out_start_err:
+ 		hns3_vector_disable(&priv->tqp_vector[j]);
+ 
+ 	hns3_nic_uninit_irq(priv);
+-
++free_rmap:
++	hns3_free_rx_cpu_rmap(netdev);
+ 	return ret;
+ }
+ 
+@@ -440,6 +458,20 @@ static int hns3_nic_net_open(struct net_device *netdev)
+ 	return 0;
+ }
+ 
++static void hns3_reset_tx_queue(struct hnae3_handle *h)
++{
++	struct net_device *ndev = h->kinfo.netdev;
++	struct hns3_nic_priv *priv = netdev_priv(ndev);
++	struct netdev_queue *dev_queue;
++	u32 i;
++
++	for (i = 0; i < h->kinfo.num_tqps; i++) {
++		dev_queue = netdev_get_tx_queue(ndev,
++						priv->ring_data[i].queue_index);
++		netdev_tx_reset_queue(dev_queue);
++	}
++}
++
+ static void hns3_nic_net_down(struct net_device *netdev)
+ {
+ 	struct hns3_nic_priv *priv = netdev_priv(netdev);
+@@ -460,10 +492,19 @@ static void hns3_nic_net_down(struct net_device *netdev)
+ 	if (ops->stop)
+ 		ops->stop(priv->ae_handle);
+ 
++	hns3_free_rx_cpu_rmap(netdev);
++
+ 	/* free irq resources */
+ 	hns3_nic_uninit_irq(priv);
+ 
+-	hns3_clear_all_ring(priv->ae_handle);
++	/* delay ring buffer clearing to hns3_reset_notify_uninit_enet
++	 * during the reset process, because the driver may not be able
++	 * to disable the ring through firmware when downing the netdev.
++	 */
++	if (!hns3_nic_resetting(netdev))
++		hns3_clear_all_ring(priv->ae_handle, false);
++
++	hns3_reset_tx_queue(priv->ae_handle);
+ }
+ 
+ static int hns3_nic_net_stop(struct net_device *netdev)
+@@ -1476,12 +1517,12 @@ static void hns3_nic_get_stats64(struct net_device *netdev,
+ static int hns3_setup_tc(struct net_device *netdev, void *type_data)
+ {
+ 	struct tc_mqprio_qopt_offload *mqprio_qopt = type_data;
+-	struct hnae3_handle *h = hns3_get_handle(netdev);
+-	struct hnae3_knic_private_info *kinfo = &h->kinfo;
+ 	u8 *prio_tc = mqprio_qopt->qopt.prio_tc_map;
++	struct hnae3_knic_private_info *kinfo;
+ 	u8 tc = mqprio_qopt->qopt.num_tc;
+ 	u16 mode = mqprio_qopt->mode;
+ 	u8 hw = mqprio_qopt->qopt.hw;
++	struct hnae3_handle *h;
+ 
+ 	if (!((hw == TC_MQPRIO_HW_OFFLOAD_TCS &&
+ 	       mode == TC_MQPRIO_MODE_CHANNEL) || (!hw && tc == 0)))
+@@ -1493,6 +1534,9 @@ static int hns3_setup_tc(struct net_device *netdev, void *type_data)
+ 	if (!netdev)
+ 		return -EINVAL;
+ 
++	h = hns3_get_handle(netdev);
++	kinfo = &h->kinfo;
++
+ 	return (kinfo->dcb_ops && kinfo->dcb_ops->setup_tc) ?
+ 		kinfo->dcb_ops->setup_tc(h, tc, prio_tc) : -EOPNOTSUPP;
+ }
+@@ -1826,9 +1870,9 @@ static pci_ers_result_t hns3_error_detected(struct pci_dev *pdev,
+ 	if (state == pci_channel_io_perm_failure)
+ 		return PCI_ERS_RESULT_DISCONNECT;
+ 
+-	if (!ae_dev) {
++	if (!ae_dev || !ae_dev->ops) {
+ 		dev_err(&pdev->dev,
+-			"Can't recover - error happened during device init\n");
++			"Can't recover - error happened before device initialized\n");
+ 		return PCI_ERS_RESULT_NONE;
+ 	}
+ 
+@@ -1847,6 +1891,9 @@ static pci_ers_result_t hns3_slot_reset(struct pci_dev *pdev)
+ 
+ 	dev_info(dev, "requesting reset due to PCI error\n");
+ 
++	if (!ae_dev || !ae_dev->ops)
++		return PCI_ERS_RESULT_NONE;
++
+ 	/* request the reset */
+ 	if (ae_dev->ops->reset_event) {
+ 		if (!ae_dev->override_pci_need_reset)
+@@ -3198,8 +3245,6 @@ static void hns3_nic_uninit_vector_data(struct hns3_nic_priv *priv)
+ 		hns3_free_vector_ring_chain(tqp_vector, &vector_ring_chain);
+ 
+ 		if (tqp_vector->irq_init_flag == HNS3_VECTOR_INITED) {
+-			irq_set_affinity_notifier(tqp_vector->vector_irq,
+-						  NULL);
+ 			irq_set_affinity_hint(tqp_vector->vector_irq, NULL);
+ 			free_irq(tqp_vector->vector_irq, tqp_vector);
+ 			tqp_vector->irq_init_flag = HNS3_VECTOR_NOT_INITED;
+@@ -3712,7 +3757,7 @@ static void hns3_client_uninit(struct hnae3_handle *handle, bool reset)
+ 
+ 	hns3_del_all_fd_rules(netdev, true);
+ 
+-	hns3_force_clear_all_rx_ring(handle);
++	hns3_clear_all_ring(handle, true);
+ 
+ 	hns3_uninit_phy(netdev);
+ 
+@@ -3884,40 +3929,26 @@ static void hns3_force_clear_rx_ring(struct hns3_enet_ring *ring)
+ 	}
+ }
+ 
+-static void hns3_force_clear_all_rx_ring(struct hnae3_handle *h)
+-{
+-	struct net_device *ndev = h->kinfo.netdev;
+-	struct hns3_nic_priv *priv = netdev_priv(ndev);
+-	struct hns3_enet_ring *ring;
+-	u32 i;
+-
+-	for (i = 0; i < h->kinfo.num_tqps; i++) {
+-		ring = priv->ring_data[i + h->kinfo.num_tqps].ring;
+-		hns3_force_clear_rx_ring(ring);
+-	}
+-}
+-
+-static void hns3_clear_all_ring(struct hnae3_handle *h)
++static void hns3_clear_all_ring(struct hnae3_handle *h, bool force)
+ {
+ 	struct net_device *ndev = h->kinfo.netdev;
+ 	struct hns3_nic_priv *priv = netdev_priv(ndev);
+ 	u32 i;
+ 
+ 	for (i = 0; i < h->kinfo.num_tqps; i++) {
+-		struct netdev_queue *dev_queue;
+ 		struct hns3_enet_ring *ring;
+ 
+ 		ring = priv->ring_data[i].ring;
+ 		hns3_clear_tx_ring(ring);
+-		dev_queue = netdev_get_tx_queue(ndev,
+-						priv->ring_data[i].queue_index);
+-		netdev_tx_reset_queue(dev_queue);
+ 
+ 		ring = priv->ring_data[i + h->kinfo.num_tqps].ring;
+ 		/* Continue to clear other rings even if clearing some
+ 		 * rings failed.
+ 		 */
+-		hns3_clear_rx_ring(ring);
++		if (force)
++			hns3_force_clear_rx_ring(ring);
++		else
++			hns3_clear_rx_ring(ring);
+ 	}
+ }
+ 
+@@ -4120,7 +4151,8 @@ static int hns3_reset_notify_uninit_enet(struct hnae3_handle *handle)
+ 		return 0;
+ 	}
+ 
+-	hns3_force_clear_all_rx_ring(handle);
++	hns3_clear_all_ring(handle, true);
++	hns3_reset_tx_queue(priv->ae_handle);
+ 
+ 	hns3_nic_uninit_vector_data(priv);
+ 
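
hns3_reset_tx_queue() above exists because BQL (byte queue limits) tracks
bytes queued against bytes completed per tx queue; when a reset discards
in-flight descriptors without completing them, netdev_tx_reset_queue() must
clear that accounting or the stack will believe the queue is permanently
full. A generic sketch of the same sweep over all queues of a device:

#include <linux/netdevice.h>

/* Clear BQL state for every tx queue after descriptors were dropped
 * without completions (e.g. across a hardware reset). */
static void reset_all_tx_queues(struct net_device *ndev)
{
	unsigned int i;

	for (i = 0; i < ndev->real_num_tx_queues; i++)
		netdev_tx_reset_queue(netdev_get_tx_queue(ndev, i));
}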
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+index ea94b5152963..cf20fa6768d7 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+@@ -241,11 +241,13 @@ static int hns3_lp_run_test(struct net_device *ndev, enum hnae3_loop mode)
+ 
+ 		skb_get(skb);
+ 		tx_ret = hns3_nic_net_xmit(skb, ndev);
+-		if (tx_ret == NETDEV_TX_OK)
++		if (tx_ret == NETDEV_TX_OK) {
+ 			good_cnt++;
+-		else
++		} else {
++			kfree_skb(skb);
+ 			netdev_err(ndev, "hns3_lb_run_test xmit failed: %d\n",
+ 				   tx_ret);
++		}
+ 	}
+ 	if (good_cnt != HNS3_NIC_LB_TEST_PKT_NUM) {
+ 		ret_val = HNS3_NIC_LB_TEST_TX_CNT_ERR;
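
The added kfree_skb() fixes a reference leak rooted in the ndo_start_xmit
ownership rule: the driver consumes the skb only when it returns
NETDEV_TX_OK, so a loopback test that took an extra reference with skb_get()
must drop that reference itself on any other return. A hedged sketch of the
rule (lb_xmit_once is illustrative, not the hns3 helper):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Transmit one copy of skb, keeping the caller's reference alive.
 * On NETDEV_TX_OK the driver owns (and frees) the extra reference;
 * otherwise we must drop it ourselves. */
static int lb_xmit_once(struct net_device *ndev, struct sk_buff *skb,
			netdev_tx_t (*xmit)(struct sk_buff *,
					    struct net_device *))
{
	skb_get(skb);
	if (xmit(skb, ndev) == NETDEV_TX_OK)
		return 0;
	kfree_skb(skb);
	return -EIO;
}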
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 6d4d5a470163..14d37c26196b 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -550,8 +550,7 @@ static u8 *hclge_comm_get_strings(u32 stringset,
+ 		return buff;
+ 
+ 	for (i = 0; i < size; i++) {
+-		snprintf(buff, ETH_GSTRING_LEN,
+-			 strs[i].desc);
++		snprintf(buff, ETH_GSTRING_LEN, "%s", strs[i].desc);
+ 		buff = buff + ETH_GSTRING_LEN;
+ 	}
+ 
+@@ -890,6 +889,7 @@ static void hclge_parse_copper_link_mode(struct hclge_dev *hdev,
+ 	linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, supported);
+ 	linkmode_set_bit(ETHTOOL_LINK_MODE_TP_BIT, supported);
+ 	linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT, supported);
++	linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, supported);
+ }
+ 
+ static void hclge_parse_link_mode(struct hclge_dev *hdev, u8 speed_ability)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
+index 48eda2c6fdae..71a6f7c734b6 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
+@@ -215,6 +215,13 @@ int hclge_mac_connect_phy(struct hnae3_handle *handle)
+ 	linkmode_and(phydev->supported, phydev->supported, mask);
+ 	linkmode_copy(phydev->advertising, phydev->supported);
+ 
++	/* The supported flags include Pause and Asym Pause, but the default
++	 * advertising should be rx on, tx on, so clear Asym Pause from the
++	 * advertising flags.
++	 */
++	linkmode_clear_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
++			   phydev->advertising);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+index a7bbb6d3091a..0d53062f7bb5 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+@@ -54,7 +54,8 @@ static int hclge_shaper_para_calc(u32 ir, u8 shaper_level,
+ 	u32 tick;
+ 
+ 	/* Calc tick */
+-	if (shaper_level >= HCLGE_SHAPER_LVL_CNT)
++	if (shaper_level >= HCLGE_SHAPER_LVL_CNT ||
++	    ir > HCLGE_ETHER_MAX_RATE)
+ 		return -EINVAL;
+ 
+ 	tick = tick_array[shaper_level];
+@@ -1124,6 +1125,9 @@ static int hclge_tm_schd_mode_vnet_base_cfg(struct hclge_vport *vport)
+ 	int ret;
+ 	u8 i;
+ 
++	if (vport->vport_id >= HNAE3_MAX_TC)
++		return -EINVAL;
++
+ 	ret = hclge_tm_pri_schd_mode_cfg(hdev, vport->vport_id);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 8dd7fef863f6..d7a15d5b6b61 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -2425,6 +2425,12 @@ static int hclgevf_reset_hdev(struct hclgevf_dev *hdev)
+ 		return ret;
+ 	}
+ 
++	if (pdev->revision >= 0x21) {
++		ret = hclgevf_set_promisc_mode(hdev, true);
++		if (ret)
++			return ret;
++	}
++
+ 	dev_info(&hdev->pdev->dev, "Reset done\n");
+ 
+ 	return 0;
+@@ -2504,9 +2510,11 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev)
+ 	 * firmware makes sure broadcast packets can be accepted.
+ 	 * For revision 0x21, default to enable broadcast promisc mode.
+ 	 */
+-	ret = hclgevf_set_promisc_mode(hdev, true);
+-	if (ret)
+-		goto err_config;
++	if (pdev->revision >= 0x21) {
++		ret = hclgevf_set_promisc_mode(hdev, true);
++		if (ret)
++			goto err_config;
++	}
+ 
+ 	/* Initialize RSS for this VF */
+ 	ret = hclgevf_rss_init_hw(hdev);
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+index 9b4d7cec2e18..2a261d849d5a 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+@@ -1236,6 +1236,9 @@ static void iavf_add_rx_frag(struct iavf_ring *rx_ring,
+ 	unsigned int truesize = SKB_DATA_ALIGN(size + iavf_rx_offset(rx_ring));
+ #endif
+ 
++	if (!size)
++		return;
++
+ 	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page,
+ 			rx_buffer->page_offset, size, truesize);
+ 
+@@ -1260,6 +1263,9 @@ static struct iavf_rx_buffer *iavf_get_rx_buffer(struct iavf_ring *rx_ring,
+ {
+ 	struct iavf_rx_buffer *rx_buffer;
+ 
++	if (!size)
++		return NULL;
++
+ 	rx_buffer = &rx_ring->rx_bi[rx_ring->next_to_clean];
+ 	prefetchw(rx_buffer->page);
+ 
+@@ -1290,7 +1296,7 @@ static struct sk_buff *iavf_construct_skb(struct iavf_ring *rx_ring,
+ 					  struct iavf_rx_buffer *rx_buffer,
+ 					  unsigned int size)
+ {
+-	void *va = page_address(rx_buffer->page) + rx_buffer->page_offset;
++	void *va;
+ #if (PAGE_SIZE < 8192)
+ 	unsigned int truesize = iavf_rx_pg_size(rx_ring) / 2;
+ #else
+@@ -1299,7 +1305,10 @@ static struct sk_buff *iavf_construct_skb(struct iavf_ring *rx_ring,
+ 	unsigned int headlen;
+ 	struct sk_buff *skb;
+ 
++	if (!rx_buffer)
++		return NULL;
+ 	/* prefetch first cache line of first page */
++	va = page_address(rx_buffer->page) + rx_buffer->page_offset;
+ 	prefetch(va);
+ #if L1_CACHE_BYTES < 128
+ 	prefetch(va + L1_CACHE_BYTES);
+@@ -1354,7 +1363,7 @@ static struct sk_buff *iavf_build_skb(struct iavf_ring *rx_ring,
+ 				      struct iavf_rx_buffer *rx_buffer,
+ 				      unsigned int size)
+ {
+-	void *va = page_address(rx_buffer->page) + rx_buffer->page_offset;
++	void *va;
+ #if (PAGE_SIZE < 8192)
+ 	unsigned int truesize = iavf_rx_pg_size(rx_ring) / 2;
+ #else
+@@ -1363,7 +1372,10 @@ static struct sk_buff *iavf_build_skb(struct iavf_ring *rx_ring,
+ #endif
+ 	struct sk_buff *skb;
+ 
++	if (!rx_buffer)
++		return NULL;
+ 	/* prefetch first cache line of first page */
++	va = page_address(rx_buffer->page) + rx_buffer->page_offset;
+ 	prefetch(va);
+ #if L1_CACHE_BYTES < 128
+ 	prefetch(va + L1_CACHE_BYTES);
+@@ -1398,6 +1410,9 @@ static struct sk_buff *iavf_build_skb(struct iavf_ring *rx_ring,
+ static void iavf_put_rx_buffer(struct iavf_ring *rx_ring,
+ 			       struct iavf_rx_buffer *rx_buffer)
+ {
++	if (!rx_buffer)
++		return;
++
+ 	if (iavf_can_reuse_rx_page(rx_buffer)) {
+ 		/* hand second half of page back to the ring */
+ 		iavf_reuse_rx_page(rx_ring, rx_buffer);
+@@ -1496,11 +1511,12 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
+ 		 * verified the descriptor has been written back.
+ 		 */
+ 		dma_rmb();
++#define IAVF_RXD_DD BIT(IAVF_RX_DESC_STATUS_DD_SHIFT)
++		if (!iavf_test_staterr(rx_desc, IAVF_RXD_DD))
++			break;
+ 
+ 		size = (qword & IAVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+ 		       IAVF_RXD_QW1_LENGTH_PBUF_SHIFT;
+-		if (!size)
+-			break;
+ 
+ 		iavf_trace(clean_rx_irq, rx_ring, rx_desc, skb);
+ 		rx_buffer = iavf_get_rx_buffer(rx_ring, size);
+@@ -1516,7 +1532,8 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
+ 		/* exit if we failed to retrieve a buffer */
+ 		if (!skb) {
+ 			rx_ring->rx_stats.alloc_buff_failed++;
+-			rx_buffer->pagecnt_bias++;
++			if (rx_buffer)
++				rx_buffer->pagecnt_bias++;
+ 			break;
+ 		}
+ 
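
Reordering the DD test ahead of the length read matters because the NIC
writes the descriptor (and the packet buffer) back over DMA: only after the
"descriptor done" bit is observed, with a dma_rmb() ordering any later
reads, may the other fields and the buffer be trusted; a zero length is
then handled gracefully instead of aborting the ring. A small sketch of the
ordering rule (the bit layout is illustrative, not iavf's):

#include <linux/bits.h>
#include <linux/compiler.h>
#include <asm/barrier.h>

struct rx_desc { u64 qword; };

#define DESC_DD BIT_ULL(0)	/* illustrative "descriptor done" bit */

/* True once the device has written the descriptor back; the dma_rmb()
 * orders the DD check before any read of the remaining fields or of
 * the packet buffer the descriptor describes. */
static bool desc_ready(const struct rx_desc *d)
{
	if (!(READ_ONCE(d->qword) & DESC_DD))
		return false;
	dma_rmb();
	return true;
}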
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 580d14b49fda..a725dc709632 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -5687,6 +5687,7 @@ static void igb_tx_ctxtdesc(struct igb_ring *tx_ring,
+ 	 */
+ 	if (tx_ring->launchtime_enable) {
+ 		ts = ns_to_timespec64(first->skb->tstamp);
++		first->skb->tstamp = 0;
+ 		context_desc->seqnum_seed = cpu_to_le32(ts.tv_nsec / 32);
+ 	} else {
+ 		context_desc->seqnum_seed = 0;
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+index acba067cc15a..7c52ae8ac005 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+@@ -3226,7 +3226,8 @@ static int ixgbe_get_module_info(struct net_device *dev,
+ 		page_swap = true;
+ 	}
+ 
+-	if (sff8472_rev == IXGBE_SFF_SFF_8472_UNSUP || page_swap) {
++	if (sff8472_rev == IXGBE_SFF_SFF_8472_UNSUP || page_swap ||
++	    !(addr_mode & IXGBE_SFF_DDM_IMPLEMENTED)) {
+ 		/* We have a SFP, but it does not support SFF-8472 */
+ 		modinfo->type = ETH_MODULE_SFF_8079;
+ 		modinfo->eeprom_len = ETH_MODULE_SFF_8079_LEN;
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+index ff85ce5791a3..31629fc7e820 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+@@ -842,6 +842,9 @@ void ixgbe_ipsec_vf_clear(struct ixgbe_adapter *adapter, u32 vf)
+ 	struct ixgbe_ipsec *ipsec = adapter->ipsec;
+ 	int i;
+ 
++	if (!ipsec)
++		return;
++
+ 	/* search rx sa table */
+ 	for (i = 0; i < IXGBE_IPSEC_MAX_SA_COUNT && ipsec->num_rx_sa; i++) {
+ 		if (!ipsec->rx_tbl[i].used)
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
+index 214b01085718..6544c4539c0d 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
+@@ -45,6 +45,7 @@
+ #define IXGBE_SFF_SOFT_RS_SELECT_10G		0x8
+ #define IXGBE_SFF_SOFT_RS_SELECT_1G		0x0
+ #define IXGBE_SFF_ADDRESSING_MODE		0x4
++#define IXGBE_SFF_DDM_IMPLEMENTED		0x40
+ #define IXGBE_SFF_QSFP_DA_ACTIVE_CABLE		0x1
+ #define IXGBE_SFF_QSFP_DA_PASSIVE_CABLE		0x8
+ #define IXGBE_SFF_QSFP_CONNECTOR_NOT_SEPARABLE	0x23
+diff --git a/drivers/net/ethernet/marvell/mvmdio.c b/drivers/net/ethernet/marvell/mvmdio.c
+index c5dac6bd2be4..ee7857298361 100644
+--- a/drivers/net/ethernet/marvell/mvmdio.c
++++ b/drivers/net/ethernet/marvell/mvmdio.c
+@@ -64,7 +64,7 @@
+ 
+ struct orion_mdio_dev {
+ 	void __iomem *regs;
+-	struct clk *clk[3];
++	struct clk *clk[4];
+ 	/*
+ 	 * If we have access to the error interrupt pin (which is
+ 	 * somewhat misnamed as it not only reflects internal errors
+@@ -321,6 +321,10 @@ static int orion_mdio_probe(struct platform_device *pdev)
+ 
+ 	for (i = 0; i < ARRAY_SIZE(dev->clk); i++) {
+ 		dev->clk[i] = of_clk_get(pdev->dev.of_node, i);
++		if (PTR_ERR(dev->clk[i]) == -EPROBE_DEFER) {
++			ret = -EPROBE_DEFER;
++			goto out_clk;
++		}
+ 		if (IS_ERR(dev->clk[i]))
+ 			break;
+ 		clk_prepare_enable(dev->clk[i]);
+@@ -362,6 +366,7 @@ out_mdio:
+ 	if (dev->err_interrupt > 0)
+ 		writel(0, dev->regs + MVMDIO_ERR_INT_MASK);
+ 
++out_clk:
+ 	for (i = 0; i < ARRAY_SIZE(dev->clk); i++) {
+ 		if (IS_ERR(dev->clk[i]))
+ 			break;
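
The fix distinguishes two IS_ERR() outcomes in an optional-clock loop:
-EPROBE_DEFER means the clock exists but its provider is not ready, so the
whole probe must unwind and be retried later, while any other error simply
marks the end of the optional list. A sketch of that split, assuming a
fixed-size array as in the driver:

#include <linux/clk.h>
#include <linux/of.h>

/* Acquire up to n optional clocks; propagate -EPROBE_DEFER, treat any
 * other error as "no more clocks". *got reports how many were enabled. */
static int get_optional_clks(struct device_node *np, struct clk **clks,
			     int n, int *got)
{
	int i;

	for (i = 0; i < n; i++) {
		clks[i] = of_clk_get(np, i);
		if (PTR_ERR(clks[i]) == -EPROBE_DEFER)
			return -EPROBE_DEFER;
		if (IS_ERR(clks[i]))
			break;
		clk_prepare_enable(clks[i]);
	}
	*got = i;
	return 0;
}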
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
+index ae2240074d8e..5692c6087bbb 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
+@@ -312,7 +312,8 @@ static void mvpp2_prs_sram_shift_set(struct mvpp2_prs_entry *pe, int shift,
+ 	}
+ 
+ 	/* Set value */
+-	pe->sram[MVPP2_BIT_TO_WORD(MVPP2_PRS_SRAM_SHIFT_OFFS)] = shift & MVPP2_PRS_SRAM_SHIFT_MASK;
++	pe->sram[MVPP2_BIT_TO_WORD(MVPP2_PRS_SRAM_SHIFT_OFFS)] |=
++		shift & MVPP2_PRS_SRAM_SHIFT_MASK;
+ 
+ 	/* Reset and set operation */
+ 	mvpp2_prs_sram_bits_clear(pe, MVPP2_PRS_SRAM_OP_SEL_SHIFT_OFFS,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index 8a67fd197b79..16ed6ebd31ee 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -950,7 +950,7 @@ static int esw_vport_enable_egress_acl(struct mlx5_eswitch *esw,
+ 		  vport->vport, MLX5_CAP_ESW_EGRESS_ACL(dev, log_max_ft_size));
+ 
+ 	root_ns = mlx5_get_flow_vport_acl_namespace(dev, MLX5_FLOW_NAMESPACE_ESW_EGRESS,
+-						    vport->vport);
++			mlx5_eswitch_vport_num_to_index(esw, vport->vport));
+ 	if (!root_ns) {
+ 		esw_warn(dev, "Failed to get E-Switch egress flow namespace for vport (%d)\n", vport->vport);
+ 		return -EOPNOTSUPP;
+@@ -1068,7 +1068,7 @@ static int esw_vport_enable_ingress_acl(struct mlx5_eswitch *esw,
+ 		  vport->vport, MLX5_CAP_ESW_INGRESS_ACL(dev, log_max_ft_size));
+ 
+ 	root_ns = mlx5_get_flow_vport_acl_namespace(dev, MLX5_FLOW_NAMESPACE_ESW_INGRESS,
+-						    vport->vport);
++			mlx5_eswitch_vport_num_to_index(esw, vport->vport));
+ 	if (!root_ns) {
+ 		esw_warn(dev, "Failed to get E-Switch ingress flow namespace for vport (%d)\n", vport->vport);
+ 		return -EOPNOTSUPP;
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+index 866cdc86a3f2..08045fd69fad 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_dev.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+@@ -3441,6 +3441,7 @@ static void qed_nvm_info_free(struct qed_hwfn *p_hwfn)
+ static int qed_hw_prepare_single(struct qed_hwfn *p_hwfn,
+ 				 void __iomem *p_regview,
+ 				 void __iomem *p_doorbells,
++				 u64 db_phys_addr,
+ 				 enum qed_pci_personality personality)
+ {
+ 	struct qed_dev *cdev = p_hwfn->cdev;
+@@ -3449,6 +3450,7 @@ static int qed_hw_prepare_single(struct qed_hwfn *p_hwfn,
+ 	/* Split PCI bars evenly between hwfns */
+ 	p_hwfn->regview = p_regview;
+ 	p_hwfn->doorbells = p_doorbells;
++	p_hwfn->db_phys_addr = db_phys_addr;
+ 
+ 	if (IS_VF(p_hwfn->cdev))
+ 		return qed_vf_hw_prepare(p_hwfn);
+@@ -3544,7 +3546,9 @@ int qed_hw_prepare(struct qed_dev *cdev,
+ 	/* Initialize the first hwfn - will learn number of hwfns */
+ 	rc = qed_hw_prepare_single(p_hwfn,
+ 				   cdev->regview,
+-				   cdev->doorbells, personality);
++				   cdev->doorbells,
++				   cdev->db_phys_addr,
++				   personality);
+ 	if (rc)
+ 		return rc;
+ 
+@@ -3553,22 +3557,25 @@ int qed_hw_prepare(struct qed_dev *cdev,
+ 	/* Initialize the rest of the hwfns */
+ 	if (cdev->num_hwfns > 1) {
+ 		void __iomem *p_regview, *p_doorbell;
+-		u8 __iomem *addr;
++		u64 db_phys_addr;
++		u32 offset;
+ 
+ 		/* adjust bar offset for second engine */
+-		addr = cdev->regview +
+-		       qed_hw_bar_size(p_hwfn, p_hwfn->p_main_ptt,
+-				       BAR_ID_0) / 2;
+-		p_regview = addr;
++		offset = qed_hw_bar_size(p_hwfn, p_hwfn->p_main_ptt,
++					 BAR_ID_0) / 2;
++		p_regview = cdev->regview + offset;
+ 
+-		addr = cdev->doorbells +
+-		       qed_hw_bar_size(p_hwfn, p_hwfn->p_main_ptt,
+-				       BAR_ID_1) / 2;
+-		p_doorbell = addr;
++		offset = qed_hw_bar_size(p_hwfn, p_hwfn->p_main_ptt,
++					 BAR_ID_1) / 2;
++
++		p_doorbell = cdev->doorbells + offset;
++
++		db_phys_addr = cdev->db_phys_addr + offset;
+ 
+ 		/* prepare second hw function */
+ 		rc = qed_hw_prepare_single(&cdev->hwfns[1], p_regview,
+-					   p_doorbell, personality);
++					   p_doorbell, db_phys_addr,
++					   personality);
+ 
+ 		/* in case of error, need to free the previously
+ 		 * initialized hwfn 0.
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
+index ded556b7bab5..eeea8683d99b 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
+@@ -2708,6 +2708,8 @@ qed_iwarp_ll2_start(struct qed_hwfn *p_hwfn,
+ 	data.input.rx_num_desc = n_ooo_bufs * 2;
+ 	data.input.tx_num_desc = data.input.rx_num_desc;
+ 	data.input.tx_max_bds_per_packet = QED_IWARP_MAX_BDS_PER_FPDU;
++	data.input.tx_tc = PKT_LB_TC;
++	data.input.tx_dest = QED_LL2_TX_DEST_LB;
+ 	data.p_connection_handle = &iwarp_info->ll2_mpa_handle;
+ 	data.input.secondary_queue = true;
+ 	data.cbs = &cbs;
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_rdma.c b/drivers/net/ethernet/qlogic/qed/qed_rdma.c
+index 7873d6dfd91f..13802b825d65 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_rdma.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_rdma.c
+@@ -803,7 +803,7 @@ static int qed_rdma_add_user(void *rdma_cxt,
+ 				     dpi_start_offset +
+ 				     ((out_params->dpi) * p_hwfn->dpi_size));
+ 
+-	out_params->dpi_phys_addr = p_hwfn->cdev->db_phys_addr +
++	out_params->dpi_phys_addr = p_hwfn->db_phys_addr +
+ 				    dpi_start_offset +
+ 				    ((out_params->dpi) * p_hwfn->dpi_size);
+ 
+diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
+index cba5881b2746..a10ef700f16d 100644
+--- a/drivers/net/ethernet/socionext/netsec.c
++++ b/drivers/net/ethernet/socionext/netsec.c
+@@ -1029,7 +1029,6 @@ static void netsec_free_dring(struct netsec_priv *priv, int id)
+ static int netsec_alloc_dring(struct netsec_priv *priv, enum ring_id id)
+ {
+ 	struct netsec_desc_ring *dring = &priv->desc_ring[id];
+-	int i;
+ 
+ 	dring->vaddr = dma_alloc_coherent(priv->dev, DESC_SZ * DESC_NUM,
+ 					  &dring->desc_dma, GFP_KERNEL);
+@@ -1040,19 +1039,6 @@ static int netsec_alloc_dring(struct netsec_priv *priv, enum ring_id id)
+ 	if (!dring->desc)
+ 		goto err;
+ 
+-	if (id == NETSEC_RING_TX) {
+-		for (i = 0; i < DESC_NUM; i++) {
+-			struct netsec_de *de;
+-
+-			de = dring->vaddr + (DESC_SZ * i);
+-			/* de->attr is not going to be accessed by the NIC
+-			 * until netsec_set_tx_de() is called.
+-			 * No need for a dma_wmb() here
+-			 */
+-			de->attr = 1U << NETSEC_TX_SHIFT_OWN_FIELD;
+-		}
+-	}
+-
+ 	return 0;
+ err:
+ 	netsec_free_dring(priv, id);
+@@ -1060,6 +1046,23 @@ err:
+ 	return -ENOMEM;
+ }
+ 
++static void netsec_setup_tx_dring(struct netsec_priv *priv)
++{
++	struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_TX];
++	int i;
++
++	for (i = 0; i < DESC_NUM; i++) {
++		struct netsec_de *de;
++
++		de = dring->vaddr + (DESC_SZ * i);
++		/* de->attr is not going to be accessed by the NIC
++		 * until netsec_set_tx_de() is called.
++		 * No need for a dma_wmb() here
++		 */
++		de->attr = 1U << NETSEC_TX_SHIFT_OWN_FIELD;
++	}
++}
++
+ static int netsec_setup_rx_dring(struct netsec_priv *priv)
+ {
+ 	struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
+@@ -1361,6 +1364,7 @@ static int netsec_netdev_open(struct net_device *ndev)
+ 
+ 	pm_runtime_get_sync(priv->dev);
+ 
++	netsec_setup_tx_dring(priv);
+ 	ret = netsec_setup_rx_dring(priv);
+ 	if (ret) {
+ 		netif_err(priv, probe, priv->ndev,
+diff --git a/drivers/net/ethernet/stmicro/stmmac/common.h b/drivers/net/ethernet/stmicro/stmmac/common.h
+index 272b9ca66314..b069b3a2453b 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/common.h
++++ b/drivers/net/ethernet/stmicro/stmmac/common.h
+@@ -261,7 +261,7 @@ struct stmmac_safety_stats {
+ #define STMMAC_COAL_TX_TIMER	1000
+ #define STMMAC_MAX_COAL_TX_TICK	100000
+ #define STMMAC_TX_MAX_FRAMES	256
+-#define STMMAC_TX_FRAMES	25
++#define STMMAC_TX_FRAMES	1
+ 
+ /* Packets types */
+ enum packets_types {
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+index ba124a4da793..8325e6499739 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+@@ -893,6 +893,11 @@ static int sun8i_dwmac_set_syscon(struct stmmac_priv *priv)
+ 		 * address. No need to mask it again.
+ 		 */
+ 		reg |= 1 << H3_EPHY_ADDR_SHIFT;
++	} else {
++		/* For SoCs without internal PHY the PHY selection bit should be
++		 * set to 0 (external PHY).
++		 */
++		reg &= ~H3_EPHY_SELECT;
+ 	}
+ 
+ 	if (!of_property_read_u32(node, "allwinner,tx-delay-ps", &val)) {
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
+index 0877bde6e860..21d131347e2e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
+@@ -216,6 +216,12 @@ static void dwmac1000_set_filter(struct mac_device_info *hw,
+ 					    GMAC_ADDR_LOW(reg));
+ 			reg++;
+ 		}
++
++		while (reg <= perfect_addr_number) {
++			writel(0, ioaddr + GMAC_ADDR_HIGH(reg));
++			writel(0, ioaddr + GMAC_ADDR_LOW(reg));
++			reg++;
++		}
+ 	}
+ 
+ #ifdef FRAME_FILTER_DEBUG
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+index 7e5d5db0d516..d0e6e1503581 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+@@ -444,14 +444,20 @@ static void dwmac4_set_filter(struct mac_device_info *hw,
+ 		 * are required
+ 		 */
+ 		value |= GMAC_PACKET_FILTER_PR;
+-	} else if (!netdev_uc_empty(dev)) {
+-		int reg = 1;
++	} else {
+ 		struct netdev_hw_addr *ha;
++		int reg = 1;
+ 
+ 		netdev_for_each_uc_addr(ha, dev) {
+ 			dwmac4_set_umac_addr(hw, ha->addr, reg);
+ 			reg++;
+ 		}
++
++		while (reg <= GMAC_MAX_PERFECT_ADDRESSES) {
++			writel(0, ioaddr + GMAC_ADDR_HIGH(reg));
++			writel(0, ioaddr + GMAC_ADDR_LOW(reg));
++			reg++;
++		}
+ 	}
+ 
+ 	writel(value, ioaddr + GMAC_PACKET_FILTER);
+@@ -469,8 +475,9 @@ static void dwmac4_flow_ctrl(struct mac_device_info *hw, unsigned int duplex,
+ 	if (fc & FLOW_RX) {
+ 		pr_debug("\tReceive Flow-Control ON\n");
+ 		flow |= GMAC_RX_FLOW_CTRL_RFE;
+-		writel(flow, ioaddr + GMAC_RX_FLOW_CTRL);
+ 	}
++	writel(flow, ioaddr + GMAC_RX_FLOW_CTRL);
++
+ 	if (fc & FLOW_TX) {
+ 		pr_debug("\tTransmit Flow-Control ON\n");
+ 
+@@ -478,7 +485,7 @@ static void dwmac4_flow_ctrl(struct mac_device_info *hw, unsigned int duplex,
+ 			pr_debug("\tduplex mode: PAUSE %d\n", pause_time);
+ 
+ 		for (queue = 0; queue < tx_cnt; queue++) {
+-			flow |= GMAC_TX_FLOW_CTRL_TFE;
++			flow = GMAC_TX_FLOW_CTRL_TFE;
+ 
+ 			if (duplex)
+ 				flow |=
+@@ -486,6 +493,9 @@ static void dwmac4_flow_ctrl(struct mac_device_info *hw, unsigned int duplex,
+ 
+ 			writel(flow, ioaddr + GMAC_QX_TX_FLOW_CTRL(queue));
+ 		}
++	} else {
++		for (queue = 0; queue < tx_cnt; queue++)
++			writel(0, ioaddr + GMAC_QX_TX_FLOW_CTRL(queue));
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index a634054dcb11..f3735d0458eb 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -2058,6 +2058,9 @@ static int stmmac_napi_check(struct stmmac_priv *priv, u32 chan)
+ 						 &priv->xstats, chan);
+ 	struct stmmac_channel *ch = &priv->channel[chan];
+ 
++	if (status)
++		status |= handle_rx | handle_tx;
++
+ 	if ((status & handle_rx) && (chan < priv->plat->rx_queues_to_use)) {
+ 		stmmac_disable_dma_irq(priv, priv->ioaddr, chan);
+ 		napi_schedule_irqoff(&ch->rx_napi);
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 4041c75997ba..38a8ef194e05 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -614,6 +614,10 @@ static void axienet_start_xmit_done(struct net_device *ndev)
+ 
+ 	ndev->stats.tx_packets += packets;
+ 	ndev->stats.tx_bytes += size;
++
++	/* Matches barrier in axienet_start_xmit */
++	smp_mb();
++
+ 	netif_wake_queue(ndev);
+ }
+ 
+@@ -669,9 +673,19 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ 	cur_p = &lp->tx_bd_v[lp->tx_bd_tail];
+ 
+ 	if (axienet_check_tx_bd_space(lp, num_frag)) {
+-		if (!netif_queue_stopped(ndev))
+-			netif_stop_queue(ndev);
+-		return NETDEV_TX_BUSY;
++		if (netif_queue_stopped(ndev))
++			return NETDEV_TX_BUSY;
++
++		netif_stop_queue(ndev);
++
++		/* Matches barrier in axienet_start_xmit_done */
++		smp_mb();
++
++		/* Space might have just been freed - check again */
++		if (axienet_check_tx_bd_space(lp, num_frag))
++			return NETDEV_TX_BUSY;
++
++		netif_wake_queue(ndev);
+ 	}
+ 
+ 	if (skb->ip_summed == CHECKSUM_PARTIAL) {
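
The rewritten stop path implements the classic lock-free queue-stop
handshake: stop, full barrier, re-check. The smp_mb() here pairs with the
one added in axienet_start_xmit_done(), so either the xmit path sees the
space freed by a racing completion, or the completion path sees the stopped
queue and wakes it. A generic sketch (ring_has_room stands in for the
BD-space check):

#include <linux/netdevice.h>

/* Stop the queue only if, after a full barrier, the ring is still
 * full; otherwise a racing completion already made room. */
static netdev_tx_t maybe_stop_queue(struct net_device *ndev,
				    bool (*ring_has_room)(void *), void *ring)
{
	if (ring_has_room(ring))
		return NETDEV_TX_OK;

	if (netif_queue_stopped(ndev))
		return NETDEV_TX_BUSY;

	netif_stop_queue(ndev);
	smp_mb();	/* pairs with the barrier before netif_wake_queue() */

	if (!ring_has_room(ring))
		return NETDEV_TX_BUSY;

	netif_wake_queue(ndev);
	return NETDEV_TX_OK;
}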
+diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
+index 7a145172d503..d178d5bad7e4 100644
+--- a/drivers/net/gtp.c
++++ b/drivers/net/gtp.c
+@@ -289,16 +289,29 @@ static int gtp1u_udp_encap_recv(struct gtp_dev *gtp, struct sk_buff *skb)
+ 	return gtp_rx(pctx, skb, hdrlen, gtp->role);
+ }
+ 
+-static void gtp_encap_destroy(struct sock *sk)
++static void __gtp_encap_destroy(struct sock *sk)
+ {
+ 	struct gtp_dev *gtp;
+ 
+-	gtp = rcu_dereference_sk_user_data(sk);
++	lock_sock(sk);
++	gtp = sk->sk_user_data;
+ 	if (gtp) {
++		if (gtp->sk0 == sk)
++			gtp->sk0 = NULL;
++		else
++			gtp->sk1u = NULL;
+ 		udp_sk(sk)->encap_type = 0;
+ 		rcu_assign_sk_user_data(sk, NULL);
+ 		sock_put(sk);
+ 	}
++	release_sock(sk);
++}
++
++static void gtp_encap_destroy(struct sock *sk)
++{
++	rtnl_lock();
++	__gtp_encap_destroy(sk);
++	rtnl_unlock();
+ }
+ 
+ static void gtp_encap_disable_sock(struct sock *sk)
+@@ -306,7 +319,7 @@ static void gtp_encap_disable_sock(struct sock *sk)
+ 	if (!sk)
+ 		return;
+ 
+-	gtp_encap_destroy(sk);
++	__gtp_encap_destroy(sk);
+ }
+ 
+ static void gtp_encap_disable(struct gtp_dev *gtp)
+@@ -800,7 +813,8 @@ static struct sock *gtp_encap_enable_socket(int fd, int type,
+ 		goto out_sock;
+ 	}
+ 
+-	if (rcu_dereference_sk_user_data(sock->sk)) {
++	lock_sock(sock->sk);
++	if (sock->sk->sk_user_data) {
+ 		sk = ERR_PTR(-EBUSY);
+ 		goto out_sock;
+ 	}
+@@ -816,6 +830,7 @@ static struct sock *gtp_encap_enable_socket(int fd, int type,
+ 	setup_udp_tunnel_sock(sock_net(sock->sk), sock, &tuncfg);
+ 
+ out_sock:
++	release_sock(sock->sk);
+ 	sockfd_put(sock);
+ 	return sk;
+ }
+@@ -847,8 +862,13 @@ static int gtp_encap_enable(struct gtp_dev *gtp, struct nlattr *data[])
+ 
+ 	if (data[IFLA_GTP_ROLE]) {
+ 		role = nla_get_u32(data[IFLA_GTP_ROLE]);
+-		if (role > GTP_ROLE_SGSN)
++		if (role > GTP_ROLE_SGSN) {
++			if (sk0)
++				gtp_encap_disable_sock(sk0);
++			if (sk1u)
++				gtp_encap_disable_sock(sk1u);
+ 			return -EINVAL;
++		}
+ 	}
+ 
+ 	gtp->sk0 = sk0;
+@@ -949,7 +969,7 @@ static int ipv4_pdp_add(struct gtp_dev *gtp, struct sock *sk,
+ 
+ 	}
+ 
+-	pctx = kmalloc(sizeof(struct pdp_ctx), GFP_KERNEL);
++	pctx = kmalloc(sizeof(*pctx), GFP_ATOMIC);
+ 	if (pctx == NULL)
+ 		return -ENOMEM;
+ 
+@@ -1038,6 +1058,7 @@ static int gtp_genl_new_pdp(struct sk_buff *skb, struct genl_info *info)
+ 		return -EINVAL;
+ 	}
+ 
++	rtnl_lock();
+ 	rcu_read_lock();
+ 
+ 	gtp = gtp_find_dev(sock_net(skb->sk), info->attrs);
+@@ -1062,6 +1083,7 @@ static int gtp_genl_new_pdp(struct sk_buff *skb, struct genl_info *info)
+ 
+ out_unlock:
+ 	rcu_read_unlock();
++	rtnl_unlock();
+ 	return err;
+ }
+ 
+@@ -1363,9 +1385,9 @@ late_initcall(gtp_init);
+ 
+ static void __exit gtp_fini(void)
+ {
+-	unregister_pernet_subsys(&gtp_net_ops);
+ 	genl_unregister_family(&gtp_genl_family);
+ 	rtnl_link_unregister(&gtp_link_ops);
++	unregister_pernet_subsys(&gtp_net_ops);
+ 
+ 	pr_info("GTP module unloaded\n");
+ }
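
Two locking rules drive the gtp hunks: plain sk->sk_user_data accesses must
be serialized with lock_sock()/release_sock() (the RCU accessor is only for
read-side paths), and the pdp allocation switches to GFP_ATOMIC because it
now runs under rcu_read_lock(), where sleeping allocations are forbidden.
A minimal sketch of the first rule (detach_user_data is illustrative):

#include <net/sock.h>

/* Detach private state from a UDP encap socket; the socket lock
 * serializes against concurrent setup/teardown paths. */
static void *detach_user_data(struct sock *sk)
{
	void *priv;

	lock_sock(sk);
	priv = sk->sk_user_data;
	rcu_assign_sk_user_data(sk, NULL);
	release_sock(sk);
	return priv;
}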
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index f6a6cc5bf118..e748aee82033 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -948,6 +948,9 @@ int phy_connect_direct(struct net_device *dev, struct phy_device *phydev,
+ {
+ 	int rc;
+ 
++	if (!dev)
++		return -EINVAL;
++
+ 	rc = phy_attach_direct(dev, phydev, phydev->dev_flags, interface);
+ 	if (rc)
+ 		return rc;
+@@ -1290,6 +1293,9 @@ struct phy_device *phy_attach(struct net_device *dev, const char *bus_id,
+ 	struct device *d;
+ 	int rc;
+ 
++	if (!dev)
++		return ERR_PTR(-EINVAL);
++
+ 	/* Search the list of PHY devices on the mdio bus for the
+ 	 * PHY with the requested name
+ 	 */
+diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
+index 71812be0ac64..b6efd2d41dce 100644
+--- a/drivers/net/phy/sfp.c
++++ b/drivers/net/phy/sfp.c
+@@ -186,10 +186,11 @@ struct sfp {
+ 	struct gpio_desc *gpio[GPIO_MAX];
+ 
+ 	bool attached;
++	struct mutex st_mutex;			/* Protects state */
+ 	unsigned int state;
+ 	struct delayed_work poll;
+ 	struct delayed_work timeout;
+-	struct mutex sm_mutex;
++	struct mutex sm_mutex;			/* Protects state machine */
+ 	unsigned char sm_mod_state;
+ 	unsigned char sm_dev_state;
+ 	unsigned short sm_state;
+@@ -1719,6 +1720,7 @@ static void sfp_check_state(struct sfp *sfp)
+ {
+ 	unsigned int state, i, changed;
+ 
++	mutex_lock(&sfp->st_mutex);
+ 	state = sfp_get_state(sfp);
+ 	changed = state ^ sfp->state;
+ 	changed &= SFP_F_PRESENT | SFP_F_LOS | SFP_F_TX_FAULT;
+@@ -1744,6 +1746,7 @@ static void sfp_check_state(struct sfp *sfp)
+ 		sfp_sm_event(sfp, state & SFP_F_LOS ?
+ 				SFP_E_LOS_HIGH : SFP_E_LOS_LOW);
+ 	rtnl_unlock();
++	mutex_unlock(&sfp->st_mutex);
+ }
+ 
+ static irqreturn_t sfp_irq(int irq, void *data)
+@@ -1774,6 +1777,7 @@ static struct sfp *sfp_alloc(struct device *dev)
+ 	sfp->dev = dev;
+ 
+ 	mutex_init(&sfp->sm_mutex);
++	mutex_init(&sfp->st_mutex);
+ 	INIT_DELAYED_WORK(&sfp->poll, sfp_poll);
+ 	INIT_DELAYED_WORK(&sfp->timeout, sfp_timeout);
+ 
+diff --git a/drivers/net/usb/asix_devices.c b/drivers/net/usb/asix_devices.c
+index 3d93993e74da..2eca4168af2f 100644
+--- a/drivers/net/usb/asix_devices.c
++++ b/drivers/net/usb/asix_devices.c
+@@ -238,7 +238,7 @@ static void asix_phy_reset(struct usbnet *dev, unsigned int reset_bits)
+ static int ax88172_bind(struct usbnet *dev, struct usb_interface *intf)
+ {
+ 	int ret = 0;
+-	u8 buf[ETH_ALEN];
++	u8 buf[ETH_ALEN] = {0};
+ 	int i;
+ 	unsigned long gpio_bits = dev->driver_info->data;
+ 
+@@ -689,7 +689,7 @@ static int asix_resume(struct usb_interface *intf)
+ static int ax88772_bind(struct usbnet *dev, struct usb_interface *intf)
+ {
+ 	int ret, i;
+-	u8 buf[ETH_ALEN], chipcode = 0;
++	u8 buf[ETH_ALEN] = {0}, chipcode = 0;
+ 	u32 phyid;
+ 	struct asix_common_private *priv;
+ 
+@@ -1073,7 +1073,7 @@ static const struct net_device_ops ax88178_netdev_ops = {
+ static int ax88178_bind(struct usbnet *dev, struct usb_interface *intf)
+ {
+ 	int ret;
+-	u8 buf[ETH_ALEN];
++	u8 buf[ETH_ALEN] = {0};
+ 
+ 	usbnet_get_endpoints(dev,intf);
+ 
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index 38ecb66fb3e9..82c25f07261f 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -806,6 +806,14 @@ static struct vxlan_fdb *vxlan_fdb_alloc(struct vxlan_dev *vxlan,
+ 	return f;
+ }
+ 
++static void vxlan_fdb_insert(struct vxlan_dev *vxlan, const u8 *mac,
++			     __be32 src_vni, struct vxlan_fdb *f)
++{
++	++vxlan->addrcnt;
++	hlist_add_head_rcu(&f->hlist,
++			   vxlan_fdb_head(vxlan, mac, src_vni));
++}
++
+ static int vxlan_fdb_create(struct vxlan_dev *vxlan,
+ 			    const u8 *mac, union vxlan_addr *ip,
+ 			    __u16 state, __be16 port, __be32 src_vni,
+@@ -831,18 +839,13 @@ static int vxlan_fdb_create(struct vxlan_dev *vxlan,
+ 		return rc;
+ 	}
+ 
+-	++vxlan->addrcnt;
+-	hlist_add_head_rcu(&f->hlist,
+-			   vxlan_fdb_head(vxlan, mac, src_vni));
+-
+ 	*fdb = f;
+ 
+ 	return 0;
+ }
+ 
+-static void vxlan_fdb_free(struct rcu_head *head)
++static void __vxlan_fdb_free(struct vxlan_fdb *f)
+ {
+-	struct vxlan_fdb *f = container_of(head, struct vxlan_fdb, rcu);
+ 	struct vxlan_rdst *rd, *nd;
+ 
+ 	list_for_each_entry_safe(rd, nd, &f->remotes, list) {
+@@ -852,6 +855,13 @@ static void vxlan_fdb_free(struct rcu_head *head)
+ 	kfree(f);
+ }
+ 
++static void vxlan_fdb_free(struct rcu_head *head)
++{
++	struct vxlan_fdb *f = container_of(head, struct vxlan_fdb, rcu);
++
++	__vxlan_fdb_free(f);
++}
++
+ static void vxlan_fdb_destroy(struct vxlan_dev *vxlan, struct vxlan_fdb *f,
+ 			      bool do_notify, bool swdev_notify)
+ {
+@@ -979,6 +989,7 @@ static int vxlan_fdb_update_create(struct vxlan_dev *vxlan,
+ 	if (rc < 0)
+ 		return rc;
+ 
++	vxlan_fdb_insert(vxlan, mac, src_vni, f);
+ 	rc = vxlan_fdb_notify(vxlan, f, first_remote_rtnl(f), RTM_NEWNEIGH,
+ 			      swdev_notify, extack);
+ 	if (rc)
+@@ -3573,12 +3584,17 @@ static int __vxlan_dev_create(struct net *net, struct net_device *dev,
+ 	if (err)
+ 		goto errout;
+ 
+-	/* notify default fdb entry */
+ 	if (f) {
++		vxlan_fdb_insert(vxlan, all_zeros_mac,
++				 vxlan->default_dst.remote_vni, f);
++
++		/* notify default fdb entry */
+ 		err = vxlan_fdb_notify(vxlan, f, first_remote_rtnl(f),
+ 				       RTM_NEWNEIGH, true, extack);
+-		if (err)
+-			goto errout;
++		if (err) {
++			vxlan_fdb_destroy(vxlan, f, false, false);
++			goto unregister;
++		}
+ 	}
+ 
+ 	list_add(&vxlan->next, &vn->vxlan_list);
+@@ -3590,7 +3606,8 @@ errout:
+ 	 * destroy the entry by hand here.
+ 	 */
+ 	if (f)
+-		vxlan_fdb_destroy(vxlan, f, false, false);
++		__vxlan_fdb_free(f);
++unregister:
+ 	if (unregister)
+ 		unregister_netdevice(dev);
+ 	return err;
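
Splitting vxlan_fdb_create() from vxlan_fdb_insert() lets the error paths
distinguish two lifetimes: an entry that was never hashed is invisible to
RCU readers and may be freed directly (__vxlan_fdb_free), while a published
entry must be unlinked and freed only after a grace period. A generic sketch
of that split (struct entry is illustrative):

#include <linux/rculist.h>
#include <linux/slab.h>

struct entry {
	struct hlist_node hlist;
	struct rcu_head rcu;
};

static void entry_free_rcu(struct rcu_head *head)
{
	kfree(container_of(head, struct entry, rcu));
}

/* Free an entry correctly for either lifetime: direct kfree() if it
 * was never published to an RCU-visible list, call_rcu() otherwise. */
static void entry_destroy(struct entry *e, bool published)
{
	if (!published) {
		kfree(e);
		return;
	}
	hlist_del_rcu(&e->hlist);
	call_rcu(&e->rcu, entry_free_rcu);
}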
+diff --git a/drivers/net/wireless/ath/ath10k/debugfs_sta.c b/drivers/net/wireless/ath/ath10k/debugfs_sta.c
+index c704ae371c4d..42931a669b02 100644
+--- a/drivers/net/wireless/ath/ath10k/debugfs_sta.c
++++ b/drivers/net/wireless/ath/ath10k/debugfs_sta.c
+@@ -663,6 +663,13 @@ static ssize_t ath10k_dbg_sta_dump_tx_stats(struct file *file,
+ 
+ 	mutex_lock(&ar->conf_mutex);
+ 
++	if (!arsta->tx_stats) {
++		ath10k_warn(ar, "failed to get tx stats");
++		mutex_unlock(&ar->conf_mutex);
++		kfree(buf);
++		return 0;
++	}
++
+ 	spin_lock_bh(&ar->data_lock);
+ 	for (k = 0; k < ATH10K_STATS_TYPE_MAX; k++) {
+ 		for (j = 0; j < ATH10K_COUNTER_TYPE_MAX; j++) {
+diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c
+index 1acc622d2183..f22840bbc389 100644
+--- a/drivers/net/wireless/ath/ath10k/htt_rx.c
++++ b/drivers/net/wireless/ath/ath10k/htt_rx.c
+@@ -2277,7 +2277,9 @@ static void ath10k_htt_rx_tx_compl_ind(struct ath10k *ar,
+ 		 *  Note that with only one concurrent reader and one concurrent
+ 		 *  writer, you don't need extra locking to use these macros.
+ 		 */
+-		if (!kfifo_put(&htt->txdone_fifo, tx_done)) {
++		if (ar->bus_param.dev_type == ATH10K_DEV_TYPE_HL) {
++			ath10k_txrx_tx_unref(htt, &tx_done);
++		} else if (!kfifo_put(&htt->txdone_fifo, tx_done)) {
+ 			ath10k_warn(ar, "txdone fifo overrun, msdu_id %d status %d\n",
+ 				    tx_done.msdu_id, tx_done.status);
+ 			ath10k_txrx_tx_unref(htt, &tx_done);
+diff --git a/drivers/net/wireless/ath/ath10k/hw.c b/drivers/net/wireless/ath/ath10k/hw.c
+index ad082b7d7643..b242085c3c16 100644
+--- a/drivers/net/wireless/ath/ath10k/hw.c
++++ b/drivers/net/wireless/ath/ath10k/hw.c
+@@ -158,7 +158,7 @@ const struct ath10k_hw_values qca6174_values = {
+ };
+ 
+ const struct ath10k_hw_values qca99x0_values = {
+-	.rtc_state_val_on		= 5,
++	.rtc_state_val_on		= 7,
+ 	.ce_count			= 12,
+ 	.msi_assign_ce_max		= 12,
+ 	.num_target_ce_config_wlan	= 10,
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 9c703d287333..b500fd427595 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -1630,6 +1630,10 @@ static int ath10k_mac_setup_prb_tmpl(struct ath10k_vif *arvif)
+ 	if (arvif->vdev_type != WMI_VDEV_TYPE_AP)
+ 		return 0;
+ 
++	 /* For mesh, probe response and beacon share the same template */
++	if (ieee80211_vif_is_mesh(vif))
++		return 0;
++
+ 	prb = ieee80211_proberesp_get(hw, vif);
+ 	if (!prb) {
+ 		ath10k_warn(ar, "failed to get probe resp template from mac80211\n");
+@@ -5588,8 +5592,8 @@ static void ath10k_bss_info_changed(struct ieee80211_hw *hw,
+ 	struct cfg80211_chan_def def;
+ 	u32 vdev_param, pdev_param, slottime, preamble;
+ 	u16 bitrate, hw_value;
+-	u8 rate, basic_rate_idx;
+-	int rateidx, ret = 0, hw_rate_code;
++	u8 rate, basic_rate_idx, rateidx;
++	int ret = 0, hw_rate_code, mcast_rate;
+ 	enum nl80211_band band;
+ 	const struct ieee80211_supported_band *sband;
+ 
+@@ -5776,7 +5780,11 @@ static void ath10k_bss_info_changed(struct ieee80211_hw *hw,
+ 	if (changed & BSS_CHANGED_MCAST_RATE &&
+ 	    !ath10k_mac_vif_chan(arvif->vif, &def)) {
+ 		band = def.chan->band;
+-		rateidx = vif->bss_conf.mcast_rate[band] - 1;
++		mcast_rate = vif->bss_conf.mcast_rate[band];
++		if (mcast_rate > 0)
++			rateidx = mcast_rate - 1;
++		else
++			rateidx = ffs(vif->bss_conf.basic_rates) - 1;
+ 
+ 		if (ar->phy_capability & WHAL_WLAN_11A_CAPABILITY)
+ 			rateidx += ATH10K_MAC_FIRST_OFDM_RATE_IDX;
+diff --git a/drivers/net/wireless/ath/ath10k/pci.c b/drivers/net/wireless/ath/ath10k/pci.c
+index 2c27f407a851..6e5f7ae00253 100644
+--- a/drivers/net/wireless/ath/ath10k/pci.c
++++ b/drivers/net/wireless/ath/ath10k/pci.c
+@@ -2059,6 +2059,11 @@ static void ath10k_pci_hif_stop(struct ath10k *ar)
+ 
+ 	ath10k_dbg(ar, ATH10K_DBG_BOOT, "boot hif stop\n");
+ 
++	ath10k_pci_irq_disable(ar);
++	ath10k_pci_irq_sync(ar);
++	napi_synchronize(&ar->napi);
++	napi_disable(&ar->napi);
++
+ 	/* Most likely the device has HTT Rx ring configured. The only way to
+ 	 * prevent the device from accessing (and possibly corrupting) host
+ 	 * memory is to reset the chip now.
+@@ -2072,10 +2077,6 @@ static void ath10k_pci_hif_stop(struct ath10k *ar)
+ 	 */
+ 	ath10k_pci_safe_chip_reset(ar);
+ 
+-	ath10k_pci_irq_disable(ar);
+-	ath10k_pci_irq_sync(ar);
+-	napi_synchronize(&ar->napi);
+-	napi_disable(&ar->napi);
+ 	ath10k_pci_flush(ar);
+ 
+ 	spin_lock_irqsave(&ar_pci->ps_lock, flags);
+diff --git a/drivers/net/wireless/ath/ath10k/qmi.c b/drivers/net/wireless/ath/ath10k/qmi.c
+index a7bc2c70d076..8f8f717a23ee 100644
+--- a/drivers/net/wireless/ath/ath10k/qmi.c
++++ b/drivers/net/wireless/ath/ath10k/qmi.c
+@@ -1002,6 +1002,7 @@ int ath10k_qmi_deinit(struct ath10k *ar)
+ 	qmi_handle_release(&qmi->qmi_hdl);
+ 	cancel_work_sync(&qmi->event_work);
+ 	destroy_workqueue(qmi->event_wq);
++	kfree(qmi);
+ 	ar_snoc->qmi = NULL;
+ 
+ 	return 0;
+diff --git a/drivers/net/wireless/ath/ath10k/sdio.c b/drivers/net/wireless/ath/ath10k/sdio.c
+index fae56c67766f..28bdf0212538 100644
+--- a/drivers/net/wireless/ath/ath10k/sdio.c
++++ b/drivers/net/wireless/ath/ath10k/sdio.c
+@@ -602,6 +602,10 @@ static int ath10k_sdio_mbox_rx_alloc(struct ath10k *ar,
+ 						    full_len,
+ 						    last_in_bundle,
+ 						    last_in_bundle);
++		if (ret) {
++			ath10k_warn(ar, "alloc_rx_pkt error %d\n", ret);
++			goto err;
++		}
+ 	}
+ 
+ 	ar_sdio->n_rx_pkts = i;
+@@ -2077,6 +2081,9 @@ static void ath10k_sdio_remove(struct sdio_func *func)
+ 	cancel_work_sync(&ar_sdio->wr_async_work);
+ 	ath10k_core_unregister(ar);
+ 	ath10k_core_destroy(ar);
++
++	flush_workqueue(ar_sdio->workqueue);
++	destroy_workqueue(ar_sdio->workqueue);
+ }
+ 
+ static const struct sdio_device_id ath10k_sdio_devices[] = {
+diff --git a/drivers/net/wireless/ath/ath10k/txrx.c b/drivers/net/wireless/ath/ath10k/txrx.c
+index c5818d28f55a..4102df016931 100644
+--- a/drivers/net/wireless/ath/ath10k/txrx.c
++++ b/drivers/net/wireless/ath/ath10k/txrx.c
+@@ -150,6 +150,9 @@ struct ath10k_peer *ath10k_peer_find_by_id(struct ath10k *ar, int peer_id)
+ {
+ 	struct ath10k_peer *peer;
+ 
++	if (peer_id >= BITS_PER_TYPE(peer->peer_ids))
++		return NULL;
++
+ 	lockdep_assert_held(&ar->data_lock);
+ 
+ 	list_for_each_entry(peer, &ar->peers, list)
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+index 582fb11f648a..02709fc99034 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+@@ -2840,8 +2840,10 @@ ath10k_wmi_tlv_op_gen_mgmt_tx_send(struct ath10k *ar, struct sk_buff *msdu,
+ 	if ((ieee80211_is_action(hdr->frame_control) ||
+ 	     ieee80211_is_deauth(hdr->frame_control) ||
+ 	     ieee80211_is_disassoc(hdr->frame_control)) &&
+-	     ieee80211_has_protected(hdr->frame_control))
++	     ieee80211_has_protected(hdr->frame_control)) {
++		skb_put(msdu, IEEE80211_CCMP_MIC_LEN);
+ 		buf_len += IEEE80211_CCMP_MIC_LEN;
++	}
+ 
+ 	buf_len = min_t(u32, buf_len, WMI_TLV_MGMT_TX_FRAME_MAX_LEN);
+ 	buf_len = round_up(buf_len, 4);
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.h b/drivers/net/wireless/ath/ath10k/wmi.h
+index e1c40bb69932..12f57f9adbba 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.h
++++ b/drivers/net/wireless/ath/ath10k/wmi.h
+@@ -4535,9 +4535,10 @@ enum wmi_10_4_stats_id {
+ };
+ 
+ enum wmi_tlv_stats_id {
+-	WMI_TLV_STAT_PDEV	= BIT(0),
+-	WMI_TLV_STAT_VDEV	= BIT(1),
+-	WMI_TLV_STAT_PEER	= BIT(2),
++	WMI_TLV_STAT_PEER	= BIT(0),
++	WMI_TLV_STAT_AP		= BIT(1),
++	WMI_TLV_STAT_PDEV	= BIT(2),
++	WMI_TLV_STAT_VDEV	= BIT(3),
+ 	WMI_TLV_STAT_PEER_EXTD  = BIT(10),
+ };
+ 
+diff --git a/drivers/net/wireless/ath/ath6kl/wmi.c b/drivers/net/wireless/ath/ath6kl/wmi.c
+index 68854c45d0a4..9ab6aa9ded5c 100644
+--- a/drivers/net/wireless/ath/ath6kl/wmi.c
++++ b/drivers/net/wireless/ath/ath6kl/wmi.c
+@@ -1176,6 +1176,10 @@ static int ath6kl_wmi_pstream_timeout_event_rx(struct wmi *wmi, u8 *datap,
+ 		return -EINVAL;
+ 
+ 	ev = (struct wmi_pstream_timeout_event *) datap;
++	if (ev->traffic_class >= WMM_NUM_AC) {
++		ath6kl_err("invalid traffic class: %d\n", ev->traffic_class);
++		return -EINVAL;
++	}
+ 
+ 	/*
+ 	 * When the pstream (fat pipe == AC) timesout, it means there were
+@@ -1517,6 +1521,10 @@ static int ath6kl_wmi_cac_event_rx(struct wmi *wmi, u8 *datap, int len,
+ 		return -EINVAL;
+ 
+ 	reply = (struct wmi_cac_event *) datap;
++	if (reply->ac >= WMM_NUM_AC) {
++		ath6kl_err("invalid AC: %d\n", reply->ac);
++		return -EINVAL;
++	}
+ 
+ 	if ((reply->cac_indication == CAC_INDICATION_ADMISSION_RESP) &&
+ 	    (reply->status_code != IEEE80211_TSPEC_STATUS_ADMISS_ACCEPTED)) {
+@@ -2633,7 +2641,7 @@ int ath6kl_wmi_delete_pstream_cmd(struct wmi *wmi, u8 if_idx, u8 traffic_class,
+ 	u16 active_tsids = 0;
+ 	int ret;
+ 
+-	if (traffic_class > 3) {
++	if (traffic_class >= WMM_NUM_AC) {
+ 		ath6kl_err("invalid traffic class: %d\n", traffic_class);
+ 		return -EINVAL;
+ 	}
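
The three ath6kl hunks make one point: an access-category index parsed out of a firmware event must be range-checked against WMM_NUM_AC before it is used, and the delete path replaces a magic 3 with the named constant. A compact sketch of the pattern, with illustrative names:

#include <stdio.h>

#define WMM_NUM_AC 4                     /* four WMM access categories */
static int ac_stats[WMM_NUM_AC];

/* Never trust an index parsed off the wire: range-check it first. */
static int handle_ac_event(unsigned int ac)
{
	if (ac >= WMM_NUM_AC) {
		fprintf(stderr, "invalid traffic class: %u\n", ac);
		return -1;                   /* the driver returns -EINVAL */
	}
	ac_stats[ac]++;
	return 0;
}

int main(void)
{
	handle_ac_event(2);                  /* accepted */
	handle_ac_event(7);                  /* rejected */
	return 0;
}
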
+diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c
+index 8581d917635a..b6773d613f0c 100644
+--- a/drivers/net/wireless/ath/ath9k/hw.c
++++ b/drivers/net/wireless/ath/ath9k/hw.c
+@@ -252,8 +252,9 @@ void ath9k_hw_get_channel_centers(struct ath_hw *ah,
+ /* Chip Revisions */
+ /******************/
+ 
+-static void ath9k_hw_read_revisions(struct ath_hw *ah)
++static bool ath9k_hw_read_revisions(struct ath_hw *ah)
+ {
++	u32 srev;
+ 	u32 val;
+ 
+ 	if (ah->get_mac_revision)
+@@ -269,25 +270,33 @@ static void ath9k_hw_read_revisions(struct ath_hw *ah)
+ 			val = REG_READ(ah, AR_SREV);
+ 			ah->hw_version.macRev = MS(val, AR_SREV_REVISION2);
+ 		}
+-		return;
++		return true;
+ 	case AR9300_DEVID_AR9340:
+ 		ah->hw_version.macVersion = AR_SREV_VERSION_9340;
+-		return;
++		return true;
+ 	case AR9300_DEVID_QCA955X:
+ 		ah->hw_version.macVersion = AR_SREV_VERSION_9550;
+-		return;
++		return true;
+ 	case AR9300_DEVID_AR953X:
+ 		ah->hw_version.macVersion = AR_SREV_VERSION_9531;
+-		return;
++		return true;
+ 	case AR9300_DEVID_QCA956X:
+ 		ah->hw_version.macVersion = AR_SREV_VERSION_9561;
+-		return;
++		return true;
+ 	}
+ 
+-	val = REG_READ(ah, AR_SREV) & AR_SREV_ID;
++	srev = REG_READ(ah, AR_SREV);
++
++	if (srev == -EIO) {
++		ath_err(ath9k_hw_common(ah),
++			"Failed to read SREV register");
++		return false;
++	}
++
++	val = srev & AR_SREV_ID;
+ 
+ 	if (val == 0xFF) {
+-		val = REG_READ(ah, AR_SREV);
++		val = srev;
+ 		ah->hw_version.macVersion =
+ 			(val & AR_SREV_VERSION2) >> AR_SREV_TYPE2_S;
+ 		ah->hw_version.macRev = MS(val, AR_SREV_REVISION2);
+@@ -306,6 +315,8 @@ static void ath9k_hw_read_revisions(struct ath_hw *ah)
+ 		if (ah->hw_version.macVersion == AR_SREV_VERSION_5416_PCIE)
+ 			ah->is_pciexpress = true;
+ 	}
++
++	return true;
+ }
+ 
+ /************************************/
+@@ -559,7 +570,10 @@ static int __ath9k_hw_init(struct ath_hw *ah)
+ 	struct ath_common *common = ath9k_hw_common(ah);
+ 	int r = 0;
+ 
+-	ath9k_hw_read_revisions(ah);
++	if (!ath9k_hw_read_revisions(ah)) {
++		ath_err(common, "Could not read hardware revisions");
++		return -EOPNOTSUPP;
++	}
+ 
+ 	switch (ah->hw_version.macVersion) {
+ 	case AR_SREV_VERSION_5416_PCI:
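
ath9k_hw_read_revisions() becomes fallible in this hunk: if the SREV read comes back as the driver's -EIO error pattern, probing now aborts with -EOPNOTSUPP instead of latching garbage revision fields. Here is a sketch of the shape of that change; the register accessor is a stub, and the failure value assumes EIO == 5 as on Linux.

#include <stdbool.h>
#include <stdio.h>

#define EOPNOTSUPP  95
#define READ_FAILED ((unsigned int)-5)   /* (u32)-EIO; EIO == 5 on Linux */

/* Stubbed MMIO read; a real one may return -EIO cast to u32 on error. */
static unsigned int reg_read_srev(void) { return READ_FAILED; }

static bool read_revisions(unsigned int *srev_out)
{
	unsigned int srev = reg_read_srev();

	if (srev == READ_FAILED)
		return false;                /* let the caller abort probe */
	*srev_out = srev;
	return true;
}

int main(void)
{
	unsigned int srev;

	if (!read_revisions(&srev)) {
		fprintf(stderr, "could not read hardware revisions\n");
		return EOPNOTSUPP;           /* mirrors __ath9k_hw_init() */
	}
	return 0;
}
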
+diff --git a/drivers/net/wireless/ath/ath9k/recv.c b/drivers/net/wireless/ath/ath9k/recv.c
+index 4e97f7f3b2a3..06e660858766 100644
+--- a/drivers/net/wireless/ath/ath9k/recv.c
++++ b/drivers/net/wireless/ath/ath9k/recv.c
+@@ -815,6 +815,7 @@ static int ath9k_rx_skb_preprocess(struct ath_softc *sc,
+ 	struct ath_common *common = ath9k_hw_common(ah);
+ 	struct ieee80211_hdr *hdr;
+ 	bool discard_current = sc->rx.discard_next;
++	bool is_phyerr;
+ 
+ 	/*
+ 	 * Discard corrupt descriptors which are marked in
+@@ -827,8 +828,11 @@ static int ath9k_rx_skb_preprocess(struct ath_softc *sc,
+ 
+ 	/*
+ 	 * Discard zero-length packets and packets smaller than an ACK
++	 * which are not PHY_ERROR (short radar pulses have a length of 3)
+ 	 */
+-	if (rx_stats->rs_datalen < 10) {
++	is_phyerr = rx_stats->rs_status & ATH9K_RXERR_PHY;
++	if (!rx_stats->rs_datalen ||
++	    (rx_stats->rs_datalen < 10 && !is_phyerr)) {
+ 		RX_STAT_INC(sc, rx_len_err);
+ 		goto corrupt;
+ 	}
+diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c
+index b17e1ca40995..3be0aeedb9b5 100644
+--- a/drivers/net/wireless/ath/ath9k/xmit.c
++++ b/drivers/net/wireless/ath/ath9k/xmit.c
+@@ -668,7 +668,8 @@ static bool bf_is_ampdu_not_probing(struct ath_buf *bf)
+ static void ath_tx_count_airtime(struct ath_softc *sc,
+ 				 struct ieee80211_sta *sta,
+ 				 struct ath_buf *bf,
+-				 struct ath_tx_status *ts)
++				 struct ath_tx_status *ts,
++				 u8 tid)
+ {
+ 	u32 airtime = 0;
+ 	int i;
+@@ -679,7 +680,7 @@ static void ath_tx_count_airtime(struct ath_softc *sc,
+ 		airtime += rate_dur * bf->rates[i].count;
+ 	}
+ 
+-	ieee80211_sta_register_airtime(sta, ts->tid, airtime, 0);
++	ieee80211_sta_register_airtime(sta, tid, airtime, 0);
+ }
+ 
+ static void ath_tx_process_buffer(struct ath_softc *sc, struct ath_txq *txq,
+@@ -709,7 +710,7 @@ static void ath_tx_process_buffer(struct ath_softc *sc, struct ath_txq *txq,
+ 	if (sta) {
+ 		struct ath_node *an = (struct ath_node *)sta->drv_priv;
+ 		tid = ath_get_skb_tid(sc, an, bf->bf_mpdu);
+-		ath_tx_count_airtime(sc, sta, bf, ts);
++		ath_tx_count_airtime(sc, sta, bf, ts, tid->tidno);
+ 		if (ts->ts_status & (ATH9K_TXERR_FILT | ATH9K_TXERR_XRETRY))
+ 			tid->clear_ps_filter = true;
+ 	}
+diff --git a/drivers/net/wireless/ath/dfs_pattern_detector.c b/drivers/net/wireless/ath/dfs_pattern_detector.c
+index d52b31b45df7..a274eb0d1968 100644
+--- a/drivers/net/wireless/ath/dfs_pattern_detector.c
++++ b/drivers/net/wireless/ath/dfs_pattern_detector.c
+@@ -111,7 +111,7 @@ static const struct radar_detector_specs jp_radar_ref_types[] = {
+ 	JP_PATTERN(0, 0, 1, 1428, 1428, 1, 18, 29, false),
+ 	JP_PATTERN(1, 2, 3, 3846, 3846, 1, 18, 29, false),
+ 	JP_PATTERN(2, 0, 1, 1388, 1388, 1, 18, 50, false),
+-	JP_PATTERN(3, 1, 2, 4000, 4000, 1, 18, 50, false),
++	JP_PATTERN(3, 0, 4, 4000, 4000, 1, 18, 50, false),
+ 	JP_PATTERN(4, 0, 5, 150, 230, 1, 23, 50, false),
+ 	JP_PATTERN(5, 6, 10, 200, 500, 1, 16, 50, false),
+ 	JP_PATTERN(6, 11, 20, 200, 500, 1, 12, 50, false),
+diff --git a/drivers/net/wireless/ath/wil6210/interrupt.c b/drivers/net/wireless/ath/wil6210/interrupt.c
+index 3f5bd177d55f..b00a13d6d530 100644
+--- a/drivers/net/wireless/ath/wil6210/interrupt.c
++++ b/drivers/net/wireless/ath/wil6210/interrupt.c
+@@ -296,21 +296,24 @@ void wil_configure_interrupt_moderation(struct wil6210_priv *wil)
+ static irqreturn_t wil6210_irq_rx(int irq, void *cookie)
+ {
+ 	struct wil6210_priv *wil = cookie;
+-	u32 isr = wil_ioread32_and_clear(wil->csr +
+-					 HOSTADDR(RGF_DMA_EP_RX_ICR) +
+-					 offsetof(struct RGF_ICR, ICR));
++	u32 isr;
+ 	bool need_unmask = true;
+ 
++	wil6210_mask_irq_rx(wil);
++
++	isr = wil_ioread32_and_clear(wil->csr +
++				     HOSTADDR(RGF_DMA_EP_RX_ICR) +
++				     offsetof(struct RGF_ICR, ICR));
++
+ 	trace_wil6210_irq_rx(isr);
+ 	wil_dbg_irq(wil, "ISR RX 0x%08x\n", isr);
+ 
+ 	if (unlikely(!isr)) {
+ 		wil_err_ratelimited(wil, "spurious IRQ: RX\n");
++		wil6210_unmask_irq_rx(wil);
+ 		return IRQ_NONE;
+ 	}
+ 
+-	wil6210_mask_irq_rx(wil);
+-
+ 	/* RX_DONE and RX_HTRSH interrupts are the same if interrupt
+ 	 * moderation is not used. Interrupt moderation may cause RX
+ 	 * buffer overflow while RX_DONE is delayed. The required
+@@ -355,21 +358,24 @@ static irqreturn_t wil6210_irq_rx(int irq, void *cookie)
+ static irqreturn_t wil6210_irq_rx_edma(int irq, void *cookie)
+ {
+ 	struct wil6210_priv *wil = cookie;
+-	u32 isr = wil_ioread32_and_clear(wil->csr +
+-					 HOSTADDR(RGF_INT_GEN_RX_ICR) +
+-					 offsetof(struct RGF_ICR, ICR));
++	u32 isr;
+ 	bool need_unmask = true;
+ 
++	wil6210_mask_irq_rx_edma(wil);
++
++	isr = wil_ioread32_and_clear(wil->csr +
++				     HOSTADDR(RGF_INT_GEN_RX_ICR) +
++				     offsetof(struct RGF_ICR, ICR));
++
+ 	trace_wil6210_irq_rx(isr);
+ 	wil_dbg_irq(wil, "ISR RX 0x%08x\n", isr);
+ 
+ 	if (unlikely(!isr)) {
+ 		wil_err(wil, "spurious IRQ: RX\n");
++		wil6210_unmask_irq_rx_edma(wil);
+ 		return IRQ_NONE;
+ 	}
+ 
+-	wil6210_mask_irq_rx_edma(wil);
+-
+ 	if (likely(isr & BIT_RX_STATUS_IRQ)) {
+ 		wil_dbg_irq(wil, "RX status ring\n");
+ 		isr &= ~BIT_RX_STATUS_IRQ;
+@@ -403,21 +409,24 @@ static irqreturn_t wil6210_irq_rx_edma(int irq, void *cookie)
+ static irqreturn_t wil6210_irq_tx_edma(int irq, void *cookie)
+ {
+ 	struct wil6210_priv *wil = cookie;
+-	u32 isr = wil_ioread32_and_clear(wil->csr +
+-					 HOSTADDR(RGF_INT_GEN_TX_ICR) +
+-					 offsetof(struct RGF_ICR, ICR));
++	u32 isr;
+ 	bool need_unmask = true;
+ 
++	wil6210_mask_irq_tx_edma(wil);
++
++	isr = wil_ioread32_and_clear(wil->csr +
++				     HOSTADDR(RGF_INT_GEN_TX_ICR) +
++				     offsetof(struct RGF_ICR, ICR));
++
+ 	trace_wil6210_irq_tx(isr);
+ 	wil_dbg_irq(wil, "ISR TX 0x%08x\n", isr);
+ 
+ 	if (unlikely(!isr)) {
+ 		wil_err(wil, "spurious IRQ: TX\n");
++		wil6210_unmask_irq_tx_edma(wil);
+ 		return IRQ_NONE;
+ 	}
+ 
+-	wil6210_mask_irq_tx_edma(wil);
+-
+ 	if (likely(isr & BIT_TX_STATUS_IRQ)) {
+ 		wil_dbg_irq(wil, "TX status ring\n");
+ 		isr &= ~BIT_TX_STATUS_IRQ;
+@@ -446,21 +455,24 @@ static irqreturn_t wil6210_irq_tx_edma(int irq, void *cookie)
+ static irqreturn_t wil6210_irq_tx(int irq, void *cookie)
+ {
+ 	struct wil6210_priv *wil = cookie;
+-	u32 isr = wil_ioread32_and_clear(wil->csr +
+-					 HOSTADDR(RGF_DMA_EP_TX_ICR) +
+-					 offsetof(struct RGF_ICR, ICR));
++	u32 isr;
+ 	bool need_unmask = true;
+ 
++	wil6210_mask_irq_tx(wil);
++
++	isr = wil_ioread32_and_clear(wil->csr +
++				     HOSTADDR(RGF_DMA_EP_TX_ICR) +
++				     offsetof(struct RGF_ICR, ICR));
++
+ 	trace_wil6210_irq_tx(isr);
+ 	wil_dbg_irq(wil, "ISR TX 0x%08x\n", isr);
+ 
+ 	if (unlikely(!isr)) {
+ 		wil_err_ratelimited(wil, "spurious IRQ: TX\n");
++		wil6210_unmask_irq_tx(wil);
+ 		return IRQ_NONE;
+ 	}
+ 
+-	wil6210_mask_irq_tx(wil);
+-
+ 	if (likely(isr & BIT_DMA_EP_TX_ICR_TX_DONE)) {
+ 		wil_dbg_irq(wil, "TX done\n");
+ 		isr &= ~BIT_DMA_EP_TX_ICR_TX_DONE;
+@@ -532,20 +544,23 @@ static bool wil_validate_mbox_regs(struct wil6210_priv *wil)
+ static irqreturn_t wil6210_irq_misc(int irq, void *cookie)
+ {
+ 	struct wil6210_priv *wil = cookie;
+-	u32 isr = wil_ioread32_and_clear(wil->csr +
+-					 HOSTADDR(RGF_DMA_EP_MISC_ICR) +
+-					 offsetof(struct RGF_ICR, ICR));
++	u32 isr;
++
++	wil6210_mask_irq_misc(wil, false);
++
++	isr = wil_ioread32_and_clear(wil->csr +
++				     HOSTADDR(RGF_DMA_EP_MISC_ICR) +
++				     offsetof(struct RGF_ICR, ICR));
+ 
+ 	trace_wil6210_irq_misc(isr);
+ 	wil_dbg_irq(wil, "ISR MISC 0x%08x\n", isr);
+ 
+ 	if (!isr) {
+ 		wil_err(wil, "spurious IRQ: MISC\n");
++		wil6210_unmask_irq_misc(wil, false);
+ 		return IRQ_NONE;
+ 	}
+ 
+-	wil6210_mask_irq_misc(wil, false);
+-
+ 	if (isr & ISR_MISC_FW_ERROR) {
+ 		u32 fw_assert_code = wil_r(wil, wil->rgf_fw_assert_code_addr);
+ 		u32 ucode_assert_code =
+@@ -580,7 +595,7 @@ static irqreturn_t wil6210_irq_misc(int irq, void *cookie)
+ 			/* no need to handle HALP ICRs until next vote */
+ 			wil->halp.handle_icr = false;
+ 			wil_dbg_irq(wil, "irq_misc: HALP IRQ invoked\n");
+-			wil6210_mask_halp(wil);
++			wil6210_mask_irq_misc(wil, true);
+ 			complete(&wil->halp.comp);
+ 		}
+ 	}
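
Every wil6210 handler above is reordered the same way: mask the interrupt source first, then read-and-clear the ICR, and unmask again on the spurious (zero-ISR) exit. That closes the window in which a new cause could fire between sampling the register and masking. A self-contained model of the pattern:

#include <stdio.h>

static unsigned int hw_icr = 0x4;        /* pretend a cause is pending */
static int masked;

static void irq_mask(void)   { masked = 1; }
static void irq_unmask(void) { masked = 0; }

static unsigned int read_and_clear_icr(void)
{
	unsigned int isr = hw_icr;

	hw_icr = 0;
	return isr;
}

/* Illustrative handler: mask first, then sample the cause register. */
static int irq_handler(void)
{
	unsigned int isr;

	irq_mask();                          /* no new causes can race us */
	isr = read_and_clear_icr();

	if (!isr) {
		irq_unmask();                /* spurious: restore and bail */
		return 0;                    /* IRQ_NONE */
	}
	printf("handling causes 0x%x\n", isr);
	irq_unmask();
	return 1;                            /* IRQ_HANDLED */
}

int main(void) { return !irq_handler(); }
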
+diff --git a/drivers/net/wireless/ath/wil6210/txrx.c b/drivers/net/wireless/ath/wil6210/txrx.c
+index 4ccfd1404458..d74837cce67f 100644
+--- a/drivers/net/wireless/ath/wil6210/txrx.c
++++ b/drivers/net/wireless/ath/wil6210/txrx.c
+@@ -750,6 +750,7 @@ void wil_netif_rx_any(struct sk_buff *skb, struct net_device *ndev)
+ 		[GRO_HELD]		= "GRO_HELD",
+ 		[GRO_NORMAL]		= "GRO_NORMAL",
+ 		[GRO_DROP]		= "GRO_DROP",
++		[GRO_CONSUMED]		= "GRO_CONSUMED",
+ 	};
+ 
+ 	wil->txrx_ops.get_netif_rx_params(skb, &cid, &security);
+diff --git a/drivers/net/wireless/ath/wil6210/wmi.c b/drivers/net/wireless/ath/wil6210/wmi.c
+index 63116f4b62c7..de52e532c105 100644
+--- a/drivers/net/wireless/ath/wil6210/wmi.c
++++ b/drivers/net/wireless/ath/wil6210/wmi.c
+@@ -3211,7 +3211,18 @@ static void wmi_event_handle(struct wil6210_priv *wil,
+ 		/* check if someone waits for this event */
+ 		if (wil->reply_id && wil->reply_id == id &&
+ 		    wil->reply_mid == mid) {
+-			WARN_ON(wil->reply_buf);
++			if (wil->reply_buf) {
++				/* An event was received while wmi_call is
++				 * waiting with a buffer. Such an event should
++				 * be handled in wmi_recv_cmd. Handling it here
++				 * means a previous wmi_call timed out. Drop
++				 * the event and do not handle it.
++				 */
++				wil_err(wil,
++					"Old event (%d, %s) while wmi_call is waiting. Drop it and Continue waiting\n",
++					id, eventid2name(id));
++				return;
++			}
+ 
+ 			wmi_evt_call_handler(vif, id, evt_data,
+ 					     len - sizeof(*wmi));
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+index d7380016f1c0..c30f626b1602 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+@@ -2146,8 +2146,6 @@ void iwl_fw_dbg_collect_sync(struct iwl_fw_runtime *fwrt)
+ 	/* start recording again if the firmware is not crashed */
+ 	if (!test_bit(STATUS_FW_ERROR, &fwrt->trans->status) &&
+ 	    fwrt->fw->dbg.dest_tlv) {
+-		/* wait before we collect the data till the DBGC stop */
+-		udelay(500);
+ 		iwl_fw_dbg_restart_recording(fwrt, &params);
+ 	}
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.h b/drivers/net/wireless/intel/iwlwifi/fw/dbg.h
+index a199056234d3..97fcd57e17d8 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.h
+@@ -297,7 +297,10 @@ _iwl_fw_dbg_stop_recording(struct iwl_trans *trans,
+ 	}
+ 
+ 	iwl_write_umac_prph(trans, DBGC_IN_SAMPLE, 0);
+-	udelay(100);
++	/* wait for the DBGC to finish writing the internal buffer to DRAM to
++	 * avoid halting the HW while writing
++	 */
++	usleep_range(700, 1000);
+ 	iwl_write_umac_prph(trans, DBGC_OUT_CTRL, 0);
+ #ifdef CONFIG_IWLWIFI_DEBUGFS
+ 	trans->dbg_rec_on = false;
+@@ -327,7 +330,6 @@ _iwl_fw_dbg_restart_recording(struct iwl_trans *trans,
+ 		iwl_set_bits_prph(trans, MON_BUFF_SAMPLE_CTL, 0x1);
+ 	} else {
+ 		iwl_write_umac_prph(trans, DBGC_IN_SAMPLE, params->in_sample);
+-		udelay(100);
+ 		iwl_write_umac_prph(trans, DBGC_OUT_CTRL, params->out_ctrl);
+ 	}
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/smem.c b/drivers/net/wireless/intel/iwlwifi/fw/smem.c
+index ff85d69c2a8c..557ee47bffd8 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/smem.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/smem.c
+@@ -8,7 +8,7 @@
+  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
+  * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+  * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
+- * Copyright(c) 2018 Intel Corporation
++ * Copyright(c) 2018 - 2019 Intel Corporation
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of version 2 of the GNU General Public License as
+@@ -31,7 +31,7 @@
+  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
+  * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+  * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
+- * Copyright(c) 2018 Intel Corporation
++ * Copyright(c) 2018 - 2019 Intel Corporation
+  * All rights reserved.
+  *
+  * Redistribution and use in source and binary forms, with or without
+@@ -134,6 +134,7 @@ void iwl_get_shared_mem_conf(struct iwl_fw_runtime *fwrt)
+ 		.len = { 0, },
+ 	};
+ 	struct iwl_rx_packet *pkt;
++	int ret;
+ 
+ 	if (fw_has_capa(&fwrt->fw->ucode_capa,
+ 			IWL_UCODE_TLV_CAPA_EXTEND_SHARED_MEM_CFG))
+@@ -141,8 +142,13 @@ void iwl_get_shared_mem_conf(struct iwl_fw_runtime *fwrt)
+ 	else
+ 		cmd.id = SHARED_MEM_CFG;
+ 
+-	if (WARN_ON(iwl_trans_send_cmd(fwrt->trans, &cmd)))
++	ret = iwl_trans_send_cmd(fwrt->trans, &cmd);
++
++	if (ret) {
++		WARN(ret != -ERFKILL,
++		     "Could not send the SMEM command: %d\n", ret);
+ 		return;
++	}
+ 
+ 	pkt = cmd.resp_pkt;
+ 	if (fwrt->trans->cfg->device_family >= IWL_DEVICE_FAMILY_22000)
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-csr.h b/drivers/net/wireless/intel/iwlwifi/iwl-csr.h
+index e539bc94eff7..426c90bb13b5 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-csr.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-csr.h
+@@ -335,6 +335,7 @@ enum {
+ /* RF_ID value */
+ #define CSR_HW_RF_ID_TYPE_JF		(0x00105100)
+ #define CSR_HW_RF_ID_TYPE_HR		(0x0010A000)
++#define CSR_HW_RF_ID_TYPE_HR1		(0x0010c100)
+ #define CSR_HW_RF_ID_TYPE_HRCDB		(0x00109F00)
+ #define CSR_HW_RF_ID_TYPE_GF		(0x0010D000)
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index 153717587aeb..559f6df1a74d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -419,6 +419,8 @@ static int iwl_run_unified_mvm_ucode(struct iwl_mvm *mvm, bool read_nvm)
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+ 
++	mvm->rfkill_safe_init_done = false;
++
+ 	iwl_init_notification_wait(&mvm->notif_wait,
+ 				   &init_wait,
+ 				   init_complete,
+@@ -537,8 +539,7 @@ int iwl_run_init_mvm_ucode(struct iwl_mvm *mvm, bool read_nvm)
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+ 
+-	if (WARN_ON_ONCE(mvm->rfkill_safe_init_done))
+-		return 0;
++	mvm->rfkill_safe_init_done = false;
+ 
+ 	iwl_init_notification_wait(&mvm->notif_wait,
+ 				   &calib_wait,
+@@ -1108,10 +1109,13 @@ static int iwl_mvm_load_rt_fw(struct iwl_mvm *mvm)
+ 
+ 	iwl_fw_dbg_apply_point(&mvm->fwrt, IWL_FW_INI_APPLY_EARLY);
+ 
++	mvm->rfkill_safe_init_done = false;
+ 	ret = iwl_mvm_load_ucode_wait_alive(mvm, IWL_UCODE_REGULAR);
+ 	if (ret)
+ 		return ret;
+ 
++	mvm->rfkill_safe_init_done = true;
++
+ 	iwl_fw_dbg_apply_point(&mvm->fwrt, IWL_FW_INI_APPLY_AFTER_ALIVE);
+ 
+ 	return iwl_init_paging(&mvm->fwrt, mvm->fwrt.cur_fw_img);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 4ddf620c267d..5caadaef707d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -207,6 +207,12 @@ static const struct cfg80211_pmsr_capabilities iwl_mvm_pmsr_capa = {
+ 	},
+ };
+ 
++static int iwl_mvm_mac_set_key(struct ieee80211_hw *hw,
++			       enum set_key_cmd cmd,
++			       struct ieee80211_vif *vif,
++			       struct ieee80211_sta *sta,
++			       struct ieee80211_key_conf *key);
++
+ void iwl_mvm_ref(struct iwl_mvm *mvm, enum iwl_mvm_ref_type ref_type)
+ {
+ 	if (!iwl_mvm_is_d0i3_supported(mvm))
+@@ -2535,7 +2541,7 @@ static int iwl_mvm_start_ap_ibss(struct ieee80211_hw *hw,
+ {
+ 	struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+ 	struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+-	int ret;
++	int ret, i;
+ 
+ 	/*
+ 	 * iwl_mvm_mac_ctxt_add() might read directly from the device
+@@ -2609,6 +2615,20 @@ static int iwl_mvm_start_ap_ibss(struct ieee80211_hw *hw,
+ 	/* must be set before quota calculations */
+ 	mvmvif->ap_ibss_active = true;
+ 
++	/* send all the early keys to the device now */
++	for (i = 0; i < ARRAY_SIZE(mvmvif->ap_early_keys); i++) {
++		struct ieee80211_key_conf *key = mvmvif->ap_early_keys[i];
++
++		if (!key)
++			continue;
++
++		mvmvif->ap_early_keys[i] = NULL;
++
++		ret = iwl_mvm_mac_set_key(hw, SET_KEY, vif, NULL, key);
++		if (ret)
++			goto out_quota_failed;
++	}
++
+ 	if (vif->type == NL80211_IFTYPE_AP && !vif->p2p) {
+ 		iwl_mvm_vif_set_low_latency(mvmvif, true,
+ 					    LOW_LATENCY_VIF_TYPE);
+@@ -3378,11 +3398,12 @@ static int iwl_mvm_mac_set_key(struct ieee80211_hw *hw,
+ 			       struct ieee80211_sta *sta,
+ 			       struct ieee80211_key_conf *key)
+ {
++	struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ 	struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+ 	struct iwl_mvm_sta *mvmsta;
+ 	struct iwl_mvm_key_pn *ptk_pn;
+ 	int keyidx = key->keyidx;
+-	int ret;
++	int ret, i;
+ 	u8 key_offset;
+ 
+ 	if (iwlwifi_mod_params.swcrypto) {
+@@ -3455,6 +3476,22 @@ static int iwl_mvm_mac_set_key(struct ieee80211_hw *hw,
+ 				key->hw_key_idx = STA_KEY_IDX_INVALID;
+ 				break;
+ 			}
++
++			if (!mvmvif->ap_ibss_active) {
++				for (i = 0;
++				     i < ARRAY_SIZE(mvmvif->ap_early_keys);
++				     i++) {
++					if (!mvmvif->ap_early_keys[i]) {
++						mvmvif->ap_early_keys[i] = key;
++						break;
++					}
++				}
++
++				if (i >= ARRAY_SIZE(mvmvif->ap_early_keys))
++					ret = -ENOSPC;
++
++				break;
++			}
+ 		}
+ 
+ 		/* During FW restart, in order to restore the state as it was,
+@@ -3523,6 +3560,18 @@ static int iwl_mvm_mac_set_key(struct ieee80211_hw *hw,
+ 
+ 		break;
+ 	case DISABLE_KEY:
++		ret = -ENOENT;
++		for (i = 0; i < ARRAY_SIZE(mvmvif->ap_early_keys); i++) {
++			if (mvmvif->ap_early_keys[i] == key) {
++				mvmvif->ap_early_keys[i] = NULL;
++				ret = 0;
++			}
++		}
++
++		/* found in pending list - don't do anything else */
++		if (ret == 0)
++			break;
++
+ 		if (key->hw_key_idx == STA_KEY_IDX_INVALID) {
+ 			ret = 0;
+ 			break;
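
The mac80211.c and mvm.h hunks add a small pending-key store: group keys that mac80211 installs before the AP is actually started are parked in ap_early_keys and replayed from iwl_mvm_start_ap_ibss(), while DISABLE_KEY checks the pending list first. A userspace sketch of the stash-and-replay idea; every name here, including the backend stub, is illustrative.

#include <stddef.h>
#include <stdio.h>

#define MAX_EARLY_KEYS 4                 /* 2 GTK + 2 IGTK, as in the patch */

struct key { int id; };
static const struct key *early_keys[MAX_EARLY_KEYS];
static int ap_active;

/* Stub for "program the key into the firmware". */
static int hw_install_key(const struct key *k)
{
	printf("installed key %d\n", k->id);
	return 0;
}

/* Park keys that arrive too early; -1 models -ENOSPC. */
static int set_key(const struct key *k)
{
	if (!ap_active) {
		for (size_t i = 0; i < MAX_EARLY_KEYS; i++)
			if (!early_keys[i]) { early_keys[i] = k; return 0; }
		return -1;
	}
	return hw_install_key(k);
}

/* Once the AP context exists, replay everything that was parked. */
static int start_ap(void)
{
	ap_active = 1;
	for (size_t i = 0; i < MAX_EARLY_KEYS; i++) {
		const struct key *k = early_keys[i];

		if (!k)
			continue;
		early_keys[i] = NULL;
		if (hw_install_key(k))
			return -1;
	}
	return 0;
}

int main(void)
{
	struct key gtk = { 1 };

	set_key(&gtk);                       /* parked: AP not started yet */
	return start_ap();                   /* replayed here */
}
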
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+index b698d55ace1b..e1f90a4ae14f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+@@ -498,6 +498,9 @@ struct iwl_mvm_vif {
+ 	netdev_features_t features;
+ 
+ 	struct iwl_probe_resp_data __rcu *probe_resp_data;
++
++	/* we can only have 2 GTK + 2 IGTK active at a time */
++	struct ieee80211_key_conf *ap_early_keys[4];
+ };
+ 
+ static inline struct iwl_mvm_vif *
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index 0c2aabc842f9..96f8d38ea321 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -726,6 +726,9 @@ int iwl_mvm_tx_skb_non_sta(struct iwl_mvm *mvm, struct sk_buff *skb)
+ 
+ 	memcpy(&info, skb->cb, sizeof(info));
+ 
++	if (WARN_ON_ONCE(skb->len > IEEE80211_MAX_DATA_LEN + hdrlen))
++		return -1;
++
+ 	if (WARN_ON_ONCE(info.flags & IEEE80211_TX_CTL_AMPDU))
+ 		return -1;
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+index 1e36459948db..504c535309d5 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+@@ -168,7 +168,7 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
+ 
+ 	memcpy(iml_img, trans->iml, trans->iml_len);
+ 
+-	iwl_enable_interrupts(trans);
++	iwl_enable_fw_load_int_ctx_info(trans);
+ 
+ 	/* kick FW self load */
+ 	iwl_write64(trans, CSR_CTXT_INFO_ADDR,
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
+index 9274e317cc77..eeb349134056 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
+@@ -222,7 +222,7 @@ int iwl_pcie_ctxt_info_init(struct iwl_trans *trans,
+ 
+ 	trans_pcie->ctxt_info = ctxt_info;
+ 
+-	iwl_enable_interrupts(trans);
++	iwl_enable_fw_load_int_ctx_info(trans);
+ 
+ 	/* Configure debug, if exists */
+ 	if (iwl_pcie_dbg_on(trans))
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+index 2afce5c41322..98128024b95e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+@@ -894,6 +894,33 @@ static inline void iwl_enable_fw_load_int(struct iwl_trans *trans)
+ 	}
+ }
+ 
++static inline void iwl_enable_fw_load_int_ctx_info(struct iwl_trans *trans)
++{
++	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
++
++	IWL_DEBUG_ISR(trans, "Enabling ALIVE interrupt only\n");
++
++	if (!trans_pcie->msix_enabled) {
++		/*
++		 * When we'll receive the ALIVE interrupt, the ISR will call
++		 * iwl_enable_fw_load_int_ctx_info again to set the ALIVE
++		 * interrupt (which is not really needed anymore) but also the
++		 * RX interrupt which will allow us to receive the ALIVE
++		 * notification (which is Rx) and continue the flow.
++		 */
++		trans_pcie->inta_mask = CSR_INT_BIT_ALIVE | CSR_INT_BIT_FH_RX;
++		iwl_write32(trans, CSR_INT_MASK, trans_pcie->inta_mask);
++	} else {
++		iwl_enable_hw_int_msk_msix(trans,
++					   MSIX_HW_INT_CAUSES_REG_ALIVE);
++		/*
++		 * Leave all the FH causes enabled to get the ALIVE
++		 * notification.
++		 */
++		iwl_enable_fh_int_msk_msix(trans, trans_pcie->fh_init_mask);
++	}
++}
++
+ static inline u16 iwl_pcie_get_cmd_index(const struct iwl_txq *q, u32 index)
+ {
+ 	return index & (q->n_window - 1);
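
The new iwl_enable_fw_load_int_ctx_info() narrows the interrupt mask during firmware load to the ALIVE cause plus the FH RX causes that actually deliver the ALIVE notification; the full mask is restored only from the fw_alive path later in this patch. A toy illustration of staging interrupt masks this way (the bit names are made up):

#include <stdio.h>

#define INT_BIT_ALIVE (1u << 0)
#define INT_BIT_FH_RX (1u << 1)
#define INT_BIT_OTHER (1u << 2)          /* everything not wanted yet */

static unsigned int int_mask;

/* During firmware load, accept only what the load flow needs. */
static void enable_fw_load_ints(void)
{
	int_mask = INT_BIT_ALIVE | INT_BIT_FH_RX;
}

/* After the ALIVE handshake, the full mask can be restored. */
static void enable_all_ints(void)
{
	int_mask = INT_BIT_ALIVE | INT_BIT_FH_RX | INT_BIT_OTHER;
}

int main(void)
{
	enable_fw_load_ints();
	printf("load-time mask: 0x%x\n", int_mask);
	enable_all_ints();
	printf("runtime mask:   0x%x\n", int_mask);
	return 0;
}
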
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+index 12f02aaf923e..8c124debf53a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+@@ -1832,26 +1832,26 @@ irqreturn_t iwl_pcie_irq_handler(int irq, void *dev_id)
+ 		goto out;
+ 	}
+ 
+-	if (iwl_have_debug_level(IWL_DL_ISR)) {
+-		/* NIC fires this, but we don't use it, redundant with WAKEUP */
+-		if (inta & CSR_INT_BIT_SCD) {
+-			IWL_DEBUG_ISR(trans,
+-				      "Scheduler finished to transmit the frame/frames.\n");
+-			isr_stats->sch++;
+-		}
++	/* NIC fires this, but we don't use it, redundant with WAKEUP */
++	if (inta & CSR_INT_BIT_SCD) {
++		IWL_DEBUG_ISR(trans,
++			      "Scheduler finished to transmit the frame/frames.\n");
++		isr_stats->sch++;
++	}
+ 
+-		/* Alive notification via Rx interrupt will do the real work */
+-		if (inta & CSR_INT_BIT_ALIVE) {
+-			IWL_DEBUG_ISR(trans, "Alive interrupt\n");
+-			isr_stats->alive++;
+-			if (trans->cfg->gen2) {
+-				/*
+-				 * We can restock, since firmware configured
+-				 * the RFH
+-				 */
+-				iwl_pcie_rxmq_restock(trans, trans_pcie->rxq);
+-			}
++	/* Alive notification via Rx interrupt will do the real work */
++	if (inta & CSR_INT_BIT_ALIVE) {
++		IWL_DEBUG_ISR(trans, "Alive interrupt\n");
++		isr_stats->alive++;
++		if (trans->cfg->gen2) {
++			/*
++			 * We can restock, since firmware configured
++			 * the RFH
++			 */
++			iwl_pcie_rxmq_restock(trans, trans_pcie->rxq);
+ 		}
++
++		handled |= CSR_INT_BIT_ALIVE;
+ 	}
+ 
+ 	/* Safely ignore these bits for debug checks below */
+@@ -1970,6 +1970,9 @@ irqreturn_t iwl_pcie_irq_handler(int irq, void *dev_id)
+ 	/* Re-enable RF_KILL if it occurred */
+ 	else if (handled & CSR_INT_BIT_RF_KILL)
+ 		iwl_enable_rfkill_int(trans);
++	/* Re-enable the ALIVE / Rx interrupt if it occurred */
++	else if (handled & (CSR_INT_BIT_ALIVE | CSR_INT_BIT_FH_RX))
++		iwl_enable_fw_load_int_ctx_info(trans);
+ 	spin_unlock(&trans_pcie->irq_lock);
+ 
+ out:
+@@ -2113,10 +2116,18 @@ irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id)
+ 		return IRQ_NONE;
+ 	}
+ 
+-	if (iwl_have_debug_level(IWL_DL_ISR))
+-		IWL_DEBUG_ISR(trans, "ISR inta_fh 0x%08x, enabled 0x%08x\n",
+-			      inta_fh,
++	if (iwl_have_debug_level(IWL_DL_ISR)) {
++		IWL_DEBUG_ISR(trans,
++			      "ISR inta_fh 0x%08x, enabled (sw) 0x%08x (hw) 0x%08x\n",
++			      inta_fh, trans_pcie->fh_mask,
+ 			      iwl_read32(trans, CSR_MSIX_FH_INT_MASK_AD));
++		if (inta_fh & ~trans_pcie->fh_mask)
++			IWL_DEBUG_ISR(trans,
++				      "We got a masked interrupt (0x%08x)\n",
++				      inta_fh & ~trans_pcie->fh_mask);
++	}
++
++	inta_fh &= trans_pcie->fh_mask;
+ 
+ 	if ((trans_pcie->shared_vec_mask & IWL_SHARED_IRQ_NON_RX) &&
+ 	    inta_fh & MSIX_FH_INT_CAUSES_Q0) {
+@@ -2156,11 +2167,18 @@ irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id)
+ 	}
+ 
+ 	/* After checking FH register check HW register */
+-	if (iwl_have_debug_level(IWL_DL_ISR))
++	if (iwl_have_debug_level(IWL_DL_ISR)) {
+ 		IWL_DEBUG_ISR(trans,
+-			      "ISR inta_hw 0x%08x, enabled 0x%08x\n",
+-			      inta_hw,
++			      "ISR inta_hw 0x%08x, enabled (sw) 0x%08x (hw) 0x%08x\n",
++			      inta_hw, trans_pcie->hw_mask,
+ 			      iwl_read32(trans, CSR_MSIX_HW_INT_MASK_AD));
++		if (inta_hw & ~trans_pcie->hw_mask)
++			IWL_DEBUG_ISR(trans,
++				      "We got a masked interrupt 0x%08x\n",
++				      inta_hw & ~trans_pcie->hw_mask);
++	}
++
++	inta_hw &= trans_pcie->hw_mask;
+ 
+ 	/* Alive notification via Rx interrupt will do the real work */
+ 	if (inta_hw & MSIX_HW_INT_CAUSES_REG_ALIVE) {
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
+index 9c203ca75de9..8e0b40c4f4a4 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
+@@ -272,6 +272,15 @@ void iwl_trans_pcie_gen2_fw_alive(struct iwl_trans *trans, u32 scd_addr)
+ 	 * paging memory cannot be freed included since FW will still use it
+ 	 */
+ 	iwl_pcie_ctxt_info_free(trans);
++
++	/*
++	 * Re-enable all the interrupts, including the RF-Kill one, now that
++	 * the firmware is alive.
++	 */
++	iwl_enable_interrupts(trans);
++	mutex_lock(&trans_pcie->mutex);
++	iwl_pcie_check_hw_rf_kill(trans);
++	mutex_unlock(&trans_pcie->mutex);
+ }
+ 
+ int iwl_trans_pcie_gen2_start_fw(struct iwl_trans *trans,
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index 80695584e406..8ccfe0226818 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -3562,9 +3562,11 @@ struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev,
+ 			trans->cfg = &iwlax210_2ax_cfg_so_gf_a0;
+ 		}
+ 	} else if (cfg == &iwl_ax101_cfg_qu_hr) {
+-		if (CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id) ==
+-		    CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_HR) &&
+-		    trans->hw_rev == CSR_HW_REV_TYPE_QNJ_B0) {
++		if ((CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id) ==
++		     CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_HR) &&
++		     trans->hw_rev == CSR_HW_REV_TYPE_QNJ_B0) ||
++		    (CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id) ==
++		     CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_HR1))) {
+ 			trans->cfg = &iwl22000_2ax_cfg_qnj_hr_b0;
+ 		} else if (CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id) ==
+ 		    CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_HR)) {
+diff --git a/drivers/net/wireless/mediatek/mt7601u/dma.c b/drivers/net/wireless/mediatek/mt7601u/dma.c
+index f7edeffb2b19..401444f36402 100644
+--- a/drivers/net/wireless/mediatek/mt7601u/dma.c
++++ b/drivers/net/wireless/mediatek/mt7601u/dma.c
+@@ -193,10 +193,23 @@ static void mt7601u_complete_rx(struct urb *urb)
+ 	struct mt7601u_rx_queue *q = &dev->rx_q;
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&dev->rx_lock, flags);
++	/* do not schedule the rx tasklet if the urb has been unlinked
++	 * or the device has been removed
++	 */
++	switch (urb->status) {
++	case -ECONNRESET:
++	case -ESHUTDOWN:
++	case -ENOENT:
++		return;
++	default:
++		dev_err_ratelimited(dev->dev, "rx urb failed: %d\n",
++				    urb->status);
++		/* fall through */
++	case 0:
++		break;
++	}
+ 
+-	if (mt7601u_urb_has_error(urb))
+-		dev_err(dev->dev, "Error: RX urb failed:%d\n", urb->status);
++	spin_lock_irqsave(&dev->rx_lock, flags);
+ 	if (WARN_ONCE(q->e[q->end].urb != urb, "RX urb mismatch"))
+ 		goto out;
+ 
+@@ -228,14 +241,25 @@ static void mt7601u_complete_tx(struct urb *urb)
+ 	struct sk_buff *skb;
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&dev->tx_lock, flags);
++	switch (urb->status) {
++	case -ECONNRESET:
++	case -ESHUTDOWN:
++	case -ENOENT:
++		return;
++	default:
++		dev_err_ratelimited(dev->dev, "tx urb failed: %d\n",
++				    urb->status);
++		/* fall through */
++	case 0:
++		break;
++	}
+ 
+-	if (mt7601u_urb_has_error(urb))
+-		dev_err(dev->dev, "Error: TX urb failed:%d\n", urb->status);
++	spin_lock_irqsave(&dev->tx_lock, flags);
+ 	if (WARN_ONCE(q->e[q->start].urb != urb, "TX urb mismatch"))
+ 		goto out;
+ 
+ 	skb = q->e[q->start].skb;
++	q->e[q->start].skb = NULL;
+ 	trace_mt_tx_dma_done(dev, skb);
+ 
+ 	__skb_queue_tail(&dev->tx_skb_done, skb);
+@@ -363,19 +387,9 @@ int mt7601u_dma_enqueue_tx(struct mt7601u_dev *dev, struct sk_buff *skb,
+ static void mt7601u_kill_rx(struct mt7601u_dev *dev)
+ {
+ 	int i;
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&dev->rx_lock, flags);
+-
+-	for (i = 0; i < dev->rx_q.entries; i++) {
+-		int next = dev->rx_q.end;
+ 
+-		spin_unlock_irqrestore(&dev->rx_lock, flags);
+-		usb_poison_urb(dev->rx_q.e[next].urb);
+-		spin_lock_irqsave(&dev->rx_lock, flags);
+-	}
+-
+-	spin_unlock_irqrestore(&dev->rx_lock, flags);
++	for (i = 0; i < dev->rx_q.entries; i++)
++		usb_poison_urb(dev->rx_q.e[i].urb);
+ }
+ 
+ static int mt7601u_submit_rx_buf(struct mt7601u_dev *dev,
+@@ -445,10 +459,10 @@ static void mt7601u_free_tx_queue(struct mt7601u_tx_queue *q)
+ {
+ 	int i;
+ 
+-	WARN_ON(q->used);
+-
+ 	for (i = 0; i < q->entries; i++)  {
+ 		usb_poison_urb(q->e[i].urb);
++		if (q->e[i].skb)
++			mt7601u_tx_status(q->dev, q->e[i].skb);
+ 		usb_free_urb(q->e[i].urb);
+ 	}
+ }
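
Both mt7601u completion handlers now inspect urb->status before taking any driver lock: the USB core's unlink and shutdown statuses mean the URB, and possibly the device, is already gone, so the callback returns without touching queue state; other errors are logged and deliberately fall through to normal completion. A self-contained model of that switch:

#include <errno.h>
#include <stdio.h>

/* Illustrative completion; statuses mirror the USB core's unlink codes. */
static void complete_urb(int status)
{
	switch (status) {
	case -ECONNRESET:                    /* urb unlinked */
	case -ESHUTDOWN:                     /* device disabled */
	case -ENOENT:                        /* urb killed */
		return;                      /* don't touch driver state */
	default:
		fprintf(stderr, "urb failed: %d\n", status);
		/* fall through: still complete the bookkeeping */
	case 0:
		break;
	}
	puts("normal completion path");
}

int main(void)
{
	complete_urb(0);                     /* completes */
	complete_urb(-ESHUTDOWN);            /* bails out immediately */
	return 0;
}
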
+diff --git a/drivers/net/wireless/mediatek/mt7601u/tx.c b/drivers/net/wireless/mediatek/mt7601u/tx.c
+index 3600e911a63e..4d81c45722fb 100644
+--- a/drivers/net/wireless/mediatek/mt7601u/tx.c
++++ b/drivers/net/wireless/mediatek/mt7601u/tx.c
+@@ -117,9 +117,9 @@ void mt7601u_tx_status(struct mt7601u_dev *dev, struct sk_buff *skb)
+ 	info->status.rates[0].idx = -1;
+ 	info->flags |= IEEE80211_TX_STAT_ACK;
+ 
+-	spin_lock(&dev->mac_lock);
++	spin_lock_bh(&dev->mac_lock);
+ 	ieee80211_tx_status(dev->hw, skb);
+-	spin_unlock(&dev->mac_lock);
++	spin_unlock_bh(&dev->mac_lock);
+ }
+ 
+ static int mt7601u_skb_rooms(struct mt7601u_dev *dev, struct sk_buff *skb)
+diff --git a/drivers/net/wireless/ralink/rt2x00/rt2x00usb.c b/drivers/net/wireless/ralink/rt2x00/rt2x00usb.c
+index 086aad22743d..139bcdb42536 100644
+--- a/drivers/net/wireless/ralink/rt2x00/rt2x00usb.c
++++ b/drivers/net/wireless/ralink/rt2x00/rt2x00usb.c
+@@ -367,14 +367,9 @@ static void rt2x00usb_interrupt_rxdone(struct urb *urb)
+ 	struct queue_entry *entry = (struct queue_entry *)urb->context;
+ 	struct rt2x00_dev *rt2x00dev = entry->queue->rt2x00dev;
+ 
+-	if (!test_and_clear_bit(ENTRY_OWNER_DEVICE_DATA, &entry->flags))
++	if (!test_bit(ENTRY_OWNER_DEVICE_DATA, &entry->flags))
+ 		return;
+ 
+-	/*
+-	 * Report the frame as DMA done
+-	 */
+-	rt2x00lib_dmadone(entry);
+-
+ 	/*
+ 	 * Check if the received data is simply too small
+ 	 * to be actually valid, or if the urb is signaling
+@@ -383,6 +378,11 @@ static void rt2x00usb_interrupt_rxdone(struct urb *urb)
+ 	if (urb->actual_length < entry->queue->desc_size || urb->status)
+ 		set_bit(ENTRY_DATA_IO_FAILED, &entry->flags);
+ 
++	/*
++	 * Report the frame as DMA done
++	 */
++	rt2x00lib_dmadone(entry);
++
+ 	/*
+ 	 * Schedule the delayed work for reading the RX status
+ 	 * from the device.
+diff --git a/drivers/net/wireless/realtek/rtlwifi/usb.c b/drivers/net/wireless/realtek/rtlwifi/usb.c
+index e24fda5e9087..34d68dbf4b4c 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/usb.c
++++ b/drivers/net/wireless/realtek/rtlwifi/usb.c
+@@ -1064,13 +1064,13 @@ int rtl_usb_probe(struct usb_interface *intf,
+ 	rtlpriv->cfg->ops->read_eeprom_info(hw);
+ 	err = _rtl_usb_init(hw);
+ 	if (err)
+-		goto error_out;
++		goto error_out2;
+ 	rtl_usb_init_sw(hw);
+ 	/* Init mac80211 sw */
+ 	err = rtl_init_core(hw);
+ 	if (err) {
+ 		pr_err("Can't allocate sw for mac80211\n");
+-		goto error_out;
++		goto error_out2;
+ 	}
+ 	if (rtlpriv->cfg->ops->init_sw_vars(hw)) {
+ 		pr_err("Can't init_sw_vars\n");
+@@ -1091,6 +1091,7 @@ int rtl_usb_probe(struct usb_interface *intf,
+ 
+ error_out:
+ 	rtl_deinit_core(hw);
++error_out2:
+ 	_rtl_usb_io_handler_release(hw);
+ 	usb_put_dev(udev);
+ 	complete(&rtlpriv->firmware_loading_complete);
+diff --git a/drivers/nvdimm/dax_devs.c b/drivers/nvdimm/dax_devs.c
+index 0453f49dc708..326f02ffca81 100644
+--- a/drivers/nvdimm/dax_devs.c
++++ b/drivers/nvdimm/dax_devs.c
+@@ -126,7 +126,7 @@ int nd_dax_probe(struct device *dev, struct nd_namespace_common *ndns)
+ 	nvdimm_bus_unlock(&ndns->dev);
+ 	if (!dax_dev)
+ 		return -ENOMEM;
+-	pfn_sb = devm_kzalloc(dev, sizeof(*pfn_sb), GFP_KERNEL);
++	pfn_sb = devm_kmalloc(dev, sizeof(*pfn_sb), GFP_KERNEL);
+ 	nd_pfn->pfn_sb = pfn_sb;
+ 	rc = nd_pfn_validate(nd_pfn, DAX_SIG);
+ 	dev_dbg(dev, "dax: %s\n", rc == 0 ? dev_name(dax_dev) : "<none>");
+diff --git a/drivers/nvdimm/pfn.h b/drivers/nvdimm/pfn.h
+index dde9853453d3..e901e3a3b04c 100644
+--- a/drivers/nvdimm/pfn.h
++++ b/drivers/nvdimm/pfn.h
+@@ -36,6 +36,7 @@ struct nd_pfn_sb {
+ 	__le32 end_trunc;
+ 	/* minor-version-2 record the base alignment of the mapping */
+ 	__le32 align;
++	/* minor-version-3 guarantee the padding and flags are zero */
+ 	u8 padding[4000];
+ 	__le64 checksum;
+ };
+diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
+index d271bd731af7..f0e918186504 100644
+--- a/drivers/nvdimm/pfn_devs.c
++++ b/drivers/nvdimm/pfn_devs.c
+@@ -420,6 +420,15 @@ static int nd_pfn_clear_memmap_errors(struct nd_pfn *nd_pfn)
+ 	return 0;
+ }
+ 
++/**
++ * nd_pfn_validate - read and validate info-block
++ * @nd_pfn: fsdax namespace runtime state / properties
++ * @sig: 'devdax' or 'fsdax' signature
++ *
++ * Upon return the info-block buffer contents (->pfn_sb) are
++ * indeterminate when validation fails, and a coherent info-block
++ * otherwise.
++ */
+ int nd_pfn_validate(struct nd_pfn *nd_pfn, const char *sig)
+ {
+ 	u64 checksum, offset;
+@@ -565,7 +574,7 @@ int nd_pfn_probe(struct device *dev, struct nd_namespace_common *ndns)
+ 	nvdimm_bus_unlock(&ndns->dev);
+ 	if (!pfn_dev)
+ 		return -ENOMEM;
+-	pfn_sb = devm_kzalloc(dev, sizeof(*pfn_sb), GFP_KERNEL);
++	pfn_sb = devm_kmalloc(dev, sizeof(*pfn_sb), GFP_KERNEL);
+ 	nd_pfn = to_nd_pfn(pfn_dev);
+ 	nd_pfn->pfn_sb = pfn_sb;
+ 	rc = nd_pfn_validate(nd_pfn, PFN_SIG);
+@@ -702,7 +711,7 @@ static int nd_pfn_init(struct nd_pfn *nd_pfn)
+ 	u64 checksum;
+ 	int rc;
+ 
+-	pfn_sb = devm_kzalloc(&nd_pfn->dev, sizeof(*pfn_sb), GFP_KERNEL);
++	pfn_sb = devm_kmalloc(&nd_pfn->dev, sizeof(*pfn_sb), GFP_KERNEL);
+ 	if (!pfn_sb)
+ 		return -ENOMEM;
+ 
+@@ -711,11 +720,14 @@ static int nd_pfn_init(struct nd_pfn *nd_pfn)
+ 		sig = DAX_SIG;
+ 	else
+ 		sig = PFN_SIG;
++
+ 	rc = nd_pfn_validate(nd_pfn, sig);
+ 	if (rc != -ENODEV)
+ 		return rc;
+ 
+ 	/* no info block, do init */;
++	memset(pfn_sb, 0, sizeof(*pfn_sb));
++
+ 	nd_region = to_nd_region(nd_pfn->dev.parent);
+ 	if (nd_region->ro) {
+ 		dev_info(&nd_pfn->dev,
+@@ -768,7 +780,7 @@ static int nd_pfn_init(struct nd_pfn *nd_pfn)
+ 	memcpy(pfn_sb->uuid, nd_pfn->uuid, 16);
+ 	memcpy(pfn_sb->parent_uuid, nd_dev_to_uuid(&ndns->dev), 16);
+ 	pfn_sb->version_major = cpu_to_le16(1);
+-	pfn_sb->version_minor = cpu_to_le16(2);
++	pfn_sb->version_minor = cpu_to_le16(3);
+ 	pfn_sb->start_pad = cpu_to_le32(start_pad);
+ 	pfn_sb->end_trunc = cpu_to_le32(end_trunc);
+ 	pfn_sb->align = cpu_to_le32(nd_pfn->align);
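
The nvdimm hunks switch the info-block buffer from devm_kzalloc() to devm_kmalloc() and zero it only on the init path, because nd_pfn_validate() otherwise fills the buffer from media; the bump to minor version 3 records that freshly written blocks now have guaranteed-zero padding and flags. A sketch of the validate-or-init flow with the media read stubbed out:

#include <stdio.h>
#include <string.h>

struct info_block { unsigned int version_minor; char padding[16]; };

/* Pretend read-from-media validation; -1 models -ENODEV (no block). */
static int validate(struct info_block *sb) { (void)sb; return -1; }

static int init_info_block(struct info_block *sb)
{
	int rc = validate(sb);               /* may leave *sb indeterminate */

	if (rc != -1)
		return rc;                   /* found (or hard error): done */

	/* No info block on media: start from a fully zeroed buffer so the
	 * padding and flags are guaranteed zero (minor version 3). */
	memset(sb, 0, sizeof(*sb));
	sb->version_minor = 3;
	return 0;
}

int main(void)
{
	struct info_block sb;
	int rc = init_info_block(&sb);

	printf("rc=%d minor=%u\n", rc, sb.version_minor);
	return 0;
}
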
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 3a390b2c7540..cbbdd3dae5a1 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -3341,6 +3341,14 @@ static void nvme_ns_remove(struct nvme_ns *ns)
+ 		return;
+ 
+ 	nvme_fault_inject_fini(ns);
++
++	mutex_lock(&ns->ctrl->subsys->lock);
++	list_del_rcu(&ns->siblings);
++	mutex_unlock(&ns->ctrl->subsys->lock);
++	synchronize_rcu(); /* guarantee not available in head->list */
++	nvme_mpath_clear_current_path(ns);
++	synchronize_srcu(&ns->head->srcu); /* wait for concurrent submissions */
++
+ 	if (ns->disk && ns->disk->flags & GENHD_FL_UP) {
+ 		del_gendisk(ns->disk);
+ 		blk_cleanup_queue(ns->queue);
+@@ -3348,16 +3356,10 @@ static void nvme_ns_remove(struct nvme_ns *ns)
+ 			blk_integrity_unregister(ns->disk);
+ 	}
+ 
+-	mutex_lock(&ns->ctrl->subsys->lock);
+-	list_del_rcu(&ns->siblings);
+-	nvme_mpath_clear_current_path(ns);
+-	mutex_unlock(&ns->ctrl->subsys->lock);
+-
+ 	down_write(&ns->ctrl->namespaces_rwsem);
+ 	list_del_init(&ns->list);
+ 	up_write(&ns->ctrl->namespaces_rwsem);
+ 
+-	synchronize_srcu(&ns->head->srcu);
+ 	nvme_mpath_check_last_path(ns);
+ 	nvme_put_ns(ns);
+ }
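
nvme_ns_remove() is reordered so the namespace is first unlinked from the subsystem's sibling list, then synchronize_rcu() and synchronize_srcu() wait out concurrent lookups and in-flight submissions, and only afterwards is the gendisk torn down; multipath can therefore never select a path whose queues are mid-teardown. A very rough, single-threaded stand-in for that unlink/synchronize/free ordering (in a real driver other CPUs hold the reader references):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int readers;               /* models RCU/SRCU readers */
static const char *current_path = "ns1"; /* what multipath dispatches to */

static void synchronize(void)            /* models synchronize_rcu/srcu */
{
	while (atomic_load(&readers) > 0)
		;                            /* wait for in-flight users */
}

static void ns_remove(void)
{
	current_path = NULL;                 /* 1. unlink: no new users   */
	synchronize();                       /* 2. wait out existing ones */
	puts("teardown (del_gendisk etc.) is now safe");   /* 3. free */
}

int main(void) { ns_remove(); return 0; }
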
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 693f2a856200..914eea2ea557 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2085,6 +2085,7 @@ static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues)
+ 		.priv		= dev,
+ 	};
+ 	unsigned int irq_queues, this_p_queues;
++	unsigned int nr_cpus = num_possible_cpus();
+ 
+ 	/*
+ 	 * Poll queues don't need interrupts, but we need at least one IO
+@@ -2095,7 +2096,10 @@ static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues)
+ 		this_p_queues = nr_io_queues - 1;
+ 		irq_queues = 1;
+ 	} else {
+-		irq_queues = nr_io_queues - this_p_queues + 1;
++		if (nr_cpus < nr_io_queues - this_p_queues)
++			irq_queues = nr_cpus + 1;
++		else
++			irq_queues = nr_io_queues - this_p_queues + 1;
+ 	}
+ 	dev->io_queues[HCTX_TYPE_POLL] = this_p_queues;
+ 
+@@ -2504,11 +2508,13 @@ static void nvme_reset_work(struct work_struct *work)
+ 	struct nvme_dev *dev =
+ 		container_of(work, struct nvme_dev, ctrl.reset_work);
+ 	bool was_suspend = !!(dev->ctrl.ctrl_config & NVME_CC_SHN_NORMAL);
+-	int result = -ENODEV;
++	int result;
+ 	enum nvme_ctrl_state new_state = NVME_CTRL_LIVE;
+ 
+-	if (WARN_ON(dev->ctrl.state != NVME_CTRL_RESETTING))
++	if (WARN_ON(dev->ctrl.state != NVME_CTRL_RESETTING)) {
++		result = -ENODEV;
+ 		goto out;
++	}
+ 
+ 	/*
+ 	 * If we're called to reset a live controller first shut it down before
+@@ -2545,6 +2551,7 @@ static void nvme_reset_work(struct work_struct *work)
+ 	if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_CONNECTING)) {
+ 		dev_warn(dev->ctrl.device,
+ 			"failed to mark controller CONNECTING\n");
++		result = -EBUSY;
+ 		goto out;
+ 	}
+ 
+@@ -2605,6 +2612,7 @@ static void nvme_reset_work(struct work_struct *work)
+ 	if (!nvme_change_ctrl_state(&dev->ctrl, new_state)) {
+ 		dev_warn(dev->ctrl.device,
+ 			"failed to mark controller state %d\n", new_state);
++		result = -ENODEV;
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/opp/core.c b/drivers/opp/core.c
+index 0420f7e8ad5b..f5d565804e52 100644
+--- a/drivers/opp/core.c
++++ b/drivers/opp/core.c
+@@ -631,7 +631,7 @@ static int _set_opp_custom(const struct opp_table *opp_table,
+ 
+ 	data->old_opp.rate = old_freq;
+ 	size = sizeof(*old_supply) * opp_table->regulator_count;
+-	if (IS_ERR(old_supply))
++	if (!old_supply)
+ 		memset(data->old_opp.supplies, 0, size);
+ 	else
+ 		memcpy(data->old_opp.supplies, old_supply, size);
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index a7f703556790..e292801fff7f 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -178,6 +178,8 @@ static void qcom_ep_reset_assert(struct qcom_pcie *pcie)
+ 
+ static void qcom_ep_reset_deassert(struct qcom_pcie *pcie)
+ {
++	/* Ensure that PERST has been asserted for at least 100 ms */
++	msleep(100);
+ 	gpiod_set_value_cansleep(pcie->reset, 0);
+ 	usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500);
+ }
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index 82acd6155adf..40b625458afa 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -1875,6 +1875,7 @@ static void hv_pci_devices_present(struct hv_pcibus_device *hbus,
+ static void hv_eject_device_work(struct work_struct *work)
+ {
+ 	struct pci_eject_response *ejct_pkt;
++	struct hv_pcibus_device *hbus;
+ 	struct hv_pci_dev *hpdev;
+ 	struct pci_dev *pdev;
+ 	unsigned long flags;
+@@ -1885,6 +1886,7 @@ static void hv_eject_device_work(struct work_struct *work)
+ 	} ctxt;
+ 
+ 	hpdev = container_of(work, struct hv_pci_dev, wrk);
++	hbus = hpdev->hbus;
+ 
+ 	WARN_ON(hpdev->state != hv_pcichild_ejecting);
+ 
+@@ -1895,8 +1897,7 @@ static void hv_eject_device_work(struct work_struct *work)
+ 	 * because hbus->pci_bus may not exist yet.
+ 	 */
+ 	wslot = wslot_to_devfn(hpdev->desc.win_slot.slot);
+-	pdev = pci_get_domain_bus_and_slot(hpdev->hbus->sysdata.domain, 0,
+-					   wslot);
++	pdev = pci_get_domain_bus_and_slot(hbus->sysdata.domain, 0, wslot);
+ 	if (pdev) {
+ 		pci_lock_rescan_remove();
+ 		pci_stop_and_remove_bus_device(pdev);
+@@ -1904,9 +1905,9 @@ static void hv_eject_device_work(struct work_struct *work)
+ 		pci_unlock_rescan_remove();
+ 	}
+ 
+-	spin_lock_irqsave(&hpdev->hbus->device_list_lock, flags);
++	spin_lock_irqsave(&hbus->device_list_lock, flags);
+ 	list_del(&hpdev->list_entry);
+-	spin_unlock_irqrestore(&hpdev->hbus->device_list_lock, flags);
++	spin_unlock_irqrestore(&hbus->device_list_lock, flags);
+ 
+ 	if (hpdev->pci_slot)
+ 		pci_destroy_slot(hpdev->pci_slot);
+@@ -1915,7 +1916,7 @@ static void hv_eject_device_work(struct work_struct *work)
+ 	ejct_pkt = (struct pci_eject_response *)&ctxt.pkt.message;
+ 	ejct_pkt->message_type.type = PCI_EJECTION_COMPLETE;
+ 	ejct_pkt->wslot.slot = hpdev->desc.win_slot.slot;
+-	vmbus_sendpacket(hpdev->hbus->hdev->channel, ejct_pkt,
++	vmbus_sendpacket(hbus->hdev->channel, ejct_pkt,
+ 			 sizeof(*ejct_pkt), (unsigned long)&ctxt.pkt,
+ 			 VM_PKT_DATA_INBAND, 0);
+ 
+@@ -1924,7 +1925,9 @@ static void hv_eject_device_work(struct work_struct *work)
+ 	/* For the two refs got in new_pcichild_device() */
+ 	put_pcichild(hpdev);
+ 	put_pcichild(hpdev);
+-	put_hvpcibus(hpdev->hbus);
++	/* hpdev has been freed. Do not use it any more. */
++
++	put_hvpcibus(hbus);
+ }
+ 
+ /**
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 766f5779db92..b1e923cdef30 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -999,15 +999,10 @@ static void __pci_start_power_transition(struct pci_dev *dev, pci_power_t state)
+ 	if (state == PCI_D0) {
+ 		pci_platform_power_transition(dev, PCI_D0);
+ 		/*
+-		 * Mandatory power management transition delays, see
+-		 * PCI Express Base Specification Revision 2.0 Section
+-		 * 6.6.1: Conventional Reset.  Do not delay for
+-		 * devices powered on/off by corresponding bridge,
+-		 * because have already delayed for the bridge.
++		 * Mandatory power management transition delays are
++		 * handled in the PCIe portdrv resume hooks.
+ 		 */
+ 		if (dev->runtime_d3cold) {
+-			if (dev->d3cold_delay && !dev->imm_ready)
+-				msleep(dev->d3cold_delay);
+ 			/*
+ 			 * When powering on a bridge from D3cold, the
+ 			 * whole hierarchy may be powered on into
+@@ -2057,6 +2052,13 @@ static void pci_pme_list_scan(struct work_struct *work)
+ 			 */
+ 			if (bridge && bridge->current_state != PCI_D0)
+ 				continue;
++			/*
++			 * If the device is in D3cold it should not be
++			 * polled either.
++			 */
++			if (pme_dev->dev->current_state == PCI_D3cold)
++				continue;
++
+ 			pci_pme_wakeup(pme_dev->dev, NULL);
+ 		} else {
+ 			list_del(&pme_dev->list);
+@@ -4581,14 +4583,16 @@ static int pci_pm_reset(struct pci_dev *dev, int probe)
+ 
+ 	return pci_dev_wait(dev, "PM D3->D0", PCIE_RESET_READY_POLL_MS);
+ }
++
+ /**
+- * pcie_wait_for_link - Wait until link is active or inactive
++ * pcie_wait_for_link_delay - Wait until link is active or inactive
+  * @pdev: Bridge device
+  * @active: waiting for active or inactive?
++ * @delay: Delay to wait after link has become active (in ms)
+  *
+  * Use this to wait till link becomes active or inactive.
+  */
+-bool pcie_wait_for_link(struct pci_dev *pdev, bool active)
++bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active, int delay)
+ {
+ 	int timeout = 1000;
+ 	bool ret;
+@@ -4625,13 +4629,25 @@ bool pcie_wait_for_link(struct pci_dev *pdev, bool active)
+ 		timeout -= 10;
+ 	}
+ 	if (active && ret)
+-		msleep(100);
++		msleep(delay);
+ 	else if (ret != active)
+ 		pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n",
+ 			active ? "set" : "cleared");
+ 	return ret == active;
+ }
+ 
++/**
++ * pcie_wait_for_link - Wait until link is active or inactive
++ * @pdev: Bridge device
++ * @active: waiting for active or inactive?
++ *
++ * Use this to wait till link becomes active or inactive.
++ */
++bool pcie_wait_for_link(struct pci_dev *pdev, bool active)
++{
++	return pcie_wait_for_link_delay(pdev, active, 100);
++}
++
+ void pci_reset_secondary_bus(struct pci_dev *dev)
+ {
+ 	u16 ctrl;
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index 9cb99380c61e..59802b3def4b 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -493,6 +493,7 @@ static inline int pci_dev_specific_disable_acs_redir(struct pci_dev *dev)
+ void pcie_do_recovery(struct pci_dev *dev, enum pci_channel_state state,
+ 		      u32 service);
+ 
++bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active, int delay);
+ bool pcie_wait_for_link(struct pci_dev *pdev, bool active);
+ #ifdef CONFIG_PCIEASPM
+ void pcie_aspm_init_link_state(struct pci_dev *pdev);
+diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c
+index 1b330129089f..308c3e0c4a34 100644
+--- a/drivers/pci/pcie/portdrv_core.c
++++ b/drivers/pci/pcie/portdrv_core.c
+@@ -9,6 +9,7 @@
+ #include <linux/module.h>
+ #include <linux/pci.h>
+ #include <linux/kernel.h>
++#include <linux/delay.h>
+ #include <linux/errno.h>
+ #include <linux/pm.h>
+ #include <linux/pm_runtime.h>
+@@ -378,6 +379,67 @@ static int pm_iter(struct device *dev, void *data)
+ 	return 0;
+ }
+ 
++static int get_downstream_delay(struct pci_bus *bus)
++{
++	struct pci_dev *pdev;
++	int min_delay = 100;
++	int max_delay = 0;
++
++	list_for_each_entry(pdev, &bus->devices, bus_list) {
++		if (!pdev->imm_ready)
++			min_delay = 0;
++		else if (pdev->d3cold_delay < min_delay)
++			min_delay = pdev->d3cold_delay;
++		if (pdev->d3cold_delay > max_delay)
++			max_delay = pdev->d3cold_delay;
++	}
++
++	return max(min_delay, max_delay);
++}
++
++/*
++ * wait_for_downstream_link - Wait for the downstream link to be established
++ * @pdev: PCIe port whose downstream link is waited for
++ *
++ * Handle delays according to PCIe 4.0 section 6.6.1 before configuration
++ * access to the downstream component is permitted.
++ *
++ * This blocks PCI core resume of the hierarchy below this port until the
++ * link is trained. Should be called before resuming port services to
++ * prevent pciehp from starting to tear down the hierarchy too soon.
++ */
++static void wait_for_downstream_link(struct pci_dev *pdev)
++{
++	int delay;
++
++	if (pci_pcie_type(pdev) != PCI_EXP_TYPE_ROOT_PORT &&
++	    pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM)
++		return;
++
++	if (pci_dev_is_disconnected(pdev))
++		return;
++
++	if (!pdev->subordinate || list_empty(&pdev->subordinate->devices) ||
++	    !pdev->bridge_d3)
++		return;
++
++	delay = get_downstream_delay(pdev->subordinate);
++	if (!delay)
++		return;
++
++	dev_dbg(&pdev->dev, "waiting downstream link for %d ms\n", delay);
++
++	/*
++	 * If the downstream port does not support speeds greater than
++	 * 5 GT/s, we need to wait 100 ms. For higher speeds (gen3) we
++	 * must first wait for the data link layer to become active.
++	 */
++	if (pcie_get_speed_cap(pdev) <= PCIE_SPEED_5_0GT)
++		msleep(delay);
++	else
++		pcie_wait_for_link_delay(pdev, true, delay);
++}
++
+ /**
+  * pcie_port_device_suspend - suspend port services associated with a PCIe port
+  * @dev: PCI Express port to handle
+@@ -391,6 +453,8 @@ int pcie_port_device_suspend(struct device *dev)
+ int pcie_port_device_resume_noirq(struct device *dev)
+ {
+ 	size_t off = offsetof(struct pcie_port_service_driver, resume_noirq);
++
++	wait_for_downstream_link(to_pci_dev(dev));
+ 	return device_for_each_child(dev, &off, pm_iter);
+ }
+ 
+@@ -421,6 +485,8 @@ int pcie_port_device_runtime_suspend(struct device *dev)
+ int pcie_port_device_runtime_resume(struct device *dev)
+ {
+ 	size_t off = offsetof(struct pcie_port_service_driver, runtime_resume);
++
++	wait_for_downstream_link(to_pci_dev(dev));
+ 	return device_for_each_child(dev, &off, pm_iter);
+ }
+ #endif /* PM */
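
wait_for_downstream_link() gates both resume paths: it derives a per-bus delay from the children's imm_ready and d3cold_delay values and then either sleeps (links at 5 GT/s or below) or polls the data link layer with that delay. The min/max folding of get_downstream_delay(), mirrored standalone:

#include <stdio.h>

struct child { int imm_ready; int d3cold_delay; /* ms */ };

/* Mirrors get_downstream_delay(): fold every child's delay into a
 * min/max pair and return the larger of the two. */
static int downstream_delay(const struct child *c, int n)
{
	int min_delay = 100, max_delay = 0;

	for (int i = 0; i < n; i++) {
		if (!c[i].imm_ready)
			min_delay = 0;
		else if (c[i].d3cold_delay < min_delay)
			min_delay = c[i].d3cold_delay;
		if (c[i].d3cold_delay > max_delay)
			max_delay = c[i].d3cold_delay;
	}
	return max_delay > min_delay ? max_delay : min_delay;
}

int main(void)
{
	struct child kids[] = { { 0, 100 }, { 1, 20 } };

	printf("delay: %d ms\n", downstream_delay(kids, 2)); /* -> 100 */
	return 0;
}
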
+diff --git a/drivers/ras/cec.c b/drivers/ras/cec.c
+index f85d6b7a1984..5d2b2c02cbbe 100644
+--- a/drivers/ras/cec.c
++++ b/drivers/ras/cec.c
+@@ -369,7 +369,9 @@ static int pfn_set(void *data, u64 val)
+ {
+ 	*(u64 *)data = val;
+ 
+-	return cec_add_elem(val);
++	cec_add_elem(val);
++
++	return 0;
+ }
+ 
+ DEFINE_DEBUGFS_ATTRIBUTE(pfn_ops, u64_get, pfn_set, "0x%llx\n");
+diff --git a/drivers/regulator/da9211-regulator.c b/drivers/regulator/da9211-regulator.c
+index 4d7fe4819c1c..4e95e3d0fcd5 100644
+--- a/drivers/regulator/da9211-regulator.c
++++ b/drivers/regulator/da9211-regulator.c
+@@ -299,6 +299,8 @@ static struct da9211_pdata *da9211_parse_regulators_dt(
+ 				  0,
+ 				  GPIOD_OUT_HIGH | GPIOD_FLAGS_BIT_NONEXCLUSIVE,
+ 				  "da9211-enable");
++		if (IS_ERR(pdata->gpiod_ren[n]))
++			pdata->gpiod_ren[n] = NULL;
+ 		n++;
+ 	}
+ 
+diff --git a/drivers/regulator/s2mps11.c b/drivers/regulator/s2mps11.c
+index 134c62db36c5..8812c2c3cfc2 100644
+--- a/drivers/regulator/s2mps11.c
++++ b/drivers/regulator/s2mps11.c
+@@ -372,8 +372,8 @@ static const struct regulator_desc s2mps11_regulators[] = {
+ 	regulator_desc_s2mps11_buck1_4(4),
+ 	regulator_desc_s2mps11_buck5,
+ 	regulator_desc_s2mps11_buck67810(6, MIN_600_MV, STEP_6_25_MV),
+-	regulator_desc_s2mps11_buck67810(7, MIN_600_MV, STEP_12_5_MV),
+-	regulator_desc_s2mps11_buck67810(8, MIN_600_MV, STEP_12_5_MV),
++	regulator_desc_s2mps11_buck67810(7, MIN_750_MV, STEP_12_5_MV),
++	regulator_desc_s2mps11_buck67810(8, MIN_750_MV, STEP_12_5_MV),
+ 	regulator_desc_s2mps11_buck9,
+ 	regulator_desc_s2mps11_buck67810(10, MIN_750_MV, STEP_12_5_MV),
+ };
+@@ -821,9 +821,12 @@ static void s2mps14_pmic_dt_parse_ext_control_gpio(struct platform_device *pdev,
+ 				0,
+ 				GPIOD_OUT_HIGH | GPIOD_FLAGS_BIT_NONEXCLUSIVE,
+ 				"s2mps11-regulator");
+-		if (IS_ERR(gpio[reg])) {
++		if (PTR_ERR(gpio[reg]) == -ENOENT)
++			gpio[reg] = NULL;
++		else if (IS_ERR(gpio[reg])) {
+ 			dev_err(&pdev->dev, "Failed to get control GPIO for %d/%s\n",
+ 				reg, rdata[reg].name);
++			gpio[reg] = NULL;
+ 			continue;
+ 		}
+ 		if (gpio[reg])
+diff --git a/drivers/regulator/s5m8767.c b/drivers/regulator/s5m8767.c
+index bb9d1a083299..6ca27e9d5ef7 100644
+--- a/drivers/regulator/s5m8767.c
++++ b/drivers/regulator/s5m8767.c
+@@ -574,7 +574,9 @@ static int s5m8767_pmic_dt_parse_pdata(struct platform_device *pdev,
+ 			0,
+ 			GPIOD_OUT_HIGH | GPIOD_FLAGS_BIT_NONEXCLUSIVE,
+ 			"s5m8767");
+-		if (IS_ERR(rdata->ext_control_gpiod))
++		if (PTR_ERR(rdata->ext_control_gpiod) == -ENOENT)
++			rdata->ext_control_gpiod = NULL;
++		else if (IS_ERR(rdata->ext_control_gpiod))
+ 			return PTR_ERR(rdata->ext_control_gpiod);
+ 
+ 		rdata->id = i;
+diff --git a/drivers/regulator/tps65090-regulator.c b/drivers/regulator/tps65090-regulator.c
+index 0614551796a1..f6466db57900 100644
+--- a/drivers/regulator/tps65090-regulator.c
++++ b/drivers/regulator/tps65090-regulator.c
+@@ -381,11 +381,12 @@ static struct tps65090_platform_data *tps65090_parse_dt_reg_data(
+ 								    "dcdc-ext-control-gpios", 0,
+ 								    gflags,
+ 								    "tps65090");
+-			if (IS_ERR(rpdata->gpiod))
+-				return ERR_CAST(rpdata->gpiod);
+-			if (!rpdata->gpiod)
++			if (PTR_ERR(rpdata->gpiod) == -ENOENT) {
+ 				dev_err(&pdev->dev,
+ 					"could not find DCDC external control GPIO\n");
++				rpdata->gpiod = NULL;
++			} else if (IS_ERR(rpdata->gpiod))
++				return ERR_CAST(rpdata->gpiod);
+ 		}
+ 
+ 		if (of_property_read_u32(tps65090_matches[idx].of_node,
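
da9211, s2mps11, s5m8767 and tps65090 all receive the same treatment: their ext-control GPIO is optional, so a devm_gpiod_get*() result of ERR_PTR(-ENOENT) has to be converted into a NULL descriptor (which gpiolib treats as "no GPIO") rather than stored, returned, or later dereferenced as an error pointer. A userspace model of the ERR_PTR convention and the optional-GPIO handling; the pointer/integer punning matches the kernel's scheme and assumes an LP64-style platform.

#include <errno.h>
#include <stdio.h>

struct gpio_desc;                        /* opaque, as in gpiolib */

#define ERR_PTR(e) ((struct gpio_desc *)(long)(e))
#define PTR_ERR(p) ((long)(p))
#define IS_ERR(p)  ((unsigned long)(p) >= (unsigned long)-4095)

/* Stubbed lookup: pretend the DT property is simply absent. */
static struct gpio_desc *gpiod_lookup(void) { return ERR_PTR(-ENOENT); }

static int get_optional_enable_gpio(struct gpio_desc **out)
{
	struct gpio_desc *g = gpiod_lookup();

	if (PTR_ERR(g) == -ENOENT) {
		*out = NULL;                 /* absent GPIO is not an error */
		return 0;
	}
	if (IS_ERR(g))
		return (int)PTR_ERR(g);      /* a real lookup failure */
	*out = g;
	return 0;
}

int main(void)
{
	struct gpio_desc *g;
	int rc = get_optional_enable_gpio(&g);

	printf("rc=%d gpio=%p\n", rc, (void *)g);
	return 0;
}
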
+diff --git a/drivers/s390/cio/qdio_main.c b/drivers/s390/cio/qdio_main.c
+index 9537e656e927..06b94b2ee199 100644
+--- a/drivers/s390/cio/qdio_main.c
++++ b/drivers/s390/cio/qdio_main.c
+@@ -738,6 +738,7 @@ static int get_outbound_buffer_frontier(struct qdio_q *q)
+ 
+ 	switch (state) {
+ 	case SLSB_P_OUTPUT_EMPTY:
++	case SLSB_P_OUTPUT_PENDING:
+ 		/* the adapter got it */
+ 		DBF_DEV_EVENT(DBF_INFO, q->irq_ptr,
+ 			"out empty:%1d %02x", q->nr, count);
+diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c
+index d94496ee6883..296bbc3c4606 100644
+--- a/drivers/s390/scsi/zfcp_fsf.c
++++ b/drivers/s390/scsi/zfcp_fsf.c
+@@ -11,6 +11,7 @@
+ #define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
+ 
+ #include <linux/blktrace_api.h>
++#include <linux/types.h>
+ #include <linux/slab.h>
+ #include <scsi/fc/fc_els.h>
+ #include "zfcp_ext.h"
+@@ -741,6 +742,7 @@ static struct zfcp_fsf_req *zfcp_fsf_req_create(struct zfcp_qdio *qdio,
+ 
+ static int zfcp_fsf_req_send(struct zfcp_fsf_req *req)
+ {
++	const bool is_srb = zfcp_fsf_req_is_status_read_buffer(req);
+ 	struct zfcp_adapter *adapter = req->adapter;
+ 	struct zfcp_qdio *qdio = adapter->qdio;
+ 	int req_id = req->req_id;
+@@ -757,8 +759,20 @@ static int zfcp_fsf_req_send(struct zfcp_fsf_req *req)
+ 		return -EIO;
+ 	}
+ 
++	/*
++	 * NOTE: DO NOT TOUCH ASYNC req PAST THIS POINT.
++	 *	 ONLY TOUCH SYNC req AGAIN ON req->completion.
++	 *
++	 * The request might complete and be freed concurrently at any point
++	 * now. This is not protected by the QDIO-lock (req_q_lock). So any
++	 * uncontrolled access after this might result in a use-after-free bug.
++	 * Only if the request doesn't have ZFCP_STATUS_FSFREQ_CLEANUP set, and
++	 * when it is completed via req->completion, is it safe to use req
++	 * again.
++	 */
++
+ 	/* Don't increase for unsolicited status */
+-	if (!zfcp_fsf_req_is_status_read_buffer(req))
++	if (!is_srb)
+ 		adapter->fsf_req_seq_no++;
+ 	adapter->req_no++;
+ 
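Reduced to its essentials, the rule stated in the NOTE above comes down to this pattern (a sketch only; is_sync_request is a hypothetical stand-in for the caller knowing it issued a synchronous request):

	retval = zfcp_fsf_req_send(req);
	if (retval) {
		zfcp_fsf_req_free(req);	/* never sent: req is still ours */
	} else if (is_sync_request) {
		/* req is safe to touch again only once this fires */
		wait_for_completion(&req->completion);
		zfcp_fsf_req_free(req);
	}
	/* async case: req must not be touched at all after the send */

The hunks below apply exactly this discipline: values such as req_id are copied out before the send wherever they are needed afterwards, and each call site gains a NOTE marking the point past which req is off limits.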
+@@ -805,6 +819,7 @@ int zfcp_fsf_status_read(struct zfcp_qdio *qdio)
+ 	retval = zfcp_fsf_req_send(req);
+ 	if (retval)
+ 		goto failed_req_send;
++	/* NOTE: DO NOT TOUCH req PAST THIS POINT! */
+ 
+ 	goto out;
+ 
+@@ -914,8 +929,10 @@ struct zfcp_fsf_req *zfcp_fsf_abort_fcp_cmnd(struct scsi_cmnd *scmnd)
+ 	req->qtcb->bottom.support.req_handle = (u64) old_req_id;
+ 
+ 	zfcp_fsf_start_timer(req, ZFCP_FSF_SCSI_ER_TIMEOUT);
+-	if (!zfcp_fsf_req_send(req))
++	if (!zfcp_fsf_req_send(req)) {
++		/* NOTE: DO NOT TOUCH req, UNTIL IT COMPLETES! */
+ 		goto out;
++	}
+ 
+ out_error_free:
+ 	zfcp_fsf_req_free(req);
+@@ -1098,6 +1115,7 @@ int zfcp_fsf_send_ct(struct zfcp_fc_wka_port *wka_port,
+ 	ret = zfcp_fsf_req_send(req);
+ 	if (ret)
+ 		goto failed_send;
++	/* NOTE: DO NOT TOUCH req PAST THIS POINT! */
+ 
+ 	goto out;
+ 
+@@ -1198,6 +1216,7 @@ int zfcp_fsf_send_els(struct zfcp_adapter *adapter, u32 d_id,
+ 	ret = zfcp_fsf_req_send(req);
+ 	if (ret)
+ 		goto failed_send;
++	/* NOTE: DO NOT TOUCH req PAST THIS POINT! */
+ 
+ 	goto out;
+ 
+@@ -1243,6 +1262,7 @@ int zfcp_fsf_exchange_config_data(struct zfcp_erp_action *erp_action)
+ 		zfcp_fsf_req_free(req);
+ 		erp_action->fsf_req_id = 0;
+ 	}
++	/* NOTE: DO NOT TOUCH req PAST THIS POINT! */
+ out:
+ 	spin_unlock_irq(&qdio->req_q_lock);
+ 	return retval;
+@@ -1279,8 +1299,10 @@ int zfcp_fsf_exchange_config_data_sync(struct zfcp_qdio *qdio,
+ 	zfcp_fsf_start_timer(req, ZFCP_FSF_REQUEST_TIMEOUT);
+ 	retval = zfcp_fsf_req_send(req);
+ 	spin_unlock_irq(&qdio->req_q_lock);
+-	if (!retval)
++	if (!retval) {
++		/* NOTE: ONLY TOUCH SYNC req AGAIN ON req->completion. */
+ 		wait_for_completion(&req->completion);
++	}
+ 
+ 	zfcp_fsf_req_free(req);
+ 	return retval;
+@@ -1330,6 +1352,7 @@ int zfcp_fsf_exchange_port_data(struct zfcp_erp_action *erp_action)
+ 		zfcp_fsf_req_free(req);
+ 		erp_action->fsf_req_id = 0;
+ 	}
++	/* NOTE: DO NOT TOUCH req PAST THIS POINT! */
+ out:
+ 	spin_unlock_irq(&qdio->req_q_lock);
+ 	return retval;
+@@ -1372,8 +1395,10 @@ int zfcp_fsf_exchange_port_data_sync(struct zfcp_qdio *qdio,
+ 	retval = zfcp_fsf_req_send(req);
+ 	spin_unlock_irq(&qdio->req_q_lock);
+ 
+-	if (!retval)
++	if (!retval) {
++		/* NOTE: ONLY TOUCH SYNC req AGAIN ON req->completion. */
+ 		wait_for_completion(&req->completion);
++	}
+ 
+ 	zfcp_fsf_req_free(req);
+ 
+@@ -1493,6 +1518,7 @@ int zfcp_fsf_open_port(struct zfcp_erp_action *erp_action)
+ 		erp_action->fsf_req_id = 0;
+ 		put_device(&port->dev);
+ 	}
++	/* NOTE: DO NOT TOUCH req PAST THIS POINT! */
+ out:
+ 	spin_unlock_irq(&qdio->req_q_lock);
+ 	return retval;
+@@ -1557,6 +1583,7 @@ int zfcp_fsf_close_port(struct zfcp_erp_action *erp_action)
+ 		zfcp_fsf_req_free(req);
+ 		erp_action->fsf_req_id = 0;
+ 	}
++	/* NOTE: DO NOT TOUCH req PAST THIS POINT! */
+ out:
+ 	spin_unlock_irq(&qdio->req_q_lock);
+ 	return retval;
+@@ -1600,6 +1627,7 @@ int zfcp_fsf_open_wka_port(struct zfcp_fc_wka_port *wka_port)
+ {
+ 	struct zfcp_qdio *qdio = wka_port->adapter->qdio;
+ 	struct zfcp_fsf_req *req;
++	unsigned long req_id = 0;
+ 	int retval = -EIO;
+ 
+ 	spin_lock_irq(&qdio->req_q_lock);
+@@ -1622,14 +1650,17 @@ int zfcp_fsf_open_wka_port(struct zfcp_fc_wka_port *wka_port)
+ 	hton24(req->qtcb->bottom.support.d_id, wka_port->d_id);
+ 	req->data = wka_port;
+ 
++	req_id = req->req_id;
++
+ 	zfcp_fsf_start_timer(req, ZFCP_FSF_REQUEST_TIMEOUT);
+ 	retval = zfcp_fsf_req_send(req);
+ 	if (retval)
+ 		zfcp_fsf_req_free(req);
++	/* NOTE: DO NOT TOUCH req PAST THIS POINT! */
+ out:
+ 	spin_unlock_irq(&qdio->req_q_lock);
+ 	if (!retval)
+-		zfcp_dbf_rec_run_wka("fsowp_1", wka_port, req->req_id);
++		zfcp_dbf_rec_run_wka("fsowp_1", wka_port, req_id);
+ 	return retval;
+ }
+ 
+@@ -1655,6 +1686,7 @@ int zfcp_fsf_close_wka_port(struct zfcp_fc_wka_port *wka_port)
+ {
+ 	struct zfcp_qdio *qdio = wka_port->adapter->qdio;
+ 	struct zfcp_fsf_req *req;
++	unsigned long req_id = 0;
+ 	int retval = -EIO;
+ 
+ 	spin_lock_irq(&qdio->req_q_lock);
+@@ -1677,14 +1709,17 @@ int zfcp_fsf_close_wka_port(struct zfcp_fc_wka_port *wka_port)
+ 	req->data = wka_port;
+ 	req->qtcb->header.port_handle = wka_port->handle;
+ 
++	req_id = req->req_id;
++
+ 	zfcp_fsf_start_timer(req, ZFCP_FSF_REQUEST_TIMEOUT);
+ 	retval = zfcp_fsf_req_send(req);
+ 	if (retval)
+ 		zfcp_fsf_req_free(req);
++	/* NOTE: DO NOT TOUCH req PAST THIS POINT! */
+ out:
+ 	spin_unlock_irq(&qdio->req_q_lock);
+ 	if (!retval)
+-		zfcp_dbf_rec_run_wka("fscwp_1", wka_port, req->req_id);
++		zfcp_dbf_rec_run_wka("fscwp_1", wka_port, req_id);
+ 	return retval;
+ }
+ 
+@@ -1776,6 +1811,7 @@ int zfcp_fsf_close_physical_port(struct zfcp_erp_action *erp_action)
+ 		zfcp_fsf_req_free(req);
+ 		erp_action->fsf_req_id = 0;
+ 	}
++	/* NOTE: DO NOT TOUCH req PAST THIS POINT! */
+ out:
+ 	spin_unlock_irq(&qdio->req_q_lock);
+ 	return retval;
+@@ -1899,6 +1935,7 @@ int zfcp_fsf_open_lun(struct zfcp_erp_action *erp_action)
+ 		zfcp_fsf_req_free(req);
+ 		erp_action->fsf_req_id = 0;
+ 	}
++	/* NOTE: DO NOT TOUCH req PAST THIS POINT! */
+ out:
+ 	spin_unlock_irq(&qdio->req_q_lock);
+ 	return retval;
+@@ -1987,6 +2024,7 @@ int zfcp_fsf_close_lun(struct zfcp_erp_action *erp_action)
+ 		zfcp_fsf_req_free(req);
+ 		erp_action->fsf_req_id = 0;
+ 	}
++	/* NOTE: DO NOT TOUCH req PAST THIS POINT! */
+ out:
+ 	spin_unlock_irq(&qdio->req_q_lock);
+ 	return retval;
+@@ -2299,6 +2337,7 @@ int zfcp_fsf_fcp_cmnd(struct scsi_cmnd *scsi_cmnd)
+ 	retval = zfcp_fsf_req_send(req);
+ 	if (unlikely(retval))
+ 		goto failed_scsi_cmnd;
++	/* NOTE: DO NOT TOUCH req PAST THIS POINT! */
+ 
+ 	goto out;
+ 
+@@ -2373,8 +2412,10 @@ struct zfcp_fsf_req *zfcp_fsf_fcp_task_mgmt(struct scsi_device *sdev,
+ 	zfcp_fc_fcp_tm(fcp_cmnd, sdev, tm_flags);
+ 
+ 	zfcp_fsf_start_timer(req, ZFCP_FSF_SCSI_ER_TIMEOUT);
+-	if (!zfcp_fsf_req_send(req))
++	if (!zfcp_fsf_req_send(req)) {
++		/* NOTE: DO NOT TOUCH req, UNTIL IT COMPLETES! */
+ 		goto out;
++	}
+ 
+ 	zfcp_fsf_req_free(req);
+ 	req = NULL;
+diff --git a/drivers/scsi/NCR5380.c b/drivers/scsi/NCR5380.c
+index 01c23d27f290..b1a663d67aea 100644
+--- a/drivers/scsi/NCR5380.c
++++ b/drivers/scsi/NCR5380.c
+@@ -710,6 +710,8 @@ static void NCR5380_main(struct work_struct *work)
+ 			NCR5380_information_transfer(instance);
+ 			done = 0;
+ 		}
++		if (!hostdata->connected)
++			NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
+ 		spin_unlock_irq(&hostdata->lock);
+ 		if (!done)
+ 			cond_resched();
+@@ -1111,8 +1113,6 @@ static bool NCR5380_select(struct Scsi_Host *instance, struct scsi_cmnd *cmd)
+ 		spin_lock_irq(&hostdata->lock);
+ 		NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE);
+ 		NCR5380_reselect(instance);
+-		if (!hostdata->connected)
+-			NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
+ 		shost_printk(KERN_ERR, instance, "reselection after won arbitration?\n");
+ 		goto out;
+ 	}
+@@ -1120,7 +1120,6 @@ static bool NCR5380_select(struct Scsi_Host *instance, struct scsi_cmnd *cmd)
+ 	if (err < 0) {
+ 		spin_lock_irq(&hostdata->lock);
+ 		NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE);
+-		NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
+ 
+ 		/* Can't touch cmd if it has been reclaimed by the scsi ML */
+ 		if (!hostdata->selecting)
+@@ -1158,7 +1157,6 @@ static bool NCR5380_select(struct Scsi_Host *instance, struct scsi_cmnd *cmd)
+ 	if (err < 0) {
+ 		shost_printk(KERN_ERR, instance, "select: REQ timeout\n");
+ 		NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE);
+-		NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
+ 		goto out;
+ 	}
+ 	if (!hostdata->selecting) {
+@@ -1764,10 +1762,8 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
+ 						scmd_printk(KERN_INFO, cmd,
+ 							"switching to slow handshake\n");
+ 						cmd->device->borken = 1;
+-						sink = 1;
+-						do_abort(instance);
+-						cmd->result = DID_ERROR << 16;
+-						/* XXX - need to source or sink data here, as appropriate */
++						do_reset(instance);
++						bus_reset_cleanup(instance);
+ 					}
+ 				} else {
+ 					/* Transfer a small chunk so that the
+@@ -1827,9 +1823,6 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
+ 					 */
+ 					NCR5380_write(TARGET_COMMAND_REG, 0);
+ 
+-					/* Enable reselect interrupts */
+-					NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
+-
+ 					maybe_release_dma_irq(instance);
+ 					return;
+ 				case MESSAGE_REJECT:
+@@ -1861,8 +1854,6 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
+ 					 */
+ 					NCR5380_write(TARGET_COMMAND_REG, 0);
+ 
+-					/* Enable reselect interrupts */
+-					NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
+ #ifdef SUN3_SCSI_VME
+ 					dregs->csr |= CSR_DMA_ENABLE;
+ #endif
+@@ -1965,7 +1956,6 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
+ 					cmd->result = DID_ERROR << 16;
+ 					complete_cmd(instance, cmd);
+ 					maybe_release_dma_irq(instance);
+-					NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
+ 					return;
+ 				}
+ 				msgout = NOP;
+diff --git a/drivers/scsi/NCR5380.h b/drivers/scsi/NCR5380.h
+index efca509b92b0..5935fd6d1a05 100644
+--- a/drivers/scsi/NCR5380.h
++++ b/drivers/scsi/NCR5380.h
+@@ -235,7 +235,7 @@ struct NCR5380_cmd {
+ #define NCR5380_PIO_CHUNK_SIZE		256
+ 
+ /* Time limit (ms) to poll registers when IRQs are disabled, e.g. during PDMA */
+-#define NCR5380_REG_POLL_TIME		15
++#define NCR5380_REG_POLL_TIME		10
+ 
+ static inline struct scsi_cmnd *NCR5380_to_scmd(struct NCR5380_cmd *ncmd_ptr)
+ {
+diff --git a/drivers/scsi/mac_scsi.c b/drivers/scsi/mac_scsi.c
+index 8b4b5b1a13d7..27364b71e833 100644
+--- a/drivers/scsi/mac_scsi.c
++++ b/drivers/scsi/mac_scsi.c
+@@ -3,6 +3,8 @@
+  *
+  * Copyright 1998, Michael Schmitz <mschmitz@lbl.gov>
+  *
++ * Copyright 2019 Finn Thain
++ *
+  * derived in part from:
+  */
+ /*
+@@ -11,6 +13,7 @@
+  * Copyright 1995, Russell King
+  */
+ 
++#include <linux/delay.h>
+ #include <linux/types.h>
+ #include <linux/module.h>
+ #include <linux/ioport.h>
+@@ -52,7 +55,7 @@ static int setup_cmd_per_lun = -1;
+ module_param(setup_cmd_per_lun, int, 0);
+ static int setup_sg_tablesize = -1;
+ module_param(setup_sg_tablesize, int, 0);
+-static int setup_use_pdma = -1;
++static int setup_use_pdma = 512;
+ module_param(setup_use_pdma, int, 0);
+ static int setup_hostid = -1;
+ module_param(setup_hostid, int, 0);
+@@ -89,101 +92,217 @@ static int __init mac_scsi_setup(char *str)
+ __setup("mac5380=", mac_scsi_setup);
+ #endif /* !MODULE */
+ 
+-/* Pseudo DMA asm originally by Ove Edlund */
+-
+-#define CP_IO_TO_MEM(s,d,n)				\
+-__asm__ __volatile__					\
+-    ("    cmp.w  #4,%2\n"				\
+-     "    bls    8f\n"					\
+-     "    move.w %1,%%d0\n"				\
+-     "    neg.b  %%d0\n"				\
+-     "    and.w  #3,%%d0\n"				\
+-     "    sub.w  %%d0,%2\n"				\
+-     "    bra    2f\n"					\
+-     " 1: move.b (%0),(%1)+\n"				\
+-     " 2: dbf    %%d0,1b\n"				\
+-     "    move.w %2,%%d0\n"				\
+-     "    lsr.w  #5,%%d0\n"				\
+-     "    bra    4f\n"					\
+-     " 3: move.l (%0),(%1)+\n"				\
+-     "31: move.l (%0),(%1)+\n"				\
+-     "32: move.l (%0),(%1)+\n"				\
+-     "33: move.l (%0),(%1)+\n"				\
+-     "34: move.l (%0),(%1)+\n"				\
+-     "35: move.l (%0),(%1)+\n"				\
+-     "36: move.l (%0),(%1)+\n"				\
+-     "37: move.l (%0),(%1)+\n"				\
+-     " 4: dbf    %%d0,3b\n"				\
+-     "    move.w %2,%%d0\n"				\
+-     "    lsr.w  #2,%%d0\n"				\
+-     "    and.w  #7,%%d0\n"				\
+-     "    bra    6f\n"					\
+-     " 5: move.l (%0),(%1)+\n"				\
+-     " 6: dbf    %%d0,5b\n"				\
+-     "    and.w  #3,%2\n"				\
+-     "    bra    8f\n"					\
+-     " 7: move.b (%0),(%1)+\n"				\
+-     " 8: dbf    %2,7b\n"				\
+-     "    moveq.l #0, %2\n"				\
+-     " 9: \n"						\
+-     ".section .fixup,\"ax\"\n"				\
+-     "    .even\n"					\
+-     "91: moveq.l #1, %2\n"				\
+-     "    jra 9b\n"					\
+-     "94: moveq.l #4, %2\n"				\
+-     "    jra 9b\n"					\
+-     ".previous\n"					\
+-     ".section __ex_table,\"a\"\n"			\
+-     "   .align 4\n"					\
+-     "   .long  1b,91b\n"				\
+-     "   .long  3b,94b\n"				\
+-     "   .long 31b,94b\n"				\
+-     "   .long 32b,94b\n"				\
+-     "   .long 33b,94b\n"				\
+-     "   .long 34b,94b\n"				\
+-     "   .long 35b,94b\n"				\
+-     "   .long 36b,94b\n"				\
+-     "   .long 37b,94b\n"				\
+-     "   .long  5b,94b\n"				\
+-     "   .long  7b,91b\n"				\
+-     ".previous"					\
+-     : "=a"(s), "=a"(d), "=d"(n)			\
+-     : "0"(s), "1"(d), "2"(n)				\
+-     : "d0")
++/*
++ * According to "Inside Macintosh: Devices", Mac OS requires disk drivers to
++ * specify the number of bytes between the delays expected from a SCSI target.
++ * This allows the operating system to "prevent bus errors when a target fails
++ * to deliver the next byte within the processor bus error timeout period."
++ * Linux SCSI drivers lack knowledge of the timing behaviour of SCSI targets
++ * so bus errors are unavoidable.
++ *
++ * If a MOVE.B instruction faults, we assume that zero bytes were transferred
++ * and simply retry. That assumption probably depends on target behaviour but
++ * seems to hold up okay. The NOP provides synchronization: without it the
++ * fault can sometimes occur after the program counter has moved past the
++ * offending instruction. Post-increment addressing can't be used.
++ */
++
++#define MOVE_BYTE(operands) \
++	asm volatile ( \
++		"1:     moveb " operands "     \n" \
++		"11:    nop                    \n" \
++		"       addq #1,%0             \n" \
++		"       subq #1,%1             \n" \
++		"40:                           \n" \
++		"                              \n" \
++		".section .fixup,\"ax\"        \n" \
++		".even                         \n" \
++		"90:    movel #1, %2           \n" \
++		"       jra 40b                \n" \
++		".previous                     \n" \
++		"                              \n" \
++		".section __ex_table,\"a\"     \n" \
++		".align  4                     \n" \
++		".long   1b,90b                \n" \
++		".long  11b,90b                \n" \
++		".previous                     \n" \
++		: "+a" (addr), "+r" (n), "+r" (result) : "a" (io))
++
++/*
++ * If a MOVE.W (or MOVE.L) instruction faults, it cannot be retried because
++ * the residual byte count would be uncertain. In that situation the MOVE_WORD
++ * macro clears n in the fixup section to abort the transfer.
++ */
++
++#define MOVE_WORD(operands) \
++	asm volatile ( \
++		"1:     movew " operands "     \n" \
++		"11:    nop                    \n" \
++		"       subq #2,%1             \n" \
++		"40:                           \n" \
++		"                              \n" \
++		".section .fixup,\"ax\"        \n" \
++		".even                         \n" \
++		"90:    movel #0, %1           \n" \
++		"       movel #2, %2           \n" \
++		"       jra 40b                \n" \
++		".previous                     \n" \
++		"                              \n" \
++		".section __ex_table,\"a\"     \n" \
++		".align  4                     \n" \
++		".long   1b,90b                \n" \
++		".long  11b,90b                \n" \
++		".previous                     \n" \
++		: "+a" (addr), "+r" (n), "+r" (result) : "a" (io))
++
++#define MOVE_16_WORDS(operands) \
++	asm volatile ( \
++		"1:     movew " operands "     \n" \
++		"2:     movew " operands "     \n" \
++		"3:     movew " operands "     \n" \
++		"4:     movew " operands "     \n" \
++		"5:     movew " operands "     \n" \
++		"6:     movew " operands "     \n" \
++		"7:     movew " operands "     \n" \
++		"8:     movew " operands "     \n" \
++		"9:     movew " operands "     \n" \
++		"10:    movew " operands "     \n" \
++		"11:    movew " operands "     \n" \
++		"12:    movew " operands "     \n" \
++		"13:    movew " operands "     \n" \
++		"14:    movew " operands "     \n" \
++		"15:    movew " operands "     \n" \
++		"16:    movew " operands "     \n" \
++		"17:    nop                    \n" \
++		"       subl  #32,%1           \n" \
++		"40:                           \n" \
++		"                              \n" \
++		".section .fixup,\"ax\"        \n" \
++		".even                         \n" \
++		"90:    movel #0, %1           \n" \
++		"       movel #2, %2           \n" \
++		"       jra 40b                \n" \
++		".previous                     \n" \
++		"                              \n" \
++		".section __ex_table,\"a\"     \n" \
++		".align  4                     \n" \
++		".long   1b,90b                \n" \
++		".long   2b,90b                \n" \
++		".long   3b,90b                \n" \
++		".long   4b,90b                \n" \
++		".long   5b,90b                \n" \
++		".long   6b,90b                \n" \
++		".long   7b,90b                \n" \
++		".long   8b,90b                \n" \
++		".long   9b,90b                \n" \
++		".long  10b,90b                \n" \
++		".long  11b,90b                \n" \
++		".long  12b,90b                \n" \
++		".long  13b,90b                \n" \
++		".long  14b,90b                \n" \
++		".long  15b,90b                \n" \
++		".long  16b,90b                \n" \
++		".long  17b,90b                \n" \
++		".previous                     \n" \
++		: "+a" (addr), "+r" (n), "+r" (result) : "a" (io))
++
++#define MAC_PDMA_DELAY		32
++
++static inline int mac_pdma_recv(void __iomem *io, unsigned char *start, int n)
++{
++	unsigned char *addr = start;
++	int result = 0;
++
++	if (n >= 1) {
++		MOVE_BYTE("%3@,%0@");
++		if (result)
++			goto out;
++	}
++	if (n >= 1 && ((unsigned long)addr & 1)) {
++		MOVE_BYTE("%3@,%0@");
++		if (result)
++			goto out;
++	}
++	while (n >= 32)
++		MOVE_16_WORDS("%3@,%0@+");
++	while (n >= 2)
++		MOVE_WORD("%3@,%0@+");
++	if (result)
++		return start - addr; /* Negated to indicate uncertain length */
++	if (n == 1)
++		MOVE_BYTE("%3@,%0@");
++out:
++	return addr - start;
++}
++
++static inline int mac_pdma_send(unsigned char *start, void __iomem *io, int n)
++{
++	unsigned char *addr = start;
++	int result = 0;
++
++	if (n >= 1) {
++		MOVE_BYTE("%0@,%3@");
++		if (result)
++			goto out;
++	}
++	if (n >= 1 && ((unsigned long)addr & 1)) {
++		MOVE_BYTE("%0@,%3@");
++		if (result)
++			goto out;
++	}
++	while (n >= 32)
++		MOVE_16_WORDS("%0@+,%3@");
++	while (n >= 2)
++		MOVE_WORD("%0@+,%3@");
++	if (result)
++		return start - addr; /* Negated to indicate uncertain length */
++	if (n == 1)
++		MOVE_BYTE("%0@,%3@");
++out:
++	return addr - start;
++}
+ 
+ static inline int macscsi_pread(struct NCR5380_hostdata *hostdata,
+                                 unsigned char *dst, int len)
+ {
+ 	u8 __iomem *s = hostdata->pdma_io + (INPUT_DATA_REG << 4);
+ 	unsigned char *d = dst;
+-	int n = len;
+-	int transferred;
++
++	hostdata->pdma_residual = len;
+ 
+ 	while (!NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
+ 	                              BASR_DRQ | BASR_PHASE_MATCH,
+ 	                              BASR_DRQ | BASR_PHASE_MATCH, HZ / 64)) {
+-		CP_IO_TO_MEM(s, d, n);
++		int bytes;
+ 
+-		transferred = d - dst - n;
+-		hostdata->pdma_residual = len - transferred;
++		bytes = mac_pdma_recv(s, d, min(hostdata->pdma_residual, 512));
+ 
+-		/* No bus error. */
+-		if (n == 0)
++		if (bytes > 0) {
++			d += bytes;
++			hostdata->pdma_residual -= bytes;
++		}
++
++		if (hostdata->pdma_residual == 0)
+ 			return 0;
+ 
+-		/* Target changed phase early? */
+ 		if (NCR5380_poll_politely2(hostdata, STATUS_REG, SR_REQ, SR_REQ,
+-		                           BUS_AND_STATUS_REG, BASR_ACK, BASR_ACK, HZ / 64) < 0)
+-			scmd_printk(KERN_ERR, hostdata->connected,
++		                           BUS_AND_STATUS_REG, BASR_ACK,
++		                           BASR_ACK, HZ / 64) < 0)
++			scmd_printk(KERN_DEBUG, hostdata->connected,
+ 			            "%s: !REQ and !ACK\n", __func__);
+ 		if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH))
+ 			return 0;
+ 
++		if (bytes == 0)
++			udelay(MAC_PDMA_DELAY);
++
++		if (bytes >= 0)
++			continue;
++
+ 		dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host,
+-		         "%s: bus error (%d/%d)\n", __func__, transferred, len);
++		         "%s: bus error (%d/%d)\n", __func__, d - dst, len);
+ 		NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
+-		d = dst + transferred;
+-		n = len - transferred;
++		return -1;
+ 	}
+ 
+ 	scmd_printk(KERN_ERR, hostdata->connected,
+@@ -192,93 +311,27 @@ static inline int macscsi_pread(struct NCR5380_hostdata *hostdata,
+ 	return -1;
+ }
+ 
+-
+-#define CP_MEM_TO_IO(s,d,n)				\
+-__asm__ __volatile__					\
+-    ("    cmp.w  #4,%2\n"				\
+-     "    bls    8f\n"					\
+-     "    move.w %0,%%d0\n"				\
+-     "    neg.b  %%d0\n"				\
+-     "    and.w  #3,%%d0\n"				\
+-     "    sub.w  %%d0,%2\n"				\
+-     "    bra    2f\n"					\
+-     " 1: move.b (%0)+,(%1)\n"				\
+-     " 2: dbf    %%d0,1b\n"				\
+-     "    move.w %2,%%d0\n"				\
+-     "    lsr.w  #5,%%d0\n"				\
+-     "    bra    4f\n"					\
+-     " 3: move.l (%0)+,(%1)\n"				\
+-     "31: move.l (%0)+,(%1)\n"				\
+-     "32: move.l (%0)+,(%1)\n"				\
+-     "33: move.l (%0)+,(%1)\n"				\
+-     "34: move.l (%0)+,(%1)\n"				\
+-     "35: move.l (%0)+,(%1)\n"				\
+-     "36: move.l (%0)+,(%1)\n"				\
+-     "37: move.l (%0)+,(%1)\n"				\
+-     " 4: dbf    %%d0,3b\n"				\
+-     "    move.w %2,%%d0\n"				\
+-     "    lsr.w  #2,%%d0\n"				\
+-     "    and.w  #7,%%d0\n"				\
+-     "    bra    6f\n"					\
+-     " 5: move.l (%0)+,(%1)\n"				\
+-     " 6: dbf    %%d0,5b\n"				\
+-     "    and.w  #3,%2\n"				\
+-     "    bra    8f\n"					\
+-     " 7: move.b (%0)+,(%1)\n"				\
+-     " 8: dbf    %2,7b\n"				\
+-     "    moveq.l #0, %2\n"				\
+-     " 9: \n"						\
+-     ".section .fixup,\"ax\"\n"				\
+-     "    .even\n"					\
+-     "91: moveq.l #1, %2\n"				\
+-     "    jra 9b\n"					\
+-     "94: moveq.l #4, %2\n"				\
+-     "    jra 9b\n"					\
+-     ".previous\n"					\
+-     ".section __ex_table,\"a\"\n"			\
+-     "   .align 4\n"					\
+-     "   .long  1b,91b\n"				\
+-     "   .long  3b,94b\n"				\
+-     "   .long 31b,94b\n"				\
+-     "   .long 32b,94b\n"				\
+-     "   .long 33b,94b\n"				\
+-     "   .long 34b,94b\n"				\
+-     "   .long 35b,94b\n"				\
+-     "   .long 36b,94b\n"				\
+-     "   .long 37b,94b\n"				\
+-     "   .long  5b,94b\n"				\
+-     "   .long  7b,91b\n"				\
+-     ".previous"					\
+-     : "=a"(s), "=a"(d), "=d"(n)			\
+-     : "0"(s), "1"(d), "2"(n)				\
+-     : "d0")
+-
+ static inline int macscsi_pwrite(struct NCR5380_hostdata *hostdata,
+                                  unsigned char *src, int len)
+ {
+ 	unsigned char *s = src;
+ 	u8 __iomem *d = hostdata->pdma_io + (OUTPUT_DATA_REG << 4);
+-	int n = len;
+-	int transferred;
++
++	hostdata->pdma_residual = len;
+ 
+ 	while (!NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
+ 	                              BASR_DRQ | BASR_PHASE_MATCH,
+ 	                              BASR_DRQ | BASR_PHASE_MATCH, HZ / 64)) {
+-		CP_MEM_TO_IO(s, d, n);
++		int bytes;
+ 
+-		transferred = s - src - n;
+-		hostdata->pdma_residual = len - transferred;
++		bytes = mac_pdma_send(s, d, min(hostdata->pdma_residual, 512));
+ 
+-		/* Target changed phase early? */
+-		if (NCR5380_poll_politely2(hostdata, STATUS_REG, SR_REQ, SR_REQ,
+-		                           BUS_AND_STATUS_REG, BASR_ACK, BASR_ACK, HZ / 64) < 0)
+-			scmd_printk(KERN_ERR, hostdata->connected,
+-			            "%s: !REQ and !ACK\n", __func__);
+-		if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH))
+-			return 0;
++		if (bytes > 0) {
++			s += bytes;
++			hostdata->pdma_residual -= bytes;
++		}
+ 
+-		/* No bus error. */
+-		if (n == 0) {
++		if (hostdata->pdma_residual == 0) {
+ 			if (NCR5380_poll_politely(hostdata, TARGET_COMMAND_REG,
+ 			                          TCR_LAST_BYTE_SENT,
+ 			                          TCR_LAST_BYTE_SENT, HZ / 64) < 0)
+@@ -287,17 +340,29 @@ static inline int macscsi_pwrite(struct NCR5380_hostdata *hostdata,
+ 			return 0;
+ 		}
+ 
++		if (NCR5380_poll_politely2(hostdata, STATUS_REG, SR_REQ, SR_REQ,
++		                           BUS_AND_STATUS_REG, BASR_ACK,
++		                           BASR_ACK, HZ / 64) < 0)
++			scmd_printk(KERN_DEBUG, hostdata->connected,
++			            "%s: !REQ and !ACK\n", __func__);
++		if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH))
++			return 0;
++
++		if (bytes == 0)
++			udelay(MAC_PDMA_DELAY);
++
++		if (bytes >= 0)
++			continue;
++
+ 		dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host,
+-		         "%s: bus error (%d/%d)\n", __func__, transferred, len);
++		         "%s: bus error (%d/%d)\n", __func__, s - src, len);
+ 		NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
+-		s = src + transferred;
+-		n = len - transferred;
++		return -1;
+ 	}
+ 
+ 	scmd_printk(KERN_ERR, hostdata->connected,
+ 	            "%s: phase mismatch or !DRQ\n", __func__);
+ 	NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
+-
+ 	return -1;
+ }
+ 
+@@ -305,7 +370,7 @@ static int macscsi_dma_xfer_len(struct NCR5380_hostdata *hostdata,
+                                 struct scsi_cmnd *cmd)
+ {
+ 	if (hostdata->flags & FLAG_NO_PSEUDO_DMA ||
+-	    cmd->SCp.this_residual < 16)
++	    cmd->SCp.this_residual < setup_use_pdma)
+ 		return 0;
+ 
+ 	return cmd->SCp.this_residual;
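Note that with the new default of 512, setup_use_pdma changes meaning from an on/off flag to a minimum transfer length: macscsi_dma_xfer_len() now falls back to PIO for any transfer shorter than that many bytes.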
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index 293f5cf524d7..2166df5e5cb8 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -6168,7 +6168,8 @@ megasas_get_target_prop(struct megasas_instance *instance,
+ 	int ret;
+ 	struct megasas_cmd *cmd;
+ 	struct megasas_dcmd_frame *dcmd;
+-	u16 targetId = (sdev->channel % 2) + sdev->id;
++	u16 targetId = ((sdev->channel % 2) * MEGASAS_MAX_DEV_PER_CHANNEL) +
++			sdev->id;
+ 
+ 	cmd = megasas_get_cmd(instance);
+ 
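As a worked example of the corrected mapping (assuming MEGASAS_MAX_DEV_PER_CHANNEL is 128, as the driver defines it): a device at channel 3, id 5 now yields targetId = (3 % 2) * 128 + 5 = 133, whereas the old formula produced (3 % 2) + 5 = 6 and so collided with the device at channel 2, id 6.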
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 07dfc17d4824..00a6650dfefd 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -71,11 +71,11 @@ int scsi_init_sense_cache(struct Scsi_Host *shost)
+ 	struct kmem_cache *cache;
+ 	int ret = 0;
+ 
++	mutex_lock(&scsi_sense_cache_mutex);
+ 	cache = scsi_select_sense_cache(shost->unchecked_isa_dma);
+ 	if (cache)
+-		return 0;
++		goto exit;
+ 
+-	mutex_lock(&scsi_sense_cache_mutex);
+ 	if (shost->unchecked_isa_dma) {
+ 		scsi_sense_isadma_cache =
+ 			kmem_cache_create("scsi_sense_cache(DMA)",
+@@ -91,7 +91,7 @@ int scsi_init_sense_cache(struct Scsi_Host *shost)
+ 		if (!scsi_sense_cache)
+ 			ret = -ENOMEM;
+ 	}
+-
++ exit:
+ 	mutex_unlock(&scsi_sense_cache_mutex);
+ 	return ret;
+ }
+diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
+index a340af797a85..bd70843940dc 100644
+--- a/drivers/scsi/sd_zbc.c
++++ b/drivers/scsi/sd_zbc.c
+@@ -431,7 +431,7 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, unsigned char *buf)
+ {
+ 	struct gendisk *disk = sdkp->disk;
+ 	unsigned int nr_zones;
+-	u32 zone_blocks;
++	u32 zone_blocks = 0;
+ 	int ret;
+ 
+ 	if (!sd_is_zoned(sdkp))
+diff --git a/drivers/spi/spi-rockchip.c b/drivers/spi/spi-rockchip.c
+index 3912526ead66..19f6a76f1c07 100644
+--- a/drivers/spi/spi-rockchip.c
++++ b/drivers/spi/spi-rockchip.c
+@@ -425,7 +425,7 @@ static int rockchip_spi_prepare_dma(struct rockchip_spi *rs,
+ 			.direction = DMA_MEM_TO_DEV,
+ 			.dst_addr = rs->dma_addr_tx,
+ 			.dst_addr_width = rs->n_bytes,
+-			.dst_maxburst = rs->fifo_len / 2,
++			.dst_maxburst = rs->fifo_len / 4,
+ 		};
+ 
+ 		dmaengine_slave_config(master->dma_tx, &txconf);
+@@ -526,7 +526,7 @@ static void rockchip_spi_config(struct rockchip_spi *rs,
+ 	else
+ 		writel_relaxed(rs->fifo_len / 2 - 1, rs->regs + ROCKCHIP_SPI_RXFTLR);
+ 
+-	writel_relaxed(rs->fifo_len / 2 - 1, rs->regs + ROCKCHIP_SPI_DMATDLR);
++	writel_relaxed(rs->fifo_len / 2, rs->regs + ROCKCHIP_SPI_DMATDLR);
+ 	writel_relaxed(0, rs->regs + ROCKCHIP_SPI_DMARDLR);
+ 	writel_relaxed(dmacr, rs->regs + ROCKCHIP_SPI_DMACR);
+ 
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index a83fcddf1dad..7f6fb383d7a7 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -2281,11 +2281,6 @@ int spi_register_controller(struct spi_controller *ctlr)
+ 	if (status)
+ 		return status;
+ 
+-	/* even if it's just one always-selected device, there must
+-	 * be at least one chipselect
+-	 */
+-	if (ctlr->num_chipselect == 0)
+-		return -EINVAL;
+ 	if (ctlr->bus_num >= 0) {
+ 		/* devices with a fixed bus num must check-in with the num */
+ 		mutex_lock(&board_lock);
+@@ -2356,6 +2351,13 @@ int spi_register_controller(struct spi_controller *ctlr)
+ 		}
+ 	}
+ 
++	/*
++	 * Even if it's just one always-selected device, there must
++	 * be at least one chipselect.
++	 */
++	if (!ctlr->num_chipselect)
++		return -EINVAL;
++
+ 	status = device_add(&ctlr->dev);
+ 	if (status < 0) {
+ 		/* free bus id */
+diff --git a/drivers/staging/media/davinci_vpfe/vpfe_video.c b/drivers/staging/media/davinci_vpfe/vpfe_video.c
+index 510202a3b091..84cca18e3e9d 100644
+--- a/drivers/staging/media/davinci_vpfe/vpfe_video.c
++++ b/drivers/staging/media/davinci_vpfe/vpfe_video.c
+@@ -419,6 +419,9 @@ static int vpfe_open(struct file *file)
+ 	/* If decoder is not initialized, initialize it */
+ 	if (!video->initialized && vpfe_update_pipe_state(video)) {
+ 		mutex_unlock(&video->lock);
++		v4l2_fh_del(&handle->vfh);
++		v4l2_fh_exit(&handle->vfh);
++		kfree(handle);
+ 		return -ENODEV;
+ 	}
+ 	/* Increment device users counter */
+diff --git a/drivers/staging/media/imx/imx7-mipi-csis.c b/drivers/staging/media/imx/imx7-mipi-csis.c
+index 2ddcc42ab8ff..e9d621e19d6d 100644
+--- a/drivers/staging/media/imx/imx7-mipi-csis.c
++++ b/drivers/staging/media/imx/imx7-mipi-csis.c
+@@ -455,13 +455,9 @@ static void mipi_csis_set_params(struct csi_state *state)
+ 			MIPI_CSIS_CMN_CTRL_UPDATE_SHADOW_CTRL);
+ }
+ 
+-static void mipi_csis_clk_enable(struct csi_state *state)
++static int mipi_csis_clk_enable(struct csi_state *state)
+ {
+-	int ret;
+-
+-	ret = clk_bulk_prepare_enable(state->num_clks, state->clks);
+-	if (ret < 0)
+-		dev_err(state->dev, "failed to enable clocks\n");
++	return clk_bulk_prepare_enable(state->num_clks, state->clks);
+ }
+ 
+ static void mipi_csis_clk_disable(struct csi_state *state)
+@@ -985,7 +981,11 @@ static int mipi_csis_probe(struct platform_device *pdev)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	mipi_csis_clk_enable(state);
++	ret = mipi_csis_clk_enable(state);
++	if (ret < 0) {
++		dev_err(state->dev, "failed to enable clocks: %d\n", ret);
++		return ret;
++	}
+ 
+ 	ret = devm_request_irq(dev, state->irq, mipi_csis_irq_handler,
+ 			       0, dev_name(dev), state);
+diff --git a/drivers/target/iscsi/iscsi_target_auth.c b/drivers/target/iscsi/iscsi_target_auth.c
+index 4e680d753941..e2fa3a3bc81d 100644
+--- a/drivers/target/iscsi/iscsi_target_auth.c
++++ b/drivers/target/iscsi/iscsi_target_auth.c
+@@ -89,6 +89,12 @@ out:
+ 	return CHAP_DIGEST_UNKNOWN;
+ }
+ 
++static void chap_close(struct iscsi_conn *conn)
++{
++	kfree(conn->auth_protocol);
++	conn->auth_protocol = NULL;
++}
++
+ static struct iscsi_chap *chap_server_open(
+ 	struct iscsi_conn *conn,
+ 	struct iscsi_node_auth *auth,
+@@ -126,7 +132,7 @@ static struct iscsi_chap *chap_server_open(
+ 	case CHAP_DIGEST_UNKNOWN:
+ 	default:
+ 		pr_err("Unsupported CHAP_A value\n");
+-		kfree(conn->auth_protocol);
++		chap_close(conn);
+ 		return NULL;
+ 	}
+ 
+@@ -141,19 +147,13 @@ static struct iscsi_chap *chap_server_open(
+ 	 * Generate Challenge.
+ 	 */
+ 	if (chap_gen_challenge(conn, 1, aic_str, aic_len) < 0) {
+-		kfree(conn->auth_protocol);
++		chap_close(conn);
+ 		return NULL;
+ 	}
+ 
+ 	return chap;
+ }
+ 
+-static void chap_close(struct iscsi_conn *conn)
+-{
+-	kfree(conn->auth_protocol);
+-	conn->auth_protocol = NULL;
+-}
+-
+ static int chap_server_compute_md5(
+ 	struct iscsi_conn *conn,
+ 	struct iscsi_node_auth *auth,
+diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
+index fa783531ee88..a02448105527 100644
+--- a/drivers/usb/core/devio.c
++++ b/drivers/usb/core/devio.c
+@@ -63,7 +63,7 @@ struct usb_dev_state {
+ 	unsigned int discsignr;
+ 	struct pid *disc_pid;
+ 	const struct cred *cred;
+-	void __user *disccontext;
++	sigval_t disccontext;
+ 	unsigned long ifclaimed;
+ 	u32 disabled_bulk_eps;
+ 	bool privileges_dropped;
+@@ -90,6 +90,7 @@ struct async {
+ 	unsigned int ifnum;
+ 	void __user *userbuffer;
+ 	void __user *userurb;
++	sigval_t userurb_sigval;
+ 	struct urb *urb;
+ 	struct usb_memory *usbm;
+ 	unsigned int mem_usage;
+@@ -582,22 +583,19 @@ static void async_completed(struct urb *urb)
+ {
+ 	struct async *as = urb->context;
+ 	struct usb_dev_state *ps = as->ps;
+-	struct kernel_siginfo sinfo;
+ 	struct pid *pid = NULL;
+ 	const struct cred *cred = NULL;
+ 	unsigned long flags;
+-	int signr;
++	sigval_t addr;
++	int signr, errno;
+ 
+ 	spin_lock_irqsave(&ps->lock, flags);
+ 	list_move_tail(&as->asynclist, &ps->async_completed);
+ 	as->status = urb->status;
+ 	signr = as->signr;
+ 	if (signr) {
+-		clear_siginfo(&sinfo);
+-		sinfo.si_signo = as->signr;
+-		sinfo.si_errno = as->status;
+-		sinfo.si_code = SI_ASYNCIO;
+-		sinfo.si_addr = as->userurb;
++		errno = as->status;
++		addr = as->userurb_sigval;
+ 		pid = get_pid(as->pid);
+ 		cred = get_cred(as->cred);
+ 	}
+@@ -615,7 +613,7 @@ static void async_completed(struct urb *urb)
+ 	spin_unlock_irqrestore(&ps->lock, flags);
+ 
+ 	if (signr) {
+-		kill_pid_info_as_cred(sinfo.si_signo, &sinfo, pid, cred);
++		kill_pid_usb_asyncio(signr, errno, addr, pid, cred);
+ 		put_pid(pid);
+ 		put_cred(cred);
+ 	}
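If this reading of the change is right, a userspace completion handler now finds the URB pointer in si_value rather than si_addr. A minimal sketch, assuming the process installed the handler with SA_SIGINFO for the signal number it passed in usbdevfs_urb.signr (handle_done_urb is a hypothetical application helper):

	#include <signal.h>
	#include <linux/usbdevice_fs.h>

	static void urb_completed(int sig, siginfo_t *si, void *ucontext)
	{
		/* si_code is SI_ASYNCIO; si_errno carries the URB status */
		struct usbdevfs_urb *urb = si->si_value.sival_ptr;

		handle_done_urb(urb);
	}

For 32-bit submitters the same slot carries the compat pointer as sival_int, which is what the sigval_t plumbing added above exists to make work.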
+@@ -1427,7 +1425,7 @@ find_memory_area(struct usb_dev_state *ps, const struct usbdevfs_urb *uurb)
+ 
+ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb,
+ 			struct usbdevfs_iso_packet_desc __user *iso_frame_desc,
+-			void __user *arg)
++			void __user *arg, sigval_t userurb_sigval)
+ {
+ 	struct usbdevfs_iso_packet_desc *isopkt = NULL;
+ 	struct usb_host_endpoint *ep;
+@@ -1727,6 +1725,7 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 	isopkt = NULL;
+ 	as->ps = ps;
+ 	as->userurb = arg;
++	as->userurb_sigval = userurb_sigval;
+ 	if (as->usbm) {
+ 		unsigned long uurb_start = (unsigned long)uurb->buffer;
+ 
+@@ -1801,13 +1800,17 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ static int proc_submiturb(struct usb_dev_state *ps, void __user *arg)
+ {
+ 	struct usbdevfs_urb uurb;
++	sigval_t userurb_sigval;
+ 
+ 	if (copy_from_user(&uurb, arg, sizeof(uurb)))
+ 		return -EFAULT;
+ 
++	memset(&userurb_sigval, 0, sizeof(userurb_sigval));
++	userurb_sigval.sival_ptr = arg;
++
+ 	return proc_do_submiturb(ps, &uurb,
+ 			(((struct usbdevfs_urb __user *)arg)->iso_frame_desc),
+-			arg);
++			arg, userurb_sigval);
+ }
+ 
+ static int proc_unlinkurb(struct usb_dev_state *ps, void __user *arg)
+@@ -1977,7 +1980,7 @@ static int proc_disconnectsignal_compat(struct usb_dev_state *ps, void __user *a
+ 	if (copy_from_user(&ds, arg, sizeof(ds)))
+ 		return -EFAULT;
+ 	ps->discsignr = ds.signr;
+-	ps->disccontext = compat_ptr(ds.context);
++	ps->disccontext.sival_int = ds.context;
+ 	return 0;
+ }
+ 
+@@ -2005,13 +2008,17 @@ static int get_urb32(struct usbdevfs_urb *kurb,
+ static int proc_submiturb_compat(struct usb_dev_state *ps, void __user *arg)
+ {
+ 	struct usbdevfs_urb uurb;
++	sigval_t userurb_sigval;
+ 
+ 	if (get_urb32(&uurb, (struct usbdevfs_urb32 __user *)arg))
+ 		return -EFAULT;
+ 
++	memset(&userurb_sigval, 0, sizeof(userurb_sigval));
++	userurb_sigval.sival_int = ptr_to_compat(arg);
++
+ 	return proc_do_submiturb(ps, &uurb,
+ 			((struct usbdevfs_urb32 __user *)arg)->iso_frame_desc,
+-			arg);
++			arg, userurb_sigval);
+ }
+ 
+ static int processcompl_compat(struct async *as, void __user * __user *arg)
+@@ -2092,7 +2099,7 @@ static int proc_disconnectsignal(struct usb_dev_state *ps, void __user *arg)
+ 	if (copy_from_user(&ds, arg, sizeof(ds)))
+ 		return -EFAULT;
+ 	ps->discsignr = ds.signr;
+-	ps->disccontext = ds.context;
++	ps->disccontext.sival_ptr = ds.context;
+ 	return 0;
+ }
+ 
+@@ -2614,22 +2621,15 @@ const struct file_operations usbdev_file_operations = {
+ static void usbdev_remove(struct usb_device *udev)
+ {
+ 	struct usb_dev_state *ps;
+-	struct kernel_siginfo sinfo;
+ 
+ 	while (!list_empty(&udev->filelist)) {
+ 		ps = list_entry(udev->filelist.next, struct usb_dev_state, list);
+ 		destroy_all_async(ps);
+ 		wake_up_all(&ps->wait);
+ 		list_del_init(&ps->list);
+-		if (ps->discsignr) {
+-			clear_siginfo(&sinfo);
+-			sinfo.si_signo = ps->discsignr;
+-			sinfo.si_errno = EPIPE;
+-			sinfo.si_code = SI_ASYNCIO;
+-			sinfo.si_addr = ps->disccontext;
+-			kill_pid_info_as_cred(ps->discsignr, &sinfo,
+-					ps->disc_pid, ps->cred);
+-		}
++		if (ps->discsignr)
++			kill_pid_usb_asyncio(ps->discsignr, EPIPE, ps->disccontext,
++					     ps->disc_pid, ps->cred);
+ 	}
+ }
+ 
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 310eef451db8..b10d14af6416 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -3616,6 +3616,7 @@ static int hub_handle_remote_wakeup(struct usb_hub *hub, unsigned int port,
+ 	struct usb_device *hdev;
+ 	struct usb_device *udev;
+ 	int connect_change = 0;
++	u16 link_state;
+ 	int ret;
+ 
+ 	hdev = hub->hdev;
+@@ -3625,9 +3626,11 @@ static int hub_handle_remote_wakeup(struct usb_hub *hub, unsigned int port,
+ 			return 0;
+ 		usb_clear_port_feature(hdev, port, USB_PORT_FEAT_C_SUSPEND);
+ 	} else {
++		link_state = portstatus & USB_PORT_STAT_LINK_STATE;
+ 		if (!udev || udev->state != USB_STATE_SUSPENDED ||
+-				 (portstatus & USB_PORT_STAT_LINK_STATE) !=
+-				 USB_SS_PORT_LS_U0)
++				(link_state != USB_SS_PORT_LS_U0 &&
++				 link_state != USB_SS_PORT_LS_U1 &&
++				 link_state != USB_SS_PORT_LS_U2))
+ 			return 0;
+ 	}
+ 
+diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
+index df51a35cf537..8beacbee2553 100644
+--- a/drivers/vhost/net.c
++++ b/drivers/vhost/net.c
+@@ -36,7 +36,7 @@
+ 
+ #include "vhost.h"
+ 
+-static int experimental_zcopytx = 1;
++static int experimental_zcopytx = 0;
+ module_param(experimental_zcopytx, int, 0444);
+ MODULE_PARM_DESC(experimental_zcopytx, "Enable Zero Copy TX;"
+ 		                       " 1 -Enable; 0 - Disable");
+diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
+index d37dd5bb7a8f..559768dc2567 100644
+--- a/drivers/xen/balloon.c
++++ b/drivers/xen/balloon.c
+@@ -538,8 +538,15 @@ static void balloon_process(struct work_struct *work)
+ 				state = reserve_additional_memory();
+ 		}
+ 
+-		if (credit < 0)
+-			state = decrease_reservation(-credit, GFP_BALLOON);
++		if (credit < 0) {
++			long n_pages;
++
++			n_pages = min(-credit, si_mem_available());
++			state = decrease_reservation(n_pages, GFP_BALLOON);
++			if (state == BP_DONE && n_pages != -credit &&
++			    n_pages < totalreserve_pages)
++				state = BP_EAGAIN;
++		}
+ 
+ 		state = update_schedule(state);
+ 
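Concretely: if credit is -1024 but si_mem_available() reports only 600 free pages, decrease_reservation() is now asked for 600 pages instead of 1024, and since fewer pages than requested were ballooned out (and, per the added check, the count stays below totalreserve_pages), the state becomes BP_EAGAIN and the remaining 424 pages are retried on a later pass instead of being forced out in one potentially OOM-inducing step.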
+@@ -578,6 +585,9 @@ static int add_ballooned_pages(int nr_pages)
+ 		}
+ 	}
+ 
++	if (si_mem_available() < nr_pages)
++		return -ENOMEM;
++
+ 	st = decrease_reservation(nr_pages, GFP_USER);
+ 	if (st != BP_DONE)
+ 		return -ENOMEM;
+@@ -710,7 +720,7 @@ static int __init balloon_init(void)
+ 	balloon_stats.schedule_delay = 1;
+ 	balloon_stats.max_schedule_delay = 32;
+ 	balloon_stats.retry_count = 1;
+-	balloon_stats.max_retry_count = RETRY_UNLIMITED;
++	balloon_stats.max_retry_count = 4;
+ 
+ #ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG
+ 	set_online_page_callback(&xen_online_page);
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index 117e76b2f939..cf4ec27407db 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -1293,7 +1293,7 @@ void rebind_evtchn_irq(int evtchn, int irq)
+ }
+ 
+ /* Rebind an evtchn so that it gets delivered to a specific cpu */
+-int xen_rebind_evtchn_to_cpu(int evtchn, unsigned tcpu)
++static int xen_rebind_evtchn_to_cpu(int evtchn, unsigned int tcpu)
+ {
+ 	struct evtchn_bind_vcpu bind_vcpu;
+ 	int masked;
+@@ -1327,7 +1327,6 @@ int xen_rebind_evtchn_to_cpu(int evtchn, unsigned tcpu)
+ 
+ 	return 0;
+ }
+-EXPORT_SYMBOL_GPL(xen_rebind_evtchn_to_cpu);
+ 
+ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
+ 			    bool force)
+@@ -1341,6 +1340,15 @@ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
+ 	return ret;
+ }
+ 
++/* To be called with desc->lock held. */
++int xen_set_affinity_evtchn(struct irq_desc *desc, unsigned int tcpu)
++{
++	struct irq_data *d = irq_desc_get_irq_data(desc);
++
++	return set_affinity_irq(d, cpumask_of(tcpu), false);
++}
++EXPORT_SYMBOL_GPL(xen_set_affinity_evtchn);
++
+ static void enable_dynirq(struct irq_data *data)
+ {
+ 	int evtchn = evtchn_from_irq(data->irq);
+diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
+index 6d1a5e58968f..47c70b826a6a 100644
+--- a/drivers/xen/evtchn.c
++++ b/drivers/xen/evtchn.c
+@@ -447,7 +447,7 @@ static void evtchn_bind_interdom_next_vcpu(int evtchn)
+ 	this_cpu_write(bind_last_selected_cpu, selected_cpu);
+ 
+ 	/* unmask expects irqs to be disabled */
+-	xen_rebind_evtchn_to_cpu(evtchn, selected_cpu);
++	xen_set_affinity_evtchn(desc, selected_cpu);
+ 	raw_spin_unlock_irqrestore(&desc->lock, flags);
+ }
+ 
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index ef11808b592b..99f1820637e4 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -2713,6 +2713,11 @@ out_only_mutex:
+ 		 * for detecting, at fsync time, if the inode isn't yet in the
+ 		 * log tree or it's there but not up to date.
+ 		 */
++		struct timespec64 now = current_time(inode);
++
++		inode_inc_iversion(inode);
++		inode->i_mtime = now;
++		inode->i_ctime = now;
+ 		trans = btrfs_start_transaction(root, 1);
+ 		if (IS_ERR(trans)) {
+ 			err = PTR_ERR(trans);
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index fc93e0d6e48d..c7bcf8ba82ca 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -3308,6 +3308,30 @@ int btrfs_free_log_root_tree(struct btrfs_trans_handle *trans,
+ 	return 0;
+ }
+ 
++/*
++ * Check if an inode was logged in the current transaction. We can't always rely
++ * on an inode's logged_trans value, because it's an in-memory only field and
++ * therefore not persisted. This means that its value is lost if the inode gets
++ * evicted and loaded again from disk (in which case it has a value of 0, and
++ * certainly it is smaller than any possible transaction ID). When that happens
++ * the full_sync flag is set in the inode's runtime flags, so in that case we
++ * assume eviction happened and ignore the logged_trans value, assuming the
++ * worst case, that the inode was logged before in the current transaction.
++ */
++static bool inode_logged(struct btrfs_trans_handle *trans,
++			 struct btrfs_inode *inode)
++{
++	if (inode->logged_trans == trans->transid)
++		return true;
++
++	if (inode->last_trans == trans->transid &&
++	    test_bit(BTRFS_INODE_NEEDS_FULL_SYNC, &inode->runtime_flags) &&
++	    !test_bit(BTRFS_FS_LOG_RECOVERING, &trans->fs_info->flags))
++		return true;
++
++	return false;
++}
++
+ /*
+  * If both a file and directory are logged, and unlinks or renames are
+  * mixed in, we have a few interesting corners:
+@@ -3342,7 +3366,7 @@ int btrfs_del_dir_entries_in_log(struct btrfs_trans_handle *trans,
+ 	int bytes_del = 0;
+ 	u64 dir_ino = btrfs_ino(dir);
+ 
+-	if (dir->logged_trans < trans->transid)
++	if (!inode_logged(trans, dir))
+ 		return 0;
+ 
+ 	ret = join_running_log_trans(root);
+@@ -3447,7 +3471,7 @@ int btrfs_del_inode_ref_in_log(struct btrfs_trans_handle *trans,
+ 	u64 index;
+ 	int ret;
+ 
+-	if (inode->logged_trans < trans->transid)
++	if (!inode_logged(trans, inode))
+ 		return 0;
+ 
+ 	ret = join_running_log_trans(root);
+@@ -5407,9 +5431,19 @@ log_extents:
+ 		}
+ 	}
+ 
++	/*
++	 * Don't update last_log_commit if we logged that an inode exists after
++	 * it was loaded to memory (full_sync bit set).
++	 * This is to prevent data loss when we do a write to the inode, then
++	 * the inode gets evicted after all delalloc was flushed, then we log
++	 * that it exists (due to a rename, for example) and then fsync it. This last
++	 * fsync would do nothing (not logging the extents previously written).
++	 */
+ 	spin_lock(&inode->lock);
+ 	inode->logged_trans = trans->transid;
+-	inode->last_log_commit = inode->last_sub_trans;
++	if (inode_only != LOG_INODE_EXISTS ||
++	    !test_bit(BTRFS_INODE_NEEDS_FULL_SYNC, &inode->runtime_flags))
++		inode->last_log_commit = inode->last_sub_trans;
+ 	spin_unlock(&inode->lock);
+ out_unlock:
+ 	mutex_unlock(&inode->log_mutex);
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index 9f53c3d99304..06ebde492b31 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -1006,7 +1006,7 @@ ceph_direct_read_write(struct kiocb *iocb, struct iov_iter *iter,
+ 			 * may block.
+ 			 */
+ 			truncate_inode_pages_range(inode->i_mapping, pos,
+-					(pos+len) | (PAGE_SIZE - 1));
++						   PAGE_ALIGN(pos + len) - 1);
+ 
+ 			req->r_mtime = mtime;
+ 		}
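A quick check with PAGE_SIZE = 4096 shows the off-by-one-page bug being fixed here: for pos + len = 8192, already page aligned, the old expression (pos + len) | (PAGE_SIZE - 1) evaluates to 12287 and truncated one page too many, while PAGE_ALIGN(pos + len) - 1 gives the intended 8191. For unaligned values the two expressions agree.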
+diff --git a/fs/cifs/cifs_fs_sb.h b/fs/cifs/cifs_fs_sb.h
+index ed49222abecb..afa56237a0c3 100644
+--- a/fs/cifs/cifs_fs_sb.h
++++ b/fs/cifs/cifs_fs_sb.h
+@@ -83,5 +83,10 @@ struct cifs_sb_info {
+ 	 * failover properly.
+ 	 */
+ 	char *origin_fullpath; /* \\HOST\SHARE\[OPTIONAL PATH] */
++	/*
++	 * Indicate whether serverino option was turned off later
++	 * (cifs_autodisable_serverino) in order to match new mounts.
++	 */
++	bool mnt_cifs_serverino_autodisabled;
+ };
+ #endif				/* _CIFS_FS_SB_H */
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index e9507fba0b36..cccc89c73140 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -1221,11 +1221,11 @@ next_pdu:
+ 					 atomic_read(&midCount));
+ 				cifs_dump_mem("Received Data is: ", bufs[i],
+ 					      HEADER_SIZE(server));
++				smb2_add_credits_from_hdr(bufs[i], server);
+ #ifdef CONFIG_CIFS_DEBUG2
+ 				if (server->ops->dump_detail)
+ 					server->ops->dump_detail(bufs[i],
+ 								 server);
+-				smb2_add_credits_from_hdr(bufs[i], server);
+ 				cifs_dump_mids(server);
+ #endif /* CIFS_DEBUG2 */
+ 			}
+@@ -3455,12 +3455,16 @@ compare_mount_options(struct super_block *sb, struct cifs_mnt_data *mnt_data)
+ {
+ 	struct cifs_sb_info *old = CIFS_SB(sb);
+ 	struct cifs_sb_info *new = mnt_data->cifs_sb;
++	unsigned int oldflags = old->mnt_cifs_flags & CIFS_MOUNT_MASK;
++	unsigned int newflags = new->mnt_cifs_flags & CIFS_MOUNT_MASK;
+ 
+ 	if ((sb->s_flags & CIFS_MS_MASK) != (mnt_data->flags & CIFS_MS_MASK))
+ 		return 0;
+ 
+-	if ((old->mnt_cifs_flags & CIFS_MOUNT_MASK) !=
+-	    (new->mnt_cifs_flags & CIFS_MOUNT_MASK))
++	if (old->mnt_cifs_serverino_autodisabled)
++		newflags &= ~CIFS_MOUNT_SERVER_INUM;
++
++	if (oldflags != newflags)
+ 		return 0;
+ 
+ 	/*
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index 538fd7d807e4..9e71619ce7bc 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -2371,6 +2371,8 @@ cifs_setattr_nounix(struct dentry *direntry, struct iattr *attrs)
+ 	struct inode *inode = d_inode(direntry);
+ 	struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
+ 	struct cifsInodeInfo *cifsInode = CIFS_I(inode);
++	struct cifsFileInfo *wfile;
++	struct cifs_tcon *tcon;
+ 	char *full_path = NULL;
+ 	int rc = -EACCES;
+ 	__u32 dosattr = 0;
+@@ -2417,6 +2419,20 @@ cifs_setattr_nounix(struct dentry *direntry, struct iattr *attrs)
+ 	mapping_set_error(inode->i_mapping, rc);
+ 	rc = 0;
+ 
++	if (attrs->ia_valid & ATTR_MTIME) {
++		rc = cifs_get_writable_file(cifsInode, false, &wfile);
++		if (!rc) {
++			tcon = tlink_tcon(wfile->tlink);
++			rc = tcon->ses->server->ops->flush(xid, tcon, &wfile->fid);
++			cifsFileInfo_put(wfile);
++			if (rc)
++				return rc;
++		} else if (rc != -EBADF)
++			return rc;
++		else
++			rc = 0;
++	}
++
+ 	if (attrs->ia_valid & ATTR_SIZE) {
+ 		rc = cifs_set_file_size(inode, attrs, xid, full_path);
+ 		if (rc != 0)
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index 0dc6f08020ac..f4e14cebe942 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -539,6 +539,7 @@ cifs_autodisable_serverino(struct cifs_sb_info *cifs_sb)
+ 			tcon = cifs_sb_master_tcon(cifs_sb);
+ 
+ 		cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_SERVER_INUM;
++		cifs_sb->mnt_cifs_serverino_autodisabled = true;
+ 		cifs_dbg(VFS, "Autodisabling the use of server inode numbers on %s.\n",
+ 			 tcon ? tcon->treeName : "new server");
+ 		cifs_dbg(VFS, "The server doesn't seem to support them properly or the files might be on different servers (DFS).\n");
+diff --git a/fs/cifs/smb2inode.c b/fs/cifs/smb2inode.c
+index 278405d26c47..d8d9cdfa30b6 100644
+--- a/fs/cifs/smb2inode.c
++++ b/fs/cifs/smb2inode.c
+@@ -120,6 +120,8 @@ smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 				SMB2_O_INFO_FILE, 0,
+ 				sizeof(struct smb2_file_all_info) +
+ 					  PATH_MAX * 2, 0, NULL);
++		if (rc)
++			goto finished;
+ 		smb2_set_next_command(tcon, &rqst[num_rqst]);
+ 		smb2_set_related(&rqst[num_rqst++]);
+ 		trace_smb3_query_info_compound_enter(xid, ses->Suid, tcon->tid,
+@@ -147,6 +149,8 @@ smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 					COMPOUND_FID, current->tgid,
+ 					FILE_DISPOSITION_INFORMATION,
+ 					SMB2_O_INFO_FILE, 0, data, size);
++		if (rc)
++			goto finished;
+ 		smb2_set_next_command(tcon, &rqst[num_rqst]);
+ 		smb2_set_related(&rqst[num_rqst++]);
+ 		trace_smb3_rmdir_enter(xid, ses->Suid, tcon->tid, full_path);
+@@ -163,6 +167,8 @@ smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 					COMPOUND_FID, current->tgid,
+ 					FILE_END_OF_FILE_INFORMATION,
+ 					SMB2_O_INFO_FILE, 0, data, size);
++		if (rc)
++			goto finished;
+ 		smb2_set_next_command(tcon, &rqst[num_rqst]);
+ 		smb2_set_related(&rqst[num_rqst++]);
+ 		trace_smb3_set_eof_enter(xid, ses->Suid, tcon->tid, full_path);
+@@ -180,6 +186,8 @@ smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 					COMPOUND_FID, current->tgid,
+ 					FILE_BASIC_INFORMATION,
+ 					SMB2_O_INFO_FILE, 0, data, size);
++		if (rc)
++			goto finished;
+ 		smb2_set_next_command(tcon, &rqst[num_rqst]);
+ 		smb2_set_related(&rqst[num_rqst++]);
+ 		trace_smb3_set_info_compound_enter(xid, ses->Suid, tcon->tid,
+@@ -206,6 +214,8 @@ smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 					COMPOUND_FID, current->tgid,
+ 					FILE_RENAME_INFORMATION,
+ 					SMB2_O_INFO_FILE, 0, data, size);
++		if (rc)
++			goto finished;
+ 		smb2_set_next_command(tcon, &rqst[num_rqst]);
+ 		smb2_set_related(&rqst[num_rqst++]);
+ 		trace_smb3_rename_enter(xid, ses->Suid, tcon->tid, full_path);
+@@ -231,6 +241,8 @@ smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 					COMPOUND_FID, current->tgid,
+ 					FILE_LINK_INFORMATION,
+ 					SMB2_O_INFO_FILE, 0, data, size);
++		if (rc)
++			goto finished;
+ 		smb2_set_next_command(tcon, &rqst[num_rqst]);
+ 		smb2_set_related(&rqst[num_rqst++]);
+ 		trace_smb3_hardlink_enter(xid, ses->Suid, tcon->tid, full_path);
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index aa61dcf471b3..db191975a994 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -705,8 +705,51 @@ int open_shroot(unsigned int xid, struct cifs_tcon *tcon, struct cifs_fid *pfid)
+ 
+ 	smb2_set_related(&rqst[1]);
+ 
++	/*
++	 * We do not hold the lock for the open because in case
++	 * SMB2_open needs to reconnect, it will end up calling
++	 * cifs_mark_open_files_invalid() which takes the lock again
++	 * thus causing a deadlock
++	 */
++
++	mutex_unlock(&tcon->crfid.fid_mutex);
+ 	rc = compound_send_recv(xid, ses, flags, 2, rqst,
+ 				resp_buftype, rsp_iov);
++	mutex_lock(&tcon->crfid.fid_mutex);
++
++	/*
++	 * Now we need to check again as the cached root might have
++	 * been successfully re-opened from a concurrent process
++	 */
++
++	if (tcon->crfid.is_valid) {
++		/* work was already done */
++
++		/* stash fids for close() later */
++		struct cifs_fid fid = {
++			.persistent_fid = pfid->persistent_fid,
++			.volatile_fid = pfid->volatile_fid,
++		};
++
++		/*
++		 * caller expects this func to set pfid to a valid
++		 * cached root, so we copy the existing one and get a
++		 * reference.
++		 */
++		memcpy(pfid, tcon->crfid.fid, sizeof(*pfid));
++		kref_get(&tcon->crfid.refcount);
++
++		mutex_unlock(&tcon->crfid.fid_mutex);
++
++		if (rc == 0) {
++			/* close extra handle outside of crit sec */
++			SMB2_close(xid, tcon, fid.persistent_fid, fid.volatile_fid);
++		}
++		goto oshr_free;
++	}
++
++	/* Cached root is still invalid, continue normally */
++
+ 	if (rc)
+ 		goto oshr_exit;
+ 
+@@ -740,8 +783,9 @@ int open_shroot(unsigned int xid, struct cifs_tcon *tcon, struct cifs_fid *pfid)
+ 				(char *)&tcon->crfid.file_all_info))
+ 		tcon->crfid.file_all_info_is_valid = 1;
+ 
+- oshr_exit:
++oshr_exit:
+ 	mutex_unlock(&tcon->crfid.fid_mutex);
++oshr_free:
+ 	SMB2_open_free(&rqst[0]);
+ 	SMB2_query_info_free(&rqst[1]);
+ 	free_rsp_buf(resp_buftype[0], rsp_iov[0].iov_base);
+@@ -2004,6 +2048,10 @@ smb2_set_related(struct smb_rqst *rqst)
+ 	struct smb2_sync_hdr *shdr;
+ 
+ 	shdr = (struct smb2_sync_hdr *)(rqst->rq_iov[0].iov_base);
++	if (shdr == NULL) {
++		cifs_dbg(FYI, "shdr NULL in smb2_set_related\n");
++		return;
++	}
+ 	shdr->Flags |= SMB2_FLAGS_RELATED_OPERATIONS;
+ }
+ 
+@@ -2018,6 +2066,12 @@ smb2_set_next_command(struct cifs_tcon *tcon, struct smb_rqst *rqst)
+ 	unsigned long len = smb_rqst_len(server, rqst);
+ 	int i, num_padding;
+ 
++	shdr = (struct smb2_sync_hdr *)(rqst->rq_iov[0].iov_base);
++	if (shdr == NULL) {
++		cifs_dbg(FYI, "shdr NULL in smb2_set_next_command\n");
++		return;
++	}
++
+ 	/* SMB headers in a compound are 8 byte aligned. */
+ 
+ 	/* No padding needed */
+@@ -2057,7 +2111,6 @@ smb2_set_next_command(struct cifs_tcon *tcon, struct smb_rqst *rqst)
+ 	}
+ 
+  finished:
+-	shdr = (struct smb2_sync_hdr *)(rqst->rq_iov[0].iov_base);
+ 	shdr->NextCommand = cpu_to_le32(len);
+ }
+ 
+diff --git a/fs/coda/file.c b/fs/coda/file.c
+index 1cbc1f2298ee..43d371551d2b 100644
+--- a/fs/coda/file.c
++++ b/fs/coda/file.c
+@@ -27,6 +27,13 @@
+ #include "coda_linux.h"
+ #include "coda_int.h"
+ 
++struct coda_vm_ops {
++	atomic_t refcnt;
++	struct file *coda_file;
++	const struct vm_operations_struct *host_vm_ops;
++	struct vm_operations_struct vm_ops;
++};
++
+ static ssize_t
+ coda_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
+ {
+@@ -61,6 +68,34 @@ coda_file_write_iter(struct kiocb *iocb, struct iov_iter *to)
+ 	return ret;
+ }
+ 
++static void
++coda_vm_open(struct vm_area_struct *vma)
++{
++	struct coda_vm_ops *cvm_ops =
++		container_of(vma->vm_ops, struct coda_vm_ops, vm_ops);
++
++	atomic_inc(&cvm_ops->refcnt);
++
++	if (cvm_ops->host_vm_ops && cvm_ops->host_vm_ops->open)
++		cvm_ops->host_vm_ops->open(vma);
++}
++
++static void
++coda_vm_close(struct vm_area_struct *vma)
++{
++	struct coda_vm_ops *cvm_ops =
++		container_of(vma->vm_ops, struct coda_vm_ops, vm_ops);
++
++	if (cvm_ops->host_vm_ops && cvm_ops->host_vm_ops->close)
++		cvm_ops->host_vm_ops->close(vma);
++
++	if (atomic_dec_and_test(&cvm_ops->refcnt)) {
++		vma->vm_ops = cvm_ops->host_vm_ops;
++		fput(cvm_ops->coda_file);
++		kfree(cvm_ops);
++	}
++}
++
+ static int
+ coda_file_mmap(struct file *coda_file, struct vm_area_struct *vma)
+ {
+@@ -68,6 +103,8 @@ coda_file_mmap(struct file *coda_file, struct vm_area_struct *vma)
+ 	struct coda_inode_info *cii;
+ 	struct file *host_file;
+ 	struct inode *coda_inode, *host_inode;
++	struct coda_vm_ops *cvm_ops;
++	int ret;
+ 
+ 	cfi = CODA_FTOC(coda_file);
+ 	BUG_ON(!cfi || cfi->cfi_magic != CODA_MAGIC);
+@@ -76,6 +113,13 @@ coda_file_mmap(struct file *coda_file, struct vm_area_struct *vma)
+ 	if (!host_file->f_op->mmap)
+ 		return -ENODEV;
+ 
++	if (WARN_ON(coda_file != vma->vm_file))
++		return -EIO;
++
++	cvm_ops = kmalloc(sizeof(struct coda_vm_ops), GFP_KERNEL);
++	if (!cvm_ops)
++		return -ENOMEM;
++
+ 	coda_inode = file_inode(coda_file);
+ 	host_inode = file_inode(host_file);
+ 
+@@ -89,6 +133,7 @@ coda_file_mmap(struct file *coda_file, struct vm_area_struct *vma)
+ 	 * the container file on us! */
+ 	else if (coda_inode->i_mapping != host_inode->i_mapping) {
+ 		spin_unlock(&cii->c_lock);
++		kfree(cvm_ops);
+ 		return -EBUSY;
+ 	}
+ 
+@@ -97,7 +142,29 @@ coda_file_mmap(struct file *coda_file, struct vm_area_struct *vma)
+ 	cfi->cfi_mapcount++;
+ 	spin_unlock(&cii->c_lock);
+ 
+-	return call_mmap(host_file, vma);
++	vma->vm_file = get_file(host_file);
++	ret = call_mmap(vma->vm_file, vma);
++
++	if (ret) {
++		/* if call_mmap fails, our caller will put coda_file so we
++		 * should drop the reference to the host_file that we got.
++		 */
++		fput(host_file);
++		kfree(cvm_ops);
++	} else {
++		/* here we add redirects for the open/close vm_operations */
++		cvm_ops->host_vm_ops = vma->vm_ops;
++		if (vma->vm_ops)
++			cvm_ops->vm_ops = *vma->vm_ops;
++
++		cvm_ops->vm_ops.open = coda_vm_open;
++		cvm_ops->vm_ops.close = coda_vm_close;
++		cvm_ops->coda_file = coda_file;
++		atomic_set(&cvm_ops->refcnt, 1);
++
++		vma->vm_ops = &cvm_ops->vm_ops;
++	}
++	return ret;
+ }
+ 
+ int coda_open(struct inode *coda_inode, struct file *coda_file)
+@@ -207,4 +274,3 @@ const struct file_operations coda_file_operations = {
+ 	.fsync		= coda_fsync,
+ 	.splice_read	= generic_file_splice_read,
+ };
+-
+diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c
+index fe38b5306045..5b3d525aa213 100644
+--- a/fs/crypto/crypto.c
++++ b/fs/crypto/crypto.c
+@@ -159,7 +159,10 @@ int fscrypt_do_page_crypto(const struct inode *inode, fscrypt_direction_t rw,
+ 	struct crypto_skcipher *tfm = ci->ci_ctfm;
+ 	int res = 0;
+ 
+-	BUG_ON(len == 0);
++	if (WARN_ON_ONCE(len <= 0))
++		return -EINVAL;
++	if (WARN_ON_ONCE(len % FS_CRYPTO_BLOCK_SIZE != 0))
++		return -EINVAL;
+ 
+ 	fscrypt_generate_iv(&iv, lblk_num, ci);
+ 
+@@ -243,8 +246,6 @@ struct page *fscrypt_encrypt_page(const struct inode *inode,
+ 	struct page *ciphertext_page = page;
+ 	int err;
+ 
+-	BUG_ON(len % FS_CRYPTO_BLOCK_SIZE != 0);
+-
+ 	if (inode->i_sb->s_cop->flags & FS_CFLG_OWN_PAGES) {
+ 		/* with inplace-encryption we just encrypt the page */
+ 		err = fscrypt_do_page_crypto(inode, FS_ENCRYPT, lblk_num, page,
+@@ -256,7 +257,8 @@ struct page *fscrypt_encrypt_page(const struct inode *inode,
+ 		return ciphertext_page;
+ 	}
+ 
+-	BUG_ON(!PageLocked(page));
++	if (WARN_ON_ONCE(!PageLocked(page)))
++		return ERR_PTR(-EINVAL);
+ 
+ 	ctx = fscrypt_get_ctx(inode, gfp_flags);
+ 	if (IS_ERR(ctx))
+@@ -304,8 +306,9 @@ EXPORT_SYMBOL(fscrypt_encrypt_page);
+ int fscrypt_decrypt_page(const struct inode *inode, struct page *page,
+ 			unsigned int len, unsigned int offs, u64 lblk_num)
+ {
+-	if (!(inode->i_sb->s_cop->flags & FS_CFLG_OWN_PAGES))
+-		BUG_ON(!PageLocked(page));
++	if (WARN_ON_ONCE(!PageLocked(page) &&
++			 !(inode->i_sb->s_cop->flags & FS_CFLG_OWN_PAGES)))
++		return -EINVAL;
+ 
+ 	return fscrypt_do_page_crypto(inode, FS_DECRYPT, lblk_num, page, page,
+ 				      len, offs, GFP_NOFS);
+diff --git a/fs/dax.c b/fs/dax.c
+index 9fd908f3df32..4b85d539b845 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -131,6 +131,15 @@ static int dax_is_empty_entry(void *entry)
+ 	return xa_to_value(entry) & DAX_EMPTY;
+ }
+ 
++/*
++ * true if the entry that was found is of a smaller order than the entry
++ * we were looking for
++ */
++static bool dax_is_conflict(void *entry)
++{
++	return entry == XA_RETRY_ENTRY;
++}
++
+ /*
+  * DAX page cache entry locking
+  */
+@@ -203,11 +212,13 @@ static void dax_wake_entry(struct xa_state *xas, void *entry, bool wake_all)
+  * Look up entry in page cache, wait for it to become unlocked if it
+  * is a DAX entry and return it.  The caller must subsequently call
+  * put_unlocked_entry() if it did not lock the entry or dax_unlock_entry()
+- * if it did.
++ * if it did.  The entry returned may have a larger order than @order.
++ * If @order is larger than the order of the entry found in i_pages, this
++ * function returns a dax_is_conflict entry.
+  *
+  * Must be called with the i_pages lock held.
+  */
+-static void *get_unlocked_entry(struct xa_state *xas)
++static void *get_unlocked_entry(struct xa_state *xas, unsigned int order)
+ {
+ 	void *entry;
+ 	struct wait_exceptional_entry_queue ewait;
+@@ -218,6 +229,9 @@ static void *get_unlocked_entry(struct xa_state *xas)
+ 
+ 	for (;;) {
+ 		entry = xas_find_conflict(xas);
+-		if (!entry || WARN_ON_ONCE(!xa_is_value(entry)) ||
+-				!dax_is_locked(entry))
+-			return entry;
++		if (!entry || WARN_ON_ONCE(!xa_is_value(entry)))
++			return entry;
++		if (dax_entry_order(entry) < order)
++			return XA_RETRY_ENTRY;
++		if (!dax_is_locked(entry))
++			return entry;
+@@ -262,7 +275,7 @@ static void wait_entry_unlocked(struct xa_state *xas, void *entry)
+ static void put_unlocked_entry(struct xa_state *xas, void *entry)
+ {
+ 	/* If we were the only waiter woken, wake the next one */
+-	if (entry)
++	if (entry && !dax_is_conflict(entry))
+ 		dax_wake_entry(xas, entry, false);
+ }
+ 
+@@ -469,7 +482,7 @@ void dax_unlock_page(struct page *page, dax_entry_t cookie)
+  * overlap with xarray value entries.
+  */
+ static void *grab_mapping_entry(struct xa_state *xas,
+-		struct address_space *mapping, unsigned long size_flag)
++		struct address_space *mapping, unsigned int order)
+ {
+ 	unsigned long index = xas->xa_index;
+ 	bool pmd_downgrade = false; /* splitting PMD entry into PTE entries? */
+@@ -477,20 +490,17 @@ static void *grab_mapping_entry(struct xa_state *xas,
+ 
+ retry:
+ 	xas_lock_irq(xas);
+-	entry = get_unlocked_entry(xas);
++	entry = get_unlocked_entry(xas, order);
+ 
+ 	if (entry) {
++		if (dax_is_conflict(entry))
++			goto fallback;
+ 		if (!xa_is_value(entry)) {
+ 			xas_set_err(xas, EIO);
+ 			goto out_unlock;
+ 		}
+ 
+-		if (size_flag & DAX_PMD) {
+-			if (dax_is_pte_entry(entry)) {
+-				put_unlocked_entry(xas, entry);
+-				goto fallback;
+-			}
+-		} else { /* trying to grab a PTE entry */
++		if (order == 0) {
+ 			if (dax_is_pmd_entry(entry) &&
+ 			    (dax_is_zero_entry(entry) ||
+ 			     dax_is_empty_entry(entry))) {
+@@ -531,7 +541,11 @@ retry:
+ 	if (entry) {
+ 		dax_lock_entry(xas, entry);
+ 	} else {
+-		entry = dax_make_entry(pfn_to_pfn_t(0), size_flag | DAX_EMPTY);
++		unsigned long flags = DAX_EMPTY;
++
++		if (order > 0)
++			flags |= DAX_PMD;
++		entry = dax_make_entry(pfn_to_pfn_t(0), flags);
+ 		dax_lock_entry(xas, entry);
+ 		if (xas_error(xas))
+ 			goto out_unlock;
+@@ -602,7 +616,7 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
+ 		if (WARN_ON_ONCE(!xa_is_value(entry)))
+ 			continue;
+ 		if (unlikely(dax_is_locked(entry)))
+-			entry = get_unlocked_entry(&xas);
++			entry = get_unlocked_entry(&xas, 0);
+ 		if (entry)
+ 			page = dax_busy_page(entry);
+ 		put_unlocked_entry(&xas, entry);
+@@ -629,7 +643,7 @@ static int __dax_invalidate_entry(struct address_space *mapping,
+ 	void *entry;
+ 
+ 	xas_lock_irq(&xas);
+-	entry = get_unlocked_entry(&xas);
++	entry = get_unlocked_entry(&xas, 0);
+ 	if (!entry || WARN_ON_ONCE(!xa_is_value(entry)))
+ 		goto out;
+ 	if (!trunc &&
+@@ -856,7 +870,7 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
+ 	if (unlikely(dax_is_locked(entry))) {
+ 		void *old_entry = entry;
+ 
+-		entry = get_unlocked_entry(xas);
++		entry = get_unlocked_entry(xas, 0);
+ 
+ 		/* Entry got punched out / reallocated? */
+ 		if (!entry || WARN_ON_ONCE(!xa_is_value(entry)))
+@@ -1517,7 +1531,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
+ 	 * entry is already in the array, for instance), it will return
+ 	 * VM_FAULT_FALLBACK.
+ 	 */
+-	entry = grab_mapping_entry(&xas, mapping, DAX_PMD);
++	entry = grab_mapping_entry(&xas, mapping, PMD_ORDER);
+ 	if (xa_is_internal(entry)) {
+ 		result = xa_to_internal(entry);
+ 		goto fallback;
+@@ -1666,11 +1680,10 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order)
+ 	vm_fault_t ret;
+ 
+ 	xas_lock_irq(&xas);
+-	entry = get_unlocked_entry(&xas);
++	entry = get_unlocked_entry(&xas, order);
+ 	/* Did we race with someone splitting entry or so? */
+-	if (!entry ||
+-	    (order == 0 && !dax_is_pte_entry(entry)) ||
+-	    (order == PMD_ORDER && !dax_is_pmd_entry(entry))) {
++	if (!entry || dax_is_conflict(entry) ||
++	    (order == 0 && !dax_is_pte_entry(entry))) {
+ 		put_unlocked_entry(&xas, entry);
+ 		xas_unlock_irq(&xas);
+ 		trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf,
+diff --git a/fs/ecryptfs/crypto.c b/fs/ecryptfs/crypto.c
+index f664da55234e..8b247e5ad12e 100644
+--- a/fs/ecryptfs/crypto.c
++++ b/fs/ecryptfs/crypto.c
+@@ -1019,8 +1019,10 @@ int ecryptfs_read_and_validate_header_region(struct inode *inode)
+ 
+ 	rc = ecryptfs_read_lower(file_size, 0, ECRYPTFS_SIZE_AND_MARKER_BYTES,
+ 				 inode);
+-	if (rc < ECRYPTFS_SIZE_AND_MARKER_BYTES)
+-		return rc >= 0 ? -EINVAL : rc;
++	if (rc < 0)
++		return rc;
++	else if (rc < ECRYPTFS_SIZE_AND_MARKER_BYTES)
++		return -EINVAL;
+ 	rc = ecryptfs_validate_marker(marker);
+ 	if (!rc)
+ 		ecryptfs_i_size_init(file_size, inode);
+@@ -1382,8 +1384,10 @@ int ecryptfs_read_and_validate_xattr_region(struct dentry *dentry,
+ 				     ecryptfs_inode_to_lower(inode),
+ 				     ECRYPTFS_XATTR_NAME, file_size,
+ 				     ECRYPTFS_SIZE_AND_MARKER_BYTES);
+-	if (rc < ECRYPTFS_SIZE_AND_MARKER_BYTES)
+-		return rc >= 0 ? -EINVAL : rc;
++	if (rc < 0)
++		return rc;
++	else if (rc < ECRYPTFS_SIZE_AND_MARKER_BYTES)
++		return -EINVAL;
+ 	rc = ecryptfs_validate_marker(marker);
+ 	if (!rc)
+ 		ecryptfs_i_size_init(file_size, inode);
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index b16645b417d9..bd9474e82f38 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -714,6 +714,7 @@ void wbc_detach_inode(struct writeback_control *wbc)
+ void wbc_account_io(struct writeback_control *wbc, struct page *page,
+ 		    size_t bytes)
+ {
++	struct cgroup_subsys_state *css;
+ 	int id;
+ 
+ 	/*
+@@ -725,7 +726,12 @@ void wbc_account_io(struct writeback_control *wbc, struct page *page,
+ 	if (!wbc->wb)
+ 		return;
+ 
+-	id = mem_cgroup_css_from_page(page)->id;
++	css = mem_cgroup_css_from_page(page);
++	/* dead cgroups shouldn't contribute to inode ownership arbitration */
++	if (!(css->flags & CSS_ONLINE))
++		return;
++
++	id = css->id;
+ 
+ 	if (id == wbc->wb_id) {
+ 		wbc->wb_bytes += bytes;
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index a71d0b42d160..43ca14dd3466 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -139,19 +139,12 @@ struct nfs_cache_array {
+ 	struct nfs_cache_array_entry array[0];
+ };
+ 
+-struct readdirvec {
+-	unsigned long nr;
+-	unsigned long index;
+-	struct page *pages[NFS_MAX_READDIR_RAPAGES];
+-};
+-
+ typedef int (*decode_dirent_t)(struct xdr_stream *, struct nfs_entry *, bool);
+ typedef struct {
+ 	struct file	*file;
+ 	struct page	*page;
+ 	struct dir_context *ctx;
+ 	unsigned long	page_index;
+-	struct readdirvec pvec;
+ 	u64		*dir_cookie;
+ 	u64		last_cookie;
+ 	loff_t		current_index;
+@@ -531,10 +524,6 @@ int nfs_readdir_page_filler(nfs_readdir_descriptor_t *desc, struct nfs_entry *en
+ 	struct nfs_cache_array *array;
+ 	unsigned int count = 0;
+ 	int status;
+-	int max_rapages = NFS_MAX_READDIR_RAPAGES;
+-
+-	desc->pvec.index = desc->page_index;
+-	desc->pvec.nr = 0;
+ 
+ 	scratch = alloc_page(GFP_KERNEL);
+ 	if (scratch == NULL)
+@@ -559,40 +548,20 @@ int nfs_readdir_page_filler(nfs_readdir_descriptor_t *desc, struct nfs_entry *en
+ 		if (desc->plus)
+ 			nfs_prime_dcache(file_dentry(desc->file), entry);
+ 
+-		status = nfs_readdir_add_to_array(entry, desc->pvec.pages[desc->pvec.nr]);
+-		if (status == -ENOSPC) {
+-			desc->pvec.nr++;
+-			if (desc->pvec.nr == max_rapages)
+-				break;
+-			status = nfs_readdir_add_to_array(entry, desc->pvec.pages[desc->pvec.nr]);
+-		}
++		status = nfs_readdir_add_to_array(entry, page);
+ 		if (status != 0)
+ 			break;
+ 	} while (!entry->eof);
+ 
+-	/*
+-	 * page and desc->pvec.pages[0] are valid, don't need to check
+-	 * whether or not to be NULL.
+-	 */
+-	copy_highpage(page, desc->pvec.pages[0]);
+-
+ out_nopages:
+ 	if (count == 0 || (status == -EBADCOOKIE && entry->eof != 0)) {
+-		array = kmap_atomic(desc->pvec.pages[desc->pvec.nr]);
++		array = kmap(page);
+ 		array->eof_index = array->size;
+ 		status = 0;
+-		kunmap_atomic(array);
++		kunmap(page);
+ 	}
+ 
+ 	put_page(scratch);
+-
+-	/*
+-	 * desc->pvec.nr > 0 means at least one page was completely filled,
+-	 * we should return -ENOSPC. Otherwise function
+-	 * nfs_readdir_xdr_to_array will enter infinite loop.
+-	 */
+-	if (desc->pvec.nr > 0)
+-		return -ENOSPC;
+ 	return status;
+ }
+ 
+@@ -626,24 +595,6 @@ out_freepages:
+ 	return -ENOMEM;
+ }
+ 
+-/*
+- * nfs_readdir_rapages_init initialize rapages by nfs_cache_array structure.
+- */
+-static
+-void nfs_readdir_rapages_init(nfs_readdir_descriptor_t *desc)
+-{
+-	struct nfs_cache_array *array;
+-	int max_rapages = NFS_MAX_READDIR_RAPAGES;
+-	int index;
+-
+-	for (index = 0; index < max_rapages; index++) {
+-		array = kmap_atomic(desc->pvec.pages[index]);
+-		memset(array, 0, sizeof(struct nfs_cache_array));
+-		array->eof_index = -1;
+-		kunmap_atomic(array);
+-	}
+-}
+-
+ static
+ int nfs_readdir_xdr_to_array(nfs_readdir_descriptor_t *desc, struct page *page, struct inode *inode)
+ {
+@@ -654,12 +605,6 @@ int nfs_readdir_xdr_to_array(nfs_readdir_descriptor_t *desc, struct page *page,
+ 	int status = -ENOMEM;
+ 	unsigned int array_size = ARRAY_SIZE(pages);
+ 
+-	/*
+-	 * This means we hit readdir rdpages miss, the preallocated rdpages
+-	 * are useless, the preallocate rdpages should be reinitialized.
+-	 */
+-	nfs_readdir_rapages_init(desc);
+-
+ 	entry.prev_cookie = 0;
+ 	entry.cookie = desc->last_cookie;
+ 	entry.eof = 0;
+@@ -719,24 +664,9 @@ int nfs_readdir_filler(nfs_readdir_descriptor_t *desc, struct page* page)
+ 	struct inode	*inode = file_inode(desc->file);
+ 	int ret;
+ 
+-	/*
+-	 * If desc->page_index in range desc->pvec.index and
+-	 * desc->pvec.index + desc->pvec.nr, we get readdir cache hit.
+-	 */
+-	if (desc->page_index >= desc->pvec.index &&
+-		desc->page_index < (desc->pvec.index + desc->pvec.nr)) {
+-		/*
+-		 * page and desc->pvec.pages[x] are valid, don't need to check
+-		 * whether or not to be NULL.
+-		 */
+-		copy_highpage(page, desc->pvec.pages[desc->page_index - desc->pvec.index]);
+-		ret = 0;
+-	} else {
+-		ret = nfs_readdir_xdr_to_array(desc, page, inode);
+-		if (ret < 0)
+-			goto error;
+-	}
+-
++	ret = nfs_readdir_xdr_to_array(desc, page, inode);
++	if (ret < 0)
++		goto error;
+ 	SetPageUptodate(page);
+ 
+ 	if (invalidate_inode_pages2_range(inode->i_mapping, page->index + 1, -1) < 0) {
+@@ -901,7 +831,6 @@ static int nfs_readdir(struct file *file, struct dir_context *ctx)
+ 			*desc = &my_desc;
+ 	struct nfs_open_dir_context *dir_ctx = file->private_data;
+ 	int res = 0;
+-	int max_rapages = NFS_MAX_READDIR_RAPAGES;
+ 
+ 	dfprintk(FILE, "NFS: readdir(%pD2) starting at cookie %llu\n",
+ 			file, (long long)ctx->pos);
+@@ -921,12 +850,6 @@ static int nfs_readdir(struct file *file, struct dir_context *ctx)
+ 	desc->decode = NFS_PROTO(inode)->decode_dirent;
+ 	desc->plus = nfs_use_readdirplus(inode, ctx);
+ 
+-	res = nfs_readdir_alloc_pages(desc->pvec.pages, max_rapages);
+-	if (res < 0)
+-		return -ENOMEM;
+-
+-	nfs_readdir_rapages_init(desc);
+-
+ 	if (ctx->pos == 0 || nfs_attribute_cache_expired(inode))
+ 		res = nfs_revalidate_mapping(inode, file->f_mapping);
+ 	if (res < 0)
+@@ -962,7 +885,6 @@ static int nfs_readdir(struct file *file, struct dir_context *ctx)
+ 			break;
+ 	} while (!desc->eof);
+ out:
+-	nfs_readdir_free_pages(desc->pvec.pages, max_rapages);
+ 	if (res > 0)
+ 		res = 0;
+ 	dfprintk(FILE, "NFS: readdir(%pD2) returns %d\n", file, res);
+diff --git a/fs/nfs/flexfilelayout/flexfilelayoutdev.c b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+index 19f856f45689..3eda40a320a5 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayoutdev.c
++++ b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+@@ -257,7 +257,7 @@ int ff_layout_track_ds_error(struct nfs4_flexfile_layout *flo,
+ 	if (status == 0)
+ 		return 0;
+ 
+-	if (mirror->mirror_ds == NULL)
++	if (IS_ERR_OR_NULL(mirror->mirror_ds))
+ 		return -EINVAL;
+ 
+ 	dserr = kmalloc(sizeof(*dserr), gfp_flags);
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index 414a90d48493..e94af89ac877 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -1094,6 +1094,7 @@ int nfs_open(struct inode *inode, struct file *filp)
+ 	nfs_fscache_open_file(inode, filp);
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(nfs_open);
+ 
+ /*
+  * This function is called whenever some part of NFS notices that
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index c7cf23ae6597..b26622d9686f 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -69,8 +69,7 @@ struct nfs_clone_mount {
+  * Maximum number of pages that readdir can use for creating
+  * a vmapped array of pages.
+  */
+-#define NFS_MAX_READDIR_PAGES 64
+-#define NFS_MAX_READDIR_RAPAGES 8
++#define NFS_MAX_READDIR_PAGES 8
+ 
+ struct nfs_client_initdata {
+ 	unsigned long init_flags;
+diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c
+index f10b660805fc..1cf694c6ee67 100644
+--- a/fs/nfs/nfs4file.c
++++ b/fs/nfs/nfs4file.c
+@@ -49,7 +49,7 @@ nfs4_file_open(struct inode *inode, struct file *filp)
+ 		return err;
+ 
+ 	if ((openflags & O_ACCMODE) == 3)
+-		openflags--;
++		return nfs_open(inode, filp);
+ 
+ 	/* We can't create new files here */
+ 	openflags &= ~(O_CREAT|O_EXCL);
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 7066cd7c7aff..ca38ae357094 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -1890,7 +1890,7 @@ lookup_again:
+ 		spin_unlock(&ino->i_lock);
+ 		lseg = ERR_PTR(wait_var_event_killable(&lo->plh_outstanding,
+ 					!atomic_read(&lo->plh_outstanding)));
+-		if (IS_ERR(lseg) || !list_empty(&lo->plh_segs))
++		if (IS_ERR(lseg))
+ 			goto out_put_layout_hdr;
+ 		pnfs_put_layout_hdr(lo);
+ 		goto lookup_again;
+diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
+index 7325baa8f9d4..c95f32b83a94 100644
+--- a/fs/proc/proc_sysctl.c
++++ b/fs/proc/proc_sysctl.c
+@@ -498,6 +498,11 @@ static struct inode *proc_sys_make_inode(struct super_block *sb,
+ 
+ 	if (root->set_ownership)
+ 		root->set_ownership(head, table, &inode->i_uid, &inode->i_gid);
++	else {
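++		/* default to global root ownership when the sysctl root sets none */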
++		inode->i_uid = GLOBAL_ROOT_UID;
++		inode->i_gid = GLOBAL_ROOT_GID;
++	}
+ 
+ 	return inode;
+ }
+diff --git a/fs/pstore/inode.c b/fs/pstore/inode.c
+index c60ee46f3e39..fc91f7263f79 100644
+--- a/fs/pstore/inode.c
++++ b/fs/pstore/inode.c
+@@ -330,22 +330,21 @@ int pstore_mkfile(struct dentry *root, struct pstore_record *record)
+ 		goto fail;
+ 	inode->i_mode = S_IFREG | 0444;
+ 	inode->i_fop = &pstore_file_operations;
+-	private = kzalloc(sizeof(*private), GFP_KERNEL);
+-	if (!private)
+-		goto fail_alloc;
+-	private->record = record;
+-
+ 	scnprintf(name, sizeof(name), "%s-%s-%llu%s",
+ 			pstore_type_to_name(record->type),
+ 			record->psi->name, record->id,
+ 			record->compressed ? ".enc.z" : "");
+ 
++	private = kzalloc(sizeof(*private), GFP_KERNEL);
++	if (!private)
++		goto fail_inode;
++
+ 	dentry = d_alloc_name(root, name);
+ 	if (!dentry)
+ 		goto fail_private;
+ 
++	private->record = record;
+ 	inode->i_size = private->total_size = size;
+-
+ 	inode->i_private = private;
+ 
+ 	if (record->time.tv_sec)
+@@ -361,7 +360,7 @@ int pstore_mkfile(struct dentry *root, struct pstore_record *record)
+ 
+ fail_private:
+ 	free_pstore_private(private);
+-fail_alloc:
++fail_inode:
+ 	iput(inode);
+ 
+ fail:
+diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
+index a7ceae90110e..76748255f843 100644
+--- a/fs/xfs/xfs_file.c
++++ b/fs/xfs/xfs_file.c
+@@ -517,6 +517,9 @@ xfs_file_dio_aio_write(
+ 	}
+ 
+ 	if (iocb->ki_flags & IOCB_NOWAIT) {
++		/* unaligned dio always waits, bail */
++		if (unaligned_io)
++			return -EAGAIN;
+ 		if (!xfs_ilock_nowait(ip, iolock))
+ 			return -EAGAIN;
+ 	} else {
+@@ -536,9 +539,6 @@ xfs_file_dio_aio_write(
+ 	 * xfs_file_aio_write_checks() for other reasons.
+ 	 */
+ 	if (unaligned_io) {
+-		/* unaligned dio always waits, bail */
+-		if (iocb->ki_flags & IOCB_NOWAIT)
+-			return -EAGAIN;
+ 		inode_dio_wait(inode);
+ 	} else if (iolock == XFS_IOLOCK_EXCL) {
+ 		xfs_ilock_demote(ip, XFS_IOLOCK_EXCL);
+diff --git a/include/asm-generic/bug.h b/include/asm-generic/bug.h
+index 0e9bd9c83870..aa6c093d9ce9 100644
+--- a/include/asm-generic/bug.h
++++ b/include/asm-generic/bug.h
+@@ -104,8 +104,10 @@ extern void warn_slowpath_null(const char *file, const int line);
+ 	warn_slowpath_fmt_taint(__FILE__, __LINE__, taint, arg)
+ #else
+ extern __printf(1, 2) void __warn_printk(const char *fmt, ...);
+-#define __WARN()		__WARN_TAINT(TAINT_WARN)
+-#define __WARN_printf(arg...)	do { __warn_printk(arg); __WARN(); } while (0)
++#define __WARN() do { \
++	printk(KERN_WARNING CUT_HERE); __WARN_TAINT(TAINT_WARN); \
++} while (0)
++#define __WARN_printf(arg...)	__WARN_printf_taint(TAINT_WARN, arg)
+ #define __WARN_printf_taint(taint, arg...)				\
+ 	do { __warn_printk(arg); __WARN_TAINT(taint); } while (0)
+ #endif
+diff --git a/include/drm/drm_displayid.h b/include/drm/drm_displayid.h
+index c0d4df6a606f..9d3b745c3107 100644
+--- a/include/drm/drm_displayid.h
++++ b/include/drm/drm_displayid.h
+@@ -40,6 +40,7 @@
+ #define DATA_BLOCK_DISPLAY_INTERFACE 0x0f
+ #define DATA_BLOCK_STEREO_DISPLAY_INTERFACE 0x10
+ #define DATA_BLOCK_TILED_DISPLAY 0x12
++#define DATA_BLOCK_CTA 0x81
+ 
+ #define DATA_BLOCK_VENDOR_SPECIFIC 0x7f
+ 
+@@ -90,4 +91,14 @@ struct displayid_detailed_timing_block {
+ 	struct displayid_block base;
+ 	struct displayid_detailed_timings_1 timings[0];
+ };
++
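++/* walk the DisplayID data blocks; stop if a block would overrun the section */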
++#define for_each_displayid_db(displayid, block, idx, length) \
++	for ((block) = (struct displayid_block *)&(displayid)[idx]; \
++	     (idx) + sizeof(struct displayid_block) <= (length) && \
++	     (idx) + sizeof(struct displayid_block) + (block)->num_bytes <= (length) && \
++	     (block)->num_bytes > 0; \
++	     (idx) += (block)->num_bytes + sizeof(struct displayid_block), \
++	     (block) = (struct displayid_block *)&(displayid)[idx])
++
+ #endif
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 317ab30d2904..08a84d130120 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -662,7 +662,7 @@ static inline bool blk_queue_is_zoned(struct request_queue *q)
+ 	}
+ }
+ 
+-static inline unsigned int blk_queue_zone_sectors(struct request_queue *q)
++static inline sector_t blk_queue_zone_sectors(struct request_queue *q)
+ {
+ 	return blk_queue_is_zoned(q) ? q->limits.chunk_sectors : 0;
+ }
+@@ -1400,7 +1400,7 @@ static inline bool bdev_is_zoned(struct block_device *bdev)
+ 	return false;
+ }
+ 
+-static inline unsigned int bdev_zone_sectors(struct block_device *bdev)
++static inline sector_t bdev_zone_sectors(struct block_device *bdev)
+ {
+ 	struct request_queue *q = bdev_get_queue(bdev);
+ 
+diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
+index dec95654f3ae..04c4a478323b 100644
+--- a/include/linux/cpuhotplug.h
++++ b/include/linux/cpuhotplug.h
+@@ -116,10 +116,10 @@ enum cpuhp_state {
+ 	CPUHP_AP_PERF_ARM_ACPI_STARTING,
+ 	CPUHP_AP_PERF_ARM_STARTING,
+ 	CPUHP_AP_ARM_L2X0_STARTING,
++	CPUHP_AP_EXYNOS4_MCT_TIMER_STARTING,
+ 	CPUHP_AP_ARM_ARCH_TIMER_STARTING,
+ 	CPUHP_AP_ARM_GLOBAL_TIMER_STARTING,
+ 	CPUHP_AP_JCORE_TIMER_STARTING,
+-	CPUHP_AP_EXYNOS4_MCT_TIMER_STARTING,
+ 	CPUHP_AP_ARM_TWD_STARTING,
+ 	CPUHP_AP_QCOM_TIMER_STARTING,
+ 	CPUHP_AP_TEGRA_TIMER_STARTING,
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 6b10c21630f5..ab6bf81311a8 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -590,6 +590,12 @@ static inline bool is_vmalloc_addr(const void *x)
+ 	return false;
+ #endif
+ }
++
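++/* architectures with a dedicated ioremap range can override this */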
++#ifndef is_ioremap_addr
++#define is_ioremap_addr(x) is_vmalloc_addr(x)
++#endif
++
+ #ifdef CONFIG_MMU
+ extern int is_vmalloc_or_module_addr(const void *x);
+ #else
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index b25d20822e75..3508f4508a11 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -586,7 +586,7 @@ static inline void rcu_preempt_sleep_check(void) { }
+  * read-side critical sections may be preempted and they may also block, but
+  * only when acquiring spinlocks that are subject to priority inheritance.
+  */
+-static inline void rcu_read_lock(void)
++static __always_inline void rcu_read_lock(void)
+ {
+ 	__rcu_read_lock();
+ 	__acquire(RCU);
+diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
+index e412c092c1e8..c735113f7d93 100644
+--- a/include/linux/sched/signal.h
++++ b/include/linux/sched/signal.h
+@@ -328,7 +328,7 @@ extern void force_sigsegv(int sig, struct task_struct *p);
+ extern int force_sig_info(int, struct kernel_siginfo *, struct task_struct *);
+ extern int __kill_pgrp_info(int sig, struct kernel_siginfo *info, struct pid *pgrp);
+ extern int kill_pid_info(int sig, struct kernel_siginfo *info, struct pid *pid);
+-extern int kill_pid_info_as_cred(int, struct kernel_siginfo *, struct pid *,
++extern int kill_pid_usb_asyncio(int sig, int errno, sigval_t addr, struct pid *,
+ 				const struct cred *);
+ extern int kill_pgrp(struct pid *pid, int sig, int priv);
+ extern int kill_pid(struct pid *pid, int sig, int priv);
+diff --git a/include/net/ip_vs.h b/include/net/ip_vs.h
+index 047f9a5ccaad..1790bb41c964 100644
+--- a/include/net/ip_vs.h
++++ b/include/net/ip_vs.h
+@@ -803,11 +803,12 @@ struct ipvs_master_sync_state {
+ 	struct ip_vs_sync_buff	*sync_buff;
+ 	unsigned long		sync_queue_len;
+ 	unsigned int		sync_queue_delay;
+-	struct task_struct	*master_thread;
+ 	struct delayed_work	master_wakeup_work;
+ 	struct netns_ipvs	*ipvs;
+ };
+ 
++struct ip_vs_sync_thread_data;
++
+ /* How much time to keep dests in trash */
+ #define IP_VS_DEST_TRASH_PERIOD		(120 * HZ)
+ 
+@@ -938,7 +939,8 @@ struct netns_ipvs {
+ 	spinlock_t		sync_lock;
+ 	struct ipvs_master_sync_state *ms;
+ 	spinlock_t		sync_buff_lock;
+-	struct task_struct	**backup_threads;
++	struct ip_vs_sync_thread_data *master_tinfo;
++	struct ip_vs_sync_thread_data *backup_tinfo;
+ 	int			threads_mask;
+ 	volatile int		sync_state;
+ 	struct mutex		sync_mutex;
+diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
+index d074b6d60f8a..ac3c047d058c 100644
+--- a/include/net/xdp_sock.h
++++ b/include/net/xdp_sock.h
+@@ -67,6 +67,8 @@ struct xdp_sock {
+ 	 * in the SKB destructor callback.
+ 	 */
+ 	spinlock_t tx_completion_lock;
++	/* Protects generic receive. */
++	spinlock_t rx_lock;
+ 	u64 rx_dropped;
+ };
+ 
+diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
+index 9b9e17bcc201..417a096e43d6 100644
+--- a/include/rdma/ib_verbs.h
++++ b/include/rdma/ib_verbs.h
+@@ -293,8 +293,8 @@ struct ib_rss_caps {
+ };
+ 
+ enum ib_tm_cap_flags {
+-	/*  Support tag matching on RC transport */
+-	IB_TM_CAP_RC		    = 1 << 0,
++	/*  Support tag matching with rendezvous offload for RC transport */
++	IB_TM_CAP_RNDV_RC = 1 << 0,
+ };
+ 
+ struct ib_tm_caps {
+diff --git a/include/sound/hda_codec.h b/include/sound/hda_codec.h
+index cc7c8d42d4fd..a0d9d788527d 100644
+--- a/include/sound/hda_codec.h
++++ b/include/sound/hda_codec.h
+@@ -262,6 +262,8 @@ struct hda_codec {
+ 	unsigned int auto_runtime_pm:1; /* enable automatic codec runtime pm */
+ 	unsigned int force_pin_prefix:1; /* Add location prefix */
+ 	unsigned int link_down_at_suspend:1; /* link down at runtime suspend */
++	unsigned int relaxed_resume:1;	/* don't resume forcibly for jack */
++
+ #ifdef CONFIG_PM
+ 	unsigned long power_on_acct;
+ 	unsigned long power_off_acct;
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index 7b60fd186cfe..77bc53ce419f 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -1383,7 +1383,7 @@ TRACE_EVENT(rxrpc_rx_eproto,
+ 			     ),
+ 
+ 	    TP_fast_assign(
+-		    __entry->call = call->debug_id;
++		    __entry->call = call ? call->debug_id : 0;
+ 		    __entry->serial = serial;
+ 		    __entry->why = why;
+ 			   ),
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index 9d01f4788d3e..9ae3f28ca469 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -2871,6 +2871,7 @@ struct bpf_prog_info {
+ 	char name[BPF_OBJ_NAME_LEN];
+ 	__u32 ifindex;
+ 	__u32 gpl_compatible:1;
++	__u32 :31; /* alignment pad */
+ 	__u64 netns_dev;
+ 	__u64 netns_ino;
+ 	__u32 nr_jited_ksyms;
+diff --git a/include/xen/events.h b/include/xen/events.h
+index a48897199975..c0e6a0598397 100644
+--- a/include/xen/events.h
++++ b/include/xen/events.h
+@@ -3,6 +3,7 @@
+ #define _XEN_EVENTS_H
+ 
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #ifdef CONFIG_PCI_MSI
+ #include <linux/msi.h>
+ #endif
+@@ -59,7 +60,7 @@ void evtchn_put(unsigned int evtchn);
+ 
+ void xen_send_IPI_one(unsigned int cpu, enum ipi_vector vector);
+ void rebind_evtchn_irq(int evtchn, int irq);
+-int xen_rebind_evtchn_to_cpu(int evtchn, unsigned tcpu);
++int xen_set_affinity_evtchn(struct irq_desc *desc, unsigned int tcpu);
+ 
+ static inline void notify_remote_via_evtchn(int port)
+ {
+diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
+index 4c2fa3ac56f6..29d781061cd5 100644
+--- a/kernel/bpf/Makefile
++++ b/kernel/bpf/Makefile
+@@ -1,5 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+ obj-y := core.o
++CFLAGS_core.o += $(call cc-disable-warning, override-init)
+ 
+ obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o
+ obj-$(CONFIG_BPF_SYSCALL) += hashtab.o arraymap.o percpu_freelist.o bpf_lru_list.o lpm_trie.o map_in_map.o
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 06ba9c5f156b..932fd3fa5a5a 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -1367,10 +1367,10 @@ select_insn:
+ 		insn++;
+ 		CONT;
+ 	ALU_ARSH_X:
+-		DST = (u64) (u32) ((*(s32 *) &DST) >> SRC);
++		DST = (u64) (u32) (((s32) DST) >> SRC);
+ 		CONT;
+ 	ALU_ARSH_K:
+-		DST = (u64) (u32) ((*(s32 *) &DST) >> IMM);
++		DST = (u64) (u32) (((s32) DST) >> IMM);
+ 		CONT;
+ 	ALU64_ARSH_X:
+ 		(*(s64 *) &DST) >>= SRC;
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 4ff130ddfbf6..cbc03f051598 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -6197,17 +6197,18 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
+ 	 * the state of the call instruction (with WRITTEN set), and r0 comes
+ 	 * from callee with its full parentage chain, anyway.
+ 	 */
+-	for (j = 0; j <= cur->curframe; j++)
+-		for (i = j < cur->curframe ? BPF_REG_6 : 0; i < BPF_REG_FP; i++)
+-			cur->frame[j]->regs[i].parent = &new->frame[j]->regs[i];
+ 	/* clear write marks in current state: the writes we did are not writes
+ 	 * our child did, so they don't screen off its reads from us.
+ 	 * (There are no read marks in current state, because reads always mark
+ 	 * their parent and current state never has children yet.  Only
+ 	 * explored_states can get read marks.)
+ 	 */
+-	for (i = 0; i < BPF_REG_FP; i++)
+-		cur->frame[cur->curframe]->regs[i].live = REG_LIVE_NONE;
++	for (j = 0; j <= cur->curframe; j++) {
++		for (i = j < cur->curframe ? BPF_REG_6 : 0; i < BPF_REG_FP; i++)
++			cur->frame[j]->regs[i].parent = &new->frame[j]->regs[i];
++		for (i = 0; i < BPF_REG_FP; i++)
++			cur->frame[j]->regs[i].live = REG_LIVE_NONE;
++	}
+ 
+ 	/* all stack frames are accessible from callee, clear them all */
+ 	for (j = 0; j <= cur->curframe; j++) {
+diff --git a/kernel/iomem.c b/kernel/iomem.c
+index f7525e14ebc6..98950b84c50c 100644
+--- a/kernel/iomem.c
++++ b/kernel/iomem.c
+@@ -121,7 +121,7 @@ EXPORT_SYMBOL(memremap);
+ 
+ void memunmap(void *addr)
+ {
+-	if (is_vmalloc_addr(addr))
++	if (is_ioremap_addr(addr))
+ 		iounmap((void __iomem *) addr);
+ }
+ EXPORT_SYMBOL(memunmap);
+diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
+index 04fe4f989bd8..bfac4d6761b3 100644
+--- a/kernel/irq/chip.c
++++ b/kernel/irq/chip.c
+@@ -754,6 +754,8 @@ void handle_fasteoi_nmi(struct irq_desc *desc)
+ 	unsigned int irq = irq_desc_get_irq(desc);
+ 	irqreturn_t res;
+ 
++	__kstat_incr_irqs_this_cpu(desc);
++
+ 	trace_irq_handler_entry(irq, action);
+ 	/*
+ 	 * NMIs cannot be shared, there is only one action.
+@@ -968,6 +970,8 @@ void handle_percpu_devid_fasteoi_nmi(struct irq_desc *desc)
+ 	unsigned int irq = irq_desc_get_irq(desc);
+ 	irqreturn_t res;
+ 
++	__kstat_incr_irqs_this_cpu(desc);
++
+ 	trace_irq_handler_entry(irq, action);
+ 	res = action->handler(irq, raw_cpu_ptr(action->percpu_dev_id));
+ 	trace_irq_handler_exit(irq, action, res);
+diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
+index 9f8a709337cf..6c5b2b775ca6 100644
+--- a/kernel/irq/irqdesc.c
++++ b/kernel/irq/irqdesc.c
+@@ -679,6 +679,8 @@ int __handle_domain_irq(struct irq_domain *domain, unsigned int hwirq,
+  * @hwirq:	The HW irq number to convert to a logical one
+  * @regs:	Register file coming from the low-level handling code
+  *
++ *		This function must be called from an NMI context.
++ *
+  * Returns:	0 on success, or -EINVAL if conversion has failed
+  */
+ int handle_domain_nmi(struct irq_domain *domain, unsigned int hwirq,
+@@ -688,7 +690,10 @@ int handle_domain_nmi(struct irq_domain *domain, unsigned int hwirq,
+ 	unsigned int irq;
+ 	int ret = 0;
+ 
+-	nmi_enter();
++	/*
++	 * NMI context needs to be set up earlier in order to deal with tracing.
++	 */
++	WARN_ON(!in_nmi());
+ 
+ 	irq = irq_find_mapping(domain, hwirq);
+ 
+@@ -701,7 +706,6 @@ int handle_domain_nmi(struct irq_domain *domain, unsigned int hwirq,
+ 	else
+ 		ret = -EINVAL;
+ 
+-	nmi_exit();
+ 	set_irq_regs(old_regs);
+ 	return ret;
+ }
+@@ -945,6 +949,11 @@ unsigned int kstat_irqs_cpu(unsigned int irq, int cpu)
+ 			*per_cpu_ptr(desc->kstat_irqs, cpu) : 0;
+ }
+ 
++static bool irq_is_nmi(struct irq_desc *desc)
++{
++	return desc->istate & IRQS_NMI;
++}
++
+ /**
+  * kstat_irqs - Get the statistics for an interrupt
+  * @irq:	The interrupt number
+@@ -962,7 +971,9 @@ unsigned int kstat_irqs(unsigned int irq)
+ 	if (!desc || !desc->kstat_irqs)
+ 		return 0;
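++	/* NMI handlers do not update tot_count, so sum the per-CPU counters */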
+ 	if (!irq_settings_is_per_cpu_devid(desc) &&
+-	    !irq_settings_is_per_cpu(desc))
++	    !irq_settings_is_per_cpu(desc) &&
++	    !irq_is_nmi(desc))
+ 	    return desc->tot_count;
+ 
+ 	for_each_possible_cpu(cpu)
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index e221be724fe8..89b3f38a57f3 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -3611,19 +3611,19 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
+ 	if (depth) {
+ 		hlock = curr->held_locks + depth - 1;
+ 		if (hlock->class_idx == class_idx && nest_lock) {
+-			if (hlock->references) {
+-				/*
+-				 * Check: unsigned int references:12, overflow.
+-				 */
+-				if (DEBUG_LOCKS_WARN_ON(hlock->references == (1 << 12)-1))
+-					return 0;
++			if (!references)
++				references++;
+ 
++			if (!hlock->references)
+ 				hlock->references++;
+-			} else {
+-				hlock->references = 2;
+-			}
+ 
+-			return 1;
++			hlock->references += references;
++
++			/* Overflow */
++			if (DEBUG_LOCKS_WARN_ON(hlock->references < references))
++				return 0;
++
++			return 2;
+ 		}
+ 	}
+ 
+@@ -3829,22 +3829,33 @@ out:
+ }
+ 
+ static int reacquire_held_locks(struct task_struct *curr, unsigned int depth,
+-			      int idx)
++				int idx, unsigned int *merged)
+ {
+ 	struct held_lock *hlock;
++	int first_idx = idx;
+ 
+ 	if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
+ 		return 0;
+ 
+ 	for (hlock = curr->held_locks + idx; idx < depth; idx++, hlock++) {
+-		if (!__lock_acquire(hlock->instance,
++		switch (__lock_acquire(hlock->instance,
+ 				    hlock_class(hlock)->subclass,
+ 				    hlock->trylock,
+ 				    hlock->read, hlock->check,
+ 				    hlock->hardirqs_off,
+ 				    hlock->nest_lock, hlock->acquire_ip,
+-				    hlock->references, hlock->pin_count))
++				    hlock->references, hlock->pin_count)) {
++		case 0:
+ 			return 1;
++		case 1:
++			break;
++		case 2:
++			*merged += (idx == first_idx);
++			break;
++		default:
++			WARN_ON(1);
++			return 0;
++		}
+ 	}
+ 	return 0;
+ }
+@@ -3855,9 +3866,9 @@ __lock_set_class(struct lockdep_map *lock, const char *name,
+ 		 unsigned long ip)
+ {
+ 	struct task_struct *curr = current;
++	unsigned int depth, merged = 0;
+ 	struct held_lock *hlock;
+ 	struct lock_class *class;
+-	unsigned int depth;
+ 	int i;
+ 
+ 	if (unlikely(!debug_locks))
+@@ -3882,14 +3893,14 @@ __lock_set_class(struct lockdep_map *lock, const char *name,
+ 	curr->lockdep_depth = i;
+ 	curr->curr_chain_key = hlock->prev_chain_key;
+ 
+-	if (reacquire_held_locks(curr, depth, i))
++	if (reacquire_held_locks(curr, depth, i, &merged))
+ 		return 0;
+ 
+ 	/*
+ 	 * I took it apart and put it back together again, except now I have
+ 	 * these 'spare' parts.. where shall I put them.
+ 	 */
+-	if (DEBUG_LOCKS_WARN_ON(curr->lockdep_depth != depth))
++	if (DEBUG_LOCKS_WARN_ON(curr->lockdep_depth != depth - merged))
+ 		return 0;
+ 	return 1;
+ }
+@@ -3897,8 +3908,8 @@ __lock_set_class(struct lockdep_map *lock, const char *name,
+ static int __lock_downgrade(struct lockdep_map *lock, unsigned long ip)
+ {
+ 	struct task_struct *curr = current;
++	unsigned int depth, merged = 0;
+ 	struct held_lock *hlock;
+-	unsigned int depth;
+ 	int i;
+ 
+ 	if (unlikely(!debug_locks))
+@@ -3923,7 +3934,11 @@ static int __lock_downgrade(struct lockdep_map *lock, unsigned long ip)
+ 	hlock->read = 1;
+ 	hlock->acquire_ip = ip;
+ 
+-	if (reacquire_held_locks(curr, depth, i))
++	if (reacquire_held_locks(curr, depth, i, &merged))
++		return 0;
++
++	/* Merging can't happen with unchanged classes.. */
++	if (DEBUG_LOCKS_WARN_ON(merged))
+ 		return 0;
+ 
+ 	/*
+@@ -3932,6 +3947,7 @@ static int __lock_downgrade(struct lockdep_map *lock, unsigned long ip)
+ 	 */
+ 	if (DEBUG_LOCKS_WARN_ON(curr->lockdep_depth != depth))
+ 		return 0;
++
+ 	return 1;
+ }
+ 
+@@ -3946,8 +3962,8 @@ static int
+ __lock_release(struct lockdep_map *lock, int nested, unsigned long ip)
+ {
+ 	struct task_struct *curr = current;
++	unsigned int depth, merged = 1;
+ 	struct held_lock *hlock;
+-	unsigned int depth;
+ 	int i;
+ 
+ 	if (unlikely(!debug_locks))
+@@ -4002,14 +4018,15 @@ __lock_release(struct lockdep_map *lock, int nested, unsigned long ip)
+ 	if (i == depth-1)
+ 		return 1;
+ 
+-	if (reacquire_held_locks(curr, depth, i + 1))
++	if (reacquire_held_locks(curr, depth, i + 1, &merged))
+ 		return 0;
+ 
+ 	/*
+ 	 * We had N bottles of beer on the wall, we drank one, but now
+ 	 * there's not N-1 bottles of beer left on the wall...
++	 * Pouring two of the bottles together is acceptable.
+ 	 */
+-	DEBUG_LOCKS_WARN_ON(curr->lockdep_depth != depth-1);
++	DEBUG_LOCKS_WARN_ON(curr->lockdep_depth != depth - merged);
+ 
+ 	/*
+ 	 * Since reacquire_held_locks() would have called check_chain_key()
+diff --git a/kernel/padata.c b/kernel/padata.c
+index 3e2633ae3bca..cb268716cefb 100644
+--- a/kernel/padata.c
++++ b/kernel/padata.c
+@@ -267,7 +267,12 @@ static void padata_reorder(struct parallel_data *pd)
+ 	 * The next object that needs serialization might have arrived to
+ 	 * the reorder queues in the meantime, we will be called again
+ 	 * from the timer function if no one else cares for it.
++	 *
++	 * Ensure reorder_objects is read after pd->lock is dropped so we see
++	 * an increment from another task in padata_do_serial.  Pairs with
++	 * smp_mb__after_atomic in padata_do_serial.
+ 	 */
++	smp_mb();
+ 	if (atomic_read(&pd->reorder_objects)
+ 			&& !(pinst->flags & PADATA_RESET))
+ 		mod_timer(&pd->timer, jiffies + HZ);
+@@ -387,6 +392,13 @@ void padata_do_serial(struct padata_priv *padata)
+ 	list_add_tail(&padata->list, &pqueue->reorder.list);
+ 	spin_unlock(&pqueue->reorder.lock);
+ 
++	/*
++	 * Ensure the atomic_inc of reorder_objects above is ordered correctly
++	 * with the trylock of pd->lock in padata_reorder.  Pairs with smp_mb
++	 * in padata_reorder.
++	 */
++	smp_mb__after_atomic();
++
+ 	put_cpu();
+ 
+ 	/* If we're running on the wrong CPU, call padata_reorder() via a
+diff --git a/kernel/pid_namespace.c b/kernel/pid_namespace.c
+index aa6e72fb7c08..098233ebe589 100644
+--- a/kernel/pid_namespace.c
++++ b/kernel/pid_namespace.c
+@@ -325,7 +325,7 @@ int reboot_pid_ns(struct pid_namespace *pid_ns, int cmd)
+ 	}
+ 
+ 	read_lock(&tasklist_lock);
+-	force_sig(SIGKILL, pid_ns->child_reaper);
++	send_sig(SIGKILL, pid_ns->child_reaper, 1);
+ 	read_unlock(&tasklist_lock);
+ 
+ 	do_exit(0);
+diff --git a/kernel/resource.c b/kernel/resource.c
+index 92190f62ebc5..eaeaa36a4938 100644
+--- a/kernel/resource.c
++++ b/kernel/resource.c
+@@ -325,7 +325,7 @@ EXPORT_SYMBOL(release_resource);
+  *
+  * If a resource is found, returns 0 and @*res is overwritten with the part
+  * of the resource that's within [@start..@end]; if none is found, returns
+- * -1 or -EINVAL for other invalid parameters.
++ * -ENODEV.  Returns -EINVAL for invalid parameters.
+  *
+  * This function walks the whole tree and not just first level children
+  * unless @first_lvl is true.
+@@ -364,16 +364,16 @@ static int find_next_iomem_res(resource_size_t start, resource_size_t end,
+ 			break;
+ 	}
+ 
++	if (p) {
++		/* copy data */
++		res->start = max(start, p->start);
++		res->end = min(end, p->end);
++		res->flags = p->flags;
++		res->desc = p->desc;
++	}
++
+ 	read_unlock(&resource_lock);
+-	if (!p)
+-		return -1;
+-
+-	/* copy data */
+-	res->start = max(start, p->start);
+-	res->end = min(end, p->end);
+-	res->flags = p->flags;
+-	res->desc = p->desc;
+-	return 0;
++	return p ? 0 : -ENODEV;
+ }
+ 
+ static int __walk_iomem_res_desc(resource_size_t start, resource_size_t end,
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index a75ad50b5e2f..242233490a49 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -5175,7 +5175,7 @@ long __sched io_schedule_timeout(long timeout)
+ }
+ EXPORT_SYMBOL(io_schedule_timeout);
+ 
+-void io_schedule(void)
++void __sched io_schedule(void)
+ {
+ 	int token;
+ 
+diff --git a/kernel/sched/sched-pelt.h b/kernel/sched/sched-pelt.h
+index a26473674fb7..c529706bed11 100644
+--- a/kernel/sched/sched-pelt.h
++++ b/kernel/sched/sched-pelt.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ /* Generated by Documentation/scheduler/sched-pelt; do not modify. */
+ 
+-static const u32 runnable_avg_yN_inv[] = {
++static const u32 runnable_avg_yN_inv[] __maybe_unused = {
+ 	0xffffffff, 0xfa83b2da, 0xf5257d14, 0xefe4b99a, 0xeac0c6e6, 0xe5b906e6,
+ 	0xe0ccdeeb, 0xdbfbb796, 0xd744fcc9, 0xd2a81d91, 0xce248c14, 0xc9b9bd85,
+ 	0xc5672a10, 0xc12c4cc9, 0xbd08a39e, 0xb8fbaf46, 0xb504f333, 0xb123f581,
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 5f3dd69b50e2..ef46559100f3 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -1053,27 +1053,6 @@ static inline bool legacy_queue(struct sigpending *signals, int sig)
+ 	return (sig < SIGRTMIN) && sigismember(&signals->signal, sig);
+ }
+ 
+-#ifdef CONFIG_USER_NS
+-static inline void userns_fixup_signal_uid(struct kernel_siginfo *info, struct task_struct *t)
+-{
+-	if (current_user_ns() == task_cred_xxx(t, user_ns))
+-		return;
+-
+-	if (SI_FROMKERNEL(info))
+-		return;
+-
+-	rcu_read_lock();
+-	info->si_uid = from_kuid_munged(task_cred_xxx(t, user_ns),
+-					make_kuid(current_user_ns(), info->si_uid));
+-	rcu_read_unlock();
+-}
+-#else
+-static inline void userns_fixup_signal_uid(struct kernel_siginfo *info, struct task_struct *t)
+-{
+-	return;
+-}
+-#endif
+-
+ static int __send_signal(int sig, struct kernel_siginfo *info, struct task_struct *t,
+ 			enum pid_type type, int from_ancestor_ns)
+ {
+@@ -1131,7 +1110,11 @@ static int __send_signal(int sig, struct kernel_siginfo *info, struct task_struc
+ 			q->info.si_code = SI_USER;
+ 			q->info.si_pid = task_tgid_nr_ns(current,
+ 							task_active_pid_ns(t));
+-			q->info.si_uid = from_kuid_munged(current_user_ns(), current_uid());
++			rcu_read_lock();
++			q->info.si_uid =
++				from_kuid_munged(task_cred_xxx(t, user_ns),
++						 current_uid());
++			rcu_read_unlock();
+ 			break;
+ 		case (unsigned long) SEND_SIG_PRIV:
+ 			clear_siginfo(&q->info);
+@@ -1143,13 +1126,8 @@ static int __send_signal(int sig, struct kernel_siginfo *info, struct task_struc
+ 			break;
+ 		default:
+ 			copy_siginfo(&q->info, info);
+-			if (from_ancestor_ns)
+-				q->info.si_pid = 0;
+ 			break;
+ 		}
+-
+-		userns_fixup_signal_uid(&q->info, t);
+-
+ 	} else if (!is_si_special(info)) {
+ 		if (sig >= SIGRTMIN && info->si_code != SI_USER) {
+ 			/*
+@@ -1193,6 +1171,29 @@ ret:
+ 	return ret;
+ }
+ 
++static inline bool has_si_pid_and_uid(struct kernel_siginfo *info)
++{
++	bool ret = false;
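++	/* only these layouts carry si_pid and si_uid after the generic fields */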
++	switch (siginfo_layout(info->si_signo, info->si_code)) {
++	case SIL_KILL:
++	case SIL_CHLD:
++	case SIL_RT:
++		ret = true;
++		break;
++	case SIL_TIMER:
++	case SIL_POLL:
++	case SIL_FAULT:
++	case SIL_FAULT_MCEERR:
++	case SIL_FAULT_BNDERR:
++	case SIL_FAULT_PKUERR:
++	case SIL_SYS:
++		ret = false;
++		break;
++	}
++	return ret;
++}
++
+ static int send_signal(int sig, struct kernel_siginfo *info, struct task_struct *t,
+ 			enum pid_type type)
+ {
+@@ -1202,7 +1202,20 @@ static int send_signal(int sig, struct kernel_siginfo *info, struct task_struct
+ 	from_ancestor_ns = si_fromuser(info) &&
+ 			   !task_pid_nr_ns(current, task_active_pid_ns(t));
+ #endif
++	if (!is_si_special(info) && has_si_pid_and_uid(info)) {
++		struct user_namespace *t_user_ns;
++
++		rcu_read_lock();
++		t_user_ns = task_cred_xxx(t, user_ns);
++		if (current_user_ns() != t_user_ns) {
++			kuid_t uid = make_kuid(current_user_ns(), info->si_uid);
++			info->si_uid = from_kuid_munged(t_user_ns, uid);
++		}
++		rcu_read_unlock();
+ 
++		if (!task_pid_nr_ns(current, task_active_pid_ns(t)))
++			info->si_pid = 0;
++	}
+ 	return __send_signal(sig, info, t, type, from_ancestor_ns);
+ }
+ 
+@@ -1436,13 +1449,44 @@ static inline bool kill_as_cred_perm(const struct cred *cred,
+ 	       uid_eq(cred->uid, pcred->uid);
+ }
+ 
+-/* like kill_pid_info(), but doesn't use uid/euid of "current" */
+-int kill_pid_info_as_cred(int sig, struct kernel_siginfo *info, struct pid *pid,
+-			 const struct cred *cred)
++/*
++ * The usb asyncio usage of siginfo is wrong.  The glibc support
++ * for asyncio which uses SI_ASYNCIO assumes the layout is SIL_RT.
++ * AKA after the generic fields:
++ *	kernel_pid_t	si_pid;
++ *	kernel_uid32_t	si_uid;
++ *	sigval_t	si_value;
++ *
++ * Unfortunately when usb generates SI_ASYNCIO it assumes the layout
++ * after the generic fields is:
++ *	void __user 	*si_addr;
++ *
++ * This is a practical problem when there is a 64bit big endian kernel
++ * and a 32bit userspace.  The 32bit address will be encoded in the
++ * low 32bits of the pointer, and those low 32bits will be stored at
++ * a higher address than a 32bit pointer would place them.  So
++ * userspace will not see the address it was expecting for its completions.
++ *
++ * There is nothing in the encoding that can allow
++ * copy_siginfo_to_user32 to detect this confusion of formats, so
++ * handle this by requiring the caller of kill_pid_usb_asyncio to
++ * notice when this situation takes place and to store the 32bit
++ * pointer in sival_int, instead of sival_ptr of the sigval_t addr
++ * parameter.
++ */
++int kill_pid_usb_asyncio(int sig, int errno, sigval_t addr,
++			 struct pid *pid, const struct cred *cred)
+ {
+-	int ret = -EINVAL;
++	struct kernel_siginfo info;
+ 	struct task_struct *p;
+ 	unsigned long flags;
++	int ret = -EINVAL;
++
++	clear_siginfo(&info);
++	info.si_signo = sig;
++	info.si_errno = errno;
++	info.si_code = SI_ASYNCIO;
++	*((sigval_t *)&info.si_pid) = addr;
+ 
+ 	if (!valid_signal(sig))
+ 		return ret;
+@@ -1453,17 +1497,17 @@ int kill_pid_info_as_cred(int sig, struct kernel_siginfo *info, struct pid *pid,
+ 		ret = -ESRCH;
+ 		goto out_unlock;
+ 	}
+-	if (si_fromuser(info) && !kill_as_cred_perm(cred, p)) {
++	if (!kill_as_cred_perm(cred, p)) {
+ 		ret = -EPERM;
+ 		goto out_unlock;
+ 	}
+-	ret = security_task_kill(p, info, sig, cred);
++	ret = security_task_kill(p, &info, sig, cred);
+ 	if (ret)
+ 		goto out_unlock;
+ 
+ 	if (sig) {
+ 		if (lock_task_sighand(p, &flags)) {
+-			ret = __send_signal(sig, info, p, PIDTYPE_TGID, 0);
++			ret = __send_signal(sig, &info, p, PIDTYPE_TGID, 0);
+ 			unlock_task_sighand(p, &flags);
+ 		} else
+ 			ret = -ESRCH;
+@@ -1472,7 +1516,7 @@ out_unlock:
+ 	rcu_read_unlock();
+ 	return ret;
+ }
+-EXPORT_SYMBOL_GPL(kill_pid_info_as_cred);
++EXPORT_SYMBOL_GPL(kill_pid_usb_asyncio);
+ 
+ /*
+  * kill_something_info() interprets pid in interesting ways just like kill(2).
+@@ -4411,6 +4455,28 @@ static inline void siginfo_buildtime_checks(void)
+ 	CHECK_OFFSET(si_syscall);
+ 	CHECK_OFFSET(si_arch);
+ #undef CHECK_OFFSET
++
++	/* usb asyncio */
++	BUILD_BUG_ON(offsetof(struct siginfo, si_pid) !=
++		     offsetof(struct siginfo, si_addr));
++	if (sizeof(int) == sizeof(void __user *)) {
++		BUILD_BUG_ON(sizeof_field(struct siginfo, si_pid) !=
++			     sizeof(void __user *));
++	} else {
++		BUILD_BUG_ON((sizeof_field(struct siginfo, si_pid) +
++			      sizeof_field(struct siginfo, si_uid)) !=
++			     sizeof(void __user *));
++		BUILD_BUG_ON(offsetofend(struct siginfo, si_pid) !=
++			     offsetof(struct siginfo, si_uid));
++	}
++#ifdef CONFIG_COMPAT
++	BUILD_BUG_ON(offsetof(struct compat_siginfo, si_pid) !=
++		     offsetof(struct compat_siginfo, si_addr));
++	BUILD_BUG_ON(sizeof_field(struct compat_siginfo, si_pid) !=
++		     sizeof(compat_uptr_t));
++	BUILD_BUG_ON(sizeof_field(struct compat_siginfo, si_pid) !=
++		     sizeof_field(struct siginfo, si_pid));
++#endif
+ }
+ 
+ void __init signals_init(void)
+diff --git a/kernel/time/ntp.c b/kernel/time/ntp.c
+index f43d47c8c3b6..98b3678fd48e 100644
+--- a/kernel/time/ntp.c
++++ b/kernel/time/ntp.c
+@@ -42,6 +42,8 @@ static u64			tick_length_base;
+ #define MAX_TICKADJ		500LL		/* usecs */
+ #define MAX_TICKADJ_SCALED \
+ 	(((MAX_TICKADJ * NSEC_PER_USEC) << NTP_SCALE_SHIFT) / NTP_INTERVAL_FREQ)
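++/* reject TAI-UTC offsets (in seconds) beyond this as nonsensical */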
++#define MAX_TAI_OFFSET		100000
+ 
+ /*
+  * phase-lock loop variables
+@@ -690,7 +691,8 @@ static inline void process_adjtimex_modes(const struct __kernel_timex *txc,
+ 		time_constant = max(time_constant, 0l);
+ 	}
+ 
+-	if (txc->modes & ADJ_TAI && txc->constant >= 0)
++	if (txc->modes & ADJ_TAI &&
++			txc->constant >= 0 && txc->constant <= MAX_TAI_OFFSET)
+ 		*time_tai = txc->constant;
+ 
+ 	if (txc->modes & ADJ_OFFSET)
+diff --git a/kernel/time/timer_list.c b/kernel/time/timer_list.c
+index 98ba50dcb1b2..acb326f5f50a 100644
+--- a/kernel/time/timer_list.c
++++ b/kernel/time/timer_list.c
+@@ -282,23 +282,6 @@ static inline void timer_list_header(struct seq_file *m, u64 now)
+ 	SEQ_printf(m, "\n");
+ }
+ 
+-static int timer_list_show(struct seq_file *m, void *v)
+-{
+-	struct timer_list_iter *iter = v;
+-
+-	if (iter->cpu == -1 && !iter->second_pass)
+-		timer_list_header(m, iter->now);
+-	else if (!iter->second_pass)
+-		print_cpu(m, iter->cpu, iter->now);
+-#ifdef CONFIG_GENERIC_CLOCKEVENTS
+-	else if (iter->cpu == -1 && iter->second_pass)
+-		timer_list_show_tickdevices_header(m);
+-	else
+-		print_tickdevice(m, tick_get_device(iter->cpu), iter->cpu);
+-#endif
+-	return 0;
+-}
+-
+ void sysrq_timer_list_show(void)
+ {
+ 	u64 now = ktime_to_ns(ktime_get());
+@@ -317,6 +300,24 @@ void sysrq_timer_list_show(void)
+ 	return;
+ }
+ 
++#ifdef CONFIG_PROC_FS
++static int timer_list_show(struct seq_file *m, void *v)
++{
++	struct timer_list_iter *iter = v;
++
++	if (iter->cpu == -1 && !iter->second_pass)
++		timer_list_header(m, iter->now);
++	else if (!iter->second_pass)
++		print_cpu(m, iter->cpu, iter->now);
++#ifdef CONFIG_GENERIC_CLOCKEVENTS
++	else if (iter->cpu == -1 && iter->second_pass)
++		timer_list_show_tickdevices_header(m);
++	else
++		print_tickdevice(m, tick_get_device(iter->cpu), iter->cpu);
++#endif
++	return 0;
++}
++
+ static void *move_iter(struct timer_list_iter *iter, loff_t offset)
+ {
+ 	for (; offset; offset--) {
+@@ -376,3 +377,4 @@ static int __init init_timer_list_procfs(void)
+ 	return 0;
+ }
+ __initcall(init_timer_list_procfs);
++#endif
+diff --git a/lib/reed_solomon/decode_rs.c b/lib/reed_solomon/decode_rs.c
+index 1db74eb098d0..121beb2f0930 100644
+--- a/lib/reed_solomon/decode_rs.c
++++ b/lib/reed_solomon/decode_rs.c
+@@ -42,8 +42,18 @@
+ 	BUG_ON(pad < 0 || pad >= nn);
+ 
+ 	/* Does the caller provide the syndrome ? */
+-	if (s != NULL)
+-		goto decode;
++	if (s != NULL) {
++		for (i = 0; i < nroots; i++) {
++			/* The syndrome is in index form,
++			 * so nn represents zero
++			 */
++			if (s[i] != nn)
++				goto decode;
++		}
++
++		/* syndrome is zero, no errors to correct  */
++		return 0;
++	}
+ 
+ 	/* form the syndromes; i.e., evaluate data(x) at roots of
+ 	 * g(x) */
+@@ -99,9 +109,10 @@
+ 	if (no_eras > 0) {
+ 		/* Init lambda to be the erasure locator polynomial */
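++		/* eras_pos is relative to the shortened code, so add the pad */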
+ 		lambda[1] = alpha_to[rs_modnn(rs,
+-					      prim * (nn - 1 - eras_pos[0]))];
++					prim * (nn - 1 - (eras_pos[0] + pad)))];
+ 		for (i = 1; i < no_eras; i++) {
+-			u = rs_modnn(rs, prim * (nn - 1 - eras_pos[i]));
++			u = rs_modnn(rs, prim * (nn - 1 - (eras_pos[i] + pad)));
+ 			for (j = i + 1; j > 0; j--) {
+ 				tmp = index_of[lambda[j - 1]];
+ 				if (tmp != nn) {
+diff --git a/lib/scatterlist.c b/lib/scatterlist.c
+index 739dc9fe2c55..f0757a67affe 100644
+--- a/lib/scatterlist.c
++++ b/lib/scatterlist.c
+@@ -678,17 +678,19 @@ static bool sg_miter_get_next_page(struct sg_mapping_iter *miter)
+ {
+ 	if (!miter->__remaining) {
+ 		struct scatterlist *sg;
+-		unsigned long pgoffset;
+ 
+ 		if (!__sg_page_iter_next(&miter->piter))
+ 			return false;
+ 
+ 		sg = miter->piter.sg;
+-		pgoffset = miter->piter.sg_pgoffset;
+ 
+-		miter->__offset = pgoffset ? 0 : sg->offset;
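++		/* sg->offset may span whole pages; fold them into the page offset */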
++		miter->__offset = miter->piter.sg_pgoffset ? 0 : sg->offset;
++		miter->piter.sg_pgoffset += miter->__offset >> PAGE_SHIFT;
++		miter->__offset &= PAGE_SIZE - 1;
+ 		miter->__remaining = sg->offset + sg->length -
+-				(pgoffset << PAGE_SHIFT) - miter->__offset;
++				     (miter->piter.sg_pgoffset << PAGE_SHIFT) -
++				     miter->__offset;
+ 		miter->__remaining = min_t(unsigned long, miter->__remaining,
+ 					   PAGE_SIZE - miter->__offset);
+ 	}
+diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
+index b1d39cabf125..6753ee9326b8 100644
+--- a/net/9p/trans_virtio.c
++++ b/net/9p/trans_virtio.c
+@@ -782,10 +782,16 @@ static struct p9_trans_module p9_virtio_trans = {
+ /* The standard init function */
+ static int __init p9_virtio_init(void)
+ {
++	int rc;
++
+ 	INIT_LIST_HEAD(&virtio_chan_list);
+ 
+ 	v9fs_register_trans(&p9_virtio_trans);
+-	return register_virtio_driver(&p9_virtio_drv);
++	rc = register_virtio_driver(&p9_virtio_drv);
++	if (rc)
++		v9fs_unregister_trans(&p9_virtio_trans);
++
++	return rc;
+ }
+ 
+ static void __exit p9_virtio_cleanup(void)
+diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
+index 29420ebb8f07..3963eb11c3fb 100644
+--- a/net/9p/trans_xen.c
++++ b/net/9p/trans_xen.c
+@@ -530,13 +530,19 @@ static struct xenbus_driver xen_9pfs_front_driver = {
+ 
+ static int p9_trans_xen_init(void)
+ {
++	int rc;
++
+ 	if (!xen_domain())
+ 		return -ENODEV;
+ 
+ 	pr_info("Initialising Xen transport for 9pfs\n");
+ 
+ 	v9fs_register_trans(&p9_xen_trans);
+-	return xenbus_register_frontend(&xen_9pfs_front_driver);
++	rc = xenbus_register_frontend(&xen_9pfs_front_driver);
++	if (rc)
++		v9fs_unregister_trans(&p9_xen_trans);
++
++	return rc;
+ }
+ module_init(p9_trans_xen_init);
+ 
+diff --git a/net/batman-adv/bat_iv_ogm.c b/net/batman-adv/bat_iv_ogm.c
+index de61091af666..267418b6129a 100644
+--- a/net/batman-adv/bat_iv_ogm.c
++++ b/net/batman-adv/bat_iv_ogm.c
+@@ -2349,7 +2349,7 @@ batadv_iv_ogm_neigh_is_sob(struct batadv_neigh_node *neigh1,
+ 	return ret;
+ }
+ 
+-static void batadv_iv_iface_activate(struct batadv_hard_iface *hard_iface)
++static void batadv_iv_iface_enabled(struct batadv_hard_iface *hard_iface)
+ {
+ 	/* begin scheduling originator messages on that interface */
+ 	batadv_iv_ogm_schedule(hard_iface);
+@@ -2695,8 +2695,8 @@ unlock:
+ static struct batadv_algo_ops batadv_batman_iv __read_mostly = {
+ 	.name = "BATMAN_IV",
+ 	.iface = {
+-		.activate = batadv_iv_iface_activate,
+ 		.enable = batadv_iv_ogm_iface_enable,
++		.enabled = batadv_iv_iface_enabled,
+ 		.disable = batadv_iv_ogm_iface_disable,
+ 		.update_mac = batadv_iv_ogm_iface_update_mac,
+ 		.primary_set = batadv_iv_ogm_primary_iface_set,
+diff --git a/net/batman-adv/hard-interface.c b/net/batman-adv/hard-interface.c
+index 96ef7c70b4d9..9072392e43cd 100644
+--- a/net/batman-adv/hard-interface.c
++++ b/net/batman-adv/hard-interface.c
+@@ -807,6 +807,9 @@ int batadv_hardif_enable_interface(struct batadv_hard_iface *hard_iface,
+ 
+ 	batadv_hardif_recalc_extra_skbroom(soft_iface);
+ 
++	if (bat_priv->algo_ops->iface.enabled)
++		bat_priv->algo_ops->iface.enabled(hard_iface);
++
+ out:
+ 	return 0;
+ 
+diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
+index 26c4e2493ddf..abad64eb7dc4 100644
+--- a/net/batman-adv/translation-table.c
++++ b/net/batman-adv/translation-table.c
+@@ -3826,6 +3826,8 @@ static void batadv_tt_purge(struct work_struct *work)
+  */
+ void batadv_tt_free(struct batadv_priv *bat_priv)
+ {
++	batadv_tvlv_handler_unregister(bat_priv, BATADV_TVLV_ROAM, 1);
++
+ 	batadv_tvlv_container_unregister(bat_priv, BATADV_TVLV_TT, 1);
+ 	batadv_tvlv_handler_unregister(bat_priv, BATADV_TVLV_TT, 1);
+ 
+diff --git a/net/batman-adv/types.h b/net/batman-adv/types.h
+index ed0f6a519de5..3c83c8b4f1e1 100644
+--- a/net/batman-adv/types.h
++++ b/net/batman-adv/types.h
+@@ -2135,6 +2135,9 @@ struct batadv_algo_iface_ops {
+ 	/** @enable: init routing info when hard-interface is enabled */
+ 	int (*enable)(struct batadv_hard_iface *hard_iface);
+ 
++	/** @enabled: notification when hard-interface was enabled (optional) */
++	void (*enabled)(struct batadv_hard_iface *hard_iface);
++
+ 	/** @disable: de-init routing info when hard-interface is disabled */
+ 	void (*disable)(struct batadv_hard_iface *hard_iface);
+ 
+diff --git a/net/bluetooth/6lowpan.c b/net/bluetooth/6lowpan.c
+index a7cd23f00bde..50530561da98 100644
+--- a/net/bluetooth/6lowpan.c
++++ b/net/bluetooth/6lowpan.c
+@@ -187,10 +187,16 @@ static inline struct lowpan_peer *peer_lookup_dst(struct lowpan_btle_dev *dev,
+ 	}
+ 
+ 	if (!rt) {
+-		nexthop = &lowpan_cb(skb)->gw;
+-
+-		if (ipv6_addr_any(nexthop))
+-			return NULL;
++		if (ipv6_addr_any(&lowpan_cb(skb)->gw)) {
++			/* There is neither a route nor a gateway;
++			 * the destination is probably a direct peer.
++			 */
++			nexthop = daddr;
++		} else {
++			/* There is a known gateway
++			 */
++			nexthop = &lowpan_cb(skb)->gw;
++		}
+ 	} else {
+ 		nexthop = rt6_nexthop(rt, daddr);
+ 
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 8b893baf9bbe..31eb0449479b 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -5588,6 +5588,11 @@ static void hci_le_remote_conn_param_req_evt(struct hci_dev *hdev,
+ 		return send_conn_param_neg_reply(hdev, handle,
+ 						 HCI_ERROR_UNKNOWN_CONN_ID);
+ 
++	if (min < hcon->le_conn_min_interval ||
++	    max > hcon->le_conn_max_interval)
++		return send_conn_param_neg_reply(hdev, handle,
++						 HCI_ERROR_INVALID_LL_PARAMS);
++
+ 	if (hci_check_conn_params(min, max, latency, timeout))
+ 		return send_conn_param_neg_reply(hdev, handle,
+ 						 HCI_ERROR_INVALID_LL_PARAMS);
+diff --git a/net/bluetooth/hidp/core.c b/net/bluetooth/hidp/core.c
+index a442e21f3894..5abd423b55fa 100644
+--- a/net/bluetooth/hidp/core.c
++++ b/net/bluetooth/hidp/core.c
+@@ -775,7 +775,7 @@ static int hidp_setup_hid(struct hidp_session *session,
+ 	hid->version = req->version;
+ 	hid->country = req->country;
+ 
+-	strncpy(hid->name, req->name, sizeof(hid->name));
++	strscpy(hid->name, req->name, sizeof(hid->name));
+ 
+ 	snprintf(hid->phys, sizeof(hid->phys), "%pMR",
+ 		 &l2cap_pi(session->ctrl_sock->sk)->chan->src);
+diff --git a/net/bluetooth/hidp/sock.c b/net/bluetooth/hidp/sock.c
+index 2151913892ce..03be6a4baef3 100644
+--- a/net/bluetooth/hidp/sock.c
++++ b/net/bluetooth/hidp/sock.c
+@@ -192,6 +192,7 @@ static int hidp_sock_compat_ioctl(struct socket *sock, unsigned int cmd, unsigne
+ 		ca.version = ca32.version;
+ 		ca.flags = ca32.flags;
+ 		ca.idle_to = ca32.idle_to;
++		ca32.name[sizeof(ca32.name) - 1] = '\0';
+ 		memcpy(ca.name, ca32.name, 128);
+ 
+ 		csock = sockfd_lookup(ca.ctrl_sock, &err);
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 5406d7cd46ad..32d2be9d6858 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -4394,6 +4394,12 @@ static inline int l2cap_disconnect_rsp(struct l2cap_conn *conn,
+ 
+ 	l2cap_chan_lock(chan);
+ 
++	if (chan->state != BT_DISCONN) {
++		l2cap_chan_unlock(chan);
++		mutex_unlock(&conn->chan_lock);
++		return 0;
++	}
++
+ 	l2cap_chan_hold(chan);
+ 	l2cap_chan_del(chan, 0);
+ 
+@@ -5291,7 +5297,14 @@ static inline int l2cap_conn_param_update_req(struct l2cap_conn *conn,
+ 
+ 	memset(&rsp, 0, sizeof(rsp));
+ 
+-	err = hci_check_conn_params(min, max, latency, to_multiplier);
++	if (min < hcon->le_conn_min_interval ||
++	    max > hcon->le_conn_max_interval) {
++		BT_DBG("requested connection interval exceeds current bounds.");
++		err = -EINVAL;
++	} else {
++		err = hci_check_conn_params(min, max, latency, to_multiplier);
++	}
++
+ 	if (err)
+ 		rsp.result = cpu_to_le16(L2CAP_CONN_PARAM_REJECTED);
+ 	else
+diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
+index 621146d04c03..d2542984b191 100644
+--- a/net/bluetooth/smp.c
++++ b/net/bluetooth/smp.c
+@@ -2580,6 +2580,19 @@ static int smp_cmd_ident_addr_info(struct l2cap_conn *conn,
+ 		goto distribute;
+ 	}
+ 
++	/* Drop IRK if peer is using identity address during pairing but is
++	 * providing a different address as identity information.
++	 *
++	 * Microsoft Surface Precision Mouse is known to have this bug.
++	 */
++	if (hci_is_identity_address(&hcon->dst, hcon->dst_type) &&
++	    (bacmp(&info->bdaddr, &hcon->dst) ||
++	     info->addr_type != hcon->dst_type)) {
++		bt_dev_err(hcon->hdev,
++			   "ignoring IRK with invalid identity address");
++		goto distribute;
++	}
++
+ 	bacpy(&smp->id_addr, &info->bdaddr);
+ 	smp->id_addr_type = info->addr_type;
+ 
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index 4af1e1d60b9f..51c0f10bb131 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -2442,8 +2442,10 @@ static int key_pol_get_resp(struct sock *sk, struct xfrm_policy *xp, const struc
+ 		goto out;
+ 	}
+ 	err = pfkey_xfrm_policy2msg(out_skb, xp, dir);
+-	if (err < 0)
++	if (err < 0) {
++		kfree_skb(out_skb);
+ 		goto out;
++	}
+ 
+ 	out_hdr = (struct sadb_msg *) out_skb->data;
+ 	out_hdr->sadb_msg_version = hdr->sadb_msg_version;
+@@ -2694,8 +2696,10 @@ static int dump_sp(struct xfrm_policy *xp, int dir, int count, void *ptr)
+ 		return PTR_ERR(out_skb);
+ 
+ 	err = pfkey_xfrm_policy2msg(out_skb, xp, dir);
+-	if (err < 0)
++	if (err < 0) {
++		kfree_skb(out_skb);
+ 		return err;
++	}
+ 
+ 	out_hdr = (struct sadb_msg *) out_skb->data;
+ 	out_hdr->sadb_msg_version = pfk->dump.msg_version;
+diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
+index 2c9609929c71..455804456008 100644
+--- a/net/netfilter/ipset/ip_set_hash_gen.h
++++ b/net/netfilter/ipset/ip_set_hash_gen.h
+@@ -625,7 +625,7 @@ retry:
+ 					goto cleanup;
+ 				}
+ 				m->size = AHASH_INIT_SIZE;
+-				extsize = ext_size(AHASH_INIT_SIZE, dsize);
++				extsize += ext_size(AHASH_INIT_SIZE, dsize);
+ 				RCU_INIT_POINTER(hbucket(t, key), m);
+ 			} else if (m->pos >= m->size) {
+ 				struct hbucket *ht;
+diff --git a/net/netfilter/ipvs/ip_vs_core.c b/net/netfilter/ipvs/ip_vs_core.c
+index 8ebf21149ec3..e72b51157cbb 100644
+--- a/net/netfilter/ipvs/ip_vs_core.c
++++ b/net/netfilter/ipvs/ip_vs_core.c
+@@ -2250,7 +2250,6 @@ static const struct nf_hook_ops ip_vs_ops[] = {
+ static int __net_init __ip_vs_init(struct net *net)
+ {
+ 	struct netns_ipvs *ipvs;
+-	int ret;
+ 
+ 	ipvs = net_generic(net, ip_vs_net_id);
+ 	if (ipvs == NULL)
+@@ -2282,17 +2281,11 @@ static int __net_init __ip_vs_init(struct net *net)
+ 	if (ip_vs_sync_net_init(ipvs) < 0)
+ 		goto sync_fail;
+ 
+-	ret = nf_register_net_hooks(net, ip_vs_ops, ARRAY_SIZE(ip_vs_ops));
+-	if (ret < 0)
+-		goto hook_fail;
+-
+ 	return 0;
+ /*
+  * Error handling
+  */
+ 
+-hook_fail:
+-	ip_vs_sync_net_cleanup(ipvs);
+ sync_fail:
+ 	ip_vs_conn_net_cleanup(ipvs);
+ conn_fail:
+@@ -2322,6 +2315,19 @@ static void __net_exit __ip_vs_cleanup(struct net *net)
+ 	net->ipvs = NULL;
+ }
+ 
++static int __net_init __ip_vs_dev_init(struct net *net)
++{
++	int ret;
++
++	ret = nf_register_net_hooks(net, ip_vs_ops, ARRAY_SIZE(ip_vs_ops));
++	if (ret < 0)
++		goto hook_fail;
++	return 0;
++
++hook_fail:
++	return ret;
++}
++
+ static void __net_exit __ip_vs_dev_cleanup(struct net *net)
+ {
+ 	struct netns_ipvs *ipvs = net_ipvs(net);
+@@ -2341,6 +2347,7 @@ static struct pernet_operations ipvs_core_ops = {
+ };
+ 
+ static struct pernet_operations ipvs_core_dev_ops = {
++	.init = __ip_vs_dev_init,
+ 	.exit = __ip_vs_dev_cleanup,
+ };
+ 
+diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
+index 053cd96b9c76..179e9d11e41b 100644
+--- a/net/netfilter/ipvs/ip_vs_ctl.c
++++ b/net/netfilter/ipvs/ip_vs_ctl.c
+@@ -2382,9 +2382,7 @@ do_ip_vs_set_ctl(struct sock *sk, int cmd, void __user *user, unsigned int len)
+ 			cfg.syncid = dm->syncid;
+ 			ret = start_sync_thread(ipvs, &cfg, dm->state);
+ 		} else {
+-			mutex_lock(&ipvs->sync_mutex);
+ 			ret = stop_sync_thread(ipvs, dm->state);
+-			mutex_unlock(&ipvs->sync_mutex);
+ 		}
+ 		goto out_dec;
+ 	}
+@@ -3490,10 +3488,8 @@ static int ip_vs_genl_del_daemon(struct netns_ipvs *ipvs, struct nlattr **attrs)
+ 	if (!attrs[IPVS_DAEMON_ATTR_STATE])
+ 		return -EINVAL;
+ 
+-	mutex_lock(&ipvs->sync_mutex);
+ 	ret = stop_sync_thread(ipvs,
+ 			       nla_get_u32(attrs[IPVS_DAEMON_ATTR_STATE]));
+-	mutex_unlock(&ipvs->sync_mutex);
+ 	return ret;
+ }
+ 
+diff --git a/net/netfilter/ipvs/ip_vs_sync.c b/net/netfilter/ipvs/ip_vs_sync.c
+index 2526be6b3d90..a4a78c4b06de 100644
+--- a/net/netfilter/ipvs/ip_vs_sync.c
++++ b/net/netfilter/ipvs/ip_vs_sync.c
+@@ -195,6 +195,7 @@ union ip_vs_sync_conn {
+ #define IPVS_OPT_F_PARAM	(1 << (IPVS_OPT_PARAM-1))
+ 
+ struct ip_vs_sync_thread_data {
++	struct task_struct *task;
+ 	struct netns_ipvs *ipvs;
+ 	struct socket *sock;
+ 	char *buf;
+@@ -374,8 +375,11 @@ static inline void sb_queue_tail(struct netns_ipvs *ipvs,
+ 					      max(IPVS_SYNC_SEND_DELAY, 1));
+ 		ms->sync_queue_len++;
+ 		list_add_tail(&sb->list, &ms->sync_queue);
+-		if ((++ms->sync_queue_delay) == IPVS_SYNC_WAKEUP_RATE)
+-			wake_up_process(ms->master_thread);
++		if ((++ms->sync_queue_delay) == IPVS_SYNC_WAKEUP_RATE) {
++			int id = (int)(ms - ipvs->ms);
++
++			wake_up_process(ipvs->master_tinfo[id].task);
++		}
+ 	} else
+ 		ip_vs_sync_buff_release(sb);
+ 	spin_unlock(&ipvs->sync_lock);
+@@ -1636,8 +1640,10 @@ static void master_wakeup_work_handler(struct work_struct *work)
+ 	spin_lock_bh(&ipvs->sync_lock);
+ 	if (ms->sync_queue_len &&
+ 	    ms->sync_queue_delay < IPVS_SYNC_WAKEUP_RATE) {
++		int id = (int)(ms - ipvs->ms);
++
+ 		ms->sync_queue_delay = IPVS_SYNC_WAKEUP_RATE;
+-		wake_up_process(ms->master_thread);
++		wake_up_process(ipvs->master_tinfo[id].task);
+ 	}
+ 	spin_unlock_bh(&ipvs->sync_lock);
+ }
+@@ -1703,10 +1709,6 @@ done:
+ 	if (sb)
+ 		ip_vs_sync_buff_release(sb);
+ 
+-	/* release the sending multicast socket */
+-	sock_release(tinfo->sock);
+-	kfree(tinfo);
+-
+ 	return 0;
+ }
+ 
+@@ -1740,11 +1742,6 @@ static int sync_thread_backup(void *data)
+ 		}
+ 	}
+ 
+-	/* release the sending multicast socket */
+-	sock_release(tinfo->sock);
+-	kfree(tinfo->buf);
+-	kfree(tinfo);
+-
+ 	return 0;
+ }
+ 
+@@ -1752,8 +1749,8 @@ static int sync_thread_backup(void *data)
+ int start_sync_thread(struct netns_ipvs *ipvs, struct ipvs_sync_daemon_cfg *c,
+ 		      int state)
+ {
+-	struct ip_vs_sync_thread_data *tinfo = NULL;
+-	struct task_struct **array = NULL, *task;
++	struct ip_vs_sync_thread_data *ti = NULL, *tinfo;
++	struct task_struct *task;
+ 	struct net_device *dev;
+ 	char *name;
+ 	int (*threadfn)(void *data);
+@@ -1822,7 +1819,7 @@ int start_sync_thread(struct netns_ipvs *ipvs, struct ipvs_sync_daemon_cfg *c,
+ 		threadfn = sync_thread_master;
+ 	} else if (state == IP_VS_STATE_BACKUP) {
+ 		result = -EEXIST;
+-		if (ipvs->backup_threads)
++		if (ipvs->backup_tinfo)
+ 			goto out_early;
+ 
+ 		ipvs->bcfg = *c;
+@@ -1849,28 +1846,22 @@ int start_sync_thread(struct netns_ipvs *ipvs, struct ipvs_sync_daemon_cfg *c,
+ 					  master_wakeup_work_handler);
+ 			ms->ipvs = ipvs;
+ 		}
+-	} else {
+-		array = kcalloc(count, sizeof(struct task_struct *),
+-				GFP_KERNEL);
+-		result = -ENOMEM;
+-		if (!array)
+-			goto out;
+ 	}
++	result = -ENOMEM;
++	ti = kcalloc(count, sizeof(struct ip_vs_sync_thread_data),
++		     GFP_KERNEL);
++	if (!ti)
++		goto out;
+ 
+ 	for (id = 0; id < count; id++) {
+-		result = -ENOMEM;
+-		tinfo = kmalloc(sizeof(*tinfo), GFP_KERNEL);
+-		if (!tinfo)
+-			goto out;
++		tinfo = &ti[id];
+ 		tinfo->ipvs = ipvs;
+-		tinfo->sock = NULL;
+ 		if (state == IP_VS_STATE_BACKUP) {
++			result = -ENOMEM;
+ 			tinfo->buf = kmalloc(ipvs->bcfg.sync_maxlen,
+ 					     GFP_KERNEL);
+ 			if (!tinfo->buf)
+ 				goto out;
+-		} else {
+-			tinfo->buf = NULL;
+ 		}
+ 		tinfo->id = id;
+ 		if (state == IP_VS_STATE_MASTER)
+@@ -1885,17 +1876,15 @@ int start_sync_thread(struct netns_ipvs *ipvs, struct ipvs_sync_daemon_cfg *c,
+ 			result = PTR_ERR(task);
+ 			goto out;
+ 		}
+-		tinfo = NULL;
+-		if (state == IP_VS_STATE_MASTER)
+-			ipvs->ms[id].master_thread = task;
+-		else
+-			array[id] = task;
++		tinfo->task = task;
+ 	}
+ 
+ 	/* mark as active */
+ 
+-	if (state == IP_VS_STATE_BACKUP)
+-		ipvs->backup_threads = array;
++	if (state == IP_VS_STATE_MASTER)
++		ipvs->master_tinfo = ti;
++	else
++		ipvs->backup_tinfo = ti;
+ 	spin_lock_bh(&ipvs->sync_buff_lock);
+ 	ipvs->sync_state |= state;
+ 	spin_unlock_bh(&ipvs->sync_buff_lock);
+@@ -1910,29 +1899,31 @@ int start_sync_thread(struct netns_ipvs *ipvs, struct ipvs_sync_daemon_cfg *c,
+ 
+ out:
+ 	/* We do not need RTNL lock anymore, release it here so that
+-	 * sock_release below and in the kthreads can use rtnl_lock
+-	 * to leave the mcast group.
++	 * sock_release below can use rtnl_lock to leave the mcast group.
+ 	 */
+ 	rtnl_unlock();
+-	count = id;
+-	while (count-- > 0) {
+-		if (state == IP_VS_STATE_MASTER)
+-			kthread_stop(ipvs->ms[count].master_thread);
+-		else
+-			kthread_stop(array[count]);
++	id = min(id, count - 1);
++	if (ti) {
++		for (tinfo = ti + id; tinfo >= ti; tinfo--) {
++			if (tinfo->task)
++				kthread_stop(tinfo->task);
++		}
+ 	}
+ 	if (!(ipvs->sync_state & IP_VS_STATE_MASTER)) {
+ 		kfree(ipvs->ms);
+ 		ipvs->ms = NULL;
+ 	}
+ 	mutex_unlock(&ipvs->sync_mutex);
+-	if (tinfo) {
+-		if (tinfo->sock)
+-			sock_release(tinfo->sock);
+-		kfree(tinfo->buf);
+-		kfree(tinfo);
++
++	/* No more mutexes, release socks */
++	if (ti) {
++		for (tinfo = ti + id; tinfo >= ti; tinfo--) {
++			if (tinfo->sock)
++				sock_release(tinfo->sock);
++			kfree(tinfo->buf);
++		}
++		kfree(ti);
+ 	}
+-	kfree(array);
+ 	return result;
+ 
+ out_early:
+@@ -1944,15 +1935,18 @@ out_early:
+ 
+ int stop_sync_thread(struct netns_ipvs *ipvs, int state)
+ {
+-	struct task_struct **array;
++	struct ip_vs_sync_thread_data *ti, *tinfo;
+ 	int id;
+ 	int retc = -EINVAL;
+ 
+ 	IP_VS_DBG(7, "%s(): pid %d\n", __func__, task_pid_nr(current));
+ 
++	mutex_lock(&ipvs->sync_mutex);
+ 	if (state == IP_VS_STATE_MASTER) {
++		retc = -ESRCH;
+ 		if (!ipvs->ms)
+-			return -ESRCH;
++			goto err;
++		ti = ipvs->master_tinfo;
+ 
+ 		/*
+ 		 * The lock synchronizes with sb_queue_tail(), so that we don't
+@@ -1971,38 +1965,56 @@ int stop_sync_thread(struct netns_ipvs *ipvs, int state)
+ 			struct ipvs_master_sync_state *ms = &ipvs->ms[id];
+ 			int ret;
+ 
++			tinfo = &ti[id];
+ 			pr_info("stopping master sync thread %d ...\n",
+-				task_pid_nr(ms->master_thread));
++				task_pid_nr(tinfo->task));
+ 			cancel_delayed_work_sync(&ms->master_wakeup_work);
+-			ret = kthread_stop(ms->master_thread);
++			ret = kthread_stop(tinfo->task);
+ 			if (retc >= 0)
+ 				retc = ret;
+ 		}
+ 		kfree(ipvs->ms);
+ 		ipvs->ms = NULL;
++		ipvs->master_tinfo = NULL;
+ 	} else if (state == IP_VS_STATE_BACKUP) {
+-		if (!ipvs->backup_threads)
+-			return -ESRCH;
++		retc = -ESRCH;
++		if (!ipvs->backup_tinfo)
++			goto err;
++		ti = ipvs->backup_tinfo;
+ 
+ 		ipvs->sync_state &= ~IP_VS_STATE_BACKUP;
+-		array = ipvs->backup_threads;
+ 		retc = 0;
+ 		for (id = ipvs->threads_mask; id >= 0; id--) {
+ 			int ret;
+ 
++			tinfo = &ti[id];
+ 			pr_info("stopping backup sync thread %d ...\n",
+-				task_pid_nr(array[id]));
+-			ret = kthread_stop(array[id]);
++				task_pid_nr(tinfo->task));
++			ret = kthread_stop(tinfo->task);
+ 			if (retc >= 0)
+ 				retc = ret;
+ 		}
+-		kfree(array);
+-		ipvs->backup_threads = NULL;
++		ipvs->backup_tinfo = NULL;
++	} else {
++		goto err;
+ 	}
++	id = ipvs->threads_mask;
++	mutex_unlock(&ipvs->sync_mutex);
++
++	/* No more mutexes, release socks */
++	for (tinfo = ti + id; tinfo >= ti; tinfo--) {
++		if (tinfo->sock)
++			sock_release(tinfo->sock);
++		kfree(tinfo->buf);
++	}
++	kfree(ti);
+ 
+ 	/* decrease the module use count */
+ 	ip_vs_use_count_dec();
++	return retc;
+ 
++err:
++	mutex_unlock(&ipvs->sync_mutex);
+ 	return retc;
+ }
+ 
+@@ -2021,7 +2033,6 @@ void ip_vs_sync_net_cleanup(struct netns_ipvs *ipvs)
+ {
+ 	int retc;
+ 
+-	mutex_lock(&ipvs->sync_mutex);
+ 	retc = stop_sync_thread(ipvs, IP_VS_STATE_MASTER);
+ 	if (retc && retc != -ESRCH)
+ 		pr_err("Failed to stop Master Daemon\n");
+@@ -2029,5 +2040,4 @@ void ip_vs_sync_net_cleanup(struct netns_ipvs *ipvs)
+ 	retc = stop_sync_thread(ipvs, IP_VS_STATE_BACKUP);
+ 	if (retc && retc != -ESRCH)
+ 		pr_err("Failed to stop Backup Daemon\n");
+-	mutex_unlock(&ipvs->sync_mutex);
+ }
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index d2715b4d2e72..061bdab37b1a 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -1254,7 +1254,6 @@ static int ctnetlink_del_conntrack(struct net *net, struct sock *ctnl,
+ 	struct nf_conntrack_tuple tuple;
+ 	struct nf_conn *ct;
+ 	struct nfgenmsg *nfmsg = nlmsg_data(nlh);
+-	u_int8_t u3 = nfmsg->version ? nfmsg->nfgen_family : AF_UNSPEC;
+ 	struct nf_conntrack_zone zone;
+ 	int err;
+ 
+@@ -1264,11 +1263,13 @@ static int ctnetlink_del_conntrack(struct net *net, struct sock *ctnl,
+ 
+ 	if (cda[CTA_TUPLE_ORIG])
+ 		err = ctnetlink_parse_tuple(cda, &tuple, CTA_TUPLE_ORIG,
+-					    u3, &zone);
++					    nfmsg->nfgen_family, &zone);
+ 	else if (cda[CTA_TUPLE_REPLY])
+ 		err = ctnetlink_parse_tuple(cda, &tuple, CTA_TUPLE_REPLY,
+-					    u3, &zone);
++					    nfmsg->nfgen_family, &zone);
+ 	else {
++		u_int8_t u3 = nfmsg->version ? nfmsg->nfgen_family : AF_UNSPEC;
++
+ 		return ctnetlink_flush_conntrack(net, cda,
+ 						 NETLINK_CB(skb).portid,
+ 						 nlmsg_report(nlh), u3);
+diff --git a/net/netfilter/nf_conntrack_proto_icmp.c b/net/netfilter/nf_conntrack_proto_icmp.c
+index 9becac953587..71a84a0517f3 100644
+--- a/net/netfilter/nf_conntrack_proto_icmp.c
++++ b/net/netfilter/nf_conntrack_proto_icmp.c
+@@ -221,7 +221,7 @@ int nf_conntrack_icmpv4_error(struct nf_conn *tmpl,
+ 	/* See ip_conntrack_proto_tcp.c */
+ 	if (state->net->ct.sysctl_checksum &&
+ 	    state->hook == NF_INET_PRE_ROUTING &&
+-	    nf_ip_checksum(skb, state->hook, dataoff, 0)) {
++	    nf_ip_checksum(skb, state->hook, dataoff, IPPROTO_ICMP)) {
+ 		icmp_error_log(skb, state, "bad hw icmp checksum");
+ 		return -NF_ACCEPT;
+ 	}
+diff --git a/net/netfilter/nf_nat_proto.c b/net/netfilter/nf_nat_proto.c
+index 62743da3004f..0b0efbb953bf 100644
+--- a/net/netfilter/nf_nat_proto.c
++++ b/net/netfilter/nf_nat_proto.c
+@@ -567,7 +567,7 @@ int nf_nat_icmp_reply_translation(struct sk_buff *skb,
+ 
+ 	if (!skb_make_writable(skb, hdrlen + sizeof(*inside)))
+ 		return 0;
+-	if (nf_ip_checksum(skb, hooknum, hdrlen, 0))
++	if (nf_ip_checksum(skb, hooknum, hdrlen, IPPROTO_ICMP))
+ 		return 0;
+ 
+ 	inside = (void *)skb->data + hdrlen;
+diff --git a/net/netfilter/utils.c b/net/netfilter/utils.c
+index 06dc55590441..51b454d8fa9c 100644
+--- a/net/netfilter/utils.c
++++ b/net/netfilter/utils.c
+@@ -17,7 +17,8 @@ __sum16 nf_ip_checksum(struct sk_buff *skb, unsigned int hook,
+ 	case CHECKSUM_COMPLETE:
+ 		if (hook != NF_INET_PRE_ROUTING && hook != NF_INET_LOCAL_IN)
+ 			break;
+-		if ((protocol == 0 && !csum_fold(skb->csum)) ||
++		if ((protocol != IPPROTO_TCP && protocol != IPPROTO_UDP &&
++		    !csum_fold(skb->csum)) ||
+ 		    !csum_tcpudp_magic(iph->saddr, iph->daddr,
+ 				       skb->len - dataoff, protocol,
+ 				       skb->csum)) {
+@@ -26,7 +27,7 @@ __sum16 nf_ip_checksum(struct sk_buff *skb, unsigned int hook,
+ 		}
+ 		/* fall through */
+ 	case CHECKSUM_NONE:
+-		if (protocol == 0)
++		if (protocol != IPPROTO_TCP && protocol != IPPROTO_UDP)
+ 			skb->csum = 0;
+ 		else
+ 			skb->csum = csum_tcpudp_nofold(iph->saddr, iph->daddr,
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index ea8d5aed1e2c..a241d7cf2071 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -1767,6 +1767,7 @@ rpc_xdr_encode(struct rpc_task *task)
+ 	req->rq_snd_buf.head[0].iov_len = 0;
+ 	xdr_init_encode(&xdr, &req->rq_snd_buf,
+ 			req->rq_snd_buf.head[0].iov_base, req);
++	xdr_free_bvec(&req->rq_snd_buf);
+ 	if (rpc_encode_header(task, &xdr))
+ 		return;
+ 
+@@ -1799,8 +1800,6 @@ call_encode(struct rpc_task *task)
+ 			rpc_exit(task, task->tk_status);
+ 		}
+ 		return;
+-	} else {
+-		xprt_request_prepare(task->tk_rqstp);
+ 	}
+ 
+ 	/* Add task to reply queue before transmission to avoid races */
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index d7117d241460..0e185c29d9f1 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -1006,6 +1006,8 @@ xprt_request_enqueue_receive(struct rpc_task *task)
+ 
+ 	if (!xprt_request_need_enqueue_receive(task, req))
+ 		return;
++
++	xprt_request_prepare(task->tk_rqstp);
+ 	spin_lock(&xprt->queue_lock);
+ 
+ 	/* Update the softirq receive buffer */
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index a437ee8ae482..1b2450191110 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -909,6 +909,7 @@ static int xs_nospace(struct rpc_rqst *req)
+ static void
+ xs_stream_prepare_request(struct rpc_rqst *req)
+ {
++	xdr_free_bvec(&req->rq_rcv_buf);
+ 	req->rq_task->tk_status = xdr_alloc_bvec(&req->rq_rcv_buf, GFP_KERNEL);
+ }
+ 
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index a14e8864e4fa..5e0637db92ea 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -123,13 +123,17 @@ int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp)
+ 	u64 addr;
+ 	int err;
+ 
+-	if (xs->dev != xdp->rxq->dev || xs->queue_id != xdp->rxq->queue_index)
+-		return -EINVAL;
++	spin_lock_bh(&xs->rx_lock);
++
++	if (xs->dev != xdp->rxq->dev || xs->queue_id != xdp->rxq->queue_index) {
++		err = -EINVAL;
++		goto out_unlock;
++	}
+ 
+ 	if (!xskq_peek_addr(xs->umem->fq, &addr) ||
+ 	    len > xs->umem->chunk_size_nohr - XDP_PACKET_HEADROOM) {
+-		xs->rx_dropped++;
+-		return -ENOSPC;
++		err = -ENOSPC;
++		goto out_drop;
+ 	}
+ 
+ 	addr += xs->umem->headroom;
+@@ -138,13 +142,21 @@ int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp)
+ 	memcpy(buffer, xdp->data_meta, len + metalen);
+ 	addr += metalen;
+ 	err = xskq_produce_batch_desc(xs->rx, addr, len);
+-	if (!err) {
+-		xskq_discard_addr(xs->umem->fq);
+-		xsk_flush(xs);
+-		return 0;
+-	}
++	if (err)
++		goto out_drop;
++
++	xskq_discard_addr(xs->umem->fq);
++	xskq_produce_flush_desc(xs->rx);
+ 
++	spin_unlock_bh(&xs->rx_lock);
++
++	xs->sk.sk_data_ready(&xs->sk);
++	return 0;
++
++out_drop:
+ 	xs->rx_dropped++;
++out_unlock:
++	spin_unlock_bh(&xs->rx_lock);
+ 	return err;
+ }
+ 
+@@ -765,6 +777,7 @@ static int xsk_create(struct net *net, struct socket *sock, int protocol,
+ 
+ 	xs = xdp_sk(sk);
+ 	mutex_init(&xs->mutex);
++	spin_lock_init(&xs->rx_lock);
+ 	spin_lock_init(&xs->tx_completion_lock);
+ 
+ 	mutex_lock(&net->xdp.lock);
+diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
+index 610c0bdc0c2b..cd333701f4bf 100644
+--- a/net/xdp/xsk_queue.h
++++ b/net/xdp/xsk_queue.h
+@@ -240,7 +240,7 @@ static inline void xskq_produce_flush_desc(struct xsk_queue *q)
+ 	/* Order producer and data */
+ 	smp_wmb();
+ 
+-	q->prod_tail = q->prod_head,
++	q->prod_tail = q->prod_head;
+ 	WRITE_ONCE(q->ring->producer, q->prod_tail);
+ }
+ 
+diff --git a/net/xfrm/Kconfig b/net/xfrm/Kconfig
+index 5d43aaa17027..831668ee8229 100644
+--- a/net/xfrm/Kconfig
++++ b/net/xfrm/Kconfig
+@@ -14,6 +14,8 @@ config XFRM_ALGO
+ 	tristate
+ 	select XFRM
+ 	select CRYPTO
++	select CRYPTO_HASH
++	select CRYPTO_BLKCIPHER
+ 
+ config XFRM_USER
+ 	tristate "Transformation user configuration interface"
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index 6916931b1de1..6abf9625a401 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -150,6 +150,25 @@ static int verify_newsa_info(struct xfrm_usersa_info *p,
+ 
+ 	err = -EINVAL;
+ 	switch (p->family) {
++	case AF_INET:
++		break;
++
++	case AF_INET6:
++#if IS_ENABLED(CONFIG_IPV6)
++		break;
++#else
++		err = -EAFNOSUPPORT;
++		goto out;
++#endif
++
++	default:
++		goto out;
++	}
++
++	switch (p->sel.family) {
++	case AF_UNSPEC:
++		break;
++
+ 	case AF_INET:
+ 		if (p->sel.prefixlen_d > 32 || p->sel.prefixlen_s > 32)
+ 			goto out;
+diff --git a/scripts/kconfig/confdata.c b/scripts/kconfig/confdata.c
+index 08ba146a83c5..62508d9d8ad7 100644
+--- a/scripts/kconfig/confdata.c
++++ b/scripts/kconfig/confdata.c
+@@ -871,11 +871,12 @@ int conf_write(const char *name)
+ 				     "#\n"
+ 				     "# %s\n"
+ 				     "#\n", str);
+-		} else if (!(sym->flags & SYMBOL_CHOICE)) {
++		} else if (!(sym->flags & SYMBOL_CHOICE) &&
++			   !(sym->flags & SYMBOL_WRITTEN)) {
+ 			sym_calc_value(sym);
+ 			if (!(sym->flags & SYMBOL_WRITE))
+ 				goto next;
+-			sym->flags &= ~SYMBOL_WRITE;
++			sym->flags |= SYMBOL_WRITTEN;
+ 
+ 			conf_write_symbol(out, sym, &kconfig_printer_cb, NULL);
+ 		}
+@@ -1026,8 +1027,6 @@ int conf_write_autoconf(int overwrite)
+ 	if (!overwrite && is_present(autoconf_name))
+ 		return 0;
+ 
+-	sym_clear_all_valid();
+-
+ 	conf_write_dep("include/config/auto.conf.cmd");
+ 
+ 	if (conf_touch_deps())
+diff --git a/scripts/kconfig/expr.h b/scripts/kconfig/expr.h
+index 8dde65bc3165..017843c9a4f4 100644
+--- a/scripts/kconfig/expr.h
++++ b/scripts/kconfig/expr.h
+@@ -141,6 +141,7 @@ struct symbol {
+ #define SYMBOL_OPTIONAL   0x0100  /* choice is optional - values can be 'n' */
+ #define SYMBOL_WRITE      0x0200  /* write symbol to file (KCONFIG_CONFIG) */
+ #define SYMBOL_CHANGED    0x0400  /* ? */
++#define SYMBOL_WRITTEN    0x0800  /* track info to avoid double-write to .config */
+ #define SYMBOL_NO_WRITE   0x1000  /* Symbol for internal use only; it will not be written */
+ #define SYMBOL_CHECKED    0x2000  /* used during dependency checking */
+ #define SYMBOL_WARNED     0x8000  /* warning has been issued */
+diff --git a/security/integrity/digsig.c b/security/integrity/digsig.c
+index e19c2eb72c51..37869214c243 100644
+--- a/security/integrity/digsig.c
++++ b/security/integrity/digsig.c
+@@ -73,8 +73,9 @@ int integrity_digsig_verify(const unsigned int id, const char *sig, int siglen,
+ 	return -EOPNOTSUPP;
+ }
+ 
+-static int __integrity_init_keyring(const unsigned int id, key_perm_t perm,
+-				    struct key_restriction *restriction)
++static int __init __integrity_init_keyring(const unsigned int id,
++					   key_perm_t perm,
++					   struct key_restriction *restriction)
+ {
+ 	const struct cred *cred = current_cred();
+ 	int err = 0;
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 614bc753822c..bf37bdce9918 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -6269,11 +6269,12 @@ static int selinux_setprocattr(const char *name, void *value, size_t size)
+ 	} else if (!strcmp(name, "fscreate")) {
+ 		tsec->create_sid = sid;
+ 	} else if (!strcmp(name, "keycreate")) {
+-		error = avc_has_perm(&selinux_state,
+-				     mysid, sid, SECCLASS_KEY, KEY__CREATE,
+-				     NULL);
+-		if (error)
+-			goto abort_change;
++		if (sid) {
++			error = avc_has_perm(&selinux_state, mysid, sid,
++					     SECCLASS_KEY, KEY__CREATE, NULL);
++			if (error)
++				goto abort_change;
++		}
+ 		tsec->keycreate_sid = sid;
+ 	} else if (!strcmp(name, "sockcreate")) {
+ 		tsec->sockcreate_sid = sid;
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index c99e1b77a45b..1524748b224d 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -1004,7 +1004,7 @@ static ssize_t snd_seq_write(struct file *file, const char __user *buf,
+ {
+ 	struct snd_seq_client *client = file->private_data;
+ 	int written = 0, len;
+-	int err;
++	int err, handled;
+ 	struct snd_seq_event event;
+ 
+ 	if (!(snd_seq_file_flags(file) & SNDRV_SEQ_LFLG_OUTPUT))
+@@ -1017,6 +1017,8 @@ static ssize_t snd_seq_write(struct file *file, const char __user *buf,
+ 	if (!client->accept_output || client->pool == NULL)
+ 		return -ENXIO;
+ 
++ repeat:
++	handled = 0;
+ 	/* allocate the pool now if the pool is not allocated yet */ 
+ 	mutex_lock(&client->ioctl_mutex);
+ 	if (client->pool->size > 0 && !snd_seq_write_pool_allocated(client)) {
+@@ -1076,12 +1078,19 @@ static ssize_t snd_seq_write(struct file *file, const char __user *buf,
+ 						   0, 0, &client->ioctl_mutex);
+ 		if (err < 0)
+ 			break;
++		handled++;
+ 
+ 	__skip_event:
+ 		/* Update pointers and counts */
+ 		count -= len;
+ 		buf += len;
+ 		written += len;
++
++		/* let's have a coffee break if too many events are queued */
++		if (++handled >= 200) {
++			mutex_unlock(&client->ioctl_mutex);
++			goto repeat;
++		}
+ 	}
+ 
+  out:
+diff --git a/sound/hda/hdac_controller.c b/sound/hda/hdac_controller.c
+index b2e9454f5816..6a190f0d2803 100644
+--- a/sound/hda/hdac_controller.c
++++ b/sound/hda/hdac_controller.c
+@@ -78,6 +78,8 @@ void snd_hdac_bus_init_cmd_io(struct hdac_bus *bus)
+ 	snd_hdac_chip_writew(bus, RINTCNT, 1);
+ 	/* enable rirb dma and response irq */
+ 	snd_hdac_chip_writeb(bus, RIRBCTL, AZX_RBCTL_DMA_EN | AZX_RBCTL_IRQ_EN);
++	/* Accept unsolicited responses */
++	snd_hdac_chip_updatel(bus, GCTL, AZX_GCTL_UNSOL, AZX_GCTL_UNSOL);
+ 	spin_unlock_irq(&bus->reg_lock);
+ }
+ EXPORT_SYMBOL_GPL(snd_hdac_bus_init_cmd_io);
+@@ -414,9 +416,6 @@ int snd_hdac_bus_reset_link(struct hdac_bus *bus, bool full_reset)
+ 		return -EBUSY;
+ 	}
+ 
+-	/* Accept unsolicited responses */
+-	snd_hdac_chip_updatel(bus, GCTL, AZX_GCTL_UNSOL, AZX_GCTL_UNSOL);
+-
+ 	/* detect codecs */
+ 	if (!bus->codec_mask) {
+ 		bus->codec_mask = snd_hdac_chip_readw(bus, STATESTS);
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index fcdf2cd3783b..fa1d041da5d2 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -2955,15 +2955,19 @@ static int hda_codec_runtime_resume(struct device *dev)
+ #ifdef CONFIG_PM_SLEEP
+ static int hda_codec_force_resume(struct device *dev)
+ {
++	struct hda_codec *codec = dev_to_hda_codec(dev);
++	bool forced_resume = !codec->relaxed_resume;
+ 	int ret;
+ 
+ 	/* The get/put pair below enforces the runtime resume even if the
+ 	 * device hasn't been used at suspend time.  This trick is needed to
+ 	 * update the jack state change during the sleep.
+ 	 */
+-	pm_runtime_get_noresume(dev);
++	if (forced_resume)
++		pm_runtime_get_noresume(dev);
+ 	ret = pm_runtime_force_resume(dev);
+-	pm_runtime_put(dev);
++	if (forced_resume)
++		pm_runtime_put(dev);
+ 	return ret;
+ }
+ 
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 0c61c05503f5..b2f17da292a4 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -2304,8 +2304,10 @@ static void generic_hdmi_free(struct hda_codec *codec)
+ 	struct hdmi_spec *spec = codec->spec;
+ 	int pin_idx, pcm_idx;
+ 
+-	if (codec_has_acomp(codec))
++	if (codec_has_acomp(codec)) {
+ 		snd_hdac_acomp_register_notifier(&codec->bus->core, NULL);
++		codec->relaxed_resume = 0;
++	}
+ 
+ 	for (pin_idx = 0; pin_idx < spec->num_pins; pin_idx++) {
+ 		struct hdmi_spec_per_pin *per_pin = get_pin(spec, pin_idx);
+@@ -2428,7 +2430,6 @@ static void intel_haswell_fixup_connect_list(struct hda_codec *codec,
+ 	snd_hda_override_conn_list(codec, nid, spec->num_cvts, spec->cvt_nids);
+ }
+ 
+-#define INTEL_GET_VENDOR_VERB	0xf81
+ #define INTEL_GET_VENDOR_VERB	0xf81
+ #define INTEL_SET_VENDOR_VERB	0x781
+ #define INTEL_EN_DP12		0x02	/* enable DP 1.2 features */
+@@ -2537,18 +2538,32 @@ static int intel_pin2port(void *audio_ptr, int pin_nid)
+ 	return -1;
+ }
+ 
++static int intel_port2pin(struct hda_codec *codec, int port)
++{
++	struct hdmi_spec *spec = codec->spec;
++
++	if (!spec->port_num) {
++		/* we assume only from port-B to port-D */
++		if (port < 1 || port > 3)
++			return 0;
++		/* intel port is 1-based */
++		return port + intel_base_nid(codec) - 1;
++	}
++
++	if (port < 1 || port > spec->port_num)
++		return 0;
++	return spec->port_map[port - 1];
++}
++
+ static void intel_pin_eld_notify(void *audio_ptr, int port, int pipe)
+ {
+ 	struct hda_codec *codec = audio_ptr;
+ 	int pin_nid;
+ 	int dev_id = pipe;
+ 
+-	/* we assume only from port-B to port-D */
+-	if (port < 1 || port > 3)
++	pin_nid = intel_port2pin(codec, port);
++	if (!pin_nid)
+ 		return;
+-
+-	pin_nid = port + intel_base_nid(codec) - 1; /* intel port is 1-based */
+-
+ 	/* skip notification during system suspend (but not in runtime PM);
+ 	 * the state will be updated at resume
+ 	 */
+@@ -2578,6 +2593,8 @@ static void register_i915_notifier(struct hda_codec *codec)
+ 	spec->drm_audio_ops.pin_eld_notify = intel_pin_eld_notify;
+ 	snd_hdac_acomp_register_notifier(&codec->bus->core,
+ 					&spec->drm_audio_ops);
++	/* no need for forcible resume for jack check thanks to notifier */
++	codec->relaxed_resume = 1;
+ }
+ 
+ /* setup_stream ops override for HSW+ */
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index dde9a49ded78..e50bf2fee250 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -7614,9 +7614,12 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x12, 0x90a60130},
+ 		{0x17, 0x90170110},
+ 		{0x21, 0x03211020}),
+-	SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE,
++	SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE,
+ 		{0x14, 0x90170110},
+ 		{0x21, 0x04211020}),
++	SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE,
++		{0x14, 0x90170110},
++		{0x21, 0x04211030}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 		ALC295_STANDARD_PINS,
+ 		{0x17, 0x21014020},
+@@ -8751,6 +8754,11 @@ static const struct snd_hda_pin_quirk alc662_pin_fixup_tbl[] = {
+ 		{0x18, 0x01a19030},
+ 		{0x1a, 0x01813040},
+ 		{0x21, 0x01014020}),
++	SND_HDA_PIN_QUIRK(0x10ec0867, 0x1028, "Dell", ALC891_FIXUP_DELL_MIC_NO_PRESENCE,
++		{0x16, 0x01813030},
++		{0x17, 0x02211010},
++		{0x18, 0x01a19040},
++		{0x21, 0x01014020}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0662, 0x1028, "Dell", ALC662_FIXUP_DELL_MIC_NO_PRESENCE,
+ 		{0x14, 0x01014010},
+ 		{0x18, 0x01a19020},
+diff --git a/sound/soc/codecs/hdac_hdmi.c b/sound/soc/codecs/hdac_hdmi.c
+index 4de1fbfa8827..65177ca64827 100644
+--- a/sound/soc/codecs/hdac_hdmi.c
++++ b/sound/soc/codecs/hdac_hdmi.c
+@@ -1880,6 +1880,12 @@ static void hdmi_codec_remove(struct snd_soc_component *component)
+ {
+ 	struct hdac_hdmi_priv *hdmi = snd_soc_component_get_drvdata(component);
+ 	struct hdac_device *hdev = hdmi->hdev;
++	int ret;
++
++	ret = snd_hdac_acomp_register_notifier(hdev->bus, NULL);
++	if (ret < 0)
++		dev_err(&hdev->dev, "notifier unregister failed: err: %d\n",
++				ret);
+ 
+ 	pm_runtime_disable(&hdev->dev);
+ }
+diff --git a/sound/soc/generic/audio-graph-card.c b/sound/soc/generic/audio-graph-card.c
+index 69bc4848d787..f730830fb36c 100644
+--- a/sound/soc/generic/audio-graph-card.c
++++ b/sound/soc/generic/audio-graph-card.c
+@@ -460,9 +460,6 @@ static int graph_for_each_link(struct graph_priv *priv,
+ 			codec_ep = of_graph_get_remote_endpoint(cpu_ep);
+ 			codec_port = of_get_parent(codec_ep);
+ 
+-			of_node_put(codec_ep);
+-			of_node_put(codec_port);
+-
+ 			/* get convert-xxx property */
+ 			memset(&adata, 0, sizeof(adata));
+ 			graph_get_conversion(dev, codec_ep, &adata);
+@@ -482,6 +479,9 @@ static int graph_for_each_link(struct graph_priv *priv,
+ 			else
+ 				ret = func_noml(priv, cpu_ep, codec_ep, li);
+ 
++			of_node_put(codec_ep);
++			of_node_put(codec_port);
++
+ 			if (ret < 0)
+ 				return ret;
+ 
+diff --git a/sound/soc/meson/axg-tdm.h b/sound/soc/meson/axg-tdm.h
+index e578b6f40a07..5774ce0916d4 100644
+--- a/sound/soc/meson/axg-tdm.h
++++ b/sound/soc/meson/axg-tdm.h
+@@ -40,7 +40,7 @@ struct axg_tdm_iface {
+ 
+ static inline bool axg_tdm_lrclk_invert(unsigned int fmt)
+ {
+-	return (fmt & SND_SOC_DAIFMT_I2S) ^
++	return ((fmt & SND_SOC_DAIFMT_FORMAT_MASK) == SND_SOC_DAIFMT_I2S) ^
+ 		!!(fmt & (SND_SOC_DAIFMT_IB_IF | SND_SOC_DAIFMT_NB_IF));
+ }
+ 
+diff --git a/sound/soc/sh/rcar/ctu.c b/sound/soc/sh/rcar/ctu.c
+index 8cb06dab234e..7647b3d4c0ba 100644
+--- a/sound/soc/sh/rcar/ctu.c
++++ b/sound/soc/sh/rcar/ctu.c
+@@ -108,7 +108,7 @@ static int rsnd_ctu_probe_(struct rsnd_mod *mod,
+ 			   struct rsnd_dai_stream *io,
+ 			   struct rsnd_priv *priv)
+ {
+-	return rsnd_cmd_attach(io, rsnd_mod_id(mod) / 4);
++	return rsnd_cmd_attach(io, rsnd_mod_id(mod));
+ }
+ 
+ static void rsnd_ctu_value_init(struct rsnd_dai_stream *io,
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index c010cc864cf3..c58af17228c3 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -158,9 +158,10 @@ static void soc_init_component_debugfs(struct snd_soc_component *component)
+ 				component->card->debugfs_card_root);
+ 	}
+ 
+-	if (!component->debugfs_root) {
++	if (IS_ERR(component->debugfs_root)) {
+ 		dev_warn(component->dev,
+-			"ASoC: Failed to create component debugfs directory\n");
++			"ASoC: Failed to create component debugfs directory: %ld\n",
++			PTR_ERR(component->debugfs_root));
+ 		return;
+ 	}
+ 
+@@ -212,18 +213,21 @@ static void soc_init_card_debugfs(struct snd_soc_card *card)
+ 
+ 	card->debugfs_card_root = debugfs_create_dir(card->name,
+ 						     snd_soc_debugfs_root);
+-	if (!card->debugfs_card_root) {
++	if (IS_ERR(card->debugfs_card_root)) {
+ 		dev_warn(card->dev,
+-			 "ASoC: Failed to create card debugfs directory\n");
++			 "ASoC: Failed to create card debugfs directory: %ld\n",
++			 PTR_ERR(card->debugfs_card_root));
++		card->debugfs_card_root = NULL;
+ 		return;
+ 	}
+ 
+ 	card->debugfs_pop_time = debugfs_create_u32("dapm_pop_time", 0644,
+ 						    card->debugfs_card_root,
+ 						    &card->pop_time);
+-	if (!card->debugfs_pop_time)
++	if (IS_ERR(card->debugfs_pop_time))
+ 		dev_warn(card->dev,
+-			 "ASoC: Failed to create pop time debugfs file\n");
++			 "ASoC: Failed to create pop time debugfs file: %ld\n",
++			 PTR_ERR(card->debugfs_pop_time));
+ }
+ 
+ static void soc_cleanup_card_debugfs(struct snd_soc_card *card)
+@@ -2834,14 +2838,12 @@ static void snd_soc_unbind_card(struct snd_soc_card *card, bool unregister)
+ 		snd_soc_dapm_shutdown(card);
+ 		snd_soc_flush_all_delayed_work(card);
+ 
+-		mutex_lock(&client_mutex);
+ 		/* remove all components used by DAI links on this card */
+ 		for_each_comp_order(order) {
+ 			for_each_card_rtds(card, rtd) {
+ 				soc_remove_link_components(card, rtd, order);
+ 			}
+ 		}
+-		mutex_unlock(&client_mutex);
+ 
+ 		soc_cleanup_card_resources(card);
+ 		if (!unregister)
+@@ -2860,7 +2862,9 @@ static void snd_soc_unbind_card(struct snd_soc_card *card, bool unregister)
+  */
+ int snd_soc_unregister_card(struct snd_soc_card *card)
+ {
++	mutex_lock(&client_mutex);
+ 	snd_soc_unbind_card(card, true);
++	mutex_unlock(&client_mutex);
+ 	dev_dbg(card->dev, "ASoC: Unregistered card '%s'\n", card->name);
+ 
+ 	return 0;
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index 5d9d7678e4fa..6b0f45b9dcdf 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -2154,23 +2154,25 @@ void snd_soc_dapm_debugfs_init(struct snd_soc_dapm_context *dapm,
+ {
+ 	struct dentry *d;
+ 
+-	if (!parent)
++	if (!parent || IS_ERR(parent))
+ 		return;
+ 
+ 	dapm->debugfs_dapm = debugfs_create_dir("dapm", parent);
+ 
+-	if (!dapm->debugfs_dapm) {
++	if (IS_ERR(dapm->debugfs_dapm)) {
+ 		dev_warn(dapm->dev,
+-		       "ASoC: Failed to create DAPM debugfs directory\n");
++			 "ASoC: Failed to create DAPM debugfs directory %ld\n",
++			 PTR_ERR(dapm->debugfs_dapm));
+ 		return;
+ 	}
+ 
+ 	d = debugfs_create_file("bias_level", 0444,
+ 				dapm->debugfs_dapm, dapm,
+ 				&dapm_bias_fops);
+-	if (!d)
++	if (IS_ERR(d))
+ 		dev_warn(dapm->dev,
+-			 "ASoC: Failed to create bias level debugfs file\n");
++			 "ASoC: Failed to create bias level debugfs file: %ld\n",
++			 PTR_ERR(d));
+ }
+ 
+ static void dapm_debugfs_add_widget(struct snd_soc_dapm_widget *w)
+@@ -2184,10 +2186,10 @@ static void dapm_debugfs_add_widget(struct snd_soc_dapm_widget *w)
+ 	d = debugfs_create_file(w->name, 0444,
+ 				dapm->debugfs_dapm, w,
+ 				&dapm_widget_power_fops);
+-	if (!d)
++	if (IS_ERR(d))
+ 		dev_warn(w->dapm->dev,
+-			"ASoC: Failed to create %s debugfs file\n",
+-			w->name);
++			 "ASoC: Failed to create %s debugfs file: %ld\n",
++			 w->name, PTR_ERR(d));
+ }
+ 
+ static void dapm_debugfs_cleanup(struct snd_soc_dapm_context *dapm)
+diff --git a/tools/bpf/bpftool/jit_disasm.c b/tools/bpf/bpftool/jit_disasm.c
+index 3ef3093560ba..bfed711258ce 100644
+--- a/tools/bpf/bpftool/jit_disasm.c
++++ b/tools/bpf/bpftool/jit_disasm.c
+@@ -11,6 +11,8 @@
+  * Licensed under the GNU General Public License, version 2.0 (GPLv2)
+  */
+ 
++#define _GNU_SOURCE
++#include <stdio.h>
+ #include <stdarg.h>
+ #include <stdint.h>
+ #include <stdio.h>
+@@ -44,11 +46,13 @@ static int fprintf_json(void *out, const char *fmt, ...)
+ 	char *s;
+ 
+ 	va_start(ap, fmt);
++	if (vasprintf(&s, fmt, ap) < 0)
++		return -1;
++	va_end(ap);
++
+ 	if (!oper_count) {
+ 		int i;
+ 
+-		s = va_arg(ap, char *);
+-
+ 		/* Strip trailing spaces */
+ 		i = strlen(s) - 1;
+ 		while (s[i] == ' ')
+@@ -61,11 +65,10 @@ static int fprintf_json(void *out, const char *fmt, ...)
+ 	} else if (!strcmp(fmt, ",")) {
+ 		   /* Skip */
+ 	} else {
+-		s = va_arg(ap, char *);
+ 		jsonw_string(json_wtr, s);
+ 		oper_count++;
+ 	}
+-	va_end(ap);
++	free(s);
+ 	return 0;
+ }
+ 
+diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
+index 929c8e537a14..f6ce794c0f36 100644
+--- a/tools/include/uapi/linux/bpf.h
++++ b/tools/include/uapi/linux/bpf.h
+@@ -2869,6 +2869,7 @@ struct bpf_prog_info {
+ 	char name[BPF_OBJ_NAME_LEN];
+ 	__u32 ifindex;
+ 	__u32 gpl_compatible:1;
++	__u32 :31; /* alignment pad */
+ 	__u64 netns_dev;
+ 	__u64 netns_ino;
+ 	__u32 nr_jited_ksyms;
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 11c25d9ea431..43dc8a8e9105 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -2897,10 +2897,7 @@ int bpf_prog_load(const char *file, enum bpf_prog_type type,
+ int bpf_prog_load_xattr(const struct bpf_prog_load_attr *attr,
+ 			struct bpf_object **pobj, int *prog_fd)
+ {
+-	struct bpf_object_open_attr open_attr = {
+-		.file		= attr->file,
+-		.prog_type	= attr->prog_type,
+-	};
++	struct bpf_object_open_attr open_attr = {};
+ 	struct bpf_program *prog, *first_prog = NULL;
+ 	enum bpf_attach_type expected_attach_type;
+ 	enum bpf_prog_type prog_type;
+@@ -2913,6 +2910,9 @@ int bpf_prog_load_xattr(const struct bpf_prog_load_attr *attr,
+ 	if (!attr->file)
+ 		return -EINVAL;
+ 
++	open_attr.file = attr->file;
++	open_attr.prog_type = attr->prog_type;
++
+ 	obj = bpf_object__open_xattr(&open_attr);
+ 	if (IS_ERR_OR_NULL(obj))
+ 		return -ENOENT;
+diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
+index af5f310ecca1..4ecd33ff46ec 100644
+--- a/tools/lib/bpf/xsk.c
++++ b/tools/lib/bpf/xsk.c
+@@ -336,7 +336,8 @@ static int xsk_get_max_queues(struct xsk_socket *xsk)
+ 
+ 	channels.cmd = ETHTOOL_GCHANNELS;
+ 	ifr.ifr_data = (void *)&channels;
+-	strncpy(ifr.ifr_name, xsk->ifname, IFNAMSIZ);
++	strncpy(ifr.ifr_name, xsk->ifname, IFNAMSIZ - 1);
++	ifr.ifr_name[IFNAMSIZ - 1] = '\0';
+ 	err = ioctl(fd, SIOCETHTOOL, &ifr);
+ 	if (err && errno != EOPNOTSUPP) {
+ 		ret = -errno;
+@@ -559,7 +560,8 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
+ 		err = -errno;
+ 		goto out_socket;
+ 	}
+-	strncpy(xsk->ifname, ifname, IFNAMSIZ);
++	strncpy(xsk->ifname, ifname, IFNAMSIZ - 1);
++	xsk->ifname[IFNAMSIZ - 1] = '\0';
+ 
+ 	err = xsk_set_xdp_socket_config(&xsk->config, usr_config);
+ 	if (err)
+diff --git a/tools/perf/arch/arm/util/cs-etm.c b/tools/perf/arch/arm/util/cs-etm.c
+index 911426721170..0a278bbcaba6 100644
+--- a/tools/perf/arch/arm/util/cs-etm.c
++++ b/tools/perf/arch/arm/util/cs-etm.c
+@@ -31,6 +31,8 @@ struct cs_etm_recording {
+ 	struct auxtrace_record	itr;
+ 	struct perf_pmu		*cs_etm_pmu;
+ 	struct perf_evlist	*evlist;
++	int			wrapped_cnt;
++	bool			*wrapped;
+ 	bool			snapshot_mode;
+ 	size_t			snapshot_size;
+ };
+@@ -536,16 +538,131 @@ static int cs_etm_info_fill(struct auxtrace_record *itr,
+ 	return 0;
+ }
+ 
+-static int cs_etm_find_snapshot(struct auxtrace_record *itr __maybe_unused,
++static int cs_etm_alloc_wrapped_array(struct cs_etm_recording *ptr, int idx)
++{
++	bool *wrapped;
++	int cnt = ptr->wrapped_cnt;
++
++	/* Make @ptr->wrapped as big as @idx */
++	while (cnt <= idx)
++		cnt++;
++
++	/*
++	 * Free'ed in cs_etm_recording_free().  Using realloc() to avoid
++	 * cross compilation problems where the host's system supports
++	 * reallocarray() but not the target.
++	 */
++	wrapped = realloc(ptr->wrapped, cnt * sizeof(bool));
++	if (!wrapped)
++		return -ENOMEM;
++
++	wrapped[cnt - 1] = false;
++	ptr->wrapped_cnt = cnt;
++	ptr->wrapped = wrapped;
++
++	return 0;
++}
++
++static bool cs_etm_buffer_has_wrapped(unsigned char *buffer,
++				      size_t buffer_size, u64 head)
++{
++	u64 i, watermark;
++	u64 *buf = (u64 *)buffer;
++	size_t buf_size = buffer_size;
++
++	/*
++	 * We want to look at the very last 512 bytes (chosen arbitrarily) in
++	 * the ring buffer.
++	 */
++	watermark = buf_size - 512;
++
++	/*
++	 * @head is continuously increasing - if its value is equal to or greater
++	 * than the size of the ring buffer, it has wrapped around.
++	 */
++	if (head >= buffer_size)
++		return true;
++
++	/*
++	 * The value of @head is somewhere within the size of the ring buffer.
++	 * This can mean that there hasn't been enough data to fill the ring
++	 * buffer yet, or the trace time was so long that @head has numerically
++	 * wrapped around.  To find out, we need to check if we have data at the
++	 * very end of the ring buffer.  We can reliably do this because mmap'ed
++	 * pages are zeroed out and there is a fresh mapping with every new
++	 * session.
++	 */
++
++	/* @head is less than 512 byte from the end of the ring buffer */
++	if (head > watermark)
++		watermark = head;
++
++	/*
++	 * Speed things up by using 64 bit transactions (see "u64 *buf" above)
++	 */
++	watermark >>= 3;
++	buf_size >>= 3;
++
++	/*
++	 * If we find trace data at the end of the ring buffer, @head has
++	 * been there and has numerically wrapped around at least once.
++	 */
++	for (i = watermark; i < buf_size; i++)
++		if (buf[i])
++			return true;
++
++	return false;
++}
++
++static int cs_etm_find_snapshot(struct auxtrace_record *itr,
+ 				int idx, struct auxtrace_mmap *mm,
+-				unsigned char *data __maybe_unused,
++				unsigned char *data,
+ 				u64 *head, u64 *old)
+ {
++	int err;
++	bool wrapped;
++	struct cs_etm_recording *ptr =
++			container_of(itr, struct cs_etm_recording, itr);
++
++	/*
++	 * Allocate memory to keep track of wrapping if this is the first
++	 * time we deal with this *mm.
++	 */
++	if (idx >= ptr->wrapped_cnt) {
++		err = cs_etm_alloc_wrapped_array(ptr, idx);
++		if (err)
++			return err;
++	}
++
++	/*
++	 * Check to see if *head has wrapped around.  If it hasn't, only the
++	 * amount of data between *head and *old is snapshot'ed to avoid
++	 * bloating the perf.data file with zeros.  But as soon as *head has
++	 * wrapped around, the entire size of the AUX ring buffer is taken.
++	 */
++	wrapped = ptr->wrapped[idx];
++	if (!wrapped && cs_etm_buffer_has_wrapped(data, mm->len, *head)) {
++		wrapped = true;
++		ptr->wrapped[idx] = true;
++	}
++
+ 	pr_debug3("%s: mmap index %d old head %zu new head %zu size %zu\n",
+ 		  __func__, idx, (size_t)*old, (size_t)*head, mm->len);
+ 
+-	*old = *head;
+-	*head += mm->len;
++	/* No wrap has occurred, we can just use *head and *old. */
++	if (!wrapped)
++		return 0;
++
++	/*
++	 * *head has wrapped around - adjust *head and *old to pick up the
++	 * entire content of the AUX buffer.
++	 */
++	if (*head >= mm->len) {
++		*old = *head - mm->len;
++	} else {
++		*head += mm->len;
++		*old = *head - mm->len;
++	}
+ 
+ 	return 0;
+ }
+@@ -586,6 +703,8 @@ static void cs_etm_recording_free(struct auxtrace_record *itr)
+ {
+ 	struct cs_etm_recording *ptr =
+ 			container_of(itr, struct cs_etm_recording, itr);
++
++	zfree(&ptr->wrapped);
+ 	free(ptr);
+ }
+ 
+diff --git a/tools/perf/jvmti/libjvmti.c b/tools/perf/jvmti/libjvmti.c
+index aea7b1fe85aa..c441a34cb1c0 100644
+--- a/tools/perf/jvmti/libjvmti.c
++++ b/tools/perf/jvmti/libjvmti.c
+@@ -1,5 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <linux/compiler.h>
++#include <linux/string.h>
+ #include <sys/types.h>
+ #include <stdio.h>
+ #include <string.h>
+@@ -162,8 +163,7 @@ copy_class_filename(const char * class_sign, const char * file_name, char * resu
+ 		result[i] = '\0';
+ 	} else {
+ 		/* fallback case */
+-		size_t file_name_len = strlen(file_name);
+-		strncpy(result, file_name, file_name_len < max_length ? file_name_len : max_length);
++		strlcpy(result, file_name, max_length);
+ 	}
+ }
+ 
+diff --git a/tools/perf/perf.h b/tools/perf/perf.h
+index c59743def8d3..b86ecc7afdd7 100644
+--- a/tools/perf/perf.h
++++ b/tools/perf/perf.h
+@@ -26,7 +26,7 @@ static inline unsigned long long rdclock(void)
+ }
+ 
+ #ifndef MAX_NR_CPUS
+-#define MAX_NR_CPUS			1024
++#define MAX_NR_CPUS			2048
+ #endif
+ 
+ extern const char *input_name;
+diff --git a/tools/perf/tests/parse-events.c b/tools/perf/tests/parse-events.c
+index 4a69c07f4101..8f3c80e13584 100644
+--- a/tools/perf/tests/parse-events.c
++++ b/tools/perf/tests/parse-events.c
+@@ -18,6 +18,32 @@
+ #define PERF_TP_SAMPLE_TYPE (PERF_SAMPLE_RAW | PERF_SAMPLE_TIME | \
+ 			     PERF_SAMPLE_CPU | PERF_SAMPLE_PERIOD)
+ 
++#if defined(__s390x__)
++/* Return true if kvm module is available and loaded. Test this
++ * and return success when trace point kvm_s390_create_vm
++ * exists. Otherwise this test always fails.
++ */
++static bool kvm_s390_create_vm_valid(void)
++{
++	char *eventfile;
++	bool rc = false;
++
++	eventfile = get_events_file("kvm-s390");
++
++	if (eventfile) {
++		DIR *mydir = opendir(eventfile);
++
++		if (mydir) {
++			rc = true;
++			closedir(mydir);
++		}
++		put_events_file(eventfile);
++	}
++
++	return rc;
++}
++#endif
++
+ static int test__checkevent_tracepoint(struct perf_evlist *evlist)
+ {
+ 	struct perf_evsel *evsel = perf_evlist__first(evlist);
+@@ -1642,6 +1668,7 @@ static struct evlist_test test__events[] = {
+ 	{
+ 		.name  = "kvm-s390:kvm_s390_create_vm",
+ 		.check = test__checkevent_tracepoint,
++		.valid = kvm_s390_create_vm_valid,
+ 		.id    = 100,
+ 	},
+ #endif
+diff --git a/tools/perf/tests/shell/record+probe_libc_inet_pton.sh b/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
+index 61c9f8fc6fa1..58a99a292930 100755
+--- a/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
++++ b/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
+@@ -44,7 +44,7 @@ trace_libc_inet_pton_backtrace() {
+ 		eventattr='max-stack=4'
+ 		echo "gaih_inet.*\+0x[[:xdigit:]]+[[:space:]]\($libc\)$" >> $expected
+ 		echo "getaddrinfo\+0x[[:xdigit:]]+[[:space:]]\($libc\)$" >> $expected
+-		echo ".*\+0x[[:xdigit:]]+[[:space:]]\(.*/bin/ping.*\)$" >> $expected
++		echo ".*(\+0x[[:xdigit:]]+|\[unknown\])[[:space:]]\(.*/bin/ping.*\)$" >> $expected
+ 		;;
+ 	*)
+ 		eventattr='max-stack=3'
+diff --git a/tools/perf/ui/browsers/annotate.c b/tools/perf/ui/browsers/annotate.c
+index 98d934a36d86..b0d089a95dac 100644
+--- a/tools/perf/ui/browsers/annotate.c
++++ b/tools/perf/ui/browsers/annotate.c
+@@ -97,11 +97,12 @@ static void annotate_browser__write(struct ui_browser *browser, void *entry, int
+ 	struct annotate_browser *ab = container_of(browser, struct annotate_browser, b);
+ 	struct annotation *notes = browser__annotation(browser);
+ 	struct annotation_line *al = list_entry(entry, struct annotation_line, node);
++	const bool is_current_entry = ui_browser__is_current_entry(browser, row);
+ 	struct annotation_write_ops ops = {
+ 		.first_line		 = row == 0,
+-		.current_entry		 = ui_browser__is_current_entry(browser, row),
++		.current_entry		 = is_current_entry,
+ 		.change_color		 = (!notes->options->hide_src_code &&
+-					    (!ops.current_entry ||
++					    (!is_current_entry ||
+ 					     (browser->use_navkeypressed &&
+ 					      !browser->navkeypressed))),
+ 		.width			 = browser->width,
+diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
+index 09762985c713..0c43c5a0d9d9 100644
+--- a/tools/perf/util/annotate.c
++++ b/tools/perf/util/annotate.c
+@@ -932,9 +932,8 @@ static int symbol__inc_addr_samples(struct symbol *sym, struct map *map,
+ 	if (sym == NULL)
+ 		return 0;
+ 	src = symbol__hists(sym, evsel->evlist->nr_entries);
+-	if (src == NULL)
+-		return -ENOMEM;
+-	return __symbol__inc_addr_samples(sym, map, src, evsel->idx, addr, sample);
++	return (src) ?  __symbol__inc_addr_samples(sym, map, src, evsel->idx,
++						   addr, sample) : 0;
+ }
+ 
+ static int symbol__account_cycles(u64 addr, u64 start,
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index 966360844fff..7ca79cfe1aea 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -584,6 +584,9 @@ const char *perf_evsel__name(struct perf_evsel *evsel)
+ {
+ 	char bf[128];
+ 
++	if (!evsel)
++		goto out_unknown;
++
+ 	if (evsel->name)
+ 		return evsel->name;
+ 
+@@ -620,7 +623,10 @@ const char *perf_evsel__name(struct perf_evsel *evsel)
+ 
+ 	evsel->name = strdup(bf);
+ 
+-	return evsel->name ?: "unknown";
++	if (evsel->name)
++		return evsel->name;
++out_unknown:
++	return "unknown";
+ }
+ 
+ const char *perf_evsel__group_name(struct perf_evsel *evsel)
+diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
+index 682e3d524d3c..df608cfaa03c 100644
+--- a/tools/perf/util/header.c
++++ b/tools/perf/util/header.c
+@@ -1100,7 +1100,7 @@ static int build_caches(struct cpu_cache_level caches[], u32 size, u32 *cntp)
+ 	return 0;
+ }
+ 
+-#define MAX_CACHES 2000
++#define MAX_CACHES (MAX_NR_CPUS * 4)
+ 
+ static int write_cache(struct feat_fd *ff,
+ 		       struct perf_evlist *evlist __maybe_unused)
+diff --git a/tools/perf/util/metricgroup.c b/tools/perf/util/metricgroup.c
+index b8d864ed4afe..a48895c4324a 100644
+--- a/tools/perf/util/metricgroup.c
++++ b/tools/perf/util/metricgroup.c
+@@ -94,26 +94,49 @@ struct egroup {
+ 	const char *metric_expr;
+ };
+ 
+-static struct perf_evsel *find_evsel(struct perf_evlist *perf_evlist,
+-				     const char **ids,
+-				     int idnum,
+-				     struct perf_evsel **metric_events)
++static bool record_evsel(int *ind, struct perf_evsel **start,
++			 int idnum,
++			 struct perf_evsel **metric_events,
++			 struct perf_evsel *ev)
++{
++	metric_events[*ind] = ev;
++	if (*ind == 0)
++		*start = ev;
++	if (++*ind == idnum) {
++		metric_events[*ind] = NULL;
++		return true;
++	}
++	return false;
++}
++
++static struct perf_evsel *find_evsel_group(struct perf_evlist *perf_evlist,
++					   const char **ids,
++					   int idnum,
++					   struct perf_evsel **metric_events)
+ {
+ 	struct perf_evsel *ev, *start = NULL;
+ 	int ind = 0;
+ 
+ 	evlist__for_each_entry (perf_evlist, ev) {
++		if (ev->collect_stat)
++			continue;
+ 		if (!strcmp(ev->name, ids[ind])) {
+-			metric_events[ind] = ev;
+-			if (ind == 0)
+-				start = ev;
+-			if (++ind == idnum) {
+-				metric_events[ind] = NULL;
++			if (record_evsel(&ind, &start, idnum,
++					 metric_events, ev))
+ 				return start;
+-			}
+ 		} else {
++			/*
++			 * We saw some other event that is not
++			 * in our list of events. Discard
++			 * the whole match and start again.
++			 */
+ 			ind = 0;
+ 			start = NULL;
++			if (!strcmp(ev->name, ids[ind])) {
++				if (record_evsel(&ind, &start, idnum,
++						 metric_events, ev))
++					return start;
++			}
+ 		}
+ 	}
+ 	/*
+@@ -143,8 +166,8 @@ static int metricgroup__setup_events(struct list_head *groups,
+ 			ret = -ENOMEM;
+ 			break;
+ 		}
+-		evsel = find_evsel(perf_evlist, eg->ids, eg->idnum,
+-				   metric_events);
++		evsel = find_evsel_group(perf_evlist, eg->ids, eg->idnum,
++					 metric_events);
+ 		if (!evsel) {
+ 			pr_debug("Cannot resolve %s: %s\n",
+ 					eg->metric_name, eg->metric_expr);
+diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
+index 6d043c78f3c2..9c940242dcbe 100644
+--- a/tools/perf/util/stat-display.c
++++ b/tools/perf/util/stat-display.c
+@@ -539,7 +539,8 @@ static void collect_all_aliases(struct perf_stat_config *config, struct perf_evs
+ 		    alias->scale != counter->scale ||
+ 		    alias->cgrp != counter->cgrp ||
+ 		    strcmp(alias->unit, counter->unit) ||
+-		    perf_evsel__is_clock(alias) != perf_evsel__is_clock(counter))
++		    perf_evsel__is_clock(alias) != perf_evsel__is_clock(counter) ||
++		    !strcmp(alias->pmu_name, counter->pmu_name))
+ 			break;
+ 		alias->merged_stat = true;
+ 		cb(config, alias, data, false);
+diff --git a/tools/perf/util/stat-shadow.c b/tools/perf/util/stat-shadow.c
+index 83d8094be4fe..0ef98e991ade 100644
+--- a/tools/perf/util/stat-shadow.c
++++ b/tools/perf/util/stat-shadow.c
+@@ -303,7 +303,7 @@ static struct perf_evsel *perf_stat__find_event(struct perf_evlist *evsel_list,
+ 	struct perf_evsel *c2;
+ 
+ 	evlist__for_each_entry (evsel_list, c2) {
+-		if (!strcasecmp(c2->name, name))
++		if (!strcasecmp(c2->name, name) && !c2->collect_stat)
+ 			return c2;
+ 	}
+ 	return NULL;
+@@ -342,7 +342,8 @@ void perf_stat__collect_metric_expr(struct perf_evlist *evsel_list)
+ 			if (leader) {
+ 				/* Search in group */
+ 				for_each_group_member (oc, leader) {
+-					if (!strcasecmp(oc->name, metric_names[i])) {
++					if (!strcasecmp(oc->name, metric_names[i]) &&
++						!oc->collect_stat) {
+ 						found = true;
+ 						break;
+ 					}
+@@ -722,6 +723,7 @@ static void generic_metric(struct perf_stat_config *config,
+ 	double ratio;
+ 	int i;
+ 	void *ctxp = out->ctx;
++	char *n, *pn;
+ 
+ 	expr__ctx_init(&pctx);
+ 	expr__add_id(&pctx, name, avg);
+@@ -741,7 +743,19 @@ static void generic_metric(struct perf_stat_config *config,
+ 			stats = &v->stats;
+ 			scale = 1.0;
+ 		}
+-		expr__add_id(&pctx, metric_events[i]->name, avg_stats(stats)*scale);
++
++		n = strdup(metric_events[i]->name);
++		if (!n)
++			return;
++		/*
++		 * This display code with --no-merge adds [cpu] postfixes.
++		 * These are not supported by the parser. Remove everything
++		 * after the space.
++		 */
++		pn = strchr(n, ' ');
++		if (pn)
++			*pn = 0;
++		expr__add_id(&pctx, n, avg_stats(stats)*scale);
+ 	}
+ 	if (!metric_events[i]) {
+ 		const char *p = metric_expr;
+@@ -758,6 +772,9 @@ static void generic_metric(struct perf_stat_config *config,
+ 				     (metric_name ? metric_name : name) : "", 0);
+ 	} else
+ 		print_metric(config, ctxp, NULL, NULL, "", 0);
++
++	for (i = 1; i < pctx.num_ids; i++)
++		free((void *)pctx.ids[i].name);
+ }
+ 
+ void perf_stat__print_shadow_stats(struct perf_stat_config *config,
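The generic_metric() change above duplicates each event name, cuts it at the first space so the "[cpu]" postfix added by --no-merge never reaches the expression parser, and frees the copies once evaluation is done (the loop over pctx.ids at the end). Reduced to its string handling, the pattern looks roughly like this sketch (plain C; the helper name is hypothetical, not perf's):

    #include <stdlib.h>
    #include <string.h>

    /* Return a parser-safe copy of an event name: everything from the
     * first space onward (e.g. a " [cpu3]" postfix) is cut off.
     * The caller must free() the result. */
    static char *name_for_parser(const char *event_name)
    {
        char *n = strdup(event_name);
        char *pn;

        if (!n)
            return NULL;
        pn = strchr(n, ' ');
        if (pn)
            *pn = '\0';
        return n;
    }

In generic_metric() itself the copy is handed to expr__add_id() and released only after the metric has been printed.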
+diff --git a/tools/power/cpupower/utils/cpufreq-set.c b/tools/power/cpupower/utils/cpufreq-set.c
+index 1eef0aed6423..08a405593a79 100644
+--- a/tools/power/cpupower/utils/cpufreq-set.c
++++ b/tools/power/cpupower/utils/cpufreq-set.c
+@@ -306,6 +306,8 @@ int cmd_freq_set(int argc, char **argv)
+ 				bitmask_setbit(cpus_chosen, cpus->cpu);
+ 				cpus = cpus->next;
+ 			}
++			/* Set the last cpu in the related cpus list */
++			bitmask_setbit(cpus_chosen, cpus->cpu);
+ 			cpufreq_put_related_cpus(cpus);
+ 		}
+ 	}
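The cpufreq-set hunk fixes a textbook off-by-one in singly-linked-list traversal: the preceding loop stops before handling the last element of the related-CPUs list, so the added bitmask_setbit() after the loop covers it. A generic illustration of the hazard (a self-contained sketch, not cpupower code):

    struct node {
        int value;
        struct node *next;
    };

    /* Without the trailing call, the loop would visit every node except
     * the last: the p->next condition stops the body from ever running
     * on the tail element. */
    static void visit_all(struct node *head, void (*visit)(int))
    {
        struct node *p = head;

        if (!p)
            return;
        while (p->next) {
            visit(p->value);
            p = p->next;
        }
        visit(p->value); /* cover the final node explicitly */
    }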
+diff --git a/tools/testing/selftests/bpf/progs/test_lwt_seg6local.c b/tools/testing/selftests/bpf/progs/test_lwt_seg6local.c
+index 0575751bc1bc..e2f6ed0a583d 100644
+--- a/tools/testing/selftests/bpf/progs/test_lwt_seg6local.c
++++ b/tools/testing/selftests/bpf/progs/test_lwt_seg6local.c
+@@ -61,7 +61,7 @@ struct sr6_tlv_t {
+ 	unsigned char value[0];
+ } BPF_PACKET_HEADER;
+ 
+-__attribute__((always_inline)) struct ip6_srh_t *get_srh(struct __sk_buff *skb)
++static __always_inline struct ip6_srh_t *get_srh(struct __sk_buff *skb)
+ {
+ 	void *cursor, *data_end;
+ 	struct ip6_srh_t *srh;
+@@ -95,7 +95,7 @@ __attribute__((always_inline)) struct ip6_srh_t *get_srh(struct __sk_buff *skb)
+ 	return srh;
+ }
+ 
+-__attribute__((always_inline))
++static __always_inline
+ int update_tlv_pad(struct __sk_buff *skb, uint32_t new_pad,
+ 		   uint32_t old_pad, uint32_t pad_off)
+ {
+@@ -125,7 +125,7 @@ int update_tlv_pad(struct __sk_buff *skb, uint32_t new_pad,
+ 	return 0;
+ }
+ 
+-__attribute__((always_inline))
++static __always_inline
+ int is_valid_tlv_boundary(struct __sk_buff *skb, struct ip6_srh_t *srh,
+ 			  uint32_t *tlv_off, uint32_t *pad_size,
+ 			  uint32_t *pad_off)
+@@ -184,7 +184,7 @@ int is_valid_tlv_boundary(struct __sk_buff *skb, struct ip6_srh_t *srh,
+ 	return 0;
+ }
+ 
+-__attribute__((always_inline))
++static __always_inline
+ int add_tlv(struct __sk_buff *skb, struct ip6_srh_t *srh, uint32_t tlv_off,
+ 	    struct sr6_tlv_t *itlv, uint8_t tlv_size)
+ {
+@@ -228,7 +228,7 @@ int add_tlv(struct __sk_buff *skb, struct ip6_srh_t *srh, uint32_t tlv_off,
+ 	return update_tlv_pad(skb, new_pad, pad_size, pad_off);
+ }
+ 
+-__attribute__((always_inline))
++static __always_inline
+ int delete_tlv(struct __sk_buff *skb, struct ip6_srh_t *srh,
+ 	       uint32_t tlv_off)
+ {
+@@ -266,7 +266,7 @@ int delete_tlv(struct __sk_buff *skb, struct ip6_srh_t *srh,
+ 	return update_tlv_pad(skb, new_pad, pad_size, pad_off);
+ }
+ 
+-__attribute__((always_inline))
++static __always_inline
+ int has_egr_tlv(struct __sk_buff *skb, struct ip6_srh_t *srh)
+ {
+ 	int tlv_offset = sizeof(struct ip6_t) + sizeof(struct ip6_srh_t) +



* [gentoo-commits] proj/linux-patches:5.1 commit in: /
@ 2019-07-28 16:25 Mike Pagano
  0 siblings, 0 replies; 23+ messages in thread
From: Mike Pagano @ 2019-07-28 16:25 UTC (permalink / raw
  To: gentoo-commits

commit:     077ba01d4280501028651068c5276ca67f0a827f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jul 28 16:25:12 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jul 28 16:25:12 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=077ba01d

Linux patch 5.1.21

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1020_linux-5.1.21.patch | 2779 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2783 insertions(+)

diff --git a/0000_README b/0000_README
index 46f6f44..c48024b 100644
--- a/0000_README
+++ b/0000_README
@@ -123,6 +123,10 @@ Patch:  1019_linux-5.1.20.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.1.20
 
+Patch:  1020_linux-5.1.21.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.1.21
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1020_linux-5.1.21.patch b/1020_linux-5.1.21.patch
new file mode 100644
index 0000000..bb1da07
--- /dev/null
+++ b/1020_linux-5.1.21.patch
@@ -0,0 +1,2779 @@
+diff --git a/Makefile b/Makefile
+index ef6daeb823b9..254b5d831328 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 20
++SUBLEVEL = 21
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/mips/jz4740/board-qi_lb60.c b/arch/mips/jz4740/board-qi_lb60.c
+index 6718efb400f4..e6e86fdfd4a7 100644
+--- a/arch/mips/jz4740/board-qi_lb60.c
++++ b/arch/mips/jz4740/board-qi_lb60.c
+@@ -469,27 +469,27 @@ static unsigned long pin_cfg_bias_disable[] = {
+ static struct pinctrl_map pin_map[] __initdata = {
+ 	/* NAND pin configuration */
+ 	PIN_MAP_MUX_GROUP_DEFAULT("jz4740-nand",
+-			"10010000.jz4740-pinctrl", "nand", "nand-cs1"),
++			"10010000.pin-controller", "nand-cs1", "nand"),
+ 
+ 	/* fbdev pin configuration */
+ 	PIN_MAP_MUX_GROUP("jz4740-fb", PINCTRL_STATE_DEFAULT,
+-			"10010000.jz4740-pinctrl", "lcd", "lcd-8bit"),
++			"10010000.pin-controller", "lcd-8bit", "lcd"),
+ 	PIN_MAP_MUX_GROUP("jz4740-fb", PINCTRL_STATE_SLEEP,
+-			"10010000.jz4740-pinctrl", "lcd", "lcd-no-pins"),
++			"10010000.pin-controller", "lcd-no-pins", "lcd"),
+ 
+ 	/* MMC pin configuration */
+ 	PIN_MAP_MUX_GROUP_DEFAULT("jz4740-mmc.0",
+-			"10010000.jz4740-pinctrl", "mmc", "mmc-1bit"),
++			"10010000.pin-controller", "mmc-1bit", "mmc"),
+ 	PIN_MAP_MUX_GROUP_DEFAULT("jz4740-mmc.0",
+-			"10010000.jz4740-pinctrl", "mmc", "mmc-4bit"),
++			"10010000.pin-controller", "mmc-4bit", "mmc"),
+ 	PIN_MAP_CONFIGS_PIN_DEFAULT("jz4740-mmc.0",
+-			"10010000.jz4740-pinctrl", "PD0", pin_cfg_bias_disable),
++			"10010000.pin-controller", "PD0", pin_cfg_bias_disable),
+ 	PIN_MAP_CONFIGS_PIN_DEFAULT("jz4740-mmc.0",
+-			"10010000.jz4740-pinctrl", "PD2", pin_cfg_bias_disable),
++			"10010000.pin-controller", "PD2", pin_cfg_bias_disable),
+ 
+ 	/* PWM pin configuration */
+ 	PIN_MAP_MUX_GROUP_DEFAULT("jz4740-pwm",
+-			"10010000.jz4740-pinctrl", "pwm4", "pwm4"),
++			"10010000.pin-controller", "pwm4", "pwm4"),
+ };
+ 
+ 
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index c79abe7ca093..564729a4a25c 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -609,15 +609,16 @@ struct kvm_vcpu_arch {
+ 
+ 	/*
+ 	 * QEMU userspace and the guest each have their own FPU state.
+-	 * In vcpu_run, we switch between the user, maintained in the
+-	 * task_struct struct, and guest FPU contexts. While running a VCPU,
+-	 * the VCPU thread will have the guest FPU context.
++	 * In vcpu_run, we switch between the user and guest FPU contexts.
++	 * While running a VCPU, the VCPU thread will have the guest FPU
++	 * context.
+ 	 *
+ 	 * Note that while the PKRU state lives inside the fpu registers,
+ 	 * it is switched out separately at VMENTER and VMEXIT time. The
+ 	 * "guest_fpu" state here contains the guest FPU context, with the
+ 	 * host PRKU bits.
+ 	 */
++	struct fpu user_fpu;
+ 	struct fpu *guest_fpu;
+ 
+ 	u64 xcr0;
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index f83e79a4d0b2..5ce6bd1eb43d 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -184,6 +184,7 @@ static void vmx_disable_shadow_vmcs(struct vcpu_vmx *vmx)
+ {
+ 	vmcs_clear_bits(SECONDARY_VM_EXEC_CONTROL, SECONDARY_EXEC_SHADOW_VMCS);
+ 	vmcs_write64(VMCS_LINK_POINTER, -1ull);
++	vmx->nested.need_vmcs12_sync = false;
+ }
+ 
+ static inline void nested_release_evmcs(struct kvm_vcpu *vcpu)
+@@ -211,6 +212,8 @@ static void free_nested(struct kvm_vcpu *vcpu)
+ 	if (!vmx->nested.vmxon && !vmx->nested.smm.vmxon)
+ 		return;
+ 
++	kvm_clear_request(KVM_REQ_GET_VMCS12_PAGES, vcpu);
++
+ 	vmx->nested.vmxon = false;
+ 	vmx->nested.smm.vmxon = false;
+ 	free_vpid(vmx->nested.vpid02);
+@@ -1328,6 +1331,9 @@ static void copy_shadow_to_vmcs12(struct vcpu_vmx *vmx)
+ 	u64 field_value;
+ 	struct vmcs *shadow_vmcs = vmx->vmcs01.shadow_vmcs;
+ 
++	if (WARN_ON(!shadow_vmcs))
++		return;
++
+ 	preempt_disable();
+ 
+ 	vmcs_load(shadow_vmcs);
+@@ -1366,6 +1372,9 @@ static void copy_vmcs12_to_shadow(struct vcpu_vmx *vmx)
+ 	u64 field_value = 0;
+ 	struct vmcs *shadow_vmcs = vmx->vmcs01.shadow_vmcs;
+ 
++	if (WARN_ON(!shadow_vmcs))
++		return;
++
+ 	vmcs_load(shadow_vmcs);
+ 
+ 	for (q = 0; q < ARRAY_SIZE(fields); q++) {
+@@ -4336,7 +4345,6 @@ static inline void nested_release_vmcs12(struct kvm_vcpu *vcpu)
+ 		/* copy to memory all shadowed fields in case
+ 		   they were modified */
+ 		copy_shadow_to_vmcs12(vmx);
+-		vmx->nested.need_vmcs12_sync = false;
+ 		vmx_disable_shadow_vmcs(vmx);
+ 	}
+ 	vmx->nested.posted_intr_nv = -1;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 37028ea85d4c..aede8fa2ea9a 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -8172,7 +8172,7 @@ static int complete_emulated_mmio(struct kvm_vcpu *vcpu)
+ static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
+ {
+ 	preempt_disable();
+-	copy_fpregs_to_fpstate(&current->thread.fpu);
++	copy_fpregs_to_fpstate(&vcpu->arch.user_fpu);
+ 	/* PKRU is separately restored in kvm_x86_ops->run.  */
+ 	__copy_kernel_to_fpregs(&vcpu->arch.guest_fpu->state,
+ 				~XFEATURE_MASK_PKRU);
+@@ -8185,7 +8185,7 @@ static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
+ {
+ 	preempt_disable();
+ 	copy_fpregs_to_fpstate(vcpu->arch.guest_fpu);
+-	copy_kernel_to_fpregs(&current->thread.fpu.state);
++	copy_kernel_to_fpregs(&vcpu->arch.user_fpu.state);
+ 	preempt_enable();
+ 	++vcpu->stat.fpu_reload;
+ 	trace_kvm_fpu(0);
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index 6ea455b62cb4..6eff3c4712f4 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -13,6 +13,9 @@
+ #include <linux/rbtree.h>
+ #include <linux/blkdev.h>
+ #include <linux/blk-mq.h>
++#include <linux/mm.h>
++#include <linux/vmalloc.h>
++#include <linux/sched/mm.h>
+ 
+ #include "blk.h"
+ 
+@@ -372,22 +375,25 @@ static inline unsigned long *blk_alloc_zone_bitmap(int node,
+  * Allocate an array of struct blk_zone to get nr_zones zone information.
+  * The allocated array may be smaller than nr_zones.
+  */
+-static struct blk_zone *blk_alloc_zones(int node, unsigned int *nr_zones)
++static struct blk_zone *blk_alloc_zones(unsigned int *nr_zones)
+ {
+-	size_t size = *nr_zones * sizeof(struct blk_zone);
+-	struct page *page;
+-	int order;
+-
+-	for (order = get_order(size); order >= 0; order--) {
+-		page = alloc_pages_node(node, GFP_NOIO | __GFP_ZERO, order);
+-		if (page) {
+-			*nr_zones = min_t(unsigned int, *nr_zones,
+-				(PAGE_SIZE << order) / sizeof(struct blk_zone));
+-			return page_address(page);
+-		}
++	struct blk_zone *zones;
++	size_t nrz = min(*nr_zones, BLK_ZONED_REPORT_MAX_ZONES);
++
++	/*
++	 * GFP_KERNEL here is meaningless as the caller task context has
++	 * the PF_MEMALLOC_NOIO flag set in blk_revalidate_disk_zones()
++	 * with memalloc_noio_save().
++	 */
++	zones = kvcalloc(nrz, sizeof(struct blk_zone), GFP_KERNEL);
++	if (!zones) {
++		*nr_zones = 0;
++		return NULL;
+ 	}
+ 
+-	return NULL;
++	*nr_zones = nrz;
++
++	return zones;
+ }
+ 
+ void blk_queue_free_zone_bitmaps(struct request_queue *q)
+@@ -414,6 +420,7 @@ int blk_revalidate_disk_zones(struct gendisk *disk)
+ 	unsigned long *seq_zones_wlock = NULL, *seq_zones_bitmap = NULL;
+ 	unsigned int i, rep_nr_zones = 0, z = 0, nrz;
+ 	struct blk_zone *zones = NULL;
++	unsigned int noio_flag;
+ 	sector_t sector = 0;
+ 	int ret = 0;
+ 
+@@ -426,6 +433,12 @@ int blk_revalidate_disk_zones(struct gendisk *disk)
+ 		return 0;
+ 	}
+ 
++	/*
++	 * Ensure that all memory allocations in this context are done as
++	 * if GFP_NOIO was specified.
++	 */
++	noio_flag = memalloc_noio_save();
++
+ 	if (!blk_queue_is_zoned(q) || !nr_zones) {
+ 		nr_zones = 0;
+ 		goto update;
+@@ -442,7 +455,7 @@ int blk_revalidate_disk_zones(struct gendisk *disk)
+ 
+ 	/* Get zone information and initialize seq_zones_bitmap */
+ 	rep_nr_zones = nr_zones;
+-	zones = blk_alloc_zones(q->node, &rep_nr_zones);
++	zones = blk_alloc_zones(&rep_nr_zones);
+ 	if (!zones)
+ 		goto out;
+ 
+@@ -479,8 +492,9 @@ update:
+ 	blk_mq_unfreeze_queue(q);
+ 
+ out:
+-	free_pages((unsigned long)zones,
+-		   get_order(rep_nr_zones * sizeof(struct blk_zone)));
++	memalloc_noio_restore(noio_flag);
++
++	kvfree(zones);
+ 	kfree(seq_zones_wlock);
+ 	kfree(seq_zones_bitmap);
+ 
+diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
+index 7c858020d14b..efd5d09d56ad 100644
+--- a/drivers/dma-buf/dma-buf.c
++++ b/drivers/dma-buf/dma-buf.c
+@@ -1068,6 +1068,7 @@ static int dma_buf_debug_show(struct seq_file *s, void *unused)
+ 				   fence->ops->get_driver_name(fence),
+ 				   fence->ops->get_timeline_name(fence),
+ 				   dma_fence_is_signaled(fence) ? "" : "un");
++			dma_fence_put(fence);
+ 		}
+ 		rcu_read_unlock();
+ 
+diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
+index c1618335ca99..03004a218ec1 100644
+--- a/drivers/dma-buf/reservation.c
++++ b/drivers/dma-buf/reservation.c
+@@ -357,6 +357,10 @@ int reservation_object_get_fences_rcu(struct reservation_object *obj,
+ 					   GFP_NOWAIT | __GFP_NOWARN);
+ 			if (!nshared) {
+ 				rcu_read_unlock();
++
++				dma_fence_put(fence_excl);
++				fence_excl = NULL;
++
+ 				nshared = krealloc(shared, sz, GFP_KERNEL);
+ 				if (nshared) {
+ 					shared = nshared;
+diff --git a/drivers/gpio/gpio-davinci.c b/drivers/gpio/gpio-davinci.c
+index 188b8e5c8e67..be34c672cf25 100644
+--- a/drivers/gpio/gpio-davinci.c
++++ b/drivers/gpio/gpio-davinci.c
+@@ -242,8 +242,9 @@ static int davinci_gpio_probe(struct platform_device *pdev)
+ 	for (i = 0; i < nirq; i++) {
+ 		chips->irqs[i] = platform_get_irq(pdev, i);
+ 		if (chips->irqs[i] < 0) {
+-			dev_info(dev, "IRQ not populated, err = %d\n",
+-				 chips->irqs[i]);
++			if (chips->irqs[i] != -EPROBE_DEFER)
++				dev_info(dev, "IRQ not populated, err = %d\n",
++					 chips->irqs[i]);
+ 			return chips->irqs[i];
+ 		}
+ 	}
+diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
+index 6a3ec575a404..6779853f166b 100644
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -155,6 +155,7 @@ static void of_gpio_flags_quirks(struct device_node *np,
+ 							of_node_full_name(child));
+ 					*flags |= OF_GPIO_ACTIVE_LOW;
+ 				}
++				of_node_put(child);
+ 				break;
+ 			}
+ 		}
+diff --git a/drivers/net/caif/caif_hsi.c b/drivers/net/caif/caif_hsi.c
+index 433a14b9f731..253a1bbe37e8 100644
+--- a/drivers/net/caif/caif_hsi.c
++++ b/drivers/net/caif/caif_hsi.c
+@@ -1455,7 +1455,7 @@ static void __exit cfhsi_exit_module(void)
+ 	rtnl_lock();
+ 	list_for_each_safe(list_node, n, &cfhsi_list) {
+ 		cfhsi = list_entry(list_node, struct cfhsi, list);
+-		unregister_netdev(cfhsi->ndev);
++		unregister_netdevice(cfhsi->ndev);
+ 	}
+ 	rtnl_unlock();
+ }
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index ae750ab9a4d7..5f81d9a3a2a6 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -4910,6 +4910,8 @@ static int mv88e6xxx_probe(struct mdio_device *mdiodev)
+ 		err = PTR_ERR(chip->reset);
+ 		goto out;
+ 	}
++	if (chip->reset)
++		usleep_range(1000, 2000);
+ 
+ 	err = mv88e6xxx_detect(chip);
+ 	if (err)
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+index 78a01880931c..9f07b85091f3 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+@@ -285,6 +285,9 @@ int bnx2x_tx_int(struct bnx2x *bp, struct bnx2x_fp_txdata *txdata)
+ 	hw_cons = le16_to_cpu(*txdata->tx_cons_sb);
+ 	sw_cons = txdata->tx_pkt_cons;
+ 
++	/* Ensure subsequent loads occur after hw_cons */
++	smp_rmb();
++
+ 	while (sw_cons != hw_cons) {
+ 		u16 pkt_cons;
+ 
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index 983245c0867c..2b79ef17e846 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -3086,39 +3086,42 @@ static void bcmgenet_timeout(struct net_device *dev)
+ 	netif_tx_wake_all_queues(dev);
+ }
+ 
+-#define MAX_MC_COUNT	16
++#define MAX_MDF_FILTER	17
+ 
+ static inline void bcmgenet_set_mdf_addr(struct bcmgenet_priv *priv,
+ 					 unsigned char *addr,
+-					 int *i,
+-					 int *mc)
++					 int *i)
+ {
+-	u32 reg;
+-
+ 	bcmgenet_umac_writel(priv, addr[0] << 8 | addr[1],
+ 			     UMAC_MDF_ADDR + (*i * 4));
+ 	bcmgenet_umac_writel(priv, addr[2] << 24 | addr[3] << 16 |
+ 			     addr[4] << 8 | addr[5],
+ 			     UMAC_MDF_ADDR + ((*i + 1) * 4));
+-	reg = bcmgenet_umac_readl(priv, UMAC_MDF_CTRL);
+-	reg |= (1 << (MAX_MC_COUNT - *mc));
+-	bcmgenet_umac_writel(priv, reg, UMAC_MDF_CTRL);
+ 	*i += 2;
+-	(*mc)++;
+ }
+ 
+ static void bcmgenet_set_rx_mode(struct net_device *dev)
+ {
+ 	struct bcmgenet_priv *priv = netdev_priv(dev);
+ 	struct netdev_hw_addr *ha;
+-	int i, mc;
++	int i, nfilter;
+ 	u32 reg;
+ 
+ 	netif_dbg(priv, hw, dev, "%s: %08X\n", __func__, dev->flags);
+ 
+-	/* Promiscuous mode */
++	/* Number of filters needed */
++	nfilter = netdev_uc_count(dev) + netdev_mc_count(dev) + 2;
++
++	/*
++	 * Turn on promiscuous mode for three scenarios
++	 * 1. IFF_PROMISC flag is set
++	 * 2. IFF_ALLMULTI flag is set
++	 * 3. The number of filters needed exceeds the number of filters
++	 *    supported by the hardware.
++	 */
+ 	reg = bcmgenet_umac_readl(priv, UMAC_CMD);
+-	if (dev->flags & IFF_PROMISC) {
++	if ((dev->flags & (IFF_PROMISC | IFF_ALLMULTI)) ||
++	    (nfilter > MAX_MDF_FILTER)) {
+ 		reg |= CMD_PROMISC;
+ 		bcmgenet_umac_writel(priv, reg, UMAC_CMD);
+ 		bcmgenet_umac_writel(priv, 0, UMAC_MDF_CTRL);
+@@ -3128,32 +3131,24 @@ static void bcmgenet_set_rx_mode(struct net_device *dev)
+ 		bcmgenet_umac_writel(priv, reg, UMAC_CMD);
+ 	}
+ 
+-	/* UniMac doesn't support ALLMULTI */
+-	if (dev->flags & IFF_ALLMULTI) {
+-		netdev_warn(dev, "ALLMULTI is not supported\n");
+-		return;
+-	}
+-
+ 	/* update MDF filter */
+ 	i = 0;
+-	mc = 0;
+ 	/* Broadcast */
+-	bcmgenet_set_mdf_addr(priv, dev->broadcast, &i, &mc);
++	bcmgenet_set_mdf_addr(priv, dev->broadcast, &i);
+ 	/* my own address.*/
+-	bcmgenet_set_mdf_addr(priv, dev->dev_addr, &i, &mc);
+-	/* Unicast list*/
+-	if (netdev_uc_count(dev) > (MAX_MC_COUNT - mc))
+-		return;
++	bcmgenet_set_mdf_addr(priv, dev->dev_addr, &i);
+ 
+-	if (!netdev_uc_empty(dev))
+-		netdev_for_each_uc_addr(ha, dev)
+-			bcmgenet_set_mdf_addr(priv, ha->addr, &i, &mc);
+-	/* Multicast */
+-	if (netdev_mc_empty(dev) || netdev_mc_count(dev) >= (MAX_MC_COUNT - mc))
+-		return;
++	/* Unicast */
++	netdev_for_each_uc_addr(ha, dev)
++		bcmgenet_set_mdf_addr(priv, ha->addr, &i);
+ 
++	/* Multicast */
+ 	netdev_for_each_mc_addr(ha, dev)
+-		bcmgenet_set_mdf_addr(priv, ha->addr, &i, &mc);
++		bcmgenet_set_mdf_addr(priv, ha->addr, &i);
++
++	/* Enable filters */
++	reg = GENMASK(MAX_MDF_FILTER - 1, MAX_MDF_FILTER - nfilter);
++	bcmgenet_umac_writel(priv, reg, UMAC_MDF_CTRL);
+ }
+ 
+ /* Set the hardware MAC address. */
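The rx-mode rewrite above first counts the MDF slots it needs (broadcast, own address, then all unicast and multicast entries), falls back to promiscuous mode whenever that count exceeds the 17 available slots, and finally enables exactly the populated slots with a single GENMASK() write instead of per-entry read-modify-write cycles. The mask computation in isolation (a standalone illustration):

    #include <linux/bits.h>
    #include <linux/types.h>

    #define MAX_MDF_FILTER 17

    /* nfilter = 3 yields bits 16..14 set, i.e. the three top-most
     * filter slots -- the same order they were written in above. */
    static u32 mdf_enable_mask(int nfilter)
    {
        return GENMASK(MAX_MDF_FILTER - 1, MAX_MDF_FILTER - nfilter);
    }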
+diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c
+index 8b3495ee2b6e..d097530af78a 100644
+--- a/drivers/net/ethernet/marvell/sky2.c
++++ b/drivers/net/ethernet/marvell/sky2.c
+@@ -4933,6 +4933,13 @@ static const struct dmi_system_id msi_blacklist[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "P-79"),
+ 		},
+ 	},
++	{
++		.ident = "ASUS P6T",
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK Computer INC."),
++			DMI_MATCH(DMI_BOARD_NAME, "P6T"),
++		},
++	},
+ 	{}
+ };
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index a80031b2cfaf..9a1a21a8ae45 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -294,6 +294,7 @@ enum {
+ 	MLX5E_RQ_STATE_ENABLED,
+ 	MLX5E_RQ_STATE_AM,
+ 	MLX5E_RQ_STATE_NO_CSUM_COMPLETE,
++	MLX5E_RQ_STATE_CSUM_FULL, /* cqe_csum_full hw bit is set */
+ };
+ 
+ struct mlx5e_cq {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
+index 476dd97f7f2f..f3d98748b211 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
+@@ -142,22 +142,20 @@ static int mlx5e_tx_reporter_timeout_recover(struct mlx5e_txqsq *sq)
+ {
+ 	struct mlx5_eq_comp *eq = sq->cq.mcq.eq;
+ 	u32 eqe_count;
+-	int ret;
+ 
+ 	netdev_err(sq->channel->netdev, "EQ 0x%x: Cons = 0x%x, irqn = 0x%x\n",
+ 		   eq->core.eqn, eq->core.cons_index, eq->core.irqn);
+ 
+ 	eqe_count = mlx5_eq_poll_irq_disabled(eq);
+-	ret = eqe_count ? false : true;
+ 	if (!eqe_count) {
+ 		clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state);
+-		return ret;
++		return -EIO;
+ 	}
+ 
+ 	netdev_err(sq->channel->netdev, "Recover %d eqes on EQ 0x%x\n",
+ 		   eqe_count, eq->core.eqn);
+ 	sq->channel->stats->eq_rearm++;
+-	return ret;
++	return 0;
+ }
+ 
+ int mlx5e_tx_reporter_timeout(struct mlx5e_txqsq *sq)
+@@ -264,13 +262,13 @@ static int mlx5e_tx_reporter_diagnose(struct devlink_health_reporter *reporter,
+ 
+ 		err = mlx5_core_query_sq_state(priv->mdev, sq->sqn, &state);
+ 		if (err)
+-			break;
++			goto unlock;
+ 
+ 		err = mlx5e_tx_reporter_build_diagnose_output(fmsg, sq->sqn,
+ 							      state,
+ 							      netif_xmit_stopped(sq->txq));
+ 		if (err)
+-			break;
++			goto unlock;
+ 	}
+ 	err = devlink_fmsg_arr_pair_nest_end(fmsg);
+ 	if (err)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 6a8dc73855c9..2793e4036953 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -948,6 +948,9 @@ static int mlx5e_open_rq(struct mlx5e_channel *c,
+ 	if (err)
+ 		goto err_destroy_rq;
+ 
++	if (MLX5_CAP_ETH(c->mdev, cqe_checksum_full))
++		__set_bit(MLX5E_RQ_STATE_CSUM_FULL, &c->rq.state);
++
+ 	if (params->rx_dim_enabled)
+ 		__set_bit(MLX5E_RQ_STATE_AM, &c->rq.state);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index c3b3002ff62f..e8a3656d631d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -829,8 +829,14 @@ static inline void mlx5e_handle_csum(struct net_device *netdev,
+ 		if (unlikely(get_ip_proto(skb, network_depth, proto) == IPPROTO_SCTP))
+ 			goto csum_unnecessary;
+ 
++		stats->csum_complete++;
+ 		skb->ip_summed = CHECKSUM_COMPLETE;
+ 		skb->csum = csum_unfold((__force __sum16)cqe->check_sum);
++
++		if (test_bit(MLX5E_RQ_STATE_CSUM_FULL, &rq->state))
++			return; /* CQE csum covers all received bytes */
++
++		/* csum might need some fixups ...*/
+ 		if (network_depth > ETH_HLEN)
+ 			/* CQE csum is calculated from the IP header and does
+ 			 * not cover VLAN headers (if present). This will add
+@@ -841,7 +847,6 @@ static inline void mlx5e_handle_csum(struct net_device *netdev,
+ 						 skb->csum);
+ 
+ 		mlx5e_skb_padding_csum(skb, network_depth, proto, stats);
+-		stats->csum_complete++;
+ 		return;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
+index 4eac42555c7d..5d0783e55f42 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
+@@ -698,7 +698,9 @@ static int mlx5_rdma_setup_rn(struct ib_device *ibdev, u8 port_num,
+ 
+ 	prof->init(mdev, netdev, prof, ipriv);
+ 
+-	mlx5e_attach_netdev(epriv);
++	err = mlx5e_attach_netdev(epriv);
++	if (err)
++		goto detach;
+ 	netif_carrier_off(netdev);
+ 
+ 	/* set rdma_netdev func pointers */
+@@ -714,6 +716,11 @@ static int mlx5_rdma_setup_rn(struct ib_device *ibdev, u8 port_num,
+ 
+ 	return 0;
+ 
++detach:
++	prof->cleanup(epriv);
++	if (ipriv->sub_interface)
++		return err;
++	mlx5e_destroy_mdev_resources(mdev);
+ destroy_ht:
+ 	mlx5i_pkey_qpn_ht_cleanup(netdev);
+ 	return err;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/port_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/port_tun.c
+index 40f4a19b1ce1..5e2cea26f335 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/port_tun.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/port_tun.c
+@@ -100,27 +100,12 @@ static int mlx5_set_entropy(struct mlx5_tun_entropy *tun_entropy,
+ 	 */
+ 	if (entropy_flags.gre_calc_supported &&
+ 	    reformat_type == MLX5_REFORMAT_TYPE_L2_TO_NVGRE) {
+-		/* Other applications may change the global FW entropy
+-		 * calculations settings. Check that the current entropy value
+-		 * is the negative of the updated value.
+-		 */
+-		if (entropy_flags.force_enabled &&
+-		    enable == entropy_flags.gre_calc_enabled) {
+-			mlx5_core_warn(tun_entropy->mdev,
+-				       "Unexpected GRE entropy calc setting - expected %d",
+-				       !entropy_flags.gre_calc_enabled);
+-			return -EOPNOTSUPP;
+-		}
+-		err = mlx5_set_port_gre_tun_entropy_calc(tun_entropy->mdev, enable,
+-							 entropy_flags.force_supported);
++		if (!entropy_flags.force_supported)
++			return 0;
++		err = mlx5_set_port_gre_tun_entropy_calc(tun_entropy->mdev,
++							 enable, !enable);
+ 		if (err)
+ 			return err;
+-		/* if we turn on the entropy we don't need to force it anymore */
+-		if (entropy_flags.force_supported && enable) {
+-			err = mlx5_set_port_gre_tun_entropy_calc(tun_entropy->mdev, 1, 0);
+-			if (err)
+-				return err;
+-		}
+ 	} else if (entropy_flags.calc_supported) {
+ 		/* Other applications may change the global FW entropy
+ 		 * calculations settings. Check that the current entropy value
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index 6d176be51a6b..309400fbf69d 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -5241,6 +5241,143 @@ static void rtl_hw_start_8411_2(struct rtl8169_private *tp)
+ 	/* disable aspm and clock request before access ephy */
+ 	rtl_hw_aspm_clkreq_enable(tp, false);
+ 	rtl_ephy_init(tp, e_info_8411_2, ARRAY_SIZE(e_info_8411_2));
++
++	/* The following Realtek-provided magic fixes an issue with the RX unit
++	 * getting confused after the PHY has been powered down.
++	 */
++	r8168_mac_ocp_write(tp, 0xFC28, 0x0000);
++	r8168_mac_ocp_write(tp, 0xFC2A, 0x0000);
++	r8168_mac_ocp_write(tp, 0xFC2C, 0x0000);
++	r8168_mac_ocp_write(tp, 0xFC2E, 0x0000);
++	r8168_mac_ocp_write(tp, 0xFC30, 0x0000);
++	r8168_mac_ocp_write(tp, 0xFC32, 0x0000);
++	r8168_mac_ocp_write(tp, 0xFC34, 0x0000);
++	r8168_mac_ocp_write(tp, 0xFC36, 0x0000);
++	mdelay(3);
++	r8168_mac_ocp_write(tp, 0xFC26, 0x0000);
++
++	r8168_mac_ocp_write(tp, 0xF800, 0xE008);
++	r8168_mac_ocp_write(tp, 0xF802, 0xE00A);
++	r8168_mac_ocp_write(tp, 0xF804, 0xE00C);
++	r8168_mac_ocp_write(tp, 0xF806, 0xE00E);
++	r8168_mac_ocp_write(tp, 0xF808, 0xE027);
++	r8168_mac_ocp_write(tp, 0xF80A, 0xE04F);
++	r8168_mac_ocp_write(tp, 0xF80C, 0xE05E);
++	r8168_mac_ocp_write(tp, 0xF80E, 0xE065);
++	r8168_mac_ocp_write(tp, 0xF810, 0xC602);
++	r8168_mac_ocp_write(tp, 0xF812, 0xBE00);
++	r8168_mac_ocp_write(tp, 0xF814, 0x0000);
++	r8168_mac_ocp_write(tp, 0xF816, 0xC502);
++	r8168_mac_ocp_write(tp, 0xF818, 0xBD00);
++	r8168_mac_ocp_write(tp, 0xF81A, 0x074C);
++	r8168_mac_ocp_write(tp, 0xF81C, 0xC302);
++	r8168_mac_ocp_write(tp, 0xF81E, 0xBB00);
++	r8168_mac_ocp_write(tp, 0xF820, 0x080A);
++	r8168_mac_ocp_write(tp, 0xF822, 0x6420);
++	r8168_mac_ocp_write(tp, 0xF824, 0x48C2);
++	r8168_mac_ocp_write(tp, 0xF826, 0x8C20);
++	r8168_mac_ocp_write(tp, 0xF828, 0xC516);
++	r8168_mac_ocp_write(tp, 0xF82A, 0x64A4);
++	r8168_mac_ocp_write(tp, 0xF82C, 0x49C0);
++	r8168_mac_ocp_write(tp, 0xF82E, 0xF009);
++	r8168_mac_ocp_write(tp, 0xF830, 0x74A2);
++	r8168_mac_ocp_write(tp, 0xF832, 0x8CA5);
++	r8168_mac_ocp_write(tp, 0xF834, 0x74A0);
++	r8168_mac_ocp_write(tp, 0xF836, 0xC50E);
++	r8168_mac_ocp_write(tp, 0xF838, 0x9CA2);
++	r8168_mac_ocp_write(tp, 0xF83A, 0x1C11);
++	r8168_mac_ocp_write(tp, 0xF83C, 0x9CA0);
++	r8168_mac_ocp_write(tp, 0xF83E, 0xE006);
++	r8168_mac_ocp_write(tp, 0xF840, 0x74F8);
++	r8168_mac_ocp_write(tp, 0xF842, 0x48C4);
++	r8168_mac_ocp_write(tp, 0xF844, 0x8CF8);
++	r8168_mac_ocp_write(tp, 0xF846, 0xC404);
++	r8168_mac_ocp_write(tp, 0xF848, 0xBC00);
++	r8168_mac_ocp_write(tp, 0xF84A, 0xC403);
++	r8168_mac_ocp_write(tp, 0xF84C, 0xBC00);
++	r8168_mac_ocp_write(tp, 0xF84E, 0x0BF2);
++	r8168_mac_ocp_write(tp, 0xF850, 0x0C0A);
++	r8168_mac_ocp_write(tp, 0xF852, 0xE434);
++	r8168_mac_ocp_write(tp, 0xF854, 0xD3C0);
++	r8168_mac_ocp_write(tp, 0xF856, 0x49D9);
++	r8168_mac_ocp_write(tp, 0xF858, 0xF01F);
++	r8168_mac_ocp_write(tp, 0xF85A, 0xC526);
++	r8168_mac_ocp_write(tp, 0xF85C, 0x64A5);
++	r8168_mac_ocp_write(tp, 0xF85E, 0x1400);
++	r8168_mac_ocp_write(tp, 0xF860, 0xF007);
++	r8168_mac_ocp_write(tp, 0xF862, 0x0C01);
++	r8168_mac_ocp_write(tp, 0xF864, 0x8CA5);
++	r8168_mac_ocp_write(tp, 0xF866, 0x1C15);
++	r8168_mac_ocp_write(tp, 0xF868, 0xC51B);
++	r8168_mac_ocp_write(tp, 0xF86A, 0x9CA0);
++	r8168_mac_ocp_write(tp, 0xF86C, 0xE013);
++	r8168_mac_ocp_write(tp, 0xF86E, 0xC519);
++	r8168_mac_ocp_write(tp, 0xF870, 0x74A0);
++	r8168_mac_ocp_write(tp, 0xF872, 0x48C4);
++	r8168_mac_ocp_write(tp, 0xF874, 0x8CA0);
++	r8168_mac_ocp_write(tp, 0xF876, 0xC516);
++	r8168_mac_ocp_write(tp, 0xF878, 0x74A4);
++	r8168_mac_ocp_write(tp, 0xF87A, 0x48C8);
++	r8168_mac_ocp_write(tp, 0xF87C, 0x48CA);
++	r8168_mac_ocp_write(tp, 0xF87E, 0x9CA4);
++	r8168_mac_ocp_write(tp, 0xF880, 0xC512);
++	r8168_mac_ocp_write(tp, 0xF882, 0x1B00);
++	r8168_mac_ocp_write(tp, 0xF884, 0x9BA0);
++	r8168_mac_ocp_write(tp, 0xF886, 0x1B1C);
++	r8168_mac_ocp_write(tp, 0xF888, 0x483F);
++	r8168_mac_ocp_write(tp, 0xF88A, 0x9BA2);
++	r8168_mac_ocp_write(tp, 0xF88C, 0x1B04);
++	r8168_mac_ocp_write(tp, 0xF88E, 0xC508);
++	r8168_mac_ocp_write(tp, 0xF890, 0x9BA0);
++	r8168_mac_ocp_write(tp, 0xF892, 0xC505);
++	r8168_mac_ocp_write(tp, 0xF894, 0xBD00);
++	r8168_mac_ocp_write(tp, 0xF896, 0xC502);
++	r8168_mac_ocp_write(tp, 0xF898, 0xBD00);
++	r8168_mac_ocp_write(tp, 0xF89A, 0x0300);
++	r8168_mac_ocp_write(tp, 0xF89C, 0x051E);
++	r8168_mac_ocp_write(tp, 0xF89E, 0xE434);
++	r8168_mac_ocp_write(tp, 0xF8A0, 0xE018);
++	r8168_mac_ocp_write(tp, 0xF8A2, 0xE092);
++	r8168_mac_ocp_write(tp, 0xF8A4, 0xDE20);
++	r8168_mac_ocp_write(tp, 0xF8A6, 0xD3C0);
++	r8168_mac_ocp_write(tp, 0xF8A8, 0xC50F);
++	r8168_mac_ocp_write(tp, 0xF8AA, 0x76A4);
++	r8168_mac_ocp_write(tp, 0xF8AC, 0x49E3);
++	r8168_mac_ocp_write(tp, 0xF8AE, 0xF007);
++	r8168_mac_ocp_write(tp, 0xF8B0, 0x49C0);
++	r8168_mac_ocp_write(tp, 0xF8B2, 0xF103);
++	r8168_mac_ocp_write(tp, 0xF8B4, 0xC607);
++	r8168_mac_ocp_write(tp, 0xF8B6, 0xBE00);
++	r8168_mac_ocp_write(tp, 0xF8B8, 0xC606);
++	r8168_mac_ocp_write(tp, 0xF8BA, 0xBE00);
++	r8168_mac_ocp_write(tp, 0xF8BC, 0xC602);
++	r8168_mac_ocp_write(tp, 0xF8BE, 0xBE00);
++	r8168_mac_ocp_write(tp, 0xF8C0, 0x0C4C);
++	r8168_mac_ocp_write(tp, 0xF8C2, 0x0C28);
++	r8168_mac_ocp_write(tp, 0xF8C4, 0x0C2C);
++	r8168_mac_ocp_write(tp, 0xF8C6, 0xDC00);
++	r8168_mac_ocp_write(tp, 0xF8C8, 0xC707);
++	r8168_mac_ocp_write(tp, 0xF8CA, 0x1D00);
++	r8168_mac_ocp_write(tp, 0xF8CC, 0x8DE2);
++	r8168_mac_ocp_write(tp, 0xF8CE, 0x48C1);
++	r8168_mac_ocp_write(tp, 0xF8D0, 0xC502);
++	r8168_mac_ocp_write(tp, 0xF8D2, 0xBD00);
++	r8168_mac_ocp_write(tp, 0xF8D4, 0x00AA);
++	r8168_mac_ocp_write(tp, 0xF8D6, 0xE0C0);
++	r8168_mac_ocp_write(tp, 0xF8D8, 0xC502);
++	r8168_mac_ocp_write(tp, 0xF8DA, 0xBD00);
++	r8168_mac_ocp_write(tp, 0xF8DC, 0x0132);
++
++	r8168_mac_ocp_write(tp, 0xFC26, 0x8000);
++
++	r8168_mac_ocp_write(tp, 0xFC2A, 0x0743);
++	r8168_mac_ocp_write(tp, 0xFC2C, 0x0801);
++	r8168_mac_ocp_write(tp, 0xFC2E, 0x0BE9);
++	r8168_mac_ocp_write(tp, 0xFC30, 0x02FD);
++	r8168_mac_ocp_write(tp, 0xFC32, 0x0C25);
++	r8168_mac_ocp_write(tp, 0xFC34, 0x00A9);
++	r8168_mac_ocp_write(tp, 0xFC36, 0x012D);
++
+ 	rtl_hw_aspm_clkreq_enable(tp, true);
+ }
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index f3735d0458eb..ee3a5a4b2042 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -3058,17 +3058,8 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 	/* Manage oversized TCP frames for GMAC4 device */
+ 	if (skb_is_gso(skb) && priv->tso) {
+-		if (skb_shinfo(skb)->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6)) {
+-			/*
+-			 * There is no way to determine the number of TSO
+-			 * capable Queues. Let's use always the Queue 0
+-			 * because if TSO is supported then at least this
+-			 * one will be capable.
+-			 */
+-			skb_set_queue_mapping(skb, 0);
+-
++		if (skb_shinfo(skb)->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6))
+ 			return stmmac_tso_xmit(skb, dev);
+-		}
+ 	}
+ 
+ 	if (unlikely(stmmac_tx_avail(priv, queue) < nfrags + 1)) {
+@@ -3885,6 +3876,23 @@ static int stmmac_setup_tc(struct net_device *ndev, enum tc_setup_type type,
+ 	}
+ }
+ 
++static u16 stmmac_select_queue(struct net_device *dev, struct sk_buff *skb,
++			       struct net_device *sb_dev,
++			       select_queue_fallback_t fallback)
++{
++	if (skb_shinfo(skb)->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6)) {
++		/*
++		 * There is no way to determine the number of TSO
++		 * capable Queues. Let's always use Queue 0
++		 * because if TSO is supported then at least this
++		 * one will be capable.
++		 */
++		return 0;
++	}
++
++	return fallback(dev, skb, NULL) % dev->real_num_tx_queues;
++}
++
+ static int stmmac_set_mac_address(struct net_device *ndev, void *addr)
+ {
+ 	struct stmmac_priv *priv = netdev_priv(ndev);
+@@ -4101,6 +4109,7 @@ static const struct net_device_ops stmmac_netdev_ops = {
+ 	.ndo_tx_timeout = stmmac_tx_timeout,
+ 	.ndo_do_ioctl = stmmac_ioctl,
+ 	.ndo_setup_tc = stmmac_setup_tc,
++	.ndo_select_queue = stmmac_select_queue,
+ #ifdef CONFIG_NET_POLL_CONTROLLER
+ 	.ndo_poll_controller = stmmac_poll_controller,
+ #endif
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index e7d8884b1a10..e60a620f9e31 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -849,7 +849,6 @@ int netvsc_recv_callback(struct net_device *net,
+ 
+ 	if (unlikely(!skb)) {
+ 		++net_device_ctx->eth_stats.rx_no_memory;
+-		rcu_read_unlock();
+ 		return NVSP_STAT_FAIL;
+ 	}
+ 
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 64a982563d59..bb65eaccbfad 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -869,6 +869,7 @@ static void macsec_reset_skb(struct sk_buff *skb, struct net_device *dev)
+ 
+ static void macsec_finalize_skb(struct sk_buff *skb, u8 icv_len, u8 hdr_len)
+ {
++	skb->ip_summed = CHECKSUM_NONE;
+ 	memmove(skb->data + hdr_len, skb->data, 2 * ETH_ALEN);
+ 	skb_pull(skb, hdr_len);
+ 	pskb_trim_unique(skb, skb->len - icv_len);
+@@ -1103,10 +1104,9 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
+ 	}
+ 
+ 	skb = skb_unshare(skb, GFP_ATOMIC);
+-	if (!skb) {
+-		*pskb = NULL;
++	*pskb = skb;
++	if (!skb)
+ 		return RX_HANDLER_CONSUMED;
+-	}
+ 
+ 	pulled_sci = pskb_may_pull(skb, macsec_extra_len(true));
+ 	if (!pulled_sci) {
+diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
+index b6efd2d41dce..be0271a51b0a 100644
+--- a/drivers/net/phy/sfp.c
++++ b/drivers/net/phy/sfp.c
+@@ -515,7 +515,7 @@ static int sfp_hwmon_read_sensor(struct sfp *sfp, int reg, long *value)
+ 
+ static void sfp_hwmon_to_rx_power(long *value)
+ {
+-	*value = DIV_ROUND_CLOSEST(*value, 100);
++	*value = DIV_ROUND_CLOSEST(*value, 10);
+ }
+ 
+ static void sfp_hwmon_calibrate(struct sfp *sfp, unsigned int slope, int offset,
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 9ee4d7402ca2..b4ac87aa09fd 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -169,23 +169,29 @@ static int vrf_ip6_local_out(struct net *net, struct sock *sk,
+ static netdev_tx_t vrf_process_v6_outbound(struct sk_buff *skb,
+ 					   struct net_device *dev)
+ {
+-	const struct ipv6hdr *iph = ipv6_hdr(skb);
++	const struct ipv6hdr *iph;
+ 	struct net *net = dev_net(skb->dev);
+-	struct flowi6 fl6 = {
+-		/* needed to match OIF rule */
+-		.flowi6_oif = dev->ifindex,
+-		.flowi6_iif = LOOPBACK_IFINDEX,
+-		.daddr = iph->daddr,
+-		.saddr = iph->saddr,
+-		.flowlabel = ip6_flowinfo(iph),
+-		.flowi6_mark = skb->mark,
+-		.flowi6_proto = iph->nexthdr,
+-		.flowi6_flags = FLOWI_FLAG_SKIP_NH_OIF,
+-	};
++	struct flowi6 fl6;
+ 	int ret = NET_XMIT_DROP;
+ 	struct dst_entry *dst;
+ 	struct dst_entry *dst_null = &net->ipv6.ip6_null_entry->dst;
+ 
++	if (!pskb_may_pull(skb, ETH_HLEN + sizeof(struct ipv6hdr)))
++		goto err;
++
++	iph = ipv6_hdr(skb);
++
++	memset(&fl6, 0, sizeof(fl6));
++	/* needed to match OIF rule */
++	fl6.flowi6_oif = dev->ifindex;
++	fl6.flowi6_iif = LOOPBACK_IFINDEX;
++	fl6.daddr = iph->daddr;
++	fl6.saddr = iph->saddr;
++	fl6.flowlabel = ip6_flowinfo(iph);
++	fl6.flowi6_mark = skb->mark;
++	fl6.flowi6_proto = iph->nexthdr;
++	fl6.flowi6_flags = FLOWI_FLAG_SKIP_NH_OIF;
++
+ 	dst = ip6_route_output(net, NULL, &fl6);
+ 	if (dst == dst_null)
+ 		goto err;
+@@ -241,21 +247,27 @@ static int vrf_ip_local_out(struct net *net, struct sock *sk,
+ static netdev_tx_t vrf_process_v4_outbound(struct sk_buff *skb,
+ 					   struct net_device *vrf_dev)
+ {
+-	struct iphdr *ip4h = ip_hdr(skb);
++	struct iphdr *ip4h;
+ 	int ret = NET_XMIT_DROP;
+-	struct flowi4 fl4 = {
+-		/* needed to match OIF rule */
+-		.flowi4_oif = vrf_dev->ifindex,
+-		.flowi4_iif = LOOPBACK_IFINDEX,
+-		.flowi4_tos = RT_TOS(ip4h->tos),
+-		.flowi4_flags = FLOWI_FLAG_ANYSRC | FLOWI_FLAG_SKIP_NH_OIF,
+-		.flowi4_proto = ip4h->protocol,
+-		.daddr = ip4h->daddr,
+-		.saddr = ip4h->saddr,
+-	};
++	struct flowi4 fl4;
+ 	struct net *net = dev_net(vrf_dev);
+ 	struct rtable *rt;
+ 
++	if (!pskb_may_pull(skb, ETH_HLEN + sizeof(struct iphdr)))
++		goto err;
++
++	ip4h = ip_hdr(skb);
++
++	memset(&fl4, 0, sizeof(fl4));
++	/* needed to match OIF rule */
++	fl4.flowi4_oif = vrf_dev->ifindex;
++	fl4.flowi4_iif = LOOPBACK_IFINDEX;
++	fl4.flowi4_tos = RT_TOS(ip4h->tos);
++	fl4.flowi4_flags = FLOWI_FLAG_ANYSRC | FLOWI_FLAG_SKIP_NH_OIF;
++	fl4.flowi4_proto = ip4h->protocol;
++	fl4.daddr = ip4h->daddr;
++	fl4.saddr = ip4h->saddr;
++
+ 	rt = ip_route_output_flow(net, &fl4, NULL);
+ 	if (IS_ERR(rt))
+ 		goto err;
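Both vrf.c hunks apply the same defensive pattern: verify with pskb_may_pull() that the headers really are in the linear skb area before dereferencing them, and only then build the flow key with memset() plus explicit field assignments, rather than a designated initializer that reads the (possibly unpulled) header up front. The IPv4 half of that pattern, reduced to a sketch:

    #include <linux/if_ether.h>
    #include <linux/ip.h>
    #include <linux/skbuff.h>
    #include <linux/string.h>
    #include <net/flow.h>

    static int build_v4_flow(struct sk_buff *skb, struct flowi4 *fl4)
    {
        struct iphdr *ip4h;

        /* Bail out unless the Ethernet + IPv4 headers are linear. */
        if (!pskb_may_pull(skb, ETH_HLEN + sizeof(struct iphdr)))
            return -EINVAL;

        ip4h = ip_hdr(skb); /* only now safe to dereference */

        memset(fl4, 0, sizeof(*fl4));
        fl4->daddr = ip4h->daddr;
        fl4->saddr = ip4h->saddr;
        fl4->flowi4_proto = ip4h->protocol;
        return 0;
    }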
+diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
+index bd70843940dc..07be36aca0da 100644
+--- a/drivers/scsi/sd_zbc.c
++++ b/drivers/scsi/sd_zbc.c
+@@ -23,6 +23,8 @@
+  */
+ 
+ #include <linux/blkdev.h>
++#include <linux/vmalloc.h>
++#include <linux/sched/mm.h>
+ 
+ #include <asm/unaligned.h>
+ 
+@@ -64,7 +66,7 @@ static void sd_zbc_parse_report(struct scsi_disk *sdkp, u8 *buf,
+ /**
+  * sd_zbc_do_report_zones - Issue a REPORT ZONES scsi command.
+  * @sdkp: The target disk
+- * @buf: Buffer to use for the reply
++ * @buf: vmalloc-ed buffer to use for the reply
+  * @buflen: the buffer size
+  * @lba: Start LBA of the report
+  * @partial: Do partial report
+@@ -93,7 +95,6 @@ static int sd_zbc_do_report_zones(struct scsi_disk *sdkp, unsigned char *buf,
+ 	put_unaligned_be32(buflen, &cmd[10]);
+ 	if (partial)
+ 		cmd[14] = ZBC_REPORT_ZONE_PARTIAL;
+-	memset(buf, 0, buflen);
+ 
+ 	result = scsi_execute_req(sdp, cmd, DMA_FROM_DEVICE,
+ 				  buf, buflen, &sshdr,
+@@ -117,6 +118,53 @@ static int sd_zbc_do_report_zones(struct scsi_disk *sdkp, unsigned char *buf,
+ 	return 0;
+ }
+ 
++/*
++ * Maximum number of zones to get with one report zones command.
++ */
++#define SD_ZBC_REPORT_MAX_ZONES		8192U
++
++/**
++ * Allocate a buffer for report zones reply.
++ * @sdkp: The target disk
++ * @nr_zones: Maximum number of zones to report
++ * @buflen: Size of the buffer allocated
++ *
++ * Try to allocate a reply buffer for the number of requested zones.
++ * The size of the buffer allocated may be smaller than requested to
++ * satisfy the device constraints (max_hw_sectors, max_segments, etc).
++ *
++ * Return the address of the allocated buffer and update @buflen with
++ * the size of the allocated buffer.
++ */
++static void *sd_zbc_alloc_report_buffer(struct scsi_disk *sdkp,
++					unsigned int nr_zones, size_t *buflen)
++{
++	struct request_queue *q = sdkp->disk->queue;
++	size_t bufsize;
++	void *buf;
++
++	/*
++	 * Report zone buffer size should be at most 64B times the number of
++	 * zones requested plus the 64B reply header, but should be at least
++	 * SECTOR_SIZE for ATA devices.
++	 * Make sure that this size does not exceed the hardware capabilities.
++	 * Furthermore, since the report zone command cannot be split, make
++	 * sure that the allocated buffer can always be mapped by limiting the
++	 * number of pages allocated to the HBA max segments limit.
++	 */
++	nr_zones = min(nr_zones, SD_ZBC_REPORT_MAX_ZONES);
++	bufsize = roundup((nr_zones + 1) * 64, 512);
++	bufsize = min_t(size_t, bufsize,
++			queue_max_hw_sectors(q) << SECTOR_SHIFT);
++	bufsize = min_t(size_t, bufsize, queue_max_segments(q) << PAGE_SHIFT);
++
++	buf = vzalloc(bufsize);
++	if (buf)
++		*buflen = bufsize;
++
++	return buf;
++}
++
+ /**
+  * sd_zbc_report_zones - Disk report zones operation.
+  * @disk: The target disk
+@@ -132,30 +180,23 @@ int sd_zbc_report_zones(struct gendisk *disk, sector_t sector,
+ 			gfp_t gfp_mask)
+ {
+ 	struct scsi_disk *sdkp = scsi_disk(disk);
+-	unsigned int i, buflen, nrz = *nr_zones;
++	unsigned int i, nrz = *nr_zones;
+ 	unsigned char *buf;
+-	size_t offset = 0;
++	size_t buflen = 0, offset = 0;
+ 	int ret = 0;
+ 
+ 	if (!sd_is_zoned(sdkp))
+ 		/* Not a zoned device */
+ 		return -EOPNOTSUPP;
+ 
+-	/*
+-	 * Get a reply buffer for the number of requested zones plus a header,
+-	 * without exceeding the device maximum command size. For ATA disks,
+-	 * buffers must be aligned to 512B.
+-	 */
+-	buflen = min(queue_max_hw_sectors(disk->queue) << 9,
+-		     roundup((nrz + 1) * 64, 512));
+-	buf = kmalloc(buflen, gfp_mask);
++	buf = sd_zbc_alloc_report_buffer(sdkp, nrz, &buflen);
+ 	if (!buf)
+ 		return -ENOMEM;
+ 
+ 	ret = sd_zbc_do_report_zones(sdkp, buf, buflen,
+ 			sectors_to_logical(sdkp->device, sector), true);
+ 	if (ret)
+-		goto out_free_buf;
++		goto out;
+ 
+ 	nrz = min(nrz, get_unaligned_be32(&buf[0]) / 64);
+ 	for (i = 0; i < nrz; i++) {
+@@ -166,8 +207,8 @@ int sd_zbc_report_zones(struct gendisk *disk, sector_t sector,
+ 
+ 	*nr_zones = nrz;
+ 
+-out_free_buf:
+-	kfree(buf);
++out:
++	kvfree(buf);
+ 
+ 	return ret;
+ }
+@@ -301,8 +342,6 @@ static int sd_zbc_check_zoned_characteristics(struct scsi_disk *sdkp,
+ 	return 0;
+ }
+ 
+-#define SD_ZBC_BUF_SIZE 131072U
+-
+ /**
+  * sd_zbc_check_zones - Check the device capacity and zone sizes
+  * @sdkp: Target disk
+@@ -318,22 +357,28 @@ static int sd_zbc_check_zoned_characteristics(struct scsi_disk *sdkp,
+  */
+ static int sd_zbc_check_zones(struct scsi_disk *sdkp, u32 *zblocks)
+ {
++	size_t bufsize, buflen;
++	unsigned int noio_flag;
+ 	u64 zone_blocks = 0;
+ 	sector_t max_lba, block = 0;
+ 	unsigned char *buf;
+ 	unsigned char *rec;
+-	unsigned int buf_len;
+-	unsigned int list_length;
+ 	int ret;
+ 	u8 same;
+ 
++	/* Do all memory allocations as if GFP_NOIO was specified */
++	noio_flag = memalloc_noio_save();
++
+ 	/* Get a buffer */
+-	buf = kmalloc(SD_ZBC_BUF_SIZE, GFP_KERNEL);
+-	if (!buf)
+-		return -ENOMEM;
++	buf = sd_zbc_alloc_report_buffer(sdkp, SD_ZBC_REPORT_MAX_ZONES,
++					 &bufsize);
++	if (!buf) {
++		ret = -ENOMEM;
++		goto out;
++	}
+ 
+ 	/* Do a report zone to get max_lba and the same field */
+-	ret = sd_zbc_do_report_zones(sdkp, buf, SD_ZBC_BUF_SIZE, 0, false);
++	ret = sd_zbc_do_report_zones(sdkp, buf, bufsize, 0, false);
+ 	if (ret)
+ 		goto out_free;
+ 
+@@ -369,12 +414,12 @@ static int sd_zbc_check_zones(struct scsi_disk *sdkp, u32 *zblocks)
+ 	do {
+ 
+ 		/* Parse REPORT ZONES header */
+-		list_length = get_unaligned_be32(&buf[0]) + 64;
++		buflen = min_t(size_t, get_unaligned_be32(&buf[0]) + 64,
++			       bufsize);
+ 		rec = buf + 64;
+-		buf_len = min(list_length, SD_ZBC_BUF_SIZE);
+ 
+ 		/* Parse zone descriptors */
+-		while (rec < buf + buf_len) {
++		while (rec < buf + buflen) {
+ 			u64 this_zone_blocks = get_unaligned_be64(&rec[8]);
+ 
+ 			if (zone_blocks == 0) {
+@@ -390,8 +435,8 @@ static int sd_zbc_check_zones(struct scsi_disk *sdkp, u32 *zblocks)
+ 		}
+ 
+ 		if (block < sdkp->capacity) {
+-			ret = sd_zbc_do_report_zones(sdkp, buf, SD_ZBC_BUF_SIZE,
+-						     block, true);
++			ret = sd_zbc_do_report_zones(sdkp, buf, bufsize, block,
++						     true);
+ 			if (ret)
+ 				goto out_free;
+ 		}
+@@ -422,7 +467,8 @@ out:
+ 	}
+ 
+ out_free:
+-	kfree(buf);
++	memalloc_noio_restore(noio_flag);
++	kvfree(buf);
+ 
+ 	return ret;
+ }
+diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c
+index 0ccd51f72048..44c2fff2a8b7 100644
+--- a/fs/ext4/dir.c
++++ b/fs/ext4/dir.c
+@@ -108,7 +108,6 @@ static int ext4_readdir(struct file *file, struct dir_context *ctx)
+ 	struct inode *inode = file_inode(file);
+ 	struct super_block *sb = inode->i_sb;
+ 	struct buffer_head *bh = NULL;
+-	int dir_has_error = 0;
+ 	struct fscrypt_str fstr = FSTR_INIT(NULL, 0);
+ 
+ 	if (IS_ENCRYPTED(inode)) {
+@@ -144,8 +143,6 @@ static int ext4_readdir(struct file *file, struct dir_context *ctx)
+ 			return err;
+ 	}
+ 
+-	offset = ctx->pos & (sb->s_blocksize - 1);
+-
+ 	while (ctx->pos < inode->i_size) {
+ 		struct ext4_map_blocks map;
+ 
+@@ -154,9 +151,18 @@ static int ext4_readdir(struct file *file, struct dir_context *ctx)
+ 			goto errout;
+ 		}
+ 		cond_resched();
++		offset = ctx->pos & (sb->s_blocksize - 1);
+ 		map.m_lblk = ctx->pos >> EXT4_BLOCK_SIZE_BITS(sb);
+ 		map.m_len = 1;
+ 		err = ext4_map_blocks(NULL, inode, &map, 0);
++		if (err == 0) {
++			/* m_len should never be zero but let's avoid
++			 * an infinite loop if it somehow is */
++			if (map.m_len == 0)
++				map.m_len = 1;
++			ctx->pos += map.m_len * sb->s_blocksize;
++			continue;
++		}
+ 		if (err > 0) {
+ 			pgoff_t index = map.m_pblk >>
+ 					(PAGE_SHIFT - inode->i_blkbits);
+@@ -175,13 +181,6 @@ static int ext4_readdir(struct file *file, struct dir_context *ctx)
+ 		}
+ 
+ 		if (!bh) {
+-			if (!dir_has_error) {
+-				EXT4_ERROR_FILE(file, 0,
+-						"directory contains a "
+-						"hole at offset %llu",
+-					   (unsigned long long) ctx->pos);
+-				dir_has_error = 1;
+-			}
+ 			/* corrupt size?  Maybe no more blocks to read */
+ 			if (ctx->pos > inode->i_blocks << 9)
+ 				break;
+diff --git a/fs/ext4/ext4_jbd2.h b/fs/ext4/ext4_jbd2.h
+index 75a5309f2231..ef8fcf7d0d3b 100644
+--- a/fs/ext4/ext4_jbd2.h
++++ b/fs/ext4/ext4_jbd2.h
+@@ -361,20 +361,20 @@ static inline int ext4_journal_force_commit(journal_t *journal)
+ }
+ 
+ static inline int ext4_jbd2_inode_add_write(handle_t *handle,
+-					    struct inode *inode)
++		struct inode *inode, loff_t start_byte, loff_t length)
+ {
+ 	if (ext4_handle_valid(handle))
+-		return jbd2_journal_inode_add_write(handle,
+-						    EXT4_I(inode)->jinode);
++		return jbd2_journal_inode_ranged_write(handle,
++				EXT4_I(inode)->jinode, start_byte, length);
+ 	return 0;
+ }
+ 
+ static inline int ext4_jbd2_inode_add_wait(handle_t *handle,
+-					   struct inode *inode)
++		struct inode *inode, loff_t start_byte, loff_t length)
+ {
+ 	if (ext4_handle_valid(handle))
+-		return jbd2_journal_inode_add_wait(handle,
+-						   EXT4_I(inode)->jinode);
++		return jbd2_journal_inode_ranged_wait(handle,
++				EXT4_I(inode)->jinode, start_byte, length);
+ 	return 0;
+ }
+ 
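These wrappers swap the whole-inode jbd2 calls for the ranged jbd2_journal_inode_ranged_write()/_wait() variants; every caller updated in the inode.c and move_extent.c hunks that follow derives the byte range from a block extent by shifting with the inode's block-size bits. That conversion as a tiny standalone helper (illustrative only, not ext4's code):

    #include <linux/types.h>

    /* Map a logical-block extent onto the (start_byte, length) pair
     * expected by the ranged jbd2 helpers. The casts matter: without
     * them the shift would be done in 32 bits and could overflow. */
    static void extent_to_byte_range(u32 lblk, u32 len, u8 blkbits,
                                     loff_t *start_byte, loff_t *length)
    {
        *start_byte = (loff_t)lblk << blkbits;
        *length = (loff_t)len << blkbits;
    }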
+diff --git a/fs/ext4/file.c b/fs/ext4/file.c
+index 2c5baa5e8291..f4a24a46245e 100644
+--- a/fs/ext4/file.c
++++ b/fs/ext4/file.c
+@@ -165,6 +165,10 @@ static ssize_t ext4_write_checks(struct kiocb *iocb, struct iov_iter *from)
+ 	ret = generic_write_checks(iocb, from);
+ 	if (ret <= 0)
+ 		return ret;
++
++	if (unlikely(IS_IMMUTABLE(inode)))
++		return -EPERM;
++
+ 	/*
+ 	 * If we have encountered a bitmap-format file, the size limit
+ 	 * is smaller than s_maxbytes, which is for extent-mapped files.
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 7fd2d14dc27c..aa1987b23ffb 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -727,10 +727,16 @@ out_sem:
+ 		    !(flags & EXT4_GET_BLOCKS_ZERO) &&
+ 		    !ext4_is_quota_file(inode) &&
+ 		    ext4_should_order_data(inode)) {
++			loff_t start_byte =
++				(loff_t)map->m_lblk << inode->i_blkbits;
++			loff_t length = (loff_t)map->m_len << inode->i_blkbits;
++
+ 			if (flags & EXT4_GET_BLOCKS_IO_SUBMIT)
+-				ret = ext4_jbd2_inode_add_wait(handle, inode);
++				ret = ext4_jbd2_inode_add_wait(handle, inode,
++						start_byte, length);
+ 			else
+-				ret = ext4_jbd2_inode_add_write(handle, inode);
++				ret = ext4_jbd2_inode_add_write(handle, inode,
++						start_byte, length);
+ 			if (ret)
+ 				return ret;
+ 		}
+@@ -4081,7 +4087,8 @@ static int __ext4_block_zero_page_range(handle_t *handle,
+ 		err = 0;
+ 		mark_buffer_dirty(bh);
+ 		if (ext4_should_order_data(inode))
+-			err = ext4_jbd2_inode_add_write(handle, inode);
++			err = ext4_jbd2_inode_add_write(handle, inode, from,
++					length);
+ 	}
+ 
+ unlock:
+@@ -5514,6 +5521,14 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr)
+ 	if (unlikely(ext4_forced_shutdown(EXT4_SB(inode->i_sb))))
+ 		return -EIO;
+ 
++	if (unlikely(IS_IMMUTABLE(inode)))
++		return -EPERM;
++
++	if (unlikely(IS_APPEND(inode) &&
++		     (ia_valid & (ATTR_MODE | ATTR_UID |
++				  ATTR_GID | ATTR_TIMES_SET))))
++		return -EPERM;
++
+ 	error = setattr_prepare(dentry, attr);
+ 	if (error)
+ 		return error;
+@@ -6184,6 +6199,9 @@ vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf)
+ 	get_block_t *get_block;
+ 	int retries = 0;
+ 
++	if (unlikely(IS_IMMUTABLE(inode)))
++		return VM_FAULT_SIGBUS;
++
+ 	sb_start_pagefault(inode->i_sb);
+ 	file_update_time(vma->vm_file);
+ 
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index 20faa6a69238..c8fa2d140325 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -269,6 +269,29 @@ static int uuid_is_zero(__u8 u[16])
+ }
+ #endif
+ 
++/*
++ * If immutable is set and we are not clearing it, we're not allowed to change
++ * anything else in the inode.  Don't error out if we're only trying to set
++ * immutable on an immutable file.
++ */
++static int ext4_ioctl_check_immutable(struct inode *inode, __u32 new_projid,
++				      unsigned int flags)
++{
++	struct ext4_inode_info *ei = EXT4_I(inode);
++	unsigned int oldflags = ei->i_flags;
++
++	if (!(oldflags & EXT4_IMMUTABLE_FL) || !(flags & EXT4_IMMUTABLE_FL))
++		return 0;
++
++	if ((oldflags & ~EXT4_IMMUTABLE_FL) != (flags & ~EXT4_IMMUTABLE_FL))
++		return -EPERM;
++	if (ext4_has_feature_project(inode->i_sb) &&
++	    __kprojid_val(ei->i_projid) != new_projid)
++		return -EPERM;
++
++	return 0;
++}
++
+ static int ext4_ioctl_setflags(struct inode *inode,
+ 			       unsigned int flags)
+ {
+@@ -322,6 +345,20 @@ static int ext4_ioctl_setflags(struct inode *inode,
+ 			goto flags_out;
+ 	}
+ 
++	/*
++	 * Wait for all pending directio and then flush all the dirty pages
++	 * for this file.  The flush marks all the pages readonly, so any
++	 * subsequent attempt to write to the file (particularly mmap pages)
++	 * will come through the filesystem and fail.
++	 */
++	if (S_ISREG(inode->i_mode) && !IS_IMMUTABLE(inode) &&
++	    (flags & EXT4_IMMUTABLE_FL)) {
++		inode_dio_wait(inode);
++		err = filemap_write_and_wait(inode->i_mapping);
++		if (err)
++			goto flags_out;
++	}
++
+ 	handle = ext4_journal_start(inode, EXT4_HT_INODE, 1);
+ 	if (IS_ERR(handle)) {
+ 		err = PTR_ERR(handle);
+@@ -751,7 +788,11 @@ long ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ 			return err;
+ 
+ 		inode_lock(inode);
+-		err = ext4_ioctl_setflags(inode, flags);
++		err = ext4_ioctl_check_immutable(inode,
++				from_kprojid(&init_user_ns, ei->i_projid),
++				flags);
++		if (!err)
++			err = ext4_ioctl_setflags(inode, flags);
+ 		inode_unlock(inode);
+ 		mnt_drop_write_file(filp);
+ 		return err;
+@@ -1121,6 +1162,9 @@ resizefs_out:
+ 			goto out;
+ 		flags = (ei->i_flags & ~EXT4_FL_XFLAG_VISIBLE) |
+ 			 (flags & EXT4_FL_XFLAG_VISIBLE);
++		err = ext4_ioctl_check_immutable(inode, fa.fsx_projid, flags);
++		if (err)
++			goto out;
+ 		err = ext4_ioctl_setflags(inode, flags);
+ 		if (err)
+ 			goto out;
+diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
+index 1083a9f3f16a..c7ded4e2adff 100644
+--- a/fs/ext4/move_extent.c
++++ b/fs/ext4/move_extent.c
+@@ -390,7 +390,8 @@ data_copy:
+ 
+ 	/* Even in case of data=writeback it is reasonable to pin
+ 	 * inode to transaction, to prevent unexpected data loss */
+-	*err = ext4_jbd2_inode_add_write(handle, orig_inode);
++	*err = ext4_jbd2_inode_add_write(handle, orig_inode,
++			(loff_t)orig_page_offset << PAGE_SHIFT, replaced_size);
+ 
+ unlock_pages:
+ 	unlock_page(pagep[0]);
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 5d9ffa8efbfd..27b1fb2612bc 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -81,8 +81,18 @@ static struct buffer_head *ext4_append(handle_t *handle,
+ static int ext4_dx_csum_verify(struct inode *inode,
+ 			       struct ext4_dir_entry *dirent);
+ 
++/*
++ * Hints to ext4_read_dirblock regarding whether we expect a directory
++ * block being read to be an index block, or a block containing
++ * directory entries (and if the latter, whether it was found via a
++ * logical block in an htree index block).  This is used to control
++ * what sort of sanity checking ext4_read_dirblock() will do on the
++ * directory block read from the storage device.  EITHER means
++ * the caller doesn't know what kind of directory block will be read,
++ * so no specific verification will be done.
++ */
+ typedef enum {
+-	EITHER, INDEX, DIRENT
++	EITHER, INDEX, DIRENT, DIRENT_HTREE
+ } dirblock_type_t;
+ 
+ #define ext4_read_dirblock(inode, block, type) \
+@@ -108,11 +118,14 @@ static struct buffer_head *__ext4_read_dirblock(struct inode *inode,
+ 
+ 		return bh;
+ 	}
+-	if (!bh) {
++	if (!bh && (type == INDEX || type == DIRENT_HTREE)) {
+ 		ext4_error_inode(inode, func, line, block,
+-				 "Directory hole found");
++				 "Directory hole found for htree %s block",
++				 (type == INDEX) ? "index" : "leaf");
+ 		return ERR_PTR(-EFSCORRUPTED);
+ 	}
++	if (!bh)
++		return NULL;
+ 	dirent = (struct ext4_dir_entry *) bh->b_data;
+ 	/* Determine whether or not we have an index block */
+ 	if (is_dx(inode)) {
+@@ -979,7 +992,7 @@ static int htree_dirblock_to_tree(struct file *dir_file,
+ 
+ 	dxtrace(printk(KERN_INFO "In htree dirblock_to_tree: block %lu\n",
+ 							(unsigned long)block));
+-	bh = ext4_read_dirblock(dir, block, DIRENT);
++	bh = ext4_read_dirblock(dir, block, DIRENT_HTREE);
+ 	if (IS_ERR(bh))
+ 		return PTR_ERR(bh);
+ 
+@@ -1509,7 +1522,7 @@ static struct buffer_head * ext4_dx_find_entry(struct inode *dir,
+ 		return (struct buffer_head *) frame;
+ 	do {
+ 		block = dx_get_block(frame->at);
+-		bh = ext4_read_dirblock(dir, block, DIRENT);
++		bh = ext4_read_dirblock(dir, block, DIRENT_HTREE);
+ 		if (IS_ERR(bh))
+ 			goto errout;
+ 
+@@ -2079,6 +2092,11 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry,
+ 	blocks = dir->i_size >> sb->s_blocksize_bits;
+ 	for (block = 0; block < blocks; block++) {
+ 		bh = ext4_read_dirblock(dir, block, DIRENT);
++		if (bh == NULL) {
++			bh = ext4_bread(handle, dir, block,
++					EXT4_GET_BLOCKS_CREATE);
++			goto add_to_new_block;
++		}
+ 		if (IS_ERR(bh)) {
+ 			retval = PTR_ERR(bh);
+ 			bh = NULL;
+@@ -2099,6 +2117,7 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry,
+ 		brelse(bh);
+ 	}
+ 	bh = ext4_append(handle, dir, &block);
++add_to_new_block:
+ 	if (IS_ERR(bh)) {
+ 		retval = PTR_ERR(bh);
+ 		bh = NULL;
+@@ -2143,7 +2162,7 @@ again:
+ 		return PTR_ERR(frame);
+ 	entries = frame->entries;
+ 	at = frame->at;
+-	bh = ext4_read_dirblock(dir, dx_get_block(frame->at), DIRENT);
++	bh = ext4_read_dirblock(dir, dx_get_block(frame->at), DIRENT_HTREE);
+ 	if (IS_ERR(bh)) {
+ 		err = PTR_ERR(bh);
+ 		bh = NULL;
+@@ -2691,7 +2710,10 @@ bool ext4_empty_dir(struct inode *inode)
+ 		EXT4_ERROR_INODE(inode, "invalid size");
+ 		return true;
+ 	}
+-	bh = ext4_read_dirblock(inode, 0, EITHER);
++	/* The first directory block must not be a hole,
++	 * so treat it as DIRENT_HTREE
++	 */
++	bh = ext4_read_dirblock(inode, 0, DIRENT_HTREE);
+ 	if (IS_ERR(bh))
+ 		return true;
+ 
+@@ -2713,6 +2735,10 @@ bool ext4_empty_dir(struct inode *inode)
+ 			brelse(bh);
+ 			lblock = offset >> EXT4_BLOCK_SIZE_BITS(sb);
+ 			bh = ext4_read_dirblock(inode, lblock, EITHER);
++			if (bh == NULL) {
++				offset += sb->s_blocksize;
++				continue;
++			}
+ 			if (IS_ERR(bh))
+ 				return true;
+ 			de = (struct ext4_dir_entry_2 *) bh->b_data;
+@@ -3256,7 +3282,10 @@ static struct buffer_head *ext4_get_first_dir_block(handle_t *handle,
+ 	struct buffer_head *bh;
+ 
+ 	if (!ext4_has_inline_data(inode)) {
+-		bh = ext4_read_dirblock(inode, 0, EITHER);
++		/* The first directory block must not be a hole, so
++		 * treat it as DIRENT_HTREE
++		 */
++		bh = ext4_read_dirblock(inode, 0, DIRENT_HTREE);
+ 		if (IS_ERR(bh)) {
+ 			*retval = PTR_ERR(bh);
+ 			return NULL;
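With DIRENT_HTREE in place, ext4_read_dirblock() has three distinct outcomes for DIRENT/EITHER reads: an ERR_PTR() for corruption or I/O failure, NULL for a directory hole, and a valid buffer head otherwise. A sketch of the caller contract this implies, with hole handling shown as a skip as in ext4_empty_dir() above (ext4_add_entry() instead allocates the block):

    bh = ext4_read_dirblock(dir, block, DIRENT);
    if (IS_ERR(bh))
            return PTR_ERR(bh);     /* corruption or I/O error */
    if (!bh)
            continue;               /* hole: legal for DIRENT/EITHER, skip */
    /* ... scan the entries, then brelse(bh) ... */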
+diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
+index efd0ce9489ae..668f9021cf11 100644
+--- a/fs/jbd2/commit.c
++++ b/fs/jbd2/commit.c
+@@ -187,14 +187,15 @@ static int journal_wait_on_commit_record(journal_t *journal,
+  * use writepages() because with delayed allocation we may be doing
+  * block allocation in writepages().
+  */
+-static int journal_submit_inode_data_buffers(struct address_space *mapping)
++static int journal_submit_inode_data_buffers(struct address_space *mapping,
++		loff_t dirty_start, loff_t dirty_end)
+ {
+ 	int ret;
+ 	struct writeback_control wbc = {
+ 		.sync_mode =  WB_SYNC_ALL,
+ 		.nr_to_write = mapping->nrpages * 2,
+-		.range_start = 0,
+-		.range_end = i_size_read(mapping->host),
++		.range_start = dirty_start,
++		.range_end = dirty_end,
+ 	};
+ 
+ 	ret = generic_writepages(mapping, &wbc);
+@@ -218,6 +219,9 @@ static int journal_submit_data_buffers(journal_t *journal,
+ 
+ 	spin_lock(&journal->j_list_lock);
+ 	list_for_each_entry(jinode, &commit_transaction->t_inode_list, i_list) {
++		loff_t dirty_start = jinode->i_dirty_start;
++		loff_t dirty_end = jinode->i_dirty_end;
++
+ 		if (!(jinode->i_flags & JI_WRITE_DATA))
+ 			continue;
+ 		mapping = jinode->i_vfs_inode->i_mapping;
+@@ -230,7 +234,8 @@ static int journal_submit_data_buffers(journal_t *journal,
+ 		 * only allocated blocks here.
+ 		 */
+ 		trace_jbd2_submit_inode_data(jinode->i_vfs_inode);
+-		err = journal_submit_inode_data_buffers(mapping);
++		err = journal_submit_inode_data_buffers(mapping, dirty_start,
++				dirty_end);
+ 		if (!ret)
+ 			ret = err;
+ 		spin_lock(&journal->j_list_lock);
+@@ -257,12 +262,16 @@ static int journal_finish_inode_data_buffers(journal_t *journal,
+ 	/* For locking, see the comment in journal_submit_data_buffers() */
+ 	spin_lock(&journal->j_list_lock);
+ 	list_for_each_entry(jinode, &commit_transaction->t_inode_list, i_list) {
++		loff_t dirty_start = jinode->i_dirty_start;
++		loff_t dirty_end = jinode->i_dirty_end;
++
+ 		if (!(jinode->i_flags & JI_WAIT_DATA))
+ 			continue;
+ 		jinode->i_flags |= JI_COMMIT_RUNNING;
+ 		spin_unlock(&journal->j_list_lock);
+-		err = filemap_fdatawait_keep_errors(
+-				jinode->i_vfs_inode->i_mapping);
++		err = filemap_fdatawait_range_keep_errors(
++				jinode->i_vfs_inode->i_mapping, dirty_start,
++				dirty_end);
+ 		if (!ret)
+ 			ret = err;
+ 		spin_lock(&journal->j_list_lock);
+@@ -282,6 +291,8 @@ static int journal_finish_inode_data_buffers(journal_t *journal,
+ 				&jinode->i_transaction->t_inode_list);
+ 		} else {
+ 			jinode->i_transaction = NULL;
++			jinode->i_dirty_start = 0;
++			jinode->i_dirty_end = 0;
+ 		}
+ 	}
+ 	spin_unlock(&journal->j_list_lock);
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index 43df0c943229..e0382067c824 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -94,6 +94,8 @@ EXPORT_SYMBOL(jbd2_journal_try_to_free_buffers);
+ EXPORT_SYMBOL(jbd2_journal_force_commit);
+ EXPORT_SYMBOL(jbd2_journal_inode_add_write);
+ EXPORT_SYMBOL(jbd2_journal_inode_add_wait);
++EXPORT_SYMBOL(jbd2_journal_inode_ranged_write);
++EXPORT_SYMBOL(jbd2_journal_inode_ranged_wait);
+ EXPORT_SYMBOL(jbd2_journal_init_jbd_inode);
+ EXPORT_SYMBOL(jbd2_journal_release_jbd_inode);
+ EXPORT_SYMBOL(jbd2_journal_begin_ordered_truncate);
+@@ -2574,6 +2576,8 @@ void jbd2_journal_init_jbd_inode(struct jbd2_inode *jinode, struct inode *inode)
+ 	jinode->i_next_transaction = NULL;
+ 	jinode->i_vfs_inode = inode;
+ 	jinode->i_flags = 0;
++	jinode->i_dirty_start = 0;
++	jinode->i_dirty_end = 0;
+ 	INIT_LIST_HEAD(&jinode->i_list);
+ }
+ 
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index 8ca4fddc705f..990e7b5062e7 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -2565,7 +2565,7 @@ void jbd2_journal_refile_buffer(journal_t *journal, struct journal_head *jh)
+  * File inode in the inode list of the handle's transaction
+  */
+ static int jbd2_journal_file_inode(handle_t *handle, struct jbd2_inode *jinode,
+-				   unsigned long flags)
++		unsigned long flags, loff_t start_byte, loff_t end_byte)
+ {
+ 	transaction_t *transaction = handle->h_transaction;
+ 	journal_t *journal;
+@@ -2577,26 +2577,17 @@ static int jbd2_journal_file_inode(handle_t *handle, struct jbd2_inode *jinode,
+ 	jbd_debug(4, "Adding inode %lu, tid:%d\n", jinode->i_vfs_inode->i_ino,
+ 			transaction->t_tid);
+ 
+-	/*
+-	 * First check whether inode isn't already on the transaction's
+-	 * lists without taking the lock. Note that this check is safe
+-	 * without the lock as we cannot race with somebody removing inode
+-	 * from the transaction. The reason is that we remove inode from the
+-	 * transaction only in journal_release_jbd_inode() and when we commit
+-	 * the transaction. We are guarded from the first case by holding
+-	 * a reference to the inode. We are safe against the second case
+-	 * because if jinode->i_transaction == transaction, commit code
+-	 * cannot touch the transaction because we hold reference to it,
+-	 * and if jinode->i_next_transaction == transaction, commit code
+-	 * will only file the inode where we want it.
+-	 */
+-	if ((jinode->i_transaction == transaction ||
+-	    jinode->i_next_transaction == transaction) &&
+-	    (jinode->i_flags & flags) == flags)
+-		return 0;
+-
+ 	spin_lock(&journal->j_list_lock);
+ 	jinode->i_flags |= flags;
++
++	if (jinode->i_dirty_end) {
++		jinode->i_dirty_start = min(jinode->i_dirty_start, start_byte);
++		jinode->i_dirty_end = max(jinode->i_dirty_end, end_byte);
++	} else {
++		jinode->i_dirty_start = start_byte;
++		jinode->i_dirty_end = end_byte;
++	}
++
+ 	/* Is inode already attached where we need it? */
+ 	if (jinode->i_transaction == transaction ||
+ 	    jinode->i_next_transaction == transaction)
+@@ -2631,12 +2622,28 @@ done:
+ int jbd2_journal_inode_add_write(handle_t *handle, struct jbd2_inode *jinode)
+ {
+ 	return jbd2_journal_file_inode(handle, jinode,
+-				       JI_WRITE_DATA | JI_WAIT_DATA);
++			JI_WRITE_DATA | JI_WAIT_DATA, 0, LLONG_MAX);
+ }
+ 
+ int jbd2_journal_inode_add_wait(handle_t *handle, struct jbd2_inode *jinode)
+ {
+-	return jbd2_journal_file_inode(handle, jinode, JI_WAIT_DATA);
++	return jbd2_journal_file_inode(handle, jinode, JI_WAIT_DATA, 0,
++			LLONG_MAX);
++}
++
++int jbd2_journal_inode_ranged_write(handle_t *handle,
++		struct jbd2_inode *jinode, loff_t start_byte, loff_t length)
++{
++	return jbd2_journal_file_inode(handle, jinode,
++			JI_WRITE_DATA | JI_WAIT_DATA, start_byte,
++			start_byte + length - 1);
++}
++
++int jbd2_journal_inode_ranged_wait(handle_t *handle, struct jbd2_inode *jinode,
++		loff_t start_byte, loff_t length)
++{
++	return jbd2_journal_file_inode(handle, jinode, JI_WAIT_DATA,
++			start_byte, start_byte + length - 1);
+ }
+ 
+ /*
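Taken together, the jbd2 changes give filesystems a ranged alternative to the whole-file helpers: a data write attaches only its own byte range to the running transaction, and commit then submits and waits on exactly that range, as the move_extent.c hunk above demonstrates. A minimal sketch of a filesystem caller (the wrapper is illustrative; only the jbd2 call is real):

    /* Illustrative wrapper: journal only the bytes touched by this
     * write.  Commit will write back and wait on the range
     * [pos, pos + len - 1] instead of [0, i_size]. */
    static int example_order_data(handle_t *handle,
                                  struct jbd2_inode *jinode,
                                  loff_t pos, loff_t len)
    {
            return jbd2_journal_inode_ranged_write(handle, jinode, pos, len);
    }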
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 08a84d130120..1f270be8204f 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -344,6 +344,11 @@ struct queue_limits {
+ 
+ #ifdef CONFIG_BLK_DEV_ZONED
+ 
++/*
++ * Maximum number of zones to report with a single report zones command.
++ */
++#define BLK_ZONED_REPORT_MAX_ZONES	8192U
++
+ extern unsigned int blkdev_nr_zones(struct block_device *bdev);
+ extern int blkdev_report_zones(struct block_device *bdev,
+ 			       sector_t sector, struct blk_zone *zones,
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index dd28e7679089..c26d24caeb14 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -2703,6 +2703,8 @@ extern int filemap_flush(struct address_space *);
+ extern int filemap_fdatawait_keep_errors(struct address_space *mapping);
+ extern int filemap_fdatawait_range(struct address_space *, loff_t lstart,
+ 				   loff_t lend);
++extern int filemap_fdatawait_range_keep_errors(struct address_space *mapping,
++		loff_t start_byte, loff_t end_byte);
+ 
+ static inline int filemap_fdatawait(struct address_space *mapping)
+ {
+diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
+index 2cf6e04b08fc..f7325f32f78f 100644
+--- a/include/linux/jbd2.h
++++ b/include/linux/jbd2.h
+@@ -454,6 +454,22 @@ struct jbd2_inode {
+ 	 * @i_flags: Flags of inode [j_list_lock]
+ 	 */
+ 	unsigned long i_flags;
++
++	/**
++	 * @i_dirty_start:
++	 *
++	 * Offset in bytes where the dirty range for this inode starts.
++	 * [j_list_lock]
++	 */
++	loff_t i_dirty_start;
++
++	/**
++	 * @i_dirty_end:
++	 *
++	 * Inclusive offset in bytes where the dirty range for this inode
++	 * ends. [j_list_lock]
++	 */
++	loff_t i_dirty_end;
+ };
+ 
+ struct jbd2_revoke_table_s;
+@@ -1400,6 +1416,12 @@ extern int	   jbd2_journal_force_commit(journal_t *);
+ extern int	   jbd2_journal_force_commit_nested(journal_t *);
+ extern int	   jbd2_journal_inode_add_write(handle_t *handle, struct jbd2_inode *inode);
+ extern int	   jbd2_journal_inode_add_wait(handle_t *handle, struct jbd2_inode *inode);
++extern int	   jbd2_journal_inode_ranged_write(handle_t *handle,
++			struct jbd2_inode *inode, loff_t start_byte,
++			loff_t length);
++extern int	   jbd2_journal_inode_ranged_wait(handle_t *handle,
++			struct jbd2_inode *inode, loff_t start_byte,
++			loff_t length);
+ extern int	   jbd2_journal_begin_ordered_truncate(journal_t *journal,
+ 				struct jbd2_inode *inode, loff_t new_size);
+ extern void	   jbd2_journal_init_jbd_inode(struct jbd2_inode *jinode, struct inode *inode);
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index 3b83288749c6..c5dabaff1732 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -716,7 +716,8 @@ struct mlx5_ifc_per_protocol_networking_offload_caps_bits {
+ 	u8         swp[0x1];
+ 	u8         swp_csum[0x1];
+ 	u8         swp_lso[0x1];
+-	u8         reserved_at_23[0xd];
++	u8         cqe_checksum_full[0x1];
++	u8         reserved_at_24[0xc];
+ 	u8         max_vxlan_udp_ports[0x8];
+ 	u8         reserved_at_38[0x6];
+ 	u8         max_geneve_opt_len[0x1];
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index 1f678f023850..0d35c4df8108 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -1044,6 +1044,11 @@ static inline int in_software_context(struct perf_event *event)
+ 	return event->ctx->pmu->task_ctx_nr == perf_sw_context;
+ }
+ 
++static inline int is_exclusive_pmu(struct pmu *pmu)
++{
++	return pmu->capabilities & PERF_PMU_CAP_EXCLUSIVE;
++}
++
+ extern struct static_key perf_swevent_enabled[PERF_COUNT_SW_MAX];
+ 
+ extern void ___perf_sw_event(u32, u64, struct pt_regs *, u64);
+diff --git a/include/net/dst.h b/include/net/dst.h
+index 6cf0870414c7..ffc8ee0ea5e5 100644
+--- a/include/net/dst.h
++++ b/include/net/dst.h
+@@ -313,8 +313,9 @@ static inline bool dst_hold_safe(struct dst_entry *dst)
+  * @skb: buffer
+  *
+  * If dst is not yet refcounted and not destroyed, grab a ref on it.
++ * Returns true if dst is refcounted.
+  */
+-static inline void skb_dst_force(struct sk_buff *skb)
++static inline bool skb_dst_force(struct sk_buff *skb)
+ {
+ 	if (skb_dst_is_noref(skb)) {
+ 		struct dst_entry *dst = skb_dst(skb);
+@@ -325,6 +326,8 @@ static inline void skb_dst_force(struct sk_buff *skb)
+ 
+ 		skb->_skb_refdst = (unsigned long)dst;
+ 	}
++
++	return skb->_skb_refdst != 0UL;
+ }
+ 
+ 
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 36fcd0ad0515..51f07f57ffa4 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -1067,7 +1067,8 @@ void tcp_get_default_congestion_control(struct net *net, char *name);
+ void tcp_get_available_congestion_control(char *buf, size_t len);
+ void tcp_get_allowed_congestion_control(char *buf, size_t len);
+ int tcp_set_allowed_congestion_control(char *allowed);
+-int tcp_set_congestion_control(struct sock *sk, const char *name, bool load, bool reinit);
++int tcp_set_congestion_control(struct sock *sk, const char *name, bool load,
++			       bool reinit, bool cap_net_admin);
+ u32 tcp_slow_start(struct tcp_sock *tp, u32 acked);
+ void tcp_cong_avoid_ai(struct tcp_sock *tp, u32 w, u32 acked);
+ 
+@@ -1679,6 +1680,11 @@ static inline struct sk_buff *tcp_rtx_queue_head(const struct sock *sk)
+ 	return skb_rb_first(&sk->tcp_rtx_queue);
+ }
+ 
++static inline struct sk_buff *tcp_rtx_queue_tail(const struct sock *sk)
++{
++	return skb_rb_last(&sk->tcp_rtx_queue);
++}
++
+ static inline struct sk_buff *tcp_write_queue_head(const struct sock *sk)
+ {
+ 	return skb_peek(&sk->sk_write_queue);
+diff --git a/include/net/tls.h b/include/net/tls.h
+index a67ad7d56ff2..22de0f06d455 100644
+--- a/include/net/tls.h
++++ b/include/net/tls.h
+@@ -285,6 +285,7 @@ struct tls_offload_context_rx {
+ 	(ALIGN(sizeof(struct tls_offload_context_rx), sizeof(void *)) + \
+ 	 TLS_DRIVER_STATE_SIZE)
+ 
++void tls_ctx_free(struct tls_context *ctx);
+ int wait_on_pending_writer(struct sock *sk, long *timeo);
+ int tls_sk_query(struct sock *sk, int optname, char __user *optval,
+ 		int __user *optlen);
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index f33bd0a89391..28fa3e7fbc02 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -2543,6 +2543,9 @@ unlock:
+ 	return ret;
+ }
+ 
++static bool exclusive_event_installable(struct perf_event *event,
++					struct perf_event_context *ctx);
++
+ /*
+  * Attach a performance event to a context.
+  *
+@@ -2557,6 +2560,8 @@ perf_install_in_context(struct perf_event_context *ctx,
+ 
+ 	lockdep_assert_held(&ctx->mutex);
+ 
++	WARN_ON_ONCE(!exclusive_event_installable(event, ctx));
++
+ 	if (event->cpu != -1)
+ 		event->cpu = cpu;
+ 
+@@ -4348,7 +4353,7 @@ static int exclusive_event_init(struct perf_event *event)
+ {
+ 	struct pmu *pmu = event->pmu;
+ 
+-	if (!(pmu->capabilities & PERF_PMU_CAP_EXCLUSIVE))
++	if (!is_exclusive_pmu(pmu))
+ 		return 0;
+ 
+ 	/*
+@@ -4379,7 +4384,7 @@ static void exclusive_event_destroy(struct perf_event *event)
+ {
+ 	struct pmu *pmu = event->pmu;
+ 
+-	if (!(pmu->capabilities & PERF_PMU_CAP_EXCLUSIVE))
++	if (!is_exclusive_pmu(pmu))
+ 		return;
+ 
+ 	/* see comment in exclusive_event_init() */
+@@ -4399,14 +4404,15 @@ static bool exclusive_event_match(struct perf_event *e1, struct perf_event *e2)
+ 	return false;
+ }
+ 
+-/* Called under the same ctx::mutex as perf_install_in_context() */
+ static bool exclusive_event_installable(struct perf_event *event,
+ 					struct perf_event_context *ctx)
+ {
+ 	struct perf_event *iter_event;
+ 	struct pmu *pmu = event->pmu;
+ 
+-	if (!(pmu->capabilities & PERF_PMU_CAP_EXCLUSIVE))
++	lockdep_assert_held(&ctx->mutex);
++
++	if (!is_exclusive_pmu(pmu))
+ 		return true;
+ 
+ 	list_for_each_entry(iter_event, &ctx->event_list, event_entry) {
+@@ -4453,12 +4459,20 @@ static void _free_event(struct perf_event *event)
+ 	if (event->destroy)
+ 		event->destroy(event);
+ 
+-	if (event->ctx)
+-		put_ctx(event->ctx);
+-
++	/*
++	 * Must be after ->destroy(), due to uprobe_perf_close() using
++	 * hw.target.
++	 */
+ 	if (event->hw.target)
+ 		put_task_struct(event->hw.target);
+ 
++	/*
++	 * perf_event_free_task() relies on put_ctx() being 'last', in particular
++	 * all task references must be cleaned up.
++	 */
++	if (event->ctx)
++		put_ctx(event->ctx);
++
+ 	exclusive_event_destroy(event);
+ 	module_put(event->pmu->module);
+ 
+@@ -4638,8 +4652,17 @@ again:
+ 	mutex_unlock(&event->child_mutex);
+ 
+ 	list_for_each_entry_safe(child, tmp, &free_list, child_list) {
++		void *var = &child->ctx->refcount;
++
+ 		list_del(&child->child_list);
+ 		free_event(child);
++
++		/*
++		 * Wake any perf_event_free_task() waiting for this event to be
++		 * freed.
++		 */
++		smp_mb(); /* pairs with wait_var_event() */
++		wake_up_var(var);
+ 	}
+ 
+ no_ctx:
+@@ -10899,11 +10922,6 @@ SYSCALL_DEFINE5(perf_event_open,
+ 		goto err_alloc;
+ 	}
+ 
+-	if ((pmu->capabilities & PERF_PMU_CAP_EXCLUSIVE) && group_leader) {
+-		err = -EBUSY;
+-		goto err_context;
+-	}
+-
+ 	/*
+ 	 * Look up the group leader (we will attach this event to it):
+ 	 */
+@@ -10991,6 +11009,18 @@ SYSCALL_DEFINE5(perf_event_open,
+ 				move_group = 0;
+ 			}
+ 		}
++
++		/*
++		 * Failure to create exclusive events returns -EBUSY.
++		 */
++		err = -EBUSY;
++		if (!exclusive_event_installable(group_leader, ctx))
++			goto err_locked;
++
++		for_each_sibling_event(sibling, group_leader) {
++			if (!exclusive_event_installable(sibling, ctx))
++				goto err_locked;
++		}
+ 	} else {
+ 		mutex_lock(&ctx->mutex);
+ 	}
+@@ -11027,9 +11057,6 @@ SYSCALL_DEFINE5(perf_event_open,
+ 	 * because we need to serialize with concurrent event creation.
+ 	 */
+ 	if (!exclusive_event_installable(event, ctx)) {
+-		/* exclusive and group stuff are assumed mutually exclusive */
+-		WARN_ON_ONCE(move_group);
+-
+ 		err = -EBUSY;
+ 		goto err_locked;
+ 	}
+@@ -11496,11 +11523,11 @@ static void perf_free_event(struct perf_event *event,
+ }
+ 
+ /*
+- * Free an unexposed, unused context as created by inheritance by
+- * perf_event_init_task below, used by fork() in case of fail.
++ * Free a context as created by inheritance by perf_event_init_task() below,
++ * used by fork() in case of fail.
+  *
+- * Not all locks are strictly required, but take them anyway to be nice and
+- * help out with the lockdep assertions.
++ * Even though the task has never lived, the context and events have been
++ * exposed through the child_list, so we must take care tearing it all down.
+  */
+ void perf_event_free_task(struct task_struct *task)
+ {
+@@ -11530,7 +11557,23 @@ void perf_event_free_task(struct task_struct *task)
+ 			perf_free_event(event, ctx);
+ 
+ 		mutex_unlock(&ctx->mutex);
+-		put_ctx(ctx);
++
++		/*
++		 * perf_event_release_kernel() could've stolen some of our
++		 * child events and still have them on its free_list. In that
++		 * case we must wait for these events to have been freed (in
++		 * particular all their references to this task must've been
++		 * dropped).
++		 *
++		 * Without this copy_process() will unconditionally free this
++		 * task (irrespective of its reference count) and
++		 * _free_event()'s put_task_struct(event->hw.target) will be a
++		 * use-after-free.
++		 *
++		 * Wait for all events to drop their context reference.
++		 */
++		wait_var_event(&ctx->refcount, refcount_read(&ctx->refcount) == 1);
++		put_ctx(ctx); /* must be last */
+ 	}
+ }
+ 
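The last two hunks form a wait/wake pair on the context refcount: perf_event_release_kernel() wakes after each stolen child event is freed, and perf_event_free_task() sleeps until only its own reference remains before the final put_ctx(). Reduced to the bare pattern (a sketch, not the exact kernel code):

    /* Waker side: drop the reference, then wake any waiter. */
    free_event(child);              /* drops a ctx reference */
    smp_mb();                       /* pairs with wait_var_event() */
    wake_up_var(&ctx->refcount);

    /* Waiter side: sleep until ours is the last reference. */
    wait_var_event(&ctx->refcount, refcount_read(&ctx->refcount) == 1);
    put_ctx(ctx);                   /* must be last */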
+diff --git a/mm/filemap.c b/mm/filemap.c
+index d78f577baef2..60177605c633 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -547,6 +547,28 @@ int filemap_fdatawait_range(struct address_space *mapping, loff_t start_byte,
+ }
+ EXPORT_SYMBOL(filemap_fdatawait_range);
+ 
++/**
++ * filemap_fdatawait_range_keep_errors - wait for writeback to complete
++ * @mapping:		address space structure to wait for
++ * @start_byte:		offset in bytes where the range starts
++ * @end_byte:		offset in bytes where the range ends (inclusive)
++ *
++ * Walk the list of under-writeback pages of the given address space in the
++ * given range and wait for all of them.  Unlike filemap_fdatawait_range(),
++ * this function does not clear error status of the address space.
++ *
++ * Use this function if callers don't handle errors themselves.  Expected
++ * call sites are system-wide / filesystem-wide data flushers: e.g. sync(2),
++ * fsfreeze(8)
++ */
++int filemap_fdatawait_range_keep_errors(struct address_space *mapping,
++		loff_t start_byte, loff_t end_byte)
++{
++	__filemap_fdatawait_range(mapping, start_byte, end_byte);
++	return filemap_check_and_keep_errors(mapping);
++}
++EXPORT_SYMBOL(filemap_fdatawait_range_keep_errors);
++
+ /**
+  * file_fdatawait_range - wait for writeback to complete
+  * @file:		file pointing to address space structure to wait for
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index dbcf2cd5e7e9..223566fb11ca 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -2176,7 +2176,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
+  *   10TB     320        32GB
+  */
+ static bool inactive_list_is_low(struct lruvec *lruvec, bool file,
+-				 struct scan_control *sc, bool actual_reclaim)
++				 struct scan_control *sc, bool trace)
+ {
+ 	enum lru_list active_lru = file * LRU_FILE + LRU_ACTIVE;
+ 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
+@@ -2202,7 +2202,7 @@ static bool inactive_list_is_low(struct lruvec *lruvec, bool file,
+ 	 * rid of the stale workingset quickly.
+ 	 */
+ 	refaults = lruvec_page_state(lruvec, WORKINGSET_ACTIVATE);
+-	if (file && actual_reclaim && lruvec->refaults != refaults) {
++	if (file && lruvec->refaults != refaults) {
+ 		inactive_ratio = 0;
+ 	} else {
+ 		gb = (inactive + active) >> (30 - PAGE_SHIFT);
+@@ -2212,7 +2212,7 @@ static bool inactive_list_is_low(struct lruvec *lruvec, bool file,
+ 			inactive_ratio = 1;
+ 	}
+ 
+-	if (actual_reclaim)
++	if (trace)
+ 		trace_mm_vmscan_inactive_list_is_low(pgdat->node_id, sc->reclaim_idx,
+ 			lruvec_lru_size(lruvec, inactive_lru, MAX_NR_ZONES), inactive,
+ 			lruvec_lru_size(lruvec, active_lru, MAX_NR_ZONES), active,
+diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c
+index ba303ee99b9b..6a9f48322bb9 100644
+--- a/net/bridge/br_input.c
++++ b/net/bridge/br_input.c
+@@ -79,7 +79,6 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb
+ 	struct net_bridge_fdb_entry *dst = NULL;
+ 	struct net_bridge_mdb_entry *mdst;
+ 	bool local_rcv, mcast_hit = false;
+-	const unsigned char *dest;
+ 	struct net_bridge *br;
+ 	u16 vid = 0;
+ 
+@@ -97,10 +96,9 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb
+ 		br_fdb_update(br, p, eth_hdr(skb)->h_source, vid, false);
+ 
+ 	local_rcv = !!(br->dev->flags & IFF_PROMISC);
+-	dest = eth_hdr(skb)->h_dest;
+-	if (is_multicast_ether_addr(dest)) {
++	if (is_multicast_ether_addr(eth_hdr(skb)->h_dest)) {
+ 		/* by definition the broadcast is also a multicast address */
+-		if (is_broadcast_ether_addr(dest)) {
++		if (is_broadcast_ether_addr(eth_hdr(skb)->h_dest)) {
+ 			pkt_type = BR_PKT_BROADCAST;
+ 			local_rcv = true;
+ 		} else {
+@@ -150,7 +148,7 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb
+ 		}
+ 		break;
+ 	case BR_PKT_UNICAST:
+-		dst = br_fdb_find_rcu(br, dest, vid);
++		dst = br_fdb_find_rcu(br, eth_hdr(skb)->h_dest, vid);
+ 	default:
+ 		break;
+ 	}
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index 45e7f4173bba..0ef4092202d0 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -934,6 +934,7 @@ static int br_ip4_multicast_igmp3_report(struct net_bridge *br,
+ 	int type;
+ 	int err = 0;
+ 	__be32 group;
++	u16 nsrcs;
+ 
+ 	ih = igmpv3_report_hdr(skb);
+ 	num = ntohs(ih->ngrec);
+@@ -947,8 +948,9 @@ static int br_ip4_multicast_igmp3_report(struct net_bridge *br,
+ 		grec = (void *)(skb->data + len - sizeof(*grec));
+ 		group = grec->grec_mca;
+ 		type = grec->grec_type;
++		nsrcs = ntohs(grec->grec_nsrcs);
+ 
+-		len += ntohs(grec->grec_nsrcs) * 4;
++		len += nsrcs * 4;
+ 		if (!ip_mc_may_pull(skb, len))
+ 			return -EINVAL;
+ 
+@@ -969,7 +971,7 @@ static int br_ip4_multicast_igmp3_report(struct net_bridge *br,
+ 		src = eth_hdr(skb)->h_source;
+ 		if ((type == IGMPV3_CHANGE_TO_INCLUDE ||
+ 		     type == IGMPV3_MODE_IS_INCLUDE) &&
+-		    ntohs(grec->grec_nsrcs) == 0) {
++		    nsrcs == 0) {
+ 			br_ip4_multicast_leave_group(br, port, group, vid, src);
+ 		} else {
+ 			err = br_ip4_multicast_add_group(br, port, group, vid,
+@@ -1006,7 +1008,8 @@ static int br_ip6_multicast_mld2_report(struct net_bridge *br,
+ 	len = skb_transport_offset(skb) + sizeof(*icmp6h);
+ 
+ 	for (i = 0; i < num; i++) {
+-		__be16 *nsrcs, _nsrcs;
++		__be16 *_nsrcs, __nsrcs;
++		u16 nsrcs;
+ 
+ 		nsrcs_offset = len + offsetof(struct mld2_grec, grec_nsrcs);
+ 
+@@ -1014,12 +1017,13 @@ static int br_ip6_multicast_mld2_report(struct net_bridge *br,
+ 		    nsrcs_offset + sizeof(_nsrcs))
+ 			return -EINVAL;
+ 
+-		nsrcs = skb_header_pointer(skb, nsrcs_offset,
+-					   sizeof(_nsrcs), &_nsrcs);
+-		if (!nsrcs)
++		_nsrcs = skb_header_pointer(skb, nsrcs_offset,
++					    sizeof(__nsrcs), &__nsrcs);
++		if (!_nsrcs)
+ 			return -EINVAL;
+ 
+-		grec_len = struct_size(grec, grec_src, ntohs(*nsrcs));
++		nsrcs = ntohs(*_nsrcs);
++		grec_len = struct_size(grec, grec_src, nsrcs);
+ 
+ 		if (!ipv6_mc_may_pull(skb, len + grec_len))
+ 			return -EINVAL;
+@@ -1044,7 +1048,7 @@ static int br_ip6_multicast_mld2_report(struct net_bridge *br,
+ 		src = eth_hdr(skb)->h_source;
+ 		if ((grec->grec_type == MLD2_CHANGE_TO_INCLUDE ||
+ 		     grec->grec_type == MLD2_MODE_IS_INCLUDE) &&
+-		    ntohs(*nsrcs) == 0) {
++		    nsrcs == 0) {
+ 			br_ip6_multicast_leave_group(br, port, &grec->grec_mca,
+ 						     vid, src);
+ 		} else {
+@@ -1298,7 +1302,6 @@ static int br_ip6_multicast_query(struct net_bridge *br,
+ 				  u16 vid)
+ {
+ 	unsigned int transport_len = ipv6_transport_len(skb);
+-	const struct ipv6hdr *ip6h = ipv6_hdr(skb);
+ 	struct mld_msg *mld;
+ 	struct net_bridge_mdb_entry *mp;
+ 	struct mld2_query *mld2q;
+@@ -1342,7 +1345,7 @@ static int br_ip6_multicast_query(struct net_bridge *br,
+ 
+ 	if (is_general_query) {
+ 		saddr.proto = htons(ETH_P_IPV6);
+-		saddr.u.ip6 = ip6h->saddr;
++		saddr.u.ip6 = ipv6_hdr(skb)->saddr;
+ 
+ 		br_multicast_query_received(br, port, &br->ip6_other_query,
+ 					    &saddr, max_delay);
+diff --git a/net/bridge/br_stp_bpdu.c b/net/bridge/br_stp_bpdu.c
+index 1b75d6bf12bd..37ddcea3fc96 100644
+--- a/net/bridge/br_stp_bpdu.c
++++ b/net/bridge/br_stp_bpdu.c
+@@ -147,7 +147,6 @@ void br_send_tcn_bpdu(struct net_bridge_port *p)
+ void br_stp_rcv(const struct stp_proto *proto, struct sk_buff *skb,
+ 		struct net_device *dev)
+ {
+-	const unsigned char *dest = eth_hdr(skb)->h_dest;
+ 	struct net_bridge_port *p;
+ 	struct net_bridge *br;
+ 	const unsigned char *buf;
+@@ -176,7 +175,7 @@ void br_stp_rcv(const struct stp_proto *proto, struct sk_buff *skb,
+ 	if (p->state == BR_STATE_DISABLED)
+ 		goto out;
+ 
+-	if (!ether_addr_equal(dest, br->group_addr))
++	if (!ether_addr_equal(eth_hdr(skb)->h_dest, br->group_addr))
+ 		goto out;
+ 
+ 	if (p->flags & BR_BPDU_GUARD) {
+diff --git a/net/core/filter.c b/net/core/filter.c
+index b76f14197128..b8893566339f 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -4211,7 +4211,7 @@ BPF_CALL_5(bpf_setsockopt, struct bpf_sock_ops_kern *, bpf_sock,
+ 						    TCP_CA_NAME_MAX-1));
+ 			name[TCP_CA_NAME_MAX-1] = 0;
+ 			ret = tcp_set_congestion_control(sk, name, false,
+-							 reinit);
++							 reinit, true);
+ 		} else {
+ 			struct tcp_sock *tp = tcp_sk(sk);
+ 
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index cce4fbcd7dcb..2f693f1168e1 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -1126,6 +1126,7 @@ int __neigh_event_send(struct neighbour *neigh, struct sk_buff *skb)
+ 
+ 			atomic_set(&neigh->probes,
+ 				   NEIGH_VAR(neigh->parms, UCAST_PROBES));
++			neigh_del_timer(neigh);
+ 			neigh->nud_state     = NUD_INCOMPLETE;
+ 			neigh->updated = now;
+ 			next = now + max(NEIGH_VAR(neigh->parms, RETRANS_TIME),
+@@ -1142,6 +1143,7 @@ int __neigh_event_send(struct neighbour *neigh, struct sk_buff *skb)
+ 		}
+ 	} else if (neigh->nud_state & NUD_STALE) {
+ 		neigh_dbg(2, "neigh %p is delayed\n", neigh);
++		neigh_del_timer(neigh);
+ 		neigh->nud_state = NUD_DELAY;
+ 		neigh->updated = jiffies;
+ 		neigh_add_timer(neigh, jiffies +
+diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
+index eb514f312e6f..83944b7480c8 100644
+--- a/net/ipv4/devinet.c
++++ b/net/ipv4/devinet.c
+@@ -66,6 +66,11 @@
+ #include <net/net_namespace.h>
+ #include <net/addrconf.h>
+ 
++#define IPV6ONLY_FLAGS	\
++		(IFA_F_NODAD | IFA_F_OPTIMISTIC | IFA_F_DADFAILED | \
++		 IFA_F_HOMEADDRESS | IFA_F_TENTATIVE | \
++		 IFA_F_MANAGETEMPADDR | IFA_F_STABLE_PRIVACY)
++
+ static struct ipv4_devconf ipv4_devconf = {
+ 	.data = {
+ 		[IPV4_DEVCONF_ACCEPT_REDIRECTS - 1] = 1,
+@@ -472,6 +477,9 @@ static int __inet_insert_ifa(struct in_ifaddr *ifa, struct nlmsghdr *nlh,
+ 	ifa->ifa_flags &= ~IFA_F_SECONDARY;
+ 	last_primary = &in_dev->ifa_list;
+ 
++	/* Don't set IPv6 only flags to IPv4 addresses */
++	ifa->ifa_flags &= ~IPV6ONLY_FLAGS;
++
+ 	for (ifap = &in_dev->ifa_list; (ifa1 = *ifap) != NULL;
+ 	     ifap = &ifa1->ifa_next) {
+ 		if (!(ifa1->ifa_flags & IFA_F_SECONDARY) &&
+diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
+index eb03153dfe12..792d16f7b62d 100644
+--- a/net/ipv4/igmp.c
++++ b/net/ipv4/igmp.c
+@@ -1232,12 +1232,8 @@ static void igmpv3_del_delrec(struct in_device *in_dev, struct ip_mc_list *im)
+ 	if (pmc) {
+ 		im->interface = pmc->interface;
+ 		if (im->sfmode == MCAST_INCLUDE) {
+-			im->tomb = pmc->tomb;
+-			pmc->tomb = NULL;
+-
+-			im->sources = pmc->sources;
+-			pmc->sources = NULL;
+-
++			swap(im->tomb, pmc->tomb);
++			swap(im->sources, pmc->sources);
+ 			for (psf = im->sources; psf; psf = psf->sf_next)
+ 				psf->sf_crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv;
+ 		} else {
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 365c8490b34b..caac580e1f1d 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -2630,6 +2630,8 @@ int tcp_disconnect(struct sock *sk, int flags)
+ 	tcp_saved_syn_free(tp);
+ 	tp->compressed_ack = 0;
+ 	tp->bytes_sent = 0;
++	tp->bytes_acked = 0;
++	tp->bytes_received = 0;
+ 	tp->bytes_retrans = 0;
+ 	tp->duplicate_sack[0].start_seq = 0;
+ 	tp->duplicate_sack[0].end_seq = 0;
+@@ -2784,7 +2786,9 @@ static int do_tcp_setsockopt(struct sock *sk, int level,
+ 		name[val] = 0;
+ 
+ 		lock_sock(sk);
+-		err = tcp_set_congestion_control(sk, name, true, true);
++		err = tcp_set_congestion_control(sk, name, true, true,
++						 ns_capable(sock_net(sk)->user_ns,
++							    CAP_NET_ADMIN));
+ 		release_sock(sk);
+ 		return err;
+ 	}
+diff --git a/net/ipv4/tcp_cong.c b/net/ipv4/tcp_cong.c
+index bc6c02f16243..48f79db446a0 100644
+--- a/net/ipv4/tcp_cong.c
++++ b/net/ipv4/tcp_cong.c
+@@ -332,7 +332,8 @@ out:
+  * tcp_reinit_congestion_control (if the current congestion control was
+  * already initialized).
+  */
+-int tcp_set_congestion_control(struct sock *sk, const char *name, bool load, bool reinit)
++int tcp_set_congestion_control(struct sock *sk, const char *name, bool load,
++			       bool reinit, bool cap_net_admin)
+ {
+ 	struct inet_connection_sock *icsk = inet_csk(sk);
+ 	const struct tcp_congestion_ops *ca;
+@@ -368,8 +369,7 @@ int tcp_set_congestion_control(struct sock *sk, const char *name, bool load, boo
+ 		} else {
+ 			err = -EBUSY;
+ 		}
+-	} else if (!((ca->flags & TCP_CONG_NON_RESTRICTED) ||
+-		     ns_capable(sock_net(sk)->user_ns, CAP_NET_ADMIN))) {
++	} else if (!((ca->flags & TCP_CONG_NON_RESTRICTED) || cap_net_admin)) {
+ 		err = -EPERM;
+ 	} else if (!try_module_get(ca->owner)) {
+ 		err = -EBUSY;
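Moving the capability decision to the setsockopt() path means it is made with the caller's own credentials, while the BPF path passes cap_net_admin explicitly. The userspace interface is unchanged; selecting an algorithm still looks like this, and restricted algorithms still require CAP_NET_ADMIN (the helper name is illustrative):

    #include <string.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Illustrative helper: pick a TCP congestion control algorithm. */
    static int set_cc(int fd, const char *name)
    {
            return setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION,
                              name, strlen(name));
    }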
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index b8b4ae555e34..32bd52e06ef1 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -1289,6 +1289,7 @@ int tcp_fragment(struct sock *sk, enum tcp_queue tcp_queue,
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 	struct sk_buff *buff;
+ 	int nsize, old_factor;
++	long limit;
+ 	int nlen;
+ 	u8 flags;
+ 
+@@ -1299,8 +1300,16 @@ int tcp_fragment(struct sock *sk, enum tcp_queue tcp_queue,
+ 	if (nsize < 0)
+ 		nsize = 0;
+ 
+-	if (unlikely((sk->sk_wmem_queued >> 1) > sk->sk_sndbuf &&
+-		     tcp_queue != TCP_FRAG_IN_WRITE_QUEUE)) {
++	/* tcp_sendmsg() can overshoot sk_wmem_queued by one full size skb.
++	 * We need some allowance to not penalize applications setting small
++	 * SO_SNDBUF values.
++	 * Also allow first and last skb in retransmit queue to be split.
++	 */
++	limit = sk->sk_sndbuf + 2 * SKB_TRUESIZE(GSO_MAX_SIZE);
++	if (unlikely((sk->sk_wmem_queued >> 1) > limit &&
++		     tcp_queue != TCP_FRAG_IN_WRITE_QUEUE &&
++		     skb != tcp_rtx_queue_head(sk) &&
++		     skb != tcp_rtx_queue_tail(sk))) {
+ 		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPWQUEUETOOBIG);
+ 		return -ENOMEM;
+ 	}
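The size of the new allowance is worth spelling out. Assuming GSO_MAX_SIZE of 65536 and a few hundred bytes of sk_buff plus skb_shared_info overhead per SKB_TRUESIZE() (exact values vary by architecture and config), roughly:

    /* limit = sk_sndbuf + 2 * SKB_TRUESIZE(GSO_MAX_SIZE)
     *       ~ sk_sndbuf + 2 * (65536 + overhead)
     *       ~ sk_sndbuf + 132 KiB
     *
     * so even a socket with a small SO_SNDBUF clears the check for
     * the one full-size skb tcp_sendmsg() can overshoot by, and the
     * head/tail exemption covers splits at the retransmit queue
     * edges. */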
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index 9915f64b38a0..4b1a898982d0 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -1113,8 +1113,24 @@ add:
+ 		err = call_fib6_entry_notifiers(info->nl_net,
+ 						FIB_EVENT_ENTRY_ADD,
+ 						rt, extack);
+-		if (err)
++		if (err) {
++			struct fib6_info *sibling, *next_sibling;
++
++			/* If the route has siblings, then it first
++			 * needs to be unlinked from them.
++			 */
++			if (!rt->fib6_nsiblings)
++				return err;
++
++			list_for_each_entry_safe(sibling, next_sibling,
++						 &rt->fib6_siblings,
++						 fib6_siblings)
++				sibling->fib6_nsiblings--;
++			rt->fib6_nsiblings = 0;
++			list_del_init(&rt->fib6_siblings);
++			rt6_multipath_rebalance(next_sibling);
+ 			return err;
++		}
+ 
+ 		rcu_assign_pointer(rt->fib6_next, iter);
+ 		atomic_inc(&rt->fib6_ref);
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index ab348489bd8a..9fc2d803c684 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -2183,7 +2183,7 @@ static struct dst_entry *rt6_check(struct rt6_info *rt,
+ {
+ 	u32 rt_cookie = 0;
+ 
+-	if ((from && !fib6_get_cookie_safe(from, &rt_cookie)) ||
++	if (!from || !fib6_get_cookie_safe(from, &rt_cookie) ||
+ 	    rt_cookie != cookie)
+ 		return NULL;
+ 
+diff --git a/net/netfilter/nf_queue.c b/net/netfilter/nf_queue.c
+index 5b86574e7b89..12a008cf8865 100644
+--- a/net/netfilter/nf_queue.c
++++ b/net/netfilter/nf_queue.c
+@@ -190,6 +190,11 @@ static int __nf_queue(struct sk_buff *skb, const struct nf_hook_state *state,
+ 		goto err;
+ 	}
+ 
++	if (!skb_dst_force(skb) && state->hook != NF_INET_PRE_ROUTING) {
++		status = -ENETDOWN;
++		goto err;
++	}
++
+ 	*entry = (struct nf_queue_entry) {
+ 		.skb	= skb,
+ 		.state	= *state,
+@@ -198,7 +203,6 @@ static int __nf_queue(struct sk_buff *skb, const struct nf_hook_state *state,
+ 	};
+ 
+ 	nf_queue_entry_get_refs(entry);
+-	skb_dst_force(skb);
+ 
+ 	switch (entry->state.pf) {
+ 	case AF_INET:
+diff --git a/net/netrom/af_netrom.c b/net/netrom/af_netrom.c
+index 71ffd1a6dc7c..43910e50752c 100644
+--- a/net/netrom/af_netrom.c
++++ b/net/netrom/af_netrom.c
+@@ -872,7 +872,7 @@ int nr_rx_frame(struct sk_buff *skb, struct net_device *dev)
+ 	unsigned short frametype, flags, window, timeout;
+ 	int ret;
+ 
+-	skb->sk = NULL;		/* Initially we don't know who it's for */
++	skb_orphan(skb);
+ 
+ 	/*
+ 	 *	skb->data points to the netrom frame start
+@@ -970,7 +970,9 @@ int nr_rx_frame(struct sk_buff *skb, struct net_device *dev)
+ 
+ 	window = skb->data[20];
+ 
++	sock_hold(make);
+ 	skb->sk             = make;
++	skb->destructor     = sock_efree;
+ 	make->sk_state	    = TCP_ESTABLISHED;
+ 
+ 	/* Fill in his circuit details */
+diff --git a/net/nfc/nci/data.c b/net/nfc/nci/data.c
+index 908f25e3773e..5405d073804c 100644
+--- a/net/nfc/nci/data.c
++++ b/net/nfc/nci/data.c
+@@ -119,7 +119,7 @@ static int nci_queue_tx_data_frags(struct nci_dev *ndev,
+ 	conn_info = nci_get_conn_info_by_conn_id(ndev, conn_id);
+ 	if (!conn_info) {
+ 		rc = -EPROTO;
+-		goto free_exit;
++		goto exit;
+ 	}
+ 
+ 	__skb_queue_head_init(&frags_q);
+diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
+index e47ebbbe71b8..b85b37518fc5 100644
+--- a/net/openvswitch/actions.c
++++ b/net/openvswitch/actions.c
+@@ -175,8 +175,7 @@ static void update_ethertype(struct sk_buff *skb, struct ethhdr *hdr,
+ 	if (skb->ip_summed == CHECKSUM_COMPLETE) {
+ 		__be16 diff[] = { ~(hdr->h_proto), ethertype };
+ 
+-		skb->csum = ~csum_partial((char *)diff, sizeof(diff),
+-					~skb->csum);
++		skb->csum = csum_partial((char *)diff, sizeof(diff), skb->csum);
+ 	}
+ 
+ 	hdr->h_proto = ethertype;
+@@ -268,8 +267,7 @@ static int set_mpls(struct sk_buff *skb, struct sw_flow_key *flow_key,
+ 	if (skb->ip_summed == CHECKSUM_COMPLETE) {
+ 		__be32 diff[] = { ~(stack->label_stack_entry), lse };
+ 
+-		skb->csum = ~csum_partial((char *)diff, sizeof(diff),
+-					  ~skb->csum);
++		skb->csum = csum_partial((char *)diff, sizeof(diff), skb->csum);
+ 	}
+ 
+ 	stack->label_stack_entry = lse;
+diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
+index ae8c5d7f3bf1..c77476273179 100644
+--- a/net/rxrpc/af_rxrpc.c
++++ b/net/rxrpc/af_rxrpc.c
+@@ -521,6 +521,7 @@ static int rxrpc_sendmsg(struct socket *sock, struct msghdr *m, size_t len)
+ 
+ 	switch (rx->sk.sk_state) {
+ 	case RXRPC_UNBOUND:
++	case RXRPC_CLIENT_UNBOUND:
+ 		rx->srx.srx_family = AF_RXRPC;
+ 		rx->srx.srx_service = 0;
+ 		rx->srx.transport_type = SOCK_DGRAM;
+@@ -545,10 +546,9 @@ static int rxrpc_sendmsg(struct socket *sock, struct msghdr *m, size_t len)
+ 		}
+ 
+ 		rx->local = local;
+-		rx->sk.sk_state = RXRPC_CLIENT_UNBOUND;
++		rx->sk.sk_state = RXRPC_CLIENT_BOUND;
+ 		/* Fall through */
+ 
+-	case RXRPC_CLIENT_UNBOUND:
+ 	case RXRPC_CLIENT_BOUND:
+ 		if (!m->msg_name &&
+ 		    test_bit(RXRPC_SOCK_CONNECTED, &rx->flags)) {
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 99ae30c177c7..23568679f07d 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -2162,6 +2162,9 @@ replay:
+ 		tfilter_notify(net, skb, n, tp, block, q, parent, fh,
+ 			       RTM_NEWTFILTER, false, rtnl_held);
+ 		tfilter_put(tp, fh);
++		/* q pointer is NULL for shared blocks */
++		if (q)
++			q->flags &= ~TCQ_F_CAN_BYPASS;
+ 	}
+ 
+ errout:
+diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c
+index cd04d40c30b6..1971f3a29730 100644
+--- a/net/sched/sch_fq_codel.c
++++ b/net/sched/sch_fq_codel.c
+@@ -600,8 +600,6 @@ static unsigned long fq_codel_find(struct Qdisc *sch, u32 classid)
+ static unsigned long fq_codel_bind(struct Qdisc *sch, unsigned long parent,
+ 			      u32 classid)
+ {
+-	/* we cannot bypass queue discipline anymore */
+-	sch->flags &= ~TCQ_F_CAN_BYPASS;
+ 	return 0;
+ }
+ 
+diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
+index 2f2678197760..650f21463853 100644
+--- a/net/sched/sch_sfq.c
++++ b/net/sched/sch_sfq.c
+@@ -828,8 +828,6 @@ static unsigned long sfq_find(struct Qdisc *sch, u32 classid)
+ static unsigned long sfq_bind(struct Qdisc *sch, unsigned long parent,
+ 			      u32 classid)
+ {
+-	/* we cannot bypass queue discipline anymore */
+-	sch->flags &= ~TCQ_F_CAN_BYPASS;
+ 	return 0;
+ }
+ 
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 4583fa914e62..e33382b3f82a 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -4828,35 +4828,17 @@ out_nounlock:
+ static int sctp_connect(struct sock *sk, struct sockaddr *addr,
+ 			int addr_len, int flags)
+ {
+-	struct inet_sock *inet = inet_sk(sk);
+ 	struct sctp_af *af;
+-	int err = 0;
++	int err = -EINVAL;
+ 
+ 	lock_sock(sk);
+-
+ 	pr_debug("%s: sk:%p, sockaddr:%p, addr_len:%d\n", __func__, sk,
+ 		 addr, addr_len);
+ 
+-	/* We may need to bind the socket. */
+-	if (!inet->inet_num) {
+-		if (sk->sk_prot->get_port(sk, 0)) {
+-			release_sock(sk);
+-			return -EAGAIN;
+-		}
+-		inet->inet_sport = htons(inet->inet_num);
+-	}
+-
+ 	/* Validate addr_len before calling common connect/connectx routine. */
+-	af = addr_len < offsetofend(struct sockaddr, sa_family) ? NULL :
+-		sctp_get_af_specific(addr->sa_family);
+-	if (!af || addr_len < af->sockaddr_len) {
+-		err = -EINVAL;
+-	} else {
+-		/* Pass correct addr len to common routine (so it knows there
+-		 * is only one address being passed.
+-		 */
++	af = sctp_get_af_specific(addr->sa_family);
++	if (af && addr_len >= af->sockaddr_len)
+ 		err = __sctp_connect(sk, addr, af->sockaddr_len, flags, NULL);
+-	}
+ 
+ 	release_sock(sk);
+ 	return err;
+diff --git a/net/sctp/stream.c b/net/sctp/stream.c
+index b6bb68adac6e..f72dfda4025d 100644
+--- a/net/sctp/stream.c
++++ b/net/sctp/stream.c
+@@ -168,13 +168,20 @@ out:
+ int sctp_stream_init_ext(struct sctp_stream *stream, __u16 sid)
+ {
+ 	struct sctp_stream_out_ext *soute;
++	int ret;
+ 
+ 	soute = kzalloc(sizeof(*soute), GFP_KERNEL);
+ 	if (!soute)
+ 		return -ENOMEM;
+ 	SCTP_SO(stream, sid)->ext = soute;
+ 
+-	return sctp_sched_init_sid(stream, sid, GFP_KERNEL);
++	ret = sctp_sched_init_sid(stream, sid, GFP_KERNEL);
++	if (ret) {
++		kfree(SCTP_SO(stream, sid)->ext);
++		SCTP_SO(stream, sid)->ext = NULL;
++	}
++
++	return ret;
+ }
+ 
+ void sctp_stream_free(struct sctp_stream *stream)
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index 12454f0d5a63..fdcf18c78bb5 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -61,7 +61,7 @@ static void tls_device_free_ctx(struct tls_context *ctx)
+ 	if (ctx->rx_conf == TLS_HW)
+ 		kfree(tls_offload_ctx_rx(ctx));
+ 
+-	kfree(ctx);
++	tls_ctx_free(ctx);
+ }
+ 
+ static void tls_device_gc_task(struct work_struct *work)
+@@ -746,6 +746,11 @@ int tls_set_device_offload(struct sock *sk, struct tls_context *ctx)
+ 	}
+ 
+ 	crypto_info = &ctx->crypto_send.info;
++	if (crypto_info->version != TLS_1_2_VERSION) {
++		rc = -EOPNOTSUPP;
++		goto free_offload_ctx;
++	}
++
+ 	switch (crypto_info->cipher_type) {
+ 	case TLS_CIPHER_AES_GCM_128:
+ 		nonce_size = TLS_CIPHER_AES_GCM_128_IV_SIZE;
+@@ -880,6 +885,9 @@ int tls_set_device_offload_rx(struct sock *sk, struct tls_context *ctx)
+ 	struct net_device *netdev;
+ 	int rc = 0;
+ 
++	if (ctx->crypto_recv.info.version != TLS_1_2_VERSION)
++		return -EOPNOTSUPP;
++
+ 	/* We support starting offload on multiple sockets
+ 	 * concurrently, so we only need a read lock here.
+ 	 * This lock must precede get_netdev_for_sock to prevent races between
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index f4f632824247..0c22af7b113f 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -251,7 +251,7 @@ static void tls_write_space(struct sock *sk)
+ 	ctx->sk_write_space(sk);
+ }
+ 
+-static void tls_ctx_free(struct tls_context *ctx)
++void tls_ctx_free(struct tls_context *ctx)
+ {
+ 	if (!ctx)
+ 		return;
+@@ -638,7 +638,7 @@ static void tls_hw_sk_destruct(struct sock *sk)
+ 
+ 	ctx->sk_destruct(sk);
+ 	/* Free ctx */
+-	kfree(ctx);
++	tls_ctx_free(ctx);
+ 	icsk->icsk_ulp_data = NULL;
+ }
+ 
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 41e17ed0c94e..fd931294f66f 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -1931,7 +1931,8 @@ bool tls_sw_stream_read(const struct sock *sk)
+ 		ingress_empty = list_empty(&psock->ingress_msg);
+ 	rcu_read_unlock();
+ 
+-	return !ingress_empty || ctx->recv_pkt;
++	return !ingress_empty || ctx->recv_pkt ||
++		!skb_queue_empty(&ctx->rx_list);
+ }
+ 
+ static int tls_read_size(struct strparser *strp, struct sk_buff *skb)
+diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
+index 61cfd8f70989..d089eb706d18 100644
+--- a/tools/perf/builtin-script.c
++++ b/tools/perf/builtin-script.c
+@@ -3669,7 +3669,8 @@ int cmd_script(int argc, const char **argv)
+ 		goto out_delete;
+ 
+ 	uname(&uts);
+-	if (!strcmp(uts.machine, session->header.env.arch) ||
++	if (data.is_pipe ||  /* assume pipe_mode indicates native_arch */
++	    !strcmp(uts.machine, session->header.env.arch) ||
+ 	    (!strcmp(uts.machine, "x86_64") &&
+ 	     !strcmp(session->header.env.arch, "i386")))
+ 		native_arch = true;
+diff --git a/tools/testing/selftests/net/txring_overwrite.c b/tools/testing/selftests/net/txring_overwrite.c
+index fd8b1c663c39..7d9ea039450a 100644
+--- a/tools/testing/selftests/net/txring_overwrite.c
++++ b/tools/testing/selftests/net/txring_overwrite.c
+@@ -113,7 +113,7 @@ static int setup_tx(char **ring)
+ 
+ 	*ring = mmap(0, req.tp_block_size * req.tp_block_nr,
+ 		     PROT_READ | PROT_WRITE, MAP_SHARED, fdt, 0);
+-	if (!*ring)
++	if (*ring == MAP_FAILED)
+ 		error(1, errno, "mmap");
+ 
+ 	return fdt;
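The selftest fix encodes a rule worth restating: mmap() signals failure with MAP_FAILED, which is (void *)-1, never NULL, so a !ptr test silently accepts the error value. The corrected pattern in general form (fd and len stand in for the surrounding setup):

    #include <errno.h>
    #include <error.h>
    #include <sys/mman.h>

    char *ring = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (ring == MAP_FAILED)         /* not: if (!ring) */
            error(1, errno, "mmap");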




Thread overview: 23+ messages
2019-06-15 15:10 [gentoo-commits] proj/linux-patches:5.1 commit in: / Mike Pagano
  -- strict thread matches above, loose matches on Subject: below --
2019-07-28 16:25 Mike Pagano
2019-07-26 11:37 Mike Pagano
2019-07-21 14:42 Mike Pagano
2019-07-14 15:48 Mike Pagano
2019-07-10 11:07 Mike Pagano
2019-07-03 11:35 Mike Pagano
2019-06-25 10:54 Mike Pagano
2019-06-22 19:16 Mike Pagano
2019-06-19 16:36 Thomas Deutschmann
2019-06-17 19:22 Mike Pagano
2019-06-11 18:01 Mike Pagano
2019-06-11 12:43 Mike Pagano
2019-06-09 16:20 Mike Pagano
2019-06-04 11:09 Mike Pagano
2019-05-31 14:04 Mike Pagano
2019-05-26 17:07 Mike Pagano
2019-05-22 11:07 Mike Pagano
2019-05-16 23:05 Mike Pagano
2019-05-14 22:26 Mike Pagano
2019-05-11 13:04 Mike Pagano
2019-05-10 23:40 Mike Pagano
2019-05-06 11:25 Mike Pagano
